金沙集团1862cc "六朝松∙智控论坛" Distinguished Scholars Forum Lecture Series

Posted by: 赵剑锋    Date posted: 2024-10-23

Time: 10:00 a.m., Thursday, October 24, 2024

Venue: Atrium, 2nd Floor, Central Building, Sipailou Campus, 金沙集团1862cc

Organizer: 金沙集团1862cc


Talk Title:

The Use of Extreme Learning Machine Neural Networks in the Numerical Resolution of Differential Problems

Speaker Biographies:

Speakers: Francesco Calabrò (University of Naples "Federico II"); Davide Elia De Falco (PhD student, Scuola Superiore Meridionale)

Francesco Calabrò has been Associate Professor in Numerical Analysis at the University of Naples "Federico II", Italy, since 2020, and is a member of the board of the Ph.D. program "Mathematical and Physical Sciences for Advanced Materials and Technologies" of the Scuola Superiore Meridionale (SSM). He received his B.S. in Applied Mathematics from the Università "Federico II" di Napoli, summa cum laude, in 2001, his M.S. in Applications of Mathematics in Industry and Services from the Università di Milano "Bicocca", Italy, and his Ph.D. in Computational Science and Informatics from the Università di Napoli "Federico II" in 2004. His research interests include IsoGeometric Analysis and modeling in nanochannels and membranes. Recently he has worked on scientific machine learning, in particular on the use of Extreme Learning Machines for the resolution of differential problems and applications in medical science.

Davide Elia De Falco is a Ph.D. student in "Mathematical and Physical Sciences for Advanced Materials and Technologies" at the Scuola Superiore Meridionale (SSM) school of excellence. He received his M.S. in Mathematical Engineering from the Università "Federico II" di Napoli, summa cum laude, in 2023, and his B.S. in Computer Engineering from the Università di Pisa, as a scholarship holder of the Scuola Superiore Sant'Anna school of excellence.

Abstract:

    Neural Networks (NNs) are a powerful tool in approximation theory because of the existence of Universal Approximation (UA) results. In recent decades, significant attention has been given to Extreme Learning Machines (ELMs), typically employed for the training of single-layer NNs, for which a UA result can also be proven. In a generic NN, the design of the optimal approximator can be recast as an optimization problem that is particularly demanding from the computational viewpoint. Under the ELM approach, however, the optimization task reduces to a (possibly rectangular) linear problem. This makes ELMs faster than typical deep neural networks, where optimization methods may lead to prohibitively slow learning speeds, and it allows such structures to be applied efficiently to a wide class of scientific problems, including differential problems.
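As a minimal illustration of this reduction (a sketch, not the speakers' code), the snippet below trains a single-layer ELM in NumPy: the hidden-layer weights and biases are drawn at random and frozen, so the only trainable parameters are the output weights, found by one rectangular least-squares solve. The target function, layer width, and tanh activation are illustrative assumptions.

# A minimal ELM sketch: random frozen hidden layer, one linear solve.
import numpy as np

rng = np.random.default_rng(0)

# Target function to approximate on [0, 1] (illustrative choice)
f = lambda x: np.sin(2 * np.pi * x)

n_samples, n_hidden = 200, 50
x = np.linspace(0.0, 1.0, n_samples).reshape(-1, 1)

# Random (fixed) input weights and biases of the single hidden layer
W = rng.normal(size=(1, n_hidden))
b = rng.normal(size=n_hidden)

# Hidden-layer feature matrix H, here with tanh activation
H = np.tanh(x @ W + b)                      # shape: (n_samples, n_hidden)

# The only unknowns are the output weights, obtained by least squares.
# H is rectangular (200 x 50), matching the "possibly rectangular" problem.
beta, *_ = np.linalg.lstsq(H, f(x).ravel(), rcond=None)

approx = H @ beta
print("max abs error:", np.abs(approx - f(x).ravel()).max())

Because no gradient-based iteration over the hidden weights is needed, the entire "training" is this single linear solve, which is the source of the speed advantage the abstract describes.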

    In Part 1 of this talk, some basic ideas and their application to linear partial differential equations are presented (a sketch of this collocation idea follows below). In Part 2, some advanced topics and work in progress are discussed.
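In the spirit of Part 1, the following hedged sketch applies ELM collocation to a simple linear boundary-value problem, -u'' = f on (0, 1) with u(0) = u(1) = 0, as a one-dimensional stand-in for the linear PDE setting. The second derivative of each tanh feature is computed in closed form, and collocation and boundary conditions enter a single rectangular least-squares system. The manufactured solution sin(pi x), the weight scales, and the layer width are all illustrative assumptions, not taken from the talk.

# ELM collocation sketch for -u''(x) = f(x) on (0, 1), u(0) = u(1) = 0.
import numpy as np

rng = np.random.default_rng(1)

n_coll, n_hidden = 100, 60
x = np.linspace(0.0, 1.0, n_coll)

# Frozen random hidden layer: u_hat(x) = sum_j beta_j * tanh(w_j x + b_j)
w = rng.normal(scale=4.0, size=n_hidden)
b = rng.normal(scale=4.0, size=n_hidden)

z = np.outer(x, w) + b          # (n_coll, n_hidden)
t = np.tanh(z)

# d^2/dx^2 tanh(w x + b) = -2 w^2 tanh(z) (1 - tanh(z)^2),
# so the rows enforcing -u'' = f are:
H_pde = 2.0 * (w**2) * t * (1.0 - t**2)

# Boundary rows enforce u(0) = u(1) = 0
H_bc = np.tanh(np.outer([0.0, 1.0], w) + b)

# Manufactured solution u = sin(pi x) gives f = pi^2 sin(pi x)
f = np.pi**2 * np.sin(np.pi * x)
A = np.vstack([H_pde, H_bc])
rhs = np.concatenate([f, [0.0, 0.0]])

# One rectangular least-squares solve replaces iterative NN training
beta, *_ = np.linalg.lstsq(A, rhs, rcond=None)

u_hat = t @ beta
print("max abs error vs sin(pi x):", np.abs(u_hat - np.sin(np.pi * x)).max())

Since the differential problem is linear and the hidden layer is fixed, the differential operator acts on the features analytically and the unknown output weights again appear linearly, which is what makes the ELM approach attractive for this class of problems.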