Approximation and Generalization Errors in Deep Neural Networks for Sobolev Spaces measured by Sobolev Norms

Location
https://kaust.zoom.us/j/4406489644

Abstract

In this presentation, we first discuss the approximation capabilities of deep neural networks (DNNs) with ReLU and squared-ReLU activation functions for Sobolev functions, with the error measured in the Sobolev norms \(W^{m,p}\) for \(m \ge 1\). We then consider how to mitigate the curse of dimensionality in DNN approximation. Finally, we analyze the generalization errors of DNNs trained with such Sobolev loss functions, and offer recommendations on when to prefer deeper versus wider networks, taking into account the number of sample points, the parameter count, and the regularity of the loss functions.
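As a toy illustration of the error measure mentioned above (not part of the talk itself): a one-hidden-layer ReLU network can represent any continuous piecewise-linear function, so the sketch below compares a target \(f(x)=x^2\) with its piecewise-linear interpolant and evaluates the error in a discrete \(W^{1,p}\) norm, which penalizes the derivative mismatch as well as the function values. The function names and target are illustrative choices, not from the presentation.

```python
import numpy as np

def sobolev_error(n_knots, p=2, n_grid=10_000):
    """Discrete W^{1,p} error between f(x)=x^2 and its piecewise-linear
    interpolant on [0, 1] (a function a ReLU network can represent exactly)."""
    x = np.linspace(0.0, 1.0, n_grid)
    f = x**2
    knots = np.linspace(0.0, 1.0, n_knots)
    g = np.interp(x, knots, knots**2)   # piecewise-linear approximant
    e = f - g                           # pointwise error
    de = np.gradient(e, x)              # finite-difference derivative of the error
    h = x[1] - x[0]
    # discrete W^{1,p} norm: (||e||_p^p + ||e'||_p^p)^(1/p) via a Riemann sum
    return (h * (np.abs(e)**p + np.abs(de)**p).sum()) ** (1.0 / p)

# refining the approximant (more knots / wider network) shrinks the Sobolev error
print(sobolev_error(5) > sobolev_error(50))
```

In the \(L^p\) norm alone the error decays like \(h^2\) in the knot spacing \(h\), while the derivative term only decays like \(h\), so the \(W^{1,p}\) norm is the stricter yardstick, which is the setting of the results discussed in the talk.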

Brief Biography

Yahong Yang is a postdoctoral scholar in the Department of Mathematics at Penn State University. He earned his Ph.D. degree from the Hong Kong University of Science and Technology in 2023.
