Channel @matematicasuam. Link to the Department's YouTube channel.
PIM (Pequeño Instituto de Matemáticas). Created to foster interest in mathematics among young people between 14 and 18, this project of the Instituto de Ciencias Matemáticas (ICMAT) runs in collaboration with our Department, the Universidad Autónoma de Madrid, and the Real Sociedad Matemática Española. The project began in the 2022-2023 academic year. More information on its website.
Machine learning in Madrid
Monday, 26 September 2022, 17:00-18:00
Speaker: Boris Hanin (Princeton University)
Title: Ridgeless Interpolation in 1D with One Layer ReLU Networks and Tight Generalization Bounds for Learning Lipschitz Functions
Abstract: In this talk, I will give a complete answer to the question of how neural networks use training data to make predictions on unseen inputs in a very simple setting. Namely, for a fixed dataset D = {(x_i,y_i), i=1,...,N} with x_i and y_i being scalars, I will consider the space of all one layer ReLU networks of arbitrary width that exactly fit this data and, among all such interpolants, achieve the minimal possible L_2-norm on the neuron weights. Intuitively, this is the space of "ridgeless ReLU interpolants" in the sense that it consists of ReLU networks that minimize the mean squared error over D plus an infinitesimal L_2-regularization on the neuron weights. I will give a complete characterization of how such ridgeless ReLU interpolants can make predictions on intervals (x_i, x_{i+1}) between consecutive data points. I will then explain how to use this characterization to obtain, uniformly over the infinite collection of ridgeless ReLU interpolants of a given dataset D, tight generalization bounds under the assumption y_i = f(x_i) with f a Lipschitz function.
Link: https://us06web.zoom.us/j/82490461958
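As a minimal sketch of the setup described in the abstract, one can approximate a ridgeless interpolant numerically by minimizing the mean squared error over D plus a small L_2 penalty on the neuron weights, letting the penalty tend to zero. The toy dataset, network width, learning rate, and penalty strength below are illustrative assumptions, not taken from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset D = {(x_i, y_i)}: scalar samples of a Lipschitz function.
x = np.linspace(-1.0, 1.0, 8)
y = np.abs(x)  # f(x) = |x| is 1-Lipschitz

m = 200                     # hidden width (the talk allows arbitrary width)
w = rng.normal(size=m)      # input weights
b = rng.normal(size=m)      # biases (left unregularized here)
a = rng.normal(size=m) / m  # output weights
c = 0.0                     # output bias
lam = 1e-4                  # small L_2 penalty: proxy for the ridgeless limit
lr = 1e-2                   # learning rate (arbitrary choice)

def forward(t):
    """f(t) = sum_j a_j * relu(w_j * t + b_j) + c for a batch of scalars t."""
    return np.maximum(np.outer(t, w) + b, 0.0) @ a + c

n = len(x)
for step in range(20000):
    pre = np.outer(x, w) + b          # (n, m) pre-activations
    h = np.maximum(pre, 0.0)          # hidden activations
    r = h @ a + c - y                 # residuals
    mask = (pre > 0.0).astype(float)  # ReLU derivative
    # Gradients of MSE + lam * (|a|^2 + |w|^2) w.r.t. each parameter group.
    ga = (2.0 / n) * (h.T @ r) + 2.0 * lam * a
    gw = (2.0 / n) * ((r[:, None] * mask * x[:, None]) * a).sum(axis=0) + 2.0 * lam * w
    gb = (2.0 / n) * ((r[:, None] * mask) * a).sum(axis=0)
    gc = (2.0 / n) * r.sum()
    a, w, b, c = a - lr * ga, w - lr * gw, b - lr * gb, c - lr * gc

# Inspect predictions between consecutive data points, where the talk's
# characterization of ridgeless interpolants applies.
grid = np.linspace(-1.0, 1.0, 9)
for t, p in zip(grid, forward(grid)):
    print(f"x = {t:+.2f}  f_hat(x) = {p:+.4f}")
```

This gradient-descent sketch only approximates one particular interpolant; the talk's results concern the entire space of minimal-norm interpolants and their behavior on the intervals (x_i, x_{i+1}).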