Headlines

  • A scenario for learning in overparametrized neural networks

    Modern neural networks, with billions of parameters, are so overparametrized that they can “overfit” even random, structureless data. Yet when trained on datasets with structure, they learn the underlying features. Understanding why overparametrization does not destroy their effectiveness is a fundamental challenge in AI. Two researchers, Andrea Montanari (Stanford) and Pierfrancesco Urbani (IPhT), propose that…

Agenda

28 November 2025
14h15–15h30

The uses of lattice non-invertible dualities and symmetries

Salle Claude Itzykson, Bât. 774
1 December 2025
10h00–12h00

Thesis Defense (Soutenance de Thèse)

2 December 2025
11h00–12h30

QCD Theory meets Information Theory

Amphi Claude Bloch, Bât. 774
