A tale of convolutions and non-Gaussianity
Alessandro Ingrosso
ICTP, Trieste
Thursday 30/05/2024, 10:00-11:00
Salle Claude Itzykson, Bât. 774, Orme des Merisiers
Exploiting invariances in the inputs is crucial for constructing efficient representations and accurate predictions in neural circuits. While the hallmark of convolutions, namely localized receptive fields that tile the input space, can be implemented with fully-connected neural networks, learning convolutions directly from inputs in a fully-connected network has so far proven elusive. In this talk, I will show how initially fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs, resulting in localized, space-tiling receptive fields. I will provide an analytical and numerical characterization of the pattern-formation mechanism responsible for this phenomenon in a simple model, which results in an unexpected link between receptive field formation and the tensor decomposition of higher-order input correlations. In more complex visual tasks, I will show how iterative magnitude pruning can yield local features by dynamically amplifying non-Gaussian statistics during learning. Non-Gaussianity of internal representations thus provides us with a framework to analyze feature learning in deep networks.
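To make the pruning mechanism concrete, below is a minimal, self-contained sketch of iterative magnitude pruning (IMP) with rewinding on a toy fully-connected network, together with an excess-kurtosis probe of the hidden preactivations as one simple measure of non-Gaussianity. The task, network sizes, and hyperparameters are illustrative assumptions, not the setup used in the talk.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy discrimination task (illustrative, not the task from the talk):
    # the label depends on a localized band of input pixels.
    N, P = 64, 2048                                 # input dimension, samples
    X = rng.standard_normal((P, N))
    w_true = np.zeros(N)
    w_true[10:20] = 1.0                             # a localized "receptive field"
    y = np.sign(X @ w_true + 0.1 * rng.standard_normal(P))

    H = 32                                          # hidden units
    W0 = rng.standard_normal((N, H)) / np.sqrt(N)   # saved init, for rewinding
    a0 = rng.standard_normal(H) / np.sqrt(H)

    def train(W, a, mask, lr=0.05, epochs=200):
        # Plain batch gradient descent on a squared loss; the mask keeps
        # pruned weights frozen at zero.
        for _ in range(epochs):
            h = np.tanh(X @ (W * mask))
            err = h @ a - y
            grad_a = h.T @ err / P
            grad_W = X.T @ ((err[:, None] * a) * (1.0 - h**2)) / P
            a -= lr * grad_a
            W -= lr * grad_W * mask
        return W, a

    def excess_kurtosis(z):
        # Fourth standardized moment minus 3; zero for Gaussian data.
        z = (z - z.mean()) / z.std()
        return (z**4).mean() - 3.0

    mask = np.ones_like(W0)
    W, a = W0.copy(), a0.copy()
    for rnd in range(5):                            # IMP rounds
        W, a = train(W, a, mask)
        pre = X @ (W * mask)                        # hidden preactivations
        print(f"round {rnd}: weights kept {mask.mean():.2f}, "
              f"excess kurtosis of preactivations {excess_kurtosis(pre):+.3f}")
        # Prune the 20% smallest-magnitude surviving weights ...
        thresh = np.quantile(np.abs(W[mask == 1]), 0.2)
        mask[np.abs(W) < thresh] = 0.0
        # ... and rewind the survivors to their initial values.
        W, a = W0.copy(), a0.copy()

Rewinding surviving weights to their initialization follows the lottery-ticket variant of IMP; the kurtosis printout is meant only to illustrate how one might track non-Gaussian statistics of internal representations across pruning rounds.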
Contact: Gregoire Misguich