Unsupervised neural networks: from theory to systems biology (2/5)
Rémi Monasson
ENS Paris
Fri, May. 18th 2018, 10:00-12:15
Salle Claude Itzykson, Bât. 774, Orme des Merisiers

Artificial neural networks, introduced decades ago, are now key tools for automatic learning from data. This series of lectures will focus on a few neural network architectures used in the context of unsupervised learning, that is, learning from unlabeled data.

In particular, we will focus on dimensional reduction, feature extraction, and representation building. We will see how statistical physics, in particular the techniques and concepts of random matrix theory and disordered systems, can be used to understand the properties of these algorithms and the phase transitions taking place in their operation.

Special attention will be devoted to the so-called high-dimensional inference setting, where the number of data samples and the number of defining parameters of the neural nets are comparable. The general principles will be illustrated with recent applications to data from neuroscience and genomics, highlighting the potential of unsupervised learning for biology.

Some issues:
- What is unsupervised learning?
- Hebbian learning for principal component analysis: retarded-learning phase transition and prior information.
- Bipartite neural nets and representations: auto-encoders, restricted Boltzmann machines, Boltzmann machines.
- Recurrent neural nets: from point to finite-dimensional attractors, temporal sequences.
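As a concrete illustration of the second item, here is a minimal sketch of Hebbian learning for principal component analysis using Oja's rule, on synthetic data (all names and parameters below are illustrative, not taken from the lectures): a single linear neuron whose weight vector, updated with a Hebbian term plus a normalizing decay, converges to the top principal component of the inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D Gaussian data whose leading principal direction is (1, 1)/sqrt(2)
# (covariance eigenvalues 5 and 1).
n_samples = 5000
cov = np.array([[3.0, 2.0],
                [2.0, 3.0]])
X = rng.multivariate_normal(np.zeros(2), cov, size=n_samples)

# Oja's rule: Hebbian update y*x with a decay term y^2*w that keeps |w| near 1.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 0.01  # learning rate (illustrative choice)
for x in X:
    y = w @ x                    # linear neuron output
    w += eta * y * (x - y * w)   # Hebbian term minus normalization

# Compare the learned weight vector with the top eigenvector of the
# empirical covariance matrix.
eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
top = eigvecs[:, -1]
overlap = abs(w @ top) / np.linalg.norm(w)
print(f"overlap with top principal component: {overlap:.3f}")
```

With enough samples the overlap approaches 1; the retarded-learning phase transition discussed in the lectures concerns when such an overlap becomes nonzero in the high-dimensional regime, where the number of samples scales with the dimension.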

Contact : Loic BERVAS

