PhD subjects

7 IPhT subjects

Last updated: 26-05-2019

• Theoretical Physics


Statistical physics modelling of artificial neural networks


Research field : Theoretical Physics
Location : Service de Physique Théorique
Starting date : 01-10-2019
Thesis supervisor : Pierfrancesco Urbani, 33 1 69 08 79 28
Contact : Pierfrancesco Urbani, CNRS - DSM - Institut de Physique Théorique, 01 6908 8114

There is a long history of statistical physics bringing ideas to machine learning; commonly used terms such as "Boltzmann machine" or "Gibbs sampling" bear witness to that. Notably, the 1980s and 1990s were a very fruitful period in which research in statistical physics produced a range of theoretical results about models of neural networks, see e.g. [AGS85, GD88, EVB01]. Those results concentrate on probabilistic models of data (both the data distribution and the map to labels are modelled) in a way complementary to mainstream learning theory. Nowadays, the wide use of deep learning raises a range of open theoretical questions and challenges that will likely require a synergy of theoretical ideas from several areas, including theoretical physics. In terms of modelling artificial neural networks, the existing physics literature mostly considers fully connected feedforward neural networks for supervised learning and restricted Boltzmann machines for unsupervised learning, and it models data as random i.i.d. vectors. LZ currently holds an ERC Starting grant focusing on the statistical physics study of fully connected feedforward neural networks and auto-encoders, and on related theoretical and algorithmic questions.

This PhD project will apply statistical physics analysis to two classes of neural networks that, as far as we know, have not yet been studied within this framework (and are not part of the above ERC project): convolutional neural networks (for supervised learning) and generative adversarial networks (for unsupervised learning). We will evaluate analytically the optimal performance of such networks in the modelled situations and compare it to the performance of efficient algorithms (message passing and gradient-based algorithms), studied analytically and numerically. The main goal is to understand the behaviour, advantages and limitations of such networks, and to improve the algorithmic procedures used for training.

Convolutional neural networks (ConvNets) stand at the basis of the majority of modern state-of-the-art image processing systems. Compared to fully connected networks, the hidden neurons in ConvNets are connected to only a subset of variables in the previous layer (a so-called receptive field), and usually many of the hidden neurons connected to different receptive fields share the corresponding vector of weights, thus leading to a computational speed-up. ConvNet architectures are also chosen for their ability to impose symmetries (e.g. translational symmetry). The model of a ConvNet we have in mind is related to the committee machine. The two supervisors recently worked on committee machine neural networks: LZ studied its fully connected version [AMB18], while PU studied a tree version where the weights of each hidden neuron are independent [FHU18]. The tree committee machine with the weights of each hidden neuron being the same is a simple (possibly the simplest) model of a convolutional neural network, previously studied in the theoretical statistics and computer science literature, see e.g. [DLT17]. This model has not yet been analyzed using the statistical physics methods that can access a broad range of open questions. Its solution should not be more complicated than what has already been done in [AMB18, FHU18], which makes it an ideal subject for a PhD student. With the solution at hand, many questions about ConvNets, their performance and learning with efficient algorithms will become analytically accessible and will be investigated. Extensions to models where the receptive fields overlap will be the next case to study.
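To make the model concrete, here is a minimal sketch (our own illustration, not code from [AMB18, FHU18]) of a tree committee machine with shared weights: each hidden unit sees its own disjoint receptive field of the input, all units share the same weight vector, and the output is a majority vote.

```python
import numpy as np

def tree_committee(x, w, K):
    """Tree committee machine with shared weights: arguably the simplest
    ConvNet model (cf. [DLT17]). x has length K * len(w); the K receptive
    fields are disjoint patches, all filtered by the same weight vector w."""
    fields = x.reshape(K, -1)      # split input into K disjoint patches
    hidden = np.sign(fields @ w)   # same weights w applied to every patch
    return np.sign(hidden.sum())   # majority vote of the K hidden units

rng = np.random.default_rng(0)
K, d = 3, 5                        # 3 receptive fields of width 5 (toy sizes)
w = rng.standard_normal(d)
x = rng.standard_normal(K * d)
label = tree_committee(x, w, K)    # +1 or -1 (K odd, so no ties)
```

In the "teacher-student" settings studied by statistical physics, such a machine with random weights generates the labels, and one asks how many samples a student needs to learn w.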

Generative adversarial networks (GANs) [GPM14] are often cited as the most influential idea in deep learning of the past five years. Their purpose is to generate data samples that look statistically indistinguishable from the samples in the training set. The principle of GANs is very simple: a generator neural network is trained to minimise the accuracy of a discriminator neural network, which is in turn trying to maximise the number of samples classified correctly as coming from the true data rather than from the generator. The min-max nature of the training problem, however, causes serious difficulties for the training algorithms, and it is not understood mathematically when such learning is reliable and leads to convergence and when it does not. A very recent work [WHL18] proposed an elegant, simple model of GANs and analyzed the behavior of online learning in this model. The goal of this thesis project is to analyze batch learning in this model of GANs, the underlying algorithms, and their convergence and properties. From the point of view of existing research, the above model of GANs can be seen as a combination of a low-rank matrix factorisation model and a perceptron problem with structured disorder. Both of these models were studied extensively by the supervisors, e.g. [LKZ17, FPSUZ17], and we are confident that their combination, corresponding to the above model of GANs, can also be analyzed in closed form.
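The convergence problems of min-max training can already be seen in a toy bilinear game (our example, not the model of [WHL18]): simultaneous gradient descent in u and ascent in v on f(u, v) = uv spirals away from the equilibrium at the origin instead of converging to it.

```python
import numpy as np

# Toy illustration of min-max instability: on f(u, v) = u * v, the unique
# equilibrium is (0, 0), yet simultaneous gradient descent/ascent diverges.

def simultaneous_gda(u, v, lr, steps):
    for _ in range(steps):
        du, dv = v, u                    # grad_u f = v, grad_v f = u
        u, v = u - lr * du, v + lr * dv  # descent in u, ascent in v
    return u, v

u0, v0 = 1.0, 1.0
u, v = simultaneous_gda(u0, v0, lr=0.1, steps=200)
r0, r = np.hypot(u0, v0), np.hypot(u, v)
# Each step multiplies the distance to the origin by sqrt(1 + lr**2),
# so r = r0 * (1 + lr**2)**(steps/2): the iterates spiral outward.
```

This caricature shows why the reliability of GAN training is a genuine mathematical question rather than a tuning issue.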


The longer-term goal of this project is to understand the underlying principles of why current deep learning methods work well, what their limitations are, and how they can be further improved. The two supervisors combine expertise in the powerful methodology coming from the statistical physics of disordered systems, which is applicable to the study of high-dimensional non-convex problems, with experience in multidisciplinary applications of this methodology. The student will be trained in this vibrant field, with applications in modern data analysis and machine learning, which should be a great asset for his or her future career prospects.


[AGS85] Amit, D. J., Gutfreund, H., & Sompolinsky, H. (1985). Spin-glass models of neural networks. Phys. Rev. A, 32(2), 1007.
[AMB18] Aubin, B., Maillard, A., Barbier, J., Krzakala, F., Macris, N., & Zdeborová, L. (2018). The committee machine: Computational to statistical gaps in learning a two-layers neural network. arXiv:1806.05451.
[DLT17] Du, S. S., Lee, J. D., Tian, Y., Poczos, B., & Singh, A. (2017). Gradient Descent Learns One-hidden-layer CNN: Don't be Afraid of Spurious Local Minima. arXiv:1712.00779.
[EVB01] Engel, A., & Van den Broeck, C. (2001). Statistical Mechanics of Learning. Cambridge University Press.
[FPSUZ17] Franz, S., Parisi, G., Sevelev, M., Urbani, P., & Zamponi, F. (2017). Universality of the SAT-UNSAT (jamming) threshold in non-convex continuous constraint satisfaction problems. SciPost Phys. 2, 019.
[FHU18] Franz, S., Hwang, S., & Urbani, P. (2018). Jamming in multilayer supervised learning models. arXiv:1809.09945.
[GD88] Gardner, E., & Derrida, B. (1988). Optimal storage properties of neural network models. J. Phys. A: Math. and Gen.
[GPM14] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., ... & Bengio, Y. (2014). Generative adversarial nets. In Advances in Neural Information Processing Systems (pp. 2672-2680).
[LKZ17] Lesieur, T., Krzakala, F., & Zdeborová, L. (2017). Constrained low-rank matrix estimation: phase transitions, approximate message passing and applications. J. Stat. Mech.: Th. and Exp. 073403.
[WHL18] Wang, C., Hu, H., & Lu, Y. M. (2018). A Solvable High-Dimensional Model of GAN. arXiv:1805.08349.

New universality classes in combinatorial models of statistical mechanics


Research field : Theoretical Physics
Location : Service de Physique Théorique
Starting date : 01-09-2019
Thesis supervisor : Jérémie Bouttier
Contact : Jérémie Bouttier

There are deep connections between statistical mechanics and combinatorics, particularly within the realm of "exactly solvable models". Such models historically played an important role in the study of phase transitions and critical phenomena, and they are still useful today, for instance to analyze systems out of equilibrium and to prove rigorous mathematical results. There are many examples of the fascinating interplay between physics and combinatorics: the 2D Ising model and its connections with dimer models, spanning trees and free fermions [1]; polymers and self-avoiding walks [2]; the ice/six-vertex model and alternating sign matrices [3]; etc.

In this PhD thesis we propose two directions to investigate:

1. decorated random maps and 2D quantum gravity,
2. 2D dimer models, domino/lozenge tilings, random partitions and Schur processes.

Common to these two directions is the appearance of new universality classes which are still not fully understood.

Random maps are discretizations of random surfaces, used to model 2D quantum gravity. Much attention has been devoted to the study of the "uniform" distribution on random maps: it is for instance known that large planar maps have a fractal structure of dimension 4, whose essence is described by the so-called Brownian map [4]. But it is believed that the geometry changes drastically when the maps are decorated by a critical model of statistical physics, such as Ising/Potts spins or nonintersecting loops [5]. We propose to investigate some of the properties of such decorated random maps, through the use of suitable exploration processes [6].

Dimer models are among the simplest combinatorial models of statistical mechanics. Noninteracting fully-packed dimers in 2D are known to belong to the class of determinantal (or free-fermionic) processes. Due to the nonoverlapping constraint, the boundary conditions may have long-range effects and cause spatial phase separation. This is the "limit shape" or "arctic curve" phenomenon, which is now understood rigorously. Of particular interest is the interface between phases, where we observe universal scaling limits closely related to random matrices. We propose to investigate a certain family of models [7] and special boundary conditions [8] giving rise to new scaling limits.
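For a first taste of exact dimer enumeration (a toy example of ours, far simpler than the models of [7, 8]): the number of fully-packed dimer coverings, i.e. domino tilings, of a 2 x n strip obeys the Fibonacci recursion, because the last column is covered either by one vertical domino or by a pair of horizontal ones.

```python
def tilings_2xn(n):
    """Number of domino tilings of a 2 x n strip: t(n) = t(n-1) + t(n-2),
    with t(0) = t(1) = 1 (place the last column as one vertical domino,
    or two stacked horizontal dominoes covering the last two columns)."""
    a, b = 1, 1            # t(0), t(1)
    for _ in range(n - 1):
        a, b = b, a + b
    return b if n >= 1 else 1

counts = [tilings_2xn(n) for n in range(1, 8)]
# counts == [1, 2, 3, 5, 8, 13, 21]
```

On larger regions such counts are computed via Kasteleyn determinants, which is where the determinantal (free-fermionic) structure mentioned above enters.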


[1] D. Chelkak, D. Cimasoni, and A. Kassel. Revisiting the combinatorics of the 2D Ising model. Ann. Inst. Henri Poincaré D, 4(3):309-385, 2017.
[2] H. Duminil-Copin and S. Smirnov. The connective constant of the honeycomb lattice equals √(2+√2). Ann. of Math. (2), 175(3):1653-1665, 2012.
[3] L. Cantini and A. Sportiello. Proof of the Razumov-Stroganov conjecture. J. Combin. Theory Ser. A, 118(5):1549-1574, 2011.
[4] G. Miermont. Aspects of random maps. Lecture notes of the 2014 Saint-Flour Probability Summer School, preliminary version available at gregory.miermont/coursSaint-Flour.pdf, 2014.
[5] G. Borot, J. Bouttier, and E. Guitter. A recursive approach to the O(n) model on random maps via nested loops. J. Phys. A, 45(4):045002, 2012.
[6] T. Budd. The peeling process on random planar maps coupled to an O(n) loop model. arXiv:1809.02012 [math.PR], 2018. With an appendix by L. Chen.
[7] C. Boutillier, J. Bouttier, G. Chapuy, S. Corteel, and S. Ramassamy. Dimers on rail yard graphs. Ann. Inst. Henri Poincaré D, 4(4):479-539, 2017.
[8] D. Betea, J. Bouttier, P. Nejjar, and M. Vuletić. The free boundary Schur process and applications. Ann. Henri Poincaré, to appear, 2018.

Black Hole Microstate Geometries


Research field : Theoretical Physics
Location : Service de Physique Théorique
Starting date : 01-10-2019
Thesis supervisor :
Contact : Iosif BENA, CEA - DSM - Institut de Physique Théorique, 01 6908 7468

Black holes are defined to be objects whose gravity is strong enough to trap light. In General Relativity, a black hole not only traps light but also traps all other matter and information inside a surface of no return, called the event horizon. As a result, the exterior of a black hole (outside the horizon) is independent of how, and from what, the black hole formed. Moreover, the exterior structure of a black hole is completely determined by its long-range parameters, like mass, charge and angular momentum. This is black hole uniqueness.

When first formulated in GR, black holes were thought to be unphysical artifacts of imposing symmetry on solutions to Einstein’s equations. However, in the early 1970’s, the singularity theorems of Penrose and Hawking began the shift towards today’s paradigm: black holes are essential parts of Nature. Indeed, by the 1980’s, black holes provided the best description of certain ‘exotic’ binary star systems as well as the engines powering jets from the cores of active galaxies. The LIGO detection of black-hole mergers, in 2015, was the crowning achievement of over a century of theoretical development, finally confirmed by observations of characteristic, extremely strong-field signatures of the black holes of GR.

The “frustrating details” emerge when black-hole physics is combined with quantum mechanics (QM). In 1975, Hawking showed that the correct description of the vacuum around an event horizon leads to the emission of Hawking Radiation as a form of vacuum polarization. Because this radiation originates from just above the horizon, the uniqueness of black holes in GR implies that Hawking radiation is universal, thermal and (almost) featureless. In particular, it is independent of how the black hole formed. Semi-classical back-reaction of this Hawking radiation also implies that the black hole will evaporate, albeit extremely slowly.

This leads to the Information Paradox: it is impossible to reconstruct the interior state of a black hole (apart from its mass, charge and angular momentum) from the exterior data, and thus from the final state of the Hawking radiation. The evaporation process cannot, therefore, be represented by a unitary transformation of states in a Hilbert space. Hence black-hole evaporation, as predicted by GR and QM, is inconsistent with a foundational postulate of QM. Based on its horizon area, the black hole at the core of the Milky Way should have about e^(10^90) microstates. From the outside, black-hole uniqueness implies that its state is unique, as would be the state of its Hawking radiation were it to evaporate. The problem is therefore vast: e^(10^90) ≠ 1!
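The e^(10^90) figure is back-of-the-envelope arithmetic with the Bekenstein-Hawking entropy S/k_B = 4πGM²/(ħc) of a Schwarzschild black hole, taking the Milky Way's central black hole at roughly 4 million solar masses (the numerical values below are standard constants, inserted by us for illustration):

```python
import math

# Dimensionless Bekenstein-Hawking entropy S/k_B = A c^3 / (4 G hbar)
# = 4*pi*G*M^2 / (hbar*c) for a Schwarzschild black hole of mass M.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34     # reduced Planck constant, J s
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

M = 4.0e6 * M_sun                          # ~ mass of Sgr A*
S = 4 * math.pi * G * M**2 / (hbar * c)    # S/k_B, dimensionless
# S is of order 10^90, hence ~ e^(10^90) microstates.
```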

In 2009, Mathur used quantum information theory to show that Hawking’s information paradox could not be solved incrementally: It will require radical modification of the physics at the scale of the black hole horizon.

The only existing framework in which this can be done while including complete, general-relativistic gravitational effects is the Microstate Geometry (MG) programme. This programme, which forms the core of this thesis research, was started more than 15 years ago by the two supervisors, and has resulted in the construction of huge families of string theory and supergravity solutions (which we call black hole Microstate Geometries) that have no singularity, no horizon, and exactly the same mass, charge and angular momentum as a black hole.

This thesis has three directions. The first is to analyze the existing Microstate Geometries within the framework of the AdS-CFT correspondence. The second is to construct new Microstate Geometries for supersymmetric black holes. The third is to construct such geometries for non-supersymmetric black holes. The techniques used in the first direction are those of the holographic AdS-CFT correspondence, and mastering them will be useful if the student wants to explore other areas of holography. Research in the second direction will be done using the supersymmetric solution-building methods of supergravity. These methods will also be useful if the student later wants to diversify his or her research interests towards string phenomenology or cosmology. The techniques used in the third direction will at first be analytical (probe-brane actions and factorized supergravity equations for constructing non-supersymmetric solutions), but depending on the preliminary results, numerical methods to construct cohomogeneity-two supergravity solutions may also be used.

Applicants are expected to have a solid background in General Relativity and Quantum Field Theory. Knowledge of basic String Theory notions is a bonus.

de Sitter vacua in String Theory


Research field : Theoretical Physics
Location : Service de Physique Théorique
Starting date : 01-10-2019
Thesis supervisor : Iosif BENA, CEA - DSM - Institut de Physique Théorique, 01 6908 7468
Contact : Iosif BENA

String Theory is the most promising candidate for a theory that unifies all the forces that exist in nature, and could therefore provide a framework from which one may hope to derive all the observed physical laws. However, String Theory lives in ten dimensions, and to obtain real-world physics one needs to compactify it on certain six-dimensional compact spaces whose size is much smaller than any scale accessible to observations. Since there exists a large number of such spaces, it has been argued that there are of order 10^{500} four-dimensional String Theory vacua. These vacua realize all possible physical laws with all possible constants. This has led to a radically new view of physics, in which one argues that the constants in the physical laws that we measure in our Universe do not come from an underlying unified theory, but are environmental, anthropically-constrained variables determined by where we are in this Multiverse.

Despite the fact that it goes against the reductionist paradigm that has driven scientific progress over the past century, the anthropic explanation is rapidly becoming the favored response to the extremely difficult task of explaining the enormous amount of fine tuning present in the physical laws. First, the observed accelerated expansion of the Universe is driven by a mysterious form of energy density with negative pressure, whose value is 120 orders of magnitude smaller than expected from particle physics (this has been called "the worst theoretical prediction in the history of physics"). Next comes the hierarchy problem: the 24 orders of magnitude between the electroweak energy scale and the gravity scale. Supersymmetry was thought for many years to provide a beautiful solution to this problem, but the absence of any LHC signal supporting supersymmetry, after scanning most of the available parameter space, is driving many towards anthropic/multiverse explanations. Finally, models of cosmological inflation require considerable fine-tuning in order to achieve the near-flatness of the inflationary potential and to meet the upper bound from the 2015 Planck results on the tensor-to-scalar ratio in the spectrum of primordial perturbations.
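The "120 orders of magnitude" is simple arithmetic: compare a naive Planck-scale vacuum energy density, of order M_Pl^4, with the observed dark-energy density, which corresponds to an energy scale of only a few meV (our illustrative estimate, using standard values):

```python
import math

# Naive QFT estimate of the vacuum energy density: rho ~ M_Pl^4.
# Observed dark-energy density: rho_obs ~ (few meV)^4.
M_planck_eV = 1.22e28     # Planck mass ~1.22e19 GeV, expressed in eV
M_lambda_eV = 2.3e-3      # scale of the observed dark-energy density, in eV

ratio = (M_planck_eV / M_lambda_eV) ** 4   # ratio of energy densities
orders = math.log10(ratio)
# orders ~ 123, i.e. roughly the quoted "120 orders of magnitude".
```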


The multiverse paradigm provides a framework in which none of these fine-tunings requires an explanation, and string theory, with its believed "landscape" of de Sitter vacua, appears to support it. However, solutions in the landscape are not constructed directly in ten-dimensional string theory, but are found using effective low-energy descriptions in four space-time dimensions. In order to satisfy all experimental constraints, the effective theories require a number of intricate ingredients, such as anti-branes, T-branes or nongeometric fluxes, whose string theory origin and consistency are unclear. The purpose of this thesis is to examine whether a very large number of these vacua are in fact unstable or inconsistent with the experimental data coming out of the Large Hadron Collider. This will be done by analyzing one of the key ingredients of the Multiverse construction, the uplifting of the cosmological constant, and by taking into account the embeddings of Standard Model physics in String Theory.

In parallel to this top-down endeavor to understand de Sitter vacua in string theory, over the past year there has been an explosion of interest in this question from the bottom-up perspective: the recent de Sitter Swampland conjecture (by Vafa and collaborators, arXiv:1806.08362) proposes that, in the regime of parameters where calculations can be trusted, all solutions with a positive cosmological constant are either unstable or have a runaway behavior. We will attempt to link the top-down results we obtain with this line of work, hoping to establish that String Theory does not support the multiverse paradigm. Applicants are expected to have a solid background in general relativity and quantum field theory.

Understanding the origin of the laws governing our universe is an endeavor in which the DRF is very well positioned worldwide, both theoretically and experimentally: from the LHC group at IRFU/DPP, which puts stronger and stronger bounds on beyond-the-Standard-Model physics, to the Planck group at the IRFU/Département d'Astrophysique, which tries to measure the cosmological parameters of our Universe using the cosmic microwave background, to the particle theory group at the IPhT, which explores extensions of the Standard Model and the fine tuning thereof. Moreover, there exists an ongoing line of research to count and investigate the vacua of string theory using Machine Learning; one of the talks at the "Séminaire Intelligence Artificielle et Physique Théorique" organized by DRT/LIST last June was on this topic.


M-theory quantum corrections, fluxes and compactifications to four dimensions


Research field : Theoretical Physics
Location : Service de Physique Théorique
Starting date : 01-10-2019
Thesis supervisor :
Contact : CNRS - DSM - Institut de Physique Théorique, 01 6908 7466

The focus of this project is on four-dimensional compactifications of eleven-dimensional M-theory on seven manifolds with fluxes.

In the relatively better studied case of purely gravitational backgrounds, the internal seven-manifold should be of G2 holonomy. This is a very interesting class of geometries that has seen much recent interest; notably, there has been huge progress in the construction of explicit examples. However, there remain many unexplored questions concerning the four-dimensional theories (and their quantum properties) that arise from compactification of M-theory on these manifolds.

The situation becomes much more complicated when the four-form flux of M-theory is turned on. While it is known how to turn on internal flux while preserving supersymmetry, global constraints seem to rule out compact internal manifolds. To this day there is no known mechanism of tadpole cancellation. The quantum corrections of M-theory, and the so-far unexplored possibility of higher-derivative corrections to the M-theoretic Bianchi identities, present the best hope of avoiding the no-go theorems for four-dimensional compactifications. A better understanding of the M-theoretic quantum corrections and their possible geometric interpretation is an important open problem in its own right. Recently, generalised-geometric tools have been developed that may allow this analysis to be carried out systematically.

In addition to addressing some purely string/M-theoretic and geometric questions, the project should lead to progress in analysing an important and largely unexplored class of four-dimensional theories with low or no supersymmetry. In particular, the effects of the quantum corrections on the possible four-dimensional M-theoretic vacua will be studied, and the possibility of constructing de Sitter vacua in M-theory will be explored.

The project will require the candidate to master string dualities and a large set of geometric and string theoretic computational techniques.

Solving two-dimensional conformal field theories using the bootstrap approach


Research field : Theoretical Physics
Location : Service de Physique Théorique
Starting date : 01-10-2019
Thesis supervisor : Sylvain Ribault
Contact : Sylvain Ribault, 01 69 08 71 26

Two-dimensional conformal field theories have applications to statistical mechanics, string theory and quantum gravity. Moreover, they provide nontrivial examples of exactly solvable quantum field theories. The bootstrap approach consists in computing correlation functions by solving symmetry and consistency constraints. This can be done numerically in CFTs in arbitrary dimensions, and it can also be done analytically in some 2d CFTs such as minimal models or Liouville theory.

The aim of this PhD project is to solve two-dimensional CFTs, if possible exactly. By solving we mean computing three- and four-point correlation functions, and checking that four-point functions obey the consistency constraint called crossing symmetry. Solving a CFT in this sense not only establishes its existence, but also leads to a quantitative understanding that is crucial for applications. In order to solve CFTs exactly, we will rely on algebraic structures such as fusion rules, and on analytic properties of correlation functions, including their decomposition into special functions called conformal blocks.
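In standard notation (not specific to this project), the crossing-symmetry constraint states that the s-channel and t-channel conformal-block decompositions of a four-point function agree:

```latex
\left\langle \prod_{i=1}^{4} V_{\Delta_i}(z_i,\bar z_i) \right\rangle
 = \sum_{s} C_{12s}\, C_{s34}\,
   \mathcal{F}^{(s)}_{\Delta_s}(z)\, \bar{\mathcal{F}}^{(s)}_{\Delta_s}(\bar z)
 = \sum_{t} C_{14t}\, C_{t23}\,
   \mathcal{F}^{(t)}_{\Delta_t}(z)\, \bar{\mathcal{F}}^{(t)}_{\Delta_t}(\bar z)
```

Here the $C$'s are the three-point structure constants and the $\mathcal{F}$'s are the conformal blocks, fixed by the symmetry algebra; imposing the equality of the two decompositions is what determines (or overdetermines) the structure constants.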

The CFTs we will focus on are known theories that have not yet been fully solved, and new CFTs that we will strive to explore. The known CFTs may include the SL(2,R) Wess-Zumino-Witten model, the critical Ashkin-Teller model, or logarithmic minimal models. The new CFTs may be constructed by taking limits of known CFTs, or by starting with a symmetry algebra and working out its fusion rules.

For a taste of the ideas and methods, see my "minimal lectures". In order to apply, please send me a CV, a cover letter and Master grade transcripts no later than 16 April 2019. The plan is to apply for a scholarship from ED PIF (deadline 30 April), unless you have access to other funding.

S-matrix methods for effective field theories


Research field : Theoretical Physics
Location : Service de Physique Théorique
Starting date :
Thesis supervisor :
Contact : 33 1 69 08 73 65

Historically, progress in understanding fundamental physics has been driven mostly by experimental discoveries. Yet purely theoretical (at the time) developments have played pivotal roles as well, to mention only Yang-Mills theories or the Higgs mechanism. Now may be a good time to reappraise the basics of our quantum mechanical description of elementary particles.

Such a description is intrinsically rooted in the Effective Field Theory approach, which is one of the deepest and most useful guiding principles in physics. Its tools and methods allow one to study the universal aspects of entire classes of unknown microscopic models, with their main features captured by symmetries and a few relevant parameters of the effective degrees of freedom. Because of this universality, Effective Field Theories find applications across all scales in physics: from super-Hubble scales all the way down to the Planck length, passing through the TeV scale relevant at particle colliders.

At the same time, alternative conceptual and computational methods are making their way into particle physics. These are built upon very basic axioms: Poincaré invariance, locality, causality, and unitarity of the scattering S-matrix, without necessarily introducing auxiliary objects like fields, Lagrangians, gauge invariance, etc. These new methods allow one to reformulate known facts about effective field theory in a simpler and more intuitive language, and often to derive surprising new results that would be difficult to uncover using the traditional techniques.

New applications of the S-matrix approach to Effective Field Theory will be the focus of this PhD project.


One direction of exploration will involve the so-called on-shell amplitude methods, where probabilities of particle processes are calculated using recursion relations rather than Feynman diagrams. Most of the existing applications are in theories with massless particles, such as gluons or gravitons. Recently, a convenient formalism was proposed to extend these methods to particles with arbitrary masses. This opens the possibility to study a larger class of theories, such as spontaneously broken Yang-Mills theories with massive vector bosons. Taking advantage of this new formalism, this PhD project will undertake a systematic exploration of the Standard Model and its effective theory extensions using the on-shell amplitude methods.

Another important question is related to calculating operator mixing in Effective Field Theories. Renormalization group equations for the Wilson coefficients of effective operators can be laboriously calculated for certain theories using standard Feynman diagram techniques. However, using amplitude methods allows one to obtain the same results more efficiently, and also to uncover surprising new patterns and non-renormalization theorems. One of the tasks in this PhD project will be to generalize these methods, e.g. to finite renormalization terms and to massive theories, and to apply them to explicit classes of theories of interest in particle physics and cosmology. Constraints on the renormalization group flow from amplitude positivity techniques will be explored as well.
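Schematically (conventions and subtraction details vary), the positivity bounds referred to here come from a forward-limit dispersion relation for a 2 → 2 amplitude $A(s)$ at $t = 0$:

```latex
c_2 \;=\; \frac{1}{2\pi i}\oint \frac{\mathrm{d}s}{s^3}\, A(s)
    \;=\; \frac{2}{\pi}\int_{s_0}^{\infty} \frac{\mathrm{d}s}{s^3}\,
          \operatorname{Im} A(s) \;>\; 0,
```

where $c_2$ is the coefficient of $s^2$ in the low-energy expansion of the amplitude, the contour has been deformed onto the cuts (related by crossing symmetry), and positivity follows from the optical theorem, $\operatorname{Im} A(s) = s\,\sigma_{\mathrm{tot}}(s) \geq 0$. Bounds of this type constrain the signs of Wilson coefficients in the effective theory.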

The project will also develop new theoretical methods to study the effective theory of massive higher-spin particles, as well as the generalization of the galileon symmetries that control their high-energy scattering processes. Our goal is to write down the most general effective theory for higher-spin particles interacting with matter employing S-matrix techniques, including positivity bounds for scattering amplitudes. With that framework at hand, we will be able to explore the phenomenology of higher-spin particles at colliders and in cosmology. In the very same spirit as the various no-go theorems that forbid exactly massless higher spins, one can establish general scaling properties of the interactions between light (but not massless) higher spins and ordinary matter. As those interactions must be strongly suppressed at low energies, one can address the question of whether higher-spin particles can constitute the dark matter in the universe.

