Gennaro Miele » 6. Dark Matter

The missing mass problem

The discovery made by Fritz Zwicky in 1933 that visible matter accounts for only a tiny fraction of the total mass in the universe turned out to be one of the most profound new insights produced by astronomical exploration during the past century. From observations of the radial velocities of eight galaxies in the Coma Cluster, Zwicky found an unexpectedly large velocity dispersion, 1019±360 km s^-1. He concluded from these observations that, for a velocity dispersion of 1000 km s^-1, the mean density of the Coma Cluster would have to be 400 times greater than what is derived from luminous matter. He overestimated the mass-to-light ratio of the Coma Cluster because he assumed a Hubble parameter H0=588 km s^-1 Mpc^-1 instead of the value we now know it to have, H0=72.4 km s^-1 Mpc^-1. At that time, in fact, Hubble's prestige was so great that none of the early astronomers thought of reducing Hubble's constant to lower the mass-to-light ratios they found. His value for the overdensity of the Coma Cluster should therefore be reduced from 400 to (72.4/588) x 400 ≈ 50. Zwicky wrote: "If this [overdensity] is confirmed we would arrive at the astonishing conclusion that dark matter is present [in Coma] with a much greater density than luminous matter." This was the very first time that dark matter (in its modern sense) made its appearance in the scientific literature.

Coma cluster in negative B&W

Zwicky continued: "From these considerations it follows that the large velocity dispersion in Coma (and in other clusters of galaxies) represents an unsolved problem". It is not yet clear what was the basis for Zwicky's claim that other clusters also exhibited a missing mass problem. Only three years later (1936) did Smith find that the Virgo Cluster also appears to exhibit an unexpectedly high mass. In 1959 Kahn and Woltjer pointed out that M31 and the Milky Way were moving toward each other, so that they must have completed most of a (very elongated) orbit around each other during a Hubble time. Under the assumption that M31 and the Galaxy started to move apart 15 Gyr ago, they found that the mass of the Local Group had to be \gtrsim 1.8\times10^{12}M_{\odot}. Assuming that the combined mass of the Andromeda galaxy and the Milky Way system was 0.5\times10^{12}M_{\odot}, Kahn and Woltjer concluded that most of the mass of the Local Group existed in some invisible form.

Rotation curves of spiral galaxies

So, dark matter manifested itself for the first time in clusters of galaxies, but it is on smaller scales that it gave the clearest and, to date, most robust evidence of its existence. It clearly shows its presence in galaxies like our own, but astronomers had to wait nearly forty years to become aware of this. Observations of the 21-cm radio emission from rotating clouds of neutral hydrogen bound to our Galaxy determined the detailed rotation curve of the Milky Way, as well as of other spiral galaxies, to be flat well beyond their extent as seen in the optical band. Assuming a balance between the gravitational and centrifugal forces within Newtonian mechanics, the orbital speed VC is expected to fall with the galactocentric distance r as V_C^2\simeq GM/r beyond the physical extent of the galaxy of mass M. The behaviour of VC vs r, for distances smaller than the physical extent, then gives the distribution M(r) of mass within radius r. The observation V_C\approx constant out to the largest measured r, up to ten times the radius of the luminous disk, thus showed that there is a substantial amount of non-luminous matter beyond even this largest distance. The simplest way to justify a linearly increasing mass M(r)\propto r is to assume that dark matter is present in an approximately spherical, isothermal halo surrounding the disk, \rho(r)\propto1/r^2.
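The scaling argument above is easy to check numerically. The sketch below (all parameters are illustrative, not a fit to any real galaxy) integrates an isothermal ρ ∝ r⁻² halo and verifies that the resulting rotation curve is flat:

```python
import numpy as np
from scipy.integrate import quad

G = 4.30091e-6   # Newton's constant in kpc (km/s)^2 / Msun

def enclosed_mass(rho, r, r_min=1e-3):
    """Mass inside radius r (kpc) for a spherical density rho(r) in Msun/kpc^3."""
    m, _ = quad(lambda rp: 4.0 * np.pi * rp**2 * rho(rp), r_min, r)
    return m

# Illustrative isothermal halo rho = rho0 (r/r0)^-2; the parameters are made up
rho0, r0 = 1.0e7, 10.0   # Msun/kpc^3, kpc
rho_iso = lambda r: rho0 * (r / r0) ** -2

for r in (10.0, 50.0, 100.0):
    vc = np.sqrt(G * enclosed_mass(rho_iso, r) / r)
    print(f"r = {r:6.1f} kpc  ->  Vc = {vc:6.1f} km/s")   # flat: Vc ~ constant
```

Since M(r) ∝ r for this profile, V_C² = GM(r)/r is independent of r; a point mass would instead give V_C ∝ r^{-1/2}.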

Rotation curve for the spiral galaxy NGC6503. [Begeman et al. MNRAS. 249 (1991) 523]. The Figure shows one of such rotation curves. Rotation velocities measurements are shown as a function of distance from the galactic center. The dashed and dotted curves are the contributions to Vc due to the disc and the gas, respectively, while the dot-dash curve represents the contribution from the dark halo.

Cosmological abundance of dark matter

As we saw, the environments in which the presence of dark matter was first deduced were astrophysical, but it was from the analysis of the WMAP data [Komatsu et al. 2008] that we finally learned the total amount of DM in the universe. As already discussed, General Relativity relates the geometry of the universe to its energy momentum content. The geometry is expressed via the metric gab and subsequently through the Ricci tensor Rab and the curvature scalar R, while the energy momentum tensor is commonly denoted by Tab. Using the reduced Planck mass M_{Pl}=(8\pi G)^{-1/2}, Einstein's equation reads

R_{ab}-\frac{1}{2}R\,g_{ab}=\frac{1}{M_{Pl}^{2}}T_{ab}
The cosmological constant, Λ, is here assumed to be part of the energy momentum tensor. Due to the isotropy and homogeneity of the universe on cosmological scales, one can describe the metric using the Friedmann–Lemaître–Robertson–Walker form

ds^2=-dt^2+a^2(t)\left(\frac{dr^2}{1-kr^2}+r^2d\theta^2+r^2\sin^2{\theta}\,d\phi^2\right)

where k =-1, 0, +1 corresponds to open, flat and closed geometries. For flat geometries (the case we will concentrate on), this can be written in terms of the cartesian coordinates xi as

ds^2=-dt^2+a^2(t)\delta_{ij}dx^i dx^j

The only form of the energy momentum tensor Tab compatible with the assumptions of homogeneity and isotropy of the universe is:

T_{ab}=(\rho+p)u_a u_b + p g_{ab}

that is, the energy momentum tensor describing a perfect fluid with energy density ρ and pressure p. The relation between ρ and p is expressed by the equation of state p=wρ. For non-relativistic matter the pressure vanishes (w=0), whereas photons and massless neutrinos have w=1/3. From the 0-0 and i-i parts of Einstein's equation, we get the Friedmann equation

H^2=\frac{\rho}{3M_{Pl}^2}-\frac{k}{a^2}
having defined the Hubble parameter, H, as H=a^{-1}\frac{da}{dt}. The ratio of the energy density of some species ρ to the so-called critical energy density \rho_{cr}\equiv 3M_{Pl}^2H^2 is defined as \Omega=\frac{\rho}{\rho_{cr}}. For a flat universe (as suggested by the inflationary paradigm, by WMAP measurements, and by the fact that if the curvature is small today it was practically zero immediately after the Big Bang), Ω is just the fraction a given species contributes to the total energy of the Universe. This is what is commonly called the "cosmological abundance" of that species.
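As a quick numerical cross-check, the critical density for the H0 quoted earlier can be evaluated directly (the constants below are standard values; this is only an order-of-magnitude sketch):

```python
import math

G = 6.674e-11          # Newton's constant, m^3 kg^-1 s^-2
MPC = 3.0857e22        # metres per Mpc
GEV_KG = 1.7827e-27    # kg per GeV/c^2

H0 = 72.4e3 / MPC      # 72.4 km/s/Mpc converted to s^-1

# rho_cr = 3 M_Pl^2 H^2 = 3 H^2 / (8 pi G) in ordinary units
rho_cr = 3.0 * H0**2 / (8.0 * math.pi * G)       # kg/m^3
rho_cr_gev = rho_cr / GEV_KG * 1.0e-6            # GeV/cm^3
print(f"rho_cr ≈ {rho_cr_gev:.2e} GeV/cm^3")     # ~5e-6 GeV/cm^3
```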

After five years of observations the estimates WMAP gives for the total amount of matter, Ωm, and the amount of baryonic matter only, Ωb, are

\Omega_m=0.258\pm0.030 \qquad \Omega_{b}=0.0462\pm0.0015

The large discrepancy between these two values shows that ordinary (baryonic) matter constitutes only about 18% of the total matter content of the Universe. In the following we will return to this point, and we will see how the attempts to solve the consistency problems of the Standard Model of elementary particles quite naturally suggest a possible explanation for the existence of this great amount of non-baryonic dark matter.

Dark matter distribution (Local dark matter density, Galactic substructures)

It is quite obvious that understanding how DM is distributed around us is of fundamental importance for the calculation of every type of DM signal we can expect from our Galaxy. Unfortunately, indirect observations (e.g. from rotation curves) are not sufficient to constrain the shape of the density profile of the Milky Way, so our knowledge relies basically on N-body simulations. Since the first simulation of the evolution of a galaxy cluster, in 1941 [Holmberg:1941], our computational capabilities have experienced an incredible growth. It was using such simulations that Navarro, Frenk, and White (NFW) first showed, in 1996, that DM seems to aggregate following a sort of universal density profile [Navarro et al :1996],

\rho(r)=\rho_0 \left(\frac{r}{r_0}\right)^{-1}\left(1+\frac{r}{r_0}\right)^{-2}

parameterized by a length parameter r0 and a density parameter ρ0. After this first result many other simulations were performed, some confirming the NFW profile and others not.

Some of the DM density profiles usually considered in literature.

In particular, there is a lot of uncertainty in the value of the power-law index describing the density profile of the innermost part of the galaxy. The majority of these different results can be summarized in a compact way in the form of a parameterized density profile:

\rho(r)=\rho_0 \left(\frac{r}{r_0}\right)^{-\gamma}\left[1+\left(\frac{r}{r_0}\right)^{\alpha}\right]^{(\gamma-\beta)/\alpha}

It reproduces the NFW profile for α=1, β=3, and γ=1. Other models often encountered in the literature are summarized in the table. In the following we will use an NFW profile to describe the dark matter halo distribution. Note, however, that this choice is quite conservative with respect to other proposed profiles, e.g. the Moore profile [Moore:1999], which exhibits an internal cusp \propto r^{-1.5} that would in principle give a divergent DM annihilation signal. A problem related to the NFW profile is that the enclosed mass is logarithmically divergent, so a regularization procedure is required to define the halo mass.
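A minimal sketch of this parameterized profile, with the NFW choice and a Moore-like choice of (α, β, γ), recovering the inner logarithmic slope −γ numerically (r0 and ρ0 are arbitrary here):

```python
import numpy as np

def halo_profile(r, r0=20.0, rho0=1.0, alpha=1.0, beta=3.0, gamma=1.0):
    """Generalized (alpha, beta, gamma) DM density profile in units of rho0;
    r and r0 are in the same (arbitrary) length units."""
    x = np.asarray(r, dtype=float) / r0
    return rho0 * x**(-gamma) * (1.0 + x**alpha)**((gamma - beta) / alpha)

# NFW is (1, 3, 1); (1.5, 3, 1.5) is a common Moore-like parameterization
for name, (a, b, g) in [("NFW", (1.0, 3.0, 1.0)), ("Moore", (1.5, 3.0, 1.5))]:
    r1, r2 = 0.01, 0.02   # radii well inside r0, where rho ~ r^-gamma
    slope = np.log(halo_profile(r2, alpha=a, beta=b, gamma=g) /
                   halo_profile(r1, alpha=a, beta=b, gamma=g)) / np.log(r2 / r1)
    print(f"{name}: inner log-slope ≈ {slope:+.2f}")   # ≈ -gamma
```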

Following the usual conventions we define the mass of the halo as the mass contained within the virial radius r_{\rm{vir}}, defined as the radius within which the mean density of the halo is \delta_{\rm{vir}}=200 times the critical cosmological density ρcr which, for a standard cosmological model (\Omega_m\simeq0.3,\Omega_\Lambda\simeq 0.7), is equal to \rho_{cr}\simeq 5\times 10^{-6} \,\,\,{\rm GeV/cm}^3.

The parameters describing the halo are determined by imposing the local value of the DM density, ρS, and the Milky Way virial mass. The next transparency is devoted to the first of these two observables. Moreover, to assign the virial mass of the DM halo we first have to take into account the inhomogeneities in the dark matter distribution predicted by numerical simulations.

Local dark matter density

Despite technical difficulties, the determination of the local value of the dark matter density ρS is, in principle, a simple task. It requires a very accurate knowledge of the rotation curve shape (this is the technical difficulty, especially if we are talking about our Galaxy) and a good understanding of how matter is distributed among the disk, the bulge, and the halo.
Let's calculate at least the order of magnitude of ρS. We will work under the simplest hypothesis. We know that the mass contained in a sphere of radius r has to fulfill the equation

dM/dr=v^2/G={\rm const.}\,\,\,\,\,\, {\rm for \,\,\, large \,\,\, r}

to reproduce the result v^2=GM(r)/r={\rm const.}, so we assume that

M(r)=4\pi\int_0^r r'^2\rho(r')dr'\qquad\textrm{with}\qquad 4\pi r^2\rho(r)=\frac{v^2}{G}

which gives
\rho(r)=\frac{v^2}{4\pi Gr^2}=\frac{v^2}{4\pi G R_S^2}\left(\frac{r}{R_S}\right)^{-2}

At the Solar System distance, RS=8.5 kpc, we obtain

\rho_S=\frac{v^2}{4\pi G R_S^2}
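Plugging in numbers (with an assumed asymptotic rotation speed of 220 km s^-1, a typical Milky Way value, and standard constants) gives the expected order of magnitude:

```python
import math

G = 6.674e-11                    # Newton's constant, m^3 kg^-1 s^-2
KPC = 3.0857e19                  # metres per kpc
GEV_PER_KG = 1.0 / 1.7827e-27    # GeV/c^2 per kg

R_S = 8.5 * KPC                  # galactocentric distance of the Sun

def rho_local(v_kms):
    """Local DM density (GeV/cm^3) for an asymptotic rotation speed in km/s."""
    v = v_kms * 1.0e3                              # m/s
    rho = v**2 / (4.0 * math.pi * G * R_S**2)      # kg/m^3
    return rho * GEV_PER_KG * 1.0e-6               # GeV/cm^3

for v in (220.0, 100.0):
    print(f"v = {v:5.1f} km/s  ->  rho_S ≈ {rho_local(v):.2f} GeV/cm^3")
```

For v = 220 km s^-1 this gives roughly 0.5 GeV/cm³, squarely in the range found by the detailed analyses quoted below.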

The very sophisticated analyses performed by various groups yield rather different values for ρS, but its order of magnitude agrees with this simple estimate. The Figure shows the acceptable range of local dark matter densities, 0.2–0.8 GeV/cm3, obtained in the analysis of [Bergstrom et al. Astropart. Phys. 9 (1998) 137] for various choices of halo profile. Bahcall et al. 1981 find ρS=0.34 GeV/cm3, Caldwell and Ostriker find ρS=0.23 GeV/cm3, while [Turner 1986] calculates ρS=0.3–0.6 GeV/cm3.

Using an asymptotic velocity v ∼ 100 km s^-1 one can deduce ρS ≈ 0.1 GeV/cm3.

The range of local dark matter densities acceptable with observations of rotation curves for a variety of halo profiles and galactocentric distances.  From [Bergstrom et al. Astropart. Phys. 9 (1998) 137].

Galactic substructures

Simulations predict a DM distribution as a sum of a smooth halo component, and of an additional clumpy one with total masses roughly of the same order of magnitude. Hereafter we will assume for the mass of the Milky Way M_{\text{MW}}=M_{\text{h}}+M_{\text{cl}}=2\times 10^{12}M_{\odot} where  Mh and Mcl denote the total mass contained in the host galactic halo and  in the substructures (subhaloes) distribution, respectively. The relative normalization is fixed by imposing that the total mass in subhaloes, whose mass ranges between 10^{7}M_{\odot} and 10^{10}M_{\odot}, amounts to 10% of MMW . Current numerical simulations can resolve clumps with a minimum mass scale of \sim 10^{7}M_{\odot}. However, for WIMP particles clumps down to a mass of 10^{-6}M_{\odot} are expected. We will thus consider a clump mass range between 10^{-6}M_{\odot} and 10^{10}M_{\odot}. Finally, to fully characterize the subhalo population we will assume a mass distribution \propto m_{cl}^{-2} and that they are spatially distributed following the NFW profile of the main halo, i.e. with a mass spectrum number density of subhaloes, in galactocentric coordinates \vec{r}, given by

\frac{{d} n_{{\rm cl}}}{{d} m_{{\rm cl}}}(m_{{\rm cl}},\vec{r})= A \left( \frac{m_{{\rm cl}}}{M_{\text{cl}}} \right)^{-2} \left( \frac{r}{r_{h}} \right)^{-1}\left(1+\frac{r}{r_{h}}\right)^{-2}

where A is a dimensional normalization constant. Recent results show that the mass distribution seems to converge to m_{\text{cl}}^{-1.9} rather than m_{\text{cl}}^{-2}. However, with a minimum mass scale of 10^{-6}M_{\odot}, a mass index of 2.0 or 1.9 produces only a minor change in the following results. A more realistic clump distribution should take into account the tidal disruption of clumps near the galactic center. Also, numerical simulations suggest that the radial distribution could be somewhat anti-biased with respect to the host halo profile.

However, with our conservative assumptions the host halo dominates the DM annihilation signal out to 20^\circ-30^\circ from the galactic center, so that the details of the clump distribution have only a slight influence on the final results.

Following the previous assumptions, the total mass in DM clumps with masses between m1 and m2 turns out to be

M(m_1,m_2)=\int {d}\vec{r} \int_{m_1}^{m_2} m_{\text{cl}}\frac{{d} n_{\text{cl}}}{{d} m_{\text{cl}}}(m_{\text{cl}},\vec{r})\, {d}m_{\text{cl}} =4\pi\left[\ln{(1+c_h)}-\frac{c_h}{1+c_h}\right]\left(A \, r_h^{3} \, M_{\text{cl}} \right)M_{\text{cl}}\ln{\left(\frac{m_2}{m_1}\right)}

where c_{\text{h}} \equiv r_{\rm vir}/r_{\text{h}}  denotes the host halo concentration; while their number is

N(m_1,m_2) =\int {d}\vec{r}\, \int_{m_1}^{m_2} \, \frac{{d}n_{\text{cl}}}{{d} m_{\text{cl}}}(m_{\text{cl}},\vec{r}) \, \,{d}m_{\text{cl}}        = 4\pi\left[\ln{(1+c_h)}-\frac{c_h}{1+c_h}\right] \left(A \, r_h^{3} \, M_{\text{cl}} \right)\left(\frac{M_{\text{cl}}}{m_1}-\frac{M_{\text{cl}}}{m_2}\right)

In what follows we will determine A by imposing the condition M(10^{n_{1'}}M_{\odot},10^{n_{2'}}M_{\odot})=p_{1',2'}M_{MW}, where p_{1',2'} is the mass fraction in the range [10^{n_{1'}},10^{n_{2'}}]M_{\odot}. This means that

\frac{A}{M_{cl}^{-1}r_h^{-3}}=\left\{4\pi\left[\ln{(1+c_h)}-\frac{c_h}{1+c_h}\right]\ln{10}\right\}^{-1} \frac{p_{1',2'}}{n_{2'}-n_{1'}} \frac{M_{MW}}{M_{cl}}

Substituting this normalization into the expression for N(m_1,m_2), the number of clumps becomes
N(m_1,m_2)=\frac{p_{1',2'}}{n_{2'}-n_{1'}} \frac{1}{\ln{10}} \left( \frac{M_{MW}}{m_1}-\frac{M_{MW}}{m_2}\right)

Following [Diemand:2005] we will assume n1′=7, n2′=10 and p1′,2′=10%. Using this condition we can calculate the mass of the entire clump distribution, spanning [10^{-6},10^{10}]M_{\odot}:

M_{cl}=M(10^{-6}M_{\odot},10^{10}M_{\odot})=\frac{16}{30}M_{MW}\sim 53.3\,\% M_{MW}

while for the number of these clumps we obtain

N(10^{-6}M_{\odot},10^{10}M_{\odot})=\frac{10^6}{30\ln{10}}\frac{M_{MW}}{M_{\odot}}\sim 2.90\times10^{16}
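These closed-form results are easy to verify numerically; the sketch below encodes the mass-fraction and number formulas under the assumptions stated above (M_MW = 2×10^12 M⊙, 10% of the mass in clumps of 10^7–10^10 M⊙):

```python
import math

M_MW = 2.0e12       # Milky Way mass in solar masses (value assumed in the text)
p = 0.10            # mass fraction in clumps of 1e7 .. 1e10 Msun
n1p, n2p = 7, 10    # exponents of the normalization mass range

def clump_mass_fraction(m1, m2):
    """Total clump mass between m1 and m2 (Msun) as a fraction of M_MW."""
    return p / (n2p - n1p) * math.log10(m2 / m1)

def clump_number(m1, m2):
    """Number of clumps with masses between m1 and m2 (Msun)."""
    return p / ((n2p - n1p) * math.log(10.0)) * (M_MW / m1 - M_MW / m2)

print(f"mass fraction in [1e-6, 1e10] Msun: {clump_mass_fraction(1e-6, 1e10):.3f}")  # 16/30
print(f"number of clumps: {clump_number(1e-6, 1e10):.2e}")
```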

Finally, by using the previous constraints one can fix the values of the free parameters r0 and ρ0. Thus we solve

M_{vir}=\frac{14}{30}M_{MW}=M_{\text{h}}\qquad \rho(R_S)=\rho_S

jointly with the equation defining the virial mass:

\frac{M_{vir}}{\frac{4}{3}\pi r_{vir}^3}=\Delta\rho_{cr}\quad (\Delta=200)

hence obtaining r0=14.0 kpc and ρ0=0.572 GeV c^-2 cm^-3. A further piece of information is required, namely how the DM is distributed inside the clumps themselves. We will assume that each clump follows an NFW profile like the main halo, with rcl and ρcl replacing the corresponding quantities in the NFW profile equation. However, for a full characterization of a clump, further information on its concentration ccl is required. Unfortunately, numerical simulations are not completely helpful in this case, since we require information about the structure of clumps with masses down to 10^{-6}M_{\odot}, far below the current numerical resolution. Analytical models are thus required. In the current cosmological scenario structures formed hierarchically, via gravitational collapse, with smaller structures forming first. Thus, naively, since the smallest clumps formed when the universe was denser, a reasonable expectation is c_{\text{cl}} \propto (1+z_f), where zf is the clump formation redshift. Following the model of [Bullock 1999] we will thus assume c_{\text{cl}}=c_1\left(\frac{m_{\text{cl}}}{M_{\odot}}\right)^{-\alpha} with c1=38.2 and α=0.0392.
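The quoted halo parameters can be checked for consistency against the virial-mass constraint, and the Bullock-type concentration evaluated at the extremes of the clump mass range (standard values are assumed for the unit conversions):

```python
import math

# Unit conversions (standard values, assumed)
MSUN_KG = 1.989e30
GEV_KG = 1.7827e-27
KPC_CM = 3.0857e21
GEV_CM3_TO_MSUN_KPC3 = GEV_KG / MSUN_KG * KPC_CM**3   # ≈ 2.6e7

rho_cr = 5.0e-6 * GEV_CM3_TO_MSUN_KPC3    # critical density in Msun/kpc^3
M_vir = 14.0 / 30.0 * 2.0e12              # host-halo virial mass, Msun

# Virial radius from  M_vir / ((4/3) pi r_vir^3) = 200 rho_cr
r_vir = (3.0 * M_vir / (4.0 * math.pi * 200.0 * rho_cr)) ** (1.0 / 3.0)

# Forward check of the quoted NFW parameters r0 = 14 kpc, rho0 = 0.572 GeV/cm^3
r0 = 14.0
rho0 = 0.572 * GEV_CM3_TO_MSUN_KPC3
c = r_vir / r0
M_nfw = 4.0 * math.pi * rho0 * r0**3 * (math.log(1.0 + c) - c / (1.0 + c))

print(f"r_vir ≈ {r_vir:.0f} kpc, concentration c ≈ {c:.1f}")
print(f"M_NFW / M_vir ≈ {M_nfw / M_vir:.2f}")   # ≈ 1 if the parameters are consistent

# Bullock-type clump concentration c_cl = c1 (m_cl / Msun)^(-alpha)
c1, alpha = 38.2, 0.0392
for m in (1e-6, 1.0, 1e10):
    print(f"c_cl({m:.0e} Msun) ≈ {c1 * m**(-alpha):.1f}")
```

Note how weakly the concentration depends on the clump mass: it changes by only a factor of a few over sixteen decades in mass.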

Weakly Interacting Massive Particles (WIMP)

As we saw in the previous transparencies, the matter density in the universe points to a value larger than the maximum provided by baryons alone, according to Big Bang nucleosynthesis (BBN) and confirmed by WMAP and now Planck data. The need for non-baryonic dark matter is therefore striking. An important task of cosmology and particle physics is to produce viable non-baryonic candidates. An alternative approach to the introduction of new particles in the theory could obviously be the modification of the gravitational theory itself, but it has turned out to be very difficult to modify gravity on the various length scales where dark matter manifests its presence. So far only the flatness of galaxy rotation curves seems to be easily explainable by introducing violations of Newton's laws. Therefore the topic of this section will be the theoretical justification of the now common belief that a huge amount of non-standard particles exists. There is also a fascinating coincidence between the characteristics these particles must possess to solve the dark matter problem and those naturally exhibited by the most plausible extensions of the Standard Model of particle physics. To understand how this happens we will first approach the problem from a cosmological point of view. We will therefore introduce a new particle species into the primordial plasma of the early universe and follow the evolution of its density during the expansion that followed the Big Bang.

Thermal relics

As we have already discussed in the previous lessons, we can describe the early universe as a thermal bath of particles at equilibrium, due to the high rate of their mutual interactions. These interactions have to be frequent enough to resist the expansion, which would otherwise drive each particle species out of equilibrium. This is the intuitive interpretation we can give to the Boltzmann transport equation,

\frac{\textrm{d}n}{\textrm{d}t}=-3Hn-\langle\sigma_A v\rangle (n^2-n_{eq}^2)

describing the evolution of the number density n(t) of the particular species we are considering. H is the Hubble parameter (the expansion rate) and \langle \sigma_A v\rangle is the thermally averaged total cross section for annihilation into lighter particles, times the relative velocity v. The reason why we consider only lighter particles is that in what follows we will refer to massive particles with non-relativistic velocities, which we can therefore consider practically at rest with respect to their annihilation products. Relativistic dark matter particles, in fact, do not seem able to drive the primordial inhomogeneities of the baryonic matter distribution towards the structures we observe today, so we will restrict our study to non-relativistic dark matter, often called Cold Dark Matter (CDM). Finally, neq is the number density at thermal equilibrium; for a massive particle it can be written as the integral of a Maxwell–Boltzmann distribution:

n_{eq}=g\left(\frac{mT}{2\pi}\right)^{3/2}e^{-m/T}

g being the number of internal degrees of freedom of the particle. For example, for a spin-1/2 particle g=2, corresponding to the two possible spin states.

The Boltzmann transport equation can be easily interpreted in this way:

1) The -3Hn term represents the reduction of the particle number density due to the expansion. In fact, neglecting the interactions (σA=0), the Boltzmann equation reduces to

\frac{\textrm{d}n}{\textrm{d}t}=-3Hn
Remembering that H=\dot{a}/a we obtain

\frac{\textrm{d}n}{n}=-3\frac{\textrm{d}a}{a}
that leads us to the expected result, n(t)\propto a^{-3}(t), namely that the number density n decreases only due to the expansion effect.

2) The -\langle \sigma_A v\rangle n^2 term gives the rate at which particles annihilate, reducing n. 3) Finally, the +\langle \sigma_A v\rangle n_{eq}^2 term represents the creation of new particles by the inverse process. The Boltzmann transport equation does not admit analytic solutions. Nevertheless, very accurate approximate solutions can be obtained. We will not report here the way they are derived, because this is very well explained elsewhere (see for example the book by Kolb & Turner). We will only try to make our previous simplified interpretation a little more quantitative.

First of all, the Boltzmann equation can be recast in a more convenient form as

\frac{\textrm{d}Y}{\textrm{d}x}=-\frac{s\,\langle\sigma_A v\rangle}{Hx}\left(Y^2-Y_{eq}^2\right)
in terms of the new variables

Y=\frac{n}{s}\quad\left( Y_{eq}=\frac{n_{eq}}{s} \right)\quad \textrm{ and }\quad  x=\frac{m}{T}

where s is the entropy density of the universe and T the equilibrium temperature. Note that we are using temperature instead of time to describe the evolution. The quantity \Gamma_A= n_{eq}\langle\sigma_A v\rangle is the annihilation rate, and the condition ΓA≈H defines the decoupling time/temperature of our dark matter particles from the remaining species. An approximate estimate of this epoch can be obtained by imposing the condition ΓA(T)=H(T), which leads to T\approx\frac{m}{20}.

After this decoupling time the number density ceases to decrease following the equilibrium law and its value freezes, approximately, to its current value. Therefore, the resulting cosmological abundance can be calculated as its value at this decoupling time, and an order of magnitude estimate gives

\Omega_{DM} h^2\sim \frac{3\cdot 10^{-27}\ {\rm cm}^3{\rm s}^{-1}}{\langle \sigma_A v\rangle}
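The freeze-out behaviour behind this estimate can be illustrated by integrating the dimensionless form of the Boltzmann equation. In the sketch below the constant λ lumps the particle mass, cross section, and degrees-of-freedom factors into a single number; λ = 10^9 and the (g, g*) values are purely illustrative, not a fit to any real candidate:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Dimensionless form: dY/dx = -(lam / x^2) (Y^2 - Y_eq^2), with x = m/T.
lam = 1.0e9              # illustrative value lumping m, <sigma_A v> and g_*
g, g_star = 2.0, 100.0   # assumed internal and total degrees of freedom

def Y_eq(x):
    # Non-relativistic equilibrium abundance, Y_eq ∝ x^(3/2) e^(-x)
    return 0.145 * (g / g_star) * x**1.5 * np.exp(-x)

def rhs(x, y):
    return [-(lam / x**2) * (y[0]**2 - Y_eq(x)**2)]

# The equation is very stiff while Y tracks Y_eq, so use an implicit method
sol = solve_ivp(rhs, (1.0, 1000.0), [Y_eq(1.0)], method="Radau",
                rtol=1e-8, atol=1e-30)
Y_inf = sol.y[0, -1]
print(f"Y(x -> infinity) ≈ {Y_inf:.2e}")
print(f"Y_eq at x = 1000 ≈ {Y_eq(1000.0):.2e}")  # equilibrium value has collapsed
```

Y tracks Y_eq until the annihilation rate drops below the expansion rate, then freezes at roughly x_f/λ while the equilibrium abundance keeps falling exponentially; a larger ⟨σ_A v⟩ (larger λ) gives a smaller relic abundance, which is the content of the Ω_DM h² ∝ 1/⟨σ_A v⟩ estimate above.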


An interesting consequence of the last equation is that massive particles which have interactions of the order of the weak interactions naturally give contributions to the matter density of the universe of order unity, and therefore can account for the missing mass. The generic name for such a dark matter candidate is a WIMP (Weakly Interacting Massive Particle).
Under the assumption that these WIMPs constitute the dark matter particles, we can use the last result to estimate their annihilation cross section. In fact, according to WMAP measurements, we can assume \Omega_{DM}=\Omega_m-\Omega_b=0.212, which leads to

\langle \sigma_A v\rangle=\frac{3\cdot 10^{-27}\ {\textrm{cm}}^3\textrm{s}^{-1}}{\Omega_{DM}h^2}\approx 3\cdot 10^{-26}\ \textrm{cm}^3\textrm{s}^{-1}

We conclude this part by recalling that, assuming a thermal production process in the early universe, there is an upper limit to the mass of a stable relic particle. This comes about because unitarity precludes the annihilation cross section of particles of mass m, spin J and relative velocity (in the center of mass frame) v from being larger than

\sigma_A v\leq\frac{4\pi(2J+1)}{m^2 v}
Using the estimated v at freeze-out, it is found that m cannot exceed a value around 340 TeV. The most favored heavy dark matter candidate, the lightest supersymmetric particle, always has a mass much below this limit in the minimal models.

Departures from the standard scenario

There are many situations in which the standard method of calculating the abundance of a thermal relic fails.

- Coannihilation - This case occurs when the relic particle is the lightest of a set of similar particles whose masses are nearly degenerate. In this case the relic abundance of the lightest particle is determined not only by its annihilation cross section, but also by the annihilation of heavier particles, which will later decay into the lightest.

- Annihilation into forbidden channels - This case concerns annihilation into particles which are more massive than the relic particle. In the simplified analysis of the previous section this was considered simply as kinematically forbidden, but it can be shown that if the heavier particles are only 5–15% more massive, these channels can dominate the annihilation cross section and determine the relic abundance.

- Annihilation near a pole of the cross section - This case occurs when the annihilation takes place near a pole in the cross section. This happens, for example in Z0- exchange annihilation when the mass of the relic particle is near mZ/2. A pole can also occur when the annihilating dark matter particle is nearly one-half the mass of a resonance such as J/ψ or η.

- Non–thermal relics - Although thermal production of stable particles is a generic and unavoidable mechanism in the Big Bang scenario, there are several additional processes possible. For instance, some very heavy particles were perhaps never in thermal equilibrium. Non-thermal production may for example occur near cosmic strings and other defects. Near the end of a period of early inflation, several mechanisms related either to the inflaton field or to the strong gravity present at that epoch could contribute to nonthermal production.

- Charge conjugation asymmetry - The most probable dark matter particle candidates, the supersymmetric neutralino (see the next transparencies) and the lightest Kaluza-Klein particle, are both self-conjugate (dark matter obviously has to be electrically neutral). Nevertheless, if the particle is different from its antiparticle, there may exist an asymmetry similar to the one we know must have existed for baryons, which would otherwise have been quickly and completely annihilated by antibaryons in the early universe. Such an asymmetry can make the relic number density of the dark matter particles higher than in the perfectly symmetric case. This may allow for a relic density higher than the previous estimate even if the annihilation cross section is large.

- Evading unitarity - There may be a possibility to evade the unitarity bound previously shown and accept even extremely heavy particles as dark matter candidates if, for instance, they are not absolutely stable (so that the expression for the relic density does not apply), or if the production mechanism is non-thermal (wimpzilla, crypton, etc.).

Due to the large number of phenomena that can alter the simple estimate of the annihilation cross section computed above, we will treat \left\langle \sigma_A v\right\rangle as a free parameter. We will do the same for the dark matter particle mass.
However, to fully calculate the radio/gamma signal produced by the annihilation for the indirect DM search we need to specify a particular particle. This will be the pretext to briefly introduce one of the most desirable solutions to the problem: the possibility that Supersymmetry not only represents the right way to extend the Standard Model of the particle physics but, also, that its lightest particle, in many cases a neutralino, is the particle we are looking for.


A review of supersymmetric theories is obviously beyond the scope of a course devoted to astroparticle physics. Therefore, in the following transparencies, we will simply point the reader to the main reasons that led to its formulation. We will also introduce the unavoidable concepts and definitions needed to present the dark matter candidate, the so-called neutralino. For further discussions of supersymmetry, we refer the interested reader to more complete references.
As we know, in the Standard Model of particle physics there exists a fundamental distinction between bosons and fermions: bosons are the mediators of interactions or fundamental scalars, while fermions are the constituents of matter. One of the main achievements of supersymmetry is to provide a unified picture of matter and interactions.
We know that the elementary fermions constitute the irreducible representations of the Poincaré group, while bosons are introduced into the theory according to the gauge principle and the Higgs mechanism. Hence the entire Standard Model particle spectrum comes out of the assumption that the theory is invariant under a symmetry group which is the product of the Poincaré group by the internal/gauge group. Therefore a possible way to wipe out the previous dichotomy is to ask whether a Lie group exists mixing internal and space-time symmetries. Early attempts to find such a group had to face the limitations imposed by the so-called no-go theorem of Coleman and Mandula. Such limitations were eventually avoided by introducing new fermionic generators satisfying anticommutation relations instead of the commutation relations usually assumed in quantum theory. Such mixed algebras are called graded Lie algebras. They include generators that change fermions into bosons and vice versa:

Q\left| f \right.\rangle = \left| b \right.\rangle \,\,\,\,\,\,\,\,\,\,\,\, Q\left| b \right.\rangle = \left| f \right.\rangle

where “f” stands for fermion and “b” for boson, respectively.


Due to their fermionic nature, the operators Q carry spin 1/2, which implies that supersymmetry must be a spacetime symmetry. The question then arises of how to extend the Poincaré group of spatial translations and Lorentz transformations to include this new boson/fermion symmetry. The structure of such a group is highly restricted by the Haag-Lopuszanski-Sohnius extension of the Coleman and Mandula theorem [Haag 1974]. For realistic theories, the operators, Q, which we choose by convention to be Majorana spinors, must satisfy the algebra

\{ Q_a, \overline{Q}_b \} = 2 \gamma^\mu_{ab} P_\mu\qquad\{ Q_a, P_\mu \} = 0\qquad[ Q_a , M^{\mu \nu} ] = \sigma^{\mu \nu}_{ab} Q^b

where

\overline{Q}_a \equiv \left( Q^\dagger \gamma_0 \right)_a\qquad\textrm{and}\qquad\sigma^{\mu \nu}= \frac{i}{4} [\gamma^\mu, \gamma^\nu]

are the structure constants of the theory. There are also other important reasons for introducing supersymmetry. The solution it provides to the hierarchy problem is particularly natural. The hierarchy problem has to do with the enormous difference between the electroweak and Planck energy scales, and it arises in the radiative corrections to the mass of the Higgs boson. It is well known that scalar masses receive radiative corrections that grow quadratically with the energy cut-off, while fermion masses grow only logarithmically. Therefore, when we consider the one-loop radiative corrections to the Higgs boson mass we find

\delta m_H^2 \sim \left( \frac{\alpha}{2 \pi} \right) \Lambda^2,

where Λ is a cut-off energy at which new physics is expected to intervene. The Higgs mass is expected to be of the same order as the electroweak scale, that is, mH ≈ 100 GeV, while its radiative correction becomes many orders of magnitude larger than the electroweak scale if Λ is of the order of the Planck mass.

This clearly destroys the stability of the electroweak scale.

A possible solution to this problem is to assume the existence of new particles with similar masses but opposite statistics. Since fermion loops contribute to \delta m_H^2 with a sign opposite to that of the corresponding boson loops, at the 1-loop level one gets

\delta m_H^2 \sim \left( \frac{\alpha}{2 \pi} \right)\left( \Lambda^2 + m_B^2 \right)  -\left( \frac{\alpha}{2 \pi} \right)\left( \Lambda^2 + m_F^2 \right) =\left( \frac{\alpha}{2 \pi} \right)\left(  m_B^2 - m_F^2 \right)

Provided that |m_B^2 - m_F^2| \lesssim (1 \,\, {\rm TeV})^2, the quadratic divergence in the Higgs mass is cancelled at all orders of perturbation theory. The algebra of supersymmetry we introduced naturally guarantees the existence of new particles associated with all particles of the Standard Model, with the same mass but opposite statistics, and therefore provides a natural solution to the hierarchy problem.
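To get a feel for the numbers involved, the following sketch (pure Python; the coupling value α ≈ 1/128, the Planck-mass cut-off, and the sample superpartner masses are illustrative choices of ours) compares the size of the one-loop correction with and without the supersymmetric cancellation:

```python
import math

ALPHA = 1.0 / 128.0    # illustrative electroweak-scale coupling
M_PLANCK = 1.2e19      # Planck mass in GeV

def delta_mh_no_susy(cutoff):
    """Uncancelled correction: delta m_H ~ sqrt(alpha/2pi) * Lambda (GeV)."""
    return math.sqrt(ALPHA / (2 * math.pi)) * cutoff

def delta_mh_susy(m_b, m_f):
    """With superpartners the Lambda^2 terms cancel, leaving |m_B^2 - m_F^2|."""
    return math.sqrt(ALPHA / (2 * math.pi) * abs(m_b**2 - m_f**2))

# Without supersymmetry: cut-off at the Planck scale
print(f"no SUSY : delta m_H ~ {delta_mh_no_susy(M_PLANCK):.1e} GeV")
# With superpartners whose squared masses differ by less than (1 TeV)^2
print(f"SUSY    : delta m_H ~ {delta_mh_susy(1100.0, 1000.0):.1f} GeV")
```

With a Planck-scale cut-off the uncancelled correction lies some fifteen orders of magnitude above the electroweak scale, while the supersymmetric cancellation brings it down to the tens-of-GeV level.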
A third, fundamental reason for introducing supersymmetry comes from Grand Unification Theory, which predicts the unification of the three gauge couplings below the Planck scale. It is well known that this does not happen within the Standard Model alone (upper Figure), while, once supersymmetry is taken into account, it happens at a unification scale of about 2 \times 10^{16}\, {\rm GeV} (lower Figure).

The gauge couplings at LEP do not evolve to a unified value if there is no supersymmetry.

They rather evolve to a unified value if supersymmetry is included.
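The one-loop running behind the two figures can be sketched in a few lines. The couplings evolve as α_i^{-1}(Q) = α_i^{-1}(M_Z) − (b_i/2π) ln(Q/M_Z); the β-coefficients below are the standard one-loop SM and MSSM values (with GUT normalization of the hypercharge), while the rounded starting values at M_Z make this an illustrative sketch rather than a precision fit:

```python
import math

MZ = 91.19                          # GeV
ALPHA_INV_MZ = [59.0, 29.6, 8.5]    # rounded 1/alpha_i at M_Z (GUT-normalized U(1))
B_SM   = [41.0 / 10.0, -19.0 / 6.0, -7.0]   # one-loop SM beta coefficients
B_MSSM = [33.0 / 5.0, 1.0, -3.0]            # one-loop MSSM beta coefficients

def alpha_inv(i, Q, b):
    """One-loop running of 1/alpha_i from M_Z up to scale Q (GeV)."""
    return ALPHA_INV_MZ[i] - b[i] / (2 * math.pi) * math.log(Q / MZ)

def meeting_scale(i, j, b):
    """Scale where couplings i and j cross (running is linear in ln Q)."""
    t = 2 * math.pi * (ALPHA_INV_MZ[i] - ALPHA_INV_MZ[j]) / (b[i] - b[j])
    return MZ * math.exp(t)

Q_gut = meeting_scale(0, 1, B_MSSM)   # where alpha_1 meets alpha_2 in the MSSM
print(f"MSSM: alpha_1 = alpha_2 at Q ~ {Q_gut:.1e} GeV")
print(f"      alpha_3^-1 there: {alpha_inv(2, Q_gut, B_MSSM):.1f}"
      f" vs alpha_1^-1: {alpha_inv(0, Q_gut, B_MSSM):.1f}")
```

Even in this crude sketch, the MSSM third coupling lands within a fraction of a unit of the crossing point near 2 × 10^16 GeV, while with the SM coefficients the three couplings miss each other by several units, as in the figures.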

Minimal Supersymmetric Standard Model

In the following we will consider only the simplest possible way to extend the Standard Model to a supersymmetric theory. This extension constitutes the so-called Minimal Supersymmetric Standard Model (MSSM), a theory containing all the known fields of the Standard Model, an extra Higgs doublet, and the partners of the ordinary particles required to form the supersymmetric multiplets. The MSSM is assumed to be invariant under the gauge group of the Standard Model, SU(3)\times SU(2)\times U(1); it is required to be renormalizable; and it is constrained by an additional symmetry, R-parity, necessary to prevent lepton and baryon number violations in the interactions. Despite the fact that the MSSM constitutes, as stated, the simplest supersymmetric description of elementary particle interactions, it is nevertheless an extremely complicated theory, defined by a great number of free parameters. The only fast way to deduce the Lagrangian describing the MSSM makes extensive use of the superspace formalism, so we will not reproduce those results here; the interested reader is referred to the extensive existing literature. Let us only state that superspace plays in supersymmetric theories the same role played by Minkowski space-time in Special Relativity, in the sense that it constitutes the right formalism to make supersymmetry invariance recognizable at sight.

Minimal Supersymmetric Standard Model

Given our interest in dark matter, the new ingredient of the theory we are most interested in is R-parity. It constitutes a discrete symmetry whose action on the component fields of the theory is

R = (-1)^{3(B-L)+2S}
where B and L are respectively the baryon and the lepton number operators, while S is the spin. It is easily recognized that R is always equal to +1 for the Standard Model particles, while it takes the value -1 for their supersymmetric partners, due to their opposite statistics. Consider for example an electron. In this case L=1, B=0 and S=1/2, therefore R=+1. A hypothetical spin-0 partner of the electron would instead have R=-1. The same reasoning holds for all Standard Model particles, as the reader can easily check. The assumption that R-parity is a conserved multiplicative quantum number introduces an important selection rule that prevents the decay of the lightest supersymmetric particle (LSP). In fact, the decay of a particle with R=-1 must produce an odd number of non-standard (R=-1) particles, all of which would have to be lighter than the LSP; since no such particles exist, the LSP is stable, and the only way it can change its number is by annihilating in pairs. This is the reason why supersymmetry is so important in the cosmology of dark matter: it provides in a very natural way a viable dark matter candidate.
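The rule above is easy to tabulate. The small helper below is our own illustration (the function name and the particle list are ours, not part of any standard library):

```python
def r_parity(B, L, S):
    """R-parity R = (-1)^(3(B-L)+2S): +1 for SM particles, -1 for superpartners."""
    exponent = round(3 * (B - L) + 2 * S)   # always an integer for physical states
    return 1 if exponent % 2 == 0 else -1

# (B, L, S) for a few states and their hypothetical superpartners
examples = {
    "electron": (0, 1, 0.5),   "selectron": (0, 1, 0),
    "quark":    (1/3, 0, 0.5), "squark":    (1/3, 0, 0),
    "photon":   (0, 0, 1),     "photino":   (0, 0, 0.5),
}
for name, (B, L, S) in examples.items():
    print(f"{name:>9}: R = {r_parity(B, L, S):+d}")
```

As claimed, every Standard Model entry comes out with R = +1 and every superpartner with R = -1: the sign is flipped purely by the change of spin by 1/2.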

Minimal Supersymmetric Standard Model

Once the degrees of freedom of the Standard Model are doubled, by introducing a fermionic degree of freedom for each boson of the theory and two bosons (one for the left helicity and one for the right helicity) for each fermion, the resulting particle spectrum is very rich. It is summarized in the table reported on the following slide. The most exotic features come from the Higgs sector. After the spontaneous electroweak symmetry breaking, the MSSM possesses five Higgs bosons, three neutral and two charged: h0, H0, A0, H+, H-. Also present are the superpartners of the interaction states of the Higgs particles, \tilde{H}^{\pm} and \tilde{H}_{1,2}^0, and the superpartners of the electroweak gauge bosons, \tilde{W}^{\pm}, \tilde{B} and \tilde{W}_3. The charged Higgs superpartners share the same quantum numbers, therefore they mix to form the mass eigenstates known as charginos, \tilde{\chi}^{\pm}_{1,2}. In the same way, the neutral Higgs superpartners and the neutral gauge boson superpartners (all fermions) form the neutralinos, \tilde{\chi}^{0}_{1,\dots,4} (ordered by increasing mass). The importance of the neutralinos basically resides in the fact that the lightest of them, \tilde{\chi}^{0}_1, simply denoted \chi and called the neutralino, is the LSP in many realizations (i.e. parameter choices) of the MSSM.

Standard Model particles and their superpartners in the MSSM.

The lightest neutralino

In the basis (\tilde{B},\tilde{W}_{3},\tilde{H}_{1}^0,\tilde{H}_{2}^0), the neutralino mass matrix can be expressed as

\arraycolsep=0.01in\left( \begin{array}{cccc}M_1 & 0 & -M_Z\cos \beta \sin \theta_W^{} & M_Z\sin \beta \sin \theta_W^{}\\0 & M_2 & M_Z\cos \beta \cos \theta_W^{} & -M_Z\sin \beta \cos \theta_W^{}\\-M_Z\cos \beta \sin \theta_W^{} & M_Z\cos \beta \cos \theta_W^{} & 0 & -\mu\\M_Z\sin \beta \sin \theta_W^{} & -M_Z\sin \beta \cos \theta_W^{} & -\mu & 0\end{array} \right)

The MSSM parameters intervening in the neutralino sector are therefore:

M_1 - a bino  (\tilde{B})  mass parameter;

M_2  - a wino (\tilde{W}_3)  mass parameter;

\mu  - the so-called higgsino mass term;

\tan \beta -  the ratio of the vacuum expectation values of the Higgs bosons.

At last, as usually done in the literature devoted to the cosmological implications of the MSSM, we will assume a relation between M1 and M2 that comes from Grand Unification Theory:

M_1 = \frac{5}{3} \tan^2 \theta_W \, M_2 \simeq 0.5 \, M_2
The lightest neutralino

This allows us to reduce the neutralino MSSM parameters to three. Moreover, the neutralino mass is quite insensitive to \tan{\beta}, so we can fix a value for it, for example \tan{\beta}=2 as in the Figure, and only care about M2 and μ.

Writing the lightest neutralino as

\chi = N_{11} \tilde B+  N_{12}\tilde W_3 + N_{13} \tilde H_1^0 + N_{14} \tilde H_2^0

we can define the gaugino fraction, fG,  and the higgsino fraction, fH, as

f_G=N_{11}^2 + N_{12}^2 \,\,\, {\rm and}\,\,\,\,f_H=N_{13}^2 + N_{14}^2

The reason why we define these quantities is that the annihilation and scattering properties of the neutralino become extremely simple when expressed in terms of these fractions, while they appear extremely involved when described in terms of the pure MSSM parameters. A plot provides a good example of this.
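The map from (M2, μ) to m_χ and f_G can be sketched numerically. Since the neutralino mass matrix above is real and symmetric, the following self-contained example (pure Python; the sample point M2 = 200 GeV, μ = 500 GeV, tan β = 2, the rounded values M_Z = 91.19 GeV and sin²θ_W = 0.23, and the GUT relation M1 ≈ 0.5 M2 are illustrative choices of ours) diagonalizes it with Jacobi rotations and extracts the lightest eigenstate and its gaugino fraction:

```python
import math

def neutralino_matrix(M1, M2, mu, tan_beta, MZ=91.19, sin2_thw=0.23):
    """Tree-level neutralino mass matrix in the (B~, W3~, H1~0, H2~0) basis (GeV)."""
    sw, cw = math.sqrt(sin2_thw), math.sqrt(1 - sin2_thw)
    beta = math.atan(tan_beta)
    sb, cb = math.sin(beta), math.cos(beta)
    return [[M1,        0.0,       -MZ*cb*sw,  MZ*sb*sw],
            [0.0,       M2,         MZ*cb*cw, -MZ*sb*cw],
            [-MZ*cb*sw, MZ*cb*cw,   0.0,      -mu],
            [MZ*sb*sw, -MZ*sb*cw,  -mu,        0.0]]

def jacobi(A, sweeps=100):
    """Diagonalize a real symmetric matrix by cyclic Jacobi rotations.
    Returns (eigenvalues, V), the columns of V being the eigenvectors."""
    n = len(A)
    a = [row[:] for row in A]
    V = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(sweeps):
        off = sum(a[i][j]**2 for i in range(n) for j in range(i + 1, n))
        if off < 1e-18:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(a[p][q]) < 1e-30:
                    continue
                # rotation angle that zeroes a[p][q]
                th = 0.5 * math.atan2(2 * a[p][q], a[q][q] - a[p][p])
                c, s = math.cos(th), math.sin(th)
                for k in range(n):   # rows: a <- J^T a
                    a[p][k], a[q][k] = c*a[p][k] - s*a[q][k], s*a[p][k] + c*a[q][k]
                for k in range(n):   # columns: a <- a J
                    a[k][p], a[k][q] = c*a[k][p] - s*a[k][q], s*a[k][p] + c*a[k][q]
                for k in range(n):   # accumulate eigenvectors: V <- V J
                    V[k][p], V[k][q] = c*V[k][p] - s*V[k][q], s*V[k][p] + c*V[k][q]
    return [a[i][i] for i in range(n)], V

M2, mu, tb = 200.0, 500.0, 2.0
M = neutralino_matrix(0.5 * M2, M2, mu, tb)    # GUT relation M1 ~ 0.5 M2
eigs, V = jacobi(M)
i = min(range(4), key=lambda k: abs(eigs[k]))  # lightest neutralino
fG = V[0][i]**2 + V[1][i]**2                   # gaugino fraction N11^2 + N12^2
print(f"m_chi ~ {abs(eigs[i]):.1f} GeV, gaugino fraction fG ~ {fG:.2f}")
```

At this sample point, with M1 well below |μ|, the lightest state comes out close in mass to M1 and almost purely bino, i.e. f_G close to 1, in line with the contour plots discussed below.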

The Figure shows the contour plots of the neutralino mass (dashed lines), each labeled with the corresponding mass in GeV, and of its gaugino fraction (continuous lines). Masses from 50 GeV to 1600 GeV are represented. The figure clearly shows that, randomly choosing a pair (M2, μ), the probability of obtaining a mixed neutralino, f_G ≈ f_H ≈ 1/2, is quite low.

Contour plots of the neutralino mass (dashed lines) and its gaugino fraction (continuous lines).

The lightest neutralino

The most probable situations correspond in fact to an almost pure higgsino, f_G \leq 0.1, or an almost pure gaugino, f_G \geq 0.9. This is a very fortunate circumstance, because explicit calculations of the annihilation cross section [Fornengo], and therefore of the neutralino cosmological abundance, show that mixed neutralinos have no chance to contribute significantly to the dark matter abundance. On the contrary, low-mass (m_{\chi}<100 \, {\rm GeV}) higgsinos and high-mass (m_{\chi}>100 \, {\rm GeV}) gauginos are perfect candidates.

All this suggests that the attention devoted in the literature to the neutralino is definitely well motivated.

Direct and indirect searches

There are two basic ways to detect the WIMPs (Weakly Interacting Massive Particles) constituting the dark matter present in the halo of our Galaxy.

The first method, direct detection, relies on the possibility of detecting the recoil energy of the nuclei of a low-background detector as a consequence of their elastic scattering off a WIMP.

The second method, indirect detection, exploits the possibility of detecting the products of the annihilation of DM particles, either in the galactic halo or in celestial bodies (namely the Earth and the Sun) where WIMPs may have been accumulated by gravitational capture. In this last case, the signal consists of a flux of neutrinos emitted from the central regions of the body, and the typical observable is a flux of upgoing muons produced by the charged-current conversion of the muon-neutrino component of the signal. In the case of DM annihilation in the galactic halo, there are more possibilities: the signal can consist of gamma-rays, X-rays, radio emission, neutrinos and antimatter (positrons, antiprotons and antideuterons).

From the experimental side, the searches for DM signals involve many different techniques, ranging from low-background underground detectors to neutrino telescopes, antimatter and gamma-ray detectors in space, and air-Cherenkov detectors.
