The discovery by Fritz Zwicky in 1933 that visible matter accounts for only a tiny fraction of the total mass in the universe turned out to be one of the most profound insights produced by astronomical exploration during the past century. From observations of the radial velocities of eight galaxies in the Coma Cluster, Zwicky found an unexpectedly large velocity dispersion, 1019±360 km s^{-1}. He concluded that, for a velocity dispersion of 1000 km s^{-1}, the mean density of the Coma Cluster would have to be 400 times greater than that derived from luminous matter. He overestimated the mass-to-light ratio of the Coma Cluster because he assumed a Hubble parameter H_{0}=588 km s^{-1} Mpc^{-1} instead of the value we now measure, H_{0}=72.4 km s^{-1} Mpc^{-1}. At that time, in fact, Hubble's prestige was so great that none of the early astronomers thought of reducing Hubble's constant to lower the mass-to-light ratios they found. Zwicky's value for the overdensity of the Coma Cluster should therefore be reduced from 400 to (72.4/588) × 400 ≈ 50. Zwicky wrote: "If this [*overdensity*] is confirmed we would arrive at the astonishing conclusion that dark matter is present [in Coma] with a much greater density than luminous matter." This was the very first time that *dark matter* (in its modern sense) made its appearance in the scientific literature.

Zwicky continued: "From these considerations it follows that the large velocity dispersion *in Coma* (and in other clusters of galaxies) represents an unsolved problem". It is not clear what the basis was for Zwicky's claim that other clusters also exhibited a missing-mass problem. Not until three years later (1936) did Smith find that the Virgo Cluster also appears to exhibit an unexpectedly high mass. In 1959 Kahn and Woltjer pointed out that M31 and the Milky Way are moving toward each other, so that they must have completed most of a (very elongated) orbit around each other during a Hubble time. Assuming that M31 and the Galaxy started to move apart 15 Gyr ago, they derived a mass for the Local Group that greatly exceeds the combined luminous mass of the Andromeda galaxy and the Milky Way system; Kahn and Woltjer therefore concluded that most of the mass of the Local Group exists in some invisible form.

So, dark matter manifested itself for the first time in clusters of galaxies, but it is on smaller scales that it gave the clearest and, to date, most robust evidence of its existence. It clearly shows its presence in galaxies like our own, but astronomers had to wait nearly forty years to become aware of this. Observations of the 21-cm radio emission from rotating clouds of neutral hydrogen bound to our galaxy determined the detailed rotation curve of the Milky Way, as well as of other spiral galaxies, to be flat well beyond their extent as seen in the optical band. Assuming a balance between the gravitational and centrifugal forces within Newtonian mechanics, the orbital speed *V_{C}* is expected to fall with the galactocentric distance as

V_{C}(r) = (G M(r)/r)^{1/2} ∝ r^{-1/2} beyond the luminous disc.

Rotation curve of the spiral galaxy NGC 6503 [Begeman et al., MNRAS 249 (1991) 523]. The Figure shows one such rotation curve. Measured rotation velocities are shown as a function of distance from the galactic center. The dashed and dotted curves are the contributions to V_c due to the disc and the gas, respectively, while the dot-dash curve represents the contribution from the dark halo.
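To make the contrast concrete, here is a minimal numerical sketch (the luminous mass and asymptotic speed below are illustrative placeholders, not fitted values for NGC 6503): a Keplerian curve falls as r^{-1/2}, while sustaining a flat curve requires an enclosed mass M(r) growing linearly with r.

```python
import math

# Toy comparison: Keplerian fall-off expected if all mass were enclosed within
# the optical disc, versus the enclosed mass needed for a flat rotation curve.
G = 4.30091e-6   # gravitational constant in kpc (km/s)^2 / Msun

M_lum = 5e10     # hypothetical luminous mass in Msun (illustrative value)
V_flat = 120.0   # hypothetical asymptotic rotation speed in km/s

def v_keplerian(r_kpc, mass=M_lum):
    """Orbital speed if the enclosed mass stays constant beyond r: V ~ r^(-1/2)."""
    return math.sqrt(G * mass / r_kpc)

def enclosed_dark_mass(r_kpc, v=V_flat):
    """Mass needed inside r to sustain a flat curve: M(r) = V^2 r / G, i.e. M ~ r."""
    return v**2 * r_kpc / G

for r in (5.0, 10.0, 20.0):
    print(f"r = {r:4.1f} kpc: Keplerian V = {v_keplerian(r):6.1f} km/s, "
          f"M(r) for flat curve = {enclosed_dark_mass(r):.2e} Msun")
```

Doubling the radius halves the square of the Keplerian speed but doubles the mass required by a flat curve, which is the essence of the dark halo argument.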

As we saw, the first environments in which the presence of dark matter was deduced were astrophysical, but it was from an analysis of the WMAP data [Komatsu et al. 2008] that we finally learned the total amount of DM in the universe. As already discussed, General Relativity relates the geometry of the universe to its energy-momentum content. The geometry is expressed via the metric g_{ab}, and subsequently through the Ricci tensor R_{ab} and the curvature scalar R, while the energy-momentum tensor is commonly denoted by T_{ab}. Using the reduced Planck mass, M_P = (8πG)^{-1/2}, Einstein's equation reads

R_{ab} - (1/2) g_{ab} R = M_P^{-2} T_{ab}.

The cosmological constant, Λ, is here assumed to be part of the energy-momentum tensor. Due to the isotropy and homogeneity of the universe on cosmological scales, one can describe the metric using the Friedmann–Lemaître–Robertson–Walker form

ds² = dt² - a²(t) [ dr²/(1 - k r²) + r² (dθ² + sin²θ dφ²) ],

where k = -1, 0, +1 corresponds to open, flat and closed geometries, respectively. For flat geometries (the case we will concentrate on), this can be written in terms of the Cartesian coordinates x^{i} as

ds² = dt² - a²(t) δ_{ij} dx^{i} dx^{j}.

The only possibility for the energy-momentum tensor T_{ab} to be compatible with the assumptions of homogeneity and isotropy of the universe is

T^{a}_{b} = diag(ρ, -p, -p, -p),

that is, the energy-momentum tensor of a *perfect fluid* with energy density ρ and pressure p. The relation between ρ and p is expressed by the equation of state p = wρ. For non-relativistic matter the pressure vanishes (w = 0), whereas photons and massless neutrinos have w = 1/3. From the 0-0 and i-i parts of Einstein's equation we get the *Friedmann equation*

H² = ρ/(3 M_P²) - k/a²,

having defined the *Hubble parameter*, H, as H ≡ ȧ/a. The ratio of the energy density ρ of some species to the so-called critical energy density, ρ_{cr} = 3 H² M_P², is defined as Ω ≡ ρ/ρ_{cr}. For a flat universe (as suggested by the inflationary paradigm, by WMAP measurements, and by the fact that if the curvature is small today it was practically zero immediately after the Big Bang), Ω is just the fraction a given species contributes to the total energy of the Universe. This is what is commonly called the "cosmological abundance" of that species.
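As a quick numerical aside, here is a minimal sketch evaluating the critical density, written in the classical form ρ_cr = 3H²/(8πG), for the H_0 = 72.4 km s^{-1} Mpc^{-1} quoted earlier in these notes:

```python
import math

# Critical density rho_cr = 3 H^2 / (8 pi G) for the H0 quoted in the text.
G = 6.674e-8          # gravitational constant, cm^3 g^-1 s^-2
H0 = 72.4             # km s^-1 Mpc^-1 (value quoted in the text)
Mpc_cm = 3.0857e24    # cm per Mpc
GeV_g = 1.783e-24     # grams per GeV/c^2

H0_s = H0 * 1e5 / Mpc_cm                   # H0 converted to s^-1
rho_cr = 3 * H0_s**2 / (8 * math.pi * G)   # critical density in g cm^-3
print(f"rho_cr = {rho_cr:.3e} g/cm^3 = {rho_cr / GeV_g:.3e} GeV/cm^3")

# Cosmological abundance of a species with energy density rho (same units):
Omega = lambda rho: rho / rho_cr
```

The result, of order 10^{-29} g cm^{-3} (a few GeV per cubic meter), sets the scale against which every Ω below is measured.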

After five years of observations, the estimates that WMAP gives for the total amount of matter, Ω_{m}, and for the amount of baryonic matter only, Ω_{b}, are

The large discrepancy between these two values indicates that ordinary matter constitutes only a small fraction of the total matter of the Universe. In the following we will return to this point, and we will see how the attempts to solve the consistency problems of the Standard Model of elementary particles quite naturally suggest a possible explanation for the existence of this great amount of non-baryonic dark matter.

It is quite obvious that understanding how DM is distributed around us is of fundamental importance for the calculation of every type of DM signal we can expect from our Galaxy. Unfortunately, indirect observations (e.g. from rotation curves) are not sufficient to constrain the shape of the density profile of the Milky Way, so our knowledge relies basically on N-body simulations. Since the first simulation of a galaxy encounter, in 1941 [Holmberg:1941], our computational capabilities have experienced an incredible growth. It was using such simulations that Navarro, Frenk, and White (NFW) first showed, in 1996, that DM seems to aggregate following a sort of universal density profile [Navarro et al :1996],

ρ(r) = ρ_{0} / [ (r/r_{0}) (1 + r/r_{0})² ],

parameterized by a length parameter r_{0} and a density mass parameter ρ_{0}. After this first result many other simulations were made, confirming or not the NFW profile.

In particular, there is a lot of uncertainty about the value of the power-law index describing the mass profile of the innermost part of the galaxy. The majority of these different results can be summarized compactly in the form of a parameterized density profile:

ρ(r) = ρ_{0} / [ (r/r_{0})^{γ} (1 + (r/r_{0})^{α})^{(β-γ)/α} ].

This reproduces the NFW profile for α=1, β=3, and γ=1. Other models often encountered in the literature are summarized in the table. In the following we will use an NFW profile to describe the dark matter halo distribution. Note, however, that this choice is quite conservative with respect to other proposed profiles, e.g. the Moore profile [Moore:1999], which exhibits an inner cusp that would in principle give a divergent DM annihilation signal. A problem related to the NFW profile is that the enclosed mass is logarithmically divergent, so a regularization procedure is required to define the halo mass.
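The parameterized profile is simple to code. A minimal sketch, where the Moore case uses the commonly quoted (α, β, γ) = (1.5, 3, 1.5) choice:

```python
def density_profile(r, r0, rho0, alpha, beta, gamma):
    """Generic (alpha, beta, gamma) halo profile:
    rho(r) = rho0 / [ (r/r0)^gamma * (1 + (r/r0)^alpha)^((beta - gamma)/alpha) ]."""
    x = r / r0
    return rho0 / (x**gamma * (1.0 + x**alpha) ** ((beta - gamma) / alpha))

def nfw(r, r0, rho0):
    # NFW is the special case (alpha, beta, gamma) = (1, 3, 1)
    return density_profile(r, r0, rho0, 1.0, 3.0, 1.0)

def moore(r, r0, rho0):
    # Moore et al. profile with its steeper inner cusp (gamma = 1.5, assumed here)
    return density_profile(r, r0, rho0, 1.5, 3.0, 1.5)
```

At small radii the Moore cusp rises much faster than the NFW one, which is why it would in principle give a divergent annihilation signal at the center.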

Following the usual conventions, we define the mass of the halo as the mass contained within the *virial radius*, defined as the radius within which the mean density of the halo is a fixed overdensity Δ_{vir} times the mean critical cosmological density ρ_{cr} of the standard cosmological model.

The parameters describing the halo are determined by imposing the local value of the DM density, ρ_{S}, and the Milky Way virial mass. The next transparency is devoted to the first of these two observables. Moreover, to assign the virial mass of the DM halo we first have to take into account the inhomogeneities in the dark matter distribution predicted by numerical simulations.

Despite technical difficulties, the determination of the local value of the dark matter density ρ_{S} is, in principle, a simple task. It requires a very accurate knowledge of the rotation curve shape (this is the technical difficulty, especially if we are talking about our Galaxy) and a good understanding of how matter is distributed among the disk, the bulge, and the halo.

Let us estimate at least the order of magnitude of ρ_{S}, working under the simplest hypothesis. We know that the mass contained in a sphere of radius r has to fulfill the equation

M(r) = v² r / G

to reproduce the result v² = GM(r)/r = const., so we assume that

ρ(r) ∝ 1/r².

So

ρ(r) = v² / (4πG r²).

At the Solar System distance, R_{S}=8.5 kpc, we obtain

ρ_{S} = v² / (4πG R_{S}²).

The very sophisticated analyses performed by various groups yield rather different values for ρ_{S}, but its order of magnitude agrees with this simple estimate. The Figure shows the acceptable range of local dark matter densities, 0.2-0.8 GeV/cm^{3}, obtained in the analysis of [Bergstrom et al. Astropart. Phys. 9 (1998) 137] for various choices of halo profile. Bahcall et al. 1981 find ρ_{S}=0.34 GeV/cm^{3}, Caldwell and Ostriker find ρ_{S}=0.23 GeV/cm^{3}, while [Turner 1986] calculates ρ_{S}=0.3 – 0.6 GeV/cm^{3}.

Using an asymptotic velocity $v\sim100$ km s$^{-1}$ one can deduce ρ_{S} ≈ 0.1 GeV/cm^{3}.
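The estimate is easy to reproduce numerically. A minimal sketch of ρ_S = v²/(4πG R_S²), with the unit conversions spelled out:

```python
import math

# Order-of-magnitude estimate rho_S = v^2 / (4 pi G R_S^2) from the flat
# rotation-curve assumption, evaluated at the solar galactocentric distance.
G = 6.674e-11        # m^3 kg^-1 s^-2
kpc = 3.0857e19      # meters per kpc
GeV_kg = 1.783e-27   # kg per GeV/c^2

def rho_local(v_kms, R_kpc=8.5):
    """Local DM density in GeV/cm^3 for asymptotic circular speed v (km/s)."""
    v = v_kms * 1e3                            # m/s
    R = R_kpc * kpc                            # m
    rho_SI = v**2 / (4 * math.pi * G * R**2)   # kg/m^3
    return rho_SI * 1e-6 / GeV_kg              # GeV/cm^3

print(f"v = 100 km/s -> rho_S = {rho_local(100):.2f} GeV/cm^3")
print(f"v = 220 km/s -> rho_S = {rho_local(220):.2f} GeV/cm^3")
```

With v ~ 100 km/s one recovers the quoted 0.1 GeV/cm³; with the standard local circular speed of about 220 km/s the same formula lands near 0.5 GeV/cm³, inside the 0.2-0.8 GeV/cm³ range cited above.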

Simulations predict a DM distribution given by the sum of a smooth halo component and an additional clumpy one, with total masses roughly of the same order of magnitude. Hereafter we will assume for the mass of the Milky Way

M_{MW} = M_{h} + M_{cl},

where M_{h} and M_{cl} denote the total mass contained in the host galactic halo and in the substructure (subhalo) distribution, respectively. The relative normalization is fixed by imposing that the total mass in subhaloes with masses between 10^{n_{1′}} and 10^{n_{2′}} solar masses amounts to 10% of M_{MW}. Current numerical simulations can only resolve clumps above a minimum mass scale, but for WIMP particles clumps down to about 10^{-6} solar masses are expected; we will consider a clump mass range extending down to this value. Finally, to fully characterize the subhalo population, we will assume a mass distribution dN/dm ∝ m^{-2} and that the clumps are spatially distributed following the NFW profile of the main halo, i.e. with a mass spectrum number density of subhaloes, in galactocentric coordinates (r, m), given by

dn_{cl}/dm (r, m) = A m^{-2} ρ_{NFW}(r),

where A is a dimensional normalization constant. Recent results show that the mass distribution seems to converge to a mass index of 2.0 rather than 1.9. In any case, with the minimum mass scale adopted here, a mass index of 2.0 or 1.9 produces only a minor change in the following results. A more realistic clump distribution should take into account the tidal disruption of clumps near the galactic center. Also, numerical simulations suggest that the radial distribution could be somewhat anti-biased with respect to the host halo profile.

However, with our conservative assumptions the host halo dominates the DM annihilation signal out to large distances from the galactic center, so that the details of the clump distribution have only a slight influence on the final results.

Following the previous assumptions, the total mass in DM clumps with masses between m_{1} and m_{2} turns out to be

where c_{h} denotes the host halo concentration, while their number is

In what follows we will determine A by imposing the condition M_{cl}(10^{n_{1′}}, 10^{n_{2′}}) = p_{1′,2′} M_{MW}, p_{1′,2′} being the mass fraction in the range 10^{n_{1′}}–10^{n_{2′}} solar masses. This means that

So

and

Following [Diemand:2005] we will assume n_{1′}=7, n_{2′}=10 and p_{1′,2′}=10%. Using these values we can calculate the mass of the entire clump distribution:
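The bookkeeping behind this normalization can be sketched in a few lines. Assumptions made explicit here: a dN/dm ∝ m^{-2} spectrum (so equal clump mass per logarithmic mass interval), the reference range 10^7 to 10^10 solar masses holding 10% of M_MW, and a full clump range extending down to 10^{-6} solar masses (the WIMP free-streaming scale mentioned above):

```python
import math

# For dN/dm ~ m^-2 the mass per logarithmic interval is constant, so the mass
# fraction in [m1, m2] scales as log(m2/m1). Normalize to 10% of M_MW in the
# (assumed) reference range 1e7 - 1e10 Msun.
def mass_fraction(m1, m2, p_ref=0.10, m1_ref=1e7, m2_ref=1e10):
    """Mass fraction of M_MW in clumps between m1 and m2 (Msun)."""
    return p_ref * math.log(m2 / m1) / math.log(m2_ref / m1_ref)

# Extrapolate over the full (assumed) clump range 1e-6 - 1e10 Msun:
total = mass_fraction(1e-6, 1e10)
print(f"Total clump mass fraction: {total:.2f} of M_MW")
```

The extrapolated total comes out at roughly half of M_MW, consistent with the earlier statement that the smooth and clumpy components have total masses of the same order of magnitude.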

while for the number of these clumps we obtain

Finally, by using the previous constraints one can fix the values of the free parameters r_{0} and ρ_{0}. Thus we solve

jointly with the equation defining the virial mass:

hence obtaining r_{0}=14.0 kpc and ρ_{0}=0.572 GeV c^{-2} cm^{-3}. A further piece of information is required, namely how the DM is distributed inside the clumps themselves. We will assume that each clump follows an NFW profile like the main halo, with r_{cl} and ρ_{cl} replacing the corresponding quantities in the NFW profile equation. However, for a full characterization of a clump, further information on its concentration c_{cl} is required. Unfortunately, numerical simulations are not completely helpful in this case, since we require information about the structure of clumps with masses far below the current numerical resolution. Analytical models are thus required. In the current cosmological scenario structures formed hierarchically, via gravitational collapse, with smaller structures forming first. Thus, naively, since the smallest clumps formed when the universe was denser, a reasonable expectation is c_{cl} ∝ (1 + z_{f}), where z_{f} is the clump formation redshift. Following the model of [Bullock 1999] we will thus assume a concentration scaling as a power law of the clump mass, with normalization c_{1}=38.2 and slope α=0.0392.
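As a consistency check, the fitted halo parameters should reproduce an acceptable local density. A minimal sketch evaluating the NFW profile at the solar distance with the values just obtained:

```python
def nfw_density(r_kpc, r0=14.0, rho0=0.572):
    """NFW profile rho(r) = rho0 / [(r/r0)(1 + r/r0)^2], in GeV/cm^3,
    with the fitted Milky Way parameters r0 = 14 kpc, rho0 = 0.572 GeV/cm^3."""
    x = r_kpc / r0
    return rho0 / (x * (1.0 + x) ** 2)

rho_sun = nfw_density(8.5)   # local density at the solar distance R_S = 8.5 kpc
print(f"rho(R_S) = {rho_sun:.2f} GeV/cm^3")
```

The result, about 0.36 GeV/cm³, sits comfortably inside the 0.2-0.8 GeV/cm³ range of acceptable local densities quoted earlier.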

As we saw in the previous transparencies, the matter density in the universe points to a value larger than the maximal value provided by baryons alone, according to Big Bang nucleosynthesis (BBN) and confirmed by WMAP and now Planck data. The need for non-baryonic dark matter is therefore striking. An important task of cosmology and particle physics is to produce viable non-baryonic candidates. An alternative approach to the introduction of new particles in the theory could obviously be the modification of the gravitational theory itself, but it has turned out to be very difficult to modify gravity on the various length scales where dark matter manifests its presence. So far, only the flatness of galaxy rotation curves seems to be easily explainable by introducing violations of Newton's laws. The topic of this section will therefore be the theoretical justification of the now common belief that a huge amount of non-standard particles exists. There is also a fascinating coincidence between the characteristics these particles must possess to solve the dark matter problem and those naturally exhibited by the most plausible extensions of the Standard Model of particle physics. To understand how this happens we will first approach the problem from a cosmological point of view. We will therefore introduce a new particle species into the primordial plasma of particles constituting the early universe and follow the evolution of its density during the expansion that followed the Big Bang.

As we have already discussed in previous lessons, we can describe the early universe as a thermal bath of particles at equilibrium, due to the high rate of their mutual interactions. However, these interactions have to be frequent enough to counteract the expansion, which would otherwise drive each particle species out of equilibrium. This is the intuitive interpretation we can give to the Boltzmann transport equation,

dn/dt + 3 H n = -⟨σ_{A} v⟩ (n² - n_{eq}²),

describing the evolution of the number density n(t) of the particular species we are considering. H is the Hubble parameter (the expansion rate) and ⟨σ_{A} v⟩ is the thermally averaged total cross section for annihilation into lighter particles times the relative velocity v. The reason why we consider only lighter particles is that in what follows we will refer to massive particles with non-relativistic velocities; we can therefore consider them practically at rest with respect to their annihilation products. Relativistic dark matter particles, in fact, do not seem able to drive the primordial inhomogeneities of the baryonic matter distribution towards the structures we observe today. So we will restrict our study to non-relativistic particles, often called *Cold Dark Matter* (CDM). Finally, n_{eq} is the number density at thermal equilibrium; for a massive particle it can be written as the integral of a Maxwell–Boltzmann distribution:

n_{eq} = g (m T / 2π)^{3/2} e^{-m/T},

g being the number of internal degrees of freedom of the particle. For example, for a spin-1/2 particle g = 2, corresponding to the two possible values of the spin.

The Boltzmann transport equation can be easily interpreted in this way:

1) The -3Hn term represents the reduction of the particle number density due to the expansion. In fact, neglecting the interactions (σ_{A}=0), the Boltzmann equation reduces to

dn/dt = -3 H n.

Remembering that H = ȧ/a, we obtain

dn/n = -3 da/a,

which leads us to the expected result, n ∝ a^{-3}, namely that the number density n decreases only because of the expansion.
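This dilution law is easy to verify numerically. A minimal sketch, assuming for concreteness a matter-dominated toy expansion a(t) = t^{2/3} (so H = 2/(3t)) and a simple Euler integration of dn/dt = -3Hn:

```python
# Numerical check that, with sigma_A = 0, the Boltzmann equation dn/dt = -3 H n
# gives n ~ a^-3. Toy expansion: a(t) = t^(2/3), hence H = (2/3)/t.
def dilute(t0=1.0, t1=10.0, n0=1.0, steps=500_000):
    dt = (t1 - t0) / steps
    n, t = n0, t0
    for _ in range(steps):
        H = (2.0 / 3.0) / t
        n += -3.0 * H * n * dt   # Euler step of dn/dt = -3 H n
        t += dt
    return n

n_final = dilute()
expected = 1.0 * (10.0 ** (2.0 / 3.0)) ** -3   # n0 * (a1/a0)^-3 = 10^-2
print(n_final, expected)
```

The integrated density matches (a₁/a₀)^{-3} to the accuracy of the Euler step, confirming that without interactions the comoving number is conserved.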

2) The ⟨σ_{A} v⟩ n² term is the rate at which particles annihilate, reducing n.

3) Finally, the ⟨σ_{A} v⟩ n_{eq}² term represents the creation of new particles by the inverse process.

The Boltzmann transport equation does not admit analytic solutions. Nevertheless, very accurate approximate solutions can be obtained. We will not report here the way they are derived, because this is very well explained elsewhere (see for example the book by Kolb & Turner). We will only try to make our previous simplified interpretation a little more quantitative.

First of all, the Boltzmann equation can be recast in a more convenient form as

dY/dx = -(x s ⟨σ_{A} v⟩ / H(m)) (Y² - Y_{eq}²),

in terms of the new variables

Y ≡ n/s,  x ≡ m/T,

where s is the universe's entropy density and T the equilibrium temperature. Note that we are using temperature instead of time to describe the evolution. The quantity Γ_{A} ≡ n ⟨σ_{A} v⟩ is the *annihilation rate*, and the ratio Γ_{A}/H defines the decoupling time/temperature of our dark matter particles from the remaining species. An approximate estimate of this epoch can be obtained by imposing the condition Γ_{A}(T)=H(T), and leads to

x_{f} = m/T_{f} ≈ 20.

After this decoupling time the number density ceases to decrease following the equilibrium law, and its comoving value freezes, approximately, at its current value. Therefore the resulting cosmological abundance can be calculated as its value at this decoupling time, and an order-of-magnitude estimate gives

Ω_{χ} h² ≈ 3 × 10^{-27} cm³ s^{-1} / ⟨σ_{A} v⟩.

An interesting consequence of the last equation is that massive particles which have interactions of the order of the weak interactions naturally give contributions to the matter density of the universe of order unity, and therefore can account for the missing mass. The generic name for such a dark matter candidate is a WIMP (Weakly Interacting Massive Particle).

Under the assumption that these WIMPs constitute the dark matter, we can use the last result to estimate their annihilation cross section. In fact, according to WMAP measurements, we can assume Ω_{χ} h² ≈ 0.1, which leads to

⟨σ_{A} v⟩ ≈ 3 × 10^{-26} cm³ s^{-1}.
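Inverting the standard order-of-magnitude relic relation Ω h² ≈ 3×10^{-27} cm³ s^{-1} / ⟨σ_A v⟩ makes the "WIMP miracle" numerically explicit:

```python
# Invert the order-of-magnitude relic relation
#   Omega h^2 ~ 3e-27 cm^3 s^-1 / <sigma v>
# to get the annihilation cross section required by the observed abundance.
def sigma_v_required(omega_h2):
    """Thermally averaged annihilation cross section (cm^3/s) for a given relic abundance."""
    return 3e-27 / omega_h2

print(f"Omega h^2 = 0.1 -> <sigma v> ~ {sigma_v_required(0.1):.1e} cm^3/s")
```

The resulting 3×10^{-26} cm³/s is precisely the size of a typical weak-interaction cross section for a particle of electroweak-scale mass, which is the coincidence the text highlights.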

We conclude this part by recalling that, assuming a thermal production process in the early universe, there is an upper limit to the mass of a stable relic particle. This comes about because unitarity precludes the annihilation cross section of particles of mass m, spin J and relative velocity (in the center-of-mass frame) v from being larger than

σ_{A} v ≤ 4π (2J + 1) / (m² v).

Using the estimated v at freeze-out, one finds that m cannot exceed a value of around 340 TeV. The most favored heavy dark matter candidate, the lightest supersymmetric particle, always has a mass far below this limit in the minimal models.
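A rough numerical version of this bound: demand that a unitarity-saturating cross section, σv ~ 4π(2J+1)/(m²v), still be as large as the 3×10^{-26} cm³/s required by the relic abundance. The freeze-out velocity v ~ 0.3c used below is an assumed typical value, and the simple estimate lands within a factor of a few of the careful 340 TeV result:

```python
import math

# Order-of-magnitude unitarity mass bound: the maximal s-wave cross section
# sigma v <~ 4 pi (2J+1) / (m^2 v) (natural units) must still reach the relic
# value <sigma v> ~ 3e-26 cm^3/s. v ~ 0.3c at freeze-out is an assumption.
HBARC2_C = 1.167e-17   # conversion factor: 1 GeV^-2 of sigma*v -> cm^3/s

def unitarity_mass_bound(sigma_v_needed=3e-26, v=0.3, J=0):
    """Largest relic mass (TeV) compatible with unitarity and thermal production."""
    m2 = 4 * math.pi * (2 * J + 1) * HBARC2_C / (sigma_v_needed * v)  # GeV^2
    return math.sqrt(m2) / 1e3   # TeV

m_max = unitarity_mass_bound()
print(f"m <~ {m_max:.0f} TeV (the careful treatment quoted in the text gives ~340 TeV)")
```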

There are many situations in which the standard method of calculating the abundance of a thermal relic fails.

**- Coannihilation - **This case occurs when the relic particle is the lightest of a set of similar particles whose masses are nearly degenerate. In this case the relic abundance of the lightest particle is determined not only by its annihilation cross section, but also by the annihilation of heavier particles, which will later decay into the lightest.

**- Annihilation into forbidden channel - **This case concerns annihilation into particles which are more massive than the relic particle. In the simplified analysis of the previous section this was considered simply as kinematically forbidden, but it can be shown that if the heavier particles are only 5 – 15 % more massive, these channels can dominate the annihilation cross section and determine the relic abundance.

**- Annihilation near a pole of the cross section - **This case occurs when the annihilation takes place near a pole in the cross section. This happens, for example in Z^{0}- exchange annihilation when the mass of the relic particle is near m_{Z}/2. A pole can also occur when the annihilating dark matter particle is nearly one-half the mass of a resonance such as J/ψ or η.

**- Non–thermal relics - **Although thermal production of stable particles is a generic and unavoidable mechanism in the Big Bang scenario, there are several additional processes possible. For instance, some very heavy particles were perhaps never in thermal equilibrium. Non-thermal production may for example occur near cosmic strings and other defects. Near the end of a period of early inflation, several mechanisms related either to the inflaton field or to the strong gravity present at that epoch could contribute to nonthermal production.

**- Charge conjugation asymmetry - **The most probable dark matter candidates, the supersymmetric neutralino (see next transparencies) and the lightest Kaluza-Klein particle, are both self-conjugate (it is quite obvious that dark matter has to be electrically neutral). If, however, the particle is different from its antiparticle, there may exist an asymmetry similar to the one we know must have existed for baryons, which would otherwise have been quickly and completely annihilated by antibaryons in the early universe. Such an asymmetry can make the relic number density of the dark matter particles higher than in the perfectly symmetric case. This may allow for a relic density higher than the estimate given by the relic-density equation even if the annihilation cross section is large.

**- Evading unitarity -** There may be a possibility to evade the unitarity bound shown previously and accept even extremely heavy particles as dark matter candidates if, for instance, they are not absolutely stable (so that the expression for the relic density does not apply), or if the production mechanism is non-thermal (wimpzillas, cryptons, etc.).

Due to the large number of phenomena that can alter the simple estimate of the annihilation cross section computed above, we will treat ⟨σ_{A} v⟩ as a free parameter. We will do the same with the dark matter particle mass.

However, to fully calculate the radio/gamma signal produced by annihilation for indirect DM searches we need to specify a particular particle. This will be the occasion to briefly introduce one of the most attractive solutions to the problem: the possibility that Supersymmetry not only represents the right way to extend the Standard Model of particle physics but, also, that its lightest particle, in many cases a neutralino, is the particle we are looking for.

A review of supersymmetric theories is obviously beyond the scope of a course devoted to astroparticle physics. Therefore, in the following transparencies, we will simply point the reader to the main reasons that led to its formulation. We will also introduce the unavoidable concepts and definitions needed to present the dark matter candidate, the so-called neutralino. For further discussion of supersymmetry, we refer the interested reader to more complete references.

As we know, in the Standard Model of particle physics there exists a fundamental distinction between bosons and fermions: bosons are the mediators of interactions or fundamental scalars, while fermions are the constituents of matter. One of the main achievements of supersymmetry is to provide a unified picture of matter and interactions.

We know that the elementary fermions constitute the irreducible representations of the Poincaré group, while bosons are introduced into the theory according to the gauge principle and the Higgs mechanism. Hence the entire Standard Model particle spectrum comes out of the assumption of the invariance of the theory under a symmetry group which is the product of the Poincaré group and the internal/gauge group. Therefore a possible way to wipe out the previous dichotomy is to ask whether a Lie group exists mixing internal and space-time symmetries. Early attempts to find such a group had to face the limitations imposed by the so-called *no-go* theorem of Coleman and Mandula. Such limitations were finally avoided by introducing new fermionic generators satisfying anticommutation relations instead of the commutation relations usually assumed in quantum theory. Such mixed algebras were called *graded* Lie algebras. They include generators Q that change fermions into bosons and vice versa:

Q |f⟩ = |b⟩,  Q |b⟩ = |f⟩,

where “f” stands for fermion and “b” for boson, respectively.

Due to their fermionic nature, the operators Q carry spin 1/2, which implies that supersymmetry must be a space-time symmetry. The question then arises of how to extend the Poincaré group of translations and Lorentz transformations to include this new boson/fermion symmetry. The structure of such a group is highly restricted by the Haag–Lopuszanski–Sohnius extension of the Coleman–Mandula theorem [Haag 1974]. For realistic theories, the operators Q, which we choose by convention to be Majorana spinors, must satisfy the algebra

{Q_{a}, Q̄_{b}} = 2 (γ^{μ})_{ab} P_{μ},  [Q_{a}, P_{μ}] = 0,  [Q_{a}, M^{μν}] = (σ^{μν})_{a}{}^{b} Q_{b},

where

σ^{μν} = (i/4) [γ^{μ}, γ^{ν}]

play the role of the structure constants of the theory. There are also other important reasons for introducing supersymmetry. Very natural is the solution it provides to the *hierarchy problem*, which has to do with the enormous difference between the electroweak and Planck energy scales. The problem arises in the radiative corrections to the mass of the Higgs boson. It is well known that scalar masses receive radiative corrections that grow quadratically with energy, while fermion masses grow only logarithmically. Therefore, when we consider the 1-loop radiative corrections to the Higgs boson mass, we find

δm_{H}² ∝ Λ²,

where Λ is a cut-off energy at which new physics is expected to intervene. The Higgs mass is expected to be of the order of the electroweak scale, that is to say m_{H} ≈ 100 GeV, while its radiative correction is driven up towards the cut-off scale if Λ is of the order of the Planck mass.

This clearly destroys the stability of the electroweak scale.

A possible solution to this problem is to assume the existence of new particles with similar masses but opposite statistics. Then, since fermion-loop contributions have the opposite sign with respect to the corresponding bosonic loops, at the 1-loop level one gets

δm_{H}² ∝ (m_{B}² - m_{F}²),

so that the quadratic divergence of the Higgs mass is cancelled, and the cancellation can be shown to persist at all orders of perturbation theory. The algebra of supersymmetry we introduced naturally guarantees the existence of new particles, associated with all particles of the Standard Model, with the same mass but opposite statistics; it therefore gives a natural solution to the hierarchy problem.

A third, fundamental reason for introducing Supersymmetry comes from Grand Unification Theories, which predict the unification of the three gauge couplings below the Planck scale. It is well known that this does not happen for the Standard Model alone (upper Figure) while, once Supersymmetry is taken into account, it happens at a unification scale of about 2×10^{16} GeV (lower Figure).

In the following we will only consider the simplest possible way to extend the Standard Model to a supersymmetric theory. This extension constitutes the so-called Minimal Supersymmetric Standard Model (MSSM), a theory containing all the known fields of the Standard Model, an extra Higgs doublet, and the partners of the ordinary particles required to form the supersymmetric multiplets. The MSSM is clearly assumed to be invariant under the gauge group of the Standard Model, SU(3)_{C} × SU(2)_{L} × U(1)_{Y}; it is required to be renormalizable, and it is constrained by an additional symmetry, R-parity, necessary to prevent lepton- and baryon-number violation in the interactions. Despite the fact that the MSSM constitutes, as stated, the simplest supersymmetric description of elementary particle interactions, it is nevertheless an extremely complicated theory, defined by a great number of free parameters. The only fast way to deduce the Lagrangian describing the MSSM makes extensive use of the *superspace formalism*, so we will not reproduce those results here; the interested reader is referred to the extensive existing literature. Let us only note that superspace plays in supersymmetric theories the same role played by Minkowski space-time in Special Relativity, in the sense that it constitutes the right formalism to make supersymmetry invariance recognizable at sight.

Given our interest in dark matter, the new ingredient of the theory we are most interested in is R-parity. It is a discrete symmetry whose action on the component fields of the theory is

R = (-1)^{3B + L + 2S},

where B and L are, respectively, the baryon and lepton number operators, while S is the spin. It is easily seen that R is always equal to +1 for the standard particles, while it takes the value -1 for the supersymmetric particles, due to their opposite statistics. Consider for example an electron. In this case L=1, B=0, and S=1/2, therefore R=+1. A hypothetical spin-0 partner of the electron would instead have R=-1. The same reasoning applies to all Standard Model particles, as the reader can easily check. The assumption that R-parity is conserved as a multiplicative quantum number introduces an important rule that prevents the decay of the lightest supersymmetric particle (LSP): any decay of a particle with R=-1 must contain an odd number of non-standard particles among its products. The LSP, having no lighter R=-1 state to decay into, is therefore stable, and the only way it can change its number is by annihilating in pairs. This is the reason why Supersymmetry is so important in the cosmology of dark matter: it provides, in a very natural way, a viable dark matter candidate.
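The reader's check can be automated. A minimal sketch of R = (-1)^{3B+L+2S} on a few example states (the sparticle entries use the usual quantum-number assignments):

```python
# R-parity R = (-1)^(3B + L + 2S), checked on a few example states.
def r_parity(B, L, S):
    """R-parity of a state with baryon number B, lepton number L, spin S."""
    return (-1) ** int(round(3 * B + L + 2 * S))

electron   = r_parity(B=0, L=1, S=0.5)       # exponent 2 -> +1
selectron  = r_parity(B=0, L=1, S=0.0)       # spin-0 partner, exponent 1 -> -1
quark      = r_parity(B=1 / 3, L=0, S=0.5)   # exponent 2 -> +1
neutralino = r_parity(B=0, L=0, S=0.5)       # exponent 1 -> -1
print(electron, selectron, quark, neutralino)
```

Standard particles always land on an even exponent (R = +1) and their superpartners on an odd one (R = -1), which is exactly the multiplicative rule that stabilizes the LSP.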

Once the degrees of freedom of the Standard Model are doubled, by introducing a fermionic degree of freedom for each boson of the theory and two bosons (one for the left helicity and one for the right helicity) for each fermion, the resulting particle spectrum appears very rich. It is summarized in the table reported in the following transparency. The most exotic features come from the Higgs sector. After spontaneous electroweak symmetry breaking the MSSM possesses five Higgs bosons, three neutral and two charged: h^{0}, H^{0}, A^{0}, H^{+}, H^{-}. Also present are the superpartners of the Higgs interaction eigenstates, the higgsinos, and the superpartners of the electroweak gauge bosons, the bino and the winos. The charged higgsinos and winos share the same quantum numbers, and therefore mix to form the mass eigenstates known as *charginos*. In the same way the neutral higgsinos and the neutral gaugino superpartners (all fermions) mix to form the four *neutralinos* (ordered by increasing mass). The importance of the neutralinos basically resides in the fact that the lightest of them, simply denoted χ and called *the neutralino*, is the LSP in many realizations (read: parameter choices) of the MSSM.

In the basis (B̃, W̃³, H̃₁⁰, H̃₂⁰), the neutralino mass matrix can be expressed (in a common sign convention) as

M_N =
( M_1            0              -m_Z cosβ sinθ_W    m_Z sinβ sinθ_W )
( 0              M_2             m_Z cosβ cosθ_W   -m_Z sinβ cosθ_W )
( -m_Z cosβ sinθ_W   m_Z cosβ cosθ_W    0          -μ )
(  m_Z sinβ sinθ_W  -m_Z sinβ cosθ_W   -μ           0 )

The MSSM parameters entering the neutralino sector are therefore:

- M_{1}, a bino mass parameter;

- M_{2}, a wino mass parameter;

- μ, the so-called higgsino mass term;

- tan β, the ratio of the vacuum expectation values of the two Higgs doublets.

Finally, as is usually done in the literature devoted to the cosmological implications of the MSSM, we will assume a relation between M_{1} and M_{2} that comes from Grand Unification Theory:

M_{1} = (5/3) tan²θ_{W} M_{2} ≈ 0.5 M_{2}.

This allows us to reduce the neutralino MSSM parameters to three. Also, the neutralino mass is quite insensitive to tan β, so we can fix a value for it, as in the Figure, and only care about M_{2} and μ.

Writing the lightest neutralino as

χ = N₁₁ B̃ + N₁₂ W̃³ + N₁₃ H̃₁⁰ + N₁₄ H̃₂⁰,

we can define the gaugino fraction, f_{G}, and the higgsino fraction, f_{H}, as

f_{G} = |N₁₁|² + |N₁₂|²,  f_{H} = |N₁₃|² + |N₁₄|².
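These fractions are straightforward to compute numerically. A minimal sketch (pure Python, no external libraries): build the tree-level neutralino mass matrix in one common sign convention for a sample MSSM point, diagonalize it with Jacobi rotations, and read off the composition of the lightest state. The parameter point itself is an illustrative choice, not taken from the text:

```python
import math

MZ, SW2 = 91.19, 0.231   # Z mass (GeV) and sin^2(theta_W)

def neutralino_matrix(M1, M2, mu, tan_beta):
    """Tree-level neutralino mass matrix in the (bino, wino, H1, H2) basis."""
    sw, cw = math.sqrt(SW2), math.sqrt(1 - SW2)
    b = math.atan(tan_beta)
    cb, sb = math.cos(b), math.sin(b)
    return [[M1, 0.0, -MZ * cb * sw,  MZ * sb * sw],
            [0.0, M2,  MZ * cb * cw, -MZ * sb * cw],
            [-MZ * cb * sw,  MZ * cb * cw, 0.0, -mu],
            [ MZ * sb * sw, -MZ * sb * cw, -mu, 0.0]]

def jacobi_eigen(A, sweeps=60):
    """Eigen-decomposition of a real symmetric matrix by Jacobi rotations."""
    n = len(A)
    A = [row[:] for row in A]
    V = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(sweeps):
        # pick the largest off-diagonal element and rotate it away
        p, q = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                   key=lambda ij: abs(A[ij[0]][ij[1]]))
        if abs(A[p][q]) < 1e-12:
            break
        theta = 0.5 * math.atan2(2 * A[p][q], A[p][p] - A[q][q])
        c, s = math.cos(theta), math.sin(theta)
        for k in range(n):                       # A <- R^T A
            Apk, Aqk = A[p][k], A[q][k]
            A[p][k], A[q][k] = c * Apk + s * Aqk, -s * Apk + c * Aqk
        for k in range(n):                       # A <- A R, V <- V R
            Akp, Akq = A[k][p], A[k][q]
            A[k][p], A[k][q] = c * Akp + s * Akq, -s * Akp + c * Akq
            Vkp, Vkq = V[k][p], V[k][q]
            V[k][p], V[k][q] = c * Vkp + s * Vkq, -s * Vkp + c * Vkq
    return [A[i][i] for i in range(n)], V

# Sample point: M2, M1 heavy and mu light -> expect a higgsino-like LSP.
vals, vecs = jacobi_eigen(neutralino_matrix(M1=500.0, M2=1000.0, mu=100.0, tan_beta=10.0))
i0 = min(range(4), key=lambda i: abs(vals[i]))   # lightest |eigenvalue|
N = [vecs[k][i0] for k in range(4)]              # its composition (N11..N14)
f_G = N[0] ** 2 + N[1] ** 2                      # gaugino fraction
f_H = N[2] ** 2 + N[3] ** 2                      # higgsino fraction
print(f"m_chi ~ {abs(vals[i0]):.0f} GeV, f_G = {f_G:.3f}, f_H = {f_H:.3f}")
```

For this point the lightest state sits near |μ| and is almost a pure higgsino, illustrating the generic behavior discussed next.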

The reason why we define these quantities is that the annihilation and scattering properties of the neutralino are greatly simplified when expressed in terms of these fractions, while they appear extremely involved when described in terms of the pure MSSM parameters. A plot will give a good example of this.

The Figure shows contour plots of the neutralino mass (dashed lines), each labeled with the corresponding mass in GeV, and of its gaugino fraction (continuous lines). Masses from 50 GeV up to 1600 GeV are represented. The figure clearly shows that, randomly choosing a pair (M_{2}, μ), the probability of obtaining a mixed neutralino, f_{G} ≈ f_{H} ≈ 1/2, is quite low.

The most probable situations correspond in fact to an almost pure higgsino or an almost pure gaugino. This is a very fortunate circumstance, because explicit calculations of the annihilation cross section [Fornengo], and therefore of the neutralino cosmological abundance, show that mixed neutralinos have no chance to contribute significantly to the dark matter abundance. On the contrary, low-mass higgsinos and high-mass gauginos are perfect candidates.

All this suggests that the attention devoted in the literature to the neutralino is definitely well motivated.

There are two basic ways to detect the WIMP (Weakly Interacting Massive Particle) dark matter present in the halo of our Galaxy.

The first method, **direct detection**, relies on the possibility of detecting the recoil energy of the nuclei of a low-background detector as a consequence of their elastic scattering with a WIMP.

The second method, **indirect detection**, exploits the possibility of detecting the products of the annihilation of DM particles, either in the galactic halo or in celestial bodies (namely the Earth and the Sun) where WIMPs may have accumulated by gravitational capture. In the latter case, the signal consists of a flux of neutrinos emitted from the central regions of the body, and the typical observable is a flux of upgoing muons produced by the charged-current conversion of the muon-neutrino component of the signal. In the case of DM annihilation in the galactic halo there are more possibilities: the signal can consist of gamma rays, X-rays, radio emission, neutrinos, and antimatter (positrons, antiprotons and antideuterons).

From the experimental side, the searches for DM signals involve many different techniques, ranging from low-background underground detectors, to neutrino telescopes, antimatter and gamma-ray detectors in space, and air-Cherenkov detectors.

*1*. Résumé of standard cosmology in FLRW

*2*. Thermodynamics of the expanding universe

*5*. Baryogenesis

*6*. Dark Matter

*7*. Primordial Nucleosynthesis: theory and experimental data

*8*. Theory of classical cosmological perturbations

*9*. Theory of Quantum Cosmological Perturbations

*10*. A Brief Introduction to Cosmic Microwave Background Anisotropy Formation

*11*. Cosmic Rays - I

*12*. Cosmic Rays - II

"Campus Virtuale" project of the Università degli Studi di Napoli Federico II, realized with co-funding from the European Union. Axis V - Information Society - Operational Objective 5.1, e-Government and e-Inclusion