Luigi Paura » 12. Decoder for Linear Codes


Decoder for Linear Codes

The purpose of block coding is to increase the Euclidean distance between the transmitted signals, thus improving the error performance P(e) for a given transmitted power, at the price of a bandwidth expansion.

To compare codes we can employ Hamming distances, since the Euclidean distances depend on them.

There are two alternatives:

  1. Soft-decision decoding (ML receiver).
  2. Hard-decision decoding.

Soft-Decision Decoding (1/2)

Assume that we employ an antipodal signaling scheme.

The code word ci=(ci1,ci2,…,cin) is mapped into the sequence:

s_i(t)=\sum_{k=1}^n \psi_{ik}(t - (k-1)T)

\text{where}~~ \psi_{ik}(t)=\left\{ \begin{array}{rl} \psi(t) & \text{if}~~c_{ik}=1 \\ -\psi(t) & \text{if}~~c_{ik}=0 \end{array} \right.

with ψ(t) a signal of duration T and energy E.
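
As a small illustrative sketch (the function name and the ±1 amplitude convention are assumptions, not from the slides), the sequence of amplitudes multiplying ψ(t − (k−1)T) can be computed as:

def antipodal_amplitudes(codeword):
    # +psi(t) is sent for a 1 bit, -psi(t) for a 0 bit.
    return [1 if bit == 1 else -1 for bit in codeword]

print(antipodal_amplitudes([1, 0, 1, 1, 0, 0, 1]))  # [1, -1, 1, 1, -1, -1, 1]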

Soft-Decision Decoding (2/2)


Hard-Decision Decoding (1/2)

The hard demodulator makes binary decisions on the components of the received vector r and then delivers the received word to the decoder, which estimates the code word by computing the distance between the received word and all M possible code words.

That is, it finds the code word that is closest to the received word in the Hamming-distance sense.

Hard-Decision Decoding (2/2)

In hard-decision decoding, three basic steps can be recognized:

  1. Demodulation (r(t) passes through the matched filter and is then sampled every T seconds).
  2. Threshold decision device (quantization with two levels).
  3. Decoding by finding the code word that is closest to the received one in the Hamming-distance sense.

Decoding is realized on the basis of the standard array, which identifies the 2^k decision regions; a minimal brute-force sketch of step 3 follows.
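
A minimal sketch of step 3 in Python (the toy codebook and function names are illustrative assumptions, not from the course):

def hamming_distance(a, b):
    return sum(x != y for x, y in zip(a, b))

def min_distance_decode(received, codebook):
    # Step 3: pick the code word closest to the received word in Hamming distance.
    return min(codebook, key=lambda c: hamming_distance(received, c))

# Hypothetical linear toy codebook, for illustration only.
codebook = [(0, 0, 0, 0, 0), (1, 1, 1, 0, 0), (0, 0, 1, 1, 1), (1, 1, 0, 1, 1)]
print(min_distance_decode((1, 1, 1, 1, 0), codebook))  # -> (1, 1, 1, 0, 0)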

Hard-Decision Decoding: Standard Array

The construction of the standard array allows us to divide the observation space, constituted by the 2^n binary vectors of length n, into 2^k decision regions.

A standard array is generated in the following way:

  • The first row consists of all the code words, starting with the all-zero code word.
  • The second row is built by putting in its first position a minimum-weight sequence of length n that does not appear in the first row; the row is completed by adding (modulo 2) this sequence to the code words in the corresponding columns.
  • The third row is constructed in the same way: a minimum-weight sequence that is not yet in the array goes in the first position, and the subsequent positions contain the sum of this sequence with the corresponding code words (see the sketch after this list).
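
A minimal sketch of this construction, assuming the codebook is listed with the all-zero word first (the function name is an illustrative choice):

from itertools import product

def standard_array(codebook, n):
    # First row: the code itself, with the all-zero word in the first column.
    rows = [list(codebook)]
    used = set(codebook)
    # Scan all 2^n binary words of length n in order of increasing weight.
    for leader in sorted(product((0, 1), repeat=n), key=sum):
        if leader in used:
            continue
        # New coset: the leader XORed with every code word, column by column.
        coset = [tuple(l ^ c for l, c in zip(leader, cw)) for cw in codebook]
        rows.append(coset)
        used.update(coset)
    return rows  # 2^(n-k) cosets of 2^k words each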

Standard Array (1/5)

Each row of the standard array is called a coset, and the first element of each coset is called the coset leader.


Standard Array (2/5)

Standard Array Properties

Theorem 1. All elements of the standard array are distinct.

Proof: Let us assume two elements of the array are equal.

This can happen in two cases.

1. The elements belong to the same row (coset). In this case we have:

e_l \oplus c_i = e_l \oplus c_j \Rightarrow c_i = c_j, which is impossible because the two elements belong to different columns.

Standard Array (3/5)

2. The two elements belong to two different cosets:

e_l \oplus c_i = e_k \oplus c_j ~~\text{with}~~ l\neq k

\text{then}~~ e_l=e_k \oplus (c_i \oplus c_j)=e_k \oplus c_m \Rightarrow

e_l=e_k \oplus c_m \Rightarrow e_l ~~\text{and}~~ e_k belong to the same coset, which is impossible since by assumption l ≠ k.

Theorem 2: if y_1 and y_2 belong to the same coset, then

y_1H^t=y_2H^t

Proof:

y_1H^t=(e_l \oplus c_i)H^t=e_lH^t \oplus c_iH^t=e_lH^t=(e_l \oplus c_j)H^t=y_2H^t~~~(\text{since}~c_iH^t=c_jH^t=0)

Standard Array (4/5)

From this theorem we conclude that each coset can be uniquely identified by the product e_lH^t. We can define the syndrome as

yH^t=s

with s (the syndrome) an (n−k)-vector that depends only on the coset leader.

s = 0 \Leftrightarrow y \in \{c_1, ... , c_{2^k}\}

s \neq 0 \Leftrightarrow y \notin \{ c_1, ..., c_{2^k}\}

Standard Array (5/5)

Each coset can be identified uniquely by its syndrome s.

Decoding strategy (a minimal code sketch follows the steps):

  1. Calculate the syndrome: s = yH^t.
  2. Find the coset leader corresponding to s by using the syndrome table.
  3. Decode y as:

\hat c = y \oplus \hat e
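
A minimal sketch of the three steps for the (7,4) Hamming code; the particular systematic H below is an assumption (the form used in the course may differ):

import numpy as np

# One common parity-check matrix of the (7,4) Hamming code (assumed form).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def syndrome(y):
    return tuple(y @ H.T % 2)  # step 1: s = y H^t

# Step 2: syndrome table mapping each syndrome to its coset leader.
# For a Hamming code the leaders are the zero and single-bit error patterns.
table = {(0, 0, 0): np.zeros(7, dtype=int)}
for i in range(7):
    e = np.zeros(7, dtype=int)
    e[i] = 1
    table[syndrome(e)] = e

def decode(y):
    e_hat = table[syndrome(y)]  # step 2: look up the coset leader
    return (y + e_hat) % 2      # step 3: c_hat = y XOR e_hat

print(decode(np.array([0, 0, 0, 1, 0, 0, 0])))  # one flipped bit -> all-zero word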

RX structure with hard demodulator


Hard-Decision Decoding: Standard Array

Example. Hamming Code (7,4).


Soft-Decision Decoding (1/5)

  • (d_{ij}^E)^2=4d_{ij}^H E if the binary signaling scheme is antipodal (e.g. BPSK)
  • (d_{ij}^E)^2=2d_{ij}^H E if the binary signaling scheme is orthogonal (e.g. BFSK)
  • P_b(e)=Q\Biggl(\frac{d^E}{\sqrt{2N_0}}\Biggr) is the error probability for a binary modulation scheme, with (d^E)^2 = \int _0^T \bigl[s_0 (t) - s_1(t)\bigr]^2dt
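
For instance, with d_{min}^H = 3 (the (7,4) Hamming code used below) and antipodal signaling, the squared minimum Euclidean distance is:

(d_{min}^E)^2 = 4\,d_{min}^H\,E = 4 \times 3 \times E = 12E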


Soft-Decision Decoding (2/5)

P(\hat s_j |s_i)= \left\{ \begin{array}{ll} Q\Biggl(\sqrt{\frac{d_{ij}^H E}{N_0}}\Biggr) & \text{for orthogonal signaling} \\ Q\Biggl(\sqrt{\frac{2d_{ij}^H E}{N_0}}\Biggr) & \text{for antipodal signaling} \end{array} \right.

Since d_{ij}^H ≥ d_{min}^H and Q(·) is a decreasing function:

P(\hat s_j |s_i)\leq \left\{ \begin{array}{ll} Q\Bigl(\sqrt{d_{min}^H E/N_0}\Bigr) & \text{for orthogonal signaling} \\ Q\Bigl(\sqrt{2d_{min}^H E/N_0}\Bigr) & \text{for antipodal signaling} \end{array} \right.


Soft-Decision Decoding (3/5)

Using the Union Bound we obtain:

P(e)\leq \left\{ \begin{array}{ll} (M-1)\,Q\Bigl(\sqrt{d_{min}^H E/N_0}\Bigr) & \text{for orthogonal signaling} \\ (M-1)\,Q\Bigl(\sqrt{2d_{min}^H E/N_0}\Bigr) & \text{for antipodal signaling} \end{array} \right.

Soft-Decision Decoding (4/5)

Example: Compare the performance of an uncoded transmission system with that of a coded system using a (7,4) Hamming code, with R = 10 kbit/s, received power P = 10^{-6} W, N_0/2 = 10^{-11} W/Hz, and binary PSK modulation.

No coding

P_b(e)=Q\Biggl(\sqrt{\frac{2E_b}{N_0}}\Biggr)=Q\Biggl(\sqrt{\frac{2P}{RN_0}}\Biggr)=Q\Biggl(\sqrt{\frac{2 \times 10^{-6}}{10^4 \times 2 \times 10^{-11}}}\Biggr)=

=Q(\sqrt{10})\approx 7.86\times 10^{-4}\Rightarrow P^{SC}(e)=1-(1-P_b)^4\approx 3.1\times 10^{-3}
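
These numbers can be checked with a few lines of Python (a sketch, using Q(x) = erfc(x/√2)/2):

from math import erfc, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

P, R, N0 = 1e-6, 1e4, 2e-11     # received power, bit rate, noise density
Pb = Q(sqrt(2 * P / (R * N0)))  # = Q(sqrt(10)) ~ 7.86e-4
Psc = 1 - (1 - Pb) ** 4         # word error over 4 information bits ~ 3.1e-3
print(Pb, Psc)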

Soft-Decision Decoding (5/5)

Coding

Hamming code → d_{min}^H = 3

\frac{E}{N_0}= \frac{R_c E_b}{N_0}=R_c \frac{P}{RN_0}=\frac{4}{7} \times \frac{10}{2} = \frac{20}{7}

P^C(e)\leq 15\,Q\Biggl(\sqrt{\frac{2d_{min}^H E}{N_0}}\Biggr)=15\,Q\Biggl(\sqrt{\frac{3\times 40}{7}}\Biggr)\approx 2.6\times 10^{-4}

Using this code the error probability decreases by a factor of 12.
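
The coded figure can be checked the same way (a sketch; Q as in the previous snippet):

from math import erfc, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

E_over_N0 = (4 / 7) * 5               # Rc * Eb/N0 = (4/7) * (10/2) = 20/7
Pc = 15 * Q(sqrt(2 * 3 * E_over_N0))  # (M-1) Q(sqrt(2 d_min E/N0)) ~ 2.6e-4
print(Pc)                             # uncoded/coded ratio ~ 3.1e-3 / 2.6e-4 ~ 12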

Hard-Decision Decoding Performances (1/5)

In this case the cascade of modulator, AWGN channel, and hard demodulator can be modeled as a BSC with crossover probability:

p=P_b(e)=Q\bigl(d^E/\sqrt{2N_0}\bigr)

P(e|c_i)=P(e|c_0)\Rightarrow P(e)=P(e|c_0), because the code is linear and the conditional error probabilities depend only on the weight distribution

\{w_k\}_{k=1}^{M-1}

The correct-decision probability can be calculated as the probability that a correctable error pattern occurs (see the standard array); a short sketch follows.

It can also be estimated with the union bound.
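
A sketch of the correctable-pattern computation for the (7,4) Hamming code (a perfect single-error-correcting code, so the correctable patterns are exactly the coset leaders: the zero pattern and the 7 single-bit patterns); the crossover probability is the one derived in the example below:

from math import erfc, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

p = Q(sqrt(40 / 7))  # BSC crossover probability for the coded example, ~0.0084
# Correct decision <=> the channel produces a correctable error pattern.
P_correct = (1 - p) ** 7 + 7 * p * (1 - p) ** 6
print(1 - P_correct)  # P(e) ~ 1.5e-3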

Hard-Decision Decoding Performances (2/5)


Hard-Decision Decoding Performances (3/5)


Hard-Decision Decoding Performances (4/5)


Hard-Decision Decoding Performances (5/5)


Hard- and Soft-Decision Decoding Comparison

A hard-decision decoding scheme can achieve the same performance as the optimum one (the soft-decision scheme) at a cost of less than 2 dB in E_b/N_0 for common applications.

If we employ an 8-level quantizer instead of a binary one, the performance gap with respect to soft decision reduces to less than 0.1 dB in E_b/N_0.

Example (1/2)

Consider the previous example, now employing hard-decision decoding:

P_b=Q\Biggl(\sqrt{\frac{40}7}\Biggr)=Q(2.39)=0.0084

d_{min}=3

P_e\leq 15\sum_{i=2}^3 \left( \begin{array}{c} 3 \\ i \end{array}\right) P_b^i (1-P_b)^{3-i} \approx 3~\times~10^{-3}~~~\text{(union bound over the 15 competing code words)}

Example (2/2)

Hard decision, exact computation:

P_e = P(\text{a non-correctable error configuration occurs}) =

= P(\text{an error configuration of weight} > 1~\text{occurs}) =

= \sum_{i=2}^7 \left( \begin{array}{c} 7 \\ i \end{array} \right) P_b^i (1-P_b)^{7-i}\approx 21 P_b^2 = 1.5~\times~10^{-3}
In this case the error probability decreases by a factor of 2 (versus 12 in the soft-decision case).
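
Both figures of this example can be reproduced numerically (a sketch):

from math import comb, erfc, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

Pb = Q(sqrt(40 / 7))  # ~ 0.0084
# Union bound with pairwise terms at d_min = 3 (Example 1/2):
union = 15 * sum(comb(3, i) * Pb**i * (1 - Pb)**(3 - i) for i in (2, 3))
# Exact: an uncorrectable error pattern has weight > 1 (Example 2/2):
exact = sum(comb(7, i) * Pb**i * (1 - Pb)**(7 - i) for i in range(2, 8))
print(union, exact)   # ~ 3e-3 and ~ 1.5e-3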
