# Luigi Paura » 10. Block Channel Coding

Block Channel Coding

Summary:

• Why the coding?
• Block Coding.
• Bandwidth expansion and coding gain.

### Why the coding? (1/9)

The orthogonal signals are reliable but not efficient, namely: $P_M \longrightarrow 0$ but $W \longrightarrow \infty$ as $k \rightarrow \infty$.

Is it possible to deliver information with $P_M \to 0$ while employing a constrained bandwidth $W$?
Yes !!

By resorting, for example, to block coding.

### Why the coding? (2/9)

Example

A digital system utilizes an average power P to transmit information at a bit rate R.
The system resorts to a PPM scheme.

### Why the coding? (5/9)

In vector notation: $s_1=\sqrt{E_b}(+1,+1)$ $s_2=\sqrt{E_b}(+1,-1)$ $s_3=\sqrt{E_b}(-1,+1)$ $s_4=\sqrt{E_b}(-1,-1)$

The closest codewords differ in one single component.

### Why the coding? (6/9) Rather than using a PPM with dimensionality two, let us use three orthogonal signals to deliver the pair (k = 2) of information bits.

### Why the coding? (7/9) $T=\frac{T_s}{3} \Rightarrow W^C = \frac{3}{T_s} = \frac{3}{2}\,W^{SC}$

In vector notation: $s_1=\sqrt E (+1,+1,+1)$ $s_2=\sqrt E (+1,-1,-1)$ $s_3=\sqrt E (-1,-1,+1)$ $s_4=\sqrt E (-1,+1,-1)$

Any two codewords differ from each other in two components.

### Why the coding? (8/9)

What is the advantage if the engaged bandwidth is increased? $d^2_{ij}=|s_i-s_j|^2=8E~~~~~\forall i \neq j$

The comparison between the two solutions is performed for a fixed energy used to deliver the two information bits: $2E_b=3E\Rightarrow E=\frac{2}{3} E_b$ $(d^C_{ij})^2=8 \times \frac{2}{3} E_b = \frac{16}{3}E_b > (d_{min}^{SC})^2=4E_b \Rightarrow \frac{(d^C_{ij})^2}{(d^{SC}_{min})^2}=\frac{4}{3}$
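The distance comparison above can be checked numerically. A minimal sketch, with $E_b$ normalized to 1 (the variable names are illustrative, not from the slides):

```python
import itertools
import math

Eb = 1.0  # normalized bit energy

# Uncoded scheme: 4 signals on the vertices of a 2-D square, energy Eb per dimension
uncoded = [[math.sqrt(Eb) * c for c in v]
           for v in itertools.product([+1, -1], repeat=2)]

# Coded scheme: 4 codewords in 3 dimensions; total energy fixed: 3E = 2Eb
E = 2.0 * Eb / 3.0
coded = [[math.sqrt(E) * c for c in v]
         for v in [(+1, +1, +1), (+1, -1, -1), (-1, -1, +1), (-1, +1, -1)]]

def min_sq_dist(signals):
    """Minimum squared Euclidean distance over all signal pairs."""
    return min(sum((a - b) ** 2 for a, b in zip(si, sj))
               for si, sj in itertools.combinations(signals, 2))

print(min_sq_dist(uncoded))                   # 4*Eb = 4.0
print(min_sq_dist(coded))                     # 8*E = 16/3 ≈ 5.33
print(min_sq_dist(coded) / min_sq_dist(uncoded))  # 4/3
```

The printed ratio 4/3 is exactly the squared-distance gain derived above.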

### Why the coding? (9/9) $d^C_{ij} > d^{SC}_{min}~~~~~~\Rightarrow P_M^C < P_M^{SC}$

The advantage of the decreased error probability is paid for in terms of increased engaged bandwidth and increased complexity (presence of the coder and decoder).
The bandwidth increase is not exponential!!!

### Block Coding (2/15)

The codewords are represented by the vertices of a hypercube of dimensionality n and edge length $2\sqrt E$: $nE=kE_b=E_s~\Rightarrow E=\frac{k}{n} E_b =R_cE_b$ $R_c = \frac{k}{n}~~\text{coding rate}$

### Block Coding (3/15)

How many codewords do we need?

A number equal to the number of the information blocks namely: $M=2^k$

How many are the possible codewords?

As many as are the vertices of the hypercube whose dimensionality is n:

# possible codewords $= 2^n > 2^k$

### Block Coding (4/15)

To choose a code means to select $2^k$ of the $2^n$ possible codewords.

How to perform such a choice?

In order to make the codewords as different from each other as possible, namely to make the distances between the signals $s_i$ as large as possible.
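For very small n and k, this selection can be done by exhaustive search over all subsets of hypercube vertices. A brute-force sketch (the function names are illustrative; this is feasible only for tiny codes):

```python
import itertools

def hamming(a, b):
    """Number of positions where two codewords disagree."""
    return sum(x != y for x, y in zip(a, b))

def best_code(n, k):
    """Exhaustively pick 2^k of the 2^n binary n-tuples maximizing
    the minimum pairwise Hamming distance (tractable for tiny n, k only)."""
    vertices = list(itertools.product([0, 1], repeat=n))
    best, best_dmin = None, -1
    for code in itertools.combinations(vertices, 2 ** k):
        dmin = min(hamming(a, b) for a, b in itertools.combinations(code, 2))
        if dmin > best_dmin:
            best, best_dmin = code, dmin
    return best, best_dmin

code, dmin = best_code(5, 2)  # choose 2^2 = 4 codewords of length n = 5
print(dmin)                   # 3: the largest achievable minimum distance
```

The search space grows combinatorially, which is precisely why practical code design imposes algebraic structure instead of searching.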

### Block Coding (5/15) $(d^E_{ij})^2=\sum_{l:\,s_i^l\neq s_j^l} (\pm 2 \sqrt E)^2=4\,d^H_{ij}E,~~~~1 \leq l \leq n$

with $d^H_{ij}$ the number of disagreements between the codewords associated with $s_i$ and $s_j$.

### Block Coding (6/15)

Example

n=5 $(1, -1,1,1,-1)\Leftrightarrow (\sqrt E, - \sqrt E, \sqrt E, \sqrt E, -\sqrt E)$ $(1, 1,-1,-1,1)\Leftrightarrow (\sqrt E, \sqrt E, - \sqrt E, - \sqrt E, \sqrt E)$ $d^H_{ij}=4 \Rightarrow (d^E_{ij})^2=4 E\, d^H_{ij}=16 E$

$d^H_{ij} \equiv$ # disagreements between $s_i$ and $s_j$: the Hamming distance.
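The relation $(d^E_{ij})^2 = 4\,d^H_{ij}E$ can be verified on this example. A minimal sketch with $E$ normalized to 1 (helper names are illustrative):

```python
import math

E = 1.0  # symbol energy (normalized)

def to_signal(bits, energy=E):
    """Map a ±1 codeword to its antipodal signal vector."""
    return [math.sqrt(energy) * b for b in bits]

def hamming(ci, cj):
    """Hamming distance: number of disagreeing components."""
    return sum(a != b for a, b in zip(ci, cj))

def sq_euclidean(si, sj):
    """Squared Euclidean distance between two signal vectors."""
    return sum((a - b) ** 2 for a, b in zip(si, sj))

ci = (1, -1, 1, 1, -1)
cj = (1, 1, -1, -1, 1)

dH = hamming(ci, cj)                              # 4
dE2 = sq_euclidean(to_signal(ci), to_signal(cj))  # 16.0 = 4 * dH * E
print(dH, dE2)
```

Each disagreeing component contributes $(2\sqrt E)^2 = 4E$, so four disagreements give $16E$.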

### Block Coding (7/15)

Let us recall that the performance of an optimum receiver in AWGN can be evaluated approximately by resorting to the minimum Euclidean distance between the signals.

### Block Coding (10/15)

If no coding is performed, all the vertices of the hypercube of dimensionality k are utilized → $d^E_{min}=2\sqrt{E_b} \Rightarrow P_e^{SC} \leq MQ \Biggl( \sqrt{\frac{2E_b}{N_0}}\Biggr)\leq \frac M 2 e^{- \frac {E_b}{N_0}}$

### Block Coding (11/15)

By comparing the two bounds one has: $P_e^{SC}\leq \frac M 2 e^{-\frac{E_b}{N_0}}$ $P_e^C\leq \frac M 2 e^{-\frac{R_c d_{min}^H E_b}{N_0}}$ $\text{if}~~ R_c d^H_{min}>1 \Rightarrow P_e^C < P_e^{SC}$ $R_c d^H_{min}~~\text{is referred to as the asymptotic}~~(E_b/N_0\gg 1)~\text{coding gain}$
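The two exponential bounds can be compared numerically. A sketch using the (7,4) Hamming code as an assumed example (its parameters $R_c = 4/7$ and $d^H_{min} = 3$ are standard, but the code is not discussed in these slides):

```python
import math

def union_bound(M, exponent_factor, EbN0):
    """(M/2) * exp(-factor * Eb/N0): the exponential form of the union bound."""
    return 0.5 * M * math.exp(-exponent_factor * EbN0)

# Assumed example: the (7,4) Hamming code, Rc = 4/7, minimum Hamming distance 3.
k, n, dH_min = 4, 7, 3
Rc = k / n
M = 2 ** k
gain = Rc * dH_min            # asymptotic coding gain = 12/7 ≈ 1.71
print(10 * math.log10(gain))  # coding gain in dB, ≈ 2.34

for EbN0_dB in (4, 8, 12):
    EbN0 = 10 ** (EbN0_dB / 10)        # Eb/N0 as a linear ratio
    Pe_uncoded = union_bound(M, 1.0, EbN0)
    Pe_coded = union_bound(M, gain, EbN0)
    print(EbN0_dB, Pe_uncoded, Pe_coded)
```

Since $R_c d^H_{min} = 12/7 > 1$, the coded bound decays faster, and the advantage widens as $E_b/N_0$ grows, consistent with the asymptotic nature of the gain.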

### Block Coding (12/15)

Since for binary codes $R_c < 1$ ($k < n$), for fixed k and n the best code is the one which maximizes the minimum Hamming distance.

Bandwidth requirement

The impulse duration for the transmission of a single bit when no coding is adopted is $1/R \Rightarrow W_{SC}=R$

If coding is adopted, the impulse duration to transmit a single symbol of the codeword is: $T=\frac{k}{nR}=\frac{R_c}{R}\rightarrow W_C = \frac{R}{R_c}$

### Block Coding (13/15) $W_{SC}=R$ $W_C=\frac R {R_c} > R=W_{SC}$

Bandwidth expansion = $\frac {W_C}{W_{SC}}=\frac 1 {R_c}$
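A quick numeric check of the expansion factor, again using the assumed $R_c = 4/7$ of a (7,4) code and an arbitrary example bit rate:

```python
R = 1e6            # information bit rate: 1 Mbit/s (assumed example value)
Rc = 4 / 7         # coding rate of an assumed (7,4) code
W_SC = R           # uncoded bandwidth
W_C = R / Rc       # coded bandwidth
print(W_C / W_SC)  # bandwidth expansion = 1/Rc = 1.75
```

The expansion is a fixed multiplicative factor $1/R_c$, not a function of the block length.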

### Block Coding (14/15)

It can be shown that in AWGN there exists a family of block codes $(n_i, k_i)$ with $R_c=\frac{k_i}{n_i}$

such that as $n_i$ increases $R_c$ is maintained constant and therefore, with the bandwidth expansion fixed, $P_e \to 0$ provided that $R_c$ is smaller than the channel capacity.

$P_e$ can approach zero without requiring an exponential bandwidth expansion such as happens for the orthogonal signals.

### Block Coding (15/15)

How do we pay for such an advantage?

In complexity, since the block length n, and therefore k, must increase.

As k increases, the number of codewords $M = 2^k$ increases exponentially and, therefore, so does the complexity of both coder and decoder, unless some convenient further property, such as linearity, is imposed on the code to lower the complexity significantly.

### Coding Techniques

Block linear codes

Block cyclic codes

Convolutional codes

"Campus Virtuale" project of the Università degli Studi di Napoli Federico II, realized with co-financing from the European Union. Axis V - Information Society - Operational Objective 5.1 e-Government and e-Inclusion