EC 723 Satellite Communication Systems


1 EC 723 Satellite Communication Systems
Mohamed Khedr

2 Syllabus (tentative)
Week 1: Overview
Week 2: Orbits and constellations: GEO, MEO and LEO
Week 3: Satellite space segment, propagation and satellite links, channel modelling
Week 4: Satellite communications techniques
Week 5: Satellite communications techniques II
Week 6: Satellite communications techniques III
Week 7: Satellite error correction techniques
Week 8: Satellite error correction techniques II, multiple access
Week 9: Satellites in networks I, INTELSAT systems, VSAT networks, GPS
Week 10: GEO, MEO and LEO mobile communications; INMARSAT systems, Iridium, Globalstar, Odyssey
Week 11: Presentations
Weeks 12–15

3 Block diagram of a DCS
Transmit chain: Format → Source encode → Channel encode → Pulse modulate → Bandpass modulate → Channel.
Receive chain: Demodulate/Sample → Detect → Channel decode → Source decode → Format.
(The modulate and demodulate blocks together form the digital modulation and digital demodulation stages.)

4 What is channel coding?
Channel coding: transforming signals to improve communications performance by increasing the robustness against channel impairments (noise, interference, fading, ...).
Waveform coding: transforming waveforms into better waveforms.
Structured sequences: transforming data sequences into better sequences that have structured redundancy.
"Better" in the sense of making the decision process less subject to errors.

5 What is channel coding?
Coding is the mapping of binary source output sequences of length k into binary channel input sequences of length n (> k). A block code is denoted by (n,k); binary coding produces 2^k codewords of length n. The extra bits in the codewords are used for error detection/correction.
In this course we concentrate on two types of codes realized with binary numbers: (1) block codes and (2) convolutional codes.
Block codes: the mapping of information blocks onto channel inputs is done independently; the encoder output depends only on the current block of the input sequence.
Convolutional codes: each source bit influences n(L+1) channel input bits, where n(L+1) is the constraint length and L is the memory depth. These codes are denoted by (n,k,L).
(n,k) block coder: k bits in, n bits out.

6 Error control techniques
Automatic Repeat reQuest (ARQ): full-duplex connection, error-detection codes. The receiver sends feedback to the transmitter indicating whether an error was detected in the received packet (NACK) or not (ACK). The transmitter retransmits the previously sent packet if it receives a NACK.
Forward Error Correction (FEC): simplex connection, error-correction codes. The receiver tries to correct some errors.
Hybrid ARQ (ARQ+FEC): full-duplex connection, error detection and correction codes.

7 Why use error-correction coding?
Trade-offs: error performance vs. bandwidth, power vs. bandwidth, data rate vs. bandwidth, capacity vs. bandwidth (figure: operating points A–F on the coded and uncoded error-rate curves).
Coding gain: for a given bit-error probability, the reduction in Eb/N0 that can be realized through the use of the code:
G [dB] = (Eb/N0)_uncoded [dB] − (Eb/N0)_coded [dB]

8 Channel models
Discrete memoryless channels: discrete input, discrete output.
Binary symmetric channels: binary input, binary output.
Gaussian channels: discrete input, continuous output.

9 Linear block codes
Let us first review some basic definitions that are useful for understanding linear block codes.

10 Some definitions
Binary field: the set {0,1}, under modulo-2 addition and multiplication, forms a field. The binary field is also called the Galois field, GF(2).
Addition (modulo 2): 0+0=0, 0+1=1, 1+0=1, 1+1=0.
Multiplication: 0·0=0, 0·1=0, 1·0=0, 1·1=1.

11 Some definitions…
Vector space: the set of binary n-tuples is denoted by V_n.
Vector subspace: a subset S of the vector space V_n is called a subspace if:
the all-zero vector is in S;
the sum of any two vectors in S is also in S.
Example: {000, 010, 101, 111} is a subspace of V_3.

12 Some definitions…
Spanning set: a collection of vectors, the linear combinations of which include all vectors in a vector space V, is said to be a spanning set for V, or to span V.
Example: the vectors (100), (010) and (001) span V_3.
Bases: a spanning set for V that has minimal cardinality is called a basis for V. (The cardinality of a set is the number of objects in the set.)

13 Linear block codes
Linear block code (n,k): a set C ⊂ V_n with cardinality 2^k is called a linear block code if, and only if, it is a subspace of the vector space V_n.
Members of C are called codewords.
The all-zero codeword is a codeword.
Any linear combination of codewords is a codeword.

14 Linear block codes – cont’d
The encoder maps each k-bit message in V_k onto a codeword in the subspace C of V_n spanned by the bases of C (figure: mapping from the message space to the codeword space).

15 (a) Hamming distance d(ci, cj) ≥ 2t + 1
(a) Hamming distance d(ci, cj) ≥ 2t + 1. (b) Hamming distance d(ci, cj) = 2t. The received vector is denoted by r.

16 # of bits for FEC
We want to correct t errors in an (n,k) code.
Data word d = [d1, d2, ..., dk]  =>  2^k data words.
Code word c = [c1, c2, ..., cn]  =>  2^n code words.
(figure: the 2^k data words mapped into the 2^n-point code space, with neighbouring codewords cj and cj')

17 Representing codes by vectors
Code strength is measured by the Hamming distance, which tells how different the code words are. Codes are more powerful when their minimum Hamming distance dmin (over all codeword pairs in the code family) is large.
The Hamming distance d(X,Y) is the number of bits that differ between the code words X and Y.
(n,k) codes can be mapped onto an n-dimensional grid whose vertices are the valid code words (figure).

18 Error Detection
If a code can detect a t-bit error, then the received word cj' must lie within a Hamming sphere of radius t around the transmitted codeword. For example, if cj = 101 and t = 1, then 100, 111 and 001 lie in the Hamming sphere around cj (figure: code cube).

19 Error Correction
To correct t errors, the Hamming spheres of radius t around the code words must be non-overlapping, which requires dmin ≥ 2t + 1 (figure: code cube).

20 6-D Code Space (figure)

21 Block Code Error Detection and Correction
(6,3) code: 2^3 message words mapped into a 2^6-point code space, dmin = 3. The code can detect 2-bit errors and correct 1-bit errors.
Erasure example: suppose the codeword 110011 is sent but two digits are erased (xx0011); the correct codeword is the one with the smallest Hamming distance over the remaining digits.
Message → Codeword
000 → 000000
100 → 110100
010 → 011010
110 → 101110
001 → 101001
101 → 011101
011 → 110011
111 → 000111
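
As a small illustration of the erasure example above, the sketch below (plain Python, using the codeword table from this slide) finds the codeword that agrees with the unerased digits of xx0011:

```python
# Minimal sketch of the erasure example: the (6,3) codeword table is taken
# from this slide; positions marked 'x' are erased and are ignored when
# computing the Hamming distance.
CODEWORDS = ["000000", "110100", "011010", "101110",
             "101001", "011101", "110011", "000111"]

def closest_codeword(received: str) -> str:
    def dist(cw: str) -> int:
        # Count disagreements only on the non-erased positions.
        return sum(1 for r, c in zip(received, cw) if r != "x" and r != c)
    return min(CODEWORDS, key=dist)

print(closest_codeword("xx0011"))   # -> 110011, the codeword that was sent
```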

22 Geometric view
We want code efficiency, so the space should be packed with as many code words as possible, but the code words should be as far apart as possible to minimize errors.
The 2^k codeword n-tuples form a subspace of the 2^n n-tuples of V_n (figure).

23 Linear block codes – cont’d
The information bit stream is chopped into blocks of k bits. Each block is encoded into a larger block of n bits. The coded bits are modulated and sent over the channel. The reverse procedure is done at the receiver.
Data block (k bits) → Channel encoder → Codeword (n bits)

24 Linear block codes – cont’d
The Hamming weight of a vector U, denoted by w(U), is the number of non-zero elements in U.
The Hamming distance between two vectors U and V, denoted d(U,V), is the number of elements in which they differ: d(U,V) = w(U ⊕ V).
The minimum distance of a block code is dmin = min d(Ui, Uj) over all pairs of distinct codewords; for a linear code this equals the minimum weight of its non-zero codewords.
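
A minimal sketch of these definitions in Python, applied to the (6,3) codeword table from slide 21 (for a linear code the minimum distance equals the minimum weight of the non-zero codewords):

```python
# Hamming weight, Hamming distance and minimum distance of a block code.
from itertools import combinations

def hamming_weight(u: str) -> int:
    return u.count("1")                       # number of non-zero elements

def hamming_distance(u: str, v: str) -> int:
    return sum(a != b for a, b in zip(u, v))  # positions in which they differ

def minimum_distance(code: list[str]) -> int:
    return min(hamming_distance(u, v) for u, v in combinations(code, 2))

code = ["000000", "110100", "011010", "101110",
        "101001", "011101", "110011", "000111"]
print(minimum_distance(code))   # -> 3 for this (6,3) code
```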

25 Linear block codes – cont’d
Error-detection capability: e = dmin − 1.
Error-correcting capability t of a code, defined as the maximum number of guaranteed correctable errors per codeword, is t = ⌊(dmin − 1)/2⌋.

26 Linear block codes – cont’d
For memoryless channels, the probability that the decoder commits an erroneous decoding is bounded by
P_M ≤ Σ_{j=t+1}^{n} (n choose j) p^j (1 − p)^{n−j}
where p is the transition probability (bit error probability) of the channel.
The decoded bit-error probability is approximately
P_B ≈ (1/n) Σ_{j=t+1}^{n} j (n choose j) p^j (1 − p)^{n−j}

27 Linear block codes – cont’d
Discrete, memoryless, symmetric channel model: the transmitted bit is received correctly with probability 1 − p and flipped with probability p (figure: binary symmetric channel with crossover probability p).
Note that for coded systems the coded bits are modulated and transmitted over the channel. For example, for M-PSK modulation on AWGN channels (M > 2), p is the bit error probability of the modulation evaluated at the energy per coded bit E_c, where E_c = (k/n) E_b.

28 Linear block codes –cont’d
A matrix G is constructed by taking as its rows the basis vectors of C; the encoder then maps each message in V_k onto a codeword in C (figure: mapping via the bases of C).

29 Linear block codes – cont’d
Encoding in an (n,k) block code: U = m·G, where m = (m1, ..., mk) is the message and G is the k×n generator matrix.
The rows of G are linearly independent.

30 Linear block codes – cont’d
Example: block code (6,3). The generator matrix maps each 3-bit message vector onto a 6-bit codeword (the full message/codeword table is given on slide 21).
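
The generator matrix itself appears only in the figure; the sketch below assumes the G whose rows reproduce the codeword table on slide 21 (the codewords of the messages 100, 010 and 001) and encodes all eight messages with modulo-2 arithmetic:

```python
import numpy as np

# Generator matrix inferred from the message/codeword table on slide 21.
G = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def encode(m):
    """Codeword U = m * G (modulo 2)."""
    return np.mod(np.array(m) @ G, 2)

for m in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0),
          (0, 0, 1), (1, 0, 1), (0, 1, 1), (1, 1, 1)]:
    print(m, "->", "".join(map(str, encode(m))))
```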

31 Linear block codes – cont’d
Systematic block code (n,k): for a systematic code, the first (or last) k elements in the codeword are the information bits, so G = [P | I_k] and the codeword is U = [parity bits | message bits].

32 Linear block codes – cont’d
For any linear code we can find an (n−k)×n matrix H whose rows are orthogonal to the rows of G: G·H^T = 0.
H is called the parity-check matrix and its rows are linearly independent.
For systematic linear block codes with G = [P | I_k], the parity-check matrix is H = [I_{n−k} | P^T].
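
Continuing the (6,3) example (again assuming the G inferred from slide 21, so that P is its first three columns), a short check that H = [I | P^T] is orthogonal to G:

```python
import numpy as np

G = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
P = G[:, :3]                                   # parity part of the systematic G
H = np.hstack([np.eye(3, dtype=int), P.T])     # H = [I_{n-k} | P^T]

print(H)
print(np.mod(G @ H.T, 2))                      # all-zero matrix: rows of H orthogonal to rows of G
```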

33 Linear block codes – cont’d
Syndrome testing: the received vector is r = U + e, where e is the error pattern. The syndrome of r is S = r·H^T = e·H^T, so S depends only on the error pattern e, not on the transmitted codeword.
(Block diagram: data source → format → channel encoding → modulation → channel → demodulation → detection → channel decoding → format → data sink.)

34 Linear block codes – cont’d
Standard array: the first row contains the 2^k codewords, starting with the all-zero codeword. For each subsequent row i (i = 2, ..., 2^{n−k}), find a vector in V_n of minimum weight that is not already listed in the array; call this pattern e_i (a coset leader) and form row i as the corresponding coset e_i + C.

35 Linear block codes – cont’d
Standard array and syndrome table decoding:
Calculate the syndrome S = r·H^T.
Find the coset leader ê corresponding to S.
Calculate Û = r + ê and the corresponding message.
Note that Û = U + (e + ê). If ê = e, the error is corrected; if ê ≠ e, an undetectable decoding error occurs.
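
A compact sketch of this procedure for the (6,3) example (same assumed G and H as in the earlier sketches): build the syndrome to coset-leader table from the single-error patterns, then correct a received vector:

```python
import numpy as np

G = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
H = np.hstack([np.eye(3, dtype=int), G[:, :3].T])

def syndrome(r):
    return tuple(np.mod(np.array(r) @ H.T, 2))

# Coset leaders: the all-zero pattern and the six single-error patterns
# (this code has dmin = 3, so t = 1 error per codeword is guaranteed correctable).
table = {syndrome(np.zeros(6, dtype=int)): np.zeros(6, dtype=int)}
for i in range(6):
    e = np.zeros(6, dtype=int); e[i] = 1
    table[syndrome(e)] = e

def decode(r):
    e_hat = table.get(syndrome(r))
    return np.mod(r + e_hat, 2) if e_hat is not None else r   # U_hat = r + e_hat

U = np.array([1, 0, 1, 1, 1, 0])     # codeword of the message 110
r = U.copy(); r[2] ^= 1              # introduce one channel error
print(decode(r))                     # -> [1 0 1 1 1 0], the error is corrected
```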

36 Linear block codes – cont’d
Example: standard array for the (6,3) code (table: the eight codewords in the top row, with the coset leaders and their cosets below).

37 Linear block codes – cont’d
Syndrome look-up table: each correctable error pattern is listed next to its syndrome (table).

38 Hamming codes
Hamming codes are a subclass of linear block codes and belong to the category of perfect codes.
Hamming codes are expressed as a function of a single integer m ≥ 2: code length n = 2^m − 1, number of information bits k = 2^m − m − 1, number of parity bits n − k = m, error-correction capability t = 1.
The columns of the parity-check matrix, H, consist of all non-zero binary m-tuples.
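
A sketch of this construction for m = 3: the columns of H are all non-zero binary 3-tuples, giving the (7,4) Hamming code. The column ordering below is a free choice (any ordering yields a Hamming code); here the weight-1 columns are placed last so H takes a systematic-looking form:

```python
import numpy as np
from itertools import product

m = 3
# Columns of H are all non-zero binary m-tuples.
nonzero  = [c for c in product([0, 1], repeat=m) if any(c)]
identity = sorted((c for c in nonzero if sum(c) == 1), reverse=True)   # e1, e2, e3
others   = [c for c in nonzero if sum(c) > 1]
H = np.array(others + identity).T          # H = [P^T | I_m]
print(H)                                   # 3 x 7 parity-check matrix
print("n =", 2**m - 1, ", k =", 2**m - 1 - m)   # (7,4) code, t = 1
```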

39 Hamming codes
Example: Systematic Hamming code (7,4)

40 Example of the block codes
(figure: error-performance curves for 8PSK and QPSK)

41 Convolutional codes
Convolutional codes offer an approach to error-control coding substantially different from that of block codes.
A convolutional encoder:
encodes the entire data stream into a single codeword;
does not need to segment the data stream into blocks of fixed size (convolutional codes are often forced into a block structure by periodic truncation);
is a machine with memory.
This fundamental difference in approach imparts a different nature to the design and evaluation of the code: block codes are based on algebraic/combinatorial techniques, while convolutional codes are based on construction techniques.

42 Convolutional codes – cont’d
A convolutional code is specified by three parameters (n, k, K) or (k/n, K), where k/n is the coding rate, determining the number of data bits per coded bit. In practice, usually k = 1 is chosen, and we assume that from now on.
K is the constraint length of the encoder, where the encoder has K − 1 memory elements. (There are different definitions of constraint length in the literature.)

43 Block diagram of the DCS
Information source → Rate 1/n convolutional encoder → Modulator → Channel → Demodulator → Rate 1/n convolutional decoder → Information sink

44 A Rate ½ Convolutional encoder
Convolutional encoder (rate ½, K=3): three shift-register stages, where the first one takes the incoming data bit and the remaining two form the memory of the encoder. Each input data bit produces two output coded bits, the first and second coded bits of the branch word (figure).

45 A Rate ½ Convolutional encoder
Message sequence stepped through the encoder: at each time step the figure shows the register contents and the output branch word.

46 A Rate ½ Convolutional encoder
(figure: remaining time steps and the complete sequence of output branch words)

47 Effective code rate
Initialize the memory before encoding the first bit (all-zero state). Clear out the memory after encoding the last bit (all-zero state); hence, a tail of K − 1 zero bits is appended to the data bits.
Effective code rate: with L data bits and k = 1 assumed, R_eff = L / (n(L + K − 1)), which is less than the nominal rate 1/n.
Encoder input: data + tail; encoder output: codeword.
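
A one-line check of the rate formula above (assuming k = 1, so the nominal rate is 1/n, and the usual definition R_eff = L/(n(L + K − 1))):

```python
def effective_rate(L: int, n: int, K: int) -> float:
    """R_eff = L / (n * (L + K - 1)): L data bits plus a K-1 zero tail, n coded bits each."""
    return L / (n * (L + K - 1))

print(effective_rate(L=3,   n=2, K=3))   # 0.3   (well below the nominal 1/2)
print(effective_rate(L=100, n=2, K=3))   # ~0.49 (approaches 1/2 for long blocks)
```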

48 Encoder representation
Vector representation: we define n binary vectors with K elements each (one vector for each modulo-2 adder). The i-th element in each vector is "1" if the i-th stage of the shift register is connected to the corresponding modulo-2 adder, and "0" otherwise.
Example (figure): for the rate-½, K = 3 encoder the connection vectors are g1 = (1 1 1) and g2 = (1 0 1), consistent with the state diagram on slide 54.
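
The generator vectors of the example are shown only in the figure; judging from the state diagram on slide 54, the connections are g1 = (1 1 1) and g2 = (1 0 1). A sketch of the rate-½, K = 3 encoder with those assumed generators:

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2, K=3 convolutional encoder; generators assumed from the state diagram."""
    K = len(g1)
    reg = [0] * K                        # shift register, reg[0] is the newest bit
    out = []
    for b in bits + [0] * (K - 1):       # append the zero tail to flush the memory
        reg = [b] + reg[:-1]
        v1 = sum(g * r for g, r in zip(g1, reg)) % 2   # first modulo-2 adder
        v2 = sum(g * r for g, r in zip(g2, reg)) % 2   # second modulo-2 adder
        out += [v1, v2]
    return out

print(conv_encode([1, 0, 1]))   # branch words 11 10 00 10 11 for input 101 + tail 00
```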

49 Encoder representation – cont’d
Impulse response representation: the response of the encoder to a single "one" bit that goes through it.
Example (figure): for each input bit of the message m, the table lists the register contents, the output branch word, and the modulo-2 sum of the shifted impulse responses.

50 Encoder representation – cont’d
Polynomial representation: we define n generator polynomials, one for each modulo-2 adder. Each polynomial is of degree K − 1 or less and describes the connections of the shift register to the corresponding modulo-2 adder.
Example: for the encoder above, g1(X) = 1 + X + X^2 and g2(X) = 1 + X^2. The output sequence is found by multiplying the message polynomial m(X) by each generator polynomial (modulo 2) and interleaving the results.

51 Encoder representation –cont’d
In more detail: the product polynomials m(X)g1(X) and m(X)g2(X) are evaluated term by term and their coefficients are interleaved to form the output branch words (worked example in the figure).

52 State diagram
A finite-state machine only encounters a finite number of states.
State of a machine: the smallest amount of information that, together with the current input to the machine, can predict the output of the machine.
In a convolutional encoder, the state is represented by the content of the memory; hence, there are 2^(K−1) states.

53 State diagram – cont’d
A state diagram is a way to represent the encoder. It contains all the states and all possible transitions between them. For k = 1, only two transitions initiate from each state and only two transitions end in each state.

54 State diagram – cont’d
Transitions are labelled input/output (branch word):
Current state 00: input 0 → next state 00, output 00; input 1 → next state 10, output 11.
Current state 10: input 0 → next state 01, output 10; input 1 → next state 11, output 01.
Current state 01: input 0 → next state 00, output 11; input 1 → next state 10, output 00.
Current state 11: input 0 → next state 01, output 01; input 1 → next state 11, output 10.
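
The transition table above can be reproduced programmatically (same assumed generators g1 = 111 and g2 = 101; the state is the content of the two memory elements, most recent bit first):

```python
def transitions(g1=(1, 1, 1), g2=(1, 0, 1)):
    rows = []
    for s1, s0 in [(0, 0), (0, 1), (1, 0), (1, 1)]:   # current state (m1, m2)
        for u in (0, 1):                               # input bit
            reg = (u, s1, s0)
            v1 = sum(g * r for g, r in zip(g1, reg)) % 2
            v2 = sum(g * r for g, r in zip(g2, reg)) % 2
            rows.append(((s1, s0), u, (u, s1), (v1, v2)))  # next state = (u, m1)
    return rows

for cur, u, nxt, out in transitions():
    print(f"state {cur}  input {u}  ->  next {nxt}  output {out}")
```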

55 Trellis – cont’d
A trellis diagram is an extension of the state diagram that shows the passage of time. The figure shows one section of the trellis for the rate ½ code, with branches labelled input/output: 0/00, 1/11, 0/11, 1/00, 0/10, 1/01, 0/01, 1/10.

56 Trellis – cont’d
A trellis diagram for the example code (figure: input bits, tail bits and output branch words shown along the trellis).

57 Trellis – cont’d
(figure: the trellis labelled with the input bits, tail bits and output branch words for the example message)

58 Trellis of an example rate-½ convolutional code
(figure: input bits, tail bits and output branch words along the trellis)

59 Soft and hard decision decoding
In hard decision: the demodulator makes a firm (hard) decision on whether a one or a zero was transmitted, and provides no other information to the decoder about how reliable the decision is.
In soft decision: the demodulator provides the decoder with some side information together with the decision. The side information gives the decoder a measure of confidence in the decision.

60 Soft and hard decoding
Regardless of whether the channel outputs hard or soft decisions, the decoding rule remains the same: maximize the probability p(Z|U), where Z is the received sequence and U a candidate codeword.
However, in soft decoding the energies of the decision regions must be accounted for, and hence the Euclidean metric d_E, rather than the Hamming metric, is used.
(figure: the transition for Pr[3|0] is indicated by the arrow)

61 Decision regions
Coding can be realized by the soft-decoding or the hard-decoding principle. For soft decoding, the reliability (measured by bit energy) of each decision region must be known.
Example: decoding a BPSK signal. The matched-filter output is a continuous number; in AWGN the matched-filter output is Gaussian. For soft decoding, several decision-region partitions are used. The transition probability Pr[3|0] is, e.g., the probability that a transmitted '0' falls into region 3.

62 Soft and hard decision decoding …
ML soft-decision decoding rule: choose the path in the trellis with the minimum Euclidean distance from the received sequence.
ML hard-decision decoding rule: choose the path in the trellis with the minimum Hamming distance from the received sequence.

63 The Viterbi algorithm
The Viterbi algorithm performs maximum-likelihood decoding. It finds a path through the trellis with the largest metric (maximum correlation or minimum distance). At each step in the trellis, it compares the partial metrics of all paths entering each state and keeps only the path with the best metric, called the survivor, together with its metric.

64 Example of hard-decision Viterbi decoding
(figure: trellis with the branch metrics and partial path metrics for the received sequence)

65 Example of soft-decision Viterbi decoding
(figure: trellis with the soft branch metrics and partial path metrics; the decoded path is the one with the best accumulated metric)

66 Free distance of Convolutional codes
Distance properties: since a convolutional encoder generates codewords of various lengths (as opposed to block codes), the following approach is used to find the minimum distance between all pairs of codewords. Since the code is linear, the minimum distance of the code is the minimum distance between each of the codewords and the all-zero codeword. This is the minimum distance over the set of all arbitrarily long paths along the trellis that diverge from and remerge to the all-zero path. It is called the minimum free distance, or the free distance of the code, denoted by d_free.

67 Free distance …
(figure: the path diverging from and remerging to the all-zero path with minimum weight; the Hamming weight of each branch is marked along the path)
For the example rate-½, K = 3 code this minimum-weight path has weight 5, so d_free = 5.

68 Interleaving
Convolutional codes are suitable for memoryless channels with random error events.
Some errors are bursty in nature: there is statistical dependence among successive error events (time correlation) due to channel memory, e.g. errors in multipath fading channels in wireless communications, or errors due to switching noise.
"Interleaving" makes the channel look like a memoryless channel at the decoder.

69 Interleaving …
Interleaving is done by spreading the coded symbols in time before transmission. The reverse is done at the receiver by deinterleaving the received sequence.
"Interleaving" makes bursty errors look random, so convolutional codes can be used.
Types of interleaving: block interleaving; convolutional or cross interleaving.

70 Interleaving …
Consider a code with t = 1 and codewords of 3 coded bits; a burst error of length 3 cannot be corrected. Let us use a 3×3 block interleaver.
Without interleaving, the burst puts more than one error into a single codeword (A1 A2 A3 | B1 B2 B3 | C1 C2 C3), which the code cannot correct.
With interleaving, the coded sequence A1 A2 A3 B1 B2 B3 C1 C2 C3 is transmitted as A1 B1 C1 A2 B2 C2 A3 B3 C3; after deinterleaving, the same burst leaves only 1 error per codeword, which the code can correct.
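
A sketch of the 3×3 block interleaver above: write the coded symbols row by row, transmit column by column, and invert the operation at the receiver; a burst of three channel errors is then spread into one error per codeword:

```python
def interleave(symbols, rows=3, cols=3):
    """Write row-wise, read column-wise."""
    grid = [symbols[r * cols:(r + 1) * cols] for r in range(rows)]
    return [grid[r][c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows=3, cols=3):
    """Inverse operation: write column-wise, read row-wise."""
    grid = [[None] * cols for _ in range(rows)]
    for i, s in enumerate(symbols):
        grid[i % rows][i // rows] = s
    return [grid[r][c] for r in range(rows) for c in range(cols)]

coded = ["A1", "A2", "A3", "B1", "B2", "B3", "C1", "C2", "C3"]
tx = interleave(coded)                               # A1 B1 C1 A2 B2 C2 A3 B3 C3
rx = tx[:3] + [s + "*" for s in tx[3:6]] + tx[6:]    # burst of 3 errors (marked *)
print(deinterleave(rx))   # each codeword A, B, C now carries only one error
```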

71 Concatenated codes
A concatenated code uses two levels of coding, an inner code and an outer code (of higher rate). A popular combination: a convolutional code with Viterbi decoding as the inner code and a Reed–Solomon code as the outer code. The purpose is to reduce the overall complexity while still achieving the required error performance.
Transmit chain: input data → outer encoder → interleaver → inner encoder → modulator → channel. Receive chain: demodulator → inner decoder → deinterleaver → outer decoder → output data.

72 Optimum decoding
If the input message sequences are equally likely, the optimum decoder, which minimizes the probability of error, is the maximum-likelihood (ML) decoder. The ML decoder selects, among all possible codewords, the codeword U^(m) that maximizes the likelihood function p(Z|U^(m)), where Z is the received sequence: with L information bits there are 2^L codewords to search!
ML decoding rule: choose U^(m') such that p(Z|U^(m')) is the maximum over all m of p(Z|U^(m)).

73 ML decoding for memory-less channels
Due to the independent channel statistics for memoryless channels, the likelihood function factors into a product of per-bit terms, and equivalently the log-likelihood function becomes a sum of bit metrics. The sum of branch metrics up to time index i is called the partial path metric (path metric = sum of branch metrics = sum of bit metrics).
ML decoding rule: choose the path with the maximum metric among all the paths in the trellis. This path is the "closest" path to the transmitted sequence.

74 Binary symmetric channels (BSC)
The modulator input and demodulator output are related by the crossover probability p (figure). If d_m is the Hamming distance between the received sequence Z and the codeword U^(m), and L_c is the size of the coded sequence, then
p(Z|U^(m)) = p^{d_m} (1 − p)^{L_c − d_m}
ML decoding rule: choose the path with the minimum Hamming distance from the received sequence.

75 AWGN channels
For BPSK modulation, the transmitted sequence corresponding to the codeword U^(m) is denoted by S^(m). The log-likelihood function reduces (up to constants) to the inner product, or correlation, between Z and S^(m); maximizing the correlation is equivalent to minimizing the Euclidean distance.
ML decoding rule: choose the path with the minimum Euclidean distance to the received sequence.

76 Soft and hard decisions
In hard decision: the demodulator makes a firm (hard) decision on whether a one or a zero was transmitted, and provides no other information to the decoder about how reliable the decision is. Hence its output is only zero or one (the output is quantized to two levels); these outputs are called "hard bits". Decoding based on hard bits is called "hard-decision decoding".

77 Soft and hard decision-cont’d
In soft decision: the demodulator provides the decoder with some side information together with the decision. The side information gives the decoder a measure of confidence in the decision. The demodulator outputs, called soft bits, are quantized to more than two levels. Decoding based on soft bits is called "soft-decision decoding".
On AWGN channels about 2 dB, and on fading channels about 6 dB, of gain is obtained by using soft-decision instead of hard-decision decoding.

78 The Viterbi algorithm
The Viterbi algorithm performs maximum-likelihood decoding. It finds a path through the trellis with the largest metric (maximum correlation or minimum distance). It processes the demodulator outputs in an iterative manner. At each step in the trellis, it compares the metrics of all paths entering each state and keeps only the path with the best metric, called the survivor, together with its metric. It proceeds in the trellis by eliminating the least likely paths, which reduces the decoding complexity from an exhaustive search over 2^L codewords to on the order of L·2^(K−1) survivor comparisons.

79 The Viterbi algorithm - cont’d
Do the following set-up: for a data block of L bits, form the trellis. The trellis has L + K − 1 sections (levels), starting at time t = 0 and ending at time t = L + K − 1. Label all the branches in the trellis with their corresponding branch metrics. For each state S in the trellis at time t, define a state metric Γ(S, t). Then do the following:

80 The Viterbi algorithm - cont’d
Set Γ(0, 0) = 0 and t = 1.
At time t, compute the partial path metrics for all the paths entering each state.
Set Γ(S, t) equal to the best partial path metric entering each state at time t. Keep the survivor path and delete the dead paths from the trellis.
If t < L + K − 1, increase t by 1 and return to step 2.
Start at state zero at time L + K − 1. Follow the surviving branches backwards through the trellis. The path thus defined is unique and corresponds to the ML codeword.
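
A compact hard-decision Viterbi decoder following the steps above, for the same assumed rate-½, K = 3 code (g1 = 111, g2 = 101); it traces back from state zero and returns the data bits with the K − 1 tail bits removed:

```python
def viterbi_hard(received, g1=(1, 1, 1), g2=(1, 0, 1)):
    """received: list of (bit, bit) branch words from the demodulator."""
    n_states = 4                                    # 2**(K-1) states
    INF = float("inf")
    metric = [0] + [INF] * (n_states - 1)           # start in the all-zero state
    paths = [[] for _ in range(n_states)]
    for z in received:
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):                   # s encodes the state (m1, m2)
            if metric[s] == INF:
                continue
            m1, m2 = s >> 1, s & 1
            for u in (0, 1):
                v1 = (u * g1[0] + m1 * g1[1] + m2 * g1[2]) % 2
                v2 = (u * g2[0] + m1 * g2[1] + m2 * g2[2]) % 2
                branch = (v1 != z[0]) + (v2 != z[1])        # Hamming branch metric
                nxt = (u << 1) | m1                         # next state (u, m1)
                if metric[s] + branch < new_metric[nxt]:    # keep only the survivor
                    new_metric[nxt] = metric[s] + branch
                    new_paths[nxt] = paths[s] + [u]
        metric, paths = new_metric, new_paths
    return paths[0][:-2]        # trace back from state zero, drop the K-1 tail bits

branch_words = [(1, 1), (1, 0), (0, 0), (1, 0), (1, 1)]  # encoding of 101 from the earlier sketch
print(viterbi_hard(branch_words))                         # -> [1, 0, 1]
```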

81 Example of Hard decision Viterbi decoding
(figure: the trellis for the example code, with each branch labelled by its branch word)

82 Example of hard-decision Viterbi decoding – cont’d
Label all the branches with their branch metric, the Hamming distance between the received branch word and the branch word on the trellis (figure).

83–87 Example of hard-decision Viterbi decoding – cont’d
(figures: stage by stage, the partial path metrics entering each state are computed and only the survivor with the best metric is kept)

88 Example of hard-decision Viterbi decoding – cont’d
Trace back along the surviving branches from the final state to obtain the decoded path (figure).

89 Example of soft-decision Viterbi decoding
(figure: the same trellis decoded with soft branch metrics; the path with the best accumulated metric is selected)

90 (figure)

91 Trellis diagram for K = 2, k = 2, n = 3 convolutional code (figure).

92 State diagram for K = 2, k = 2, n = 3 convolutional code (figure).

