Near Shannon Limit Performance of Low Density Parity Check Codes


1 Near Shannon Limit Performance of Low Density Parity Check Codes
Good morning, professors, seniors, and fellow students. The topic I would like to share today is "Near Shannon Limit Performance of Low Density Parity Check Codes". This paper was published by MacKay and Neal in Electronics Letters in 1996, while LDPC codes themselves were invented in the early 1960s and later rediscovered by MacKay in the mid-1990s. The paper's main goal is to verify experimentally whether the coding performance of LDPC codes can really approach the theoretical Shannon limit, and to compare them against several other kinds of codes. David J.C. MacKay and Radford M. Neal, Electronics Letters, Vol. 32, No. 18, 29th August 1996.

2 Outline
Features of LDPC Codes
History of LDPC Codes
Some Fundamentals
A Simple Example
Properties of LDPC Codes
How to construct?
Decoding: Concept of Message Passing, Sum Product Algorithm, Concept of Iterative Decoding, Channel Transmission, Decoding Algorithm, Decoding Example
Performance
Cost
This is the outline prepared for today. It looks like a long list; following the flow of the paper, I added some supplementary references and extra examples to tie the content together, in the hope of making the paper easier to understand.

3 Shannon Limit (1/2)
Shannon Limit: describes the theoretical maximum information transfer rate of a communication channel (the channel capacity) for a particular level of noise. Given: a noisy channel with channel capacity C, and information transmitted at a rate R. The Shannon limit describes the maximum capacity a communication channel can theoretically have under a given level of noise. Assume we are given a channel with capacity C, over which information is transmitted at rate R.

4 Shannon Limit (2/2)
If R < C, there exist codes that allow the probability of error at the receiver to be made arbitrarily small. Theoretically, it is possible to transmit information nearly without error at any rate below the limiting rate C. If R > C, all codes will have a probability of error greater than a certain positive minimal level, and this level increases as the rate increases. Information cannot be guaranteed to be transmitted reliably across a channel at rates beyond the channel capacity. In other words, when R < C (the rate is below the maximum capacity), there exists a coding scheme that makes the probability of the receiver getting erroneous data vanishingly small: in theory, as long as the rate stays below the channel capacity, the data can be made almost error-free. When R > C, no matter which coding scheme is used, some probability of error remains, and this error level grows as the rate increases; once the rate exceeds the channel capacity, correctness can no longer be guaranteed.
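For concreteness, the capacity of the real-valued AWGN channel used later in this talk has the standard closed form C = ½·log2(1 + SNR) bits per channel use. This formula is not stated on the slide; the sketch below simply evaluates it:

```python
import math

def awgn_capacity(snr_linear):
    """Capacity of a real AWGN channel in bits per channel use: 0.5*log2(1+SNR)."""
    return 0.5 * math.log2(1 + snr_linear)

# At 0 dB SNR (snr = 1) the channel supports up to 0.5 bit per use, so any
# code of rate R < 0.5 can in principle achieve arbitrarily small error
# probability, while any rate R > 0.5 cannot.
snr_db = 0.0
print(awgn_capacity(10 ** (snr_db / 10)))  # 0.5
```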

5 Features of LDPC Codes
Low-Density Parity-Check (LDPC) codes are error-correcting codes: a method of transmitting a message over a noisy transmission channel. They approach the Shannon capacity: for example, a 1999 design comes within 0.3 dB of the Shannon limit, and a closer design (Chung et al., 2001) comes within 0.0045 dB of capacity. Decoding complexity is linear in time, and the codes are suitable for parallel implementation. Next, some features of LDPC codes. As just mentioned, an LDPC code is an error-correcting code, i.e. a coding scheme for transmitting over a channel disturbed by noise. Its biggest feature is coding efficiency that can come close to the capacity limit of the transmission channel. Different implementations differ somewhat in performance: a 1999 implementation, transmitting very long codewords, performed 0.3 dB from the Shannon limit; by 2001 the gap had shrunk to 0.0045 dB. (dB: decibel, the unit used for SNR and noise levels.) Decoding complexity is linear in time (discussed later under performance), and the codes lend themselves readily to parallel hardware implementation.

6 History of LDPC Codes
Also known as Gallager codes, after Robert Gallager, who developed the LDPC concept in his doctoral dissertation at MIT in 1960. They were ignored for a long time because of the high computational complexity they required. In the 1990s they were rediscovered by MacKay and by Richardson/Urbanke. LDPC codes were first proposed by Gallager in his 1960 doctoral dissertation, and are also called Gallager codes in his honor. Because implementations required fairly complex computation, they received little attention until the 1990s, when MacKay and Richardson rediscovered them; with advances in the semiconductor industry, much of the computation can now be accelerated with parallel hardware, so LDPC codes have become popular again in recent years.

7 Some Fundamentals
The structure of a linear block code is described by the generator matrix G or the parity-check matrix H. For LDPC codes, H is sparse: very few 1's in each row and column. Regular LDPC codes: each column of H has a small weight (e.g. 3), and the weight per row is also uniform; H is constructed at random subject to these constraints. Irregular LDPC codes: the number of 1's per column or row is not constant. Usually irregular LDPC codes outperform regular LDPC codes. An LDPC code is a linear block code, and a linear block code can be described by a bipartite graph, or equivalently represented by a generator matrix or a parity-check matrix. As the name "low-density" suggests, the parity-check matrix of an LDPC code is sparse: it has very few non-zero elements and is almost all zeros. LDPC codes divide into regular and irregular codes; the difference is that in a regular LDPC code every row of H has the same number of 1's, and likewise every column. Irregular LDPC codes usually perform better than regular ones.

8 A Simple Example (1/6)
Variable Node: box with an '=' sign. Check Node: box with a '+' sign. Constraints: all lines connecting to a variable node have the same value, and all values connecting to a check node must sum, modulo two, to zero (they must sum to an even number). Let us first look at a simple example to get a feel for this. This is an example I found on the English Wikipedia; it points out many features of LDPC codes in a very concise, one might even say suggestive, way, and I found it quite easy to follow, so I am sharing it here. The graph shown describes the parity-check rules, and it is a bipartite graph: nodes on the same side are never connected to each other. The boxes on top are variable nodes and the ones below are check nodes. When a message (or codeword) enters this graph through the 'T' marks and satisfies the graphical constraints, it is a valid message.

9 A Simple Example (2/6)
There are 8 possible 6-bit strings which correspond to valid codewords, including 000000 and 101011. This LDPC code fragment represents a 3-bit message encoded as 6 bits. The redundancy is used to aid in recovering from channel errors. Although the codewords are 6 bits long, only 8 of them satisfy the constraints. So the LDPC code represents 3 bits of information with 6 bits, and the purpose of the added redundancy is to achieve error recovery.
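The count of 8 can be confirmed by brute force. The H matrix below is an assumption: it is one matrix consistent with the constraints worked through on the following slides (the slide's actual matrix is an image that is not in the transcript):

```python
import itertools

# An assumed parity-check matrix consistent with this example's constraints.
H = [
    [1, 1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 1],
    [1, 0, 0, 1, 1, 0],
]

def is_codeword(bits):
    """A word is valid iff every check sums to 0 mod 2."""
    return all(sum(h * b for h, b in zip(row, bits)) % 2 == 0 for row in H)

valid = [bits for bits in itertools.product([0, 1], repeat=6) if is_codeword(bits)]
print(len(valid))  # 8: a 3-bit message encoded in 6 bits
```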

10 A Simple Example (3/6) The parity-check matrix representing this graph fragment is shown on the slide. As noted before, besides being described by a bipartite graph, an LDPC code can also be represented by a parity-check matrix; this is the corresponding parity-check matrix H.

11 A Simple Example (4/6) Consider the valid codeword: 101011
It is transmitted across a binary erasure channel, and received with the 1st and 4th bit erased: ?01?11. Belief propagation is particularly simple for the binary erasure channel: it consists of iterative constraint satisfaction. Suppose the valid codeword 101011 is sent through a binary erasure channel and the receiver obtains ?01?11. First, what is a binary erasure channel? In the figure, when a 0 is sent, the receiver gets 0 with probability P, and with probability 1 − P it receives an 'e' or '?', perhaps an intermediate voltage from which it cannot tell whether the bit was 0 or 1; the same holds when a 1 is sent. For decoding, LDPC codes adopt the belief propagation framework: conceptually, the parity-check constraints are verified over many iterations until the original values are recovered.

12 A Simple Example (5/6) Consider the erased codeword: ?01?11
In this case: the first step of belief propagation is to realize that the 4th bit must be 0 to satisfy the middle constraint. Now that we have decoded the 4th bit, we realize that the 1st bit must be a 1 to satisfy the leftmost constraint. The 4th bit must be 0 to satisfy the middle constraint (the 1's in the 3rd and 6th positions feed down into the middle check node, which yields 0 and passes it back up to the 4th bit); then the 2nd, 3rd, and 4th bits feed down into the left check node, which yields 1 and passes it back up to the 1st bit.
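This iterative constraint satisfaction can be sketched in a few lines. The H below is an assumed matrix consistent with the decoding steps just described (the slide's actual matrix is an image):

```python
# Assumed H consistent with the erasure-decoding steps described above.
H = [
    [1, 1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 1],
    [1, 0, 0, 1, 1, 0],
]

def decode_erasures(received):
    """received: list of 0, 1, or None (erased). Fills erasures iteratively:
    any check with exactly one erased bit pins that bit down."""
    bits = list(received)
    progress = True
    while progress:
        progress = False
        for row in H:
            unknown = [n for n, h in enumerate(row) if h and bits[n] is None]
            if len(unknown) == 1:          # one erased bit in this check:
                n = unknown[0]             # it must make the check sum even
                bits[n] = sum(bits[k] for k, h in enumerate(row) if h and k != n) % 2
                progress = True
    return bits

print(decode_erasures([None, 0, 1, None, 1, 1]))  # [1, 0, 1, 0, 1, 1]
```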

13 A Simple Example (6/6)
Thus we are able to iteratively decode the message encoded with our LDPC code. We can validate this result by multiplying the corrected codeword r by the parity-check matrix H (mod 2): because the outcome z (the syndrome) of this operation is the 3 × 1 zero vector, we have successfully validated the resulting codeword r. So the original values are successfully recovered. The step we just performed can be checked by multiplying the parity-check matrix H with the codeword, all arithmetic being taken mod 2, and inspecting the result.
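The syndrome check is a one-liner per row; again the H below is an assumed matrix consistent with this worked example, not the slide's image:

```python
# Assumed H for the running example (the slide's matrix is an image).
H = [
    [1, 1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 1],
    [1, 0, 0, 1, 1, 0],
]

def syndrome(r):
    """z = H r (mod 2); the all-zero vector means r is a valid codeword."""
    return [sum(h * b for h, b in zip(row, r)) % 2 for row in H]

print(syndrome([1, 0, 1, 0, 1, 1]))  # [0, 0, 0] -> valid codeword
print(syndrome([1, 1, 1, 0, 1, 1]))  # nonzero  -> error detected
```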

14 Properties of LDPC Codes
The structure of a linear block code is completely described by the generator matrix G or the parity-check matrix H. r = Gm (r: codeword, G: generator matrix, m: input message); HG = 0, hence Hr = 0. A low-density parity-check code is a code which has a very sparse, random parity-check matrix H; typically the column weight is around 3 or 4. As mentioned before, an LDPC code is a linear block code, which can be described by a bipartite graph or represented by a generator matrix or a parity-check matrix. We have already seen the parity-check matrix H; generating a codeword (r = Gm) relies on the generator matrix G. The relation between G and H is HG = 0, and once H is known, G can be obtained by Gaussian elimination.
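The note above says G can be obtained from H by Gaussian elimination. A minimal sketch, assuming the small example H from earlier (the columns of G span the null space of H over GF(2), so HG = 0 mod 2 and r = Gm is always a codeword):

```python
def gf2_nullspace(H):
    """Basis of the null space of H over GF(2), via row reduction.
    Stacking the basis vectors as columns of G gives HG = 0 (mod 2)."""
    rows = [row[:] for row in H]
    m, n = len(rows), len(rows[0])
    pivots, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, m) if rows[i][c]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(m):
            if i != r and rows[i][c]:
                rows[i] = [(a + b) % 2 for a, b in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    free = [c for c in range(n) if c not in pivots]
    basis = []
    for f in free:
        v = [0] * n
        v[f] = 1
        for i, p in enumerate(pivots):
            v[p] = rows[i][f]          # back-substitute the free choice
        basis.append(v)
    return basis

H = [
    [1, 1, 1, 1, 0, 0],   # assumed H from the running example
    [0, 0, 1, 1, 0, 1],
    [1, 0, 0, 1, 1, 0],
]
G_cols = gf2_nullspace(H)
# every basis vector (column of G) satisfies all checks
print(all(sum(h * v for h, v in zip(row, g)) % 2 == 0 for row in H for g in G_cols))
```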

15 How to construct? (1/4) Construction 1A:
An M by N matrix is created at random with: weight t per column (e.g., t = 3); weight per row as uniform as possible; overlap between any two columns no greater than 1, so that the corresponding graph has no short cycles of length 4. In 1996, MacKay and Neal created their random parity-check matrices H in the following ways; the first is Construction 1A, built by random selection. The weight of a column is its number of non-zero elements; the overlap between two columns is their inner product.
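A randomized sketch of such a construction follows. This is my approximation of Construction 1A, not the paper's exact procedure: columns of weight t are drawn with a bias toward the currently lightest rows, and a column is accepted only if its overlap with every existing column is at most 1:

```python
import random

def construction_1a(M, N, t=3, tries_per_col=2000):
    """Approximation of Construction 1A (an assumption, not the paper's code):
    column weight t, row weights kept roughly uniform, and the overlap
    (inner product) between any two columns at most 1."""
    cols, row_weight = [], [0] * M
    for _ in range(N):
        for _ in range(tries_per_col):
            # draw from the 2t lightest rows to keep row weights near-uniform
            pool = sorted(range(M), key=lambda r: (row_weight[r], random.random()))[:2 * t]
            col = frozenset(random.sample(pool, t))
            if all(len(col & prev) <= 1 for prev in cols):   # overlap constraint
                cols.append(col)
                for r in col:
                    row_weight[r] += 1
                break
        else:
            raise RuntimeError("could not place a column; relax the parameters")
    return [[1 if m in c else 0 for c in cols] for m in range(M)]

random.seed(1)
H = construction_1a(M=15, N=20, t=3)
print(all(sum(H[m][n] for m in range(15)) == 3 for n in range(20)))  # column weight t
```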

16 How to construct? (2/4) Construction 2A:
Up to M/2 of the columns are designated weight-2 columns, chosen such that there is zero overlap between any pair of them. The remaining columns are made at random with weight 3, with the weight per row as uniform as possible, and with the overlap between any two columns of the entire matrix no greater than 1. The weight of a column is its number of non-zero elements; the overlap between two columns is their inner product.

17 How to construct? (3/4) Construction 1B and 2B:
A small number of columns are deleted from a matrix produced by Construction 1A or 2A, so that the bipartite graph has no short cycles of length less than some length l. Constructions 1B and 2B simply take the matrices created by 1A and 2A and post-process them, discarding some columns outright, again with the aim of eliminating the short-cycle problem.

18 How to construct? (4/4)
The above constructions do not ensure that all the rows of the matrix are linearly independent. The M by N matrix created is the parity-check matrix of a linear code with rate at least R = K/N, where K = N − M. The generator matrix of the code can be created by Gaussian elimination.

19 Decoding
The decoding problem is to iteratively find the most probable vector x such that Hx mod 2 = 0. Gallager's algorithm may be viewed as an approximate belief propagation (BP) algorithm, realized through message passing (MP). The main criterion in decoding is to find the most probable codeword whose product with the H matrix is zero mod 2; a product taken mod 2 is simply an XOR. LDPC decoding implements the belief propagation concept mainly through the message passing mechanism.

20 Concept of Message Passing (1/4)
How can we know the number of soldiers standing in a line? Each soldier adds 1 to the number heard from the neighbor on one side, then passes the result to the neighbor on the opposite side. Again, a simple example to illustrate message passing: how can every soldier learn the total number of soldiers? Whenever a soldier learns a number, he adds 1 to it and tells the new number to the soldier on his other side, and so on. From this rule, when a soldier receives a number (say n), that number carries the message that there are n soldiers to his left or to his right.

21 Concept of Message Passing (2/4)
In the beginning, every soldier knows that there exists at least one soldier (himself): the intrinsic information. At the start, each soldier knows there is at least one soldier, namely himself; we call this message the intrinsic information. Figure 1. Each node represents a soldier; local rule: +1.

22 Concept of Message Passing (3/4)
Start from the leftmost and rightmost soldiers: the extrinsic information. Next, the leftmost (or rightmost) soldier tells this '1' to his neighbor, who on receiving it adds 1 and tells the next neighbor. As in Figure 2, the second soldier from the left, for example, receives a message of 1 from his left and a message of 4 from his right: one message says "there is 1 soldier to your left", the other says "there are 4 soldiers to your right". These two messages are called the extrinsic information. Figure 2. Extrinsic information flow.

23 Concept of Message Passing (4/4)
The total number = [left + right] + [oneself]: overall information = extrinsic information + intrinsic information. Now every soldier knows how many soldiers there are in the whole line. Taking the second soldier from the left as an example, the total number of soldiers equals the number told by his left neighbor plus the number told by his right neighbor, plus himself. In short: overall information = extrinsic information + intrinsic information. Figure 3. Overall information flow.
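The soldier-counting rule above can be sketched directly as two message sweeps plus a local sum:

```python
# Message passing along a line of soldiers: each node adds 1 (itself, the
# intrinsic information) to the count arriving from one side and passes the
# result to the other side; overall = left message + right message + 1.

def count_soldiers(n):
    left = [0] * n    # left[i]: soldiers strictly to the left of soldier i
    right = [0] * n   # right[i]: soldiers strictly to the right
    for i in range(1, n):
        left[i] = left[i - 1] + 1          # message flowing rightward
    for i in range(n - 2, -1, -1):
        right[i] = right[i + 1] + 1        # message flowing leftward
    # extrinsic + intrinsic: every soldier arrives at the same total
    return [left[i] + right[i] + 1 for i in range(n)]

print(count_soldiers(5))  # [5, 5, 5, 5, 5]
```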

24 Sum Product Algorithm (1/4)
During decoding, the Sum Product Algorithm is applied to derive the extrinsic information (extrinsic probability) and the overall information (overall probability). In this paper, channel transmission is BPSK (binary phase-shift keying) through AWGN (additive white Gaussian noise). A word on the paper's transmission setup: BPSK modulation is used over a channel carrying additive white Gaussian noise. "White" does not refer to a color; it just means the noise contains components of all frequencies, like white light. We do not need these details yet.

25 Sum Product Algorithm (2/4)
A simple example, with a local rule: suppose the check node's constraint is m1 XOR m2 XOR m3 = 0, where m1 is a variable node whose probability of being 0 is P10 and of being 1 is P11, and similarly for the other bits.

26 Sum Product Algorithm (3/4)
Table 1 lists the valid codewords for the rule. (Overall probability with m2 = 0): P10P20P30 + P11P20P31. (Extrinsic probability with m2 = 0): P10P30 + P11P31. From the expressions above, the overall probability that m2 = 0 equals the sum of the probabilities of the valid codewords having m2 = 0 (the sum operation), while the probability of each individual valid codeword equals the product of the probabilities of all symbols within it (the product operation); hence the name Sum-Product Algorithm.
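The two expressions on this slide can be reproduced by literally summing products over the valid codewords of the single check m1 XOR m2 XOR m3 = 0 (the probability values below are illustrative):

```python
import itertools

def probs_for_m2(p):
    """p[i] = (P(mi=0), P(mi=1)). Returns (overall, extrinsic) P(m2 = 0):
    sum over valid codewords (sum) of products of symbol probabilities (product)."""
    overall = extrinsic = 0.0
    for m1, m2, m3 in itertools.product((0, 1), repeat=3):
        if (m1 ^ m2 ^ m3) == 0 and m2 == 0:
            overall += p[0][m1] * p[1][m2] * p[2][m3]
            extrinsic += p[0][m1] * p[2][m3]   # leave out m2's own probability
    return overall, extrinsic

p = [(0.9, 0.1), (0.5, 0.5), (0.8, 0.2)]
overall, extrinsic = probs_for_m2(p)
# overall   = P10*P20*P30 + P11*P20*P31 = 0.9*0.5*0.8 + 0.1*0.5*0.2 = 0.37
# extrinsic = P10*P30     + P11*P31     = 0.9*0.8     + 0.1*0.2     = 0.74
print(overall, extrinsic)
```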

27 Sum Product Algorithm (4/4)
Likelihood Ratio: LRn = P(bit n = 1) / P(bit n = 0); if the value is greater than 1, the bit is more likely to be 1. The extrinsic LR can ultimately be simplified to (the sum of the other bits' LRs) divided by (1 plus the product of the other bits' LRs): for the check m1 XOR m2 XOR m3 = 0, LR_ext(m3) = (LR1 + LR2) / (1 + LR1·LR2), where the cases for m1 or m2 are similar. The upcoming small decoding example will make use of this.
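The simplified LR formula quoted above can be cross-checked against the defining probability ratio for a degree-3 check (illustrative probabilities below are my own):

```python
# Extrinsic likelihood ratio at a check node m1 ^ m2 ^ m3 = 0:
#   LR_ext(m3) = (LR1 + LR2) / (1 + LR1 * LR2)

def extrinsic_lr(l1, l2):
    """Extrinsic LR from the other two bits' LRs, for a degree-3 parity check."""
    return (l1 + l2) / (1 + l1 * l2)

def extrinsic_lr_from_probs(p1, p2):
    """Defining ratio: P(parity of the two other bits = 1) / P(parity = 0)."""
    odd = p1 * (1 - p2) + (1 - p1) * p2
    even = p1 * p2 + (1 - p1) * (1 - p2)
    return odd / even

p1, p2 = 0.8, 0.3                      # P(bit = 1) for the two other bits
l1, l2 = p1 / (1 - p1), p2 / (1 - p2)  # their likelihood ratios
print(abs(extrinsic_lr(l1, l2) - extrinsic_lr_from_probs(p1, p2)) < 1e-9)  # True
```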

28 Concept of Iterative Decoding (1/6)
A simple example: suppose a codeword is received with likelihood ratios (2, 1/4, 2, 2, 1/4); judged bit by bit, this gives the codeword 10110, but "10110" is an invalid codeword. We use the earlier parity-check matrix to explain how iterative decoding works (note: this is not an LDPC code, it is just convenient for explanation). If we made a hard decision on each received likelihood ratio by itself, we would take the received word to be 10110, which is not a valid codeword; so we must consult the "opinions of the other nodes" and perform message passing.

29 Concept of Iterative Decoding (2/6)
A simple example: calculate the extrinsic probability at the check nodes. Using the Sum Product Algorithm just described, the check nodes compute the extrinsic likelihoods a, b, and c. As noted before, the extrinsic LR simplifies to (the sum of the other bits' LRs) divided by (1 plus the product of the other bits' LRs).

30 Concept of Iterative Decoding (3/6)
A simple example: we then obtain the overall probability of the 1st round, and luckily the correct codeword comes out ("10010" satisfies the constraints and is a valid codeword). But since iterative decoding does not necessarily converge in the first round, we run a second round to confirm convergence.

31 Concept of Iterative Decoding (4/6)
A simple example, 2nd round. Note that, as in the earlier soldier example, during message passing a variable node that passes its LR back up must pass it to the other check node it is connected to, not the one the message came from. Hence d = a · λ2 and e = b · λ2.

32 Concept of Iterative Decoding (5/6)
A simple example, 2nd round: as before, by the Sum Product Algorithm the extrinsic LR is (the sum of the other bits' LRs) divided by (1 plus the product of the other bits' LRs).

33 Concept of Iterative Decoding (6/6)
A simple example: we then obtain the overall probability of the 2nd round. The result is the same as in the first round, so the algorithm has converged and we stop iterating ("10010" is a valid codeword).

34 Channel Transmission BPSK through AWGN:
A Gaussian channel with binary input ±a and additive noise of variance σ² = 1. t is the BPSK-modulated signal, tn = (2cn − 1)·a; after the AWGN channel, the modulated signal t has noise added to become y. The posterior probability that any bit cn of the codeword is 1 is then P(cn = 1 | yn) = 1 / (1 + e^(−2a·yn/σ²)). The paper's transmission method is BPSK modulation through a channel carrying additive white Gaussian noise; here the Gaussian variance is set to 1 and the binary input signal is ±a.
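This channel model and its standard posterior can be sketched as follows (the demonstration values of y are my own):

```python
import math

# BPSK over AWGN (standard result; sigma^2 = 1 as in the paper):
#   t_n = (2*c_n - 1) * a,  y_n = t_n + Gaussian noise
#   P(c_n = 1 | y_n) = 1 / (1 + exp(-2 * a * y_n / sigma^2))

def posterior_one(y, a, sigma2=1.0):
    """Probability that the transmitted bit was 1, given the received value y."""
    return 1.0 / (1.0 + math.exp(-2.0 * a * y / sigma2))

a = 2.0
for y in (-2.4, -0.3, 0.0, 1.1, 2.2):
    print(y, round(posterior_one(y, a), 4))
# y = 0 gives probability 0.5; large positive y approaches 1,
# large negative y approaches 0.
```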

35 Decoding Algorithm (1/6)
We refer to the elements of x as bits, to the rows of H as checks, and denote: the set of bits n that participate in check m by N(m); the set of checks in which bit n participates by M(n); the set N(m) with bit n excluded by N(m)\n. As before, the main actors in decoding are the codeword and the parity-check matrix H; multiplying the codeword bits by a row of H performs a check. Some notation: N(m) is the set of variable nodes jointly connected to check node m — pick a check node m, feed it to the function N, and the output is a set of variable nodes. M(n) is the set of check nodes jointly connected to variable node n — pick a variable node n, feed it to the function M, and the output is a set of check nodes. N(m)\n denotes the set N(m) with the variable node n excluded. Let us look at an example.

36 Decoding Algorithm (2/6)
The parity-check relations of an LDPC code, shown as a graph on the slide.

37 Decoding Algorithm (3/6)
This is the corresponding parity-check matrix H. With the notation just introduced: N(1) = {1, 2, 3, 6, 7, 10}, N(2) = {1, 3, 5, 6, 8, 9}, etc.; M(1) = {1, 2, 5}, M(2) = {1, 4, 5}, etc.; N(1)\1 = {2, 3, 6, 7, 10}, N(2)\3 = {1, 5, 6, 8, 9}, etc.; M(1)\1 = {2, 5}, M(2)\4 = {1, 5}, etc.
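The sets N(m) and M(n) can be read directly off any parity-check matrix. The H below is the small assumed matrix from the earlier example, not the 10-column matrix on this slide (which is an image); indices are 1-based to match the slides:

```python
def neighborhoods(H):
    """N[m]: bits participating in check m; M[n]: checks bit n participates in."""
    N = {m + 1: {n + 1 for n, h in enumerate(row) if h} for m, row in enumerate(H)}
    M = {n + 1: {m + 1 for m, row in enumerate(H) if row[n]} for n in range(len(H[0]))}
    return N, M

H = [
    [1, 1, 1, 1, 0, 0],   # small assumed example, not this slide's matrix
    [0, 0, 1, 1, 0, 1],
    [1, 0, 0, 1, 1, 0],
]
N, M = neighborhoods(H)
print(N[1])          # {1, 2, 3, 4}: bits taking part in check 1
print(M[4])          # {1, 2, 3}: checks in which bit 4 takes part
print(N[1] - {1})    # N(1)\1
```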

38 Decoding Algorithm (4/6)
The algorithm has two parts, in which quantities q^x_mn and r^x_mn associated with each non-zero element of the H matrix are iteratively updated. q^x_mn: the probability that bit n of x is x, given the information obtained via checks other than check m. r^x_mn: the probability of check m being satisfied if bit n of x is considered fixed at x and the other bits have a separable distribution given by the probabilities q. If the bipartite graph defined by the matrix H contained no cycles, the algorithm would produce the exact posterior probabilities of all the bits. The paper's message-passing algorithm splits into two parts: variable nodes pass likelihoods to check nodes, and check nodes pass likelihoods to variable nodes. The two key quantities q_mn and r_mn live at the positions of the non-zero elements of H. q_mn is the probability that bit n of the codeword (i.e. the variable node) has value x; it travels from variable node to check node. r_mn is the probability that, when bit n of the codeword is fixed at x and combined with the other variable nodes, the constraint of check node m is satisfied; it travels from check node to variable node.

39 Decoding Algorithm (5/6)
Initialization: q0mn and q1mn are initialized to the values f0n and f1n. Horizontal step: define δq_mn = q0mn − q1mn; for each m, n set δr_mn to the product of δq_mn' over n' in N(m)\n, and then r0mn = (1 + δr_mn)/2, r1mn = (1 − δr_mn)/2. We will walk through this using the LDPC iterative-decoding example described above.

40 Decoding Algorithm (6/6)
Vertical Step: for each n and m, and for x = 0, 1, we update q^x_mn = α_mn · f^x_n · (the product of r^x_m'n over m' in M(n)\m), where α_mn is chosen such that q0mn + q1mn = 1. We can also update the "pseudoposterior probabilities" q0n and q1n, given by q^x_n = α_n · f^x_n · (the product of r^x_mn over m in M(n)).

41 Decoding Example (1/10) Yes, this is the same LDPC parity-check relation shown earlier.

42 Decoding Example (2/10) The corresponding matrix H, together with a codeword.

43 Decoding Example (3/10) BPSK through AWGN:
We simulate a Gaussian channel with binary input ±a and additive noise of variance σ² = 1; t is the BPSK-modulated signal after the AWGN channel. For the BPSK setting we substitute a = 2.

44 Decoding Example (4/10) BPSK through AWGN: Posterior probability:
Taking a hard threshold directly on the posterior probabilities, we find the result is invalid.

45 Recall: Decoding Algorithm
Input: the posterior probabilities pn(x). Initialization: let qmn(x) = pn(x).
1. Horizontal step: (a) form the δq matrix from qmn(0) − qmn(1) (at the sparse non-zero locations); (b) for each non-zero location (m, n), let δrmn be the product of the δq matrix elements along its row, excluding the (m, n) position; (c) let rmn(1) = (1 − δrmn)/2 and rmn(0) = (1 + δrmn)/2.
2. Vertical step: for each non-zero location (m, n), let qmn(0) be the product along its column, excluding the (m, n) position, times pn(0); similarly for qmn(1). Then normalize.
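This recalled δq/δr recipe can be sketched directly in Python. The H and the input probabilities below are illustrative assumptions (the slide's actual matrices are images not present in the transcript):

```python
def decode(H, p1, max_iter=100):
    """Sum-product decoding per the delta-q / delta-r recipe.
    p1[n] = posterior P(bit n = 1). Returns (hard decision, success flag)."""
    M, N = len(H), len(H[0])
    edges = [(m, n) for m in range(M) for n in range(N) if H[m][n]]
    q = {(m, n): (1 - p1[n], p1[n]) for m, n in edges}   # init: q_mn(x) = p_n(x)
    r = {}
    for _ in range(max_iter):
        # Horizontal step: delta-r is the product of delta-q along the row,
        # excluding the (m, n) position itself.
        for m, n in edges:
            dr = 1.0
            for n2 in range(N):
                if H[m][n2] and n2 != n:
                    dr *= q[(m, n2)][0] - q[(m, n2)][1]
            r[(m, n)] = ((1 + dr) / 2, (1 - dr) / 2)
        # Pseudoposterior, for the tentative hard decision.
        hard = [0] * N
        for n in range(N):
            prod0, prod1 = 1 - p1[n], p1[n]
            for m in range(M):
                if H[m][n]:
                    prod0 *= r[(m, n)][0]
                    prod1 *= r[(m, n)][1]
            hard[n] = 1 if prod1 > prod0 else 0
        # Vertical step: product along the column excluding (m, n), then normalize.
        for m, n in edges:
            q0, q1 = 1 - p1[n], p1[n]
            for m2 in range(M):
                if H[m2][n] and m2 != m:
                    q0 *= r[(m2, n)][0]
                    q1 *= r[(m2, n)][1]
            q[(m, n)] = (q0 / (q0 + q1), q1 / (q0 + q1))
        if all(sum(H[m][n] * hard[n] for n in range(N)) % 2 == 0 for m in range(M)):
            return hard, True                  # valid codeword: stop iterating
    return hard, False                         # declare failure after max_iter

H = [
    [1, 1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 1],
    [1, 0, 0, 1, 1, 0],
]
# The channel evidence slightly favors the wrong value for bit 1 (0.45 vs 0.55);
# one round of message passing corrects it.
print(decode(H, [0.45, 0.1, 0.9, 0.2, 0.9, 0.9]))  # ([1, 0, 1, 0, 1, 1], True)
```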

46 Decoding Example (5/10) Initialization: Let qmn(x) = pn(x).
We now run through the algorithm by hand.

47 Decoding Example (6/10) Iteration 1: Horizontal Step: (a) (b)

48 Decoding Example (7/10) Iteration 1: Horizontal Step: (c)

49 Decoding Example (8/10) Iteration 1: Vertical Step: (a) (b)
qmn(1) is obtained by computation and simplification, following the algorithm.

50 Decoding Example (9/10) Iteration 1: After Vertical Step: Hc mod 2 ≠ 0
Decoding is not finished yet (Hc mod 2 ≠ 0), but one bit has been corrected. Recall: update the pseudoposterior probabilities qn(1) and qn(0), given by the vertical-step formula.

51 Decoding Example (10/10) After two more iterations:
However, a failure is declared if some maximum number of iterations (e.g., 100) occurs without a valid decoding. Successfully decoded: Hc mod 2 = 0. The codeword is successfully recovered; but note that this LDPC decoder is not a bounded-distance decoder, so a separate termination condition must be set.

52 Performance (1/4)
This compares the performance of LDPC codes with textbook codes and with state-of-the-art codes. Textbook codes: the curve (7, 1/2) shows the performance of a rate-1/2 convolutional code with constraint length 7, the de facto standard for satellite communications. The curve (7, 1/2)C shows the performance of the concatenated code composed of the same convolutional code and a Reed-Solomon code. Finally, the paper tests the performance of LDPC codes against textbook codes and state-of-the-art codes. Two textbook codes are used, labelled (7, 1/2) and (7, 1/2)C: the former is a convolutional code with constraint length 7 and rate 1/2; the latter is essentially the same code but concatenated with a Reed-Solomon code.

53 Performance (2/4) State of the art:
The curve (15, 1/4)C shows the performance of a concatenated code developed at JPL based on a constraint-length-15, rate-1/4 convolutional code; it is extremely expensive and computer intensive. The curve Turbo shows the performance of the rate-1/2 Turbo code. Two state-of-the-art codes are used, labelled (15, 1/4)C and Turbo: the former is a convolutional code with constraint length 15 and rate 1/4, a rather costly and computationally complex code; the latter is the Turbo code.

54 Performance (3/4)
LDPC codes: from left to right, the codes had the following parameters (N; K; R): (29507; –; 0.322) (Construction 2B); (15000; –; 0.333) (Construction 2A); (14971; –; 0.332) (Construction 2B); (65389; 32621; 0.499) (Construction 1B); (19839; –; 0.496) (Construction 1B); (13298; –; 0.248) (Construction 1B); (29331; 19331; 0.659) (Construction 1B). The remaining curves in the figure, from left to right, are codes built from H matrices created by the different constructions described earlier. Parameter meanings: N is the block length (codeword length), K is the length of the original message, and R is the rate, R = K/N.

55 Performance (4/4)
The vertical axis is the empirical bit error probability. GL codes: it should be emphasized that all the errors made by the GL codes were detected errors: the decoding algorithm reported the fact that it had failed. The results show that the performance of LDPC codes is clearly better than that of standard convolutional and concatenated codes, and indeed, like Turbo codes, comes quite close to the Shannon limit. Judging from the LDPC curves in the figure, keeping the column weight of the code as low as possible seems to give better results (Construction 2A), and as expected, codes with longer block lengths perform better. In terms of Eb/N0, the best results are for codes with rate between 1/2 and 1/3. Figure: LDPC codes' performance over the Gaussian channel compared with that of standard textbook codes and state-of-the-art codes.

56 Cost (1/2)
In a brute-force approach, the time to create the code scales as N³, where N is the block length. Encoding time scales as N², but encoding involves only binary arithmetic, so for the block lengths studied here it takes considerably less time than the simulation of the Gaussian channel. It may be possible to reduce encoding time using sparse matrix techniques.

57 Cost (2/2)
Decoding involves approximately 6Nt floating-point multiplies per iteration, so the total number of operations per decoded bit (assuming 20 iterations) is about 120t/R, independent of block length: (6Nt × 20)/K = 120t × (N/K) = 120t/R. For the codes presented here, this is about 800 operations.
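The arithmetic can be checked directly; the values t = 3 and R = 0.45 below are illustrative assumptions that happen to reproduce the "about 800" figure:

```python
# Operations per decoded bit: (6*N*t multiplies per iteration * iters) / K
# = 6*t*iters / R, since R = K/N; the block length N cancels out.
def ops_per_bit(t, R, iters=20):
    return 6 * t * iters / R

print(ops_per_bit(3, 0.45))  # ~800 for column weight 3 and rate ~0.45
```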

58 Thank you
Thank you sincerely for listening.

59 How It Works

60 Probabilities Used

61 Vertical Step I: Single Tier Approximate Subset

62 Vertical Step II: Multi Tier Approximate Subset

63 Horizontal Step

