
Digital Video Solutions to Midterm Exam 2012 Edited by Yang-Ting Chou Confirmed by Prof. Jar-Ferr Yang LAB: 92923 R, TEL: ext. 621


1 Digital Video Solutions to Midterm Exam 2012 Edited by Yang-Ting Chou Confirmed by Prof. Jar-Ferr Yang LAB: 92923 R, TEL: ext. 621 E-mail: yangting115@gmail.com Page of MPL: http://mediawww.ee.ncku.edu.tw

2 Score statistics: AVG = 120.806, STDEV = 42.9794, MAX = 168, MIN = 21

3 (a) I These entropy encoders compress data by replacing each fixed-length input symbol with the corresponding variable-length prefix-free output codeword. The length of each codeword is approximately proportional to the negative logarithm of its probability, so the most common symbols use the shortest codes. According to Shannon's source coding theorem, the optimal code length for a symbol is −log_b(P), where b is the number of symbols used to make the output codes and P is the probability of the input symbol. Two of the most common entropy coding techniques are Huffman coding and arithmetic coding. If the approximate entropy characteristics of a data stream are known in advance, a simpler static code may be useful. Such static codes include universal codes and Golomb codes.
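As a quick numeric illustration of the −log_b(P) rule (with a made-up probability set, not taken from the exam), the sketch below prints each symbol's ideal code length and the entropy bound:

```python
import math

# Hypothetical symbol probabilities (illustration only, not from the exam).
probs = {"A": 0.5, "B": 0.25, "C": 0.125, "D": 0.125}

# The ideal code length of a symbol is -log_b(P); here b = 2 (binary codes).
for sym, p in probs.items():
    print(f"{sym}: p = {p:.3f}, ideal length = {-math.log2(p):.2f} bits")

# The entropy is the probability-weighted average of these lengths, i.e. the
# lower bound on the average codeword length of any prefix-free code.
entropy = -sum(p * math.log2(p) for p in probs.values())
print(f"entropy = {entropy:.3f} bits/symbol")
```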

4 (b) (c)

5 (d)

6 (e) (f)

7 II 2.1 (a) p(A) = 0.85, p(B) = 0.05, p(C) = 0.05, p(D) = 0.02, p(E) = 0.02, p(F) = 1 - p(A) - p(B) - p(C) - p(D) - p(E) = 0.01
Huffman code (the codes in parentheses are the equivalent assignment with 0 and 1 swapped at every branch):
A (0.85): 0 (1)
B (0.05): 11 (00)
C (0.05): 100 (011)
D (0.02): 1011 (0100)
E (0.02): 10100 (01011)
F (0.01): 10101 (01010)
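A minimal sketch of the Huffman construction for these probabilities (standard greedy merging with Python's heapq; tie-breaking may give a different but equally optimal set of codeword lengths than the slide):

```python
import heapq
from itertools import count

# Probabilities from problem 2.1 (F obtained as 1 minus the rest).
probs = {"A": 0.85, "B": 0.05, "C": 0.05, "D": 0.02, "E": 0.02, "F": 0.01}

def huffman(probs):
    """Build a prefix-free binary code by repeatedly merging the two least
    probable subtrees. Any Huffman code has the same average length."""
    tie = count()  # tie-breaker so heapq never has to compare the dicts
    heap = [(p, next(tie), {sym: ""}) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)  # least probable subtree  -> bit 1
        p2, _, c2 = heapq.heappop(heap)  # next least probable     -> bit 0
        merged = {s: "1" + w for s, w in c1.items()}
        merged.update({s: "0" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(tie), merged))
    return heap[0][2]

code = huffman(probs)
avg = sum(probs[s] * len(w) for s, w in code.items())
print(code)
print(f"average length = {avg:.2f} bits/symbol")
```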

8 (b) Optimal symmetrical RVLC (codeword candidates grown as in the tree of the original figure):
A 0
B 11
C 101
D 1001
E 10001
F 100001

9 (c) Optimal asymmetrical RVLC (candidates that create a prefix conflict, marked in the original figure, are skipped):
A 0
B 11
C 101
D 1001
E 10001
F 100001
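A small check, not part of the original solution, that such a code is reversible, i.e. decodable from either end because no codeword is a prefix or a suffix of another:

```python
# Candidate RVLC from the slides above.
rvlc = {"A": "0", "B": "11", "C": "101", "D": "1001", "E": "10001", "F": "100001"}

def is_reversible(words):
    """True if the code is both prefix-free and suffix-free."""
    ws = list(words)
    for i, a in enumerate(ws):
        for b in ws[i + 1:]:
            short, long_ = sorted((a, b), key=len)
            if long_.startswith(short) or long_.endswith(short):
                return False
    return True

print(is_reversible(rvlc.values()))  # True: decodable forwards and backwards
```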

10 2.2 (a) (b) (c) (note on the original slide: "not given in the problem")
RLC        Skip   SSSS   Value   Encoded
(1, 4)      1      3     100     1111001 100
(0, -1)     0      1     0       00 0
(1, 1)      1      1     1       1100 1
(3, 3)      3      2     11      111110111 11
EOB         0      0             1010
(The zig-zag-scanned quantized coefficients are shown in the original figure.)
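The (run, value) pairs in the table can be produced mechanically from the zig-zag-scanned coefficients; the sketch below uses a made-up coefficient sequence chosen only so that it reproduces the pairs above:

```python
# Run-level coding of zig-zag-scanned AC coefficients (illustrative input;
# the actual block of problem 2.2 is only shown in the exam figure).
def run_level(coeffs):
    pairs, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1          # count zeros preceding the next nonzero value
        else:
            pairs.append((run, c))
            run = 0
    pairs.append("EOB")       # trailing zeros become the end-of-block symbol
    return pairs

print(run_level([0, 4, -1, 0, 1, 0, 0, 0, 3, 0, 0, 0]))
# [(1, 4), (0, -1), (1, 1), (3, 3), 'EOB']
```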

11

12 2.3 (a) Entropy: (derivation shown in the original slide)
(b) Huffman code for the pair source:
A2A2 (p = 0.64): 0 (1)
A2A1 (p = 0.16): 11 (00)
A1A2 (p = 0.16): 100 (011)
A1A1 (p = 0.04): 101 (010)
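Since the pair probabilities factor as p(A1) = 0.2 and p(A2) = 0.8, the entropy and the average Huffman length of the extended source can be checked numerically (a sketch, with the code lengths taken from the table above):

```python
import math

# Pair probabilities from problem 2.3; they factor as p(A1) = 0.2, p(A2) = 0.8.
pairs = {"A2A2": 0.64, "A2A1": 0.16, "A1A2": 0.16, "A1A1": 0.04}
lengths = {"A2A2": 1, "A2A1": 2, "A1A2": 3, "A1A1": 3}  # from the code above

H_pair = -sum(p * math.log2(p) for p in pairs.values())
L_pair = sum(pairs[s] * lengths[s] for s in pairs)
print(f"entropy        = {H_pair:.4f} bits/pair = {H_pair / 2:.4f} bits/symbol")
print(f"Huffman length = {L_pair:.2f} bits/pair = {L_pair / 2:.2f} bits/symbol")
```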

13 (c) {A2 A1 A2 A2}: grouped into pairs {A2A1, A2A2} and encoded with the Huffman code from the previous slide: {110} ({001} with the complementary code).
(d) Occurring symbols: {A2 A1 A2 A2}. Arithmetic coding narrows the interval starting from [0.0, 1.0):
A2: [0.2, 1.0)  ->  A1: [0.2, 0.36)  ->  A2: [0.232, 0.36)  ->  A2: [0.2576, 0.36)
Any W with 0.2576 < W < 0.36 identifies the sequence; the binary codeword 0101 (W = 0.3125) suffices.
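A short sketch of the interval narrowing in (d), assuming A1 is mapped to the lower fifth [0, 0.2) of each interval and A2 to the remaining [0.2, 1.0):

```python
# Cumulative-probability model for the two symbols (p(A1) = 0.2, p(A2) = 0.8).
model = {"A1": (0.0, 0.2), "A2": (0.2, 1.0)}

low, high = 0.0, 1.0
for sym in ["A2", "A1", "A2", "A2"]:
    lo, hi = model[sym]
    low, high = low + (high - low) * lo, low + (high - low) * hi
    print(f"after {sym}: [{low:.4f}, {high:.4f})")

# Any number W in the final interval identifies the sequence;
# 0.0101 (binary) = 0.3125 lies in [0.2576, 0.3600), so the code is 0101.
```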

14 2.4 Synthesis filter with taps at n = -2, -1, 0, 1, 2 (filter values shown in the original figure).
n:          -3  -2  -1   0   1   2   3   4   5   6   7   8   9
x[n]:        3   4   1  -3   1   4   3   2  -2   2   3   4   1
y0[2n]:      0  -2.75  0  4.125  0  1  0  1
y1[2n+1]:    0   0.5   0   0   0   0  -4   0   0
x0[n]:      -2.75  0.6875  4.125  2.5625  1   1
x1[n]:      -0.25  0.3125  -0.125  0.4375  1  -3
x'[n] = x0[n] + x1[n]:  -3  1  4  3  2  -2
The last row is the reconstructed (synthesized) data; it matches x[n] for n = 0 to 5.
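For reference, a generic sketch of the two-channel synthesis step (upsample each subband by 2, filter, add); the filters g0 and g1 below are placeholders, since the actual synthesis filters of problem 2.4 are defined only in the exam figure:

```python
import numpy as np

def synthesize(y0, y1, g0, g1):
    """Two-channel synthesis: upsample each subband by 2, filter, and add."""
    up0 = np.zeros(2 * len(y0))
    up0[0::2] = y0                 # even-phase samples from the lowpass band
    up1 = np.zeros(2 * len(y1))
    up1[1::2] = y1                 # odd-phase samples from the highpass band
    return np.convolve(up0, g0) + np.convolve(up1, g1)

g0 = np.array([0.5, 1.0, 0.5])     # made-up lowpass synthesis filter
g1 = np.array([-0.5, 1.0, -0.5])   # made-up highpass synthesis filter
print(synthesize([1.0, 2.0], [0.5, -0.5], g0, g1))
```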

15 2.5 (a) (b) (c) They are not unitary transforms, since AA^T ≠ I (i.e. A^T is not the inverse of A); the missing normalization is compensated in the quantization or scaling stage.
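A numeric illustration, assuming for the sake of example the H.264/AVC 4x4 forward integer transform (the matrix in the exam figure is not reproduced in this transcript): AA^T comes out diagonal but not the identity, and the row norms are absorbed into the quantization/scaling stage:

```python
import numpy as np

# H.264/AVC 4x4 forward integer transform matrix (used here as an example
# of a non-unitary transform; assumption, not taken from the exam figure).
A = np.array([[1,  1,  1,  1],
              [2,  1, -1, -2],
              [1, -1, -1,  1],
              [1, -2,  2, -1]])

print(A @ A.T)
# diag(4, 10, 4, 10): the rows are orthogonal but not unit-norm, so A is not
# unitary; the factors 4 and 10 are folded into the quantization/scaling step.
```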

16 2.6 p(A) = 0.7, p(B) = 0.2, p(C) = 0.05, p(D) = p(E) = 0.02, p(F) = 0.01
(a) Huffman code:
A 0 (1)
B 10 (01)
C 110 (001)
D 1111 (0000)
E 11100 (00011)
F 11101 (00010)
RVL code:
A 0 (1)
B 101 (010)
C 11011 (00100)
D 1111111 (0000000)
E 111000111 (000111000)
F 111010111 (000101000)

17 (b) Optimal symmetrical RVLC (construction: the codewords are grown from the back, as in the tree of the original figure):
A 0
B 11
C 101
D 1001
E 10001
F 100001

18 (c) Optimal asymmetrical RVLC (construction: the codewords are grown from the front; candidates that cause a prefix conflict, marked in the original figure, are skipped):
A 0
B 11
C 101
D 1001
E 10001
F 100001
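A quick numeric comparison, not in the original slides, of the average codeword lengths of the Huffman code from (a) and the symmetrical RVLC from (b):

```python
# Average codeword length for problem 2.6.
probs   = {"A": 0.7, "B": 0.2, "C": 0.05, "D": 0.02, "E": 0.02, "F": 0.01}
huffman = {"A": "0", "B": "10", "C": "110", "D": "1111", "E": "11100", "F": "11101"}
rvlc    = {"A": "0", "B": "11", "C": "101", "D": "1001", "E": "10001", "F": "100001"}

for name, code in [("Huffman", huffman), ("RVLC", rvlc)]:
    avg = sum(probs[s] * len(code[s]) for s in probs)
    print(f"{name}: {avg:.2f} bits/symbol")
# Huffman: 1.48, RVLC: 1.49 -- reversibility costs only 0.01 bit/symbol here.
```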

19 2.7 SPIHT (a) (b)
Coefficient block: 4x4 matrix shown in the original figure; the low-frequency coefficients are 42, 17, -19 and 13.
Initialization:
LIP: { (0,0)→42, (0,1)→17, (1,0)→-19, (1,1)→13 }
LIS: { D(0,1), D(1,0), D(1,1) }
LSP: { }
First sorting pass (threshold T = 2^5 = 32, since the largest magnitude is 42):
Significance pass: 10 0 0 0 000
Refinement pass: (none)
After the pass: LIP: { (0,1)→17, (1,0)→-19, (1,1)→13 }, LIS: { D(0,1), D(1,0), D(1,1) }, LSP: { (0,0)→42 }

20 Second sorting pass (T = 16):
Significance pass: 10 11 0 000
Refinement pass: 0
After the pass: LIP: { (1,1)→13 }, LIS: { D(0,1), D(1,0), D(1,1) }, LSP: { (0,0)→42, (0,1)→17, (1,0)→-19 }
Third sorting pass (T = 8):
Significance pass: 10 0 1 0 0 0 10 0
Refinement pass: 1 0 0 (the output is truncated at 25 bits)
Generated bitstream: 10 0 0 0 000 10 11 0 000 0 10 0 1 0 0 0 1

21 (c) Decoded approximations from the same bitstream:
Generated bitstream: 10 0 0 0 000 10 11 0 000 0 10 0 1 0 0 0 1
(1) After the first pass: (0,0) is reconstructed as 48; all other coefficients are 0.
(2) After the second pass: (0,0)→40 (refinement bit 0), (0,1)→24, (1,0)→-24; the rest remain 0.
(3) After the third pass: (0,0)→40, (0,1)→24, (1,0)→-24, (1,1)→12; the rest remain 0.
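These values follow from the usual midpoint reconstruction rule; a small sketch (the rule is a common SPIHT decoder convention, assumed here rather than spelled out on the slides):

```python
# Midpoint reconstruction: a coefficient that first becomes significant at
# threshold T is reconstructed at 1.5*T, and each later refinement bit moves
# the estimate by +/- T/4, then T/8, and so on.
def reconstruct(sign, first_T, refinement_bits):
    value = 1.5 * first_T
    step = first_T / 4.0
    for bit in refinement_bits:
        value += step if bit else -step
        step /= 2.0
    return sign * value

print(reconstruct(+1, 32, [0]))  # 42: 48 after pass 1, refined to 40 in pass 2
print(reconstruct(+1, 16, []))   # 17: reconstructed as 24
print(reconstruct(-1, 16, []))   # -19: reconstructed as -24
print(reconstruct(+1, 8, []))    # 13: reconstructed as 12
```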

22 2.8 JPEG-LS Block Diagram (a)

23 (b) Fixed Predictor
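The fixed predictor referred to in (b) is the JPEG-LS median edge detector (MED); a minimal sketch of its three cases:

```python
# JPEG-LS fixed predictor (median edge detector): a is the left neighbour,
# b the one above, c the one above-left of the current sample.
def med_predict(a, b, c):
    if c >= max(a, b):
        return min(a, b)      # edge above or to the left: clamp to the smaller
    if c <= min(a, b):
        return max(a, b)      # edge in the other direction: clamp to the larger
    return a + b - c          # smooth region: planar (gradient) prediction

print(med_predict(100, 120, 130))  # 100: predict min(a, b)
print(med_predict(100, 120, 90))   # 120: predict max(a, b)
print(med_predict(100, 120, 110))  # 110: planar prediction a + b - c
```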

24 2.9 (a) Significance Propagation Pass (Pass 1). ZC: Zero Coding, SC: Sign Coding. The original figure marks the already-significant coefficients and the samples coded in Pass 1 (each with its ZC/SC primitive) on the bit-plane.

25 (a) Magnitude Refinement Pass (Pass 2). The original figure marks the Pass 1 samples and the samples refined in Pass 2 on the same bit-plane.

26 (a) Clean-up Pass (Pass 3). The original figure marks the Pass 1 and Pass 2 samples together with the remaining samples, coded in Pass 3 either with ZC & SC or with run-length coding (RLC).

27 (b) Context labels for the marked samples a, b, c, d (positions shown in the original figure):
a: ZC, LL band, kh[j] = 1, kv[j] = 1, kd[j] = 0, so ksig[j] = 7
b: SC, χh[j] = 0, χv[j] = 0, so ksign[j] = 9
c: ZC, LL band, kh[j] = 1, kv[j] = 0, kd[j] = 0, so ksig[j] = 5
d: SC, χh[j] = 1, χv[j] = 0, so ksign[j] = 12

28 (b) Assignment of context labels for significance coding ("x" means "don't care"):
ksig[j]   LL and LH blocks          HL blocks                 HH blocks
          Σh[j]  Σv[j]  Σd[j]       Σh[j]  Σv[j]  Σd[j]       Σd[j]  Σh[j]+Σv[j]
8          2      x      x           x      2      x           ≥3      x
7          1     ≥1      x          ≥1      1      x            2     ≥1
6          1      0     ≥1           0      1     ≥1            2      0
5          1      0      0           0      1      0            1     ≥2
4          0      2      x           2      0      x            1      1
3          0      1      x           1      0      x            1      0
2          0      0     ≥2           0      0     ≥2            0     ≥2
1          0      0      1           0      0      1            0      1
0          0      0      0           0      0      0            0      0
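For the LL/LH column of this table, the lookup can be written directly as a small helper (an illustrative sketch, not code from the slides); the two ZC samples from the previous slide come out as contexts 7 and 5:

```python
# Illustrative helper for the LL/LH column of the significance-coding table:
# sh, sv, sd are the numbers of significant horizontal, vertical and diagonal
# neighbours of the current sample.
def ksig_ll_lh(sh, sv, sd):
    if sh == 2:
        return 8
    if sh == 1:
        if sv >= 1:
            return 7
        return 6 if sd >= 1 else 5
    # sh == 0 from here on
    if sv == 2:
        return 4
    if sv == 1:
        return 3
    return 2 if sd >= 2 else (1 if sd == 1 else 0)

print(ksig_ll_lh(1, 1, 0))  # 7 -> sample a on the previous slide
print(ksig_ll_lh(1, 0, 0))  # 5 -> sample c
```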

29 (b) Assignment of context labels and flipping factor for sign coding:
χh[j]   χv[j]   ksign   χflip
 1        1      13       1
 1        0      12       1
 1       -1      11       1
 0        1      10       1
 0        0       9       1
 0       -1      10      -1
-1        1      11      -1
-1        0      12      -1
-1       -1      13      -1
χh[j], χv[j]: neighbourhood sign status of the current sample.
-1: one or both neighbours negative. 0: both insignificant, or both significant but with opposite signs. 1: one or both neighbours positive.

30 (b) Assignment of context labels for magnitude refinement coding:
σ̃[j]   ksig[j]   kmag
 0        0        14
 0       >0        15
 1        x        16
σ̃[j]: remains zero until after the first magnitude refinement bit of sample j has been coded; for subsequent refinement bits, σ̃[j] = 1.
ksig[j]: context label for significance coding of sample j.

31 2.10 (a) Diamond Search
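All the search patterns in this problem (diamond, four-step, hexagon) evaluate candidate positions with the same block-matching criterion, typically the sum of absolute differences (SAD); a minimal sketch with synthetic frames (frame size and block position are made up):

```python
import numpy as np

def sad(cur, ref, bx, by, mvx, mvy, n=8):
    """SAD between the n x n current block at (bx, by) and the reference
    block displaced by the candidate motion vector (mvx, mvy)."""
    c = cur[by:by + n, bx:bx + n].astype(int)
    r = ref[by + mvy:by + mvy + n, bx + mvx:bx + mvx + n].astype(int)
    return int(np.abs(c - r).sum())

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(-3, 2), axis=(0, 1))  # cur[y, x] = ref[y + 3, x - 2],
                                                # so the interior block below
                                                # matches best at (mvx, mvy) = (-2, 3)
print(sad(cur, ref, 16, 16, -2, 3))  # 0: the correct vector
print(sad(cur, ref, 16, 16, 0, 0))   # large: the zero vector does not match
```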

32 (-2, 3): 9 + 6 + 4 = 19 search points (search path shown in the original figure)

33 (2, -7): 9 + 6 + 5 + 5 + 4 = 29 search points (search path shown in the original figure)

34 (b) Four Step Search

35 (-2, 3): 9 + 5 + 8 = 22 search points (search path shown in the original figure)

36 (2, -7): 9 + 5 + 3 + 3 + 8 = 28 search points (search path shown in the original figure)

37 (c) Enhanced Hexagon Search

38 (-2, 3): 7 + 3 + 2 = 12 search points (search path shown in the original figure)

39 (2, -7): 7 + 3 + 3 + 3 + 2 = 18 search points (search path shown in the original figure)

40 2.11

