
1 The LOCO-I Lossless Image Compression Algorithm: Principles and Standardization into JPEG-LS. Authors: M. J. Weinberger, G. Seroussi, G. Sapiro. Source: IEEE Transactions on Image Processing, Vol. 9, No. 8, August 2000, pp. 1309-1324. Speaker: Chia-Chun Wu (吳佳駿). Date: 2004/10/20.

2 Outline: 1. Introduction 2. Example 3. Modeler 4. Regular mode 5. Run mode 6. Coded data 7. Results 8. Conclusion 9. Comments

3 1. Introduction (1/2) LOCO-I (LOw COmplexity LOssless COmpression for Images) is the algorithm at the core of JPEG-LS, the standard for lossless and near-lossless compression of continuous-tone images.

4 1. Introduction (2/2) Fig. 1: JPEG-LS block diagram. The encoder output is the coded bitstream (1100 0000…, the beginning of the coded data shown in Section 6).

5 2. Example Fig. 2: 4 x 4 example image data, padded with zeros (補0 = pad with 0) along its borders. Each current sample Ix is coded from the causal template Rc, Rb, Rd on the previous row and Ra to its left. Example 1 in the following slides uses the neighbourhood Rc = 64, Rb = 145, Rd = 145, Ra = 100, Ix = 145; Example 2 uses a run of samples equal to 145.

6 3. Modeler 3.1 Compute local gradients 3.2 Local gradient quantization 3.3 Quantized gradient merging 3.4 Select the mode

7 3.1 Compute local gradients. Ix is the value of the current sample in the image; its causal neighbours are Rc, Rb, Rd on the previous row and Ra to the left. Three gradients are computed: g1 = Rd - Rb, g2 = Rb - Rc, g3 = Rc - Ra. Example 1 (Rc = 64, Rb = 145, Rd = 145, Ra = 100): (g1, g2, g3) = (0, 81, -36). Example 2 (all neighbours equal): (g1, g2, g3) = (0, 0, 0).
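
A minimal C sketch of this step (names follow the slides; the causal template is Ra = left, Rb = above, Rc = above-left, Rd = above-right):

    /* Local gradients computed from the causal template around Ix. */
    void compute_gradients(int Ra, int Rb, int Rc, int Rd, int g[3])
    {
        g[0] = Rd - Rb;   /* g1 */
        g[1] = Rb - Rc;   /* g2 */
        g[2] = Rc - Ra;   /* g3 */
    }
    /* Example 1: Ra = 100, Rb = 145, Rc = 64, Rd = 145 -> g = {0, 81, -36} */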

8 3.2 Local gradient quantization. Q1, Q2, Q3 are the region numbers of the quantized local gradients. Each gradient is quantized by region: {0} maps to 0; ±{1, 2} to ±1; ±{3, 4, 5, 6} to ±2; ±{7, 8, ..., 20} to ±3; ±{21, ...} to ±4. Example 1: (g1, g2, g3) = (0, 81, -36) ⇒ (Q1, Q2, Q3) = (0, 4, -4). Example 2: (g1, g2, g3) = (0, 0, 0) ⇒ (Q1, Q2, Q3) = (0, 0, 0).
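
A C sketch of the per-gradient quantizer, using the region boundaries from the table on this slide (they correspond to the thresholds 3, 7, and 21):

    /* Quantize one local gradient into a region number in {-4, ..., 4}. */
    int quantize_gradient(int g)
    {
        int sign = (g < 0) ? -1 : 1;
        int mag  = (g < 0) ? -g : g;
        if (mag == 0)  return 0;
        if (mag <= 2)  return sign * 1;   /* |g| in {1, 2}       */
        if (mag <= 6)  return sign * 2;   /* |g| in {3, ..., 6}  */
        if (mag <= 20) return sign * 3;   /* |g| in {7, ..., 20} */
        return sign * 4;                  /* |g| >= 21           */
    }
    /* Example 1: (0, 81, -36) -> (0, 4, -4) */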

9 3.3 Quantized gradient merging. If the first non-zero element of the vector (Q1, Q2, Q3) is negative, replace it with (-Q1, -Q2, -Q3) and set SIGN = -1; otherwise set SIGN = +1. Example 1: (Q1, Q2, Q3) = (0, 4, -4), whose first non-zero element is positive ⇒ SIGN = +1. Example 2: (Q1, Q2, Q3) = (0, 0, 0) ⇒ SIGN = +1.
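
A C sketch of the merging rule; it flips the sign of the whole vector when the first non-zero element is negative and returns SIGN:

    /* Merge the symmetric contexts (Q1,Q2,Q3) and (-Q1,-Q2,-Q3). */
    int merge_context(int Q[3])
    {
        int sign = 1;
        for (int i = 0; i < 3; i++) {
            if (Q[i] != 0) {
                if (Q[i] < 0) sign = -1;
                break;
            }
        }
        if (sign == -1)
            for (int i = 0; i < 3; i++) Q[i] = -Q[i];
        return sign;
    }
    /* Example 1: (0, 4, -4) stays (0, 4, -4), SIGN = +1 */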

10 3.4 Select the mode. If the quantized local gradients are not all zero, choose the regular mode; if Q1 = Q2 = Q3 = 0, use the run mode. Example 1: (0, 4, -4) ⇒ regular mode. Example 2: (0, 0, 0) ⇒ run mode.

11 4. Regular mode 4.1 Compute the fixed prediction 4.2 Adaptive correction 4.3 Compute the prediction error 4.4 Modulo reduction of the error 4.5 Error mapping 4.6 Compute the Golomb coding parameter k 4.7 Golomb Code 4.8 Mapped-error encoding 4.9 Update the variables

12 4.1 Compute the fixed prediction. Ra, Rb, and Rc are used to predict Ix; Px is the predicted value of the sample Ix: Px = min(Ra, Rb) if Rc ≥ max(Ra, Rb); Px = max(Ra, Rb) if Rc ≤ min(Ra, Rb); Px = Ra + Rb - Rc otherwise. Example 1 (Rc = 64, Rb = 145, Rd = 145, Ra = 100, Ix = 145): Rc = 64 ≤ min(100, 145) = 100 ⇒ Px = max(100, 145) = 145.
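
A C sketch of this fixed predictor (the median edge detector used by JPEG-LS):

    /* Fixed (MED) prediction of Ix from Ra, Rb, Rc. */
    int fixed_prediction(int Ra, int Rb, int Rc)
    {
        int mn = (Ra < Rb) ? Ra : Rb;
        int mx = (Ra > Rb) ? Ra : Rb;
        if (Rc >= mx) return mn;
        if (Rc <= mn) return mx;
        return Ra + Rb - Rc;
    }
    /* Example 1: Ra = 100, Rb = 145, Rc = 64 -> Rc <= min(Ra,Rb), so Px = 145 */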

13 4.2 Adaptive correction. C is the prediction correction value, an integer estimate of the average accumulated error B/N (B and N are the context counters defined in Section 4.9). The corrected prediction is Px = Px + C if SIGN = +1, and Px = Px - C if SIGN = -1. Example 1: B = -1, N = 2 ⇒ C = -1; Px = 145, SIGN = +1 ⇒ Px = Px + C = 145 + (-1) = 144.
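
A rough C sketch of the correction step. It assumes C is taken as the floored average error B/N, which reproduces the example on this slide (B = -1, N = 2 gives C = -1); the JPEG-LS standard instead keeps C per context and updates it incrementally without a division.

    /* Division rounded toward minus infinity (assumption for this sketch). */
    int floor_div(int a, int b)
    {
        int q = a / b;
        if ((a % b != 0) && ((a < 0) != (b < 0))) q--;
        return q;
    }

    /* Apply the bias correction to the fixed prediction Px. */
    int corrected_prediction(int Px, int B, int N, int sign)
    {
        int C = floor_div(B, N);
        return (sign == 1) ? Px + C : Px - C;
    }
    /* Example 1: Px = 145, B = -1, N = 2, SIGN = +1 -> corrected Px = 144 */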

14 4.3 Compute the prediction error. Errval is the prediction error: 1. Errval = Ix - Px. 2. If SIGN = -1, Errval = -Errval. Example 1 (Ix = 145, Px = 144, SIGN = +1): Errval = Ix - Px = 145 - 144 = 1.
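
The same step in C:

    /* Signed prediction error; mirrored contexts (SIGN = -1) flip its sign. */
    int prediction_error(int Ix, int Px, int sign)
    {
        int Errval = Ix - Px;
        return (sign == -1) ? -Errval : Errval;
    }
    /* Example 1: Ix = 145, Px = 144, SIGN = +1 -> Errval = 1 */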

15 4.4 Modulo reduction of the error. The prediction error is reduced modulo 256 to the range relevant for coding, -128 to +127: 1. If Errval < 0, Errval = Errval + 256. 2. If Errval ≥ 128, Errval = Errval - 256. Example 1: Errval = 1 (0 ≤ 1 < 128), so it is unchanged: Errval = 1.
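
A C sketch of the modulo reduction for 8-bit samples:

    /* Reduce the prediction error modulo 256 into the range [-128, 127]. */
    int modulo_reduce(int Errval)
    {
        if (Errval < 0)    Errval += 256;
        if (Errval >= 128) Errval -= 256;
        return Errval;
    }
    /* Example 1: Errval = 1 is already in range and stays 1. */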

16 4.5 Error mapping (1/2). The prediction error Errval is mapped to a non-negative value; MErrval is the mapped non-negative integer used in regular mode: MErrval = 2*Errval if Errval ≥ 0, and MErrval = -2*Errval - 1 if Errval < 0. Example 1: Errval = 1 (≥ 0) ⇒ MErrval = 2 * 1 = 2.
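
A C sketch of the mapping:

    /* Map a signed error to a non-negative integer:
       0, -1, 1, -2, 2, ...  ->  0, 1, 2, 3, 4, ... */
    int map_error(int Errval)
    {
        return (Errval >= 0) ? 2 * Errval : -2 * Errval - 1;
    }
    /* Example 1: Errval = 1 -> MErrval = 2 */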

17 4.5 Error mapping (2/2). Prediction errors 0, -1, 1, -2, 2, -3, 3, …, 127, -128 are mapped to the values 0, 1, 2, 3, 4, 5, 6, …, 254, 255, respectively.

18 4.6 Compute the Golomb coding parameter k. k is the Golomb coding parameter for regular mode: k = min{ k' | 2^k' * N ≥ A }. Example 1: N = 2, A = 64 ⇒ 2^k' * 2 ≥ 64 ⇒ 2^k' ≥ 32 ⇒ k = 5.
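
A C sketch of the parameter computation:

    /* Smallest k such that 2^k * N >= A. */
    int golomb_k(int N, int A)
    {
        int k = 0;
        while ((N << k) < A) k++;
        return k;
    }
    /* Example 1: N = 2, A = 64 -> k = 5 */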

19 4.7 Golomb Code (1/3). Formula: MErrval = q * m + r, with parameter m = 2^k. A codeword has two parts: a unary code for the quotient q (q zeros followed by a terminating 1) and a modified binary code for the remainder r (k bits). Example: MErrval = 13, k = 2 ⇒ m = 2^2 = 4 ⇒ 13 = 3 x 4 + 1 ⇒ q = 3, r = 1 ⇒ unary code 0001, modified binary code 01 ⇒ codeword 000101.
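
A C sketch of the encoder for this code; the codeword is q zeros, a terminating 1, then the k low bits of the value (bits are printed as characters purely for illustration):

    #include <stdio.h>

    /* Golomb code with m = 2^k: unary part for q = n >> k, then r in k bits. */
    void golomb_encode(int n, int k)
    {
        int q = n >> k;
        for (int i = 0; i < q; i++) putchar('0');   /* unary part                 */
        putchar('1');                               /* terminator                 */
        for (int i = k - 1; i >= 0; i--)            /* k-bit remainder, MSB first */
            putchar(((n >> i) & 1) ? '1' : '0');
        putchar('\n');
    }
    /* golomb_encode(13, 2) prints 000101; golomb_encode(2, 5) prints 100010 */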

20 4.7 Golomb Code (2/3). Table I: Golomb code for m = 4 (k = 2).
 n  q  r  codeword  |   n  q  r  codeword
 0  0  0  100       |   8  2  0  00100
 1  0  1  101       |   9  2  1  00101
 2  0  2  110       |  10  2  2  00110
 3  0  3  111       |  11  2  3  00111
 4  1  0  0100      |  12  3  0  000100
 5  1  1  0101      |  13  3  1  000101
 6  1  2  0110      |  14  3  2  000110
 7  1  3  0111      |  15  3  3  000111

21 4.7 Golomb Code (3/3). Properties: smaller n gives a shorter codeword; encoding is one-pass; no code tables need to be stored; Golomb codes are optimal for one-sided geometric distributions of non-negative integers.

22 4.8 Mapped-error encoding. Example 1: k = 5 ⇒ m = 2^5 = 32. MErrval = 2 ⇒ 2 = q * m + r = 0 * 32 + 2 ⇒ q = 0, r = 2 ⇒ unary code 1 (no leading zeros, only the terminator), modified binary code 00010 ⇒ codeword 100010.

23 4.9 Update the variables (1/2). The variables A, B, and N are updated according to the current prediction error: A accumulates the absolute prediction error, B accumulates the signed prediction error (the bias), and N counts the occurrences of the context. A = A + |Errval|; B = B + Errval; N = N + 1.

24 4.9 Update the variables (2/2). The variables before encoding are A = 64, B = -1, N = 2. Example 1: Errval = 1 ⇒ A = A + |Errval| = 64 + 1 = 65, B = B + Errval = -1 + 1 = 0, N = N + 1 = 2 + 1 = 3. (These counters drive the correction value C of Section 4.2.)
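
A C sketch of the context update (A, B, N as defined on the previous slide):

    /* Per-context state for the regular mode. */
    typedef struct { int A, B, N; } Context;

    /* Accumulate the current prediction error into the context counters. */
    void update_context(Context *ctx, int Errval)
    {
        ctx->A += (Errval < 0) ? -Errval : Errval;  /* accumulated |error|      */
        ctx->B += Errval;                           /* accumulated signed error */
        ctx->N += 1;                                /* occurrence count         */
    }
    /* Example 1: {A = 64, B = -1, N = 2} with Errval = 1 -> {65, 0, 3} */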

25 5. Run mode 5.1 Run scanning 5.2 Run-length coding

26 5.1 Run scanning. RUNval is the value of the repeated sample, and RUNcnt is the count of repeated samples in run mode:

RUNcnt = 0;
RUNval = Ra;
while (Ix == RUNval) {
    RUNcnt = RUNcnt + 1;
    /* advance Ix to the next sample in the row */
}

Example 2: RUNval = Ra = 145 and the scanned samples equal 145 ⇒ RUNcnt = 2. (A combined C sketch of 5.1 and 5.2 follows Section 5.2.)

27 5.2 Run-length coding. RUNcnt is the value that represents the run length:

while (RUNcnt > 0) {
    append 1 to the bit stream;
    RUNcnt = RUNcnt - 1;
}

Example 2: RUNcnt = 2 ⇒ 11.
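
A C sketch combining 5.1 and 5.2 as they are simplified on these slides (one '1' bit per repeated sample); the full JPEG-LS run mode uses adaptive run-length codes and a run-interruption coder, which is not shown here. The row, pos, and len parameters are hypothetical names for the scan position within the current image row.

    #include <stdio.h>

    /* Scan a run of samples equal to Ra and emit one '1' bit per sample. */
    int run_mode(const int *row, int pos, int len, int Ra)
    {
        int RUNval = Ra, RUNcnt = 0;
        while (pos < len && row[pos] == RUNval) {   /* 5.1 run scanning      */
            RUNcnt++;
            pos++;
        }
        for (int i = 0; i < RUNcnt; i++)            /* 5.2 run-length coding */
            putchar('1');
        putchar('\n');
        return RUNcnt;
    }
    /* Example 2: a segment {145, 145, 0} with Ra = 145 -> prints 11, RUNcnt = 2 */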

28 6. Coded data. Table II: Coded segment (binary and hexadecimal). PS: the last five bits in the table are 0-padding.
Binary                                      Hexadecimal
1100 0000 0000 0000 0000 0000 0110 1100     C0 00 00 6C
1000 0000 0010 0000 1000 1110 0000 0001     80 20 8E 01
1100 0000 0000 0000 0000 0000 0101 0111     C0 00 00 57
0100 0000 0000 0000 0000 0000 0110 1110     40 00 00 6E
1110 0110 0000 0000 0000 0000 0000 0001     E6 00 00 01
1011 1100 0001 1000 0000 0000 0000 0000     BC 18 00 00
0000 0101 1101 1000 0000 0000 0000 0000     05 D8 00 00
1001 0001 0110 0000                         91 60

29 7. Results (1/2). Table III: Compression results on the ISO/IEC 10918-1 image test set (in bits/sample).
Image     LOCO-I   JPEG-LS
Balloon    2.68     2.67
Barb 1     3.88     3.89
Barb 2     3.99     4.00
Board      3.20     3.21
Boats      3.34     3.34
Girl       3.39     3.40
Gold       3.92     3.91
Hotel      3.78     3.80
Zelda      3.35     3.35
Average    3.50     3.51

30 7. Results (2/2). Table IV: Compression results on the new image test set (in bits/sample).
Image       LOCO-I   JPEG-LS   Lossless JPEG (Huffman)   Lossless JPEG (arithm.)   CALIC (arithm.)   LOCO-A (arithm.)
bike         3.59     3.36      4.06                      3.92                      3.50              3.54
cafe         4.80     4.83      5.31                      5.35                      4.69              4.75
woman        4.17     4.20      4.58                      4.47                      4.05              4.11
tools        5.07     5.08      5.42                      5.47                      4.95              5.01
bike3        4.37     4.38      4.67                      4.78                      4.23              4.33
cats         2.59     2.61      3.32                      2.74                      2.51              2.54
water        1.79     1.81      2.36                      1.87                      1.74              1.75
finger       5.63     5.66      6.11                      5.85                      5.47              5.50
us           2.67     2.63      3.28                      2.52                      2.34              2.45
chart        1.33     1.32      2.14                      1.45                      1.28              1.18
chart_s      2.74     2.77      3.44                      3.07                      2.66              2.65
compound1    1.30     1.27      2.39                      1.50                      1.24              1.21
compound2    1.35     1.33      2.40                      1.54                      1.24              1.25
aerial2      4.01     4.11      4.49                      4.14                      3.83              3.58
Faxballs     0.97     0.90      1.74                      0.84                      0.75              0.64
Gold         3.92     3.91      4.10                      4.13                      3.83              3.85
hotel        3.78     3.80      4.06                      4.15                      3.71              3.72
Average      3.18     3.19      3.76                      3.40                      3.06              3.06

31 8. Conclusion. LOCO-I/JPEG-LS significantly outperforms other schemes of comparable complexity (e.g., JPEG-Huffman), and it attains compression ratios similar to or better than those of higher-complexity schemes based on arithmetic coding (e.g., JPEG-Arithm, CALIC-Arithm). LOCO-I performs within a few percentage points of the best available compression ratios (given, in practice, by CALIC) at a much lower complexity level.

32 9. Comments. Find a way to modify the JPEG-LS compression algorithm so that it also provides data-hiding capability.

33 The end. Thank you!!

