Digital Video Solutions to Midterm Exam 2012 Edited by Yang-Ting Chou Confirmed by Prof. Jar-Ferr Yang LAB: 92923 R, TEL: ext. 621


Score statistics: AVG: …, STDEV: …, MAX: 168, MIN: 21

I. (a) These entropy encoders compress data by replacing each fixed-length input symbol with a corresponding variable-length, prefix-free output codeword. The length of each codeword is approximately proportional to the negative logarithm of the symbol's probability, so the most common symbols use the shortest codes. According to Shannon's source coding theorem, the optimal code length for a symbol is −log_b P, where b is the number of symbols used to make the output codes and P is the probability of the input symbol. The two most common entropy-coding techniques are Huffman coding and arithmetic coding. If the approximate entropy characteristics of a data stream are known in advance, a simpler static code, such as a universal code or a Golomb code, may be useful.
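As a quick illustration of the −log_b P rule, here is a minimal Python sketch (not part of the exam) that computes the ideal code length of each symbol and the entropy, using the probabilities from Problem 2.1 below:

```python
import math

def optimal_lengths(probs, b=2):
    """Ideal code lengths -log_b(p) per symbol, and the source entropy."""
    lengths = {s: -math.log(p, b) for s, p in probs.items()}
    entropy = sum(p * lengths[s] for s, p in probs.items())
    return lengths, entropy

probs = {"A": 0.85, "B": 0.05, "C": 0.05, "D": 0.02, "E": 0.02, "F": 0.01}
lengths, H = optimal_lengths(probs)
print({s: round(l, 2) for s, l in lengths.items()})  # A: 0.23 bits ... F: 6.64 bits
print(round(H, 3))  # 0.924 bits/symbol, the lower bound for any code
```

The most probable symbol would ideally use only 0.23 bits; Huffman coding must spend whole bits per codeword, which is the gap that arithmetic coding (Problem 2.3(d)) can close.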

(b) (c) (d) (e) (f): [answered with figures on the original slides]

II. 2.1 (a) p(A) = 0.85, p(B) = 0.05, p(C) = 0.05, p(D) = 0.02, p(E) = 0.02, so p(F) = 1 - p(A) - p(B) - p(C) - p(D) - p(E) = 0.01.
Huffman code (bit-complemented alternative in parentheses):
A: 0 (1)
B: 11 (00)
C: 100 (011)
D: 1011 (0100)
E: 10100 (01011)
F: 10101 (01010)
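The code above can be cross-checked with a standard heap-based Huffman construction. This is a generic sketch (not the exam's procedure); tie-breaking between equal probabilities may yield different codewords, but any Huffman code for this source has the same average length:

```python
import heapq
from itertools import count

def huffman(probs):
    """Build a Huffman code as {symbol: codeword}."""
    tiebreak = count()  # avoids comparing dicts when probabilities tie
    heap = [(p, next(tiebreak), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(tiebreak), merged))
    return heap[0][2]

probs = {"A": 0.85, "B": 0.05, "C": 0.05, "D": 0.02, "E": 0.02, "F": 0.01}
code = huffman(probs)
print(code)  # codewords may differ from the slide's, but are equally optimal
print(sum(probs[s] * len(w) for s, w in code.items()))  # average length: 1.33 bits
```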

(b) Optimal symmetrical RVLC (all codewords are palindromes):
A: 0
B: 11
C: 101
D: 1001
E: 10001
F: 100001

(c) Optimal asymmetrical RVLC (shorter candidates are ruled out by prefix conflicts):
A: 0
B: 11
C: 101
D: 1001
E: 10001
F: 100001
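Both RVLC tables can be verified mechanically: a code is reversible (decodable forwards and backwards) exactly when no codeword is a prefix or a suffix of another. A small sketch, assuming the E and F codewords shown above:

```python
def is_reversible(codewords):
    """True iff the code is prefix-free and suffix-free."""
    for i, a in enumerate(codewords):
        for j, b in enumerate(codewords):
            if i != j and (b.startswith(a) or b.endswith(a)):
                return False
    return True

rvlc = ["0", "11", "101", "1001", "10001", "100001"]
print(is_reversible(rvlc))               # True
print(all(w == w[::-1] for w in rvlc))   # True: palindromes, hence symmetrical
print(is_reversible(["0", "11", "101", "11011"]))  # False: "11" prefixes "11011"
```

The failing example shows the kind of prefix conflict noted in (c): the shorter palindrome 11011 cannot be used because B = 11 is its prefix.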

2.2 Run-length coding:

RLC pair | Skip | SSSS | Value | Encoded
(1, 4)   | 1    | 3    | 4     | …
(0, -1)  | 0    | 1    | -1    | …
(1, 1)   | 1    | 1    | 1     | …
(3, 3)   | 3    | 2    | 3     | …
EOB      |      |      |       | …

(a) (b) (c) (note: not given in the problem statement)
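The SSSS column is the JPEG "category", i.e. the number of bits needed for the magnitude of the value; a one-line illustrative sketch:

```python
def ssss(value):
    """JPEG category: number of magnitude bits of value (0 for value == 0)."""
    return 0 if value == 0 else abs(value).bit_length()

for skip, value in [(1, 4), (0, -1), (1, 1), (3, 3)]:
    print((skip, value), "-> SSSS =", ssss(value))
# (1, 4) -> 3, (0, -1) -> 1, (1, 1) -> 1, (3, 3) -> 2
```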

2.3 (a) Entropy: …
(b) Huffman code over the symbol pairs (bit-complemented alternative in parentheses):
A2A2: 0 (1)
A2A1: 11 (00)
A1A2: 100 (011)
A1A1: 101 (010)
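To see why coding symbol pairs helps, here is a sketch with hypothetical marginals p(A1) = 0.2 and p(A2) = 0.8 (the slide's actual probabilities were not captured): for an i.i.d. source the pair probabilities are products of the marginals, and with the pair-code lengths above the rate falls below 1 bit per source symbol, toward the entropy:

```python
import math
from itertools import product

p = {"A1": 0.2, "A2": 0.8}  # assumed marginals, for illustration only

# Pair probabilities of an i.i.d. source are products of the marginals.
pairs = {a + b: p[a] * p[b] for a, b in product(p, repeat=2)}
print(pairs)  # {'A1A1': 0.04, 'A1A2': 0.16, 'A2A1': 0.16, 'A2A2': 0.64}

# Pair-code lengths from the table above: one codeword covers two symbols.
lengths = {"A2A2": 1, "A2A1": 2, "A1A2": 3, "A1A1": 3}
rate = sum(pairs[s] * lengths[s] for s in pairs) / 2
H = -sum(q * math.log2(q) for q in p.values())
print(round(rate, 3), round(H, 3))  # 0.78 bits/symbol vs. entropy 0.722
```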

(c) Encoding {A2 A1 A2 A2} with the pair code: A2A1 → 11, then A2A2 → 0, which gives {110} ({001} with the complemented code).
(d) Arithmetic coding of the sequence {A2 A1 A2 A2}: the codeword W lies in the final interval, … < W < …
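Part (d) is the usual interval-narrowing computation. The sketch below shows the mechanics; the probabilities p(A1) = 0.2 and p(A2) = 0.8 are assumptions (the exam's values were not captured), so the printed bounds illustrate the method rather than reproduce the exam's interval:

```python
def arithmetic_interval(sequence, probs):
    """Return [low, high) after coding sequence; any W inside identifies it."""
    cum, base = {}, 0.0
    for s, q in probs.items():      # cumulative distribution over the alphabet
        cum[s] = (base, base + q)
        base += q
    low, high = 0.0, 1.0
    for s in sequence:              # narrow the interval symbol by symbol
        width = high - low
        lo_s, hi_s = cum[s]
        low, high = low + width * lo_s, low + width * hi_s
    return low, high

low, high = arithmetic_interval(["A2", "A1", "A2", "A2"], {"A1": 0.2, "A2": 0.8})
print(low, "< W <", high)  # with the assumed probabilities: 0.2576 < W < 0.36
```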

2.4 Synthesis filter: the slide tabulates, for each n, the input x[n], the analysis outputs y0[2n] and y1[2n+1], the two synthesis channels x0[n] and x1[n], and the reconstruction x'[n]; the last row is the reconstructed synthesis data.
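Because the slide's filter coefficients are not recoverable, the following sketch assumes the Haar analysis/synthesis pair simply to show the structure of the table: y0 and y1 are the downsampled channels, x0 and x1 their synthesized contributions, and the reconstruction is x'[n] = x0[n] + x1[n]:

```python
def analysis(x):
    """Haar analysis: half-rate lowpass y0[2n] and highpass y1[2n+1] channels."""
    y0 = [(x[2 * n] + x[2 * n + 1]) / 2 for n in range(len(x) // 2)]
    y1 = [(x[2 * n] - x[2 * n + 1]) / 2 for n in range(len(x) // 2)]
    return y0, y1

def synthesis(y0, y1):
    """Upsample and filter each channel, then add: x'[n] = x0[n] + x1[n]."""
    x = []
    for s, d in zip(y0, y1):
        x += [s + d, s - d]  # x0 contributes (s, s); x1 contributes (d, -d)
    return x

x = [4, 2, 6, 6, 1, 3]
y0, y1 = analysis(x)
print(synthesis(y0, y1) == x)  # True: the last row reproduces the input exactly
```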

2.5 (a) They are not unitary transforms, since A A^T ≠ I and therefore A^-1 ≠ A^T. (b) [figure on the slide] (c) The missing normalization is compensated by quantization or scaling.
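The unitarity test is a one-liner; the matrices below are hypothetical stand-ins (the exam's transforms were not captured) showing how a scaled transform fails the test and why quantization or scaling can absorb the difference:

```python
import numpy as np

def is_unitary(A, tol=1e-9):
    """A real transform is unitary (orthogonal) iff A @ A.T == I, i.e. A^-1 == A^T."""
    return np.allclose(A @ A.T, np.eye(A.shape[0]), atol=tol)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # normalized Haar: unitary
A = np.array([[1.0, 1.0], [1.0, -1.0]])       # unnormalized: A @ A.T = 2I != I
print(is_unitary(H), is_unitary(A))           # True False
# The leftover factor of 2 in A can be folded into the quantization or
# scaling stage, which is the compensation the solution refers to.
```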

2.6 p(A) = 0.7, p(B) = 0.2, p(C) = 0.05, p(D) = p(E) = 0.02, p(F) = 0.01.
(a) Huffman code (bit-complemented alternative in parentheses):
A: 0 (1)
B: 10 (01)
C: 110 (001)
D: 1111 (0000)
E: 11100 (00011)
F: 11101 (00010)
RVL code (symmetrical, of the form 1^k 0 1^k):
A: 0 (1)
B: 101 (010)
C: 11011 (00100)
D: 1110111 (0001000)
E: 111101111 (000010000)
F: 11111011111 (00000100000)

(b) Optimal symmetrical RVLC:
A: 0
B: 11
C: 101
D: 1001
E: 10001
F: 100001
(The optimal symmetrical RVLC is grown from the back end.)

(c) Optimal asymmetrical RVLC (shorter candidates are ruled out by prefix conflicts):
A: 0
B: 11
C: 101
D: 1001
E: 10001
F: 100001
(The optimal asymmetrical RVLC is grown from the front end.)

2.7 SPIHT. (a) Initialization (the largest magnitude is 42, so n = floor(log2 42) = 5 and the starting threshold is T = 2^5 = 32):
LIP: { (0,0) → 42, (0,1) → 17, (1,0) → -19, (1,1) → 13 }
LIS: { D(0,1), D(1,0), D(1,1) }
LSP: { }
(b) First sorting pass (T = 32): only (0,0) → 42 is significant and moves to the LSP; the refinement pass is empty because the LSP started empty. After the pass:
LIP: { (0,1) → 17, (1,0) → -19, (1,1) → 13 }
LIS: { D(0,1), D(1,0), D(1,1) }
LSP: { (0,0) → 42 }

Second sorting pass (T = 16): (0,1) → 17 and (1,0) → -19 become significant and move to the LSP; (1,1) → 13 does not. Refinement pass: the refinement bit of (0,0) → 42 at T = 16 is 0. After the pass:
LIP: { (1,1) → 13 }
LIS: { D(0,1), D(1,0), D(1,1) }
LSP: { (0,0) → 42, (0,1) → 17, (1,0) → -19 }
The passes continue until the bit budget is exhausted. Generated bitstream (up to 25 bits): …
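The two tests driving these passes can be written compactly; a minimal sketch assuming the standard SPIHT threshold schedule T = 2^n:

```python
def significant(c, T):
    """Sorting-pass test: a coefficient is significant at threshold T if |c| >= T."""
    return abs(c) >= T

def refinement_bit(c, T):
    """Refinement-pass output: the bit of |c| with weight T."""
    return (abs(c) // T) & 1

coeffs = {(0, 0): 42, (0, 1): 17, (1, 0): -19, (1, 1): 13}
print([k for k, c in coeffs.items() if significant(c, 32)])  # [(0, 0)]
print([k for k, c in coeffs.items() if significant(c, 16)])  # [(0, 0), (0, 1), (1, 0)]
print(refinement_bit(42, 16))  # 0: the refinement bit sent for (0,0) at T = 16
```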

(c) Generated bitstreams: (1) … (2) … (3) …

2.8 (a) JPEG-LS block diagram: [figure on the slide]

(b) Fixed predictor (median edge detector): the prediction of the current sample from its left neighbour a, upper neighbour b, and upper-left neighbour c is
x̂ = min(a, b) if c ≥ max(a, b),
x̂ = max(a, b) if c ≤ min(a, b),
x̂ = a + b - c otherwise.
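A direct transcription of the median edge detector (the predictor is standard JPEG-LS; the function name and test values are mine):

```python
def med_predict(a, b, c):
    """JPEG-LS fixed predictor (median edge detector).
    a: left neighbour, b: upper neighbour, c: upper-left neighbour."""
    if c >= max(a, b):
        return min(a, b)    # edge detected: predict the smaller neighbour
    if c <= min(a, b):
        return max(a, b)    # edge detected: predict the larger neighbour
    return a + b - c        # smooth region: planar prediction

print(med_predict(100, 60, 100))  # 60: picks min(a, b) since c >= max(a, b)
print(med_predict(50, 52, 51))    # 51: smooth region, a + b - c
```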

2.9 (a) Significance Propagation Pass (Pass 1). ZC: zero coding; SC: sign coding. [The slide overlays zc/sc labels on the bit-plane and marks the coefficients that are already significant.]

(a, continued) Magnitude Refinement Pass (Pass 2). [The slide marks which samples belong to Pass 1 and which to Pass 2.]

(a, continued) Clean-up Pass (Pass 3). Legend on the slide: Pass 1; Pass 2; Pass 3 (ZC & SC); Pass 3 (RLC).

(b) Context labels for the marked samples a, b, c, and d:
a: ZC, LL band; κh[j] = 1, κv[j] = 1, κd[j] = 0, so κsig[j] = 7
b: SC; χ̄h[j] = 0, χ̄v[j] = 0, so κsign[j] = 9
c: ZC, LL band; κh[j] = 1, κv[j] = 0, κd[j] = 0, so κsig[j] = 5
d: SC; χ̄h[j] = 1, χ̄v[j] = 0, so κsign[j] = 12

(b) Assignment of context labels for significance coding; "x" means "don't care":

κsig | LL and LH blocks: κh κv κd | HL blocks: κh κv κd | HH blocks: κd (κh + κv)
8    | 2  x  x  | x  2  x  | ≥3  x
7    | 1  ≥1 x  | ≥1 1  x  | 2   ≥1
6    | 1  0  ≥1 | 0  1  ≥1 | 2   0
5    | 1  0  0  | 0  1  0  | 1   ≥2
4    | 0  2  x  | 2  0  x  | 1   1
3    | 0  1  x  | 1  0  x  | 1   0
2    | 0  0  ≥2 | 0  0  ≥2 | 0   ≥2
1    | 0  0  1  | 0  0  1  | 0   1
0    | 0  0  0  | 0  0  0  | 0   0
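The LL/LH column of this table translates directly into a lookup; a small sketch (function name mine) that reproduces the labels used in part (b):

```python
def ksig_ll(kh, kv, kd):
    """Significance-coding context label for LL and LH code-blocks.
    kh, kv, kd: numbers of significant horizontal/vertical/diagonal neighbours."""
    if kh == 2:
        return 8
    if kh == 1:
        if kv >= 1:
            return 7
        return 6 if kd >= 1 else 5
    if kv == 2:
        return 4
    if kv == 1:
        return 3
    return min(kd, 2)  # kd >= 2 -> 2, kd == 1 -> 1, kd == 0 -> 0

print(ksig_ll(1, 1, 0))  # 7, matching sample a in part (b)
print(ksig_ll(1, 0, 0))  # 5, matching sample c in part (b)
```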

(b) h[j]h[j] v[j]v[j]  sign  flip Assignment of context labels and flipping factor for sign coding  h [j],  v [j]: neighborhood sign status -1: one or both negative. 0: both insignificant or both significant but opposite sign. 1: one or both positive. Current sample

(b) Assignment of context labels for magnitude refinement coding. σ̃[j] remains zero until after the first magnitude refinement bit of sample j has been coded and is 1 for all subsequent refinement bits; κsig[j] is the significance-coding context label of sample j.

σ̃[j] | κsig[j] | κmag
0     | 0       | 14
0     | >0      | 15
1     | x       | 16

2.10 (a) Diamond Search

(-2, 3): 19 search points in total

(2, -7): 29 search points in total
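A schematic diamond search that counts the distinct points it checks; the toy cost function is an assumption (it is minimized exactly at the true motion vector), so the resulting counts illustrate the large-diamond/small-diamond mechanics rather than reproduce the slide's SAD-based 19 and 29:

```python
LDSP = [(0, 0), (0, 2), (0, -2), (2, 0), (-2, 0),
        (1, 1), (1, -1), (-1, 1), (-1, -1)]        # large diamond, 9 points
SDSP = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]  # small diamond, 5 points

def diamond_search(target):
    cost = lambda p: abs(p[0] - target[0]) + abs(p[1] - target[1])  # toy SAD
    center, checked = (0, 0), set()
    while True:  # repeat the large diamond until its best point is the centre
        pts = [(center[0] + dx, center[1] + dy) for dx, dy in LDSP]
        checked.update(pts)
        best = min(pts, key=cost)
        if best == center:
            break
        center = best
    pts = [(center[0] + dx, center[1] + dy) for dx, dy in SDSP]  # final refinement
    checked.update(pts)
    return min(pts, key=cost), len(checked)

print(diamond_search((-2, 3)))  # ((-2, 3), 22) with this toy cost
print(diamond_search((2, -7)))  # ((2, -7), 32) with this toy cost
```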

(b) Four Step Search

(-2, 3): 22 search points in total

(2, -7): 28 search points in total

(c) Enhanced Hexagon Search

(-2, 3): 12 search points in total

(2, -7): 18 search points in total

2.11 [solution given as a figure on the slide]