VLC 2006 PART 1: Introduction to Video Coding Standards. Variable Length Coding: information entropy, Huffman code vs. arithmetic code.


Variable Length Coding
- Information entropy
- Huffman code vs. arithmetic code
- Arithmetic coding
- Why CABAC?
- Rescaling and integer arithmetic coding
- Golomb codes
- Binary arithmetic coding
- CABAC

Information Entropy
- Information entropy: Claude E. Shannon, 1948, "A Mathematical Theory of Communication".
- The information contained in a statement asserting the occurrence of an event f depends on the probability p(f) of that event: I(f) = -lg p(f).
- The unit of this quantity is the bit, since it is the amount of information carried by one (equally likely) binary digit.
- Entropy H is a measure of uncertainty or information content: H = -sum over f of p(f) lg p(f). A very uncertain source has high information content.
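To make the two quantities concrete, here is a minimal Python sketch (not part of the original slides) computing self-information and entropy; the two pdfs are the ones reused in the Huffman examples below.

```python
import math

def self_information(p: float) -> float:
    """Information (in bits) of an event with probability p: -lg p."""
    return -math.log2(p)

def entropy(pmf: list[float]) -> float:
    """Entropy H = -sum p_i lg p_i of a discrete distribution, in bits/symbol."""
    return -sum(p * math.log2(p) for p in pmf if p > 0)

print(entropy([0.5, 0.25, 0.25]))          # 1.5 bits/symbol (dyadic pdf)
print(entropy([0.6, 0.2, 0.125, 0.075]))   # ~1.56 bits/symbol (non-integer self-information)
```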

Entropy Rate
- Conditional entropy H(F|G) between F and G (the uncertainty of F given G): H(F|G) = -sum over f,g of p(f,g) lg p(f|g).
- N-th order entropy (joint entropy of N successive symbols): H_N = -sum p(f_1,...,f_N) lg p(f_1,...,f_N).
- M-th order conditional entropy: H_{C,M} = -sum p(f,g_1,...,g_M) lg p(f|g_1,...,g_M), conditioning on the previous M symbols.
- Entropy rate (lossless coding bound): H-bar = lim as N goes to infinity of H_N / N.

Bound for Lossless Coding
- Scalar coding: one codeword per symbol; the average rate can differ from the entropy by up to 1 bit/symbol (H <= R < H + 1).
- Vector (block) coding: assign one codeword to each group of N symbols; the overhead shrinks to at most 1/N bit/symbol.
- Conditional coding (predictive coding, context-based coding): the codeword of the current symbol depends on the pattern (context) formed by the previous M symbols.

Huffman Coding
- Huffman coding for the pdf (a1, a2, a3) = (0.5, 0.25, 0.25):
  -lg 0.5 = 1, -lg 0.25 = 2
  codewords: a1 = 0, a2 = 10, a3 = 11
- What if the self-information is not an integer?
  pdf: (a1, a2, a3, a4) = (0.6, 0.2, 0.125, 0.075)
  -lg 0.6 = 0.737, -lg 0.2 = 2.32, -lg 0.125 = 3, -lg 0.075 = 3.74
  codewords: a1 = 0, a2 = 10, a3 = 110, a4 = 111
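A minimal sketch of the Huffman construction (not from the slides): repeatedly merge the two least probable subtrees. The 0/1 labels it assigns can differ from the slide's codewords, but the codeword lengths match.

```python
import heapq
from itertools import count

def huffman_code(pmf: dict[str, float]) -> dict[str, str]:
    """Build a binary Huffman code by merging the two least probable nodes."""
    tiebreak = count()  # keeps heap entries comparable when probabilities tie
    heap = [(p, next(tiebreak), {sym: ""}) for sym, p in pmf.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, code0 = heapq.heappop(heap)  # least probable subtree -> prefix 0
        p1, _, code1 = heapq.heappop(heap)  # next least probable    -> prefix 1
        merged = {s: "0" + c for s, c in code0.items()}
        merged.update({s: "1" + c for s, c in code1.items()})
        heapq.heappush(heap, (p0 + p1, next(tiebreak), merged))
    return heap[0][2]

print(huffman_code({"a1": 0.5, "a2": 0.25, "a3": 0.25}))           # lengths 1, 2, 2
print(huffman_code({"a1": 0.6, "a2": 0.2, "a3": 0.125, "a4": 0.075}))  # lengths 1, 2, 3, 3
```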

Huffman vs. Arithmetic Coding
- Huffman coding converts a fixed number of symbols into a variable-length codeword.
  Efficiency drawback: the use of fixed VLC tables does not allow adaptation to the actual symbol statistics.
- Arithmetic coding converts a variable number of symbols into a variable-length codeword.
  Efficiency: it processes one symbol at a time, is easy to adapt to changes in source statistics, and an integer implementation is available.

Arithmetic Coding
- The number of bits allocated to each symbol can be non-integer: if pdf(a) = 0.6, then the cost of encoding 'a' is -lg 0.6, about 0.737 bits.
- For the optimal pdf, the coding efficiency is always better than or equal to that of Huffman coding.
- Huffman coding of a2 a1 a4 a1 a1 a3 (with the second pdf above) takes 11 bits: 10 0 111 0 0 110.
- Arithmetic coding of the same sequence costs -lg p(sequence) = -lg 0.000405, about 11.27 bits: the exact probabilities are preserved instead of being rounded to integer codeword lengths.

Arithmetic Coding: Basic Idea
- Represent a sequence of symbols by an interval whose length equals the sequence's probability.
- The interval is specified by its lower boundary l, upper boundary u, and length d (= probability).
- The codeword for the sequence is the common bits in the binary representations of l and u.
- The interval is calculated sequentially, starting from the first symbol: the initial interval is determined by the first symbol, and each next interval is a subinterval of the previous one, determined by the next symbol.
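A floating-point sketch of this interval narrowing (illustrative only; practical coders use the integer rescaling described later). The pdf is the four-symbol example from the previous slide.

```python
import math

PDF = {"a1": 0.6, "a2": 0.2, "a3": 0.125, "a4": 0.075}

def encode_interval(sequence):
    """Narrow [l, u) symbol by symbol; the final width equals P(sequence)."""
    l, u = 0.0, 1.0
    for sym in sequence:
        d = u - l
        cum = 0.0
        for s, p in PDF.items():   # locate the symbol's cumulative sub-range
            if s == sym:
                u = l + d * (cum + p)
                l = l + d * cum
                break
            cum += p
    return l, u

l, u = encode_interval(["a2", "a1", "a4", "a1", "a1", "a3"])
print(l, u, -math.log2(u - l))     # width 0.000405 -> about 11.27 bits
```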

An Example
Any binary value between l and u can unambiguously specify the input message. For instance, 1/2 = (0.100...)_2 = (0.011...1...)_2 and 1/4 = (0.010...)_2 = (0.001...1...)_2; in the worked example (figure omitted), the interval for the message "ab" has length d(ab) = 1/8.

Why CABAC?
- The first standard that uses an arithmetic entropy coder is H.263, in its Annex E.
- Drawbacks:
  1. Annex E is applied to the same syntax elements as the VLC elements of H.263.
  2. All the probability models are non-adaptive, in that their underlying probabilities are assumed to be static.
  3. The generic m-ary arithmetic coder used involves a considerable amount of computational complexity.

CABAC: Technical Overview
An adaptive binary arithmetic coder built from three stages:
1. Binarization: maps non-binary symbols to a binary sequence.
2. Context modeling: chooses a probability model conditioned on past observations.
3. Probability estimation and coding engine: uses the provided model for the actual encoding, then updates the model (the probability estimate is updated after each coded bin).


Context-Based Adaptive Binary Arithmetic Coding (CABAC)
- Usage of adaptive probability models.
- Exploits symbol correlations by using contexts.
- Spends a non-integer number of bits per symbol by using arithmetic codes.
- Restriction to binary arithmetic coding gives a simple and fast adaptation mechanism; but binarization is needed for non-binary symbols. Binarization also enables partitioning of the state space.
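A minimal sketch of the adaptation principle, assuming a simple count-based estimator. Note that H.264's CABAC actually uses a 64-state finite-state machine with pre-tabulated transition rules; this toy class only illustrates updating a per-context probability after every coded bin.

```python
class AdaptiveBinaryModel:
    """Count-based probability estimate for one context (illustrative only)."""

    def __init__(self):
        self.ones = 1    # Laplace smoothing: start from counts (1, 1)
        self.zeros = 1

    def p_one(self) -> float:
        """Current estimate of P(bin = 1) for this context."""
        return self.ones / (self.ones + self.zeros)

    def update(self, bin_value: int) -> None:
        """Adapt the model after coding one bin."""
        if bin_value:
            self.ones += 1
        else:
            self.zeros += 1

model = AdaptiveBinaryModel()
for b in [1, 1, 0, 1]:
    model.update(b)
print(model.p_one())  # 4/6, about 0.67, after three 1s and one 0
```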

Implementation of Arithmetic Coding
- Rescaling and incremental coding
- Integer arithmetic coding
- Binary arithmetic coding
- Huffman trees
- Exp-Golomb codes

Issues
- Finite precision (underflow and overflow): as n gets larger, the two values l^(n) and u^(n) come closer and closer together, so representing all subintervals uniquely would require precision that grows with the length of the sequence.
- Incremental transmission: transmit portions of the code as the sequence is being observed, rather than after the whole sequence.
- Integer implementation.

Rescaling and Incremental Coding
Whenever the interval falls entirely within the lower half [0, 0.5) or the upper half [0.5, 1), its most significant bit is already decided: output that bit and rescale (double) the interval. (Derivation figure omitted.)

Incremental Encoding
(Worked example figure omitted: each time the interval falls in the upper (U) or lower (L) half, one bit is emitted and the interval is rescaled; the example's rescale sequence was U L L L U U.)

Questions for Decoding
- How do we start decoding? Read enough bits to decode the first symbol unambiguously.
- How do we continue decoding? Mimic the encoder, performing the same interval updates and rescalings.
- How do we stop decoding? For example, by transmitting the sequence length or a dedicated end-of-sequence symbol.

Incremental Decoding
(Worked example figure omitted: the received tag value is located within the current interval, e.g. falling in the top 18% of [0, 0.8), which identifies the next symbol.)

Issues in the Incremental Coding
When the interval straddles the midpoint, e.g. [0.49, 0.51), neither the lower-half nor the upper-half rescaling applies, yet the interval keeps shrinking. (Figure omitted.)

Solution
Rescale the middle half onto the whole interval (the E3 mapping, [0.25, 0.75) -> 2x - 0.5) and count the number of such rescalings in Scale3; the corresponding bits are emitted, complemented, after the next ordinary rescale. (Figure omitted.)

Solution (2)
(Figure omitted.)

Incremental Encoding (continued)
(Worked example figure omitted.)

Incremental Decoding (continued)
(Worked example figure omitted.)

Integer Implementation
(Figure omitted; the update rules follow on the next slide.)

Integer Implementation
- n_j: the number of times symbol j occurs in a sequence of length TotalCount. The cdf F_X(k) can be estimated by F_X(k) = Cum_Count(k) / TotalCount, where Cum_Count(k) = n_1 + n_2 + ... + n_k.
- Update equations (integer arithmetic throughout):
  l^(n) = l^(n-1) + floor((u^(n-1) - l^(n-1) + 1) * Cum_Count(x_n - 1) / TotalCount)
  u^(n) = l^(n-1) + floor((u^(n-1) - l^(n-1) + 1) * Cum_Count(x_n) / TotalCount) - 1
- E3: if (E3 holds)
  shift l to the left by 1 and shift 0 into the LSB;
  shift u to the left by 1 and shift 1 into the LSB;
  complement the (new) MSB of l and u;
  increment Scale3.
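Putting the pieces together, here is a compact sketch of the encoder loop with E1/E2/E3 rescaling (not from the slides; the 16-bit precision, names, and the termination step are illustrative choices under the standard scheme described above).

```python
PRECISION = 16
FULL = (1 << PRECISION) - 1              # l, u are PRECISION-bit integers
HALF = 1 << (PRECISION - 1)
QUARTER = 1 << (PRECISION - 2)

def encode(seq, counts):
    """seq: 1-based symbol indices; counts: occurrence counts n_j."""
    total = sum(counts)
    cum = [0]                            # Cum_Count table
    for c in counts:
        cum.append(cum[-1] + c)
    l, u, scale3, bits = 0, FULL, 0, []

    def emit(b):                         # output b, then the deferred E3 bits
        nonlocal scale3
        bits.append(b)
        bits.extend([1 - b] * scale3)    # E3 bits are sent complemented
        scale3 = 0

    for x in seq:
        d = u - l + 1                    # the update equations above
        u = l + d * cum[x] // total - 1
        l = l + d * cum[x - 1] // total
        while True:
            if u < HALF:                 # E1: entirely in the lower half
                emit(0); l, u = 2 * l, 2 * u + 1
            elif l >= HALF:              # E2: entirely in the upper half
                emit(1); l, u = 2 * (l - HALF), 2 * (u - HALF) + 1
            elif l >= QUARTER and u < 3 * QUARTER:   # E3: straddles midpoint
                scale3 += 1
                l, u = 2 * (l - QUARTER), 2 * (u - QUARTER) + 1
            else:
                break
    scale3 += 1                          # terminate: pin a value inside [l, u]
    emit(0 if l < QUARTER else 1)
    return bits

# pdf (0.5, 0.25, 0.25) expressed as counts; encode a1 a2 a1 a3
print(encode([1, 2, 1, 3], [2, 1, 1]))
```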

Golomb Codes
- Golomb-Rice codes: a family of codes designed to encode integers under the assumption that the larger an integer, the lower its probability of occurrence.
- The simplest example is the unary code: integer n is coded as n 1s followed by a 0. This is the same as the Huffman code for {1, 2, ...} with the probability model P(n) = 2^(-n).
- Golomb code with parameter m: code n >= 0 using two numbers q = floor(n/m) and r = n - qm. q is coded in unary; r is represented in truncated binary:
  the first 2^ceil(lg m) - m values of r use floor(lg m) bits;
  the remaining values use ceil(lg m) bits.
- Golomb code for m = 5 (r = 0, 1, 2 use 00, 01, 10; r = 3, 4 use 110, 111):
  n  q  r  code      n  q  r  code
  0  0  0  0 00      5  1  0  10 00
  1  0  1  0 01      6  1  1  10 01
  2  0  2  0 10      7  1  2  10 10
  3  0  3  0 110     8  1  3  10 110
  4  0  4  0 111     9  1  4  10 111
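A short sketch of the encoder (not from the slides; m >= 2 assumed so the truncated-binary part is non-trivial). It reproduces the m = 5 table above.

```python
import math

def golomb_encode(n: int, m: int) -> str:
    """Golomb code: unary(q) + truncated binary(r), with q = n // m, r = n % m."""
    q, r = divmod(n, m)
    unary = "1" * q + "0"
    k = math.ceil(math.log2(m))
    threshold = (1 << k) - m          # first `threshold` remainders get k-1 bits
    if r < threshold:
        return unary + format(r, f"0{k - 1}b")
    return unary + format(r + threshold, f"0{k}b")   # remaining remainders: k bits

for n in range(10):                    # matches the m = 5 table above
    print(n, golomb_encode(n, 5))
```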

Golomb Codes
- The Golomb code with parameter m is optimal for the geometric probability model P(n) = (1 - p) p^n when m is chosen so that p^m is approximately 1/2, i.e. m is about -1 / lg p.
- exp-Golomb codes: variable-length codes with a regular construction: [m zeros][1][info].
  code_num: the index of the codeword; info: an m-bit field carrying information.
- Mapping types: ue, te, se, and me, designed to produce short codewords for frequently occurring values and longer codewords for less common parameter values.

exp-Golomb Codes
Each codeword has the structure [m zeros][1][info]. To decode:
1. Read m leading zeros followed by a 1.
2. Read the m-bit info field.
3. code_num = 2^m + info - 1.
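A minimal encode/decode pair following that structure (a sketch, not from the slides; function names are illustrative).

```python
def expgolomb_encode(code_num: int) -> str:
    """Codeword = [m zeros][1][m-bit info], where code_num = 2^m + info - 1."""
    m = (code_num + 1).bit_length() - 1        # m = floor(lg(code_num + 1))
    info = code_num + 1 - (1 << m)
    return "0" * m + "1" + format(info, f"0{m}b") if m else "1"

def expgolomb_decode(bits: str) -> int:
    """Invert the three decoding steps listed above."""
    m = bits.index("1")                        # step 1: count leading zeros
    info = int(bits[m + 1 : 2 * m + 1] or "0", 2)   # step 2: m-bit info field
    return (1 << m) + info - 1                 # step 3

for k in range(6):
    print(k, expgolomb_encode(k))   # 1, 010, 011, 00100, 00101, 00110
```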

exp-Golomb Entropy Coding
A parameter k to be encoded is mapped to code_num in one of the following ways:
- ue: unsigned direct mapping, code_num = k.
- te: truncated mapping; as ue, except that the codeword is truncated when the parameter's range is small.
- se: signed mapping, code_num = 2|k| for k <= 0 and code_num = 2k - 1 for k > 0.
- me: mapped symbol; k is mapped to code_num by a table specified in the standard.
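The signed mapping interleaves positive and negative values so that small magnitudes get small indices; a one-line sketch, assuming the standard H.264 se(v) rule given above:

```python
def se_to_code_num(k: int) -> int:
    """Signed exp-Golomb mapping: 0 -> 0, 1 -> 1, -1 -> 2, 2 -> 3, -2 -> 4, ..."""
    return 2 * k - 1 if k > 0 else -2 * k

for k in [0, 1, -1, 2, -2, 3]:
    print(k, se_to_code_num(k))
```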

exp-Golomb Entropy Coding
(Codeword table figure omitted.)

H.264 Coding Parameters
(Parameter table figure omitted.)