MPEG-4 AVC (H.264)

Introduction H.264 is aimed at very low bit rate, real-time, low end-to-end delay, and mobile applications such as conversational services and Internet video. It provides enhanced visual quality at very low bit rates, particularly at rates below 24 kb/s.

Structure of H.264/AVC Video Coder VCL (Video Coding Layer): designed to efficiently represent the video content. NAL (Network Abstraction Layer): formats the VCL representation of the video and provides header information for conveyance by a variety of transport layers or storage media.

Video Coding Layer

Basic Structure of VCL
[Encoder block diagram: the input video signal is split into 16x16 macroblocks; intra-frame prediction or motion-compensated inter prediction (with motion estimation) forms a residual, which passes through transform/scaling/quantization and entropy coding; an embedded decoder loop with scaling/inverse transform and a de-blocking filter produces the output video signal used as reference.]

Intra-frame Prediction
[Encoder block diagram, with the intra-frame prediction stage highlighted.]

Intra-frame encoding in H.264 supports Intra_4×4, Intra_16×16 and I_PCM. I_PCM allows the encoder to send the values of the encoded samples directly. Intra_4×4 and Intra_16×16 perform intra prediction.

Intra 4×4
–9 modes
–Used in textured areas
Intra 16×16
–4 modes
–Used in flat areas

Four modes of Intra_16×16:
–Mode 0 (vertical): extrapolation from the upper samples (H)
–Mode 1 (horizontal): extrapolation from the left samples (V)
–Mode 2 (DC): mean of the upper and left-hand samples (H+V)
–Mode 3 (plane): a linear "plane" function is fitted to the upper and left-hand samples H and V. This works well in areas of smoothly varying luminance.

Example: Original image

Nine modes of Intra_4×4
–The prediction block P is calculated based on the neighboring samples labeled A-M.
–The encoder may select, for each block, the prediction mode that minimizes the residual between P and the block to be encoded.

Example: Consider a 4×4 block and its neighbors labeled A-M as before. Suppose mode 4 (diagonal down-right) is used for prediction. Then a = (A + 2M + I + 2)/4.
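As a tiny numeric sketch of the formula above (the neighbor values below are made-up illustrations; A, M, I follow the slide's labels):

```python
# Sketch of the diagonal down-right corner sample a = (A + 2M + I + 2)/4.
# The division with rounding is realized as an integer shift, as in H.264.
def pred_mode4_corner(A, M, I):
    # (A + 2M + I + 2) / 4 with rounding, computed as >> 2
    return (A + 2 * M + I + 2) >> 2

print(pred_mode4_corner(A=100, M=104, I=96))  # -> 101
```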

Example:

Motion Estimation/Compensation
[Encoder block diagram, with the motion estimation and motion compensation stages highlighted.]

Features of the H.264 motion estimation:
–Various block sizes
–¼-sample accuracy (6-tap filtering to ½-sample accuracy, simplified filtering to ¼-sample accuracy)
–Multiple reference pictures
–Generalized B-frames

Variable Block Size Block-Matching
–In H.264, a video frame is first split into fixed-size macroblocks.
–Each macroblock may then be segmented into subblocks of different sizes.
–A macroblock has a dimension of 16×16 pixels. The size of the smallest subblock is 4×4.
[Diagram: macroblock partition types 16×16, 16×8, 8×16, 8×8; each 8×8 partition may be further split into 8×8, 8×4, 4×8, 4×4 sub-partitions.]

Example: This example shows the effectiveness of block matching operations with smaller sizes. Frame 1

Frame 2

Difference between Frame 1 and Frame 2

Results of block-matching operation with size 16×16

Results of block-matching operation with size 8×8

Results of block-matching operation with size 4×4

To use a subblock with size smaller than 8×8, it is necessary to first split the macroblock into four 8×8 subblocks.

Example:

Encoding a motion vector for each subblock can cost a significant number of bits, especially if small block sizes are chosen. Motion vectors for neighboring subblocks are often highly correlated and so each motion vector is predicted from vectors of nearby, previously coded subblocks. The difference between the motion vector of the current block and its prediction is encoded and transmitted.

The method of forming the prediction depends on the block size and on the availability of nearby vectors. Let E be the current block, A the subblock immediately to its left, B the subblock immediately above it, C the subblock above and to its right, and D the subblock above and to its left. It is not necessary that A, B, C, and E have the same size.
[Diagram: D, B, C on the row above; A to the left of the current block E.]

There are two modes for the prediction of motion vectors:
Median prediction: used for all block sizes except 16×8 and 8×16.
Directional segmentation prediction: used for 16×8 and 8×16.

Median prediction:
If C does not exist, then C = D.
If B and C do not exist, prediction = V_A.
If A and C do not exist, prediction = V_B.
If A and B do not exist, prediction = V_C.
Otherwise, prediction = median(V_A, V_B, V_C).
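The availability rules above can be sketched as follows. Vectors are (x, y) tuples; None marks an unavailable neighbor; treating remaining unavailable vectors as (0, 0) in the median step is an assumption of this sketch, not a statement of the standard's exact fallback.

```python
# Sketch of median motion-vector prediction for block E from neighbors
# A, B, C (with D substituting for C when C is unavailable).
def median_mv(va, vb, vc, vd=None):
    if vc is None:                       # if C does not exist, use D
        vc = vd
    if va is not None and vb is None and vc is None:
        return va                        # only A available
    if vb is not None and va is None and vc is None:
        return vb                        # only B available
    if vc is not None and va is None and vb is None:
        return vc                        # only C (or D) available
    # Otherwise: component-wise median; any still-missing neighbor is
    # treated as (0, 0) here (an assumption of this sketch).
    va, vb, vc = (v if v is not None else (0, 0) for v in (va, vb, vc))
    return (sorted([va[0], vb[0], vc[0]])[1],
            sorted([va[1], vb[1], vc[1]])[1])

print(median_mv((2, 0), (4, 1), (3, 5)))  # -> (3, 1)
```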

Directional segmentation prediction:
Vector block size 8×16: left partition: prediction = V_A; right partition: prediction = V_C.
Vector block size 16×8: upper partition: prediction = V_B; lower partition: prediction = V_A.

Fractional Motion Estimation In H.264, the motion vectors between the current block and the candidate block have ¼-pel resolution. Samples at sub-pel positions do not exist in the reference frame, so they must be created by interpolation from nearby image samples.

Interpolation of ½-pel samples:
b = round((E - 5F + 20G + 20H - 5I + J)/32)
h = round((A - 5C + 20G + 20M - 5R + T)/32)
j = round((aa - 5bb + 20b + 20s - 5gg + hh)/32)
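The 6-tap filter (1, -5, 20, 20, -5, 1)/32 above can be sketched directly; the sample values below are made up, and the clipping to the 8-bit range is standard practice assumed here:

```python
# Sketch of the half-pel interpolation filter applied to six neighboring
# integer-position samples, e.g. b between G and H.
def half_pel(E, F, G, H, I, J):
    val = (E - 5 * F + 20 * G + 20 * H - 5 * I + J + 16) >> 5  # round, /32
    return max(0, min(255, val))  # clip to the 8-bit sample range

print(half_pel(60, 70, 80, 90, 100, 110))  # -> 85
```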

Interpolation of ¼-pel samples. a=round((G+b)/2) d=round((G+h)/2) e=round((b+h)/2)

Multiple Reference Frames

Motion estimation based on multiple reference frames provides opportunities for more precise inter prediction and also improves robustness to lost picture data. The drawback is that both the encoder and the decoder have to store the reference frames used for inter-frame prediction in a multi-frame buffer.

[Rate-distortion plot, Mobile & Calendar (CIF, 30 fps): PSNR Y [dB] vs. R [Mbit/s] for PBB with generalized B pictures, PBB with classic B pictures, PPP with 5 previous references, and PPP with 1 previous reference; using 5 references saves about 15% bit rate.]

Generalized B Frames Basic B-frames: The basic B-frames cannot be used as reference frames.

Generalized B-frames: The generalized B-frames can be used as reference frames.

[Rate-distortion plot, Mobile & Calendar (CIF, 30 fps): PSNR Y [dB] vs. R [Mbit/s] for PBB with generalized B pictures, PBB with classic B pictures, PPP with 5 previous references, and PPP with 1 previous reference; the annotated bit-rate saving exceeds 25%.]

[Rate-distortion plot, Mobile & Calendar (CIF, 30 fps): PSNR Y [dB] vs. R [Mbit/s] for PBB with generalized B pictures, PBB with classic B pictures, PPP with 5 previous references, and PPP with 1 previous reference; the annotated bit-rate saving is about 40%.]

Transformation/Quantization
[Encoder block diagram, with the transform/scaling/quantization stage highlighted.]

Transformation The discrete cosine transform (DCT) operates on x, a block of N×N samples, and creates X, an N×N block of coefficients.
The forward DCT: X = A x A^T
The inverse DCT: x = A^T X A

The elements of A are:
A_ij = C_i cos( (2j + 1) i π / (2N) )
where C_i = sqrt(1/N) for i = 0, and C_i = sqrt(2/N) for i > 0.

Example: The transform matrix A for a 4×4 DCT is:
A = [ a  a  a  a
      b  c -c -b
      a -a -a  a
      c -b  b -c ]

where a = 1/2, b = sqrt(1/2) cos(π/8) ≈ 0.6533, and c = sqrt(1/2) cos(3π/8) ≈ 0.2706.

The H.264 transform is based on the 4×4 DCT, but with some fundamental differences:
1. It is an integer transform; all operations can be carried out exactly in integer arithmetic.
2. The core part of the transform can be implemented using only additions and shifts.
3. A scaling multiplication is integrated into the quantizer, reducing the total number of multiplications.

Recall that the 4×4 DCT can be factorized as
X = A x A^T = (C x C^T) ⊗ E
where
C = [ 1  1  1  1
      1  d -d -1
      1 -1 -1  1
      d -1  1 -d ]
E = [ a²  ab  a²  ab
      ab  b²  ab  b²
      a²  ab  a²  ab
      ab  b²  ab  b² ]

Post-scaling
1. We call (C x C^T) the core 2D transform.
2. E is a matrix of scaling factors.
3. ⊗ indicates that each element of (C x C^T) is multiplied by the scaling factor in the same position in matrix E (i.e., ⊗ is element-wise scalar multiplication rather than matrix multiplication).
Here d = c/b.

To simplify the implementation of the transform, d is approximated by 0.5. To ensure that the transform remains orthogonal, b also needs to be modified so that b² = 2/5 (b ≈ 0.6325).

The final forward transform becomes
X = (C_f x C_f^T) ⊗ E_f
where
C_f = [ 1  1  1  1
        2  1 -1 -2
        1 -1 -1  1
        1 -2  2 -1 ]
E_f = [ a²    ab/2  a²    ab/2
        ab/2  b²/4  ab/2  b²/4
        a²    ab/2  a²    ab/2
        ab/2  b²/4  ab/2  b²/4 ]
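The integer core part W = C_f x C_f^T uses only small integer multiplies (the ±2 factors reduce to shifts in hardware). A sketch, with made-up example block values:

```python
# Sketch of the 4x4 forward core transform W = Cf * x * Cf^T.
CF = [[1, 1, 1, 1],
      [2, 1, -1, -2],
      [1, -1, -1, 1],
      [1, -2, 2, -1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_core(x):
    cf_t = [list(row) for row in zip(*CF)]   # Cf transposed
    return matmul(matmul(CF, x), cf_t)

# Example input block (illustrative values only)
x = [[5, 11, 8, 10], [9, 8, 4, 12], [1, 10, 11, 4], [19, 6, 15, 7]]
W = forward_core(x)
print(W[0][0])  # DC term = sum of all samples = 140
```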

Pre-scaling
The inverse transform is given by
x' = C_i^T ( X ⊗ E_i ) C_i
where C_i is the integer inverse core transform matrix (entries ±1 and ±1/2, implementable with additions and shifts) and
E_i = [ a²  ab  a²  ab
        ab  b²  ab  b²
        a²  ab  a²  ab
        ab  b²  ab  b² ]

Quantization H.264 uses a scalar quantizer. The quantization should satisfy the following requirements:
(a) avoid division and/or floating-point arithmetic;
(b) incorporate the post- and pre-scaling matrices E_f and E_i.

The basic forward quantizer operation is Z(u,v)= round( X(u,v)/QStep ) where X(u,v) is a transform coefficient, Z(u,v) is a quantized coefficient, and QStep is a quantizer step size.

There are 52 quantizers (quantization parameter QP = 0..51). An increase of 1 in QP means an increase of QStep by approximately 12%; an increase of 6 in QP increases QStep by a factor of 2.
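The QP-to-QStep relationship can be sketched with the six base step sizes (QP 0..5); the base values below are the standard's table, quoted here as an assumption of this sketch:

```python
# QStep grows roughly 12% per QP step and doubles every 6 steps.
BASE_QSTEP = [0.625, 0.6875, 0.8125, 0.875, 1.0, 1.125]  # QP 0..5

def qstep(qp):
    return BASE_QSTEP[qp % 6] * (2 ** (qp // 6))

print(qstep(4))   # 1.0
print(qstep(10))  # 2.0  (QP + 6 doubles QStep)
```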

Post-Scaling
The post-scaling factor PF (i.e., a², ab/2 or b²/4) is incorporated into the forward quantizer in the following way:
1. The input block x is transformed to give a block of unscaled coefficients W = C_f x C_f^T.
2. Then, each coefficient in W is quantized and scaled in a single operation:
Z(u,v) = round( W(u,v) × PF / QStep )
where PF is a², ab/2 or b²/4 depending on the position (u,v).

To simplify the arithmetic, the factor (PF/QStep) is implemented as a multiplication by a factor MF followed by a right shift, avoiding any division:
Z(u,v) = round( W(u,v) × MF / 2^qbits )
where
MF = 2^qbits × PF / QStep and qbits = 15 + floor(QP/6).

Note that the round operation need not round to the nearest integer. In the reference model software, it is realized as
|Z(u,v)| = (|W(u,v)| × MF + f) >> qbits
sign(Z(u,v)) = sign(W(u,v))
where f is 2^qbits/3 for intra blocks and 2^qbits/6 for inter blocks.
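The reference-style quantizer above can be sketched directly (the MF/qbits values used in the usage line are those from the text's QP = 4 example):

```python
# Sketch: |Z| = (|W|*MF + f) >> qbits, with the sign carried over.
def quantize(w, mf, qbits, intra=True):
    f = (1 << qbits) // 3 if intra else (1 << qbits) // 6  # rounding offset
    z = (abs(w) * mf + f) >> qbits
    return z if w >= 0 else -z

# QP = 4, position (0,0): MF = 8192, qbits = 15, QStep = 1.0, PF = 0.25
print(quantize(140, 8192, 15))  # -> 35  (i.e. 140 * 0.25 / 1.0)
```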

Example: Suppose QP = 4 and (u,v) = (0,0). Therefore QStep = 1.0, PF = a² = 0.25, and qbits = 15. From MF = 2^qbits × PF / QStep we have MF = 32768 × 0.25 = 8192.

The MF values for QP ≤ 5 are shown below. For QP > 5, the factors MF remain unchanged, but qbits increases by 1 for each increment of six in QP. That is, qbits = 16 for 6 ≤ QP ≤ 11, qbits = 17 for 12 ≤ QP ≤ 17, and so on.

QP | (0,0),(2,0),(0,2),(2,2) | (1,1),(1,3),(3,1),(3,3) | other positions
0 | 13107 | 5243 | 8066
1 | 11916 | 4660 | 7490
2 | 10082 | 4194 | 6554
3 | 9362 | 3647 | 5825
4 | 8192 | 3355 | 5243
5 | 7282 | 2893 | 4559

Pre-Scaling The de-quantized coefficient is given by
W'(u,v) = Z(u,v) × QStep × PF × 64.
The inverse transform involving pre-scaling proceeds in the following way:
1. The de-quantized (pre-scaled) block W' is fed to the core 2D inverse transform.
2. The reconstructed block is then given by x'' = round( (C_i^T W' C_i) / 64 ).

The pre-scaling factor PF (i.e., a², ab or b²) is incorporated in the computation of the de-quantized coefficient, together with a constant scaling factor of 64 to avoid rounding errors. The values at the output of the inverse transform are divided by 64 to remove this constant scaling factor.

The H.264 standard does not specify QStep or PF directly. Instead, the parameter V = QStep × PF × 64 is defined. The V values for QP ≤ 5 are shown below.

QP | (0,0),(2,0),(0,2),(2,2) | (1,1),(1,3),(3,1),(3,3) | other positions
0 | 10 | 16 | 13
1 | 11 | 18 | 14
2 | 13 | 20 | 16
3 | 14 | 23 | 18
4 | 16 | 25 | 20
5 | 18 | 29 | 23

For QP > 5, the V value increases by a factor of 2 for each increment of six in QP. That is,
V(QP, u, v) = V(QP mod 6, u, v) × 2^floor(QP/6).

The Complete Transformation, Quantization, Rescaling and Inverse Transformation
Encoding:
1. Input 4×4 block: x
2. Forward core transform: W = C_f x C_f^T
3. Post-scaling and quantization: Z(u,v) = round( W(u,v) × MF / 2^qbits )
Decoding:
1. Pre-scaling: W'(u,v) = Z(u,v) × V(u,v) × 2^floor(QP/6)
2. Inverse core transform: x' = C_i^T W' C_i
3. Re-scaling: x'' = round( x' / 64 )
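The decoder-side rescaling step can be sketched numerically. The base value V = 16 for position (0,0) is the one implied by the example below (V = 32 at QP = 10 is the base value doubled, since floor(QP/6) = 1); the coefficient value is made up:

```python
# Sketch of pre-scaling: W' = Z * V * 2^floor(QP/6).
def rescale(z, v_base, qp):
    return z * v_base * (1 << (qp // 6))

print(rescale(3, 16, 10))  # -> 96; the built-in x64 factor is removed
                           # by the division after the inverse transform
```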

Example: Suppose QP=10, and input block x = Forward core transform: W =

3. MF = 8192, 3355 or 5243 (depending on position), qbits = 16, and f = 2^qbits/3, giving the quantized block Z.
4. V = 32, 50 or 40, because 2^floor(QP/6) = 2.

5. Output of the inverse core transform after division by 64 is

Entropy Coding
[Encoder block diagram, with the entropy coding stage highlighted.]

Here we present two basic variable-length coding (VLC) techniques used by H.264: the Exp-Golomb code and context-adaptive VLC (CAVLC). Exp-Golomb coding is used universally for all symbols except transform coefficients. CAVLC is used for coding transform coefficients:
–No end-of-block symbol; instead, the number of coefficients is decoded.
–Coefficients are scanned backward.
–Contexts are built depending on the transform coefficients.

Exp-Golomb code
Exp-Golomb codes are variable-length codes with a regular construction. The first 9 codewords are:

code_num | codeword
0 | 1
1 | 010
2 | 011
3 | 00100
4 | 00101
5 | 00110
6 | 00111
7 | 0001000
8 | 0001001

Each Exp-Golomb codeword is constructed as follows: [M zeros][1][INFO], where INFO is an M-bit field carrying information. The length of a codeword is therefore 2M + 1 bits.

Given a code_num, the corresponding Exp-Golomb codeword can be obtained by the following procedure:
(a) M = floor( log2(code_num + 1) )
(b) INFO = code_num + 1 - 2^M
Example: code_num = 6
M = floor( log2(6 + 1) ) = 2
INFO = 6 + 1 - 4 = 3
The corresponding Exp-Golomb codeword = [M zeros][1][INFO] = 00111.

Given an Exp-Golomb codeword, its code_num can be found as follows:
(a) Read in M leading zeros followed by 1.
(b) Read the M-bit INFO field.
(c) code_num = 2^M + INFO - 1
Example: Exp-Golomb codeword = 00111
(a) M = 2
(b) INFO = 3
(c) code_num = 4 + 3 - 1 = 6
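The two procedures above can be sketched as a small encode/decode pair (bitstrings are used here for clarity rather than a real bit writer):

```python
# Sketch of Exp-Golomb coding: [M zeros][1][INFO],
# M = floor(log2(code_num + 1)), INFO = code_num + 1 - 2^M.
def eg_encode(code_num):
    m = (code_num + 1).bit_length() - 1          # exact floor(log2)
    info = code_num + 1 - (1 << m)
    return "0" * m + "1" + (format(info, "b").zfill(m) if m else "")

def eg_decode(bits):
    m = bits.index("1")                          # count leading zeros
    info = int(bits[m + 1:2 * m + 1] or "0", 2)  # M-bit INFO field
    return (1 << m) + info - 1                   # code_num = 2^M + INFO - 1

print(eg_encode(6))        # -> 00111
print(eg_decode("00111"))  # -> 6
```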

A parameter v to be encoded is mapped to code_num in one of three ways:
ue(v): unsigned direct mapping, code_num = v. (Mainly used for macroblock type and reference frame index.)
se(v): signed mapping; v is mapped to code_num as follows:
code_num = 2|v|, (v ≤ 0)
code_num = 2v - 1, (v > 0)
(Mainly used for motion vector difference and delta QP.)
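The se(v) mapping above interleaves positive and negative values onto the non-negative integers, as a quick sketch shows:

```python
# Sketch of the se(v) signed mapping: v -> code_num.
def se_map(v):
    return 2 * v - 1 if v > 0 else -2 * v

print([se_map(v) for v in (0, 1, -1, 2, -2)])  # -> [0, 1, 2, 3, 4]
```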

me(v): mapped symbols. Parameter v is mapped to code_num according to a table specified in the standard. This mapping is used for the coded_block_pattern parameter.

CAVLC This is the method used to encode the residual, zig-zag-ordered blocks of transform coefficients.

CAVLC is designed to take advantage of several characteristics of quantized 4×4 blocks: After prediction, transformation and quantization, blocks are typically sparse (containing mostly zeros). The highest-frequency non-zero coefficients after the zig-zag scan are often sequences of ±1. The number of non-zero coefficients in neighboring blocks is correlated. The level (magnitude) of non-zero coefficients tends to be higher at the start of the zig-zag scan and lower towards the high frequencies.

The procedure described below is based on JVT document JVT-C028, Gisle Bjøntegaard and Karl Lillevold, "Context-adaptive VLC (CVLC) coding of coefficients," Fairfax, VA, May 2002. The H.264 CAVLC is an extension of this work.

The CAVLC encoding of a block of transform coefficients proceeds as follows:
1. Encode the number of coefficients and trailing ones.
2. Encode the sign of each trailing one.
3. Encode the levels of the remaining non-zero coefficients.
4. Encode the total number of zeros before the last coefficient.
5. Encode each run of zeros.

Encode the number of coefficients and trailing ones
The first step is to encode the number of non-zero coefficients (NumCoeff) and trailing ones (T1s). NumCoeff can be anything from 0 (no coefficients in the block) to 16 (16 non-zero coefficients). T1s can be anything from 0 to 3. If there are more than 3 trailing ±1s, only the last 3 are treated as "special cases" and the others are coded as normal coefficients.

Example: Consider the 4×4 block shown below. Here NumCoeff = 7 and T1s = 3.

Three tables can be used for encoding NumCoeff and T1s: Num-VLC0, Num-VLC1 and Num-VLC2.

The selection of tables depends on the number of non-zero coefficients in the upper and left-hand previously coded blocks, N_U and N_L. A parameter N is calculated as follows:
If blocks U and L are both available (i.e., in the same coded slice), N = (N_U + N_L)/2.
If only block U is available, N = N_U.
If only block L is available, N = N_L.
If neither is available, N = 0.

The table is selected based on N as follows:

N | Selected table
0, 1 | Num-VLC0
2, 3 | Num-VLC1
4, 5, 6, 7 | Num-VLC2
8 or above | FLC

The FLC (fixed-length code) has the form xxxxyy (6 bits), where xxxx and yy represent NumCoeff and T1s, respectively.
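The context-driven table choice above can be sketched as follows (the integer-average rounding in the first branch is an assumption of this sketch, following the (N_U + N_L)/2 rule as written):

```python
# Sketch of the coeff-token table selection from neighbor counts.
def context_n(nu=None, nl=None):
    if nu is not None and nl is not None:
        return (nu + nl) // 2   # both neighbors available
    if nu is not None:
        return nu               # only the upper block available
    if nl is not None:
        return nl               # only the left block available
    return 0                    # neither available

def pick_table(n):
    if n < 2:
        return "Num-VLC0"
    if n < 4:
        return "Num-VLC1"
    if n < 8:
        return "Num-VLC2"
    return "FLC"

print(pick_table(context_n(3, 4)))  # N = 3 -> Num-VLC1
```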

Encode the sign of each trailing one
For each T1, a single bit encodes the sign (0 = +, 1 = -). These are encoded in reverse order, starting with the highest-frequency T1.

Encode the levels of the remaining non-zero coefficients
The level (sign and magnitude) of each remaining non-zero coefficient in the block is encoded in reverse order. There are 5 VLC tables to choose from, Lev_VLC0 to Lev_VLC4. Lev_VLC0 is biased towards lower magnitudes, Lev_VLC1 towards slightly higher magnitudes, and so on.

A special adjustment is used when it is impossible for the first remaining coefficient to have value ±1, which happens when T1s < 3: such a coefficient would otherwise have been coded as a trailing one.

To improve coding efficiency, the tables are switched during the coding process based on the following procedure.

Encode the total number of zeros before the last coefficient
The following table is used for encoding the total number of zeros before the last coefficient (TotZeros).

Encode each run of zeros At this stage it is known how many zeros are left to distribute (call this ZerosLeft). When the first non-zero coefficient is encoded or decoded, ZerosLeft starts at TotZeros, and it decreases as more non-zero coefficients are processed. The number of zeros preceding each non-zero coefficient (RunBefore) is coded to properly locate that coefficient. Before coding the next RunBefore, ZerosLeft is updated and used to select one of 7 tables.

Why is the maximum run 14? Because RunBefore is only coded when the block contains at least two non-zero coefficients, at most 16 - 2 = 14 zeros can precede a coded coefficient.

Example: Consider an inter-frame residual 4×4 block whose zig-zag re-ordering is:
0, 3, 0, 1, -1, -1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0
Therefore, NumCoeff = 5, TotZeros = 3, T1s = 3. Assume N = 0.

Encoding:

Value | Code | Comments
NumCoeff = 5, T1s = 3 | | Use Num-VLC0
Sign of T1 (+1) | 0 | Starting at the highest frequency
Sign of T1 (-1) | 1 |
Sign of T1 (-1) | 1 |
Level = +1 | 1 | Use Lev-VLC0
Level = +3 | | Use Lev-VLC1
TotZeros = 3 | 1110 | Also depends on NumCoeff
ZerosLeft = 3; RunBefore = 1 | 00 | RunBefore of the 1st coeff
ZerosLeft = 2; RunBefore = 0 | 1 | RunBefore of the 2nd coeff
ZerosLeft = 2; RunBefore = 0 | 1 | RunBefore of the 3rd coeff
ZerosLeft = 2; RunBefore = 1 | 01 | RunBefore of the 4th coeff
ZerosLeft = 1; RunBefore = 1 | | No code required; last coeff

The transmitted bitstream for this block is the concatenation of the codes above.

Decoding:

Code | Value | Output array | Comments
 | NumCoeff = 5, T1s = 3 | empty |
0 | +1 | 1 | T1 sign
1 | -1 | -1, 1 | T1 sign
1 | -1 | -1, -1, 1 | T1 sign
1 | +1 | 1, -1, -1, 1 | Level value
 | +3 | 3, 1, -1, -1, 1 | Level value
1110 | TotZeros = 3 | +3, 1, -1, -1, 1 |
00 | RunBefore = 1 | +3, 1, -1, -1, 0, 1 | RunBefore of the 1st coeff
1 | RunBefore = 0 | +3, 1, -1, -1, 0, 1 | RunBefore of the 2nd coeff
1 | RunBefore = 0 | +3, 1, -1, -1, 0, 1 | RunBefore of the 3rd coeff
01 | RunBefore = 1 | +3, 0, 1, -1, -1, 0, 1 | RunBefore of the 4th coeff
 | | 0, +3, 0, 1, -1, -1, 0, 1 | ZerosLeft = 1; the remaining zero is prepended

De-blocking Filter
[Encoder block diagram, with the de-blocking filter highlighted.]

The deblocking filter improves subjective visual quality. The filter is highly context-adaptive. It operates on the boundaries of 4×4 subblocks, on the samples p3, p2, p1, p0 | q0, q1, q2, q3 on either side of a vertical or horizontal boundary.

The choice of filtering outcome depends on the boundary strength and on the gradient of image samples across the boundary. The boundary strength parameter Bs is selected according to a set of rules defined in the standard.

A group of samples from the set (p2, p1, p0, q0, q1, q2) is filtered only if:
(a) Bs > 0, and
(b) |p0 - q0| < α and |p1 - p0| < β and |q1 - q0| < β
where α and β are thresholds defined in the standard. The threshold values increase with the average quantization parameter QP of the two blocks p and q.
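The filtering decision above can be sketched as a boolean test (the sample and threshold values below are made-up illustrations, not values from the standard's tables):

```python
# Sketch of the boundary-filtering decision: filter only when Bs > 0 and
# the gradients across the edge are below the QP-dependent thresholds.
def should_filter(bs, p1, p0, q0, q1, alpha, beta):
    return (bs > 0
            and abs(p0 - q0) < alpha
            and abs(p1 - p0) < beta
            and abs(q1 - q0) < beta)

# A sharp edge (likely a real image feature) is left untouched:
print(should_filter(2, 60, 62, 130, 131, alpha=20, beta=6))  # False
# A mild step (likely a blocking artifact) is filtered:
print(should_filter(2, 60, 62, 66, 67, alpha=20, beta=6))    # True
```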

When QP is small, anything other than a very small gradient across the boundary is likely to be due to image features that should be preserved, so the thresholds α and β are low. When QP is larger, blocking distortion is likely to be more significant, and α and β are higher so that more boundary samples are filtered.

[Comparison images: the same decoded frame without and with de-blocking filtering.]

Data Partitioning and Network Abstraction Layer

A video picture is coded as one or more slices. Each slice contains an integral number of macroblocks, from 1 to the total number of macroblocks in the picture. The number of macroblocks per slice need not be constant within a picture.

There are five slice modes. The three commonly used modes are:
1. I-slice: all macroblocks of the slice are coded using intra prediction.
2. P-slice: in addition to the coding types of the I-slice, some macroblocks can be coded using inter prediction (predicted from one reference picture buffer only).
3. B-slice: in addition to the coding types available in a P-slice, some macroblocks can be predicted from two reference picture buffers.

Note that the coded data in a slice can be placed in three separate data partitions (A, B and C) for robust transmission. Partition A contains the slice header and header data for each macroblock in the slice. Partition B contains coded residual data for intra-coded macroblocks. Partition C contains coded residual data for inter-coded macroblocks.

In H.264, the VCL data are mapped into NAL units prior to transmission or storage. Each NAL unit contains a Raw Byte Sequence Payload (RBSP), a set of data corresponding to coded video data or header information. The NAL units can be delivered over a packet-based network or a bitstream transmission link, or stored in a file.
Sequence of NAL units: [NAL header | RBSP] [NAL header | RBSP] [NAL header | RBSP] ...

RBSP type | Description
Parameter Set | Global parameters for a sequence, such as picture dimensions and video format.
Supplemental Enhancement Information | Side messages that are not essential for correct decoding of the video sequence.
Picture Delimiter | Boundary between pictures (optional). If not present, the decoder infers the boundary based on the frame number contained in each slice header.
Coded Slice | Header and data for a slice; this RBSP contains actual coded video data.
Data Partition A, B or C | Three units containing data-partitioned slice layer data (useful for error-resilient decoding).
End of Sequence |
End of Stream |
Filler Data | Contains 'dummy' data.

Example: The following shows a sequence of RBSP elements:
Sequence parameter set, SEI, Picture parameter set, I slice (coded slice), Picture delimiter, P slice (coded slice), P slice (coded slice), ...

Profiles
–Baseline
–Main
–Extended
–High