Erik de Jong & Willem Bouma: Arithmetic Coding, Octree, Compression, Surface Approximation, Child Cell Configurations, Single Child Cell Configurations.

Presentation transcript:

Erik de Jong & Willem Bouma

- Arithmetic Coding
- Octree
- Compression
- Surface Approximation
- Child Cell Configurations
- Single Child Cell Configurations
- Results
- Questions

- Assign every symbol a range within the interval [0, 1). The size of the range represents the probability of the symbol occurring.
- Example:
  - A: 60% [0.0, 0.6)
  - B: 20% [0.6, 0.8)
  - C: 10% [0.8, 0.9)
  - D: 10% [0.9, 1.0) (end-of-data symbol)

Ranges: A [0, 0.6), B [0.6, 0.8), C [0.8, 0.9), D [0.9, 1.0). The figure on this slide shows the decoding of the sequence A B C D; a code sketch of the scheme follows below.
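A minimal sketch of the interval-narrowing idea for the symbol table above. The floating-point intervals and the helper names are illustrative only; a real coder works with renormalized integer intervals to avoid precision loss.

```python
# Minimal floating-point sketch of arithmetic coding for the A/B/C/D model above.
# A production coder would use renormalized integer intervals instead of floats.

RANGES = {'A': (0.0, 0.6), 'B': (0.6, 0.8), 'C': (0.8, 0.9), 'D': (0.9, 1.0)}

def encode(message):
    """Narrow [low, high) once per symbol; any number in the final interval encodes the message."""
    low, high = 0.0, 1.0
    for sym in message:
        s_low, s_high = RANGES[sym]
        width = high - low
        low, high = low + width * s_low, low + width * s_high
    return (low + high) / 2  # any value in [low, high) would do

def decode(value):
    """Invert the narrowing until the end-of-data symbol 'D' appears."""
    out = []
    while not out or out[-1] != 'D':
        for sym, (s_low, s_high) in RANGES.items():
            if s_low <= value < s_high:
                out.append(sym)
                value = (value - s_low) / (s_high - s_low)
                break
    return ''.join(out)

assert decode(encode("ABCD")) == "ABCD"
```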

- Huffman coding is a specialized case of arithmetic coding:
  - Every symbol is converted to a bit sequence of integer length
  - Probabilities are effectively rounded to negative powers of two
- Advantage of Huffman: parts of the input stream can be decoded independently
- Disadvantage: arithmetic coding comes much closer to the optimal entropy rate

SQUEEL

- Estimate/approximate as much as possible
- Store only the differences w.r.t. the estimate (see the sketch below)
- A better estimate gives:
  - smaller numbers
  - lower entropy
  - better compression
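A toy sketch of this two-step idea. The `predict` callback is a hypothetical placeholder; in the presented method the prediction comes from the surface approximation and the residuals are fed to the arithmetic coder.

```python
# Toy sketch of predictive (residual) coding: transmit only the differences.
# `predict` is a hypothetical stand-in for the surface-based predictor.

def encode_residuals(values, predict):
    """Keep only the differences between the data and its prediction."""
    return [v - predict(i) for i, v in enumerate(values)]

def decode_residuals(residuals, predict):
    """Add the (identical) prediction back to recover the data exactly."""
    return [r + predict(i) for i, r in enumerate(residuals)]

# The better the prediction matches the data, the smaller (and cheaper to
# entropy code) the residuals become.
```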

- What is an Octree?

- Per cell we store only the occupied child cells
- Options (both sketched below):
  - Store a single byte, with each bit representing one child cell
  - Store the number of occupied cells e and a tuple T with the indices of the occupied cells (for example e = 4, T = {0, 1, 4, 5})
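A small sketch of the two options, assuming that bit i of the byte corresponds to child i; the bit ordering is an assumption, not stated on the slides.

```python
# Two equivalent encodings of a cell's occupied children (indices 0..7).
# Assumption: bit i of the byte corresponds to child i.

def children_to_byte(occupied):
    """e.g. {0, 1, 4, 5} -> 0b00110011, one bit per occupied child."""
    byte = 0
    for i in occupied:
        byte |= 1 << i
    return byte

def children_to_count_and_tuple(occupied):
    """e.g. {0, 1, 4, 5} -> (4, (0, 1, 4, 5)), i.e. e and the index tuple T."""
    indices = tuple(sorted(occupied))
    return len(indices), indices

assert children_to_byte({0, 1, 4, 5}) == 0b00110011
assert children_to_count_and_tuple({0, 1, 4, 5}) == (4, (0, 1, 4, 5))
```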

- We will approximate/estimate/compress:
  - The surface
  - The number of non-empty child cells
  - The child cell configuration
  - Index compression
  - The single child cell configuration

- Every level of the octree yields a preliminary approximation Q of the complete point cloud P
- For a cell that is to be subdivided:
  - Predict a surface F using Moving Least Squares (MLS) on the k nearest points in Q (see the sketch below)
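A sketch of this prediction step. A plain PCA/least-squares plane fit over the k nearest neighbours is used here as a simplification of a full MLS projection, and the default k = 12 is an arbitrary choice, not taken from the slides.

```python
# Sketch of the surface prediction: fit a plane F to the k nearest points of the
# preliminary approximation Q around a cell. A PCA plane is a simplification of
# the full MLS projection used in the presented method.
import numpy as np

def predict_plane(cell_center, Q, k=12):
    """Return (point_on_plane, unit_normal) fitted to the k nearest points of Q."""
    Q = np.asarray(Q, dtype=float)
    nearest = Q[np.argsort(np.linalg.norm(Q - cell_center, axis=1))[:k]]
    centroid = nearest.mean(axis=0)
    # The normal is the direction of least variance in the neighbourhood.
    _, _, vt = np.linalg.svd(nearest - centroid)
    return centroid, vt[-1]
```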

- Prediction of the number of non-empty child cells e
- Based on an estimate of the sampling density ρ
- The local sampling density ρ_i at a point p_i in P is defined on the slide in terms of:
  - the k nearest points
  - p_i, a point in the point cloud P
  - q_i, a point on the surface approximation Q

- Sampling density (formula shown on the slide)
- Guess the number of child cells e (sketched below) based on:
  - The area of the plane F
  - The sampling density ρ
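A hedged sketch of the idea: the slide's exact density formula is not preserved in this transcript, so a standard k-nearest-neighbour disc estimate stands in for ρ_i, and the thresholds mapping the expected sample count to e are purely hypothetical.

```python
# Hedged sketch: a k-NN disc estimate stands in for the slide's density formula,
# and the thresholds mapping the expected sample count to e are hypothetical.
import numpy as np

def local_density(p_i, Q, k=8):
    """rho_i ~ k samples per disc of radius r_k around p_i (assumed estimator)."""
    dists = np.sort(np.linalg.norm(np.asarray(Q, dtype=float) - p_i, axis=1))
    r_k = dists[min(k, len(dists) - 1)]   # distance to the k-th nearest neighbour
    return k / (np.pi * r_k ** 2)

def predict_nonempty_children(area_F_in_cell, rho):
    """Guess e in 1..8 from the expected number of samples on F inside the cell."""
    expected_samples = rho * area_F_in_cell
    thresholds = [1, 2, 4, 8, 16, 32, 64]  # hypothetical bucket boundaries
    return min(1 + sum(expected_samples >= t for t in thresholds), 8)
```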

- Quality of the prediction: the graphs on this slide show the difference between the estimated value and the true value for (a) the level-5 octree, (b) the level-7 octree, and (c) the entire octree.

- Given the number of non-empty child cells e, there are only a limited number of configurations: C(8, e), i.e. 8-choose-e of them.

- We have an array with all possible configurations, weighted and sorted in ascending order
- Each configuration of the subdivision is encoded as an index into this array
- Common configurations get lower weights, which means smaller indices and therefore lower entropy

- Cell centers tend to be close to F.

- To find the weight of a configuration:
  - Sum up the (L1) distances from its cell centers to F (see the sketch below)
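A sketch of building the sorted configuration table for a given e. The child-center offsets of ±0.25 in a unit parent cell and the (point, normal) representation of F are assumptions; only the weighting by summed center-to-plane distance and the ascending sort follow the slides.

```python
# Sketch of the weighted configuration table for a given e. Child centers at
# (+-0.25)^3 of a unit parent cell and the (point, normal) plane form are assumed.
from itertools import combinations, product
import numpy as np

CHILD_CENTERS = np.array(list(product([-0.25, 0.25], repeat=3)))  # 8 child offsets

def sorted_configurations(e, plane_point, plane_normal):
    """All C(8, e) child subsets, sorted so configurations close to F come first."""
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    dist = np.abs((CHILD_CENTERS - plane_point) @ n)   # center-to-plane distances
    configs = list(combinations(range(8), e))
    configs.sort(key=lambda cfg: sum(dist[i] for i in cfg))  # weight = summed distance
    return configs

# The actual configuration is then transmitted as its index in this list, so
# common (low-weight) configurations get small indices and hence low entropy.
```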

- The index of the configuration in the sorted array is encoded using arithmetic coding under two contexts:
  - First context: the octree level of the cell C
  - Second context: the expressiveness e(F)

- e(F) reflects the angle of the plane to the coordinate directions

- In order to use e(F) as a context for arithmetic coding, it has to be quantized
- Five bins were found to be sufficient and delivered the best results (see the sketch below)
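The slides do not give the exact formula for e(F), so the sketch below uses the largest component of the unit normal as a stand-in for "angle to the coordinate directions", quantized into the five bins mentioned above.

```python
# Hedged stand-in for e(F): the largest component of the unit normal measures how
# closely the plane aligns with a coordinate direction; quantized into five bins.
import numpy as np

def expressiveness_bin(plane_normal, bins=5):
    n = np.abs(np.asarray(plane_normal, dtype=float))
    n /= np.linalg.norm(n)
    # The max component ranges from 1/sqrt(3) (diagonal plane) to 1 (axis-aligned).
    t = (n.max() - 1 / np.sqrt(3)) / (1 - 1 / np.sqrt(3))
    return min(int(t * bins), bins - 1)
```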

- We can exploit an observation for cells that have only one occupied child cell
- Scanning devices often have a regular sampling grid
- This makes it possible to predict samples on the surface, rather than just close to it
- This is relevant for the finer levels of the octree hierarchy

- For cells with only one occupied child cell we can predict T based on the nearest neighbours: m is the centroid of the k nearest neighbours

- Quite surprisingly, the cell center projections on F that are farthest away are the most likely to be occupied
- The area farther away can be seen as undersampled
- A sample in that area therefore becomes more likely, since we expect the surface to be regularly sampled and no undersampling should exist

- The weights for the eight possible configurations are given on the slide in terms of (a hedged sketch follows below):
  - c(T), the cell center of T
  - prj(F, c(T)), the projection of c(T) on F
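The exact weight formula is not preserved in this transcript; the sketch below scores each child T by the distance between prj(F, c(T)) and the centroid m, ranking larger distances as more likely, in line with the undersampling observation above.

```python
# Hedged sketch of single-child prediction: score each child T by the distance
# between prj(F, c(T)) and the neighbourhood centroid m; a larger distance means
# more likely, per the undersampling argument. The exact formula is assumed.
import numpy as np

def rank_single_child(cell_center, child_offsets, plane_point, plane_normal, m):
    """Return child indices ordered from most to least likely to be occupied."""
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    centers = np.asarray(cell_center) + np.asarray(child_offsets)        # c(T)
    proj = centers - ((centers - plane_point) @ n)[:, None] * n          # prj(F, c(T))
    weights = np.linalg.norm(proj - np.asarray(m), axis=1)
    return np.argsort(-weights)
```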

- Extra attributes can also be encoded with the octree:
  - Color
  - Normals

- Compressing color:
  - Same two-step method as with the coordinates
  - Different prediction functions are needed

Model  | Number of points | Raw size | Compressed (bpp) | Compressed size
Dragon |                  | ,44 MB   | 5.06             | 1.66 MB
Venus  | ~                | KB       | 11.27            | 184 KB
Rabbit | ~                | KB       | 11.37            | 93 KB
MaleWB | ~                | KB       | 8.87             | 160 KB

bpp = bits per point. The raw size assumes 3 × 4 bytes per point. The octree uses 12 levels, except for the Dragon, for which the number of levels is unknown.

(b) uses 1.89 bpp

Questions?