9/26 Digital Image Communication: Mathematical Preliminaries, Math Background, Predictive Coding, Huffman Coding, Matrix Computation.

Presentation transcript:

9/26 Digital Image Communication: Mathematical Preliminaries, Math Background, Predictive Coding, Huffman Coding, Matrix Computation

Mathematical Preliminaries: Self-Information (Shannon); Entropy (in bits, x = 2); Markov Models

Self-Information (Shannon) (1): Definition. With x = 2, the unit is bits.
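The defining equation was lost with the slide image; a reconstruction of the standard Shannon definition, assuming the slide used log base x (x = 2 giving bits):

i(A) = \log_x \frac{1}{P(A)} = -\log_x P(A)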

Self-Information (2): For two independent events A and B, the self-information associated with the occurrence of both A and B is the sum of their individual self-informations. The same additivity holds when the experiment's sample space is composed of independent events A_i.
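A reconstruction of the relation the slide most likely displayed:

i(AB) = -\log_x P(AB) = -\log_x P(A)P(B) = i(A) + i(B), \qquad i(A_1 A_2 \cdots A_n) = \sum_{k=1}^{n} i(A_k)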

Entropy (in bits, x=2)
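The entropy formula itself did not survive extraction; the standard form, with x = 2 giving bits per symbol:

H = -\sum_i P(A_i)\,\log_2 P(A_i) = \sum_i P(A_i)\, i(A_i)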

Markov Model (1): Definition. (Example) First-order Markov model.
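The defining equation is missing from the transcript; for a first-order Markov model the standard condition is that the present sample depends only on the immediately preceding one:

P(x_n \mid x_{n-1}, x_{n-2}, \ldots) = P(x_n \mid x_{n-1})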

Markov Model (2): (Example) White and black pixels (binary image).
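The numbers on this slide are lost; in the usual binary-image example the chain has two states S_w (white) and S_b (black) with transition probabilities P(b|w) and P(w|b), and the model entropy is the state-occupancy-weighted sum of the per-state entropies (a hedged reconstruction of the standard treatment, not the lecture's own figures):

H = P(S_w)\,H(S_w) + P(S_b)\,H(S_b), \quad H(S_w) = -P(b\mid w)\log_2 P(b\mid w) - P(w\mid w)\log_2 P(w\mid w), and similarly for H(S_b).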

Math Background: Joint, Conditional, and Total Probabilities; Independence; Expectation; Distribution Functions; Stochastic Processes; Random Variable Characteristics (independent, orthogonal, uncorrelated, autocorrelation); Strict-Sense Stationary; Wide-Sense Stationary

Joint, Conditional, and Total Probabilities; Independence
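The equations on this slide were lost; the standard identities the title refers to are:

P(A, B) = P(A \mid B)\,P(B), \qquad P(A) = \sum_i P(A \mid B_i)\,P(B_i) \text{ (total probability, } \{B_i\} \text{ a partition)}, \qquad \text{independence: } P(A, B) = P(A)\,P(B)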

Expectation; Distribution Functions (1): Uniform Distribution on [a, b]
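A hedged reconstruction of the standard formulas this slide covers (the plot of the uniform density over [a, b] is lost):

E[X] = \int_{-\infty}^{\infty} x\, f_X(x)\, dx; \qquad \text{uniform on } [a,b]:\; f_X(x) = \frac{1}{b-a}\;(a \le x \le b), \quad E[X] = \frac{a+b}{2}, \quad \operatorname{Var}[X] = \frac{(b-a)^2}{12}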

Distribution Functions (2): Gaussian Distribution, Laplacian Distribution
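The density formulas did not survive extraction; the standard forms (with mean \mu and variance \sigma^2) are:

\text{Gaussian: } f_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-(x-\mu)^2 / 2\sigma^2}; \qquad \text{Laplacian: } f_X(x) = \frac{1}{\sqrt{2\sigma^2}}\, e^{-\sqrt{2}\,|x-\mu| / \sigma}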

Distribution Function (3)

Stochastic Process: a function of time (a family of random variables indexed by time).

Random Variable Characteristics (1): Independent, Orthogonal, Uncorrelated, Autocorrelation Function
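The defining relations were lost with the slide; the standard definitions are:

\text{Independent: } f_{XY}(x,y) = f_X(x)\,f_Y(y); \quad \text{Orthogonal: } E[XY] = 0; \quad \text{Uncorrelated: } E[XY] = E[X]\,E[Y] \;(\operatorname{Cov}(X,Y)=0); \quad \text{Autocorrelation: } R_X(t_1, t_2) = E[X(t_1)\,X(t_2)]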

Random Variable Characteristics (2): Strict-Sense Stationary, Wide-Sense Stationary
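A statement of the standard definitions the slide refers to:

\text{Strict-sense stationary: all joint distributions of } \{X(t_1+\tau), \ldots, X(t_n+\tau)\} \text{ are independent of the shift } \tau. \qquad \text{Wide-sense stationary: } E[X(t)] = \mu \text{ (constant) and } R_X(t_1, t_2) = R_X(t_1 - t_2)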

Predictive Coding (1)

Predictive Coding (2)

Predictive Coding (3) Examples
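The contents of these predictive-coding slides are images that did not survive extraction. As a minimal illustration of the general idea (predict each sample from past samples and encode only the residual), here is a hedged Python sketch; the first-order previous-sample predictor and the example signal are assumptions for illustration, not the lecture's own example.

```python
# Minimal sketch of first-order predictive coding (DPCM-style), assuming a
# previous-sample predictor x_hat[n] = x[n-1]. Lossless, because the decoder
# repeats the same prediction and adds the residual back.

def encode(samples):
    """Return prediction residuals e[n] = x[n] - x[n-1] (with x[-1] taken as 0)."""
    residuals = []
    prev = 0
    for x in samples:
        residuals.append(x - prev)  # only the residual would be entropy-coded
        prev = x
    return residuals

def decode(residuals):
    """Invert encode(): x[n] = e[n] + x[n-1]."""
    samples = []
    prev = 0
    for e in residuals:
        prev = e + prev
        samples.append(prev)
    return samples

if __name__ == "__main__":
    signal = [100, 102, 105, 105, 104, 101]   # hypothetical pixel row
    res = encode(signal)
    print(res)                # [100, 2, 3, 0, -1, -3]: small values peaked around 0
    assert decode(res) == signal
```

The residual distribution is typically far more peaked than the original sample distribution, which is what makes the subsequent entropy coding (e.g., Huffman coding, below) effective.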

Predictive Coding (4)

Predictive Coding (5)

Predictive Coding (6)

Predictive Coding (7)

Predictive Coding (8)

Predictive Coding (9)

Predictive Coding (10)

Predictive Coding (11)

Predictive Coding (12)

Predictive Coding (13)

Predictive Coding (14)

Predictive Coding (15)

Predictive Coding (16)

Huffman Coding (1): (Example) P(a1) = 1/2, P(a2) = 1/4, P(a3) = P(a4) = 1/8
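The code tree drawn on the slide is lost; for this source the standard optimal assignment (one of several equivalent ones, up to relabeling of 0 and 1) is:

a_1 \mapsto 0,\; a_2 \mapsto 10,\; a_3 \mapsto 110,\; a_4 \mapsto 111, \qquad \bar{l} = \tfrac{1}{2}(1) + \tfrac{1}{4}(2) + \tfrac{1}{8}(3) + \tfrac{1}{8}(3) = 1.75 \text{ bits/symbol} = H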

Huffman Coding (2): Nodes of the code tree: internal nodes and external (leaf) nodes.

Huffman Coding (3): The Huffman Coding Algorithm. In an optimum code, symbols that occur more frequently (have a higher probability of occurrence) have shorter codewords than symbols that occur less frequently. In an optimum code, the two symbols that occur least frequently have codewords of the same length.
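A hedged Python sketch of the merge-the-two-least-probable procedure described above, applied to the example source from slide (1); the heapq-based structure and symbol names are implementation choices for illustration, not the lecture's own code.

```python
import heapq

def huffman_code(probs):
    """Build a Huffman code for {symbol: probability}; returns {symbol: codeword}."""
    # Each heap entry: (probability, unique tie-breaker, {symbol: partial codeword}).
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # two least probable subtrees
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}   # prepend branch bits
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]

if __name__ == "__main__":
    probs = {"a1": 0.5, "a2": 0.25, "a3": 0.125, "a4": 0.125}
    code = huffman_code(probs)
    print(code)   # {'a1': '0', 'a2': '10', 'a3': '110', 'a4': '111'} (up to 0/1 relabeling)
    print(sum(p * len(code[s]) for s, p in probs.items()))   # 1.75 bits/symbol
```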

Matrix Computation (1): Applications of determinants. Objective: find the solution of Ax = b.

Matrix Computation (2)

Matrix Computation (3): Cramer's Rule gives the j-th component of x = A⁻¹b. Applications: stability analysis, Markov processes (steady state).
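The formula itself is missing from the transcript; Cramer's rule in its standard form is:

x_j = \frac{\det(A_j)}{\det(A)}, \qquad \text{where } A_j \text{ is } A \text{ with its } j\text{-th column replaced by } b \text{ (valid when } \det(A) \neq 0\text{)}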