1
9/26 Digital Image Communication: Mathematical Preliminaries, Math Background, Predictive Coding, Huffman Coding, Matrix Computation
2
Mathematical Preliminaries: Self-Information (Shannon), Entropy (in bits, x=2), Markov Models
3
Self-Information (Shannon) (1): Definition. With base x = 2, the unit is bits.
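For reference, a sketch of the standard Shannon definition this title refers to:

```latex
i(A) = -\log_x P(A) = \log_x \frac{1}{P(A)}, \qquad x = 2 \;\Rightarrow\; i(A) \text{ in bits}
```

Less probable events carry more self-information.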
4
Self-Information (2): For two independent events A and B, the self-information associated with the occurrence of both events is the sum of their individual self-informations. The experiment set is composed of independent events A_i.
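In equation form, the additivity property stated above, for independent A and B:

```latex
i(AB) = -\log_x \big(P(A)\,P(B)\big) = -\log_x P(A) - \log_x P(B) = i(A) + i(B)
```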
5
Entropy (in bits, x=2)
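The standard definition behind this title: entropy is the average self-information of the source,

```latex
H = -\sum_i P(a_i)\,\log_2 P(a_i) \quad [\text{bits/symbol}]
```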
6
Markov Model (1): Definition. (Ex) First-order Markov model.
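The defining property of a first-order (discrete) Markov model, as usually written:

```latex
P(x_n \mid x_{n-1}, x_{n-2}, \ldots, x_1) = P(x_n \mid x_{n-1})
```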
7
Markov Model (2): (Ex) White and black pixels (binary image).
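This is presumably the standard two-state example, with states S_w and S_b for white and black pixels; in its usual form, the entropy of the Markov source is the weighted sum of the per-state entropies:

```latex
H(S_w) = -P(w \mid w)\log_2 P(w \mid w) - P(b \mid w)\log_2 P(b \mid w)
H(S_b) = -P(b \mid b)\log_2 P(b \mid b) - P(w \mid b)\log_2 P(w \mid b)
H = P(S_w)\,H(S_w) + P(S_b)\,H(S_b)
```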
8
Math Background: Joint, Conditional, and Total Probabilities; Independence; Expectation; Distribution Functions; Stochastic Process; Random Variable Characteristics (independent, orthogonal, uncorrelated, autocorrelation); Strict-Sense Stationary; Wide-Sense Stationary
9
Joint, Conditional, and Total Probabilities; Independence
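In symbols, the relations this slide covers:

```latex
P(A, B) = P(A \mid B)\,P(B), \qquad
P(A) = \sum_i P(A \mid B_i)\,P(B_i), \qquad
P(A, B) = P(A)\,P(B) \;\; (\text{independence})
```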
10
Expectation; Distribution Function (1): Uniform Distribution on [a, b]
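The standard form of the uniform density on [a, b], with its mean and variance:

```latex
f_X(x) = \frac{1}{b - a} \;\; (a \le x \le b), \qquad
E[X] = \frac{a + b}{2}, \qquad
\sigma_X^2 = \frac{(b - a)^2}{12}
```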
11
Distribution Function (2): Gaussian Distribution, Laplacian Distribution
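The usual densities for these two models (the Laplacian written in terms of its variance σ², as is common in compression texts):

```latex
\text{Gaussian: } f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\,
  \exp\!\left(-\frac{(x - \mu)^2}{2\sigma^2}\right)
\text{Laplacian: } f_X(x) = \frac{1}{\sqrt{2}\,\sigma}\,
  \exp\!\left(-\frac{\sqrt{2}\,|x - \mu|}{\sigma}\right)
```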
12
Distribution Function (3)
13
Stochastic Process: a function of time
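One way to make this precise: a stochastic process X(t, ω) is a family of random variables indexed by time,

```latex
X(t, \omega): \quad \omega \text{ fixed} \Rightarrow \text{a sample function of } t; \qquad
t \text{ fixed} \Rightarrow \text{a random variable}
```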
14
Random Variable Characteristics (1): Independent, Orthogonal, Uncorrelated, Autocorrelation Function
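The usual definitions of the four terms listed:

```latex
\text{independent: } f_{XY}(x, y) = f_X(x)\,f_Y(y)
\text{orthogonal: } E[XY] = 0
\text{uncorrelated: } E[XY] = E[X]\,E[Y]
\text{autocorrelation: } R_X(t_1, t_2) = E[X(t_1)\,X(t_2)]
```

Note that independence implies uncorrelatedness, and an uncorrelated pair with at least one zero mean is also orthogonal.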
15
Random Variable Characteristics (2): Strict-Sense Stationary, Wide-Sense Stationary
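The standard conditions: strict-sense stationarity requires all finite-order statistics to be shift-invariant, while wide-sense stationarity constrains only the first two moments:

```latex
\text{SSS: } F_{X(t_1), \ldots, X(t_k)} = F_{X(t_1 + \tau), \ldots, X(t_k + \tau)} \;\; \forall k, \tau
\text{WSS: } E[X(t)] = \mu \; (\text{constant}), \qquad R_X(t_1, t_2) = R_X(t_1 - t_2)
```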
16
Predictive Coding (1)
17
Predictive Coding (2)
18
Predictive Coding (3) Examples
19
Predictive Coding (4)
20
Predictive Coding (5)
21
Predictive Coding (6)
22
Predictive Coding (7)
23
Predictive Coding (8)
24
Predictive Coding (9)
25
Predictive Coding (10)
26
Predictive Coding (11)
27
Predictive Coding (12)
28
Predictive Coding (13)
29
Predictive Coding (14)
30
Predictive Coding (15)
31
Predictive Coding (16)
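The bodies of these sixteen slides are images; as a hedged illustration of the general technique (a minimal sketch, not necessarily the predictor derived in the lecture), here is a closed-loop DPCM loop in Python, assuming a first-order predictor with coefficient `a` and a uniform quantizer with step size `step`:

```python
import numpy as np

def dpcm(x, a=0.95, step=4.0):
    """Minimal closed-loop DPCM sketch: predict each sample as
    a * (previous *reconstructed* sample), quantize the prediction
    error uniformly, and reconstruct exactly as the decoder would."""
    x = np.asarray(x, dtype=float)
    idx = np.empty(len(x), dtype=int)   # quantizer indices (to be entropy-coded)
    rec = np.empty(len(x))              # decoder-side reconstruction
    prev = 0.0
    for n in range(len(x)):
        pred = a * prev                 # prediction from reconstructed past
        q = int(round((x[n] - pred) / step))
        idx[n] = q
        prev = pred + q * step          # decoder state update
        rec[n] = prev
    return idx, rec

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=200))     # highly correlated toy signal
idx, rec = dpcm(x)
print(np.max(np.abs(x - rec)))          # <= step/2: errors do not accumulate
```

Because the encoder predicts from the reconstructed signal rather than the original, the quantization error stays bounded by half the step size instead of accumulating.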
32
Huffman Coding (1): (Ex) P(a_1) = 1/2, P(a_2) = 1/4, P(a_3) = P(a_4) = 1/8
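For these dyadic probabilities the Huffman procedure yields codeword lengths 1, 2, 3, 3, and the average length equals the entropy exactly; one valid code is:

```latex
a_1 \to 0, \quad a_2 \to 10, \quad a_3 \to 110, \quad a_4 \to 111
\bar{l} = \tfrac{1}{2}(1) + \tfrac{1}{4}(2) + \tfrac{1}{8}(3) + \tfrac{1}{8}(3) = 1.75 \text{ bits} = H
```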
33
Huffman Coding (2): Nodes (internal nodes and external nodes)
34
Huffman Coding (3): The Huffman Coding Algorithm. In an optimum code, symbols that occur more frequently (have a higher probability of occurrence) have shorter codewords than symbols that occur less frequently. In an optimum code, the two symbols that occur least frequently have codewords of the same length.
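A minimal Python sketch of the merge step these two properties justify (repeatedly combine the two least-probable nodes); the symbol names and probabilities follow the example on the earlier slide:

```python
import heapq
from itertools import count

def huffman_code(probs):
    """Build a Huffman code by repeatedly merging the two least-probable
    nodes; the two rarest symbols end up as siblings (same codeword length)."""
    tiebreak = count()                    # avoids comparing dicts on probability ties
    heap = [(p, next(tiebreak), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # least probable node
        p2, _, c2 = heapq.heappop(heap)   # second least probable node
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(tiebreak), merged))
    return heap[0][2]

probs = {"a1": 0.5, "a2": 0.25, "a3": 0.125, "a4": 0.125}
code = huffman_code(probs)
print(code)                               # lengths 1, 2, 3, 3 (up to 0/1 swaps)
print(sum(probs[s] * len(w) for s, w in code.items()))  # 1.75 bits = entropy
```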
35
Matrix Computation (1): ① ② Applications of determinants. Objective: find the solution of Ax = b.
36
Matrix Computation (2) ③
37
Matrix Computation (3): ④ Cramer's Rule gives the j-th component of x = A⁻¹b. Applications: stability, Markov process (steady state).
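A small Python sketch of Cramer's rule as stated (x_j = det(A_j)/det(A), where A_j is A with its j-th column replaced by b); the matrix and vector here are made-up illustrations:

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule: x_j = det(A_j) / det(A),
    with A_j equal to A after replacing column j by b. Practical only
    for small systems; prefer np.linalg.solve for anything large."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("A is singular: no unique solution")
    x = np.empty(len(b))
    for j in range(len(b)):
        Aj = A.copy()
        Aj[:, j] = b                      # replace the j-th column by b
        x[j] = np.linalg.det(Aj) / d
    return x

A = [[2.0, 1.0], [1.0, 3.0]]
b = [5.0, 10.0]
print(cramer_solve(A, b))                 # [1. 3.]
print(np.linalg.solve(A, b))              # matches
```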