Data Compression (資料壓縮). Instructor: 陳建源. Office: 法 401. Course website. Topic: Basic Concepts of Information Theory (資訊理論基本概念).

1. Self-information
Let S be a system of events E_1, E_2, ..., E_n with associated probabilities p_1, p_2, ..., p_n. Def: The self-information of the event E_k is written I(E_k) and defined by I(E_k) = −log p_k = log(1/p_k), in which the base of the logarithm is 2 (log) or e (ln); the corresponding units are the bit and the nat.
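
A minimal sketch of this definition in Python, assuming base-2 logarithms (bits); the helper name self_information is ours, not from the lecture:

```python
import math

def self_information(p, base=2):
    """Self-information I(E) = -log p of an event with probability p."""
    if not 0 < p <= 1:
        raise ValueError("probability must be in (0, 1]")
    return -math.log(p, base)

# A fair coin toss carries exactly 1 bit of self-information.
print(self_information(0.5))  # 1.0
```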

1. Self-information
When p_k = 1, then I(E_k) = 0; when p_k → 0, then I(E_k) → ∞. The smaller the probability p_k, the larger the self-information I(E_k).

1. Self-information
Ex1. A letter is chosen at random from the English alphabet: I = log 26 ≈ 4.70 bits.
Ex2. A binary number of m digits is chosen at random: I = log 2^m = m bits.
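
A quick numeric check of Ex1 and Ex2 (base-2 logarithms assumed; m = 8 is just an arbitrary choice for illustration):

```python
import math

# Ex1: one of 26 equally likely letters.
print(-math.log2(1 / 26))      # ~4.700 bits

# Ex2: one of 2**m equally likely m-digit binary numbers.
m = 8
print(-math.log2(1 / 2 ** m))  # 8.0 bits, i.e. exactly m bits
```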

1. Self-information
Ex3. 64 points are arranged in an 8 × 8 square grid. Let E_j be the event that a point picked at random lies in the j-th column, and E_k the event that it lies in the k-th row. Then I(E_j) = log 8 = 3 bits, I(E_k) = 3 bits, and I(E_j ∩ E_k) = log 64 = 6 bits = I(E_j) + I(E_k). Why? The row and column of a uniformly chosen point are statistically independent, so the probabilities multiply and the self-informations add.
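
A small check of Ex3 under the 8 × 8 grid reading above:

```python
import math

col  = -math.log2(8 / 64)   # P(given column) = 8/64 -> 3 bits
row  = -math.log2(8 / 64)   # P(given row)    = 8/64 -> 3 bits
both = -math.log2(1 / 64)   # P(exact point)  = 1/64 -> 6 bits

print(col, row, both, both == col + row)  # 3.0 3.0 6.0 True
```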

2. Entropy
Let S be the system with events E_1, ..., E_n, the associated probabilities being p_1, ..., p_n. For a function f: E_k → f_k, let E(f) = Σ_k p_k f_k denote the expectation (average, or mean) of f.

2. Entropy
Def: The entropy of S, written H(S), is the average of the self-information: H(S) = Σ_k p_k I(E_k) = −Σ_k p_k log p_k. The self-information of an event increases as its uncertainty grows. If some p_k = 1 (certainty), then H(S) = 0. Observation: the minimum value of H(S) is 0, which means the outcome is already determined. But what is the maximum?
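
A minimal sketch of the entropy definition in Python (base-2 logarithms, so the result is in bits); the helper name entropy is ours:

```python
import math

def entropy(probs, base=2):
    """H(S) = -sum p_k log p_k over the nonzero probabilities."""
    if abs(sum(probs) - 1.0) > 1e-9:
        raise ValueError("probabilities must sum to 1")
    return -sum(p * math.log(p, base) for p in probs if p > 0)

print(entropy([1.0, 0.0]))   # -0.0, i.e. zero entropy (certainty)
print(entropy([0.5, 0.5]))   # 1.0 bit
print(entropy([0.25] * 4))   # 2.0 bits
```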

2. Entropy
Thm: H(S) ≤ log n, with equality only when p_1 = p_2 = ... = p_n = 1/n. Proof: given on the next slide, using the inequality of Thm 2.2.

2. Entropy
Thm 2.2: For x > 0, ln x ≤ x − 1, with equality only when x = 1. Assume that p_k ≠ 0 for all k. Applying Thm 2.2 with x = 1/(n p_k) gives H(S) − log n = Σ_k p_k log(1/(n p_k)) ≤ (log e) Σ_k p_k (1/(n p_k) − 1) = (log e)(1 − 1) = 0, with equality only when n p_k = 1, i.e. p_k = 1/n, for every k.
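
A numeric illustration of the bound H(S) ≤ log n, reusing an entropy helper like the one sketched above on a few randomly generated distributions:

```python
import math, random

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

n = 5
for _ in range(3):
    raw = [random.random() for _ in range(n)]
    probs = [x / sum(raw) for x in raw]
    print(entropy(probs) <= math.log2(n) + 1e-12)          # True for every trial

print(abs(entropy([1 / n] * n) - math.log2(n)) < 1e-12)    # True: equality at the uniform distribution
```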

2. Entropy
Exercise: In the system S the probabilities p_1 and p_2, where p_2 > p_1, are replaced by p_1 + ε and p_2 − ε respectively, under the proviso 0 < 2ε < p_2 − p_1. Prove that H(S) is increased.
Exercise: We know that entropy H(S) can be viewed as a measure of _____ about S. List three items that fit this blank: information, uncertainty, randomness.
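
A numeric spot check of the first exercise (not a proof), with arbitrarily chosen values satisfying the proviso:

```python
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

p1, p2, eps = 0.1, 0.6, 0.1           # satisfies 0 < 2*eps < p2 - p1
rest = [0.3]                          # remaining probability mass, unchanged
before = entropy([p1, p2] + rest)
after  = entropy([p1 + eps, p2 - eps] + rest)
print(before, after, after > before)  # entropy increases
```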

3. Mutual information
Let S_1 be the system with events E_1, ..., E_n, the associated probabilities being p_1, ..., p_n. Let S_2 be the system with events F_1, ..., F_m, the associated probabilities being q_1, ..., q_m.

3. Mutual information
Two systems S_1 and S_2 observed together are described by the joint probabilities p_jk = P(E_j ∩ F_k). These satisfy the relations Σ_k p_jk = p_j, Σ_j p_jk = q_k, and Σ_j Σ_k p_jk = 1.

3. Mutual information
Conditional probability: P(E_j | F_k) = p_jk / q_k.
Conditional self-information: I(E_j | F_k) = −log P(E_j | F_k).
Mutual information: I(E_j ; F_k) = log [P(E_j | F_k) / p_j] = log [p_jk / (p_j q_k)].
NOTE: I(E_j ; F_k) = I(F_k ; E_j); the mutual information of a pair of events is symmetric.

3. Mutual information
Conditional entropy: H(S_1 | S_2) = −Σ_j Σ_k p_jk log P(E_j | F_k).
Mutual information of the two systems: I(S_1 ; S_2) = Σ_j Σ_k p_jk log [p_jk / (p_j q_k)], the average of I(E_j ; F_k) over the joint distribution.
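
A minimal sketch of these averaged quantities in Python, computed from a small, arbitrary 2 × 2 joint probability table of our own choosing:

```python
import math

# Joint probabilities p_jk (rows: events E_j of S_1, columns: events F_k of S_2).
joint = [[0.30, 0.20],
         [0.10, 0.40]]

p = [sum(row) for row in joint]         # marginals p_j of S_1
q = [sum(col) for col in zip(*joint)]   # marginals q_k of S_2

# H(S_1 | S_2) = -sum p_jk log P(E_j | F_k), with P(E_j | F_k) = p_jk / q_k.
H_S1_given_S2 = -sum(pjk * math.log2(pjk / q[k])
                     for j, row in enumerate(joint)
                     for k, pjk in enumerate(row) if pjk > 0)

# I(S_1 ; S_2) = sum p_jk log [p_jk / (p_j q_k)].
I_S1_S2 = sum(pjk * math.log2(pjk / (p[j] * q[k]))
              for j, row in enumerate(joint)
              for k, pjk in enumerate(row) if pjk > 0)

print(H_S1_given_S2, I_S1_S2)
```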

3. Mutual information
Conditional self-information and mutual information are related by I(E_j ; F_k) = I(E_j) − I(E_j | F_k) and I(E_j ; F_k) = I(F_k) − I(F_k | E_j). If E_j and F_k are statistically independent, then p_jk = p_j q_k and I(E_j ; F_k) = 0.

3. Mutual information
Joint entropy: H(S_1, S_2) = −Σ_j Σ_k p_jk log p_jk.
Joint entropy and conditional entropy are related by H(S_1, S_2) = H(S_2) + H(S_1 | S_2) = H(S_1) + H(S_2 | S_1).

3. Mutual information
Mutual information and conditional entropy are related by I(S_1 ; S_2) = H(S_1) − H(S_1 | S_2) = H(S_2) − H(S_2 | S_1) = H(S_1) + H(S_2) − H(S_1, S_2).
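
Continuing the same illustrative 2 × 2 joint table, a quick numeric check that these identities hold:

```python
import math

joint = [[0.30, 0.20],
         [0.10, 0.40]]
p = [sum(row) for row in joint]
q = [sum(col) for col in zip(*joint)]

def H(probs):
    return -sum(x * math.log2(x) for x in probs if x > 0)

H1, H2 = H(p), H(q)
H12 = H([pjk for row in joint for pjk in row])   # joint entropy H(S1, S2)
I = H1 + H2 - H12                                # I(S1;S2) = H(S1) + H(S2) - H(S1,S2)

# I(S1;S2) = H(S1) - H(S1|S2) = H(S2) - H(S2|S1), where H(S1|S2) = H(S1,S2) - H(S2).
print(abs(I - (H1 - (H12 - H2))) < 1e-12)   # True
print(abs(I - (H2 - (H12 - H1))) < 1e-12)   # True
```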

3. Mutual information
Thm: The mutual information of two systems cannot exceed the sum of their separate entropies: I(S_1 ; S_2) ≤ H(S_1) + H(S_2).

3. Mutual information
Systems' independence: S_1 and S_2 are statistically independent when p_jk = p_j q_k for all j, k. If S_1 and S_2 are statistically independent, then H(S_1, S_2) = H(S_1) + H(S_2): the joint entropy of two statistically independent systems is the sum of their separate entropies.

3. Mutual information
Thm: I(S_1 ; S_2) ≥ 0, equivalently H(S_1, S_2) ≤ H(S_1) + H(S_2), with equality only if S_1 and S_2 are statistically independent. Proof: Assume that p_jk ≠ 0 for all j, k.

3. Mutual information
Proof (continued): Applying ln x ≤ x − 1 with x = p_j q_k / p_jk gives Σ_j Σ_k p_jk log (p_j q_k / p_jk) ≤ (log e) Σ_j Σ_k (p_j q_k − p_jk) = 0, hence I(S_1 ; S_2) ≥ 0, with equality only if p_jk = p_j q_k for all j, k, i.e. only if S_1 and S_2 are statistically independent.

3. Mutual information
Ex: A binary symmetric channel with crossover probability ε. Let S_1 be the input with events E_0 = 0, E_1 = 1, and S_2 be the output with events F_0 = 0, F_1 = 1. Each transmitted bit is received correctly with probability 1 − ε and flipped with probability ε.

3. Mutual information
Assume that the input probabilities are P(E_0) = p and P(E_1) = 1 − p. Then the joint probabilities are p_00 = p(1 − ε), p_01 = pε, p_10 = (1 − p)ε, and p_11 = (1 − p)(1 − ε).

3. Mutual information
Compute the output probabilities: q_0 = P(F_0) = p(1 − ε) + (1 − p)ε and q_1 = P(F_1) = pε + (1 − p)(1 − ε). If p = 1/2, then q_0 = q_1 = 1/2.

3. Mutual information
Compute the mutual information: I(S_1 ; S_2) = H(S_2) − H(S_2 | S_1), where H(S_2 | S_1) = −ε log ε − (1 − ε) log(1 − ε), independent of p, since every input bit is flipped with the same probability ε.

3. Mutual information
For p = 1/2, H(S_2) = 1 bit, so I(S_1 ; S_2) = 1 + ε log ε + (1 − ε) log(1 − ε). This is largest (1 bit) when ε = 0 or ε = 1 and drops to 0 when ε = 1/2.
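
A sketch of this example in Python, computing I(S_1 ; S_2) for a binary symmetric channel directly from the joint distribution; the helper name bsc_mutual_information and the sample values of p and ε are ours:

```python
import math

def bsc_mutual_information(p, eps):
    """I(S1;S2) in bits for input P(0) = p and crossover probability eps."""
    joint = {(0, 0): p * (1 - eps),   (0, 1): p * eps,
             (1, 0): (1 - p) * eps,   (1, 1): (1 - p) * (1 - eps)}
    px = {0: p, 1: 1 - p}
    qy = {y: sum(v for (x2, y2), v in joint.items() if y2 == y) for y in (0, 1)}
    return sum(v * math.log2(v / (px[x] * qy[y]))
               for (x, y), v in joint.items() if v > 0)

print(bsc_mutual_information(0.5, 0.0))   # 1.0   (noiseless channel)
print(bsc_mutual_information(0.5, 0.5))   # 0.0   (useless channel)
print(bsc_mutual_information(0.5, 0.1))   # ~0.531 bits
```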

4. Differential entropy
Def: For a continuous random variable with density f(x), the differential entropy is defined by h(f) = −∫ f(x) log f(x) dx, whenever the integral exists. It plays the role of H(S), the average of the self-information, for continuous distributions. NOTE: (1) The entropy of a continuous distribution need not exist. (2) Differential entropy may be negative.

4. Differential entropy
Example: Consider a random variable distributed uniformly from 0 to a, so that its density is 1/a from 0 to a and 0 elsewhere. Then its differential entropy is h(f) = −∫_0^a (1/a) log(1/a) dx = log a, whenever the integral exists. For a < 1 this is negative, illustrating note (2).
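
A tiny numeric check of this example in Python (base-2 logarithms, so the answer is log a in bits); the helper name is ours:

```python
import math

def uniform_diff_entropy(a):
    """h(f) = -∫_0^a (1/a) log2(1/a) dx; the integrand is constant on (0, a)."""
    return -(1.0 / a) * math.log2(1.0 / a) * a

for a in (0.5, 1.0, 2.0):
    print(a, uniform_diff_entropy(a))   # -1.0, 0.0, 1.0 bits

# a < 1 gives a negative differential entropy, matching note (2) above.
```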