1
Information and Entropy
2
Shannon information entropy on discrete variables

Consider $W$ discrete events with probabilities $p_i$ such that $\sum_{i=1}^{W} p_i = 1$. Shannon's (1) measure of the amount of choice associated with the $p_i$ is $H = -k \sum_{i=1}^{W} p_i \log p_i$, where $k$ is a positive constant.

If $p_i = 1/W$ and $k$ is Boltzmann's constant, then $H = -k \sum_{i=1}^{W} \frac{1}{W} \log \frac{1}{W} = k \log W$, which is the entropy of a system with $W$ microscopic configurations. (Second law of thermodynamics: the entropy of a system increases until it reaches equilibrium within the constraints imposed on it.)

Hence (using $k = 1$), $H = -\sum_{i=1}^{W} p_i \log p_i$ is Shannon's information entropy.

Example (figure): two distributions over $W = 4$ events. The distribution $p_i = (1/8,\, 1/2,\, 1/8,\, 1/4)$ gives $H = 1.21$, while the uniform distribution $p_i = 1/4$ gives $H = 1.39 = \log 4$.

Schneider (2) notes that $H$ is a measure of entropy/disorder/uncertainty. It is a measure of information in Shannon's sense only when interpreted as the information gained by completely removing that uncertainty (i.e. a noiseless channel).

(1) C. E. Shannon, "A Mathematical Theory of Communication," Bell Sys. Tech. J., 1948.
(2) T. D. Schneider, "Information Theory Primer," last updated Jan 6, 2003.
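As a quick check (our addition, not part of the original slides), here is a minimal Python sketch that reproduces the two entropy values quoted in the example above, assuming natural logarithms and $k = 1$; the function name `shannon_entropy` is ours.

```python
import numpy as np

def shannon_entropy(p, k=1.0):
    """H = -k * sum_i p_i * log(p_i), using the natural logarithm."""
    p = np.asarray(p, dtype=float)
    return -k * np.sum(p * np.log(p))

# The two example distributions over W = 4 events from the slide.
p_nonuniform = [1/8, 1/2, 1/8, 1/4]
p_uniform    = [1/4, 1/4, 1/4, 1/4]

print(shannon_entropy(p_nonuniform))  # ~1.21
print(shannon_entropy(p_uniform))     # ~1.39 = log(4), the maximum for W = 4
```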
3
Information entropy on continuous variables

Information about a random variable $x_{map}$ taking continuous values arises from the exclusion of its possible alternatives (realizations). Hence a measure of the information carried by a realization of $x_{map}$ is $Info(x_{map}) = -\log f(x_{map})$.

The expected information is then $H(x_{map}) = -\int d\chi_{map}\, f(\chi_{map}) \log f(\chi_{map})$.

By noting the similarity with $H = -\sum_{i=1}^{W} p_i \log p_i$ for discrete variables, we see that $H(x_{map}) = -\int d\chi_{map}\, f(\chi_{map}) \log f(\chi_{map})$ is Shannon's information entropy associated with the PDF $f(\chi_{map})$ of the continuous variable $x_{map}$.
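A small numerical illustration (our addition, not from the slides): approximating $H(x_{map}) = -\int f(\chi)\log f(\chi)\, d\chi$ on a fine grid, using the standard normal PDF as an example choice of $f(\chi_{map})$.

```python
import numpy as np

# Grid approximation of H = -integral of f(chi) * log(f(chi)) d(chi),
# with f taken to be the standard normal PDF (an assumed example).
chi = np.linspace(-10.0, 10.0, 200001)
f = np.exp(-chi**2 / 2) / np.sqrt(2 * np.pi)
dchi = chi[1] - chi[0]

H_numerical = -np.sum(f * np.log(f)) * dchi
H_closed_form = 0.5 * np.log(2 * np.pi * np.e)  # known result for a unit-variance Gaussian

print(H_numerical, H_closed_form)  # both ~1.42
```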
4
Maximizing entropy given knowledge constraints

Example 1: Given the knowledge that "two blue toys are in the corner of a room", consider the two arrangements (a) and (b) shown in the figure. Out of these two arrangements, arrangement (a) maximizes entropy given the knowledge constraint; hence, given our knowledge, it is the most likely toy arrangement (would kids produce (b)?).

Example 2: Given the knowledge that "the PDF has mean $\mu = 0$ and variance $\sigma^2 = 1$", consider the uniform and Gaussian PDFs shown in the figure (Uniform: $\sigma^2 = 1$, $H = 1.24$; Gaussian: $\sigma^2 = 1$, $H = 1.42$). Out of these two PDFs, the Gaussian PDF maximizes information entropy given the knowledge constraint that $\mu = 0$ and $\sigma^2 = 1$; see the numerical check sketched below.

Hence, the prior stage of BME aims at informativeness by using all, but no more than, the general knowledge that is available, i.e. we seek to maximize information entropy subject to constraints expressing the general knowledge.
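A short sketch (our addition) verifying the two entropy values of Example 2, using the closed-form differential entropies of a zero-mean, unit-variance uniform and Gaussian PDF with natural logarithms.

```python
import numpy as np

# Uniform on [-a, a] has variance a^2/3; variance 1 implies a = sqrt(3),
# so the support width is 2a = sqrt(12) and H = log(width).
H_uniform = np.log(np.sqrt(12.0))            # ~1.24

# Gaussian with sigma^2 = 1: H = 0.5 * log(2 * pi * e * sigma^2).
H_gaussian = 0.5 * np.log(2 * np.pi * np.e)  # ~1.42

print(H_uniform, H_gaussian)  # the Gaussian attains the larger entropy
```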