Presentation transcript:

Information Theory

information
• The non-redundant portion of any string of numbers.
• 0.333333333333…, holds very little information, since the 3s can be expressed more succinctly as a fraction.
• π, or 3.14159…, with its apparently endless non-patterned and non-repeating sequence of digits, contains a very high level of information (often called entropy).

entropy
• Information measures the amount of non-redundant information in data.
• Entropy measures the amount of improbability or unpredictability in data.
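
As a concrete illustration (not part of the original slides), the sketch below computes Shannon entropy, H = −Σ pᵢ log₂ pᵢ, in bits per symbol for a string: a purely repetitive string scores zero, while a varied string scores higher.

```python
import math
from collections import Counter

def shannon_entropy(text):
    """Shannon entropy in bits per symbol: H = -sum(p_i * log2(p_i))."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A highly redundant string carries no information per symbol...
print(shannon_entropy("333333333333"))          # 0.0 bits per symbol
# ...while a varied string carries more.
print(shannon_entropy("31415926535897932384"))  # roughly 3 bits per symbol
```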

Preference or aesthetic value
• High information is no better or worse than low information (entropy).

Example
• Language is highly redundant: J-st tr- t- r--d th-s s-nt-nc-.
• The meaning of a message can remain unchanged even though parts of it are removed.

Definition
• Information theory is a branch of communications theory that deals with the amount and accuracy of information when transmitted from a source, through a medium, to a destination.

Claude Shannon (1949)
(1) Zero-order symbol approximation, as in ZZFYNN PQZF LQN, where errors and correct symbols exist side by side;
(2) First-order symbol approximation (including letter frequency), as in ID AHE RENI MEAT, where letter frequency is taken into consideration as well as which letter likely follows which letter;
(3) Second-order symbol approximation, as in RENE ID AHA MIET, where letter ordering two steps back is taken into consideration;
(4) Third-order symbol approximation, as in HE ARE ID TI NEAM, where letter ordering three steps back is taken into consideration;
(5) First-order word approximation, as in I DARE HE IN TAME, where letters and now word existence are taken into consideration;
(6) Second-order word approximation, as in I DARE HE NAME IT, where letters, word existence, and probable word order are taken into consideration.
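
A rough sketch of how the lowest-order approximations can be generated (the example strings on the slide were not produced by this code, and the sample text below is only an illustrative assumption): zero-order text draws symbols uniformly at random, while first-order text draws them with the letter frequencies of a sample text.

```python
import random
import string
from collections import Counter

def zero_order(n):
    """Zero-order approximation: letters and space chosen uniformly at random."""
    alphabet = string.ascii_uppercase + " "
    return "".join(random.choice(alphabet) for _ in range(n))

def first_order(sample_text, n):
    """First-order approximation: symbols drawn with the frequencies found in sample_text."""
    counts = Counter(sample_text.upper())
    symbols = list(counts.keys())
    weights = list(counts.values())
    return "".join(random.choices(symbols, weights=weights, k=n))

sample = "INFORMATION THEORY DEALS WITH THE AMOUNT AND ACCURACY OF INFORMATION"
print(zero_order(20))            # gibberish such as ZZFYNN PQZF LQN
print(first_order(sample, 20))   # letter frequencies now resemble the sample text
```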

Algorithmic information theory
• Algorithmic information theory (AIT), a branch of information theory, concentrates less on the communication accuracy of information and more on the precise amount of non-compressible information contained in a message.

Gregory Chaitin
• Chaitin takes the complexity of something to be the size of the simplest theory for it; in other words, the size of the smallest program for calculating it. This is the central idea of algorithmic information theory (AIT), a field of theoretical computer science. Using the mathematical concept of program-size complexity, AIT exhibits irreducible mathematical facts: mathematical facts that cannot be demonstrated using any mathematical theory simpler than they are.
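
A minimal illustration of the program-size idea, using hypothetical example strings not drawn from the slides: a highly redundant string can be reproduced by a program far shorter than the string itself, and the length of the smallest known such program bounds its complexity.

```python
# A redundant string versus a short program ("theory") that generates it.
redundant = "3" * 1000               # 1000 characters of pure repetition
short_program = 'print("3" * 1000)'  # a far smaller description of the same data

print(len(redundant))       # 1000
print(len(short_program))   # 17 -- the smallest known generating program bounds the complexity
```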

Compression
• An important requirement in AIT is that compression be "lossless," that is, that the compressed file be restorable (by reverse processing) to its precise original form.
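
As a sketch of what "lossless" means in practice, the example below uses zlib as a stand-in general-purpose compressor (the slides do not name a particular scheme); the compressed data must decompress back to exactly the original bytes.

```python
import zlib

original = b"ABC" * 200               # a highly repetitive 600-byte message
compressed = zlib.compress(original)

# Lossless means the reverse process restores the data exactly.
restored = zlib.decompress(compressed)
assert restored == original

print(len(original), len(compressed))  # the repetitive data shrinks substantially
```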

Musical Algorithmic Information Theory
• MIDI (Musical Instrument Digital Interface) presents an interesting example of compression.
• MIDI produces accurate results with a fraction of the storage space that recorded audio requires.
• MIDI also loses a significant amount of information, however, due to its score-representation approach.

Better example
• The information content of a musical work (the part that cannot be further reduced) consists of material that does not repeat, exactly or in variation, often enough for a symbol to replace it and thereby compress the passage.
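
One hedged way to approximate this idea in code is to treat the ratio of compressed size to original size of a note list as a stand-in for its information content. The note lists below and the use of zlib are illustrative assumptions, not the scheme behind the figures in the next slide.

```python
import zlib

def information_content(notes):
    """Ratio of compressed size to original size: lower means more repetition."""
    data = " ".join(notes).encode()
    return len(zlib.compress(data)) / len(data)

repetitive = ["C4", "D4", "E4", "C4"] * 16                 # a heavily repeated figure
varied = ["C4", "F#5", "Bb3", "A4", "E5", "G3",
          "D#4", "B4", "F3", "C#5", "G#4", "E3"] * 2       # little exact repetition

print(information_content(repetitive))  # comparatively low
print(information_content(varied))      # comparatively high
```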

Aesthetics again
• Is low-information-content music more developed and well formed, while higher-information-content music is more erratic and disorganized?
• Does better music have lower information content?
• Not necessarily.

Examples
• The information contents of various segments of works by J. S. Bach (0.51), W. A. Mozart (0.45), Ludwig van Beethoven (0.50), Johannes Brahms (0.64), Anton Webern (0.88), Ernst Krenek (0.87), and David Cope (0.63), computed using a very simple compression scheme, demonstrate interesting results.

Dynamic Musical AIT (DMAIT)
• In music, the ear shifts from one musical parameter to another.
• Melody, harmony, dynamics, timbre, rhythm, and so on all jostle for attention, with each succeeding at one time or another.
• The mind's ear tends to go where the most information occurs.

How it works
• Dynamic Musical AIT systematically recomputes Musical AIT for several musical parameters simultaneously.
• DMAIT fingerprints help identify the important elements of music at different times.
• To avoid flatlining in longer works, a moving DMAIT aperture of the several beats preceding the already-calculated information content avoids re-computing information from the very beginning of a work.
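
A speculative sketch of such a moving aperture (this is not Cope's actual DMAIT calculation): for each step, compute the Shannon entropy of each tracked parameter over the preceding few events. The event fields (pitch, duration, dynamic) and the window size are assumptions made for illustration.

```python
import math
from collections import Counter

def window_entropy(values):
    """Shannon entropy (bits) of the values inside one aperture."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def dmait_profile(events, window=8):
    """For each step, entropy of each parameter over the preceding `window` events."""
    profile = []
    for i in range(window, len(events) + 1):
        segment = events[i - window:i]
        profile.append({
            "pitch":    window_entropy([e["pitch"] for e in segment]),
            "duration": window_entropy([e["duration"] for e in segment]),
            "dynamic":  window_entropy([e["dynamic"] for e in segment]),
        })
    return profile

# Hypothetical event list: each event carries the parameters being tracked.
events = [{"pitch": p, "duration": d, "dynamic": v}
          for p, d, v in [(60, 1, 64), (62, 1, 64), (64, 2, 80), (60, 1, 64),
                          (65, 1, 72), (67, 2, 96), (60, 1, 64), (72, 4, 110),
                          (71, 1, 90), (69, 1, 80), (67, 2, 70), (65, 1, 60)]]

for row in dmait_profile(events, window=8):
    print(row)
```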

Bartók's Mikrokosmos No. 81

A DMAIT graph of the previous example