Shannon's theory, part II. Ref: Cryptography: Theory and Practice, Douglas R. Stinson.


Shannon's theory 1949, "Communication Theory of Secrecy Systems," in Bell System Technical Journal. Two issues: What is the concept of perfect secrecy? Does any cryptosystem provide perfect secrecy? It is possible when a key is used for only one encryption. How do we evaluate a cryptosystem when many plaintexts are encrypted using the same key?

Perfect secrecy Definition: A cryptosystem has perfect secrecy if Pr[x|y] = Pr[x] for all x ∈ P, y ∈ C. Idea: Oscar can obtain no information about the plaintext by observing the ciphertext y that Alice sends to Bob. Ex. The plaintext is a coin toss with Pr[Head]=1/2, Pr[Tail]=1/2. Case 1: Pr[Head|y]=1/2, Pr[Tail|y]=1/2, so the ciphertext reveals nothing. Case 2: Pr[Head|y]=1, Pr[Tail|y]=0, so the ciphertext determines the plaintext.

Perfect secrecy when |K|=|C|=|P| Theorem: Suppose (P,C,K,E,D) is a cryptosystem where |K|=|C|=|P|. The cryptosystem provides perfect secrecy iff every key is used with equal probability 1/|K| and, for every x ∈ P and y ∈ C, there is a unique key K such that E_K(x) = y. Ex. One-time pad in Z_2: plaintext 010 with key 101 gives ciphertext 111, and plaintext 111 with key 000 also gives ciphertext 111, so ciphertext 111 tells the attacker nothing.
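The one-time pad example above can be sketched in a few lines; XOR serves as both encryption and decryption (a toy illustration over bit-strings, not a production cipher):

```python
def otp(bits: str, key: str) -> str:
    """XOR a bit-string with an equally long key bit-string (one-time pad in Z_2)."""
    assert len(bits) == len(key)
    return "".join(str(int(b) ^ int(k)) for b, k in zip(bits, key))

# Two different (plaintext, key) pairs produce the same ciphertext 111,
# so observing 111 reveals nothing about which plaintext was sent.
print(otp("010", "101"))  # -> 111
print(otp("111", "000"))  # -> 111
# Decryption is the same XOR operation with the same key:
print(otp("111", "101"))  # -> 010
```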

Outline Introduction One-time pad Elementary probability theory Perfect secrecy Entropy Properties of entropy Spurious keys and unicity distance Product system

Preview (1) We want to know: the average amount of ciphertext required for an opponent to be able to uniquely compute the key, given enough computing time. (Figure: n characters of plaintext x^n are encrypted under key K into ciphertext y^n.)

Preview (2) That is, we want to know how much information about the key is revealed by the ciphertext, i.e., the conditional entropy H(K|C^n). We need the tools of entropy.

Entropy (1) Suppose we have a discrete random variable X. What is the information gained by the outcome of an experiment? Ex. Let X represent the toss of a fair coin, Pr[head]=Pr[tail]=1/2. We could encode head as 1 and tail as 0, i.e., 1 bit of information.

Entropy (2) Ex. Random variable X with Pr[x_1]=1/2, Pr[x_2]=1/4, Pr[x_3]=1/4. The most efficient encoding is to encode x_1 as 0, x_2 as 10, x_3 as 11: the lower an outcome's probability, the higher its uncertainty (information), and the longer its codeword.

Entropy (3) Notice: probability 2^-n corresponds to n bits; in general, probability p corresponds to -log2 p bits. Ex. (cont.) The average number of bits to encode X is (1/2)(1) + (1/4)(2) + (1/4)(2) = 1.5.

Entropy: definition Suppose X is a discrete random variable which takes on values from a finite set X. Then the entropy of the random variable X is defined as H(X) = -Σ_{x∈X} Pr[x] log2 Pr[x].

Entropy: example Let P={a, b}, Pr[a]=1/4, Pr[b]=3/4. K={K_1, K_2, K_3}, Pr[K_1]=1/2, Pr[K_2]=Pr[K_3]=1/4. Encryption matrix (rows are keys, columns are plaintexts): K_1: a→1, b→2; K_2: a→2, b→3; K_3: a→3, b→4. Then H(P) ≈ 0.81, H(K) = 1.5, H(C) ≈ 1.85.
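These three entropies can be checked directly from the entropy definition; a minimal sketch (the ciphertext probabilities follow from the encryption matrix above):

```python
from math import log2

def entropy(probs):
    """Shannon entropy H = -sum p*log2(p), skipping zero-probability outcomes."""
    return -sum(p * log2(p) for p in probs if p > 0)

H_P = entropy([1/4, 3/4])            # plaintext distribution
H_K = entropy([1/2, 1/4, 1/4])       # key distribution
# Ciphertext distribution from the matrix: Pr[1]=1/8, Pr[2]=7/16, Pr[3]=1/4, Pr[4]=3/16.
H_C = entropy([1/8, 7/16, 1/4, 3/16])
print(round(H_P, 2), round(H_K, 2), round(H_C, 2))  # -> 0.81 1.5 1.85
```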

Properties of entropy (1) Def: A real-valued function f is a strictly concave function on an interval I if f((x+y)/2) > (f(x)+f(y))/2 for all x ≠ y in I.

Properties of entropy (2) Jensen's inequality: Suppose f is a continuous strictly concave function on I, a_i > 0 for 1 ≤ i ≤ n, and Σ a_i = 1. Then Σ_{i=1}^{n} a_i f(x_i) ≤ f(Σ_{i=1}^{n} a_i x_i), with equality iff x_1 = ... = x_n.

Properties of entropy (3) Theorem: X is a random variable having a probability distribution which takes on the values p_1, p_2, ..., p_n, p_i > 0, 1 ≤ i ≤ n. Then H(X) ≤ log2 n, with equality iff p_i = 1/n for all i. * The uniform random variable has the maximum entropy.

Properties of entropy (4) Proof: By Jensen's inequality, H(X) = Σ p_i log2 (1/p_i) ≤ log2 (Σ p_i (1/p_i)) = log2 n, with equality iff 1/p_1 = ... = 1/p_n, i.e., p_i = 1/n for all i.

Entropy of a natural language (1) H_L: average information per letter in English. 1. If the 26 letters were uniformly random, the entropy would be log2 26 ≈ 4.70. 2. Taking single-letter frequencies into account, H(P) ≈ 4.19.

Entropy of a natural language (2) 3. However, successive letters have correlations. Ex. digrams, trigrams. Q: How do we define the entropy of two or more random variables?

Properties of entropy (5) Def: H(X,Y) = -Σ_x Σ_y Pr[x,y] log2 Pr[x,y]. Theorem: H(X,Y) ≤ H(X) + H(Y), with equality iff X and Y are independent. Proof idea: Let p_i = Pr[X=x_i], q_j = Pr[Y=y_j], r_ij = Pr[X=x_i, Y=y_j]; expand H(X) + H(Y) as -Σ_{i,j} r_ij log2 (p_i q_j) and apply Jensen's inequality.

Entropy of a natural language (3) 3. Let P^n be the random variable whose probability distribution is that of all n-grams of plaintext. Tabulation of digrams => H(P^2)/2 ≈ 3.90. Tabulation of trigrams => H(P^3)/3 ... Tabulation of n-grams => H(P^n)/n. Empirically, 1.0 ≤ H_L ≤ 1.5.

Entropy of a natural language (4) The redundancy of L is defined as R_L = 1 - H_L / log2 |P|. Take H_L = 1.25; then R_L ≈ 0.75: the English language is about 75% redundant! => We can compress English text to about one quarter of its original length.
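A quick numeric check of the redundancy formula with the values above (note the exact value is closer to 0.73; the slides round it to 0.75):

```python
from math import log2

H_L = 1.25                  # estimated entropy per letter of English
R_L = 1 - H_L / log2(26)    # redundancy relative to a 26-letter alphabet
print(round(R_L, 2))        # -> 0.73
```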

Conditional entropy Given any fixed value y of Y, the remaining uncertainty about the random variable X is H(X|y) = -Σ_x Pr[x|y] log2 Pr[x|y]. Conditional entropy: the average amount of uncertainty about X that remains after Y is observed: H(X|Y) = Σ_y Pr[y] H(X|y). Theorem (chain rule): H(X,Y) = H(Y) + H(X|Y).

Theorem about H(K|C) (1) Theorem: Let (P,C,K,E,D) be a cryptosystem. Then H(K|C) = H(K) + H(P) - H(C). Proof: H(K,P,C) = H(C|K,P) + H(K,P). Since the key and plaintext uniquely determine the ciphertext, H(C|K,P) = 0. Since the key and plaintext are independent, H(K,P) = H(K) + H(P). Hence H(K,P,C) = H(K,P) = H(K) + H(P).

Theorem about H(K|C) (2) Similarly, since the key and ciphertext uniquely determine the plaintext, H(P|K,C) = 0, so H(K,P,C) = H(K,C). Now H(K|C) = H(K,C) - H(C) = H(K,P,C) - H(C) = H(K) + H(P) - H(C).
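The identity H(K|C) = H(K) + H(P) - H(C) can be verified numerically on the toy cryptosystem from the earlier entropy example (the encryption matrix below is the one given there):

```python
from math import log2

def entropy(probs):
    """Shannon entropy H = -sum p*log2(p), skipping zero-probability outcomes."""
    return -sum(p * log2(p) for p in probs if p > 0)

P = {'a': 1/4, 'b': 3/4}
K = {'K1': 1/2, 'K2': 1/4, 'K3': 1/4}
E = {('K1', 'a'): 1, ('K1', 'b'): 2,
     ('K2', 'a'): 2, ('K2', 'b'): 3,
     ('K3', 'a'): 3, ('K3', 'b'): 4}

# Joint distribution over (key, ciphertext) pairs.
joint = {}
for k, pk in K.items():
    for x, px in P.items():
        y = E[(k, x)]
        joint[(k, y)] = joint.get((k, y), 0) + pk * px

# Marginal ciphertext distribution.
pr_y = {}
for (k, y), p in joint.items():
    pr_y[y] = pr_y.get(y, 0) + p

# H(K|C) = H(K,C) - H(C), computed directly from the joint distribution.
H_K_given_C = entropy(joint.values()) - entropy(pr_y.values())
rhs = entropy(K.values()) + entropy(P.values()) - entropy(pr_y.values())
print(abs(H_K_given_C - rhs) < 1e-9)  # -> True
```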

Results (1) Define the random variables P^n and C^n for n characters of plaintext and ciphertext, encrypted under the key K. Set |P| = |C|; then, using H(P^n) ≈ n H_L = n (1 - R_L) log2 |P| and H(C^n) ≤ n log2 |C|, we get H(K|C^n) = H(K) + H(P^n) - H(C^n) ≥ H(K) - n R_L log2 |P|.

Spurious keys (1) Ex. Oscar obtains the ciphertext WNAJW, which is encrypted using a shift cipher. K=5 yields plaintext river; K=22 yields plaintext arena. One is the correct key, and the other is spurious. Goal: prove a bound on the expected number of spurious keys.
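The WNAJW example is easy to reproduce; both candidate keys decrypt to valid English words, so five letters of ciphertext do not yet pin down the key:

```python
def shift_decrypt(ciphertext: str, k: int) -> str:
    """Decrypt a shift cipher over A-Z with key k."""
    return "".join(chr((ord(c) - ord('A') - k) % 26 + ord('A')) for c in ciphertext)

print(shift_decrypt("WNAJW", 5))   # -> RIVER
print(shift_decrypt("WNAJW", 22))  # -> ARENA
```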

Spurious keys (2) Given y ∈ C^n, the set of possible keys is K(y) = { K : there exists x ∈ P^n with Pr[x] > 0 and E_K(x) = y }. The number of spurious keys for y is |K(y)| - 1, and the average number of spurious keys is s_n = Σ_{y∈C^n} Pr[y] (|K(y)| - 1) = Σ_{y∈C^n} Pr[y] |K(y)| - 1.

Relate H(K|C^n) to spurious keys (1) By definition, H(K|C^n) = Σ_y Pr[y] H(K|y) ≤ Σ_y Pr[y] log2 |K(y)| ≤ log2 (Σ_y Pr[y] |K(y)|) = log2 (s_n + 1), using H(K|y) ≤ log2 |K(y)| and Jensen's inequality.

Relate H(K|C^n) to spurious keys (2) We have derived H(K|C^n) ≥ H(K) - n R_L log2 |P|. So log2 (s_n + 1) ≥ H(K) - n R_L log2 |P|.

Relate H(K|C^n) to spurious keys (3) Theorem: Suppose |C| = |P| and keys are chosen equiprobably, so H(K) = log2 |K|. Then the expected number of spurious keys satisfies s_n ≥ |K| / |P|^(n R_L) - 1. As n increases, the term |K| / |P|^(n R_L) tends to 0, so the guaranteed number of spurious keys shrinks to nothing.

Relate H(K|C^n) to spurious keys (4) Unicity distance: the average amount of ciphertext required for an opponent to be able to uniquely compute the key, given enough computing time. Setting s_n = 0 and solving gives n_0 ≈ log2 |K| / (R_L log2 |P|). For the substitution cipher, |P| = |C| = 26 and |K| = 26!, so n_0 ≈ 88.4 / (0.75 × 4.70) ≈ 25.
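The unicity-distance estimate for the substitution cipher can be computed directly from the formula above, with R_L = 0.75 as estimated earlier:

```python
from math import log2, factorial

key_bits = log2(factorial(26))        # log2(26!) ~ 88.4 bits of key
n0 = key_bits / (0.75 * log2(26))     # n0 = log2|K| / (R_L * log2|P|)
print(round(n0))                      # -> 25
```

So roughly 25 ciphertext letters suffice, on average, for a computationally unbounded attacker to single out the substitution key.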

Product cryptosystem Let S_1 = (P,P,K_1,E_1,D_1) and S_2 = (P,P,K_2,E_2,D_2). The product of the two cryptosystems is S_1 × S_2 = (P, P, K_1 × K_2, E, D). Encryption: E_(K_1,K_2)(x) = E_K_2(E_K_1(x)). Decryption: D_(K_1,K_2)(y) = D_K_1(D_K_2(y)).

Product cryptosystem (cont.) Two cryptosystems M and S commute if M × S = S × M. Idempotent cryptosystem: S^2 = S. Ex. the shift cipher. If a cryptosystem is not idempotent, then there is a potential increase in security by iterating it several times.
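The shift cipher's idempotence can be checked concretely: composing two shifts with keys k1 and k2 is itself a shift with key (k1 + k2) mod 26, so the product of the shift cipher with itself is again the shift cipher (a small sketch with arbitrary sample keys):

```python
def shift_encrypt(plaintext: str, k: int) -> str:
    """Encrypt a shift cipher over A-Z with key k."""
    return "".join(chr((ord(c) - ord('A') + k) % 26 + ord('A')) for c in plaintext)

msg = "SHANNON"
k1, k2 = 5, 22
# Composing two shift encryptions equals a single shift by (k1 + k2) mod 26.
composed = shift_encrypt(shift_encrypt(msg, k1), k2)
single = shift_encrypt(msg, (k1 + k2) % 26)
print(composed == single)  # -> True
```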

How to find a non-idempotent cryptosystem? Thm: If S and M are both idempotent, and they commute, then S × M is also idempotent: (S × M) × (S × M) = S × (M × S) × M = S × (S × M) × M = (S × S) × (M × M) = S × M. Idea: find simple S and M such that they do not commute; then S × M is possibly non-idempotent.