Information-Theoretic Secrecy
Probability theory: Bayes' theorem. Perfect secrecy: definition, some proofs, examples (one-time pad, simple secret sharing). Entropy: definition, Huffman coding property, unicity distance. CSCI381, Fall 2005, GWU. Reference: Stinson.
CS284/Spring04/GWU/Vora/Shannon Secrecy
Bayes' Theorem: If Pr[y] > 0, then
Pr[x|y] = Pr[x] Pr[y|x] / Σ_{x'∈X} Pr[x'] Pr[y|x']
What is the probability that the 1st die shows 2 when the sum of the two dice is 5? What is the probability that the 2nd die shows 3 when the product of the two dice is 6? When it is 5?
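The dice questions can be answered by brute-force enumeration over the 36 equally likely outcomes; a minimal sketch in Python (the helper name `cond_prob` is illustrative):

```python
from fractions import Fraction

def cond_prob(event, given):
    """Pr[event | given] over two fair six-sided dice, by enumeration."""
    outcomes = [(d1, d2) for d1 in range(1, 7) for d2 in range(1, 7)]
    matching_given = [o for o in outcomes if given(o)]
    matching_both = [o for o in matching_given if event(o)]
    return Fraction(len(matching_both), len(matching_given))

# Pr[1st die = 2 | sum = 5]: pairs summing to 5 are (1,4),(2,3),(3,2),(4,1)
p_sum = cond_prob(lambda o: o[0] == 2, lambda o: o[0] + o[1] == 5)    # 1/4
# Pr[2nd die = 3 | product = 6]: pairs are (1,6),(2,3),(3,2),(6,1)
p_prod6 = cond_prob(lambda o: o[1] == 3, lambda o: o[0] * o[1] == 6)  # 1/4
# Pr[2nd die = 3 | product = 5]: pairs are (1,5),(5,1), neither has a 3
p_prod5 = cond_prob(lambda o: o[1] == 3, lambda o: o[0] * o[1] == 5)  # 0
```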
(Im)Perfect Secrecy: Example
P = {1, 2, 3}, K = {K1, K2, K3}, C = {2, 3, 4, 5, 6}

Encryption table (rows: keys, columns: plaintexts):

       1  2  3
  K1   2  3  4
  K2   3  4  5
  K3   4  5  6

Keys chosen equiprobably; Pr[1] = Pr[2] = Pr[3] = 1/3.
Pr[c = 3] = ?  Pr[m | c = 3] = ?  Pr[k | c = 3] = ?
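Assuming the table is the shift cipher e_{Ki}(x) = x + i (an assumption, but consistent with C = {2, ..., 6}), the three probabilities can be computed exactly:

```python
from collections import defaultdict
from fractions import Fraction

P, K = [1, 2, 3], [1, 2, 3]      # plaintexts; keys K1, K2, K3 shift by 1, 2, 3
pr_c = defaultdict(Fraction)     # Pr[c]
pr_mc = defaultdict(Fraction)    # Pr[m, c]
pr_kc = defaultdict(Fraction)    # Pr[k, c]
for k in K:
    for m in P:
        c = m + k                        # assumed shift-cipher table
        pr_c[c] += Fraction(1, 9)        # keys and messages uniform, independent
        pr_mc[(m, c)] += Fraction(1, 9)
        pr_kc[(k, c)] += Fraction(1, 9)

# Pr[c = 3] = 2/9, and Pr[m = 1 | c = 3] = 1/2 != Pr[m = 1] = 1/3,
# so the ciphertext leaks information: secrecy is imperfect.
```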
(Im)Perfect Secrecy: Example
       1  2  3          a  b  c
  K1   2  3  4     K1   1  2  3
  K2   3  4  5     K2   4  5  6
  K3   4  5  6     K3   7  8  9

How should the above ciphers be changed to improve the cryptosystem? What defines a good cryptosystem?
(Im)Perfect Secrecy: Example Latin Square
       1  2  3
  K1   1  2  3
  K2   2  3  1
  K3   3  1  2

Assume all keys and messages equiprobable. What's good about this? P(k|c)? P(m|c)?
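With one concrete filling of the Latin square (the cyclic filling above; any Latin square works, rows = keys, columns = plaintexts), a short check confirms that every ciphertext is equally likely and Pr[m|c] = Pr[m]:

```python
from collections import defaultdict
from fractions import Fraction

# Cyclic 3x3 Latin square cipher: row = key, column = plaintext, entry = ciphertext.
table = {1: {1: 1, 2: 2, 3: 3},
         2: {1: 2, 2: 3, 3: 1},
         3: {1: 3, 2: 1, 3: 2}}

pr_c = defaultdict(Fraction)
pr_mc = defaultdict(Fraction)
for k in table:
    for m in table[k]:
        c = table[k][m]
        pr_c[c] += Fraction(1, 9)        # keys and messages uniform
        pr_mc[(m, c)] += Fraction(1, 9)

# Each ciphertext occurs with probability 1/3, and Pr[m|c] = 1/3 = Pr[m]:
assert all(pr_c[c] == Fraction(1, 3) for c in pr_c)
assert all(pr_mc[(m, c)] / pr_c[c] == Fraction(1, 3) for (m, c) in pr_mc)
```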
Perfect Secrecy: Definition
A cryptosystem has perfect secrecy if Pr[x|y] = Pr[x] for all x ∈ P, y ∈ C: the a posteriori probability equals the a priori probability (posterior = prior).
Example: one-time pad. P = C = Z₂ⁿ.
eK(x1, x2, …, xn) = (x1 + K1, x2 + K2, …, xn + Kn) mod 2, and dK = eK, since addition and subtraction coincide mod 2.
Show that it provides perfect secrecy.
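A sketch of the perfect-secrecy argument for n = 3: fix any plaintext; a uniform key makes every ciphertext equally likely, so the ciphertext reveals nothing about x and Pr[x|y] = Pr[x].

```python
from itertools import product
from collections import Counter

n = 3
keys = list(product([0, 1], repeat=n))   # uniform key over Z_2^n

def otp(x, k):
    """One-time pad: bitwise addition mod 2."""
    return tuple((xi + ki) % 2 for xi, ki in zip(x, k))

# For a fixed plaintext, each of the 2^n ciphertexts arises from exactly one key.
x = (1, 0, 1)
counts = Counter(otp(x, k) for k in keys)
assert len(counts) == 2 ** n and all(v == 1 for v in counts.values())
```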
Some proofs: Thm 2.4. Suppose (P, C, K, E, D) is a cryptosystem where |K| = |P| = |C|. Then the cryptosystem provides perfect secrecy if and only if every key is used with equal probability 1/|K| and, for every x ∈ P and y ∈ C, there is a unique key K such that eK(x) = y (e.g., a Latin square).
Entropy: H(X) = −Σᵢ pᵢ log₂ pᵢ
Example: pᵢ = 1/n gives H(X) = log₂ n.
Examples: ciphertext and plaintext entropies for the earlier examples.
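A direct implementation of the definition (illustrative sketch; the uniform case recovers log₂ n):

```python
import math

def entropy(probs):
    """Shannon entropy H(X) = -sum p_i log2 p_i (terms with p_i = 0 contribute 0)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Uniform distribution on n outcomes: H = log2 n.
n = 8
print(entropy([1 / n] * n))  # 3.0
```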
Huffman encoding f: X* → {0, 1}*
A string of random variables is mapped to a string of bits. E.g., X = {a, b, c, d}, f(a) = 1, f(b) = 10, f(c) = 100, f(d) = 1000.
Huffman encoding algorithm
X = {a, b, c, d, e}; p(a) = 0.05, p(b) = 0.1, p(c) = 0.12, p(d) = 0.13, p(e) = 0.6
Repeatedly merge the two smallest probabilities: 0.05 + 0.1 = 0.15; 0.12 + 0.13 = 0.25; 0.15 + 0.25 = 0.4; 0.4 + 0.6 = 1
Resulting code: a: 000, b: 001, c: 010, d: 011, e: 1
Average length = ? Entropy = ?
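The merge procedure can be sketched with a heap. This version computes codeword lengths only, which is enough to answer the average-length and entropy questions (average length = 1.8 bits; entropy ≈ 1.74 bits):

```python
import heapq
import math

def huffman_lengths(probs):
    """Codeword lengths from the Huffman merge procedure."""
    # Heap entries: (probability, tiebreaker, symbol indices in this subtree).
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    counter = len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:          # every symbol in a merged subtree gains one bit
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, counter, s1 + s2))
        counter += 1
    return lengths

probs = [0.05, 0.1, 0.12, 0.13, 0.6]              # a, b, c, d, e
lengths = huffman_lengths(probs)                  # [3, 3, 3, 3, 1]
avg = sum(p * l for p, l in zip(probs, lengths))  # 0.4*3 + 0.6*1 = 1.8
H = -sum(p * math.log2(p) for p in probs)         # ≈ 1.74
```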
Theorem: H(X) ≤ average length of Huffman encoding < H(X) + 1. (Without proof.)
Properties of Entropy
H(X) ≤ log₂ n
H(X, Y) ≤ H(X) + H(Y)
H(X, Y) = H(X) + H(Y|X) = H(Y) + H(X|Y), where H(X|Y) = −Σₓ Σ_y p(y) p(x|y) log₂ p(x|y)
H(X|Y) ≤ H(X)
With proofs and examples.
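A numeric spot-check of the chain rule and subadditivity on a small joint distribution (the distribution itself is an arbitrary illustrative choice):

```python
import math

def H(probs):
    """Shannon entropy of a list of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A small joint distribution p(x, y) to check the identities numerically.
joint = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.4, (1, 1): 0.1}
px = {x: sum(p for (a, _), p in joint.items() if a == x) for x in (0, 1)}
py = {y: sum(p for (_, b), p in joint.items() if b == y) for y in (0, 1)}

H_xy = H(joint.values())
H_x = H(px.values())
# H(Y|X) = -sum_{x,y} p(x, y) log2 p(y|x), with p(y|x) = p(x, y) / p(x)
H_y_given_x = -sum(p * math.log2(p / px[x]) for (x, y), p in joint.items() if p > 0)

assert abs(H_xy - (H_x + H_y_given_x)) < 1e-9          # chain rule
assert H_xy <= H(px.values()) + H(py.values()) + 1e-9  # subadditivity
```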
Theorem H(K|C) = H(K) + H(P) – H(C)
Examples: the previous imperfect squares. Proof: H(K, P, C) = H(K, C) = H(K, P), since C is determined by (K, P) and P is determined by (K, C); expanding each side with the chain rule gives H(K|C) = H(K) + H(P) − H(C).
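The identity can be checked numerically on the earlier shift-cipher example (the table e_{Ki}(x) = x + i is assumed, as before, with keys and messages uniform):

```python
import math
from collections import defaultdict

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Assumed shift-cipher table e_{K_i}(x) = x + i; keys and messages uniform.
P, K = [1, 2, 3], [1, 2, 3]
joint_kc = defaultdict(float)   # p(k, c)
pc = defaultdict(float)         # p(c)
for k in K:
    for x in P:
        c = x + k
        joint_kc[(k, c)] += 1 / 9
        pc[c] += 1 / 9

H_K = math.log2(3)
H_P = math.log2(3)
H_C = H(pc.values())
# H(K|C) = -sum_{k,c} p(k, c) log2 p(k|c)
H_K_given_C = -sum(p * math.log2(p / pc[c]) for (k, c), p in joint_kc.items())

assert abs(H_K_given_C - (H_K + H_P - H_C)) < 1e-9
```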
Language Entropy and Redundancy
H_L = lim_{n→∞} H(Pⁿ)/n (between 1 and 1.5 bits per letter for English)
R_L = 1 − H_L / log₂ |P| (the amount of "space" in a letter of English for other information)
On average, about n ciphertext characters are needed to break a substitution cipher, where n = key entropy / (R_L log₂ |P|). This n is the "unicity distance" of the cryptosystem.
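For the substitution cipher on English (key = a random permutation of 26 letters), plugging in an assumed H_L of 1.25 bits/letter (the middle of the slide's 1–1.5 range) recovers the classic estimate of roughly 25 ciphertext characters:

```python
import math

H_K = math.log2(math.factorial(26))   # key entropy, log2(26!) ≈ 88.4 bits
H_L = 1.25                            # assumed language entropy, bits/letter
R_L = 1 - H_L / math.log2(26)         # redundancy ≈ 0.73
n = H_K / (R_L * math.log2(26))       # unicity distance ≈ 25.6 characters
```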
Proof: H(K|Cⁿ) = H(K) + H(Pⁿ) − H(Cⁿ). Setting H(K|Cⁿ) ≈ 0, with H(Pⁿ) ≈ n·H_L and H(Cⁿ) ≤ n log₂ |P|, and solving for n gives the unicity distance.