RELIABLE COMMUNICATION


RELIABLE COMMUNICATION IN THE PRESENCE OF LIMITED ADVERSARIES
Sidharth Jaggi, The Chinese University of Hong Kong

Between Shannon and Hamming: Codes against limited adversaries

Research group: Codes, Algorithms, Networks: Design & Optimization for Information Theory
Zitan Chen, Qiaosheng Zhang, Mayank Bakshi, Sidharth Jaggi, Swanand Kadhe, Alex Sprintson, Michael Langberg, Bikash Kumar Dey, Anand Dilip Sarwate

Background – Communication Scenario
Alice encodes a message u^k into a codeword x^n (e.g., 011100110110111101110011) and transmits it over an (adversarial) noisy channel controlled by the bad guy, Calvin. Bob receives the word y^n and decodes an estimate û^k of the message. Message → Codeword → Received word → Decoded message.

Background – Related Work
"Benchmark" channel models. One extreme: random noise (Shannon). The channel corrupts the codeword x^n with a random error e^n of pn symbol errors over an alphabet of size q (the q-ary symmetric channel); the rate is R = k/n, and Bob's decoded message û^k equals u^k w.h.p. [Sha48]. Capacity curves (R vs. p): for q = 2 (binary), capacity reaches 0 at p = 0.5; for general q, at p = 1 − 1/q; for "large" q, at p = 1. Many "good" codes are known (computationally efficient, with rates "close to" capacity).
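As a quick numerical companion to the Shannon curves above, here is a small sketch (illustrative, not from the slides) that computes the q-ary symmetric channel capacity C = 1 − H_q(p), measured in q-ary symbols per channel use:

```python
import math

def qsc_capacity(p: float, q: int) -> float:
    """Shannon capacity (in q-ary symbols per channel use) of the q-ary
    symmetric channel with symbol-error probability p: C = 1 - H_q(p),
    where H_q is the q-ary entropy function."""
    if p == 0:
        return 1.0
    if p == 1 - 1 / q:  # capacity vanishes exactly at p = 1 - 1/q
        return 0.0
    h_q = (p * math.log(q - 1, q)
           - p * math.log(p, q)
           - (1 - p) * math.log(1 - p, q))
    return 1 - h_q
```

For q = 2 this recovers the familiar 1 − H(p) curve, zero at p = 0.5; as q grows, the zero-capacity point moves toward p = 1, matching the three plots on the slide.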

Background – Related Work
"Benchmark" channel models. The other extreme: omniscient adversary (Hamming). Calvin knows "everything"! For q = 2 (binary), the best known rates lie between the [Gilbert-Varshamov] lower bound and the [McERRW77] upper bound, both vanishing at p = 0.25; a gap remains. For "large" q, R = 1 − 2p, with [Reed-Solomon] achievability matching the [Singleton] bound, zero at p = 0.5. For intermediate q, AG codes give the best achievability against the McERRW outer bound, still with a gap. "Good, computationally efficient" codes exist for large q, not so much for small q.
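The Hamming-regime curves above can be evaluated directly. The sketch below (illustrative, not from the slides) computes the Gilbert-Varshamov achievable rate R = 1 − H(2p) for binary codes (unique decoding of pn adversarial errors needs distance 2pn) and the Singleton/Reed-Solomon rate R = 1 − 2p for large alphabets:

```python
import math

def h2(x: float) -> float:
    """Binary entropy function H(x)."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def gv_rate(p: float) -> float:
    """Gilbert-Varshamov rate against pn adversarial bit-flips: R = 1 - H(2p),
    positive up to p = 0.25."""
    return max(0.0, 1 - h2(min(2 * p, 1.0)))

def singleton_rate(p: float) -> float:
    """Singleton-bound rate for large alphabets, achieved by Reed-Solomon
    codes: R = 1 - 2p, positive up to p = 0.5."""
    return max(0.0, 1 - 2 * p)
```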

(Figure: codewords x1(0), …, x5(0) with probabilities p1, …, p5, illustrating average vs. maximum probability of error; R vs. p plot.)

(Figure: average vs. maximum probability of error; R vs. p curves for the [McERRW77] and [Gilbert-Varshamov] bounds.)

Background – Related Work
"Benchmark" channel models, side by side: one extreme is random noise [Sha48]; the other is the omniscient adversary (Calvin knows "everything"!). Curves for q = 2 (binary), "intermediate" q, and "large" q: the [Sha48] capacity, the [Reed-Solomon]/[Singleton] rate R = 1 − 2p, and the [McERRW77] / [Gilbert-Varshamov] bounds. For intermediate q, AG codes give the best known achievability against the McERRW outer bound; a gap remains.

Background – Related Work
"Benchmark" channel models. One intermediate model: oblivious adversary. The adversary injects an error e^n of weight pn (alphabet size q, rate R = k/n) without seeing the transmitted codeword. Capacities: for q = 2 (binary), R = 1 − H(p) (AVC capacity); for "large" q, R = 1 − p (AVC capacity/folklore).

Background – Related Work
Oblivious adversary, continued. Here e^n is a function of the codebook and the message u^k, but NOT of the codeword x^n; s is a secret key known only to Alice (NOT to Bob or Calvin). The same AVC capacity curves apply: R = 1 − H(p) for q = 2 and R = 1 − p for "large" q. "Good, computationally efficient" codes were recently constructed for q = 2 [Guruswami-Smith-10].

Background – Related Work
Another intermediate model: common randomness between Alice and Bob. Now s is a secret key known to Alice and Bob (NOT Calvin); e^n is a function of the codebook and the codeword x^n, but NOT of u^k, and Bob's decoded message û^k is a function of s. The same AVC capacity curves apply. "Good, computationally efficient" codes exist (e.g., Ahlswede's permutation trick).

Background – Related Work
Weaker channel models. List-decoding: a weakened reconstruction goal, in which Bob outputs a short list guaranteed to contain u^k. Achievable rates [Elias, Wozencraft]: for q = 2 (binary), R = 1 − H(p); for "large" q, R = 1 − p.

Background – Related Work
List-decoding, continued: the same [Elias, Wozencraft] rates. "Good, computationally efficient" list-decodable codes were recently constructed for large q [Guruswami-Rudra06]; the q = 2 case remains open.
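A brute-force sketch of the weakened reconstruction goal: the decoder returns every codeword within a given radius of the received word, rather than a single estimate. This toy decoder (the names are mine, not the slides') runs in time linear in the codebook size and is only for intuition; efficient list decoders avoid enumerating the codebook:

```python
def hamming(a, b):
    """Hamming distance between two equal-length words."""
    return sum(x != y for x, y in zip(a, b))

def list_decode(received, codebook, radius):
    """Return the list of all codewords within Hamming distance
    `radius` of the received word."""
    return [c for c in codebook if hamming(received, c) <= radius]

codebook = ["0000", "1111", "0011"]
candidates = list_decode("0001", codebook, radius=1)  # a list, not a unique answer
```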

Background – Related Work
Weaker channel models. Computationally bounded adversaries: weakened adversary power, attacked with computationally efficient encoding/decoding schemes. The binary list-decoding rates are achieved by Micali-Sudan (modified with Guruswami-Rudra), and the "large" q rates by Guruswami-Smith.

Background – Related Work
AVCs with common randomness between Alice and Bob; omniscient jammer with noiseless feedback from Bob to Alice. The feedback results only deal with 0-error (the ε-error capacity is open). For q = 2 (binary), [Berlekamp]'s zero-error feedback capacity is positive up to p = 1/3, shown against the [Sha48], [McERRW77] and [Gilbert-Varshamov] curves; a corresponding curve holds for "large" q.

Causal adversaries: between oblivious and omniscient adversaries. Calvin acts causally: at each time step he sees the bits transmitted so far (the current prefix) but not the future ones, and decides on the fly which bits to tamper with. (Figure: a 14-bit transmitted word; positions Calvin has already seen are known to him, future positions are still "?"; the tampered word differs in the positions he has flipped.)

Causal adversaries: between oblivious and omniscient adversaries. Variants: causal adversary over a large alphabet; delayed adversary (Calvin sees the transmission with delay dn). Capacity curves shown: causal "large" q; delayed q = 2; delayed "large" q (additive errors); delayed "large" q (overwrite errors). The binary causal curve interpolates between Shannon and Hamming ("Sha-mming").

Causal adversaries: capacity characterization (Chen, Jaggi, Langberg, STOC 2015), the "Sha-mming" curve.

Causal adversaries: analysis of all possible causal adversarial behaviours, between oblivious and omniscient. Each attack is summarized by a trajectory p(t), the fraction of bits flipped by time t, plotted against t/n. (Slopes are bounded, since Calvin can flip at most one bit per channel use.)

Causal adversaries: proof techniques overview. Converse: the "babble-and-push" attack, in two phases. In the babbling phase Calvin randomly tampers with bits of the transmitted prefix, spending part of his pn budget; the pushing phase follows in the remaining channel uses.

Babble-and-push, pushing phase. From the corrupted bits transmitted so far, Calvin constructs a set of candidate codewords consistent with what Bob has seen. He then selects one codeword from the set and "pushes" the transmitted codeword towards the selected one. (Figure: transmitted, tampered, and selected words over the remaining positions 6-14.)

Babble-and-push, pushing phase, continued. The push is balanced so that the tampered word lies midway between the transmitted word and the selected word, leaving Bob unable to tell which of the two was sent.
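A toy simulation of the pushing phase may help. Under the (assumed, simplified) model that Calvin flips each disagreeing bit towards the selected codeword independently with probability 1/2, the tampered word ends up essentially equidistant from the transmitted and the selected word:

```python
import random

def push(transmitted, selected, rng):
    """Toy pushing phase: wherever the transmitted and selected codewords
    disagree, move the transmitted bit towards the selected one with
    probability 1/2, leaving the tampered word (on average) midway
    between the two."""
    tampered = []
    for t, s in zip(transmitted, selected):
        if t == s:
            tampered.append(t)          # agreement: no need to spend budget
        else:
            tampered.append(s if rng.random() < 0.5 else t)
    return tampered

rng = random.Random(0)
n = 10000
tx = [rng.randint(0, 1) for _ in range(n)]   # transmitted codeword
sel = [rng.randint(0, 1) for _ in range(n)]  # selected codeword
tamp = push(tx, sel, rng)
d_tx = sum(a != b for a, b in zip(tamp, tx))
d_sel = sum(a != b for a, b in zip(tamp, sel))
# d_tx and d_sel are each roughly half the number of disagreements
```

Every disagreeing position contributes to exactly one of the two distances, so d_tx + d_sel equals the distance between the transmitted and selected words, and the two halves concentrate around each other.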

Babble-and-push, key converse ingredient. Plotkin bound: any binary code with minimum distance dmin > n(1+ε)/2 has only O(1/ε) codewords. Calvin therefore pushes within a group of roughly 1/ε candidate codewords.
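The averaging argument behind the Plotkin bound says that any binary code with M codewords of length n has average pairwise Hamming distance at most nM/(2(M − 1)); a code with dmin > n(1+ε)/2 can therefore hold only O(1/ε) codewords. The check below (illustrative, my own framing) verifies the averaging inequality numerically:

```python
import random
from itertools import combinations

def avg_pairwise_distance(code):
    """Average Hamming distance over all pairs of codewords."""
    pairs = list(combinations(code, 2))
    total = sum(sum(a != b for a, b in zip(u, v)) for u, v in pairs)
    return total / len(pairs)

# The inequality avg distance <= n*M / (2*(M-1)) holds for EVERY binary
# code; here we spot-check it on a random one.
rng = random.Random(1)
n, M = 32, 8
code = [[rng.randint(0, 1) for _ in range(n)] for _ in range(M)]
plotkin_avg_bound = n * M / (2 * (M - 1))
```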

Babble-and-push, rate calculation. Split the n channel uses into a babbling prefix of length l and a pushing suffix of length n − l. Two conditions are balanced: a list-decoding condition, comparing the Shannon capacity of the first l channel uses against the number of message bits (so the candidate list after babbling is small), and an energy-bounding condition, comparing Calvin's remaining bit-flip budget against the average pairwise distance of the candidate set (via Plotkin).

Babble-and-push, refinement: the attack also works against stochastic encoders (where the codeword depends on the message and private encoder randomness), not just the deterministic encoders illustrated in the previous slides.

Babble-and-push, conclusion: Calvin never runs out of his bit-flipping budget, and by pushing towards one of the roughly 1/ε codewords in the Plotkin group he imposes a non-vanishing probability of decoding error.

Causal adversaries: proof techniques overview, achievability. The decoder tracks the adversary's trajectory p(t) against t/n (e.g., the trajectory of the "babble-and-push" strategy) and has a sequence of possible decoding points along it: times at which it may attempt to decode.

Achievability: encoder. A concatenated stochastic code: the message m (nR bits) together with a fresh secret key s_1 (nS bits) is encoded by a stochastic code C_1 into a mega sub-codeword C_1(m, s_1) of nθ bits.

Achievability: encoder, continued. The same message m is encoded 1/θ times with independent keys: the transmitted codeword is the concatenation C_1(m, s_1), C_2(m, s_2), …, C_{1/θ}(m, s_{1/θ}), for n bits in total.
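A minimal sketch of this concatenation structure, under a loud assumption: the inner code here is just (message || key), a stand-in purely for shape, whereas the paper's C_i are carefully designed stochastic codes:

```python
import random

def stochastic_concat_encode(m_bits, num_chunks, key_len, rng):
    """Toy concatenated stochastic encoder: each of the num_chunks mega
    sub-codewords encodes the SAME message m together with an independent
    fresh secret key s_i.  The inner map C_i(m, s_i) = (m || s_i) is a
    placeholder, NOT the actual inner code of the construction."""
    chunks = []
    for _ in range(num_chunks):
        s_i = [rng.randint(0, 1) for _ in range(key_len)]  # fresh key per chunk
        chunks.append(list(m_bits) + s_i)                  # "C_i(m, s_i)"
    return chunks

rng = random.Random(2)
m = [1, 0, 1, 1]
codeword_chunks = stochastic_concat_encode(m, num_chunks=5, key_len=3, rng=rng)
```

The point of the structure: every prefix of chunks already carries the full message, so the decoder can attempt list-decoding after any early decoding point and use later chunks only for disambiguation.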

Achievability: decoding is list-decoding followed by unique decoding. After observing the first tnθ bits (the first t mega sub-codewords C_1(m, s_1), …, C_t(m, s_t)), Bob list-decodes to obtain a small list of candidate messages; a subsequent unique-decoding phase selects among them.

Achievability: decoding, continued. Definition: two words are said to be consistent if they differ in at most a limited number of positions. In the unique-decoding phase, Bob checks which candidate message's re-encodings are consistent with the received remainder of the transmission (consistency checking of encodings).
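The consistency test itself is just a thresholded Hamming distance; a minimal sketch (the threshold value is a free parameter of the sketch, not the paper's):

```python
def consistent(u, v, threshold):
    """Two equal-length words are 'consistent' if they differ in at
    most `threshold` positions."""
    return sum(a != b for a, b in zip(u, v)) <= threshold
```

In the decoding process, Bob would re-encode each listed candidate with each plausible key and keep only candidates whose re-encodings pass this check against the received mega sub-codewords.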

Achievability, continued. Bob compares the received "right mega sub-codeword" (the portion after the decoding point) with the list of re-encoded "right mega sub-codewords"; a wrong candidate message fails consistency checking.

Achievability, conclusion. With high probability the correct message's received "right mega sub-codeword" passes consistency checking, and Bob succeeds in decoding.

Summary (Chen, Jaggi, Langberg, STOC 2015): capacity characterizations for causal adversaries under binary bit-flips and binary erasures.

Sneak previews: (Chen, Jaggi, Langberg, ISIT 2016) q-ary online p*-erasure p-error channels; (Dey, Jaggi, Langberg, Sarwate, ISIT 2016) one-bit-delay p*-erasure channels, under both stochastic and deterministic encoding.

(Figure: a repeated bit-string 0101001011001011001011000101000101001010110 illustrating coherence; source file Coherence.pptx.)

(Figure: rows of noise levels q1 q2 q3 q4 q5 arranged across the codeword locations of u(m) and u(m').) An example demonstrating the coherence of two codewords over T, for two base codewords u(m) and u(m'). Let T = [25], i.e., the first 25 locations of each codeword, and let there be K = 5 noise levels q_1, …, q_5 for each codeword, in the sets of locations S(m,1), …, S(m,5) and S(m',1), …, S(m',5) respectively. The expected size of each V(m,m',k,k') is therefore |T|/K^2 = 1. It can be verified that the largest V(m,m',k,k) has size 2 (only for k = 3; all other sets of size at least 2 have k ≠ k'), hence u(m) and u(m') are at most 2-coherent over T. Further, there are exactly 4 locations in which u(m) and u(m') are coherent (the 8th, 12th, 16th and 18th, as highlighted in the figure), so the remaining 21 decoherent locations are potentially usable by the decoder Bob to disambiguate between m and m'. In this example, V(m,m',1,2), comprising the two locations {1, 20}, is a possible choice for the disambiguation set, being of "reasonable size" and the lexicographically first set with k ≠ k'.

Erasures: erasing branch-points; erasing disambiguity. (Figure: the bit sequence 1 0 0 1 0 1 0 1 1 with erased positions.)

Other related models. Limited-view adversaries: multipath networks with large-alphabet symbols, where the adversary can see a certain fraction of the paths and jam another fraction (ITW 2015, ISIT 2015). Myopic adversaries: the adversary has a (non-causal) view of a noisy version of Alice's transmission (ISIT 2015).

Questions?