Universal Semantic Communication
Madhu Sudan, MIT CSAIL. Joint work with Brendan Juba (MIT CSAIL).
A fantasy setting (SETI)
Alice sends: 010010101010001111001000. Is there an intelligent alien (Alice) out there? If there are further messages, are they reacting to Bob? What should Bob's response be? There is no common language! Is meaningful communication possible?
Pioneer's faceplate
Why did they include this image? What would you put? What are the assumptions and implications?
Motivation: Better Computing
Networked computers use common languages:
- Interaction between computers (getting your computer onto the internet).
- Interaction between pieces of software.
- Interaction between software, data, and devices.
Getting two computing environments to "talk" to each other is becoming problematic: time-consuming, unreliable, insecure. Can we communicate more like humans do?
Classical paradigm for interaction
[Diagram: a single Designer specifies both Object 1 and Object 2.]
New paradigm
[Diagram: the Designer controls only Object 1; Object 2 is designed independently.]
Robust interfaces
We want one interface for all "Object 2"s. Can such an interface exist? What properties should it exhibit? Our thesis: it is sufficient (for Object 1) to count on the intelligence of Object 2. But how to detect this intelligence? That puts us back in the "Alice and Bob" setting.
Goal of our work
Definitional issues and a definition: What is successful communication? What is intelligence? Cooperation?
Theorem: "If Alice and Bob are intelligent and cooperative, then communication is feasible" (in one setting).
The proof ideas may suggest protocols, phenomena, and methods for proving/verifying intelligence.
Some observations
Communication without "goals" leads to inherent ambiguity. But communication must have a goal ... right? Examples: outsourcing computation, asking for "wisdom" (is this email spam? is this program a virus?), asking a printer to print. Our formalization: any such task must be accompanied by a "verification" algorithm that Bob can apply to the transcript of his conversation with Alice.
Results
Computational wisdom: Alice can help Bob solve hard problems if and only if Bob can verify the help. Equivalence of misunderstanding and mistrust. Communication can be useful in a computational context.
A generic communicational goal should be verifiable, and should be something Bob cannot do without Alice's abilities. Examples: Printer? Computationally capable?
Questions?
Goal of our work (recap).
What has this to do with computation?
In general: subtle issues related to "human" intelligence and interaction are within the scope of computational complexity. E.g.: proofs; easy vs. hard; (pseudo)randomness; secrecy; knowledge; trust; privacy. This talk: what is "understanding"?
A first attempt at a definition
Alice and Bob are "universal computers" (a.k.a. programming languages) and have no idea what the other's language is. Can they learn each other's language? Good news: language learning is finite, so Bob can enumerate to find a translator. Bad news: there is no third party to give him the finite string! Enumerate? He can't tell right from wrong.
Communication & goals
Indistinguishability of right from wrong is a consequence of "communication without a goal". Communication (with or without a common language) ought to have a "goal". Before we ask how to improve communication, we should ask why we communicate. "Communication is not an end in itself, but a means to achieving a goal."
Part I: A Computational Goal
Computational goal for Bob
Bob wants to solve a hard computational problem: decide membership in a set S. Can Alice help him? What kinds of sets S? E.g.:
S = {programs P that are not viruses}
S = {non-spam email}
S = {winning configurations in chess}
S = {(A, B) | A has a factor less than B}
Review of complexity classes
P (BPP): solvable in (randomized) polynomial time; Bob can solve these without Alice's help.
NP: solutions can be verified in polynomial time (contains factoring).
PSPACE: solvable in polynomial space (quite infeasible for Bob on his own).
Computable: solvable in finite time (includes all of the above).
Uncomputable: e.g., virus detection, spam filtering.
Which problems can you solve with (alien) help?
Setup
Bob, on input x, tosses random coins R and interacts with Alice: he sends questions q_1, ..., q_k and receives answers a_1, ..., a_k. He then evaluates f(x, R, a_1, ..., a_k) and accepts iff it equals 1. Hopefully, x ∈ S iff f(x, R, a_1, ..., a_k) = 1. Which class of sets S does this allow?
Contrast with interactive proofs
Similarity: interaction between Alice and Bob. Difference: in IP, Bob does not trust Alice (in our case, Bob does not understand Alice). Famed theorem: IP = PSPACE [LFKN, Shamir]: membership in a PSPACE-solvable S can be proved interactively to a probabilistic Bob, but this needs a PSPACE-complete prover Alice.
Intelligence & cooperation?
For Bob to have a non-trivial interaction, Alice must be:
Intelligent: capable of deciding whether x ∈ S.
Cooperative: able to communicate this to Bob.
Modelling Alice: a map from (state of mind, external input) to (new state of mind, output).
Successful universal communication
Bob should be able to talk to any S-helpful Alice and decide S. Formally, a probabilistic polynomial-time B is S-universal iff for every x ∈ {0,1}*:
- If A is S-helpful, then [A ↔ B(x)] = 1 iff x ∈ S (with high probability).
- If A is not S-helpful: no requirement!
Or should it be: if A is not S-helpful, then [A ↔ B(x)] = 1 implies x ∈ S.
Main theorem
- If there exists an S-universal Bob, then S is in PSPACE.
- If S is PSPACE-complete (a.k.a. chess), then there exists an S-universal Bob. (Generalizes to any checkable set S.)
In English: if S is moderately stronger than what Bob can do on his own, then attempting to solve S leads to non-trivial (useful) conversation. If S is too strong, it leads to ambiguity. The proof uses IP = PSPACE.
Few words about the proof
Positive result: enumeration + interactive proofs. Bob guesses a pair (Interpreter, "x ∈ S?") and, through the guessed Interpreter, runs an interactive proof with Alice as prover. If the proof works, then x ∈ S; if it doesn't work, the guess was wrong. Since Alice is S-helpful, a correct Interpreter exists.
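The enumeration loop above can be sketched as follows. This is a hypothetical toy rendering, not the real protocol: `run_proof` stands in for the whole interactive proof through a candidate interpreter, and the interpreter/claim values in the usage are made up.

```python
from itertools import product

def universal_decide(x, interpreters, claims, run_proof):
    """Bob's enumeration: guess a pair (Interpreter, claim), run an
    interactive proof through the Interpreter, and accept the first
    claim whose proof checks out.  A failed proof only means the
    guess was wrong; soundness of the proof keeps Bob from being
    fooled by a bad Interpreter."""
    for interp, claim in product(interpreters, claims):
        if run_proof(interp, x, claim):
            return claim
    return None  # no guess worked (Alice may not be S-helpful)
```

The key design point, as on the slide, is that verification is delegated to the proof system, so a wrong interpreter can waste time but never cause a wrong answer.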
Proof of negative result
L not in PSPACE implies Bob makes mistakes. Suppose Alice answers every question so as to minimize the conversation length (a reasonable effect of misunderstanding). Then the conversation comes to an end quickly, and Bob has to decide. The conversation plus decision is simulatable in PSPACE (since Alice's strategy can be computed in PSPACE), so Bob must be wrong if L is not in PSPACE. Warning: this only leads to finitely many mistakes.
Potential criticisms of main theorem
"This is just rephrasing IP = PSPACE." No: the result proves that "misunderstanding is equal to mistrust", which was not a priori clear, and even this is true only in some contexts.
Potential criticisms of main theorem (contd.)
"Bob is too slow: he takes time exponential in the length of Alice, even in his own description of her!" A priori it was not clear he could decide right from wrong at all. Polynomial-time learning is not possible in our model of a "helpful Alice". Better definitions can be explored in future work.
Potential criticisms of main theorem (contd.)
"Alice has to be infinitely/PSPACE powerful ... but not as powerful as that anti-virus program!" Wait for Part II.
Part II: Intellectual Curiosity
Setting: Bob more powerful than Alice
What should Bob's goal be? He can't use Alice to solve problems that are hard for him. He can pose problems and see if she can solve them (e.g., teacher-student interactions). But how does he verify "non-triviality"? What is "non-trivial"? He must distinguish the two scenes.
[Diagram: Alice, Interpreter, Bob in Scene 1 vs. Scene 2.]
Setting: Bob more powerful than Alice (contd.)
Concretely: Bob is capable of TIME(n^10); Alice is capable of TIME(n^3), or of nothing. Can Bob distinguish the two settings? Answer: yes, if Translate(Alice, Bob) is computable in TIME(n^2). Bob poses TIME(n^3) problems to Alice and enumerates all TIME(n^2) interpreters. Moral: language (translation) should be simpler than the problems being discussed.
Part III: Concluding thoughts
Is this language learning?
The end result promises no language learning: merely that Bob solves his problem. In the process, however, Bob learns an Interpreter! But this may not be the "right" Interpreter. All this is good: there is no need to distinguish indistinguishables!
Goals of communication
Largely unexplored (at least explicitly). Main categories:
Remote control: a laptop wants to print on a printer; buying something on Amazon.
Intellectual curiosity: learning/teaching; listening to music, watching movies; coming to this talk; searching for alien intelligence.
May involve a common environment/context.
Extension to generic goals
A generic (implementation of a) goal is given by: a strategy for Bob; a class of Interpreters; and a Boolean function G of the private input and randomness, the interaction with Alice through the Interpreter, and the environment (altered by Alice's actions). The goal should be:
Verifiable: G should be easily computable.
Complete: achievable with a common language (for some Alice, independent of history).
Non-trivial: not achievable without Alice.
Generic verifiable goal
Verifiable Goal = (Strategy, Class of Interpreters, V): Bob runs his Strategy through an Interpreter against Alice, on private input x and randomness R, and checks V(x, R, interaction).
Generic goals
One can define goal-helpful and goal-universal, and prove the existence of a goal-universal Interpreter for all goals. Claim: this captures all communication (unless you plan to accept random strings). Modelling natural goals is still interesting. E.g.:
Printer problem: Bob(x): Alice should say x.
Intellectual curiosity: Bob: send me a "theorem" I can't prove, and a "proof".
Proof of intelligence (computational power): Bob: given f and x, compute f(x).
Conclusion: (goals of) communication can be achieved without a common language.
Role of common language?
If a common language is not needed (as we claim), then why do intelligent beings like having one? Our belief: to gain efficiency, reducing the number of bits and the number of rounds of communication. Topics for further study: what efficiency measure does language optimize? Is the difference asymptotically significant?
Further work
Exponential-time learning (enumerating Interpreters): what is a reasonable restriction on languages? What is the role of language in communication? What are other goals of communication? What is intelligence?
Paper (Part I) available from ECCC.
Thank You!
Example
Symmetric Alice and Bob (computationally). Bob's goal: get an Interpreter in TIME(n^2) to solve TIME(n^3) problems by talking to Alice.
Verifiable: Bob can generate such problems, with solutions, in TIME(n^3).
Complete: Alice can solve this problem.
Non-trivial: the Interpreter cannot solve the problem on its own.
Summary
Communication should strive to satisfy one's goals; if one does this, "understanding" follows. Understanding can be enabled by dialog:
Laptop -> Printer: Print.
Printer: But first tell me: "If there are three oranges and you take away two, how many will you have?"
Laptop: One!
Printer: Sorry, we don't understand each other.
Laptop: Oh wait, I got it, the answer is "Two".
Printer: All right ... printing.
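The printer dialog above can be sketched as a toy challenge-response loop. This is purely illustrative: the function name is hypothetical, and the riddle answers come straight from the slide.

```python
def handshake(expected, answers):
    """Printer-side check: accept the laptop's successive candidate
    answers to the riddle; print once one matches."""
    for attempt, answer in enumerate(answers, start=1):
        if answer == expected:
            return f"printing (understood after {attempt} attempt(s))"
    return "we don't understand each other"
```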
Few words about the proof (contd.)
Positive result: enumeration + interactive proofs.
- Bob verifies x ∈ L by simulating the IP verifier.
- But he needs to ask the IP prover many questions, which translate into many other membership questions "y ∈ L?".
- To get the answers, Bob guesses a Bob' and simulates the interaction between Alice and Bob'. If the resulting proof is convincing, then x ∈ L; conversely, if x ∈ L and Bob' is correct, Bob gets a convincing proof.
How to model curiosity?
How can Alice create non-trivial conversations when she is not more powerful than Bob? Non-triviality of a conversation depends on the ability to jointly solve a problem that Bob could not solve on his own. But now Alice can't help either! Are we stuck?
Communication & goals (contd.)
Bob's goal should be:
Verifiable: an easily computable function of the interaction.
Complete: achievable with a common language.
Non-trivial: not achievable without Alice.
Cryptography to the rescue
Alice can generate hard problems while knowing the answer. E.g., first: "I can factor N"; later: "P * Q = N". If B' is intellectually curious, he can try to factor N first on his own; he will (presumably) fail. Then Alice's second sentence will be a "revelation". Non-triviality: Bob verified that none of the algorithms known to him convert his knowledge into factors of N.
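A minimal sketch of Alice's commit-then-reveal trick, using tiny primes and trial division for readability; a real instantiation would of course use cryptographic key sizes. The function names are made up for illustration.

```python
import random

def make_puzzle(bits=16):
    """Alice picks two random primes and publishes only N = P*Q,
    keeping the factors for her later 'revelation'."""
    def rand_prime():
        while True:
            c = random.randrange(2 ** (bits - 1), 2 ** bits)
            if all(c % d for d in range(2, int(c ** 0.5) + 1)):
                return c
    p, q = rand_prime(), rand_prime()
    return p * q, (p, q)  # public challenge, private answer

def check_revelation(n, p, q):
    """Bob's verification of Alice's second message 'P * Q = N'."""
    return 1 < p < n and 1 < q < n and p * q == n
```

Bob can spend as long as he likes failing to factor N himself; verifying the revelation, by contrast, is a single multiplication.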
More generally
Alice can send Bob a goal function. Bob can try to find conversations satisfying the goal. Once he fails, Alice can produce conversations that satisfy the goal. Is this universal?
Part III: Pioneer faceplate? Non-interactive proofs of intelligence?
Compression is universal
When Bob receives Alice's string, he should try to look for a pattern (i.e., to "compress" the string). A universal efficient compression algorithm:
Input X;
Enumerate efficient pairs (C(), D());
If D(C(X)) ≠ X, the pair is invalid;
Among valid pairs, output the one with the smallest |C(X)|.
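A minimal sketch of this search, assuming a small fixed pool of candidate (C, D) pairs rather than the true enumeration of all efficient programs (which this sketch does not attempt):

```python
import zlib

# Hypothetical candidate (compress, decompress) pairs.
CANDIDATES = [
    (lambda x: x, lambda y: y),              # identity
    (zlib.compress, zlib.decompress),        # a generic compressor
    (lambda x: x[::-1], lambda y: y[::-1]),  # reversal (no gain)
]

def best_compression(x: bytes) -> bytes:
    """Keep only pairs with D(C(x)) == x, i.e. valid codecs,
    and return the shortest resulting C(x)."""
    valid = [c(x) for c, d in CANDIDATES if d(c(x)) == x]
    return min(valid, key=len)
```

The validity check D(C(X)) = X is exactly what makes the search self-verifying: a wrong codec disqualifies itself.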
Compression-based communication
As Alice sends her string to Bob, Bob tries to compress it: after 0.9n steps he has compressed the prefix X to C(X); after n steps he has compressed (X, X') to C(X, X'). Such phenomena can occur! Surely they suggest intelligence/comprehension?
Discussion of result
Alice needs to solve PSPACE. Realistic? What about virus detection and spam filters? These solve undecidable problems! The PSPACE setting is a natural, clean setting that arises from the proof; other languages work too (SZK, NP ∩ coNP). Learning B' takes exponential time! This is inevitable (minor theorem) ... unless languages have structure. Future work.
Discussion of result (contd.)
Good news: if we take "self-verification" as an axiom, then meaningful learning is possible! This is a simple semantic principle; it is reasonable to assume that Alice (the alien) would have determined this as well, and so will use it to communicate with us (Bobs).