1 Indistinguishability by adaptive procedures with advice, and lower bounds on hardness amplification proofs
Aryeh Grinberg, U. Haifa; Ronen Shaltiel, U. Haifa; Emanuele Viola, Northeastern
f: {0,1}^n → {0,1}, ∀C in circuit class C: Pr_X[C(X) = f(X)] < 1 − δ
f′: {0,1}^n′ → {0,1}, ∀C′ in circuit class C′: Pr_X[C′(X) = f′(X)] < 1/2 + ε

2 Hardness amplification theorems: mildly hard functions ⇒ very hard functions
f: {0,1}^n → {0,1} is a "(1−δ)-hard function" if ∀C in circuit class C: Pr_X[C(X) = f(X)] < 1 − δ.
f′: {0,1}^n′ → {0,1} is a "(1/2+ε)-hard function" if ∀C′ in circuit class C′: Pr_X[C′(X) = f′(X)] < 1/2 + ε.
Used all over in crypto, derandomization, …
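As a quick illustration of the agreement probability Pr_X[C(X) = f(X)] that both hardness notions bound, here is a minimal sampling sketch; `f`, `C`, and the helper `agreement` are my hypothetical stand-ins, not objects from the talk.

```python
import random

def agreement(f, C, n, samples=10_000):
    """Estimate Pr_X[C(X) = f(X)] over uniform X in {0,1}^n by sampling."""
    hits = 0
    for _ in range(samples):
        x = tuple(random.randint(0, 1) for _ in range(n))
        hits += (C(x) == f(x))
    return hits / samples

if __name__ == "__main__":
    parity = lambda x: sum(x) % 2      # a concrete f for illustration
    guess_zero = lambda x: 0           # a trivial "circuit" C
    # f is "(1 - delta)-hard" for a class if every C in the class has agreement < 1 - delta.
    print(agreement(parity, guess_zero, n=8))   # ~0.5: guessing 0 agrees with parity on half the inputs
```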

3 Example: Yao's XOR-Lemma [Yao82,Lev87,Imp95,GNW95,KS03]
Construction map: f ⇒ f′ = Con(f), f′(x_1,…,x_t) = f(x_1) ⊕ … ⊕ f(x_t).
Thm: for t = O(log n), ∀f: f is (1 − 1/10)-hard for P/poly ⇒ f′ is (1/2 + 1/n)-hard for P/poly.
What about lower circuit classes? Lose-lose principle: you can only amplify the hardness you don't have.
Most frustrating for AC⁰[⊕]: have mildly hard functions (majority) [Raz87], but not very hard ones.
[Diagram: circuit classes ordered by the power of C: AC⁰, AC⁰[⊕], TC⁰ = AC⁰[maj], NC¹, P/poly; majority sits at TC⁰. For the weaker classes we have lower bounds but no amplification; for the stronger classes we can do hardness amplification but cannot prove lower bounds [RR,NR].]
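A minimal sketch (mine, not from the talk) of the construction map in this example, with parity standing in for f:

```python
from functools import reduce
from operator import xor

def xor_construction(f, t):
    """Con: f => f' with f'(x_1, ..., x_t) = f(x_1) XOR ... XOR f(x_t)."""
    def f_prime(*blocks):
        assert len(blocks) == t
        return reduce(xor, (f(x) for x in blocks))
    return f_prime

parity = lambda x: sum(x) % 2            # a stand-in mildly hard f
f_prime = xor_construction(parity, t=3)
print(f_prime((1, 0), (1, 1), (0, 0)))   # 1 XOR 0 XOR 0 = 1
```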

4 Our results: Limitations on "powerful" black-box techniques for hardness amplification
Lose-lose principle: you can only amplify the hardness you don't have.
Most frustrating for AC⁰[⊕]: have mildly hard functions (majority) [Raz87], but not very hard ones. Can't afford the hybrid argument and get PRGs w/ large stretch.
Previous work [SV08,GR09]: the barrier cannot be bypassed by certain black-box techniques.
This work: the barrier cannot be bypassed by general black-box techniques.
[Diagram: same circuit-class picture as the previous slide: AC⁰, AC⁰[⊕], TC⁰ = AC⁰[maj], NC¹, P/poly ordered by power of C; lower bounds but no amplification on the weak side, amplification but no lower bounds [RR,NR] on the strong side.]

5 Example: Yao's XOR-Lemma [Yao82,Lev87,Imp95,GNW95,KS03]
Construction map: f ⇒ f′ = Con(f), f′(x_1,…,x_t) = f(x_1) ⊕ … ⊕ f(x_t).
Thm: for t = O(log(1/ε)/δ), ∀f: f is (1−δ)-hard for size-s circuits ⇒ f′ is (1/2+ε)-hard for circuits of size s′ = s/q, where q = O(log(1/δ)/ε²).
The circuit for f′ is q times smaller?! ⇒ ε ≥ 1/√s, disappointing!
This work: a loss of q = O(log(1/δ)/ε²) is necessary for general black-box techniques for hardness amplification. Improves upon [SV08,AS11].
The case δ = 2^(−n) captures worst-case hardness. Closely related to locally-decodable list-decodable codes [STV99].
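To make the size loss concrete, here is a small numeric sketch (my illustrative constants, ignoring the hidden O(·) factors) of q = log(1/δ)/ε² and the resulting constraint on ε:

```python
import math

def size_loss(delta, eps):
    """q ~ log(1/delta) / eps^2, ignoring the hidden constant."""
    return math.log(1 / delta) / eps**2

s = 2**40                      # assumed size bound for the mildly hard f (illustrative)
delta, eps = 0.1, 2**-10
q = size_loss(delta, eps)
print(f"q ~ {q:.3g}, s' = s/q ~ {s / q:.3g}")
# For s' = s/q to remain a nontrivial size bound we need q << s,
# i.e. eps must be at least roughly sqrt(log(1/delta)/s), on the order of 1/sqrt(s):
print(f"1/sqrt(s) ~ {1 / math.sqrt(s):.3g}")
```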

6 Reductions proving hardness amplification: nonuniform advice and adaptivity
(Black-box) hardness amplification theorems consist of:
Construction map: f ⇒ f′ = Con(f).
Proof: a reduction Red^(·)(x) showing that: C′ breaks f′ ⇒ C(x) = Red^{C′}(x) breaks f.
nonuniform : uniform ≡ list decoding : unique decoding.
Our results: lower bounds on circuit depth and # of queries for general reductions Red^(·) that take advice and are adaptive.
General reductions: can be adaptive, and receive a poly-size "nonuniform" advice string α = α(C′) of short length; α is an arbitrary function of C′.
[Diagram: Red^(·)(x) has black-box query/answer access to the truth table C′_1, C′_2, …, C′_N of C′, plus the advice α.]
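A minimal sketch of the shape of such a general reduction (my illustration, not the paper's formalism; `red_step` is a hypothetical interface): it receives x and the advice α, and issues oracle queries that may depend on earlier answers.

```python
from typing import Callable

def run_reduction(red_step: Callable, x, alpha, oracle: Callable, max_queries: int):
    """Drive an adaptive reduction Red^{C'}(x, alpha).

    `red_step(x, alpha, answers)` is a hypothetical interface: given the answers to
    the queries made so far, it returns ("query", y) to ask the oracle C' at y, or
    ("output", b) once Red is ready to output its guess b for f(x)."""
    answers = []
    for _ in range(max_queries):
        kind, payload = red_step(x, alpha, tuple(answers))
        if kind == "output":
            return payload
        answers.append(oracle(payload))   # adaptivity: the next step sees this answer
    raise RuntimeError("query budget exhausted")
```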

7 Black-box hardness amplification: A pair of construction/reduction
Dfn: A non-uniform b.b. hardness amplification is a pair (Con, Red) s.t.:
Construction map: maps f ⇒ f′ = Con(f).
Red^(·)(x) is an oracle circuit s.t. ∀f, C′ s.t. C′ ε-agrees with f′ = Con(f), ∃ a "non-uniform advice string" α = α(f, C′) s.t. C(x) = Red^{C′}(x, α) is a function that (1−δ)-agrees with f. (Through α, Red gets non-black-box access to C′.)
Uniform vs. non-uniform reductions:
For δ = 0, b.b. hardness amp. ≡ uniquely decodable codes (Con is the encoding map, Red the decoding map). Plotkin bound: no b.b. hardness amp. for ε < 1/4.
Non-uniform b.b. hardness amp. ≡ list-decodable codes (Con is the encoding map, Red the list-decoding map).

8 Black-box hardness amplification: A pair of construction/reduction
Dfn: A non-uniform b.b. hardness amplification is a pair (Con, Red) s.t.:
Construction map: maps f ⇒ f′ = Con(f).
Red^(·)(x) is an oracle circuit s.t. ∀f, C′ s.t. C′ ε-agrees with f′ = Con(f), ∃ a "non-uniform advice string" α = α(f, C′) s.t. C(x) = Red^{C′}(x, α) is a function that (1−δ)-agrees with f. (Con is the encoding map, Red the list-decoding map; through α, Red gets non-black-box access to C′.)
The complexity of Red governs the complexity difference between C and C′:
Circuit size of Red and length of α (govern the size difference).
# of queries that Red^(·) makes (governs the size difference). Queries can be adaptive/non-adaptive.
Circuit depth of Red (governs the depth difference).

9 Our results on non-uniform b.b. hardness amplification
Thm: Let (Con, Red) be a non-uniform b.b. hardness amplification s.t. size(Red), # of queries, 1/ε, |α| are 2^(o(k)), and 2^(−2k) ≤ δ ≤ 1/3. Then:
Red can be used to compute majority on inputs of length ℓ = Ω(1/ε) ⇒ Red requires size exp(ℓ^(Ω(1/d))) for depth-d circuits (even with parity gates). [SV08] only handled non-adaptive reductions; [GR09] only handled logarithmic nonuniformity.
Red makes at least q = Ω(log(1/δ)/ε²) queries. [AS11] only achieved q = Ω(1/ε).

10 Proof strategy following [Vio06,SV08,GR09]
Let N_p denote an oracle in which each entry is an i.i.d. bit that is one with probability p.
Fix f to be very hard for circuits of size 2^(o(k)) (such f exist).
Consider two oracle distributions:
C′_{1/2−ε} = Con(f) ⊕ N_{1/2−ε}: C′_{1/2−ε} (1/2+ε)-agrees with Con(f) ⇒ Red^{C′_{1/2−ε}} must (1−δ)-agree with f.
C′_{1/2} = Con(f) ⊕ N_{1/2} = N_{1/2}: C′_{1/2} gives no info on f ⇒ Red^{C′_{1/2}} can't (1−δ)-agree with f.
So Red can be used to distinguish N_{1/2} from N_{1/2−ε} with advantage 1−δ.
⇒ Red can be used to compute maj on length ℓ = Ω(1/ε) [SV08].
⇒ Red must make at least q = Ω(log(1/δ)/ε²) queries [SV08].
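A minimal sampling sketch (my illustration, with a stand-in for Con(f) and fresh noise per query rather than a fixed noisy truth table) of the two oracle distributions and their agreement with Con(f):

```python
import random

def noisy_oracle(con_f, p):
    """x -> Con(f)(x) XOR B_p, with a fresh p-biased noise bit per call.
    (A real oracle would fix one noise bit per input; fresh bits suffice here.)"""
    return lambda x: con_f(x) ^ (1 if random.random() < p else 0)

con_f = lambda x: sum(x) % 2          # a stand-in for Con(f)
eps = 0.1
points = [tuple(random.randint(0, 1) for _ in range(8)) for _ in range(20_000)]
for p in (0.5 - eps, 0.5):
    oracle = noisy_oracle(con_f, p)
    agree = sum(oracle(x) == con_f(x) for x in points) / len(points)
    print(f"noise rate {p}: agreement with Con(f) ~ {agree:.3f}")   # ~0.6 vs ~0.5
```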

11 Proof strategy following [Vio06,SV08,GR09]
Problem: a non-uniform Red gets advice α = α(C′) = α(N).
Solution: argue that Red can't distinguish N_p from (N_p | A) for a "large" event A.
Intuition: for most fixings α′, the event A = {α(N_p) = α′} is "large".
C′_{1/2−ε} = Con(f) ⊕ N_{1/2−ε}: C′_{1/2−ε} (1/2+ε)-agrees with Con(f) ⇒ Red^{C′_{1/2−ε}} must (1−δ)-agree with f.
C′_{1/2} = Con(f) ⊕ N_{1/2} = N_{1/2}: C′_{1/2} gives no info on f ⇒ Red^{C′_{1/2}} can't (1−δ)-agree with f.
Red can be used to distinguish N_{1/2} from N_{1/2−ε} with advantage 1−δ.
⇒ Red can be used to compute maj on length ℓ = Ω(1/ε) [SV08].
⇒ Red must make at least q = Ω(log(1/δ)/ε²) queries [SV08].

12 Indistinguishability by adaptive procedures that take advice
(A component in the proof) Unrelated to black-box issues! Potentially useful in other settings?

13 Indistinguishability by adaptive procedures with advice
Setup (say q, a = polylog(N)): Let R = (R_1,…,R_N) be uniform i.i.d. bits. Let A be an event s.t. Pr[R ∈ A] ≥ 2^(−a), so that H(X) ≥ N − a. Let X = (R | A). Can depth-q decision trees distinguish R from X?
Advice is helpful!
Bad bits: A = {R_1 = 1}. A nonadaptive tree distinguishes by querying R_1. [Diagram: R_1, R_2, …, R_N with R_1 fixed.]
Pointer: N = ℓ + 2^ℓ with ℓ ≈ log N, R = (R^P, R^D), A = {R^D_{R^P} = 1}. An adaptive tree distinguishes by querying R^P_1,…,R^P_ℓ and then R^D_{R^P}. [Diagram: pointer part R^P of length ℓ, data part R^D_1,…,R^D_{2^ℓ}, with the pointed-to bit R^D_{R^P} fixed.]
Forbidden set lemma: ∃B ⊆ [N], small, s.t. depth-q trees that don't query in B cannot distinguish R from X.
Fixed set lemma: ∃B ⊆ [N], small, ∃ a value v for X_B, s.t. depth-q trees cannot distinguish (R | R_B = v) from (X | X_B = v).
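A minimal simulation sketch (my illustration) of the pointer example: the adaptive tree reads the ℓ pointer bits and then the single pointed-to data bit, which is always 1 under X = (R | A) but unbiased under R.

```python
import random

L = 4                                   # pointer length; N = L + 2**L

def sample_R():
    """R = (R^P, R^D): L pointer bits followed by 2^L data bits, all uniform."""
    return [random.randint(0, 1) for _ in range(L + 2**L)]

def pointed_bit(r):
    idx = int("".join(str(b) for b in r[:L]), 2)    # read the pointer R^P (L queries)
    return r[L + idx]                               # then the data bit R^D_{R^P} (1 more query)

def sample_X():
    """X = (R | A) for A = { R^D_{R^P} = 1 }, sampled by rejection."""
    while True:
        r = sample_R()
        if pointed_bit(r) == 1:
            return r

trials = 5000
print(sum(pointed_bit(sample_R()) for _ in range(trials)) / trials)  # ~0.5 on R
print(sum(pointed_bit(sample_X()) for _ in range(trials)) / trials)  # 1.0 on X = (R | A)
```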

14 Indistinguishability by adaptive procedures with advice
Setup (say q, a = polylog(N)): Let R = (R_1,…,R_N) be uniform i.i.d. bits. Let A be an event s.t. Pr[R ∈ A] ≥ 2^(−a), so that H(X) ≥ N − a. Let X = (R | A). Can depth-q decision trees distinguish R from X?
Forbidden set lemma: ∃B ⊆ [N], small, s.t. depth-q trees that don't query in B cannot distinguish R from X.
Fixed set lemma: ∃B ⊆ [N], small, ∃ a value v for X_B, s.t. depth-q trees cannot distinguish (R | R_B = v) from (X | X_B = v).
Here "small" = poly(q, a, 1/η), where η is the distinguishing advantage.
The forbidden set lemma generalizes a folklore lemma that has q = 1, and [SV08], where the trees are nonadaptive.
Related variants of the fixed set lemma appear in [Unr07,DGK17,CDGS18].
Our proofs about reductions end up using the fixed set lemma.

15 Proof of fixed set lemma
Setup: Let R = (R_1,…,R_N) be uniform i.i.d. bits. Let A be an event s.t. Pr[R ∈ A] ≥ 2^(−a). Let X = (R | A). Can depth-q decision trees distinguish R from X?
Fixed set lemma: ∃B ⊆ [N], small, ∃ a value v for X_B, s.t. depth-q trees cannot distinguish (R | R_B = v) from (X | X_B = v).
Let HD(X) = |X| − H(X) ≥ 0 be the "entropy deficiency" of X.
Claim: If a depth-q tree η-distinguishes X from R, then ∃Q ⊆ [N] of size q, ∃v ∈ {0,1}^q, s.t. HD(X | X_Q = v) ≤ HD(X) − η².
The fixed set lemma follows: initially HD(X) ≤ a, so after at most a/η² steps no tree can distinguish. We fix at most qa/η² bits.
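An illustrative sketch of how the iteration in this argument runs (the lemma itself is an existence statement, so this is only a schematic; `find_distinguishing_tree` and `best_fixing` are hypothetical helpers standing in for the claim):

```python
def fixed_set_argument(X, q, a, eta, find_distinguishing_tree, best_fixing):
    """Schematic of the iteration behind the fixed set lemma.

    `find_distinguishing_tree` returns a depth-q tree that still eta-distinguishes
    (R | fixed) from (X | fixed), or None; `best_fixing` returns the set Q (|Q| <= q)
    and value v whose fixing lowers the entropy deficiency of X by at least eta**2,
    as the claim guarantees. Both are hypothetical helpers for illustration."""
    fixed = {}                                # the set B and its value v, built up coordinate by coordinate
    for _ in range(int(a / eta**2) + 1):      # HD(X) <= a, and each step removes eta^2
        tree = find_distinguishing_tree(X, fixed, q, eta)
        if tree is None:                      # no depth-q tree distinguishes any more: done
            break
        Q, v = best_fixing(X, tree)
        fixed.update(zip(Q, v))               # in total at most q*a/eta^2 bits get fixed
    return fixed
```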

16 Proof of fixed set lemma: Proof of claim
Let HD(X) = |X| − H(X) ≥ 0 be the "entropy deficiency" of X.
Claim: If a depth-q tree η-distinguishes X from R, then ∃Q ⊆ [N] of size q, ∃v ∈ {0,1}^q, s.t. HD(X | X_Q = v) ≤ HD(X) − η².
Proof: Assume that a depth-q tree T η-distinguishes. Let I = (I_1,…,I_q) be the queries asked on X (random variables).
(X_{I_1},…,X_{I_q}) is η-far from uniform ⇒ H(X_{I_1},…,X_{I_q}) ≤ q − η² (Pinsker's lemma).
H(X) = H(X, X_{I_1},…,X_{I_q}) (I is a function of X) = H(X_{I_1},…,X_{I_q}) + H(X | X_{I_1},…,X_{I_q}) (entropy chain rule).
⇒ H(X | X_{I_1},…,X_{I_q}) ≥ H(X) − q + η².
⇒ ∃v: H(X | X_I = v) ≥ H(X) − q + η², with I fixed to some Q.
⇒ HD(X | X_Q = v) ≤ HD(X) − η².
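The same inequality chain written out as a display (a restatement of the slide's derivation under its definitions, nothing new):

```latex
\begin{align*}
H(X) &= H\!\left(X,\,X_{I_1},\dots,X_{I_q}\right) && \text{($I$ is a function of $X$)}\\
     &= H\!\left(X_{I_1},\dots,X_{I_q}\right) + H\!\left(X \mid X_{I_1},\dots,X_{I_q}\right) && \text{(entropy chain rule)}\\
     &\le \left(q-\eta^2\right) + H\!\left(X \mid X_{I_1},\dots,X_{I_q}\right) && \text{(Pinsker: $\eta$-far from uniform)}
\end{align*}
\[
\Rightarrow\ H\!\left(X \mid X_{I_1},\dots,X_{I_q}\right) \ge H(X) - q + \eta^2,
\ \text{so some value } v \text{ (with $I$ fixed to $Q$) satisfies } H(X \mid X_I = v) \ge H(X) - q + \eta^2 .
\]
```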

17 Conclusion and Open problems
We show that the XOR lemma for constant-depth circuits cannot be proven by general black-box techniques.
Does the XOR lemma hold for constant-depth circuits?
Question: is it true that for t = O(log n) (or even t = poly(n)), ∀f: f is (1 − 1/10)-hard for AC⁰[⊕] ⇒ f′(x_1,…,x_t) = f(x_1) ⊕ … ⊕ f(x_t) is (1/2 + 1/n)-hard for AC⁰[⊕]?
What about non-black-box techniques? [GST05,Ats06,GT07] prove a "weak variant of amplification" that provably beats the black-box lower bounds of [FF98,BT03]. This proof technique isn't ruled out by our result.

18 More conclusions and open problems
In the paper we also consider hardness amplification that corresponds to "non-Boolean codes" and to "decoding from erasures".
Example, direct product: Construction map: f ⇒ f′ = Con(f), f′(x_1,…,x_t) = (f(x_1),…,f(x_t)). Holds for AC⁰! Some reductions don't use majority [IJKW].
We prove a tight lower bound on the number of queries: q = Ω(log(1/δ)/ε).
We show limitations on converting an f that is (1−δ)-hard for AC⁰[⊕] into a (1/n)-PRG for AC⁰[⊕] (same as the main result). Is it possible to get such a PRG? [FSUV12] beats the hybrid argument. Limitations on specific black-box constructions: [Vio18].
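A minimal sketch (mine) of the direct-product construction map, the non-Boolean analogue of the XOR construction sketched earlier:

```python
def direct_product(f, t):
    """Con: f => f' with f'(x_1, ..., x_t) = (f(x_1), ..., f(x_t))."""
    def f_prime(*blocks):
        assert len(blocks) == t
        return tuple(f(x) for x in blocks)
    return f_prime

parity = lambda x: sum(x) % 2
g = direct_product(parity, t=3)
print(g((1, 0), (1, 1), (0, 0)))   # (1, 0, 0)
```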

19 That's it…

20 Old Slides

21 Hardness amplification theorems: hard functions ⇒ harder functions
Dfn: For f, C: {0,1}^k → {0,1}, C p-agrees with f if Pr_{X←U_k}[C(X) = f(X)] ≥ p (f is p-hard for C otherwise).
Very hard functions: an explicit f that is (1/2+ε)-hard for all poly-size circuits (or for other circuit classes). Required for crypto, derandomization, etc.
Hardness amplification: a map f ⇒ f′ = Con(f) s.t. ∀f: f mildly hard (p = 1−δ) ⇒ f′ = Con(f) very hard.
δ = 0 (or δ = 2^(−2k)) captures worst-case hardness.
Hardness amplification is a conditional result.

22 Hardness amplification theorems: mildly hard functions ⇒ very hard functions
f: {0,1}^k → {0,1} is a "(1−δ)-hard function" if ∀C in circuit class C: Pr_X[C(X) = f(X)] < 1 − δ.
f′: {0,1}^k′ → {0,1} is a "(1/2+ε)-hard function" if ∀C′ in circuit class C′: Pr_X[C′(X) = f′(X)] < 1/2 + ε.
(Black-box) hardness amplification theorems consist of:
Construction map: f ⇒ f′ = Con(f).
Proof: a reduction Red^(·)(x) showing that: C′ breaks f′ ⇒ C(x) = Red^{C′}(x) breaks f.
Used all over in crypto, derandomization, …
Special case: δ = 0 ≈ 2^(−k) captures worst-case hardness.

23 Proof of fixed set lemma
Setup: Let R = (R_1,…,R_N) be uniform i.i.d. bits. Let A be an event s.t. Pr[R ∈ A] ≥ 2^(−a). Let X = (R | A). Can depth-q decision trees distinguish R from X?
Fixed set lemma: ∃B ⊆ [N], small, ∃ a value v for X_B, s.t. depth-q trees cannot distinguish (R | R_B = v) from (X | X_B = v).
Let HD(X) = |X| − H(X) ≥ 0 be the "entropy deficiency" of X.
Claim: If a depth-q tree η-distinguishes X from R, then ∃Q ⊆ [N] of size q, ∃v ∈ {0,1}^q, s.t. HD(X | X_Q = v) ≤ HD(X) − η².
The fixed set lemma follows: initially HD(X) ≤ a, so after at most a/η² steps no tree can distinguish. We fix at most qa/η² bits.

24 Proof of fixed set lemma: Proof of claim
Let HD(X) = |X| − H(X) ≥ 0 be the "entropy deficiency" of X.
Claim: If a depth-q tree η-distinguishes X from R, then ∃Q ⊆ [N] of size q, ∃v ∈ {0,1}^q, s.t. HD(X | X_Q = v) ≤ HD(X) − η².
Proof: Assume that a depth-q tree T η-distinguishes. Let I = (I_1,…,I_q) be the queries asked on X (random variables).
(X_{I_1},…,X_{I_q}) is η-far from uniform ⇒ H(X_{I_1},…,X_{I_q}) ≤ q − η² (Pinsker's lemma).
H(X) = H(X, X_{I_1},…,X_{I_q}) (I is a function of X) = H(X_{I_1},…,X_{I_q}) + H(X | X_{I_1},…,X_{I_q}) (entropy chain rule).
⇒ H(X | X_{I_1},…,X_{I_q}) ≥ H(X) − q + η².
⇒ ∃v: H(X | X_I = v) ≥ H(X) − q + η², with I fixed to some Q.
⇒ HD(X | X_Q = v) ≤ HD(X) − η².

25 Black-box hardness amplification: A pair of construction/reduction
Dfn: A non-uniform b.b. hardness amplification is a pair (Con, Red) s.t.:
Construction map: maps f ⇒ f′ = Con(f).
Red^(·)(x) is an oracle circuit s.t. ∀f, D s.t. D ε-agrees with f′ = Con(f), ∃ a "non-uniform advice string" α = α(f, D) s.t. C(x) = Red^D(x, α) is a function that (1−δ)-agrees with f. (Through α, Red gets non-black-box access to D.)
The complexity of Red governs the complexity difference between C and D:
Circuit size of Red and length of α (govern the size difference).
# of queries that Red^(·) makes (governs the size difference). Queries can be adaptive/non-adaptive.
Circuit depth of Red (governs the depth difference).

26 Proof strategy following [Vio06,SV08,GR09]
Problem: a non-uniform Red gets advice α = α(D) = α(N).
Solution: argue that Red can't distinguish N_p from (N_p | A) for a "large" event A.
Intuition: for most fixings α′, the event A = {α(N_p) = α′} is "large".
D_{1/2−ε} = Con(f) ⊕ N_{1/2−ε}: D_{1/2−ε} (1/2+ε)-agrees with Con(f) ⇒ Red^{D_{1/2−ε}} must (1−δ)-agree with f.
D_{1/2} = Con(f) ⊕ N_{1/2} = N_{1/2}: D_{1/2} gives no info on f ⇒ Red^{D_{1/2}} can't (1−δ)-agree with f.
Red can be used to distinguish N_{1/2} from N_{1/2−ε} with advantage 1−δ.
⇒ Red can be used to compute maj on length ℓ = Ω(1/ε) [SV08].
⇒ Red must make at least q = Ω(log(1/δ)/ε²) queries [SV08].

