Randomization Techniques and Parallel Cryptography
Yuval Ishai, Technion
Hello, I am Benny Applebaum and I'm very excited to be here. I'm going to talk about cryptography in NC0, joint work with Yuval Ishai and Eyal Kushilevitz.
The Basic Question
g is a "randomized encoding" of f: a decoder recovers f(x) from g(x,r), i.e. Dec(g(x,r)) = f(x), and a simulator produces the distribution of g(x,r) from f(x) alone, i.e. Sim(f(x)) ≡ g(x,r).
Variants: perfect, statistical, computational.
This is a nontrivial relaxation of computing f. Hope: g can be "simpler" than f (the meaning of "simpler" is determined by the application), so that g can be used as a substitute for f.
Our key idea is to replace the cryptographic function f with a related function g, taking the original input of f and an additional random input, such that the output of g on input x and a uniformly random r is a randomized encoding of f(x). More specifically, we require that given g(x,r) one can decode f(x), and given f(x) one can simulate the distribution g(x,r) even without knowing x. We refer to g as a randomized encoding of f. To be useful in our context, this type of encoding should be on one hand secure and on the other hand liberal enough. Note that if g is much easier than f, then moving from g to f must be complex. We now define our notion of randomized encoding in more detail.
Applications at a Glance
Randomized encodings yield: secure computation, parallel cryptography, hardness of approximation.
Rest of Tutorial
- Constructions of randomized encodings
  - Different notions of simplicity
  - Different flavors of encoding: information-theoretic, computational
- Applications
  - Secure computation
  - Parallel cryptography
Randomized Encoding - Syntax
f takes inputs x; g takes the same inputs x together with random inputs r.
A randomized encoding of a function f uses the original inputs of f as well as random inputs. For every x, the value f(x) is encoded by the distribution of g applied to x and a randomly chosen r. Typically, a randomized encoding has more outputs than the original function.
Randomized Encoding - Semantics
Correctness: for every x and r, f(x) can be efficiently decoded from g(x,r). Hence if f(x) ≠ f(w), the supports of g(x,U) and g(w,U) are disjoint.
Privacy: there is an efficient simulator Sim such that Sim(f(x)) ≡ g(x,U); that is, the distribution g(x,U) depends only on f(x) and hides any other information about x. Hence if f(x) = f(w), then g(x,U) ≡ g(w,U).
Notions of Simplicity - I
2-decomposability: g((xA,xB),r) = (gA(xA,r), gB(xB,r)).
Alice (holding xA) and Bob (holding xB) share randomness r; each sends a single message, gA(xA,r) or gB(xB,r), to Carol, who decodes f(xA,xB).
Application: the "minimal model for secure computation" [Feige-Kilian-Naor 94, …].
Example: sum
f(xA,xB) = xA + xB (xA, xB in a finite group G).
Shared randomness r ∈R G; Alice sends mA = xA + r, Bob sends mB = xB - r; Carol outputs mA + mB = xA + xB. The pair (mA, mB) is uniform subject to its sum being f(xA,xB), so it reveals nothing beyond the sum.
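A minimal Python sketch of this two-party encoding (not code from the talk; the modulus 97 and all names are illustrative choices for the group G = Z_q):

```python
import random

q = 97  # illustrative finite group: Z_97 under addition

def encode(xA, xB):
    r = random.randrange(q)          # shared randomness r in G
    mA = (xA + r) % q                # Alice's message g_A(xA, r)
    mB = (xB - r) % q                # Bob's message   g_B(xB, r)
    return mA, mB

def decode(mA, mB):
    return (mA + mB) % q             # Carol recovers the sum; the mask r cancels

# Privacy: the pair (mA, mB) is determined by mA and the sum, and mA alone
# is uniform, so a simulator given only f(x) = xA + xB can sample it.
def simulate(total):
    mA = random.randrange(q)
    return mA, (total - mA) % q

xA, xB = 13, 29
assert decode(*encode(xA, xB)) == (xA + xB) % q
```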
Example: equality
f(xA,xB) = [xA = xB] (xA, xB in a finite field F).
Shared randomness r1 ∈R F \ {0}, r2 ∈R F; Alice sends mA = r1·xA + r2, Bob sends mB = r1·xB + r2; Carol outputs 1 iff mA = mB. If xA ≠ xB, then (mA, mB) is a uniformly random pair of distinct field elements, revealing nothing beyond inequality.
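A short Python sketch of the equality encoding (the prime 101 is an illustrative choice for F = GF(p)):

```python
import random

p = 101  # illustrative prime field GF(101)

def encode(xA, xB):
    r1 = random.randrange(1, p)      # r1 uniform in F \ {0}
    r2 = random.randrange(p)         # r2 uniform in F
    return (r1 * xA + r2) % p, (r1 * xB + r2) % p

def decode(mA, mB):
    return mA == mB                  # Carol just tests equality of the messages

# Correctness: equal inputs give equal messages; unequal inputs give
# distinct messages, since mA - mB = r1*(xA - xB) != 0.
assert decode(*encode(5, 5)) is True
assert decode(*encode(5, 6)) is False
```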
Example: ANY function
f(xA,xB) = xA ∧ xB (xA, xB ∈ {0,1}). Reduction to equality: map xA ↦ 1/0 and xB ↦ 2/0; the two encoded values collide on exactly one of the four input pairs, so AND reduces to an equality test (after fixing the polarity of the encodings appropriately).
General boolean f: write f as a disjoint 2-DNF, f(xA,xB) = ⋁_{(a,b): f(a,b)=1} (xA = a ∧ xB = b) = t1 ∨ t2 ∨ … ∨ tm.
Drawback: exponential complexity in general (one term per accepting input pair).
Notions of Simplicity - II
Full decomposability: g((x1,…,xn),r) = (g1(x1,r), …, gn(xn,r)); every component depends on a single input bit (and the randomness).
Application: basing SFE on OT [Kilian 88, …]. Alice, holding r, offers the pair (gi(0,r), gi(1,r)) for each of Bob's input bits xi; via OT, Bob learns only gi(xi,r), and from the components of g he decodes f(xA,xB). (Handling a dishonest Alice requires additional care.)
Example: iterated group product
Abelian case: f(x1,…,xn) = x1 + x2 + … + xn.
g(x,(r1,…,rn-1)) = (x1+r1, x2+r2, …, xn-1+rn-1, xn-r1-…-rn-1).
General case [Kilian 88]: f(x1,…,xn) = x1·x2⋯xn.
g(x,(r1,…,rn-1)) = (x1r1, r1⁻¹x2r2, r2⁻¹x3r3, …, rn-2⁻¹xn-1rn-1, rn-1⁻¹xn); the product of the components telescopes to f(x).
So we showed that randomized encoding preserves cryptographic hardness. It remains to show how to encode relatively complex functions by NC0 functions.
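A Python sketch of the abelian case over Z_q (illustrative modulus; not code from the talk). Each party masks its own summand, and the last component absorbs all the masks:

```python
import random

q = 97  # illustrative group Z_97

def encode(xs):
    n = len(xs)
    rs = [random.randrange(q) for _ in range(n - 1)]
    out = [(xs[i] + rs[i]) % q for i in range(n - 1)]
    out.append((xs[-1] - sum(rs)) % q)   # xn - r1 - ... - r_{n-1}
    return out

def decode(out):
    return sum(out) % q                  # the masks cancel in the total

xs = [3, 1, 4, 1, 5]
assert decode(encode(xs)) == sum(xs) % q
# Privacy: the first n-1 components are uniform and independent, and the
# last is determined by them together with f(x) = sum(xs).
```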
Example: iterated group product (contd.)
Thm [Barrington 86]: every boolean f ∈ NC1 can be computed by a poly-length, width-5 branching program.
Hence f(x1,…,xn) reduces to evaluating a product σ1σ2⋯σm where each σi ∈ S5 depends on a single input bit xj, and there are distinct permutations α0, α1 ∈ S5 such that σ1σ2⋯σm = α_{f(x)}.
Encoding the iterated group product: σ1σ2σ3⋯σm ↦ (σ1r1, r1⁻¹σ2r2, r2⁻¹σ3r3, …, rm-1⁻¹σm).
Every output of g depends on just a single bit of x (and on the randomness), giving an efficient fully decomposable encoding for every f ∈ NC1.
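A Python sketch of the masked non-abelian product in S5 (my own illustrative implementation: permutations are tuples, and `compose(p, q)` applies q first). The product of the masked components telescopes back to σ1σ2⋯σm:

```python
import random
from itertools import permutations

S5 = list(permutations(range(5)))    # the symmetric group on 5 elements

def compose(p, q):
    # composition p∘q: apply q, then p
    return tuple(p[q[i]] for i in range(5))

def inverse(p):
    inv = [0] * 5
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def encode(sigmas):
    # outputs (s1 r1, r1^-1 s2 r2, ..., r_{m-1}^-1 sm); requires m >= 2
    m = len(sigmas)
    rs = [random.choice(S5) for _ in range(m - 1)]
    out = [compose(sigmas[0], rs[0])]
    for i in range(1, m - 1):
        out.append(compose(compose(inverse(rs[i - 1]), sigmas[i]), rs[i]))
    out.append(compose(inverse(rs[-1]), sigmas[-1]))
    return out

def decode(out):
    prod = out[0]
    for p in out[1:]:
        prod = compose(prod, p)      # the r_i r_i^-1 pairs cancel
    return prod

sigmas = [random.choice(S5) for _ in range(6)]
expected = sigmas[0]
for s in sigmas[1:]:
    expected = compose(expected, s)
assert decode(encode(sigmas)) == expected
```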
Notions of Simplicity - III
Low degree: g(x,r) is a vector of degree-d polynomials in (x,r) over F, aka "randomizing polynomials" [Ishai-Kushilevitz 00, …].
Application: round-efficient MPC. Motivating observation: low-degree functions are easy to distribute!
Round complexity of MPC protocols [BGW88, CCD88, CDM00, …]:
- Semi-honest model: t < n/d in 2 rounds; t < n/2 in multiplicative depth + 1 = log d + 1 rounds.
- Malicious model: optimal t in O(log d) rounds.
Examples
What's wrong with the previous examples? Great degree in x (deg_x = 1), but bad degree in r (e.g., the random group elements r ∈R S5 in the encoding (σ1r1, r1⁻¹σ2r2, …, rm-1⁻¹σm)).
Coming up: a degree-3 encoding for every f, efficient in the size of its branching program.
Notions of Simplicity - IV
Small locality: every output bit of g depends on a constant number of bits of (x,r).
Application: parallel cryptography! [Applebaum-Ishai-Kushilevitz 04, …]
Coming up: encodings with locality 4, degree 3, fully decomposable, and efficient in the size of the branching program.
Parallel Cryptography
How low can we get? poly-time ⊇ NC ⊇ log-space ⊇ NC1 ⊇ AC0 ⊇ NC0
Cryptography in NC0? A longstanding open question
[Håstad 87], [Impagliazzo-Naor 89], [Goldreich 00], [Cryan-Miltersen 01], [Krause-Lucks 01], [Mossel-Shpilka-Trevisan 03]
Real-life motivation: super-fast cryptographic hardware.
Tempting conjecture: cryptographic hardness requires a "complex" function ([CM]: yes; [G]: no).
Our basic motivation is to study the minimal computational resources needed to compute cryptographic primitives such as OWFs and PRGs. An extreme form of this question, which emerged in the 80's and was later studied in several works, is whether it is possible to go all the way down to NC0: the class of functions computable by constant-depth, bounded fan-in circuits, or equivalently those in which each output bit depends on a constant number of input bits. We refer to this constant as the locality of the function.
Main Primitives
OWF: f is computable in polynomial time, but given y = f(U_in) it is hard to find x ∈ f⁻¹(y) in polynomial time.
PRG: f expands its input, and its output f(U_in) cannot be distinguished from truly random bits U_out in polynomial time ("pseudorandom or random?").
Previous Work
Positive results:
- PRG in NC1, TC0 from factoring, discrete log, lattices, …
- PRF in TC0 from number-theoretic assumptions [Naor-Reingold 97]
- Low-stretch PRG in AC0 from subset sum [Impagliazzo-Naor 89]
- [Goldreich 00] conjectured a OWF in NC0
Negative results:
- No PRF in AC0 [Linial-Mansour-Nisan 89]
- No PRG or OWF in NC0_2 [Goldreich 00, Cryan-Miltersen 01]
- PRG in NC0_3, NC0_4: low stretch only [CM01, Mossel-Shpilka-Trevisan 03]
The existence of a OWF or PRG in NC0 remained open.
On the negative side, pseudorandom functions cannot be computed even in AC0, and therefore not in NC0; and there are no pseudorandom generators or one-way functions whose output bits each depend on only 2 input bits. It was later proved that pseudorandom generators with locality 3 or 4 cannot stretch their seed by a superlinear number of bits. (A very recent result shows that an AC0 pseudorandom generator with superlinear stretch cannot be obtained from a one-way function via black-box constructions.) On the positive side, pseudorandom generators in NC1 can be constructed under most standard intractability assumptions, pseudorandom functions in TC0 under number-theoretic assumptions, and pseudorandom generators in AC0 under subset sum (which is considered weaker). Goldreich conjectured that there exists a one-way function in NC0 and proposed a candidate that does not follow from any previously known assumption. So despite these efforts, the question of NC0 cryptography remained open.
Surprising Positive Result [AIK04]
Compile primitives in a "relatively high" complexity class (e.g., NC1, NL/poly, ⊕L/poly) into ones in NC0 with locality 4. NC1 cryptography is implied by factoring, discrete log, lattices, …, so this essentially settles the open question.
Our main result is that any one-way function or pseudorandom generator in a relatively high complexity class containing NC1 can be efficiently "compiled" into a corresponding primitive in which each output bit depends on only 4 input bits. Since one-way functions and pseudorandom generators in NC1 follow from most standard intractability assumptions, such as factoring, lattices and discrete log, this settles the question of cryptography in NC0 affirmatively, unless one rejects the hardness of factoring, lattices, discrete log, and so on. It turns out that high locality is not essential for cryptographic hardness after all.
Encoding a OWF
Thm. If f(x) is a OWF, then its randomized encoding g(x,r) is a OWF.
Proof: an efficient inverter B for g yields an efficient inverter A for f. Given y ∈R f(U_n), A computes z = Sim(y), runs B on z to obtain (x,r) with g(x,r) = z, and outputs x.
- A generates the correct input distribution for B: by privacy, Sim(f(U_n)) ≡ g(U_n, U_m), so z is distributed as g(U_n, U_m) and B succeeds with its promised probability p.
- A succeeds whenever B succeeds: by correctness, Dec(g(x,r)) = f(x) and Dec(z) = Dec(Sim(y)) = y, so if (x,r) is a preimage of z under g, then x is a preimage of y under f.
We will now show how randomized encoding preserves security, starting with one-way functions: functions that are easy to compute but hard to invert. Altogether we get an efficient adversary A for f with the same success probability as B.
Encoding a PRG
Want: f(x) is a PRG ⇒ g(x,r) is a PRG. Problems:
- the output of g may not be pseudorandom;
- g may shrink its input: the stretch of g is typically sublinear even if that of f is superlinear.
Solution: a "perfect" randomized encoding respects pseudorandomness, additive stretch, and more; most (though not all) known constructions give perfectness for free.
Additional Cryptographic Primitives
The general compiler also applies to: one-way / trapdoor permutations, collision-resistant hashing, public-key / symmetric encryption, signatures / MACs, commitments, …
Caveat: decryption / verification are not in NC0. But: one can commit in NC0 with decommitment in NC0[AND]. Applications: coin flipping, zero knowledge, …
Our reductions also apply to other cryptographic primitives such as one-way permutations, trapdoor permutations, encryption, signatures, and many more. Interestingly, different primitives require different properties from the encodings. For encryption and signature schemes we obtain NC0 encryption and signing algorithms but do not improve the parallel-time complexity of decryption and verification (in some cases these provably cannot be in NC0). Since there are no PRFs in NC0, it is worth noting where the reduction fails: our encodings promise privacy only when the random inputs are chosen uniformly, which is not the case for a PRF, where the adversary can query the function at any point of its choice.
Non-cryptographic PRGs
ε-biased generators: [Mossel-Shpilka-Trevisan 03] achieve superlinear stretch in NC0_5; using randomized encoding we obtain linear stretch in NC0_3, with optimal locality and stretch. Also: PRGs for space-bounded computation.
NC0 ε-biased generators were studied by Mossel, Shpilka, and Trevisan. Using our reductions we construct a simple ε-biased generator with sublinear stretch in uniform NC0_3; since there are no ε-biased generators in NC0_2, its locality is optimal. Using the MST generator we construct a linear-stretch ε-biased generator in non-uniform NC0_3; since ε-biased generators in NC0_3 cannot have superlinear stretch, its stretch is also optimal. These results solve some open questions posed by Mossel et al. We also construct the first log-space robust pseudorandom generator.
Remaining Challenge
How to encode a "complex" f by g ∈ NC0? Coming up…
Observation: it is enough to obtain a constant-degree encoding.
Locality reduction: degree-3 polynomial over GF(2) → locality-4 randomized encoding. Write f(x) = T1(x) ⊕ T2(x) ⊕ … ⊕ Tk(x) as a sum of degree-3 terms and encode
g(x,r,s) = (T1(x)+r1, T2(x)+r2, …, Tk(x)+rk, -r1+s1, -s1-r2+s2, …, -s_{k-1}-rk).
Summing all outputs recovers f(x), since the masks cancel; each output depends on at most 4 bits (the three variables of one term plus one random bit, or up to three random bits).
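A Python sketch of the locality reduction over GF(2) (my own illustrative implementation; the terms are given as precomputed bit values, since only the output structure matters here). Over GF(2) minus equals plus, so each mask appears an even number of times and the XOR of all outputs equals f(x):

```python
import random

def encode(terms):
    # terms: bit values T1(x), ..., Tk(x) of degree-<=3 terms; requires k >= 2
    k = len(terms)
    r = [random.randrange(2) for _ in range(k)]
    s = [random.randrange(2) for _ in range(k - 1)]
    out = [t ^ r[i] for i, t in enumerate(terms)]   # Ti(x) + ri (locality <= 4)
    out.append(r[0] ^ s[0])                         # -r1 + s1
    for i in range(1, k - 1):
        out.append(s[i - 1] ^ r[i] ^ s[i])          # -s_{i-1} - r_{i+1} + s_{i+1}
    out.append(s[-1] ^ r[-1])                       # -s_{k-1} - rk
    return out

def decode(out):
    acc = 0
    for b in out:
        acc ^= b        # every ri and si occurs exactly twice, so they cancel
    return acc

terms = [1, 0, 1, 1]    # values of four degree-<=3 terms
assert decode(encode(terms)) == 1 ^ 0 ^ 1 ^ 1
```

Note the counting argument from the talk: 2k outputs and 2k-1 random bits, so the outputs are uniform subject to XORing to f(x).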
3 Ways to Degree 3
1. Degree-3 encoding using a circuit representation.
Example circuit on inputs x1,…,x5 with internal wires y1, y2, y3:
y1 = NAND(x1, x2) = x1(1-x2) + (1-x1)x2 + (1-x1)(1-x2)
y2 = NAND(x3, x4)
y3 = NAND(y1, y2)
1 = NAND(y3, x5)
f(x) = 1 ⟺ ∃ y1, y2, y3 satisfying all four equations. Note: the satisfying y1, y2, y3 are unique (∃!).
Using circuit representation (contd.)
Constraints q1(x,y) = 0, …, qs(x,y) = 0, each of degree 2 (wire consistency and the output condition).
Encoding: g(x,y,r) = Σ ri·qi(x,y), of degree 3, with y and r as random inputs.
- If f(x) = 0, no y satisfies all constraints, so g(x,y,r) is uniform.
- If f(x) = 1, then g(x,y,r) ≡ 0 given y = y0 (the correct wire assignment); otherwise it is uniform.
Statistical distance amplified to 1/2 by 2^Θ(s) repetitions.
Pros: works over any field. Cons: complexity exponential in circuit size.
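A Python sketch of this encoding for the NAND circuit from the previous slide (my own illustrative implementation; the constraints are evaluated as boolean predicates, whose values agree with the degree-2 polynomials). Enumerating all r makes the key dichotomy deterministic: g is identically 0 exactly when all constraints hold, and otherwise g = 1 for exactly half the choices of r:

```python
from itertools import product

def nand(a, b):
    return 1 ^ (a & b)

def constraints(x, y):
    x1, x2, x3, x4, x5 = x
    y1, y2, y3 = y
    return [y1 ^ nand(x1, x2),      # wire-consistency constraints
            y2 ^ nand(x3, x4),
            y3 ^ nand(y1, y2),
            1 ^ nand(y3, x5)]       # output constraint: f(x) = 1

def g(x, y, r):
    # random GF(2)-combination of the constraints
    return sum(ri & qi for ri, qi in zip(r, constraints(x, y))) % 2

for x in product((0, 1), repeat=5):
    for y in product((0, 1), repeat=3):
        ones = sum(g(x, y, r) for r in product((0, 1), repeat=4))
        # all constraints satisfied -> g == 0 always; else uniform (8 of 16)
        assert ones == (0 if all(q == 0 for q in constraints(x, y)) else 8)
```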
2. Degree-3 encoding using quadratic characters
Fact from number theory: let N = 2^n, let b be the length-N truth table of f, and let F = GF(q). Define p(x1,…,xn, r) = …
Pros: one polynomial. Cons: huge field size.
3. Perfect Degree-3 Encoding from Branching Programs
BP = (G, s, t, edge-labeling); Gx = the subgraph of G induced by x.
mod-q BP: f(x) = # of s-t paths in Gx (mod q); size = # of vertices.
circuit-size ≤ BP-size ≤ formula-size.
Boolean case: q = 2; captures the complexity class ⊕L/poly.
Perfect Degree-3 Encoding of BPs
BP = (G, s, t, edge-labeling); Gx = subgraph induced by x; mod-q BP: f(x) = # of s-t paths in Gx mod q.
Lemma: there is a degree-1 mapping L : x ↦ L(x), where L(x) is a matrix of dimension ≈ size(BP) with -1's on the subdiagonal and degree-1 entries above it, such that det(L(x)) = f(x).
Encoding based on the Lemma: g(x,r1,r2) = R1(r1)·L(x)·R2(r2), where R1(r1) and R2(r2) are random matrices of determinant 1 (R1 unit upper triangular with random entries above the diagonal; R2 with 1's on the diagonal and random entries only in its last column).
Correctness: det R1 = det R2 = 1, so det g(x,r1,r2) = det L(x) = f(x).
Privacy: multiplication by R1 and R2 maps L(x) to a canonical distribution that depends only on det L(x).
Proof of Lemma
Lemma: there is a degree-1 mapping L : x ↦ L(x) s.t. det(L(x)) = f(x).
Proof: let A(x) be the adjacency matrix of Gx (over F = GF(q)). Since Gx is acyclic, A is nilpotent, so
A* = I + A + A² + … = (I-A)⁻¹,
and A*_{s,t} is the number of s-t paths in Gx. Since I-A is unit upper triangular, det(I-A) = 1, and by Cramer's rule
A*_{s,t} = (-1)^{s+t} det((I-A)|_{t,s}) / det(I-A) = det((A-I)|_{t,s}) (up to sign; signs vanish over GF(2)),
where M|_{t,s} denotes M with row t and column s deleted. Take L(x) = (A(x)-I)|_{t,s}: its entries are degree-1 in x and det(L(x)) = f(x).
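A Python sketch checking the proof's two ingredients on a tiny DAG (my own example, not one from the talk): the truncated series I + A + A² + … counts s-t paths, and over GF(2) the path count equals the determinant of (A - I) with row t and column s deleted:

```python
n, s, t = 4, 0, 3
edges = [(0, 1), (0, 2), (0, 3), (1, 3), (2, 3)]
A = [[0] * n for _ in range(n)]
for u, v in edges:
    A[u][v] = 1

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# A is nilpotent (the graph is acyclic), so the series A* = I + A + A^2 + ...
# is finite and counts paths by length.
Astar = [[int(i == j) for j in range(n)] for i in range(n)]
P = [row[:] for row in Astar]
for _ in range(n):
    P = matmul(P, A)
    Astar = [[Astar[i][j] + P[i][j] for j in range(n)] for i in range(n)]

assert Astar[s][t] == 3              # three s-t paths: 0-1-3, 0-2-3, 0-3

def det_gf2(M):
    # determinant over GF(2) by Gaussian elimination
    M = [row[:] for row in M]
    m = len(M)
    for col in range(m):
        piv = next((r for r in range(col, m) if M[r][col] % 2), None)
        if piv is None:
            return 0
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, m):
            if M[r][col] % 2:
                M[r] = [(a + b) % 2 for a, b in zip(M[r], M[col])]
    return 1

L = [[(A[i][j] - int(i == j)) % 2 for j in range(n) if j != s]
     for i in range(n) if i != t]    # (A - I) with row t, column s deleted
assert det_gf2(L) == Astar[s][t] % 2
```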
Thm. A size-s BP has a degree-3 encoding of size O(s²):
- perfect for mod-q BPs (capturing ⊕L/poly for q = 2);
- imperfect for nondeterministic BPs (capturing NL/poly).
Our main machinery is randomizing polynomials, introduced by Ishai and Kushilevitz in the context of information-theoretically secure multiparty computation; they capture randomized encoding within an algebraic framework. Each output bit is viewed as a polynomial, in our case over GF(2), and the main complexity measure is the algebraic degree. Surprisingly, every function that admits a polynomial-size branching program representation can be encoded by degree-3 randomizing polynomials of polynomial size. Polynomial-size branching programs capture variants of log-space computation such as NL/poly and ⊕L/poly, which in particular include NC1.
Round complexity of information-theoretic MPC in the semi-honest model
The secure evaluation of an arbitrary function reduces to the secure evaluation of degree-3 polynomials.
- How many rounds for maximal privacy? 3 rounds suffice.
- How much privacy in 2 rounds? t < n/3 suffices.
Perfect privacy and correctness; complexity O(BP-size²).
Is 3 minimal?
Thm. [IK00] A boolean function f admits a perfectly private degree-2 encoding over F if and only if either:
- f or its negation tests a linear condition Ax = b over F; or
- f admits a standard representation by a degree-2 polynomial over F.
Wrapping Up
Composition Lemma: if h encodes g and g encodes f, then h encodes f.
Concatenation Lemma: if g(1) encodes f(1), …, g(l) encodes f(l), then the concatenation (g(1),…,g(l)) encodes (f(1),…,f(l)).
It remains to convert low degree into low locality: low-degree polynomials may have many terms and therefore high locality. The key idea is to view a degree-3 polynomial as a sum of terms, each of locality 3, and then randomize this sum. Each term yields two output bits with two fresh random variables r and s (the last term gets only one). Correctness: f(x) is decoded from g(x,r,s) by summing the output bits; the random variables cancel, leaving the original sum. Privacy: the outputs are uniform subject to this constraint, because g has 2k outputs and 2k-1 random variables, and each of the first 2k-1 output bits has an exclusive random variable, so for every output y consistent with f(x) there is a unique setting of the randomness with g(x,r,s) = y. The resulting function indeed has locality 4.
From Branching Programs to Locality 4
Pipeline: poly-size BPs f(1), f(2), …, f(l) → [BP encoding] degree-3 g(1), g(2), …, g(l) → [locality reduction] locality-4 h(1), h(2), …, h(l) → [composition + concatenation] h in NC0_4.
The big picture: start with a function each of whose output bits is represented by a polynomial-size BP. Encode each of these boolean functions by a degree-3 encoding via randomizing polynomials, then by locality-4 functions via the locality reduction. By the composition lemma, if h encodes g and g encodes f, then h encodes f; and by the concatenation lemma, the concatenation of the encodings of the output-bit functions encodes the original non-boolean function. This yields a locality-4 encoding of f. All these steps also respect the "nicer" (perfect) encodings.
Computationally Private Encodings
Known: f ∈ NC1, ⊕L ⇒ encoding in NC0. Goal: f ∈ P ⇒ encoding in NC0.
Idea: relax the encoding requirement to computational privacy, Sim(f(x)) ≈_comp g(x,r); this still respects the security of most primitives.
Thm: every f ∈ P has a computational encoding in NC0, assuming an "easy PRG" (a minimal PRG in ⊕L). An "easy PRG" can be based on factoring, discrete log, or lattices.
The main open question is to base NC0 cryptography on more general assumptions; for example, does the existence of a OWF computable in polynomial time imply the existence of an NC0 OWF? Such questions motivate further study of the complexity of randomized encoding. At a lower level, it is interesting to ask what exact locality can be achieved: locality 2 is impossible, locality 4 is possible under general assumptions, and for OWFs a locality-3 construction is known under specific assumptions, so the gap is very small and it would be nice to close it.
39
Tool: Yao’s Garbled Circuit [Yao86]
[Diagram: input bits x1, x2, x3, x4; each input wire carries a pair of keys Ki,0, Ki,1.]
In the second and main step, we use a one-time symmetric encryption scheme to encode a polynomial-time computable function. This encoding is computed by an NC0 circuit with oracle gates to the encryption algorithm. [Click] The construction uses a variant of Yao's garbled circuit technique, introduced in the context of secure computation. Roughly speaking, a garbled circuit allows computing the values of its output wires while hiding any other information about the internal wires — exactly what we require from a randomized encoding. We use a specific variant of this technique; the full details and the security proof are quite tedious, but here is the basic intuition. [click] We associate a pair of random keys with each wire: one key represents the 0-bit, the other represents the 1-bit, and the two keys have different colors. We shuffle the keys, [click] so even someone who obtains a key cannot tell which bit it represents. Consider an AND gate, and suppose someone holds the 1-keys of its input wires — say, the green key and the red key. We would like to let him compute the 1-key of the gate's output wire (the black key) without learning the semantics of that key. To do this, we encrypt each key of the output wire under the XOR of the corresponding keys of the input wires. This way, holding the green key (which represents 1) and the red key (which also represents 1), I can open the red-green envelope and get the black key, but I cannot open any other envelope of this gate. Since I do not know the semantics of the keys I hold, I actually learn nothing but the values of random keys. In fact, I do want to learn something — the values of the circuit's output wires — so we do not shuffle the keys of the output wires: when I get these keys, I know their semantics, which is the output of the circuit. When a gate has more than one outgoing wire, we encrypt all the corresponding keys of these wires under the same input keys, so we need an encryption scheme that handles messages polynomially longer than the key. This is indeed an NC0 reduction, since the inputs to the encryptions are just random strings or bitwise XORs of two random strings.
Gives rise to a randomized encoding: g(x, (ki,b, r)) = ((ki,xi)i=1..n, garbled circuit)
40
Garbled Circuit Construction
Pair of randomly colored keys for each wire
For each input wire, the key corresponding to its value is revealed
Color semantics of output wires are revealed
Garbled gates: each output-wire key is encrypted ("locked") under the corresponding pair of input-wire keys
41
Garbled Circuit Construction
Implementing locks:
(one-time) symmetric encryption → computational privacy; works for any circuit
one-time pads → information-theoretic privacy; efficient only for log-depth circuits
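A single garbled AND gate can be sketched in a few lines. This is a toy illustration, not the talk's exact construction: the hash-based one-time "lock", the zero-byte validity tag, and all names (`garble_and`, `KEYLEN`, …) are my own choices; only the structure — a key pair per wire, output keys locked under the XOR of input keys, a shuffled table — follows the description above.

```python
import hashlib, itertools, os, random

KEYLEN, TAG = 16, bytes(4)   # key size; tag marks a valid decryption

def enc(ka, kb, msg):
    # one-time lock keyed by the XOR of the two input keys (as in the talk);
    # Enc is its own inverse since it is an XOR pad
    k = bytes(a ^ b for a, b in zip(ka, kb))
    pad = hashlib.sha256(k).digest()[:len(msg)]
    return bytes(p ^ m for p, m in zip(pad, msg))

def garble_and():
    # a pair of random keys per wire: index b holds the "b-key"
    wa, wb, wc = ([os.urandom(KEYLEN) for _ in range(2)] for _ in range(3))
    # for each input combination, lock the right output key in an "envelope"
    table = [enc(wa[a], wb[b], wc[a & b] + TAG)
             for a, b in itertools.product((0, 1), repeat=2)]
    random.shuffle(table)                  # hide which envelope is which
    return wa, wb, wc, table

def evaluate(ka, kb, table):
    # exactly one envelope opens to a plaintext ending with the tag
    # (a wrong envelope passes only with probability 2^-32)
    for ct in table:
        pt = enc(ka, kb, ct)
        if pt.endswith(TAG):
            return pt[:KEYLEN]

# Holding the a-key and the b-key, the evaluator learns only the (a AND b)-key:
wa, wb, wc, table = garble_and()
for a, b in itertools.product((0, 1), repeat=2):
    assert evaluate(wa[a], wb[b], table) == wc[a & b]
```

Since the evaluator never learns which bit a recovered key represents (until the unshuffled output wires), it learns nothing beyond the gate's output key, matching the intuition in the notes.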
42
Thm. "easy PRG" ⇒ encoding in NC0 for all f ∈ P [AIK04]
f ∈ P
→ g ∈ NC0[one-time symmetric encryption]  (Yao garbled circuit)
→ g ∈ NC0[min-PRG]  (one-time symmetric encryption ∈ NC0[min-PRG])
→ g ∈ L  (easy PRG: min-PRG ∈ L)
→ h ∈ NC04
43
App 1: Relaxed Assumptions for Crypto in NC0
Using the encodings:
perfect encodings: OWF, OWP, PRG, Hash, Sym-Enc, PK-Enc, Signature, Commit, NIZK that exist in L also exist in NC0
computational encodings, assuming "easy PRG": Sym-Enc, PK-Enc, Signature, Commit, NIZK that exist also exist in NC0
44
App 2: Parallel Reductions Between Primitives
Thm. All of these primitives are equivalent under poly-time reductions [Blum-Micali 82, Yao 82, Levin 85, Goldreich-Krawczyk-Luby 88, Håstad-Impagliazzo-Levin-Luby 90, Goldreich-Micali 84, Goldreich-Goldwasser-Micali 84, Goldwasser-Micali-Rivest 84, Bellare-Micali 88, Naor-Yung 89, Rompel 90, Naor 89, Impagliazzo-Luby 89, …]. What about NC reductions? Much less is known…
[Diagram of new parallel reductions between primitives: "regular" OWF, min-PRG, PRG, Synthesizer, PRF, Sym-Enc, Commit, Signature, OWF; edges labeled NC0/NC1 and attributed to HILL, Viola, AIK, NR, Naor.]
Proof: given the code of a min-PRG, construct f ∈ P[min-PRG] via the known reduction, then use the code of f to construct g ∈ NC0[min-PRG]. Note: a non-black-box reduction!
45
App 3: Secure Multiparty Computation
In case you don’t insist on unconditional security… Securely evaluating an arbitrary function f efficiently reduces to securely evaluating deg-3 polynomials … assuming an “easy PRG” In particular: Basic MPC protocols (e.g., BGW) imply constant-round computationally secure MPC for every f. Known assuming any PRG [BMR90,DI05]; however, current approach is simpler and can be made more efficient [DI06].
46
Crypto in NC0 and Inapproximability
k-Constraint Satisfaction Problem: a list of constraints over n variables x1,…,xn, each constraint involving k variables. Example:
x1 + x3x5 = 0
x2x3x4 = 1
⋮
x2 + x3 + x4 = 1
Q: how many of the constraints can be satisfied together?
Corollary of the PCP theorem [ALMSS, AS92, Din06]: if P ≠ NP, then k-CSP cannot be approximated better than some multiplicative constant. [AIK06]: if there is a linear-stretch PRG in NC0, then k-CSP cannot be approximated better than some multiplicative constant.
Our reductions can also be applied to other cryptographic primitives, such as one-way permutations, trapdoor permutations, encryption, signatures, and many more. It is interesting to note that different primitives require different properties from the encodings. For encryption and signature schemes, we construct NC0 encrypting or signing algorithms, but do not improve the parallel-time complexity of the decrypter or the verifier (in fact, in some cases it can be shown that these parties cannot be in NC0). Since we know there are no PRFs in NC0, it is interesting to understand where our reduction fails: the encoding promises privacy only when the random variables are chosen uniformly, which is not the case for a PRF, since the adversary can query the function at any desired point.
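For small instances, the question on the slide can be answered by brute force. A toy sketch, under my reading of the example constraints ("+" as XOR over GF(2), juxtaposition as AND, variables 1-indexed):

```python
import itertools

# the three example constraints above, as predicates on an assignment x[1..5]
constraints = [
    lambda x: (x[1] ^ (x[3] & x[5])) == 0,
    lambda x: (x[2] & x[3] & x[4]) == 1,
    lambda x: (x[2] ^ x[3] ^ x[4]) == 1,
]

def max_sat(constraints, n):
    """Largest number of constraints satisfiable by a single assignment."""
    return max(sum(c((None,) + bits) for c in constraints)   # x[0] unused
               for bits in itertools.product((0, 1), repeat=n))

print(max_sat(constraints, 5))   # this particular toy system is fully satisfiable
```

For example, x = (0, 1, 1, 1, 0) satisfies all three constraints, so `max_sat` returns 3 here; the hardness results above say that *approximating* this maximum for general k-CSP instances is hard.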
47
Suppose we have a .99-approximation alg. A for k-CSP.
[Diagram: G maps x1,…,xn to G1(x),…,Gm(x); each output bit has locality k.]
We break G as follows. Given y = (y1,…,ym):
Run A on the k-CSP instance "Gi(x) = yi", i = 1,…,m.
Output "pseudorandom" iff A's answer is ≥ .99m.
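A toy end-to-end version of this distinguisher can be run directly. Here the "approximation algorithm" A is simply an exact brute-force max-CSP solver, and G is a made-up locality-3 linear map, not a real PRG; all names and parameters are my own:

```python
import itertools, random

N, M = 4, 8                                   # tiny: n input bits, m output bits
rng = random.Random(1)
TRIPLES = [tuple(rng.sample(range(N), 3)) for _ in range(M)]

def G(x):
    # each output bit reads 3 fixed input bits: locality k = 3
    return tuple(x[a] ^ x[b] ^ x[c] for a, b, c in TRIPLES)

def A(y):
    # exact solver for the k-CSP instance "G_i(x) = y_i, i = 1..m":
    # the largest number of constraints one assignment can satisfy
    return max(sum(gi == yi for gi, yi in zip(G(x), y))
               for x in itertools.product((0, 1), repeat=N))

def distinguish(y):
    return A(y) >= 0.99 * M                   # "pseudorandom" iff >= .99m

# Every output of G satisfies all m constraints, so it always passes,
# while at most 2^N of the 2^M strings can pass: a large advantage.
assert all(distinguish(G(x)) for x in itertools.product((0, 1), repeat=N))
passing = sum(distinguish(y) for y in itertools.product((0, 1), repeat=M))
assert passing <= 2 ** N
```

The point of the reduction is the contrapositive: since a good approximation algorithm for k-CSP yields such a distinguisher, a secure NC0 PRG with linear stretch implies that no such algorithm exists.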
48
On Linear-Stretch PRGs in NC0
Can be constructed based on a previous assumption of Alekhnovich related to the hardness of decoding certain error-correcting codes [AIK06] ⇒ an elementary proof of hardness of approximation! However: stronger hardness-of-approximation results based on the same assumption were already proved implicitly in [Alekhnovich 03] (following [Feige 02]). Hope: construct a linear-stretch PRG based on more standard assumptions, and strengthen the hardness-of-approximation results.
49
Summary Different flavors of randomized encoding
Motivated by different applications:
Secure computation
Parallel cryptography
Hardness of approximation?
"Simplest" encodings: outputs of the form xirjrk + rh
Efficient perfect/statistical encodings for various complexity classes (NC1, NL/poly, ModqL/poly): an algebraic approach, and a combinatorial approach (information-theoretic garbled circuit)
Efficient computationally private encodings for all of P, assuming an "easy PRG"
50
Open Questions Randomized encoding MPC Parallel crypto
Randomized encoding: poly-size NC0 encoding for every f ∈ P? Better encodings?
MPC: unconditionally secure constant-round protocols for every f ∈ P? Maximal privacy with minimal interaction? Better constant-round protocols?
Parallel crypto: OWF ⇒ OWF in NC0? OWF in NC1 ⇒ OWF in NC03 (locality 3 for every f)? Practical hardware?