Cryptographic Protocols 2014, Lecture 2: Assumptions and Reductions. Helger Lipmaa, University of Tartu, Estonia
Motivation. Assume Alice designs a protocol. How can she make sure it is secure? Approach 1: proof by intimidation ("don't you trust me?"), also known as proof by "tautology": Theorem. Assume Protocol X is secure. Then Protocol X is secure.
Motivation. Approach 2 (much better): prove that the protocol is secure. Problem: we only know how to do this for a small number of protocols; it would require major advances in complexity theory, since it is not known how to prove that any function takes more than a linear number of steps to compute. This is unconditional security.
Motivation. Approach 3 (mostly correct): make an assumption, i.e., assume that some well-known problem (say, factoring) is hard, and prove that if that assumption holds, then your protocol is secure. This is computational security.
Security verification.
Proof by intimidation: the protocol designer's task is simpler (no need to prove anything); the security verifier must spend years cryptanalyzing OR trust the protocol.
Proof by reduction: the designer's task is more complex (must reduce security to some assumption); the verifier only has to verify the usually short reduction and trust the assumption.
Some known assumptions. Factoring and friends: "The problem of distinguishing prime numbers from composite numbers and of resolving the latter into their prime factors is known to be one of the most important and useful in arithmetic." (Carl Friedrich Gauss) Discrete logarithm and friends. Lattice assumptions. Coding-theoretic assumptions. And many more: many protocols, many assumptions.
Some assumptions: Factoring, RSA, Strong RSA, Discrete Log, CDH, DDH, SVP, CVP, LWE, RLWE, gapSVP, DCRA, various pairing assumptions.
Some assumptions, by underlying mathematical structure: number-theoretic over ℤ_n (Factoring, RSA, Strong RSA, DCRA); number-theoretic over finite cyclic groups (Discrete Log, CDH, DDH, various pairing assumptions); lattice-based (SVP, CVP, LWE, RLWE, gapSVP).
Some assumptions, by security against quantum computers: insecure (the number-theoretic assumptions: Factoring, RSA, Strong RSA, DCRA, Discrete Log, CDH, DDH, pairing assumptions); secure(?) (the lattice assumptions: SVP, CVP, LWE, RLWE, gapSVP).
Some assumptions, by strength and familiarity: they range from weaker/well-known (more assurance) to stronger/less known (often more efficient protocols).
Choice of assumption: a tradeoff. More security assurance or better efficiency? Quantum security? Number-theoretic flavor? Any algebra needed by the goal? Compatibility with other protocols? Some protocols are impossible or very inefficient under some assumptions.
Assumption = everything. The choice of the assumption is very important: seeing an assumption already gives a flavor of what can(not) be done efficiently. So let's learn some.
Recall: isomorphism. Assume G₁ is additive and G₂ is multiplicative. A bijection f : G₁ → G₂ is an isomorphism if it agrees with the group operations: f(x + y) = f(x) f(y), f(0) = 1, f(-x) = f(x)⁻¹. Two groups are isomorphic if there exists an isomorphism between them.
Isomorphic = equal. In mathematics, isomorphic groups are considered to be "essentially the same": they have the same structure. Instead of executing the group operation in one group, you can map to the other group, do the group operation there, and then map back. f can be thought of as a change of data representation ... assuming both f and f⁻¹ can be computed efficiently. [Diagram: (G₁, +) and (G₂, ·) connected by f and f⁻¹.]
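To make the map-and-map-back idea concrete, here is a minimal Python sketch with toy parameters of my own choosing (p = 7, g = 3, not from the slides): f(x) = 3^x mod 7 is an isomorphism from (ℤ₆, +) to (ℤ₇*, ·), so addition mod 6 can be carried out by multiplying in ℤ₇* and mapping back.

```python
# Toy illustration (made-up parameters): f(x) = 3^x mod 7 is an isomorphism
# from (Z_6, +) to (Z_7^*, *), because 3 generates Z_7^*.
p, g = 7, 3
order = p - 1                      # |Z_7^*| = 6

def f(x):                          # Z_6 -> Z_7^*
    return pow(g, x, p)

def f_inv(h):                      # brute-force inverse; fine in a toy group
    return next(x for x in range(order) if f(x) == h)

x, y = 2, 5
direct = (x + y) % order                   # add in Z_6 directly ...
roundtrip = f_inv((f(x) * f(y)) % p)       # ... or map, multiply in Z_7^*, map back
assert direct == roundtrip == 1
```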
One-way isomorphisms. f : G₁ → G₂ is a one-way isomorphism if f is an isomorphism, f can be computed efficiently, and f⁻¹ cannot be computed efficiently. That f⁻¹ is hard to compute is an assumption (it is not known how to prove such things).
One-way isomorphic ≠ equal: the map f : (G₁, +) → (G₂, ·) is efficient, but f⁻¹ is not.
Recall: exponentiation. exp_g : ℤq → G, exp_g(m) = gᵐ. exp_g is an isomorphism: gᵐ⁺ⁿ = gᵐ gⁿ, g⁰ = 1, g⁻ᵐ = 1 / gᵐ.
Is exp one-way? What do you think? It depends on the group. Easy cases: in (ℝ*, ·), the inverse of exp is the logarithm; in (ℤ, +) and (ℤq, +), exp is just multiplication, and its inverse is division. In finite groups, the inverse of exp is called the discrete logarithm.
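A small Python sketch (toy, made-up parameters) contrasting the two cases: in (ℤq, +), "exponentiation" is multiplication by g and is trivially inverted by modular division, while in ℤp* only generic or subexponential discrete-log algorithms are known (here the sketch simply brute-forces).

```python
# Sketch (toy parameters) of why the one-wayness of exp depends on the group.

# (Z_q, +): "exp_g(m)" is m*g mod q; its inverse is modular division by g,
# i.e. multiplication by g^{-1} mod q -- always efficient.
q, g_add = 101, 37
m = 64
h = (m * g_add) % q
assert (h * pow(g_add, -1, q)) % q == m     # recover m by division

# Z_p^*: no efficient inversion is known in general; here, brute-force search.
p, g_mul = 101, 2                           # 2 generates Z_101^*
h = pow(g_mul, m, p)
assert next(x for x in range(p - 1) if pow(g_mul, x, p) == h) == m
```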
Hard DL groups, Instantiation 1. Let p be a big prime (3000+ bits). The order of ℤp* = {1, 2, ..., p - 1} is p - 1. Let q | p - 1 be a smaller prime (160+ bits). By the Sylow theorem, ℤp* has a unique subgroup G of order q. DL is assumed to be hard in G. The best known algorithms to break DL in G have subexponential complexity in |p| and exponential complexity in |q|.
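As an illustration (not part of the original slides), here is a toy Python sketch of Instantiation 1 with tiny made-up primes; real parameters would use a p of thousands of bits and a q of at least 160 bits.

```python
# Toy sketch of Instantiation 1 (tiny made-up primes; a real p has thousands
# of bits and q has 160+ bits).
import random

p = 2579                          # prime; p - 1 = 2 * 1289
q = 1289                          # prime divisor of p - 1
assert (p - 1) % q == 0

# x^((p-1)/q) mod p lands in the unique subgroup of order q; any such value
# other than 1 generates that subgroup, because q is prime.
while True:
    g = pow(random.randrange(2, p - 1), (p - 1) // q, p)
    if g != 1:
        break

assert pow(g, q, p) == 1          # g^q = 1, so the order of g divides (= equals) q
print("order-q subgroup generator:", g)
```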
Reminder: basic complexity theory. The running time T(n) of an algorithm is a function of the input length n. Example: the running time of exponentiation is a function of the bitlength n of the group elements; as a simplification, in ℤq, n = log q. Efficient algorithm: T(n) is polynomial in n, e.g. T(n) = 1000 · n⁶. Inefficient algorithm: T(n) is not polynomial in n, e.g. T(n) = n^(log n).
Efficient vs inefficient: a superpolynomial algorithm might be more efficient for small n, but it will eventually become less efficient than a polynomial one. [Graph comparing the two running times.]
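A quick numeric check of this claim, using the illustrative running times from the previous slide (1000 · n⁶ vs n^(log n), with the logarithm taken base 2):

```python
# Numeric check: the superpolynomial n^(log2 n) is below 1000 * n^6 for small n,
# but overtakes it somewhere between n = 128 and n = 256 (around n ~ 165).
from math import log2

for n in (8, 64, 128, 256, 1024):
    poly, superpoly = 1000 * n**6, n ** log2(n)
    print(n, "superpolynomial is cheaper" if superpoly < poly else "polynomial is cheaper")
```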
Complexity in cryptography. When we encrypt, security should not depend on the message length but, say, on the key size. Instead of the input length n, take a security parameter κ; usually κ is related to the key length. First, fix κ so that T(κ) of attacks is big and T(κ) of "honest" algorithms is small. Finally, choose the corresponding key.
Corollaries of complexity. Most algorithms work with an undetermined κ. In practical implementations, fix κ so that the protocol is fast but attacks are assumed to be hard, e.g. attacks take time 2⁸⁰. If attacks are improved somewhat, increase κ accordingly.
Choosing p. [Graph: log of time against log p, comparing the approximate time of index calculus (the best known DL algorithm) with the approximate time of exponentiation; an efficient protocol should have a small value on the exponentiation curve while the attack time is large.] The graph is too optimistic, so p is chosen much larger.
Complexity notation. Θ(f(n)): asymptotically c · f(n) for some constant c, e.g. 100 n² + 20 n - 10 = Θ(n²). O(f(n)): any function that does not grow faster than Θ(f(n)). o(f(n)): any function that grows slower than Θ(f(n)). Ω(f(n)): any function that does not grow slower than Θ(f(n)). ω(f(n)): any function that grows faster than Θ(f(n)).
Quiz. Question: what is (n⁸ + n + 1) / (n² + n + 1)? Answer: it is n⁶ plus smaller terms, thus Θ(n⁶).
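A quick numeric sanity check (illustrative) of the answer:

```python
# The ratio divided by n^6 approaches 1 as n grows, so it is Theta(n^6).
for n in (10, 100, 1000, 10**4):
    print(n, (n**8 + n + 1) / (n**2 + n + 1) / n**6)
```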
Complexity notation. Polynomial: poly(n) = n^(O(1)), grows not faster than some polynomial. Superpolynomial: n^(ω(1)), grows faster than any polynomial. Exponential: 2^(Θ(n)). Negligible: negl(n) = n^(-ω(1)), eventually smaller than the inverse of any polynomial. Linear: Θ(n), asymptotically c · n for some constant c. Etc.: logarithmic, superlogarithmic, sublinear.
Best known DL algorithms. In any group of order q (n := log q): generic algorithms (which only use group operations), namely baby-step giant-step and Pohlig-Hellman, take O(√q) steps. In Instantiation 1 with parameters p and q: index calculus, O(e^(√(2 ln p ln ln p))), and the BSGS/PH algorithms, O(√q). There have been recent advances in groups of order pᵐ for midsize m. DL in any group can be broken by using a quantum computer.
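For concreteness, here is a minimal baby-step giant-step sketch in Python; it runs in O(√q) time and memory. The parameters reuse the toy subgroup from the Instantiation 1 sketch above (g = 4 = 2^((p-1)/q) generates the order-q subgroup); they are illustrative, not from the slides.

```python
# Minimal baby-step giant-step sketch: O(sqrt(q)) time and memory.
from math import isqrt

def bsgs(g, h, q, p):
    """Return m with g^m = h (mod p), where g has prime order q."""
    s = isqrt(q) + 1
    baby = {pow(g, j, p): j for j in range(s)}     # baby steps: g^j -> j
    giant = pow(g, -s, p)                          # g^(-s) mod p
    gamma = h
    for i in range(s):                             # giant steps: h * g^(-i*s)
        if gamma in baby:
            return (i * s + baby[gamma]) % q
        gamma = (gamma * giant) % p
    return None

p, q, g = 2579, 1289, 4            # toy order-q subgroup of Z_p^* from above
m = 1000
assert bsgs(g, pow(g, m, p), q, p) == m
```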
Hard DL groups, Instantiation 2: elliptic curve groups. Let q be a small prime (160+ bits). The elliptic curve group G has order q; the definition is complicated (see the supplementary notes). DL is assumed to be hard in a well-chosen G. The best known algorithms to break DL in G have exponential complexity in |q|.
Comparison of instantiations

                                ℤp*                     E.C.G.
Parameters                      q, log q ≥ 160           q, log q ≥ 160
Group element representation    log p bits               log q bits
Complexity of multiplication    O((log p)^1.58)          O((log q)^1.58)
Security                        2⁸⁰                      2⁸⁰

The exponent 1.58 is due to the Karatsuba algorithm (asymptotically not optimal, but good for inputs of that size). q is much smaller than p, though the constant hidden in O(·) is larger for elliptic curves.
Quiz. Question: what is the asymptotic efficiency of exponentiation in the first instantiation?
Candidate answers: Θ((log p)^1.58), O((log p)^2.58), O((log p)^3), Ω((log q)^1.58), O((log q)^2.58), o((log q)⁶), O((log p)^1.58 log q), o((log p)^2.58), O((log q)^1.58 log p), Ω(1), Θ((log q)^5.16), ω((log p)^1.58).
Answer: the complexity of one multiplication times Θ(log p) multiplications, i.e. Θ((log p)^2.58). Since q = 2^(O(κ)), log q = Θ(κ); since 2^κ = O(e^(√(ln p ln ln p))), log p = Θ(κ² / log κ); thus Θ((log p)^2.58) = Θ(κ^5.16 / (log κ)^2.58).
DL assumption: towards a formal definition. Informally, we need that inverting exponentiation is hard. Complications: when the exponent is smaller than some bound L, one can compute the DL in Θ(√L) steps, so the exponent must be random (e.g., the exponent is a secret key); inverting is impossible when g = 1, so g must be a generator; inverting is always possible with probability 1/q (guessing the answer randomly), so a break only counts when the adversary's advantage is ≫ 1/q; security must hold against probabilistic algorithms that can use random numbers.
Security game. A challenger generates values from some fixed "valid" distributions and sends them to the adversary A. After some computation, A returns some value to the challenger. Depending on the input and the output, the challenger declares A to be either successful or not. A breaks the assumption if her advantage is big compared to random guessing.
Def: DL groups. Let G be a finite cyclic group of order q, and let g be its fixed generator (one can take any g, or a random g). Assume desc(G) contains a description of G, including g.

Game DL(G, A):
  gk ← desc(G); m ← ℤq; h ← gᵐ; m* ← A(gk, h); if m = m*, return 1, else return 0.

Adv[DL(G, A)] := | Pr[DL(G, A) = 1] - 1/q |.
A ε-breaks DL in G iff Adv[DL(G, A)] ≥ ε.
G is a (τ, ε)-DL group iff Adv[DL(G, A)] ≤ ε for all probabilistic adversaries A that take time ≤ τ.
G is a DL group iff it is a (poly(κ), negl(κ))-DL group.
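A toy Python rendering (illustrative parameters reused from the earlier sketches, not from the slides) of the DL(G, A) game and a Monte Carlo estimate of the advantage of two hypothetical adversaries: a brute-force one (advantage close to 1 in this tiny group) and a random-guessing one (advantage close to 0).

```python
# Toy rendering of the DL(G, A) game and the advantage of two adversaries.
import random

p, q, g = 2579, 1289, 4            # g generates the order-q subgroup of Z_p^*

def dl_game(adversary):
    gk = (p, q, g)                 # desc(G): description of G, incl. g
    m = random.randrange(q)        # m <- Z_q
    h = pow(g, m, p)               # h <- g^m
    m_star = adversary(gk, h)      # m* <- A(gk, h)
    return 1 if m_star == m else 0 # challenger's verdict

def brute_force_adversary(gk, h):                  # always wins (toy group!)
    p, q, g = gk
    return next(m for m in range(q) if pow(g, m, p) == h)

def guessing_adversary(gk, h):                     # wins with probability 1/q
    return random.randrange(gk[1])

trials = 200
for adv in (brute_force_adversary, guessing_adversary):
    rate = sum(dl_game(adv) for _ in range(trials)) / trials
    print(adv.__name__, "advantage ~", abs(rate - 1 / q))
```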
Study outcomes. Assumptions: motivation; some example assumptions and why there are so many; the discrete logarithm: basic idea and formal definition.