On the Depth-Robustness and Cumulative Pebbling Cost of Argon2i


On the Depth-Robustness and Cumulative Pebbling Cost of Argon2i Jeremiah Blocki (Purdue University) Samson Zhou (Purdue University) TCC 2017

Motivation: Password Storage jblocki, 123456 Username jblocki Salt 89d978034a3f6 Hash 85e23cfe0021f584e3db87aa72630a9a2345c062 SHA1(12345689d978034a3f6)=85e23cfe0021f584e3db87aa72630a9a2345c062 + As a motivating example, consider an adversary who breaks into an authentication server and steals a user's cryptographic hash value. The adversary can execute an offline attack against the password by comparing the hash value with the hashes of likely password guesses. The adversary is limited only by the resources he is willing to invest in cracking the passwords.

Offline Attacks: A Common Problem Unfortunately, offline dictionary attacks are quite common. Password breaches at major companies have affected billions of user accounts.

Goal: Moderately Expensive Hash Function Fast on PC and Expensive on ASIC? This motivates the need for moderately hard functions. The basic idea is to build a password hash function that is moderately expensive to compute, to limit the number of guesses an adversary would be willing to try. An honest server only needs to compute the function once during a typical authentication session. However, the adversary potentially has an advantage in this game: while the honest server must typically evaluate the function on standard hardware, the adversary could potentially reduce the cost per password guess by developing customized hardware (GPUs, FPGAs, ASICs) to evaluate the password hash function. Thus, we want to ensure that the cost to evaluate this function is equitable across platforms.

Password Hashing Competition (2013-2015) https://password-hashing.net/ "We recommend that you use Argon2…" There are two main versions of Argon2: Argon2i and Argon2d. Argon2i is the safest against side-channel attacks.

Memory Hard Function (MHF) Intuition: computation costs are dominated by memory costs. Goal: force the attacker to lock up large amounts of memory for the duration of the computation — expensive even on customized hardware. Data-Independent Memory Hard Function (iMHF): the memory access pattern should not depend on the input. In this work we study this special type of memory hard function; as the name suggests, the memory access pattern does not depend on the input, making these functions resistant to side-channel attacks.

Data-Independent Memory Hard Function (iMHF) An iMHF f_{G,H} is defined by: a random oracle H: {0,1}^{2k} → {0,1}^k, and a DAG G encoding the data dependencies, with maximum indegree δ = O(1). Input: (pwd, salt). Each node of G is assigned a label, e.g., L1 = H(pwd, salt) and L3 = H(L2, L1). Output: f_{G,H}(pwd, salt) = L4, the label of the final node.
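The label computation just described can be sketched in a few lines of Python; as an illustration we stand in SHA-256 for the abstract random oracle H (an assumption for the sketch, not part of any Argon2i specification):

```python
import hashlib

def H(*parts: bytes) -> bytes:
    # Stand-in for the random oracle H (SHA-256 is an assumption for illustration)
    return hashlib.sha256(b"".join(parts)).digest()

def evaluate_imhf(parents, pwd: bytes, salt: bytes) -> bytes:
    """Compute f_{G,H}(pwd, salt) for a DAG given as a 1-indexed parent
    list: L[1] = H(pwd, salt), L[i] = H(labels of parents of i), and the
    output is the label of the last node."""
    L = {}
    for i in range(1, len(parents) + 1):
        preds = parents[i - 1]
        if not preds:                        # source node
            L[i] = H(pwd, salt)
        else:
            L[i] = H(*(L[j] for j in preds))
    return L[len(parents)]

# The 4-node example from the slide: path 1->2->3->4 plus the edge 1->3,
# so that L3 = H(L2, L1) as shown above.
parents = [[], [1], [2, 1], [3]]
tag = evaluate_imhf(parents, b"pwd", b"salt")
```

Because the parent list (and hence the memory access pattern) is fixed in advance, nothing here depends on the input — exactly the data-independence property.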

Evaluating an iMHF (pebbling) Pebbling Rules: P = (P_1, …, P_t), with each P_i ⊆ V, such that P_{i+1} ⊆ P_i ∪ {x ∈ V : parents(x) ⊆ P_i} (need the dependent values), and n ∈ P_t (must finish and output L_n). We will describe our algorithms for evaluating an iMHF using the language of graph pebbling. Placing a pebble on a node corresponds to computing the corresponding label, and keeping a pebble on the graph corresponds to storing that label in memory. Of course, we can only compute a label if we have all of the dependent labels; thus, in a legal pebbling we cannot place a pebble on a node until we have pebbles on the parents of that node. If the adversary is parallel then we are allowed to place multiple pebbles on the graph in each round. We can remove pebbles from the graph at any point in time.

Pebbling Example [figure: the path DAG on nodes 1, 2, 3, 4, 5] P1 = {1} (data value L1 stored in memory) P2 = {1,2} (data values L1 and L2 stored in memory) P3 = {3} P4 = {3,4} P5 = {5}

Measuring Pebbling Costs [AS15] CC(G) = min_P Σ_{i=1}^{t_P} |P_i|, where |P_i| is the memory used at step i and the minimum ranges over all legal pebblings P. This approximates the amortized Area × Time complexity of the iMHF. [AS15]: Costs scale linearly with the number of password guesses: CC(G, …, G) (m disjoint copies) = m × CC(G).

Pebbling Example: Cumulative Cost For P1 = {1}, P2 = {1,2}, P3 = {3}, P4 = {3,4}, P5 = {5} we get CC(G) ≤ Σ_{i=1}^{5} |P_i| = 1+2+1+2+1 = 7.
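The pebbling rules and the cumulative cost above are easy to check mechanically; a minimal sketch (the helper names `cumulative_cost` and `is_legal` are ours, for illustration):

```python
def cumulative_cost(pebbling):
    # CC of a fixed pebbling: the sum of memory usage |P_i| over all steps
    return sum(len(P) for P in pebbling)

def is_legal(pebbling, parents, n):
    """Check the pebbling rules from the slides: a newly placed pebble
    requires all parents to be pebbled in the previous round, and node n
    must carry a pebble at the final step."""
    prev = set()
    for P in pebbling:
        for v in P - prev:                        # newly pebbled nodes
            if not set(parents[v - 1]) <= prev:
                return False
        prev = set(P)
    return n in pebbling[-1]

# The sequential pebbling of the path 1 -> 2 -> 3 -> 4 -> 5 shown above
parents = [[], [1], [2], [3], [4]]
P = [{1}, {1, 2}, {3}, {3, 4}, {5}]
```

Running `cumulative_cost(P)` reproduces the 1+2+1+2+1 = 7 computation from the slide.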

Desiderata Find a DAG G on n nodes such that: Constant indegree (δ = 2) Running time: n(δ−1) = n CC(G) ≥ n^2/τ for some small value τ. Maximize costs for a fixed running time n (users are impatient).

Depth-Robustness: The Key Property Necessary [AB16] and sufficient [ABP17] for secure iMHFs

Block Depth Robustness [ABP17] Definition: A DAG G = (V,E) is (e,d,b)-block depth-robust if for all S ⊆ V with |S| ≤ e we have depth(G − ∪_{v∈S} [v, v+b)) > d. Otherwise, we say that G is (e,d,b)-reducible. Example: the path on nodes 1, …, 6 is (e=1, d=2, b=2)-block reducible. Special case b = 1: we simply say that G is (e,d)-depth robust (reducible).
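For intuition, the definition can be verified by brute force on tiny DAGs (the search is exponential in e, so this is illustration only; the helpers `depth` and `is_block_reducible` are ours):

```python
from itertools import combinations

def depth(nodes, edges):
    # Longest path (counted in edges) of the DAG restricted to `nodes`;
    # node labels are assumed to be a topological order (edges go forward)
    best = {}
    for v in sorted(nodes):
        best[v] = max((best[u] + 1 for (u, w) in edges
                       if w == v and u in best), default=0)
    return max(best.values(), default=0)

def is_block_reducible(n, edges, e, d, b):
    """G is (e, d, b)-block reducible if removing the blocks [v, v+b) for
    the v in some set S with |S| <= e leaves depth <= d; otherwise G is
    (e, d, b)-block depth-robust."""
    V = set(range(1, n + 1))
    for size in range(e + 1):
        for S in combinations(sorted(V), size):
            removed = {u for v in S for u in range(v, v + b)}
            if depth(V - removed, edges) <= d:
                return True
    return False

# Slide example: the path 1 -> 2 -> ... -> 6 is (e=1, d=2, b=2)-block
# reducible, e.g. deleting the block [3, 5) = {3, 4} leaves depth 1 <= 2.
path6 = [(i, i + 1) for i in range(1, 6)]
```

Deleting nothing (e = 0) leaves the full path of depth 5, so the same graph is not (0, 2, 2)-reducible — the parameters matter.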

Depth Robustness is Necessary [AB16] Theorem (depth-robustness is a necessary condition): If G is not (e,d)-depth robust then CC(G) = O(en + √(n^3·d)). In particular, CC(G) = o(n^2) whenever e, d = o(n).

How Depth-Reducible is Argon2i? Theorem [AB17]: A random Argon2i DAG G is (n^{0.8}, O(n^{0.6}))-reducible (whp). Corollary [AB16,AB17]: For a random Argon2i DAG G we have CC(G) = O(n^{1.8}) (whp). Our Theorem: Argon2i is (e, O(n^3/e^3))-depth reducible with high probability. Corollary: For a random Argon2i DAG G we have CC(G) = O(n^{1.768}) (whp). Proof sketch: a recursive attack [ABP17] exploiting the fact that G is (e,d)-reducible at multiple points along a curve, e.g., d(e) = O(n^3/e^3).

Depth-Robustness is Sufficient! [ABP17] Key Theorem: Let G = (V,E) be (e,d)-depth robust; then CC(G) ≥ ed. Implications: There exists a constant-indegree graph G with CC(G) ≥ Ω(n^2/log n). [AB16]: We cannot do better (in an asymptotic sense): CC(G) = O(n^2·loglog n/log n). [ABH17]: A new practical construction of an (Ω(n/log n), Ω(n))-depth robust graph (DRSample) with CC(G) ≥ Ω(n^2/log n). Argon2i has been deployed in crypto libraries, with ongoing IETF standardization efforts → still an important iMHF for cryptanalysis.

How Depth-Robust is Argon2i? Theorem [ABP17]: Argon2i is (e, Ω(n^2/e^2), Ω(n/e))-block depth robust with high probability for e > √n. Big gap between the upper and lower bounds on depth: n^3/e^3 ≫ n^2/e^2. Our Theorem: Argon2i is (e, Ω(n^3/e^3), Ω(n/e))-block depth robust with high probability for e > n^{2/3}.

Complexity of Argon2i DAG G Theorem: Argon2i is (e, Ω(n^3/e^3), Ω(n/e))-block depth robust with high probability for e > n^{2/3}. This proves Argon2i is stronger than competing iMHF candidates like Catena and Balloon Hashing [AB16,AB17,AGKKOPRRR16]. Theorem [ABP17]: If a DAG G is (e,d)-depth robust then CC(G) = Ω(ed). Corollary: CC(Argon2i) = max_{e > n^{2/3}} Ω(e·n^3/e^3) = Ω(n^{5/3}). Our Stronger Theorem: CC(G) = Ω(n^{7/4}) ≫ Ω(n^{5/3}).
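The maximization in the corollary is a one-line calculation: e·d(e) = e·Θ(n^3/e^3) = Θ(n^3/e^2) is decreasing in e, so the best bound comes from the smallest admissible e = n^{2/3}, giving n^{5/3}. A numeric sanity check of that reasoning (constants ignored, a sketch only):

```python
def cc_lower_bound(n: float) -> float:
    """max over e >= n^(2/3) of e * (n^3 / e^3) = n^3 / e^2.
    Since the product decreases in e, the maximum sits at the boundary
    e = n^(2/3), where it equals n^(5/3)."""
    boundary = n ** (2 / 3)
    candidates = [boundary * (1 + k / 10) for k in range(50)]
    return max(e * (n ** 3 / e ** 3) for e in candidates)

n = 10.0 ** 6
bound = cc_lower_bound(n)   # ~ n^(5/3) = 10^10
```

The same monotonicity argument is why the stronger Ω(n^{7/4}) bound cannot come from this theorem alone and needs the block depth-robustness used later in the talk.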

Argon2i [figure: the path DAG on nodes 1, 2, …, n] Each node i also has one random predecessor r(i) < i, so the indegree is δ = 2. Argon2iA (v1.1): r(i) uniform over {1, …, i−1}. Argon2iB (v1.2+): non-uniform edge distribution (N ≫ n): Pr[r(i) = j] = Pr_{x~[N]}[ i(1 − x^2/N^2) ∈ (j−1, j] ]. Closer nodes receive more weight; e.g., Pr[r(i) = 1] = Ω(1/i).
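A sampler for this edge distribution can be sketched directly from the formula; here N = 2^32 and the clamping of j into {1, …, i−1} are our assumptions for illustration:

```python
import math
import random

def sample_predecessor(i: int, N: int = 2 ** 32) -> int:
    """Sample r(i) from the Argon2iB edge distribution: draw x uniform
    from [N] and return the j with i*(1 - x^2/N^2) in (j-1, j].
    The choice N = 2^32 and the clamping below are assumptions."""
    x = random.randrange(1, N + 1)
    j = math.ceil(i * (1.0 - (x * x) / (N * N)))
    return min(max(j, 1), i - 1)
```

Closer predecessors dominate: every draw with x < N/√2 (roughly 71% of them) already maps into the top half of the predecessor range, while far nodes keep only the Ω(1/i)-sized tail noted on the slide.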

Argon2i Meta-Graph Partition the nodes 1, …, n of G into consecutive blocks of size m (e.g., m = Ω(n^{1/3})) and contract each block into a single meta-node; the result is the meta-graph G_m on n/m nodes. The meta-graph G_m is (Ω(n/m), Ω(n/(m·r*)))-depth robust for r* = Ω(n/m^3).

Argon2i Analysis Theorem [ABP17]: If the meta-graph G_m is (2e, d)-depth-robust then G is (e, dm/3, m)-block depth-robust. The meta-graph G_m is (Ω(n/m), Ω(n/(m·r*)))-depth robust for r* = Ω(n/m^3). Corollary: With high probability a random Argon2i DAG G is (Ω(n/m), Ω(n/r*), m)-block depth-robust for r* = Ω(n/m^3).

δ-Bipartite Expander A bipartite graph with parts A and B, |A| = |B| = n, is a δ-bipartite expander if for every X ⊆ A with |X| ≥ δn, the set Y ⊆ B of nodes unreachable from X satisfies |Y| < δn.

(δ, r*)-Local Expander For every node v and every r ≥ r*, the edges of G between the consecutive intervals [v, v+r) and [v+r, v+2r) contain a δ-bipartite expander graph.

Argon2i Analysis Lemma 1: Let G_m be the meta-graph of a random Argon2i DAG G and suppose r* = Ω(n/m^3). Then with high probability G_m is a (δ, r*)-local expander. Remark 1: The increased weight placed on closer edges improves this bound in comparison to previous iMHF constructions. Remark 2: Recent work [ABH17] modifies the distribution to obtain maximally depth-robust graphs. It remains to show: local expansion ⇒ depth-robustness.

Layers of the Argon2i Meta-Graph Partition the n/m meta-nodes of G_m into consecutive layers L_1, L_2, …, L_{n/(m·r*)} of r* nodes each, and consider the graph G_m − S that remains after deleting a set S.

c-good Layer L_i with respect to a deleted set S: for every k ≥ 1 we require |S ∩ ∪_{j=0}^{k} L_{i+j}| ≤ c(k+1)r*, i.e., the fraction of deleted nodes in any window of layers starting at L_i must be small.

Proof Outline Claim 4: If |S| ≤ n/(10000·m) then at least half, k = n/(2m·r*), of the layers are c-good. Notation: Let H_{1,S}, H_{2,S}, …, H_{k,S} denote the c-good layers and let R_{i,S} ⊆ H_{i,S} denote the set of nodes reachable from the previous layer (R_{i−1,S}) in G_m − S. Note 1: R_{1,S} := H_{1,S}. Note 2: If R_{k,S} ≠ ∅ then G_m − S contains a path of length k = n/(2m·r*). Lemma 3: Let G_m be a (δ = 1/16, r*)-local expander and let S ⊂ V(G_m) be given such that |S| ≤ n/(10000·m); then |R_{i,S}| ≥ 7r*/8.

Intuitive Proof of Lemma 3 Reachable nodes R_{i,S}: must be large, by expansion (inductive argument). Deleted nodes: must be few in layer H_{i,S} and in layer H_{i+1,S}, since each of these layers is c-good w.r.t. S.

Complexity of Argon2i DAG G Theorem: Argon2i is (e, Ω(n^3/e^3), Ω(n/e))-block depth robust with high probability for e > n^{2/3}. Theorem [ABP17]: If G is (e,d)-depth robust then CC(G) = Ω(ed). Corollary: CC(G) = max_{e > n^{2/3}} Ω(e·n^3/e^3) = Ω(n^{5/3}). Our Stronger Theorem: CC(G) = Ω(n^{7/4}) ≫ Ω(n^{5/3}).

Tighter Analysis via Block Depth-Robustness Split G into the lower half G_L (nodes 1, …, n/2) and the upper half G_U (nodes n/2+1, …, n). The lower graph G_L is (e, Ω(n^3/e^3), Ω(n/e))-block depth-robust; in particular, G_L is (n^{3/4}, Ω(n^{3/4}), Ω(n^{1/4}))-block depth-robust. [ABP17] ⇒ the cost to re-pebble most of G_L from a starting configuration with at most e = O(n^{3/4}) pebbles is at least Ω(ed) = Ω(n^{3/2}). Key step: pebbling O(n^{3/4}) consecutive nodes in G_U while holding at most O(n^{3/4}) pebbles on G_L forces us to re-pebble (almost all of) G_L.

Case Analysis Case 1: Keep e = Ω(n^{3/4}) pebbles on G_L for Ω(n) steps. CC is at least Ω(n^{7/4}). Case 2: (Mostly) re-pebble G_L at least Ω(n^{1/4}) times. CC is at least Ω(n^{3/2}) × Ω(n^{1/4}) = Ω(n^{7/4}).

Summary of Results Theorem: Argon2i is (e, O(n^3/e^3))-depth reducible with high probability. Corollary: For a random Argon2i DAG G we have CC(G) = O(n^{1.768}) (whp). Theorem: Argon2i is (e, Ω(n^3/e^3), Ω(n/e))-block depth robust with high probability for e > n^{2/3}. Theorem: For a random Argon2i DAG G we have CC(G) = Ω(n^{7/4}) (whp).

Hybrid Modes of Operation (Argon2id) Argon2id (now the recommended mode for password hashing): run in data-independent mode (Argon2i) for the first n/2 steps, then switch to data-dependent mode for the last n/2 steps. Recent results suggest that CC(Argon2id) ≥ Ω(n^2) [ACPRT17]. Side-channel attacks reduce costs, but the effect is less substantial: CC(Argon2id | side-channel attack) = O(n^{1.767}) [BZ17]. Improvement over scrypt: CC(scrypt | side-channel attack) = O(n).

New Results DRSample: a practical construction of a maximally depth-robust graph [ABH17]. Easy to implement (a small modification to the Argon2 source code). Optimal asymptotic guarantees for DRSample: CC(G) = Ω(n^2/log n). Hybrid mode with optimal side-channel resistance. Argon2i and DRSample are both bandwidth hard [RD17] (see the next talk for the definition of bandwidth hardness). Future work: sustained space complexity of Argon2id? Constant factors?

Thanks for Listening

A Puzzle First person to find the unsavory solution wins a $20 Amazon Gift card E-mail: jblocki@purdue.edu