ObliVM: A Programming Framework for Secure Computation

1 ObliVM: A Programming Framework for Secure Computation
Chang Liu. Joint work with Xiao Shaun Wang, Kartik Nayak, Yan Huang, and Elaine Shi.

2 Dating: Genetically
Good match? Not leaking their sensitive genomic data to anyone else!

3 Problem Abstraction
Alice holds 𝑥; Bob holds 𝑦. Public function f. Compute z = f(x, y) and reveal z.
Security requirement: reveal z, but nothing more!
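As a toy illustration of this security goal (not the garbled-circuit protocols ObliVM actually compiles to), consider the special case f(x, y) = x + y. With additive secret sharing, each party only ever sees a uniformly random share of the other's input, so nothing beyond z is revealed. The modulus P below is an arbitrary choice for this sketch:

```python
import secrets

P = 2**61 - 1  # a public prime modulus (arbitrary choice for this sketch)

def share(v):
    """Split v into two additive shares, each individually uniform mod P."""
    r = secrets.randbelow(P)
    return r, (v - r) % P

def add_shares(x_shares, y_shares):
    """Each party locally adds its share of x and its share of y.
    Reconstructing the two result shares yields z = x + y and
    nothing else about x or y."""
    z0 = (x_shares[0] + y_shares[0]) % P  # party 0's local computation
    z1 = (x_shares[1] + y_shares[1]) % P  # party 1's local computation
    return (z0 + z1) % P                  # reconstruction: only z is revealed
```

Addition is the easy case; general functions f require a full protocol such as garbled circuits, which is what the rest of this talk builds on.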

4 Generic vs. Customized Protocols
Nina Taft, Distinguished Scientist: it took 5 researchers 4 months to develop an efficient oblivious matrix factorization algorithm over secure computation.
Generic protocols: low design cost, flexible. Customized protocols: efficient, but require expertise.

5 Can generic secure computation be practical?
Challenge 1: Efficiency: time & space Challenge 2: Programmability: for non-expert programmers

6 ObliVM: Achieve the Best of Both Worlds
Programs by non-specialists achieve the performance of customized designs. Challenge 1: Efficiency: time & space Challenge 2: Programmability: for non-expert programmers

7 Programmer’s favorite model vs. cryptographer’s favorite model
The cryptographer’s favorite model: circuits of AND, XOR, and OR gates. The programmer’s favorite model: ordinary programs, such as

def binSearch(a, x):
    lo, hi = 0, len(a) - 1
    res = -1
    while lo <= hi:
        mid = (lo + hi) // 2
        midval = a[mid]
        if midval < x:
            lo = mid + 1
        elif midval > x:
            hi = mid - 1
        else:
            res = mid
            break
    return res

Accessing a secret index may leak information!

8 How do secret indexes leak information?
Example: querying a medical database at a secret index. Which record is touched (breast cancer, liver problem, kidney, ...) is itself sensitive, even when f(x, y) is evaluated inside a circuit of AND/XOR/OR gates.
A naive solution (in generic approaches) is to linearly scan through the entire memory for each memory access. Extremely slow!
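The naive linear-scan idea can be sketched in plain Python (this is illustrative pseudocode, not ObliVM code; in a real protocol the comparison and the select are themselves computed inside the circuit):

```python
def mux(cond, a, b):
    """Branchless select: returns a if cond == 1 else b (cond in {0, 1}).
    This mirrors the multiplexer gate a circuit would use."""
    return cond * a + (1 - cond) * b

def oblivious_read(mem, secret_idx):
    """Read mem[secret_idx] by touching every cell, so the physical
    access pattern is independent of the secret index. Cost: O(N)
    circuit gates per access, which is exactly the slowness the
    slide complains about."""
    result = 0
    for j in range(len(mem)):
        # In a real garbled circuit this equality test is done obliviously.
        is_target = 1 if j == secret_idx else 0
        result = mux(is_target, mem[j], result)
    return result
```

Every access costs work linear in the memory size, which motivates the ORAM-based approach on the next slide.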

9 Crypto Tool: Oblivious RAM
Hides access patterns through redundancy and data shuffling, at poly-logarithmic (O(polylog N)) cost per access. To read M[i] at a secret index, the garbled circuit hands [𝑖] to the ORAM scheme and receives [𝑀[𝑖]] back.
[Shi et al., 2011] Oblivious RAM with O((log N)^3) Worst-Case Cost. In ASIACRYPT 2011.
[Stefanov et al., 2013] Path ORAM: An Extremely Simple Oblivious RAM Protocol. In CCS 2013.
[Wang et al., 2015] Circuit ORAM: On Tightness of the Goldreich-Ostrovsky Lower Bound. In CCS 2015.
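To see how shuffling hides access patterns, here is a deliberately simplified toy in Python. It is not any of the cited schemes: the data sits under a secret random permutation, so a single physical access looks random, and the whole array is reshuffled after each access (an O(N) step; real tree-based ORAMs like Path ORAM and Circuit ORAM achieve the polylog cost with redundancy and incremental shuffling):

```python
import random

class ToyShuffleORAM:
    """Toy illustration only: hides which logical index is accessed by
    storing data under a secret permutation and reshuffling afterwards."""

    def __init__(self, data):
        self.n = len(data)
        self._shuffle(data)

    def _shuffle(self, data):
        # Draw a fresh secret permutation and scatter the data under it.
        self.perm = list(range(self.n))
        random.shuffle(self.perm)
        self.store = [None] * self.n
        for logical, physical in enumerate(self.perm):
            self.store[physical] = data[logical]

    def read(self, i):
        value = self.store[self.perm[i]]   # one random-looking physical access
        # Reshuffle so the next access is uncorrelated with this one.
        data = [self.store[p] for p in self.perm]
        self._shuffle(data)
        return value
```

The observer sees one uniformly random physical location per access; the expensive full reshuffle is exactly what the polylog-cost schemes on this slide avoid.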

10 From Source Program to Circuit
Source program → oblivious program: the challenge!
Oblivious program → circuit: easy.

11 ObliVM: A Programming Framework for Oblivious Computation
Two main ideas: (1) program-specific optimizations through static analysis; (2) programming abstractions for oblivious computation.
To achieve this goal, I will talk about two main ideas. The first idea is for the compiler to perform static analysis and program-specific optimizations at compile time. [LHS-CSF'13] [LHSKH-Oakland'14] [LHMHTS-ASPLOS'15] [LWNHS-Oakland'15]

12 Example: FindMax

int max(public int n, secret int h[]) {
    public int i = 0;
    secret int m = 0;
    while (i < n) {
        if (h[i] > m) then m = h[i];
        i++;
    }
    return m;
}

Let's look at a concrete example, FindMax. This program sequentially scans through an array to find the maximal element. Even though the array h may be secret, we need not place h in an ORAM, because the access pattern is fixed. We essentially only need to encrypt the array h.
h[] need not be in ORAM. Encryption suffices.
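The claim "the access pattern is fixed" can be checked directly. Below is a hypothetical instrumented Python version of FindMax that records every array address it touches; the trace depends only on the (public) length of h, never on its secret contents:

```python
def find_max_with_trace(h):
    """Instrumented FindMax: returns (max, list of memory accesses).
    Illustrative only -- it shows why h[] is safe outside ORAM: two
    runs on different inputs of the same length produce identical traces."""
    trace = []
    m = 0
    for i in range(len(h)):
        trace.append(('read', 'h', i))  # every run reads h[0..n-1] in order
        if h[i] > m:
            m = h[i]  # m lives in a register / secret share, not in RAM
    return m, trace
```

Two arrays of equal length yield byte-for-byte identical traces, which is precisely the memory-trace obliviousness property the compiler's analysis certifies.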

13 Dynamic Memory Accesses: Main Loop in Dijkstra

for(int i=1; i<n; ++i) {
    int bestj = -1, bestdis = -1;
    for(int j=0; j<n; ++j)
        if(!vis[j] && (bestdis < 0 || dis[j] < bestdis)) {
            bestdis = dis[j]; bestj = j;
        }
    vis[bestj] = 1;
    for(int j=0; j<n; ++j)
        if(!vis[j] && (bestdis + e[bestj][j] < dis[j]))
            dis[j] = bestdis + e[bestj][j];
}

dis[]: not in ORAM. vis[], e[][]: inside ORAM.
It is not important to understand the details; look at the structure of the loops. Here is the main loop of the Dijkstra shortest-path algorithm. There are three arrays: the visited array vis, the distance array dis, and the edge array e. Without understanding the semantics of the program, just by examining it at the syntax level, you can observe that the distance array is always accessed sequentially, and therefore need not be placed in ORAM. By contrast, the two red arrays (vis and e) should be placed inside an ORAM. Our compiler automates such analysis and places the minimal number of variables inside ORAMs. This can often give us orders of magnitude performance improvements for practical applications. The real compiler is more sophisticated than this; it can also perform other optimizations, for example deciding when it is safe to divide variables up into multiple, smaller ORAMs.

14 Do we need to place all variables/data inside one ORAM?
Here is our key observation. In a program, not all accesses leak information. For some variables, the access patterns are safe to reveal, and such variables need not be placed inside an ORAM.
Key observation: accesses that do not depend on secret inputs need not be hidden.

15 A memory-trace obliviousness type system ensures the security of the target program.
[LHS-CSF '13] Memory Trace Oblivious Program Execution. In CSF 2013.
[LHSKH-Oakland '14] Automating RAM-model Secure Computation. In Oakland 2014.
[LHMHTS-ASPLOS '15] GhostRider: A Hardware-Software System for Memory Trace Oblivious Computation. In ASPLOS 2015.

16 ObliVM: A Programming Framework for Oblivious Computation
Two main ideas: (1) program-specific optimizations through static analysis; (2) programming abstractions for oblivious computation.
The second idea is to provide programming abstractions for oblivious computation. [LHS-CSF'13] [LHSKH-Oakland'14] [LHMHTS-ASPLOS'15] [LWNHS-Oakland'15]

17 Analogy to Parallel Computation
Approach 1: compile a program written in C. Limited opportunities for compile-time optimizations.
Approach 2: compile a program written in MapReduce. MapReduce is a parallel programming abstraction.
The best way to understand this idea is to make an analogy to parallel computation. So let's think about how we can automate parallelism. The first approach is to take a program written in a traditional language like C, which is sequential in nature; the compiler then tries to figure out how to parallelize it. This approach, however, offers limited opportunities for compile-time optimizations. The second approach most of us are also familiar with: developers code in a parallel programming paradigm such as MapReduce or Spark. Such a program gives the system insight into how to parallelize it. Therefore we say that MapReduce is a parallel programming abstraction.

18 Programming Abstractions for Oblivious Computation
Approach 1: compile a program written in C into an oblivious representation using ORAM. Limited opportunities for compile-time optimizations.
Approach 2: compile a program written in ObliVM abstractions into an oblivious representation using ORAM (generic) and oblivious algorithms (problem-specific, but efficient). [NWIWTS-Oakland15] [WLNHS-Oakland15]
Approach 1 is what I just described to you. Imprecisely speaking, our high-level idea is the same as in the parallel-computation analogy, but with "parallel" replaced by "oblivious" and "MapReduce" replaced by "ObliVM". So far, our approach had been to write a program in a traditional language like C and perform static optimizations. This achieves significant speedups for a wide class of programs, but we can do better. Therefore, we consider programming abstractions for oblivious computation. Unlike MapReduce, our framework ObliVM provides a suite of programming patterns, or abstractions. If the developer's program follows these patterns, our compiler gains more insight and emits efficient target code that leverages not only generic ORAM but also a variety of more efficient oblivious algorithms. This is our fundamental idea. In the interest of time, I will not describe all of these programming abstractions in detail; several of them required co-designing the abstraction and the underlying algorithms. I will just cherry-pick a couple of examples and give a one-sentence summary of each.

19 Interactions between PL and Algorithms
Programming abstractions ↔ oblivious algorithms.
The expected: find common patterns in oblivious algorithms and generalize them into programming abstractions.

20 Interactions between PL and Algorithms
The expected: find common patterns, generalize them into abstractions.
The unexpected: new insights lead to new algorithms.

21 Interactions between PL and Algorithms
The unexpected: new insights lead to new algorithms. Interactions between PL and algorithms allowed us to solve open problems in oblivious algorithm design: depth-first search, shortest path, minimum spanning tree.

22 Loop Coalescing
Nested loop structure: Block 1 (×n), Block 2 (×m), Block 3 (×n). Loop coalescing gives oblivious Dijkstra and MST for sparse graphs.
Here is a quick example of a PL technique that led to the discovery of new algorithms.
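A toy sketch of the coalescing idea in Python (assumed semantics for illustration, not ObliVM's actual transform): a nested loop whose inner bounds are data-dependent is flattened into one loop whose total iteration count depends only on the public totals n and m, not on how the m inner steps are distributed among the outer iterations:

```python
def coalesced_run(outer_work, inner_counts, inner_work):
    """Flatten
        for i in range(n): A(i); for j in range(inner_counts[i]): B(i, j)
    into ONE loop of exactly n + sum(inner_counts) iterations. The loop
    trace reveals only the public totals, which is what makes Dijkstra/MST
    on sparse graphs oblivious without padding every node to max degree."""
    n = len(inner_counts)
    total = n + sum(inner_counts)  # public: n outer steps + m inner steps
    i, j = -1, 0
    log = []
    for _ in range(total):  # fixed, public iteration count
        # In an oblivious target this branch becomes a secret-conditional
        # (mux); here we branch in the clear just to show the schedule.
        if i < 0 or j >= inner_counts[i]:
            i, j = i + 1, 0
            log.append(outer_work(i))      # one step of the outer block
        else:
            log.append(inner_work(i, j))   # one step of the inner block
            j += 1
    return log
```

For Dijkstra on a sparse graph this yields O(n + m) oblivious steps instead of O(n · max_degree), which is the source of the sparse-graph results on this slide.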

23 Loop Coalescing
Loop coalescing gives oblivious Dijkstra and MST for sparse graphs.

24 Hand-crafting vs. Automated Compilation
2013, hand-crafted (Nina Taft, Distinguished Scientist): Matrix Factorization [NIWJTB-CCS '13], 5 researchers, 4 months. Ridge Regression [NWIJBT-IEEE S&P '13], 5 researchers, 3 weeks.
ObliVM today [LWNHS-IEEE S&P '15] (this work): the same tasks take about 1 graduate-student-day, with 10x-20x better performance.

25 Speedup for More Applications
[Chart] Speedup (log scale, 1 to 10^6) over [HKFV12], broken down into backend, PL, and Circuit ORAM contributions, for Dijkstra, MST, K-Means, Heap, Map/Set, BSearch, AMS Sketch, and CountMin Sketch, on data sizes from 768KB up to GBs; combined speedups reach up to 1.7×10^6×.
Earlier non-tree-based ORAMs perform worse than linear scans of memory.

26 ObliVM: Binary Search on 1GB Database
Reference point: ~24 hours in 2012 [HFKV-CCS '12]. ObliVM today: 7.3 secs/query. With hardware AES: ~20 times better.
Setup: 2 EC2 virtual cores, 60GB memory, 10MBps bandwidth.
[HFKV-CCS '12] Holzer et al. Secure Two-Party Computations in ANSI C. In CCS 2012.
For example, consider a common binary search query over a 1-gigabyte database. On a single pair of processors, our ObliVM framework takes 9 seconds to answer each binary search query. Of those 9 seconds, only 2.5 seconds are attributable to the online cost; all the rest can be done in an offline phase, and moreover the offline work is embarrassingly parallelizable. There are also some low-hanging fruits that will immediately boost our performance further. Our current implementation is in Java and does not use hardware AES. If we moved the implementation to C and used the hardware AES features widely present in off-the-shelf processors today, we expect the performance to improve to about a hundredth of a second per query. Achieving this would require about 3GBps of bandwidth; this calculation should be fairly accurate based on numbers reported in a recent work by Bellare et al.

27 ObliVM: Binary Search on 1GB Database
Reference point: ~24 hours in 2012 [HFKV-CCS '12]. With cryptographic extensions (projected): 0.3 secs/query.
Setup: 2 EC2 virtual cores, 60GB memory, 300MBps bandwidth.

28 Overhead w.r.t. Insecure Baseline
Slowdown in comparison with cleartext execution: Hamming Distance: 130×; Distributed GWAS: 1.7×10^4×; K-Means: 9.3×10^6×.
Secure computation encrypts and computes bit by bit, which accounts for the instruction-level overhead; K-Means additionally pays a floating-point overhead.

29 Overhead w.r.t. Insecure Baseline
Opportunities for further optimizations: hardware acceleration, parallelism, faster cryptography.

30 ObliVM Adoption
Privacy-preserving data mining and recommendation systems
Computational biology, privacy-preserving microbiome analysis
Privacy-preserving Software-Defined Networking
Cryptographic MIPS processor
iDASH secure genome analysis competition (won an "HLI Award for Secure Multiparty Computing")

31 Future Work: From ObliVM to a Unified Programming Framework for Modern Cryptography
ObliVM compiles programs into circuits. Target backends: Secure Multiparty Computation, Program Obfuscation (DARPA SafeWare), Fully Homomorphic Encryption, Functional Encryption, Verifiable Computation.

