1 Information Security – Theory vs. Reality 0368-4474-01, Winter 2012-2013 Lecture 8: Integrity on untrusted platforms: Proof-Carrying Data Eran Tromer.


1 Information Security – Theory vs. Reality, 0368-4474-01, Winter 2012-2013. Lecture 8: Integrity on untrusted platforms: Proof-Carrying Data. Eran Tromer

2 Recall our high-level goal: ensure properties of a distributed computation when the parties are mutually untrusting, faulty, leaky, and malicious.

3 Approaches
Assurance using validation, verification, and certification
Attestation using a Trusted Platform Module
Cryptographic protocols
–Multiparty computation
–Proofs of correctness ("delegation of computation")

4 Toy example (3-party correctness). Alice, holding x and F, computes y ← F(x) and sends y to Bob. Bob, holding G, computes z ← G(y) and sends z to Carol. Carol asks: is "z = G(F(x))" true?

5 Trivial solution: Carol recomputes z' ← G(F(x)) herself and checks z' = z. But this is:
Uselessly expensive
Requires Carol to fully know x, F, G
–We will want to represent these via short hashes/signatures
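The trivial solution above can be sketched in a few lines. This is a toy model (the functions F and G and the helper names are illustrative, not from the lecture): Carol checks "z = G(F(x))" by naively redoing the whole computation.

```python
def F(x):          # Alice's function (toy stand-in)
    return x * x

def G(y):          # Bob's function (toy stand-in)
    return y + 1

def carol_recompute(x, z):
    # Trivial solution: Carol redoes the entire computation.
    # Correct, but uselessly expensive, and it requires Carol
    # to fully know x, F, and G.
    return z == G(F(x))

x = 7
y = F(x)           # Alice's step
z = G(y)           # Bob's step
assert carol_recompute(x, z)          # accepts the honest run
assert not carol_recompute(x, z + 1)  # rejects a cheating Bob
```

The point of the rest of the lecture is to get the same guarantee without Carol repeating the work.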

6 Secure multiparty computation [GMW87][BGW88][CCD88]. Alice (holding x and F) computes y ← F(x); Bob (holding G) computes z ← G(y); Carol receives z. Preserves integrity, and even secrecy.

7 Secure multiparty computation [GMW87][BGW88][CCD88]. But:
–Computational blowup is polynomial in the whole computation, not in the local computation
–The computation (F and G) must be chosen in advance
–Does not preserve the communication graph: parties must be fixed in advance, otherwise…

8 Secure multiparty computation [GMW87][BGW88][CCD88]. (Diagram: Alice computes y ← F(x), Bob computes z ← G(y), and the output may need to reach any of Carol #1, Carol #2, Carol #3, who cannot all be fixed in advance.)

9 Computationally-sound (CS) proofs [Micali 94]. Alice computes y ← F(x); Bob computes z ← G(y) and πz ← prove("z=G(F(x))"); Carol runs verify on πz. Bob can generate a proof string that is:
Tiny (polylogarithmic in his own computation)
Efficiently verifiable by Carol
However, now Bob recomputes everything...
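The prove/verify interface on this slide can be mocked up as follows. A real CS proof is succinct and publicly verifiable; the MAC used here is only a stand-in to model the interface, and all names are chosen for this example.

```python
import hmac, hashlib

KEY = b"shared-setup"  # stand-in for the proof system's public parameters

def prove(statement: str) -> bytes:
    # Bob, having recomputed everything, attests to the statement.
    return hmac.new(KEY, statement.encode(), hashlib.sha256).digest()

def verify(statement: str, proof: bytes) -> bool:
    # Carol's check is cheap: one short computation,
    # independent of the cost of F and G.
    return hmac.compare_digest(proof, prove(statement))

pi = prove("z = G(F(x))")
assert verify("z = G(F(x))", pi)
assert not verify("z' = G(F(x))", pi)  # a different claim is rejected
```

Note what this interface does not fix: to produce the proof, Bob must still redo Alice's work, which is exactly the problem PCD addresses next.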

10 Proof-Carrying Data [Chiesa Tromer 09], following Incrementally-Verifiable Computation [Valiant 08]. Alice computes y ← F(x) and sends y with a proof πy that "y=F(x)"; Bob verifies πy, computes z ← G(y), and sends z with a proof πz that "z=G(y) and I got a valid proof that 'y=F(x)'"; Carol verifies πz. Each party prepares a proof string for the next one. Each proof is:
Tiny (polylogarithmic in the party's own computation)
Efficiently verifiable by the next party
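The flow on this slide can be sketched as follows, reusing the same mock MAC-based prove/verify (illustrative only, not a real PCD construction). The key difference from the CS-proof picture: each party verifies the incoming proof and then proves only its local step, so no one recomputes the whole chain.

```python
import hmac, hashlib

KEY = b"shared-setup"

def prove(claim: str) -> bytes:
    return hmac.new(KEY, claim.encode(), hashlib.sha256).digest()

def verify(claim: str, pi: bytes) -> bool:
    return hmac.compare_digest(pi, prove(claim))

def F(x): return x * x
def G(y): return y + 1

def alice(x):
    y = F(x)
    return y, prove(f"y={y} where y=F({x})")

def bob(y, pi_y, x):
    # Bob checks Alice's proof instead of recomputing F ...
    assert verify(f"y={y} where y=F({x})", pi_y)
    z = G(y)  # ... and does only his own local work.
    return z, prove(f"z={z} where z=G(F({x}))")

x = 3
y, pi_y = alice(x)
z, pi_z = bob(y, pi_y, x)
assert verify(f"z={z} where z=G(F({x}))", pi_z)  # Carol's cheap check
```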

11 Generalizing: the Proof-Carrying Data framework

12 Generalizing: distributed computations. Parties exchange messages m1, m2, …, m7, m_out and perform computation.

13 Generalizing: arbitrary interactions. The communication graph over time is any DAG.

14 Generalizing: arbitrary interactions. The computation and the graph are determined on the fly, by each party's local inputs: program, human inputs, randomness.

15 Generalizing: arbitrary interactions (continued). How do we define correctness of a dynamic distributed computation?

16 C-compliance. The system designer specifies his notion of correctness via a compliance predicate C(in, code, out) that must be locally fulfilled at every node: C takes the node's input, its code (program, human inputs, randomness), and its output, and accepts or rejects.

17 Examples of C-compliance. Correctness is a compliance predicate C(in, code, out) that must be locally fulfilled at every node. Some examples:
C = "the output is the result of correctly computing a prescribed program"
C = "the output is the result of correctly executing some program signed by the sysadmin"
C = "the output is the result of correctly executing some type-safe program" or "… program with a valid formal proof"
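The second example above ("some program signed by the sysadmin") can be sketched as a concrete predicate. The signature check is mocked with a set of approved code hashes, and all helper names are chosen for this illustration.

```python
import hashlib

SYSADMIN_APPROVED = set()  # stand-in for "signed by the sysadmin"

def approve(code_src: str):
    SYSADMIN_APPROVED.add(hashlib.sha256(code_src.encode()).hexdigest())

def C(inputs, code_src, out):
    # Compliant iff the code is sysadmin-approved AND the output is
    # really what that code produces on the inputs.
    if hashlib.sha256(code_src.encode()).hexdigest() not in SYSADMIN_APPROVED:
        return False
    env = {}
    exec(code_src, env)  # toy: re-run the code to re-derive the output
    return env["step"](inputs) == out

code = "def step(ins): return sum(ins) * 2"
approve(code)
assert C([1, 2, 3], code, 12)        # approved code, correct output
assert not C([1, 2, 3], code, 13)    # approved code, wrong output
assert not C([1, 2, 3], "def step(ins): return 0", 0)  # unapproved code
```

In real PCD the predicate is not re-executed by the verifier; it is enforced inside the proof attached to each message.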

18 Dynamically augmenting the computation with proof strings. In PCD, messages sent between parties are augmented with concise proof strings attesting to their "compliance". The distributed computation evolves as before, except that each party also generates, on the fly, a proof string to attach to each output message (each message mi now travels with a proof πi).

19 Model. Every node has access to a simple, fixed, stateless trusted functionality, essentially a signature card: the Signed-Input-and-Randomness (SIR) oracle.

20 Model: the Signed-Input-and-Randomness (SIR) oracle. Given an input string x, the oracle (holding SK) samples a random string r ← {0,1}^s and outputs r together with σ ← SIGN_SK(x, r), a short signature on (x, r), verifiable under VK. One can derandomize: generate r using a PRF.
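The SIR oracle's behavior can be sketched as follows. HMAC stands in for SIGN_SK (so verification here uses the same key rather than a public VK), and r is derived via a PRF as in the derandomization remark; all primitives are toy stand-ins.

```python
import hmac, hashlib

SK = b"card-secret-key"  # the card's signing key (toy)

def sir_oracle(x: bytes):
    # Derandomized: r <- PRF_SK("rand", x) instead of fresh coins.
    r = hmac.new(SK, b"rand" + x, hashlib.sha256).digest()
    # sigma <- SIGN_SK(x, r): a signature binding the input to
    # the randomness the card chose for it.
    sigma = hmac.new(SK, x + r, hashlib.sha256).digest()
    return r, sigma

def verify_sir(x: bytes, r: bytes, sigma: bytes) -> bool:
    # With a real signature scheme this check would use the public key VK.
    expected = hmac.new(SK, x + r, hashlib.sha256).digest()
    return hmac.compare_digest(sigma, expected)

r, sigma = sir_oracle(b"message")
assert verify_sir(b"message", r, sigma)
assert not verify_sir(b"other", r, sigma)
```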

21 Model and trust. Assume that a trusted party generates the key pair (SK, VK), tells VK to each party, and gives the oracle O_SK to each party. Similar assumptions appear in the literature: for the model, [Hofheinz Muller-Quade Unruh 05]; for the technical approach to proofs of knowledge, [Katz 07] [Moran Segev 08] [Chandran Sahai Goyal 08] [Damgard Nielsen Wichs 09].

22 Goals
Allow for any interaction between parties
Preserve the parties' communication graph (no new channels)
Allow for dynamic computations (human inputs, indeterminism, programs)
Keep the blowup in computation and communication local and polynomial
Ensure C-compliance while respecting the original distributed computation

23 (Some) envisioned applications

24 Application: correctness and integrity of the IT supply chain. Consider a system as a collection of components with specified functionalities:
–Chips on a motherboard
–Servers in a datacenter
–Software modules
C(in, code, out) checks whether the component's specification holds. Proofs are attached across component boundaries. If a proof fails, the computation is locally aborted, giving integrity and attribution.

25 Application: fault- and leakage-resilient Information Flow Control

26 Application: fault- and leakage-resilient Information Flow Control. The computation gets "secret" / "non-secret" inputs; "non-secret" inputs are signed as such. Any output labeled "non-secret" must be independent of the secrets. The system perimeter is controlled and all output can be checked (but internal computation can be leaky/faulty). C allows only:
–Non-secret inputs: initial inputs must be signed as "non-secret".
–IFC-compliant computation: subsequent computation respects Information Flow Control rules and follows a fixed schedule.
A censor at the system's perimeter inspects all outputs:
–Verifies the proof on every outgoing message
–Releases only non-secret data.

27 Application: fault- and leakage-resilient Information Flow Control (remarks). The controlled perimeter is a big assumption, but otherwise there is no hope for retroactive leakage blocking (by the time you verify, the EM emanations are out of the barn). It is applicable when the interface across the perimeter is well understood (e.g., network packets); verify using existing assurance methodology.
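The perimeter censor described above can be sketched as follows, reusing a toy MAC-based proof check (illustrative only; labels and claim format are invented for this example).

```python
import hmac, hashlib

KEY = b"shared-setup"

def prove(claim: str) -> bytes:
    return hmac.new(KEY, claim.encode(), hashlib.sha256).digest()

def verify(claim: str, pi: bytes) -> bool:
    return hmac.compare_digest(pi, prove(claim))

def censor(label: str, payload: str, pi: bytes):
    # Release only messages that carry a valid compliance proof AND
    # are labeled non-secret; block everything else at the perimeter.
    claim = f"{label}:{payload} is C-compliant"
    if label == "non-secret" and verify(claim, pi):
        return payload
    return None

pi = prove("non-secret:weather report is C-compliant")
assert censor("non-secret", "weather report", pi) == "weather report"
assert censor("secret", "launch codes", pi) is None       # wrong label
assert censor("non-secret", "tampered", pi) is None        # wrong proof
```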

28 Application: simulations and MMOs. Distributed simulation:
–Physical models
–Virtual worlds (massively multiplayer online virtual reality)
How can participants prove they have "obeyed the laws of physics"? (E.g., one cannot reach through a wall into a bank safe.) Traditionally this is centralized; P2P architectures are strongly motivated but insecure [Plummer '04] [GauthierDickey et al. '04]. Use C-compliance to enforce the laws of physics.

29 Application: simulations and MMOs, an example. Alice and Bob, playing on an airplane, can later rejoin a larger group of players and prove they did not cheat while offline: each sends a message m with an attached proof π, as in "While on the plane, I won a billion dollars, and here is a proof of that."

30 Application: type safety. Using PCD, type safety can be maintained
–even if the underlying execution platform is untrusted
–even across mutually untrusting platforms
C(in, code, out) verifies that code is type-safe and out = code(in). Type safety is very expressive:
Using dependent types (e.g., Coq) or refinement types, one can express any computable property; there is an extensive literature on what can be verified efficiently (at least with heuristic completeness, which is good enough!).
Using an object-oriented model (subclassing as constraint specification), one can leverage OO programming methodology.

31 Vision. PCD supports type safety, fault isolation & accountability, multilevel security, financial systems, distributed dynamic program analysis, antispam email policies, and proof-carrying code. Security design reduces to "compliance engineering": write down a suitable compliance predicate C.

32 More applications. Mentioned so far: fault isolation and accountability, type safety, multilevel security, simulations. Many others:
Enforcing rules in financial systems
Proof-carrying code
Distributed dynamic program analysis
Antispam email policies
Security design reduces to "compliance engineering": write down a suitable compliance predicate C. Recurring patterns emerge: signatures, censors, verify-code-then-verify-result… Introduce design patterns (a la software engineering) [GHJV95].

33 Design patterns
Use signatures to designate parties or properties
Universal compliance: take as input a (signed) description of a compliance predicate
User inputs into the local computation
Keep track of counters / graph structure
(Maybe mention before the examples? Some of these may be used there.)

34 Does this work? Established:
A formal framework
An explicit construction
–"Polynomial time", but not practically feasible (yet)
–Requires signature cards
Ongoing and future work:
Full implementation
Practicality
Reduce the requirement for signature cards (or prove its necessity)
Extensions (e.g., zero knowledge)
Interface with complementary approaches: tie "compliance" into existing methods and a larger science of security
Applications and a "compliance engineering" methodology

