1
Confidentiality-preserving Proof Theories for Distributed Proof Systems
Kazuhiro Minami, National Institute of Informatics
FAIS 2011
2
Distributed proving is an effective way to combine information in different administrative domains
Distributed authorization
– Make a granting decision by constructing a proof from security policies
– Examples: DL [Li03], DKAL [Gurevich08], SD3 [Jim01], SecPAL [Becker07], and Grey [Bauer05]
Data fusion in pervasive environments
– Infer a user's activity from sensor data owned by different organizations
3
Distributed Proof System
– Consists of multiple principals, each with a knowledge base and an inference engine
– Constructs a proof by exchanging proofs in a peer-to-peer way
– Supports rules and facts in Datalog extended with the says operator (as in BAN logic); a quoted fact (p says f) is a fact attested by another principal
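For instance (an illustrative rule in our own notation, not one taken from the slides), a principal might hold the Datalog rule

    grant(U) \leftarrow (p_1 \text{ says } employee(U)) \wedge (p_2 \text{ says } clearance(U))

where each conjunct in the body is a quoted fact whose truth must be attested by the named principal.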
4
Protecting each domain's confidential information is crucial
– Each organization in a virtual business coalition needs to protect its proprietary information from the others
– A location server must protect users' location information with proper privacy policies
To do so, principals in a distributed proof system could limit access to their sensitive information with discretionary access-control policies
5
Determining the safety of a system involving multiple principals is not trivial
Suppose that principal p0 is willing to disclose the truth of fact f0 only to p2
What if p2 still derives a fact f2 whose proof depends on f0 and passes through principals that are not authorized to learn f0's truth?
6
Problem Statements
– How should we define confidentiality and safety in distributed proof systems?
– Is it possible to derive more facts than a system that enforces confidentiality policies on a principal-to-principal basis?
– If so, is there an upper bound on the proving power of distributed proof systems?
7
Outline
– System model based on a TTP
– Safety definition based on non-deducibility
– Safety analysis
  – DAC system
  – NE system
  – CE system
– Conclusion
8
Abstract System Model
– Parameterize a distributed proof system D with a set of inference rules I and a finite set of principals P (i.e., D[P, I])
– Only consider the initial and final states of system D, based on a trusted-third-party (TTP) model
– Datalog inference rule: f ← q1 ∧ … ∧ qn, where each qk is a quoted fact (p_k says f_k)
9
Reference System D[I_S]
– Defined by the rules (COND) and (SAYS), given as figures on the original slide
– The body of a rule contains a set of quoted facts (e.g., q1 = (p1 says f1))
– All information is freely shared among principals
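Informally (our reconstruction from the surrounding text, since the rule figures did not survive extraction), the two rules can be read as:

    \frac{(f \leftarrow q_1 \wedge \cdots \wedge q_n) \in KB_i \qquad q_1 \;\cdots\; q_n}{f}\;(\textsc{Cond})
    \qquad
    \frac{f \in KB_i}{p_i \text{ says } f}\;(\textsc{Says})

That is, a principal may fire one of its local rules once all the quoted facts in its body have been established, and in the reference system any principal may learn (p_i says f) whenever f holds in KB_i.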
10
The TTP is a fixpoint function that computes the final state of the system
[Figure: principals p1, …, pn send their knowledge bases KB1, …, KBn to the trusted third party, which applies the inference rules I and returns fixpoint_i(KB) to each principal p_i]
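A minimal sketch of the fixpoint computation for the reference system, in which information flows freely (the encoding is our own: a fact is a string, a quoted fact is a (principal, fact) pair, and a rule is a (head, body) pair):

    # Toy TTP fixpoint: facts[i] is p_i's set of facts; rules[i] is a list
    # of (head, body) pairs meaning "derive head at p_i once every quoted
    # fact (p_j says f) in the body holds", i.e. once f is in facts[j].
    def fixpoint(facts, rules):
        changed = True
        while changed:
            changed = False
            for i, kb_rules in rules.items():
                for head, body in kb_rules:
                    if head not in facts[i] and all(f in facts[j] for j, f in body):
                        facts[i].add(head)   # (COND): fire the rule
                        changed = True
        return facts

    facts = {"p0": {"f0"}, "p1": set(), "p2": set()}
    rules = {"p1": [("f1", [("p0", "f0")])],
             "p2": [("f2", [("p1", "f1")])]}
    print(fixpoint(facts, rules))  # p1 derives f1, then p2 derives f2

Note that in this unrestricted model p2 ends up deriving f2, and thereby learns that f0 holds at p0 — exactly the kind of flow the confidentiality-preserving systems below must control.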
11
Soundness Requirement
Definition (Soundness): A distributed proof system D[I] is sound if, for every initial state KB, every fact it derives is also derivable in the reference system (i.e., fixpoint_I(KB) ⊆ fixpoint_{I_S}(KB))
In other words, a confidentiality-preserving system D[I] should not prove a fact that is not provable in the reference system D[I_S]
12
Outline
– System model based on a TTP
– Safety definition based on non-deducibility
– Safety analysis
  – DAC system
  – NE system
  – CE system
– Conclusion
13
Confidentiality Policies
– Each principal defines a discretionary access-control policy on its local facts
– Each confidentiality policy is defined with the predicate release(principal_name, fact_name)
– E.g., if Alice is willing to disclose her location to Bob, she could add the policy release(Bob, loc(Alice, L)) to her knowledge base
14
Attack Model
A set A of malicious, colluding principals tries to infer the truth of a confidential fact f in a non-malicious principal p_i's knowledge base KB_i
[Figure: system D with a malicious subset A]
– Fact f0 is confidential because none of the principals in A is authorized to learn its truth
– Fact f1 is NOT confidential because p4, a principal in A, is authorized to learn its truth
15
Attack Model (Cont.)
[Figure: system D with malicious subset A]
Malicious principals use only their initial and final states (KB_i and fixpoint_i(KB) for each p_i in A) to perform inferences; nothing else is available to them
17
Sutherland's non-deducibility model captures inferences by considering all possible worlds W
Consider two information functions v1 : W → X (the public view) and v2 : W → Y (the private view)
[Figure: observing x = v1(w) narrows the possible worlds to W' = { w | v1(w) = x }; if this rules out some private value y' in Y, information has flowed from the private view to the public observer — which must not be possible]
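Formally (stated from Sutherland's standard definition, since the slide's figure did not survive extraction), v2 is nondeducible from v1 when every private value remains possible under every public observation:

    \forall w, w' \in W \;\; \exists w'' \in W : \; v_1(w'') = v_1(w) \,\wedge\, v_2(w'') = v_2(w')

Equivalently, observing v1 never eliminates any value that v2 could take.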
18
Nondeducibility considers information flow between two information functions over the system configuration
– W: the set of initial configurations
– Function v1: the initial and final states of the malicious principals in set A
– Function v2: the confidential facts that are actually maintained by the non-malicious principals
Safety requires that no information flow from v2 to v1
19
Safety Definition
We say that a distributed proof system D[P, I] is safe if for every possible initial state KB, every possible subset of principals A, and every possible subset of confidential facts Q, there exists another initial state KB' such that
1. v1(KB) = v1(KB'), and
2. Q = v2(KB').
That is, the malicious principals in A observe the same initial and final local states, while the non-malicious principals could possess any subset of the confidential facts
20
Outline
– System model based on a TTP
– Safety definition based on non-deducibility
– Safety analysis
  – DAC system
  – NE system
  – CE system
– Conclusion
21
DAC System D[I_DAC]
Enforces confidentiality policies on a principal-to-principal basis, using the rules (COND) and (DAC-SAYS) (given as figures on the original slide)
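A minimal sketch of the DAC restriction on top of the earlier toy encoding (illustrative names and encoding, not the paper's formal rules): a quoted fact (p_i says f) is usable by p_j only if p_i's knowledge base also contains release(p_j, f).

    # Toy (DAC-SAYS): p_j may use the quoted fact (p_i says f) only if
    # f holds at p_i AND p_i has released f to p_j.
    def dac_says(facts, release, i, f, j):
        return f in facts[i] and j in release[i].get(f, set())

    facts = {"p0": {"f0"}}
    release = {"p0": {"f0": {"p1"}}}      # p0 releases f0 only to p1
    print(dac_says(facts, release, "p0", "f0", "p1"))  # True
    print(dac_says(facts, release, "p0", "f0", "p2"))  # False

Under this rule, the derivation chain from the fixpoint example breaks whenever any intermediate principal in a proof lacks the required release policy.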
22
Example Derivations in D[I_DAC]
[Figure: a sample derivation applying (DAC-SAYS) and (COND)]
23
D[P, I_DAC] is safe because changes confined to one principal's knowledge base are invisible to the others
Let P and A be {p0, p1} and {p1}, respectively
[Figure: KB0 and KB'0 differ only in p0's confidential facts; principal p1 cannot distinguish KB0 from KB'0]
24
NE System D[I_NE]
– Introduces a function E_i to represent an encrypted value
– Associates each fact or quoted fact q with an encrypted value e
– Each principal performs inference on encrypted facts (q, e)
– Principals cannot infer the truth of an encrypted fact without decrypting it
– The TTP discards encrypted facts from the final system state
25
Inference Rules I_NE
(ECOND), (DEC1), (DEC2), and (ENC-SAYS) [given as figures on the original slide]
26
Example Derivations
[Figure: a sample derivation applying (ENC-SAYS), (ECOND), (DEC1), and (DEC2)]
27
Analysis of System D[I_NE]
– The strategy we used for the DAC system does not work
– We need to make sure that every malicious principal receives an encrypted fact of the same structure
[Figure: malicious principals A interacting with knowledge base KB0]
28
NE System is Safe
– All the encrypted values must be decrypted in the exact reverse order of encryption
– We can collapse a proof for a malicious principal's fact so that all the confidential facts are mentioned only in non-malicious principals' rules
– Thus, we can make all the confidential facts invisible to the malicious principals by modifying non-malicious principals' rules
29
Conversion Method – Part 1
Keep collapsing proofs by modifying non-malicious principals' rules
– If a proof contains a subsequence in which an encrypted fact is produced and later decrypted (the exact patterns were given as figures on the original slide), replace that subsequence with a direct derivation that bypasses the encryption
Eventually, all the confidential facts appear only in non-malicious principals' rules
30
Conversion Method – Part 2
Given a set of quoted facts Q that should be in KB'
– Case 1: (p_i says f) is not in Q, but f is in KB_i*: remove (p_i says f) from the body of every non-malicious principal's rule
– Case 2: (p_i says f) is in Q, but f is not in KB_i*: remove every non-malicious principal's rule whose body contains (p_i says f)
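A sketch of Part 2 as an operation on rule sets, reusing the (head, body) encoding from the fixpoint example (Q holds the target quoted facts as (principal, fact) pairs, and final_facts[i] stands for KB_i*; the encoding is ours, not the paper's):

    def convert(rules, Q, final_facts):
        new_rules = []
        for head, body in rules:
            # Case 2: the body mentions a quoted fact that must hold in KB'
            # but was not actually true -> drop the whole rule
            if any((i, f) in Q and f not in final_facts[i] for i, f in body):
                continue
            # Case 1: a body fact was actually true but is not in Q
            # -> drop just that quoted fact from the body
            new_body = [(i, f) for i, f in body
                        if not (f in final_facts[i] and (i, f) not in Q)]
            new_rules.append((head, new_body))
        return new_rules

    rules = [("f2", [("p0", "f0"), ("p1", "f1")])]
    Q = {("p1", "f1")}
    final_facts = {"p0": {"f0"}, "p1": {"f1"}}
    print(convert(rules, Q, final_facts))  # [('f2', [('p1', 'f1')])]

Both cases preserve the malicious principals' final states while letting the non-malicious side hold exactly the confidential facts in Q.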
31
CE System D[I_CE] is NOT Safe
– With commutative encryption, an encrypted value can be decrypted in any arbitrary order (rule (CE-DEC), given as a figure on the original slide)
– Consequently, we cannot collapse a proof as we did for the NE system
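To see why decryption order matters, here is a toy contrast (our own illustration, not the paper's formal rules): nested encryption behaves like a stack, while commutative encryption behaves like a multiset of layers.

    # Nested (NE-style): only the outermost layer can be removed, so
    # decryptions must occur in exact reverse order of encryptions.
    def nested_decrypt(layers, principal):
        if layers and layers[-1] == principal:
            return layers[:-1]
        raise ValueError("not the outermost layer")

    # Commutative (CE-style): any layer can be removed at any time.
    def commutative_decrypt(layers, principal):
        rest = list(layers)
        rest.remove(principal)   # raises ValueError if no such layer
        return rest

    onion = ["p1", "p2", "p3"]               # encrypted by p1, then p2, then p3
    print(nested_decrypt(onion, "p3"))       # ok: p3 is the outermost layer
    print(commutative_decrypt(onion, "p1"))  # legal under CE, impossible under NE

The stack discipline is what lets an NE proof be collapsed layer by layer; once layers commute, that inductive argument no longer applies.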
32
Summary
– Developed formal definitions of safety for distributed proof systems based on the notion of nondeducibility
– Showed that the NE system, which derives more facts than the DAC system, is indeed safe
– Showed that the CE system, which extends the NE system with commutative encryption, is unsafe
– The proof system with the maximum proving power therefore lies somewhere between the NE and CE systems
33
Thank you!