Confidentiality-preserving Proof Theories for Distributed Proof Systems
Kazuhiro Minami, National Institute of Informatics
FAIS 2011

Distributed proving is an effective way to combine information in different administrative domains
Distributed authorization
– Make a granting decision by constructing a proof from security policies
– Examples: DL [Li03], DKAL [Gurevich08], SD3 [Jim01], SecPAL [Becker07], and Grey [Bauer05]
Data fusion in pervasive environments
– Infer a user's activity from sensor data owned by different organizations

Distributed Proof System
Consists of multiple principals, each with a knowledge base and an inference engine
Constructs a proof by exchanging proofs in a peer-to-peer way
Supports rules and facts in Datalog with the says operator (e.g., as in BAN logic)
[Figure: principals exchanging quoted facts of the form (p says f)]
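To make the vocabulary concrete, here is a minimal Python sketch of the objects a principal manipulates: facts, quoted facts of the form (p says f), and Datalog-style rules whose bodies contain quoted facts. The class and field names are illustrative choices, not taken from the talk.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Quoted:
    """A quoted fact (principal says fact), e.g. (p1 says f1)."""
    principal: str
    fact: str

@dataclass(frozen=True)
class Rule:
    """A Datalog rule: head holds if every quoted fact in the body is derivable."""
    head: str
    body: tuple  # tuple of Quoted

@dataclass
class Principal:
    """A principal owns a knowledge base (facts and rules) and runs inference over it."""
    name: str
    facts: set = field(default_factory=set)    # local facts known to be true
    rules: set = field(default_factory=set)    # local Datalog rules
    quoted: set = field(default_factory=set)   # quoted facts learned from other principals

# Example: p2 can derive f2 once it learns the quoted fact (p1 says f1)
p2 = Principal("p2", rules={Rule("f2", (Quoted("p1", "f1"),))})
```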

Protecting each domain's confidential information is crucial
Each organization in a virtual business coalition needs to protect its proprietary information from the others
A location server must protect users' location information with appropriate privacy policies
To do so, principals in a distributed proof system can limit access to their sensitive information with discretionary access-control policies

Determining the safety of a system involving multiple principals is not trivial
Suppose that principal p0 is willing to disclose the truth of fact f0 only to p2
What if p2 still derives fact f2?

Problem Statements
How should we define confidentiality and safety in distributed proof systems?
Is it possible to derive more facts than a system that enforces confidentiality policies on a principal-to-principal basis?
If so, is there any upper bound on the proving power of distributed proof systems?

Outline
System model based on a TTP
Safety definition based on non-deducibility
Safety analysis
– DAC system
– NE system
– CE system
Conclusion

Abstract System Model
Parameterize a distributed proof system D with a set of inference rules I and a finite set of principals P (i.e., D[P, I])
Only consider the initial and final states of system D, based on a trusted-third-party (TTP) model
Datalog inference rule: a head fact derived from a body of quoted facts, e.g., f ← q1, …, qn

Reference System D[I_S]
Two inference rules: (COND) and (SAYS) (formulas shown on the slide)
The body of a rule contains a set of quoted facts (e.g., q1 = (p1 says f1))
All information is freely shared among principals
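The slide's rule formulas are not preserved in this transcript. Based on the surrounding description (rule bodies are sets of quoted facts, and all information is freely shared), a plausible rendering of the two rules is the following; treat it as a reconstruction rather than the talk's exact notation.

```latex
% (COND): if principal p_i holds the rule f <- q_1, ..., q_n and
% every quoted fact q_j in the body has been derived,
% then p_i derives the local fact f.
\frac{(f \leftarrow q_1, \ldots, q_n) \in KB_i \qquad q_1 \quad \cdots \quad q_n}
     {f \in KB_i}\ \textsc{(Cond)}

% (SAYS): in the reference system, any principal p_j may learn
% the quoted fact (p_i says f) once f holds at p_i.
\frac{f \in KB_i}{(p_i\ \mathrm{says}\ f) \in KB_j}\ \textsc{(Says)}
```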

The TTP is a fixpoint function that computes the final state of the system
[Figure: the trusted third party takes the principals' knowledge bases KB1, …, KBn and the inference rules I, and returns the fixpoints fixpoint1(KB), …, fixpointn(KB)]
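A minimal sketch of what such a fixpoint computation over the combined knowledge bases could look like, reusing the Principal/Rule/Quoted classes from the earlier sketch. The function name and loop structure are assumptions for illustration; the reference system's (SAYS) rule is modeled as unrestricted sharing of quoted facts.

```python
def ttp_fixpoint(principals):
    """Iterate COND/SAYS-style derivations until no principal learns anything new."""
    changed = True
    while changed:
        changed = False
        # SAYS (reference system): every locally derived fact is shared with everyone
        for p in principals:
            for f in list(p.facts):
                q = Quoted(p.name, f)
                for other in principals:
                    if q not in other.quoted:
                        other.quoted.add(q)
                        changed = True
        # COND: fire any rule whose body of quoted facts is satisfied
        for p in principals:
            for r in p.rules:
                if r.head not in p.facts and all(q in p.quoted for q in r.body):
                    p.facts.add(r.head)
                    changed = True
    return {p.name: (set(p.facts), set(p.quoted)) for p in principals}
```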

Soundness Requirement
Definition (Soundness): a distributed proof system D[I] is sound if, for every initial state, every fact it proves is also provable with the reference system D[I_S]
That is, a confidentiality-preserving system D[I] should not prove a fact that is not provable with the reference system D[I_S]
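In the TTP notation used above, the requirement can be written compactly as follows; the TTP_{D[I]} notation is ours, chosen to match the fixpoint description on the previous slide.

```latex
% D[I] is sound iff, for every initial global state KB, every fact
% derivable under rules I is also derivable under the reference rules I_S.
\forall KB:\quad \mathrm{TTP}_{D[I]}(KB)\ \subseteq\ \mathrm{TTP}_{D[I_S]}(KB)
```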

Outline
System model based on a TTP
Safety definition based on non-deducibility
Safety analysis
– DAC system
– NE system
– CE system
Conclusion

Confidentiality Policies
Each principal defines a discretionary access-control policy on its local facts
Each confidentiality policy is expressed with the predicate release(principal_name, fact_name)
E.g., if Alice is willing to disclose her location to Bob, she can add the policy release(Bob, loc(Alice, L)) to her knowledge base

Attack Model
A set A of malicious, colluding principals tries to infer the truth of a confidential fact f in a non-malicious principal pi's knowledge base KBi
[Figure: the set A inside system D]
Fact f0 is confidential because none of the principals in A is authorized to learn its truth
Fact f1 is NOT confidential because p4 is authorized to learn its truth

Attack Model (Cont.)
[Figure: the set A inside system D]
Malicious principals only use their initial and final states to perform inferences

Attack Model (Cont.)
[Figure: the set A inside system D; both the initial and final states of the principals in A are available to them]
Malicious principals only use their initial and final states to perform inferences

Sutherland's non-deducibility models inferences by considering all possible worlds W
Consider two information functions v1 : W → X (the public view) and v2 : W → Y (the private view)
[Figure: from a public observation x, the attacker considers the worlds W' = { w | v1(w) = x } and the private values Y' they map to under v2; information flows if some private value y' can be ruled out ("This cannot be possible!")]

Nondeducibility considers information flow between two information functions defined on system configurations
– W: a set of initial configurations
– Function v1: the initial and final states of the malicious principals in set A
– Function v2: the confidential facts that are actually maintained by the non-malicious principals
[Figure: information flow from v2 to v1]

Safety Definition
We say that a distributed proof system D[P, I] is safe if for every possible initial state KB, for every possible subset of principals A, and for every possible subset of confidential facts Q, there exists another initial state KB' such that
1. v1(KB) = v1(KB'), and
2. Q = v2(KB').
Condition 1: the malicious principals in A have the same initial and final local states
Condition 2: the non-malicious principals could possess any subset of the confidential facts
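Restating the definition in symbols (notation ours, following the v1/v2 functions introduced on the previous slide):

```latex
% D[P, I] is safe iff the adversary's view v_1 rules out nothing about
% which confidential facts the non-malicious principals actually hold.
\forall KB\ \ \forall A \subseteq P\ \ \forall Q \subseteq \mathrm{ConfidentialFacts}:\quad
\exists KB'\ \text{such that}\ \ v_1(KB) = v_1(KB')\ \wedge\ v_2(KB') = Q
```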

Outline
System model based on a TTP
Safety definition based on non-deducibility
Safety analysis
– DAC system
– NE system
– CE system
Conclusion

DAC System D[I_DAC]
Enforces confidentiality policies on a principal-to-principal basis
Inference rules: (COND) and (DAC-SAYS) (formulas shown on the slide)
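As with the reference system, the rule formulas are not preserved in this transcript. Given the release() policies introduced earlier, a plausible reconstruction of (DAC-SAYS) is that a quoted fact is shared only when the owner's policy authorizes the recipient, with (COND) unchanged. Treat the following as a reconstruction, not the talk's exact notation.

```latex
% (DAC-SAYS): p_j learns (p_i says f) only if p_i's policy releases f to p_j.
\frac{f \in KB_i \qquad \mathrm{release}(p_j, f) \in KB_i}
     {(p_i\ \mathrm{says}\ f) \in KB_j}\ \textsc{(DAC-Says)}
```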

Example Derivations in D[I_DAC]
[Figure: a derivation applying (DAC-SAYS) and (COND)]
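A minimal sketch of how the principal-to-principal check might be enforced in the fixpoint loop shown earlier: only the SAYS step changes, gating each quoted fact on a release policy. The `release` attribute on each principal is an assumed field added for this illustration, not part of the talk's model.

```python
def dac_fixpoint(principals):
    """Like ttp_fixpoint, but a quoted fact is shared only when released."""
    changed = True
    while changed:
        changed = False
        # DAC-SAYS: share (p says f) with `other` only if p released f to `other`
        for p in principals:
            for f in list(p.facts):
                q = Quoted(p.name, f)
                for other in principals:
                    # `release` is assumed to be a set of (recipient, fact) pairs
                    if (other.name, f) in getattr(p, "release", set()) and q not in other.quoted:
                        other.quoted.add(q)
                        changed = True
        # COND is unchanged
        for p in principals:
            for r in p.rules:
                if r.head not in p.facts and all(q in p.quoted for q in r.body):
                    p.facts.add(r.head)
                    changed = True
    return {p.name: (set(p.facts), set(p.quoted)) for p in principals}
```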

D[P, I_DAC] is safe because deviations in one principal's knowledge base are invisible to the others
Let P and A be {p0, p1} and {p1}, respectively
[Figure: two alternative initial states KB0 and KB'0 for p0, with p1's knowledge base KB1 unchanged]
Principal p1 cannot distinguish KB0 from KB'0

NE System D[I_NE]
Introduces a function Ei to represent an encrypted value
Associates each fact or quoted fact q with an encrypted value e
Each principal performs inference on an encrypted fact (q, e)
Principals cannot infer the truth of an encrypted fact without decrypting it
The TTP discards encrypted facts from the final system state
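A rough sketch of the idea, under the simplifying assumption that encryption is modeled abstractly as wrapping a quoted fact with the owner's name (no real cryptography): a principal can forward and use an encrypted quoted fact in inference without learning its truth, and only an authorized recipient "decrypts" it. This is illustrative only and glosses over the actual (ENC-SAYS), (ECOND), (DEC1), and (DEC2) rules.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Encrypted:
    """An encrypted fact (q, e): the quoted fact q paired with a value that
    only `owner` can produce and only an authorized recipient may open."""
    quoted: "Quoted"   # reuses Quoted from the first sketch
    owner: str         # principal whose key encrypted the value

def decrypt(enc, recipient, release_policy):
    """Return the plaintext quoted fact if the owner released it to `recipient`,
    otherwise None: without decryption the truth value stays hidden."""
    if (recipient, enc.quoted.fact) in release_policy.get(enc.owner, set()):
        return enc.quoted
    return None
```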

Inference Rules I_NE
(ECOND), (DEC1), (DEC2), and (ENC-SAYS) (formulas shown on the slide)

Example Derivations
[Figure: a derivation applying (ENC-SAYS), (ECOND), (ENC-SAYS), (DEC1), (DEC2), and (ECOND)]

Analysis of System D[I_NE]
The strategy we used for the DAC system does not work
We need to make sure that every malicious principal receives an encrypted fact of the same structure
[Figure: the malicious principals A and knowledge base KB0]

NE System is Safe
All encrypted values must be decrypted in the exact reverse order of encryption
We can collapse a proof of a malicious principal's fact so that all the confidential facts are mentioned only in non-malicious principals' rules
Thus, we can make all the confidential facts invisible to the malicious principals by modifying the non-malicious principals' rules

Conversion Method – Part 1
Keep collapsing proofs by modifying non-malicious principals' rules
– If a proof contains a subsequence of a certain form (shown on the slide), replace that subsequence with the collapsed rule application (also shown on the slide)
Eventually, all the confidential facts appear only in non-malicious principals' rules

Conversion Method – Part 2
Given a set of quoted facts Q that should hold in KB'
Case 1: (pi says f) is not in Q, but f is in KBi*
– Remove (pi says f) from the body of every non-malicious principal's rule
Case 2: (pi says f) is in Q, but f is not in KBi*
– Remove every non-malicious principal's rule whose body contains (pi says f)
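The two cases amount to a simple rule-rewriting pass over the non-malicious principals' rules. A sketch, assuming Q is a set of Quoted facts, `actual` maps each principal to the facts it really holds (KBi*), and Rule/Quoted are the classes from the first sketch:

```python
def convert_rules(rules, Q, actual):
    """Rewrite non-malicious principals' rules for the alternative state KB'.

    Case 1: (p says f) not in Q but f actually holds at p
            -> drop (p says f) from rule bodies.
    Case 2: (p says f) in Q but f does not actually hold at p
            -> drop every rule whose body mentions (p says f).
    """
    new_rules = []
    for r in rules:
        body = []
        drop_rule = False
        for q in r.body:
            holds = q.fact in actual.get(q.principal, set())
            if q not in Q and holds:        # Case 1: erase the quoted fact
                continue
            if q in Q and not holds:        # Case 2: erase the whole rule
                drop_rule = True
                break
            body.append(q)
        if not drop_rule:
            new_rules.append(Rule(r.head, tuple(body)))
    return new_rules
```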

CE System D[I_CE] is NOT Safe
With commutative encryption, encrypted values can be decrypted in an arbitrary order (rule (CE-DEC), shown on the slide)
Consequently, we cannot collapse a proof as we did for the NE system

Summary
Developed a formal definition of safety for distributed proof systems based on the notion of nondeducibility
Showed that the NE system, which derives more facts than the DAC system, is indeed safe
Showed that the CE system, which extends the NE system with commutative encryption, is not safe
The proof system with the maximum proving power therefore lies somewhere between the NE and CE systems

Thank you!