CS 711 Fall 2002 Programming Languages Seminar Andrew Myers 2. Noninterference 4 Sept 2002.



2 Information-Flow Security
Security properties based on information flow describe the end-to-end behavior of a system.
Access control: "This file is readable only by processes I have granted authority to."
Information-flow control: "The information in this file may be released only to appropriate output channels, no matter how much intervening computation manipulates it."

3 Noninterference [Goguen & Meseguer 1982, 1984]
The low output of a system is unaffected by high input.
[Diagram: runs with the same low inputs L but different high inputs H1, H2 produce the same low outputs.]

4 Security properties
Confidentiality: is information secret? L = public, H = confidential
Integrity: is information trustworthy? L = trusted, H = untrusted
Partial order: L ⊑ H, H ⋢ L; information can only flow upward in the order.
Channels: ways for inputs to influence outputs.
[Diagram: a channel from high input H1 to low output L.]

5 Formalization
No agreement on how to formalize in general.
GM84 (simplified): a system is defined by a transition function do : S × E → S and a low output function out : S → O (what the low user can see).
– S is the set of system states
– E is the set of events (inputs), each either high or low
– a trace τ is a sequence of state-event pairs ((s0,e0), (s1,e1), …) where si+1 = do(si, ei)
Noninterference: for all event histories (e0,…,en) that differ only in high events, out(sn) is the same, where sn is the final state of the corresponding trace.
Alternatively: out(sn) is defined by the result of a purged event history (the history with high events deleted).
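The GM-style definition can be made concrete with a toy system. The following is a hypothetical sketch (the system, its events, and `purge` are illustrative, not from the slides): noninterference is checked by comparing the low output of a history against the low output of its purge.

```python
# Hypothetical sketch of the Goguen-Meseguer formalization.
# States are integers; events are ("H", x) or ("L", x).

def do(s, e):
    """Transition function do : S x E -> S for a toy system."""
    level, x = e
    if level == "L":
        return s + x   # low events advance the low counter
    return s           # high events do not affect this particular system

def out(s):
    """Low output function out : S -> O (what the low user can see)."""
    return s

def run(s0, events):
    """Final state reached from s0 after an event history."""
    s = s0
    for e in events:
        s = do(s, e)
    return s

def purge(events):
    """Delete high events from a history."""
    return [e for e in events if e[0] == "L"]

def noninterfering(s0, events):
    """GM noninterference: low output on a history = low output on its purge."""
    return out(run(s0, events)) == out(run(s0, purge(events)))

history = [("H", 7), ("L", 1), ("H", 3), ("L", 2)]
print(noninterfering(0, history))  # prints True: high events are ignored
```

A leaky `do` (say, one where high events also changed the counter) would fail the same check on some history.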

6 Example
[Diagram: a state machine whose transitions are labeled with high events h1, h2 and the low event l, with low outputs 2 and 3.]
Visible output from input sequences (l), (h1,l), (h2,l) is 3.
Visible output from input sequences (), (h1), (h2) is 2.
The low part of the input determines the visible results.

7 Limitations
Doesn't deal with all transition functions:
– partial (e.g., nontermination)
– nondeterministic (e.g., concurrency)
– assumes sequential input and output

8 A generalization
Key idea: the behaviors of the system C should not reveal more information than the low inputs.
Consider applying C to inputs s. Define:
– ⟦C⟧s is the result of C applied to s (plays the role of "do")
– s1 =_L s2 means inputs s1 and s2 are indistinguishable to the low user (same "purge")
– ⟦C⟧s1 ≈_L ⟦C⟧s2 means the results are indistinguishable; ≈_L is the low view relation (same "out")
Noninterference for C: s1 =_L s2 ⇒ ⟦C⟧s1 ≈_L ⟦C⟧s2
"The low observer doesn't learn anything new."
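For a finite state space this definition can be checked by brute force. A hypothetical sketch: C is a function on memories, =_L compares only the low variable, and every low-equivalent pair of inputs is tested (the memories and the two sample programs are illustrative).

```python
from itertools import product

# Memories map variable names to bits; 'l' is low, 'h' is high.

def low_eq(m1, m2):
    """s1 =_L s2: memories agree on the low variable."""
    return m1["l"] == m2["l"]

def noninterfering(C):
    """Check s1 =_L s2 implies C(s1) =_L C(s2) over all memories."""
    mems = [{"h": h, "l": l} for h, l in product([0, 1], repeat=2)]
    return all(low_eq(C(dict(m1)), C(dict(m2)))
               for m1 in mems for m2 in mems if low_eq(m1, m2))

def secure(m):   # l := 1
    m["l"] = 1
    return m

def leaky(m):    # l := h  (explicit flow from high to low)
    m["l"] = m["h"]
    return m

print(noninterfering(secure), noninterfering(leaky))  # prints: True False
```

The witness for `leaky` is the pair {h↦0, l↦0} =_L {h↦1, l↦0}, whose results disagree on l.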

9 Unwinding condition
Induction hypothesis for proving noninterference. Assume ⟦C⟧ and ≈_L are defined using traces.
– High step: if s1 =_L s1′ and s1 takes a high step to s2, then s2 =_L s1′.
– Low step: if s1 =_L s1′, s1 takes a low step to s2, and s1′ takes the same low step to s2′, then s2 =_L s2′.
By induction: traces differing only in high steps, starting from equivalent states, preserve equivalence.
=_L must be an equivalence relation; in particular, transitivity is needed.

10 Example
The "system" is a program with a memory:
if h1 then h2 := 0 else h2 := 1; l := 1
A state is a configuration s = ⟨c, m⟩. Define ⟨c1,m1⟩ =_L ⟨c2,m2⟩ if the configurations are identical after:
– erasing high terms from ci
– erasing high memory locations from mi
The choice of =_L controls what the low observer can see at a moment in time. The current command c is included in the state to allow proof by induction.

11 Example
With h1 = 0:
⟨if h1 then h2 := 0 else h2 := 1; l := 1, {h1↦0, h2↦1, l↦0}⟩
→ ⟨h2 := 1; l := 1, {h1↦0, h2↦1, l↦0}⟩
→ ⟨l := 1, {h1↦0, h2↦1, l↦0}⟩
→ {h1↦0, h2↦1, l↦1}
With h1 = 1:
⟨if h1 then h2 := 0 else h2 := 1; l := 1, {h1↦1, h2↦1, l↦0}⟩
→ ⟨h2 := 0; l := 1, {h1↦1, h2↦1, l↦0}⟩
→ ⟨l := 1, {h1↦1, h2↦0, l↦0}⟩
→ {h1↦1, h2↦0, l↦1}
Corresponding configurations in the two traces are related by =_L.
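The two traces above can be reproduced with a tiny small-step interpreter. This is a sketch under assumed representations (commands as nested tuples, constant-only expressions), not part of the original slides:

```python
# Commands: ("assign", var, const), ("if", var, then, else), ("seq", c1, c2).
# ("skip",) marks a finished command.

def step(c, m):
    """One small step of the configuration <c, m>."""
    kind = c[0]
    if kind == "assign":
        _, var, val = c
        m2 = dict(m)
        m2[var] = val
        return ("skip",), m2
    if kind == "if":
        _, var, then, els = c
        return (then if m[var] else els), m
    if kind == "seq":
        _, c1, c2 = c
        if c1 == ("skip",):
            return c2, m
        c1p, mp = step(c1, m)
        return ("seq", c1p, c2), mp
    raise ValueError(c)

def trace(c, m):
    """Full trace of configurations down to skip."""
    t = [(c, m)]
    while c != ("skip",):
        c, m = step(c, m)
        t.append((c, m))
    return t

prog = ("seq",
        ("if", "h1", ("assign", "h2", 0), ("assign", "h2", 1)),
        ("assign", "l", 1))

t0 = trace(prog, {"h1": 0, "h2": 1, "l": 0})
t1 = trace(prog, {"h1": 1, "h2": 1, "l": 0})
print(t0[-1][1], t1[-1][1])  # final memories differ only in high locations
```

Erasing the high locations h1, h2 from each pair of corresponding memories shows the two traces are pointwise =_L-related, matching the slide.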

12 Nontermination
Is this program secure?
while h > 0 do h := h+1; l := 1
{h↦0, l↦0} →* {h↦0, l↦1}
{h↦1, l↦0} →* {h↦i, l↦0} (∀ i > 0)
The low observer learns the value of h by observing nontermination and the change to l. But we might want to ignore this channel to make analysis feasible.
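The termination channel can be simulated by bounding execution; the fuel budget here is an assumed stand-in for an observer who can recognize divergence:

```python
def run_bounded(h, fuel=1000):
    """while h > 0 do h := h + 1; l := 1, run with a step budget.

    Returns the final value of l, or None if the budget is exhausted
    (modelling the observation of nontermination: l is never assigned).
    """
    while h > 0:
        h = h + 1
        fuel -= 1
        if fuel == 0:
            return None  # diverging run: the low assignment never happens
    return 1             # l := 1

print(run_bounded(0), run_bounded(1))  # prints: 1 None
```

The low observer distinguishes h = 0 (sees l = 1) from h > 0 (sees nothing), even though no low variable syntactically depends on h.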

13 Equivalence classes
The equivalence relation =_L generates equivalence classes of states indistinguishable to the attacker:
[s]_L = { s′ | s′ =_L s }
Noninterference ⇒ transitions act uniformly on each equivalence class.
Given a trace τ = (s1, s2, …), the low observer sees at most ([s1]_L, [s2]_L, …).
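Computing [s]_L for a toy two-variable memory makes the "classes of indistinguishable states" reading concrete (a hypothetical sketch; the memories are illustrative):

```python
from itertools import product

def low_eq(m1, m2):
    """=_L on memories: agreement on the low variable l."""
    return m1["l"] == m2["l"]

def classes(states):
    """Partition states into the equivalence classes [s]_L of =_L."""
    part = []
    for s in states:
        for cls in part:
            if low_eq(s, cls[0]):
                cls.append(s)
                break
        else:
            part.append([s])
    return part

states = [{"h": h, "l": l} for h, l in product([0, 1], repeat=2)]
for cls in classes(states):
    print(cls)
# Two classes of two states each: one per value of l, each
# containing both values of h.
```

A noninterfering transition function must map every state in a class into a single class, which is what makes the per-class induction on traces go through.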

14 Low views
The low view relation ≈_L on traces modulo =_L determines the attacker's ability to observe system execution.
– Termination-sensitive, but no ability to see intermediate states: (s1, s2, …, sm) ≈_L (s′1, s′2, …, s′n) if sm =_L s′n; all infinite traces are related to each other by ≈_L.
– Termination-insensitive: as above, but infinite traces are related by ≈_L to all traces.
– Timing-sensitive: (s1, s2, …, sn) ≈_L (s′1, s′2, …, s′n) if sn =_L s′n (traces of equal length); all infinite traces are related to each other by ≈_L.
≈_L is not always an equivalence relation!

15 Nondeterminism
Two sources of nondeterminism:
– input nondeterminism
– internal nondeterminism
GM assume no internal nondeterminism, but concurrent systems are nondeterministic:
s1 → s1′ implies s1 | s2 → s1′ | s2
s2 → s2′ implies s1 | s2 → s1 | s2′
Noninterference for nondeterministic systems?
∀ s1, s2. s1 =_L s2 ⇒ ⟦C⟧s1 ≈_L ⟦C⟧s2

16 Possibilistic security [Sutherland 1986, McCullough 1987]
The result ⟦C⟧s of a system is the set of possible outcomes (traces).
The low view relation on traces is lifted to sets of traces:
⟦C⟧s1 ≈_L ⟦C⟧s2 if ∀τ1 ∈ ⟦C⟧s1. ∃τ2 ∈ ⟦C⟧s2. τ1 ≈_L τ2
and ∀τ2 ∈ ⟦C⟧s2. ∃τ1 ∈ ⟦C⟧s1. τ2 ≈_L τ1
"For any trace produced on input s1 there is an indistinguishable one produced on s2 (and vice versa)."
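When the outcome sets are finite, the lifted relation is directly checkable. A hypothetical sketch for the thread-pool program l := true | l := false | l := h: ⟦C⟧s enumerates the final memories reachable under any scheduling, and the two-way ∀∃ condition collapses to equality of low views.

```python
def outcomes(h):
    """All possible final (h, l) memories of  l := true | l := false | l := h.

    Whichever of the three assignments the scheduler runs last
    determines the final value of l.
    """
    return {(h, True), (h, False), (h, h)}

def low_view(results):
    """What the low user can see of an outcome set: the l components."""
    return {l for _h, l in results}

def possibilistically_secure():
    """Both directions of the lifted relation hold iff low views agree."""
    return low_view(outcomes(True)) == low_view(outcomes(False))

print(possibilistically_secure())  # prints True
```

For either value of h the low view is {True, False}, so every outcome on one input has a low-indistinguishable match on the other, exactly the slide-18 argument below.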

17 Proving possibilistic security
Almost the same induction hypothesis as the unwinding condition:
– High step: if s1 =_L s1′ and s1 takes a high step to s2, then s2 =_L s1′.
– Low step: if s1 =_L s1′ and s1 takes a low step to s2, show that there exists some transition s1′ → s2′ with s2 =_L s2′.
That is, show that there is a transition that preserves state equivalence (for termination-insensitive security).

18 Example
l := true | l := false | l := h
h = true: the possible results are {h↦true, l↦false}, {h↦true, l↦true}
h = false: {h↦false, l↦false}, {h↦false, l↦true}
The result sets are related by ≈_L, so the program is possibilistically secure.

19 What is wrong?
l := true | l := false | l := h
Round-robin scheduler: the program is equivalent to l := h.
Random scheduler: h is the most probable value of l.
The system has a refinement with an information leak.
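A refinement can be simulated by fixing the scheduler. In this sketch (the thread names and scheduler functions are assumptions for illustration), the last of the three atomic assignments to run determines l; a scheduler that always runs l := h last is one refinement of the possibilistic abstraction.

```python
import random

def run(h, scheduler):
    """l := true | l := false | l := h with atomic assignments.

    The scheduler picks the order of the three writes; last write wins.
    """
    writes = {"t": True, "f": False, "copy": h}
    l = None
    for thread in scheduler(["t", "f", "copy"]):
        l = writes[thread]
    return l

def round_robin(threads):
    """A refinement that always schedules l := h last."""
    return ["t", "f", "copy"]

def random_sched(threads):
    """A uniformly random interleaving of the three writes."""
    return random.sample(threads, len(threads))

# Under the round-robin refinement the program behaves as l := h:
print(run(True, round_robin), run(False, round_robin))  # prints: True False

# Under a random scheduler, h is the most probable value of l:
random.seed(0)
freq = sum(run(True, random_sched) for _ in range(3000)) / 3000
print(freq)  # roughly 2/3: l := h or l := true runs last in 2 of 3 orders
```

This is exactly the refinement attack: the abstraction is possibilistically secure, yet every reasonable implementation leaks h outright or probabilistically.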

20 Refinement attacks
Implementations of an abstraction generally refine (at least probabilistically) the transitions allowed by the abstraction.
An attacker may exploit knowledge of the implementation to learn confidential information.
Is this program secure?
l := true | l := false

21 Determinism-based security
Require that the system is deterministic from the low viewpoint [Roscoe 1995].
High information cannot affect low output; there is no nondeterminism to refine.
Another way to generalize noninterference to nondeterministic systems: don't change the definition!
∀ s1, s2. s1 =_L s2 ⇒ ⟦C⟧s1 ≈_L ⟦C⟧s2
Nondeterminism may be present, but not observable.
More restrictive than possibilistic security.