
1  CS 711 Fall 2002 Programming Languages Seminar
Andrew Myers
2. Noninterference
4 Sept 2002

2  Information-Flow Security
Security properties based on information flow describe end-to-end behavior of a system.
Access control: “This file is readable only by processes I have granted authority to.”
Information-flow control: “The information in this file may be released only to appropriate output channels, no matter how much intervening computation manipulates it.”

3  Noninterference [Goguen & Meseguer 1982, 1984]
Low output of a system is unaffected by high input.
[Diagram: two runs receive the same low inputs L but different high inputs H₁ and H₂; both produce the same low outputs.]

4  Security properties
Confidentiality: is information secret?  L = public, H = confidential
Integrity: is information trustworthy?  L = trusted, H = untrusted
Partial order: L ⊑ H, H ⋢ L; information can only flow upward in the order
Channels: ways for inputs to influence outputs
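As a minimal illustration (not from the slides), the two-point lattice and its flow check can be written in Python; the names Label and can_flow are assumptions made here:

    from enum import Enum

    class Label(Enum):          # two-point security lattice
        L = 0                   # public (confidentiality) / trusted (integrity)
        H = 1                   # confidential / untrusted

    def can_flow(src: Label, dst: Label) -> bool:
        """Information may flow only upward in the order: L ⊑ L, L ⊑ H, H ⊑ H."""
        return src.value <= dst.value

    assert can_flow(Label.L, Label.H)        # public data may flow to a confidential sink
    assert not can_flow(Label.H, Label.L)    # confidential data must not reach a public sink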

5  Formalization
No agreement on how to formalize in general.
GM84 (simplified): a system is defined by a transition function do : S × E → S and a low output function out : S → O (what the low user can see)
–S is the set of system states
–E is the set of events (inputs), each either high or low
–A trace τ is a sequence of state-event pairs ((s₀,e₀), (s₁,e₁), …) where sᵢ₊₁ = do(sᵢ, eᵢ)
Noninterference: for all event histories (e₀,…,eₙ) that differ only in high events, out(sₙ) is the same, where sₙ is the final state of the corresponding trace
Alternatively: out(sₙ) is defined by the result of running a purged event history
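A small executable sketch (my own illustration, not part of the lecture) of the GM84-style definition: a concrete do/out pair plus a brute-force check that purging high events never changes the low output. All function names here are assumptions.

    from itertools import product

    HIGH = {"h1", "h2"}
    EVENTS = ["h1", "h2", "l"]

    def do(state, event):
        """Transition function do : S x E -> S; the state is just a counter here."""
        return state + 1 if event == "l" else state   # high events leave the counter alone

    def out(state):
        """Low output function out : S -> O."""
        return state

    def purge(history):
        """Drop all high events from an event history."""
        return [e for e in history if e not in HIGH]

    def run(history, s0=2):
        s = s0
        for e in history:
            s = do(s, e)
        return s

    # Noninterference, purge formulation: the low output of any event history
    # equals the low output of the same history with its high events removed.
    for history in product(EVENTS, repeat=3):
        assert out(run(list(history))) == out(run(purge(list(history))))
    print("noninterference holds for this toy system")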

6  Example
[Diagram: an automaton whose initial state has low output 2; high events h₁ and h₂ are self-loops, and a low event l leads to a state with low output 3.]
Visible output from input sequences (l), (h₁,l), (h₂,l) is 3
Visible output from input sequences (), (h₁), (h₂) is 2
The low part of the input determines the visible results

7  Limitations
Doesn’t deal with all transition functions:
–partial (e.g., nontermination)
–nondeterministic (e.g., concurrency)
–sequential input/output assumption

8  A generalization
Key idea: the behaviors of the system C should not reveal more information than the low inputs.
Consider applying C to inputs s. Define:
–⟦C⟧s is the result of C applied to s (the “do”)
–s₁ =_L s₂ means inputs s₁ and s₂ are indistinguishable to the low user (the “purge”)
–⟦C⟧s₁ ≈_L ⟦C⟧s₂ means the results are indistinguishable; ≈_L is the low view relation (the “out”)
Noninterference for C: s₁ =_L s₂ ⇒ ⟦C⟧s₁ ≈_L ⟦C⟧s₂
“Low observer doesn’t learn anything new”
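A hypothetical Python sketch of this definition, checking the implication by brute force over a tiny state space; meaning, low_eq, and low_view are illustrative names, not the lecture's.

    from itertools import product

    def meaning(s):
        """[[C]]s for a toy deterministic C on states s = (h, l):
           if h then l := 1 else l := 1 -- the low result never depends on h."""
        h, l = s
        return (h, 1)

    def low_eq(s1, s2):
        """s1 =_L s2: the low observer sees only the second component."""
        return s1[1] == s2[1]

    def low_view(r1, r2):
        """r1 ~_L r2: results indistinguishable to the low observer."""
        return r1[1] == r2[1]

    states = list(product([0, 1], [0, 1]))         # all (h, l) pairs
    secure = all(low_view(meaning(s1), meaning(s2))
                 for s1, s2 in product(states, states) if low_eq(s1, s2))
    print("noninterference:", secure)              # True for this C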

9  Unwinding condition
Induction hypothesis for proving noninterference.
Assume ⟦C⟧ and ≈_L are defined using traces.
[Diagrams: (1) if s₁ =_L s₁′ and s₁ takes a high step to s₂, then s₂ =_L s₁′; (2) if s₁ =_L s₁′ and both take a low step, s₁ to s₂ and s₁′ to s₂′, then s₂ =_L s₂′.]
By induction: traces differing only in high steps, starting from equivalent states, preserve equivalence.
=_L must be an equivalence relation (need transitivity).

10  Example
“System” is a program with a memory:
if h₁ then h₂ := 0 else h₂ := 1; l := 1
A state is a configuration s = ⟨c, m⟩ of a command and a memory.
⟨c₁,m₁⟩ =_L ⟨c₂,m₂⟩ if the two are identical after:
–erasing high terms from cᵢ
–erasing high memory locations from mᵢ
The choice of =_L controls what the low observer can see at a moment in time.
The current command c is included in the state to allow proof by induction.
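An illustrative sketch (not the lecture's formalization) that runs this program under both high inputs and compares the low-erased final memories; run and low_part are assumed names.

    def run(h1):
        """Execute  if h1 then h2 := 0 else h2 := 1; l := 1  on an explicit memory."""
        m = {"h1": h1, "h2": 1, "l": 0}
        m["h2"] = 0 if m["h1"] else 1
        m["l"] = 1
        return m

    def low_part(m):
        """Erase high locations: the low observer sees only l."""
        return {x: v for x, v in m.items() if x == "l"}

    m1, m2 = run(h1=0), run(h1=1)
    assert low_part(m1) == low_part(m2)   # the final memories are =_L-equivalent
    print(low_part(m1))                   # {'l': 1}, regardless of h1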

11  Example
Run with h₁ = 0:  ⟨if h₁ then h₂ := 0 else h₂ := 1; l := 1, {h₁↦0, h₂↦1, l↦0}⟩ → ⟨h₂ := 1; l := 1, {h₁↦0, h₂↦1, l↦0}⟩ → ⟨l := 1, {h₁↦0, h₂↦1, l↦0}⟩ → {h₁↦0, h₂↦1, l↦1}
Run with h₁ = 1:  ⟨if h₁ then h₂ := 0 else h₂ := 1; l := 1, {h₁↦1, h₂↦1, l↦0}⟩ → ⟨h₂ := 0; l := 1, {h₁↦1, h₂↦1, l↦0}⟩ → ⟨l := 1, {h₁↦1, h₂↦0, l↦0}⟩ → {h₁↦1, h₂↦0, l↦1}
At every step the corresponding configurations of the two runs are =_L-related.

12  Nontermination
Is this program secure?
while h > 0 do h := h+1; l := 1
{h↦0, l↦0} →* {h↦0, l↦1}
{h↦1, l↦0} →* {h↦i, l↦0}  (∀ i > 0)
The low observer learns whether h > 0 by observing nontermination (l never changes to 1).
But… might want to ignore this channel to make the analysis feasible.
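A sketch (my illustration, not from the slides) of this termination channel: a step bound stands in for “observing nontermination”, and the bound itself is an assumption.

    def run_bounded(h, step_bound=1000):
        """while h > 0 do h := h+1; l := 1  -- executed with a fuel limit."""
        l, steps = 0, 0
        while h > 0:
            h += 1
            steps += 1
            if steps >= step_bound:
                return None          # stands in for divergence: l never becomes 1
        l = 1
        return l

    print(run_bounded(0))   # 1    -> the low observer sees l change
    print(run_bounded(1))   # None -> no change ever observed; the observer learns h > 0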

13  Equivalence classes
The equivalence relation =_L generates equivalence classes of states indistinguishable to the attacker: [s]_L = { s′ | s′ =_L s }
Noninterference ⇒ transitions act uniformly on each equivalence class
Given a trace τ = (s₁, s₂, …), the low observer sees at most ([s₁]_L, [s₂]_L, …)
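A small illustrative sketch that computes the =_L-classes of a toy state space where the low observer sees only the l component; representing states as (h, l) pairs is an assumption.

    from itertools import product
    from collections import defaultdict

    states = list(product([0, 1], [0, 1]))    # states are (h, l) pairs

    def low_key(s):
        """Two states are =_L-related exactly when they agree on this key."""
        return s[1]

    classes = defaultdict(list)               # [s]_L, keyed by the low component
    for s in states:
        classes[low_key(s)].append(s)

    print(dict(classes))   # {0: [(0, 0), (1, 0)], 1: [(0, 1), (1, 1)]}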

14  Low views
The low view relation ≈_L on traces, defined modulo =_L, determines the attacker's ability to observe system execution.
Termination-sensitive, but no ability to see intermediate states: (s₁, s₂,…,sₘ) ≈_L (s′₁, s′₂,…,s′ₙ) if sₘ =_L s′ₙ; all infinite traces are related by ≈_L (only to each other)
Termination-insensitive: (s₁, s₂,…,sₘ) ≈_L (s′₁, s′₂,…,s′ₙ) if sₘ =_L s′ₙ; infinite traces are related by ≈_L to all traces
Timing-sensitive: (s₁, s₂,…,sₙ) ≈_L (s′₁, s′₂,…,s′ₙ) if sₙ =_L s′ₙ (finite traces must have the same length); all infinite traces are related by ≈_L
Not always an equivalence relation!
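A sketch (my own encoding, not the lecture's) of these three low views on traces, with None standing for an infinite (diverging) trace; the encoding choices are assumptions.

    def low_eq(s1, s2):
        return s1["l"] == s2["l"]           # =_L compares only the low variable

    def term_insensitive(t1, t2):
        if t1 is None or t2 is None:        # infinite traces are related to everything
            return True
        return low_eq(t1[-1], t2[-1])       # otherwise compare final states

    def term_sensitive(t1, t2):
        if t1 is None or t2 is None:        # infinite traces related only to each other
            return t1 is None and t2 is None
        return low_eq(t1[-1], t2[-1])

    def timing_sensitive(t1, t2):
        if t1 is None or t2 is None:
            return t1 is None and t2 is None
        return len(t1) == len(t2) and low_eq(t1[-1], t2[-1])

    finite = [{"l": 0}, {"l": 1}]
    print(term_insensitive(finite, None))   # True  -- divergence is ignored
    print(term_sensitive(finite, None))     # False -- divergence is observable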

15  Nondeterminism
Two sources of nondeterminism:
–Input nondeterminism
–Internal nondeterminism
GM assume no internal nondeterminism.
Concurrent systems are nondeterministic:
if s₁ → s₁′ then s₁ | s₂ → s₁′ | s₂
if s₂ → s₂′ then s₁ | s₂ → s₁ | s₂′
Noninterference for nondeterministic systems?  ∀ s₁, s₂. s₁ =_L s₂ ⇒ ⟦C⟧s₁ ≈_L ⟦C⟧s₂

16  Possibilistic security [Sutherland 1986, McCullough 1987]
The result ⟦C⟧s of a system is the set of possible outcomes (traces).
The low view relation on traces is lifted to sets of traces:
⟦C⟧s₁ ≈_L ⟦C⟧s₂ if ∀τ₁ ∈ ⟦C⟧s₁. ∃τ₂ ∈ ⟦C⟧s₂. τ₁ ≈_L τ₂  and  ∀τ₂ ∈ ⟦C⟧s₂. ∃τ₁ ∈ ⟦C⟧s₁. τ₂ ≈_L τ₁
“For any trace produced by C on s₁ there is an indistinguishable one produced by C on s₂ (and vice versa)”
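An illustrative sketch of lifting a per-trace low view to sets of traces; trace_low_eq and the trace representation are assumptions.

    def lift(trace_low_eq, traces1, traces2):
        """[[C]]s1 ~_L [[C]]s2, possibilistically: every trace on either side
           has an indistinguishable counterpart on the other side."""
        forward  = all(any(trace_low_eq(t1, t2) for t2 in traces2) for t1 in traces1)
        backward = all(any(trace_low_eq(t2, t1) for t1 in traces1) for t2 in traces2)
        return forward and backward

    # Toy use: a trace is represented by its final low value.
    same_low = lambda t1, t2: t1 == t2
    print(lift(same_low, {0, 1}, {1, 0}))   # True
    print(lift(same_low, {0, 1}, {1}))      # False: the 0-trace has no counterpart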

17  Proving possibilistic security
Almost the same induction hypothesis:
[Diagrams as on slide 9: a high step preserves =_L, and low steps from =_L-equivalent states can be matched so that the resulting states are =_L-equivalent.]
Show that there exists a matching transition that preserves state equivalence (for termination-insensitive security).

18  Example
l := true | l := false | l := h
h = true: possible results are {h↦true, l↦false}, {h↦true, l↦true}
h = false: possible results are {h↦false, l↦false}, {h↦false, l↦true}
Each result for h = true has an =_L-equivalent result for h = false, and vice versa: the program is possibilistically secure.
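A sketch (illustrative) that enumerates the possible final memories of this program under arbitrary interleavings and checks the lifted relation; modelling each branch as a single atomic write with “last writer wins” is an assumption.

    from itertools import permutations

    def outcomes(h):
        """Possible final (h, l) memories of  l := true | l := false | l := h."""
        writes = [True, False, h]
        return {(h, order[-1]) for order in permutations(writes)}   # last write wins

    def low_eq(m1, m2):
        return m1[1] == m2[1]                     # the low observer sees only l

    def poss_secure(set1, set2):
        return (all(any(low_eq(a, b) for b in set2) for a in set1) and
                all(any(low_eq(b, a) for a in set1) for b in set2))

    print(outcomes(True))                                  # {(True, True), (True, False)}, in some order
    print(poss_secure(outcomes(True), outcomes(False)))    # True: possibilistically secure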

19  What is wrong?
Round-robin scheduler: the program is equivalent to l := h
Random scheduler: h is the most probable value of l (with a uniform scheduler the last writer is l := h with probability 1/3, and one of the other two branches also writes the value of h, so l = h with probability 2/3)
The system has a refinement with an information leak.
[Diagram: l := true | l := false | l := h refines to one of the single branches l := true, l := false, or l := h.]

20  Refinement attacks
Implementations of an abstraction generally refine (at least probabilistically) the transitions allowed by the abstraction.
An attacker may exploit knowledge of the implementation to learn confidential information.
l := true | l := false
Is this program secure?
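A sketch (illustrative) of a refinement attack on the slide-19 program: fixing a round-robin schedule picks a single interleaving out of those the abstraction allows, and that refinement copies h into l. The particular scheduling order is an assumption.

    def round_robin(h):
        """Refinement of  l := true | l := false | l := h  by a scheduler that
           runs the three branches left to right, so the last write wins."""
        l = None
        for write in (True, False, h):   # fixed schedule chosen by the implementation
            l = write
        return l

    # The abstraction allowed l to end up True, False, or h; this refinement always leaks h.
    print(round_robin(True), round_robin(False))   # True False  -> l == h in both runs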

21  Determinism-based security
Require that the system is deterministic from the low viewpoint [Roscoe95].
High information cannot affect low output: there is no nondeterminism to refine.
Another way to generalize noninterference to nondeterministic systems: don't change the definition!
∀ s₁, s₂. s₁ =_L s₂ ⇒ ⟦C⟧s₁ ≈_L ⟦C⟧s₂
Nondeterminism may be present, but not observable.
More restrictive than possibilistic security.
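A sketch (illustrative) of one way to read this: take ≈_L on outcome sets to demand that every pair of outcomes, across and within the two sets, looks the same to the low observer, so the low view is deterministic. The names and the outcome-set representation are assumptions.

    from itertools import product

    def low_eq(m1, m2):
        return m1["l"] == m2["l"]

    def low_deterministic_eq(set1, set2):
        """~_L for determinism-based security: all outcomes must agree on the low part."""
        combined = list(set1) + list(set2)
        return all(low_eq(a, b) for a, b in product(combined, combined))

    secure   = [{"h": 0, "l": 1}, {"h": 1, "l": 1}]   # low output is fixed
    insecure = [{"h": 0, "l": 0}, {"h": 0, "l": 1}]   # low-visible nondeterminism
    print(low_deterministic_eq(secure, secure))       # True
    print(low_deterministic_eq(insecure, insecure))   # False: refinable nondeterminism

Under this reading the possibilistically secure program of slide 18 is rejected, which is the sense in which the condition is more restrictive than possibilistic security.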

