Week 8 - Wednesday

What did we talk about last time?
- Authentication
- Challenge response
- Biometrics
- Started the Bell-LaPadula model

Yuki Gage

- A confidentiality access control system
- Military-style classifications
- Uses a linear clearance hierarchy
- All information is on a need-to-know basis
- Uses clearance (or sensitivity) levels as well as project-specific compartments

[Figure: the clearance hierarchy, from lowest to highest: Unclassified, Restricted, Confidential, Secret, Top Secret]

Both subjects (users) and objects (files) have security clearances. Below are the clearances arranged in a hierarchy:

Clearance Level     Sample Subjects     Sample Objects
Top Secret (TS)     Tamara, Thomas      Personnel Files
Secret (S)          Sally, Samuel       Files
Confidential (C)    Claire, Clarence    Activity Log Files
Restricted (R)      Rachel, Riley       Telephone List Files
Unclassified (UC)   Ulaley, Ursula      Address of Headquarters

- Let level(O) be the clearance level of object O
- Let level(S) be the clearance level of subject S
- The simple security condition states that S can read O if and only if level(O) ≤ level(S) and S has discretionary read access to O
- In short, you can only read down
- Example?
- In a few slides, we will expand the simple security condition by enriching the concept of a level

- The *-property states that S can write O if and only if level(S) ≤ level(O) and S has discretionary write access to O
- In short, you can only write up
- Example?
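
To make the two rules concrete, here is a minimal sketch in Python, assuming clearance levels are encoded as integers (higher = more sensitive); the function names are mine, and the discretionary-access checks are omitted for brevity:

```python
# Clearance hierarchy from the earlier slide, lowest to highest
LEVELS = {"UC": 0, "R": 1, "C": 2, "S": 3, "TS": 4}

def can_read(subject_level, object_level):
    """Simple security condition: read down only (level(O) <= level(S))."""
    return object_level <= subject_level

def can_write(subject_level, object_level):
    """*-property: write up only (level(S) <= level(O))."""
    return subject_level <= object_level

# Sally (Secret) can read the Confidential activity log, but any write
# she makes must go to an object at Secret or above
print(can_read(LEVELS["S"], LEVELS["C"]))    # True
print(can_write(LEVELS["S"], LEVELS["C"]))   # False
```

Both rules also require the discretionary access named in the conditions; a real reference monitor would check that as well.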

- Assume your system starts in a secure initial state
- Let T be all the possible state transformations
- If every element of T preserves the simple security condition and the *-property, every reachable state is secure
- This is sort of a stupid theorem, because we define "secure" to mean a system that preserves the simple security condition and the *-property

- We add compartments such as NUC = Non-Union Countries, EUR = Europe, and US = United States
- The possible sets of compartments are: ∅, {NUC}, {EUR}, {US}, {NUC, EUR}, {NUC, US}, {EUR, US}, {NUC, EUR, US}
- Put a clearance level with a compartment set and you get a security level
- The literature does not always agree on terminology

- The subset relationship induces a lattice

[Figure: Hasse diagram of the lattice — ∅ at the bottom; {NUC}, {EUR}, {US} above it; {NUC, EUR}, {NUC, US}, {EUR, US} above those; {NUC, EUR, US} at the top]

- Let L be a clearance level and C be a category set
- Instead of talking about level(O) ≤ level(S), we say that security level (L, C) dominates security level (L', C') if and only if L' ≤ L and C' ⊆ C
- Simple security now requires (L_S, C_S) to dominate (L_O, C_O) and S to have read access
- The *-property now requires (L_O, C_O) to dominate (L_S, C_S) and S to have write access
- Problems?
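
A sketch of dominance over (level, compartment set) pairs, extending the integer-level sketch above; Python's frozenset `<=` operator plays the role of ⊆:

```python
def dominates(a, b):
    """(L, C) dominates (L', C') iff L' <= L and C' is a subset of C."""
    (L, C), (Lp, Cp) = a, b
    return Lp <= L and Cp <= C   # <= on frozensets tests the subset relation

subject = (3, frozenset({"NUC", "EUR"}))   # Secret, {NUC, EUR}
obj     = (2, frozenset({"NUC"}))          # Confidential, {NUC}

print(dominates(subject, obj))   # True: simple security allows the read
print(dominates(obj, subject))   # False: the *-property forbids the write

# Dominance is only a partial order: neither of these dominates the other
print(dominates((3, frozenset({"NUC"})), (2, frozenset({"EUR"}))))  # False
print(dominates((2, frozenset({"EUR"})), (3, frozenset({"NUC"}))))  # False
```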

- A commercial model that focuses on transactions
- Just like a bank, we want certain conditions to hold before a transaction and the same conditions to hold after
- If conditions hold in both cases, we call the system consistent
- Example:
  - D is the amount of money deposited today
  - W is the amount of money withdrawn today
  - YB is the amount of money in accounts at the end of business yesterday
  - TB is the amount of money currently in all accounts
  - Thus, D + YB - W = TB
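
The bank example reads directly as an integrity check; a one-line sketch in Python (all figures invented):

```python
def consistent(D, YB, W, TB):
    """Bank constraint: deposits + yesterday's balance - withdrawals = today's."""
    return D + YB - W == TB

print(consistent(D=500, YB=10_000, W=200, TB=10_300))  # True: consistent state
print(consistent(D=500, YB=10_000, W=200, TB=9_999))   # False: money leaked somewhere
```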

- Data items that have to follow integrity controls are called constrained data items, or CDIs
- The rest of the data items are unconstrained data items, or UDIs
- Integrity constraints (like the bank transaction rule) constrain the values of the CDIs
- Two kinds of procedures:
  - Integrity verification procedures (IVPs) test that the CDIs conform to the integrity constraints
  - Transformation procedures (TPs) change the data in the system from one valid state to another

- Clark-Wilson has a system of 9 rules designed to protect the integrity of the system
- There are five certification rules that test whether the system is in a valid state
- There are four enforcement rules that give requirements for the system

- CR1: When any IVP is run, it must ensure that all CDIs are in a valid state
- CR2: For some associated set of CDIs, a TP must transform those CDIs in a valid state into a (possibly different) valid state
- By inference, a TP is only certified to work on a particular set of CDIs

- ER1: The system must maintain the certified relations, and must ensure that only TPs certified to run on a CDI manipulate that CDI
- ER2: The system must associate a user with each TP and set of CDIs. The TP may access those CDIs on behalf of the associated user. If the user is not associated with a particular TP and CDI, then the TP cannot access that CDI on behalf of that user.
- Thus, a user is only allowed to use certain TPs on certain CDIs, as sketched below
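
A minimal sketch of how a system might enforce ER1/ER2, assuming the certified relations are stored as (user, TP, CDI) triples; every name here is hypothetical:

```python
# The certified relations: which user may run which TP on which CDI
certified = {
    ("teller",  "post_deposit", "account_balances"),
    ("auditor", "review_log",   "audit_trail"),
}

def may_run(user, tp, cdi):
    """ER2: a TP touches a CDI on a user's behalf only if the triple is certified."""
    return (user, tp, cdi) in certified

print(may_run("teller", "post_deposit", "account_balances"))  # True
print(may_run("teller", "review_log", "audit_trail"))         # False: not associated
```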

- CR3: The allowed relations must meet the requirements imposed by the principle of separation of duty
- ER3: The system must authenticate each user attempting to execute a TP
- In theory, this means that users don't necessarily have to log on if they are not going to interact with CDIs

- CR4: All TPs must append enough information to reconstruct the operation to an append-only CDI
  - In other words, operations are logged
- CR5: Any TP that takes a UDI as input may perform only valid transformations, or no transformations, for all possible values of the UDI. The transformation either rejects the UDI or transforms it into a CDI.
- This gives a rule for bringing new information into the integrity system, as in the sketch below
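
A CR5-style sketch: a TP that takes raw input (a UDI) and either rejects it or upgrades it to a CDI; the deposit format is made up for illustration:

```python
def accept_deposit(udi: str) -> int:
    """Reject the UDI or transform it into a valid deposit amount (a CDI)."""
    try:
        amount = int(udi)
    except ValueError:
        raise ValueError("rejected UDI: not a number") from None
    if amount <= 0:
        raise ValueError("rejected UDI: deposits must be positive")
    return amount  # from here on, the value is a constrained data item

print(accept_deposit("250"))   # 250: now inside the integrity system
# accept_deposit("-5")         # raises: the bad UDI never becomes a CDI
```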

- ER4: Only the certifier of a TP may change the list of entities associated with that TP. No certifier of a TP, or of any entity associated with that TP, may ever have execute permission with respect to that entity.
- Separation of duties

- Designed to stay close to real commercial situations
- No rigid multilevel scheme
- Enforces separation of duty
- Certification and enforcement are separated
- Enforcement in a system depends simply on following the given rules
- Whether a system satisfies its certification rules is difficult to determine

- The Chinese Wall model respects both confidentiality and integrity
- It's very important in business situations where there are conflict-of-interest issues
- Real-world systems, including British law, have policies similar to the Chinese Wall model
- Most discussions of the Chinese Wall model are couched in business terms

- We can imagine the Chinese Wall model as a policy controlling access in a database
- The objects of the database are items of information relating to a company
- A company dataset (CD) contains objects related to a single company
- A conflict of interest (COI) class contains the datasets of companies in competition
- Let COI(O) be the COI class containing object O
- Let CD(O) be the CD that contains object O
- We assume that each object belongs to exactly one COI class

[Figure: two conflict of interest classes]
- Bank COI class: Bank of America, Citibank, Bank of the West
- Gasoline Company COI class: Shell Oil, Standard Oil, Union '76, ARCO

- Let PR(S) be the set of objects that S has read
- CW-simple security condition: subject S can read O if and only if any of the following is true:
  1. There is an object O' such that S has accessed O' and CD(O') = CD(O)
  2. For all objects O', O' ∈ PR(S) implies COI(O') ≠ COI(O)
  3. O is a sanitized object
- Give examples of objects that can and cannot be read (see the sketch below)
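
A sketch of the CW-simple security condition, assuming each object is tagged with its COI class and CD, and PR(S) is tracked as a per-subject set (the company names follow the figure above):

```python
# Hypothetical object table: name -> (COI class, company dataset)
objects = {
    "boa_report":   ("Bank", "Bank of America"),
    "citi_report":  ("Bank", "Citibank"),
    "shell_report": ("Gasoline", "Shell Oil"),
}

def can_read(pr, obj, sanitized=False):
    """Readable if sanitized, in an already-read CD, or in a fresh COI class."""
    if sanitized:
        return True
    coi, cd = objects[obj]
    read_cds  = {objects[o][1] for o in pr}
    read_cois = {objects[o][0] for o in pr}
    return cd in read_cds or coi not in read_cois

pr = {"boa_report"}                   # S has already read a Bank of America object
print(can_read(pr, "boa_report"))     # True: same company dataset
print(can_read(pr, "shell_report"))   # True: different COI class entirely
print(can_read(pr, "citi_report"))    # False: a competing bank
```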

- CW-*-property: subject S may write to an object O if and only if both of the following conditions hold:
  1. The CW-simple security condition permits S to read O
  2. For all unsanitized objects O', if S can read O', then CD(O') = CD(O)
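
Continuing the same sketch, the CW-*-property; as a simplification, "objects S can read" is approximated by the objects S has read, and everything in PR(S) is assumed unsanitized:

```python
def can_write(pr, obj):
    """Writable only if readable and everything (unsanitized) read lies in CD(O)."""
    if not can_read(pr, obj):
        return False
    _, cd = objects[obj]
    return all(objects[o][1] == cd for o in pr)

# The second case fails: S has read Shell data and could leak it into a
# Bank of America object
print(can_write({"boa_report"}, "boa_report"))                  # True
print(can_write({"boa_report", "shell_report"}, "boa_report"))  # False
```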

- An integrity-based access control system (the Biba model)
- Uses integrity levels, similar to the clearance levels of Bell-LaPadula
- Precisely the dual of the Bell-LaPadula model
- That is, we can only read up and write down
- Note that integrity levels are intended only to indicate integrity, not confidentiality
- An integrity level is really a measure of accuracy or reliability

- S is the set of subjects and O is the set of objects
- Integrity levels are ordered
- i(s) and i(o) give the integrity levels of subject s and object o, respectively
- Rules (sketched in code below):
  1. s ∈ S can read o ∈ O if and only if i(s) ≤ i(o)
  2. s ∈ S can write to o ∈ O if and only if i(o) ≤ i(s)
  3. s1 ∈ S can execute s2 ∈ S if and only if i(s2) ≤ i(s1)
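
The three rules, in the same integer-level sketch style as the Bell-LaPadula example (higher number = higher integrity); a minimal illustration, not a full reference monitor:

```python
def biba_read(i_s, i_o):
    """Rule 1, read up: a subject may not read less trustworthy data."""
    return i_s <= i_o

def biba_write(i_s, i_o):
    """Rule 2, write down: a subject may not contaminate more trusted objects."""
    return i_o <= i_s

def biba_execute(i_s1, i_s2):
    """Rule 3: invoke only subjects at your integrity level or below."""
    return i_s2 <= i_s1

print(biba_read(1, 2))    # True: low-integrity subject reads high-integrity data
print(biba_write(1, 2))   # False: but it may not write to it
```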

- Rules 1 and 2 imply that, if both read and write are allowed, i(s) = i(o)
- By adding the idea of integrity compartments and domination, we can get the full dual of the Bell-LaPadula lattice framework
- Real systems (for example, the LOCUS operating system) usually have a command like run-untrusted
- That way, users have to acknowledge that a risk is being taken
- What if you used the same levels for both integrity and security? Could you implement both Biba and Bell-LaPadula on the same system?

- How do we know if something is secure?
- We define our security policy using our access control matrix
- We say that a right is leaked if it is added to an element of the access control matrix that doesn't already have it
- A system is secure if there is no way rights can be leaked
- Is there an algorithm to determine if a system is secure?

- In a mono-operational system, each command consists of a single primitive command:
  - Create subject s
  - Create object o
  - Enter r into a[s, o]
  - Delete r from a[s, o]
  - Destroy subject s
  - Destroy object o
- In this system, we can decide whether a right is leaked by checking every command sequence of length at most k, for a bound k derived on the next slide

- Delete and Destroy commands can be ignored
- No more than one Create command is needed (to cover the case where there are no subjects)
- Entering rights is the trouble
- We start with a set S0 of subjects and a set O0 of objects
- With n generic rights, we might add all n rights to everything before we leak a right
- Thus, the maximum length of a command sequence that leaks a right is k ≤ n(|S0| + 1)(|O0| + 1) + 1
- If there are m different commands, how many different command sequences are possible? (See the computation below)
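
Plugging illustrative numbers into the bound answers the closing question in spirit: even in the decidable mono-operational case, exhaustively checking sequences is infeasible. A quick Python computation (all sizes invented):

```python
n, s0, o0, m = 5, 3, 4, 10           # rights, initial subjects/objects, commands
k = n * (s0 + 1) * (o0 + 1) + 1      # bound on the sequence length that matters
sequences = sum(m**i for i in range(1, k + 1))   # sequences of length 1..k

print(k)                    # 101
print(len(str(sequences)))  # 102: the number of sequences has over 100 digits
```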

- A Turing machine is a mathematical model of computation
- It consists of a head, an infinitely long tape, a set of possible states, and an alphabet of characters that can be written on the tape
- A list of rules says what the machine should write, and whether it should move left or right, given the current symbol and state

- A 3-state, 2-symbol "busy beaver" Turing machine
- Starting state: A

Tape Symbol   State A            State B            State C
              Write  Move  Next  Write  Move  Next  Write  Move  Next
0             1      R     B     0      R     C     1      L     C
1             1      R     HALT  1      R     B     1      L     A
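
The table is small enough to run directly. A sketch of a Turing machine simulator in Python, with the sparse tape held in a dict (unwritten cells read as 0):

```python
# (state, symbol) -> (symbol to write, head move, next state), from the table
RULES = {
    ("A", 0): (1, +1, "B"), ("A", 1): (1, +1, "HALT"),
    ("B", 0): (0, +1, "C"), ("B", 1): (1, +1, "B"),
    ("C", 0): (1, -1, "C"), ("C", 1): (1, -1, "A"),
}

tape, head, state, steps = {}, 0, "A", 0
while state != "HALT":
    write, move, state = RULES[(state, tape.get(head, 0))]
    tape[head] = write
    head += move
    steps += 1

print(steps, sum(tape.values()))   # 14 steps, 6 ones left on the tape
```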

- If an algorithm exists, a Turing machine can perform that algorithm
- In essence, a Turing machine is the most powerful model we have of computation
- Power, in this sense, means the ability to compute some function, not the speed associated with its computation

- The halting problem: given a Turing machine and input x, does it reach the halt state?
- It turns out that this problem is undecidable
- That means there is no algorithm that can determine whether an arbitrary Turing machine will go into an infinite loop
- Consequently, there is no algorithm that can take any program and check whether it goes into an infinite loop

- We can simulate a Turing machine using an access control matrix
- We map the symbols, states, and tape of the Turing machine onto the rights and cells of an access control matrix
- Discovering whether or not the right leaks is then equivalent to deciding whether the Turing machine halts

- Without heavy restrictions on the rules for access control, it is impossible to construct an algorithm that will determine if a right leaks
- Even for a mono-operational system, the problem might take an infeasible amount of time
- But we don't give up!
- There are still lots of ways to model security
- Some of them offer more practical results

- Finish theoretical limitations
- Trusted system design elements
- Common OS features and flaws
- OS assurance and evaluation
- Taylor Ryan presents

- Read Sections 5.4 and 5.5
- Keep working on Project 2
- Finish Assignment 3