
1 Integrity policies

2 ISABELLA: Some one with child by him? My cousin Juliet?
LUCIO: Is she your cousin?
ISABELLA: Adoptedly; as school-maids change their names By vain, though apt affection.
-Measure for Measure, I, iv
An inventory control system may function correctly if the data it manages is released; but it cannot function correctly if the data can be randomly changed. So integrity, rather than confidentiality, is key.

3 Integrity policies focus on integrity rather than confidentiality, because most commercial and industrial firms are more concerned with accuracy than disclosure. In this chapter we discuss the major integrity security policies and explore their design.

4 6.1 Goals
Commercial requirements differ from military requirements in their emphasis on preserving data integrity. Lipner [636] identifies five requirements:
1. Users will not write their own programs, but will use existing production programs and databases.
2. Programmers will develop and test programs on a nonproduction system; if they need access to actual data, they will be given production data via a special process, but will use it on their development system.

5 3. A special process must be followed to install a program from the development system onto the production system.
4. The special process in requirement 3 must be controlled and audited.
5. The managers and auditors must have access to both the system state and the system logs that are generated.

6 These requirements suggest several principles of operation.
First comes separation of duty. The principle of separation of duty states that if two or more steps are required to perform a critical function, at least two different people should perform the steps. Moving a program from the development system to the production system is an example of a critical function. Suppose one of the application programmers made an invalid assumption while developing the program. Part of the installation procedure is for the installer to certify that the program works "correctly," that is, as required.

7 The error is more likely to be caught if the installer is a different person (or set of people) than the developer. Similarly, if the developer wishes to subvert the production data with a corrupt program, the certifier either must not detect the code doing the corruption, or must be in league with the developer.
Next comes separation of function. Developers do not develop new programs on production systems because of the potential threat to production data. Similarly, the developers do not process production data on the development systems.

8 Depending on the sensitivity of the data, the developers and testers may receive sanitized production data. Further, the development environment must be as similar as possible to the actual production environment.
Last comes auditing. Commercial systems emphasize recovery and accountability. Auditing is the process of analyzing systems to determine what actions took place and who performed them. Hence, commercial systems must allow extensive auditing and thus have extensive logging (the basis for most auditing).

9 Logging and auditing are especially important when programs move from the development system to the production system, since the integrity mechanisms typically do not constrain the certifier. Auditing is, in many senses, external to the model.
Even when disclosure is at issue, the needs of a commercial environment differ from those of a military environment. In a military environment, clearance to access specific categories and security levels brings the ability to access information in those compartments. Commercial firms rarely grant access on the basis of "clearance"; if a particular individual needs to know specific information, he or she will be given it.

10 While this can be modeled using the Bell-LaPadula Model, doing so requires a large number of categories and security levels, increasing the complexity of the modeling. More difficult is the issue of controlling this proliferation of categories and security levels. In a military environment, creation of security levels and categories is centralized. In a commercial firm, this creation would usually be decentralized. The former allows tight control on the number of compartments, whereas the latter allows no such control.

11 More insidious is the problem of information aggregation. Commercial firms usually allow a limited amount of (innocuous) information to become public, but keep large amounts of (sensitive) information confidential. By aggregating the innocuous information, one can often deduce much sensitive information. Preventing this requires the model to track what questions have been asked, and this complicates the model enormously. Certainly the Bell-LaPadula Model lacks this ability.

12 6.2 Biba Integrity Model
In 1977, Biba [94] studied the nature of the integrity of systems. He proposed three policies, one of which was the mathematical dual of the Bell-LaPadula Model.
A system consists of a set S of subjects, a set O of objects, and a set I of integrity levels. The levels are ordered. The relation < ⊆ I × I holds when the second integrity level dominates the first. The relation ≤ ⊆ I × I holds when the second integrity level either dominates or is the same as the first. The function min: I × I → I gives the lesser of the two integrity levels (with respect to ≤).

13 The function i: S ∪ O → I returns the integrity level of an object or a subject. The relation r ⊆ S × O defines the ability of a subject to read an object; the relation w ⊆ S × O defines the ability of a subject to write to an object; and the relation x ⊆ S × S defines the ability of a subject to invoke (execute) another subject.
Some comments on the meaning of "integrity level" will provide intuition behind the constructions to follow. The higher the level, the more confidence one has that a program will execute correctly (or detect problems with its inputs and stop executing).

14 Data at a higher level is more accurate and/or reliable (with respect to some metric) than data at a lower level. Again, this model implicitly incorporates the notion of "trust"; in fact, the term "trustworthiness" is used as a measure of integrity level. For example, a process at a level higher than that of an object is considered more "trustworthy" than that object.
Integrity labels, in general, are not also security labels. They are assigned and maintained separately, because the reasons behind the labels are different. Security labels primarily limit the flow of information; integrity labels primarily inhibit the modification of information. They may overlap, however, with surprising results (see Exercise 3).

15 Biba tested his policies against the notion of an information transfer path:
Definition 6-1. An information transfer path is a sequence of objects o1, …, on+1 and a corresponding sequence of subjects s1, …, sn such that si r oi and si w oi+1 for all i, 1 ≤ i ≤ n.
Intuitively, data in the object o1 can be transferred into the object on+1 along an information flow path by a succession of reads and writes.
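
As a concrete illustration (our own encoding, not from Biba's paper), the following Python sketch checks Definition 6-1 directly; the function name and the representation of r and w as sets of (subject, object) pairs are illustrative assumptions.

    # Check whether subjects s1..sn move data along objects o1..on+1
    # under the read relation r and the write relation w.
    def is_transfer_path(objs, subjs, r, w):
        if len(objs) != len(subjs) + 1:    # need o1..on+1 and s1..sn
            return False
        # Each si must read oi and write oi+1.
        return all((s, o) in r and (s, o_next) in w
                   for s, o, o_next in zip(subjs, objs, objs[1:]))

    # Example: s1 reads o1 and writes o2; s2 reads o2 and writes o3.
    r = {("s1", "o1"), ("s2", "o2")}
    w = {("s1", "o2"), ("s2", "o3")}
    print(is_transfer_path(["o1", "o2", "o3"], ["s1", "s2"], r, w))   # True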

16 6.2.1 Low-Water-Mark Policy
Whenever a subject accesses an object, the policy changes the integrity level of the subject to the lower of the subject and the object. Specifically:
1. s ∈ S can write to o ∈ O if and only if i(o) ≤ i(s).
2. If s ∈ S reads o ∈ O, then i'(s) = min(i(s), i(o)), where i'(s) is the subject's integrity level after the read.
3. s1 ∈ S can execute s2 ∈ S if and only if i(s2) ≤ i(s1).
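
A minimal sketch of these three rules may help; it assumes integrity levels can be modeled as integers (larger = more trusted), and the class and method names are illustrative rather than part of Biba's model.

    class LowWaterMark:
        def __init__(self, subject_levels, object_levels):
            self.subj = dict(subject_levels)   # subject name -> integrity level
            self.obj = dict(object_levels)     # object name  -> integrity level

        def write(self, s, o):
            # Rule 1: s can write to o if and only if i(o) <= i(s).
            return self.obj[o] <= self.subj[s]

        def read(self, s, o):
            # Rule 2: any read succeeds, but i'(s) = min(i(s), i(o)).
            self.subj[s] = min(self.subj[s], self.obj[o])

        def execute(self, s1, s2):
            # Rule 3: s1 can execute s2 if and only if i(s2) <= i(s1).
            return self.subj[s2] <= self.subj[s1]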

17 The first rule prevents writing to higher levels. This prevents a subject from writing to a more trusted object. Intuitively, if a subject were to alter a more trusted object, it could implant incorrect or false data (because the subject is less trusted than the object). In some sense, the trustworthiness of the object would drop to that of the subject. Hence, such writing is disallowed.
The second rule causes a subject's integrity level to drop whenever it reads an object at a lower integrity level. The idea is that the subject is relying on data less trustworthy than itself.

18 Hence, its trustworthiness drops to the less trustworthy level. This prevents the data from "contaminating" the subject or its actions.
The third rule allows a subject to execute another subject provided the second is not at a higher integrity level. Otherwise, the less trusted invoker could control the execution of the invoked subject, corrupting it even though it is more trustworthy.

19 This policy constrains any information transfer path:
Theorem 6-1. If there is an information transfer path from object o1 ∈ O to object on+1 ∈ O, then enforcement of the low-water-mark policy requires that i(on+1) ≤ i(o1) for all n > 1.
Proof If an information transfer path exists between o1 and on+1, then Definition 6-1 gives a sequence of subjects and objects identifying the entities on the path. Without loss of generality, assume that each read and write was performed in the order of the indices of the vertices. By induction, for any 1 ≤ k ≤ n, i(sk) = min{ i(oj) | 1 ≤ j ≤ k } after k reads.

20 As the nth write succeeds, by rule 1, i(on+1) ≤ i(sn). Thus, by transitivity, i(on+1) ≤ i(o1).
This policy prevents direct modification that would lower integrity labels. It also prevents indirect modification, by lowering the integrity label of a subject that reads from an object at a lower integrity level.
The problem with this policy is that, in practice, the subjects change integrity levels. In particular, the level of a subject is nonincreasing, which means that it will soon be unable to access objects at a high integrity level. An alternative policy is to decrease object integrity levels rather than subject integrity levels, but this policy has the property of downgrading object integrity levels to the lowest level.
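
Using the LowWaterMark sketch from Section 6.2.1, one can trace the theorem concretely: after a subject reads a low-integrity object, its level drops and it can no longer write to a high-integrity object.

    p = LowWaterMark({"s1": 3}, {"o1": 1, "o2": 3})
    p.read("s1", "o1")            # i(s1) becomes min(3, 1) = 1
    print(p.subj["s1"])           # 1
    print(p.write("s1", "o2"))    # False: i(o2) = 3 > i(s1) = 1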

21 6.2.2 Ring Policy
The ring policy ignores the issue of indirect modification and focuses on direct modification only. This solves the problems described above. The rules are as follows.
1. Any subject may read any object, regardless of integrity levels.
2. s ∈ S can write to o ∈ O if and only if i(o) ≤ i(s).
3. s1 ∈ S can execute s2 ∈ S if and only if i(s2) ≤ i(s1).
The difference between this policy and the low-water-mark policy is simply that any subject can read any object. Hence, Theorem 6-1 holds for this model, too.
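
In terms of the earlier LowWaterMark sketch, the ring policy changes only the read rule (an illustrative variant, not from Biba's paper); writes and executes are checked exactly as before.

    class RingPolicy(LowWaterMark):
        def read(self, s, o):
            pass   # rule 1: any subject may read any object; i(s) is unchanged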

22 6.2.3 Biba's Model (Strict Integrity Policy)
This model is the dual of the Bell-LaPadula Model, and is most commonly called "Biba's model." Its rules are as follows.
1. s ∈ S can read o ∈ O if and only if i(s) ≤ i(o).
2. s ∈ S can write to o ∈ O if and only if i(o) ≤ i(s).
3. s1 ∈ S can execute s2 ∈ S if and only if i(s2) ≤ i(s1).
Given these rules, Theorem 6-1 still holds, but its proof changes (see Exercise 1). Note that rules 1 and 2 imply that if both read and write are allowed, i(s) = i(o).
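
The three checks can be written down directly; this is a sketch with integer levels and illustrative function names, not a full reference monitor.

    def can_read(i_s, i_o):
        return i_s <= i_o      # rule 1: no reading down

    def can_write(i_s, i_o):
        return i_o <= i_s      # rule 2: no writing up

    def can_execute(i_s1, i_s2):
        return i_s2 <= i_s1    # rule 3

    print(can_read(2, 2) and can_write(2, 2))   # True: both together force i(s) = i(o)
    print(can_read(1, 2), can_write(1, 2))      # True False: may read up, not write up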

23 Like the low-water-mark policy, this policy prevents indirect as well as direct modification of "integrity compartments," and adding the notion of discretionary controls, one obtains the full dual of Bell-LaPadula.
EXAMPLE: Pozzo and Gray [817,818] implemented Biba's strict model on the distributed operating system LOCUS [811]. Their goal was to limit execution domains for each program to prevent untrusted software from altering data or other software. Their approach was to make the level of trust in software and data explicit. They defined different classes of executable programs.

24 Their credibility ratings (Biba's integrity levels) assign a measure of trustworthiness on a scale from 0 (untrusted) to n (highly trusted), depending on the source of the software. Trusted file systems contain only executable files with the same credibility level. Associated with each user (process) is a risk level. Users may execute programs with credibility levels at least as great as the user's risk level. To execute programs at a lower credibility level, a user must use the run-untrusted command. This acknowledges the risk that the user is taking.

25 6.3 Lipner's Integrity Matrix Model
Lipner returned to the Bell-LaPadula Model and combined it with the Biba model to create a model [636] that conformed more accurately to the requirements of a commercial policy. For clarity, we consider the Bell-LaPadula aspects of Lipner's model first, and then combine those aspects with Biba's model.

26 6.3.1 Lipner's Use of the Bell-LaPadula Model
Lipner provides two security levels, in the following order (higher to lower):
Audit Manager (AM): system audit and management functions are at this level.
System Low (SL): any process can read information at this level.
He similarly defined five categories:
Development (D): production programs under development and testing, but not yet in production use

27 Production Code (PC): production processes and programs
Production Data (PD): data covered by the integrity policy
System Development (SD): system programs under development, but not yet in production use
Software Tools (T): programs provided on the production system not related to the sensitive or protected data
Lipner then assigned users to security levels based on their jobs. Ordinary users will use production code to modify production data; hence, their clearance is (SL, {PC, PD}).

28 Application developers need access to tools for developing their programs, and to a category for the programs that are being developed (the categories should be separate). Hence, application developers have (SL, {D, T}) clearance. System programmers develop system programs and likewise need tools; they should have clearance (SL, {SD, T}). System managers and auditors need system high clearance, because they must be able to access all logs; their clearance is (AM, {D, PC, PD, SD, T}). Finally, the system controllers must have the ability to downgrade code once it is certified for production, so other entities cannot write to it; thus, the clearance for this type of user is (SL, {D, PC, PD, SD, T}) with the ability to downgrade programs. These security levels are summarized as follows.

29 Users                          Clearance
Ordinary users                    (SL, {PC, PD})
Application developers            (SL, {D, T})
System programmers                (SL, {SD, T})
System managers and auditors      (AM, {D, PC, PD, SD, T})
System controllers                (SL, {D, PC, PD, SD, T}) and downgrade privilege

30 The system objects
Objects are assigned to security levels based on who should access them. Objects that might be altered have two categories: that of the data itself and that of the program that may alter it. For example, an ordinary user needs to execute production code; hence, that user must be able to read production code. Placing production code in the level (SL, {PC}) allows such access by the simple security property of the Bell-LaPadula Model. Because an ordinary user needs to alter production data, the *-property dictates that production data be in (SL, {PC, PD}).

31 Similar reasoning supplies the following:
Objects                           Class
Development code/test data        (SL, {D, T})
Production code                   (SL, {PC})
Production data                   (SL, {PC, PD})
Software tools                    (SL, {T})
System programs                   (SL, ∅)
System programs in modification   (SL, {SD, T})
System and application logs       (AM, {appropriate categories})
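
The simple security property and the *-property reduce to a dominance test on (level, category-set) pairs. The following sketch (our own encoding, with illustrative names) verifies the two accesses just described for an ordinary user.

    LEVELS = {"SL": 0, "AM": 1}

    # l1 dom l2 iff l1's level is at least l2's and l1's categories contain l2's.
    def dom(l1, l2):
        (lev1, cats1), (lev2, cats2) = l1, l2
        return LEVELS[lev1] >= LEVELS[lev2] and cats2 <= cats1

    ordinary_user   = ("SL", {"PC", "PD"})
    production_code = ("SL", {"PC"})
    production_data = ("SL", {"PC", "PD"})

    # Simple security property: read requires subject dom object.
    print(dom(ordinary_user, production_code))    # True: user may read/execute code
    # *-property: write requires object dom subject.
    print(dom(production_data, ordinary_user))    # True: user may write production data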

32 All logs are append-only. By the *-property, their classes must dominate those of the subjects that write to them. Hence, each log will have its own categories, but the simplest way to prevent their being compromised is to put them at a higher security level.

33 We now examine this model in light of the requirements in Section 6.1.
1. Because users do not have execute access to category T, they cannot write their own programs, so requirement 1 is met.
2. Application programmers and system programmers do not have read or write access to category PD, and hence cannot access production data. If they do require production data to test their programs, the data must be downgraded from PD to D, and cannot be upgraded (because the model has no upgrade privilege). The downgrading requires intervention of system control users, which is a special process within the meaning of requirement 2. Thus, requirement 2 is satisfied.

34 3. The process of installing a program requires the downgrade privilege (specifically, changing the category of the program from D to PC), which belongs only to the system control users; hence, only those users can install applications or system programs. The use of the downgrade privilege satisfies requirement 3's need for a special process.
4. The control part of requirement 4 is met by allowing only system control users to have the downgrade privilege; the auditing part is met by requiring all downgrading to be logged.

35 5. Finally, the placement of system management and audit users in AM ensures that they have access both to the system state and to system logs, so the model meets requirement 5.
Thus, the model meets all requirements. However, it allows little flexibility in special-purpose software. For example, a program for repairing an inconsistent or erroneous production database cannot be application-level software. To remedy these problems, Lipner integrates his model with Biba's model.

36 6.3.2 Lipner's Full Model
Augment the security classifications with three integrity classifications (highest to lowest):
System Program (ISP): the classification for system programs
Operational (IO): the classification for production programs and development software
System Low (ISL): the classification at which users log in

37 Two integrity categories distinguish between production and development software and data:
Development (ID): development entities
Production (IP): production entities
The security category T (tools) allowed application developers and system programmers to use the same programs without being able to alter those programs. The new integrity categories now distinguish between development and production, so they serve the purpose of the security tools category, which is eliminated from the model. We can also collapse production code and production data into a single category.

38 This gives us the following security categories:
Production (SP): production code and data
Development (SD): same as (previous) security category Development (D)
System Development (SSD): same as (previous) security category System Development (SD)

39 The security classes of users remain equivalent to those of the model without integrity levels and categories. The integrity classes are chosen to allow modification of data and programs as appropriate. For example, ordinary users should be able to modify production data, so users of that class must have write access to integrity category IP. The following listing shows the integrity classes and categories of the classes of users:

40 Users                          Security clearance        Integrity clearance
Ordinary users                    (SL, {SP})                (ISL, {IP})
Application developers            (SL, {SD})                (ISL, {ID})
System programmers                (SL, {SSD})               (ISL, {ID})
System controllers                (SL, {SP, SD, SSD})       (ISP, {IP, ID}) and downgrade privilege
System managers and auditors      (AM, {SP, SD, SSD})       (ISL, {IP, ID})
Repair                            (SL, {SP})                (ISL, {IP})

41 The final step is to select integrity classes for objects. Consider the objects Production Code and Production Data. Ordinary users must be able to write the latter but not the former. By placing Production Code in integrity class (IO, {IP}) and Production Data in (ISL, {IP}), an ordinary user cannot alter production code but can alter production data. Similar analysis leads to the following:

42 Objects                        Security level                   Integrity level
Development code/test data        (SL, {SD})                       (ISL, {IP})
Production code                   (SL, {SP})                       (IO, {IP})
Production data                   (SL, {SP})                       (ISL, {IP})
Software tools                    (SL, ∅)                          (IO, {ID})
System programs                   (SL, ∅)                          (ISP, {IP, ID})
System programs in modification   (SL, {SSD})                      (ISL, {ID})
System and application logs       (AM, {appropriate categories})   (ISL, ∅)
Repair                            (SL, {SP})                       (ISL, {IP})

43 The repair class of users has the same integrity and security clearance as that of production data, and so can read and write that data. It can also read production code (same security classification, and (IO, {IP}) dom (ISL, {IP})), system programs ((SL, {SP}) dom (SL, ∅) and (ISP, {IP, ID}) dom (ISL, {IP})), and repair objects (same security classes and same integrity classes); it can write, but not read, the system and application logs (as (AM, {SP}) dom (SL, {SP}) and (ISL, {IP}) dom (ISL, ∅)).

44 It cannot access development code/test data (since the security categories are disjoint), system programs in modification (since the integrity categories are disjoint), or software tools (again, since the integrity categories are disjoint). Thus, the repair function works as needed.
The reader should verify that this model meets Lipner's requirements for commercial models.
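
These claims can be checked mechanically. The sketch below reuses the dom() function and LEVELS table from the earlier Bell-LaPadula sketch and adds the analogous integrity test; again, the encoding is our own illustration, not Lipner's notation. Note that Biba reverses the directions: a read requires the object's integrity label to dominate the subject's, and a write requires the reverse.

    ILEVELS = {"ISL": 0, "IO": 1, "ISP": 2}

    def idom(l1, l2):
        (lev1, cats1), (lev2, cats2) = l1, l2
        return ILEVELS[lev1] >= ILEVELS[lev2] and cats2 <= cats1

    repair_sec, repair_int = ("SL", {"SP"}), ("ISL", {"IP"})
    pdata_sec,  pdata_int  = ("SL", {"SP"}), ("ISL", {"IP"})
    pcode_int = ("IO", {"IP"})

    # Repair can read production data (BLP read and Biba read both pass) ...
    print(dom(repair_sec, pdata_sec) and idom(pdata_int, repair_int))   # True
    # ... and write it (BLP write and Biba write both pass).
    print(dom(pdata_sec, repair_sec) and idom(repair_int, pdata_int))   # True
    # But it cannot write production code: the Biba write check fails.
    print(idom(repair_int, pcode_int))                                  # False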

45 6.3.3 Comparison with Biba
Lipner's model demonstrates that the Bell-LaPadula Model can meet many commercial requirements, even though it was designed for a very different purpose. The resiliency of that model is part of its attractiveness; however, fundamentally, the Bell-LaPadula Model restricts the flow of information. Lipner notes this, suggesting that combining his model with Biba's may be the most effective approach.

