Security Architecture and Design: Part II
Chao-Hsien Chu, Ph.D.
College of Information Sciences and Technology, The Pennsylvania State University
University Park, PA 16802
chu@ist.psu.edu
IST 515 - Theory, Practice, Learning by Doing
The Transformation Process
The security policy provides the abstract goals, and the security model provides the do's and don'ts necessary to fulfill those goals.
[Diagram: Security Policy (abstract objectives, goals, and requirements; rules or practices; framework) -> Security Model (mathematical relationships and formulas; specifications; data structures) -> Programming (computer code; GUI check boxes) -> Product or System]
Security Policy
A security policy:
- Outlines the security requirements for an organization.
- Is an abstract term that represents the objectives and goals a system must meet and accomplish to be deemed secure and acceptable.
- Is a set of rules and practices that dictates how sensitive information and resources are managed, protected, and distributed.
- Expresses what the security level should be by setting the goals of what the security mechanisms are supposed to accomplish.
- Provides the framework for the system's security architecture.
Security Model
A security model is a symbolic representation of a policy, which:
- Outlines the requirements needed to support the security policy and how authorization is enforced.
- Maps the abstract goals of the policy to information system terms by specifying explicit data structures and techniques that are necessary to enforce the security policy.
- Maps the desires of the policymakers into a set of rules that a computer system must follow.
A security model is usually expressed in mathematical and analytical terms, which are then mapped to system specifications and finally developed by programmers as program code.
Example
Security policy: "Subjects need to be authorized to access objects."
Security model: Derive the mathematical relationships and formulas explaining how x can access y only through specific, outlined methods.
Specifications: Develop specifications to provide a bridge to what this means in a computing environment and how it maps to components and mechanisms that need to be coded and developed.
Programming: Write the program code to produce the mechanisms that provide a way for a system to use access control lists and give administrators some degree of control. This mechanism presents the network administrator with a GUI representation, such as check boxes, for choosing which subjects can access which objects within the operating system.
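As a rough illustration of the final step, the sketch below shows how such an access control list might look in code. It is a minimal, hypothetical example; the table contents and the names acl and is_authorized are made up for illustration and are not taken from any particular product.

```python
# Hypothetical sketch: a simple access control list (ACL) check, showing how the
# policy "subjects need to be authorized to access objects" can surface in code.

acl = {
    "payroll.db": {"alice": {"read", "write"}, "bob": {"read"}},
    "audit.log":  {"carol": {"read"}},
}

def is_authorized(subject: str, obj: str, operation: str) -> bool:
    """Return True only if the ACL explicitly grants this operation."""
    return operation in acl.get(obj, {}).get(subject, set())

# The GUI check boxes described above would simply toggle entries in this table.
print(is_authorized("bob", "payroll.db", "write"))    # False: not granted
print(is_authorized("alice", "payroll.db", "write"))  # True: explicitly granted
```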
Security Models
Security requirements: confidentiality, integrity, availability.
Security models:
- Bell-LaPadula Model (1973)
- Biba Model (1977)
- Clark-Wilson Model (1987)
- Access Control Matrix
- Information Flow Model
- Noninterference Model
- Chinese Wall Model
- Lattice Model
Bell-LaPadula Model
- Funded by the U.S. government, the Bell-LaPadula model is the first mathematical model of a multilevel security policy, needed because users with different clearances use the system and the system processes data with different classifications.
- Is a state machine model that enforces the confidentiality aspects of access control, but not integrity or availability.
- Is an information flow security model, as it ensures information does not flow in an insecure manner.
- All mandatory access control (MAC) models are based on the Bell-LaPadula model.
Bell-LaPadula Model Properties
- The simple security property (ss property) states that a subject at a given security level cannot read data that resides at a higher security level (no read up).
- The * (star) security property states that a subject at a given security level cannot write information to a lower security level (no write down).
- The strong star property states that a subject with read and write capabilities can only perform those functions at the same security level, nothing higher and nothing lower. For a subject to be able to read and write to an object, the subject's clearance and the object's classification must be equal.
(A code sketch of these rules follows the figure below.)
The Bell-LaPadula Model
[Figure: Simple security property - reading secrets from a layer of higher secrecy is blocked (no read up); star (*) property - divulging secrets by writing to a layer of lower secrecy is blocked (no write down); strong star (*) property - read/write is permitted only at the subject's own layer.]
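The sketch below is a minimal, illustrative check of the three Bell-LaPadula rules, assuming a simple linear ordering of security levels. The level names and function names are assumptions for illustration, not part of the model's formal definition.

```python
# Minimal sketch (not a production reference monitor) of the Bell-LaPadula rules,
# assuming a linear ordering of security levels.

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP_SECRET": 3}

def blp_can_read(subject_level: str, object_level: str) -> bool:
    # Simple security property: no read up.
    return LEVELS[subject_level] >= LEVELS[object_level]

def blp_can_write(subject_level: str, object_level: str) -> bool:
    # * (star) property: no write down.
    return LEVELS[subject_level] <= LEVELS[object_level]

def blp_can_read_write(subject_level: str, object_level: str) -> bool:
    # Strong star property: read/write only at the subject's own level.
    return LEVELS[subject_level] == LEVELS[object_level]

print(blp_can_read("SECRET", "TOP_SECRET"))    # False: read up is blocked
print(blp_can_write("SECRET", "CONFIDENTIAL")) # False: write down is blocked
```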
Biba Security Model
- Developed in 1977, the Biba integrity model mathematically describes read and write restrictions based on the integrity access classes of subjects and objects. It is the first model to address integrity.
- Is an information flow model, as it is concerned with data flowing from one level to another.
- The model looks similar to the Bell-LaPadula model; however, the read and write conditions are reversed.
Biba Integrity Model Axioms
- The simple integrity axiom states that a subject at one level of integrity is not permitted to observe (read) an object of lower integrity (no read down).
- The * (star) integrity axiom states that a subject at one level of integrity is not permitted to modify (write to) an object of a higher level of integrity (no write up).
- The invocation property states that a subject at one level of integrity cannot invoke (call up) a subject at a higher level of integrity.
The Biba Model
[Figure: Simple integrity property - reading from a lower layer of contamination is blocked (no read down); integrity star (*) property - writing up to a layer of higher integrity is blocked (no write up); otherwise the higher-integrity layer would get contaminated.]
The Invocation Property
- The Biba model can be extended to include an access operation called invoke. A subject can invoke another subject, such as a software utility, to access an object.
- A subject cannot send messages (logical requests for service) to subjects of higher integrity.
- Subjects are only allowed to invoke utilities or tools at the same or a lower integrity level (otherwise, a dirty subject could use a clean tool to access or contaminate a clean object). A code sketch of these rules follows.
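The sketch below mirrors the Bell-LaPadula example but with the Biba read/write conditions reversed and the invocation property added. It assumes a simple linear ordering of integrity levels; the level names are illustrative.

```python
# Minimal sketch of the Biba rules, assuming a linear ordering of integrity levels.

INTEGRITY = {"UNTRUSTED": 0, "USER": 1, "SYSTEM": 2}

def biba_can_read(subject_level: str, object_level: str) -> bool:
    # Simple integrity axiom: no read down.
    return INTEGRITY[subject_level] <= INTEGRITY[object_level]

def biba_can_write(subject_level: str, object_level: str) -> bool:
    # * (star) integrity axiom: no write up.
    return INTEGRITY[subject_level] >= INTEGRITY[object_level]

def biba_can_invoke(caller_level: str, callee_level: str) -> bool:
    # Invocation property: only invoke subjects at the same or a lower integrity level.
    return INTEGRITY[caller_level] >= INTEGRITY[callee_level]

print(biba_can_read("SYSTEM", "UNTRUSTED"))  # False: reading down would contaminate
print(biba_can_invoke("USER", "SYSTEM"))     # False: cannot call up to higher integrity
```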
Clark-Wilson Integrity Model
The model uses the following elements:
- Users: Active agents.
- Transformation Procedures (TPs): Programmed abstract operations, such as read, write, and modify.
- Constrained Data Item (CDI): A data item whose integrity is to be preserved; it can only be manipulated by TPs.
- Unconstrained Data Item (UDI): A data item outside the control area of the modeled environment, such as input information; it can be manipulated by users via primitive read and write operations.
- Integrity Verification Procedure (IVP): Checks the consistency of CDIs with external reality.
Elements of the Clark-Wilson Model
[Figure: Users interact with UDIs and, via TPs, with the CDIs (CDI 1, CDI 2, CDI 3); an IVP checks the CDIs; TP activity is recorded in a log CDI.]
Clark-Wilson Model
The three main goals of integrity models:
- Prevent unauthorized users from making modifications.
- Prevent authorized users from making improper modifications (separation of duties).
- Maintain internal and external consistency (well-formed transactions).
The Clark-Wilson model addresses each of these goals; the Biba model only addresses the first goal.
Clark-Wilson Model
- Developed by Clark and Wilson in 1987, the model addresses the integrity requirements of applications.
- The Clark-Wilson model enforces the three goals of integrity by using the access triple (subject, software TP, and object), separation of duties, and auditing.
- It enforces integrity through well-formed transactions (via the access triple) and separation of user duties, as sketched below.
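The sketch below illustrates the access-triple idea: a user may only touch a CDI through a certified TP, and every attempt is written to an audit log. The triples, TP names, and CDI names are hypothetical, chosen only to show the mechanism.

```python
# Minimal sketch of Clark-Wilson enforcement via access triples (user, TP, CDI).

# Certified (user, TP, CDI) triples: the only permitted ways to touch a CDI.
access_triples = {
    ("alice", "post_payment", "accounts_cdi"),
    ("bob",   "reconcile",    "accounts_cdi"),
}

audit_log: list[tuple[str, str, str, str]] = []  # the log is itself a CDI in the model

def run_tp(user: str, tp_name: str, cdi: str) -> None:
    """Allow a transformation procedure only through a certified access triple,
    recording every attempt for auditing."""
    allowed = (user, tp_name, cdi) in access_triples
    audit_log.append((user, tp_name, cdi, "ALLOWED" if allowed else "DENIED"))
    if not allowed:
        raise PermissionError(f"{user} may not run {tp_name} on {cdi}")
    # ... invoke the well-formed transaction here ...

run_tp("alice", "post_payment", "accounts_cdi")   # permitted by a certified triple
# run_tp("alice", "reconcile", "accounts_cdi")    # would raise: separation of duties
```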
Information Flow Model
- The information flow model deals with any kind of information flow; it helps architects and developers make sure their software does not allow information to flow in a way that could put the system or data in danger.
- One way that the information flow model provides protection is by ensuring that covert channels do not exist in the code.
- The Bell-LaPadula model focuses on preventing information from flowing from a high security level to a low security level; the Biba model focuses on preventing information from flowing from a low integrity level to a high integrity level.
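As a small illustration, the sketch below checks flows against an explicit "may flow" relation; the two example policies show how the Bell-LaPadula and Biba restrictions are special cases of the same idea. The label names and policies are assumptions for illustration.

```python
# Illustrative sketch of a label-based flow check: information may move from a
# source label to a destination label only when the policy's flow relation allows it.

def may_flow(policy: set[tuple[str, str]], source: str, destination: str) -> bool:
    """Allow a flow only between equal labels or pairs listed in the policy."""
    return source == destination or (source, destination) in policy

# Confidentiality policy (Bell-LaPadula flavour): information may flow low -> high only.
confidentiality_flows = {("public", "secret")}

# Integrity policy (Biba flavour): information may flow from high integrity to low only.
integrity_flows = {("trusted", "untrusted")}

print(may_flow(confidentiality_flows, "secret", "public"))  # False: would leak secrets
print(may_flow(integrity_flows, "untrusted", "trusted"))    # False: would contaminate
```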
Other Models
- State machine model: An abstract mathematical model that uses state variables to represent the system state. If a state machine fails, it should fail into a secure state.
- Noninterference model: Ensures that actions at a higher level (domain) cannot interfere with actions at a lower level.
- Graham-Denning model: Defines a set of eight primitive protection rights in terms of commands that a specific subject can execute on an object.
- Brewer and Nash model (Chinese Wall model): Allows for dynamically changing access controls to protect against conflicts of interest (see the sketch below).
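The sketch below illustrates the Brewer and Nash (Chinese Wall) idea of access rights that change dynamically with a subject's history: once a consultant has read one company's data, competing companies in the same conflict-of-interest class become off limits. The company names and classes are hypothetical.

```python
# Minimal sketch of Chinese Wall (Brewer and Nash) access checks.

# Conflict-of-interest classes: seeing one member's data blocks the competitors.
conflict_classes = [{"bank_a", "bank_b"}, {"oil_x", "oil_y"}]

history: dict[str, set[str]] = {}  # datasets each subject has already accessed

def chinese_wall_can_access(subject: str, dataset: str) -> bool:
    seen = history.get(subject, set())
    for coi in conflict_classes:
        if dataset in coi and (seen & coi) and dataset not in seen:
            return False  # a competing dataset in the same class was already read
    return True

def access(subject: str, dataset: str) -> bool:
    if chinese_wall_can_access(subject, dataset):
        history.setdefault(subject, set()).add(dataset)
        return True
    return False

print(access("eve", "bank_a"))  # True: no prior conflicts
print(access("eve", "bank_b"))  # False: the wall now blocks the competing bank
print(access("eve", "oil_x"))   # True: a different conflict-of-interest class
```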
Security Protection Mechanisms
Protection mechanisms are used to ensure the separation between objects in a system.
- Active protection mechanism: Prevents access to an object if the access is not authorized.
- Passive protection mechanism: Prevents or detects unauthorized use of the information associated with an object, even if access to the object itself is not prevented. In most cases, these techniques use cryptographic techniques to prevent unauthorized disclosure of information or checksum techniques to detect an unauthorized alteration of an object.
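As a small example of a passive mechanism, the sketch below uses a keyed checksum (HMAC-SHA256 from Python's standard library) to detect unauthorized alteration of an object; the key and sample data are placeholders.

```python
# Illustrative sketch of a passive protection mechanism: a keyed checksum that
# detects alteration of an object even if access to the object is not prevented.

import hashlib
import hmac

SECRET_KEY = b"replace-with-a-real-key"  # placeholder key for illustration only

def compute_checksum(data: bytes) -> str:
    """Keyed checksum (HMAC-SHA256) stored alongside the object."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def verify(data: bytes, stored_checksum: str) -> bool:
    """Recompute the checksum and compare in constant time."""
    return hmac.compare_digest(compute_checksum(data), stored_checksum)

original = b"quarterly report contents"
checksum = compute_checksum(original)

print(verify(original, checksum))              # True: object unaltered
print(verify(b"tampered contents", checksum))  # False: alteration detected
```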
Protection Mechanisms 1
- Trusted Computing Base (TCB)
- Trusted Computer System
- Abstraction, Encapsulation, and Information Hiding
- Security Perimeter
- Trusted Path
- Labeling and Classification
Protection Mechanisms 2
- Centralized backup for desktop systems
- Control of software on desktop systems
- Encryption / file encryption
- Appropriate access controls
- Robust access control and biometrics
- Protection of applications and databases
- Protection of domains, disks, systems, and laptops
Protection Mechanisms 3
- Separation of privileged processes from others
- Logging of transactions and transmissions
- Email and download/upload policies
- Security awareness and regular training
- Graphical user interface mechanisms
- Formal security methods in software development, change control, configuration management, and environmental change
- Disaster recovery and business continuity planning for all systems, including desktops, file systems and storage, databases and applications, and data and information
Factors for Security Product Selection
- Cost
- Flexibility
- Environmental
- User interface
- System administration
- Future development of a product
- Process
- Functionality
- Effectiveness
- Assurance
Security / System Evaluation
A security evaluation examines the security-relevant parts of a system, including:
- Trusted computing base (TCB)
- Access control mechanisms
- Reference monitor
- Kernel
- Protection mechanisms
There are different methods of evaluating and assigning assurance levels to systems, as various parts of the world look at computer security differently and rate some aspects of security differently.
Security Evaluation Standards
- Trusted Computer System Evaluation Criteria (TCSEC): Published in 1985 by the National Computer Security Center (NCSC). Also known as the Orange Book of the Rainbow Series. Addresses confidentiality.
- Trusted Network Interpretation (TNI): 1987, known as the Red Book. Addresses networks and telecommunications.
- Information Technology Security Evaluation Criteria (ITSEC): Drafted in 1990 and endorsed by the Council of the European Union in 1995. Includes integrity and availability, as well as confidentiality, as security goals.
- Common Criteria (CC): Based on the U.S. Federal Criteria, which expanded on the ITSEC. An international standard for evaluating trust.
Orange Book - TCSEC
The U.S. Department of Defense (DoD) developed the Trusted Computer System Evaluation Criteria (TCSEC) to evaluate operating systems, applications, and other products. These evaluation criteria are published in a book with an orange cover, which is called, appropriately, the Orange Book. The Orange Book is used to review the functionality, effectiveness, and assurance of a product during its evaluation, to determine whether the product contains the security properties the vendor claims it does and whether it is appropriate for a specific application or function.
Orange Book - TCSEC
Topics evaluated:
- Security policy
- Labels
- Marking of objects
- Identification of subjects
- Accountability
- Life-cycle assurance
- Documentation
- Continuous protection
Classification system:
- A: Verified Protection
- B: Mandatory Protection (B1: Labeled security; B2: Structured protection; B3: Security domains)
- C: Discretionary Protection (C1: Discretionary security protection; C2: Controlled access protection)
- D: Minimal Protection
Rainbow Series
Security practitioners have pointed out the deficiencies of the Orange Book:
- It looks specifically at the operating system and not at other issues like networking and databases.
- It focuses mainly on one attribute of security, confidentiality, and not on integrity and availability.
- It works with government classifications and not the protection classifications commercial industries use.
- It has a relatively small number of ratings; many different aspects of security are not evaluated and rated.
More books were written to extend the coverage of the Orange Book; collectively, they are called the Rainbow Series.
The Red Book - TNI
- The Trusted Network Interpretation (TNI), also called the Red Book, addresses security evaluation topics for networks and network components, including local area networks and wide area internetwork systems.
- Like the Orange Book, the Red Book only provides a framework for securing different types of networks; it does not supply specific details on how to implement security mechanisms.
- The Red Book rates the confidentiality of data and operations that happen within a network and within network products.
- Data and labels need to be protected from unauthorized modification, and the integrity of information needs to be ensured. The source and destination mechanisms used for messages are evaluated and tested to ensure modification is not allowed.
Common Criteria
- The Orange Book and the Rainbow Series provide evaluation schemes that are too rigid for the business world.
- ITSEC attempted to provide a more flexible approach by separating functionality and assurance. However, this added much complexity and resulted in too many classifications to keep straight.
- The International Organization for Standardization (ISO) identified the need for an international standard for security evaluation criteria, which resulted in the development of the Common Criteria.
- The Common Criteria was developed through a collaboration among national security standards organizations in the United States, Canada, France, Germany, the United Kingdom, and the Netherlands.
Common Criteria Assurance Levels
Under the Common Criteria model, an evaluation is carried out on a product and it is assigned an Evaluation Assurance Level (EAL). The Common Criteria has seven assurance levels:
- EAL1: Functionally tested
- EAL2: Structurally tested
- EAL3: Methodically tested and checked
- EAL4: Methodically designed, tested, and reviewed
- EAL5: Semiformally designed and tested
- EAL6: Semiformally verified design and tested
- EAL7: Formally verified design and tested
Certification
- Certification is the comprehensive evaluation of the technical and non-technical security features of an information system and other safeguards, carried out in support of the accreditation process, to establish the extent to which a particular design and implementation meets a specified set of security requirements.
- Certification is the endorsement that the system or application meets its functional and security requirements. It is the comprehensive technical analysis of the security features and safeguards of a system to establish the extent to which the security requirements are satisfied.
Certification
Certification uses a combination of security evaluation techniques:
- Risk analysis
- Validation, verification, and testing
- Security countermeasure evaluation
- Audit
Certification should consider the following issues:
- Security modes of operation
- Specific users and their training
- System and facility configuration and location
- Intercommunication with other systems
Vulnerabilities of Certification
- Organizations and users cannot count on a certified product being free of security flaws. Because new vulnerabilities are always being discovered, no product is ever completely secure.
- Most software products must be securely configured for their protection mechanisms to be effective.
- Certifications are not the definitive answer to security. IS security depends on more than just technical software protection mechanisms; it also depends on measures such as personnel and physical security.
Accreditation
- Accreditation is a formal declaration by a Designated Approving Authority (DAA) that an information system is approved to operate in a particular security mode, using a prescribed set of safeguards, at an acceptable level of risk.
- Accreditation is the official management decision to operate a system. It is management's formal approval of the adequacy of a system's security.
Certification and Accreditation
Accreditation looks at the following items:
- Particular security mode
- Prescribed set of countermeasures
- Defined threats; stated vulnerabilities
- Given operational concept and environment
- Stated interconnections to other systems
- Risk formally accepted
- Stated period of time
Certification and accreditation should be an ongoing process. A formal recertification and reaccreditation is required whenever a major change occurs, a major application is added, the security environment changes, or significant technology is upgraded.
Examples of Guidelines for Certification
- Defense Information Technology Security Certification and Accreditation Process (DITSCAP), DoD 5200.40 (December 1997).
- National Information Assurance Certification and Accreditation Process (NIACAP), NSTISSI No. 1000 (April 2000).
- Guide for the Security Certification and Accreditation of Federal Information Systems, NIST SP 800-37 (October 2002).