CS 5950 – Computer Security and Information Assurance Section 6: Program Security This is the short version of Section 6. It does not include OPTIONAL slides that you may SKIP. OPTIONAL details can be seen in the longer version of this Section. Dr. Leszek Lilien Department of Computer Science Western Michigan University Slides based on Security in Computing, Third Edition, by Pfleeger and Pfleeger. Using some slides courtesy of: Prof. Aaron Striegel — course taught at U. of Notre Dame Prof. Barbara Endicott-Popovsky and Prof. Deborah Frincke (U. Idaho) — taught at U. Washington Prof. Jussipekka Leiwo — taught at Vrije Universiteit (Free U.), Amsterdam, The Netherlands Slides not created by the above authors are © 2006 by Leszek T. Lilien Requests to use original slides for non-profit purposes will be gladly granted upon a written request.
Program Security – Outline (1) NOTE: Some subsections SKIPped 6.1. Secure Programs – Defining & Testing Introduction Judging S/w Security by Fixing Faults Judging S/w Security by Testing Pgm Behavior Judging S/w Security by Pgm Security Analysis Types of Pgm Flaws 6.2. Nonmalicious Program Errors Buffer overflows Incomplete mediation Time-of-check to time-of-use errors Combinations of nonmalicious program flaws
Program Security – Outline (2) 6.3. Malicious Code 6.3.1. General-Purpose Malicious Code incl. Viruses Introduction Kinds of Malicious Code How Viruses Work Virus Signatures Preventing Virus Infections Seven Truths About Viruses Case Studies Virus Removal and System Recovery After Infection 6.3.2. Targeted Malicious Code Trapdoors Salami attack Covert channels
Program Security – Outline (3) 6.4. Controls for Security Introduction Developmental controls for security Operating System controls for security Administrative controls for security Conclusions
6. Program Security (1) Program security – the fundamental step in applying security to computing Protecting programs is the heart of computer security All kinds of programs: applications, OS, DBMS, network software Issues: How to keep pgms free from flaws How to protect computing resources from pgms with flaws Issues of trust not considered: How trustworthy is a pgm you buy? How to use it in its most secure way? Partial answers: Third-party evaluations Liability and s/w warranties
--SKIPped a slide-- You can see all SKIPped slides in the LONG version of this Section. (The SKIPped slides are OPTIONAL, not required for exams.)
6.1. Secure Programs - Defining & Testing … Continued … [cf. B. Endicott-Popovsky]
Introduction (1) Pgm is secure if we trust that it provides/enforces: Confidentiality Integrity Availability What is „program security”? Depends on who you ask: user – fit for his task programmer – passes all „her” tests manager – conformance to all specs Developmental criteria for program security include: Correctness of security & other requirements Correctness of implementation Correctness of testing
Introduction (2) Fault tolerance terminology: Error - may lead to a fault Fault - cause for deviation from intended function Failure - system malfunction caused by fault Note: [cf. A. Striegel] Faults - seen by „insiders” (e.g., programmers) Failures - seen by „outsiders” (e.g., independent testers, users) Error/fault/failure example: Programmer’s indexing error leads to a buffer overflow fault Buffer overflow fault causes system crash (a failure) Two categories of faults w.r.t. duration [cf. A. Striegel]: Permanent faults Transient faults – can be much more difficult to diagnose
Basic approaches to having secure programs: 1) Judging s/w security by fixing pgm faults Red Team / Tiger Team tries to crack s/w If pgm withstands the attack => security is good 2) Judging s/w security by testing pgm behavior Run tests to compare behavior vs. requirements (think: testing in s/w engg) Important: If a flaw is detected as a failure (an effect), look for the underlying fault (the cause) Recall: fault seen by insiders, failure – by outsiders If possible, detect faults before they become failures Any kind of fault/failure can cause a security incident => we must consider security consequences for all kinds of detected faults/failures Even inadvertent faults/failures Inadvertent faults are the biggest source of security vulnerabilities exploited by attackers Testing only increases the probability of eliminating faults [cf. B. Endicott-Popovsky]
3) Judging s/w security by pgm security analysis Best approach to judging s/w security Analyze what can go wrong At every stage of program development! From requirement definition to testing After deployment Configurations / policies / practices [cf. B. Endicott-Popovsky]
--SKIPped a few slides-- You can see all SKIPped slides in the LONG version of this Section. (The SKIPped slides are OPTIONAL, not required for exams.)
e. Types of Pgm Flaws Taxonomy of pgm flaws: 1) Intentional a) Malicious b) Nonmalicious 2) Inadvertent a) Validation error (incomplete or inconsistent) e.g., incomplete or inconsistent input data b) Domain error e.g., using a variable value outside of its domain c) Serialization and aliasing serialization – e.g., in DBMSs or OSs aliasing - one variable or some reference, when changed, has an indirect (usually unexpected) effect on some other data Note: ‘Aliasing’ not in computer graphics sense! d) Inadequate ID and authentication (Section 4—on OSs) e) Boundary condition violation f) Other exploitable logic errors [cf. B. Endicott-Popovsky]
6.2. Nonmalicious Program Errors Nonmalicious program errors include: Buffer overflows Incomplete mediation Time-of-check to time-of-use errors Combinations of nonmalicious program flaws
Buffer Overflows (1) Buffer overflow flaw — often inadvertent (=> nonmalicious) but with serious security consequences Many languages require buffer size declaration C language statement: char sample[10]; Execute statement: sample[i] = ‘A’; where i=10 Out-of-bounds (0-9) subscript – buffer overflow occurs Some compilers don’t check for exceeding bounds; C does not perform array bounds checking Similar problem caused by pointers No reasonable way to define limits for pointers [cf. B. Endicott-Popovsky]
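A minimal C sketch of the out-of-bounds write above (an illustration added here, not a textbook example; the variable layout and the observed effect are compiler-dependent, and the write is undefined behavior):

```c
/* Out-of-bounds write sketch: valid indices of 'sample' are 0..9,
   so sample[10] lands in whatever the compiler placed next to the array. */
#include <stdio.h>

int main(void) {
    char neighbor = 'X';   /* some adjacent data; actual layout is compiler-dependent */
    char sample[10];
    int i = 10;

    sample[i] = 'A';       /* buffer overflow: C performs no bounds check here */

    printf("neighbor = %c\n", neighbor);   /* may or may not show the corruption */
    return 0;
}
```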
Buffer Overflows (2) Where does ‘A’ go? Depends on what is adjacent to ‘sample[10]’ Affects user’s data - overwrites user’s data Affects user’s code - changes user’s instruction Affects OS data - overwrites OS data Affects OS code - changes OS instruction This is a case of aliasing (cf. Slide 26) [cf. B. Endicott-Popovsky]
Buffer Overflows (3) Implications of buffer overflow: Attacker can insert malicious data values / instruction codes into the „overflow space” Suppose buffer overflow affects OS code area Attacker code executed as if it were OS code Attacker might need to experiment to see what happens when he inserts A into OS code area Can raise attacker’s privileges (to OS privilege level) When A is an appropriate instruction Attacker can gain full control of OS [cf. B. Endicott-Popovsky]
Buffer Overflows (4) Suppose buffer overflow affects a call stack area A scenario: Stack: [data][data][...] Pgm executes a subroutine => return address pushed onto stack (so subroutine knows where to return control to when finished) Stack: [ret_addr][data][data][...] Subroutine allocates dynamic buffer char sample[10] => buffer (10 empty spaces) pushed onto stack Stack: [..........][ret_addr][data][data][...] Subroutine executes: sample[i] = ‘A’ for i = 10 Stack: [..........][A][data][data][...] Note: ret_addr overwritten by A! (Assumed: size of ret_addr is 1 char)
Buffer Overflows (5) Suppose buffer overflow affects a call stack area—CONT Stack: [..........][A][data][data][...] Subroutine finishes Buffer for char sample[10] is deallocated Stack: [A][data][data][...] RET operation pops A from stack (considers it the return address) Stack: [data][data][...] Pgm (which called the subroutine) jumps to A => shifts program control to where attacker wanted Note: By playing with one’s own pgm, attacker can specify any „return address” for his subroutine Upon subroutine return, pgm transfers control to attacker’s chosen address A (even in OS area) Next instruction executed is the one at address A Could be the 1st instruction of a pgm that grants highest access privileges to its „executor”
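The classic stack-smashing pattern behind this scenario can be sketched in C as below (a hedged illustration, not an actual exploit; the function and variable names are invented, and real offsets depend on the compiler, calling convention, and any stack protection):

```c
/* Vulnerable pattern: attacker-controlled input copied into a fixed-size
   stack buffer with no length check. A long enough input overwrites the
   saved return address, redirecting control when the function returns. */
#include <stdio.h>
#include <string.h>

void vulnerable(const char *input) {
    char sample[10];
    strcpy(sample, input);        /* no bounds check: >9 chars overflow 'sample'
                                     toward the saved return address on the stack */
    printf("copied: %s\n", sample);
}

int main(int argc, char **argv) {
    if (argc > 1)
        vulnerable(argv[1]);      /* attacker chooses the input, hence the overflow */
    return 0;
}
```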
Buffer Overflows (6) Note: [Wikipedia – aliasing] C programming language specifications do not specify how data is to be laid out in memory (incl. stack layout) Some implementations of C may leave space between arrays and variables on the stack, for instance, to minimize possible aliasing effects.
Buffer Overflows (7) Web server attack similar to buffer overflow attack: pass very long string to web server (details: textbook, p.103) Buffer overflows still common Used by attackers to crash systems to exploit systems by taking over control Large # of vulnerabilities due to buffer overflows
b. Incomplete Mediation (1) Incomplete mediation flaw — often inadvertent (=> nonmalicious) but with serious security consequences Incomplete mediation: sensitive data are in an exposed, uncontrolled condition Example URL to be generated by client’s browser to access server, e.g.: http://www.things.com/order/final&custID=101&part=555A&qy=20&price=10&ship=boat&shipcost=5&total=205 Instead of using the generated URL, the user edits it directly, changing price and total cost as follows: http://www.things.com/order/final&custID=101&part=555A&qy=20&price=1&ship=boat&shipcost=5&total=25 User uses the forged URL to access the server The server takes 25 as the total cost
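A hedged sketch of how complete mediation would close this hole: the server never trusts the client-supplied total and recomputes it from its own catalog. The field names, prices, and helper function below are invented for illustration.

```c
/* Server-side recomputation: client-controlled fields (part, qty, total)
   are treated as untrusted; the total is recomputed from server-side data. */
#include <stdio.h>

/* hypothetical server-side price lookup (a real system would query a database) */
static int unit_price_cents(const char *part) {
    (void)part;
    return 1000;   /* part 555A costs $10.00 in this toy catalog */
}

int main(void) {
    const char *part = "555A";        /* parsed from the request: untrusted */
    int qty = 20;                     /* untrusted */
    int claimed_total = 2500;         /* what the forged URL claims, in cents */
    int ship_cents = 500;

    int real_total = unit_price_cents(part) * qty + ship_cents;
    if (claimed_total != real_total) {
        printf("order rejected: client total %d != server total %d\n",
               claimed_total, real_total);
        return 1;
    }
    printf("order accepted, total = %d cents\n", real_total);
    return 0;
}
```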
--SKIPped a slide-- You can see all SKIPped slides in the LONG version of this Section. (The SKIPped slides are OPTIONAL, not required for exams.)
c. Time-of-check to Time-of-use Errors (1) Time-of-check to time-of-use flaw — often inadvertent (=> nonmalicious) but with serious security consequences A.k.a. synchronization flaw / serialization flaw TOCTTOU — mediation with “bait and switch” in the middle Non-computing example: Swindler shows buyer real Rolex watch (bait) After buyer pays, switches real Rolex to a forged one In computing: Change of a resource (e.g., data) between time access checked and time access used Q: Any examples of TOCTTOU problems from computing?
Time-of-check to Time-of-use Errors (2) ... TOCTTOU — mediation with “bait and switch” in the middle Q: Any examples of TOCTTOU problems from computing? A: E.g., DBMS/OS serialization problem: pgm1 reads value of X = 10 pgm1 adds X = X + 5 pgm2 reads X = 10, adds 3 to X, writes X = 13 pgm1 writes X = 15 X ends up with value 15 – should be X = 18
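Another standard computing example is the file-system check-then-use race; the C sketch below (an added illustration with an invented path) shows the pattern: access() performs the check, open() performs the use, and anything that swaps the file in between defeats the check.

```c
/* TOCTTOU sketch: the check (access) and the use (open) are separate steps,
   so the object can be switched in the window between them. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    const char *path = "/tmp/report.txt";     /* hypothetical user-supplied path */

    if (access(path, W_OK) == 0) {            /* time of check */
        /* window: attacker replaces 'path' with a symlink to a protected file */
        int fd = open(path, O_WRONLY);        /* time of use: may be a different object now */
        if (fd >= 0) {
            write(fd, "data\n", 5);
            close(fd);
        }
    }
    return 0;
}
```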
--SKIPped a few slides-- You can see all SKIPped slides in the LONG version of this Section. (The SKIPped slides are OPTIONAL, not required for exams.)
6.3. Malicious Code Malicious code or rogue pgm is written to exploit flaws in pgms Malicious code can do anything a pgm can Malicious code can change Data Other programs Malicious code - „officially” defined by Cohen in 1984 but virus behavior known since at least 1970 Ware’s study for Defense Science Board (classified, made public in 1979) Outline for this Subsection: 6.3.1. General-Purpose Malicious Code (incl. Viruses) 6.3.2. Targeted Malicious Code
6.3.1. General-Purpose Malicious Code (incl. Viruses) Outline Introduction Kinds of Malicious Code How Viruses Work Virus Signatures Preventing Virus Infections Seven Truths About Viruses Case Studies [cf. B. Endicott-Popovsky]
a. Introduction Viruses are prominent example of general-purpose malicious code Not „targeted” against any user Attacks anybody with a given app/system/config/... Viruses Many kinds and varieties Benign or harmful Transferred even from trusted sources Also from „trusted” sources that are negligent to update antiviral programs and check for viruses [cf. B. Endicott-Popovsky]
--REMIND YOURSELF-- (from Section 1) b. Kinds of Malicious Code (1) Trapdoors Trojan Horses Bacteria Logic Bombs Worms Viruses X Files FIRST, you must begin to think of malicious logic as more than just a virus. Some of these malicious codes act as delivery agents. Others act as triggers. Regardless of their method of use, operational capability, or intent, all malicious code can be evaluated in the context of three principles: - Delivery Method or System Access - Trigger or Initiation Mechanism - Payload or effect on system Understanding these principles allows for successful countermeasures, which we will touch on later for each type of code. [cf. Barbara Endicott-Popovsky and Deborah Frincke, CSSE592/492, U. Washington]
--REMIND YOURSELF-- (from Section 1) b. Kinds of Malicious Code (2) Trojan horse - A computer program that appears to have a useful function, but also has a hidden and potentially malicious function that evades security mechanisms, sometimes by exploiting legitimate authorizations of a system entity that invokes the program Virus - A hidden, self-replicating section of computer software, usually malicious logic, that propagates by infecting (i.e., inserting a copy of itself into and becoming part of) another program. A virus cannot run by itself; it requires that its host program be run to make the virus active. Worm - A computer program that can run independently, can propagate a complete working version of itself onto other hosts on a network, and may consume computer resources destructively.
--REMIND YOURSELF-- (from Section 1) Kinds of Malicious Code (3) Bacterium - A specialized form of virus which does not attach to a specific file. Usage obscure. Logic bomb - Malicious [program] logic that activates when specified conditions are met. Usually intended to cause denial of service or otherwise damage system resources. Time bomb - activates when specified time occurs Rabbit – A virus or worm that replicates itself without limit to exhaust resources Trapdoor / backdoor - A hidden computer flaw known to an intruder, or a hidden computer mechanism (usually software) installed by an intruder, who can activate the trap door to gain access to the computer without being blocked by security services or mechanisms.
--SKIPped a slide-- You can see all SKIPped slides in the LONG version of this Section. (The SKIPped slides are OPTIONAL, not required for exams.)
c. How Viruses Work (1) Pgm containing virus must be executed to spread virus or infect other pgms Even one pgm execution suffices to spread virus widely Virus actions: spread / infect --SKIP-- Spreading – Example 1: Virus in a pgm on installation CD User activates pgm containing virus when she runs INSTALL or SETUP Virus installs itself in any/all executing pgms present in memory Virus installs itself in pgms on hard disk From now on virus spreads whenever any of the infected pgms (from memory or hard disk) executes
--SKIPped a few slides-- You can see all SKIPped slides in the LONG version of this Section. (The SKIPped slides are OPTIONAL, not required for exams.)
How Viruses Work (6) Characteristics of a ‘perfect’ virus (goals of virus writers): Hard to detect Not easily destroyed or deactivated Spreads infection widely Can reinfect programs Easy to create Machine and OS independent
How Viruses Work (7) Virus hiding places 1) In bootstrap sector – best place for virus Bec. virus gains control early in the boot process Before detection tools are active! 2) In memory-resident pgms TSR pgms (TSR = terminate and stay resident) TSR pgms are most frequently used OS pgms or specialized user pgms => good place for viruses (activated very often) [Figure: memory layout before vs. after infection – cf. J. Leiwo & textbook]
How Viruses Work (8) 3) In application pgms Best for viruses: apps with macros (MS Word, MS PowerPoint, MS Excel, MS Access, ...) One macro: startup macro executed when app starts Virus instructions attach to startup macro, infect document files Bec. doc files can include app macros (commands) E.g., .doc files include macros for MS Word Via data files infects other startup macros, etc. etc. 4) In libraries Libraries used/shared by many pgms => spread virus Execution of an infected library routine infects the pgm that uses it 5) In other widely shared pgms Compilers / loaders / linkers Runtime monitors Runtime debuggers Virus control pgms (!)
d. Virus Signatures (1) Virus hides but can’t become invisible – leaves behind a virus signature, defined by various patterns: 1) Storage patterns: must be stored somewhere/somehow (maybe in pieces) 2) Execution patterns: executes in a particular way 3) Distribution patterns: spreads in a certain way Virus scanners use virus signatures to detect viruses (in boot sector, on hard disk, in memory) Scanner can use file checksums to detect changes to files Once scanner finds a virus, it tries to remove it I.e., tries to remove all pieces of a virus V from target pgm T Virus scanner and its database of virus signatures must be up-to-date to be effective! Update and run daily!
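As a hedged illustration of signature scanning (not the textbook’s code; the 4-byte pattern is a made-up stand-in for a real signature database), the C sketch below searches a file for one fixed byte sequence, i.e., one storage pattern:

```c
/* Toy signature scanner: reports whether a file contains one fixed byte
   pattern. Real scanners use large signature databases, wildcards,
   checksums, and heuristics. */
#include <stdio.h>
#include <string.h>

static const unsigned char SIGNATURE[] = { 0xDE, 0xAD, 0xBE, 0xEF };  /* hypothetical */

int file_contains_signature(const char *path) {
    static unsigned char buf[1 << 20];            /* scan at most the first 1 MiB */
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    size_t n = fread(buf, 1, sizeof buf, f);
    fclose(f);
    for (size_t i = 0; i + sizeof SIGNATURE <= n; i++)
        if (memcmp(buf + i, SIGNATURE, sizeof SIGNATURE) == 0)
            return 1;                             /* storage pattern found */
    return 0;
}

int main(int argc, char **argv) {
    if (argc > 1)
        printf("%s: %s\n", argv[1],
               file_contains_signature(argv[1]) == 1 ? "signature found"
                                                      : "clean or unreadable");
    return 0;
}
```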
Detecting Virus Signatures (1) Difficulty 1 — in detecting execution patterns: Most effects of virus execution (see next page) are „invisible” Bec. they are normal – any legitimate pgm could cause them (hiding in a crowd) => can’t help in detection
--SKIPped a slide-- You can see all SKIPped slides in the LONG version of this Section. (The SKIPped slides are OPTIONAL, not required for exams.)
Detecting Virus Signatures (3) Difficulty 2 — in finding storage patterns: Polymorphic viruses: a polymorphic virus changes from one „form” (storage pattern) to another Simple virus always recognizable by a certain char pattern Polymorphic virus mutates into a variety of storage patterns Examples of polymorphic virus mutations: Randomly repositions all parts of itself and randomly changes all fixed data within its code Repositioning is easy since (infected) files are stored as chains of data blocks - chained with pointers Randomly intersperses harmless instructions throughout its code (e.g., add 0, jump to next instruction) Encrypting virus: encrypts its object code (each time with a different/random key), decrypts code to run ... More below ...
--SKIPped a few slides-- You can see all SKIPped slides in the LONG version of this Section. (The SKIPped slides are OPTIONAL, not required for exams.)
e. Preventing Virus Infections Use commercial software from trustworthy sources But even this is not an absolute guarantee of virus-free code! Test new software on isolated computers Open only safe attachments Keep recoverable system image in safe place Backup executable system files Use virus scanners often (daily) Update virus detectors daily Databases of virus signatures change very often No absolute guarantees even if you follow all the rules – just much better chances of preventing a virus [cf. B. Endicott-Popovsky]
f. Seven Truths About Viruses Viruses can infect any platform Viruses can modify “hidden” / “read only” files Viruses can appear anywhere in system Viruses spread anywhere sharing occurs Viruses cannot remain in memory after a complete power off / power on reboot But virus reappears if saved on disk (e.g., in the boot sector) Viruses infect software that runs hardware There are firmware viruses (if firmware writeable by s/w) Viruses can be malevolent, benign, or benevolent Hmmm... Would you like a benevolent virus doing good things (like compressing pgms to save storage) but without your knowledge? [cf. B. Endicott-Popovsky]
--SKIPped a few slides-- You can see all SKIPped slides in the LONG version of this Section. (The SKIPped slides are OPTIONAL, not required for exams.)
h. Virus Removal and System Recovery After Infection Fixing a system after infection by virus V: 1) Disinfect (remove) viruses (using antivirus pgm) Can often remove V from infected pgm T w/o damaging T if V code can be separated from T code and V did not corrupt T Have to delete T if can’t separate V from T code 2) Recover files: - deleted by V - modified by V - deleted during disinfection (by antivirus pgm) => need file backups! Make sure to have backups of (at least) important files
6.3.2. Targeted Malicious Code Targeted = written to attack a particular system, a particular application, and for a particular purpose Many virus techniques apply Some new techniques as well Outline: Trapdoors Salami attack Covert channels
Trapdoors (1) --SKIP this def.-- Original def: Trapdoor / backdoor - A hidden computer flaw known to an intruder, or a hidden computer mechanism (usually software) installed by an intruder, who can activate the trap door to gain access to the computer without being blocked by security services or mechanisms. A broader definition: Trapdoor – an undocumented entry point to a module Inserted during code development For testing As a hook for future extensions As emergency access in case of s/w failure
Trapdoors (2) Testing: With stubs and drivers for unit testing (Fig. 3-10 p. 138) Testing with debugging code inserted into tested modules May allow programmer to modify internal module variables Major sources of trapdoors: Left-over (purposely or not) stubs, drivers, debugging code Poor error checking E.g., allowing for unacceptable input that causes buffer overflow Undefined opcodes in h/w processors Some were used for testing, some random Not all trapdoors are bad Some left purposely w/ good intentions — facilitate system maintenance/audit/testing
b. Salami attack Salami attack - merges bits of seemingly inconsequential data to yield powerful results Old example: interest calculation in a bank: Fractions of 1 ¢ „shaved off” n accounts and deposited in attacker’s account Nobody notices/cares if 0.1 ¢ vanishes Can accumulate to a large sum Easy target for salami attacks: Computer computations combining large numbers with small numbers Require rounding and truncation of numbers Relatively small amounts of error from these op’s are accepted as unavoidable – not checked unless there is a strong suspicion Attacker can hide „salami slices” within the error margin
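A hedged numeric sketch of the salami idea (account balances, interest rate, and the attacker’s account are invented): each account is credited whole cents only, and the truncated fractions are quietly accumulated elsewhere.

```c
/* Salami sketch: interest is truncated to whole cents per account; the
   shaved-off fractions stay within the accepted rounding error, yet add up. */
#include <stdio.h>

int main(void) {
    double balances[] = { 1234.56, 987.65, 4500.00, 12.34 };  /* hypothetical accounts */
    int n = sizeof balances / sizeof balances[0];
    double rate = 0.0137;                  /* hypothetical monthly interest rate */
    double attacker_account = 0.0;

    for (int i = 0; i < n; i++) {
        double interest = balances[i] * rate;
        double credited = (long)(interest * 100) / 100.0;   /* truncate to whole cents */
        balances[i] += credited;
        attacker_account += interest - credited;            /* the "salami slice" */
    }
    printf("attacker skimmed $%.6f from %d accounts this run\n", attacker_account, n);
    return 0;
}
```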
c. Covert Channels (CC) (1) --SKIP-- Outline: Covert Channels - Definition and Examples Types of Covert Channels Storage Covert Channels Timing Covert Channels Identifying Potential Covert Channels Covert Channels - Conclusions
i. CC – Definition and Examples (1) So far: we looked at malicious pgms that perform wrong actions Now: pgms that disclose confidential/secret info They violate confidentiality, secrecy, or privacy of info Covert channels = channels of unwelcome disclosure of info Extract/leak data clandestinely Examples 1) An old military radio communication network The busiest node is most probably the command center Nobody is so naive nowadays 2) Secret ways spies recognize each other Holding a certain magazine in hand Exchanging a secret gesture when approaching each other ...
Covert Channels – Definition and Examples (2) How do programmers create covert channels? Providing pgm with built-in Trojan horse Uses covert channel to communicate extracted data Example: pgm w/ Trojan horse using covert channel Should be: Protected Legitimate Data <------[ Service Pgm ]------> User Is: [ w/ Trojan h. ] covert channel Spy (Spy - e.g., programmer who put Trojan into pgm; directly or via Spy Pgm)
Covert Channels – Definition and Examples (3) How are covert channels created? I.e., how are leaked data hidden? Example: leaked data hidden in output reports (or displays) Different ‘marks’ in the report (cf. Fig. 3-12, p. 143): Varying report format Changing line length / changing nr of lines per page Printing or not printing certain values, characters, or headings Each ‘mark’ can convey one bit of info
--SKIPped a slide-- You can see all SKIPped slides in the LONG version of this Section. (The SKIPped slides are OPTIONAL, not required for exams.)
ii. Types of Covert Channels Storage covert channels Convey info by presence or absence of an object in storage Timing covert channels Convey info by varying the speed at which things happen
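A toy sketch of a storage channel (an added illustration; the file name and one-bit-per-second timing are invented): the sender leaks bits by creating or removing an innocuous-looking file, and the receiver merely observes whether it exists.

```c
/* Storage covert channel sketch: presence of the file = 1, absence = 0,
   one bit per agreed time slot. */
#include <stdio.h>
#include <unistd.h>

#define CHANNEL_FILE "/tmp/.cache_lock"    /* looks harmless to an observer */

static void send_bit(int bit) {
    if (bit) {
        FILE *f = fopen(CHANNEL_FILE, "w");   /* create file: signal a 1 */
        if (f) fclose(f);
    } else {
        unlink(CHANNEL_FILE);                 /* remove file: signal a 0 */
    }
}

static int receive_bit(void) {
    return access(CHANNEL_FILE, F_OK) == 0;   /* observe presence/absence only */
}

int main(void) {
    const int secret[] = { 1, 0, 1, 1 };      /* bits the Trojan horse wants to leak */
    for (int i = 0; i < 4; i++) {
        send_bit(secret[i]);
        printf("receiver read bit: %d\n", receive_bit());
        sleep(1);                             /* next agreed time slot */
    }
    unlink(CHANNEL_FILE);
    return 0;
}
```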
--SKIPped a few slides-- You can see all SKIPped slides in the LONG version of this Section. (The SKIPped slides are OPTIONAL, not required for exams.)
Covert Channels - Conclusions Covert channels are a serious threat to confidentiality and thus security („CIA” = security) Any virus/Trojan horse can create a covert channel In open systems — no way to prevent covert channels Very high security systems require a painstaking and costly design preventing (some) covert channels Analysis must be performed periodically as high security system evolves
6.4. Controls for Security How to control security of pgms during their development and maintenance --SKIP-- Outline: Introduction Developmental controls for security Operating system controls for security Administrative controls for security Conclusions
Introduction „Better to prevent than to cure” Preventing security flaws We have seen a lot of possible security flaws How to prevent (some of) them? Software engineering concentrates on developing and maintaining high-quality s/w We’ll take a look at some techniques useful specifically for developing/ maintaining secure s/w Three types of controls for security (against pgm flaws): Developmental controls OS controls Administrative controls
b. Developmental Controls for Security (1) Nature of s/w development Collaborative effort Team of developers, each involved in one of the stages: Requirement specification Regular req. specs: „do X” Security req. specs: „do X and nothing more” Design Implementation Testing Documenting at each stage Reviewing at each stage Managing system development thru all stages Maintaining deployed system (updates, patches, new versions, etc.) Both product and process contribute to overall quality — incl. security dimension of quality
Developmental Controls for Security (2) Fundamental principles of s/w engineering: Modularity Encapsulation Info hiding 1) Modularity Modules should be: Single-purpose - logically/functionally Small - for a human to grasp Simple - for a human to grasp Independent – high cohesion, low coupling High cohesion – highly focused on (single) purpose Low coupling – free from interference from other modules Modularity should improve correctness Fewer flaws => better security
Developmental Controls for Security (3) 2) Encapsulation Minimizing info sharing with other modules => Limited interfaces reduce # of covert channels Well documented interfaces „Hiding what should be hidden and showing what should be visible.” 3) Information hiding Module is a black box Well defined function and I/O Easy to know what module does but not how it does it Reduces complexity, interactions, covert channels, ... => better security
Developmental Controls for Security (4) Many techniques for building solid software --SKIP--: Peer reviews Hazard analysis Testing Good design Risk prediction & management Static analysis Configuration management Additional developmental controls ... Please read on your own ... Also see slides – all discussed below ... [cf. B. Endicott-Popovsky]
--SKIPped a few slides-- You can see all SKIPped slides in the LONG version of this Section. (The SKIPped slides are OPTIONAL, not required for exams.)
Developmental Controls for Security (8) 4) Good design Good design uses: Modularity / encapsulation / info hiding Fault tolerance Consistent failure handling policies Design rationale and history Design patterns i. Using modularity / encapsulation / info hiding - as discussed above
Developmental Controls for Security (9) ii. Using fault tolerance for reliability and security System tolerates component failures System more reliable than any of its components Different than for security, where system is as secure as its weakest component Fault-tolerant approach: Anticipate faults (car: anticipate having a flat tire) Active fault detection rather than passive fault detection (e.g., by use of mutual suspicion: active input data checking) Use redundancy (car: have a spare tire) Isolate damage Minimize disruption (car: replace flat tire, continue your trip) [cf. B. Endicott-Popovsky]
Developmental Controls for Security (10) Example 1: Majority voting (using h/w redundancy) 3 processors running the same s/w E.g., in a spaceship Result accepted if results of 2 processors agree Example 2: Recovery block (using s/w redundancy) Primary code: e.g., Quick Sort – new code (faster) Acceptance test Secondary code: e.g., Bubble Sort – well-tested code, run if the acceptance test fails
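A minimal C sketch of the recovery-block pattern above (an added illustration; the “primary” is deliberately broken so the fallback path is exercised, and the C library qsort stands in for the well-tested secondary):

```c
/* Recovery block: run primary, check result with an acceptance test,
   restore the checkpoint and run the secondary if the test fails. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

static void primary_sort(int *a, int n)   { (void)a; (void)n; }  /* "new, faster" code: buggy on purpose */
static void secondary_sort(int *a, int n) { qsort(a, n, sizeof *a, cmp_int); }  /* well-tested code */

static int acceptable(const int *a, int n) {      /* acceptance test: nondecreasing output */
    for (int i = 1; i < n; i++)
        if (a[i - 1] > a[i]) return 0;
    return 1;
}

int main(void) {
    int data[]  = { 5, 2, 9, 1 };
    int backup[4];
    memcpy(backup, data, sizeof data);            /* checkpoint before the primary runs */

    primary_sort(data, 4);
    if (!acceptable(data, 4)) {                   /* acceptance test failed ... */
        memcpy(data, backup, sizeof data);        /* ... restore checkpoint ... */
        secondary_sort(data, 4);                  /* ... and fall back to the secondary */
    }
    for (int i = 0; i < 4; i++) printf("%d ", data[i]);
    printf("\n");
    return 0;
}
```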
--SKIPped a few slides-- You can see all SKIPped slides in the LONG version of this Section. (The SKIPped slides are OPTIONAL, not required for exams.)
Developmental Controls for Security (13) Value of Good Design: Easy maintenance Understandability Reuse Correctness Better testing => translates into (saving) BIG bucks! [cf. B. Endicott-Popovsky]
--SKIPped a few slides-- You can see all SKIPped slides in the LONG version of this Section. (The SKIPped slides are OPTIONAL, not required for exams.)
c. Operating System Controls for Security (1) Developmental controls not always used OR: Even if used, not foolproof => Need other, complementary controls, incl. OS controls Such OS controls can protect against some pgm flaws
Operating System Controls for Security (2) Trusted software – code rigorously developed and analyzed so we can trust that it does all and only what the specs say Trusted code establishes a foundation upon which untrusted code runs Trusted code establishes security baseline for the whole system In particular, OS can be trusted s/w
Operating System Controls for Security (3) Key characteristics determining if OS code is trusted: 1) Functional correctness OS code consistent with specs 2) Enforcement of integrity OS keeps integrity of its data and other resources even if presented with flawed or unauthorized commands 3) Limited privileges OS minimizes access to secure data/resources Trusted pgms must have „need to access” and proper access rights to use resources protected by OS Untrusted pgms can’t access resources protected by OS 4) Appropriate confidence level OS code examined and rated at appropriate trust level
Operating System Controls for Security (4) Similar criteria used to establish if s/w other than OS can be trusted Ways of increasing security if untrusted pgms present: Mutual suspicion Confinement Access log 1) Mutual suspicion between programs Distrust other pgms – treat them as if they were incorrect or malicious Pgm protects its interface data With data checks, etc.
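A hedged sketch of mutual suspicion (the function name, limits, and checks below are invented): the called routine validates everything the caller hands it before using it, as if the caller were incorrect or malicious.

```c
/* Mutual suspicion sketch: validate interface data before use. */
#include <stdio.h>
#include <string.h>

#define MAX_NAME 32

/* returns 0 on success, -1 if the caller's data is rejected */
int register_user(const char *name, int age) {
    if (name == NULL) return -1;                      /* distrust: missing data */
    size_t len = strlen(name);
    if (len == 0 || len > MAX_NAME) return -1;        /* distrust: bad length */
    for (size_t i = 0; i < len; i++)                  /* distrust: unexpected characters */
        if (!((name[i] >= 'a' && name[i] <= 'z') ||
              (name[i] >= 'A' && name[i] <= 'Z')))
            return -1;
    if (age < 0 || age > 150) return -1;              /* distrust: out-of-domain value */

    printf("registered %s, age %d\n", name, age);     /* only now use the data */
    return 0;
}

int main(void) {
    register_user("Alice", 30);                       /* accepted */
    register_user("Robert'); DROP--", 30);            /* rejected by the character check */
    return 0;
}
```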
2) Confinement OS can confine access to resources by suspected pgm Operating System Controls for Security (5) 2) Confinement OS can confine access to resources by suspected pgm Example 1: strict compartmentalization Pgm can affect data and other pgms only within its compartment Example 2: sandbox for untrusted pgms Can limit spread of viruses
Operating System Controls for Security (6) 3) Audit log / access log Records who/when/how (e.g., for how long) accessed/used which objects Events logged: logins/logouts, file accesses, pgm executions, device uses, failures, repeated unsuccessful commands (e.g., many repeated failed login attempts can indicate an attack) Audit frequently for unusual events, suspicious patterns It is a forensic measure, not a protective measure Forensics – investigation to find who broke laws, policies, or rules (a posteriori, not a priori)
d. Administrative Controls for Security (1) They prohibit or demand certain human behavior via policies, procedures, etc. They include: Standards of program development Security audits Separation of duties
--SKIPped a few slides-- You can see all SKIPped slides in the LONG version of this Section. (The SKIPped slides are OPTIONAL, not required for exams.)
e. Conclusions (for Controls for Security) Developmental / OS / administrative controls help produce/maintain higher-quality (also more secure) s/w Art and science - no „silver bullet” solutions „A good developer who truly understands security will incorporate security into all phases of development.” [textbook, p. 172] Summary: [cf. B. Endicott-Popovsky] Developmental controls – Purpose: limit mistakes, make malicious code difficult; Benefit: produce better software Operating system controls – Purpose: limit access to system; Benefit: promote safe sharing of info Administrative controls – Purpose: limit actions of people; Benefit: improve usability, reusability and maintainability
The End of Section 6 (Ch. 3): Program Security OPTIONAL details can be seen in the longer version of this Section (which includes slides that you may SKIP)