CIT 480: Securing Computer Systems
Secure Design Principles
Topics: Attack Surface, Attack Trees, Secure Design Principles
Attack Surface
Attack surface: the set of ways an application can be attacked, used to measure the attackability of a system. The larger the attack surface, the more likely an attacker is to exploit its vulnerabilities and the more damage an attack is likely to cause. Compare this to measuring vulnerability by counting the number of reported security bugs: both are useful measures of security, but they have very different meanings.
Network Attack Surface
IPv4 and IPv6 addresses accessible via firewall. Protocols allowed to each IP address. Open TCP/UDP ports on each IP address. List of applications running on ports.
Automotive Attack Surface
Why Attack Surface Reduction?
If your code is perfect, why worry? All code has a nonzero probability of containing vulnerabilities, and even if code is perfect now, new vulnerability classes arise: the format string vulnerability class was only discovered in 1999, and an application may be immune to XML injection until you add an XML storage feature. Attack surface reduction (ASR) eliminates unnecessary exposures and lets you focus on the more dangerous code and on the exposures that must remain.
Attack Surface Reduction
Reduce the code that executes by default: disable features not all users need. Restrict who can access the code: require authentication, and require admin rights for dangerous functions. Reduce the privilege level of the code: prefer running as an ordinary user to running as admin, and prefer SETGID to SETUID.
Attack Trees—Graph Notation
Goal: Read a file from a password-protected PC. [Figure: attack tree with root "Read File" and nodes "Get Password", "Search Desk", "Social Engineer", "Network Access", "Physical Access", "Boot with CD", "Remove hard disk".] The graph represents the decision-making process of an attacker. The root node represents the goal; leaves represent methods of achieving the goal and become more specific lower in the tree. Most child nodes represent logical ORs, but some represent ANDs (e.g., get the encrypted key file AND the password used to encrypt it). Values can be assigned to nodes to represent perceived risk, i.e., how feasible the attack is.
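A minimal sketch of how such node values might be used, not taken from the slides: assume each leaf carries an estimated attack cost, an OR node costs as much as its cheapest child, and an AND node costs the sum of its children. The node names loosely mirror the tree above.

```c
#include <stdio.h>
#include <limits.h>

/* Hypothetical attack-tree node: leaves carry an estimated attack cost;
 * interior nodes combine children with OR (attacker picks the cheapest
 * path) or AND (attacker must achieve every child). */
typedef enum { LEAF, OR_NODE, AND_NODE } node_type;

typedef struct node {
    node_type type;
    int cost;                 /* used only for leaves */
    struct node *children[8]; /* NULL-terminated list */
} node;

int attack_cost(const node *n) {
    if (n->type == LEAF)
        return n->cost;
    int total = (n->type == AND_NODE) ? 0 : INT_MAX;
    for (int i = 0; n->children[i] != NULL; i++) {
        int c = attack_cost(n->children[i]);
        if (n->type == AND_NODE)
            total += c;               /* AND: attacker needs every child   */
        else if (c < total)
            total = c;                /* OR: attacker takes cheapest child */
    }
    return total;
}

int main(void) {
    node bribe        = { LEAF, 500, {0} };
    node search_desk  = { LEAF, 50,  {0} };
    node get_password = { OR_NODE, 0, { &bribe, &search_desk, NULL } };
    node physical     = { LEAF, 200, {0} };
    node read_file    = { OR_NODE, 0, { &get_password, &physical, NULL } };
    printf("cheapest attack cost: %d\n", attack_cost(&read_file));
    return 0;
}
```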
Attack Trees—Text Notation
Goal: Read a message sent from one PC to another.
1. Convince sender to reveal message.
1.1 Blackmail.
1.2 Bribe.
2. Read message when entered on sender's PC.
2.1 Visually monitor PC screen.
2.2 Monitor EM radiation from screen.
3. Read message when stored on receiver's PC.
3.1 Get physical access to hard drive.
3.2 Infect user with spyware.
4. Read message in transit.
4.1 Sniff network.
4.2 Usurp control of mail server.
Example Tree: Repudiation
Threat Modeling: Designing for Security, Figure 4.3
Example Tree: ACFE Fraud
Threat Modeling: Designing for Security, Figure 4.4
Security Design Principles
Least Privilege, Fail-Safe Defaults, Economy of Mechanism, Complete Mediation, Open Design, Separation of Privilege, Least Common Mechanism, Psychological Acceptability
Meta Principles
Simplicity (Minimization): minimize components and cases that can fail; fewer possible inconsistencies; easier to understand.
Restriction (Isolation): minimize access; inhibit communication; encapsulate components.
The design principles are rooted in simplicity and restrictiveness. Simplicity operates on many levels: simpler things have fewer components, so less can go wrong; there are fewer interfaces, so there are fewer entities communicating through them that can be inconsistent; and simple mechanisms are easier to check and understand, with less to check. Restriction minimizes the number and types of interactions between an entity and other entities. In some circles this appears as the "need to know" principle: give the entity only the information it needs to complete its task, and let it release information only when its goals require it. Note that this includes writing (integrity), because by altering other entities the writer can communicate information.
Least Privilege
A subject should be given only those privileges necessary to complete its task. Function, not identity, controls: rights are added as needed and discarded after use, keeping the protection domain minimal. The most common violation is running as administrator or root; use runas or sudo instead.
This is an example of restriction. Key concepts: Function: what is the task, and what is the minimal set of rights needed? "Minimal" here means that if a right is not present, the task cannot be performed. A good example is a UNIX network server that needs access to a port below 1024 (this access requires root). Rights added and discarded: if the task requires privileges for only one action, the privileges should be added before the action and removed afterward. Returning to the UNIX network server, if the server need not continue to act as root (for example, an SMTP server), it should drop its root privileges immediately after the port is opened. The minimal protection domain follows from the other two.
Least Privilege Example
Problem: a web server serves files under /usr/local/http and logs connections under /usr/local/http/log. HTTP uses port 80 by default, and only root can open ports below 1024. Solution: start the web server as root, open port 80, then change its UID to a non-root user.
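A minimal sketch of that solution in C, assuming a hypothetical unprivileged account with UID/GID 1001; the real account name, full error handling, and the request loop are omitted.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <grp.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define WWW_UID 1001   /* hypothetical unprivileged account */
#define WWW_GID 1001

int main(void) {
    /* 1. While still root, open the privileged port. */
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(80);                /* needs root */
    if (s < 0 || bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        exit(1);
    }
    listen(s, 16);

    /* 2. Drop privileges for the rest of the process lifetime.
     *    Order matters: supplementary groups first, then gid, then uid. */
    if (setgroups(0, NULL) < 0 || setgid(WWW_GID) < 0 || setuid(WWW_UID) < 0) {
        perror("drop privileges");
        exit(1);
    }
    /* 3. Verify the drop is irreversible before serving requests. */
    if (setuid(0) == 0) {
        fprintf(stderr, "still able to regain root, aborting\n");
        exit(1);
    }
    /* ... accept() connections and serve files under /usr/local/http ... */
    return 0;
}
```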
How do we run with least privilege?
List required resources and special tasks: files, network connections, changing user account, backing up data. Then determine what access you need to each resource under the access control model: do you need create, read, write, append, etc.?
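A small illustration of requesting only the access the task needs, reusing the web-server log from the earlier example; the flags and mode are illustrative, not prescribed by the slides.

```c
#include <fcntl.h>

/* A logger only ever appends, so request append-only write access
 * (plus create) rather than O_RDWR: the process cannot read or
 * rewrite existing log entries even if it is compromised. */
int open_log(void) {
    return open("/usr/local/http/log/access.log",
                O_WRONLY | O_APPEND | O_CREAT, 0640);
}
```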
Fail-Safe Defaults
The system's default configuration should be secure: the default action is to deny access. When an action fails, the system must be restored to a state as secure as the one it was in when the action started.
The first point is well known: set everything to deny and add rights back explicitly, which follows the Principle of Least Privilege. A variation appears when writing security-sensitive code that takes untrusted data (such as input) that may contain meta-characters: the rule of thumb is to specify the LEGAL characters and discard all others, rather than specifying the ILLEGAL characters and discarding those (more on this in chapter 29). The second point is often overlooked but goes to the meaning of "fail safe": if something fails, the system is still safe. Failure should never change the security state of the system, so if an action fails, the system should be as secure as if the action had never taken place. Example: a credit card system defaults to a manual process if it cannot phone in to check validity, and the manual process is insecure. Counterpoint (risk management): it may be cheaper to accept the occasional loss than to refuse valid transactions when the line is down.
Fail-Safe Defaults Example
Problem: a retail credit card transaction. The card is looked up in a vendor database to check for stolen cards or suspicious transaction patterns. What happens if the system cannot contact the vendor? Solution: no authentication, but the transaction is logged. How does this system violate the Principle of Fail-Safe Defaults?
Fail-Safe Defaults Example
Problem: MS Office macro viruses. MS Office files can contain Visual Basic code (macros), and MS Office automatically executes certain macros when opening a file. Users can turn off automatic execution. Don't mix code and data! Solution: MS Office XP turns automatic execution of macros off by default. While the solution is a fail-safe default, does it follow least privilege too?
Economy of Mechanism
Keep the system as simple as possible: use the simplest solution that works, so there are fewer cases and components to fail, and all of the code of a small application can be reviewed. Reuse known secure solutions, i.e., don't write your own cryptography.
Simplicity applies to all dimensions: design, implementation, operation, interaction with other components, even specification. The toolkit philosophy of the UNIX system is excellent here; each tool is designed and implemented to perform a single task, and the tools are then put together. This allows checking of each component and then of their interfaces, which is conceptually much less complex than examining the unit as a whole. The key, though, is to define all interfaces completely (for example, environment variables and global variables as well as parameter lists).
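One way to follow "reuse known secure solutions", sketched here with libsodium as an assumed example library (any vetted crypto library works): a single high-level call replaces hand-rolled cipher modes. Compile with -lsodium.

```c
#include <sodium.h>
#include <stdio.h>

int main(void) {
    if (sodium_init() < 0)
        return 1;                              /* library unusable */

    unsigned char key[crypto_secretbox_KEYBYTES];
    unsigned char nonce[crypto_secretbox_NONCEBYTES];
    const unsigned char msg[] = "attack at dawn";
    unsigned char ct[crypto_secretbox_MACBYTES + sizeof(msg)];

    crypto_secretbox_keygen(key);              /* vetted key generation   */
    randombytes_buf(nonce, sizeof(nonce));     /* vetted randomness       */
    crypto_secretbox_easy(ct, msg, sizeof(msg), nonce, key); /* one call,
                                                  no home-made cipher mode */
    printf("encrypted %zu bytes\n", sizeof(ct));
    return 0;
}
```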
Economy of Mechanism Example
Problem: the SMB file sharing protocol, used since the late 1980s. Newer protocol versions protect data integrity with a packet signing technique. What do you do about computers with older versions of the protocol? Solution: let the client negotiate which SMB version to use. How does this solution violate economy of mechanism?
Complete Mediation
Check every access. In practice access is usually checked only once, on first access: UNIX checks the file ACL on open(), but not on subsequent accesses to the file, so if permissions change after the initial access, unauthorized access may be permitted. A bad example of incomplete mediation: DNS cache poisoning.
The reason for relaxing this principle is efficiency: if you do lots of accesses, the checks will slow you down substantially (though it is not clear that this is really true). Exercise: have a process open a UNIX file for reading; from the shell, remove the read permission that allows the process to read the file; then have the process read from the open file. The process can do so, which shows the check is done at open(). If you want to be sure, have the process close the file and then try to reopen it for reading: the open will fail. Note that UNIX systems do not enforce this principle at all for a superuser process, where access permissions are not even checked on open. This is why people create management accounts (more properly, role accounts) like bin or mail: by restricting processes to those accounts, access control checking applies; it is also an application of the principle of least privilege. Classic DNS cache poisoning: delegate your domain to google.com, supplying your own addresses for Google's DNS servers, then wait for someone to query the DNS server; it updates its cache with the changed information and uses it for the next TTL period, which can be 24+ hours.
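A C version of the exercise from the notes, assuming a file demo.txt that exists and is owned by the (non-root) user running the program: the ACL is checked at open(), not on each read().

```c
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void) {
    char buf[64];
    int fd = open("demo.txt", O_RDONLY);        /* access check happens here */
    if (fd < 0) { perror("open"); exit(1); }

    if (chmod("demo.txt", 0) < 0) {              /* remove all permissions */
        perror("chmod"); exit(1);
    }

    ssize_t n = read(fd, buf, sizeof(buf) - 1);  /* still succeeds: no re-check */
    printf("read through old fd: %zd bytes\n", n);

    int fd2 = open("demo.txt", O_RDONLY);        /* a new open is re-checked */
    printf("reopen after chmod 0: %s\n", fd2 < 0 ? "denied" : "allowed");

    chmod("demo.txt", 0644);                     /* restore permissions */
    return 0;
}
```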
Open Design
Security should not depend on the secrecy of the design or implementation; i.e., don't rely on "security through obscurity." Open design makes expert public scrutiny possible. You still need to keep keys and passwords secret. You cannot maintain secrecy on the client side: customers can reverse engineer hardware and software with decompilers, disassemblers, logic analyzers, etc.
Open Design Example
Problem: the MPAA wants control over DVDs (region coding, unskippable commercials). Solution: CSS (Content Scrambling System). The CSS algorithm was kept secret; DVD players need a player key to decrypt the disk key on the DVD, which in turn decrypts the movie for playing. Encryption uses 40-bit keys, so people without keys can copy DVDs but not play them. Result: the CSS algorithm was reverse engineered, and a weakness allows the disk key to be recovered in an attack of complexity 2^25, which takes only a few seconds.
Note that source code need not be available to meet this principle. It simply says that your security cannot depend on your design being a secret. Secrecy can enhance security, but if the design becomes exposed, the security of the mechanism must not be affected. The problem is that people are very good at finding out what secrets protect you: they may figure it out from the way the system works, from reverse engineering the interface or system, or by more prosaic techniques such as dumpster diving. This principle does not speak to secrets that are not part of the design or implementation; for example, you can keep crypto keys and passwords secret. DeCSS: the alleged purpose of CSS was to stop piracy; the actual purposes were region coding and non-skippable commercials. The algorithm was weak and easily reverse engineered, and the "secret" key must be stored in the firmware of every DVD player.
Separation of Privilege
Require multiple conditions to grant access: separation of duty, compartmentalization (encapsulation), defence in depth.
You need to meet more than one condition to gain access. Separation of duty says that the person who signs the checks cannot be the one who prints them, because otherwise a single person could steal money; a thief must now compromise two people, not one. This also provides finer-grained control over a resource than a single condition. The analogy with non-computer security mechanisms is "defense in depth": to get into a castle, you must cross the moat, scale the walls, and drop down over them before you can get in; that is three barriers (conditions) that must be overcome (met). OpenSSH separates the application into a monitor and child processes: the child processes are not privileged and must ask the monitor to perform privileged operations on their behalf, and the user interacts only with the unprivileged children.
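A toy sketch of the OpenSSH-style privilege separation mentioned in the notes; the one-byte request protocol and the UID/GID 1001 are purely illustrative, not OpenSSH's actual protocol.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

enum { REQ_OPEN_LOG = 'L' };   /* illustrative request code */

int main(void) {
    int to_monitor[2], to_child[2];
    pipe(to_monitor);
    pipe(to_child);

    if (fork() == 0) {
        /* ---- unprivileged child: handles all user interaction ---- */
        if (setgid(1001) != 0 || setuid(1001) != 0)   /* hypothetical ids */
            _exit(1);
        char req = REQ_OPEN_LOG, reply;
        write(to_monitor[1], &req, 1);   /* ask monitor to act on our behalf */
        read(to_child[0], &reply, 1);
        printf("child: monitor replied '%c'\n", reply);
        _exit(0);
    }

    /* ---- privileged monitor: small, auditable, no direct user input ---- */
    char req;
    if (read(to_monitor[0], &req, 1) == 1 && req == REQ_OPEN_LOG) {
        /* ... perform the privileged operation here ... */
        char ok = '+';
        write(to_child[1], &ok, 1);
    }
    wait(NULL);
    return 0;
}
```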
Separation of Duty
Functions are divided so that no single entity has control over all parts of a transaction. Example: different persons must initiate and authorize a purchase; two different people may be required to arm and fire a nuclear missile.
Compartmentalization
Problem: a security violation in one process should not affect others. Solution: virtual memory; each process has its own address space, which isolates its memory accesses. In what ways is this solution flawed, i.e., how can the compartments communicate? How could we improve compartmentalization of processes?
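A small demonstration that each process gets its own address space: the child's write is invisible to the parent, so compartments must use an explicit channel (pipes, sockets, files) to communicate.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int secret = 42;
    if (fork() == 0) {
        secret = 7;          /* modifies only the child's copy of the page */
        exit(0);
    }
    wait(NULL);
    printf("parent still sees %d\n", secret);   /* prints 42 */
    return 0;
}
```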
Defence in Depth
Use diverse defensive strategies: multiple layers of defences, and different types of defences, e.g. protection (firewall, passwords), detection (anti-virus, NIDS), and reaction (CIRT). If one layer is pierced, the next layer may stop the attack. A firewall alone does not address insiders. A bank combines automatic cameras, a security guard, a vault requiring multiple locks and codes, and dye packs in the bills.
Defense in Depth Example
Least Common Mechanism
Mechanisms used to access resources should not be shared, because information can flow along shared channels. Examples: shared directories like /tmp; shared hardware such as CPU caches and the TLB. Tradeoff: this can contradict Economy of Mechanism.
Isolation prevents communication, and communication with something (another process or a resource) is necessary for a breach of security: limit the communication and you limit the damage. This works with the "separation of privilege" OpenSSH example. Covert channels: if two processes share a resource, then by coordinating access they can communicate by modulating their use of it. Example: percentage of CPU used. To send a 1 bit, the first process uses 75% of the CPU; to send a 0 bit, it uses 25%. The other process sees how much of the CPU it can get and from that can tell what the first process is sending. Variations include filling disks, creating files with fixed names, and so forth. Approaches to implementing this principle: isolate each process via virtual machines or sandboxes (a sandbox is like a VM, but the isolation is not complete).
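A minimal sketch of one of the covert-channel variations from the notes (signaling through an agreed fixed file name in the shared /tmp); the file name is arbitrary and only the receiver side is shown, with the sender creating or removing the file once per interval.

```c
#include <stdio.h>
#include <unistd.h>

/* Receiver of a one-bit-per-interval covert channel: the mere presence
 * or absence of an agreed-upon file in the shared /tmp carries the bit,
 * even though the two processes never talk to each other directly. */
int main(void) {
    int bit = (access("/tmp/.covert_flag", F_OK) == 0);
    printf("received bit: %d\n", bit);
    return 0;
}
```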
Least Common Mechanism
Problem: compromising the web server gives the attacker access to the entire machine the web server runs on. Solution: run the web server as a non-root user; the attacker still gains "other" access to the filesystem and may be able to elevate privilege. Better solution: run the web server in a container or VM, so a web server compromise only impacts that container or VM.
Psychological Acceptability
Security mechanisms should not add to the difficulty of accessing a resource. Usability: ease of installation, configuration, and use; hide the complexity introduced by security mechanisms. Principle of Least Astonishment: the design should match the user's experience, expectations, and mental models, and follow UI conventions.
This principle recognizes the human element. General rules: be clear in error messages; you don't need to be detailed (e.g., whether the user mistyped the password or the login name), but you do need to state the rules for using the mechanism (in the example, that the user must supply both a password and a login name). The principle is usually interpreted as meaning the mechanism must not impose an onerous burden. Strictly speaking, passwords violate it (because accessing a resource by giving a password is not as easy as accessing it without one), but a password is considered a minimal burden.
Are these examples acceptable?
Requiring a password before making a purchase with a stored credit card. Requiring credit card entry for every purchase and refusing to store credit card information. An SSL certificate error dialog box asking the user whether to continue or not. An SSL certificate error refusal message, with no decision for the user to make.
Sendmail 8 Architecture
[Figure: sendmail 8 architecture. Local mail and SMTP mail feed sendmail (running as root), which places messages in the mail queue; the MDA (also running as root) delivers from the queue to the mailbox.]
What principles are found in qmail?
From The Security Architecture of qmail
Key Points
Attack surface: a measure of how easy it is to attack a system; reduce it by reducing code, limiting who can access the code, and reducing the code's privilege.
Attack trees: a method to model attacks and plan defenses.
Secure design principles: Least Privilege, Fail-Safe Defaults, Economy of Mechanism, Complete Mediation, Open Design, Separation of Privilege, Least Common Mechanism, Psychological Acceptability.
References
Bishop, Matt, Introduction to Computer Security, Addison-Wesley, 2005.
Graff, Mark and van Wyk, Kenneth, Secure Coding: Principles & Practices, O'Reilly, 2003.
Howard, Michael and LeBlanc, David, Writing Secure Code, 2nd edition, Microsoft Press, 2003.
Olzak, Tom, Enterprise Security: A Practitioner's Guide.
Viega, John and McGraw, Gary, Building Secure Software, Addison-Wesley, 2002.
Wheeler, David, Secure Programming for UNIX and Linux HOWTO.