Accountability and Freedom

Presentation transcript:

1 Accountability and Freedom
Butler Lampson, Microsoft, September 26, 2005

2 Real-World Security
It's about risk, locks, and deterrence.
- Risk management: cost of security < expected loss
- Perfect security costs way too much
- Locks good enough that bad guys break in only rarely
- Bad guys get caught and punished often enough to be deterred, so police and courts must be good enough
- Can recover from damage at an acceptable cost
- Internet security is similar, but with little accountability: we can't identify the bad guys, so we can't deter them

3 How Much Security
The best is the enemy of the good.
- Security is costly: buy only what you need
- You pay mainly in inconvenience
- If there's no punishment, you pay a lot
- People do behave this way; we don't tell them this, which is a big mistake
- Perfect security is the worst enemy of real security
- Feasible security: costs less than the value it protects, is simple enough for users to manage, and is simple enough for vendors to implement

4 Causes of Security Problems
- Exploitable bugs
- Bad configuration
- TCB: everything that security depends on (hardware, software, and configuration)
- Does the formal policy say what I mean? Can I understand it? Can I manage it?
- Why least privilege doesn't work: too complicated, can't manage it
- "The unavoidable price of reliability is simplicity" (Hoare)

5 Defensive strategies
- Locks: control the bad guys
  - Coarse: isolate (keep everybody out)
  - Medium: exclude (keep the bad guys out)
  - Fine: restrict (keep them from doing damage)
  - Recover: undo the damage
- Deterrence: catch bad guys and punish them
  - Auditing, police, courts or other penalties

6 The Access Control Model
- Isolation: a boundary to prevent attacks outside access-controlled channels
- Access control for channel traffic
- Policy management
[Diagram: a principal (the source) sends a request to a guard (the reference monitor), which does the operation on an object (the resource). Authentication identifies the principal, authorization consults the policy, and actions are recorded in an audit log. Three parts: 1. isolation boundary, 2. access control, 3. policy.]

7 Isolation
I am isolated if anything that goes wrong is my fault (actually, my program's fault).
- The host and other things relied upon (e.g. hardware, crypto) must work correctly
- The program knows about all allowed input channels
- It's up to the program to handle all inputs correctly
Speaker notes: Our model lets us categorize attacks by which component of the model is attacked (the program, the isolation mechanism, or the security administration policy), creating a checklist that developers and testers can use to validate the security of their programs. Attacks on the isolation mechanism can lead to isolation failures; attacks on the channels can lead to program failures. Both a crypto protocol stack and the guard filter traffic, ruling out some attacks. The remaining attack points are:
- Packet-handling code exposed to all sources
- Crypto stack exposed to most sources
- Packet-handling code exposed to crypto-authorized sources
- Guard code exposed to crypto-authorized sources
- Internal application code exposed to sources passed by the guard

8 Access Control Mechanisms: The Gold Standard
- Authenticate principals: who made a request? Mainly people, but also channels, servers, and programs (encryption implements channels, so a key is a principal)
- Authorize access: who is trusted with a resource? Group principals or resources to simplify management; a group can be defined by a property, e.g. "type-safe" or "safe for scripting"
- Audit: who did what when?
- Lock = Authenticate + Authorize
- Deter = Authenticate + Audit
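The gold standard on this slide can be sketched as a tiny reference monitor. This is an illustrative model only: the class and names (`Guard`, `request`, the token and ACL tables) are invented for the example, not taken from the talk or any real system.

```python
# Minimal sketch of the "gold standard": authenticate, authorize, audit.
# All names here are invented for illustration.

class Guard:
    """Reference monitor: every request passes through here."""

    def __init__(self, credentials, acl):
        self.credentials = credentials   # token -> principal (authentication)
        self.acl = acl                   # (principal, resource) -> allowed ops
        self.audit_log = []              # who did what (and in what order)

    def request(self, token, resource, op):
        # Authenticate: who made the request?
        principal = self.credentials.get(token)
        if principal is None:
            return "denied: unauthenticated"
        # Audit: record the attempt whether or not it succeeds, since
        # deterrence (Deter = Authenticate + Audit) needs the record.
        self.audit_log.append((principal, resource, op))
        # Authorize: is this principal trusted with this resource?
        if op not in self.acl.get((principal, resource), set()):
            return "denied: unauthorized"
        return f"{op} on {resource} by {principal}"

guard = Guard(
    credentials={"key-123": "alice"},    # encryption implements channels,
    acl={("alice", "payroll"): {"read"}} # so a key stands for a principal
)
```

Note how the two compositions on the slide fall out: the ACL check is the lock (authenticate + authorize), and the log is the deterrent (authenticate + audit).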

9 Making Isolation Work
Isolation is imperfect: we can't get rid of bugs.
- TCB = millions of lines of code
- Customers want features more than correctness
- Instead, don't tickle the bugs. How? Reject bad inputs:
  - Code: don't run it, or restrict it severely
  - Communication: reject it or restrict it severely (especially web sites)
  - Data: don't send it; don't accept it if it's complex

10 Accountability
- Can't identify bad guys, so can't deter them. The fix? End nodes enforce accountability:
  - Refuse messages that aren't accountable enough, or strongly isolate those messages
  - Senders are accountable if you can punish them
- All trust is local
- Need an ecosystem for:
  - Senders becoming accountable
  - Receivers demanding accountability
  - Third-party intermediaries
- To stop DDoS attacks, ISPs must play

11 Enforcing Accountability
Not being accountable enough means end nodes will reject inputs:
- Application execution is restricted or prohibited
- Communication is restricted or prohibited
- Information is not shared or accepted
- Access to devices or networks is restricted or prohibited
Speaker notes: Accountability is the ability to hold an entity, such as a person or organization, responsible for its actions. It is not the opposite of anonymity, nor the same as total loss of privacy. The degree of accountability is negotiated between the parties involved (as in InfoCard, for example); if there is no agreement, nothing is disclosed and they stop interacting. In other words, the sender chooses how accountable he wants to appear, and the recipient chooses the level of acceptable accountability. If the sender is not accountable enough for the recipient, the interaction ends with nothing disclosed on either side.
Accountability requires a consistent identifier, based on a name, a pseudonym, or a set of attributes:
- Name: the recipient may use a reputation service to determine whether the sender is accountable enough. Should the sender behave unacceptably, the recipient can "punish" the sender by reducing the sender's reputation.
- Pseudonym: it must be issued by an indirection service that knows the true identity of the sender. If the sender behaves unacceptably, appropriate authorities may ask the indirection service to reveal the sender's real-world identity to them.
- Attributes: these require a certificate, or other claims mechanism, from a trusted authority. If the sender behaves unacceptably, or the claimed attributes are proved false, the trusted authority may be asked to "punish" the sender by removing him from its list. Alternatively, the recipient may drop the trusted authority itself as not accountable enough.
Becoming accountable does not necessarily mean disclosing anything about your real-world identity, which protects privacy. Using accountability as a mechanism for receiving network packets is much harder: there is no end node, packets pass through nodes with no direct relation to the sender, and the per-packet cost of accountability verification must be very small to avoid hurting network performance.
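The three identifier kinds in the notes can be sketched as a recipient-side check. The services here (a reputation table, an indirection map, an attestation set) and all names and thresholds are invented stand-ins for the real services the notes describe.

```python
# Illustrative sketch of the three identifier kinds: a real name (checked
# against a reputation service), a pseudonym (backed by an indirection
# service that can reveal the real identity), and a set of attributes
# (vouched for by a trusted authority). All data here is made up.

REPUTATION = {"alice": 0.9, "mallory": 0.1}      # reputation service
INDIRECTION = {"pseud-42": "mallory"}            # pseudonym -> real identity
ATTESTATIONS = {("safe-for-scripting", "bob")}   # trusted authority's claims

def accountable_enough(identifier, kind, threshold=0.5):
    """Recipient's check: is this sender accountable enough to talk to?"""
    if kind == "name":
        return REPUTATION.get(identifier, 0.0) >= threshold
    if kind == "pseudonym":
        # Accountable only if an indirection service could, on request
        # from the authorities, map the pseudonym to a real identity.
        return identifier in INDIRECTION
    if kind == "attributes":
        attribute, holder = identifier
        return (attribute, holder) in ATTESTATIONS
    return False                                 # unknown kind: reject
```

If the check fails, the interaction simply ends with nothing disclosed, exactly as the notes describe.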

12 For Accountability To Work
- Senders must be able to make themselves accountable. This means pledging something of value:
  - Friendship
  - Reputation
  - Money
- Receivers must be able to check accountability:
  - Specify what is accountable enough
  - Verify the sender's evidence of accountability
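The negotiation on this slide, where the sender offers a pledge and the receiver checks it against its own requirement, can be sketched in a few lines. The pledge values and function names are invented for illustration.

```python
# Sketch of the accountability negotiation: the sender pledges something
# of value, the receiver specifies what is "accountable enough" and
# verifies the pledge before interacting. The values are invented.

PLEDGE_VALUE = {"friendship": 1, "reputation": 2, "money": 3}

def negotiate(sender_pledges, receiver_requirement):
    """Return the first accepted pledge, or None if the interaction
    ends with nothing disclosed on either side."""
    for pledge in sender_pledges:        # sender chooses how accountable
        if PLEDGE_VALUE.get(pledge, 0) >= receiver_requirement:
            return pledge                # receiver verifies and accepts
    return None                          # not accountable enough: no deal
```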

13 Accountability vs. Access Control
"In principle" there is no difference, but:
- Accountability is about punishment, not locks; hence audit is critical
- Accountability is very coarse-grained

14 The Accountability Ecosystem
- Identity, reputation, and indirection services
- Mechanisms to establish trust relationships: person to person, and person to organization
- A flexible, simple user model for identity
- Stronger user authentication: smart card, cell phone, biometrics
- Application identity: signing, reputation

15 Accountable Internet Access
- Just enough to block DDoS attacks
- Need ISPs to play. Why should they?
  - Servers demand it; clients don't get locked out
  - Regulation?
- A server asks its ISP to block some IP addresses
- ISPs propagate such requests to peers or clients
- Probably must be based on IP address
- Perhaps some signing scheme to traverse unreliable intermediaries?
- High-priority packets can get through
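The propagation scheme above can be sketched as a simple flood of block requests between ISPs. This is a toy model under stated assumptions: the `ISP` class, the flat peer graph, and the seen-set deduplication are all invented for the example, not part of the talk's proposal.

```python
# Toy model of the slide's scheme: a server asks its ISP to block an
# address, and ISPs propagate the request to peers or clients. A simple
# flood with a seen-set prevents loops. All structure here is invented.

class ISP:
    def __init__(self, name):
        self.name = name
        self.peers = []                  # peer or client ISPs
        self.blocked = set()             # blocked source IP addresses

    def block_request(self, bad_ip):
        if bad_ip in self.blocked:       # already seen: stop the flood
            return
        self.blocked.add(bad_ip)
        for peer in self.peers:          # propagate to peers and clients
            peer.block_request(bad_ip)

    def accepts(self, src_ip, high_priority=False):
        # High-priority packets can still get through the block.
        return high_priority or src_ip not in self.blocked

a, b, c = ISP("A"), ISP("B"), ISP("C")
a.peers, b.peers = [b], [c]
a.block_request("203.0.113.7")           # the server's ISP starts the block
```

After the flood, the attacker's address is dropped even two hops away, while other traffic and high-priority packets are unaffected.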

16 Accountability vs. Freedom
- Partition the world into two parts:
  - Green: safer / accountable
  - Red: less safe / unaccountable
- Two aspects, mostly orthogonal:
  - User experience
  - Isolation mechanism: separate hardware with an air gap, a VM, or process isolation
Speaker notes: Red-Green is our name for giving each user two different environments in which to do their computing. One environment is carefully managed to keep out attacks: code and data are allowed only if of known trusted origin, because we know that the implementation will have flaws and that ordinary users trust things they shouldn't. This is the "Green" environment, and important data is kept in it. But lots of work, and lots of entertainment, requires accessing things on the Internet about which little is known, or is even feasible to know, regarding their trustworthiness, so we also need an environment that cannot be carefully managed: this is the "Red" environment. The Green environment backs up both environments; when a bug or user error corrupts the Red environment, it is restored to a previous state (see the recovery slide). This may lose data, which is why important data is kept on the Green side, where it is less likely to be lost. Isolation between the two environments is enforced using IPsec.
The big unknown at this point is the user experience. We know different models: the KVM-switch model (as in NetTop) and the X Windows model (with windows actually fronting for an execution on some other machine). Which the users will prefer is still an open question. More to the point, the security of the system depends on separation that will be visible to the user: there will be things the user is not allowed to do because of the security policy, and the best way to communicate that separation to the user also needs research.
The KVM-switch model, in which the user envisions two separate PCs that have to use network shares to share files, might be the simplest to grasp. The implementation is still a matter of debate within the company, and even within the team. Today we are going to talk about the VM-based isolation solution.

17 Without R|G: Today
[Chart: entities, from less trustworthy / less accountable to more trustworthy / more accountable, attack "My Computer" and its assets, from less valuable to more valuable. The less trustworthy entities mount N attacks/yr and the more trustworthy ones m attacks/yr, with N >> m. Entities include programs, network hosts, and administrators. Total: N + m attacks/yr on all assets.]
- Almost everyone has a "red" machine
- Security settings are optimized for immediate productivity, not long-term security
- So the untrustworthy or unaccountable get to interact with important assets

18 With R|G
[Chart: with R|G, the less trustworthy / less accountable entities (N attacks/yr) reach only My Red Computer and its less valuable assets, while the more trustworthy / more accountable entities (m attacks/yr, with N >> m) reach My Green Computer and its more valuable assets. Entities include programs, network hosts, and administrators.]
- N attacks/yr on less valuable assets
- m attacks/yr on more valuable assets

19 Must Get Configuration Right
- Keep valuable stuff out of Red
- Keep hostile agents out of Green
[Chart: as before, but now a valuable asset misplaced on My Red Computer, and a hostile agent among the more trustworthy entities reaching My Green Computer, illustrate the two configuration errors.]

20 Why R|G?
Problems:
- Any OS will always be exploitable
- The richer the OS, the more bugs
- Need internet access to get work done and have fun
- The internet is full of bad guys
Solution: isolated work environments:
- Green: important assets, only talk to good guys; don't tickle the bugs, by restricting inputs
- Red: less important assets, talk to anybody; blow away broken systems
- Good guys: more trustworthy / accountable
- Bad guys: less trustworthy or less accountable

21 Configuring Green
Green = locked down = only whitelisted inputs
- Requires professional management; few users can make these decisions
- Avoid "click OK to proceed"
- To escape, use Red
- Today almost all machines are Red

22 R|G User Model Dilemma
People don't want complete isolation. They want to:
- Cut/paste, drag/drop
- Share parts of the file system
- Share the screen
- Administer one machine, not multiple
But more integration can weaken isolation: it adds bugs and can compromise security.

23 Data Transfer
Mediates data transfer between machines: drag/drop, cut/paste, shared folders.
Problems:
- Red → Green: malware entering
- Green → Red: information leaking
Possible policy:
- Allowed transfers (configurable). Examples: no transfer of ".exe" from R to G; only transfer ASCII text from R to G
- Non-spoofable user intent; warning dialogs
- Auditing: synchronous virus checker; third-party hooks, ...
Speaker notes: Content downloaded through IE, Messenger, p2p, etc. should be tagged with download-source information, similar to IE zones. As the content moves through the "airlock", the tags should move as well. Auditing: a synchronous virus checker; mention the Attachment Execution Services (AES) check.
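The two example rules on this slide (no ".exe" from R to G, only ASCII text from R to G) can be sketched as a single mediation function with the audit hook the slide calls for. The function name and the audit-log shape are invented for the example.

```python
# Sketch of the slide's configurable transfer policy: block ".exe" from
# Red to Green, allow only ASCII text in that direction, and audit every
# attempt. The rule set and the audit hook are illustrative only.

def allow_transfer(direction, filename, data, audit_log):
    """Mediate one data transfer between the Red and Green machines."""
    verdict = True
    if direction == "red->green":
        if filename.lower().endswith(".exe"):
            verdict = False              # no executables entering Green
        elif not all(b < 128 for b in data):
            verdict = False              # only ASCII text from R to G
    # Green -> Red transfers risk information leaking; this sketch allows
    # them but records everything (synchronous virus checker and other
    # third-party hooks would plug in here).
    audit_log.append((direction, filename, verdict))
    return verdict

log = []
```

A real mediator would also need the non-spoofable user-intent check the slide mentions, which this sketch omits.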

24 Where Should Email/IM Run?
- As productivity applications, they must be well integrated into the work environment (Green)
- Threats: a tunnel from the bad guys
  - Executable attachments
  - Exploits of complicated data formats
- Choices:
  - Run two copies, one in Green and one in Red
  - Run in Green and mitigate the threats: the Green platform does not execute arbitrary programs, and Green apps are conservative in the file formats they accept
  - Route messages to the appropriate machine
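The third choice, routing messages to the appropriate machine, can be sketched as a classifier over attachment types. The extension lists are invented examples of "executable attachments" and "complicated data formats", not a recommendation.

```python
# Sketch of "route messages to the appropriate machine": mail with
# executable attachments or complicated data formats goes to Red, plain
# messages to Green. The format lists below are invented examples.

import os

RISKY_EXTENSIONS = {".exe", ".scr", ".js"}       # executable attachments
COMPLEX_FORMATS = {".doc", ".xls"}               # complicated data formats

def route_message(attachments):
    """Decide which environment may open this message."""
    for name in attachments:
        ext = os.path.splitext(name)[1].lower()
        if ext in RISKY_EXTENSIONS or ext in COMPLEX_FORMATS:
            return "red"                 # a possible tunnel from bad guys
    return "green"                       # conservative formats only
```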

25 R|G and Enterprise Networks
- Red and green networks are defined as today: IPsec, guest firewall, proxy settings
- The VMM can act as a router, e.g. Red only talks to the proxy
- Must keep important data and attackers on different sides of a VM isolation boundary
- Partition the network as shown
- Apply stricter security settings in Green: software restriction policies, restricted user admin privileges, ...
Speaker notes: We think this works pretty well for R|G in enterprises, but we don't know how to do it for home users.
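The "VMM can act as a router" point can be sketched as an allow-list check enforced below both guest OSes. The addresses, the one-entry proxy rule, and the green-subnet convention are invented for the example.

```python
# Minimal sketch of "the VMM can act as a router: Red only talks to the
# proxy". The VMM checks each outbound connection from a guest VM against
# a routing policy. All addresses and names are invented.

PROXY = "10.0.0.1"                       # the enterprise proxy (invented)

def vmm_allows(source_vm, destination):
    """Routing policy enforced by the VMM, below both guest OSes."""
    if source_vm == "red":
        return destination == PROXY      # Red reaches the net only via proxy
    if source_vm == "green":
        # Green talks only inside the managed (green) network,
        # modeled here as an invented 10.1.x.x subnet.
        return destination.startswith("10.1.")
    return False                         # unknown VM: drop
```

Putting the check in the VMM rather than in a guest firewall means a compromised Red OS cannot turn it off, which is the point of the VM isolation boundary.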

26 Summary
- Security is about risk management: cost of security < expected loss
- Security relies on deterrence more than locks
- Deterrence requires the threat of punishment, which requires accountability
- Accountability needs an ecosystem: senders becoming accountable, receivers verifying accountability
- Accountability limits freedom; beat this by partitioning: Red | Green
- Don't tickle bugs in Green; dispose of Red

