Accountability and Freedom

Accountability and Freedom Butler Lampson Microsoft September 26, 2005

Real-World Security
It's about risk, locks, and deterrence.
- Risk management: cost of security < expected loss
  - Perfect security costs way too much
  - Locks good enough that bad guys break in rarely
  - Bad guys get caught and punished enough to be deterred, so police and courts must be good enough
  - Can recover from damage at an acceptable cost
- Internet security is similar, but there is little accountability
  - Can't identify the bad guys, so can't deter them
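As a back-of-the-envelope illustration of the "cost of security < expected loss" test, the check is simple arithmetic; the probability and dollar figures below are invented for the example, not from the talk.

```python
# Back-of-the-envelope risk check; all numbers are illustrative assumptions.
annual_breach_probability = 0.05      # chance of a damaging incident per year
loss_if_breached = 2_000_000          # dollars
expected_loss = annual_breach_probability * loss_if_breached   # dollars per year

cost_of_control = 250_000             # yearly cost of a proposed defense
print(expected_loss)                  # 100000.0
print(cost_of_control < expected_loss)  # False: this defense costs more than the risk it removes
```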

How Much Security
The best is the enemy of the good.
- Security is costly; buy only what you need
  - You pay mainly in inconvenience
  - If there's no punishment, you pay a lot
- People do behave this way
  - We don't tell them this, which is a big mistake
- Perfect security is the worst enemy of real security
- Feasible security:
  - Costs less than the value it protects
  - Simple enough for users to manage
  - Simple enough for vendors to implement

Causes of Security Problems
- Exploitable bugs
- Bad configuration
- TCB: everything that security depends on
  - Hardware, software, and configuration
  - Does the formal policy say what I mean? Can I understand it? Can I manage it?
- Why least privilege doesn't work: too complicated, can't manage it
"The unavoidable price of reliability is simplicity." (Hoare)

Defensive Strategies
Locks: control the bad guys
- Coarse: isolate (keep everybody out)
- Medium: exclude (keep the bad guys out)
- Fine: restrict (keep them from doing damage)
- Recover: undo the damage
Deterrence: catch the bad guys and punish them
- Auditing, police, courts, or other penalties

The Access Control Model
1. Isolation boundary to prevent attacks outside access-controlled channels
2. Access control for channel traffic
3. Policy management
[Figure: a principal (the source) sends a request to a guard (reference monitor), which authenticates the principal, checks its authorization against policy, performs the operation on the object (the resource), and records the decision in an audit log.]

Isolation
- I am isolated if anything that goes wrong is my fault (actually, my program's fault)
- The host and the other things relied upon (e.g. hardware, crypto) must work correctly
- The program knows about all allowed input channels
- It's up to the program to handle all inputs correctly
[Figure: a program and its data sit behind a guard inside an isolation boundary on a host; arrows mark possible attack points on the program, on the isolation mechanism, and on security administration (policy).]
Notes: The model lets us categorize attacks by which component gets attacked, creating a checklist that developers and testers can use to validate the security of their programs. In the figure, black arrows mark channel attacks that can lead to program failures, red arrows mark attacks on the isolation mechanism that can lead to isolation failures, and green marks attacks on security administration (policy). Both a crypto protocol stack and the guard filter traffic, ruling out some attacks; the remaining attack points are packet-handling code exposed to all sources, the crypto stack exposed to most sources, packet-handling code exposed to crypto-authorized sources, guard code exposed to crypto-authorized sources, and internal application code exposed to sources passed by the guard.

Access Control Mechanisms: The Gold Standard
- Authenticate principals: who made a request?
  - Mainly people, but also channels, servers, and programs (encryption implements channels, so a key is a principal)
- Authorize access: who is trusted with a resource?
  - Group principals or resources to simplify management
  - Can define a group by a property, e.g. "type-safe" or "safe for scripting"
- Audit: who did what when?
Lock = Authenticate + Authorize
Deter = Authenticate + Audit
[Figure: the access control model diagram, as on the previous slide.]
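To show how the three steps fit together, here is a minimal guard (reference monitor) sketch. The token table, policy tuples, and logging setup are assumptions made up for the example, not anything the talk specifies.

```python
# Minimal sketch of a guard combining authenticate, authorize, and audit.
# The in-memory token table and policy are illustrative stand-ins.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit_log = logging.getLogger("audit")

@dataclass
class Request:
    token: str        # evidence of who is asking (e.g. a key or channel)
    operation: str    # e.g. "read", "write"
    resource: str     # the object being accessed

PRINCIPALS = {"token-alice": "alice"}         # token -> principal (toy authentication state)
POLICY = {("alice", "read", "payroll.db")}    # allowed (who, operation, what) triples

def guard(req: Request) -> bool:
    # 1. Authenticate: who made the request?
    principal = PRINCIPALS.get(req.token)
    if principal is None:
        audit_log.info("DENY unauthenticated request for %s", req.resource)
        return False
    # 2. Authorize: is this principal trusted with this operation on this resource?
    allowed = (principal, req.operation, req.resource) in POLICY
    # 3. Audit: record who did what, when (the log format adds the timestamp).
    audit_log.info("%s %s %s by %s",
                   "ALLOW" if allowed else "DENY",
                   req.operation, req.resource, principal)
    return allowed

# Lock = authenticate + authorize; deter = authenticate + audit.
print(guard(Request("token-alice", "read", "payroll.db")))     # True
print(guard(Request("token-mallory", "write", "payroll.db")))  # False
```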

Making Isolation Work
Isolation is imperfect: you can't get rid of the bugs
- TCB = 10-50 million lines of code
- Customers want features more than correctness
Instead, don't tickle the bugs. How? Reject bad inputs:
- Code: don't run it, or restrict it severely
- Communication: reject it, or restrict it severely (especially web sites)
- Data: don't send it; don't accept it if it's complex

Accountability
- Today we can't identify the bad guys, so we can't deter them
- Fix? End nodes enforce accountability
  - Refuse messages that aren't accountable enough, or strongly isolate those messages
  - Senders are accountable if you can punish them
- All trust is local
- Need an ecosystem for:
  - Senders becoming accountable
  - Receivers demanding accountability
  - Third-party intermediaries
- To stop DDoS attacks, ISPs must play

Enforcing Accountability
If a sender is not accountable enough, end nodes will reject its inputs:
- Application execution is restricted or prohibited
- Communication is restricted or prohibited
- Information is not shared or accepted
- Access to devices or networks is restricted or prohibited
Notes: Accountability is the ability to hold an entity, such as a person or organization, responsible for its actions. It is not the opposite of anonymity, nor the same as a total loss of privacy. The degree of accountability is negotiated between the parties involved (as in InfoCard, for example); if there is no agreement, nothing is disclosed and they stop interacting. In other words, the sender chooses how accountable it wants to appear and the recipient chooses the level of accountability it will accept; if the sender is not accountable enough for the recipient, the interaction ends with nothing disclosed on either side. Accountability requires a consistent identifier based on a name, a pseudonym, or a set of attributes. When the identifier is a name, the recipient may use a reputation service to decide whether the sender is accountable enough, and can punish unacceptable behavior by reducing the sender's reputation. When the identifier is a pseudonym, it must be issued by an indirection service that knows the sender's true identity; if the sender misbehaves, appropriate authorities can ask the indirection service to reveal that identity. When the identifier is a set of attributes, the sender needs a certificate or other claims mechanism from a trusted authority; if the sender misbehaves or the claimed attributes prove false, the authority can be asked to punish the sender by removing him from its list, or the recipient may instead decide that the authority itself is not accountable enough. Becoming accountable does not necessarily mean disclosing anything about one's real-world identity, so privacy can be protected. Using accountability to control the receipt of individual network packets is much harder: packets pass through intermediate nodes that are not end nodes and have no direct relation to the sender, and the per-packet cost of checking accountability must be very small to avoid hurting network performance.

For Accountability To Work
Senders must be able to make themselves accountable
- This means pledging something of value: friendship, reputation, money, ...
Receivers must be able to check accountability
- Specify what is accountable enough
- Verify the sender's evidence of accountability (see the sketch below)
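A minimal sketch of the receiver-side check, assuming the sender presents evidence vouched for by an authority the receiver trusts. The HMAC standing in for a signed claim, the authority names, and the stake threshold are all illustrative assumptions, not a protocol from the talk.

```python
# Receiver-side accountability check (toy). A real ecosystem would use
# certificates and reputation services; an HMAC from a "trusted authority"
# stands in for a signed claim here. Names and thresholds are made up.
import hmac, hashlib, json

TRUSTED_AUTHORITIES = {"reputation-service-A": b"shared-secret-A"}  # assumed out-of-band
REQUIRED_STAKE = 50   # receiver's policy: how much pledged value is "accountable enough"

def verify_evidence(evidence: dict, signature: str) -> bool:
    """Check that the evidence really comes from an authority we trust."""
    key = TRUSTED_AUTHORITIES.get(evidence.get("authority"))
    if key is None:
        return False
    expected = hmac.new(key, json.dumps(evidence, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def accountable_enough(evidence: dict, signature: str) -> bool:
    """Receiver policy: valid evidence AND enough pledged value (e.g. reputation)."""
    return verify_evidence(evidence, signature) and evidence.get("stake", 0) >= REQUIRED_STAKE

# The sender's authority would produce something like this claim and signature:
claim = {"authority": "reputation-service-A", "sender": "alice", "stake": 80}
sig = hmac.new(TRUSTED_AUTHORITIES["reputation-service-A"],
               json.dumps(claim, sort_keys=True).encode(), hashlib.sha256).hexdigest()

print(accountable_enough(claim, sig))                   # True: accept the input
print(accountable_enough({**claim, "stake": 10}, sig))  # False: tampered or insufficient evidence
```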

Accountability vs. Access Control
"In principle" there is no difference, but:
- Accountability is about punishment, not locks; hence audit is critical
- Accountability is very coarse-grained

The Accountability Ecosystem
- Identity, reputation, and indirection services
- Mechanisms to establish trust relationships, person to person and person to organization
- A flexible, simple user model for identity
- Stronger user authentication: smart card, cell phone, biometrics
- Application identity: signing, reputation

Accountable Internet Access
Just enough to block DDoS attacks
- Need ISPs to play. Why should they?
  - Servers demand it; clients don't get locked out
  - Regulation?
- A server asks its ISP to block some IP addresses (see the sketch below)
  - ISPs propagate such requests to peers or clients
  - Probably must be based on IP address
  - Perhaps some signing scheme to traverse unreliable intermediaries?
  - High-priority packets can get through
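A toy sketch of how a victim's block request might propagate from its ISP to peer ISPs. The message format, flooding scheme, and skipped signature check are assumptions for illustration, not a protocol the talk specifies.

```python
# Toy propagation of a server's block request upstream through ISPs.
# Topology, message format, and verification step are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class BlockRequest:
    victim: str            # the server under attack
    attacker_ips: set      # IP addresses to drop
    signature: str = ""    # could let intermediaries verify the request's origin

@dataclass
class ISP:
    name: str
    peers: list = field(default_factory=list)
    blocklist: set = field(default_factory=set)

    def handle(self, req: BlockRequest, seen=None):
        seen = seen or set()
        if self.name in seen:
            return                      # avoid loops when flooding to peers
        seen.add(self.name)
        # (A real system would verify req.signature before acting.)
        self.blocklist |= req.attacker_ips
        for peer in self.peers:         # propagate toward the attack sources
            peer.handle(req, seen)

    def forwards(self, src_ip: str) -> bool:
        return src_ip not in self.blocklist

# The victim's ISP receives the request and floods it to its peers.
isp_a, isp_b, isp_c = ISP("A"), ISP("B"), ISP("C")
isp_a.peers, isp_b.peers = [isp_b], [isp_c]
isp_a.handle(BlockRequest("www.example.com", {"203.0.113.7"}))
print(isp_c.forwards("203.0.113.7"))   # False: dropped far from the victim
print(isp_c.forwards("198.51.100.9"))  # True: normal traffic still flows
```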

Accountability vs. Freedom
Partition the world into two parts:
- Green: safer / accountable
- Red: less safe / unaccountable
Two aspects, mostly orthogonal:
- User experience
- Isolation mechanism: separate hardware with an air gap, VM, or process isolation
Notes: Red-Green is our name for giving each user two different environments in which to do their computing. One environment is carefully managed to keep out attacks; code and data are allowed only if they are of known, trusted origin, because we know the implementation will have flaws and that ordinary users trust things they shouldn't. This is the Green environment, and important data is kept in it. But much work, and much entertainment, requires accessing things on the Internet about which little is known, or can even feasibly be known, regarding their trustworthiness, so they cannot be carefully managed; the Red environment is where that happens. The Green environment backs up both environments, and when a bug or user error corrupts the Red environment, Red is restored to a previous state (see the recovery slide). This may lose data, which is why important data is kept on the Green side, where it is less likely to be lost. Isolation between the two environments is enforced using IPsec. The big unknown at this point is the user experience. We know different models: the KVM-switch model (as in NetTop) and the X Windows model (windows fronting for execution on another machine). Which users will prefer is still an open question. More to the point, the security of the system depends on a separation that will be visible to the user; there will be things the user is not allowed to do because of the security policy, and the best way to communicate that separation to the user also needs research. The KVM-switch model, in which the user pictures two separate PCs that must use network shares to share files, might be the simplest to grasp. The implementation is still a matter of debate within the company, and even within the team. Today we are going to talk about the VM-based isolation solution.

Without R|G: Today
[Figure: a single machine ("My Computer") holds both more and less valuable assets. Less trustworthy / less accountable entities mount N attacks/yr and more trustworthy / more accountable entities mount m attacks/yr (N >> m), for a total of N+m attacks/yr on all assets. Entities: programs, network hosts, administrators.]
- Almost everyone has a "red" machine
- Security settings are optimized for immediate productivity, not long-term security
- So the untrustworthy or unaccountable get to interact with important assets

With R|G
[Figure: the assets are split across two machines. Less trustworthy / less accountable entities mount N attacks/yr against My Red Computer, which holds the less valuable assets; more trustworthy / more accountable entities mount m attacks/yr against My Green Computer, which holds the more valuable assets (N >> m). Entities: programs, network hosts, administrators. Net effect: N attacks/yr on less valuable assets, m attacks/yr on more valuable assets.]

Must Get Configuration Right
- Keep valuable stuff out of Red
- Keep hostile agents out of Green
[Figure: the two failure modes: a valuable asset ending up on My Red Computer, and a hostile agent (a less trustworthy, less accountable entity) reaching My Green Computer.]

Why R|G?
Problems:
- Any OS will always be exploitable; the richer the OS, the more bugs
- We need internet access to get work done and have fun, and the internet is full of bad guys
Solution: isolated work environments
- Green: important assets, only talk to good guys; don't tickle the bugs, by restricting inputs
- Red: less important assets, talk to anybody; blow away broken systems
Good guys: more trustworthy / accountable. Bad guys: less trustworthy or less accountable.

Configuring Green
Green = locked down = only whitelisted inputs
- Requires professional management; few users can make these decisions
- Avoid "click OK to proceed"
- To escape, use Red
- Today almost all machines are Red
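A minimal sketch of what "only whitelisted inputs" might look like for code on the Green machine, assuming an administrator-maintained hash list; the file names and the hash value are made up for the example.

```python
# Toy whitelist check for the Green machine: run code only if its hash is on a
# list maintained by a professional administrator. Paths and hashes are made up.
import hashlib
from pathlib import Path

GREEN_WHITELIST = {
    # sha256 digests of binaries the administrator has approved (illustrative value)
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def may_run(path: Path) -> bool:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in GREEN_WHITELIST

candidate = Path("downloaded_tool.exe")
if candidate.exists() and may_run(candidate):
    print("allowed on Green")
else:
    print("rejected: run it on Red, or not at all")
```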

R|G User Model Dilemma
People don't want complete isolation. They want to:
- Cut/paste, drag/drop
- Share parts of the file system
- Share the screen
- Administer one machine, not multiple
- ...
But more integration can weaken isolation: it adds bugs and compromises security.

Data Transfer
Mediates data transfer between machines: drag/drop, cut/paste, shared folders
Problems:
- Red → Green: malware entering
- Green → Red: information leaking
Possible policy:
- Allowed transfers (configurable). Examples: no transfer of ".exe" from R to G; only transfer ASCII text from R to G (see the sketch below)
- Non-spoofable user intent; warning dialogs
- Auditing
- Synchronous virus checker; third-party hooks, ...
Notes: Content downloaded through IE, Messenger, peer-to-peer, etc. should be tagged with its download source, similar to IE zones; as the content moves through the "airlock", the tags should move with it. Auditing uses a synchronous virus checker. Mention the Attachment Execution Services (AES) check.
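A toy sketch of the Red to Green transfer guard, implementing the two example rules from the slide (block ".exe", allow only ASCII text). The function names and the audit print are illustrative assumptions.

```python
# Toy Red -> Green transfer guard. The rules mirror the slide's example policy;
# a real guard would also confirm non-spoofable user intent and run a
# synchronous virus check before letting anything through.
from pathlib import Path

def allowed_red_to_green(path: Path) -> bool:
    if path.suffix.lower() == ".exe":
        return False                       # no executables cross into Green
    try:
        path.read_bytes().decode("ascii")  # only plain ASCII text is allowed
    except UnicodeDecodeError:
        return False
    return True

def transfer_to_green(path: Path) -> None:
    ok = allowed_red_to_green(path)
    # Audit every attempt, allowed or not.
    print(f"audit: {'ALLOW' if ok else 'BLOCK'} R->G transfer of {path.name}")

Path("notes.txt").write_text("meeting agenda")   # sample file for the demo
transfer_to_green(Path("notes.txt"))             # allowed: ASCII text
transfer_to_green(Path("setup.exe"))             # blocked by the .exe rule
```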

Where Should Email/IM Run?
As productivity applications, they must be well integrated into the work environment (Green)
Threats: a tunnel from the bad guys
- Executable attachments
- Exploits of complicated data formats
Choices:
- Run two copies, one in Green and one in Red
- Run in Green and mitigate the threats: the Green platform does not execute arbitrary programs, and Green apps are conservative in the file formats they accept
- Route messages to the appropriate machine (see the sketch below)
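A sketch of the "route messages to the appropriate machine" choice, assuming a small set of formats that Green apps accept; the types and the routing rule are illustrative, not the talk's design.

```python
# Illustrative mail/IM router: risky messages go to Red, plain ones to Green.
from dataclasses import dataclass, field

SAFE_ATTACHMENT_TYPES = {".txt", ".csv"}   # formats Green apps accept (assumed)

@dataclass
class Message:
    sender: str
    body: str
    attachments: list = field(default_factory=list)   # attachment file names

def route(msg: Message) -> str:
    for name in msg.attachments:
        ext = "." + name.rsplit(".", 1)[-1].lower() if "." in name else ""
        if ext not in SAFE_ATTACHMENT_TYPES:
            return "red"      # executable or complicated format: open it on Red
    return "green"            # plain message: handle it in the work environment

print(route(Message("colleague@example.com", "minutes", ["notes.txt"])))   # green
print(route(Message("unknown@example.net", "invoice", ["invoice.exe"])))   # red
```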

R|G and Enterprise Networks
Red and Green networks are defined as they are today: IPsec, guest firewall, proxy settings, ...
- The VMM can act as a router, e.g. Red only talks to the proxy (see the sketch below)
- Must keep important data and attackers on different sides of a VM isolation boundary
- Partition the network as shown
- Apply stricter security settings in Green: software restriction policies, restricted user admin privileges, ...
Notes: We think this works pretty well for R|G in enterprises, but we don't know how to do it for home users.
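A toy sketch of the "VMM acts as a router" rule that Red only talks to the proxy; the addresses and the Green rule are assumptions for illustration only.

```python
# Toy VMM forwarding policy: Red traffic is forwarded only to the enterprise
# proxy; Green stays on the managed network. All addresses are illustrative.
PROXY = "10.0.0.42"

def vmm_forwards(source_vm: str, dest_ip: str) -> bool:
    if source_vm == "red":
        return dest_ip == PROXY              # Red may only talk to the proxy
    if source_vm == "green":
        return dest_ip.startswith("10.0.")   # Green stays on the managed network
    return False

print(vmm_forwards("red", "10.0.0.42"))      # True: via the proxy
print(vmm_forwards("red", "198.51.100.9"))   # False: direct internet access blocked
print(vmm_forwards("green", "10.0.3.7"))     # True: internal services allowed
```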

Summary
- Security is about risk management: cost of security < expected loss
- Security relies on deterrence more than locks
  - Deterrence requires the threat of punishment, which requires accountability
- Accountability needs an ecosystem: senders becoming accountable, receivers verifying accountability
- Accountability limits freedom; beat this by partitioning: Red | Green
  - Don't tickle the bugs in Green; dispose of Red