Agenda for a Science of Security
Fred B. Schneider*
Department of Computer Science, Cornell University
Ithaca, New York 14853, U.S.A.



Slide 1: Maps = Features + Relations

Features:
- Land mass
- Route

Relationships:
- Distance
- Direction

Slide 2: Map of Security (circa 2005)

Features:
- Port Scan
- Bugburg
- Geekland
- Bufferville
- Malwaria
- Root kit pass
- Sploit Market
- Valley of the Worms
- Sea Plus Plus
- Sea Sharp
- ...

Reproduced courtesy of Fortify Software Inc.

Slide 3: Map of Security (circa 2015?)

Features:
- Classes of attacks
- Classes of policies
- Classes of defenses

Relationships:
- "Defense class D enforces policy class P despite attacks from class A."

[Diagram: triangle linking Attacks, Defenses, and Policies.]

Slide 4: Outline

Give examples to demonstrate:
- map features: policy, defense, and attack classes;
- relationships between these "features".

Discuss scope for the term "security science":
- "If everybody is special, then nobody is." -- Mr. Incredible
- "Good" work in security might not be "security science".

Give examples of, and non-obvious open questions in, "security science".

Slide 5: Oldspeak: Security Features: Attacks

Attack: means by which a policy is subverted. A threat exploits a vulnerability.
- Attacks du jour:
  e.g., buffer overflow, format string, cross-site scripting, ...
- Threat models have been articulated:
  e.g., insider, nation-state, hacker, ...
  e.g., 10 GBytes + 50 Mflops, ...

Threat model → Attacks?

Slide 6: Oldspeak: Security Features: Policies

Policy: what the system should do and what the system should not do.
- Confidentiality: Who is allowed to learn what?
- Integrity: What changes are allowed by the system?
  ... includes resource utilization and input/output to the environment.
- Availability: When must service be rendered?

Usual notions of "program correctness" are a special case.

Slide 7: Oldspeak: Security Features: Defenses

Defense mechanism: ensures that policies hold. Example general classes include:
- Monitoring (reference monitor, firewall, ...)
- Isolation (virtual machines, processes, SFI, ...)
- Obfuscation (cryptography, automated diversity)

Slide 8: Oldspeak: Security Features: Relationships

Attack → Defense → Secure System

Pragmatics:
- Attacks exploit vulnerabilities.
  - Vulnerabilities are unavoidable.
- Assumptions are potential vulnerabilities.
  - Assumptions are unavoidable.

... so all non-trivial systems can be attacked. The question is:
Can a threat of concern launch a successful attack?

Slide 9: Classes of Attacks

- Operational description:
  - "Overflow an array to clobber the return ptr..."
- Semantic characterization:
  - A program: RealWorld = System || Attack (Dolev-Yao, Mitchell)
  - An input that:
    - causes deviation from a specification, or
    - causes different outputs in diverse variants.

Slide 10: Classes of Policies

System behavior t: an infinite trace t = s0 s1 s2 s3 ... si ...
System property P: a set of traces, P = { t | pred(t) }
System S: the set of traces that are its behaviors.
System S satisfies property P: S ⊆ P
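
To make these definitions concrete, here is a minimal Python sketch (mine, not from the talk) that truncates "infinite" traces to finite prefixes: a property is represented by its predicate, and a system satisfies the property iff every one of its behaviors satisfies the predicate.

```python
# Minimal sketch: traces as sequences of state labels, truncated to
# finite prefixes for illustration.

# A property P = { t | pred(t) } is represented by its predicate.
def pred(t):
    # Hypothetical example predicate: the trace never enters state "bad".
    return all(s != "bad" for s in t)

# A system S is a set of traces (its behaviors); S satisfies P iff S ⊆ P,
# i.e., every behavior of S satisfies the predicate.
system = [["s0", "s1"], ["s0", "s2", "s3"]]
print(all(pred(t) for t in system))   # True: no behavior reaches "bad"
```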

Slide 11: Safety and Liveness [Lamport 77]

Safety: some "bad thing" doesn't happen.
- Traces that contain an irremediable prefix.
Liveness: some "good thing" does happen.
- Prefixes that are not irremediable.

Thm: Every property is the conjunction of a safety property and a liveness property.
Thm: Safety properties are proved by invariance.
Thm: Liveness properties are proved by well-foundedness.

Slide 12: Safety and Liveness [Alpern+Schneider 85,87]

Safety: some "bad thing" doesn't happen.
- Proscribes traces that contain some irremediable prefix.
Liveness: some "good thing" does happen.
- Prescribes that prefixes are not irremediable.

Thm: Every property is the conjunction of a safety property and a liveness property.
Thm: Safety properties are proved by invariance.
Thm: Liveness properties are proved by well-foundedness.
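
For reference, the Alpern+Schneider formalization that this slide summarizes (Σ^ω is the set of infinite traces, Σ* the finite ones, and σ[0..i] a finite prefix of σ):

```latex
% Safety: every violating trace has an irremediable finite prefix.
\forall \sigma \in \Sigma^\omega:\;
  \sigma \notin P \;\Rightarrow\;
  \exists i \ge 0:\; \forall \beta \in \Sigma^\omega:\;
    \sigma[0..i]\,\beta \notin P

% Liveness: every finite prefix can still be extended into P.
\forall \alpha \in \Sigma^*:\;
  \exists \beta \in \Sigma^\omega:\; \alpha\,\beta \in P
```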

Slide 13: Monitoring: Attack / Defense / Policy -- Execution Monitoring (EM) [Schneider 2000]

Execution monitor:
- gets control on every policy-relevant event;
- blocks execution if allowing the event would violate the policy;
- integrity of the EM is protected from subversion.

Thm: EM enforces only safety properties.

Examples of EM-enforceable policies:
- Only Alice can read file F.
- Don't send a msg after reading file F.
- Request processing is FIFO wrt arrival.

Examples of non-EM-enforceable policies:
- Every request is serviced.
- The value of x is not correlated with the value of y.
- Avg execution time is 3 sec.

Slide 14: Monitoring: Attack / Defense / Policy -- New EM Approaches

Every safety property corresponds to an automaton. Example: "no send after a read", □(read → □¬send), is the two-state automaton that loops in its initial state on "not read" events, moves to a second state on "read", and loops there on "not send"; a "send" after a "read" has no transition and so is rejected.
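
A hedged Python sketch (illustrative, not the actual SASI machinery) of how that automaton operates as an execution monitor: it consumes events one at a time and blocks any event for which the automaton has no transition.

```python
# Security automaton for "no send after read": state q0 (no read yet)
# and state q1 (a read has happened); in q1, "send" has no transition.

class NoSendAfterRead:
    def __init__(self):
        self.read_seen = False      # automaton state: False = q0, True = q1

    def step(self, event):
        """Return True if the event is allowed; False means block execution."""
        if event == "read":
            self.read_seen = True   # transition q0 -> q1
            return True
        if event == "send" and self.read_seen:
            return False            # bad thing: send after a read
        return True                 # every other event is fine

monitor = NoSendAfterRead()
for ev in ["send", "read", "send"]:
    if not monitor.step(ev):
        print("policy violation: blocked", ev)   # fires on the second "send"
        break
```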

Slide 15: Monitoring: Attack / Defense / Policy -- Inlined Reference Monitor (IRM)

New approach to enforcing EM policies (see the sketch below):
1. Translate the automaton into program code (a case statement).
2. Inline that automaton into the target program.

Relocates trust from the program to the reference monitor.

[Diagram: SASI pipeline -- the policy is specialized and inserted into the application, which is then compiled into a secure application.]
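
A minimal sketch of the IRM idea applied to the same policy, with hypothetical operation names (do_read, do_send): the automaton's state and its case-statement checks are inlined into the application itself rather than living in a separate monitor.

```python
read_seen = False                    # inlined automaton state

def do_read(f):
    global read_seen
    read_seen = True                 # inlined transition q0 -> q1
    return "contents of " + f        # stand-in for the real read

def do_send(msg):
    if read_seen:                    # inlined check from the automaton
        raise RuntimeError("policy violation: send after read")
    print("sending", msg)

do_send("hello")     # allowed: nothing has been read yet
do_read("file F")
# do_send("leak")    # would raise: the inlined monitor blocks it
```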

Slide 16: Monitoring: Attack / Defense / Policy -- Proof Carrying Code

New approach to enforcing EM policies:
- Code producer: from automaton A and program S, construct a proof that S satisfies A.
- Code consumer: if A suffices for the required security, then check the proof that S satisfies A.
  (Proof checking is easier than proof construction.)

Relocates trust from the program and the prover to the proof checker.
Proofs are more expressive than EM.

Slide 17: Monitoring: Attack / Defense / Policy -- Proof Carrying Code

PCC and IRM can be combined.

[Diagram: the SASI pipeline (specialize policy, insert into application, compile) extended with an optimize step that emits a PCC application together with its proof.]

Slide 18: Monitoring: Attack / Defense / Policy -- Virtues of IRM

When the mechanism is inserted into the application:
- Allows policies in terms of application abstractions.
- Pay only for what you need.
- Enforcement without context switches into the kernel.
- Isolates the state of the enforcement mechanism.

[Diagram: the RM sits inside the program rather than in the kernel.]

Slide 19: Security ≠ Safety Properties

Non-correlation: the value of L reveals nothing about the value of H.
Non-interference: deleting cmds from H-users cannot be detected by cmds executed by L-users. [Goguen-Meseguer 82]

Properties, safety, and liveness are not expressive enough!
EM is not powerful enough.

Slide 20: Hyper-Properties [Clarkson+Schneider 08]

Hyper-property: a set of properties = a set of sets of traces.
System S satisfies hyper-property HP: S ∈ HP
Hyper-property [P]: {P′ | P′ ⊆ P}

Note:
- (P ∈ HP and P′ ⊆ P) ⇒ P′ ∈ HP is not required.
- Non-interference is a HP.
- Non-correlation is a HP.
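
An illustrative Python sketch (mine, not the paper's formalism) of the key shift: a hyperproperty is a set of trace sets, and a system satisfies it by membership of the whole system, not by trace-wise inclusion.

```python
from itertools import chain, combinations

def lift(P):
    """[P] = the set of all subsets of P (feasible here: P is small and finite)."""
    P = list(P)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(P, r) for r in range(len(P) + 1))]

P = {("s0", "s1"), ("s0", "s2")}   # a property: a set of (finite) traces
HP = lift(P)                       # the hyperproperty [P]

S = frozenset({("s0", "s1")})      # a system: a set of traces
print(S in HP)                     # True: S as a whole is a member of [P]
```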

Slide 21: Hyper-Safety Properties

Hyper-safety HS: the "bad thing" is a property M comprising a finite number of finite traces.
- Proscribes trace sets containing irremediable observations.

Thm: For a safety property S, [S] is hyper-safety.
Thm: All hyper-safety properties [S] are refinement closed.

Note:
- Non-interference is a HS.
- Non-correlation is a HS.

Slide 22: Hyper-Safety Applications

2SP: a safety property on program S composed with itself (with variables renamed): S; S′ [Terauchi+Aiken 05]
2SP transforms information flow into a safety property!

K-safety: a safety property on program S^K = S || S′ || ... || S″
K-safety is HS.
Thm: Any K-safety property of S is equivalent to a safety property on S^K.
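
A minimal sketch of the 2SP construction, assuming a hypothetical one-line program `prog`: compose two copies with the high inputs renamed and the low input shared, and assert that the low outputs agree. Noninterference of `prog` then becomes an ordinary assertion, i.e., a safety property of the composed program.

```python
def prog(high, low):
    return low + 1                # secure: output is independent of `high`

def composed(h1, h2, low):
    # S; S' with variables renamed, sharing the low input
    out1 = prog(h1, low)
    out2 = prog(h2, low)
    assert out1 == out2           # a violated assertion is the "bad thing"

composed(h1=0, h2=42, low=7)      # passes: prog is noninterferent
# Replacing `return low + 1` with `return high` makes the assertion fail.
```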

Slide 23: Hyper-Liveness Properties

Hyper-liveness HL: any finite set M of finite traces has an augmentation that is in HL.
- Prescribes: observations are not irremediable.
- Examples: possibility, statistical performance, etc.

Thm: Every HP is the conjunction of a HS and a HL.

Slide 24: Hyper Recap

Safety properties are EM-enforceable:
- New enforcement (IRM).
Properties are not expressive enough:
- Hyper-properties (hyper-safety, hyper-liveness).
- K-safety (reduces proving HS to proving a property).

Q: Verification for HS and HL?
Q: Refinement for HS and HL?
Q: Enforcement for HS and HL?

Slide 25: Obfuscation: Attack / Defense / Policy -- Goals and Options

Semantics-preserving random program rewriting...

Goals: the attacker does not know:
- the address of specific instruction subsequences;
- the address or representation scheme for variables;
- the name or entry point for any system service.

Options:
- Obfuscate source (arglist, stack layout, ...).
- Obfuscate object or binary (syscall meanings, basic-block and variable positions, relative offsets, ...).
- All of the above.

Slide 26: Obfuscation: Attack / Defense / Policy -- Obfuscation Landscape [Pucella+Schneider 06]

Given program S, the obfuscator computes morphs: T(S, K1), T(S, K2), ..., T(S, Kn)
- Attacker knows: obfuscator T, input program S.
- Attacker does not know: random keys K1, K2, ..., Kn.
  Knowledge of the Ki would enable attackers to automate attacks!

Will an attack succeed against a morph?
- A seg fault is likely if the attack doesn't succeed:
  an integrity compromise becomes an availability compromise.
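
A toy Python sketch of keyed, semantics-preserving rewriting (far simpler than real address obfuscation, and every name here is illustrative): each key Ki yields a morph T(S, Ki) with a different internal layout but identical interface behavior.

```python
import random

def morph(key):
    rng = random.Random(key)     # the secret key Ki drives the randomness
    pad = rng.randrange(1, 64)   # e.g., randomized padding before a buffer
    def program(data):
        buf = bytearray(pad) + bytes(data)   # internal layout differs per morph
        return bytes(buf[pad:])              # ...but semantics are preserved
    return program

m1, m2 = morph(key=1), morph(key=2)
assert m1(b"hi") == m2(b"hi") == b"hi"   # same interface, same outputs;
                                         # without Ki an attacker cannot
                                         # predict the internal offsets
```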

Slide 27: Obfuscation: Attack / Defense / Policy -- Successful Attacks on Morphs

All morphs implement the same interface.
- Interface attacks: obfuscation cannot blunt attacks that exploit the semantics of that (flawed) interface.
- Implementation attacks: obfuscation can blunt attacks that exploit implementation details.

Def. implementation attack: an input on which the morphs (in some given set) do not all produce the same output.

Slide 28: Obfuscation: Attack / Defense / Policy -- Effectiveness of Obfuscation

Ultimate goal: determine the probability that an attack will succeed against a morph.
Modest goal: understand how effective obfuscation is compared with other defenses.
- Obvious candidate: type checking.

Slide 29: Obfuscation: Attack / Defense / Policy -- Type Checking as a Defense

Type checking: a process to establish that all executions satisfy certain properties.
- Static: checks made prior to execution. Requires a decision procedure.
- Dynamic: checks made as execution proceeds. Requires adding checks; execution aborts if one is violated.

Probabilistic dynamic type checking: some checks are skipped on a random basis.
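
A hedged sketch of probabilistic dynamic checking over a simulated flat memory (all names hypothetical): each bounds check actually runs only with probability p, so an out-of-bounds access is sometimes aborted and sometimes slips through, mirroring obfuscation's probabilistic defense.

```python
import random

P_CHECK = 0.5                 # probability that a given dynamic check runs

MEMORY = list(range(16))      # simulated process memory
BUF_BASE, BUF_LEN = 4, 4      # a logical 4-element buffer inside that memory

def read_buf(i):
    if random.random() < P_CHECK:        # the check may be skipped
        if not (0 <= i < BUF_LEN):
            raise TypeError("dynamic bounds check failed: aborting")
    return MEMORY[BUF_BASE + i]          # unchecked out-of-bounds reads
                                         # leak adjacent "memory"

# read_buf(7) sometimes raises TypeError and sometimes returns MEMORY[11].
```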

Slide 30: Obfuscation: Attack / Defense / Policy -- Obfuscation versus Type Checking

Thesis: obfuscation and probabilistic dynamic type systems can "defend against" the same attacks.

Going from "thesis" to "theorem" requires fixing:
- a language,
- a type system,
- a set of attacks.

Slide 31: Obfuscation: Attack / Defense / Policy -- Obfuscation Approximates Typing

Theorem: a type error is signaled if and only if there is a resistible attack relative to T() and keys K1, K2, ..., Kn, for these type systems:
- "Pointer de-ref sanity" types:
  - implied by the usual notion of "strong typing";
  - a stronger type system than necessary, e.g., "if x[i] = x[i] then skip" is not type-safe but is not affected by T.
- "Tainting" type system (= info flow):
  - a better approximation than "pointer de-ref sanity" types;
  - a low-integrity value can vary from morph to morph.

Slide 32: Obfuscation: Attack / Defense / Policy -- Type Systems / Obfuscator Bad News

Theorem: there is no computable type system that signals a type error iff there is an attack relative to address obfuscation and some finite set of keys K1, K2, ..., Kn.

Slide 33: Obfuscation: Attack / Defense / Policy -- Pros and Cons of Obfuscation

Type systems:
- prevent attacks (always, not just probably);
- if static, add no run-time cost;
- are not always part of the language.

Obfuscation:
- works on legacy code;
- doesn't always defend.

Slide 34: Recap: Features + Relationships

- Defined a characterization of policies: hyper-properties.
  - Linked to semantics + an orthogonal decomposition.
- Relationship between a class of defenses (EM) and a class of policies (safety):
  - provides an account of IRM and PCC.
- Relationship between a class of defenses (obfuscation) and a class of defenses (type systems):
  - uses a "reduction proof" and a class of attacks.

Slide 35: A Science?

- Science, meaning focus on process:
  - hypothesis + experiments → validation.
- Science, meaning focus on results:
  - abstractions and models, obtained by invention and by measurement + insight;
  - connections + relationships, packaged as theorems, not artifacts.
- Engineering, meaning focus on artifacts:
  - discovers missing or invalid assumptions (proof of concept; measurement);
  - discovers what the real problems are (e.g., attack discovery).

Slide 36: A Security Science?

Address questions that transcend particular systems, attacks, and defenses:
- Is "code safety" universal for enforcement?
- Can sufficiently introspective defenses always be subverted?
- ...

Computer Science ≠ the science base for Security.
Analogy: medical science vs. medical practice.