
Wormholes and a Honeyfarm: Automatically Detecting Novel Worms (and other random stuff)
Nicholas Weaver, Vern Paxson, Stuart Staniford
UC Berkeley / ICIR / Silicon Defense

Problem: Automatically Detecting New Worms
- Detect a new worm on the Internet before many machines are infected
- Use this information to guide defenses: 30-60 seconds to detect (and stop) Slammer
- Honeypots are accurate detectors: monitor their egress to detect worms
  - k vulnerable honeypots will detect a worm when ~1/k of the vulnerable machines are infected
- But impractical at this scale
  - Cost: time, not machines
  - Trust: must trust all honeypots!
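The ~1/k claim follows from order statistics: if the worm reaches the vulnerable population (real hosts plus honeypots) in uniformly random order, the first of k honeypots is hit after roughly 1/(k+1) of the hosts are infected. A quick Monte Carlo sketch (my own illustration, not from the talk) checks this:

```python
import random

def fraction_infected_at_detection(n_vulnerable, k_honeypots, trials=5000):
    """Estimate the fraction of vulnerable hosts already infected when the
    first of k honeypots is hit. Model: the worm reaches the n + k
    vulnerable addresses in a uniformly random order, so the honeypots
    occupy k random slots in that order, and every slot before the earliest
    of them is an already-infected host."""
    total_slots = n_vulnerable + k_honeypots
    total = 0.0
    for _ in range(trials):
        honeypot_slots = random.sample(range(total_slots), k_honeypots)
        total += min(honeypot_slots) / n_vulnerable
    return total / trials
```

For n_vulnerable = 100,000 and k_honeypots = 10, the estimate comes out near 1/11 ≈ 0.09, matching the ~1/k rule of thumb.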

Idea: Split the Network Endpoints from the Honeypots
- Wormholes are traffic tunnels
  - Route connections to a remote system
  - Untrusted endpoints
- The honeyfarm consists of virtual machine honeypots
  - Creates virtual honeypots on demand (see honeynet.org)
  - Routes internally generated traffic to other images
  - Classifies worms based on what they can infect

Notes: This is the resulting split. The endpoints, the "wormholes", are traffic tunnels. The honeyfarm itself uses virtual machine techniques so it doesn't need to maintain a honeypot for each endpoint; given the number of wormholes, it must create images on demand. For more information on virtual honeypots, see the Honeynet Project.

How Wormholes Work
- Low-cost "appliance"
  - Plugs into the network, obtains an address through DHCP
  - Contacts the honeyfarm
  - Reconfigures the local network stack to fool nmap-style detection
  - Forwards all traffic to/from the honeyfarm
- Clear box: deployers have the source code
  - Restrictions built into the wormhole code mean the deployer doesn't have to trust the honeyfarm: the wormhole can't contact the local network!
- Instead of (or in addition to) wormholes, one can:
  - Route small telescopes to the honeyfarm
  - Route ALL unused addresses in an institution

Notes: The goal of the wormholes is to build traffic tunnels that people will actually deploy. Exception: DNS requests can be sent to the local DNS server. The wormhole CAN forward requests from the honeyfarm outward, to defeat "phone home" based detection strategies, but it can't forward requests to the local network, which lets it be deployed more safely. The aim is a device which the deployers can trust, but which the honeyfarm does not have to trust. Since the functionality is simple, it can run on low-cost commodity hardware. We might also want to route entire address ranges to the honeyfarm, although such ranges represent more valuable "secrets" from an attacker's viewpoint, so they can't be relied on as heavily. (Image is a random DS9 screen capture, found on the net.)
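The safety property in the notes (the honeyfarm may send traffic outward but never into the deployer's local network, with the lone DNS exception) is simple enough to sketch as a filter. A minimal sketch; the subnet and resolver addresses below are hypothetical:

```python
import ipaddress

# Assumed local subnet and resolver for the illustration -- a real wormhole
# would learn these from DHCP rather than hard-coding them.
LOCAL_NET = ipaddress.ip_network("192.168.1.0/24")
LOCAL_DNS = ipaddress.ip_address("192.168.1.53")

def honeyfarm_packet_allowed(dst_ip, dst_port):
    """Return True if a packet arriving from the honeyfarm may be forwarded.

    Anything bound for the outside world passes (defeating "phone home"
    detection), but the local network is off-limits except for DNS
    requests to the local resolver."""
    dst = ipaddress.ip_address(dst_ip)
    if dst in LOCAL_NET:
        return dst == LOCAL_DNS and dst_port == 53
    return True
```

For example, honeyfarm_packet_allowed("8.8.8.8", 80) passes, while honeyfarm_packet_allowed("192.168.1.10", 445) is refused.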

How a Honeyfarm Works
- Creates virtual machine images (using VMware or similar) to implement honeypots
- Images exist "in potential" until traffic is received
  - Niels Provos suggested using honeyd as a first-pass filter
  - This completes the illusion that a honeypot exists at every wormhole location
- For any traffic received from a wormhole:
  - Activate and configure a VM image
  - Forward the traffic to that image
- Traffic generated by a honeypot image is monitored and redirected

[Diagram: a wormhole at IP aa.bb.cc.dd tunnels traffic to the honeyfarm, where one VM image adopts the address aa.bb.cc.dd; other images hold xx.xx.xx.xx and aa.bb.cc.ee.]

Notes: The honeyfarm needs to use virtual machine honeypots, as anything else would require too many resources. The goal is to detect behaviors in the system, so images must be almost ready to go: they are reconfigured and connected when traffic is received. We monitor that traffic to serve as the detector.
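The "in potential" idea is essentially lazy instantiation keyed by wormhole address. A minimal sketch; the class and method names are my own, and a real deployment would clone VMware images rather than Python objects:

```python
class VMImage:
    """Stand-in for a virtual machine image configured on demand."""
    def __init__(self, ip):
        self.ip = ip          # the image adopts the wormhole's address
        self.packets = []     # traffic delivered to this honeypot
    def deliver(self, packet):
        self.packets.append(packet)

class Honeyfarm:
    def __init__(self):
        self.active = {}      # wormhole IP -> live VM image

    def on_wormhole_traffic(self, wormhole_ip, packet):
        # First packet for this address: activate and configure an image
        # that takes on the wormhole's IP; later packets reuse it.
        if wormhole_ip not in self.active:
            self.active[wormhole_ip] = VMImage(wormhole_ip)
        image = self.active[wormhole_ip]
        image.deliver(packet)
        return image
```

Until the first packet arrives for a given wormhole address, no image exists for it at all, which is what keeps the resource cost proportional to attack traffic rather than to the number of wormholes.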

What Could We Automatically Learn From a Honeyfarm?
- That a new worm is loose in the Internet
  - Triggered by the ability to infect VMs
- What the worm is capable of
- Which configurations are vulnerable, including patch level
  - Creates a "vulnerability signature"
- Some overt, immediate malicious behavior (immediate file erasers, etc.)
- Possible attack signatures
- Works best for tracking human attackers and scanning worms
  - Slow enough to react to effectively
  - Their randomness hits wormholes

Notes: Note the one limitation: we can only detect worms which can infect the VM images. We determine capabilities based on which other VM images the captured worm succeeds in infecting. A "vulnerability signature" could be used by response mechanisms to block all traffic to vulnerable machines without affecting immune machines. Noticing overtly malicious behavior is useful secondary information, but not the primary objective.
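One way to picture a "vulnerability signature": replay the captured worm against differently configured VM images and record which ones it can infect. The configuration labels below are made up for illustration:

```python
def vulnerability_signature(worm_can_infect, configurations):
    """Return the subset of configurations the worm can infect.

    worm_can_infect: predicate config -> bool; in practice this would be
    the result of replaying the worm against a VM image built with that
    configuration, not a Python callable."""
    return {cfg for cfg in configurations if worm_can_infect(cfg)}

# Hypothetical image configurations, tagged by OS and patch level.
configs = {"win2k-sp0", "win2k-sp4", "winxp-sp0", "linux-2.4"}

# Pretend the captured worm only works against unpatched Windows images:
signature = vulnerability_signature(lambda c: c.endswith("-sp0"), configs)
# signature == {"win2k-sp0", "winxp-sp0"}
```

A response mechanism holding this signature could block traffic only to machines matching it, leaving immune (patched) machines untouched.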

What Trust Is Needed?
- Wormhole deployers: need to trust the wormhole devices, not the honeyfarm operator
  - Attackers know of some wormholes, but most are generally unknown; wormhole locations are "open secrets"
- Honeyfarm operator: does not trust the wormhole deployers
  - Detection is based on infected honeypots, not on traffic from a wormhole
  - Dishonest wormholes are filtered out
- Responding systems receiving an alert: either trust that the honeyfarm and its operator are honest and uncompromised, OR rely on multiple, independent honeyfarms all raising an alarm ("If CERT and DOD-CERT both say...")

Notes: When building distributed systems, and especially systems which trigger automatic responses, we need to be very concerned about trust. The wormhole deployers trust the wormholes, NOT the honeyfarm. The honeyfarm doesn't trust the wormhole deployers. Responding systems either trust the honeyfarm, or trust two independent honeyfarms to both raise an alarm. (Image is taken from http://www.sonypictures.com/movies/matilda/assets/car.jpg, copyright Sony.)
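The "multiple, independent honeyfarms" rule is a simple quorum check. A sketch, assuming a responder that by default demands unanimity among the honeyfarms it relies on:

```python
def should_respond(alarms, quorum=None):
    """Decide whether an automatic response fires.

    alarms: dict mapping honeyfarm name -> bool (alarm raised?).
    quorum: number of alarms required; default is unanimity, so a single
    compromised or dishonest honeyfarm cannot trigger a response alone."""
    raised = sum(1 for up in alarms.values() if up)
    needed = len(alarms) if quorum is None else quorum
    return raised >= needed
```

For example, should_respond({"CERT": True, "DOD-CERT": True}) fires, while should_respond({"CERT": True, "DOD-CERT": False}) does not.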

Status and Acknowledgements
- Status: paper design
  - Idea, attacks, costs, development time
  - Lots of attacks on the honeyfarm system, and possible defenses
  - Plan to build the honeyfarm first, attached to a small telescope
  - Wormholes can be built for <$350 (quantity 1): no moving parts, 50 watts of power
- Acknowledgements:
  - Honeypot technology: Honeynet Project, honeyd, DTK
  - Feedback from many people: Stefan Savage, David Moore, David Wagner, Niels Provos, and many others

Random Slide: Wiring-Closet Defenses
- 1 Gb (ASAP), 10 Gb (+2-3 years)
- Need wiring-closet defenses: as close to the endpoint as possible, and reprogrammable
- <$1000 for GigE today (build for $500); optical ideal, +$100 for 1000-Base-T
- <$2000 for 10 GigE in 2-3 years (build for $1000)
- New FPGAs with SERDESes, embedded processors, massive parallelism and pipelining

[Diagram: board sketch with an FPGA flanked by DIMMs, 1000-Base-T PHYs, and SX transceivers.]

Random Slide: Colonel John R. Boyd's OODA "Loop"

[Diagram: Boyd's OODA loop. Observation, fed by unfolding circumstances, outside information, and feedback from unfolding interaction with the environment, feeds forward into Orientation, which is shaped by cultural traditions, genetic heritage, previous experience, new information, and analyses & synthesis. Orientation feeds forward into Decision (hypothesis) and then Action (test), while also exerting implicit guidance and control over both observation and action; action feeds back into observation.]

Notes: This slide (but not the accompanying notes) is copyright the estate of John R. Boyd; permission was granted to use it in these talks as long as the attribution remains. The OODA (Observe, Orient, Decide, Act) "loop"/cycle was developed by John Boyd as a way of describing competitors, with each participant (or group) having its own OODA loop. Boyd was a retired USAF colonel. As a military serviceman, he wrote the book on air-to-air tactics, was a driving force behind the F-15, F-16, and F-18 designs, and developed many of the concepts used in current US military strategies and tactics. Robert Coram's "Boyd" is a good biography for the curious (a good reference for understanding the OODA concepts themselves is still needed). Note how orientation shapes observation, shapes decision, shapes action, and in turn is shaped by the feedback and other phenomena coming into our sensing or observing window. Also note how the entire "loop" (not just orientation) is an ongoing many-sided implicit cross-referencing process of projection, empathy, correlation, and rejection. From "The Essence of Winning and Losing," John R. Boyd, January 1996; via Defense and the National Interest, http://www.d-n-i.net, copyright 2001 the estate of John Boyd, used with permission.

Random Slide: What Is the OODA Loop?
- The OODA (Observe, Orient, Decide, Act) cycle was designed as a semi-formal model of adversarial decision making
  - Really a complex nest of feedback loops
  - Originally designed to represent strategic and tactical decision making
  - Implicit shortcuts are critical in human-based systems
  - Every participant or group has its own OODA loop
- Attack the opponent's decision-making process:
  - Avoid/confuse/manipulate the opponent's observation/detection (stealthy worms)
  - Take advantage of errors in orientation/analysis (not seen yet, but it will begin to happen!)
  - Move faster than the opponent's reaction time: why autonomous worms outrace "human-in-the-loop" systems
- Reactive worm defenses need fully automated OODA loops
- The fastest accurate OODA loop usually wins

Notes: The OODA loop is a semi-formal model of adversarial decision making: each participant has their own "loop", and groups create loops as well. The term "loop" is a misnomer; rather, it is a collection of numerous feedback loops. It was primarily developed to model strategy and tactics built around attacking the opponent's decision-making process rather than just the opponent's physical resources. This is critical for understanding worm defenses, because there are at least two competing processes: the worm's and the defense's. During propagation, if a worm can avoid triggering the detection mechanisms, it avoids the entire defense, as nothing gets triggered. Likewise, any errors in the orientation/analysis portion can be exploited. Finally, and this is the worm's greatest advantage against the defenders, if an individual's OODA loop operates within the reaction timescale of the opponent, the opponent really can't do anything: he is always one (or many) steps behind, and falls farther behind over time. As long as humans are in the response path necessary to stop a worm, the defense's OODA loop is always too slow.
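The reaction-time point can be made concrete with a toy model (my own numbers, not from the talk): a worm that doubles every second is detected when some fraction of hosts is infected, then spreads unchecked for the defender's decision latency before being stopped.

```python
def infected_when_stopped(n_vulnerable, detect_fraction, decision_delay_s,
                          doubling_time_s=1.0):
    """Hosts infected by the time the defense finally acts: the count at
    detection grows by a factor of 2**(delay/doubling) during the decision
    latency, capped at the vulnerable population."""
    at_detection = detect_fraction * n_vulnerable
    growth = 2 ** (decision_delay_s / doubling_time_s)
    return min(n_vulnerable, at_detection * growth)
```

With 100,000 vulnerable hosts and detection at 1% infected, an automated loop acting in 2 seconds stops the worm at 4,000 infections, while a 10-minute human-in-the-loop response saturates the entire population.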

Random Slide: Automated OODA Loops
- Since both the worms and the worm-defense routines are automatic while a fast worm is spreading, the OODA loops are much simpler
  - No implicit paths: everything is now explicit
  - Orientation and decision making are combined
  - Communication is also made explicit
- The OODA loops are shaped by the designers' goals, objectives, and skills
- Observation is often critical for both sides

[Diagram: simplified automated loop. Observe (passive, local, and active observation, plus communication) feeds automatic decision making (Orient/Decide), which controls actions (Act) and information flow; actions interact with the environment and feed back into observation.]

Notes: In an automated loop there are no implicit fast paths, as all paths are now explicit. Orientation and decision are combined, simply because orientation in the original OODA formulation represents implicit decision making as opposed to explicit decision making; with an automated system, there is no longer a significant distinction between the two. Communication is made explicit in this simplification. It is implicit in the original OODA formulation: actions can communicate to others (both friend and foe), and communication is one of the areas observed. But when thinking about both worms and worm defense, communication becomes so important (in creating a wider viewpoint) that this version explicitly includes communication between decision-making systems. Likewise, since observation techniques can leak information to the opponents, the diagram separates observation into three classes: passive, local, and active.