Scalability, Fidelity, and Containment in the Potemkin Virtual Honeyfarm
Michael Vrable, Justin Ma, Jay Chen, David Moore, Erik Vandekieft, Alex C. Snoeren, Geoffrey M. Voelker, Stefan Savage
Symposium on Operating Systems Principles (SOSP), 2005
Presented by Tracy Wagner, CDA 6938, February 8, 2007

Outline
- Introduction
- Potemkin Virtual Honeyfarm
- Evaluation
- Contributions/Strengths
- Weaknesses
- Further Research

Introduction

Problem: there is a sharp tradeoff between scalability, fidelity, and containment when deploying a honeyfarm. The three properties at stake:

- Scalability – how well a solution works as the size of the problem increases. We want a honeyfarm to scale well and cover as large an IP address space as possible, creating opportunities for interesting traffic and interactions.
- Fidelity – adherence to fact or detail; accuracy or exactness. We want the honeyfarm to track the actions and traffic of an attack and compromise; the more information we can gather, the better.
- Containment – keeping something from spreading. Once there is a compromise within the system, we must not let it out to the global Internet. Depending on traffic patterns, we could even cause an attacker that infected us with one thing to become infected with another, and being infected by one virus does not give us the right to infect the attacker with something unrelated. A large-scale system carries a significant possibility of third-party liability, so a dynamic policy to control the actions of hosts in the honeyfarm is required.

Introduction

Inherent tension between scalability and fidelity:
- Low interaction → high scalability (e.g., network telescopes, Honeyd). Disadvantage: these cannot be fully compromised, as they do not execute kernel or application code.
- High interaction → high fidelity (an individual server for each monitored IP address). Disadvantage: extremely expensive to scale and manage.

Inherent tension between containment and fidelity: strict containment policies cause a loss of fidelity.
- Allow no outbound packets: the honeypot cannot complete a TCP handshake, so it effectively becomes a low-interaction honeypot.
- Allow outbound packets only in response to inbound packets: benign third-party requests (such as DNS) are blocked.
Neither policy would allow a worm or trojan to "phone home" – fidelity is lost.

Introduction

Proposal: a honeyfarm system architecture that can
- Monitor hundreds of thousands of IP addresses, providing scalability
- Offer high fidelity, similar to high-interaction honeypots
- Support customized containment policies, allowing the tradeoff between potential liability and greater fidelity to be tuned

Potemkin Virtual Honeyfarm

Prototype honeyfarm system built on virtual machines, physical memory sharing, and exploiting idleness. Major components: the Gateway and the Virtual Machine Monitor (VMM).

Key insight behind the Potemkin design: honeypots have no computational purpose of their own, so only externally driven tasks have any value. A conventional honeypot is therefore idle at the network level while waiting for incoming traffic, and nearly idle in processor and memory, since most incoming requests are not CPU- or memory-intensive; in fact, a conventional honeypot network uses far less than one percent of its processor and memory resources for its intended purpose. Potemkin takes advantage of this by late-binding resources to requests, as the sketch below illustrates.
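To make late binding concrete, here is a minimal Python sketch (not the authors' implementation; clone_vm, destroy_vm, and forward_to_vm are hypothetical stubs standing in for VMM operations). A VM is created only when a packet arrives for an address that has none, and reclaimed after a period of inactivity:

    import time

    INACTIVITY_TIMEOUT = 60  # seconds; the paper's evaluation uses a 60-second timeout

    def clone_vm(ip):
        # Stub for flash cloning (described later); returns a placeholder VM handle.
        return {"ip": ip}

    def destroy_vm(vm):
        # Stub: real code would tear down the VM, or snapshot it first per policy.
        pass

    def forward_to_vm(vm, packet):
        # Stub: real code would deliver the packet to the VM's virtual NIC.
        pass

    active_vms = {}  # dst_ip -> [vm_handle, last_seen_timestamp]

    def handle_inbound(dst_ip, packet):
        # Bind a VM to dst_ip only when traffic for that address actually arrives.
        if dst_ip not in active_vms:
            active_vms[dst_ip] = [clone_vm(dst_ip), time.time()]
        entry = active_vms[dst_ip]
        entry[1] = time.time()  # refresh the inactivity timer
        forward_to_vm(entry[0], packet)

    def reclaim_idle_vms():
        # Reclaim VMs idle longer than the timeout, freeing memory and CPU.
        now = time.time()
        for ip, (vm, last_seen) in list(active_vms.items()):
            if now - last_seen > INACTIVITY_TIMEOUT:
                destroy_vm(vm)
                del active_vms[ip]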

Gateway

The gateway is effectively the "brains" of the honeyfarm: it is the only component that implements policy or maintains long-term state. It performs four distinct functions:
- Direct inbound traffic
- Contain outbound traffic
- Implement resource management
- Interface with other components

Gateway: Direct Inbound Traffic

- Traffic arrival: routing or tunneling
- Load balancing across backend honeyfarm servers: random, or based on platform
- Programmable filters: eliminate short-lived VMs resulting from same-service port scans across a large IP range

To direct inbound traffic as a router, the gateway acts as the last-hop router for packets destined for IP prefixes that are globally advertised via the Border Gateway Protocol (BGP). This has practical drawbacks: it requires the ability to make globally visible BGP advertisements, and it renders the honeyfarm's location visible to anyone using simple tools such as traceroute. The gateway can instead attract traffic via tunneling, configuring external Internet routers to tunnel packets destined for particular address ranges back to the gateway; tunneling can add latency and the possibility of packet loss, but it is invisible to traceroute and is used in other honeypot designs.

The gateway then forwards packets to backend honeyfarm servers, keeping the servers load balanced. Forwarding may be a simple random assignment, or it may weigh the probability that an infection can occur; for example, a packet headed for a NetBIOS service is unlikely to infect a Linux machine. Type mapping can preserve the illusion that a particular IP address hosts a specific software configuration, which matters if attackers run reconnaissance scans for vulnerabilities before exploiting them. Packets headed for already-active IP addresses are sent to the physical server hosting the corresponding VM. To support all this, the gateway must maintain state for each live VM in the system and track the load and liveness of individual servers in the honeyfarm. The gateway also decides whether traffic goes any further, as the next slide describes; a sketch of these dispatch policies follows.
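A minimal sketch of dispatch policies of this kind (hypothetical names throughout; SCAN_THRESHOLD is an assumed cutoff, since the slides give no concrete number):

    SERVERS = ["hf-server-1", "hf-server-2", "hf-server-3"]  # backend honeyfarm servers
    server_load = {s: 0 for s in SERVERS}

    TYPE_MAP = {}        # dst_ip -> platform, e.g. "windows" or "linux" (type mapping)

    SCAN_THRESHOLD = 5   # assumed cutoff for the programmable scan filter
    scan_counts = {}     # (src_ip, dst_port) -> packets seen

    def is_redundant_scan(src_ip, dst_port):
        # Programmable filter: suppress VM creation for wide same-service scans.
        key = (src_ip, dst_port)
        scan_counts[key] = scan_counts.get(key, 0) + 1
        return scan_counts[key] > SCAN_THRESHOLD

    def choose_server(dst_ip):
        # Least-loaded assignment; a smarter policy could weigh infection
        # probability (a NetBIOS packet is unlikely to infect a Linux image).
        platform = TYPE_MAP.get(dst_ip, "linux")  # keep the per-IP illusion stable
        server = min(SERVERS, key=lambda s: server_load[s])
        server_load[server] += 1
        return server, platform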

Gateway: Contain Outbound Traffic

- Only physical connection between honeyfarm servers and the Internet
- Customizable containment policies
- DNS traffic
- Traffic that does not pass the containment filter may be reflected back into the honeyfarm

Because the gateway is the only physical connection between the honeyfarm servers and the Internet, all traffic is subject to the same containment policies. The gateway supports a wide range of policies, allowing customizable tradeoffs between potential liability and greater fidelity. DNS traffic is an example of the benign third-party requests that must be addressed: the gateway either implements its own DNS server or proxies DNS requests to a dedicated server. Traffic that does not pass the containment filter may be reflected back into the honeyfarm, which then adopts the identity of the destination IP address; effectively, the entire Internet can be virtualized through reflection. Depending on the system's configuration, attacks and subsequent compromises can be isolated or allowed to interact. A sketch of such a filter follows.
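A minimal sketch of one possible containment filter (the packet fields are assumed; the slides do not specify a policy language). It reuses handle_inbound from the late-binding sketch above to model reflection:

    def contain_outbound(packet):
        # packet: dict with "dst_port" and "is_response_to_inbound" fields (assumed).
        if packet["dst_port"] == 53:
            return "proxy"    # benign DNS requests go to a dedicated DNS server
        if packet["is_response_to_inbound"]:
            return "allow"    # e.g., completing a TCP handshake with the attacker
        return "reflect"      # everything else re-enters the honeyfarm

    def handle_outbound(dst_ip, packet):
        action = contain_outbound(packet)
        if action == "reflect":
            # A VM adopting dst_ip's identity receives the packet, so the rest
            # of the Internet is effectively virtualized inside the honeyfarm.
            handle_inbound(dst_ip, packet)
        return action  # "allow"/"proxy" would be forwarded out of the gateway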

Gateway: Implement Resource Management, Interface With Other Components

Resource-management policies may include:
- Dedicating only a subset of servers to reflection, so reflection cannot consume all resources
- Limiting the number of reflections carrying identical (or similar) payloads: once a VM is compromised, the attack can be tracked and observed, so the same compromise is not needed over and over
- Determining when to reclaim VM resources: the gateway must prioritize which VMs are reclaimed, which are snapshotted and stored, and which continue to execute

The gateway also interfaces with the other components of the system: detection, analysis, and the user interface. One reflection-limiting policy is sketched below.
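A minimal sketch of the payload-limiting policy (the cap of 10 is an assumption; the slides give no number):

    import hashlib

    MAX_IDENTICAL_REFLECTIONS = 10  # assumed cap; one compromise is enough to study
    payload_counts = {}

    def should_reflect(payload: bytes) -> bool:
        # Count reflections by payload hash; stop once a payload has been seen
        # often enough that further copies add no new information.
        digest = hashlib.sha256(payload).hexdigest()
        payload_counts[digest] = payload_counts.get(digest, 0) + 1
        return payload_counts[digest] <= MAX_IDENTICAL_REFLECTIONS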

Virtual Machine Monitor

- Flash cloning: instantiates virtual machines quickly by copying and modifying a host reference image
- Delta virtualization: optimizes the flash-cloning operation using copy-on-write

Both techniques reduce overhead and increase the number of VMs supported on each honeyfarm server. Flash cloning avoids the startup overhead of system and application initialization; the reference VM image is a fixed memory cost, but it greatly reduces the memory required for each cloned VM. Delta virtualization optimizes the copy operation using copy-on-write, exploiting the memory coherence between different VMs: new storage is allocated only when a clone modifies the reference image. A toy illustration follows.
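A toy Python illustration of the copy-on-write idea (Potemkin does this at the memory-page level inside the VMM; the classes here are illustrative only):

    class ReferenceImage:
        # Immutable reference VM image shared by all clones.
        def __init__(self, pages):
            self.pages = pages  # page_number -> page contents

    class ClonedVM:
        # A flash-cloned VM: shares the reference pages, stores only its deltas.
        def __init__(self, ref):
            self.ref = ref
            self.delta = {}  # private storage allocated only on write

        def read_page(self, n):
            return self.delta.get(n, self.ref.pages[n])

        def write_page(self, n, data):
            self.delta[n] = data  # copy-on-write: first write creates the copy

    ref = ReferenceImage({0: b"kernel", 1: b"services"})
    vm_a, vm_b = ClonedVM(ref), ClonedVM(ref)
    vm_a.write_page(1, b"compromised")       # vm_a diverges from the reference
    assert vm_b.read_page(1) == b"services"  # vm_b still shares the original page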

Virtual Machine Monitor (figure slide; diagram not reproduced in the transcript)

Architecture

The Potemkin implementation comprised 10 servers: one gateway and 9 honeyfarm servers, each with a 2.8 GHz Xeon processor, 2 GB of memory, and 1 Gbps Ethernet NICs. Traffic is routed to the gateway, which maps active IP addresses to physical servers and decides which server instantiates a VM for a new IP. Outbound traffic also passes through the gateway and, depending on the policies in place, can be allowed out, proxied, or reflected back into the honeyfarm.

Evaluation – /16 network

- 156 destination addresses multiplexed per active VM instance
- Hundreds of honeypot VMs per physical server
- Hundreds of distinct VMs supported per server, even running simple services
- A live deployment dynamically created 2,100 VMs in a 10-minute period: honeyfarms with both scale and fidelity are possible

A honeyfarm needs only enough resources to serve the peak number of active IP addresses at any given time. Various parameters were used to determine the number of active VMs required to process the traffic workload of a /16 network (64 thousand IP addresses). With a 60-second inactivity timeout and scan filtering to reduce the number of short-lived VMs created, 156 destination addresses can be multiplexed per active VM even during the worst-case period. Three overheads determine the scalability of a server: (1) the memory overhead required by honeypot VMs, (2) the CPU overhead required to create honeypot VMs, and (3) the CPU utilization of honeypot VMs responding to traffic. With 2 GB of memory, hundreds of distinct VMs can be supported per server, even running simple services. While the authors admit the system is still in development, the live deployment showed promise, and the conclusion is that it is possible to create honeyfarms with both scale and fidelity. A back-of-the-envelope check of the multiplexing figure follows.
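Using only the numbers from this slide, a quick sanity check of the peak VM demand:

    addresses = 2 ** 16      # a /16 network: 65,536 destination addresses
    per_vm = 156             # addresses multiplexed per active VM (worst case)
    servers = 9              # honeyfarm servers in the prototype

    peak_vms = addresses / per_vm
    print(round(peak_vms))            # ~420 simultaneous VMs at peak
    print(round(peak_vms / servers))  # ~47 VMs per server, well within "hundreds"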

Contributions/Strengths

- Flash cloning and delta virtualization allow a highly scalable system with high fidelity
- Improvement in scale of up to six orders of magnitude; as implemented, the system can support 64K addresses
- Internal reflection can offer significant insight into the spreading dynamics of new worms
- Customizable containment policies allow testing of various scenarios

Flash cloning and delta virtualization are novel ideas integrated into the honeyfarm. At the time of this paper, the largest known honeypot deployment was Symantec's DeepSight system, which used 40 servers running VMware to emulate 2,000 IP addresses; Potemkin showed an improvement of up to six orders of magnitude. Internal reflection can show how worms propagate and, depending on the policies implemented, how they interact. Customizable containment policies give the system flexibility.

Weaknesses

- Reflection must be carefully managed to avoid resource starvation
- A VM cannot respond until cloning is complete; if cloning takes too long, traffic may be lost
- Scalability depends on the gateway
- The router function renders the honeyfarm visible to anyone using traceroute
- Attacker techniques exist for detecting a virtualized honeyfarm

Reflection must be carefully managed because a worm whose traffic is reflected could otherwise consume all of the honeyfarm's resources. A VM cannot respond until cloning is complete; if that takes too long, interesting traffic may be lost, or an attacker may conclude there is no host at that IP address. The gateway can become a traffic bottleneck, since it is the single path in and out. The router function makes the honeyfarm visible to anyone using traceroute, costing attacker traffic. Finally, attackers have techniques for detecting a virtualized honeyfarm (some bots can do so); detection could invite a DDoS that strangles the system, or attackers may simply avoid it altogether.

Further Research

- Defer creation of a new VM until a complete session is established
- Optimize all aspects of flash cloning
- Optimize the gateway
- Add support for disk devices
- Add support for Windows hosts

Deferring VM creation until a complete session is established would avoid the overhead of creating short-lived VMs. Cloning and teardown of a VM currently incur 300-500 ms of overhead; the authors believe this can be optimized to under 30 ms. The gateway in its current state supports a /16 network (64 thousand addresses), but since the system's scalability depends on the gateway, optimizing its functions would allow scaling to even larger networks. The current implementation supports only memory-based filesystems, so support for disk devices remains to be added, and it supports only Linux hosts, so Windows support remains to be developed.