Measuring Adversaries Vern Paxson International Computer Science Institute / Lawrence Berkeley National Laboratory June 15, 2004.



[Growth plots, data courtesy of Rick Adams: 80% growth/year, 60% growth/year, 596% growth/year]

The Point of the Talk
Measuring adversaries is fun:
– Increasingly of pressing interest
– Involves misbehavior and sneakiness
– Includes true Internet-scale phenomena
– Under-characterized
– The rules change

The Point of the Talk, con’t
Measuring adversaries is challenging:
– Spans a very wide range of layers, semantics, and scope
– New notions of “active” and “passive” measurement
– Extra-thorny dataset problems
– Very rapid evolution: an arms race

Adversaries & Evasion
Consider passive measurement: scanning traffic for a particular string (“USER root”)
Easiest: scan for the text in each packet
– No good: text might be split across multiple packets
Okay, remember text from the previous packet
– No good: out-of-order delivery
Okay, fully reassemble the byte stream
– Costs state …
– … and still evadable
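The escalation on this slide can be made concrete with a toy monitor (a minimal sketch; the function names are illustrative, not any real IDS API). Splitting the signature across two segments defeats per-packet matching but not stream reassembly:

```python
# Why per-packet string matching fails: the attacker splits the
# signature "USER root" across two TCP segments.

SIGNATURE = b"USER root"

def per_packet_scan(packets):
    """Naive monitor: looks for the signature inside each packet alone."""
    return any(SIGNATURE in p for p in packets)

def reassembled_scan(packets):
    """Monitor that reassembles the byte stream before matching."""
    return SIGNATURE in b"".join(packets)

evasive = [b"USER ro", b"ot\r\n"]   # signature split across packets
print(per_packet_scan(evasive))     # False: evasion succeeds
print(reassembled_scan(evasive))    # True: reassembly catches it
```

Reassembly costs per-connection state, and (as the next slides show) an adversary can still exploit ambiguous retransmissions to make the reassembled stream itself untrustworthy.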

Evading Detection Via Ambiguous TCP Retransmission

The Problem of Evasion
Fundamental problem when passively measuring traffic on a link: network traffic is inherently ambiguous
Generally not a significant issue for traffic characterization …
… but it is in the presence of an adversary: attackers can craft traffic to confuse/fool the monitor

The Problem of “Crud”
There are many such ambiguities attackers can leverage
A type of measurement vantage-point problem
Unfortunately, these occur in benign traffic, too:
– Legitimate tiny fragments, overlapping fragments
– Receivers that acknowledge data they did not receive
– Senders that retransmit different data than originally sent
In a diverse traffic stream, you will see these:
– What is the intent?

Countering Evasion-by-Ambiguity
Involve the end-host: have it tell you what it saw
Probe the end-host in advance to resolve vantage-point ambiguities (“active mapping”)
– E.g., how many hops to it?
– E.g., how does it resolve ambiguous retransmissions?
Change the rules: perturb
– Introduce a network element that “normalizes” the traffic passing through it to eliminate ambiguities
– E.g., regenerate low TTLs (dicey!)
– E.g., reassemble streams & remove inconsistent retransmissions
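One normalizer rule, sketched minimally (this is an illustration of the idea, not the actual normalizer implementation): resolve ambiguous retransmissions by remembering the first copy of every byte offset and rewriting any later, inconsistent copy to match it, so the monitor and the end-host are guaranteed to see the same stream:

```python
# Toy normalizer for inconsistent TCP retransmissions.
def normalize(segments):
    """segments: list of (seq_offset, payload bytes).
    Returns the segments with retransmitted bytes forced to their
    first-seen values, removing the ambiguity."""
    stream = {}                     # byte offset -> first-seen byte value
    out = []
    for seq, data in segments:
        fixed = bytearray(data)
        for i, b in enumerate(data):
            off = seq + i
            if off in stream:
                fixed[i] = stream[off]   # overwrite inconsistent retransmission
            else:
                stream[off] = b          # first time we see this offset
        out.append((seq, bytes(fixed)))
    return out

# Attacker sends "root" at offset 5, then retransmits "xxxx" for the
# same range, hoping monitor and end-host disagree about the content:
print(normalize([(5, b"root"), (5, b"xxxx")]))  # both copies now carry "root"
```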

Adversaries & Identity
Usual notions of identifying services by port numbers and users by IP addresses become untrustworthy
– E.g., backdoors installed by attackers on non-standard ports to facilitate return / control
– E.g., P2P traffic tunneled over HTTP
General measurement problem: inferring structure

Adversaries & Identity: Measuring Packet Origins
Muscular approach (Burch/Cheswick):
– Recursively pound upstream routers to see which ones perturb the flooding stream
Breadcrumb approaches:
– ICMP ISAWTHIS: relies on high volume
– Packet marking: lower volume + intensive post-processing; Yaar’s PI scheme yields a general tomography utility
⇒ Yields a general technique: the power of introducing a small amount of state inside the network
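The packet-marking idea can be sketched with a simplified node-sampling variant (an illustration only; real schemes such as Savage et al.'s use edge sampling and in-packet encodings, and all names here are made up): each router overwrites a mark field with probability p, so routers nearer the victim survive in the mark more often, and mark frequencies let the victim order the path:

```python
import random
from collections import Counter

def mark_packet(path, p=0.25, rng=random):
    """Each router along the path overwrites the mark with probability p;
    the victim sees whatever the last-firing router wrote."""
    mark = None
    for router in path:
        if rng.random() < p:
            mark = router
    return mark

def reconstruct(path, n=20000, seed=1):
    """Victim side: tally marks over many packets.  Routers closer to the
    victim are overwritten less often, so counts order the path."""
    rng = random.Random(seed)
    counts = Counter(m for m in (mark_packet(path, rng=rng) for _ in range(n)) if m)
    return [router for router, _ in counts.most_common()]

path = ["R1", "R2", "R3", "R4"]      # attacker -> victim
print(reconstruct(path))             # nearest-victim router ranks first
```

The point the slide makes survives the simplification: a few bits of state written inside the network, plus volume and post-processing at the edge, recovers structure the adversary is trying to hide.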

Adversaries & Identity: Measuring User Origins
Internet attacks invariably do not come from the attacker’s own personal machine, but from a stepping stone: a previously compromised intermediary
Furthermore, via a chain of stepping stones
Manually tracing the attacker back across the chain is virtually impossible
So: we want to detect that a connection going into a site is closely related to one going out of the site
Active techniques? Passive techniques?

Measuring User Origins, con’t
Approach #1 (SH94; passive): look for similar text
– For each connection, generate a 24-byte thumbprint summarizing per-minute character frequencies
Approach #2 (USAF94): particularly vigorous active measurement
– Break in to the upstream attack site
– Rummage through its logs
– Recurse

Measuring User Origins, con’t
Approach #3 (ZP00; passive): leverage the unique on/off pattern of user login sessions:
– Look for connections that end idle periods at the same time
– Two idle periods are correlated if their ending times differ by ≤ δ sec
– If enough periods coincide ⇒ stepping-stone pair
– For an A → B → C stepping stone, just 2 correlations suffice
– (For A → B → … → C → D, 4 suffice.)

Measuring User Origins, con’t
Works very well, even for encrypted traffic
But: easy to evade, if the attacker is cognizant of the algorithm
– C’est la arms race
And: it also turns out there are frequent legitimate stepping stones
Untried active approach: imprint traffic with a low-frequency timing signature unique to each site (“breadcrumb”); deconvolve recorded traffic to extract it

Global-scale Adversaries: Worms
Worm = self-replicating / self-propagating code
Spreads across a network by exploiting flaws in open services, or by fooling humans (viruses)
Not new: the Morris Worm, Nov. 1988
– 6-10% of all Internet hosts infected
Many more small ones since …
… but worms came into their own in July 2001

Code Red
Initial version released July 13, 2001
Exploited a known bug in Microsoft IIS web servers
1st through 20th of each month: spread
20th through end of each month: attack
Spread via random scanning of the 32-bit IP address space
But: failure to seed the random number generator ⇒ linear growth
⇒ reverse engineering enables forensics
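A toy simulation makes the seeding bug's effect vivid (a sketch only, with a shrunken 16-bit address space; not Code Red's actual scanner): with a fixed seed, every infectee probes the identical address list, so adding infectees opens no new territory and growth stays linear rather than exponential.

```python
import random

def scan_targets(seed, n=200, space=1 << 16):
    """Addresses one worm instance probes, in a toy 16-bit address space."""
    rng = random.Random(seed)
    return {rng.randrange(space) for _ in range(n)}

# Code Red v1: every instance effectively used the same seed ...
fixed = set().union(*(scan_targets(42) for _ in range(10)))
# ... whereas properly seeded instances each cover fresh territory:
seeded = set().union(*(scan_targets(i) for i in range(10)))
print(len(fixed), len(seeded))   # ten "same-seed" copies cover no more
                                 # than one; ten seeded copies cover ~10x
```

The same determinism is what made forensics easy: reverse-engineering the scanner lets an analyst replay exactly which addresses every instance probed, in order.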

Code Red, con’t
Revision released July 19, 2001
Payload: flooding attack on www.whitehouse.gov
A bug led to it dying for dates ≥ the 20th of the month
But: this time the random number generator was correctly seeded
Bingo!

Worm dies on July 20th, 2001, GMT

Measuring Internet-Scale Activity: Network Telescopes
Idea: monitor a cross-section of Internet address space to measure network traffic involving a wide range of addresses
– “Backscatter” from DoS floods
– Attackers probing blindly
– Random scanning from worms
LBNL’s cross-section: 1/32,768 of the Internet
– Small enough for appreciable telescope lag
UCSD’s and UWisc’s cross-sections: 1/256
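The fractions on this slide follow directly from prefix arithmetic: a /p prefix covers 2^(32-p) IPv4 addresses, i.e. a 1/2^p slice of the whole 32-bit space, so 1/32,768 corresponds to a /15-sized block and 1/256 to a /8 (a sketch of the arithmetic only, not a claim about the sites' exact prefixes):

```python
# Fraction of IPv4 space covered by a telescope with a given prefix length.
def telescope_fraction(prefix_len):
    """A /p prefix holds 2**(32-p) addresses out of 2**32, i.e. 1/2**p."""
    return 1.0 / (1 << prefix_len)

print(round(1 / telescope_fraction(15)))  # 32768: the 1/32,768 cross-section
print(round(1 / telescope_fraction(8)))   # 256: a /8 cross-section
```

The smaller the fraction, the longer the "telescope lag": a random scanner takes proportionally longer to stumble into the monitored block, which matters when measuring fast-moving worms.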

Spread of Code Red
Network telescopes give a lower bound on # infected hosts: 360K
Course of infection fits the classic logistic curve
That night (⇒ the 20th), the worm dies …
… except for hosts with inaccurate clocks!
It just takes one of these to restart the worm on August 1st …
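The "classic logistic" fit mentioned above is the closed-form solution of di/dt = β·i·(1 − i), where i is the infected fraction: slow start, explosive middle, saturation as targets run out. A minimal sketch (the rate and inflection-time parameters below are illustrative, not fitted Code Red values):

```python
import math

def logistic_fraction(t, beta, t0):
    """Closed-form logistic: fraction of the vulnerable population
    infected at time t, with growth rate beta and inflection time t0."""
    return 1.0 / (1.0 + math.exp(-beta * (t - t0)))

N = 360_000                 # telescope lower bound on infected hosts
beta, t0 = 1.5, 12.0        # illustrative growth rate / inflection (hours)
for t in range(0, 25, 6):
    print(f"t={t:2}h  infected ~ {N * logistic_fraction(t, beta, t0):>9,.0f}")
```

Fitting this curve to telescope arrival counts is how the 360K lower bound and the growth rate are extracted from what is, at the telescope, just a stream of unsolicited SYNs.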

Could parasitically analyze a sample of 100Ks of clocks!

The Worms Keep Coming
Code Red 2:
– August 4, 2001
– Localized scanning: prefers nearby addresses
– Payload: root backdoor
– Programmed to die Oct 1, 2001
Nimda:
– September 18, 2001
– Multi-mode spreading, including via Code Red 2 backdoors!

[Telescope timeline annotations: Code Red 2 kills off Code Red 1; Code Red 2 settles into a weekly pattern; Nimda enters the ecosystem; Code Red 2 dies off as programmed; Code Red 1 returns thanks to bad clocks]

[Telescope timeline annotations, con’t: Code Red 2 dies off as programmed; Nimda hums along, slowly cleaned up; with its predator gone, Code Red 1 comes back, still exhibiting its monthly pattern]

[Telescope timeline annotations, con’t: 80% of Code Red 2 cleaned up due to the onset of Blaster; Code Red 2 re-released with an Oct die-off; Code Red 1 and Nimda endemic; Code Red 2 re-re-released Jan 2004; Code Red 2 dies off again]

Detecting Internet-Scale Activity
Telescopes can measure activity, but what does it mean??
Need to respond to traffic to ferret out intent
Honeyfarm: a set of “honeypots” fed by a network telescope
Active measurement with an uncooperating (but stupid) remote endpoint

Internet-Scale Adversary Measurement via Honeyfarms
Spectrum of response, ranging from simple/cheap auto-SYN-acking to faking higher levels to truly executing higher levels
Problem #1: bait
– Easy for random-scanning worms and “auto-rooters”
– But for “topological” or “contagion” worms, need to seed the honeyfarm into the application network ⇒ huge challenge
Problem #2: background radiation
– Contemporary Internet traffic is rife with endemic malice. How to ignore it??

Measuring Internet Background Radiation
For a good-sized telescope, must filter:
– E.g., the UWisc /8 telescope sees 30K pps of traffic heading to non-existent addresses
Would like to filter by intent, but initially we don’t know enough
Per-source schemes:
– Take the first N connections
– Take the first N connections to K different ports
– Take the first N different payloads
– Take all traffic the source sends to its first N destinations
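The first of the per-source schemes above reduces to a small streaming filter (a minimal sketch; the event shape and names are illustrative): keep each source's first N connection attempts and drop the rest, on the theory that a scanner's later probes add little beyond its first few.

```python
from collections import defaultdict

def first_n_connections(events, n=2):
    """events: iterable of (src, dst, dport) connection attempts.
    Keep only each source's first n attempts; drop the rest."""
    per_src = defaultdict(int)
    kept = []
    for src, dst, dport in events:
        if per_src[src] < n:
            per_src[src] += 1
            kept.append((src, dst, dport))
    return kept

events = [("a", "t1", 80), ("a", "t2", 80), ("a", "t3", 80), ("b", "t1", 445)]
print(first_n_connections(events))   # a's third probe is dropped
```

The other schemes on the slide are variations on the same per-source bookkeeping, keyed by port, payload, or destination instead of raw connection count.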

Responding to Background Radiation

Hourly Background Radiation Seen at a 2,560-address Telescope

Measuring Internet-scale Adversaries: Summary
New tools & forms of measurement:
– Telescopes, honeypots, filtering
New needs to automate measurement:
– Worm defense must be faster-than-human
The lay of the land has changed:
– Endemic worms, malicious scanning
– The majority of Internet connections (attempts) are hostile (80+% at LBNL)
Increasing requirement for application-level analysis

The Huge Dataset Headache
Adversary measurement particularly requires packet contents
– Much analysis is at the application layer
Huge privacy/legal/policy/commercial hurdles
Major challenge: anonymization/agent technologies
– E.g., [PP03] “semantic trace transformation”
– Use an intrusion detection system’s application analyzers to anonymize a trace at the semantic level (e.g., filenames vs. users vs. commands)
– Note: general measurement increasingly benefits from such application analyzers, too

Attacks on Passive Monitoring
State-flooding:
– E.g., if tracking connections, each new SYN requires state; each undelivered TCP segment requires state
Analysis flooding:
– E.g., stick, snot, trichinosis
But surely, just peering at the adversary, we’re ourselves safe from direct attack?

Attacks on Passive Monitoring, con’t
Exploits for bugs in the passive analyzers themselves!
Suppose a protocol analyzer has an error parsing an unusual type of packet
– E.g., tcpdump and malformed options
The adversary crafts such a packet, overruns a buffer, causes the analyzer to execute arbitrary code
E.g., Witty: BlackIce & packets sprayed to random UDP ports
– 12,000 infectees in < 60 minutes!

Summary
The lay of the land has changed:
– Ecosystem of endemic hostility
– “Traffic characterization” of adversaries is as ripe as characterizing regular Internet traffic was 10 years ago
– People care
Very challenging:
– Arms race
– Heavy on application analysis
– Major dataset difficulties

Summary, con’t
Revisit “passive” measurement:
– Evasion
– Telescopes / Internet scope
– No longer an isolated observer, but vulnerable
Revisit “active” measurement:
– Perturbing traffic to unmask hiding & evasion
– Engaging the attacker to discover intent
IMHO, this is “where the action is” …
… and the fun!