
Slide 1: Designing Performance Experiments: An Example
CS 239: Experimental Methodologies for System Software
Peter Reiher
May 31, 2007

Slide 2: Outline
- The example system
- What should we test?
- How should we test it?
- Critique of existing experimental evidence

Slide 3: DefCOM
- A defensive system to counter distributed denial of service (DDoS) attacks
- Especially attacks based on high volumes of garbage traffic
  - Originating from many sources

Slide 4: The DDoS Problem

Slide 5: Why Distributed Attacks?
- Targets are often highly provisioned servers
- A single machine usually cannot overwhelm such a server
- So harness multiple machines to do so
- Also makes defenses harder

Slide 6: How to Defend?
- A vital characteristic:
  - Don’t just stop a flood
  - ENSURE SERVICE TO LEGITIMATE CLIENTS!!!
- If you only deliver a manageable amount of garbage, you haven’t solved the problem

Slide 7: Complicating Factors
- High availability of compromised machines
  - At least tens of thousands of zombie machines out there
- Internet is designed to deliver traffic
  - Regardless of its value
- IP spoofing allows easy hiding
- Distributed nature makes legal approaches hard
- Attacker can choose all aspects of his attack packets
  - Can be a lot like good ones

Slide 8: DefCOM Defense Approach
- Addresses the core problem:
  - Too much traffic coming in, so get rid of some of it
  - A common idea in DDoS defense
- Vital to separate the sheep from the goats
- Unless you have good discrimination techniques, not much help

Slide 9: Where Do You Filter?
- Near the target?
- Near the source?
- In the network core?
- In multiple places?

Slide 10: Filtering Near the Target
+ Easier to detect attack
+ Sees everything
+ Obvious deployment incentive
- May be hard to prevent collateral damage
- May be hard to handle attack volume

Slide 11: Filtering Near the Sources
+ Easier to prevent collateral damage
+ Easier to handle attack volume
- May be hard to detect attack
- Only works where deployed
- Deployment incentives?

Slide 12: Filtering in the Internet
+ Spreads attack volume over many machines
+ Sees everything (with sufficient deployment, which can be quite reasonable)
- May be hard to prevent collateral damage
- May be hard to detect attack
- Low per-packet processing budget
- Deployment incentive?

Slide 13: What If All Parties Cooperated?
- Could we leverage strengths of all locations?
- While minimizing their weaknesses?
- That’s the DefCOM approach
- A prototype system built at U Delaware and UCLA

Slide 14: DefCOM
[Architecture diagram showing alert generator, classifier, and core nodes]
- DefCOM instructs core nodes to apply rate limits
- Core nodes use information from classifiers to prioritize traffic
- Classifiers can assure priority for good traffic

Slide 15: Rate Limiting Approach
- DefCOM builds an overlay of participating nodes
  - That see traffic to the victim
  - Good and bad traffic
- Rate limits at each DefCOM node so the bottleneck link isn’t overwhelmed
- Each DefCOM node monitors the amount of traffic provided by upstream nodes
  - If an upstream node doesn’t respect its rate limit, that is evidence that the node is misbehaving
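
The slides describe the rate-limiting behavior but not the exact algorithm a node uses to enforce its limit or to watch its neighbors. Below is a minimal, hypothetical sketch: a token-bucket limiter a DefCOM node could apply to traffic toward the victim, plus a per-upstream byte counter for spotting neighbors that ignore their assigned limits. The class names and parameters are assumptions for illustration, not DefCOM's actual implementation.

```python
import time
from collections import defaultdict

class TokenBucketLimiter:
    """Token-bucket rate limiter: forwards a packet only if enough tokens remain."""
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps            # sustained limit, bytes/second
        self.capacity = burst_bytes     # maximum burst, bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, pkt_len):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_len:
            self.tokens -= pkt_len
            return True                 # forward toward the victim
        return False                    # over the limit: drop

class UpstreamMonitor:
    """Counts bytes per upstream node so a limit-violating neighbor can be flagged."""
    def __init__(self, assigned_limits_bps):
        self.limits = assigned_limits_bps
        self.bytes_seen = defaultdict(int)

    def record(self, upstream_id, pkt_len):
        self.bytes_seen[upstream_id] += pkt_len

    def violators(self, window_s):
        return [u for u, b in self.bytes_seen.items()
                if b / window_s > self.limits.get(u, float("inf"))]
```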

Slide 16: DefCOM Packet Stamping
- Classifiers put stamps in packets they consider good
- Preferential treatment given to packets with good stamps
- Packet’s stamp changes between each pair of DefCOM participant nodes
- Non-stamped traffic gets a “low” stamp after it’s been subjected to rate limits
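
The slides do not specify how stamps are generated or why they change hop by hop, so the following is only a speculative sketch of one way such per-hop stamps could work: each pair of neighboring DefCOM nodes shares a secret key, and the stamp is a priority class plus a short MAC over the packet. The per-link key scheme, the stamp layout, and the function names are all assumptions, not DefCOM's actual mechanism.

```python
import hashlib
import hmac

HIGH, LOW = b"H", b"L"   # hypothetical priority classes

def add_stamp(payload, link_key, cls):
    """Attach a priority class and a MAC keyed on the shared per-link secret."""
    mac = hmac.new(link_key, cls + payload, hashlib.sha256).digest()[:8]
    return cls + mac + payload

def verify_and_restamp(stamped, in_key, out_key):
    """At the next participant: verify the incoming stamp, demote anything
    unverifiable to the low class, then re-stamp for the next link."""
    cls, mac, payload = stamped[:1], stamped[1:9], stamped[9:]
    expected = hmac.new(in_key, cls + payload, hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(mac, expected):
        cls = LOW
    return add_stamp(payload, out_key, cls)
```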

Slide 17: What Should We Test for DefCOM?
- What are the core claims about DefCOM?
- Which of those are least plausible or most risky?
- How do we prioritize among many things we could test?

Slide 18: Performance Questions for DefCOM
- How well does DefCOM defend against attacks?
- Does DefCOM damage performance of normal traffic?
- Can all DefCOM components run fast enough for realistic cases?
- How much does the partial deployment pattern matter?
  - Among classifiers?
  - Among filtering nodes?

Slide 19: More Performance Questions
- How much do important components of DefCOM cost to run?
  - E.g., is stamping packets and checking stamps cheap or expensive?
- What impacts would we see if some DefCOM nodes were compromised?
- Others?

Slide 20: Which of These Must We Test?
- Can’t get away without showing it defends against DDoS attacks
- But what else should we prioritize?

Slide 21: How Do We Test?
- Let’s concentrate first on the core issue of whether it defends
- How do we propose to test that?

Slide 22: Basic Approach
- What is our basic testing approach?
- Set up a four-machine testbed like so:
[Diagram: four machines: a traffic source, a classifier, a rate limiter, and the target]

Slide 23: Or One Like This?
[Diagram: a DefCOM deployment with alert generator, classifier, and core nodes]

Slide 24: Or One Like This?

Slide 25: If It’s Not the Simple One...
- What is the topology?
- How many edge nodes? Organized into how many subnets?
- How many core nodes? Connected how?
- And how do we arrange the routing?

Slide 26: Is the Base Case Full Deployment?
- And what does that mean in terms of where we put classifiers and filtering nodes?
- If it’s not full deployment, what is the partial deployment pattern?
  - A single pattern?
  - Or treat that as a factor in the experiment?

Slide 27: Metrics
- What metric or metrics should we use to decide if DefCOM successfully defends against DDoS attacks?
- Utilization of the bottleneck link?
- Percentage of dropped attack packets?
- Percentage of legitimate packets delivered?
- Something else?
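
To make the candidate metrics concrete, here is a minimal sketch of how each could be computed from a per-packet trace collected at the victim's side. It assumes a hypothetical trace format of (kind, packet length in bytes, delivered flag) records; the field names are illustrative, not taken from the DefCOM evaluation.

```python
def ddos_defense_metrics(trace, bottleneck_bps, duration_s):
    """Compute candidate success metrics from a packet trace.

    trace: list of (kind, length_bytes, delivered) records,
           where kind is "legit" or "attack" (assumed format).
    """
    legit = [r for r in trace if r[0] == "legit"]
    attack = [r for r in trace if r[0] == "attack"]
    delivered_bits = 8 * sum(length for _, length, ok in trace if ok)
    return {
        "bottleneck_utilization": delivered_bits / (bottleneck_bps * duration_s),
        "attack_drop_pct": 100.0 * sum(1 for r in attack if not r[2]) / max(len(attack), 1),
        "legit_delivery_pct": 100.0 * sum(1 for r in legit if r[2]) / max(len(legit), 1),
    }
```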

Slide 28: Workload
- Probably two components:
  - Legitimate traffic
  - Attack traffic
- Where do we get them from?
- If we’re not using the simple topology, where do we apply them?

Slide 29: The Attack Workload
- Basically, something generating a lot of packets
- But is there more to it?
- Do we care about kind of packets? Pattern of their creation? Contents?
  - Header?
  - Payload?
- Do attack dynamics change during attack?
- Which nodes generate attack packets?
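
For a controlled testbed, the attack workload usually comes from a small traffic generator whose packet size, rate, and contents are experiment parameters. Below is a minimal sketch of such a generator, assuming plain UDP flooding from each attacking node; the function name and defaults are illustrative, and source-address spoofing (which would need raw sockets) is deliberately left out.

```python
import os
import socket
import time

def udp_flood(victim_ip, victim_port, pkt_len=512, pps=1000, duration_s=10):
    """Send fixed-rate UDP garbage traffic toward the victim (testbed use only).

    pkt_len, pps, and duration_s are the knobs an experimenter would vary;
    payloads are random bytes, so packet contents carry no application meaning.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = 1.0 / pps
    deadline = time.time() + duration_s
    while time.time() < deadline:
        sock.sendto(os.urandom(pkt_len), (victim_ip, victim_port))
        time.sleep(interval)
```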

Slide 30: The Legitimate Workload
- What is it?
- How realistic must it be?
- How do we get it?
- Where is it applied?
- Is it responsive to what happens at the target?
- Cross-traffic?
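
The "responsive" question matters because a closed-loop client slows down when the target is overloaded, while an open-loop replay keeps sending regardless. Here is a minimal sketch of a closed-loop legitimate client; the request string and counters are placeholders, not part of the DefCOM evaluation.

```python
import socket

def legit_client(server_ip, server_port, n_requests=100, timeout_s=2.0):
    """Closed-loop client: issues the next request only after the previous one
    finishes, so its offered load responds to conditions at the target."""
    completed, failed = 0, 0
    for _ in range(n_requests):
        try:
            with socket.create_connection((server_ip, server_port), timeout=timeout_s) as s:
                s.sendall(b"GET / HTTP/1.0\r\n\r\n")   # placeholder request
                while s.recv(4096):                    # read until the server closes
                    pass
            completed += 1
        except OSError:
            failed += 1
    return completed, failed
```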

Slide 31: Parameters and Factors
- Do we just define one set of conditions and test DefCOM there?
- If not, what gets varied?
  - Deployment pattern?
  - Attack size in packets?
  - Number of attacking nodes?
  - Legitimate traffic patterns?
  - Size of target’s bottleneck link?
  - Accuracy of classification?
  - Something else?
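
Once some of these are promoted from fixed parameters to factors, a full-factorial design is the straightforward starting point. The sketch below enumerates one such design; the factor names and levels are invented for illustration only.

```python
import itertools

# Hypothetical factors and levels, chosen only to illustrate the design;
# the real choices would come from the questions listed above.
factors = {
    "deployment_pct": [25, 50, 100],        # fraction of eligible nodes running DefCOM
    "attack_pps": [10_000, 100_000],        # aggregate attack rate
    "num_attackers": [10, 100],             # attacking nodes
    "classifier_error": [0.0, 0.05],        # misclassification rate
}

runs = [dict(zip(factors, combo)) for combo in itertools.product(*factors.values())]
print(len(runs), "runs in the full-factorial design")   # 3 * 2 * 2 * 2 = 24
```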

Slide 32: Actual DefCOM Performance Evaluation
- Concentrated on issues of how well the system defended
- Investigated a moderately large network
- Considered partial deployment
- Investigated performance under compromise of some defense nodes

Slide 33: Other Data Gathered
- Time to detect attack
  - Only a range reported (“1.58-2.45 seconds”)
- Robustness to packet drops
  - “robust up to 20% drops”
- Packet processing time
  - “0.5 microsec without attack, 1.3 microsec during attack”
  - Plus 50 microseconds to run the code that calculates the rate limit (weighted fair share)
  - Argued that special hardware would speed this up
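
The 50-microsecond figure is for computing the weighted fair share that sets each neighbor's rate limit. The slides do not show that computation, so the following is a minimal sketch of one standard weighted max-min fair allocation; the function name and interface are assumptions, not DefCOM's code.

```python
def weighted_fair_share(capacity_bps, demands_bps, weights):
    """Weighted max-min fair split of capacity among upstream neighbors.

    Neighbors demanding less than their weighted share keep their demand;
    leftover capacity is redistributed among the rest.
    """
    alloc = {n: 0.0 for n in demands_bps}
    active = set(demands_bps)
    remaining = capacity_bps
    while active and remaining > 1e-9:
        total_w = sum(weights[n] for n in active)
        share = {n: remaining * weights[n] / total_w for n in active}
        satisfied = {n for n in active if demands_bps[n] - alloc[n] <= share[n]}
        if not satisfied:                      # everyone wants more than their share
            for n in active:
                alloc[n] += share[n]
            break
        for n in satisfied:                    # modest demanders get what they asked for
            remaining -= demands_bps[n] - alloc[n]
            alloc[n] = demands_bps[n]
        active -= satisfied
    return alloc
```

For example, weighted_fair_share(10e6, {"a": 2e6, "b": 20e6, "c": 20e6}, {"a": 1, "b": 1, "c": 1}) gives node a its full 2 Mb/s and splits the remaining 8 Mb/s evenly between b and c.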

Slide 34: Topology Used
- Fairly simple
- But with reasonable number of nodes
- One victim node
- Purely hierarchical set of routers
  - 3 levels
- Bunch of source nodes arranged into subnets
- 70 nodes overall
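
The slides give the shape (one victim, three levels of routers, source hosts in subnets, 70 nodes total) but not the exact fan-out. The sketch below builds one hierarchy consistent with those numbers: 1 + 4 + 16 routers with 3 hosts per leaf router yields 48 sources and 70 nodes overall. The fan-out values are an assumption, not the published topology.

```python
import itertools

def build_hierarchy(fanouts=(4, 4), hosts_per_leaf=3):
    """Build a purely hierarchical topology as an adjacency dict.

    A single root router sits in front of the victim; each level below it
    has the fan-out given in `fanouts`, and each leaf router serves
    `hosts_per_leaf` source hosts (S1, S2, ...).
    """
    edges = {"victim": ["r0"]}
    router_ids = itertools.count(1)
    level = ["r0"]
    for fan in fanouts:
        next_level = []
        for parent in level:
            children = [f"r{next(router_ids)}" for _ in range(fan)]
            edges.setdefault(parent, []).extend(children)
            next_level.extend(children)
        level = next_level
    host_ids = itertools.count(1)
    for leaf in level:
        hosts = [f"S{next(host_ids)}" for _ in range(hosts_per_leaf)]
        edges.setdefault(leaf, []).extend(hosts)
    return edges

topo = build_hierarchy()
nodes = set(topo) | {child for kids in topo.values() for child in kids}
print(len(nodes), "nodes")   # 70: 1 victim + 21 routers + 48 sources
```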

Slide 35: The Topology
[Diagram: source hosts S1 through S48, the victim behind a bottleneck link, with labels marking where classifiers are deployed, where rate limiters are deployed, and where source traffic (good and bad) is injected]

Slide 36: Metrics Used
- Throughput of both legit and attack traffic
  - Measured over time, not as totals or averages
- Goodput
  - Legit application-level bits delivered
  - Doesn’t count headers or retransmitted packets
  - Also plotted over time
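
Because both metrics were plotted over time rather than summarized, the measurement script has to bin delivered bytes per traffic class into intervals. Below is a minimal sketch, assuming a hypothetical per-packet record of (timestamp, class, bytes) for packets that reached the victim.

```python
from collections import defaultdict

def throughput_over_time(packets, bin_s=1.0):
    """Bin delivered bits per traffic class into time intervals.

    packets: iterable of (timestamp_s, kind, length_bytes) for packets that
    actually arrived at the victim (assumed trace format).
    Returns {bin_index: {kind: bits_per_second}} for plotting over time.
    """
    bins = defaultdict(lambda: defaultdict(int))
    for ts, kind, length in packets:
        bins[int(ts // bin_s)][kind] += 8 * length
    return {t: {kind: bits / bin_s for kind, bits in kinds.items()}
            for t, kinds in sorted(bins.items())}
```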

Slide 37: A Sample Graph
- Shows throughput over time of three classes of traffic

Slide 38: Another Type of DefCOM Graph

