1
Quantifying Network Denial of Service: A Location Service Case Study
Yan Chen, Adam Bargteil, David Bindel, Randy H. Katz and John Kubiatowicz
Computer Science Division, University of California at Berkeley, USA
2
Outline
– Motivation
– Network DoS Attacks Benchmarking
– Object Location Services
– Simulation Setup and Results
– Conclusions
3
Motivations
– Network DoS attacks increasing in frequency, severity, and sophistication
  – 32% of respondents detected DoS attacks (1999 CSI/FBI survey)
  – Yahoo, Amazon, eBay, and Microsoft hit by DDoS attacks
  – About 4,000 attacks per week in 2000
– Security metrics urgently needed
  – Mission-critical applications built on products claiming various, suspect DoS-resilience properties
  – No good benchmark for measuring security assurance
– Desired: a general methodology for quantifying an arbitrary system/service's resilience to network DoS attacks
4
Outline
– Motivation
– Network DoS Attacks Benchmarking
– Object Location Services
– Simulation Setup and Results
– Conclusions
5
Network DoS Benchmarking
– QoS metrics
  – DoS attacks degrade resource availability: a spectrum metric, not a binary one
  – General vs. application-specific metrics
    – General: end-to-end latency, throughput, and time to recover
– Multi-dimensional resilience quantification
  – Dimensions ranked by importance, frequency, severity & sophistication
  – Application/system specific, hard to generalize
  – Solution: be specific in the threat-model definition and quantify resilience only within that model
6
Network DoS Benchmarking (Cont’d)
– Simulation vs. experiment
  – Standard & realistic simulation environment specification
    – Network configuration
    – Workload generation
– Threat model: taxonomy from CERT
  – Covered: consumption of network connectivity and/or bandwidth
  – Covered: consumption of other resources, e.g. queues, CPU
  – Covered: destruction or alteration of configuration information
  – Not covered: physical destruction or alteration of network components
7
Two General Classes of Attacks
– Flooding attacks
  – Point-to-point attacks: TCP/UDP/ICMP flooding, Smurf attacks
  – Distributed attacks: hierarchical structures
– Corruption attacks
  – Application specific
  – Impossible to test all; choose typical examples for benchmarking
8
Outline
– Motivation
– Network DoS Attacks Benchmarking
– Object Location Services
– Simulation Setup and Results
– Conclusions
9
Object Location Services (OLS)
– Centralized directory services (CDS): vulnerable to DoS attack
  – Examples: SLP, LDAP
– Replicated directory services (RDS): suffer consistency overhead, and still present a limited number of targets
– Distributed directory services (DDS)
  – Combined routing & location on an overlay network
  – Guaranteed success and locality
  – Failure and attack isolation
  – Examples: Tapestry, CAN, Chord, Pastry
10
Tapestry Routing and Location
– Namespace (nodes and objects)
  – Each object has its own hierarchy rooted at its root: f(ObjectID) = RootID, via a dynamic mapping function
– Suffix routing from A to B
  – At the h-th hop, arrive at the nearest node hop(h) such that hop(h) shares a suffix of length h digits with B
  – Example: 5324 routes to 0629 via 5324 → 2349 → 1429 → 7629 → 0629
– Object location
  – The root is responsible for storing the object’s location
  – Publish and search both route incrementally toward the root
http://www.cs.berkeley.edu/~ravenben/tapestry
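The suffix-routing rule on this slide can be sketched in a few lines of Python. This is an illustrative toy, not the authors' implementation: since it has no real network distances, it stands in for "nearest node" by picking the weakest qualifying suffix match, which happens to reproduce the slide's example path.

```python
# Toy sketch of Tapestry-style suffix routing (not the authors' code):
# at hop h we move to a node sharing one more trailing digit with the
# destination ID, until the destination itself is reached.

def shared_suffix_len(a: str, b: str) -> int:
    """Number of trailing digits two IDs have in common."""
    n = 0
    for x, y in zip(reversed(a), reversed(b)):
        if x != y:
            break
        n += 1
    return n

def route(source: str, dest: str, nodes: set) -> list:
    """Greedy suffix routing: each hop fixes one more digit of dest."""
    path = [source]
    current = source
    while current != dest:
        h = shared_suffix_len(current, dest) + 1
        # Candidates sharing a suffix of at least h digits with dest.
        candidates = [n for n in nodes if shared_suffix_len(n, dest) >= h]
        if not candidates:
            raise RuntimeError("routing hole: no node matches the suffix")
        # A real mesh picks the candidate closest in the network; with no
        # distances here, take the weakest match as a stand-in.
        current = min(candidates, key=lambda n: (shared_suffix_len(n, dest), n))
        path.append(current)
    return path

nodes = {"5324", "2349", "1429", "7629", "0629"}
print(route("5324", "0629", nodes))
# → ['5324', '2349', '1429', '7629', '0629'], the slide's example path
```

Each hop strictly increases the matched suffix length, so the loop always terminates at the destination (or fails loudly on a routing hole).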
11
Tapestry Mesh: Incremental Suffix-Based Routing
[Figure: Tapestry mesh over nodes 0x43FE, 0x13FE, 0xABFE, 0x1290, 0x239E, 0x73FE, 0x423E, 0x79FE, 0x23FE, 0x73FF, 0x555E, 0x035E, 0x44FE, 0x9990, 0xF990, 0x993E, 0x04FE; edges labeled with routing levels 1–4]
12
Routing in Detail
– Example: octal digits, 2^12 namespace, routing from 5712 to 7510
– [Figure: route 5712 → 0880 → 3210 → 4510 → 7510, resolving one more suffix digit per hop]
– Neighbor map for node 5712 (octal), one routing level per digit ([5712] marks the slot the node itself occupies):
  – Level 1: xxx0, xxx1, [5712], xxx3, xxx4, xxx5, xxx6, xxx7
  – Level 2: xx02, [5712], xx22, xx32, xx42, xx52, xx62, xx72
  – Level 3: x012, x112, x212, x312, x412, x512, x612, [5712]
  – Level 4: 0712, 1712, 2712, 3712, 4712, [5712], 6712, 7712
13
Object Location Randomization and Locality
14
Outline
– Motivation
– Network DoS Attacks Benchmarking
– Object Location Services
– Simulation Setup and Results
– Conclusions
15
Simulation Setup
– Distributed information retrieval system built on top of ns
– 1000-node transit-stub topology from GT-ITM
  – Extended with common network bandwidths (Fast Ethernet, T3, T1)
– Synthetic workload
  – Zipf’s law and hot/cold patterns
  – 500 objects, 3 replicas each, placed on 3 random nodes
  – Sizes randomly chosen as typical web content: 5 KB – 50 KB
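A Zipf-distributed request stream like the one described above can be generated with a short sketch. The exponent, request count, and seed here are assumptions for illustration; the slides do not give the exact parameters.

```python
# Illustrative sketch (parameters are assumptions, not the paper's exact
# setup): generate object requests following Zipf's law, so a few "hot"
# objects receive most of the traffic.
import random

def zipf_weights(n: int, s: float = 1.0) -> list:
    """Unnormalized Zipf popularity: weight of rank k is 1 / k**s."""
    return [1.0 / (k ** s) for k in range(1, n + 1)]

def generate_requests(num_objects: int, num_requests: int,
                      s: float = 1.0, seed: int = 0) -> list:
    """Sample object IDs (0 = most popular) under a Zipf distribution."""
    rng = random.Random(seed)
    weights = zipf_weights(num_objects, s)
    return rng.choices(range(num_objects), weights=weights, k=num_requests)

requests = generate_requests(num_objects=500, num_requests=10_000)
# The hottest object dominates the stream; the coldest is barely requested.
print(requests.count(0) > requests.count(499))  # → True
```

Feeding such a stream to each directory-service variant under identical topology and attack traffic is what makes the resulting latency/throughput curves comparable.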
16
Directory Servers Simulation
– CDS with random replica selection (CDSr)
– CDS with closest replica selection (CDSo)
– RDS: 4 widely distributed random nodes, always returning a random replica
  – With random directory server (RDSr)
  – With closest directory server (RDSo)
– DDS: simplified version of Tapestry (DDS)
  – Tapestry mesh statically built with full topology knowledge
  – Hop count used as the distance metric
17
Attacks Simulation
– Flooding attacks
  – 200-second simulations
  – Number of attack agents varied from 1 to 16
  – Each injects a constant bit stream at the target: 25 KB/s – 500 KB/s
  – Targets:
    – CDS, RDS: the directory server(s)
    – DDS: the root(s) of hot object(s)
– Corruption attacks
  – Corrupted application-level routing tables on target nodes: CDS, RDS, DDS
  – Forged replica advertisements through node spoofing: DDS
18
Results of Flooding Attacks
– CDS vs. Tapestry
  – Attack strength: 1 × 100 KB/s, and 4 × 25 KB/s up to 4 × 100 KB/s
  – Tapestry shows resistance to DoS attacks
– Does only the amount of flood traffic matter? No!
  – Bottleneck bandwidth restricts the attackers’ power
  – Clients and attackers share paths
  – Hard to identify and eliminate multiple attackers simultaneously
– [Figures: average response latency; request throughput]
19
Dynamics of Flooding Attacks
– CDS vs. Tapestry (most severe case: 4 × 100 KB/s)
  – Attacks start at the 40th second and end at the 110th second
– Time to recover
  – CDS (both policies): 40 seconds
  – Tapestry: negligible
– [Figures: average response latency; request throughput]
20
Results of Flooding Attacks
– RDS vs. Tapestry
  – Attack strength: 4 × 100 KB/s up to 16 × 500 KB/s
  – Both RDS and Tapestry are far more resilient than CDS
  – Performance: RDSo > Tapestry > RDSr
– Countering DoS attacks: decentralization and topology-aware locality
– [Figures: average response latency; request throughput]
21
Results of Corruption Attacks
– Distance corruption
  – One false edge on the directory/root server
  – Only CDSo (85%) and Tapestry (2.2%) affected
– Application-specific attacks
  – One Tapestry node spoofs as the root of every object (black square node in the graph)
  – 24% of nodes affected (enclosed by round-corner rectangles)
22
Resiliency Ranking
– Combine the multi-dimensional resiliency quantification into a single ranking
  – For multiple flooding attacks, weights are assigned in proportion to the amount of flood traffic
  – Scores are normalized against the corresponding performance without attacks
  – N/A means the service is not subject to that attack (counted as 1.0 in the weighted sum)

Service         Flooding (80%)   Distance corruption (10%)   Node spoofing (10%)   Total score   Rank
CDS, rand obj   0.027            N/A                         N/A                   0.2216        4
CDS, opt obj    0.023            0.85                        N/A                   0.2034        5
Random RDS      0.17             N/A                         N/A                   0.336         3
Optimal RDS     0.48             N/A                         N/A                   0.584         1
DDS, Tapestry   0.35             0.978                       0.76                  0.4538        2
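The weighted total scores in the table can be reproduced with a short sketch. Treating N/A entries as 1.0 (no degradation, because the service is not subject to that attack) is an inference from the table's arithmetic, not something the slides state explicitly.

```python
# Sketch of the weighted resiliency score behind the ranking table.
# Weights: flooding 80%, distance corruption 10%, node spoofing 10%.
WEIGHTS = {"flooding": 0.8, "distance": 0.1, "spoofing": 0.1}

def total_score(scores):
    """Weighted sum of per-attack resiliency scores; None means N/A,
    treated as 1.0 (inferred from the table's totals)."""
    return sum(w * (scores[k] if scores[k] is not None else 1.0)
               for k, w in WEIGHTS.items())

services = {
    "CDSr":     {"flooding": 0.027, "distance": None,  "spoofing": None},
    "CDSo":     {"flooding": 0.023, "distance": 0.85,  "spoofing": None},
    "RDSr":     {"flooding": 0.17,  "distance": None,  "spoofing": None},
    "RDSo":     {"flooding": 0.48,  "distance": None,  "spoofing": None},
    "Tapestry": {"flooding": 0.35,  "distance": 0.978, "spoofing": 0.76},
}

ranking = sorted(services, key=lambda s: total_score(services[s]), reverse=True)
print(ranking)  # → ['RDSo', 'Tapestry', 'RDSr', 'CDSr', 'CDSo']
```

The computed order matches the slide's ranks 1–5, which is what confirms the N/A-as-1.0 reading.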
23
Outline
– Motivation
– Network DoS Attacks Benchmarking
– Object Location Services
– Simulation Setup and Results
– Conclusions
24
Conclusions
– First attempt at network DoS benchmarking
– Applied to quantify various directory services
  – Replicated/distributed services are more resilient than centralized ones
– Future work: expand to more comprehensive attacks and more dynamic simulations
– Future work: apply to other services such as web hosting and content distribution
25
Queuing Theory Analysis (backup slide)
– Assume M/M/1 queuing
– Predicts the trends and guides the choice of simulation parameters to cover enough of the spectrum
– [Figures: average response latency (s) and legitimate throughput vs. the ratio of attack traffic to legitimate traffic]
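The M/M/1 assumption can be made concrete with the standard mean-response-time formula W = 1 / (mu − lambda). The service and arrival rates below are assumed values for illustration, not the paper's parameters; the point is the blow-up as attack traffic pushes total load toward capacity.

```python
# Illustrative M/M/1 sketch (rates are assumed, not from the slides):
# mean time in system W = 1 / (mu - lambda) grows without bound as the
# combined legitimate + attack arrival rate approaches the service rate.
def mm1_response_time(service_rate: float, arrival_rate: float) -> float:
    """Mean time in an M/M/1 system; infinite when overloaded."""
    if arrival_rate >= service_rate:
        return float("inf")  # unstable queue: backlog grows forever
    return 1.0 / (service_rate - arrival_rate)

mu = 100.0    # requests/s the directory server can serve (assumed)
legit = 20.0  # legitimate request rate (assumed)
for ratio in (0, 1, 2, 3, 4):  # x axis: attack traffic / legitimate traffic
    attack = ratio * legit
    w = mm1_response_time(mu, legit + attack)
    print(f"attack/legit ratio {ratio}: W = {w:.4f} s")
```

At ratio 4 the total arrival rate equals mu, so W diverges, matching the latency explosion the backup slide plots against the attack-to-legitimate traffic ratio.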