1
Extensible Security Services on the CROSS/Linux Programmable Router
David K. Y. Yau, Department of Computer Sciences, Purdue University
yau@cs.purdue.edu
2
Motivations
• The Internet is an open and democratic environment
  – increasingly used for mission-critical work
• Many security threats are present or appearing
  – need effective and flexible defenses to detect/trace/counter attacks
  – protect innocent users, prosecute criminals
3
Routing Infrastructure
• Router software critical to network health
  – patches for security bugs
  – new defenses against new attacks
• Scalable distribution of router software to many routing points
  – minimal disruption to existing services
  – little human intervention
• Exploit software-programmable router technology (the CROSS platform)
4
Existing Networks
[figure: client, ISP, and server connected by routers that do simple forwarding]
5
CROSS Network Architecture
[figure: clients, ISP, web and code servers connected by routers that do processing + forwarding; example services include denial-of-service defense and intelligent congestion control]
6
CROSS Forwarding Paths
[figure: forwarding path components — input queues, packet classifier, function dispatcher (subscribe/dispatch/send), per-flow processing, cut-through path, resource allocation manager, output network queues]
7
Example Security Problem: Network Denial-of-Service Attacks
• Some attacks are quite subtle
  – at the routing infrastructure: malicious dropping of packets, etc.
  – addressed by securing protocols and by intrusion detection
• Others work by brute force: flooding attacks
  – cripple the victim; preclude any sophisticated defense at the point under attack
  – viewed here as a resource management problem
8
Flooding Attack
[figure: flooding attack traffic converging on the server]
9
Server-centric Router Throttle
• Installed by the server when under stress, at a set of deployment routers
  – throttle requests can be sent by multicast
• Specifies a leaky-bucket rate at which a router may forward traffic to the server
  – aggressive traffic for the server is dropped before reaching it
  – the rate is determined by a control algorithm
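The throttle itself is the leaky-bucket rate limit described above. The sketch below shows one way such a per-destination limiter could look; the class name, the credit-based (token-bucket style) accounting, and the burst parameter are illustrative assumptions, not the CROSS/Linux code.

```python
import time

class LeakyBucketThrottle:
    """Per-destination rate limiter: forward traffic toward server S at no
    more than `rate` bytes/s, with a small burst allowance (sketch)."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps          # throttle rate installed by the server
        self.burst = burst_bytes      # maximum accumulated credit
        self.credit = burst_bytes     # bytes currently allowed to pass
        self.last = time.monotonic()

    def allow(self, pkt_len):
        """Return True if a packet of pkt_len bytes may be forwarded,
        False if it should be dropped at this deployment router."""
        now = time.monotonic()
        self.credit = min(self.burst,
                          self.credit + (now - self.last) * self.rate)
        self.last = now
        if pkt_len <= self.credit:
            self.credit -= pkt_len
            return True
        return False
```

A deployment router would keep one such object per protected server S and consult allow() on every packet classified as destined to S.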
10
Router Throttle
[figure: a deployment router maintains separate throttles for traffic to S and to S'; an aggressive flow toward S is limited by the throttle for S, which is securely installed by S]
11
Key Design Problems
• Resource allocation: who is entitled to what?
  – need to keep the server operating within its load limits
  – notion of fairness, and how to achieve it? Need global, rather than router-local, fairness
• How to respond to network and user dynamics?
  – a feedback control strategy is needed
12
What is being fair?
• The baseline approach of dropping a fraction f of every flow's traffic won't work well
  – a flow can cause more damage to other flows simply by being more aggressive!
  – e.g., with a budget of 10 and demands of 2 and 20, proportional dropping gives the aggressive flow about 9 of the 10 units, versus 8 under max-min
• Rather, no flow should get a higher rate than another flow that still has unmet demand
  – this way we penalize only the aggressive flows, but protect the well-behaving ones
13
Fairness Notion
• Since we proactively drop packets ahead of the congestion point, we need a global fairness notion
  – router-local max-min at the destination, with pushback to upper levels (Mahajan et al.)
  – max-min fairness among the level-k routing points R(k), i.e., routers about k hops away from the destination
14
Level-k Deployment Points
• Deployment points are parameterized by an integer k
• R(k): the set of routers that are either k hops away from the server S, or less than k hops away from S but directly connected to a host (see the sketch below)
• Fairness is defined across the global routing points R(k)
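To make the definition of R(k) concrete, here is a small sketch that derives the deployment set from a topology given as an adjacency list, with an is_router() predicate distinguishing routers from end hosts; the function and argument names are assumptions for illustration only.

```python
from collections import deque

def level_k_deployment(adj, is_router, server, k):
    """Compute R(k): routers exactly k hops from `server`, plus routers
    fewer than k hops away that are directly connected to a host.
    `adj` maps each node to its neighbors."""
    dist = {server: 0}
    queue = deque([server])
    rk = set()
    while queue:
        u = queue.popleft()
        if is_router(u):
            attached_host = any(not is_router(v) and v != server
                                for v in adj[u])
            if dist[u] == k or (dist[u] < k and attached_host):
                rk.add(u)
        # stop expanding past k hops, and never search "through" an end host
        if dist[u] >= k or (u != server and not is_router(u)):
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return rk
```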
15
Level-3 Deployment
[figure: example topology highlighting the level-3 deployment routers around the server]
16
Feedback Control Strategy
• Hysteresis control
  – high and low water marks (U_S, L_S) for the server load, used to strengthen or relax the router throttle
• Additive-increase/multiplicative-decrease rate adjustment
  – the throttle rate is decreased (multiplicatively) when the server load exceeds U_S, and increased (additively) when the load falls below L_S
  – the throttle is removed when a relaxed rate does not result in a significant increase in server load
17
Fairness Definition
• A resource control algorithm achieves level-k max-min fairness among the routers in R(k) if the allowed forwarding rate of traffic for S at each router is the router's max-min fair share of some rate r satisfying L_S ≤ r ≤ U_S
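The max-min fair share referred to in this definition can be computed by standard water-filling over the per-router offered rates. The sketch below is that generic computation, not the paper's own algorithm; the names and example values are illustrative.

```python
def max_min_shares(demands, capacity):
    """Max-min fair allocation (water-filling): repeatedly offer each
    unsatisfied router an equal split of the remaining capacity; routers
    whose demand is at or below that split are fully satisfied and removed."""
    shares = {r: 0.0 for r in demands}
    remaining = dict(demands)
    cap = float(capacity)
    while remaining and cap > 1e-12:
        equal = cap / len(remaining)
        satisfied = {r: d for r, d in remaining.items() if d <= equal}
        if satisfied:
            for r, d in satisfied.items():
                shares[r] = d            # fully met demand
                cap -= d
                del remaining[r]
        else:
            for r in remaining:
                shares[r] = equal        # everyone else gets an equal share
            remaining, cap = {}, 0.0
    return shares
```

For example, with offered rates {A: 2, B: 20, C: 5} and r = 10, the shares work out to {A: 2, B: 4, C: 4}.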
18
Fair Throttle Algorithm
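The slide's pseudocode is not reproduced in this transcript. The following is a hedged reconstruction of the server-side control loop implied by the two preceding slides (hysteresis between L_S and U_S, additive increase and multiplicative decrease of the common throttle rate, removal when relaxing no longer raises the load); the constants delta, beta, eps and the monitoring interval are assumptions, and the three callables stand in for mechanisms the slides only name.

```python
import time

def throttle_control(get_load, multicast_rate, remove_throttles,
                     L_S, U_S, r_init, delta, beta=0.5,
                     interval=1.0, eps=0.05):
    """Server-side throttle control loop (sketch).
    get_load()           -> current aggregate traffic rate arriving at S
    multicast_rate(r)    -> install leaky-bucket rate r at the routers in R(k)
    remove_throttles()   -> uninstall all throttles for S
    """
    r = r_init
    multicast_rate(r)
    while True:
        time.sleep(interval)
        load = get_load()
        if load > U_S:
            r = beta * r                 # multiplicative decrease: tighten throttle
            multicast_rate(r)
        elif load < L_S:
            prev = load
            r = r + delta                # additive increase: relax throttle
            multicast_rate(r)
            time.sleep(interval)
            if get_load() <= prev * (1 + eps):
                remove_throttles()       # relaxing did not raise load noticeably
                return
        # load within [L_S, U_S]: hysteresis band, leave the rate unchanged
```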
19
Example Max-min Rates (L = 18, H = 22)
[figure: example topology annotated with per-router traffic rates toward the server]
20
Interesting Questions
• Can we preferentially drop attacker traffic over good user traffic?
• Can we keep the server operating within its design limits, so that the good user traffic that gets through receives acceptable service?
  – How stable is such a control algorithm? How does it converge?
21
Algorithm Evaluation
• Control-theoretic analysis
  – algorithm stability and convergence under different system parameters
• Packet network simulations
  – good user protection under both UDP and TCP traffic
• System implementation
  – deployment costs
22
Control-theoretic Model
23
Throttle Rate (L=900; U=1100)
24
Server Load (L = 900; U = 1100)
25
Throttle Rate (U = 1100)
26
Server Load (U = 1100)
27
Throttle Rate (L=1050;U=1100)
28
Server Load (L=1050; U=1100)
29
UDP Simulation Experiments
• Global network topology reconstructed from real traceroute data
  – AT&T Internet mapping project: 709,310 traceroute paths from a single source to 103,402 destinations
  – randomly selected 5,000 paths, containing 135,821 nodes, of which 3,879 are hosts
• Randomly select x% of the hosts to be attackers (see sketch below)
  – good users send at a rate in [0, r], attackers at a rate in [0, R]
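A sketch of the traffic setup described above: a random x% of hosts become attackers, good users draw a sending rate uniformly from [0, r], and attackers from [0, R]; the helper name and return format are assumptions, not the actual simulation scripts.

```python
import random

def assign_traffic(hosts, attacker_fraction, r_good, r_attack, seed=None):
    """Label attacker_fraction of the hosts as attackers and draw a constant
    sending rate for each host: U[0, r_good] for good users, U[0, r_attack]
    for attackers (r_attack >> r_good models 'aggressive' attackers)."""
    rng = random.Random(seed)
    hosts = list(hosts)
    rng.shuffle(hosts)
    n_attack = int(attacker_fraction * len(hosts))
    rates = {}
    for i, h in enumerate(hosts):
        is_attacker = i < n_attack
        cap = r_attack if is_attacker else r_good
        rates[h] = (is_attacker, rng.uniform(0, cap))
    return rates
```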
30
20% Evenly Distributed Aggressive (10:1) Attackers
31
40% Evenly Distributed Aggressive (5:1) Attackers
32
Evenly Distributed “meek” Attackers
33
Deployment Extent
34
TCP Simulation Experiment
• Clients access a web server via HTTP 1.0 over TCP Reno
• Simulated network is a subset of the AT&T traceroute topology
  – 85 hosts, 20% of them attackers
• Web clients issue requests probabilistically, with empirical document-size and inter-request-time distributions (see sketch below)
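A sketch of the client workload described above, assuming the empirical document-size and inter-request-time distributions are available simply as lists of samples to draw from; this is illustrative only and does not reflect the actual traces or simulator used.

```python
import random

def web_client(doc_sizes, think_times, send_request, duration, seed=None):
    """Drive one web client: repeatedly wait an inter-request ('think') time
    drawn from the empirical sample, then request a document whose size is
    drawn from the empirical document-size sample."""
    rng = random.Random(seed)
    t = 0.0
    while t < duration:
        t += rng.choice(think_times)              # empirical inter-request time
        send_request(size=rng.choice(doc_sizes))  # empirical document size
    # (a real simulator would schedule these as events rather than loop)
```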
35
Web Server Protection
36
Web Server Traffic Control
37
System Implementation
• On the CROSS/Linux router
  – as a Click element kernel service (loadable kernel module)
  – code can be remotely downloaded through the anetd daemon
• Deployment platform
  – Pentium III/864 MHz PC
  – multiple 10/100 Mb/s Ethernet interfaces
38
Module Load Overhead
39
Memory and Delay Results
• Memory overhead
  – 7.5 bytes of memory per throttle
• Delay through the throttle element: about 200 ns
  – independent of the number of throttles installed
40
Throughput Result
41
Future Work
• Offered-load-aware control algorithm for computing the throttle rate
  – impact on convergence and stability
• Policy-based notion of fairness
  – heterogeneous network regions, by size, susceptibility to attacks, tariff payment
• Selective deployment issues
• Impact on real user applications
42
Conclusions
• Extensible routers can help improve network health
• Presented a server-centric router throttle mechanism for DDoS flooding attacks
  – can better protect good user traffic from aggressive attacker traffic
  – can keep the server operational under an ongoing attack
  – has an efficient implementation
43
Acknowledgements
• CROSS implementation
  – Prem Gopalan, Seung Chul Han, Xuxian Jiang, Puneet Zaroo
• Funding provided by
  – NSF, CERIAS, and the Purdue Research Foundation