1
Wire Speed Packet Classification Without TCAMs
ACM SIGMETRICS 2007
Qunfeng Dong (University of Wisconsin-Madison)
Suman Banerjee (University of Wisconsin-Madison)
Jia Wang (AT&T Laboratories – Research)
Dheeraj Agrawal (University of Wisconsin-Madison)
2
Outline
- Introduction
- Previous work and our objective
- Motivation
- Design
- Evaluation
- Summary
3
Introduction
- Packet classification [SVSW98, LS98]
  - Make a decision on each incoming packet based on the values of some packet header field(s), according to a given rule set.
  - Example: IP forwarding based on destination IP address.
  - It is the foundation of many Internet functions (e.g. security, QoS).
- Each rule specifies a range literal on each relevant field.
  - For example, the source port must be in the range [1024, 65535].
  - Prefixes, single values, and wildcards are all special cases of ranges.
- A rule matches a packet if the packet satisfies all of the rule's range literals.
- Objective: for each incoming packet, find the first (highest-priority) rule that matches the packet (see the sketch below).
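A minimal sketch of this matching model (illustrative names, not code from the paper): each rule is a list of per-field ranges plus a decision, and classification returns the decision of the first matching rule in priority order.

```python
# Illustrative sketch of range-based packet classification (not code from the paper).
# A rule is a list of (low, high) ranges, one per header field, plus a decision.

def matches(ranges, packet_fields):
    """A packet matches a rule if every field value falls inside the rule's range."""
    return all(low <= value <= high
               for (low, high), value in zip(ranges, packet_fields))

def classify(rule_set, packet_fields, default="deny"):
    """Return the decision of the first (highest-priority) matching rule."""
    for ranges, decision in rule_set:
        if matches(ranges, packet_fields):
            return decision
    return default

# Example with two fields (destination port, source port); wildcards are full ranges.
rules = [
    ([(80, 80), (1024, 65535)], "accept"),   # HTTP from ephemeral source ports
    ([(0, 65535), (0, 65535)], "deny"),      # default catch-all
]
print(classify(rules, (80, 40000)))   # -> accept
print(classify(rules, (22, 40000)))   # -> deny
```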
4
Introduction
- Hardware solution
  - Ternary Content Addressable Memory (TCAM) is the favoured solution for wire-speed packet classification in high-speed routers.
  - Fast: searches all stored rules in parallel and returns the first matching rule.
  - Expensive: accounts for a significant portion of router line card cost.
  - Power consuming: one TCAM chip consumes 12-15 W.
    - Heat dissipation is a major challenge in designing high-performance architectures.
    - Cooling cost is a considerable portion of ISPs' operational cost.
  - Low board area efficiency.
  - Not convenient for performing complex operations.
- Software solution (compared with TCAM)
  - Better for performing complex classification tasks.
  - Cheap: no additional hardware needed.
  - Low power consumption: DRAM/SRAM-based implementation.
  - Slow.
5
Introduction
- Packet classification at wire speed: with a 40-byte packet size, OC-768 allows about 8 nanoseconds per packet (see the check below).
  - Researchers have been working on the design of routers with 4x OC-768 line cards.
- Software solutions need either
  - O(log n) memory accesses per packet using O(n^d) memory space, or
  - O((log n)^(d-1)) memory accesses per packet using O(n) memory space,
  - where n is the number of rules and d is the number of packet header fields.
- As wire speeds grow much faster than memory access speeds, software solutions will find it increasingly difficult to keep up.
- TCAM is the de facto solution for wire-speed packet classification, and even for IP lookup as well.
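The 8 ns figure follows directly from the stated numbers; a quick back-of-the-envelope check (treating OC-768 as roughly 40 Gbit/s):

```python
# Back-of-the-envelope check of the per-packet time budget at OC-768.
packet_bits = 40 * 8                   # minimum-size 40-byte packet
line_rate_bps = 40e9                   # OC-768 is roughly 40 Gbit/s
budget_ns = packet_bits / line_rate_bps * 1e9
print(budget_ns)                       # ~8 ns; a 4x OC-768 line card gets only ~2 ns
```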
6
Using a small and fast cache is a natural and appealing choice.
7
Prior art
- Flow cache [Xu et al. 2000, Chang et al. 2004]
- Xu, Singhal, and Degroat 2000
  - Number of concurrent flows: 14,000
  - Cache size: 16K entries
  - Cache miss ratio: 8%
- Chang, Feng, and Li 2004
  - Number of concurrent flows: 567
  - Cache size: 4 KB
  - Cache miss ratio: 4.85%
8
Our objectives
- Number of concurrent flows: 100,000+
  - To be realistic in today's Internet.
- Cache size: a small number of entries
  - To be cost efficient.
- Cache miss ratio: 0.1% or lower
  - So that missed packets can be classified by a low-cost packet classifier.
9
Observations
- Caching rules is more efficient than caching packets.
  - One rule can match many different flows.
  - A small number of rules match most traffic.
- Cached rules need not be existing rules in the rule set.
  - A new rule may cover more flows than any existing rule.
- Cached rules should evolve in response to traffic dynamics (see the sketch below).
  - Evolving rules may cover more flows than any existing rule.
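To make the last two observations concrete, here is a hypothetical two-field illustration (illustrative values, not from the slides): a cached hypercube rule can be widened so that a single rule also covers a newly observed flow with the same decision.

```python
# Hypothetical illustration: a cached hypercube rule evolving to cover a new flow.
# Fields: (destination port, source port); both flows share the decision "accept".
cached_ranges = [(80, 80), (40000, 40000)]   # initially covers only flow (80, 40000)

def widen(ranges, flow):
    """Grow each range just enough to include the new flow's field values."""
    return [(min(lo, v), max(hi, v)) for (lo, hi), v in zip(ranges, flow)]

cached_ranges = widen(cached_ranges, (80, 52000))
print(cached_ranges)   # [(80, 80), (40000, 52000)] -- one rule now covers both flows
```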
10
Example
11
Framework
12
Challenges
- What (not which!) rules should we cache?
  - Cover incoming flows using as few rules as possible.
- How should cached rules evolve?
  - In response to changes in the traffic pattern.
- How do we preserve the semantic integrity of the rule cache? (a simplified check is sketched below)
  - On a hit, the cache must always output the right decision.
- What is the effect of cache management delay on the cache hit ratio?
  - A low-cost, and hence relatively slow, cache manager is preferred.
  - Updated rules are not available until cache management completes.
  - Some packets may be missed because of this delay.
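The semantic-integrity requirement can be pictured with a simplified, conservative check (this is only a sufficient condition, not the paper's SPDD-based method): a candidate hypercube with decision d is safe to cache if every rule in the rule set that overlaps it also decides d, so no packet inside the cube can first hit a rule with a different decision.

```python
# Simplified sufficient check for semantic integrity (not the paper's SPDD method):
# a cached hypercube with decision d is safe if every overlapping rule also decides d.

def overlaps(ranges_a, ranges_b):
    """Two hypercubes overlap iff their ranges intersect on every field."""
    return all(alo <= bhi and blo <= ahi
               for (alo, ahi), (blo, bhi) in zip(ranges_a, ranges_b))

def safe_to_cache(cube_ranges, cube_decision, rule_set):
    """Conservative: reject the cube if any overlapping rule has a different decision."""
    return all(decision == cube_decision
               for ranges, decision in rule_set
               if overlaps(ranges, cube_ranges))
```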
13
Framework
14
RHL & Sliding Window
15
Regular Hypercube List (RHL)
- Each element of the Regular Hypercube List (RHL) is a rule,
  - namely a d-dimensional hypercube in the definition space.
- An RHL element has a single decision,
  - and thus can be represented as a single rule.
- Every sample is linked to some RHL element covering it,
  - to fully utilize the sampled packets in the sliding window.
- The weight of an RHL element is its number of associated samples.
- Overlapping RHL elements must have the same decision.
  - This greatly simplifies cache management and cache design!
- We can simply put the top-weighted RHL elements into the rule cache (see the sketch below).
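A minimal bookkeeping sketch of the RHL as described above (illustrative names and simplifications, not the paper's exact algorithm): each element stores a hypercube, its single decision, and a weight counting the sliding-window samples linked to it; the top-weighted elements are the candidates for the rule cache.

```python
# Illustrative RHL bookkeeping (a simplified sketch, not the paper's exact algorithm).
from dataclasses import dataclass

@dataclass
class RHLElement:
    ranges: list        # one (low, high) range per header field
    decision: str       # single decision shared by the whole hypercube
    weight: int = 0     # number of associated samples in the sliding window

def link_sample(rhl, packet_fields):
    """Link a sampled packet to some RHL element covering it, bumping its weight."""
    for element in rhl:
        if all(lo <= v <= hi for (lo, hi), v in zip(element.ranges, packet_fields)):
            element.weight += 1
            return element
    return None   # no covering element; the cache manager would create or evolve one

def top_elements(rhl, cache_size):
    """The highest-weight elements are the ones placed into the rule cache."""
    return sorted(rhl, key=lambda e: e.weight, reverse=True)[:cache_size]
```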
16
SPDD
17
Framework
18
Rule Cache Design
21
Security of the Rule Cache
- If attacking traffic accounts for a fraction x of the aggregate traffic, the cache miss ratio caused by an adversary is bounded by x/(1-x),
  - even if the adversary is perfectly informed,
  - and even if the adversary can arbitrarily control the content of the attacking packets sampled by the cache manager.
- For example, if x = 10%, the cache miss ratio caused by the adversary is at most 11.1% (checked below).
- A detailed proof can be found in the paper.
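A one-line numeric check of the stated example:

```python
# With x = 10% attacking traffic, the adversary-induced miss ratio is at most x / (1 - x).
x = 0.10
print(x / (1 - x))   # ~0.111, i.e. about 11.1%
```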
22
Evaluation
23
Evaluation
24
Evaluation
25
Evaluation
26
Evaluation
27
Summary
- TCAM, the de facto solution, has some disadvantages:
  - It accounts for a significant portion of router line card cost.
  - It is quite power consuming.
- We propose the smart rule cache architecture to replace TCAM.
  - A small on-chip rule cache matches more than 99.9% of incoming traffic.
  - Missed packets can easily be classified using a low-cost classifier.
  - The small cache can be implemented at negligible cost.
28
Thank you!
ACM SIGMETRICS 2007
Qunfeng Dong
University of Wisconsin - Madison
Email: qunfeng@cs.wisc.edu