BUFFALO: Bloom Filter Forwarding Architecture for Large Organizations
Minlan Yu, Princeton University (minlanyu@cs.princeton.edu)
Joint work with Alex Fabrikant, Jennifer Rexford
CoNEXT'09
Large-scale SPAF Networks
Large network using flat addresses
– Simple for configuration and management
– Useful for enterprise and data center networks
SPAF: Shortest Path on Addresses that are Flat
– Flat addresses (e.g., MAC addresses)
– Shortest path routing (link state, distance vector)
[Figure: example SPAF topology of switches (S) and hosts (H)]
Scalability Challenges
Recent advances in the control plane
– E.g., TRILL, SEATTLE
– Topology and host information dissemination
– Route computation
The data plane remains a challenge
– Forwarding table growth (in # of hosts and switches)
– Increasing link speed
State of the Art
Hash table in SRAM to store the forwarding table
– Map MAC addresses to next hops
– Hash collisions: extra delay
Overprovision to avoid running out of memory
– Perform poorly when out of memory
– Difficult and expensive to upgrade memory
[Figure: hash table with example entries 00:11:22:33:44:55, 00:11:22:33:44:66, aa:11:22:33:44:77, …]
Bloom Filters
Bloom filters in fast memory (SRAM)
– A compact data structure for a set of elements
– Calculate s hash functions to store element x
– Easy to check membership
– Reduce memory at the expense of false positives
[Figure: bit array V_0 … V_{m-1} with positions h_1(x), h_2(x), …, h_s(x) set to 1]
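To make the data structure concrete, here is a minimal Bloom filter sketch in Python (illustrative only, not the BUFFALO prototype, which is implemented in kernel-level Click): an m-bit array with s salted hash functions, supporting insertion and membership queries. Class and parameter names are assumptions for this sketch.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: m bits, s salted hash functions."""

    def __init__(self, m, s):
        self.m = m            # number of bits in the array
        self.s = s            # number of hash functions
        self.bits = [0] * m

    def _positions(self, x):
        # Derive s positions by salting one hash with the function index.
        for i in range(self.s):
            h = hashlib.sha256(f"{i}:{x}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def insert(self, x):
        for p in self._positions(x):
            self.bits[p] = 1

    def query(self, x):
        # True means x may be in the set; False means it is definitely not.
        return all(self.bits[p] for p in self._positions(x))

bf = BloomFilter(m=1024, s=8)
bf.insert("00:11:22:33:44:55")
print(bf.query("00:11:22:33:44:55"))   # True
print(bf.query("aa:11:22:33:44:77"))   # almost certainly False
```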
BUFFALO: Bloom Filter Forwarding
One Bloom filter (BF) per next hop
– Store all addresses forwarded to that next hop
[Figure: packet destination queried against the Bloom filters for next hops 1, 2, …, T; the matching filter gives the hit]
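A sketch of the forwarding step this slide describes: one filter per next hop, and a lookup that returns every next hop whose filter reports a hit. It reuses the BloomFilter class sketched above (any object with the same insert/query methods would do); the names and table contents are illustrative.

```python
def lookup(dest_mac, filters):
    """filters: dict mapping next-hop id -> BloomFilter for that next hop.
    Returns all next hops whose Bloom filter matches dest_mac."""
    return [hop for hop, bf in filters.items() if bf.query(dest_mac)]

# Build one Bloom filter per next hop from the forwarding table.
fib = {"00:11:22:33:44:55": 1, "00:11:22:33:44:66": 1, "aa:11:22:33:44:77": 2}
filters = {hop: BloomFilter(m=1024, s=8) for hop in set(fib.values())}
for mac, hop in fib.items():
    filters[hop].insert(mac)

print(lookup("aa:11:22:33:44:77", filters))   # [2] (plus any false-positive hops)
```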
BUFFALO Challenges
How to optimize memory usage?
– Minimize the false-positive rate
How to handle false positives quickly?
– No memory/payload overhead
How to handle routing dynamics?
– Make it easy and fast to adapt Bloom filters
Optimize Memory Usage
Consider a fixed forwarding table
Goal: minimize the overall false-positive rate
– Probability that one or more BFs have a false positive
Input:
– Fast memory size M
– Number of destinations per next hop
– The maximum number of hash functions
Output: the size of each Bloom filter
– Larger BF for next hops with more destinations
Constraints and Solution
Constraints
– Memory constraint: sum of all BF sizes ≤ fast memory size M
– Bound on the number of hash functions, to bound CPU calculation time
– Bloom filters share the same hash functions
Proved to be a convex optimization problem
– An optimal solution exists
– Solved by IPOPT (Interior Point OPTimizer)
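As a sketch of the objective being minimized, using the standard Bloom filter false-positive approximation (the paper's exact formulation may differ in detail): with s shared hash functions, a filter of m_i bits holding n_i addresses has false-positive rate roughly (1 - e^(-s*n_i/m_i))^s, and the overall rate is the probability that at least one of the filters fires falsely, subject to the sizes fitting in M. All numbers below are illustrative.

```python
import math

def overall_false_positive_rate(sizes, counts, s):
    """sizes[i]: bits in BF i; counts[i]: addresses stored in BF i; s: shared hash functions.
    Standard approximation of Pr[at least one BF gives a false positive]."""
    p_none = 1.0
    for m_i, n_i in zip(sizes, counts):
        f_i = (1.0 - math.exp(-s * n_i / m_i)) ** s   # per-filter false-positive rate
        p_none *= (1.0 - f_i)
    return 1.0 - p_none

def feasible(sizes, M_bits):
    """Memory constraint: the BF sizes must fit in the fast memory budget."""
    return sum(sizes) <= M_bits

rate = overall_false_positive_rate(sizes=[1_200_000, 800_000],
                                   counts=[120_000, 80_000], s=8)
print(rate)   # ~0.017 with these illustrative numbers
```

BUFFALO solves the resulting convex problem with IPOPT; this sketch only evaluates the objective and the memory constraint.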
Minimize False Positives
– Forwarding table with 200K entries, 10 next hops
– 8 hash functions
– Optimization runs in about 50 msec
Comparing with Hash Table
Save 65% memory with 0.1% false positives
More benefits over the hash table
– Performance degrades gracefully as tables grow
– Handle worst-case workloads well
BUFFALO Challenges
How to optimize memory usage?
– Minimize the false-positive rate
How to handle false positives quickly?
– No memory/payload overhead
How to handle routing dynamics?
– Make it easy and fast to adapt Bloom filters
False Positive Detection
Multiple matches in the Bloom filters
– One of the matches is correct
– The others are caused by false positives
[Figure: packet destination queried against the Bloom filters for next hops 1, 2, …, T; multiple filters report hits]
Handle False Positives
Design goals
– Should not modify the packet
– Never go to slow memory
– Ensure timely packet delivery
BUFFALO solutions (see the sketch below)
– Exclude the incoming interface: avoids loops in the “one false positive” case
– Random selection from matching next hops: guarantees reachability with multiple false positives
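A minimal sketch of the selection rule just described, with illustrative names: drop the interface the packet arrived on when another candidate exists, then pick uniformly at random among the remaining matching next hops.

```python
import random

def choose_next_hop(matching_hops, incoming_interface):
    """matching_hops: next hops whose Bloom filters matched the destination.
    Excludes the incoming interface if possible, then picks at random."""
    candidates = [h for h in matching_hops if h != incoming_interface]
    if not candidates:            # the only match is where the packet came from
        candidates = matching_hops
    return random.choice(candidates)

print(choose_next_hop([1, 3, 7], incoming_interface=3))   # 1 or 7, chosen at random
```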
One False Positive
Most common case: one false positive
– When there are multiple matching next hops
– Avoid sending to the incoming interface
Provably at most a two-hop loop
– Stretch ≤ Latency(A→B) + Latency(B→A)
[Figure: false positive between switches A and B on the shortest path to dst]
Multiple False Positives
Handle multiple false positives
– Random selection from matching next hops
– A random walk on the shortest-path tree plus a few false-positive links
– Eventually finds a way to the destination
[Figure: shortest-path tree for dst with a false-positive link]
Stretch Bound
Provable expected stretch bound
– With k false positives, the expected stretch is provably bounded
– Proved using random walk theory
However, the stretch is not bad in practice
– False positives are independent
– The probability of k false positives drops exponentially
Tighter bounds in special topologies
– For trees, the expected stretch is polynomial in k
Stretch in Campus Network
– When fp = 0.001%, 99.9% of the packets have no stretch
– 0.0003% of the packets have a stretch equal to the shortest-path length
– When fp = 0.5%, 0.0002% of the packets have a stretch of 6 times the shortest-path length
BUFFALO Challenges
How to optimize memory usage?
– Minimize the false-positive rate
How to handle false positives quickly?
– No memory/payload overhead
How to handle routing dynamics?
– Make it easy and fast to adapt Bloom filters
Problem of Bloom Filters
Routing changes
– Add/delete entries in BFs
Problem of Bloom filters (BF)
– Do not allow deleting an element
Counting Bloom filters (CBF), sketched below
– Use a counter instead of a bit in the array
– CBFs can handle adding/deleting elements
– But require more memory than BFs
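A minimal counting Bloom filter sketch, using the same salted-hash scheme as the Bloom filter sketch earlier; class and parameter names are illustrative. Each position holds a counter, so previously added elements can be removed by decrementing.

```python
import hashlib

class CountingBloomFilter:
    """Counting Bloom filter: m counters, s salted hash functions."""

    def __init__(self, m, s):
        self.m, self.s = m, s
        self.counters = [0] * m

    def _positions(self, x):
        for i in range(self.s):
            h = hashlib.sha256(f"{i}:{x}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, x):
        for p in self._positions(x):
            self.counters[p] += 1

    def remove(self, x):
        # Only valid for elements that were previously added.
        for p in self._positions(x):
            self.counters[p] -= 1

    def query(self, x):
        return all(self.counters[p] > 0 for p in self._positions(x))
```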
Update on Routing Change
Use a CBF in slow memory (see the sketch below)
– Assist the BF in handling forwarding-table updates
– Easy to add/delete a forwarding-table entry
[Figure: deleting a route updates the CBF in slow memory, which in turn refreshes the BF in fast memory]
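A sketch of the update path this slide describes, under the simplifying assumption that the fast-memory BF and the slow-memory CBF have the same size and hash functions (it reuses the BloomFilter and CountingBloomFilter sketches above): apply the route change to the CBF, then refresh the affected BF bits as “counter > 0”. Contraction for the large-CBF case is sketched after the next slide.

```python
def delete_route(mac, cbf, bf):
    """Remove mac from the slow-memory CBF, then sync the fast-memory BF bits.
    Assumes cbf and bf have the same size m and the same hash functions."""
    cbf.remove(mac)
    for p in cbf._positions(mac):           # only these bit positions can change
        bf.bits[p] = 1 if cbf.counters[p] > 0 else 0
```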
Occasionally Resize BF
Under significant routing changes
– # of addresses in BFs changes significantly
– Re-optimize BF sizes
Use the CBF to assist in resizing the BF (see the sketch below)
– Large CBF and small BF
– Easy to expand the BF size by contracting the CBF
[Figure: a small BF is hard to expand to size 4, while the large CBF is easy to contract to size 4]
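One common way to contract a counting filter to a smaller size, matching the “large CBF, small BF” idea on this slide (a sketch, assuming hash positions are taken modulo the filter size, so position p in the large array maps to p % new_m in the small one): add together the counters that collapse onto the same position, then derive the smaller BF’s bits from the folded counters.

```python
def contract_counters(counters, new_m):
    """Fold CBF counters down to new_m entries (new_m must divide len(counters)).
    Position p in the large array maps to p % new_m in the small one."""
    assert len(counters) % new_m == 0
    folded = [0] * new_m
    for p, c in enumerate(counters):
        folded[p % new_m] += c
    return folded

def bf_bits_from_counters(counters):
    """Derive Bloom filter bits from (possibly folded) CBF counters."""
    return [1 if c > 0 else 0 for c in counters]

small_counters = contract_counters([0, 2, 1, 0, 0, 0, 0, 3], new_m=4)
print(bf_bits_from_counters(small_counters))   # [0, 1, 1, 1]
```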
BUFFALO Switch Architecture
– Prototype implemented in kernel-level Click
[Figure: BUFFALO switch architecture]
Prototype Evaluation
Environment
– 3.0 GHz 64-bit Intel Xeon
– 2 MB L2 data cache, used as the fast memory of size M
Forwarding table
– 10 next hops
– 200K entries
Evaluation Results
Peak forwarding rate
– 365 Kpps, 1.9 μs per packet
– 10% faster than the hash-based EtherSwitch
Performance with FIB updates
– 10.7 μs to update a route
– 0.47 s to reconstruct BFs based on CBFs, on another core without disrupting packet lookup
– Swapping in the new BFs is fast
Conclusion
Three properties of BUFFALO
– Small, bounded memory requirement
– Stretch increases gracefully as the forwarding table grows
– Fast reaction to routing updates
Key design decisions
– One Bloom filter per next hop
– Optimization of Bloom filter sizes
– Prevention of forwarding loops
– Dynamic updates using counting Bloom filters
Thanks! Questions?