Multi-Terabit IP Lookup Using Parallel Bidirectional Pipelines
Authors: Weirong Jiang, Viktor K. Prasanna
Published in: CF '08: Proceedings of the 2008 Conference on Computing Frontiers, ACM, May 2008
Presenter: Yu-Ping Chiang
Date: 2008/09/16
Outline
- Overview
  - Front End
  - Back End
- Memory Balancing
  - Trie Partitioning
  - Subtrie-to-Pipeline Mapping
  - Node-to-Stage Mapping
- Performance
Front End
- Receives packets
- Dispatches packets to the pipelines
- Sets each packet's delay according to cache hit / miss
Back End
- Processes packets
- Outputs the retrieved next-hop information, using the delay to know when each packet's result can be retrieved
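A minimal software sketch of the front-end / back-end interaction described on the two slides above. The fixed 25-cycle pipeline latency, the example cache contents, and the idea that cache hits are delayed by the same amount so results come out in arrival order are assumptions for illustration; the slides only say that a delay is set on dispatch and used when retrieving the output.

```python
from collections import deque

PIPELINE_LATENCY = 25           # hypothetical: cycles until a pipeline result is ready
cache = {"10.1.2.3": "nh1"}     # hypothetical next-hop cache held by the front end
in_flight = deque()             # (ready_cycle, packet, cached_result), in arrival order

def front_end(cycle, packet, dst):
    """Dispatch a packet and record the delay after which its result can be output."""
    if dst in cache:
        result = cache[dst]     # cache hit: answer known immediately ...
    else:
        result = None           # cache miss: packet traverses one of the pipelines
        # ... dispatch the packet to a pipeline here ...
    # Hits get the same delay as a pipeline traversal so the back end can
    # emit all results in arrival order.
    in_flight.append((cycle + PIPELINE_LATENCY, packet, result))

def back_end(cycle):
    """Output next-hop information for every packet whose delay has elapsed."""
    while in_flight and in_flight[0][0] <= cycle:
        _, packet, result = in_flight.popleft()
        next_hop = result if result is not None else "<read from pipeline output>"
        print(f"cycle {cycle}: {packet} -> {next_hop}")

front_end(0, "pkt0", "10.1.2.3")      # hit
front_end(1, "pkt1", "192.0.2.7")     # miss
back_end(30)                          # both delays have elapsed by cycle 30
```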
Outline
- Overview
  - Front End
  - Back End
- Memory Balancing
  - Trie Partitioning
  - Subtrie-to-Pipeline Mapping
  - Node-to-Stage Mapping
- Performance
Trie Partitioning
- Partition the trie by an initial stride I: the first I bits of the address select one of 2^I subtries
- In the following sections I = 12 (the example on this slide uses I = 2)
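A small sketch of how initial-stride partitioning splits a prefix set into subtries: the first I bits of each prefix index one of 2^I subtries, and prefixes shorter than I are expanded into every subtrie they cover. I = 2 here for readability; the prefixes and data layout are made up for illustration.

```python
I = 2   # initial stride; the evaluation in the following sections uses I = 12

def partition(prefixes):
    """prefixes: list of (bit-string prefix, next hop).
    Returns a dict mapping each I-bit index to the prefixes of its subtrie,
    with the leading I bits stripped off."""
    subtries = {format(i, f"0{I}b"): [] for i in range(2 ** I)}
    for bits, nh in prefixes:
        if len(bits) < I:
            # prefix expansion: a short prefix covers several I-bit indices
            for i in range(2 ** (I - len(bits))):
                idx = bits + format(i, f"0{I - len(bits)}b")
                subtries[idx].append(("", nh))            # lands on the subtrie root
        else:
            subtries[bits[:I]].append((bits[I:], nh))     # remaining bits stay in the subtrie
    return subtries

example = [("0", "A"), ("01", "B"), ("0110", "C"), ("111", "D")]
for idx, sub in partition(example).items():
    print(idx, sub)
```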
Subtrie-to-Pipeline Mapping
- Problem formulation
- Algorithm: O(KP), where K = # of subtries, P = # of pipelines
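The paper's exact mapping algorithm is not reproduced here; the sketch below is a plain greedy assignment (largest subtrie to the currently lightest pipeline) with the same O(KP) cost quoted on the slide. The subtrie sizes are made up.

```python
def map_subtries_to_pipelines(subtrie_sizes, P):
    """Greedy sketch: assign each subtrie to the pipeline with the least load so far.
    Scanning all P pipelines for each of the K subtries costs O(KP)."""
    load = [0] * P                      # total nodes currently mapped to each pipeline
    assignment = {}
    # Place the largest subtries first so they don't pile onto one pipeline.
    for k in sorted(range(len(subtrie_sizes)), key=lambda k: -subtrie_sizes[k]):
        target = min(range(P), key=lambda p: load[p])   # O(P) scan per subtrie
        assignment[k] = target
        load[target] += subtrie_sizes[k]
    return assignment, load

assignment, load = map_subtries_to_pipelines([40, 35, 30, 20, 15, 10, 5, 5], P=4)
print(assignment)   # subtrie index -> pipeline
print(load)         # per-pipeline node counts should be roughly balanced
```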
Performance
Node-to-Stage Mapping
- Problem formulation
- Constraint: an ancestor node must be mapped to a stage preceding its child's stage
- Main idea:
  - subtries are mapped onto the pipeline in two different directions
  - nodes on the same trie level may be mapped onto different stages
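A toy sketch of the bidirectional idea on this slide, under a simplifying assumption that is not the paper's full heuristic: every node is placed purely by its depth, with forward subtries filling stages front-to-back and inverted subtries back-to-front. Both directions respect the ancestor-before-child constraint along their own traversal order.

```python
H = 8   # hypothetical number of pipeline stages

def stages_for(node_depths, inverted):
    """Map each node of one subtrie (given by its depth) to a pipeline stage.
    Forward: root at stage 0, children in later stages.
    Inverted: root at stage H-1, children in earlier stages, which is still
    'ancestor before child' for packets entering from the other end."""
    return [H - 1 - d if inverted else d for d in node_depths]

# Two toy subtries, each described by the depth of every node it contains.
forward_subtrie  = [0, 1, 1, 2, 2, 2, 3]
inverted_subtrie = [0, 1, 2, 2, 3, 3]

per_stage = [0] * H
for depths, inv in [(forward_subtrie, False), (inverted_subtrie, True)]:
    for s in stages_for(depths, inv):
        per_stage[s] += 1
print(per_stage)   # nodes per stage: the two directions fill opposite ends of the pipeline
```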
Inversion
- Heuristics for choosing which subtries to invert:
  - largest leaf
  - least height
  - largest leaf per height
  - least average depth per leaf (used in the following sections)
- Inversion Factor (IFR): 4-8 in the following sections
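A rough sketch of ranking subtries by "least average depth per leaf". How the Inversion Factor is applied is not stated on the slide; treating it as "invert about one subtrie in every IFR" below is purely an assumption for illustration, as are the toy leaf depths.

```python
IFR = 4   # Inversion Factor quoted on the slide (4-8 in the following sections)

def avg_depth_per_leaf(leaf_depths):
    """Average leaf depth of one subtrie; the ranking criterion named above."""
    return sum(leaf_depths) / len(leaf_depths)

# Toy subtries described by the depths of their leaves.
subtries = {"T0": [3, 3, 4], "T1": [1, 2], "T2": [5, 5, 6, 6], "T3": [2, 3, 3]}

ranked = sorted(subtries, key=lambda t: avg_depth_per_leaf(subtries[t]))
num_to_invert = max(1, len(subtries) // IFR)   # assumed reading of IFR, not from the paper
print("invert:", ranked[:num_to_invert])       # subtries with the least average leaf depth
```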
Node-to-stage mapping algorithm: O(HN), where H = # of pipeline stages, N = total # of trie nodes
Node fields:
- distance (in stages) to child
- memory address of child
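The slide's node layout can be illustrated with a simple bit-packing sketch. The 5-bit distance and 13-bit address widths are taken from the performance slide later on ((13 + 5)-bit nodes, 2^13 nodes per stage); the packing order is an assumption.

```python
DIST_BITS = 5    # distance (in stages) to the child
ADDR_BITS = 13   # child's memory address within its stage

def pack_node(distance_to_child, child_address):
    """Pack one trie node into an 18-bit word: [distance | address]."""
    assert 0 <= distance_to_child < (1 << DIST_BITS)
    assert 0 <= child_address < (1 << ADDR_BITS)
    return (distance_to_child << ADDR_BITS) | child_address

def unpack_node(word):
    return word >> ADDR_BITS, word & ((1 << ADDR_BITS) - 1)

word = pack_node(distance_to_child=3, child_address=0x1A2)
print(f"{word:018b}", unpack_node(word))   # 18-bit word, then (3, 418)
```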
Outline
- Overview
  - Front End
  - Back End
- Memory Balancing
  - Trie Partitioning
  - Subtrie-to-Pipeline Mapping
  - Node-to-Stage Mapping
- Performance
Performance
- Memory: 1.8 MB total, 18 KB per stage
  - (13 + 5) bits per node × 2^13 nodes per stage × 25 stages × 4 pipelines = 14.75 Mb ≈ 1.8 MB
- Throughput: 18.75 G packets / sec
  - 7.5 packets per clock (PPC) × 2.5 GHz = 18.75 G packets / sec
- Power consumption: 0.2 W per IP lookup
  - 0.008 × 25 = 0.2 W
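A quick check of the arithmetic on this slide, using the unit interpretation above (18-bit nodes, 2^13 nodes per stage, 25 stages per pipeline, 4 pipelines); the grouping of the factors is inferred from the other slides.

```python
node_bits   = 13 + 5          # child address + distance bits per node
nodes_stage = 2 ** 13
stages      = 25
pipelines   = 4

total_bits = node_bits * nodes_stage * stages * pipelines
print(total_bits / 1e6, "Mb")                              # 14.7456 ~ 14.75 Mb
print(total_bits / 8 / 1e6, "MB")                          # 1.84   ~ 1.8 MB
print(node_bits * nodes_stage / 8 / 1e3, "KB per stage")   # 18.4   ~ 18 KB

ppc, clock_ghz = 7.5, 2.5
print(ppc * clock_ghz, "G packets / sec")                  # 18.75

print(0.008 * 25, "W per IP lookup")                       # 0.2
```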