1 Searching Very Large Routing Tables in Wide Embedded Memory Author: Jan van Lunteren Publisher: GLOBECOM 2001 Presenter: Han-Chen Chen Date: 2010/01/06

2 Introduction  Exponentially growing routing tables create the need for increasingly storage-efficient lookup schemes that do not compromise on lookup performance and update rates.  This paper presents a novel IP lookup scheme for searching large routing tables in embedded memory. The Balanced Routing Table Search (BARTS) scheme exploits the wide data buses available in this technology to achieve improved storage efficiency over conventional lookup methods, in combination with wire-speed lookup performance and high update rates.

3 CONVENTIONAL LOOKUP SCHEMES (1/2)

4 CONVENTIONAL LOOKUP SCHEMES (2/2)
1. A nested prefix (a prefix that covers another prefix in the same table) results in a table entry containing both a search result and a pointer. Such an entry can become rather wide for large routing tables, leading to inefficient storage usage for the entries that only need one of the two fields. Leaf pushing can overcome this problem by moving the next-hop information of the covering prefix (Q in the figure) into the empty entries of the table indexed by the third IP address segment. However, this reduces update performance, as a larger number of table entries need modification for a single update.
2. Data structures composed of variable-sized buffers can suffer from memory fragmentation, which can significantly reduce the actual number of routing-table entries that fit in a given memory. Memory fragmentation can be reduced by limiting the number of buffer sizes and by defragmentation.
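As a rough illustration of leaf pushing, the sketch below (in C, with assumed field names and an assumed 8-bit-indexed child table, not the paper's exact layout) copies the next hop of a covering prefix into the empty entries of the next-level table, so each entry only has to hold one field at lookup time.

```c
#include <stdint.h>
#include <stddef.h>

#define TABLE_SIZE  256   /* child table indexed by an assumed 8-bit segment */
#define NO_NEXT_HOP 0u    /* 0 is treated here as "no next hop stored"       */

typedef struct entry {
    uint32_t      next_hop;  /* next-hop result, NO_NEXT_HOP if absent          */
    struct entry *child;     /* pointer to the next-level table, NULL if absent */
} entry_t;

/* Copy the next hop of the covering (nested) prefix into every empty entry
 * of the child table.  The cost shows up on updates: withdrawing or changing
 * the covering prefix now touches many entries instead of one. */
static void leaf_push(entry_t child_table[TABLE_SIZE], uint32_t covering_next_hop)
{
    for (size_t i = 0; i < TABLE_SIZE; i++) {
        if (child_table[i].next_hop == NO_NEXT_HOP)
            child_table[i].next_hop = covering_next_hop;
    }
}
```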

5 Table Compression The memory width allows an entire block to be read in one access. Parallel test logic will then determine the longest matching next-hop entry as well as the longest matching pointer entry.
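The following sketch emulates sequentially in C what the parallel test logic does on a block fetched in one wide access; the entry layout, block size, and field names are assumptions chosen for illustration, not the paper's exact format.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

#define BLOCK_ENTRIES 8      /* assumed block size N */

typedef struct {
    uint8_t  type;           /* 0 = empty, 1 = next-hop entry, 2 = pointer entry */
    uint8_t  prefix_len;     /* number of segment bits this entry still matches  */
    uint8_t  prefix_bits;    /* left-aligned prefix bits within the segment      */
    uint32_t value;          /* next hop, or pointer to the next-level table     */
} block_entry_t;

static bool entry_matches(const block_entry_t *e, uint8_t segment)
{
    if (e->prefix_len == 0)
        return true;                                  /* zero-length prefix matches everything */
    uint8_t mask = (uint8_t)(0xFFu << (8u - e->prefix_len));
    return (uint8_t)(segment & mask) == (uint8_t)(e->prefix_bits & mask);
}

/* Pick the longest matching next-hop entry and the longest matching pointer
 * entry within one block -- the job the hardware comparators do in parallel
 * on the wide data word. */
static void search_block(const block_entry_t blk[BLOCK_ENTRIES], uint8_t segment,
                         const block_entry_t **best_hop, const block_entry_t **best_ptr)
{
    *best_hop = NULL;
    *best_ptr = NULL;
    for (size_t i = 0; i < BLOCK_ENTRIES; i++) {
        const block_entry_t *e = &blk[i];
        if (e->type == 0 || !entry_matches(e, segment))
            continue;
        if (e->type == 1 && (*best_hop == NULL || e->prefix_len > (*best_hop)->prefix_len))
            *best_hop = e;
        if (e->type == 2 && (*best_ptr == NULL || e->prefix_len > (*best_ptr)->prefix_len))
            *best_ptr = e;
    }
}
```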

6 Compressed-Index Calculation (1/2) An optimum compressed index is calculated by a "brute-force" count of the actual number of collisions that occur for each possible value of each possible compressed index. The smallest compressed index for which the number of collisions for every value is bounded by N is then selected as the optimum compressed index. This requires a counter array that includes one counter for every possible combination of a compressed index and a compressed-index value. For a block size N and segment size s, a compressed index contains at most s - log2N + 1 bits, which bounds the total number of counters required.
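A minimal sketch of the brute-force collision count, assuming for simplicity that candidate compressed indices are formed from the k most-significant bits of an 8-bit segment (the paper evaluates a richer set of index masks with a dedicated counter array):

```c
#include <stdint.h>

#define SEGMENT_BITS 8    /* segment size s, assumed 8 bits for this sketch */

/* Return the smallest number of index bits k for which no compressed-index
 * value collects more than block_n prefixes.  prefixes[] is assumed to hold
 * the segment bits of the prefixes stored in this table, already expanded
 * to SEGMENT_BITS bits. */
static int smallest_compressed_index(const uint8_t prefixes[], int num_prefixes, int block_n)
{
    for (int k = 0; k <= SEGMENT_BITS; k++) {
        int counters[1 << SEGMENT_BITS] = {0};               /* one counter per index value */
        int worst = 0;
        for (int i = 0; i < num_prefixes; i++) {
            int value = prefixes[i] >> (SEGMENT_BITS - k);   /* take the k most-significant bits */
            if (++counters[value] > worst)
                worst = counters[value];
        }
        if (worst <= block_n)
            return k;                                        /* smallest index meeting the bound */
    }
    return SEGMENT_BITS;                                     /* full segment index as fallback */
}
```

A result of k = 0 corresponds to an index mask of zero: all (at most N) prefixes of the table end up in a single block.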

7 Compressed-Index Calculation (2/2)

8 Data Structure Entry type 00 is an empty entry. Entry type 01 is a next-hop entry. Entry type 10 is a pointer entry. Entry type 11 is a special pointer entry with an index mask equal to zero, which refers to a compressed table of at most N prefixes stored within a single block.
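A hedged sketch of decoding such a 32-bit entry: the two-bit type codes follow the slide, while the positions and widths of the remaining fields are assumptions chosen only for illustration (the 18-bit next-hop width is taken from the performance discussion later in the talk).

```c
#include <stdint.h>

enum entry_type {
    ENTRY_EMPTY         = 0,   /* 00 */
    ENTRY_NEXTHOP       = 1,   /* 01 */
    ENTRY_POINTER       = 2,   /* 10 */
    ENTRY_ZERO_MASK_PTR = 3    /* 11: pointer with index mask zero */
};

static inline enum entry_type entry_type_of(uint32_t entry)
{
    return (enum entry_type)(entry >> 30);   /* assumed: type field in the 2 MSBs */
}

static inline uint32_t entry_next_hop(uint32_t entry)
{
    return entry & 0x3FFFFu;                 /* assumed: 18-bit next-hop field     */
}

static inline uint32_t entry_pointer(uint32_t entry)
{
    return entry & 0x3FFFFFFFu;              /* assumed: pointer/index-mask payload */
}
```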

9 Incremental Updates and Memory Management  The data structure can be incrementally updated by creating modified copies of the corresponding compressed tables in memory, and linking them by an atomic write operation.  Memory can be managed with a buddy system if power-of-2 table sizes are enforced, which matches that allocation scheme very well. Otherwise, if memory fragmentation were to become a problem, the BARTS scheme can be adapted to use fewer buffer sizes by reducing the segment size s. A suboptimum compression is then achieved, but the implementation of the index-mask calculation as described becomes simpler because fewer counters are needed.
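A minimal sketch of the copy-then-link update step, assuming a software representation with C11 atomics; buddy-system allocation and reclamation of the old table copy are left out.

```c
#include <stdatomic.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    size_t    size;       /* number of 32-bit entries in the table */
    uint32_t *entries;
} table_t;

/* Pointer that lookups follow to reach this compressed table; declared
 * _Atomic so the link-in below is a single atomic write. */
static _Atomic(table_t *) parent_slot;

/* Build a modified copy of a compressed table off to the side, then attach
 * it with one atomic store, so concurrent lookups see either the old table
 * or the complete new one. */
static int update_entry(const table_t *old_tbl, size_t idx, uint32_t new_entry)
{
    table_t *copy = malloc(sizeof *copy);
    if (copy == NULL)
        return -1;
    copy->size = old_tbl->size;
    copy->entries = malloc(copy->size * sizeof *copy->entries);
    if (copy->entries == NULL) {
        free(copy);
        return -1;
    }
    memcpy(copy->entries, old_tbl->entries, copy->size * sizeof *copy->entries);
    copy->entries[idx] = new_entry;         /* apply the modification to the copy */

    atomic_store(&parent_slot, copy);       /* single atomic write links the new table */
    return 0;
}
```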

10 Performance (1/4) The first segment of each partition indexes an uncompressed table, while the remaining segments "index" compressed tables. All entries fit into 32 bits for the simulated routing tables, while still including an 18-bit next-hop field.
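As an example of how a partition maps onto table indices, the sketch below assumes one particular 16/8/8 split of an IPv4 address; the actual partitions evaluated in the paper may differ.

```c
#include <stdint.h>

/* Assumed 16/8/8 partition: segment 0 indexes the uncompressed root table,
 * segments 1 and 2 index compressed tables. */
static inline uint32_t segment0(uint32_t ip) { return ip >> 16; }           /* 16-bit root index      */
static inline uint32_t segment1(uint32_t ip) { return (ip >> 8) & 0xFFu; }  /* 8-bit compressed index */
static inline uint32_t segment2(uint32_t ip) { return ip & 0xFFu; }         /* 8-bit compressed index */
```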

11 Performance (2/4)

12 Performance (3/4)

13 Performance (4/4)

14 Thank you for listening