1 IP Routing table compaction and sampling schemes to enhance TCAM cache performance. Authors: Ruirui Guo, Jose G. Delgado-Frias. Publisher: Journal of Systems Architecture: the EUROMICRO Journal (January 2009). Presenter: Po Ting Huang. Date: 2009/10/28

2 Background. Routing tables are growing larger, so a large memory is required to store them in a local router. This restricts lookup speed, since the complete routing table is kept in slow main memory (storing it entirely in TCAM would be too costly). To speed up the address lookup process, a cache architecture is generally used in network processors.

3 Introduction. Using the TCAM caching architecture and our compaction scheme, the system can achieve a high hit rate with a significantly smaller number of entries; however, there is a potential problem with port assignments. This problem, called a port error, occurs when the port selected by the cache scheme differs from the port that would have been selected from the routing table, which leads to an inappropriate port assignment. We address this problem using a sampling technique.

4 Basic organization of the cache scheme for IP lookup. Studies have shown that network packet streams indeed exhibit temporal locality; that is, a routing entry that has been accessed recently has a high probability of being referenced again within a short period of time.

5 Evaluations of the proposed scheme are based on four IPv4 and six IPv6 trace files downloaded from the Measurement and Analysis on the WIDE Internet (MAWI) Working Group. Each trace file contains 2 million destination addresses. Analyzing the incoming destination addresses in each trace file, we found only several thousand unique addresses, which indicates that temporal locality exists in these trace files.

6 A TCAM cache organization. The TCAM cache scheme has been shown to have a lower miss rate in IP routing applications than an SRAM cache scheme [15]; the TCAM cache miss ratio is about half that of the SRAM cache [15].

7 Espresso-like scheme (C1). Entry addresses with the same destination port are considered for compaction. Two addresses that differ in only one bit position, including don't care (*), are candidates for compaction. If these conditions are satisfied, the two entries can be combined into one by replacing the differing bit with a don't care (*). The two entries that have been combined are removed from further consideration, and the new entry is added to the set of candidates for further compaction.
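The combining step can be sketched as follows. This is a minimal illustration that assumes entries are kept as ternary strings over {0, 1, *} together with a destination port; the helper names and entry format are illustrative, not the paper's code.

```python
# Minimal sketch of the C1 combining step (assumed entry format: ternary string + port).

def single_diff_position(a, b):
    """Return the single index where two equal-length ternary strings differ,
    or None if they differ in zero positions or in more than one."""
    diffs = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    return diffs[0] if len(diffs) == 1 else None

def combine(a, b):
    """Merge two addresses that differ in exactly one position by replacing
    that position with a don't care (*)."""
    i = single_diff_position(a, b)
    if i is None:
        raise ValueError("addresses are not candidates for compaction")
    return a[:i] + "*" + a[i + 1:]

# Two entries with the same destination port:
print(combine("0100*1", "0110*1"))   # -> 01*0*1
```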

8 Compaction with non-existing entries (C2). Two addresses with the same destination port that differ in only one bit, excluding don't care (*), are candidates for additional compaction. If there is no other address with a different destination port that could be compacted with either of the two addresses, the compaction is performed; otherwise, the current attempt to compact these entries is abandoned.
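One plausible reading of the C2 safety check, following the slide text literally, is sketched below: the two candidate addresses may be merged only if no entry with a different destination port could itself be compacted with (i.e. differs in exactly one position from) either of them. The entry format and function names are illustrative.

```python
# Sketch of the C2 safety check under the literal reading described above.

def one_position_apart(a, b):
    """True if two equal-length ternary strings differ in exactly one position."""
    return sum(x != y for x, y in zip(a, b)) == 1

def c2_merge_allowed(addr_a, addr_b, port, table):
    """table: iterable of (ternary_string, port) pairs from the routing table."""
    return not any(
        p != port and (one_position_apart(addr, addr_a) or
                       one_position_apart(addr, addr_b))
        for addr, p in table
    )

table = [("0100", 1), ("0110", 1), ("1110", 2)]
print(c2_merge_allowed("0100", "0110", 1, table))  # -> False: 1110 (port 2) blocks the merge
```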

9 Compaction using a continuous don't care threshold (C3). Two addresses with the same destination port that differ in only one bit, excluding don't care (*), are candidates for compaction. If there is no routing conflict in the routing table, the compaction is performed. If there exists only one entry address with a different destination port that is equivalent to the current compacted address, and the number of continuous don't cares (*) in the compacted entry is less than a given threshold, the compaction can still be performed; in this case, the entry with the different port assignment is marked with higher priority, that is, it is considered for a match before the compacted entry. If neither of the previous steps is possible, the compaction is abandoned.
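A sketch of the threshold test follows, assuming "continuous don't care" means the longest run of consecutive '*' characters in the compacted entry. The threshold value of 5 follows a later slide; the names are illustrative.

```python
# Sketch of the C3 continuous don't care threshold test (assumed semantics).

def longest_dont_care_run(entry):
    """Length of the longest run of consecutive '*' in a ternary string."""
    best = run = 0
    for ch in entry:
        run = run + 1 if ch == "*" else 0
        best = max(best, run)
    return best

def c3_threshold_ok(compacted, threshold=5):
    return longest_dont_care_run(compacted) < threshold

print(longest_dont_care_run("0***01****1"))  # -> 4
print(c3_threshold_ok("0***01****1"))        # -> True (4 < 5)
```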

10 Threshold. A long sequence of don't care elements increases the probability of a routing conflict; the purpose of setting a threshold on the number of continuous don't care elements is to limit this problem. A larger threshold value usually helps compact more routing entries, but it also increases the possibility of routing conflicts, which cause more port errors. Therefore, we take both the number of routing entries and the effective hit rate into account.

11 Effect of choosing different thresholds. Extensive simulations show that a threshold value of 5 yields a smaller port error ratio with fewer routing entries, and also requires far fewer searches in the routing table when our improved sampling techniques are applied to alleviate the port error problem.

12 Compaction using a threshold with different bits (C4). Example: *0100110* in entry x (5), 0****1111 in entry y (3), and *****11** in entry z. Assume that an address 10100*111 in entry w, with a different port assignment, exists; entry z would cover this entry and cause a port assignment conflict.
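The conflict in this example can be checked with a small ternary overlap test, assuming the usual ternary matching semantics (two entries conflict when at least one concrete address matches both). This is an illustration only; the entry names follow the slide.

```python
# Quick check of the z/w conflict from the example above (assumed semantics).

def overlap(a, b):
    """True if two ternary strings can both match at least one address."""
    return all(x == y or x == "*" or y == "*" for x, y in zip(a, b))

z = "*****11**"   # compacted entry z
w = "10100*111"   # entry w with a different destination port
print(overlap(z, w))  # -> True: z can capture addresses meant for w
```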

13 C4 (continued). An effective threshold should be determined by balancing the number of routing entries against the effective hit rate. In this study, the threshold for C4 is set to 3.

14 Evaluation of compaction schemes (I)

15 Evaluation of compaction schemes (II)

16 Comparison with other schemes. The compaction ratio is about 75.6% in [25], 52% in [24] (similar to C1), and 29% in the proposed scheme. The proposed compaction scheme yields a very high cache hit rate, which helps reduce the impact of replacement policies on the hit rate [26]; for hit rates above 90%, the difference in hit rate between replacement policies is less than 1% [26].

17 Definition of criteria. The port error ratio is the ratio of the number of port errors to the number of cache hits. A port error occurs when the port selected by the cache does not match the port that would be selected using the routing table; it is generated because the matching entry in the cache is not the longest-prefix-matching entry in the routing table. With the compaction scheme, the number of port errors increases, since an entry with a short prefix or with many don't cares may stay in the cache longer.
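The ratio defined above, computed on made-up illustrative counts (the numbers below are hypothetical, not from the paper):

```python
# Port error ratio = port errors / cache hits, on hypothetical counts.
port_errors = 150          # hypothetical number of port errors
cache_hits = 100_000       # hypothetical number of cache hits
port_error_ratio = port_errors / cache_hits
print(f"{port_error_ratio:.2%}")   # -> 0.15%
```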

18 Interval sampling (In) [25]. One search of the routing table is performed every M lookups, where M is the sampling rate. When a sample is taken and the port number found in the cache differs from that in the routing table, the matching entry from the routing table is written into the cache to fix future port assignments. Here M = 3.
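A minimal sketch of interval sampling follows. The cache and routing-table lookups are modeled as plain callables returning (entry, port); real TCAM and longest-prefix-match lookups are assumed, and all function names here are illustrative.

```python
# Sketch of interval sampling (In): verify every M-th lookup against the routing table.

M = 3  # sampling rate from the slide

def run_trace_interval_sampling(trace, cache_lookup, table_lookup, cache_insert):
    ports = []
    for i, addr in enumerate(trace, start=1):
        entry, port = cache_lookup(addr)            # fast cache lookup
        if i % M == 0:                              # one table search every M lookups
            true_entry, true_port = table_lookup(addr)
            if true_port != port:                   # port error detected
                cache_insert(true_entry)            # fix future port assignments
                port = true_port
        ports.append(port)
    return ports
```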

19 Port error ratio w/ and w/o interval sampling (In)

20 Port error ratio w/ and w/o interval sampling (In) (II). We observed that the port error problem is greatly reduced by interval sampling for IPv4 routing; this sampling decreases the port errors down to almost 20%. For IPv6, it works well for compacted tables without routing conflicts, such as those produced by C1 and C2; however, it is not adequate for C3 and C4, where for some traces the port error ratios remain over 1%.

21 We count the number of overlapping entries (N) that are covered by a particular entry.
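One plausible way to compute N per entry is sketched below: the number of other routing entries that a given (compacted) entry covers under the usual ternary matching semantics. The helper names are illustrative, not the paper's code.

```python
# Sketch of counting N: how many other entries a given entry covers (assumed semantics).

def covers(general, specific):
    """True if 'general' matches every address that 'specific' matches."""
    return all(g == "*" or (s != "*" and g == s)
               for g, s in zip(general, specific))

def count_covered(entry, addresses):
    """N for 'entry': how many other addresses it covers."""
    return sum(1 for other in addresses if other != entry and covers(entry, other))

print(count_covered("01***", ["01***", "010*1", "01101", "11101"]))  # -> 2
```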

22 The y-axis shows the sum of the port errors caused by the group of entries with the same value of N. This reflects the typical port error distribution for both C3 and C4.

23 Selective sampling (Sl). Label the entries with a high probability of causing port errors (i.e. N < 10), according to the port error distribution. If the matching entry is unlabeled, do not search the routing table; otherwise, search the routing table at the sampling rate and update the cache if a port error occurs.
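A sketch of selective sampling follows: only hits on entries labeled as error-prone (from the offline port error distribution) trigger routing-table checks, still at the sampling rate M. The interfaces are the same illustrative callables as in the interval-sampling sketch above.

```python
# Sketch of selective sampling (Sl): sample only labeled (error-prone) entries.

def run_trace_selective_sampling(trace, cache_lookup, table_lookup,
                                 cache_insert, labeled, M=3):
    sampled = 0
    ports = []
    for addr in trace:
        entry, port = cache_lookup(addr)
        if entry in labeled:                        # unlabeled entries are never verified
            sampled += 1
            if sampled % M == 0:                    # verify at the sampling rate
                true_entry, true_port = table_lookup(addr)
                if true_port != port:               # port error: update the cache
                    cache_insert(true_entry)
                    port = true_port
        ports.append(port)
    return ports
```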

24 Adaptive sampling (Ad)

