On Constructing Efficient Shared Decision Trees for Multiple Packet Filters
Authors: Bo Zhang, T. S. Eugene Ng
Publisher: IEEE INFOCOM 2010
Presenter: Han-Chen Chen
Date: 2010/06/02
Outline
Introduction
Background
Constructing Shared HyperCuts Decision Trees
Performance Evaluation
Introduction
Multiple packet filters serving different purposes may be deployed on a single physical router (e.g., firewalling, quality of service (QoS), virtual private networks (VPNs), and load balancing).
Sharing decision trees across filters reduces memory usage; the saved memory can be used to improve cache performance, to hold more packet filters, and to support more virtual routers.
This paper uses the HyperCuts decision tree to represent packet filters, since it is one of the most efficient data structures for packet filter matching.
Background (1/2)
Efficiency metrics of the HyperCuts decision tree:
1. Memory consumption
2. Average depth of leaf nodes: the average memory access cost of a search
3. Height of the tree: the worst-case memory access cost of a search
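A minimal sketch (not the paper's implementation) of computing the three metrics, assuming a hypothetical Node class in which a leaf is a node with an empty children list:

```python
class Node:
    def __init__(self, size_bytes, children=None):
        self.size_bytes = size_bytes    # memory footprint of this node
        self.children = children or []  # empty list => leaf node

def memory_consumption(root):
    """Metric 1: total memory used by all nodes in the tree."""
    return root.size_bytes + sum(memory_consumption(c) for c in root.children)

def leaf_depths(root, depth=0):
    """Yield the depth of every leaf node."""
    if not root.children:
        yield depth
    for c in root.children:
        yield from leaf_depths(c, depth + 1)

def average_leaf_depth(root):
    """Metric 2: average search cost (average memory accesses)."""
    depths = list(leaf_depths(root))
    return sum(depths) / len(depths)

def tree_height(root):
    """Metric 3: worst-case search cost (worst-case memory accesses)."""
    if not root.children:
        return 0
    return 1 + max(tree_height(c) for c in root.children)
```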
Background (2/2)
Constructing Shared HyperCuts Decision Trees (1/8)
1. Find subsets of packet filters whose shared HyperCuts decision tree is more efficient than a set of separate trees. Given pair-wise predictions for all possible pairs, a greedy heuristic algorithm groups the packet filters into a number of shared HyperCuts decision trees.
2. Construct the shared HyperCuts decision tree.
Constructing Shared HyperCuts Decision Trees (2/8)
1. Define good and bad pairs of packet filters. Two packet filters form a "good" pair if their shared HyperCuts tree has lower memory usage and a smaller average depth of leaf nodes than the two separate HyperCuts trees.
2. Use machine learning techniques to predict whether a pair of filters is good. Three techniques are used:
1. decision tree (DT)
2. generalized linear regression (GLR)
3. naive Bayes classifier (NBC)
Each model takes as input factors affecting the efficiency of the shared tree, is trained on 10% of the packet filter pairs, and then outputs a good/bad prediction for all packet filter pairs (a sketch follows below).
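A hedged sketch of this prediction step, assuming `features` holds one row of factors per filter pair and `labels` marks known good (1) / bad (0) pairs. The scikit-learn models below are stand-ins for the three techniques the slides name; in particular, `LogisticRegression` is used here as a generalized linear model, which may differ from the paper's exact GLR formulation:

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

def predict_good_pairs(features, labels):
    # Train on roughly 10% of the labeled pairs, as the slides describe,
    # then predict good/bad for every pair.
    X_train, _, y_train, _ = train_test_split(
        features, labels, train_size=0.10, random_state=0)
    models = {
        "DT":  DecisionTreeClassifier(),
        "GLR": LogisticRegression(max_iter=1000),  # stand-in for GLR
        "NBC": GaussianNB(),
    }
    predictions = {}
    for name, model in models.items():
        model.fit(X_train, y_train)
        predictions[name] = model.predict(features)
    return predictions
```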
Constructing Shared HyperCuts Decision Trees (3/8)
Factors affecting the efficiency of the shared tree:
1. Class-1 factors: simple statistical properties of a packet filter itself, namely the size of the packet filter and the number of unique elements in each field.
2. Class-2 factors: characteristics of the constructed HyperCuts decision tree, namely the memory consumption of the tree, the average depth of leaf nodes, the height of the tree, the number of leaf nodes, the number of internal nodes, and the total number of cuts on each field.
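A sketch of assembling the feature vector for one filter pair from these factors. The dictionary keys below are hypothetical names for the Class-1 (filter-level) and Class-2 (tree-level) statistics; the paper's exact encoding may differ:

```python
def pair_features(f1_stats, f2_stats, t1_stats, t2_stats):
    """Concatenate Class-1 and Class-2 factors for both filters in a pair."""
    feats = []
    for f_stats, t_stats in ((f1_stats, t1_stats), (f2_stats, t2_stats)):
        # Class-1 factors: properties of the packet filter itself.
        feats.append(f_stats["num_rules"])
        feats.extend(f_stats["unique_elements_per_field"])
        # Class-2 factors: properties of the filter's own HyperCuts tree.
        feats.append(t_stats["memory_bytes"])
        feats.append(t_stats["avg_leaf_depth"])
        feats.append(t_stats["height"])
        feats.append(t_stats["num_leaves"])
        feats.append(t_stats["num_internal"])
        feats.extend(t_stats["cuts_per_field"])
    return feats
```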
Constructing Shared HyperCuts Decision Trees (4/8)
Constructing Shared HyperCuts Decision Trees (5/8)
False positive rate: the fraction of bad pairs mistakenly predicted to be good.
Constructing Shared HyperCuts Decision Trees (6/8)
False negative rate: the fraction of good pairs mistakenly predicted to be bad (both rates are sketched below).
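A small sketch of both error rates, assuming `actual` and `predicted` are parallel sequences of True (good pair) / False (bad pair) labels:

```python
def false_positive_rate(actual, predicted):
    # Bad pairs mistakenly predicted to be good.
    preds_on_bad = [p for a, p in zip(actual, predicted) if not a]
    return sum(preds_on_bad) / len(preds_on_bad)

def false_negative_rate(actual, predicted):
    # Good pairs mistakenly predicted to be bad.
    preds_on_good = [p for a, p in zip(actual, predicted) if a]
    return preds_on_good.count(False) / len(preds_on_good)
```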
Constructing Shared HyperCuts Decision Trees (7/8)
Clustering packet filters based on the pair-wise predictions. Example: a graph over filters A-H whose weighted edges mark predicted good pairs (figure).
Step 1: Cluster_i = {B}, S_filter = {A, C, D, E, F, G, H}
Step 2: Cluster_i = {B, D}, S_filter = {A, C, E, F, G, H}
Step 3: Cluster_i = {B, C, D}, S_filter = {A, E, F, G, H}
(A sketch of the greedy heuristic follows.)
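A hedged sketch of the greedy clustering heuristic the slides walk through, assuming `good` is a set of frozensets {f1, f2} produced by the pair-wise prediction. The seed choice and growth rule below follow the A-H example (seed the cluster, then repeatedly pull in the filter most connected to it); the paper's exact scoring and tie-breaking may differ:

```python
def greedy_clusters(filters, good):
    remaining = set(filters)
    clusters = []
    while remaining:
        # Seed a new cluster with the remaining filter that has the most
        # good pairs (B in the slides' example).
        seed = max(remaining,
                   key=lambda f: sum(frozenset((f, g)) in good
                                     for g in remaining if g != f))
        cluster = {seed}
        remaining.remove(seed)
        while True:
            # Grow the cluster with the remaining filter that forms the most
            # good pairs with current members (D, then C, in the example).
            best, best_links = None, 0
            for f in remaining:
                links = sum(frozenset((f, m)) in good for m in cluster)
                if links > best_links:
                    best, best_links = f, links
            if best is None:
                break
            cluster.add(best)
            remaining.remove(best)
        clusters.append(cluster)
    return clusters
```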
Constructing Shared HyperCuts Decision Trees (8/8)
The original HyperCuts tree construction algorithm is extended. If F_1 and F_2 share a HyperCuts decision tree:
1. Child limit: computed from the combined rule count, where N_1 is the number of rules in F_1 and N_2 is the number of rules in F_2.
2. Number of unique elements: u_j = (u_1j + u_2j) / 2, where j is the dimension.
3. The rest of the algorithm is the same as the original HyperCuts algorithm.
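A minimal sketch of the two places where the shared-tree construction differs from the original algorithm. Standard HyperCuts bounds the number of children of a node by spfac * sqrt(N); applying that form to the combined rule count N_1 + N_2 is an assumption here, and the spfac value is hypothetical:

```python
import math

def shared_child_limit(n1, n2, spfac=2.0):
    # Assumed extension: the HyperCuts child limit spfac * sqrt(N),
    # evaluated on the combined rule count of both filters.
    return spfac * math.sqrt(n1 + n2)

def shared_unique_elements(u1, u2):
    # Per-dimension unique element counts averaged over the two filters:
    # u_j = (u_1j + u_2j) / 2, as on the slide.
    return [(a + b) / 2 for a, b in zip(u1, u2)]
```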
Performance Evaluation (1/4)
Three ratios compare each shared tree against the two separate trees it replaces (sketched below):
Memory consumption ratio
Average leaf depth ratio
Tree height ratio
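A sketch of the three ratios, with values below 1 favoring the shared tree. The baselines here are assumptions: combined memory for the memory ratio, the mean of the two average depths for the depth ratio (consistent with the "3% deeper" result on the next slide), and the larger of the two heights for the height ratio. The paper's exact definitions may differ:

```python
def memory_ratio(shared, t1, t2):
    # Shared-tree memory over the combined memory of the separate trees.
    return shared["memory"] / (t1["memory"] + t2["memory"])

def avg_leaf_depth_ratio(shared, t1, t2):
    # Shared-tree average leaf depth over the mean of the separate
    # trees' average leaf depths (assumed baseline).
    return shared["avg_leaf_depth"] / (
        (t1["avg_leaf_depth"] + t2["avg_leaf_depth"]) / 2)

def tree_height_ratio(shared, t1, t2):
    # Shared-tree height over the larger separate-tree height
    # (assumed baseline: worst-case lookup cost).
    return shared["height"] / max(t1["height"], t2["height"])
```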
Performance Evaluation (2/4)
Performance Evaluation (3/4)
With α fixed at 1, memory consumption is reduced by over 20% on average while average leaf depth increases by only 3%.
Performance Evaluation (4/4)
Computing time breakdown (in seconds) for each step of the proposed approach.