
1 Decision Tree Classification of Spatial Data Streams Using Peano Count Trees. Qiang Ding, Qin Ding*, William Perrizo. Department of Computer Science, North Dakota State University, USA (P-tree technology is patented by NDSU)

2 Decision Tree
A flow-chart-like hierarchical tree structure.
Root: represents the entire dataset.
Internal node: a node with children; denotes a test on one test attribute.
Leaf node: a node without children; represents a class label or class distribution.
Branch: represents an outcome of the test.

3 Data Stream
Large quantities of data.
Open-ended (data continues to arrive) and updated periodically.
Need to mine in real time or near real time.
Examples: spatial data streams such as AVHRR imagery (e.g., used for forest fire detection); stock market data.

4 Building Decision Tree Classifiers
Classifiers are built using training data (already classified). Once the classifier is built (trained), it is used on new, unclassified samples to predict their class.
Basic algorithm (a greedy algorithm):
The tree is constructed in a top-down, recursive, divide-and-conquer manner.
At the start, all training samples are at the root.
The training sample set is partitioned recursively based on a selected attribute (test attribute) at each internal node.
Test attributes are selected on the basis of a heuristic or statistical measure (e.g., information gain).

5 E.g., Binary Tree Building Algorithm
Input: training dataset S, split-selection method
Output: decision tree
Top-Down Decision Tree Induction Scheme (Binary Splits):
BuildTree(S, split-selection-method)
(1) If all points in S are in the same class, then return;
(2) Use the split-selection method to evaluate splits for each attribute;
(3) Use the best split found to partition S into S1 and S2;
(4) BuildTree(S1, split-selection-method);
(5) BuildTree(S2, split-selection-method).
Build the whole tree by calling BuildTree(dataset-TrainingData, split-selection-method).
Splits are not always binary (a split may range over the entire decision-attribute value set), and the split-selection method can be information gain, gain ratio, gini index, or others. A minimal sketch of the binary scheme follows.
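As a concrete reading of the scheme above, here is a minimal Python sketch. It assumes an in-memory list of samples, a parallel list of class labels, and a caller-supplied split-selection method that returns an (attribute index, threshold) pair, or (None, None) when no useful split exists; all names are illustrative, not from the paper.

class Node:
    # A leaf holds a predicted label; an internal node holds a binary split test.
    def __init__(self, label=None, attribute=None, threshold=None, left=None, right=None):
        self.label = label
        self.attribute = attribute
        self.threshold = threshold
        self.left = left
        self.right = right

def majority(classes):
    return max(set(classes), key=classes.count)

def build_tree(samples, classes, split_selection_method):
    # (1) If all points are in the same class, make a leaf and return.
    if len(set(classes)) == 1:
        return Node(label=classes[0])
    # (2) Evaluate candidate splits for each attribute with the supplied method.
    attribute, threshold = split_selection_method(samples, classes)
    if attribute is None:
        return Node(label=majority(classes))      # no useful split: majority-class leaf
    # (3) Use the best split found to partition S into S1 and S2.
    s1 = [i for i, s in enumerate(samples) if s[attribute] <= threshold]
    s2 = [i for i, s in enumerate(samples) if s[attribute] > threshold]
    if not s1 or not s2:
        return Node(label=majority(classes))      # degenerate split: stop here
    # (4)-(5) Recurse on both partitions.
    node = Node(attribute=attribute, threshold=threshold)
    node.left = build_tree([samples[i] for i in s1], [classes[i] for i in s1], split_selection_method)
    node.right = build_tree([samples[i] for i in s2], [classes[i] for i in s2], split_selection_method)
    return node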

6 Information Gain (ID3)
T = training set; S = any set of samples.
freq(Ci, S) = the number of samples in S that belong to class Ci; |S| = the number of samples in set S.
The information of set S is defined as:
Info(S) = - Σi [freq(Ci, S) / |S|] * log2[freq(Ci, S) / |S|]
Consider a similar measurement after T has been partitioned into {Ti} in accordance with the n outcomes of a test X (e.g., Ai-value > ai). The expected information requirement can be found as the weighted sum over the subsets:
Entropy(X) = Σi (|Ti| / |T|) * Info(Ti)
The quantity Gain(X) = Info(T) - Entropy(X) measures the information that is gained by partitioning T in accordance with the test X. The gain criterion then selects a test that maximizes this information gain. A small worked example in code follows.
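To illustrate these formulas (not part of the original slide), here is a short Python sketch that computes Info, the expected information Entropy(X), and Gain(X) from class counts; the 9/5 split used at the end is a made-up toy example.

from math import log2

def info(class_counts):
    # Info(S) = - sum_i (freq(Ci,S)/|S|) * log2(freq(Ci,S)/|S|)
    total = sum(class_counts)
    return -sum(c / total * log2(c / total) for c in class_counts if c > 0)

def gain(parent_counts, partitions):
    # partitions: class-count lists of T1..Tn after applying the test X
    total = sum(parent_counts)
    expected = sum(sum(p) / total * info(p) for p in partitions)   # Entropy(X)
    return info(parent_counts) - expected                          # Gain(X)

# Toy example: 9 positive / 5 negative samples split into two subsets.
print(gain([9, 5], [[6, 1], [3, 4]]))   # about 0.15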

7 Background on Spatial Data
Band: attribute. Pixel: transaction (tuple). Value: usually 0~255 (one byte).
Different images have different numbers of bands:
TM4/5: 7 bands (B, G, R, NIR, MIR, TIR, MIR2)
TM7: 8 bands (B, G, R, NIR, MIR, TIR, MIR2, PC)
TIFF: 3 bands (B, G, R)
Ground data: individual bands (Yield, Soil-Moisture, Nitrate, Temperature)

8 Spatial Data Formats
Existing formats:
BSQ (Band Sequential)
BIL (Band Interleaved by Line)
BIP (Band Interleaved by Pixel)
New format:
bSQ (bit Sequential)

9 Spatial Data Formats (Cont.)
Running example: two bands, BAND-1 and BAND-2, over a small pixel area [example pixel values not reproduced in the transcript]. BSQ format (2 files): one file per band (Band 1, Band 2).

10 Spatial Data Formats (Cont.)
The same example with the BIL format added (1 file): the lines (rows) of the two bands are interleaved in a single file.

11 Spatial Data Formats (Cont.)
The same example with the BIP format added (1 file): the band values of each pixel are interleaved in a single file.

12 Spatial Data Formats (Cont.)
The same example with the bSQ format added (16 files, one per band and bit position): B11 B12 B13 B14 B15 B16 B17 B18 B21 B22 B23 B24 B25 B26 B27 B28

13 bSQ Format and P-trees
The bSQ format separates each band into 8 files, one for each bit slice.
Reasons for using the bSQ format:
Different bits contribute to the value differently.
The bSQ format facilitates the representation of a precision hierarchy (from 1-bit up to 8-bit precision).
The bSQ format facilitates the creation of an efficient, data-mining-ready P-tree data structure, a P-tree algebra, and a data cube of P-tree root counts.
P-trees (Peano trees) represent bit slices, band values, and tuples in a recursive, quadrant-by-quadrant, compressed but lossless representation of the original data. A sketch of the bit-slice conversion follows.
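A minimal sketch of the bit-slice conversion, assuming band values arrive as a flat Python list of 8-bit integers; the B11..B18 naming with the most significant bit first is an assumed convention, not taken from the paper.

def to_bsq(band_values, band_id):
    # Split one 8-bit band into 8 bit-slice files; slice 1 holds the most significant
    # bit (assumed convention), slice 8 the least significant.
    files = {}
    for bit in range(8):
        shift = 7 - bit
        files["B%d%d" % (band_id, bit + 1)] = [(v >> shift) & 1 for v in band_values]
    return files

# Example: four pixel values of band 1.
print(to_bsq([3, 7, 2, 255], band_id=1)["B18"])   # least significant bits: [1, 1, 0, 1]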

14 An example of a P-tree
The slide shows an 8x8 bSQ bit slice, arranged in 2-D space in raster order, and the Peano count tree built from it: the root count is 55 (the number of 1-bits in the slice), the four level-1 quadrant counts are 16, 8, 15, and 16, and quadrants that are pure (Pure-1 or Pure-0) are not expanded further.
Terms illustrated: Peano or Z-ordering, pure (Pure-1/Pure-0) quadrant, root count, level, fan-out, QID (quadrant ID).

15 An example of a P-tree (cont.)
The same example, annotated with node addresses: each node is identified by its QID (quadrant ID), the sequence of quadrant numbers along the path from the root; e.g., the position (7, 1), written in binary as (111, 001), illustrates how a QID is obtained.
Terms illustrated: Peano or Z-ordering, pure (Pure-1/Pure-0) quadrant, root count, level, fan-out, QID (quadrant ID). A sketch of building such a count tree follows.
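The following recursive sketch (an interpretation for illustration, not the paper's code) builds a Peano count tree from a square 0/1 array: each node stores the 1-bit count of its quadrant, and pure quadrants are not expanded.

def build_ptree(bits):
    # bits: square list-of-lists of 0/1 with side length a power of two.
    n = len(bits)
    count = sum(sum(row) for row in bits)
    if count == 0 or count == n * n or n == 1:
        return {"count": count}                      # pure quadrant: no children
    h = n // 2
    quads = [[r[:h] for r in bits[:h]], [r[h:] for r in bits[:h]],   # Peano (Z) order
             [r[:h] for r in bits[h:]], [r[h:] for r in bits[h:]]]
    return {"count": count, "children": [build_ptree(q) for q in quads]}

# 4x4 example: the root count is the number of 1-bits; children give quadrant counts.
example = [[1, 1, 1, 0],
           [1, 1, 1, 1],
           [1, 1, 0, 0],
           [1, 1, 0, 0]]
print(build_ptree(example)["count"])   # 11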

16 P-tree variation – PM-tree
A Peano Mask tree (PM-tree) uses a mask instead of a count at each node: 1 denotes pure-1, 0 denotes pure-0, and m denotes mixed (3-value logic).
It provides an efficient representation for ANDing.

17 P-tree Algebra
Operations: AND, OR, Complement, and others (XOR, etc.).
The slide shows a PCT (Peano count tree) with root count 55 and the corresponding PMT (Peano mask tree), followed by two PM-trees, PMT1 and PMT2, and the PM-trees produced by their AND, their OR, and a complement [tree diagrams not reproduced in the transcript]. A small AND sketch follows.
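A small sketch of the AND operation on PM-trees under the 3-value logic above; the node representation ('0', '1', or ('m', [four children]) in Peano order) is an assumption made for illustration.

def pm_and(a, b):
    # Each node is '0' (pure-0), '1' (pure-1), or ('m', [four child nodes]).
    if a == '0' or b == '0':
        return '0'                   # anything AND pure-0 is pure-0
    if a == '1':
        return b                     # pure-1 is the identity for AND
    if b == '1':
        return a
    # Both mixed: AND the four quadrants; collapse if the result becomes pure.
    children = [pm_and(ca, cb) for ca, cb in zip(a[1], b[1])]
    if all(c == '0' for c in children):
        return '0'
    if all(c == '1' for c in children):
        return '1'
    return ('m', children)

t1 = ('m', ['1', '0', ('m', ['1', '1', '0', '0']), '1'])
t2 = ('m', ['1', '1', '0', ('m', ['0', '1', '1', '1'])])
print(pm_and(t1, t2))   # ('m', ['1', '0', '0', ('m', ['0', '1', '1', '1'])])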

18 Peano Cube (P-cube)
The (v1, v2, v3)-th cell of the P-cube contains P(v1, v2, v3) = P1,v1 AND P2,v2 AND P3,v3, where each value P-tree Pi,vi is the AND of the basic bit P-trees (or their complements) selected by the bits of vi; e.g., Pi,110 = Pi,1 AND Pi,2 AND P'i,3.
(The P-cube shown on the slide gives just the root counts of the P-trees.)
The P-cube can be rolled up (shown on the left of the slide), sliced, diced, etc. See the sketch below for forming a value P-tree.
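A sketch of how a value P-tree can be formed from basic bit P-trees. For simplicity the P-trees are represented here as flat bit lists so that AND is bitwise; the function name and representation are illustrative only.

def value_ptree(basic_slices, v_bits):
    # basic_slices[j] is the bit list for bit-slice j of band i (most significant first);
    # v_bits is the band value as a bit string, e.g. "110".
    result = [1] * len(basic_slices[0])
    for j, bit in enumerate(v_bits):
        slice_j = basic_slices[j]
        if bit == '1':
            result = [r & s for r, s in zip(result, slice_j)]        # AND with P_i,j
        else:
            result = [r & (1 - s) for r, s in zip(result, slice_j)]  # AND with P'_i,j
    return result

# P_i,110 = P_i,1 AND P_i,2 AND P'_i,3 on a 4-pixel band:
print(value_ptree([[1, 1, 0, 1], [1, 0, 1, 1], [0, 1, 0, 1]], "110"))   # [1, 0, 0, 0]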

19 Data Smoothing Using P-trees
Bottom-up purity shifting: at the lowest level, a quadrant count of 3 (out of 4) is replaced with 4, and a count of 1 is replaced with 0.
The data is thereby smoothed, and the P-tree becomes more compressed, since nearly pure quadrants collapse into pure ones. A sketch follows.
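A sketch of bottom-up purity shifting on the dictionary count trees from the earlier build_ptree sketch. It assumes the shifting is applied to the 2x2 (4-bit) leaf quadrants, replacing a count of 3 with 4 and a count of 1 with 0, and then collapsing subtrees that become pure; this reading of the slide is an assumption.

def smooth(node, side):
    # side = side length of the quadrant covered by this node (a power of two).
    if side == 2:                        # 4-bit leaf quadrant
        if node["count"] == 3:
            node["count"] = 4            # nearly pure-1 -> pure-1
        elif node["count"] == 1:
            node["count"] = 0            # nearly pure-0 -> pure-0
        if node["count"] in (0, 4):
            node.pop("children", None)   # quadrant is now pure: children not needed
        return node
    if "children" not in node:           # already pure at a higher level
        return node
    node["children"] = [smooth(c, side // 2) for c in node["children"]]
    node["count"] = sum(c["count"] for c in node["children"])
    # If all four sub-quadrants became pure in the same direction, collapse them.
    if node["count"] in (0, side * side) and all("children" not in c for c in node["children"]):
        del node["children"]
    return node

# Usage (reusing build_ptree from the earlier sketch):
# smoothed = smooth(build_ptree(bits), side=len(bits))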

20 Decision Tree Induction Using P-trees
Basic ideas:
Calculate the information gain (or gain ratio, gini index, etc.) using the count information recorded in P-trees.
P-tree generation replaces sub-sample-set creation; it is done once for the life of the dataset, so its cost is amortized over the life of the data.
P-trees also determine whether all samples at a node are in the same class, without an additional database scan.

21 Some Notation and Definitions
Store the decision path for each node. E.g., the decision path for node N09 is: (Band 2, value 0011), (Band 3, value 1000); the decision path for the ROOT is empty.
Class attribute: B[0]. (The slide shows an example decision tree with nodes N01 through N14.)
Given node N's decision path (B[1], v[1]), (B[2], v[2]), ..., (B[t], v[t]), the P-tree of the sample set represented by node N is:
P_Set(N) = P_B[1](v[1]) AND P_B[2](v[2]) AND ... AND P_B[t](v[t])
For the ROOT node R, where T is the whole training set, P_Set(R) is the full P-tree (unit P-tree).

22 Using P-trees to Calculate Information Gain
Let P = node N's P-tree and RC = the root-count function. If B is the decision (class) attribute with values v[1], ..., v[n] at node N, node N's information is:
I = - Σi pi * log2(pi), where pi = RC(P AND P_B,v[i]) / RC(P)
The information gain of attribute A at node N is Gain(A) = I - E(A), where the entropy is
E(A) = Σj [RC(P AND P_A,vA[j]) / RC(P)] * I(Nj)
Here vA[1], ..., vA[n] are the possible values of A if node N is classified by attribute A, and Nj is the child node corresponding to A = vA[j]. See the sketch below.
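To make the count-based computation concrete, here is a Python sketch that treats every P-tree as a flat bit list, so that AND is bitwise and the root count is the number of 1s; real P-trees would compute root counts directly on the compressed trees. The names RC, p_and, info, and gain are illustrative.

from math import log2

def RC(ptree):
    return sum(ptree)                        # root count = number of 1-bits

def p_and(a, b):
    return [x & y for x, y in zip(a, b)]

def info(node_ptree, class_ptrees):
    # I = -sum_i pi * log2(pi), where pi = RC(P AND P_B,v[i]) / RC(P)
    total = RC(node_ptree)
    ps = [RC(p_and(node_ptree, cp)) / total for cp in class_ptrees]
    return -sum(p * log2(p) for p in ps if p > 0)

def gain(node_ptree, class_ptrees, attr_value_ptrees):
    # Gain(A) = I - E(A); E(A) is the weighted information of the children after splitting on A.
    total = RC(node_ptree)
    e = 0.0
    for vp in attr_value_ptrees:
        child = p_and(node_ptree, vp)
        if RC(child):
            e += RC(child) / total * info(child, class_ptrees)
    return info(node_ptree, class_ptrees) - e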

23 Example – Phase 1
Training set in BSQ format (pixel coordinates | B1 | B2 | B3 | B4):
0,0 | 0011 | 0111 | 1000 | 1011
0,1 | 0011 | 0011 | 1000 | 1111
0,2 | 0111 | 0011 | 0100 | 1011
0,3 | 0111 | 0010 | 0101 | 1011
1,0 | 0011 | 0111 | 1000 | 1011
1,1 | 0011 | 0011 | 1000 | 1011
1,2 | 0111 | 0011 | 0100 | 1011
1,3 | 0111 | 0010 | 0101 | 1011
2,0 | 0010 | 1011 | 1000 | 1111
2,1 | 0010 | 1011 | 1000 | 1111
2,2 | 1010 | 1010 | 0100 | 1011
2,3 | 1111 | 1010 | 0100 | 1011
3,0 | 0010 | 1011 | 1000 | 1111
3,1 | 1010 | 1011 | 1000 | 1111
3,2 | 1111 | 1010 | 0100 | 1011
3,3 | 1111 | 1010 | 0100 | 1011
bSQ files: B11, B12, B13, B14; B21, B22, B23, B24; and similarly for bands 3 and 4 (one file per band and bit position).
Basic P-trees: P11, P12, P13, P14; P21, P22, P23, P24; P31, P32, P33, P34; P41, P42, P43, P44; together with their complements P11', P12', ..., P44'.
Value P-trees: P1,0000 through P1,1111; P2,0000 through P2,1111; P3,0000 through P3,1111; P4,0000 through P4,1111 (one value P-tree per band and 4-bit value). A sketch of this phase follows.
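As a sketch of how Phase 1 could look in code, using the flat bit-list stand-in for P-trees from the earlier sketches rather than the compressed quadrant trees of the paper, the fragment below derives basic and value P-trees for band B2 from the first four training tuples; all names are illustrative.

def basic_ptrees(tuples, band, bits=4):
    # One bit list per bit position of the given band (e.g. P21..P24 for band 2).
    return [[int(t[band][j]) for t in tuples] for j in range(bits)]

def value_ptrees(basic, bits=4):
    # P_{band,v} for every value v: AND of basic slices or their complements.
    n = len(basic[0])
    result = {}
    for v in range(2 ** bits):
        p = [1] * n
        for j in range(bits):
            bit = (v >> (bits - 1 - j)) & 1
            p = [x & (s if bit else 1 - s) for x, s in zip(p, basic[j])]
        result[format(v, "0%db" % bits)] = p
    return result

# First four pixels of the training set above (bands B1..B4 as bit strings):
tuples = [("0011", "0111", "1000", "1011"),
          ("0011", "0011", "1000", "1111"),
          ("0111", "0011", "0100", "1011"),
          ("0111", "0010", "0101", "1011")]
b2 = basic_ptrees(tuples, band=1)
print(sum(value_ptrees(b2)["0011"]))   # root count of P2,0011 over these four pixels: 2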

24 Example – Phase 2
Initially the decision tree is a single node representing the entire training set.
Start with A = B2. Calculate I, E, and Gain(B2) using the root counts of the value P-trees. Calculate Gain(B3) and Gain(B4) in the same way. Select the best split attribute, say B2.
Branches are created for each value of B2, and the samples are partitioned accordingly:
B2=0010 → SampleSet1
B2=0011 → SampleSet2
B2=0111 → SampleSet3
B2=1010 → SampleSet4
B2=1011 → SampleSet5
Advance the algorithm recursively on each sub-sample set.
Stopping rule: e.g., when all samples belong to the same class or no attributes remain.
This is the final decision tree:
B2=0010 → B1=0111
B2=0011 → B3=0100 → B1=0111
B2=0011 → B3=1000 → B1=0011
B2=0111 → B1=0011
B2=1010 → B1=1111
B2=1011 → B1=0010

25 Preliminary Performance Study: P-Classifier versus ID3
The slide shows a chart of classification time against dataset size (20 to 80 million samples) for ID3 and the P-Classifier: classification cost with respect to the dataset size.

26 Conclusion
Decision tree classification on spatial data streams using P-trees is especially efficient for stream data.

