1 Data Mining Spring 2007: Noisy Data and Data Discretization using Entropy-Based and ChiMerge Methods

2 Noisy Data Noise: random error; data that is present but not correct. –Data transmission errors –Data entry problems Removing noise: –Data smoothing (rounding, averaging within a window). –Clustering/merging and detecting outliers. Data smoothing: –First sort the data and partition it into (equi-depth) bins. –Then smooth the values in each bin using smoothing by bin means, smoothing by bin medians, smoothing by bin boundaries, etc.

3 Noisy Data (Binning Methods)
Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
* Partition into (equi-depth) bins:
  - Bin 1: 4, 8, 9, 15
  - Bin 2: 21, 21, 24, 25
  - Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
  - Bin 1: 9, 9, 9, 9
  - Bin 2: 23, 23, 23, 23
  - Bin 3: 29, 29, 29, 29
* Smoothing by bin boundaries:
  - Bin 1: 4, 4, 4, 15
  - Bin 2: 21, 21, 25, 25
  - Bin 3: 26, 26, 26, 34
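A minimal sketch in Python of equi-depth binning with the two smoothing rules above, assuming plain lists of numbers (the function names are illustrative, not from any particular library):

def equi_depth_bins(values, n_bins):
    """Sort the values and split them into bins of (roughly) equal size."""
    data = sorted(values)
    size = len(data) // n_bins
    bins = [data[i * size:(i + 1) * size] for i in range(n_bins - 1)]
    bins.append(data[(n_bins - 1) * size:])   # last bin takes any remainder
    return bins

def smooth_by_means(bins):
    """Replace every value in a bin by the (rounded) bin mean."""
    return [[round(sum(b) / len(b))] * len(b) for b in bins]

def smooth_by_boundaries(bins):
    """Replace every value by the closer of the bin's min or max."""
    return [[min(b) if v - min(b) <= max(b) - v else max(b) for v in b]
            for b in bins]

prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]
bins = equi_depth_bins(prices, 3)
print(bins)                       # [[4, 8, 9, 15], [21, 21, 24, 25], [26, 28, 29, 34]]
print(smooth_by_means(bins))      # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
print(smooth_by_boundaries(bins)) # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]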

4 Noisy Data (Clustering) Outliers may be detected by clustering, where similar values are organized into groups or “clusters”. Values that fall outside of the set of clusters may be considered outliers.

5 Data Discretization The task of attribute (feature) discretization techniques is to discretize the values of continuous features into a small number of intervals, where each interval is mapped to a discrete symbol. Advantages: –Simplified data description; easier-to-understand data and final data-mining results. –Only a small number of interesting rules are mined. –End-result processing time is decreased. –End-result accuracy is improved.

6 Effect of Continuous Data on Result Accuracy Data mining discovers rules such as:
If age <= 30 and income = ‘medium’ and age = ‘9’ then buys_computer = ‘no’
If age <= 30 and income = ‘medium’ and age = ‘10’ then buys_computer = ‘no’
If age <= 30 and income = ‘medium’ and age = ‘11’ then buys_computer = ‘no’
If age <= 30 and income = ‘medium’ and age = ‘12’ then buys_computer = ‘no’
Discover only those rules whose support (frequency) is >= 1. Due to values missing from the training dataset, the accuracy of prediction decreases and becomes “66.7%”.

7 Entropy-Based Discretization Given a set of samples S, if S is partitioned into two intervals S1 and S2 using boundary T, the entropy after partitioning is

E(S, T) = (|S1| / |S|) Ent(S1) + (|S2| / |S|) Ent(S2), with Ent(S1) = - Σ_i p_i log2(p_i)

where p_i is the probability of class i in S1, determined by dividing the number of samples of class i in S1 by the total number of samples in S1 (and analogously for Ent(S2)).

8 Entropy-Based Discretization The boundary that minimizes the entropy function over all possible boundaries is selected as a binary discretization. The process is recursively applied to the partitions obtained until some stopping criterion is met, e.g., until the entropy reduction Ent(S) - E(S, T) falls below a small threshold δ.
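A minimal sketch of this boundary-selection step in Python; the data layout (parallel lists of values and class labels) and helper names are my own assumptions, not from the slides:

import math
from collections import Counter

def ent(labels):
    """Class entropy: -sum(p_i * log2(p_i)) over the classes present in labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_boundary(values, labels):
    """Try every midpoint between adjacent distinct values and return the
    boundary T with the lowest weighted entropy E(S, T)."""
    pairs = sorted(zip(values, labels))
    xs = [v for v, _ in pairs]
    ys = [l for _, l in pairs]
    n = len(pairs)
    best_t, best_e = None, float("inf")
    for i in range(1, n):
        if xs[i] == xs[i - 1]:
            continue                      # no boundary between equal values
        t = (xs[i] + xs[i - 1]) / 2
        e = (i / n) * ent(ys[:i]) + ((n - i) / n) * ent(ys[i:])
        if e < best_e:
            best_t, best_e = t, e
    return best_t, best_e

Applying best_boundary recursively to each resulting partition, and stopping once the entropy reduction becomes negligible, yields the final discretization intervals.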

9 Example 1

ID:    1   2   3   4   5   6   7   8   9
Age:   21  22  24  25  27  27  27  35  41
Grade: F   F   P   F   P   P   P   P   P

Let Grade be the class attribute. Use entropy-based discretization to divide the range of ages into different discrete intervals. There are 6 possible boundaries: 21.5, 23, 24.5, 26, 31, and 38, e.g. (21 + 22) / 2 = 21.5 and (22 + 24) / 2 = 23. Let us consider the boundary at T = 21.5: S1 = {21} and S2 = {22, 24, 25, 27, 27, 27, 35, 41}.
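The candidate boundaries are simply the midpoints between adjacent distinct sorted values; a quick illustrative check (variable names are my own):

ages = [21, 22, 24, 25, 27, 27, 27, 35, 41]
distinct = sorted(set(ages))
# midpoints between adjacent distinct ages
boundaries = [(a + b) / 2 for a, b in zip(distinct, distinct[1:])]
print(boundaries)   # [21.5, 23.0, 24.5, 26.0, 31.0, 38.0]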

10 Example 1 (cont’d) The numbers of elements in S1 and S2 are |S1| = 1 and |S2| = 8. The entropy of S1 (one F sample) is Ent(S1) = -(1/1) log2(1/1) = 0. The entropy of S2 (2 F and 6 P samples) is Ent(S2) = -(2/8) log2(2/8) - (6/8) log2(6/8) ≈ 0.811.

11 Example 1 (cont’d) Hence, the entropy after partitioning at T = 21.5 is E(S, 21.5) = (1/9) Ent(S1) + (8/9) Ent(S2) = (1/9)(0) + (8/9)(0.811) ≈ 0.721.
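A brief check of this arithmetic, with the entropy helper repeated here so the snippet stands alone:

import math

def ent(counts):
    """Entropy from a list of class counts."""
    n = sum(counts)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

ent_s1 = ent([1])        # S1 = {21}: 1 F  -> 0.0
ent_s2 = ent([2, 6])     # S2: 2 F, 6 P    -> ~0.811
e_split = (1 / 9) * ent_s1 + (8 / 9) * ent_s2
print(round(ent_s2, 3), round(e_split, 3))   # 0.811 0.721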

12 Example 1 (cont’d) Compute the entropy after partitioning for each of the boundaries in the same way: E(S, 21.5), E(S, 23), ..., E(S, 38). Select the boundary with the smallest entropy; suppose the best is T = 23. Now recursively apply entropy discretization to both partitions.

13 ChiMerge (Kerber92) This discretization method uses a merging (bottom-up) approach. ChiMerge’s view: –First sort the data on the attribute being discretized. –List all possible boundaries, i.e., the initial intervals. In the last example there were 6 boundaries plus the lower endpoint 0: 0, 21.5, 23, 24.5, 26, 31, and 38. –For every pair of adjacent intervals, calculate the χ2 test of class independence, e.g., {0, 21.5} and {21.5, 23}; {21.5, 23} and {23, 24.5}; and so on. –Pick the pair of adjacent intervals with the best (lowest) χ2 value and merge them.

14 ChiMerge -- The Algorithm 1. Compute the χ2 value for each pair of adjacent intervals. 2. Merge the pair of adjacent intervals with the lowest χ2 value. 3. Repeat steps 1 and 2 until the χ2 values of all adjacent pairs exceed a threshold.

15 Chi-Square Test

χ2 = Σ_i Σ_j (o_ij - e_ij)^2 / e_ij

o_ij = observed frequency of interval i for class j
e_ij = expected frequency = (R_i * C_j) / N

        Class 1   Class 2   Σ
Int 1   o_11      o_12      R_1
Int 2   o_21      o_22      R_2
Σ       C_1       C_2       N
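A small, self-contained sketch of this statistic for a two-interval, two-class table; the helper name and the replacement of zero expected counts by a small value are my own choices, not from the slides:

def chi2_2x2(table, min_expected=0.1):
    """chi^2 for a 2x2 contingency table given as [[o11, o12], [o21, o22]].
    Expected counts of zero are replaced by a small value (here 0.1)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, o in enumerate(row):
            e = row_totals[i] * col_totals[j] / n
            e = max(e, min_expected)          # avoid division by zero
            chi2 += (o - e) ** 2 / e
    return chi2

# Counts taken from the worked example on the later slides:
print(round(chi2_2x2([[2, 1], [2, 0]]), 3))   # ~0.833
print(round(chi2_2x2([[4, 1], [1, 3]]), 2))   # ~2.72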

16 ChiMerge Example Data Set

Sample   F    K
1        1    1
2        3    2
3        7    1
4        8    1
5        9    1
6        11   2
7        23   2
8        37   1
9        39   2
10       45   1
11       46   1
12       59   1

Interval points for feature F are: 0, 2, 5, 7.5, 8.5, 10, etc.
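A minimal sketch of the full merging loop on this data set. The 2.706 threshold (the 90% significance level for one degree of freedom) follows the slides; the interval representation, the per-distinct-value starting intervals, and the chi2_2x2 helper (repeated so the snippet runs on its own) are illustrative assumptions:

def chi2_2x2(table, min_expected=0.1):
    """chi^2 for a 2x2 contingency table [[o11, o12], [o21, o22]]."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    return sum((o - max(rows[i] * cols[j] / n, min_expected)) ** 2
               / max(rows[i] * cols[j] / n, min_expected)
               for i, r in enumerate(table) for j, o in enumerate(r))

def chimerge(values, classes, threshold=2.706):
    """Merge adjacent intervals until every adjacent pair exceeds the chi^2 threshold."""
    data = sorted(zip(values, classes))
    labels = sorted(set(classes))
    # start with one interval per distinct value, each holding its class counts
    intervals = []                     # list of (lower_value, [count per class])
    for v, k in data:
        if not intervals or v != intervals[-1][0]:
            intervals.append((v, [0] * len(labels)))
        intervals[-1][1][labels.index(k)] += 1
    while len(intervals) > 1:
        chis = [chi2_2x2([intervals[i][1], intervals[i + 1][1]])
                for i in range(len(intervals) - 1)]
        best = min(range(len(chis)), key=chis.__getitem__)
        if chis[best] > threshold:
            break                      # every adjacent pair is class-dependent enough
        lo, counts = intervals[best]
        _, counts2 = intervals.pop(best + 1)
        intervals[best] = (lo, [a + b for a, b in zip(counts, counts2)])
    return [lo for lo, _ in intervals]

F = [1, 3, 7, 8, 9, 11, 23, 37, 39, 45, 46, 59]
K = [1, 2, 1, 1, 1, 2, 2, 1, 2, 1, 1, 1]
print(chimerge(F, K))   # lowest F value in each surviving interval

With the 2.706 threshold this greedy merging should leave three groups of samples, corresponding to the final intervals [0, 10], [10, 42], and [42, 60] traced by hand on the following slides (up to how the interval endpoints are written).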

17 ChiMerge Example (cont.) χ2 was minimum for the adjacent intervals [7.5, 8.5] and [8.5, 10]:

             K=1     K=2     Σ
[7.5, 8.5]   A11=1   A12=0   R1=1
[8.5, 10]    A21=1   A22=0   R2=1
Σ            C1=2    C2=0    N=2

Based on the table’s values, we can calculate the expected values: E11 = 2/2 = 1, E12 = 0/2 = 0 (replaced by the small value 0.1), E21 = 2/2 = 1, and E22 = 0/2 = 0 (replaced by 0.1), and the corresponding χ2 test:
χ2 = (1 – 1)^2 / 1 + (0 – 0.1)^2 / 0.1 + (1 – 1)^2 / 1 + (0 – 0.1)^2 / 0.1 = 0.2
For one degree of freedom (d = 1), χ2 = 0.2 < 2.706, so MERGE!
(o_ij = observed frequency of interval i for class j; e_ij = expected frequency = (R_i * C_j) / N.)

18 ChiMerge Example (cont.) Additional iterations:

             K=1     K=2     Σ
[0, 7.5]     A11=2   A12=1   R1=3
[7.5, 10]    A21=2   A22=0   R2=2
Σ            C1=4    C2=1    N=5

E11 = 12/5 = 2.4, E12 = 3/5 = 0.6, E21 = 8/5 = 1.6, and E22 = 2/5 = 0.4
χ2 = (2 – 2.4)^2 / 2.4 + (1 – 0.6)^2 / 0.6 + (2 – 1.6)^2 / 1.6 + (0 – 0.4)^2 / 0.4 ≈ 0.834
For one degree of freedom (d = 1), χ2 = 0.834 < 2.706, so MERGE!

19 ChiMerge Example (cont.)

              K=1     K=2     Σ
[0, 10.0]     A11=4   A12=1   R1=5
[10.0, 42.0]  A21=1   A22=3   R2=4
Σ             C1=5    C2=4    N=9

E11 = 2.78, E12 = 2.22, E21 = 2.22, E22 = 1.78, and χ2 = 2.72 > 2.706, so NO MERGE!
Final discretization: [0, 10], [10, 42], and [42, 60]

20 References –Jiawei Han and Micheline Kamber, “Data Mining: Concepts and Techniques”, Morgan Kaufmann Publishers, August 2000 (Chapter 3). –Mehmed Kantardzic, “Data Mining: Concepts, Models, Methods, and Algorithms”, John Wiley & Sons, 2003 (Chapter 3).

