
1 DISCRETIZATION ALGORITHMS Sai Jyothsna Jonnalagadda MS Computer Science

2 Outline
 Why discretize features/attributes
 Unsupervised discretization algorithms: Equal Width, Equal Frequency
 Supervised discretization algorithms: CADD Discretization, CAIR Discretization, CAIM Discretization, CACC Discretization
 Other discretization methods: k-means clustering, one-level decision tree, dynamic attribute discretization, Paterson-Niblett

3 Why Discretize? The goal of discretization is to reduce the number of values a continuous attribute assumes, by grouping them into a number, n, of intervals (bins). Discretization is often a required preprocessing step for many supervised learning methods.

4 Discretization Algorithms Discretization algorithms can be divided into:
 Unsupervised vs. Supervised: unsupervised algorithms do not use class information; supervised algorithms do.
 Static vs. Dynamic: discretization of continuous attributes is most often performed one attribute at a time, independent of the other attributes – this is known as static attribute discretization. A dynamic algorithm searches for all possible intervals for all features simultaneously.
 Local vs. Global: if the partitions produced apply only to localized regions of the instance space, they are called local. When all attributes are discretized simultaneously, producing n1 x n2 x … x ni x … regions, where ni is the number of intervals of the ith attribute, such methods are called global.

5 Discretization Any discretization process consists of two steps:
 Step 1: decide the number of discrete intervals. This is often done by the user, although a few discretization algorithms are able to do it on their own.
 Step 2: determine the width (boundaries) of each interval. This is often done by the discretization algorithm itself.
Considerations when deciding the number of discretization intervals:
 large number – more of the original information is retained
 small number – the new feature is “easier” for subsequently used learning algorithms
The computational complexity of discretization should be low, since this is only a preprocessing step.

6 Backup slides

7 Discretization The discretization scheme depends on the search procedure – it can start with either:
 a minimum number of discretizing points, and search for the optimal number of discretizing points as the search proceeds, or
 a maximum number of discretizing points, and search towards a smaller number of points, which defines the optimal discretization.

8 Heuristics for guessing the number of intervals
1. Use a number of intervals that is greater than the number of classes to recognize.
2. Use the rule-of-thumb formula: n_Fi = M / (3*C), where: M – number of training examples/instances, C – number of classes, F_i – the ith attribute.
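
As a quick illustration, here is a minimal Python sketch of this rule of thumb (the function name, the choice to round up, and enforcing at least C intervals per heuristic 1 are assumptions):

```python
import math

def rule_of_thumb_intervals(m_examples: int, c_classes: int) -> int:
    # n_Fi = M / (3 * C), rounded up, and at least the number of classes
    return max(c_classes, math.ceil(m_examples / (3 * c_classes)))

print(rule_of_thumb_intervals(33, 3))  # 33 / 9 = 3.67 -> 4, as on the next slide
```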

9 Unsupervised Discretization Example of the rule of thumb: C = 3 classes (green, blue, red), M = 33 examples. Number of discretization intervals: n_Fi = M / (3*C) = 33 / (3*3) = 3.67 ≈ 4

10 Unsupervised Discretization Equal Width Discretization
1. Find the minimum and maximum values of the continuous feature/attribute F_i.
2. Divide the range of attribute F_i into the user-specified number, n_Fi, of equal-width discrete intervals.
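
A minimal Python sketch of equal-width discretization (names are illustrative; assigning a value by counting the boundaries it exceeds makes intervals right-closed, an assumption the slide leaves open):

```python
def equal_width_boundaries(values, n_intervals):
    # Step 1: range of the attribute; Step 2: n equal-width intervals
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_intervals
    return [lo + k * width for k in range(1, n_intervals)]  # inner boundaries

def interval_index(v, boundaries):
    # number of inner boundaries the value exceeds = its 0-based interval
    return sum(v > b for b in boundaries)

values = [4.3, 5.0, 5.8, 6.1, 6.7, 7.9]
cuts = equal_width_boundaries(values, 4)
print(cuts, [interval_index(v, cuts) for v in values])
```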

11 Unsupervised Discretization Equal Width Discretization example: n_Fi = M / (3*C) = 33 / (3*3) ≈ 4

12 Unsupervised Discretization Equal Width Discretization The number of intervals is specified by the user or calculated by the rule-of-thumb formula. The number of intervals should be larger than the number of classes, to retain the mutual information between class labels and intervals.
 Disadvantage: if the values of the attribute are not distributed evenly, a large amount of information can be lost.
 Advantage: if the number of intervals is large enough (i.e., the width of each interval is small), little of the original information is lost.

13 Unsupervised Discretization Equal Frequency Discretization
1. Sort the values of the discretized feature F_i in ascending order.
2. Find the number of all possible values for feature F_i.
3. Divide the values of feature F_i into the user-specified number, n_Fi, of intervals, where each interval contains the same number of sorted sequential values.
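
A minimal Python sketch of equal-frequency discretization; placing each boundary at the midpoint between neighbouring sorted values is an assumption, since the slide only requires equal counts per interval:

```python
def equal_frequency_boundaries(values, n_intervals):
    srt = sorted(values)                         # step 1: sort ascending
    per_bin = len(srt) / n_intervals             # e.g. 33 / 4, about 8 per interval
    cuts = []
    for k in range(1, n_intervals):
        i = round(k * per_bin)                   # end index of the k-th interval
        cuts.append((srt[i - 1] + srt[i]) / 2)   # boundary between neighbours
    return cuts

values = [1, 2, 2, 3, 5, 8, 9, 9, 10, 12, 13, 15]
print(equal_frequency_boundaries(values, 4))     # 3 inner boundaries -> 4 bins
```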

14 Unsupervised Discretization Equal Frequency Discretization example: n_Fi = M / (3*C) = 33 / (3*3) ≈ 4, values/interval = 33 / 4 ≈ 8

15 Unsupervised Discretization Equal Frequency Discretization
 No search strategy.
 The number of intervals is specified by the user or calculated by the rule-of-thumb formula.
 The number of intervals should be larger than the number of classes, to retain the mutual information between class labels and intervals.

16 Supervised Discretization
 CADD Discretization
 CAIR Discretization
 CAIM Discretization
 CACC Discretization

17 CADD Algorithm Disadvantages
 It uses a user-specified number of intervals when initializing the discrete intervals.
 It initializes the discretization intervals using a maximum-entropy discretization method; such initialization may be the worst starting point in terms of the CAIR criterion.
 It requires training for the selection of a confidence interval.

18 CAIUR Algorithm The CAIU and CAIR criteria were both used in the CAIUR discretization algorithm: CAIUR = CAIU + CAIR, where R = redundancy and U = uncertainty. It avoids the disadvantages of the CADD algorithm, generating discretization schemes with higher CAIR values.

19 CAIR Discretization Class-Attribute Interdependence Redundancy. The goal is to maximize the interdependence between the target class and the discretized attribute, as measured by the CAIR criterion. The method is highly combinatorial, so a heuristic local optimization method is used.

20 CAIR Discretization
STEP 1: Interval Initialization
1. Sort the unique values of the attribute in increasing order.
2. Calculate the number of intervals using the rule-of-thumb formula.
3. Perform maximum-entropy discretization on the sorted unique values – the initial intervals are obtained.
4. Form the quanta matrix using the initial intervals.
STEP 2: Interval Improvement (see the sketch below)
1. Tentatively eliminate each boundary and calculate the CAIR value.
2. Accept the new boundaries that give the largest CAIR value.
3. Keep updating the boundaries until there is no increase in the value of CAIR.
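
A sketch of the Step 2 boundary-elimination loop. It is generic over the criterion: `cair_fn` is assumed to compute the CAIR value (R = I/H, defined on slide 32) from a quanta matrix, and `build_quanta` is the helper sketched after slide 27 below:

```python
def improve_intervals(values, labels, classes, boundaries, cair_fn):
    # CAIR value of the initial (maximum-entropy) discretization
    best = cair_fn(build_quanta(values, labels, boundaries, classes))
    while len(boundaries) > 1:
        # Tentatively eliminate each boundary and score the result
        trials = [(cair_fn(build_quanta(values, labels,
                                        boundaries[:i] + boundaries[i + 1:],
                                        classes)), i)
                  for i in range(len(boundaries))]
        value, i = max(trials)
        if value <= best:
            break                                # no elimination increases CAIR
        best = value
        boundaries = boundaries[:i] + boundaries[i + 1:]
    return boundaries
```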

21 CAIR Criterion

22 Example of CAIR Criterion

23 CAIR Discretization Disadvantages:
 Uses the rule of thumb to select the initial boundaries.
 For attributes with a large number of unique values, a large number of initial intervals must be searched, which is computationally expensive.
 Using maximum-entropy discretization to initialize the intervals results in the worst initial discretization in terms of class-attribute interdependence.
 It can suffer from overfitting.

24 Information-Theoretic Algorithms – CAIM Given a training dataset consisting of M examples, each belonging to exactly one of S classes, let F denote a continuous attribute. There exists a discretization scheme D on F that discretizes F into n discrete intervals bounded by pairs of numbers (d_{r-1}, d_r], where d0 is the minimal value and dn is the maximal value of attribute F, and the values are arranged in ascending order. These values constitute the boundary set for discretization D: {d0, d1, d2, …, dn-1, dn}.

25 Goal of the CAIM Algorithm The main goal is to find the minimum number of discrete intervals while minimizing the loss of class-attribute interdependency. CAIM uses the class-attribute interdependency information as the criterion for the optimal discretization.

26 CAIM Algorithm The quanta matrix:

Class            [d0, d1]  …  (d_{r-1}, d_r]  …  (d_{n-1}, d_n]   Class Total
C_1              q_11      …  q_1r            …  q_1n             M_1+
:                :             :                  :                :
C_i              q_i1      …  q_ir            …  q_in             M_i+
:                :             :                  :                :
C_S              q_S1      …  q_Sr            …  q_Sn             M_S+
Interval Total   M_+1      …  M_+r            …  M_+n             M

CAIM discretization criterion:

CAIM(C, D | F) = ( Σ_{r=1..n} max_r^2 / M_+r ) / n

where:
 n is the number of intervals
 r iterates through all intervals, i.e. r = 1, 2, ..., n
 max_r is the maximum value among all q_ir values in the rth column of the quanta matrix, i = 1, 2, ..., S
 M_+r is the total number of continuous values of attribute F that are within the interval (d_{r-1}, d_r]

27 CAIM Discretization Algorithms – 2-D Quanta Matrix (shown above), where for i = 1, 2, …, S and r = 1, 2, …, n:
 q_ir is the total number of continuous values belonging to the ith class that are within interval (d_{r-1}, d_r]
 M_i+ is the total number of objects belonging to the ith class
 M_+r is the total number of continuous values of attribute F that are within the interval (d_{r-1}, d_r]
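
A minimal sketch of the quanta matrix and the CAIM criterion defined above (the list-of-rows representation and the function names are implementation choices):

```python
def build_quanta(values, labels, boundaries, classes):
    # q[i][r] = number of class-i values falling in interval r,
    # where intervals (d_{r-1}, d_r] are split by the inner boundaries
    n = len(boundaries) + 1
    q = [[0] * n for _ in classes]
    for v, c in zip(values, labels):
        r = sum(v > b for b in boundaries)
        q[classes.index(c)][r] += 1
    return q

def caim(q):
    # CAIM(C, D | F) = (1/n) * sum over r of max_r^2 / M_+r
    n = len(q[0])
    total = 0.0
    for r in range(n):
        column = [row[r] for row in q]
        m_r = sum(column)                 # M_+r: values in interval r
        if m_r:
            total += max(column) ** 2 / m_r
    return total / n

labels = ['red', 'red', 'blue', 'green', 'blue']
values = [1.0, 1.2, 2.0, 3.5, 4.0]
q = build_quanta(values, labels, [1.5, 3.0], ['red', 'blue', 'green'])
print(q, caim(q))
```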

28 CAIM Discretization Algorithms Example quanta matrix: C = 3 classes, n = 4 intervals, M = 33 values.

29 CAIM Discretization Algorithms Total number of values: M = 8 + 7 + 10 + 8 = 33 (summing interval totals) = 11 + 9 + 13 = 33 (summing class totals). Number of values in the first interval: M_+first = 5 + 1 + 2 = 8. Number of values in the red class: M_red+ = 5 + 2 + 4 + 0 = 11.

30 CAIM Discretization Algorithms The estimated joint probability that attribute F values are within interval D_r = (d_{r-1}, d_r] and belong to class C_i is calculated as p_ir = q_ir / M, e.g. p_red,first = 5 / 33 ≈ 0.15. The estimated class marginal probability that attribute F values belong to class C_i is p_i+ = M_i+ / M, and the estimated interval marginal probability that attribute F values are within the interval D_r = (d_{r-1}, d_r] is p_+r = M_+r / M: p_red+ = 11 / 33, p_+first = 8 / 33.

31 CAIM Discretization Algorithms Class-Attribute Mutual Information (I) between the class variable C and the discretization variable D for attribute F is defined as I = Σ_i Σ_r p_ir * log(p_ir / (p_i+ * p_+r)), e.g. I = 5/33*log((5/33)/((11/33)*(8/33))) + … + 4/33*log((4/33)/((13/33)*(8/33))). Class-Attribute Information (INFO) is defined as INFO = Σ_i Σ_r p_ir * log(p_+r / p_ir), e.g. INFO = 5/33*log((8/33)/(5/33)) + … + 4/33*log((8/33)/(4/33)).

32 CAIM Discretization Algorithms Shannon's entropy of the quanta matrix is defined as H = Σ_i Σ_r p_ir * log(1 / p_ir), e.g. H = 5/33*log(1/(5/33)) + … + 4/33*log(1/(4/33)). Class-Attribute Interdependence Redundancy (CAIR, or R) is the I value normalized by the entropy H: R = I / H. Class-Attribute Interdependence Uncertainty (CAIU, or U) is INFO normalized by the entropy H: U = INFO / H.
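
A sketch that computes I, INFO, H, R and U from a quanta matrix. Base-2 logarithms are an assumption (the slides do not state the base), and the interior cells of the example matrix are also assumptions, chosen only to be consistent with the totals quoted on slides 28-31 (row red = [5, 2, 4, 0], interval totals 8, 7, 10, 8, class totals 11, 9, 13):

```python
import math

def info_measures(q):
    # Returns (I, INFO, H, R, U) for a quanta matrix q[i][r]
    M = sum(sum(row) for row in q)
    row_tot = [sum(row) for row in q]                                # M_i+
    col_tot = [sum(row[r] for row in q) for r in range(len(q[0]))]   # M_+r
    I = INFO = H = 0.0
    for i, row in enumerate(q):
        for r, q_ir in enumerate(row):
            if q_ir == 0:
                continue                  # 0 * log(...) terms contribute 0
            p_ir = q_ir / M               # joint probability
            p_i, p_r = row_tot[i] / M, col_tot[r] / M                # marginals
            I += p_ir * math.log2(p_ir / (p_i * p_r))  # mutual information
            INFO += p_ir * math.log2(p_r / p_ir)       # class-attribute info
            H += p_ir * math.log2(1 / p_ir)            # Shannon entropy
    return I, INFO, H, I / H, INFO / H                 # R = I/H, U = INFO/H

# rows: red, blue, green; interior cells assumed, totals match the slides
q = [[5, 2, 4, 0],
     [1, 2, 2, 4],
     [2, 3, 4, 4]]
print(info_measures(q))
```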

33 CAIM Algorithm CAIM discretization criterion:
 The larger the value of CAIM (it lies in [0, M], where M is the number of values of attribute F), the higher the interdependence between the class labels and the intervals.
 The algorithm favors discretization schemes where each interval contains the majority of its values grouped within a single class label (the max_r values).
 The squared max_r value is scaled by M_+r to eliminate the negative influence of values belonging to other classes on the class with the maximum number of values over the entire discretization scheme.
 The summed-up value is divided by the number of intervals, n, to favor discretization schemes with a smaller number of intervals.

34 CAIM Algorithm
Given: M examples described by continuous attributes F_i, and S classes.
For every F_i do:
Step 1
1.1 Find the maximum (d_n) and minimum (d_0) values.
1.2 Sort all distinct values of F_i in ascending order and initialize all possible interval boundaries, B, with the minimum, the maximum, and the midpoints of all adjacent pairs.
1.3 Set the initial discretization scheme to D: {[d_0, d_n]}; set GlobalCAIM = 0.
Step 2
2.1 Initialize k = 1.
2.2 Tentatively add an inner boundary from set B that is not already in D, and calculate the corresponding CAIM value.
2.3 After all tentative additions have been tried, accept the one with the highest corresponding CAIM value.
2.4 If (CAIM > GlobalCAIM or k < S), update D with the boundary accepted in step 2.3 and set GlobalCAIM = CAIM; otherwise terminate.
2.5 Set k = k + 1 and go to 2.2.
Result: discretization scheme D
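
A compact sketch of this greedy loop, reusing the `build_quanta` and `caim` helpers sketched after slide 27 (the stopping rule follows step 2.4 as written: a boundary is accepted while CAIM improves or while k is still below the number of classes S):

```python
def caim_discretize(values, labels, classes):
    # Step 1: candidate boundaries B = midpoints of adjacent distinct values
    xs = sorted(set(values))
    candidates = [(a + b) / 2 for a, b in zip(xs, xs[1:])]
    accepted = []                      # D starts as the single interval [d0, dn]
    global_caim, k = 0.0, 1
    while True:
        # Steps 2.2-2.3: try each unused boundary, keep the highest CAIM
        best_val, best_b = -1.0, None
        for b in candidates:
            if b in accepted:
                continue
            trial = sorted(accepted + [b])
            val = caim(build_quanta(values, labels, trial, classes))
            if val > best_val:
                best_val, best_b = val, b
        # Step 2.4: accept while CAIM improves or k < S; otherwise terminate
        if best_b is not None and (best_val > global_caim or k < len(classes)):
            accepted = sorted(accepted + [best_b])
            global_caim = best_val
            k += 1                     # step 2.5
        else:
            return accepted            # inner boundaries of scheme D

labels = ['red', 'red', 'blue', 'blue', 'green', 'green']
values = [1.0, 1.1, 2.0, 2.2, 3.0, 3.3]
print(caim_discretize(values, labels, ['red', 'blue', 'green']))
```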

35 CAIM Algorithm
 Uses a greedy top-down approach that finds local maximum values of CAIM.
 Although the algorithm does not guarantee finding the global maximum of the CAIM criterion, it is effective and computationally efficient: O(M log(M)).
 It starts with a single interval and divides it iteratively, using for each division the boundary that results in the highest value of CAIM.
 The algorithm assumes that every discretized attribute needs at least as many intervals as there are classes.

36 CAIM Algorithm Experiments CAIM's performance is compared with five state-of-the-art discretization algorithms:
 two unsupervised: Equal Width and Equal Frequency
 three supervised: Paterson-Niblett, Maximum Entropy, and CADD
All six algorithms are used to discretize eight mixed-mode datasets. The quality of the discretization is evaluated based on the CAIR criterion value, the number of generated intervals, and the execution time. The discretized datasets are used to generate rules by the CLIP4 machine learning algorithm. The accuracy of the generated rules is compared for the six discretization algorithms over the eight datasets.
NOTE: the CAIR criterion was used in the CADD algorithm to evaluate class-attribute interdependency.

37 CAIM Algorithm Comparison Dataset properties:

Property                    iris   sat    thy    wav    ion   smo    hea   pid
# of classes                3      6      3      3      2     3      2     2
# of examples               150    6435   7200   3600   351   2855   270   768
# of training/testing       ------------------ 10 x CV ------------------
# of attributes             4      36     21     21     34    13     13    8
# of continuous attributes  4      36     6      21     32    2      6     8

CV = cross-validation

38 CAIM Algorithm Comparison Discretization quality, mean (std) over the 10 folds:

CAIR mean value through all intervals:

Method            iris         sat       thy           wav        ion           smo         hea           pid
Equal Width       0.40 (0.01)  0.24 (0)  0.071 (0)     0.068 (0)  0.098 (0)     0.011 (0)   0.087 (0)     0.058 (0)
Equal Frequency   0.41 (0.01)  0.24 (0)  0.038 (0)     0.064 (0)  0.095 (0)     0.010 (0)   0.079 (0)     0.052 (0)
Paterson-Niblett  0.35 (0.01)  0.21 (0)  0.144 (0.01)  0.141 (0)  0.192 (0)     0.012 (0)   0.088 (0)     0.052 (0)
Maximum Entropy   0.30 (0.01)  0.21 (0)  0.032 (0)     0.062 (0)  0.100 (0)     0.011 (0)   0.081 (0)     0.048 (0)
CADD              0.51 (0.01)  0.26 (0)  0.026 (0)     0.068 (0)  0.130 (0)     0.015 (0)   0.098 (0.01)  0.057 (0)
IEM               0.52 (0.01)  0.22 (0)  0.141 (0.01)  0.112 (0)  0.193 (0.01)  0.000 (0)   0.118 (0.02)  0.079 (0.01)
CAIM              0.54 (0.01)  0.26 (0)  0.170 (0.01)  0.130 (0)  0.168 (0)     0.010 (0)   0.138 (0.01)  0.084 (0)

# of intervals:

Method            iris       sat         thy         wav         ion          smo        hea        pid
Equal Width       16 (0)     252 (0)     126 (0.48)  630 (0)     640 (0)      22 (0.48)  56 (0)     106 (0)
Equal Frequency   16 (0)     252 (0)     126 (0.48)  630 (0)     640 (0)      22 (0.48)  56 (0)     106 (0)
Paterson-Niblett  48 (0)     432 (0)     45 (0.79)   252 (0)     384 (0)      17 (0.52)  48 (0.53)  62 (0.48)
Maximum Entropy   16 (0)     252 (0)     125 (0.52)  630 (0)     572 (6.70)   22 (0.48)  56 (0.42)  97 (0.32)
CADD              16 (0.71)  246 (1.26)  84 (3.48)   628 (1.43)  536 (10.26)  22 (0.48)  55 (0.32)  96 (0.92)
IEM               12 (0.48)  430 (4.88)  28 (1.60)   91 (1.50)   113 (17.69)  2 (0)      10 (0.48)  17 (1.27)
CAIM              12 (0)     216 (0)     18 (0)      63 (0)      64 (0)       6 (0)      12 (0)     16 (0)

39 CAIM Algorithm Comparison Size of the models generated by CLIP4 and C5.0, # = mean count (std):

Algorithm  Method            iris        sat           thy         wav           ion         smo         pid           hea
CLIP4      Equal Width       4.2 (0.4)   47.9 (1.2)    7.0 (0.0)   14.0 (0.0)    1.1 (0.3)   20.0 (0.0)  7.3 (0.5)     7.0 (0.5)
           Equal Frequency   4.9 (0.6)   47.4 (0.8)    7.0 (0.0)   14.0 (0.0)    1.9 (0.3)   19.9 (0.3)  7.2 (0.4)     6.1 (0.7)
           Paterson-Niblett  5.2 (0.4)   42.7 (0.8)    7.0 (0.0)   14.0 (0.0)    2.0 (0.0)   19.3 (0.7)  1.4 (0.5)     7.0 (1.1)
           Maximum Entropy   6.5 (0.7)   47.1 (0.9)    7.0 (0.0)   14.0 (0.0)    2.1 (0.3)   19.8 (0.6)  7.0 (0.0)     6.0 (0.7)
           CADD              4.4 (0.7)   45.9 (1.5)    7.0 (0.0)   14.0 (0.0)    2.0 (0.0)   20.0 (0.0)  7.1 (0.3)     6.8 (0.6)
           IEM               4.0 (0.5)   44.7 (0.9)    7.0 (0.0)   14.0 (0.0)    2.1 (0.7)   18.9 (0.6)  3.6 (0.5)     8.3 (0.5)
           CAIM              3.6 (0.5)   45.6 (0.7)    7.0 (0.0)   14.0 (0.0)    1.9 (0.3)   18.5 (0.5)  1.9 (0.3)     7.6 (0.5)
C5.0       Equal Width       6.0 (0.0)   348.5 (18.1)  31.8 (2.5)  69.8 (20.3)   32.7 (2.9)  1.0 (0.0)   249.7 (11.4)  66.9 (5.6)
           Equal Frequency   4.2 (0.6)   367.0 (14.1)  56.4 (4.8)  56.3 (10.6)   36.5 (6.5)  1.0 (0.0)   303.4 (7.8)   82.3 (0.6)
           Paterson-Niblett  11.8 (0.4)  243.4 (7.8)   15.9 (2.3)  41.3 (8.1)    18.2 (2.1)  1.0 (0.0)   58.6 (3.5)    58.0 (3.5)
           Maximum Entropy   6.0 (0.0)   390.7 (21.9)  42.0 (0.8)  63.1 (8.5)    32.6 (2.4)  1.0 (0.0)   306.5 (11.6)  70.8 (8.6)
           CADD              4.0 (0.0)   346.6 (12.0)  35.7 (2.9)  72.5 (15.7)   24.6 (5.1)  1.0 (0.0)   249.7 (15.9)  73.2 (5.8)
           IEM               3.2 (0.6)   466.9 (22.0)  34.1 (3.0)  270.1 (19.0)  12.9 (3.0)  1.0 (0.0)   11.5 (2.4)    16.2 (2.0)
           CAIM              3.2 (0.6)   332.2 (16.1)  10.9 (1.4)  58.2 (5.6)    7.7 (1.3)   1.0 (0.0)   20.0 (2.4)    31.8 (2.9)
           Built-in          3.8 (0.4)   287.7 (16.6)  11.2 (1.3)  46.2 (4.1)    11.1 (2.0)  1.4 (1.3)   35.0 (9.3)    33.3 (2.5)

40 CAIM Algorithm Features:
 fast and efficient supervised discretization algorithm applicable to class-labeled data
 maximizes the interdependence between the class labels and the generated discrete intervals
 generates the smallest number of intervals for a given continuous attribute
 when used as a preprocessing step for a machine learning algorithm, significantly improves the results in terms of accuracy
 automatically selects the number of intervals, in contrast to many other discretization algorithms
 its execution time is comparable to the time required by the simplest unsupervised discretization algorithms

41 CAIM Advantages
 It avoids the disadvantages of the CADD and CAIUR algorithms.
 It works in a top-down manner.
 It discretizes an attribute into the smallest number of intervals and maximizes the class-attribute interdependency, thus making the subsequently performed ML task much easier.
 The algorithm automatically selects the number of discrete intervals without any user supervision.

42 Future Work Future work includes expansion of the CAIM algorithm to remove irrelevant or redundant attributes after the discretization is performed, e.g., by application of χ² (chi-square) methods. This would reduce the dimensionality of the discretized data, in addition to the already reduced number of values for each attribute.

43 References
 Cios, K.J., Pedrycz, W. and Swiniarski, R. (1998). Data Mining Methods for Knowledge Discovery. Kluwer.
 Kurgan, L. and Cios, K.J. (2004). CAIM Discretization Algorithm. IEEE Transactions on Knowledge and Data Engineering, 16(2): 145-153.
 Ching, J.Y., Wong, A.K.C. and Chan, K.C.C. (1995). Class-Dependent Discretization for Inductive Learning from Continuous and Mixed-Mode Data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(7): 641-651.
 Gama, J., Torgo, L. and Soares, C. (1998). Dynamic Discretization of Continuous Attributes. Progress in Artificial Intelligence, IBERAMIA 98, Lecture Notes in Computer Science, vol. 1484. DOI: 10.1007/3-540-49795-1_14.

44 Thank you … Questions ?

