DISCRETIZATION ALGORITHMS
Sai Jyothsna Jonnalagadda, MS Computer Science

Outline
- Why Discretize Features/Attributes
- Unsupervised Discretization Algorithms: Equal Width, Equal Frequency
- Supervised Discretization Algorithms: CADD, CAIR, CAIM, CACC
- Other Discretization Methods: K-means clustering, One-level Decision Tree, Dynamic Attribute, Paterson-Niblett

Why Discretize?
The goal of discretization is to reduce the number of values a continuous attribute assumes by grouping them into a number, n, of intervals (bins). Discretization is often a required preprocessing step for many supervised learning methods.

Discretization Algorithms
Discretization algorithms can be divided into:
- Unsupervised vs. Supervised: unsupervised algorithms do not use class information; supervised algorithms do.
- Static vs. Dynamic: discretization of continuous attributes is most often performed one attribute at a time, independent of the other attributes; this is known as static attribute discretization. A dynamic algorithm searches for all possible intervals for all features simultaneously.
- Local vs. Global: if the partitions produced apply only to localized regions of the instance space, they are called local. When all attributes are discretized dynamically, producing n1 x n2 x ... x ni x ... regions, where ni is the number of intervals of the i-th attribute, such methods are called global.

Discretization
Any discretization process consists of two steps:
- Step 1: decide the number of discrete intervals. This is often done by the user, although a few discretization algorithms are able to do it on their own.
- Step 2: determine the width (boundaries) of each interval. This is often done by the discretization algorithm itself.
Considerations when deciding the number of discretization intervals:
- a large number of intervals retains more of the original information
- a small number of intervals makes the new feature "easier" for subsequently used learning algorithms
The computational complexity of discretization should be low, since it is only a preprocessing step.

Backup slides

Discretization
The discretization scheme depends on the search procedure; it can start with either:
- the minimum number of discretizing points, and find the optimal number of discretizing points as the search proceeds, or
- the maximum number of discretizing points, and search towards a smaller number of points, which defines the optimal discretization.

Heuristics for guessing the number of intervals
1. Use a number of intervals that is greater than the number of classes to recognize.
2. Use the rule-of-thumb formula: n_Fi = M / (3*C), where:
   M - number of training examples/instances
   C - number of classes
   F_i - the i-th attribute

Unsupervised Discretization
Example of the rule of thumb: C = 3 (green, blue, red), M = 33.
Number of discretization intervals: n_Fi = M / (3*C) = 33 / (3*3) ≈ 4
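As a quick illustration, here is a minimal Python sketch of the rule-of-thumb calculation. Rounding the result up to the nearest integer is an assumption made only to match the example's result of 4; the slides do not state a rounding convention.

```python
import math

def rule_of_thumb_intervals(num_examples: int, num_classes: int) -> int:
    # n_Fi = M / (3 * C), rounded up (rounding convention assumed, not from the slides)
    return math.ceil(num_examples / (3 * num_classes))

print(rule_of_thumb_intervals(33, 3))  # -> 4, matching the example above
```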

Unsupervised Discretization: Equal Width Discretization
1. Find the minimum and maximum values of the continuous feature/attribute F_i.
2. Divide the range of attribute F_i into the user-specified number, n_Fi, of equal-width discrete intervals.
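A minimal equal-width sketch in Python; the function name and the use of NumPy are illustrative choices, not part of the slides.

```python
import numpy as np

def equal_width_discretize(values, n_intervals):
    """Assign each value of a continuous attribute to one of n_intervals
    equal-width bins spanning [min, max]; returns bin indices and boundaries."""
    values = np.asarray(values, dtype=float)
    lo, hi = values.min(), values.max()            # step 1: find min and max
    edges = np.linspace(lo, hi, n_intervals + 1)   # step 2: equal-width boundaries
    # right=True places a value equal to an inner boundary into the lower interval,
    # so the maximum value lands in the last bin
    bins = np.digitize(values, edges[1:-1], right=True)
    return bins, edges

bins, edges = equal_width_discretize([0.2, 1.5, 3.3, 7.8, 9.9], n_intervals=4)
```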

Unsupervised Discretization: Equal Width Discretization example
n_Fi = M / (3*C) = 33 / (3*3) ≈ 4

Unsupervised Discretization: Equal Width Discretization
- The number of intervals is specified by the user or calculated with the rule-of-thumb formula.
- The number of intervals should be larger than the number of classes, to retain the mutual information between class labels and intervals.
- Disadvantage: if the values of the attribute are not distributed evenly, a large amount of information can be lost.
- Advantage: if the number of intervals is large enough (i.e., the width of each interval is small), the information present in the discretized intervals is not lost.

Unsupervised Discretization: Equal Frequency Discretization
1. Sort the values of the discretized feature F_i in ascending order.
2. Find the number of all possible values of feature F_i.
3. Divide the values of feature F_i into the user-specified number, n_Fi, of intervals, where each interval contains the same number of sorted, sequential values.
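A minimal equal-frequency sketch, again in Python with NumPy; quantile-based cutting is one reasonable way to realize "the same number of sorted, sequential values per interval" (ties can still unbalance the bins).

```python
import numpy as np

def equal_frequency_edges(values, n_intervals):
    """Interval boundaries such that each interval holds roughly the same
    number of values of the attribute."""
    values = np.sort(np.asarray(values, dtype=float))   # step 1: sort ascending
    quantiles = np.linspace(0.0, 1.0, n_intervals + 1)  # equal-count cut points
    return np.quantile(values, quantiles)               # step 3: boundaries

edges = equal_frequency_edges(np.random.rand(33), n_intervals=4)  # ~8 values per bin
```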

Unsupervised Discretization: Equal Frequency Discretization example
n_Fi = M / (3*C) = 33 / (3*3) ≈ 4
values per interval = 33 / 4 ≈ 8

Unsupervised Discretization: Equal Frequency Discretization
- No search strategy is used.
- The number of intervals is specified by the user or calculated with the rule-of-thumb formula.
- The number of intervals should be larger than the number of classes, to retain the mutual information between class labels and intervals.

Supervised Discretization
- CADD Discretization
- CAIR Discretization
- CAIM Discretization
- CACC Discretization

CADD Algorithm: Disadvantages
- It uses a user-specified number of intervals when initializing the discrete intervals.
- It initializes the discretization intervals using a maximum-entropy discretization method; such an initialization may be the worst starting point in terms of the CAIR criterion.
- It requires training for the selection of a confidence interval.

CAIUR Algorithm
The CAIU and CAIR criteria were both used in the CAIUR discretization algorithm: CAIUR = CAIU + CAIR, where R stands for Redundancy and U for Uncertainty.
It avoids the disadvantages of the CADD algorithm, generating discretization schemes with higher CAIR values.

CAIR Discretization
CAIR stands for Class-Attribute Interdependence Redundancy. The goal is to maximize the interdependence between the target class and the discretized attribute, as measured by the CAIR criterion. The method is highly combinatorial, so a heuristic local optimization method is used.

CAIR Discretization
Step 1: Interval Initialization
1. Sort the unique values of the attribute in increasing order.
2. Calculate the number of intervals using the rule-of-thumb formula.
3. Perform maximum-entropy discretization on the sorted unique values to obtain the initial intervals.
4. Form the quanta matrix using the initial intervals.
Step 2: Interval Improvement (a sketch follows below)
1. Tentatively eliminate each boundary and calculate the CAIR value.
2. Accept the new boundaries for which CAIR has the largest value.
3. Keep updating the boundaries until there is no increase in the value of CAIR.
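A sketch of the Step 2 boundary-improvement loop, assuming a helper cair(values, labels, boundaries) that scores a discretization scheme (such a function is not given on this slide); it follows the accept-the-best-removal strategy described above.

```python
def improve_boundaries(values, labels, boundaries, cair):
    """Greedy Step 2 of CAIR discretization: repeatedly drop the inner
    boundary whose removal yields the largest CAIR value, while it improves."""
    best = cair(values, labels, boundaries)
    improved = True
    while improved and len(boundaries) > 2:
        improved, best_candidate = False, None
        for b in boundaries[1:-1]:                   # tentatively eliminate each inner boundary
            candidate = [x for x in boundaries if x != b]
            score = cair(values, labels, candidate)
            if score > best:                         # keep the best-scoring removal
                best, best_candidate, improved = score, candidate, True
        if improved:
            boundaries = best_candidate              # accept; loop until no increase
    return boundaries
```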

CAIR Criterion: the CAIR value is the class-attribute mutual information I normalized by the Shannon entropy H of the quanta matrix, R = I / H (see the definitions on the CAIM slides below).

Example of the CAIR Criterion

CAIR Discretization: Disadvantages
- Uses the rule of thumb to select the initial boundaries.
- For a large number of unique values, a large number of initial intervals must be searched, which is computationally expensive.
- Using maximum-entropy discretization to initialize the intervals results in the worst initial discretization in terms of class-attribute interdependence.
- It can suffer from overfitting.

Information-Theoretic Algorithms: CAIM
Given a training dataset of M examples, each belonging to exactly one of S classes, let F denote a continuous attribute. There exists a discretization scheme D on F that discretizes the continuous attribute F into n discrete intervals bounded by the pairs of numbers
[d0, d1], (d1, d2], ..., (dn-1, dn]
where d0 is the minimal value and dn is the maximal value of attribute F, and the values are arranged in ascending order. These values constitute the boundary set for discretization D: {d0, d1, d2, ..., dn-1, dn}.

Goal of the CAIM Algorithm
The main goal is to find the minimum number of discrete intervals while minimizing the loss of class-attribute interdependency. CAIM uses the class-attribute interdependency information as the criterion for the optimal discretization.

CAIM Algorithm
Quanta matrix:

Class            [d0, d1]  ...  (d_r-1, d_r]  ...  (d_n-1, d_n]   Class Total
C_1                q_11    ...      q_1r      ...      q_1n          M_1+
 :                  :                :                  :             :
C_i                q_i1    ...      q_ir      ...      q_in          M_i+
 :                  :                :                  :             :
C_S                q_S1    ...      q_Sr      ...      q_Sn          M_S+
Interval Total     M_+1    ...      M_+r      ...      M_+n           M

CAIM discretization criterion:

    CAIM(C, D | F) = ( sum_{r=1..n} (max_r)^2 / M_+r ) / n

where:
- n is the number of intervals
- r iterates over all intervals, i.e. r = 1, 2, ..., n
- max_r is the maximum among all q_ir values in the r-th column of the quanta matrix, i = 1, 2, ..., S
- M_+r is the total number of continuous values of attribute F that are within the interval (d_r-1, d_r]

CAIM Discretization Algorithms: 2-D Quanta Matrix
For the quanta matrix shown on the previous slide:
- q_ir is the total number of continuous values belonging to the i-th class that are within interval (d_r-1, d_r]
- M_i+ is the total number of objects belonging to the i-th class
- M_+r is the total number of continuous values of attribute F that are within the interval (d_r-1, d_r], for i = 1, 2, ..., S and r = 1, 2, ..., n
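A sketch of how the quanta matrix can be computed in Python/NumPy from raw values, class labels, and the interval boundaries; the function and variable names are illustrative, not from the slides.

```python
import numpy as np

def quanta_matrix(values, labels, edges, classes):
    """S x n quanta matrix: q[i, r] counts the values falling in interval r,
    i.e. (d_{r-1}, d_r] (first interval closed on the left), with class i."""
    values = np.asarray(values, dtype=float)
    labels = np.asarray(labels)
    n = len(edges) - 1
    # digitize with the inner boundaries maps each value to its interval index 0..n-1
    col = np.digitize(values, np.asarray(edges)[1:-1], right=True)
    q = np.zeros((len(classes), n), dtype=int)
    for i, c in enumerate(classes):
        for r in col[labels == c]:
            q[i, r] += 1
    return q   # row sums give M_{i+}, column sums give M_{+r}, q.sum() gives M
```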

CAIM Discretization Algorithms: example quanta matrix with C = 3 classes, 4 intervals, and M = 33 values.

CAIM Discretization Algorithms (example, continued)
- Total number of values: M = 33
- Number of values in the first interval: q_+first = 8
- Number of values in the red class: q_red+ = 11

CAIM Discretization Algorithms
The estimated joint probability that attribute F values are within interval D_r = (d_r-1, d_r] and belong to class C_i is p_ir = q_ir / M, e.g.:
p_red,first = 5 / 33 ≈ 0.15
The estimated class marginal probability that attribute F values belong to class C_i is p_i+ = M_i+ / M, and the estimated interval marginal probability that attribute F values are within the interval D_r = (d_r-1, d_r] is p_+r = M_+r / M:
p_red+ = 11 / 33, p_+first = 8 / 33

CAIM Discretization Algorithms
Class-Attribute Mutual Information (I) between the class variable C and the discretization variable D for attribute F is defined as:
I = sum_i sum_r p_ir * log( p_ir / (p_i+ * p_+r) )
e.g. I = 5/33*log((5/33)/((11/33)*(8/33))) + ... + 4/33*log((4/33)/((13/33)*(8/33)))
Class-Attribute Information (INFO) is defined as:
INFO = sum_i sum_r p_ir * log( p_+r / p_ir )
e.g. INFO = 5/33*log((8/33)/(5/33)) + ... + 4/33*log((8/33)/(4/33))

CAIM Discretization Algorithms
Shannon's entropy of the quanta matrix is defined as:
H = sum_i sum_r p_ir * log( 1 / p_ir )
e.g. H = 5/33*log(1/(5/33)) + ... + 4/33*log(1/(4/33))
Class-Attribute Interdependence Redundancy (CAIR, or R) is the I value normalized by the entropy H: R = I / H.
Class-Attribute Interdependence Uncertainty (CAIU, or U) is INFO normalized by the entropy H: U = INFO / H.
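A sketch that turns the definitions above into code, computing I, INFO, H, R = CAIR and U = CAIU from a quanta matrix. Base-2 logarithms are an assumption; the slides do not fix the base, and the normalized ratios R and U do not depend on it.

```python
import numpy as np

def interdependence_measures(q):
    """I, INFO, Shannon entropy H, CAIR (R = I/H) and CAIU (U = INFO/H)
    computed from an S x n quanta matrix q with no empty intervals or classes."""
    p = np.asarray(q, dtype=float) / np.sum(q)        # joint estimates p_ir
    p_class = p.sum(axis=1, keepdims=True)            # marginals p_{i+}
    p_interval = p.sum(axis=0, keepdims=True)         # marginals p_{+r}
    outer = p_class * p_interval                      # p_{i+} * p_{+r}, shape (S, n)
    pr = np.broadcast_to(p_interval, p.shape)
    nz = p > 0                                        # empty cells contribute 0 to the sums
    I = np.sum(p[nz] * np.log2(p[nz] / outer[nz]))
    INFO = np.sum(p[nz] * np.log2(pr[nz] / p[nz]))
    H = -np.sum(p[nz] * np.log2(p[nz]))
    return I, INFO, H, I / H, INFO / H
```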

CAIM Algorithm: the CAIM discretization criterion
- The larger the value of CAIM (in the range [0, M], where M is the number of values of attribute F), the higher the interdependence between the class labels and the intervals.
- The criterion favors discretization schemes in which each interval contains the majority of its values grouped within a single class label (the max_r values).
- The squared max_r value is scaled by M_+r to eliminate the negative influence of values belonging to other classes on the class with the maximum number of values, and on the entire discretization scheme.
- The summed-up value is divided by the number of intervals, n, to favor discretization schemes with a smaller number of intervals.
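A direct transcription of the CAIM criterion into Python, operating on a quanta matrix like the one built in the earlier sketch; it assumes every interval contains at least one value.

```python
import numpy as np

def caim(q):
    """CAIM(C, D | F) = ( sum_r max_r^2 / M_{+r} ) / n for an S x n quanta matrix q."""
    q = np.asarray(q, dtype=float)
    max_r = q.max(axis=0)        # largest class count in each interval (column)
    M_plus_r = q.sum(axis=0)     # total number of values in each interval
    return np.sum(max_r ** 2 / M_plus_r) / q.shape[1]
```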

CAIM Algorithm
Given: M examples described by continuous attributes F_i and S classes.
For every attribute F_i do:
Step 1
1.1 Find the maximum (d_n) and minimum (d_0) values.
1.2 Sort all distinct values of F_i in ascending order and initialize the set of all possible interval boundaries, B, with the minimum, the maximum, and the midpoints of all adjacent pairs.
1.3 Set the initial discretization scheme to D: {[d_0, d_n]} and set GlobalCAIM = 0.
Step 2
2.1 Initialize k = 1.
2.2 Tentatively add an inner boundary from B that is not already in D, and calculate the corresponding CAIM value.
2.3 After all tentative additions have been tried, accept the one with the highest corresponding CAIM value.
2.4 If (CAIM > GlobalCAIM or k < S), update D with the boundary accepted in step 2.3 and set GlobalCAIM = CAIM; otherwise terminate.
2.5 Set k = k + 1 and go to 2.2.
Result: discretization scheme D.
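A compact Python sketch of the greedy loop above, reusing the hypothetical quanta_matrix() and caim() helpers from the previous sketches; it is written for clarity, not for the O(M log M) efficiency of the published algorithm.

```python
import numpy as np

def caim_discretize(values, labels, classes):
    """Greedy top-down CAIM discretization of one attribute (steps 1-2 above)."""
    values = np.asarray(values, dtype=float)
    d0, dn = values.min(), values.max()                 # step 1.1
    uniq = np.unique(values)
    candidates = (uniq[:-1] + uniq[1:]) / 2.0           # step 1.2: midpoints as boundary set B
    D, global_caim, k = [d0, dn], 0.0, 1                # step 1.3
    while True:
        best_val, best_b = -np.inf, None
        for b in candidates:                            # step 2.2: tentative additions
            if b in D:
                continue
            val = caim(quanta_matrix(values, labels, sorted(D + [b]), classes))
            if val > best_val:
                best_val, best_b = val, b               # step 2.3: remember the best one
        if best_b is not None and (best_val > global_caim or k < len(classes)):
            D = sorted(D + [best_b])                    # step 2.4: accept the boundary
            global_caim, k = best_val, k + 1            # step 2.5
        else:
            return D                                    # resulting discretization scheme
```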

CAIM Algorithm
- Uses a greedy top-down approach that finds local maxima of CAIM. Although the algorithm does not guarantee finding the global maximum of the CAIM criterion, it is effective and computationally efficient: O(M log M).
- It starts with a single interval and divides it iteratively, using for each division the boundary that results in the highest value of CAIM.
- The algorithm assumes that every discretized attribute needs at least as many intervals as there are classes.

CAIM Algorithm: Experiments
CAIM's performance is compared with five state-of-the-art discretization algorithms:
- two unsupervised: Equal Width and Equal Frequency
- three supervised: Paterson-Niblett, Maximum Entropy, and CADD
All algorithms are used to discretize several mixed-mode datasets. The quality of the discretization is evaluated based on the CAIR criterion value, the number of generated intervals, and the execution time. The discretized datasets are then used to generate rules by the CLIP4 machine learning algorithm, and the accuracy of the generated rules is compared across the discretization algorithms and the datasets.
NOTE: the CAIR criterion was used in the CADD algorithm to evaluate class-attribute interdependency.

CAIM Algorithm Comparison: properties of the iris, sat, thy, wav, ion, smo, hea, and pid datasets (number of classes, number of examples, number of training/testing examples with 10 x CV, number of attributes, number of continuous attributes). CV = cross-validation.

CAIM Algorithm Comparison: for each dataset (iris, sat, thy, wav, ion, smo, hea, pid), the mean CAIR value across all intervals and the number of intervals (with standard deviations) obtained by Equal Width, Equal Frequency, Paterson-Niblett, Maximum Entropy, CADD, IEM, and CAIM.

CAIM Algorithm Comparison: results obtained by the CLIP4 and C5.0 learners on each dataset (iris, sat, thy, wav, ion, smo, pid, hea) after discretization with Equal Width, Equal Frequency, Paterson-Niblett, Maximum Entropy, CADD, IEM, and CAIM; C5.0 is also run with its built-in discretization.

CAIM Algorithm
Features:
- a fast and efficient supervised discretization algorithm applicable to class-labeled data
- maximizes the interdependence between the class labels and the generated discrete intervals
- generates the smallest number of intervals for a given continuous attribute
- when used as a preprocessing step for a machine learning algorithm, it significantly improves the results in terms of accuracy
- automatically selects the number of intervals, in contrast to many other discretization algorithms
- its execution time is comparable to the time required by the simplest unsupervised discretization algorithms

CAIM Advantages
- It avoids the disadvantages of the CADD and CAIUR algorithms.
- It works in a top-down manner.
- It discretizes an attribute into the smallest number of intervals and maximizes the class-attribute interdependency, thus making the subsequently performed ML task much easier.
- The algorithm automatically selects the number of discrete intervals without any user supervision.

Future Work
Future work includes the expansion of the CAIM algorithm to remove irrelevant or redundant attributes after the discretization is performed. This can be done by applying chi-square (χ2) methods, which would reduce the dimensionality of the discretized data in addition to the already reduced number of values for each attribute.

References
- Cios, K.J., Pedrycz, W. and Swiniarski, R. Data Mining Methods for Knowledge Discovery. Kluwer.
- Kurgan, L. and Cios, K.J. (2004). CAIM Discretization Algorithm. IEEE Transactions on Knowledge and Data Engineering, 16(2).
- Ching, J.Y., Wong, A.K.C. and Chan, K.C.C. (1995). Class-Dependent Discretization for Inductive Learning from Continuous and Mixed Mode Data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(7).
- Gama, J., Torgo, L. and Soares, C. (1998). Dynamic Discretization of Continuous Attributes. Progress in Artificial Intelligence, IBERAMIA 98, Lecture Notes in Computer Science, Volume 1484.

Thank you … Questions ?