Dr. M. Sulaiman Khan Dept. of Computer Science University of Liverpool 2010 COMP207: Data Mining General Data Mining Issues COMP207:

Today's Topics
- Machine Learning?
- Input to Data Mining Algorithms
- Data types
- Missing values
- Noisy values
- Inconsistent values
- Redundant values
- Number of values
- Over-fitting / Under-fitting
- Scalability
- Human Interaction
- Ethical Data Mining

Machine Learning
What do we mean by 'learning' when applied to machines?
- Not just committing to memory (= storage)
- Can't require consciousness
- Learn facts (data), or processes (algorithms)?
"Things learn when they change their behaviour in a way that makes them perform better" (Witten)
- Ties to future performance, not the act itself
- But things change behaviour for reasons other than 'learning'
- Can a machine have the intent to perform better?

Inputs
The aim of data mining is to learn a model for the data. This could be called a concept of the data, so our outcome will be a concept description. Eg, the task is to classify emails as spam/not spam; the concept to learn is 'what is spam?'. Input comes as instances, eg the individual emails. Instances have attributes, eg sender, date, recipient, words in the text.

Inputs
We use the attributes to determine what it is about an instance that means it should be classified as a particular class. == Learning! Obvious input structure: a table of instances (rows) and attributes (columns).

WEKA's ARFF Format

@relation iris

@attribute sepal_length numeric
@attribute sepal_width numeric
@attribute petal_length numeric
@attribute petal_width numeric

@data
5.1, 3.5, 1.4, 0.2
4.9, 3.0, 1.4, 0.2
4.7, 3.2, 1.3, 0.2
5.0, 3.6, 1.4, 0.2

But what about non-numeric data?

Data Types
Nominal: prespecified, finite number of values, eg {cat, fish, dog, squirrel}. Includes boolean {true, false} and all enumerations.
Ordinal: orderable, but no concept of distance, eg hot > warm > cool > cold. Domain-specific ordering, but no notion of how much hotter warm is compared to cool.

Data Types
Interval: ordered, fixed unit, eg 1990 < 1995 < 2000 < 2005. The difference between values makes sense (1995 is 5 years after 1990). The sum does not make sense (1990 + 1995 = year 3985??).
Ratio: ordered, fixed unit, relative to a zero point, eg 1m, 2m, 3m, 5m. The difference makes sense (3m is 1m greater than 2m). The sum also makes sense (1m + 2m = 3m).

ARFF Data Types
@attribute name {option1, option2, ... optionN}  -- nominal
@attribute name numeric  -- real values
@attribute name string  -- text
@attribute name date  -- date fields (ISO-8601 format)

Data Issues: Missing Values
The following issues will come up over and over again, but different algorithms have different requirements. What happens if we don't know the value for a particular attribute in an instance? For example, the data was never stored, was lost, or could not be represented. Maybe that data was important! ARFF records missing values with a ? in the table. How should we process missing values?

Missing Values
Possible 'solutions' for dealing with missing values:
- Ignore the instance completely (eg if the class is missing in the training data set). Not a very useful solution if it's in the test data to be classified!
- Fill in values by hand. Could be very slow, and likely to be impossible.
- Use a global 'missingValue' constant. Possible for enumerations, but what about numeric data?
- Replace with the attribute mean.
- Replace with the class's attribute mean.
- Train a new classifier to predict the missing value!
- Just leave it as missing and require the algorithm to apply an appropriate technique.
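Two of the strategies above (replace with the attribute mean, replace with the class's attribute mean) can be sketched in a few lines of Python. The dataset, attribute names and values here are invented for illustration; missing values are represented as None, standing in for ARFF's ?.

```python
# Sketch of two imputation strategies: fill a missing value with the
# global attribute mean, or with the mean for that instance's class.

def attribute_mean(rows, attr):
    """Mean of an attribute over the rows where it is present."""
    values = [r[attr] for r in rows if r[attr] is not None]
    return sum(values) / len(values)

def impute_with_mean(rows, attr):
    """Fill missing values of attr with the global attribute mean."""
    mean = attribute_mean(rows, attr)
    return [dict(r, **{attr: mean}) if r[attr] is None else r for r in rows]

def impute_with_class_mean(rows, attr, cls_attr):
    """Fill missing values of attr with the mean for the instance's class."""
    classes = {r[cls_attr] for r in rows}
    means = {c: attribute_mean([r for r in rows if r[cls_attr] == c], attr)
             for c in classes}
    return [dict(r, **{attr: means[r[cls_attr]]}) if r[attr] is None else r
            for r in rows]

data = [
    {"length": 2.0, "class": "spam"},
    {"length": None, "class": "spam"},
    {"length": 4.0, "class": "ham"},
    {"length": 6.0, "class": "ham"},
]
print(impute_with_mean(data, "length")[1]["length"])                 # 4.0
print(impute_with_class_mean(data, "length", "class")[1]["length"])  # 2.0
```

Note how the two strategies disagree: the global mean (4.0) is pulled towards the 'ham' instances, while the class-conditional mean (2.0) uses only the other 'spam' instance.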

Noisy Values
By 'noisy data' we mean random errors scattered in the data, for example due to inaccurate recording or data corruption. Some noise will be very obvious:
- data has the incorrect type (a string in a numeric attribute)
- data does not match the enumeration (maybe in a yes/no field)
- data is very dissimilar to all other entries (10 in an attribute otherwise 0..1)
Some incorrect values won't be obvious at all, eg typing 0.52 at data entry instead of the intended value.
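The "obvious" noise in the first two bullets can be caught by validating each value against a declared schema. A minimal sketch, with an invented schema (attribute names and the yes/no enumeration are illustrative only):

```python
# Flag values of the wrong type, or outside a declared enumeration.
# A set means "must be one of these"; a callable means "must parse as this type".

SCHEMA = {"age": float, "smoker": {"yes", "no"}}

def obvious_noise(row):
    """Return the attribute names whose values violate the schema."""
    bad = []
    for attr, spec in SCHEMA.items():
        value = row[attr]
        if isinstance(spec, set):
            if value not in spec:
                bad.append(attr)
        else:
            try:
                spec(value)
            except (TypeError, ValueError):
                bad.append(attr)
    return bad

print(obvious_noise({"age": "42.0", "smoker": "yes"}))    # []
print(obvious_noise({"age": "tall", "smoker": "maybe"}))  # ['age', 'smoker']
```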

Noisy Values
Some possible solutions:
- Manual inspection and removal
- Use clustering on the data to find instances or attributes that lie outside the main body (outliers) and remove them
- Use regression to determine a function, then remove values that lie far from the predicted value
- Ignore all values that occur below a certain frequency threshold
- Apply a smoothing function over known-to-be-noisy data
If noise is removed, we can apply missing value techniques to it. If it is not removed, it may adversely affect the accuracy of the model.
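A minimal sketch of the "very dissimilar to all other entries" case: treat values more than k standard deviations from the mean as noise and drop them. The threshold k = 2 is an arbitrary illustrative choice, not a rule from the slides.

```python
# Remove values that lie far from the rest of the attribute's values.

def remove_outliers(values, k=2.0):
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [v for v in values if abs(v - mean) <= k * std]

attr = [0.2, 0.5, 0.4, 0.3, 10.0, 0.6]   # one value clearly dissimilar
print(remove_outliers(attr))             # [0.2, 0.5, 0.4, 0.3, 0.6]
```

In line with the slide's closing remark, the removed positions could then be treated as missing values rather than silently dropped.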

Inconsistent Values
The same value may be recorded in different ways, for example 'coke', 'coca cola', 'coca-cola', 'Coca Cola', etc. In this case, the data should be normalised to a single form. This can be treated as a special case of noise. Some values may be recorded inaccurately on purpose, eg a fake email address. There was a spike in early census data for births on 11/11/1911: some value had to be put in, so it defaulted to 1s everywhere. Ooops! (Possibly urban legend?)
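Normalising the 'coke' variants above to a single form can be as simple as a lookup table of canonical spellings. The table here is hypothetical; real data would need a much larger one, or fuzzy matching.

```python
# Map inconsistent spellings to one canonical form.

CANONICAL = {
    "coke": "coca-cola",
    "coca cola": "coca-cola",
    "coca-cola": "coca-cola",
}

def normalise(value):
    """Lower-case, trim, and map known variants to their canonical form."""
    key = value.strip().lower()
    return CANONICAL.get(key, key)

print(normalise("Coca Cola"))  # coca-cola
print(normalise("coke"))       # coca-cola
```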

Redundant Values
Just because the base data includes an attribute doesn't make it worth giving to the data mining task. For example, denormalise a typical commercial database and you might have: ProductId, ProductName, ProductPrice, SupplierId, SupplierAddress... SupplierAddress is dependent on SupplierId (remember SQL normalisation rules?) so they will always appear together. A 100% confidence, 100% support association rule is not very interesting!
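The SupplierId/SupplierAddress redundancy is a functional dependency, and it can be checked directly: B is redundant given A if every value of A always appears with the same value of B. A sketch with invented column names:

```python
# Check whether column b is functionally dependent on column a.

def determines(rows, a, b):
    """True if each value of column a maps to exactly one value of column b."""
    seen = {}
    for r in rows:
        if seen.setdefault(r[a], r[b]) != r[b]:
            return False
    return True

orders = [
    {"supplier_id": 1, "supplier_addr": "Liverpool"},
    {"supplier_id": 2, "supplier_addr": "Leeds"},
    {"supplier_id": 1, "supplier_addr": "Liverpool"},
]
print(determines(orders, "supplier_id", "supplier_addr"))  # True
```

When `determines` returns True, the dependent column can be dropped before mining without losing information.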

Number of Attributes
Is there any harm in putting in redundant values? Yes for association rule mining, and... yes for other data mining tasks too. We can treat text as thousands of numeric attributes: term/frequency pairs from our inverted indexes. But not all of those terms are useful for determining (for example) whether an email is spam; 'the' does not contribute to spam detection. The number of attributes in the table will affect the time it takes the data mining process to run. It is often the case that we want to run it many times, so getting rid of unnecessary attributes is important.

Number of Attributes/Values
This is called 'dimensionality reduction'. We'll look at techniques for this later in the course, but some simplistic versions:
- Apply upper and lower thresholds of frequency
- Noise removal functions
- Remove redundant attributes
- Remove attributes below a threshold of contribution to classification (eg if an attribute is evenly distributed, it adds no knowledge)
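The last bullet can be sketched with entropy: an attribute whose values are evenly distributed has maximal entropy and tells us nothing about the class. This is one simplistic reading of "contribution to classification", not the technique covered later in the course; the 0.99 threshold and the document attributes are invented.

```python
# Keep only attributes whose value distribution is skewed enough to be
# potentially informative; drop constant and (near-)uniform attributes.

import math

def entropy(values):
    n = len(values)
    counts = {}
    for v in values:
        counts[v] = counts.get(v, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def informative_attrs(table, max_entropy=0.99):
    """Attributes whose normalised entropy is below the threshold."""
    keep = []
    for attr in table[0]:
        col = [row[attr] for row in table]
        distinct = len(set(col))
        if distinct == 1:
            continue  # constant attribute: adds no knowledge
        if entropy(col) / math.log2(distinct) < max_entropy:
            keep.append(attr)
    return keep

docs = [
    {"the": 1, "viagra": 1, "id": 1},
    {"the": 1, "viagra": 0, "id": 2},
    {"the": 1, "viagra": 0, "id": 3},
    {"the": 1, "viagra": 0, "id": 4},
]
print(informative_attrs(docs))  # ['viagra']
```

'the' is constant (useless), 'id' is perfectly evenly distributed (a unique key, also useless), and only 'viagra' survives.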

Over-Fitting / Under-Fitting
Learning a concept must stop at the appropriate time. For example, we could express the concept of 'Is Spam?' as a list of spam emails: any email identical to one of those is spam. Accuracy: 0% on new data, 100% on training data. Ooops! This is called over-fitting: the concept has been tailored too closely to the training data. Story: the US Military trained a neural network to distinguish tanks vs rocks. It would shoot the US tanks they trained it on very consistently, and never shot any rocks... or enemy tanks. [probably fiction, but amusing]

Over-Fitting / Under-Fitting
Extreme case of over-fitting: the algorithm tries to learn a set of rules to determine the class.
Rule1: attr1=val1/1 and attr2=val2/1 and attr3=val3/1 => class1
Rule2: attr1=val1/2 and attr2=val2/2 and attr3=val3/2 => class2
Urgh. One rule for each instance is useless. We need to prevent the learning from becoming too specific to the training set, but we also don't want it to be too broad. Complicated!

Over-Fitting / Under-Fitting
Extreme case of under-fitting: always pick the most frequent class and ignore the data completely. Eg if one class makes up 99% of the data, then a 'classifier' that always picks this class will be correct 99% of the time! But the aim of the exercise is probably to determine the 1%, not the 99%... making it accurate 0% of the time when you need it.
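The under-fitting extreme above can be written out directly: a "classifier" that learns only the majority class. The spam/not-spam labels are an invented example matching the 99%/1% split in the slide.

```python
# The majority-class baseline: ignores every attribute of every instance.

from collections import Counter

class MajorityClassifier:
    def fit(self, labels):
        self.majority = Counter(labels).most_common(1)[0][0]
        return self

    def predict(self, instance):
        return self.majority  # the instance is ignored completely

labels = ["not-spam"] * 99 + ["spam"]
clf = MajorityClassifier().fit(labels)
accuracy = sum(clf.predict(x) == y for x, y in zip(range(100), labels)) / 100
print(accuracy)  # 0.99 -- yet it never finds a single spam email
```

This baseline is still useful as a yardstick: a learned model on imbalanced data must beat it to be worth anything.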

Scalability
We may be able to reduce the number of attributes, but most of the time we're not interested in small 'toy' databases, but huge ones. When there are millions of instances and thousands of attributes, that's a LOT of data to try to find a model for. It is very important that data mining algorithms scale well.
- Can't keep all the data in memory
- Might not be able to keep all the results in memory either
- Might have access to distributed processing?
- Might be able to train on a sample of the data?
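One standard answer to the last bullet is reservoir sampling: it keeps a uniform random sample of k instances while making a single pass over a dataset too large to hold in memory. A minimal sketch:

```python
# Uniform sample of k instances from a stream of unknown length,
# using O(k) memory regardless of how large the stream is.

import random

def reservoir_sample(stream, k, rng=random):
    sample = []
    for i, instance in enumerate(stream):
        if i < k:
            sample.append(instance)            # fill the reservoir first
        else:
            j = rng.randint(0, i)              # inclusive on both ends
            if j < k:
                sample[j] = instance           # replace with probability k/(i+1)
    return sample

sample = reservoir_sample(range(1_000_000), 100)
print(len(sample))  # 100
```

The model is then trained on `sample` instead of the full dataset, trading some accuracy for tractability.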

Human Interaction
Problem Exists Between Keyboard And Chair.
- Data mining experts are probably not experts in the domain of the data. They need to work with domain experts to find out what is needed, and to formulate queries
- Need to work together to interpret and evaluate results
- Visualisation of results may be problematic
- Integrating into the normal workflow may be problematic
- How to apply the results appropriately may not be clear (eg Barbie + chocolate?)

Ethical Data Mining
Just because we can doesn't mean we should. Should we include marital status, gender, race, religion or other attributes about a person in a data mining experiment? Discrimination? But sometimes those attributes are appropriate and important: medical diagnosis, for example. What about attributes that are dependent on 'sensitive' attributes? Neighbourhoods have different average incomes... are we discriminating against the poor by using location? Privacy issues? Data mining across time? Government-sponsored data mining?