Motivation: Why Data Mining?
Holy grail: informed decision making.
Lots of data are being collected:
– Business: transactions, web logs, GPS tracks, …
– Science: remote sensing, micro-array gene expression data, …
Challenges:
– Volume of data >> number of human analysts
– Some automation is needed
Limitations of relational databases:
– Cannot predict the future (i.e., answer questions about items not in the database). Ex. Predict tomorrow's weather or the credit-worthiness of a new customer.
– Cannot compute transitive closure and more complex questions. Ex. What are the natural groups of customers? Which subsets of items are bought together?
Data mining may help!
– Provide better and customized insights for business
– Help scientists with hypothesis generation
Motivation for Data Mining
Understanding of a (new) phenomenon:
– Discovery of a model may be aided by patterns
– Ex. 1854 London: cholera deaths clustered around a water pump
– Patterns narrow down potential causes
– Changed hypothesis: miasma => water-borne
Though the final model may not involve patterns:
– Cause-effect, e.g., cholera is caused by germs
Data Mining: Definition
The process of discovering
– interesting, useful, non-trivial patterns (non-trivial to a non-specialist; exceptions to known patterns for a specialist)
– from large datasets
Pattern families:
1. Clusters
2. Outliers, anomalies
3. Associations, correlations
4. Classification and prediction models
5. …
What's NOT Data Mining
Simple querying or summarization of data
– Ex. Find the number of Subaru drivers in Ramsey county
– Search space is not large (not exponential)
Testing a hypothesis via a primary data analysis
– Ex. Do Subaru drivers vote for Democrats?
– Search space is not large!
– Data mining is secondary data analysis to generate multiple plausible hypotheses
Uninteresting or obvious patterns in data
– Ex. Minneapolis and St. Paul have similar climates
– Common knowledge: nearby places have similar climates!
Context of Data Mining Models
CRISP-DM (CRoss-Industry Standard Process for Data Mining) defines the phases:
– Application/Business Understanding
– Data Understanding
– Data Preparation
– Modeling
– Evaluation
– Deployment
http://www.crisp-dm.org
Outline
– Clustering
– Outlier Detection
– Association Rules
– Classification & Prediction
– Summary
Clustering: What are natural groups of employees?
Relation R (find K = 2 groups):

Id  Age  Years of Service
A   30   5
B   50   25
C   50   15
D   25   5
E   30   10
F   55   25
Clustering: Geometric View shows 2 groups!
Plotting relation R with Age on the x-axis and Years of Service on the y-axis separates the employees into two visible groups (K = 2).
[Scatter plot of the six employees A-F in Age vs. Years of Service space]
K-Means Algorithm: 1. Start with random seeds
[Scatter plot of relation R with two randomly placed seeds (K = 2)]
K-Means Algorithm: 2. Assign points to closest seed
[Two scatter plots of relation R: points A-F with the seeds, then each point colored to show its closest seed]
K-Means Algorithm: 3. Revise seeds to group centers
[Scatter plot of relation R; the seeds are moved to the centers of their assigned groups]
K-Means Algorithm: 2 (repeated). Assign points to closest seed
[Two scatter plots of relation R: the revised seeds, then points re-colored to show the closest revised seed]
K-Means Algorithm: 3 (repeated). Revise seeds to group centers
[Scatter plot of relation R; seeds are revised again to the centers of the re-colored groups]
K-Means Algorithm: 4. If seeds changed, loop back to step 2 (assign points to closest seed)
[Scatter plot of relation R with points colored by closest seed]
K-Means Algorithm: Termination
Revising the seeds to the group centers no longer moves them, so the algorithm stops. The final clusters are {A, D, E} and {B, C, F}.
[Two scatter plots of relation R showing the final seeds and the final cluster assignment]
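The loop above (seed, assign, revise, repeat until the seeds stop moving) fits in a few lines. A minimal sketch in Python, assuming squared Euclidean distance and the toy relation R from the slides; the names kmeans and dist2 are my own:

```python
import random

# Toy relation R from the slides: Id -> (Age, Years of Service)
R = {"A": (30, 5), "B": (50, 25), "C": (50, 15),
     "D": (25, 5), "E": (30, 10), "F": (55, 25)}

def dist2(p, q):
    # Squared Euclidean distance (enough for closest-seed comparisons).
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def kmeans(points, k=2, rng_seed=0):
    rng = random.Random(rng_seed)
    seeds = rng.sample(list(points.values()), k)      # Step 1: random seeds
    while True:
        # Step 2: assign each point to its closest seed.
        assign = {pid: min(range(k), key=lambda i: dist2(p, seeds[i]))
                  for pid, p in points.items()}
        # Step 3: revise each seed to the center (mean) of its group.
        new_seeds = []
        for i in range(k):
            group = [points[pid] for pid, c in assign.items() if c == i]
            center = tuple(sum(x) / len(group) for x in zip(*group)) if group else seeds[i]
            new_seeds.append(center)
        # Step 4: if seeds changed, loop back to step 2; else terminate.
        if new_seeds == seeds:
            return assign
        seeds = new_seeds

print(kmeans(R))   # e.g. {'A': 0, 'D': 0, 'E': 0, 'B': 1, 'C': 1, 'F': 1}
```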
Outliers: Global and Local
Ex. Traffic data in the Twin Cities: abnormal Sensor 9.
[Plot of Twin Cities traffic sensor readings highlighting the abnormal Sensor 9]
Outlier Detection
Distribution tests:
– Global outliers, i.e., different from the population
– Local outliers, i.e., different from neighbors
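A minimal sketch of a distribution test for global outliers (my illustration, not from the slides): flag values more than a chosen number of standard deviations from the population mean. A local-outlier test would instead compare each value against its neighbors, as in the Sensor 9 example.

```python
def global_outliers(values, threshold=2.0):
    """Flag values whose z-score exceeds the threshold."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [v for v in values if abs(v - mean) > threshold * std]

# Ex. one sensor reporting 95 among typical readings stands out globally.
print(global_outliers([10, 12, 11, 13, 95, 12, 11]))   # -> [95]
```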
Associations: Which items are bought together?
Input: transactions, each containing a set of item-types:

Transaction  Items Bought
1            {socks, diaper, milk, beer, beef, egg, …}
2            {pillow, toothbrush, ice-cream, muffin, …}
3            {diaper, beer, pacifier, formula, blanket, …}
…            …
n            {battery, juice, beef, egg, chicken, …}

Metrics balance computation cost and statistical interpretation!
– Support: probability(Diaper and Beer in T) = 2/5
– Confidence: probability(Beer in T | Diaper in T) = 2/2
Algorithm: Apriori [Agrawal & Srikant, VLDB 1994]
– Support-based pruning using monotonicity
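The two metrics are easy to state in code. A minimal sketch (my illustration; five transactions are assumed to match the 2/5 support figure above, with diaper and beer appearing in transactions 1 and 3, and transaction 4 is a filler I added):

```python
def support(transactions, itemset):
    """Fraction of transactions containing every item in itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(transactions, lhs, rhs):
    """P(rhs in T | lhs in T): co-occurrence support over lhs support."""
    return support(transactions, lhs | rhs) / support(transactions, lhs)

T = [{"socks", "diaper", "milk", "beer", "beef", "egg"},
     {"pillow", "toothbrush", "ice-cream", "muffin"},
     {"diaper", "beer", "pacifier", "formula", "blanket"},
     {"juice", "bread"},                      # filler transaction (assumed)
     {"battery", "juice", "beef", "egg", "chicken"}]

print(support(T, {"diaper", "beer"}))         # 2/5 = 0.4
print(confidence(T, {"diaper"}, {"beer"}))    # 2/2 = 1.0
```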
Apriori Algorithm: How to eliminate infrequent item-sets as early as possible?
Support threshold >= 0.5, i.e., an item-set must appear in at least 2 of the 4 transactions:

Transaction Id  Time   Item-types bought
1101            18:35  Milk, bread, cookies, juice
792             19:38  Milk, juice
2130            20:05  Milk, eggs
1735            20:40  Bread, cookies, coffee
Apriori Algorithm: Eliminate infrequent singleton sets
Count each item-type over the four transactions:

Item-type  Count
Milk       3
Bread      2
Cookies    2
Juice      2
Coffee     1
Eggs       1

With support threshold >= 0.5, the frequent singletons are Milk, Bread, Cookies, and Juice; Coffee and Eggs are eliminated.
[Lattice diagram of the six singleton item-sets, with Coffee and Eggs pruned]
Apriori Algorithm: Make pairs from frequent items & prune infrequent pairs!
Form candidate pairs only from the frequent singletons and count them:

Item Pair       Count
Milk, Juice     2
Bread, Cookies  2
Milk, Bread     1
Milk, Cookies   1
Bread, Juice    1
Cookies, Juice  1

The frequent pairs are {Milk, Juice} and {Bread, Cookies}.
[Lattice diagram of the candidate pairs MB, MC, MJ, BC, BJ, CJ, with the infrequent ones pruned]
Apriori Algorithm: Make triples from frequent pairs & prune infrequent triples!
No candidate triples are generated, due to monotonicity: every 3-item-set contains at least one infrequent pair (e.g., {Milk, Bread, Cookies} contains the infrequent {Milk, Bread}), so none can reach the support threshold.
The Apriori algorithm therefore examined only 12 subsets (6 singletons + 6 pairs) instead of all 64 subsets of the 6 item-types.
[Lattice diagram showing the candidate triples MBC, MBJ, MCJ, BCJ and the 4-set MBCJ, all pruned]
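A minimal level-wise sketch of this support-based pruning (my illustration of the idea, not the original pseudocode from the VLDB'94 paper):

```python
from itertools import combinations

def apriori(transactions, min_support=0.5):
    """Level-wise search: (k+1)-item-sets are built only from frequent k-item-sets."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    freq = {}                                   # frequent item-set -> support
    level = [frozenset([i]) for i in items]     # level 1: singleton sets
    while level:
        counts = {c: sum(c <= t for t in transactions) for c in level}
        frequent = [c for c, cnt in counts.items() if cnt / n >= min_support]
        freq.update((c, counts[c] / n) for c in frequent)
        # Join frequent k-sets into (k+1)-set candidates, then prune any
        # candidate with an infrequent k-subset (monotonicity).
        candidates = {a | b for a in frequent for b in frequent
                      if len(a | b) == len(a) + 1}
        level = [c for c in candidates
                 if all(frozenset(s) in freq for s in combinations(c, len(c) - 1))]
    return freq

T = [{"milk", "bread", "cookies", "juice"}, {"milk", "juice"},
     {"milk", "eggs"}, {"bread", "cookies", "coffee"}]
print(apriori(T))   # frequent singletons and pairs only; no triple survives
```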
Find a (decision-tree) model to predict LoanWorthy!
Predict the class column LoanWorthy from the other columns.

Learning samples:
RID  Married  Salary    Acct_balance  Age   LoanWorthy
1    No       >=50K     <5K           >=25  Yes
2    Yes      >=50K     >=5K          >=25  Yes
3    Yes      20K..50K  <5K           <25   No
4    No       <20K      >=5K          <25   No
5    No       <20K      <5K           >=25  No
6    Yes      20K..50K  >=5K          >=25  Yes

Testing sample:
RID  Married  Salary  Acct_balance  Age   LoanWorthy
7    Yes      <20K    >=5K          >=25  ?
A Decision Tree to Predict LoanWorthy From the Other Columns

Salary?
– <20K: No (RIDs 4, 5)
– 20K..50K: Age?
  – <25: No (RID 3)
  – >=25: Yes (RID 6)
– >=50K: Yes (RIDs 1, 2)

Q: What is the decision on the new application (RID 7)? Its Salary is <20K, so this tree predicts No.
Another Decision Tree to Predict LoanWorthy From the Other Columns

Age?
– <25: No (RIDs 3, 4)
– >=25: Salary?
  – <20K: No (RID 5)
  – 20K..50K: Yes (RID 6)
  – >=50K: Yes (RIDs 1, 2)

Q: What is the decision on the new application (RID 7)? Age is >=25 and Salary is <20K, so this tree also predicts No.
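A decision tree is just nested conditionals. A sketch of both trees (my illustration; attribute values kept as the strings used in the tables):

```python
def tree1(row):
    # Root: Salary; second split: Age (the first tree above).
    if row["Salary"] == "<20K":
        return "No"
    if row["Salary"] == ">=50K":
        return "Yes"
    return "Yes" if row["Age"] == ">=25" else "No"      # Salary 20K..50K

def tree2(row):
    # Root: Age; second split: Salary (the alternative tree).
    if row["Age"] == "<25":
        return "No"
    return "No" if row["Salary"] == "<20K" else "Yes"   # Age >= 25

new_app = {"Married": "Yes", "Salary": "<20K",
           "Acct_balance": ">=5K", "Age": ">=25"}
print(tree1(new_app), tree2(new_app))   # both predict: No No
```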
ID3 Algorithm: Choosing a Decision for the Root Node (1)
Predict the class LoanWorthy from the other columns. For each candidate column, count how many groups its values split the learning samples into:

Column    Married  Salary  Acct_balance  Age  LoanWorthy
# Groups  2        3       2             2    2
ID3 Algorithm: Choosing a Decision for the Root Node (2)
Within each group, list the class labels of its samples (y = LoanWorthy, n = not):

Column    Married   Salary      Acct_balance  Age       LoanWorthy
# Groups  2         3           2             2         2
Groups    yyn, nny  yy, yn, nn  yyn, ynn      yyyn, nn  yyy, nnn
ID3 Algorithm: Choosing a Decision for the Root Node (3)
Compute the entropy of each candidate split as the size-weighted average of its group entropies, where H(group) = -p*log2(p) - (1-p)*log2(1-p) and p is the fraction of y labels in the group:

Column    Married   Salary      Acct_balance  Age       LoanWorthy
# Groups  2         3           2             2         2
Groups    yyn, nny  yy, yn, nn  yyn, ynn      yyyn, nn  yyy, nnn
Entropy   0.92      0.33        0.92          0.54      1
ID3 Algorithm: Choosing a Decision for the Root Node (4)
Information gain = class entropy (1, from the yyy, nnn split of LoanWorthy) minus the split entropy:

Column    Married   Salary      Acct_balance  Age       LoanWorthy
# Groups  2         3           2             2         2
Groups    yyn, nny  yy, yn, nn  yyn, ynn      yyyn, nn  yyy, nnn
Entropy   0.92      0.33        0.92          0.54      1
Gain      0.08      0.67        0.08          0.46      (class)
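These entropy and gain numbers can be reproduced with a few lines. A sketch (my illustration; log base 2, as usual for information gain):

```python
from math import log2

def entropy(labels):
    """H = -sum p*log2(p) over the class frequencies in labels."""
    n = len(labels)
    return -sum((labels.count(l) / n) * log2(labels.count(l) / n)
                for l in set(labels))

def info_gain(groups):
    """Class entropy minus the size-weighted entropy of the split's groups."""
    all_labels = [l for g in groups for l in g]
    n = len(all_labels)
    split_entropy = sum(len(g) / n * entropy(g) for g in groups)
    return entropy(all_labels) - split_entropy

# The splits from the table (y/n = LoanWorthy yes/no):
print(round(info_gain([list("yyn"), list("nny")]), 2))             # Married: 0.08
print(round(info_gain([list("yy"), list("yn"), list("nn")]), 2))   # Salary: 0.67
print(round(info_gain([list("yyn"), list("ynn")]), 2))             # Acct_balance: 0.08
print(round(info_gain([list("yyyn"), list("nn")]), 2))             # Age: 0.46
```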
Root Node: the decision is based on Salary, which has the highest information gain (0.67, vs. 0.46 for Age and 0.08 for Married and Acct_balance).
Root Node of a Decision Tree to Predict LoanWorthy

Salary?
– <20K: RIDs 4, 5 (both No)
– 20K..50K: RIDs 3 (No), 6 (Yes)
– >=50K: RIDs 1, 2 (both Yes)
ID3 Algorithm: Which leaves need refinement?
The <20K and >=50K leaves are pure (all No and all Yes, respectively). The 20K..50K leaf mixes No (RID 3) and Yes (RID 6), so it needs a further split.
ID3 Algorithm Output: A Decision Tree to Predict the LoanWorthy Column From the Other Columns

Salary?
– <20K: No (RIDs 4, 5)
– 20K..50K: Age?
  – <25: No (RID 3)
  – >=25: Yes (RID 6)
– >=50K: Yes (RIDs 1, 2)
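Assembling the steps, a compact recursive ID3 sketch (my illustration; it reuses entropy/info_gain from the sketch above, and falls back to a majority vote when no attributes remain):

```python
def id3(rows, attrs, target="LoanWorthy"):
    labels = [r[target] for r in rows]
    if len(set(labels)) == 1 or not attrs:
        # Pure node, or nothing left to split on: predict the majority label.
        return max(set(labels), key=labels.count)
    # Greedy choice: the attribute whose split has the highest information gain.
    def gain(a):
        groups = {}
        for r in rows:
            groups.setdefault(r[a], []).append(r[target])
        return info_gain(list(groups.values()))
    best = max(attrs, key=gain)
    rest = [a for a in attrs if a != best]
    return {best: {v: id3([r for r in rows if r[best] == v], rest)
                   for v in {r[best] for r in rows}}}

# With the six learning samples as dicts, id3(rows, ["Married", "Salary",
# "Acct_balance", "Age"]) picks Salary at the root, as in the slides.
```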
Another Decision Tree to Predict LoanWorthy From the Other Columns
Splitting the 20K..50K branch on Acct_balance instead of Age also separates the samples:

Salary?
– <20K: No (RIDs 4, 5)
– 20K..50K: Acct_balance?
  – <5K: No (RID 3)
  – >=5K: Yes (RID 6)
– >=50K: Yes (RIDs 1, 2)
A Decision Root Not Preferred by ID3
ID3 prefers Salary over Age for the decision in the root node due to the difference in information gain (0.67 vs. 0.46), even though the two choices are comparable in classification accuracy.

Age?
– <25: No (RIDs 3, 4)
– >=25: RIDs 1 (Yes), 2 (Yes), 5 (No), 6 (Yes)
A Decision Tree Not Preferred by ID3
ID3 is greedy, preferring Salary over Age for the decision in the root node. Thus it prefers the decision trees in the earlier slides over the following one, despite comparable quality:

Age?
– <25: No (RIDs 3, 4)
– >=25: Salary?
  – <20K: No (RID 5)
  – 20K..50K: Yes (RID 6)
  – >=50K: Yes (RIDs 1, 2)
Summary
Data mining is the process of discovering
– interesting, useful, non-trivial patterns
– from large datasets
Pattern families:
1. Clusters, e.g., K-Means
2. Outliers, anomalies
3. Associations, correlations
4. Classification and prediction models, e.g., decision trees
5. …
Review Quiz
Consider a Washingtonian.com article about election micro-targeting using a database of 200+ million records about individuals. The database is compiled from voter lists and memberships (e.g., advocacy groups, frequent-buyer cards, catalog/magazine subscriptions, …) as well as polls/surveys of effective messages and preferences. It is at www.washingtonian.com/articles/people/9627.html
Q1. Match the following use-cases in the article to the categories of traditional SQL2 query, association, clustering, and classification:
(i) How many single Asian men under 35 live in a given congressional district?
(ii) How many college-educated women with children at home are in Canton, Ohio?
(iii) Jaguar, Land Rover, and Porsche owners tend to be more Republican, while Subaru, Hyundai, and Volvo drivers lean Democratic.
(iv) Some of the strongest predictors of political ideology are things like education, homeownership, income level, and household size.
(v) Religion and gun ownership are the two most powerful predictors of partisan ID.
(vi) … it even studied the roads Republicans drove as they commuted to work, which allowed the party to put its billboards where they would do the most good.
(vii) Catalyst and its competitors can build models to predict voter choices. … Based on how alike they are, you can assign a probability to them. … a likelihood of support on each person based on how many character traits a person shares with your known supporters.
(viii) Will 51 percent of the voters buy what the RNC candidate is offering? Or will the DNC candidate seem like a better deal?
Q2. Compare and contrast data mining with relational databases.
Q3. Compare and contrast data mining with traditional statistics (or machine learning).