CIS 490 / 730: Artificial Intelligence
Lecture 35 of 42
Wednesday, 15 November 2006
William H. Hsu
Department of Computing and Information Sciences, KSU
KSOL course page:
Course web site:
Instructor home page:
Reading for Next Class: Section 20.5, Russell & Norvig, 2nd edition
Statistical Learning
Discussion: ANNs and PS7

Lecture Outline
Today's Reading: Section 20.1, R&N 2e
Friday's Reading: Section 20.5, R&N 2e
Machine Learning, Continued: Review
- Finding Hypotheses
  - Version spaces
  - Candidate elimination
- Decision Trees
  - Induction
  - Greedy learning
  - Entropy
- Perceptrons
  - Definitions, representation
  - Limitations

Example Trace
[Figure: candidate-elimination example trace, showing the version-space boundary sets S0 through S4 and G0 through G4 as training examples d1, d2, d3, d4 are processed; S1 = G1 and S2 = G2 along the way.]

An Unbiased Learner
Example of a Biased H
- Conjunctive concepts with don't cares
- What concepts can H not express? (Hint: what are its syntactic limitations?)
Idea
- Choose H' that expresses every teachable concept
- i.e., H' is the power set of X
- Recall: |A → B| = |B|^|A| (A = X; B = {labels}; H' = A → B)
- ({Rainy, Sunny} × {Warm, Cold} × {Normal, High} × {None, Mild, Strong} × {Cool, Warm} × {Same, Change}) → {0, 1}
An Exhaustive Hypothesis Language
- Consider: H' = disjunctions (∨), conjunctions (∧), negations (¬) over previous H
- |H'| = 2^(2 · 2 · 2 · 3 · 2 · 2) = 2^96; |H| = 1 + (3 · 3 · 3 · 4 · 3 · 3) = 973
What Are S, G for the Hypothesis Language H'?
- S ≡ disjunction of all positive examples
- G ≡ conjunction of all negated negative examples
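To make the counting concrete, here is a minimal sketch in plain Python that reproduces the 96, 973, and 2^96 figures from the attribute value sets listed above; the attribute names are my own labels, not from the slides:

```python
from math import prod

# Attribute value sets as listed above (attribute names are illustrative labels).
attributes = {
    "Sky":      ["Rainy", "Sunny"],
    "AirTemp":  ["Warm", "Cold"],
    "Humidity": ["Normal", "High"],
    "Wind":     ["None", "Mild", "Strong"],
    "Water":    ["Cool", "Warm"],
    "Forecast": ["Same", "Change"],
}

# |X|: number of distinct instances.
n_instances = prod(len(vals) for vals in attributes.values())            # 2*2*2*3*2*2 = 96

# |H|: semantically distinct conjunctive hypotheses with don't-cares: each
# attribute takes one of its values or "?", plus one everywhere-false hypothesis.
n_conjunctive = 1 + prod(len(vals) + 1 for vals in attributes.values())  # 1 + 972 = 973

# |H'|: the unbiased language can represent any subset of X, i.e., its power set.
n_unbiased = 2 ** n_instances                                            # 2**96

print(n_instances, n_conjunctive, n_unbiased)
```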

Decision Trees
Classifiers: Instances (Unlabeled Examples)
Internal Nodes: Tests for Attribute Values
- Typical: equality test (e.g., "Wind = ?")
- Inequality, other tests possible
Branches: Attribute Values
- One-to-one correspondence (e.g., "Wind = Strong", "Wind = Light")
Leaves: Assigned Classifications (Class Labels)
Representational Power: Propositional Logic (Why?)
[Figure: decision tree for the concept PlayTennis, rooted at Outlook? with branches Sunny, Overcast, and Rain leading to a Humidity? test, a leaf, and a Wind? test; Humidity branches on High/Normal and Wind on Strong/Light, with leaf labels Yes, No, and Maybe.]
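To illustrate the propositional-logic reading (each root-to-leaf path is a conjunction of attribute tests, and the tree as a whole is a disjunction of the paths that share a label), here is a minimal Python sketch. The nested-dict encoding, the helper name `paths_to_rules`, and the leaf labels of the example tree are my own illustrative choices, not the slide's tree:

```python
# A decision tree as nested dicts: internal node = {attribute: {value: subtree}},
# leaf = class label. The tree below only mimics the shape of the PlayTennis tree
# sketched on the slide; its leaf labels are illustrative.
tree = {
    "Outlook": {
        "Sunny":    {"Humidity": {"High": "No", "Normal": "Yes"}},
        "Overcast": "Yes",
        "Rain":     {"Wind": {"Strong": "No", "Light": "Yes"}},
    }
}

def paths_to_rules(node, path=()):
    """Enumerate (conjunction-of-tests, label) pairs, one per root-to-leaf path."""
    if not isinstance(node, dict):          # leaf: emit the accumulated conjunction
        yield path, node
        return
    (attribute, branches), = node.items()
    for value, subtree in branches.items():
        yield from paths_to_rules(subtree, path + ((attribute, value),))

for tests, label in paths_to_rules(tree):
    clause = " AND ".join(f"{a} = {v}" for a, v in tests)
    print(f"IF {clause} THEN {label}")
```

Each printed rule is one conjunct of the disjunctive (DNF) formula the tree represents, which is why decision trees have the representational power of propositional logic.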

Example: Decision Tree to Predict C-Section Risk
Learned from Medical Records of 1000 Women
Negative Examples are Cesarean Sections
- Prior distribution: [833+, 167-]  0.83+, 0.17-
- Fetal-Presentation = 1: [822+, 116-]  0.88+, 0.12-
  - Previous-C-Section = 0: [767+, 81-]  0.90+, 0.10-
    - Primiparous = 0: [399+, 13-]  0.97+, 0.03-
    - Primiparous = 1: [368+, 68-]  0.84+, 0.16-
      - Fetal-Distress = 0: [334+, 47-]  0.88+, 0.12-
        - Birth-Weight ≥ …: …
        - Birth-Weight < …: …
      - Fetal-Distress = 1: [34+, 21-]  0.62+, 0.38-
  - Previous-C-Section = 1: [55+, 35-]  0.61+, 0.39-
- Fetal-Presentation = 2: [3+, 29-]  0.11+, 0.89-
- Fetal-Presentation = 3: [8+, 22-]  0.27+, 0.73-

Decision Tree Learning: Top-Down Induction (ID3)
Algorithm Build-DT (Examples, Attributes)
  IF all examples have the same label THEN RETURN (leaf node with label)
  ELSE IF set of attributes is empty THEN RETURN (leaf with majority label)
  ELSE
    Choose best attribute A as root
    FOR each value v of A
      Create a branch out of the root for the condition A = v
      IF {x ∈ Examples: x.A = v} = Ø THEN RETURN (leaf with majority label)
      ELSE Build-DT ({x ∈ Examples: x.A = v}, Attributes ~ {A})
[Figure: two candidate splits of the same example set [29+, 35-], each with True and False branches: attribute A1 separates it into [21+, 5-] and [8+, 30-]; attribute A2 separates it into [18+, 33-] and [11+, 2-].]
But Which Attribute Is Best?
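A minimal, runnable Python sketch of Build-DT as described above. The heuristic for picking the best attribute is passed in as a parameter, since the slides defer that choice to the information-gain discussion that follows; the dataset encoding (a list of dicts with a "label" key) and all function names are my own assumptions, not from the slides:

```python
from collections import Counter

def majority_label(examples):
    """Most common class label among the given examples."""
    return Counter(x["label"] for x in examples).most_common(1)[0][0]

def build_dt(examples, attributes, choose_attribute):
    """Top-down induction. `attributes` maps attribute name -> list of possible values.
    Returns either a class label (leaf) or a nested dict {attribute: {value: subtree}}."""
    labels = {x["label"] for x in examples}
    if len(labels) == 1:                        # all examples have the same label -> leaf
        return labels.pop()
    if not attributes:                          # attribute set is empty -> majority leaf
        return majority_label(examples)
    a = choose_attribute(examples, attributes)  # "choose best attribute A as root"
    remaining = {k: v for k, v in attributes.items() if k != a}
    branches = {}
    for v in attributes[a]:                     # one branch per value v of A
        subset = [x for x in examples if x[a] == v]
        if not subset:                          # empty partition -> majority leaf
            branches[v] = majority_label(examples)
        else:
            branches[v] = build_dt(subset, remaining, choose_attribute)
    return {a: branches}
```

Plugging an information-gain-based `choose_attribute` into this skeleton (see the gain sketch a few slides below) yields ID3.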

Choosing the "Best" Root Attribute
Objective
- Construct a decision tree that is as small as possible (Occam's Razor)
- Subject to: consistency with labels on training data
Obstacles
- Finding the minimal consistent hypothesis (i.e., decision tree) is NP-hard (D'oh!)
- Recursive algorithm (Build-DT)
  - A greedy heuristic search for a simple tree
  - Cannot guarantee optimality (D'oh!)
Main Decision: Next Attribute to Condition On
- Want: attributes that split examples into sets that are relatively pure in one label
- Result: closer to a leaf node
- Most popular heuristic
  - Developed by J. R. Quinlan
  - Based on information gain
  - Used in the ID3 algorithm

Entropy: Intuitive Notion
A Measure of Uncertainty
- The Quantity
  - Purity: how close a set of instances is to having just one label
  - Impurity (disorder): how close it is to total uncertainty over labels
- The Measure: Entropy
  - Directly proportional to impurity, uncertainty, irregularity, surprise
  - Inversely proportional to purity, certainty, regularity, redundancy
Example
- For simplicity, assume H = {0, 1}, distributed according to Pr(y)
  - Can have (more than 2) discrete class labels
  - Continuous random variables: differential entropy
- Optimal purity for y: either
  - Pr(y = 0) = 1, Pr(y = 1) = 0
  - Pr(y = 1) = 1, Pr(y = 0) = 0
- What is the least pure probability distribution?
  - Pr(y = 0) = 0.5, Pr(y = 1) = 0.5
  - Corresponds to maximum impurity/uncertainty/irregularity/surprise
  - Property of entropy: concave function ("concave downward")
[Figure: the binary entropy curve H(p) = Entropy(p) plotted against p+ = Pr(y = +), which peaks at p+ = 0.5.]

Entropy: Information-Theoretic Definition
Components
- D: a set of examples {⟨x1, c(x1)⟩, ⟨x2, c(x2)⟩, …, ⟨xm, c(xm)⟩}
- p+ = Pr(c(x) = +), p- = Pr(c(x) = -)
Definition
- H is defined over a probability density function p
- D contains examples whose frequency of + and - labels indicates p+ and p- for the observed data
- The entropy of D relative to c is: H(D) ≡ -p+ log_b(p+) - p- log_b(p-)
What Units is H Measured In?
- Depends on the base b of the log (bits for b = 2, nats for b = e, etc.)
- A single bit is required to encode each example in the worst case (p+ = 0.5)
- If there is less uncertainty (e.g., p+ = 0.8), we can use less than 1 bit each
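As a quick illustration of the definition, binary entropy in bits can be computed directly from label counts; this is a sketch under my own naming (the function `entropy` and its count-based interface are not from the slides), and it reproduces the 0.94 bits quoted for the [9+, 5-] PlayTennis sample on the later slides:

```python
from math import log2

def entropy(pos, neg):
    """Binary entropy H(D) = -p+ log2(p+) - p- log2(p-), computed from label counts."""
    total = pos + neg
    h = 0.0
    for count in (pos, neg):
        p = count / total
        if p > 0:                 # treat 0 * log2(0) as 0
            h -= p * log2(p)
    return h

print(entropy(9, 5))    # ~0.940 bits: the PlayTennis prior [9+, 5-]
print(entropy(7, 7))    # 1.0 bit: maximum uncertainty at p+ = 0.5
print(entropy(8, 2))    # ~0.722 bits: less uncertainty (p+ = 0.8) needs < 1 bit per example
```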

Information Gain: Information-Theoretic Definition
Partitioning on Attribute Values
- Recall: a partition of D is a collection of disjoint subsets whose union is D
- Goal: measure the uncertainty removed by splitting on the value of attribute A
Definition
- The information gain of D relative to attribute A is the expected reduction in entropy due to splitting ("sorting") on A:
    Gain(D, A) ≡ H(D) - Σ_{v ∈ values(A)} (|D_v| / |D|) · H(D_v)
  where D_v is {x ∈ D: x.A = v}, the set of examples in D where attribute A has value v
- Idea: partition on A; scale entropy to the size of each subset D_v
Which Attribute Is Best?
[Figure: the two candidate splits of [29+, 35-] again: A1 separates it into [21+, 5-] and [8+, 30-]; A2 separates it into [18+, 33-] and [11+, 2-].]
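A minimal sketch that evaluates the two candidate splits in the figure, assuming only the label counts shown there (the function names `entropy` and `gain` are my own); it shows that A1 removes more uncertainty than A2 and so would be chosen:

```python
from math import log2

def entropy(pos, neg):
    """Binary entropy in bits from a (pos, neg) label count pair."""
    total = pos + neg
    return -sum((c / total) * log2(c / total) for c in (pos, neg) if c > 0)

def gain(parent, children):
    """Gain(D, A) = H(D) - sum over branches of (|Dv| / |D|) * H(Dv),
    with every node given as a (pos, neg) count pair."""
    n = sum(parent)
    return entropy(*parent) - sum((sum(c) / n) * entropy(*c) for c in children)

parent = (29, 35)
gain_a1 = gain(parent, [(21, 5), (8, 30)])   # split on A1's two branches
gain_a2 = gain(parent, [(18, 33), (11, 2)])  # split on A2's two branches
print(gain_a1, gain_a2)                      # A1 yields the larger gain, so ID3 picks A1
```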

Constructing a Decision Tree for PlayTennis using ID3 [1]
Selecting the Root Attribute
- Prior (unconditioned) distribution: 9+, 5-
- H(D) = -(9/14) lg(9/14) - (5/14) lg(5/14) ≈ 0.94 bits
- H(D, Humidity = High) = -(3/7) lg(3/7) - (4/7) lg(4/7) ≈ 0.985 bits
- H(D, Humidity = Normal) = -(6/7) lg(6/7) - (1/7) lg(1/7) ≈ 0.592 bits
- Gain(D, Humidity) = 0.94 - (7/14) * 0.985 - (7/14) * 0.592 ≈ 0.151 bits
- Similarly, Gain(D, Wind) = 0.94 - (8/14) * 0.811 - (6/14) * 1.0 ≈ 0.048 bits
[Figure: the two candidate root splits of [9+, 5-]: Humidity with High = [3+, 4-] and Normal = [6+, 1-]; Wind with Light = [6+, 2-] and Strong = [3+, 3-].]
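A quick numeric check of the two gains above, assuming nothing beyond the counts shown on this slide (`H` is a throwaway helper of my own):

```python
from math import log2

def H(pos, neg):
    """Binary entropy in bits from a (pos, neg) label count pair."""
    t = pos + neg
    return -sum((c / t) * log2(c / t) for c in (pos, neg) if c > 0)

H_D = H(9, 5)                                              # ~0.940 bits
gain_humidity = H_D - (7/14) * H(3, 4) - (7/14) * H(6, 1)  # ~0.151 bits
gain_wind     = H_D - (8/14) * H(6, 2) - (6/14) * H(3, 3)  # ~0.048 bits
print(round(H_D, 3), round(gain_humidity, 3), round(gain_wind, 3))
```

Humidity has the larger gain of the two, which is why it appears as a test below the Sunny branch in the completed tree on the next slide.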

Constructing a Decision Tree for PlayTennis using ID3 [2]
[Figure: the completed tree over examples 1-14 [9+, 5-], rooted at Outlook?]
- Outlook = Sunny: examples 1, 2, 8, 9, 11 [2+, 3-] → Humidity?
  - Humidity = High: examples 1, 2, 8 [0+, 3-] → No
  - Humidity = Normal: examples 9, 11 [2+, 0-] → Yes
- Outlook = Overcast: examples 3, 7, 12, 13 [4+, 0-] → Yes
- Outlook = Rain: examples 4, 5, 6, 10, 14 [3+, 2-] → Wind?
  - Wind = Strong: examples 6, 14 [0+, 2-] → No
  - Wind = Light: examples 4, 5, 10 [3+, 0-] → Yes

Decision Tree Overview
Heuristic Search and Inductive Bias
Decision Trees (DTs)
- Can be boolean (c(x) ∈ {+, -}) or range over multiple classes
- When to use DT-based models
Generic Algorithm Build-DT: Top-Down Induction
- Calculating best attribute upon which to split
- Recursive partitioning
Entropy and Information Gain
- Goal: to measure uncertainty removed by splitting on a candidate attribute A
- Calculating information gain (change in entropy)
- Using information gain in construction of tree
- ID3 ≡ Build-DT using Gain()
ID3 as Hypothesis Space Search (in State Space of Decision Trees)
Next: Artificial Neural Networks (Multilayer Perceptrons and Backprop)
Tools to Try: WEKA, MLC++

Inductive Bias
(Inductive) Bias: Preference for Some h ∈ H (Not Consistency with D Only)
Decision Trees (DTs)
- Boolean DTs: target concept is binary-valued (i.e., Boolean-valued)
- Building DTs
  - Histogramming: a method of vector quantization (encoding input using bins)
  - Discretization: continuous input → discrete (e.g., by histogramming)
Entropy and Information Gain
- Entropy H(D) for a data set D relative to an implicit concept c
- Information gain Gain(D, A) for a data set partitioned by attribute A
- Impurity, uncertainty, irregularity, surprise
Heuristic Search
- Algorithm Build-DT: greedy search (hill-climbing without backtracking)
- ID3 as Build-DT using the heuristic Gain()
- Heuristic : Search :: Inductive Bias : Inductive Generalization
MLC++ (Machine Learning Library in C++)
- Data mining libraries (e.g., MLC++) and packages (e.g., MineSet)
- Irvine Database: the Machine Learning Database Repository at UCI