Machine Learning: A Quick Look
Sources: Artificial Intelligence – Russell & Norvig; Artificial Intelligence – Luger
By: Héctor Muñoz-Avila

What Is Machine Learning?
"Logic is not the end of wisdom, it is just the beginning" --- Spock
[Diagram: a system interacting with its environment over time; after its knowledge changes, the same environment leads to a different action (Action1 vs. Action2).]

Learning: The Big Picture
Two forms of learning:
Supervised: both the input and the desired output of the learning component can be perceived (for example, a friendly teacher provides the answers)
Unsupervised: there is no hint about the correct answers for the learning component (for example, finding clusters in data)
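To make the distinction concrete, here is a minimal sketch assuming scikit-learn is available; the toy data, labels and model choices are invented for illustration and are not from the slides.

```python
# Supervised vs. unsupervised learning on the same toy data (illustrative only).
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

X = [[1, 0], [2, 1], [8, 9], [9, 8]]          # four examples, two numeric attributes

# Supervised: both the inputs and the desired outputs (labels) are perceived.
y = ["no", "no", "yes", "yes"]
clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[7, 8]]))                   # -> ['yes']

# Unsupervised: only the inputs are given; the learner looks for structure (clusters).
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)                              # e.g. [0 0 1 1]: cluster ids, no "correct" answer
```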

Offline vs. Online Learning
Online – during gameplay: adapt to player tactics, avoid repetition of mistakes. Requirements: computationally cheap, effective, robust, fast learning (Spronck 2004).
Offline – before the game is released: devise new tactics, discover exploits.
There are two different types of learning in games, online and offline. Online learning takes place during gameplay against human players. Its main applications are adapting to player tactics and playing styles and avoiding the repetition of mistakes. However, online learning has rarely been applied in commercial games, because most game companies are not comfortable shipping their games with an online learning technique included. What if the NPC learns something stupid? What if the learning takes too long or is too computationally expensive? We therefore set a few requirements for online learning:
Computationally cheap: it should not disturb the flow of gameplay.
Effective: it should not generate too many inferior opponents.
Robust: it should be able to deal with the randomness incorporated in games; for instance, you don't want to permanently unlearn a specific behavior because it performed badly at some point, when the bad performance could be ascribed to chance.
Fast learning: it should lead to results quickly; don't bore the players with a slow-learning AI.
Offline learning is applied before a game is released; typically it is used to explore new game tactics or to find exploits.

Classification (according to the language representation)
Symbolic: Version Space, Decision Trees, Explanation-Based Learning, …
Sub-symbolic: Reinforcement Learning, Connectionist, Evolutionary

Classification (according to the language representation)
Symbolic: Version Space, Decision Trees, Explanation-Based Learning, …
Sub-symbolic: Reinforcement Learning, Connectionist, Evolutionary

Version Space
Idea: learn a concept from a group of instances, some positive and some negative.
Example: target concept obj(Size, Color, Shape), with Size = {large, small}, Color = {red, white, blue}, Shape = {ball, brick, cube}.
Instances:
+: obj(large, white, ball), obj(small, blue, ball)
−: obj(small, red, brick), obj(large, blue, cube)
Two extreme (tentative) solutions: obj(X, Y, Z) is too general; obj(large, white, ball), obj(small, blue, ball), … are too specific.
[Diagram: the concept space in between, containing hypotheses such as obj(large, Y, ball), obj(small, Y, ball), obj(X, Y, ball), …]

How Version Space Works
[Diagram: two views of the instance space. If we consider only positive examples, the learned region keeps growing and can cover almost everything; if we consider positives and negatives, the region is forced to separate the two kinds of instances.]
What is the role of the negative instances? To help prevent over-generalizations.
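To make this concrete, here is a minimal Find-S-style sketch of the specific boundary of a version space (hypothetical code, not from the slides): positive examples force the hypothesis to generalize, and negative examples flag over-generalization.

```python
# Hypotheses over obj(Size, Color, Shape); '?' means "any value is acceptable".

def generalize(hypothesis, positive):
    """Minimally generalize the hypothesis so that it covers the positive example."""
    return tuple(h if h == v else '?' for h, v in zip(hypothesis, positive))

def covers(hypothesis, example):
    return all(h in ('?', v) for h, v in zip(hypothesis, example))

positives = [("large", "white", "ball"), ("small", "blue", "ball")]
negatives = [("small", "red", "brick"), ("large", "blue", "cube")]

h = positives[0]                        # start from the most specific hypothesis
for p in positives[1:]:
    h = generalize(h, p)

print(h)                                # ('?', '?', 'ball'), i.e. obj(X, Y, ball)
# The negatives keep us from over-generalizing: none of them may be covered.
assert not any(covers(h, n) for n in negatives)
```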

Classification (according to the language representation)
Symbolic: Version Space, Decision Trees, Explanation-Based Learning, …
Sub-symbolic: Reinforcement Learning, Connectionist, Evolutionary

Explanation-Based Learning
[Diagram: a blocks-world planning example with blocks A, B and C. The planner stacks a block onto another block early on, and later has to undo that move in order to reach the goal configuration.]
Can we avoid making this error again?

Explanation-Based Learning (2)
[Diagram: the same blocks-world initial and goal states, with the offending stacking move highlighted.]
Possible rule: if the initial state is this and the final state is this, don't do that.
More sensible rule: don't stack anything on top of a block if that block has to be clear in the final state.
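As a small illustration, the "more sensible rule" can be encoded as a reusable check that a planner consults before stacking; the action and state encodings below are my own assumptions, not part of the slides.

```python
# Learned constraint: don't stack anything on a block that must be clear in the final state.

def violates_learned_rule(action, goal_clear_blocks):
    """action = ("stack", block, onto); goal_clear_blocks = blocks that must be clear in the goal."""
    _op, _block, onto = action
    return onto in goal_clear_blocks

goal_clear = {"C"}                                               # nothing may sit on C in the goal
print(violates_learned_rule(("stack", "A", "C"), goal_clear))    # True  -> prune this action
print(violates_learned_rule(("stack", "A", "B"), goal_clear))    # False -> allowed
```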

Classification (according to the language representation)
Symbolic: Version Space, Decision Trees, Explanation-Based Learning, …
Sub-symbolic: Reinforcement Learning, Connectionist, Evolutionary

Motivation #1: Analysis Tool
Suppose that a company has a database of sales data, lots of sales data. How can that company's CEO use this data to figure out an effective sales strategy?
Safeway, Giant, etc. loyalty cards: what are those for?

Motivation #1: Analysis Tool (cont'd)
Sales data → (induction) → decision tree → rules such as:
"if buyer is male & age between 24-35 & married then he buys sport magazines"
[Diagram: a fragment of the underlying data table, columns Ex'ple, Bar, Fri, Hun, Pat, Type, Res and wait for examples x1-x11; most cell values were lost in transcription.]

The Knowledge Base in Expert Systems
A knowledge base consists of a collection of IF-THEN rules:
if buyer is male & age between 24-50 & married then he buys sport magazines
if buyer is male & age between 18-30 then he buys PC games magazines
Knowledge bases of fielded expert systems contain hundreds, and sometimes even thousands, of such rules. Frequently the rules are contradictory and/or overlap.
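For illustration only, here is a toy sketch of such a rule base; the buyer attributes and the rule encodings are my own assumptions.

```python
# A tiny hand-written rule base: each rule is (condition, conclusion).
rules = [
    (lambda b: b["sex"] == "male" and 24 <= b["age"] <= 50 and b["married"],
     "buys sport magazines"),
    (lambda b: b["sex"] == "male" and 18 <= b["age"] <= 30,
     "buys PC games magazines"),
]

buyer = {"sex": "male", "age": 27, "married": True}
conclusions = [conclusion for condition, conclusion in rules if condition(buyer)]
print(conclusions)   # both rules fire for this buyer, showing how rules can overlap
```

Now imagine maintaining thousands of such hand-written rules; that is the bottleneck discussed next.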

Main Drawback of Expert Systems: The Knowledge Acquisition Bottleneck
The main problem with expert systems is that acquiring knowledge from a human specialist is a difficult, cumbersome and slow activity.

Name    KB   #Rules   Const. time (man-years)   Maint. time
MYCIN   KA   500      10                        N/A
XCON    KA   2500     18                        3

KB = Knowledge Base, KA = Knowledge Acquisition

Motivation #2: Avoid the Knowledge Acquisition Bottleneck
GASOIL is an expert system for designing gas/oil separation systems stationed off-shore.
The design depends on multiple factors including: proportions of gas, oil and water, flow rate, pressure, density, viscosity, temperature and others.
To build that system by hand would have taken 10 person-years; it took only 3 person-months using inductive learning!
GASOIL saved BP millions of dollars.

Motivation #2: Avoid the Knowledge Acquisition Bottleneck

Name     KB         #Rules    Const. time (man-years)   Maint. time (man-months)
MYCIN    KA         500       10                        N/A
XCON     KA         2500      18                        3
GASOIL   IDT        2800      1                         0.1
BMT      KA (IDT)   30000+    9 (0.3)                   2 (0.1)

KB = Knowledge Base, KA = Knowledge Acquisition, IDT = Induced Decision Trees

Example of a Decision Tree
[Diagram: the restaurant "will we wait for a table?" decision tree from Russell & Norvig. The root tests Patrons? (none → no, some → yes, full → test WaitEstimate? with branches 0-10, 10-30, 30-60 and >60); deeper nodes test Alternate?, Reservation?, Hungry?, Raining?, Fri/Sat? and Bar?; the leaves are yes/no decisions.]

Definition of a Decision Tree
A decision tree is a tree where:
The leaves are labeled with classifications (if the classifications are just "yes" and "no", the tree is called a Boolean tree)
The non-leaf nodes are labeled with attributes
The arcs out of a node labeled with attribute A are labeled with the possible values of attribute A
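The definition maps directly onto a small data structure. The following is a hypothetical sketch (node layout and attribute names are my own, loosely based on the restaurant example), not code from the slides.

```python
# A decision tree: non-leaf nodes test an attribute, arcs carry attribute values,
# leaves carry classifications ("yes"/"no" for a Boolean tree).

leaf_yes, leaf_no = ("leaf", "yes"), ("leaf", "no")

def node(attribute, branches):
    return ("node", attribute, branches)      # branches: attribute value -> subtree

tree = node("Patrons", {
    "none": leaf_no,
    "some": leaf_yes,
    "full": node("Hungry", {"yes": leaf_yes, "no": leaf_no}),
})

def classify(tree, example):
    if tree[0] == "leaf":
        return tree[1]
    _, attribute, branches = tree
    return classify(branches[example[attribute]], example)

print(classify(tree, {"Patrons": "full", "Hungry": "yes"}))   # -> yes
```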

Induction
Databases: what are the data that match this pattern?
Induction: what is the pattern that matches these data?
[Diagram: a data table (examples x1-x11 with attributes Bar, Fri, Hun, Pat, Type, Res and wait) feeding an induction step that produces a pattern.]

Induction of Decision Trees
Objective: find a concise decision tree that agrees with the examples.
The guiding principle we are going to use is Ockham's razor: the most likely hypothesis is the simplest one that is consistent with the examples.
Problem: finding the smallest decision tree is NP-complete.
However, with simple heuristics we can find a small decision tree (an approximation).

Induction of Decision Trees: Algorithm
1. Initially, all examples are in the same group.
2. Select the attribute that makes the most difference (i.e., for each of the values of the attribute, most of the examples are either positive or negative).
3. Group the examples according to each value of the selected attribute.
4. Repeat from step 2 within each group (recursive call).
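Here is a compact, runnable sketch of this procedure; it is an ID3-style learner in which "makes the most difference" is measured by information gain, a heuristic choice consistent with the attribute comparison on the next slide. The tiny data set at the bottom is invented for illustration.

```python
import math
from collections import Counter

def entropy(labels):
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(examples, attribute, target):
    base = entropy([e[target] for e in examples])
    remainder = 0.0
    for value in {e[attribute] for e in examples}:
        subset = [e for e in examples if e[attribute] == value]
        remainder += len(subset) / len(examples) * entropy([e[target] for e in subset])
    return base - remainder

def induce_tree(examples, attributes, target):
    labels = [e[target] for e in examples]
    if len(set(labels)) == 1:                       # all examples agree: make a leaf
        return labels[0]
    if not attributes:                              # nothing left to test: majority vote
        return Counter(labels).most_common(1)[0][0]
    best = max(attributes, key=lambda a: information_gain(examples, a, target))
    branches = {}
    for value in {e[best] for e in examples}:       # group examples by the chosen attribute
        subset = [e for e in examples if e[best] == value]
        rest = [a for a in attributes if a != best]
        branches[value] = induce_tree(subset, rest, target)   # recursive call
    return (best, branches)

# Tiny made-up training set in the spirit of the restaurant example.
data = [
    {"Patrons": "none", "Hungry": "no",  "wait": "no"},
    {"Patrons": "some", "Hungry": "yes", "wait": "yes"},
    {"Patrons": "full", "Hungry": "yes", "wait": "yes"},
    {"Patrons": "full", "Hungry": "no",  "wait": "no"},
]
print(induce_tree(data, ["Patrons", "Hungry"], "wait"))
```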

Example
[Table: training examples x1-x11 over the attributes Bar, Fri, Hun, Pat, Alt and Type, with the target attribute wait; values shown include some, full and none for Pat and French, Thai, Italian and Burger for Type. Most cell values were lost in transcription.]

IDT: Example
Let's compare two candidate attributes: Patrons and Type. Which is the better attribute?
Patrons? none: x7(-), x11(-); some: x1(+), x3(+), x6(+), x8(+); full: x4(+), x12(+), x2(-), x5(-), x9(-), x10(-)
Type? French: x1(+), x5(-); Italian: x6(+), x10(-); Thai: x4(+), x8(+), x2(-), x11(-); Burger: x3(+), x12(+), x7(-), x9(-)
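Working the comparison out numerically (a small self-contained snippet; the per-branch positive/negative counts are read directly off the branches above, with 6 positive and 6 negative examples overall):

```python
import math

def entropy(p, n):
    total = p + n
    return -sum((c / total) * math.log2(c / total) for c in (p, n) if c)

def gain(splits, total_p=6, total_n=6):
    total = total_p + total_n
    remainder = sum((p + n) / total * entropy(p, n) for p, n in splits)
    return entropy(total_p, total_n) - remainder

# (positive, negative) counts per branch
print(round(gain([(0, 2), (4, 0), (2, 4)]), 3))          # Patrons: ~0.541 bits
print(round(gain([(1, 1), (1, 1), (2, 2), (2, 2)]), 3))  # Type:     0.0 bits
```

Patrons has the higher information gain, so it is the better attribute to test first; Type tells us nothing on its own.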

Example of a Decision Tree
[Diagram: the restaurant decision tree shown earlier, repeated: the root tests Patrons?, with deeper tests on WaitEstimate?, Alternate?, Reservation?, Hungry?, Raining?, Fri/Sat? and Bar?, and yes/no leaves.]

Decision Trees in Gaming
Black & White, developed by Lionhead Studios and released in 2001, used ID3.
It was used to predict a player's reaction to a certain creature's action.
In this model, a greater feedback value means the creature should attack.

Decision Trees in Black & White
[Table: training examples D1-D9 with the attributes Allegiance (Friendly/Enemy), Defense (Weak/Medium/Strong) and Tribe (Celtic, Norse, Greek, Aztec), and the target Feedback; e.g. D1 = (Friendly, Weak, Celtic) with feedback -1.0, and the feedback values range from -1.0 to 0.4. Most of the other cell values were lost in transcription.]

Decision Trees in Black & White
The induced tree:
Allegiance?
  Friendly → -1.0
  Enemy → Defense?
    Weak → 0.4
    Strong → -0.3
    Medium → 0.1
Note that this decision tree does not even use the Tribe attribute.

Decision Trees in Black & White
Now suppose we don't want the entire decision tree; we just want the 2 highest feedback values. We can create a Boolean expression such as:
((Allegiance = Enemy) ^ (Defense = Weak)) v ((Allegiance = Enemy) ^ (Defense = Medium))
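A sketch of that extraction step over the learned tree from the previous slide; the tree encoding and function names are my own.

```python
# The learned Black & White tree, encoded as nested tuples/dicts.
tree = ("Allegiance", {
    "Friendly": -1.0,
    "Enemy": ("Defense", {"Weak": 0.4, "Strong": -0.3, "Medium": 0.1}),
})

def paths_to_leaves(tree, conditions=()):
    """Yield (conditions, feedback) pairs, one per leaf of the tree."""
    if not isinstance(tree, tuple):                  # a leaf: just the feedback value
        yield conditions, tree
        return
    attribute, branches = tree
    for value, subtree in branches.items():
        yield from paths_to_leaves(subtree, conditions + ((attribute, value),))

# Keep only the 2 highest-feedback leaves and print them as a Boolean expression.
best = sorted(paths_to_leaves(tree), key=lambda pair: pair[1], reverse=True)[:2]
clauses = ["(" + " ^ ".join(f"({a} = {v})" for a, v in conds) + ")" for conds, _ in best]
print(" v ".join(clauses))
# ((Allegiance = Enemy) ^ (Defense = Weak)) v ((Allegiance = Enemy) ^ (Defense = Medium))
```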

Classification (according to the language representation)
Symbolic: Version Space, Decision Trees, Explanation-Based Learning, …
Sub-symbolic: Reinforcement Learning, Connectionist, Evolutionary