CO3301 - Games Development 2 Week 22 Trees Gareth Bellaby

Probability Tree

Probability Trees A probability tree is a tree which has probabilities associated with each branch.

Probability Trees

Probability Trees Probabilities are propagated down the tree. A probability that follows on from another probability is multiplied by it, i.e. the probabilities along a path are multiplied together in order to calculate the final probability of reaching that point. At each level of the tree the probabilities must sum to 1: 0.125 + 0.125 + 0.25 + 0.5 = 1
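
As a minimal illustration (the Node layout and function names are mine, not from the slides), propagation can be implemented by multiplying the branch probabilities along each path; the example tree below reproduces the leaf values 0.5, 0.25, 0.125 and 0.125:

```cpp
#include <cassert>
#include <cstdio>
#include <vector>

// Minimal probability-tree sketch. Each branch carries the probability of
// taking it; the probability of reaching a node is the product of the
// branch probabilities along the path from the root.
struct Node {
    double branchProb;          // probability of the branch leading here (1.0 at the root)
    std::vector<Node> children;
};

// Print the probability of reaching each leaf by multiplying down the tree.
void leafProbabilities(const Node& n, double pathProb = 1.0) {
    double p = pathProb * n.branchProb;
    if (n.children.empty()) {
        std::printf("leaf probability: %g\n", p);
        return;
    }
    // Sanity check: the branches leaving any node must sum to 1.
    double total = 0.0;
    for (const Node& c : n.children) total += c.branchProb;
    assert(total > 0.999 && total < 1.001);
    for (const Node& c : n.children) leafProbabilities(c, p);
}

int main() {
    // The tree from the slide: leaves 0.5 + 0.25 + 0.125 + 0.125 = 1.
    Node root{1.0, {
        {0.5, {}},
        {0.5, {
            {0.5, {}},
            {0.5, {
                {0.5, {}},
                {0.5, {}}
            }}
        }}
    }};
    leafProbabilities(root);
}
```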

Decision Tree

Decision Trees A decision tree is a way of representing knowledge, and a way of using inputs to predict future outputs. Decision trees are a good way of expressing decisions in computer games, not just for AI but for general gameplay. A decision tree is a classification method. Decision trees learn from examples using induction.

Example

Decision Trees Each internal node is a test. Each leaf node is a classification. The intention is that an unknown type can be classified by traversing the tree. Decision trees can be created in real time extremely efficiently. This means that they are a practical option for machine learning in games.

Decision Trees A branch can have the response "unknown". Decision trees can deal with uncertainty. Decision trees cannot use ranges directly because of the large number of branching alternatives, e.g. floating point numbers. Instead you must provide a set of "buckets", each with a specified range. We'll see an example of this below with Black and White, which talks about "none", "medium" and "maximum". Tests followed in sequence down the tree can be considered to be a logical AND. The branching can be considered to be a logical OR.
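
For illustration, here is a hedged sketch of how such a tree might be represented and traversed; the TreeNode layout, the toBucket thresholds and the three-bucket scheme are my assumptions, not code from the lecture or from the game:

```cpp
#include <memory>
#include <string>

// Illustrative bucket scheme: continuous inputs are mapped into a small
// set of discrete ranges before they reach the tree, because a branch
// per floating-point value is impossible.
enum class Bucket { None, Medium, Maximum };

Bucket toBucket(float v) {                 // thresholds are arbitrary here
    if (v < 0.33f) return Bucket::None;
    if (v < 0.66f) return Bucket::Medium;
    return Bucket::Maximum;
}

struct TreeNode {
    std::string classification;            // non-empty only at leaf nodes
    int attribute = 0;                     // which attribute this node tests
    std::unique_ptr<TreeNode> child[3];    // one child per bucket value
};

// Classify an unknown example by traversing the tree: each test followed
// in sequence is a logical AND, each branching point a logical OR.
const std::string& classify(const TreeNode& node, const float attributes[]) {
    if (!node.classification.empty())
        return node.classification;        // reached a leaf: done
    Bucket b = toBucket(attributes[node.attribute]);
    return classify(*node.child[static_cast<int>(b)], attributes);
}
```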

Behaviour Tree

Behaviour Tree A hierarchical tree structure whose nodes model the decision making of an entity. Akin to a hierarchical FSM, but built out of tasks rather than states. Transparent and easy to understand. A diagrammatic method. Expressive, robust and less error prone.

Implementation The behaviour of a node is dependent upon context: the operation of a node is inserted into the scope of its parent, which allows modularity. Leaf nodes are tasks and so may take time to complete.

Approach Leaf nodes. Control nodes: Sequence, Selector, Decorator.

Leaf Leaf nodes. A task; an execution node. Signals: success, failure, running.

Sequence Execute the first child that has not yet succeeded. Execution in sequence: task 1 until it returns success, then task 2 until it returns success, and so on. Variants: any failure leads to failure of the sequence overall, or the sequence just moves on to the next child.

Selector Selects one child node to execute. The choice could be random or made by some other control mechanism.

Decorator A single child node. Allows for other types of operation, such as repetition, a filter or an inverter.
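
To make these node types concrete, here is a minimal, hedged C++ sketch; the class names and the tick() interface are illustrative, not taken from a particular engine:

```cpp
#include <cstddef>
#include <functional>
#include <utility>
#include <vector>

enum class Status { Success, Failure, Running };

struct BTNode {
    virtual ~BTNode() = default;
    virtual Status tick() = 0;             // called every frame the node is active
};

// Leaf: an execution node wrapping a task; tasks may run over many ticks,
// signalling Running until they finish.
struct Leaf : BTNode {
    std::function<Status()> task;
    explicit Leaf(std::function<Status()> t) : task(std::move(t)) {}
    Status tick() override { return task(); }
};

// Sequence: run children in order, each until it succeeds. This variant
// fails the whole sequence on any failure; the other variant described
// above would instead move on to the next child.
struct Sequence : BTNode {
    std::vector<BTNode*> children;
    std::size_t current = 0;
    Status tick() override {
        while (current < children.size()) {
            Status s = children[current]->tick();
            if (s == Status::Running) return Status::Running;
            if (s == Status::Failure) { current = 0; return Status::Failure; }
            ++current;                     // success: move on to the next task
        }
        current = 0;
        return Status::Success;
    }
};

// Selector: try children until one does not fail (a simple priority
// scheme; it could equally pick at random or via another mechanism).
struct Selector : BTNode {
    std::vector<BTNode*> children;
    Status tick() override {
        for (BTNode* c : children) {
            Status s = c->tick();
            if (s != Status::Failure) return s;   // Success or Running
        }
        return Status::Failure;
    }
};

// Decorator with a single child - here an inverter, as mentioned above.
struct Inverter : BTNode {
    BTNode* child;
    explicit Inverter(BTNode* c) : child(c) {}
    Status tick() override {
        Status s = child->tick();
        if (s == Status::Success) return Status::Failure;
        if (s == Status::Failure) return Status::Success;
        return Status::Running;
    }
};
```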

Examples

Examples Chris Simpson, Behavior trees for AI: How they work

Learning Tree/Classification Tree

Black & White What he ate, and the feedback ("How nice it tasted"):
A big rock: -1.0
A small rock: -0.5, -0.4
A tree: -0.2
A cow: +0.6
The values are averaged. Taken from Evans, R., (2002), "Varieties of Learning".

Black & White What creature attacked, and the feedback from the player:
Friendly town, weak defence, tribe Celtic: -1.0
Enemy town, weak defence, tribe Celtic: +0.4
Friendly town, strong defence, tribe Norse
Enemy town, strong defence, tribe Norse: -0.2
Friendly town, medium defence, tribe Greek
Enemy town, medium defence, tribe Greek: +0.2
Enemy town, strong defence, tribe Greek: -0.4
Enemy town, medium defence, tribe Aztec: 0.0
Friendly town, weak defence, tribe Aztec

Black & White Taken from Evans, R., (2002), "Varieties of Learning".

Black & White Some of the criteria are lost because they turn out to be irrelevant, e.g. the information about the tribe. The decision tree is created in real time. Each time the creature receives new input from the player the tree is rebuilt, and each new input changes the values. The rebuilding can be significant: information that was previously jettisoned as irrelevant can become relevant.

Black & White Evans' article provides more detail on how decision trees were used in Black and White. One important observation he adds: in order to iterate through all of the attributes of an object efficiently, it is necessary to define objects by their attributes.

ID3 The ID3 algorithm was presented by Quinlan in 1986. It uses an iterative method. From the training examples a random subset is selected and a tree is built from it. Test the tree on the training examples. If all of the examples are classified successfully then stop. Otherwise add some more training examples to the subset and repeat the process.

ID3 Start with a root node. Assign to the root node the best attribute. A branch is then generated for each value of the attribute, and a node is created at the end of each branch. Each training example is assigned to one of these new nodes; if no examples are assigned, the node and its branch can be removed. Each node is then treated as a new root and the process repeated.
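
A hedged sketch of this recursive construction follows; the Example and ID3Node types, the bestAttribute placeholder and the three-values-per-attribute assumption are all mine, and Quinlan's paper remains the authoritative description:

```cpp
#include <set>
#include <utility>
#include <vector>

// Attributes are assumed to take integer values 0..kNumValues-1.
constexpr int kNumValues = 3;

struct Example { std::vector<int> values; int classification; };

struct ID3Node {
    int attribute = -1;            // attribute tested at this internal node
    int branchValue = -1;          // attribute value on the branch leading here
    int classification = -1;       // class label; set only at leaves
    std::vector<ID3Node> children;
};

// Placeholder: real ID3 assigns the most informative attribute, i.e. the
// one with the greatest information gain (see the entropy sketch below).
int bestAttribute(const std::vector<Example>&, const std::set<int>& unused) {
    return *unused.begin();
}

// Precondition: examples is non-empty.
ID3Node build(const std::vector<Example>& examples, std::set<int> unused) {
    ID3Node node;
    // If every example agrees (or no attributes remain), create a leaf.
    bool uniform = true;
    for (const Example& e : examples)
        if (e.classification != examples[0].classification) { uniform = false; break; }
    if (uniform || unused.empty()) {
        node.classification = examples[0].classification;
        return node;
    }
    // Assign the best attribute to this node, generate a branch for each
    // of its values, and assign each training example to the node at the
    // end of the branch it follows. Empty branches are simply not created.
    node.attribute = bestAttribute(examples, unused);
    unused.erase(node.attribute);
    for (int value = 0; value < kNumValues; ++value) {
        std::vector<Example> subset;
        for (const Example& e : examples)
            if (e.values[node.attribute] == value) subset.push_back(e);
        if (subset.empty()) continue;
        ID3Node child = build(subset, unused);   // treat as a new root, repeat
        child.branchValue = value;
        node.children.push_back(std::move(child));
    }
    return node;
}
```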

ID3 It should be apparent that different trees can be constructed. It is desirable to derive the smallest tree, since this will be the most efficient one. The topmost choices need to be the most informative, aiming for the greatest information gain. Information theory provides a mathematical measure of the information content of a message. Information theory was presented by Shannon in 1948.

Information Theory Shannon defines the amount of information in a message as a function of the probability of occurrence of each possible message.
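
Concretely, the entropy of a set whose classes occur with proportions p_i is H = -sum(p_i * log2(p_i)), and ID3 chooses the attribute whose split yields the greatest information gain: the entropy before the split minus the weighted entropy after it. A small hedged sketch (the function names are mine, and classifications are assumed to be integer labels):

```cpp
#include <cmath>
#include <map>
#include <vector>

// Entropy of a set of classifications: H = -sum(p_i * log2(p_i)).
double entropy(const std::vector<int>& classes) {
    std::map<int, int> counts;
    for (int c : classes) ++counts[c];
    double h = 0.0;
    for (const auto& kv : counts) {
        double p = static_cast<double>(kv.second) / classes.size();
        h -= p * std::log2(p);
    }
    return h;
}

// classesByValue[v] holds the classifications of the examples whose
// tested attribute took value v.
double informationGain(const std::vector<int>& all,
                       const std::vector<std::vector<int>>& classesByValue) {
    double after = 0.0;
    for (const auto& subset : classesByValue)
        after += (static_cast<double>(subset.size()) / all.size()) * entropy(subset);
    return entropy(all) - after;   // ID3 picks the attribute maximizing this
}
```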

ID3 ID3 was extended by Quinlan to provide probabilistic classification using Bayesian statistics.

Sources & Further reading DeSylva, C., (2005), "Optimizing a Decision Tree Query Algorithm for Multithreaded Architectures", Game Programming Gems 5, Charles River Media: Hingham, Mass, USA. Evans, R., (2002), "Varieties of Learning", AI Game Programming Wisdom, Charles River Media: Hingham, Mass, USA. Fu, D., & Houlette, R., (2003), "Constructing a Decision Tree Based on Past Experience", AI Game Programming Wisdom 2, Charles River Media: Hingham, Mass, USA.

Sources & Further reading Manslow, J., (2006), "Practical Algorithms for In-Game Learning", AI Game Programming Wisdom 3, Charles River Media: Hingham, Mass, USA. Quinlan, J. R., (1986), "Induction of decision trees", Machine Learning, 1: 81-106. Shannon, C., (1948), "A mathematical theory of communication", Bell System Technical Journal, 27: 379-423, 623-656. Simpson, C., (2014), "Behavior trees for AI: How they work", Gamasutra, 17 July 2014.