CS 638, Fall 2001 (11/6/2001): AI – Fuzzy Logic and Neural Nets


Today
– Fuzzy Logic
– Neural Nets

Fuzzy Logic
A philosophical approach
– Ontological commitment based on "degree of truth"
– It is not a method for reasoning under uncertainty
Crisp facts – distinct boundaries
Fuzzy facts – imprecise boundaries
Probability – incomplete facts
Example – a scout reporting an enemy:
– "Two to three tanks at grid NV" (crisp)
– "A few tanks at grid NV" (fuzzy)
– "There might be 2 tanks at grid NV 54" (probabilistic)

Apply to Computer Games
Characters can have different traits:
– Strength: strong, medium, weak
– Aggressiveness: meek, medium, nasty
Rules over those traits:
– If meek and attacked, run away fast
– If medium and attacked, run away slowly
– If nasty and strong and attacked, attack back
Control of a vehicle:
– Slow down when close to the car in front
– Speed up when far behind the car in front
Fuzzy logic provides smoother transitions – no sharp boundary between behaviors

Fuzzy Sets
Classical set theory: an object is either in or not in a set
Fuzzy set theory allows sets with smooth boundaries: an object is in a set by a matter of degree
– 1.0 => fully in the set
– 0.0 => not in the set
– between 0.0 and 1.0 => partially in the set (e.g., somebody 6' high might be 80% in the set "tall")
This provides a way to write symbolic rules with terms like "medium" but evaluate them in a quantified way

Example Fuzzy Variable
[Figure: membership functions for Meek, Medium, and Nasty plotted over aggressiveness, with membership (degree of truth) on the vertical axis]
Each function tells us how much we consider a character to be in the set if it has a particular aggressiveness value
Or: how much truth to attribute to the statement "the character is nasty (or meek, or neither)"?
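A fuzzy variable like this can be sketched with triangular membership functions. The breakpoints below are hypothetical, chosen for an aggressiveness score in [0, 1]; the slide's actual curves are not recoverable from the text.

```python
def triangular(x, left, peak, right):
    """Membership rises linearly from `left` to 1.0 at `peak`, then falls to 0 at `right`."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

# Hypothetical set definitions for an aggressiveness score in [0, 1].
def meek(x):   return triangular(x, -0.5, 0.0, 0.5)
def medium(x): return triangular(x, 0.0, 0.5, 1.0)
def nasty(x):  return triangular(x, 0.5, 1.0, 1.5)
```

A character with aggressiveness 0.25 is then 0.5 meek and 0.5 medium, partially in both sets at once.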

Fuzzy Set Operations: Complement
The degree to which you believe something is not in the set is 1.0 minus the degree to which you believe it is in the set
[Figure: membership curves for a fuzzy set FS and its complement ¬FS]

Fuzzy Set Ops: Intersection (AND)
If you have x degree of faith in statement A, and y degree of faith in statement B, how much faith do you have in the statement "A and B"?
– E.g., how much faith in "that person is about 6' high and tall"?
Does it make sense to attribute more truth than you have in either A or B alone?
[Figure: membership curves for "About 6'" and "Tall" over height]

Fuzzy Set Ops: Union (OR)
If you have x degree of faith in statement A, and y degree of faith in statement B, how much faith do you have in the statement "A or B"?
– E.g., how much faith in "that person is about 6' high or tall"?
Does it make sense to attribute less truth than you have in either A or B alone?
[Figure: membership curves for "About 6'" and "Tall" over height]
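The three set operations above reduce to one-liners in the standard min/max (Zadeh) formulation, which matches the intuition that a conjunction can't be more true than either operand and a disjunction can't be less:

```python
def f_not(a):
    return 1.0 - a      # complement: belief it is NOT in the set

def f_and(a, b):
    return min(a, b)    # intersection: never more truth than either operand

def f_or(a, b):
    return max(a, b)    # union: never less truth than either operand
```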

Fuzzy Rules
"If our distance to the car in front is small, and the distance is decreasing slowly, then decelerate quite hard"
– Fuzzy variables: distance, rate of change, deceleration
– Fuzzy sets: small, decreasing slowly, quite hard
– Conditions test membership in fuzzy sets
– Actions place an output variable (decelerate) in a fuzzy set (the "quite hard" deceleration set)
We have a certain belief in the truth of the condition, and hence a certain strength of desire for the outcome
Multiple rules may match to some degree, so we need a means to arbitrate and choose a particular goal – defuzzification

Fuzzy Rules Example (from Game Programming Gems)
Rules for controlling a car:
– Variables: distance to the car in front, how fast that distance is changing (delta), and the acceleration to apply
– Sets:
  Very small, small, perfect, big, very big – for distance
  Shrinking fast, shrinking, stable, growing, growing fast – for delta
  Brake hard, slow down, none, speed up, floor it – for acceleration
– A rule for every combination of distance and delta sets, each defining an acceleration set
Assume we have a particular numerical value for distance and delta, and we need to set a numerical value for acceleration
– Extension: allow fuzzy values for the input variables (the degree to which we believe each value is correct)

Set Definitions for Example
[Figure: membership functions for the five distance sets (very small through very big), the five delta sets (shrinking fast through growing fast), and the five acceleration sets (brake hard through floor it)]

Instance for Example
[Figure: the same set definitions with particular distance and delta values marked]
Distance could be considered small or perfect
Delta could be stable or growing
What acceleration?

Matching for Example
The relevant rules are:
– If distance is small and delta is growing, maintain speed
– If distance is small and delta is stable, slow down
– If distance is perfect and delta is growing, speed up
– If distance is perfect and delta is stable, maintain speed
For the first rule, "distance is small" has truth 0.75 and "delta is growing" has truth 0.3
– So the truth of the conjunction is min(0.75, 0.3) = 0.3
The other rule strengths are 0.6, 0.1 and 0.1
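Those rule strengths follow mechanically from taking min over each rule's condition memberships. A sketch: the memberships for "small" and "growing" are from the slide, while 0.1 for "perfect" and 0.6 for "stable" are inferred so the results match the quoted strengths.

```python
# Memberships for the instance (0.1 and 0.6 inferred from the quoted strengths).
distance = {"small": 0.75, "perfect": 0.1}
delta = {"growing": 0.3, "stable": 0.6}

rules = {
    ("small", "growing"): "maintain speed",
    ("small", "stable"): "slow down",
    ("perfect", "growing"): "speed up",
    ("perfect", "stable"): "maintain speed",
}

# Truth of each rule's conjunction = min of its two condition memberships.
strengths = {(d, g): min(distance[d], delta[g]) for (d, g) in rules}
```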

Fuzzy Inference for Example
Convert our belief into action:
– For each rule, clip the action's fuzzy set to our belief in the rule
[Figure: the acceleration sets from the matched rules ("none", "slow down", "speed up"), each clipped at the strength of its rule]

Defuzzification Example
We have three sets we have some reason to believe we are in, and each set covers a range of output values
Two options for going from this state to a single value:
– Mean of Max: take the rule we believe most strongly, and take the (weighted) average of its possible values
– Center of Mass: take all the rules we partially believe, and take their weighted average
In this example we slow down either way, but we slow down more with Mean of Max
– Mean of Max is cheaper, but Center of Mass exploits more information
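Both schemes are easy to sketch over a sampled output domain, where `mu` holds the combined (clipped) membership at each sample point. The sample values in the usage below are illustrative, not the slide's.

```python
def mean_of_max(xs, mu):
    """Average of the output values whose membership is maximal."""
    top = max(mu)
    winners = [x for x, m in zip(xs, mu) if m == top]
    return sum(winners) / len(winners)

def center_of_mass(xs, mu):
    """Membership-weighted average (centroid) over the whole domain."""
    return sum(x * m for x, m in zip(xs, mu)) / sum(mu)
```

Mean of max looks only at the strongest region, while center of mass blends in every partially believed set, which is why the two can disagree on how hard to slow down.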

Evaluation of Fuzzy Logic
It does not necessarily lead to non-determinism
Advantages:
– Allows the use of numbers while still writing "crisp" rules
– Allows the use of "fuzzy" concepts such as medium
– Biggest impact is on control problems, where it helps avoid discontinuities in behavior (in the example problem, strict rules would give discontinuous acceleration)
Disadvantages:
– Results are sometimes unexpected and hard to debug
– Additional computational overhead
– There are other ways to get continuous acceleration

References
Nguyen, H. T. and Walker, E. A. A First Course in Fuzzy Logic, CRC Press.
Rao, V. B. and Rao, H. Y. C++ Neural Networks and Fuzzy Logic, IDG Books Worldwide.
McCuskey, M. "Fuzzy Logic for Video Games", in Game Programming Gems, ed. DeLoura, Charles River Media, 2000, Section 3.

Neural Networks
Inspired by natural decision-making structures (real nervous systems and brains)
If you connect lots of simple decision-making pieces together, they can make more complex decisions
– Compose simple functions to produce complex functions
Neural networks:
– Take multiple numeric input variables
– Produce multiple numeric output values
– Normally threshold the outputs to turn them into discrete values
– Map the discrete values onto classes, and you have a classifier!
– But the only time I've used them is as approximation functions

Simulated Neuron – Perceptron
A perceptron receives inputs a_j from other perceptrons, each with a weight W_i,j
– Learning occurs by adjusting the weights
The perceptron calculates the weighted sum of its inputs: in_i = Σ_j W_i,j a_j
A threshold function calculates the output a_i = g(in_i):
– Step function: if in_i > t then a_i = 1 else a_i = 0
– Sigmoid: g(x) = 1 / (1 + e^-x)
The output becomes input for the next layer of perceptrons
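The forward pass described above is just a weighted sum pushed through a threshold function; a minimal sketch:

```python
import math

def step(x, t=0.0):
    """Step activation: 1 if the weighted sum exceeds the threshold t, else 0."""
    return 1.0 if x > t else 0.0

def sigmoid(x):
    """Smooth activation: g(x) = 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + math.exp(-x))

def perceptron(inputs, weights, g=step):
    in_i = sum(w * a for w, a in zip(weights, inputs))  # in_i = sum_j W_ij * a_j
    return g(in_i)                                      # a_i = g(in_i)
```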

Network Structure
A single perceptron can represent AND or OR, but not XOR
– Combinations of perceptrons are more powerful
Perceptrons are usually organized in layers:
– Input layer: takes external input
– Hidden layer(s)
– Output layer: produces external output
Feed-forward vs. recurrent:
– Feed-forward: outputs only connect to later layers; learning is easier
– Recurrent: outputs can connect to earlier layers or the same layer, giving the network internal state

Neural Network for Quake
Four input perceptrons – one for each condition: Enemy, Sound, Dead, Low Health
Four-perceptron hidden layer, fully connected
Five output perceptrons – one for each action: Attack, Retreat, Wander, Chase, Spawn
– Choose the action with the highest output
– Or use probabilistic action selection: choose at random, weighted by output
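The 4-4-5 structure described above can be sketched as two fully connected layers. The weight matrices here are placeholders a trained network would supply; only the shapes follow the slide.

```python
import math

SENSORS = ["enemy", "sound", "dead", "low_health"]
ACTIONS = ["attack", "retreat", "wander", "chase", "spawn"]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    """Fully connected layer: weights[i] is the fan-in weight vector of unit i."""
    return [sigmoid(sum(w * a for w, a in zip(ws, inputs))) for ws in weights]

def choose_action(sensor_values, w_hidden, w_out):
    hidden = layer(sensor_values, w_hidden)      # 4 hidden units
    outputs = layer(hidden, w_out)               # 5 action outputs
    return ACTIONS[outputs.index(max(outputs))]  # pick the highest-scoring action
```

Swapping the `index(max(...))` line for a draw weighted by `outputs` gives the probabilistic selection variant.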

Learning Neural Networks
Learning from examples
– Examples consist of an input and the correct output
Learn when the network's output doesn't match the correct output
– Adjust the weights to reduce the difference
– Only change the weights by a small amount (the learning rate η)
Basic perceptron learning:
– W_i,j = W_i,j + η(t - o)a_j
– If the output is too high, (t - o) is negative, so W_i,j is reduced
– If the output is too low, (t - o) is positive, so W_i,j is increased
– If a_j is negative, the opposite happens
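A sketch of this update rule; the default η = 0.1 is just a typical small learning rate:

```python
def update_weights(weights, inputs, target, output, eta=0.1):
    """Perceptron rule: W_j <- W_j + eta * (t - o) * a_j for every weight."""
    return [w + eta * (target - output) * a for w, a in zip(weights, inputs)]
```

Note that when the output is correct, (t - o) = 0 and the weights are unchanged, so training only reacts to errors.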

Neural Net Example
A single perceptron to represent OR:
– Two inputs
– One output (1 if either input is 1)
– Step function: if the weighted sum > 0.5, output 1
The initial state, with weights W1 = 0.1 and W2 = 0.6, gives an error on the (1,0) => 1 example: Σ W_j a_j = 0.1, so g(0.1) = 0
– Training occurs

Neural Net Example
Apply W_j = W_j + η(t - o)a_j with η = 0.1:
– W1 = 0.1 + 0.1(1 - 0)·1 = 0.2
– W2 = 0.6 + 0.1(1 - 0)·0 = 0.6
After this step, try the (0,1) => 1 example: Σ W_j a_j = 0.6, so g(0.6) = 1
– No error, so no training

Neural Net Example
Try the (1,0) => 1 example again: Σ W_j a_j = 0.2, so g(0.2) = 0
– Still an error, so training occurs
– W1 = 0.2 + 0.1(1 - 0)·1 = 0.3
– W2 = 0.6 + 0.1(1 - 0)·0 = 0.6
And so on...
– What is a network that works for OR?
– What about AND?
– Why not XOR?
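The whole walkthrough can be run to convergence. This sketch assumes the same step threshold of 0.5, learning rate η = 0.1, and initial weights (0.1, 0.6) as the worked example:

```python
def train(data, weights, eta=0.1, threshold=0.5, epochs=20):
    """Cycle over the training set, applying the perceptron rule after each example."""
    w = list(weights)
    for _ in range(epochs):
        for inputs, target in data:
            s = sum(wi * a for wi, a in zip(w, inputs))
            out = 1 if s > threshold else 0
            # Only wrong outputs change the weights.
            w = [wi + eta * (target - out) * a for wi, a in zip(w, inputs)]
    return w

OR_DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
trained = train(OR_DATA, (0.1, 0.6))
```

After a few epochs both weights exceed 0.5, so any single 1 input clears the threshold. AND would need each weight below the threshold but their sum above it; XOR is not linearly separable, which is why no single perceptron can represent it.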

Neural Networks Evaluation
Advantages:
– Handle errors well
– Degrade gracefully
– Can learn novel solutions
Disadvantages:
– "Neural networks are the second best way to do anything"
– You can't understand how or why the learned network works
– Training examples must match the real problem
– You need as many examples as possible
– Learning takes lots of processing, though it is incremental, so learning during play might be possible

References
Mitchell: Machine Learning, McGraw Hill, 1997.
Russell and Norvig: Artificial Intelligence: A Modern Approach, Prentice Hall.
Hertz, Krogh and Palmer: Introduction to the Theory of Neural Computation, Addison-Wesley.
Cowan and Sharp: "Neural nets and artificial intelligence", Daedalus 117:85-121, 1988.