State Machines Chapter 5

A State Machine

The State Machine
The feature vector represents the state of the environment, and the S-R agent computes an action appropriate for that environmental state. Sensory limitations of the agent preclude a completely accurate representation of the environmental state by feature vectors. Accuracy can be improved by taking into account previous history: the representation of the environmental state at the previous time step and the action taken at the previous time step. To exploit this history, the state machine must have memory.
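A minimal sketch of such an agent's control loop (an illustration, not something given on the slides); sense, compute_features, and select_action are hypothetical placeholders for the robot's perception, feature computation, and production rules:

    # Minimal sketch of a state-machine agent's control loop. Unlike a pure S-R
    # agent, it remembers the previous feature vector and the previous action and
    # feeds both into the next feature computation.

    def run_state_machine_agent(sense, compute_features, select_action, steps=100):
        prev_w = None          # feature vector at the previous time step
        prev_action = None     # action taken at the previous time step
        for _ in range(steps):
            inputs = sense()                                   # current sensory inputs
            w = compute_features(inputs, prev_w, prev_action)  # memory enters here
            action = select_action(w)                          # S-R style action selection
            yield action                                       # act on the environment
            prev_w, prev_action = w, action                    # update the machine's memory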

The Boundary-Following Robot
The sensory-impaired version of this robot can sense only the cells immediately to its north, east, south, and west, so its sensory inputs are only s2, s4, s6, and s8. Even with this impairment, the robot can still perform boundary-following behavior if it computes the needed feature vector from its immediate sensory inputs, the previous feature vector, and the just-performed action.

The Sensory-Impaired Robot
The features: wi = si for i = 2, 4, 6, 8. Feature w1 has value 1 if and only if, at the previous time step, w2 had value 1 and the robot moved east; the rules for w3, w5, and w7 are similar. With these features, the production system gives wall-following behavior.
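A direct transcription of these feature rules into code might look like the sketch below. The sensor and feature numbering (1 = NW, 2 = N, 3 = NE, 4 = E, 5 = SE, 6 = S, 7 = SW, 8 = W) and the rotationally symmetric reading of "similar for w3, w5, w7" are assumptions based on the book's earlier grid-world robot, not something spelled out on the slide.

    # Feature computation for the sensory-impaired boundary follower.
    # Assumed numbering: 1 = NW, 2 = N, 3 = NE, 4 = E, 5 = SE, 6 = S, 7 = SW, 8 = W,
    # so the even features are the current sensors, and each odd (diagonal) feature
    # is inferred from the previous features and the previous move; e.g. a cell that
    # was to the north lies to the north-west after a move east.

    def compute_features(s, prev_w, prev_action):
        """s: dict with keys 2, 4, 6, 8 (0/1 sensor values).
        prev_w: previous feature vector (dict with keys 1..8), or None on the first step.
        prev_action: 'north', 'east', 'south', 'west', or None."""
        w = {i: s[i] for i in (2, 4, 6, 8)}           # w_i = s_i for the adjacent cells
        if prev_w is None:
            return {**w, 1: 0, 3: 0, 5: 0, 7: 0}      # no history yet: assume diagonals free
        # w1 = 1 iff w2 was 1 and the robot moved east; the other diagonals follow
        # the rotationally symmetric pattern (one reading of "similar").
        w[1] = 1 if (prev_w[2] and prev_action == 'east') else 0
        w[3] = 1 if (prev_w[4] and prev_action == 'south') else 0
        w[5] = 1 if (prev_w[6] and prev_action == 'west') else 0
        w[7] = 1 if (prev_w[8] and prev_action == 'north') else 0
        return w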

An Elman Network
An Elman network is a special type of recurrent neural network. It can learn how to compute a feature vector and an action from a previous feature vector and sensory inputs. For the boundary-following robot:
Inputs: (s2, s4, s6, s8), plus the values of the eight hidden units one time step earlier.
Hidden units: eight hidden units, one for each feature.
Outputs: four output units, one for each action.

The Elman Network
This Elman network can be trained by ordinary backpropagation.
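A minimal numpy sketch of such a network with the layer sizes from the slides (four sensory inputs, eight hidden/context units, four action outputs) is given below. The sigmoid units, the random weight initialization, and the omission of the backpropagation training loop are assumptions made to keep the sketch short.

    import numpy as np

    # Elman network sketch: the hidden activations are copied into "context"
    # units and fed back as extra inputs at the next time step.

    rng = np.random.default_rng(0)

    class ElmanNetwork:
        def __init__(self, n_in=4, n_hidden=8, n_out=4):
            self.W_in = rng.normal(scale=0.1, size=(n_hidden, n_in))       # input -> hidden
            self.W_ctx = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # context -> hidden
            self.W_out = rng.normal(scale=0.1, size=(n_out, n_hidden))     # hidden -> output
            self.context = np.zeros(n_hidden)                              # previous hidden state

        def step(self, sensors):
            """sensors: array of four values (s2, s4, s6, s8); returns four action activations."""
            z = self.W_in @ sensors + self.W_ctx @ self.context
            hidden = 1.0 / (1.0 + np.exp(-z))        # feature-like hidden activations
            self.context = hidden.copy()             # remembered for the next time step
            return self.W_out @ hidden               # one activation per action

    net = ElmanNetwork()
    print(net.step(np.array([0.0, 1.0, 0.0, 0.0])))  # e.g. a wall sensed to the east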

Iconic Representations
The world can be represented either by features or by data structures; the latter is called an iconic representation. The agent computes actions appropriate to its task and to the present modeled state of the environment. Sensory information is first used to update the iconic model as appropriate; then operations similar to perceptual processing are used to extract the features needed by the action-computation subsystem. The actions include those that change the iconic model as well as those that affect the actual environment. The features derived from the iconic model must represent the environment in a manner adequate for the kinds of actions the robot must take.
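To make the idea concrete, here is an illustrative sketch (not the book's code) of a grid-map iconic model: sensor readings update the model, and feature extraction is then performed on the model rather than on raw sensor data. The class name, the cell encoding, and the wall_to_east feature are all assumptions.

    # Illustrative iconic representation: a grid map of the robot's world, updated
    # from sensory data, from which task-relevant features are then extracted.
    # Cell values (assumed): 0 = free, 1 = occupied, None = unknown.

    class IconicMap:
        def __init__(self, width, height):
            self.cells = [[None] * width for _ in range(height)]

        def update_from_sensors(self, readings):
            """readings: iterable of ((x, y), value) pairs from the perceptual system."""
            for (x, y), value in readings:
                self.cells[y][x] = value

        def wall_to_east(self, x, y):
            """A feature extracted from the model (not from raw sensors) for the
            action-computation subsystem."""
            return self.cells[y][x + 1] == 1

    world = IconicMap(8, 8)
    world.update_from_sensors([((3, 2), 1), ((2, 2), 0)])
    print(world.wall_to_east(2, 2))   # True: the model says there is a wall east of (2, 2)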

Iconic Representation

An Artificial Potential Field (1/2)
This technique is used extensively in controlling robot motion. The robot's environment is represented as a two-dimensional potential field, which is the sum of an "attractive" component, associated with the goal location, and a "repulsive" component, associated with the obstacles.

An Artificial Potential Field (2/2)
Motion of the robot is directed down the gradient of the potential field, toward the minimum at the goal. The potential field can either be precomputed and stored in memory or be computed at the robot's location just before it is used.
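The slides do not give the potential functions. The sketch below uses a common textbook choice, assumed here: a quadratic attractive potential toward the goal and a repulsive potential that grows sharply near obstacles, with the field evaluated on the fly at the robot's current location and the robot stepping down a numerically estimated gradient. All constants and names are illustrative.

    import numpy as np

    # Artificial potential field sketch (assumed standard forms):
    #   attractive: U_att(q) = 0.5 * k_att * ||q - goal||^2
    #   repulsive:  U_rep(q) = 0.5 * k_rep * (1/d - 1/d0)^2  for d <= d0, else 0,
    # where d is the distance to an obstacle and d0 is its radius of influence.

    def total_potential(q, goal, obstacles, k_att=1.0, k_rep=1.0, d0=2.0):
        u = 0.5 * k_att * np.sum((q - goal) ** 2)               # attractive component
        for obs in obstacles:
            d = np.linalg.norm(q - obs)
            if d <= d0:
                u += 0.5 * k_rep * (1.0 / d - 1.0 / d0) ** 2    # repulsive component
        return u

    def gradient_step(q, goal, obstacles, step=0.05, eps=1e-4):
        """Estimate the gradient numerically and move a small step downhill."""
        grad = np.zeros_like(q)
        for i in range(len(q)):
            dq = np.zeros_like(q)
            dq[i] = eps
            grad[i] = (total_potential(q + dq, goal, obstacles) -
                       total_potential(q - dq, goal, obstacles)) / (2 * eps)
        return q - step * grad / (np.linalg.norm(grad) + 1e-9)  # normalized descent step

    q = np.array([0.0, 0.0])              # robot position R
    goal = np.array([5.0, 5.0])           # goal location G
    obstacles = [np.array([2.5, 2.0])]    # one obstacle slightly off the direct path
    for _ in range(200):
        q = gradient_step(q, goal, obstacles)
    print(q)                              # the robot ends up near the goal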

An Example Artificial Potential Field
R: the robot position. G: the goal location. (b) Attractive potential. (c) Repulsive potential. (d) Total potential. (e) Equipotential curves and the path to be followed.

The Blackboard System
In the blackboard architecture, knowledge sources (KSs) read and change a shared data structure called the blackboard. Each KS has a condition part, which computes the value of a feature from the blackboard data structure, and an action part, which can be any program that changes the data structure or takes external action (or both). When two or more KS conditions evaluate to 1, a conflict-resolution program decides which KSs should act. KS actions can have external effects, and the blackboard might also be changed by perceptual subsystems that process sensory data. The KSs are supposed to be "experts" about the part(s) of the blackboard that they watch. Blackboard systems are designed so that, as computation proceeds, the blackboard ultimately becomes a data structure that contains the solution to some particular problem.
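As an illustration of this control cycle, here is a toy sketch; the KnowledgeSource class, the "first applicable KS wins" conflict resolution, and the example KSs are assumptions made for the sketch.

    # Toy blackboard control cycle: each knowledge source (KS) has a condition part
    # that inspects the blackboard and an action part that may change it; one KS is
    # chosen to act on each cycle.

    class KnowledgeSource:
        def __init__(self, name, condition, action):
            self.name = name
            self.condition = condition   # blackboard -> bool: does this KS apply?
            self.action = action         # blackboard -> None: may modify the blackboard

    def run_blackboard(blackboard, knowledge_sources, max_cycles=10):
        for _ in range(max_cycles):
            applicable = [ks for ks in knowledge_sources if ks.condition(blackboard)]
            if not applicable:
                break                            # no KS has anything left to contribute
            applicable[0].action(blackboard)     # simplistic conflict resolution
        return blackboard

    # Tiny example: one KS cleans up raw data, another marks the problem solved.
    kss = [
        KnowledgeSource("filter",
                        lambda bb: "raw" in bb and "clean" not in bb,
                        lambda bb: bb.update(clean=sorted(bb["raw"]))),
        KnowledgeSource("finish",
                        lambda bb: "clean" in bb and "done" not in bb,
                        lambda bb: bb.update(done=True)),
    ]
    print(run_blackboard({"raw": [3, 1, 2]}, kss))   # the blackboard ends up holding the solution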

A Blackboard System

A Robot in Grid World (1/2)
The robot can sense all eight cells, but sensors sometimes give erroneous information. The data structure representing the map and the data structure containing sensory data compose the blackboard.

A Robot in Grid World (2/2)
One KS, the gap filler, looks for tight spaces in the map and, knowing that there can be no tight spaces, either fills them in with 1's or expands them with additional adjacent 0's. For example, the gap filler decides to fill the tight space at the top of the map in Figure 5.7. Another KS, the sensory filter, looks at both the sensory data and the map and attempts to reconcile any discrepancies. In Figure 5.7, the sensory filter notes that s7 is a strong "cell-occupied" signal but that the corresponding cell in the map is marked as questionable; it reconciles the difference by replacing that ? in the map with a 1.
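A rough sketch of how these two KSs could be written follows; the grid encoding, the reading of a "tight space" as a one-cell gap between occupied cells, and the 0.8 confidence threshold are assumptions, not details taken from Figure 5.7.

    # The blackboard holds the map (0 = free, 1 = occupied, '?' = uncertain) and the
    # latest sensor readings, given as ((x, y), strength) pairs.

    def gap_filler(blackboard):
        """Find one-cell horizontal gaps between occupied cells; since the world is
        assumed to contain no tight spaces, fill each gap in with a 1. (The other
        legal repair would be to widen the gap with additional adjacent 0's.)"""
        grid = blackboard["map"]
        for y in range(len(grid)):
            for x in range(1, len(grid[y]) - 1):
                if grid[y][x - 1] == 1 and grid[y][x + 1] == 1 and grid[y][x] != 1:
                    grid[y][x] = 1

    def sensory_filter(blackboard):
        """Reconcile strong sensor readings with uncertain map cells, as in the s7
        example: a strong "cell-occupied" signal overrides a '?' in the map."""
        grid = blackboard["map"]
        for (x, y), strength in blackboard["sensors"]:
            if strength > 0.8 and grid[y][x] == '?':
                grid[y][x] = 1

    bb = {"map": [[1, '?', 1],
                  [0, 0, 0]],
          "sensors": [((1, 0), 0.95)]}    # strong "occupied" reading for the '?' cell
    sensory_filter(bb)
    gap_filler(bb)
    print(bb["map"])                      # the uncertain cell is now marked occupied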

Additional Readings and Discussion
State machines are even more ubiquitous than S-R agents, and the relationship between S-R agents and ethological models of animal behavior applies to state machines as well. Elman networks are one example of using neural networks to learn finite-state automata. Many researchers have studied the problem of learning spatial maps, which are examples of iconic representations.

Exercises
Page 81, Ex. 5.2
Page 82, Ex. 5.5