Intelligent Systems Lectures 17 Control systems of robots based on Neural Networks.


Neuron of McCulloch & Pitts: Threshold Logic Unit (TLU)
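As a concrete illustration, a McCulloch & Pitts TLU can be sketched in a few lines; the weights and threshold below are chosen by hand for the example, they are not from the slides:

```python
def tlu(inputs, weights, threshold):
    """McCulloch-Pitts style unit: fire (output 1) iff the weighted
    input sum reaches the threshold, otherwise stay silent (output 0)."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

def AND(x1, x2):
    # A TLU computing logical AND of two binary inputs.
    return tlu([x1, x2], [1.0, 1.0], 2.0)
```

With threshold 2 and unit weights, the unit fires only when both inputs are 1.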

Geometry of TLU

R-category linear classifier based on TLU

Geometric Interpretation of action of linear classifier

Layered network: three planes implemented by the hidden units

Multi-Layer Perceptron (MLP) (feed-forward network)

Kinds of sigmoid used in perceptrons:
– Exponential
– Rational
– Hyperbolic tangent
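The three activation shapes named above can be written out as a minimal sketch (the function names are ours):

```python
import math

def logistic(x):
    """'Exponential' sigmoid: 1 / (1 + e^-x), outputs in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def rational(x):
    """Rational sigmoid: x / (1 + |x|), cheap to compute, outputs in (-1, 1)."""
    return x / (1.0 + abs(x))

def tanh_sigmoid(x):
    """Hyperbolic tangent, outputs in (-1, 1)."""
    return math.tanh(x)
```

All three are smooth, monotonic and saturating, which is what the back-propagation derivation requires.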

Formulas for the error back-propagation algorithm

Modification of the weight of the synapse connecting the j-th neuron with the i-th one (x_j is the state, i.e. the output, of the j-th neuron):

(1) Δw_ij = η · δ_j · x_i

For the output layer:

(2) δ_j = (d_j − y_j) · f′(net_j)

For hidden layers (k runs over the neurons of the next layer connected with the j-th neuron):

(3) δ_j = f′(net_j) · Σ_k δ_k · w_jk
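Formulas (1)–(3) can be turned into a runnable sketch for a one-hidden-layer network with logistic units; the network shape and learning rate below are illustrative, not from the lecture:

```python
import math
import random

# (1) dw_ij = eta * delta_j * x_i
# (2) output layer:  delta_j = (d_j - y_j) * f'(net_j)
# (3) hidden layers: delta_j = f'(net_j) * sum_k delta_k * w_kj
# For the logistic f, f'(net) = y * (1 - y).

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def train_step(x, d, W1, W2, eta=0.5):
    """Forward pass, then update W1 (input->hidden) and W2 (hidden->output)."""
    h = [sigmoid(sum(w_i * x_i for w_i, x_i in zip(row, x))) for row in W1]
    y = [sigmoid(sum(w_j * h_j for w_j, h_j in zip(row, h))) for row in W2]
    delta_out = [(dk - yk) * yk * (1.0 - yk) for dk, yk in zip(d, y)]   # (2)
    delta_hid = [h[j] * (1.0 - h[j]) *                                  # (3)
                 sum(delta_out[k] * W2[k][j] for k in range(len(W2)))
                 for j in range(len(h))]
    for k in range(len(W2)):                 # (1) applied to the output layer
        for j in range(len(h)):
            W2[k][j] += eta * delta_out[k] * h[j]
    for j in range(len(W1)):                 # (1) applied to the hidden layer
        for i in range(len(x)):
            W1[j][i] += eta * delta_hid[j] * x[i]
    return y
```

Repeating the step on a fixed input/target pair drives the output toward the target, which is the behaviour the formulas guarantee for a small enough learning rate.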

Hopfield network

Features of structure:
– Every neuron is connected with all the others
– Connections are symmetric, i.e. w_ij = w_ji for all i and j
– Every neuron may be both an input and an output neuron
– An input is presented as a set of states of the input neurons

Hopfield network (2)

Learning: the Hebbian rule is used. The weight of a link increases for neurons which fire together (take the same state) and decreases otherwise:

w_ij = Σ_p x_i^(p) · x_j^(p)

Working (recalling) is an iterative process of computing the states of the neurons until convergence is achieved. Each neuron receives a weighted sum of the inputs from the other neurons:

h_j = Σ_i w_ij · x_i

If the input h_j is positive the state of the neuron becomes 1, otherwise −1.
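The Hebbian learning and iterative recall described above can be sketched as follows (pattern values and sizes are illustrative):

```python
def hebbian_weights(patterns):
    """Hebbian rule: w_ij = sum over patterns of x_i * x_j, zero diagonal.
    Patterns are lists of +1/-1 states."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, max_sweeps=100):
    """Iterate x_j := 1 if sum_i w_ji x_i > 0 else -1 until convergence."""
    x = list(state)
    n = len(x)
    for _ in range(max_sweeps):
        changed = False
        for j in range(n):
            h = sum(W[j][i] * x[i] for i in range(n))
            new = 1 if h > 0 else -1
            if new != x[j]:
                x[j] = new
                changed = True
        if not changed:
            break
    return x
```

Starting from a corrupted version of a stored pattern, the iteration settles back onto the stored pattern, which is the content-addressable-memory behaviour of the network.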

Elman Network (SRN). The number of context units is the same as the number of hidden units
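A minimal sketch of one Elman (SRN) forward step, assuming tanh hidden units and a linear output; all weight shapes below are illustrative:

```python
import math

def elman_step(x, context, W_in, W_ctx, W_out):
    """One forward step of an Elman simple recurrent network (SRN):
    the hidden layer sees the current input plus a copy of its own
    previous activations (the context units)."""
    n_hidden = len(W_in)
    hidden = [math.tanh(sum(W_in[j][i] * x[i] for i in range(len(x))) +
                        sum(W_ctx[j][c] * context[c]
                            for c in range(len(context))))
              for j in range(n_hidden)]
    output = [sum(W_out[k][j] * hidden[j] for j in range(n_hidden))
              for k in range(len(W_out))]
    return output, hidden  # 'hidden' is copied back as the next context
```

Because the context is a one-to-one copy of the hidden layer, the number of context units necessarily equals the number of hidden units, as the slide states.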

Robot-manipulator

Tasks for the robot manipulator control system

Forward kinematics. Kinematics is the science of motion which treats motion without regard to the forces which cause it. Within this science one studies the position, velocity, acceleration, and all higher-order derivatives of the position variables. A very basic problem in the study of mechanical manipulation is that of forward kinematics: the static geometrical problem of computing the position and orientation of the end-effector (hand) of the manipulator.

Inverse kinematics. This problem is posed as follows: given the position and orientation of the end-effector of the manipulator, calculate all possible sets of joint angles which could be used to attain this position and orientation. This is a fundamental problem in the practical use of manipulators.
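For a hypothetical two-link planar arm both problems have closed forms; the sketch below illustrates the general definitions and is not the lecture's robot model:

```python
import math

def forward(theta1, theta2, l1=1.0, l2=1.0):
    """Forward kinematics: joint angles -> end-effector position
    for a two-link planar arm with link lengths l1, l2."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def inverse(x, y, l1=1.0, l2=1.0):
    """Inverse kinematics: end-effector position -> one joint-angle
    solution (elbow-down). A mirrored elbow-up solution also exists;
    this ambiguity is why inverse kinematics has multiple answers."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    theta2 = math.acos(max(-1.0, min(1.0, c2)))
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2
```

Composing the two directions round-trips: `inverse(*forward(t1, t2))` recovers the original angles for the elbow-down branch.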

Tasks for the robot manipulator control system (2)

Dynamics. Dynamics is a field of study devoted to the forces required to cause motion. In order to accelerate a manipulator from rest, glide at a constant end-effector velocity, and finally decelerate to a stop, a complex set of torque functions must be applied by the joint actuators. In dynamics not only the geometrical properties (kinematics) are used, but the physical properties of the robot are also taken into account. Take for instance the weight (inertia) of the robot arm, which determines the force required to change the motion of the arm. Dynamics introduces two extra problems beyond the kinematic ones:
– The robot arm has a memory: its response to a control signal depends also on its history (e.g. previous positions, speed, acceleration).
– If the robot grabs an object, the dynamics change but the kinematics do not, because the weight of the object has to be added to the weight of the arm (that is why robot arms are made so heavy: it keeps the relative weight change small).

Tasks for the robot manipulator control system (3)

Trajectory generation. To move a manipulator from here to there in a smooth, controlled fashion, each joint must be moved via a smooth function of time. Exactly how to compute these motion functions is the problem of trajectory generation.
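One standard way to compute such a smooth motion function (not necessarily the one intended in the lecture) is a cubic polynomial that takes a joint from q0 to qf in time T with zero velocity at both ends:

```python
def cubic_trajectory(q0, qf, T):
    """Return q(t) = a0 + a1*t + a2*t^2 + a3*t^3 with boundary
    conditions q(0)=q0, q(T)=qf, q'(0)=q'(T)=0."""
    a0, a1 = q0, 0.0
    a2 = 3.0 * (qf - q0) / T ** 2
    a3 = -2.0 * (qf - q0) / T ** 3
    def q(t):
        return a0 + a1 * t + a2 * t ** 2 + a3 * t ** 3
    return q
```

Sampling this q(t) at the controller rate gives the joint set-points for a single smooth motion segment.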

Camera-robot coordination is function approximation. The system we focus on in this section is a work floor observed by fixed cameras, and a robot arm. The visual system must identify the target as well as determine the visual position of the end-effector.

Camera-robot coordination is function approximation (2)

Camera-robot coordination is function approximation (3). Two approaches to using neural networks:
Usage of feed-forward networks
– Indirect learning
– General learning
– Specialized learning
Usage of topology conserving maps

Camera-robot coordination is function approximation (4). Feed-forward networks. Indirect learning system for robotics: in each cycle the network is used in two different places, first in the forward step, then for feeding back the error.

Camera-robot coordination is function approximation (5). Feed-forward networks (2)

Camera-robot coordination is function approximation (6). Feed-forward networks (3)

Camera-robot coordination is function approximation (7). Feed-forward networks (4)

The learning rule applied here regards the plant as an additional and unmodifiable layer in the neural network. The Jacobian matrix can be used to calculate the change in the function when its parameters change:

δP_i ≈ Σ_j (∂P_i / ∂θ_j) · δθ_j

where i iterates over the outputs of the plant.
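Under the assumption that the plant Jacobian is not known analytically, it can be estimated numerically and used to pull the position error back to the joint parameters; the plant below is a stand-in for illustration, not the lecture's robot:

```python
import math

def plant(theta):
    """Hypothetical plant: maps 2 joint angles to a 2-D position
    (a unit-link planar arm is used as the stand-in)."""
    return [math.cos(theta[0]) + math.cos(theta[0] + theta[1]),
            math.sin(theta[0]) + math.sin(theta[0] + theta[1])]

def plant_jacobian(theta, eps=1e-6):
    """Finite-difference estimate of dP_i/dtheta_j."""
    base = plant(theta)
    J = []
    for i in range(len(base)):
        row = []
        for j in range(len(theta)):
            t = list(theta)
            t[j] += eps
            row.append((plant(t)[i] - base[i]) / eps)
        J.append(row)
    return J

def joint_gradient(theta, target):
    """Propagate the output error e_i = target_i - P_i(theta) back
    through the Jacobian: grad_j = sum_i e_i * dP_i/dtheta_j,
    where i iterates over the outputs of the plant."""
    p = plant(theta)
    e = [t - pi for t, pi in zip(target, p)]
    J = plant_jacobian(theta)
    return [sum(e[i] * J[i][j] for i in range(len(e)))
            for j in range(len(theta))]
```

A small step along this gradient reduces the end-effector position error, which is exactly the signal the unmodifiable "plant layer" passes back to the trainable network in specialized learning.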

Camera-robot coordination is function approximation (8). Topology conserving maps

Camera-robot coordination is function approximation (9). Topology conserving maps (2)

Robot arm dynamics (Kawato et al., 1987)

Robot arm dynamics (2)

Nonlinear transformations used in the Kawato model

Robot arm dynamics (4)

Mobile robots Schematic representation of the stored rooms and the partial information which is available from a single sonar scan

Mobile robots (2)

Two problems. The first, called local planning, relies on information available from the current viewpoint of the robot. This planning is important since it is able to deal with fast changes in the environment. The second is called global path planning, in which case the system uses global knowledge from a topographic map previously stored in memory. Although global planning permits optimal paths to be generated, it has its weakness: missing knowledge or incorrectly selected maps can invalidate a global path to the extent that it becomes useless. A possible third approach, anticipatory planning, combines both strategies: the local information is constantly used to give a best guess of what the global environment may contain.

Mobile robots (3)

Sensor-based control

The structure of the network for the autonomous land vehicle

Experiments. The network was trained by presenting it samples with, as inputs, a wide variety of road images taken under different viewing angles and lighting conditions. Images were presented 40 times each while the weights were adjusted using the back-propagation principle. The authors claim that once the network is trained the vehicle can accurately drive at about km/hour along '… a path through a wooded area adjoining the Carnegie Mellon campus, under a variety of weather and lighting conditions.' This speed is nearly twice as high as that of a non-neural algorithm running on the same vehicle.

DRAMA

DRAMA (2)

DRAMA (3). Associative module

DRAMA (4)

DRAMA (5)

DRAMA (6) Learning