Nonlinear Network Structures for Optimal Control

Nonlinear Network Structures for Optimal Control
Frank L. Lewis and Murad Abu-Khalaf
Automation & Robotics Research Institute (ARRI), Advanced Controls, Sensors, and MEMS (ACSM) Group

System and Cost ("the usual suspects"): an affine-in-the-input nonlinear system together with an infinite-horizon performance index.
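The slide equations are not reproduced in the transcript; a standard form consistent with the rest of the talk (stated here as an assumption, following the usual Lewis/Abu-Khalaf setup) is

\dot{x} = f(x) + g(x)\,u(x), \qquad x \in \mathbb{R}^n, \; u \in \mathbb{R}^m,

V(x_0) = \int_0^{\infty} \big( Q(x) + u^T R\, u \big)\, dt, \qquad Q(x) > 0, \; R = R^T > 0.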

NONLINEAR QUADRATIC REGULATOR: the Generalized Hamilton-Jacobi-Bellman (GHJB) equation, the optimal state-variable feedback (SVFB) control, and the Hamilton-Jacobi-Bellman (HJB) equation.
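The slide formulas are missing from the transcript; the standard relations for this cost (a reconstruction, not a quote from the slide) are the GHJB for a fixed admissible control u,

\nabla V^T \big( f(x) + g(x)\,u \big) + Q(x) + u^T R\, u = 0, \qquad V(0) = 0,

the optimal SVFB control

u^*(x) = -\tfrac{1}{2} R^{-1} g^T(x)\, \nabla V^*(x),

and the HJB obtained by substituting u^* back into the GHJB:

\nabla V^{*T} f(x) + Q(x) - \tfrac{1}{4} \nabla V^{*T} g(x) R^{-1} g^T(x) \nabla V^* = 0.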

PROBLEM: the HJB equation usually has no analytic solution. SOLUTION: successive approximation. Starting from a stabilizing control, the iteration is a contraction map (Saridis). Saridis and Beard used a Galerkin approximation to solve the GHJB at each step. The iteration converges to the optimal solution and gives u(x) in SVFB form.
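A sketch of the successive-approximation loop (the exact form on the slide is not in the transcript; this follows the standard Saridis/Beard iteration): given a stabilizing u^{(0)}, for i = 0, 1, 2, \ldots

1. solve the GHJB for V^{(i)}: \quad \nabla V^{(i)T} \big( f + g\, u^{(i)} \big) + Q(x) + u^{(i)T} R\, u^{(i)} = 0,
2. update the control: \quad u^{(i+1)}(x) = -\tfrac{1}{2} R^{-1} g^T(x)\, \nabla V^{(i)}(x).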

For constrained controls: the NONLINEAR NONQUADRATIC REGULATOR, with a nonquadratic control penalty in the cost (Lyshevski). The penalty is positive definite in u.
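A common choice of Lyshevski's nonquadratic form, for inputs bounded by |u_i| \le \bar{u} (stated as an assumption about the missing slide formula):

W(u) = 2 \int_0^{u} \big( \bar{u}\, \phi^{-1}(v/\bar{u}) \big)^T R \, dv, \qquad \phi(\cdot) = \tanh(\cdot),

so the cost becomes V(x_0) = \int_0^\infty \big( Q(x) + W(u) \big)\, dt.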

This penalty is natural and exact, with no approximation. The new GHJB keeps u(t) within its constraints if the function phi(.) is a saturation function. (Slide figure: plot of tanh(p), saturating at +1 and -1.)

PROBLEM: the HJB still cannot be solved. SOLUTION: use successive approximation on the GHJB. Starting from a stabilizing control, iterate: solve the GHJB for the current value function, then update the control.

PROBLEM: the GHJB itself cannot be solved in closed form. SOLUTION: use a neural network to approximate V^{(i)}(x). Select a basis set sigma(.). (Slide figure: a two-layer neural network with inputs x1 ... xn, a hidden layer of L units with first-layer weights V^T, adjustable output weights W^T, and outputs y1 ... ym.)
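A minimal Python sketch (not the authors' code; the basis set and dimensions are illustrative) of a value-function approximator that is linear in its adjustable output weights:

```python
import numpy as np

def basis(x):
    """Polynomial basis sigma(x) for a 2-state system (illustrative choice).
    Returns an (L,) vector; the value approximation is V_L(x) = w @ basis(x)."""
    x1, x2 = x
    return np.array([x1**2, x1*x2, x2**2,
                     x1**4, x1**3*x2, x1**2*x2**2, x1*x2**3, x2**4])

def basis_grad(x, eps=1e-6):
    """Numerical gradient of the basis, d sigma / dx, shape (L, n)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s0 = basis(x)
    grad = np.zeros((len(s0), n))
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        grad[:, i] = (basis(x + dx) - basis(x - dx)) / (2 * eps)
    return grad

def V_approx(x, w):
    """Neural-network value approximation V_L(x) = W^T sigma(x)."""
    return w @ basis(x)
```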

Cost-gradient approximation: expand V^{(i)} on the basis set with adjustable output weights. Substituting this expansion into the GHJB leaves a nonzero residual, so the GHJB becomes an approximate equation in the NN weights.
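A plausible reconstruction of the missing formulas (an assumption consistent with the surrounding slides): let

V_L(x) = \sum_{j=1}^{L} w_j\, \sigma_j(x) = W_L^T \sigma_L(x), \qquad \nabla V_L(x) = \nabla \sigma_L(x)^T W_L,

then the GHJB with this approximation reads

W_L^T \nabla \sigma_L(x)\, \big( f(x) + g(x)\, u(x) \big) + Q(x) + W(u) = e_L(x),

where e_L(x) is the nonzero residual.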

Neural-network-based nearly optimal saturated control law.
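For the tanh penalty, the nearly optimal saturated control at each iteration has the form (again a reconstruction, following the standard result for this penalty):

u^{(i+1)}(x) = -\bar{u}\, \tanh\!\Big( \tfrac{1}{2\bar{u}}\, R^{-1} g^T(x)\, \nabla \sigma_L(x)^T W_L^{(i)} \Big).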

To minimize the residual error in a least-squares (LS) sense, evaluate the GHJB at a number of points on the region of interest; each sample point then yields one linear equation in the NN output weights.

NN Training Set! Evaluating the GHJB at N sample points gives an L x N coefficient matrix (L basis functions, N samples); solve for the output weights by LS.
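A minimal Python sketch of the LS step (illustrative only; f, g, Q, the penalty, and the sample points are assumptions, and it reuses basis_grad from the earlier sketch):

```python
def ghjb_least_squares(samples, u, f, g, Q, W_pen):
    """Solve the GHJB in a least-squares sense over the sample points.

    samples : iterable of state vectors x_k (the NN training set)
    u       : current control law, u(x)
    f, g    : system dynamics, xdot = f(x) + g(x) u
    Q, W_pen: state penalty Q(x) and control penalty W(u)
    Returns the NN output weights W_L.
    """
    A, b = [], []
    for x in samples:
        uk = u(x)
        xdot = f(x) + g(x) @ uk
        # One GHJB equation per sample: (grad_sigma(x) @ xdot)^T W = -(Q(x) + W(u))
        A.append(basis_grad(x) @ xdot)
        b.append(-(Q(x) + W_pen(uk)))
    w, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return w
```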

Selecting the N sample points x_k: a uniform mesh grid on the region of interest, or random selection (Monte Carlo). Comparing the approximation error bounds (Barron), Monte Carlo sampling overcomes the NP-complexity (curse-of-dimensionality) problems of a uniform mesh.
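An illustrative Python sketch of the two sampling choices on a box-shaped region (the bounds and counts are placeholders, not values from the slide):

```python
import numpy as np

def mesh_samples(bounds, points_per_dim):
    """Uniform mesh grid on a box; the point count grows exponentially with the state dimension."""
    axes = [np.linspace(lo, hi, points_per_dim) for lo, hi in bounds]
    grid = np.meshgrid(*axes, indexing="ij")
    return np.stack([a.ravel() for a in grid], axis=-1)

def monte_carlo_samples(bounds, N, seed=0):
    """Random (Monte Carlo) samples on the same box; the point count is chosen independently of dimension."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    return rng.uniform(bounds[:, 0], bounds[:, 1], size=(N, len(bounds)))
```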

ASIDE: could this also be useful for reducing the complexity of fuzzy logic systems? (Slide figure: a uniform grid of separable Gaussian activation functions for an RBF NN.)
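For reference, a separable Gaussian RBF unit centered on a grid point c_j has the standard form (stated as an assumption about the figure, not a formula from the slide):

\sigma_j(x) = \prod_{i=1}^{n} \exp\!\Big( -\frac{(x_i - c_{j,i})^2}{2 s_i^2} \Big).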

The NN training set must be persistently exciting (PE), i.e., the sampled regressors must span enough directions for the LS problem to be well posed.

The algorithm and proofs work for any admissible Q(x) on the region of interest, with the constrained input handled by the nonquadratic penalty. CONSTRAINED STATE CONTROL: choose Q(x) with state-penalty terms of power k, for k large and even. MINIMUM-TIME CONTROL: for small R and a suitable Q(x), the cost approximates the minimum-time problem.
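One common way to realize such a state-constraint penalty (an assumption; the slide's exact expression is not in the transcript):

Q(x) = x^T x + \sum_{i} \Big( \frac{x_i}{x_{i,\max}} \Big)^{k}, \qquad k \text{ large and even},

which is small inside the allowed region and grows rapidly as any x_i approaches its bound.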

Example: Linear system

Region of asymptotic stability for the initial controller (slide figure).

Region of asymptotic stability for the nearly optimal controller (slide figure).

Example: Nonlinear oscillator