Presentation transcript:

3. Applications
- Function approximation: t → y (t: target, y: actual output)
- Neural control: r → y (r: reference, u: control effort, y: system output)

Neural Control Schemes
- Supervised control
- Hybrid control
- Model reference control
- Internal model control
- Adaptive control
Reference: C.L. Lin & H.W. Su, “Intelligent control theory in guidance and control system design: an overview,” Proc. Natl. Sci. Counc. ROC (A), pp. 15-30.

Supervisory Control
- Neural network → inverse NN controller
- The neural controller in the system is utilized as an inverse system model.

Hybrid Control
- Generalized learning (off-line learning): a rough approximation to the desired control law → drive the plant over the operating range without instability
- Specialized learning (on-line learning): improve the control provided by the NN controller

Model Reference Control
- Must define its input-output pair {r(t), yR(t)} in advance.
- Attempts to make the plant output y(t) match the reference model output asymptotically.
- The error e(t) is used to adjust the weights of a neural controller.

Internal Model Control
- The NN plant model is first trained off-line to emulate the controlled plant dynamics directly.
- During on-line operation, the error between the plant output and the NN model output is used as a feedback signal and passed to the NN controller.
- The effect of the NN plant model is to subtract the effect of the control signal from the plant output, leaving only the disturbances.
- The IMC thus plays the role of a feedforward controller and can cancel the influence of unmeasured disturbances.
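A minimal sketch of that signal flow (all names and the one-step wiring are illustrative; the slide gives no implementation):

```python
def imc_step(nn_controller, nn_model, r, y_measured, u_prev):
    """One internal-model-control step (illustrative): feed back only
    the mismatch between the measured plant output and the NN plant
    model's prediction, i.e., the disturbance estimate."""
    d_hat = y_measured - nn_model(u_prev)  # disturbance / model error
    u = nn_controller(r - d_hat)           # NN controller acts on r - d_hat
    return u
```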

Adaptive Control
- The tracking-error cost is evaluated according to some performance index.
- The result is then used as a basis for adjusting the connection weights of the neural networks.
- The weights are adjusted on-line using basic backpropagation, rather than off-line.
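For concreteness, one common instance of such a performance index (an assumption; the slide leaves the index generic) is the squared tracking error, with the weights adjusted by on-line gradient descent:

  J(k) = ½·e(k)²,  e(k) = yd(k) − y(k)
  w(k+1) = w(k) − η·∂J(k)/∂w = w(k) + η·e(k)·∂y(k)/∂w

where η is the learning rate and ∂y/∂w is obtained by backpropagating through (a model of) the plant.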

Paper Study #1
Practical Stability Issues in CMAC Neural Network Control Systems
Fu-Chuang Chen & Chih-Horng Chang
IEEE Trans. on Control Systems Technology, Vol. 4, No. 1, pp. 86-91, 1996

Abstract
- CMAC is a practical tool for improving existing nonlinear control systems.
- CMAC can effectively reduce tracking error, but can also destabilize a control system which is otherwise stable.
- Quantitative studies are presented to search for the cause of instability in the CMAC control system.

I. Introduction
- CMAC is basically a look-up table method: very easy to implement, yet a powerful and practical tool for nonlinear control.
- Convergence results on CMAC learning have been established.

Main Purpose of This Paper
- To introduce the CMAC control system from an industrial point of view.
- To describe the unstable phenomenon.
- To quantitatively study how system parameters such as control gain, quantization, generalization, and learning rate are related to the instability of the system.
- To suggest ways to improve system stability.
- To provide some experimental evidence.

II. CMAC Control System
- Structure: a CMAC controller in parallel with a proportional controller driving the plant (block diagram in the original slide).
- Plant: Y(k+1) = 0.5·Y(k) + sin(Y(k)) + U(k)
- Use a workable traditional controller to stabilize the plant and to help the CMAC learn to provide precise control.
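As a minimal sketch of this setup (the control-law wiring U(k) = P·(Yd(k+1) − Y(k)) is an assumption; the gain and reference form are taken from the simulation-study slide below), the plant under the proportional controller alone could be simulated as:

```python
import numpy as np

def plant(y, u):
    """Plant from the slide: Y(k+1) = 0.5*Y(k) + sin(Y(k)) + U(k)."""
    return 0.5 * y + np.sin(y) + u

P = 1.4   # proportional gain (from the simulation-study slide)
y = 0.0
for k in range(400):
    yd_next = np.sin(2 * np.pi * (k + 1) / 200)  # desired output (assumed form)
    u = P * (yd_next - y)                        # proportional control only
    y = plant(y, u)
```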

Functioning of CMAC
- Initially the CMAC table is empty.
- In each time step k, the CMAC involves a recall and a learning process.

Recall Process
- Uses Yd(k+1) and Y(k) as the address to generate the control signal from the CMAC table, where Yd(k+1) is the desired system output for the next time step.
- The CMAC thus has two inputs and one output.

Learning Process
- U(k) is treated as the desired output to modify the content stored at the location addressed by Y(k+1) and Y(k), where Y(k+1) is the actual system output for the next time step k+1.
- To speed up the initial learning and to achieve better generalization, the generalization technique is employed: each input vector to the CMAC, for both recall and learning, maps to a number of memory locations instead of only one.
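A minimal sketch of such a many-locations mapping (the layer offsets and the hashing scheme are implementation assumptions, not from the paper):

```python
def cmac_addresses(x1, x2, g, n_cells):
    """Map a quantized 2-D input to g overlapping memory locations.

    Each of the g layers shifts the tiling by one quantization step, so
    nearby inputs share most of their g locations; this overlap is what
    produces generalization. Hashing layer/tile coordinates into a flat
    table is one common way to keep the table size bounded.
    """
    addrs = []
    for layer in range(g):
        c1 = (x1 + layer) // g  # tile coordinate along dimension 1
        c2 = (x2 + layer) // g  # tile coordinate along dimension 2
        addrs.append(hash((layer, c1, c2)) % n_cells)
    return addrs
```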

Function Approximation
- How precisely the CMAC can approximate a function is mainly determined by the quantization in each dimension of the input vector.
- Reducing the quantization step, however, quickly increases the memory demand for storing the CMAC table.

Table Update Mechanism
Gradient-type learning rule:
  Wi(k+1) = Wi(k) + β·[U(k) − Uc(k)] / g
where
- g: the size of the generalization
- Wi: the content of the i-th memory location, there being g locations to be updated
- β: the learning rate
- U: the correct (desired) data
- Uc: the current (actual) data
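Continuing the sketch above (cmac_addresses from the previous block; table is a plain Python list; recall as a sum of the selected weights is the standard CMAC choice, which the slide does not state explicitly):

```python
def cmac_recall(table, addrs):
    """Recall: the CMAC output is the sum of the g selected weights."""
    return sum(table[a] for a in addrs)

def cmac_learn(table, addrs, u_desired, beta):
    """Gradient-type rule from the slide:
    Wi(k+1) = Wi(k) + beta * [U(k) - Uc(k)] / g,
    applied to each of the g selected locations."""
    g = len(addrs)
    correction = beta * (u_desired - cmac_recall(table, addrs)) / g
    for a in addrs:
        table[a] += correction
```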

III. A Typical Simulation Study
- Proportional gain: P = 1.4
- Learning rate: β = 0.1
- Generalization = 50
- Quantization = 5/500 (meaning five units divided into 500 divisions)
- Reference command = sin(2πk/200), with each sinusoidal cycle consisting of 400 time steps
[Figure: tracking error versus number of cycles]
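Putting the pieces together (reusing plant, cmac_addresses, cmac_recall, and cmac_learn from the sketches above; the exact wiring and table size are assumptions), a run with these parameters could look like:

```python
import numpy as np

P, beta, g = 1.4, 0.1, 50
quant = 5.0 / 500                    # quantization step (5 units / 500 divisions)
table = [0.0] * 100_000              # CMAC weight table (size assumed)
steps_per_cycle = 400                # per the slide; note the sine formula's
                                     # period is 200 -- both kept as given

def quantize(v):
    return int(round(v / quant))

y = 0.0
for k in range(200 * steps_per_cycle):
    yd_next = np.sin(2 * np.pi * (k + 1) / 200)
    addrs = cmac_addresses(quantize(yd_next), quantize(y), g, len(table))
    # P controller alone for the first five cycles, then the CMAC is added.
    u_cmac = cmac_recall(table, addrs) if k >= 5 * steps_per_cycle else 0.0
    u = u_cmac + P * (yd_next - y)
    y_next = plant(y, u)
    # Learn: store U(k) at the location addressed by (Y(k+1), Y(k)).
    cmac_learn(table,
               cmac_addresses(quantize(y_next), quantize(y), g, len(table)),
               u, beta)
    y = y_next
```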

A Typical Simulation Study (cont.)
- In the first five cycles, the system is solely controlled by the P controller.
- The CMAC is added at the 6th cycle; the error then reduces quickly and significantly.
- The error remains small for some time, and then diverges around the 143rd cycle.
[Figure: control output versus number of time steps]

Discussion
- The CMAC can significantly reduce the tracking error.
- The CMAC can also destabilize a control system which is otherwise stable.
- The unstable phenomenon certainly comes from the interactions between the proportional controller and the CMAC network.

Discussion (cont.)
- The proportional controller cannot be removed, even when the magnitude of the proportional control is very small compared with that of the CMAC (i.e., when the system output error has been significantly reduced).
- Otherwise, the good tracking cannot be maintained.

Growth of Oscillation
[Figure panels: control output at the 8th, 40th, 90th, and 155th cycles]

V. Method for Improving System Stability
- The continued learning of the CMAC after the tracking error has reduced is the major cause of the instability.
- Simply stopping the CMAC learning has two drawbacks. First, it can be difficult to determine when to stop. Second, if the CMAC stops learning, the control system can no longer respond to changes in the reference command.

Modified Learning Rule
- To effectively stop the CMAC learning when the tracking error is small, while still allowing the system to respond to changes in the reference command, a deadzone is added to the CMAC updating rule:
  Wi(k+1) = Wi(k) + β·D[U(k) − Uc(k)] / g
where D[·] is a deadzone function that returns zero when its argument is small (its exact definition appears as a figure in the original slides).
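A sketch of the modified update (the exact deadzone shape is not given in the transcript; a standard soft deadzone with threshold delta is assumed):

```python
def deadzone(e, delta):
    """Standard deadzone (assumed form): ignore errors of size <= delta."""
    if abs(e) <= delta:
        return 0.0
    return e - delta if e > 0 else e + delta

def cmac_learn_deadzone(table, addrs, u_desired, beta, delta):
    """Modified rule: Wi(k+1) = Wi(k) + beta * D[U(k) - Uc(k)] / g."""
    g = len(addrs)
    uc = sum(table[a] for a in addrs)               # recall
    correction = beta * deadzone(u_desired - uc, delta) / g
    for a in addrs:
        table[a] += correction
```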

VI. Experiment
[Experimental results shown as figures in the original slides]

Paper Study #2
Intelligent Controller Using CMACs with Self-Organized Structure and Its Application for a Process System
T. Yamamoto, H. Yanagino & M. Kaneda
Proceedings of 1997 IEEE, pp. 76-81

Abstract
- This paper describes a design scheme for an intelligent controller consisting of several CMACs.
- Each CMAC is trained for a specified command signal.
- A new CMAC is generated for an unspecified command signal, and the CMAC whose command signal is nearest to the new command signal is eliminated.
- The proposed intelligent controller can be designed with relatively small memories.

1. Introduction
- The CMACs included in the intelligent controller are trained in both off-line and on-line learning processes for each of the specified command signals.
- For an unspecified command, a new CMAC is generated. Its initial weights are set by applying linear interpolation to the trained weights of the two CMACs whose command signals are nearest to the new command signal.
- The CMAC corresponding to the nearest command signal is then eliminated.
- The proposed intelligent controller can be designed with relatively small memories.

3. Intelligent Controller Design
[Block diagram: reference model; a bank of CMACs (CMAC1, …, CMACi, …, CMACn); controller; system]

Outline
- The input signals to each CMAC are the control error signal and its difference; that is, two-dimensional CMACs are equipped in the intelligent control system.
- By including the command signal as one of the input signals to the CMAC, the intelligent control system can be constructed using only a single three-dimensional CMAC.

Off-line Learning Process
- w(t): command signal; u*(t): teacher signal
- Update rule: [equation given as a figure in the original slides]
- h = 1, 2, …, K, where K is the total number of selected weights
- g1(t): the gradient used to update the weights
- a1, b1, c1: positive constants

On-line Learning Process
- The last weights obtained in the off-line learning are used as initial ones in the on-line learning.
- Update rule: [equation given as a figure in the original slides]
- k: the time-delay of the system
- g2(t): the gradient used to update the weights
- a2, b2, c2: positive constants

Off-line vs. On-line
- In the off-line learning process, the teacher signal u*(t) is generated by a certain control law, e.g., a PID control law or human experts. u*(t) is utilized to determine the initial weights for the on-line learning of the CMAC.
- In the on-line learning process, the teacher signal u*(t) cannot be obtained. The desired reference model output ym(t) is introduced instead, and the on-line learning is performed so that the system output y(t) approaches ym(t).

Self-organized Structure
- A new CMAC is generated for a new command signal.
- The initial weights of the new CMAC are set by applying linear interpolation to the trained weights of the two CMACs whose command signals are nearest to the new command signal.
- The CMAC whose command signal is nearest to the new one is eliminated.
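A minimal sketch of that initialization (the function name, the element-wise interpolation, and the ordering r_lo < r_new < r_hi are assumptions; the transcript only states that linear interpolation is applied to the trained weights):

```python
import numpy as np

def init_new_cmac(w_lo, w_hi, r_lo, r_hi, r_new):
    """Initialize a new CMAC's weight table for command r_new by
    element-wise linear interpolation between the trained tables of
    the two nearest-command CMACs (commands r_lo < r_new < r_hi)."""
    alpha = (r_new - r_lo) / (r_hi - r_lo)
    return (1.0 - alpha) * np.asarray(w_lo) + alpha * np.asarray(w_hi)
```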

4. Experimental Results
- Air pressure control system.
- Control objective: regulate the air pressure y to any desired value by manipulating the control valve angle u.
- In order to obtain u*(t), a PID control law is employed for this control system.

Control Result
[Figures: conventional PID control vs. off-line learning (20 iterations)]

Control Result (cont.)
[Figures: on-line learning (after 5 more iterations); unspecified command signal]