Design of Attitude and Path Tracking Controllers for Quad-Rotor Robots using Reinforcement Learning Sérgio Ronaldo Barros dos Santos Cairo Lúcio Nascimento Júnior Instituto Tecnológico de Aeronáutica (ITA) Brazil Sidney Nascimento Givigi Júnior Royal Military College of Canada (RMCC) Canada
Introduction Quad-rotor robots have attracted the attention of many researchers in the past few years. Examples of applications: Military applications: surveillance, border patrolling, crowd control. Civilian applications: rescue missions during floods and earthquakes, monitoring pipelines and electric transmission lines.
Introduction A quad-rotor consists of four independent propellers attached to the corners of a cross-shaped frame, with adjacent propellers turning in opposite directions.
Quad-Rotor Dynamics All rotational and translational movements of a quad- rotor can be achieved by adjusting its rotor speeds.
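As a concrete illustration of this mapping, the sketch below computes the collective thrust and body torques from the four rotor speeds for a plus-configuration quad-rotor. The thrust coefficient k_f, drag-torque coefficient k_m, and arm length l are illustrative placeholder values, not parameters of the X3D-BL used in this work.

```python
import numpy as np

# Hypothetical coefficients for illustration only (not from this work):
# k_f: thrust coefficient, k_m: drag-torque coefficient, l: arm length (m).
k_f, k_m, l = 3.0e-6, 1.1e-7, 0.2

def rotor_speeds_to_wrench(omega):
    """Map the four rotor speeds (rad/s) to collective thrust and body
    torques for a plus-configuration quad-rotor (rotors 0/2 spin opposite
    to rotors 1/3)."""
    f = k_f * np.square(omega)          # per-rotor thrust
    thrust = f.sum()                    # collective thrust along body z
    tau_roll  = l * (f[3] - f[1])       # differential thrust, left/right pair
    tau_pitch = l * (f[0] - f[2])       # differential thrust, front/back pair
    tau_yaw   = k_m * (omega[0]**2 - omega[1]**2 + omega[2]**2 - omega[3]**2)
    return thrust, np.array([tau_roll, tau_pitch, tau_yaw])

# Equal rotor speeds give pure collective thrust and zero net torque
# (the hover condition).
thrust, tau = rotor_speeds_to_wrench(np.array([400.0, 400.0, 400.0, 400.0]))
```

Changing individual rotor speeds away from this balance produces the roll, pitch, yaw, and height motions the controllers command.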
Introduction Quad-rotor robots are affected by a number of physical effects such as: Aerodynamic effects, Gravity effect, Ground effect, Gyroscopic effect, Friction. Due to these nonlinear effects, it is difficult to design good controllers for a quad-rotor.
Introduction Typically quad-rotor applications use controllers derived using linearized models. These controllers exhibit poor performance for fast maneuvers or in the presence of disturbances such as wind and the ground effect. In order to perform path tracking in the presence of nonlinear disturbances, a machine learning technique (RL-LA) will be applied.
Objectives To present a solution for testing and evaluation of attitude stabilization and path tracking controllers for quad-rotors. To use a Reinforcement Learning algorithm (Learning Automata) to adjust the controllers' parameters using a simulation environment that includes wind and ground effects.
Quad-Rotor Dynamics An inertial frame and a body fixed frame whose origin is in the center of mass of the quad-rotor are used.
Quad-Rotor Dynamics The dynamic model is derived under the following assumptions: the vehicle frame is rigid and symmetrical, the body fixed frame is located at the vehicle's center of mass, and the propellers are also rigid.
Quad-Rotor Dynamics The dynamic model of the quad-rotor can be derived using the Newton-Euler formalism.
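A minimal sketch of the translational part of a Newton-Euler quad-rotor model: collective thrust acts along the body z-axis and is rotated into the inertial frame, while gravity pulls straight down. The ZYX Euler-angle convention, mass, and gravity values here are illustrative assumptions, not the parameters of the X3D-BL model.

```python
import numpy as np

# Illustrative values, not from the presentation.
m, g = 0.55, 9.81  # mass (kg), gravity (m/s^2)

def rotation_body_to_inertial(phi, theta, psi):
    """ZYX Euler rotation matrix (roll phi, pitch theta, yaw psi)."""
    c, s = np.cos, np.sin
    Rz = np.array([[c(psi), -s(psi), 0], [s(psi), c(psi), 0], [0, 0, 1]])
    Ry = np.array([[c(theta), 0, s(theta)], [0, 1, 0], [-s(theta), 0, c(theta)]])
    Rx = np.array([[1, 0, 0], [0, c(phi), -s(phi)], [0, s(phi), c(phi)]])
    return Rz @ Ry @ Rx

def translational_accel(phi, theta, psi, thrust):
    """Translational Newton-Euler equation: a = (1/m) R e3 T - g e3."""
    e3 = np.array([0.0, 0.0, 1.0])
    return (rotation_body_to_inertial(phi, theta, psi) @ e3) * thrust / m - g * e3

# In level hover, a thrust of m*g exactly cancels gravity.
a = translational_accel(0.0, 0.0, 0.0, m * g)
```

Tilting the vehicle (nonzero roll or pitch) redirects part of the thrust horizontally, which is how the outer-loop controllers produce x and y motion.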
Robot Controllers The control architecture for the robot involves two loops: inner and outer. The roll, pitch, and yaw angles are represented by Φ, θ and ψ, respectively.
Robot Controllers Three nonlinear control strategies are used: Nonlinear PID Control, the Backstepping technique, and Sliding Mode Control.
Robot Controllers The parameters of the 6 controllers are tuned using the RL algorithm. The path tracking controllers handle height, x-position, and y-position; the attitude controllers handle pitch, roll, and yaw.
Technique | Height | x-position | y-position | Pitch | Roll | Yaw
PID | kp, ki, kd | kp, ki, kd | kp, ki, kd | kp, ki, kd | kp, ki, kd | kp, ki, kd
Backstepping | α12, α11 | α10, α9 | α4, α3 | α1, α2 | α5, α6 | α7, α8
Sliding Mode | k5, λ5 | k4, λ4 | k2, λ2 | k1, λ1 | k3, λ3 | k6, λ6
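For reference, a minimal discrete-time PID loop of the kind whose gains (kp, ki, kd) are tuned per controller. This is a plain (linear) PID sketch, not the nonlinear PID law used in the work, and the gains shown are placeholders rather than learned values.

```python
class PID:
    """Minimal discrete-time PID loop, illustrating the gain set
    (kp, ki, kd) that the learning algorithm tunes per axis."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt          # accumulate integral term
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: a roll-angle loop with illustrative (not learned) gains.
roll_pid = PID(kp=2.0, ki=0.1, kd=0.5, dt=0.01)
u = roll_pid.step(setpoint=0.1, measurement=0.0)
```

The backstepping and sliding mode controllers use different control laws, but each is likewise parameterized by a small set of scalars (α's, or k's and λ's) that the learning algorithm selects.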
Simulation Environment A simulation setup is proposed to train and evaluate the quad-rotor controller under more realistic conditions.
Simulation Environment Using Plane-Maker, an X-Plane model of the X3D-BL quad-rotor (manufactured by Ascending Technologies) was created.
Simulation Environment The responses of the X-Plane and SIMULINK models are compared for a hovering maneuver.
Reinforcement Learning Learning Automata (LA) is an alternative approach that can be used to adjust the parameters of the controllers.
Reinforcement Learning Steps of the learning process: (1) Initialize the probability and parameter vectors of each controller; (2) Select the parameters for each controller using its associated probability vector; (3) Execute the desired task, obtain its response, and use a cost function to measure its performance; (4) Compute the reinforcement signal; (5) Adjust the probability vectors; (6) Check the probability vectors for convergence; otherwise return to step 2.
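The steps above can be sketched as follows for a single controller parameter. The discrete candidate-gain set, the linear reward scheme, and the stand-in cost function are all assumptions made for illustration; the slides do not specify the exact update rule or parameter encoding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumptions, not from the slides): one controller
# parameter is chosen from a small discrete set of candidate gains.
candidates = np.array([0.5, 1.0, 2.0, 4.0])
p = np.full(len(candidates), 1.0 / len(candidates))  # step 1: uniform probabilities
learning_rate = 0.1

def run_trial(gain):
    """Stand-in for executing the task in the simulator and scoring it
    with the cost function; here the cost is smallest near gain = 2.0."""
    return (gain - 2.0) ** 2

for trial in range(200):
    idx = rng.choice(len(candidates), p=p)   # step 2: select a parameter
    cost = run_trial(candidates[idx])        # step 3: execute and measure
    beta = 1.0 / (1.0 + cost)                # step 4: reinforcement in (0, 1]
    p[idx] += learning_rate * beta * (1.0 - p[idx])  # step 5: reward chosen action
    p /= p.sum()                             # keep a valid probability vector
    if p.max() > 0.99:                       # step 6: convergence check
        break

best = candidates[p.argmax()]
```

Because low-cost parameters receive larger reinforcement, the probability mass tends to concentrate on them over repeated trials; the same loop runs in parallel for each controller's parameter vector.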
Reinforcement Learning Supervisory level: LA adjusts the parameters of the attitude and path tracking controllers.
Reinforcement Learning Learning the parameters of the controllers was executed using the X-Plane model in 3 stages with increasing levels of difficulty: without the presence of any external disturbances; considering only the presence of wind; considering both wind and ground effects.
Reinforcement Learning A cost function evaluates the response of each controller (i) for the selected task at the end of each trial (k):
Reinforcement Learning The reinforcement signal is computed for each controller (i) at the end of each trial (k):
Reinforcement Learning The element of the probability vector associated with the selected controller parameter is adjusted: The probability vector is then normalized.
Reinforcement Learning Learning the desired trajectory using the PID controller during the first stage.
Results Simulation results obtained with the nonlinear PID controllers. The trajectory is formed by the points (0,0) - (0,10) - (10,10) - (10,0) meters.
Results The quad-rotor robot during the execution of a pre-defined trajectory visualized in the X-Plane.
Results The backstepping controller results in the presence of wind and ground effects.
Results The path tracking of the quad-rotor obtained by the backstepping controllers in the presence of wind and ground effects, visualized in the X-Plane.
Results The sliding mode controller response in the presence of wind and ground effects.
Results The quad-rotor trajectory obtained by the sliding mode controllers in the presence of wind and ground effects, visualized in the X-Plane.
Results Evaluation of the controllers' tracking of the desired path after the learning process.
Conclusions The proposed method (Learning Automata) allows one to tune the parameters of different controllers for a quad-rotor aircraft, considering external disturbances such as wind and ground effects. It was shown that the proposed simulation framework can be useful to investigate the application of learning algorithms to adjust the control laws of quad-rotors for different flight maneuvers.
Future Research Evaluate the controllers (obtained using LA, the simulated model, and the simulation environment) using real quad-rotors. On-line learning: useful to correct inaccuracies of the simulated model and environment.
Future Research Comparison to other RL methods (e.g., Q-Learning) and other search procedures (e.g., genetic algorithms). Limitation of learning: generalization to other tasks. Problem: selection of tasks to be executed during training (adaptive control: choice of excitation signal).
Thank You !