The ant colony algorithm
In short, the domain is defined by the limits of the membership functions, which lets us recast the fuzzy-rule tuning problem as finding a route on a plane. Fig. 9: the fuzzy rules transformed into route form.
Let b_i(t) be the number of ants in city i at time t; in the computation the cities are the points of the domain defined above, and m = Σ_i b_i(t) is the total number of ants. The probability that an ant chooses its target, that is, the probability of reaching the next city under the influence of visibility and pheromone, is given by the following equation.
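The equation itself is not reproduced in this copy of the slide; presumably it is the classic Dorigo transition probability, in which pheromone and visibility are combined as

p_{ij}^{k}(t) = \frac{[\tau_{ij}(t)]^{\alpha}\,[\eta_{ij}]^{\beta}}{\sum_{s \in \mathrm{allowed}_k} [\tau_{is}(t)]^{\alpha}\,[\eta_{is}]^{\beta}}, \qquad j \in \mathrm{allowed}_k,

where allowed_k is the set of cities ant k may still visit, η_ij = 1/d_ij is the visibility of edge (i, j), and α and β weight the influence of pheromone and visibility respectively.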
For ant k, τ_ij(t) stands for the intensity of pheromone at time t on the route from i to j; the pheromone is updated as shown in the following equations.
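The update equations referenced here are likewise missing from this copy; in the classic Ant System they take the form

\tau_{ij}(t+n) = \rho\,\tau_{ij}(t) + \Delta\tau_{ij}, \qquad \Delta\tau_{ij} = \sum_{k=1}^{m} \Delta\tau_{ij}^{k},

where ρ (0 < ρ < 1) is the trail persistence (1 − ρ is the evaporation rate) and Δτ_ij^k is the amount of pheromone ant k deposits on edge (i, j) during one cycle.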
Ant colony algorithm used in obstacle avoidance
We apply the ant colony algorithm to plan the obstacle-avoidance path of a moving object, here the soccer robot, as Fig. 10 shows. Fig. 10: obstacle-avoidance path.
To improve the search speed of the ant colony algorithm and keep it from converging slowly or getting trapped in a local optimum, we take the pheromone concentration obtained at node i into account to determine the number of candidate paths e an ant may choose, as given by the following equation:
The soccer robot must choose the best path. We adopt an objective function to describe the performance of each path choice, given by the following equation. Once the objective function is fixed, the weight of every path the robot may take is determined by the following equation.
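The objective function and weights are not reproduced in this copy of the slide. Purely as an illustration (an assumption, not the authors' formula), a typical choice scores each candidate path by its length plus an obstacle-clearance penalty and weights paths by the normalized inverse of that score:

J_k = L_k + \lambda \sum_{o} \max\bigl(0,\; d_{\mathrm{safe}} - d_{k,o}\bigr), \qquad w_k = \frac{1/J_k}{\sum_{s} 1/J_s},

where L_k is the length of path k, d_{k,o} its minimum distance to obstacle o, d_safe a safety margin, and λ a penalty weight.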
We simulate the ants' pheromone in this way: once all ants have found a feasible solution for one planned path, it may still not be the best solution because the pheromone has changed in the meantime, so a global amendment of the pheromone is necessary. The amendment rule is given by the following equation.
Δτ_ij is the pheromone increment on path (i, j); its formula follows the Ant-Cycle model, as shown in the equation below.
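The equation is missing from this copy; in its standard form the Ant-Cycle deposit is

\Delta\tau_{ij}^{k} = \begin{cases} Q / L_k, & \text{if ant } k \text{ passed through edge } (i, j) \text{ in this cycle} \\ 0, & \text{otherwise,} \end{cases}

where Q is a constant and L_k is the length of the path found by ant k, so shorter paths receive more pheromone.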
We now explain the procedure of the ant colony algorithm.
Step 1: Initialize the parameters.
Step 2: Run the iterative process and calculate the probability of each path choice according to the transition-probability equation.
Step 3: Update the pheromone concentration of each path according to the update equation.
Step 4: Repeat Steps 2 and 3 until the ant reaches its target point.
Step 5: Stop the iterative search when one of the m ants has completed its path and its length already exceeds the best path length of the previous iteration.
Step 6: Set N = N + 1; if N < NC, place the ants at the starting point and target point again and repeat Step 2; otherwise output the best path and stop the algorithm. (A sketch of this loop follows below.)
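To make Steps 1 to 6 concrete, here is a minimal sketch of the loop in Python, assuming a small grid of waypoints; the parameter values, grid size, and obstacle set are illustrative assumptions, not values from the slides.

import math
import random

# Minimal sketch of Steps 1-6 on a toy grid. All parameter values, the grid,
# and the obstacle set are illustrative assumptions, not taken from the slides.

ALPHA, BETA = 1.0, 2.0        # weights of pheromone and visibility
RHO, Q = 0.5, 100.0           # trail persistence and deposit constant
M_ANTS, NC = 20, 50           # number of ants m and of iterations NC

GRID_W, GRID_H = 10, 10
START, GOAL = (0, 0), (9, 9)
OBSTACLES = {(4, 4), (4, 5), (5, 4), (5, 5)}

def neighbours(cell):
    # 8-connected free cells inside the grid.
    x, y = cell
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            nx, ny = x + dx, y + dy
            if (dx, dy) != (0, 0) and 0 <= nx < GRID_W and 0 <= ny < GRID_H \
                    and (nx, ny) not in OBSTACLES:
                yield (nx, ny)

def visibility(cell):
    # Heuristic eta: inverse of the remaining distance to the goal.
    return 1.0 / (1.0 + math.dist(cell, GOAL))

tau = {}                                   # Step 1: pheromone table (default 1.0)
best_path, best_len = None, float("inf")

for n in range(NC):                        # Step 6: N = N + 1 until N = NC
    paths = []
    for _ in range(M_ANTS):
        path, visited = [START], {START}
        while path[-1] != GOAL:            # Steps 2 and 4: build one path
            options = [c for c in neighbours(path[-1]) if c not in visited]
            if not options or len(path) > best_len:
                path = None                # dead end or already worse (Step 5)
                break
            weights = [(tau.get(c, 1.0) ** ALPHA) * (visibility(c) ** BETA)
                       for c in options]
            nxt = random.choices(options, weights)[0]
            path.append(nxt)
            visited.add(nxt)
        if path:
            paths.append(path)
    tau = {c: RHO * t for c, t in tau.items()}   # Step 3: evaporation
    for path in paths:                           # Step 3: Ant-Cycle deposit Q/L_k
        for c in path:
            tau[c] = tau.get(c, 1.0) + Q / len(path)
        if len(path) < best_len:
            best_path, best_len = path, len(path)

print("best path length:", best_len)

The deposit follows the Ant-Cycle rule Q/L_k from the previous slide; a real implementation would replace the toy grid with the geometry of the robot-soccer field.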
Experiments
This part presents several simulations. The first is an experiment in which the robot reaches its top speed while the controller predicts its route. The second is a simulation of the robot's optimal route. The third is the robot's obstacle-avoidance route.
Simulations of the velocity and the GPC
Fig. 11: Using the fuzzy ant colony algorithm to adjust the velocity of the soccer robot. Fig. 12: Using GPC to predict the movement of the target and design the robot's moving route.
Simulation of the robot's path
Fig. 13: The robot controlled by the fuzzy ant colony algorithm chasing the target's route, simulated in MATLAB. Fig. 14: The robot driven by the fuzzy ant colony algorithm searching for the target's route, using the FIRA simulator.
Simulation of the obstacle-avoidance path
Fig. 15: Obstacle-avoidance path of the soccer robot simulated in MATLAB. Fig. 16: Obstacle-avoidance path of the soccer robot using the FIRA simulator.
Conclusion
The experimental results above show that the proposed method can be applied effectively to the wheeled robot, and that the generalized predictive controller we designed can determine the position of the target at the next sampling time. In future work, we will shorten the time the fuzzy ant colony takes to converge so that the system reaches the optimal condition in a shorter time, and by combining it with other algorithms we will look for the best combination of methods.
Thanks for your attention!