CIS 488/588 Bruce R. Maxim UM-Dearborn
Slides from: Doug Gray, David Poole

Target Selection
CIS 488/588 Bruce R. Maxim UM-Dearborn
12/6/2018

Case Study - 1
Important problem-space attributes affecting target selection:
- Distance from origin to target
- Distance between enemy position and target
- Distance between target and estimated enemy position
- Relative angle between projectile trajectory and velocity of target


Case Study - 2
Important animat attributes affecting target selection:
- Angular divergence from the animat's heading to the target, as well as the angular velocities involved
- Speed of travel and direction of movement, which affect the ability to hit targets
It should be possible to create a perceptron that can guess how much damage a projectile will inflict on the enemy, and use this to assist in target selection.
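The angular divergence mentioned above is just the angle between the animat's heading and the direction to the target, which falls out of a dot product. A minimal sketch (the `Vec3` type and function names are illustrative, not from the actual module):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

float Dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

float Length(const Vec3& v) { return std::sqrt(Dot(v, v)); }

// Angle in radians between the animat's heading and the direction
// from its position to a proposed target.
float AngularDivergence(const Vec3& heading, const Vec3& position,
                        const Vec3& target) {
    Vec3 toTarget = { target.x - position.x,
                      target.y - position.y,
                      target.z - position.z };
    float c = Dot(heading, toTarget) / (Length(heading) * Length(toTarget));
    // Clamp against floating-point drift before acos.
    return std::acos(std::fmax(-1.0f, std::fmin(1.0f, c)));
}
```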

Design Rationale - 1
- A fully connected, feed-forward MLP module should do the job
- It would be nice for the animat to learn target selection on-line
- A good compromise is to gather game data on-line but do the actual training off-line
- This gives the designer some control over which features to emphasize and provides some guidance to the learning process

Design Rationale - 2
- Having lots of data is good, since it allows assessment of the impact of noise on a game
- Since no two fights are ever the same in a good game, noise can be expected to affect learning a great deal
- It might be necessary to create a statistical profile for the MLP to learn from

Module Design
Initialization in XML:
<layer inputs="4" units="8"/>
<layer units="1"/>
Interfaces (both incremental and batch):
void Run(const vector<float>& input, const vector<float>& output);
float Sample(const vector<float>& input,
void Randomize( );
float Batch(const vector<Pattern>& inputs, const vector<Pattern>& outputs);

Data Structures
- Array of layers
- Each layer contains a set of neurons plus an output array for the layer
- Training data is stored separately
- The activation function derivative is also stored, along with the gradient error for each neuron
- Layers also store the weight deltas used to allow momentum in steepest descent
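The storage described above can be sketched as plain structs; these names are illustrative and not the actual module's types:

```cpp
#include <cassert>
#include <vector>

// One neuron: weights per input (bias last), its activation, the
// derivative of the activation function, the back-propagated gradient,
// and the previous weight changes kept for momentum.
struct Neuron {
    std::vector<float> weights;
    float output = 0.0f;
    float derivative = 0.0f;
    float gradient = 0.0f;
    std::vector<float> deltas;   // previous weight deltas, for momentum
};

// One fully connected layer: its neurons plus an output array that the
// next layer reads as its input.
struct Layer {
    std::vector<Neuron> neurons;
    std::vector<float> outputs;

    Layer(int units, int inputs)
        : neurons(units), outputs(units) {
        for (Neuron& n : neurons) {
            n.weights.assign(inputs + 1, 0.0f);  // +1 for the bias
            n.deltas.assign(inputs + 1, 0.0f);
        }
    }
};
```

With the XML configuration from the previous slide (4 inputs, 8 hidden units, 1 output), the network would hold `Layer(8, 4)` followed by `Layer(1, 8)`.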

Simulation and Learning
- Simulation is a set of nested loops processing the layers in turn
- Each loop lives in its own function (except for the innermost loop)
- The learning algorithm performs a forward simulation to obtain the activations and their derivatives
- The error is then backpropagated to compute each neuron's gradient and adjust the weights
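The nested-loop structure can be sketched as a generic sigmoid MLP trained by backpropagation with momentum. This is an illustrative implementation under common textbook assumptions, not the book's actual code; all names are invented:

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>
#include <vector>

// w[i][j][k]: layer i, neuron j, input k (the last k is the bias).
// dw holds the previous weight deltas used for momentum.
struct Mlp {
    std::vector<std::vector<std::vector<float>>> w, dw;
    std::vector<std::vector<float>> out, grad;

    explicit Mlp(const std::vector<int>& sizes) {
        std::srand(42);  // deterministic initialization for the sketch
        for (size_t i = 1; i < sizes.size(); ++i) {
            w.push_back({}); dw.push_back({});
            out.push_back(std::vector<float>(sizes[i]));
            grad.push_back(std::vector<float>(sizes[i]));
            for (int j = 0; j < sizes[i]; ++j) {
                std::vector<float> row(sizes[i - 1] + 1);
                for (float& x : row)
                    x = std::rand() / (float)RAND_MAX - 0.5f;
                w.back().push_back(row);
                dw.back().push_back(std::vector<float>(row.size(), 0.0f));
            }
        }
    }

    static float Sigmoid(float x) { return 1.0f / (1.0f + std::exp(-x)); }

    // Forward simulation: nested loops over layers, neurons, weights.
    std::vector<float> Run(std::vector<float> in) {
        for (size_t i = 0; i < w.size(); ++i) {
            for (size_t j = 0; j < w[i].size(); ++j) {
                float sum = w[i][j].back();                 // bias
                for (size_t k = 0; k < in.size(); ++k)
                    sum += w[i][j][k] * in[k];
                out[i][j] = Sigmoid(sum);
            }
            in = out[i];
        }
        return in;
    }

    // One training step: forward pass, backpropagate gradients from the
    // output layer, then update weights with learning rate and momentum.
    void Train(const std::vector<float>& in, const std::vector<float>& target,
               float rate, float momentum) {
        Run(in);
        int L = (int)w.size() - 1;
        for (int i = L; i >= 0; --i)
            for (size_t j = 0; j < w[i].size(); ++j) {
                float o = out[i][j], err;
                if (i == L) {
                    err = target[j] - o;
                } else {
                    err = 0.0f;
                    for (size_t n = 0; n < w[i + 1].size(); ++n)
                        err += w[i + 1][n][j] * grad[i + 1][n];
                }
                grad[i][j] = err * o * (1.0f - o);          // sigmoid derivative
            }
        for (size_t i = 0; i < w.size(); ++i) {
            const std::vector<float>& src = (i == 0) ? in : out[i - 1];
            for (size_t j = 0; j < w[i].size(); ++j)
                for (size_t k = 0; k < w[i][j].size(); ++k) {
                    float x = (k < src.size()) ? src[k] : 1.0f;  // bias input
                    float d = rate * grad[i][j] * x + momentum * dw[i][j][k];
                    w[i][j][k] += d;
                    dw[i][j][k] = d;
                }
        }
    }
};
```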

Algorithm Outline
// Selects the best target by predicting damage
// Obstacles are checked before calling this function
function select_target {
    repeat
        // propose a target spot near the enemy
        target = position + randvec();
        // perceptron predicts hit probability
        value = estimate_damage(target);
    until value > threshold;  // stop if big enough
    return target;
}
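As written, the loop can spin forever if no candidate ever clears the threshold, so a concrete version wants a maximum-attempts guard with a best-so-far fallback. A runnable sketch with a dummy stand-in for the perceptron's damage estimate (all names and the heuristic are illustrative):

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>

struct Vec3 { float x, y, z; };

// Stand-in for the trained perceptron; the real module would run the
// neural network on the candidate target's features.
float EstimateDamage(const Vec3& target) {
    return 0.1f + 0.01f * std::fabs(target.x);  // dummy heuristic
}

// Random offset near the enemy, within an illustrative 40-unit radius.
Vec3 RandVec() {
    auto r = [] { return std::rand() / (float)RAND_MAX * 2.0f - 1.0f; };
    return { r() * 40.0f, r() * 40.0f, r() * 40.0f };
}

// The outline above, with a guard so the loop cannot run forever.
Vec3 SelectTarget(const Vec3& enemyPos, float threshold, int maxTries = 32) {
    Vec3 best = enemyPos;
    float bestValue = -1.0f;
    for (int i = 0; i < maxTries; ++i) {
        Vec3 o = RandVec();
        Vec3 target = { enemyPos.x + o.x, enemyPos.y + o.y, enemyPos.z + o.z };
        float value = EstimateDamage(target);
        if (value > threshold) return target;   // stop if big enough
        if (value > bestValue) { bestValue = value; best = target; }
    }
    return best;  // fall back to the best candidate seen
}
```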

Tracking Rockets - 1
The AI must be able to answer the following:
- Did the projectile explode nearby?
- Was it another player's projectile?
- Was there any damage from the projectile?
- What conditions were present when the shot was fired?
To prevent misinterpretations, each animat is only allowed to have one rocket in the air at a time.

Tracking Rockets - 2
Information gathering:
- When a rocket is fired, start tracking it and remember the initial conditions
- When a sensor detects noise, check whether it was caused by a rocket
- If a rocket explosion is detected, set the target to the point of collision and the expected collision time to the current time
- When the collision time is past, use the data to train the NN
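The sequence above amounts to a tiny state machine: one tracking slot (a single rocket in the air at a time), filled on firing, completed on explosion, and flushed into a training sample once the collision time has passed. A simplified sketch with invented names (the real module would also store the collision point):

```cpp
#include <cassert>
#include <vector>

struct TrainingSample {
    std::vector<float> features;  // conditions when the shot was fired
    float damage = 0.0f;          // observed outcome
};

class RocketTracker {
    bool tracking_ = false;
    bool exploded_ = false;
    float collisionTime_ = 0.0f;
    TrainingSample pending_;
public:
    // Rocket fired: start tracking and remember the initial conditions.
    void OnFired(const std::vector<float>& conditions) {
        tracking_ = true;
        exploded_ = false;
        pending_ = { conditions, 0.0f };
    }
    // Explosion detected: record the outcome and the collision time.
    void OnExplosion(float damage, float time) {
        if (!tracking_) return;
        exploded_ = true;
        pending_.damage = damage;
        collisionTime_ = time;
    }
    // Once the collision time is past, emit the sample for NN training.
    bool Update(float now, TrainingSample& out) {
        if (tracking_ && exploded_ && now > collisionTime_) {
            tracking_ = false;
            out = pending_;
            return true;
        }
        return false;
    }
};
```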

Dealing with Noise
- Simulation data contains lots of noise
- Game situations have been simplified
- The features gathered from situations are limited
- The environment is unpredictable, since it contains autonomous agents
- Gathering on-line data is very difficult
- It is better to gather lots of data from several animats and log it for later analysis and off-line training

Inputs and Outputs
Inputs are chosen from distances and dot products using four points in space:
- Player origin
- Enemy position
- Estimated position
- Chosen target
The output is a Boolean value indicating whether damage was inflicted or not.
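One plausible way to turn those four points into the network's four inputs is three distances plus one dot product; the exact feature choice in the case study is not specified here, so this encoding is an assumption and all names are illustrative:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 Sub(const Vec3& a, const Vec3& b) {
    return { a.x - b.x, a.y - b.y, a.z - b.z };
}
static float Dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}
static float Dist(const Vec3& a, const Vec3& b) {
    Vec3 d = Sub(a, b);
    return std::sqrt(Dot(d, d));
}

// Hypothetical 4-input encoding from the four points listed above.
std::vector<float> MakeFeatures(const Vec3& origin, const Vec3& enemy,
                                const Vec3& estimate, const Vec3& target) {
    Vec3 shot = Sub(target, origin);      // projectile direction
    Vec3 toEnemy = Sub(enemy, origin);
    float align = Dot(shot, toEnemy) /
                  std::fmax(1e-6f,
                            std::sqrt(Dot(shot, shot) * Dot(toEnemy, toEnemy)));
    return { Dist(origin, target),        // origin to target
             Dist(enemy, target),         // enemy to target
             Dist(estimate, target),      // estimate to target
             align };                     // cosine of the firing-angle error
}
```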


Training
- Bots are not given unlimited rockets; they are expected to pick up a rocket launcher and forage for ammunition
- During testing, rockets are freely available
- Animats without ammo serve as good targets anyway
- Learning can be sped up by running Quake in server mode without graphics

Splash
- Predicts the enemy position and generates a random target around it
- A perceptron is used to evaluate the likelihood of success
- The rocket is fired if the chances of success are acceptable
- Training data is gathered once the expected collision time is past

Evaluation - 1
- Noisy environment; the average error rate is 25%
- It takes the batch algorithm hundreds of training periods (epochs) to become competent
- The NN visibly improves the animat's shooting abilities
- The perceptron often aims at the floor near the enemy and tends to favor spots:
  - close to the estimate
  - close to the enemy's current position
  - close to the rocket's origin

Evaluation - 2
- The perceptron is good at finding flaws in suggestions made by the target-generation mechanism
- It prevents kamikaze shots by enforcing a minimal distance to the target
- The perceptron only improves target selection according to its experience
- It does not seem to learn "stupid" behaviors