Training Neural Networks to Play Checkers

Presentation transcript:

Training Neural Networks to Play Checkers Daniel Boonzaaier Supervisor – Adiel Ismail April 2017

Content Project Overview Checkers – the board game Background on Neural Networks Neural Network applied to Checkers Requirements Project Plan References Questions

Project Overview Why this project? A program that teaches itself checkers. Checkers – a game of complete information. Neural Network.

Checkers – the Board Game Two-player board game. Played on a board sectioned into 64 squares. Each player has 12 pieces. The goal is to capture all of the opponent's pieces. Pieces can only move diagonally forward. When a piece reaches the far end of the board it is “kinged” and can then move in both directions. Multiple jumps can occur as long as there are corresponding empty squares for the piece to jump to. If an opportunity to jump is present, it must be taken.
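As an aside on how these rules translate into code, the sketch below (a hypothetical `Piece` class, not the project's implementation) captures two of them: normal pieces move diagonally forward only, and a piece reaching the far row is kinged and may then move in both directions.

```python
# Minimal sketch (hypothetical, not the project's code) of the movement
# directions and the kinging rule described above.
WHITE, BLACK = "white", "black"

class Piece:
    def __init__(self, colour):
        self.colour = colour
        self.is_king = False

    def row_directions(self):
        """Diagonal row offsets this piece may move in."""
        if self.is_king:
            return (-1, +1)                 # kings move in both directions
        return (-1,) if self.colour == WHITE else (+1,)  # men: forward only

    def maybe_king(self, row, board_size=8):
        """Promote the piece once it reaches the opponent's back row."""
        back_row = 0 if self.colour == WHITE else board_size - 1
        if row == back_row:
            self.is_king = True

piece = Piece(WHITE)
print(piece.row_directions())   # (-1,)   -> forward only
piece.maybe_king(row=0)
print(piece.row_directions())   # (-1, 1) -> both directions after kinging
```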

Background on Neural Networks What is a Neural Network? A machine learning algorithm. Input propagates through the network to produce an output. An error is computed by comparing the output with what is expected. The network is updated by propagating this error back through it. Updates continue until the error is acceptably small.
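To make the loop concrete, here is a minimal sketch (not the project's code) of a single sigmoid neuron learning the OR function: propagate the input forward, compare the output with the expected value, adjust the weights to reduce the error, and repeat until the error is acceptably small. The OR task, learning rate and stopping threshold are illustrative choices.

```python
# Minimal sketch of the training loop described above (illustrative only):
# forward pass -> error vs. expected output -> adjust weights -> repeat
# until the error is acceptably small. Single sigmoid neuron, OR function.
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR truth table
w1, w2, bias = (random.uniform(-1, 1) for _ in range(3))
rate = 0.5

for epoch in range(20000):
    total_error = 0.0
    for (x1, x2), expected in data:
        output = sigmoid(w1 * x1 + w2 * x2 + bias)  # input propagates forward
        error = expected - output                   # compare with expectation
        total_error += error ** 2
        grad = error * output * (1 - output)        # error scaled by sigmoid slope
        w1 += rate * grad * x1                      # adjust weights to reduce error
        w2 += rate * grad * x2
        bias += rate * grad
    if total_error < 0.01:                          # acceptably small error: stop
        break

print(f"learned weights: {w1:.2f}, {w2:.2f}, bias {bias:.2f}")
```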

Background on Neural Networks The network has an input, a hidden and an output layer. Nodes are fully connected by weights. Calculations in the hidden layer produce the output. Training is done on a set of inputs, with backpropagation used to update the weights. Weights are updated until the error is acceptably small. Once trained, the weights are fixed for testing.
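The layered structure can be sketched with a little NumPy: one input layer, one fully connected hidden layer, one output layer, trained with backpropagation until the error is small, after which the weights stay fixed. The XOR task, layer sizes, learning rate and random seed below are illustrative assumptions, not the project's actual configuration.

```python
# Minimal sketch of an input -> hidden -> output network trained with
# backpropagation (illustrative assumptions: XOR task, 4 hidden nodes,
# learning rate 1.0). The layers are fully connected by weights, and the
# weights are fixed once training stops.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # expected outputs

W1 = rng.normal(0.0, 1.0, (2, 4))   # input layer  -> hidden layer weights
b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1))   # hidden layer -> output layer weights
b2 = np.zeros(1)
rate = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(20000):
    hidden = sigmoid(X @ W1 + b1)            # calculation in the hidden layer
    output = sigmoid(hidden @ W2 + b2)       # produces the output
    error = y - output                       # error against expected output
    if np.mean(error ** 2) < 0.005:          # acceptably small: stop training
        break
    d_out = error * output * (1 - output)    # propagate the error backwards ...
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 += rate * hidden.T @ d_out            # ... and update the weights
    b2 += rate * d_out.sum(axis=0)
    W1 += rate * X.T @ d_hid
    b1 += rate * d_hid.sum(axis=0)

# Once trained, the weights are fixed; use them as-is for testing.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2).ravel())
```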

Neural Network applied to Checkers The checkers board will be represented by the 32 squares on which a piece can be placed. These 32 squares will be the inputs to the network. The network calculates a score for every possible move, and the move with the highest score is chosen.
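A sketch of how that move selection might look in code is given below. The encoding values and the `network`, `legal_moves` and `apply_move` stand-ins are hypothetical; the project will define its own versions.

```python
# Minimal sketch of the scheme above (hypothetical helpers, illustrative
# encoding): the 32 playable squares become a 32-element input vector, the
# network scores the board after each legal move, and the highest-scoring
# move is chosen.
import numpy as np

EMPTY, OWN_MAN, OWN_KING, OPP_MAN, OPP_KING = 0.0, 1.0, 1.5, -1.0, -1.5

def encode_board(board):
    """Turn the 32 playable squares into the network's 32 inputs."""
    assert len(board) == 32
    return np.asarray(board, dtype=float)

def choose_move(board, legal_moves, network, apply_move):
    """Score the position after each legal move; return the best move."""
    best_move, best_score = None, -np.inf
    for move in legal_moves:
        next_board = apply_move(board, move)            # hypothetical helper
        score = float(network(encode_board(next_board)))
        if score > best_score:
            best_move, best_score = move, score
    return best_move

# Tiny usage example with a dummy "network" that just sums the inputs.
start = [OPP_MAN] * 12 + [EMPTY] * 8 + [OWN_MAN] * 12
print(choose_move(start, legal_moves=["9-13", "9-14"],
                  network=lambda inputs: inputs.sum(),
                  apply_move=lambda board, move: board))
```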

Requirements Learn from only the given rules, with no external help. The neural network must not be overtrained. Play checkers to at least average human competency.

Project Plan Term 1 – Research: research related works and determine requirements. Term 2 – Designing and Prototyping: design the program and the neural net. Term 3 – Implementation of Design: create the program and train the neural net. Term 4 – Testing and Refining: test competency and refine the program if necessary.

References A. L. Samuel, “Some Studies in Machine Learning Using the Game of Checkers”, IBM Journal of Research and Development, Vol. 3, No. 3, July 1959. K. Chellapilla and D. B. Fogel, “Evolving Neural Networks to Play Checkers Without Relying on Expert Knowledge”, IEEE Transactions on Neural Networks, Vol. 10, No. 6, Nov. 1999. N. Franken and A. P. Engelbrecht, “Evolving intelligent game-playing agents”, Proceedings of SAICSIT, pp. 102–110, 2003. N. Franken and A. P. Engelbrecht, “Comparing PSO structures to learn the game of checkers from zero knowledge”, The 2003 Congress on Evolutionary Computation, 2003. A. Singh and K. Deep, “Use of Evolutionary Algorithms to Play the Game of Checkers: Historical Developments, Challenges and Future Prospects”, Proceedings of the Third International Conference on Soft Computing for Problem Solving, Advances in Intelligent Systems and Computing 259, 2014.

Questions?