Game Intelligence: The Future
Simon M. Lucas
Game Intelligence Group, School of CS & EE, University of Essex

Meet Adrianne from nVidia

Beautiful, but not very bright … yet.

Game Intelligence Group
– Main activity: general-purpose intelligence for game agents
– 2 academic staff, 1 post-doc, 10 PhD students

Approaches
– Evolution
– Reinforcement learning
– Monte Carlo Tree Search

Conventional Game Tree Search
– Minimax with alpha-beta pruning, plus transposition tables (a minimal sketch follows below)
– Works well when a good heuristic value function is known and the branching factor is modest
– E.g. chess: Deep Blue, Rybka
– The tree grows exponentially with search depth
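
To make the idea concrete, here is a minimal Python sketch of depth-limited minimax with alpha-beta pruning. The GameState interface (terminal(), value(), moves(), apply()) is a hypothetical placeholder, not code from the talk, and transposition tables are omitted for brevity.

```python
# Minimal sketch of depth-limited minimax with alpha-beta pruning.
# The GameState interface (terminal, value, moves, apply) is hypothetical;
# transposition tables are omitted for brevity.
import math

def alphabeta(state, depth, alpha=-math.inf, beta=math.inf, maximizing=True):
    """Return the minimax value of `state`, pruning branches that cannot
    affect the decision at the root."""
    if depth == 0 or state.terminal():
        return state.value()              # heuristic value function
    if maximizing:
        best = -math.inf
        for move in state.moves():
            best = max(best, alphabeta(state.apply(move), depth - 1,
                                       alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:             # beta cut-off
                break
        return best
    else:
        best = math.inf
        for move in state.moves():
            best = min(best, alphabeta(state.apply(move), depth - 1,
                                       alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:             # alpha cut-off
                break
        return best
```

The pruning only helps in practice when move ordering is good and the heuristic value function is informative, which is exactly the condition listed above.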

Go
– Much tougher for computers
– High branching factor
– No good heuristic value function
– “Although progress has been steady, it will take many decades of research and development before world-championship-calibre Go programs exist.” (Jonathan Schaeffer, 2001)

MCTS Operation (figure from CadiaPlayer, Björnsson and Finnsson, IEEE T-CIAIG)
– Each iteration starts at the root
– Follows the tree policy to reach a leaf node
– Then performs a random roll-out from there
– A new node ‘N’ is added to the tree
– The roll-out value ‘T’ is back-propagated up the tree (see the sketch below)
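
The four steps above can be written as one short loop. The sketch below is a simplified, single-perspective illustration of the general MCTS scheme, not the CadiaPlayer code: the Node class, the game-state interface (terminal(), moves(), apply()) and the random_playout() helper are all hypothetical placeholders.

```python
# Minimal single-perspective MCTS iteration: selection, expansion,
# roll-out, back-propagation. Node, the game-state interface and
# random_playout() are hypothetical placeholders.
import math, random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.total = [], 0, 0.0

def uct_score(child, parent_visits, c=math.sqrt(2)):
    if child.visits == 0:
        return math.inf                       # try unvisited children first
    return (child.total / child.visits
            + c * math.sqrt(math.log(parent_visits) / child.visits))

def mcts_iteration(root):
    node = root
    # 1. Selection: follow the tree policy (UCT) down to a leaf of the tree
    while node.children:
        node = max(node.children, key=lambda ch: uct_score(ch, node.visits))
    # 2. Expansion: add a new node 'N' for one untried move
    if not node.state.terminal():
        move = random.choice(node.state.moves())
        child = Node(node.state.apply(move), parent=node)
        node.children.append(child)
        node = child
    # 3. Roll-out: play out the rest of the game with a random policy
    reward = random_playout(node.state)       # hypothetical helper
    # 4. Back-propagation: push the roll-out value 'T' up to the root
    while node is not None:
        node.visits += 1
        node.total += reward
        node = node.parent
```

A real two-player implementation would negate or re-interpret the reward at alternating levels of the tree; that detail is left out here to keep the loop readable.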

Upper Confidence Bounds on Trees (UCT) Node Selection Policy
– From Kocsis and Szepesvári (2006)
– Aim: an optimal balance between exploration and exploitation
– Converges to the optimal policy given an infinite number of roll-outs
– Often not used in practice!
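
For reference, the UCT rule of Kocsis and Szepesvári selects, at each tree node, the child j that maximizes an upper confidence bound on its value:

```latex
% UCT child-selection rule (Kocsis & Szepesvári, 2006)
% \bar{X}_j : mean reward of child j
% n_j       : number of visits to child j
% n         : number of visits to the parent node
% C_p > 0   : exploration constant (C_p = 1/\sqrt{2} is a common choice
%             for rewards in [0,1])
UCT(j) \;=\; \bar{X}_j \;+\; 2\,C_p\,\sqrt{\frac{2\ln n}{n_j}}
```

The first term favours children that have returned high rewards so far (exploitation); the second grows for rarely visited children (exploration), which is what gives the asymptotic convergence guarantee mentioned on the slide.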

Sample MCTS Tree (figure from CadiaPlayer, Björnsson and Finnsson, IEEE T-CIAIG)

Learning the Tree Policy and the Roll-Out Policy: results for Othello (IEEE CIG 2011)
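
One common way to go beyond uniformly random roll-outs is to learn per-move weights and sample moves from a softmax (Gibbs) distribution over them. The sketch below is a hypothetical illustration of that general idea, not the implementation evaluated in the CIG 2011 Othello experiments; state.moves() and state.features(move) are assumed interfaces.

```python
# Hypothetical softmax (Gibbs) roll-out policy: instead of picking moves
# uniformly at random, sample them in proportion to exp(score / temperature),
# where the score is a learned linear function of move features.
# state.moves() and state.features(move) are assumed interfaces.
import math, random

def softmax_rollout_move(state, weights, temperature=1.0):
    moves = state.moves()
    scores = [sum(w * f for w, f in zip(weights, state.features(m)))
              for m in moves]
    s_max = max(scores)                          # for numerical stability
    exps = [math.exp((s - s_max) / temperature) for s in scores]
    total = sum(exps)
    r, acc = random.random() * total, 0.0
    for move, e in zip(moves, exps):
        acc += e
        if r <= acc:
            return move
    return moves[-1]                             # guard against rounding
```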

Research Leadership
– Grants
– Conference series
– Journal
– Conference special sessions
– Competitions
– Software toolkits (e.g. WOX, featured in MSDN Magazine)

Research Grants
– AI Games Network (EPSRC, 2007–2010; with Imperial and Bradford)
– UCT for Games and Beyond (EPSRC, 2010–2014, joint with Imperial and Bradford; £1.5M total, £489k to Essex)
– Plus IEEE T-CIAIG editorial assistant + travel

IEEE Transactions on Computational Intelligence and AI in Games
– Published quarterly since March 2009
– The journal has made an excellent start

Competitions
– Many, many competitions
– Most recent: Ms Pac-Man versus Ghost Team (IEEE CEC 2011), by Philipp Rohlfshagen and David Robles
– An interesting and fun AI challenge
– Covered by New Scientist, Slashdot, etc.

Summary
– AI + games: a fantastic field to work in
– The best test-bed for general intelligence
– Monte Carlo Tree Search + reinforcement learning: very promising!
– General game playing of a reasonable standard is already a reality for many games
– Within the next 10 years we’ll enjoy interacting with life-like AI characters

Sample Games