Computer analysis of World Chess Champions Matej Guid and Ivan Bratko CG 2006.

Introduction (I: Wilhelm Steinitz)

Who was the best chess player of all time?
- Chess players of different eras never met across the chess board.
- There is no well-founded, objective answer.

Computers...
- have so far mostly been used as a tool for statistical analysis of players' results.

Statistical analysis of results does NOT reflect:
- the true strengths of the players,
- the quality of their play.

High-quality chess programs...
- provide an opportunity for an objective comparison.

Related work (II: Emanuel Lasker)

Jeff Sonas, 2005:
- a rating scheme based on tournament results from 1840 to the present,
- ratings are calculated for each month separately; a player's activity is taken into account.

Disadvantages:
- Playing level has risen dramatically in recent decades.
- The ratings in general reflect the players' success in competition, but NOT directly their quality of play.

Our approach (III: Jose Raul Capablanca)

- computer analysis of the individual moves played
- determine the players' quality of play regardless of the game score
- differences in the players' styles were also taken into account: calm positional players vs. aggressive tactical players
- a method to assess the difficulty of positions was designed

Analysed games:
- 14 World Champions (classical version) from 1886 to 2004
- analyses of the matches for the title of World Chess Champion
- a slightly adapted version of the chess program Crafty was used

The modified Crafty (IV: Alexander Alekhine)

- Instead of a time limit, we limited search to a fixed search depth.
- Backed-up evaluations from depth 2 to 12 were obtained for each move.
- Quiescence search remained turned on to prevent horizon effects.

Advantages:
- complex positions automatically get more computation time,
- the program can be run on computers of different computational power.

Obtained data:
- best move and its evaluation,
- second-best move and its evaluation,
- move played and its evaluation,
- material state of each player.
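The per-move data listed above can be sketched as a small record type. This is a hypothetical structure (field and class names are mine, not from the paper); it only mirrors the data items the slide enumerates:

```python
from dataclasses import dataclass, field

@dataclass
class MoveAnalysis:
    """One analysed position (hypothetical field names; the paper
    lists these data items but not this structure)."""
    best_move: str      # Crafty's best move at 12 plies
    best_eval: float    # its backed-up evaluation, in pawns
    second_move: str    # second-best move
    second_eval: float  # its evaluation
    played_move: str    # move actually played in the game
    played_eval: float  # its evaluation
    material: int       # material state of the player
    # backed-up best evaluations for depths 2..12
    evals_by_depth: dict = field(default_factory=dict)

record = MoveAnalysis("Qc1", 1.30, "Qc2", 1.16, "Qc2", 1.16, 30)
print(round(record.best_eval - record.played_eval, 2))  # 0.14
```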

Average error (V: Max Euwe)

- average difference between the moves played and the best evaluated moves
- the basic criterion

Formula:

    Σ |best-move evaluation - move-played evaluation|
    -------------------------------------------------
                    number of moves

- "Best move" = Crafty's decision resulting from a 12-ply search.

Constraints:
- Evaluations started on move 12.
- Positions where both the move suggested and the move played were evaluated outside the interval [-2, 2] were discarded.
- Positional players are expected to commit fewer errors than tactical players, since they tend to face somewhat less complex positions.
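The formula and constraints above can be sketched in a few lines of Python. This is an illustrative implementation under stated assumptions (the function name and the pair-list input format are mine); it applies the paper's [-2, 2] discard rule:

```python
def average_error(moves):
    """Mean |best-move eval - played-move eval| over the analysed moves.
    `moves` holds (best_eval, played_eval) pairs in pawns, both from a
    12-ply search. Positions where BOTH evaluations lie outside the
    interval [-2, 2] are discarded, as in the paper."""
    kept = [(b, p) for b, p in moves if abs(b) <= 2 or abs(p) <= 2]
    if not kept:
        return 0.0
    return sum(abs(b - p) for b, p in kept) / len(kept)

# three positions: an exact match, a 0.30 slip, and a clearly won
# position (both evaluations above +2) that gets discarded
sample = [(0.50, 0.50), (0.80, 0.50), (5.10, 4.20)]
print(round(average_error(sample), 2))  # 0.15
```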

Average error (V: Max Euwe) [chart]

Blunders (VI: Mikhail Botvinnik)

- Big mistakes can be detected quite reliably with a computer.
- We label a move as a blunder when the numerical error exceeds 1.00.
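Under the 1.00-pawn threshold above, a blunder rate can be computed directly. A minimal sketch (the function name and the (best_eval, played_eval) input format are assumptions, not from the paper):

```python
def blunder_rate(moves, threshold=1.00):
    """Fraction of analysed moves labelled as blunders, i.e. moves whose
    error relative to Crafty's best move exceeds the threshold of 1.00."""
    blunders = sum(1 for best, played in moves if best - played > threshold)
    return blunders / len(moves)

# one blunder (an evaluation drop of 1.50) among four analysed moves
print(blunder_rate([(0.2, 0.2), (0.5, 0.4), (1.0, -0.5), (0.0, -0.1)]))  # 0.25
```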

Complexity of a position (VII: Vasily Smyslov)

Basic idea:
- A given position is difficult when different "best moves", which considerably alter the evaluation of the root position, are discovered at different search depths.

Assumption:
- This definition of complexity also applies to humans.
- The assumption is in agreement with experimental results.

Formula:

    complexity = Σ |best-move evaluation - second-best-move evaluation|,
    summed over the depths i where best_i ≠ best_(i-1)
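The formula above can be sketched as follows: walk the depth-by-depth results, and whenever the best move differs from the one at the previous depth, add the evaluation gap to the best alternative. The function name and tuple format are mine:

```python
def position_complexity(per_depth):
    """Complexity of a position, per the slide's formula: each time the
    best move changes as the search deepens, add the gap between the
    best and the second-best evaluation at the new depth.
    `per_depth` lists (best_move, best_eval, second_best_eval) tuples
    in order of increasing search depth."""
    complexity = 0.0
    for prev, cur in zip(per_depth, per_depth[1:]):
        best_move, best_eval, second_eval = cur
        if best_move != prev[0]:  # best move changed at this depth
            complexity += abs(best_eval - second_eval)
    return complexity

# a stable position: the best move never changes, so complexity is 0
print(position_complexity([("Nf3", 0.2, 0.1),
                           ("Nf3", 0.3, 0.1),
                           ("Nf3", 0.25, 0.2)]))  # 0.0
```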

Complexity of a position (VII: Vasily Smyslov)

Euwe-Alekhine, 16th World Championship, 1935:

    depth   best   eval    2nd-best eval   running complexity
      2     Qc2    -0.09
      3     Qc2    +0.24
      4     Qc2    +0.08
      5     Qc2    +0.35
      6     Qc2    +0.07
      7     Qc2    +0.57
      8     Qc2    +0.72
      9     Qc2    +0.96                   0.00 (best move unchanged)
     10     Qc1    +1.30   +1.16           0.00 + (1.30 - 1.16) = 0.14
     11     Qc1    +1.52                   0.14
     12     Qd4    +4.46   +1.60           0.14 + (4.46 - 1.60) = 3.00
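The running total in the Euwe-Alekhine example can be checked with a few lines of arithmetic (plain Python, values taken from the slide):

```python
# The best move changes twice as the search deepens, so two gaps
# between the best and the second-best evaluation are accumulated.
gap_at_10 = 1.30 - 1.16  # depth 10: best move changes Qc2 -> Qc1
gap_at_12 = 4.46 - 1.60  # depth 12: best move changes Qc1 -> Qd4
complexity = gap_at_10 + gap_at_12
print(round(complexity, 2))  # 3.0
```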

Complexity of a position (VII: Vasily Smyslov) [chart]

Average error in equally complex positions (VIII: Mikhail Tal)

- How would the players perform if they faced equally complex positions?
- What would be their expected error if they were playing in another style?

Percentage of best moves played (IX: Tigran Petrosian)

- This percentage alone does NOT reveal the true strength of a player.

The difference in best move evaluations (X: Boris Spassky) [chart]

Percentage of best moves played and the difference in best move evaluations (XI: Robert James Fischer) [chart]

Material (XII: Anatoly Karpov) [chart]

Credibility of Crafty as an analysis tool (XIII: Garry Kasparov)

- By limiting search depth, we achieved automatic adaptation of the time used to the complexity of a given position.
- Occasional errors cancel out through statistical averaging over the large number of analysed positions.

Using another program instead of Crafty...
- An open-source program was required, since the program had to be modified.
- Analyses of "Man against the machine" matches indicate that Crafty competently appreciates the strength of the strongest chess programs:

    program       average error   match            opponent
    Deep Blue     0.0757          New York, 1997   Kasparov
    Deep Fritz    0.0617          Bahrain, 2002    Kramnik
    Deep Junior   0.0865          New York, 2003   Kasparov
    Fritz X3D     0.0904          New York, 2003   Kasparov
    Hydra         0.0743          London, 2005     Adams

Conclusion (XIV: Vladimir Kramnik)

- A slightly modified version of the chess program Crafty was applied as a tool for computer analysis, aiming at an objective comparison of chess players of different eras.
- Several evaluation criteria were designed:
  - average difference between the moves played and the best evaluated moves,
  - rate of blunders (big errors),
  - expected error in equally complex positions,
  - rate of best moves played and the difference in best-move evaluations.
- A method to assess the difficulty of positions was designed, in order to bring all the players to a "common denominator".
- The results might appear quite surprising; overall, they can be nicely interpreted by a chess expert.
