
1 IGB GO — A self-learning GO program
Lin WU
Information & Computer Science, University of California, Irvine
12/02/2003

2 Outline
Background:
– What is GO?
– Existing GO programs
IGB GO
Past work:
– Three past scenarios
– Present scenario
Discussion
Conclusion
Demo

3 What is GO?
Black and White place stones alternately; Black plays first.
Basic concepts:
– Liberty
– Eye
– Territory
– Unconditional life
– Position

4 What is GO? (cont.)
Rules:
– A stone or group is captured when its liberty count drops to 0 (see the sketch below).
– Captured stones are removed from the board.
– The winner is determined by counting territory.
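
The capture rule is easy to state precisely in code. Below is a minimal sketch, mine rather than the talk's, that counts a group's liberties by flood fill; the integer board encoding is an illustrative assumption. A group whose liberty count reaches 0 is captured and removed.

```python
EMPTY, BLACK, WHITE = 0, 1, 2

def liberties(board, row, col):
    """Count the liberties of the group containing (row, col)."""
    n = len(board)
    color = board[row][col]          # assumes a stone is at (row, col)
    seen = {(row, col)}              # stones already known to be in the group
    libs = set()                     # empty points adjacent to the group
    stack = [(row, col)]
    while stack:
        r, c = stack.pop()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < n and 0 <= nc < n) or (nr, nc) in seen:
                continue
            if board[nr][nc] == EMPTY:
                libs.add((nr, nc))           # an empty neighbor is a liberty
            elif board[nr][nc] == color:
                seen.add((nr, nc))           # same-color stone joins the group
                stack.append((nr, nc))
    return len(libs)
```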

5 Existing GO programs
There are many existing GO programs:
– KCC Igo
– HARUKA
– Go++
– Goemate
– Handtalk
– The Many Faces of Go: www.smart-games.com
– GNU GO: www.gnu.org/software/gnugo/gnugo.html
– NeuroGo: www.markus-enzenberger.de/neurogo.html
– etc.
None of them can beat average amateur players.

6 Conceptual Architecture
Pattern libraries:
– a library for the opening
– a library for corners
– a library for the interior of the board
– libraries for attack, defense, connection, etc.
Engine: matches the board position against the libraries.
Evaluation: picks the best move when there are multiple matches.

7 Architecture I: The Many Faces of Go (1981–now)
– Reference: "Knowledge Representation in The Many Faces of Go," David Fotland, February 27, 1993.
– a joseki database of standard corner patterns (36,000 moves)
– a pattern database of 8x8 patterns (4,000 moves)
– a rule-based expert system with about 200 rules that suggests plausible moves for full-board evaluation

8 Architecture II: GNU GO (March 1989–now)
– Reference: GNU GO documentation.
– Pattern libraries:
  General: patterns.db, patterns2.db
  Fuseki (opening): fuseki.db
  Eyes: eyes.db
  Connections: conn.db
  Influence: influence.db, barriers.db
  etc.
– GNU GO engine: computes state at different levels, does pattern matching and move reasoning.

9 Why pattern-based systems?
Simple rules do not mean a simple game:
– simple rules mean an extremely large search space.
Board evaluation is hard, especially in the middle of the game:
– the representation space is extremely large;
– the evaluation function is sensitive to small differences in the input;
– result: to get reliable evaluations, the search depth has to be very high.
Pattern-based systems avoid search by pattern matching.

10 Complexity — Search time

Search level   5x5       7x7       9x9       19x19
1              25        49        81        361
2              625       2,401     6,561     130,321
3              15,625    117,649   531,441   4.7E7
4              390,625   5.7E6     4.3E7     2.1E8
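
These entries follow from plain exponential growth. Assuming each cell is (number of board points) raised to the search level, the two-line check below reproduces the table; note that for 19x19 at level 4 it prints about 1.7E10, so the slide's 2.1E8 looks like a transcription slip.

```python
# Brute-force search examines roughly b**d move sequences at depth d,
# where the branching factor b is the number of board points.
for label, b in [("5x5", 25), ("7x7", 49), ("9x9", 81), ("19x19", 361)]:
    print(label, [b ** d for d in range(1, 5)])
```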

11 Problems of pattern-based systems
Everything is manual work:
– as the system grows, it becomes harder to improve the pattern database;
– as the database grows, it becomes more likely to be inconsistent.
Result: improvement slows down as performance gets better.

12 Outline
Background:
– What is GO?
– Existing GO programs
IGB GO
Past work:
– Three past scenarios
– Present scenario
Discussion
Conclusion
Demo

13 IGB GO
http://contact.ics.uci.edu/go.html
A GO program that improves its performance automatically.
How?
– Use artificial neural networks to learn the evaluation function.
– Improve the quality of the neural networks by improving the quality of the training data.

14 Architecture of the neural networks
6 planes:
– 1 input plane
– 1 output plane
– 4 transmission planes
A recurrent neural network is used to learn two functions (one plausible reading is sketched below).
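
The slide leaves the plane wiring unspecified, so the following is only a hedged guess at one workable design: the four transmission planes sweep the board from the four corners, DAG-RNN style, carrying long-range context to every intersection, and the output plane combines them. The weight shapes, activations, and sweep scheme are all my assumptions, and the weights here are random toy values rather than trained ones.

```python
import numpy as np

def evaluate(board, n=9, seed=0):
    """board: n x n array with +1 (black), -1 (white), 0 (empty)."""
    rng = np.random.default_rng(seed)
    w_in = rng.normal(scale=0.1, size=4)    # input-to-hidden, one per plane
    w_lat = rng.normal(scale=0.1, size=4)   # lateral (transmission) weights
    w_out = rng.normal(scale=0.1, size=4)   # hidden-to-output weights
    hidden = np.zeros((4, n, n))
    # Each transmission plane sweeps from one corner; the predecessor
    # offsets point back toward that corner, so they are always computed
    # before the current intersection.
    sweeps = [
        (range(n), range(n), ((-1, 0), (0, -1))),                       # NW
        (range(n), range(n - 1, -1, -1), ((-1, 0), (0, 1))),            # NE
        (range(n - 1, -1, -1), range(n), ((1, 0), (0, -1))),            # SW
        (range(n - 1, -1, -1), range(n - 1, -1, -1), ((1, 0), (0, 1))), # SE
    ]
    for k, (rows, cols, preds) in enumerate(sweeps):
        for i in rows:
            for j in cols:
                lat = sum(hidden[k, i + di, j + dj] for di, dj in preds
                          if 0 <= i + di < n and 0 <= j + dj < n)
                hidden[k, i, j] = np.tanh(w_in[k] * board[i, j] + w_lat[k] * lat)
    # Output plane: combine the four transmission planes per intersection.
    return 1.0 / (1.0 + np.exp(-np.tensordot(w_out, hidden, axes=1)))
```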

15 How to improve the training data
1. Initialize a group of neural networks.
2. Let the neural networks play against each other.
3. Identify the set of good moves.
4. Train the neural networks on those good moves.
5. Go to step 2.
(The loop is sketched below.)
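
A minimal sketch of this loop; `play_game`, `good_moves`, and `train` are hypothetical helpers standing in for the scenario-specific pieces described on the next slides.

```python
import random

def improve(networks, iterations, play_game, good_moves, train):
    """play_game, good_moves, and train are hypothetical helpers."""
    for _ in range(iterations):
        random.shuffle(networks)                      # step 2: random pairing
        for a, b in zip(networks[::2], networks[1::2]):
            game = play_game(a, b)                    # the pair plays a game
            moves = good_moves(game)                  # step 3: identify good moves
            train(a, moves)                           # step 4: train on them
            train(b, moves)
    return networks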

16 Two key issues of this system
– Given the neural networks, how do we identify “the good moves”?
– Given the good moves, how do we improve the networks’ performance efficiently?

17 Outline
Background:
– What is GO?
– Existing GO programs
IGB GO
Past work:
– Three past scenarios
– Present scenario
Discussion
Conclusion
Demo

18 Play against itself
1. Randomly initialize a neural network.
2. The network plays against itself from a set of initial setups.
3. If Black (or White) wins, learn Black's (or White's) moves.
4. Update the weights; go to step 2.

19 Play against itself — Good move identification
Winner: the color with the larger territory.
Good moves: all the moves played by the winning color.

20 Play against itself — Results
– At first, the network improves.
– Then it begins to get worse.
– Finally, it learns a very deterministic and bad pattern.
Improvement: not guaranteed.

21 Group playing
1. Initialize a group of neural networks (18 in our experiments).
2. Randomly pair up the networks.
3. The members of each pair play against each other.
4. Identify the set of good moves.
5. Train the losing networks on those good moves.
6. Go to step 2.

22 Group playing — Good move identification
Each pair has two players, A and B (see the sketch below):
– Game 1: A plays Black, B plays White; result R1.
– Game 2: B plays Black, A plays White; result R2.
If R1 > R2, A is the better player and B is the loser, so B learns all the moves played by A.
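
A short sketch of this color-swapped pairing; all five helpers are hypothetical stand-ins for the engine's actual routines.

```python
def loser_learns(a, b, play_game, score, moves_of, train):
    """Hypothetical helpers: `score` returns the result of a finished
    game for the player of Black; `moves_of` extracts one player's
    moves from a game record."""
    game1 = play_game(black=a, white=b)   # game 1: A plays Black
    game2 = play_game(black=b, white=a)   # game 2: B plays Black
    if score(game1) > score(game2):       # A did better as Black than B did
        winner, loser = a, b
    else:
        winner, loser = b, a
    # The loser learns every move the winner played, in both games.
    train(loser, moves_of(winner, game1) + moves_of(winner, game2))
    return winner, loser
```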

23 Group playing — Results
– Improvement at the beginning.
– If one player dominates, the whole system degrades as in "play against itself."
– No indication of convergence so far (9 machines, 1 month, 9x9 board).
Improvement: not guaranteed.

24 ABC scenario
1. Initialize a group of neural networks.
2. Randomly select three different networks (A, B, C) from the group.
3. Let A and B play against each other.
4. Identify the set of good moves.
5. Train the networks on those good moves.
6. Go to step 2.

25 ABC scenario — Good move identification
For a given pair with players A and B, suppose B is the loser. Randomly assign a teacher C (sketched below):
– For each of B's turns, C states the move it would make; C's suggestion is either the same as B's move or different.
– Using C's suggested move, A plays against B again. The outcome for B can be:
  better: an understandable good move;
  the same;
  worse.
The set of good moves is all the understandable good moves.
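
A sketch of the filter under my reading of this slide: a teacher suggestion only counts as an "understandable good move" if replaying the game with it actually improves the loser's result. Every helper named here is hypothetical.

```python
def understandable_good_moves(a, b, c, game, helpers):
    """`game` is the match that B (the loser) just played against A.
    `helpers` bundles hypothetical functions: score_for(player, game),
    b_turns(game), position_at(game, turn), and replay_with(a, b, game,
    turn, move); `c.suggest(position)` is also assumed."""
    good = []
    base = helpers.score_for(b, game)
    for turn, b_move in helpers.b_turns(game):        # each turn where B moved
        suggestion = c.suggest(helpers.position_at(game, turn))
        if suggestion == b_move:
            continue                                  # same move: nothing to learn
        replay = helpers.replay_with(a, b, game, turn, suggestion)
        if helpers.score_for(b, replay) > base:       # B does better: keep it
            good.append((helpers.position_at(game, turn), suggestion))
        # outcomes that are the same or worse are discarded
    return good
```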

26 ABC scenario — Results
– It took 1 week to find the best player among 3 randomly initialized players.
– That best player was then beaten by another randomly initialized player.
– Improvement slowed down as performance increased.
Improvement: guaranteed.
Training speed: unacceptably slow.

27 Present scenario
Output representation:
– Based on two papers:
  "Temporal Difference Learning of Position Evaluation in the Game of Go," Nicol N. Schraudolph, Peter Dayan, and Terrence J. Sejnowski, Advances in Neural Information Processing Systems 6, 1994.
  "Learning to evaluate GO positions via temporal difference methods," Nicol N. Schraudolph, Peter Dayan, and Terrence J. Sejnowski, Soft Computing Techniques in Game Playing, 2000.
– Each intersection has an output: a real number in [0,1].
– The output changes from the likelihood of making a move to the likelihood of securing that intersection as Black territory at the end of the game.
– Trained by reinforcement learning (a TD(0) sketch follows).
Good move identification: reinforcement learning identifies good moves automatically.
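
A minimal sketch of the temporal-difference flavor of this target, assuming a `net` object with hypothetical `predict`/`update` methods and a hypothetical `final_ownership` helper: each intersection's prediction is nudged toward the next position's prediction, and toward the true territory map at the end of the game.

```python
def td0_update(positions, final_board, net, final_ownership, alpha=0.01):
    """positions: board states s_0 .. s_T of one game, in move order.
    `net.predict`, `net.update`, and `final_ownership` are hypothetical;
    alpha is an illustrative learning rate."""
    values = [net.predict(s) for s in positions]   # per-intersection, in [0,1]
    values.append(final_ownership(final_board))    # 1 where Black owns the point
    for s, v, v_next in zip(positions, values, values[1:]):
        delta = v_next - v                         # TD(0) error at each point
        net.update(s, alpha * delta)               # move v toward v_next
```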

28 Present scenario — Results
Improvement: guaranteed.
Training speed: better than the ABC scenario, but still slow.
Results:
– 5x5: after 3-4 hours of training, beats a random player 100% of the time; after 1-2 weeks (168-336 h), comparable to GNU GO; prediction accuracy is >90% once the board is >50% occupied.
– 7x7: after 1 month of training, GNU GO beats it without any difficulty.

29 Outline
Background:
– What is GO?
– Existing GO programs
IGB GO
Past work:
– Three past scenarios
– Present scenario
Discussion
Conclusion
Demo

30 Why better results?
Old architecture:
– the target is inconsistent;
– the target is harder to learn: spatial complexity 3^25 / 8 (= 105,911,076,180.375) for 5x5;
– the quality of the training data is bad.
New architecture:
– the target is consistent, and at the end of the game it is the true target;
– the target correlates mainly with local information, so the complexity should be much less than 3^25 / 8;
– the quality of the training data is determined by the neural network itself.

31 Is the present architecture enough? — Search-time complexity

Board size   2^(25% of points)   3^(25%)    2^(50%)   3^(50%)
5x5          64                  729        4,096     531,441
7x7          4,096               531,441    1.6E7     2.8E11
9x9          1E6                 3.5E9      1E12      1.2E19
19x19        1.2E27              8.7E42     1.5E54    7.6E85

32 Known problems
Intrinsically hard problems:
– no complexity bound on the number of iterations needed to get a better player;
– the representation space is extremely large.

33 Known problems — Technical
Temporary technical problems:
– lack of a position-level evaluation method;
– unable to respond to some unusual cases correctly, and unable to AUTOMATICALLY identify the unusual cases that will cause problems;
– time complexity per iteration:
  play a match: O(n^6 W);
  learn a match: O(n^4 W) for TD(0), O(n^6 W) for Q-learning;
  scaling from 5x5 to 19x19: (19/5)^6 ≈ 3,011.

34 Bounds on iterations
Possibly exponential. Observations:
– for human beings, the complexity increases with the level of the player;
– the present implementation behaves the same way.
Important to know: how fast does the complexity increase as the player's level increases?

35 The complexity could be exponential
– Suppose one player, or a small group of players, dominates the whole system.
– How much time is needed to obtain a better new player or a better group?
– If the experiment is repeated with the same amount of time, there is only a 50% chance of getting a better one, by symmetry.
– So the expected cost of each further improvement at least doubles: growth is at least exponential with base 2.

36 Position-level performance evaluation
With it:
– the iteration bounds can be studied empirically;
– the evaluation results can be used to find a good tradeoff between performance and search space.
Without it:
– every method is trial and error, and there are infinitely many potential methods to try.

37 Time complexity per iteration
Separate "play" and "learn" (sketched below):
– keep a database of training data;
– training data come from: the best players playing against each other; an online server; manually finding ways to beat the best player;
– all players learn from the generated training data.
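
One way to realize the play/learn split is a shared store of game records that expensive play workers append to and every learner reads from repeatedly. The sketch below uses `sqlite3` as a plausible choice; the schema and helper names are mine, not the talk's.

```python
import pickle
import sqlite3

db = sqlite3.connect("training_games.db")
db.execute("CREATE TABLE IF NOT EXISTS games "
           "(id INTEGER PRIMARY KEY, record BLOB)")

def record_game(moves):
    """Called by the (expensive) play workers after each match."""
    db.execute("INSERT INTO games (record) VALUES (?)",
               (pickle.dumps(moves),))
    db.commit()

def all_games():
    """Consumed by every learner, as many times as needed."""
    for (blob,) in db.execute("SELECT record FROM games"):
        yield pickle.loads(blob)
```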

38 Unusual move identification
Difficulty:
– the search space is huge, so unusual cases are hard to identify automatically.
Possible solution:
– use a database to record all such moves once they appear;
– this can be implemented the same way as the training database.

39 Why it's so hard
No method tackles the tough problems explicitly. The key problems:
– an extremely large search space;
– positions that are hard to evaluate.
The present strategy is to reduce the search space by improving the evaluation function.

40 Why it's so hard (cont.)
Reinforcement learning may not be enough:
– Nicol N. Schraudolph: 6 years without any observable progress;
– Arthur Samuel: "no progress has been made in overcoming [this defect]" (11 years, 1956-1967; Blondie24, pp. 146-147).
The neural network may not learn:
– why? The representation space is huge even for the last move (9x9 board, 90% occupied, equal numbers of black and white stones);
– solution: generalization ability; automatically identified features.

41 Lesson I
Does the ability to improve == being the best? The speed of improvement suggests not:
– 5x5: 3-4 hours of training to beat a random player; 1-2 weeks (168-336 h) to become comparable to GNU GO.
– 7x7: after 1 month of training, GNU GO is still able to win.

42 Lesson II
Is a deterministic function between input and output == a function a neural network can learn without difficulty? No:
– the function may have high intrinsic complexity;
– a neural network can only learn the correlation between input and output, as a result of hill climbing.

43 Conclusion
A self-learning GO program is possible, but several technically difficult problems remain:
– automatic feature discovery;
– automatic learning from failure;
– position-level performance evaluation.

44 Demo
http://contact.ics.uci.edu/go.html

45 Thanks for coming

