
Starcraft Opponent Modeling CSE 391: Intro to AI Luciano Cheng.


1 Starcraft Opponent Modeling CSE 391: Intro to AI Luciano Cheng

2 What is opponent modeling? Opponent modeling is when a computer opponent, rather than simply reacting to the human's moves, attempts to predict the human's next moves based both on a fixed paradigm of "a player" and on the opponent's past actions. It is a form of machine learning. It is popular in work on poker, but it is also applied constantly in strategy computer games. My project: to create a program that can model an opponent in Starcraft, a real-time strategy game; in particular, to model an opponent playing the Protoss race. A computer that could better predict its opponent's movements would perform significantly better, because Starcraft cannot be played effectively without this level of strategy.
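The idea on this slide can be sketched in a few lines of code. This is a minimal illustration of combining a fixed prior paradigm of "a player" with observed actions; the action names and prior counts are hypothetical, not taken from the project itself.

```python
from collections import Counter

class OpponentModel:
    """Frequency-based opponent model: prior beliefs plus observations."""

    def __init__(self, prior):
        # `prior` encodes the fixed paradigm of "a player": how often an
        # average opponent takes each action before anything is observed.
        self.counts = Counter(prior)

    def observe(self, action):
        # Update the model with an action the opponent actually took.
        self.counts[action] += 1

    def predict(self):
        # Predict the most likely next action given prior + observations.
        return self.counts.most_common(1)[0][0]

# Hypothetical prior: the average opponent expands more often than rushes.
model = OpponentModel({"expand": 3, "rush": 1, "tech": 2})
model.observe("rush")
model.observe("rush")
model.observe("rush")
print(model.predict())  # the observed rushes now outweigh the prior
```

Once the observed behavior outweighs the prior, the model's prediction shifts from what an average player would do to what this particular opponent does.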

3 A Brief Starcraft Background Starcraft is a real-time strategy game released in 1998. Within two months of release, it was one of the most played video games on the planet. The object of the game is to build an army of aliens, using the various tools at your disposal, that can effectively wipe out your enemy's army. The game has three species you can choose to play; each species is better or worse than the other two in different ways, but overall all three are balanced. The game lends itself to many strategies. In South Korea it is considered a sport and is often televised. It was voted one of the most popular video games in history by PC Gamer magazine, and many consider it the best real-time strategy game of all time.

4 Difference between discrete and continuous games The other games we have studied this semester have all been discrete, turn-based games: poker, tic-tac-toe, Connect Four. Starcraft is different because it is a continuous game: it never stops, and there are infinitely many ways to play. Simple alpha-beta search won't work, and neither will calculating probabilities; there is not always a mathematical best choice, and the game is vastly more complex than a card game. However, while there may be no way to calculate the mathematically best possible move, it is certainly still possible to create an AI that comes up with the best plan given the state of the game at a given time. Instead of evaluating each turn, it evaluates the state of the game at points in a time interval.
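The interval-based evaluation described above can be sketched as a loop. The function and plan names here are hypothetical, not from the project: instead of searching a turn tree, the AI snapshots the continuous game state every `interval` seconds and re-plans from that snapshot.

```python
import time

def run_ai(get_state, choose_plan, execute, interval=1.0, ticks=3):
    """Evaluate the game state at fixed time intervals, not per turn."""
    for _ in range(ticks):
        state = get_state()        # snapshot the continuous game state
        plan = choose_plan(state)  # pick the best plan for this snapshot
        execute(plan)              # act on it until the next evaluation
        time.sleep(interval)

# Demo: a trivial evaluator that defends when the enemy army is large.
plans = []
run_ai(get_state=lambda: {"enemy_army": 5},
       choose_plan=lambda s: "defend" if s["enemy_army"] > 3 else "attack",
       execute=plans.append,
       interval=0.0, ticks=2)
print(plans)  # ['defend', 'defend']
```

The key contrast with alpha-beta is that nothing here enumerates future moves; each tick simply re-evaluates the current state and commits to a plan.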

5 The Built-In Starcraft AI I played games against the computer and pitted the computer against itself for my research, and discovered a few important points: The Starcraft AI, if forced to play against a copy of itself, will not adapt to weaknesses in its opponent; the two programs brute-force each other to death until luck eventually lets one of them win. When playing a human, the computer will react in a knee-jerk way to how the human plays (if the human builds x, the computer will build y and z to counter), but it will not change its future strategy based on the information it has gathered. This limits its tactical ability significantly.

6 The Quake III Bot The closest existing AI that uses opponent modeling is the bot used in Quake death matches. According to a paper by one of its creators, John E. Laird, the bot in Quake III uses a constantly cycling difference engine to create "anticipation": it observes the opponent's behavior, then predicts the opponent's next behavior based on what it would do in that situation. It has over 700 different "move" commands and hundreds of situations it looks for when making a decision. My program is similar in that it attempts to add anticipation to the AI based on what it observes the opponent doing. It is different in that it does not base its prediction on what the computer would do; instead, it bases the prediction on what the average opponent would do, unless the actions it observes contradict that prediction.

7 The Extensive Research To gather information on tactics and their effectiveness, I dove into the very serious research of playing Starcraft for hours on end against a variety of opponents online. I played a total of 25 games of Starcraft, and afterward found that six styles of play were especially important for understanding the nature of the opponent and predicting his future actions:
1. Whether or not the tech tree was climbed
2. Whether or not a defensive strategy was used
3. Whether or not the opponent expanded
4. Whether or not the attacks were well coordinated
5. Whether or not the proper buildings were attacked
6. Whether ground or air troops were used in an attack
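The six styles of play above are all boolean observations, so they can be encoded as a simple record. This is a sketch; the field names are my own, chosen to mirror the list.

```python
from dataclasses import dataclass

@dataclass
class PlayStyle:
    """The six observed styles of play, as boolean features."""
    climbed_tech_tree: bool
    played_defensively: bool
    expanded: bool
    coordinated_attacks: bool
    attacked_proper_buildings: bool
    used_air_attacks: bool  # False means ground attacks were used

# Example observation from one game (values are illustrative).
observed = PlayStyle(
    climbed_tech_tree=True,
    played_defensively=False,
    expanded=True,
    coordinated_attacks=True,
    attacked_proper_buildings=False,
    used_air_attacks=False,
)
print(observed.climbed_tech_tree)  # True
```

A record like this is what the decision tree on the next slide would branch on.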

8 Response Decision Tree [Figure: a binary decision tree with Yes/No branches at each node, testing the observed features in order: Early Tech Tree Climbing → Defensive Strategy → Extra Expansion → Well-Coordinated Attack → Proper Building Attack → Ground Attack / Air Attack.] The tree allowed for better predictability, since it was more accurate, and for more advanced tactics: the computer can now distinguish the opponent well enough that it doesn't simply throw troops at it.
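The shape of such a response tree can be sketched as nested conditionals. The branch structure and response names here are illustrative assumptions, not the project's actual tree.

```python
def respond(style):
    """Walk a small response decision tree over observed boolean features.

    `style` is a dict of observations like those gathered on slide 7.
    Each node tests one feature and either descends or returns a response.
    """
    if style["early_tech_tree"]:
        if style["defensive"]:
            return "prepare_late_game_counter"  # teching + turtling
        return "pressure_early"                  # teching without defense
    if style["expanded"]:
        return "attack_expansion"                # punish the extra base
    return "standard_defense"

print(respond({"early_tech_tree": True, "defensive": False, "expanded": False}))
# -> pressure_early
```

Each path through the tree maps one combination of observed styles to a single tailored response, which is what lets the AI do more than throw troops at the opponent.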

9 The Bayes Rating On top of the decision tree, I devised a Bayes rating system in order to give the opponent a skill level. The skill level was based on which style of play the opponent was using, and on statistics I took of whether or not that style of play was able to win against me. Assuming the two features are independent (naive Bayes):
P( he wins | defensive & tech tree ) = P( defensive | he wins ) × P( tech tree | he wins ) × P( he wins ) / ( P( defensive ) × P( tech tree ) ) = (.22 × .35 × .40) / (.63 × .37) ≈ 13%
Not very high, which makes sense, since neither of those strategies is favored.
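The arithmetic above can be checked directly. This reproduces the slide's numbers under the naive Bayes independence assumption; the variable names are mine.

```python
# Probabilities from the slide's collected statistics.
p_def_given_win = 0.22   # P(defensive | he wins)
p_tech_given_win = 0.35  # P(tech tree | he wins)
p_win = 0.40             # P(he wins)
p_def = 0.63             # P(defensive)
p_tech = 0.37            # P(tech tree)

# Naive Bayes: P(win | defensive, tech) =
#   P(defensive | win) * P(tech | win) * P(win) / (P(defensive) * P(tech))
p_win_given_both = (p_def_given_win * p_tech_given_win * p_win) / (p_def * p_tech)
print(round(p_win_given_both * 100))  # 13 (percent)
```

The independence assumption is what lets the joint likelihood factor into the two per-feature terms; without it, P(defensive & tech tree | he wins) would have to be estimated directly.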

10 Possibilities for Improvement
1. Only modeled opponents playing the Protoss species
2. Only modeled one-on-one games
3. Still a very simple program: it considered only 6 factors, and I could think of at least 20

