Department of Electrical Engineering, Portland State University


Artistic Robots through Interactive Genetic Algorithm with ELO Rating System Andy Goetz, Camille Huffman, Kevin Riedl, Mathias Sunardi and Marek Perkowski Department of Electrical Engineering, Portland State University

Portland Cyber Theatre

Making science out of robot theater?

How to make a science from robot theatre?

We want to evaluate sound, shape, motion, color, etc.

Behavior Generation and Verification [Diagram: an Interactive Genetic Algorithm in which human evaluators rate the robot's behavior expression; a probabilistic-automaton behavior generator and verifier produces the behavior automaton.]

Main Idea of this Work A new approach to creating the fitness function for an Interactive Genetic Algorithm, in which (possibly) many humans evaluate robot motions via an Internet page. It is based on the ELO rating system known from chess. The robots use: a genetic algorithm, fuzzy logic, probabilistic state machines, a small set of functions for creating picture components, and a user interface that allows Internet users to rate individual sequences.

Previous Work on IEC Systems Human-based genetic algorithms, interactive evolution strategies, interactive genetic programming, and interactive genetic algorithms. Mostly used for music composition and graphics. Usually weighted functions were used.

Ranking Systems in Sports Rating systems for many sports award points in accordance with subjective evaluations of the 'greatness' of certain achievements. For example, winning an important golf tournament might be worth an arbitrarily chosen five times as many points as winning a lesser tournament. A statistical endeavor, by contrast, uses a model that relates the game results to underlying variables representing the ability of each player.

Elo rating system The Elo rating system is a method for calculating the relative skill levels of players in two-player games such as chess. It is named after its creator Arpad Elo, a Hungarian-born American physics professor. The Elo system was invented as an improved chess rating system, but today it is also used in many other games. It is also used as a rating system for multiplayer competition in a number of video games. It has been adapted to team sports including association football, American college football, basketball, and Major League Baseball.

Previous Works You have many candidates and you want to select the best. Two cases: either you can compare each candidate with each other individually, or you cannot.

Pairwise Comparison

Pairwise Comparison Method: Compare every two candidates (players) head-to-head. Award each candidate one point for each head-to-head victory. The candidate with the most points wins. This requires N(N-1)/2 comparisons.
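The tally above can be sketched in Python. The ballots below are hypothetical (the full preference table is not reproduced in this transcript); they were chosen to be consistent with the head-to-head counts quoted on the following slides (e.g., B over A: 23, A over B: 14):

```python
from itertools import combinations

def pairwise_winners(ballots):
    """Method of Pairwise Comparisons.

    ballots: list of (voter_count, ranking) pairs, where ranking orders
    candidates from most to least preferred.
    Returns a dict mapping each candidate to its head-to-head points.
    """
    candidates = ballots[0][1]
    points = {c: 0 for c in candidates}
    # N(N-1)/2 head-to-head comparisons
    for a, b in combinations(candidates, 2):
        a_votes = sum(n for n, r in ballots if r.index(a) < r.index(b))
        b_votes = sum(n for n, r in ballots if r.index(b) < r.index(a))
        if a_votes > b_votes:
            points[a] += 1
        elif b_votes > a_votes:
            points[b] += 1
        # a tie awards no points here (conventions vary)
    return points

# Hypothetical ballots consistent with the slides' quoted vote counts:
ballots = [(14, "ABCD"), (10, "CBDA"), (8, "DCBA"), (4, "BDCA"), (1, "CDBA")]
print(pairwise_winners(ballots))
```

With these ballots, C collects 3 points and wins, matching the example's conclusion.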

Pairwise Comparison - Example Selection of the best robot facial expression: 4 candidates {A, B, C, D} and 4 rankings of them; 37 voters; 5 trials (columns). The table shows the rankings of the candidates (rows) and the number of voters (columns) that ranked the candidates that way. Voters per column: 14, 10, 8, 4, 1; first choices: A, C, D, B, respectively.

Pairwise Comparison - Example Compare candidates A & B: 14 voters ranked A higher than B; 10+8+4+1 = 23 voters ranked B higher than A. So B wins against A.

Pairwise Comparison - Example Next, compare candidates A & C: 14 voters ranked A higher than C; 10+8+4+1 = 23 voters ranked C higher than A. So C wins against A. Continue for the remaining pairs: A vs. D, B vs. C, B vs. D, C vs. D. Exclude permutations (e.g., C vs. A = A vs. C) and comparisons of a candidate with itself (e.g., A vs. A).

Pairwise Comparison - Example Record points: win = 1, loss = 0. Cell values are the numbers of voters that ranked the row candidate over the column candidate (B beats A 23-14, C beats B 19-18, B beats D 28-9, C beats D 25-12). Results:

Candidate | Wins over | Lost against | Points
A         | -         | B, C, D      | 0
B         | A, D      | C            | 2
C         | A, B, D   | -            | 3
D         | A         | B, C         | 1

C wins!

Pairwise Comparison - Example Another way to find the winner: use only half of the table (one triangle), mark the winner of each pair, and count how many times each candidate appears. Again, C wins!

Other Possible Scenario A three-way tie (inconsistency): A wins over B, B wins over C, and C wins over A, so no candidate beats all the others.

ELO Rating System

Overview of ELO A player's skill is assumed to follow a normal distribution: the true skill is around the mean. The Elo system gives two things: a player's expected chance of winning, and a method to update a player's Elo rating.

Basic Ideas of ELO One cannot look at a sequence of moves and say, "That performance is 2039." Performance can only be inferred from wins, draws and losses. Therefore, if a player wins a game, he is assumed to have performed at a higher level than his opponent for that game. Conversely if he loses, he is assumed to have performed at a lower level. If the game is a draw, the two players are assumed to have performed at nearly the same level.

Scores and Ranking of Players A player's ranking is updated based on: the expected value of winning (E), which depends on the rating difference with the opponent; and the outcome of the match (S, for 'score'): 1 = win, 0 = lose, 0.5 = draw.

Expected Scores in Elo Rating Expected score (E): EA = 1 / (1 + 10^((RB - RA)/400)) and EB = 1 / (1 + 10^((RA - RB)/400)) = 1 - EA. Where: EA, EB = expected scores for players A and B, respectively; RA, RB = ratings of players A and B, respectively. Remember: 1 = win, 0 = lose, 0.5 = draw. http://en.chessbase.com/home/TabId/211/PostId/4007114
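A minimal sketch of the expected-score formula, reproducing the 1500-vs-1320 robot example from a later slide:

```python
def expected_score(r_a, r_b):
    """Elo expected score for a player rated r_a against one rated r_b."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

# The slides' example: Robot A rated 1500, Robot B rated 1320
e_a = expected_score(1500, 1320)
print(round(e_a, 3))      # 0.738
print(round(1 - e_a, 3))  # 0.262
```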

Characteristics of ELO A player with a higher Elo rating than his opponent has a higher expected value (i.e., chance of winning), and vice versa. When both players have similar Elo ratings, the chance of a draw is higher. After the match, both players' ratings are updated by the same amount, but the winner gains rating points and the loser loses them. If a higher-rated ('stronger') player wins against a weaker player, the rating changes are smaller than when the weaker player wins against the higher-rated player. The rate of change is controlled by a subjective value K.

Basic Assumptions of ELO Elo's central assumption was that the chess performance of each player in each game is a normally distributed random variable. Although a player might perform significantly better or worse from one game to the next, Elo assumed that the mean value of any given player's performances changes only slowly over time. A further assumption is necessary, because chess performance in the above sense is still not directly measurable. Our question: "Is ELO good for human evaluation of robot art (motion, behavior)?"

How ELO Works A player's expected score is his probability of winning plus half his probability of drawing. Thus an expected score of 0.75 could represent a 75% chance of winning, 25% chance of losing, and 0% chance of drawing. On the other extreme it could represent a 50% chance of winning, 0% chance of losing, and 50% chance of drawing. The probability of drawing, as opposed to having a decisive result, is not specified in the Elo system. Instead a draw is considered half a win and half a loss.

How ELO Works The relative difference in rating between two players determines an estimate for the expected score between them. Both the average and the spread of ratings can be arbitrarily chosen. Elo suggested scaling ratings so that a difference of 200 rating points in chess would mean that the stronger player has an expected score (which basically is an expected average score) of approximately 0.75. The USCF initially aimed for an average club player to have a rating of 1500.

Elo Rating - Example Suppose a Robot Boxing league: The league has tens, hundreds, or more robots Each robot has a ranking (higher number = higher rank) A robot’s ranking is updated after each match But it can also be done after multiple matches A match is a one-vs-one battle

Elo Rating Example: Scores for Robots Expected score (E). Suppose: Robot A rank: 1500; Robot B rank: 1320. Then: EA = 1 / (1 + 10^((1320 - 1500)/400)) = 0.738 (expected to win); EB = 1 - 0.738 = 0.262.

Elo Rating Example: Adjusting Ratings After a Match Next, the match is held. After the match, the ratings of both robots are adjusted by: R'A = RA + K(S - EA). Where: R'A = Robot A's new rating; RA = Robot A's old/current rating; K = some constant (for practical reasons we choose K = 24 in this example); S = score/match result (1 = win, 0 = lose, 0.5 = draw); EA = expected score. Similarly for Robot B.
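The update rule can be sketched as follows, using K = 24 as chosen on this slide and the 1500/1320 ratings from the previous one:

```python
def elo_update(rating, expected, score, k=24):
    """New rating after one match: R' = R + K(S - E)."""
    return rating + k * (score - expected)

# Before the match: A = 1500 (E_A = 0.738), B = 1320 (E_B = 0.262)
e_a = 0.738
# If Robot A wins:
print(round(elo_update(1500, e_a, 1), 3))      # 1506.288
print(round(elo_update(1320, 1 - e_a, 0), 3))  # 1313.712
# If it's a draw, A (the favorite) still loses points:
print(round(elo_update(1500, e_a, 0.5), 3))    # 1494.288
```

Note how the favorite gains little for a win but pays for a draw or loss, which is exactly the asymmetry described on the "Characteristics of ELO" slide.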

Elo Rating Example: Adjusting Scores After One Match Suppose the outcome of the match is one of: Robot A wins; Robot B wins; it's a draw. Remember that before the match: Robot A rank: 1500; Robot B rank: 1320.

Elo Rating Example: Adjusting Rankings After Five Matches Suppose the rank update is done after 5 matches, and Robot A's current rank is 1500:

Opponent/match | Opponent rank (RB) | EA    | Score/match outcome (1=win, 0=lose, 0.5=draw)
1              | 1320               | 0.738 |
2              | 1700               | 0.240 |
3              | 1480               | 0.529 |
4              | 1560               | 0.415 | 0.5
5              | 1800               | 0.151 |
Total          |                    | 2.073 | 2.5
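Applied to the five-match table, the same rule gives Robot A's new rating in one step. Only the table's totals are used here, since not all per-match outcomes are shown in this transcript:

```python
def elo_batch_update(rating, expected_scores, total_score, k=24):
    """Rating update applied once after several matches:
    R' = R + K * (sum(S) - sum(E))."""
    return rating + k * (total_score - sum(expected_scores))

# Robot A's five matches: expected scores from the table, outcomes total 2.5
expected = [0.738, 0.240, 0.529, 0.415, 0.151]  # sums to 2.073
new_rank = elo_batch_update(1500, expected, 2.5)
print(round(new_rank, 2))  # 1510.25
```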

About K In chess, K is the rate of adjustment of a player's rating. Some Elo implementations adjust K based on criteria such as: FIDE (World Chess Federation): K = 30 for a player new to the rating list, until s/he has completed events with a total of at least 30 games; K = 15 as long as a player's rating remains under 2400; K = 10 once a player's published rating has reached 2400 and s/he has also completed events with a total of at least 30 games (thereafter it remains permanently at 10). USCF (United States Chess Federation): players below 2100 --> K-factor of 32 used; players between 2100 and 2400 --> K-factor of 24 used; players above 2400 --> K-factor of 16 used. How about robot art?

Picture Drawing Robots

Audience votes through a Webpage

ELO for art (motion) scoring Score of 194

ELO for art (motion) scoring Score of 0

Physical Robot DERPY Derpy with a sharpie marker

The fuzzy/probabilistic state machine operates differently in dark and light areas. [Figure: examples of fuzzy variables; an image with dark and light areas.]

Fuzzy and Probabilistic Machines Simple probabilistic machine of Derpy

“Robot art” on butcher paper located on a floor.

Another piece of art from Derpy

Now use Part 2 of slides

Auxiliary Slides

Microsoft TrueSkill

Microsoft TrueSkill Addresses: the subjective K value (instead, updates are based on players' skills); ranking of multiple players (>2); finding "interesting" matches (balanced, where each player has a comparable chance of winning); building "leaderboards" (rankings of all players).

Microsoft TrueSkill A player's skill is modeled as a normal distribution, with the mean as the player's "true skill" and the standard deviation as the uncertainty (about the player's skill). Players start with some "mean skill" and uncertainty values. As a player plays more games/matches, the mean skill gets adjusted and the uncertainty (i.e., std. dev.) decreases.

Microsoft TrueSkill Updating the mean and standard deviation. β², the variance of the performance around the skill of each player, is unknown and must be configured.
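A sketch of the two-player win/lose update from the TrueSkill paper (Herbrich et al., 2007). The prior values μ = 25, σ = 25/3, and β = 25/6 are Microsoft's published defaults, not taken from these slides; eps is the draw margin (0 here):

```python
import math

def _pdf(x):  # standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def _cdf(x):  # standard normal cumulative distribution
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def v(t, eps):
    """Additive mean-update factor for a win."""
    return _pdf(t - eps) / _cdf(t - eps)

def w(t, eps):
    """Multiplicative variance-update factor for a win."""
    return v(t, eps) * (v(t, eps) + t - eps)

def trueskill_win_update(mu_w, sigma_w, mu_l, sigma_l, beta, eps=0.0):
    """Two-player TrueSkill update when the first player wins.

    Returns updated (mu, sigma) for winner and loser.
    """
    c = math.sqrt(2 * beta**2 + sigma_w**2 + sigma_l**2)
    t = (mu_w - mu_l) / c
    mu_w2 = mu_w + (sigma_w**2 / c) * v(t, eps)
    mu_l2 = mu_l - (sigma_l**2 / c) * v(t, eps)
    sig_w2 = math.sqrt(sigma_w**2 * (1 - (sigma_w**2 / c**2) * w(t, eps)))
    sig_l2 = math.sqrt(sigma_l**2 * (1 - (sigma_l**2 / c**2) * w(t, eps)))
    return (mu_w2, sig_w2), (mu_l2, sig_l2)

# First game between two default-prior players: winner's mu rises to ~29.2,
# loser's falls symmetrically, and both uncertainties shrink below 25/3.
winner, loser = trueskill_win_update(25.0, 25 / 3, 25.0, 25 / 3, 25 / 6)
print(winner, loser)
```

This matches the intuition on the following slides: the score difference plays no role, only the win/lose outcome, and σ can only shrink from the update itself (the dynamics term that re-inflates σ between games is omitted in this sketch).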

Microsoft TrueSkill v and w

Microsoft TrueSkill

Microsoft TrueSkill http://research.microsoft.com/en-us/projects/trueskill/Details.aspx#How_to_Update_Skills Unbalanced matches (one side can't win or can't lose) are not interesting; balanced matches (even chance of winning) are interesting. TrueSkill accommodates two or more players. It provides: a module to track the skills of all players based on game outcomes between players (update); a module to arrange interesting matches for its members (matchmaking); a module to recognize and potentially publish the skills of members (leaderboards). TrueSkill is a skill-based ranking system, so interesting matches can reliably be arranged within a league. It uses Bayesian inference for ranking.

Microsoft TrueSkill The intuition is that the greater the difference between two players' μ values (assuming their σ values are similar), the greater the chance of the player with the higher μ value performing better in a game. This principle holds true in the TrueSkill ranking system. But this does not mean that players with larger μ's are always expected to win, only that their chance of winning is higher than that of players with smaller μ's. The TrueSkill ranking system assumes that the performance in a single match varies around the skill of the player, and that the game outcome (the relative ranking of all players participating in a game) is determined by their performances. Thus, the skill of a player in the TrueSkill ranking system can be thought of as the average performance of the player over a large number of games. The variation of the performance around the skill is, in principle, a configurable parameter of the TrueSkill ranking system.

Microsoft TrueSkill μ and σ are updated based on the outcome of the game (win/lose); the score difference has no impact. 1. TrueSkill assumes the skill of each player may change slightly between the current and the previous game, so σ is increased (by a configurable parameter): "It is this parameter that both allows the TrueSkill system to track skill improvements of gamers over time and ensures that the skill uncertainty σ never decreases to zero ('maintaining momentum')." 2. It determines the probability of the game outcome for given skills of the participating players, weighted by the probability of the corresponding skill beliefs: it averages over all possible performances (weighted by their probability, via Bayes' law) and derives the game outcome from the performances -- the player with the highest performance is the winner, the second highest is the first runner-up, and so on. 3. If the players' performances are very close, TrueSkill considers the outcome to be a draw. The larger the draw margin defined in a league, the more likely a draw is to occur. The size of the margin is configurable and adjusted per game mode.

Measuring consistency

Measuring Consistency in Pairwise Comparison This can be done when the comparison uses a "degree of importance", e.g.: 1 = equally important, 2 = somewhat more important, 3 = more important, 4 = most important. Example: determining the important criteria in buying a car, with criteria Price, MPG, Comfort, and Style. Values in the cells are the importance of the row item with respect to the column item.

Measuring Consistency in Pairwise Comparison Complete the values in the matrix: a criterion compared to itself is "equally important" (1), so the diagonal is all 1's. (1 = equally important, 2 = somewhat more important, 3 = more important, 4 = most important.)

Measuring Consistency in Pairwise Comparison Complete the values in the matrix: the importance of the less important criterion is the reciprocal of the importance of the more important criterion. E.g., Price vs. Style: Style is two times more important than Price (2), so Price is half as important as Style (1/2). (1 = equally important, 2 = somewhat more important, 3 = more important, 4 = most important.)

Measuring Consistency in Pairwise Comparison Example: determining the important criteria in buying a car. From the completed matrix, calculate the weight of each criterion.
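One common way to compute the weights (and a consistency measure) is the geometric-mean method with Saaty's consistency ratio. The 4x4 matrix below is hypothetical -- only a few of its entries are visible in this transcript -- but it is reciprocal and uses Price-vs-Style = 1/2 as in the slide's example:

```python
import math

def ahp_weights(M):
    """Criterion weights: normalized geometric means of the matrix rows."""
    n = len(M)
    geo = [math.prod(row) ** (1 / n) for row in M]
    total = sum(geo)
    return [g / total for g in geo]

def consistency_ratio(M, weights):
    """Saaty's CR: lambda_max via (M w)_i / w_i, CI = (lambda - n)/(n - 1)."""
    n = len(M)
    lam = sum(sum(M[i][j] * weights[j] for j in range(n)) / weights[i]
              for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    ri = 0.90  # Saaty's random index for n = 4
    return ci / ri

# Hypothetical reciprocal matrix: rows/columns = Price, MPG, Comfort, Style
M = [[1,     3, 2,   1 / 2],
     [1 / 3, 1, 1 / 2, 1 / 4],
     [1 / 2, 2, 1,   1 / 3],
     [2,     4, 3,   1]]
w = ahp_weights(M)
print([round(x, 3) for x in w])
print(consistency_ratio(M, w) < 0.1)  # CR below 0.1 is considered consistent
```

For this matrix the weights come out with Style highest and MPG lowest, and the matrix passes the usual CR < 0.1 consistency check.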

Evaluation Criteria for Ranking Methods The Method of Pairwise Comparisons satisfies the Majority Criterion. (A majority candidate will win every pairwise comparison.) The Method of Pairwise Comparisons satisfies the Condorcet Criterion. (A Condorcet candidate will win every pairwise comparison -- that's what a Condorcet candidate is!) The Method of Pairwise Comparisons satisfies the Public-Enemy Criterion. (If there is a public enemy, s/he will lose every pairwise comparison.) The Method of Pairwise Comparisons satisfies the Monotonicity Criterion. (Ranking Candidate X higher can only help X in pairwise comparisons.)

ELO

Agenda on ELO Overview How it works Details Mathematical details

How it Works Elo formulas: expected value, score, and how to update the rank.