1
Artistic Robots through Interactive Genetic Algorithm with ELO Rating System
Andy Goetz, Camille Huffman, Kevin Riedl, Mathias Sunardi and Marek Perkowski
Department of Electrical Engineering, Portland State University
2
Portland Cyber Theatre
3
Making science out of robot theater?
4
How to make a science from robot theatre?
7
We want to evaluate sound, shape, motion, color, etc.
8
Behavior Generation and Verification
[Block diagram] Components: Interactive Genetic Algorithm, human evaluators, behavior expression (robot), probabilistic automaton, behavior generator and verifier, behavior automaton.
9
Main Idea of this work
A new approach to creating the fitness function for an Interactive Genetic Algorithm, in which (possibly) many humans evaluate robot motions via an Internet page. It is based on the Elo rating system known from chess. The robots use:
- a genetic algorithm,
- fuzzy logic,
- probabilistic state machines,
- a small set of functions for creating picture components, and
- a user interface that allows Internet users to rate individual sequences.
10
Previous work on IEC systems
Human-based genetic algorithm, interactive evolution strategy, interactive genetic programming, interactive genetic algorithm. Mostly used for music composition and graphics; usually weighted fitness functions were used.
11
Ranking Systems in Sports
Rating systems for many sports award points in accordance with subjective evaluations of the 'greatness' of certain achievements. For example, winning an important golf tournament might be worth an arbitrarily chosen five times as many points as winning a lesser tournament. A statistical endeavor, by contrast, uses a model that relates the game results to underlying variables representing the ability of each player.
12
Elo rating system The Elo rating system is a method for calculating the relative skill levels of players in two-player games such as chess. It is named after its creator Arpad Elo, a Hungarian-born American physics professor. The Elo system was invented as an improved chess rating system, but today it is also used in many other games. It is also used as a rating system for multiplayer competition in a number of video games. It has been adapted to team sports including association football, American college football, basketball, and Major League Baseball.
13
Previous works. You have many candidates and you want to select the best. Two cases:
- You can compare each candidate individually with every other one.
- You cannot compare each candidate individually with every other one.
14
Pairwise Comparison
15
Pairwise Comparison Method:
Compare each pair of candidates (players) head-to-head. Award each candidate one point for each head-to-head victory. The candidate with the most points wins. This requires N(N-1)/2 comparisons.
16
Pairwise Comparison - Example
Selection of the best robot facial expression: 4 candidates {A, B, C, D}, 37 voters, and 5 distinct rankings (columns). The table shows the rankings of the candidates (rows) and the number of voters (columns) who ranked the candidates that way:

# of Voters:  14   10    8    4    1
1st            A    C    D    B    C
2nd            B    B    C    D    D
3rd            C    D    B    C    B
4th            D    A    A    A    A
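A minimal Python sketch of the pairwise comparison method applied to the preference schedule above (variable names are my own); it reproduces the head-to-head counts worked out on the next slides:

```python
from itertools import combinations

# (number of voters, ranking from 1st to 4th place), per the table above
schedule = [
    (14, "ABCD"),
    (10, "CBDA"),
    (8,  "DCBA"),
    (4,  "BDCA"),
    (1,  "CDBA"),
]
candidates = "ABCD"

points = {c: 0 for c in candidates}
for x, y in combinations(candidates, 2):   # each pair once: N(N-1)/2 comparisons
    x_over_y = sum(n for n, r in schedule if r.index(x) < r.index(y))
    y_over_x = sum(n for n, r in schedule if r.index(y) < r.index(x))
    print(f"{x} vs {y}: {x_over_y}-{y_over_x}")
    if x_over_y > y_over_x:
        points[x] += 1
    elif y_over_x > x_over_y:
        points[y] += 1

print(points)   # {'A': 0, 'B': 2, 'C': 3, 'D': 1} -> C wins
```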
17
Pairwise Comparison - Example
Compare candidates A & B: 14 voters ranked A higher than B; 10 + 8 + 4 + 1 = 23 voters ranked B higher than A. So B wins against A. (Preference table as on the previous slide.)
18
Pairwise Comparison - Example
Next, compare candidates A & C: 14 voters ranked A higher than C; 10 + 8 + 4 + 1 = 23 voters ranked C higher than A. So C wins against A. Continue for the remaining pairs: A vs. D, B vs. C, B vs. D, C vs. D. Exclude permutations (e.g., C vs. A is the same comparison as A vs. C) and comparisons of a candidate with itself (e.g., A vs. A). (Preference table as before.)
19
Pairwise Comparison - Example
Record points: win = 1, loss = 0. Cell values are the number of voters (total = 37) who ranked the row candidate over the column candidate:

        A    B    C    D   Wins over   Lost against   Points
A       -   14   14   14
B      23    -   18   28
C      23   19    -   25
D      23    9   12    -
20
Pairwise Comparison - Example
Filling in the summary for A: A wins over nobody and loses against B, C, and D, so A gets 0 points. (Head-to-head matrix as on the previous slide.)
21
Pairwise Comparison - Example
Continuing for the other candidates: B wins over A and D (2 points, losing against C); C wins over A, B, and D (3 points); D wins over A only (1 point, losing against B and C).
22
Pairwise Comparison - Example
Final points: A = 0, B = 2, C = 3, D = 1. C wins!
23
Pairwise Comparison - Example
Another way to calculate the winner: use only half of the table (one triangle), mark the winner of each pairwise contest in its cell, and count how many times each candidate appears. Here the cells read A vs B: B, A vs C: C, A vs D: D, B vs C: C, B vs D: B, C vs D: C, so C appears 3 times. C wins!
24
Other possible scenario
A three-way tie caused by an inconsistency (a cycle): A wins over B, B wins over C, and C wins over A, so each of the three candidates ends up with exactly one point.
25
ELO Rating System
26
Overview of ELO
A player's skill is assumed to follow a normal distribution: the true skill is around the mean. The Elo system gives two things:
- a player's expected chance of winning, and
- a method to update a player's Elo rating.
27
Basic Ideas of ELO One cannot look at a sequence of moves and say, "That performance is 2039." Performance can only be inferred from wins, draws and losses. Therefore, if a player wins a game, he is assumed to have performed at a higher level than his opponent for that game. Conversely if he loses, he is assumed to have performed at a lower level. If the game is a draw, the two players are assumed to have performed at nearly the same level.
28
Scores and ranking of players
A player's ranking is updated based on:
- the expected score (E), which depends on the rating difference with the opponent, and
- the outcome of the match (S for 'score'): 1 = win, 0 = lose, 0.5 = draw.
29
Expected scores in Elo Rating
Expected score (E):

E_A = 1 / (1 + 10^((R_B - R_A)/400)),   E_B = 1 / (1 + 10^((R_A - R_B)/400))

where E_A, E_B = expected scores for players A and B, respectively, and R_A, R_B = ratings of players A and B, respectively. Note that E_A + E_B = 1. Remember: the score is 1 = win, 0 = lose, 0.5 = draw.
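A one-line Python sketch of this formula (the function name is my own):

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

# The two expected scores always sum to 1, e.g.:
# expected_score(1500, 1320) -> 0.738...; expected_score(1320, 1500) -> 0.261...
```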
30
Characteristics of ELO
- A player with a higher Elo rating than his opponent has a higher expected score (i.e., chance of winning), and vice versa.
- When both players have similar Elo ratings, the chance of a draw is higher.
- After the match, both players' ratings are updated by the same amount, but the winner gains rating while the loser loses rating.
- If a higher-rated ('stronger') player wins against a weaker player, the rating changes are smaller than when the weaker player wins against the higher-rated player.
- The size of the change is scaled by a subjective value K.
31
Basic Assumptions of ELO
Elo's central assumption was that the chess performance of each player in each game is a normally distributed random variable. Although a player might perform significantly better or worse from one game to the next, ELO assumed that the mean value of the performances of any given player changes only slowly over time. A further assumption is necessary, because chess performance in the above sense is still not measurable. Our question: “Is ELO good for human evaluation of robot art (motion, behavior)?”
32
How ELO Works A player's expected score is his probability of winning plus half his probability of drawing. Thus an expected score of 0.75 could represent a 75% chance of winning, 25% chance of losing, and 0% chance of drawing. On the other extreme it could represent a 50% chance of winning, 0% chance of losing, and 50% chance of drawing. The probability of drawing, as opposed to having a decisive result, is not specified in the Elo system. Instead a draw is considered half a win and half a loss.
33
How ELO Works The relative difference in rating between two players determines an estimate for the expected score between them. Both the average and the spread of ratings can be arbitrarily chosen. Elo suggested scaling ratings so that a difference of 200 rating points in chess would mean that the stronger player has an expected score (which basically is an expected average score) of approximately 0.75, The USCF initially aimed for an average club player to have a rating of 1500.
34
Elo Rating - Example Suppose a Robot Boxing league:
- The league has tens, hundreds, or more robots.
- Each robot has a rating (higher number = higher rank).
- A robot's rating is updated after each match (but this can also be done after multiple matches).
- A match is a one-vs-one battle.
35
Elo Rating Example: scores for robots
Expected score (E). Suppose Robot A's rating is 1500 and Robot B's rating is 1320. Then:

E_A = 1 / (1 + 10^((1320 - 1500)/400)) = 0.738   (expected to win)
E_B = 1 - E_A = 0.262
36
Elo Rating Example: Adjusting ratings after match
Next, the match is held. After the match, the ratings of both robots are adjusted by:

R'_A = R_A + K (S - E_A)

where R'_A = Robot A's new rating, R_A = Robot A's old/current rating, K = some constant (for practical reasons we choose K = 24 in this example), S = score/match result (1 = win, 0 = lose, 0.5 = draw), and E_A = expected score. Similarly for Robot B.
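A matching Python sketch of this update rule (again, names are my own):

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update_rating(rating: float, expected: float, score: float, k: float = 24.0) -> float:
    """One Elo update: R' = R + K * (S - E)."""
    return rating + k * (score - expected)

# Example with the deck's numbers: Robot A (1500) beats Robot B (1320), K = 24.
e_a = expected_score(1500, 1320)            # ~0.738
print(update_rating(1500, e_a, 1.0))        # ~1506.3
print(update_rating(1320, 1.0 - e_a, 0.0))  # ~1313.7
```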
37
Elo Rating Example: Adjusting scores after one match
Suppose the outcome of the match (before the match: Robot A = 1500, Robot B = 1320, so E_A = 0.738, E_B = 0.262, and K = 24):
- Robot A wins! R'_A = 1500 + 24 (1 - 0.738) = 1506, R'_B = 1320 + 24 (0 - 0.262) = 1314 (rounded)
- Robot B wins! R'_A = 1500 + 24 (0 - 0.738) = 1482, R'_B = 1320 + 24 (1 - 0.262) = 1338 (rounded)
- It's a draw! R'_A = 1500 + 24 (0.5 - 0.738) = 1494, R'_B = 1320 + 24 (0.5 - 0.262) = 1326 (rounded)
38
Elo Rating Example: adjusting rankings after five matches
Suppose the rating update is done only after 5 matches. Robot A's current rating: 1500.

Match  Opponent rating (R_B)  E_A    Score (1=win, 0=lose, 0.5=draw)
1      1320                   0.738
2      1700                   0.240
3      1480                   0.529
4      1560                   0.415  0.5
5      1800                   0.151
Total                         2.073  2.5

(The individual outcomes of matches 1, 2, 3, and 5 are not shown; together with the 0.5 from match 4 they total 2.5.)
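Assuming the totals from this table are plugged into the same update rule with the example's K = 24, the batch update works out as:

```python
# Batch Elo update after 5 matches, using the table's totals:
# R' = R + K * (S_total - E_total)
new_rating = 1500 + 24.0 * (2.5 - 2.073)
print(new_rating)   # 1510.248
```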
39
About K in chess How about robot art?
K is the rate of adjustment to one's rating (for example, when Robot A wins and B loses, both ratings move by K times the difference between score and expectation). Some Elo implementations adjust K based on some criteria. For example:
FIDE (World Chess Federation):
- K = 30 for a player new to the rating list, until s/he has completed events with a total of at least 30 games.
- K = 15 as long as a player's rating remains under 2400.
- K = 10 once a player's published rating has reached 2400 and s/he has also completed events with a total of at least 30 games. Thereafter it remains permanently at 10.
USCF (United States Chess Federation):
- Players rated below 2100: K-factor of 32 used.
- Players rated between 2100 and 2400: K-factor of 24 used.
- Players rated above 2400: K-factor of 16 used.
How about robot art?
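A small Python sketch of rating-dependent K selection, following the USCF thresholds quoted above (illustrative only):

```python
def k_factor_uscf(rating: float) -> float:
    """USCF-style K-factor: lower-rated, more volatile players get a larger K."""
    if rating < 2100:
        return 32.0
    elif rating <= 2400:
        return 24.0
    return 16.0
```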
40
Picture Drawing Robots
41
Audience votes through a Webpage
42
ELO for art (motion) scoring
Score of 194
43
ELO for art (motion) scoring
Score of 0
44
Physical Robot DERPY. Derpy with a Sharpie marker.
45
Fuzzy/probabilistic state machine operates differently in dark and light areas.
[Figure] Examples of fuzzy variables; image with dark and light areas.
46
Fuzzy and Probabilistic Machines
Simple probabilistic machine of Derpy
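The deck shows Derpy's machine only as a diagram; below is a minimal, hypothetical Python sketch of a probabilistic state machine in that style. The states, probabilities, and actions are invented for illustration and are not Derpy's actual parameters:

```python
import random

# Hypothetical transition table: state -> list of (next_state, probability).
TRANSITIONS = {
    "draw_line": [("draw_line", 0.6), ("turn", 0.3), ("pause", 0.1)],
    "turn":      [("draw_line", 0.7), ("turn", 0.2), ("pause", 0.1)],
    "pause":     [("draw_line", 0.9), ("turn", 0.1)],
}

def step(state: str) -> str:
    """Sample the next state according to the current state's probabilities."""
    next_states, probs = zip(*TRANSITIONS[state])
    return random.choices(next_states, weights=probs, k=1)[0]

# Run the machine for a few steps.
state = "draw_line"
for _ in range(10):
    state = step(state)
    print(state)
```

In the actual robot, the fuzzy variables from the previous slide would modulate such transition probabilities depending on the dark and light areas of the image.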
47
“Robot art” on butcher paper on the floor.
48
Another piece of art from Derpy
49
Now use Part 2 of slides
50
Auxiliary Slides
51
Microsoft TrueSkill
52
Microsoft TrueSkill addresses:
- the subjective K value (instead, updates are based on players' skill uncertainty),
- ranking of multiple players (>2),
- finding "interesting" matches: balanced ones, where either player has a comparable chance of winning,
- building "leaderboards" (a ranking of all players).
53
Microsoft TrueSkill
A player's skill is modeled as a normal distribution, with the mean as the player's "true skill" and the standard deviation as the uncertainty about that skill. Players start with some initial mean-skill and uncertainty values. As a player plays more games/matches, the mean skill gets adjusted and the uncertainty (i.e., the standard deviation) decreases.
54
Microsoft TrueSkill: updating the mean and standard deviation
In the standard TrueSkill two-player update (winner w, loser l):

μ_w ← μ_w + (σ_w² / c) · v(t, ε/c)
μ_l ← μ_l − (σ_l² / c) · v(t, ε/c)
σ_w² ← σ_w² · [1 − (σ_w² / c²) · w(t, ε/c)]
σ_l² ← σ_l² · [1 − (σ_l² / c²) · w(t, ε/c)]
with t = (μ_w − μ_l) / c and c² = 2β² + σ_w² + σ_l²

β² is unknown; it is the variance of the performance around the skill of each player. ε is the draw margin.
55
Microsoft TrueSkill: the functions v and w
For a win/loss outcome, the standard forms are:

v(t, α) = N(t − α) / Φ(t − α)
w(t, α) = v(t, α) · (v(t, α) + t − α)

where N is the standard normal density and Φ is the standard normal cumulative distribution function. v gives the additive correction to the means, and w (between 0 and 1) determines how much the variances shrink.
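A minimal Python sketch of this two-player win/loss update, using SciPy's normal distribution. The defaults follow the commonly published TrueSkill setup (μ = 25, σ = 25/3, β = σ/2); treat this as an illustration, not Microsoft's implementation:

```python
import math
from scipy.stats import norm

def v_win(t: float, alpha: float) -> float:
    """Additive mean-update factor for a win/loss outcome."""
    return norm.pdf(t - alpha) / norm.cdf(t - alpha)

def w_win(t: float, alpha: float) -> float:
    """Multiplicative variance-shrink factor (between 0 and 1)."""
    v = v_win(t, alpha)
    return v * (v + t - alpha)

def trueskill_update(mu_w, sigma_w, mu_l, sigma_l, beta=25.0/6, eps=0.0):
    """Update (mu, sigma) of the winner and the loser after one match."""
    c = math.sqrt(2 * beta**2 + sigma_w**2 + sigma_l**2)
    t = (mu_w - mu_l) / c
    v, w = v_win(t, eps / c), w_win(t, eps / c)
    winner = (mu_w + (sigma_w**2 / c) * v,
              math.sqrt(sigma_w**2 * (1 - (sigma_w**2 / c**2) * w)))
    loser = (mu_l - (sigma_l**2 / c) * v,
             math.sqrt(sigma_l**2 * (1 - (sigma_l**2 / c**2) * w)))
    return winner, loser

# Two new players (mu = 25, sigma = 25/3): the winner's mean rises,
# the loser's falls, and both uncertainties shrink.
print(trueskill_update(25.0, 25.0/3, 25.0, 25.0/3))
```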
56
Microsoft TrueSkill
57
Microsoft TrueSkill (us/projects/trueskill/Details.aspx#How_to_Update_Skills):
- Unbalanced matches (can't win or can't lose) are not interesting; balanced matches (even chance of winning) are interesting.
- Accommodates two or more players.
- A module to track the skills of all players based on game outcomes between players (update).
- A module to arrange interesting matches for its members (matchmaking).
- A module to recognize and potentially publish skills of members (leaderboards).
- TrueSkill is a skill-based ranking system, so interesting matches can be reliably arranged within a league.
- Uses Bayesian inference for ranking.
58
Microsoft TrueSkill
The intuition is that the greater the difference between two players' μ values (assuming their σ values are similar), the greater the chance of the player with the higher μ performing better in a game. This principle holds in the TrueSkill ranking system. However, this does not mean that players with larger μ's are always expected to win, but rather that their chance of winning is higher than that of players with smaller μ's. The TrueSkill ranking system assumes that performance in a single match varies around the skill of the player, and that the game outcome (the relative ranking of all players participating in a game) is determined by their performance. Thus, the skill of a player in the TrueSkill ranking system can be thought of as the average performance of the player over a large number of games. The variation of the performance around the skill is, in principle, a configurable parameter of the TrueSkill ranking system.
59
Microsoft TrueSkill
μ and σ are updated based on the outcome of the game (win/lose); the score difference has no impact.
1. TrueSkill assumes the skill of each player may change slightly between the current and the previous game, so σ is increased by a configurable parameter. "It is this parameter that both allows the TrueSkill system to track skill improvements of gamers over time and ensures that the skill uncertainty σ never decreases to zero ('maintaining momentum')."
2. It determines the probability of the game outcome for given skills of the participating players, weighted by the probability of the corresponding skill beliefs: average over all possible performances (weighted by their probability, per Bayes' law) and derive the game outcome from the performances. The player with the highest performance is the winner, the second highest is the first runner-up, and so on.
3. If player performances are very close, TrueSkill considers the outcome to be a draw. The larger the draw margin defined in a league, the more likely a draw is to occur. The size of the margin is configurable and adjusted per game mode.
60
Measuring consistency
61
Measuring consistency in Pairwise Comparison
Can be done when the comparison uses a "degree of importance", e.g.: 1 = equally important, 2 = somewhat more important, 3 = more important, 4 = most important. Example: determining the important criteria in buying a car, comparing the criteria {Price, MPG, Comfort, Style} pairwise in a matrix whose cells hold the importance of the row item relative to the column item; three of the pairs are rated 3, 2, and 4.
62
Measuring consistency in Pairwise Comparison
Complete the values in the matrix: a criterion compared to itself is "equally important", so the diagonal cells of the {Price, MPG, Comfort, Style} matrix are all 1. (1 = equally important, 2 = somewhat more important, 3 = more important, 4 = most important.)
63
Measuring consistency in Pairwise Comparison
Complete the values in the matrix: the importance of the less important criterion is the reciprocal of the importance of the more important one. E.g., Price vs. Style: Style is two times more important than Price (2), so Price is half as important as Style (1/2). The remaining cells thus hold the reciprocals 1/3, 1/4, and 1/2 of the values already entered. (1 = equally important, 2 = somewhat more important, 3 = more important, 4 = most important.)
64
Measuring consistency in Pairwise Comparison
Example: determining the important criteria in buying a car. Calculate the weight of each criterion from the completed matrix over {Price, MPG, Comfort, Style}.
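The slides do not spell out the weight calculation. A common choice for such reciprocal matrices (the Analytic Hierarchy Process style) is the principal eigenvector, with consistency measured by the consistency index CI = (λmax − n)/(n − 1). A sketch with a made-up completed matrix, since the deck's full matrix is not recoverable:

```python
import numpy as np

# Hypothetical completed reciprocal comparison matrix over
# {Price, MPG, Comfort, Style}; the deck shows the matrix only partially,
# so these particular values are placeholders.
M = np.array([
    [1.0, 3.0, 4.0, 1/2],
    [1/3, 1.0, 2.0, 1/4],
    [1/4, 1/2, 1.0, 1/3],
    [2.0, 4.0, 3.0, 1.0],
])

# Criterion weights = normalized principal eigenvector of M.
eigvals, eigvecs = np.linalg.eig(M)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()
print("weights:", weights.round(3))

# Consistency index CI = (lambda_max - n) / (n - 1):
# 0 means perfectly consistent judgments; larger means more inconsistent.
n = M.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
print("consistency index:", round(ci, 3))
```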
65
Evaluation Criteria for Ranking Methods
- The Method of Pairwise Comparisons satisfies the Majority Criterion. (A majority candidate will win every pairwise comparison.)
- The Method of Pairwise Comparisons satisfies the Condorcet Criterion. (A Condorcet candidate will win every pairwise comparison; that's what a Condorcet candidate is!)
- The Method of Pairwise Comparisons satisfies the Public-Enemy Criterion. (If there is a public enemy, s/he will lose every pairwise comparison.)
- The Method of Pairwise Comparisons satisfies the Monotonicity Criterion. (Ranking Candidate X higher can only help X in pairwise comparisons.)
66
ELO
67
Agenda on ELO: Overview, How it works, Details, Mathematical details.
68
How it Works: Elo formulas, expected value, score, how to update the rating.