Competition between adaptive agents: learning and collective efficiency
Damien Challet (Oxford University) and Matteo Marsili (ICTP, Trieste, Italy)

Outline
● My definition of the Minority Game
● Simple worlds (M = 0)
   ● Markovian behavior
   ● Neural networks
   ● Reinforcement learning
● Multistate worlds (M > 0)
   ● Cause of large inefficiencies
   ● Remedies
● From El Farol to MG and back

'Truth is always in the minority' Kierkegaard

Zig-Zag-Zoug
● Game played by Swiss children
● 3 players, 3 feet, 3 magic words
● “Ziiig”... “Zaaag”... “ZOUG!”

Minority Game
● Zig-Zag-Zoug with N players
● Aim: to be in the minority
● Outcome = #UP − #DOWN = #A − #B
● Model of competition between adaptive players
Challet and Zhang (1997), inspired by Arthur's El Farol bar problem (1994)

Initial goals of the MG
● El Farol (1994): impossible to understand analytically
● Drastic simplification, keeping the key ingredients: bounded rationality, reinforcement learning
● Symmetrize the problem: threshold 60/100 -> 50/50
● Understand the symmetric problem
● Generalize the results to the asymmetric problem

Repeated games
Why play again? Frustration: the losers are in the majority.
How to play?
● Deduction: rationality, best answer -> all lose!
● Induction: limited capabilities; beliefs, strategies, personality; trial and error; learning

Minority Game
● N agents i = 1, ..., N
● Choice a_i(t) = ±1
● Outcome A(t) = Σ_i a_i(t)
● Payoff of player i: −a_i(t) A(t)
● Total losses = A²
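These rules fit in a few lines of code. Below is a minimal sketch in Python with NumPy (an assumed illustration, not code from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

def play_round(a):
    """One Minority Game round.

    a: array of N choices a_i = +1 or -1.
    Returns the outcome A = sum_i a_i and the payoffs -a_i * A:
    agents on the minority side (a_i * A < 0) earn a positive payoff.
    """
    A = a.sum()
    return A, -a * A

# with N odd there is always a strict minority
N = 101
a = rng.choice([-1, 1], size=N)
A, payoffs = play_round(a)
# the payoffs sum to -A^2: total losses = A^2
```

Note that the payoffs always sum to −A², which is why A² measures the collective loss.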

Markovian learning
'If it ain't broken, don't fix it' (Reents et al., Physica A 2000):
● If I won, I stick to my previous choice
● If I lost, I switch to the other choice with probability p
Results (σ² = ⟨A²⟩):
● pN = x = const (small p): σ² = 1 + 2x(1 + x/6)
● p ~ N^(−1/2): σ² ~ N
● p ~ 1: σ² ~ N²
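The scaling above is easy to check numerically. Here is a sketch of the win-stay/lose-shift dynamics (an assumed implementation, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_markov(N, p, T):
    """Win-stay / lose-shift dynamics (after Reents et al.):
    winners keep their choice, losers flip with probability p.
    Returns sigma^2, the time average of A^2."""
    a = rng.choice([-1, 1], size=N)
    total = 0.0
    for _ in range(T):
        A = a.sum()
        total += A * A
        losers = a * A > 0                 # majority side lost this round
        flip = losers & (rng.random(N) < p)
        a[flip] *= -1
    return total / T

# pN ~ 1 keeps sigma^2 of order 1; p ~ 1 drives it to order N^2
sigma2_small = simulate_markov(N=101, p=0.01, T=2000)
sigma2_large = simulate_markov(N=101, p=1.0, T=2000)
```

With p = 1 every loser flips, so the crowd oscillates between all-up and all-down, which is where the N² fluctuations come from.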

Markovian learning II
● Problem: if N is unknown, how to choose p?
● Try a time-dependent rate p = f(t), e.g. p = t^(−k)
● Convergence for any N, but freezing: when to stop?

Neural networks
Simple perceptrons with learning rate R (Metzler et al.):
● σ² = N + N(N−1) F(N, R)
● σ²_min = N(1 − 2/π) ≈ 0.36 N

Reinforcement learning
● Each player has a register D_i
● D_i > 0: + is better; D_i < 0: − is better
● Update: D_i(t+1) = D_i(t) − A(t)
● Choice: prob(+ | D_i) = f(D_i), with f'(x) > 0 (reinforcement learning)

Reinforcement learning II
● Central result: agents minimize ⟨A⟩² (predictability) for all f
● Stationary state: ⟨A⟩ = 0
● Fluctuations ⟨A²⟩ = ?
● Example: f(x) = (1 + tanh(K x))/2, exponential learning with learning rate K
● K < K_c: σ² ~ N
● K > K_c: σ² ~ N²
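A sketch of this exponential-learning dynamics (an assumed implementation: the register update D <- D − A follows the rule stated on the previous slide, and K_c is not computed here, only bracketed):

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_rl(N, K, T):
    """Exponential-learning MG: the register D (updated by -A each
    round) sets the probability f(D) = (1 + tanh(K*D)) / 2 of
    playing +1.  Returns sigma^2, the time average of A^2."""
    D = 0.0
    total = 0.0
    for _ in range(T):
        prob_plus = 0.5 * (1.0 + np.tanh(K * D))
        a = np.where(rng.random(N) < prob_plus, 1, -1)
        A = a.sum()
        total += A * A
        D -= A
    return total / T

# below K_c fluctuations stay of order N; above K_c they grow to order N^2
sigma2_low_K = simulate_rl(N=101, K=0.01, T=2000)
sigma2_high_K = simulate_rl(N=101, K=10.0, T=2000)
```

For large K the tanh saturates: everyone chases the same side, D overshoots, and A oscillates between ±N, which is the K > K_c regime.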

Reinforcement learning III
Market impact: each agent has an influence on the outcome
● Naive agents: payoff −A = −A_{−i} − a_i
● Non-naive agents: payoff −A + c a_i
● Smart agents: payoff −A_{−i} (cf. WLU, AU)
● Central result 2: non-naive agents minimize σ² (fluctuations) for all f -> Nash equilibrium, σ² ~ 1
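The three schemes differ only in how an agent's own contribution a_i is treated. The helper below (`payoff_schemes` is a hypothetical name, not from the slides) makes the bookkeeping explicit:

```python
import numpy as np

def payoff_schemes(a, c=1.0):
    """Payoffs of each agent's chosen action under three schemes.

    a: array of choices a_i = +/-1;  A = sum(a);  A_{-i} = A - a_i.
    naive:     -a_i * A         (agent ignores its own market impact)
    non-naive: -a_i * A + c     (the slide's -A + c*a_i, evaluated at
                                 the played action: c * a_i * a_i = c)
    smart:     -a_i * A_{-i}    (own contribution removed, cf. WLU/AU)
    """
    A = a.sum()
    naive = -a * A
    non_naive = naive + c
    smart = -a * (A - a)        # = naive + 1, since a_i^2 = 1
    return naive, non_naive, smart

a = np.array([1, 1, -1, 1, -1])    # A = 1: the -1 side is the minority
naive, non_naive, smart = payoff_schemes(a)
```

Since a_i² = 1, the non-naive correction with c = 1 coincides with the smart payoff, which is why knowing the exact impact is not crucial.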

Summary

Minority Games with memory
If an agent believes that the outcome depends on the past results, the outcome will depend on the past results.
● Sunspot effect
● Self-fulfilling prophecies
● Fallacies of causal inference
Consequence: the other agents will change their behavior accordingly.

Minority Games with memory: naïve agents
● Fixed randomly drawn strategies = quenched disorder
● Tools of statistical physics give the exact solution, in principle
● Agents minimize the predictability: predictability = Hamiltonian, an optimization problem
● Numerics: Savit et al., PRL 1999; analytics: Challet et al., PRL 1999; Coolen et al., J. Phys. A 2002
[Figure: σ²/N vs α = P/N]
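These ingredients can be sketched as follows (an assumed minimal implementation, not the authors' code: P = 2^M history states, S quenched strategies per agent, virtual scores, deterministic tie-breaking):

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_mg(N=101, M=3, S=2, T=2000):
    """Standard MG with memory: each agent holds S fixed random
    strategies (lookup tables over the P = 2^M history states),
    keeps a virtual score per strategy, and plays its best one."""
    P = 2 ** M
    strategies = rng.choice([-1, 1], size=(N, S, P))   # quenched disorder
    scores = np.zeros((N, S))
    mu = int(rng.integers(P))          # current history state
    A_series = []
    for _ in range(T):
        best = scores.argmax(axis=1)
        a = strategies[np.arange(N), best, mu]
        A = a.sum()
        A_series.append(A)
        # virtual payoffs: every strategy is scored as if it had played
        scores -= strategies[:, :, mu] * A
        # append the winning (minority) sign to the history bitstring
        winner = 0 if A > 0 else 1
        mu = (2 * mu + winner) % P
    return np.array(A_series)

A_series = simulate_mg()
sigma2 = (A_series ** 2).mean()
```

Varying P/N while measuring σ²/N reproduces the curve on this slide.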

Minority Games with memory: low efficiency
[Figure: fluctuations vs α = P/N]

Minority Games with memory: low efficiency
P/N is not the right scaling for large fluctuations

Minority Games with memory: origin of low efficiency
● Stochastic dynamical equation for the strategy score U_i: slowly varying part (I) + correlated noise (II)
● Term I: size independent; term II = K P^(−1/2)
● When I << II: large fluctuations
● Transition at I/K = G/P^(1/2): critical signal-to-noise ratio = G/P^(1/2)

Minority Games with memory: origin of low efficiency
Check: determine G, then predict the critical points from I/K = G/P^(1/2).

Minority Games with memory: origin of low efficiency
[Figure: two panels, before and after]

Minority Games with memory: origin of low efficiency

Minority Games with memory: sophisticated agents
● Agents minimize the fluctuations
● Optimization problem again

Reverse problem
Many variations, different global utility functions:
● Grand-canonical game (play or not play)
● Time window of scores (exponential moving average)
● Any payoff
Hence, given a task (global utility function), one knows how to design the agents (local utility).
Example: optimal defect combinations (cf. Neil's talk)

From El Farol to MG and back
● El Farol: comfort threshold L anywhere in [0, N]
● MG: L = N/2
Differences, similarities? Which results from the MG are valid for El Farol?
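For concreteness, one El Farol round under this parametrization (a sketch; the zero payoff for stay-at-homes is an assumed convention):

```python
import numpy as np

rng = np.random.default_rng(3)

def el_farol_round(go, L):
    """One round of the El Farol bar problem.

    go: array of 0/1 decisions for N agents; L: comfort threshold.
    Agents who go are happy (+1) iff attendance <= L, unhappy (-1)
    otherwise; stay-at-homes get 0.  The MG is the symmetric case
    L = N/2, with the choices recoded as a_i = 2*go_i - 1.
    """
    attendance = go.sum()
    reward = np.where(go == 1, 1 if attendance <= L else -1, 0)
    return attendance, reward

N, L = 100, 60
go = rng.integers(0, 2, size=N)
attendance, reward = el_farol_round(go, L)
```

Setting L = N/2 in this function recovers exactly the minority payoff structure.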

From El Farol to MG and back
Theorem: all results from the MG apply to El Farol for all N.
Everything scales like γ = (L/N − ⟨a⟩) P^(1/2) / S.
The El Farol problem with P states of the world is solved.

From El Farol to MG and back: new results
If γ = (L/N − ⟨a⟩) P^(1/2) / S ≠ 0 and P > P_c = 2S² / [π (L/N − ⟨a⟩)²]: no more phase transition.

Summary
● AU/WLU suppresses large fluctuations -> Nash equilibrium
● Design: agents must know that they have an impact; knowledge of the exact impact is not crucial
● Reverse problem also possible
● MG: simple, rich, fun, and useful
Commented references