Learning to Identify Winning Coalitions in the PAC Model
A. D. Procaccia & J. S. Rosenschein

Lecture Outline
- Cooperative Games
- Learning: the PAC Model
- VC Dimension
- Motivation
- Results
- Closing Remarks

Simple Cooperative Games
A cooperative n-person game is a pair (N; v), where N = {1, …, n} is the set of players and v: 2^N → R; v(C) is the value of coalition C.
Simple games: v is binary-valued. C is winning if v(C) = 1 and losing if v(C) = 0. Equivalently, 2^N is partitioned into W and L such that:
1. ∅ ∈ L.
2. N ∈ W.
3. Every superset of a winning coalition is winning.
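To make the definitions concrete, here is a minimal Python sketch (the example game, names, and representation are illustrative assumptions, not from the paper) of a binary value function together with a check of the three conditions above:

```python
from itertools import combinations

# Illustrative 4-player simple game: a coalition wins iff it contains
# player 1 together with at least one other player.
N = {1, 2, 3, 4}

def v(coalition):
    """Binary value function: 1 if the coalition wins, 0 otherwise."""
    c = frozenset(coalition)
    return 1 if 1 in c and len(c) >= 2 else 0

def is_simple_game(players, value):
    """Check the three conditions: the empty coalition loses, the grand
    coalition wins, and supersets of winning coalitions win (monotonicity)."""
    subsets = [frozenset(s) for r in range(len(players) + 1)
               for s in combinations(players, r)]
    return (value(frozenset()) == 0
            and value(frozenset(players)) == 1
            and all(value(a) <= value(b)
                    for a in subsets for b in subsets if a <= b))

print(is_simple_game(N, v))  # True
```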

PAC Model
Sample space X; we wish to learn a target concept c: X → {0, 1} in a concept class C. Labeled pairs (x_i, c(x_i)) are given, drawn according to a fixed distribution on X. The learner produces a concept but is allowed mistakes: with probability at most δ the learning algorithm fails; otherwise it outputs an ε-approximation of the target concept. How many samples are needed? This is the sample complexity m_C(ε, δ).
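As an illustration of sample complexity, the following sketch evaluates the standard bound for a finite concept class and a consistent learner, m ≥ (1/ε)(ln|C| + ln(1/δ)); the numbers plugged in are arbitrary:

```python
import math

def sample_complexity_finite(class_size, epsilon, delta):
    """Standard PAC bound for a finite concept class with a consistent
    learner: m >= (1/eps) * (ln|C| + ln(1/delta)) samples suffice."""
    return math.ceil((math.log(class_size) + math.log(1 / delta)) / epsilon)

# Simple games on n players form a finite class (at most 2^(2^n) functions),
# so the bound applies, though ln|C| grows exponentially in n.
n = 4
print(sample_complexity_finite(2 ** (2 ** n), epsilon=0.1, delta=0.05))
```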

VC-Dimension
X = sample space; C contains functions c: X → {0, 1}. For S = {x_1, …, x_m}, define Π_C(S) = {(c(x_1), …, c(x_m)) : c ∈ C}. S is shattered by C iff |Π_C(S)| = 2^m. VC-dim(C) = the size of the largest set shattered by C. The VC dimension yields both upper and lower bounds on the sample complexity of a concept class.

VC Dimension: Example
X = R, C = {f : ∃a, b s.t. f(x) = 1 iff x ∈ [a, b]} — the class of real intervals. Any two points can be shattered by intervals, but no three: no interval accepts the two outer points while rejecting the middle one, so VC-dim(C) = 2.
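A small brute-force demonstration of this example (using a finite grid of intervals as a stand-in for the continuous class; the grid and test points are arbitrary choices):

```python
def shatters(concepts, points):
    """S is shattered iff |Pi_C(S)| = 2^|S|: every labeling is realized."""
    realized = {tuple(c(x) for x in points) for c in concepts}
    return len(realized) == 2 ** len(points)

def interval(a, b):
    return lambda x: 1 if a <= x <= b else 0

# A finite grid of intervals as a stand-in for the continuous class.
grid = [i / 10 for i in range(-50, 51)]
intervals = [interval(a, b) for a in grid for b in grid if a <= b]

print(shatters(intervals, [0.0, 1.0]))       # True: two points are shattered
print(shatters(intervals, [0.0, 1.0, 2.0]))  # False: (1, 0, 1) is unrealizable
```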

Motivation
The multiagent community shows great interest in learning, but almost all of that work concerns reinforcement learning. Cooperative games are of natural interest in a multiagent context. Real-world settings for simple cooperative games: parliaments; panels of advisers.

Minimum Winning Coalitions
Simple cooperative games are defined by their sets of minimum winning coalitions, i.e., winning coalitions all of whose proper subsets are losing. Here X = coalitions and C* = sets of minimum winning coalitions.
[Figure: the lattice of all subsets of {1, 2, 3, 4}, from ∅ up to {1, 2, 3, 4}.]
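A sketch of how the minimum winning coalitions can be extracted from a value function by brute force; the weighted majority game used here is a hypothetical example, not from the paper:

```python
from itertools import combinations

# Hypothetical weighted majority game: weights (3, 2, 1, 1), quota 4.
weights = {1: 3, 2: 2, 3: 1, 4: 1}
quota = 4

def v(coalition):
    return 1 if sum(weights[i] for i in coalition) >= quota else 0

def minimum_winning_coalitions(players, value):
    """Winning coalitions whose every one-player-smaller subset loses;
    by monotonicity this means all proper subsets lose."""
    all_coalitions = (frozenset(s) for r in range(len(players) + 1)
                      for s in combinations(sorted(players), r))
    return [set(c) for c in all_coalitions
            if value(c) == 1 and all(value(c - {i}) == 0 for i in c)]

print(minimum_winning_coalitions({1, 2, 3, 4}, v))
# [{1, 2}, {1, 3}, {1, 4}, {2, 3, 4}]
```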

VC-dim(C*)
F is an antichain iff for all distinct A, B ∈ F: A ⊄ B.
Sperner's Theorem: if F is an antichain of subsets of {1, …, n}, then |F| ≤ C(n, ⌊n/2⌋).
[Figure: the subset lattice of {1, 2, 3, 4}; its middle layer is a largest antichain.]
Theorem: VC-dim(C*) = C(n, ⌊n/2⌋). Intuition: a shattered set of coalitions must be an antichain (labeling a winning coalition's superset as losing would violate monotonicity), and conversely every labeling of an antichain extends to a simple game; Sperner's theorem then bounds the largest antichain.
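The lower-bound direction can be sanity-checked by brute force for small n; this sketch (illustrative, n = 4) confirms the middle layer is an antichain of size C(n, ⌊n/2⌋) and that every labeling of it is realized by a monotone game:

```python
from itertools import combinations, product
from math import comb

n = 4
players = range(1, n + 1)
middle = [frozenset(s) for s in combinations(players, n // 2)]
print(len(middle) == comb(n, n // 2))  # True: middle layer, size C(4, 2) = 6

# Every labeling of an antichain extends to a monotone simple game:
# let a coalition win iff it contains some coalition labeled 1.
# Checking all 2^6 labelings confirms the middle layer is shattered.
def middle_layer_shattered():
    for labels in product([0, 1], repeat=len(middle)):
        winners = [s for s, y in zip(middle, labels) if y == 1]
        realized = tuple(1 if any(s >= w for w in winners) else 0
                         for s in middle)
        if realized != labels:
            return False
    return True

print(middle_layer_shattered())  # True: VC-dim >= C(n, floor(n/2))
```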

Restricted Simple Games
Dictator: a single minimum winning coalition containing one player. VC-dim = ⌊log n⌋.
Junta: a single minimum winning coalition. VC-dim = n.
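The dictator bound can likewise be verified exhaustively for small n; this sketch (n = 4, names illustrative) computes the VC dimension of the dictator class by brute force over sets of coalitions:

```python
from itertools import combinations

n = 4
coalitions = [frozenset(s) for r in range(n + 1)
              for s in combinations(range(1, n + 1), r)]
# The dictator class: one concept per player i, winning iff i is a member.
dictators = [lambda C, i=i: 1 if i in C else 0 for i in range(1, n + 1)]

def shatters(concepts, points):
    realized = {tuple(c(x) for x in points) for c in concepts}
    return len(realized) == 2 ** len(points)

vc = max(m for m in range(len(coalitions) + 1)
         if any(shatters(dictators, list(S))
                for S in combinations(coalitions, m)))
print(vc)  # 2 = floor(log2(4)), matching the slide
```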

Restricted Simple Games II
Proper games: C is winning ⇒ N\C is losing. It holds that VC-dim = C(n, ⌈(n+1)/2⌉): a shattered set must be an intersecting antichain (two disjoint winning coalitions would make some coalition and its complement both winning), and by Milner's theorem the largest intersecting antichain has exactly this size.
Elimination of dummies: for every player i there exists a coalition C such that C is winning but C\{i} is losing. The same lower bound holds.
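For concreteness, minimal sketches of the two restrictions as executable checks (the 3-player majority game is an arbitrary example):

```python
from itertools import combinations

def all_coalitions(players):
    return [frozenset(s) for r in range(len(players) + 1)
            for s in combinations(sorted(players), r)]

def is_proper(players, value):
    """Proper: a coalition and its complement never both win."""
    grand = frozenset(players)
    return all(not (value(c) == 1 and value(grand - c) == 1)
               for c in all_coalitions(players))

def has_no_dummies(players, value):
    """Every player is pivotal in at least one winning coalition."""
    return all(any(value(c) == 1 and value(c - {i}) == 0
                   for c in all_coalitions(players) if i in c)
               for i in players)

majority = lambda c: 1 if len(c) >= 2 else 0  # 3-player majority game
print(is_proper({1, 2, 3}, majority))       # True
print(has_no_dummies({1, 2, 3}, majority))  # True
```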

Closing Remarks
Simple games with a dictator or a junta are easy to learn; general simple games are much harder. Monotone DNF formulae are equivalent to sets of minimum winning coalitions. An implementation still needs to be found. Algorithms included!
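A final sketch of the DNF correspondence (the game is hypothetical): each minimum winning coalition is one monotone term, and evaluating the game is evaluating the disjunction of the terms:

```python
# Each minimum winning coalition becomes one monotone term (the conjunction
# of its players' variables); the game is the disjunction of the terms.
minimum_winning = [{1, 2}, {1, 3}, {2, 3, 4}]  # hypothetical game

def v(coalition):
    """Evaluate the monotone DNF: C wins iff it satisfies some term,
    i.e., contains some minimum winning coalition."""
    return 1 if any(term <= set(coalition) for term in minimum_winning) else 0

print(v({1, 2, 4}))  # 1: contains the term {1, 2}
print(v({2, 4}))     # 0: contains no minimum winning coalition
```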