4.1 Bayesian Games (cited from Wikipedia)
A Bayesian game is one in which information about the characteristics of the other players (i.e. their payoffs) is incomplete. Imperfect information: the other players' strategy choices are unknown. Incomplete information: the other players' payoffs are unknown. Following John C. Harsanyi's framework, a Bayesian game can be modeled by introducing Nature as a player in the game. Nature assigns to each player a random variable that takes values in that player's type space and associates a probability distribution (or probability density function) with those types; in the course of the game, Nature randomly chooses a type for each player according to the distribution over that player's type space.
Harsanyi's approach to modeling a Bayesian game in this way allows games of incomplete information to become games of imperfect information (in which the history of the game is not available to all players). The type of a player determines that player's payoff function, and the probability associated with a type is the probability that the player for whom the type is specified is of that type. (A player's type thus determines 1) the payoff function and 2) the probability of being that type.) In a Bayesian game, the incompleteness of information means that at least one player is unsure of the type (and so the payoff function) of another player.
Such games are called Bayesian because of the probabilistic analysis inherent in them. Players have initial beliefs about the type of each player (where a belief is a probability distribution over the possible types for a player) and can update their beliefs according to Bayes' rule (calculating probabilities given what has already taken place in the game) as play proceeds; i.e., the belief a player holds about another player's type may change on the basis of the actions they have played. The lack of information held by players and the modeling of beliefs mean that such games are also used to analyze imperfect-information scenarios.
Example: Signaling Game
Signaling games are an example of Bayesian games. In such a game, the informed party (the agent) knows their own type, whereas the uninformed party (the principal) does not know the agent's type. In some such games it is possible for the principal to deduce the agent's type from the actions the agent takes (in the form of a signal sent to the principal), in what is known as a separating equilibrium. A specific example of a signaling game is a model of the job market.
The players are the applicant (agent) and the employer (principal). There are two types of applicant, skilled and unskilled. The employer does not know which type the applicant is, but he does know that 10% of applicants are skilled and 90% are unskilled. The employer will offer the applicant a contract based on how productive he thinks the applicant will be: skilled workers are very productive (generating a large payoff for the employer) and unskilled workers are unproductive (generating a low payoff for the employer).
The payoff of the employer is thus determined by the skill of the applicant (if the applicant accepts a contract) and the wage paid. The applicant's action space comprises two actions: take a university education or do not. It is less costly for the skilled worker to do so (because he does not pay extra tuition fees, finds classes less taxing, etc.). The employer's action space is the set of (say) natural numbers, representing the wage offered to the applicant (the applicant's action space might be extended to include acceptance of a wage, in which case it would be more appropriate to speak of his strategy space).
It might be possible for the employer to offer a wage that would compensate a skilled applicant sufficiently for acquiring a university education, but not an unskilled applicant. This leads to a separating equilibrium: skilled applicants go to university and command a high wage, whereas unskilled applicants do not go and receive a low wage. Crucially, in the game sketched above the employer chooses his action (the wage offered) according to his belief about how skilled the applicant is, and this belief is determined, in part, by the signal sent by the applicant.
The employer starts the game with an initial belief about the applicant's type (unskilled with 90% chance), but during the course of the game this belief may be updated (depending on the payoffs of the different types of applicants): to 0% unskilled if he observes a university education, or to 100% unskilled if he does not.
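This update is just an application of Bayes' rule. As a minimal sketch (not from the slides; the function name and the separating strategy profile, in which education is taken if and only if the applicant is skilled, are assumptions), the employer's posterior belief can be computed as:

```python
def posterior_skilled(prior_skilled, p_edu_if_skilled, p_edu_if_unskilled, saw_education):
    """Employer's updated belief that the applicant is skilled,
    after observing whether the applicant has a university education."""
    prior_unskilled = 1.0 - prior_skilled
    if saw_education:
        like_skilled, like_unskilled = p_edu_if_skilled, p_edu_if_unskilled
    else:
        like_skilled = 1.0 - p_edu_if_skilled
        like_unskilled = 1.0 - p_edu_if_unskilled
    # Bayes' rule: P(skilled | signal) = P(signal | skilled) P(skilled) / P(signal)
    evidence = like_skilled * prior_skilled + like_unskilled * prior_unskilled
    return like_skilled * prior_skilled / evidence

# Prior: 10% skilled. Separating profile: education if and only if skilled.
print(posterior_skilled(0.1, 1.0, 0.0, saw_education=True))   # 1.0 (0% unskilled)
print(posterior_skilled(0.1, 1.0, 0.0, saw_education=False))  # 0.0 (100% unskilled)
```

Under a pooling strategy profile (both types choosing the same signal) the same function would return the prior unchanged, since the signal carries no information.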
Bayesian Nash equilibrium
In a non-Bayesian game, a strategy profile is a Nash equilibrium if every strategy in the profile is a best response to the other strategies in the profile; i.e., no player has a strategy that would yield a higher payoff, given the strategies played by the other players. In a Bayesian game (where players are modeled as risk-neutral), rational players seek to maximize their expected payoff given their beliefs about the other players (in the general case, where players may be risk-averse or risk-loving, the assumption is that players maximize expected utility).
A Bayesian Nash equilibrium is a strategy profile, together with beliefs specified for each player about the types of the other players, that maximizes the expected payoff of each player given his beliefs about the other players' types and given the strategies played by the other players. This solution concept yields an abundance of equilibria in dynamic games when no further restrictions are placed on players' beliefs, which makes Bayesian Nash equilibrium an incomplete tool for analyzing dynamic games of incomplete information. (So we will study perfect Bayesian equilibrium.)
Perfect Bayesian equilibrium
Bayesian Nash equilibrium yields some implausible equilibria in dynamic games, where players take turns sequentially rather than moving simultaneously. These arise in the same way that implausible Nash equilibria, such as those resting on incredible threats and promises, arise in games of perfect and complete information. In perfect and complete information games, such equilibria can be eliminated by applying subgame perfect Nash equilibrium (SPE).
However, it is not always possible to use this solution concept in incomplete-information games, because such games contain non-singleton information sets, and since a subgame must contain whole information sets, sometimes there is only one subgame (the entire game), so every Nash equilibrium is trivially subgame perfect. Even if a game does have more than one subgame, the inability of subgame perfection to cut through information sets can result in implausible equilibria not being eliminated.
To refine the equilibria generated by the Bayesian Nash solution concept or by subgame perfection, one can apply the perfect Bayesian equilibrium (PBE) solution concept. PBE is in the spirit of subgame perfection in that it demands that subsequent play be optimal. However, it also places player beliefs on decision nodes, which enables moves in non-singleton information sets to be dealt with more satisfactorily. So far in discussing Bayesian games, it has been assumed that information is perfect (or, if imperfect, that play is simultaneous).
In examining dynamic games, however, it may be necessary to model imperfect information. PBE affords this means: players place beliefs on the nodes in their information sets, so an information set can be generated by Nature (in the case of incomplete information) or by other players (in the case of imperfect information).
Belief systems
The beliefs held by players in Bayesian games can be approached more rigorously in PBE. A belief system is an assignment of probabilities to every node in the game such that the probabilities in any information set sum to 1 (e.g. probability 0.7 of being at one node and 0.3 of being at the other: 0.7 + 0.3 = 1). The beliefs of a player are exactly the probabilities of the nodes in all the information sets at which that player has the move (a player's beliefs can be specified as a function from the union of his information sets to [0,1]). A belief system is consistent for a given strategy profile if and only if the probability assigned to every node is the probability of that node being reached given the strategy profile, i.e. computed by Bayes' rule. (A belief system consists of the probabilities assigned by Bayes' rule.)
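A minimal sketch of this consistency requirement (hypothetical node names and numbers, echoing the 0.7/0.3 illustration): the consistent beliefs over one information set are the probabilities of reaching its nodes under the strategy profile, renormalized so they sum to 1.

```python
def consistent_beliefs(reach_probabilities):
    """Beliefs over the nodes of one information set, computed by
    Bayes' rule from the reach probabilities under a strategy profile."""
    total = sum(reach_probabilities.values())
    if total == 0:
        # Off the equilibrium path: Bayes' rule is undefined,
        # and any beliefs may be assigned.
        raise ValueError("information set reached with probability 0")
    return {node: p / total for node, p in reach_probabilities.items()}

# Hypothetical strategy profile reaching the two nodes with 0.7 and 0.3:
print(consistent_beliefs({"left node": 0.7, "right node": 0.3}))  # beliefs sum to 1
```

The zero-total branch is exactly the "wherever possible" caveat discussed under the definition of PBE below.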
Sequential rationality (This will be dealt with in 4.3.)
The notion of sequential rationality is what determines the optimality of subsequent play in PBE. A strategy profile is sequentially rational at a particular information set, for a particular belief system, if and only if the expected payoff of the player who has the move at that information set is maximal given the strategies played by all the other players. A strategy profile is sequentially rational for a particular belief system if it satisfies the above for every information set.
Definition
A perfect Bayesian equilibrium is a strategy profile together with a belief system such that the strategies are sequentially rational given the belief system, and the belief system is consistent, wherever possible, given the strategy profile. The 'wherever possible' clause is necessary because some information sets might not be reached with non-zero probability given the strategy profile, in which case Bayes' rule cannot be used to compute the probabilities of the nodes in those sets. Such information sets are said to be off the equilibrium path, and any beliefs may be assigned to them.
An example
Information in the game on the right is imperfect, since player 2 does not know what player 1 has done when he comes to play. If both players are rational, both know that both are rational, and everything known by any player is known to be known by every player (i.e. player 1 knows that player 2 knows that player 1 is rational, and player 2 knows this, and so on ad infinitum: common knowledge), play in the game will be as follows according to perfect Bayesian equilibrium:
Player 2 cannot observe player 1's move. Player 1 would like to fool player 2 into thinking he has played U when he has actually played D, so that player 2 will play D' and player 1 will receive 3. In fact, there is a perfect Bayesian equilibrium in which player 1 plays D, player 2 plays U', and player 2 holds the belief that player 1 will definitely play D (i.e. player 2 places probability 1 on the node reached if player 1 plays D). In this equilibrium, every strategy is rational given the beliefs held, and every belief is consistent with the strategies played. In this case, the perfect Bayesian equilibrium is the only Nash equilibrium.
Bayes' Theorem (or Rule or Law)
Thomas Bayes (1701–1761), London, England. In probability theory, Bayes' theorem (often called Bayes' law and named after Rev. Thomas Bayes) shows how one conditional probability (such as the probability of a hypothesis given observed evidence) depends on its inverse (in this case, the probability of that evidence given the hypothesis H). The inverse of P(A|B) is P(B|A). The theorem expresses the posterior probability of a hypothesis H (i.e. after evidence E is observed) in terms of the prior probabilities of H and E and the likelihood of E given H. It implies that evidence has a stronger confirming effect if it was more unlikely before being observed. (5-24-19)
As a formal theorem, Bayes' theorem is valid in all common interpretations of probability. However, the frequentist and Bayesian interpretations disagree on how (and to what) probabilities are assigned. In the Bayesian interpretation, probabilities are rationally coherent degrees of belief, or a proposition's likelihood given a body of well-specified information; Bayes' theorem can then be understood as specifying how an ideal scientist responds to evidence (a cause-and-effect reading). In the frequentist interpretation, probabilities are the frequencies of occurrence of random events as proportions of a whole (no cause-and-effect reading). Though his name has become associated with subjective probability, Bayes himself interpreted the theorem in an objective sense.
Bayes' theorem relates the conditional and marginal probabilities of events A and B, where B has a non-vanishing (non-zero) probability:

P(A|B) = P(B|A) P(A) / P(B)

Each term in Bayes' theorem has a conventional name. P(A) is the prior (before the event) probability or marginal probability of A; it is "prior" in the sense that it does not take into account any information about B. P(A|B) is the conditional probability of A given B; it is also called the posterior (after the event) probability because it is derived from, or depends upon, the specified value of B.
P(B|A) is the conditional probability of B given A. P(B) is the prior or marginal probability of B, and acts as a normalizing constant. Intuitively, Bayes' theorem in this form describes how one's belief about event A is updated after observing event B: the posterior information is the updated version of the prior information.
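As a sketch in code (the function name and the illustrative numbers are assumptions, not from the slides), the theorem with these conventional names reads:

```python
def bayes(likelihood, prior, evidence):
    """Posterior P(A|B) from the likelihood P(B|A), the prior P(A),
    and the marginal probability P(B), which acts as a normalizer."""
    if evidence == 0:
        raise ValueError("P(B) must be non-vanishing")
    return likelihood * prior / evidence

# Illustrative values: P(B|A) = 0.75, P(A) = 0.5, P(B) = 0.5.
print(bayes(0.75, 0.5, 0.5))  # posterior P(A|B) = 0.75
```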
Example
Suppose there is a co-ed school whose students are 60% boys and 40% girls. The girl students wear trousers or skirts in equal numbers (50% trousers, 50% skirts); the boys all wear trousers (100% trousers). An observer sees a (random) student from a distance; all the observer can see is that this student is wearing trousers (B). What is the probability that this student is a girl? The correct answer can be computed using Bayes' theorem.

Prior information:

              Girls (A)       Boys (A')
Trousers (B)  P(B|A) = 0.5    P(B|A') = 1.0
Skirts (B')   P(B'|A) = 0.5   P(B'|A') = 0.0
Total         P(A) = 0.4      P(A') = 0.6
The event A is that the student observed is a girl, and the event B is that the student observed is wearing trousers. To compute P(A|B), we first need to know: P(A), the probability that the student is a girl regardless of any other information. Since the observer sees a random student, all students have the same probability of being observed, and the fraction of girls among the students is 40%, so P(A) = 0.4. P(A'), the probability that the student is a boy regardless of any other information (A' is the complementary event to A), is 60%, i.e. P(A') = 0.6.
P(B|A), the probability of the student wearing trousers given that the student is a girl: since girls are as likely to wear skirts as trousers, this is 0.5. P(B|A'), the probability of the student wearing trousers given that the student is a boy, is given as 1. P(B), the probability of a (randomly selected) student wearing trousers regardless of any other information: since P(B) = P(B|A)P(A) + P(B|A')P(A'), this is 0.5×0.4 + 1×0.6 = 0.8. (Of 40 girls, 20 wear trousers; of 60 boys, all 60 wear trousers; so 80 of 100 students wear trousers. The probability that the observed student wears trousers is P(B) = 0.8 = P(girl and trousers) + P(boy and trousers) = P(B|A)·P(A) + P(B|A')·P(A').)
Given all this information, the probability that the observer has spotted a girl, given that the observed student is wearing trousers, can be computed by substituting these values into the formula:

P(A|B) = P(B|A) P(A) / P(B) = (0.5 × 0.4) / 0.8 = 0.25

Each cell below pairs the prior conditional probability with the corresponding posterior/updated probability (the four cells are: trousers & girl (B, A); skirts & girl (B', A); trousers & boy (B, A'); skirts & boy (B', A')):

              Girls (A)                       Boys (A')
Trousers (B)  P(B|A) = 0.5,  P(A|B) = 0.25    P(B|A') = 1.0,  P(A'|B) = 0.75
Skirts (B')   P(B'|A) = 0.5, P(A|B') = 1.0    P(B'|A') = 0.0, P(A'|B') = 0.0
Total         P(A) = 0.4                      P(A') = 0.6
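The substitution above can be checked with a short script (variable names are mine):

```python
# Priors from the example: 40% girls; girls wear trousers half the time,
# boys always wear trousers.
p_girl = 0.4                # P(A)
p_trousers_if_girl = 0.5    # P(B|A)
p_trousers_if_boy = 1.0     # P(B|A')

# Total probability of observing trousers: P(B) = P(B|A)P(A) + P(B|A')P(A').
p_trousers = p_trousers_if_girl * p_girl + p_trousers_if_boy * (1 - p_girl)

# Bayes' rule: P(A|B) = P(B|A) P(A) / P(B).
p_girl_if_trousers = p_trousers_if_girl * p_girl / p_trousers
print(p_trousers, p_girl_if_trousers)  # 0.8 0.25
```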
Another, essentially equivalent way of obtaining the same result is as follows. Assume, for concreteness, that there are 100 students: 60 boys and 40 girls. Among these, 60 boys and 20 girls wear trousers. Altogether there are 80 trouser-wearers, of whom 20 are girls. Therefore the chance that a random trouser-wearer is a girl is 20/80 = 0.25. When calculating conditional probabilities it is often helpful to create a simple table containing the number of occurrences of each outcome, or the relative frequencies of each outcome, for each of the independent variables. The table below illustrates this method for the girl-or-boy example:

           Girls           Boys           Total
Trousers   20/80 = 0.25    60/80 = 0.75   80/80 = 1.0
Skirts     20/20 = 1.0     0/20 = 0.0     20/20 = 1.0
Persons    40/100 = 0.4    60/100 = 0.6   100/100 = 1.0

(The Persons row is the prior information; the Trousers and Skirts rows give the posterior information.)