Game theory and epistemology

Before we make use of these choice rules, we need to address two potentially confusing issues about these definitions. The definitions of strict and weak dominance are given in terms of mixed strategies even though we are assuming that players only select pure strategies.

That is, we are not considering situations in which players explicitly randomize. In particular, recall that only pure strategies are associated with states in a game model. Even so, we do not replace the above definition with a version that quantifies only over pure strategies, because the two definitions are equivalent in this setting.

Each state in an epistemic-plausibility model is associated with such a set of strategy profiles; the precise definition depends on the type of game model. This can be made more precise using the following well-known Lemma (Lemma 3), whose proof is given in the supplement, Section 1. The general conclusion is that no strictly dominated strategy can maximize expected utility at a given state; conversely, a strategy that is a best response in some context is not strictly dominated.
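To make the content of this Lemma concrete, the following Python sketch verifies, for one small game, that a strictly dominated strategy never maximizes expected utility. The 3x2 game and its payoffs are hypothetical, and checking a finite grid of beliefs is an illustration of the Lemma, not a proof of it:

```python
# Hypothetical payoffs for the row player; rows t, m, b against columns l, r.
# Strategy "b" is strictly dominated by the 50/50 mixture of "t" and "m"
# (which yields expected payoff 1.5 against either column, versus 1 for "b").
U = {"t": [3, 0], "m": [0, 3], "b": [1, 1]}

def expected_utility(strategy, p):
    """Expected utility of a pure strategy when the opponent plays l with
    probability p and r with probability 1 - p."""
    u = U[strategy]
    return p * u[0] + (1 - p) * u[1]

def best_responses(p):
    """The pure strategies maximizing expected utility against belief p."""
    values = {s: expected_utility(s, p) for s in U}
    top = max(values.values())
    return {s for s, v in values.items() if abs(v - top) < 1e-9}

# In line with the Lemma, the dominated strategy "b" is never a best
# response, whatever the belief about the opponent's choice.
assert all("b" not in best_responses(i / 100) for i in range(101))
```

Note that the dominating strategy here is mixed: no pure strategy dominates "b", which is why the definitions are stated in terms of mixed strategies.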

Similar facts hold about weak dominance, though the situation is more subtle. The crucial observation is that there is a characterization of weak dominance in terms of best responses to certain types of probability measures, namely full-support ones. The following analogue of Lemma 3 holds; its proof is more involved (see Bernheim, Appendix A). Comparing these two Lemmas, we see that strict dominance implies weak dominance, but not necessarily vice versa.

A strategy might fail to be a best response to any full-support probability measure while still being a best response to some particular beliefs, for instance beliefs assigning probability one to a state where the player is indifferent between the outcome of her present action and that of the potentially inadmissible one.

There is another, crucial, difference between weak and strict dominance. The following observation (Observation 3) is immediate from the definition of strict dominance.

If a strategy is strictly dominated, it remains so if the player gets more information about what her opponents might do. The same observation does not hold for weak dominance.

The existential part of the definition of weak dominance means that the analogue of Observation 3 fails for weak dominance.

The epistemic approach to game theory focuses on the choices of individual decision makers in specific informational contexts, assessed on the basis of decision-theoretic choice rules.

This is a bottom-up, as opposed to the classical top-down, approach. An important line of research in epistemic game theory asks under what epistemic conditions players will follow the recommendations of a particular solution concept. Providing such conditions is known as an epistemic characterization of a solution concept. In this section, we present two fundamental epistemic characterization results.

The first is a characterization of iterated removal of strictly dominated strategies (henceforth ISDS), and the second is a characterization of backward induction. These epistemic characterization results are historically important: they mark the beginning of epistemic game theory as we know it today. Furthermore, they are also conceptually important, and the developments in later sections build on the ideas presented in this section. For that reason, instead of focusing on the formal details, the emphasis here will be on their significance for the epistemic foundations of game theory.

One important message is that these results highlight the importance of higher-order information. Iterated elimination of strictly dominated strategies (ISDS) is a solution concept that runs as follows.

After having removed the strictly dominated strategies in the original game, look at the resulting sub-game, remove the strategies which have become strictly dominated there, and repeat this process until the elimination does not remove any strategies.

The profiles that survive this process are said to be iteratively non-dominated. That is, iteratively removing strictly dominated strategies generates a decreasing sequence of games. For arbitrarily large finite strategic games, if all players are rational and there is common belief that all players are rational, then each player will choose a strategy that is iteratively non-dominated.
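The elimination procedure just described can be sketched in Python. For simplicity, this sketch only checks dominance by pure strategies (the official definition also allows dominance by mixed strategies), and the 2x3 game used is a hypothetical example, not one from the text:

```python
def strictly_dominated(payoff, s, own, other):
    """True if pure strategy s is strictly dominated by some other pure
    strategy in `own`, against every opponent strategy in `other`."""
    return any(
        all(payoff[(t, o)] > payoff[(s, o)] for o in other)
        for t in own if t != s
    )

def isds(rows, cols, u_row, u_col):
    """Iterated elimination of strictly dominated (pure) strategies."""
    rows, cols = set(rows), set(cols)
    while True:
        dead_rows = {s for s in rows if strictly_dominated(u_row, s, rows, cols)}
        dead_cols = {s for s in cols if strictly_dominated(u_col, s, cols, rows)}
        if not dead_rows and not dead_cols:
            return rows, cols  # no further eliminations: these profiles survive
        rows -= dead_rows
        cols -= dead_cols

# Hypothetical game. The row player chooses u/d, the column player l/c/r.
u_row = {("u", "l"): 3, ("u", "c"): 1, ("u", "r"): 0,
         ("d", "l"): 2, ("d", "c"): 0, ("d", "r"): 4}
# Column player's payoffs, indexed (own strategy, opponent strategy).
u_col = {("l", "u"): 2, ("c", "u"): 3, ("r", "u"): 1,
         ("l", "d"): 2, ("c", "d"): 1, ("r", "d"): 0}

surviving = isds({"u", "d"}, {"l", "c", "r"}, u_row, u_col)
```

In this example "r" is eliminated first; once "r" is gone, "d" becomes dominated, and then "l", leaving only the profile (u, c). This illustrates the iterative character of the procedure: strategies that are undominated in the original game can become dominated in a sub-game.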

The result is credited to Bernheim and Pearce. Before stating the formal result, we illustrate it with an example. The next step is to identify the types that believe that the other players are rational; in this context, belief means assigning probability 1. After introducing the necessary notation and putting everything together, we have the following.

Note that the above process need not generate all strategies that survive iterated removal of strictly dominated strategies. However, for any type space, if a strategy profile is consistent with rationality and common belief of rationality, then it must survive iterated removal of strictly dominated strategies.

This result (Theorem 4) establishes sufficient conditions for ISDS. It also has a converse direction: given any strategy profile that survives iterated elimination of strictly dominated strategies, there is a model in which this profile is played, all players are rational, and this is common knowledge. In other words, one can always view or interpret the choice of a strategy profile that survives the iterative elimination procedure as one resulting from common knowledge of rationality.

Of course, this form of the converse is not particularly interesting, as we can always define a type space in which all the players assign probability 1 to the given strategy profile and everyone plays their requisite strategy. Much more interesting is the question of whether the entire set of strategy profiles that survive iterated removal of strictly dominated strategies is consistent with rationality and common belief in rationality.

Analogues of the above results have been proven using different game models. Many authors have pointed out the strength of the common belief assumption in the results of the previous section. It requires that the players not only believe that the others are not choosing an irrational strategy, but also believe that everybody believes that nobody is choosing an irrational strategy, that everyone believes that everyone believes that nobody is choosing an irrational strategy, and so on.

It should be noted, however, that this unbounded character is there only to ensure that the result holds for arbitrary finite games. A possible reply to the criticism of the infinitary nature of the common belief assumption is that the result should be seen as the analysis of a benchmark case, rather than a description of genuine game playing situations or a prescription for what rational players should do (Aumann). The results above show that, once formalized, this assumption does indeed lead to a classical solution concept, although, interestingly, not to the well-known Nash equilibrium, as is often informally claimed in the early game-theoretic literature.

Epistemic conditions for Nash equilibrium are presented in Section 5. The main message to take away from the results in the previous section is: strategic reasoning in games involves higher-order information. In particular, first-order belief of rationality will in general not suffice. There are two further issues we need to address. First of all, how can agents arrive at a context where rationality is commonly believed? The above results do not answer that question; it has been the subject of recent work in Dynamic Epistemic Logic (van Benthem).

Here, the most well-known solution concept is the so-called subgame perfect equilibrium, also known as backward induction in games of perfect information.

The main point that we highlight in this section, which is by now widely acknowledged in the literature, is that belief revision policies play a key role in the epistemic analysis of extensive games. The most well-known illustration of this is the comparison of two apparently contradictory results regarding the consequences of assuming rationality and common knowledge of rationality in extensive games: Aumann showed that this epistemic condition implies that the players will play according to the backward induction solution, while Stalnaker argued that this is not necessarily true.

Extensive games make explicit the sequential structure of choices in a game situation. In this section, we focus on games of perfect information, in which there is no uncertainty about earlier choices in the game. These games are represented by tree-like structures (Definition 4). A strategy is a term of art in extensive games: it denotes a plan for every eventuality, telling an agent what to do at every history where it is her turn to play, even those excluded by the strategy itself.

The following example of a perfect information extensive game will be used to illustrate these concepts. The game is an instance of the well-known centipede game, which has played an important role in the epistemic game theory literature on extensive games. The labels of the edges in the above tree are the actions available to each player. The game models vary according to which epistemic attitudes are represented.

One of the simplest approaches is to use the epistemic models introduced in Section 2 (Aumann; Halpern). The rationality of a strategy at a decision node depends both on the actions the strategy prescribes at all future decision nodes and on what the players know about the strategies that their opponents are following.

We shall return to this in the discussion below. Information about the rationality of players at pre-terminal nodes is very important for players choosing earlier in the game. Furthermore, the reasoning that we went through in the previous paragraphs is very close to the backward induction algorithm. This algorithm can be used to calculate the subgame perfect equilibrium in any perfect information game in which all players receive unique payoffs at each outcome.

BI Algorithm: at terminal nodes, players already have the nodes marked with their utilities. In fact, the markings on each and every node (even nodes not on the backward induction path) define a unique path through the game tree. Aumann showed that the above reasoning can be carried out for any extensive game of perfect information. This result has been extensively discussed. The standard ground of contention is that the common knowledge of rationality used in this argument seems self-defeating, at least intuitively.
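The marking procedure can be rendered as a short recursive sketch. The following Python code is an illustration under stated assumptions: the tree encoding and the example payoffs are hypothetical, and payoffs are taken to be generic, so that there is a unique maximizer at every decision node:

```python
# A minimal sketch of the backward induction (BI) algorithm described above.
# The tree encoding and the example payoffs are illustrative assumptions.

def backward_induction(node):
    """Return (utility profile, action path) reached by BI play from `node`.
    A node is either ('leaf', payoffs) or ('choice', player, {action: subtree})."""
    if node[0] == "leaf":
        return node[1], []
    _, player, children = node
    best_action = best_value = best_path = None
    for action, subtree in children.items():
        value, path = backward_induction(subtree)
        if best_value is None or value[player] > best_value[player]:
            best_action, best_value, best_path = action, value, path
    return best_value, [best_action] + best_path

# A three-move centipede-like game; players 0 and 1 alternate, each choosing
# to stop ("down") or continue ("across"). Payoff tuples are (player 0, player 1).
game = ("choice", 0, {
    "down": ("leaf", (1, 0)),
    "across": ("choice", 1, {
        "down": ("leaf", (0, 2)),
        "across": ("choice", 0, {
            "down": ("leaf", (3, 1)),
            "across": ("leaf", (2, 4)),
        }),
    }),
})

value, path = backward_induction(game)
# As in the centipede game discussed in the text, BI play stops immediately.
```

Under the genericity assumption, the strict comparison picks a unique action at every node, which is exactly what makes the marked path through the tree unique.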

Both violate common knowledge of rationality. Is there a contradiction here? This entry will not survey the extensive literature on this question.

The reader can consult the references in de Bruin. Stalnaker offers a different perspective on backward induction. However, this is just one example of a belief revision policy.

It is not suggested that this is the belief revision policy that players should adopt. Faced with surprising behavior in the course of a game, the players must decide what then to believe. The players must decide, but the theorists should not—at least they should not try to generalize about epistemic priorities that are meant to apply to any rational agent in all situations. One belief revision policy that has been extensively discussed in the epistemic game theory literature is the rationalizability principle.

Battigalli describes this belief revision policy in detail. It is closely related to so-called forward induction reasoning.

To illustrate, consider the following imperfect information game. This is the forward induction outcome of the above game. Battigalli and Siniscalchi build on an idea of Stalnaker to characterize forward induction solution concepts in terms of common strong belief in rationality.

The evidence available to a player in an extensive game consists of the observations of the previous moves that are consistent with the structure of the game tree. A complete discussion of this approach is beyond the scope of the entry. In this section, we present a number of results that build on the methodology presented in the previous section. Iterated elimination of strictly dominated strategies is a very intuitive concept, but for many games it tells us nothing about what the players will or should choose.

In coordination games (Figure 1 above), for instance, all profiles can be played under rationality and common belief of rationality. Intuitively, it is quite clear that his rational choice is to coordinate with her. The situation is symmetric for Ann.

More formally, Ann is rational only at states where her choice is a best response to what her type knows (i.e., truly believes) about Bob's choice. A Nash equilibrium is a profile in which no player has an incentive to unilaterally deviate from his strategy choice. In other words, a Nash equilibrium is a combination of (possibly mixed) strategies such that each player's strategy is a best response given the strategy choices of the others. See also Spohn for an early statement. Consider the following coordination game.

As usual, we fix an informational context for this game. Furthermore, while it is true that both Ann and Bob are rational, it is not common knowledge that they are rational. The example above is a situation where there is mutual knowledge of the choices of the players.
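The best-response property that defines Nash equilibrium is easy to check directly in a small matrix game. The following Python sketch uses a hypothetical coordination game of the kind discussed above (the strategy names and payoffs are illustrative assumptions) and enumerates the pure-strategy equilibria:

```python
from itertools import product

# Illustrative sketch; both payoff tables are indexed by
# (row strategy, column strategy).

def pure_nash(rows, cols, u_row, u_col):
    """All pure-strategy Nash equilibria: profiles from which no player
    gains by a unilateral deviation."""
    equilibria = []
    for r, c in product(rows, cols):
        row_ok = all(u_row[(r, c)] >= u_row[(r2, c)] for r2 in rows)
        col_ok = all(u_col[(r, c)] >= u_col[(r, c2)] for c2 in cols)
        if row_ok and col_ok:
            equilibria.append((r, c))
    return equilibria

# A coordination game: coordinating on (u, l) or on (d, r) are the only
# profiles from which neither player wants to deviate unilaterally.
u_row = {("u", "l"): 2, ("u", "r"): 0, ("d", "l"): 0, ("d", "r"): 1}
u_col = {("u", "l"): 2, ("u", "r"): 0, ("d", "l"): 0, ("d", "r"): 1}
equilibria = pure_nash(["u", "d"], ["l", "r"], u_row, u_col)
```

Notice that the check is purely about the profile itself: nothing in it refers to what the players know or believe, which is why epistemic conditions for Nash equilibrium have to be supplied separately.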

There is a more general theorem (Theorem 5) concerning mixed strategy equilibrium. The general version of this result, for an arbitrary finite number of agents and allowing for mixed strategies, requires common knowledge of conjectures, i.e., of the players' beliefs about each other's choices. This epistemic characterization of Nash equilibrium requires mutual knowledge rather than mere belief: the result fails when agents can be mistaken about the strategy choices of the others.

This has led some authors to criticize this epistemic characterization (see Gintis and de Bruin, for instance). How could the players ever know what the others are choosing? Is it not contrary to the very idea of a game, in which the players are free to choose whatever they want (Baltag et al.)?

One popular response to this criticism (Brandenburger; Perea) is that the above result tells us something about Nash equilibrium as a solution concept, namely that it alleviates strategic uncertainty. Indeed, returning to the terminology introduced in Section 1:

When players have reached full knowledge of what the others are going to do, there is nothing left to think about regarding the other players as rational, deliberating agents. Their choices are fixed, after all.

The idea here is not to reject the epistemic characterization of Nash equilibrium on the grounds that it rests on unrealistic assumptions, but, rather, to view it as a lesson learned about Nash equilibrium itself. From an epistemic point of view, where the focus is on strategic reasoning about what others are going to do and are thinking, this solution concept might be of less interest.

There is another important lesson to draw from this epistemic characterization result. To be sure, game theoretic models do assume that the structure of the game is common knowledge (though, see Section 5). Nonetheless, the above result shows that common knowledge of rationality is neither necessary nor sufficient for Nash equilibrium.

In fact, as we just stressed, Nash equilibrium can be played under full uncertainty, and a fortiori under higher-order uncertainty, about the rationality of others. There are also various logical characterizations of Nash equilibrium; most of these are not epistemic, and thus fall outside the scope of this entry.

In the context of this entry, it is important to note that most of these results aim at something different from the epistemic characterizations we are discussing in this section. Developed mostly in computer science, these logical languages have been used to verify properties of multi-agent systems, not to provide epistemic foundations for this solution concept. Note, however, that in recent years a number of logical characterizations of Nash equilibrium do explicitly use epistemic concepts (see, for example, van Benthem et al.).

A key issue in epistemic game theory is the epistemic analysis of iterated removal of weakly dominated strategies. For example, Samuelson showed, among other things, that the analogue of Theorem 4 fails for iterated weak dominance. The main problem is illustrated by the following game.

This issue is nicely described in a well-known microeconomics textbook. The hypothesis in question clashes with the logic of iterated deletion, which assumes, precisely, that eliminated strategies are not expected to occur. The extent of this conflict is nicely illustrated in Samuelson. Given the above considerations, the epistemic analysis of iterated weak dominance is not a straightforward adaptation of the analysis of iterated strict dominance discussed in the previous section.

A number of authors have developed frameworks that resolve this conflict (Brandenburger et al.). We sketch one of these solutions below.

So, representing beliefs as lexicographic probability measures resolves the conflict between strategic reasoning and the assumption that players do not play weakly dominated strategies. However, there is another, more fundamental, issue that arises in the epistemic analysis of iterated weak dominance. Under admissibility, Ann considers everything possible. But this is only a decision-theoretic statement about Ann: what does Bob consider possible? Alternatively put, it seems that a full analysis of the admissibility requirement should include the idea that other players do not conform to the requirement.
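To see how lexicographic beliefs restore admissibility, consider the following hedged Python sketch (the game fragment and the numbers are hypothetical): strategies are ranked by the tuple of their expected utilities under a sequence of probability measures, compared lexicographically, so a secondary, "infinitely less likely" hypothesis can break ties that the primary hypothesis leaves open:

```python
# Illustrative sketch. A lexicographic probability system (LPS) is a finite
# sequence of probability measures: the first is the primary hypothesis; the
# later ones are backup hypotheses that are still used to break ties.

def lex_expected_utilities(utility_row, lps):
    """Tuple of expected utilities of a payoff row, one per measure in the LPS."""
    return tuple(sum(p * u for p, u in zip(measure, utility_row))
                 for measure in lps)

# Row player's payoffs against columns (l, r): "u" weakly dominates "d".
U = {"u": [1, 1], "d": [1, 0]}

# Primary belief: the opponent surely plays l; secondary belief: she plays r.
lps = [[1.0, 0.0], [0.0, 1.0]]

# Under the primary measure alone the two strategies tie (both yield 1), so
# ordinary expected utility cannot rule out the inadmissible "d"; the
# secondary measure breaks the tie in favour of the admissible "u".
assert lex_expected_utilities(U["u"], lps) > lex_expected_utilities(U["d"], lps)
```

Python's tuple comparison is itself lexicographic, which is why the final comparison implements exactly the intended ranking: the opponent's unexpected move r matters, but only after the primary hypothesis has had its say.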

Brandenburger et al. provide such an analysis. There are two main ingredients to the epistemic characterization of iterated weak dominance. The precise answer turns out to be surprisingly subtle; the details are beyond the scope of this article (see Brandenburger et al.).

The game models introduced in Section 2 have been used to describe the uncertainty that the players have about what their opponents are going to do and are thinking in a game situation. In the analyses provided thus far, the structure of the game was assumed to be known to the players. However, there are many situations where the players do not have such complete information about the game. There is no inherent difficulty in using the models from Section 2 to describe situations where players are not perfectly informed about the structure of the game (for example, where there is some uncertainty about the available actions).

There is, however, a foundational issue that arises here. Now, there are many reasons why Ann would hold such an opinion. She may have a completely different model of the game in mind than her opponents. The foundational question is: Can the game models introduced in Section 2 faithfully represent this latter type of uncertainty?

We do not go into details here; see Halpern for a complete discussion of possibility structures and how they relate to epistemic models. Modica and Rustichini use a variant of the above Sherlock Holmes story to show that there is a problem with this definition of unawareness.

More generally, Dekel et al. show that standard state-space models preclude non-trivial unawareness. The Unawareness Bibliography (see Other Internet Resources) has an up-to-date list of papers in this area. As we noted already in Section 2, there is an important implicit assumption behind the choice of a structure.

It is not hard to see that one always finds substantive assumptions in finite structures: given a countably infinite set of atomic propositions, for instance, in finite structures it will always be common knowledge that some logically consistent combination of these basic facts is not realized, and a fortiori that some logically consistent configurations of information and higher-order information about these basic facts are not realized.

More generally, there are no models of games, as defined in Section 2, in which it is not common knowledge that the players believe all the logical consequences of their beliefs. Can we compare models in terms of the number of substantive assumptions that are made? Are there models that make no, or at least as few as possible, substantive assumptions? These questions have been extensively discussed in the epistemic foundations of game theory; see the discussion in Samuelson and the references in Moscati. Intuitively, a structure without any substantive assumptions must represent all possible states of higher-order information.

Such a structure, often called a universal structure (or a terminal object, in the language of category theory), if it exists, incorporates any substantive assumption that an analyst can imagine. A second approach takes an internal perspective by asking whether, for a fixed set of states (or types), the agents are making any substantive assumptions about what their opponents know or believe.

The idea is to identify in a given model a set of possible conjectures about the players. A space is said to be complete if each agent correctly takes into account each possible conjecture about her opponents. A simple counting argument shows that there cannot exist a complete structure when the set of conjectures is all subsets of the set of states (Brandenburger). However, there is a deeper result here, which we discuss below.
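The counting argument is essentially Cantor's theorem, and it can be replayed in a few lines of Python. In this sketch the three-state space and the particular assignment of conjectures are hypothetical illustrations, not taken from the text:

```python
from itertools import combinations

# Cantor-style counting argument: no map from states to subsets of states
# ("conjectures") can be onto, because the diagonal subset is always missed.
# Hence no structure can be complete when conjectures range over all
# subsets of the state space.

states = [0, 1, 2]

def powerset(xs):
    """All subsets of xs."""
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def diagonal(assignment):
    """A subset of states missed by `assignment` (Cantor's diagonal set)."""
    return {s for s in assignment if s not in assignment[s]}

# An arbitrary assignment of one conjecture to each state.
assignment = {0: {0, 1}, 1: set(), 2: {2}}

missed = diagonal(assignment)
# The diagonal set differs from assignment[s] on the membership of s itself,
# for every state s, so it is not in the range of the assignment.
assert all(missed != conjecture for conjecture in assignment.values())
# And there are strictly more conjectures than states: 2**3 > 3.
assert len(powerset(states)) == 2 ** len(states) > len(states)
```

Whatever assignment one chooses, the diagonal construction produces a conjecture no agent's type accounts for, which is the counting obstacle to completeness mentioned above.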

Adam Brandenburger and H. Jerome Keisler introduce the following two-person, Russell-style paradox. The statement of the paradox involves two concepts: beliefs and assumptions.

We will say more about the interpretation of an assumption below. Suppose there are two players, Ann and Bob, and consider the following description of beliefs. Brandenburger and Keisler formalize the above argument in order to prove a very strong impossibility result about the existence of so-called assumption-complete structures.

We need some notation to state this result. It will be most convenient to work with qualitative type spaces for two players (cf. Definition 2). A much deeper result is the following.

See the supplement (Section 2) for a discussion of the proof of this theorem (Theorem 6). The epistemic view on games is that players should be seen as individual decision makers, choosing what to do on the basis of their own preferences and the information they have in specific informational contexts.

What decision they will make (the descriptive question) or what decision they should make (the normative question) depends on the decision-theoretic choice rule that the players use, or should use, in a given context. We conclude with two general methodological issues about epistemic game theory and some pointers to further reading. Common knowledge of rationality is an informal assumption that game theorists, philosophers and other social scientists often appeal to when analyzing social interactive situations.

Broadly speaking, much of the epistemic game theory literature is focused on two types of projects. The goal of the first project is to map out the relationship between different mathematical representations of what the players know and believe about each other in a game situation.

The second project addresses the nature of rational choice in game situations. The importance of this project is nicely explained by Wolfgang Spohn: "Could we assume that his expectations were given, then his problem of strategy choice would become an ordinary maximization problem: he could simply choose a strategy maximizing his own payoff on the assumption that the other players would act in accordance with his given expectations" (Spohn). Maximization of expected utility, for instance, underlies most of the results in the contemporary literature on the epistemic foundations of game theory.

From a methodological perspective, however, the choice rule that the modeler assumes the players are following is simply a parameter that can be varied. The reader interested in more extensive coverage of all or some of the topics discussed in this entry should consult the following articles and books. Logic in Games by Johan van Benthem: this book uses the tools of modal logic, broadly conceived, to discuss many of the issues raised in this entry (MIT Press).

Epistemic Game Theory by Eddie Dekel and Marciano Siniscalchi: a survey paper aimed at economists covering the main technical results of epistemic game theory (available online). The Bounds of Reason: Game Theory and the Unification of the Behavioral Sciences by Herbert Gintis: this book offers a broad overview of the social and behavioral sciences using the ideas of epistemic game theory (Princeton University Press). The editors would like to thank Philippe van Basshuysen for reading this entry carefully and taking the time to inform us of a significant number of typographical errors.

Figure (Ann chooses rows, Bob chooses columns):

            l      c      r
     t    3,3    1,1    0,0
     m    1,1    3,3    1,0
     b    0,4    0,0    4,0

Figure: An extensive game.

Figure (Ann chooses rows, Bob chooses columns):

            l      r
     u    2,2    0,0
     d    0,0    1,1

Figure (Ann chooses rows, Bob chooses columns):

            l      r
     u    1,1    1,0
     d    1,0    0,1
