Centipede game in normal form

[Figure: extensive-form game tree of a four-move centipede game]
In the unique subgame perfect equilibrium, each player chooses to defect at every opportunity, which of course means defection at the first stage. In the Nash equilibria, however, the actions that would be taken after the initial choice opportunities (even though they are never reached, since the first player defects immediately) may be cooperative.

Defection by the first player is the unique subgame perfect equilibrium and is required by any Nash equilibrium; it can be established by backward induction. Suppose two players reach the final round of the game; the second player will do better by defecting and taking a slightly larger share of the pot.

Since we suppose the second player will defect, the first player does better by defecting in the second to last round, taking a slightly higher payoff than she would have received by allowing the second player to defect in the last round. But knowing this, the second player ought to defect in the third to last round, taking a slightly higher payoff than she would have received by allowing the first player to defect in the second to last round.

This reasoning proceeds backwards through the game tree until one concludes that the best action is for the first player to defect in the first round. The same reasoning applies at any node in the game tree. In the example pictured above, this reasoning proceeds as follows. If we were to reach the last round of the game, Player 2 would do better by choosing d instead of r. However, given that 2 will choose d, 1 should choose D in the second to last round, receiving 3 instead of 2.

Given that 1 would choose D in the second to last round, 2 should choose d in the third to last round, receiving 2 instead of 1. But given this, Player 1 should choose D in the first round, receiving 1 instead of 0.
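The backward pass just described can be sketched in a few lines of code. This is a minimal sketch, not a general game solver: the defect payoffs (1, 0), (0, 2), (3, 1) at the first three nodes follow the walkthrough above, while the last-node and pass-through payoffs (2, 4) and (4, 3) are assumed values chosen to be consistent with it.

```python
# Backward induction on the four-move centipede game described above.
# The defect payoffs at the first three nodes follow the walkthrough;
# the last two payoff pairs, (2, 4) and (4, 3), are assumptions
# consistent with it, chosen for illustration.

def backward_induction(defect_payoffs, final_payoff):
    """Return the optimal action at each node and the resulting payoffs."""
    outcome = final_payoff            # payoffs if every remaining mover passes
    plan = []
    # Walk the nodes from last to first; players alternate, with
    # player 1 moving at nodes 1, 3, ... (even 0-based indices).
    for node in reversed(range(len(defect_payoffs))):
        mover = node % 2              # 0 = player 1, 1 = player 2
        take = defect_payoffs[node]
        # Defect when it beats what the mover would get by passing.
        if take[mover] > outcome[mover]:
            outcome = take
            plan.append((node + 1, "defect"))
        else:
            plan.append((node + 1, "pass"))
    plan.reverse()
    return plan, outcome

plan, outcome = backward_induction([(1, 0), (0, 2), (3, 1), (2, 4)], (4, 3))
print(plan)     # every node: defect
print(outcome)  # (1, 0): player 1 defects immediately
```

Running the sketch reproduces the walkthrough: every node's optimal action is to defect, so player 1 takes 1 in the first round.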

There are a large number of Nash equilibria in a centipede game, but in each the first player defects on the first round and the second player defects in the next round frequently enough to dissuade the first player from passing. Unlike subgame perfect equilibrium, Nash equilibrium does not require that strategies be rational at every point in the game.

This means that strategies that are cooperative in the never-reached later rounds of the game could still be in a Nash equilibrium. In the example above, one Nash equilibrium is for both players to defect on each round even in the later rounds that are never reached.

Another Nash equilibrium is for player 1 to defect on the first round but pass on the third round, and for player 2 to defect at any opportunity. Several studies have demonstrated that Nash equilibrium (and likewise subgame perfect equilibrium) play is rarely observed. Instead, subjects regularly show partial cooperation, playing "R" or "r" for several moves before eventually choosing "D" or "d".

It is also rare for subjects to cooperate through the whole game. As in many other game theoretic experiments, scholars have investigated the effect of increasing the stakes.

As with other games, for instance the ultimatum game, play approaches but does not reach Nash equilibrium play as the stakes increase. Since the empirical studies have produced results that are inconsistent with the traditional equilibrium analysis, several explanations of this behavior have been offered.

Rosenthal suggested that if one has reason to believe her opponent will deviate from Nash behavior, then it may be advantageous not to defect on the first round.

One reason to suppose that people may deviate from equilibrium behavior is that some are altruistic. The basic idea is that if you are playing against an altruist, that person will always cooperate, and hence, to maximize your payoff you should defect on the last round rather than the first.

If enough people are altruists, sacrificing the payoff of first-round defection is worth the price in order to determine whether or not your opponent is an altruist. Passing strictly decreases a player's payoff if the opponent defects on the next move; but if the opponent also passes, the two players are faced with the same choice situation with reversed roles and increased payoffs.
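The altruism argument can be made concrete with a small, hedged calculation. The payoff numbers below are taken from the four-move example earlier and are assumptions chosen for illustration: defecting now yields 1, passing yields 0 against a rational opponent (who defects next), and 3 against an altruist (who passes, letting you defect at a later node).

```python
# Hedged sketch: is first-round passing worthwhile if a fraction p of
# opponents are altruists who always pass?  The payoff numbers are
# assumptions based on the four-move example above: defect now -> 1,
# pass vs. a rational opponent -> 0, pass vs. an altruist -> 3.
def pass_is_worthwhile(p, defect_now=1, vs_rational=0, vs_altruist=3):
    expected_pass = p * vs_altruist + (1 - p) * vs_rational
    return expected_pass > defect_now

print(pass_is_worthwhile(0.5))  # True: expected 1.5 beats 1
print(pass_is_worthwhile(0.2))  # False: expected 0.6 does not
```

With these numbers, passing pays once more than a third of the population is altruistic, which is the sense in which "enough" altruists make first-round cooperation rational.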

The game has a finite number of moves, which is known in advance to both players. In the above diagram, a 1 at a black circle ("decision node") denotes a decision opportunity for player 1.

A 2 at a decision node tells us that person 2 can make a decision here. The top number at the end of each vertical line is a payoff for player 1 and the bottom number is a payoff for player 2. Player 1 has the first move: if she chooses D, both players get 1; if she chooses A, the opportunity to make a decision passes to player 2.

Player 2 has the second move: if he chooses D, player 1 gets a payoff of 0 and he gets 3; if he chooses A, the opportunity to make a decision passes back to player 1. And so on to the end of the game tree. If both players always choose A, both receive their payoffs at the end of the game tree, which are larger than the payoff of 1 from immediate defection.

Note also that both players receive a payoff of 1 if player 1 chooses D on his first move. What does game theory predict will happen? Game theory predicts that player 1 will choose D on his first move, and thus both players will receive a payoff of 1!

How can that be the case? Applying backward induction to the game in extensive form shows why player 1 will choose D on his first move. Furthermore, since all Nash equilibria predict the same outcome, the usual refinements of Nash equilibrium also make the same prediction.

In the ultimatum game, a proposer offers a division of a sum of money to a responder. If the responder accepts, the money is split per the proposal; if the responder rejects, both players receive nothing. Both players know in advance the consequences of the responder accepting or rejecting the offer.

In game theory, the battle of the sexes (BoS) is a two-player coordination game that also involves elements of conflict.

The game was introduced in 1957 by Luce and Raiffa in their classic book, Games and Decisions.

In game theory, a solution concept is a formal rule for predicting how a game will be played. These predictions are called "solutions", and describe which strategies will be adopted by players and, therefore, the result of the game. The most commonly used solution concepts are equilibrium concepts, most famously Nash equilibrium.

An extensive-form game is a specification of a game in game theory, allowing for the explicit representation of a number of key aspects, like the sequencing of players' possible moves, their choices at every decision point, the information each player has about the other player's moves when they make a decision, and their payoffs for all possible game outcomes. Extensive-form games also allow for the representation of incomplete information in the form of chance events modeled as "moves by nature".

In game theory, a perfect Bayesian equilibrium (PBE) is an equilibrium concept relevant for dynamic games with incomplete information.

A perfect Bayesian equilibrium has two components: strategies and beliefs.

The Stackelberg leadership model is a strategic game in economics in which the leader firm moves first and then the follower firms move sequentially. It is named after the German economist Heinrich Freiherr von Stackelberg, who published Market Structure and Equilibrium (Marktform und Gleichgewicht) in 1934, which described the model.

Backward induction is the process of reasoning backwards in time, from the end of a problem or situation, to determine a sequence of optimal actions. It proceeds by examining the last point at which a decision is to be made and identifying what action would be optimal at that moment. Using this information, one can then determine what to do at the second-to-last point of decision. This process continues backwards until one has determined the best action for every possible situation at every point in time.

Backward induction was first used in 1875 by Arthur Cayley, who uncovered the method while trying to solve the infamous secretary problem.

In game theory, trembling hand perfect equilibrium is a refinement of Nash equilibrium due to Reinhard Selten. A trembling hand perfect equilibrium is an equilibrium that takes the possibility of off-the-equilibrium play into account by assuming that the players, through a "slip of the hand" or tremble, may choose unintended strategies, albeit with negligible probability.

In game theory, folk theorems are a class of theorems describing an abundance of Nash equilibrium payoff profiles in repeated games. The original Folk Theorem concerned the payoffs of all the Nash equilibria of an infinitely repeated game.

This result was called the Folk Theorem because it was widely known among game theorists in the 1950s, even though no one had published it. Friedman's Theorem concerns the payoffs of certain subgame-perfect Nash equilibria (SPE) of an infinitely repeated game, and so strengthens the original Folk Theorem by using a stronger equilibrium concept: subgame-perfect Nash equilibria rather than Nash equilibria.

In game theory, a repeated game is an extensive form game that consists of a number of repetitions of some base game. The stage game is usually one of the well-studied 2-person games.

Repeated games capture the idea that a player will have to take into account the impact of his or her current action on the future actions of other players; this impact is sometimes called his or her reputation.

Single-stage game or single-shot game are names for non-repeated games.

Informally, a strategy set is a MAPNASH (Manipulated Nash equilibrium) of a game if it would be a subgame perfect equilibrium of the game if the game had perfect information. It is a solution concept based on how players think about other players' thought processes.

In game theory, a subgame perfect equilibrium is a refinement of a Nash equilibrium used in dynamic games. A strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game. Informally, this means that at any point in the game, the players' behavior from that point onward should represent a Nash equilibrium of the continuation game, no matter what happened before. Every finite extensive game with perfect recall has a subgame perfect equilibrium.

Perfect recall is a term introduced by Harold W. Kuhn in 1953; it is "equivalent to the assertion that each player is allowed by the rules of the game to remember everything he knew at previous moves and all of his choices at those moves".

Quantal response equilibrium (QRE) is a solution concept in game theory. First introduced by Richard McKelvey and Thomas Palfrey, it provides an equilibrium notion with bounded rationality.

QRE is not an equilibrium refinement, and it can give significantly different results from Nash equilibrium. QRE is only defined for games with discrete strategies, although there are continuous-strategy analogues.
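For the logit form of QRE, each player's mixed strategy is a softmax ("quantal") response to the other's, with a rationality parameter λ. The sketch below is illustrative only: the 2×2 payoff matrices are made up, and naive fixed-point iteration is just one simple way to look for the equilibrium, not McKelvey and Palfrey's solution procedure.

```python
import math

def logit_qre(A, B, lam, iters=1000):
    """Approximate a logit quantal response equilibrium of a 2x2 game.

    A, B: payoff matrices for players 1 and 2; lam: rationality parameter
    (lam = 0 gives uniform play; larger lam means sharper best responses).
    """
    p, q = 0.5, 0.5  # probability each player assigns to their first strategy
    for _ in range(iters):
        # expected payoff of each pure strategy against the opponent's mix
        u1 = [A[i][0] * q + A[i][1] * (1 - q) for i in range(2)]
        u2 = [B[0][j] * p + B[1][j] * (1 - p) for j in range(2)]
        # logit (softmax) response with precision lam
        e1 = [math.exp(lam * u) for u in u1]
        e2 = [math.exp(lam * u) for u in u2]
        p, q = e1[0] / sum(e1), e2[0] / sum(e2)
    return p, q

A = [[3, 0], [2, 1]]   # assumed payoffs, for illustration only
B = [[3, 2], [0, 1]]
print(logit_qre(A, B, 0.0))   # (0.5, 0.5): with lam = 0, play is uniform
```

At λ = 0 payoffs are ignored entirely and both players randomize uniformly, which is the sense in which QRE models bounded rationality rather than refining Nash equilibrium.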

In game theory, an epsilon-equilibrium, or near-Nash equilibrium, is a strategy profile that approximately satisfies the condition of Nash equilibrium.

In a Nash equilibrium, no player has an incentive to change his behavior. In an approximate Nash equilibrium, this requirement is weakened to allow the possibility that a player may have a small incentive to do something different. This may still be considered an adequate solution concept, assuming for example status quo bias.
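This weakened condition can be checked directly: a profile is an epsilon-equilibrium for any epsilon at least as large as the biggest payoff gain any player can get by deviating unilaterally. The sketch below does this for a 2×2 bimatrix game; the payoff matrices and the candidate profile are assumptions chosen for illustration.

```python
# Sketch of verifying an epsilon-equilibrium in a 2x2 bimatrix game;
# the payoff matrices and candidate profile are illustrative assumptions.
def max_deviation_gain(A, B, x, y):
    """Largest payoff gain either player gets from a unilateral pure deviation."""
    def expected(M, p, q):
        return sum(p[i] * M[i][j] * q[j] for i in range(2) for j in range(2))
    # best pure strategy for the row player against y, and for the
    # column player against x
    best_row = max(sum(A[i][j] * y[j] for j in range(2)) for i in range(2))
    best_col = max(sum(x[i] * B[i][j] for i in range(2)) for j in range(2))
    return max(best_row - expected(A, x, y), best_col - expected(B, x, y))

A = [[3, 0], [2, 1]]           # row player's payoffs (assumed)
B = [[3, 2], [0, 1]]           # column player's payoffs (assumed)
x, y = [0.9, 0.1], [0.9, 0.1]  # candidate mixed strategies

eps = max_deviation_gain(A, B, x, y)
# (x, y) is an epsilon-equilibrium for any epsilon >= eps.
print(round(eps, 6))  # 0.08
```

Here neither player can gain more than 0.08 by deviating, so the profile is a 0.08-equilibrium even though it is not an exact Nash equilibrium.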

This solution concept may be preferred to Nash equilibrium due to being easier to compute, or alternatively due to the possibility that in games of more than 2 players, the probabilities involved in an exact Nash equilibrium need not be rational numbers.

Cooperative bargaining is a process in which two people decide how to share a surplus that they can jointly generate. In many cases, the surplus created by the two players can be shared in many ways, forcing the players to negotiate which division of payoffs to choose.

Such surplus-sharing problems are faced by management and labor in the division of a firm's profit, by trade partners in the specification of the terms of trade, and more.

A non-credible threat is a term used in game theory and economics to describe a threat in a sequential game that a rational player would not actually carry out, because it would not be in his best interest to do so.

Mertens stability is a solution concept used to predict the outcome of a non-cooperative game. Mertens proposed a strengthened definition of stability that was elaborated further by Srihari Govindan and Mertens; this solution concept is now called Mertens stability, or just stability.

Cognitive hierarchy theory (CHT) is a behavioral model originating in behavioral economics and game theory that attempts to describe human thought processes in strategic games.


