**Thursday 24th July**

**Game theory**

**Dr. Reto Mueller**

**Game Theory is the study of strategic decision making**

Game Theory is the study of mathematical models of conflict and cooperation between intelligent rational decision-makers. An alternative name for this theory is "interactive decision theory". Game theory is mainly used in economics, political science, and psychology, as well as in logic, computer science, and biology.

The normal (or strategic form) game is usually represented by a matrix which shows the players, strategies, and payoffs (see the example above). More generally it can be represented by any function that associates a payoff for each player with every possible combination of actions. In the accompanying example there are two players; one chooses the row and the other chooses the column. Each player has two strategies, which are specified by the number of rows and the number of columns. The payoffs are provided in the interior. The first number is the payoff received by the row player (Player 1 in the example); the second is the payoff for the column player (Player 2 in the example). Suppose that Player 1 plays Up and that Player 2 plays Left. Then Player 1 gets a payoff of 4, and Player 2 gets 3.
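A normal-form game can be stored directly as such a payoff function. In the sketch below, only the (Up, Left) payoff of (4, 3) comes from the example; the other three cells are made-up placeholders, since the full matrix is not reproduced here.

```python
# A normal-form game as a mapping from a pair of actions to a pair of
# payoffs (row player's payoff first).  Only the (Up, Left) entry is
# taken from the example; the other cells are placeholders.
game = {
    ("Up", "Left"): (4, 3),     # Player 1 gets 4, Player 2 gets 3
    ("Up", "Right"): (1, 1),    # placeholder
    ("Down", "Left"): (0, 0),   # placeholder
    ("Down", "Right"): (2, 2),  # placeholder
}

def payoff(row_choice, col_choice):
    """Return (row player's payoff, column player's payoff)."""
    return game[(row_choice, col_choice)]
```

Any function with this signature works equally well; the matrix is just the tabulated special case.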

When a game is presented in normal form, it is presumed that each player acts simultaneously or, at least, without knowing the actions of the other. If players have some information about the choices of other players, the game is usually presented in extensive form.

Every extensive-form game has an equivalent normal-form game; however, the transformation to normal form may result in an exponential blowup in the size of the representation, making it computationally impractical (Leyton-Brown & Shoham 2008, p. 35).

Looking again at the example above, which strategy should the players pick? In the first game, it is clear that the players will choose (up, left), maximising the profit for both of them. But what about the second game, shown below?

Player A does better with the "down" strategy (no matter what B picks) and Player B does better with the "left" strategy (no matter what A picks). So will they end up picking (down, left) – despite the fact that with (up, right) they would both profit more?
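The dominance argument can be checked mechanically. The payoffs below are assumptions chosen only to satisfy the stated properties ("down" dominates for A, "left" dominates for B, yet (up, right) pays both players more than (down, left)); the actual matrix from the slides may differ.

```python
# Assumed payoffs with the structure described in the text.
payoffs = {
    ("up", "left"): (0, 5),
    ("up", "right"): (3, 3),
    ("down", "left"): (1, 1),
    ("down", "right"): (5, 0),
}
ROWS, COLS = ("up", "down"), ("left", "right")

def dominant_strategy(player):
    """Return a strictly dominant pure strategy for the player
    (0 = A, the row player; 1 = B, the column player), or None."""
    own, other = (ROWS, COLS) if player == 0 else (COLS, ROWS)

    def pay(mine, theirs):
        cell = (mine, theirs) if player == 0 else (theirs, mine)
        return payoffs[cell][player]

    for s in own:
        rivals = [t for t in own if t != s]
        # s dominates if it beats every rival against every opposing choice.
        if all(pay(s, o) > pay(t, o) for t in rivals for o in other):
            return s
    return None
```

With these numbers the play indeed lands on (down, left) with payoffs (1, 1), even though (up, right) would have given both players 3.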

http://en.wikipedia.org/wiki/Game_theory

John Forbes Nash, Jr. (June 13, 1928 – May 23, 2015) was an American mathematician whose work in game theory, differential geometry, and partial differential equations has provided insight into the factors that govern chance and events inside complex systems in daily life. His theories are used in market economics, computing, evolutionary biology, artificial intelligence, accounting, politics and military theory. Serving as a Senior Research Mathematician at Princeton University during the latter part of his life, he shared the 1994 Nobel Memorial Prize in Economic Sciences with game theorists Reinhard Selten and John Harsanyi.

http://en.wikipedia.org/wiki/John_Forbes_Nash,_Jr.

**Nash equilibrium**

Definition: Informally, a set of strategies is a Nash equilibrium if no player can do better by unilaterally changing his or her strategy. To see what this means, imagine that each player is told the strategies of the others and then asks himself or herself: "Knowing the strategies of the other players, can I benefit by changing my strategy?" If any player would answer "Yes", then that set of strategies is not a Nash equilibrium. But if every player prefers not to switch (or is indifferent between switching and not), then the set of strategies is a Nash equilibrium.
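The informal definition translates directly into a brute-force test: enumerate every strategy profile and ask, for each player, whether a unilateral switch would pay. A minimal sketch for two-player games (function and argument names are my own):

```python
from itertools import product

def pure_nash_equilibria(payoffs, rows, cols):
    """Brute-force all pure-strategy Nash equilibria of a two-player game.
    `payoffs[(r, c)]` is the pair (row player's payoff, column player's
    payoff)."""
    equilibria = []
    for r, c in product(rows, cols):
        u_row, u_col = payoffs[(r, c)]
        # Would the row player gain by unilaterally switching rows?
        row_improves = any(payoffs[(r2, c)][0] > u_row for r2 in rows)
        # Would the column player gain by unilaterally switching columns?
        col_improves = any(payoffs[(r, c2)][1] > u_col for c2 in cols)
        if not row_improves and not col_improves:
            equilibria.append((r, c))
    return equilibria
```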

If a game has a unique Nash equilibrium, then all players will play the Nash equilibrium strategy if:

1. The players will all do their utmost to maximize their expected payoff as described by the game.

2. The players are flawless in execution.

3. The players have sufficient intelligence to deduce the solution.

4. The players know the strategy of all of the other players.

5. The players believe that a deviation in their own strategy will not cause deviations by any other players.

6. There is common knowledge that all players meet these conditions, including this one. So, not only must each player know the other players meet the conditions, but also they must know that they all know that they meet them, and know that they know that they know that they meet them, etc.

**Example 1: Prisoner’s Dilemma**

Imagine two prisoners held in separate cells, interrogated simultaneously, and offered deals (lighter jail sentences) for betraying their fellow criminal. [They both go to prison for one year for stealing a car, but the police think they also robbed a jewellery shop together.]

While both players would do better staying silent, the rational behaviour for both of them is to betray the other! A Prisoner’s Dilemma has a unique Nash equilibrium (which is, however, not optimal).
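A minimal sketch of the dilemma, with assumed sentence lengths written as negative payoffs (so larger is better); the text fixes only the structure, not these exact numbers:

```python
# Assumed sentence lengths as negative payoffs; the dilemma's structure,
# not the exact numbers, comes from the text.
pd = {
    ("silent", "silent"): (-1, -1),  # one year each for the car theft
    ("silent", "betray"): (-3, 0),   # the betrayed prisoner takes the full blame
    ("betray", "silent"): (0, -3),
    ("betray", "betray"): (-2, -2),  # both convicted, sentences reduced for confessing
}

def best_response(player, other_choice):
    """The prisoner's best pure strategy given the other prisoner's choice."""
    options = ("silent", "betray")
    if player == 0:
        return max(options, key=lambda s: pd[(s, other_choice)][0])
    return max(options, key=lambda s: pd[(other_choice, s)][1])
```

Betraying is the best response to either choice, so (betray, betray) is the unique Nash equilibrium, although (silent, silent) would leave both better off.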

**Example 1: Prisoner’s Dilemma – in the real world**

1. Doping in sport. If neither of the two athletes uses doping, then neither gains an advantage over the other. If only one does, then that athlete gains a significant advantage over the competitor. If both athletes take the drug, however, the benefits cancel out and only the drawbacks remain (the drug might be illegal and dangerous), putting them both in a worse position than if neither had used doping.

The rational strategy for both athletes is to take the drug!

2. Arms race. During the Cold War the opposing alliances had the choice to arm or disarm. From each side’s point of view, disarming whilst their opponent continued to arm would have led to military inferiority and possible annihilation. Conversely, arming whilst their opponent disarmed would have led to superiority. If both sides chose to arm, or if both sides chose to disarm, war could be avoided. However, arming comes at the high cost of developing and maintaining a nuclear arsenal…

3. Advertising. If only one of two competing companies advertises its product [or invests more in advertising], it gains a huge advantage over the other. If they advertise at the same level (including not advertising at all), then neither of them gains an advantage over the other. But advertising comes at a high cost.

**Example 2: Coordination game**

A typical case for a coordination game is choosing the sides of the road upon which to drive, a social standard which can save lives if it is widely adhered to. In a simplified example, assume that two drivers meet on a narrow dirt road. Both have to swerve in order to avoid a head-on collision. If both execute the same swerving manoeuvre they will manage to pass each other, but if they choose differing manoeuvres they will collide.

There are two Nash equilibria! Once a social standard is achieved, nobody has any interest in changing it (unless all the others change as well).
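The two equilibria can be verified with assumed payoffs (say, 1 for passing safely and −5 for a crash; only the structure matters):

```python
sides = ("left", "right")
# Assumed payoffs: 1 if the drivers pass each other, -5 if they crash.
coord = {(a, b): (1, 1) if a == b else (-5, -5) for a in sides for b in sides}

def is_pure_nash(r, c):
    """True if no unilateral deviation by either driver pays."""
    u_r, u_c = coord[(r, c)]
    return (all(coord[(r2, c)][0] <= u_r for r2 in sides)
            and all(coord[(r, c2)][1] <= u_c for c2 in sides))
```

Both (left, left) and (right, right) pass the test: two possible social standards, each stable on its own.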

**Example 2: Coordination game – Variants**

1. Party or stay home?

2. Change to compatible technologies?

3. Battle of the sexes (BoS – also "Bach or Stravinsky")

**Example 3: Anti-Coordination Game ”Chicken”**

The game "chicken" is a game in which two drivers drive towards each other on a collision course: one must swerve, or both may die in the crash, but if one driver swerves and the other does not, the one who swerved will be called a "chicken", meaning a coward.

Like the coordination game, the anti-coordination game also has two Nash equilibria.

**Example 4: Competition game**

This can be illustrated by a two-player game in which both players simultaneously choose an integer from 0 to 3 and they both win the smaller of the two numbers in points. In addition, if one player chooses a larger number than the other, then he/she has to give up two points to the other.

This game has a unique Nash equilibrium: both players choose 0.
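The rules of this game are fully specified above, so its equilibria can be found by exhaustive search:

```python
from itertools import product

def competition_payoffs(a, b):
    """Both players win the smaller number; whoever chose the larger
    number hands two points to the other player."""
    low = min(a, b)
    if a == b:
        return (low, low)
    return (low - 2, low + 2) if a > b else (low + 2, low - 2)

def pure_equilibria():
    """All pure-strategy Nash equilibria over the choices 0..3."""
    eqs = []
    for a, b in product(range(4), repeat=2):
        u_a, u_b = competition_payoffs(a, b)
        if (all(competition_payoffs(x, b)[0] <= u_a for x in range(4))
                and all(competition_payoffs(a, y)[1] <= u_b for y in range(4))):
            eqs.append((a, b))
    return eqs
```

The search returns only (0, 0): from any equal pair (n, n) with n > 0, undercutting the opponent by one collects the two-point transfer, so the numbers unravel all the way down to zero.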

**Example 5: Network traffic**

Every traveller has a choice of 3 strategies, where each strategy is a route from A to D (either ABD, ABCD, or ACD).

The "payoff" of each strategy is the travel time of each route. In the graph above, a car travelling via ABD experiences travel time of (1 + x/100) + 2, where x is the number of cars travelling on edge AB. Thus, payoffs for any given strategy depend on the choices of the other players, as is usual. However, the goal in this case is to minimize travel time, not maximize it.

Equilibrium will occur when the time on all paths is exactly the same. When that happens, no single driver has any incentive to switch routes, since it can only add to his/her travel time. For the graph above, if, for example, 100 cars are travelling from A to D, then equilibrium will occur when 25 drivers travel via ABD, 50 via ABCD, and 25 via ACD. Every driver now has a total travel time of 3.75 (to see this, note that a total of 75 cars take the AB edge, and likewise 75 cars take the CD edge).

Nash Equilibrium: 25 drivers via ABD, 50 via ABCD, and 25 via ACD. Every driver now has a total travel time of 3.75.

BUT if the 100 drivers agree that 50 travel via ABD and the other 50 through ACD, then travel time for any single car would actually be 3.5 which is less than 3.75. This is also the Nash equilibrium if the path between B and C is removed, which means that adding an additional possible route can decrease the efficiency of the system (Braess’s paradox).
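These travel times are easy to recompute. The 0.25 travel time on the B–C edge below is an assumption: it is the value that makes all three routes equally fast at the stated 25/50/25 equilibrium.

```python
def route_times(n_abd, n_abcd, n_acd, bc_time=0.25):
    """Travel time on each route given how many cars picked it.
    Edges AB and CD take 1 + x/100 (x = cars on the edge); BD and AC
    take a constant 2.  bc_time = 0.25 is an assumption: the value
    that equalises the three routes at the stated equilibrium."""
    ab = 1 + (n_abd + n_abcd) / 100  # cars using edge AB
    cd = 1 + (n_abcd + n_acd) / 100  # cars using edge CD
    return {"ABD": ab + 2, "ABCD": ab + bc_time + cd, "ACD": 2 + cd}
```

route_times(25, 50, 25) gives 3.75 on every route (the equilibrium), while the coordinated split route_times(50, 0, 50) gives 3.5 on both ABD and ACD, illustrating Braess’s paradox.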

Braess’s paradox, credited to the German mathematician Dietrich Braess, states that adding extra capacity to a network, when the moving entities selfishly choose their route, can in some cases reduce overall performance. This is because the Nash equilibrium of such a system is not necessarily optimal.

http://en.wikipedia.org/wiki/Braess%27s_paradox

**Is there always a Nash equilibrium?**

In the "matching pennies" game, both players reveal a penny at the same time. If they match (i.e. both show heads or both show tails), Player A gets both of them. If they don’t match, Player B gets them. So the payoff matrix looks as follows:

Is there a Nash equilibrium? Another example:

**Mixed strategies**

The answer to the question is: There is always a Nash equilibrium if one allows mixed strategies, i.e. an assignment of a probability to each pure strategy. [Note: Since probabilities are continuous, there are infinitely many mixed strategies available to a player, even if there are only finitely many pure strategies.]

In the game with two players having two strategies, this is easy: Player A chooses with probability p (where 0 ≤ p ≤ 1) the first and with probability (1 − p) the other strategy. Player B picks with probability q (where 0 ≤ q ≤ 1) the first strategy and with probability (1 − q) the second strategy.

**Matching pennies**

In order for Player B to be willing to randomize, Player A should play in such a way that the expected payoff for each pure strategy of B (and hence also for every mixed strategy) is the same. So Player A picks p in such a way that 1 − 2p = 2p − 1, i.e. p = 1/2. Similarly, Player B picks q = 1/2. We get
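The indifference calculation can be spelled out numerically. With A showing heads with probability p, B's expected payoffs for the two pure strategies (1 − 2p for heads, 2p − 1 for tails) coincide exactly at p = 1/2:

```python
# Player A's payoff matrix for matching pennies (zero-sum: B receives
# the negative).  Index 0 = heads, 1 = tails.
A = [[1, -1],
     [-1, 1]]

def expected_payoff_B(p, b_choice):
    """B's expected payoff for a pure choice when A shows heads with
    probability p.  Evaluates to 1 - 2p for heads and 2p - 1 for tails."""
    return -(p * A[0][b_choice] + (1 - p) * A[1][b_choice])
```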

**Mixed strategies in previous games**

Coordination game:

We solve −5 + 6p = 1 − 6p, which gives p = 1/2. By symmetry, Player B then picks q = 1/2. This means that a mixed strategy is randomly going either left or right to avoid the crash. Surprisingly, this is a Nash equilibrium!

**Mixed strategies in previous games**

Battle of the sexes:

We solve −2 + 5p = 5 − 5p, which gives p = 0.7. By symmetry, Player B then picks his favourite option (namely Opera) with probability 0.7. This means that a mixed strategy is going with probability 0.7 to your favourite event!
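The arithmetic for the indifference point is quick to double-check; the two expressions below are taken verbatim from the text (which of B's two options each one belongs to is not spelled out, so they are labelled neutrally):

```python
# Player B's expected payoffs for his two pure options, as functions of
# the probability p used by Player A (expressions from the text).
def b_option_1(p):
    return -2 + 5 * p

def b_option_2(p):
    return 5 - 5 * p

# Indifference: -2 + 5p = 5 - 5p  =>  10p = 7  =>  p = 0.7
p_star = 7 / 10
```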

**The Final Problem**

From the story "The Final Problem"… in which Holmes and Watson attempted to flee England to get away from Professor Moriarty, who was pursuing Holmes for the purpose of killing him before Holmes’ investigation could bring down the Professor and his criminal empire.

Holmes elected to flee to the European continent while the London police did their work to wrap up the case he had provided for them. Holmes and Watson took a train and headed to Dover, where they could board a steamer to Calais and leave Moriarty behind and vulnerable. Holmes knew the train schedule, as did Moriarty. There was an intermediate stop in Canterbury. By this time, Moriarty and his confederates were close upon the trail chasing Holmes.

What is the strategy that they choose?

SOLUTION: Holmes gets out at Canterbury with probability 2/3. Moriarty on the other hand stays on the train with probability 2/3.

SPOILER: In the story, Holmes does indeed get out at Canterbury while Moriarty’s special pursuit train sped through, hoping to catch Holmes in Dover and kill him there.
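The stated 2/3 probabilities can be reproduced under an assumed zero-sum payoff matrix (the lecture's actual matrix is not shown here): Holmes scores −1 if Moriarty picks the same station and catches him, +1 if he reaches Dover unpursued and escapes to the continent, and 0 if he slips out at Canterbury while Moriarty rides on to Dover.

```python
from fractions import Fraction

# Assumed zero-sum payoffs for Holmes; Moriarty's payoff is the negative.
u = {
    ("Dover", "Dover"): Fraction(-1),         # caught at Dover
    ("Dover", "Canterbury"): Fraction(1),     # Holmes escapes via Dover
    ("Canterbury", "Dover"): Fraction(0),     # chase continues
    ("Canterbury", "Canterbury"): Fraction(-1),  # caught at Canterbury
}

def mixed_equilibrium():
    """Return (P[Holmes gets out at Canterbury], P[Moriarty stays on to Dover]),
    found by making each player indifferent between the other's options."""
    denom = (u[("Dover", "Dover")] - u[("Dover", "Canterbury")]
             - u[("Canterbury", "Dover")] + u[("Canterbury", "Canterbury")])
    # Moriarty's Dover probability q makes Holmes indifferent:
    q = (u[("Canterbury", "Canterbury")] - u[("Dover", "Canterbury")]) / denom
    # Holmes's Dover probability p makes Moriarty (payoff -u) indifferent:
    p = (u[("Canterbury", "Canterbury")] - u[("Canterbury", "Dover")]) / denom
    return 1 - p, q
```

With these assumed numbers, Holmes leaves at Canterbury with probability 2/3 and Moriarty continues to Dover with probability 2/3, exactly the solution stated above.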