
Game Theory with Applications to Finance and Marketing

Lecture 0: A Quick Introduction and Some Examples

Instructor: Chyi-Mei Chen
Room 1102, Management Building 2
(02) 3366-1086
(email) cchen@ccms.ntu.edu.tw

1. Definition 1. A game can be described by:

   • Who are the players?

   • What strategies are available to each player?

   • What does each player get, given the players' choices of strategies?

   A game described by (i) the set of players, (ii) the strategies available to each player, and (iii) the payoff of each player as a function of the vector of all players' strategic choices is said to be described in normal form. A game described in normal form is called a strategic game.

   A game can also be described in extensive form, which specifies, in addition to the elements of the normal form, the timing of players' moves and what they know when they move. A game described in extensive form is called an extensive game.

2. Example 1. The following is a two-player normal-form game.

   player 1 \ player 2     L         R
        U                0, 1     -1, 2
        D                2, -1    -2, -2

   • Who are the players? Players 1 and 2.

   • What strategies are available to each player? Player 1 has two pure strategies, U and D, and player 2 has two pure strategies, L and R. A mixed strategy for player 1 is a probability distribution over U and D, and a mixed strategy for player 2 is a probability distribution over L and R. For example, playing U and D with probabilities 1/2 and 1/2 is one mixed strategy for player 1, and playing U and D with probabilities 1/3 and 2/3 is another mixed strategy for player 1. Apparently, each player has an infinite number of different mixed strategies. In its broad sense, the set of mixed strategies includes the pure strategies.

   • What does each player get, given players' choices of pure strategies? Players 1 and 2 get respectively 0 and 1 if the vector of the two players' pure strategies (called a strategy profile) is (U, L); that is, if player 1 plays U and player 2 plays L. Let us write

        u_1(U, L) = 0,  u_2(U, L) = 1.                                  (1)

     Similarly, we have

        u_1(U, R) = -1,  u_2(U, R) = 2,  u_1(D, L) = 2,                 (2)

        u_2(D, L) = -1,  u_1(D, R) = -2,  u_2(D, R) = -2.               (3)

   The functions u_1(·, ·) and u_2(·, ·) are referred to as the two players' payoff functions. Now, what are the two players' payoffs when player 1 plays U and D with probabilities 1/2 and 1/2, and player 2 plays L and R with probabilities 1/3 and 2/3? Here we assume that players are "expected-utility" maximizers, so that player 1's payoff is

        (1/2)(1/3) u_1(U, L) + (1/2)(2/3) u_1(U, R)
      + (1/2)(1/3) u_1(D, L) + (1/2)(2/3) u_1(D, R),

   and player 2's payoff is

        (1/2)(1/3) u_2(U, L) + (1/2)(2/3) u_2(U, R)
      + (1/2)(1/3) u_2(D, L) + (1/2)(2/3) u_2(D, R).
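   These expected-payoff computations are easy to check numerically. The following Python sketch (an illustration, not part of the original notes) encodes the payoff tables of Example 1 and evaluates both players' expected payoffs under the given mixed strategies:

```python
# Expected payoffs in Example 1 when player 1 mixes (1/2, 1/2) over (U, D)
# and player 2 mixes (1/3, 2/3) over (L, R).
from fractions import Fraction as F

# Payoff tables: u1[(s1, s2)] is player 1's payoff at the pure profile (s1, s2).
u1 = {("U", "L"): 0, ("U", "R"): -1, ("D", "L"): 2, ("D", "R"): -2}
u2 = {("U", "L"): 1, ("U", "R"): 2, ("D", "L"): -1, ("D", "R"): -2}

p = {"U": F(1, 2), "D": F(1, 2)}   # player 1's mixed strategy
q = {"L": F(1, 3), "R": F(2, 3)}   # player 2's mixed strategy

def expected(u, p, q):
    """Expected utility: sum over pure profiles, each weighted by the
    product of the two independent mixing probabilities."""
    return sum(p[s1] * q[s2] * u[(s1, s2)] for s1 in p for s2 in q)

print(expected(u1, p, q))  # player 1's expected payoff: -2/3
print(expected(u2, p, q))  # player 2's expected payoff: 0
```

   Using `fractions.Fraction` keeps the arithmetic exact, which matches the hand computation term by term.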



3. Example 2. Consider the following Cournot game: firms 1 and 2, producing the same product, must compete in supply quantities. The inverse demand curve is

        P(Q) = 1 − Q,

   where

        Q = q_1 + q_2

   is the total supply of the product. For simplicity, assume that the firms have no costs.

   • Who are the players? Firms 1 and 2.

   • What strategies are available to each player? Any non-negative real number q_1, standing for the supply quantity chosen by firm 1, is a pure strategy for firm 1. Any cumulative distribution function F_1 : [0, +∞) → [0, 1] is a mixed strategy for firm 1. By symmetry, any non-negative real number q_2, standing for the supply quantity chosen by firm 2, is a pure strategy for firm 2, and any cumulative distribution function F_2 : [0, +∞) → [0, 1] is a mixed strategy for firm 2.

   • What does each player get, given players' choices of pure strategies (q_1, q_2)? Firm 1 gets profit u_1(q_1, q_2) = q_1(1 − q_1 − q_2), and firm 2 gets profit u_2(q_1, q_2) = q_2(1 − q_1 − q_2). (We generally write π_1 and π_2 instead of u_1 and u_2 when the latter actually represent firms' profits.) What do the two players get if player 1 adopts the mixed strategy F_1 and player 2 adopts the mixed strategy F_2? The answers are, respectively,

        ∫_0^∞ ∫_0^∞ u_1(x, y) dF_1(x) dF_2(y)

   and

        ∫_0^∞ ∫_0^∞ u_2(x, y) dF_1(x) dF_2(y).

4. Definition 2. A (pure-strategy) Nash equilibrium (NE) for a two-player normal-form game {X, Y, u_1(x, y), u_2(x, y)}, where X and Y are respectively the sets of pure strategies for the two players (hereafter, the two players' strategy spaces), is a pair (x^*, y^*) such that x^* ∈ X, y^* ∈ Y, and

        u_1(x^*, y^*) ≥ u_1(x, y^*),  ∀x ∈ X,                           (4)

        u_2(x^*, y^*) ≥ u_2(x^*, y),  ∀y ∈ Y.                           (5)

   (The last two inequalities are called incentive compatibility conditions.)

   Given y ∈ Y, we say that x^* ∈ X is player 1's best response against y if

        u_1(x^*, y) ≥ u_1(x, y),  ∀x ∈ X.                               (6)

   If for every y ∈ Y player 1 has a unique best response against y, then we can write x^* = r_1(y) and refer to r_1(·) as player 1's reaction function. Similarly, given x ∈ X, we say that y^* ∈ Y is player 2's best response against x if

        u_2(x, y^*) ≥ u_2(x, y),  ∀y ∈ Y.                               (7)

   If for every x ∈ X player 2 has a unique best response against x, then we can write y^* = r_2(x) and refer to r_2(·) as player 2's reaction function. Apparently, if the two-player game has a unique Nash equilibrium (x^*, y^*), then it must be that

        r_1(y^*) = x^*,  y^* = r_2(x^*).                                 (8)

   That is, an NE must appear at the intersection of the two reaction functions.

   More generally, let r_1(y) contain the elements of X that maximize the function u_1(·, y), and let r_2(x) contain the elements of Y that maximize the function u_2(x, ·). Then (x^*, y^*) is a pure-strategy NE of the game {X, Y, u_1(x, y), u_2(x, y)} if and only if

        x^* ∈ r_1(y^*),  y^* ∈ r_2(x^*).                                 (9)

   Again, this requires that x^* be one of player 1's best responses against y^*, and y^* be one of player 2's best responses against x^*.

   A mixed-strategy NE is defined similarly, but with players playing mixed strategies rather than pure strategies.



5. Now we solve for the pure-strategy NEs of the game described in Example 1. It is easy to see that (U, R) and (D, L) are the two pure-strategy NEs of the game.
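   This claim is easy to verify by brute force: a profile is a pure-strategy NE if and only if neither player gains from a unilateral deviation. The Python sketch below (an illustration, not part of the notes) checks every pure-strategy profile of Example 1:

```python
# Brute-force pure-strategy NE search for the 2x2 game of Example 1.
u1 = {("U", "L"): 0, ("U", "R"): -1, ("D", "L"): 2, ("D", "R"): -2}
u2 = {("U", "L"): 1, ("U", "R"): 2, ("D", "L"): -1, ("D", "R"): -2}
S1, S2 = ("U", "D"), ("L", "R")

def pure_nash(u1, u2, S1, S2):
    """Return all profiles (s1, s2) at which neither player has a
    profitable unilateral deviation (conditions (4)-(5))."""
    nes = []
    for s1 in S1:
        for s2 in S2:
            best1 = all(u1[(s1, s2)] >= u1[(d, s2)] for d in S1)
            best2 = all(u2[(s1, s2)] >= u2[(s1, d)] for d in S2)
            if best1 and best2:
                nes.append((s1, s2))
    return nes

print(pure_nash(u1, u2, S1, S2))  # [('U', 'R'), ('D', 'L')]
```

   The same exhaustive check works for any finite normal-form game, at the cost of enumerating all profiles.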

6. Now we solve for the mixed-strategy NEs of the game described in Example 1. A mixed-strategy Nash equilibrium (p, q) of this game is a pair of the two players' mixed strategies such that player 1 plays U with probability p ∈ (0, 1) and player 2 plays L with probability q ∈ (0, 1), player 2's mixed strategy makes player 1 feel indifferent between U and D, and player 1's mixed strategy makes player 2 feel indifferent between L and R; that is, the following incentive compatibility conditions hold:

        q u_1(U, L) + (1 − q) u_1(U, R) = q u_1(D, L) + (1 − q) u_1(D, R),    (10)

        p u_2(U, L) + (1 − p) u_2(D, L) = p u_2(U, R) + (1 − p) u_2(D, R).    (11)

   Solving, we have p = 1/2 and q = 1/3. There is an obvious reason for the above two equations: if, given his rival's mixed strategy, a player strictly prefers one pure strategy to the other, then he will assign zero probability to the latter; that is, a mixed strategy can never be his best response. Thus in a mixed-strategy Nash equilibrium, a player has to feel indifferent between his two pure strategies.
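   Since each indifference condition is linear in a single unknown, it can be solved in closed form. A Python sketch of the rearrangement (illustrative only, not part of the notes):

```python
# Solve the indifference conditions (10)-(11) of Example 1 for the
# mixed-strategy NE (p, q), using exact rational arithmetic.
from fractions import Fraction as F

u1 = {("U", "L"): 0, ("U", "R"): -1, ("D", "L"): 2, ("D", "R"): -2}
u2 = {("U", "L"): 1, ("U", "R"): 2, ("D", "L"): -1, ("D", "R"): -2}

# Player 1 indifferent between U and D (condition (10)), solved for q:
#   q*u1(U,L) + (1-q)*u1(U,R) = q*u1(D,L) + (1-q)*u1(D,R)
q = F(u1[("D", "R")] - u1[("U", "R")],
      u1[("U", "L")] - u1[("U", "R")] - u1[("D", "L")] + u1[("D", "R")])

# Player 2 indifferent between L and R (condition (11)), solved for p:
#   p*u2(U,L) + (1-p)*u2(D,L) = p*u2(U,R) + (1-p)*u2(D,R)
p = F(u2[("D", "R")] - u2[("D", "L")],
      u2[("U", "L")] - u2[("U", "R")] - u2[("D", "L")] + u2[("D", "R")])

print(p, q)  # 1/2 1/3
```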

7. Now we solve for the pure-strategy NE of the game described in Example 2. By definition, it is a pair (q_1^*, q_2^*) such that, given q_2^*, q_1^* is profit-maximizing for firm 1, and given q_1^*, q_2^* is profit-maximizing for firm 2. The procedure is first to find the best response r_i for firm i given any possible q_j, for i, j = 1, 2, i ≠ j. Then the NE can be obtained by finding the intersection of the two reaction functions. So consider step 1. To solve for r_i(·), given any q_j, consider firm i's problem of finding its profit-maximizing supply quantity:

        max_{q_i}  q_i(1 − q_i − q_j),

   and the (necessary and sufficient) first-order condition gives

        r_i(q_j) = (1 − q_j)/2.

   (The reaction function is well defined because, given each q_j, there exists exactly one optimal q_i for firm i.) Now consider step 2. By definition, an NE (q_1^*, q_2^*) is located at the intersection of the two firms' reaction functions. Thus (q_1^*, q_2^*) must satisfy

        r_1(q_2^*) = q_1^*,  r_2(q_1^*) = q_2^*.                         (12)

   Solving, we obtain the Nash equilibrium for the above Cournot game:

        (q_1^*, q_2^*) = (1/3, 1/3).

   Note that this game has no mixed-strategy NEs.
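   As a numerical illustration (not part of the notes), repeatedly applying the two reaction functions converges to their unique intersection:

```python
# Iterate the reaction functions r_i(q_j) = (1 - q_j)/2 of Example 2;
# the iteration converges to the unique NE (1/3, 1/3).
def r(q_other):
    return (1 - q_other) / 2

q1, q2 = 0.0, 0.0          # any non-negative starting point works
for _ in range(100):
    q1, q2 = r(q2), r(q1)  # simultaneous best-response update

print(round(q1, 6), round(q2, 6))  # 0.333333 0.333333
```

   Because each reaction function has slope −1/2, the best-response map is a contraction, so the iteration converges regardless of the starting point.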

8. Example 3. Consider a two-player game where the two players must simultaneously pick an integer from the set {1, 2, · · · , 100}. If they pick the same number, then each gets 1; otherwise, each gets zero. Find the pure-strategy NEs. Find the mixed-strategy NEs.

9. Definition 3. The games appearing in the above two examples are called simultaneous games, because players move at the same time in those games. In a sequential game, on the other hand, players take turns to move. Sequential games are usually described in extensive form and represented by a game tree. A game tree is composed of a collection of decision nodes and a set of branches connecting those nodes. The first mover's decision node is the root of the tree, and each pure strategy available to the first mover is represented by a branch emanating from that decision node. These branches lead to the second mover's decision nodes. For example, consider the following sequential game:

   (I) — Up —— (II) — Right — (0, 1)
                    — Left —— (−1, 2)
                 :
       — Down — (II) — Right — (2, −1)
                     — Left —— (−2, −2)

   Note that player II's two decision nodes are connected by a dotted line, which says that player II, when determining her own moves, does not know whether player I has moved Up or Down. Thus these two decision nodes define player II's information set. Formally, an information set is a set of decision nodes for a player who, while knowing that he is sitting on one of the nodes contained in the information set, cannot tell exactly which node in the information set he is sitting on.

   The remaining game tree starting from a singleton information set is called a subgame of the original game. A subgame perfect Nash equilibrium (SPNE) is an NE for a game in extensive form which specifies NE strategies in each and every subgame.

10. Example 4. M is the owner-manager of a firm. The debt due one year from now has a face value equal to $10. The total assets in place are worth only $8. Just now, a new investment opportunity with NPV = x > 1 became available, which requires no additional investment. M comes to his creditor and says that he will give up the investment project unless the creditor agrees to reduce the face value of the debt by $1.

    (i) Suppose x > 2. Show that there is an NE in which the creditor agrees to reduce the face value of the debt and M makes the investment.

    (ii) Show that the NE in (i) is not an SPNE. Find an SPNE.

    (iii) How may your conclusion in (ii) change if x ∈ (1, 2]?

    (iv) Define bankruptcy as a state where the firm's equity value drops to zero. Explain why bankruptcy does not take place in (iii).

11. Example 5. A monopolistic firm F, selling product X at its store, is faced with two consumers, A and B, where A is an expert in computer science, while B has no computer-related knowledge. F cannot distinguish A from B, but F knows that A is willing to pay 2 dollars for one unit of X, while B's reservation value for X is 5 dollars. A and B both have unit demand for X.

    (i) First suppose that e-commerce is unavailable and that production is costless. What is the price for X? What is F's profit?

    (ii) Next suppose that F can also open up a retail outlet on the net, which will cost t > 0. Should F sell exclusively through the on-line outlet? Should F ignore the chance to open up an on-line outlet? Should F sell its product through its original store as well as the on-line outlet?

    (iii) What if B's reservation value is 3.9 dollars?

    (iv) What is missing in the above analysis?

    Solution. For part (i), F should price X at 5 dollars, which is also F's profit. For part (ii), F can price X at 2 dollars on the net and at 5 dollars at the original store, which raises F's profit by 2 dollars. This is beneficial as long as t < 2.¹ For part (iii), giving up the on-line trading opportunity implies a profit of 4 dollars for F, while selling through both channels yields a profit of 3.9 + 2 − t, so that F should sell through both channels if t < 1.9.²

    As for part (iv), many important issues are left out. For example, conventional outlets usually provide in-store services that on-line retailers cannot afford to provide, and transactions at conventional outlets usually incur a transportation cost for consumers. Furthermore, on-line transactions may imply a higher probability of commercial fraud.

    Note that the assumption that B has a higher reservation value than A is important in the above analysis. If the reservation values are reversed, F cannot price discriminate between A and B.

    Finally, let me ask the following question: other things being equal, does an increase in demand encourage F to adopt the dual-channel strategy?

    Verify that, for t ∈ (1.9, 2), other things being equal, an increase in A's demand (here, the reservation value) from 2 dollars to 3.1 dollars actually decreases F's incentive to use the on-line outlet. We know that when A's reservation value is 2 dollars, F will use the on-line outlet because t < 2; but when A's reservation value becomes 3.1 dollars, F will use the on-line outlet if and only if (5 − 3.1) > t, that is, 1.9 > t.

    ¹ Note that A is not served without the on-line market, and t is the cost that F incurs in order to extract the 2 dollars from A in the presence of the on-line market.

    ² Note that selling to A on the net actually creates a loss: F still charges A 2 dollars, but now F has to spend t > 0. What is beneficial is that F can now charge B 3.9 instead of 2 dollars. Why? With A being directed to the on-line market, B is identified as the only segment left to be served at the original store, and hence F can fully extract the consumer surplus from B. The extra 1.9 = 3.9 − 2 dollars that F can make from serving B must be greater than the loss incurred when F moves A from the store to the on-line market in order for F to adopt this dual-channel strategy, and hence the condition is t < 1.9. From here it becomes clear that F's incentive to spend t may be reduced when A's reservation value increases: if A is already served without the internet, then the benefit from spending t is equal to the difference between A's and B's reservation values, and this difference is decreasing in A's reservation value.

    Thus F will give up the on-line outlet when A's valuation rises from 2 dollars to 3.1 dollars. The intuition has been spelt out in the preceding footnote, where we emphasized that F's benefit from spending t is decreasing in A's reservation value if F would choose to serve A in the absence of the internet.

    On the other hand, an increase in B's demand does weakly encourage F to use the on-line outlet: it does not affect F's preference about spending t if F would choose not to serve A in the absence of the internet, but it increases F's benefit from spending t if F would choose to serve A in the absence of the internet.

    So, we conclude that an increase in demand may not encourage F to use the on-line outlet; it depends on which segment's demand increases more and which segment's demand increases less. In particular, an increase in the valuation of the low-valuation segment (which also happens to have expertise in computer science) need not encourage the firm to sell through the internet.
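    The profit comparisons above can be sketched in Python. The function below is a simplification under the notes' implicit assumption that only the computer-expert A uses the on-line outlet; `best_channel` and its parameter names are introduced here for illustration and are not part of the notes:

```python
# Compare F's channel options given reservation values vA < vB and the
# on-line outlet's fixed cost t.  Store-only: price at vB (only B buys)
# or at vA (both buy).  Dual channel: A buys on-line at vA, B buys at
# the store at vB, and F pays t once.
def best_channel(vA, vB, t):
    store_only = max(vB, 2 * vA)   # best store-only profit
    dual = vA + vB - t             # dual-channel profit
    return max(store_only, dual), ("dual" if dual > store_only else "store")

print(best_channel(2.0, 5.0, 1.5))   # part (ii) with t < 2:   (5.5, 'dual')
print(best_channel(2.0, 3.9, 1.95))  # part (iii) with t > 1.9: (4.0, 'store')
```

    The two printed cases reproduce the thresholds derived in the solution: dual channels pay off exactly when t is below the gap between B's reservation value and the store-only profit.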

12. Definition 4. In the above we have assumed that players know each other's payoff functions. Such a game is a game with complete (or symmetric) information. What if at least one player in the game does not know for sure another player's payoff function? We call it a game with information asymmetry, a game with incomplete information, or simply a Bayesian game. In a Bayesian game, at least one player Z has more than one possible payoff function. We say that this player Z has more than one type. At least one other player W cannot be sure which type player Z has. In this case, we shall look for an equilibrium called a Bayesian equilibrium (BE). This is nothing but a Nash equilibrium of an enlarged version of the original game, where each different type of Z is treated as a distinct player.

    To give a concrete example, suppose that we have a two-player game where player 1's strategy space is X and player 2's strategy space is Y, and player 2 has two possible types (or two possible payoff functions), θ_1 and θ_2, which, from player 1's perspective, may occur with probabilities π_1 and π_2 respectively. Certainly, player 2 knows his own type. A Bayesian equilibrium for this two-player game is nothing but the Nash equilibrium of the three-player game where the two types of player 2 are treated as two different players. Thus a Bayesian equilibrium is a triple (x^*, y_1^*, y_2^*) such that x^* ∈ X, y_1^* ∈ Y, y_2^* ∈ Y, and the following three incentive compatibility conditions hold:

        π_1 u_1(x^*, y_1^*) + π_2 u_1(x^*, y_2^*) ≥ π_1 u_1(x, y_1^*) + π_2 u_1(x, y_2^*),  ∀x ∈ X;   (13)

        u_2(x^*, y_1^*; θ_1) ≥ u_2(x^*, y; θ_1),  ∀y ∈ Y;               (14)

        u_2(x^*, y_2^*; θ_2) ≥ u_2(x^*, y; θ_2),  ∀y ∈ Y.               (15)

    In words, x^* is player 1's best response in the sense of being on average optimal for player 1. It is not really player 1's best response against player 2 if player 1 is sure that player 2 will use y_1^*; neither is it player 1's best response against player 2 if player 1 is sure that player 2 will use y_2^*. Since player 1 can choose only one x in X to play against the two possible types of player 2, given his conjecture of (y_1^*, y_2^*), the choice x^* must be on average optimal. On the other hand, player 2 knows his own type, and his best response against player 1's on-average-optimal choice x^* depends on his type. Note that θ denotes player 2's type, and it determines u_2(x, y)! This is why we say that incomplete information in this game is equivalent to player 1 not knowing player 2's payoff function. Again, player 1's on-average-optimal choice x^*, player 2's best response y_1^* when his type is θ_1, and player 2's best response y_2^* when his type is θ_2 must altogether form a Nash equilibrium. This three-player Nash equilibrium is what we defined as the Bayesian equilibrium.

13. Definition 5. Given a game, an event is mutual knowledge if every player knows it. Given a game, an event is the players' common knowledge if every player knows it, everyone knows that everyone knows it, everyone knows that everyone knows that everyone knows it, and so on. Anything which is not common knowledge is some player's private information. A player's private information is also called his type. A game where no player has private information (everything relevant is common knowledge) is a game with complete information. Otherwise, the game is one with incomplete information, or one with information asymmetry.

14. Example 6. Three boys are locked in a room without a mirror and are not allowed to talk to each other. Each of them is wearing a hat which is either red or black. Each boy can see the hats the other two boys are wearing, but not his own hat. At times 0, 1, 2, and so on, their teacher will come and ask each of them in private whether he can figure out the color of his hat, and if he can, then he is free to go home. Suppose that the three boys are all wearing red hats.

    (i) At what time will the room become empty?

    (ii) Now suppose that at time zero the teacher tells all the boys that at least one of them is wearing a red hat. At what time will the room become empty?

15. Definition 6. An incomplete-information game where the uninformed players can move after observing the informed players' moves is called a dynamic game; otherwise it is a static game.

16. Example 7. Let us modify Example 2 by assuming a random demand curve. This gives rise to a static game with incomplete information. More precisely, let the inverse demand curve be

        P(q_1 + q_2) = ã − q_1 − q_2,                                    (16)

    where only firm 1 knows the outcome of the random variable ã. Firm 2 only knows that ã may be 2 with prob. 1/3 or 4 with prob. 2/3. Since the same firm 1 will act differently when it receives different demand information, we should regard firm 1 as different players given that it has received different information. Interpreted this way, this game has 3 players: the firm 1 seeing ã = 2, the firm 1 seeing ã = 4, and firm 2. An NE for this 3-player game is what we shall be looking for; it is also called a Bayesian equilibrium (BE) of the game. Find a BE.

    Solution. First observe that firm 1 has two possible types, while firm 2 has only one type (no private information). So we should consider a three-firm game, where the two types of firm 1 are regarded as two different players, and then look for the NE of the new 3-player game. By definition, we must find three strategies q_1^*(2), q_1^*(4), and q_2^*. These 3 strategies are such that, given any two of them, the third one is the corresponding player's best response!

    In other words, for firm 2,

        q_2^* = arg max_{q_2}  (1/3)[q_2(2 − q_1^*(2) − q_2)] + (2/3)[q_2(4 − q_1^*(4) − q_2)].


    Similarly, for the firm 1 that has seen ã = 2,

        q_1^*(2) = arg max_{q_1}  q_1(2 − q_1 − q_2^*);

    and for the firm 1 that has seen ã = 4,

        q_1^*(4) = arg max_{q_1}  q_1(4 − q_1 − q_2^*).

    Each of the above three maximization problems is concave, so the 3 first-order conditions are necessary and sufficient. Each first-order condition gives a reaction function for one of the players. Again, the NE must appear at the intersection of the 3 reaction functions. Solving, the NE of the 3-player game, or the BE of the original game with information asymmetry, is the following:

        (q_1^*(2), q_1^*(4), q_2^*) = (4/9, 13/9, 10/9).
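    The solution can be reproduced with exact arithmetic. In the Python sketch below (illustrative; the names `r1` and `probs` are introduced here), substituting the informed firm's reaction function into firm 2's first-order condition reduces the system to one linear equation:

```python
# Solve the three first-order conditions of Example 7 by substitution.
from fractions import Fraction as F

probs = {2: F(1, 3), 4: F(2, 3)}   # firm 2's beliefs about the intercept a

def r1(a, q2):
    """Type-a firm 1's best response: FOC of q1*(a - q1 - q2)."""
    return (a - q2) / 2

# Firm 2 maximizes sum_a prob(a)*q2*(a - r1(a, q2) - q2).  Substituting
# r1(a, q2) = (a - q2)/2 into its FOC gives q2 = (E[a] + q2)/4, hence
# q2 = E[a]/3.
Ea = sum(p * a for a, p in probs.items())   # E[a] = 10/3
q2 = Ea / 3

print(r1(2, q2), r1(4, q2), q2)  # 4/9 13/9 10/9
```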

17. Definition 7. A dynamic Bayesian game is called a signalling game if there are only two players, one informed and one uninformed, the informed player moves first as a Stackelberg leader, and the uninformed player moves as a follower. The game ends after the uninformed player's move. A perfect Bayesian equilibrium (PBE) is defined by a set of actions (one for each type of each player) and a set of posterior beliefs, called the supporting beliefs for the PBE, that specify what the uninformed player thinks about the informed player's type after seeing each possible action taken by the informed player, such that, given that all types of all players play the specified equilibrium strategies, each player finds playing her equilibrium strategy optimal given the specified equilibrium beliefs. Moreover, all posterior beliefs must be updated using Bayes' law whenever possible.

18. Example 8. Spence’s Signalling Model This is a game <strong>with</strong> two<br />

players, an employer <strong>and</strong> an employee (a worker). The worker may<br />

have high (H) or low (L) productivity, which is his private information<br />

unknown <strong>to</strong> the employer. The employer only knows that the worker’s<br />

productivity is H <strong>with</strong> prob. α <strong>and</strong> L <strong>with</strong> prob. (1 − α). The worker<br />

can go <strong>to</strong> school for a couple of years. Schooling costs respectively c H<br />

per year for the type-H worker <strong>and</strong> c L per year for the type-L worker.<br />

12


Suppose that education does not change the worker’s productivity <strong>and</strong><br />

that c H < c L . Further assume that the labor market is competitive in<br />

the sense that the wage w received by a worker is equal <strong>to</strong> the expected<br />

productivity of that worker. To summarize: The two types of worker,<br />

H <strong>and</strong> L, first choose e H <strong>and</strong> e L in R + , which represent how many years<br />

of education the worker is willing <strong>to</strong> receive before appearing in the job<br />

market, <strong>and</strong> after seeing e H <strong>and</strong> e L , the employer chooses w.<br />

This game is 'dynamic' in nature, because the uninformed player (the employer) can see the informed player's (the worker's) choice of e before choosing w. The choice e is regarded by the uninformed employer as a signal about the worker's type. The employer first makes a conjecture regarding how many years of education the two types of worker would respectively choose; based on this conjecture, when he actually sees the choice e made by the worker, he uses Bayes Law to form a posterior belief regarding the worker's type, and then chooses w based on that posterior belief. Since, upon seeing different choices e made by the worker, the employer may form different beliefs and then choose different w, we can represent equilibrium actions by (e_H, e_L, w(e)), such that (Nash equilibrium concept!) (i) expecting the employer's pricing rule w(·), e_H and e_L are respectively the type-H and type-L worker's best responses; (ii) w(e_H) and w(e_L) should make the employer just break even (because of the competitive labor market) conditional on the employer's posterior belief f(e) = prob.(H|e); and (iii) the posterior belief is consistent with Bayes Law.³
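The employer's updating step can be made concrete. The following Python snippet is a minimal illustration (the strategy dictionaries `sigma_H`, `sigma_L` and all numbers are hypothetical choices of ours, not from the text): it computes the posterior prob.(H|e) by Bayes Law, and returns nothing definite for a zero-probability signal, where Bayes Law has no bite.

```python
# Employer's posterior belief about the worker's type, by Bayes Law.
# sigma_H[e], sigma_L[e]: conjectured probability that type H (resp. L)
# chooses education level e. alpha: prior prob. that the worker is type H.
# All names and numbers here are illustrative, not from the text.

def posterior(e, sigma_H, sigma_L, alpha):
    """Return prob.(H | e) when e has positive prior probability."""
    joint_H = alpha * sigma_H.get(e, 0.0)        # prob.(H and e)
    joint_L = (1 - alpha) * sigma_L.get(e, 0.0)  # prob.(L and e)
    total = joint_H + joint_L                    # prob.(e)
    if total == 0:
        return None   # zero-probability signal: Bayes Law has no bite
    return joint_H / total

alpha = 0.5
# Conjecture: H chooses e = 2 for sure, L mixes between e = 0 and e = 2.
sigma_H = {2: 1.0}
sigma_L = {0: 0.5, 2: 0.5}
print(posterior(2, sigma_H, sigma_L, alpha))  # 0.5/(0.5 + 0.25) = 2/3
print(posterior(0, sigma_H, sigma_L, alpha))  # 0.0: only L ever chooses e = 0
print(posterior(1, sigma_H, sigma_L, alpha))  # None: off-path signal
```

The `None` case is exactly the situation discussed below: when the conjectured strategies give a signal zero probability, any posterior in [0, 1] is consistent with Bayes Law.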

19. Now we are ready to solve for the PBE's of the Spencian signalling game. By definition, a pure-strategy PBE of this game is a tuple (e_H, e_L, f(e) = prob.(H|e), w(e)) such that (i) if e_H = e_L = e* (called a pooling equilibrium), then

f(e*) = α, (prior beliefs are also posterior beliefs)

f(e) ∈ [0, 1], ∀e ≠ e*, (Bayes Law has no bite for a zero-probability event)

w(e) = f(e)H + [1 − f(e)]L, ∀e,

w(e*) − c_H · e* ≥ w(e) − c_H · e, ∀e,

w(e*) − c_L · e* ≥ w(e) − c_L · e, ∀e;

and such that (ii) if e_H ≠ e_L (called a separating equilibrium), then

f(e_H) = 1, f(e_L) = 0, (uncertainty is completely resolved)

f(e) ∈ [0, 1], ∀e ≠ e_H, e_L, (Bayes Law has no bite)

w(e) = f(e)H + [1 − f(e)]L, ∀e,

w(e_H) − c_H · e_H ≥ w(e) − c_H · e, ∀e,

w(e_L) − c_L · e_L ≥ w(e) − c_L · e, ∀e.

Check that these conditions are consistent with the definition of PBE.

³ The key difference between a dynamic Bayesian game and a static Bayesian game is that in the latter the uninformed player looks for his best responses based on his prior beliefs about the informed player's types, whereas in the former the uninformed player looks for his best responses based on his posterior beliefs about the informed player's types. This happens because the uninformed player sees a signal before making his move, and he should use this new piece of information to infer the informed player's type.

20. A mixed-strategy equilibrium where there exist at least two signals e and e′ such that f(e)[1 − f(e)] = 0 and 0 < f(e′) < 1 is called a semi-separating equilibrium. In such an equilibrium, the uninformed player can sometimes identify the informed player's type by seeing the signal, but sometimes she cannot.

21. The above states that, considering both pure-strategy and mixed-strategy PBE's, there are generally three types of equilibria for a dynamic game with incomplete information: pooling equilibria (in which e_H = e_L), separating equilibria (in which e_H ≠ e_L), and semi-separating equilibria (in which either H or L randomizes). Now let us solve for these PBE's in the Spencian signalling model.

22. Pooling Equilibrium: Suppose both H and L choose e* and f(e) = 0, ∀e ≠ e*. (Note that we have arbitrarily selected a set of supporting beliefs f(e) for off-equilibrium signals e.) Then w_H = w_L = w(e*) = αH + (1 − α)L ≡ w. We need the following incentive compatibility conditions (hereafter IC conditions) to hold for, respectively, the type-H and type-L workers:

w − c_H e* ≥ L − c_H · 0,

w − c_L e* ≥ L − c_L · 0.

Thus we have a continuum (an uncountably infinite number) of pooling equilibria: each e* ≤ (w − L)/c_L corresponds to one pooling PBE. (The type-L condition is the binding one, since c_L > c_H.)
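As a numerical sanity check of this continuum, the following Python sketch (with illustrative parameters of our own, not from the text: H = 2, L = 1, α = 0.5, c_H = 0.5, c_L = 1) tests both IC conditions for a candidate e* against a grid of deviations.

```python
# Numerical check of the pooling continuum. Parameters are illustrative
# (ours, not the text's): H = 2, L = 1, alpha = 0.5, c_H = 0.5, c_L = 1.
H, L, ALPHA, C_H, C_L = 2.0, 1.0, 0.5, 0.5, 1.0

def is_pooling_pbe(e_star, grid):
    """Check both IC conditions for the candidate pooling level e*."""
    f = lambda e: ALPHA if e == e_star else 0.0   # supporting beliefs
    w = lambda e: f(e) * H + (1 - f(e)) * L       # break-even wage rule
    u = lambda e, c: w(e) - c * e                 # a worker's payoff
    return all(u(e_star, c) >= u(e, c)
               for c in (C_H, C_L) for e in grid)

grid = [i / 100 for i in range(201)]   # candidate deviations e in [0, 2]
print(is_pooling_pbe(0.4, grid))   # True: below the type-L bound
print(is_pooling_pbe(0.6, grid))   # False: type L would deviate to e = 0
```

With these numbers w = αH + (1 − α)L = 1.5, so the type-L IC requires e* ≤ (1.5 − 1)/1 = 0.5, and the grid search agrees; the type-L bound is binding because c_L > c_H.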

23. Separating Equilibrium: Suppose H and L choose, respectively, E and e, E ≠ e, and f(e′) = 0, ∀e′ ≠ E. In this case, of course, w(E) = H and w(e) = L (why?). We need the following IC conditions to hold:

H − c_H E ≥ L − c_H · 0,

L − c_L e ≥ H − c_L E.

Immediately, we have e = 0 (why?). Again, we have a continuum of separating equilibria:

(H − L)/c_L ≤ E ≤ (H − L)/c_H.
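The separating range can be verified the same way. In the sketch below (illustrative parameters of ours, re-declared so the block is self-contained), checking the deviation to e = 0 suffices: with f = 0 off the path, every deviation e ≠ E earns wage L, and e = 0 is the cheapest such deviation.

```python
# Numerical check of the separating range (H - L)/c_L <= E <= (H - L)/c_H.
# Illustrative parameters (ours): H = 2, L = 1, c_H = 0.5, c_L = 1,
# so the range is [1, 2].
H, L, C_H, C_L = 2.0, 1.0, 0.5, 1.0

def is_separating_pbe(E):
    """ICs when H chooses E, L chooses e = 0, wages are H at E and L elsewhere."""
    ic_H = H - C_H * E >= L     # type H prefers E to the cheapest deviation e = 0
    ic_L = L >= H - C_L * E     # type L prefers e = 0 to mimicking E
    return ic_H and ic_L

print([E for E in (0.5, 1.0, 1.5, 2.0, 2.5) if is_separating_pbe(E)])
# [1.0, 1.5, 2.0]
```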

24. Semi-Separating Equilibrium: Consider an equilibrium where one type randomizes over two signal levels and the other concentrates on one signal level. First, assume that in equilibrium H randomizes over E (prob. p) and e (prob. 1 − p) and L plays e with probability one, with E > e. For simplicity, assume that f(e′) = 0, ∀e′ ≠ E, e. Then such an equilibrium exists if and only if E > e ≥ 0 and

(1 − α)(H − L)/c_H ≤ E − e ≤ (H − L)/c_H.

On the other hand, assume that in equilibrium H plays some E with prob. 1 and L randomizes over E (prob. q) and e (prob. 1 − q), with E > e ≥ 0. Again, assume f(e′) = 0, ∀e′ ≠ E, e. It follows that e = 0! Such an equilibrium exists if and only if

α(H − L)/c_L ≤ E ≤ (H − L)/c_L.
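The first semi-separating case can be verified numerically. The sketch below (illustrative parameters of ours) computes the posterior f(e) by Bayes Law for a given mixing probability p, solves type H's indifference condition for E, and confirms that E − e stays inside the stated band.

```python
# Semi-separating case 1: H mixes over E (prob. p) and e; L plays e for
# sure. Illustrative parameters (ours): H = 2, L = 1, alpha = 0.5, c_H = 0.5.
H, L, ALPHA, C_H = 2.0, 1.0, 0.5, 0.5

def solve_E(p, e):
    """The E that makes type H indifferent: H - c_H*E = w(e) - c_H*e."""
    f_e = ALPHA * (1 - p) / (ALPHA * (1 - p) + (1 - ALPHA))  # Bayes Law at e
    w_e = f_e * H + (1 - f_e) * L                            # break-even wage
    return e + (H - w_e) / C_H

lo = (1 - ALPHA) * (H - L) / C_H   # lower end of the band (as p -> 0)
hi = (H - L) / C_H                 # upper end of the band (as p -> 1)
for p in (0.25, 0.5, 0.75):
    assert lo <= solve_E(p, e=0.0) <= hi   # E - e stays inside the band
print(lo, hi)   # 1.0 2.0
```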

25. Example 9. (The Game of Beer and Quiche) Two cowboys A and B meet in a bar, and A may be weak (w) or strong (s), which is A's private information. The game proceeds as follows. A first decides to order either a beer (b) or a quiche (q), and upon observing A's order, B decides whether or not to fight A. We assume that in the absence of B, A prefers beer (b) to quiche (q) if he is (s); otherwise he prefers (q) to (b). The prior beliefs of B are such that A is (s) with probability 0.9. Now the payoffs: if A orders and eats something he dislikes, he gets 0, or else he gets 1; if B does not fight A, A gets an additional payoff of 2. On the other hand, B gets 1 if he has no chance to fight, gets 2 if he fights A and A is of the weak type, and gets 0 if he fights A and A is of the strong type.

This game has two pooling PBE's:

(1) Equilibrium (B): Both types of A order a beer, and B's strategy is to fight A if and only if he sees A order a quiche. What are the supporting beliefs? Let f(s) = prob.(A is strong | A orders s), for all s ∈ {b, q}. Then of course f(b) = 0.9. Note that s = q is a zero-probability event. Recall that Bayes Law says

P(E|F)P(F) = P(E ∩ F),

where E and F are two random events. From probability theory, we know that for any two events C and D,

C ⊂ D ⇒ P(C) ≤ P(D).

Thus we have

P(F) = 0 ⇒ P(E ∩ F) = 0,

since E ∩ F ⊂ F. Thus, given P(F) = 0, Bayes Law requires

P(E|F) · 0 = 0,

and hence P(E|F) can be anything contained in [0, 1]. Let E be the event that A is of the strong type, and F the event that B has observed that A ordered a quiche. We conclude that any f(q) ∈ [0, 1] will be consistent with Bayes Law in this case. We must find at least one f(q) ∈ [0, 1] so that the above strategy profile does constitute the two players' best responses against each other. Note that for B to fight A after seeing A order a quiche, it is necessary that

1 ≤ f(q) · 0 + [1 − f(q)] · 2 ⇒ f(q) ≤ 1/2.

Now we show that given the beliefs f(b) = 0.9 and f(q) being anything in [0, 1/2], the aforementioned strategies of A and B are respectively the two players' best responses. For A, if his type is (s), he gets 1 + 2 = 3 if he orders a beer, and if he deviates and orders a quiche, then not only does he eat something he hates but he also must fight B, yielding a payoff of 0 + 0 = 0. Thus A will not deviate if he is of type (s). What if A is of type (w)? If he orders a beer, then he must eat something he hates, but the good news is that he can avoid fighting B, so that his payoff is 0 + 2 = 2; and if he deviates and orders a quiche, then he will have to fight B, so that his payoff is 1 + 0 = 1. We conclude that the weak type of A does not want to deviate either. What about B? We have shown that given f(q) ≤ 1/2, fighting A if A dares to order the quiche is really optimal for B. On the other hand, if A orders a beer, then since B expects both types of A to do so, ordering the beer really does not tell B anything new, and B's posterior beliefs are identical to his prior beliefs (A is of the strong type with prob. 0.9), and so not to fight A is optimal for B.

To sum up, we have shown that the following is a PBE (check that it corresponds to our definition of a PBE!):

(i) The strong type of A orders a beer;

(ii) The weak type of A also orders a beer;

(iii) B's strategy must describe what he will do in every possible contingency: B will fight A if A ordered a quiche, and B will not fight A if A ordered a beer;

(iv) The supporting beliefs fully describe what B thinks of A in every possible contingency: B thinks that A is of the strong type with prob. 0.9 if he sees A order a beer; and B thinks that A is of the strong type with prob. f(q) if he sees A order a quiche, where f(q) is any real number contained in [0, 1/2].
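A quick Python check of Equilibrium (B), with the payoff numbers from the text encoded directly, confirms that neither type of A gains by deviating and that B's responses are optimal given the stated beliefs.

```python
# Check Equilibrium (B) of the Beer-and-Quiche game: both types of A order
# beer; B fights iff he sees quiche; off-path belief f(q) <= 1/2.
# Payoffs follow the text: A gets 1 for a dish he likes (0 otherwise),
# plus 2 if B does not fight; B gets 1 with no chance to fight, 2 for
# fighting a weak A, 0 for fighting a strong A.

def payoff_A(a_type, order, b_fights):
    likes = (a_type == 's' and order == 'b') or (a_type == 'w' and order == 'q')
    return (1 if likes else 0) + (0 if b_fights else 2)

b_fights = {'b': False, 'q': True}   # B's strategy in Equilibrium (B)

# Neither type of A wants to deviate from ordering beer:
for t in ('s', 'w'):
    eq = payoff_A(t, 'b', b_fights['b'])
    dev = payoff_A(t, 'q', b_fights['q'])
    assert eq > dev, (t, eq, dev)

def payoff_B(fight, f_strong):
    # fighting a strong A pays 0, a weak A pays 2; not fighting pays 1
    return (f_strong * 0 + (1 - f_strong) * 2) if fight else 1

assert payoff_B(False, 0.9) >= payoff_B(True, 0.9)  # don't fight after beer
assert payoff_B(True, 0.3) >= payoff_B(False, 0.3)  # fight after quiche, f(q) <= 1/2
print("Equilibrium (B) verified")
```

The same machinery, with the roles of b and q swapped, verifies Equilibrium (Q) below.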

(2) Equilibrium (Q): Both types of A order a quiche, and B's strategy is to fight A if and only if he sees A order a beer. What are the supporting beliefs? Let f(s) = prob.(A is strong | A orders s), for all s ∈ {b, q}. Then of course f(q) = 0.9. Now for f(b) to induce B to fight A after seeing A order a beer, it is necessary that

1 ≤ f(b) · 0 + [1 − f(b)] · 2 ⇒ f(b) ≤ 1/2.

Now we show that given the beliefs f(q) = 0.9 and f(b) being anything in [0, 1/2], the aforementioned strategies of A and B are respectively the two players' best responses. For A, if his type is (w), he gets 1 + 2 = 3 if he orders a quiche, and if he deviates and orders a beer, then not only does he eat something he hates but he also must fight B, yielding a payoff of 0 + 0 = 0. Thus A will not deviate if he is of type (w). What if A is of type (s)? If he orders a quiche, then he must eat something he hates, but the good news is that he can avoid fighting B, so that his payoff is 0 + 2 = 2; and if he deviates and orders a beer, then he will have to fight B, so that his payoff is 1 + 0 = 1. We conclude that the strong type of A does not want to deviate either. What about B? We have shown that given f(b) ≤ 1/2, fighting A if A dares to order the beer is really optimal for B. On the other hand, if A orders a quiche, then since B expects both types of A to do so in equilibrium, ordering the quiche really does not tell B anything new about A, and B's posterior beliefs are identical to his prior beliefs (A is of the strong type with prob. 0.9), and so not to fight A is optimal for B.

To sum up, we have shown that the following is a PBE (check that it corresponds to our definition of a PBE!):

(i) The strong type of A orders a quiche;

(ii) The weak type of A also orders a quiche;

(iii) B's strategy must describe what he will do in every possible contingency: B will fight A if A ordered a beer, and B will not fight A if A ordered a quiche;

(iv) The supporting beliefs fully describe what B thinks of A in every possible contingency: B thinks that A is of the strong type with prob. 0.9 if he sees A order a quiche; and B thinks that A is of the strong type with prob. f(b) if he sees A order a beer, where f(b) is any real number contained in [0, 1/2].

26. Example 10. Recall the game of three doors (labeled no. 1, no. 2, and no. 3) with one hidden car. This is a TV show, where a host is faced with a guest, who must guess behind which of the three closed doors the new car is hidden.

(a) The guest first chooses one closed door.

(b) Then the host offers to open another closed door for the guest, saying that the guest would get the car if the car is hidden behind the door selected by the host.

(c) Then the door chosen by the host is opened, and if there is indeed a car behind that door, it is given as a gift to the guest; and if there is no car hidden behind that door, then the guest will be given an opportunity to re-make his choice between the two remaining closed doors.

(d) After the guest re-makes his choice, the remaining two doors are both opened, and the car will be given to the guest if it is hidden behind the door chosen by the guest at stage (c); or else, the car is returned to the sponsor of the TV show. This ends the game.

Assume that the guest's payoff is 1 if he gets the new car and zero otherwise. Assume that the host knows from the very beginning behind which closed door the new car is hidden.

(1) First assume that the host's payoff is 1 regardless of whether the guest gets the new car. Which door should the guest select at stage (c), if the guest does not get the new car at stage (c)?

(2) Now assume instead that the host's payoff is 1 if the guest fails to get the new car at the end of the show, and the host's payoff is zero otherwise. Which door should the guest select at stage (c), if the guest does not get the new car at stage (c)?

(3) Now assume instead that the host's payoff is 0 if the guest fails to get the new car at the end of the show, and the host's payoff is 1 otherwise. Which door should the guest select at stage (c), if the guest does not get the new car at stage (c)?

Solution. This is clearly a signaling game, with the host and the guest being respectively the informed signal sender and the uninformed signal receiver. The guest is always indifferent among the three closed doors at stage (a), in all of parts (1), (2), and (3). However, the guest's optimal choice at stage (c) differs among parts (1), (2), and (3), as we now demonstrate.

In part (1), the host will ignore his private information and simply open another door for the guest. If the guest does not get the car, then the guest should consider the two remaining closed doors equally likely to have the new car hidden behind them. Hence the guest can randomize over those two doors in any fashion.

In part (2), the host will avoid giving away the new car. Knowing this, the guest should infer as follows: if the car is hidden behind the door chosen by the guest at stage (a), then from the host's perspective opening the door selected by the host at stage (b) is as good as opening the door that neither the guest nor the host has chosen, so that the door chosen by the host at stage (b) is opened at stage (c) only with probability 1/2; but if the car is not hidden behind the door chosen by the guest at stage (a), then the host must have selected, with probability one at stage (b), the door that he opened at stage (c)! This implies that the guest should always alter his choice at stage (c)!

In part (3), the host wants to help the guest get the new car. If the guest does not get the car at stage (c), this proves that the car must be hidden behind the door originally chosen by the guest at stage (a), and hence at stage (c) the guest should always stick to his original choice made at stage (a)!⁴
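The inference in part (2) is just Bayes Law, and can be checked with exact arithmetic. In the sketch below, 'guest', 'remaining', and 'opened' label the three hypotheses for where the car is, relative to the door the host opened.

```python
# Part (2) of Example 10: the host avoids giving the car away. Posterior
# that the car is behind the guest's original door, given which door the
# host opened (and that it was empty). Fractions keep the arithmetic exact.
from fractions import Fraction as F

prior = F(1, 3)  # car equally likely behind each of the three doors

# Likelihood that the host opens the particular observed door:
#  - car behind guest's door: host picks either unchosen door, prob. 1/2
#  - car behind the other remaining closed door: host is forced, prob. 1
#  - car behind the opened door: impossible for this host, prob. 0
like = {'guest': F(1, 2), 'remaining': F(1), 'opened': F(0)}

total = sum(prior * p for p in like.values())
post_guest = prior * like['guest'] / total
post_remaining = prior * like['remaining'] / total
print(post_guest, post_remaining)   # 1/3 2/3: the guest should switch
```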

⁴ We shall come back to reconsider this example in Lecture 5, where we shall write down the above PBE's in a formal manner.
