Alonso Dessein Matouschek (2008) - When Does Coordination Require Centralization
Reference(s)
- Alonso, Ricardo, Wouter Dessein and Niko Matouschek (2008), "When Does Coordination Require Centralization?", American Economic Review, Vol. 98(1), pp. 145-179.
Abstract
This paper compares centralized and decentralized coordination when managers are privately informed and communicate strategically. We consider a multidivisional organization in which decisions must be adapted to local conditions but also coordinated with each other. Information about local conditions is dispersed and held by self-interested division managers who communicate via cheap talk. The only available formal mechanism is the allocation of decision rights. We show that a higher need for coordination improves horizontal communication but worsens vertical communication. As a result, decentralization can dominate centralization even when coordination is extremely important relative to adaptation.
The Model
Basic Setup
There are two divisions, [math]j \in \{1,2\}\;[/math].
Each division makes a decision [math]d_j\;[/math], based on local conditions [math]\theta_j \in \mathbb{R}\;[/math].
The profits of the divisions are given by:
- [math]\pi_1 = K_1 - (d_1 - \theta_1)^2 - \delta (d_1 - d_2)^2\;[/math]
- [math]\pi_2 = K_2 - (d_2 - \theta_2)^2 - \delta (d_1 - d_2)^2\;[/math]
Where:
- [math]K_j \in \mathbb{R}\;[/math], WLOG [math]K_j = 0\;[/math]
- [math]\delta \in [0,\infty)\;[/math] measures the importance of coordination
- [math]\theta_j \sim U[-s_j,s_j]\;[/math], where the distribution is common knowledge but the draw is private
The division managers have preferences ([math]\lambda \in [\frac{1}{2},1]\;[/math] represents bias):
- [math]u_1 = \lambda \pi_1 + (1-\lambda) \pi_2\;[/math]
- [math]u_2 = \lambda \pi_2 + (1-\lambda) \pi_1\;[/math]
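As a quick check of the bias parameter, substituting the two extreme values of [math]\lambda\;[/math] into these preferences gives:
- [math]\lambda = \tfrac{1}{2}: \; u_1 = \tfrac{1}{2}(\pi_1 + \pi_2)\;[/math] (the manager shares HQ's objective, up to scale)
- [math]\lambda = 1: \; u_1 = \pi_1\;[/math] (the manager cares only about own-division profit)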
The headquarters (HQ) manager has preferences:
- [math]u_h = \pi_1 + \pi_2\;[/math]
The managers can send messages [math]m_1 \in M_1\;[/math] and [math]m_2 \in M_2\;[/math] respectively.
There are two organisational forms:
- Under centralization division managers simultaneously send messages to HQ who makes decisions
- Under decentralization the division managers simultaneously exchange messages and make decisions
The game proceeds as follows:
- Decision rights are allocated
- Managers learn states [math]\theta_1\;[/math] and [math]\theta_2\;[/math] respectively
- Managers send messages [math]m_1\;[/math] and [math]m_2\;[/math] respectively
- Decisions [math]d_1\;[/math] and [math]d_2\;[/math] are made
Decision Making
Under Centralization:
HQ determines [math]d_1^C\;[/math] and [math]d_2^C\;[/math] by maximizing [math]u_h\;[/math] with respect to these variables. The solutions are:
- [math]d_1^C = \gamma_C \mathbb{E}[\theta_1|m] + (1-\gamma_C) \mathbb{E}[\theta_2|m]\;[/math]
- [math]d_2^C = \gamma_C \mathbb{E}[\theta_2|m] + (1-\gamma_C) \mathbb{E}[\theta_1|m]\;[/math]
where:
- [math]\gamma_C = \frac{1+2\delta}{1+4\delta}\;[/math]
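A sketch of where [math]\gamma_C\;[/math] comes from: the first-order conditions of [math]u_h\;[/math] with respect to [math]d_1\;[/math] and [math]d_2\;[/math], with the states replaced by their conditional expectations, are
- [math](1+2\delta) d_1^C - 2\delta d_2^C = \mathbb{E}[\theta_1|m], \qquad (1+2\delta) d_2^C - 2\delta d_1^C = \mathbb{E}[\theta_2|m]\;[/math]
Adding and subtracting these two conditions gives [math]d_1^C + d_2^C = \mathbb{E}[\theta_1|m] + \mathbb{E}[\theta_2|m]\;[/math] and [math](1+4\delta)(d_1^C - d_2^C) = \mathbb{E}[\theta_1|m] - \mathbb{E}[\theta_2|m]\;[/math], which combine to give the decision rules above with [math]\gamma_C = \frac{1+2\delta}{1+4\delta}\;[/math].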
Centralization Comparative Statics:
- [math]\frac{d \gamma_C}{d\delta} < 0\;[/math], with [math]\gamma_C \in (\frac{1}{2},1]\;[/math]
- When [math]\delta = 0\;[/math]: [math]d_1^C = \mathbb{E}[\theta_1|m]\;[/math]
- When [math]\delta = 1\;[/math]: [math]d_1^C\;[/math] puts more weight on [math]\mathbb{E}[\theta_2|m]\;[/math] than when [math]\delta = 0\;[/math]
- As [math]\delta \to \infty\;[/math]: equal weight is put on both, [math]d_1^C = \mathbb{E}[\frac{\theta_1 + \theta_2}{2}|m]\;[/math]
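Plugging values into the expression for [math]\gamma_C\;[/math] confirms the three cases above:
- [math]\delta = 0: \gamma_C = 1; \qquad \delta = 1: \gamma_C = \tfrac{3}{5}; \qquad \delta \to \infty: \gamma_C \to \tfrac{1}{2}\;[/math]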
Under Decentralization:
Each manager determines their own decision by maximizing [math]u_j\;[/math] with respect to [math]d_j\;[/math], taking the message from the other party into account. This gives:
- [math]d_1^D = \frac{\lambda}{\lambda + \delta} \theta_1 + \frac{\delta}{\lambda + \delta} \mathbb{E}[d_2|\theta_1,m]\;[/math]
- [math]d_2^D = \frac{\lambda}{\lambda + \delta} \theta_2 + \frac{\delta}{\lambda + \delta} \mathbb{E}[d_1|\theta_2,m]\;[/math]
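A sketch of the first-order condition behind these best responses: differentiating [math]u_1 = \lambda \pi_1 + (1-\lambda) \pi_2\;[/math] with respect to [math]d_1\;[/math] and taking expectations given [math]\theta_1\;[/math] and the messages (the coordination loss appears in both [math]\pi_1\;[/math] and [math]\pi_2\;[/math], so it enters with total weight [math]\delta\;[/math]) gives
- [math]\lambda (d_1 - \theta_1) + \delta \left(d_1 - \mathbb{E}[d_2|\theta_1,m]\right) = 0 \quad \Rightarrow \quad d_1^D = \frac{\lambda \theta_1 + \delta \mathbb{E}[d_2|\theta_1,m]}{\lambda + \delta}\;[/math]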
Note that the weight each decision puts on local information is increasing in the bias [math]\lambda\;[/math], and decreasing in the need for coordination [math]\delta\;[/math].
Taking expectations and substituting back in, we get:
- [math]d_1^D = \frac{\lambda}{\lambda + \delta} \theta_1 + \frac{\delta}{\lambda + \delta} \left(\frac{\delta}{\lambda + 2 \delta} \mathbb{E}[\theta_1|\theta_2,m] + \frac{\lambda+ \delta}{\lambda + 2\delta} \mathbb{E}[\theta_2|\theta_1,m] \right )\;[/math]
- [math]d_2^D = \frac{\lambda}{\lambda + \delta} \theta_2 + \frac{\delta}{\lambda + \delta} \left(\frac{\delta}{\lambda + 2 \delta} \mathbb{E}[\theta_2|\theta_1,m] + \frac{\lambda+ \delta}{\lambda + 2\delta} \mathbb{E}[\theta_1|\theta_2,m] \right )\;[/math]
Decentralization Comparative Statics:
- As [math]\delta\;[/math] increases: each manager puts less weight on his own information, and more on a weighted average
- As [math]\delta \to \infty\;[/math]: again equal weight is put on both, [math]d_1^D \to \frac{1}{2}\left(\mathbb{E}[\theta_1|\theta_2,m] + \mathbb{E}[\theta_2|\theta_1,m]\right)\;[/math]
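As a check, taking [math]\delta \to \infty\;[/math] in the expression for [math]d_1^D\;[/math] above:
- [math]\frac{\lambda}{\lambda+\delta} \to 0, \quad \frac{\delta}{\lambda+\delta} \to 1, \quad \frac{\delta}{\lambda+2\delta} \to \tfrac{1}{2}, \quad \frac{\lambda+\delta}{\lambda+2\delta} \to \tfrac{1}{2}\;[/math]
so each decision converges to an equal-weighted average of the two conditional expectations.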
Strategic Communication
When [math]\theta_j=0\;[/math] there is no reason to misrepresent. Otherwise, under both centralization and decentralization there is an incentive to exaggerate.
Under centralization, the need for coordination (a high [math]\delta\;[/math]) exacerbates this problem (HQ already puts less than full weight on each local condition, and as [math]\delta\;[/math] grows its decisions respond only to the average of the two states rather than to either one individually).
Under decentralization, the need for coordination (a high [math]\delta\;[/math]) mitigates this problem (as the managers become more responsive to each other's needs).
With HQ (under centralization)
Let [math]\nu_1^* = \mathbb{E}[\theta_1|m]\;[/math] be the expectation of the local state that manager 1 would like HQ to hold, so that:
- [math]\nu_1^* = \arg \max_{\nu_1} \mathbb{E} [ - \lambda(d_1 - \theta_1)^2 -(1-\lambda) (d_2 - \theta_2)^2- \delta (d_1 - d_2)^2 ]\;[/math]
In equilibrium the beliefs of the HQ manager will be correct, so [math]\mathbb{E}_{m_2}( \mathbb{E}[\theta_1|m] ) = \mathbb{E}[\theta_1] = 0\;[/math], and likewise for [math]\theta_2\;[/math], so:
- [math]\nu_1^* - \theta_1 = \frac{(2 \lambda - 1) \delta}{\lambda+\delta}\theta_1 = b_C \cdot \theta_1\;[/math]
Where [math]b_C\;[/math] is the bias in messages to HQ. This bias is zero when [math]\theta_1 = 0\;[/math] and otherwise has the same sign as [math]\theta_1\;[/math] (the manager exaggerates away from zero). Its magnitude is increasing in [math]|\theta_1|\;[/math], [math]\lambda\;[/math], and [math]\delta\;[/math].
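A sketch of the calculation behind [math]b_C\;[/math]: substituting the centralized decision rules into manager 1's objective (using the fact that manager 1 expects HQ's posterior about [math]\theta_2\;[/math] to equal zero) and taking the first-order condition in [math]\nu_1\;[/math] gives
- [math]\nu_1^* = \frac{\lambda \gamma_C}{\lambda \gamma_C^2 + (1-\lambda)(1-\gamma_C)^2 + \delta(2\gamma_C-1)^2}\,\theta_1 = \frac{\lambda(1+2\delta)}{\lambda+\delta}\,\theta_1\;[/math]
Subtracting [math]\theta_1\;[/math] then yields the expression for [math]b_C\;[/math] above.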
With each other (under decentralization)
In the same way we can calculate:
- [math]\nu_1^* - \theta_1 = \frac{(2\lambda -1)(\lambda+\delta)}{\lambda(1-\lambda)+\delta}\theta_1 = b_D \theta_1\;[/math]
Where [math]b_D\;[/math] is the bias in messages to the other division manager. This bias is zero when [math]\theta_1 = 0\;[/math] and otherwise has the same sign as [math]\theta_1\;[/math]. Its magnitude is increasing in [math]|\theta_1|\;[/math] and [math]\lambda\;[/math] (home bias), but decreasing in [math]\delta\;[/math] (the need for coordination).
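Comparing the two bias coefficients at the extremes of [math]\delta\;[/math] (direct substitution into the expressions above) illustrates why a greater need for coordination worsens vertical communication but improves horizontal communication:
- [math]\delta = 0: \; b_C = 0, \quad b_D = \frac{2\lambda-1}{1-\lambda}; \qquad \delta \to \infty: \; b_C \to 2\lambda - 1, \quad b_D \to 2\lambda - 1\;[/math]
As [math]\delta\;[/math] grows, the bias toward HQ rises and the bias toward the other division manager falls, with both converging to [math]2\lambda-1\;[/math].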
Communication Equilibria
The paper uses a Crawford and Sobel (1982) type model, which is covered in [[Grossman Helpman (2001) - Special Interest Politics Chapters 4 And 5 | Grossman and Helpman (2001)]], in which the state spaces [math][-s_1,s_1]\;[/math] and [math][-s_2,s_2]\;[/math] are partitioned into intervals. The sizes of the intervals (which determine how informative messages are) depend directly on the biases [math]b_C\;[/math] and [math]b_D\;[/math].
The game uses a perfect Bayesian equilibrium solution concept, which requires:
- Communication rules are optimal given the decision rules
- Decision rules are optimal given belief functions
- Beliefs are derived from the communication rules using Bayes' rule (whenever possible).