ECON 626: Problem Set 6

Published: November 9, 2023

\[ \def\Er{{\mathrm{E}}} \def\En{{\mathbb{E}_n}} \def\cov{{\mathrm{Cov}}} \def\var{{\mathrm{Var}}} \def\R{{\mathbb{R}}} \newcommand\norm[1]{\left\lVert#1\right\rVert} \def\rank{{\mathrm{rank}}} \newcommand{\inpr}{ \overset{p^*_{\scriptscriptstyle n}}{\longrightarrow}} \def\inprob{{\,{\buildrel p \over \rightarrow}\,}} \def\indist{\,{\buildrel d \over \rightarrow}\,} \DeclareMathOperator*{\plim}{plim}\]

Problem 1

Consider the simple regression model \[ y_i = \beta_0 + \beta_1 x^*_i + \epsilon_i \] Assume that \(\var(x^*)>0\) and \(\Er[x^* \epsilon] = 0\). However, \(x^*\) is measured with error. Instead of observing \(x^*\), you observe \(x_i = x^*_i + u_i\). Assume that \(\Er[u] = 0\), \(\Er[u x^*] =0\) and \(\Er[\epsilon u] = 0\).

1

Find \(\plim \hat{\beta}_1\), where \(\hat{\beta}_1\) is the OLS estimator.
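Before deriving the limit, it can help to see the setup numerically. Below is a minimal Monte Carlo sketch; all numeric values (the coefficients, variances, sample size, and seed) are arbitrary illustrative choices, not part of the problem.

```python
import numpy as np

# Minimal Monte Carlo sketch of the errors-in-variables setup.
# All numeric values (beta1, variances, n, the seed) are arbitrary.
rng = np.random.default_rng(0)
n = 100_000
beta0, beta1 = 1.0, 2.0
xstar = rng.normal(0.0, 1.0, n)   # true regressor x*
u = rng.normal(0.0, 0.5, n)       # measurement error u
eps = rng.normal(0.0, 1.0, n)     # regression error
y = beta0 + beta1 * xstar + eps
x = xstar + u                     # observed, mismeasured regressor

# OLS slope from regressing y on the observed x
beta1_hat = np.polyfit(x, y, 1)[0]
print(f"beta1_hat = {beta1_hat:.3f}  (true beta1 = {beta1})")
# Compare the printed value with the plim you derive in part 1.
```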

2

In "Do Low Levels of Blood Lead Reduce Children’s Future Test Scores?" Aizer et al. (2018) examine the relationship between blood lead levels and children’s test scores. Table 4 shows estimates of \(\beta_1\) from regressions of

\[ test_i = \beta_0 + \beta_1 lead_i + \text{ other controls} + \epsilon_i \tag{1}\]

where \(test_i\) is a 3rd grade reading or math test score and \(lead_i\) is a blood lead level measurement taken before the age of six. Some children had multiple blood lead level measurements taken, and each measurement has some error. When comparing columns (1) and (2), note that venous tests are known to have less measurement error than capillary tests; when comparing columns (3) and (4), note that the average of all blood lead level measurements has less measurement error than a single measurement. Are the changes in the estimates across columns what you would expect from part 1? Why or why not?

Aizer et al. (2018), Table 4

3

Suppose \(z_i\) is a second measurement of \(x^*\), \[ z_i = x^*_i + e_i \] with \(\Er[e] = 0\), \(\Er[x^* e] = 0\), \(\Er[\epsilon e] = 0\) and \(\Er[e u] = 0\). Show that \[ \hat{\beta}_1^{IV} = \frac{\sum_{i=1}^n (z_i - \bar{z}) y_i} {\sum_{i=1}^n (z_i - \bar{z}) x_i} \] is a consistent estimate of \(\beta_1\).
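A quick simulation sketch of this estimator, continuing the setup from part 1: \(z\) is a second, independently mismeasured reading of \(x^*\), used as an instrument for \(x\). Again, all numeric values are arbitrary illustrations.

```python
import numpy as np

# Sketch: z is a second noisy measurement of x*, used as an
# instrument for x. All numeric values are arbitrary.
rng = np.random.default_rng(1)
n = 100_000
beta0, beta1 = 1.0, 2.0
xstar = rng.normal(size=n)
y = beta0 + beta1 * xstar + rng.normal(size=n)
x = xstar + 0.5 * rng.normal(size=n)   # first measurement
z = xstar + 0.5 * rng.normal(size=n)   # second, independent measurement

zbar = z.mean()
beta1_iv = ((z - zbar) * y).sum() / ((z - zbar) * x).sum()
beta1_ols = np.polyfit(x, y, 1)[0]
print(f"OLS: {beta1_ols:.3f}  IV: {beta1_iv:.3f}  true: {beta1}")
# OLS should be attenuated; the IV estimate should be close to beta1.
```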

4

Table 5 from Aizer et al. (2018) shows additional estimates of the model in Equation 1. Column (1) shows standard multiple regression estimates. Columns (2) and (3) show estimates using the estimator \(\hat{\beta}_1^{IV}\) from part 3. Is the change in the estimates between columns (1) and (2) what you would expect based on parts 1 and 3? Why or why not?

Aizer et al. (2018), Table 5

Problem 2

In the linear model, \[ y = \underbrace{X}_{n \times k} \beta + \epsilon, \] partition \(X\) as \[ X= \begin{pmatrix} \underbrace{X_1}_{n \times k_1} & \underbrace{ X_2}_{n \times (k - k_1)} \end{pmatrix} \]

and \(\beta = \begin{pmatrix} \beta_1\\ \beta_2 \end{pmatrix}\).

Let \(\hat{\beta} = \begin{pmatrix} \hat{\beta}_1\\ \hat{\beta}_2 \end{pmatrix}\) be the OLS estimator.

Show that \[ \hat{\beta}_1 = \textrm{arg}\min_{b_1} \norm{M_{X_2} y - M_{X_2} X_1 b_1 }^2 \] where \(M_{X_2} = I - X_2 (X_2' X_2)^{-1} X_2'\).
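One way to build intuition for (and double-check) this partialling-out result is a quick numerical verification. The sketch below uses arbitrary dimensions and random data; it compares the \(X_1\) coefficients from the full regression with those from regressing \(M_{X_2} y\) on \(M_{X_2} X_1\).

```python
import numpy as np

# Numerical check of the partialling-out claim in Problem 2.
# Dimensions and data are arbitrary illustrative choices.
rng = np.random.default_rng(2)
n, k1, k2 = 500, 2, 3
X1 = rng.normal(size=(n, k1))
X2 = rng.normal(size=(n, k2))
X = np.hstack([X1, X2])
y = X @ rng.normal(size=k1 + k2) + rng.normal(size=n)

# Full OLS, keeping only the coefficients on X1
beta1_full = np.linalg.lstsq(X, y, rcond=None)[0][:k1]

# Partialled-out regression: M_{X2} y on M_{X2} X1
M2 = np.eye(n) - X2 @ np.linalg.solve(X2.T @ X2, X2.T)
beta1_fwl = np.linalg.lstsq(M2 @ X1, M2 @ y, rcond=None)[0]

print(np.allclose(beta1_full, beta1_fwl))  # expect True
```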

Problem 3

Consider estimating the linear model,¹ \[ y_i = \beta D_i + x_i' \alpha + \epsilon_i \] where \(\beta \in \R\), \(D_i \in \{0,1\}\) and \(x_i \in \R^k\).

  1. Show that the OLS estimate for \(\beta\) can be written as \[ \hat{\beta} = \frac{1}{n} \sum_{i=1}^n y_i \hat{\omega}(x_i,D_i) \] where \(\frac{1}{n} \sum_{i=1}^n D_i \hat{\omega}(x_i,D_i) = 1\), and \(\frac{1}{n} \sum_{i=1}^n (1-D_i) \hat{\omega}(x_i,D_i) = -1\). [Hint: use the result of problem 2.]

  2. Let \[ \pi \in \mathrm{arg}\min_{\pi} \Er[(D_i - x_i'\pi)^2]. \] Show that \[ \hat{\omega}(x,D) \inprob \frac{D - x'\pi}{\Er[(D - x'\pi)^2]}. \] If needed, state additional assumptions about dependence or moments. (A numerical sketch of parts 1 and 2 appears after this list.)

  3. Suppose that the linear model being estimated might not be the data generating process. Instead, assume the data comes from a potential outcomes framework. There are potential outcomes \((y_i(1),y_i(0))\). You observe \(y_i = y_i(1)D_i + y_i(0)(1-D_i)\). Show that \[ \hat{\beta} \inprob \Er[y_i(0) \omega(x_i,D_i)] + \Er\left[\left(y_i(1) - y_i(0)\right) D_i \omega(x_i,D_i) \right]. \]

  4. If you assume that \((y_i(1),y_i(0))\) are independent of \(D_i\) conditional on \(x_i\), does \(\hat{\beta}\) have any nice interpretation? [Hint: what can be the range of \(\omega(x,1)\), especially if the range of \(x\) is large?]
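The sketch below is a numerical illustration of parts 1 and 2: it constructs candidate weights from the partialled-out \(D\) (using the result of problem 2) and checks the representation \(\hat{\beta} = \frac{1}{n}\sum_i y_i \hat{\omega}(x_i, D_i)\) along with the two sum conditions. The data-generating process, the coefficient values, and the inclusion of a constant in \(x\) are illustrative assumptions, not part of the problem.

```python
import numpy as np

# Sketch for Problem 3, parts 1-2: recover the implied weights via
# partialling out (Problem 2) and check the two sum conditions.
# Data-generating choices here are arbitrary illustrations.
rng = np.random.default_rng(3)
n, k = 10_000, 3
x = np.hstack([np.ones((n, 1)), rng.normal(size=(n, k - 1))])  # includes a constant
D = (x @ np.array([0.2, 0.5, -0.3]) + rng.normal(size=n) > 0).astype(float)
y = 1.5 * D + x @ np.array([1.0, -1.0, 2.0]) + rng.normal(size=n)

# Residual of D after projecting on x, and the implied weights
pi_hat = np.linalg.lstsq(x, D, rcond=None)[0]
Dtil = D - x @ pi_hat
omega = n * Dtil / (Dtil @ D)

# OLS coefficient on D from the full regression
beta_full = np.linalg.lstsq(np.column_stack([D, x]), y, rcond=None)[0][0]
print(np.isclose(beta_full, (y * omega).mean()))     # beta_hat = mean(y * omega)
print((D * omega).mean(), ((1 - D) * omega).mean())  # expect 1 and -1
```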

References

Aizer, Anna, Janet Currie, Peter Simon, and Patrick Vivier. 2018. “Do Low Levels of Blood Lead Reduce Children’s Future Test Scores?” American Economic Journal: Applied Economics 10 (1): 307–41. https://doi.org/10.1257/app.20160404.
Borusyak, Kirill, and Xavier Jaravel. 2018. “Revisiting Event Study Designs.” https://scholar.harvard.edu/files/borusyak/files/borusyak_jaravel_event_studies.pdf.

Footnotes

  1. This problem is partially based on Borusyak and Jaravel (2018).