regression: BLP not BLUE!

Scott Cunningham asks a big-picture question about pedagogy in MHE:

In chapter 3, you emphasize early on the important property of prediction. For example, Theorems 3.1.2 and 3.1.5. In my econometrics training years ago, early initiation into regression focused more on OLS as BLUE than as BLP (best linear predictor). I was curious why, then, in your pedagogy you chose to make prediction and not unbiasedness so central a concept for introducing people to causal inference. I hesitate to say this, because it’s probably wrong, but I don’t even remember Greene’s textbook going into BLP at all. Why are these BLP properties so pedagogically valuable to you, as opposed to just focusing on BLUE like the traditional econometrics pedagogy seems to do?

Thanks! I’m a huge fan.

Great question, Scott, all the more so in view of the release of our undergrad-focused Mastering Metrics this winter. Your undergrad econometrics training (like most people’s) focused on the sampling distribution of OLS. Hence you were tortured with the Gauss-Markov Theorem, which says that OLS is a Best Linear Unbiased Estimator (BLUE). MHE and MM are largely unconcerned with such things. Rather, we try to give our students a clear understanding of what regression means. To that end, we introduce regression as the best linear approximation to whatever conditional expectation function (CEF) motivates your empirical work. This is the BLP property you mention, a feature of the population regression that has nothing to do with sampling. (MM also emphasizes our interpretation of regression as a form of “automated matching.”)
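To see what that BLP property says in population terms, here is one way to write it down (my notation, not a quote from the book). With regressors $X_i$ and outcome $Y_i$, the population regression vector

$$\beta = \arg\min_b \, E\big[(Y_i - X_i'b)^2\big] = E[X_i X_i']^{-1} E[X_i Y_i]$$

also solves

$$\beta = \arg\min_b \, E\big[\big(E[Y_i \mid X_i] - X_i'b\big)^2\big],$$

so the same vector is both the best (minimum mean-squared-error) linear predictor of $Y_i$ and the best linear approximation to the CEF $E[Y_i \mid X_i]$. Nothing in these expressions involves a sample.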

In particular, our MHE/MM understanding of regression is divorced from sampling properties like BLUE, which, for all the attention your old-school training gave them, are (a) boring, (b) of little practical importance for the quality of your empirical work, and (c) untrue in most applications. BLUEness of OLS estimates (the solution to the least squares problem that Stata solves when you ask it to regress) holds only when the underlying CEF is linear, with constant residual variance to boot. Since there’s usually no reason to believe such things obtain in the empirical world, and no need to assume they do, sampling properties like unbiasedness and efficiency (“best”) needn’t trouble us. When it comes to sampling properties, we care only about getting the standard errors right, also a boring problem, but one that’s necessary for statistical inference and isn’t driven by the sophomoric literalism of old-school ’metrics pedagogy.
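For readers who want to see the standard-errors point in action, here is a minimal sketch in Python (my own illustration, not code from the book; the simulated data-generating process and the statsmodels calls are assumptions of the example). The CEF below is nonlinear and the residual variance isn’t constant, so the Gauss-Markov conditions fail, yet OLS still estimates the best linear approximation to the CEF, and heteroskedasticity-robust standard errors remain appropriate where the classical ones don’t:

```python
import numpy as np
import statsmodels.api as sm

# Simulate data whose CEF is nonlinear and whose residual variance grows with x,
# i.e. a setting in which the Gauss-Markov assumptions fail.
rng = np.random.default_rng(42)
n = 5_000
x = rng.uniform(0, 2, size=n)
y = np.exp(x) + rng.normal(scale=1 + x, size=n)

X = sm.add_constant(x)  # intercept plus x

classical = sm.OLS(y, X).fit()             # conventional (homoskedastic) SEs
robust = sm.OLS(y, X).fit(cov_type="HC1")  # heteroskedasticity-robust SEs

# Same slope estimate either way; only the standard errors differ.
print("slope:        ", classical.params[1])
print("classical SE: ", classical.bse[1])
print("robust SE:    ", robust.bse[1])
```

In Stata, the analogous move is simply adding the robust option to the regress command.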
— Master Joshway
