This morning, while scrolling through my Twitter timeline, I was greeted by a link to a new study about online learning. Caroline Hoxby, a well-known and well-respected scholar at Stanford, published a study through NBER entitled “The Returns to Online Postsecondary Education.” The tl;dr version of the study is, as Hoxby writes, “On the whole, I find little support for optimistic prognostications about online education.” It’s a pretty dreadful report if you’re an advocate for online learning.

After I got a copy of the study, a BIG red flag went up based on the very first sentence of the abstract: "This study analyzes longitudinal data on nearly every person who engaged in postsecondary education that was wholly or substantially online between 1999 and 2014."

Nearly every person who engaged in postsecondary education that was wholly or substantially online over 15 years? As the kids these days say, “Woah, big if true!”

The problem is that the sampling claim is not true. Like, bigly untrue. Fake news? Well, I spent much of the afternoon working on a review of the study. Phil Hill, though, beat me to publication. His critique of the sampling methodology, linked in his tweet, is excellent; I'd written much of the same. Read it; the methodology is DEEPLY flawed, rendering the findings essentially useless. I hope to write more about it tomorrow, but it may be too late. The Chronicle of Higher Education has already reported on the study with no critical insight.

I should add here, BTW, that I don't actually doubt Hoxby's ultimate conclusions. That is, by my account, most online programs are pretty crappy, and the labor market hasn't fully bought into the hype of online learning; degrees from fully online universities definitely still send a different signal to employers. But a study that claims to cover "nearly every person who engaged in postsecondary education that was wholly or substantially online" over 15 years, yet is off by more than a million students, can't be taken seriously.