This "classic" (but still very topical and relevant to big data) post discusses the issues big data can face when it forgets, or ignores, applied statistics. It is as great a discussion today as it was two years ago.
Editor's note: This blog post was originally published over 2 years ago, and it is republished here unchanged. Not only is it just as relevant as it was then, it is likely even more applicable today.
This year the idea that statistics is important for big data has exploded into the popular media. Here are a few examples, starting with the Lazer et al. paper in Science that got the ball rolling on this idea.
All of these articles warn about issues that statisticians have been thinking about for a very long time: sampling populations, confounders, multiple testing, bias, and overfitting. In the rush to take advantage of the hype around big data, these ideas were ignored or not given sufficient attention.
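One of those pitfalls, multiple testing, can be sketched in a few lines of code. This is a minimal illustration of the general idea (the simulation, seed, and numbers are ours, not from the original post): run many hypothesis tests on pure noise, and even with no real effects at all, roughly 5% of tests come out "significant" at alpha = 0.05.

```python
import math
import random

random.seed(0)

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

def approx_p(t):
    """Two-sided p-value via a normal approximation (adequate for n = 50)."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

# 1,000 "experiments" in which the null is true by construction:
# both groups are drawn from the same standard normal distribution.
n_tests, alpha = 1000, 0.05
false_positives = sum(
    approx_p(welch_t([random.gauss(0, 1) for _ in range(50)],
                     [random.gauss(0, 1) for _ in range(50)])) < alpha
    for _ in range(n_tests)
)

# Without a multiple-testing correction, about 5% of pure-noise
# comparisons look "significant" -- dozens of spurious discoveries.
print(false_positives)
```

Scale this from 1,000 tests to the millions of comparisons a big data analysis can generate, and uncorrected "discoveries" pile up fast; that is exactly why statisticians insist on corrections like Bonferroni or false discovery rate control.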
One reason is that when you actually take the time to do an analysis right, with careful attention to all the sources of variation in the data, it is almost a law that you will have to make smaller claims than you could if you had just shoved your data into a machine learning algorithm and reported whatever came out the other side.
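The "shove it into an algorithm" failure mode can be made concrete with a toy sketch (synthetic data; the model choice and numbers below are illustrative, not from the post). A 1-nearest-neighbor "predictor" memorizes pure noise perfectly in-sample, then does no better than a coin flip on new data:

```python
import random

random.seed(1)

def one_nn_predict(train, x):
    """Predict the label of the training point whose feature is closest to x."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

# Labels are random coin flips -- there is genuinely nothing to learn.
train = [(random.random(), random.choice([0, 1])) for _ in range(200)]
test  = [(random.random(), random.choice([0, 1])) for _ in range(200)]

train_acc = sum(one_nn_predict(train, x) == y for x, y in train) / len(train)
test_acc  = sum(one_nn_predict(train, x) == y for x, y in test) / len(test)

print(train_acc)  # 1.0: the algorithm "discovers" perfect structure
print(test_acc)   # ~0.5: the discovery evaporates out of sample
```

The in-sample fit is flawless precisely because the model has memorized the noise, which is why honest out-of-sample evaluation forces the smaller claims the paragraph above describes.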
The prime example in the press is Google Flu Trends. Google Flu Trends was originally developed as a machine learning algorithm for predicting the number of flu cases based on Google search terms. While the underlying data management and machine learning algorithms were correct, a misunderstanding of the uncertainties in the data collection and modeling process has led to highly inaccurate estimates over time. A statistician would have thought carefully about the sampling process, identified time series components in the spatial trend, investigated why the search terms were predictive, and tried to understand the likely reason that Google Flu Trends was working.
As we have seen, a lack of expertise in statistics has led to fundamental errors in both genomic science and economics. In the first case, a team of scientists led by Anil Potti created an algorithm for predicting the response to chemotherapy. This approach was widely praised in both the scientific and popular press. Unfortunately, the researchers did not correctly account for all the sources of variation in the data set, misapplied statistical methods, and ignored major data integrity problems. The lead author and the editors who handled this paper didn't have the necessary statistical expertise, which led to major consequences and canceled clinical trials.
Similarly, two economists, Reinhart and Rogoff, published a paper claiming that GDP growth was slowed by high government debt. Later it was discovered that there was an error in the Excel spreadsheet they used to perform the analysis. More importantly, the choice of weights they used in their regression model was questioned as unrealistic and as leading to dramatically different conclusions than the authors espoused publicly. The primary failing was a lack of the sensitivity analysis of data analytic assumptions that any well-trained applied statistician would have performed.
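A toy sensitivity analysis in that spirit fits in a few lines (the records below are made up for illustration; the country labels and growth figures are not Reinhart and Rogoff's data). The same records, summarized under two defensible weighting schemes, give noticeably different "average growth" figures, which is exactly the kind of dependence on analytic choices a sensitivity analysis is meant to expose:

```python
# Hypothetical (country, years_observed, mean_growth_pct) records
# for high-debt episodes. Country A contributes many years of low growth.
records = [
    ("A", 19, -1.8),
    ("B", 1, 2.6),
    ("C", 4, 2.4),
    ("D", 1, 2.5),
]

# Scheme 1: weight each country-year equally.
per_year = sum(y * g for _, y, g in records) / sum(y for _, y, _ in records)

# Scheme 2: weight each country equally, regardless of years observed.
per_country = sum(g for _, _, g in records) / len(records)

print(per_year)     # negative: dominated by country A's long episode
print(per_country)  # above 1%: the same data look much healthier
```

Neither weighting is obviously "wrong" in isolation; the point is that a careful analyst reports how the conclusion moves as such assumptions vary, rather than publishing only the scheme that supports the headline claim.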
Statistical thinking has also been conspicuously absent from major public big data efforts so far. Here are some examples:
One example of this kind of thinking is this insane table from the alumni magazine of the University of California, which I found via Rafa (go watch his talk right now; it gets right to the heart of the issue). It shows a fundamental disrespect for applied statisticians who have developed serious expertise in a range of scientific disciplines.
All of this leads to two questions:
- Given the importance of statistical thinking, why aren't statisticians involved in these initiatives?
- When thinking about the big data era, what are some statistical ideas we've already figured out?
This article was originally published here.