Statistical reasoning is a valuable tool that is, unfortunately, widely abused in practice. Whether through self-deception or deliberate misdirection, statistical arguments are routinely misused, including many widely reported in the press as news. This baleful influence afflicts the social sciences most extensively. In reaction to such widespread abuse, I have insisted, against the methodology taught in economics, that correct conceptual foundations are essential to sound reasoning.
Postwar economics has come to be dominated by a methodology that caricatures the positivist philosophy of science associated with Karl Popper. The economists’ corruption of positivism was popularized during the 1950s by Milton Friedman. The most objectionable element of Friedman’s argument is his claim that, in building economic models, the correctness of assumptions does not matter. Even more boldly, he asserts that the most powerful models must rest on quite unrealistic assumptions. Few philosophers of science accept this statement as representative of any actual successful scientific process, but that does not stop economists from repeating it as gospel even today. This methodological move inoculates economics against people like me who ridicule its foundational assumptions.
I prefer not to speak narrowly of the assumptions of any particular model, but more broadly of the basic soundness of foundational concepts. In this regard, economics is bankrupt, as I have argued throughout this blog series. Statistical theory has a term for errors of this sort: model misspecification. It means that if a statistical model’s foundational concepts are deficient, any subsequent statistical calculations are spurious, no matter how carefully they are performed. Computer science has a simpler term for this: garbage in, garbage out.
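To make the point concrete, here is a minimal sketch in Python (assuming only numpy; the data and the quadratic “truth” are invented for illustration) of how a misspecified model turns flawless arithmetic into a spurious conclusion:

```python
# A toy illustration of model misspecification: the true relationship is
# U-shaped, but the analyst assumes a straight line.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 1000)
y = x**2 + rng.normal(0, 1, 1000)   # truth: quadratic, no linear trend

slope, intercept = np.polyfit(x, y, deg=1)
print(f"fitted linear slope: {slope:.3f}")  # near zero

# The regression dutifully reports "no effect" even though x strongly
# determines y. The calculation is flawless; the conclusion is garbage,
# because the model's conceptual foundation was garbage.
```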
Yet because of the pernicious influence of Friedman’s absurd methodological stance, vast numbers of statistical studies are published every year with faulty conceptual foundations that render their conclusions meaningless. Such erroneous pseudo-science is not confined to economics. It infects much of the social sciences and many other fields that misuse statistics, and it shapes the news as well.
I was reminded of this recently by a brief discussion on National Public Radio of the value of a college education. The interviewee, a university administrator, argued that the value of a university education can be measured, major by major, by statistical comparisons of the salaries earned by graduates. Even apart from the obvious philosophical error of equating value with salary, this commonplace sort of argument rests on a serious model misspecification. Similar arguments purport to “prove” that degrees from elite universities are more valuable by comparing the salaries earned by their graduates with those earned by graduates of non-elite universities.
This sort of reasoning would be correct only if the populations entering each major and each university were identical, so that any differences after graduation could be attributed to what happened during the four years of university training. Anyone thinking conceptually must notice, however, that the sorts of students who major in dance are not the same as those who major in engineering, and that those admitted to Harvard are not identical to those choosing the University of Massachusetts. If people differ so much before their university training begins, how can we say that the training is what produced their higher salaries, rather than other advantages such as raw intellect, family wealth and connections, or aspects of personality like motivation and self-confidence?
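A small simulation makes the misspecification vivid. In the hypothetical world below (all names and numbers invented), salaries depend only on pre-existing ability and the major contributes nothing at all, yet a naive comparison of graduates’ salaries “discovers” a large value for the engineering degree:

```python
# Selection bias in a toy world: ability drives both the choice of major
# and the eventual salary; the major itself has zero causal effect.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
ability = rng.normal(0, 1, n)                      # unobserved pre-existing advantage
engineering = (ability + rng.normal(0, 1, n)) > 0  # abler students sort into engineering
salary = 50_000 + 20_000 * ability + rng.normal(0, 5_000, n)

gap = salary[engineering].mean() - salary[~engineering].mean()
print(f"naive 'value of the engineering major': ${gap:,.0f}")
# Prints a gap of over $20,000, although the major contributed nothing.
```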
Competent statisticians know that for such an argument to be valid, one must control for other variables likely to influence outcomes. Unfortunately, details of model specification are seldom included in news reports, and consumers of news rarely look them up. As a result, producers of statistical arguments often get lazy, making simplifying assumptions that render studies easier to conduct and publish. Peer reviewers often overlook poorly specified models because they have imbibed the “assumptions don’t matter” nonsense taught since Friedman. I find that when statistical arguments are probed for important conceptual errors, very few pass muster.
Some studies of the value of a university education do try to control for the vast differences in starting circumstances. But it is easier just to assume that populations start out equal, or to use only the most easily obtained control variables, such as zip code (a weak proxy for wealth), test scores, or self-reported race.
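Continuing the toy example above, here is a sketch (again assuming only numpy, with invented numbers) of why controls matter and why weak proxies control poorly: an ordinary least-squares regression with no controls reproduces the biased gap, a noisy proxy shrinks the bias without removing it, and only controlling the true confounder recovers the true effect of roughly zero:

```python
# OLS estimates of the "major effect" under three sets of controls.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
ability = rng.normal(0, 1, n)
engineering = ((ability + rng.normal(0, 1, n)) > 0).astype(float)
salary = 50_000 + 20_000 * ability + rng.normal(0, 5_000, n)
proxy = ability + rng.normal(0, 2, n)   # a noisy stand-in, like a zip code

def major_effect(*controls):
    """Coefficient on the major from an OLS regression of salary."""
    X = np.column_stack((np.ones(n), engineering) + controls)
    beta, *_ = np.linalg.lstsq(X, salary, rcond=None)
    return beta[1]

print(f"no controls:      {major_effect():>9,.0f}")         # badly biased
print(f"weak proxy:       {major_effect(proxy):>9,.0f}")    # bias shrinks but persists
print(f"true confounder:  {major_effect(ability):>9,.0f}")  # near zero: the truth
```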
Likewise, statistical polling data in this election season are presented as objective information, despite the fact that many people interviewed lie or care little about whether their answers are accurate. They may just want to please the interviewer and get off the phone. Expert practitioners know well that answers vary considerably with the order and wording of questions, so surveys prepared by partisan organizations are often skewed to bias the results. Truly brilliant political strategists downplay obsessive polling in favor of game-changing conceptual innovation, reframing issues and thereby changing the way people think rather than taking public opinion as given.
Statisticians will argue back that even if conceptual innovations are important, ultimately their value can be proven only by further survey research. In some ideal sense, ignoring time, this is true. But in the real world, rapid reaction and adjustment may be more effective than waiting to confirm everything with polling data. Furthermore, people do not know how they will react to the next new thing until they experience it. Abstract survey questions may fail to evoke the impact of future experiences.
Statistics can measure routine and repeatable things, but the strategic method, one foundation of the political economy I teach, relies more on strong conceptual foundations and conceptual innovation. This sort of creativity is distinct from the statistical method. As I have mentioned, statistical studies are analogous to the idea of “ordinary force” in Sun Tzu’s The Art of War. At best they establish a routine baseline. Decisive interventions, however, often rely on “extraordinary force,” i.e., surprising adversaries with unexpected or innovative approaches that blindside them. Their conceptual complacency becomes a vulnerability. Extraordinary force prevails by exploiting the regularities and predictability of ordinary force.
I have also studied the Vietnam War deeply; there, statistical analysis of intelligence became a high art. Yet best-selling books like Philip Caputo’s A Rumor of War and films like Go Tell the Spartans show that obsession with statistical, routinely measurable variables often blinds conventional thinkers to a torrent of relentless innovation, an extraordinary force beyond their measurement or anticipation. Data alone are seldom sufficient. Ignorance of conceptual flaws is a recipe for decisive defeat.
Originally posted on the World Policy Institute blog, September 8, 2016, as “Statistical Deception.”