Friday, July 13, 2018

Review: Rigor Mortis, by Richard Harris


Originally posted on RARITANIA on May 24, 2018.

Contemporary fiction is prone to beat us over the head with overdrawn pictures of elite prowess, not least in regard to its depiction of workers in scientific and technological fields. Just as the billionaire is invariably brilliant and hyperarticulate (what a far cry from real life!), the scientist is always superhumanly lucid and meticulous.

Just how superhuman the depictions are, how removed from the human reality, is made all too clear by Richard Harris' recent book, Rigor Mortis. The field of medicine, Harris argues, has seen much smoke and no fire in the endless announcement of imminent breakthroughs that somehow never materialize. Where computer science has "Moore's Law," describing the geometric growth and cheapening of computing power, medical science has the reverse--"Eroom's Law" ("Moore" spelled backward), in which progress toward treatments and cures has slowed in equally dramatic fashion, a textbook case of futurehype, even as we remain in a very early phase of the battle against disease. (Harris informs us that of some 7,000 known diseases, only 500 have treatments, and many of those treatments "offer just marginal benefits" (3).)

Harris gives us to understand that this is a function of the publication of enormous volumes of intriguing experimental results that, somehow, other scientists are incapable of reproducing--a "crisis of reproducibility" suggestive of the field drowning in low-quality research. Just how bad is it? To take one significant example, John Ioannidis surveyed tens of thousands of papers on genomics reporting genetic links to particular diseases and found 98.8 percent of those results irreproducible, forcing one to conclude that they were false positives (132).
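(An aside not from Harris' book: the arithmetic behind such figures is easy enough to see for oneself. The toy simulation below, written in Python with entirely illustrative numbers of my own choosing--the share of candidate gene-disease links that are real, the statistical power of a typical study, the usual 0.05 significance threshold--shows how mass hypothesis testing with rare true effects and underpowered studies yields "discoveries" that are overwhelmingly false positives, the same logic Ioannidis has pressed for years.)

# Toy simulation: why most "significant" findings can be false positives
# when true effects are rare and studies are underpowered.
# All parameter values below are illustrative assumptions, not figures from the book.
import random

random.seed(0)

N_HYPOTHESES = 100_000   # candidate gene-disease links tested
PRIOR_TRUE = 0.01        # assume only 1 in 100 links is real
POWER = 0.20             # chance an underpowered study detects a real effect
ALPHA = 0.05             # false-positive rate when there is no real effect

true_positives = 0
false_positives = 0
for _ in range(N_HYPOTHESES):
    effect_is_real = random.random() < PRIOR_TRUE
    if effect_is_real:
        true_positives += random.random() < POWER
    else:
        false_positives += random.random() < ALPHA

total_positive = true_positives + false_positives
print(f"'Discoveries': {total_positive}")
print(f"Share that are false positives: {false_positives / total_positive:.0%}")
# With these assumptions, roughly 95 percent or more of the "significant"
# findings are spurious--the same ballpark as the figure cited above.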

This problem is partly a matter of the maddening complexity of biological systems--the sheer difficulty of controlling for single variables in the way we are all taught in third grade that science is supposed to do. As Harris' anecdotes demonstrate, the slightest difference in the choice of experimental apparatus or lab procedure can skew the results. (Not only is the gender of the mouse in question relevant, but so is that of the scientist who picks it up, given the stress reactions of mice to male rather than female handlers.)

However, the roots of "Eroom's Law" run a good deal deeper than that. It is partly a matter of educational failures--of medical researchers not getting a proper grounding in the deeper intellectual foundations of science (scientific epistemology, sheer old-fashioned logic) or in the use of statistics. Moreover, rather than lab work being a matter of standardized best practice, it remains a matter of individual craft, acquired by way of apprenticeship. All of this makes the design of experiments and the interpretation of their results sloppier, in ways ranging from inadequate sample sizes to the failure to properly "blind" studies, opening the door to researchers' biases influencing the results--problems afflicting perhaps three-quarters of the studies examined by the Global Biological Standards Institute.

It is also a matter of institutional failings that worsen the problem. There is the scarcity of genuine academic research positions, and, even for those who can land them, of funding for the actual research work. (Colleges don't pay for it. "Get a grant, serf!"--to quote recent Nobel laureate Jeffrey Hall--is the prevailing attitude.) One result is the intensification of the competition not just for positions (let alone tenure and promotion) but for grant money, far beyond what is good for the field. Scientists might spend half their time writing grant applications--a devotion of time, thought and energy that would be far better spent on the actual research. The difficulty of raising suitable sums also encourages the cutting of corners to stay within one's budget--with the result that scientists have not always taken the trouble to properly check their cell lines (an epic of scientific blundering in itself), and have become overly reliant on the pool of underemployed, ill-paid "postdocs" left by the scarcity of research positions as cheap but inexperienced scientific labor. Additionally, with the premium on being first rather than on doing it right, and on the Next Big Thing, there is a tendency to rush to conclusions and play those up. (One study found that the use of superlatives like "unprecedented" in the opening sections of scientific papers shot up a staggering 15,000 percent between 1974 and 2014 (191).) Mixed in with all this is an increasing temptation to fudge, if not outright fabricate and falsify (187), about which no one can be complacent.

Of course, the great virtue of the scientific method is that it has self-correction built in--but the current situation undermines the mechanisms by which this happens. The failure to reproduce experiments does not easily register within the field's collective consciousness. One reason is that journal publishers (a multi-billion dollar business), and unavoidably career-minded scientists, are encouraged by all these circumstances to attend to the next thing (they hope, the Next Big Thing) rather than revisit old work. The journals' editors are uninterested in the repetition of experiments to check old findings, especially when they yield negative results. They also make the retraction of invalidated findings difficult (journals charging thousands of dollars for this), and by the time a retraction happens a good deal of damage may already have been done, the work cited hundreds of times by other researchers in other papers--building on that flawed foundation. Besides, the intensified career pressures exacerbate the resistance to admitting prior research failures, or calling out those of others (who might be in a position to pass judgment on one's next grant proposal). Accordingly, even though it seems that much or most of what is out there ought to be retracted, this happens with just 0.02 percent of papers in a given year. (It might be noted, too, that pharmaceutical manufacturers have "whittled away their own research departments," leaving them without alternatives to the troubled academic system as a source of leads (227).)

The cult of specialization also feeds into this, reducing the chance for outside views, fresh thinking and greater perspective. And the problem is not relieved but worsened by the revolutionary changes in scientific tools--the weakness in statistical analysis taking its toll as medical research becomes more quantitative, and increasingly deals with "Big Data" like everything else.

All that said, Harris is optimistic--arguing that the problem is at least better publicized than it was before, and that reformers are taking steps to correct it. There is a broader development of "meta-research" (such as that conducted by the aforementioned Ioannidis), examining the difficulties of research itself and searching for solutions to them, and there are limited but real efforts to create standards where none existed before.

I can only judge Harris' book as an outsider to this world. Still, his case strikes me as a formidable one, lucidly argued, and amply supported by scholarly as well as journalistic research. The bigger problem of low productivity in this area of research is undeniable, and Harris is quite persuasive in pointing to the factors behind that crisis of reproducibility. Moreover, he presents his argument in a highly accessible fashion--the most general knowledge of biology and academic research being sufficient to follow it.

Still, one could argue that while he points to the problems, the full story is even larger than the one he tells. While Harris does not actually claim anything of the kind, his book can still lead one to think that the only problem is a broken academic research system, when arguably there are a whole host of others--from the more intrinsically difficult nature of the problems on which scientists are now working (admittedly, he does acknowledge the Big Data issue), to the ways in which the patent system frustrates the creation of potentially helpful new drugs. Moreover, where this particular problem is concerned, the book's stress on the scientist in the lab can seem to slight the other factors in the situation. After all, just why is it that scientific education has been weakened in the ways he says? Might the lack of grounding in epistemology and logic that helps lead to those flawed experimental designs not say something else--about the disdain for humanities like philosophy in contemporary culture? For that matter, why is medical research so starved for funding? And might all this be a problem not merely within this field, but across the sciences?

Anyone looking for answers to these parts of the larger story would have to look elsewhere to round it out--and in doing so would quickly find that not just the problem of maximizing the return on our massive investment in medical research, but the solution to it as well, is also larger, depending very much on what non-scientists do. Nevertheless, this very worthwhile book deserves a great deal of credit for its account of a key element of the problem.
