If you’re a human being, in possession of one working, standard-issue human brain (and, for the remainder of this post, I’m going to assume you are), it is inevitable that you will fall victim to a wide variety of cognitive biases and mistakes. Many of these biases result in our feeling much more certain about our knowledge of the world than we have any rational grounds for: from the
Availability Heuristic, to the
Dunning-Kruger Effect, to
Confirmation Bias, there is an increasingly well-documented catalog of ways in which we (and yes, that even includes you) become overconfident in our own judgment.
Over the years, scientists have developed a number of tools to help us overcome these biases in order to better understand the world. In the biological sciences, one of our best tools is the randomized controlled trial (RCT). In fact, randomization helps minimize biases so well that randomized trials have been suggested as a means of
developing better governmental policy.
However, RCTs in general require an investment of time and money, and they need to be somewhat narrowly tailored. As a result, they frequently become the target of people impatient with the process – especially those who perhaps feel themselves exempt from some of the above biases.
[Image: 4 out of 5 Hammer Doctors agree: the world is 98% nail.]
A shining example of this impatience-fortified-by-hubris can be found in a recent “Speaking of Medicine” blog post by Dr Trish Greenhalgh, with the mildly chilling title
Less Research is Needed. In it, the author presents a long list of things she considers so obvious that further study of them would be frivolous. Among the things the author knows, beyond a doubt, are that patient education does not work and that electronic medical records are inefficient and unhelpful.
I admit to being slightly in awe of Dr Greenhalgh’s omniscience in these matters.
In addition to her “we already know the answer to this” argument, she also mixes in a completely different argument, which is more along the lines of “we’ll never know the answer to this”. Of course, the upshot of that is identical: why bother conducting studies? For this argument, she cites the example of coronary artery disease: since a large genomic study found only a small association with CAD heritability, Dr Greenhalgh tells us that
any studies of different predictive methods are bound to fail and are thus not worth the effort (she specifically mentions “genetic, epigenetic, transcriptomic, proteomic, metabolic and intermediate outcome variables” as things she apparently already knows will not add anything to our understanding of CAD).
As studies grow more global, and as we adapt to massive increases in computing storage and processing power, I believe we will see an increase in this type of backlash. And while physicians can generally be relied on to be at the forefront of the demand for more, not less, evidence, it is quite possible that a vocal minority of physicians will adopt this kind of strongly anti-research stance. Dr Greenhalgh suggests that she is on the side of “thinking” when she opposes studies, but it is difficult to see this as anything more than an attempt to shut down critical inquiry in favor of deference to experts who are presumed to be fully informed and bias-free.
It is worthwhile for those of us engaged in trying to understand the world to be aware of these kinds of threats, and to take them seriously. Dr Greenhalgh writes glowingly of a 10-year moratorium on research – presumably, we will all simply rely on her expertise to answer our important clinical questions.