First, a couple of months ago came the rather dramatic announcement that clinical trial participation in the UK had "tripled over the last 6 years". That announcement came from the chief executive of the NIHR.
[Image caption: Sweet creature of bombast: is Sir John writing press releases for the NIHR?]
That immediately caught my attention. In large, global trials, most pharmaceutical companies I've worked with can do a reasonable job of predicting accrual levels in a given country. I like to think that if participation rates in any given country had jumped that sharply, I'd have heard something.
(To give an example from a fairly typical study I worked on a few years ago: UK sites enrolled slightly below the global average, while the highest-enrolling countries were about 2.5 times as fast. A 3-fold increase in accruals would therefore have catapulted the UK from below average to the fastest-enrolling country in the world.)
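To make that back-of-the-envelope arithmetic explicit, here is a minimal sketch in Python. The UK figure of 0.9 is my assumption standing in for "slightly below the global average"; only the 2.5x multiple comes from the example study described above.

```python
# Accrual rates, normalized so the global average = 1.0.
uk_rate = 0.9       # assumed stand-in for "slightly below the global average"
fastest_rate = 2.5  # highest-enrolling countries, per the example study

tripled_uk_rate = 3 * uk_rate  # 2.7 -- what a true 3-fold jump would imply

# A genuine tripling would have pushed the UK past the world's fastest enrollers.
print(tripled_uk_rate > fastest_rate)  # True
```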
Further inquiry, however, failed to turn up any evidence that the reported tripling actually corresponded to more human beings enrolled in clinical trials. Instead, there is some reason to believe that all we witnessed was increased reporting of trial participation numbers.
Now we have a new source of wonder, and a new giant multiplier coming out of the UK. As the Director of the NIHR's Mental Health Research Network, Til Wykes, put it in her blog coverage of her own paper:
Our research on the largest database of UK mental health studies shows that involving just one or two patients in the study team means studies are 4 times more likely to recruit successfully.
Again, amazing! And not just a tripling – a quadrupling!
Understand: I spend a lot of my time trying to convince study teams to take a more patient-focused approach to clinical trial design and execution. I desperately want to believe this study, and I would love to have hard evidence to bring to my clients.
At first glance, the data set seems robust. From the King's College press release:
Published in the British Journal of Psychiatry, the researchers analysed 374 studies registered with the Mental Health Research Network (MHRN).
Studies which included collaboration with service users in designing or running the trial were 1.63 times more likely to recruit to target than studies which only consulted service users. Studies which involved more partnerships - a higher level of Patient and Public Involvement (PPI) - were 4.12 times more likely to recruit to target.
But here the first crack appears. It's clear from the paper that the analysis of recruitment success was not based on 374 studies, but rather on a much smaller subset of 124 studies. That's not mentioned in either of the above-linked articles.
At this point, we have to stop, set aside our enthusiasm, and read the full paper. And once we do, critical doubts begin to spring up pretty much everywhere.
First and foremost: I don’t know any nice way to say this, but the "4 times more likely" line is, quite clearly, a fiction. What is reported in the paper is a 4.12 odds ratio between "low involvement" studies and "high involvement" studies (more on those terms in just a bit). Odds ratios are often used in reporting differences between groups, but they are unequivocally not the same as "times more likely than".
This is not a technical statistical quibble. The authors unfortunately don’t provide the actual success rates for different kinds of studies, but here is a quick example that, given other data they present, is probably reasonably close:
- A studies: 16 successful out of 20
  - Probability of success: 80%
  - Odds of success: 4 to 1
- B studies: 40 successful out of 80
  - Probability of success: 50%
  - Odds of success: 1 to 1
From the above, it’s reasonable to conclude that A studies are 60% more likely to be successful than B studies (the A studies are 1.6 times as likely to succeed). However, the odds ratio is 4.0, similar to the difference in the paper. It makes no sense to say that A studies are 4 times more likely to succeed than B studies.
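To make the distinction concrete, here is a minimal sketch in Python, using the hypothetical counts above (not the paper's actual data):

```python
# Hypothetical counts from the example above -- not the paper's data.
a_success, a_total = 16, 20   # A studies: 16 of 20 recruited to target
b_success, b_total = 40, 80   # B studies: 40 of 80 recruited to target

# Probability (risk) of success in each group.
p_a = a_success / a_total     # 0.80
p_b = b_success / b_total     # 0.50

# Odds of success in each group: successes divided by failures.
odds_a = a_success / (a_total - a_success)   # 16 / 4 = 4.0
odds_b = b_success / (b_total - b_success)   # 40 / 40 = 1.0

odds_ratio = odds_a / odds_b   # 4.0 -- the kind of figure the paper reports
relative_risk = p_a / p_b      # 1.6 -- what "times more likely" actually measures

print(f"odds ratio = {odds_ratio:.2f}, relative risk = {relative_risk:.2f}")
```

The two measures converge only when the outcome is rare; recruiting to target is common enough here that the odds ratio badly overstates the difference.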
This is elementary stuff. I’m confident that everyone involved in the conduct and analysis of the MHRN paper knows this already. So why would Dr Wykes write this? I don’t know; it's baffling. Maybe someone with more knowledge of the politics of British medicine can enlighten me.
If a pharmaceutical company had promoted a drug with this math, the warning letters and fines would be flying in the door fast. And rightly so. But if a government leader says it, it just gets recycled verbatim.
The other part of Dr Wykes's statement is almost equally confusing. She claims that the enrollment benefit occurs when "involving just one or two patients in the study team". However, involving one or two patients would seem to correspond to either the lowest ("patient consultation") or the middle level of reported patient involvement (“researcher initiated collaboration”). In fact, the "high involvement" categories that are supposed to be associated with enrollment success are studies that were either fully designed by patients, or were initiated by patients and researchers equally. So, if there is truly a causal relationship at work here, improving enrollment would not be merely a function of adding a patient or two to the conversation.
There are a number of other frustrating aspects of this study as well:
- It doesn't actually measure patient involvement in any specific research program, but relies on just 3 broad categories (which the researchers specified at the beginning of each study).
- It uses an arbitrary and undocumented 17-point scale to measure "study complexity", collapsing many critical factors into a single number and quite likely underweighting some of them.
- The enrollment analysis excluded 11 studies because they could not be rated on a factor that was later deemed non-significant.
- Most frustrating of all, the authors share absolutely no descriptive data about the studies included in the enrollment analysis, making it impossible to replicate their methods or verify their analysis.
Do the authors believe that "Public Involvement" is only good when it's not focused on their own work?
However, my feelings about the study and paper are an insignificant fraction of the frustration I feel about the public portrayal of the data by people who should clearly know better. After all, limited evidence is still evidence, and every study can add something to our knowledge. But the public misrepresentation of the evidence by leaders in the area can only do us harm: it has the potential to actively distort research priorities and funding.
Why This Matters
We all seem to agree that research is too slow. Low clinical trial enrollment wastes time, money, and the health of patients who need better treatment options.
However, what's also clear is that we lack reliable evidence on which activities actually accelerate the pace of enrollment without sacrificing quality. If we are serious about improving clinical trial accrual, we owe it to our patients to demand robust evidence for what works and what doesn't. Relying on weak evidence that we've already solved the problem ("we've tripled enrollment!") or have a method to magically solve it ("PPI quadrupled enrollment!") will divert significant time, energy, and human health into areas that are politically favored but less than certain to produce benefit. And the overhyping of those results by research leadership compounds the problem substantially. NIHR leadership should reconsider its approach to public discussion of its research, and practice what it preaches: critical assessment of the data.
[Update Sept. 20: The authors of the study have posted a lengthy comment below. My follow-up is here.]
[Image via Flickr user Elliot Brown.]