The authors
of the study I blogged about on Monday were kind enough to post a lengthy
comment, responding in part to some of the issues I raised. I thought their
response was interesting, and so reprint it in its entirety below, interjecting
my own reactions as well.
There were a number of points you
made in your blog, and the title of “questionable maths” was what caught our
eye, so we reply with facts and provide context.
Firstly, this is a UK study where
the vast majority of UK clinical trials take place in the NHS. It is about
patient involvement in mental health studies - an area where recruitment is
difficult because of stigma and discrimination.
I agree, in
hindsight, that I should have titled the piece “questionable maths” rather than
my Americanized “questionable math”. Otherwise, I think this is fine, although
I’m not sure that anything here differs from my post.
1. Tripling of studies - You
dispute NIHR figures recorded on a national database and support your claim
with a lone anecdote - hardly data that provides confidence. The reason we can
improve recruitment is that NIHR has a Clinical Research Network which provides
extra staff, within the NHS, to support high quality clinical studies and has
improved recruitment success.
To be
clear, I did not “dispute” the figures so much as I expressed sincere doubt that those
figures correspond with an actual increase in actual patients consenting to
participate in actual UK studies. The anecdote explains why I am skeptical – it's
a bit like I've been told there was a magnitude 8 earthquake in Chicago, but neither
I nor any of my neighbors felt anything. There are many reasons why reported numbers
can increase in the absence of an actual increase. It’s worth noting that my lack
of confidence in the NIHR's claims appears to be shared by the 2 UK-based
experts quoted by Applied Clinical Trials in the article I linked to.
2. Large database: We have the
largest database of detailed study information and patient involvement data - I
have trawled the world for a bigger one, and NIMH say there certainly isn't one
in the USA. This means there are few places where patient impact can actually be
measured.
3. Number of studies: The database
has 374 studies which showed, among other results, that service user involvement
increased over time, probably following changes by funders - e.g. NIHR requests
information in the grant proposal on how service users have been and will be
involved - one of the few national funders to take this issue seriously.
As far as I
can tell, neither of these points is in dispute.
4. Analysis of patient involvement
involves the 124 studies that have completed. You cannot analyse recruitment
success until then.
I agree you
cannot analyze recruitment success in studies that have not yet completed. My
objection is that in both the KCL press release and the NIHR-authored Guardian
article, the only number mentioned is 374, and references to the recruitment
success findings came immediately after references to that number. For example:
Published in the British Journal of Psychiatry, the researchers analysed 374
studies registered with the Mental Health Research Network (MHRN).
Studies which included collaboration
with service users in designing or running the trial were 1.63 times more
likely to recruit to target than studies which only consulted service
users. Studies which involved more
partnerships - a higher level of Patient and Public Involvement (PPI) - were
4.12 times more likely to recruit to target.
The above
quote clearly implies that the recruitment conclusions were based on an
analysis of 374 studies – a sample 3 times larger than the sample actually
used. I find this disheartening.
The complexity measure was
developed following a Delphi exercise with clinicians, clinical academics and
study delivery staff to include variables likely to be barriers to recruitment.
It predicts delivery difficulty (meeting recruitment & delivery staff
time). But of course you know all that as it was in the paper.
Yes, I did
know this, and yes, I know it because it was in the paper. In fact, that’s all I know about this measure, which is
what led me to characterize it as “arbitrary and undocumented”. To believe that
all aspects of protocol complexity that might negatively affect enrollment have
been adequately captured and weighted in a single 17-point scale requires a leap
of faith that I am not, at the moment, able to make. The extraordinary claim
that all complexity issues have been accounted for in this model requires
extraordinary evidence, and “we conducted a Delphi exercise” does not suffice.
6. All studies funded by NIHR
partners were included - we excluded only studies funded without peer review,
i.e. not won competitively. For the involvement analysis we excluded industry
studies, because we were not able to contact end users, and studies where
inclusion compromised our analysis reliability due to small group sizes.
It’s only
that last bit I was concerned about - specifically, the 11 studies that were
excluded due to being in “clinical groups” that were too small, despite the
fact that “clinical groups” appear to have been dropped as non-significant
from the final model of recruitment success.
(Also: am I
being whooshed here? In a discussion of “questionable math”, the authors' enumeration jumps from 4 to 6. I’m going to take the miscounting as a sly
attempt to see if I’m paying attention...)
I am sure you are aware of the high
standing of the journal and its robust peer review. We understand that our
results must withstand the scrutiny of other scientists but many of your
comments were unwarranted. This is the first study in the world to investigate
patient involvement impact. No other database apart from the one held by the
NIHR Mental Health Research Network is available to test - we only wish there
were.
I hope we
can agree that peer review – no matter how "high standing" the journal – is not
a shield against concern and criticism. Despite the length of your response, I’m
still at a loss as to which of my comments specifically were unwarranted.
In fact, I
feel that I noted very clearly that my concerns about the study’s limitations
were minuscule compared to my concerns about the extremely inaccurate way that
the study has been publicized by the authors, KCL, and the NIHR. Even if I
conceded every possible criticism of the study itself, there remains the fact
that in public statements, you:
- Misstated an odds ratio of 4 as “4 times more likely to”
- Overstated the recruitment success findings as being based on a sample 3 times larger than it actually was
- Re-interpreted, without reservation, a statistical association as a causal relationship
- Misstated the difference between the patient involvement categories as being a matter of merely “involving just one or two patients in the study team”
To use the
analogy from my previous post: if a pharmaceutical company had committed these acts
in public statements about a new drug, public criticism would have been loud
and swift.
Your comment on the media coverage
of odds ratios is an issue that scientists need to overcome (there is even a
section in Wikipedia).
It's highly
unfair to blame "media coverage" for the use of an odds ratio as if
it were a relative risk ratio. In fact, the first instance of "4 times
more likely" appears in Dr Wykes's own blog post. It's repeated in the KCL
press release, so you yourselves appear to have been the source of the error.
You point out the base rate issue,
but of course in a logistic regression you also take into account all the other
variables that may impinge on the outcome prior to assessing the effects of our
key variable, patient involvement - as we did - and showed that the odds ratio
is 4.12, so no dispute about that. We have followed up our analysis to produce
a statement that the public will understand, using the following equations:
Model-predicted recruitment at the lowest level of involvement:
exp(2.489 - 0.193*8.8 - 1.477) / (1 + exp(2.489 - 0.193*8.8 - 1.477)) = 0.33
Model-predicted recruitment at the highest level of involvement:
exp(2.489 - 0.193*8.8 - 1.477 + 1.415) / (1 + exp(2.489 - 0.193*8.8 - 1.477 + 1.415)) = 0.67
For a study of typical complexity
without a follow-up, increasing involvement from the lowest to the highest
level increased predicted recruitment from 33% to 67% - i.e. a doubling.
So then,
you agree that your prior use of “4 times more likely” was not true? Would you
be willing to concede that in more or less direct English?
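To make the arithmetic concrete, here is a minimal sketch in Python (my choice of language, not the authors'; the function and variable names are mine, purely illustrative), using only the coefficients from the authors' own equations above. It shows that the reported odds ratio of 4.12 corresponds, at this base rate, to roughly a doubling of the predicted recruitment probability - not a quadrupling.

```python
import math

def predicted_probability(eta):
    """Inverse logit: convert a linear predictor to a probability."""
    return math.exp(eta) / (1 + math.exp(eta))

# Linear predictor for a study of typical complexity (8.8) with no
# follow-up, at the lowest level of patient involvement; the terms are
# taken directly from the authors' equations above.
eta_low = 2.489 - 0.193 * 8.8 - 1.477

# The highest level of involvement adds the involvement coefficient,
# 1.415 (note that exp(1.415) is ~4.12, the reported odds ratio).
eta_high = eta_low + 1.415

p_low = predicted_probability(eta_low)    # ~0.33
p_high = predicted_probability(eta_high)  # ~0.67

odds_ratio = (p_high / (1 - p_high)) / (p_low / (1 - p_low))
relative_risk = p_high / p_low

print(f"p(lowest) = {p_low:.2f}, p(highest) = {p_high:.2f}")
print(f"odds ratio    = {odds_ratio:.2f}")    # ~4.12
print(f"relative risk = {relative_risk:.2f}") # ~2.0: a doubling, not "4 times more likely"
```

Nothing in this sketch goes beyond the authors' own numbers; it simply makes explicit that, at this base rate, "an odds ratio of 4.12" and "about twice as likely to recruit to target" describe the same result - and "4 times more likely" does not.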
This is important and is the first
time that impact has been shown for patient involvement on study success.
Luckily in the UK we have a network
that now supports clinicians to be involved and a system for ensuring study
feasibility.
The addition of patient involvement
is the additional bonus that allows recruitment to increase over time, and so
cuts down the time for treatments to get to patients.
No, and no
again. This study shows an association in a model. The gap between that and a
causal relationship is far too vast to gloss over in this manner.
In summary, I thank the authors for taking the time to respond, but I feel they've overreacted to my concerns about the study, and seriously underreacted to my more important concerns about their public overhyping of the study.
I believe this study provides useful, though limited, data about the potential relationship between patient engagement and enrollment success. On the other hand, I believe the public positioning of the study by its authors and their institutions has been exaggerated and distorted in clearly unacceptable ways. I would ask the authors to seriously consider issuing public corrections on the 4 points listed above.