
Image: Adam Feuerstein: This time, too much froth, not enough coffee?
Timing and Completeness of Trial Results Posted at ClinicalTrials.gov and Published in Journals, by Carolina Riveros, Agnes Dechartres, Elodie Perrodeau, Romana Haneef, Isabelle Boutron, and Philippe Ravaud
The articles identified through the search had to match the corresponding trial in terms of the information registered at ClinicalTrials.gov (i.e., same objective, same sample size, same primary outcome, same location, same responsible party, same trial phase, and same sponsor) and had to present results for the primary outcome.

So it appears that a reviewer had to score the journal article as an exact match on all 8 criteria in order for the trial to be considered the same. That could easily lead to exclusion of journal articles on the basis of very insubstantial differences. The authors provide no detail on this; and again, that would be easy to verify if the study dataset were published.
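To make the concern concrete, here is a minimal sketch of what an all-or-nothing match on those criteria looks like (the field names and values are hypothetical, my own illustration rather than the authors' actual matching procedure): a trivial discrepancy in any single field is enough to exclude an article.

```python
# Hypothetical illustration of the strict matching rule (not the study's actual code).
MATCH_FIELDS = [
    "objective", "sample_size", "primary_outcome", "location",
    "responsible_party", "phase", "sponsor",
]

def article_matches_trial(registry_entry: dict, article: dict) -> bool:
    """All seven registered fields must match exactly, and the article must
    also report results for the primary outcome (the eighth criterion)."""
    exact_match = all(registry_entry[f] == article[f] for f in MATCH_FIELDS)
    return exact_match and article.get("reports_primary_outcome_results", False)

# A sample size of 250 in the registry vs. 248 in the paper (say, after
# post-randomization exclusions) fails the whole match.
registry = dict.fromkeys(MATCH_FIELDS, "x") | {"sample_size": 250}
paper = dict.fromkeys(MATCH_FIELDS, "x") | {
    "sample_size": 248,
    "reports_primary_outcome_results": True,
}
print(article_matches_trial(registry, paper))  # False
```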
[T]he [ClinicalTrials.gov] database was never meant to replace journal publications, which often contain longer descriptions of methods and results and are the basis for big reviews of research on a given drug.

I suppose that some journal articles have better methodology sections, although this is far from universally true (and, like this study here, these methods are often quite opaquely described and don't support replication). As for results, I don't believe that's the case. In this study, the opposite was true: ClinicalTrials.gov results were generally more complete than journal results. And I have no idea why the registry wouldn't surpass journals as a more reliable and complete source of information for "big reviews".
Genzyme’s correspondence with the FDA regarding pediatric plans and design of this study began in 2006 and included a face to face meeting with FDA in May 2009. Genzyme submitted 8 revisions of the pediatric study design based on feedback from FDA including that received in 4 General Advice Letters. The Advice Letter dated February 17, 2011 contained further recommendations on the study design, yet still required the final clinical study report by December 31, 2011.

This highlights one of PREA’s real problems: the requirements as specified in most drug approval letters are not specific enough to fully dictate the study protocol. Instead, there is a lot of back and forth between the sponsor and FDA, and it seems that FDA does not always fully account for its own contribution to delays in getting studies started.
On December 22, 2010, Genzyme submitted a revised pediatric development plan (Serial No. 212) which was intended to address FDA feedback and concerns that had been received to date. This submission included proposed protocol HECT05310. [...] At this time, Genzyme has not received feedback from the FDA on the protocol included in the December 22, 2010 submission.

If this is true, it appears extremely embarrassing for FDA. Have they really not provided feedback in over 2.5 years, and yet are still sending noncompliance letters to the sponsor? It will be very interesting to see an FDA response to this.
Recognizing that, due to circumstances beyond the company’s control, the pediatric assessment could not be completed by the due date, The Medicines Company notified FDA in September 2010, and sought an extension. At that time, it was FDA’s view that no extensions were available. Following the passage of FDASIA, which specifically authorizes deferral extensions, the company again sought a deferral extension in December 2012.

So, after hearing that they had to move forward in 2010, the company promptly waited 2 years to ask for another extension. During that time, the letter seems to imply that they did not try to move the study forward at all, preferring to roll the dice and wait for changing laws to help them get out from under the obligation.
Image: Pharma: breaking the law in broad daylight?
In trials for approval of new drugs or approval for a new indication, a certification [permitting delayed results reporting] should be posted within 1 year and should be publicly available.
If no results were posted at ClinicalTrials.gov, we determined whether the responsible party submitted a certification. In this case, we recorded the date of submission of the certification to ClinicalTrials.gov.
Image: Two words that make us mistrust Duke:
I would have thought that the two words “Anil Potti” are sufficient for convincing anyone that Duke University is a poor choice for a contractor whose task it is to confirm the integrity of scientific research.

(One wonders how far Marciniak is willing to take his guilt-by-association theme. Are the words “Cheng Yi Liang” sufficient to convince us that all FDA employees, including Marciniak, are poor choices for deciding matters relating to publicly traded companies? Should I not comment on government activities because I’m a resident of Illinois (my two words: “Rod Blagojevich”)?)
Image: When they're frothing at the mouth, even Atticus doesn't let them publish a review
A framework for benefit-risk decision-making that summarizes the relevant facts, uncertainties, and key areas of judgment, and clearly explains how these factors influence a regulatory decision, can greatly inform and clarify the regulatory discussion. Such a framework can provide transparency regarding the basis of conflicting recommendations made by different parties using the same information.

(Emphasis mine.)
In contrast to the prospective and highly planned studies of effectiveness, safety findings emerge from a wide range of sources, including spontaneous adverse event reports, epidemiology studies, meta-analyses of controlled trials, or in some cases from randomized, controlled trials. However, even controlled trials, where the evidence of an effect is generally most persuasive, can sometimes provide contradictory and inconsistent findings on safety as the analyses are in many cases not planned and often reflect multiple testing. A systematic approach that specifies the sources of evidence, the strength of each piece of evidence, and draws conclusions that explain how the uncertainty weighed on the decision, can lead to more explicit communication of regulatory decisions. We anticipate that this work will continue beyond FY 2013.

I hope that work will continue beyond 2013. Thoughtful, open discussions of real uncertainties are one of the most worthwhile goals FDA can aspire to, even if it means having to learn how to do so without letting the Marciniaks of the world scuttle the whole endeavor.
There have been, and continue to be, differences of opinion and scientific disputes, which is not uncommon within the agency, stemming from varied conclusions about the existing data, not only with Avandia, but with other FDA-regulated products.
At FDA, we actively encourage and welcome robust scientific debate on the complex matters we deal with — as such a transparent approach ensures the scientific input we need, enriches the discussions, and enhances our decision-making.

I agree, and hope she can pull it off.
Image: No stones, please.
Image: If it's a coin-toss conspiracy, it's the worst one in the history of conspiracies.
If I flipped a coin a hundred times, but then withheld the results from you from half of those tosses, I could make it look as if I had a coin that always came up heads. But that wouldn't mean that I had a two-headed coin; that would mean that I was a chancer, and you were an idiot for letting me get away with it. But this is exactly what we blindly tolerate in the whole of evidence-based medicine.

And in this recent op-ed column in the New York Times:
If I toss a coin, but hide the result every time it comes up tails, it looks as if I always throw heads. You wouldn't tolerate that if we were choosing who should go first in a game of pocket billiards, but in medicine, it’s accepted as the norm.

I can understand why he likes using this metaphor. It's a striking and concrete illustration of his claim that pharmaceutical companies are suppressing data from clinical trials in an effort to make ineffective drugs appear effective. It also dovetails elegantly, from a rhetorical standpoint, with his frequently repeated claim that "half of all trials go unpublished" (the reader is left to make the connection, but presumably it's all the tail-flip trials, with negative results, that aren't published).
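To put rough numbers behind the metaphor, here is a small simulation sketch of my own (not Goldacre's): flip a fair coin 100 times, "publish" only the heads, and the published record looks like a coin that never comes up tails.

```python
import random

random.seed(42)  # reproducible illustration

# Flip a fair coin 100 times.
tosses = [random.choice(["heads", "tails"]) for _ in range(100)]

# Selective reporting: only the heads get "published".
published = [t for t in tosses if t == "heads"]

print(f"All tosses:       {tosses.count('heads')} heads out of {len(tosses)}")
print(f"Published tosses: {published.count('heads')} heads out of {len(published)}")
# The published record is 100% heads even though the coin is fair --
# the same distortion that selective publication produces in the trial literature.
```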
I understand your point that the author is the greatest authority on their own work, but we require secondary sources.
Image: Report: 0% of decapitees could accurately recall their diagnosis
Image: 4 out of 5 non-doctors recommend starting with "regular strength", and titrating up from there... (Photo from inventedbyamother.com)
Information leaflets provide participants with a permanent written record about a clinical trial and its procedures and thus make an important contribution to the process of informing participants about placebos.

And from the PR materials furnished along with publication:
We believe the health changes associated with placebos should be better represented in the literature given to patients before they take part in a clinical trial.

There are two points that I think are important here – points that are sometimes missed, and very often badly blurred, even within the research community:
A few years back, I was working with a small biotech company as it was ramping up to begin its first-ever pivotal trial. One of the team leads had just produced a timeline for enrollment in the trial, which was being circulated for feedback. Seeing as the company had never conducted a trial of this size before, I was curious about how he had arrived at his estimate. My bigger clients had data from prior trials (both their own and their CROs’) to base such projections on; this company had nothing comparable.
He proudly shared with me the secret of his methodology: he had looked up some comparable studies on ClinicalTrials.gov, counted the number of listed sites, and then compared that to the sample size and start/end dates to arrive at an enrollment rate for each study. He’d then used the average of all those rates to determine how long his study would take to complete.
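For concreteness, here is a rough sketch of the calculation he described, reconstructed by me with made-up numbers (the field names and figures are illustrative, not real registry data):

```python
from datetime import date

# My reconstruction of the naive approach, with made-up example records.
# Each record mimics fields visible on a ClinicalTrials.gov study page.
comparable_studies = [
    {"enrolled": 300, "sites": 25, "start": date(2010, 1, 1), "end": date(2011, 7, 1)},
    {"enrolled": 450, "sites": 40, "start": date(2009, 6, 1), "end": date(2011, 6, 1)},
    {"enrolled": 220, "sites": 18, "start": date(2010, 3, 1), "end": date(2011, 3, 1)},
]

def patients_per_site_per_month(study: dict) -> float:
    """Naive enrollment rate: total enrolled / (sites * months the study was open)."""
    months = (study["end"] - study["start"]).days / 30.4
    return study["enrolled"] / (study["sites"] * months)

rates = [patients_per_site_per_month(s) for s in comparable_studies]
avg_rate = sum(rates) / len(rates)

# Projecting the new trial from the averaged rate -- the step that goes wrong,
# because listed sites, listed dates, and final enrollment don't line up this neatly.
planned_enrollment, planned_sites = 400, 30
print(f"Projected months to full enrollment: "
      f"{planned_enrollment / (planned_sites * avg_rate):.1f}")
```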
If you’ve ever used ClinicalTrials.gov in your work, you can immediately spot the multiple, fatal flaws in that line of reasoning. The data simply doesn't work like that. And to be fair, it wasn't designed to work like that: the registry is intended to provide public access to what research is being done, not to provide competitive intelligence on patient recruitment.
I’m therefore sympathetic to, but skeptical of, a recent article in PLoS Medicine, Disclosure of Investigators' Recruitment Performance in Multicenter Clinical Trials: A Further Step for Research Transparency, that proposes to make reporting of enrollment a mandatory part of the trial registry. The authors would like to see not only the actual number of randomized patients for each principal investigator, but also how that compares to their “recruitment target”.
The entire article is thought-provoking and worth a read. The authors’ main arguments in favor of mandatory recruitment reporting can be boiled down to:
Image: Philip Johnson's Glass House from Staib via Wikimedia Commons.