I work on large multi-center clinical trials as a machine learning engineer. One of my projects involves the semi-automation of the detection of fraudulent data.
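To give a flavor of what "semi-automation" means here: one classic screen is a terminal-digit (digit-preference) test, since honestly measured continuous readings tend to have roughly uniform last digits, while invented or heaped data often doesn't. This is an illustrative sketch of that idea, not my actual tooling, and the example readings are made up:

```python
from collections import Counter

def terminal_digit_chi2(readings):
    """Pearson chi-square statistic for uniformity of terminal digits.

    Under honest measurement, the last digit of many integer-recorded
    readings (e.g. blood pressure in mmHg) is roughly uniform on 0-9.
    Strong heaping on 0 and 5 inflates the statistic. This is a screen
    that flags sites for manual review, not proof of fraud.
    """
    counts = Counter(r % 10 for r in readings)
    n = len(readings)
    expected = n / 10  # uniform expectation for each of the 10 digits
    return sum((counts.get(d, 0) - expected) ** 2 / expected
               for d in range(10))

# Hypothetical heaped data: every reading ends in 0 or 5
heaped = [120, 125, 130, 135, 140] * 20
print(round(terminal_digit_chi2(heaped), 1))  # 420.0, far above the ~16.9 critical value (9 df, p=0.05)
```

In practice you'd run this per site and per variable, and a high statistic just earns the site a closer look by a human reviewer.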
There's one missing link in the chain that some people here seem to be ignoring. The authors of this post (while entirely correct) draw no link between "bad data" (which is doubtless responsible for a large number of "bad papers"/"bad trials") and "bad clinical practice."
I don't know a single clinician who would base their care on the findings of a single-center RCT of the kind described in this article. Or the findings of a meta-analysis of single-center RCTs, for that matter.
Bad data happens in multi-center RCTs too -- in fact, that's what I'm focused on -- but a lot of work (and therefore $, for the cynical) already goes into data validation (see [1] for a brief description). Phase III clinical trials in the West practically require a robust multi-center RCT, where systematic fraud is very difficult to pull off (but not impossible [2]). By the time a Phase III trial is conducted, the drug's efficacy can already be estimated, and the focus of the drug company (which, yes, often funds these trials) is to conduct a trial that is unimpeachable in the face of a regulatory board (whose members are generally good at their jobs, although the revolving door tends to reduce public trust and should be legislated away).
In short, I support most of the proposed changes to incentives around publish-or-perish. I reject the notion that these incentives are (currently) significant drivers of decreased quality of standard of care in the West. I think global governance structures, as suggested in this article, could improve understanding among both clinicians who are not necessarily scientists and the general public about just how validated a given standard of care is.
tl;dr Most good evidence-based practitioners already think this way -- not because they inherently believe fraud is rampant, necessarily, but because evidence says the kinds of studies where fraud is most prevalent are untrustworthy for other reasons.
Some segments of HN harbor strong anti-scientist sentiment (even while proclaiming to be pro-science), assuming we are all crooked, stupid, or both -- hence your insightful and reasonable comment being downvoted.
Is there fraud? Sure. Is there a lot of fraud happening in American science? I don't think so. To quote the article:
"Many of the trials came from the same countries (Egypt, China, India, Iran, Japan, South Korea, and Turkey), and when John Ioannidis, a professor at Stanford University, examined individual patient data from trials submitted from those countries to Anaesthesia during a year he found that many were false: 100% (7/7) in Egypt; 75% (3/ 4) in Iran; 54% (7/13) in India; 46% (22/48) in China; 40% (2/5) in Turkey; 25% (5/20) in South Korea; and 18% (2/11) in Japan. Most of the trials were zombies. Ioannidis concluded that there are hundreds of thousands of zombie trials published from those countries alone. "
I think institutional incentives matter a lot, as does the reasonably lucrative prospect of a career outside academia if things don't work out. That is perhaps why we see such stark regional differences.
No one I know has committed fraud in their research. I've seen mistakes in their code, but that is another story.
[1] doi:10.1177/1740774512447898
[2] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4340084/