
I don’t think 50% or greater is a good threshold for declaring scientific fraud a major problem. As I’m sure you’re aware, “major” is relative to the context of the issue being discussed. In the case of scientific publishing, I would argue that even a 5% fraud rate would be grounds for calling this a major problem.

That said, this issue has received a lot of attention in recent years. Some articles try to put a positive spin on it, but the stark truth is that today there are no enforced standards requiring data publishing (“available on request” doesn’t cut it) or reproducibility. This, combined with the amount of money in this industry and the impact it has on public policy, is an unacceptable situation.

Some good coverage:

https://www.experimental-history.com/p/the-rise-and-fall-of-...

(The post above contains lots of links worth following.)

https://retractionwatch.com/

https://www.science.org/content/article/what-massive-databas...

(This one puts a positive spin on the editorial process, but the fact remains that a massive flood of fraudulent papers is being published. Returning to standards of open data plus verified reproducibility would go a long way toward mitigating this problem.)
