
This sounds inseparable from the replication crisis. The incentives are clearly broken: they are not structured in a manner that achieves the goal of research, which is to expand the scope and quality of human knowledge. To solve the crisis, we must change the incentives.

Does anyone have ideas on how that may be achieved - what a correct incentive structure for research might look like?



Ex-biochemist here, turned political technologist, who's spent a few years engaged in electoral reform and governance conversations.

> the goal of research, which is to expand the scope and quality of human knowledge.

But are we so certain this is what ever drove science? Before we dive into twiddling knobs on the presumption that we understand some foundational motivation, it's worth asking. Sometimes the stories we tell are not the stories that drive the underlying machinery.

For example, we have a lot of wishy-washy "folk theories" of how democracy works, but actual political scientists know that most of the mechanisms people think drive democracy are just a bullshit story. According to some, the function of these common-belief fabrications may even be that their falsely simple narrative stabilizes democracy itself in the mind of the everyman, because seemingly simple things feel trustworthy. So it's an important falsehood to have in the meme pool. But the real forces that make democracy work are either (a) quite complex and obscure, or (b) as yet inconclusive. [1]

I wonder if science has some similar vibes: folk theory vs. what actually drives it. Maybe the folk theory is "expand human knowledge", but the true machinery is and always has been a complex concoction of human ego, corruption and the fancies of the wealthy, topped with an icing of natural human curiosity.

[1] https://www.amazon.ca/Democracy-Realists-Elections-Responsiv...


> I wonder if science has some similar vibes: folk theory vs. what actually drives it. Maybe the folk theory is "expand human knowledge", but the true machinery is and always has been a complex concoction of human ego, corruption and the fancies of the wealthy, topped with an icing of natural human curiosity.

The Structure of Scientific Revolutions by Thomas Kuhn is an excellent read on this topic - dense, but considered one of the most important works in the philosophy of science. It popularized Planck's Principle, often paraphrased as "Science progresses one funeral at a time." As you note, the true machinery is a very complicated mix of human factors and actual science.



Modern real science is driven by engineering, which is driven by industry, which is driven by profit and nature. If you are reading a paper that isn't driven by that chain of incentives, then the bullshit probability shoots way up. If someone somewhere isn't reading your paper to make a widget that is sold to someone to do something useful, then you can say whatever you want.


I've thought about it a lot, and I don't think it can be achieved.

The trouble is that for the evaluators (all the institutions that can be sources of an incentive structure) it's impossible to distinguish an unpublished 90%-ready Nobel prize from unpublished 90%-ready bullshit. If you've been working for 4 years on minor, incremental work and published a bunch of papers, it's clear you've done something useful - not extraordinary, but not bad. But if you've been working on a breakthrough and haven't achieved it, then there's simply no data to judge. Are you one step from major success? Or is that one step impossible and will never be achieved? Perhaps all of it is a dead end? Perhaps you're just slacking off on a direction you know is a dead end, but it's the one thing you can do that brings you some money, so meh? Perhaps you're just crazy and it was indeed a worthless dead end? Perhaps everyone in the field thought you were just crazy and the direction worthless, but they're actually wrong?

Peter Higgs was a relevant case - IIRC he said in one interview that for quite some time "they" didn't know what to do with him, as he wasn't producing much, and the things he had done earlier were either useless or Nobel prize worthy, but it was impossible to tell for many years after the fact. How the heck can an objective incentive structure take that into account? It's a minefield.

IMHO any effective solution has to scale back on accountability and measurability, and to some extent just give some funding to people/teams with great potential and see what they do - with the expectation that it's OK if it doesn't turn out. Otherwise they're forced to pick only safe topics that are certain to succeed and equally certain not to achieve a breakthrough. I believe the European Research Foundation had a grant policy with similar principles, and I think that DARPA, at least originally, was like that.

But there's strong pressure in entirely the opposite direction from the key stakeholders holding the (usually government) purse strings: their interest lies more in avoiding bad PR from any project with seemingly wasted money, and that pushes toward these broken incentive structures and mediocrity.


I would go a step further and say that the value of specific scientific discoveries (even if no bullshit is involved) often cannot be evaluated until decades later. Moreover, I would argue that trying to measure scientific value is in fact an effort to quantify something unquantifiable.

At the same time, academics have increasingly been evaluated by metrics meant to show value for money. This has led to some schizophrenic incentive structures. Most professor-level academics spend probably around 30% of their time writing grants, evaluating grants and reporting on grants. Moreover, the evaluation criteria often demand that work be innovative, "high risk/high reward" and "breakthrough science", yet at the same time feasible (often you should even show preliminary work), which I would argue is a contradiction. This naturally leads to academics overselling their results - even more so because you are also supposed to show impact.

The main reason for all this, IMO, is reduced funding for academic research, particularly relative to the number of academics around. Everyone is competing for a small pot, which makes those who play to the (broken) incentives the most successful.


Well, perhaps we can learn from how the startup ecosystem works?

For commercial ventures, you also have the same issue of incremental progress vs big breakthroughs that don't look like much until they are ready.

As far as I can tell, the startup ecosystem works by having different investors (various angels, VCs, public markets, etc.), each with their own process for (attempting to) resolve this tension.

There's beauty in competition. And no taxpayer money is wasted here. (Yes, there are government grants for startups in many parts of the world, but that's a different issue from angels evaluating would-be companies.)


Startups are at an entirely different phase and have something research does not: feedback via market success. The USSR already demonstrated what happens when you try to run a process dependent on price signals without them - its dead-end attempts to use economic theory to calculate a globally fair price.

"You get what you measure" applies here. Now, if we had some Objective Useful Research Quality Score, it could replace the price signals. But then we wouldn't have the problem in the first place - just promote based on OURQS.


Let people promote with their own money, based on whatever subjective useful research quality score they feel like.


Startups have misaligned incentives in a monopoly-ruled world. "Build a thousand messenger variations to get acquired by Facebook" comes to mind. So economic thinking might be harmful here?


Your comments are mostly dead. I didn't see anything wrong with them in a cursory glance.


Why? If that's what society values, that's what society gets. Who are we to judge?

A 0.1% chance of building an app that's gonna be useful to hundreds of millions of people is better than what most career scientists manage.


> Does anyone have ideas on how that may be achieved - what a correct incentive structure for research might look like?

Perhaps start with removing taxpayer money from the system.

Stop throwing good money after bad.



