When you want to do proper work, your grants and papers get rejected because they are not innovative enough or don't go far enough. So it is no surprise that people who lied in their applications about what they could realistically do also lie when it comes to reporting results. Unfortunately there is no way out. I stopped counting how many reviewers of my grants disagreed on what was proposed, one saying it was not innovative, the other saying it was too risky to use this approach. We have a big problem in science: peer review is broken and everything relies on it. And many reviewers are way out of touch with what happens in their field; I see reviews that clearly show the reviewer has been asleep for the last 10 years.
Furthermore, universities tend to require tons of publications to promote you. Things are spinning out of control. I know a few EU countries where the written norm is to need > 100 publications to qualify for a full professorship, with equally ridiculous requirements for associate and assistant positions.
Obviously, this encourages and rewards completely broken practices. Many associate and full professors in my area only care about stamping their names into as many journal articles as humanly possible. Some of them are already beyond 500, with many of these in top tier journals (Nature, Science, Cell, NEJM). Obviously, they hardly ever contribute anything. Their serfs do all the work. Their job is basically to plot in order to stay on top of their neofeudal shire.
In addition to this, funding bodies do nothing after fraud has been proven. ERC only terminates grants on rare occasions. https://forbetterscience.com/ discusses many cases of serial fraudsters who keep getting funded despite having retracted 10 or 15 articles in major journals.
One of the clearest examples of the publishing problem, to me, was the shift in the meaning of last authorship over the course of my career. When I first started, last author meant the person who had contributed the least to the paper (in cases where the ordering of effort can actually be determined -- often contributions are genuinely equal). Often this was the senior faculty member, as they did little but read over the paper or perhaps supervise someone who was already functioning independently.
Over time, though, last author came to mean "the more senior person" and then "the person whose idea it really is". So being last went from something no one wanted to something people would argue over. In the more manipulative cases, people would casually say "oh, I can be last author", realizing the gains from that position.
It seems when a more junior person is doing all the work and is first author, an unscrupulous senior researcher will claim that "it's the idea that counts"; when that senior researcher is first author, it's "ideas are a dime a dozen, it's getting it done that matters."
> Many associate and full professors in my area only care about stamping their names into as many journal articles as humanly possible. [...] Obviously, they hardly ever contribute anything. Their serfs do all the work.
This describes my lab's head perfectly. At first I found it strange that he was so angry about a side-project paper I wrote alone, quickly, in my free time, and asked to publish at a conference. Then I understood why: in his view, every minute I spend on my projects is one I don't spend on his projects. The guy approved my first journal paper submission, which had his name on it, without even reading it. That was obvious from the lack of comments, and from the fact that a few days later, during a lab meeting, he asked to change half the content of the paper...
I'm not against adding the names of people who contributed to the research, even slightly and informally, but at this point this is pure leeching and exploitation. Then he wonders why my thesis isn't progressing (hint: when I had no say in the topic, method, or experimental setup, I'm not really motivated to work on it).
Yep. The incentives in science are all wrong. To maximize your chances of publication (i.e. keeping your job), you have to make the most outlandish claims you can plausibly defend. Additionally, the complexity of data and analysis is increasing every day, while the esoteric domain knowledge required to make any progress grows deeper and more specialized.
Not enough people realize that science and academia are just as prone to organizational politics and corruption as everything else. Peer-reviewed studies are great, but just because something was published doesn't mean it represents "The Truth". And sadly, being skeptical of studies makes you appear less credible in arguments.
When I was an economics RA, literally half of the econ professors didn't work Fridays and barely worked summers. It was incredible: you get paid $150k with that kind of schedule.
Interesting. In my experience professors might not be on campus teaching one or two days a week or in summer, but that's only because they are working their ass off from home, writing grant proposals, reviewing papers, doing basically anything to get funding, and trying to find time to manage their own research.
I used to think it was a great gig too, since most professors had one or more small businesses on the side. Then I realized they have those businesses and consulting companies because that means they can also apply for small business grants (which they use to subcontract the research out to the university) in addition to the normal academic research grants. If you also count teaching, then that means those professors are working three jobs for one salary.
I made the decision that I'd rather make 50% more working in industry doing easier (if boring) work.
I'd guess that's pretty field dependent. What you're saying matches my experience with biology profs - technically once they get tenure they could chill out, but then they wouldn't have any funding for research anymore, so they wouldn't be able to do much of anything in their field.
In CS I saw more of a mix though. It's feasible to fund a small research group without busting your ass, and it also seemed to me that putting time into coursework, writing books, etc. was culturally a more acceptable use of time in that department than it was in biology.
I knew a few CS PIs that actually purposely scaled back their research once they got tenure because they were more excited about teaching and some of the educational initiatives the school was working on. That's not the norm of course, but I literally can't imagine that ever happening in a bio department lol.
Agreed, you can definitely work from home, but you're being quite charitable -- "let's assume they're behind closed doors with no accountability and doing everything we expect".
This is basically how we ended up in the replication crisis to begin with. There's little to no accountability and people assume they're working hard and being honest.
5% of all degree-granting universities are R1 or R2 research universities, e.g. Harvard, Stanford, etc. The vast majority of professors aren't obligated to conduct research or apply for grants.
I agree there needs to be more accountability, but IME some of the worst offenders with bad studies were the people putting the most hours in the lab. So I wouldn't conflate hard work and honesty here - most of the dishonesty isn't about avoiding work, it's about misrepresenting negative results, which everyone gets. The people most obsessed with their academic status are the ones to be wary of.
That’s surprising to hear. All the professors I’ve known (a bunch, mostly in Math and Engineering), while they all have their problems at some level, are very hard working. They definitely work every work day, and often much more.
Worse yet, it compounds. The people approving grants, seeing all these amazing results promised, will then raise the bar for what kind of results you're promising. Which means the next batch of promises will need to be that much more extreme to get approved. It's a race to the top... or the bottom, depending on your point of view.
I'm sadly amused by all this. The complaint I hear about privately done research is it's all tainted by the profit motive, and so research should be funded by the government, as then it'll be pure and untainted by selfish motives.
Of course, government funded research is just as tainted by selfish motives, if not more so. Even worse, the people who make the funding decisions aren't spending their own money, so they have little reason to care.
At least with privately funded research, the people providing the money aren't going to fund bullshit fake research. This is why market systems work better than government systems.
I think there are two separate versions of "private research" that people below are responding to. In one, a company has a problem and pays researchers to work on it. The key metric is solving the problem or making progress on it, depending on the time scale -- good orgs invest across different scales (usually from 3 months to 5 years at most). In this case there is little room for fraud or deception, though it grows with the time scale because of how you can frame early results. (I do applied research for companies, and they want, and will only pay for, something they can use to improve their business. A lot of my time is actually spent making a clear connection between how research findings will move the needle on business objectives.) I think this is the kind of research you, the parent, are referring to.
There is also "sponsored" research as others have pointed out, that is more of a bought study that a business hopes they can use for marketing. These have a big conflict.
I agree that government is probably the worst system in most cases. It's the same kind of "picking winners" that doesn't work in corporate funding. I'm from Canada, where our tech industry basically runs on subsidies, and very little escapes the bubble of chasing more government funding and actually becomes self-sustaining.
Personally, I have seen there is a legit appetite for corporate-funded research that advances the company's goals. As an academic, I would rather seek out companies for funding, knowing that I'm working on something that someone wants, and not trying to optimize for government priorities. I'm coming at this from a hard science perspective. I imagine the dynamics are very different for drug trials or other efficacy-type studies, which are maybe more relevant to this discussion.
Good points, but there's another wrinkle. If a company pays a research institution to do a fraudulent study, the research institution risks losing their status as a reputable research outfit, and thereby loses a multiple of that as other companies avoid funding them.
A prestigious reputation is like glass - easy to break, very hard to put back together.
You'd think this would work with government funding, too. But it appears it does not. It could be because one's "reputation" is based on how many papers are published and how many cites. This is like rating a programmer on how many lines of code written.
Cites are supposed to be more analogous to how many times a programmer's library has been used.
Unless gamed, it _is_ a useful measure.
But if your programming ability were measured by how much your libraries are used (e.g. in hiring, determining salary, seniority...), there would be every incentive to get your library used as much as possible, even when it is superfluous.
I think the timescale point is an important one. For long timescales (and how long counts as long has changed over the years), government might be more likely to invest. Imagine the project of sending people to the Moon: at the time, no private investor in the world could rival what the Soviet Union or the USA could dedicate to those projects. The price has since come down in a sense (or we've got richer individuals and companies), so you do see private investors in the field today, but there are always projects of similar scale that might never happen if business returns are too far off.
That's the same problem in disguise. The reason they don't do the "our product is great" research themselves is because if they did people would switch their brains on and properly evaluate it. They pay universities (i.e. government funded organizations) because of the false belief in our society that government funding means universities are neutral, trustworthy, competent research institutions, when in fact they are really quite corrupt and filled with easily bribed researchers who will publish basically anything if it means they get another paper or grant out of it.
If/when the perception of government funded researchers finally aligns with the reality, businesses would stop doing that because there'd be no reputational misalignment to exploit.
I'm talking about incentives here, and people do things almost entirely on selfish impulses. Money is a powerful motivator, and people are strongly motivated to not spend their own money on bullshit. That motivation is absent when government funds things - but other motivations remain.
Pulitzer Prizes have been awarded for work later shown to be a complete fraud. Those cases severely damaged the value of getting one. I know I don't attach any respect to Pulitzer Prizes.
> At least with privately funded research, the people providing the money aren't going to fund bullshit fake research
They absolutely are if it helps them promote something. The cigarette and asbestos industries helped produce plenty of fake safety studies.
The problem is that research has been marketized; you have to "sell" your proposal to get funding, so naturally you big it up as much as possible. And thus the incentive to fake results.
If you are personally funding Professor X to do some research say, on making a better LCD display, and Professor X comes up with nothing but personal aggrandizement passed off as "research", are you personally likely to fund him some more?
I seriously doubt it. Any more than you'd continue taking your car to an auto shop that took your money but didn't fix it.
It doesn't work like that. You send in a proposal. A group of people (usually never the same ones) review your work, and often half of them have never heard of you. They look at your story and your metrics (papers, patents...) and assign a bunch of scores (societal interest, innovation, approach, personnel...); then the jury convenes, discusses a bit, and members change their scores if they want. The head of the jury then takes all of that, combines it, and writes a report. In practice I've seen big abuses at every stage: friends helping other friends get grants; enemies destroying their enemies' grants; thieves tanking a grant and then developing the work themselves afterwards (that one we reported twice, for two separate events, to the funding agency, which just removed that person from jury panels, nothing else). In the end, no one cares if they failed to achieve what they promised, because nobody sees the grant (it's not public unless you submit a FOIA request) or remembers it 5 years later.
> governments are at minimum accountable to voters
Voting on how government spends money is in no conceivable way like you deciding how to spend your money.
> Private money is in no way accountable
It's accountable to the people who are providing the funds out of their own pockets. People do not like wasting their own money.
I bet you look at your own budget. You have to, otherwise you'll be in jail for bouncing checks and tax evasion. I also bet you've never looked at your city, county, state, or federal budget. It's other people's money, so who cares!
> I also bet you've never looked at your city, county, state or federal budget.
I have. Not in great detail, though. The problem is I can't really do anything about it. Even if I find something bad and by lucky chance get people to care (there are plenty of slow news nights), there are far more bad things in the budget than I can expose before people get tired of the corruption and give up listening. I try to elect politicians who will do something about it, with low success: people who benefit from any specific spending are more powerful than people who are just against waste in general. That is assuming I can get my person on the ballot in the first place (low odds), and that they don't realize once elected that reelection (read: power) comes from handing out pork to those who want some specific waste. There are more things that make it hard; I've just scratched the surface.
Pork is hard to figure out. Is spending money to repair something that isn't broken yet good spending or bad? I've seen perfectly good buildings get needless remodels, and I've seen perfectly good buildings suffer because they were never maintained. I've seen towns put in sewer systems they don't need, and other towns fail to put in a sewer system until it was an expensive emergency. Flint had 40 years to replace the lead pipes in their water system, or they could have invested in water-treatment chemicals that keep lead from leaching out of the pipes, for much less money even over 40 years (you can pick anywhere from 60 to 30 years ago as the date when the dangers of lead became known; 40 was my somewhat arbitrary pick).
> but governments are at minimum accountable to voters.
Governments are at a minimum accountable to the people willing to use force against the government if they are sufficiently displeased. They may also be accountable to voters qua voters, depending on whether they have voting at all and, if they do, what options are presented to voters and how fairly votes are counted, all of which are axes on which governments vary considerably, with many falling into ranges resulting in little or no accountability to voters.
The problem is not that people lie on their applications, but that these people are now being judged by people who lied on their own applications some time back. The lying has been institutionalized and leaves few resources for small but meaningful progress.
The "way out" is to have severe, lasting professional repercussions for those creating these fraudulent studies. If the most egregious offenders found it hard to get any job at any institution, and those with state licensure oversight (I'm thinking primarily of physicians) lost their license to practice, you would see instances of this dry up almost overnight.
> When you want to do proper work, your grants and papers get rejected because they are not innovative enough or don't go far enough.
Not being innovative enough isn't the root cause, though. The real issue is there isn't enough funding to go around, so the bar is higher than it needs to be. Available research funding in the US is a paltry sum considering the aggregate ROI of discoveries and technologies that originate in universities. Funding rates can be as low as 10-20%, with thousands of researchers competing for the same grants. They all need to paint a tortured story of how their idea will be the next big invention.
The problem with our system is that we put public money into research, which is then commercialized by corporations and sold to consumers, and corporations/universities end up capturing the profits. Those profits are then invested in ways that yield short-term returns instead of being reinvested in research.
Some of those profits are supposed to come back to the government and reinvested in research, but more and more corporations (and I consider universities to be a kind of corporation with the way they act like hedge funds that do education as a side hustle) are figuring out how to keep as much of those profits as possible, despite those profits only being made possible in the first place due to publicly funded research.
What if we increase funding into research? VCs are willing to pour millions into ridiculous or tenuous ideas because they know a single success will more than make up for the duds. Lower the stakes, make funding more available to researchers, and then maybe we won't need to squeeze every bit of "innovation" out of every research dollar. Make room for research that fails or yields a negative result. This is important work that is valuable and needs to be done, but there's no funding for it. We could double the amount of funding for e.g. the NSF and it would still be a drop in the federal government's proverbial bucket.
I get the sense from colleagues and from visiting different universities that this varies across the US, Canada, the UK, and the EU, but grants are now the bread and butter at most US universities. It's not really enough to publish hundreds of articles or have a high h-index; what matters is bringing in money, even if it's not strictly necessary for your research.
Part of the reason we have the problem you're mentioning is not that there isn't enough money to go around, it's that universities (at least in the US) now depend on inflated costs to function. The costs of research are kicked down the road to the federal government, and the research itself is seen in terms of profits rather than discovery. So if you have all these universities essentially telling researchers their jobs are on the line if they don't bring in profits, you're going to have everyone scrambling to bring in as much money as they can. It's not just postdocs or untenured research professor lines, it's tenured professors as well, whose income can be brought down below some necessary standard of living, or who can have salaries frozen or resources cut.
I was thinking about this the other morning. I had a grant proposal that the program officer was really excited about. This program of research could probably be conducted for almost nothing because it involved archival data analysis. If you put a dollar amount on the time, it might realistically cost around 250k USD, maybe 500k max, being pretty generous in terms of staff effort. Yet the university managed to inflate the budget ask to around 2 million, for the sole purpose of indirect funds.
When you have that kind of monetary incentive (carrot or stick), of course you're going to have thousands of people applying for each opportunity. It's what led to the graduate student ponzi scheme, the inflated numbers of surplus graduates, etc. and so forth.
It all trickles down too, in terms of research claims, p-hacking, etc and so forth.
There's a place for profit, but there's also some realms where it does nothing but corrupt.
The problem here is not profit but the reverse, the corruption comes from the absence of profit.
Universities and grants are this firehose of tax money being sprayed everywhere without even the slightest bit of accountability in how it's used. The government effectively "loses" all of it in accounting terms, but because it's tax it doesn't matter. The buyer is blind and doesn't even bother looking at the papers they've paid for, let alone caring about the quality.
Now go look at the results coming out of corporate labs when the corporates actually want to use the tech. You get amazing labs that are consistently re-defining the state of the art: Bell Labs, DeepMind, Google Research, FAIR, Intel, ARM, TSMC etc. The first thing that happens when the corporate labs get interested in an area is that universities are immediately emptied out because they refuse to pay competitive wages - partly because being non-profit driven entities they have no way to judge what any piece of research is actually worth.
> Universities and grants are this firehose of tax money being sprayed everywhere without even the slightest bit of accountability in how it's used.
This is definitely not true, recipients of grants are heavily restricted on what kind of things they can spend that money on. I can't even fly a non-domestic carrier using grant money without proving no other alternatives exist.
Do research projects sometimes fail to deliver? Yeah. But that's just the reality of doing research. The problem I see is people expect research to be closer to development, with specific ROIs and known deliverables years ahead of time. Sometimes in the course of research you realize what you said you were going to do is impossible, and that's a good result we need to embrace, instead of attaching an expected profit to everything.
> Bell Labs, DeepMind, Google Research
I don't know so much about all the labs you listed, but just taking these three, they certainly don't have a good feeling for what their research is worth either. Do you think Bell Labs fully comprehended the worth of the transistor? For all the research Google does, ad money still accounts for 80% of their revenue. DeepMind is a pretty ironic choice because Google has dropped billions into them and it's still not clear where the profit is going to come from. So it's not clear anyone, even those with a profit motive, have any way to judge what any piece of research is actually worth.
But that's not to say there's anything wrong with that... that's just how research works. You don't know how things are going to turn out, and sometimes it takes a very long time to figure that out. This is why massive corporations like AT&T, Intel, Google, Xerox, MS, etc. are able to run such labs.
> The first thing that happens when the corporate labs get interested in an area is that universities are immediately emptied out because they refuse to pay competitive wages
I've seen this happen first hand. In my experience these researchers usually go on to spend their time figuring out how to get us to click on more ads or engage with a platform more. In one instance, I remember one of my lab mates being hired out of his PhD to use his research to figure out which relative ordering and styling of ads on a front page optimized ad revenue for Google. They paid him quite a lot of money to do that, and I guess it made Google some profit. But is the world better off?
They are restricted in trivial ways that are easy for a bureaucracy to mechanically enforce, as is true of employees at every institution.
What I meant by accountability is deeper: people are not accountable for the quality or utility of their work, hence the endless tidal wave of corrupt and deceptive research that pours out of government funded 'science' every day. These researchers probably filled out their expenses paperwork correctly but the final resulting paper was an exercise in circular reasoning, or the data tables were made up, or it was P-hacked or whatever. And nobody in government cares or even notices, because nobody is held accountable for the quality of the outputs.
While it's true that DeepMind is not especially interested in profit and is just doing basic research, Google itself is an excellent example of how to seamlessly integrate fundamental research with actual application of that research. That's what profit-motivated research looks like: an endless stream of innovative tech being deployed into real products that are used by lots of people, without much drama.
We have come to take this feat so much for granted that you're actually asking whether someone working on ads is leaving the world better off. Yes, they are. Google ads are clicked on all the time because they are useful to people who are in the market to buy something. Those ads are at the center of an enormous and very high tech economic engine that powers staggering levels of wealth creation. If I understand correctly, a lot of academic papers are never cited by anyone; a researcher who optimises search ads by just 1% will have a positive impact on the world orders of magnitude greater than that.
> What I meant by accountability is deeper: people are not accountable for the quality or utility of their work, hence the endless tidal wave of corrupt and deceptive research that pours out of government funded 'science' every day. These researchers probably filled out their expenses paperwork correctly but the final resulting paper was an exercise in circular reasoning, or the data tables were made up, or it was P-hacked or whatever. And nobody in government cares or even notices, because nobody is held accountable for the quality of the outputs.
Have you ever received and administered a grant? I have to ask. You seem pretty certain about how it works, but it just doesn't match with my experience.
You say that there's no accountability in results and this leads to people committing fraud. In my experience, fraud happens when there is too high of an expectation that researchers can't meet. Let's say you get a $5 million grant to do X, and in the course of doing X you find out it's not possible. You have a negative result. First of all, good luck publishing a negative result. Without that publication, good luck getting the next grant.
High expectations for results incentivize fraud. There should be room for researchers to come up short with their research and still be able to progress in their careers. But when grants dry up, the publications dry up and then your career is derailed by failing to get tenure.
The fact is that not everyone can be researching world-changing technologies. That's just not a realistic expectation. Even Google can't do that, as much as you laud a profit motive (how many Google projects are in the trash right now?). But that's what we expect of everyone who gets grant money, precisely for the reason that people have an expectation that an immediate and tangible ROI must be demonstrated.
I don't know if you consider people at funding agencies like the NSF as part of the government, but they do notice when projects fall short, and they do deny 80% of grant applications (I would consider that accountability).
I haven't, but I'm unclear why it's relevant given that you don't seem to really be disagreeing!
The NSF denies 80% of grant applications because there is a radical oversupply of people who want to be scientists and the NSF has a finite budget. That by itself doesn't create accountability any more than the fact that lots of people want to be movie stars creates accountability for actors. That's not how accountability works.
Accountability means people being held to account for illegitimate acts. If it existed, it would look like this: we (the government) gave you money to deliver some genuine research, yet you delivered a paper that simply modelled your own beliefs, cited a retracted paper and another paper that actually disagrees with the claimed statement, used an input dataset too tiny to achieve statistical significance, and looks suspiciously p-hacked; you then misrepresented your own findings to the press, and by extension the government; and then, to top it all off, it doesn't replicate. We will therefore prosecute you for research fraud and failure to meet the terms of your contract.
What actually happens is this: nothing. Journals will happily publish papers with the problems I just listed, universities praise them, the 'scientists' who do this stuff proceed to get lots of citations and the government proceeds to award even more grant money because who are they to argue with citations.
As you admit, fraud is everywhere in science, supposedly due to "high expectations for results". But so what? Lots of people, non scientists included, have high expectations for results placed upon them. The right mechanisms and incentives to stop fraud are not simply having low expectations of scientists, that's absurd and wouldn't be seriously proposed as a solution in any other area of society. It'd be like saying the answer to fraudulent CEOs fiddling the books is to simply stop expecting them to turn a profit, or like the solution to shoplifting is to just stop expecting shoplifters to pay for things.
Expectations on scientists are already rock bottom: large chunks of the research literature don't even seem to replicate, other large chunks are not even replicable by design, and nobody seems to care. You can't get much lower expectations than "we don't even care if your claims replicate", and yet this ridiculous suggestion that the solution to fraud is to give fraudsters even more money keeps cropping up on HN and elsewhere. The solution to fraud is tighter contracts to ensure the rules are clear, and systematic prosecution of people who break them.
> I haven't, but I'm unclear why it's relevant given that you don't seem to really be disagreeing!
It's relevant because you are criticizing the process but you don't seem to understand how it actually works. Your original characterization was that grant money is "this firehose of tax money being sprayed everywhere without even the slightest bit of accountability in how it's used." The reality is, when I get grant money I need to account for how every dollar is spent, and if there are ever any questions about spending, I better have the receipts to back it up. The other reality is, I only get to spend a small fraction of a grant award, as the University takes most of it off the top, and my students take almost all the rest in the form of a tuition remission and a stipend, leaving whatever is left over for equipment and conference costs. Then there are strict conflict of interest regulations which come with their own reporting requirements. I don't even get all of the money at once; I'll get some up front and then I have to show significant midterm progress in order to get more. There's accountability at multiple layers by multiple organizations.
> The NSF denies 80% of grant applications because there is a radical oversupply of people who want to be scientists and the NSF has a finite budget.
It's accountability in the form of: if you didn't do what you promised you'd do, then you don't get any more money and your career is derailed. Isn't that what you want? Anyway, do we have an oversupply of scientists relative to the amount of science that needs to be done? I don't think so. The NSF budget is finite, but it's also embarrassingly minuscule given the upside of the research that has come out of NSF funded projects.
> If it existed it would look like this ... We will therefore prosecute you for research fraud and failure to meet the terms of your contract.
Fraud is one thing. I'm not going to say we shouldn't prosecute fraud. But treating a grant proposal like a contract with positive deliverables (no negative results allowed) is the exact problem. Research is not development. Research implies failure. You can't do one without the other. Failure to meet stated objectives shouldn't be met with prosecution. That just further incentivizes fraud.
If there's a replicability crisis, it just means we need to spend money on replication research. Researchers know no one is going to bother replicating their study because there is no grant money available for redoing someone else's research. Grant agencies don't pay for that kind of thing, and you can't make a career doing that kind of research. No one gets tenure this way. If we want studies to be replicated, we need to allocate money to replicate them, and we need to incentivize people to do so by making it a viable career for a Ph.D.
> Lots of people, non scientists included, have high expectations for results placed upon them. The right mechanisms and incentives to stop fraud are not simply having low expectations of scientists, that's absurd and wouldn't be seriously proposed as a solution in any other area of society.
I didn't say we should have low expectations, I said we should have realistic expectations, and yes, that does involve lowering expectations from where they are right now. Because the current expectation is this: you have to write a proposal that has a <20% chance of getting funding. If you can't get that funding your career is basically over, so you had better promise the world, because everyone else is. In this proposal you need to lay out a research plan for the next 3-5 years and convince the funding agency that your research is going to change the world as we know it. If within that time you fail to meet your stated objectives, you will find funding hard to come by, and your tenure will be threatened, meaning you will probably lose your job and have to move your family. On top of that you want to add potential federal prosecution to the stakes, thinking that will make things better.
> Expectations on scientists are already rock bottom ... The solution to fraud is tighter contracts to ensure the rules are clear, and systematic prosecutions of people who break them.
Okay, run with this idea: exactly what rules need to be clearer and exactly how do the contracts need to be tightened? Because there are already clear rules and tight contracts, yet the problem persists. Will clearer rules and tighter contracts fix it? How?
I'll tell you what I think will happen with this system: you'll chase out all of the public scientists because the stakes are too high. Already the pay is too good on the corporate side, and now you add potential federal prosecution to the list if I don't meet positive deliverables? No thanks. I'll go work for Microsoft, where my research will be privatized. You might be okay with this, as you pointed out you believe a profit motive is good for research, but you know who wouldn't be happy with this? Microsoft. And Google. And all the other tech companies that were (or will be) built on top of technologies that started as government funded research. All this does is make Microsoft stronger. Is that what we want? What about the next Microsoft or Google? Where will they come from?
I'll give you a concrete example of where your idea fails: the 2004 DARPA Grand Challenge. Millions were spent trying to bootstrap autonomous cars, and what was the result? They all crashed, no one completed the race. What should the response have been, to prosecute everyone involved? No, they tried again and gave everyone more money. Next time around in 2005 more succeeded (mostly because they relaxed the expectations).
Then in 2007 we saw the first real demonstration of autonomous cars in the DARPA Urban Challenge. Today, everything Tesla, Google, GM, Ford, et al. are doing with driverless cars is based on the research that happened in 2004-2007. Without government funded autonomous car research, there would be no Tesla or Waymo today. That's how research works: you try, you fail, you try again, and you have no idea how far your impact will reach, and really no one does. If we try to control this process toward producing only successes with contracts and positive deliverables, like it's an engineering project (with prosecution for failure and all), it just means we're going to lose dynamics like the Grand Challenges, and the broader economy will suffer for it.
Take all that money you want to invest in prosecutors, courts, lawyers, and prisons, and invest that in a system where replication studies are well funded and a viable career path for scientists. Increase funding into the NSF and other grant funding agencies to hire more people to consider grants, and increase grant throughput. I guarantee you you'll fix a lot of the problems you're identifying.
I think we are 80% in agreement but still using words differently.
> when I get grant money I need to account for how every dollar is spent
Yes I know, but that's not what I mean by accountability. Again: nobody is upset with academics because of expenses scandals or taking too many expensive flights. Well, except maybe for climatologists who supposedly take more flights than the average academic, but that's due to the perception of hypocrisy rather than concern over cost.
People are getting upset because when they download and read papers, the papers turn out to be bad and there are no visible consequences for that. Even just getting a clearly fraudulent paper retracted is reported to be a nightmare, according to people like Elisabeth Bik who hunt for scientific fraud as a hobby. And I've read endless reams of terrible papers that were useless or outright deceptive; I tried reporting a few and nobody ever cared.
Now, you're arguing that there is accountability of the following form:
> It's accountability in the form of: if you didn't do what you promised you'd do, then you don't get any more money
This is true, given that what scientists are promising the NSF is to publish papers, not strictly speaking to do research, and therefore by implication to come up with interesting claims, not necessarily true claims. But that's not what we want.
This is an inevitable problem with government funding of research. The buyer, the government, cannot really check whether the claims they're buying from scientists are true, so they need proxies like did it get published, did it get cited, etc. But those aren't the same things. Corporate research doesn't have this problem because the corporation will try to apply the research at some point; if it was fraudulent they will discover it then, and of course they're strongly incentivized to ensure it never gets to that point in the first place.
In theory the government could write grants in such a way that money is awarded independent of what claims end up being made, instead awarding money for the quality of work done. That's what you're arguing for here. And indeed corporate labs write contracts in this exact way. Scientists get a salary in a corporate lab, they don't have to write grants. They do have to convince their management chain that the research is worth funding, but there are many different ways to do that which don't involve continually publishing astonishing claims in scientific journals.
You're asking me to propose how science should work instead but, indeed, you already know my answer: eliminate the NSF completely, and stop subsidizing student loans. All science should be funded by companies. They have already solved the problems you're treating as novel or intractable above. Scientists are awarded salaries and promotions by firms on a more flexible basis than the NSF. Importantly, they are rewarded for doing research, not for producing claims. Companies can do this because they have management structures sufficiently well staffed to closely monitor what scientists are doing. That means if a firm is truly committed to research then the scientists will get paid even if their programme has some dry years. Plus there's a huge body of law handling fraud and corruption in the workplace.
At the same time, firms are incentivized to eliminate the research that is probably always going to be nearly useless. Outside of firms selling books or self help courses I doubt many would subsidize sociology or gender studies for example, and it's also unclear that would be a loss.
Your argument about who it would or wouldn't be good for seems a bit contradictory and I struggled to follow it. You're arguing it would be both bad for Google and Microsoft yet also make them stronger. I disagree with both possibilities: I think they would hardly notice the difference and it wouldn't affect how powerful they are. Having worked for one of those companies and also at a startup where we often read research papers in a certain subfield of CS with a view to maybe applying them, my view is that even in the relatively good field of computer science, most academic output is useless and has no impact. These firms do not rely heavily on government funded research:
- The web was very briefly funded for a couple of years as a side project of CERN, but then R&D was taken over by the private sector, where it has remained ever since. Page & Brin never even finished their PhDs before moving their research into the private sector! It's hardly a mystery where the next Google will come from: probably the same place the previous one did, a garage in Silicon Valley.
- What government funded tech was Microsoft built on? The internet? Microsoft is still with us in spite of the internet, not because of it! Or are you going back to military computers in World War 2? Military R&D is different, governments can fund that semi-effectively because they actually use the outputs.
- Neural networks were a backwater until Jeff Dean resurrected the field using the resources of the private sector, academia has been chasing to catch up ever since.
There are a lot of other examples. The DARPA Grand Challenge is not an example of what I'm talking about because:
1. DARPA is military research and therefore structured differently to how the NSF does things. The very structure of it as a Grand Challenge is a clue here: the output of the programme was cars (not) going round a track, not papers and citations.
2. I'm not arguing for prosecution of researchers who end up with null results!
I'll try not to do another wall of text since we're mostly in agreement, but I will make a couple final comments:
> Your argument about who it would or wouldn't be good for seems a bit contradictory and I struggled to follow it. You're arguing it would be both bad for Google and Microsoft yet also make them stronger.
What I meant is, if e.g. Page and Brin in 1998 had had no access to government funding and research because it had been privatized by e.g. AOL, there wouldn't be a Google today. But if we were to privatize all research, the Google of today would certainly like that insofar as it strengthens their market position (just like the AOL of 1998 would like the situation), but it also means they would have to start funding more research, because now they can't get any from the public.
> - The web was very briefly funded ...
> - What government funded tech was Microsoft built on? ...
> - Neural networks were a backwater ...
But the point is that it all started with government funding, so we need to be very careful about the consequences of privatizing it all. Today, ideas start out funded by the government, they gain legs in academia, move out into corporations, and are productized and disseminated to the public in the form of consumer goods. This is the progress pipeline, and it's proven extremely effective and enduring at driving innovation.
You want to cut out the beginning of the process because you think corporations can handle that part, but I don't think you've really demonstrated that. Can you point to any tech product out there that is exclusively built on in-house, private research? I certainly can't think of one.
For example, you bring up the origin of Page & Brin. Yes, they never finished their Ph.D., but the fact is they did meet in grad school while they were doing NSF funded work. Brin was at Stanford on an NSF fellowship. They built the first prototype of Google on an NSF grant. They were mentored by academics who also were funded by the NSF as professors and graduate students themselves. You take that funding away, and maybe these two people never meet, maybe they never learn what they need to get that spark of insight. So I agree with you that the next Google will come from the same place the previous one did - a government-funded research lab in Silicon Valley. The garage is where they moved their operation only after they had already used a lot of NSF money to get their start.
> 1. DARPA is military research and therefore structured differently to how the NSF does things. The very structure of it as a Grand Challenge is a clue here: the output of the programme was cars (not) going round a track, not papers and citations.
The processes of getting grants from NSF and DARPA are very similar, and in most cases the deliverable is a paper. The Grand Challenges are the exception of DARPA funding, not the rule.
> Military R&D is different, governments can fund that semi-effectively because they actually use the outputs.
Yes and no. DARPA would like to use the fruits of its funded research, but it funds projects on a very long timescale, so what it funds may or may not be used in the long term. Sometimes the research is not to strengthen the military per se, but to strengthen American interests through creating domestic tech sectors. E.g. I'm sure the military would like to use autonomous vehicles, but what's even better is for America to have its own domestic autonomous car sector that can produce those vehicles.
> most academic output is useless and has no impact.
You've tried to make the case that we should optimize toward useful research, and companies are better at identifying useful research because they have a profit motive, but I still think it's difficult to say today what research will be important 30-40 years down the line. DARPA recognizes that it's very hard to tell how useful research will be ahead of time, and that corporations don't like to engage in foundational research when there is no obvious short-term path to profit. This was the entire point of the Grand Challenge series, and it worked out well -- they wanted to bootstrap the autonomous car industry, so they paid researchers to get them rolling and now look where we are. If the government hadn't gotten involved, there probably wouldn't be an autonomous car sector in the US today.
There are plenty of cases in our history where some technology that seemed useless initially turned out to be bigger than anyone could have imagined. We need to be careful not to squelch those ideas too quickly because they don't return an immediate profit. Things like the Internet and neural networks come to mind. A lot of people, particularly large corporations, thought the Internet was a toy when it was first introduced. Neural networks seemed like a dead end and then found new life. But the fact is they started in academia. The DeepMind arcade paper, and essentially the entire deep reinforcement learning field today, is based on decades-old research funded by the UK government. What if that research had been locked away in a UK corporation? Would DeepMind even exist? That research was a toy for 30 years, until it wasn't.
The whole point of DARPA and other government funding agencies is that they don't know what the winners are ahead of time, and I don't think corporations can know this either. (If they could, why didn't they do more to fund RL research 30 years ago?) Therefore we shouldn't try to optimize for obvious winners, because we'll miss out on non-obvious winners, which bring the biggest upsides. This means we have to fund losers and research that ends up not being useful, and we should be okay with that, because things have turned out pretty well overall.
> 2. I'm not arguing for prosecution of researchers who end up with null results!
Sorry I thought you were with this:
We will therefore prosecute you for research fraud and failure to meet the terms of your contract.
I guess you mean failing to meet the terms of your contract and fraudulently misrepresenting that. But it still doesn't address the incentive to commit fraud, because if you fail to meet your objectives, you're still not going to get published and therefore won't get the next grant, so your career is still derailed. It just means people will try to hide the fraud better.
After I typed all this I realized I failed at my pledge to not give you a wall of text. Oops!
What I mean by prosecution is that if a research body signs a contract with a scientist to do research, then those contracts would need to specify what research actually is, and that is the first step towards penalizing people who aren't really doing it. Indeed, the process of flushing more research into the private sector would automatically eliminate a lot of the grey-area fraud that is so prevalent, because it would force a lot more people to write down what precisely they mean by "doing research", as well as continually evaluate that definition via normal management techniques. For example, is a simple modelling exercise "research"? It's often treated as such by e.g. banks, but the big tech labs we're talking about don't engage in much of that, unless you count AI, though I think that's sufficiently beyond the sort of modelling you find in most science that it's best to treat it separately.
At the moment governments fund science but have no working definition of what science is, which breeds a lot of cynicism of the type I display above w.r.t. sociology. Is gender studies "science"? Most people would say no, but the government says yes. A more subtle example is epidemiology. A close examination of their papers will reveal that much of it is just plugging public CSV files into a bunch of very over-simplified simulations and publishing the outputs. Is that science? If it is, can I get paid to play Cities: Skylines all day as long as I write a paper at the end? It sounds like a stupid suggestion, but actually yes, I can.
In my view this type of thing is not science, but my guess is that at this point the science-y-ness of epidemiology or urban planning would split 50/50, or most people would just go with the government's definition of "they receive grants and call themselves scientists, therefore they're scientists".
Would Google exist without the NSF? The specific company maybe not, but there were plenty of search engines around before Google, and Page in particular was already keen on creating a tech company when he was very young, so he would likely have ended up a startup founder sooner or later. An example competitor was Inktomi, which had already started doing pay-per-click ads. It's all forgotten now, but Google nearly didn't survive its early years because they got sued over 'stealing' the PPC ad concept. They were able to argue that their own elaborations on the idea were sufficiently different that it wasn't infringement. It's very plausible that one of these other firms would have hit upon the idea of PageRank; they were certainly incentivized to do so, especially once Inktomi had realised that PPC ads were a way to monetize search engines.
"The Deepmind arcade paper and essentially the entire deep reinforcement learning field today is based on decades-old research funded by the UK government. What if that research was locked away in a UK corporation? Would Deepmind even exist?"
Well DeepMind is a difficult example to debate here for both of us because of course DeepMind is or was a UK corporation and they do the exact opposite of locking up their research, if anything they're famously publicity and paper hungry. Google/DeepMind are actually a strong counterpoint to the idea we need academia for long range research: DeepMind is nothing but long range research (of unclear utility!) and of course self driving cars have been driven by Google for the last decade, pun totally intended.
If I were arguing in your shoes I'd be trying to argue Google is the exception that proves the rule and/or trying to distract attention from it, because it shows that companies can and will do long range research. Microsoft Research is another example, although it's less "pure" because it's more or less a little recreation of academia inside of Microsoft. I prefer the Google approach where science and technology are fully integrated.
Now the wider issue of governments needing to fund long range research is one I used to fully agree with. It sounds right and it's easy to find examples where you can sort of link them to government funded research. But as you can see, I changed my mind over time and no longer find myself in that camp, because:
1. Government funded basic research isn't free. We have to weigh up costs and benefits. How much of a contribution does government grant money make to the technological successes we take for granted today? For examples like PageRank, self-driving or DeepMind the initial contribution was quite small and mostly in the form of logistics (grand challenges) or theory work (which is cheap). And how much of a cost does it impose?
2. The costs are not just financial. I guess this is what mostly changed my mind. I concluded that a big part of the "cost" of government funded research is actually intellectual pollution of the literature. If you have to wade through 50 useless, deceptive or outright fraudulent papers to find 1 good one because governments aren't paying attention to what they fund, then that imposes an externalized cost on everyone who wants to benefit from research. Moreover, this work has to be endlessly duplicated because journals are loath to retract anything, so everyone who wants to push technology forward in a certain area has to do this work within their own small group because there's no coordination mechanism ... or just give up and ignore the literature entirely (this is what eventually happened to me).
I think a stronger argument for government funded research than the "it would never have happened" approach is that government funded science is usually un-patented and freely accessible. But even this argument is kind of weak, because universities do patent the results of tax funded science (maybe not in computer science, but it happens a lot in other fields), and also because the results of the research are often behind paywalls too! Although that's been getting better with time and is usually not a problem in CS (which IMHO is definitely one of the better fields).
But overall, to me it's just not clear that the benefits of buying papers en masse outweigh the costs: in dollar terms, in time, and of course the inevitable damage when people put bogus research into production and things go wrong.
> This is definitely not true, recipients of grants are heavily restricted on what kind of things they can spend that money on. I can't even fly a non-domestic carrier using grant money without proving no other alternatives exist.
That is pure corruption: the grant is funneling money from you to a domestic airline. If it were about accountability, you would have to prove the flight was really needed in the first place, and then that you found the best price. (The grant should still allow you to skip the "cut maintenance and pilot training to offer a lower price" airline, but if the best legitimate option happens to be foreign, that shouldn't matter to the grant unless there is corruption involved.)
> If it was about accountability you would have to prove the flight was really needed in the first place,
Friend, at a certain point the overhead to administrate these kinds of checks is more costly than just letting people buy tickets to go to conferences. And at this point it isn't corruption in the university, it's in the form of handouts to large corporations.
It is pretty common for universities to impose a 50-59.9% indirect cost rate (which they take on top of the funding requested by the researchers). I've had a multi-million-dollar grant where the university refused to offer a few thousand dollars of free equipment use as institutional support (something required for funding). That's because universities are badly managed: they bleed money on tuition to compete with other universities, pay indecent salaries to administrators (and sometimes researchers), and many pay millions to their basketball coach. Then you have to pay for all those renovations required to get a good ranking in the annual US News list or whatever other performance metric. And you're right about the grad student Ponzi scheme; I saw that firsthand.
You can't: if you do that, the ones with power will immediately seize even more and remove the last barriers. It's part of why you can't reboot a country; the people who abuse the system are the ones with the money, the properties, etc. Communists tried to break some of that, but almost all (if not all) of its applications failed because of power abuse.
Many see peer review as an integral part of the scientific method, but it's actually a quite recent custom (ca. 1970, IIRC). And I agree it's not without problems. Whenever you give a collective power over the individual, it creates room for politics.
Academics are largely the only people who will be able to understand the work, but sure.
The fact that it's not out in the open is somewhat complicated. You're perhaps right that it would lead to better outcomes, but it's also important that researchers feel free to speak openly.
I understand the concern about being free to speak candidly, but I think it's trumped by the need for transparency: if improper gatekeeping or other unethical behavior is happening, the reviewers' reputations should also be on the line. Basically, if you can't say it to your peers in public, don't say it at all.
This also fixes the problem of incompetent peer review, because it will be called out as such and the reviewer's reputation will suffer.
Opening peer review to public scrutiny will not make the process any less political--quite the contrary. There must be some way to combat the unethical behavior that does exist in academia, but that isn't it.
Please don't post opinions without supporting evidence and then ask for supporting evidence when someone disagrees with you. This just shows that you're applying skepticism selectively.
I don't have supporting evidence, and I'm not about to look it up right now. I think you're in the same boat or you wouldn't have replied like that.
I don't think it's controversial, though; isn't it commonly believed that increased transparency means less corruption? It might not be true, but if it's the prevailing belief, then the burden of proof is in fact on you.