> My problem with it is that inbuilt into the models of all LLMs is that they'll fabricate a lot. What's worse, people are treating them as authoritative.
The same is true of the internet, and people used to make these same arguments to dissuade others from getting their information online (back when Wikipedia was considered a running joke and journalists mocked blogs). But today it would be considered silly to dissuade someone from using the internet just because the information there is extremely unreliable.
Many programmers will say Stack Overflow is invaluable, but it's also unreliable. The answer is to use it as a tool and a jumping-off point to help you solve your problem, not to assume that it's authoritative.
The strange thing to me these days is the number of people who will talk about the problems with misinformation coming from LLMs, but who then seem to uncritically believe all sorts of other misinformation they encounter online, in the media, or through friends.
Yes, you need to verify the information you're getting, and this applies to far more than just LLMs.
Shades of grey fallacy. You have way more context clues about the information on the internet than you do with an LLM. In fact, with an LLM you have zero(-ish?).
I can peruse your previous posts to see how truthful you are, I can tell if your post has been down/upvoted, I can read responses to your post to see if you've been called out on anything, etc.
This applies tenfold in real life, where over time you get to build comprehensive mental models of other people.
I have decided it must be attached to a sort of superiority complex. These types of people believe they are capable of distinguishing fact from fiction but the general population isn't, so LLMs scare them because someone might hear something wrong and believe it. It almost seems delusional. You have to be incredibly self-aggrandizing to think this way. If LLMs were actually causing "a problem" then there would be countless examples of humans making critical mistakes because of bad LLM responses, and that is decidedly not happening. Instead we're just having fun Ghibli-fying the last 20 years of the internet.
Regardless of anything else, it's far too early to make such claims. We have to wait until people start allowing "AI agents" to make autonomous black-box decisions with minimal supervision, since nobody has any clue what's actually happening inside them.
Even if we tone down the sci-fi dystopia angle, not that many people really use LLMs in non-superficial ways yet. What I'm most afraid of is the next generation growing up without the ability to critically synthesize information on their own.
Most people — the vast majority of people — cannot critically synthesize information on their own.
But the implication of what you are saying is that academic rigour is going to be ditched overnight because of LLMs.
That's a little bit odd. Has the scientific community ever thrown up its collective hands and said "ok, there are easier ways to do things now, we can take the rest of the decade off, phew, what a relief"?
> what you are saying is that academic rigour is going to be ditched overnight
Not across all levels and certainly not overnight. But a lot of children entering the pipeline might end up having a very different experience than anyone before LLMs (unless they are lucky enough to be in an environment that provides them better opportunities).
> cannot critically synthesize information on their own.
That's true, but if even fewer people try to do that, or even know where to start, it will get even worse.