If the answer is wrong for topics you do know, why trust the answers for things you don’t? What’s going to be the term for Gell-Mann amnesia[1] but applied to LLMs instead of the media?
The questions I ask on topics I am familiar with are usually very demanding. On these it is clear that there are limits to its comprehension and reasoning abilities, but nonetheless, I find it very impressive.
Questions I ask on topics I am not familiar with are much further from the limits of its knowledge. I find it to be an amazing tool for quickly getting a structured overview of a new subject, including pros and cons of different alternatives and risks I should be aware of.
You should have a healthy dose of skepticism about anything you read online.
The examples I'm thinking of, where people completely dismissed ChatGPT, involved asking things like "tell me about <MY NAME>" or "explain this thing I wrote my thesis on".
In other words, throwing a toy problem at it, getting a bad answer, then making up their mind that it's not useful.
I'm not advocating blind trust in it; I'm just saying don't try a couple of things and decide it's garbage. You're doing yourself a disservice.
I love asking it for unit tests and docstrings for my code. Is it perfect? No. But does it give me a starting point for something that I otherwise might not do? Absolutely.
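To make that concrete, here's a hypothetical sketch of the workflow in Python (the clamp function, its docstring, and the test cases are all illustrative, not actual model output): you paste in a bare function, ask for a docstring and tests, then review and correct what comes back.

    import unittest

    def clamp(value, low, high):
        """Return value limited to the closed range [low, high]."""
        return max(low, min(value, high))

    # The kind of test scaffold you get back as a starting point;
    # each case still needs a human to verify it.
    class TestClamp(unittest.TestCase):
        def test_within_range(self):
            self.assertEqual(clamp(5, 0, 10), 5)

        def test_clamps_low(self):
            self.assertEqual(clamp(-3, 0, 10), 0)

        def test_clamps_high(self):
            self.assertEqual(clamp(42, 0, 10), 10)

    if __name__ == "__main__":
        unittest.main()

Even if one generated case is wrong, deleting a bad assertion is far less work than writing the suite from scratch.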
I've asked it a lot of things where I'm familiar with the topic and can immediately eyeball its answer, either because I'm having a brainfart or because the problem is a little fiddly to work out. It can be good at those.
I've asked it a lot of questions about things I'm not at all familiar with (PowerShell is probably the best example), and it's provided good answers, or at least answers that led me in the right direction so I could iterate towards a solution. But then I also tried to use it to write an AWK script, and it failed miserably, though it got 80% of the way there. Which, honestly, is about the best I've ever been able to do with any complex AWK script. :-)
I think GP means people are being too perfectionist with GPT-4. Between things you're an expert on and things you have no clue about, there's a vast space of things you understand to some degree - enough to apply intuition and common sense. In this space, you'll be able to spot rather quickly when GPT-4 is making things up, while still benefiting greatly when it's right.
Indeed, plus what are you going to do otherwise? Google it. And how will you vet the pages Google returns? Intuition and common sense, and perhaps an appeal to authority.
And LLMs are a new thing too; they take some getting used to. I admit I almost got burned a couple of times when, for a few moments, I bought the bullshit GPT tried to sell me. Much like with the Googling experience of old (sadly made near-useless over the last decade, as Google changed the fundamental ideas behind what search queries are) - as you keep using LLMs, you develop an intuition for prompting them and interpreting their results. You get a feel for what is probably bullshit, what needs to be double-checked, and what can be safely taken at face value.
[1] https://web.archive.org/web/20190808123852/http://larvatus.c...