At least with bad search results, you had to look at them to know they were bad, or you got used to certain domains so you could prejudge a result and move on to the next one. LLMs confidently tell you false or made-up information as fact. If you fail to follow up with any references and just accept the answer, you are very susceptible to being fooled by the machine. Getting outside the tech bubble echo chamber that is HN, a large number of GPT app users have never heard of hallucinations or any of the issues inherent to LLMs.