
I am impressed by AI. I just don't want to use it. "Look at how realistic this extruded text looks from a distance!" is definitely an achievement. It just doesn't add to my life in any useful way.


The real question for me is how often I want to generate an image or video, or have a conversation with a computer. For me, quite rarely.

And no, replacing your customer support with a chatbot does not make it better. Just make a damn website with everything I need. Far fewer errors, and far simpler for me to do what I want.


yeah the generated images are fun a few times, and they've come up in some tabletop RPG gaming with friends as quick fillers for NPCs or cave locations, etc.

but it's a niche thing. in exchange for one-offs we basically get the internet turning into bots and the annihilation of art as man-made expression -- all while burning the equivalent of a small country's daily power consumption.

at this point it's tech bros praying for AGI, which will in all likelihood end up as a Torment Nexus


>It just doesn't add to my life in any useful way.

Most products don't add value to our lives, imo. They are the means by which we keep money flowing, which is needed to keep the economy alive. Some might argue that they actually subtract from it, hence the need for dopaminergic products. The question for the tech CEOs is how to make LLMs reliably dopaminergic in the way Instagram/TikTok and the like are.


I've come to be unimpressed by this "money flow" hypothesis of the economy. It sounds like nonsense an accountant came up with.


Have you tried using it as a knowledge retrieval mechanism, i.e., in lieu of Google? Because if not, you really should.


I would invite you to read the other comments on this post by people who have tried to use it for that, and found that it makes regular fundamental errors.

Most of the time you're better off reading a few responses to a given question (on, say, Stack Overflow) and synthesizing your own understanding out of them, rather than taking one that an AI has synthesized for you.


I feel that calling it AI is a big part of the problem.

An LLM parses and generates language exceedingly well. I use LLMs daily now and they are a boon for certain tasks.

An LLM is not an all-knowing oracle. It doesn’t know anything. People who treat a language generator as an authority on anything are fools.


Who is getting regular fundamental errors? I don’t get any.


Do you regularly verify against real documentation, outside sources, and subject authorities, and question the output? I do, and I regularly get wrong information from premier LLMs. I still use them for information retrieval, because interrogating large corpora of text and double-checking key information can still be faster, but I'm not fully convinced it's beneficial for my intellect or knowledge retention in the long term.


I need a way to block specific people on HN.


It's mostly useless for anything with numbers or other hard facts. Sure, 80% of the numbers will be correct, but we can never know which 80%, so we'd need to manually verify everything. (Not that Google is better at complex tasks.) LLMs are mostly useful for producing vague, non-factual business speak, or abstract, imprecise media. This is a natural result of the architecture of these programs.



