I am impressed by AI. I just don't want to use it. "Look at how realistic this extruded text looks from a distance!" is definitely an achievement. It just doesn't add to my life in any useful way.
The real question for me is how often I want to generate an image or video, or have a conversation with a computer. For me, really quite rarely.
And no, replacing your customer support with a chatbot does not make it better. Just make a damn website with everything I need. A lot fewer errors, and a lot simpler for me to do what I want.
yeah the generated images are fun a few times, and they've come up in some tabletop RPG gaming with friends as quick fillers for NPCs or cave locations, etc.
but it's a niche thing. in exchange for one-offs we've basically had the internet turn into bots and the annihilation of art as man-made expression -- all while burning the equivalent of a small country's daily power consumption.
at this point it's tech bros praying for AGI, which will in all likelihood end up as a Torment Nexus
>It just doesn't add to my life in any useful way.
Most products don't add value to our lives imo. They are the means by which we keep money flowing, which is needed to keep the economy alive. Some might argue that they actually subtract from it, hence the need for dopaminergic products. The question for the tech CEOs is how to make LLMs reliably dopaminergic in the way Instagram/TikTok and the like are.
I would invite you to read the other comments on this post by people who have tried to use it for that, and found that it makes regular fundamental errors.
Most of the time you're better off reading a few responses to a given question (on, say, Stack Overflow) and synthesizing your own understanding out of them, rather than taking one that an AI has synthesized for you.
Do you regularly verify against real documentation, outside sources, and subject authorities, and question the output? I do, and I regularly get wrong information from premier LLMs. I still use them for information retrieval, because interrogating large corpora of text and double-checking key information can still be faster, but I'm not fully convinced it's beneficial long term for my intellect or knowledge retention.
It's mostly useless for anything with numbers or other hard facts. Sure, 80% of the numbers would be correct, but we can never know which 80%, and we would need to manually verify everything. (Not that Google is better at complex tasks.) LLMs are mostly useful for producing vague, non-factual business speak, or some abstract, imprecise media. This is a natural result of the architecture of these programs.