Honestly, I think Apple hasn't jumped deep into AI for two big reasons:
1) Apple is not a data company.
2) Apple hasn't found a compelling, intuitive, and most of all, consistent, user experience for AI yet.
Regarding point 2: I haven't seen anyone share a hands-down improved UX for a user-driven product outside of something that is a variation of a chatbot. Even the main AI players can't advertise anything more than "have AI plan your vacation".
Put a proper LLM into Siri. Encourage developers to expose the functionality of their apps as functions, allow the Siri LLM to access those (and sprinkle some magic security dust over it).
Boom, you have an agent in the phone capable of doing all the stuff you can do with the apps. Which means pretty much everything in our life.
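The idea above is essentially tool/function calling. A minimal sketch of what it could look like, in Python for illustration (all names here — `Tool`, `ToolRegistry`, `send_message` — are hypothetical, not any real Apple or Siri API):

```python
# Hypothetical sketch: apps register their capabilities as typed
# "tools" that an on-device assistant could invoke on the user's behalf.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Tool:
    name: str
    description: str  # what the LLM reads to decide when to call this
    handler: Callable[..., str]


class ToolRegistry:
    def __init__(self) -> None:
        self._tools: dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self._tools[tool.name] = tool

    def dispatch(self, name: str, **kwargs) -> str:
        # This is where the "magic security dust" would go in practice:
        # permission prompts, sandboxing, auditing of each call.
        if name not in self._tools:
            raise KeyError(f"no such tool: {name}")
        return self._tools[name].handler(**kwargs)


# An app (here, an imaginary messaging app) exposes one function:
registry = ToolRegistry()
registry.register(Tool(
    name="send_message",
    description="Send a text message to a contact",
    handler=lambda contact, body: f"sent {body!r} to {contact}",
))

# The assistant's LLM, having decided on a tool call, dispatches it:
print(registry.dispatch("send_message", contact="Alice", body="running late"))
```

The hard parts the sketch waves away, and the comment hints at, are discovery (how the model learns which tools exist) and authorization (which calls need user confirmation).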
I'm pretty sure most people didn't notice any kind of inconsistency. I myself have a hard time figuring out what's going on. I'm so focused on doing the work with the computer that I don't have the time to notice what's "wrong" with the OS. Which makes me wonder if the whole thing is blown out of proportion.
No, I don't think so. You've paid for a service that will run an AI model given some prompt. There have been zero guarantees made that it will actually solve your problem.
As others have stated too, how do you define what an incorrect output is?
Chegg is a service many students used to get guidance and answers to homework problems for whatever courses they were taking. It was a sinking ship once GPT-4 came out, and GPT-5 really was the final nail in the coffin.
> I am suspicious of grifters and would like to find trustworthy advice.
If you want actual good advice: go to a doctor.
Don't go to a chiropractor, don't go to hackernews. Go to a doctor. You can either start with a physical therapist in your area or start with your primary care doctor to get a referral.
I'm assuming you're in the US, so I know it's expensive, but this will genuinely shorten your lifespan if you let it get significantly worse.
I feel like people genuinely don't understand what vibe coding means.
Just because you're using an LLM doesn't mean you're "vibe coding".
I regularly use LLMs at work, but I don't "vibe-code", which is when you're just saying garbage to the model and blindly clicking accept on whatever it spits out.
I design, think about architecture, write out all of my thoughts, expected example inputs, expected example outputs, etc. I write out pretty extensive prompts that capture all of that, and then request for an improved prompt. I review that improved prompt to make sure it aligns with the requirements I've gathered.
I read the output like I'm doing a deep code review, and if I don't understand some code I make sure to figure it out before moving forward. I make sure that the change set is within the scope of the problem I'm trying to solve.
Excluding the pieces that augment the workflow, this is all the same stuff you would normally do. You're an engineer solving problems, and the domain you do it in happens to involve software and computers.
Writing out code has always been a means to an end. The productivity gains if you actually give LLMs a shot and learn to use the tools are real. So yes, pretty soon it's going to become expected from most places that you use the tools. The same way you've been expected to use a specific language, framework, or any other tool that greatly improves productivity.