Hacker News new | past | comments | ask | show | jobs | submit | sky2224's comments

Honestly, I think Apple hasn't jumped deep into AI for two big reasons:

1) Apple is not a data company.

2) Apple hasn't found a compelling, intuitive, and, most of all, consistent user experience for AI yet.

Regarding point 2: I haven't seen anyone demonstrate a clearly improved UX for a user-driven product outside of something that is a variation of a chatbot. Even the main AI players can't advertise anything more than "have AI plan your vacation".


Put a proper LLM into Siri. Encourage developers to expose the functionality of their apps as functions, let the Siri LLM call those functions (and sprinkle some magic security dust over it).

Boom: you have an agent on the phone capable of doing all the stuff you can do with the apps. Which means pretty much everything in our lives.
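A minimal sketch of what "apps expose functions, the LLM calls them" could look like, in Python for brevity. Everything here (the `ToolRegistry` class, the `book_table` tool, the dispatch call) is my own illustrative invention, not Apple's App Intents API or any real assistant framework:

```python
# Hypothetical sketch: each app registers its capabilities as named
# callable "tools"; an assistant layer routes a parsed intent to one.
from typing import Callable, Dict


class ToolRegistry:
    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., str]] = {}

    def register(self, name: str):
        # Decorator an app would use to expose one function.
        def wrap(fn: Callable[..., str]) -> Callable[..., str]:
            self._tools[name] = fn
            return fn
        return wrap

    def dispatch(self, name: str, **kwargs) -> str:
        if name not in self._tools:
            raise KeyError(f"no app exposes {name!r}")
        # A real system would check permissions here (the "security dust").
        return self._tools[name](**kwargs)


registry = ToolRegistry()


@registry.register("book_table")
def book_table(restaurant: str, time: str) -> str:
    # Stand-in for a restaurant app's actual booking flow.
    return f"Booked {restaurant} at {time}"


# After parsing "get me a table at Luigi's at 7", the LLM would emit
# a structured call that the assistant dispatches like this:
print(registry.dispatch("book_table", restaurant="Luigi's", time="19:00"))
```

The interesting part is that the LLM never runs app code directly; it only emits a tool name plus arguments, and the registry decides what actually executes.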


As for consistency, Apple's latest UI shows they don't give a damn any more.

I'm pretty sure most people didn't notice any kind of inconsistency. I myself have a hard time figuring out what's going on. I'm so focused on doing the work with the computer that I don't have the time to notice what's "wrong" with the OS. Which makes me wonder if the whole thing is blown out of proportion.

It's probably one of the biggest headlines right now. OpenAI has about $96 billion in debt, and they don't have a revenue-generating product yet.

I might be wrong, but shouldn't you have said profit-generating? I pay them $20 a month, so they have at least $20 of revenue.

No, I don't think so. You've paid for a service that will run an AI model given some prompt. There have been zero guarantees made that it will actually solve your problem.

As others have stated too, how do you define what an incorrect output is?


Can you provide examples of YC startups that knowingly broke laws and just dealt with those issues later? I'm not very aware.

Airbnb, DoorDash

Uber

Uber is not YC-backed.


Oh my context window is apparently too small

This is a sales post in disguise.


> 03 audio sourced from the web

Where? How do I know you're not pulling from some shady repository?


Maybe we'll get a resurgence of the LimeWire-style pranks people are so nostalgic for.


I want Arnold to tell me about pizza again soooooooooo bad.


shady repository of... audio? to what end?


Yes, exactly: there's a lossless FLAC of everything ever recorded sitting out there for the taking.


lmao it's just tidal > deezer > yt

i've added an update comment to this post


Chegg is a service many students used to get guidance and answers to homework problems for whatever courses they were taking. It was a sinking ship once GPT-4 came out, but GPT-5 was really the final nail in the coffin.

I don't know any student who really uses it now.


No kidding, it took my CPU usage from 1% to 55% instantly, sheesh.


> I am suspicious of grifters and would like to find trustworthy advice.

If you want actual good advice: go to a doctor.

Don't go to a chiropractor, don't go to hackernews. Go to a doctor. You can either start with a physical therapist in your area or start with your primary care doctor to get a referral.

I'm assuming you're in the US, so I know it's expensive, but this will genuinely shorten your lifespan if you let it get significantly worse.


I feel like people genuinely don't understand what vibe coding means.

Just because you're using an LLM doesn't mean you're "vibe coding".

I regularly use LLMs at work, but I don't "vibe-code", which is where you just say garbage to the model and blindly click accept on whatever it spits out.

I design, think about architecture, write out all of my thoughts, expected example inputs, expected example outputs, etc. I write out pretty extensive prompts that capture all of that, and then ask for an improved prompt. I review that improved prompt to make sure it aligns with the requirements I've gathered.

I read the output like I'm doing a deep code review, and if I don't understand some code I make sure to figure it out before moving forward. I make sure that the change set is within the scope of the problem I'm trying to solve.

Excluding the pieces that augment the workflow, this is all the same stuff you would normally do. You're an engineer solving problems, and the domain you do it in happens to involve software and computers.

Writing out code has always been a means to an end. If you actually give LLMs a shot and learn to use the tools, the productivity gains are real. So yes, pretty soon most places are going to expect you to use them, the same way you've been expected to use a specific language, framework, or any other tool that greatly improves productivity.
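For what it's worth, the workflow above (requirements, architecture notes, example I/O, then a review pass) roughly maps to a prompt skeleton like this. The section names are my own invention, not any standard:

```
## Context
What the system does today; the relevant files/modules.

## Task
The one change I want, scoped tightly.

## Constraints
Architecture decisions that must not change; style rules; what's out of scope.

## Example input
...

## Example output
...

## Before coding
Restate this task in your own words and flag anything ambiguous.
```

The last section is the "request an improved prompt" step: it surfaces misunderstandings before any code is generated.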

