Hacker News

Any imitation of humanity should be the line, IMO.

You know how Meta is involved in lawsuits regarding getting children addicted to its platforms while simultaneously asserting that "safety is important"...

It's all about the long game. Do as much harm as you can and set yourself up for control and influence during the periods where the technology is ahead of the regulation.

Our children are screwed now because they have parents who put them onto social media without their consent, literally from the day they were born. They are brought up inside social media before they ever have a chance to choose a healthier path.

Apply that to AI: now they can start talking to chatbots before they really understand that the bots aren't there for them. The bots aren't human, and they have intentions of their very own, created by their corporate owners and the ex-CIA people on the "safety" teams.

You seem to be getting down-voted, but you are right. There's NO USE CASE for an AI that doesn't continuously remind you it is not human, except for creators who want you to be deceived (scammers, for example) or who want you to form a "human relationship" with the AI. I'm sure "engagement" is still a KPI.

The lack of regulation is disturbing on a global scale.



That's fundamentally what LLMs are: an imitation of humanity (specifically, of human-written text). So if that's the line, then you're proposing banning modern AI entirely.


That's the laziest take. I know what LLMs are. That doesn't mean you can't build a safety apparatus around them.

Some people drink alcohol, and nobody asks the alcohol not to be alcoholic. There are obviously layers of safety beyond the product itself.



