
We already have true AI - it's a vast, thriving industry.

Or how about calling that vast, thriving industry "weak AI," or "clever algorithms," which is what it really is. The original definition of AI was what we now call strong AI, but after some lesser problems were solved without actually creating strong AI, we needed a new name for those techniques.



I want to see an AI that can improve itself by developing new algorithms for arbitrary tasks. I wonder how far off we are from that now?


You know, if you're at the point where you can give a human-readable spec of the problem and the AI can make a passable attempt at it, that's basically the Turing Test -- which is why I think it deserves its status as the holy grail. Something that passed would really give the impression that "there's a ghost inside here."


Rather than a ghost, I wonder if we'll ever have the average person looking at brains and thinking "there's a program inside here."

And then to reverse it, imagine that the world really is some kind of massive simulation... and that there are backups of the save()-ed states :)



The problem is that all our current AI techniques are fundamentally data-driven, and it's not clear what sort of data to feed in to represent good or bad algorithm design.
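One speculative way to frame it: treat "algorithm quality" as measurable data by scoring candidate programs against test cases, producing (candidate, score) pairs that a data-driven system could in principle learn from. A minimal sketch, where the candidate sorting functions and the scoring scheme are both hypothetical illustrations, not any real system's API:

```python
import random

def score_candidate(fn, trials=100):
    """Fraction of random inputs the candidate sorts correctly."""
    passed = 0
    for _ in range(trials):
        xs = [random.randint(0, 99) for _ in range(random.randint(0, 10))]
        try:
            if fn(list(xs)) == sorted(xs):
                passed += 1
        except Exception:
            pass  # crashing candidates simply score lower
    return passed / trials

# Two hypothetical candidates: one correct, one buggy.
def candidate_a(xs):
    # correct: defers to the built-in sort
    return sorted(xs)

def candidate_b(xs):
    # buggy: a single bubble pass is not a full sort
    for i in range(len(xs) - 1):
        if xs[i] > xs[i + 1]:
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs

# The resulting (name, score) pairs are the kind of "data" in question.
dataset = [(c.__name__, score_candidate(c)) for c in (candidate_a, candidate_b)]
print(dataset)
```

Of course, this only measures correctness on sampled inputs; capturing things like asymptotic efficiency or design elegance as data is exactly the unsolved part.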



