
> No, it's not. Skynet was a recursively self improving ASI. You are conflating an autokill bot and, apparently, an ASI that can embody and replicate itself.

The movie never said Skynet was any of that. The point of Terminator is that decision-making around war was taken out of the hands of humans, and then nobody could control it.

You people really don't get it, do you? Skynet doesn't need to be evil, or conscious, or self-improving. It can be good, very good. But when WE don't control it, we don't know the consequences of what we created. Nobody saw AI psychosis coming, but we created it, by making the models good. By making the models listen to you and agree with you.

For fuck's sake, you could make an automated system that just signs postcards and, if you give it enough access, it could wipe out the human race. Not because it's evil; it might not even have an understanding of evil. But because we don't control it, it will meet its own goals without concern for us, because it's not human.

> autokill bots are coming. Whether any of us like it or not.

Inevitability is not an argument, and I won't humor it. It's cognitively lazy and dishonest. With this reasoning you can justify ANYTHING: rape, murder, nuclear warfare, killing and eating children. This reasoning is bad and stupid and nobody should use it anymore.




> you could make an automated system that just signs postcards and, if you give it enough access, it could wipe out the human race.

I mean this sincerely. You really ought to stop reading Bostrom and Yudkowsky. It is very hard to take this kind of hysteria seriously.

> Inevitability is not an argument, and I won't humor it.

It is, and I don't care what you will or won't humor. Just answer me this: how will you convince all the other countries of the world not to build terminators? The leading example of "it is inevitable" is of course China. They are already testing and deploying semi-autonomous robots throughout their national security apparatus. If your answer is "just because they do it doesn't mean anyone else should," then you're not to be taken seriously on this topic.

> killing and eating children

I'd really like to know what convoluted scenario you could conjure in which one would argue that killing and eating children is inevitable.


Calling something hysteria is also not an argument. Again, it's just intellectually lazy. Just because you refuse to take problems seriously doesn't mean they cease to exist; it just means you lack critical thinking.

And as for killing and eating children, it's easy: starvation. If you're hungry enough, you'll eat children. All it takes is a supply chain disruption, which is much more likely than nuclear war.

So why not eat the children now? It's gonna happen anyway.

It's true that I am jumping the gun here. We don't need an apocalypse for AI to suck ass. It sucks ass right now and is causing massive problems. We should probably focus on that.


Saying something is intellectually lazy is not an argument. It's just intellectually lazy. Just because you refuse to take China's unrestrained development of autokill bots seriously doesn't mean they won't do it, it just means you lack critical thinking.

Perhaps you could write Xi a nicely worded letter informing him that he really shouldn't let his military-industrial complex develop autokill bots. When he inevitably realizes the error of his ways (mostly due to you accusing him of intellectual laziness), he'll no doubt shut down autokill bot development. Taiwan and India will rest easy and praise your hard-working intellect. Then we can shift all societal resources to focus on LLMs and why you think they suck.



