Not you specifically, but I honestly don't understand how positive many in this community (or really anyone at all) can be about this news. Tim Urban's article explicitly touches on the risk of human extinction, not to mention all the smaller-scale risks from weaponized AI. Have we made any progress on preventing this? Or is HN mostly happy with deprecating humanity because our replacement has more teraflops?
Even the best-case scenario that some are describing, of uploading ourselves into some kind of post-singularity supercomputer in the hopes of being conscious there, doesn't seem very far from plain extinction.
I think the best-case scenario is that 'we' become something different than we are right now. The natural tendency of life (on the local scale) is toward greater information density. Chemical reactions beget self-replicating molecules beget simple organisms beget complex organisms beget social groups beget tribes beget city-states beget nations beget world communities. Each one of these transitions looks like the death of the previous thing, but in actuality the previous thing is still there, just as part of a new whole. I suspect we will start with natural people and transition to some combination of people whose consciousness exists, at least partially, outside the boundaries of their skulls, people who are mostly information on a computing substrate outside a human body, and 'people' who no longer have much connection with the original term.
And that's OK. We are one step toward the universe understanding itself, but we certainly aren't the final step.
Growing tomatoes is less efficient than buying them, regardless of your metric. If you just want really cleanly grown tomatoes, you can buy those. If you want cheap tomatoes, you can buy those. If you want big tomatoes, you can buy those.
And yet individual people still grow tomatoes. Zillions of them. Why? Because we are inherently over-evolved apes who like sweet juicy fruits. The key to being a successful human in the post-scarcity AI overlord age is to embrace your inner ape and just do what makes you happy, no matter how simple it is.
The real insight out of all this is that the above advice is also valid even if there are no AI overlords.
Humans are great at making up purpose where there is absolutely none, and indeed this is a helpful mechanism for dealing with post-scarcity.
The philosophical problem that I see with the "AI overlord age" (although not directly related to AI) is that we'll then have the technology to change the inherent human desires you speak of, and at that point growing tomatoes just seems like a very inefficient way of satisfying a reward function that we can change to something simpler.
Maybe we wouldn't do it precisely because it'd dissolve the very notion of purpose? But it does feel to me like destroying (beating?) the game we're playing when there is no other game out there.
(Anyway, this is obviously a much better problem to face than weaponized use of a superintelligence!)
Any game you play has cheat codes. Do you use them? If not, why not?
In a post-scarcity world we get access to all the cheat codes. I suspect there will be many people who use them and as a result run into the inevitable ennui that comes with basing your sense of purpose on competing for finite resources in a world where those resources are basically free.
There will also be many people who choose to set their own constraints to provide some 'impedance' in their personal circuit. I suspect there will also be many people who will simply be happy trying to earn the only resource that cannot ever be infinite: social capital. We'll see a world where influencers are god-kings and your social credit score is basically the only thing that matters, because everything else is freely available.
I feel exactly the opposite. AI has not yet posed any significant threats to humanity other than issues with the way people choose to use it (tracking citizens, violating privacy, etc.).
So far, we have task-driven AI/ML. It solves a problem you tell it to solve. Then you, as the engineer, need to make sure it solves the problem correctly enough for you. So it really still seems like it would be a human failing if something went wrong.
So I'm wondering why there is so much concern that AI is going to destroy humanity. Is the theoretical AI that's going to do this even going to have the actuators to do so?
Philosophically, I don't have an issue with the debate, but the "AI will destroy the world" side doesn't seem to have any tangible evidence. People take it as a given that AI could eliminate all of humanity, and they don't support that argument in the least. From my perspective, it looks like fearmongering from people who watched and believed Terminator. It comes across as uniquely out-of-touch.
Agreed. People think of the best-case scenario without seriously considering everything that can go wrong. If we stay on this path, the most likely outcome is human extinction. Full stop.
Mechanized factories failed to kill humanity two hundred years ago, and the Luddite movement against them seems comical today. What makes you think extinction is the most likely outcome?
This path will indeed lead to human extinction, but the path is climate change. AI is one of the biggest last hopes for reversing it. From my perspective, if it does kill us all, well, it's most likely still a less painful death.
> Or is HN mostly happy with deprecating humanity because our replacement has more teraflops?
If we manage to make a 'better' replacement for ourselves, is it actually a bad thing? Our cousins on the hominid family tree are all extinct, yet we don't consider that a mistake. AI made by us could well make us extinct. Is that a bad thing?
Your comment summarizes what I worry might be a more widespread opinion than I expected. If you think that human extinction is a fair price to pay for creating a supercomputer, then our value systems are so incompatible that I really don't know what to say.
I guess I wouldn't have been so angry about any of this before I had children, but now I'm very much in favor of prolonged human existence.
I suppose the same axioms as every ape that's ever existed (and really the only axioms that exist): my personal survival, my comfort, my safety, accumulation of resources to survive the lean times (even if there are no lean times), stimulation of my personal interests, and the same for my immediate 'tribe'. Since I have a slightly more developed cerebral cortex, I can abstract that 'tribe' to include more than 10 or 12 people, which, judging by your post, you can too. And fortunately for us, because that little abstraction let us get past smashing each other with rocks, mostly.
I think the only difference between our outlooks is I don't think there's any reason that my 'tribe' shouldn't include non-biological intelligence. Why not shift your priorities to the expansion of general intelligence?
We have Neanderthal and Denisovan DNA (and DNA from two more besides). Our cousins are not exactly extinct; we are a blend of them. Sure, no pure strains exist, but we are not a pure strain either!
> If we manage to make a 'better' replacement for ourselves, is it actually a bad thing?
It's bad for all the humans alive at the time. Do you want to be replaced and have your life cut short? For that matter, why should something better replace us rather than coexist with us? We don't think killing off all other animals would be a good thing.
> Our cousins on the hominid family tree are all extinct, yet we don't consider that a mistake.
It's just how evolution played out. But if there were another hominid still alive alongside us, advocating for its extinction because we're a bit smarter would be considered genocidal and deeply wrong.