The catastrophe is humanity going extinct from superintelligent AI, like a native species going extinct after an invasive species arrives. Dismissing the risk by mentioning Terminator is like insisting the Earth is flat because Hitler said it is round.
This reminds me of the Eliezer Yudkowsky tweet saying that AI was going to hack our DNA and use our bodies to mine bitcoin or something. Ridiculous fearmongering.
I have probably read more sci-fi than the average HN user, but the whole "superintelligent AI is going to kill us all" hysteria is among the more ridiculous ideas I have ever heard.
Really though, I have entertained all the doomers' propositions, and none of them seem any more likely than the plot of The Matrix. The ideas that prop these fears up are based on layers of ever more far-fetched hypotheses about things that do not exist. If you have a novel reason why AI poses an x-risk, I am more than interested in hearing it.
Here is a really interesting quote that I think might go against some of the misanthropic tendencies of doomers and the tech crowd in general, but that is more relevant than ever:
“There was also the Argument of Increasing Decency, which basically held that cruelty was linked to stupidity and that the link between intelligence, imagination, empathy and good-behaviour-as-it-was-generally-understood – i.e. not being cruel to others – was as profound as these matters ever got.”
True, humans have been remarkably ignorant throughout our short history. You might notice, though, that most folks don't go around abusing animals or hurting other people on purpose. Take from that what you will.
Maybe together as a species we can avoid hellish cyberpunk dystopias brought on by regulatory capture of the most powerful technology humans have created thus far. I can only hope.
> You might notice, though, that most folks don't go around abusing animals or hurting other people on purpose. Take from that what you will.
It doesn't matter what "most folks" go about doing. If anything, it makes things all the scarier: we've caused all this destruction to so many other species, and we weren't even trying.
And anyway, that's just human terrorists using future AIs to build biological weapons. The much greater danger is superintelligent AI causing human extinction by itself.
That is not an article; that is a series of short-form tweets. If it were an actual article I would absolutely read it, but I cannot see referencing a bunch of tweets from a self-proclaimed expert as in any sense good faith. I asked if you had any novel ideas about how "superintelligent AI" poses a valid x-risk, but you failed to provide any interesting ones. So I won't engage with what, from my perspective, is fearmongering for the benefit of malicious corporations.
I still don't see any studies with control groups and reproducible experimental procedures showing that any AI agent is in any way more useful to a 'terrorist' than unrestricted access to the internet.
> That is not an article; that is a series of short-form tweets. If it were an actual article I would absolutely read it
So you didn't even read the tweets? They contain the link to a paper.
> self-proclaimed expert
Esvelt is absolutely a recognized expert on biotechnology. The authors of the article you linked to are not.
> I asked if you had any novel ideas about how "superintelligent AI" poses a valid x-risk
I just responded to your article about bioterrorism, which was not about x-risk. Arguments about x-risk were made elsewhere, but I'm sure you would dismiss them because they don't contain studies with control groups.
Oh, you mean the paper that was discredited in the link I already shared? I am not going to copy and paste the argument from that link; you can read it for yourself. I gave you the benefit of the doubt that you had actually read the article you claimed to read, but unfortunately good-faith arguments around this seem impossible to have with the alarmist crowd.
So yes, the paper that was already discredited in the essay I posted, which you didn't read:
------------------
"While I was writing this, an extra paper game out on the same topic as the "Dual-use biotechnology" paper, with the fun title "Will releasing the weights of future large language models grant widespread access to pandemic agents?"."
Maybe this is a totally different paper with exactly the same name, but if not, I am not really interested in reading a paper that is unrepeatable and doesn't use control groups, because that isn't science.
I think without AI we wouldn't go extinct for a long time; there are no other likely extinction risks. Toby Ord has a nice book (The Precipice) about various forms of extinction risk, and he basically says the same thing.