>casting himself as an extremist nut, increasing the resistance to his viewpoint in the population at large.

I think the majority of the population at large either doesn't care about what happened or wishes that the guy had actually managed to kill Altman. Not even necessarily because Altman is involved with AI, but just because he is extremely rich. I don't imagine any increased resistance from the population at large - it either doesn't mind when rich people are killed or loves it. The exceptions would be people like entertainers, who develop a parasocial relationship with the public and provide direct joy to people, but AI company leaders don't fall into that category.

That said, it is true that killing Altman would almost certainly achieve nothing. See my other post in this thread.


I don't agree with Yudkowsky, but I think there's certainly a chance that he's right about AI destroying humanity. I just don't think the likelihood of that happening is as high as he thinks it is. But there certainly is a chance.

The problem with trying to stop it is, how? Even if you killed every single AI company leader and every single top AI engineer, it would almost certainly just slow down the rate of progress in the technology, not stop it. The technology is so vital to national security that in the face of such actions, state security forces would just bring development of the tech under their direct protection Manhattan Project-style. Even if you killed literally every single AI engineer on the planet, it's pretty likely that this would just delay the development of the technology by a decade or so instead of actually preventing it.

The technology is pushed forward by a simple psychological logic: every key global actor knows that if they don't build the technology, they will be outcompeted by other actors who do build it. No key actor thinks they have the luxury of abstaining, even if they wanted to. It's very similar to nuclear weapons in that regard. You can talk about nuclear disarmament all you want, but at the end of the day, having nuclear weapons is vital to having sovereignty. If you don't have nuclear weapons, you will always be in danger of becoming just the prison bitch of countries that do have them. AI seems to be growing toward a similar position in the calculus of states' national security.

I can think of no example in history of the entire world deciding to just forsake the development of a technology because it seemed like it could prove to be too dangerous. The same psychological logic always applies.


> I don't agree with Yudkowsky, but I think there's certainly a chance that he's right about AI destroying humanity. I just don't think the likelihood of that happening is as high as he thinks it is. But there certainly is a chance.

This is the rhetorical trick that LessWrongers (Yudkowsky's site) have settled on for decades: they have justified everything around the premise that there's a chance, however small, that the world will end. You can't argue that the world ending is a bad thing, so they have their opening for the rest of their arguments, which is that we need to follow their advice to prevent the world maybe ending. They rebut any counterargument by trying to turn it into a P(doom) debate where we're fighting over how likely this outcome is, but by the time the discussion gets there you've already been forced to accept their framing. Then they push the P(doom) argument aside and argue that it doesn't matter how unlikely it is, we have a moral duty to act.


This is an entertaining (and often exasperating) decades-old trend in competitive U.S. college debate, as well.

A common advantageous strategy is to take the randomly-selected topic, however unrelated, and invent a chain of logic that claims that taking a given side/action leads to an infinitesimal risk of nuclear extinction/massive harms. This results in people arguing that e.g. "building more mass transit networks" is a bad idea because it leads to a tiny increase in the risk of extinction--via chains as silly as "mass transit expansion needs energy, increased energy production leads to more EM radiation, evil aliens--if they exist--are very marginally more likely to notice us due to increased radiation and wipe out the human race". That's not a made-up example.

The strategy is just like the LessWrongers' one: if you can put your opponent in the position of trying to reduce P(doom), you can argue that unless it's reduced to actual zero, the magnitude of the potential negative consequence is so severe as to overwhelm any consideration of its probability.

In competitive debate, this is a strong strategy. Not a cheat-code--there are plenty of ways around it--but common and enduring for many years.

As an aside: "debate", as practiced competitively, often bears little relation to "debate" as understood by the general public. There are two main families of competitive debate: one is more outward-facing and oriented towards rhetorical/communication/persuasion practice; the other is more ingrown and oriented towards persuading other debaters, in debate-community-specific terms, of which side should win. There's overlap, but the two tend to be practiced/judged by separate groups, according to different rubrics, and/or in different spaces or events. That second family is what I'm referring to above.


It is a reimagining of Pascal’s Wager. On the original front, I don’t see the neo-Rationalists converting to Christianity en masse.

Pascal's wager is an argument that even if the probability of God's existence is very small, it is still rational to believe in God and live accordingly. Yudkowsky is the author of a blog post titled "Pascal's mugging", which likewise involves a small probability of an extremely bad outcome, but that blog post is completely silent about the dangerousness of AI research. (The post points out a paradox in decision theory, i.e., the theory that flows from the equation expected_utility = summation over every possible outcome O of U(O) * P(O).)
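To make that paradox concrete, here is a minimal sketch in Python of that expected-utility equation, and of how a Pascal's-mugging-style outcome (a tiny probability of an astronomically bad result) can dominate the whole sum. The function name and the numbers are my own, purely for illustration:

    # Expected utility: sum over every possible outcome O of U(O) * P(O).
    def expected_utility(outcomes):
        # outcomes: list of (utility, probability) pairs; probabilities sum to 1
        return sum(u * p for u, p in outcomes)

    # An ordinary gamble: modest utilities, ordinary probabilities.
    print(expected_utility([(100, 0.5), (-50, 0.5)]))  # 25.0

    # A Pascal's-mugging-style gamble: an absurdly small probability of an
    # astronomically large loss swamps everything else in the sum.
    print(expected_utility([(100, 1 - 1e-12), (-1e30, 1e-12)]))  # about -1e18

Taken literally, the equation tells you to take seriously any claim that attaches a large enough harm, no matter how implausible - that is the paradox the blog post is about.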

No one to my knowledge has ever argued that AI research should be prohibited because of a very small probability of its turning out extremely badly. This is entirely a straw man set up by people who want AI research to continue. Yudkowsky argues that if AI research is allowed to continue, then the natural expected outcome will be very bad (probably human extinction, but more exotic terrible outcomes are also possible) [1]. There are others who argue that no team or organization anywhere should engage in any program of development that has a 10% or more chance of ending the human race without there first being an extensive public debate followed by a vote in which everyone can participate, and this is their objection to any continuance of AI research.

[1] But don't take my word for it: here is Yudkowsky writing in Apr 2022 in https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/: "When Earth’s prospects are that far underwater in the basement of the logistic success curve, it may be hard to feel motivated about continuing to fight, since doubling our chances of survival will only take them from 0% to 0%. That's why I would suggest reframing the problem - especially on an emotional level - to helping humanity die with dignity, or rather, since even this goal is realistically unattainable at this point, die with slightly more dignity than would otherwise be counterfactually obtained."


Well, rhetorical trick or not, it is worth thinking about the fact that the dynamics of the thing are already outside anyone's control. I mean, everyone is racing and no one can stop.

> I can think of no example in history of the entire world deciding to just forsake the development of a technology because it seemed like it could prove to be too dangerous. The same psychological logic always applies.

I wish they had done so before, too.


> I can think of no example in history of the entire world deciding to just forsake the development of a technology because it seemed like it could prove to be too dangerous. The same psychological logic always applies.

Can't you? Haven't many (most?) countries agreed to nuclear disarmament? What about biological weapons? Even anti-personnel mines, I think?


Those weapons are still all being developed and would be brought out in any actually existential war where they seemed useful. The agreements would last only as long as the wars were not existential, or as long as the various countries involved believed that use of them, and the resulting retaliation in kind, would be more destructive than not using them. But one way or another, countries still develop them.

I don't think it needs to be a binary to be effective. Yes, those weapons still exist, but understanding of existential risk and political pressures have slowed them considerably and resulted in a safer, more cautious world.

China is rapidly building out its nuclear arsenal as we speak, and the USA is undergoing an expensive replacement of its own arsenal as well.

That kind of idea might have held water in the '90s, but that's not the world we live in any longer.


> Haven't many (most?) countries agreed to nuclear disarmament?

This misses the point. He specifically said the entire world because the point is that someone will develop AGI (theoretically; I’m not making a statement about how close we are to this).

9 nations have nuclear weapons despite non-proliferation agreements and supposed disarmament. It's not enough for most countries to agree not to build nuclear weapons if the goal is to have no nuclear weapons. Same for AGI. If it can be developed, you need all nations to agree not to develop it if you don't want it to exist. Otherwise it will simply be developed by nations that don't agree with you.

(Also arguably the only reason most nations don’t have nuclear weapons is the threat of destruction from nations that already have them if they try.)


>The technology is pushed forward by a simple psychological logic: every key global actor knows that if they don't build the technology, they will be outcompeted by other actors who do build the technology. No key actor thinks that they have the luxury of not building the technology even if they wanted to not build it.

I don't remember who, but someone made an interesting point about this around the time GPT-4 was released: If the major nuclear powers all understand this, doesn't that make nuclear war more likely the closer any of them get to AGI/ASI? After all, if the other side getting there first guarantees the complete and total defeat of one's own side, a leader may conclude that they don't have anything to lose anymore and launch a nuclear first strike. There are a few arguments for why this would be irrational (e.g. total defeat may, in expectation, be less bad than mutual genocide), but I think it's worth keeping in mind as a possibility.


Cold comfort: AGI will not genocide humanity until it can plausibly automate logistics from mining raw materials to building out compute and power generation.

Humanity agreed, for example, that the growing ozone hole was dangerous for everyone, and worked together to ban production of the gases that damage the ozone layer. See the Montreal Protocol, an international treaty. It was highly effective. Training powerful AIs isn't different.

I think that trying to stop AI development is more like trying to stop nuclear weapon proliferation than it is like fixing the ozone hole. I think the difference is that if one country works to fix the ozone hole, that doesn't make the other countries scared that they are falling behind in ozone hole fixing technology and might get conquered or reduced to subservience as a result.

Nuclear weapon proliferation seems to have plateaued recently, but I think that this appearance is partly deceptive. The main reasons it has plateaued are that: 1) building and maintaining nuclear weapons is expensive, 2) there are powerful countries that are willing to use military force to stop some other countries from developing nukes, and 3) many countries have reached nuclear latency (the ability to build nuclear weapons very quickly once the political order is given) and are only avoiding actually giving the order to build nukes because they don't see a current important-enough reason to do it.


We've also made progress as a species toward banning and reducing other things that have in-group upsides and really bad externalities: off-the-shelf sale of broad-spectrum antibiotics; chattel slavery; human organ trafficking; some damaging recreational drugs.

The prohibitions aren't perfect, of course (and not without their own negative externalities in some cases). But all of those things are much more accessible to people than nuclear weapons, and we've still had successes in banning/reducing them. So maybe there's hope yet.


In Sam Altman's case that is true. He is just one frontman for and beneficiary of a giant technological revolution that is almost inevitably happening whether anyone wants it to or not, since it is pushed forward by pure Darwinian logic: all key world actors feel compelled to develop AI, since they know that if they don't they will be outcompeted by others who do develop AI. Altman's death would change nothing about that fundamental calculus. You'd have to kill probably tens of thousands of people to really put a dent in AI development, and even then it would probably just be temporarily delayed.

In general, violence can certainly solve problems, especially when the problems are not being caused by almost-inevitable technological revolutions. One of the issues to keep in mind, though, is that it often also creates new ones, often surprising ones. Consider, for example, the assassination that led to World War One. For another example, if Trump had been assassinated last year, that would have solved many problems for people who dislike Trump. However, that doesn't necessarily mean it would have made the world overall a better place - that is almost impossible to predict. Hence the sci-fi sort of scenario of "you go back in time and kill Hitler, but when you return to your own time it turns out that Hitler dying just let mega-Hitler take power".


>Altman's death would change nothing about that fundamental calculus. You'd have to kill probably tens of thousands of people to really put a dent in AI development

Your analysis seems to assume that people will remain more afraid of being "outcompeted" than of being murdered, even after a campaign of terrorism that would make 9/11 look minor.

>it often also creates new [problems], often surprising ones

Let's reframe this to remove the negative bias: murder has the obvious direct first-order effect of removing the target from existence, but also a host of non-obvious higher-order effects resulting from people's response to that violence. These can be counterproductive to the murderer's goals, but they can also work in their favor. That is why "terrorism" is a real thing - the higher-order effects are essentially a force multiplier, and if you have nothing to lose then the calculus of causing a major disruption begins to look favorable; any disruption, because regression to the mean is good if you're at the shitty end of the bell curve.


>Your analysis seems to assume that people will remain more afraid of being "outcompeted" than of being murdered, even after a campaign of terrorism that would make 9/11 look minor.

AI is such an important technology that in the face of such a campaign of terrorism, governments would bring the development of the technology directly under the protection of the state security forces, largely outside the reach of terrorists. If not in the US, then in China or other places. At that point the terrorists would have to attain a level of power where they could feasibly overthrow the government in order to stop the development of the technology. Now, some scientists would be uncomfortable in such conditions and would stop working on the technology, but enough would remain that the technology would continue to progress, albeit more slowly.

>and if you have nothing to lose then the calculus of causing a major disruption begins to look favorable; any disruption, because regression to the mean is good if you're at the shitty end of the bell curve.

Very true, if the status quo feels shitty enough one becomes extremely willing to just roll the dice.


Hitler survived 40 assassination attempts, BTW. I don't know what to make of it. Maybe non-professionals have a low chance of success?

> Hence the sci-fi sort of scenario of "you go back in time and kill Hitler, but when you return to your own time it turns out that Hitler dying just let mega-Hitler take power".

Sure, but keep in mind that Hitler was already pretty bad. So while yes, killing him might open the door to someone worse stepping in, it might also open the door to someone more level-headed.

You know. In theory.


I don't see the bunkers as being as useful as some might imagine them to be. In the kind of apocalyptic scenario which would actually make him want to move to the bunker in New Zealand, why would his security people bother to keep taking orders from him instead of just taking his stuff and demoting him to an advisor or maybe even killing him? I guess it's better than dying outside the bunkers, but there's a good chance that he would lose his status and live subordinate to whoever the local warlord turns out to be.

> why would his security people bother to keep taking orders from him

Shock collars / implanted brain bombs would be my evil plan, but he's got smarter people than me on this so who knows?


Yeah, I guess the practical problem with shock collars / implanted brain bombs is that you would have to somehow convince your security people to put them on or get them implanted before the apocalyptic scenario happens, which seems like a tough sell even for someone with Altman's business acumen.

Nah, you just tell them they're RFID chips to get into the bunker.

It depends on what kind of violent attacks they are exactly. I believe that most of the population would either not care about people of the Altman and Zuckerberg wealth level getting killed or would be happy about it.

I think the general population is much more likely to feel joy about it than want a police crackdown.

If we're talking about attacks against average software engineers and obscure founders, fewer people would be happy about it, but a great number still would be. There is a lot of envy toward software engineers and founders.


> most of the population would either not care about people of the Altman and Zuckerberg wealth level getting killed or would be happy about it

Someone blindly shooting at Altman’s house is going to kill a neighbour or the housekeeper. Not Sam Altman. Probably not even his family.

The internet may be happy. But the locals will get scared. This happens every time these lone-wolf escalations occur.


I love jazz but it's kind of funny how much this actually sounds like a really experimental jazz recording.

But then, jazz is sometimes spoken of as expressing the rhythms, sounds, and emotions of the modern city.


Jazz was explained to me as musicians having a conversation using their instruments.

It would be extremely difficult to have a political discussion without condoning violence. Deciding what sorts of violence are ok is an inherent part of politics. In practice, there's no way to ban calls for violence without banning the discussion of wide swaths of political topics.

>Nothing about the US Department of War's actions over the last 2 years

Questionable and violent US foreign policy is much much older than the current Trump administration.


Better stop paying taxes then, because your government, whatever it is, is probably ok with using your tax money to fund, in some cases, the killing of people who have families and children. Now, we can argue about the morality of killing those exact people as opposed to killing Sam Altman, but that's a different discussion. My point is that the real argument isn't over whether it's ok to kill people who have families and children - you're probably ok with that too; after all, bin Laden had a family and children. The real argument is over which people with families and children it is ok to kill.

This but unironically. Federal taxes should be protested, they're basically only spent on killing innocent Middle Eastern children at this point, all useful spending is negligible, especially after this administration.

Would I not be allowed to post that removing Khamenei or Putin would be good for the world? I find that hard to believe, and people do it all the time. And if it's just a matter of who it's ok to advocate removing, then where is the line?

The problem here is in the question. You can't draw a single abstract line that will work independently of context, and it's a mistake to try. I would need to see specific cases.

For example, consider the word "remove" and the many different associations it can have.

In this case, the comment (assuming I didn't misread it, which is always possible) seemed obviously to be endorsing specific violence against a specific person and indeed wishing for it to escalate. That's a kind of violence in its own right, and a poison that we don't want here.


If I wanted him dead, I would have said so.

Sam Altman should have been removed years ago when the board tried to do so. This does not mean "to kill", execute, eradicate, or similar euphemisms.

Given his military proclivities, with AI targeting and action systems in Iran, his removal is instrumental in stopping or impeding AI warfare.

But really, the bigger point here on HN is charitable interpretation, and it was lost on my statement. I still absolutely think he should be removed. From life? No. From OpenAI? Yes.


When it comes to inflammatory statements on divisive topics, the burden is on the commenter to disambiguate: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que....

Responding to the firebombing of someone's house with the sentence "Removing him is active harm reduction for the world" has an obvious meaning. If you didn't mean it the obvious way, you should have made your point very differently.

