Yes, 95% agreement in any company is unprecedented, but:
1. They can get equivalent position and pay at the new Microsoft startup during that time, so their jobs are not at risk.
2. Sam approved each hire in the first place.
3. OpenAI is selecting for the type of people who want to work at a non-profit with a goal in mind instead of another company that could offer higher compensation. Mission driven vs profit driven.
However they arrived at the decision to band together and quit, it was a good idea, and it worked. It is also a check on the power of a bad board of directors, in a situation where a board otherwise cannot be challenged. "OpenAI is nothing without its people".
> OpenAI is selecting for the type of people who want to work at a non-profit with a goal in mind instead of another company that could offer higher compensation. Mission driven vs profit driven.
Maybe that was the case at some point, but clearly not anymore, ever since the release of ChatGPT. Or did you not see them offer completely absurd compensation packages, e.g. to engineers leaving Google?
I'd bet more than half the people are just there for the money.
I think the analogy is kind of shaky. The board tried to end the CEO, but employees fought them and won.
I've been in companies where the board won, and they installed a stoolie who proceeded to drive the company into the ground. Anybody who stood up to that got fired too.
I have an intuition that OpenAI's mid-range size gave the employees more power in this case. It's not as hard to coordinate a few hundred people, especially when those people are on top of the world and want to stay there. At a megacorp with thousands of employees, the board probably has an easier time bossing people around. Although I don't know if you had a larger company in mind when you gave your second example.
My comment was more a reflection of the fact that you might have multiple different governance structures in your organization. Sometimes investors are at the top. Sometimes it's a private owner. Sometimes there are separate kinds of shares for voting on different things. Sometimes it's a board. So you're right: depending on the governance structure, you can have additional dragons. But you can never prevent any of these three from being a dragon. They will always be dragons, and you never want to wake them up.
It's clear most employees didn't care much about OpenAI's mission -- and I don't blame them since they were hired by the __for-profit__ OpenAI company and therefore aligned with __its__ goals and rewarded with equity.
In my view the board did the right thing to stand by OpenAI's original mission -- which now clearly means nothing. Too bad they lost out.
One might say the mission was pointless since Google, Meta, MSFT would develop it anyway. That's really a convenience argument that has been used in arms races (if we don't build lots of nuclear weapons, others will build lots of nuclear weapons) and leads to ... well, where we are today :(
Where we are today is a world where people do not generally worry about nuclear bombs being dropped. So seems like a pretty good outcome in that example.
The nuclear arms race led to the Cold War, not a "good outcome" IMO. It wasn't until nations started imposing arms-control regulations that we got to the point we're at today with nuclear weapons.
Note that the response is Altman's, and he seems to support it.
As additional context, Paul Graham has said a number of times that Altman is one of the most power-hungry and successful people he knows (as praise). Paul Graham, who's met hundreds if not thousands of experienced leaders in tech, says this.
I'm not sure I buy the idea that Ilya was just some hapless researcher who got unwillingly pulled into this. Any one of the board could have voted not to remove Sam and stop the board coup, including Ilya. I'd bet he only got cold feet after the story became international news and after most of the company threatened to resign because their bag was in jeopardy.
That's a strange framing. In that scenario, wouldn't it be that he initially made the decision he thought was right and aligned with OpenAI's mission, then, on seeing the public support Sam had, decided to backtrack so he'd have a future career?
Ilya signed the letter saying he would resign if Sam wasn't brought back. Looks like he regretted his decision and ultimately got played by the 2 departing board members.
Ilya is also not a developer, he's a founder of OpenAI and was the CSO.
Why are you assuming employees are incentivized by $$$ here, and why do you think the board's reason is related to safety or that employees don't care about safety? It just looks like you're spreading FUD at this point.
Everything is vaporware until it gets made. If you wait until a new technology definitively exists to start caring about safety, you have guaranteed it will be unsafe.
Lucky for us this fiasco has nothing to do with AGI safety, only AI technology. Which only affects automated decision making in technology that's entrenched in every facet of our lives. So we're all safe here!
> If you wait until a new technology definitively exists to start caring about safety, you have guaranteed it will be unsafe.
I don’t get this perspective. The first planes, cars, computers, etc. weren’t initially made with safety in mind. They were all regulated after the fact and successfully made safer.
How can you even design safety into something if it doesn’t exist yet? You’d have ended up with a plane where everyone sat on the wings with a parachute strapped on if you designed them with safety first instead of letting them evolve naturally and regulating the resulting designs.
If you're trying to draw a parallel here, then safety efforts and the federal government need to catch up. There are already commercial offerings that any random internet user can use.
I agree, and I am not saying that AI should be unregulated. At the point the government started regulating flight, the concept of an airplane had existed for decades. My point is that until something actually exists, you don’t know what regulations should be in place.
There should be regulations on existing products (and similar products released later) as they exist and you know what you’re applying regulations to.
I understand where you're coming from, and I think that's reasonable in general. My perspective would be: you can definitely iterate on the technology to come up with safer versions. But with this strategy you have to make an unsafe version first. If you got in one of the first airplanes ever made, the likelihood of crashing was pretty high.
At some point, our try-it-until-it-works approach will bite us. Consider the calculations done to determine whether fission bombs would ignite the atmosphere. You don't want to test that one and find out. As our technology improves exponentially, we're going to run into that situation more and more frequently. Regardless of whether you think it's AGI or something else, we will eventually run into some technology where one mistake is a cataclysm. How many nuclear close calls have we already experienced?
The principles, best practices, and tools of safety engineering can be applied to new projects. We have decades of experience now. I'm not saying it will be perfect on the first try, or that we know everything that's needed. But the novel aspects of AI are not an excuse not to try.
Assuming employees are not incentivized by $$$ here seems extraordinary; with this much money involved, it would take a pretty robust argument to show money isn't playing a major factor.
One developer (Ilya) vs. One businessman (Sam) -> Sam wins
Hundreds of developers threaten to quit vs. Board of Directors (biz) refuse to budge -> Developers win
From the outside it looks like developers held the power all along ... which is how it should be.