What’s Going on at OpenAI?
The AI industry experienced OpenAI-shaped shockwaves over the weekend as Sam Altman was ousted as CEO of OpenAI. At the time of writing, little is known about the reasoning behind Altman’s dismissal, but the industry leader’s Board of Directors, itself a small, tightly-knit team, cited “A deliberative review process by the Board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities”.
If anyone’s an expert on careful, legally concise language, please come to the front and tell everyone what that could mean. Clearly, they’re keeping their cards close to their chest, but for what reason?
For more insight, a timeline of events is often helpful, and a screenshot of a post on X by Greg Brockman, OpenAI’s former President and the other co-founder to have left the company this weekend as a result of this saga, helps with exactly that. With the bullet points written in the third person, it’s clear that the timeline was drafted by either Altman or a lawyer, but it paints a slightly rushed picture.
According to the then-President, both he and Altman were told within 25 minutes that Altman was being fired, and supposedly the only person outside the Board who knew about the decision prior to the meeting was Mira Murati, the short-lived interim CEO.
For such a momentous decision about a company’s future, it’s odd that it would all be done and dusted in a hastily scheduled Google Meet call, which raises the question of what exactly he did to deserve this fate.
The haste with which the Board made its decision and informed Altman of it smacks of a distaste for the former CEO that had been simmering under the surface for some time before this all unfolded publicly.
There has been much debate recently around the duties and responsibilities that the companies behind these new AI models must uphold in order to ensure that humanity’s safety is protected at all costs, and OpenAI’s founding charter says exactly that.
Founded as a non-profit organisation in 2015, OpenAI was not always the industry leader it is today. Its original mission statement was to ensure that artificial general intelligence (AGI) benefits all of humanity, but it restructured in 2019, creating a capped-profit subsidiary, so that it could “Raise capital in pursuit of this mission, while preserving the nonprofit’s mission, governance, and oversight”.
How true that last part really is remains to be seen, but that restructuring certainly seems to have been a major catalyst in all of this.
There had been reports of Altman wanting to commercialise parts of OpenAI faster than other Board members were comfortable with, and there was clearly a worry in some corners of the company that this would divert it too far from its original mission.
On the other hand, Jakub Pachocki, the company’s director of research, Aleksander Madry, head of a team evaluating potential risks from AI, and Szymon Sidor, a seven-year researcher at the startup, all resigned on Friday night in protest at Altman’s removal from the top job.
To put this all into some perspective, we had the CEO, the President (and behind-the-scenes software engineer extraordinaire), the individual responsible for GPT-4’s pre-training, and the new head of OpenAI’s Preparedness Team (the team that helps “Track, evaluate, forecast, and protect against catastrophic risks”, according to the company) all being fired or stepping down in one weekend.
These were pillars of one of the world’s leading organisations, all gone in the space of 72 hours.
According to a transcript of an all-hands meeting chaired by Ilya Sutskever, the individual who arranged the calls to fire Altman over the weekend, Sutskever faced some difficult questions from OpenAI employees.
Some called it a “coup” and a “hostile takeover”, to which Sutskever replied, “You can call it this way, and I can understand why you chose this word, but I disagree with this. This was the Board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity”.
The next inquisitor asked whether “Backroom removals” were the best way to run the company, to which Sutskever replied, “I mean, fair, I agree that there is a not ideal element to it. 100%”.
There was also an interesting post on X from Sutskever, stating that “Ego is the enemy of growth”. Presumably this has something to do with the goings-on at OpenAI, but it suggests that this fiasco goes far deeper than just commercialists versus anti-commercialists within the company.
It speaks to a hostile work culture within the Board, if not further down the company as well, and this was borne out by reports of Altman moving to reduce Sutskever’s role in the company before this all started. It makes sense then that Sutskever seems to have led the majority of this process.
So how does Microsoft fit into all this? Well, Microsoft is, as stated previously, OpenAI’s largest and most recognisable investor, and its stock took a 2% hit in the wake of Sam Altman’s firing. However, yesterday (21st November 2023), Microsoft’s CEO, Satya Nadella, announced that the company would be creating a new advanced AI research team, with Altman leading it as CEO. This marked a significant step towards the potential collapse of OpenAI, and those warning signs were clearly heeded.
This, combined with the spectacular explosion of support for Altman from within the company, seems to have worked. Today, it has been announced that Sam Altman will be going back to OpenAI to be reinstated as CEO, with the Board being all but entirely replaced.
Considering that almost the entire workforce signed an open letter demanding Altman’s reinstatement as CEO, threatening otherwise to leave the company for other Silicon Valley giants, it’s hard to see how the now ex-Board had any other choice in the matter.
Now that everything has stabilised somewhat, what exactly does this soap opera mean for the industry as a whole? OpenAI was valued at $80bn last month, but its overheads are also extremely high. The BBC reported in October that if every search query cost the same as a chatbot query, even Google wouldn’t be able to afford to run its search engine.
It therefore stands to reason that Altman & Co would look to commercialise faster in order to keep investors onside so that the lights, and the many servers, can stay on. If that does turn out to be the catalytic rift that sparked this episode, one would imagine the anti-commercialists will be silenced, at least for now.
In any case, Microsoft’s share price has stabilised, and Altman is back where he started, with a new Board of Directors to boot. “People building AGI (artificial general intelligence) are unable to predict consequences of their actions three days in advance”, wrote quantum physics professor Andrzej Dragan on X. Credit to the excellent Zoe Kleinman (BBC) for finding this quote on the whole debacle.