Game of AI Thrones
On Sam Altman's boomerang return to OpenAI as CEO, the problem with superstar tech wunderkinds, and why you should care whether or not you work in tech.
In my days studying International Relations I learned there are two books that anyone who wants to understand the principles of strategy needs to be familiar with.
One is The Prince by Niccolò Machiavelli, a well-known political treatise, and the other is The Art of War by Sun Tzu, an ancient Chinese military treatise whose wisdom has unfortunately been perverted by its study in business management courses.
While one may argue that the world in which these books were conceived was very different from the one we live in today, there is a reason why both are still relevant and studied as a basis for understanding the foundations of politics (and, by extension, geopolitics) and diplomacy despite having been written centuries ago. If you are wondering what that reason is, the answer can be found in the recent OpenAI drama, which, like the telenovelas of my childhood, has delivered the right line, or headline, at the right time for maximum dramatic effect to keep the audience watching.
If you speak Spanish, this is exactly how things have played out between Sam Altman, OpenAI and Microsoft.
Machiavelli is argued to have been the first theorist to divorce politics from ethics in his pragmatic handbook for rulers, which includes the famous precepts that “it is better to be feared than loved” and that the appearance of virtue is more important than virtue itself.
Principles that Sam Altman seems to embody.
And yes, I hear you say that when Altman was ousted many members of staff threatened to leave in protest (indeed Greg Brockman, OpenAI’s president, walked out right after the news broke), so that would in fact prove that he is loved and apparently virtuous. But as I said at the start, we also need to keep in mind The Art of War, which states that the five essential principles for victory are:
Know when to fight and when not to fight
Understand how to deploy large and small numbers
Have officers and men who share a single will
Be ready for the unexpected
Have a capable general unhampered by his sovereign.
Which, translated into the headlines we’ve read over the past week, means the following:
Altman has been very cryptic and kept mostly quiet throughout: from the unexpected firing, to the rumours that OpenAI was in conversations to rehire him only 24 hours after his ousting, to Microsoft CEO Satya Nadella’s announcement that he and Brockman were joining Microsoft to lead a new AI research division. Altman’s reaction to this leaves room for interpretation, as he never clearly says he is “excited about joining Microsoft” or “looking forward to being part of the team”, which are the usual replies one might expect. This was Altman’s deployment of force: keep them guessing what your next move is and observe the reactions so you can leverage your true power.
Principles number 2 and 3 are best exemplified by the debacle that unfolded following Altman’s firing and Brockman’s departure. Things escalated quickly as hundreds of OpenAI staff threatened to leave and join Altman at Microsoft if the board didn’t reinstate Altman and Brockman, but by then OpenAI was already playing musical chairs to secure a new CEO.
After a weekend of cliffhangers, principle number 4: Sam Altman is coming back to OpenAI as CEO.
Finally, principle number 5: not only have Altman and Brockman returned to their positions at OpenAI, but the board that fired Altman has been dismantled and a more commercially friendly board has been appointed (more on this later), with potentially a seat for Microsoft, OpenAI’s biggest investor, which has clearly shown its support for Altman and Brockman. If that’s not a genius counter-strategy to regain power over those who have overthrown you, I don’t know what is.
Sam Altman’s soundtrack after returning to OpenAI
If you are not familiar with the figure of Cesare Borgia, the real-life inspiration for Machiavelli’s The Prince, I urge you to get acquainted with him, as the Altman-OpenAI drama is a clear example of how Cesare Borgia ruled over his territories: he was such a powerful and feared leader that his presence alone stabilised them and his absence caused chaos.
Although Altman may be loved and respected by those who have witnessed his ascent to tech royalty, especially since the advent of ChatGPT, his public persona may not be so different from that of Cesare Borgia. Whether it was Altman’s intention or not, the past week has shown the world how he single-handedly wields an incredible amount of power just by being, or not being, where he is supposed to be, which is not necessarily a good thing.
For Altman is not just the capable founder and CEO of a tech company who has been reinstated to his role. He is an eminent global voice in the conversation on the future of artificial intelligence: the wunderkind rockstar of AI who tours the world and is received by European, Asian and Middle Eastern leaders with a red carpet as he attempts to woo them before regulators do.
And here is where the recent OpenAI drama gets a bit more complicated, as it has now emerged that researchers at OpenAI had warned the board in a letter about the dangers of a new model they were developing before Altman’s sacking on Friday. The same model that Altman may have referred to as a new breakthrough at a conference in San Francisco one day before his surprise dismissal, which now seems to have been precipitated by that warning letter.
“Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I’ve gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime,” said Altman on November 16th at the Asia-Pacific Economic Cooperation Summit in San Francisco, one day before his surprise ousting.
While there has been lots of speculation over why Altman was fired without warning, including that he was working on other ventures on the side, in light of the new information the hypothesis that the board decided to get rid of Altman due to concerns over safety seems plausible. Especially if we consider that safety lies at the core of OpenAI’s mission as a nonprofit venture whose aim is to develop “safe and beneficial artificial general intelligence for the benefit of humanity”.
It all sounds fantastic and great and almost true. However, there are a number of problems.
The first one is that artificial general intelligence refers to AI that is as smart as or smarter than a human, and this is precisely what has rung all the alarms that precipitated the events of the past week.
Q* (pronounced Q-Star), the new model researchers were concerned about, was apparently able to solve basic maths problems it had not seen before, which is a significant development in AI. This could have further cemented OpenAI’s leadership in the AI space and potentially added a new revenue stream. It may well be the model Altman hinted at during his San Francisco appearance last week.
The second issue is OpenAI’s structure.
The company was originally created as a nonprofit, with a board that rules over a commercial division headed by Altman and backed by substantial investment from Microsoft, which currently controls 49% of the business. Perhaps it is worth noting that the other key element in OpenAI’s mission is that the for-profit company would be “legally bound to pursue the nonprofit’s mission”.
On the one hand there’s a board that seeks to benefit humanity, not investors, and on the other there’s a major shareholder that needs to see a return on its investment and relies on Altman’s vision to attract it. In short: when your mission statement is in conflict with your source of income, it’s a recipe for disaster that can escalate quickly.
Having said that, at the time of writing the board has not yet offered an explanation of why it decided to sack Altman, but both parties have agreed to an internal investigation into what has led to this unexpected special tech edition of Game of Thrones.
With that context in mind, let’s return to Altman for a second as the protagonist at the centre of this drama.
There have been many interesting analyses of the aftermath of OpenAIgate, including one called The ‘defenestration’ of Sam Altman and the schism in AI by Nina Schick, which argues that Altman, despite his ‘God-like’ tech credentials, is just a man and as such is as vulnerable as the rest of us. While that piece was published a day before Altman was restored to his former role, and although I see Altman more as a clever strategist (even if against his will) than as a vulnerable victim, Schick makes a compelling case about how much power rests in the hands of so few when it comes to AI development, and how clashes among big tech personalities and their convictions can lead to further disruption.
“The biggest takeaway we can glean from Open AI's meltdown is that commercializing AND containing these systems will become *politically* (even if not hypothetically) incompatible as the differences between the pioneers of AI development become even more pronounced,” Nina Schick writes in her Substack article.
Another relevant conclusion is that while Altman has made a prodigal return and his personal brand has been strengthened, OpenAI has been negatively impacted by the internal conflict even after Altman’s return. Again, not a good sign when your leader’s image and your brand image sit at opposite ends of the spectrum.
OpenAI was already vulnerable, as Alex Kantrowitz argued in this article as well as in his now premonitory Web Summit talk last week. In his analysis of the debacle, Kantrowitz argues that OpenAI now faces a bigger problem: credibility as a champion for AI safety, which had been its trademark. After the whirlwind of the past week, that credibility has gone up in smoke, hindering the efforts of Altman’s world tour to influence policy.
According to Kantrowitz “with the non-profit board’s decision so quickly reversed after pressure from investors (and well-compensated employees), the myth will take a hit. OpenAI will now become one of the pack, without its special sheen, which will change its ability to influence policy.”
Since the focus on safety had also acted as a catalyst to attract top talent, Kantrowitz argues that with a new board that seems more commercially minded than safety-minded, there may be unlikely winners in the war for AI talent, including Meta, as researchers reconsider whether OpenAI can indeed honour its mission statement and put safety over profit.
What are the key takeaways of this whole saga of tech and power then?
There seems to be agreement that Microsoft has benefited from the drama.
Any claims about prioritising safety when it comes to frontier AI have lost credibility. Think about it: if the CEO of a company that puts safety first is supposedly fired over concerns about safety, only to be reinstated because its $86bn valuation was at stake, you don’t have to be the sharpest tool in the box to know what that means.
As a result, we may have just entered the Wild AI West, and those who want to push for faster development in AI may have won this battle.
Everyone loves a wunderkind. They often have big tech personalities and offer strong leadership, which is very reassuring. The downside is that they can single-handedly overhaul companies’ priorities and policies as their individual image becomes associated with a company’s direction and potential commercial success. Sam Altman is proof that the future of AI is currently in the hands of an influential few.
Perhaps the removal and immediate restoration of Sam Altman can make us reflect on whether it may be time we knocked these almost omnipotent figures off their pedestal and made them operate in collaboration, not isolation, with the other actors impacted by their leadership.
Not only with their own board (which should be fit for purpose and strong enough to withstand storms, including the loss of their captain) but also with the rest of the actors in society, from public and private institutions to a wide range of social groups that can proactively contribute to the conversation on what kind of AI solutions we should develop to effectively and safely benefit humanity.
After all, in his last public appearance at the San Francisco conference on 16th November, Altman himself stated that “it is important to acknowledge the need for guardrails to protect humanity from the existential threat posed by the quantum leaps being taken by computers.” Feel free to replace computers with wunderkind gurus and it still works. As he added: “I really think the world is going to rise to the occasion and everybody wants to do the right thing.”
I think we really do.
While we wait for that utopia, get ready for Amazon to emerge shortly as the newly crowned AI king to sit on the throne of spades.
For the time being they’ve launched a new AI skills programme to train 2 million people in "tech and tech-adjacent roles" by 2025, in a bid to gain ground on the likes of Google and other tech giants in the scramble for talent. The programme will include eight online courses and be free to access online, even for non-Amazon employees.
(Note to self: Google Amazon training programme because you never know what the future might hold)
And since Jeff Bezos is now offloading almost $1 billion of his Amazon shares to fund his space venture, perhaps the next initiative Amazon should consider funding could be a space skills programme so that Elon can finally see one of his spaceships come back in one piece.
Tech and Creative News
More than a quarter of the UK’s workforce will move to a 32-hour week by 2033 thanks to the advancement of AI
10 regions around the UK are to receive a share of £36m of DSIT funding to boost their 5G connectivity.
Zara has become the latest brand to launch on TikTok Shop
And drum roll please, as in the middle of the past week’s turmoil OpenAI has launched a new ChatGPT feature
Creative News
Royal Holloway has been announced as the lead partner for a National Lab for research and development (R&D) in Creative Technology
Participants on Squid Game: The Challenge threaten legal action against Netflix and producers
As Doctor Who gets ready to celebrate its 60th anniversary, a new BBC report reveals the series has contributed more than £134m in gross value added (GVA) in Wales, and a total UK contribution to the economy of £256m.
And more good news: according to Pact UK's report, UK TV exports reached a record £1.853 billion in 2022-23, a 22% year-on-year increase, with 53% of programmes sold to international streaming and video-on-demand platforms, up from 39% the previous year.
New research commissioned by Creative PEC outlines recommendations to improve access within the TV, film and games industries for disadvantaged young people.
Last but not least
As this past week I’ve had the pleasure of being knocked out by a rather annoying cold, my energy and germ levels haven’t allowed me to leave the house, and therefore the plans to watch Saltburn and Anatomy of a Fall had to be aborted.
But when life gives you lemons, Netflix gives you chefs’ shows. So it is with great pleasure that I have unashamedly bingewatched the three seasons of Gordon, Gino and Fred’s Road Trip that are now available for streaming.
Can I just say these three are the most hilarious combination I’ve ever seen on a British show? As a foreigner myself, Gino and Fred’s accents melt my heart and their reactions to Gordon’s antics are just fantastic and completely relatable. Besides, the two of them are such a great pair and prove that Italians and French people do not always hate each other.
And having said that, I do love Gordon and have been a fan since the days I still owned a TV and discovered him for the first time. I have actually been watching Kitchen Nightmares recently just because I needed some passive-aggressiveness to spice up the dull politeness of my daily interactions. And he has a natural gift for comedy.
Have a good weekend!