
How OpenAI's Rollercoaster Week Lays Bare a High-Stakes Game

23/11/2023
Publication
London, UK
Technology experts from the advertising and marketing world reflect on what we can learn from the saga of Sam Altman, OpenAI and Microsoft
It’s been a topsy-turvy week for AI watchers, with the most influential players in the game taking a pause from the race towards general intelligence to give us a performance of boardroom dramatics straight out of Waystar Royco.

First came Friday's announcement from the OpenAI board that it had 'lost confidence' in CEO Sam Altman and that he was being removed. Co-founder Greg Brockman swiftly quit. The weekend saw scores of OpenAI employees tweet their shock and dissatisfaction. By Monday morning, Satya Nadella, CEO of Microsoft, a major partner of OpenAI that has integrated ChatGPT into several applications and, er, Bing, announced that he had swooped in and hired Sam and Greg.

With OpenAI employees threatening to leave, OpenAI's chief scientist and co-founder Ilya Sutskever, who had been linked to the board's initial firing of Sam, tweeted his regret and the most audacious take-backsie in history. A petition from Sutskever and OpenAI employees demanded that the board members vacate their seats … and by Wednesday morning, Sam was back.

If you’re experiencing whiplash, you’re not the only one. And as the advertising and marketing industry has been one of the fastest to adopt the technology, with the major holding companies betting heavily on it, one might also wonder what this very, very human squabble means, and if we can glean anything about the trajectory of a technology that is supposed to shape our future.


Michael Dobell

Chief Innovation Officer at Media.Monks

It’s a high stakes game being played when AI firms are flush with VC cash but have yet to demonstrate the annual recurring revenue required to be viable. Thus, the push for speed, features, and an adequate paid user base.
 
On the other hand, ChatGPT was basically launched as an MVP, and there's a lot of fundamental research and development needed to make it run faster, with lower energy demands and with the built-in ethics and safety systems needed to scale safely. There is also the matter of giving the economy, culture and governance time to adapt and make a responsible transition.
 
Adding to the tension is the flywheel effect, where being fast means being first, but with that comes risk. There's a balancing act going on that's familiar to any organisation valuing innovation and growth: it's about getting the communication and culture right as much as it is the technology and business underpinnings. Set the culture appropriately and it's like getting the pH in the garden right - good things grow. Sam Altman had the pH right, judging by the 743 out of 770 employees at OpenAI who signed the petition for board resignations.


Isabel Perry

VP of Emerging Tech at DEPT

OpenAI was set up for the leading minds in AI to work out how humanity should align its goals with the future of AI. It turns out they couldn't even align the individual goals of six people - four board members and two founders.

Microsoft, unencumbered by a values-driven board and ostensibly free to chase profit over ethics, briefly snapped up Sam Altman and Greg Brockman, two of the brightest minds in AI, with the sweet prospect of semi-acqui-hiring the rest of the best of OpenAI while avoiding anti-competition blockades.

Yet, in a final shocking plot twist, less than a week later, Sam and Greg announced they were returning to OpenAI, along with an entirely new board of directors - turning down the opportunity to work at Microsoft, a company with $140bn of cash on its books, world-leading compute capacity, one of the largest databases in the world, and a team that recently announced it is developing an AI chip to rival Nvidia's.

This drama feels more Hollywood than Silicon Valley, but is there anything more dramatic than debating the future of the human race as we develop a technology more powerful than ourselves? 

At DEPT® we know the risk in AI isn't about how the models we use are trained, but about where you choose to invest. It's going to be choppy out there as companies get acquired, fail, and start up. We're concentrating on delivering the best experiences, made possible by generative AI, no matter who's making the models and what kind of backroom intrigue is happening.