#ChatGPT #Microsoft #OpenAI #BoardGovernance
Fig. 1. Former OpenAI CEO Sam Altman and Microsoft CEO Satya Nadella. Getty Images, 2023.
Update: Sam Altman is returning to OpenAI as CEO, ending days of drama and negotiations brokered with the help of major investor Microsoft and Silicon Valley insiders (Bloomberg, 11/22/23). In sum, there were more issues without Sam than with him, and the board realized that quickly. So now some board members have to be shown the door.
Some may view a fired executive like Sam Altman as damaged goods, but we all know that corporate boards get these things wrong all the time, and it is often more about office politics and cliques than substantive performance.
The board described its decision as the result of a “deliberative review process which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.” Yet the statement makes little sense and ignores the context of an emerging technology at a moment like this.
As a result of this nonsensical firing, there was likely no job interview when Sam Altman joined Microsoft. He was already validated as a thought leader in the tech and generative AI community, so one was hardly needed. Microsoft CEO Satya Nadella was a fan and had already invested billions in OpenAI. He saw the open opportunity and took it fast, before another tech company could. The same thing happened when Oracle CEO Larry Ellison hired Mark Hurd in 2010 after HP fired him, and the results were great.
This raises the question of how valuable job interviews really are in emerging tech, or for people with visible achievements. What is the H.R. screener or some tech director in a fiefdom going to ask you? They would hardly understand the likely answers in any meaningful way. I know many tech and business leaders who have wasted time in dumb interviews in contexts like these, and it reflects poorly on the companies that set them up this way.
In other words, plenty of people will not want to work for OpenAI because of how Altman was publicly treated, while Microsoft looks more inclusive and forward-thinking. I am sure many people will leave OpenAI to follow Altman to Microsoft, and that is really how OpenAI shot itself in the foot, especially considering Microsoft’s size.
Any failings and risks designed into ChatGPT are as much OpenAI’s problem as they are every other company’s working in this vastly unknown and emerging area of tech. To blame them on Altman in this context seems unreasonable, which makes him a fall guy.
There are good and bad things with AI, just as with any technology, yet the good far outweighs the bad in this context. Microsoft knows there are problems in AI around cybersecurity, fraud, IP theft, and more. The bigger and more capable its AI team, the better it can address these issues, now with Altman’s help.
Now, of course, Altman has to be evaluated on his performance at Microsoft, making sure AI stays viable and within approved guardrails, and hopefully innovating a few solutions that make society better. Yet the free market of other tech companies and regulators also shares that responsibility.
About the Author:
Jeremy Swenson is a disruptive-thinking security entrepreneur, futurist/researcher, and senior management tech risk consultant. Over 17 years he has held progressively senior roles at banks, insurance companies, retailers, healthcare organizations, and government agencies, including serving as a member of the Federal Reserve Secure Payment Task Force. Organizations relish his ability to bridge gaps and surface hidden risk management solutions while improving processes. He is a frequent speaker, published writer, and podcaster, and does some pro bono consulting in these areas. As a futurist, his writings on digital currency, the Target data breach, and Google combining Google+ video chat with Google Hangouts have been validated by many. He holds an MBA from St. Mary’s University of Minnesota, an MSST (Master of Science in Security Technologies) from the University of Minnesota, and a BA in political science from the University of Wisconsin–Eau Claire.