What OpenAI teaches us about fixing the tech culture divide

Image by fauxels from Pexels.

July 18, 2024

OpenAI and other firms face an ongoing struggle to get the balance between invention and safety right. It is indeed a battle between those wanting to innovate at all costs and without guardrails and those intent on mitigating known and unknown risks, writes Andrea Bonime-Blanc.

As we continue full speed ahead into our exponential tech future, there’s a somewhat alarming but correctable divide among humans on planet Earth.

In one corner, we have ‘accelerationists’ (mostly Silicon Valley tech bros and financiers who like to “move fast and break things”). In the other, we have ‘decelerationists’ (diverse international leaders, thinkers, and doers, many of them women and/or people of color, more focused on ensuring stakeholder guardrails). This drama plays itself out at every level and in every sector, everywhere.

OpenAI illustrates the point in microcosm. Consider the five days of drama at OpenAI in November 2023, when two longstanding board members—Helen Toner and Tasha McCauley—were asked to resign after the OpenAI board first fired and then rehired CEO Sam Altman. His firing was ostensibly for “not being consistently candid” with the board. But then he was back and the two board members were gone.

What happened next illustrates the accelerationist vs. decelerationist battle further. OpenAI’s former Chief Scientist and co-founder, Ilya Sutskever (the brains behind ChatGPT), had advocated for stricter guardrails and helped to create a superalignment (safety) team focused on ensuring the safe development of artificial general intelligence. He was the board member who had delivered the firing message to Altman. But as the tide turned and Altman returned, Sutskever was sidelined and so was his superalignment team. Sutskever left OpenAI in late June 2024 to create his own safety-first company, Safe Superintelligence.

The Economist published point/counterpoint opinions from former and current OpenAI board members, further underscoring this cultural divide. The first, written by the two female board members fired in November 2023 and titled “AI firms mustn’t govern themselves,” advocated for heavier regulation of AI. The other article was written by the two male board members who replaced them—Bret Taylor and Larry Summers—and argued that OpenAI is a leader in both AI safety and capability.

While OpenAI and other firms may still get the balance between invention and safety right, it is an ongoing struggle, one this case illustrates in microcosm almost to perfection. It is indeed a battle between those wanting to innovate at all costs and without guardrails and those intent on mitigating known and unknown risks. It is a battle waged at every level, locally and internationally.

However, in addition to the aggressive for-profit ventures looking to make billions on exponential tech, there are also forces arrayed at all levels working hard to raise awareness, develop solutions, and provide examples of how responsible AI can be done.

Check out the following governmental, business, and societal resources: the EU AI Act, the ASEAN Guide on AI Governance and Ethics, the Center for Humane Technology, the African Observatory on Responsible AI, the Bletchley Declaration from the 2023 AI Safety Summit, the U.S. AI Safety Institute at NIST, the Future of Life Institute, the OECD.AI Policy Observatory, the Inter-American Development Bank’s AI initiative, and the White House Executive Order on Safe, Secure, and Trustworthy AI.

Being an innovator doesn’t exclude being a steward—we can be both innovators and stewards, and there are great examples out there already doing this, like Anthropic, Safe Superintelligence, Microsoft AI, and others. Let’s do this!

About Andrea Bonime-Blanc: Dr. Andrea Bonime-Blanc is the Founder and CEO of GEC Risk Advisory, a board advisor and director, and author of multiple books.
The views presented in this article are the author’s own and do not necessarily represent the views of any other organization.