Seismic AI Discovery May Have Led to Altman’s OpenAI Firing

A letter written by staff researchers to the OpenAI board of directors, in the days before Altman’s dismissal, warned of an AI discovery that ‘could threaten humanity’


Sam Altman, CEO of ChatGPT maker OpenAI, arrives for a bipartisan Artificial Intelligence (AI) Insight Forum for all US senators hosted by Senate Majority Leader Chuck Schumer at the US Capitol in Washington. Photo: Reuters

After six dramatic days during which he was abruptly fired and then hired back, Sam Altman is set to return to artificial intelligence giant OpenAI, possibly with more power than ever before.

But questions still loom over what prompted the now-former OpenAI board to dismiss Altman, a man who took the company’s valuation from $29 billion to over $80 billion this year alone.

The answer may lie in a letter written by some staff researchers to the then OpenAI board of directors warning of a seismic AI discovery that ‘could threaten humanity’, two people familiar with the matter told Reuters.

Also on AF: Sam Altman Back at OpenAI as CEO After Days of Drama

The letter raised concerns about the commercialising of AI advances before understanding their consequences, the sources said.

The previously unreported letter and AI algorithm were key developments before the short-lived ouster of Altman — seen within the industry as the face of generative AI, sources said.

The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman’s firing.

Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.

OpenAI also declined to comment on the letter. But after being contacted by Reuters, the company acknowledged the existence of a project called Q* in an internal message to staffers.

The message was sent by long-time executive Mira Murati, who also acknowledged that a letter had been sent to the board before the weekend’s events.

AI to surpass humans

Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup’s search for what’s known as artificial general intelligence (AGI), one source said.

OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity. Researchers consider math to be a frontier of generative AI development.

Currently, generative AI — as seen in ChatGPT — is good at writing and language translation because it statistically predicts the next word, which also means its answers to the same question can vary widely.

But conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe.

Q* can currently perform math only at the level of grade-school students. But acing such tests has made researchers very optimistic about the project’s future success, the source said.

At the ‘frontier of discovery’

In their letter to the board, researchers flagged AI’s prowess and potential danger, the sources said without specifying the exact safety concerns noted in the letter.

There has long been discussion among computer scientists about the danger posed by highly intelligent machines, for instance whether they might decide that the destruction of humanity was in their interest.

Researchers have also flagged work by an “AI scientist” team, the existence of which multiple sources confirmed. The group, formed by combining earlier “Code Gen” and “Math Gen” teams, was exploring how to optimise existing AI models to improve their reasoning and eventually perform scientific work, one of the people said.

Altman led efforts to make ChatGPT one of the fastest-growing software applications in history, and drew from Microsoft the investment – and computing resources – necessary to get closer to AGI.

In addition to announcing a slew of new tools in a demonstration this month, Altman last week teased at a summit of world leaders in San Francisco that he believed major advances were in sight.

“Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I’ve gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honour of a lifetime,” he said at the Asia-Pacific Economic Cooperation summit.

A day later, the board fired Altman.

Return with potential ‘unchecked power’

A subsequent rebellion by OpenAI employees — with more than 700 of them threatening to walk out of the company — meant that Altman was promptly reinstated as OpenAI chief this week.

It also led to a significant revamp of the OpenAI board of directors.

But corporate governance experts and analysts warn that Altman’s return will strengthen his grip on OpenAI. It may also leave the startup with fewer checks on Altman’s power as the company introduces technology that could upend industries, they say.

“Altman has been invigorated by the last few days,” GlobalData analyst Beatriz Valle said. But that could come at a cost, she said, adding that he has “too much power now.”

Strong support from investors including Microsoft may also give Altman more leeway to commercialise the technology.

“Sam’s return may put an end to the turmoil on the surface, but there may continue to be deep governance issues,” said Mak Yuen Teen, director of the Centre for Investor Protection at the National University of Singapore Business School.

“Altman seems awfully powerful and it is unclear that any board would be able to oversee him. The danger is the board becomes a rubber stamp.”

• Reuters, with additional editing by Vishakha Saxena

Also read:

China Joins US, EU in Vow to Tackle ‘Catastrophic’ AI Harm Risk

Big Tech Exaggerating AI’s Threat to Humanity, Expert Says

Western Spy Chiefs Warn China Using AI to Steal Tech Secrets

OpenAI Boss Urges Regulations to Prevent ‘Harm to the World’

G7 Agree AI Code of Conduct to Limit Tech Threat Risks

Musk, Experts Call for Pause on Training of Powerful AI Systems

AI ‘Godfather’ Quits Google, Warns of ‘Risk To Humanity’ – NYT

AI to Spark Markets Crash in Next Decade: SEC Chair – Insider

Biden, Xi Will Vow to Ban AI in Nuclear Weapons, Drones – SCMP

Vishakha Saxena

Vishakha Saxena is the Multimedia and Social Media Editor at Asia Financial. She has worked as a digital journalist since 2013, and is an experienced writer and multimedia producer. As a trader and investor, she is keenly interested in new economy, emerging markets and the intersections of finance and society. You can write to her at [email protected]
