
Microsoft Dismisses Dangerous AI Worry, Says Tech Decades Away

Dismissing claims of a ‘dangerous breakthrough’ at OpenAI, Microsoft President Brad Smith said super-intelligent AI was years, if not decades, away



Microsoft President Brad Smith has dismissed claims of a breakthrough in super-intelligent artificial intelligence (AI) at OpenAI, saying the technology was decades away.

“There’s absolutely no probability that you’re going to see this so-called AGI, where computers are more powerful than people, in the next 12 months. It’s going to take years, if not many decades, but I still think the time to focus on safety is now,” Smith said.

AGI refers to artificial general intelligence – technology that OpenAI defines as autonomous systems that surpass humans in most economically valuable tasks.

Also on AF: Seismic AI Discovery May Have Led to Altman’s OpenAI Firing

Smith’s comments follow reports last week that a dangerous discovery at OpenAI may have been behind the short-lived firing of the ChatGPT-maker’s chief Sam Altman.

Reuters reported that an internal project at OpenAI, named Q* (pronounced Q-Star), may have achieved a breakthrough in the startup’s search for AGI. Researchers at the company reportedly contacted the board, warning that the discovery could have unintended consequences.

They also voiced concerns over commercialising AI advances before assessing their risks.

Many computer scientists, including Geoffrey Hinton, often called the ‘godfather’ of AI, have warned that the unchecked development of super-intelligent machines could be a threat to human life.

A widely cited example is a scenario in which an AI decides that destroying humanity is in its best interest.

Call for ‘safety brakes’

Asked if such a discovery contributed to Altman’s removal, Smith said: “I don’t think that is the case at all. I think there obviously was a divergence between the board and others, but it wasn’t fundamentally about a concern like that.”

He did, however, caution that checks are needed on AI development.

“What we really need are safety brakes. Just like you have a safety brake in an elevator, a circuit breaker for electricity, an emergency brake for a bus – there ought to be safety brakes in AI systems that control critical infrastructure, so that they always remain under human control,” Smith said.

Smith’s statements come two days after an announcement from Altman that Microsoft will take a non-voting observer seat on OpenAI’s board.

OpenAI’s board underwent a complete overhaul after it abruptly fired Altman – a move that triggered a rebellion at the startup, with almost all of its staff threatening to quit over the dismissal.

The observer position means Microsoft’s representative can attend OpenAI’s board meetings and access confidential information, but will have no voting rights on matters such as electing directors.

Microsoft has committed to invest more than $10 billion in OpenAI and owns 49% of the company.

• Reuters, with additional editing by Vishakha Saxena

Also read:

Sam Altman Back at OpenAI as CEO After Days of Drama

OpenAI Boss Urges Regulations to Prevent ‘Harm to the World’

Musk, Experts Call for Pause on Training of Powerful AI Systems

Big Tech Exaggerating AI’s Threat to Humanity, Expert Says

AI to Spark Markets Crash in Next Decade: SEC Chair – Insider

Vishakha Saxena

Vishakha Saxena is the Multimedia and Social Media Editor at Asia Financial. She has worked as a digital journalist since 2013 and is an experienced writer and multimedia producer. As a trader and investor, she is keenly interested in the new economy, emerging markets and the intersections of finance and society. You can write to her at [email protected]
