Microsoft Dismisses Dangerous AI Worry, Says Tech Decades Away

 

Microsoft President Brad Smith has dismissed claims of a breakthrough in super-intelligent artificial intelligence (AI) at OpenAI, saying the technology was decades away.

“There’s absolutely no probability that you’re going to see this so-called AGI, where computers are more powerful than people, in the next 12 months. It’s going to take years, if not many decades, but I still think the time to focus on safety is now,” Smith said.

AGI refers to artificial general intelligence – technology that OpenAI defines as autonomous systems that surpass humans in most economically valuable tasks.

 

Also on AF: Seismic AI Discovery May Have Led to Altman’s OpenAI Firing

 

Smith’s comments follow reports last week that a dangerous discovery at OpenAI was likely behind the short-lived firing of the ChatGPT-maker’s chief Sam Altman.

Reuters reported that an internal project at OpenAI, named Q* (pronounced Q-Star), may have achieved a breakthrough in the startup’s search for AGI. Researchers at the company reportedly contacted the board, warning that the discovery could have unintended consequences.

They also voiced concerns over commercialising AI advances before assessing their risks.

Many computer scientists, including Geoffrey Hinton, often described as the “godfather” of AI, have warned that unchecked development of super-intelligent machines could be a threat to human life.

A widely cited example is a scenario in which an AI decides that the destruction of humanity is in its own best interest.


Caution for safety

Asked if such a discovery contributed to Altman’s removal, Smith said: “I don’t think that is the case at all. I think there obviously was a divergence between the board and others, but it wasn’t fundamentally about a concern like that.”

He did, however, caution that checks are needed in AI development.

“What we really need are safety brakes. Just like you have a safety brake in an elevator, a circuit breaker for electricity, an emergency brake for a bus – there ought to be safety brakes in AI systems that control critical infrastructure, so that they always remain under human control,” Smith said.

Smith’s statements come two days after Altman announced that Microsoft would take a non-voting, observer seat on OpenAI’s board.

OpenAI’s board underwent a complete overhaul after it abruptly fired Altman – a move that triggered a rebellion at the startup, with almost all of its staff threatening to quit over the dismissal.

The observer position means Microsoft’s representative can attend OpenAI’s board meetings and access confidential information, but does not carry voting rights on matters such as electing directors.

Microsoft has committed to investing more than $10 billion in OpenAI and owns 49% of the company.

 

  • Reuters, with additional editing by Vishakha Saxena

 

Also read:

Sam Altman Back at OpenAI as CEO After Days of Drama

OpenAI Boss Urges Regulations to Prevent ‘Harm to the World’

Musk, Experts Call for Pause on Training of Powerful AI Systems

Big Tech Exaggerating AI’s Threat to Humanity, Expert Says

AI to Spark Markets Crash in Next Decade: SEC Chair – Insider

 

 

 
