
Musk, Experts Call for Pause on Training of Powerful AI Systems

Elon Musk and artificial intelligence experts have signed an open letter calling for a six-month pause in training of systems more powerful than GPT-4, warning of potential risks to society and humanity.


Artificial intelligence, facial recognition stock image.


Elon Musk and artificial intelligence experts are calling for a six-month pause in training of systems more powerful than GPT-4.

They and industry bosses have urged a delay in an open letter that warns of potential risks to society and humanity.

The letter was issued by the non-profit Future of Life Institute and signed by more than 1,000 people, including Musk, Apple co-founder Steve Wozniak and Stability AI CEO Emad Mostaque.

It called for a pause on advanced AI development until shared safety protocols for such designs are developed, implemented and audited by independent experts.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.

ALSO SEE:

Baidu Scraps Public Launch of ChatGPT-Rival Ernie Bot


The letter also detailed potential risks to society and civilization posed by human-competitive AI systems, in the form of economic and political disruption, and called on developers to work with policymakers on governance and new regulatory authorities.

The Future of Life Institute focuses on four major risks – artificial intelligence, biotechnology, nuclear weapons and climate change.

“We believe that the way powerful technology is developed and used will be the most important factor in determining the prospects for the future of life. This is why we have made it our mission to ensure that technology continues to improve those prospects,” the Institute says on its website.


European police warning

The letter comes as EU police force Europol on Monday joined a chorus of ethical and legal concerns over advanced AI like ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation and cybercrime.

Since its release last year, Microsoft-backed OpenAI’s ChatGPT has set off a tech craze, prompting rivals to launch similar products and companies to integrate it or similar technologies into their apps and products.

“As the capabilities of LLMs (large language models) such as ChatGPT are actively being improved, the potential exploitation of these types of AI systems by criminals provide a grim outlook,” Europol said as it presented its first tech report, which focuses on the chatbot.

It singled out the harmful use of ChatGPT in three areas of crime.

“ChatGPT’s ability to draft highly realistic text makes it a useful tool for phishing purposes,” Europol said.

With its ability to reproduce language patterns to impersonate the style of speech of specific individuals or groups, the chatbot could be used by criminals to target victims, the EU enforcement agency said.

It said ChatGPT’s ability to churn out authentic-sounding text at speed and scale also makes it an ideal tool for propaganda and disinformation.

“It allows users to generate and spread messages reflecting a specific narrative with relatively little effort.”

Criminals with little technical knowledge could also turn to ChatGPT to produce malicious code, Europol said.


  • Reuters with additional editing by Jim Pollard
ALSO SEE:

China’s Tencent Assembles Team To Create ChatGPT Rival

China Tech Fighting Over AI Talent in ChatGPT Chase – SCMP

Alibaba, Tencent Race to Build ChatGPT Rivals – Nikkei

Jim Pollard

Jim Pollard is an Australian journalist based in Thailand since 1999. He worked for News Ltd papers in Sydney, Perth, London and Melbourne before travelling through SE Asia in the late 90s. He was a senior editor at The Nation for 17+ years.
