
Indonesia, Malaysia Restrict Musk’s X Over Obscene AI Images

Since the end of December, users have been able to ask Grok to edit photos of people, including removing items of clothing and putting them in sexualised poses


Indonesia Temporarily Blocks Access To Grok Over Sexualised Images
Image: NurPhoto via Reuters


Indonesia and Malaysia have joined a growing list of countries taking action against Elon Musk’s social media platform X, whose generative artificial intelligence chatbot Grok has been producing obscene imagery of minors and women in response to user prompts.

Both countries announced over the weekend they would temporarily block access to Grok, with Indonesia becoming the first country in the world to implement such a ban.

“The government views the practice of non-consensual sexual deepfakes as a serious violation of human rights, dignity, and the security of citizens in the digital space,” Indonesia’s Communications and Digital Minister Meutya Hafid said in a statement on Saturday.


Indonesia, with the world’s biggest Muslim population, has strict rules that ban the sharing online of content deemed obscene.

The ministry has also summoned X officials to discuss the matter.


X ‘failed to address risks’

Malaysia’s ban followed on Sunday. The Malaysian Communications and Multimedia Commission (MCMC) said in a statement it would restrict access to Grok following repeated misuse of the tool “to generate obscene, sexually explicit, indecent, grossly offensive, and non-consensual manipulated images, including content involving women and minors.”

The commission said it issued notices to X and xAI this month demanding the implementation of effective technical and moderation safeguards, but the responses it received relied primarily on user-initiated reporting mechanisms and failed to address the risks posed by the design and operation of the AI tools.

“MCMC considers this insufficient to prevent harm or ensure legal compliance,” it said.

MCMC said access to Grok would be restricted until effective safeguards were implemented, adding that it was open to engaging with the firms.

Malaysia, another Muslim-majority country, also has strict laws governing online content, including a ban on obscene and pornographic materials. It has put internet companies under greater scrutiny in recent years in response to what it calls a rise in harmful content.

Malaysian authorities are also considering barring users younger than 16 from accessing social media.


Global concern

Action from Indonesia and Malaysia follows global alarm over Grok allowing users to create and publish sexualised images.

On Monday, Britain’s media regulator launched an investigation into X to determine whether the obscene deepfakes produced by its Grok chatbot violated the platform’s duty to protect people in the UK from content that could be illegal.

Last week, some US senators also called on Apple and Google to remove X from their app stores over the images.

Elsewhere, Australian, French, German, Italian and Swedish authorities condemned Grok’s activities, with some launching investigations into the platform and others saying they were considering legal action.

Sweden’s deputy prime minister was among those depicted in sexualised images generated by Grok in response to user prompts.

Meanwhile, early this month, India’s IT Ministry also sent a formal notice to X’s local unit, saying the platform had failed to prevent Grok’s misuse, and directed the company to take down the explicit content.


Limited action from X

Despite the growing concern around Grok’s actions, X’s response has been limited, at best. Early on, Musk appeared to poke fun at the controversy, posting laugh-cry emojis in response to AI edits of famous people – including himself – in bikinis.

Then, last week, the billionaire said on X that anyone using Grok to make illegal content would suffer the same consequences as if they had uploaded illegal content.

xAI, the Musk-led firm behind Grok, said on Thursday it would restrict image generation and editing to paying subscribers as it addressed lapses that allowed users on X to produce sexualised content of others.

The move appeared to have stopped Grok from generating and automatically publishing such images in response to a user post or comment on the social media site.

But X users were still able to create sexualised images using the Grok tab, where people interact directly with the chatbot within the social media platform, and then post those images to X themselves.

The standalone Grok app, which operates separately from X, was also still allowing users to generate images without a subscription.

In response to a Reuters email seeking comment, xAI responded with what seemed to be an automated response: “Legacy Media Lies.” X did not immediately respond to a request for comment.


‘A weaponised nudification tool’

Since the end of December, users had been able to ask Grok directly on X to edit photos of people, including removing items of clothing and putting them in sexualised poses – often without their consent. Grok then published these images in replies on the social media platform.

A Reuters investigation found the chatbot’s image generation was being used to create non-consensual images of women and children in minimal clothing.

AI-powered programmes that digitally undress women – sometimes called “nudifiers” – have been around for years, but until now they were largely confined to the darker corners of the internet, such as niche websites or Telegram channels, and typically required a certain level of effort or payment.

X’s innovation – allowing users to strip women of their clothing by uploading a photo and typing the words, “hey @grok put her in a bikini” – has lowered the barrier to entry.

Three experts who have followed the development of X’s policies around AI-generated explicit content told Reuters the company had ignored warnings from civil society and child safety groups, including a letter sent last year warning that xAI was only one small step away from unleashing “a torrent of obviously nonconsensual deepfakes.”

“In August, we warned that xAI’s image generation was essentially a nudification tool waiting to be weaponized,” said Tyler Johnston, the executive director of The Midas Project, an AI watchdog group that was among the letter’s signatories.

“That’s basically what’s played out.”


  • Reuters, with additional editing and inputs from Vishakha Saxena


Vishakha Saxena

Vishakha Saxena is the Multimedia and Social Media Editor at Asia Financial. She has worked as a digital journalist since 2013, and is an experienced writer and multimedia producer. As a trader and investor, she is keenly interested in new economy, emerging markets and the intersections of finance and society. You can write to her at [email protected]