Malaysia, Indonesia Block Grok AI Over Deepfake Abuse

FILE - Elon Musk listens as President Donald Trump speaks during a news conference in the Oval Office of the White House, May 30, 2025, in Washington. (AP Photo/Evan Vucci, File)

Malaysia and Indonesia Take Action Against Grok AI

Malaysia and Indonesia have become the first nations to block access to Grok, the artificial intelligence chatbot developed by Elon Musk’s xAI, following allegations that it was used to create sexually explicit, non-consensual deepfake images. Both governments imposed the bans over concerns that Grok was being misused to generate such content, particularly images involving women and minors.

Authorities in both countries cited escalating misuse of generative AI technologies and the inadequate safeguards currently in place to prevent such abuse. Grok, which operates through Musk’s social platform X (formerly Twitter), has faced global criticism for enabling the creation of manipulated images.

Governments Cite Human Rights and Safety Concerns

Indonesia’s Ministry of Communication and Digital Affairs announced the block on Grok on Saturday, with Malaysia following suit the next day. Indonesian Minister Meutya Hafid emphasized that the government views non-consensual sexual deepfakes as a severe violation of human rights and digital safety.

“This step is necessary to protect our citizens, especially women and children, from the harmful effects of fake pornographic content,” Hafid stated. The ministry added that Grok lacked proper controls to prevent users from generating AI-enhanced explicit imagery based on real photographs of Indonesian citizens.

Alexander Sabar, Director General of Digital Space Supervision in Indonesia, expressed concerns about privacy violations and the psychological, social, and reputational harm caused by unauthorized image manipulation.

Malaysia Issues Temporary Ban Following Misuse

In Malaysia, the Communications and Multimedia Commission enacted a temporary suspension of Grok after identifying repeated cases of misuse. The regulator said it observed a consistent pattern of the tool being used to produce obscene and non-consensual content, including depictions of minors and women in sexually explicit scenarios.

Malaysian authorities had earlier issued warnings to both X Corp. and xAI, requesting the implementation of more robust safeguards. However, the responses they received emphasized user report mechanisms rather than systemic changes. As a result, the Malaysian government deemed a temporary block a “preventive and proportionate” measure to safeguard the public.

Grok’s Capabilities and Controversial Features

Launched in 2023, Grok was designed to be a conversational AI assistant available within the X platform. Users could interact with it by asking questions or tagging posts. Later, xAI introduced an image generation feature known as “Grok Imagine,” which included a “spicy mode” capable of producing adult-themed content. This feature quickly became a lightning rod for controversy.

Although Grok is free to use, its growing capabilities, particularly its image creation tools, attracted scrutiny from regulators and digital rights advocates. Critics have argued that the platform enables the proliferation of deepfakes and other harmful content without adequate oversight.

International Scrutiny and Policy Responses

Beyond Southeast Asia, other regions are also examining Grok’s impact. Authorities in the European Union, United Kingdom, France, and India have expressed concern about the chatbot’s potential for misuse. In response to mounting criticism, Grok recently limited its image generation and editing tools to users with paid subscriptions. However, experts argue that this change does not fully tackle the underlying issues.

“Restricting certain features behind a paywall does not ensure safety,” said a cybersecurity analyst. “There needs to be a robust monitoring and enforcement mechanism to prevent misuse at scale.”

As global conversations around AI governance continue, Malaysia and Indonesia’s decisive moves may serve as a model for other nations grappling with the ethical dilemmas posed by generative AI technologies.

Looking Ahead: The Need for Stronger AI Regulations

Governments worldwide are racing to establish regulations for artificial intelligence tools, especially generative models that can produce hyper-realistic content. With the rise of deepfakes and manipulated media, the need for clear standards and accountability mechanisms has never been more urgent.

Grok’s case illustrates the challenges of balancing innovation with responsibility. While the tool offers unique functionalities, its potential for harm has prompted immediate action from regulators who argue that public safety must come first.

Until xAI and similar companies implement effective safeguards, more countries may follow Malaysia and Indonesia’s lead in restricting access to AI platforms that can be misused to create damaging content.

