India Introduces Flexible Guidelines for AI Regulation
India has taken a significant step in shaping its artificial intelligence (AI) future by unveiling a new set of AI guidelines. The announcement comes ahead of a major AI summit scheduled for early 2026 in New Delhi. These guidelines represent a distinctive approach to AI governance that prioritizes innovation while addressing potential risks.
Unlike the European Union’s stringent AI Act or China’s centralized oversight, India’s strategy relies primarily on existing legal frameworks, such as the Information Technology Act and the Digital Personal Data Protection Act. This model emphasizes self-regulation and voluntary compliance, allowing more flexibility for developers and businesses.
Amal Mohanty, an AI policy expert and a lead author of the guidelines, explained, “India’s AI governance adopts a balanced, agile and flexible approach that promotes innovation and safety.” He stressed that this framework is designed to be adaptive, providing room for technological evolution without imposing rigid constraints.
AI Adoption Accelerates Across Sectors
India is rapidly integrating AI into various sectors, including healthcare, agriculture, financial technology, and public services. The country’s young, tech-savvy population has made it one of the global frontrunners in AI adoption. A recent Boston Consulting Group report revealed that 92% of Indian employees in areas like customer service and operations are already using AI tools—well above the global average of 72%.
Paul Emmanuel, CEO of Two Minute Reports, likened the current AI era to the early days of the internet, urging policymakers not to create barriers. “We are in the dial-up internet era of AI. Now is not the time to set roadblocks in front of innovation,” he said. Emmanuel supports the government’s light-touch approach, noting that it enables Indian businesses to compete globally and prepares the workforce for future challenges.
Criticism Over Lack of Implementation Detail
Despite the optimism, some experts have raised concerns that the guidelines are vague on implementation. Urvashi Aneja, founder of Digital Futures Lab, pointed out that while the objectives are ambitious, the roadmap remains unclear. “For example, how is data usability and sharing going to be enhanced?” she asked, noting the persistent issue of poor public data quality.
Aneja also highlighted the guidelines’ narrow focus on AI risks. “They are silent on labor displacement, psychological and environmental harms, and make no mention of market concentration,” she said. This limited scope, according to her, could hinder comprehensive risk management.
Principles to Promote Safety and Transparency
Central to the new guidelines is the “Do No Harm” principle, which aims to mitigate AI-related risks while fostering innovation. Another notable aspect is the emphasis on content authentication. The government plans to introduce amendments to existing IT rules requiring platforms like YouTube and Instagram to label AI-generated or modified content.
These labels must be visible and cover at least 10% of the screen for visual content, and 10% of the duration for audio content. The goal is to help users distinguish between authentic and AI-altered material, thereby reducing the spread of misinformation and deepfakes.
India’s Contextual Approach to AI
Yash Shah, CEO of Momentum91, praised the guidelines for being tailored to India’s economic and social realities. “Think of the EU approach as airport security—meticulous and slow. India’s is more like metro security—efficient and adaptable,” he explained.
He noted that under India’s voluntary framework, a fintech startup in Bengaluru can roll out an AI underwriting model more swiftly than in regions with stricter compliance requirements. Similarly, edtech firms can experiment with personalized learning tools without navigating complex layers of regulation.
The Indian model focuses on regulating specific applications rather than the AI technology itself. This approach contrasts with China’s heavy-handed regulatory style and the United States’ fragmented, evolving policies.
Calls for a Legal AI Framework
While the guidelines mark a proactive step, experts emphasize the need for a comprehensive legal framework. Pawan Duggal, a leading cybersecurity expert, warned that the current guidelines lack legal enforceability. “There are no legal consequences if stakeholders do not follow the said guidelines,” he said.
Duggal advocates for a distinct AI law that would provide legal recognition and define accountability, liability, and transparency. He also stressed the importance of sustainable AI development and granting conditional personality rights to AI systems.
As India charts its unique path in AI governance, the balance between innovation and regulation remains a critical challenge. The flexible framework may foster rapid growth, but experts caution that it must be backed by enforceable laws and broader risk assessments to ensure responsible development.
