Ministry of Electronics and Information Technology Notifies New IT Rules—can the new framework keep pace with AI risks?

What happened as MeitY brought deepfakes and synthetic media under formal IT Rules

India’s Ministry of Electronics and Information Technology (MeitY) on February 10 notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, formally bringing deepfakes and other forms of synthetically generated information (SGI) under a regulatory framework.

The notified amendments require online intermediaries—including social media platforms and digital publishers—to ensure that AI-generated or altered content is appropriately identified. This can be done through visible labels, embedded metadata or other technical disclosures that inform users when content has been synthetically created or modified.

The move marks one of the most concrete regulatory steps by the Indian government so far to address risks posed by deepfakes and AI-generated media, which have raised concerns globally around misinformation, fraud and reputational harm.

The notification follows months of consultations and industry feedback on earlier draft proposals.

Why it matters as India tightens oversight of AI-driven content

The regulation of deepfakes sits at the intersection of technology policy, platform accountability and user protection. Deepfakes—realistic but fabricated audio, video or images—have increasingly been linked to misinformation campaigns, impersonation scams and political manipulation.

By creating explicit obligations around labelling and takedown, the government is signalling that AI-generated content is no longer outside the scope of mainstream digital regulation. For platforms, this introduces new compliance expectations and potential legal exposure.

For policymakers, the rules reflect a balancing act between enabling innovation in artificial intelligence and preventing harm from deceptive content. India, with one of the world’s largest internet user bases, has a significant stake in shaping how AI governance evolves.

For global observers, the amendments place India among jurisdictions actively experimenting with AI-related guardrails, alongside the European Union and parts of East Asia.

What we know so far about the notified changes

According to the notification, several provisions stand out:

  • Mandatory identification: Intermediaries must ensure that synthetically generated or altered content is labelled or identifiable.

  • Flexible disclosure methods: Identification can be through visible labels, embedded metadata or other technical means.

  • User awareness: Platforms must inform users when content has been synthetically created or modified.

  • Faster takedowns: Content subject to lawful government or court orders must be removed or access disabled within three hours.

  • Due diligence obligations: Platforms are expected to make “reasonable efforts” to comply.

A key feature is that the final rules narrow the scope compared to earlier drafts. Instead of covering all algorithmically altered content, the focus now leans toward content likely to mislead users.

This shift suggests an attempt to distinguish harmful deepfakes from routine digital edits.

What remains unclear as implementation details evolve

Despite the notification, several operational questions remain open.

It is not yet clear:

  • How “likely to mislead” will be interpreted in practice

  • What technical standards will define adequate metadata-based labelling

  • How smaller platforms will manage compliance costs

  • Whether safe-harbour protections will be tested in disputes

Enforcement mechanisms and penalty frameworks were not elaborated in the summary of changes. Details may emerge through subsequent advisories or case-by-case enforcement.

It is also unclear how proactively platforms must detect synthetic content versus acting only upon complaints or official orders.

How the changes affect digital platforms and tech companies

For large technology platforms, the rules introduce both technical and legal implications.

On the technical side, platforms may need to invest in detection tools, watermarking technologies and metadata systems that can flag AI-generated media. This could increase compliance and operational costs.
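The notification leaves the disclosure mechanism flexible, so what a metadata-based label looks like in practice is still open. Purely for illustration, a minimal machine-readable disclosure record might tie a "synthetic" flag to the exact file it describes via a content hash. The schema and field names below are invented for this sketch and are not drawn from the rules themselves:

```python
import hashlib
import json

def build_sgi_manifest(media_bytes: bytes, tool: str, modified: bool) -> str:
    """Build a minimal disclosure record for synthetically generated
    or altered media. Hypothetical schema, for illustration only."""
    manifest = {
        "synthetic": True,
        "altered": modified,
        "generator": tool,
        # The hash binds the disclosure to the exact file it describes,
        # so the label cannot silently be reattached to other content.
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    return json.dumps(manifest, sort_keys=True)

# Example: label a stand-in media payload produced by an AI tool.
record = build_sgi_manifest(b"...media bytes...", tool="example-gen-ai", modified=True)
print(record)
```

Real-world systems would more likely embed such a record in standardised container metadata (for instance, provenance frameworks such as C2PA's content credentials) rather than a sidecar JSON string, but the underlying idea of a verifiable, machine-readable disclosure is the same.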

On the legal side, shorter takedown timelines raise the stakes for content moderation teams. A three-hour window following a government or court order leaves limited room for procedural delays.

Key platform implications include:

  • Higher compliance burden

  • Need for scalable AI detection systems

  • Faster response protocols for legal orders

  • Potential liability if labelling is deemed insufficient

Large firms may be better positioned to absorb these costs, while smaller intermediaries could face proportionally greater strain.

Broader policy context shows India moving toward harm-based AI regulation

The final version of the rules reflects a shift toward a harm-based regulatory approach rather than a blanket classification of all synthetic content.

Earlier drafts had defined SGI broadly as any content artificially or algorithmically created, generated or altered. Industry groups warned that such wording could capture harmless edits such as filters, dubbing or routine enhancements.

By narrowing the emphasis to misleading or deceptive content, the government appears to be aligning regulation more closely with risk.

This approach mirrors emerging global conversations on “responsible AI,” where the focus is increasingly on real-world harm rather than the technology itself.

India’s move also comes as governments worldwide debate how to regulate generative AI without stifling innovation.

What industry bodies and stakeholders had warned earlier

Industry associations including IAMAI, Nasscom and the Business Software Alliance had raised concerns during consultations.

They cautioned that:

  • Overbroad definitions could create compliance uncertainty

  • Routine digital edits might be unintentionally regulated

  • Strict visible labelling mandates could be impractical

  • Compliance burdens could hinder innovation

These groups urged MeitY to adopt a harm-based framework focused on deceptive content. The final notification’s narrower scope and flexible labelling options suggest that some of this feedback was incorporated.

However, industry responses to the final rules are still awaited.

What it means for users, creators and the digital ecosystem

For users, the rules aim to increase transparency. Clear labelling could help people distinguish between authentic and synthetic media, potentially reducing the risk of deception.

For content creators, especially those using AI tools for creative or commercial work, the rules introduce new responsibilities. Disclosure norms may affect how AI-generated art, marketing and entertainment content are presented.

For the broader digital ecosystem, the changes reinforce a trend toward platform accountability. Intermediaries are increasingly expected to play an active role in moderating emerging technological risks.

At the same time, excessive compliance burdens could influence how startups and smaller platforms operate in India.

What to watch next as AI regulation evolves in India

Several developments will be closely tracked:

  • How MeitY clarifies compliance standards

  • Early enforcement cases and legal challenges

  • Industry guidance on labelling technologies

  • Alignment with future AI-specific legislation

  • Global regulatory coordination on deepfakes

If enforcement is measured and predictable, the framework could evolve into a model for other emerging markets. If implementation proves inconsistent, it could create regulatory uncertainty.

For now, the notification signals that India is moving from consultation to action on deepfakes and synthetic media. As AI-generated content becomes more common, how these rules are applied in practice will determine whether they strike the intended balance between innovation and protection.

Sourabh loves writing about finance and market news. He has a good understanding of IPOs and enjoys covering the latest updates from the stock market. His goal is to share useful and easy-to-read news that helps readers stay informed.
