India Tightens IT Rules: 3-Hour Deadline for Deepfake Removal Starts February 20
Digital Desk
India mandates 3-hour deepfake removal & AI labelling under new IT Rules starting Feb 20. Learn how this affects social media and your digital safety.
In a major move to clean up the digital landscape, the Indian government has put social media giants on a much shorter clock. Starting February 20, 2026, platforms like X (formerly Twitter), YouTube, Meta (Facebook and Instagram), and Snapchat must remove flagged deepfakes and illegal AI content within just three hours.
The Ministry of Electronics and Information Technology (MeitY) notified the amendments to the IT Rules 2021 on Tuesday. This update shifts the responsibility squarely onto the platforms to ensure that "synthetically generated information" does not mislead the public or cause harm.
Mandatory Labelling and Digital Watermarks
One of the most significant changes is the mandatory labelling of AI-generated content. Under the new Rule 3(3), any content created or modified using AI must carry a clear, prominent label.
- Metadata Identifiers: Every AI post must carry a unique, permanent metadata tag embedded in the file itself. This "digital fingerprint" cannot be hidden, edited, or deleted (a sketch of how such a tag might be embedded follows this list).
- Visibility Standards: For videos and images, the AI label must be easy to spot. For audio, the disclosure must be audible within the first few seconds of the clip.
- Verification Tools: Platforms are now required to provide technical tools that can verify whether a piece of content is AI-generated before a user even hits "upload".
Why the 3-Hour Rule Matters Now
The urgency comes after several high-profile incidents. From the viral deepfake of actress Rashmika Mandanna to a manipulated video of cricket legend Sachin Tendulkar promoting a gaming app, the speed at which misinformation spreads has outpaced the traditional 36-hour removal window.
By slashing the deadline to three hours, the government aims to stop "viral" misinformation in its tracks, particularly during sensitive periods like elections or when personal reputation is at stake.
Quarterly Warnings for Users
It’s not just the platforms under scrutiny; users are also being put on notice. Social media companies must now send quarterly warnings to their users. These notifications will remind people of the legal consequences—including potential criminal charges—for creating or sharing deceptive AI content.
"The goal is an open, safe, and accountable internet," a MeitY official stated. "Delayed removals lead to irreversible damage. We are ensuring the law keeps pace with technology."
Expert Perspective: The Cost of Compliance
While digital rights activists have largely welcomed the move to curb deepfakes, industry experts caution that meeting a three-hour turnaround at platform scale will be a steep climb for tech companies.
Practical Takeaways for Creators:
- Always Disclose: If you use AI tools to generate or heavily edit a post, use the platform's built-in "AI label" feature to avoid account strikes.
- Verify Sources: Before sharing "shocking" celebrity videos, check for the mandatory metadata or official labels (a verification sketch follows this list).
- Stay Informed: Familiarize yourself with the new rules to avoid accidental violations that could lead to account suspension.
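For the "verify sources" step, here is a minimal sketch of what checking a downloaded file for such a label could look like, reusing the hypothetical key names from the earlier example. Real platforms will expose their own verification tools, and the absence of a tag does not by itself prove a file is authentic.

```python
# Illustrative sketch: checking a downloaded image for the hypothetical
# AI-provenance tags used in the earlier example.
import sys

from PIL import Image

AI_KEYS = ("SyntheticContentID", "GeneratedBy")  # assumed key names


def has_ai_label(path: str) -> bool:
    """Return True if the image metadata carries a synthetic-content tag."""
    with Image.open(path) as image:
        return any(key in image.info for key in AI_KEYS)


if __name__ == "__main__":
    verdict = "AI label found" if has_ai_label(sys.argv[1]) else "no AI label in metadata"
    print(f"{sys.argv[1]}: {verdict}")
```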
As the February 20 deadline approaches, the digital world watches to see if social media giants can scale their moderation speed to meet India’s new, aggressive standards.
