India Mandates AI Labeling: New Rules Target Deepfakes with 3-Hour Removal Deadline
Digital Desk
India's new AI labeling rules take effect February 20, 2026, requiring labels on AI-generated content and mandating deepfake removal within 3 hours. Here's what the rules require.
A major crackdown on AI-generated misinformation begins today as India's new AI labeling rules come into effect nationwide. Social media platforms now have just three hours to remove deepfakes and synthetically generated content after receiving complaints—a dramatic reduction from the previous 36-hour window.
The Ministry of Electronics and Information Technology notified the IT Amendment Rules 2026 on February 10, giving platforms a 10-day compliance buffer. Starting today, every AI-generated photo, video, or audio uploaded in India must carry a clear "AI-generated" label, with technical markers embedded in the file's metadata.
What Are the New AI Labeling Rules?
Think of it as a "digital stamp" for synthetic content. Just as food packets display vegetarian or non-vegetarian labels, AI-created videos must now show visible disclaimers—typically in the corner of the screen—indicating machine-generated origin.
But the requirements go deeper. Every AI-generated file must contain what officials call "digital DNA"—metadata recording the creation date, the AI tool used, and the platform where it was first uploaded. This technical marker enables law enforcement to trace the source of harmful content.
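The notification does not prescribe a file format for this "digital DNA," but the record officials describe can be pictured as a small provenance object attached to each file. The sketch below is purely illustrative: the field names and function are hypothetical, not taken from the rules.

```python
import json
from datetime import datetime, timezone

def build_provenance_record(tool_name, upload_platform):
    """Illustrative "digital DNA" record of the kind the rules
    describe: creation date, the AI tool used, and the platform
    of first upload. Field names here are hypothetical."""
    return {
        "creation_date": datetime.now(timezone.utc).isoformat(),
        "ai_tool": tool_name,
        "first_upload_platform": upload_platform,
    }

record = build_provenance_record("example-image-model", "example-platform")
print(json.dumps(record, indent=2))
```

In practice such records would be embedded in the file itself (for images, something like EXIF or a Content Credentials manifest) rather than kept as a separate JSON document, so they travel with the content when it is reshared.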
"If someone tampers with that metadata or crops out the visible label to make fake content appear real, the content itself will be deleted automatically," a senior IT ministry official explained. Platforms must deploy technology that detects and blocks such tampering attempts.
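How might a platform detect that embedded metadata has been altered? One common technique is to record a cryptographic digest of the metadata at creation time and reject any file whose current digest no longer matches. The minimal sketch below uses a plain SHA-256 hash for illustration; a real deployment would use signed manifests under a standard such as C2PA, which the Indian rules do not themselves mandate.

```python
import hashlib
import json

def metadata_digest(metadata: dict) -> str:
    """Compute a SHA-256 digest over a canonical JSON encoding of
    the metadata, so any later change to it is detectable."""
    canonical = json.dumps(metadata, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# Hypothetical record at creation time; the digest is stored alongside it.
original = {"ai_tool": "example-model", "creation_date": "2026-02-20"}
stored_digest = metadata_digest(original)

# A tampering attempt: stripping the AI-tool field to pass content off as real.
tampered = dict(original)
del tampered["ai_tool"]

print(metadata_digest(original) == stored_digest)   # True: untouched metadata verifies
print(metadata_digest(tampered) == stored_digest)   # False: tampering is detected
```

A bare hash only proves the metadata changed, not who changed it; that is why production systems pair the digest with a digital signature from the creating tool.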
The 3-Hour Deepfake Removal Deadline
Perhaps the most significant change is the compressed timeline for action. Previously, platforms had 36 hours to remove illegal content after government notification. Now, that window shrinks to just three hours.
The rules apply specifically to "Synthetically Generated Information" (SGI)—AI-created content that realistically mimics real people or events. Basic edits like brightness adjustments or compression don't qualify as SGI, but creating fake government documents, impersonating individuals, or generating non-consensual intimate imagery using AI triggers strict penalties.
PM Modi's Push for Digital Transparency
The timing coincides with Prime Minister Narendra Modi's address at the AI Summit in Paris just yesterday. "Just as food items have nutrition labels, digital content needs authenticity labels," Modi suggested. "People deserve to know what is real and what is fabricated."
His analogy resonated globally, and within 24 hours, India's rules became operational—though officials clarify the notification was already finalized before his speech.
Who Is Liable? Platforms Face New Responsibilities
Social media companies now bear significant compliance burdens. When users upload content, platforms must obtain declarations about AI involvement and deploy verification tools. If platforms publish unlabeled AI content, they lose their "safe harbour" protection under Section 79 of the IT Act.
Legal consequences include:
- Loss of legal immunity for non-compliant platforms
- Up to 3 years imprisonment under BNS Section 353 for spreading fear through false information
- Up to 2 years under BNS Section 336 for AI-based impersonation
Why This Matters Now
Recent high-profile deepfakes have galvanized government action. In November, actor Rashmika Mandanna's deepfake video went viral, followed by cricket icon Sachin Tendulkar's manipulated video promoting a gaming app. Both cases highlighted how convincingly AI can fabricate content featuring public figures.
"The internet must remain open, safe, trusted, and accountable," the Ministry stated. "Generative AI poses risks of misinformation, impersonation, and election manipulation. These rules address those threats head-on."
What Users Need to Know
For ordinary internet users, the changes mean greater transparency. That viral video of a politician making controversial statements? It must now carry an AI label if synthetic. Suspicious caller imitating your relative? AI voice scams now face stricter tracking through metadata requirements.
The rules exempt basic edits, subtitle additions, and fictional case studies created for academic research, provided the content doesn't impersonate real individuals or forge documents.
