Read Gazette Notification
https://drive.google.com/file/d/1gOx14G0BbZGnyaqvhHLdgVZzyu8YOpbl/view?usp=sharing
This is not a standalone "AI Act" enacted as a single statute, but it is one of India's strongest legal interventions yet against deepfake misinformation, non-consensual AI imagery, and AI-enabled digital deception, because it places concrete duties on online platforms.
What counts as “synthetically generated information”?
The amended Rules first broaden and clarify what “audio, visual or audio-visual information” covers—ranging from audio and sound recordings to images, photographs, graphics, videos and moving visual recordings (with or without accompanying audio), whether created, generated, modified or altered using any computer resource.
Next, they introduce a key definition: “synthetically generated information” means audio/visual content that is artificially or algorithmically created or altered using a computer resource in a way that makes it appear real/authentic/true and depicts an individual or event in a manner likely to be perceived as indistinguishable from a natural person or a real-world event.
At the same time, the Rules carve out legitimate, good-faith uses: routine editing/enhancement that does not materially misrepresent meaning; preparation of documents/PDFs/training or educational and research materials that do not create false documents/false electronic records; and accessibility/translation/clarity improvements that do not manipulate material parts of the content.
The new deal for platforms: Block unlawful deepfakes, label the rest
If a platform offers a computer resource that enables or facilitates the creation, modification, publication, transmission, sharing, or dissemination of synthetically generated information, it must deploy “reasonable and appropriate technical measures”—including automated tools or other suitable mechanisms—so that users are not allowed to create or disseminate synthetic content that violates law and falls within specified harmful categories.
The Gazette text explicitly highlights high-harm buckets of synthetic content, including: child sexual exploitative and abuse material (CSAM), non-consensual intimate imagery, obscene/pornographic/paedophilic/sexually explicit content, privacy-invasive content; synthetic content resulting in false documents or false electronic records; content relating to explosive material/arms/ammunition; and deceptive synthetic depictions that misrepresent identity, voice, conduct, action, statements, or falsely portray real-world events as having occurred in a manner likely to deceive.
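To make the obligation concrete, here is a minimal, purely illustrative sketch of what a pre-dissemination policy gate could look like. The category names, the `SyntheticUpload` type, and the classifier stub are my own shorthand for the Gazette's buckets, not anything the Rules prescribe; a real system would sit on top of ML classifiers, hash matching, and human review.

```python
# Hypothetical sketch of a pre-dissemination policy gate for synthetic media.
# Category names and the classifier stub are illustrative, not drawn from the Rules' text.
from dataclasses import dataclass
from enum import Enum, auto


class ProhibitedCategory(Enum):
    CSAM = auto()
    NON_CONSENSUAL_INTIMATE_IMAGERY = auto()
    OBSCENE_OR_SEXUALLY_EXPLICIT = auto()
    PRIVACY_INVASIVE = auto()
    FALSE_DOCUMENT_OR_RECORD = auto()
    EXPLOSIVES_ARMS_AMMUNITION = auto()
    DECEPTIVE_IMPERSONATION_OR_FAKE_EVENT = auto()


@dataclass
class SyntheticUpload:
    content_id: str
    media_bytes: bytes


def classify(upload: SyntheticUpload) -> set[ProhibitedCategory]:
    """Placeholder for the platform's automated detection tooling."""
    return set()  # stub: assume no prohibited category is detected


def may_disseminate(upload: SyntheticUpload) -> bool:
    """Block creation/dissemination if any prohibited category is detected."""
    return not classify(upload)


if __name__ == "__main__":
    upload = SyntheticUpload(content_id="demo-001", media_bytes=b"...")
    print("allowed" if may_disseminate(upload) else "blocked")
```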
For synthetic content that is not in the prohibited bucket, the Rules move toward transparency and traceability: such content must be prominently labelled (visually noticeable, or with a prominently prefixed audio disclosure for audio content) and, where technically feasible, embedded with permanent metadata or other suitable technical provenance mechanisms, along with a unique identifier identifying the intermediary's computer resource used to create or alter it. The intermediary must also not enable the modification, suppression, or removal of the label, metadata, or unique identifier.
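As a rough illustration of that labelling-plus-provenance step (not anything the Rules prescribe), a platform-side helper might look like the sketch below. The field names, the UUID-based identifier, and the JSON-trailer "embedding" are assumptions for readability; a real deployment would more likely use format-specific metadata or C2PA-style manifests so the record survives re-encoding and cannot be stripped by users.

```python
# Illustrative sketch: attach a visible label and provenance metadata to permitted
# synthetic content. Field names and the embed step are assumptions, not prescribed text.
import json
import uuid
from datetime import datetime, timezone


def build_provenance_record(platform_tool: str) -> dict:
    """Provenance payload meant to travel with the media and stay non-removable."""
    return {
        "synthetic": True,
        "label_text": "This content is synthetically generated",
        "generator_resource_id": platform_tool,  # identifies the intermediary's tool
        "unique_identifier": str(uuid.uuid4()),  # per-item identifier
        "created_at": datetime.now(timezone.utc).isoformat(),
    }


def embed_metadata(media: bytes, record: dict) -> bytes:
    """Stand-in for a real embedding step (e.g. format-specific metadata or a
    C2PA-style manifest). Here we simply append a JSON trailer for illustration."""
    return media + b"\n<!--provenance:" + json.dumps(record).encode() + b"-->"


labelled = embed_metadata(b"<video-bytes>", build_provenance_record("example-image-editor"))
```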
In plain terms: illegal and harmful deepfakes should be prevented; other AI-generated media should carry a clear “this is synthetic” signal and provenance markers.
Special burden on Big Social Media: Declaration + verification before publishing
The amendment introduces an additional pre-publication duty for significant social media intermediaries (SSMIs). Before display/upload/publication, an SSMI must require users to declare whether the content is synthetically generated. It must also deploy appropriate technical measures (including automated tools or other suitable mechanisms) to verify the accuracy of the declaration, keeping in mind the nature, format, and source of the content.
If the declaration or the platform’s technical verification confirms that the content is synthetic, the SSMI must ensure it is clearly and prominently displayed with an appropriate label/notice indicating it is synthetically generated. The Rules also carry a strong accountability signal: if the intermediary knowingly permitted, promoted, or failed to act upon such synthetic content in contravention of the Rules, it is deemed to have failed to exercise due diligence under this framework.
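Purely as a sketch of the workflow just described, and assuming hypothetical function and field names, the declare-verify-label sequence for an SSMI could be wired together roughly as follows; the verification stub stands in for whatever automated tooling is appropriate to the content's nature, format, and source.

```python
# Hedged sketch of the SSMI pre-publication flow: collect a user declaration,
# run automated verification, then label if either signal indicates synthetic content.
from dataclasses import dataclass


@dataclass
class Submission:
    media_id: str
    user_declared_synthetic: bool


def automated_verification(media_id: str) -> bool:
    """Placeholder for detection tooling suited to the content's nature, format and source."""
    return False  # stub: assume no synthetic signal detected


def pre_publication_check(sub: Submission) -> dict:
    is_synthetic = sub.user_declared_synthetic or automated_verification(sub.media_id)
    return {
        "media_id": sub.media_id,
        "publish": True,
        "label": "Synthetically generated" if is_synthetic else None,
    }


print(pre_publication_check(Submission(media_id="post-42", user_declared_synthetic=True)))
```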
The clocks get stricter: “36 hours” becomes “within 3 hours” (and more)
Beyond deepfake definitions and labelling, the notification tightens several timelines. Notably, in Rule 3(1)(d), the time reference is substituted from “thirty-six hours” to “within three hours.” The amendments also reduce other compliance/grievance timelines in specified parts of Rule 3(2), including changes from “fifteen days” to “seven days”, “seventy-two hours” to “thirty-six hours”, and “twenty-four hours” to “two hours.”
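If a platform were to encode these clocks in its complaint-handling tooling, the deadline arithmetic itself is simple. Treat the sketch below as illustration only: the queue labels are my own assumptions about which obligation each revised timeline attaches to, since the notification ties them to specific sub-clauses of Rules 3(1) and 3(2) rather than to named workflows.

```python
# Rough sketch: tracking the tightened compliance clocks for incoming complaints.
# The queue names are assumptions; only the durations mirror the revised timelines.
from datetime import datetime, timedelta, timezone

REVISED_DEADLINES = {
    "rule_3_1_d_action": timedelta(hours=3),        # was 36 hours
    "grievance_resolution": timedelta(days=7),      # was 15 days
    "other_grievance_action": timedelta(hours=36),  # was 72 hours
    "urgent_action": timedelta(hours=2),            # was 24 hours
}


def due_by(received_at: datetime, queue: str) -> datetime:
    """Deadline for acting on a complaint received at `received_at`."""
    return received_at + REVISED_DEADLINES[queue]


now = datetime.now(timezone.utc)
print(due_by(now, "rule_3_1_d_action"))
```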
For users, this is meant to translate into quicker action on urgent harms. For platforms, it sharply raises the compliance bar—because time is now a key element of due diligence.
Safe harbour reassurance (but only for compliant conduct)
The Gazette also clarifies, "for removal of doubts," that when intermediaries remove or disable access to information (including synthetically generated information) in compliance with these Rules, including where they act upon violations by deploying reasonable and appropriate technical measures, this does not amount to a violation of the conditions under Section 79(2)(a) or (b) of the IT Act.
This matters because platforms often argue that proactive moderation can expose them to allegations of “editorial control.” The amendment attempts to protect good-faith compliance action, while still expecting strong preventive and transparency measures around synthetic media.
IPC is out; BNS is in
Finally, the amendment updates the Rules’ penal reference: in Rule 7, “the Indian Penal Code” is substituted with “the Bharatiya Nyaya Sanhita, 2023 (45 of 2023).” This aligns the intermediary framework with India’s updated criminal law terminology in the Gazette text itself.
What this means for you (creator, user, victim)
- If you're a user/creator: AI-generated content is moving into a "declare, label, and do not deceive" compliance environment, especially on major platforms.
- If you're a victim of deepfakes or non-consensual synthetic imagery: the Rules are designed to force faster platform response and stronger preventive tooling, with clear emphasis on sexual privacy harms and CSAM.
- If you're a platform: the era of "we're just a neutral host" becomes harder to sustain where synthetic media creation or distribution is enabled, because the Rules now demand prevention, labelling, provenance signals, and rapid action.
The big question is no longer whether deepfakes are dangerous; the law has answered that. The real test begins on 20 February 2026: how effectively platforms implement verification, labelling, provenance, and rapid takedown systems—without over-censoring legitimate speech and lawful creativity.
