New Delhi: The Central Government has officially notified a series of sweeping amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, aimed at bringing artificial intelligence and synthetic media within a stringent regulatory framework. Notified by the Ministry of Electronics and Information Technology (MeitY) on 10 February 2026, the amended rules are set to come into force on 20 February 2026. The primary objective of this intervention is to curb the growing misuse of generative artificial intelligence, particularly deepfakes, which can cause irreversible reputational harm, electoral interference, and public disorder. By introducing a formal legal definition of AI-generated content and significantly shortening platform response timelines, the government signals a decisive shift away from what has been described as an “era of leisurely compliance” on the part of digital intermediaries.
The amendment has been issued by the Central Government in exercise of its rule-making powers under Section 87 of the Information Technology Act, 2000. It may also be read as a significant regulatory response to recent controversies involving the misuse of artificial intelligence to create deepfakes of high-profile businesspersons, actors, social media influencers, and political figures, an issue that has drawn national attention and was described by Prime Minister Narendra Modi as a “crisis”.
Among the most foundational changes is the formal legal definition of “synthetically generated information” (SGI) under the newly inserted Rule 2(1)(wa). This covers any audio, visual, or audio-visual content that is artificially or algorithmically created or altered in a manner that appears authentic and is likely to be perceived as a genuine depiction of a natural person or a real-world event. To ensure that regular digital activity is not hampered, the rule includes a proviso carving out exemptions for routine editing, such as colour adjustment or noise reduction, and for good-faith educational materials, provided these do not result in false electronic records. Furthermore, Rule 2(1A) clarifies that any reference to “information” in the context of unlawful acts now explicitly includes SGI, ensuring that AI-generated content is subject to the same due diligence and takedown obligations as any other form of data.
The amendments also mark a sharp acceleration in enforcement through Rule 3(1)(d), which slashes the compliance window for removing content pursuant to a lawful order from 36 hours to just three hours. This near-immediate requirement underscores the government’s concern that deepfakes can cause irreversible harm, such as electoral interference or public disorder. Additionally, Rule 3(2)(b) requires intermediaries to act on specified content removal complaints within two hours, down from the previous 24-hour limit. Grievance redressal timelines have also been tightened under Rule 3(2)(a), with the period for disposing of general user grievances reduced from 15 days to seven days.
To protect users proactively, Rule 3(3)(a)(i) mandates that intermediaries deploy technical measures and automated tools to prevent the generation or sharing of SGI that violates the law, specifically targeting child sexual abuse material (CSAM), non-consensual intimate imagery, and deceptive impersonation. Synthetic content that is not prohibited must, under Rule 3(3)(a)(ii), be prominently labelled so that users can readily identify its artificial nature. The label must be accompanied by permanent metadata or provenance markers, including a unique identifier, so that the content can be traced back to the computer resource used to create it. Crucially, Rule 3(3)(b) expressly prohibits intermediaries from allowing the removal, suppression, or modification of these labels or metadata once applied.
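Purely by way of illustration, and not as anything the Rules themselves prescribe, the sketch below shows what a minimal provenance record for labelled SGI might contain. The JSON schema, field names, and tool name are assumptions of this example; the amendment leaves the precise format of labels and metadata to intermediaries.

```python
# Illustrative sketch only: a hypothetical provenance record for a piece of
# synthetically generated information (SGI). The Rules do not prescribe this
# format; every field name here is an assumption made for demonstration.
import hashlib
import json
import uuid
from datetime import datetime, timezone

def build_sgi_provenance(content_bytes: bytes, generating_tool: str) -> dict:
    """Build a label/metadata record tying SGI back to the resource that created it."""
    return {
        "label": "SYNTHETICALLY GENERATED INFORMATION",  # prominent user-facing label
        "identifier": str(uuid.uuid4()),                 # unique identifier for the item
        "content_sha256": hashlib.sha256(content_bytes).hexdigest(),  # ties record to the exact content
        "generating_resource": generating_tool,          # computer resource used to create it
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    record = build_sgi_provenance(b"<rendered video bytes>", "example-genai-tool/1.0")
    print(json.dumps(record, indent=2))
```

Hashing the content ties the record to the exact file, which is one way an intermediary could make later removal or alteration of the marker detectable, consistent in spirit with the prohibition in Rule 3(3)(b).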
Additionally, Significant Social Media Intermediaries (SSMIs), such as Facebook and Instagram, face even more rigorous duties under Rule 4(1A). Before any content is published, these platforms must require users to declare whether the information is synthetically generated. Platforms cannot accept these declarations at face value; they must deploy automated technical tools to verify their accuracy. If a platform is found to have knowingly permitted or promoted unlabelled synthetic content in violation of these rules, it will be deemed to have failed its due diligence obligations, putting its safe harbour protection under Section 79 of the IT Act at risk.
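Again purely as an illustration, the following sketch shows how a pre-publication check might combine a user's declaration with an automated classifier. The classifier stub, threshold, and function names are hypothetical; the Rules require reasonable technical verification measures without prescribing any particular tool or standard.

```python
# Illustrative sketch only: a hypothetical pre-publication check combining a
# user's declaration with an automated detector. The detector and threshold
# are assumptions, not anything specified by the amended Rules.
from dataclasses import dataclass

@dataclass
class Upload:
    content_id: str
    user_declared_synthetic: bool

def detector_score(content_id: str) -> float:
    """Stub for the platform's own automated SGI classifier (hypothetical)."""
    return 0.1  # placeholder probability that the content is synthetic

def pre_publication_check(upload: Upload, threshold: float = 0.8) -> str:
    """Decide how to publish, based on the declaration and the automated check."""
    flagged_by_tool = detector_score(upload.content_id) >= threshold
    if upload.user_declared_synthetic or flagged_by_tool:
        return "publish_with_sgi_label"  # label prominently and attach provenance metadata
    return "publish_unlabelled"

if __name__ == "__main__":
    print(pre_publication_check(Upload("vid-123", user_declared_synthetic=True)))
```

Treating either a positive declaration or a positive automated detection as sufficient to trigger labelling reflects the point that declarations cannot simply be taken at face value.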
Transparency is further bolstered by Rule 3(1)(c), which requires platforms to inform their users periodically, at least once every three months, about compliance requirements and the consequences of misusing AI tools. Intermediaries must warn users that violations can lead to immediate account suspension or termination, and that certain offences, such as those involving child protection or election laws, will be mandatorily reported to the authorities. For platforms offering AI-creation tools, Rule 3(1)(ca) imposes a further obligation to specifically warn users that misusing these resources to create unlawful SGI may attract criminal penalties under the Bharatiya Nyaya Sanhita, 2023, or the POCSO Act.
Finally, the government has provided a vital legal safeguard for intermediaries. Rule 2(1B) clarifies that removing or disabling access to synthetic content in compliance with these rules, even when automated tools are used, will not be treated as a breach of the conditions attached to safe harbour under Section 79(2) of the IT Act. This ensures that platforms can act decisively against deepfakes without fear of losing their statutory immunity for third-party content. By replacing references to the now-repealed Indian Penal Code with the Bharatiya Nyaya Sanhita (BNS) in Rule 7, the amendment also aligns the framework with India’s latest criminal law reforms.
