Mandatory Watermarking and Labelling of AI-generated Content


Introduction

The rapid advancement of generative artificial intelligence has fundamentally transformed the digital information ecosystem. Text, images, audio, and video content generated or manipulated through AI systems today often appear indistinguishable from authentic human-created material. While these technologies enable creativity and innovation, they also present serious risks: they can be used to spread misinformation, damage reputations, manipulate or influence elections, or commit financial fraud.


In response to the growing misuse of synthetically generated information, the Government of India has proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (the parent Rules came into force on 25 February 2021, upon their publication in the Official Gazette of India). The proposed amendments aim to provide a clear legal basis for the watermarking, labelling, traceability, and accountability of synthetically generated information.


Understanding Watermarking in the Context of AI-Generated Content

Watermarking, in the context of artificial intelligence, refers to the marking of AI-generated or AI-modified outputs, whether text, images, audio, or audio-visual content, so that such content is clearly identifiable as synthetic rather than human-created.


Beyond transparency, watermarking enables attribution, facilitates accountability, and assists in preventing intellectual property infringement. By making the origin and modification history of content identifiable, watermarking contributes to both legal enforcement and self-regulation in the generative AI ecosystem.


The idea of “permanent unique metadata” also raises significant technical concerns. In practice, watermarking operates as a technological “cat-and-mouse” game between developers and malicious actors. Many currently deployed watermarking techniques, especially visible overlays and embedded metadata, can be removed through basic image-editing operations such as cropping, compression, or format conversion, or by “re-rolling” content through another generative AI system. This highlights the broader challenge of creating truly “indelible” watermarks in open digital environments.
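The fragility of metadata-based watermarks can be illustrated with a toy sketch (not a real image codec; the structures and names below are invented for illustration): a provenance record stored as ancillary metadata survives only as long as every processing step chooses to copy it, so a naive re-encode or format conversion silently strips the watermark.

```python
# Toy sketch: metadata-based watermarks do not survive operations that
# copy only the raw pixel data, which is what crude format conversion
# or re-encoding through another tool typically does.

def make_labelled_image(pixels, tool_name):
    """Attach a provenance record alongside the raw pixel data."""
    return {"pixels": pixels,
            "metadata": {"ai_generated": True, "tool": tool_name}}

def naive_reencode(image):
    """A naive re-encode keeps the pixels but drops ancillary metadata."""
    return {"pixels": list(image["pixels"]), "metadata": {}}

original = make_labelled_image([0, 255, 128], "example-generator")
converted = naive_reencode(original)

print(original["metadata"].get("ai_generated"))   # True
print(converted["metadata"].get("ai_generated"))  # None: the label is gone
```

This is why robust schemes try to embed the signal in the content itself (e.g. in pixel statistics) rather than in detachable metadata, though even those remain vulnerable to the editing attacks described above.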


Internationally, efforts are being made to standardise provenance tracking. At the technical level, the Coalition for Content Provenance and Authenticity (C2PA) has developed an open standard known as “Content Credentials,” which enables publishers, creators, and platforms to record the origin and subsequent edits of digital content through secure, tamper-evident metadata. Major technology companies including Adobe and Microsoft have publicly committed to implementing C2PA standards in their generative AI tools. If India’s proposed “permanent unique metadata” requirement is to function effectively in a globally interconnected digital ecosystem, regulatory alignment with standards such as C2PA may become necessary to ensure interoperability and enforceability across jurisdictions.
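The core of C2PA’s tamper-evident approach is cryptographically binding a provenance record to the content, so that any edit to either one is detectable. The following is a much-simplified sketch of that idea using a keyed hash (HMAC) from the Python standard library; actual C2PA Content Credentials use certificate-based digital signatures and a standardised manifest format, neither of which is reproduced here.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in for a real signing credential

def attach_provenance(content: bytes, record: dict) -> dict:
    """Bind a provenance record to the content with a keyed hash."""
    payload = json.dumps(record, sort_keys=True).encode() + content
    tag = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"record": record, "tag": tag}

def verify_provenance(content: bytes, credential: dict) -> bool:
    """Return True only if neither the content nor the record was altered."""
    payload = json.dumps(credential["record"], sort_keys=True).encode() + content
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["tag"])

content = b"synthetic image bytes"
cred = attach_provenance(content, {"tool": "example-gen", "ai_generated": True})

print(verify_provenance(content, cred))          # True
print(verify_provenance(b"edited bytes", cred))  # False: tampering detected
```

The design point is that the credential does not prevent editing; it only guarantees that an edit cannot go unnoticed by a verifier, which is what “tamper-evident metadata” means in this context.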


Global Legal Approaches to Watermarking and AI Transparency

Globally, policymakers are increasingly converging on the need for transparency obligations in relation to generative AI systems. The European Union’s Artificial Intelligence Act represents the world’s first comprehensive and binding legal framework regulating artificial intelligence. As discussed above, the rapid integration of AI across sectors has created significant benefits but also serious risks; the AI Act responds by setting uniform standards, prohibiting harmful AI practices, and promoting trustworthy, human-centric AI while supporting innovation, particularly for SMEs and start-ups.

On 21 May 2024, the European Council formally adopted the European Union Artificial Intelligence Act (“EU AI Act”). The Act subsequently entered into force on 1 August 2024. While the Act will be fully applicable 24 months after its entry into force, certain provisions apply earlier.



Key Provisions of the European Union’s Artificial Intelligence Act

  • Risk-Based Regulatory Framework – Establishes a classification system categorising AI systems according to risk levels, with corresponding compliance obligations.

  • Prohibited AI Practices (Article 5, Chapter II) – Article 5 prohibits the placing on the market, putting into service, or use of AI systems that deploy subliminal, manipulative or deceptive techniques, exploit vulnerabilities, conduct social scoring, make criminal risk assessments based solely on profiling, create facial recognition databases through untargeted scraping, infer emotions in workplaces or educational institutions (except for medical or safety reasons), or conduct biometric categorisation based on sensitive characteristics.

  • High-Risk AI Systems Regulation – AI systems that negatively affect safety or fundamental rights are classified as high-risk, including systems used in products under EU product safety legislation and in specified areas such as critical infrastructure, education, employment, essential services, law enforcement, migration, and the administration of justice; they require assessment, registration where applicable, and lifecycle compliance.

  • Transparency Obligations for Generative and Certain AI Systems – Generative AI is subject to transparency requirements, including disclosure that content is AI generated, prevention of illegal content, publication of training data summaries, evaluation of high-impact general-purpose models, reporting of serious incidents, and clear labelling of AI-generated or modified content.

  • Strict Penalty Framework (Article 99) – Member States shall lay down rules on penalties and other enforcement measures for infringements of this Regulation; penalties shall be effective, proportionate and dissuasive, taking into account the interests of SMEs, including start-ups.

  • Innovation Support Measures – The law supports AI innovation and start-ups, allowing the development and testing of general-purpose AI models before public release. It requires national authorities to provide a testing environment that simulates real-world conditions to support companies, including small and medium-sized enterprises, in the EU artificial intelligence market.


In the United States, California introduced the “California Provenance, Authenticity and Watermarking Standards Act,” establishing a framework to authenticate synthetic and real content in response to generative AI and deepfakes. The Bill provides for the implementation of imperceptible and indelible watermarks by AI-generated content providers, embedding provenance data to ensure traceability of synthetic content. It mandates visible disclosure of provenance data by major online platforms in content distributed to users. It further requires digital cameras and recording devices, including smartphones, to offer watermarking capabilities indicating authenticity and provenance, including firmware updates for existing devices. A previous version also required all media producers to embed maximally indelible and privacy-preserving provenance data in all content, whether AI-generated or authentic.

Similarly, Singapore’s Model AI Governance Framework for Generative AI seeks to promote a trusted AI ecosystem by balancing innovation with user protection. The Framework adopts a systematic and balanced approach, calling on policymakers, industry, researchers and the public to collectively address accountability, misinformation and related concerns across nine interconnected dimensions. These global developments demonstrate a shared regulatory understanding: transparency through watermarking is foundational to trustworthy AI governance.



India’s Regulatory Evolution: From Advisories to Mandatory Obligations

India’s approach to regulating AI-generated content has evolved gradually. The Ministry of Electronics and Information Technology (“MeitY”) has previously issued multiple advisories directing intermediaries to exercise due diligence in curbing deepfakes and synthetic misinformation. A key advisory issued on 15 March 2024 required intermediaries facilitating synthetic content creation to ensure such content is labelled or embedded with permanent unique metadata, capable of identifying both the generating tool and subsequent modifications by users. Non-compliance was linked to penalties under the Information Technology Act, 2000.


Building upon this foundation, the proposed amendments dated 10th February, 2026 introduce a clear statutory framework for regulating synthetically generated information. The amendments define synthetically generated information and clarify that references to “information” in key due diligence provisions include such content. They strengthen intermediary obligations by requiring periodic user notifications, specifically warning users about legal consequences of misuse of synthetic content tools under applicable laws. Intermediaries enabling synthetic content creation must ensure prominent labelling or embedding of permanent identifiers for immediate identification. The amendments also require swift action on violations by reducing the time limit for removing unlawful content upon lawful order from 36 hours to 3 hours.


The European Union’s AI Act and Digital Services Act emphasise proportionality and explicitly account for SME interests, in contrast to India’s 3-hour obligation. A 3-hour compliance window may necessitate 24/7 automated moderation infrastructure, real-time monitoring systems, and rapid legal review teams. While large Significant Social Media Intermediaries (“SSMIs”) may possess such technological capacity, smaller platforms, start-ups, and SMEs may face disproportionate compliance burdens. Additionally, such a narrow compliance window creates a risk of over-blocking: to avoid losing protection under Section 79(3) of the IT Act, intermediaries may err on the side of caution and remove content pre-emptively, including lawful speech. A compressed takedown window, if not accompanied by procedural safeguards, may indirectly incentivise excessive content removal, thereby impacting freedom of speech under Article 19(1)(a) of the Constitution.



Key Features of the Proposed IT Rules Amendment on AI-Generated Content

Objectives of the Amendments:

  • Provide a clear statutory definition of “synthetically generated information”;

  • Require mandatory labelling, visibility, and embedding of permanent metadata or identifiers in synthetically generated or modified information to distinguish it from authentic content; and

  • Enhance the accountability of Significant Social Media Intermediaries (SSMIs) by requiring reasonable and proportionate technical measures to verify user declarations and appropriately flag synthetic information.


Key features of the Amendments:

  • Introduces a definition covering information that is artificially or algorithmically created, generated, modified, or altered using a computer resource in a manner that appears reasonably authentic or true (Rule 2(1)(wa));

  • Clarifies that references to “information” in the context of unlawful acts under Rule 3(1)(b), Rule 3(1)(d), Rule 4(2), and Rule 4(4) include synthetically generated information (Rule 2(1A));

  • Grants statutory protection to intermediaries removing or disabling access to harmful synthetically generated information based on reasonable efforts or user grievances, without affecting the safe harbour exemption under Section 79(2) of the IT Act (proviso to Rule 3(1)(b)); and

  • Due Diligence Requirements – Intermediaries enabling the creation or modification of synthetically generated information must ensure that such content is labelled or embedded with permanent unique metadata or an identifier. The label must be prominently displayed or made audible, covering at least 10% of the visual surface or the first 10% of the audio duration, to ensure immediate identification as synthetically generated information (new Rule 3(3)).
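For implementers, the 10% thresholds in proposed Rule 3(3) translate into a simple size calculation. The sketch below is one hedged reading: the Rule does not prescribe how the 10% of the visual surface must be shaped, so the full-width banner is an assumption of this illustration, and the function names are invented.

```python
import math

def min_label_banner_height(width_px: int, height_px: int,
                            coverage: float = 0.10) -> int:
    """Minimum height (px) of a full-width label banner covering `coverage`
    of the image area. For a full-width banner, the area fraction equals
    the height fraction; the banner shape is an implementation choice,
    not something the proposed Rule prescribes."""
    return math.ceil(height_px * coverage)

def audio_label_duration(total_seconds: float, fraction: float = 0.10) -> float:
    """Length of the audible label occupying the first `fraction` of the audio."""
    return total_seconds * fraction

print(min_label_banner_height(1920, 1080))  # 108 px for a full-HD frame
print(audio_label_duration(300.0))          # 30.0 seconds of a 5-minute clip
```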



Institutional and Legal Capacity to Enforce Watermarking in India

India’s existing legal framework comprising the IT Act, 2000, the IT Rules, 2021, the Digital Personal Data Protection Act, 2023, and the Bharatiya Nyaya Sanhita, 2023 already addresses various dimensions of cybercrime, privacy violations, and intermediary accountability. Institutionally, mechanisms such as Grievance Appellate Committees, CERT-In, the Indian Cyber Crime Coordination Centre (I4C), and the SAHYOG Portal provide a multi-layered response system for detection, reporting, and enforcement. Mandatory watermarking integrates seamlessly into this ecosystem by improving traceability, evidentiary reliability, and enforcement efficiency.


Challenges and the Way Forward

While watermarking has emerged as a powerful regulatory tool to enhance transparency and accountability in AI-generated content, its implementation must carefully balance the objectives of regulation with the need to preserve innovation and free expression, while also accounting for the compliance costs faced by developers and platforms. India’s regulatory approach should focus on effectively implementing mandatory labelling, metadata embedding, and due diligence obligations within the existing framework of the IT Act, 2000 and the IT Rules, 2021. Clear compliance standards, proportionate enforcement, and coordination among regulatory and enforcement authorities will be essential to ensure traceability and accountability of synthetically generated information while preserving intermediary protections.


A crucial complementary element is digital literacy. Regulatory transparency mechanisms are effective only when users understand how to identify and act upon them. Integrating awareness modules on detecting labelled or unlabelled AI-generated content would strengthen user capacity. In practice, if a user encounters suspected unlabelled deepfake content, they can report it through the platform’s grievance mechanism under Rule 3(2) of the IT Rules, 2021, or file a complaint on the National Cyber Crime Reporting Portal. Such procedural clarity empowers users to operationalise the regulatory safeguards introduced through mandatory watermarking.


A balanced and innovation-sensitive framework, aligned with global transparency trends and supported by institutional mechanisms, can strengthen trust in digital content, enhance evidentiary reliability, and promote responsible development and deployment of AI technologies in India.


Conclusion

Mandatory watermarking and labelling of AI-generated content represent a critical step towards building a transparent, accountable, and trusted digital environment. India’s proposed amendments to the IT Rules reflect a mature regulatory approach, one that recognises both the transformative potential and the disruptive risks of generative AI.


As global experience demonstrates, watermarking alone cannot eliminate misuse. However, when combined with legal accountability, technological safeguards, and informed users, it becomes a cornerstone of responsible AI governance. Moving forward, India’s challenge lies not only in enforcement, but in ensuring that regulation evolves in step with innovation, protecting democratic values, individual rights, and public trust in an increasingly synthetic information age.


The above article was authored by Mr. Rahul Jain (Partner), Ms. Yashodhara B Roy, Mr. Harsh (Associate) and Ms. Nandini (Associate).

