Taming the AI Goliath: Battle for Ethical and Legal AI Development
- Shashank Tiwari
“AI will be the best or worst thing ever for humanity.” - Elon Musk
Just weeks ago, Geoffrey Hinton, widely regarded as the 'Godfather of Artificial Intelligence,' expressed his agreement with Elon Musk’s concerns and estimated that there is a 10 to 20 percent risk of AI eventually taking control away from humans.
It is no surprise that Artificial Intelligence (AI) is advancing at a scorching pace, and the way it is transforming how businesses and industries operate is nothing short of phenomenal. Organizations are deploying AI models in different ways to refine internal operations and upgrade their offerings, sparking new waves of innovation. For instance, in transportation, AI is enabling self-driving vehicles, optimizing logistics and improving traffic management. In the manufacturing sector, AI is enhancing quality control, predictive maintenance and process automation. The retail sector is using AI for personalized marketing, inventory management and customer behavior analysis. In the agriculture sector in India, AI is assisting with crop monitoring, precision farming and pest detection. In the healthcare sector, AI is being utilized for diagnostics, imaging analysis, disease prediction and treatment recommendations. The National Cloud Initiative of the National Informatics Centre in India is providing various citizen-centric services using AI technology.
In the legal profession too, AI-powered tools are assisting lawyers with legal research, contract analysis and the automation of routine tasks, thereby boosting lawyers’ productivity. Even in the fast-paced world of M&A, legal professionals are using AI-powered tools to carry out due diligence on documents with unprecedented speed and accuracy. The Supreme Court of India is using an AI tool, ‘SUVAS’, to translate judgments from English into various vernacular languages; as per an estimate of August 2024, around 36,000 judgments have been translated into Hindi. Other High Courts, such as the Delhi High Court and the Bombay High Court, are also using AI tools to translate judgments into Hindi and Marathi, respectively.
Yet, the expansive use of AI comes hand in hand with substantial responsibilities and emerging regulatory concerns. These AI tools gather information from online sources or existing literature on a specific topic and instantly generate content. This growing use of AI in content generation has sparked critical concerns around copyright, especially where such tools reproduce, adapt or summarize copyrighted material without proper authorization. To address these challenges, it is imperative to assess whether India’s existing IP laws are adequately equipped to regulate the complexities introduced by generative AI models, or whether there is an urgent need for new, dedicated legislation to ensure that AI continues to serve as a catalyst for innovation rather than a threat to creators’ rights and interests.
INTERPLAY BETWEEN AI AND INTELLECTUAL PROPERTY
With copyright laws in many countries lacking AI-specific provisions, courts around the world are witnessing a surge in cases where copyright owners allege that AI models infringe their rights, either by being trained on copyrighted works or by generating outputs that themselves constitute copyright infringement. As a result, courts are grappling with complex issues such as:
Who bears responsibility for infringement of third-party rights: the AI developer, the user who created the output, or the entity that trained the model?
How can existing IP rights be protected without hampering AI-driven innovation?
Can AI-generated content fall within the ambit of ‘fair use’ under copyright laws?
Further, considering that both copyright and patent laws require the author or inventor to be a human being and that the work or invention must contain a sufficient degree of intellectual creativity or novelty, another critical question arises: whose copyright is actually being infringed when a generative AI tool creates entirely new content or works, whether in the form of images or text?
While India has not yet introduced a dedicated regulatory framework that directly governs the development, deployment and utilization of AI models (akin to the EU Artificial Intelligence Act), the current laws pertaining to IP, data protection, IT and consumer protection are considered adequate to regulate and manage certain aspects of AI technology. The central government is of the view that the existing copyright laws are well equipped to protect AI-created works. Additionally, the proposed Digital India Act is expected to address the gaps in the Information Technology Act, 2000 and provide a comprehensive legal framework that balances the need for innovation with principles of accountability.
In India, this issue has come to the forefront with a recent legal dispute, ANI Media Pvt. Ltd. vs. OpenAI OPCO LLC (CS (Comm) No. 1028/2024). ANI, a prominent Indian news agency, has filed a copyright infringement case against OpenAI, claiming that OpenAI used its news content without authorization to train its large language model (LLM), ChatGPT. ANI contends that even though its news articles were publicly accessible online, OpenAI had no right to use them for commercial purposes or to generate responses from its proprietary content. OpenAI, however, has opposed these claims, arguing that its technology does not replicate specific articles but generates responses by analyzing linguistic patterns across publicly available datasets and, therefore, does not require prior permission.
The case has garnered wider attention, with stakeholders from the music industry and book publishing sector also raising concerns about OpenAI’s use of copyrighted works in training its models. The outcome of the ANI Media case is expected to be a landmark ruling that could shape the future legal framework governing the development and deployment of AI in India. It may ultimately determine whether current laws are sufficient to address AI-related issues or whether specific legislative reforms are needed to regulate the use of AI technologies more effectively.
DATA PRIVACY
Another pressing issue is the extent to which AI platforms deploy safeguards when collecting, processing and utilizing personal data, an area that raises significant concerns under data privacy laws across various jurisdictions. The concerns include: (i) the use of compromised or unlawfully obtained data (such as information sourced from data breaches); (ii) unlawful sharing of personal data with third parties; (iii) cross-border data transfers that may contravene data protection regulations; (iv) processing personal data without obtaining the informed consent of the data principal; and (v) the failure to implement adequate legal and technical safeguards to ensure data confidentiality and security.
In India, the Digital Personal Data Protection Act, 2023 (DPDP Act) aims to regulate the processing and storage of personal data, the manner of obtaining prior consent from data principals and the intended use of such data. However, the Act does not specifically address the challenges posed by AI. Nevertheless, stakeholders developing AI platforms, in their capacity as data fiduciaries, ought to comply with the provisions of the DPDP Act and fulfil the legal obligations prescribed therein while processing personal data.
The DPDP Act stipulates that a data fiduciary must provide a notice to the data principal, informing them of the nature of the personal data being collected, the purposes for which it will be processed, the procedure for withdrawal of consent, and other rights available to the data principal under the Act.
Notably, while the DPDP Act introduces data localization requirements by restricting the transfer of certain categories of personal data outside India, it does not adequately address issues arising from the use of large language models hosted or deployed abroad, thereby leaving significant gaps in cross-border data protection. Given that India’s data protection regime is still at a nascent stage and that there is currently no dedicated sectoral regulator or comprehensive AI-specific legislation, numerous challenges are likely to emerge in tackling legal issues involving AI.
EVOLVING GLOBAL FRAMEWORKS FOR REGULATION OF AI
Several countries have taken proactive steps to regulate AI, offering valuable insights for India, and different jurisdictions are adopting varied approaches to AI governance. Rather than enacting specific legislation, countries such as Singapore, Japan, the United Kingdom and Australia are focusing on providing industry guidance aligned with the OECD principles. In contrast, some jurisdictions are actively working towards formal legislation to regulate AI. For example, Canada, through its proposed Artificial Intelligence and Data Act, and the European Union, through its Artificial Intelligence Act (first proposed in 2021 and adopted in 2024), seek to regulate AI based on a risk-based classification system, ranging from minimal to unacceptable risk, with stringent compliance requirements for high-risk AI applications. Notably, China has implemented the world’s first regulations specifically targeting generative AI. The United States is also following a risk-based approach, with multiple regulatory frameworks under discussion to address various aspects of AI at both the federal and state levels.
PARTING THOUGHTS
As the global AI landscape continues to evolve, fueled by significant technological advancements and growing adoption across both the public and private sectors, the aim, rightly, is to establish a robust legal framework that addresses the legal, ethical and security challenges posed by AI. It is time the law adapts swiftly to keep pace with the rapid technological revolution driven by GenAI, ensuring that while sufficient protection is afforded to copyrighted works, innovation is not unduly stifled. To this end, a carefully calibrated approach is essential: one that fosters the development of transparent and accountable AI practices while recognizing that the benefits of AI far outweigh the challenges it presents.
The article was authored by Mr. Sumit Jay Malhotra (Partner Designate) and Ms. Nishi Rathore (Associate).
The article was published by Legal 500 (https://www.legal500.com/firms/33535-hammurabi-solomon-partners/c-india/news-and-developments/taming-the-ai-goliath-battle-for-ethical-and-legal-ai-development)