When AI Gets it Wrong: Rethinking Authorship and Liability in the Age of Machine Learning
Introduction
With technology ever evolving, the growing reliance on artificial intelligence in professional settings has begun to force the law to confront a very important question: when AI generates false or fabricated content, who bears responsibility? The recent incident involving Deloitte, in which a government-commissioned report prepared with AI contained false citations and references to non-existent work, has brought this issue into focus, not as an imaginary concern but as a real legal problem.[1] AI systems can generate content that looks reliable but is actually false.[2]
Turning to how AI tools really work, these systems do not actually know facts; rather, they generate answers based on patterns drawn from large datasets. For this reason, AI tools can sometimes produce a "hallucination", meaning information that sounds correct but is actually false.[3]
When such errors occur in professional work, they can lead to serious consequences such as financial loss and reputational damage.[4]
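To make the point concrete, here is a minimal sketch, a toy bigram "language model" and emphatically not a real LLM, that generates text purely from word-to-word patterns in its training data. Because it has no notion of truth, it can splice fragments into fluent statements that appear in no source at all, which is, at a vastly simplified level, the same basic reason large models hallucinate. The training sentences and function names here are illustrative inventions.

```python
import random

# Toy training corpus (hypothetical sentences, tokenized by spaces).
training_text = (
    "the court held the claim valid . "
    "the report cited the case law . "
    "the case law supported the claim ."
)

# Build a table of which words follow which word in the corpus.
follows = {}
words = training_text.split()
for a, b in zip(words, words[1:]):
    follows.setdefault(a, []).append(b)

def generate(start, n, seed=0):
    """Chain up to n words by repeatedly picking a plausible next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

# Every word it emits is "plausible" (seen in training), yet the
# resulting sentence may assert something no source ever said.
print(generate("the", 8))
```

The model never checks whether "the report cited the claim" is true; it only checks that each word has followed the previous one before. Scaled up enormously, that is the gap verification is meant to close.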
Can AI be an Author?
An important and interesting question that often comes to mind is whether an AI can be considered the author of the content it generates.
Copyright law has always been grounded in human creativity, and precedents laid down by the Hon'ble courts have made it clear that for a work to be protected, it must involve a certain degree of human intellect or effort.
To support this observation, consider Eastern Book Company v. D.B. Modak, wherein the Hon'ble Supreme Court of India held that copyright requires a minimum level of creativity.[5] It follows that purely mechanical or automated outputs do not qualify.
Similarly, the Hon'ble U.S. Supreme Court in Feist Publications Inc. v. Rural Telephone Service Co. held that originality requires both independent creation and some degree of creativity.[6]
The idea-expression distinction further supports this, as seen in R.G. Anand v. Delux Films, where the court clarified that copyright protects expression and not the mere ideas on which the expression is based.[7] AI systems mostly work by identifying and reproducing patterns, which raises doubts about whether their outputs truly qualify as original expression.
Apart from the above cases, courts have also stated in full clarity that non-human entities cannot be authors. Take the example of Naruto v. Slater, wherein a picture taken by a monkey was denied copyright protection because its author was not human.[8]
In India, by contrast, in the RAGHAV AI artwork case, an AI was briefly recognized as a co-author, though that recognition was later withdrawn.[9]
From these examples, it can be concluded that the law does not recognize AI as an author.
Issues with AI-Generated Content
AI systems are trained on huge amounts of data, much of which may be copyrighted, yet the output they produce is often treated as new or original.
This creates confusion: on one hand, AI depends on existing works; on the other, its output is treated as independent.
Courts have long held that originality comes from human skill and judgment. This was affirmed in University of London Press Ltd. v. University Tutorial Press Ltd., wherein the court held that originality requires the exercise of human effort.[10]
AI, however, only predicts patterns; it does not apply judgment in the legal sense. Because of this, the outputs it produces can sometimes be inaccurate or completely false.
Who is Responsible?
Judging from the aforementioned case law, since AI cannot be an author, it also cannot be held legally responsible. That leaves two possible parties who can be held responsible:
- The developer of the AI tool, and
- The user of the AI tool
Most legal thinking supports holding the user of the AI tool responsible, and this is especially true in professional settings.
This is because of the principle of duty of care established in Donoghue v. Stevenson, wherein the Hon'ble court held that a person must exercise proper care to prevent causing harm to others.[11]
Applying the principle of duty of care to AI, if a person or company relies on AI-generated content without checking it, they may be negligent.[12]
Therefore, in the Deloitte situation, responsibility lies with the firm that submitted the report, not with the AI tool that produced it.
To put it in simple words, using AI does not remove your responsibility to verify information.
Need for laws governing the same
Even though courts have repeatedly tried to apply existing principles, the major issue remains that there are no clear laws directly dealing with AI-generated content.
The disputes that arise are currently resolved under older laws, such as those on copyright and negligence; however, these laws were not designed for AI.[13]
Legal systems are evolving at a much slower pace than the rapidly developing technology, resulting in a gap between how AI is used and how it is regulated.[14]
Even newer regulations, such as the EU's AI Act, focus mainly on safety and risk, not on deeper questions like authorship and liability.[15]
Furthermore, in India the situation is even more unclear, with no specific laws that deal with:
- Ownership of AI-generated works
- Responsibility for AI errors
- Use of copyrighted training data
The lack of clarity leads to uncertainty for businesses, professionals, and courts.
Conclusion
As powerful as AI is, it is not a responsible actor. Though it produces content, it can neither verify that content nor take accountability for it. For that reason, responsibility continues to lie with the people who use these tools, and rightly so. Courts across various jurisdictions have repeatedly emphasized that authorship, ownership, and liability are rooted in human involvement and not in machine output, as the precedents mentioned above make clear.
At the same time, the current legal framework is without a doubt in need of urgent improvement. Existing laws were not made to deal with systems that can independently generate large volumes of content, often with convincing yet incorrect details. Relying on traditional doctrines like copyright and negligence may not be enough as AI becomes more deeply embedded in professional and commercial decision-making.
There is, therefore, an increasing need for legal systems to develop alongside technological change. Clearer laws on issues such as authorship of AI-generated works, liability for errors made by AI, and accountability for the use of training data are required. Without such clarity, uncertainty will persist for users, developers, and courts alike.
The Deloitte incident is a reminder that while technology may change rapidly, accountability remains the core concern of law. AI assists in creating content, but it faces no consequences for the content it creates. Until the law develops more specific frameworks for AI, one principle remains unchanged: those who choose to rely on AI must also bear responsibility for its output.
Author: Kavya Sharma, in case of any queries please contact/write back to us via email to [email protected] or at IIPRD.
[1] Deloitte AI Report Had Fabricated References: What Happened, Business Standard (Oct. 8, 2025),
https://www.business-standard.com/technology/tech-news/deloitte-ai-hallucination-report-australia-gpt4o-fabricated-references-125100800915_1.html
[2] Deloitte, AI Hallucinations: A New Risk in Decision-Making (2023),
https://www.deloitte.com/ch/en/services/consulting/perspectives/ai-hallucinations-new-risk-m-a.html
[3] Ibid.
[4] Deloitte's AI Fallout Explained: The $440,000 Report That Backfired, NDTV (2025),
https://www.ndtv.com/world-news/deloittes-ai-fallout-explained-the-440-000-report-that-backfired-9417098
[5] Eastern Book Company v. D.B. Modak, (2008) 1 S.C.C. 1 (India).
[6] Feist Publications Inc. v. Rural Telephone Service Co., 499 U.S. 340 (1991).
[7] R.G. Anand v. Delux Films, (1978) 4 S.C.C. 118 (India).
[8] Naruto v. Slater, 888 F.3d 418 (9th Cir. 2018).
[9] Indian Copyright Office Issues Withdrawal Notice to AI Co-Author, Managing IP (2021),
https://www.managingip.com/article/2a5bqtj8ume32iwlaoy5y/exclusive-indian-copyright-office-issues-withdrawal-notice-to-ai-co-author
[10] University of London Press Ltd. v. University Tutorial Press Ltd., [1916] 2 Ch 601 (KB).
[11] Donoghue v. Stevenson, [1932] A.C. 562 (HL).
[12] AI Errors in Deloitte Report Highlight Risks of Generative AI Use, Medianama (2025),
https://www.medianama.com/2025/10/223-deloitte-australia-ai-errors-440k-report/
[13] World Intellectual Property Organization, Revised Issues Paper on Intellectual Property Policy and Artificial Intelligence (2020),
https://www.wipo.int/meetings/en/doc_details.jsp?doc_id=499504
[14] European Parliament, Artificial Intelligence Act: EU Rules for AI (2023),
https://www.europarl.europa.eu/topics/en/article/20230601STO93804/artificial-intelligence-act-eu-rules-for-ai
[15] Ibid.
