Innovation Under Licence: A Critical Analysis of DPIIT’s Generative AI Copyright Proposal
The Department for Promotion of Industry and Internal Trade (“DPIIT”), on 8 December 2025, published a working paper titled “Generative AI and Copyright, Part I: One Nation, One Licence, One Payment: Balancing AI and Innovation”. The paper examines the copyright issues raised by the training of generative AI systems and proposes a hybrid licensing model under which tech companies could use copyrighted materials to train AI under a blanket licence, in exchange for royalties paid to the owners of the copyrighted material.
The proposal seeks to strike a balance between the needs of creators and the need to facilitate innovation in artificial intelligence. The proposed mechanism, however, raises legal, economic and institutional concerns owing to the scale, complexity and commercial character of AI training.
The author first elaborates on the committee’s proposal, then highlights the challenges in the proposed model, and lastly compares it with the legal frameworks adopted in other jurisdictions.
The Proposed Hybrid Licensing Model
The DPIIT paper suggests the creation of a Copyright Royalties Collective for AI Training (“CRCAT”) as a non-profit statutory entity, a copyright licensing organisation to deal with the training of generative AI systems on copyrighted materials. CRCAT would comprise representatives of the various categories of copyright owners, and its task would be to collect royalties and distribute them to those owners. The licence would cover all lawfully acquired copyrighted content used for training AI, and royalties would become payable only once AI systems start generating revenue.
The proposed hybrid framework strongly resembles a general statutory licensing system, distinguished mainly by its magnitude and by a new administrative apparatus that is comparatively broader in ambit.
Administrative and Distribution Problems
The training of AI relies on the consumption of enormous amounts of data, making it highly improbable that the value added by any individual work can be ascertained. The paper, at this stage, does not disclose any method for calculating royalties. Such a mechanism must rest on objective measures of valuation; in its absence, the distribution of royalties will become opaque and discretionary.

Incentives, Quality, and Long-term Impact
High-quality creators may face a problem of incentive misalignment, since not all copyrighted content contributes equally to the efficacy of AI models. When compensation is standardised, or unrelated to quality and use, producers of quality material end up being compensated poorly. Inadequate incentives for creators may reduce the production of good-quality works, degrading the datasets used to train AI. In the long term, this may make AI outputs inferior, harming both the creative ecosystem and the AI industry.
Constitutional Implications
Article 300A of the Constitution recognises copyright as property. Granting compulsory access to works, without permission, for commercial AI training would raise questions of proportionality and due process. In Entertainment Network (India) Ltd. v. Super Cassette Industries Ltd., the Supreme Court held that compulsory licensing must operate as an exception and not as a general rule. Moreover, a policy of compulsory licensing may implicate Article 19(1)(g): creators who object to the use of their works on moral or reputational grounds would be confronted with questions of freedom of occupation and expression.
Market Structure and Compliance Costs
The recommended system imposes fixed compliance costs in the form of licence fees, reporting, auditing requirements and legal oversight. These costs may overburden small AI developers, startups, academic institutions and open-source projects that lack the financial and administrative means to comply, resulting in market concentration and reduced competition. In the long run, the model may stifle experimentation and entrench monopolies, contradicting India’s goal of establishing a diversified and competitive AI environment.
The US Supreme Court in Eastman Kodak Co. v. Image Technical Services, Inc. (1992) observed that licensing control over access to crucial parts or services can block the entry of competitors and impede follow-on innovation. The early phase of AI development is not the place to impose heavy regulatory friction.
Enforcement and Information Asymmetry
Self-reporting and audits are typically the only ways of determining what data was used in training, verifying disclosures and attributing value. This creates information asymmetry between regulators and firms, leaving room for discretionary enforcement and strategic compliance behaviour.
Legal Frameworks Across Jurisdictions
The problem of copyrighted material in AI training has been tackled differently in various jurisdictions.
The Text and Data Mining (“TDM”) exemptions provided in the European Union allow AI training on copyrighted sources without authorisation or compensation. Although this model guarantees low transaction costs and legal certainty for AI developers, it has been criticised for failing to compensate creators adequately, especially when AI systems operate at commercial scale.
In response, the EU has adopted an opt-out mechanism, allowing rights owners to reserve their rights. The opt-out system, however, has its shortcomings. Smaller creators may lack the technical expertise or resources to enforce their opt-out rights, resulting in de facto inclusion without meaningful consent.
Both Japan and Singapore have adopted wide-ranging computational or data-analysis exceptions, allowing copyrighted works to be used for analytical purposes regardless of whether the use is commercial. These frameworks are strongly innovation-focused and have fostered healthy AI ecosystems. They do not, however, directly deliver much economic value to creators, relying instead on benefits flowing from downstream markets.
Although compulsory licensing models can guarantee remuneration, they are typically limited in application and decided on a case-by-case basis. Making licensing for AI training compulsory would override the wishes of many creators, particularly where large numbers of them may object on moral, reputational or economic grounds.
Essentially, these models all attempt to balance innovation against compensation for creators. Nevertheless, the majority of jurisdictions have not imposed blanket ex ante licensing obligations on AI training, relying instead on less burdensome alternatives such as output-based liability.
Conclusion
DPIIT’s working paper is a genuine effort to balance competing interests in the AI-copyright debate. Nevertheless, the hybrid licensing scheme threatens the autonomy of creators, the effectiveness of fair dealing and the incentives to innovate, and may concentrate power in AI development.
Instead of instituting a wide-ranging licensing regime, a more balanced approach, one that focuses on large commercial AI systems, preserves fair dealing, permits opt-outs or voluntary collective licensing, and attaches liability to infringing outputs, would provide a more cost-effective and efficient mechanism for governing generative AI training. Without these elements, the model will tend to prioritise short-term redistribution over long-term innovation, harming India’s vision of becoming a global AI leader.
Author: Cheenar Shah. In case of any queries, please contact us via email at [email protected] or at IIPRD.
