This piece was published in the Asia Business Law Journal as the India chapter of the Legal Guide to AI Regulation, under the title "Call for focused approach to AI regulation in India" on Law.asia.
India has a robust services-based economy encompassing diverse sectors including IT services, telecoms, e-commerce, healthcare and financial services. These factors position it as a significant data repository that can drive the development of AI, specifically GenAI.
Recognising the transformative and economic potential of AI, the government has been taking proactive measures aimed at capitalising on this growth, evidenced by initiatives such as the IndiaAI Mission, IndiaAI Dataset Platform and AIKosha, directed towards the delivery of essential or routine services.
Additionally, multiple policy briefs and sectoral consultation papers have highlighted the need to bring effective regulatory oversight over AI, ensuring it is developed and deployed safely, fairly and with accountability.
The national approach is primarily directed towards encouraging AI innovation, along with mandating a principles-based approach to AI ethics. The government is prioritising AI-driven economic growth by creating frameworks to foster AI innovation, recommending ethical standards, and promoting investment in digital infrastructure and skill development programmes.
As such, regulation of AI remains an evolving issue, with the government striving to balance technological advancement with regulatory oversight, while taking steps as a relevant stakeholder in the global conversation on AI governance.
In contrast to jurisdictions like the EU, with prescriptive AI-specific laws, India's regulatory approach has been somewhat reactive and, at this stage, lacks binding substance.
For example, India has yet to frame a comprehensive legal framework (whether under a new law or by amending existing laws) tailored to AI governance.
While sectoral requirements exist, both binding and advisory, these are fragmented across multiple regulators like the Reserve Bank of India (RBI), Securities and Exchange Board of India (SEBI), Telecom Regulatory Authority of India (TRAI) and Competition Commission of India (CCI), each interpreting AI governance through its own institutional lens.
This decentralised approach has resulted in some business uncertainty. To address issues arising from AI use while navigating this evolving landscape, India must move towards a more unified and proactive approach to its regulation and governance.
Regulatory approach so far
The government has, at different instances, argued for both a hands-off approach and more direct intervention, creating unpredictability for AI developers. Recent developments include:
- In 2024, the Ministry of Electronics and Information Technology (MeitY) issued an advisory requiring prior approval for deploying AI models, along with mandating platforms and intermediaries to implement measures to prevent the dissemination of deepfakes and algorithmic discrimination, label AI-generated content, and inform users of the unpredictability of AI outputs.
However, following some industry pushback, this advisory was withdrawn and replaced with a revised version that is non-binding in nature. A more direct approach was expected from the Digital India Act (DIA), a proposed unified law regulating high-risk AI systems that would introduce algorithmic accountability, zero-day threat and vulnerability assessments, and rules on AI-based ad targeting and content moderation. The government is still working on drafting this law and continues to reconsider the timing of its release.
- In January 2025, the MeitY released its Report on AI Governance Guidelines Development, which takes up some of the issues discussed in connection with the DIA, such as amending intellectual property laws to address AI-based infringement and the copyrightability of AI outputs, regulating bias and discrimination arising from the use of AI, and activity-based regulation of AI grounded in risk mitigation. The report also encourages a sandbox approach for low-risk uses of AI, and advocates voluntary commitments from the industry (through content provenance, red teaming, model cards, etc.) to support the government's information-gathering objectives on AI.
- Various sector-specific regulators such as SEBI, the RBI and the Bureau of Indian Standards (BIS) have made efforts to address AI-related concerns applicable to their regulated entities. The positive news is that the MeitY report advocates a "whole of government" approach so a unified AI policy framework can be applied across industries.
Key legal issues
Several critical legal issues remain unresolved, such as:
AI bias and algorithmic accountability. AI systems have been criticised for exhibiting bias, especially in hiring, lending, law enforcement and healthcare. This bias often arises from use of skewed or incomplete data during the training phase.
Unfortunately, India’s current legal framework lacks provisions that mandate fairness, transparency and accountability in AI systems and the data used to train them. The MeitY report acknowledges the concerns related to AI bias, but stops short of recommending specific regulatory requirements aimed at mitigating such risks.
As a result, AI developers continue to operate with limited or no legal safeguards, and users remain exposed to algorithmic discrimination without clear avenues for redress in the event of harm or loss.
Data privacy and AI training. While the Digital Personal Data Protection Act, 2023 (DPDP Act) itself does not regulate AI, it will have indirect implications on the way AI systems are developed and deployed, particularly when they make use of personal data.
For example, the DPDP Act does not apply to publicly available personal data; given that many AI applications will use such data, clarity on the scope of this exemption would be helpful. Further, the act allows data principals to seek correction or erasure of their data, a difficult task if that data has already been used to train an AI model.
Exploratory use of data for AI training may also not be possible, as the DPDP Act requires purpose-specific notice and consent to be obtained from data principals.
Copyright conundrum. The use of copyrighted material to develop and train AI systems may lead to the creation of derivative works and thereby lead to infringement actions. The MeitY report also categorises this use of copyrighted data as infringement.
However, it fails to clarify what degree of similarity with the copyrighted material must be present for a successful claim of infringement. It indicates that, because the current law gives copyright holders limited means of enforcing infringement claims, the law will need updating to strengthen their position, alongside due diligence measures to be implemented by AI developers before accessing copyrighted content.
As AI becomes more capable of generating creative works, questions surrounding the copyrightability of AI-generated content have become increasingly important. But in India there is no clear legal stance on whether AI-created content is eligible for copyright protection, or if human involvement is necessary for authorship claims. This uncertainty creates challenges for businesses and creators who are unsure of their rights.
Intermediary liability. The classification of AI models as intermediaries under India’s Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 requires careful legal scrutiny, particularly in light of section 79 of the Information Technology Act, 2000 (IT Act). The safe harbour protections under this provision are contingent on intermediaries not modifying or selecting content, a condition that most AI-based systems may not satisfy.
The intermediary liability framework under the IT Act needs updating to reflect the realities of AI systems, ensuring that AI-generated content is not misclassified under legacy definitions of publishers or intermediaries.
Responsibility. Another core issue in AI regulation is determining who should bear responsibility – the developer who builds the AI model, the deployer who integrates it into applications (each in a different capacity), or even the end users who supply prompts, particularly as AI is increasingly accessed via APIs?
These issues are not dealt with under present law, and while the MeitY argues that existing laws will continue to regulate instances of abuse or violation involving AI, interpretative uncertainty over the allocation of liability and accountability will persist unless sufficient guidance is provided.
Conclusion
India’s approach to AI regulation has made significant strides in terms of policy, but continues to grapple with uncertainty when it comes to definitive legislation. Although the MeitY report has sparked discussions on key regulatory challenges, concrete measures may take time. While the DIA was expected to regulate some of these issues, recent reports suggest the government is reconsidering the introduction of a unified law until the implications and benefits of AI are fully understood in India’s unique context.
Moving forward, a balanced and thoughtful approach to AI-specific legislation is crucial to foster business certainty, support user rights and enable responsible innovation.