Background
In late 2023, the Office of the Principal Scientific Advisor (“PSA”) to the Government of India took steps toward addressing the complex regulatory landscape of artificial intelligence (“AI”) in India. Recognizing AI’s transformative potential across various sectors, the PSA formed an Advisory Group tasked with providing guidance on AI governance. Under the guidance of this Advisory Group, a sub-committee on ‘AI Governance and Guidelines Development’ (“Sub-Committee”) was set up on November 9, 2023, to provide workable recommendations for AI governance in India.
On January 6, 2025, the Sub-Committee released its recommendations in the form of the AI Governance Guidelines Report (“Report”), which is currently open for public consultation, with feedback due by January 27, 2025.
Overview of the Report
- The Report reiterates a principle-based approach for AI governance in India and refers to existing global and Indian principles that may be operationalized by the Government and the private sector; rather than proposing new principles, it essentially restates existing ones.
- It does not define AI, stating that existing definitions are either ill-equipped to regulate the rapidly evolving technology or are overbroad. The Sub-Committee argues that instead of a catch-all definition, specific technologies may be defined, when necessary, to regulate the harms they may cause.
- It argues for harm mitigation to be a core regulatory principle for prospective laws, but clarifies that any prospective regulation of AI should address risks of harm that are real and specific.
- It identifies that AI regulation is possible in the following ways:
- Entity-Based regulation: sectoral entities in the banking, finance, healthcare, telecom, and vehicle manufacturing sectors can be subject to licenses/authorizations;
- Activity-Based regulation: specific activities relating to AI are subject to regulation, regardless of the entity undertaking them. For example, consumer safety, taxation, online safety, data protection, anti-trust, copyright, patent, employment, and contracting are possible areas of regulation;
- Combination approach: a combination of the above two approaches.
At the present stage of AI development in India, the Report advises that activity-based regulation may be adopted, to be followed by a combination approach in the future.
- It emphasizes the need for a “whole-of-government” approach and advises against fragmented regulations or siloed governance, as that could hamper AI’s potential, amplify its risks, and restrain the Government from building a common understanding of crucial issues. For this, it recommends setting up certain bodies, namely:
- Inter-Ministerial AI Coordination Committee (“Committee”) – To monitor use of AI, obtain information from the AI sector, and regulate AI development in India;
- Technical Secretariat (“Secretariat”) – To be set up by MeitY to offer technical solutions, develop standards and protocols, and maintain an AI incident database;
- AI Sub-Group – To work with MeitY to suggest measures to be incorporated under the Digital India Act, to consolidate and strengthen the legal framework, and provide adjudication mechanism for claims arising from AI risks.
- It argues that while existing laws do regulate AI to some extent, there is a need to bring in specific regulation in areas such as copyright law and bias/discrimination arising from the use of AI.
- Going forward, it suggests that India should regulate AI through a combination of voluntary commitments / standards from AI developers / deployers / other relevant entities in the lifecycle of an AI system, and sectoral and/or risk-based regulations applicable for use of AI.
- At various points, it reiterates the need for the Government to have information and transparency on AI use in India, and to supplement laws with technological solutions that the Government can explore with assistance from AI developers / deployers. There are multiple references to deepfakes being a focus area of regulation and related information sharing under the Report.
Key highlights and Gap Analysis provided under the Report
A. Principle-based approach to be adopted through the lifecycle of an AI system to regulate all relevant AI actors – Providing information to the Government
- Principles for use of AI – The Report highlights key AI governance principles for ‘responsible and trustworthy AI’[1], and indicates India’s consensus with bringing in AI policy-making that operationalizes these principles.
These principles include: (i) Transparency; (ii) Accountability; (iii) Safety, Reliability, & Robustness; (iv) Privacy & Security; (v) Fairness & Non-Discrimination; (vi) Human-Centered Values & “Do No Harm”; (vii) Inclusive & Sustainable Innovation; and (viii) Digital-by-Design Governance.
- It suggests a lifecycle approach to AI governance to best operationalize these principles, given that the risks of AI systems will differ across the stages of an AI system. These stages are described as follows:
- Development stage: when an AI system is designed, built, tested, and trained for eventual use;
- Deployment stage: when an AI system is put into use by deployers;
- Diffusion stage: when an AI system is assessed in view of its long-term use and implications across sectors.
- It further suggests an ecosystem approach towards regulation of AI actors to ensure that regulatory focus is not limited to a single or specific stakeholder(s), and instead that roles are examined at a broader level to better distribute responsibility, accountability, and liability. AI actors are described as data principals, data providers, AI developers (including model builders), AI deployers (distributors and app builders), and end-users (B2B and B2C). One of the recommendations in this regard is to explore “technology artefacts” to assign unique identities to AI actors, so that their respective activities can be tracked and recorded to establish liability. The Report expects that this may also help AI developers or deployers to trace, and then accede to, Government requests for data disclosure on grounds identified under laws such as the Information Technology Act, 2000 (“IT Act”), namely prevention of crime and national security.
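The Report does not prescribe any particular implementation for such artefacts. Purely to illustrate the traceability idea, the following is a minimal sketch assuming a hypothetical registry that issues actor identities and keeps a hash-chained, signed log of lifecycle activities; all identifiers, fields, and the signing scheme are our assumptions, not the Report’s.

```python
import hashlib
import hmac
import json
import time

# Hypothetical signing key held by a trusted registry (assumption, not from the Report)
SECRET_KEY = b"registry-signing-key"

def make_record(actor_id: str, role: str, action: str, prev_hash: str) -> dict:
    """Create a signed activity record chained to the previous record's hash."""
    record = {
        "actor_id": actor_id,    # hypothetical unique identity of the AI actor
        "role": role,            # e.g. "developer", "deployer", "data_provider"
        "action": action,        # the lifecycle activity being recorded
        "timestamp": time.time(),
        "prev_hash": prev_hash,  # links records into a tamper-evident chain
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

# A toy chain tracing activities across an AI system's lifecycle
genesis = make_record("dev-001", "developer", "trained model on dataset D1", "0" * 64)
deployment = make_record("dep-042", "deployer", "deployed model in consumer app", genesis["hash"])
print(deployment["prev_hash"] == genesis["hash"])  # True: activities are linked
```

Because each record embeds the hash of the one before it, altering any earlier entry would break the chain, which is the property that would let such artefacts support a liability trail of the kind the Report contemplates.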
- It identifies that all existing and prospective regulation must be based on the Government being equipped with all relevant information pertaining to the use of AI systems in India. It highlights that to develop appropriate governance, the Government will require information pertaining to:
- Traceability: of all data, models, systems, and stakeholders through the lifecycle of an AI model; and
- Transparency: expected from relevant stakeholders regarding their respective liability and risk management practices, including amongst each other.
The Report alludes to existing sectoral laws that achieve the above information-sharing objective and asks for these disclosures to be more widespread.
B. ‘Whole-of-Government’ Approach to be achieved through the Committee
- The Report highlights the fragmented nature of current AI governance efforts and proposes a unified, coordinated governance approach. To achieve this, it recommends establishing the Committee, comprising both official members (for instance, departmental and sectoral representatives from MeitY, Niti Aayog, the Telecommunication Engineering Centre, the Bureau of Indian Standards, and sectoral regulators like RBI, IRDAI, SEBI, the Indian Council of Medical Research, the Telecom Regulatory Authority of India, etc.) and non-official members (persons capable of representing the interests of AI developers, AI deployers, data providers, data principals, and end-users, as well as members of academia), all of whom are important stakeholders in matters of AI governance. This Committee should have a permanent status, and will broadly be responsible for the following:-
- Applying existing laws, and updating them as necessary to minimize risk;
- Providing legal clarity on emerging AI issues;
- Harmonizing efforts on common definitions and risk analysis of emerging issues;
- Promoting self-regulation to operationalize AI principles;
- Coordinating multi-stakeholder efforts amongst Government regulators;
- Promoting the development of responsible AI applications;
- Extending efforts to create Indian datasets, to ensure measurement of fairness, accountability, and transparency in the Indian context.
- It reiterates that a common roadmap for AI governance is necessary to avoid duplication of efforts, especially in sectors where multiple authorities may be involved – such as consumer protection, food, transportation, agriculture, healthcare, etc.
- The Report highlights that this consistent approach is ideal because currently, only some regulators may have oversight of AI systems deployed in India under sectoral laws (such as finance, health), or where the market is concentrated (such as e-commerce, aggregators). There are also examples of AI development & deployment where no interface with the Government takes place. Having a Committee will therefore enable all relevant Government ministries to have oversight of the standards of AI development & deployment in India and to better assess the risks for the future.
- A positive indication is that the Report stresses that enabling the Committee to have visibility into AI use in India should not result in regulatory overreach, such as mandatory AI application registrations or reporting requirements.
C. Setting up of the Secretariat within MeitY – For technical advice and mapping
- The Report advises MeitY to set up a Secretariat that can work as a technical advisory body and coordination contact for the Committee. While the Committee will work at a broader level on matters of policy and stakeholder coordination, the Secretariat may support the Committee’s functions by offering technical advisory and gap analysis on critical areas of AI development. Its members will comprise officers of Government departments and experts from academia and industry; however, the Report suggests that it should not have statutory backing at this stage, to preserve its ability to fairly analyze gaps in AI regulation and development. Its advisory status is expected to encourage private sector engagement.
Its primary responsibilities include:-
- Stakeholder Engagement: Facilitating dialogue between Government, members of the industry and academia, and civil society to build a consensus on AI governance priorities.
- Horizon Scanning & Risk assessment: Monitoring emerging AI technologies and trends to proactively identify risks to consumers & society.
- Incident Database Management: Creating and maintaining an AI Incident Database, which shall be a repository of AI-related incidents, including ethical violations, security breaches, and system failures.
- Standardization: Developing metrics & measurement standards for assessing the environmental impact of AI development in India, and encouraging common frameworks for issues such as data provenance, security, data set evaluation, use of open source applications, disclosure of transparency reports, etc.
- Technical advisory: Examining solutions for responsible & ethical use of AI, including requiring labelling of synthetic media, setting up guardrails for technical requirements, etc., and promoting research in this regard, to support applicable laws.
D. AI Incident Database – Managed by the Secretariat
- The AI Incident Database is proposed to be set up by the Secretariat to maintain a repository of AI incidents in India.
- Definition of an AI incident: The Report correctly acknowledges that an “AI incident” may be much broader in scope than a “cyber incident” or a “cyber security incident”, even though both of the latter fall within the scope of an AI incident. To be specific, AI incidents will include (a hypothetical record structure is sketched after this list):-
- suspected or potential vulnerability against AI systems;
- adverse or dangerous outcomes from use of AI applications and systems, causing harm to society, businesses, users;
- malfunctions;
- unauthorized outcomes such as hallucinations, discriminatory outcomes, unforeseeable, unexpected or unexplainable outcomes;
- system failures;
- data privacy violations;
- physical security issues arising from equipment, cloud infrastructure, etc.
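By way of illustration only: the Report does not specify a reporting format, but the categories above could translate into a record structure along the following lines. This is a hedged sketch; the field names and enum values are our assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class IncidentCategory(Enum):
    """Categories paraphrased from the Report's illustrative list of AI incidents."""
    SUSPECTED_VULNERABILITY = "suspected or potential vulnerability"
    HARMFUL_OUTCOME = "adverse or dangerous outcome causing harm"
    MALFUNCTION = "malfunction"
    UNINTENDED_OUTPUT = "hallucination / discriminatory / unexplainable output"
    SYSTEM_FAILURE = "system failure"
    PRIVACY_VIOLATION = "data privacy violation"
    PHYSICAL_SECURITY = "physical security issue (equipment, cloud infrastructure)"

@dataclass
class AIIncidentReport:
    """Hypothetical structure of a single entry in the AI Incident Database."""
    reporting_entity: str                      # initially Government deployers
    ai_system: str                             # system or application involved
    category: IncidentCategory
    description: str
    occurred_at: datetime
    affected_parties: list[str] = field(default_factory=list)
    voluntary: bool = False                    # True for private-sector submissions
```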
- The Report clarifies that given these inherent differences, AI incident reporting should not be subsumed into the existing cybersecurity incident regime. Given that CERT-In has been authorized under the IT Act to act as a repository for “cyber security incidents”, the Report alludes to the possibility of CERT-In working along with the Secretariat to maintain this AI incident repository as well. This may be further evaluated by MeitY.
- Reporting entities: Initially, Government entities that deploy AI systems (whether directly or in public-private projects) should submit reports to this database. The private sector will be encouraged to submit reports on a voluntary basis.
- The Report encourages the Government to develop reporting protocols to ensure confidentiality and focus on harm mitigation, instead of approaching the proposed incident reporting framework as a fault-finding exercise that penalizes reporting entities.
E. Invest in Techno-Legal measures – For Government focus areas like Deepfakes
- The Report asks the Secretariat to work with MeitY to examine technological solutions for Government focus areas for AI regulation, particularly for malicious synthetic media, i.e. deepfakes. It lists measures such as watermarking, platform labelling, and other fact-checking tools like content provenance chains to curb deepfakes. Pertinently, in March 2024, MeitY had recommended such measures to platforms/intermediaries for identifying and marking misinformation and AI-generated content on their platforms.
- It argues that such techno-legal measures can complement the protection afforded by law, and its enforcement; a simplified illustration of the content provenance idea follows.
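Robust watermarking embeds signals that survive editing and re-encoding, which is beyond a short sketch. Purely to illustrate the simpler provenance-record idea the Report mentions, the following assumes a hypothetical registry that records a content fingerprint and an “AI-generated” label at generation time, against which later copies can be checked; all names and fields are illustrative assumptions.

```python
import hashlib
from datetime import datetime, timezone

def provenance_entry(media_bytes: bytes, creator_id: str, tool: str) -> dict:
    """Bind a content fingerprint to its origin so later copies can be checked."""
    return {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "creator_id": creator_id,     # hypothetical registered identity
        "generation_tool": tool,      # e.g. name/version of the generating model
        "created_at": datetime.now(timezone.utc).isoformat(),
        "label": "AI-generated",      # platform-facing disclosure label
    }

def matches(media_bytes: bytes, entry: dict) -> bool:
    """Check whether media is byte-identical to the registered artefact."""
    return hashlib.sha256(media_bytes).hexdigest() == entry["content_sha256"]

entry = provenance_entry(b"<synthetic image bytes>", "dep-042", "gen-model-v1")
print(matches(b"<synthetic image bytes>", entry))   # True for an unmodified copy
print(matches(b"<re-encoded copy>", entry))         # False: bare hashing breaks on edits
```

The final line shows the limitation: a bare hash fails on any re-encoding, which is why the Report’s references to watermarking and provenance chains contemplate more robust techniques.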
F. Commentary on existing laws – their capacity to regulate AI and need for change
- Indian copyright law: The Report asks for further examination of whether the use of copyrighted content as AI training data infringes the rights of copyright holders. The Sub-Committee is of this view because the Copyright Act, 1957, permits only a limited number of fair uses of copyrighted content, and the use of such content for training AI systems for commercial purposes is likely to fall beyond the ambit of the fair use exception and may result in infringement claims. However, it indicates that since the current law provides limited means of enforcing infringement claims of copyright holders, the law will need to be updated to strengthen their position, and to introduce certain due diligence measures that AI developers ought to implement prior to accessing copyrighted content. The question of who is liable for infringement – the AI developer, or the user who generates new AI content by using prompts – is left open for exploration.
The Report further explains that since works generated from AI systems lack human authorship, they are presently unable to claim copyright protection under Indian law. Pertinently, this is also the position in other jurisdictions like the UK and the US. The Sub-Committee recommends that the Copyright Office under the Ministry of Commerce & Industry frame guidance in this regard, for example, by outlining the extent of human authorship needed to claim copyright, or the kind of AI-generated outputs that could be eligible for copyright protection. This, therefore, is an area where the Report expressly asks for specific laws to be framed.
- Intermediary law under IT Act: The Sub-Committee argues that the safe harbor protection available under Section 79 of the IT Act for intermediaries may not be available to many AI models, as the pre-condition for claiming the said protection is that the intermediary should not ‘select or modify the content’, which most AI-based intermediaries are likely to do. This analysis appears to be misplaced, given that many AI systems may not qualify within the definition of an intermediary in the first place, which applies to entities that, “on behalf of another person, receives, stores, or transmits that record or provides any service with respect to that record”. It is arguable that many AI developers use training data to offer a service in respect of the content available to them, as opposed to such content being third-party content that is “hosted by them”.
- Deepfakes: The Report refers to existing provisions under the IT Act and the Indian Penal Code (now replaced by the Bhartiya Nyaya (Second) Sanhita (“BNS2”)) that may be applied to allocate liability for offences involving deepfakes, such as cheating by impersonation, identity theft, publication of obscene content, and offences against children. The Report suggests that existing laws should be updated & supported with techno-legal measures such as assignment of unique and immutable identities to each participant in the chain, after which their respective content (both input and output) can be watermarked to track its use for the purpose of generating deepfakes.
- Cybersecurity: It clarifies that the existing cyber incident reporting frameworks under the IT Act, the Digital Personal Data Protection Act, 2023 (“DPDP Act”), and those of sectoral regulators like RBI, SEBI, IRDAI, and DoT, will apply to incidents attributable to the use of AI in information systems by relevant entities. While the Report recommends that a separate AI-based incident reporting framework be set up, with a more nuanced focus on AI-based cyber security incidents in the future, the current reporting framework will continue to apply in the meantime.
- Bias and discrimination: The Report states that while existing laws do address instances of bias and discrimination (like laws protecting minorities and the interests of consumers), these are specific to their sector and are not sufficiently updated to address the adverse impact that AI-based decision-making could cause at a large scale and across sectors. This too is an area where the Sub-Committee asks for specific laws to be introduced, to keep pace with emerging technology.
- Anti-trust laws: It asks the Competition Commission of India to examine abuse of dominance and vertical integration, particularly given that a few entities own large AI systems and benefit from larger datasets and computational capacity.
G. Proposed Regulatory Framework – Voluntary commitments & baseline commitments from the private sector
- As per the Report, the primary objective of any AI regulation should be to minimize risk of harm from AI systems and applications.
- The Report advocates demonstrable self-regulation in the form of voluntary commitments from the industry. It indicates that this can be achieved by:
- Examining the existing voluntary reports and disclosures released by AI developers and deployers (through model cards, transparency reports, etc.). Pertinently, as on date, there is no common practice of AI entities voluntarily disclosing data to the Government; it is therefore unclear whether existing disclosure practices satisfy the Government’s information requirements suggested under the Report. The kinds of voluntary commitments that may be asked of the industry, which could vary as per industry and/or risk level, could include:-
- disclosure of intended purpose of AI system and applications;
- release of regular transparency reports by AI developers and deployers;
- commitments to internal and external red-teaming of AI systems;
- testing and validation of data quality and governance measures;
- commitment towards peer review by third party experts;
- conformity assessments against the AI principles mentioned under the Report;
- creation of, and conformance to, standardized risk assessment protocols that can be applied through the lifecycle of an AI system;
- commitment to instill technology artefacts across the lifecycle of an AI system – from the data collection, training and deployment stages, to aid traceability and liability chain for application of laws;
- general requirements relating to security, vulnerability assessment, and business continuity.
The Sub-Committee suggests that industry’s commitments towards these voluntary measures may negate the need to introduce prescriptive laws.
- Regulators applying provisions of existing laws such as the Copyright Act, 1957, the IT Act, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, BNS2 etc. to encourage or mandate relevant entities to implement risk mitigation measures, as needed.
- Baseline framework & Sectoral laws – focus on a deployment approach: In addition to the voluntary disclosure framework, the Report recommends that a baseline commitment framework should apply to all AI systems, especially medium to high-risk systems. While it acknowledges that high-impact sectors may continue to have specific requirements, it discourages risk classification of AI systems based on computational ability, availability of data, or sectoral classification alone (such as banking, healthcare). It instead argues for risk classification of AI systems to be undertaken on a deployment basis – whether they are deployed widely, or for sensitive use cases. This is because sectoral regulators may not necessarily assess risk in a holistic manner, especially if AI systems deployed within their domain have a spillover effect onto other regulatory areas. A simplified sketch of such deployment-based classification follows.
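To make the deployment-based approach concrete, here is a minimal sketch of how such a classification rule might look, assuming a hypothetical reach threshold and a hypothetical list of sensitive use cases; the Report itself prescribes no such parameters.

```python
from enum import Enum

class Risk(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

# Illustrative assumption: use cases treated as sensitive regardless of scale
SENSITIVE_USES = {"credit scoring", "medical diagnosis", "biometric identification"}

def classify(use_case: str, deployment_reach: int) -> Risk:
    """Classify an AI system by how it is deployed, not by the sector that owns it.

    The reach threshold and the sensitive-use list are illustrative assumptions.
    """
    if use_case in SENSITIVE_USES:
        return Risk.HIGH       # sensitive use cases elevate risk regardless of scale
    if deployment_reach > 1_000_000:
        return Risk.MEDIUM     # wide deployment elevates risk (threshold assumed)
    return Risk.LOW

print(classify("biometric identification", deployment_reach=1_000))   # Risk.HIGH
print(classify("movie recommendations", deployment_reach=5_000_000))  # Risk.MEDIUM
```

The point of the design is visible in the two calls: the same classification applies whether the system sits in a financial, health, or consumer context, which is what distinguishes this approach from purely sectoral risk classification.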
- The Report recommends that the above measures should be equally adopted by the Government and the private sector, and may be monitored by the Secretariat.
- It asks for a sub-group to be created within MeitY that can submit suggestions for the upcoming Digital India Act, including providing grievance redressal and adjudication mechanisms for AI-related risks and incidents. Similar to the DPDP Act, it advocates for such grievance forums to adopt a “digital by design” principle and adjudicate matters online.
H. Conclusion
- This Report currently contains the recommendations of the Sub-Committee, which are now under review by MeitY. Contrary to various recent Government statements alluding to India according primacy to innovation over regulation of AI, this Report suggests that India should first gain information about the use of AI across sectors, and then work on developing regulations while encouraging the industry to submit information & develop its own voluntary commitments. As and when specific regulations are developed by the Legislature, the Report also encourages the Government to abandon the sector-specific approach, and instead focus on how AI is being deployed across India, particularly looking out for AI applications that have the capacity to be deployed at a wide scale, or for sensitive purposes.
- In our view, this indicates that MeitY may in the near future, encourage the private sector to share information about AI development and deployment in India.
- A positive takeaway from the Report is that the industry can expect a single regulator for AI, which will hopefully provide some consistency in matters of regulation, monitoring, reporting, and engagement. The ask for creation of technically focused agencies like the Secretariat is also an opportunity for private sector entities to share voluntary technical standards that are easily deployable, before specific regulations in this regard are introduced in India.
Footnote:
[1] E.g., NITI Aayog Principles of Responsible AI (2021), Operationalising Principles (2021), and FRT Report (2022); Indian Council of Medical Research Ethics Guidelines for Application of AI in Biomedical Research and Healthcare (2023); Tamil Nadu Safe & Ethical AI Policy (2020); TEC Voluntary Standard for Fairness Assessment and Rating of AI systems (2023); TEC Voluntary Standard for Robustness of AI systems in Telecom Sector (under development); Telangana AI Procurement Guide (under development – 2023); Nasscom Responsible AI Resource Kit (2022) and Guidelines for Generative AI (2023); and OECD AI Principles (2019).