India

Country Example

Classification

AI Governance, Policy, Regulatory Frameworks

Overview

India's approach to AI governance combines advisory-led oversight, sectoral guidelines, and expert committee recommendations rather than a comprehensive, binding AI law. The Ministry of Electronics and Information Technology (MeitY) issues advisories and consults expert committees to guide the responsible deployment of AI. Obligations for platforms and intermediaries include preventing the dissemination of illegal content and labeling outputs from unreliable or under-tested AI models. The absence of a unified legal framework creates flexibility and allows rapid policy adaptation, but it also leaves developers and businesses uncertain about compliance. The approach is still evolving, with ongoing stakeholder consultations and draft frameworks under discussion. A key limitation is the lack of enforceable, consistent standards across sectors, which can result in uneven risk mitigation and enforcement gaps.

Governance Context

India's AI governance relies on sector-specific advisories and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which oblige intermediaries to prevent the dissemination of unlawful content. In 2023 and 2024, MeitY issued advisories requiring AI platforms to label outputs from unreliable or under-tested models and to maintain mechanisms for content takedown. These controls sit alongside obligations under the Digital Personal Data Protection Act, 2023, which mandates data protection and user consent. Additionally, NITI Aayog's National Strategy for Artificial Intelligence outlines ethical principles and risk management, though it is non-binding. In the absence of a comprehensive AI Act, developers and deployers must rely on existing IT, data protection, and sectoral regulations, producing a patchwork of compliance requirements. Two obligations stand out: (1) mandatory labeling of outputs from unreliable or under-tested AI models, and (2) robust content takedown mechanisms to prevent the dissemination of unlawful or harmful content.

Ethical & Societal Implications

India's approach reflects a balance between fostering innovation and addressing risks such as misinformation, privacy violations, and algorithmic bias. The advisory-led model enables rapid responses to emerging issues, but inconsistent enforcement may leave vulnerable populations inadequately protected. Societal concerns include over-reliance on self-regulation, the lack of redress mechanisms for harm, and challenges in ensuring equitable access and accountability. The evolving nature of India's AI governance underscores the need for inclusive policy development and transparent stakeholder engagement. There is also a risk that fragmented controls could exacerbate digital divides and limit effective recourse for individuals harmed by AI outputs.

Key Takeaways

- India governs AI primarily through advisories, sectoral guidelines, and existing IT laws.
- There is no comprehensive, binding AI law; governance is decentralized and adaptive.
- Key obligations include preventing illegal content and labeling outputs of unreliable AI models.
- The approach allows flexibility but can create compliance uncertainty and uneven enforcement.
- Ongoing reforms and consultations may lead to more structured, enforceable AI governance in the future.
- Sector-specific and data protection laws supplement AI oversight but lack uniformity.
- Ethical principles are outlined in national strategies but are not legally binding.