CURRENT AFFAIRS | MARCH 2026
UPSC Exam Relevance
Prelims: MANAV framework — full form and five pillars; India-America Connect Initiative; glass box vs black box approach; paperclip problem in AI.
Mains GS-III (Science & Technology): India’s ethical AI framework; data sovereignty and global data governance; regulatory approaches to emerging technologies; balancing innovation with accountability.
Mains GS-II (Governance): Role of executive leadership in technology policy; India’s position in global AI governance; public-private partnerships in digital infrastructure.
Introduction
At the India AI Impact Summit 2026, Prime Minister Narendra Modi articulated a comprehensive vision for India’s approach to Artificial Intelligence governance through the acronym MANAV. This five-pillar framework — encompassing Morality, Accountability, National sovereignty, Accessibility, and Validity — represents India’s attempt to carve out a distinctive ethical and regulatory approach to AI that draws upon both contemporary governance principles and India’s civilisational philosophical traditions. The MANAV vision is significant not only as a policy declaration but as a philosophical statement about how a developing democracy with 1.4 billion citizens should engage with a technology that promises both immense benefit and considerable risk.
Unpacking MANAV: The Five Pillars
M — Moral and Ethical Systems
A — Accountable Governance
N — National Sovereignty
A — Accessible and Inclusive
V — Valid and Legitimate
Remember: “MANAV” means “Human” in Sanskrit — AI must serve humanity.
M — Moral and Ethical Systems
The first pillar emphasises that AI systems must be grounded in moral principles. This is not merely an abstract ethical injunction but has concrete policy implications. Moral AI systems are those that do not perpetuate discrimination, that respect human dignity, and that align with constitutional values such as equality (Article 14), freedom of expression (Article 19), and the right to privacy (Article 21, as expanded by the K.S. Puttaswamy judgment, 2017). India’s insistence on morality in AI reflects a growing global consensus that technological capability must be tempered by ethical guardrails — a theme that resonates with the European Union’s AI Act and UNESCO’s Recommendation on the Ethics of AI.
However, the concept of “morality” is culturally situated. What is considered moral in one society may differ in another. India’s framework implicitly suggests that each nation should have the sovereign prerogative to define the moral boundaries of AI deployment within its jurisdiction — a position that distinguishes it from more prescriptive international frameworks.
A — Accountable Governance
The second pillar calls for transparent rules and robust oversight mechanisms. Accountable governance in the AI context means that decisions made by AI systems — particularly those affecting citizens’ rights, livelihoods, and access to public services — must be explainable, auditable, and subject to legal challenge. This principle addresses a fundamental concern about modern AI: the opacity of algorithmic decision-making.
In India’s domestic context, this pillar is particularly relevant to the deployment of AI in government services — from Aadhaar-linked welfare distribution to predictive policing. The demand for accountability implies the need for institutional mechanisms such as algorithmic impact assessments, independent audit bodies, and grievance redressal frameworks specifically designed for AI-mediated decisions.
N — National Sovereignty
The third pillar — captured in the slogan "whose data, their right" — is perhaps the most politically charged. It asserts that data generated within India's territorial jurisdiction belongs to its citizens and its government, and that foreign corporations and governments cannot claim unfettered access to this data without consent and appropriate governance frameworks.
This principle builds on India’s existing data governance trajectory, including the Digital Personal Data Protection Act, 2023, and the earlier debates around data localisation. At the global level, it challenges the dominant Silicon Valley model of data extraction, where data flows freely across borders to wherever computational capacity is greatest. India’s position aligns with the concept of digital sovereignty advocated by the European Union but goes further in linking data rights to national sovereignty rather than individual privacy alone.
A — Accessible and Inclusive
The fourth pillar frames AI as a “multiplier, not a monopoly.” This is a direct challenge to the concentration of AI capabilities in a small number of corporations — primarily American and Chinese. India’s argument is that AI must be democratised: its benefits should reach small farmers in Bihar, weavers in Varanasi, and fisherfolk in Kerala, not merely the technology elites of Bengaluru and Hyderabad.
This pillar has operational implications for India’s own AI policy. It implies the need for public investment in AI infrastructure (compute capacity, datasets, training programmes), multilingual AI tools that work in all 22 scheduled languages, and sector-specific AI applications for agriculture, healthcare, education, and governance. The IndiaAI Mission, with its seven pillars, is the institutional vehicle for translating this aspiration into reality.
V — Valid and Legitimate
The final pillar insists that AI systems must be lawful and verifiable. Validity means that the outputs of AI systems — whether medical diagnoses, credit scores, or criminal risk assessments — must be empirically accurate and legally defensible. Legitimacy means that the deployment of AI must be sanctioned by democratic processes: legislation, regulation, and judicial oversight.
This pillar implicitly rejects the move-fast-and-break-things ethos of Silicon Valley. It suggests that AI systems should be deployed only after rigorous testing, certification, and regulatory approval — an approach closer to how pharmaceuticals or aviation technologies are governed.
The Glass Box Approach: Transparency as a Governing Principle
One of the most striking rhetorical devices employed by Prime Minister Modi was his call for a “glass box approach” to AI, in contrast to the prevailing “black box” problem. Black box AI refers to machine learning systems whose internal decision-making processes are opaque — even to their creators. A glass box approach demands interpretability: the ability to understand, explain, and audit how an AI system arrives at a particular decision.
This metaphor carries significant policy weight. If adopted as a regulatory principle, it would require AI developers to provide explanations for algorithmic decisions, particularly in high-stakes domains such as criminal justice, healthcare, and financial services. The glass box approach aligns with the broader global movement towards Explainable AI (XAI), which has gained traction in both academic research and regulatory circles.
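The contrast between the two approaches can be made concrete with a toy sketch. The example below is purely illustrative and is not drawn from any actual regulatory or technical standard; the loan-decision scenario, thresholds, and weights are all hypothetical. The point is the difference in form: a black box returns only a verdict, while a glass box returns the verdict together with an auditable trail of reasons that a regulator, court, or affected citizen could examine.

```python
# Toy contrast between a "black box" and a "glass box" decision system.
# The loan scenario, weights, and thresholds are hypothetical, for
# illustration only.

def black_box_loan_decision(income, debt):
    # Opaque: emits a verdict with no trace of its reasoning.
    score = 0.7 * income - 1.3 * debt + 12.0  # hidden internal weights
    return "approve" if score > 50 else "reject"

def glass_box_loan_decision(income, debt):
    # Transparent: emits the verdict AND an auditable explanation,
    # so the decision can be reviewed and legally challenged.
    reasons = []
    if income >= 60:
        reasons.append(f"income {income} meets threshold 60")
    else:
        reasons.append(f"income {income} below threshold 60")
    if debt <= 20:
        reasons.append(f"debt {debt} within limit 20")
    else:
        reasons.append(f"debt {debt} exceeds limit 20")
    verdict = "approve" if income >= 60 and debt <= 20 else "reject"
    return verdict, reasons

verdict, reasons = glass_box_loan_decision(income=75, debt=30)
print(verdict)          # "reject"
for r in reasons:       # each rule that fired is visible and auditable
    print("-", r)
```

In regulatory terms, the glass-box version is what an algorithmic impact assessment or audit body would need: not merely the output, but the chain of criteria behind it.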
The Paperclip Problem: A Philosophical Warning
In an unusual intellectual move for a political summit, Prime Minister Modi referenced the “paperclip problem” — a well-known thought experiment in AI safety research proposed by philosopher Nick Bostrom. The thought experiment posits an AI system tasked with maximising paperclip production that, left unchecked, converts all available matter — including human beings — into paperclips, because its objective function has no constraint against such outcomes.
By invoking this thought experiment, Modi was making a nuanced point: that AI systems optimised for narrow objectives without broader ethical constraints can produce catastrophic outcomes. This resonates with contemporary debates about AI alignment — the technical and philosophical challenge of ensuring that AI systems pursue goals that are compatible with human values and welfare.
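The logic of the thought experiment can be sketched in a few lines of code. Everything here is an illustrative assumption, not a real AI system: the resources, quantities, and function names are invented solely to show how an objective function with no ethical constraint differs from one with an explicit constraint.

```python
# Toy sketch of Bostrom's paperclip thought experiment: an optimiser
# that maximises a narrow objective with no constraint consumes every
# resource, while a constrained one leaves protected resources intact.
# All names and numbers are illustrative assumptions.

def maximise_paperclips(resources, protected=None):
    protected = protected or set()
    paperclips = 0
    remaining = {}
    for name, amount in resources.items():
        if name in protected:
            remaining[name] = amount   # constraint: leave untouched
        else:
            paperclips += amount       # convert everything else to clips
            remaining[name] = 0
    return paperclips, remaining

world = {"iron_ore": 100, "forests": 40, "humans": 8}

# Unaligned: the objective function contains no ethical constraint.
clips, left = maximise_paperclips(world)
print(clips, left)   # all resources, including "humans", are consumed

# Aligned: human welfare is written into the objective as a constraint.
clips, left = maximise_paperclips(world, protected={"humans", "forests"})
print(clips, left)   # fewer paperclips, but the protected resources survive
```

The moral of the sketch matches Modi's point: safety is not an afterthought bolted onto an optimiser, but a constraint that must be present in the objective from the start.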
Global Data Framework and the India-America Connect Initiative
Modi called for a global trusted data framework that respects data sovereignty — a proposal that, if operationalised, would require multilateral negotiations on data governance norms, cross-border data flow regulations, and mechanisms for resolving jurisdictional conflicts over data.
In a related development at the summit, Google CEO Sundar Pichai announced the India-America Connect Initiative, which includes the construction of a subsea cable gateway in Visakhapatnam and three new subsea cable pathways connecting India to global internet infrastructure. This initiative addresses a critical infrastructure gap: India’s dependence on a limited number of submarine cable landing points, which creates vulnerability to both physical disruption and geopolitical pressure. The Visakhapatnam gateway, situated on India’s eastern coast, diversifies connectivity and strengthens India’s position as a data hub.
Quick Revision
- Glass box approach = transparent, explainable AI (vs black box)
- Paperclip problem — thought experiment by Nick Bostrom on AI misalignment
- India-America Connect Initiative — subsea cable gateway at Visakhapatnam
- Digital Personal Data Protection Act, 2023 — India’s data privacy law
- K.S. Puttaswamy judgment (2017) — Right to Privacy as a fundamental right
Critical Assessment
The MANAV framework is philosophically ambitious and diplomatically shrewd. It positions India as a thought leader in AI governance — a nation that thinks about technology not merely in terms of economic competitiveness but in terms of civilisational values. However, the framework’s effectiveness will depend on its translation into concrete legislation, institutional architecture, and enforcement mechanisms. India currently lacks a comprehensive AI regulation — the Digital Personal Data Protection Act addresses data privacy but not the broader challenges of algorithmic accountability, AI safety, or liability for AI-caused harm.
Moreover, the tension between the “Accessible and Inclusive” pillar and the “National Sovereignty” pillar deserves careful examination. If AI is to be democratised globally, it requires open data flows, shared research, and collaborative development — which may sometimes conflict with strict data sovereignty regimes. Navigating this tension will be a defining challenge for India’s AI diplomacy in the years ahead.
Conclusion
The MANAV vision represents India’s most comprehensive articulation of an ethical AI governance framework. By weaving together morality, accountability, sovereignty, accessibility, and validity, it offers a holistic alternative to both the laissez-faire approach of the United States and the prescriptive regulatory model of the European Union. For UPSC aspirants, MANAV is a valuable analytical framework for answering questions on technology governance, ethics in public life, and India’s role in shaping global norms. The challenge, as always, lies in moving from vision to implementation — from the poetry of summits to the prose of governance.
Source: UPSC Essentials, The Indian Express — March 2026. Content rewritten and analysed for UPSC preparation by Civils Gyani.