‘We Need To Remove The Halo Around AI And The Myth Of Ungovernability’
In BehanBox Talkies, we explore ideas through the lens of scholars. In this instalment, researchers Anita Gurumurthy and Nandini Chami talk about the AI Impact Summit, the Big Tech monopoly, and what regulations overlook

The AI Impact Summit has ended, and everyone – from tech companies and industry lobbies to policymakers and researchers – spoke about building equitable AI systems and a future where AI is a public good. What was missing, however, was evidence, specifics, and communities themselves.
AI regulation falls into the larger arc of India’s approach towards IT and digital industries. We asked Anita Gurumurthy and Nandini Chami, researchers at IT For Change, a non-profit working on technology and society, to put into context the emerging debates around AI. How is India trying to govern AI technologies? What economic and political thinking influenced this? Where are gender and caste in these conversations?
Over email, before and after the summit, the researchers spoke about how India’s ‘market-first’ and ‘light touch’ approach has allowed tech companies to continue building their surveillance empires without safeguards.
While the focus seems to be on privacy, protection and sovereignty, we fail to ask fundamental questions, the researchers told Saumya Kalia. “We need to press the delete button on the halo around AI and the myth of ungovernability – AI is firmly and squarely an issue for public reason,” they say. Edited excerpts below.
Let’s start with the summit. Why does it matter that it is happening in India, and why now? What economic and political trends are shaping the agenda?
In the official narratives, the USP of this summit is that it shifts the conversation from mere declarations to tangible action. The India AI Expo, Knowledge Compendiums and Case Books on Applied AI, and the CEO Roundtable on Responsible AI – showcasing Indian industry and DPI [digital public infrastructure] for AI models – form the centrepiece of this agenda.
At the same time, what remains uncertain is whether the Summit will produce concrete gains for India. Eighty-eight countries may have signed on to India’s ‘AI for all’ vision in the Summit’s final declaration, but it remains to be seen whether India will be able to lead a South-first AI paradigm, overcoming in practice its dependence on US Big Tech and neo-colonial equations in digital trade.
Geopolitically, India is on the back foot in protecting its nascent data and AI industry from the onslaught of UK, US and EU markets. In the BRICS [a coalition of 11 countries], India has supported declarations on data sovereignty and development-oriented visions of AI governance. This rhetoric returns at the Summit through themes like Indic language models, open cloud infrastructure etc, but it is unclear how technocratic choices and innovation start-ups will tackle social inequalities or deepen public purpose.
Many AI startups are, in fact, valued at several times their actual worth. This makes them moonshot projects, with only a few players making money right now. India had recognised 1,61,150 startups by January 2025, yet only a subset builds core AI products.
Tech companies and AI startups had a dominant presence at the Summit. Will the presence of industry reshape the ‘global IT’ agenda? Who or what could be left out?
Big Tech has always shaped the global IT and digital governance agendas. This is neither new nor surprising. The ‘Key Visionaries’ page of the Summit, the processes of the thematic working groups – all give primacy to an expert-led narrative of AI governance.
The agenda of public accountability and social-purpose innovation – integral to the core sutras of People, Planet and Progress – was often a postscript to the market-first narratives that dominated the discussion.
With a beast that brings unknowns, even amid its scintillating promise, the governance question is all-important. Governance is necessary because AI pervades everything. It is a technology that is re-shaping market, civic, and governance spheres, opening up new questions about power and agency and troubling pre-existing benchmarks.
There is also value in seeing the AI governance question in terms of constitutional values. This will logically open up the ‘who’ and ‘how’ aspects of agenda-setting, where principles of federalism, societal well-being, freedom from harm, the rights of future generations, and many others will guide and determine which publics emerge as relevant to what debates.
In 2019, you wrote about the ‘wicked problem’ of AI governance. Wicked because of the struggle to find the right combination of legal, ethical, and technological regulations. Which of these has dominated in India and what does the choice reveal about India’s priorities?
‘Wicked’ problems defy simplistic, linear fixes. They demand a complete grasp of the complex socio-political nature of societal structures.
The technocratic imaginaries that have defined India’s digital governance approach over the past two decades have persisted in the AI context. Building locally relevant AI in the reality of global infrastructural divides is not easy. We have to think about orchestrating a governance environment that aligns our concerns around inclusion, transformation, and more with resource allocation, public interest, ethics, citizen participation, public-private boundary management etc. It is about building a pathway to industrialisation all over again in the AI moment. Aspirations to become the AI garage of the world may need to wait till the home environment can throw up sufficient learnings.
Since 2019, India has tended to valorise the speed of AI adoption in the domestic market. Diffusion is not an intrinsic public good; the terrifying stories of social media influence on young people testify to the crises of the AI society. Public policy is mainly directed towards getting on top of the global AI race – underwriting compute investments of domestic firms, opening up public datasets to market actors, and providing preferential tax incentives, IP protections, and access to funding for deep-tech firms.
This market-first approach to AI may not do justice to India’s socio-economic challenges. Our current ‘light-touch’ regulation, which seeks to establish the market supremacy of India’s AI industry, sidesteps intentional, whole-of-society governance.
We reflected a few years ago on the North-South AI divide and argued that data and AI are two sides of the innovation coin, and that resource (re)distribution at the global level is vital for an equitable AI economy. Our concern about equity of data access remains.
Today, we also see that the country’s AI trajectory badly needs a rule-of-law-based approach rooted in constitutional values. As things stand, the rights of citizens, the duty of the State, and the accountabilities of the market remain weak and tenuous in the national policy vision for AI. How the data domain will be organised and governed for social good also remains an open question at the national level.
What would a constitutional and rights-centred approach to AI governance look like? Would a community-led approach also be automatically anti-caste?
This is an important starting point to define the relationship between tech and society. What public interest tech communities are telling us is that AI ecosystems are built on and deeply reinforce the axes of caste and gender. Discrimination, exclusion, and exploitation on the lines of caste and gender are encoded in large language models (LLMs) and AI applications (consider how people generated non-consensual intimate images on Grok).
The Constitution of India recognises the right to equality as a positive right – moving beyond freedom from discrimination to enshrining affirmative action for furthering an equal society. It is important to extend this constitutional, anti-caste framework to address new challenges in the AI moment. What is needed is a horizontal omnibus legislation for extending fundamental rights in the AI moment (including affirmative action for the protection of vulnerable groups), sector-specific legislation, and new institutional bodies/mechanisms for the enforcement of these rights.
Some steps have been taken by the government, like developing sector-specific strategies. However, this thinking currently privileges unhindered AI innovation. A patchy and fragmented rights approach will not provide the same protection that a re-reading of constitutional morality can offer for governing AI.
Going forward, we need to move away from focusing only on product ‘bias’ and instead, examine the mode of production itself in order to better address caste, gender, and other social biases. This means foregrounding and making visible human and socio-ecological costs in data and AI value chains, and preventing structural harms, including those based on caste and gender lines, from being entrenched.
Without effective rights to information about and participation in AI policy, social progress is bound to be thwarted, directly undermining the ability of people – historically marginalised communities, environmental defenders, small economic actors, indigenous people – to shape AI futures.
Summit conversations on data sovereignty – Indic language models, open cloud infrastructure – happened without the participation of grassroots organisations or unions. What do you make of the representation gaps?
The India Summit could certainly have encompassed a more diverse range of voices. The representation gaps at the Summit reflect the wider terrain of the AI debate: Those who matter, and those whose lives are already entangled in disempowering ways in the dominant AI paradigm, are seen to be peripheral. Having more seats at a Summit table does count. But what is more important is that demands are consolidated and channelled across multiple political spaces to expose and address the public accountability gaps in AI governance.
The short history of global AI Summits reveals a largely celebratory and homogenising narrative about AI. In any case, as with large spectacles, these moments are rarely the right avenues for serious public debate.
The India Summit did open up some space for a South-led AI vision, which civil society can use tactically for asserting pluralistic visions of the South – those that can emerge from the ground, those that challenge state-led rhetoric. But civil society and global justice organisations need to work beyond the surface-level questions about infrastructures in order to reject both technological determinism (this dogmatic faith in tech revolution as a driver of social progress) and long-termism (the inevitability of AI’s existential risks to humanity becoming a reality).
Our concerns about structural justice must address the material foundations of the AI economy and how they play out in practice. Yes, openness is important, local language models are critical. But we need to ask what kind of openness, and what kind of public goods, can support a resilient society that truly has the capability and freedom to pursue data science towards goals that it sets for itself.
AI is not a brand new technology, and we’ve seen cycles of AI hype and ‘AI winters’ before. Can we take a wider view: how has India tried to regulate or steer conversations around AI? Do today’s debates continue or disrupt earlier thinking?
There is a persistent blind spot when we attempt to place the present discourse on AI governance in the long arc of IT and digital policy: the absence of a robust industrial policy that leverages the general-purpose nature of network, data, and AI technologies.
Scholars have highlighted how the growth in software and IT-enabled sectors since the 1980s has not been leveraged to add value to the domestic economy and transform core sectors. These sectors have also employed only a minuscule percentage of the educated elite. New Gen AI developments, such as AI productivity assistants and ‘co-pilots’ that automate thinking, threaten these gains.
After the liberalisation of the Indian economy in the 1990s, the country has largely been a consumption market for digital products and high-end digital services, with some limited contributions to the low-value segments of the global digital value chain [such as call centres, data processing, and back-office services in the IT and ITeS industry].
Since 2014, the Union Government has responded to this challenge in various ways, including by focusing on strengthening the domestic digital start-up and innovation sector. But as commerce and industry minister Piyush Goyal reflected in April last year, start-ups have mainly come up in ‘rent-seeking sectors’ [those that manipulate existing economic conditions to generate profit without creating new wealth] such as quick commerce, and India lacks deep-tech innovation in sectors like EVs, semiconductors, and industrial robotics.
The limitations of our innovation culture have haunted us for a long time; the past decade of attempts to introduce data and AI industrial policy as a corrective has been insufficient.
India first wanted to keep data within the country to protect its economic interests. Over time and with the LLM boom, it became more willing to share data with certain countries and companies. But it still sees data as crucial for national development. Similar lines of thinking pervade the AI governance discourse as well – AI for Atmanirbhar Bharat is a running thread in the narrative, which is also one of the pillars of the India AI Mission.
Where does India stand in terms of AI and data governance in the world?
The shifts in India’s stances reflect a moving carrot – an ongoing tension between domestic digital industrial priorities and the opportunities offered by the global digital economy. We have come a long way. Earlier, India had taken unique positions, such as recognising community rights over non-personal data and placing limits on cross-border data flows. But those were still the days before the ChatGPT moment.
Our position, today, as the digital trade negotiations of the India-UK agreement show, is pretty close to the mainstream agenda pushed by the Big Tech lobby. Our industry wants access to global data markets, open government data, foreign capital and so on. This does not sit neatly with India’s rhetorical assertions about development sovereignty in the global digital/AI order; after all, development cannot mean greater wealth for some while deepening social inequalities for others.
What is needed is intentional governance that places equitable development at the centre of our data and AI governance approach. One thing we have partly got right is the connection between DPI and AI innovation. The next step is to ensure smaller actors also benefit and that social-purpose innovation is not left to market interests.
Signal president Meredith Whittaker described today’s tech economy as a “surveillance empire” where you monetise data to the point that the whole system becomes unstable, like a centipede with too many legs that eventually “collapses”. Has any Indian framework taken note of this business model? Do you see any real safeguards?
No, and that’s what worries us. We currently have a ‘light-touch’ regulatory approach to the AI market and lack sufficient purpose limitation and other ethical boundaries in our data stacks for key social sectors such as health and agriculture. This may be a ticking time-bomb or a formula for slow societal atrophy. This may sound hyper-sceptical – but we believe that policy frames must adopt a worldview that treats even the magical as an ingredient to be examined for the old question of power and control. The glitz of AI may be undeniable; but it may leave behind, or conveniently erase, the ordinary.
Research has shown women and gender-diverse people remain less represented in AI jobs, especially leadership positions. How does that impact governance frameworks? Have Indian frameworks planned to engage with these concerns, particularly risks of surveillance and violence?
The politics of representation in the industry does impact bias in products. But it is also true that India’s laws don’t deal with AI systems that profile or harm whole groups like caste communities and women. Discrimination is rooted in deep social structures; to challenge it, we need continuous post-market assessments of products. One-time certification of risk is partial at best and dangerous at worst; you can’t approve an AI system once and forget about it, because harms often show up only after people start using it. These systems need to be watched continuously, and their limits should be set by constitutional values like equality and dignity.
There is some policy attention to the risks of synthetic media and image generator AI technology for gendered violence, but progress still remains to be made to build public trust. We may need to learn from other jurisdictions – the debate on societal data rights in Brazil, ongoing discussions in Australia about treating online digital footprints as personal data, the experience of the Network Enforcement Act in Germany (the NetzDG law), EU’s AI omnibus and so on.
At the same time, there are very interesting civil society developments – the Samiksha LLM pipeline evaluation and innovations like Shhor.ai and other research efforts to map gender-based harms in AI.
Our current focus on weaponisation of AI technology has been on DigiYatra and facial recognition technologies (FRTs). What are we missing?
The weaponisation of AI technology is witnessed in so many instances – commercial spyware, clandestine state surveillance, cyberespionage. The manipulative impulses of surveillance capitalism destroy the social fabric, they annihilate democratic culture. We are mute spectators to bots and troll armies running amok, and AI functionalities that are inimical to the dignity of women and marginalised groups. This weaponisation of social media has acquired new dimensions with the advent of synthetic media.
Another emergent threat is the potential for agentic AI to be weaponised by malicious actors.
As AI agents become ubiquitous in public service sectors – banking, welfare delivery, etc – this will become a major security risk, as entire systems, public systems, can now be held under siege. We are missing a nuanced conversation on how to set redlines for the deployment of agentic AI.
So far we have been focusing on parity, privacy, and bias. What rights are missing from this conversation?
We need to be talking more about economic, social and cultural rights in AI value chains. This is beyond decent work for data labour. It is about the concerns of farmers and producers in agritech value chains, cultural and indigenous communities whose knowledge commons and creations have been taken into generative AI data pools, communities displaced by data centres, and so much more.
We also need to be centering questions of epistemic rights – what is AI doing to our societal knowledge commons? Our sense-making and meaning-making as societies? Our knowledge and lifeworld pluralisms? In essence, the right of all to live a life of vitality and real autonomy, and our collective right to infrastructures of care and creativity. These are the faultlines that life under AI is visibilising.
You wrote about the silence around thinking of AI as a ‘public common’. Why does that silence exist?
When we wrote our paper in 2019, we were concerned that social media algorithms would commodify all of sociality; the commons of our everyday interactions. The concern was that not many were taking the threat seriously. But today, after the Gen AI boom, it is evident that fundamental questions about market-led AI innovation are deliberately buried. Questions about the data that is used for training models – originating in the plunder of the knowledge commons – have been reduced to semantics about copyright law and its fair use exemptions.
We are of the view that an internationally binding framework that centres a commons-based approach to the governance of training data – covering the entire lifecycle of AI, including innovation capability – is essential to address the current challenge of knowledge colonisation and cannibalisation of the public domain.
As an innovation built on the collective resource of social data, AI cannot be allowed to be monopolised by a few. It must be protected and preserved as an epistemic infrastructure that underpins and enriches people’s knowledge commons. But how this framework will be enacted and arrived at in the global political economy – that’s the million-dollar question.