The Vaporstate: All Hail Scale at the India AI Summit

Show Notes

In The Vaporstate, we have traveled to Brazil, India, and the UK. But what does this look like as a global movement of nations and companies evangelising technology as the key to solving all problems, everywhere?

More like this: Paris Post-Mortem (live)

For our final instalment of The Vaporstate, Alix is joined by Astha Kapoor and Amba Kak to reflect on the series, and discuss the upcoming AI Impact Summit in India. This is the first time this summit is being hosted by a global majority country — will this create new opportunities for civil society to have a say, or is this just yet another chance for tech companies to whisper magic AI spells into the ear of government?

The end of The Vaporstate series marks the beginning of another series, made in partnership with AI Now and Aapti Institute: in the run up to the AI Summit, we want to rethink the terms that have been co-opted by government and industry. Terms like ‘sovereignty’, ‘AI for good’, and ‘human capital’. We interviewed twelve experts who unpack how these terms are framed in global summits like this one — watch this space for conversations with Naomi Klein, Meredith Whittaker, and Karen Hao, to name a few.

Further reading & resources:

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

Hosts

Alix Dunn

Release Date

February 6, 2026

Episode Number

E103

Transcript

This is an autogenerated transcript and may contain errors.

Astha: [00:00:00] DPI is like a way of life. You just have to adopt it.

Mila: Each person needs a unique ID. Each land parcel needs a unique ID.

Lua: It's super easy. It's for free.

Rafa: Somebody else with her identity was opening telephone accounts.

Beatriz: Oracle is in so many different government systems.

Dan: You build an entire tech stack on big tech products, and then you put a Union Jack on the top of it, and you wave it intensely in the hope that nobody will look at what it's sitting on.

Alix: This is The Vaporstate, a new series from us here at Computer Says Maybe.

Hey there. Welcome to Computer Says Maybe, this is your host, Alix Dunn, and this is our last installment of The Vaporstate. But the good news is it is also the beginning of another. We are about to start sharing interviews that we've been doing over the last few months in partnership with AI Now and the Aapti Institute.

And in the episode you're gonna hear us talk about quote unquote, the series a [00:01:00] lot. Um, and that's because, uh, in this week's episode we're talking to the two people that we've been working closely with on it. And basically in the run up to the AI summit in India, we are kind of steeling ourselves to the reality that in these summits, people throw around really vague terms and oversimplify them.

So you'll hear things like sovereignty, data-rich countries, democratization, AI for good. Um, you hear all of these terms that sound politically important. They sound like the kind of conversation we should be having. But oftentimes in really formal spaces like summits, those words get turned into basically nothing.

They get thrown around all the time in ways that they aren't really intended to be used. And they also oftentimes can even be used to distract from the fact that the very thing the concept is meant to explain isn't actually being addressed in whatever conversation it's being invoked in. And so we decided [00:02:00] to take all the concepts we expect to hear, like think of your AI Summit bingo card.

Rather than just say, guys, that's not how that word is supposed to be used, we sat down in advance and interviewed the people that best understand those concepts so that we could give you a little bit of a tour of how those words could be extremely potent if they were explored fully. Um, and also give you a bit of preparation, um, for the summit.

So expect to hear from people like Naomi Klein and Meredith Whittaker and Karen Hao, among others. It was very fun to make. Um, and the full individual interviews are gonna be exclusively released on our YouTube channel, The Maybe Media. And the first four are actually out, uh, with the rest coming next week.

You can also find them on our website, and we'll link in the show notes. We'll then share highlights from these conversations on the pod in the coming weeks. So if you don't want to dig into our YouTube channel, that is fine. Um, just stay subscribed to this and you will be able to hear from some of those [00:03:00] brilliant people.

For this conversation, I'm joined by two good friends of the pod: Amba Kak, the co-founder of the AI Now Institute, and Astha Kapoor, the co-founder of the Aapti Institute. Astha actually appeared in the first episode of The Vaporstate, digging into India's history of Aadhaar and digital public infrastructure. So she knows basically just so much, um, about nations rolling out giant tech stacks and all the things that happen after, um, and has a ton of experience with Aadhaar.

So if you haven't heard that episode with Astha, please do go back and listen. Both Amba and Astha were guests on the live show, our first live show ever, actually, which was in Paris basically a year ago, which is kind of wild to think about, at the AI Action Summit. So this year's is the AI Impact Summit, and France hosted the AI Action Summit. Um, and Amba, Astha, Nabiha Syed, um, and Abeba Birhane and I debriefed after that summit, if you want a deep cut.

And, uh, a bit of [00:04:00] a throwback to a year ago: what was the AI conversation like then? Um, we have that for you somewhere back on our feed. So it was really great to kind of get the band together again to talk, this time, instead of after a summit, a little bit before a summit, about what to expect and how to kind of prepare intellectually for all of the things coming our way.

So let's get into it with Amba Kak and Astha Kapoor. We'll dig into The Vaporstate series and also what is top of mind for them as we head into another season of summits.

I wanted to start a little bit with just like reflections from you all. I mean, you are both, I'm actually learning more about what it means to be an Indian that doesn't live in India versus what it means to be an Indian that lives in India. But I feel like you both kind of cut your teeth [00:05:00] on how technology politics manifests when a government is trying to build out these really ambitious digital projects and then doesn't necessarily do a great job of dealing with all of the edge cases, or they see them as edge cases when really it's like a core of the experience for a lot of people.

But did you all have reflections on this history and how you feel about India now hosting this AI summit, or anything you wanna go into based on seeing these multiple contexts? Thinking about multiple contexts, kind of where is your head in terms of the digital public infrastructure project?

Astha: I think that it's very interesting, because India's positioning itself as this like hero state, and I find that to be quite interesting in terms of how much of it has been glossed over to make this into a palatable narrative. And we've talked about this a lot, but like the amorphousness of the DPI moves away from Aadhaar debates or UPI fraud or whatever else, right?

[00:06:00] But the meta-narrative is so strong that it helps carry through the India image, and I think it's with that. And I also think that the Indian G20 had a really big role to play, in terms of India was able to package this enormous multilateral event and see it through. That really helped give India the opportunity to say like, we are gonna be able to just do something bigger, flashier, more India-centric.

Again, it's the success of the G20, the success of the DPI narrative, and then going into the new AI summit, which is carrying a lot of that feeling.

Amba: So I did get a chance to listen to the episodes that are already online, and I couldn't help but think that, you know, this is too little too late, in that like there's such rich history and contestation that has happened over the last, you know, 15 years around DPI in many parts of the world.

Obviously [00:07:00] India is the first mover, and now we have a much more recent conversation in Brazil. But it reminded me that that conversation has still been incredibly siloed along Global North and Global South lines. I was just so glad to see that break out of the mold, and, Alix, 'cause I know that your listenership is much broader, I was thinking of how many people would be introduced to this tapestry of DPI and everything that it has to offer us for our strategies going ahead.

That work and that excavating hasn't really been done. It did really good handholding for even someone who's new to these conversations to get a feel of it. And then I think the question that it raises, both as we're going into the summit, but more generally too, as AI kind of takes over everything, including the DPI spaces, is what are the lessons that we can take forward. We can get really pedantic about how DPI and public interest AI conversations are similar or different, but there is just so much tactical wisdom in these civil society [00:08:00] fights around these projects that I think we should continue to keep mining.

Alix: I felt similarly, that it was kind of overwhelming to even begin knowing how to have the conversation. Like every time I talk to you all about digital public infrastructure, people that have worked on it for years, there's just so much history, so many examples, so many mistakes made, so many players and stakeholders and narratives and politics around it, that I found it hard to get my arms around as well. And I feel like the summit being in India feels like a wonderful moment for excavation, so I'm glad that you felt that it was maybe an accessible introduction for people who haven't been following closely. Amba, when did you find out that India was hosting the next summit? Like how are you all feeling about that government trying to position itself as sort of an innovative technical leader?

In some ways hinting that that's because of these projects, without them really maybe explaining or [00:09:00] reflecting meaningfully on how those projects have gone. How has it felt, them leading in this AI conversation, for you all?

Astha: Yeah, I guess we all knew, because the three of us were in conversation at the end of the Paris AI Summit, because India was a co-chair to the French AI Action Summit, and so we knew that the next one was going to be in India.

They'd originally thought that they would do it in the same year, which was 2025, but then it just so happened that it was now a whole year later. In terms of what it feels like, I think that when you had asked this question in Paris, I had felt at that point, and maybe that feeling hasn't moved that much for me personally, that there's a broader question of like, what is the value of the AI summits, and then getting into like who gets to do it and whether, you know, like, are these the right places to do it. And at the sort of like meta question, I'm still there, which is like:

Why are we doing this again? But then there is the sort of like click into the India question in [00:10:00] terms of like, is this valuable to do it in a Global South country? Um, it's the first one here, and what does that mean, both in terms of the symbolism and the optics of it, and then the reality of it. I think the symbolism is pretty strong in terms of like, this is a Global South country.

It's the largest summit that would've happened up until now. We expect 40,000 people to attend, you know, and so it's huge. There have been 500-plus pre-AI-summit events all over the world. A hundred-plus countries are involved. So, you know, it's enormous. And, maybe if you had asked me this a few weeks ago I may have had a different position, but at least there is some sense of inclusion, because civil society organizations have been given space on the 16th, 17th, and 18th.

So we know that there is some way of making this a broad-based summit, and so there is a certain [00:11:00] kind of value in that. Now, the bigger question in terms of like how this will likely translate, who has influence, how much of it is like lip service, all of those questions still remain. But I think that there is symbolic value in India doing it, which again, like, my position has shifted a little bit since the start of the year.

Alix: I was just gonna say, I had no idea 40,000 people were descending on Delhi to talk about AI in like a couple of weeks. Yeah.

Amba: Speaking of symbolism, my parents, who live in Delhi, messaged me a week ago saying that there's like front-page news that there's going to be a beautification effort in the city in the lead up to welcoming these thousands of visitors, including obviously the tech CEOs and heads of state, maybe most importantly. And there's like a cleanup, no-dirty-laundry vibe around it. There's a deeper metaphor there, basically. And I think my dominant feeling, and I have felt this both at the UK Summit and also in Paris last year, is like one of [00:12:00] feeling, as civil society, like we are voyeurs to this pageant. Because they really are an opportunity to get a sense, uh, a much more like visceral sense and a stark picture of what's going on in the kind of high-level elite conversations.

And the power play around AI, right? So you get a sense of national aspirations, how subservient or not we are projecting to the tech industry, usually more rather than less. And I think that's always most true for the host country. I remember with the UK, we saw Prime Minister Sunak's like really naked ambitions to have his legacy be built around AI safety, but ones that were very deliberate to place him as a peer to the tech CEOs like Elon Musk, which is how the summit ended.

And then with France, I guess it began with exceptionalism around this better discourse that was grounded in liberal values. But then you also saw that quickly, you know, unravel with the geopolitical anxieties and like the arms race rhetoric and everything that was swirling [00:13:00] around in the aftermath of the US elections.

And so the lasting echoes, you know, at least in the news cycle, were less about public interest AI and much more about, like, President Macron positioning that France was ready to catch up and play the game. And plug, what was it? Plug, baby, plug. Yeah, it just like really devolved, um, very quickly.

And so that's what I mean by being a voyeur: we're just sitting on the sidelines, where it's very clear, like, whether we have a quote-unquote seat at the table or not, this is not a summit that is going to meaningfully take our fights forward, because it's not meant for us. Right? It's a pageant between heads of state and tech CEOs, um, and a moment for them to tell the stories that they wanna tell.

But there's some that we can take away into our strategies just observing from the sidelines, or even from the rooms.

Alix: God, I'd forgotten about plug, baby, plug. I also think the idea that the summit has become the new Olympics is also really weird. And yeah, the [00:14:00] beautification effort, that's both not surprising and also just really weird, to roll out the red carpet in that way for a bunch of colonialist nerds.

Uh,

Astha: But India's done this historically, just as a point of note. Like slums get hidden behind white sheets and things anytime there's a global event like this. So,

Alix: yeah, I mean, I get it. Even if it is gross and part of this much bigger pattern. I mean, Amba, I think the point of it being kind of a metaphor, I feel like, is spot on.

Do you think, by being hosted in a global majority country, do you think that is going to create opportunities that otherwise didn't exist? Or do you guys anticipate that it will be as formal and hermetically sealed away from any meaningful conversation as it was last time?

Astha: I heard this metaphor, which I've been, like... like the Olympics, you know. Like how, what's it called, the World Cup first happened in South Africa out of, like, you know, the [00:15:00] Global North countries, and then like all the other Global South countries started to host it. And maybe the AI summit is a little bit like that, which is that, oh, you know, one of us does it, and then all the other Global South countries will do it, and this will somehow meaningfully impact the discourse. Versus, you know, doing what Amba was saying, which is like the pageantry around it.

I mean, at least based on what we've seen in the run-up to the India summit, there is a lot of language around the Global South. It feels like this is the big framing, and we've gotten into this a little bit in the essay that Amba and I wrote with Mila and Samira: that framing of the Global South actually is deployed quite strategically, as a way of positioning India as, as you were saying, like this country that's going to create all of this opportunity, but then it also doesn't really mean anything. Like, you scratch the surface and it doesn't mean anything.

So whether it's like logistics, like visas [00:16:00] and all of those, which are obviously harder for some countries versus others; the level of inclusion, the space that is given to the countries from, say, Africa or Southeast Asia; all of those are things that need to be scrutinized much more. Because I think that the way to understand the summit is like, how much of it is the pageantry and symbolism and all of that, which India is really, really good at, and how much of it is substance. We do really well on the symbolism, the pageantry, the works, and then as soon as you start to scratch the surface a little bit in terms of like, is this really inclusive?

Is this really about the Global South? You know, are people at the table in terms of civil society organizations, or is this about big tech and big philanthropy? I think that then those questions become much more complicated.

Amba: My hope and dream, although, as Astha is saying, maybe we are quite far from it, is it's kind of like the response to Carney's speech at Davos, right?

Like, you can just tell people are dying for [00:17:00] a high-level articulation of the trash fire we are currently witnessing. To have him say the neoliberal order was always a fiction, we can all stop pretending, you know, you can almost hear the collective sigh of relief. And I think we haven't had that truth-telling moment for AI yet. And so you can imagine that a global majority country, or really like a global majority hegemon like India, has every reason to like take the bait and say, what's in it for us? Like, are we heralding this concentration of power? Sure, it's an issue for the rest of the world and the Europeans are anxious, but like really, truly, who has the most to lose from this?

Like a power grab unlike one we've ever seen before. And like, what does this mean for global inequity? And speaking to each other rather than to the tech companies, which I think is what this is going to end up like. So there's always the promise of, uh, a big moment, which, even if it does nothing else, involves that kind of truth telling.

And that would be the [00:18:00] best case. But for all the reasons, I'm not like holding my breath.

Alix: Uh, that tracks. And don't, please don't, 'cause I want you around. Uh, if you hold your breath, you might not make it. So I wanna turn a little bit to, uh, the context of the summit. Oftentimes in these kinds of high-level spaces where there's a lot of complexity, a lot of power in the room, that complexity very quickly evaporates and everything gets very simplistic.

And terms get used, words get used, ideas get used in ways that basically carry pure symbolism, so like no meaning at all. Um, even though these words are important to be able to use to kind of articulate what's happening and potentially like plan what should happen. And that is a long preamble to try and tee you up, Amba, to share a little bit about the series and why you all thought it would be a good idea to, like, not necessarily pre-bunk, but like prepare people who are going to [00:19:00] the summit with much more depth of feeling and thinking around key political questions, so that when they step into those spaces they don't so quickly get kind of sucked into the co-optation of some of these ideas, and allow people to hear from actual experts on these topics and make it three-dimensional, rather than this two-dimensional kind of flattening of everything.

But do you wanna say a little bit about why you wanted to do the series, and maybe a little bit about what it is?

Amba: I mean, honestly, you just said it beautifully, but I guess I can share a little bit of the instincts that really the three of us and a broader crew of folks were guided by. I think it's kind of like a one-two punch.

The first is truth telling, right? We were just talking about it. We haven't heard enough. We've decided we're not holding our breath for global leaders to get on stage and do it. But I think there's just so much room and appetite for brutal diagnostics, and like taking apart these [00:20:00] big words.

Sometimes, like you said, really important, useful concepts that are gonna circle this summit, are already circling the summit: AI for development, AI for good, Global South leadership, democratization, openness. All of these terms, and this is not just a summit thing, right, like they have well and truly been co-opted by a pretty complex blend of corporate, state, and big institutional interests.

And it's more tactical, like, you know, that these frames are going to be in the spotlight. You can bet that they are likely to be in the news cycle in and around the summit. And so for us, it's a really good opportunity to make sure we are not letting the co-optation go unchallenged. Also, it really does suck, and maybe we'll talk about this later, but it like sucks when you can't use these words anymore.

Right. Because I think, Astha, maybe, Astha, did you come up with the fact that like the DPI discourse has all the good words, or do we need to credit someone else?

Astha: I've been whining about that.

Amba: Yeah. I mean, they really do have all the good words. It's like, no, we [00:21:00] agree things should be democratic and open, but that's not it, right? And so I think like that work of 'that's not it' is important, but equally, it needs to come with the second part of the one-two punch, which is like, it has to be accompanied by at least muscle building around what is the alternative discourse that we want to lead with. And that's what a lot of these interviews do. I'm like excited for everyone to hear them, but I think they sort of like balance the two really well.

Um, and it's because there is no more powerful way of imagining the alternative than first really thoroughly and rigorously pointing out what's wrong with the status quo, right? So I think that that's what this series, hopefully, will bring to folks, which is both the smell test for what is co-opted and watered down in the prevailing discourse, and then the best version of these terms and of these concepts. I will say there are some concepts that we spotlight, AI for good being at the top of the list, where I think many of us would be like, maybe we don't need to [00:22:00] reclaim this one. Maybe we can just let this one go and we can just find a different frame for what we wanna talk about.

Astha: I mean, you know, what do we want, right? Like, we want public tech. But then when you start thinking about that, then it's digital public. But then we also want infrastructure, because infrastructure comes with a certain notion of claim-making around it. That's also been co-opted. So I think we've all been struggling, and Amba and I have been in many Google Docs together, about like, should we reframe, should we reclaim, should we reimagine a lot of the notions, but then also the words that carry all of these meanings that we have not ascribed but that have already been given.

And also, this is not to say that none of it is intentional, it is all intentional, which is that this is intentionally phrased this way, right? Like, because it is ambiguous, because it is amorphous, because it is co-optable, because it is all the good words, and nobody will challenge you if you say, I'm building [00:23:00] digital public infrastructure, and nobody will think to question it. As Amba was saying, the challenge is to ask: what does digital mean? Does it mean the privatization of the last mile? What does public mean? Is it in the public interest? Is it openness as publicness? Is that sufficient? And what is infrastructure? Is it just scale, or does it come with like rights and claim-making around that? But because of the good words, the good concepts are somewhat lost.

Amba: I will say, on that front, um, on the definitional like ambiguity, that's probably like up there as my lesson number one from the last decade of DPI for the current AI conversation, which is: we, and by we I mean society, those that are advocating for the public interest, are always on the losing end when terms are defined in these abstract ways. Um, and so we would do ourselves a favor, I think, to rebut that abstraction [00:24:00] by getting really concrete and describing the specific system we're naming and its specific attributes.

Because otherwise I feel like we're always getting lost in these agendas that have lives of their own. And that does mean sometimes giving up on some of the good words. Right, I really feel like it's a tough competition between DPI and public AI on who wins in terms of like under-definition; like, no one is ever talking about the same thing. So to me, as I was listening to some of the DPI conversations, I was just thinking let's take some of this learning into our current discourse, and, even if it feels pedantic, constantly kind of asking that hard question of, wait, what are we actually talking about? And getting concrete.

Alix: Yeah. I think that's good advice. I think specificity does always benefit the people that are questioning power. Um, and it does feel like vagueness is a tool that we need to be on the lookout for. There were like a couple of patterns that connect this digital public infrastructure conversation, but also this kind of [00:25:00] misappropriation of certain concepts.

I'm thinking specifically of things like: digital public infrastructure is all about being inclusive, and it's all about efficiency, it's all about innovation. Um, like thinking about the tension between the dominant narrative, that digital public infrastructure as an example, and I think AI is also in this bucket, is gonna make things easier for everyone, it's gonna be wealth-creating, it's gonna be useful and add benefits that everyone can feel and experience, and this lived reality that when you are within these systems, there's oftentimes a lot of structural violence that gets produced. So marginalized groups are more likely to experience harms in these systems, where all of a sudden they're required to produce an ID to be able to access food, as an example.

Or, in the context of AI, marginalized groups are more likely to be on the receiving end of hate speech produced by general-purpose models that have trained on all the garbage of the internet, as an example. How do you all tease [00:26:00] out and effectively communicate that there are some benefits, maybe, for some people, alongside this broader narrative of, this is gonna be transformative and if you don't see that you're just being negative, um, and alongside these obvious, concrete, specific examples of harm and these structural trends towards harming people that have less power? Like, how do you navigate yourselves in how you tell the story about these technologies?

Astha: What we are seeing is that, like, this pursuit of efficiency, this pursuit, and we were talking about this in our essay as well, is that the carrot of development is so large and so deteriorating that as countries, as populations, as communities pursue it, you sort of let go of a lot of the questions, because you are told that this is good for you, and there's a lot of trust in the systems that are telling you this.

So obviously it's the government in many ways that is enabling these systems to be pushed to communities, with [00:27:00] the lure of, like, oh, today you don't have a medical system, but then tomorrow, you know, you will, because all you need is one thing, right? All you need is one ID. All you need is to download this payment platform.

All you need is to use the, you know, chatbot to tell you. It's these like little sort of breadcrumbings of development that are sent around. And that's what makes it so difficult to counter, because it's never one thing. It's many things, which are sort of drip-fed as part of the development agenda, and not just to communities, but also, I imagine, to governments as well.

Right? Again, like, referencing the essay that we wrote: the whole development, AI for good, public interest frame, whatever you call it, is so enormous that on one end, you know, you're being told that all of your structural problems will be magically solved. But then on the other end, the harms, the evidence around it, all of those other things are completely [00:28:00] glossed over or not discussed, because the promise of development is so big in some ways.

So it's a very hard one to talk about, because you do appear to be anti-growth, anti-development, anti-national, anti-sovereignty, whatever it is, when you start to ask the question, because the perpetuation of technology is so deeply embedded in the self-realization of the state.

Amba: On an optimistic note, I think one lesson from the last few years in the kind of AI policy and governance space is that just saying these words, like innovation and efficiency, actually isn't enough.

Like, you see time and again, whether it's with DPI or certainly with AI, like governments are having to make appeals to the public interest and to public values to justify why they're giving this industry a red carpet. My favorite example is President Trump at the press conference [00:29:00] for the Stargate $500 billion investment.

And he suddenly like takes a second and he says, and we're doing all this because this is gonna cure cancer, right? And Sam Altman is kind of like, I guess. Like, there is still immense power in claims to the public interest, and that is where our contestation has been sharpest, right? Like, today AI companies can't get away with dismissing data center resistance as just like NIMBYism.

They're trying, but, you know, they're coming up against constraints because of people's lived experiences and people saying, like, hi, over here, we are not being benefited by this. And so that brings us to the tools that we have in our toolbox. Like, one is, I think, to point to people's lived experiences. And then the second is to kind of uncover the incentives at play, so we know who's losing out but who's winning. And I think there, again, is such a strong lesson from the DPI examples. You can see this in the series, but what you learn is that often, almost always, it is [00:30:00] public interest and public-oriented goals that are the conduit for these projects, but they're a conduit for marketization and for creating new markets.

And you can argue, like, you need private sector engagement, and we can have that debate, but I think it's the subterfuge, or like the deception, that feels very insidious, and is the thing to point out both with DPI and with these public AI investments: let's get real about who's benefiting and who is paying, what communities are most likely to pay the cost.

Astha: I think that in contexts like India, it's been a bit harder. That lived experience takes a while to trickle in, and people's ability to negotiate from the bottom up is much more restrained. So in India, it is in very many ways the role of civil society organizations, community organizers, et cetera, to start that work. I don't see, for instance, the [00:31:00] data center resistance that we're seeing out of the US bubble up in India in the next, you know, 12 to 18 months, unless somebody is thinking about organizing around that. Which is why I think it's almost the notion of development that needs to be unbundled, like Amba said: for whom? Who's benefiting versus who's losing out? And I think that frame is a hard one to communicate.

Alix: I think the problem is that nuance gets interpreted as negativity, and I'm not quite sure how to combat that, because it's a really potent feeling. I feel like it cuts across basically every conversation about technology: when you introduce nuance, you're immediately processed as a critic who isn't particularly constructive, or who isn't interested in... basically, oh, you just wanna throw the baby out with the bathwater, like there's no good in this. I'm not quite sure what to do with that, but I think it's a really [00:32:00] interesting dynamic that emerges in these spaces, and it makes it extremely difficult to have grown-up discussions about this stuff.

Amba: Yeah, I mean, I think there's maybe a lesson there for us in how we show up. A good example is this conversation around the public interest. I've been in so many rooms where, and I've probably been one of these people, right, we'll ask the question: but what is the public interest?

And then we'll get into this never-ending loop. I'm feeling more and more like that way of showing up, what we're trying to get to, is really important. And like you said, it's about reintroducing the nuance, but maybe more so it's about, again, truth-telling about who is actually gaining from these systems and who is losing.

And I think that approach of, we are just dropping facts here and trying to introduce some honesty into what are often intentionally deceptive discourses, feels more sharp-edged, versus coming in to complicate or nuance in a discursive way. At least when you're in these rooms, you can tell that one approach positions you in a more academic, armchair-critic mode, and the other positions you as an advocate for the supposed publics being represented, or the interests being represented, by these projects. [00:33:00]

Alix: I hear you also on the circularity, because I think one of the problems is that a lot of the people who study this and have nuance to share are also in spaces where wordsmithing and definitional debates can trap you in this cul-de-sac of irrelevance, where you sit around trying to make definitions and write papers. But I also hear you that coming with facts and stepping back, rather than trying to persuade by articulating with nuance, sometimes it's just: here's a fact, here's a question, and kind of let the space adapt around it.

Amba: Yeah, maybe it's a question of form rather than substance. Like, we're getting to the same point, but I guess it's about how we [00:34:00] get there.

Alix: I have, I guess, two more themes that stuck out for me in terms of where the DPI conversation overlaps with the summit, and that also feel really relevant for finding a way to make use of moments like the summit. One of them is thinking about scale, and this idea that technology is going to, at scale, create these kind of frictionless experiences for people, but that actually that institutional desire to go fast at scale, with no nuance, ends up creating a lot of the very problems these systems are designed to solve. So do you wanna talk a little bit about how summits have engaged with this, but also how the digital public infrastructure debate has engaged with the speed and scale of technology, sometimes in a zero-sum way against questioning and nuance, and really getting into who's paying for these systems and who's harmed?

Amba: I feel like scale is really the specter that is haunting [00:35:00] the current conversation around AI, because it's become very clear that with this current trajectory, and the chatbots that it seems more and more people across the world know and love, much of what people love about them, if we're to be candid, does come from the fact that these are really large-scale systems. That is what, at least at some level, is producing these performance gains over previous versions of the tech that people weren't as jazzed up about. So scale in production has produced this version of the technology. And Karen's interview in the series, I think, is going to get at this: is the response to large-scale AI, then, to point towards other modes of building? That does bring up questions of trade-offs: what do we lose when we move from larger-scale systems to smaller-scale ones? Because I think the truth is we will lose something, and the question will be, will it still be [00:36:00] worth it as a kind of broader societal bet? Is that still the bet we should make?

Is that still the bet we should make? One thing I felt listening to the the DPI conversations was it was a good reminder that the seduction or the. The attraction of partnerships with the state in the rollout of these large scale projects in no uncertain terms, is the fact that the state is best positioned to offer scale.

And like what's implicit in that is that they are able to wield powers of coercion in, in ways that I think the private sector still can't, like they're getting close. They really still can't. So I think the, the ways in which, and maybe this is going to that concept of slow violence that I think Miller brought up, but there's, there's something in the public private partnership, the thing that makes the state, um, you know, the unbeatable partner is the fact that they can wield all of the tools that their disposal, including direct and indirect violence to get a product or a, or a platform to be population scale.

Alix: Yeah. [00:37:00] I wanna hear what you think too. And I hadn't thought about this scale question also being a way of thinking about how much power and resource big tech companies have accumulated via their capacity to roll out products that are immediately global, and the fact that they still need permission to operate in different jurisdictions to allow for that scale.

And I imagine there's this political process happening now where nation states and tech CEOs are navigating a new type of relationship, where nation states are like, you need our permission to operate here, and big tech companies are like, do I? And also, okay, if that's the case, what might partnerships look like? I feel like the next few years are gonna be a really interesting process in which, via scale, these public-private partnerships take on a completely different character as countries and companies start working together to build out the systems that they want. Because I feel like the interests of those stakeholders, nation state leaders and big tech companies, are oftentimes so different from those of publics. [00:38:00] And imagining what might happen when they start partnering up is kind of a terrifying prospect.

Astha: Yeah. And just to add, somebody had at one point asked about that dichotomy between innovation and governance, and then they'd also said that the biggest dichotomy is between governance and scale, because you can't govern and scale at the same time, which kind of points to what the push is. The push has always been: make it too big to fail, right? We've seen this repeatedly. As soon as you have this enormous scale, a billion people or however many people using it, it is deeply embedded in various parts of your daily life, and then even if it doesn't work, in whatever shape or form, or if it's harmful in some way, then it's just like, oh, we're just gonna cutely hack at it versus roll it back. Right? [00:39:00] The whole objective is to get to scale so that the rollback is impossible, so that if governance comes, it's only incremental.

That is a very significant strategic goal for these technologies, which is also why you see in India all of this: oh, India's gonna be the test bed, or, you know, the highest number of OpenAI users, or whatever; ChatGPT users are in India now. And there's a lot of fairy dust around the scale question as well. Right? We've never really known: are you an adopter if you have it on your phone? Are you using it once a week? There's a lot of mythmaking around scale, primarily because scale is seen as the goal, so that you can say, oh no, if we're gonna shut it down, that means a billion people will lose some made-up amount of, you know, their daily lives. [00:40:00]

And I think that's also part of that evidence question that we've been getting at: the nuance of it. Scale is not just a big number; it also has layers of nuance. And I can talk about a very basic thing, right? We know that, say, 800 million people in India have a mobile phone.

That's huge scale, but we don't know how many people are actually using these mobile phones, what the nature of this usage is, et cetera. So when you take those types of questions and deploy them to other kinds of tech, you'll realize the numbers are maybe smaller. And then, to Amba's point, and also the point that Mila has been making in all of his work, the state's ability to coerce scale is a really, really critical game changer, and very much part of that story that India [00:41:00] has shown.

Alix: I think that's really interesting. I love that idea that there's a complicated relationship between governance and scale; that's a really helpful way of thinking about it. I also really love this point about what happens when you get to a certain scale, because I've been in intellectual fights before with people who will say: are you telling me you don't think a state should roll out a thing that makes the lives of tens of millions of people better, because there are some people who will be negatively affected by it? That you're just gonna throw away the idea of improving things for some people? Because when you get to a certain scale, that does feel like a very different calculation, and you can make the case that millions and millions of people are positively affected. Which changes the whole tenor of the conversation, because then critique becomes, again, this negative 'don't do digital anything because it's not perfect,' which is obviously not what any of us are saying.

Amba: And it's also why issues like, you know, child safety rise to the top of the [00:42:00] most appealing advocacy hooks, because there's a way in which you can no longer other anybody. We can all empathize, in our identities as parents or caregivers, or just as individuals online. There's a way in which I think these companies are impacting all of us, and their universalizing impact has also allowed for more solidarity across all of our different identities than maybe previous versions of the tech did.

Alix: You can't say: wait, you're telling me that a kid dying by suicide, a single kid, isn't worth it? We have, you know, I can summarize a long article online for myself with ChatGPT; you're telling me that isn't worth having one kid die by suicide? The numbers game is such a different thing when it's kids. It's hard to explain right away, and hard to justify.

Amba: Yeah. I mean, I guess the moment you said that, I just thought of all of the exclusion-related [00:43:00] deaths that have been attributed to other failures, and the honest, though uncomfortable, truth is it's probably because a large number of Indian elites didn't see themselves in that kind of a tragedy.

And so I think it raises this other question of: what are the tech failures or tech tragedies that are genuinely uniting, in ways that might be strategic for our fights? But what does that say about us as a society is a maybe deeper question.

Alix: Yeah. So when people are engaging with this content that we've put together: both AI Now and Aapti are working on written pieces from each of these thinkers, who help us go deeper on a set of concepts, and then there are these videos edited with those same people. So in addition to reading these pieces, you should also watch these interviews. What do you want people to do when they're engaging with that content? Is there anything you want them to stay alert to in the run-up to the summit?

Astha: Yeah. As Amba was saying: push for nuance and specificity. I think [00:44:00] there is a deluge of words and jargon, and we're always told that these are good for us and will magically solve the things we're trying to solve, even as members of civil society organizations that are seemingly better informed than others. And I think that's what the series aims to do: to ask the question and not just leave it at the diagnosis, but then also have the courage to reimagine what can be. So yeah, that would be my answer.

Amba: One lesson from the previous summit is also that it is useful to have a kind of unified, high-level counter to a lot of the bluster that then overwhelms the news cycle, even just as a matter of media strategy. So, as a tactical note, I think this series will hopefully give us those touch points on how we position in response to a deluge of, you know, big, important concepts being used in ways that [00:45:00] weaken their impact or run counter to the public interest. And I think our series sort of picks up there and gives you the toolkit that you need to be able to do that.

Alix: Yeah, I think that's a really good reminder: one of the problems of nuance is that it requires so many different knowledges and kinds of expertise combined together to be able to make a clear, compelling case and story about what should happen. And hopefully we've done some of that work for folks, creating that intellectual scaffolding with all of these people. So as you're exploring the content, make sure you're taking away the really sharp, pointed, pithy arguments that come out of it. Because I think there's a lot of ammunition there that hopefully will be useful when you're asked questions about different aspects of the summit. So I'm hoping we've put together a little bit of a glossary that will be useful, and that will help us be better organized and specific about what we wanna see and what questions we think need to be asked [00:46:00] in these forums.

Okay. I think we can stop there.

Astha: Thank you. No, thank you.

Alix: Thanks to everyone who was involved in bringing The Vaporstate together, particularly Sarah Myles and Georgia Iacovou, who led editing and production. Please do go listen to the earlier episodes if you haven't already. There's some really rich stuff in there, and I think it's incredible context and historical underpinning for the upcoming summit. Up next is the series we talked about at the beginning of the episode, focused on the AI summit and all these concepts that get bandied about. For each concept, we're talking to one of the most interesting people, an expert in that concept, and we will be doing 12 of those.

They will exist as full interviews on YouTube, but if YouTube isn't your style, we're also gonna be publishing edited audio versions to our feed in three curated episodes, so be on the lookout for that. And it's all leading up to the AI summit in India. If you're gonna be there, do reach out. I'm sure there'll be interesting things to talk about over the next few weeks as it relates to how companies and governments are showing up in that [00:47:00] space and what happens next. So let us know if you're gonna be there, because we're gonna be organizing some community programming, and if you wanna be involved, do let us know. And with that, we will see you next week.
