E107

Lingo Bingo at the India AI Summit w/ Naomi Klein, Timnit Gebru, Nikhil Dey, and Chinasa Okolo

Show Notes

This is the last installment of our AI Lingo Bingo series! We dig into four more co-opted concepts with four more all-stars.

More like this: Last week’s episode with Meredith Whittaker, Audrey Tang, Abeba Birhane, and Usha Ramanathan

This week we’ll hear from Naomi Klein, who discusses how ‘AI for Climate’ is very much not a thing; Nikhil Dey, who shares all the ways powerful actors cosplay at having ‘accountability’; Timnit Gebru, who explains that ‘frugal AI’ only seems novel because of the hype & scale of big tech business models; and finally Chinasa Okolo, who helps us better understand the complexities of ‘multilateralism’.

Further reading & resources:

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

Post Production by Sarah Myles | Pre Production by Georgia Iacovou

Hosts

Alix Dunn

Release Date

February 27, 2026

Episode Number

E107

Transcript

This is an autogenerated transcript and may contain errors.

Alix: Hey there. Welcome to Computer Says Maybe. This is your host, Alix Dunn. And before we start the episode, I wanna let you know about an upcoming live stream next Monday, March 2nd, at 5:00 PM Mountain Time in the US, 7:00 PM Eastern. We are hosting our first virtual live show on YouTube. I'll be in conversation with David Seligman, Executive Director of Towards Justice and a Democratic candidate for Colorado Attorney General; Alvaro Bedoya, former Commissioner of the Federal Trade Commission and founding director of the Center on Privacy and Technology at Georgetown University Law Center; and Elliott “El’Bo” Awatt, who's a driver organizer with Colorado Independent Drivers United.

We're gonna be talking about how we might be able to steer the future of Big Tech accountability at local, state, and national levels, looking specifically at the Colorado context, but also thinking about how state-level work over the next few years is gonna be really important in the US, and what opportunities there might be that David, Alvaro, and El’Bo see.

We'll put the link to RSVP in the show notes, and if you RSVP and end up not being able to make it, we will send around the recording after. But for this episode: this is our final installment of Lingo Bingo, which has been part of a series I've been thinking of as a lead-up to the India AI Summit, thinking about those really big words that get used at summits.

They're good words. They're just words that get co-opted and kind of become meaningless. And we've been trying to add depth in partnership with experts who really understand those concepts and can give us the texture we need so that we don't have to chuck 'em away just because people with power use them in ways that they're not meant to be used.

So this is the last episode, um, where we've taken long form wonderful interviews that are available on YouTube and we cut them together for this episode. Also the two before. So if you are enjoying this one, make sure you listen to the last two, and you can also [00:02:00] watch the full interviews on our YouTube channel, and we will drop a link to that in the show notes.

So let's get into what we put together for you this week. Um, we have Naomi Klein to discuss AI for Climate, Timnit Gebru on frugal AI, Chinasa Okolo on multilateralism, and Nikhil Dey on accountability. And since we're gonna start with him, a little bit more about his work: he is a social activist and a founding member of the Mazdoor Kisan Shakti Sangathan, MKSS.

An organization based in the Indian state of Rajasthan, which supports workers' rights and has spearheaded some huge campaigns in India, such as the Right to Work and the Right to Information, which Nikhil will tell you more about himself. We dug into accountability, which is a thing that powerful people pretend, uh, they think is important.

But also simultaneously avoid it like the plague. [00:03:00]

Nikhil: There is no one who says they're not accountable. Even the greatest dictator says they're accountable every day, all the time, but it's a question of accountable to whom. And of course, in a democracy, everyone says they're accountable to the people, which again, is rubbish. They aren't. And the entire exercise of those who hold power is to carry on not being accountable to people, but pretending to say they are.

So it is only a question, yes, again, of people being able, even with information, to ask and frame the questions, demand the particular answer, be able to probe all the way through. And that's how you'd get into maybe a culture of openness rather than a culture of secrecy, which even after the Right to Information we continue to live in, because that culture of secrecy comes from this domination, that I am privy and should be privy.

I meaning [00:04:00] people in power, people who hold power. And this applies across the board, wherever power exists: whether it's in government, or in the corporate sector where money and the markets function, or in social relationships in countries like India where feudal and caste relationships dominate power, or whether it's gender. It's all part of a social package where, again, information is controlled very strongly.

So in turning it around, for us, actually, it was very interesting. There was a person called Moji, and one must, in conversations like this, go back to that. He was a person from a Dalit community. He had never been to school in his life. He came to the MKSS when he was almost 50. And in the first set of hunger strikes, when we went back to seeing when we started getting into this information question, he had this very powerful song [00:05:00] that would go,

essentially in translation, that it's the era of thieves, that the state is full of thieves. It goes through many a series of things: the thieves of yore used to live in the jungles, they now live in the bungalows; the thieves of yore used to kill you with a gun, but now they kill you with a pen. So it's an inversion, of people recognizing how power is dispensed.

It's not just that the pen is more powerful than the sword, but also that the pen is more powerful than the sword in subjugating you as well. Uh, and therefore, when you turn the tables: it was that same Moji, four years later, who said, until we get those records out, we will continue to be called thieves, whereas the actual thieves will continue to hide.

And people who were far more powerful than us, who could use violence against us: as soon as the records came out, they were unable to face us or the [00:06:00] people, they were unable to face anyone in the public domain. In that democratization, in that dissemination of power, in that breaking of their power, just by being exposed.

So that was the first step, and the naming and shaming did a lot. But very powerful people learn how to become shameless very quickly. And therefore, the next set of questions has been far more difficult: questions of accountability. And we have been fighting for a social accountability law, because they promptly say, yes, accountability.

And again, they usurp power to themselves, and you end up actually giving them more power in their hands. So it's extremely important. Just like the right to information, in a sense, allowed every Indian citizen to suddenly ask anyone in power and get an answer, and if they didn't get an answer, hold them personally accountable.

It was a unique feature of the law that from their own pocket they would have to [00:07:00] pay a fine for every day of delay, which was a great accountability system not present in any freedom of information law anywhere around the world. Similarly, if there is real and true accountability to the people, it really means dispersing these centers of power, rather than making the police or investigating agencies more and more powerful, or even so-called independent agencies, which we all fight for, but we now learn that some unscrupulous people who take part just pervert independent agencies in their favor, and then you are caught with a so-called independent agency, not really independent anymore, and not accountable either.

So these questions of accountability, again, permeate. I have given you a whole lot of things that come from, again, just normal political relationships and the way you deal with it. But as you look at the digital world, which we've started looking at, both [00:08:00] on the commercial side as well as the governance side, and in systems that say they can make things more efficient, and they can make things more efficient, obviously they make it so for those who control them. But they claim to make it more efficient by getting a few inputs from those who they say it affects, which is the set of questions that you're raising.

And again, that's a very dangerous trend because you don't understand anything in that black box. You have not asked for it. You have not even asked for that box in the first place. Suddenly to be taken on in this kind of fashion and to be asked, okay, you just give these two answers and really make everything better, is just not the place to start to begin with.

And when something controls everyone's lives in such tremendous ways, I think there is a very big challenge; the challenges of accountability are even greater. But [00:09:00] I would say that in our experience at least, we've had that one advantage. We started this entire battle at a time when people who had never been to school started demanding information, and when it came, they put it to such use that they disrupted so many power relationships. More than a hundred people in India have been killed pursuing just freedom of information requests, because the ordinary citizen could suddenly challenge the most powerful set of people in the country, and they would not shut up and they would not stop, and they would carry on, and they still do.

And so it really was something that changed relationships like nothing else. Similarly, we have seen even in the world where digital information is used, that if it is not just what has been called a management information system, [00:10:00] where everything is digitized for a manager to be more efficient, but what we call a janta information system, or a people's information system, everything changes. Suddenly you can't turn the tables, you are not so expert at it, you did not ask for this method of management, uh, you were a victim; but just by a small meaningful entry, which you define, you set the cat amongst the pigeons. That group that has it all neatly tied up doesn't know how to deal with this new entrant inside. But if it's just an outsider giving an input or two, which is what seems to be happening in the world of AI, then

But this new entrant inside. But if it's just a outsider giving an input or two, which is what seems to be happening in the world of ai, then. You're just being taken for a ride.

Alix: Now I wanna move to my conversation with Naomi Klein, who I feel probably needs no introduction, but just in case: uh, she has authored many great books, um, most recently Doppelganger, and also a throwback that really shaped a lot of [00:11:00] my politics, and maybe yours, called The Shock Doctrine.

She's also a Professor of Climate Justice at the University of British Columbia. For our conversation, we discussed the term AI for Climate, and to build on what Nikhil just described, where governments kind of cosplay this interest in accountability, Naomi explains that that's also happening in conversations about climate.

Naomi: Because these companies have built out all of this capacity,

they need customers really badly, and the market is not liking their product, right? Like, individuals love using free ChatGPT, or paying a little bit, but certainly not the actual price. But we're hearing more and more that businesses aren't seeing the kinds of efficiencies that were promised. So there is no market for what they have just built.

There's a huge gap, and unfortunately governments are kind of marks a little bit. And so, by the way, I say with much chagrin, are our universities, you know, where I work, right? Yeah.

Alix: Oh my God, yes. Yes.

Naomi: Yeah. So, so with this fear of being [00:12:00] left behind, right? You go in, and they're not measuring on the same metrics as maybe a Fortune 500 company. It's just like, oh, we're afraid of being left behind. These people are coming in. They're scaring us about how our students are not gonna be prepared for the future, or it's gonna save all this money for the government. So this is kind of part of the bailout. Like, you know, we're talking about, is there gonna be an AI bailout when this market bubble bursts?

I think the bailout has started, right? AI Now has been flagging this, but the bailout has started with these contracts, right? Where our governments are buying very faulty tech, and they're like, thank you very much.

Alix: Gobbling it up. Yeah,

Naomi: yeah, exactly. But why are they doing it? I mean, look, I think for every country it's different.

I mean, in Canada I think we're on the verge of making this mistake, because we had built a whole bunch of LNG infrastructure, and Asia was not as interested in buying it, because there's a renewable transition happening in Asia. So they've [00:13:00] got a whole bunch of liquified natural gas, and they have to figure out who they're gonna sell it to.

And so now they're talking about building data centers to justify the fossil fuel infrastructure that they've just built, right? So I think it's a huge crisis of imagination, of leadership, where they are afraid to introduce something like what we've been calling for for years, which is an actual Green New Deal, which is a plan for our economy to get off fossil fuels and create jobs for everyone.

Meaningful work in this transition for everyone who wants a job. But that's harder than just accepting a contract from Google or, you know, some big LNG company in Malaysia. I think it's just a failure of imagination, a failure of leadership, in the way they think about how to build an economy. You know, we're not investing in the things that we actually need.

And when you see it enter spaces like healthcare, it's particularly tragic, right? Because this is meaningful work, and these are jobs that people wanna do, and people want to actually interface with a human, you know, when they have a health problem; they want those relationships. And so [00:14:00] to gradually see this sort of meaningful work, teachers, healthcare providers, nurses, being replaced by robots, it's just, you know, it's what Cory Doctorow described as enshittification, but in the most intimate spaces of our lives.

Maybe it's almost like a three-legged stool, right, of this emergent fossil-tech economy, where one leg is tech, one leg is fossil fuels, which are looking for their new market. And then the other leg of the stool is the state, which includes, you know, the police state and the military state, which is also gobbling up these technologies.

So that's what's holding this together. The public is really not part of this at all, but they're reinforcing each other at the moment. But yeah, I think it's really important for us to understand that the fossil fuel sector was panicking about the fact that solar was, for the first time, cheaper than gas.

You know, when coal plants were being decommissioned. And [00:15:00] they're holding up this sort of notional nuclear fusion. How many notional technologies can we pile on top of each other, right? Like, AGI is notional, and it's gonna be fueled by a notional energy source. But in the meantime, what we're gonna actually have is fossil fuels fueling

AI slop because those are the two things we know for sure. So we've got slop fueling slop, and it's a fantastic economy.

Alix: We'll be back with Naomi in a few minutes, but let's go to Timnit Gebru now, who you might know as the founder of the DAIR Institute, and someone who, through her work, has been very critical of Big Tech's approach to scale.

Um, because of this, it felt natural to talk to her about the term frugal AI. And she's gonna start us off by explaining why massive general models are ineffective, and oftentimes a huge waste of resources.

Timnit: AI is not like a coherent set of methods or technologies. There's a whole bunch of things that go in and out of being called AI.

And right [00:16:00] now AI is synonymous with these chatbots, or image generators, or generating some videos or code or something like that. And I'm just not into that as a whole, I'll just say, I'm just not into that. But within the field of AI, I believe, within all these disparate things that are lumped into AI, some of them I think are useful.

They're legitimately useful techniques, legitimately useful ways of doing things, or products and tools that we can have. One example of that is speech transcription. Automated speech transcription is a very useful thing for many reasons. This is a very well-defined task where the input is speech, your speech, my speech, and the output is text transcribing the speech, so it's very well defined.

Input: speech. Output: transcribed speech. For all of these discussions about AI, we have a lot of cultures in the world that are oral cultures, right? So sometimes you don't wanna interact [00:17:00] digitally by typing or something. Maybe you just wanna interact with these different devices via speech, or sometimes you wanna control certain machinery with your speech.

And so for anything like that, for any speech as an input to be understood or to be able to command some sort of tool, we have to have automatic speech recognition. So for years, speech recognition has been a specific topic, specific work. And then of course, now we're in the age of: use as much data as possible, use as big a model as possible, and don't just try to have one task that you work on.

Try to do everything all at once. Try to create a digital machine god, kind of. That's the paradigm, that's where we are. So I wanna give you an example of what this has done for speech recognition, right? So for speech recognition, sure, you know, there were issues with it. Sometimes the output would be erroneous, there would be some mistakes or whatever.

[00:18:00] But this idea of so-called hallucinations, I know, not a good term, was not a problem with speech recognition, right? You didn't have an issue where the output, the text transcribing the speech, was a whole bunch of just made-up text. Fast forward to 2022: OpenAI has something called Whisper, which is, um, supposedly a speech recognition model and also a translation model between different languages.

It's like one giant model that does a bunch of different things. It's been integrated into ChatGPT.

Some of these examples were so bizarre. Like, there was a speech that said something like, I think he was wearing a necklace, and then it's transcribed to, he was holding a terror knife and he killed a bunch of people, and all of that, right? And you're [00:19:00] not saving the original data, because you don't even know what the speech is.

So this is an example of this whole one-giant-model-for-everything approach creating problems, things that we didn't even have before, in tasks that existed from before. The industry has absolutely no incentive to look at, uh, less resource-intensive things for anything, because they view their stealing of data, and having and using private data, as like a competitive advantage, 'cause they have all the data. And some of them view the fact that they can outdo anyone with respect to how many GPUs they buy, or how big their data centers are, as a competitive advantage. So there's not gonna be any research in these kinds of places on how to do something super efficiently and give that research away to other people; that's not gonna happen.

So I'm not looking to those big players for any kind of inspiration here. On the other hand, there is a parallel movement of [00:20:00] different groups, different small organizations, to do this kind of thing. So one example that I think many of us talk about is Te Hiku Media in New Zealand. This organization, all the way in New Zealand, focuses just on the Māori language.

In everything they do, right? They gather data with Māori language revitalization in mind, and people are very excited to participate in that data gathering process. And so they built a speech recognition system, one of the kinds of systems I was talking about earlier, uh, for the Māori language. So they gathered this data, and an American company, uh, wanted to license the data, and they said, absolutely not, because everything we do has to serve the Māori people first. And also they said, you guys kind of beat this language out of our grandparents, and now you wanna sell it to us as a service. Keoni from Te Hiku Media often talks about a language back campaign that is similar to the land back campaign.

The idea being that a piece of [00:21:00] technology that is based on a specific language should first and foremost serve the speakers of that language, and that means also the revenue gained from that technology, because of course they're getting the data from the language speakers, in terms of data sovereignty or licensing.

They came up with a specific protocol based on the Māori language, right? And then they also built a platform to help speakers of smaller indigenous languages create tools that serve them.

They're only concerned about their own language and they're supporting other people in building. There are other tools, and when you hear about what they're doing, you can look at aspects of what they're doing and adopt it to your context. And so there's a bunch of different groups like that where we talk about our different approaches.

And so, for example, at DAIR, [00:22:00] we're building our own cluster, which is, think of it like a small data center, because we don't wanna use cloud computing resources and we want to support our peers who don't wanna use AWS or Google Cloud. And we found from our analysis that a one-time investment of, let's say, you know, $400,000, which is a lot, would give us an equivalent cluster that would've cost us almost $2 million per year to use in cloud computing services.

Right. And so the question that we wanna answer, and that some other people are thinking about, is, like, how can I, you know, with my cluster, and Te Hiku Media with their cluster in New Zealand, and some other people who don't even use their GPUs that much, how can we share resources across all of us, right?

And pool our resources to support each other. This is a very different kind of paradigm than whatever big tech is pushing, and I'm not looking to big tech to push a different paradigm. But I am [00:23:00] looking, and there is an alternate sort of ecosystem that's building around this idea that, no, we don't have to do the same things that these people are pushing.

Alix: Basically, in all the conversations about AI, we hear about these different protagonists. We hear about governments, we hear about companies, we sometimes hear about citizen representatives, and that kind of gives you this vibe of multilateralism: this idea that there are these very high-level conversations happening between really important stakeholders, and that's the only way to break through on these intense questions. So to dig deeper into this concept of multilateralism, we are gonna hear from Chinasa Okolo, who can describe a little bit more about what that really means, and whether it's actually leading to action. Um, Chinasa is an AI governance expert whose research focuses on global majority countries, and here she offers advice for governments who feel like they wanna adopt AI systems, maybe, and should be cautious not to replicate governance models coming from mainstream big [00:24:00] tech services.

Chinasa: I've been saying, you know, to a lot of people that when, and, you know, if, which is more likely when, the AI bubble pops, the good thing is that, because of the downstream impacts of governments, particularly in global majority countries, focusing their attention on trying to enable AI, you know, within their governments, within their countries respectively, they will have realized that they had to fix these fundamental issues for the most part, particularly around electricity, hopefully education access, healthcare access, agriculture, and we'll see better improvements in outcomes.

But right now it's really hard to say. I think that there's a lot of pressure, and also interest from large funders and multilateral development banks and also multilateral institutions, to help bring attention, um, to a lot of these issues. And so I do see that there is this trickle-down effect that's happening from the interest in AI. Governments, particularly those in global majority countries, really need to understand, one, how AI solutions are already being used, [00:25:00] developed, and adopted in their respective countries, and also figure out what the issues are. For example, you know, there are these AI databases; a lot of times they're not really useful to my work personally, because they don't provide a lot of context, you know, on issues outside of, like, you know, the US, the UK, across the EU, and other, let's say, Western countries, quote unquote. Um, and so really, you know, finding ways to collaborate with civil society and academia to do this kind of landscape mapping, to really understand, is AI helping these issues or problems, or just creating more? And I think another way to get that kind of information, information that's not necessarily country-specific, is to really, like, when you go to these big summits, you go to these UN forums or events, really find a way to talk to other peer countries, to understand what issues they are also experiencing with AI, and whether this is something that could happen, or could be something that affects or impacts, you know, citizens in your country or even your government more broadly.

First is, what are [00:26:00] the long-term costs? And not just, you know, the financial ones. Companies may approach global majority governments with, you know, let's say, reduced costs for the first couple years, and then, you know, as the model itself and also the usage of it scales, it actually may become unsustainable for countries to afford in the long run.

And, you know, they'll have wasted all this money on, you know, this AI system that they can't even run or use anymore, when it could have been diverted or directed towards actually, you know, pressing social problems. And so that's something where, I think, governments really have to employ researchers, and just, you know, even government analysts, to figure out what that kind of issue may look like in the long run.

And then also really understand what the agreements entail. So I would say, for example, there was an issue around Kenya having this health data partnership with the United States that, you know, would provide the US with access to sensitive, you know, Kenyan health data for basically decades.

And [00:27:00] that's unreasonable. And I think that many other governments, you know, across Africa, across the global majority, may, I would say, encounter similar clauses in their partnerships with different companies. So because, you know, these companies are finally seeing the global majority, you know, as a new market for them, you know, to one, increase their revenues, but also provide, um, new avenues of data to train and refine their models to become more culturally and contextually relevant, they will maybe introduce these clauses that provide them with access to sensitive data, or just general usage data about consumers, that governments may not be aware of, because they don't necessarily have the technical capabilities or expertise in their governments. And so, uh, this is something I really want governments pursuing these partnerships to be aware of.

Alix: Let's take it back to Timnit now, who can go into more detail on how these partnerships can indeed be exploitative, and can sometimes even be used explicitly as a threat against small organizations that are trying to build, or have built, context-specific [00:28:00] and resource-light language models for their communities.

Timnit: I collaborate with a number of small language tech organizations. You know, Te Hiku Media is just one of them. There's one called Lesan that focuses on Ethiopian languages, and there is another one called GhanaNLP that focuses on Ghanaian languages. And from speaking to both of them, I found that they both had kind of similar complaints.

One is that when OpenAI or Meta or something comes out with an announcement of a big model. So Meta came out with an announcement of a model called No Language Left Behind. I know, bad name already, so let's fast forward past the name. They claimed that there is this one model that performs state-of-the-art translation across 200 languages, including 55 African languages for which there was previously no state-of-the-art translation system.

Once this announcement came out, a number of potential investors in [00:29:00] these, uh, smaller organizations that I was talking about literally told them to close up shop. They were like, well, Facebook has solved it, or they're eventually gonna solve it, so your little puny startup is not gonna be able to do anything.

So your little puny startup is not gonna be able to do anything. Similarly, when they speak to, um, high up people at Open Eye and other places, they basically threatened to threaten them by saying that Open Eye is gonna. Pretty much put you out of business soon 'cause we're gonna make our models better in your language.

You're better off collaborating with us and supplying us data, for which we're gonna pay you, like, peanuts, right? So on the one hand, this was happening. Another thing that was happening was that some potential customers would go to them and say, come back to us when you have bigger coverage, more languages.

Come back to us when you have more linguistic coverage. So imagine you're Lesan. You know, your contextual knowledge is about a specific geography, a specific set of languages. Your contextual knowledge is not about South Africa; it's about Ethiopia, and the political situation in Ethiopia, [00:30:00] and how different languages, language groups, are treated, and what kind of politics is associated with which kind of ethnicity, right?

Like, this is your, your specific context, but then what you're being pushed to do by both clients and investors is claim that you are producing language technology for all African languages, and even scaling globally. Like, it's all about scale.

As you said earlier, my idea was to have a federation of these small organizations where they share data amongst themselves, but then externally there's like a kind of an easier interaction between them. So if you want language support in both Ghanaian languages and Ethiopian languages, you can interface with one, um, application interface, right? One API. You don't have to go to all these different organizations, and perhaps they could also band together and present themselves as really good competition to these bigger organizations. And at the same time, each of them knows their context so [00:31:00] well, and they curate data so well. They're not just guzzling everything on the internet. They're really curating data and using low-resource models. I don't think this is a language-specific problem at all. I just think this is a paradigm that needs to exist for everything.

Everything needs to be built that way. Anything, not even just AI, so-called AI, right? My specialty is actually in computer vision, and unfortunately, a lot of bad things are part of computer vision: a lot of surveillance, face recognition, gait recognition, you know, action recognition, et cetera, which is unfortunate. But also things like plant recognition,

or even things like medical imaging, like trying to analyze certain kinds of images and see where the likelihood of having a tumor might be, et cetera. This is also a part of computer vision, right? And in this case, it's the same exact thing. You want to create a specific tool for a specific context.

If you're interested in radiology, you should just have [00:32:00] data in radiology. Figure out what you're trying to build the model for, what specific thing you're trying to build it for, right? You're not trying to just build something that you claim will be like a superhuman doctor. Right? That doesn't make any sense.

But, you know, to me this idea of a task-specific tool is just a concept in anything we build in engineering. It's not really a new concept. It's just that these people came along and decided that they wanna build a machine god, and then claim that they're doing it. Then they end up kind of stealing data, killing the environment, and exploiting labor in that process.

And it's not the frugal AI thing that you're talking about that's new. It's more the flip side, what we've been seeing, that's ridiculous and new. And we are just trying to be like, let's go back to basic principles and let's not do something that sounds ridiculous and dead on arrival.

Another ungenerous take would be, we need to do ChatGPT but with fewer resources, which I'm also not [00:33:00] pro. I think that's just not a good paradigm. Why do we need chatbots? We need to both build other things and do it with other resources. It's kind of like, you know, when the model DeepSeek came out, I had two conflicting feelings.

One of them was like, exactly: when you are constrained, when you can't get the best GPUs because of export controls, that's when you innovate. When you feel like everything is at the tip of your fingers and you don't need to think about being efficient or clever or anything, you're not gonna innovate.

So what's happening in the US is not innovation. On the other hand, DeepSeek was still this large language model kind of paradigm, and I am thinking, why is everybody's imagination hijacked? They could have put that effort into something else and built something that is resource-efficient, et cetera.

So it both shows how clever and innovative you can be when you are resource-constrained, [00:34:00] but also the fact that if your imagination is hijacked, you're still gonna produce that same sort of paradigm when we could have a paradigm shift. My hope is that at this summit, where one of our researchers is gonna be there,

hopefully you guys will get to be there too, some people can try to redirect some of that energy. I don't know if that's placing too much hope on anything, but I think that we really need to produce, um, content or something to help people un-hijack their imagination into something different.

Alix: Let's hear more from Naomi now because she talked a lot about this crisis of imagination and how it's essentially a product of US dominance.

Naomi: This is an opportunity for us to ask: what is our economy for? What is technology for? What are humans for? What do we value? Because this mania is so out of control that we've actually gotten [00:35:00] to the point where, you know, the richest men in the world are telling us that they're summoning digital gods or digital demons.

You know, whether it's Elon Musk or it's Sam Altman, literally saying that they can foresee a future where the entire world is covered in data centers. And then Google comes in and says, or maybe we'll just circle outer space with data centers. So if that is not the future we want, we clearly have to dream a different future and we clearly have to make very different decisions.

There's so much work to do. Um, there's so much important work that needs to be done, and that doesn't mean there's no role for automation. But all of this is speaking to a need for, you know, a level of deliberation and planning about what kind of world we want, what kind of society we want, and really deep questions about, you know, what is work for? What do we wanna devote our lives to?

What is policy for? Because right now what we have are governments led by the US, right? But, you know, what is happening now is, once Trump was brought on board [00:36:00] the AI bubble, he aligned himself with these Silicon Valley executives. We all saw the picture from inauguration and everyone said, oh, what's going on?

We thought these guys were liberals. Is it because their kids were trans? Or, you know, is it about woke workers, or whatever? It was about this. It was about exactly what is happening right now, which is that they determined that they wanted a wild west for AI. That the way to get as rich as possible was just eliminating even the mildest kinds of regulations that were being introduced by the Biden administration, by the European Union.

They don't want any of it. They want the kind of frontier that, you know, Mark Andreessen had in the early days of the internet, when they convinced governments that they couldn't tax things that went on on the internet, 'cause they needed a frontier. So they wanted an AI frontier. They don't want any rules, you know. And there were modest regulations that the Biden administration was introducing, things like: on federal lands, if you build a data center, it has to [00:37:00] be

fueled with renewable energy, things like that, sensible regulations. And they did not want that. And they rebelled. They backed Trump. And Trump immediately declared an energy emergency, which opened the faucet for all of these coal plants that were being decommissioned to be brought back online, nuclear power plants to be brought back online.

Massive new coal, new LNG, uh, infrastructure. The kinds of things that Elon Musk is doing in Memphis, where he is just jerry-rigging, you know, these massive gas-fired methane turbines, creating his own unpermitted power plant. It's all because they've declared an emergency, right? So it's an emergency, wild west atmosphere where the rules don't apply, um, use as much water as you want.

When organizers start to fight back, saying you can't do that, they're overriding local infrastructure. But they're doing more than that, and this is what's really important: their definition of US AI dominance is the whole world using American tech. That's the way they've defined [00:38:00] it. So when you see JD Vance going to Europe and screaming at them about their, you know, modest digital regulations, or when you see Mark Carney, Canada's Prime Minister (you know, I live in Canada), caving shamefully to Trump and rolling back a modest tax on digital media.

Or when you see all of them traveling with Trump to Saudi Arabia or dining with the Crown Prince in Washington, this is US foreign policy now, right? The policy is: everybody has to be part of this wild west, everybody has to give these tech companies what they want. So this is where we are. During the pandemic, I wrote a piece called The Screen New Deal, about how Eric Schmidt was pushing for many of the ways in which our lives

had become virtual during lockdown to be made permanent. And he was on this big lobbying push to have smart cities and digital cash. And I mean, this is something that in India, I think, Modi was ahead on, right? He was [00:39:00] pushing digital currency for everyone. Um, but you know, right before the pandemic, it's always worth remembering that there was a really quite strong tech backlash, right?

There was a big backlash against driverless cars. They weren't getting them, you know, online nearly as quickly as they thought. Amazon got kicked outta New York City; they were planning to have their second headquarters there, and it was pushed out. And there was tech worker organizing going on, you know, pushing, uh, companies like Google and Microsoft not to do business with ICE and the military and so on.

So that was happening, and then the pandemic hits and we're all in our homes, and suddenly everything is being delivered to us by their platforms, right? Whether it's GrubHub or Amazon. And we're entertaining ourselves through streaming services, and everyone's on Zoom, and they're kind of keeping us alive, or at least that's the story they're telling.

Right. And so, you know, I don't think we've analyzed this quite enough, like, [00:40:00] what the pandemic meant to tech, because we have this other narrative that they all hated it and that their workers wouldn't come back to work and so on. But actually they really loved it. It was a taste of the AI future.

Alix: Mm-hmm.

Naomi: And Eric Schmidt was saying things like, well, maybe we can just keep telehealth, and, you know, China's doing it with AI, and, you know, we can have Google Classroom, like, suddenly all the schools are using Google Classroom, and maybe we'll have more remote, uh, learning and so on. But actually we didn't like it.

Like, we did not like it. It was terrible. It was terrible. It was terrible. So that's the other way we know that we don't want their AI future. Right. That's the other thing that happened: Toronto kicked out Sidewalk Labs. So there was a period there during the pandemic, like, we all know that tech executives doubled their worth in the first year of the pandemic.

I think there was an extraordinary increase in data use. It was very profitable for these tech companies, I guess, is the bottom line. But then as things got back to sort of normal, they were starting to flatline. So then you think [00:41:00] about the fact that ChatGPT launched in November 2022, really just at the moment when we're coming out of the pandemic, and it holds out the promise of being the next bubble, right?

Alix: Mm-hmm.

Naomi: The catch is there's just no way for you to chase this bubble and hold onto the things that you claimed about climate. There's just no way. Not if you're gonna do the arms race model, not if it's a race. Yeah, right? If it's a race and Meta is fighting, you know, it's like this battle of the titans, or, you know, then they're just gonna do it as fast as they possibly can.

And we've seen this most clearly with Elon Musk, you know, a guy who most people thought of as probably the highest-profile, you know, green tech entrepreneur on the planet. This is the guy who decides he's gonna power Grok with 35, um, you know, methane-powered turbines, rigged together.

Alix: Yeah. And poison a city.

Naomi: Yeah. And poison a city. Um, I think that more than anything else shows that it really is just incompatible. Um, they had to choose. [00:42:00] They've made their choice. Um, so we really shouldn't play along with this idea that it can be green. There's no way to greenwash this, you know. And they say things like, oh, it'll solve climate change, but it's just silly.

I don't think that's a serious line of argument. We actually know the things we need to do, and we have the tech. We have the tech. So let's use the tech, and let's not build an artificial mirror world of AI. We don't have the tech for that, because even green tech has a cost, right? We can't do both.

Like, we will either transition our economy off of fossil fuels and lower our emissions in time to avoid maybe two degrees, I mean, it's getting harder and harder, you know, or we will build this parallel madness, a world covered in data centers, in order to summon a digital god.

Alix: I wanna end with Nikhil now, who comes at this issue of tech and power from a different angle.

Starting with the observation that when we [00:43:00] use the word technology nowadays, we often mean digital, and digital technologies are this kind of unique combination: opaque, but also durable enough to control and subjugate people while they're being told it's for their benefit.

Nikhil: The idea of tech for good is really something where this question has been restricted, brought down to digital technology as wellbeing technology.

Now, many of us who work with the rural poor in India fought for many years and then got what we thought was an extraordinary law, which was part of what we call in India the right to work. In the US, actually, the right to work is something altogether different. Uh, in India, it's known as what it is: my right to employment. And we got, after many years of struggle, an employment guarantee law, which was a huge breakthrough.

It was a breakthrough to be able to live your life with dignity, to not get some kind of [00:44:00] handout, to get agency, to get politicized. Many, many women who did unpaid work got paid work for the first time. It guaranteed employment to any rural citizen, up to a hundred days of work in a year, five kilometers from their home, on public works.

And it was there for 20 years, and it's just been repealed in a very, very, uh, subversive fashion. But it was something quite extraordinary. Firstly, it is on public earthworks mostly, because it's to generate employment. So it's fantastic work in many ways. Machines are banned, contractors are banned under that particular law.

It's a work on which more than 65% are women workers. The basic tools are the same pickaxe and shovel that have been there for probably, what, thousands of years. The same heavy [00:45:00] instrument, the same inefficient instrument. And work is a beautiful thing to have, but why could not some of this effort on technology have been spent on making some of this work more intrinsically efficient and more in tune with whatever people were doing?

Because in that same program, the managers of that program have just spent all their time going into use of tech, meaning digital tech, to a point where, the Indian numbers unfortunately are in crore, so you have actually 26 crore, which will be, I don't know, 260 million people who are registered to work, and about half that number who go to work every year, spread out over hundreds of thousands of work sites across the country. Now, because of this [00:46:00] obsession with tech,

their attendance is marked through two photographs uploaded twice a day. So many, many people spend half their day running around trying to get the photographs, unable to get their work or wages. It's an insane system of tech dominance. Completely insane. And there are many things around that, because that didn't work so well and there was fraud around that, because they thought they were ending fraud through digital means.

They then said, oh, we will geotag where these photos are taken, because that's where the work is and that's what will sort it out. That didn't sort it out. Even when they geotagged it, people still said, we are working somewhere. So they geofenced it, which is almost like putting a kind of digital leash on people.

So people who are working on a road that started off in one place had to come back two kilometers every day, get their photograph, then go to their work, then come back for their [00:47:00] second photograph. That's the insanity of what this whole world, this entire framework, can do. So what is tech doing?

Is tech doing something useful for me intrinsically? Actually, maybe even if it were to succeed, that idea is not an idea of heaven. It's an idea of hell for many, many people. This summit is run by a whole bunch of people who control the world, control countries. They love to say that they are accountable.

Um, they're anything but that, to the people at least. And there's this question of upwards accountability versus downwards, and this is something we've seen. I mean, this summit is being hosted by the Indian government. It's being hosted by the Ministry of Information Technology, and they are the people who have thrust upon all of us the system of Aadhaar, [00:48:00] this digital ID,

Alix: Which at all times they said, oh, it's not mandatory, it's only compulsory.

Nikhil: Without which nobody could get a single one of their welfare benefits, which were essential for their survival. Which, we have seen, has left huge numbers of people feeling threatened. And which is now being sold all across the world. These summits will talk about these systems as being tech for good, these systems as something that should be taken on by other countries as well.

And real accountability? Real, even feedback, leave alone accountability, just feedback, is just not present. There is no accountability. Literally, uh, today in the state of Rajasthan, where I live, 2 million people, 2 million social security pensioners, have had their pensions stopped because they were unable to verify [00:49:00] themselves.

And this is just a few days ago. Uh, they were unable to verify themselves through their biometric system as being alive. And we have seen hundreds of those examples. In fact, finally they are marked as dead even though they are alive, because there is a huge pressure on that system: okay, either I verify them, or they're not verifiable, some thumb impression doesn't work.

The iris doesn't work, whatever new technology, their facial recognition doesn't work. There is nothing better than a social relationship, and there is nothing worse than someone being able to mark you dead because you're a digit and not a human being. That is a frightening thing, and we are seeing it. It would not have happened earlier, when, however insensitive that bureaucratic official may have been, they would not have marked you dead.

Even if you were not able to meet the other [00:50:00] requirements, they would not have marked you dead. But if you're just a digit, it's very easy to strike you off.

Alix: We landed on kind of a dark note there with Nikhil, but I think it's important that we understand the real-world effects of these population-level digital programs, and stop pretending that because technology can do some good things sometimes, that means we can turn off all of our critical capacities and say go, go, go when nation states think about taking up, um, new gigantic projects.

I wanna end with a few hopeful words from Naomi Klein. Uh, but before I do that, thank you to producers Georgia Iacovou and Sarah Myles, um, and to all of our guests, all 12 of them. Thank you so much for enriching my understanding of lots of terms that I kind of thought I knew. I learned so much in every one of these discussions, and I hope those of you who tuned in did too. Remember also that we have full interviews up on our YouTube channel, so if there's any guest you wanna go deeper [00:51:00] with in conversation, um, or any concept that you wanna hear more about, you can do that on our YouTube channel, which I'll link to in the show notes.

Also, I got the chance to sit down for an even deeper conversation with Naomi Klein, a second one, where we were not talking about the upcoming AI summit, um, but had a much wider-ranging conversation about our current political moment. Um, and we talked about her upcoming book on, uh, what we can learn from past movements and past fascist attempts to sort of take over and dominate societies, and what we might do about it. I came away from the conversation feeling so much more hopeful, and I'm excited to share that with you all in the next few weeks. So I'll give her the final word in this episode, and here she is giving us some sound advice on why and how we might be able to imagine better futures for ourselves.

Naomi: We have to dream our own dreams and not have somebody else's, you know, half-remembered dystopian sci-fi kind of [00:52:00] imposed on us. As somebody who teaches university students, and I teach a course on fascism, we talk about AI, because I do think this is the centralization of knowledge. You know, the promise of generative AI is that it can think for us, right?

And that is a truly fascist idea, the idea of outsourcing thinking. And so our power is in our ability to think with each other, like, to genuinely generate new ideas. That happens when we sit down with each other and bounce ideas off of each other, and our energies actually interact and intersect, where we're not just like a sycophant machine, where we're not just regurgitating things that other people have already said and, you know, re-compositing them.

So I guess I would just say like, is it actually seductive? Um, or is it just the bleakest thing you could ever imagine? You know, I would just say like, [00:53:00] go for a walk with your friends and try to dream a more beautiful future than that.
