
Show Notes
How does a country wage war using LLMs? Oh, and WHY?
More like this: AI in Gaza: Live from Mexico City
In Computer Says Kill Ep #1 we are joined by Matt Mahmoudi. The US Department of War is leaning heavily on AI technologies to attack Iran. Matt explains how the use of LLMs to identify ‘legitimate targets’ is collapsing the chain of decisions that lead to lethal force. We discuss what this means at a time when fascist governments are eager to demonstrate their strength on the global stage. From Israel field-testing AI weapons in Gaza, to the US using AI tools in horrifying new ways to perpetrate ever-worse war crimes, we start to connect the dots between the technology, the people powering it, and the human costs.
Further reading & resources:
- Automated Apartheid — Amnesty International 2023
- How Israel uses facial-recognition systems in Gaza and beyond — Matt’s interview in The Guardian about the report
- Crimes of Dispassion: Autonomous Weapons and the Moral Challenge of Systematic Killing — Elke Schwarz, 2023
- Sam Altman May Control Our Future—Can He Be Trusted? — By Ronan Farrow and Andrew Marantz, The New York Times, April 2026
- “Big Brother” in Jerusalem’s Old City — Who Profits Research Centre
- What is Israel's secretive cyber warfare unit 8200? — Reuters 2024
- Genocide as Colonial Erasure — Francesca Albanese, October 2024
- Buy Resisting Borders and Technologies of Violence, edited by Mizue Aizeki, Matt Mahmoudi, and Coline Schupfer
- Buy The Palestine Laboratory: How Israel Exports the Technology of Occupation Around the World by Antony Loewenstein
**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**
Computer Says Maybe is produced by Georgia Iacovou, Kushal Dev, Marion Wellington, Sarah Myles, Van Newman, and Zoe Trout
Transcript
Alix: [00:00:00] I am Alix Dunn, and this is Computer Says Maybe. For the next few weeks we'll be running a special series called Computer Says Kill. It's gonna be focused on tracing the people, decisions, and systems that have, I would say, recklessly ushered AI into the business of war. I personally have been wrapping my head around this issue for quite some time.
And the recent news in war crimes and drama, uh, had me craving a deep dive. So that's what we're gonna do. We're gonna dig into this topic from various angles to understand what's happening, how we got here, and what might happen next. And just to be clear, here is this dark reality where the US Department of War is actively using large language models to make decisions about who to target and kill.
To kick us off, I sat down with longtime friend and collaborator, Matt Mahmoudi. He's my go-to on all kinds of things. Not just AI and war, um, also things like strength training. Um, but aside from that, he's also an assistant professor in digital [00:01:00] humanities at the University of Cambridge, and a researcher and advisor at Amnesty International.
So if you have been curious, frustrated, confused, maybe all of those things at once about what's going on in Iran and all over the world, from AI military chatbots to tech companies in bed with increasingly authoritarian governments, this is the episode to start with, to help ground yourself and better understand what's going on.
So we'll start with Matt explaining a little bit about his work before we dig in.
Matt: I guess I'm hunting bad tech companies for a living. I think that's my simplest way of putting it, and the ways in which governments and states end up using those technologies in profoundly rights-violating ways. At least that's one of my hats, but I'm looking at that in particular because I'm interested in the ways in which tech companies and states have made an industry out of, in particular, the exploitation of [00:02:00] racialized people.
And that industry is kept alive in part through the promises that tech companies make, either related to the domestication, as it were, of those racial others, be they in the form of threats like terrorists or, you know, refugees that are about to invade your borders, or in the form of, you know, services that they're going to provide the humanitarian needy, as it were. But either way, the companies here provide the very materials by which governments can then make claims on handling the situation. Again, whether it's a threat or whether it's, you know, under the auspices of some higher moral ground. And so I'm interested in the ways in which those things play out with each other.
And I think the more we're attuned to the ways in which some of the most racialized, vulnerable communities get exploited by these companies and, and states that govern on the back of them, the more attuned we are to understanding what these companies are actually about at the end of the day.
Alix: I feel like now we're in a very specific moment.
I've been thinking about [00:03:00] the last 15 years. A lot of the work in this area has been focused on calling out the accumulation of power, and now we're in this moment where that power that has been accumulated is being exercised. And you've been a part of a community of people that have been shouting from the rooftops that, like, there were harms happening and that those harms would get significantly worse if we didn't prevent an accumulation of power matched with a collapse of accountability.
And now it feels like all of this is manifesting at once, which I feel like the public is now seeing in a way that's jarring. It's stressful. It's like, how did we get to the point where you could have a chatbot involved in lethal force with an authoritarian state attacking with impunity? How would you explain to someone how emerging technologies are being applied right now in this moment, and why this feels like such a watershed moment to be talking about the corporate application of technology in military action?
Matt: I think [00:04:00] the watershed moment is really that some of the colonial and conquest-driven impulses of a bunch of Western states with authoritarian inclinations have been put on steroids, and those inclinations just happen to meet a sort of normalization of what's seen as permissible, both by way of what states are allowed to do, as well as what's seen as permissible and possible in terms of what technologies can do.
So you have both this, like, inflation of the magic of technology as well as a normalization of the rhetoric that we use around particular peoples and areas. And this comes together in the form of what we've seen play out, for example, in Iran, where we've learned that Claude, a chatbot, a large language model, was effectively used to at least support an AI decision support system that basically allows militaries to draw up lists of where to target.
[00:05:00] We know that the same system was being used in the context of Venezuela. We know that this comes on the back of very little response outside of particular circles to Israel using target acquisition systems during the genocide in Gaza.
Going back to the early days of the genocide in Gaza, the fact that those were sort of testing grounds, by which the will of critical pushback was being articulated and failed to garner enough of a response, meant that we're now in an era in which, again, the sort of insistence on what these technologies can do and the normalization of who it is allowed to be used on, and for what purposes, are sort of meeting in this deeply cursed handshake that's playing out at large.
I think some of the narrative, and the ways in which Anthropic played this game of, on the one hand, entering into a contract, right, with the Department of War to effectively provide AI services in [00:06:00] warfare, whilst on the other hand also distancing themselves from the usage of their technology for, specifically, lack-of-oversight use in autonomous weapon systems, allows 'em to play to both crowds effectively.
Both the crowds that say, what are you doing? You should be on our side. You should be against these wars. And also the crowds that go, well, we'd like to see these systems used in the context of warfare effectively. And they can say, well, look, we just did that. We had our product used in the context of one of the, as you said, largest instances of casualties caused by the United States.
Anyway, this is my long ranty way of saying this is a particular moment, and it's rare in its form, and it only comes around every so often, and now we're witnessing it at a watershed moment in which the normalization of this form of conquest and the hyped imagination of technology meet. It's this thing that Elke Schwarz refers to as the technification of the kill chain, right?
It's like you take [00:07:00] aspects that are normally wound up with, like, profound moral questions and questions that are actually quite emotional, that are related to ethics, that are related to context and what you're observing as a human being. Things that a human being must parse and perhaps discuss with their immediate commander and those around them, devoid of the particular technical indicator that they have on their screen, but at every step of the way where a decision is normally made through these layers and layers of peer review.
This is now increasingly turned into these numerical indicators that just tell you, trust the system. It's telling you that it's a legitimate target because it meets a particular probabilistic threshold. And who are you, as someone who has been tasked with ensuring that, you know, we're complying with international humanitarian law, which is to say that we can say that there is a legitimate target, to say that the computer is wrong?
If the computer says it, surely there's a lower risk associated with me as an operator accepting that number and going ahead with it than there is for me to say no [00:08:00] and start a whole process that will potentially put me in a room with some superior and their superior. There is this element of how the insistence on the probabilities merges with a sort of diminishing risk appetite to dissent, which is a problem, but I also think it emerges against this bigger backdrop.
We go back to the early days of probabilities being a thing and being used in policy. I mean, I remember when RCTs first became a thing, you know, randomized controlled trials that in particular humanitarian programs were rolling out, which then turned into big data for development, which then turned into, you know, tech for good, which then turned into tech for development, and all of these arcs we've had of technology companies and governments coming together, effectively trying to posit their version of what efficiency looks like.
And every step of the way, through the last three iterations, you know, which is like the last three decades of different technological moments that have been wound up with somehow making policy work that affected people that live [00:09:00] in, quote unquote, liminal contexts more efficient, whether it's through warfare, through humanitarian policy interventions, or through securitization, every step of the way we see how that idea of efficiency is really wound up with a displacement of ethics, a displacement of moral duty, and a displacement completely of anything that's concerned with rights, and instead brings to the fore an absolution, an absolving of your role in that decision making, for those individuals, to the computer.
Why is it that we're so obsessed with fraud in these systems? It's often that these systems come about in the context of fraud detection. Like we see how, I mean, in refugee camps, we're witnessing the usage of iris scanners, of probabilistic indicators and systems that would determine whether someone was likely to commit fraud and therefore should be taken off of particular cash assistance. Like these systems emerged in a context of profound suspicion of [00:10:00] usually marginalized others.
And then you go ahead and use these same systems to make determinations about whether individuals are likely to be combatants years later, right? In this particular context, and they have this history of always being suspect of the other. If I had to reflect on what's gone wrong here, I'd say nothing.
What we're seeing is effectively a reinforcement of things that were maybe difficult to say out loud in the past in technical terms. But now we've come full circle, where the things that we previously had trouble articulating out loud and had to put through technical guises and sheaths, we can now both articulate in technical terms and also articulate and stand by in political terms.
It's no longer that you have to hide that the killing is done on racialized terms, or on the basis of the fact that somebody is moving in a particular way in a Middle Eastern country, and that you have to hide that you made this determination on the basis of bias and stereotypes. It's that you [00:11:00] can both admit that and also go ahead and then say something to the effect of, you're looking to destroy an entire civilization on the back of having used those tools.
Right.
Alix: Can you break down and paint a picture for us about how chatbots are actually being used in war? And I use the term chatbot on purpose to be slightly derogatory, but meaning more broadly, large language models and transformer architecture. Can you paint a picture of how large language models are being used in war right now?
Matt: Imagine right now that you want to understand, as an operator, where there are likely going to be combatants in a particular area, like, let's say, in the context of Iran, or you're looking to understand where there are likely going to be military facilities, or even infrastructure that may be covering for, acting as, some form of makeshift site that's hiding nuclear facilities.
Like, you could use a chatbot and you could ask it to draw [00:12:00] up a list of highly likely sites or individuals or areas, and it would give you a list. It would parse what you're asking it, and it would give you a list. What you wouldn't be prompted with is why it arrived at the particular sites, areas, or individuals that the chatbot came up with.
You'll never really be told exactly how it arrived at that particular list, or what databases it drew on, or what other applications were running in the background of the chatbot that it then drew on in order to serve you this very, very simplified, reductive response. And people, as we see within militaries, actually end up, sadly, making decisions on the back of what these systems often spit out.
And oftentimes the companies behind these tools actually boast about the number of targets that their systems are able to generate and show on any one screen, and use it [00:13:00] as a metric for the fact that their systems are working.
Alix: Venkatasubramanian said something to me a few years ago, which I really found shocking, which is that generative AI eventually would become the backend of itself.
That basically, rather than having database architecture that LLMs sit on top of, eventually the LLM would be the backend. And I, like, haven't been able to shake that, like, inversion of how any system trying to make meaning and make decisions is gonna, like, ingest LLMs as the architecture underpinning things, in a way that I find, I mean, in a military context, it brings particular problematic dynamics into play.
But like for any organization, for anyone trying to make meaning, the idea that you would whole hog integrate LLMs as your backbone of knowledge and intelligence just seems absolutely wild to me.
Matt: It's bonkers. I mean, on the one hand, you know, from all the science, large language models and AI models that are fed on training data that are synthetic become recursively [00:14:00] regressive over time, right?
Like we know that these things will spit up bad data. But we also know that when we first heard about Lavender, when we first heard about Gospel and Where's Daddy, we learned that many of these systems were built on proxy data about people's posts on social media, their calls, where they're held on databases, but also things like if you'd posted an image of yourself in an area in which there was a building that would be Hamas-affiliated in Gaza.
And for reference, Gaza is being run by Hamas. It's difficult not to be in an area that is Hamas-affiliated, not to be by a public building that is Hamas-affiliated. This is bad data to use as a proxy for whether someone is a combatant, and yet we see these systems crunching and ingesting exactly these forms of data to make sense of whether someone is a combatant.
And so I think at the end of the day, if you are using bad data, whether you're using bad data or whether you're using synthetic data that's spat [00:15:00] out by a large language model, to make a determination using a large language model about whether you should kill a particular group of people, you'll end up with the same outcome.
And I think that's by design. I don't think these systems were made to help people make good decisions about how to keep people safe. These systems were made to justify, normalize, and provide an alibi for committing the crimes and doing the killing and the bidding that certain actors were already intending on committing.
But the question to ask is this thing that we come back to, you know, from like 15, 12 years ago, which is like, what is the problem that you're solving for? And is the large language model the correct answer to that? If you're just trying to retrieve information from a large-ish database, but it's a military database and it's stored in-house, why aren't you using a small language model? Why must it be a large language model?
Like, why is it that you're insisting on this thing having ingested half of the internet in order to use it [00:16:00] within your particular context? That's where I start to not understand, especially when all the scientific review of this, in particular by folks like Abeba Birhane, shows us that the larger these models get, the more inaccurate and the more faulty and the more discriminatory and the more biased they become.
Alix: Yeah. How do you think the introduction of large language models in military decision making has affected the role that people are playing in military force and use of force?
Matt: I think there's probably a sense right about now, from people who aren't within militaries, that people are playing a smaller role in the decision-making process around who is being targeted and why, and that a lot of this information comes down to an automated system in which an operator has effectively just sat there okaying, for lack of a better expression.
But I think the reality of it is that there's probably, right about now as well, a bit of an existential question that's being asked within the context [00:17:00] of militaries, in which people are faced with this complication to the normal chain of command, the command to kill. You know, that there would be, under both international humanitarian law as well as domestic law, at least a codification of understanding who made exactly what decision, when, and based on what information specifically, so that accountability can be ascribed to the individuals in the chain of command when a particular decision to strike was made.
And I bet you that individuals who are sat on the other side of the system, having used it to generate targets, having been effectively pressured in some instances, and I'm not saying this is the case for most of the military, but I bet that some individuals were pressured into getting particular outcomes and a particular, let's say, magnitude of destruction out of, let's say, the war in Iran, that shortcuts were taken, and that they probably [00:18:00] see these systems as shortcuts and are wondering about the extent to which this is a palatable, efficient, materially sustainable form of warfare and defense and security moving forwards.
And I think, more now so than ever, it's also probably within their hands in some ways to push back against this, right? And they probably understand better than anyone that their obligations under international humanitarian law are increasingly made difficult to abide by as a result of how murky these technologies are making how we arrived at the particular target list.
What we have to understand is that large language models, chatbots, are a sheath, right? They are a sheath that makes determinations on the basis of other applications that are working in the background, that themselves have [00:19:00] ingested a bunch of data. Then there's the chatbot, which also has ingested a bunch of data from elsewhere.
Probabilities multiplied by probabilities, to the power of probabilities, divided by probabilities. Like, we're talking about so many different disparate nodes of information that often don't have very much to do with each other, other than outputting a reduced image and overview of where you ought to put your efforts and resourcing, and resourcing to kill.
As a military unit, it's effectively obfuscating every single layer of: where do we get this intelligence from? How do we decide this intelligence was credible? How do we decide that this credible information meant that these areas or individuals should be targeted? That layer is now squashed.
And so when we talk about this in the context of the kill chain, which is the chain, effectively, of decisions and processes that militaries have [00:20:00] to go through when they're deciding to strike either an individual or a site, as it were. When we talk about the kill chain, we talk about a compression of the kill chain that happens through the use of large language models, that happens through the use of these automated tools. The chatbots, effectively, rather than expand out and help us make better sense of how particular decisions were arrived at, effectively compress the entirety of the kill chain, thus making it difficult for both the people involved in it as well as the people outside of it to make sense of exactly what happened and why.
Alix: I think a lot about the transition from having a pilot and a co-pilot to having one person on, like, a TV screen, and, like, how even removing that, it's almost like pair programming in technology, like, technology has figured out how to reintroduce people into certain technical processes as a way to make it all happen better.
Because you have two humans that are engaging with each other. And it feels like over time, in the same way that in consumer technology, [00:21:00] people are increasingly individualized and isolated and, like, discouraged from directly engaging with other people, it feels like that's probably also happening in military contexts, where I presume there were, like, I mean, I know there must be so many conversations where it's like, should we blow up that building?
There's, like, a lot of people talking about it and, like, going back and forth and asking each other tough questions and asking for additional information, and, like, there's that moment when a group makes a decision, or, like, someone senior makes a human decision and says, I have heard all the deliberations, and I, in this position, in this military, am making this decision that we will bomb this building.
That that whole process is important. And then if you flatten it to a sort of isolated, individualized, atomic decision making, where it's a machine and a person in discourse, to the extent that it's possible for those two things to be talking to each other, that that will fundamentally change both the outcome, but also the, like, [00:22:00] evolution of warfare.
Matt: Yeah. Yeah. Yeah. I mean, wars. Wars are supposed to be costly, right? Wars are supposed to be costly in part because wars are supposed to discourage wars. I'm not, like, here to draw up a, a genealogy of the word war and how we got here, you know, through the millennia that we've been warring as a species.
But wars have become less costly, both in terms of the decision making that's involved, the human capacity involved in a decision-making process, as well as the literal upfront costs involved in even having those processes of deliberating and gathering intelligence and figuring out who is going to bring that intelligence about the particular sites in question.
The cheaper that becomes as a result of this flattening, the fewer barriers to entry there are on large-scale warfare, on militarization of spaces and actions that were perhaps, under [00:23:00] different circumstances just a few years ago, more for the realm of diplomacy rather than rapid-fire escalation. You would probably see that sort of confluence of both strongman, authoritarian politics that meets this decreased cost of warfare coming together in a way that allows for exactly this form of kneejerk, rapid military escalation.
I think that's, that's exactly what you see happening in these rooms where there is less friction involved with the decision making, where effectively you have to humanize the process of deliberating on who you're targeting and why, but you're doing it through a computer in order to then be able to look past the inhumanity involved with actually doing this. This automated process of target selection and killing is itself a sort of a weird perversion and a psychological game and a psychological form of violence that I think [00:24:00] plays out particularly well, and perhaps particularly productively, within the context of the military.
And then we have some history to draw on from here, right? Like the early conversations around how drone operators were effectively gamifying warfare, and how that distance, that critical distance from individuals, families, people that were being targeted in Iraq and Afghanistan with drones, just made the cost of killing that much lower and enabled more killing to occur on the back of it. It enabled us to accept that certain proxy deliberations, proxy data, could be used as enough of a probable cause in order to kill someone. You know, where signature strikes become a thing, where you're suddenly relying on movement data to make decisions about whether someone might be a terrorist, and you end up killing the bride and groom of a wedding.
Like it's this [00:25:00] kind of lowering of costs that will have a ripple effect that's far larger than what we immediately see at our hands.
Alix: I think that's very, very well said. And I think what's so interesting too is that we're in this moment where I think in 50 years we're gonna look back at this particular moment in time and say the world was taken down an extremely dark path because there were two narcissists who didn't wanna go to jail.
And it's so interesting how the military is being dehumanized and becoming this technical infrastructure that is passionless and scaled and cold and, like, wrong a lot of the time, but that doesn't matter. There's no feedback loops. It's becoming more mechanized. And then leadership is becoming more personalized, because the institutions around people in power are collapsing and accountability mechanisms are failing.
And you just got these, like, personalities that are just driving a scale of war that is just [00:26:00] unimaginable. And so there's this irony where, like, these personalities are driving out people in the military and then replacing them with this infrastructure and computers and, like, technification, to use a word that you use that I really like.
Um, I'm still not quite sure I know what it means, but I really like it. Um, uh, and I just, I don't know. I find that, I find that that juxtaposition really interesting.
Matt: Yeah, no, I think that juxtaposition is, is necessary in order for bureaucracy to reach the level of inhumanity where anything that's beyond the screen in the room is considered other and is considered disposable and is considered worth killing.
It's where the only humanity you face on a daily basis is the humanity that you're shown by this, like, high-personality, fantastical, bombastic leader that you have at the top, the head honcho, the Trumps, as well as, you know, your immediate periphery of colleagues, and potentially your AI that tells [00:27:00] you who to strike.
And it's this thing where, like, it makes me think of, of this wonderful poem slash song. It's a song, really a spoken word piece more so than anything, by Gil Scott-Heron. What's it called, uh, Whitey on the Moon. Do you remember this?
Alix: I do know that song. Yes. Yes.
Matt: And it just feels wonderfully apt, because we just have Artemis II and we've just had everyone turn their attention to the moon, but also, like, there's this wonderful way in which we can think about how, as people are sat with their AIs, making their lives as operators of warfare easier and more convenient and more efficient, there, there's the rest of us, right? Whose families are being bombed and who are experiencing exactly the consequences of this. Anyway, let's go back very, very briefly to this Whitey on the Moon. There are some phrases in here that are wonderful.
Alix: Yeah. Like, I, wait. No, but I, I, I...
Matt: You remember it.
Alix: I can't pay my doctor's bill.
Matt: Yes.
Alix: And whitey's on the moon.
Matt: Exactly. Exactly. My rat done bit my sister, now, with Whitey on the moon. Her face and arm began to swell, and [00:28:00] Whitey's on the moon. Was all that money I made last year for Whitey's on the moon?
How come I ain't got no money here? Hmm, Whitey's on the moon. You know, I just about had my fill of Whitey on the moon. I think I'll send these doctor bills airmail special to the Whitey on the moon. And I think the Whitey on the moon here is, like, yes, in part Artemis II, but also in part the folks that are sat there with their AI systems making decisions in far-removed places, with what they seem to think of as a pinnacle of humanity, which is a system that talks to them about the inhumanity that they ought to inhabit in order to commit these strikes.
Alix: Yeah, I think Whitey on the moon is also the B2B salesman bro on LinkedIn talking about how they're reorganizing their gym routines using a ChatGPT recurring agent that, like, populates their calendar. You know, like, there's something so disconnected, it feels like they're on the moon. 'Cause I'm like, what are you doing? You clearly don't connect with [00:29:00] me and, like, how I see the world.
Matt: Yeah, yeah. No, and genuinely I do think there is something here about the, you know, the search for AGI and the devolution of everything and the absolving of every facet of governance to AI as effectively our Whitey on the moon. Right? This is the race for us.
Alix: Yes. Okay. Chatbots are new. The military partnering with technology companies is not new. I know that you have a lot of knowledge and have done a lot of work on the sort of innovation trajectory of Israel in apartheid management. Do we call it management when someone has an apartheid system? I don't know.
That feels like too innocuous of a word to describe...
Matt: Administration. Administration, policies, administration. Yeah. Yeah.
Alix: The administration of apartheid has created all these opportunities for Israel to create a homegrown technology industry that has partnered with different non-Israeli companies and has [00:30:00] basically, like, turbocharged development of technologies that are useful in military contexts.
How would you help someone understand the evolution of partnerships between tech companies and nation states with militaries that go on excursions?
Matt: On excursions? That's a good way to put it.
Alix: I'm sorry.
Matt: Yeah. No, it's good. I, I mean, I think, I think the reason why Israel stands out in this, in a particularly gross way, is because if you look at the history of how Israel asserted itself following the Nakba and the settler colonial establishment of the state of Israel, you see a constant propping up of the Israeli security apparatus by means of interventions, initially from the Brits, who helped establish the much more analog surveillance infrastructure of the village files, which were files that were written about Palestinians:
what they owned, where they had family, where they had land, [00:31:00] and which then became replaced by much more technologically advanced systems, in part through the help of technology companies from the United States and Britain, and in part through homegrown technology companies within Israel. So coming into the eighties and nineties, you start seeing the permit regime system emerge.
You start seeing databases emerge of Palestinians and everything they own that are digitalized, that are being used to make determinations about whether someone might have a propensity to engage in activism or human rights work, or be a dissenter in some way, on the basis of, say, a relative that may well have been in the PLO, or something to that effect.
And you start seeing the Israeli security apparatus actually make decisions around the life of Palestinians on the back of drawing relationships between these data points. And it's in the early two thousands that you [00:32:00] see that exploited even further through the application of remote biometric surveillance systems on top of these databases, to give them not just a highly bureaucratic, atomized overview of who the Palestinians within the occupied Palestinian territories are, but also exactly where they are and how they're moving, right?
So in the early 2000s, you see the establishment of the Mabat 2000 system in East Jerusalem, in occupied East Jerusalem, which is a 24/7 connected CCTV camera system that gives Israeli police an overview of how people are moving around the Old City. In 2018, the system becomes equipped with facial recognition capabilities.
Shortly after that, you see facial recognition systems that we now know of as Blue Wolf and Red Wolf and White Wolf rolled out across the West Bank, where they effectively give soldiers the ability to look [00:33:00] up who a Palestinian is through the touch of an app, or immediately have it flagged on their system at a checkpoint who an individual is, and whether they ought to be detained, interrogated, tortured, taken away, et cetera.
These systems effectively help reinforce the fragmentation that the state of Israel instituted on Palestinians as they sought to both occupy more of the Palestinian territory, but also as they sought to de facto annex certain areas. So we've seen the actual annexation of Jerusalem, but we've also seen the de facto annexation of places like Hebron, where, because of the permanent presence of particular checkpoints and the AI augmentation of them using things like facial recognition, they effectively become spaces that are only for illegal Israeli settlers, and where Palestinians are cornered and cordoned off to very, very small areas, despite effectively [00:34:00] being a majority in numbers. This is a particular mode of apartheid management,
if we go back to that term, that's reinforced, right, by the mixture of remote biometric surveillance, uh, the hardware provided by Chinese companies, by American companies, by British companies, as well as by Unit 8200's own startups, uh, some drone developers that receive venture capital from Canadian and American individuals and who end up developing drone systems that can drop payloads off in Gaza, but that rely on the same kinds of remote biometric surveillance and AI architecture that has been tried and tested in the West Bank.
Where you literally see these startups show up at weapons expos in Europe saying that their technologies have been combat-proven, and then showing footage of how it's been used on Palestinians, in Hebron, in Gaza, in East Jerusalem. And this economy, this [00:35:00] startup Israel economy that's fueled by apartheid, as it were, becomes
so prolific and starts to gain traction on the back of exactly how it utilizes Palestinians to sell itself, that it begins to have a bit of a market, right? And it begins to have a bit of a market in particular within contexts that have long had a history and tradition of attempting to build what we can now understand as a homeland security state. The United States of America is one such example, right?
The homeland security state building that we see emerge both in the nineties, but in particular being reinforced during the war on terror following 9/11, in which you start seeing the startup arm, the venture capital arm of the CIA, um, and others investing into tech companies that seem to [00:36:00] promise being able to provide mechanisms for preemptive policing, for preemptive security, the ability to make databases and to make sense of them in data-rich ways that weren't previously possible.
And so those inclinations were always there. And what was interesting is that whilst you've seen some of these projects be developed and incubated over the last many decades in places like the United States, I mean, Palantir, which is now used in Israel, by Israel, against Palestinians in Gaza,
Palantir emerges in the context of the United States, and in particular through the venture capital arm of the CIA. These companies like Palantir, which weren't household names for a long time, were able to fly under the radar in part because states like the United States, there was a time where at least they would have a liberal veneer, and there would be an insistence on trying to codify the otherwise less excusable tendencies towards domination and [00:37:00] control into the technology itself.
It was written in code, ones and zeros, as opposed to being spoken out loud. Where Israel flips the game is that it speaks the quiet part out loud by saying, look, I have managed to successfully govern, through apartheid and through technology, smoothly and efficiently, populations that were a bit of a nuisance.
I'm now selling that outwards and I'm standing on my two feet by saying that you can do this, and if we can get away with it, why not our bigger brother, the United States? And you start seeing both the transfer of the same technologies to the United States for use on the southern border against undocumented communities, communities on the move in the United States.
You also increasingly see things like Project Maven and larger attempts at trying to do AI slash big-data-driven military work within the context of the United States that seem awfully familiar to the kinds of systems and [00:38:00] practices that we've seen emerge in the context of Palestine.
Alix: So it kind of feels like Israel is like a technology edgelord in our era of modern conflict, and that they're, like, basically doing things that are outrageous, but the outrage becomes normalized, and then the bar sinks lower and lower and lower with these standards, and then the expectation of accountability kind of evaporates, which then means you can go even further in more depraved ways.
I feel like it's mean to ask someone in this current moment, especially after, which, just to name, we're recording the day after Trump casually suggested that he was gonna annihilate an entire civilization, and you are Iranian. Um, and I, just, like, recognizing this might be badly timed and a cruel question to ask you.
Um, but taking Iran as one of several countries, and [00:39:00] several continents maybe even, that are gonna be affected by this trajectory. Do you wanna say something about where you see this headed? If, if the bottom falls out and continues to fall out in this way, do you maybe wanna share, like, a worst case, best case scenario in terms of what you're seeing with this trajectory?
Matt: I'll start with what I'm observing immediately in front of us, which is to say this is neither the best case nor the worst case. Arguably probably the worst case, but it could be worse. The masks are off, and I do think that that plays to our hands in some ways.
Let me back that up. Basically, we've witnessed the particular moment that we're in now emerge against the backdrop of a long period of a liberal incubation era, in which the Googles and the Microsofts of the world, uh, effectively posited themselves, and the Metas of the world, as being interested in human rights and being interested in humanitarian [00:40:00] law, and, you know, wanting to even develop tools for human rights activists and communities to be able to do the work that they do, but better.
And Google had, up until 2018, the 'do no evil' as their slogan, right? They got rid of it shortly after they dropped Project Maven. It was the kind of era from the early two thousands into, um, the late 2010s in which you saw this liberal incubation period in which, at least under the auspices of particular values and norms, tech companies appeared to play ball while still developing
military technologies, having conversations with the Department of Homeland Security for the security of the interior, providing policing tools, developing things like facial recognition algorithms that were used against Black Lives Matter protesters. Like, they were already engaged in things that were dodgy, but at least on paper, they were committing to particular norms and ideas.
On paper, they were committing to particular norms and ideas. What we saw in those [00:41:00] times was effectively that if, if you try to engage with the idea that these tech companies may in fact be engaged in, in evil and be interested in, and perhaps a little agnostic and nihilistic about where their money comes from, that this would be conspiratorial, that there would be this sense.
Especially in policy circles that you should engage with them more and we should talk with them more, and we should ask them to do more human rights due diligence, et cetera, and that they would work with us on getting there. But the fact of the matter is that now. We've seen how tech companies will happily strip out those already non-binding ethical principles of theirs that say that they won't develop their AI for weaponry.
They will strip it of that, even though, again, it's non-binding, and openly advertise that they're working with particular militaries on advancing particular war objectives that are [00:42:00] abhorrent. In being able to do that, and being able to advertise that and broadcast that loudly, I think the fault lines have been drawn. Like, we're understanding now exactly where these companies are, and those that were on the fence, and that may have called those of us on the particularly critical and, you know,
somewhat, some might say radical, end of things out for throwing the baby out with the bathwater, can now see that they are speaking the very ills that they're involved in. They're no longer trying to hide it. And I do think that behooves us. It gives us at least a narrative that we can go after, and it gives us an understanding that the instincts and the documentation work that we've done around their crimes and what they've been involved in in the past was not misplaced.
It was not a misunderstanding. It was not an error. It was always, it was a feature, and these features have now come to grow out in a monstrous, mutinous way right in front of us. I think that [00:43:00] helps at the level of activism. I think that helps at the level of resistance. What does it do in terms of the downward spiral towards escalation?
Obviously it also means that there are somewhat fewer stakes, as it were, in terms of trying to please a liberal audience when it comes to exactly how these companies talk about, for example, what they're developing, and also in terms of their rapid entering into agreements and contracts with different militaries in order to ensure their survival as the battle for AI and market dominance continues. And that is a problem, and what we're gonna end up seeing as a result of that,
I think sadly, is a lot of military escalation in service of trying and testing some of these so-called lower-cost approaches to committing warfare, to engaging in warfare. And that is the other side of that double-edged sword, right?
Alix: Yes. I don't feel great, um, about any of those trajectories, [00:44:00] either the best case manifesting or the worst case being the worst case.
I do feel like every day I feel like the depths of how far we can sink feels deeper, and it makes me, I mean, I feel quite torn about how to not cut yourself off from influence, but also not be complicit in either normalizing or being naive about the level of influence, when it's usually, it's kind of like the carbon footprint kind of thing with the fossil fuel companies.
And I don't wanna be, I don't wanna have people feel disempowered, because it's like, well, there's a small number of people that are making all the decisions and you have to be subjected to them. And the physicality of a lot of this is becoming clear to people, which means it's localized. And I think actually the abstraction of military force means it can feel disempowering, because there's nowhere to go, because protests are
cracked down on, because people feel like their physical presence in areas of resistance maybe isn't possible. But there are site fights and local efforts that join up [00:45:00] into pushing back against this bigger infrastructure. Yeah, I feel good about that. I feel good about people. What I don't feel good about is a small number of people that have basically chipped away at any controls and have gone mask off, and basically,
just, that their depravity can convert into such horror that it's, it's hard to not just wonder what they're gonna do every day. And I think the more time we spend trying to wait to see what they're gonna do, the less time we're spending connecting with people in community and, like, being like, what can we meaningfully do?
And yes, and maybe sometimes I should look away from the assholes.
Matt: Look away from the assholes, and look, look to the, to the radical folks in the streets that are doing all the things. I think, I think, I mean, part of what we're struggling with, perhaps you and I both, and, and I know a lot of our friends in this space, is how do we bridge the short-term, completely unsustainable depravity, which will not last.
But it is here in the short term. It will not last, but it's here. How do we bridge from here [00:46:00] to the medium term, where we effectively emerge out of this depravity and build whatever is there for us next, right? Like, the, the tech companies have been very good at selling their version of what that looks like, whether that's, you know, just efficient warfare,
smart cities, flying cars, you know, iris orbs that give you universal basic income. Like, they have an idea to sell you of what the future looks like after all this batshit craziness is over. And I don't know that we've necessarily been very good at formulating that, because of not being granted the opportunity.
'Cause every day we wake up, we check the news, we just pray, or, you know, hope that nothing bad happens, even when everything points to the contrary, when that energy could be spent towards thinking about how we bridge this short-term depraved moment with the next one that hopefully will have more spaces of opportunity.
Alix: I think that's exactly right, and I think it's actually a good note to end on, to share. I don't know [00:47:00] where I was in the world, but I was walking down a jetway and I was Signal messaging you while trying to make sure I had all the things leaving an airplane, and I was like, oh my god. Like, are you okay?
How are you doing? This is terrible. What is happening? And you were like, basically we started talking about how we were kind of emotionally handling everything that was going on and you said that you had been like wrapping your head around the idea of like knowing that accountability would come, we may not be alive to see it.
Um, but that, like, that, that it will come, and that we have to stop thinking from kind of a today-me, how can I feel connected to some meaningful accountability for all these atrocities, and just contribute to a system and to institutions and to process and to organizing that will, at some point that we don't have any control over when it will happen, um, manifest into accountability.
And I think that, like, I continue to, like, revisit that as an idea, and, and I find it very comforting. So
Matt: [00:48:00] Thank you for that. And thank you for this. This has been wonderful.
Alix: If you thought that was a lot, here is a warning for you. We have only begun to scratch the surface. And next week, for part two, we are gonna be talking with Jeff Stern, who's gonna discuss his book, The Warhead, which was released earlier this year. And it is essentially the story of how a calculator company reshaped modern warfare.
And it is a fascinating tale that I think gives us some context of how emerging technologies have been integrated into the battlefield, um, particularly for the US government, and how that has transformed the way that the US has engaged militarily globally. Um, and it is a fascinating story. Jeff is a fantastic storyteller, and this is his third book, I think, um, and very worth a read.
But we'll dig into that next week with him. So stay tuned. And I wanna thank our production team, Sarah Myles, Georgia Iacovou, Van Newman, Kushal Dev, Zoe Trout, and Marion Wellington, who helped put this series [00:49:00] together. So thank you for listening, and we will see you next week.
