E118

Computer Says Kill: A License for Unlimited War w/ Amos Toh


Show Notes

Military spending on AI is a triple black box: How is AI being used in the military? Who is winning these contracts, and what are they worth? And who's in control of calling the shots: the military, or the tech companies they license their technology from?

More like this: How a Calculator Company Reshaped Modern Warfare w/ Jeff Stern

Amos Toh will help us answer these questions in part three of Computer Says Kill. We will cover how military spending has changed over the last couple of decades: there has been a clear shift from the straightforward buying of jets to an over-reliance on licensed software. Amos also shares what hasn't changed: there is still no shortage of government spending on military technology.

Further reading & resources:

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

Computer Says Maybe is produced by Georgia Iacovou, Kushal Dev, Marion Wellington, Sarah Myles, Van Newman, and Zoe Trout

Hosts

Alix Dunn

Release Date

May 1, 2026

Episode Number

E118

Transcript

This is an autogenerated transcript and may contain errors.

[00:00:00] Hi, I'm Alix Dunn, and welcome to part three of Computer Says Kill, a series tracing the people, decisions, and systems that I think have recklessly ushered AI into the business of war. Last week I spoke to Jeff Stern about the history of what are called precision weapons: laser-guided missiles that were invented by Texas Instruments.

The same company that made my high school graphing calculator. So if you missed that episode, do go check it out, 'cause I think it gives us really good context for where we are now. And this week we're fast-forwarding to what it looks like when the US military buys emerging technology today.

The process and implications have come a long way from those early laser-guided missile days, but some of it is still quite similar. And to talk us through this will be Amos Toh, who recently wrote a report called The Business of Military AI, which we've linked to in the show notes. Amos is gonna help us understand what has changed in that kind of sticky world of military spending [00:01:00] and, crucially, what hasn't.

In his research, Amos defines the military's procurement of technology as kind of like a triple black box. One: it's hard to know the players. Who is gobbling up all these contracts, and for how much? Two: it's hard to know the game, because the military itself is already an extremely secretive entity.

So what are the shared motivations and ideologies between the tech companies selling these technologies and the US government today? And then third, of course, it's hard to understand the tech. So how is AI actually being used in these military contexts? So let's start there. Here's Amos kicking us off with the Department of Defense, Department of War, whatever we wanna call it, and what their definition of AI is.

Just to go by the DOD's definition, it's quite a broad definition, right? It's technology that performs tasks that would otherwise [00:02:00] require human intelligence, and so that does cover a lot of the technologies that predated foundation models, but it would also encompass foundation models. And I think it's important to understand that, because AI is ultimately a tool; it's not used in a standalone way.

It's frequently not used in a standalone way; it's actually integrated into other kinds of systems. A classic example of that would be drones and other autonomous weapons, right? Drones have existed since way before the latest developments in AI, but what we are seeing now is that, to increase the autonomy of drones, what DOD as well as other militaries are doing is using AI to enable them to communicate with each other, to navigate fairly autonomously, and potentially even to engage targets without [00:03:00] human intervention. So that is the AI component of drones and other autonomous systems, even if autonomous systems aren't one-for-one equivalent with AI. Really, at the end of the day, what we are concerned about is how AI is being used to facilitate the identification and striking of targets, people or infrastructure, because that is where you see the most proximity to civilian harm: in this realm of target selection, identification, and engagement.

How we see AI evolving is in two different ways. One is in analyzing data in ways that would not have been manually possible, or even possible with more basic technology. So really, here we are talking about scale: the ability to [00:04:00] combine multiple data sources, data sources that seem incongruous with each other in terms of format. So you think about pulling together social media feeds and satellite footage, right? Those data sources don't necessarily blend well together, but you do have companies, namely Palantir, integrating and organizing all of this data in a way that enables DOD to run analytical tools, including AI tools, to draw insights and identify patterns, with the goal of helping with target identification and selection.

So you have that, and that's where Maven sits. And then you have the use of AI to increase the autonomy of unmanned systems, including unmanned weapon systems, such as armed drones. Drones can be armed or unarmed; they can just be surveillance drones. But the point is that AI is being used to increase their autonomy.

And I think where this [00:05:00] is going: these are not two distinct categories of AI. The research really indicates that the end state, for some people in DOD as well as at some of these companies, is that the data integration and analysis converges with the autonomy in weapon systems, so you essentially have a kill chain that is automated as much as possible.

I think it's part of this larger push for AI adoption over caution of almost any kind. Actually, the striking thing is the January 9th, 2026 memo that Secretary Hegseth put out. One of the most concerning things about the memo, and trust me, there were many concerning things, is that he ordered that if any military operational program [00:06:00] does not use AI, it would be flagged for a budget adjustment, which is code for: you might get your budget cut if you're not integrating AI into your program.

But the thing about AI-first warfare, even from a national security perspective: I've talked to a couple of ex-military personnel who aren't necessarily on the same side as me or you on these issues, but I do think they make a really good point, which is that in this kind of hype about AI, there is a tendency to forget that warfare is ultimately a human endeavor.

And at the end of the day, you can send as many machines and automated weapons as you want, but the whole point of war is to get at, in military lingo, human assets. Right? And so at the end of the day, human operators [00:07:00] on both sides are going to bear the brunt of the ways in which these automated slash AI-based tools are being used and discharged, potentially without enough testing, scrutiny, or oversight, or even worse, potentially in ways that AI is just not suited for.

So ultimately those are gonna have devastating human costs, and I think that's really often forgotten in this back and forth between industry and DOD now about AI and all of its benefits. It also feels connected to the unseriousness with which this current wave of political leadership seems to take highly consequential steps, and this vibe of domination, the worst version of masculinity, being exercised by this administration.

And one of those things is that you never admit that you've done anything wrong, in this way that is just really gross and obvious. When you watch someone in the [00:08:00] public eye for long enough, you're like, it's fine, just say that you made a mistake. But it's like the worst thing for them. And it feels like AI applications in warfare require not just a recognition of mistakes, but a laser focus on finding them. I mean, the whole AI field is predicated on the idea that AI will make mistakes and that you have to actually feed it better information and do reinforcement learning.

And this lack of interest in evaluating the efficacy of these technologies, improving them over time, and understanding the standards that are set, like as an adult endeavor, not just a human endeavor. It's really shocking how little interest there is in that, because it says something about the overall trajectory of the quality of the technologies that they will deploy, because they don't have the maturity to see that this is a learning journey also.

Yeah, and I think both the current DOD as well as past DOD leadership have [00:09:00] been operating under the assumption that a lot of this learning can be outsourced. And that's kind of what they have done, on multiple levels. They're not only outsourcing the development of these systems and the training of these systems, but also the governance of these systems, right?

So think about Palantir's technology, for example. There are very serious privacy and civil liberties concerns with the use of the technology, and there is the question of how the technology is implicated in civilian harm, like whether there are inaccuracies in targeting and analysis that are linked to the technology.

Then there is also a related risk that I don't think people are talking about enough, which is how dependence on Palantir could actually compromise national security. And what I mean by that is Palantir is essentially a closed-source system. Once [00:10:00] DOD's data goes into that system, it becomes very hard for DOD to extract the information.

So think about if they want to cancel a Palantir contract: if they determine, for example, that there are failures linked to that system that they just cannot abide, or they find a cheaper system or a more effective or efficient data integration infrastructure, or they want to actually develop their own. Palantir's systems actually make it very hard to do that, because of how closed source it is, and once data goes in, it's very hard to get it out. So I don't think people really understand the risk that is associated with that. If Palantir's systems were one among many that the DOD is using, that's one thing. But what we are seeing here, as reflected in the procurement records, is that Palantir is the lead contractor for the Maven system.

So if your main AI [00:11:00] program relies on a single proprietary system, there are going to be very high switching costs that will prevent you from switching vendors, even if you decide that it is time to do that, or even if national security reasons dictate that you should. Yeah, that's super interesting.

I find the outsourcing of learning really interesting too, and one question I wanted to ask you: AI is obviously a different beast than other technologies that have been procured by the military historically, but in some ways it's the same. I could imagine that with the stealth bomber, or certain other technologies, the people procuring it maybe understood operationally what it was meant to do, but they didn't necessarily understand the technology; they weren't like, oh, I'm a physicist who can assess the aerodynamics and the radar. I'm sure there was a feeling that the people building it, the people they were outsourcing to, had this specialist knowledge, which was the very reason they were asking them for help. But how has that relationship between core [00:12:00] military personnel and outsourced technology vendors changed? Or how would you describe the change that you have seen in your report vis-a-vis these AI companies?

So what I'm concerned about is that it is not changing. Oh, interesting. Okay. Yeah. Under the guise of change, of disruption. So this is the real challenge, I think, with AI and DOD and the military. So you have these defense primes who are still dominating defense spending by any measure, right?

We actually looked at the top five defense primes. Prime means prime contractor. Yeah, legacy defense contractors, the biggest of which is Lockheed Martin, right, and then RTX, Northrop Grumman, General Dynamics, and Boeing. Those are the big five defense contractors.

So Lockheed Martin, which is, you know, the largest defense [00:13:00] contractor: in fiscal year 2025, our calculations showed that it had amassed 76 billion in revenue. RTX, which is formerly Raytheon, was awarded 35 billion. In comparison, Palantir and Anduril were awarded 903 million and 912 million respectively.

So, you know, there is still this kind of huge gulf. But when you actually look at how fast Palantir and Anduril have been growing their revenue over the last decade: essentially, Anduril is the fastest growing major defense contractor by any measure that we used, and Palantir is somewhere near the top, in the top five, when you calculate median annual growth between 2016 and 2025. And by another measure, it's actually even higher.

So one of the narratives, which actually is powerful because it's true, is that defense contractors have exercised this kind of monopoly [00:14:00] on the defense industrial base in ways that are actually very harmful to military readiness.
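
As a side note for readers, the "median annual growth" measure Amos mentions is simple to make concrete. The sketch below is illustrative only: the revenue series are invented placeholders, not the report's data or any company's actual figures, and this is just one plausible way such a measure could be computed.

```python
def median_annual_growth(revenues):
    """Median of the year-over-year growth rates of a revenue series.

    revenues: list of annual revenue figures in chronological order.
    Returns a fraction, e.g. 0.5 means 50% median annual growth.
    """
    # Year-over-year growth rate for each consecutive pair of years.
    rates = [(b - a) / a for a, b in zip(revenues, revenues[1:])]
    rates.sort()
    n = len(rates)
    mid = n // 2
    # Standard median: middle element, or mean of the two middle elements.
    return rates[mid] if n % 2 else (rates[mid - 1] + rates[mid]) / 2

# Hypothetical series in $M: a fast-growing startup vs. a mature prime.
startup = [10, 25, 60, 130, 300]
prime = [50_000, 52_000, 54_000, 56_500, 59_000]

print(median_annual_growth(startup))
print(median_annual_growth(prime))
```

For the made-up series above, the startup's median year-over-year rate is well over 100 percent while the prime's is in the low single digits, which is the shape of the gap the conversation describes.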

And, you know, there are all these huge cost overruns and delays, and there is this kind of lock-in to very bespoke defense systems that are no longer right for modern warfare, right? And that's why you see the defense budget ballooning to a trillion dollars, right? So what Palantir and Anduril and, yeah, investors and other military tech startups are essentially arguing is: well, we are here to disrupt that. We are here because this is an inefficient way of contracting; the DOD is so beholden to these legacy contractors, and it really is time for DOD to modernize the way it fights wars.

So all of that sounds really good, but the question is: [00:15:00] what is the end game, particularly for companies that have already grown to the size that Palantir and Anduril have? Anduril has openly stated that it wants to be the next major defense prime. Palantir hasn't come out and said that specifically, but it has kind of choreographed that it does want to be a significant, major defense player.

Right? So the ambition here is maybe not to disrupt the defense industrial base, but to create a different kind of dependence: a software-based dependence on a handful of tech firms that are able to provide the data, the compute, the manpower, the resources that are needed to operate these AI systems at scale. And so when we look at why it matters that their revenue is growing [00:16:00] so quickly, it's because that dependence grows with this growth in revenue. And that comes with a lot of risks that we've talked about in the report. It kind of seemed like, over time, there were agreements reached between these five big prime contractors and the US government to keep prices nominally comparable between them, and to try to create some type of structured practice so that they weren't just artificially inflating revenue because they knew they could, because they have leverage in some way; basically preventing these companies from using that leverage.

Do you wanna talk a little bit about that? I think TINA was the name of the first legislation, and then in the nineties there was the one that started with an F that Clinton put in place. Do you wanna describe some of the changes over time in how procurement has been regulated with Department of Defense spending?

So TINA is the Truth in Negotiations Act. It was passed after a series of [00:17:00] scandals involving primarily defense contractors that didn't fully disclose their costs, or rather had inflated prices and overcharged the government. And so, you know, one of the key requirements of TINA has been that when you are in a low- or no-competition environment, meaning there is only one single contractor providing that very bespoke system, for example, then you are required to submit cost and pricing data, and also to certify that the cost and pricing data is accurate and true.

So that was the standard, right? But the standard has essentially become the exception. There aren't actually a lot of guardrails in the contracting process when you have these sole-source situations, where one or at most two contractors can provide the service or the product you want, because of all the various exceptions that [00:18:00] have been carved into the law.

And so I think what's concerning about that is that we are in a world where software is very pricey, and we don't really know why it's priced the way it is. A lot of this software is software as a service. So when the government is spending, you know, millions or hundreds of millions of dollars annually, and billions cumulatively, they're not buying the technology outright a lot of the time.

The Maven Smart System: if you look at the contracts, they're for licenses, not for the system itself. Licenses expire. When they expire, you have to buy more licenses. When you buy more licenses, will the price go up? Will it stay the same? What are the different costs that go into that if Palantir wants a price adjustment?

All of this information and all of this back and forth is why the [00:19:00] government, in theory, has this very powerful tool to discipline costs and make sure that taxpayer dollars are actually accounted for. But I think because of this gradual erosion of TINA's requirements on certified data, what ends up happening is that the government and DOD contracting offices basically have very little insight into how the contractor, be it Palantir or Anduril or some other tech company, is pricing their services and their software.

Yeah, I think that's a really good point about licenses. When you buy a stealth bomber, to keep that example alive, you get the plane. They can't be like, mm, you can't access that plane this fiscal year until you renew your license with us. Which I feel like is a really important differentiator, and I imagine should encourage more controls and scrutiny around pricing, [00:20:00] but it feels like even less is being applied.

Is that right? So it's funny that you raise the stealth bomber example, because actually, again, we are seeing a similarity in strategy, right? What happened, for example, with Lockheed Martin was that, yes, they sold the jets, and those jets belong to the US government, but then they also maintained control over the maintenance.

So you had proprietary firmware, and there are probably ways they got in there. Yeah. Well, even spare parts, right? You become the sole provider of spare parts, so you have no choice but to use that; you have no choice but to use Lockheed Martin's maintenance facilities. But this strategy is being repeated, just in the software context, right?

You are licensing technology, you are licensing cloud services, and in order to preserve continuous operation, you really do need to keep renewing [00:21:00] those licenses and paying for those licenses. So in some ways, you're beholden to whatever the company or the contractor sets on price.

Even in the software context, our analysis shows that up to 75 billion has been committed to AI or AI-related services over the last couple of years. And how we did this: we essentially calculated the contract ceilings of a couple of significant contracts where we know AI is being used, even if it's not the only technology being used.
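
To make that methodology concrete for readers: summing contract ceilings can be sketched as below. All vendor names, program labels, and dollar figures here are invented for illustration; this is not the report's dataset. Note also that a ceiling is the maximum the government may spend under a contract, not the amount actually obligated, so totals built this way are upper bounds.

```python
from collections import defaultdict

# Invented records: (contractor, program, ceiling in $M).
contracts = [
    ("VendorA", "data integration", 480.0),
    ("VendorB", "autonomy software", 250.0),
    ("VendorA", "cloud hosting", 1200.0),
    ("VendorC", "computer vision", 75.5),
]

def total_ceiling(records):
    """Return (overall ceiling total, per-contractor ceiling totals)."""
    per_vendor = defaultdict(float)
    for vendor, _program, ceiling in records:
        per_vendor[vendor] += ceiling
    return sum(per_vendor.values()), dict(per_vendor)

overall, by_vendor = total_ceiling(contracts)
print(overall)    # 2005.5 for the made-up records above
print(by_vendor)
```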

I think the other thing that's important to note is that the software is not nearly as pricey as the infrastructure, right? So DOD has constructed a lot of its own data centers, presumably to its security requirements, and that's been outsourced to companies like AWS, to Amazon and all of that. Again, not fully owned outright by DOD, but again, licensed. And so the cloud computing [00:22:00] contracts, for both the military as well as for classified environments, which are hosted by the CIA, are essentially a huge proportion of the AI costs, because it's the infrastructure that is required to sustain this large-scale AI adoption.

In the report you talk about a triple black box, and it feels like part of the challenge here is corporate accountability, and also the way AI functions, and also that these relationships are quite opaque. Do you wanna describe what you mean when you say triple black box? You know, actually, Ashley Deeks, who is a national security scholar in the US, has a book out called The Double Black Box, right? National security is its own kind of black box: intelligence agencies and the military operate largely in secret, and they often actually hide things from lawmakers and Congress. Then AI, introduced into this black box, creates [00:23:00] a double black box, where sometimes the decision making of an AI system is opaque even to its developers, never mind its operators, never mind DOD leadership as well as overseers.

And I think what we have found is that private involvement is a third kind of black box, one that adds to the challenges of transparency and accountability for various reasons. The first reason is that contracting is very opaque for some systems, and I know this because we've crawled through procurement records for hours.

Days. Months. So much of the difficulty of this work is that you find a contract announcement, which is a press release, and you try to find the procurement record, and I can tell you, five times out of 10 it's missing. There is no procurement [00:24:00] record. And in some cases, we actually found that the procurement record disappeared because the DOD decided to take it down. And so sometimes we don't even know the actual amounts that are being spent on certain kinds of technology. Sometimes we don't even know the identity of the contractor, because a lot of companies go through intermediaries when they sell goods and services and software to DOD, and sometimes they form groups

of companies that are opaque as to their membership, and contracts are awarded to these groups, known as consortia. It makes it very hard to follow the money, which, you know, is one of our best proxies into what the military is doing or not doing. And then I think the second thing is that trade secrecy really complicates things here, because even when government agencies want to disclose information about the software they're using, they might be [00:25:00] prevented from doing so by company claims that it is a trade secret and should not be out there.

Finally, sometimes even government customers themselves don't have insight into the very systems that they are buying. We saw that quite clearly, and quite concerningly, with an Army memo that was reported on last September, about a battlefield communication system developed by Palantir and Anduril and other unnamed contractors. The Army CTO essentially

raised the alarm that the system posed cybersecurity risks, because they didn't have clear insight into which personnel had access to what permissions within the system, effectively rendering something as basic as the permissions infrastructure of a battlefield communication system [00:26:00] opaque to the Army itself. And you just wonder, right, how many systems actually have these kinds of opacities baked in that are posing huge security risks?

Okay. Well, let's turn to the palace intrigue of all of this, because I feel like there's a cast of characters around the Trump opera who come up again and again in your report. Do you wanna talk a little bit about the behind-the-scenes money men on this stuff? In terms of thinking about influence,

I think there are hard measures, right? And one of the hard measures that we use is lobbying spend, and the big legacy defense contractors like Lockheed Martin still outspend Palantir and Anduril by two to one or even more than that. But Palantir and Anduril's lobbying spending is ticking up.

Their lobbying presence is becoming more [00:27:00] pervasive on the Hill. But I also think we have to think about influence not just in traditional campaign finance terms, but in terms of the hold that they have on public and media narratives. So one of the things that is important to note is that they have actually permeated the highest levels of executive branch leadership, and not just government leadership, and not just when you think about the revolving door; they've just had this huge, significant presence in both the White House as well as in Congress. Eric Schmidt chaired the National Security Commission on Artificial Intelligence, and more than a hundred recommendations of the commission were written into law by Congress on a bipartisan basis.

You know, you have Marc Andreessen [00:28:00] advising the White House on candidates for defense and intelligence posts. You have David Sacks, who was appointed the White House's special advisor for AI and crypto. Then on the military side, you have four tech industry leaders, from Palantir, Meta, OpenAI, and Thinking Machines Lab, which is the organization that spun out of OpenAI, who have been commissioned to serve in a specially created detachment in the Army Reserve. And, you know, the Army has said that, well, they won't be allowed to have any involvement in contracting with the companies in question, but it does raise the question of whether the input of these leaders, established within the Army command structure, could lead to greater uptake of their respective [00:29:00] companies' technologies.

All of this is to say that, you know, there has been a very successful push to establish aggressive AI adoption for warfighting as both inevitable and necessary. I think, certainly, money and their ability to pour resources into putting themselves in spaces and in media environments where they can amplify the narrative is a factor.

But I also think that the story they're telling is very convincing. And so the question, I think, that leaves for civil society and for people who are counseling more caution is: how do we develop a response to the [00:30:00] sense that this is really urgent, given the rise of China in particular, and given this framing of what is going on when it comes to AI as an arms race? The question that I am grappling with to this day is how do you frame an appropriate, equally compelling response to that, one that actually accounts for the full set of risks and harms that haven't been addressed and need to be addressed if we actually are going to use AI ethically, responsibly, and lawfully in this space.

Right? Do there need to be red lines? Yes. And so I think the question is, if that's where we need to go, it's really important to craft a strategy and a narrative that can [00:31:00] bring the level of urgency to those considerations that the industry has brought to adoption.

I mean, I think that's a great call. One thing I've been thinking a lot about is the force it takes to essentially turn government functions into market makers and corporate protectors. It feels increasingly like the strategy of the Department of Defense is, in some ways, to buttress a couple of companies that we have decided are gonna be the future of the American economy, and that if we don't help them weather this period, where it's not actually clear what their product is and it's not actually clear what the path towards profitability is... I feel like we've essentially subsumed a lot of our military goals and priorities under that umbrella of a couple of guys who have friends that are also guys that work in the White House.

There's an energy to it that is almost religious. It's obviously also white supremacist. But I've been thinking [00:32:00] a lot about how, when those people look at China, they use that lens of religion, a clash of civilizations or whatever reductionist way of understanding things, a kind of hero narrative that's motivating.

Whereas I feel like China is looking at this through a lens of industrial policy: what of this technology might be applicable to help grow the economy of this country? And that imbalance feels so important. The Department of Defense is one example where this fight is playing out, but I've just been thinking a lot about what we do when state capture is basically part of the survival and growth ambitions of a small number of oligarchs in the US. We don't have the mature government response that I think a lot of us think we do, or should.

Um, like, where do we go from here? It's not about AI; it's about just a fundamental failure of a functioning government. Yeah, it's something I [00:33:00] have been thinking about a lot as well, and I go back and forth between the government essentially subjugating itself to a handful of tech companies, versus the government playing a really active role in both shaping technology development in a specific direction, on a market-making scale, and retrenching regulation and oversight of these technologies. Right, because I think when you break it down, it's not that we have people in government, leaders in government, that are just buying the line of tech companies hook, line, and sinker.

Right. I kind of like to think about it more in terms of what some of the organizing principles are that are pushing us in this particular direction. Like, I do think [00:34:00] that there are certain assumptions about power that are baked into the way we conduct foreign policy. There is kind of this assumption that we need to win, right, an arms race.

Right. I think there are far more qualified people who can talk about the arms race framing, but if that is the driving assumption, if that kind of nationalism is what is driving the adoption of AI, then guardrails, risk mitigations, red lines, those are always going to take a backseat to this race for, essentially, technological supremacy.

And so, you know, the entire machinery of government, whether in conjunction with industry or not, is [00:35:00] being pushed in that direction. And if that is the posture, then it makes it very difficult to truly get comprehensive regulation of the very technologies that are deemed to be the prize, right, of winning a race. So that's kind of where I think about some of the constraints on specific legal and policy frameworks for thinking about these issues. Because I do think that there are geopolitical dynamics, which I will be the first to concede I am not an expert on, that I see shaping this dynamic.

And I will say that there are people on both sides, in government and industry, that, um, are very invested in that [00:36:00] dynamic, um, of dominance. So if you have that, you know, I think that gives one pause about how much regulation there is going to be and how meaningful the regulation will be.

I think it's just helpful to continue to try and pull at this thread of, like, what is the relationship, because it feels like that's the path through which we find a strategy for replacing it or undermining it. And I think the nauseousness of working with war has basically meant that the worst people end up in these kinds of situations with government. And I understand how we get there, but I also would love to find a way to not have the worst people with the worst technology just intensifying what already exists, you know, from the War on Terror forward, from September 11th forward, this intensification of technology use that feels more and more disconnected from any type of legal framework, and more and more [00:37:00] disconnected from any sense of justice.

To what extent are you seeing the advances in technology use in warfare globally being applied domestically?

I'm thinking about Palantir's deployment in New Orleans for predictive policing, and then, like, Clearview being applied in almost, like, a really messed-up consumer project, or, like, Meta glasses and facial recognition now being used potentially in battle. But, like, how do you see the cross-pollination between domestic law enforcement and military applications of these technologies?

So I think the dynamic is complicated. When it comes to Palantir, they were military-first. It's only in recent years that they have spread out to other branches of government, and also, actually, commercial. They also sell, like, the data integration technology to, you know, companies that want to have a better grip on their supply chain and stuff like that.

The spread from military to local law [00:38:00] enforcement, the question it raises is, like, what data sources are being blended together, and whether there are any kind of boundaries between the various data sources. Because, from a US perspective, there is kind of this distinction between US persons and non-US persons, and there are stronger protections for US persons than there are for non-US persons. And, you know, that is problematic from a human rights point of view, one might say. But at least domestically, right, you are supposed to abide by different sets of legal protections. And what happens when you run AI across a bunch of data sources that might mix both kinds of data, and also generate insights that are themselves revealing about the private lives of people, of migrants and citizens? I think that raises all kinds of privacy and [00:39:00] civil liberties concerns, right?

I think the issue is sometimes less to do with whether it goes from military to law enforcement and vice versa, but the trajectory of scale. Yes. These technologies are not becoming smaller in scope. Yeah. You don't see anybody coming up with a surveillance technology that uses less data, that contains less analytics. Right. Everything is: we can do all of this data, we can do even more. And I think that is the trajectory that the military has played a big role in.

With Maven, in, uh, 2018 or 2017, when it first came out, there was a huge uproar over Google's involvement, but Maven was also significant for other reasons, right? Maven essentially was trying to answer the question the military had, which was: we have all of this [00:40:00] data that we cannot possibly analyze, not just manually, but with the existing technology that we have.

So how do we develop technology that can analyze all of this data at scale, at high levels of speed, and reduce the time it takes to pick out things of interest from that data? And you see that dynamic being repeated and amplified in law enforcement uses of this technology, in ICE's use of this technology.

Like, they're not talking about, oh, we have a very specific set of people with criminal histories that we are going after. They're essentially saying: we have these huge amounts of data, help us identify, from all of this data, people for deportation and, uh, detention. And so you start seeing really the ethos of scale being [00:41:00] reiterated and amplified across, you know, the military-law enforcement boundary.

I also really appreciate you bringing in the migrant angle. 'Cause I think the migration industrial complex is one that also has just continued to ratchet up, and it has used these physical spaces where rights are less available for people, and has basically intensified and trained a lot of these technologies in those border zones as a way of, yeah, like, building technologies that are now being deployed in Minneapolis and other places.

But, like, that to me is one of the most lawfare-y parts of this: post-9/11, it was like a wild west and nobody really did anything in those zones, and now they're just being taken advantage of. It's kind of like Cory Doctorow was saying: if a company has the ability to do something, they'll probably do it if they're incentivized to do it. It's the same thing with law enforcement, or with the military, or with immigration enforcement. If they have the latitude, they will [00:42:00] take it, and they will then take it an inch further or a foot further. And I think we just did a really bad job over a 25-year period of managing that.

Yeah, I think, you know, my main message here, and my main challenge to policymakers, particularly in Congress, is that the Pentagon is pouring billions into risky and unproven applications of AI, while weakening safeguards, while gutting oversight, and handing over unprecedented control to a handful of very powerful tech companies.

Congress, essentially, as the legislative body, is supposed to be a check and balance, right, on these excesses, on this overreach. And I think the question for Congress is whether they are willing to really step up, right, and start meaningfully, um, regulating [00:43:00] the use and the acquisition of these technologies before it spirals out of control.

Uh, the report's so good. I think it's one of those things that will be impactful because it's a piece of work that stands apart. I feel like most of the people that are calling attention to these issues are painted with a brush of activism, advocacy, unrealistic expectations, pacifism, an expectation of a state to lay down all arms.

And I just think that there are a few people who do these big pieces of work, and do it seriously and rigorously, who then start putting bricks in the wall of being able to make a really strong argument that you're kind of forced to take seriously. Yeah. In a way that I think I know is hard, but I just really appreciate it. And I also think, like, you know, it's a both-and kind of approach.

Right. And I've been thinking a lot about this, like, our role in civil society. Like, I definitely understand, you know, a lot of the Ban Killer Robots advocacy approach and posture. You know, I [00:44:00] was at Human Rights Watch, and, you know, Human Rights Watch led the campaign on Stop Killer Robots, right? But I think it's also important to take them seriously and say, okay, if this is what you're concerned with, how does this particular approach to adopting AI actually conflict with what you claim your concerns to be?

National security, you know, can be a very problematic concept, but let's assume, right, for a moment, let's put that aside, let's assume for a moment that national security is something we actually need to take seriously.

And then the question becomes: is the way that the military is adopting AI mitigating and addressing national security risks, or is it creating new ones? And I think this is where, for example, Heidi Khlaaf's work and AI Now's work really fits in, right? Because they're not just saying that, oh, you know, you shouldn't use this technology; they're saying that if you use it in this way, and you become dependent on companies in this way, this is going to create the national [00:45:00] security concerns that you yourself, the government, have said are concerns that you want to avoid and mitigate. That, I think, is where there is a whole range of work that needs to be done to really show why a lot of this is risky, even for policymakers and military leaders that actually do have real, valid arguments about the potential and promise of AI.

I mean, I think that's great advice generally for this era. 'Cause I think, just taking another example, if you look at DOGE and USAID, continuing to reiterate the number of deaths that were caused, uh, by the, um, speedy destruction of USAID.

The argument is: do you want one dude, who's not a government employee, to be able to go in and shut down a government department? Like, that's not the way that you want government to function, right? Don't you want there to be a [00:46:00] mature process where Congress decides how to spend money on stuff, and somebody can't just unilaterally reverse that?

And I think it's not to say that that is the best argument in that context, but I just think there's a way of trying to make our arguments more legible to people that are holding positions of power, or see themselves as, like, the grownups in the room. And I think sometimes we do a bad job of finding language, and finding strategic focus, and finding evidence that is useful in those kinds of conversations.

And I feel like this is a report that is one of those that really can be that, which is really cool. Thank you. Thank you. I hope, uh, I hope people in Congress read it.

Some great final words from Amos there, and also just a fantastic read, the research that they put together; do check it out, it's linked in the show notes. And I think in conversations like this one, we can often get lost understanding the motivation. So why is the government so desperate to pump military weapons full of AI, no matter what the cost?

And to help us answer that question, next [00:47:00] week will be Liz Siegel. But I can tell you right now that the short answer is China. The "we must beat China" narrative is doing a lot of heavy lifting, and it is working. Liz will explain how it's justifying a lot of the opaque procurement practices that Amos just took us through, and how it is ultimately used as an excuse to never develop a coherent and safe AI strategy for military usage.

So thank you to our production team: Sarah Myles, Georgia Iacovou, Van Newman, Kushal Dev, Zoe Trout, and Marion Wellington. We'll see you next week with our next edition of Computer Says Kill. Thanks for joining.

