Is Claude Out of the War Business? w/ Amos Toh


Show Notes

Anthropic’s Claude was used in the military operation to kidnap President Maduro earlier this year. Why? Unclear. Was this legal? Absolutely not.

More like this: AI In Gaza: Live from Mexico City

Surprise, surprise: the DoD feels that they should be able to use AI models however they want, as long as it’s lawful — but… was this lawful? They are now threatening to designate Anthropic as a supply chain risk. What does this all mean?

For this short, Alix was joined by Amos Toh, senior counsel at the Brennan Center for Justice, to help us understand why the US defence department and an AI company are arguing about how best to use AI models for dehumanising and unjust military purposes.

Further reading & resources:

Computer Says Maybe Shorts bring in experts to give their ten-minute take on recent news. If there’s ever a news story you think we should bring in expertise on for the show, please email pod@themaybe.org

Post Production by Sarah Myles | Pre Production by Georgia Iacovou

Hosts

Alix Dunn

Release Date

February 25, 2026

Episode Number

E106

Transcript

This is an autogenerated transcript and may contain errors.

Alix: Hey there. Welcome to Computer Says Maybe. This is a quick one, and it is in response to a question I found myself asking when I saw reference to Claude, Anthropic's primary chatbot product, if we wanna call it a chatbot. It was apparently used in the invasion of Venezuela, and I got curious. Anthropic oftentimes positions itself as the company above reproach vis-a-vis, you know, companies like OpenAI or xAI. So I was curious: what does it mean if Claude is being used during an invasion that is, I think, universally considered to be a war crime? And what might it mean for their business model when the Department of Defense is unhappy about them taking issue with its use?

So we dig into this with Amos Toh, who's at the Brennan Center, and has been working a lot on understanding the new types of financial relationships between AI companies and the Department of Defense. He digs into what actually happened and what questions we should be asking when thinking about this new frontier of warfare.

Amos: Anthropic had essentially signed on, along with, like, three other AI companies, I believe, to provide foundation models on an experimental basis to DOD. This was last year, and fast forward to today, we've learned that Anthropic is the only company whose model has been certified for classified environments, which means that it can be used in, like, military operations, like the invasion of Venezuela.

And according to reporting, what prompted Anthropic's concerns is that it was being used in the invasion of Venezuela, um,

Alix: which was illegal. Like I feel like, I don't know how, when are we allowed to use that adjective? I don't know.

Amos: Call it unconstitutional.

Alix: Yeah.

Amos: So, uh, which is in some ways worse than illegal. But, uh, so I think that we don't know what the [00:02:00] specific uses of Claude were in the Venezuela invasion, but this had prompted concerns, and I quote from the reporting, that Claude was being used to spy on Americans and was being used to assist weapons targeting without sufficient human oversight. This is directly from the reporting. DOD has come back and said, well, we should be able to use Claude however we want, as long as it's lawful.

This is how the dispute has been characterized. A lot of information we don't know, right? But let's take all of this at face value. And then the other, like, material bit is that DOD has now come out and threatened Anthropic, uh, with designating them as a supply chain risk, which actually could seriously hurt Anthropic's business because, um, any company that does business with Anthropic may potentially not be allowed to work with the military.

So, I mean, it's like several kind of really consequential revelations that have come out of the reporting. So [00:03:00] what I think no one is asking, and it's really important to ask, is whether the disputed uses of Claude are lawful in the first place, right? For two reasons. One is that there are specific restrictions that, you know, DOD is supposed to abide by when it spies on Americans, right?

You know, both kind of like the high-level constitutional restrictions, meaning the Fourth Amendment, but also there are, like, specific rules, however inadequate, right, that govern, you know, the collection of information about US citizens and permanent residents. So one kind of very basic rule is you're not supposed to, like, target anyone solely on the basis of, you know, them exercising their First Amendment rights, right?

And in this climate that we are in, and given that we've seen multiple reports of ICE, for example, targeting protestors and surveilling protests simply because, [00:04:00] you know, they were engaged in peaceful protest, it does raise the question of, like, what is DOD using Claude for?

Right, and if it is actually abiding by the restrictions on how US person information can be processed, collected, and analyzed. And then when it comes to, like, assisting weapons targeting, right. So again, I think the critical question here is not whether it's using a fully autonomous system, because the reporting suggests that it isn't; it's actually that the AI is facilitating or helping, but without sufficient human oversight.

And then the question is whether the use of AI, the use of Claude, actually violates the laws of war, which is something the DOD is bound by, because it said it's bound by it, right. So I think the main kind of, like, legal norms that might be implicated here are, first, the principle of distinction, whether you are able to distinguish combatants from civilians, and the principle of proportionality, whether, like, the [00:05:00] amount of potential civilian harm that you anticipate is commensurate to the objective of the military operation and the targeting.

And then there's also, like, a separate question of whether you can, um, suspend the operation of the weapons system if you discover, as it is operating, that it's actually going after a target that is not a legitimate military objective. So I think, like, there are these kinds of laws that are implicated, right, by the use of AI.

Again, because we don't know the specifics of how DOD used Claude, both in the invasion of Venezuela but also more generally how it is using it or planning to use it in other military operations, I think there are several legal questions we need to answer first before we get to the question of, like, well, you know, we should be allowed to use it as long as it is legally compliant.

The question is, is it legally compliant? And then I think, then, there is the question of the [00:06:00] supply chain risk, of designating Anthropic a supply chain risk. Like, I do think that obviously this is to exert pressure on Anthropic, right, and to compel them to cede. Yeah, I mean, one could say that it is really heavy pressure, significant pressure, for them to concede on usage restrictions around Claude.

And I think that this, um, kind of strategy that DOD is employing is something that has been set in motion for a long time. If you recall, in July of 2025, there was an executive order as part of, like, the Trump administration's AI action plan that basically said to agencies that if you want to do business with any government agency, you need to have AI that is neutral, unbiased, um, which is really kind of, like, doublespeak, right, for anything the administration potentially perceives to be unfavorable to its [00:07:00] viewpoints or its policies. And so that, in the broader climate and crackdown on, like, DEI and dissent, has really kind of set the ball rolling, and we are kind of seeing some of the endgame, right, of what has been put in place.

And so I think there is a very real risk here that Anthropic will fold and allow Claude to be used in the ways that DOD wants to use it, even if they are of dubious legality.

Alix: Pretty soon we're gonna have more of Amos as part of a militarization and AI series, where we dig a little bit more into the business models and implications for all of us when these technologies are being sold into government and transforming the way warfare is conducted. So be on the lookout for that.

And thank you for listening to this, and thanks to Georgia Iacovou and Sarah Myles for putting it together, and Amos for coming [00:08:00] on.

