Computer Says Kill: The Toxic Love Triangle of Big Tech, Big War & Big Science w/ David Gray Widder
Show Notes
Academia, Big Tech, and the military are caught in a sordid love triangle — and their love language is money.
More like this: Computer Says Kill: The Blank Check to Beat China w/ Lis Siegel
For part five of Computer Says Kill, researcher David Widder describes the powerful trifecta of academia, Big Tech, and the US military: all of them need each other to survive, but who is benefiting the most? Half of Carnegie Mellon’s research funding comes from the DoW or the DHS — and David explains how it’s being used both to prop up the war apparatus and to serve as an on-ramp to Big Tech platforms.
Further reading & resources:
- It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them? — David Widder et al, June 2023
- Basic Research, Lethal Effects: Military AI Research Funding as Enlistment — David Widder et al
- Open (For Business): Big Tech, Concentrated Power, and the Political Economy of Open AI — David Widder, Sarah West, Meredith Whittaker, August 2023
- What Tech Calls Thinking by Adrian Daub
- The Undone Computer Science Conference
- Computer-vision research powers surveillance technology — Nature, June 2025
- What’s happening in Memphis with Anthropic? — The Maybe Media
**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**
Computer Says Maybe is produced by Georgia Iacovou, Kushal Dev, Marion Wellington, Sarah Myles, Van Newman, and Zoe Trout
Transcript
Alix: [00:00:00] Hey there. I'm Alix Dunn, and this is episode five of Computer Says Kill, a series that explores the people, politics, and systems that have ushered AI into the business of war. In this episode, I'm joined by David Gray Widder. He's an assistant professor in the School of Information at the University of Texas at Austin.
And if you are paying attention at all, you know that the fundamental role of universities and higher education in the US is in kind of an existential spiral. There are huge cuts in funding, there's rising censorship, and there's just a litany of attacks from an authoritarian Republican party. So in this episode, we are gonna dive into the role of universities, less so on the front lines of culture wars, and instead looking at them as engines of the research that powers the AI war machine.
So according to David, we are all affected by a love triangle. This love triangle is made up of big tech, the military, and [00:01:00] academia. Their love child is AI, and their love language is money. David is a computer scientist who studies this love triangle and how it impacts basically everything. Computer scientists don't normally study this kind of thing, so let's hear more about how David got into it.
David: Hello, my name is David Gray Widder. I am an assistant professor in the School of Information at the University of Texas at Austin.
Alix: Do you wanna talk a little bit about how you ended up in computer science academia?
David: So as an American who grew up overseas, right as the US invaded Afghanistan and then Iraq, I was one of the few Americans in an international school in Singapore.
I was always tinkering with computers. I almost didn't go to college because it was too expensive, but I ended up being able to afford it, and I worked as tech support, and then I [00:02:00] got invited to be a research assistant for a professor there. And I was like, "I don't know what research is, but sure. It pays a dollar an hour more than I was making."
It turns out I liked it, and I did a research summer at Carnegie Mellon as an undergrad, and that's sort of how I got involved. I knew I didn't want to go to industry 'cause the Google recruiter who deigned to come to the lowly state school I was at was rude as fuck.
I thought, "Why not just, uh, try a PhD and see what happens?" And moving from University of Oregon to Carnegie Mellon was, was wild. It was already wild to move from Singapore to Oregon, but from University of Oregon to Carnegie Mellon was almost as otherworldly. From a lowly ranked computer science school at Oregon Where people were smart, hardworking, and kind, but you know, there to get a job to, like, Carnegie Mellon, where, like, Google would pay h- tons of money to recruit, and everyone was gonna work at Facebook, Amazon, Google, whatever, Microsoft.[00:03:00]
And also where University of Oregon, there was... I mean, it's a very lefty school. There's a ton of protests, big, long history of political activism, and I was like, "Oh, it seems like everyone has it under control. That's good," to Carnegie Mellon, where like, oh my God, all this crazy shit is happening, and there's so little protest.
There's so little critical thought. And that's, I think, kind of what made me wanna start thinking more critically is all this military stuff going on, all this surveillance stuff going on, and seemingly so few pockets of critical resistance. And I eventually found one. Early on, there was the Coalition Against Predictive Policing protesting a predictive policing algorithm that Carnegie Mellon professors built and then deployed in Black communities in Pittsburgh, and that's sort of how I started getting involved in more critical tech spaces and then later helping lead a few.
Alix: Amazing. Okay. So fast-forward a couple of years. You started doing research on... It's kind of meta. You're, like, in academia, you're doing research, and then you're looking around you and saying, "How is it that all these [00:04:00] people don't see what I'm seeing in terms of how Big Tech is sort of changing what's happening in so many different ways?"
And so you conceptualize this love triangle and then start doing research on it. Do you wanna describe what this love triangle is before we dig into its implications?
David: Absolutely. And yeah, that pivot was interesting. Not normal to do that in computer science. I actually started by studying, and this is a little bit more human-computer interaction flavored, how people building AI understand the ethical impact of what they are building, and that's what my dissertation was in.
But after that, I moved more to thinking about, as you mentioned, this love triangle. And there are a few different ways you can narrate the history of computer science or AI, relatively young fields compared to, like, history. But it's very hard to tell a history of those fields that doesn't involve those three actors: big tech, the military, and academia.
And that is the love triangle that brings us the AI we have today. It brings us the technology we have today, and their love language is money. And so I study the [00:05:00] flows of people, money, and ideas between these three major institutions that bring us our technological present, and how those flows express the power dynamics between those three actors.
Alix: I think money is a really interesting lens with which to look at the three different sectors. Do you wanna take us through who's bringing the money? Like, how does the money flow between these three sectors?
David: I mean, there's a few ways, but the basic answer is grants.
As an academic, particularly in light of decreasing state support for universities, your job is to raise money. Particularly in computer science, your job is to raise money. You know, instead of seeing your job as writing papers or teaching, it's to raise money to support your students, and the way you're getting that money is through grants, increasingly, or almost all the way now, majority from big tech or the military, sometimes the NSF, but we've seen cuts there.
The way that love language is expressed is often through transfers of [00:06:00] money for research from big tech or from the military to academics. To the extent that, like, to get tenure as a computer scientist at most schools, at most top schools at least, you have to have a track record of bringing in bajillions of dollars.
And speaking of tenure, the way you sort of demonstrate the worth of your work is so entwined with both of those incentives too. Like, to demonstrate impact is to show that you have real-world impact, and the nature of the impact is less important than the fact that you can show that big company X or Y uses your work to do something with.
And so it's important to sort of realize how money draws academics together with the incentives or with the goals of those other two institutions, namely big tech and the military. I think the best example I can speak from is Carnegie Mellon, which, as I like to always point out, is named after a billionaire union-crushing philanthrocapitalist and his banker.
Let's not forget that there's history here. In fact, I know that during World War I, they actually used the [00:07:00] quad at Carnegie Mellon as a soldier training camp, and they had tanks on the quad and everything. So this has intensified lately, but it has a long history. So in 2022, which is the latest year they'll actually give us numbers for, 'cause once we started critiquing, they changed how they reported these numbers:
Carnegie Mellon had a $466 million research budget, and roughly half of that came from the US military or the Department of Homeland Security. A top school, with half of its funding from the military or Homeland Security. That's the military at 43%, or $199 million, and Homeland Security at 7%, or $33 million, which already should raise questions like, are we a university or are we the research arm of the military?
That's alongside $70 million, or 15%, coming from the National Science Foundation, which you might have thought is, like, the primary funder of science in--
Alix: The country. I totally did.
David: Yeah. At least at Carnegie Mellon, it's not. And certainly [00:08:00] also federally, about half of federal research dollars come from the military.
And so it's not actually the NSF that traditionally funds most of it, but that does include everything from, like, basic research to applied research to, like, pilot technology development. And I also think, if we actually look at those numbers and where that money at Carnegie Mellon is going, there's, like, a few different subplots there. That includes funding what are called federally funded research and development centers, so-called semi-autonomous units, including the National Robotics Engineering Center, which does robotics engineering for the US military, and the Software Engineering Institute, which does software and hacking and things like that for the US military and intelligence services.
And for research that goes to those autonomous, separate labs, they're allowed to do what's called restricted research, which is prohibited at most universities. Restricted research is research that you're not allowed to tell anyone about. But there's a carve-out in Carnegie Mellon's [00:09:00] bylaws, as there is at many universities, that lets you do restricted research at autonomous units.
And if we think of the role of a university as, like, creating and disseminating knowledge, why are we doing any of that research at all? Why are we doing research that you can't talk about to the public and you can only talk about to the people paying for it, in this case, the military? That seems like a complete bastardization of the noble goals of an institution of higher learning and research, right?
And so, like, getting into these, like, weird financial presentations from Carnegie Mellon really sort of hit home to me how this money changes the mission of academics to something that is much less favorable, much less virtuous.
Alix: So before we get into the way that big tech comes in here, 'cause that's sort of DoD plus academia so far, can you define basic research versus applied research?
I know that you, or other people, think a lot about those distinctions as an important part of this. And I imagine you're gonna use those phrases again. Do you wanna [00:10:00] describe, or do you wanna define, what basic research and applied research are?
David: I can define it as the Department of Defense, or, I guess, the Department of War, does, but I think it's most instructive to just give an example, right?
At Carnegie Mellon, the National Robotics Engineering Center builds what's called Crusher, a big robot that can climb over and crush obstacles and go find whatever. An autonomous robot that looks a lot like a tank, and they'll say it's not a tank 'cause it has a camera rather than a gun turret.
You give it to the military, they take the camera off, they put something else on. Anyway, so, you know, it was weird for me to learn that Carnegie Mellon was doing this kind of, what I see as weapons development; they see it as search and rescue, not search and destroy. And so when I ask my colleagues at Carnegie Mellon, "Why are we doing this research?"
And also, "Why are, maybe why are you accepting military money?" They'll say, "I'm doing basic research. I'm accepting military money to understand how complex software systems work, or the basic laws of physics or, or [00:11:00] things like this." And so there's this narrative, and a lot of the ways that academics will justify accepting military money at all is because it is so-called basic research.
It's research that's supposed to go to fundamental science rather than weapons development. And that was actually really hard for me to argue with for a while because, you know, it's sort of a Robin Hood story. You're stealing from the rich to do virtuous things with. It still didn't make me feel good, but that's why I started thinking more about it, and that's why I worked on a paper with Sireesh Gururaja and Lucy Suchman examining what basic research funding from the military goes towards.
Alix: There's probably an infinite number of research questions you could ask that qualify as basic research, but it's a political question which basic research questions get asked.
David: Yeah, exactly. And that is, surprise, surprise, constrained by what will get funded, and who's doing the funding, and who has the money.
Alix: And who has the money [00:12:00] is the military. It's all a circle. Or a triangle. I'm curious, I mean, is there stuff from the paper that you wrote about the patterns in how the military funds basic research, like, what your takeaways are from that?
David: Yeah. What we were trying to interrogate further is, like, the refrain, "Oh, I'm just accepting basic research funding, so there's no ethical issue here.
I'm actually using the military's money for good science." And so what we did is, our colleagues scraped a bunch of grant solicitations for basic research from the Department of War, and we were actually able to look at 7,000 of them for AI, to ask, "Well, they say it's basic research.
What are they actually trying to have academics do with this money?" And we sort of came away with three big themes showing how basic research nonetheless enlists academics in these warfighting goals, even if it is still basic research. The first one is: sure, you can do general-purpose [00:13:00] pure science, but only if it's in these particular areas that are related to long-term national security needs.
So sure, it is a little bit, you know, disconnected from an application, but the things they're targeting still have a long-term goal in mind, articulated in the grant solicitation. The second one, which is a little bit more pernicious: yeah, it's basic research, but the grant solicitation, which is the document that announces to academics that they should apply for grant funding, uses language that engenders an imaginary of war.
So instead of saying, you know, "the user will," they're saying "the soldier will" or "the warfighter will" do X, Y, Z. And they're sort of enlisting academics into this warfighting imaginary, and honestly nudging them towards certain kinds of examples so that they'll have successful grant applications.
So they're sort of putting an imaginary out there that [00:14:00] academics are then receptive to when they write. They're naturally gonna adopt that language 'cause they want the money. And then the third way, and this is kind of most concerning to me, is they'll have a pot of money that is basic research, and then it'll serve as an on-ramp to a much bigger pot of money for applied research.
And so they'll have a basic research program, and then you do well there, and then, wouldn't you like, like, 10 times more money to do applied research? And, you know, if your NSF grants got cut and you have students to fund, it's hard to say no to that. And especially if you've already been sort of enlisted in that imaginary in the basic research stage, it's easy to see that as a natural progression into that bigger pot of money, applied research.
Alix: So it's like grooming them.
David: You said that, not me. We used the word enlistment. But yeah, I think, you know, socialization is a hell of a drug, right? If the coin of the realm is literally [00:15:00] coin, money, and the thing that gets you tenure is money, and you get some money, and then you get to talk to all these military folks who remind you how important all your work is, and your colleagues pat you on the back for getting a big grant, and then there's more money to be had, it's hard to retain a critical posture in light of that socialization, in light of that enlistment, in light of that, like, grooming.
What we're trying to get at in the paper is how money is used as a way to socialize academics into warfighting goals.
Alix: So we've covered the money piece, especially between the military and academia. I wanna talk about the profit piece, 'cause I feel like academia is not a very profitable enterprise to pursue.
You may get grant money and it covers salaries, but somebody's making a lot of money here, and it's not the Department of Defense, and it's not academics. So do you wanna talk about big tech as kind of the third point of this triangle, and how, sort of [00:16:00] once they get involved, like what, what is in it for them?
How does that relationship work?
David: The joke, as I've said a few times, is there's two ways to make money on the internet: porn or advertising. But there's actually a secret and much more lucrative third way, which is federal contracts, 'cause they renew every year. They're a bajillion dollars, and there's not a lot of oversight.
And you kinda see, right around when money gets expensive, when interest rates go up, a rush to try and figure out how the heck we're gonna make money from all this investment. We're seeing that right now, and a lot of the answer that these big AI companies have come to is to be evil, to reverse Google's slogan: to find ways to get federal contracts.
I think it's kind of scary, and sad, to see Anthropic being held up as a paragon of ethical practice for being very excited about having military contracts, just not letting them be used for weapons targeting or domestic surveillance. So somehow, like, "Yes, please use this for [00:17:00] military stuff, but not for targeting or surveillance" is now the most we can ask for.
It often is, just to be clear: Google, Microsoft, Amazon, and a lot of other companies will directly give grants to academics to do research that's in line with their goals. They'll also give internships to students. They'll also give part-time appointments, um, like, if you work 10% time but at, like, 10X the salary at Google, you're making a lot more money than you are as a normal professor somewhere.
So that's quite common, actually, in computer science, to have a part-time appointment at a company. But it's not always that. In research with colleagues at Carnegie Mellon, we did a sort of retrospective of the field of natural language processing, NLP, the field that gave us ChatGPT and all these large language models.
And we interviewed people who had been in the field a long time, and we also did some bibliometrics, so we computationally analyzed citation counts and the contents of papers. And NLP, before all this money entered [00:18:00] it, before it blew up, was a small field, and you could keep track of who was in the field and what they were doing, you know, in a small conference.
But then, because of this money, it blew up, and the response was benchmarks. The way you kept track of what was going on in the field was which research lab is doing well at which benchmark. And as a result, the research that people did was the research that advanced the benchmark. And then companies like Meta built PyTorch, and companies like Google built TensorFlow, and suddenly, instead of building your own software stack, you were having to start from their framework.
And so I think the point here is not just that Google or Meta was giving money, but they were also building large infrastructures of software that incentivized certain kinds of research that they had a profit incentive to do. And thus, they kind of used that infrastructure to co-opt the entire field, not even just those people that were getting their grant money, but [00:19:00] to do well at those benchmarks, you definitely had to start from what everyone else was using, which at that time was TensorFlow or PyTorch, right?
So it's not just money, it's the infrastructure that these big tech companies provide that also helps them get their tentacles into academia. Why are they-
Alix: It also then gives them a free training program, 'cause then when they wanna hire you, you've been trained on their stack.
David: Exactly. It's a free training program, but it's also an on-ramp to the paid services, at least in Google's case.
Like, they would love it if you did your research in Google's cloud, Google Cloud Platform, and guess what works really well in Google Cloud Platform? TensorFlow. So you've already sort of been forced to compete on the benchmarks using Google's TensorFlow, and now, to compete on the benchmarks, you have to use all of the compute capacity in the entire world or something like that, so you're gonna run it on a cloud platform, and if you've already done it in TensorFlow, you might as well use Google Cloud Platform.
It's also a way to get public money, a way to get academic money flowing back to big tech. So the money goes [00:20:00] both ways, and in both directions it's to big tech's benefit.
Alix: You mentioned that not every academic is thinking about the political economy of these systems; like, follow-the-money is not a part of a lot of computer science thinking and practice, and probably not of the curricula at these schools.
Given what's happening now, though, where, like, every day is a new horrifying application of emerging technology in contexts where it doesn't belong, you've got this kind of unhinged Department of War, you've got, like, illegal invasions happening, and AI is being invoked in, like, every headline. Do you feel like that's changing?
Like, do you notice more academics reflecting on these dynamics that they're participating in?
David: The through line of my PhD, where I interviewed about 111, 115 technical people about how they understand the impact of what they're building, is that [00:21:00] they see their job as, at most, building whatever they're doing in a really ethical way, but they don't see their job as being concerned with how it's used, with how it travels down the supply chain from the basic, pure research context to the bomb, to the downstream explosion or whatever.
But I do think that, for some, all the, you know, war and genocide that's going on using AI has triggered these questions. But sometimes the response is to remove oneself from the discussion, to leave the US, which I think is a reasonable response, especially for folks who are on, you know, contingent visas here.
But I think there is some more critical questioning going on in computer science as a field. I worry, actually, about what I often refer to as computer scientists liking to see themselves as, like, special snowflakes. Computer science likes to reinvent other ideas in its own terms, or respond to certain concerns [00:22:00] in its own way, which is often counterproductive.
And that's actually why I think we see a lot of folks being excited about AI safety now: it offers a computer science-y way, a techno-solutionist way, of thinking about harms in the world.
Alix: Have you read What Tech Calls Thinking?
David: Oh, yeah. Um-
Alix: I can see it on your shelf. Yeah.
David: Yeah.
Alix: Um, I feel like that's a real...
It's like, and I imagine this happens within academia, it definitely happens in big tech, where there's this idea that you have this principled philosophical underpinning and understanding, and everything is from first principles, and you're the smartest person in the room, and that basically any problem is new if it's the first time a computer scientist has taken a look at it. I think that's a really interesting paradigm that leads us into all kinds of messed-up situations when tech is this dominant in the world's imagination and also is this resourced.
Um, it's kind of terrifying.
David: Yeah. Exactly. I mean, like, in the union campaign that I [00:23:00] was involved with at Carnegie Mellon, it was always hardest to organize computer science grad students, because for so long we've been treated the best of all academic fields. We've watched, or maybe not even noticed, that anthropology has disappeared or that humanities funding has gone, because we've been fine.
That sort of line of thought is what makes it harder to see ourselves as part of a broader struggle, rather than facing a new, de novo problem that we've never seen before, that we have the tools to fix within our field. And so the push is always to engage first as a worker, as an academic worker, as a citizen who can vote or protest or put your body in between folks that are harmed by carceral and technological systems, before seeing yourself as a special snowflake 'cause you have a PhD in computer science.
But I actually also think that there are ways that technical skills are really important in [00:24:00] movements for AI resistance or for techno-fascism resistance, and I think the first one is obvious given what we've been talking about, which is, like, if Google or the military want certain research to happen, because they need it to happen for their profit incentive or to fight a war, they're gonna do that anyway.
Why would we accept getting paid 10 times less to do the same research that they would do anyway? Surely, instead of the Venn diagram looking like this, an overlap, we should, like, make use of our unique privilege as academics to do the research that wouldn't get done otherwise. And there's been movements for that.
There's the Undone Computer Science Conference. There's a few other sort of similar things that actually try and argue that we should not have an overlap between what we're doing as academics in academic institutions and what the military or Google wants. Google is a stand-in for all big tech here. So the first one is, effectively, refusal.
There's just refusing to do the research that big tech or the military wants. Then there's ways to remain engaged, and I've [00:25:00] been really excited following, like, the No Tech For Genocide movement. Well, I was involved with the No Tech For ICE movement during my PhD, and then I've been in various states of involvement with the No Tech For Genocide movement.
And what I think is exciting about that is there are often tech workers as part of that movement, in coalition with folks affected by it, together. So they're remaining engaged with the use of what they're building, and protesting it. And then the third one is: we can reconstruct. We can show in a tangible way how this so-called basic, neutral technology, or neutral basic research, is used in practice, and this is where we get to that really cool Nature paper that everyone's talking about, even though it came out, like, two years ago, showing that computer vision is used for surveillance.
Surveillance patents disproportionately cite computer vision papers, and when computer vision papers show up in patents, the patents are mostly for surveillance. [00:26:00] And I think the kind of technical skill that's required to make that link is exactly where we do have something unique to add.
We can do the infrastructural software power mapping that shows how a particular upstream research experiment, or research infrastructure, or product is then used downstream in the world. And so I think that is where we can put forward our technical expertise. I guess another way we can do that is recognizing that a lot of AI is used for harm, and in light of that, we ought to make technical ways of destroying it.
And that's why I've been really excited by things like the Nightshade data poisoning examples: like, computer science researchers going, "We want to make it easier to sabotage, to destroy harmful uses of AI, and so we're going to use our technical skills to make that possible." So: refuse, remain engaged, reconstruct, and destroy.
Those are the [00:27:00] ways that I see technically skilled or computer science folks having a way they can engage through that skill set in this movement.
Alix: Love that. It also reminds me of, um, one of the things that most excited me about Lina Khan, which was the idea of disgorgement: that basically, if you were found to have violated some law, or found liable for something the FTC had taken action against you for, it wasn't just that you were fined, it was that you would be required to dispose of the assets that were downstream from your illegal data collection or the way that you trained systems, that you had to actually destroy the assets that were created, which I, like, always thought is a very powerful way of combating this. Yeah, go ahead.
David: Completely. And the reason I like that so much is because it fights against this pattern we see over and over from big tech, which is deciding by fiat and sheer force of capital that [00:28:00] some kind of behavior is gonna be legal. We saw this with Uber. Uber was illegal, and they rolled out Uber anyway.
Uber is still illegal in Colombia, apparently, I just learned this, and it still operates there because they can afford to do that. And by the time it had already engorged itself into our lives, by the time it had already become a practice in our lives, it was sort of seen as beyond the pale to undo that.
And we see this also with AI. Like, there are all these questions about, you know, "Oh man, they've scooped up all the copyrighted data on the internet," and if I use Microsoft's AI to do something and I generate, "on accident," copyrighted data, can the copyright owner sue me? Microsoft said, "Well, guess what, guys?
Here's a billion-dollar commitment that if you ever get sued for using our AI system in a way that violates copyright, we will defend you with a billion dollars' worth of fancy lawyers." And suddenly, everyone was very happy to use Microsoft. Like, this is the way that they were able to [00:29:00] force that to become the de facto law.
And I think now, unfortunately, even the more critical, like, legal scholars in tech are going, "Well, it's kind of a lost cause." Like, yeah, I personally believe that it is illegal to steal people's data to train a for-profit system. The US Copyright Office said that it was a violation of copyright if it's gonna compete in the same market as the original service providers, like, for example, providing graphic design services through AI where there otherwise would've been graphic designers.
And somehow we've just given up on that. And so I, I think disgorgement is a really good example of a legal remedy that tries to fight against that sort of de facto legal normalization that tech can get away with because they're so wealthy.
Alix: Do you have any overall observations of the love triangle between big tech, academia, and the military, and, like, what you wanna see in the next few years?
Or does it feel like it's so locked in that this can't be incremental in terms of change, but it has to [00:30:00] be transformational? Like, what would you wanna see if this all played out the way you wanted it to?
David: Well, I think the sort of thing that gave me hope recently was I went to ICLR, uh, the International Conference on Learning Representations.
It's one of the biggest machine learning conferences. The reason I was there is 'cause I was giving an invited keynote at an AI for Peace, or an AI and War, workshop. And our workshop was packed. At a mainline machine learning conference, the room was full of folks who showed up to learn about, or at least with some curiosity about, how to think about how their work, or computer vision work generally, is used in war.
And I talked about these things. I talked about basic research not being all that basic. I talked about how big tech tries to use its software infrastructure to get into academic ways of doing research. And the reception was really critical in a positive [00:31:00] way. Folks were interested.
And you asked earlier about these times. I think in these times, folks are looking for ways to begin to have this conversation. And one thing that I'm excited about doing is finding more and more ways for that to happen in a community, in a way that leads that in a power-building, collective direction.
Not as a, like, "We're computer scientists, and we're here to save the world," but as, we are workers concerned about the output of our work, and we are here to have that conversation in coalition with other kinds of workers. And so that was hopeful, and we're gonna continue to have that kind of workshop, and that is a large part of what I see my work as an academic doing.
I said this in response to a question at the ICLR conference: computer people, that's a technical word, computer people, like most people, feel scared about the times we're in. I'd say most of my [00:32:00] colleagues are somewhere between concerned and aghast at the genocide in Palestine, or the unprovoked war on Iran, the bombing in Lebanon, or, you know, any other sorts of things, or Palantir even.
Some of the most precarious people are the ones I see taking the most courageous action, and some of the least precarious people, like tenured professors, are the most cowardly people I've ever met as a class. It's true, right? Like, what is tenure for, guys? So I guess I think a lot of the project here is, like, getting folks that aren't that precarious in the whole grand scheme of things, like even assistant professors in a neo-fascist state like Texas, or workers at Meta, to realize that in the whole scheme of things, they're probably gonna be just fine, and they can take a little bit more confidence to act on the ethical concerns that they have.
And sort of putting it in the grand scheme of [00:33:00] things, that is what makes me optimistic in a weird way, because I'm like, "You know what? I'm gonna fight for what I think is right here. I'm gonna do what I think is right, because I'm probably gonna be fine," you know? So I guess it is sort of recognizing the privilege that I have.
Alix: Excellent and appreciated. And I think that there's kind of a perceived blood-brain barrier between higher education and ivory towers and the experiences of everyday people, um, that isn't real. And I really appreciate the way that you highlight that people in academia are people, and centering their humanity and their experiences, and seeing yourself as a worker first, I think is a really powerful way of reframing some of the elitism that I think has rightfully pissed a lot of people off and probably contributed to some of everything that's happening.
Next week we have Maddie Batt, who authored an amicus brief in the federal case between [00:34:00] Anthropic and the US government. She's gonna walk us through what we know and what the legal implications might be for tech's role in the Department of War. Speaking of Anthropic, there are two things that happened last week in my hometown of Memphis that I'm still kind of processing.
I'll share more over the next few days, but I wanted to just name them. The first is that Anthropic has announced that it is buying up all of the compute capacity of Colossus-1. If you're not familiar, and I'd be surprised if you weren't if you listen to this show: Elon Musk built one of the dirtiest data centers in the United States, and it is actively poisoning kids in Memphis.
He basically built the thing in, like, 18 days with no regulatory controls, used quote-unquote "temporary" power in the form of these giant generators, and now that's just kind of permanently how they're being run, 'cause there's not a hookup to the grid. So the good guys, Anthropic, are now partnering with Musk, using this dirty data center to [00:35:00] power their products.
So if you use Claude, you probably contributed to a respiratory illness of a child somewhere. Maybe we'll think twice now when we think about Anthropic being nominally better than the other ones, 'cause turns out they're not. So that's the first thing. The second is that Tennessee Republicans in the House of Representatives could not move fast enough after the Supreme Court struck down a core part of the Voting Rights Act, which had protected Black representation in federal districts in the South, because during the Jim Crow era, basically even though there were cities like Memphis that were majority Black, you would see all white representatives because districts would be gerrymandered to all hell to ensure that only white people could get elected into positions of power.
That was forcibly reversed by the Voting Rights Act, which has now essentially been overturned, so the Tennessee House of Representatives raced as fast as it could to gerrymander Memphis out of any Black representation, effectively making all of those constituents who are negatively affected [00:36:00] by the emissions from Colossus-1 disenfranchised.
There is no representative now in Tennessee who is going to fight to protect those kids, because white Republicans have decided that they now have legal permission to take us back 50 years, to when, if you were a Black politician, it was basically impossible to get elected in a state like Tennessee. And those two things are related. So when you hear people talk positively about Anthropic, and when you hear people try and disconnect the technology innovation happening in the AI industry from racism, from rising authoritarianism in the US, from fascism, do not let them; bring them back to these two things happening on the same day, because they are connected.
I am going to share more thoughts on this. I'm actually really enraged, if you can't tell already, but more soon on that. And if we do make something this week about it, we will drop a link to it in the show notes so you can check it out on YouTube or wherever else we put it. Thank you for listening, and thank you to the podcast [00:37:00] team, Van Newman, Kushal Dev, Zoe Trout, Georgia Iacovou, Sarah Myles, and Marion Wellington, for helping put this series and all episodes together, and we will see you next week.
