
Show Notes
The US is in a race to ‘beat China’ at AI. Or is it? What if I told you that powerful actors in the US have built the story of an all-or-nothing race to get what they want?
More like this: Computer Says Kill: A License for Unlimited War w/ Amos Toh
In part four of Computer Says Kill, we are joined by Lis Siegel, who shares the history. We start with a document produced by China in 2017 and arrive at today, when the Chinese bogeyman is being used to drive money, political influence, and supply chain control to a few US tech giants. Listen in for some insight into how we got here.
Further reading & resources:
- Examining AI Safety as a Global Public Good: Implications, Challenges, and Research Priorities — Lis Siegel et al, March 2025
- Silicon Valley enabled brutal mass detention and surveillance in China, internal documents show — AP News, September 2025
- A Dark-Money Campaign Is Paying Influencers to Frame Chinese AI as a Threat — Taylor Lorenz, Wired, May 2026
- Slogan Politics by Jinghan Zeng
- Breakneck: China’s Quest to Engineer the Future by Dan Wang
- Final Report from the National Security Commission on Artificial Intelligence — 2021
- Yellow Techno-Peril: The ‘Clash of Civilizations’ and anti-Chinese racial rhetoric in the US–China AI arms race — Kerry McInerney, 2024
- Bernie Sanders urges international cooperation to halt AI’s ‘runaway train’ — The Guardian, April 2026
**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**
Computer Says Maybe is produced by Georgia Iacovou, Kushal Dev, Marion Wellington, Sarah Myles, Van Newman, and Zoe Trout
Transcript
Alix: [00:00:00] Hey there, I'm Alix Dunn, and this is episode four of Computer Says Kill, a series that explores the people, politics, and systems that have ushered AI into the business of war. Today, I'm joined by Lis Siegel, a researcher who studies the origins of the US-China AI race. You might think that the no-holds-barred rivalry to achieve AI dominance was always thus.
But it turns out that the story is a little bit more complicated than that. These narrative frames, that whoever, quote-unquote, "wins" AI will dominate the future, are key to understanding because they drive a lot of the mania and, in turn, carelessness we see in US AI military spending. "Beat China" continues to allow for congressional approvals of infinite resources and effectively blank checks that accelerate the development and deployment of AI technology, and it creates a permission structure to push AI into the most high-stakes situations as fast as possible at any cost, whether that's money, long-protected military technology practices, like [00:01:00] we heard from Amos last week, or international humanitarian law, for that matter.
So we're slowing down to ask: how did we end up in this race, and what exactly are we racing towards? Lis is gonna kick us off by explaining how this all started with one policy document in 2017.
Lis: Hi, I'm Lis Siegel. I'm a doctoral researcher at the University of Oxford in the Department of Politics and International Relations, and I focus on how AI has altered the US-China relationship over the past 10 to 15 years of rapid technology development.
Alix: So we're now kind of locked into this understanding or lens that the US and China are in this race, and that that is a to-the-death, uh, um, situation, and that there's this [00:02:00] end-of-history style pursuit of AGI, and that once one of the two of them arrives, they will dominate humanity for all eternity.
From your work, I am now learning that that was not an inevitable place to have arrived at, and I'm wondering if you can kind of rewind us a little bit to, let's say, 2017, when China publishes a national AI strategy, which I don't even know if they called it an AI strategy. Um-
Lis: It's the New Generation AI Development Plan, really.
Alix: Okay, so start us there. Like, China says in some document that's signaling its direction of travel on some of these issues, what happens? Like, how do we get to where we are now?
Lis: So China released that 2017 New Generation AI Development Plan, and I think that as it got media coverage or uptake in the US, it was really touted as this, quote-unquote, "Sputnik moment," which raised the importance of AI, I guess, to US-China competition, [00:03:00] in the sense that other policy factors, like the ongoing trade war that was already starting up during the Trump administration at that time, were leading to a bit more of a confrontational bent to the overall US-China relationship.
These structural factors were in place, but I think that it wasn't a guarantee at all that AI would suddenly go from this kind of more niche but growing field of machine learning to suddenly being the entire center of something like a US-China competition. And so I think in general, how this plan was received in media was a really big contributor to that, where there was a couple of misapprehensions at play, I think, in the perception of China with this strategy.
China actually put out this plan at a time when it was really this underdog as far as AI went, and it really perceived itself as an underdog. And I think various researchers have taken the time, around then and in the years immediately following, to look at some of the other metrics of AI development across the stack, [00:04:00] across other parts of research and development and hard science.
Jeffrey Ding, for example, does some great work on this, and really pinpointed how this plan was being reported. It's basically this dreamy, grandiose domestic vision of science and tech greatness that is actually just one keystone in a really long trajectory of China always thinking this way about science and technology.
China's been very engineering-led for a long time, so this didn't really come across as a surprise for those who had watched China for a longer period of time. But also, the fact that this really is like a dreamy shopping list of everything that China really wants, but not with a ton of substance behind implementation.
You know, the implementation of something like the strategy or the plan is really, really far from being top-down, centralized, seamless, or perfect. But I think folks really dug into this plan as some sort of evidence that China had been playing a really long game around technology domination. I think it's a combination of two things.
One of them is [00:05:00] the growing lack of really independent evaluative expertise in all things AI, and then it sort of meets in the middle with this strong lack of hands-on, China-based expertise in the US world, because of the cultural or educational pipeline breakdowns, like different attitudes about China and about getting to know it as a place, and sort of all these security-associated concerns around getting to know China in a deeper way.
So I think these two things have really, like, come together in a really interesting way in how this plan and some other news elements coming out of China end up getting kind of misapprehended a little bit by the public.
Alix: Yeah, it's super interesting. That combination of, like, atmospheric adversary, um, with, like, vague understanding of what AI is, so there's not any clear understanding of what is progress or what do these breakthroughs imply.
When you combine those two things, I can very much see how it quite quickly gets really disorienting and can be weaponized in ways that are hard to [00:06:00] necessarily notice or see. So do you wanna talk a little bit about this 2017 AI document or, like, plan, where there's all kinds of components? It's like the kitchen sink, essentially, this document.
There's not, like, there's not a real clear, uh, direction of travel. It's just like, "Here's a bucket of stuff that we're interested in doing," and they produce this thing. But there's a mistranslation-
Lis: Oh, yeah.
Alix: ... about a really core concept.
Lis: So one example of this, I think, was written up by some really kind of perspicacious researchers who immediately noticed a flag, where one of the topic sentences of this strategy or plan was that China wanted to dominate in AI, or China wanted to be the number one power in AI.
Whereas I think a more measured interpretation of the same line might instead look at China kind of wants to be a primary AI power or a primary AI center. Some of this can stem from the fact that in Chinese we don't have articles. So it's like [00:07:00] a lot of the times you have to kind of read into whether they're saying "the" or "a" or, uh, you know, "an."
Alix: Oh, right. Like, we are a superpower versus the superpower. Yeah, interesting.
Lis: Yeah, exactly. Exactly. And, and so I, I wouldn't say conclusively either way just because, you know, translation, this work is not my forte. But then a more sort of granular but really specifically kind of mistranslation-y thing was covered recently in a piece that Carson Elmgren came out with, where he digs into the translation or the mistranslation of a term within the New Generation AI Development Plan that basically said China was pursuing some form of AGI, like in the real AGI sense of how folks in Western countries are using that term.
And so this was sparking a great deal of consternation and thinking around 2017, 'cause that was quite early in all senses for that word to be appearing anywhere near, like, a real government. But then I think some more careful people said, "Hey, first of all, if this term is really AGI, why is it buried in this really random part of, like, basic research, and not, you know, [00:08:00] more front and center with the whole-of-government approach to AI? Can we dig into what these terms actually mean?"
And so I think that this was actually more about mathematical model research and general-purpose AI models, as, like, a term that doesn't have the same baggage and emotional kind of weight to it. The other thing is that, you know, overall, when you have this dreamy shopping-list kind of approach, I think there's this broader misconception that stems from unfamiliarity with the Chinese policy ecosystem, where when you stay on the outside of it and you look from outside in, you're like, "Oh, so this is a centralized power and centralized authority, so that means that Chinese policymaking and decision-making is gonna be monolithic, it's seamless, it's a top-down process that's extremely centralized, central planning," and so on and so forth.
But I think many accounts of both life on the ground and the experience of delivering policy, or being at the receiving end of policy, really change that view or alter that perception. It looks more like the domestic economic affairs of various regions, and the [00:09:00] really sharp importance of regional areas within China competing for parts of Xi Jinping's big shopping list of dreamy AI goals, and then receiving perks as a region, for example, or as a governorate when they over-perform, or getting differently prioritized and different perks depending on whether they over-perform or under-perform against this big list.
And then China, of course, as a country, is very talented in terms of big infrastructural mega projects, so that also gives them a little bit of a leg up in terms of hardware and energy infrastructure, which are both pretty important for this kind of stuff. I also will shout out that another academic, Jinghan Zeng, has a really great work on this called Slogan Politics, in which there's this slogan or ultimate goal articulated in this grand Xi Jinping speech, and then the implementation of it in practice is actually left up to these other areas of political leadership, whether it's ministries or, more likely, regional governments or these other centers of power who are really not thinking about this in terms of competing internationally.
They're actually just competing with each other for the sort of central pot of resources, the central area of funding that they're trying to [00:10:00] access. And so in this case, it's really important to understand in China that provincial politics really, really matter. Just because it does end up as this Xi Jinping slogan doesn't mean that there's some smooth line connecting the slogan being said towards 10 out of 10 implementation.
And Dan Wang also recently talked about this in his book, where there's a section that tries to more ethnographically identify where various departments or forums or people, for example, are creating friction, dragging their feet, pushing back against these, like, impersonal top-down directives, such as around tech in China.
There's this sort of cultural essentialism here around China that also doesn't really do much hard work of trying to assess, like, what are the structural strengths of China here, you know, what are people actually doing in practice, what does this actually look like on the ground? And so, um, looking at some of the other ways that tech is or isn't diffusing into people's lives, for example, like, you just see a very different image.
Alix: I think that makes so much sense, and I think there's such an understanding that what China does is important, and there's kind of an impenetrability, I think, [00:11:00] for Western observers, who are either too lazy to learn Mandarin and actually, like, dig in, um, or just want a reductive understanding of what's happening, because it's easier to just feel like you know something about this very large, very important country that is extremely complicated, as is every other country.
Um, and so that makes loads of sense to me that you would get to a point where you're kind of just projecting onto this thing all kinds of things that you already think or, like, a simplistic idea of where this is all headed. So what happened next? So this happened, and like, it, this kind of rattles people, and then what happens?
Lis: It's kind of seen as this, quote-unquote, "Sputnik moment," which is a term that also hearkens back to a previous technology prestige race, right? It's the US and USSR competing and racing to put people on the moon. I think it's a complicated picture where it's not as though, you know, this narrative wouldn't have existed at all if not for, like, one or two actors, right?
It's a really messy picture of a lot of people making a lot of claims on AI and what AI can [00:12:00] do and what we need to worry about with regards to AI and how much China figures into that picture. For example, Eric Schmidt comes up a lot in these conversations. I think he had written previously that he didn't think that China had too bright a future, you know, a few years back, and it's interesting 'cause I think that the 2017 plan seemed to also force a recalculation of that a little bit in his head.
Toward the end of 2017, he's starting to talk really forcefully, really publicly, about how the US really needs to pay attention; it was almost assumed that China had already won, or that China is, like, winning. "China is beating the US at AI" is a really snappy slogan, but there's also a lot to unpack in terms of what is supposed to be meant by that.
What's really interesting, though, is that, you know, in the years to follow, that became such a salient and such a resonant frame in terms of how to understand US-China relations, AI being suddenly at the center of this. Fast-forward to now, 2025, 2026, and all of these race narratives feel like they're super locked in.
I'm just noting that this is how we kind of produced reality in some way. In my work, I try to show how much of [00:13:00] that was manufactured up till now. On the other hand, there's really very little clarity on either side from the US or China about where the heck we are racing to.
Alix: I really enjoy the work of Daniel Stone, um, who has broken down a lot of the metaphors and, like, talks a lot about race, the race to somewhere being one of the primary, both industrial but also geopolitical frames that gets used.
You kinda see the blueprint. I mean, you're comparing it to a Sputnik moment. You, you see these parallels with the Cold War, where there was actually, like, a proxy fight over who could travel to a particular destination, like a physical destination, the moon, um, that you can race to. AI as a technology isn't really real.
Like, it, it's a thousand things. It's so many different possible innovations, and how does one race towards an ill-defined goal? How did you unpack that dimension of this? Because it feels like the most important thing: to be able [00:14:00] to narratively convince a government that it is in a race, which seems like what Eric Schmidt was trying to do, without actually articulating what it would look like to complete the race.
Like, how do they do that? That's so impressive.
Lis: There's something fundamentally so human about this, where humans are this storytelling species, and we're drawn to stories to make sense of really complex topics like AI, which could be anything to anybody. But there is a really strong trend that I've noticed for narratives that get carried up or advocated for or amplified by a really specific type of actor, that then gain a great deal of momentum and see real resource allocation, you know, to put oomph behind the narrative, I suppose, in the US government as far as that goes.
In terms of comparing certain narratives to other narratives, and I, I am speaking comparatively because this landscape is so messy. There's so many voices going at once, many ways of seeing and believing in AI. So I set out in my work to try to understand or predict what narratives are likely to capture a whole of [00:15:00] state imagination or something near that.
So looking over the past 10 to 12 years with the rise and fall or rise and rise of narratives, I sort of look at these sense makers or influencers and how they often leverage almost this hybrid positionality across public and private roles. We talked about Eric Schmidt already, and I just think it's really notable that he was already in a bunch of advisory roles, or at least, for example, he was already on the Defense Innovation Advisory Board at the time that he started to really speak up about the threat specifically posed by China.
Looking at some of the minutes and some of the resources from the Defense Innovation Advisory Board, he's always been this advocate, I think, for looking at what automation and what software AI can do for parts of the defense apparatus in the US. And I think that as China began to take a much more prominent role, I suppose, in the overall threat perceptions of the US, [00:16:00] he was able to sort of pivot to this. And as soon as he started talking about this in a really public and a really forceful way, you can also see the response from different figures throughout government who likely already knew him, sometimes through, like, social connections, or other times just as people he's naturally exposed to because he is able to carry himself from place to place in these policy circles.
You have Secretary Mattis writing up this letter advocating for the US to create this National Security Commission on AI. The NSCAI is this blue-ribbon commission that ended up doing a lot of analysis of the US's national security and AI posture, and some threat forecasting of what would face the US in different fields, and of what the US ought to fund or ought to support or ought to develop a strategy around in terms of anything related to the intersection of AI and national security.
And Eric Schmidt, who essentially suggested the creation of this commission, was given the chairman role, and the final NSCAI report came out, I think, in 2021. [00:17:00] By that point, a lot of the rhetorical wheels had also been spinning for quite some time in terms of really drawing attention to the fact that the US and China were now in this AI race.
And then the sort of continuous rhetoric around this, as well as the actual output of the NSCAI final report and other kind of opportunities really seeded this kind of competitive language into the broader consciousness of policymakers, and really contributed to the accelerationist bent of US policymaking.
It also contributed, I think, to the sort of blunting of the critical edges of other narratives that were at play at the time, in the sense that you also saw real grassroots momentum generated around topics related to AI ethics. That was really quickly defanged into sort of these catch-all responsible AI initiatives, very voluntary policies, especially because there was a lot of effort to pay, I think, lip service to responsible AI.
But racing and these accelerationist concerns really swept in, in terms of how dominant they were and how they took a lot of the momentum away from narratives potentially [00:18:00] moving in parallel to generate other kinds of action.
Alix: Didn't Eric Schmidt co-write a book with Kissinger? It feels like that geopolitical paradigm of, like, essentially a former war criminal head of American foreign policy, uh, combined with a guy who owns 1% of Alphabet or, like, whatever he-- like, the former CEO of Google.
The idea that those two men would be directly involved in designing the narrative architecture through which we evaluate our policy priorities vis-a-vis China feels, like, really important. Are there other people you wanna add to that cast of characters, or any other stories that could help us understand just how much influence they have and, like, how they wielded it?
Lis: It's all part of this mosaic of voices, but it is really interesting how resonant things can get after folks like this, either if they're not themselves originating something, it's already maybe out there in the public sphere. But when they glom onto something, [00:19:00] then things seem to move. Things seem to really get moving.
Another example of this that I think is interesting to look at is actually the kind of arc of the narrative around existential risk. In particular, I found there was this one figure who everyone will know, Elon Musk, who invoked it explicitly at kind of the first opportunity of its kind, when he decided to really address a really big political group about it, and that was, I think, the 2017 National Governors Association.
He actually just stood up there and gave a speech about how we were at risk from being killed by superintelligence or something. You saw that the existential risk narrative, I, I think, had the most penetration in the US government when you had these specific figures of advocacy really pulling for it.
If you look at the emails between Musk and Altman, it's the founding ethos behind OpenAI in some way, where they both had this legitimate fear around something like that, and then they concluded that they didn't want Google to have that. And so they wanted to start OpenAI to change that. It figures into the ongoing beef between [00:20:00] them playing out in the courts right now.
It's interesting because these figures were so forceful in elevating this line of thinking or this narrative out of the sort of niches of academia that you could find it in before, and then really bringing it to the attention of policymakers throughout the early 2020s especially. There was also the major ChatGPT moment when that triggered a much wider set of consumer eyes to, like, have eyes on the frontier angles of this new technology and then generate a lot of, you know, uncertainty, fear, and, and so on and so forth.
But then you kind of see this narrative start fizzling out around then. I think that as soon as, cynically, I would say, like, perhaps as soon as they realized that there was money to be made off products in the present, the switch kind of flipped in the sense that it, it's not too good if your consumers are existentially afraid of the technology you're ushering into the world.
So there was a lot of fracturing there, I think. But you see it really correlating, I think, with the amount of executive branch attention, the Biden executive order, you know, paid, at least rhetorically, to this kind of line of thinking around threat. Now [00:21:00] it's this kind of dual hammer that comes up, uh, especially invoked around racing, where it's like: because this tech is maybe existentially risky, therefore we must be the ones to build it, as fast as possible, with the least amount of government oversight possible, because it needs to be us that builds it first, because otherwise...
And then usually the otherwise statement involves threat statements around China. It, it just proceeds to assume that so much is inevitable, um, this sort of very apocalyptic vision of the future being something that we're locked into, something inevitable, and that's really a product of all these narratives that I'm talking about.
But I definitely wanna add Elon to that cast of characters. Some of these other narratives where you can see them at work are really good examples to note that, I think.
Alix: Yeah, and I think there's so much that happens downstream from this construction. An increase in US military spending; an increased resonance among policymakers that basically this is a priority, which also increases budgets and attention paid to this as an issue; a further kind of war-hawkification of the narrative and [00:22:00] discourse, which I think changes the nature of how industry in the US engages in these topics. And it becomes this, like, pressure cooker where basically everyone just, like, has to work as fast as possible to access as many resources as possible to build as much as possible. And in an environment where the military-industrial complex in the US exists in the way that it does, you end up basically just shoveling money to these companies and creating an environment where really cavalier deployment of these technologies, that part does seem inevitable, even if these other things are not.
Lis: Yeah, it's, it's crazy. I feel like there's definitely an overlap of this with this phenomenon where you could say, "Isn't this just corporate interest playing out in another form?" But there is an interesting discontinuity that's something to look at, which is just how personalist in particular this all is.
Maybe it's a follow on from the founder cultiness of tech and how much that matters culturally within this particular field. People take on this mantle, and you know, Eric Schmidt is still involved in venture capital and [00:23:00] has some fiscal interests there or financial interests there, but it isn't like he's coming to us from a hands-on frontier lab experience, if that makes sense, in terms of coming to us and really telling us how to make sense of this technology and what it ought to do for us or what it will do for us.
But people get to a certain level of success maybe in, in business, and then get to take this mantle with them, and it becomes them acting as this interesting hybrid interpreter for us. Some of this also has to do with some of this knowledge pipeline where I think in this AI era, there is such profoundly strong concentration of power and knowledge in the hands of a really very small handful of labs.
And so that just ends up having this trickle-down effect that means structurally the kind of folks that get perceived as experts because they have what we perceive as that hands-on experience with this field increasingly only come from those small number of actors, and it's this reinforcing problem, I think, where they, their voices kind of get privileged in this discourse and in [00:24:00] terms of what narratives we ought to pay attention to and respond to.
I think it's this ongoing cycle, and so it's really defined so much of the discursive space in the US in a way that is really pushing us towards a lot of accelerationism. I think that now, I'll just say that right now it looks like this dual push on both sides of self-reliance and this kind of idea around onshoring as much of capabilities of the AI stack as possible for these superpower countries, while in turn now there's this perceived kind of competition to push to export parts of that stack to client states or client customers.
You see middle powers also internationally, and by middle powers I just mean literally any country that's kind of, quote-unquote, "in the middle" between these two polar countries. But middle powers are currently debating the same thing: what can they onshore to insulate themselves from the different forms of economic coercion that they might face from China or the US, avenues that are open due to just the dependence that a lot of these systems really cultivate [00:25:00] through being absorbed and used for national critical infrastructure, and, and so on and so forth.
I saw a headline recently that Beijing has announced this strategy of pursuing AI self-reliance at all costs. That's like a, a translation that I'm quoting, but it's a really interesting open debate about whether or how, like, a lot of the hawkish industrial policy from the US over the past 5 to 10 years actually really spurred that decoupling process, pushing the bifurcation of systems there.
Because I think there is an American-centric argument to be made about whether pushing China to be more self-reliant is actually really in the American interest commercially, because I think some folks think it actually would be great if Chinese companies were still incentivized to use US chips for their tech, and, you know, to keep that kind of dependence there.
But then there's also the more holistic look at what that kind of decoupling, or the normalization of supply chain weaponization, is really doing to the whole world right now, right? Like, what that really means for the broader world and other affected populations, that this is now sort of in the playbook that everyone's using. And then there's the fact of this [00:26:00] unprecedented fusion of corporate and state power that we're seeing as a result of this privileging of these national-champion companies, basically, who are creating AI, and now they're being put out there to race against each other.
And the US's message, for example, at the India summit earlier this year in Delhi was like, "Buy American, buy our exports," and so on and so forth. And you see slightly different repertoires across both countries. There's this increasing kind of like feeling that this is the business contest. And of course, who benefits from that?
It's the companies and their shareholders, really. They're able to sort of try and get customers like that, and it's not really clear what this race, what that approach to competition, is actually bringing back in dividends for, like, people anywhere.
Alix: I interviewed Paris Marx a couple months ago, and Paris went down this rabbit hole about the process of competition, basically the process by which China funded electric car development in an emerging industry within the country, and how most people think BYD was an overnight success because [00:27:00] it had some entrepreneurial genius that was unique to it, but really it was this, like, industrial policy over a period of time of, like, investing in different companies, creating competitive contexts, and then forcing mergers where the state thought that it was smart.
And then basically, like, over a 5 to 10-year period, they built this massive manufacturing infrastructure and capability and this mature design capability and battery infrastructure, et cetera, et cetera. And as I was thinking about that and learning about that, I was just like Oh, fuck. Like, there's a plan.
Like, they have industrial policy, and we have, like, ketamine-addled 10 dudes. And then, like, Musk has Tesla but can't produce nearly enough cars to meet demand, and now demand is dropping because he's kind of a psychopath. When you think about the way that China has engaged in these, not just narratives, but also sort of investment in an AI future in a way that is kind of dealing with the geopolitical context within which it finds itself, can you [00:28:00] describe a little bit about the policy and political approaches?
Like, who's involved? I presume there's not an equivalent Elon Musk or Schmidt who's, like, driving a lot of this stuff, or maybe they're party apparatchiks who are, like, really senior and, like, I don't know. Like, how does it, how does it work?
Lis: I think there's really interesting work here. What I'm a bit more familiar with is the landscape of regulation there, because one of the things that folks have thought about...
And I guess before I say that, I'll, I'll just say generally speaking, in terms of the slogan politics approach to stuff, there is generally this push among different regions of China to say, "Hey, this is our specialized strength as a region, and here's how it applies to the master plan that was laid out in, you know, the five-year plan or one of Xi Jinping's speeches recently or something like that.
"And here's the sort of, uh, resources that we have access to centrally that we can draw upon to make Tianjin one of the great AI areas or, you know, create, like, a cluster here or something there." And there's some provinces that are [00:29:00] really going hard on energy generation and energy generation projects and saying, "Build data centers here because we can offer the cheapest possible electricity."
That's places like Guizhou, for example. So I'll say in terms of, like, the actual actors involved, what's fundamentally harder to compare, I guess, across both political systems is just that in the US, we have 12 guys, and they, uh, sort of have these sometimes circus-like public personalities that are always, you know, going through the news cycle and stuff.
There's definitely cultural essentialist statements that enter into the discourse when you have statements about China's civilizational mores or, like, approaches to doing things, you know, as a people. That can get really crazy, and I wanna shout out Kerry McInerney's work on yellow techno-peril, which is so interesting and relevant here in terms of analysis of China as this monolith, as this, like, threat.
What I'll just say also is that some of the same complaints that you hear also from the sort of, like, corporate types [00:30:00] in the US around regulation and pushing for deregulation, I hear the exact same stuff actually coming from the sort of folks that I speak to directly or indirectly vis-à-vis Chinese companies.
So for me, it's really interesting to think about the sort of state structure difference between the US and China, where there are obviously, like, different ways that the state is organized, different ways that the government works. Uh, what is the balance of power right now between public and private?
It's constantly being negotiated, and in the field of AI, where even if you do have a lot of folks from an engineering background, in the Chinese government's case, they're all working on hardware and civil engineering projects. It's not like they would have extra insight into AI. And so I think they're also kind of coming from behind in the way that all of this knowledge, all of the understanding of AI can be really walled away, I think, from the parts of the public sector that are most needed to independently evaluate claims.
Because I think oftentimes what happens in the US is this calculation where some actor who is a spokesperson of this company or, you know, formerly at this company [00:31:00] says we need to deregulate, or we need to not pass this particular regulation, because it'll make things harder for companies to contest China internationally.
When we think of China as this strong, monolithic, top-down state, regulation-wise, for me it's actually an open question how much room the companies that are developing this currently have, usually agile startups that aren't even on the radar of CCP leadership. DeepSeek, for example: we talked to a couple regulators, like, in the month or two following, or indirectly heard also that there was just a lot of confusion.
They were like, "What is this company and where did it come from?" I think that's really interesting to think about and does a bit more work to complicate this really centralized, really perfect information world picture of where China's at right now.
Alix: What do you think that this narrative construct enables?
So like, w- who's benefiting from this? What is happening because of this narrative? Who are the people that are kind of making out like bandits because of this, um, this frame?
Lis: In terms of what this enables, I would say it's [00:32:00] this blank check to do what it takes to win, even though we don't really have a clear vision of what winning looks like.
But there's these kind of really cultural essentialist, like, clash-of-civilizations stakes being appended to a lot of these things. That means that, you know, it's almost like in the US, we need to marshal this whole-of-state approach to developing AI and funneling money into these projects as much as possible, in order to then, like, confront China on the global stage at some point, somehow, whether folks think that it's going to be in the form of some sort of kinetic conflict down the line or simply continue to be this commercially oriented export battle.
There's a lot of underlying beliefs that kind of make up this Jenga tower. One component of this is that values are encoded in the AI systems that you're exporting, and so there's this values-based tinge to all of this that has really firmed up over the past couple years, which is really interesting. You know, it just kind of [00:33:00] feeds the whole beast, as the narrative goes.
It's one of the narratives that is so resonant right now, and I think it's self-perpetuating. Like, a lot of folks will say really cynically that they're gonna lean into the China argument to do X, Y, Z thing, you know, in Congress, for example, because they're like, "Well, it gets the people going," you know?
It's provocative. It gets the people going. It is the-
Alix: China and kids are the only two things that I hear people explicitly, cynically say they're gonna, like, use to motivate. Yeah.
Lis: Yeah, absolutely. But also I think that what's really interesting right now is the sort of lesser... Well, you know, they've gotten a lot of press, I guess, overall, but way less covered is, like, the Palantirs, right?
Like the Palantirs, the neo-defense contractors that are kind of doing really behind-the-scenes work with the national security establishment here in the US, or also doing a lot of work in getting contracts abroad. But something else that I've been doing a little bit more research work on has to do with, you know, the systems in place around [00:34:00] procurement, and how procurement could be this remaining lever for governments to try and stipulate, you know, rules of the road for acceptable conduct or acceptable tenets or facets of a technology before they go ahead and procure it.
It's potentially a bit more of a Band-Aid-type approach to some of the underlying structural problems here of the balance of power, as I said. But I don't know. I think that there's this alarm bell being rung, especially in the defense sector, around this feeling of unpreparedness, this feeling of decline in the face of this rising challenge from China.
And then I think that you have a lot of folks coming from the tech sector who say, you know, "We understand that this is the fundamental problem with defense, uh, more so than you understand it, and so we'll tell you exactly what to procure from us and why you need it, and what the acceptable rules around procurement should look like," and, you know, all of these things.
And because this is so implicated in the sort of military defense world, around this rising challenge of China and [00:35:00] potential flashpoints or potential kinetic conflict in the future, there's this feeling of pre-defeat, almost, on the US side that is, like, really motivating a lot of these crazy contracts, and this crazy kind of, like, descent into outsourcing everything.
I do wish that there was more critical reflection, even in what comes up in conversations with policymakers specifically, around how these governments themselves often will get hurt, I think, by the unfettered racing, you know, deploying without testing and evaluation into critical government systems.
Of all the failure modes, like, I find it really likely that a lot of these really frontier black box systems malfunction in a way that humans really aren't trained to interpret and identify. You remember that situation from the Cold War where a computer system is telling an operator, "Hey, the US is launching nukes at you right now," or something like that, and it's the operator's discretion to say, "Oh, sorry, this system is what's malfunctioning.
The US is not launching nukes right now. It's the system." I wonder if that would be possible today, you know, with all of the layers of procurement and contracting, and all of these black boxes that are now [00:36:00] sitting inside the compression of the decision-making process, and the sort of, like, suppression of dissent in the national security state and all that.
I think there's just this recognition by everybody that that situation would really be in the interest of nobody, but it's just getting deprioritized in thinking about it just because of profit-seeking and rent-seeking by these firms that are inserting themselves into these national security priorities.
And so above all, I'm just really wondering how human good is going to be recentered in all of this.
Alix: Yeah. That's such an interesting historical change, and I think you're right. This, like, black box is gonna change so many things that we just kind of presume, like the human infrastructure that we presume is there, and the stories we've told ourselves as a society that we don't need to really worry, 'cause there's, like, someone sensible that's gonna, like, stop something from happening. We're constructing something that's quite scary.
All of this is predicated [00:37:00] on a construct of competition, and I know you mentioned there's other ways of seeing this, and you mentioned kind of AI ethics or responsible AI. But it feels like there's also another one that we have evidence was possibly a better horse to ride, and that's collaboration. And I feel like, um, the way that AI, and I hate using that word as though it's real, but the way that these technologies have been developed in a scientific community that is global in nature and has become more global or has started becoming more global, there were signs that that was great for everybody.
And it feels a bit sad now that we're possibly losing that new young space that we only knew for a brief period. Um, but do you wanna talk a little bit about how Chinese-American collaboration has affected the advance of science and kind of how these new [00:38:00] narrative constructs mess with that?
Lis: One of the most common combinations of researchers to see over the past 20 years is US and Chinese scientists, or, um, like, a US company collaborating with Chinese scientists or funding work by Chinese scientists, or vice versa, right? There's a sense that basic research in a lot of these areas helps everybody, and I think that more work is being done right now to try and better articulate, you know, why it is that that has changed over time.
I think that with the securitization of this field has come this perception that any collaborative work is a risk, because there is this sense that, like, even helping on any kind of dual-use research or, like, helping each other with, like, safety research actually helps the other side. And that's, like, a really sad state of affairs, because US-Chinese collaboration built the entire academic field of [00:39:00] AI. And it's definitely good, especially in this kind of low-trust environment that we have between these states, where folks have these catastrophic hypotheses about what might happen in case of, like, low-trust miscalculation or misperception between these two states, even from, like, a basic international relations point of view.
Like, just wanting to avert the kind of escalation to outright conflict between these two great powers, which would be horrible for the whole world. Like, countless humanitarian catastrophes would result from something like that. And so finding ways to collaborate on shared priorities in technical fields like this is usually really good scaffolding to be able to look to when tensions are high like this.
So it is my hope that the closure of some of these channels is not irreversible, and that there's a chance that that kind of status quo could come back. I also think it helps because whenever someone comes up with a great idea about how to do something better or safer, or [00:40:00] comes up with new kinds of guardrails to, like, put into things, technically or otherwise, or fosters really important sociotechnical conversations that need to get layered on top of everything in this field, you'd want that to be happening internationally and in a collaborative way.
We already have so many problems around, sort of, what's still being kept proprietary. It's my hope that this kind of trend towards the extreme securitization of some of these research areas isn't irreversible, because I think there are a lot of primary and secondary benefits we gain from working together.
Alix: That is, like, the perfect way to end it. I love ending something on working together.
Lis: Perfect.
Alix: I hope this conversation was helpful for kind of getting under the hood of how, uh, narratives have been constructed and how US actors have intentionally stoked fear around China to motivate all kinds of things that might not make sense, um, if you didn't know that they [00:41:00] had been tinkering with this bilateral adversarial relationship for the last 10 to 15 years.
And I also, the timing, it feels good, 'cause I think diving into what motivates the Chinese government feels really relevant this week given its bullying of the digital rights community with the cancellation of RightsCon, and it just feels like we're on a glide path towards a lot more geopolitical conflict between these two countries.
And while I think it's easy, um, to focus on what is happening now, I think looking at that history is really useful for understanding how we got here. There's two pieces of journalism I wanted to highlight, because I think emphasizing collaboration between American and Chinese scientific interests is all well and good, but there's also been other kinds of partnerships that are worth highlighting.
In fact, a piece of journalism was just nominated as a Pulitzer Prize finalist, and it shows really intricately how US tech companies have basically built most of the domestic Chinese surveillance [00:42:00] infrastructure. So when we talk about collaboration in the episode, uh, we're not talking about that kind of collaboration.
Um, but I think it really is important to understand that when big tech complains about Chinese technology companies or acts above reproach, it's really important to look at how they've directly partnered with the Chinese state as it's attempted to build authoritarian infrastructure, essentially assisting, enabling, and accelerating its digital surveillance state.
That's the first piece of journalism. The second is Taylor Lorenz came out with a great piece last week in Wired, and it is focused on a dark money campaign that is investing in messaging that is both pro-AI, so around the midterms trying to encourage policy engagement and, uh, political rhetoric, um, that is pro-AI, but also messaging that emphasizes the threat that China may pose to the US.
And I think that is a really good piece to kind of show how this attempt to craft messaging didn't end [00:43:00] once these tech guys got this narrative locked in and continues to this day and will continue. So understanding how we got here is really important to understanding where we're headed. Um, next week we are gonna hear from David Witter.
He is a researcher who brings academia into the chat. Um, so academia has played a huge role in how technology has been developed for the military. He describes the relationship between academia, the military, and big tech companies as kind of a love triangle, which is what we're gonna get into next week.
Um, thank you to our production team, Sarah Myles, Georgia Iacovou, Kushal Dev, Marion Wellington, Van Newman, and Zoe Trout. And thanks for listening.
