
Show Notes
The left has a messaging problem. Silicon Valley elites are literally making up impossible fantasies and their narratives are winning out. Why?
More like this: The Stories We Tell Ourselves About AI
This week, in our second episode leading up to the AI Doc, we are joined by Anat Shenker-Osorio, a progressive campaign strategist who hosts the Words to Win By podcast. Anat tries to focus on the positives: if you don’t think people should join the AI party, throw a better party. She gives us some quick lessons on messaging: how to paint tech CEOs as actual villains, how to flip the script and convince AI men that actually, it’s okay to die, and how to avoid what Anat refers to as ‘Mar-a-Lago face’.
Further reading & resources:
- Listen to Anat’s podcast Words to Win By
- Pre-Suasion and Influence by Robert Cialdini
- Messaging This Moment — a critical handbook for progressive comms
Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!
Computer Says Maybe is produced by Georgia Iacovou, Kushal Dev, Marion Wellington, Sarah Myles, Van Newman, and Zoe Trout
Transcript
Alix: [00:00:00] Hey there, welcome to Computer Says Maybe, this is your host, Alix Dunn. Last week I mentioned the AI Doc. It's a documentary about AI with super high production value and a big budget. The film really leans into narratives that are beneficial to Silicon Valley in that they distract us from the way that these technologies are being wielded by a small number of companies with tremendous power, and focuses instead on near-future speculative fantasies that get people all riled up but don't really give them anywhere to put the energy that comes from that, in a way that I think is paralyzing and benefits companies who don't want us to do anything.
They want us to be scared, they want us to be excited. They want us to be confused and unsure and uncertain, and they want us to do nothing and let them lead us into the future that they're building that benefits them. The movie is out one week from today, so we're doing this little series on challenging the main narratives that [00:01:00] dominate the space.
Last week, Adam Becker was literally talking about space. He's the author of More Everything Forever, and he uses the power of science as an astrophysicist to challenge some of these AI fantasies. And this week we have Anat Shenker-Osorio, who's a different kind of expert. She is a progressive campaign advisor who hosts Words to Win By, um, which is a fantastic podcast on how narrative victories happen for the left.
I kind of felt like I was getting a free consultation, um, talking to Anat and sort of throwing a lot of the problems that AI narratives present at her. I was kind of expecting her to, from the ground up, make the case for campaign messaging and framing that can help us break through some of this corporate messaging.
And really, she ended up giving me a quick and dirty lesson in messaging strategy, and drew on tons of historical examples that made me think: maybe we don't have to do this from scratch. Maybe there are tons of existing examples we can learn from and shortcut our way to better [00:02:00] communication and messaging and narratives that help people understand the implications and stakes, um, of our AI era. Before we hear directly from Anat about the AI politics space and how she might manage some of the dimensions here, I wanted to give my colleague Marion Wellington, our communications lead, a chance to talk a little bit about how Anat has shaped their work.
Marion: Hey everyone, Marion here. I've been doing communications work in the tech politics space for about a decade now, and back in the early days, in 2018, I went to a messaging workshop as part of this housing narrative coalition that unironically changed my life. Anat gave us a very needed wake-up call on how to actually understand where people are at and how we bring them to where we want them to be, rather than just all talking to each other.
Um, her handbook Messaging This Moment really shaped me as a communicator. So to anyone that's trying to make sense of power and how we move it, I highly recommend giving it a read.
Alix: Cool. And we will drop a link to that in the show notes. For those of you interested in more effective communication on AI politics, this is the episode for you.
We have to stop talking to ourselves, and stop talking in very abstract intellectual language about all the problems, and [00:03:00] Anat is gonna help us navigate how.
Anat: Well first thank you for having me and for setting out such a tiny little, small thing for us to talk about. This feels very chill.
Alix: Yeah, very exciting. No pressure.
Anat: There's no pressure. Basically share my favorite recipe, we'll be done. Yeah, so one of my core messaging edicts, and I have many because I try to make things easy to understand, digestible, memorable, is: say what you're for. Say what you're for. And I will often joke that I can boil down all of progressive messaging to three sentences. Boy, have I got a problem for you. This is the Titanic, would you like to buy a ticket? And we're the losing team, we lose a lot, we lost recently, so you should join us.
And shockingly, um, by the way, that's gonna come back up when we start talking about the [00:04:00] AI messaging.
Alix: I'm already noticing some patterns in my own communication.
Anat: Hit the trifecta, well done. Yeah. Making me look honest. Yeah. And accurate. It turns out that "boy, have I got a problem for you" is not that effective
for most people. It works for activists; for activists, a new problem is their love language, and they're very excited about it. But most normal people, even ideologically aligned people, got 99 problems and they don't want yours. So winning campaigns say what they're for, say what they're for, say what they're for.
So let me take a super simple example. In Australia, where I work frequently, we were able to alter a policy that the government had pursued from 2001 until 2016, of shunting people seeking asylum to offshore prisons, and have four sequential wins in a single year, 2016, after a large-scale research project at the end of 2015 to shift the messaging.
And close many of those [00:05:00] camps and bring people here. And the message was not "end this, stop that, don't do this, we can't have that," but rather "bring them here." That was one of the campaigns. Another was "let them stay." If you think about iconic slogans like Black Lives Matter: Black Lives Matter is an affirmation.
Marion: Mm-hmm.
Anat: It's not "stop killing us." It's not "end police brutality." Of course that's, like, vital and critical and important, but the idea itself is an affirmation. So basically, here I am going around being like, say what you're for, say what you're for. If you want people to come to your party, throw a better party.
And like, the left does not like to throw a good party. We are, like, miserable, you know, we all grew up listening to the Smiths, or at least some of us did. You know, we're not a good time a lot of the time. And here I was doing presentations in which I would do what I just did now, which was: look at this terrible message, it's so bad, why did [00:06:00] you say this? Look at this terrible message, it's so bad. And I recognized that I was not following my own counsel, that my message about messaging was frequently about deconstructing what was not working.
Alix: Interesting.
Anat: And so I finally internalized: if I really do mean it that it is more effective to persuade and mobilize people,
like voting populations in respective jurisdictions, by saying what we're for, and by painting the beautiful tomorrow, and by making ourselves the winning team, then what was this hypocrisy that I was doing, where I would talk to activists and advocates and campaign managers and party leaders, and I wasn't following my own advice?
So I disciplined myself to make a podcast called Words to Win By, which is about campaigns we've won all around the world, and how we did it. And so in a lot of the episodes, and there's a couple that [00:07:00] sort of break form, I'll be honest, but for the most part, there's no dramatic tension. The episode begins, you know: we're gonna win.
Marion: Mm-hmm.
Anat: That's what's gonna happen at the end. But kind of like a Marvel movie, you also know that the good guys are gonna beat the bad guys at the end. Guys: gender neutral. And you tune in to see how they'll do it, not if they'll do it.
Marion: Mm-hmm.
Anat: And so, yeah, almost every episode is basically looking at what was the strategy, what was the messaging, what were the arguments internal to the campaign where some people wanted to message it in this way, other people disagreed.
How did we reconcile that? What was the research that we did and so on.
Alix: There's a lot there that's so interesting, 'cause I'm immediately trying to apply all of that advice in the context of AI. I mean, taking a step back, one of the reasons that the space does feel difficult is because there is this industry centrality to all of the stories we're being told.
And I think that it's [00:08:00] interesting, 'cause now I'm not only thinking about the ways that I introduce this problem as a problem, but also how effective industry is at introducing it as this panacea solution for everything. And so I'm thinking about, like, when the PR of companies is like, you don't wanna cure cancer? We're gonna cure cancer, it's gonna be amazing.
Like, isn't that gonna be incredible? And it's like, oh, but I thought the problem was that you're peddling, you know, knowingly not-very-good technology on a consumer base that doesn't want it, while building, like, one of the biggest pieces of physical infrastructure that humanity has ever built, that is extremely energy intensive, at a time when we should be, you know, fighting the climate crisis.
And I feel like their ability to turn it to this speculative positive vision is one of the biggest strengths that they've exhibited. Um, and they've obviously put a lot of money behind it, but it's also a technique that is really impressive and consistent, and you see it anytime one of the CEOs talks about the technology: it's not
today, and like the kind of [00:09:00] annoying things that don't yet work. It's this, like, really beautiful future that you can't quite imagine. It's gonna be incredible, and we're stewards of that incredible future, even though at the same time they're handmaidens of something much darker, probably, that we're tempted to talk about or challenge them with.
Are there corollaries you can think of in other spaces where, um, a villain, or a part of the problem, is capable of using the technique that you just described? I'm kind of curious. Oh, yeah? Yeah. Okay. Like, like what?
Anat: Yeah. Yeah. I mean, it's not a perfect analogy, because what they're peddling is a very, very, I mean, not very old, but it's fairly old. It's, like, industrial revolution era, so therefore old now. But I think that there are a lot of parallels to
the problem of fossil fuels and climate change, and how all of that went down. If you actually watch, if you subject yourself to, Chevron commercials.
Alix: That's so true. Yes. Yes. Carry [00:10:00] on.
Anat: Or Texaco commercials or BP commercials.
Mm-hmm. Now, but in the past as well. It's like: your future, pumped. You know, like, powering your future, the energy to bring you where you wanna go. Yeah, we are taking you places. Look at this grid, we have the engineering for this future. Yeah, we're gonna own the future. Let's say, other things that I can't remember about the future. And there's a really, really old ad from one of these.
I don't remember which it was. Like, when I say old, I mean like 1950s. That was "put a tiger in your tank," and it was, like, I can see the image. It's, uh, Cialdini, who is a really seminal psychologist, who wrote the book Influence and then wrote the book Pre-Suasion, it's a lot of really, really important ideas about, like, how persuasion works. And "put a tiger in your tank,"
whichever gas company had that [00:11:00] old ad, was really about: what is this metaphor we're trying to give you around this particular product, that, you know, you're newly-ish gonna buy? New, because this was the advent of the personal vehicle
Alix: mm-hmm.
Anat: being, like, more rare. Because there was an era, I mean, obviously predates me, I wasn't around for it, but, like, where not everybody had a car, where cars per se as a technology were maybe not brand new.
Like, I mean, there was an era where cars were brand new, but the kind of increase. And the same thing with personal computers, and the same thing with cell phones, and the same thing with cell phones that were not flip phones. I think that they've always been sold as power unlimited, the future unleashed, like putting everything in your hands, that it's gonna be, like, magical and delicious.
And with climate, I don't know if you know this, but do you remember, are you old enough to [00:12:00] remember, famously, calculating your own carbon footprint?
Alix: I mean, I remember the calculators and the encouragement to Yes.
Anat: Yes. Okay. So for those of you who don't know: in the olden times, when we used to have phones at our houses and they stayed there,
you could go online, early internet, and you could put in, like, I live in this many square feet or square meters, and, like, I drive this often and I fly this often, and it would give you your carbon footprint. Do you know who invented the carbon footprint?
Alix: It must be a fossil fuel company.
Anat: It 100% was. Because what that was was a way for people to feel like, okay,
I am worried about this. Because they figured out they couldn't completely quash people's, like, concerns. And, you know, Rachel Carson, Silent Spring, like, very old. Like, there were already kind of early indicators that there were gonna be problems with this thing. Yeah. And that we were in fact cooking ourselves alive in a way that was probably not gonna work out too well.
And [00:13:00] so rather than just fully quash it, they did multiple things at once. One of the things that they did was they made up this thing that you could do. And it's a little bit like the toy you get at a children's birthday party, where you put your fingers in, and the more you pull,
Alix: the more stuck you get,
Anat: the more stuck you get, and the only way to get your fingers out is to push them together.
So they were like, let's keep 'em occupied, let's give them something to do so that they feel like they're doing something about this climate change thing. 'Cause some of them just keep wanting to be upset about this, we can't stop all of them from being upset. And so, like, let's do light bulbs, or let's do recycling. And, you know, I'm not knocking better light bulbs or recycling or electric cars or solar panels.
It's great if, you know, your finances allow that, but they reduced climate change to a matter of individual consumer behavior. Because if you can get people in that little finger trap, then they are not organizing to [00:14:00] contest for power. So that's one thing they did. The next thing that they did, and this is really where I desperately hope folks in your community learn from this,
Alix: I'm listening,
Anat: but they, I will be generous and say, forced advocates, I don't think they forced them, I think this is what happened: they forced advocates into having an I-don't-know-how-many-decades-long debate over whether climate change was person-made or not. So, "97 outta 100 climate scientists have studied the longitudinal data and determined," you know, that 97-out-of-100-scientists thing that, like, everyone liked to tout.
When we would show that to people in focus groups, do you know what they would say?
Alix: What about the other three?
Anat: They would say, oh, so there's still doubt.
Alix: Yeah.
Anat: You're basically doing advertising for the 3% kooks. Yeah. You are admitting to people that, like, there's still some ambiguity here. But worse than that, you are
having a conversation about whether or [00:15:00] not climate change is person-made. And I tell people all the time, it's like, look, I'm not your science teacher. I don't care if you understand that climate change is a product, or at least accelerated climate change is a product, of human behavior. All I care is that you
take the actions I need you to take. And if you take those actions and you are delusional about why, that's a problem between, like, you and your science teacher, your mom, I don't know, but it's not my problem, at least not as an advocate. So advocates took that bait, and they were in this endless conversation about, like, it is person-made and we're gonna prove it to you.
It's not person-made. And all of that time was time wasted, when they could have been saying: a clean energy future is ours for the taking. We can have clean energy made at our house, by our door; jobs that no one can outsource, ever; power our lives by the wind and the sun. And [00:16:00] anyone who tells you differently is trying to poison your air and pad their pockets.
And instead of talking about that, instead of selling the solution, saying what we were for, we spent all of this time admiring the problem. And the amount of time that the left likes to spend admiring problems is extraordinary.
Alix: It's a problem. Yeah. Yeah. I mean, it's interesting, 'cause when you describe the carbon footprint calculator, I can see how someone very earnest, that wasn't the fossil fuel industry, would be like, well, we wanna activate people
So that they will take an action. We don't necessarily know exactly what action we want them to take, but we want them to feel a sense of agency in a very large issue. And if they feel a sense of agency, they'll feel more committed. And then we can kind of like connect those people and that agency into something that rolls up into a bigger coalition or movement.
And I can see what you're saying. I think the finger trap metaphor is a really good one, where it's like, actually, agency and empowerment for an individual in that way is actually a distraction [00:17:00] from something else. And I feel like within the AI conversation, I mean, I'm curious if you have immediate ideas about what that more positive vision is, when it feels like
all of the directions, or all of the possible projections of the future that don't include AI as a central component, are immediately slide-tackled as, like, naive, or, like, what are you talking about, of course our future is going to be more technological in some way. And I, at least, find it challenging to make a positive message around this stuff that isn't something like: wouldn't it be amazing if we had time to spend with our friends and family, and actually, like, could touch grass, and actually, like, weren't being constantly exploited. And, like, not even talk about AI, have it be much more of a positive political vision of a more connected world that is not disintermediated by these companies.
But I don't know. I mean, do you immediately have messaging that you would imagine we should be thinking about?
Anat: Yeah. Messaging is [00:18:00] only as good or as bad as the task that you set out for it. And this is where a lot of things fall apart across issue areas. And I would guess that AI is one of the hardest, like genuinely so.
So I'm not saying what I'm about to say is easy, but a message can't be evaluated in some sort of empty space. A message is: did it make people believe what I need them to believe, and do what I need them to do?
Alix: I love how instrumental you are about this.
Anat: Oh, I mean, I, I'm a very practical human. Yeah.
I'm into it. And so when I do projects with clients, those are the two questions that we ask at the outset of the project. We say, what do you wish people believed? And then what do you need people to do? And normally they can't answer.
Alix: Yeah, I'm, I'm like, I'm not quite sure. Yeah.
Anat: Right. So,
Alix: So interesting.
Anat: I can't answer for you, number one because, like, I don't possess the expertise to answer, and also because
I'm not a policy person. I mean, the purpose of messaging, yeah, is to put the most [00:19:00] attractive possible wrapping paper on the box. There has to be a box; like, there has to be a set of directives.
Marion: Mm-hmm.
Anat: And so let's say, for example, I know this isn't AI, but, like, in campaigns that I have worked on to do regulation of big tech in other ways that are, I think, adjacent.
So controls over social media, or controls over, for example, how police can access public camera footage. I'm thinking this is principally in the EU, and in Brazil there's been at least attempts and some actual regulation. And so in those situations, messages that we've experimented with that have done really well. I'm not gonna remember it verbatim, but there was one around social that was like, this is my crude version of it,
'cause I don't remember how we tested it word for word, but it was like: your aunt's favorite recipe. Your kid's first first-place finish in swimming. Your [00:20:00] best friend's drawing. These are the memories that we share. These are the things that draw us online, onto social media platforms, for connection. But today, a handful of corporations are stealing our moments and our memories.
They're stealing our genuine desire to connect with each other, in order to create profits, and be able to invade our privacy and sell our data back to the highest bidder. We need to make our memories and our moments our own by passing, whatever, this, like, European regulatory act.
Alix: Yeah,
Anat: but that's because there's like a campaign, there's a theory of change and so in the absence of that, I dunno what to tell you.
Alix: Yeah. I feel like it's the speculative aspect of the narrative from industry that I find most flummoxing, because to have a box with something inside that you put a nice wrapper around, like, to have a policy prescription, it has to be on a [00:21:00] terrain of specificity. And I think there's something extremely skilled about shifting the conversation to the near future in a way that makes it extremely difficult to make positive statements.
So I can imagine lots of negative statements, and maybe it's about taking those negative statements. For example: I don't want a local community who does not want a data center in their backyard to be forced to have a data center in their backyard. Yeah. Um, I don't want local government officials to be bought out to have industry take water, take energy. And I don't want another kid in Memphis to have asthma because Elon Musk can't bother to get regulatory permits and wait for Grok to be on the grid.
You know, like, there are a lot of those. But it feels like a negative posture still.
Anat: Yeah. Yeah. But so, to be clear, like, "say what you're for" is the opening edict. Okay. I hope it's also clear that that's not enough. Another really, really common messaging mistake, and another rule, therefore, that we preach, is: people do things.
So another common mistake in left-wing messaging is that everything appears to be a problem out of the ether. So, wages are falling, democracy is eroding,
Alix: and passive. A voice.
Anat: Yes. The wealth gap is growing, harms are increasing. Yeah. I mean, I often say that I could go through progressive websites and rewrite passages out of the passive voice every single day of my life and be fully employed forever and ever and ever.
Mm. And a big, huge part of that is the way that we're funded, and C3s, and fear, and academese, and the fact that there are too many lawyers writing things which they have no business writing. And rule number two, yes, rule number one is: say what you're for. Paint that thing that people wanna be involved in.
Attract people to your cause. But then rule number two is: if people don't understand that a problem is person-made, it is inconsistent to believe that it could be person-fixed. So we're not out here trying to ask for regulations about the [00:23:00] tides. Like, I'm not trying to call, you know, my senator to be like, hey, I was thinking of taking my kid to the tide pools, like, could it please be low tide at 10:00 AM? Thanks, bye.
Because that makes me sound like a lunatic. Mm. And that is back to the problem of climate change. Ironically, in this battle to make people understand that climate change is person-made, the discourse came to personify climate change. So, climate change is raising sea levels. Climate change is making weather more dangerous.
Climate change is raising temperatures. But you can't actually protest climate change. You can't go to climate change's house and be like,
Alix: how dare you.
Anat: Right. You can't like do a social media storm. Yeah. At climate change.
Alix: Yeah.
Anat: So there's no organizing theory behind that. Yeah, so back to AI and big tech and so on.
I think at least there is a positive. But then there needs to [00:24:00] be, the way that we preach it is: values, villain, vision. That's our message order, that's all the components. So it's not just purely say what you're for, it's also be very clear. So the way that I would do it is: we all wanna have more moments, more memories, more time with our kids, to be able to touch grass, to have a life, you know, to make a good living and have time left over to have a nice life.
Yeah, we wanna put food on the table, and we wanna be home in time to eat it. It'd be great if it tasted good, too. But today, a handful of tech CEOs, or, but today, Elon Musk, or, but today, you know, whomever, wanna take everything from us and poison our communities to pad their own profits.
Alix: Yeah.
Anat: Having our children need universal inhalers is a choice we don't need to keep making.
We can decide where the power in our communities goes, and we can reject these data centers that wanna [00:25:00] take from us in order to destroy us. So it's like: here's the world we want, here's what we want, here's how we'd like our life to be. We want clean, locally made energy, and we want it to power our schools.
We want it to create the kinds of experiences and jobs and so on that are in our communities. And then there are these assholes who wanna do something else, and we're not gonna let 'em.
Alix: I think that's super helpful. So one of the reasons I'm asking these questions is because a few weeks from us recording this conversation, there is a film that is premiering. It's actually already premiered, it premiered at Sundance, and I got to watch it then. Um, it does an exceptional job of narratively constructing this emphasis on near-term speculative stuff, whether it's AI's gonna cure cancer, or it's AI's gonna
kill us all. And it is an extremely effective piece of propaganda that essentially [00:26:00] unmoors the entire conversation away from these more concrete human concerns, human needs, and into some
fantasy space that is emotive. Like, I think people watching this film will feel an emotional reaction to it, but it's an emotional reaction that I think cannot be channeled towards any specific action towards companies that are responsible for the harms today, and the harms that probably will accrue.
There are a few people in the film that are wonderful experts and academics that have spent their entire lives trying to better understand what's coming, and what is here and now, and being the nerd that's like: no, AI can't do that, and no, AI will never be able to do that. No, these are not worthwhile technologies for us to continue to invest in.
It comes across as, like, a negative posturing, or, like, a hand-wringing leftist academic that's complaining about something while industry's over there innovating, and you just can't understand, [00:27:00] and you should, like, be quiet. And I don't quite know how to grapple with the negativeness of this, like, the lack of, and then also this
very effective way that it forces the actual, factual, knowledgeable people into a box. Um, because it's like, oh, you're not speculating, which means you're kind of behind the curve in a certain way.
Anat: Yeah. I mean, the way to do that is, this is gonna sound like I have forgotten what we're talking about, and it's a digression, which is always a good way to start, but I promise it's not.
So when we are trapped in a conversation about tough-on-crime: it is obvious and normal and understandable that human beings are concerned about their safety. That is normal. It is a natural response to want yourself to be safe, it's a natural response to want your family to be safe. That's very low on Maslow's hierarchy.
That does not [00:28:00] make you, like, a weirdo, outlier, bad person. And so forever and ever and ever, not just in the United States but in many countries, center-left parties and left-wing movements have been baited into this impossible conversation where the right is like: tough on crime. We're gonna be tough on crime, and we're gonna be more draconian, and we're gonna, you know, build more prisons, and we're gonna increase penalties, and we're gonna
sort of incarcerate more people and punish more and have more police. And you need more police, and you need more police,
Alix: because we're gonna keep you safe
Anat: because we're gonna keep you safe. And the population has been led to understand that more police equal more safety. When in fact we all know that, like, the safest neighborhoods in any country are the ones that have no police in them.
The higher up into the hills that you get, the fewer police you will see. And those are the neighborhoods where, like, nothing is going on, at least crime-wise. Well, there's plenty of crime going on. Yeah, it's just going on inside, like,
Alix: Totally.
Anat: It's computerized crime on the
Alix: computers inside those houses.
Anat: Exactly right. Ones and [00:29:00] zeros. That is ones and zeros. That's, like, a much larger scale of crime, and that's why they get a pass on it.
Alix: Yeah, yeah,
Anat: yeah. They're, they're not like out,
Alix: they're being polite with their crime.
Anat: That's right.
Alix: Yeah.
Anat: Well, they're not, it doesn't like take physical, you know, they don't have to sweat.
They're not perspiring. It's a, you know, deodorant-free crime, whatever, it doesn't require deodorant. So instead of engaging in this, like, we're gonna be tough on crime, they're gonna be tough on crime, we're gonna out-tough, or the way that you deal with crime is whatever, which you cannot win, 'cause you basically agreed to have the other side's argument.
Then what we have found actually works is talking about, let's get serious about safety, because people's underlying psychological desire in this case is for safety, which is a normal desire. So for example, in Minnesota. After the murder of George Floyd and the Black Lives Mattering resurgence in the summer of 2020 and crime and safety and protest on people's mind, the Republicans in [00:30:00] the state went like whole hog as they always do, talking about how Tim Walls owes an apology to suburban women, which is a dog whistle for white women, suburban moms, excuse me.
And instead of being like. No, there isn't crime in the cities, or no, there isn't serious crime or, you know, protests are fine. We created an entire campaign saying, I'm a suburban mom and what I'm serious about is safety, and I know what keeps us safe. It's living in communities where we look after each other.
I know what keeps us safe. It's having the people sworn to serve and protect us, act in our interest and treat us all as equals. I know what keeps us safe. It's having services so that. A really bad day doesn't turn into an epic disaster and so on. So in this case, what I would say is that that academic, they're being pitted against this like dude in a black turtleneck with the like weird microphone thing.
Alix: Mm-hmm. Pretty almost on the nose.
Anat: Yeah. I mean, I do live in the Bay, like [00:31:00] I have to see them at bars and stuff. So instead of saying, it's not gonna do this thing, it's not gonna work this way, it's not gonna work that way, depending on what the topic is, what you could say instead about the same thing is like, look, we live in a society in which we wanna do things.
We wanna innovate, we wanna be able to look at diseases that have plagued us and that are extraordinarily difficult to manage like cancer. We wanna figure out is there a better way? Is there a better response? Is there a better answer? And that's what we're seeking. And it turns out in some of these cases, there is, and that answer is you have to be able to have readily accessible healthcare so that you can have
Alix: yeah,
Anat: screenings and so that you can have genetic testing so you can plan ahead and not [00:32:00] get to stage four.
Another thing that we have to confront as a society: we wanna be able to innovate. We wanna be able to solve our problems. And seemingly one of our biggest problems is very, very, uh, powerful white men having absolute terror about death and wanting to make themselves live infinitely. And so we need to innovate and we need to figure out how we become a society that understands that humans die. We're mammals. Mammals die.
Alix: They're
Anat: mammals, and just as we are born, we shall also die. And we, it turns out, need to help people come to terms with the lifecycle. That's how we need to innovate.
Alix: I really love, I love the, the passive aggression in that last one.
That's
Anat: really good. Mean
Alix: I'm
Anat: into it. I mean, could you tell, I thought I was,
Alix: I'm into it.
Anat: I thought it was stealth. I thought I was being stealth.
Alix: No, I'm into [00:33:00] it. I really like that. I also like, so the example with, I'm a suburban mom and I want to be safe.
Anat: Yeah. I, I know how to be safe.
Alix: I know how to be safe.
In that case, what, what was, 'cause you're also making me realize that when you say, um, it needs to be obvious and specific what you want the person to do, based on the belief that you help them arrive at, that it doesn't have to be, this is a specific regulatory action that I want taken. Yeah. It can also be, I don't want regulatory action or policies written that result in part of the city budget going to a new prison.
Like, exactly. It can be a, it can be an "I don't want," not just a... Absolutely. Yeah. It's interesting.
Anat: But again, what you wanna do is figure out what is the underlying psychological need. Mm-hmm. You know, it's basic cognitive behavioral therapy. Right. You can't have a no without a yes. It's also basic parenting.
And so you can't replace something with nothing.
Alix: Yeah.
Anat: And if you understand, oh, this is what's going on for people, what's [00:34:00] going on for people is they do want better treatments or they do want. Um, care to be less painful or they do want, you know, whatever it is. And you know, obviously some of these things that they want are like completely ridiculous because they want like a, a silicon sex robot.
And so in that case, what you have to do is figure out what's underlying that. Like what is the pathology there and how is our society gonna treat it?
Alix: Yeah, I think what's really interesting is there's a part in the film, which I really don't like talking about too much 'cause it's just not worth thinking about.
But, um, there's a part in the film where the director says something like, oh, so we could live in a disease-free world, or, literally, I don't even have to do the second half of that. Yes. He literally said that when talking about the possibility for AI to support in drug discovery, and it's this extremely devoid statement where you're like, no, wait, like right now, right now, every minute
A [00:35:00] kid dies of malaria in Africa. Yeah. And we have a vaccine. We have a vaccine for malaria.
I
Anat: mean, have you seen the outbreak of measles happening?
Alix: I know, I know, I know. But it's this, it's this, it's this ability of tech-adjacent people to talk about positive futures in this way that is so devoid of critical thinking that like, I don't even know where to begin.
But that's a really interesting entry point. As you were just describing that message, I'm like, oh, you could arrive at the message of, um, you know, we wanna live in a world where people have access to quality healthcare, and when there's an innovation, we want it to be diffused in society so that everyone... I mean, like, I can imagine making it a much more tangible argument about physical reality.
And by juxtaposing it, make what they're saying look so stupid.
Anat: Yeah. And also let's not overlook the importance of ridicule. Like, yeah.
Alix: Yes.
Anat: I mean,
Alix: yeah.
Anat: They don't wanna have ai. They wanna be [00:36:00] ai.
Alix: Yeah.
Anat: Like you want to be a robot. Like you don't wanna be George Jetson, you wanna be, what's the name of the cleaning robot that they had in the Jetsons?
Alix: We don't remember. That's the thing.
Anat: Wow. This analogy was gonna be so good. And now it's Rosie. Her name was Rosie. Rosie.
Alix: Okay. You wanna be Rosie?
Anat: Yeah. I'm like I, I don't wanna be Rosie. I
Alix: don't wanna be
Anat: Rosie.
Alix: I'm
Anat: not trying to be Rosie,
Alix: but it's so interesting, this like inhumane... like, so a lot of, um, another sort of narrative arc that I think a lot about is the emphasis on humanity.
So the word humanity gets used a lot when people talk about AI. They've anchored into this idea that they're stewards of bringing about this new better world. And I feel like, I don't know, I've been thinking a lot about their personal reputations. Mm-hmm. And like what the public expects from them and wants from them and, like, thinks about them.
And I think the most recent example of that being Dario Amodei being held up as a hero in [00:37:00] relation to him trying to sell his shitty products to the Department of War to kill people, and knowingly selling that to an increasingly illegally operating executive branch. And then the public consciousness around it is quit ChatGPT, sign up for Claude, because it's like the better one, even though they're all just like terrible.
And while I think there's this need to have a vision to help people see, so that you can activate larger and larger groups of people to motivate better policy and motivate better regulation, I still feel like there's this general vibe around these people. Mm-hmm. That I would love to hear your take on.
Like what types of messaging, and maybe it's ridicule, maybe it's other sorts of forms of creating alternative spaces that people can step into where they see what actual good looks like. Like what do you think we do about this hero-complex, tech-bro, like savior thing around Musk, around Amodei, around Altman, around all these [00:38:00] dudes.
Anat: Yeah. I think that one potential way of approaching it is to turn them into cartoon villains. You know, that they're the Lex Luthor of it all, and other kinds of iconic, very technology-obsessed villains. I mean, there's a lot of that in Western literature and especially in superhero stuff, you know, they often like live in some weird... and, you know, there's obviously like a technology fetish among the people that are revered as heroes too.
That's true. Like Batman's very into tech. Next you're gonna ask me about sports, or next I'm gonna try to produce a sports analogy, which I'm going, I
Alix: wouldn't,
Anat: which I like am only slightly... I mean, this like Marvel attempt of mine. Like
Alix: You're doing great.
Anat: Yeah,
Alix: yeah.
Anat: Like
Alix: I know
Anat: nothing about this. Yeah.
I
Alix: know
Anat: less about sports, so like I'm now on duty.
Alix: But like Iron Man is like actually partly inspired by Musk as a character, and I think that that's a really great example of where that type of iconography almost, or like narrative architecture, has been [00:39:00] used in that way to support and like uplift the sense that they're heroes.
Yeah, but I think the, I think, yeah. Yeah.
Anat: So I think, you know, one option, don't do that. Don't do that. One option is like to turn that on its head and make them be these weird robot villains, nonhuman, like incapable of kind of real connection and emotion, and that, you know, like I said before, they don't wanna have AI.
They want to be AI. They want to exit humanity, yeah, like entirely. And we couldn't possibly have them making decisions about our lives because
Alix: they're not on our team.
Anat: Right. They're not on our team. So that's kind of like one narrative attempt I would make. Mm-hmm. Another narrative attempt I would make is every single technology, not every, but like a lot of technologies can either be used for good or for ill. It depends like what you wanna do with them. You know, the printing press can print Mein Kampf, or it can print like [00:40:00] Maya Angelou. Like, you can do different things with the same technology, and some of them are like very evil, and some of them are like extraordinarily important contributions to like humanity and thought and so on. And so the question is really around power. You know, to come back, I said earlier I was talking about the production of cars. Henry Ford, famous bad guy, you know, used what was at that point a very new technology in the way that he innovated.
It was a huge innovation, the way that they did the assembly line and not having like a single employee, mm-hmm, make all aspects of a car, but rather each person is kind of hyper-specialized in their little micro task. And that just like makes the car assembly so much faster. Taylorism, it's called. And that allowed the production of cars much more readily, much more cheaply. And so then people could buy cars and so on. But it also, [00:41:00] without worker power... and I mean, you want to talk about a specific case, like if you look at Detroit just as a single city, Detroit used to be the United States' third most populous city.
Alix: Oh my God.
Anat: Yes.
Alix: What
Anat: is that? That is wow. Where Motown happened.
Alix: Yeah.
Anat: Detroit has some of the most legendarily gorgeous architecture, Art Deco. Detroit had a thriving, incredible Black middle class. Detroit was the shit. It is again. I love Detroit. I'm from Wisconsin, so for me to admit this about a Michigan city, you know it must be true.
Um, I must really care. When workers in the auto industry had worker power, could be organized, were organized in unions, and were able to negotiate a fair return on their work, that was really important and significant. Mm-hmm. And I'm not trying to say we've ever had that as like some sort of nirvana, but a lot of this boils down [00:42:00] to does labor have enough collective power
Marion: mm-hmm.
Anat: That it can actually do something about capital. Mm-hmm. And again, these are all questions of power. It's really, who is this technology gonna benefit? Who's gonna be in charge? Mm-hmm. Who's gonna make decisions about it, and what will it be used for? Will it be used to further enrich truly a handful of people so they can purchase themselves politicians and create what we have now, which is a MAGA murder regime that is abducting, assaulting, and came into power doing it and continues.
You know, there's a through line between Epstein, ICE, and the war on Iran. It's all the same thing. It's all the same thing. Are we going to empower this handful of people to be this MAGA murder regime, at least here domestically? Or will we decide that it's our choices that determine what comes next, and organize collectively so that we get to be in [00:43:00] charge of the thing?
Alix: I really, really like that as a frame, and I think that it connects to some aspects of Luddism. And I think the Taylorism metaphor is really interesting, and also the effect over time on, when an industry gets an advantage over workers, what happens. Mm-hmm. Um, and also what happens when we make presumptions about industrial policy in this country, about the economic future of a particular area, based on a single industry is also, I feel like, quite interesting for thinking about in terms of AI.
Because it does feel like this over-concentration on valuation of these companies inside our overall economy is a huge vulnerability that we are not dealing with at all. And I think that there's a systemic risk here that is quite scary. And I think if the bubble does burst, you know it's going to affect everyday people.
It's gonna affect pensions, it's gonna affect... everyone's exposed to the risk, but they're not actually getting any of the benefit. That's just like a really powerful comparison. Cool. [00:44:00] Okay. Well, there was another film that premiered at Sundance called Ghost in the Machine. It's a documentary about the history of eugenics and how AI is essentially a descendant, a direct descendant, of eugenics.
It's not an affiliate. It's like... AI has come from eugenics. And she and I have been having conversations over the last few weeks about the power for an artist or a filmmaker within this current moment to say that they're not gonna use AI, and then the way that that is so triggering because of the dominant narrative around these things, but also the way that it both brings about, for an individual that says no, this sense of agency in a moment that can feel quite abstract and overwhelming. But I wonder how much it's also this finger trap of like, what can you do as an individual, and like gets us out of the structural piece. So any thoughts on, like, refusal in this moment.
Like, is that politically, is it a political dead end because it makes it seem like you're not being real, and you're not actually, you're being naive about where things are going, and like no one's gonna listen because of da, da, da, da. [00:45:00] Or is it an important signal to build a bigger base of power around some of these concerns and questions?
Anat: Well, first, because I'm me, I'm gonna reframe it. It's a right of refusal, but it is also a freedom of creation. What she is claiming is her own freedom to create things by humans, for humans. She wants to create things that are made by humans, for humans. And so first there's like the affirmation of what you are doing.
Mm-hmm. And what you are for. And I'm gonna share a personal ish story. So I have, um, two kids. My older kid just started university. And both in high school and in university, he is absolutely adamantly against any form of ai. And in fact, when he has had like assignments, because sometimes teachers will assign things that like include AI components, it makes him furious.
Like he doesn't want anything to do with it. And obviously there's [00:46:00] a ton going on, kids just writing their essays and, you know, doing all sorts of things with AI. If you view education, like if you understand education, children, people, learners, I guess even when they're not children, as lamps to be lit and not vessels to be filled, and more importantly, lamps to be lit rather than assignments to be completed in order to get a degree,
Then you wanna write your own essays because you want to learn things, you want to read things, you want to think about things, you want to process information, you wanna analyze it, you wanna come to interesting conclusions. And I understand that some people will say, well, you can use AI for that. Like you can use it to prompt and you can use it to spur along your thinking.
And maybe you can, and maybe you can't. I also don't do it. But. I think when people are making [00:47:00] choices, we have to frame them around what it is you're getting, not what it is you're refusing. Mm-hmm. Or what it is you're giving up.
Alix: Mm-hmm.
Anat: But this is my choice to have this come from me.
Alix: Mm-hmm.
Anat: To reflect not just my knowledge, but my elbow grease.
Not just my talents, but my perseverance, my frustrations, my writer's block. Mm-hmm. And the way that I powered through it. Because all of that is what creates a piece of work, whatever that work is, that I feel I can truly say is mine, and I can truly say exists in this world like no other. Maybe I'm being naive, but the more that we have Mar-a-Lago face.
Alix: Oh gosh.
Anat: I keep, I keep taking you places that you are not expecting. I was
Alix: not expecting
Anat: that. Yeah, you're welcome. Maybe there should be a content warning on
Alix: this entire thing. Yeah. I feel like [00:48:00] maybe yeah, we could put some filters on as well, but
Anat: carry out, I mean, you're gonna edit it, you know, you can, you can take out,
Alix: oh God, no, I'm not taking this out.
Anat: Okay.
Alix: Keep going. Mar-a-Lago face.
Anat: Yeah, Mar-a-Lago face and like heroin-chic body, which is coming back in, and just homogeneity. Yeah. This, this is my, yeah, way of saying homogeneity. Yeah. Across different things. My hope is that there's gonna be a rebellion.
Alix: Yeah.
Anat: Because when things are processed off of, you know, LLMs, you are regressing to the mean, and eventually everyone's gonna have the same essay.
Alix: Yeah.
Anat: I understand, you've like taught the thing to think like you and whatever, like I kind of call bullshit on that. So there are going to be some people, and you know, it's interesting that in movements of art, and I say art broadly speaking, like when we look at innovations in music and the way that that has shifted over time, you know, from like baroque [00:49:00] to romanticism to classical to like modern.
Moments of innovation in different kinds of art forms have come when an art form is so calcified that there becomes like a level of homogeneity to it. Then someone gets pissed off about that, or annoyed or whatever, and they're like, I don't wanna be a cubist anymore. I don't wanna be an impressionist anymore.
I don't wanna be a modernist anymore; I'll be a postmodernist. And maybe that's what's gonna happen. Maybe what's gonna happen is that people are gonna be pissed off about Mar-a-Lago face and they're gonna stop Botoxing. I dunno.
Alix: The aesthetics of it is really interesting. Um, and I think also watching. I don't know.
Watching like Mark Zuckerberg learn how to do like martial arts, or watching Jeff Bezos put his little stupid gold chain on. They're both
Anat: becoming the same person. Like they really are becoming the same person the way that they look, the way that they talk, the things that they do. And that's boring.
Alix: Yeah,
Anat: like it's just boring. I [00:50:00] guess this isn't a lot to hang my hat on, but like, I just... boring.
Alix: Boring doesn't win.
Anat: I don't think people want boring.
Alix: Yeah. Yeah.
Anat: The problem is that when the only other option besides boring, and to be clear, like I don't mean to make light about planetary destruction. Like I understand that this is more than boring.
Yeah. Like it's real bad, and I'm not trying to say it isn't. But aesthetically, like, they suck. Yeah. They're like weird and uninteresting and like tiresome and so on. Careless People, as the book said, and as we learned from its really good title.
Alix: Yeah.
Anat: Um, I don't think people wanna be boring and I don't think they wanna live in boring, and that's why.
We have to give people another option and that other option cannot be Ah,
Alix: yeah,
Anat: that other option. I mean, again, if you want people to come to your party, throw a better party. If it isn't AI, then what is it? And like, how is it a really nice time?
Alix: Yeah. Thank you so much. This has
Anat: been
Alix: a wild, [00:51:00] surprising journey.
Anat: The only person who's gonna bring up Mar-a-Lago face in these conversations.
Alix: I mean, I feel like, I mean, I might now, from now on,
Anat: yeah. These people are, these people suck. Like they can't get laid. They're like, they suck.
Alix: Thanks for listening, and I'm sure you may be thinking, yes, AI discourse is bad, but what do we do about it? How do we combat the relentless AI propaganda machine? Well, next week we're gonna hear from Valerie Veach, a documentary filmmaker who made Ghost in the Machine, which is a history of how we got the AI we have today.
And it is very eye-opening. I thought I understood a lot of the underlying racist and misogynistic structures that underpin the technologies that we see today. Um, I had no idea. Um, so we're gonna talk to Valerie about the film and what she learned making it, but also her experience as a creative that's being really encouraged and pushed to use AI as part of her [00:52:00] craft, and how she's engaging in the politics of refusal.
So we will see you next week. Thank you to Georgia Iacovou and Sarah Myles for producing this episode. And to the team, Zoe Trout, Kushal Dev, and Marion Wellington, who helped put it all together.
