
Show Notes
Next time someone tells you that we can build data centres in space, show them this podcast episode — because it is literally impossible.
More like this: AI Safety’s Spiral of Urgency w/ Shazeda Ahmed
Or better yet, recommend that they buy More Everything Forever, Adam Becker’s latest book exploring all the fantasies and promises coming out of Silicon Valley. This episode is the first in our Fantasy Factory series, where we explore how and why tech evangelists manufacture consent about AI’s boom, doom, and inevitability.
The futures that AI men want for us — e.g. a disembodied immortal life in AI utopia — are all scientifically impossible. Even the worst mass-extinction event on Earth would be more pleasant than trying to live on Mars. Yes, space is very cold, but that doesn’t mean we should put data centres out there! Adam explains where these narratives are coming from, who they benefit, and why they exist outside the laws of physics.
Further reading & resources:
- More about Adam Becker
- Buy More Everything Forever
- For All Mankind (TV series)
- Our video on the Iran war
**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**
Computer Says Maybe is produced by Georgia Iacovou, Kushal Dev, Marion Wellington, Sarah Myles, Van Newman, and Zoe Trout
Transcript
Alix: Hey there. Welcome to Computer Says Maybe. This is your host, Alix Dunn. And before I get into our regularly scheduled programming, I wanna briefly acknowledge something else I have been thinking about a lot lately, and that we at the Maybe have been thinking about a lot lately: we have been following what's going on in Iran and the broader question of illegal wars.
And the growing role of AI companies like Anthropic, like OpenAI, in military contexts. You know, it's been hard to miss coverage of this back-and-forth telenovela between OpenAI and Anthropic, and who's the good guy and who's the bad guy, grabbing headlines. And I think the thing we keep sitting with is that in many ways, this feels so beside the point.
What really matters is that we seem to be approaching this precipice when it comes to integrating large language models into warfare. I realize this is something that's been happening for the last few years, that in many ways administratively, the American military has been using LLMs in all kinds of ways, and I just think we need to return to some basic questions of what we want when it comes to large-scale violence and warfare.
And I don't think we want large language models, transformer-architecture chatbots, being used to make decisions about who is going to be targeted, giving us this veneer over an American administration that is essentially acting as a rogue state on the world stage, and that is able to sort of technify these interventions that are obviously so blatantly wrong. By incorporating AI, this new fancy form of innovation, and engaging these companies that are supposed to be at the cutting edge of frontier technologies, it covers up what's actually happening and makes it worse.
And I just wanna name that as something we're thinking about. We're gonna be doing more work on this. We had a newsletter come out this week about it. I also did maybe a little bit of a cringe YouTube video, um, laying out some of these thoughts. I have found it really kind of like the straw that's broken the camel's back in my mind: this nexus of AI companies and Trump administration interventions, and this sort of new era of American empire, feels really alarming, and I think it's something that we need to talk about in clearer terms.
Rather than talking about exactly how these technologies can be integrated into war, we need to talk about whether we want them in these systems at all, and what it tells us about where the world is headed when they're being involved in things like the kidnapping of Maduro and the illegal invasion of Venezuela, the war involving the US government and Israel and Iran, et cetera, et cetera. I think there's a time and a place for debating the sort of technical intricacies of how a technology can be used.
So when might it be appropriate? Is it appropriate for the bureaucracy of the Pentagon to use LLMs to help with, you know, paperwork? Is it appropriate for large language models to help with target acquisition, where there's actually a decision being made about who is going to be attacked with lethal force?
Is it appropriate, when someone is being spied on, for large language models to, uh, transcribe those surveillance intercepts and actually inform targeting? I don't wanna talk about that right now. Like, I know there's a lot of wonderful people that are working on some of those clear specifications and clear rules of engagement, but I just think there's a really important line that we have to draw now: we cannot live in a world where large language models are used to direct lethal force.
We know they're unreliable technologies. We know that they're not even reliable enough right now to be a successful consumer or enterprise product. We cannot, at this stage of their development, incorporate them into warfare, particularly at a time when we have an executive branch of the United States government that is waging illegal wars.
And I think that the timing of all these things at once makes it really overwhelming to even wrap your head around. It's even more pressing and more important to not focus on the companies, to not focus on the products, to not focus on specific use cases, but to say: it is not okay to incorporate large language models in war.
And the more people who speak up and say that clearly, I think the more it will matter. I think we've kind of been frogs in boiling water, and I'm hoping that we can make a clearer stance in a way that influences at least our broad understanding of what's okay. I know in some ways the ship has sailed, but I do think that there are red lines we need to draw, because if we do not draw them now, I'm really terrified as to where this might go and how much worse this could get.
And I think the idea of using Gaza as an inspiration for all future warfare feels like a bridge too far that we have to say no to. [00:05:00] We're gonna be covering this more with greater depth and substance. I wanna take more time with it. There will be more coming, but I just wanna say: I just don't think we have to live in a world where large language models are used to direct lethal force.
I think if there was ever an administration that makes clear what happens when a company like Anthropic or OpenAI is willing to sell its products, its unreliable products that it can't even quite make a business case for in a direct-to-consumer product because they're so unreliable, and have those integrated into the most consequential arenas in the world, it's this one: an administration that has shown that it doesn't care about the law, it doesn't care about civilians, it doesn't care about accuracy. It's not interested in continuously improving these technologies. It's not interested in the intricacies of how these technologies can be deployed, in making sure that those lines are respected.
So when you combine those dimensions of this administration with the features of this [00:06:00] technology, which is largely that it is extremely unreliable, it's a really dangerous precedent we're setting right now. And I hope that if it bothers you, you will strike up a conversation with other people around you, because I think the more we have these conversations and the more we say what's happening, the clearer it becomes that it's wrong and that we should try and stop it.
So stay tuned. There'll be more on this soon, but I did wanna name that up top. The episode this week is related, but kind of in an adjacent way. So in a few weeks' time, there is a film that is being released called the AI Doc.
It's being released on March 27th. And I had the privilege, I'll use that word for now, of seeing the film in advance of Sundance, and I, um, I struggled with it, because I think it represents all of the narrative play around AI that I've seen and that I think is really problematic. It has very high production [00:07:00] values, and that's kind of all I can say about it that's positive.
It conveys an AI world that tech billionaires want us to see. And given the backers, I suspect it will drive a lot of public attention to these narratives. And so, uh, we're making a collection of episodes around its release to try and challenge the prevailing AI themes that we seem to pass around our society, kind of like the common cold.
So today we're gonna start with Adam Becker, who is featured in the upcoming documentary Ghost in the Machine, which you'll hear about in a few weeks. And he recently published More Everything Forever. In that book, he challenges the boom and hype of AI fantasies with actual science. He's an astrophysicist.
Um, so listen to this episode and you'll be ready to correct the next person who says, we can solve all of our problems by building data centers in space.
You know, when you go to an event and your friends are like, Hey, there's somebody that's gonna be there that you have to meet. That happened at Sundance, um, with Adam Becker, uh, who wrote [00:08:00] Everything More Forever, which has been in my queue. No. Okay. Don't tell me. God, this is like a test in front of an author.
That's so embarrassing. Every... so it's not Everything More Forever?
Adam: No, you're really close. My book is called More Everything Forever.
Alix: God, I was so close.
Adam: Okay. You should really put this in the podcast.
Alix: Yeah, no, it, it, it will be included. 'cause I feel like
Adam: excellent.
Alix: Georgia and Sarah, anytime I do something embarrassing, they love to um, spice it up by including it.
Um, but anyway, we have, so we have a mutual friend, Shazeda Ahmed, and we will link to an interview with her in the show notes, 'cause you'll immediately understand why her work is relevant to a lot of Adam's work. And anyway, Adam and I have been hanging out for the last few days, including for the premiere of Valerie Beach's film Ghost in the Machine, in which we both appear.
So it's been very cool to kind of hang out with some AI hype slide-tacklers, um, in this space. Um, anyway, so we decided that we would sit down and talk [00:09:00] more, because Adam's book is obviously super relevant to lots of the themes we discuss. Very excited, uh, to talk to you about a couple things, but for starters, I wanna talk to you about why you wrote the book.
Adam: In a sense, I wrote the book because I've lived in the Bay Area for too long, uh, and I got frustrated. You know, I've been out there now for almost 15 years. I just kept meeting people who believed weird stuff about the future, and at first that just seemed weird and kind of harmless. And then, you know, I'm a science journalist, and I kept seeing people report on the ideas that the leaders of the tech industry had,
and have, about the future that, you know, are just nonsense. Clearly not gonna happen, for all sorts of reasons, both, you know, scientific and sociological and everything in between. Nonetheless, these ideas were being reported on really credulously. Like, yeah, that's definitely gonna happen. Or, if not definitely gonna happen,
like, this is a [00:10:00] reasonable idea, because this smart person, Jeff Bezos, Elon Musk, Marc Andreessen, you name it, believes that this is, you know, the future, and says, you know, this is the future, or a good future, or whatever. And of course none of that's true. And I think that there's this idea that a lot of people have, which is finally starting to break, that if you're wealthy, you're smart, and if you are a tech CEO, you must know a lot about science. And it's just not true.
The other thing, though, is, like I said, I've lived in the Bay Area for too long. I've gone to a lot of parties and I've met a lot of very strange people out there, and I kept running into members of these subcultures, the effective altruists, the rationalists. And I've been tracking them actually since before I even moved out to the Bay, because people linked me to their stuff and said, Adam, you might be interested in this because you're a massive nerd.
And, uh, and they [00:11:00] were right. I was interested in the sense that, like, you know, from
Alix: an anthropological
Adam: perspective, right, exactly. It was like, oh yeah, I, I think I, I remember the first time I encountered some of this stuff. This is back when I was in grad school, and I remember looking at it and thinking, wow, that's not true, but this is really, really weird.
These people are crazy. It never occurred to me that they might end up with massive, like, influence on the world. And, uh, and here we are. So that's, that's why I wrote the book.
Alix: It's great. I've already admitted to Adam that I'm only like a third of the way through. I'll finish it on the way home. Uh, and it has been in my list for a very long time, and I'm glad that I started reading it before meeting you or even knowing I was gonna get to meet you.
But one of my favorite parts so far in the book is when you use your credibility as an astrophysicist to explain why the pursuit of artificial general intelligence just doesn't make sense. I hadn't actually heard [00:12:00] someone with that level of scientific background kind of systematically break down the constraints, the hardware constraints, and essentially some of the neuroscience around it, which you also get into, which I know is not your area of expertise, but you do a good job.
Do you wanna explain why AGI, artificial general intelligence, is just not a reasonable thing to think will ever appear in our world or be possible to build?
Adam: Yeah, I mean, I wanna be careful about that 'cause there's a few things, right? You know, do I think that we could ever conceivably build a machine that can do and say, and even feel, all of the ways and things that, you know, people do? Maybe.
But that's really gonna depend on your definition of machine. I don't think it's gonna be anything like our modern computers. Of course, it also depends on what you mean by AGI, which is part of the problem here. It's not [00:13:00] well-defined. One of the few attempts to define it that I've seen is in the OpenAI charter, like the original charter. They define AGI as a machine that can reproduce any economically productive activity that humans engage in, which is like such a bad definition.
It's both way too vague and way too narrow, right? There's all sorts of economically unproductive things that humans do that are an important part of the human experience, like taking a long walk with a friend. Um, God. There was a paper not that long ago where, I think, people invested in this idea tried to define AGI. You know, they knew that people were dinging them for not having a good definition.
And then, you know, the paper came out and people pointed out that it had hallucinated references. So, yeah, uh, you know, the attempt to define AGI is going just great. But I think, as with so many of these fantasies, the [00:14:00] real definition is one that lies in science fiction, right? AGI is computers or robots like the ones that we see in sci-fi, that, you know, are treated as living beings or, you know, conscious beings.
My favorite one, right? Commander Data from Star Trek. I think that that's what these people mean when they say that. And you can't just say, we are building Commander Data. So instead they say, oh, AGI is this. And of course there's all sorts of stuff around the definition of intelligence, and that word general in general intelligence, the history of eugenics in there, which is of course in Valerie's film.
And also I go through some of it in my book. But putting all of that to the side, um, the number of connections in the human brain is large. I don't remember the number offhand. The number of [00:15:00] neurons in the brain is around a hundred billion, I believe. And so the number of connections in the brain between neurons, like the synapses, that's something I think a couple of orders of magnitude higher, uh, like at least one order of magnitude higher. I don't remember.
Alix: There's a lot going on,
Adam: but there's a lot in the brain. Yeah, there's a lot.
Alix: Yeah.
Adam: But those numbers are not completely outside of the realm of, you know, the number of transistors that you can get going in a modern computer. The issue is that a transistor is not really very much like a neuron, or even like a synapse, because, you know, synapses are analog, not digital.
They have, you know, many states, and there's also many things that influence each synapse, including, you know, most notably perhaps neurotransmitters. And, you know, that means that you're gonna have to model the brain really on a molecular level if you're gonna try to reproduce everything the brain does, and [00:16:00] that is not computationally feasible. The number of molecules in the brain is, well, astronomical is actually not the right word for it, 'cause it's larger than that. It's a truly vast number, you know, something like a trillion trillion molecules in the brain. And then of course the way that they interact is very complex. And some people say, oh yeah, but this is what quantum computers will help us do. No, I'm sorry, there I can just be a physicist and say no. Yeah, now you're
Alix: back in the
Adam: physics realm. That's, yeah. That's, no, you know, not saying quantum computers won't happen, but no, they're not gonna solve that problem. And of course there's more in the brain than just neurons and synapses and neurotransmitters.
There's also glial cells and other stuff. And would that even be enough? Right? Because we are not our brains. We are our bodies in our environments. You know, I remember one of these stupid memes, I mean, there are a lot of great memes, I love memes, but a stupid meme that was running around years [00:17:00] ago that said, like, you know, your body is just a space suit for this,
and it was just a picture of a brain and a bunch of nerves coming off of it, like, you know, a nervous system. That's not true. We are our bodies, and we are our bodies and our environment. And I don't just mean, like, we need our environment because otherwise we die without, you know, air and water and food and whatnot.
Though that's true. I also mean that the complex interaction between ourselves and our environment is an important part of what we mean when we try to talk about something like intelligence, and social interaction is part of that as well. You know, I think one of the pernicious myths about intelligence is that it's something primarily individual rather than social.
This is sort of denying the great truth of human history, which is that [00:18:00] our great power as a species is the ability to coordinate and work together. So this is all to say that these fantasies of simulating a brain in a computer and thus achieving artificial intelligence? No.
Alix: Restating that, it's basically saying that the level of complexity in the physical aspects, and I guess the chemical aspects, of how we as humans are intelligent, if you try to make a corollary infrastructure for a computer, that is a physical impossibility. It's just not possible to do. And there are people pursuing paths that might not necessarily take that approach, but generally the vibe around talking about the pursuit of artificial general intelligence seems to deny the complexity of intelligence, the amazingness of the human body and the human experience, and that it's kind of silly to try and [00:19:00] reduce it into something that's quantitative and computational.
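The orders of magnitude behind this argument can be sketched in a few lines of Python. The figures below are commonly cited ballpark estimates and illustrative assumptions, not numbers from the episode (the transistor count especially is just an order-of-magnitude guess for a large modern chip):

```python
import math

# Commonly cited ballpark figures (illustrative assumptions, not from the episode)
neurons = 8.6e10      # roughly 86 billion neurons in a human brain
synapses = 1.0e14     # on the order of 100 trillion synapses
transistors = 2.0e11  # rough order of magnitude for a large modern chip
molecules = 1.0e24    # "a trillion trillion" molecules, as Adam puts it

# Synapses sit about three orders of magnitude above the neuron count,
# and within a few orders of magnitude of transistor counts, which is why
# the raw comparison looks deceptively reachable.
print(f"synapses / neurons      ~ 10^{math.log10(synapses / neurons):.1f}")
print(f"synapses / transistors  ~ 10^{math.log10(synapses / transistors):.1f}")

# But modelling the brain at the molecular level, as Adam argues you would
# need to, blows the budget by roughly thirteen orders of magnitude.
print(f"molecules / transistors ~ 10^{math.log10(molecules / transistors):.1f}")
```

The point of the sketch is the last line: even granting the generous synapse-to-transistor comparison, molecular-level simulation is roughly a factor of 10¹³ beyond it.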
Adam: Yeah, I think I want to try to be as generous to these people who believe in this stuff as possible,
Alix: which is really nice of you. I don't really know why, but carry on.
Adam: Well, just because I think, you know, if you try that, then you can tear them to shreds anyway, and then you're done. Um, but like, I think that they would say something like, well, but you know, we're not necessarily trying to reproduce the brain.
We're just trying to reproduce, you know, all of the economically productive activities that humans engage in. And, you know, can you simulate a brain with the level of detail necessary in any computer we have, or anything that I think we would call a computer that's coming down the line? No. Moore's law is over.
And so we know that, you know, we're not going to get enough transistors packed into anything like a computer to be able to simulate a brain with the level of fidelity needed to achieve this dream, which I actually don't even think would work, because we're more than just our brains. But can [00:20:00] you get a computer to, like, do everything that a human does?
I mean, I can't mathematically prove that you can't do that. But you know, the human body and the human brain are quite remarkable things, and evolution has worked on them for a very long time to get them down to a level of efficiency in, you know, energy usage, space, and whatnot. I find it difficult to believe that something so much less complex
than a brain and a body is going to be able to do all of the same things that we do, because, make no mistake, these things are way less complex. Way less complex. And I mean, I realize that sounds really strange given how complex modern computers are. Yeah, they are phenomenally complex. But we are even [00:21:00] more complex.
Alix: Yeah. Pales in comparison.
Adam: Yeah, exactly.
Alix: A lot of your book, and it seems like a lot of your focus, is sort of exploring what the function of this fantasy and this pursuit is. So do you wanna describe a little bit about why you think these communities are so driven by this idea that feels extremely implausible and maybe entirely misguided?
Adam: Yeah, I mean, can I get at that question in a roundabout way? Yeah,
Alix: yeah, that's fine.
Adam: One of the things that I spend a lot of time thinking about as a writer is first sentences, because I, I feel like they're very important and I, I like to get it right and try to nail it. I spent a long time trying to figure out what the first sentence of this book was gonna be.
And eventually, you know, I got there and realized, okay, this one's it. And the reason is that it goes after exactly what you're asking about. And the sentence is: the dream is always the same. Go to [00:22:00] space and live forever. These are people who fear death, which, as I say multiple times in the book, I think is perfectly natural.
I fear death. I don't wanna die anytime soon, but I don't allow that fear of death to become the overriding consideration in my life, because, and this is a cliche, but it's true, if you allow your fear of death to run your life, then you don't really live. And I think that this fear of an ending, to life, to experience, to being in the world and having the ability to influence what's in the world, is quite terrifying for, you know, these tech billionaires who are accustomed to having all of this control.
But you can't buy your way out of death. And, uh, another cliche: you can't take it with you. But also these sorts of subcultures that have had this complex interplay of influence with the tech billionaires, these people like the [00:23:00] effective altruists, the rationalists, extropians, transhumanists, you name it.
And in the book I call all of that the ideology of technological salvation. I chose the word salvation very specifically. I'm not Christian. I've never been Christian. I wasn't raised Christian. My family's Jewish. But when I was thinking about how to think about this, I realized salvation might be the word I wanted, so I had to go talk to a Christian friend. I was like, hey, what does salvation mean to Christians again?
Um, and she was like, yep, you're right. That's the right word. That's the one you want. And I'm like, excellent. In any event, yeah, there is this desire for transcendence, right? It's the body that dies. And so if you have a belief that you are more than your body, that you are in fact a pattern, that you are software running on the hardware that is your brain, then if you could have that software running somewhere [00:24:00] else, then you could make a copy of yourself. You could live indefinitely or forever.
Some of the more responsible of the people who believe in this stuff don't talk about living forever. They talk about living until the heat death of the universe. But others literally talk about living forever and think that you'll find a way around the heat death of the universe, which, again, that's actually very solidly in my scientific wheelhouse: no, that's not happening. Next question. But, like, um.
It's a desire to transcend all limits. Uh, and it is, I think, very religiously coded, right? You know, going to space, space being the heavens; the AI superintelligence having God-like powers. One of these articles of faith, taken as given in many of these communities and by many of these people, is that the AI will be so powerful that it could solve any problem [00:25:00] and would be able to fend off any threat to humanity, if only we can ensure that the AI is working in humanity's best interest.
There are all sorts of problems that throwing more intelligence at the problem doesn't solve, right? Even putting aside all of the problems with the idea of AGI, all of the problems with, you know, the idea of intelligence, I see no reason to believe that, you know, vast intelligence means that you're gonna be safe from all threats forever.
But that does sound like God.
Alix: I think this is a great, like, explanation as to why they're personally motivated for this project. Yes. Because basically they've reached such a peak. Ah, yes. Can you talk about how it functions as a business strategy?
Adam: Oh, absolutely. Yes. Yeah, no, I mean, I was, I was going down.
Uh,
Alix: no, no, no. I think it's super interesting. Yeah. Yeah. I think it actually is directly connected to why they choose to build these businesses.
Adam: Oh,
Alix: totally. Yeah. Like how they
Adam: Yeah, yeah,
Alix: absolutely. How this becomes a shield or a strategy or sort of a set of tactics for accessing resources.
Adam: Absolutely. [00:26:00] Yeah.
I mean, and that goes back to this idea of salvation as well, right? Because, like, if what you're building is something that has this power to save humanity from all threats, solve all problems. We can go to space and live forever. We can transcend all limits. We can defeat death and build God. What wouldn't be worth that, right?
If that's what's at stake. Any price is worth paying to get there, and no problem could be as important as the problems that stand in the way of achieving that goal. And you see this very explicitly at work in the rhetoric of one Elon Musk talking about Mars. Right. He says he wants to make humanity interplanetary to preserve the light of consciousness.
And so this is of course part of this whole techno-utopian ethos [00:27:00] of technological salvation. You know, there are reports from people who have worked at SpaceX saying, you know, the rhetoric used within the company is: we have to go as fast as possible and ignore all possible damage that we could do along the way, in order to get humanity to Mars as quickly as possible, because Earth is doomed. And Musk talks about it in public in the same way.
And when people are praising Musk, they talk about it in the same way. I mean, what was it? I think it was Larry Page who said that Musk's ambition to send humanity to Mars is philanthropic. Um, which is bullshit, but it creates such a convenient excuse to do whatever you want to do. If you are saving the species,[00:28:00]
then what does climate change matter? What do fascism and the erosion of democracy matter? Um, you know, if that helps you get where you want to go faster. You know, you gotta break a few eggs, right? You know, what does the threat of nuclear war matter? I mean, after all, we're all gonna be on Mars and Earth is doomed anyway.
Unclear why, by the way, and we can get back to that later in the conversation, or we can talk about it now, but the space stuff is both really weird and complete bullshit.
Alix: So I do wanna get into the space stuff. Yeah. 'Cause I think this idea of space is hilarious.
Adam: Yeah.
Alix: But also when we were talking over breakfast and started talking about data centers in space,
Adam: oh God,
Alix: your head almost exploded.
Yes. Do you wanna tell me why? Yes. As a physicist. Yeah. Putting data centers in space is dumb as shit.
Adam: Yep.
Alix: And also impossible.
Adam: Yeah, yeah, yeah. Um, I mean, [00:29:00] I'm always a little careful about using the word impossible. Fair. You could do it, but it would never be a good idea, and it's always gonna be a better idea to build them here.
Like, there's a lot of reasons for that, right. You know, there's the fact that like just doing hardware maintenance right? Yeah. Is gonna be really hard in
Alix: space. Requiring a space walk.
Adam: Yeah, exactly. Yes. Um, and there's, you know, uh, and there's the difficulty of getting stuff into orbit to begin with.
There's the fact that you're gonna have more radiation out there. And radiation is, of course, terrible for computers, um, and especially for, like, data integrity. Um,
Alix: but I've heard that data centers get hot.
Adam: Yes, exactly. That's, yes, that's where I was going. Yeah. The thing that you will hear people say is: space is a really ideal place for data centers, because data centers need energy and they need to be cooled, because data centers get very hot, and space has boundless solar energy and it's very cold in space. And so it's a great place to put [00:30:00] data centers. Like, okay, first of all, you wanna put those data centers in low earth orbit; they're actually gonna be, you know, behind the earth half the time.
And yeah, you can sort of fix that with an orbit, but whatever. So the solar energy thing, whatever. But the heat, the heat is the thing I keep coming back to, and it's just so incredibly stupid, because yes, space is cold. It is true that space is very cold. It's also true that space is a vacuum.
Vacuums are the best insulators, you know? How does a thermos work? Well, you've got two walls, and between those two walls is a pretty good vacuum, because the best insulator is a vacuum. There are three ways to transfer heat: conduction, convection, and radiation. Conduction and convection require there to be a substance of some kind between the thing that is hot and the stuff that's not hot.
[00:31:00] There's no substance in a vacuum. There's nothing out there in space, and so your only option is to radiate the heat away, which takes a while and requires really big, like, structures to radiate the heat away efficiently.
And it's very hard to get those structures into space and deploy them. And so even though space is technically quite cold, it would be very hard to bleed off that heat, and a data center in space would overheat. It is a phenomenally stupid idea. Um, I remember I was telling somebody about this and they said, but you know, couldn't you get around that with fans or something?
Fans don't do anything in space other than make things hotter, 'cause there's no air for them to move. Like, that's
Alix: a fan in a vacuum.
Adam: Yeah.
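For the curious, the radiator problem Adam describes comes straight from the Stefan-Boltzmann law, P = ε·σ·A·T⁴: in a vacuum, the only way to shed power P is to radiate it from an area A at temperature T. Here's a minimal sketch; the 1 MW power draw, 300 K radiator temperature, and 0.9 emissivity are illustrative assumptions, not figures from the episode:

```python
# Back-of-the-envelope radiator sizing for an orbital data center,
# using the Stefan-Boltzmann law: P = emissivity * sigma * A * T^4.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area(power_w: float, temp_k: float, emissivity: float = 0.9) -> float:
    """Radiator area (m^2) needed to shed `power_w` watts purely by
    radiation at surface temperature `temp_k`, ignoring sunlight and
    Earthshine hitting the panels (which only make things worse)."""
    return power_w / (emissivity * SIGMA * temp_k ** 4)

# Illustrative assumption: a modest 1 MW facility with radiators at 300 K.
area = radiator_area(1e6, 300.0)
print(f"~{area:,.0f} m^2 of radiator per megawatt")
```

At those assumed numbers you need on the order of a few thousand square metres of radiator per megawatt. Hotter radiators help (the T⁴ term), but the qualitative point stands: the cold of space is real, yet the vacuum makes it a very awkward place to dump heat.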
Alix: So this is one of, I think, many examples, and there's others in the book, where you're using your physics background to systematically, and I think in good faith, try and explain whether or not concepts or ideas that are being proposed are real.
Adam: Yep. [00:32:00]
Alix: Does it drive you insane to, like...
Adam: Oh, sure. Yeah. Yeah, absolutely. Yeah. I mean, people have asked me, like, how did you stay sane while writing this book? And my usual answer is, you know, that question presumes a lot about my mental health. God. So, the stuff I was talking about, about how this looks like a religion.
Sometimes the people who believe in this stuff will say, well, you know, but that doesn't make us wrong. And it's like, yeah, no, no, no. I'm not saying that you're wrong because you look like a religion. I'm saying that you're wrong and you look like a religion. And this is one of the things that motivated me to write the book.
There have been a lot of critiques of these ideas that point out that if these things that these people want come to pass, it would be bad. Right? It would be bad if Musk had a huge colony on Mars, because it would essentially be a gigantic company town that he would have, like, king-like control over. It's like
Alix: Total Recall
Adam: shit.
Right? Exactly. Yeah. Total Recall shit. Right? I mean, he's even quoted Total Recall without remembering, like, what happened to the corporate [00:33:00] overlord. Yeah. In Total Recall. Uh, the
Alix: oxygen salesman.
Adam: Yeah, exactly. Yeah. I mean, it's really bad, and we know that Elon Musk has, you know, terrible reading comprehension. But, um, what I hadn't seen as much of was pointing out, yes, it would be bad if they did these things, but also, these ideas are not workable.
They just don't work. And so instead, they are just science fiction fantasies that are used as an excuse to grab power and ignore all limits, in an attempt to, like, live forever, and to curry public favor, to make themselves look like they're doing something good for all of us, when in fact what they're doing is, you know, mostly just making everything worse and amassing more power and wealth.
So, yeah. Did it drive me crazy? Yeah, a bit. I mean, I had to spend a lot of time literally touching grass, going on hikes and [00:34:00] stuff, getting out into nature. But yeah, if you have expertise in the things that these guys talk about, you realize that they have no idea what they're talking about. And this is partly because space is, like, so close to my heart.
But I really do feel like the best example of this is the stuff that Musk says about Mars, because he's just completely wrong. He talks about Mars as a backup for humanity, which is a problematic idea for so many reasons. Like, just the idea of a backup for humanity: who gets to be backed up?
What do you mean by humanity? But putting that aside, Mars is terrible. You know, the radiation is too high, the gravity is too low, there's no air, and the dirt is made of poison and
Alix: home sweet home.
Adam: Yeah, exactly. I mean, Musk talks about putting a million people there by 2050 in order to [00:35:00] serve as a self-sufficient backup for humanity that will survive even if the rockets from Earth stop coming.
He has used almost those exact words over and over again. I feel very comfortable saying that there is no way that that is happening. It is absolutely not going to happen, for so many reasons, and the stuff I just said about Mars is only the start. I mean, first of all, there's not really anything that could happen to Earth that would make it less hospitable than Mars.
Musk likes to talk about an asteroid impact. Which I think is interesting because it's, it's, you know, unlike say the climate crisis, something that no human has responsibility for, like if there's an asteroid coming at us, that's not 'cause of something anyone did.
Alix: So you set aside the argument, maybe we should spend energy trying to prevent these things from coming about 'cause like Yeah, yeah,
Adam: yeah, yeah.
Alix: It's setting aside that and
Adam: asteroid. Exactly,
Alix: yes.
Adam: Right. But the thing is, the largest asteroid that has hit earth in the last. 500 [00:36:00] million years is the one that killed off the, all the dinosaurs except for the birds 66 million years ago. That's not the worst mass extinction in the history of complex life on earth, but it is the single worst day in the history of complex life on earth.
Several hours after that asteroid hit, there has been a bunch of rock ejected out from that impact site, some of which just escaped altogether and, and went to like the moon or Mars, but a lot of it. Sort of went on these like ballistic trajectories and then reentered into the earth's atmosphere, and there was so much of it, the heat of reentry heated the Earth's atmosphere.
And so it's unclear exactly how bad that got. The best case scenario is there were, you know, widespread wildfires all around the globe. The worst case scenario is that for a few minutes, the entire. Surface level atmosphere of earth got as hot as an oven set to broil, and so [00:37:00] anything that couldn't duck underground or underwater died that.
Alix: Was still not as bad
Adam: as Mars was. Not as bad as Mars. That is not as bad as Mars. And we know that because mammals survived.
Alix: Yeah,
Adam: because we are descended from those mammals. There were no mammals then, and there are none now, that could survive unprotected on Mars, even if you burrow underground, 'cause there's no oxygen. There's no way you can breathe.
Even if you bring an oxygen pack to Mars. Like, the temperatures on Mars... actually, a nice balmy day on Mars is like a brisk cold day here on Earth, so the temperature is one of the things that's not a problem. But if you are there on a brisk day with, like, an oxygen mask and a heavy coat, you'll still die very, very quickly, because the air pressure is so low that you will asphyxiate anyway.
Um, because the air pressure is so low, the [00:38:00] boiling point of water is below body temperature, so you will asphyxiate while the saliva boils off your tongue.
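Adam's boiling-point claim is easy to sanity-check with the Antoine equation for water. A rough sketch, not from the episode: the roughly 600 Pa average Mars surface pressure is an assumed round figure, and the standard coefficients used here are only strictly fitted for about 1 to 100 C, so treat the answer as approximate.

```python
import math

# Antoine equation for water: log10(P_mmHg) = A - B / (C + T_C).
# Inverting it gives the boiling point at a given ambient pressure.
A, B, C = 8.07131, 1730.63, 233.426   # standard water coefficients

def boiling_point_celsius(pressure_pa: float) -> float:
    """Temperature (C) at which water's vapor pressure equals the
    ambient pressure, i.e. its boiling point at that pressure."""
    p_mmhg = pressure_pa / 133.322    # pascals -> mmHg
    return B / (A - math.log10(p_mmhg)) - C

MARS_SURFACE_PRESSURE_PA = 600.0      # typical average (assumed figure)
BODY_TEMP_C = 37.0

bp = boiling_point_celsius(MARS_SURFACE_PRESSURE_PA)
print(f"Boiling point at Mars surface pressure: {bp:.1f} C")
print(bp < BODY_TEMP_C)   # True: water boils below body temperature
```

At roughly 600 Pa, water boils right around 0 C, far below body temperature; Mars's surface pressure sits well under the Armstrong limit (about 6.3 kPa, where water boils at 37 C), which is the effect Adam is describing.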
Alix: Jesus Christ.
Adam: Yeah.
Alix: So, so the fact that this is so impossible.
Adam: Yeah.
Alix: Which I know you don't like using that word, but it sounds like you're comfortable using that word in
Adam: this case.
This is not, this is not happening.
Alix: Okay.
Adam: Yeah.
Alix: So we're saying this is impossible. So it's a construct that people keep coming back to. Like, why would you continue to listen to someone who continues to assert that we should invest this many resources, and who drives this much attention to a prospect that's ludicrous?
Adam: Yeah. Yeah. I mean, the tech billionaires are not the ones who told us that the future was in space. Right? They are simply taking advantage of, like, a cultural myth that was already there. And in some ways they're victims of that myth as well, right? They've believed in this idea that the future is in space their entire lives. [00:39:00] Like, Bezos has been going on about this since he was in high school.
Musk went for it as soon as he could, you know, the minute he got real money. There are people who are very cynical about, like, Bezos and Musk and the rest of these space barons and their interest in space, saying that it's just an excuse for their behavior here on Earth. I don't think that's true, and I think the fact that it provides a convenient excuse for their behavior here on Earth makes it more likely that it's a genuine belief, rather than less likely.
You know, it's easier to believe something that's advantageous to you. The reason they've been able to get away with this, I think, you know, is partly the peculiarly American belief that wealth means you're smart. I know that's not unique to the US, but I feel like we have a particularly bad case of it here.
Alix: I was gonna say it is a disease, so I'm glad you refer to it as a case. Carry on.
Adam: A disease is a disease. Yeah, it's a disease. But also, this idea that the future is in space [00:40:00] is just incredibly durable, and really new. If you go back 150 years, people didn't think that the future of humanity was inevitably in space.
People weren't really thinking about it that much at all. I mean, space is fascinating. I spent years studying our cosmos on the largest scales that we know how to study. I think that sending robots into space, you know, space probes and rovers and whatnot, is one of the great stories of scientific exploration ever.
I think it's one of the epics of humanity. But we are our bodies, and our bodies do not work in space.
Alix: But it also, I mean, this mythology behind it. Yeah. It also, to me, and I hadn't actually thought about this until you described it in that way: the narrative of a race. Yeah. So, thinking about Cold War dynamics, oh yeah,
thinking about, like, how much propaganda Americans have consumed about this [00:41:00] space race.
Adam: Yes.
Alix: Um, that a lot of those, one, you never say where we are racing to, and what are we trying to actually do, aside from just performatively compete with another, you know, large country that we're nominally at war with. But
that same metaphorical structure of the AI race also feels like it's performing the same function, where there's no question of, well, what are we racing towards? Because as you've described, AGI is an incredibly unlikely prospect, and it's also maybe a silly pursuit altogether. The space, multi-planetary, whatever-whatever also seems like a nonsensical thing to race towards.
The very fact that we're locked in a race animates all of this resource allocation and energy to be sort of driven towards this thing, without any critical question of, do we wanna get to the finish line? Because, I don't know, this doesn't sound so... Like, how do you, in the book, how do you deal with the race?
That narrative construct of, we're in a race. [00:42:00] How do you unpack that? How do you deal with it?
Adam: I think when it comes to AI, I mean, everybody talks about this race with China, and it's just nonsense. But, you know, AGI is a cultural narrative, and it's not one that's widely shared in China.
It's not what they want from AI, but I don't really talk about that very much in the book. But the concept of a race with AI... actually, I feel like the most important example of that is not this idea of a race with China, but this idea of a race against time to prevent the evil superintelligent AI from coming into being that will then destroy humanity, because it's such a pure example.
It's such a perfect example of a race, because it's a race against something that can't be reasoned with, right? It's like, if there was a race to AGI with China, you can always say, well, [00:43:00] let's have a diplomatic solution. But the prospect of diplomacy with the superintelligent AI that is going to turn us all into paperclips? That's not an option on the table, according to, you know, Yudkowsky and company.
And it's such a convenient narrative, because what it does, and I do talk about this in the book, is it relieves you of the need to ask pressing questions about what the hell we are doing. Because instead, there's no time to consider that sort of thing; we have to get there, because otherwise we're all dead. And, you know, there's just so many things that are so stupid about that.
Like, if you look at real life-and-death situations where you really are racing against time, that's not how thinking those things through works. Right? I think that the fear that the [00:44:00] narrative of a race sort of injects into the conversation is very useful for people who want to exert control over other people and over what happens.
Right. You know, the same way that the narrative of a race to AGI with China, mm-hmm, means that, you know, oh well, even if this is an AI bubble, it doesn't matter, we still need to invest in massive AI infrastructure, because otherwise China will beat us. It's like, you know, we went to the moon in the space race, right?
And there was this sort of very existential fear that if we didn't win the space race, that would be the end of America. It would be the end of democracy, it'd be the end of capitalism. And, you know, I'm no great fan of capitalism, but, um, looking back on it, of course none of that was true, if it had been a cosmonaut who stepped on the moon rather than Neil Armstrong.
Like, if it had been Yuri Gagarin or something. [00:45:00] Um,
Alix: which is the premise of
Adam: For All
Alix: Mankind
Adam: I know, For All Mankind, which is... I love the first season of that show, and then it went off the rails. It kind of went off the rails.
Alix: Yeah. The first season's really
Adam: good. The first season's really good. The second season's not bad.
And then, yeah, yeah, yeah, yeah. But, but yeah. No, but
Alix: that premise about
Adam: like, oh, the premise is so good.
Alix: Yeah. 'cause the world continues Russia. Exactly. Yeah,
Adam: Yeah, yeah, exactly. And also, you know, we went to the moon, and then we didn't really do a whole lot with it. And why is that? Well, some of that was questions of budgets and political priorities, but the reason that mattered is that having a base on the moon has scientific value.
It has propaganda value, but it's a very difficult thing to do. You know, it's not really, like, a great means of military superiority, right? If you have a base on the moon, that doesn't mean that [00:46:00] you're going to win a nuclear war. I mean, no. Yeah, it's
Alix: symbolic.
Adam: Yeah, exactly. You know, and I think in a similar way, with this race to AGI with China, which is not even a thing that's happening, I think we're gonna end up with, you know, a large number of chips and data centers.
And for what? And then it's gonna be, you know... like, I'm not saying we shouldn't have gone to the moon. I am saying we shouldn't build these data centers. So the analogy is a bit strained, but
Alix: I don't think so. Yeah. 'cause I think what's interesting is that the space race, it didn't have an end necessarily, but we did arrive on the moon and in the path to that made a lot of compromises and a lot of decisions that were deeply
Adam: Yeah,
Alix: problematic.
But you could say, did it happen, yes or no? Yes, it did. With the AI race, because, from the very beginning of our conversation, you can't define AGI, yeah, mm-hmm, you're basically in this forever pursuit, yep, in a race mentality. Yeah. So you've got the adrenaline going, the fear going, yep, we [00:47:00] can't miss this opportunity, but that just goes on forever.
Adam: Yeah, exactly. And, and I mean, it's very, again, very useful. Because it just creates this atmosphere of fear. And you know, in the same way, I mean I use this analogy in the book, but an apocalyptic cult would disband if the apocalypse came.
Alix: Yes.
Adam: Like if there was a nuclear war, that would probably, well,
they
Alix: would be forcibly disbanded because Right.
Adam: Exactly.
Alix: The apocalypse would've happened.
Adam: Right, exactly. I get your point. Like the great rhetorical power is the impending apocalypse, not the arrival of the apocalypse.
Alix: Yeah.
Adam: Right. And so the idea of an apocalypse that doesn't come serves an important rhetorical and political function within that group.
This is an end that will never arrive. That's really useful if you want to get people afraid and under your thumb.
Alix: Yeah. I don't know what band it was, but there's a music lyric that the anticipation is always better than the realization. I think that [00:48:00] that's what we're in; we're just in this permanent anticipatory state.
Mm-hmm. That is being weaponized in a political way, and that is hard to wrap your head around, because it's such an effective rhetorical device.
Adam: Yeah.
Alix: Okay, so I think we need to leave it here because we have to go.
Adam: Yes, we do.
Alix: To our second screening of Ghost in the Machine. Yes. Which, very quickly, we saw the premiere yesterday.
I thought it was really great. What did you think?
Adam: I thought it was fantastic.
Alix: We will link to, I think, hopefully, where you'll be able to see the film.
Adam: Yeah. Great.
Alix: Uh, when this comes out. But, um, it's a documentary film with over 40 experts interviewed, and it stitches together the history of how, essentially, AI isn't just connected to eugenics.
It's basically the direct descendant: both the project of trying to construct intelligence and the project of trying to quantify what it is, um, to be human are directly descended from generations and generations of race scientists. Uh, Valerie does an amazing job of making that history seem less fringe and conspiratorial, which I think is [00:49:00] sometimes how it feels to me.
But she really just stakes a ground in the history, using archival footage and a very systematic exploration of that history. And I just thought it was extremely well done, and much more accessible to wider audiences than I was expecting.
Adam: Yeah. Yeah.
Alix: I thought it was gonna be like a kind of niche thing.
Adam: I felt the same way, uh, especially 'cause I'd seen an earlier cut, and I was like, I'm not sure how many people are gonna wanna watch this whole thing, and then if
Alix: you're not familiar with the thing.
Adam: Yeah, exactly. Yeah. And then I saw it and I was like, okay, yeah, no, no, no, no. This is, this is gonna work.
Alix: And I want a lot of people to see it. Yes. I left being like, how do we get as many people to watch this film as possible? So we'll link to where that's gonna be, hopefully showing more widely. And we're gonna go now. So thank you so much. We'll also link to the book. I'm gonna try it again: More Everything Forever.
Yes, I did it. Um, by Adam Becker. It's great. Uh, I can already endorse it even though I haven't finished it, and I will finish it on my flight home. Uh, this is lovely. So thank you for taking the time.
Adam: Thank you for having me. This is a lot of fun.[00:50:00]
Alix: So, living on Mars, data centers in space, as Adam said, it's never gonna happen. So what's the point of these fantasies? With me to help answer that question next week will be Anat Shenker-Osorio, a campaign advisor who hosts Words to Win By, an excellent podcast that maps real-world narrative shifts to real-world victories in progressive campaigns.
She paints a picture of how AI propagandists use the same messaging playbook that oil companies used to try and convince us that climate change was all our fault, and not theirs. She also gives us a path forward on how to counter the AI narrative sludge: by figuring out what future we're fighting for, not just against.
Thank you to Adam for today, and to our producers, Georgia Iacovou and Sarah Myles, and to the team: Zoe Trout, Marion Wellington, and Kushal Dev. And please join us next week to continue the conversation on AI narratives with [00:51:00] Anat.