E98

The Age of Noise w/ Eryk Salvaggio (replay)


Show Notes

Infinite AI slop means we are moving away from the age of information and into what Eryk Salvaggio calls ‘the age of noise’.

More like this: Straight to Video: From Rodney King to Sora w/ Sam Gregory

We’re replaying five deep conversations over the Christmas period for you to listen to on your travels and downtime — please enjoy!

What happens if you ask a generative AI image model to show you what Picasso’s work would have looked like if he lived in Japan in the 16th century? Would it produce something totally new, or just mash together stereotypical aesthetics from Picasso’s work and 16th-century Japan? Can generative AI really create anything new if it can only draw from existing imagery?


Post Production by Sarah Myles | Pre Production by Georgia Iacovou

Hosts

Alix Dunn

Release Date

January 9, 2026

Episode Number

E98

Transcript

This is an autogenerated transcript and may contain errors.

Alix: I hope you had a great break and are ready to get back into it with 2026 and all of the technology and politics issues that we know are going to only get more complicated and important. So thanks for staying with us. Um, on the feed right now, you'll hear a trailer for a new miniseries that is gonna be playing in January and February focused on digital public infrastructure.

It's called The Vapor State. Next week, the first episode of The Vapor State will go live with Astha Kapoor, Mila Samdub, and Usha Ramanathan. So be on the lookout for that. Alright, our final replay for this winter break is Eryk Salvaggio. This was a really deep conversation on the age of noise, which is what Eryk calls the information experience we're all having, where there's so much more content of less and less interesting variety because a lot of it's machine made.

And this is from December 2024, so in kind of a post-election doldrums. It's a really interesting prelude to lots of conversations that have evolved over the last year on AI slop. So without further ado, a replay with Eryk Salvaggio.

Hey there. Welcome to the Computer Says Maybe podcast. I'm your host, Alix Dunn. This week we are sharing a conversation I had with someone from our new Protagonist Network. So this is a network of professionals that work on the social and political implications of technology. And we support them with things like media training to get them really sharp with their messages and have them be more vocal in public discourse about what we wanna see in technology in the future.

If you wanna learn more, you can head to saysmaybe.com to see if it might be right for you and how to apply if it is. So my conversation this week is with Eryk Salvaggio. He is an artist who actually experiments with generative AI systems while also critiquing them, which I think is a really refreshing way of thinking about the topic.

Partly because one of the biggest complaints I hear from tech spaces is that people that critique these technologies don't actually understand them. And I feel like Eryk is one of those people, among many others I'll say, who actually [00:02:00] know how this technology works, and through that understanding, I think, have really insightful things to say about what they might mean for our world.

In his work, Eryk has been, I think, really eloquently articulating a moment that we probably all are kind of familiar with: this idea that we've moved beyond the age of information, where information was this very scarce resource that we needed access to, and into the age of noise, where there's so much information and it's so messy that all we really have is a wall of kind of noisy signals, and then we have to try and make sense of them ourselves or just tune it out entirely. We get into this in the conversation, and I think it's a really important thing that we need to grapple more with: there's so much noise that expecting people to filter through it to find meaning is really naive, and we need to stop doing that.

Naturally, we get into stuff about the election, so brace for that. But I think Eryk also provides a deeper understanding of both what image models are, in terms of generative AI [00:03:00] systems that make images, and what they're actually doing when they generate an image based on a text prompt. So it's a little bit of a more technical conversation.

I think more technical discussions like this one are really refreshing, because they get into the mechanics of how these systems work, which is really important if we want to critique them, assess them, and be smarter about them. For me, this has also been one of the most insightful conversations I think we've had all year on the podcast.

We covered a lot of ground, from the basics of what a model's doing when it uses training data to produce these really boring AI images (we'll get into what I mean by boring), to how this kind of grayish nothing-slurry of content is affecting our political discourse. For me, post-election, that actually really helped me process something that I felt instinctually, and got me into more specifics about how our information environment is changing, and not just talking about, like, facts or,

you know, misinformation or disinformation, which frankly I've gotten really tired of talking about, [00:04:00] 'cause I think it's an incredibly superficial analysis of what's really happening. And I feel like Eryk taught me a lot of stuff in this conversation that really helped me break through, and sort of helped me get towards something that feels more real than other conversations I've had about this.

I found it really insightful, and I guess I hope you do too. So enough from me. Let's now hear from Eryk.

Eryk: My name is Eryk Salvaggio, and I'm an artist. I think critically about technology, I write about technology critically, and I have a newsletter called Cybernetic Forests, as well as a host of other small associations with a whole slew of great organizations. I've been an artist for 25-plus years. I started working on the internet, thinking about the internet, making art on the internet with code.

This was, I was still a teenager, [00:05:00] and that introduced me to a whole slew of people who were thinking critically about technology, media technologies, the way that the internet was gonna change things. This was the 1990s, and so immediately, to me, the very association between the internet and art and media theory and criticism, and this entire media ecosphere, was very closely connected.

And so it's always been that when I'm making things, I'm making things as a way of thinking about what I am making and what I am working with in the technology itself. And that opens up all kinds of questions, and with the internet, it opened up tons of questions, and I got involved in so many different conversations

with activists and scholars, just thinking about what this thing was that is the internet. Since then, it's taken me in so many different directions, in media, in sort of policy spaces. And then generative AI comes along, which is essentially just the giant [00:06:00] internet in a new kind of package, right?

We've repackaged the internet, we've called it training data, and now we have a thing that is making more internet in a very different way, and we can do different things with that too. The same approach is there. The questions I'm asking are about: what are our relationships to the interfaces? How are these things being structured?

Who controls them? How does power move in and out? How does information move in and out? To the extent that those things are linked, how are they linked structurally? That's been my modus operandi for 25 years.

Alix: There's obviously unique questions that generative AI raises, which I feel like is partly a question of scale, like in terms of the amount that gets created, and it's partly in terms of

the politics of the creation itself. What is different about this moment, if anything? Like, how do you wrap your head around this? Is it a step change? Is it incremental? Is it, like, totally different?

Eryk: So I think it [00:07:00] is something totally different, and I think it's very complex how it is different. The thing that I point to, the thing that I have been thinking about and hovering around, is this idea of the information age coming to an end.

And that we are in a different thing, which I'm calling, internally, you know, in my own head, the age of noise. And the theory behind that, for me, is that the information age was based on this idea that information was hard to get. That information was difficult to transmit, and the entire process of getting information into someone's house

was like a big deal. Having information coming into a laptop or a desktop or whatever is a pretty radical re-imagining of the type of access we had to information. And from there, this information system where, you know, information wants to be free. Information is everywhere. We all have access to it.

There was this myth at the time that we were gonna be this, like, highly informed, [00:08:00] highly educated population, right? This was the idea in the 1990s especially, this utopian vision. And I think what's happened instead is that information has become so available that it is overwhelming, and it is essentially a wall of noise.

We have access to signal, which I wanna say, like, very specifically, right? We are not necessarily getting information in the sense that we have, like, healthy facts being put in front of our phones all the time. But what we have is signals, and it's at such an overwhelming pace that we can't deal with it.

And what we have done with this, with the AI, is we've built filter systems that regulate all of this for us. And now somebody is making decisions about what is being filtered out and what is being kept in. And generative AI is actually this principle built into a system: all of this information needs to be stripped down into noise, quite literally,

if you look at the technical process, and then [00:09:00] from that noise, we generate new things. But in the ecosystem, it's these new things that are just adding to the wall, right? Adding to this wall of signal. So I think what we're looking at is, like, a real breakdown of the age of information, and I think we need to think differently about our relationship to information.

Think differently about how we access information, how other people are accessing information, right? This is a big question too, in ways that fundamentally don't assume that signal and information are fused together, but also that, like, giving people access to signal is not necessarily great for everybody.

I don't know that the battle over signal and the battle over information is the best place to be investing time and energy.

Alix: I mean, it's kind of like if we carved a hole inside the side of Plato's Cave and then we were in there. Like, there's something really recursive about it,

'cause it's basically the same media studies argument from, like, the eighties of [00:10:00] gatekeeping, and how elites essentially structured a system whereby, to interpret reality at scale, to have a shared understanding of what's going on in the world, you rely on people who make very specific decisions about what to put in front of you as the essential information that you need to be able to consume, to be able to participate in society.

And basically they constructed what was okay to talk about, where we should focus our energy, how we should frame particular issues that were of huge consequence, whose stories get told. And to me, this seems different, but very similar in terms of the power dynamics that are at play. Do you think of the people filtering the age of noise as the new gatekeepers?

Eryk: I think there's an element of gatekeeping there. We've talked about democratizing access to information, right? But actually, when you look at it at various scales, there's one, maybe a handful of algorithms, technically, right? But it is one central group of people making decisions, depending on which algorithm you're using.

This is starting to dissipate a [00:11:00] little bit. We're getting basically a greater variety. I also think that we underestimate the amount of constraint that's involved in what seems like variety in our access to communication signals, right? We think that social media is an individualized algorithm and that everybody is at the end of their own sort of filter.

But that isn't the case, right? The filter is always skewing towards this sort of central aspect that is being developed by whoever's running the company. It is being skewed toward that kind of thing that we would've called gatekeeping. There are people making those decisions, and so it seems like everybody has their own algorithm, but actually

the algorithm varies where it doesn't really matter, and then comes in where it kind of does. This is another big thing with generative AI, I think, thinking about it. This is kind of a metaphor thing for me, but we also assume that, okay, like, there's this big conversation about generative AI [00:12:00] and creativity.

These systems, because they're working with noise and they're working with all this training data, they could build all kinds of images, they can make infinite images, and we could see all kinds of new images that we would never be able to see before. But actually, when you look at the raw sort of combinatory potential of these images, a bunch of it is garbage. And I mean aesthetically garbage, but I also mean literally, like, unusable.

It's just noisy. And so there is a lot of variability that sort of makes us all think that, like, things are free and unique and generative, when actually the center holds, and it's just obscured by all this variety, this illusion of variety that surrounds it,

Alix: It's like this sequelization of cultural production, where it's like, if it didn't happen before, it cannot happen now.

Um, and we have to have just a familiar structure, like a narrative structure, for people to return to. And then you have this, like, veneer of nostalgia everywhere rather than [00:13:00] anything new. So do you think generative AI ever makes things that are new, even though it's derived from things that have come before?

Eryk: It comes down to how you wanna define something as new. You could do this at all kinds of scales. And I think about this too, with this idea that everything is a remix. This is like a common quotation, and I find that very bleak. Yeah, right. Like, everything's a remix. And also, like, if you go really granular, everything is a combination of, like,

atoms, right? Yeah. But that doesn't mean that, like, there's no meaning in certain combinations, right? And so there's an argument that says meaning is secondary, but oftentimes, like, this idea of the remix is kind of new, right? Culture can be new. And I think we have to have a bit of hope that, like, culture can be new, that there can be new

solutions to problems. Otherwise, what are we doing? I think it makes new combinations. It could do those types of things, but it's actually kind of limited in what we would define as new, because it's always relying on these, like, clusters around these stereotypes of the thing. If you look really [00:14:00] technically at the way that information

becomes the model, right? How an image enters into the model is that information is stripped away until, like, its core structure sticks around. And I always give this example of flowers: as it's being trained, you have this very pristine, very detailed image of flowers. By the end of the training process, a couple steps away,

it's just kind of the rough shapes of flowers, and then it's noise. And when you're generating, it's a reversal of that. So you start with just sort of an arbitrary cluster of pixels, and then something is saying, yes, those pixels look like flowers at this stage, so refine that. And then it refines that, and it's constantly verifying against a sort of known example, not specifically known, but, like,

the concepts. And so it's always gonna hover around these very basic concepts. So even if you are putting these ideas together, it's not like it's going to give you a re-imagining as a result of the various contexts. There's this argument that goes around that, like, now we can imagine what [00:15:00] Picasso would've looked like, what his art would've looked like if he lived in Japan in the 16th century. But that isn't actually what it's gonna do at all.

It's just gonna give you, like, common elements of Picasso and common elements of, like, 16th-century Japanese art, and then mash them up, and maybe that results in something aesthetically new. But it's totally this blend of these stereotypical aspects of both Picasso and 16th-century Japan. So it's not thinking

creatively; it's following a mechanistic, predetermined, highly overdetermined way of combining and colliding these aspects of its image data. Is that new in one sense? Yeah, it kind of is, because maybe no one's ever combined those two things before. But it's not new in the sense that it is generating a kind of new vision or new aesthetic.

Some people can say that's new, and I might say, yeah, that is new, but it's more nuanced, right? It's like, new at what scale? That's the next question if we really wanna think about [00:16:00] these things.
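[For readers who want the process Eryk describes here made concrete, below is a minimal sketch of a diffusion model's two passes, in Python. The names add_noise and predict_denoised are illustrative assumptions, not any particular library's API, and the linear blend stands in for the learned noise schedules and neural denoiser that real systems use.]

    import numpy as np

    def add_noise(image, step, total_steps):
        # Forward pass (training): blend noise into a clean image.
        # Early steps keep the rough shapes; by the last step only noise remains.
        signal = 1.0 - step / total_steps
        noise = np.random.normal(size=image.shape)
        return signal * image + (1.0 - signal) * noise

    def generate(predict_denoised, prompt, steps=50, shape=(64, 64, 3)):
        # Reverse pass (generation): start from an arbitrary cluster of pixels
        # and repeatedly refine it toward the model's learned concept of the
        # prompt. predict_denoised stands in for the trained model; it is an
        # assumed callable, not a real API.
        x = np.random.normal(size=shape)
        for step in reversed(range(steps)):
            x = predict_denoised(x, prompt, step)
        return x

[The point of the sketch is the structure Eryk describes: training destroys detail step by step until only noise is left, and generation runs the same ladder in reverse, always steering toward the dataset's most typical version of the prompt.]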

Alix: Well, when I hear novelty or like newness, I think about risk because you make something and then it's like there's no reference point and it's different, which means that there's a period where you don't know how other people are gonna perceive it.

And I think it's really interesting, this idea that, as you described the process of these machines working, it's kind of like: you're getting warmer, you're getting warmer. You're trying to find something that already is a thing that is recognized, rather than, here's a thing, what do you make of it?

Which feels like an attempt to control the creative process, to sort of remove risk from it, in a way that I find kind of sad.

Eryk: I agree. So one of the things that I do as an artist is work with these systems, which is a whole other can of worms, because I'm very critical of these systems, but I'm using them because I want to understand them.

I quote Nam June Paik, or paraphrase him: I use generative AI in order to hate it properly. Um, I [00:17:00] wanna know what I'm working with, what we're all working with, and somebody needs to be doing that. And I'm in some ways excited to do it, but I'm also excited because I am curious, and as an artist, that's the thing that I have: this is a technology that I want to understand, because it is making something.

It's doing something. Or I can make it do something with it. But what are we gonna do with it? So part of the thing that I've been trying to think about is, well, what is a thing that would be kind of new? Is there anything that could be kind of new? Because if you're referencing the training data, you're not really generating anything that is that far afield from the stuff in the training data.

Picasso and 16th-century Japanese art is in the training data. You're just combining those things. And the thing that I was really interested in is that you're constantly, with these systems, checking against whether or not the image is noise. And if the image is noise, that means it's not matching your prompt, and the noise is then being refined and passed through again.

And this was one of my early experiments that made me think, oh, this is interesting, and this potentially points to an [00:18:00] interesting conversation around artist agency in these systems and decisions, the really important idea of where creativity comes into play through decision making. Because I'm still giving up a lot of that agency over the process, right?

I'm still generating noise and getting what comes out of it. I think that it's okay to do that. There's a long history of art that isn't just about every single step of the way being a decision made with direct intention, right? That is one way of working, but I wanna make sure that we don't, in this

zealous and, I would say, correct questioning of artificial intelligence for creativity, trample the wide variety of human creativity and definitions of creativity that actually make creativity interesting in the first place. Because if we do that, we're doing the exact same thing that a generative AI system is doing, which is saying there's one way to make an image.

There's one thing that an artist does. There's one way to think about creativity, and it is structured here, and the system replicates it. There's [00:19:00] actually not. There's all kinds of art that everyone's gonna hate, right? There's all kinds of art that your kid could do, or that looks like someone threw something at a wall.

And there are also people who are going to love that and are going to be moved by that. And that, I think, is the important part of the creativity conversation that generative AI sometimes overly constrains, by saying: artists do this. They look at a thing, they take in their influences, they replicate those influences, they work towards an image, and then they create an image as a product to sell or give away. The creativity as the product.

I think it's more complicated than that, and I think we really should emphasize that it's more complicated than that.

Alix: That's super interesting. At a meta level, it makes sense that the misunderstanding is that it's too linear and that it's too literal. So, like, "I wanna make an apple that Picasso would've painted" is an extremely literal way of trying to explicitly articulate an artistic creation that is quote-unquote artistic.

Um, and there's something really [00:20:00] boringly direct about that. And I feel like that piece of agency, and the idea that you play with outputs from these systems that are structurally unintended, it's making me think of, like, Lacan, and how something has to be in relation to something else for it to have meaning. And the idea that you're basically saying: this stuff that you've generated, in an attempt to make something out of nothing, based on referential points that don't make sense,

I'm gonna take that and I'm gonna make it mean something. That feels very ironic, in a way that I appreciate.

Eryk: There's a lineage of people making generative art, right? There's a lineage of people who are working with noise in systems. There is a way of thinking about that that does have its own tradition.

It's not a mainstream tradition. That's probably not what folks have in mind when they are designing these systems. But there's another element to the work too, which I think touches on my thinking, where I'm thinking about, like, critical [00:21:00] AI as a concept. If you have a system that is designed to find the mean, to find patterns and predictions, and then enforce those patterns and predictions, then what becomes of the outliers?

This is an avenue of exploration that I think goes into generative AI systems as data visualizations. I call them infographics about the dataset. They visualize the dataset, but what they're visualizing is oftentimes these central stereotypes of what is most common in the dataset, and the outliers are essentially erased or eradicated.

You could push towards them, but you have to take the additional steps. You have to make the additional work. And to me, this is actually, I think, a really important visualization, literally, of a lot of the ways that AI is at play in the world. Which is to say, it is based on predictions of the patterns that are the most common, and then it enforces those patterns in the most common way, to the benefit mostly of the people who are at that center, not even necessarily recognizing that there are people [00:22:00] beyond that center, that there are people on the periphery. And because it does not acknowledge that, the system is not aligned to them.

I hate this word, alignment, but, like, it's not serving the people that the training data has not been trained to accommodate. That's the result of human decisions. It's in these systems. It's about: what images are we gonna pick, where are they gonna come from, how are we gonna curate them, in order to figure out who they represent and how?

Short answer is, they aren't. It's a data grab. LLMs: very similar. You have this kind of after-the-fact processing now, but the core corpus is just grabbing information, as much information as you can. And it's not just generative AI, and that's a really important thing. How are we thinking about this when it comes down to

surveillance algorithms or predictive policing or any of these other issues that come about? It's hard to say, like, I want people to look at this noisy AI-generated image and think about predictive policing. That's the frame of reference for me. So I'm [00:23:00] trying to make those things more closely combined through various types of storytelling and video making and that kind of work.

Alix: I see the connection, for what it's worth. Like, I feel like there's something about statistical anomaly, and treating things like the statistical anomalies rather than the messy human systems that they are, and, like, this obsession with control and sort of extracting some meaning. But by doing the extracting, you actually take out anything that mattered to begin with, and then you deign to say that you can then turn that into something that makes more meaning, rather than something that has vampired out all meaning

to begin with. And then, without really realizing how ridiculous it is when you're at the center of that decision making, to then act as though you're being representative in some way, that you're getting at some picture that is universally of interest to other people, is so dark. But I see the connection.

I mean, do you think, with images, similar to LLMs, has there been a push to have those systems play a [00:24:00] backend to predictive systems within imaging?

Eryk: I don't think so, but I think that part of that is because generative AI is a predictive system, right? It's run on noise. It is trying to predict an image.

The training of it, though, is, I've referred to it before as, like, digital humanities in reverse. Right now, a lot of this training data that we have is the result of decisions and categories from, like, archivists. And so one of the explorations that I've been doing and thinking about is: how do the decisions of an archive come into play with a generated image?

I was a research fellow with the Flickr Foundation, and they gave me a great opportunity to do a dive into some of this training data and how these decisions are made. And along the way, one of the things that I realized is there's a particular type of image, a stereo view image, which is like one image on the left, one image on the right,

that's slightly ajar. But if you hold it a foot in front of your face, it kind of looks like it's floating. And if you ask just for the name of this type of image, which is a stereo view image, [00:25:00] you get those kinds of things, like side-by-side images, right? Sort of similar. It's recognized those patterns.

But the other thing that you get, without any other words in the prompt, is imagery that is highly evocative of colonization. Like, it's these palm trees, and certain styles of dress, and there's just a very certain type of relationship with the people who are pictured there. I don't wanna get too into describing these images, but there is a direct link.

It turns out that the training data is from the Library of Congress archive of stereo view images, which specifically focuses on stereo view documentation of the colonization of the Philippines by the United States. So this imagery is inscribed into the media format that we're trying to evoke. And so there is this digital humanities sort of thing about: how are we gonna categorize things, how are we gonna organize things?

There's a logic there that then becomes reduced, boiled down [00:26:00] into this JPEG of noise, with a certain kind of words associated with it, and a path backward to that original image that gets abstracted. And that abstraction is so severed from the decision making, the accessible decision making, that goes into these archives.

I need to look at the Library of Congress archive in more detail, but it's contextualized, right? Nobody is saying this was a great thing. But that context is completely lost with generative AI. When you run it backward, the context becomes completely abstracted, because that's what you've had to do to train it. You've had to take

You've had to take. The context and reduce it. And so now you have to take that reduction and expand it based on kind of hints of decisions, hints of direction. The real world, meaning of images, gets really degraded. That also speaks to a visualization of what happens in our lives when we automate.

decision making. When we train on data, we create an abstraction of the world into data, then use that data to sort of [00:27:00] ask it questions, when the questions we're asking are mostly about the stuff we've cut away from it to begin with.

Alix: Yeah.

Eryk: Right.

Alix: That's super interesting. I hadn't thought about it in those terms, of, like, quantify and then requalify. I've often thought about quantifying to make something flat enough for people to feel like they can make judgements about it. But I'd never really thought about those judgements reanimating it to be more contextual.

Eryk: There's a weird, like, long political history. If you look at, like, the origins of things like Agile, the creative class in technology has always been a real problem for the managerial class.

This whole thing of, like, we can automate the artists, right? We can have the machine do their work. It seems very, like, the manager can't deal with you. Like, the manager doesn't know why you want to go see a movie at two o'clock and you're telling me it's related to work. Like, we don't get that. We don't get [00:28:00] this, like, creative impulse.

Right. And maybe, even if they do get it, there's just not a structure within the kind of management and productivity that is required where the creative sort of energy is allowed to be creative, right? If you're a professional, you have to be creative on a schedule, and it has to be the nine-to-five schedule, right?

And, having been a professional creative, you do a lot of this work not at work, and no one wants to pay you for working overtime because you're thinking while you're eating dinner, right? So this has always been this, like, weird thing of: how do we get this slotted in, in order to be more efficient and not have to deal with the person that is creating the value?

And so this idea that, like, well, the person looks at a couple of pictures, and they clearly just infer some commonalities between those pictures, and then they make a thing, right? That's art. It's such a bankrupt and degraded view [00:29:00] of what art and creativity is. It could only come from, like, high-level management, right?

To just be really cheeky about it, it's such a disconnect. And so to me, it's really just this capitalist sort of, okay, productivity, please, right? And it's the opposite of creativity, but it fits a definition of creativity when it's defined by people who really haven't engaged. And I don't wanna be too bleak, because there are certainly artists who do that, and think that way, and it's fine, and a lot of them make great art.

But what I wanna emphasize is that a lot of creatives don't. There's a lot of messiness to that. And I think the closer you can get to order, like, that's what it's for. This is why it's for generating the, like, messy parts of the productivity cycle of software.

Alix: But I also feel like a lot of the breakthroughs weren't predicated on this trajectory. It feels like there was that strange period with LLMs where

the technology [00:30:00] existed, people within these companies were playing with it, and a lot of the companies were like: it would probably be bad if we made this publicly available. We don't really know how we would sell it. It seems kind of dangerous; people would probably use it for all kinds of bad stuff. So let's all keep it under wraps for a little bit as we all

figure out what to do about it. And then OpenAI comes in and is like, hold my beer. And then it's like, here's this thing. And now we're just gonna race towards this monetization or productization of something that is bizarre and not particularly well placed to do very much of value. Um, so I get that at the point after there's this race to productization, all of a sudden there's licking of chops, with senior management being like, oh, this is amazing.

We can, you know, have a 50% reduction in the aesthetic quality of everything we produce, but at the same time reduce costs by 90%, so we should just do that. Which then feeds into this age of noise, the age of noise not necessarily being about content creation [00:31:00] itself, but about institutions and how they value content creation, which means that basically it's okay for all these loud brands to

spit out shit. So some of your class-based or, I don't know, politics, your future-of-work kind of analysis of this: did that start at the beginning, or is it more from when you see how artists and organizations are taking up the technology? Like, as you see how they're doing it, are you now seeing that, or do you feel like it was always this way?

Eryk: Oh yeah, I don't know that it always was this way. I mean, a lot of this came about because search engines needed to figure out how to sort images, right? And so they needed something that could recognize the basic form in an image and give it a label, so that if someone typed that word into the search bar, they could find it.

So it was about this analysis of image data. And if you can analyze image data and identify it, then can you run it the other way? Again, it's running this backwards, right? If you can do it one way, you can do it [00:32:00] the other,

Alix: Like a processing function for large systems that needed to sort through a lot of information for a very particular consumer use case.

Eryk: Exactly: the age of information teetering over into the age of noise. You had so much information that you needed to figure out how to sort it more efficiently. You build these systems that understand that information, all of these signals, more efficiently. And then, lo and behold, you can create even more signals.

That's the origin of it. So there's that. And then it was a classic example, I think, of a solution in need of a problem. But the problem that it is currently being used to solve is around a lot of freelance labor, right? A lot of creative labor. It's expensive and it's hard to deal with, and

"hard to deal with" is just dealing with a human being. But, like, in much of the way technology is developed, that's the problem to be solved: there's another person doing a thing. Yeah, commodify people. Yeah. And automation at its heart is, how do we remove a person from the flow? But [00:33:00] when we start automating things that are like psychology, or like an artist or a writer, then what you are essentially dealing with is the other person's

expressivity. Like music: generated music is a thing. I've been doing a lot of this, and the lyrics are always just: what are you listening for? There's nobody here. Like, I've made, like, 28 songs where that's just the lyric. And this is all just to explore these systems, to be clear. But I think there's something to: what are you listening to a piece of music for?

I would assume that a way of listening used to be that you wanted to hear what a musician was going to say. That was at least a part of it, right? You're gonna put on Leonard Cohen to hear what Leonard Cohen is saying, just to pull a random person. I'm not a huge Leonard Cohen fan, but, like, he's writing, right?

And so now it's like, just gimme a song that sort of sounds like Leonard Cohen. And then it's plausible that this is a Leonard Cohen song, and that's, like, what you want. The lyrics are coming from [00:34:00] nobody. And so what are we doing, except for cutting out the musician's expression? Because the musician might say something we don't like, or the musician may say something that isn't relatable or that we don't understand.

And this way we don't have to deal with that. We don't have to, like, worry about the thing coming in being so far askew from the center that we are used to that we have to, like, process it. And this comes back to, I think, something in media studies, oftentimes in very fancy language, called ontological security, right?

Which is this thing that, like, what we want is a reassurance that our place in the world is the same as it was the day before, and that it will continue to feel that way the next day. So you had, like, 2020, you had COVID, you had Trump, right? You had all this stuff, and I think people were feeling very disoriented.

I think people have been feeling very disoriented for a long time. I think there is a sense of threat, and I think with that sense of threat, there is a [00:35:00] desire to cling to the familiar patterns that reinforce and reaffirm your resilience in an era of change, a vast change that is often overwhelming. It's saying: you are the same as you were yesterday, and you will be the same tomorrow.

And I think that is reassuring to many people. But this is also, in some ways, what AI is designed to do. It is designed to find the patterns, extend the patterns, so that we have no reason to worry about the fuzzy borders or the things on the outside of the center. This desire to sort of affirm those patterns is what people do when they're frightened, and it's what AI is designed to do.

And so when people say, oh, it's a model of the human mind, I say: well, what kind of human mind? The human mind in what state? And that state is a state of fear. It is a state of trying to keep you comfortable, because you're afraid of the things that are happening, that are changing, that you can't [00:36:00] control.

So it comes back to this idea of control too, which I think is really important. So that's why there's been four Ghostbusters movies in eight years. So much of that revisiting of childhood too, right? And you mentioned this about nostalgia and the imagery. It's like a junk food that feels like chicken soup, I assume, to some people.

I don't wanna overstate that aesthetic, like kitsch, either. No.

Alix: But I think it's also connected to, like, West Wing-y strategizing of Democrats. It connects not just to content, but I think also to seeking familiar patterns and familiar ways of being. I mean, what do you think this is doing to politics?

Eryk: If you look at communication as a battle with noise right now, right? This is classic Shannon, Claude Shannon, a "communication in the presence of noise" reference here. But if you look at that, we've worked on filtering noise, right? That's what these algorithms are. They're noise filters. They're designed to take [00:37:00] the stream of data and sort of consolidate it for you.

And so when you have someone who speaks in ways that break that confine of the center, you break through the noise in a way. And I think that that is one of the things where we can see the rhetoric of Donald Trump, and the bluster of Donald Trump, and to some extent the terrifying threats that he is literally saying with his mouth, as the noise.
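[For reference, the Claude Shannon result Eryk is name-checking above is usually summarized by the Shannon–Hartley capacity formula, which gives the highest rate C at which information can be reliably communicated over a channel of bandwidth B with signal-to-noise ratio S/N:

    C = B \log_2 \left( 1 + \frac{S}{N} \right)

As noise swamps signal, the ratio S/N falls and the capacity drops toward zero, which is the formal version of the "wall of noise" he describes.]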

But there's another way of seeing that, which is to say that actually it has disrupted the kind of coordinating algorithmic drive to the center and sent it all askew, in ways that break through the noise, in ways that nobody else does. That is up there with, like, Dick Cheney and AOC saying, look at the new center that is being produced by the averages I am introducing.

Right. That is just coming back to what we're calling establishment, or saying, like, it doesn't [00:38:00] break through. And so we make the mistake of thinking that we are arguing, like, we need to present more facts. Everyone has known, since I entered science communication in, like, 2010, that you don't just tell people facts.

You cannot tell people facts, right? We've had a long battle about climate change, and, on an optimistic note, I think climate change is doing okay. People get it. But back in the day, everyone was just like: here are the CO2 levels. Here's where they were 16 years ago. Here's where... right. And just showing these, like, charts.

It wasn't breaking through to anybody. There's this tendency to just, like, cringe at this, but facts aren't the thing. Facts are not the thing. Data, in a sense, is not the thing. You can't just give people data points. You have to tell a story. You have to break through the noise somehow.

As horrible as it is, that is a way of breaking through the noise: to just say the things, right? This little political incorrectness. Look at what [00:39:00] the discussion is about Musk, right? It's about, oh, we need freedom of speech, we can't have these algorithms censoring through what people say, because they want to break out of that center.

I don't wanna hypothesize this too much, and I don't wanna give it too much credit either, but I think that this flattening of everything makes it really difficult for the nutritional thing that we would call information to make its way into that world. It is up against the wall of noise, and the only thing that moves the needle is to be

beyond the kind of content moderation, right? Moderation itself is the thing you have to break. And so that's exactly what Musk did. That's exactly what Trump does, and that is what people are clinging onto and saying: oh, that was a thing that broke the thing, and I want the thing to break. Right now, I'm not happy with where I am in my life.

This comes back to this clustering of the center, right? If you're really afraid of dissolving, if you're really afraid that your identity is breaking apart, you start to fear fuzzy borders, right? The [00:40:00] strong border is the thing that says, this is who you are, right? It's a wall on you as a person, and that drives a kind of fear of the thing that is not you.

That is mostly, I think, what's driving a lot of this concern that we saw, uh, you know, about trans rights and all of this stuff that came into the conversation. It's, like, this affirmation that's telling people: oh, you're afraid that your borders are gonna fall. Well, look, you're not that, and we are gonna be this. We are gonna be stronger because of that, because we've set

separation. This is classic strongman stuff, right? It's bleak, but it is the thing that happens, and it's frightening, because you would think that human beings would have been able to get beyond that, but I don't think we have. And I think that actually the flimsiness of these elections is kind of harder to swallow than if it was a clear rout, because to some extent it means a bunch of [00:41:00] people weren't turned off by that kind of rhetoric.

Right? A lot of people saw that and heard that rhetoric and said: fine, whatever. The thing that bothers me, the thing that I'm motivated by, is just that I want change, but I want that change to also respect my sense of self and identity. Which is weird, because it's like the opposite of the identity politics arguments, right?

Everyone's saying that Harris lost because she leaned into identity politics, when she absolutely did not. And that, to me, is this weird myth that identity politics is only a left thing.

Alix: I'd never thought about moderation, content moderation, like, moderation being the status quo, and this kind of return to a center, and the kind of center of the mush.

I think that's, like, super insightful. I find that really, really interesting, and I'm gonna be thinking about that for a while. It makes me wonder, too, what it would look like if, instead of performing democracy via elections, we had a more robust and [00:42:00] full idea of democratic participation, and then how an information environment would either enable or

subvert that. Because I feel like with elections, it's just so much easier to imagine how that subversion happens; it's such a fine needle that one has to thread. And I feel like that idea, that you can just scramble things for a minute and then people are, like, blinded substantively and then are just like, that felt different, I'm going for that, doesn't work on a longer-term axis. It only works when you build up to a moment in time when you need someone to make a singular decision, I think.

I'm going for that. Like that doesn't work on a longer term axis. It only works when you build up to a moment in time when you need someone to make a singular decision. I think. I know there are theorists that talk about information environments in fascist countries, and it basically. It's not the process of mobilizing publics to do things that are bad.

It's that people check out, because they feel so disempowered by the political system, and the information environment is so overwhelming, and they just flood the zone with a bunch of shit, and then people are like, nah, I don't care. I mean, do you feel like we're [00:43:00] close to that in the US?

Eryk: I hope not.

You know, I don't know that, like, yelling in all caps on Twitter, especially anymore, is going to do what a bake sale could do, honestly. Because if you are in connection with the people in your neighborhood, that is a scale of focus that politics was designed for. First of all, the system of government we have is not designed for this mass media environment where we are trying to influence people half a country away

with concerns that we don't even know about. It is designed to be a local system, and I know this is, like, West Wing-y, like, you could start playing...

Alix: I don't think so. It's anarchic. It's basically saying you have to construct the world you wanna live in by virtue of the actions you take immediately around you, which I feel like is an extremely lovely antidote to the flattening you're describing, and, like, the quote-unquote conversation that's not actually conversation, it's just more noise.

Which I feel like is what most communication is on the internet now.

Eryk: Yeah, [00:44:00] I think it's also hard, because I do think we do need to create some areas where information does exist, because we wanna be able to say, you know, when someone does say, I want the information, and they're gonna set that time aside to go get it, they should be able to find it.

But we have to sort of stop assuming that people are doing this in good faith, right? That people are participating. I just don't feel like the online social media environment is designed for anything pro-democratic. It is not designed for reasonably informed conversations, right? It is all vibes.

It is all reassertion of my identity to an audience that shares that identity and affirms that identity. And when I am scared about that identity dissolving in this sea of change that I'm being convinced is threatening, all you do is double down, and then you get rewarded. That is not a democratic system.

The democratic system is [00:45:00] showing up at your school board meeting. We need to figure out how to make that type of thing more accessible, think about infrastructure that allows that to happen, whether it's, like, locally built, even if it is a digital network, right? It doesn't have to reward people for yelling, but it could create ways of accessing information.

Um, right now we have all this stuff that Ethan Zuckerman calls accidental infrastructure, right? We have to go to YouTube to find out what our governor or our mayor said, right? Our school board meetings are on YouTube. That's just, like, weird. Why are we going to these, like, global-level platforms, and being linked to these other people's school board meetings, and arguing about what someone

is saying? In some ways, a focus on the local feels like a retreat from the national, but it's actually going back into that wall of noise. If it all looks gray, when you zoom in, you also see the colors there, and the patterns and the constellations that are present. So if you focus on that, maybe it starts to [00:46:00] cultivate upward, because this top-down thing is not really what has been working.

The tragic mistake of 2016 to 2020 is we built a bunch of networks, and a lot of those, because we were not able to be in bodies together for so long, right, they kind of frayed and they kind of moved into the social environments. That election was still won. But I will also say, there was a lot going on in the streets in that election.

Right. Where did those movements go? And I don't mean to condemn anybody, I just literally don't know. And that's on me, perhaps. But I don't know where those movements went. And I don't think that staying online... like, the Zoom calls are not doing in 2024 what we were hoping they were doing in 2020.

Alix: So, yeah, I don't know. Start from the basics and start making things, rather than making more Ghostbusters movies, is my takeaway. Like, make stuff, take risks, be with people, share [00:47:00] concerns, engage directly, instead of extrapolating out like a generative AI system.

Thank you for listening. I hope you found that as inspiring as I did. As I said up top, I just found this kind of mind-blowing, actually. Like, I read Eryk's things and I've talked to Eryk many times, but I feel like we got somewhere. It's very rare in these kinds of conversations that I feel like we're on really new ground, and I felt like we got there in this discussion.

So I hope that was as helpful for you as it was for me. I also hope you appreciate how many times, uh, I got Ghostbusters references in there. If you want more of Eryk's insights, you can subscribe to his newsletter, Cybernetic Forests, and the link is in the show notes. Also, we run a newsletter that gives you a little bit more than just the stuff we say and share in the podcast.

We have various events and meetups. Our next event is on December 12th, and we're gonna host Bianca Wylie, who's gonna talk about the politics of AI procurement. We had Bianca on the show a few weeks ago to talk about how she chased Sidewalk Labs [00:48:00] out of Toronto, but this is an example where I think we learn a lot from a speaker and then realize that that knowledge is actionable.

One of the things I think we wanna do differently is go deep on these topics, but actually try and engage the people within civil society, within research, within philanthropy who are actually trying to change things. So we're having Bianca speak within community, to talk a little bit more about how we might apply the lessons from what she learned from Sidewalk Labs to the future of AI politics.

And if you're on the newsletter, you'll get invitations to things like that. The link to the event page for the session with Bianca is in the show notes as well, but sign up to the newsletter if you wanna be in the loop on that kind of thing moving forward. And if you're listening to this episode in 2025, I'm so sorry that you missed

what was such a wonderful session from Bianca. Thank you to Eryk for coming on the show, and as usual to our producers, Sarah Myles and Georgia Iacovou, and I will see you next week.
