E57

Short: Sam Altman’s World w/ Billy Perrigo


Show Notes

Sam Altman is doing another big infrastructure push with World (previously Worldcoin) — the universal human verification system.

We had journalist Billy Perrigo on to chat about what's what with World. Is Sam Altman just providing a solution to a problem that he himself caused with OpenAI? Do we really need human verification, or is this just a way to sidestep the AI content watermarking issue?

Further reading & resources:

Computer Says Maybe Shorts bring in experts to give their ten-minute take on recent news. If there’s ever a news story you think we should bring in expertise on for the show, please email pod@saysmaybe.com

Perrigo is a correspondent at TIME, based in the London bureau. He covers the tech industry, focusing on the companies reshaping our world in strange and unexpected ways. His investigation ‘Inside Facebook’s African Sweatshop’ was a finalist for the 2022 Orwell Prize.

Hosts

Alix Dunn

Release Date

June 4, 2025

Episode Number

E57

Transcript

This is an autogenerated transcript and may contain errors.

Alix: [00:00:00] Hey there, welcome to Computer Says Maybe. This is your host, Alix Dunn, and we've got a short for you today, this one with journalist Billy Perrigo, who just wrote a long-form piece in TIME about the next phase of the Worldcoin orb project. Which, if you'll remember: a few years ago there was this project that got launched by Sam Altman to scan people's irises in exchange for a little bit of cryptocurrency. That wasn't worth much at the time, and it was kind of widely panned as a bizarre, dystopian project that wasn't gonna go anywhere. Well, it's back.

So Billy did this long-form piece, which we'll link to in the show notes, and we had him on to talk a little bit more about what he learned in his reporting, and how he thinks of the orb as part of a broader project of Silicon Valley and the AI boom. So with that, here's Billy Perrigo on his recent piece in TIME, "The Orb Will See You Now." [00:01:00]

Billy: Hi, I'm Billy Perrigo. I'm a tech correspondent at TIME Magazine. I cover artificial intelligence, social media companies, and the tech industry more broadly. My latest story is titled "The Orb Will See You Now." It's about Sam Altman's eyeball-scanning crypto startup.

Alix: I feel like there's maybe a misunderstanding of what the orb is designed to do. My understanding at the beginning was that it was not just to verify that you're a human, but to verify you are who you are claiming to be, in the context of crypto. So when Worldcoin first came out, it was much more a cryptocurrency with a creepy way of managing identification, and less an identification infrastructure with a crypto coin backing it, or, like, some type of gold at the end of the rainbow for the people that made the play, like, [00:02:00] do the thing.

But now the framing is: it will verify you're a human, for now. Like, that's the primary focus. Do you wanna say a little bit more about what they're saying it's supposed to be doing?

Billy: Yeah. So, in fact, the premise has always been that they will prove that you're a unique human, but not prove which unique human you are.

So it's not an identity system in the same way that Aadhaar in India or the Social Security number system in the US is, which says: this is your particular code, it's linked to your identity, anyone can know this as verification. Instead, I think of it as a replacement for the CAPTCHAs online that we use at the moment to prove that we're not a bot when we log into a website.

Those CAPTCHAs at the moment work by asking us: please identify all of the squares with a motorbike, or transcribe this code that has a scribble running through it. At the moment we have frontier AI systems that can pretty easily solve those problems, multimodal image models that can take an [00:03:00] image as input and tell you what it is. And so what that means is those types of CAPTCHAs don't work anymore.

There are other ones that take various signals from your computer: you just, you know, click a box and it says, yeah, you're a human. But what that does is take a load of surveillance signals that you've left as digital breadcrumbs across your internet wanderings, and make a declaration like, yeah, this person looks like they're normal.

What Worldcoin says is there are better privacy-preserving ways to perform this function. But what it requires us to do is to verify that you are a physical human in the real world. While AIs can answer a CAPTCHA pretty accurately, they aren't embodied in the world; they don't have biometric data. So that's kind of one of the harder things to spoof at the moment.

Now, maybe in a few years, especially if there's a large incentive to do so, we will have robots wandering around with irises, and we'll be in a kind of Blade Runner scenario. But until then, the [00:04:00] decision that Worldcoin has made is that the easiest way to do human verification is to have a physical element to it. But what they do when they scan your iris, and they'll be annoyed at me for saying scanning, is, they say, verifying. Apparently it's too dystopian to call it scanning.

Alix: We only want the good parts of sci-fi.

Billy: Yeah, exactly. So when they scan your iris, to use the vernacular, it basically turns it into an iris code, which is, think of it like a long binary string that represents your iris. And if you go back and scan your iris again, it would yield the same code; it's kind of like a fingerprint.

They delete the image of your iris; the orb anonymizes that string of data into three separate strings, sends all of those to decentralized servers, and then basically uses a form of anonymized cryptography which allows them, when a new user comes to an orb, to check whether their iris code matches yours without ever decrypting yours or looking at it. I mean, it's a [00:05:00] use of cryptographic technologies that does seem, I mean, this is open source, for people with more cryptography expertise than me to look into and kind of say, like, this is a privacy-preserving system.
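The matching step Billy describes, checking a fresh scan against every enrolled iris code, can be sketched in miniature. This is purely illustrative: the codes, the threshold, and the function names here are invented for the sketch; real iris codes run to thousands of bits, and World's deployment compares encrypted shares across servers rather than plaintext strings like this.

```python
# Toy sketch of iris-code matching via Hamming distance.
# Illustrative only: real systems use codes thousands of bits long
# and compare them under encryption, never in plaintext like this.

def hamming_distance(code_a: str, code_b: str) -> int:
    """Number of bit positions where two equal-length codes differ."""
    assert len(code_a) == len(code_b)
    return sum(a != b for a, b in zip(code_a, code_b))

def is_duplicate(new_code: str, enrolled: list[str], threshold: float = 0.3) -> bool:
    """A new scan matches an enrolled iris if the fraction of differing
    bits falls below the threshold; rescans of the same iris come out
    similar but never bit-identical, so an exact lookup would fail."""
    n = len(new_code)
    return any(hamming_distance(new_code, code) / n < threshold for code in enrolled)

enrolled = ["1011001110100101", "0100110001011010"]
rescan   = "1011001110100111"   # same iris as the first entry, 1 bit of sensor noise
stranger = "0011010111001100"   # a new person

print(is_duplicate(rescan, enrolled))    # close to an enrolled code: already signed up
print(is_duplicate(stranger, enrolled))  # far from every enrolled code: new human
```

The key property is that matching is a nearest-neighbor check under a distance threshold rather than an exact lookup, which is what makes doing it over encrypted data nontrivial.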

Now, does that mean that it's not a system that can really concentrate power in the hands of someone like Sam Altman? Not necessarily. Does it mean that it's a system that isn't linked to a cryptocurrency whose value might collapse, therefore taking the entire internet's verification system down with it if that happens? No, it doesn't.

But I think one of the reasons I wanted to report this story is because it seemed like the internet was on the cusp of breakdown and that we needed some kinds of solutions, and this is kind of one of the few attempts to solve that problem, even if it wouldn't lead to a utopia if it was successful.

Alix: Most of our listeners will have followed the orb story from its global majority country deployment, like a dystopian surveillance infrastructure instrument, part of, [00:06:00] like, a crypto scam, I think, but not so much as it's being framed now. And in your piece, I think you do a good job of showing how that framing has evolved. But now it's being framed as, like, possibly critical infrastructure of human verification in developed economies, like the US, where they're about to deploy 2,500 of them, if I remember right.

Billy: 7,500.

Alix: Okay, 7,500. So, coming to a 7-Eleven near you, um, are these orbs. I think the problems are similar, but now it feels like the questions maybe are different. But do you wanna say a little bit about either the arc of your coverage of the topic, or sort of what you feel is different now about this point in time with the orb's rollout than maybe it has been, or how people have thought about this in the past?

Billy: Yeah, sure. I mean, so, you mentioned critical infrastructure there. The place that I came at it from was: it's very clear to me that right now the internet is the most critical infrastructure that we have at a global scale for human communication, and it's kind of under threat from all fronts. We've [00:07:00] seen the rise of LLMs, which generate synthetic text, which is kind of eroding our ability to know whether something was written by a human, whether a piece of information that we are reading is part of a larger mass-produced campaign of, uh, disinformation or marketing content or what have you.

We also see the rise of AI-generated images, and now video. Veo 3, Google's video generation tool, was released the same week that we dropped this story, and it's pretty crazy. I mean, I've been fooled. There was a video of an emotional support kangaroo being barred access to a flight in Australia, and I was like, huh, this is crazy, Australians doing crazy things. And then only a few days later I found out it's an AI-generated video. So we don't know the origins of content anymore, and those are only the things that I know I've been misled by; I don't know the things that I have been successfully misled by.

Right? And so it seemed to me that the analysis that Sam Altman made back in 2019, when nobody [00:08:00] was thinking about this kind of thing, uh, was at least somewhat accurate. So, to kind of go into what we report in the story: Sam Altman is the CEO of OpenAI. Back in 2019, he's working on AI technologies, and he has this conviction that AGI, artificial general intelligence, uh, is coming sooner than most people think.

And he says what this would mean is, suddenly the internet will no longer be a place where you can know that the person you're talking to is a human rather than a bot. There's this old meme: on the internet, nobody knows you're a dog. And it's essentially that, except for mass-scale AI manipulation.

One of the things that even current models can do really well is deception and targeted persuasion. There was a slightly unethical study by researchers at the University of Zurich recently, which posted AI-generated comments on the subreddit r/ChangeMyView without telling users, and they found that AI-generated comments were up to six times more successful at getting users to change their opinion than human-generated ones.

To his credit, Sam Altman recognizes this back in 2019 and says: the internet needs some kind of [00:09:00] defense mechanism against this problem, and nobody's working on it. Obviously, it's ironic, because Sam Altman is working on introducing the flaw and the vulnerabilities into the system while also kind of creating a solution that, if successful, will also make himself a load of money, because it's linked to a cryptocurrency. If Worldcoin takes off as a biometric verification system for the internet, the value of Worldcoin will go up, and Altman and co, who own 25% of all the Worldcoin tokens in existence, will reap the rewards.

Alix: It reminds me of that time, I don't know who proposed this, but when self-driving cars, before Waymo was as good as it is, people were like, well, maybe we should just cage in the sidewalks. Um, and it was just a really, uh, bizarre response: basically, we are supposed to bear the burden of the mistakes of scaled companies that kind of broke things that we all generally liked.

But I thought it was really interesting how he kept challenging your framing of it as a [00:10:00] defense mechanism from the harm, or problems, that they were creating. And actually, like, that line about him saying "I know you hate AI" was just so, I don't know. Like, what's it like to report when the people you're interviewing, the people in these positions, are so invested in the idea that everything that they do has to be framed positively and in good faith, even though they have this, like, track record of doing things that we should be critiquing and sort of holding them to account for? But, like, he's so slippery, and I feel like if you had asked too many hard questions, maybe he'd never let you interview him again. Like, how does it feel reporting on this stuff with interview subjects like him?

Billy: I found that whole experience just hilarious, because, like, I'd been talking to the Tools for Humanity team for months about how the internet was in for this massive, you know, sea change, how bots were gonna create all these structural vulnerabilities, and how the CEO of this company had told me how he'd been invited to Altman's kitchen in [00:11:00] 2019, and how Altman had said to him, like, the internet's never gonna be the same after these technologies spread across it.

And so I lobbed Altman what I thought was, like, an easy softball question to begin the interview, which was something along the lines of: can you tell me the problem that you're trying to solve here? And his answer was, like: no, it's not really about a problem I'm trying to solve, I'm more, like, creating good things for humans. It just seemed to me like there had been a real communications snafu between Tools for Humanity's communication style and Sam's communication style. Because you have to remember, Altman is now the CEO of a $300 billion company that is building AI agents that they say are gonna be hugely beneficial for human agency and the profitability of all companies and, you know, lead to abundance in the world.

So it's inconvenient that he is also the co-founder of a company that says: oh, actually, these agents are gonna create major systemic problems for the internet, and we need to, like, protect people from that happening. And so it kind of leads to a strange, like, discord, where Sam [00:12:00] Altman is saying: it's not a thing that we need to protect against, it's about doing cool things for humans. And it kind of makes me ask, so, hang on, what are those cool things for humans?

Alix: It's, like, proving you're human? I don't know. That's not, like, a cool, exciting thing; that's, like, a thing I'm being asked to do because the information environment is collapsing around me. But sorry, yeah, carry on.

Billy: Yeah. And, I mean, like, aside from that, it's: you get to participate in a crypto network where him and his co-investors and colleagues already own 25% of all the tokens, due to the way that the cryptocurrency is structured. And so it kind of just raises the question, like, what is this for, if it's not, as you initially said, for protecting the internet?

One of the other things that came up in the reporting was: Worldcoin has now built technologies that would allow for a World ID, which is the thing that the orb gives you, to be delegated to your personal agent, which I just found super ironic. This entire project was meant to, like, build [00:13:00] an internet where humans could be special, and now they're saying, oh, we're gonna let humans delegate their ID to an agent. Now, look, we do need systems, if we're gonna have agents running around on the internet, to distinguish between good agents and bad agents. So now the system that had begun as a way of making humans special online and protecting online spaces from bots is almost like the rails that will allow agents to proliferate without causing chaos on the internet. It's cited in the piece as, like, an example of Altman shifting the goalposts, as he has done many times at OpenAI.

Alix: I just, like, I don't trust that it stops at CAPTCHA. They're already thinking beyond verifying you are a human. They're already kind of imagining, within this encryption architecture, saying: what if we could have you authenticate not just that you're human, but also that you are this particular string of numbers, which we can then connect to other things about that string of numbers, which very quickly becomes essentially a [00:14:00] unique ID for you as a person.

Billy: That's actually where this technology is interesting, because it's built specifically to avoid something like that happening. They use zero-knowledge proofs, which is a really interesting privacy-preserving technology, which essentially is a way of saying: okay, you've gone to an orb, you've got this World ID, and nobody knows what that World ID is except for your phone.
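The "nobody knows your World ID except your phone" property rests on the core zero-knowledge idea Billy mentions: convincing a verifier you hold a valid secret without revealing it. A classic toy example of that idea is Schnorr's identification protocol, sketched below with tiny, insecure numbers. This is an illustration of the general concept, not World's actual scheme; all names and parameters here are invented for the sketch.

```python
import random

# Toy Schnorr identification protocol: prove knowledge of a secret x
# satisfying y = g^x mod p, without revealing x. Insecure toy parameters.
p, q, g = 23, 11, 2   # g = 2 generates a subgroup of prime order q = 11 mod 23

def keygen():
    x = random.randrange(1, q)     # secret key, stays on the "phone"
    y = pow(g, x, p)               # public key anyone may see
    return x, y

def prove(x, challenge_fn):
    r = random.randrange(1, q)     # fresh randomness for every proof
    t = pow(g, r, p)               # commitment, sent first
    c = challenge_fn(t)            # verifier picks a challenge after seeing t
    s = (r + c * x) % q            # response: r masks x, so s alone leaks nothing
    return t, c, s

def verify(y, t, c, s):
    # Accept iff g^s == t * y^c (mod p), which holds exactly when s = r + c*x
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x, y = keygen()
t, c, s = prove(x, lambda t: random.randrange(1, q))
print(verify(y, t, c, s))             # an honest proof verifies
print(verify(y, t, c, (s + 1) % q))   # a tampered response fails
```

The verifier ends up convinced the prover knows x (forging s without it would require solving a discrete logarithm), yet the transcript (t, c, s) reveals nothing about x itself, which is the same shape of guarantee Billy describes for World ID checks.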

Alix: I interviewed Marietje Schaake recently, and she said something that I just keep thinking about, which is that we have presumed that digitization means privatization. Like, I just keep thinking about how many countries have tried to do this with citizens or residents, and the huge implications that that has had beyond just being able to verify you're a human. And it just feels like I don't trust them to stop there. Maybe they have technically limited themselves.

Billy: I mean, one of the interesting things that you hear when you talk to Tools for Humanity executives is: it would be a real shame if the way of solving this problem that we all face is to rely on a mass centralized system run by the government, which basically says, this is [00:15:00] your ID number, this is your online activity, it's all tied into the same place. What they're trying to build is a privacy-preserving alternative to that, run by a private sector actor.

Alix: Rather than a state.

Billy: Exactly, exactly. I don't have the technical chops to be able to go into their code and say, yes, this is what it does, but it is open source, and I wasn't able to find any people raising significant concerns.

Alix: It's just such a fundamental breakdown, though, of the accountability relationship between an individual and people with power. 'Cause, like, if I'm a citizen, I have rights vis-à-vis a state; I can, like, do something when they do something wrong. With companies, we're now in this, like, consumer rights space that's just floaty, and you don't really have recourse. They are not willing to put themselves in situations where they can be held accountable if they cause harm in this way. I don't know, I just find it really, I find it dystopian at that structural level.

Billy: I think that's true. But, I mean, to give them the benefit of the doubt: we've had earlier waves of crypto technologies, and I'm talking about cryptography rather than cryptocurrencies, that have [00:16:00] given individuals more power against the state than they otherwise might have had. And I think at least Tools for Humanity might argue that one should look at this in a similar vein, as kind of pushing back against centralized power. It comes out of the crypto ecosystem, which has lots of those beliefs. Now, does it work? Probably too early to say. Does it centralize too much power in the opposite direction, in the hands of corporate actors? Maybe. They've pledged to decentralize over time, but as the story points out, Sam Altman has a habit of reneging on past promises. I'm not sure how much I believe that one.

Alix: Essentially, it seems like this is in lieu of trying to come up with constraints for agents. It feels like this is kind of a: sure, AI agents are a wild west, we don't know how it's gonna work, we know the technology in some cases is, like, dog shit still, and we're just gonna let agents go and let lots of businesses just start [00:17:00] saying, we're just gonna start using them, let's see what happens. And the corollary investment should be in kind of preserving something that can harden, essentially, the internet from the worst excesses of those agents. Is there anybody working on controlling agents and, like, thinking about what should be allowed or shouldn't be allowed in terms of agentic innovation?

Billy: There are. I know that there was a, I dunno, 600-page paper that came out of Google DeepMind that talked about all of the different ethical implications of agents and the kinds of, um...

Alix: Their papers are getting longer and longer. It's really irritating.

Billy: But I guess, to the bigger question of whether Worldcoin helps with this: in fact, I'm also skeptical. Like you, I think it's at risk of becoming, rather than something that helps humans retain their special status on the internet, like, the rails that would allow agents to proliferate in such a way where they don't cause, like, such a huge catastrophe that it leads to a big blowback against them, and instead just leads to an internet where [00:18:00] we are increasingly used to the idea of autonomous computers running around doing things.

One thing the story notes is, like: often this technology, Worldcoin, was talked about in its initial phases as a way of being able to tell the difference between AI content and human content online, but it actually doesn't do that at all. I mean, if you think about it, what it does is it gives you essentially a badge next to your account. It depends on the implementation, but it would give you a tick next to your account saying: yeah, this person, or this post, has been shared by a human. You as the human, or any human you're interacting with online, might choose to use an AI tool to generate text or an image or a video and share that, and the World ID would give no information about whether that content is AI-generated.

So rather, it's a system of trust, which basically says: if this person, or this account, or this post is found to be bad, you can blame this human for it. And [00:19:00] there are some benefits to that. I mean, it makes it harder to do abuse of online spaces at scale. But what it certainly doesn't do is prevent the kind of encroachment into our online spaces of synthetic media, which is kind of, like, maybe the core problem. I see it as a big problem, and I was surprised to learn in my reporting that World ID doesn't actually make an attempt to solve it. And in fact, Sam Altman has an incentive for more of that kind of content, and more of these kinds of agents, to disperse across the internet, because that is essentially the entire business model of OpenAI.

Alix: Well, the story was fantastic. Um, highly recommend folks read it; we'll drop it in the show notes. And I think just generally this is a one to watch, 'cause I feel like it is a huge infrastructure play that we can't just ignore because we don't like the guy who is asserting that it is our new world. Thank you, this is awesome.

Billy: Thank you for having me. Really interesting questions.

Alix: Okay, I hope you'll read the piece. Also, if you're interested in more [00:20:00] conversation about how this project connects to other digital public infrastructure ID projects like Aadhaar and others, and also the kinda digital sovereignty-ish projects and other initiatives by states, like India Stack, like EuroStack, et cetera: I'm really interested in that conversation, so feel free to reach out if you're thinking about that too. In terms of the battle between states rolling out this infrastructure and private sector actors rolling it out, and also just the way that we've conceptualized these huge projects that involve digitizing really sensitive things like identity, I'm keen to talk more about it, so reach out if that's something you're working on and wanna talk more.

Okay, so thanks to Georgia Iacovou and Sarah Myles for producing. Thanks to Billy for coming on. And next up in our regularly scheduled programming, on Friday, is gonna be Adele Walton and her upcoming book, Logging Off.

So thanks for listening.

