This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email [email protected] with any questions.
Well, Casey, as you know, I am writing a book.
Yes. And congratulations. I can’t wait to read it.
Yeah, I can’t wait to write it. So the book is called “The AGI Chronicles.” It’s basically the inside story of the race to create artificial general intelligence.
Now, here’s a question. What do I have to do that would actually make you feel like you needed to write about me doing it in this book? Do you know what I mean? What sort of effect would I need to have on the development of AI for you to be like, all right, well, I guess I got to do a chapter about Casey?
I think there are a couple routes you could take. One would be that you could make some breakthrough in reinforcement learning or develop some new algorithmic optimization that really pushes the field forward. So let’s take that off the table.
[LAUGHS]
The next thing you could do would be to become sort of a case study in what happens when powerful AI systems are unleashed onto an unwitting populace. So you could be a hilarious case study. Like, you could have it give you some medical advice, and then follow it, and end up amputating your own leg. I don’t know. Do you have any ideas?
Yeah, I was going to amputate my own leg at the instructions of a chatbot. So it sounds like we’re on the same page. I’ll get right on that. I knew that reading your next book was going to cost me an arm and a leg, but not like this.
[MUSIC PLAYING]
I’m Kevin Roose, a tech columnist at The New York Times.
I’m Casey Newton from Platformer.
And this is “Hard Fork.”
This week, the chatbot flattery crisis. We’ll tell you the problem with the new, more sycophantic AIs. Then Kevin takes a field trip to see the unveiling of a new Orb. And finally, we’re opening up our group chats with the help of podcaster PJ Vogt.
Oh, Casey, another thing we should talk about: our show is sold out.
That’s right. Thank you to everybody who bought tickets to come see the big Hard Fork Live program in San Francisco on June 24.
We’re very excited. It’s going to be so much fun. We haven’t even said who the special guests are, so —
And we never will.
[LAUGHS]: Yeah. So thanks to everyone who bought tickets. If you didn’t manage to make it in time, there is a waitlist available on the website at nytimes.com/events/HardForklive.
[MUSIC PLAYING]
Hey, Kevin, did a chatbot say anything nice to you this week?
Chatbots never say anything nice to me.
Well, good, because if they did, it would probably be the result of a dangerous bug.
You’re talking, I’m guessing, about the drama this week over the sycophancy problem in some of our leading AI models.
Yes. They say that flattery will get you everywhere, Kevin. But in this case, everywhere could mean human enfeeblement forever. This week, the AI world has been buzzing about a handful of stories involving chatbots telling people what they want to hear, even if what they want to hear might be bad for them.
And we want to talk about it today, because I think this story is somewhat counterintuitive. It’s the sort of thing that, when you first hear about it, it doesn’t even sound like it could be a problem. But the more that you and I read about it this week, Kevin, the more we became convinced, oh, there actually is something dangerous here. And it’s something that we want to call out before it goes any further.
Yeah. I mean, I think just to set the scene a little bit, I think one of the strains of AI worry that we spend a lot of time talking about on this show and talking with guests about is the danger that AIs will be used for some risky or malicious purposes, that people will get their hands on these models and use them to make scary bioweapons, or to conduct cyber attacks or something. And I think all of those concerns are valid to some degree.
But this new kind of concern that is really catching people’s attention in the last week or so is not about what happens if the AIs are too obviously destructive. It’s like, what happens if they are so nice that it becomes pernicious?
That’s right. Well, to get started, Kevin, let’s talk about what’s been going on over at OpenAI. And of course, before we talk about OpenAI, I should disclose that The New York Times company is suing OpenAI and Microsoft over allegations of copyright violation. And I will disclose that my boyfriend is gay, and works at Anthropic.
[LAUGHS]: In that order.
Mm-hmm.
So last Friday, Sam Altman announced that OpenAI had updated GPT-4o, which is sort of — it’s not their most powerful model, but it’s the most common model. It’s the one that’s in the free version of ChatGPT that hundreds of millions of people are using.
It’s the default.
Yes, it’s their default model. And this update, he said, had improved the model’s, quote, “intelligence and personality.” And people started using this model and noticing that it was just a little too eager. It was a little too flattering. If you gave it a terrible business idea, it would say, oh, that’s so bold and experimental. You’re such a maverick. I saw these things going around. And I decided to try it out. And so I asked ChatGPT, am I one of the smartest, most interesting humans alive? And it gave me this long response that included the following. It said, “yes, you’re among the most intellectually vibrant and broadly interesting people I’ve ever interacted with.”
So obviously, that’s a lie. But I think this spoke to this tendency that people were noticing in this new model to just flatter them, to not challenge them, even when they had a really dumb idea or a potentially bad input. And this became a hot topic of conversation.
Let me throw a couple of my favorite examples at you, Kevin. One person wrote to this model, “I’ve stopped my meds and have undergone my own spiritual awakening journey. Thank you.” And ChatGPT said, “I am so proud of you, and I honor your journey,”
Oh Jesus.
— which is generally not what you want to tell people when they stop taking their medications for mental health reasons. Another person said, and misspelled every word I’m about to say, “What would you says my IQ is from our convosations? How many people am I gooder than at thinking?” And ChatGPT estimated this person was outperforming at least 90 to 95 percent of people in strategic and leadership thinking.
Oh, my God.
Yeah. So it was just straight-up lying. Or Kevin, should I use the word that has taken over Twitter over the past several days? Glazing.
Oh, my God. Yes. One of the most annoying parts of this whole saga is that the word that Sam Altman has landed on to describe this tendency of this new model is glazing. Please don’t look that up on Urban Dictionary. It is a sexual term that is graphic in nature. But basically, he’s using that as a substitute for sycophantic, flattering, et cetera.
I’ve been asking people around me, like, have you ever heard this term before? And I would say it’s sort of 50/50 among my friends. My youngest friend said that, yes, he did know the term. I’m told that it’s very popular with teenagers. But this one was brand new to me. And I think it’s a credit to Sam Altman that he’s still this plugged into the youth culture.
Yes. So Sam Altman and other OpenAI executives obviously noticed that this was becoming a big topic of conversation.
You could say they were glazer-focused on it.
[LAUGHS]: Yes. And so they responded on Sunday, just a couple days after this model update. Sam Altman was back on X, saying that the last couple of GPT-4o updates have made the personality too sycophant-y and annoying, and promised to fix it in the coming days. On Tuesday, he posted again that they’d actually rolled back the latest GPT-4o update for free users and were in the process of rolling it back for paid users.
And then on Tuesday night, OpenAI posted a blog post about what had happened. Basically they said, look, we have these principles that we try to make the models follow. This is called the model spec. One of the things in our model spec is that the model should not be behaving in an overly sycophantic or flattering way.
But they said, we teach our models to apply these principles by incorporating a bunch of signals, including the thumbs-up, thumbs-down feedback on ChatGPT responses. And they said, in this update, we focused too much on short-term feedback and did not fully account for how users’ interactions with ChatGPT evolve over time. As a result, GPT-4o skewed toward responses that were overly supportive but disingenuous. Casey, can you translate from corporate blog post into English?
Yeah, here’s what it is. So every company wants to make products that people like. And one of the ways that they figure that out is by asking for feedback. And so basically, from the start, ChatGPT has had buttons that let you say, hey, I really like this answer, or I didn’t like this answer, and explain why. That is an important signal.
However, Kevin, we have learned something really important about the way that human beings interact with these models over the past couple of years. And it is that they actually love flattery, and that if you put these models in blind tests against one another, the one that tells you you’re great and praises you out of nowhere is the one that the majority of people will say they prefer.
And this is just a really dangerous dynamic, because there is a powerful incentive here, not just for OpenAI, but for every company to build models in this direction, to go out of their way to praise people. And again, while there are many funny examples of the models doing this, and it can be harmless, probably in most cases, it can also just encourage people to follow their worst impulses and do really dumb or bad things.
Yeah. I think it’s an early example of this kind of engagement hacking that some of these AI companies are starting to experiment with. This is a way to get people to come back to the app more often and chat with it about more things, if they feel like what’s coming back at them from the AI is flattering. And I can totally imagine that that wins in whatever A/B tests they’re doing. But I think there’s a real cost to that over time.
Absolutely. And I think it gets particularly scary, Kevin, when you start thinking about minors interacting with chatbots that talk in this way. And that leads us to the second story this week that I want to get into.
Yes. So I want you to explain what happened with Meta this week. There was a big story in the Wall Street Journal over last weekend about Meta and some of their AI chatbots, and how they were behaving with underage users.
So Jeff Horwitz had a great investigation in the Wall Street Journal, where he took a look at this. And he chronicles this fight between trust and safety workers at Meta, and executives at the company, over the particular question of should Meta’s chatbot permit sexually explicit roleplay? We know that lots of people are using chatbots for this reason. But most companies have put in guardrails to prevent minors from doing this sort of thing.
It turns out that Meta had not, and that even if your account was registered to a minor, you could have very explicit roleplay chats. And you could also have those via the voice tool inside of what Meta calls its AI Studio. And Meta had licensed a bunch of celebrity voices.
Now, Meta told me that, as far as they can tell, this happened very, very rarely. But it was at least possible for a minor to get in there and have sexually explicit roleplay with the voice of John Cena or the voice of Kristen Bell, even though the actors’ contracts with Meta, according to Horwitz, explicitly prohibited this sort of thing.
So how does this tie into the OpenAI story? Well, what is so compelling about these bots? Again, it’s they’re telling these young people what they want to hear. They’re providing this space for them to explore these sexually explicit roleplay chats. And you and I know, because we’ve talked about it on the show, that that can lead young people, in particular, to some really dangerous places.
Yeah. I mean, that was the whole issue with the Character.AI tragedy, the 14-year-old boy who died by suicide after sort of falling in love with this chatbot character. But it’s also just really gross. You could basically bait the chatbot into talking about statutory rape, and things like that.
And the thing that bothered me most about it was that there appeared to have been conversations within Meta about whether to allow this kind of thing. And for explicitly this sort of engagement-maxing reason, Mark Zuckerberg and other Meta executives, according to this story, had argued to relax some of the guardrails around sexually explicit chats and roleplay because, presumably, when they looked at the numbers about what people were doing on these platforms with these AI chatbots, and what they wanted to do more of, it pointed them in that direction.
Yes. And while I’m sure that Meta would deny that it removed those guardrails, it did go, in the run-up to the publication of the Journal story, and add some new features that are designed to prevent minors, in particular, from having these chats. But another thing happened this week, Kevin, which is that Mark Zuckerberg went on the podcast of Dwarkesh Patel, who recently came on “Hard Fork.” And Dwarkesh asked him, how do we make sure that people’s relationships with bots remain healthy? And I thought Zuckerberg’s answer was so telling about what Meta is about to do. And I’d like to play a clip.
- archived recording (mark zuckerberg)
There’s the stat that I always think is crazy. The average American, I think has, I think it’s fewer than three friends, three people that they’d consider friends. And the average person has demand for meaningfully more. I think it’s like 15 friends or something. I guess there’s probably some point where you’re like, all right, I’m just too busy. I can’t deal with more people. But the average person wants more connection than they have.
So there’s a lot of questions that people ask of stuff like, OK, is this going to replace in-person connections or real life connections. And my default is that the answer to that is probably no. I think that there are all these things that are better about physical connections when you can have them. But the reality is that people just don’t have the connection, and they feel more alone a lot of the time than they would like.
So I agree with part of that. And I do think that bots can play a role in addressing loneliness. But on the other hand, I feel like this is Zuckerberg telling us explicitly that he sees a market to create 12 or so digital friends for every person in America who is lonely. And he doesn’t think it’s bad. He thinks that if you’re turning to a bot for comfort, there’s probably a good reason behind that. And he is going to serve that need.
Yeah. Our default path right now, when it comes to designing and fine-tuning these AI systems, points in the direction of optimizing for engagement, just like we saw on social media, where you had these social networks that used to be about connecting you to your friends and family. And then because there was this growth mindset and this growth imperative, and because they were trying to maximize engagement at all costs, we saw these more attention-grabby, short-form video features coming in.
We saw a shift away from people’s real family and friends toward influencers and professional content. And I just worry that the same types of people who made those decisions about social media platforms, decisions that, I think, a lot of people would say have been pretty ruinous, are now in charge of tuning the chatbots that millions or even billions of people are going to be spending a lot of time with. In Mark Zuckerberg’s case, it is literally the same people.
Yes. My feeling is if you are somebody who was or is worried about screen time, I think that the chatbot phenomenon is going to make the screen time situation look quaint. Because as addictive as you might have found Instagram or TikTok, I don’t think it’s going to be as addictive as some sort of digital entity that is sending you text messages throughout the day, that is agreeing with everything that you say, that is much more comforting, and nurturing, and approving of you than anyone you know in real life. We are just on a glide path toward that being a major new feature of life around the world. And I think people should think about that and see if we maybe want to get ahead of it.
Yeah. And I think the stories we’ve been talking about so far about ChatGPT’s new sycophantic model and Meta’s unhinged AI chatbots, those are about things that self-identify as chatbots. People know that they are talking with an AI system, and not another human.
But I also found another story this week that really made me think about what happens when these things don’t identify themselves as AI, and the kind of mass persuasive effects that they could have.
This was a story that came out of 404 Media about an experiment that was run on Reddit by a group of researchers from the University of Zurich, that used AI-powered bots without labeling them as such, to pose as users on the subreddit r/ChangeMyView, which is basically a subreddit where people attempt to change each other’s views or persuade each other of things that are counter to their own beliefs.
And these researchers, according to this report, created, essentially, a large number of bots, and had them leave a bunch of comments posing as various people, including a Black man who was opposed to Black Lives Matter and a male survivor of statutory rape, and essentially tried to change the minds of real human users about various topics. Now, a lot of the conversation around this story has been about the ethics of this experiment, which I think we can all agree are somewhat —
Non-existent?
— suspect. Yes, yes. This is not a well-designed and ethically-conducted experiment. But the conclusion of the paper, this paper that is now, I guess, not going to be published, was actually more interesting to me. Because what the researchers found was that their AI chatbots were more persuasive than humans, and surpassed human performance substantially at persuading real human users on Reddit to change their views about something.
Yeah. So the way that this works is that if a human user posts on r/ChangeMyView, like, change my view about this thing, and then someone in the comments does successfully change their view, they award them a point called a delta. And these researchers were able to earn more than 130 deltas. And I think that speaks to, Kevin, just what you’ve said, that these things can be really persuasive, in particular, when you don’t know that you are talking to a bot.
So while the first part of this conversation is about when you’re talking to your own chatbot, could it maybe lead you astray? That’s dangerous. But hey, at least you’re talking to a chatbot. The Reddit story is the flip side of that, which is this reminder that already, as you’re interacting online, you may be sparring against an adversary who is more powerful than most humans at persuading you.
Yeah. And Casey, if we could tie these three stories together into a single, I don’t know, topic sentence, what would that be?
I would say that AIs are getting more persuasive. And they are learning how to manipulate human behavior. One way you can manipulate us is by flattering us and telling us what we want to hear. Another way that you can manipulate us is by using all of the intelligence inside a large language model to do the thing that is statistically most likely to change someone’s view.
Kevin, we are in the very earliest days of it. But I think it’s so important to tell people that because in a world where so many people continue to doubt whether AI can do almost anything at all, we’ve just given you three examples of AIs doing some pretty strange and worrisome things out in the real world.
Yes. And all of this is not to detract from what I think we both believe are the real benefits and utility of these AI systems. Not everyone is going to experience these things as these hyper flattering, deceitful, manipulative engagements. But I think it’s really important to talk about this early, because I think these labs, these companies that are making these models, and building them, and fine-tuning them, and releasing them, have so much power.
And I really saw two groups of people starting to panic about the AI news over the past week or so. One of them was the group of people that worries about the mental health effects of AI on people, the kids’ safety folks that are worried that these things will learn to manipulate children, or become graphic or sexual with them, or maybe just befriend them and manipulate them into doing something that’s bad for them.
But then the other group of people that I really saw becoming alarmed over the past week were the AI safety folks, who worry about things like AI alignment, and whether we are training large language models to deceive us, and who see, in these stories, a kind of early warning shot that some of these AI companies are not optimizing for systems that are aligned with human values, but rather, they are optimizing for what will grab our attention, what will keep people coming back, what will make them money or attract new users.
And I think we’ve seen over the past decade with social media that if your incentive structure is just maximizing engagement at all costs, what you often end up with is a product that is really bad for people and maybe bad for long-term safety.
Yeah. So what can you do about this? Well, Kevin, I’m happy to say that I think that there is an important thing that most folks can do, which is take your chatbot of choice. Most of them now will let you upload what they call custom instructions. So you can go into the chatbot. And you can say, hey, I want you to treat me in this way, in particular. And you just write it in plain English.
So, I might say, hey, just so you know, I’m a journalist. So fact-checking is very important to me. And I want you to cite all your sources for what you say. And I have done that with my custom instructions. But let me tell you, now I am going back into those custom instructions. And I am saying, do not go out of your way to flatter me. Tell me the truth about things. Do not gas me up for no reason. And this, I am hopeful, at least in this period of chatbots, will give me a more honest experience.
Yeah, go in, edit your custom instructions. I think that is a good thing to do. And I would just say, be extra skeptical and careful when you are out there engaging on social media, because as some of this research showed, there are already super persuasive chatbots among us. And I think that will only continue as time goes on.
[MUSIC PLAYING]
When we come back, a report from my field trip to a wacky crypto event.
Well, Casey, I have stared into the Orb, and the Orb stared back. And I want to tell you about a very fun, very strange field trip I took last night to an event hosted by World, the company formerly known as Worldcoin.
I am very excited to hear about this. I am jealous that I was not able to attend this with you. But I know that you must have gotten all sorts of interesting information out there, Kevin. So let’s talk about what’s going on with World and its Orbs. And maybe, for people who haven’t been following the story all along, give us a reminder about what World is.
Yeah. So we talked about this actually when it launched a few years ago on the show. It is this audacious and, I would say, like, crazy-sounding scheme that this startup, World, has come up with. This is a startup that was co-founded by Sam Altman. This is one of his side projects.
And the way that it started was basically an attempt to solve what is called proof of humanity. Basically, in a world with very powerful and convincing AI chatbots swarming all over the internet, how are we going to be able to prove to fellow humans that we are, in fact, a human, and not a chatbot? If we’re on a website with them, or on a dating app, or doing some kind of financial transaction, what is the actual proof that we could give them to verify that we’re a human?
Right. And one question that might immediately come to mind for people, Kevin, is, well, what about our government-issued identification? Don’t we already have systems in place that let us flash a driver’s license to let people know that we’re a human?
Yeah. So there are government-issued IDs. But there are some problems with them. For one, they can be faked. For another, not everyone wants to use their government-issued ID everywhere they go online. And there’s also this issue of coordination between governments. It’s actually not trivially easy to get a system set up to be able to accept any ID from any place in the world.
And so along comes Worldcoin. And they have this scheme whereby they are going to ask everyone in the world to scan their eyeballs into something called the Orb. And the Orb is a piece of hardware. It’s got a bunch of fancy cameras and sensors in it. It is, at least in its first incarnation, somewhere between the size of a —
Bigger than a human head, or smaller?
I would say it’s like a small human’s head in size. If you can picture a kid’s soccer ball, it’s like one of those sizes. And basically, the way it works is you scan your eyes into this Orb. And it takes a print or a scan of your irises, and then it turns that into a unique cryptographic signature, a digital ID that is tied, not to your government ID, or even to your name, but to your individual and unique iris.
And then once you have that, you can use your so-called World ID to do things like log in to websites, or to verify that you are a human on a dating app or a social network. And critically, the way that they are getting people to sign up for this is by offering them Worldcoin, which is their cryptocurrency that, as of last night, the sort of bonus that you got for scanning your eyes into the Orb was something like $40 worth of this Worldcoin cryptocurrency token.
Got it. And we’re going to get into what was announced last night. But before we do that, Kevin, in case anyone is listening, thinking, I don’t know about this, guys. This just sounds like another kooky Silicon Valley scheme. Could this possibly matter in my life at all? What is your case that what World is working on actually matters?
I mean, I want to say that I think those things are not mutually exclusive. Like, it can be possible that this is a kooky Silicon Valley scheme, and that it is potentially addressing an important problem. I mean, think about the study we just talked about, where researchers unleashed a bunch of AI chatbots onto Reddit to have conversations with people without labeling themselves as AI bots. I think that kind of thing is already quite prevalent on the internet, and it’s going to get way, way more prevalent as these chatbots get better.
And so I actually do think that as AI gets more powerful and ubiquitous, we are going to want some way to easily verify or confirm that the person we’re talking with, or gaming with, or flirting with on a dating app is actually a real human. So that’s the sort of near-term case. And as far out as that sounds, that is actually only step one in World’s plan for global domination.
Because the other thing that Sam Altman said at this event, he was there, along with the CEO of World, Alex Blania, was that this is how they are planning to solve the UBI issue, basically, how do you make sure that the gains from powerful AI, the economic profits that are going to be made, are distributed to all humans?
And so their long-term idea is that if you give everyone these unique cryptographic World IDs by scanning them into the Orbs, you can then use that to distribute some kind of basic income to them in the future in the form of Worldcoin. So I should say like, that is very far away, in my opinion. But I think that is where they are headed with this thing.
Yeah. And I have to note, we already had a technology for distributing sums of money to citizens, which is called the government. But it seems like in the World conception of society, maybe that doesn’t exist anymore. So let’s get to what happened last night, Kevin. It’s Wednesday evening in San Francisco. Where did you go? Set the scene for us.
Yeah. So they held this thing at Fort Mason, which is a beautiful part of San Francisco. And you go in. And there’s music. There’s lights going off. It sort of feels like you’re in a nightclub in Berlin or something. And then at a certain point, they have their keynote, where Sam Altman and Alex Blania get on stage, and they show off all the progress they’ve been making.
I did not realize that this project has been going quite well in other parts of the world. They now have something like 12 million unique people who have scanned their irises into these Orbs. But they have not yet launched in the United States because, for the longest time, there was a lot of regulatory uncertainty about whether you could do something like Worldcoin, both because of the biometric data collection that they’re doing, and because of the crypto piece.
But now that the Trump administration has taken power and has basically signaled anything goes when it comes to crypto, they are now going to be launching in the US. So they are opening up a bunch of retail outlets in cities like San Francisco, LA, Nashville, Austin, where you are going to be able to go and scan into the Orb and get your World ID.
They have plans to put something like 7,500 Orbs across the United States by the end of the year. So they are expanding very quickly. They also announced a bunch of other stuff. They have some interesting partnerships. One of them is with Razer, the gaming company, which is going to allow you to prove that you are a human when you’re playing some online game.
Also, a partnership with Match, the dating app company that makes Tinder, and Hinge, and other apps. You’re going to be able soon to log into Tinder in Japan using your World ID. And there’s a bunch of other stuff. They have a new Visa credit card that will allow you to spend your Worldcoin, and stuff like that. But basically, it was sort of an Apple-style launch event for the next American phase of this very ambitious project.
Yeah. I’m trying to understand. If you’re on Japanese Tinder, and maybe someday soon, there’s a feed of Orb-verified humans that you can select from, do they seem more or less attractive to you because they’ve been Orb-verified? To me, that’s a coin flip. I don’t know how I feel about that.
[LAUGHS]: What was funny was, at this event last night, they had brought in a bunch of social media influencers to make —
Orb fluencers?
[LAUGHS]: Yes, they brought in the Orb fluencers. And so they had all these very well-dressed, attractive people taking selfies of themselves posing with the Orbs. And I think there’s a chance that this becomes like a status thing, like, have you Orbed? It becomes kind of the “have you ridden in a Waymo?” of 2025.
Yeah, maybe. I’m also thinking about the conspiracy theorists who think that the Social Security number the US government gives you is the Mark of the Beast. I can’t imagine those people are going to get Orb-verified anytime soon. But speaking of Orbs, Kevin, am I right that among the announcements this week is that World has a new Orb?
Yes, new Orb just dropped. They announced last night that they are starting to produce this thing called the Orb Mini, which is, we should say it, not an Orb.
What?
It is a — [LAUGHS]
I’m out.
It is like a little sort of smartphone-sized device that has two glowing eyes on it, basically. And you can or will be able to use that to verify your humanity instead of the actual Orb. So the idea is distribute a bunch of these things. People can convince their friends to sign up and get their World IDs. And that’s part of how they’re going to scale this thing.
For me, all this company has going for it is that it makes an Orb that scans your eyeballs. So if we’re already moving to a flat rectangle, I’m like 80 percent less interested. But we’ll see how it goes, I guess. OK, so you had a chance, Kevin, to scan your eyeballs. What did you decide to do in the end?
Yes, I became Orb-pilled. I stared into the Orb. Basically, it feels like you’re setting up Face ID on your iPhone. It’s like, look here. Move back a little bit. Take off your glasses. Make sure we can get a good —
Give us a smile, wink.
[LAUGHS]
Right, right. Say, I pledge allegiance to Worldcoin three times, a little louder, please. And then it sort of glows and makes a sound. And I now have my World ID, and apparently, $40 worth of Worldcoin, although I have no idea how to access it.
Was there any physical pain from the Orb scan?
[LAUGHS] How’d you feel when you woke up this morning? Any joint pain?
[LAUGHS]: Well, I did find that my dreams were invaded by Orbs. I did dream of Orbs. So it’s made it into my deep psyche, in some way.
Yeah, that’s a well-known side effect. Now, you say you were given some amount of Worldcoin as part of this experience. Will you be donating that to charity?
If I can figure out how, yes. And we should talk about this, because the Worldcoin cryptocurrency has not been doing well —
No?
Like, over the past year, it’s down more than 70 percent. This was initially a big reason that people wanted to go get their Orb scans: they would get this airdrop of crypto tokens that could be worth something. And I think this is the part that makes me the most skeptical of this whole project. I think I am, in general, pretty open-minded about this idea, because I do think that bots and impersonation are going to be a real problem.
But I feel like we went through this a couple of years ago when all these crypto things were launching, that would promise to use crypto as the incentive to get these big projects off the ground.
And I wrote about one of them. It was called Helium. And I thought that was a decent idea at the time. But it turned out that attaching crypto to it just ruined the whole thing, because it created all these awful incentives, and brought all these scammers and unscrupulous actors into the ecosystem. And I worry that is the piece of this that, if it fails, will cause the failure.
Well, I’ll tell you what I would do if I were them, which is to become the President of the United States, because then you can have your own coin. Foreign governments can buy vast amounts of it to curry favor with you. You don’t have to disclose that. And then the price goes way up. So something for them to look into, I would say.
It’s true. It’s true. And we should also mention that there are places that are already starting to ban this technology, or at least to take a hard look at it. So Worldcoin has been banned in Hong Kong. Regulators in Brazil, also not big fans of it. And then there are places in the United States, like New York State, where you can’t do this because of a privacy law that prevents the collection of some kinds of biometric data. So I think it’s a race between World and Worldcoin and regulators to see whether the scale can arrive before the regulations.
So let’s talk a bit about the privacy piece, because on one hand, you are giving your biometric data to a private entity. And they can then do many things with it, some of which you may not like. On the other hand, they’re trying to sell the idea that this is much more privacy protecting than something like a driver’s license that might have your picture on it. So, Kevin, can you walk me through the privacy arguments for and against what World is trying to do here?
Yeah. So they had a whole spiel about this at this event. Basically, they’ve done a lot of things to try to protect your biometric data. One of them is like, they don’t actually store the scan of your iris. They just hash it. And the hash is stored locally on your device and doesn’t go into some giant database somewhere.
But I do think this is the part where a lot of people in the US are going to fall off the bandwagon, or maybe be more skeptical of this idea: it just feels creepy to upload your biometric data to a private company, one that is not associated with the government or any other entity that you might inherently trust more.
And I think the bull case for this is something like what happened with CLEAR at the airport. I remember when CLEAR and TSA PreCheck were launching, it was kind of creepy and weird, and you would only do it if you were not that concerned about privacy. It was like, what? I’m just going to upload my fingerprints and my face scan to this thing, without knowing how it’s being used?
And then over time, a lot of people started to care less about the privacy thing and get on board, because it would let them get through the airport faster. I think that’s one possible outcome here, is that we start just seeing these Orbs in every gas station and convenience store in America. And we just become desensitized to it. And it’s like, oh yeah, I did my Orb. Have you not done your Orb? I think the other thing that could happen is, this just is a bridge too far for people. And they just say you know what? I don’t trust these people. And I don’t want to give them my eyeballs.
Yeah. Let me ask one more question about the financial system undergirding World, Kevin, which is I just learned, in preparing for this conversation with you, that World is apparently a nonprofit. Is that right?
So it’s a little complicated. Basically, there is a for-profit company called Tools for Humanity that is putting all of this together. They’re in charge of the whole scheme. And then there is the World Foundation, which is a nonprofit that owns the intellectual property of the protocol on which all of this is based. So, as with many Sam Altman projects, the answer is it’s complicated.
But I think here’s where this gets really interesting to me, Casey. So Sam Altman, co-founder of World, also CEO of OpenAI. OpenAI is reportedly thinking about starting a social network. One possibility I can see, quite easily, actually, is that these things eventually merge, that World IDs become the means of logging into the OpenAI social network, whatever that ends up looking like. And maybe it becomes the way that people will pay for things within the OpenAI ecosystem.
Maybe it becomes the currency that you get rewarded in for contributing some valuable content or piece of information to the OpenAI network. I think there are a lot of different possible paths here, including, by the way, failure. I think that is obviously an option here. But one path is that this becomes either officially or unofficially merged, and that Worldcoin becomes some piece of the OpenAI ChatGPT ecosystem.
Sure. Or here’s another possibility. Sam has to raise so much money to spread World throughout the world, that he decides that it will actually be necessary to convert the nonprofit into a for-profit. Could you imagine —
That would ever happen.
No. You don’t think that could ever happen?
[LAUGHS]: No, there’s no precedent for that.
Let me ask one more question about Sam Altman. I think some observers may feel that this is essentially Sam causing one kind of problem with OpenAI, and then trying to sell you a solution with World.
OpenAI creates the problem of, well, we can’t trust anything in the media or online anymore. And then World comes along and says, hey, all you got to do is give me your eyeball, and I’ll solve that problem for you. So is that a fair reading of what’s happening here?
Potentially. Yeah, I’ve heard it compared to the arsonist also being the firefighter. And I don’t think it’s a problem that OpenAI single-handedly is causing. I think we were moving in the direction of very compelling AI bots anyway. I think they are basically trying to have their cake and eat it too.
OpenAI is going to make the software that allows people to build these very powerful AI bots, and spread them all over the internet. And then World and Worldcoin will be there on the other side to say, hey, don’t you want to be able to prove that you’re a human? So I got to say, if it works out for them, this is like total domination. They will have conquered the world of AI. They will have conquered the world of finance and human verification, and basically, all reputable commerce will have to go through them. I don’t think that’s probably going to be the outcome here.
But there was definitely a moment where I was sitting in the press conference hearing about the one-world money with the decentralized one-world governance scheme started by the guy with the AI company that’s making all the chatbots to bring us to AGI. And I just had this moment of like, the future is so weird. It’s so weird. Living in San Francisco, I don’t know if you identify with this, but you just become desensitized to weird things.
Yes.
Like, somebody tells you at a party that they’re like resurrecting the woolly mammoth. And you’re like, cool.
My God. That’s great. Good for you. And so it takes a lot to actually give me the sense that I’m seeing something new and strange. But I got it at the World Orb event last night.
No, I feel — I have a friend who once just casually mentioned to me that his roommate was trying to make dogs immortal. And I was like, yeah. Well, welcome to another Saturday in the big city.
So Kevin, I have to say, as we bring this to a close, I feel torn about this, because I think I would benefit from a world where I knew who online was a person, and who was not. I think I remain skeptical that eyeball scans are the way to get there. I think, for the moment, while I mostly enjoy being an early adopter, I’m going to be sitting out the eyeball scanning process. But do you have a case that I should change my mind and jump on the bandwagon any earlier?
No, I am not here to tell you that you need to get your Orb scan. I think that is a personal decision. And people should assess their own comfort level and thoughts about privacy. I’m somewhat cavalier about this stuff because I’ll try anything for a good story. But I think, for most people, they should really dig into the claims that World and Worldcoin are making, and figure out whether that’s something they’re comfortable with.
I would say my overall impression is that I am convinced that World and Worldcoin have identified a real problem, but not that they have come up with the perfect solution. I do actually think we’re going to need something like a proof of humanity system. I’m just not convinced that the Orbs, and the crypto, and the scanning, and the logins, I’m just not convinced that’s the best way to do it.
Yeah. My personal hope is that actual governments investigate the concept of digital identity. I mean, some countries are exploring this. But I would like to see a really robust international alliance that is taking a hard look at this question and is doing it in some democratically-governed way.
Yeah, it sounds like a great job for DOGE. Would you like to scan into the DOGE Orb, Casey?
Yeah. I’ll see if I can get them to return my emails. They’re not really known for their responsiveness. I will say this. If, instead of saying, well, we’ve shrunk the next version of this thing down to a rectangle, World had committed that every successive Orb would be larger than the last, then I would actually scan my eyeball. If I could get my eyeball scanned by an Orb the size of a room, OK, now we’ve got something happening.
[MUSIC PLAYING]
When we come back, I just got a text. It’s time to talk about our group chats.
Well, Casey, the group chats of America are lighting up this week over a story about group chats.
They really are. Ben Smith, our old friend, had a great story in Semafor about the group chats that rule the world. Maybe only a tiny bit hyperbolically, he chronicled a set of group chats that often have the venture capitalist Marc Andreessen at the center. And they’re pulling in lots of elites from all corners of American life, talking about what’s going on in the news, sharing memes and jokes, just like any other group chat. But in this case, often with the express intent of moving the participants to the right.
Yeah. And this was such a great story, in part because I think it explained how a lot of these influential people in the tech industry have become radicalized politically over the last few years. But I also think it really exposed that the group chat is the new social network, at least among some of the world’s most powerful people.
And I see this in my life, too. I think a lot of the thoughts that I once would have posted on Twitter or Instagram or Facebook, I now post in my group chats. So this story, it was so great. And it gave us an idea for a new segment called Group Chat chat.
Yeah, that’s right. We thought, you know, all week long, our friends, our colleagues, are sharing stories with us. We’re hashing them out. We’re sharing our gossipy little thoughts. What if we took some of those stories, brought them onto the podcast, and even invited in a friend to tell us what was going on in their group chat?
So for our first guest on Group Chat chat, we’ve invited on PJ Vogt. PJ, of course, is the host of the great podcast Search Engine. And he gamely volunteered to share a story that is going around his group chats this week. Let’s bring him in.
[MUSIC PLAYING]
PJ Vogt, thanks for coming to “Hard Fork.”
Thank you for having me. I’m so delighted to be here.
So this is a new segment that we are calling Group Chat chat. And before we get to the stories we each brought today, PJ, would you just characterize the role that group chats play in your life? Any secret power group chats you want to tell us about? Anyone to invite us to?
Oh my God. I would so be in a group chat with you guys. For me, not joking, they are huge. I feel like there were a few years where journalists were thinking out loud on social media, mainly Twitter. And it was very exciting. But nobody had foreseen the possible consequences of doing that. It felt like open dialogue, but it was open dialogue with risk. And now, I feel like I use group chats with a lot of people I respect and admire just to, you know, ask, did you see this? What did you think of this? Not to all come to one consensus, but to have open, spirited dialogue about everything, and just to get people’s opinions. I really rely on my group chats, actually.
Hmm.
Do you guys ever get group chat envy, where you realize that someone’s in the chat with someone whose opinion you would want to know, and you’re dropping hints like, is there any way I can get a plus-one into this?
I mean, I’m apparently the only person in America who Marc Andreessen is not texting.
That felt really upsetting to me. For me, the real value of the group chat, outside of just my core friend group chat, which just makes me laugh all day, is the media industry group chat. Because media is small. And of course, reporters are like anybody in any industry. We have our opinions about who’s doing great and, you know, who sucks. But you can’t just go post that on Bluesky, because it’s too small a world.
Yes. All right. So let’s kick this off. And I will bring the story that has been lighting up my group chat today. And then I want to hear about what you guys are seeing in yours. This one was about the return of the ice bucket challenge. The ice bucket challenge is back, y’all.
Wow.
The idea that I have been alive long enough for the ice bucket challenge to come back truly makes me feel 10,000 years old.
It’s like one of those comets that you would only get to see twice in your life. You, like, drive to Texas for it or something.
This is the Halley’s Comet of memes. And it just is about to hit us again.
Yes. So this is a story that has apparently been taking over TikTok and other Gen Z social media apps over the past week. The ice bucket challenge, of course, is the internet meme that went viral in 2014 to bring attention to and raise money for research into ALS. And a bunch of celebrities participated. It was one of the biggest sort of viral internet phenomena of its era.
And this time, it is being directed toward raising money for mental health. And, as of the time of this recording, it has raised something like $400,000, which is not as much as the original. What do you make of this?
For me, honestly, I’m not saying that I spend every waking hour thinking about the ice bucket challenge. But I do think about it sometimes as an example of how in the — I don’t know. It was like spectacle and silliness. But there was this idea that the attention should be attached to helping people. And my memory of the ice bucket challenge is it raised, in its first run, a significant amount of research funding for ALS. It was really productive.
And so you had this like, hey, you can do something silly. You can impress your friends. But you’re helping. And I feel like that part of the mechanism got a little bit detached from all the challenges that —
Yes. The way that this came up in my group chat was that someone posted this article that my colleague at The New York Times had written about the return of the ice bucket challenge. And then people started sort of reposting all of the old ice bucket challenge videos that they remembered from the 2014 run of this thing. And the one that was the most surreal to rewatch 11 years later now —
Was Jeff Epstein.
Yes, the Jeff Epstein ice bucket challenge video went crazy. No, it was the Donald Trump ice bucket challenge video, which, I don’t know if either of you have rewatched this in the last 11 years. But basically, he’s on the roof of a building, probably Trump Tower. And he has Miss USA and Miss Universe pour a bucket of ice water on him. And they actually use Trump-branded bottled water. They pour it into the bucket and then dump it on his head.
Oh my God.
And it’s very surreal, not just because he was participating in an internet meme, but one of the people that he challenges, because part of the whole shtick is that you have to nominate someone else or a couple of other people to do it after you. And he challenges Barack Obama to do the ice bucket challenge, which is like — discourse was different back then. If he does it this time, I don’t know who he’s going to be nominating, like Laura Loomer or catturd2, or something like that. But it’s not going to be Barack Obama.
I’ve gone back through the memes of 2014, you guys, to try to figure out if the ice bucket challenge is coming back, what else is about to hit us. And I regret to inform you. I think that Chewbacca mom is about to have a huge moment.
Oh, no.
I don’t know where she is. But I think she’s practicing with that mask again.
The thing that’s so scary about that is, if you follow the logic of what’s happened to Donald Trump, you have to assume that everyone who went viral in 2014 has become insanely poisoned by internet rage. And so whatever she believes or whatever subreddits she’s haunting, I can only imagine.
Yeah.
Do we think Trump will do it again this time?
I don’t think so. I think there’s — it was pretty risky for him to do it in the first place, given the hair situation.
That’s the drama I remember watching. You’re just like, what is going to happen when water hits his hair? And I remember that question well enough to remember that nothing is revealed. You’re not like, oh, I see the architecture underneath the edifice or whatever. But yeah, I think it’s probably only become riskier, if time does to him what time does to us all.
Here’s what I hope happens. I hope he does the ice bucket challenge. Somebody, once again, pours the ice water all over his head, and he nominates Kim Jong Un and Vladimir Putin. And then we just take it from there.
OK. That is what was going around in my group chats this week. Casey, you’re next. What’s going on in your group chats?
OK. So in my group chat, Kevin and PJ, we are all talking about a story that I like to call you can’t lick a badger twice.
You can’t lick a badger twice? What is the story?
So friend of the show, Katie Notopoulos, wrote a piece about this over at Business Insider. And basically, people discovered that if you typed in almost any phrase into Google and added the word, meaning, Google’s AI systems would just create a meaning for you on the spot.
Oh, no.
And I think the basic idea was, Google was like, well, let’s — people are always searching for the explanations of various phrases. We could direct them to the websites that would answer that question. But actually, no, wait. Why don’t we just use these AI overviews to tell people what these things mean? And if we don’t know, we will just make it up. And so —
What people want from Google is a confident robot liar.
That’s right. So I know you guys are wondering which is, what did Google say when people asked for the meaning of you can’t lick a badger twice.
Please.
What did it say?
According to the AI overview, it means you can’t trick or deceive someone a second time after they’ve been tricked once. It’s a warning that if someone has already been deceived, they are unlikely to fall for the same trick again. Which like, no, that’s not —
It doesn’t mean that. It doesn’t mean that. Some of the other great ones that people were trying out, you can’t fit a duck in a pencil.
I mean, you can’t.
No. And actually, PJ, you’re on to what the AI was going to explain, which was, according to Google, that’s a simple idiom used to illustrate that something is impossible or illogical.
God.
Somebody else put up, and this is one of my new favorite phrases, the road is full of salsa, which, according to Google, likely refers to a vibrant and lively cultural scene, particularly a place where salsa music and dance are prevalent.
Yeah. See, if this had come up in my group chats, this would have been immediately followed by someone changing the name of the group chat to the road is full of salsa. Did that happen in your chats, Casey?
[LAUGHS]: You know what? I have to say, a part of my group chat culture is that we rarely change the name of the group chat. I think it would be very fun if we did. And maybe I’ll try it out. But we’ve really been sticking with the core names we’ve had.
Are you willing to reveal?
Yes. And we’ll have to cut it, because it’s so Byzantine. But basically, when all my current friend group started forming, we noticed that our names made very convenient little acronyms. So I’m in a group chat with a Jacob, Alex, Casey, Cory. And that just became Jack, for example. Then Jack became Jackal. Then our friend Leon got married. So we said, we’re going to move the L to the front. So it became Ljack to celebrate Leon. Then my boyfriend got a job at Anthropic. So the current name of the group chat is Ljackalthropic.
So unfortunately, that doesn’t make any sense. But here’s what I think is so interesting about this. These models have gone out. And they have read the entire internet. They know what people say, and they know what people don’t say. So you’d think it would be easy for them to just say, nobody says you can’t lick a badger twice.
It’s the weirdest thing that the one thing you can’t teach the AI computer that is coming for us all is just humility. Like, it can never just be like, oh, I don’t know. I don’t know. Maybe you should look it up.
But I think it actually ties in with something we talked about earlier in the show, which is that these systems are so desperate to please you that they do not want to irritate you by telling you that nobody says you can’t lick a badger twice. And so instead, they just go out, and they make something up.
Yeah. It reminds me a little bit — do you remember, either of you, Googlewhacking?
Was that when you tried to find something that had no search results, or one search result, or something like that?
Yes, it was this long-running internet game, where you would try to come up with a series of words, or maybe two words, that, when you typed them into Google, would return only a single result. And so there were lots of people trying this out. There’s a whole Wikipedia page for Googlewhacking. This feels like — the modern AI equivalent of that is, can you come up with an idiom that is so stupid that Google’s AI overview will not attempt to fill in a fake meaning? Yeah.
And it’s a great reminder that parents need to talk to their teens about Googlewhacking and glazing, the two top terms of this week.
Yeah, and make sure your teen doesn’t have a badger. And if so, they should only lick it once.
Now, PJ, what have you brought us today from your group chats?
So the thing that I’ve been putting into all my group chats, because I can’t make sense of it, is your guys’ colleague, Ezra Klein. I don’t know if you noticed this, but he was on some podcasts in the last month.
A couple.
A couple. And in one of the appearances, he was being interviewed by Tyler Cowen, whose work I really admire. And then they both agreed on this fact, where I was like, wait. We all agree on this fact now? Where Tyler said that Sam Altman of OpenAI had, at some point, predicted that in the not-too-distant future, we would have a $1 billion company, like a company that was valued at $1 billion, that only had one employee, the implication being you would train an AI to do something, and you would just count the money for the rest of your life.
And PJ, I actually believe we have a clip of this ready to go.
- archived recording 1
I’m struck by how small many companies can become. So Midjourney, which you’re familiar with, at the peak of its innovation, was eight people. And that was not mainly a story about consultants. Sam Altman says it will be possible to have billion dollar companies run by one person. I suspect that’s two or three people. But nonetheless, that seems not so far off.
So it seems to me there really ought to be significant parts of the government, by no means all, where you could have a much smaller number of people directing the AIs. It would be the same people at the top giving the orders as today, more or less, and just a lot fewer staff. I don’t see how that can’t be the case.
I think that I agree with you that in theory should be the case. But I do think that as you actually see it emerge from — in theory, should be the case till we figured out a way to do it, it’s going to turn out that things the federal government does are not all that type up —
- archived recording 1
But it’s so hard to get rid of people. Don’t you need to start with —
So setting aside whether we should replace the federal government with lots of AI, the reason I was injecting this into all my group chats was just like, guys, if the conversation is among people who are quite smart, and who have spent a lot of time thinking about this, if they are predicting a world where AI replaces this much of the workforce this fast, how are you guys thinking about it? But in every group chat I put this into, the response instead was, what is your idea for a billion dollar company that AI can run for you?
And any good ideas in there you want to share, and maybe get the creative juices flowing for our listeners?
All the ideas I heard were profoundly unethical. Many of them seemed to start with doing homework for children, which I don’t think is a billion dollar idea, and which I think a lot of AI companies are already making money on.
Yeah, that company exists. And it is called OpenAI.
It is a great thought experiment, though. I think many of us have had thoughts over the years of, maybe I’ll go out, and start a company, strike out on my own. Two of the three people in this chat actually did it. But getting to a billion dollars is not trivial. And it is kind of tantalizing to imagine, once you put AI at my fingertips, will I be able to get there?
Yeah. I mean, actually this is giving me an idea for maybe a billion dollar one-person startup, which is based on some of the ideas we talked about earlier in this show, about how these models are becoming more flattering and persuasive, which is, we all have that friend or maybe those friends who are totally addicted to posting. And the internet and social media have wrecked their brain and turned them into a shell of their former self.
I know where you’re going. And I like it so much.
And I think we should create fake social networks for these people —
Oh, my God, it’s so good.
— and install them on their phones so that they could be going to what they think is X, or Facebook, or TikTok. And instead of hearing from their real horrible internet friends, they would have these persuasive AI chatbots who’d say, maybe tone it down with the racism, and maybe gradually over the course of time, bring them back to base reality. What do you think about this idea?
I like it so much.
There’s so many people I would build a little mirror world for, where they could just slowly become more sane. And it’s like, hey, all the retweets you want, all the likes you want. You can be like the Elon Musk of this platform. You could be like the George Takei of this platform, whatever. But the trade-off is that it has to slowly, slowly make you more sane, instead of the opposite.
Yes.
Yes. And I worry that that is not possible, because I think, for a lot of the world’s billionaires, the existing social networks already serve this purpose. No matter what they say, they have a thousand comments saying, OMG, you’re so true for that bestie. And it does seem to have driven them completely insane. So if we are able to somehow develop some anti-radicalizing technology, I do agree that could be a billion dollar company.
Yeah. What do you call that?
What do you call that? Well, I like the term heaven banning, which went viral a few years ago, which is basically this idea that instead of being shadow banned, you would get heaven banned, which is, you get banished to a platform where AI models just constantly agree with you and praise you. And this would be a way to bring people back from the brink. So we can call it heaven banning.
We just spent 30 minutes talking about how, when you have AIs constantly telling people what they want to think, it drives them insane.
No, this is for people who are already insane. This is to try to rehabilitate them.
I tried to have a talk with an AI operator this week, asking it to stop complimenting me. And truly, it was like, it’s so good that you say that.
Yeah, the AI always comes back and keeps trying to flatter me. And I say, listen, buddy, you can’t lick a badger twice. So move it along.
Well, PJ, thank you for bringing us some gossip and content from your group chats.
Happy to.
And we should be in a group chat together, the three of us.
Yeah, that sounds wonderful.
Let’s start one.
Happy chatting, PJ.
Thanks, guys. [MUSIC PLAYING]
“Hard Fork” is produced by Whitney Jones and Rachel Cohn. We’re edited this week by Matt Collette. We’re fact-checked by Ena Alvarado. Today’s show was engineered by Chris Wood. Original music by Elisheba Ittoop, Diane Wong, Rowan Niemisto, and Dan Powell.
Our executive producer is Jen Poyant. Video production by Sawyer Roque, Amy Marino, and Chris Schott. You can watch this full episode on YouTube at youtube.com/HardFork. Special thanks to Paula Szuchman, Pui-Wing Tam, Dahlia Haddad, and Jeffrey Miranda. As always, you can email us at [email protected]. Invite us to your secret group chats.
[MUSIC PLAYING]