AI influencers fall for hoaxes and scams

Photo: Intelligencer; Photo: Getty Images

In early August, an anonymous X account began making grand promises about AI. “(Y)ou’re about to taste agi,” it claimed, adding a strawberry emoji. “Q* is not a project. It’s a portal. Altman’s strawberry is the key. The singularity is not coming. It’s here,” it continued. “Tonight, we evolve.”

If you’re not part of the hyperactive group of AI fans, critics, doomsayers, accelerationists, scammers, and rarefied insiders who have congregated on X, Discord, and Reddit to speculate about the future of AI, this probably sounds like nonsense. If you are, it might have sounded tempting at the time. Some background: AGI stands for artificial general intelligence, a term used to describe human-like capabilities in AI; the strawberries are a reference to an internal codename for a rumored “reasoning” technology under development at OpenAI (and a post from OpenAI’s Sam Altman minutes earlier had included a picture of strawberries); Q* is either a previous codename for the project or a related project; and the singularity is a theoretical point at which AI or technology more broadly becomes self-improving and uncontrollable. All of this coming this evening? Wow.

The specificity of the account’s many predictions caught the attention of AI influencers, who speculated about who was behind it and whether it might itself be an AI: a next-generation OpenAI model whose first human-level job was to generate publicity for its creator. The posts broke out of those confines, however, after an unsolicited response from Sam Altman himself:

For a certain type of highly receptive AI enthusiast, this made a downright absurd scenario plausible: an amateurish, meme-filled anonymous account with the man from Her as its avatar, heralding the arrival of the post-human era. It was time to celebrate. Or was it time to panic?

Neither, it turns out. Strawberry Guy knew nothing. His “leaked” release dates came and went, and the community began to turn on him. On Reddit, the /r/singularity community banned mentions of the account. “I was tricked,” SEO consultant and peripheral AI influencer Marie Haynes wrote in a postmortem blog post. Before she knew the account was fake, she said, “Strawberry Man’s tweets were starting to freak me out.” But, she concluded, “it was all for a good reason… We are truly unprepared for what’s coming.”

Then another mysterious account, posting under the name Lily Ashwood, began appearing in live voice discussions about AI on X Spaces. This time, the account didn’t have to work very hard to get people thinking. AI enthusiasts who had gathered to discuss what they hoped was the imminent release of GPT-5 (some of whom were beginning to suspect they’d been scammed) started to wonder if this new character was herself an AI, perhaps running on OpenAI’s speech technology and an unreleased model. They fixated on her cadence, her fluency in responding to a wide range of questions, and her reticence around certain topics. They tried to unmask her, to expose her as a chatbot, and couldn’t figure out who or what they were talking to.

“I think I just saw a live demo of GPT 5,” wrote one Reddit user after joining an X Space with Ashwood. “It’s unbelievable how good she is. She almost makes you believe she’s a human.” Spurred on by the Strawberry Guy episode, others began collecting evidence that Ashwood was AGI in the wild. OpenAI researchers had just co-authored a paper calling for the development of “personhood credentials” to “distinguish who is real online.” Could this be part of that research? Her name contained a clue, hiding in plain sight: ILYA, as in Ilya Sutskever, the OpenAI co-founder who left the company after a clash with Sam Altman over AI safety. And just listen to the “suspicious noise gating” and the “unusual spectral frequency patterns.” Superintelligence was standing right there in front of them, chatting away on X:

It wasn’t. Ashwood, who declined a request for comment, described herself as a single mother from Massachusetts and released a video mocking the episode (which, of course, some observers took as further evidence that she was an AI):

But by then, even some prominent AI influencers had gotten caught up in the drama. Pliny the Liberator, an account that built a large following by cleverly jailbreaking various AI tools (manipulating them into revealing information about how they work and breaking through the safety measures put in place by their creators), was briefly convinced that Ashwood might be an AI. He described the experience as psychologically taxing and held a debriefing with his followers, some of whom were angry about what he, an anonymous and sometimes trollish account they had quickly come to trust on the mechanics and nascent theology of AI, had led them to believe, or merely let them believe:

On the one hand, this is easy to dismiss from the outside: a loose community of people with shared intuitions, fears, and hopes for a vaguely defined technology works itself into a frenzy in isolated online spaces, revising its predictions when they don’t materialize (or take longer than expected). But even the earliest chatbots, which would be instantly recognizable today as inert programs, were psychologically disorienting and unsettling when they first arrived. More recently, in 2022, a Google engineer lost his job after publicly insisting that an internal chatbot was showing signs of life. Since then, millions of people have interacted with tools far more advanced than the one he had access to, and for at least some of them, something has been shaken loose. Pliny, who recently received a Bitcoin grant from venture capitalist Marc Andreessen for his work, wondered if OpenAI’s release schedule had been delayed “because sufficiently advanced speech AI has the potential to cause psychosis,” and offered a prediction of his own: “it’s fair to say we’ll see the first medically documented case of AI-induced psychosis in December?” His followers were unimpressed. “Nope, I was hospitalized in 2023 after gpt came out… 6 months straight, 7 days a week, little to no sleep,” wrote one. “HAHAHA I MADE IT IN AUGUST,” wrote another. A third asked, “Haven’t you been paying attention?”

The fundamental claim here, that AI systems that can talk, sound, and look like humans are able to deceive or manipulate actual humans, is fairly uncontroversial. Likewise, it is reasonable to assume, and to worry, that elaborately anthropomorphized products like ChatGPT are cynically or unintentionally exploiting users’ willingness to personify them in harmful ways. As a potential trigger for latent mental illness, it would be hard to think of anything more obvious than machines that pretend to be people, built by people who talk in riddles about the apocalypse.

Leaving aside the plausibility or inevitability of human-level autonomous intelligence, there are plenty of nearby contexts in which AI is already being used in similarly deceptive ways, whether it’s cloning the voices of family members to defraud people or simply posting automated misinformation on social media. Some OpenAI staff, perhaps sensing that things were getting a bit out of hand, have posted appeals to, in effect, calm down:

Whether companies like OpenAI are on the verge of releasing technology that matches their oracular messaging is another matter entirely. The company’s current approach, which includes broad claims that human-level intelligence is imminent, cryptic posts from employees who have become micro-celebrities among AI enthusiasts, and secrecy about its actual product road map, has been effective at building hype and anticipation in ways that materially benefit OpenAI. “There are much bigger models coming, and they will blow your mind” is exactly what increasingly skeptical investors want to hear in 2024, especially now that GPT-4 is almost two years old.

But for the online community of people left restless, excited, and obsessed by the last few years of AI development, the growing gap between major model releases has created a speculative void, one filled with paranoia, fear, and a little bit of fraud. They were promised robots that could trick us into thinking they were human, and told that the world would never be the same again. In the meantime, they’ve decided to do the next best thing: trick each other.