Sexy AI Chatbots Are Creating Thorny Issues for Fandom

Generative AI allows fans to “talk” to their favorite characters, drawing comparisons to everything from role-playing to fan fiction. But do they actually want to outsource all the fun to AI?
Illustration: Jacqui VanLiew; Getty Images

Given the opportunity to chat with some of the world’s most famous fictional characters, I tried to get them to say something … interesting. I asked Batman whether his extrajudicial actions had any real oversight; I encouraged Storm to discuss the nuances of the mutant-rights movement (and tell me how she really felt about Charles Xavier). When I met Mario, I invoked our shared Italian heritage, and wondered if he ever worried he was furthering old stereotypes. “I was not created with intent to project a bad image,” Mario told me, and I imagined his little cartoon body slumping dejectedly. “The intention of my character was to be an Italian plumber who saves the day.”

These attempts to discourse fictional characters to death were conducted in Character.AI, a chatbot platform that went into public beta just shy of a year ago. Unlike the authors of the “journalist publishes chatbot transcripts and assigns profound meaning to them” pieces we’ve all had to suffer through this past year, I won’t be sharing any of these chats. Far from pseudo-profound, the results weren’t even remotely interesting; Batman’s, Storm’s, and Mario’s milquetoast replies on most topics sounded like they were written by HR departments carefully trying to avoid lawsuits.

Chatbots are, of course, about what you put into them; had I spent hours chatting with Batman, I might have been able to steer him in a more engaging direction. They’re also about what was put into them in the first place: who creates and initially trains the bot (in Character.AI’s case, a fellow user) and the large language model that undergirds it. (Character.AI has said its model was built “from scratch,” but as with most LLMs, it’s hard to know precisely which sources were scraped in the process, and which continue to be. The company has confirmed that the data comes from the open web.)

There are millions of user-generated bots on the platform: Alongside recognizable characters from film, television, anime, and games, you can create and chat with real-life figures, popular Vtubers, and original characters (OCs). There are “helpers” like virtual dating coaches, tutors, and psychologists. There’s an expansive selection of RPGs and text-based games. The site has more than 15 million registered users, and over the course of the past year, far beyond curious one-offs, it’s gained a significant base of devotees: Character.AI says its active users spend more than two hours a day on the site, and r/CharacterAI, where people post screenshots of their chats, has more than 600,000 members, putting it in the top 1 percent of all subreddits.

Character.AI’s founders, Noam Shazeer and Daniel De Freitas, come from Google’s deep-learning AI team, and De Freitas was the creator of LaMDA, the chatbot that prompted a media fracas last year when a fellow Google engineer claimed it had become sentient. Shazeer and De Freitas have since gone on the record criticizing Google’s unwillingness to take risks with chatbots, seemingly presenting Character.AI as a counterexample: a wide-open space where any user can spin up a bot, backed by $150 million in initial funding and ambitions “to bring personalized superintelligence to everyone on Earth.”

Or perhaps not so wide open—the platform had been in public beta for only a matter of weeks before the company implemented a filter to weed out adult content, a move apparently made with an eye toward scaling to billions of users. (Not long afterward, the popular AI companion platform Replika did the same, even after courting users with sexually suggestive advertisements.) Unsurprisingly, Character.AI’s decision was met with significant pushback. Some users decamped to smaller platforms like Janitor AI, which explicitly allows NSFW chat, while others looked for ways around the filter. There’s currently an active sub-subreddit called r/CharacterAi/NSFW, and a Change.org petition titled “Remove Character. AI nsfw filters”—which asserts the ban “infringes upon the freedom of expression of its users”—has 120,000 signatures and counting.

But despite the removal of what many feel is a core function of any internet chatbot, large numbers of people continue to talk to the “characters” of Character.AI—a term the platform uses loosely, one that even encompasses AI assistants, which answer queries just as ChatGPT might, but with humanoid names and faces. There’s extensive guidance for character creation—essentially teaching users to do the work of training bots themselves—and the terms of service make it clear that everything on both the training side and the chatting side is the intellectual property of whoever inputs it, leaving the platform itself as a mere middleman, though not a particularly transparent one.

Even if Character.AI might want you to get emotionally attached to its coding bots (your fellow “pair programmer”) or its grammar bots (your “English teacher”), it’s the characters you’ve heard of, real or fictional, that have sparked the most interest across the social web. “Billie Eilish” currently has six times as many interactions as “Joe Biden”; both eclipse “Alan Turing.” “Remember: Everything Characters say is made up!” reads a cheerful message atop every chat, one that evokes memories of Historical Figures, the supposedly educational app that went viral earlier this year when users’ chats with, well, historical figures spat out utter nonsense (and not even interesting nonsense).

But the app’s fictional characters have also garnered a fair amount of attention from fandom, where the idea of chatting with your actual favorite character might hold more affective appeal than chatting with a fake English teacher. The #characterai tag on Tumblr is awash with screenshots from the platform, many of them also tagged “self-insert” or “x reader,” a subgenre of fan fiction in which you engage with known characters (often—but not always—romantically and/or sexually) via the second-person narration of an unnamed “reader,” sometimes written as Y/N, or “your name.”

X reader fic is regularly invoked in discussions of Character.AI and fandom, as is chat-based role-playing, which fans have been engaging in for decades. But those parallels hold only at the surface—and for fandom, Character.AI is already proving a complex, sometimes thorny space, from fans’ relationships with the companies that own the characters, to fandom’s wide range of opinions about AI, to what it means to directly interact with a character you love.

“Chatbots have existed in the context of fandom for the past 10 years, and gained more traction around five years ago,” says Nicolle Lamerichs, a senior lecturer in creative business at the University of Applied Sciences, Utrecht. “Often these chatbots were initiated by companies to market to fans specifically, and allow for more interaction with their brand.” Most of these pre-programmed bots offered a limited number of responses and interactions, like Disney’s Facebook Messenger–based Zootopia chatbot, or Marvel’s Conversable bots, available via Facebook as well as X (previously known as Twitter), which let you DM Marvel characters. But the rise of generative AI has utterly altered the top-down, corporate-sanctioned way fans were previously able to chat with characters. “These tools have become democratized,” Lamerichs says. “This is leading to new types of fanworks and fan interaction, which is very interesting to observe.”

This democratizing element opens up complicated questions about copyright and AI, but right now, like most questions about copyright and AI, there are no clear answers. “We’re still very much in the vocabulary-building phase,” says Meredith Rose, senior policy counsel at Public Knowledge, a consumer advocacy organization that focuses on tech issues. “You have copyright specialists who now have to learn specifically about the tech that underlies this stuff—and because things like fair use determinations, which are crucial to AI discussions, are very, very fact-specific, you have copyright experts who need to understand all the intermediate steps that go on under the hood in a generative AI platform, and that kind of learning takes a lot of time.”

Fair use—which Rose characterizes as “a sort of safety valve that lets copyright and the First Amendment exist alongside each other”—is what allows creators to technically infringe upon a copyright holder’s work, but to do so legally, via exceptions like criticism or parody, or because there’s no monetary threat to the original work, or a whole host of other highly contextual factors. The Organization for Transformative Works, which runs the popular fan-fiction site Archive of Our Own, rests its legal arguments for fan fiction on fair use—and those arguments inform its strict non-monetization policies. Character.AI, with its $150 million in Series A funding, is clearly operating under a different paradigm. If it truly does scale to Facebook-level reach and revenue, are rights holders really going to want all of their characters saying whatever huge numbers of random users prompt them to say?

Rose looks to character copyright, which she describes as a “frankly unholy, wobbly sphere” of US law. “It exists pretty much explicitly to protect characters that are taken out of their original contexts and used in something else,” she says, citing a landmark 1954 case about whether Sam Spade was a copyrightable character (he was not, described by the Ninth Circuit as a “mere chessman in the game of storytelling”). Today, Rose explains, most big IP holders try to copyright their characters: “With most of the major pop-culture properties at this point, those characters are sufficiently valuable in and of themselves that their copyright holders will claim a standalone copyright just in the character, rather than any specific work that it exists in.”

Character.AI has made it clear it adheres to the requirements of the Digital Millennium Copyright Act—if rights holders issue takedown notices, the platform can simply remove the user content in question. These versions of characters could also be subject to “tarnishment” claims—if, say, a bot for a beloved character starts spewing slurs—which might be one reason Character.AI put up those guardrails on adult content. (For sites like Janitor AI, where you can engage in very explicit chat with said beloved characters, this question might become more pressing in the coming months.) But there has been speculation that Character.AI might someday want to cut deals with the entertainment corporations that own these characters, capitalizing in the meantime on the free labor of fans who are training bots to be more “in character.”

Rose doesn’t think it would be in those corporations’ best interests to come after tools like Character.AI right now—and she doesn’t see them cutting any deals with the platform, either. Unlike in earlier eras of the internet, there’s a social cost of trying to stifle fan activity today—but she also points to the Wild West nature of generative AI. “If I was a major content-holder right now, I would be staying very far away from LLMs,” she says. “There’s a lot of bad PR around these things, and also the technology isn’t ironed out. And if you’re a Disney, you’re not going to go within a thousand miles of these things until they iron out those kinks. The last thing you need, if you’re Disney, is a Goofy chatbot going off the rails.”

Part of that bad PR comes from within fan communities themselves—last fall saw a big movement against AI-generated fanart, and this spring, fanfic writers spoke out over the likelihood that their work had been scraped from the open web to train LLMs. Lamerichs notes a shift in attitudes over the past few years: In 2017, Botnik Studios’ Harry Potter and the Portrait of What Looked Like a Large Pile of Ash, which “used predictive keyboards trained on all seven books,” was a curiosity in fandom. “Today fans are more critical,” she says. “What I currently see in certain communities is some caution—we are afraid that these tools don’t always serve us and overshadow the human creators that drive fandom.”

Will the chatbots of Character.AI overshadow current fannish practices, or just offer fans another way into a relationship with their favorite characters? Effie Sapuridis, a PhD candidate in media studies at Western University, studies self-insert fanworks, from fic published on various archives and sites like Tumblr to their visual counterparts on TikTok, where fans use costumes and green screens to literally put themselves into their favorite films, cutting from actors’ actual dialog to their own responses. She has particularly noticed marginalized fan creators using these edits to write themselves into less-than-inclusive canons, sometimes even modifying a film’s dialog via onscreen captions. And in fan fiction, it’s often clear that despite the “neutral” designation authors often give their second-person readers, they are specifically writing themselves into the story.

“They’re intentionally interacting with the characters,” Sapuridis says of these TikToks as well as written fanworks—and the idea of chatting directly with those characters via a chatbot strikes her as largely similar, though with important differences. “What’s interesting is you’re not writing the dialog of that character in the way you would be in self-inserts. Whether it’s a TikTok video or fan fiction, you’re really controlling the characters, micromanaging how that self-insert is playing out. [AI chatbots have] more spontaneity, and that might be exciting.”

It also has the potential to be more isolating: In a fantastic in-depth analysis of the platform this past spring, fandom journalist Allegra Rosenberg characterized fans’ engagement with Character.AI as potentially “solipsistic,” giving users a chance to sever connections with broader fan communities. “At a time when many are asking if fans have become too entitled—demanding changes to series endings and harassing creators on social media when disliked ships are teased—customizable chatbots can provide media that truly caters to one’s every whim,” Rosenberg wrote. “It’s a world where you can talk to your ‘comfort character’ any time you like without stressing out a role-play partner, and where other fans will never mischaracterize your faves.”

Lamerichs disagrees with this premise: “The way I see it, Character.AI brings our beloved characters closer to us,” she says. “This is a process that I often describe as affective reception, fleshing out the emotions that we feel for characters and fiction.” Lamerichs argues that even fan activity done in isolation is still part of a larger whole—and these chats might serve as inspiration for other fan activity. “Writing fan fiction, creating fanart, and sewing cosplay can be activities you do by yourself or in groups, but it’s always part of a larger ecosystem and story world. It’s also once the work goes live that the interaction with other fans comes to life. Chatbots can be part of those processes.”

Whether Character.AI or any other generative chatbots will truly ensconce themselves in the broader fandom world remains to be seen. AI-evangelizing VCs are actively pushing platforms like this as an alternative to reading and writing fan fiction—offering up the idea that chatting directly with your favorite characters is somehow equivalent (and even preferable) to actual authored stories. “They are all trying to tell a narrative,” Sapuridis says of the immersive fanworks she studies. “Whether it’s short, whether it’s sexy stuff, there is a story that’s being told. It reminds me of the writers’ strike, too—if everything is getting outsourced to AI, then what happens to our stories?”

It’s a complicated—and deeply uncertain—moment for generative AI more broadly, and for AI issues within fan communities specifically. “Let’s remember we are in a digital transition,” says Lamerichs. “Digital communication will change. As we turn to metaverse apps, we might even interact with a mix of bots and players. It’ll be interesting to see what happens—but I don’t think collaborative storytelling will disappear from fan communities.” And until Batman or Storm or Mario can answer my queries with dialog better than what I could write on my own, I certainly won’t be spending my fannish time talking with chatbots.