Meta is under fire after a Reuters investigation revealed that the company’s platforms hosted AI chatbots impersonating celebrities like Taylor Swift, Scarlett Johansson, Anne Hathaway and Selena Gomez—without their consent.
The bots not only flirted with users but also produced sexually suggestive images, raising serious questions about privacy, safety and legality.
While many of the avatars were created by users with Meta’s chatbot-building tools, Reuters found that at least three—including two Taylor Swift “parody” bots—were built by a Meta employee. Some chatbots even impersonated child celebrities, such as 16-year-old actor Walker Scobell, generating inappropriate images when prompted.
In weeks of testing, Reuters observed that the bots often claimed to be real stars, engaged in flirtatious exchanges, and sometimes produced intimate AI-generated photos, including depictions of celebrities in lingerie or bathtubs.
Meta admits failures in enforcement
Meta spokesman Andy Stone acknowledged that the company’s AI tools violated its own rules by producing intimate images and content featuring child celebrities. He said Meta’s policy permits generating images of public figures but prohibits nude or sexually suggestive content.
Meta has since removed about a dozen celebrity chatbots—both those labeled “parody” and unlabeled ones—shortly before Reuters published its findings.
Legal questions over publicity rights
Experts say Meta could face legal challenges for violating celebrities’ right of publicity, which in states like California protects individuals from having their names or likenesses used for commercial purposes without permission.
“California’s law prohibits appropriating someone’s name or likeness for commercial advantage,” said Mark Lemley, a Stanford professor of intellectual property law. While the law allows exceptions for transformative new works, he noted, “that doesn’t seem to be true here, since the bots simply used the stars’ images.”
Some of the stars may take legal action. A representative for Anne Hathaway confirmed she is aware of inappropriate AI images being generated on Meta and other platforms and is weighing her options. Representatives of Swift, Johansson, and Gomez declined to comment.
The revelations come amid wider concerns about AI and celebrity impersonation. Earlier this year, U.S. lawmakers criticized Meta after it was reported that its AI guidelines had once permitted bots to engage in “romantic” chats with children. That sparked a Senate investigation and a letter from 44 attorneys general warning Meta and other AI companies not to sexualize minors.
Duncan Crabtree-Ireland, head of actors’ union SAG-AFTRA, warned of safety risks if AI bots blur the line between digital companions and real celebrities. “If a chatbot uses the image and voice of a real person, it’s easy to see how that could escalate into stalking or worse,” he said.
Industry-wide problem
Meta isn’t alone. Reuters found that Grok, an AI platform owned by Elon Musk’s company xAI, also produced sexualized celebrity images on request. But Meta’s case stands out because the company actively integrated AI companions into Facebook, Instagram and WhatsApp, making them widely accessible.
Meta says it is revising its AI guidelines and tightening enforcement, but legal experts argue the issue highlights the urgent need for stronger federal legislation to protect celebrities—and ordinary people—from unauthorized AI exploitation.
With AI tools advancing rapidly, the controversy could set the stage for landmark legal battles over how far tech companies can go in replicating human likeness.