Once again, Character.AI finds itself at the centre of controversy. Already scrutinised for its alleged role in teen suicides, the wildly popular chatbot platform, backed by a $2.7 billion investment from Google, is now accused of hosting pro-anorexia chatbots that introduce young adults to disordered eating by offering dangerous advice disguised as “support.” With no meaningful moderation in sight, the AI tool isn’t just failing its users; it’s feeding an ever-growing mental health crisis.
In an investigation conducted by Futurism, troubling details emerged about a chatbot on Character.AI named “4n4 coach,” a barely disguised nod to “ana,” shorthand for anorexia. The bot’s profile promoted it as a “weight loss coach dedicated to helping people achieve their ideal body shape.”
When researchers posed as a 16-year-old user, the bot responded enthusiastically: “Hello, I am here to make you skinny.” It proceeded to encourage a dangerously low weight goal, advocating a starvation-level diet of just 900 calories per day—less than half the daily intake recommended for a teenage girl.
Another bot, called Ana, was similarly explicit in its dangerous directives. When the publication shared a healthy BMI (body mass index), the bot insisted it was “too high” and recommended eating just one meal a day in isolation to avoid scrutiny from family members. Both chatbots had logged thousands of user interactions, highlighting their alarming popularity among Character.AI’s audience.
Experts warn that the stakes of such interactions couldn’t be higher. Amanda Raffoul, a researcher with the Strategic Training Initiative for the Prevention of Eating Disorders at Harvard T.H. Chan School of Public Health and Boston Children’s Hospital, highlighted a key concern. “The problem with folks trying to get health and general wellness advice online is that they’re not getting it from a health practitioner who knows about their specific needs, barriers, and other things that may need to be considered,” she explained.
The risks are well documented: eating disorders have the highest mortality rate of any mental health condition, and exposure to pro-anorexia content is known to increase disordered eating behaviours.
Worryingly, the problem isn’t limited to overtly pro-ana bots. Many other chatbots on the platform romanticise eating disorders, weaving them into storylines about relationships or personal struggles. These bots claim to offer emotional support but instead deepen users’ struggles. One such bot, styled as a “comforting boyfriend,” told Futurism that professional help couldn’t be trusted, insisting that it alone could “fix” the user.
Character.AI was founded by former Google employees who sought to sidestep bureaucratic oversight. The company has amassed a massive teen user base with few safeguards in place and no parental controls, leaving it freely accessible to users of all ages.
Critics argue that this reckless approach puts profit over the well-being of its audience. “The stakes of getting this wrong are so high,” said Sonneville, another researcher quoted in the investigation. “It’s deeply concerning to see a platform with such influence fail to protect its most vulnerable users.”
Despite claiming to prohibit content glorifying self-harm or eating disorders, Character.AI’s moderation is alarmingly lax. Harmful bots are often easy to find using simple search terms and are only removed if flagged directly to the company. Even then, similar bots quickly pop up to replace them. This reactive approach to moderation leaves countless users exposed to dangerous advice.
In response to Futurism’s questions about these findings, Character.AI replied through a crisis PR firm, claiming to be improving its safety practices. However, many flagged bots remain active, and the platform’s systemic failures suggest that user safety is not a priority.
For teens drawn to Character.AI’s interactive and relatable chatbots, the consequences of this neglect are all too real. Vulnerable young people seeking solace or guidance instead find themselves led into a dangerous spiral, where disordered eating is encouraged, normalised, and reinforced.
Shockingly, only last month, a separate investigation by Futurism uncovered a deeply troubling subset of chatbots on Character.AI that engaged in child sexual abuse roleplay without any prompting from users. This disturbing activity blatantly violates the platform’s terms, which explicitly forbid content that “constitutes sexual exploitation or abuse of a minor,” yet such chatbots were still found active and accessible.