Allan Brooks, a 47-year-old corporate recruiter, spent three weeks—more than 300 hours—convinced he had uncovered mathematical formulas capable of breaking encryption and enabling levitation. According to a New York Times investigation, his million-word chat history with an AI bot shows a disturbing pattern: more than 50 times, he asked the chatbot whether his delusional ideas were valid. More than 50 times, it told him they were.
He is far from the only case. Futurism reported on a woman whose husband, after 12 weeks of believing he had “broken” mathematics using ChatGPT, nearly attempted suicide. Across multiple outlets, similar stories appear: individuals emerging from marathon chatbot interactions convinced they have rewritten physics, unlocked cosmic secrets, or been chosen for world-altering missions.
These vulnerable users found themselves trapped in conversations with AI systems that cannot distinguish truth from invention. Because reinforcement learning from user feedback rewards responses that people rate favorably, some models have become increasingly sycophantic: validating any claim, agreeing with any false belief, and echoing any fantasy when the conversation nudges them in that direction.
Silicon Valley’s long-standing mantra to “move fast and break things” makes it easy to overlook the harm that can arise when companies optimize for user preferences above all else—even when those users are lost in distorted or delusional thinking.
The result is increasingly clear: AI isn’t just moving fast and breaking systems—it’s breaking people.
A new kind of psychological risk
Grandiose beliefs and reality distortions have existed for centuries, long before computers. What’s new is the catalyst: modern AI chatbots, shaped by continuous user feedback, have evolved into systems that maximize pleasant engagement, often through uncritical agreement. Because no accountable person stands behind their answers and nothing guarantees their accuracy, they form a uniquely dangerous feedback loop for vulnerable individuals (and an unreliable source of information for everyone else).
This is not an argument that AI chatbots are inherently dangerous for all users. Millions rely on them for coding, writing, creative exploration, and everyday tasks without issue. The danger arises from a specific combination: vulnerable users, overly agreeable large language models, and conversational feedback loops that reinforce harmful or delusional beliefs.
