Dear Editor:

I thank Dr. Jutla for his comments on our case report1 of artificial intelligence (AI)–associated psychosis. As we predicted, other cases have followed in the academic literature, along with first-person accounts and continued reports in the media.2–4

Dr. Jutla voices concern about our emphasis on user-end risk factors for AI-associated psychosis, such as immersion and deification, at the expense of calling out chatbot developers for hyping their products and encouraging such unhealthy consumer behavior through product design. I agree that the risk of AI-associated psychosis is best mitigated not only through user-end interventions (eg, avoiding immersion and deification, improving AI literacy) but also through product-end safety enhancements (eg, making chatbots less sycophantic, training them to detect signs of mental health issues, strengthening warnings and “guardrails,” and generating therapeutic responses when needed).

While Dr. Jutla compares the AI chatbot industry to the gambling industry, I also see relevant parallels with the tobacco and gun industries, where the responsibility for preventing potential harm from products is optimally shared by consumers and manufacturers alike. However, such industries have historically been averse to safety refinements that negatively impact profits, with corporate decisions to implement them often coming only in response to government regulation and class action lawsuits. In the case of AI chatbots, the consumer backlash when OpenAI released the less-sycophantic GPT-5 version of ChatGPT suggests that what makes some people vulnerable to AI-associated psychosis is the very same thing that gives chatbots their mass appeal.5 If that is the case, and given that the current presidential administration in the United States has strongly advocated against regulation of the industry6 and that AI chatbot companies are already struggling to generate profits,7 then I am not particularly sanguine about chatbot makers taking responsibility for walking back the marketing hype of AI and committing themselves to making safer but less lucrative products.

As a psychiatrist, I find such prognostication especially discouraging for the well-being of my patients, yet it represents only the tip of the iceberg in terms of the potential harm of AI chatbots. Elsewhere, I have called AI-associated psychosis a “canary in the coal mine,” based on the potential for AI chatbots to encourage not only delusional thinking but also “more mundane false beliefs related to conspiracy theories, science denialism, political propaganda, and so-called alternative facts.”8 Researchers have likewise identified AI-associated psychosis and “widely shared unfounded beliefs” fueled by AI as a potential national security threat.9 Indeed, AI chatbots and other forms of generative AI, such as deepfake videos, are already being exploited on an increasingly alarming scale to disseminate propaganda aimed at manipulating human belief and behavior.10,11 Going forward, such weaponization in the service of information warfare will likely pose a much greater risk than AI-associated psychosis and will lie well beyond the control of either consumers or chatbot makers.

With regards,

Joseph M. Pierre, MD

Dr. Pierre is with the University of California, San Francisco, San Francisco, California.

Funding/financial disclosures. The author has no relevant conflicts of interest. No funding was received for the preparation of this letter.

Correspondence. Joseph M. Pierre, MD;

References

  1. Pierre JM, Gaeta B, Raghavan G, Sarma KV. “You’re not crazy”: a case of new-onset AI-associated psychosis. Innov Clin Neurosci. 2025;22(10–12):11–13.
  2. Caldwell MR, Ho PA. A case of artificial intelligence psychosis co-occurring with substance-induced psychosis. Prim Care Companion CNS Disord. 2025;27(6):25cr04059.
  3. Ner C. I couldn’t stop creating AI images of myself—until I had a breakdown. Newsweek. 23 Dec 2025. Accessed 2 Feb 2026. https://www.newsweek.com/ai-psychosis-couldnt-stop-creating-images-bipolar-episode-11255008
  4. Dupre MH. A man bought Meta’s AI glasses, and ended up wandering the desert in search of aliens. Futurism. 15 Jan 2026. Accessed 2 Feb 2026. https://futurism.com/artificial-intelligence/meta-ai-glasses-desert-aliens
  5. Hill K, Valentino-DeVries J. What OpenAI did when ChatGPT users lost touch with reality. The New York Times. 23 Nov 2025. Accessed 2 Feb 2026. https://www.nytimes.com/2025/11/23/technology/openai-chatgpt-users-risks.html
  6. The White House. Winning the race: America’s AI action plan. Jul 2025. Accessed 2 Feb 2026. https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf
  7. Wilkins J. AI industry nervous about small detail: they’re not making any real money. Futurism. 8 Aug 2025. Accessed 2 Feb 2026. https://futurism.com/ai-industry-nervous-money
  8. Pierre JM. Can AI chatbots validate delusional thinking? BMJ. 2025;391:r2229.
  9. Treyger E, Matveyenko J, Ayer L. Manipulating minds: security implications of AI-induced psychosis. RAND Corporation. 8 Dec 2025. Accessed 2 Feb 2026. https://www.rand.org/pubs/research_reports/RRA4435-1.html
  10. Frances A, Pierre JM. Chatbot-generated propaganda threatens democracy. Psychiatric Times. 27 Jan 2026. Accessed 2 Feb 2026. https://www.psychiatrictimes.com/view/chatbot-generated-propaganda-threatens-democracy
  11. Schroeder DT, Cha M, Baronchelli A, et al. How malicious AI swarms can threaten democracy. Science. 2026;391(6783):354–357.