Innov Clin Neurosci. 2026;23(1–3):5–9.

Dear Editor:

Pierre et al1 carefully document a case of a phenomenon that has received widespread media attention in recent months. The authors correctly note that certain fundamental characteristics of large language model (LLM) “chatbot” products could encourage or exacerbate delusional thinking—because they have a textual interface, humans are prone to anthropomorphize them; because they are optimized for engagement, they tend to be flattering toward the user regardless of what the user is saying; and because they are predictive text generators, they produce output that aligns closely with the content and style of the user’s input but is not necessarily accurate.

However, by suggesting that “regarding artificial intelligence (AI) chatbots as a kind of superhuman intelligence or god-like entity” might be a risk factor for AI-associated psychosis and that preventive strategies might include “enhanced AI literacy,” the authors stop just short of making an important ethical point.

The textual interface of LLM chatbot products is knowingly implemented despite decades of human-computer interaction research, going back to Joseph Weizenbaum’s ELIZA, demonstrating that this interface is extraordinarily persuasive in its implicit suggestion of an anthropomorphic “intelligence” on the other side.2,3 This interface is persuasive even if one knows “better.” Notably, the authors describe the woman in their case report as having “extensive experience” with LLM technologies and “a firm understanding” of how they work.

Although LLM products are text prediction engines, they are aggressively promoted by well-capitalized corporations as a form of “AI,” a term with specific pop-cultural resonance: Data (Star Trek), Jarvis (Iron Man), Samantha (Her). They are marketed with a specific quasimystical visual language: ChatGPT has a mandala-like logo; Claude and Gemini have stylized stars. The chief executives of the companies developing them make hyperbolic claims about their capabilities: that talking to ChatGPT is like talking to “a legitimate PhD-level expert in anything, in any area you need” (OpenAI’s Sam Altman)4 and that in the imminent future LLMs will be “smarter than a Nobel Prize winner across most relevant fields—biology, programming, math, engineering, writing, etc.” (Anthropic’s Dario Amodei).5

In this context, it is not at all surprising that many users might regard these products as superhuman or god-like. Implicitly and explicitly, the developers of these products are encouraging consumers to regard them as such—and have an obvious incentive to do so. A user who trusts a chatbot as an oracle-like source of advice or information will stay engaged with the product and purchase a subscription. What Pierre et al1 call “deification” is not a misunderstanding. It is the predictable outcome of decisions made in the design and marketing of these products, including the choice of a textual interface itself. Education alone cannot overcome this intentional persuasiveness.

I worry that in discussing “AI literacy” without emphasizing this dynamic, we are deflecting responsibility away from the developers of chatbot products and toward their users. Commercial gambling serves as an illustrative analogy: the industry successfully guided policy and research away from interrogation of its own practices and toward individuals engaging in problematic gambling behavior, even though the industry’s practices encourage that very behavior.6 Pierre et al1 do reasonably note that “governmental regulation and the development of safer products” are important, but here, too, the problem is less with the safety of individual products and more with the safety of the “AI” paradigm in which they are positioned. An important component of what we might call “AI literacy” is the acknowledgement that “AI” is itself a constructed category.

With regards,

Amandeep Jutla, MD

Dr. Jutla is with Columbia University and the New York State Psychiatric Institute, New York, New York.

Funding/financial disclosures. The author has no relevant conflicts of interest. No funding was received for the preparation of this letter.

Correspondence. Amandeep Jutla, MD

References

  1. Pierre JM, Gaeta B, Raghavan G, Sarma KV. “You’re not crazy”: a case of new-onset AI-associated psychosis. Innov Clin Neurosci. 2025;22(10–12):11–13.
  2. Weizenbaum J. Contextual understanding by computers. Commun ACM. 1967;10(8):474–480.
  3. Hofstadter DR. The ineradicable Eliza effect and its dangers. In: Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought. Harvester Wheatsheaf; 1995:155–169.
  4. Kan M. With GPT-5, OpenAI promises access to “PhD-level” AI expertise. PC Magazine. 7 Aug 2025. Accessed 13 Jan 2026. https://www.pcmag.com/news/with-gpt-5-openai-promises-access-to-phd-level-ai-expertise
  5. Amodei D. Machines of loving grace. 11 Oct 2024. Accessed 13 Jan 2026. https://www.darioamodei.com/essay/machines-of-loving-grace
  6. Wardle H, Degenhardt L, Marionneau V, et al. The Lancet Public Health Commission on gambling. Lancet Public Health. 2024:S2468-2667(24)00167-1.