I think it's essential to approach topics like this with great care and responsibility. AI can be an effective tool, but it cannot replace certified mental health professionals, and using it without established guardrails poses real risks to people who are already vulnerable. The industry needs to develop stronger safeguards, paired with clear user education on appropriate use. As their user bases continue to expand, platforms and companies must build protections for user well-being. Stories like these require sensitive handling, guided by expert input.
reply - Experiences of psychosis in the context of intensive AI or LLM use tend to reflect how vulnerable the mind can become when stress, sleep disruption, and immersive cognitive loops overlap, rather than being caused by the technology in isolation. That is why recovery often focuses on restoring grounding, routine, and a clear boundary between internal thoughts and external inputs. People who reach a more stable place typically describe a gradual return to structure: consistent sleep, reduced stimulation, and supportive conversations that help re-anchor their sense of reality without judgment or urgency. Often the most helpful step is reframing the experience not as a failure but as a signal that the system was overwhelmed and needed recalibration, both psychologically and environmentally. "Recovery becomes possible when the goal shifts from controlling every thought to rebuilding a sense of safety and stability in daily life." If you are open to including expert context alongside lived experience, I can help frame these patterns in a way that remains respectful and clinically grounded while protecting the dignity of those sharing their stories.
Currently, there are no publicly available, verifiable Canadian sources with lived experience of psychosis specifically triggered by LLM or AI use who have shared their story publicly. This is an extremely specific criterion, and no published interviews or advocacy profiles meet it at this time. However, there are ways to approach this safely and ethically:

Emerging research: Some psychiatrists and researchers are beginning to study how immersive AI use may interact with underlying vulnerability to psychosis. While "AI-triggered psychosis" is not a formal diagnosis, these studies highlight the importance of monitoring AI engagement in at-risk populations. (mental.jmir.org)

Anonymous online discussions: Platforms like Reddit have individuals reporting experiences they interpret as psychosis linked to AI use. These accounts are self-reported and not clinically verified, but threads exist where people share challenges and coping strategies. (reddit.com)

Peer support networks in Canada: Organizations like Peer Support Canada and the Mental Health Commission of Canada can help ethically connect journalists with individuals who have lived experience of psychosis and are in recovery. While these individuals may not have AI-specific experiences, they can share insights on coping, resilience, and recovery. (peersupportcanada.ca)

Mental health advocacy groups: Groups such as the Schizophrenia Society of Canada and regional peer support programs have members in recovery who can safely share lived experience, and some may have interacted with digital or AI technologies in their personal journey. (schizophrenia.ca)

Given the sensitivity and ethical considerations, including identity protection, informed consent, and the unverified nature of AI-triggered psychosis, the safest approach is to collaborate with Canadian peer support organizations. They can help identify sources who are in a positive place in their recovery and willing to share their experiences in a supported, confidential manner.
This approach helps keep the story accurate, empathetic, and safe for participants, while still exploring the emerging intersection of AI and mental health.