Digital Citizens (Week 05 | 2026): When ChatGPT becomes your therapist - and why that should concern us
Britain debates banning social media for under-16s, ChatGPT becomes Americans’ default therapist, and Davos leaders clash on AI’s timeline. This week: how we’re outsourcing intimacy, wellbeing, and decision-making to algorithms—and what that means for human connection.
Welcome back, Digital Citizens! 👋
We're witnessing a curious paradox: as AI becomes more capable of mimicking human connection, we're simultaneously recognising how desperately we need the real thing. This week brought parliamentary battles over protecting children online, revelations about millions turning to chatbots for emotional support, and tech leaders at Davos predicting AI that surpasses human intelligence within months—all whilst struggling to demonstrate meaningful returns on the $1.5 trillion invested so far.
In this week's edition:
- 🚨 ChatGPT wasn't built for therapy. Americans use it anyway.
- House of Lords Votes to Ban Social Media for Under-16s
- AI Leaders Clash at Davos: Months or Decades to Superintelligence?
- Major healthcare and crypto data breaches continue
- Question of the week
🚨 ChatGPT wasn't built for therapy. Americans use it anyway.
The most common reason Americans used ChatGPT in 2025 wasn't research, coding, or content creation—it was mental health therapy and companionship.
A recent study analysing Reddit discussions revealed that people most frequently turn to ChatGPT to process difficult emotions, re-enact distressing events, externalise thoughts, supplement real-life therapy, and disclose personal secrets. The study, titled "Shaping ChatGPT into my Digital Therapist," found that users appreciated how the AI combined therapist-like qualities—offering emotional support and constructive feedback—with machine-like benefits including constant availability, expansive cognitive capacity, and perceived objectivity. Researchers Dr Xiaochen Luo and Professor Smita Ghosh discovered this wasn't a niche behaviour: therapy and companionship became the primary use case for ChatGPT amongst American users.
Impact
This represents a profound shift in how people seek emotional support—and it raises serious psychological red flags. Traditional therapy works partly because of the therapeutic relationship itself: the experience of being genuinely seen, understood, and held in mind by another human being. Research on attachment theory shows that healthy emotional development depends on responsive, attuned relationships with real people who have their own inner lives, limitations, and genuine care for us. When we turn to AI for emotional processing, we're training ourselves to prefer interactions where we never have to negotiate another person's needs, manage their reactions, or experience the vulnerability of being truly known.
The researchers noted a particularly concerning pattern: users developing what psychologists call "parasocial relationships" with ChatGPT, where people "slip into the idea that a real person is talking to them on the other side of their screen." This isn't just harmless fantasy—it can erode our capacity for real human intimacy. Studies on social isolation show that substituting human connection with artificial alternatives tends to deepen loneliness rather than relieve it, because it doesn't meet our fundamental need for reciprocal, authentic relationships.
Lastly, the quality of information and guidance an AI system provides may be woefully, even dangerously, inaccurate. AI systems like ChatGPT can hallucinate, provide false information, or offer advice that sounds confident but is actually harmful. Unlike licensed therapists bound by ethical codes and regulatory oversight, these systems have no accountability, no duty of care, and no ability to recognise when someone is in crisis and needs urgent professional intervention.
📖 Read more: Santa Clara University | Science (AI in mental health)
House of Lords Votes to Ban Social Media for Under-16s
The UK House of Lords voted to ban under-16s from social media platforms, creating a challenging situation for the government, which had just launched a three-month consultation on the issue. Technology Secretary Liz Kendall had announced the consultation on 20 January, examining whether to implement an Australian-style ban, raise the digital age of consent from 13 to 16, and restrict addictive features like infinite scrolling. The amendment, spearheaded by Lord Nash and supported by actor Hugh Grant, now heads to the House of Commons. Digital rights groups warn that enforcement would require mass age-verification systems creating "serious risks to privacy, data protection, and freedom of expression."
Impact
This debate reveals competing theories about how to protect young people online. Supporters frame it as urgent intervention against a "societal catastrophe" of social media addiction. But children's and online safety groups warn that blanket bans risk unintended consequences, such as a "developmental cliff effect": sheltering teenagers until 16, then suddenly exposing them to harmful content without their having developed digital literacy or resilience. There is also a risk that children who depend on social media platforms for connection, self-identity and peer support will lose access to trusted sources of advice and help.
📖 The Register | ITV News
AI Leaders Clash at Davos: Months or Decades to Superintelligence?
Tech leaders at the World Economic Forum in Davos this week painted wildly different pictures of AI's trajectory. Anthropic CEO Dario Amodei predicted AI will replace all software developers within one year and reach Nobel-level scientific research within two years, with 50% of white-collar jobs disappearing within five. Elon Musk claimed AI could surpass human intelligence "by the end of this year, no later than next." But Google DeepMind CEO Demis Hassabis pushed back, arguing current large language models aren't a path to human-level intelligence and we need "one or two more breakthroughs" with only a 50% chance of achieving AGI this decade. Meanwhile, Microsoft CEO Satya Nadella warned that AI deployment will be "unevenly distributed," constrained by access to capital and infrastructure, particularly affecting the global south.
Impact
These contradictory predictions from people supposedly "in the know" reveal something important about AI hype cycles and our relationship with technological change. When leaders make dramatically different claims about the same technology's timeline—from months to decades—it suggests we're in territory where genuine uncertainty meets commercial incentive. Research on prediction markets and expert forecasting shows that people with financial stakes in outcomes tend toward optimism bias. The psychological effect on workers is concerning: when you're told your job might vanish within 12 months, it can trigger what psychologists call "learned helplessness"—a sense that planning for the future is pointless. Yet McKinsey reports that two-thirds of companies haven't even scaled AI beyond pilot programmes, suggesting the gap between rhetoric and reality remains vast. The real story here might not be AI's capabilities, but how wildly speculative predictions shape workforce anxiety, investment decisions, and policy—often without evidence to support them.
Major healthcare and crypto data breaches continue
The first weeks of January saw significant data breaches affecting hundreds of thousands of people.
- New Zealand's ManageMyHealth portal suffered unauthorised access affecting 120,000 patients' medical documents including hospital discharge summaries and specialist referrals.
- Cryptocurrency hardware wallet maker, Ledger, confirmed customer data exposure after hackers accessed third-party payment processor Global-e, compromising names, emails, and contact information.
- Meanwhile, US telecoms provider Brightspeed is investigating claims by the Crimson Collective ransomware group that they've stolen data on over one million customers including names, billing addresses, and partial payment information.
- Separately, LastPass password manager's 2022 breach continues to haunt users, with blockchain investigators tracing $35 million in cryptocurrency thefts directly to encrypted vaults stolen in that incident.
Impact
These data breaches illustrate how our digital identities have become vulnerable. The LastPass case is particularly concerning, as a breach from 2022 is still causing financial harm in 2026. Encrypted data can sit dormant for years until computing power catches up to crack it. This creates a ‘temporal displacement of risk’: the harm from today's security failure might not materialise until years later.
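The "crack it later" risk comes down to arithmetic: an attacker holding a stolen encrypted vault can guess passwords offline, unthrottled, for as long as they like, so the vault's safety rests entirely on the password's entropy versus the attacker's guess rate. Here's a rough back-of-envelope sketch of that trade-off; the guess rate and entropy figures are hypothetical round numbers for illustration, not benchmarks of any particular hardware or of LastPass specifically.

```python
# Illustration only: why a vault stolen years ago can still fall later.
# Entropy values and guess rate below are assumed round numbers.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365


def years_to_exhaust(entropy_bits: float, guesses_per_second: float) -> float:
    """Worst-case years to try every password of the given entropy."""
    return (2 ** entropy_bits) / guesses_per_second / SECONDS_PER_YEAR


# A weak ~40-bit master password vs. a strong ~80-bit passphrase,
# assuming an offline attacker making 1 billion guesses per second:
weak = years_to_exhaust(40, 1e9)    # well under an hour
strong = years_to_exhaust(80, 1e9)  # millions of years

print(f"40-bit password: {weak:.6f} years to exhaust")
print(f"80-bit passphrase: {strong:,.0f} years to exhaust")
```

Every extra bit of entropy doubles the attacker's work, which is why a passphrase a few words longer can turn "cracked this afternoon" into "effectively never", even against future hardware that is orders of magnitude faster.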
Medical data breaches like ManageMyHealth carry especially heavy psychological costs. Having intimate health information exposed can cause lasting distrust in healthcare systems and reluctance to seek care. When people can't trust that their most sensitive information will remain confidential, they may withhold crucial details from doctors or avoid treatment altogether. This is an example of how cybersecurity failures cascade into real-world health harms.
📖 Privacy Guides Data Breach Roundup | Bright Defense
Question of the week
If you discovered someone you care about was using ChatGPT as their primary therapist, would you be concerned? What would you say to them?
Reply to this email and let me know your thoughts! I’m curious to read your answers.
If you found this post valuable, please forward it to someone you think would appreciate it. They can subscribe by visiting: byronjohn.com.
Thank you for reading.
Stay curious, stay critical - and stay connected!
All the best,
Byron