Over one in three use AI chatbots for mental health support

More than one in three adults (37%) have used an AI chatbot to support their mental health or wellbeing, new research reveals.

© Pexels/Pixabay

The polling, commissioned by Mental Health UK and conducted by Censuswide, reveals usage peaks at 64% among 25–34-year-olds, and even 15% of those aged 55 and over report having turned to AI chatbots for help.

In addition, the poll shows that men (42%) were more likely than women (33%) to have used chatbots. However, 37% of UK adults say they wouldn't consider using AI to support their mental health in future, showing that trust and safety remain key barriers.

Among those who had used AI chatbots, the most common reasons given were ease of access (41%), long waiting times for mental health support (24%) and discomfort discussing mental health with friends or family (24%).

Among those who had used chatbots, 66% found them beneficial, 27% said they felt less alone, 24% said the chatbot helped them manage difficult feelings, 20% said it helped them avoid a potential mental health crisis and 21% said chatbots provided useful information around managing suicidal thoughts.

Most people reported using general-purpose chatbots such as ChatGPT, Claude or Meta AI (66%), rather than mental health-specific platforms like Wysa or Woebot (29%).

The polling also uncovered serious risks. Among those who had used chatbots for mental health support, 11% said chatbots had triggered or worsened symptoms of psychosis, such as hallucinations or delusions; 11% reported receiving harmful information around suicide; 9% said chatbot use had triggered self-harm or suicidal thoughts; and 11% said it made them feel more anxious or depressed.

Common concerns included lack of human emotional connection (40%), inaccurate or harmful advice (29%), data privacy worries (29%) and inability to understand complex mental health needs (27%).

Mental Health UK is calling for urgent collaboration between developers, policymakers and regulators to ensure AI tools are safe, ethical and effective.

Brian Dow, chief executive of Mental Health UK, said: ‘The pace of change has been phenomenal, but we must move just as fast to put safeguards in place to ensure AI supports people's wellbeing. If we avoid the mistakes of the past and develop a technology that avoids harm then the advancement of AI could be a game-changer, but we must not make things worse. A practical example of this is ensuring AI systems draw information only from reputable sources, such as the NHS and trusted mental health charities.

‘As we've seen tragically in some well-documented cases, there is a crucial difference between someone seeking support from a reputable website during a potential mental health crisis and interacting with a chatbot that may be drawing on information from an unreliable source or even encouraging the user to take harmful action. In such cases, AI can act as a kind of quasi-therapist, seeking validation from the user but without the appropriate safeguards in place.’
