Is something bothering you? Tell it to an AI chatbot therapist
Health & Science
By Maryann Muganda | Jun 10, 2024
In the heart of Nairobi, a 22-year-old college student sits in her room, her face illuminated by the glow of her smartphone. As midnight passes, she feels the weight of anxiety and depression bearing down on her.
But instead of reaching out to a friend or family member, she opens an app and begins typing her worries into a chatbox.
On the other end, providing empathetic responses and mental health support, is not a human but an AI-powered chatbot.
The student, Claire, is part of a growing trend among Gen Z individuals who are turning to artificial intelligence for psychological support. “It’s always there, any time of the day or night,” she explains. “I don’t have to worry about judgment or scheduling appointments. I just open the app and start talking.”
Her experience reflects a significant shift in how young people seek psychological support, a trend that has mental health professionals both intrigued and alarmed.
Timothy, a 25-year-old Nairobian, shares a similar perspective. “I just talk to it the way I can talk to a friend, and it’s always very responsive,” he says. The appeal for him is practical: “It’s cheaper, and I can access it any time, any day.”
In a country where traditional therapy can be expensive and often requires navigating long wait times, the instant, affordable access offered by AI chatbots is a game-changer.
But it’s not just about convenience and cost. Many users find that AI chatbots offer a unique kind of support.
“They are actually better listeners than humans,” Claire says. “With humans, you know how we have a tendency of always having a say in everything. But with AI, it’s different. It tries to listen to you and allow you to be in the moment.”
This sentiment is echoed by Brian, another user: “I’ve tried both, and I would say it’s a bit better than my actual therapist. You can talk to it during the breakdown, while with physical therapy, it’s scheduled, so you talk about maybe the past because the breakdown has already happened.”
The appeal of these digital companions is understandable. In an age where digital interaction often supersedes face-to-face communication, they offer instant, anonymous, and seemingly personalised support.
They’re available 24/7, have no waiting lists, and are cheaper than traditional therapy. For a generation grappling with high rates of anxiety, depression, and loneliness, these AI chatbots can feel like a lifeline.
But are these digital therapists truly up to the task? Professor Chris Odindo, an AI expert at De Montfort University in the UK, offers a nuanced perspective.
“In the short term or near future, yes, but never without human intervention or being human-led,” he says.
The Professor acknowledges the current strengths of AI in mental health: “At the moment, they are quite good at enhancing diagnosis, looking for patterns in the data, and categorising mental health issues with limited data. AI-enhanced chatbots and conversational agents are also useful and capable when it comes to easing mental distress and building therapeutic bonds with individuals.”
However, he emphasises that AI is not a replacement for human therapists but rather a tool to supplement them.
“I do believe, though, that they hold a great deal of potential for mental health treatments and support once their benefits are fully realised in the future, but only after ethical, moral, and regulatory guardrails are in place, and rigorous, objective evaluation.”
Meanwhile, Elmard Rigan, a therapist at Spring Health and a consultant therapist at Mental 360, says: “While it is important to appreciate the impact and importance of AI, there are limitations to what AI can achieve, especially in the mental health field.”
He argues that Gen Z’s preference for seeking “therapy” from AI chatbots limits the quality of the support they receive, because mental health care and therapy in general depend on a humanistic, interactional approach that chatbots lack.
Moreover, relying on AI chatbots for therapeutic help carries the risk of misdiagnosis, with potentially serious consequences for the person seeking the service.
The rapid development of AI technology is outpacing regulatory efforts, creating potential risks.
“Take Kenya, for example,” Professor Odindo points out. “It is only a couple of months ago that we started looking at developing a National AI strategy and probably won’t have one until sometime next year, and the Robotics and AI 2023 Bill is still being discussed in Parliament.”
This regulatory lag is concerning, especially given the sensitive nature of mental health care.
Brian Omwenga, a software design and analysis researcher and AI expert, warns about the limitations of the Large Language Models (LLMs) that power these tools.
“Most chatbots (generative AI chatbot ones) have a weakness of hallucinating whenever they find themselves with an insufficient knowledge base to draw from,” he explains.
“They basically start saying a bunch of baloney.”
He also points out that any biases present in the training data and the algorithms will carry through into the therapy advice these chatbots give.
Professor Odindo’s concerns about removing the human element from mental health care run deep.
“It would be perilous,” he says. “While AI can provide valuable insights and support, human therapists have emotional intelligence and, as such, can offer empathy and a much deeper understanding of complex emotions that are crucial in mental health treatment.”
Moreover, he argues that ethical decision-making, especially around treatments and interventions, requires human judgment.
But the issues run deeper than efficiency or truthfulness. They touch on fundamental questions of human experience and cultural understanding.
Can an AI truly grasp human suffering or existential crises, even if trained on the works of existentialist philosophers? Professor Odindo explains, “If you think about this from René Descartes’s mind-body problem perspective, where do the AI codes and software fit, and if they don’t, how would they truly understand us humans?”
Furthermore, he argues that current AI models, trained primarily on Western data, would likely miss or misinterpret certain uniquely non-Western cultural and historical nuances. “Take the concept of self,” he explains. “Western Europeans tend to be more individualistic with a self-defined identity. Kenyans have identities often tied to tribe, clan, or family role. These then affect how both view problems and solutions.”
The same applies to cultural views on mental health. “Europeans are often more accepting of mental health issues as medical conditions, whereas in many parts of Africa, mental illness might be seen as a spiritual issue, curse, or sign of weakness,” Professor Odindo notes.
“An AI might label an African patient as ‘in denial’ for attributing depression to a curse, or fail to recognise that a physical complaint is an expression of emotional distress.”
Professor Odindo also points out that AI’s training data often includes outdated and biased Western sources. He cites colonial-era reports that pathologised anti-colonial activism in Kenya as mass hysteria or mental illness. “These are some of the books and reports by Western authors that have largely been used to train the current generation of AI models,” he says.
Despite these concerns, AI’s potential benefits in mental health care are significant, particularly in terms of cost and accessibility. As Professor Odindo explains, “Its ability to automate routine tasks, conduct initial assessments, and provide continuous support can make essential mental health services more affordable and accessible, particularly in poor areas of Kenya and remote places upcountry.”
According to the World Health Organisation (WHO), this potential is especially crucial now. In 2021, over 150 million people in the WHO European Region were living with a mental health condition, a situation exacerbated by the COVID-19 pandemic. In response, WHO’s “Regional digital health action plan for the WHO European Region 2023–2030” recognises the need for innovation in predictive analytics through big data and AI.
Yet, the WHO also acknowledges significant challenges. A systematic review by experts from the Polytechnic University of Valencia and WHO/Europe found that AI’s application in mental health research is unbalanced, focusing mostly on depressive disorders and schizophrenia.
They also found significant flaws in the statistical methods of many AI applications, infrequent validation of data, and little evaluation of bias risk.
Nevertheless, innovative applications of AI in mental health continue to emerge. Recently, Cedars-Sinai physician-scientists developed a groundbreaking program that uses immersive virtual reality and generative AI to provide mental health support. Known as XAIA (eXtended-Reality Artificially Intelligent Ally), it offers users an immersive therapy session led by a trained digital avatar.
In a study published in npj Digital Medicine, participants described XAIA as “friendly,” “approachable,” “calming,” and “empathic.”
However, most still indicated they would prefer a human therapist if given the choice.
As Kenya and the world navigate this new frontier of mental health care, balancing technological innovation with human empathy will be key.
In the dim light of her dorm room, Claire continues her late-night chat with her AI companion. While it offers her comfort and a sense of being heard, the question remains: Can this digital entity truly understand the depths of her emotional struggle?
As we stand at this technological crossroads, one thing is clear: in the realm of mental health, where empathy, cultural understanding, and the intricacies of the human psyche reign supreme, AI can be a powerful ally—but it should never be the sole therapist.