
Are online chatbots the new therapists?

A recent study conducted by researchers at the Stanford Department of Psychiatry and Behavioural Sciences shows how a therapy chatbot, Woebot, effectively reduced anxiety and depression in its users by administering Cognitive Behavioural Therapy (CBT). In the study, 70 college students were randomly assigned to engage with either Woebot or a self-help e-book for two weeks. The students who used Woebot self-reported a significant reduction in their symptoms (Fitzpatrick et al., 2017).

Woebot is not, however, based on a novel concept. Projects such as Ellie (a virtual therapist developed at the University of Southern California) and Therachat (another therapeutic chatbot) also use mechanical systems for the diagnosis and treatment of mental disorders (Molteni, 2018).

Such advances in artificial intelligence repeatedly raise the question of how effective these mechanical methods of assessment and treatment are in comparison to clinical methods. This blog post will argue that such mechanical methods, despite being more valid, reliable and standardised, should only be used in conjunction with the clinical method and not as a substitute for it.

Technologies like Woebot use algorithms and statistical equations to sort through enormous amounts of information, such as medical history, reports from family members and scores on psychological tests. In contrast, clinicians use skilled intuition to focus on relevant information and make informed judgements. However, owing to clinical biases, clinicians often give more weight to personal experiences and encounters than to professional findings (Meehl, 1992). Mechanical systems, by contrast, tend to assign valid weights to the data, making the assessment more accurate and the treatment more effective.
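To make the contrast concrete, here is a minimal Python sketch of what a "mechanical" prediction rule looks like in its simplest form: fixed weights applied to standardised inputs, so identical data always produce identical output. The predictors, weights and function name are hypothetical illustrations and are not Woebot's actual model.

```python
# Purely illustrative sketch of a "mechanical" (actuarial) prediction rule:
# a fixed set of weights is applied to standardised inputs, so the same data
# always yield the same score. The predictors and weights below are
# hypothetical and are NOT Woebot's actual algorithm.

def mechanical_risk_score(phq9_score: float, family_history: bool, prior_episodes: int) -> float:
    """Combine a few standardised inputs with fixed weights into a single score."""
    weights = {"phq9": 0.6, "family_history": 0.25, "prior_episodes": 0.15}
    score = (
        weights["phq9"] * (phq9_score / 27)                      # PHQ-9 ranges from 0 to 27
        + weights["family_history"] * (1.0 if family_history else 0.0)
        + weights["prior_episodes"] * min(prior_episodes, 3) / 3  # cap at 3 prior episodes
    )
    return round(score, 3)

# The same inputs always produce the same output -- no clinician-to-clinician variation.
print(mechanical_risk_score(phq9_score=14, family_history=True, prior_episodes=1))  # 0.611
```

The point of the sketch is not the particular numbers but the property Meehl emphasised: the rule never gets tired, never anchors on a memorable past case, and weights every patient's data the same way.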

The DSM-5 field trials found that the inter-rater reliability (kappa) of a Major Depressive Disorder diagnosis was only about 0.28 (Freedman et al., 2013). In other words, a patient diagnosed as depressed by one clinician would often not receive the same diagnosis from another. This problem of reliability frequently occurs in the clinical method of assessment because experts are subject to a range of biases when observing, interpreting and analysing the information given to them. Mechanical systems of diagnosis, by contrast, make fairly consistent predictions because they base their decisions on objective calculations and computations. That said, with increased research into mental illness, practitioners now rely on shared diagnostic manuals such as the DSM (Diagnostic and Statistical Manual of Mental Disorders) and the ICD (International Classification of Diseases), which help reduce cultural and personal biases.
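That figure is a Cohen's kappa, a measure of agreement between two raters corrected for the agreement expected by chance. The short sketch below, using made-up ratings from two hypothetical clinicians, shows how the statistic is computed and why a value in the 0.2-0.3 range means clinicians frequently disagree.

```python
# Illustrative sketch: Cohen's kappa, the chance-corrected agreement statistic
# behind figures like the ~0.28 reported for Major Depressive Disorder.
# The two "clinician" rating lists below are made-up example data.

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Agreement between two raters, corrected for agreement expected by chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    labels = set(rater_a) | set(rater_b)
    expected = sum((rater_a.count(lbl) / n) * (rater_b.count(lbl) / n) for lbl in labels)
    return (observed - expected) / (1 - expected)

# Two clinicians diagnosing the same ten patients (1 = depressed, 0 = not depressed)
clinician_1 = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
clinician_2 = [1, 0, 0, 1, 1, 0, 0, 0, 1, 1]
print(round(cohens_kappa(clinician_1, clinician_2), 2))  # 0.2: barely better than chance
```

Here the two clinicians agree on 6 of 10 patients, yet because half that agreement would be expected by chance alone, the kappa is only 0.2.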

There is no denying that an 18-year-old schoolgirl might be more comfortable typing the details of her personal life to an anonymous chatbot on Facebook than discussing her problems with a 50-year-old therapist who raises her eyebrows in judgement. However, these chatbots are not licensed therapists; they cannot handle crisis situations such as panic attacks or seizures. They are merely supportive tools for people who lack protective factors in their environment. For example, the app Therachat is used by many mental health practitioners as a place for patients to journal their thoughts. Because these apps identify negative thinking patterns and cognitive distortions, clinicians use them for logistical convenience. From this information, the clinician derives a unique treatment plan for each client using judgement and skill honed by years of knowledge and experience (Kostopoulos, 2018).

Even from an experimental standpoint, the Stanford study does not prove that talking to Woebot is better than clinical intervention. The control group was simply given information on the treatment of depression. Had the control group instead received interventions from human therapists, the study would have compared the clinical and mechanical systems far more directly. The only logical inference from the study is that conversing with Woebot is better than receiving no assessment or treatment at all. Hence, these technologies can only be used to make therapy more financially and logistically accessible to society. They act as "gateway therapists", encouraging their users to seek help in the real world and providing text and hotline resources.

Given the current research findings, it is safe to conclude only that technological innovations like Woebot should be funded and encouraged because they are logistically and financially more accessible. The claim that they will ever be able to replace human therapists, however, still seems far-fetched.

References

1. Fitzpatrick, K. K., Darcy, A., & Vierhile, M. (2017). Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial. JMIR Mental Health, 4(2), e19.

2. Molteni, M. (2018, November 20). The Chatbot Therapist Will See You Now. Retrieved from https://www.wired.com/2017/06/facebook-messenger-woebot-chatbot-therapist/.

3. Meehl, P. E. (1992). Cliometric metatheory: The actuarial approach to empirical, history-based philosophy of science. Psychological Reports, 71, 339–467.

4. Freedman, R., Lewis, D. A., Michels, R., Pine, D. S., Schultz, S. K., Tamminga, C. A., ... Yager, J. (2013). The initial field trials of DSM-5: New blooms and old thorns. American Journal of Psychiatry, 170(1), 1-5. https://doi.org/10.1176/appi.ajp.2012.12091189

5. Kostopoulos, L. (2018). The Emerging Artificial Intelligence Wellness Landscape: Benefits and Potential Areas of Ethical Concern. Cal. WL Rev., 55, 235.
