A recent study conducted by the Stanford Department of Psychiatry and Behavioural Sciences shows how a chatbot therapist, Woebot, effectively reduced symptoms of depression and anxiety in its users by administering Cognitive Behavioural Therapy (CBT). In the study, 70 college students were randomised to engage with either Woebot or a self-help e-book for two weeks. The students who used Woebot self-reported a significant reduction in their symptoms (Fitzpatrick, Darcy, & Vierhile, 2017).
Woebot is, however, not based on a novel concept. Projects such as Ellie (a virtual therapist developed at the University of Southern California) and Therachat (another therapeutic chatbot) also use mechanical systems for the diagnosis and treatment of mental disorders (Molteni, 2018).
Such advances in artificial intelligence repeatedly raise the question of how effective these mechanical methods of assessment and treatment are compared to clinical methods. This blog post will argue that mechanical methods, despite being more valid, reliable and standardised, should be used only in conjunction with the clinical method and not as a substitute for it.
Technologies like Woebot use sophisticated algorithms and statistical models to sort through enormous amounts of information, such as medical history, reports from family members and scores on psychological tests. In contrast, clinicians use skilled intuition to focus on relevant information and make informed judgments. However, owing to clinical biases, clinicians often give more weight to personal experiences and memorable encounters than to professional findings (Meehl, 1992). Mechanical systems, by contrast, assign explicit, validated weights to each piece of data, making the assessment more accurate and the treatment more effective, as the sketch below illustrates.
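To make the contrast concrete, here is a minimal sketch of what a mechanical prediction rule looks like. The variables and weights are invented purely for illustration; they are not taken from Woebot or any validated clinical instrument.

```python
# A toy "mechanical" prediction rule: every input variable gets a
# fixed, explicitly stated weight, so identical inputs always yield
# the identical risk score. All weights and variables below are
# illustrative assumptions, not a real diagnostic model.

def mechanical_risk_score(phq9_score, family_history, prior_episodes):
    """Combine inputs with fixed weights into a single score."""
    weights = {
        "phq9_score": 0.6,       # standardised questionnaire result
        "family_history": 0.25,  # 1 if a first-degree relative is affected
        "prior_episodes": 0.15,  # count of previous depressive episodes
    }
    return (weights["phq9_score"] * phq9_score
            + weights["family_history"] * family_history
            + weights["prior_episodes"] * prior_episodes)

# The same inputs always produce the same score -- the consistency
# that intuition-driven clinical judgment cannot guarantee.
print(mechanical_risk_score(phq9_score=14, family_history=1, prior_episodes=2))
```

The point is not the particular numbers but the procedure: the weighting is stated up front and applied uniformly, whereas a clinician may weigh the same facts differently from one patient, or one day, to the next.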
The DSM-5 field trials revealed that the test–retest reliability of a Major Depressive Disorder diagnosis was close to 0.28, as measured by Cohen's kappa (Freedman, 2013). This statistic indicates that if a patient is diagnosed as depressed by one clinician, there is a substantial chance he or she will not receive the same diagnosis from another clinician. This problem of reliability frequently occurs in the clinical method of assessment because experts are subject to a range of biases when observing, interpreting and analysing the information given to them. Mechanical systems of diagnosis make far more consistent predictions because they base their decisions on objective calculations. That said, with increased research into mental illnesses, practitioners now use standardised diagnostic manuals such as the DSM (Diagnostic and Statistical Manual of Mental Disorders) and the ICD (International Classification of Diseases), which help reduce cultural and personal biases.
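For readers unfamiliar with how such reliability figures are produced: Cohen's kappa corrects the raw agreement between two raters for the agreement expected by chance. A minimal sketch, with the two clinicians' diagnoses made up for illustration:

```python
# Cohen's kappa: the statistic behind reliability figures like the
# ~0.28 reported in the DSM-5 field trials. The diagnosis lists
# below are fabricated examples, not real patient data.

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    labels = set(rater_a) | set(rater_b)
    expected = sum(
        (rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels
    )
    return (observed - expected) / (1 - expected)

# 1 = diagnosed with MDD, 0 = not diagnosed; the same ten patients
# assessed independently by two clinicians.
clinician_1 = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
clinician_2 = [1, 0, 0, 1, 1, 0, 0, 0, 1, 1]
print(round(cohens_kappa(clinician_1, clinician_2), 2))  # 0.2
```

Here the two clinicians agree on 6 of 10 patients, yet the kappa is only 0.2, because with these base rates half of that agreement would occur by chance alone. A value near 0.28 therefore signals agreement only modestly above chance.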
There is no denying that an 18-year-old schoolgirl might be more comfortable typing the details of her personal life to an anonymous chatbot on Facebook than discussing her problems with a 50-year-old therapist who raises her eyebrows in judgment. However, these chatbots are not licensed therapists; they cannot handle crisis situations such as panic attacks and seizures. They are merely supportive tools for people who lack protective factors in their environment. For example, the app Therachat is used by many mental health practitioners to let patients journal their thoughts. Because these apps flag negative thinking patterns and cognitive distortions, clinicians use them for logistical convenience. From this information, the clinician derives an individualised treatment plan for the client, using judgment and skill honed by years of knowledge and experience (Kostopoulos, 2018).
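As a rough intuition for the kind of pattern-flagging such a journaling app might perform before handing entries to a clinician, consider the toy sketch below. The distortion categories are standard CBT labels, but the trigger phrases and the simple keyword matching are assumptions invented here; a real app would presumably use far more sophisticated language analysis.

```python
# A toy sketch of flagging CBT cognitive distortions in a journal
# entry. The cue phrases and matching logic are illustrative
# assumptions only, far cruder than any real product.

DISTORTION_CUES = {
    "all-or-nothing thinking": ["always", "never", "completely"],
    "catastrophising": ["disaster", "ruined", "the worst"],
    "labelling": ["i am a failure", "i am worthless"],
}

def flag_distortions(entry):
    """Return the CBT distortion labels whose cue phrases appear."""
    text = entry.lower()
    return [label for label, cues in DISTORTION_CUES.items()
            if any(cue in text for cue in cues)]

print(flag_distortions("I always mess up; this exam is ruined."))
# ['all-or-nothing thinking', 'catastrophising']
```

Even such crude flags can save a clinician time, but note that the output is raw material for human judgment, not a diagnosis, which is precisely the division of labour this post argues for.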
Even from an experimental standpoint, the Stanford study does not prove that talking to Woebot is better than clinical intervention. The control group in the study was simply given information on the treatment of depression. Had the control group consisted of patients receiving interventions from human therapists, the study would have compared the clinical and mechanical systems far more directly. The only logical inference from the study is that conversing with Woebot is better than receiving no assessment or treatment at all. Hence, these technologies should be used to make therapy more financially and logistically accessible to society. They act as "gateway therapists", encouraging their users to seek help in the real world and providing text and hotline resources.
Given the current research findings, it is safe to conclude that technological innovations like Woebot should be funded and encouraged, as they are logistically and financially more accessible. The claim that they will ever be able to replace human therapists, however, still seems far-fetched.
References
1. Fitzpatrick, K. K., Darcy, A., & Vierhile, M. (2017). Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial. JMIR Mental Health, 4(2), e19.
2. Molteni, M. (2018, November 20). The chatbot therapist will see you now. Wired. Retrieved from https://www.wired.com/2017/06/facebook-messenger-woebot-chatbot-therapist/
3. Meehl, P. E. (1992). Cliometric metatheory: The actual approach to empirical, history-based philosophy of science. Psychological Reports, 71, 339–467.
4. Freedman, R., Lewis, D. A., Michels, R., Pine, D. S., Schultz, S. K., Tamminga, C. A., ... Yager, J. (2013). The initial field trials of DSM-5: New blooms and old thorns. American Journal of Psychiatry, 170(1), 1–5. https://doi.org/10.1176/appi.ajp.2012.12091189
5. Kostopoulos, L. (2018). The emerging artificial intelligence wellness landscape: Benefits and potential areas of ethical concern. California Western Law Review, 55, 235.