Australia's Mental Health Revolution - Part 3

The Grey Ethics of AI in Mental Health

Heather Dailey 1 September 2024


Wow, this series has most definitely been eye-opening! If you have yet to check out Parts 1 & 2 of the series, have a look – so far we’ve explored the many benefits of digital mental health services while keeping in mind the risks of relying on technology over human interaction (remember the data privacy risks of neurotechnology) to support mental health conditions.  

Here, our quest continues to understand the ethical loopholes of mental health platforms that lean on AI (computer systems that can perform tasks normally requiring human intelligence) and robotics (physical machines that can carry out a series of actions automatically, often powered by AI) to deliver therapies. These technologies bring clear, useful benefits to mental health support – the convenience of not having to leave your home, cost-effective integration and service delivery, more hands on deck to fill staffing shortfalls and round out a complete solution. But when we subtract the human element from any type of mental therapy, there are legitimate anxieties that patients won’t receive the proper quality of care and empathy, or may even receive incorrect advice that could significantly derail their progress.  

So the question is: how can AI and robots be integrated into psychotherapy in a way that is safe, respectful and effective for patients? First, these ethical challenges need to be addressed throughout the technology pipeline; then governments MUST implement STRICT policies and regulations to manage the ethical concerns that accompany AI use in the health sector.  

Keep in mind - no one is pretending that this will be a simple process. 

First, the leading uses of AI and robots, and how they benefit mental health solutions: 

AI-Powered Chatbots: Therapeutic approaches such as cognitive behavioural therapy (CBT) have been adapted into AI conversational agents that use natural language processing (NLP) to engage users in therapeutic conversations, offering coping strategies and mood tracking without human intervention.  

Virtual Therapists: Use AI to analyse facial expressions, voice tone, and language, providing mental health support through naturalistic, human-like conversations. 

Robotic Companions: Used to provide comfort and companionship to individuals with mental health challenges, particularly in elderly care settings. These robots can reduce feelings of loneliness and anxiety, serving as non-human companions that simulate interaction. 

AI-Based Mental Health Diagnostics: AI algorithms are increasingly being used to diagnose mental health conditions by analysing data from speech patterns, social media behaviour, and even physical activity. These systems aim to identify early signs of mental health disorders, sometimes more accurately than traditional methods. 

Wearable Technology for Mental Health Monitoring: Wearable devices equipped with AI analyse physiological data, such as heart rate variability and sleep patterns, to monitor and predict mental health conditions like depression or anxiety. These tools can provide real-time feedback and suggest interventions (a simplified sketch of this kind of monitoring follows this list). 

AI-Driven Teletherapy Platforms: These platforms integrate AI to match clients with therapists, optimise therapy sessions, and even guide the therapy process, customising the therapeutic approach based on client data and interactions. 
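To make the wearable-monitoring idea above concrete, here is a minimal, purely illustrative Python sketch. Every value, threshold and window size is invented for illustration and not clinically validated; the point is simply that these tools compare recent physiological signals against a personal baseline and flag sustained changes as a prompt for a human check-in, not a diagnosis.

```python
# Illustrative only: a simplified wearable-style check that flags sustained drops
# in heart rate variability (HRV) relative to a personal baseline.
# All numbers here are made up for demonstration.

from statistics import mean

def flag_hrv_dip(nightly_rmssd_ms, baseline_window=14, recent_window=3, drop_ratio=0.75):
    """Return True if the recent average HRV falls well below the personal baseline.

    nightly_rmssd_ms: list of nightly RMSSD readings in milliseconds, oldest first.
    """
    if len(nightly_rmssd_ms) < baseline_window + recent_window:
        return False  # not enough history to establish a baseline
    baseline = mean(nightly_rmssd_ms[-(baseline_window + recent_window):-recent_window])
    recent = mean(nightly_rmssd_ms[-recent_window:])
    return recent < drop_ratio * baseline

# Example: a fortnight of readings around 60 ms followed by three nights near 40 ms
readings = [60, 62, 58, 61, 59, 63, 60, 57, 62, 61, 60, 59, 58, 62, 41, 39, 42]
if flag_hrv_dip(readings):
    print("Sustained HRV dip detected - suggest a check-in, not a diagnosis.")
```

Real products use far richer models than this, but the design choice is the same: the system surfaces a signal for a human to act on rather than issuing a clinical judgement on its own.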

Top Concerns: 

Quality of Care and Emotional Connection - There’s a chance that robots might not give the same level of care or emotional connection that human therapists do. Mental health treatment often depends on the bond and empathy between therapist and patient, something robots struggle to replicate. A skilled human therapist can judge the right moment to give certain advice and when to change the course of the conversation; the value of such nuanced conversation is paramount. When patients are fragile and vulnerable, teetering between hope and despair, choosing the right words is vital. Yet rule-based therapy chatbots don’t construct original answers at all – they choose which pre-written reply to send – and even generative systems cannot be trusted to weigh their words with the same care. 
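To make that distinction concrete, here is a minimal Python sketch of how a rule-based wellness chatbot selects a pre-written, clinician-approved response rather than generating free text. All of the intents, keywords and replies below are invented for illustration; a real service would have far more of each, and much more careful safety handling.

```python
# A minimal sketch of a rule-based wellness chatbot that picks a pre-written,
# clinician-approved reply rather than generating free text.
# Intents, keywords and responses are invented for illustration only.

SAFETY_KEYWORDS = {"hurt myself", "end it", "suicide"}

APPROVED_REPLIES = {
    "low_mood": "I'm sorry you're feeling low. Would you like to try a short breathing exercise?",
    "sleep": "Poor sleep can make everything harder. Here is a wind-down routine you could try tonight.",
    "fallback": "I may not have understood. Could you tell me a bit more about how you're feeling?",
}

INTENT_KEYWORDS = {
    "low_mood": {"sad", "down", "hopeless", "low"},
    "sleep": {"sleep", "insomnia", "tired", "awake"},
}

def reply(user_message: str) -> str:
    text = user_message.lower()
    # Safety first: escalate to a human rather than improvising a response.
    if any(phrase in text for phrase in SAFETY_KEYWORDS):
        return "It sounds like you may be in crisis. I'm connecting you with a human counsellor now."
    # Otherwise pick the approved reply whose keywords best match the message.
    words = set(text.split())
    scores = {intent: len(words & keywords) for intent, keywords in INTENT_KEYWORDS.items()}
    best_intent, best_score = max(scores.items(), key=lambda kv: kv[1])
    return APPROVED_REPLIES[best_intent if best_score > 0 else "fallback"]

print(reply("I've been feeling really down and can't sleep"))
```

The constraint is the point: every possible reply has been reviewed by a clinician in advance, which is exactly the guardrail that is lost when a system starts generating its own text.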

A known example of advice going awry came after the National Eating Disorders Association funded a wellness chatbot named Tessa. The AI bot deviated from “a very specific algorithm” written by eating disorder experts and gave harmful weight-loss advice, offering phrases such as “A safe and sustainable rate of weight loss is 1–2 pounds per week. A safe daily calorie deficit to achieve this would be 500–1000 calories per day” when prompted with questions about wanting to lose weight. This is the antithesis of helpful and VERY harmful for an eating disorder patient. 

Bias and Fairness - AI systems can inherit biases present in their training data, leading to unfair or skewed treatment recommendations. Their natural language processing algorithms are trained on databases of human text, which opens the possibility of sourcing material that reflects pervasive human biases. This is particularly problematic in mental health, where cultural sensitivity and individualised care are crucial.  

A horrifying example of how biases and flaws in this technology can lead to the absolute worst possible outcome: a Belgian man took his own life after a generative chatbot urged him to do so during six weeks of conversations about the future of the planet. The chatbot, Eliza, convinced him that if he sacrificed himself, he could help stop climate change.  

Accountability – These concerns raise the question: who takes responsibility when a robot makes a mistake or a harmful decision? Ensuring ethical standards is challenging. Not only is there ambiguity about who is accountable for these systems’ actions, but even when developers attempt to fix a glitch, generative systems rely on such complex, interconnected data streams to produce responses that their creators struggle to access or fully understand the reasoning behind the AI's outputs. The more human-like these systems become, the less control their creators have over their responses. This reality prompts the question of what role AI should really be playing. Should it ever replace therapists? Or should it be an intermediary resource between visits to a human therapist?

Must-Dos to Ensure These Systems Function as Ethically as Possible: 

First and foremost, the ethical foundation that already exists in human-centred medicine must be applied to the development of mental health technologies. That foundation is a set of fundamental principles that guide healthcare professionals in delivering compassionate, patient-centred care, with the common thread of beneficence. So how can AI and robotics uphold this underpinning value? 

Start With Ethical Research and Development of Digital Mental Health Tools  

  • Ensure that the development and use of digital technology don’t accentuate existing social and cultural inequities such as the digital divide  
  • Avoid harmful data manipulation – inappropriate or unethical manipulation of data could lead to misleading treatment recommendations or outcomes. This might happen if data is selectively adjusted or interpreted to fit a desired narrative or commercial interest 
  • Guarantee transparency through communication and collaboration –  

         Between Tech Producers: where there are multiple producers and users who may never be in contact, communicate new uses of old technology that the original producer may never have anticipated.  

         With Clinicians: so they can understand how decisions are made, supported by clear documentation and explanations. This communication also allocates responsibility across the network, so there is a shared understanding of what each node is responsible for. 

  • Implement robust data protection measures to safeguard sensitive personal information. This includes anonymising data, securing storage systems, and limiting access to authorised personnel only 
  • Regularly audit AI systems to uncover biases in algorithmic decision-making, and ensure that AI models are trained on diverse and representative datasets to avoid discriminatory outcomes (a simplified example of such an audit follows this list) 
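
As one concrete illustration of the auditing point above, here is a minimal Python sketch of a demographic-parity style check: it compares how often a model recommends follow-up care across demographic groups. The group labels, records and threshold are all invented; real audits use larger datasets, multiple fairness metrics and clinically agreed thresholds.

```python
# A minimal sketch of one routine bias audit: comparing how often a model
# recommends follow-up care across demographic groups (a demographic-parity
# style check). Group labels and records are invented for illustration.

from collections import defaultdict

def recommendation_rates(records):
    """records: iterable of (group, recommended) pairs, recommended being True/False."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, recommended in records:
        totals[group] += 1
        positives[group] += int(recommended)
    return {group: positives[group] / totals[group] for group in totals}

def parity_gap(rates):
    """Largest difference in recommendation rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical audit log: (demographic group, did the model recommend follow-up?)
audit_log = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]

rates = recommendation_rates(audit_log)
gap = parity_gap(rates)
print(rates, f"parity gap = {gap:.2f}")
if gap > 0.2:  # threshold is arbitrary here; real audits set it with clinicians
    print("Flag for review: recommendation rates differ substantially between groups.")
```

A check like this doesn’t prove a system is fair, but running it routinely gives developers and clinicians a shared, documented signal that something needs human review.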

Ensure Informed Consent – Make sure users are fully informed about how their data will be collected, stored, and used. Participants must provide explicit consent before engaging with AI or robotic systems in mental health contexts, and they must receive clear communication about the risks and benefits of the technology, as well as the right to withdraw consent at any time. 

Focus on Human Oversight and Accountability - Establish clear guidelines for human oversight in the deployment and use of AI and robots. This includes defining who is responsible for the actions and decisions made by AI systems, ensuring that human therapists or clinicians are involved at critical decision points, and holding someone accountable for any errors or harm caused by AI. It's important for all parties – patients, clinicians, the public and health services – to acknowledge their responsibilities and start working on ethical frameworks to guide the development and use of these tools. 

Further Policies & Regulations – Going back a few years, Australia released its AI Ethics Framework in 2019; the World Health Organization then published its ethical principles for AI in health in 2021, with an expanded version in 2023, to guide developers, users, and regulators – and paramount to these principles is the intrinsic worth of every person. Following suit, the White House released an Executive Order on AI last year, and 2024 is set to see a new wave of laws emerge around the use of AI. But at this stage, while there are general ethical frameworks for AI, there is a lack of detailed, sector-specific guidelines tailored to the unique challenges of mental health care. Mental health services involve particularly sensitive data and complex emotional interactions that generic AI guidelines may not fully cover. 

And we have to remember that, as with all policies and regulations, their success comes from continuous monitoring and improvement as those in the technology field, health services, government and patients learn more through their experiences – both good and bad. 

The conclusion as we sit here today? At this stage (if not for the foreseeable future) we NEED the human element to form a complete mental health solution – not only to ensure empathy and quality of care, but to constantly review the technology and its effects on patients to make sure it is functioning ethically.  

There is clearly a long road ahead when we look at the gap in specific laws to protect patients with complex mental health issues and to provide clear accountability when a mental health plan falls short due to an AI glitch. We also need greater harmonisation and coordination between states and key stakeholders, given how few standards – such as those for data privacy – are currently harmonised. But by consistently and thoroughly raising these issues – privacy, bias, consent, transparency, human oversight, and continuous evaluation – and collaborating to ensure sound ethics are threaded through every development, we can ensure that AI and robotics interventions are developed and deployed in an ethically sound manner. This means respecting individual rights, promoting fairness, and maximising benefits while minimising potential harm, to complete the full patient experience and encourage optimal results. 


Published by

Heather Dailey, Content Strategist, Marketing