AI must be developed with mental health outcomes in mind

For years, artificial intelligence has been touted as a potential game-changer for healthcare in the United States. More than a decade after the HITECH Act encouraged hospital systems to adopt electronic health records (EHRs) for patient data management, there has been an explosion in the amount of health data generated, stored, and available to inform clinical decisions.

The motivation to integrate AI into mental health services has only increased during the pandemic. The Kaiser Family Foundation reported an increase in adults with symptoms of anxiety and depression, from 1 in 10 adults before the pandemic to 4 in 10 in early 2021. Coupled with a national shortage of mental health professionals and limited opportunities for in-person mental health support, AI-powered tools could serve as an entry point to care by automatically and remotely measuring symptoms and intervening to reduce them.

Many mental health startups are integrating AI into their product offerings. Woebot Health has developed a chatbot that offers on-demand therapy to users through natural language processing (NLP). Spring Health leverages machine learning powered by historical patient data to generate personalized treatment recommendations. Big tech companies are also starting to dive into this space: Apple recently partnered with UCLA to develop algorithms that measure symptoms of depression using data collected from Apple devices.

Yet AI is far from perfect. There have been notable bumps in the road in other areas of medicine that reveal the limitations of AI and, in particular, the machine learning models that power its decision-making. For example, Epic, one of the largest EHR software developers in the United States, deployed a sepsis prediction tool in hundreds of hospitals. Researchers found that the tool performed poorly in many of these hospital systems. And a widely used algorithm for directing people to “high-risk care management” programs was less likely to refer Black patients than white patients who were just as sick. As mental health AI products are launched, technologists and clinicians must learn from past failures of AI tools in order to create more effective interventions and limit potential harm.

Our recent research outlines three areas where AI-powered mental health technologies may underperform.

  • Understanding individuals: First, it can be difficult for AI mental health measurement tools to contextualize the different ways individuals experience changes in their mental health. For example, some people sleep more when experiencing a depressive episode, while others sleep less, and AI tools may not be able to distinguish these patterns without additional human interpretation.
  • Adapting over time: Second, AI technologies must adapt to patients’ needs as they evolve. For example, during the COVID-19 pandemic, we were all forced to adapt to new personal and professional routines. Likewise, AI-based mental health measurement tools must adapt to new behavioral routines, and treatment tools must offer a new suite of options to match users’ changing priorities.
  • Collecting uniform data: Third, AI tools may work differently from device to device due to the different data access policies set by device manufacturers. For example, many researchers and companies are developing AI-based mental health measures using data collected from technologies such as smartphones. But Apple does not allow developers to collect many types of data available on Android, and numerous studies have created and validated AI mental health measures with Android-only devices.

With these problem areas in mind, we investigated whether a smartphone-based AI tool could measure the mental health of individuals who had different symptoms and used different devices. Although the tool was reasonably accurate, the varied symptoms and the different types of data collected across devices limited what our tool could measure compared with tools evaluated on more homogeneous populations. As these systems are deployed to larger and more diverse populations, it will become harder to meet the needs of different users.

Given these limitations, how can we responsibly develop AI tools that improve mental health care? As a general mindset, technologists should not assume that AI tools will perform well once deployed, but should instead work continuously with stakeholders to re-evaluate solutions when they underperform or are mismatched to stakeholders’ needs.

For one, we shouldn’t assume that technological solutions are always welcome. History bears this out: it is well established that the introduction of EHRs increased provider burnout, and EHRs remain notoriously difficult to use. Likewise, we need to understand how AI mental health technologies can affect the different stakeholders within the mental health system.

For example, AI-powered therapy chatbots may be an adequate solution for patients with mild mental health symptoms, but patients with more severe symptoms will need additional support. How do we enable that handoff from a chatbot to a care provider? As another example, continuous measurement tools can provide a remote, lower-burden method of tracking patients’ mental health. But who should be allowed to see these measurements, and when should they be made available? Clinicians, already overloaded and experiencing data overload, may not have time to review this data outside of appointments. At the same time, patients may feel that collecting and sharing the data violates their privacy.

Organizations deploying AI-based mental health technologies must understand these complexities to be successful. By working with stakeholders to identify the different ways that AI tools interact with and affect the people who provide, receive, and are affected by care, technologists are more likely to create solutions that improve patient mental health.

Dan Adler is a PhD candidate at Cornell Tech, where he works in the People-Aware Computing Lab building technology to improve mental health and well-being.
