“Voice biomarker” technology analyzes your voice for signs of depression


Illustration: Natalie Peeples/Axios

Software that analyzes snippets of your speech to identify mental health issues is rapidly making its way into call centers, medical clinics, and telehealth platforms.

  • The idea is to detect diseases that would otherwise go untreated.

Why it matters: Proponents of “voice biomarker” technology say the underlying artificial intelligence is good enough to recognize depression, anxiety and a host of other illnesses.

  • But its growing ubiquity raises privacy issues similar to those raised by facial recognition.

Driving the news: Hospitals and insurance companies are installing voice biomarker software for incoming and outgoing calls so that, with patients’ explicit permission, they can identify in real time whether someone they’re speaking with may be anxious or depressed, and refer them for help.

  • The technology is also making inroads into doctor’s offices, clinical trials and senior centers, to identify problems on the spot and monitor patients remotely.
  • Employers are looking to use voice biomarkers on their employee helplines.
  • Companies selling the technology, such as Kintsugi, Winterlight Labs, Ellipsis Health and Sonde Health, are raising money, racking up customers and pointing to growing clinical evidence of its effectiveness.
  • Apps designed for personal use, like Sonde Mental Fitness, encourage people to keep a daily voice diary so the AI can detect stress and other changes.

State of play: A small but growing number of scientific studies support claims that voice biomarkers can help detect everything from depression to cardiovascular problems and respiratory diseases such as COVID-19, asthma and COPD.

  • Depressed patients “pause more” and “stop more often,” Maria Espinola, a psychologist and assistant professor at the University of Cincinnati School of Medicine, told the New York Times.
  • “Their speech is generally more monotone, flatter and softer,” she said. “They also have a reduced pitch range and lower volume.”

How it works: Unlike Siri and Alexa, voice biomarker systems analyze how you speak (prosody, intonation, pitch, etc.), not what you say.

  • Your voice sample is run through a machine learning model that uses a large database of anonymous voices for comparison.
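The prosodic cues the article describes (more pauses, a flatter pitch contour, a narrower range) are the kind of features such systems feed into their models. A toy illustration of that feature-extraction step, using made-up per-frame pitch values rather than real audio, might look like this:

```python
from statistics import mean, pstdev

def prosodic_features(pitch_hz):
    """Toy prosodic-feature extractor.

    `pitch_hz` is a list of per-frame pitch estimates in Hz,
    with 0.0 marking silent (pause) frames.
    """
    voiced = [p for p in pitch_hz if p > 0]
    return {
        "pause_ratio": 1 - len(voiced) / len(pitch_hz),  # more pauses -> higher
        "mean_pitch": mean(voiced),
        "pitch_range": max(voiced) - min(voiced),        # narrower in flat speech
        "pitch_stdev": pstdev(voiced),                   # lower -> more monotone
    }

# Synthetic samples: a flat, pause-heavy voice vs. a livelier one
flat = [0.0, 0.0, 110.0, 112.0, 0.0, 111.0, 0.0, 0.0, 113.0, 110.0]
lively = [180.0, 210.0, 0.0, 150.0, 240.0, 195.0, 170.0, 220.0, 0.0, 205.0]
print(prosodic_features(flat))
print(prosodic_features(lively))
```

Real systems extract dozens of such features from the audio signal and pass them to a machine learning model trained on labeled voice samples; the function and data above are purely illustrative.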

What they are saying: “From as little as 20 seconds of freeform speech, we can detect with 80% accuracy whether someone is struggling with depression or anxiety,” Kintsugi CEO Grace Chang tells Axios.

  • That figure comes from comparing the results of Kintsugi’s technology to a clinical evaluation by a mental health professional.
  • “When we’re embedded in a call center where there’s a nurse online, the nurse can ask additional questions” when there’s a positive match, Chang said.
  • Before it is activated, patients are asked if they agree to have their voice analyzed for health assessment purposes. About 80% agree, says Chang.
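An accuracy figure like Kintsugi’s is, at its simplest, the rate of agreement between the model’s flags and a clinician’s assessments. A minimal sketch with hypothetical labels (the data below is invented for illustration):

```python
def screening_accuracy(model_flags, clinician_labels):
    """Fraction of screened calls where the model's depressed/anxious
    flag matches the clinician's assessment (simple agreement)."""
    matches = sum(m == c for m, c in zip(model_flags, clinician_labels))
    return matches / len(clinician_labels)

# Hypothetical results for 10 screened calls (True = flagged / diagnosed)
model = [True, False, True, True, False, False, True, False, True, False]
clinic = [True, False, False, True, False, True, True, False, True, False]
print(screening_accuracy(model, clinic))  # 0.8
```

In practice, clinical validation also weighs false positives against false negatives separately (sensitivity and specificity), since missing a depressed caller and wrongly flagging a healthy one carry different costs.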

What’s next: Kintsugi is seeking Food and Drug Administration approval for its product to become a recognized diagnostic tool.

  • Sonde Health positions its system differently. “We are an early warning system, we are not a diagnostic device,” Sonde CEO David Liu tells Axios.
  • Sonde’s technology, which detects depression, anxiety and breathing problems, is being integrated into hearing aids and tested by the Cognitive Behavior Institute.

  • “From a few seconds of ‘ahhh…’ we can tell if you have symptoms of respiratory disease,” Liu tells Axios.

Yes, but: Skepticism and ethical questions surround voice biomarker technology, which observers describe as promising but not foolproof, and ripe for potential misuse.

  • “There is still a long way to go before the clinical community can endorse AI-driven vocal biomarkers,” reads an editorial published in the medical journal The Lancet. “Better ethical and technical standards are required for this research to realize the potential of vocal biomarkers in early disease detection.”
  • One risk is that the systems “may increase systemic biases toward people from specific regions, backgrounds, or with a specific accent,” according to a review in the journal Digital Biomarkers.
  • “My research suggests that the ability of AI to identify vocal biomarkers is often overestimated or overlooks the highly subjective processes involved in building these systems,” writes Beth Semel, an assistant professor of anthropology at Princeton who has studied vocal biomarkers since 2015.

For example: Amazon’s Halo wearable device has drawn criticism for its vocal pitch tracking feature, which uses a form of voice biomarker technology.

  • “People of color and women are traditionally more discriminated against by AI systems, and Amazon’s own AI research has struggled with this same issue in the past,” according to tech news site Protocol.

The bottom line: Enthusiasm from big pharmaceutical, insurance and health care companies will likely drive voice biomarker technology forward, and its powers of early detection could prove useful in diagnosing a myriad of conditions, if concerns about efficacy and privacy are kept in check.
