A trio of researchers has developed an experimental machine learning method that allows AI to listen for the early whispers of a psychotic break that humans can’t hear.
The team, consisting of Neguine Rezaii of Harvard Medical School and Emory School of Medicine, and Elaine Walker and Philipp Wolff from Emory University’s Department of Psychology, set out to see whether language could serve as an indicator of the latent onset of psychosis.
They developed a machine learning method that looks for specific indicators long thought to be associated with psychosis, especially schizophrenia. The team then spent two years observing study volunteers, a significant portion of whom went on to experience a psychotic break (the first fully psychotic episode).
The results of the study were striking. The team not only determined their tool could experimentally predict psychotic break with higher-than-human accuracy, but also discovered a new indicator of impending psychotic break, one that could have widespread implications for the field of psychology: the ability to detect early signs of auditory hallucinations.
The medical profession accepts that a significant number of people who suffer from mental illnesses associated with psychosis have auditory hallucinations. Unfortunately, these positive symptoms – positive meaning sufferers experience symptoms neurotypical individuals do not – are usually observed too late in the prodromal (early) phase of psychotic disorders such as schizophrenia to be of much use. Detecting such hallucinations early could change the landscape of mental illness treatment.
According to the team’s research:
Our findings indicate that during the prodromal phase of psychosis, the emergence of psychosis was predicted by speech with low levels of semantic density and an increased tendency to talk about voices and sounds. When combined, these two indicators of psychosis enabled the prediction of future psychosis with a high level of accuracy.
Semantic density is the easy one. It’s well-known among psychologists that individuals who suffer from prodromal psychosis usually communicate differently than those displaying neurotypical behavior. Those with low levels of semantic density are people who speak with little substance or context unless prompted, and even then sparingly.
This is also referred to as alogia, or poverty of speech.
The researchers built an algorithm to detect semantic density, using more than 30,000 Reddit posts to establish a baseline. They then compared this baseline against study participants’ interviews to determine where each individual fell relative to the norm.
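The article doesn’t detail the scoring pipeline, but the baseline idea can be sketched simply: score speech from a large reference corpus, then measure how far a participant’s speech falls from that norm. Everything below – the scores, the function, the participant value – is invented purely for illustration:

```python
import numpy as np

# Toy semantic-density scores for sentences drawn from a reference
# corpus (in the study, more than 30,000 Reddit posts). Values invented.
corpus_scores = np.array([0.82, 0.75, 0.91, 0.68, 0.88, 0.79, 0.85, 0.72])

def z_score(participant_score, baseline):
    """Position a participant's mean density relative to the corpus norm."""
    return (participant_score - baseline.mean()) / baseline.std()

# A hypothetical participant whose speech carries unusually little
# meaning per word lands several standard deviations below the baseline.
z = z_score(0.55, corpus_scores)
print(round(z, 2))
```

A strongly negative z-score would flag speech as unusually low in semantic density relative to the Reddit-derived norm.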
But discerning if people were hearing voices or sounds that don’t actually exist is a bit trickier, especially when those individuals may be experiencing only mild onset hallucinations or could be entirely unaware of changes in their mental state.
This isn’t as simple as just asking patients a bunch of questions about what they hear in their heads. Because those who eventually go on to have psychotic breaks often exhibit low semantic density, the clues to their auditory hallucinations are too faint for human observers to detect.
Rezaii, the lead author on the paper, told Emory University’s Carol Clark:
Trying to hear these subtleties in conversations with people is like trying to see microscopic germs with your eyes. The automated technique we’ve developed is a really sensitive tool to detect these hidden patterns. It’s like a microscope for warning signs of psychosis.
In order to find what essentially works out to be a handful of needles distributed randomly throughout an infinite number of potential haystacks, the researchers created a technique called “vector unpacking” that determines how much meaning is packed into a given sentence.
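The authors’ exact algorithm isn’t reproduced here, but the intuition behind vector unpacking can be sketched: represent each word as an embedding vector, then ask how many independent directions of meaning the sentence’s words actually span, normalized by sentence length. The sketch below uses a singular-value decomposition as a rough stand-in for that decomposition; the threshold and the toy 4-dimensional vectors are assumptions for demonstration only:

```python
import numpy as np

def semantic_density(word_vectors, var_threshold=0.9):
    """Rough stand-in for 'vector unpacking': count how many independent
    meaning components a sentence's word embeddings span (via SVD),
    normalized by sentence length. Illustrative, not the authors' method."""
    M = np.asarray(word_vectors, dtype=float)
    # Singular values measure how much variance each component explains.
    s = np.linalg.svd(M, compute_uv=False)
    explained = np.cumsum(s**2) / np.sum(s**2)
    n_components = int(np.searchsorted(explained, var_threshold) + 1)
    return n_components / len(M)

# Toy 4-d embeddings: a varied sentence spans many directions of meaning...
rich = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
# ...while a repetitive one uses words that all point roughly the same way.
sparse = [[1, 0, 0, 0], [0.9, 0.1, 0, 0], [1.1, 0, 0.05, 0], [0.95, 0.02, 0, 0]]

print(semantic_density(rich), semantic_density(sparse))
```

The "sparse" sentence collapses onto a single meaning component, yielding a much lower density score despite having the same number of words.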
Basically, the machine determined that people in the trial who went on to have a psychotic break within the experiment’s two-year time frame – called Converters – were more likely to use words associated with sounds and noises than those who didn’t.
According to the research:
Most notably, the language of the Converters tended to emphasize the topic of auditory perception, with one cluster consisting of the probe words “voice,” “hear,” “sound,” “loud,” and “chant” and the other, of the words “whisper,” “utter,” and “scarcely.”
Interestingly, many of the words included in these clusters – like the word whisper – were never explicitly used by the Converters but were implied by the overall meaning of their sentences.
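That “implied but never spoken” finding rests on a property of word embeddings: the averaged vector of the words a person did say can sit close to a probe word they never used. The toy vectors below are hand-built so the geometry is visible (real systems learn embeddings from large corpora, e.g. word2vec); the words and dimensions are assumptions for illustration:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hand-built toy embeddings; dimensions loosely encode sound/quietness/speech.
emb = {
    "spoke":   np.array([0.2, 0.1, 0.9]),
    "softly":  np.array([0.1, 0.9, 0.3]),
    "barely":  np.array([0.0, 0.8, 0.1]),
    "audible": np.array([0.9, 0.3, 0.2]),
    "whisper": np.array([0.4, 0.8, 0.5]),   # probe word, never spoken
    "shout":   np.array([0.8, -0.6, 0.4]),  # contrasting probe word
}

# Average the words the participant actually said...
sentence = np.mean([emb[w] for w in ["spoke", "softly", "barely", "audible"]], axis=0)

# ...and "whisper" emerges as implied by the sentence, while "shout" does not.
print(cosine(sentence, emb["whisper"]), cosine(sentence, emb["shout"]))
```

The sentence vector lands almost on top of “whisper” and far from “shout,” even though neither word was uttered – which is how a probe word can surface from speech that never contains it.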
When the researchers combined the system that looks for low semantic density with the system that looks for indications of auditory hallucination, they were able to determine with 93-percent accuracy whether an individual would convert and have a psychotic break. Compare that to state-of-the-art methods involving interviews, analysis, and cognitive tests, which only achieve 80-percent accuracy.
It’s still too early to tell what the actual implications for this machine learning method will be, but the prognosis is certainly hopeful. There’s no cure for psychosis, but early detection could provide one of our biggest weapons in the fight against mental illness. There’s optimism that cognitive behavioral therapy and targeted treatment could have a much greater impact on patient outlooks if begun before the onset of a psychotic break.
Those who suffer from psychosis have always cried out for help; we’ve just never had the right tools to understand exactly what they’re saying. This breakthrough demonstrates how powerful machine learning can be, and could represent a pivotal moment in medical science, AI research, and human history.