Though formal cocktail parties are rare these days, the "cocktail party problem" is still very much on scientists' minds.

Audiologists and neuroscientists have puzzled over the brain's ability to navigate crowded rooms for decades. Even when dozens of people are talking over each other, the brain can focus on a single speaker, instantaneously and effortlessly. It would be a great design feature for a hearing aid, if only we could figure out how the brain actually does it.

Recently, though, engineers from Columbia University's Zuckerman Institute in New York got a rare glimpse of what happens inside the brain's auditory cortex while multiple people are talking.

A team led by Nima Mesgarani, an electrical engineer at Columbia, worked with eight volunteers who were undergoing electrode monitoring for seizures as part of their treatment for epilepsy (it would be too invasive to ask otherwise healthy individuals to undergo neurosurgery). Surgeons placed tiny electrodes over the part of the brain devoted to hearing and recorded the subjects' neuronal activity while they listened: first to two voices speaking separately, then to both speaking simultaneously at similar volumes.

The researchers also recorded activity as patients switched their focus between the speakers, much as you would if you were eavesdropping on two different conversations. Broadly, they found, the brain picks out the voice it wants to hear in two distinct steps.

First, one area interprets both speakers' voices, lining them up the way a radio producer might dub over a guest speaking in a foreign language with an English translation. Then, neurons in a second region amplify the desired voice while dampening the other, like the producer turning up the English track in the mix. The whole process takes about 150 milliseconds.
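To make the analogy concrete, here is a minimal Python sketch of that separate-then-reweight idea, assuming the two voices have already been isolated. The `attend` function, the gain value, and the toy tone signals are all invented for illustration; they are not drawn from the study.

```python
import numpy as np

# Loose signal-processing analogy for the two-step process described above:
# given separate estimates of each speaker, reweight the mix in favor of the
# attended voice and dampen the other. Purely illustrative.

def attend(voice_a: np.ndarray, voice_b: np.ndarray,
           focus_on_a: bool, gain: float = 4.0) -> np.ndarray:
    """Boost the attended speaker, dampen the other, then remix."""
    w_a, w_b = (gain, 1.0 / gain) if focus_on_a else (1.0 / gain, gain)
    return w_a * voice_a + w_b * voice_b

# Two toy "voices": pure tones at different pitches, one second at 16 kHz.
sr = 16_000
t = np.arange(sr) / sr
voice_a = 0.5 * np.sin(2 * np.pi * 220 * t)
voice_b = 0.5 * np.sin(2 * np.pi * 330 * t)

heard_while_attending_a = attend(voice_a, voice_b, focus_on_a=True)
heard_while_attending_b = attend(voice_a, voice_b, focus_on_a=False)
```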

This work, published in the journal Neuron on Oct. 21, could inform improvements to hearing aids. Conventional hearing aids amplify all speech indiscriminately, which works well when wearers are conversing with little background noise. But in places with lots of babble, like a bar or a restaurant, wearers can struggle to pick out the specific voice they want to hear.

That's not an insignificant problem. Nearly 30 million adults living in the US have hearing loss that could be improved with hearing aids, and this figure will only grow as the proportion of adults over 65 grows. Many people who could use a hearing aid don't, in part because the technology is still imperfect. And a growing body of work suggests that untreated hearing loss may be linked to cognitive decline, which can be the first symptom of dementia. Dementia care and treatment already costs nearly $1 trillion globally; some estimates suggest that figure will double in the next decade.

One of the next steps for Mesgarani's lab, then, is figuring out how to apply this research to a device that connects directly to the brain and picks up on the right sounds. It would be similar to a cochlear implant, which can help people with extremely limited hearing by directly stimulating the auditory nerve.

That said, it will likely be years before that kind of product is ready for testing: like the electrode technique used in this study, brain-computer interfaces are invasive. In the meantime, the team is also trying to figure out how to train existing external hearing aids to pick up on specific voices of the user's choice. In a 2017 proof-of-concept paper, the team showed that it was possible to train a machine-learning algorithm to do just that.
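For a sense of what such an algorithm does under the hood, the Python sketch below shows the general mask-based approach many machine-learning speech separators use: transform the mixture into the time-frequency domain, mask out the bins that belong to the unwanted voice, and transform back. The signals, the `nperseg` setting, and the "oracle" mask are invented for illustration and are not taken from the team's paper, where a trained network would predict the mask from the mixture alone.

```python
import numpy as np
from scipy.signal import stft, istft

# Toy mixture of two "voices": pure tones standing in for real speech.
sr = 16_000
t = np.arange(2 * sr) / sr
speaker_a = 0.5 * np.sin(2 * np.pi * 200 * t)   # stand-in for the target voice
speaker_b = 0.5 * np.sin(2 * np.pi * 350 * t)   # stand-in for the competing voice
mixture = speaker_a + speaker_b

# Short-time Fourier transforms of the mixture and (for this demo) the clean sources.
_, _, S_mix = stft(mixture, fs=sr, nperseg=512)
_, _, S_a = stft(speaker_a, fs=sr, nperseg=512)
_, _, S_b = stft(speaker_b, fs=sr, nperseg=512)

# "Oracle" binary mask: keep only the time-frequency bins where the target dominates.
mask = (np.abs(S_a) > np.abs(S_b)).astype(float)

# Apply the mask to the mixture and invert back to a waveform.
_, target_estimate = istft(S_mix * mask, fs=sr, nperseg=512)
```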
