Hearing comes so easily to most people that it's hard to appreciate how much information the brain must process and decode. It has to take in incoming sounds and convert them into the acoustic objects we perceive. It has to separate relevant sounds from background noise. And it has to recognize that a word spoken by two different people has the same meaning.
Traditional models of neural processing hold that when we hear sounds, the auditory system first extracts simple features and then combines them into increasingly complex and abstract representations. This is how the brain can turn the sound of someone speaking into phonemes, then syllables and finally words.
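To make the idea of a strict hierarchy concrete, here is a toy Python sketch, purely an illustration and not a model from the study, in which each stage consumes the output of the stage below it; the frame labels, phoneme groupings and one-word lexicon are invented placeholders.

```python
# Toy sketch of the classical feed-forward idea: sound -> phonemes -> syllables -> word.
# Everything here (frame labels, mappings, lexicon) is a made-up placeholder.

def extract_phonemes(acoustic_frames):
    """Pretend feature extractor: map short acoustic frames to phoneme labels."""
    frame_to_phoneme = {"f1": "HH", "f2": "EH", "f3": "L", "f4": "OW"}  # invented lookup
    return [frame_to_phoneme[f] for f in acoustic_frames]

def group_into_syllables(phonemes):
    """Pretend grouping rule: split the phoneme stream into syllable-sized chunks."""
    return [phonemes[:2], phonemes[2:]]              # "HH EH" + "L OW"

def recognize_word(syllables):
    """Pretend lexicon lookup: map a syllable sequence to a word."""
    lexicon = {(("HH", "EH"), ("L", "OW")): "hello"}  # invented one-word lexicon
    key = tuple(tuple(s) for s in syllables)
    return lexicon.get(key, "<unknown>")

# A strictly serial pipeline: each stage waits for the one before it.
frames = ["f1", "f2", "f3", "f4"]
print(recognize_word(group_into_syllables(extract_phonemes(frames))))  # -> hello
```

In this serial picture, nothing downstream can happen until every earlier stage has finished, which is exactly the assumption the new findings call into question.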
But in a paper published in Cell, researchers challenged this model, reporting that the auditory system often processes sound and speech simultaneously and in parallel. To their surprise, the brain's route to understanding speech turned out to be less straightforward than expected: signals from the ears branch into distinct brain pathways at an unexpectedly early stage of processing, sometimes even bypassing a region of the brain believed to be crucial for building representations of complex sounds.
The work offers clues to a new explanation of how the brain can unbraid multiple streams of auditory stimuli so quickly and efficiently. The discovery challenges not only prevailing theories of speech processing but also long-held assumptions about how the auditory system works more broadly. Many current beliefs about how we perceive sounds rest on analogies with what is known about computations in the visual system, yet there is growing evidence, the recent speech study included, that auditory processing works differently. Scientists are now beginning to rethink how the different parts of the auditory system operate and what that means for our ability to decipher rich soundscapes.
Dana Boebinger, a cognitive neuroscientist at Harvard University who was not involved with the study, considers the findings a big deal. She isn't ready to abandon the more traditional theories of how the brain processes complex auditory information, but she finds the results intriguing because they suggest that our current picture may not be entirely accurate.
A Hierarchy in the Head
The earliest steps in our perception of sound are relatively simple and well understood. The cochlea in the inner ear breaks complex sounds down into their component frequencies, and that information is relayed through multiple stages of processing toward the auditory cortex. Along the way, information about a sound's location in space, its pitch and how it changes over time is extracted from these signals. What happens next is harder to pin down. Higher cortical areas are believed to extract features relevant to speech, from phonemes to prosody, in a hierarchical sequence. Similar processes are thought to apply to other complex sounds, such as music.
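As a loose computational analogy for that first step, the frequency decomposition performed by the cochlea, the sketch below uses NumPy to compute a crude spectrogram of a synthetic sound. The sample rate, window length and test tones are arbitrary choices for illustration, and the code is not meant to capture the cochlea's actual biophysics.

```python
# Rough analogy only: decompose a sound into its component frequencies over time,
# in the spirit of a short-time Fourier transform / spectrogram.
import numpy as np

fs = 16000                                    # sample rate in Hz (arbitrary choice)
t = np.arange(0, 1.0, 1 / fs)                 # one second of audio
# Synthetic "complex sound": two tones plus a little noise.
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)
signal += 0.05 * np.random.randn(t.size)

frame_len = 512                               # ~32 ms analysis window
window = np.hanning(frame_len)
spectrogram = []
for start in range(0, signal.size - frame_len, frame_len // 2):   # 50% overlap
    frame = signal[start:start + frame_len] * window
    spectrum = np.abs(np.fft.rfft(frame))     # energy at each frequency in this frame
    spectrogram.append(spectrum)
spectrogram = np.array(spectrogram)           # shape: (num_frames, num_frequency_bins)

freqs = np.fft.rfftfreq(frame_len, d=1 / fs)
loudest = freqs[np.argmax(spectrogram.mean(axis=0))]
print(f"Strongest component frequency: about {loudest:.0f} Hz")   # close to 440 Hz
```

Each row of the resulting array records how much energy the sound carries at each frequency during a short slice of time, roughly the kind of frequency-resolved information that is passed onward from the ear.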
This arrangement is reminiscent of how the visual system works: it interprets patterns of light falling on cells in the retina first as lines and edges, then as more complex features and patterns, eventually building up a representation of a face or an object.
Dissecting the flow of auditory information has been difficult, however. Because speech is a uniquely human trait, it is hard to study its processing in animals. And in humans, most measurements of brain activity must be made with indirect methods, because direct recordings are invasive: scientists must piggyback on medical procedures, collecting data from electrodes that have been placed in patients' brains for clinical reasons. To complicate matters, many of the auditory regions of interest lie deep inside the brain, in the fold between the temporal and frontal lobes, an area from which surgeons don't usually need to record.
Many of these studies, both direct and indirect, have found evidence supporting the hierarchical model of auditory and speech processing. The primary auditory cortex appears to be one of the first stops in the process, encoding simple properties of sound such as frequency. As signals move farther from the primary auditory cortex, brain regions respond to progressively more complex features of sound, such as phonemes and other elements of speech. So far, so good.
But as Liberty Hamilton, a neuroscientist at the University of Texas at Austin, pointed out, scientists deduced this hierarchy from experiments that weren't necessarily designed to determine how these regions are connected or the sequence in which they become active.
So in 2014, she set out to build a more complete map of how speech sounds are represented in the auditory cortex, to discover what information is extracted by different brain areas and how it gets integrated.
She first had the chance to pursue the question as a postdoctoral researcher in the lab of Edward Chang, a neurosurgeon at the University of California, San Francisco, and later in her own lab in Austin. Chang, Hamilton and their colleagues recruited patients whose treatment required grids of electrodes to be placed at different locations in their brains.