How selective hearing works in the brain

The longstanding mystery of how selective hearing works – how people can tune in to a single speaker while tuning out their crowded, noisy environs – is solved this week in the journal Nature by two scientists from the University of California, San Francisco (UCSF).

Psychologists have known for decades about the so-called “cocktail party effect,” a name that evokes the Mad Men era in which it was coined. It is the remarkable human ability to focus on a single speaker in virtually any environment (a classroom, sporting event or coffee bar), even if that person’s voice is seemingly drowned out by a jabbering crowd.

To understand how selective hearing works in the brain, UCSF neurosurgeon Edward Chang, MD, a faculty member in the UCSF Department of Neurological Surgery and the Keck Center for Integrative Neuroscience, and UCSF postdoctoral fellow Nima Mesgarani, PhD, worked with three patients who were undergoing brain surgery for severe epilepsy.

Part of this surgery involves pinpointing the parts of the brain responsible for the patients’ disabling seizures. The UCSF epilepsy team finds those locales by mapping the brain’s activity over a week, using a thin sheet of up to 256 electrodes placed under the skull on the brain’s outer surface, or cortex. These electrodes record activity in the temporal lobe, home to the auditory cortex.

UCSF is one of the few leading academic epilepsy centers where these advanced intracranial recordings are done, and, Chang said, the ability to safely record from the brain itself provides unique opportunities to advance our fundamental knowledge of how the brain works.

“The combination of high-resolution brain recordings and powerful decoding algorithms opens a window into the subjective experience of the mind that we’ve never seen before,” Chang said.

What Is Selective Hearing?

Selective hearing describes the tendency of some people to ignore things they don’t want to hear. It is not a physiological condition: the listener physically hears the words, but the mind chooses not to acknowledge them. In many cases the conscious mind does not appear to register the information at all, so it differs from actively ignoring speech; rather, it is a kind of selective inattention that may occur consciously or subconsciously.

Classically, selective hearing is an attribute associated with men. The standard example is a woman asking her male partner whether he wants to go to the opera that night, only to have him seemingly ignore her. However, when she mentions something of interest to him, such as football or beer, he immediately responds as though he had been listening all along. Although these examples may seem facetious, they are in fact not uncommon in everyday interactions between people of all genders and relationships.

In the experiments, patients listened to two speech samples played to them simultaneously in which different phrases were spoken by different speakers. They were asked to identify the words they heard spoken by one of the two speakers.

The authors then applied new decoding methods to “reconstruct” what the subjects heard by analyzing their brain activity patterns. Strikingly, they found that neural responses in the auditory cortex reflected only the speech of the targeted speaker. Their decoding algorithm could predict which speaker, and even which specific words, the subject was listening to based on those neural patterns. In other words, it could tell when the listener’s attention strayed to another speaker.

“The algorithm worked so well that we could predict not only the correct responses, but also even when they paid attention to the wrong word,” Chang said.
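To make the idea of “reconstructing” speech from brain activity more concrete, here is a minimal Python sketch of one common stimulus-reconstruction approach. It assumes a simple linear (ridge-regression) mapping from time-lagged electrode recordings to a speech spectrogram; the function names, the lag scheme and the use of scikit-learn are illustrative assumptions, not the authors’ published method. A model trained on single-speaker trials is applied to the two-speaker mixture, and the reconstruction is compared against each clean speaker’s spectrogram to infer which speaker the listener was attending to.

```python
# Minimal sketch of attended-speaker decoding via stimulus reconstruction.
# Assumptions (not from the paper): neural recordings and spectrograms are
# already aligned at a common sampling rate, and a ridge regression over
# time-lagged electrode signals stands in for the actual decoding model.
import numpy as np
from sklearn.linear_model import Ridge


def add_lags(neural, n_lags):
    """Stack time-lagged copies of the neural data (time x electrodes)."""
    t, e = neural.shape
    lagged = np.zeros((t, e * n_lags))
    for lag in range(n_lags):
        lagged[lag:, lag * e:(lag + 1) * e] = neural[:t - lag]
    return lagged


def fit_reconstructor(neural, spectrogram, n_lags=10, alpha=1.0):
    """Learn a linear map from lagged neural activity to a speech spectrogram."""
    model = Ridge(alpha=alpha)
    model.fit(add_lags(neural, n_lags), spectrogram)
    return model


def attended_speaker(model, neural, spec_a, spec_b, n_lags=10):
    """Reconstruct the heard spectrogram and pick the speaker it matches best."""
    recon = model.predict(add_lags(neural, n_lags))
    corr_a = np.corrcoef(recon.ravel(), spec_a.ravel())[0, 1]
    corr_b = np.corrcoef(recon.ravel(), spec_b.ravel())[0, 1]
    return "speaker A" if corr_a > corr_b else "speaker B"


if __name__ == "__main__":
    # Synthetic stand-in data: 500 time bins, 256 electrodes, 32 spectrogram bands.
    rng = np.random.default_rng(0)
    t, n_elec, n_freq = 500, 256, 32
    model = fit_reconstructor(rng.standard_normal((t, n_elec)),
                              rng.standard_normal((t, n_freq)))
    print(attended_speaker(model,
                           rng.standard_normal((t, n_elec)),
                           rng.standard_normal((t, n_freq)),
                           rng.standard_normal((t, n_freq))))
```

The design choice here is simply to classify attention by correlation with candidate stimuli; whatever decoder is used, the key point from the study is that the reconstruction tracks the attended speaker rather than the full acoustic mixture.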

Speech Recognition by the Human Brain and Machines

The new findings show that the representation of speech in the cortex does not simply mirror the entire external acoustic environment, but instead captures just what we really want or need to hear.

Selective hearing may also describe an unrelated pattern of interaction, in which a person chooses to hear only what they wish to hear in a conversation. This is common when someone asks a question with the goal of achieving a desired end rather than of actually understanding a situation. For example, if Jane asks John to bring some milk by her house and John refuses, Jane may ask, “Why not?” In this case Jane is asking not with the intent of understanding why John doesn’t wish to bring her milk, but rather to pick apart his rationale and try to get him to buy the milk.

As a result, no matter what John responds, Jane will hear only the point she can respond to. If he says, “It will take too long,” she will respond, “It will only take five minutes.” If he responds, “I have too much to do today,” she may respond, “I would do it for you.” In every instance she is using selective hearing: she is not truly listening to understand what John is saying, but only gathering enough superficial information to refute his point.

The findings represent a major advance in understanding how the human brain processes language, with immediate implications for the study of impairment during aging, attention deficit disorder, autism and language-learning disorders.

In addition, Chang, who is also co-director of the Center for Neural Engineering and Prostheses at UC Berkeley and UCSF, said the technology may someday be used in neuroprosthetic devices to decode the intentions and thoughts of paralyzed patients who cannot communicate.


Provided by ArmMed Media