How selective hearing works in the brain

Revealing how our brains are wired to favor some auditory cues over others may even inspire new approaches to automating and improving how voice-activated electronic interfaces filter sounds to detect verbal commands.

How the brain can so effectively focus on a single voice is a problem of keen interest to the companies that make consumer technologies because of the tremendous future market for all kinds of electronic devices with voice-activated interfaces. While the voice recognition technologies that enable interfaces such as Apple’s Siri have come a long way in the last few years, they are nowhere near as sophisticated as the human speech system.

An average person can walk into a noisy room and have a private conversation with relative ease, as if all the other voices in the room were muted. In fact, said Mesgarani, an engineer with a background in automatic speech recognition research, the engineering required to separate a single intelligible voice from a cacophony of speakers and background noise is a surprisingly difficult problem.

Speech recognition, he said, is “something that humans are remarkably good at, but it turns out that machine emulation of this human ability is extremely difficult.”
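To see why, consider a minimal sketch of one classical engineering approach to this “cocktail party” problem: blind source separation via independent component analysis. This is not the method used in the study; the synthetic signals, mixing matrix, and use of scikit-learn’s FastICA here are illustrative assumptions only.

    import numpy as np
    from sklearn.decomposition import FastICA

    # Two synthetic "voices": signals with different temporal structure.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 8, 4000)
    voice_a = np.sin(2 * np.pi * 3 * t)            # hypothetical speaker A
    voice_b = np.sign(np.sin(2 * np.pi * 5 * t))   # hypothetical speaker B
    sources = np.c_[voice_a, voice_b]
    sources += 0.1 * rng.standard_normal(sources.shape)  # background noise

    # Simulate two microphones, each hearing a different mix of both voices.
    mixing = np.array([[1.0, 0.6],
                       [0.5, 1.0]])
    recordings = sources @ mixing.T

    # ICA attempts to recover statistically independent sources from the mixes.
    ica = FastICA(n_components=2, random_state=0)
    recovered = ica.fit_transform(recordings)
    print(recovered.shape)  # (4000, 2): one column per recovered "voice"

Even in this idealized setup, ICA typically needs at least as many microphones as speakers and recovers the voices only up to reordering and rescaling, whereas the brain manages the task in real time from the mixtures reaching two ears, which underscores Mesgarani’s point about how hard machine emulation is.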

###

The article, “Selective cortical representation of attended speaker in multi-talker speech perception” by Nima Mesgarani and Edward F. Chang appears in the April 19, 2012 issue of the journal Nature.

This work was funded by the National Institutes of Health and the Esther A. and Joseph Klingenstein Foundation.

UCSF is a leading university dedicated to promoting health worldwide through advanced biomedical research, graduate-level education in the life sciences and health professions, and excellence in patient care.



Jason Socrates Bardi
415-502-6397
University of California - San Francisco

Provided by ArmMed Media