Hearing Through Noise: Study Looks At The Neuronal Mechanism Of Temporal Fine Structure
Just picture the scene: you’re at a cocktail party, talking to someone you would like to get to know better, but the background noise is making it hard to concentrate. Luckily, humans are remarkably good at listening to one speaker while many other people are talking loudly at the same time. This so-called cocktail-party phenomenon rests on the ability of the human auditory system to decompose the acoustic world into discrete objects of perception.
It was originally believed that the major acoustic cue the auditory system uses to solve this task is directional information about the sound source. Yet although localising different sound sources with two ears does improve performance, the task can also be solved monaurally, for example in telephone conversations, where no directional information is available.
Scientists led by Holger Schulze at the Leibniz-Institute for Neurobiology in Magdeburg, and the Universities of Ulm, Newcastle and Erlangen have now found a neuronal mechanism in the auditory system that is able to solve the task based on the analysis of the temporal fine structure of the acoustic scene. The findings, published in this week’s PLoS ONE, show that different speakers have different temporal fine structure in their voiced speech and that such signals are represented in different areas of the auditory cortex according to this different time structure.
By means of a so-called winner-take-all algorithm, one of these representations gains control over all the others. This means that only the voice of the speaker to whom you wish to listen remains represented in the auditory cortex and can thus be followed over time. This predominance of one speaker’s representation over those of all other speakers is achieved by long-range inhibitory interactions, which Schulze and colleagues describe for the first time using functional neurophysiological, pharmacological and anatomical methods.
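The winner-take-all principle described above can be illustrated with a minimal numerical sketch. The function below is purely illustrative and not the model used in the study: each unit stands for the cortical representation of one speaker, and every unit is suppressed in proportion to the summed activity of all the others (global inhibition), so after a few iterations only the strongest representation survives. The `inhibition` strength and number of `steps` are arbitrary assumed parameters.

```python
def winner_take_all(inputs, inhibition=0.2, steps=50):
    """Iterate global inhibitory competition among units.

    Each unit's activity is reduced in proportion to the total
    activity of all other units; activities are clipped at zero.
    The unit with the strongest input ends up as the sole survivor.
    """
    acts = list(inputs)
    for _ in range(steps):
        total = sum(acts)  # use the same snapshot for all units (synchronous update)
        acts = [max(0.0, a - inhibition * (total - a)) for a in acts]
    return acts

# Three "speakers" with slightly different representation strengths:
final = winner_take_all([1.0, 0.9, 0.8])
# the strongest representation remains active; the other two are silenced
```

Running this, the first unit retains a positive activity while the weaker two are driven to zero, mirroring how, in the proposed mechanism, the attended speaker’s representation suppresses its competitors.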
The findings provide a deeper understanding of how the parcellation of sensory input into perceptually distinct objects is realised in the brain, and may help to improve the auditory experience of hearing aid wearers at cocktail parties.
Citation: Kurt S, Deutscher A, Crook JM, Ohl FW, Budinger E, et al. (2008) Auditory Cortical Contrast Enhancing by Global Winner-Take-All Inhibitory Interactions. PLoS ONE.