Publications catalog - books



Open Access Title

Physiology, Psychoacoustics and Cognition in Normal and Impaired Hearing

Pim van Dijk; Deniz Başkent; Etienne Gaudrain; Emile de Kleine; Anita Wagner; Cris Lanting (eds.)

Abstract/description – provided by the publisher

Not available.

Keywords – provided by the publisher

Neurosciences; Otorhinolaryngology

Availability

Detected institution: not required
Year of publication: 2016
Browse / download: SpringerLink (open access)

Information

Resource type:

books

Print ISBN

978-3-319-25472-2

Electronic ISBN

978-3-319-25474-6

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© The Editor(s) (if applicable) and the Author(s) 2016

Subject coverage

Table of contents

Suppression Measured from Chinchilla Auditory-Nerve-Fiber Responses Following Noise-Induced Hearing Loss: Adaptive-Tracking and Systems-Identification Approaches

Mark Sayles; Michael K. Walls; Michael G. Heinz

The compressive nonlinearity of cochlear signal transduction, reflecting outer-hair-cell function, manifests as suppressive spectral interactions, e.g., two-tone suppression. Moreover, for broadband sounds, there are multiple interactions between frequency components. These frequency-dependent nonlinearities are important for the neural coding of complex sounds, such as speech. Acoustic-trauma-induced outer-hair-cell damage is associated with loss of nonlinearity, which auditory prostheses attempt to restore with, e.g., “multi-channel dynamic compression” algorithms.

Neurophysiological data on suppression in hearing-impaired (HI) mammals are limited. We present data on firing-rate suppression measured in auditory-nerve-fiber responses in a chinchilla model of noise-induced hearing loss, and in normal-hearing (NH) controls at equal sensation level. HI animals had elevated single-fiber excitatory thresholds (by ~20–40 dB), broadened frequency tuning, and reduced-magnitude distortion-product otoacoustic emissions, consistent with mixed inner- and outer-hair-cell pathology. We characterized suppression using two approaches: adaptive tracking of two-tone-suppression threshold (62 NH and 35 HI fibers), and Wiener-kernel analyses of responses to broadband noise (91 NH and 148 HI fibers). Suppression-threshold tuning curves showed sensitive low-side suppression for NH and HI animals. High-side suppression thresholds were elevated in HI animals, to the same extent as excitatory thresholds. We factored second-order Wiener kernels into excitatory and suppressive sub-kernels to quantify the relative strength of suppression. We found a small decrease in suppression in HI fibers, which correlated with broadened tuning. These data will help guide novel amplification strategies, particularly for complex listening situations (e.g., speech in noise), in which current hearing aids struggle to restore intelligibility.

Pp. 285-295
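As a rough illustration of the kernel analysis described above, the sketch below estimates a second-order Wiener kernel from the spike-triggered covariance of a white-noise stimulus and factors it into excitatory and suppressive sub-kernels by eigendecomposition. It is a minimal sketch with synthetic data; the function name, lag count, and eigenvalue-mass measure of suppression strength are assumptions, not the authors' implementation.

```python
import numpy as np

def factor_wiener_kernel(stimulus, spike_idx, n_lags):
    """Estimate a second-order Wiener kernel via spike-triggered
    covariance and split it into excitatory/suppressive sub-kernels."""
    # Stimulus snippets preceding each spike (rows = spikes, cols = lags).
    snips = np.array([stimulus[i - n_lags:i] for i in spike_idx if i >= n_lags])
    # Second-order kernel ~ spike-triggered covariance minus the prior
    # covariance (diagonal for white Gaussian noise), up to scale factors.
    k2 = np.cov(snips, rowvar=False) - np.var(stimulus) * np.eye(n_lags)
    # Eigendecomposition: positive eigenvalues span the excitatory
    # sub-kernel, negative eigenvalues the suppressive one.
    w, v = np.linalg.eigh(k2)
    exc = (v[:, w > 0] * w[w > 0]) @ v[:, w > 0].T
    sup = (v[:, w < 0] * w[w < 0]) @ v[:, w < 0].T
    # One possible index of relative suppression strength:
    # negative vs. positive eigenvalue mass.
    strength = np.abs(w[w < 0]).sum() / w[w > 0].sum()
    return exc, sup, strength

# Toy demo with white noise and random "spike" times (illustration only).
rng = np.random.default_rng(0)
noise = rng.standard_normal(100_000)
spikes = rng.integers(50, 100_000, size=2_000)
exc_k, sup_k, s = factor_wiener_kernel(noise, spikes, n_lags=50)
print(f"relative suppression strength: {s:.2f}")
```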

Does Signal Degradation Affect Top–Down Processing of Speech?

Anita Wagner; Carina Pals; Charlotte M. de Blecourt; Anastasios Sarampalis; Deniz Başkent

Speech perception is formed based on both the acoustic signal and listeners’ knowledge of the world and semantic context. Access to semantic information can facilitate interpretation of degraded speech, such as speech in background noise or the speech signal transmitted via cochlear implants (CIs). This paper focuses on the latter, and investigates the time course of understanding words and how sentential context reduces listeners’ dependency on the acoustic signal, for natural speech and for speech degraded via an acoustic CI simulation.

In an eye-tracking experiment we combined recordings of listeners’ gaze fixations with pupillometry, to capture the effects of semantic information on both the time course and the effort of speech processing. Normal-hearing listeners were presented with sentences with or without a semantically constraining verb (e.g., crawl) preceding the target (baby), and their ocular responses were recorded to four pictures: the target, a phonological competitor (bay), a semantic distractor (worm), and an unrelated distractor.

The results show that in natural speech, listeners’ gazes reflect their uptake of acoustic information, and integration of preceding semantic context. Degradation of the signal leads to a later disambiguation of phonologically similar words, and to a delay in integration of semantic information. Complementary to this, the pupil dilation data show that early semantic integration reduces the effort in disambiguating phonologically similar words. Processing degraded speech comes with increased effort due to the impoverished nature of the signal. Delayed integration of semantic information further constrains listeners’ ability to compensate for inaudible signals.

Pp. 297-306
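For readers unfamiliar with visual-world analyses like the one above, the sketch below computes fixation-proportion curves over the four picture types from coded gaze samples. It is a generic sketch, not the authors' pipeline; the input coding, bin size, and sampling rate are assumptions.

```python
import numpy as np

def fixation_proportions(roi_samples, n_rois=4, bin_ms=50, sample_ms=2):
    """Proportion of trials fixating each interest area over time.

    roi_samples: (n_trials, n_samples) int array; entry r means the gaze
    fell on area r at that sample (0 = target, 1 = phonological
    competitor, 2 = semantic distractor, 3 = unrelated distractor).
    """
    n_trials, n_samples = roi_samples.shape
    per_bin = bin_ms // sample_ms
    n_bins = n_samples // per_bin
    props = np.zeros((n_rois, n_bins))
    for b in range(n_bins):
        window = roi_samples[:, b * per_bin:(b + 1) * per_bin]
        for r in range(n_rois):
            props[r, b] = np.mean(window == r)
    return props  # columns sum to 1: every sample is in exactly one area

# Toy demo: 30 trials, 1 s of gaze at 500 Hz, random looking behaviour.
rng = np.random.default_rng(1)
demo = rng.integers(0, 4, size=(30, 500))
curves = fixation_proportions(demo)
print(curves.shape)  # (4, 20): 4 interest areas x 20 bins of 50 ms
```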

The Effect of Peripheral Compression on Syllable Perception Measured with a Hearing Impairment Simulator

Toshie Matsui; Toshio Irino; Misaki Nagae; Hideki Kawahara; Roy D. Patterson

Hearing-impaired (HI) people often have difficulty understanding speech in multi-speaker or noisy environments. With HI listeners, however, it is often difficult to specify which stage, or stages, of auditory processing are responsible for the deficit. There might also be cognitive problems associated with age. In this paper, a HI simulator, based on the dynamic, compressive gammachirp (dcGC) filterbank, was used to measure the effect of a loss of compression on syllable recognition. The HI simulator can counteract the cochlear compression in normal-hearing (NH) listeners and thereby isolate the deficit associated with a loss of compression in speech perception. Listeners were required to identify the second syllable in a three-syllable “nonsense word”, and, between trials, either the relative level of the second syllable or the level of the entire sequence was varied. The difference between the Speech Reception Thresholds (SRTs) in these two conditions reveals the effect of compression on speech perception. The HI simulator adjusted a NH listener’s compression to that of the “average 80-year-old”, with either normal compression or complete loss of compression. A reference condition was included where the HI simulator applied a simple 30-dB reduction in stimulus level. The results show that the loss of compression has its largest effect on recognition when the second syllable is attenuated relative to the first and third syllables. This is probably because the internal level of the second syllable is attenuated proportionately more when there is a loss of compression.

Pp. 307-314
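The closing argument can be made concrete with a small numerical sketch: under a compressive input-output function, an external attenuation of the second syllable shrinks at the internal level, whereas a linear (compression-lost) system passes it through in full. The knee point and slope below are illustrative assumptions, not the dcGC-simulator parameters.

```python
# Broken-stick input-output function: linear up to a knee point, then
# compressive growth (slope in dB/dB). Values here are assumptions.
def internal_level(db_in, knee=30.0, slope=0.25):
    """Internal level (dB) for a given input level (dB)."""
    return db_in if db_in <= knee else knee + slope * (db_in - knee)

syllable_level, attenuation = 70.0, 12.0   # second syllable down 12 dB

with_compression = (internal_level(syllable_level)
                    - internal_level(syllable_level - attenuation))
without_compression = attenuation          # linear: passes through fully

print(f"internal attenuation, normal compression: {with_compression:.1f} dB")
print(f"internal attenuation, compression lost:   {without_compression:.1f} dB")
# -> 3.0 dB vs 12.0 dB: with compression lost, the second syllable is
#    attenuated proportionately more, as the abstract argues.
```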

Towards Objective Measures of Functional Hearing Abilities

Hamish Innes-Brown; Renee Tsongas; Jeremy Marozeau; Colette McKay

People with impaired hearing often have difficulties hearing sounds in a noisy background. This problem is partially a result of the auditory system’s reduced capacity to process temporal information in the sound signal. In this study we examined the relationships between perceptual sensitivity to temporal fine structure (TFS) cues, brainstem encoding of complex harmonic and amplitude-modulated sounds, and the ability to understand speech in noise. Understanding these links will allow the development of an objective measure that could be used to detect changes in functional hearing before the onset of permanent threshold shifts.

We measured TFS sensitivity and speech-in-noise performance (QuickSIN) behaviourally in 34 normally hearing adults with ages ranging from 18 to 63 years. We recorded brainstem responses to complex harmonic sounds and to a 4000-Hz carrier signal modulated at 110 Hz. We performed cross-correlations between the stimulus waveforms and scalp-recorded brainstem responses to generate a simple measure of stimulus encoding accuracy, and correlated these measures with age, TFS sensitivity, and speech-in-noise performance.

Speech-in-noise performance was positively correlated with TFS sensitivity, and negatively correlated with age. TFS sensitivity was also positively correlated with stimulus encoding accuracy for the complex harmonic stimulus, while increasing age was associated with lower stimulus encoding accuracy for the modulated tone stimulus.

The results show that even in a group of people with normal hearing, increasing age was associated with reduced speech understanding, reduced TFS sensitivity, and reduced stimulus encoding accuracy (for the modulated tone stimulus). People with good TFS sensitivity also generally had more faithful brainstem encoding of a complex harmonic tone.

Pp. 315-325
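The “simple measure of stimulus encoding accuracy” described above can be sketched as the peak normalized cross-correlation between the stimulus waveform and the scalp-recorded response within a window of plausible neural lags. The function below is a generic sketch with synthetic data; the lag window and sampling rate are assumptions.

```python
import numpy as np

def encoding_accuracy(stimulus, response, fs, max_lag_ms=15):
    """Peak stimulus-response correlation over candidate neural lags."""
    stimulus = (stimulus - stimulus.mean()) / stimulus.std()
    response = (response - response.mean()) / response.std()
    n, max_lag = len(stimulus), int(fs * max_lag_ms / 1000)
    corrs = [np.corrcoef(stimulus[:n - lag], response[lag:n])[0, 1]
             for lag in range(max_lag + 1)]
    best = int(np.argmax(corrs))
    return corrs[best], 1000 * best / fs   # (accuracy, lag in ms)

# Toy demo: a 110-Hz-modulated 4000-Hz carrier and a delayed, noisy copy
# standing in for the brainstem response.
fs = 16_000
t = np.arange(0, 0.5, 1 / fs)
stim = (1 + np.sin(2 * np.pi * 110 * t)) * np.sin(2 * np.pi * 4000 * t)
resp = np.roll(stim, int(0.007 * fs))
resp += np.random.default_rng(2).normal(0, 1, len(t))
acc, lag = encoding_accuracy(stim, resp, fs)
print(f"encoding accuracy r = {acc:.2f} at {lag:.1f} ms lag")
```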

Connectivity in Language Areas of the Brain in Cochlear Implant Users as Revealed by fNIRS

Colette M. McKay; Adnan Shah; Abd-Krim Seghouane; Xin Zhou; William Cross; Ruth Litovsky

Many studies, using a variety of imaging techniques, have shown that deafness induces functional plasticity in the brain of adults with late-onset deafness, and in children changes the way the auditory brain develops. Cross-modal plasticity refers to evidence that stimuli of one modality (e.g. vision) activate neural regions devoted to a different modality (e.g. hearing) that are not normally activated by those stimuli. Other studies have shown that multimodal brain networks (such as those involved in language comprehension, and the default mode network) are altered by deafness, as evidenced by changes in patterns of activation or connectivity within the networks. In this paper, we summarise what is already known about brain plasticity due to deafness and propose that functional near-infrared spectroscopy (fNIRS) is an imaging method with the potential to provide prognostic and diagnostic information for cochlear implant users. Currently, patient history factors account for only 10 % of the variation in post-implantation speech understanding, and very few post-implantation behavioural measures of hearing ability correlate with speech understanding. As a non-invasive, inexpensive and user-friendly imaging method, fNIRS provides an opportunity to study brain function both pre- and post-implantation. Here, we explain the principle of fNIRS measurements and illustrate its use in studying brain network connectivity and function with example data.

Pp. 327-335
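The principle behind fNIRS measurements that the chapter explains can be summarized by the modified Beer-Lambert law: optical-density changes at two wavelengths form a 2x2 linear system whose solution gives the changes in oxy- and deoxyhaemoglobin concentration. The sketch below illustrates that conversion; the extinction coefficients, source-detector distance, and pathlength factor are illustrative assumptions.

```python
import numpy as np

# Modified Beer-Lambert law:
#   dOD(lambda) = (e_HbO(lambda)*dHbO + e_HbR(lambda)*dHbR) * d * DPF
# Coefficient values below are placeholders for illustration only.
E = np.array([[1.49, 3.84],    # extinction at ~760 nm: [HbO, HbR]
              [2.53, 1.80]])   # extinction at ~850 nm: [HbO, HbR]
d, dpf = 3.0, 6.0              # source-detector distance (cm), pathlength factor

def concentration_changes(d_od_760, d_od_850):
    """Solve the two-wavelength system for (dHbO, dHbR)."""
    d_od = np.array([d_od_760, d_od_850]) / (d * dpf)
    return np.linalg.solve(E, d_od)

d_hbo, d_hbr = concentration_changes(0.012, 0.020)
print(f"dHbO = {d_hbo:+.4f}, dHbR = {d_hbr:+.4f} (arbitrary units)")
```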

Isolating Neural Indices of Continuous Speech Processing at the Phonetic Level

Giovanni M. Di Liberto; Edmund C. Lalor

The human ability to understand speech across an enormous range of listening conditions is underpinned by a hierarchical auditory processing system whose successive stages process increasingly complex attributes of the acoustic input. In order to produce a categorical perception of words and phonemes, it has been suggested that, while earlier areas of the auditory system undoubtedly respond to acoustic differences in speech tokens, later areas must exhibit consistent neural responses to those tokens. Neural indices of such hierarchical processing in the context of continuous speech have been identified using low-frequency scalp-recorded electroencephalography (EEG) data. The relationship between continuous speech and its associated neural responses has been shown to be best described when that speech is represented using both its low-level spectrotemporal information and the categorical labelling of its phonetic features (Di Liberto et al., Curr Biol 25(19):2457–2465, 2015). While the phonetic features have been shown to carry extra information not captured by the spectrotemporal representation of speech, the causes of this EEG activity remain unclear. This study aims to demonstrate a framework for examining speech-specific processing and for disentangling high-level neural activity related to intelligibility from low-level activity in response to spectrotemporal fluctuations of speech. Preliminary results suggest that a neural measure of processing at the phonetic level can be isolated.

Pp. 337-345
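The forward-modelling approach referenced above (Di Liberto et al., 2015) maps time-lagged speech features to EEG with regularized linear regression; comparing cross-validated prediction accuracy with and without the phonetic-feature channels quantifies the extra information they carry. The sketch below shows the core ridge-regression step on synthetic data; dimensions and the regularization constant are assumptions, not the authors' code.

```python
import numpy as np

def fit_trf(features, eeg, n_lags, lam=1e2):
    """Fit a temporal response function (TRF) for one EEG channel.

    features: (n_times, n_features) speech representation, e.g.
    spectrogram bands stacked with phonetic-feature labels.
    Returns weights of shape (n_lags, n_features).
    """
    n_times, n_feat = features.shape
    # Lagged design matrix: row t holds features at t, t-1, ..., t-n_lags+1.
    X = np.zeros((n_times, n_lags * n_feat))
    for lag in range(n_lags):
        X[lag:, lag * n_feat:(lag + 1) * n_feat] = features[:n_times - lag]
    # Ridge regression: w = (X'X + lam*I)^-1 X'y.
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ eeg)
    return w.reshape(n_lags, n_feat)

# Toy demo: 8 "spectrogram" bands + 4 "phonetic" channels, 5000 samples.
rng = np.random.default_rng(3)
feats = rng.standard_normal((5000, 12))
eeg = np.convolve(feats[:, 0], np.hanning(20), mode="same")
eeg += rng.standard_normal(5000)
trf = fit_trf(feats, eeg, n_lags=40)
print(trf.shape)  # (40, 12): a 40-lag response for each feature channel
```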

Entracking as a Brain Stem Code for Pitch: The Butte Hypothesis

Philip X Joris

The basic nature of pitch is much debated. A robust code for pitch exists in the auditory nerve in the form of an across-fiber pooled interspike interval (ISI) distribution, which resembles the stimulus autocorrelation. An unsolved question is how this representation can be “read out” by the brain. A new view is proposed in which a known brain-stem property plays a key role in the coding of periodicity, which I refer to as “entracking”, a contraction of “entrained phase-locking”. It is proposed that a scalar rather than a vector code of periodicity exists, by virtue of coincidence detectors that code the dominant ISI directly into spike rate through entracking. Perfect entracking means that a neuron fires one spike per stimulus-waveform repetition period, so that firing rate equals the repetition frequency. Key properties are invariance with SPL and generalization across stimuli. The main limitation of this code is the upper limit of firing (~500 Hz). It is proposed that entracking provides a periodicity tag which is superimposed on a tonotopic analysis: at low SPLs and fundamental frequencies > 500 Hz, a spectral or place mechanism codes for pitch. With increasing SPL the place code degrades but entracking improves, occurring first in neurons with low thresholds for the spectral components present. The prediction is that populations of entracking neurons, extended across characteristic frequency, form plateaus (“buttes”) of firing rate tied to periodicity.

Pp. 347-354
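The auditory-nerve representation the chapter starts from, and the end point of perfect entracking, can both be sketched in a few lines: pool interspike intervals across fibers and take the dominant interval as the stimulus period (a perfectly entracking neuron would fire at exactly that repetition frequency). The synthetic spike trains and all parameter values below are assumptions for illustration.

```python
import numpy as np

def pooled_isi_pitch(spike_trains, fs, max_period_ms=20):
    """Pitch estimate from the across-fiber pooled ISI distribution."""
    isis = np.concatenate([np.diff(st) for st in spike_trains])
    max_period = int(fs * max_period_ms / 1000)
    isis = np.round(isis[(isis > 0) & (isis <= max_period)]).astype(int)
    hist = np.bincount(isis, minlength=max_period + 1)
    hist[:int(fs / 1000)] = 0          # ignore intervals shorter than 1 ms
    dominant = np.argmax(hist)         # dominant ISI, in samples
    return fs / dominant               # period -> repetition frequency (Hz)

# Toy demo: 50 fibers phase-locked, with jitter, to a 200-Hz stimulus;
# each fiber skips a random number of cycles between spikes.
fs, f0 = 20_000, 200.0
period = fs / f0
rng = np.random.default_rng(4)
trains = []
for _ in range(50):
    cycles = rng.integers(1, 4, size=100)             # 1-3 periods per ISI
    isis = cycles * period + rng.normal(0, 2, size=100)
    trains.append(np.cumsum(isis))
print(f"pooled-ISI pitch estimate: {pooled_isi_pitch(trains, fs):.1f} Hz")
```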

Can Temporal Fine Structure and Temporal Envelope be Considered Independently for Pitch Perception?

Nicolas Grimault

In psychoacoustics, work on pitch perception attempts to distinguish between envelope and fine-structure cues, which are generally viewed as independent and are separated using a Hilbert transform. To empirically distinguish between envelope and fine-structure cues in pitch perception experiments, a dedicated signal has been proposed: an unresolved harmonic complex tone with all harmonics shifted by the same number of Hz. As the frequency distance between adjacent components is regular and identical to that in the original harmonic complex tone, such a signal has the same envelope but a different fine structure, so any perceptual difference between these signals is interpreted as a fine-structure-based percept. Here, as illustrated by very basic simulations, I suggest that this generally accepted orthogonal point of view could be a conceptual error. In fact, neither the fine structure nor the envelope needs to be fully encoded to explain pitch perception. Sufficient information is conveyed by the peaks in the fine structure that are located near a maximum of the envelope. Envelope and fine structure could then be in perpetual interaction, and the pitch would be conveyed by “the fine structure under the envelope”. Moreover, as the temporal delay between the peaks of interest is rather longer than the delay between two adjacent peaks of the fine structure, such a mechanism would be much less constrained by the phase-locking limitation of the auditory system. Several data sets from the literature are discussed from this new conceptual point of view.

Pp. 355-362
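The stimulus construction and Hilbert-based decomposition discussed above are easy to reproduce: shifting every component of an unresolved harmonic complex by the same number of Hz leaves the envelope periodicity unchanged while altering the fine structure. The sketch below is illustrative only; the fundamental, shift, and harmonic range are assumptions.

```python
import numpy as np
from scipy.signal import hilbert

fs, f0, shift = 16_000, 100.0, 25.0    # Hz; shift applied to all components
t = np.arange(0, 0.2, 1 / fs)

def complex_tone(freqs):
    return sum(np.cos(2 * np.pi * f * t) for f in freqs)

harmonics = [f0 * k for k in range(10, 16)]    # unresolved region
shifted = [f + shift for f in harmonics]       # same spacing, shifted

for name, sig in [("harmonic", complex_tone(harmonics)),
                  ("shifted ", complex_tone(shifted))]:
    analytic = hilbert(sig)
    envelope = np.abs(analytic)                # Hilbert envelope
    tfs = np.cos(np.angle(analytic))           # temporal fine structure
    env_hz = np.argmax(np.abs(np.fft.rfft(envelope - envelope.mean()))) * fs / len(t)
    tfs_hz = np.argmax(np.abs(np.fft.rfft(tfs - tfs.mean()))) * fs / len(t)
    # The envelope periodicity stays at f0 for both signals; the dominant
    # fine-structure component moves with the shift.
    print(f"{name}: envelope periodicity {env_hz:.0f} Hz, "
          f"dominant TFS component {tfs_hz:.0f} Hz")
```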

Locating Melody Processing Activity in Auditory Cortex with Magnetoencephalography

Roy D. Patterson; Martin Andermann; Stefan Uppenkamp; André Rupp

This paper describes a technique for isolating the brain activity associated with melodic pitch processing. The magnetoencephalographic (MEG) response to a four-note diatonic melody built of French horn notes is contrasted with the response to a control sequence containing four identical, “tonic” notes. The transient response (TR) to the first note of each bar is dominated by energy-onset activity; the melody processing is observed by contrasting the TRs to the remaining melodic and tonic notes of the bar (notes 2–4). These have a uniform shape within a tonic or melodic sequence, which makes it possible to fit a 4-dipole model and show that there are two sources in each hemisphere: a melody source in the anterior part of Heschl’s gyrus (HG) and an onset source about 10 mm posterior to it, in planum temporale (PT). The N1m to the initial note has a short latency and the same magnitude for the tonic and the melodic sequences. The melody activity is distinguished by the relative sizes of the N1m and P2m components of the TRs to notes 2–4. In the anterior source, a given note elicits a much larger N1m-P2m complex with a shorter latency when it is part of a melodic sequence. This study shows how to isolate the N1m energy-onset response in PT and produce a clean melody response in the anterior part of auditory cortex (HG).

Pp. 363-369
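A simple version of the TR measurement used above (N1m and P2m amplitudes and latencies from an averaged response) can be sketched as peak-picking in fixed windows. The windows and the synthetic waveform below are assumptions for illustration, not the authors' source analysis.

```python
import numpy as np

def n1m_p2m(tr, fs, t0=0.0):
    """N1m/P2m latencies and peak-to-peak amplitude of an averaged TR.
    Search windows (~70-130 ms and ~150-250 ms) are assumptions."""
    def window(lo, hi):
        return slice(int((lo - t0) * fs), int((hi - t0) * fs))
    n1_win, p2_win = window(0.07, 0.13), window(0.15, 0.25)
    i_n1 = np.argmin(tr[n1_win]) + n1_win.start   # N1m: negative deflection
    i_p2 = np.argmax(tr[p2_win]) + p2_win.start   # P2m: positive deflection
    return tr[i_p2] - tr[i_n1], (i_n1 / fs + t0, i_p2 / fs + t0)

# Toy demo: a synthetic TR with an N1m trough and a P2m peak.
fs = 1000
t = np.arange(0, 0.4, 1 / fs)
tr = -np.exp(-((t - 0.10) / 0.02) ** 2) + 0.6 * np.exp(-((t - 0.19) / 0.03) ** 2)
amp, (t_n1, t_p2) = n1m_p2m(tr, fs)
print(f"N1m at {t_n1 * 1000:.0f} ms, P2m at {t_p2 * 1000:.0f} ms, "
      f"N1m-P2m amplitude {amp:.2f} (a.u.)")
```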

Studying Effects of Transcranial Alternating Current Stimulation on Hearing and Auditory Scene Analysis

Lars Riecke

Recent studies have shown that perceptual detection of near-threshold auditory events may depend on the relative timing of the event and ongoing brain oscillations. Furthermore, transcranial alternating current stimulation (tACS), a non-invasive and silent brain stimulation technique, can entrain cortical alpha oscillations and thereby provide some experimental control over their timing. The present research investigates the potential of delta/theta-tACS to modulate hearing and auditory scene analysis. Detection of near-threshold auditory stimuli, which are modulated at 4 Hz and presented at various moments (phase lags) during ongoing tACS (two synchronous 4-Hz alternating currents applied transcranially to the two cerebral hemispheres), is measured in silence or in a masker. Results indicate that performance fluctuates as a function of phase lag and that these fluctuations are best explained by a sinusoid at the tACS frequency. This suggests that tACS may amplify/attenuate sounds that are temporally coherent/anticoherent with tACS-entrained cortical oscillations.

Pp. 371-379
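The final analysis step described above (testing whether detection performance fluctuates sinusoidally with tACS phase lag) amounts to least-squares fitting of a cosine at the tACS frequency, e.g. by regressing performance on sine and cosine terms. The sketch below uses simulated data; the number of phase lags and the effect sizes are assumptions.

```python
import numpy as np

def fit_cosine(phase_lags, performance):
    """Fit performance = m + a*cos(phi) + b*sin(phi) by least squares;
    return the modulation amplitude and preferred phase (radians)."""
    X = np.column_stack([np.ones_like(phase_lags),
                         np.cos(phase_lags), np.sin(phase_lags)])
    m, a, b = np.linalg.lstsq(X, performance, rcond=None)[0]
    return np.hypot(a, b), np.arctan2(b, a)

# Toy demo: detection rate sampled at 6 phase lags of the 4-Hz tACS cycle.
lags = np.linspace(0, 2 * np.pi, 6, endpoint=False)
rng = np.random.default_rng(5)
perf = 0.6 + 0.1 * np.cos(lags - 1.0) + rng.normal(0, 0.02, lags.size)
amp, phase = fit_cosine(lags, perf)
print(f"modulation depth {amp:.2f}, preferred phase {phase:.2f} rad")
```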