Research Projects

Our research covers the transmission of sound with hearing devices. The transmission process starts with the sensory input, passes through speech processing and neural stimulation, and is finally perceived by the listener and interpreted as speech. Our goal is to understand and improve this process using technological innovations, including computational models, signal processing, deep neural networks, and perceptual experiments.

  • Sensory input

  • Speech processing

  • Neural stimulation

  • Perception & cognition

Sensory input

This research investigates different input modalities of sound transmission (acoustic, tactile, electric) and their combinations. We use acoustic simulations of sound perception with cochlear implants (CIs) and assess whether the integration of tactile and auditory input helps CI listening. We also investigate whether hearable devices live up to their claims of helping people with hearing loss.
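Acoustic simulations of CI listening are commonly implemented as noise vocoders: the signal is split into frequency bands, the temporal envelope of each band is extracted, and the envelopes modulate band-limited noise carriers. A minimal sketch of this general technique (channel count, band edges, and filter order are illustrative choices, not our exact simulation):

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(signal, fs, n_channels=8, f_lo=100.0, f_hi=7000.0):
    """Simulate CI listening: band-pass filter, extract each band's
    temporal envelope, and use it to modulate band-limited noise."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)
        env = np.abs(hilbert(band))                   # temporal envelope
        carrier = sosfilt(sos, rng.standard_normal(len(signal)))
        out += env * carrier                          # envelope x noise carrier
    return out
```

The result preserves the envelope cues that CI processing transmits while discarding the temporal fine structure.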


Electrohaptics

This project, led by Mark Fletcher's team at the University of Southampton, investigates whether combining tactile and auditory stimulation can enhance speech perception with cochlear implants. It has led to a prototype device for tactile stimulation at the wrist, which we will be using in future studies.

www.electrohaptics.co.uk

Hearables

This project is a collaboration with Saima Rajasingam at Anglia Ruskin University, in which we are evaluating listening performance with, and attitudes towards, hearables (wireless earbuds) among people with mild-to-moderate hearing loss.

Related publications

  • Fletcher, M., Hadeedi, A., Goehring, T., Mills, S. (2019). Electro-haptic hearing: speech-in-noise performance in cochlear implant users is enhanced by tactile stimulation of the wrists. Scientific Reports, 9(1), 1-8. doi.org/10.1038/s41598-019-47718-z

  • Fletcher, M., Mills, S., Goehring, T. (2018). Vibro-tactile enhancement of speech intelligibility in multi-talker noise for simulated cochlear implant listening. Trends in Hearing, 22. doi.org/10.1177/2331216518797838

Speech processing

This research focuses on improving speech signals before they are presented to the listener, for example by removing background noise or other acoustic distortions. We use powerful methods from deep learning (deep neural networks) and digital signal processing to facilitate speech perception with hearing devices.

Noise reduction based on deep learning

We develop noise-reduction algorithms based on deep neural networks that are trained to extract and enhance speech in noisy situations. The networks are trained on many examples of noisy speech and then evaluated in listening studies with cochlear implant and hearing aid users.
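As a sketch of the general training paradigm, the ideal ratio mask (IRM) is a common target for such networks: it is computed from the separate clean and noise signals during training, and the network learns to predict it from the noisy mixture alone. A minimal example that computes an IRM and applies it to the mixture (function name and parameters are illustrative):

```python
import numpy as np
from scipy.signal import stft, istft

def irm_enhance(clean, noise, fs, nperseg=256):
    """Compute the ideal ratio mask (IRM) from the separate clean and
    noise signals, then apply it to the noisy mixture's STFT. In a real
    system, a trained network predicts this mask from the noisy input."""
    _, _, S = stft(clean, fs, nperseg=nperseg)
    _, _, N = stft(noise, fs, nperseg=nperseg)
    mask = np.abs(S) / (np.abs(S) + np.abs(N) + 1e-12)  # values in [0, 1]
    _, _, Y = stft(clean + noise, fs, nperseg=nperseg)
    _, enhanced = istft(mask * Y, fs, nperseg=nperseg)  # resynthesise speech
    return enhanced, mask
```

Applying the mask attenuates noise-dominated time-frequency bins while leaving speech-dominated bins largely intact.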

Removal of background music

This project is a collaboration with Alan Archer-Boyd and Charlotte Garcia to develop adaptive algorithms for removing background music in realistic situations (e.g. a coffee shop). In practice, a reference music signal, received via wireless streaming, would be filtered out by the hearing device to facilitate speech perception.
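One standard building block for this kind of reference-based cancellation is a normalised LMS (NLMS) adaptive filter, which learns how the streamed reference arrives at the device's microphone and subtracts it, leaving the speech. The sketch below illustrates the general principle, not the project's actual algorithm (all names and parameters are placeholders):

```python
import numpy as np

def nlms_cancel(mixture, reference, n_taps=32, mu=0.5, eps=1e-8):
    """Normalised LMS: adapt an FIR filter so that the filtered
    reference matches the music component of the mixture; the error
    signal (mixture minus estimated music) is the enhanced output."""
    w = np.zeros(n_taps)
    out = np.zeros_like(mixture)
    for n in range(n_taps - 1, len(mixture)):
        x = reference[n - n_taps + 1:n + 1][::-1]  # newest sample first
        e = mixture[n] - w @ x                     # subtract estimated music
        w += mu * e * x / (x @ x + eps)            # NLMS weight update
        out[n] = e
    return out
```

After the filter converges, the output contains the speech plus a small residual of the music.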

Related publications

  • Goehring, T., Keshavarzi, M., Carlyon, R., Moore, B. (2019). Using recurrent neural networks to improve the perception of speech in non-stationary noise by people with cochlear implants. The Journal of the Acoustical Society of America, 146(1), 705-718. doi.org/10.1121/1.5119226

  • Keshavarzi, M., Goehring, T., Turner, R., Moore, B. (2019). Comparison of effects on subjective intelligibility and quality of speech in babble for two algorithms: a deep recurrent Neural network and spectral subtraction. The Journal of the Acoustical Society of America, 145(3), 1493-1503. doi.org/10.1121/1.5094765

  • Keshavarzi, M., Goehring, T., Zakis, J., Turner, R., Moore, B. (2018). Use of a deep recurrent neural network to reduce wind noise: effects on judged speech intelligibility and sound quality. Trends in Hearing, 22. doi.org/10.1177/2331216518770964

  • Monaghan, J., Goehring, T., Yang, X., Bolner, F., Wang, S., Bleeck, S. (2017). Auditory inspired machine learning techniques can improve speech intelligibility and quality for hearing-impaired listeners. The Journal of the Acoustical Society of America, 141(3), 1985-1998. doi.org/10.1121/1.4977197

  • Goehring, T., Bolner, F., Monaghan, J., Van Dijk, B., Zarowski, A., Bleeck, S. (2017). Speech Enhancement Based on Neural Networks Improves Speech Intelligibility in Noise for Cochlear Implant Users. Hearing Research, 344, 183-194. doi.org/10.1016/j.heares.2016.11.012

  • Goehring, T., Yang, X., Monaghan, J., Bleeck, S. (2016). Speech enhancement for hearing-impaired listeners using deep neural networks with auditory-model based features. In 2016 EURASIP 24th European Signal Processing Conference (EUSIPCO) (pp. 2300-2304), Hungary. IEEE.

  • Bolner, F., Goehring, T., Monaghan, J., Van Dijk, B., Wouters, J., Bleeck, S. (2016). Speech enhancement based on neural networks applied to cochlear implant coding strategies. In 2016 IEEE Intern. Conf. on Acoustics, Speech and Signal Processing (ICASSP), China. IEEE.

Neural stimulation

This research investigates the electro-neural interface and stimulation patterns with cochlear implants and their impact on speech perception. We investigate new coding strategies, assess channel-interaction effects, and build models of electrical stimulation and sound transmission with cochlear implants.

Optimisation of CI coding strategies

We develop strategies to improve sound transmission and speech perception with CIs:

  • Site-selection strategy based on polarity sensitivity as a measure of neural health

  • Removing temporally-masked pulses to simplify stimulation patterns
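The second idea can be sketched as a simple forward-masking rule: a pulse that closely follows a stronger pulse is assumed to be masked and can be removed from the stimulation pattern. The window and ratio below are illustrative placeholders, not the published strategy's parameters:

```python
def remove_masked_pulses(pulses, window=2e-3, ratio=0.5):
    """Drop pulses that closely follow a stronger kept pulse.
    `pulses` is a list of (time_s, amplitude) tuples; `window` (s) and
    `ratio` are illustrative placeholders."""
    kept = []
    for t, a in sorted(pulses):
        if kept:
            t_prev, a_prev = kept[-1]
            if t - t_prev < window and a < ratio * a_prev:
                continue  # assumed temporally masked: skip this pulse
        kept.append((t, a))
    return kept
```

Removing such pulses simplifies the stimulation pattern without discarding pulses that the listener would have perceived.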

Spectral blurring in cochlear implants

Project with Bob Carlyon (CBU) and Prof Julie Arenberg (Harvard).

We manipulate electrode channel interaction with cochlear implants and assess its effects on speech perception to inform future coding strategies and clinical assessment.
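Conceptually, spectral blurring can be modelled as mixing the envelopes of neighbouring analysis channels, mimicking increased current spread between electrodes. A minimal, illustrative sketch (not our experimental implementation):

```python
import numpy as np

def blur_channels(envelopes, blur_width=1.0):
    """Spectrally blur a [channels x time] envelope matrix by mixing
    neighbouring channels with Gaussian weights, a simple stand-in for
    increased current spread between electrodes."""
    n = envelopes.shape[0]
    idx = np.arange(n)
    W = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / blur_width) ** 2)
    W /= W.sum(axis=1, keepdims=True)  # each output row sums to 1
    return W @ envelopes
```

Larger `blur_width` values spread each channel's energy over more neighbours, degrading spectral resolution in a controlled way.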

End-to-end models of sound transmission

Collaboration led by Tim Brochier with Josef Schlittenlacher, Iwan Roberts, Chen Jiang, Debi Vickers and Prof Manohar Bance.

We are combining high-resolution models and automatic speech recognition to assess speech processing strategies and information transmission with cochlear implants.

Patient-specific patterns of neural excitation

Collaboration with Charlotte Garcia (Lead) and Bob Carlyon.

We are developing a model-based algorithm to estimate patient-specific excitation profiles and to characterise stimulation and neural-health patterns.

Related publications

  • Garcia, C., Goehring, T., Cosentino, S., Turner, R. E., Deeks, J. M., Brochier, T., ... & Carlyon, R. P. (2021). The panoramic ECAP method: estimating patient-specific patterns of current spread and neural health in cochlear implant users. Journal of the Association for Research in Otolaryngology, 1-23. doi.org/10.1007/s10162-021-00795-2

  • Jiang, C., Singhal, S., Landry, T., Roberts, I., De Rijk, S., Brochier, T., Goehring, T., ... & Malliaras, G. G. (2021). An Instrumented Cochlea Model for the Evaluation of Cochlear Implant Electrical Stimulus Spread. IEEE Transactions on Biomedical Engineering. doi.org/10.1109/TBME.2021.3059302

  • Goehring, T., Arenberg, J., Carlyon, R. (2020). Using spectral blurring to assess effects of channel interaction on speech-in-noise perception with cochlear implants. Journal of the Association for Research in Otolaryngology. doi.org/10.1007/s10162-020-00758-z

  • Lamping, W., Goehring, T., Marozeau, J., Carlyon, R. (2020). The effect of a coding strategy that removes temporally masked pulses on speech perception by cochlear implant users. Hearing Research, 391, 107969. doi.org/10.1016/j.heares.2020.107969

  • Goehring, T., Archer-Boyd, A., Deeks, J., Arenberg, J., Carlyon, R. (2019). A site-selection strategy based on polarity sensitivity for cochlear implants: effects on spectro-temporal resolution and speech perception. Journal of the Association for Research in Otolaryngology, 1-18. doi.org/10.1007/s10162-019-00724-4

Perception & cognition

Finally, we assess different aspects of sound perception and speech recognition by human listeners. This covers signal attributes, such as spectral and temporal resolution, as well as speech perception in terms of intelligibility, quality, and listening effort. We will expand our research towards cognitive processes and measurements to complete the picture.

Spectro-temporal resolution

Collaboration led by Alan Archer-Boyd at the MRC CBU.

We investigate spectro-temporal resolution with cochlear implants, using the STRIPES test to evaluate new strategies acutely and to avoid speech-learning biases.

Speech perception

We use a range of measures for assessing speech perception, such as speech intelligibility, speech quality and tolerance thresholds for distortions and artefacts.
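Speech intelligibility is often measured with an adaptive track that converges on a speech reception threshold (SRT). A minimal one-up/one-down staircase, which converges on the SNR giving roughly 50% correct (all parameters are illustrative, not our clinical procedure):

```python
def adaptive_srt(respond, start_snr=10.0, step=2.0, n_trials=30):
    """One-up/one-down staircase: lower the SNR after a correct
    response, raise it after an error, and average the SNR at the
    last reversals as the threshold estimate."""
    snr, last_dir, reversals = start_snr, 0, []
    for _ in range(n_trials):
        direction = -1 if respond(snr) else 1  # harder if correct
        if last_dir and direction != last_dir:
            reversals.append(snr)              # track direction changes
        last_dir = direction
        snr += direction * step
    tail = reversals[-6:]
    return sum(tail) / len(tail) if tail else snr
```

Here `respond(snr)` stands in for presenting a sentence at that SNR and scoring the listener's response.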

Cortical measures to follow soon.

Related publications

  • Archer-Boyd, A., Goehring, T., Carlyon, R. (2020). The effect of free-field presentation and processing strategy on a measure of spectro-temporal processing by cochlear-implant listeners. Trends in Hearing, 24. doi.org/10.1177/2331216520964281

  • Goehring, T., Chapman, J., Bleeck, S., Monaghan, J. (2018). Tolerable delay for speech production and perception: effects of hearing ability and experience with hearing aids. International Journal of Audiology, 57(1), 61-68. doi.org/10.1080/14992027.2017.1367848