Our research covers the transmission of sound with hearing devices. The transmission process starts with the sensory input, passes through speech processing and neural stimulation, and ends with the listener perceiving and interpreting the signal as speech. Our goal is to understand and improve this process through technological innovation, combining computational models, signal processing, deep neural networks and perceptual experiments.
This research investigates different input modalities of sound transmission (acoustic, tactile, electric) and their combinations. We use acoustic simulations of sound perception with cochlear implants and assess whether integration of tactile and auditory input helps CI listening. We also investigate whether hearable devices live up to their claims of helping people with hearing loss.
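To illustrate what an acoustic simulation of cochlear-implant hearing can look like, the sketch below implements a basic noise vocoder, which replaces the fine structure in each frequency band with envelope-modulated noise. The channel count and filter settings are illustrative assumptions, not the parameters used in our studies.

```python
# Minimal noise-vocoder sketch for simulating cochlear-implant hearing in
# normal-hearing listeners. All parameters are illustrative placeholders.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=8, f_lo=100.0, f_hi=7000.0):
    """Keep only the slow amplitude envelope in each band (as a CI does)
    and use it to modulate band-limited noise carriers."""
    # Logarithmically spaced band edges across the analysis range.
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        envelope = np.abs(hilbert(band))           # slow amplitude envelope
        carrier = rng.standard_normal(len(signal))
        carrier = sosfiltfilt(sos, carrier)        # noise limited to the band
        out += envelope * carrier                  # envelope-modulated noise
    return out

fs = 16000
t = np.arange(fs) / fs
# A stand-in for speech: an amplitude-modulated tone.
speech_like = np.sin(2 * np.pi * 440 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
vocoded = noise_vocode(speech_like, fs)
```

The vocoded signal preserves the temporal envelope cues available to CI users while discarding the original fine structure, which is what makes such simulations useful for testing with normal-hearing listeners.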
This project, led by Mark Fletcher's team at the University of Southampton, investigates whether combining tactile and auditory stimulation can enhance speech perception with cochlear implants. It has led to a prototype device for tactile stimulation at the wrist, which we will use in future studies.
This project is a collaboration with Saima Rajasingam at Anglia Ruskin University, in which we are performing a series of investigations to evaluate listening performance with, and attitudes towards, hearables (wireless earbuds) among people with mild-to-moderate hearing loss.
This research focuses on improving speech signals before they are presented to the listener, for example by removing background noise or other acoustic distortions. We use powerful methods from deep learning (deep neural networks) and digital signal processing to facilitate speech perception with hearing devices.
noise reduction based on deep learning
We develop noise-reduction algorithms based on deep neural networks that are trained to extract and enhance speech in noisy situations. The networks are trained on many examples of noisy speech and then evaluated in listening studies with cochlear-implant and hearing-aid users.
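A common formulation of such networks is mask-based enhancement: the network learns to predict a time-frequency gain from the noisy input. The sketch below applies the oracle "ideal ratio mask", a common training target, computed here from a known clean/noise split that a trained network would only estimate. All signals and parameters are synthetic placeholders, not our actual training setup.

```python
# Mask-based enhancement principle behind DNN noise reduction. The "ideal
# ratio mask" (IRM) below is an oracle computed from the known clean and
# noise components; a trained network predicts an estimate of this mask
# from the noisy signal alone.
import numpy as np
from scipy.signal import stft, istft

fs = 16000
rng = np.random.default_rng(1)
t = np.arange(fs) / fs
clean = 0.5 * np.sin(2 * np.pi * 300 * t) * (1 + np.sin(2 * np.pi * 3 * t))
noise = 0.5 * rng.standard_normal(len(t))
noisy = clean + noise

# Time-frequency representations of all three signals.
_, _, S_clean = stft(clean, fs, nperseg=512)
_, _, S_noise = stft(noise, fs, nperseg=512)
_, _, S_noisy = stft(noisy, fs, nperseg=512)

# Ideal ratio mask: per-bin proportion of magnitude belonging to speech.
irm = np.abs(S_clean) / (np.abs(S_clean) + np.abs(S_noise) + 1e-12)

# Apply the mask to the noisy spectrogram and resynthesise the waveform.
_, enhanced = istft(S_noisy * irm, fs, nperseg=512)
```

With the oracle mask the enhanced signal is much closer to the clean speech than the noisy input; the gap between a trained network's estimated mask and this oracle is what the listening studies quantify in practice.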
removal of background music
This project is a collaboration with Alan Archer-Boyd and Charlotte Garcia to develop adaptive algorithms for removing background music in realistic situations (e.g. a coffee shop). In practice, a reference music signal, received via wireless streaming, would be filtered by the hearing device to facilitate speech perception.
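One classical way to realise this idea is an adaptive filter that learns the acoustic path from the streamed reference to the microphone and subtracts the predicted music from the mixture. The sketch below uses normalised LMS (NLMS) as a stand-in; the toy room response, filter length and step size are made-up illustrations, not the project's actual algorithm.

```python
# Adaptive cancellation of a streamed reference signal, sketched with NLMS.
# The device knows the clean music (via streaming) and adapts an FIR filter
# so the filtered reference matches the music component picked up by the
# microphone; the residual is the speech plus cancellation error.
import numpy as np

def nlms_cancel(mic, ref, n_taps=32, mu=0.5, eps=1e-8):
    """Normalised LMS: return mic minus the adaptively filtered reference."""
    w = np.zeros(n_taps)        # FIR weights (estimate of the acoustic path)
    buf = np.zeros(n_taps)      # most recent reference samples
    out = np.zeros_like(mic)
    for n in range(len(mic)):
        buf = np.roll(buf, 1)
        buf[0] = ref[n]
        y = w @ buf                              # predicted music at the mic
        e = mic[n] - y                           # residual signal
        w += mu * e * buf / (buf @ buf + eps)    # normalised gradient step
        out[n] = e
    return out

rng = np.random.default_rng(2)
fs = 8000
music = rng.standard_normal(fs)
room = np.array([0.8, 0.3, -0.2, 0.1])           # toy acoustic path
music_at_mic = np.convolve(music, room)[:fs]
speech = 0.3 * np.sin(2 * np.pi * 200 * np.arange(fs) / fs)
mic = speech + music_at_mic
residual = nlms_cancel(mic, music)
```

After the filter converges, the residual is dominated by the speech component; the research challenge is making such adaptation robust in realistic rooms, with streaming latency and time-varying paths.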
This research investigates the electro-neural interface and stimulation patterns with cochlear implants and their impact on speech perception. We investigate new coding strategies, assess channel interaction effects and build models for the electrical stimulation and sound transmission with cochlear implants.
optimisation of CI coding strategies
We develop strategies to improve sound transmission and speech perception with CIs:
Site-selection strategy based on polarity sensitivity as a measure of neural health
Removing temporally-masked pulses to simplify stimulation patterns
Spectral blurring in cochlear implants
Project with Bob Carlyon (CBU) and Prof Julie Arenberg (Harvard).
Manipulating electrode channel interaction with cochlear implants and assessing its effects on speech perception to inform future speech strategies and clinical assessment.
end-to-end models of sound transmission
Collaboration led by Tim Brochier with Josef Schlittenlacher, Iwan Roberts, Chen Jiang, Debi Vickers and Prof Manohar Bance.
We are combining high-resolution models and automatic speech recognition to assess speech processing strategies and information transmission with cochlear implants.
patient-specific patterns of neural excitation
Collaboration with Charlotte Garcia (Lead) and Bob Carlyon.
We are developing a model-based algorithm for estimating patient-specific excitation profiles and characterising stimulation and neural-health patterns.
Perception & Cognition
Finally, we assess different aspects of sound perception and speech recognition by human listeners. This covers signal properties, such as spectral and temporal resolution, as well as speech perception in terms of intelligibility, quality and listening effort. We will expand our research towards cognitive processes and measurements to complete the picture.
Collaboration led by Alan Archer-Boyd at the MRC CBU.
Investigations of spectro-temporal resolution with cochlear implants, using the STRIPES test to evaluate new strategies acutely and to avoid speech-learning biases.
We use a range of measures for assessing speech perception, such as speech intelligibility, speech quality and tolerance thresholds for distortions and artefacts.
Cortical measures to follow soon.