Speech Dereverberation using a Learned Speech Model

IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)

Publication date: April 19, 2015

Dawen Liang, Matt Hoffman, Gautham Mysore

We present a general single-channel speech dereverberation method based on an explicit generative model of reverberant and noisy speech. To regularize the model, we use a speech model pre-learned on clean, dry speech as a prior and perform posterior inference over the latent clean speech. The reverberation kernel and additive noise are estimated under a maximum-likelihood framework. Our model assumes no prior knowledge about specific speakers or rooms, so our method can automatically adapt to various reverberant and noisy conditions. We evaluate the proposed model on the speech enhancement task with both simulated data and real recordings from the REVERB Challenge, and obtain results comparable to or better than the state-of-the-art.
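To make the generative structure in the abstract concrete, here is a minimal toy sketch of the assumed observation model: observed reverberant, noisy speech is latent clean speech convolved with a reverberation kernel, plus additive noise. All signal values and kernel shapes below are hypothetical stand-ins, not the paper's actual model, priors, or inference procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: random samples in place of real speech,
# an exponentially decaying kernel in place of a real room response.
clean = rng.standard_normal(1600)            # latent clean, dry speech x
kernel = np.exp(-np.arange(80) / 20.0)       # reverberation kernel h
kernel /= kernel.sum()                       # normalize kernel energy
noise = 0.01 * rng.standard_normal(1600 + 80 - 1)  # additive noise n

# Observation model: y = h * x + n (full convolution)
observed = np.convolve(kernel, clean) + noise
print(observed.shape)
```

Dereverberation then amounts to inverting this model: given only `observed`, infer the posterior over `clean` under a pre-learned speech prior while estimating `kernel` and the noise level by maximum likelihood.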

Research Areas: AI & Machine Learning, Audio