UMD-JHU 2011 Speaker Recognition System

37th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

Publication date: March 25, 2012

D. Garcia-Romero, X. Zhou, D. Zotkin, B. V. Srinivasan, Y. Luo, S. Ganapathy, S. Thomas, S. Nemala, G. Sivaram, M. Mirbagheri, S. Mallidi, T. Janu, P. Rajan, N. Mesgarani, M. Elhilali, H. Hermansky, S. Shamma, R. Duraiswami

In recent years, significant advances in the field of speaker recognition have resulted in very robust recognition systems. The primary focus of many recent developments has shifted to the problem of recognizing speakers in adverse conditions, e.g., in the presence of noise and reverberation. In this paper, we present the UMD-JHU speaker recognition system applied to the NIST 2010 SRE task. The novel aspects of our system are: 1) improved performance on trials involving different vocal effort via the use of linear-scale features; 2) improved recognition performance in the presence of reverberation and noise via the use of frequency-domain perceptual linear prediction (FDLP) and cortical features; 3) a new discriminative kernel partial least squares (KPLS) framework that complements state-of-the-art back-ends such as joint factor analysis (JFA) and probabilistic linear discriminant analysis (PLDA) to improve overall recognition; and 4) acceleration of the JFA, PLDA, and KPLS back-ends via distributed computing. The individual components of the system and the fused system are compared against a baseline JFA system and against results reported by SRI and MIT-LL on SRE 2010.
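To make the KPLS back-end idea concrete, the following is a minimal sketch of kernel partial least squares regression used as a per-speaker discriminative scorer. This is a generic, textbook-style KPLS (single-output, NIPALS-style deflation), not the paper's exact implementation; the kernel choice, function names, and toy data are illustrative assumptions, and kernel centering is omitted for brevity.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    # Gaussian (RBF) kernel between rows of A and rows of B
    d2 = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T)
    return np.exp(-gamma * d2)

def kpls_fit(K, y, n_components=2):
    """Single-output kernel PLS: returns dual coefficients alpha
    such that scores for new data are K_test @ alpha.
    K: (n, n) train kernel; y: (n,) targets (+1 target speaker, -1 impostor)."""
    n = K.shape[0]
    Kd, yd = K.copy(), y.astype(float).copy()
    T = np.zeros((n, n_components))  # latent score vectors
    U = np.zeros((n, n_components))  # target-side weight vectors
    for a in range(n_components):
        u = yd / np.linalg.norm(yd)      # for one output, no inner iteration
        t = Kd @ u
        t /= np.linalg.norm(t)
        T[:, a], U[:, a] = t, u
        # deflate kernel and targets by the extracted component
        P = np.eye(n) - np.outer(t, t)
        Kd = P @ Kd @ P
        yd = yd - t * (t @ yd)
    # dual regression coefficients from the extracted components
    alpha = U @ np.linalg.solve(T.T @ K @ U, T.T @ y)
    return alpha

def kpls_score(K_test, alpha):
    # higher score = more target-speaker-like
    return K_test @ alpha
```

In a speaker-recognition setting, the rows of the training matrix would be utterance-level representations (e.g., supervectors or i-vectors), with one KPLS model trained per target speaker against a pool of impostor utterances; the regression output then serves as the trial score.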


Research Areas: AI & Machine Learning, Audio