FFTNet: a Real-Time Speaker-Dependent Neural Vocoder

IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)

Published April 15, 2018

Zeyu Jin, Adam Finkelstein, Gautham Mysore, Jingwan (Cynthia) Lu

We introduce FFTNet, a deep learning approach to synthesizing audio waveforms. Our approach builds on the recent WaveNet project, which showed that it is possible to synthesize a natural-sounding audio waveform directly from a deep convolutional neural network. FFTNet offers two improvements over WaveNet. First, it is substantially faster, allowing real-time synthesis of audio waveforms. Second, when used as a vocoder, the resulting speech sounds more natural, as measured by a "mean opinion score" test.
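To give a rough sense of the structure behind the speedup, the sketch below shows an FFTNet-style layer in NumPy. This is a minimal illustration under stated assumptions, not the authors' implementation: each layer splits its input buffer in half along time, mixes the two halves with two learned 1x1 (per-channel) transforms, and applies a ReLU, so the buffer shrinks by half per layer in a pattern reminiscent of an FFT butterfly. The function and weight names are hypothetical.

```python
import numpy as np

def fftnet_layer(x, w_l, w_r):
    """One FFTNet-style layer (illustrative sketch).

    x   : (n, c) buffer of n time steps with c channels; n assumed even.
    w_l : (c, c) learned 1x1 transform applied to the first half.
    w_r : (c, c) learned 1x1 transform applied to the second half.
    Returns an (n // 2, c) buffer: the two halves are summed after
    their channel mixes, then passed through a ReLU.
    """
    n, _ = x.shape
    left, right = x[: n // 2], x[n // 2 :]
    z = left @ w_l + right @ w_r   # combine halves; 1x1 over channels
    return np.maximum(z, 0.0)      # ReLU nonlinearity

# Toy usage: an 8-step buffer collapses to 1 step after 3 layers.
rng = np.random.default_rng(0)
c = 4
x = rng.standard_normal((8, c))
for _ in range(3):
    w_l = rng.standard_normal((c, c)) * 0.1
    w_r = rng.standard_normal((c, c)) * 0.1
    x = fftnet_layer(x, w_l, w_r)
print(x.shape)  # (1, 4)
```

Because each layer halves the buffer, a stack of log2(N) such layers covers a receptive field of N samples with far less per-sample work than a comparably deep dilated-convolution stack, which is one plausible reading of where the real-time speedup comes from.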


Research Areas: AI & Machine Learning, Audio