Sparse Overcomplete Latent Variable Decomposition of Counts Data

In Proc. of Neural Information Processing Systems (NIPS)

Published June 25, 2007

M. Shashanka, B. Raj, Paris Smaragdis

An important problem in many fields is the analysis of counts data to extract meaningful latent components. Methods like Probabilistic Latent Semantic Analysis (PLSA) and Latent Dirichlet Allocation (LDA) have been proposed for this purpose. However, they are limited in the number of components they can extract and lack an explicit provision to control the “expressiveness” of the extracted components. In this paper, we present a learning formulation to address these limitations by employing the notion of sparsity. We start with the PLSA framework and use an entropic prior in a maximum a posteriori formulation to enforce sparsity. We show that this allows the extraction of overcomplete sets of latent components which better characterize the data. We present experimental evidence of the utility of such representations.
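The abstract builds on the standard PLSA model, in which a counts matrix is decomposed as a mixture of latent components via EM. The sketch below shows plain PLSA EM for a counts matrix `V` (the base framework the paper starts from); the paper's contribution, an entropic prior in the M-step that enforces sparsity and permits overcomplete component sets, involves a Lambert-W fixed-point update and is omitted here. Function and variable names are illustrative, not from the paper.

```python
import numpy as np

def plsa(V, K, iters=100, seed=0):
    """Basic PLSA EM for a nonnegative counts matrix V of shape (F, N).

    Decomposes P(f, n) = sum_z P(z) P(f|z) P(n|z). The NIPS 2007 paper
    modifies the M-step with an entropic prior (not implemented here)
    to make the learned distributions sparse.
    """
    rng = np.random.default_rng(seed)
    F, N = V.shape
    Pz = np.full(K, 1.0 / K)                            # P(z)
    Pf_z = rng.random((F, K)); Pf_z /= Pf_z.sum(0)      # P(f|z)
    Pn_z = rng.random((N, K)); Pn_z /= Pn_z.sum(0)      # P(n|z)
    for _ in range(iters):
        # E-step: posterior over latent components, shape (F, N, K)
        joint = Pz[None, None, :] * Pf_z[:, None, :] * Pn_z[None, :, :]
        post = joint / joint.sum(axis=2, keepdims=True).clip(1e-12)
        # M-step: reweight posterior by observed counts and renormalize
        W = V[:, :, None] * post                        # (F, N, K)
        Pf_z = W.sum(axis=1); Pf_z /= Pf_z.sum(0).clip(1e-12)
        Pn_z = W.sum(axis=0); Pn_z /= Pn_z.sum(0).clip(1e-12)
        Pz = W.sum(axis=(0, 1)); Pz /= Pz.sum()
    return Pz, Pf_z, Pn_z
```

With the entropic prior, each M-step would additionally bias `Pf_z` (or `Pn_z`) toward low-entropy, i.e. sparse, distributions, which is what lets the number of components `K` exceed the matrix dimensions without the decomposition becoming degenerate.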


Research Area: AI & Machine Learning