Preconditioned Krylov solvers for kernel regression

Workshop on Large Scale Matrix Analysis and Inference at Neural Information Processing Systems (NIPS)

Published December 9, 2013

Balaji Vasan Srinivasan, Q. Hu, N. Gumerov, R. Murtugudde, R. Duraiswami

A primary computational problem in kernel regression is the solution of a dense linear system with the N×N kernel matrix. Because a direct solution has an O(N³) cost, iterative Krylov methods are often used with fast matrix-vector products. For poorly conditioned problems, convergence of the iteration is slow and preconditioning becomes necessary. We investigate preconditioning from the viewpoint of scalability and efficiency. The problems that conventional preconditioners face when applied to kernel methods are demonstrated. A novel flexible preconditioner that not only improves convergence but also allows utilization of fast kernel matrix-vector products is introduced. The performance of this preconditioner is first illustrated on synthetic data, and subsequently on a suite of test problems in kernel regression and geostatistical kriging.
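To make the setting concrete, the sketch below solves a kernel regression system K·α = y with a Krylov method (conjugate gradients) and a simple Jacobi (diagonal) preconditioner. This is only an illustrative baseline, not the flexible preconditioner proposed in the paper; the kernel bandwidth, ridge term, and data are arbitrary choices for the example.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(0)
N = 200
X = rng.standard_normal((N, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(N)

# Dense N x N Gaussian (RBF) kernel matrix with a small ridge term
# for numerical stability; bandwidth = 1.0 is an illustrative choice.
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-sq_dists / 2.0) + 1e-3 * np.eye(N)

# Jacobi preconditioner: approximate K^{-1} by the inverse of its diagonal.
diag_K = np.diag(K)
M = LinearOperator((N, N), matvec=lambda v: v / diag_K)

# Preconditioned conjugate gradients; info == 0 signals convergence.
alpha, info = cg(K, y, M=M)
```

In practice the matrix-vector products K·v would be computed with a fast summation scheme (e.g., a fast Gauss transform) rather than by forming K explicitly, which is what makes the preconditioner's compatibility with fast matvecs important.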


Research Area: AI & Machine Learning