Thursday, November 08, 2012

OSNAP: Faster numerical linear algebra algorithms via sparser subspace embeddings

If you recall David Woodruff's video presentation at MMDS (or his slides at the recent Workshop on "Randomized Numerical Linear Algebra (RandNLA): Theory and Practice"), one could use a relatively sparse projection to perform least squares and low-rank computations. Well, it looks like Jelani Nelson and Huy L. Nguyen have slightly better embeddings:

OSNAP: Faster numerical linear algebra algorithms via sparser subspace embeddings by Jelani Nelson and Huy L. Nguyen. The abstract reads:

An "oblivious subspace embedding (OSE)" given some parameters eps,d is a distribution D over matrices B in R^{m x n} such that for any linear subspace W in R^n with dim(W) = d it holds that Pr_{B ~ D}(forall x in W ||B x||_2 in (1 +/- eps)||x||_2) > 2/3 We show an OSE exists with m = O(d^2/eps^2) and where every B in the support of D has exactly s=1 non-zero entries per column. This improves previously best known bound in [Clarkson-Woodruff, arXiv:1207.6365]. Our quadratic dependence on d is optimal for any OSE with s=1 [Nelson-Nguyen, 2012]. We also give two OSE's, which we call Oblivious Sparse Norm-Approximating Projections (OSNAPs), that both allow the parameter settings m = \~O(d/eps^2) and s = polylog(d)/eps, or m = O(d^{1+gamma}/eps^2) and s=O(1/eps) for any constant gamma>0. This m is nearly optimal since m >= d is required simply to no non-zero vector of W lands in the kernel of B. These are the first constructions with m=o(d^2) to have s=o(d). In fact, our OSNAPs are nothing more than the sparse Johnson-Lindenstrauss matrices of [Kane-Nelson, SODA 2012]. Our analyses all yield OSE's that are sampled using either O(1)-wise or O(log d)-wise independent hash functions, which provides some efficiency advantages over previous work for turnstile streaming applications. Our main result is essentially a Bai-Yin type theorem in random matrix theory and is likely to be of independent interest: i.e. we show that for any U in R^{n x d} with orthonormal columns and random sparse B, all singular values of BU lie in [1-eps, 1+eps] with good probability.
Plugging OSNAPs into known algorithms for numerical linear algebra problems such as approximate least squares regression, low rank approximation, and approximating leverage scores implies faster algorithms for all these problems.
The descriptions of the two OSNAP embeddings are featured on page 5.
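For readers who want to play with these embeddings, here is a small numerical sketch (mine, not the authors') of the two flavors discussed in the abstract: the s = 1 CountSketch-style OSE and a block OSNAP with s non-zero entries per column, plugged into sketch-and-solve least squares. The values of m and s below are illustrative choices, not the tuned parameters from the paper.

import numpy as np
from scipy import sparse

def countsketch_ose(n, m, rng):
    # s = 1 OSE: each column has a single +/-1 entry in a uniformly random row
    # (the CountSketch-style construction; the paper takes m = O(d^2/eps^2)).
    rows = rng.integers(0, m, size=n)
    signs = rng.choice([-1.0, 1.0], size=n)
    return sparse.csr_matrix((signs, (rows, np.arange(n))), shape=(m, n))

def osnap(n, m, s, rng):
    # Block OSNAP: the m rows are split into s blocks of m/s rows and each
    # column gets exactly one +/- 1/sqrt(s) entry per block (assumes s divides m).
    assert m % s == 0
    block = m // s
    cols = np.repeat(np.arange(n), s)
    rows = (rng.integers(0, block, size=(n, s)) + np.arange(s) * block).ravel()
    vals = rng.choice([-1.0, 1.0], size=n * s) / np.sqrt(s)
    return sparse.csr_matrix((vals, (rows, cols)), shape=(m, n))

# Sketch-and-solve least squares: solve the m x d sketched problem instead of
# the full n x d one. Forming S @ A costs O(s * nnz(A)) since S has s entries per column.
rng = np.random.default_rng(0)
n, d = 100_000, 20
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

S = osnap(n, m=2_000, s=4, rng=rng)   # illustrative m and s, not the paper's settings
x_sketched = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]
x_exact = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x_sketched - x_exact) / np.linalg.norm(x_exact))

On a tall-and-skinny A like this one, the sketched solution typically lands close to the exact least squares solution while only touching A through a single sparse multiply, which is the whole point of working with embeddings that have very few non-zeros per column.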



2 comments:

Ken Clarkson said...

Hi. While Huy and Jelani have sharper results, and cleaner proofs, it's worth mentioning that David and I have sharpened our analysis as well, reducing the O(nnz(A) log) dependence to O(nnz(A)) in that second set of results for us in their table, for regression and low-rank approximation. These improvements result from a more careful analysis in the heavy-coordinate, "perfect hashing", part of our proof; they are in the current version of our paper on arXiv.

Igor said...

Thank you Ken (http://nuit-blanche.blogspot.com/2012/11/low-rank-approximation-and-regression.html and http://nuit-blanche.blogspot.com/2012/11/a-note-from-jelani-nelson-on-osnap.html)
