Monday, August 03, 2009

CS: Online Sparse Coding and Matrix Factorization, GraDes, TSW-CS





Today, we have a potential game changer in dictionary learning, which, as we all know, is part of compressive sensing since it allows one to reconstruct sparse signals after they have been incoherently acquired. The article is Online Learning for Matrix Factorization and Sparse Coding by Julien Mairal, Francis Bach, Jean Ponce, and Guillermo Sapiro, and it features a relatively simple algorithm that seems to scale to very large datasets. The abstract of the paper reads:
Sparse coding—that is, modelling data vectors as sparse linear combinations of basis elements—is widely used in machine learning, neuroscience, signal processing, and statistics. This paper focuses on the large-scale matrix factorization problem that consists of learning the basis set, adapting it to specific data. Variations of this problem include dictionary learning in signal processing, non-negative matrix factorization and sparse principal component analysis. In this paper, we propose to address these tasks with a new online optimization algorithm, based on stochastic approximations, which scales up gracefully to large datasets with millions of training samples, and extends naturally to various matrix factorization formulations, making it suitable for a wide range of learning problems. A proof of convergence is presented, along with experiments with natural images and genomic data demonstrating that it leads to state-of-the-art performance in terms of speed and optimization for both small and large datasets.

One of the authors, Julien Mairal, mentioned to me that the code for the algorithm should be available shortly. I have added the paper (and the soon-to-follow code) to the dictionary section of the Big Picture in Compressive Sensing.
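Until the authors' code is released, here is a minimal sketch of the online scheme the abstract describes: for each incoming sample, compute a sparse code, accumulate sufficient statistics, and update the dictionary column by column. The lasso step below uses plain ISTA as a stand-in solver (the paper itself uses a LARS-based solver), and all hyperparameters here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def sparse_code(D, x, lam, n_iter=50):
    # ISTA: proximal-gradient stand-in for the lasso step
    # min_a 0.5*||x - D a||^2 + lam*||a||_1
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = a + D.T @ (x - D @ a) / L
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return a

def online_dict_learning(X, k, lam=0.1, n_epochs=1, rng=None):
    """Online dictionary learning sketch: one pass over the columns of X,
    accumulating A = sum a a^T and B = sum x a^T, then a block
    coordinate-descent dictionary update with unit-ball projection."""
    if rng is None:
        rng = np.random.default_rng(0)
    m, n = X.shape  # samples are the columns of X
    D = X[:, rng.choice(n, k, replace=False)].copy()  # init from data
    D /= np.linalg.norm(D, axis=0)
    A = np.zeros((k, k))
    B = np.zeros((m, k))
    for _ in range(n_epochs):
        for t in rng.permutation(n):
            x = X[:, t]
            a = sparse_code(D, x, lam)
            A += np.outer(a, a)
            B += np.outer(x, a)
            # update each atom, then project onto the unit ball
            for j in range(k):
                if A[j, j] > 1e-10:
                    D[:, j] += (B[:, j] - D @ A[:, j]) / A[j, j]
                    D[:, j] /= max(1.0, np.linalg.norm(D[:, j]))
    return D
```

The key point of the online formulation is that each sample is touched once and only the small k-by-k and m-by-k statistics are kept, which is what lets the method scale to millions of training samples.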

Rahul Garg let me know that the GraDes algorithm is now available here. It was featured in Gradient Descent with Sparsification: An iterative algorithm for sparse recovery with restricted isometry property by Rahul Garg and Rohit Khandekar and mentioned here. As Laurent Jacques and the authors noted later, GraDes is essentially an IHT (iterative hard thresholding) algorithm. I have added it to the reconstruction section of the Big Picture in Compressive Sensing page.
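For readers unfamiliar with the connection, IHT-type methods alternate a gradient step on the least-squares residual with a hard-thresholding step that keeps only the s largest coefficients. A minimal sketch (with a conservative fixed step size chosen from the spectral norm of the measurement matrix; GraDes itself derives its step from the RIP constant):

```python
import numpy as np

def hard_threshold(x, s):
    # Keep the s largest-magnitude entries, zero out the rest.
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

def iht(Phi, y, s, n_iter=500):
    """Iterative hard thresholding:
    x <- H_s( x + step * Phi^T (y - Phi x) )"""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2  # conservative step size
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = hard_threshold(x + step * Phi.T @ (y - Phi @ x), s)
    return x

# toy recovery example
rng = np.random.default_rng(0)
n, m, s = 256, 100, 5
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian measurements
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = Phi @ x_true
x_hat = iht(Phi, y, s)
```

With enough Gaussian measurements relative to the sparsity level, the iteration recovers the true sparse vector; the RIP argument in the Garg-Khandekar paper is what guarantees the geometric decrease of the residual.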

Finally, we have Exploiting Structure in Wavelet-Based Bayesian Compressive Sensing by Lihan He and Lawrence Carin. The abstract reads:
Bayesian compressive sensing (CS) is considered for signals and images that are sparse in a wavelet basis. The statistical structure of the wavelet coefficients is exploited explicitly in the proposed model, and therefore this framework goes beyond simply assuming that the data are compressible in a wavelet basis. The structure exploited within the wavelet coefficients is consistent with that used in wavelet-based compression algorithms. A hierarchical Bayesian model is constituted, with efficient inference via Markov chain Monte Carlo (MCMC) sampling. The algorithm is fully developed and demonstrated using several natural images, with performance comparisons to many state-of-the-art compressive-sensing inversion algorithms.


The TSW-CS code is part of the Bayesian Compressive Sensing Code of Larry Carin's group at Duke.
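The structure the abstract refers to is the persistence of wavelet coefficients across scales: a coefficient tends to be significant only when its parent in the wavelet tree is. A toy generative sketch of such a tree-structured spike-and-slab prior is below; the inclusion probabilities and the decaying slab variance are illustrative assumptions of mine, not the hyperpriors of the He-Carin model, and no inference (the MCMC part) is shown.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_tree_wavelet_coeffs(levels=4, p_root=0.9, p_on=0.5, p_off=0.05):
    """Toy tree-structured spike-and-slab draw over a 1-D binary wavelet
    tree: a child coefficient is nonzero with high probability (p_on)
    if its parent is nonzero, and with low probability (p_off) otherwise.
    Returns the coefficients in breadth-first order."""
    coeffs = [rng.standard_normal() if rng.random() < p_root else 0.0]
    states = [coeffs[0] != 0.0]
    for lev in range(1, levels):
        new_states = []
        for parent_on in states:
            for _ in range(2):  # two children per node in a 1-D tree
                on = rng.random() < (p_on if parent_on else p_off)
                new_states.append(on)
                # slab variance shrinks with scale, as in compression models
                coeffs.append(rng.standard_normal() / 2 ** lev if on else 0.0)
        states = new_states
    return np.array(coeffs)
```

Encoding this parent-child dependence in the prior is what lets TSW-CS go beyond a generic sparsity assumption and concentrate the posterior on wavelet supports that look like those of natural images.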

Credit: World Science Festival 2009: Bobby McFerrin Demonstrates the Power of the Pentatonic Scale from World Science Festival on Vimeo.
