Friday, March 13, 2009

CS: Codes, Algorithmic Problems and Results in CS, Compressed imaging with separable sensing operator, General Deviants, Low Density Frames, SPARLS

While looking at the blog's logs, I noticed that somebody from eop.gov came to see the site. Should I say woohoo, or is it something more nefarious? :-) (for non-US readers, eop stands for Executive Office of the President).

The Bayesian Compressive Sensing code has seen an upgrade: tswcs.zip (last updated Mar. 10, 2009) now allows wavelets other than 'db1' (Haar).

I mentioned before the paper of Vincent Y. F. Tan and Vivek Goyal entitled Estimating signals with finite rate of innovation from noisy samples: A stochastic algorithm. The attendant Matlab code is now available here.

Matthias Seeger has a new presentation entitled Optimization of k-Space Trajectories for Compressed Sensing by Bayesian Experimental Design.

In Tuesday's long post, I mentioned the work of M. Salman Asif and Justin Romberg entitled Streaming Measurements in Compressive Sensing: l_1 Filtering. Following up on that, Justin just let me know that he will be releasing the attendant implementation within two weeks. Great.


S. Muthu Muthukrishnan reports on his blog about his workshop in Barbados and has put his notes on this page. One of the documents listed is Some Algorithmic Problems and Results in Compressed Sensing by S. Muthu Muthukrishnan. The abstract reads:

In Compressed Sensing [9], we consider a signal A ∈ R^n that is compressible with respect to some dictionary of ψ_i's, that is, its information is concentrated in k coefficients ⟨A, ψ_i⟩. The goal is to reconstruct such signals using only a few measurements ⟨A, v_i⟩, for carefully chosen v_i which depend on the ψ_i. Known results [9], [3], [21] prove that there exists a single O(k log(n)) × n measurement matrix such that any compressible signal can be reconstructed from these measurements, with error at most O(1) times the worst case error for the class of such signals. Compressed sensing has generated tremendous excitement both because of the sophisticated underlying mathematics including Linear Algebra [9], Geometry [21] and the Uncertainty Principle [3], and because of its potential applications to signal processing, communication theory and compression. In this paper, we focus on algorithmic aspects of compressed sensing and present new problems and results. For example, we
  1. Present a simple, deterministic explicit construction of poly(k, log(n)) linear measurements that suffice for compressed sensing.
  2. Introduce functional compressed sensing, that is, extend compressed sensing of a signal to that of functions of the signal, for various functions.
  3. Extend compressed sensing to a distributed, continuous model of computing.
All the results above are obtained by simple combinatorial or number-theoretic ideas.
As he mentioned, here is a nice set of datasets from Yahoo!
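
Since the constructions in that paper are advertised as number-theoretic, here is a small Python sketch of one classic flavor of deterministic measurement design: hash every coordinate to its residue modulo a few distinct primes larger than sqrt(n), so that any two columns of the matrix overlap in at most one row. This is my own illustration of the general idea, not necessarily the construction in the paper.

    import numpy as np

    def first_primes_above(bound, count):
        # Trial division is plenty for a demo of this size.
        primes, p = [], max(2, bound + 1)
        while len(primes) < count:
            if all(p % d for d in range(2, int(p**0.5) + 1)):
                primes.append(p)
            p += 1
        return primes

    def crt_measurement_matrix(n, num_primes):
        # One block of rows per prime p; column i has a single 1 in row
        # (i mod p) of that block. With every prime above sqrt(n), two
        # distinct columns can agree modulo at most one prime, so any
        # two columns share at most one nonzero row (low coherence).
        primes = first_primes_above(int(np.sqrt(n)), num_primes)
        Phi = np.zeros((sum(primes), n))
        offset = 0
        for p in primes:
            for i in range(n):
                Phi[offset + (i % p), i] = 1.0
            offset += p
        return Phi, primes

    # Tiny demo: n = 100 coordinates, 4 prime blocks.
    Phi, primes = crt_measurement_matrix(100, 4)
    print(primes, Phi.shape)        # [11, 13, 17, 19] (60, 100)
    G = Phi.T @ Phi                 # column overlaps
    np.fill_diagonal(G, 0)
    assert G.max() <= 1             # any two columns meet in at most one row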

Here is an issue that is dear to me, i.e. the calibration of a compressive imager. The following paper uses the concept of Kronecker products mentioned earlier: if you recall the discussions with Yin Zhang and others like Thong Do and John Sidles (see comments), the complexity of the measurement operation can be reduced through the use of Kronecker products of measurement matrices. The paper is entitled Compressed imaging with separable sensing operator by Yair Rivenson and Adrian Stern. The abstract reads:
Compressive imaging (CI) is a natural branch of compressed sensing (CS). Although a number of CI implementations have started to appear, the design of an efficient CI system still remains a challenging problem. One of the main difficulties in implementing CI is that it involves huge amounts of data, which has far-reaching implications for the complexity of the optical design, calibration, data storage and computational burden. In this paper, we solve these problems by using a two dimensional separable sensing operator. By so doing, we reduce the complexity by a factor of 10^6 for megapixel images. We show that applying this method requires only a reasonable amount of additional samples.
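
For the curious, here is a small NumPy sketch (mine, not the authors') of why a separable operator is such a big deal: thanks to the identity (A ⊗ B) vec(X) = vec(B X A^T), one never needs to form, store, or calibrate the huge Kronecker matrix.

    import numpy as np

    # Hypothetical sizes for the demo; the paper works at megapixel scale.
    rng = np.random.default_rng(0)
    n, m = 64, 16                            # n x n image, m x m measurements
    X = rng.standard_normal((n, n))          # the scene
    A = rng.standard_normal((m, n))          # row sensing matrix
    B = rng.standard_normal((m, n))          # column sensing matrix

    # Naive CS: one dense (m^2) x (n^2) matrix applied to vec(X).
    Phi = np.kron(A, B)
    y_naive = Phi @ X.flatten(order="F")     # column-stacked vec(X)

    # Separable CS: apply the two small factors to the 2-D image directly,
    # using the identity (A ⊗ B) vec(X) = vec(B X A^T).
    y_sep = (B @ X @ A.T).flatten(order="F")

    assert np.allclose(y_naive, y_sep)
    # Storage: Phi holds (m*n)**2 = 1,048,576 entries, while A and B together
    # hold only 2*m*n = 2,048. That is the kind of reduction separability buys.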

From the Rice Compressive Sensing Repository, we have:

General Deviants: An Analysis of Perturbations in Compressed Sensing by Matthew A. Herman and Thomas Strohmer. The abstract reads:
We analyze the Basis Pursuit recovery method when observing signals with general perturbations (i.e., additive, as well as multiplicative noise). This completely perturbed model extends the previous work of Candes, Romberg and Tao on stable signal recovery from incomplete and inaccurate measurements. Our results show that, under suitable conditions, the stability of the recovered signal is limited by the noise level in the observation. Moreover, this accuracy is within a constant multiple of the best case reconstruction using the technique of least squares.
I am not a specialist, but when reading the paper I get the sense that with the use of the RIP, one eventually obtains accuracy bounds leading to the conclusion that "this accuracy is within a constant multiple of the best case reconstruction using the technique of least squares". Should we see this type of result as another clue that the RIP, being only a sufficient condition, is too conservative?
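
To get a feel for this completely perturbed setting, here is a toy experiment of my own devising (the solver is a plain ISTA for the LASSO form of Basis Pursuit denoising, not the authors' code): the measurements are taken with a perturbed matrix A + E, while recovery uses the nominal A.

    import numpy as np

    def ista(A, y, lam, iters=500):
        # Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1.
        L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            z = x - A.T @ (A @ x - y) / L    # gradient step
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
        return x

    rng = np.random.default_rng(1)
    n, m, k = 256, 100, 8
    A = rng.standard_normal((m, n)) / np.sqrt(m)         # nominal matrix
    E = 0.05 * rng.standard_normal((m, n)) / np.sqrt(m)  # multiplicative perturbation
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

    # Measure with the perturbed matrix, recover with the nominal one.
    y = (A + E) @ x_true + 0.01 * rng.standard_normal(m)
    x_hat = ista(A, y, lam=0.02)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))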

In Compressive Sensing Using Low Density Frames, Mehmet Akçakaya, Jinsoo Park and Vahid Tarokh devise a new set of sparse measurement matrices called Low Density Frames. The abstract reads:
We consider the compressive sensing of a sparse or compressible signal x ∈ R^M. We explicitly construct a class of measurement matrices, referred to as the low density frames, and develop decoding algorithms that produce an accurate estimate x̂ even in the presence of additive noise. Low density frames are sparse matrices and have small storage requirements. Our decoding algorithms for these frames have O(M) complexity. Simulation results are provided, demonstrating that our approach significantly outperforms state-of-the-art recovery algorithms for numerous cases of interest. In particular, for Gaussian sparse signals and Gaussian noise, we are within a 2 dB range of the theoretical lower bound in most cases.
From the conclusion,

In this paper, we constructed an ensemble of measurement matrices with small storage requirements. We denoted the members of this ensemble as Low Density Frames (LDF). For these frames, we provided sparse reconstruction algorithms that have O(M) complexity and that are Bayesian in nature. We evaluated the performance of this ensemble of matrices and their decoding algorithms, and compared their performance to other state-of-the-art recovery algorithms and their associated measurement matrices. We observed that in various cases of interest, SuPrEM algorithms with LDFs outperformed the other algorithms with partial Fourier matrices. In particular, for Gaussian sparse signals and Gaussian noise, we are within 2 dB range of the theoretical lower bound in most cases.
Mehmet Akçakaya mentioned to me that an implementation of the measurement matrices and attendant reconstruction codes will be available in the future.
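
In the meantime, here is a generic sketch of what a low-density (column-sparse) measurement matrix buys in storage and matrix-vector cost, using scipy.sparse. The sizes and the random ±1 construction are my own placeholders; the actual LDF construction and the SuPrEM decoder are specific to the paper.

    import numpy as np
    from scipy import sparse

    rng = np.random.default_rng(2)
    M, N, d = 10_000, 2_500, 8   # signal length M (paper notation), N rows, d ones per column

    # Generic low-density frame: each column carries d random +/-1 entries.
    rows = np.concatenate([rng.choice(N, d, replace=False) for _ in range(M)])
    cols = np.repeat(np.arange(M), d)
    vals = rng.choice([-1.0, 1.0], size=M * d) / np.sqrt(d)
    Phi = sparse.csc_matrix((vals, (rows, cols)), shape=(N, M))

    x = np.zeros(M)
    x[rng.choice(M, 50, replace=False)] = rng.standard_normal(50)
    y = Phi @ x                  # costs O(M*d) operations instead of O(N*M)

    print("stored entries:", Phi.nnz, "vs dense:", N * M)   # 80,000 vs 25,000,000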

Also from the same group: Asymptotic Achievability of the Cramér–Rao Bound for Noisy Compressive Sampling by Behtash Babadi, Nicholas Kalouptsidis and Vahid Tarokh. The abstract reads:
We consider a model of the form y = Ax + n, where x ∈ C^M is sparse with at most L nonzero coefficients in unknown locations, y ∈ C^N is the observation vector, A ∈ C^(N×M) is the measurement matrix and n is the Gaussian noise. We develop a Cramér–Rao bound on the mean squared estimation error of the nonzero elements of x, corresponding to the genie-aided estimator (GAE) which is provided with the locations of the nonzero elements of x. Intuitively, the mean squared estimation error of any estimator without the knowledge of the locations of the nonzero elements of x is no less than that of the GAE. Assuming that L/N is fixed, we establish the existence of an estimator that asymptotically achieves the Cramér–Rao bound without any knowledge of the locations of the nonzero elements of x as N → ∞, for A a random Gaussian matrix whose elements are drawn i.i.d. according to N(0, 1).
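
The genie-aided bound is easy to check numerically. Here is a small sketch of mine, in the real Gaussian case for simplicity: once the support S is revealed, the oracle least-squares estimator is unbiased and its mean squared error matches the Cramér–Rao bound sigma^2 tr((A_S^T A_S)^{-1}).

    import numpy as np

    rng = np.random.default_rng(3)
    M, N, L, sigma = 200, 100, 5, 0.1    # dimensions as in the abstract
    A = rng.standard_normal((N, M))      # i.i.d. N(0, 1) entries
    S = rng.choice(M, L, replace=False)  # support revealed to the genie
    x_S = rng.standard_normal(L)
    A_S = A[:, S]

    # Cramér-Rao bound for estimating the L nonzero entries, support known.
    crb = sigma**2 * np.trace(np.linalg.inv(A_S.T @ A_S))

    # Monte Carlo MSE of the genie-aided (oracle) least-squares estimator.
    errs = []
    for _ in range(2000):
        y = A_S @ x_S + sigma * rng.standard_normal(N)
        x_hat = np.linalg.lstsq(A_S, y, rcond=None)[0]
        errs.append(np.sum((x_hat - x_S) ** 2))
    print(f"empirical MSE {np.mean(errs):.5f} vs CRB {crb:.5f}")  # the two agree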

Also from the same group, SPARLS: A Low Complexity Recursive $\mathcal{L}_1$-Regularized Least Squares Algorithm by Behtash Babadi, Nicholas Kalouptsidis and Vahid Tarokh. The abstract reads:
We develop a Recursive $\mathcal{L}_1$-Regularized Least Squares (SPARLS) algorithm for the estimation of a sparse tap-weight vector in the adaptive filtering setting. The SPARLS algorithm exploits noisy observations of the tap-weight vector output stream and produces its estimate using an Expectation-Maximization type algorithm. Simulation studies in the context of channel estimation, employing multi-path wireless channels, show that the SPARLS algorithm has significant improvement over the conventional widely-used Recursive Least Squares (RLS) algorithm, in terms of both mean squared error (MSE) and computational complexity.
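
The SPARLS code is not out either, but for readers who want to reproduce the baseline, here is a minimal textbook exponentially weighted RLS for tap-weight estimation. This is the conventional algorithm the paper improves upon, not SPARLS itself, which replaces the update with an Expectation-Maximization type step.

    import numpy as np

    def rls(inputs, desired, order, lam=0.99, delta=1e2):
        # Textbook exponentially weighted RLS for adaptive filtering.
        w = np.zeros(order)              # tap-weight estimate
        P = delta * np.eye(order)        # inverse correlation matrix
        for t in range(order - 1, len(inputs)):
            u = inputs[t - order + 1:t + 1][::-1]  # regressor [u_t, u_{t-1}, ...]
            k = P @ u / (lam + u @ P @ u)          # gain vector
            e = desired[t] - w @ u                 # a priori error
            w = w + k * e
            P = (P - np.outer(k, u @ P)) / lam
        return w

    # Demo: identify a sparse 32-tap multipath channel from noisy data.
    rng = np.random.default_rng(4)
    h = np.zeros(32)
    h[[2, 11, 27]] = [1.0, -0.5, 0.3]
    u = rng.standard_normal(5000)
    d = np.convolve(u, h, mode="full")[:len(u)] + 0.01 * rng.standard_normal(len(u))
    w = rls(u, d, order=32)
    print("tap-weight MSE:", np.mean((w - h) ** 2))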
