Monday, September 14, 2009

CS: Optimally Tuned Iterative Reconstruction Algorithms for CS, Limits of Deterministic CS Considering Arbitrary Orthonormal Basis for Sparsity

I was watching Angels and Demons the other day (don't ask) and realized that CS could have shortened a less than optimal viewing experience. Part of the plot has the good guys trying to find where the bad guys have hidden an anti-matter bomb (as noted in Catch My Gamma Ray If You Can, you cannot realistically use antimatter for rockets, and the same reasoning applies to bombs). The device sits in a spot lit by a single lightbulb that can also be viewed on a live webcam. Since we are in the Vatican, the good guys can shut the power off however they want, so they (actually the clerics) devise a scheme: shut off portions of the power grid one subdivision at a time and watch the live feed from the anti-matter bomb, hoping to pinpoint the location of the device. Suffice it to say, the process is too slow; methods from CS and the attendant Group Testing literature could have been a lifesaver for some of the characters of the movie and, more importantly, would have saved the viewer two precious hours....Oh well....Let's go back to something more interesting.
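For the curious, here is a toy sketch of the adaptive group-testing idea, my own illustration rather than anything from the movie or a specific paper: instead of cutting power to one subdivision at a time (a number of tests linear in the number of subdivisions), cut power to half of the remaining candidates at once and watch whether the bulb stays lit, which locates the device in a logarithmic number of tests. The `feed_is_lit` oracle is hypothetical.

```python
def locate_bulb(subdivisions, feed_is_lit):
    """Adaptive group testing by halving.

    feed_is_lit(powered) is a hypothetical oracle: it returns True if
    the bulb is still lit when only the subdivisions in `powered`
    have power (i.e., the bulb sits in the powered half).
    """
    candidates = list(subdivisions)
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        if feed_is_lit(half):                 # bulb is in the powered half
            candidates = half
        else:                                 # bulb is in the dark half
            candidates = candidates[len(half):]
    return candidates[0]

# Usage: 16 subdivisions, bulb hidden in number 11 -> found in 4 tests
# instead of up to 16 sequential shutdowns.
bulb = 11
print(locate_bulb(range(16), lambda powered: bulb in powered))
```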

I mentioned this paper before (CS: Let us define the Current State of the Art, shall we ?) but since it just popped up on arXiv, this may be a new version. Anyway, it doesn't matter, as this is an important paper:

We conducted an extensive computational experiment, lasting multiple CPU-years, to optimally select parameters for two important classes of algorithms for finding sparse solutions of underdetermined systems of linear equations. We make the optimally tuned implementations available at sparselab.stanford.edu; they run ‘out of the box’ with no user tuning: it is not necessary to select thresholds or know the likely degree of sparsity. Our class of algorithms includes iterative hard and soft thresholding with or without relaxation, as well as CoSaMP, subspace pursuit and some natural extensions. As a result, our optimally tuned algorithms dominate such proposals. Our notion of optimality is defined in terms of phase transitions, i.e. we maximize the number of nonzeros at which the algorithm can successfully operate. We show that the phase transition is a well-defined quantity with our suite of random underdetermined linear systems. Our tuning gives the highest transition possible within each class of algorithms. We verify by extensive computation the robustness of our recommendations to the amplitude distribution of the nonzero coefficients as well as the matrix ensemble defining the underdetermined system. Our findings include: (a) For all algorithms, the worst amplitude distribution for nonzeros is generally the constant-amplitude random-sign distribution, where all nonzeros are the same amplitude. (b) Various random matrix ensembles give the same phase transitions; random partial isometries may give different transitions and require different tuning. (c) Optimally tuned subspace pursuit dominates optimally tuned CoSaMP, particularly so when the system is almost square.
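To make the class of algorithms concrete, here is a bare-bones iterative hard thresholding sketch in Python/NumPy. This is only a sketch under simple assumptions: the step size is a conservative 1/||A||^2 placeholder and the stopping rule is a fixed iteration count, i.e. none of the optimal tuning the paper is actually about (for that, see the implementations at sparselab.stanford.edu).

```python
import numpy as np

def iht(A, y, k, iters=500):
    """Minimal iterative hard thresholding sketch.

    Step size is a conservative 1/||A||_2^2 placeholder, not the
    optimally tuned relaxation parameter recommended by the paper.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * (A.T @ (y - A @ x))    # gradient step on ||y - Ax||^2
        keep = np.argpartition(np.abs(x), -k)[-k:]
        pruned = np.zeros_like(x)
        pruned[keep] = x[keep]                # hard threshold: keep k largest
        x = pruned
    return x

# Usage: recover a k-sparse vector from m < n random measurements,
# well below the phase transition.
rng = np.random.default_rng(0)
m, n, k = 100, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)  # roughly unit-norm columns
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = iht(A, A @ x0, k)
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```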

Also found on the web:

It is previously shown that proper random linear samples of a finite discrete signal (vector) which has a sparse representation in an orthonormal basis make it possible (with probability 1) to recover the original signal. Moreover, the choice of the linear samples does not depend on the sparsity domain. In this paper, we will show that the replacement of random linear samples with deterministic functions of the signal (not necessarily linear) will not result in unique reconstruction of k-sparse signals except for k = 1. We will show that there exist deterministic nonlinear sampling functions for unique reconstruction of 1-sparse signals while deterministic linear samples fail to do so.
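As a quick illustration of the "with probability 1" part of the claim (my own toy demo, not the paper's construction, and it says nothing about the deterministic case the paper studies), one can verify by brute force over all supports that a k-sparse vector is the unique k-sparse solution consistent with a handful of random Gaussian linear samples:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 12, 6, 2                        # signal length, samples, sparsity

A = rng.standard_normal((m, n))           # random linear sampling operator
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x                                 # the m linear samples

# Enumerate every support of size k and keep candidates that reproduce
# the samples exactly; with random A there should be exactly one, and
# it should be x itself.
consistent = []
for S in itertools.combinations(range(n), k):
    cols = list(S)
    z_S, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
    if np.linalg.norm(A[:, cols] @ z_S - y) < 1e-9:
        z = np.zeros(n)
        z[cols] = z_S
        consistent.append(z)

print(len(consistent), "consistent k-sparse solution(s)")
print("matches the original signal:",
      all(np.allclose(z, x) for z in consistent))
```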
