Friday, September 25, 2009

CS: ISD, A New Compressed Sensing Algorithm via Iterative Support Detection (and NSP vs RIP)

Some of you may recall my description of the DUMBEST algorithm. Well, it looks like Yilun Wang and Wotao Yin had a more powerful idea along similar lines of thought four months earlier and implemented it. Woohoo! This is likely to become an important line of work in Compressive Sensing, as it focuses on the idea that once some data has been gathered and your reconstruction algorithm fails, there is still hope of recovering something. Wotao Yin mentioned to me that they released ISD, A New Compressed Sensing Algorithm via Iterative Support Detection. From the website introduction:
ISD is as fast as L1-minimization but requires fewer measurements. ISD addresses wrong solutions of L1 reconstruction due to insufficient measurements. It will learn from such wrong solutions and solve new minimization problems that return a perfect or a better solution. A demo is given on pages 4 and 5 of our report.

You can download an efficient implementation of ISD, called threshold-ISD, for recovering signals with fast decaying distributions of nonzeros from compressive measurements. The package includes numerical experiments showing that ISD has significant overall advantages over the classical L1 minimization approach, as well as two other state-of-the-art algorithms: the iterative reweighted L1 minimization algorithm (IRL1) and the iterative reweighted least-squares algorithm (IRLS).
The technical report is Compressed Sensing via Iterative Support Detection by Yilun Wang and Wotao Yin. The abstract reads:
We present a new compressive sensing reconstruction method "ISD", aiming to achieve fast reconstruction and a reduced requirement on the number of measurements compared to the classical l_1 minimization approach. ISD addresses failed cases of l_1 based reconstruction due to insufficient measurements, in which the returned signals are not equal or even close to the true signals. ISD will learn from such signals and solve new minimization problems that return a perfect or a better signal. Specifically, given an incorrect signal, ISD detects an index set I that includes components most likely to be true nonzeros and solves \min \{ \sum_{i \notin I} |x_i| : Ax = b \} for a new signal x, and it iterates these two steps alternately, using the latest x and I from one another, until convergence. ISD differs from the orthogonal matching pursuit (OMP) method, as well as its variants, in two aspects. First, although both methods generate index sets iteratively, the index set I in ISD is not necessarily nested or increasing over the iterations. Second, the OMP family methods at each of their iterations fix x_i, i \in I, and update the remaining components of x, but the ISD minimization problem above updates all the components of x. To analyze the performance of ISD, we generalize the Null Space Property to the Truncated Null Space Property and present our analysis on the latter. We introduce an efficient implementation of ISD, called threshold-ISD, for recovering signals with fast decaying distributions of nonzeros from compressive measurements. Numerical experiments show that threshold-ISD has significant overall advantages over the classical l_1 minimization approach, as well as two other state-of-the-art algorithms: the iterative reweighted l_1 minimization algorithm (IRL1) and the iterative reweighted least-squares algorithm (IRLS). MATLAB code is available for download from http://www.caam.rice.edu/~optimization/L1/ISD/.
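To make the alternating scheme concrete, here is a minimal sketch of the ISD loop in Python (my own illustration, not the authors' MATLAB package): the truncated l_1 problem \min \{ \sum_{i \notin I} |x_i| : Ax = b \} is cast as a linear program via scipy.optimize.linprog, and the support I is detected by a simple magnitude threshold. The thresh_factor heuristic below is my own assumption; the paper's threshold-ISD uses a more careful thresholding rule.

  import numpy as np
  from scipy.optimize import linprog

  def truncated_l1(A, b, I):
      """Solve min sum_{i not in I} |x_i| s.t. Ax = b as a linear program."""
      m, n = A.shape
      w = np.ones(n)
      w[list(I)] = 0.0                      # no l1 penalty on the detected support
      c = np.concatenate([np.zeros(n), w])  # variables z = [x; t], with |x_i| <= t_i
      eye = np.eye(n)
      A_ub = np.block([[eye, -eye], [-eye, -eye]])  # x <= t and -x <= t
      b_ub = np.zeros(2 * n)
      A_eq = np.hstack([A, np.zeros((m, n))])       # Ax = b (t plays no role here)
      bounds = [(None, None)] * n + [(0, None)] * n
      res = linprog(c, A_ub, b_ub, A_eq, b, bounds=bounds, method="highs")
      return res.x[:n]

  def isd(A, b, iters=5, thresh_factor=0.1):
      """Sketch of ISD: alternate truncated l1 recovery and support detection."""
      I = set()                             # empty support: first pass is plain BP
      for _ in range(iters):
          x = truncated_l1(A, b, I)
          I = set(np.flatnonzero(np.abs(x) > thresh_factor * np.abs(x).max()))
      return x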
From the paper:
ISD enhances sparse and compressible signal reconstruction by learning from failed BP-based reconstructions that arise when the number of linear measurements is not sufficient. In a failed reconstruction, the solution of BP is not equal, or even close, to the true signal x. One would normally give up, but ISD learns from the incorrect signal and solves a new optimization problem that returns a perfect or a better solution, requiring no additional measurements.
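To see what "learning from a failed reconstruction" looks like in practice, here is a toy experiment using the sketch above (a random instance of my own making; the sizes are arbitrary and the errors will vary from run to run):

  import numpy as np  # truncated_l1 and isd come from the sketch above

  rng = np.random.default_rng(0)
  m, n, k = 25, 80, 12                   # deliberately too few measurements for BP
  A = rng.standard_normal((m, n))
  x_true = np.zeros(n)
  idx = rng.choice(n, k, replace=False)
  x_true[idx] = rng.standard_normal(k) * np.logspace(0, -2, k)  # fast-decaying nonzeros
  b = A @ x_true

  x_bp  = truncated_l1(A, b, set())      # plain basis pursuit
  x_isd = isd(A, b)
  print("BP  error:", np.linalg.norm(x_bp - x_true))
  print("ISD error:", np.linalg.norm(x_isd - x_true))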
This is great. As a sub-issue, I also noticed the somewhat detailed discussion about NSP- and RIP-based sufficient conditions for recovery:

....We start by introducing the truncated null space property (t-NSP), a generalization of the null space property (NSP) studied in [35, 36, 17, 11, 13]. The NSP is used in slightly different forms and under different names in these papers....
and
...With \gamma \lt 1, the NSP says that any nonzero vector \eta in the null space of A cannot have its l_1-mass concentrated on any set of L or fewer elements. [12] proposes a semidefinite relaxation to test the NSP for general matrices. A sufficient exact reconstruction condition for BP is given in [13] based on the NSP: the true k-sparse signal \bar{x} is the unique solution of BP if A has the NSP of order L \ge k and 0 \lt \gamma \lt 1. The more widely known restricted isometry property (RIP) is more restrictive than the NSP for establishing recoverability and stability results for BP [8, 7]. .... The NSP is more relaxed than the RIP. ....
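For reference, the standard form of the NSP being paraphrased above can be written as follows (the paper's t-NSP further restricts which index sets S are considered, which I omit here):

  % A satisfies the NSP of order L with constant \gamma < 1 if the l_1-mass of
  % every nonzero null-space vector on any L or fewer indices is dominated by
  % its mass on the complement:
  \|\eta_S\|_1 \le \gamma \, \|\eta_{S^c}\|_1
  \qquad \text{for all } \eta \in \mathcal{N}(A) \setminus \{0\},\ |S| \le L.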
Ah! I recoiled at this statement since it looks to me like the opposite statement was made earlier (see my comment in this entry). I fired off an e-mail to Wotao asking the following:
In your paper you mention that the NSP is more relaxed than the RIP-based sufficient property. Does this mean that there are cases where either:
  1. the NSP-based sufficient property can be fulfilled while at the same time the RIP-based sufficient property is not fulfilled, OR
  2. the RIP-based sufficient property can be fulfilled while at the same time the NSP-based sufficient property is not fulfilled?


To which Wotao kindly responded:
1. The NSP-based sufficient property can be fulfilled while at the same time the RIP-based sufficient property is not fulfilled.

This holds!

Suppose that A has both the NSP and the RIP (with certain parameters). Let B = PA where P is a non-singular matrix. Then B has the same NSP, but it is not clear whether it has the RIP or not.
Thanks, Wotao!
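The argument hinges on the fact that null(PA) = null(A) for any non-singular P, and the NSP only constrains null-space vectors, while the restricted isometry constants of PA can be ruined by a badly conditioned P. A quick numerical sketch of this (my own illustration, with arbitrary sizes):

  import numpy as np

  rng = np.random.default_rng(1)
  m, n = 20, 50
  A = rng.standard_normal((m, n)) / np.sqrt(m)
  P = np.diag(np.logspace(0, 6, m))   # invertible but badly conditioned
  B = P @ A

  # The null spaces agree, so the NSP (a null-space condition) is preserved...
  null_A = np.linalg.svd(A)[2][m:].T  # orthonormal basis of null(A)
  print("max |B @ null_A|:", np.abs(B @ null_A).max())  # ~ 0 up to rounding

  # ...but the RIP-style conditioning of small column submatrices degrades.
  S = rng.choice(n, 5, replace=False)
  for M, name in [(A, "A"), (B, "B")]:
      s = np.linalg.svd(M[:, S], compute_uv=False)
      print(name, "singular-value spread on a 5-column submatrix:", s[0] / s[-1])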
