Sunday, May 23, 2010

CS: The long post of the week: CS over the Grassmann Manifold, RMT, Multi-view Imaging, a job and more.

A version of this entry went out earlier than I intended. Here is the corrected version:

Today we have a new paper that extends the phase transition analysis highlighted by Jared Tanner and David Donoho, expanding it (with a concept different from the projection of k-neighborly polytopes) to the case where noise is present.
Compressive Sensing over the Grassmann Manifold: a Unified Geometric Framework by Weiyu Xu, Babak Hassibi. The abstract reads:
$\ell_1$ minimization is often used for finding the sparse solutions of an under-determined linear system. In this paper we focus on finding sharp performance bounds on recovering approximately sparse signals using $\ell_1$ minimization, possibly under noisy measurements. While the restricted isometry property is powerful for the analysis of recovering approximately sparse signals with noisy measurements, the known bounds on the achievable sparsity (The "sparsity" in this paper means the size of the set of nonzero or significant elements in a signal vector.) level can be quite loose. The neighborly polytope analysis which yields sharp bounds for ideally sparse signals cannot be readily generalized to approximately sparse signals. Starting from a necessary and sufficient condition, the "balancedness" property of linear subspaces, for achieving a certain signal recovery accuracy, we give a unified \emph{null space Grassmann angle}-based geometric framework for analyzing the performance of $\ell_1$ minimization. By investigating the "balancedness" property, this unified framework characterizes sharp quantitative tradeoffs between the considered sparsity and the recovery accuracy of the $\ell_{1}$ optimization. As a consequence, this generalizes the neighborly polytope result for ideally sparse signals. Besides the robustness in the "strong" sense for \emph{all} sparse signals, we also discuss the notions of "weak" and "sectional" robustness. Our results concern fundamental properties of linear subspaces and so may be of independent mathematical interest.

Of interest are the following two figures, where the parameter C provides a way to go from the Donoho-Tanner transition to a figure for compressible signals (as opposed to just sparse signals).

[The two figures from the paper appeared here.]

At the conclusion of the paper we can read:
It is well known that l_1 optimization can be used to recover ideally sparse signals in compressive sensing, if the underlying signal is sparse enough. While for the ideally sparse signals, the results of [Don06b] have given us very sharp bounds on the sparsity threshold the l_1 minimization can recover, sharp bounds for the recovery of general signals or approximately sparse signals were not available. In this paper we analyzed a null space characterization of the measurement matrices for the performance bounding of l_1-norm optimization for general signals or approximately sparse signals.
Using high-dimensional geometry tools, we give a unified null space Grassmann angle-based analytical framework for compressive sensing. This new framework gives sharp quantitative tradeoffs between the signal sparsity parameter and the recovery accuracy of the l_1 optimization for general signals or approximately sparse signals. As expected, the neighborly polytopes result of [Don06b] for ideally sparse signals can be viewed as a special case on this tradeoff curve. It can therefore be of practical use in applications where the underlying signal is not ideally sparse and where we are interested in the quality of the recovered signal. For example, using the results and their extensions in this paper and [Don06b], we are able to give a precise sparsity threshold analysis for weighted l_1 minimization when prior information about the signal vector is available [KXAH09]. In [XKAH10], using the robustness result from this paper, we are able to show that a polynomial-time iterative weighted l_1 minimization algorithm can provably improve over the sparsity threshold of l_1 minimization for interesting classes of signals, even when prior information is not available. In essence, this work investigates the fundamental “balancedness” property of linear subspaces, and may be of independent mathematical interest. In future work, it is interesting to obtain more accurate analysis for compressive sensing under noisy measurements than presented in the current paper.
In the Donoho and Tanner papers, one could see how to produce the figures. It does not seem obvious in this case, and it's a shame, as eventually we need similar tools for different l_1 solvers or even l_0 solvers. I also note the use of Random Matrix Theory results in this paper.
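For readers who want to see where such curves come from, here is a minimal Monte Carlo sketch (my own, not the authors') of how the Donoho-Tanner phase transition is usually traced empirically: for each undersampling ratio delta = m/n and sparsity ratio rho = k/m, count how often l_1 minimization, solved here as a plain linear program with scipy, exactly recovers a random k-sparse signal from Gaussian measurements.

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(A, y):
    """min ||x||_1 s.t. Ax = y, as a linear program with the split x = u - v, u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]

def success_rate(n=100, delta=0.5, rho=0.3, trials=20, tol=1e-4, seed=0):
    """Fraction of trials in which l_1 minimization exactly recovers a k-sparse signal."""
    rng = np.random.default_rng(seed)
    m = int(delta * n)
    k = max(1, int(rho * m))
    hits = 0
    for _ in range(trials):
        A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurement matrix
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        x_hat = l1_min(A, A @ x)
        hits += np.linalg.norm(x_hat - x) < tol * np.linalg.norm(x)
    return hits / trials

# Sweeping delta and rho over a grid and thresholding success_rate at 0.5
# traces the familiar Donoho-Tanner phase-transition curve.
```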


Roman Vershynin has some new slides on Random Matrix Theory that are to be presented at the upcoming conference on Random Matrices in Paris.
The program of the conference is finally out and it is here.

Here is a new paper on manifold signal processing applied to imaging. It sure looks very interesting, especially its connection to CT imaging: A Geometric Approach to Multi-view Compressive Imaging by Jae Young Park and Michael B. Wakin. The introduction reads:
I. INTRODUCTION
Armed with potentially limited communication and computational resources, designers of distributed imaging systems face increasing challenges in the quest to acquire, compress, and communicate ever richer and higher-resolution image ensembles. In this paper, we consider multiview imaging problems in which an ensemble of cameras collect images describing a common scene. To simplify the acquisition and encoding of these images, we study the effectiveness of non-collaborative compressive sensing (CS) [2, 3] encoding schemes wherein each sensor directly and independently compresses its image using a small number of randomized measurements (see Fig. 1). CS is commonly intended for the encoding of a single signal, and a rich theory has been developed for signal recovery from incomplete measurements by exploiting the assumption that the signal obeys a sparse model. In this paper, we address the problem of how to recover an ensemble of images from a collection of image-by-image random measurements. To do this, we advocate the use of implicitly geometric models to capture the joint structure among the images.

CS is particularly useful in two scenarios. The first is when a high-resolution signal is difficult to measure directly. For example, conventional infrared cameras require expensive sensors, and with increasing resolution such cameras can become extremely costly. A compressive imaging camera has been proposed [4] that can acquire a digital image using far fewer (random) measurements than the number of pixels in the image. Such a camera is simple and inexpensive and can be used not only for imaging at visible wavelengths, but also for imaging at nonvisible wavelengths. A second scenario where CS is useful is when one or more high-resolution signals are difficult or expensive to encode. Such scenarios arise, for example, in sensor networks and multi-view imaging, where it may be feasible to measure the raw data at each sensor, but joint, collaborative compression of that data among the sensors would require costly communication.

As an alternative to conventional Distributed Source Coding (DSC) methods [5], an extension of single-signal CS known as Distributed CS (DCS) [6] has been proposed, where each sensor encodes only a random set of linear projections of its own observed signal. These projections could be obtained either by using CS hardware as described above, or by using a random, compressive encoding of the data collected from a conventional sensor. While DCS encoding is non-collaborative, an effective DCS decoder should reconstruct all signals jointly to exploit their common structure. As we later discuss, most existing DCS algorithms for distributed imaging reconstruction rely fundamentally on sparse models to capture intra- and inter-signal correlations [6–9]. What is missing from each of these algorithms, however, is an assurance that the reconstructed images have a global consistency, i.e., that they all describe a common underlying scene. This may not only lead to possible confusion in interpreting the images, but more critically may also suggest that the reconstruction algorithm is failing to completely exploit the joint structure of the ensemble.

To better extend DCS techniques specifically to problems involving multi-view imaging, we propose in this paper a general geometric framework in which many such reconstruction problems may be cast.
In particular, we explain how viewing the unknown images as living along a low dimensional manifold within the high-dimensional signal space can inform the design of effective joint reconstruction algorithms. Such algorithms can build on existing sparsity-based techniques for CS but ensure a global consistency among the reconstructed images. We refine our discussion by focusing on two settings: far-field and near-field multi-view imaging. Finally, as a proof of concept, we demonstrate a “manifold lifting” algorithm in a specific far-field multi-view scenario where the camera positions are not known a priori and we only observe a small number of random measurements at each sensor. Even in such discouraging circumstances, by effectively exploiting the geometrical information preserved in the manifold model, we are able to accurately reconstruct both the underlying scene and the camera positions.
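As a concrete (and assumed) illustration of the non-collaborative encoding step the introduction describes, here is a toy sketch in which each camera independently applies its own random measurement matrix to its vectorized image; the joint, manifold- or sparsity-aware decoder would then operate on all of these measurements together.

```python
import numpy as np

def dcs_encode(images, M, seed=0):
    """Non-collaborative compressive encoding: each camera draws its own M x N
    Gaussian matrix and keeps only M random measurements of its N-pixel image."""
    rng = np.random.default_rng(seed)
    measurements, matrices = [], []
    for img in images:                       # img: length-N vectorized image
        Phi = rng.standard_normal((M, img.size)) / np.sqrt(M)
        matrices.append(Phi)
        measurements.append(Phi @ img)       # no communication between cameras
    return measurements, matrices
```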

Here is another very interesting one: Reconstruction of Sparse Signals from Distorted Randomized Measurements by Petros Boufounos. The abstract reads:

In this paper we show that, surprisingly, it is possible to recover sparse signals from nonlinearly distorted measurements, even if the nonlinearity is unknown. Assuming just that the nonlinearity is monotonic, we use the only reliable information in the distorted measurements: their ordering. We demonstrate that this information is sufficient to recover the signal with high precision and present two approaches to do so. The first uses order statistics to compute the minimum mean squared error (MMSE) estimate of the undistorted measurements and uses it with standard compressive sensing (CS) reconstruction algorithms. The second uses the principle of consistent reconstruction to develop a deterministic nonlinear reconstruction algorithm that ensures that measurements of the reconstructed signal have an ordering consistent with the ordering of the distorted measurements. Our experiments demonstrate the superior performance of both approaches compared to standard CS methods.
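To make the ordering idea concrete, here is a rough sketch (my own reading of the abstract, not the paper's estimator): since a monotonic distortion preserves the ranks of the measurements, one can replace each distorted value by an approximation of the expected Gaussian order statistic of the same rank and feed these surrogate measurements to any standard CS solver.

```python
import numpy as np
from scipy.stats import norm

def rank_surrogate(y_distorted):
    """Replace distorted measurements by approximate expected Gaussian order
    statistics of the same rank (Blom's approximation); the ranks are all that
    an unknown monotonic distortion leaves intact."""
    m = len(y_distorted)
    ranks = np.argsort(np.argsort(y_distorted)) + 1        # ranks in 1..m
    return norm.ppf((ranks - 0.375) / (m + 0.25))          # approx E[order statistic]
```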


This one just came up on my radar screen and is behind a paywall [update: this link goes to Amit's page directly. Thanks Amit]: Compressive light field imaging by Amit Ashok, Mark Neifeld. The abstract reads:

Light field imagers such as the plenoptic and the integral imagers inherently measure projections of the four dimensional (4D) light field scalar function onto a two dimensional sensor and therefore, suffer from a spatial vs. angular resolution trade-off. Programmable light field imagers, proposed recently, overcome this spatioangular resolution trade-off and allow high-resolution capture of the (4D) light field function with multiple measurements at the cost of a longer exposure time. However, these light field imagers do not exploit the spatio-angular correlations inherent in the light fields of natural scenes and thus result in photon-inefficient measurements. Here, we describe two architectures for compressive light field imaging that require relatively few photon-efficient measurements to obtain a high-resolution estimate of the light field while reducing the overall exposure time. Our simulation study shows that compressive light field imagers using the principal component (PC) measurement basis require four times fewer measurements and three times shorter exposure time compared to a conventional light field imager in order to achieve an equivalent light field reconstruction quality.
Let us note the new address of Mark Neifeld.
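As a toy illustration of what a principal-component measurement basis could look like (my assumption of the setup, not the authors' architecture), one can learn the top K principal components from a set of training light fields and use them as the rows of the measurement matrix:

```python
import numpy as np

def pc_measurement_basis(training, K):
    """training: rows are vectorized example light fields; returns a K x N
    measurement matrix whose rows are the top K principal components."""
    X = training - training.mean(axis=0)          # center the training examples
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:K]

# Each compressive measurement of a new scene is then an inner product with a PC:
# y = pc_measurement_basis(training, K) @ light_field_vector
```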

The next paper reminded me of others and is interesting in that the PSF is far from being a nice little delta: Non-uniform Deblurring for Shaken Images by Oliver Whyte, Josef Sivic, Andrew Zisserman, Jean Ponce. The abstract reads:
Blur from camera shake is mostly due to the 3D rotation of the camera, resulting in a blur kernel that can be significantly non-uniform across the image. However, most current deblurring methods model the observed image as a convolution of a sharp image with a uniform blur kernel. We propose a new parametrized geometric model of the blurring process in terms of the rotational velocity of the camera during exposure. We apply this model to two different algorithms for camera shake removal: the first one uses a single blurry image (blind deblurring), while the second one uses both a blurry image and a sharp but noisy image of the same scene. We show that our approach makes it possible to model and remove a wider class of blurs than previous approaches, including uniform blur as a special case, and demonstrate its effectiveness with experiments on real images.
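To see why such a kernel is far from a nice little delta, here is a bare-bones sketch (my own discretization, not the authors' implementation) of the geometric model: the blurred image is a weighted sum of the sharp image warped by the homographies induced by sampled camera rotations, so the effective kernel varies across the image.

```python
import numpy as np

def rotational_blur(sharp, homographies, weights, warp):
    """sharp: 2-D image; homographies: list of 3x3 rotation-induced homographies;
    weights: fraction of the exposure spent at each orientation (summing to 1);
    warp(image, H): any homography-warping routine, e.g. from OpenCV or scikit-image."""
    blurred = np.zeros_like(sharp, dtype=float)
    for H, w in zip(homographies, weights):
        blurred += w * warp(sharp, H)   # spatially varying blur: no single shared kernel
    return blurred
```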

Dualization of Signal Recovery Problems by Patrick L. Combettes, Dinh Dung, and Bang Cong Vu. The abstract reads:
In convex optimization, duality theory can sometimes lead to simpler solution methods than those resulting from direct primal analysis. In this paper, this principle is applied to a class of composite variational problems arising in image (and, more generally, signal) recovery. These problems are not easily amenable to solution by current methods but they feature Fenchel-Moreau-Rockafellar dual problems that can be solved reliably by forward-backward splitting and allow for a simple construction of primal solutions from dual solutions. The proposed framework is shown to capture and extend several existing duality-based signal recovery methods and to be applicable to a variety of new problems beyond their scope.
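For readers who have not met forward-backward splitting, here is a minimal sketch of the iteration on the simplest composite problem, min_x (1/2)||Ax - y||^2 + lambda ||x||_1, a gradient step followed by a proximal step; the paper's contribution is to apply this type of scheme to the dual of much more general recovery problems.

```python
import numpy as np

def forward_backward(A, y, lam, n_iter=200):
    """Proximal-gradient (forward-backward) iterations for
    min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    step = 1.0 / (np.linalg.norm(A, 2) ** 2)      # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))                         # forward (gradient) step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # backward (prox of l1) step
    return x
```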
This one is a little old, but it touches on group testing, which shares a strong connection with compressive sensing: Statistical physics of group testing by Marc Mezard, Marco Tarzia and Cristina Toninelli. The abstract reads:
This paper provides a short introduction to the group testing problem, and reviews various aspects of its statistical physics formulation. Two main issues are discussed: the optimal design of pools used in a two-stage testing experiment, like the one often used in medical or biological applications, and the inference problem of detecting defective items based on pool diagnosis. The paper is largely based on: M. Mezard and C. Toninelli, arXiv:0706.3104, and M. Mezard and M. Tarzia, Phys. Rev. E 76, 041124 (2007).
and

Group testing with Random Pools: Phase Transitions and Optimal Strategy by Marc Mezard, Marco Tarzia and Cristina Toninelli. The abstract reads:
The problem of Group Testing is to identify defective items out of a set of objects by means of pool queries of the form "Does the pool contain at least one defective?". The aim is of course to perform detection with the fewest possible queries, a problem which has relevant practical applications in different fields including molecular biology and computer science. Here we study GT in the probabilistic setting, focusing on the regime of small defective probability and large number of objects, $p \to 0$ and $N \to \infty$. We construct and analyze one-stage algorithms for which we establish the occurrence of a non-detection/detection phase transition resulting in a sharp threshold, $\bar M$, for the number of tests. By optimizing the pool design we construct algorithms whose detection threshold follows the optimal scaling $\bar M\propto Np|\log p|$. Then we consider two-stage algorithms and analyze their performance for different choices of the first stage pools. In particular, via a proper random choice of the pools, we construct algorithms which attain the optimal value (previously determined in Ref. [16]) for the mean number of tests required for complete detection. We finally discuss the optimal pool design in the case of finite $p$.
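To get a feel for the phase transition they describe, here is a toy one-stage simulation of my own (not the paper's optimized pool design): items are defective independently with probability p, each of M random pools reports whether it contains at least one defective, and any item appearing in a negative pool is declared clean.

```python
import numpy as np

def simulate_group_testing(N=1000, p=0.01, M=100, pool_size=50, seed=0):
    """One-stage random-pool group testing: returns the true defectives, the pool
    outcomes, and the items cleared by appearing in at least one negative pool."""
    rng = np.random.default_rng(seed)
    defective = rng.random(N) < p
    pools = [rng.choice(N, pool_size, replace=False) for _ in range(M)]
    positive = np.array([defective[pool].any() for pool in pools])
    cleared = np.zeros(N, dtype=bool)
    for pool, pos in zip(pools, positive):
        if not pos:
            cleared[pool] = True               # every item in a negative pool is clean
    return defective, positive, cleared

# Sweeping M and checking how often all non-defective items end up cleared
# exhibits the sharp detection threshold discussed in the abstract.
```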
Group testing, random batches and phase transitions, that sounds quite familiar, uh?

Here is a presentation of a poster entitled: Can Compressive Sensing Improve Low-contrast Object Detectability in Accelerated MRI Applications? by Joshua D. Trzasko, Armando Manduca, Zhonghao Bao, Scott O. Stiving, Kiaran P. McGee, Matt A. Bernstein. The description reads:
Purpose: Compressive Sensing (CS) methods can reduce image artifacts when reconstructing undersampled data sets. Most MRI applications of CS, however, have focused on high contrast objects such as gadolinium-enhanced vessels, rather than on low-contrast object detectability (LCOD). Using a novel computational framework, we rigorously examine whether CS reconstruction can improve the LCOD performance of several standard techniques for undersampled MRI reconstruction, across a variety of undersampling rates and strategies.

Methods and Materials: The American College of Radiology (ACR) quality control (QC) phantom was imaged on a GE 14.0 1.5T MRI using our routine quality assurance protocol and an 8-channel head coil. The raw k-space data corresponding to the 5.1% contrast detectability plane was retrospectively undersampled along the phase-encoded direction at 10 different rates (10-100%) and, for each, using 3 different distribution strategies: 1) low-frequency band only; 2) uniform sampling; 3) random sampling (Poisson Disk). For the latter case, 5 sampling instances were generated at each rate. Each undersampled data set was reconstructed using 3 different strategies: 1) zero-filling with root sum-of-squares combination; 2) Tikhonov-SENSE; and 3) Compressive Sensing (L1-minimization, finite difference sparsity). Reconstruction results were then analyzed with our in-house developed QC software to automatically determine the fraction of completely visually detectable spokes, which is a measure of LCOD performance.

Results: Across all sampling rates and under all sampling strategies, the LCOD score for CS reconstructions was consistently equal to or higher than that of the other two reconstruction methods. The CS advantage was especially pronounced at very low sampling rates (≤30%).

Conclusion: Although most CS work to date has focused on high-contrast objects, CS reconstructions consistently improved LCOD compared to several standard MRI reconstruction techniques for undersampled data. These results suggest that CS reconstruction is useful not only for undersampled high contrast objects, but can improve LCOD as well.
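Here is a small sketch of the three phase-encode undersampling strategies the poster compares (my reading of the description; the exact parameters are assumed and the Poisson-disk variant is reduced to plain random lines): keep a fraction `rate` of the ny phase-encode lines, either as a central low-frequency band, uniformly spaced, or drawn at random.

```python
import numpy as np

def phase_encode_mask(ny, rate, strategy, seed=0):
    """Boolean mask over ny phase-encode lines; True = line is acquired."""
    rng = np.random.default_rng(seed)
    n_keep = max(1, int(round(rate * ny)))
    keep = np.zeros(ny, dtype=bool)
    if strategy == "low_frequency":            # central band of k-space only
        start = ny // 2 - n_keep // 2
        keep[start:start + n_keep] = True
    elif strategy == "uniform":                # evenly spaced lines
        keep[np.linspace(0, ny - 1, n_keep).astype(int)] = True
    elif strategy == "random":                 # random lines (Poisson disk omitted here)
        keep[rng.choice(ny, n_keep, replace=False)] = True
    return keep                                # apply along the phase-encoded axis
```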
Version 2 of Estimation with Random Linear Mixing, Belief Propagation and Compressed Sensing is now available.


Namrata Vaswani will give a talk at UCLA entitled Recursive Reconstruction of Sparse Signal Sequences on Tuesday, May 25, 2010, at 11:00 am in Engr IV Maxwell Room 57-124.

Abstract
Consider the problem of recursively and causally reconstructing a time sequence of sparse signals from a greatly reduced number of linear projection measurements at each time. The signals are sparse in some transform domain referred to as the sparsity basis and their sparsity patterns can change with time. Some key applications where this problem occurs include dynamic MR imaging for real-time applications such as MR image-guided surgery or real-time single-pixel video imaging. Since the recent introduction of compressive sensing (CS), the static sparse reconstruction problem has been thoroughly studied. But most existing algorithms for the dynamic problem are batch solutions with very high complexity. Using the empirically observed fact that sparsity patterns change slowly over time, the recursive reconstruction problem can be formulated as one of sparse reconstruction with partially known support. We develop two classes of approaches to solve this problem - CS-Residual and Modified-CS, both of which have the same complexity as CS at a single time instant (simple CS), but achieve exact/accurate reconstruction using much fewer measurements.

Under the practically valid assumption of slowly changing support, Modified-CS achieves provably exact reconstruction using much fewer noise-free measurements than those needed to provide the same guarantee for simple CS. When using noisy measurements, under fairly mild assumptions, and again using fewer measurements, it can be shown that the error bounds for both Modified-CS and CS-Residual are much smaller; and most importantly, their errors are "stable" (remain bounded by small time-invariant values) over time. The proof of stability is critical for any recursive algorithm since it ensures that the error does not blow up over time. Experiments for the dynamic MRI application back up these claims for real data. Important extensions that also use the slow change of signal values over time are developed.
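A minimal sketch of the Modified-CS idea, as I understand it from the abstract (not Namrata Vaswani's code): with T the support estimate carried over from the previous time instant, minimize the l_1 norm only over the complement of T subject to the measurement constraint, for example with cvxpy:

```python
import numpy as np
import cvxpy as cp

def modified_cs(A, y, T):
    """Sparse recovery with partially known support T:
    minimize ||x_{T^c}||_1 subject to A x = y."""
    n = A.shape[1]
    Tc = np.setdiff1d(np.arange(n), np.asarray(T, dtype=int))   # complement of the known support
    x = cp.Variable(n)
    problem = cp.Problem(cp.Minimize(cp.norm1(x[Tc])), [A @ x == y])
    problem.solve()
    return x.value
```

With noisy measurements, the equality constraint would be relaxed to a residual bound, which is the setting where the stability results mentioned above apply.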


There is a job announcement at Qualcomm that includes as a requirement the need to know about compressed sensing.

Requisition # G1880018
Job Title Sr Systems Engineer Algorithms / Corporate R&D
Post Date 5/20/2010
Division Corporate Research & Development
Job Area Engineering - Systems
Location California - San Diego
Job Function QUALCOMM Corporate R&D has a mission to advance wireless and mobile communications to create a compelling user experience with next-generation mobile devices. We seek a Systems Engineer with demonstrable knowledge and experience in algorithms and modeling, embedded device architecture, protocols and system design.

We expect our systems engineers to keep up to date and understand technological advances in areas directly and indirectly related to mobile and wireless systems and devices and channel those trends toward enabling new applications. To that end, the responsibilities include developing new system architectures, protocols, services, standards, middleware or hardware enhancements.
Skills/Experience
  • Algorithm design and analysis
  • Statistics and data mining
  • Machine learning/robotics
  • Data modeling and analysis
  • Graph theory and information theory
  • Real-time, embedded and/or distributed systems
  • Mobile and/or hardware optimizations
  • Sensor data analysis and compressed sensing
Responsibilities Candidates will be providing deep understanding and expertise in algorithms, data acquisition, data modeling and analysis.

Will also work in information theory, compressed sensing, theory and practice of machine learning, communication systems, automatic control systems.

Implementation and architecture of mobile platforms, including special purpose hardware engines, data buses and power management.
Education Requirements Required: Master's, Computer Engineering and/or Computer Science and/or Electrical Engineering
Preferred: Doctorate, Computer Engineering and/or Computer Science and/or Electrical Engineering


I'll add it to the compressive sensing jobs page.
