Thursday, July 02, 2009

CS: Compressive Inverse Scattering, CT-CS Imaging, Trace Norm Regularization, SL0 and another Bayesian algorithm


Here is a new paper from Arxiv:

Compressive Inverse Scattering I. High Frequency SIMO Measurements by Albert C. Fannjiang. The abstract reads:

The inverse scattering problem with point scatterers and the single-input-multiple-output (SIMO) measurements is analyzed by the compressed sensing techniques with and without the Born approximation. Three main results about (probabilistic) recoverability of sparse target by the $L^1$-optimization technique called Basis Pursuit (BP) are obtained in the high frequency limit. In the absence of noise, it is shown that BP can recover exactly the target of sparsity up to the dimension of the measurement either with the multi-shot SIMO measurement for the Born scattering or with the single-shot SIMO measurement for the exact scattering. The stability with respect to noisy data is proved for weak or widely separated scatterers under a stronger sparsity constraint.
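For readers who want to see Basis Pursuit in action, here is a small self-contained sketch using scipy's LP solver. The paper's SIMO scattering matrix is replaced here by a generic Gaussian matrix, and BP is written as the standard linear program over variables (x, u) with |x_i| <= u_i; this illustrates the recovery claim, not the paper's scattering setup.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 20, 40, 3

# Generic Gaussian sensing matrix and a k-sparse target.
A = rng.standard_normal((m, n))
x0 = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x0[support] = rng.standard_normal(k)
b = A @ x0

# Basis Pursuit as a linear program over (x, u), |x_i| <= u_i:
#   minimize sum(u)  s.t.  Ax = b,  x - u <= 0,  -x - u <= 0.
c = np.concatenate([np.zeros(n), np.ones(n)])
A_eq = np.hstack([A, np.zeros((m, n))])
I = np.eye(n)
A_ub = np.vstack([np.hstack([I, -I]), np.hstack([-I, -I])])
b_ub = np.zeros(2 * n)
bounds = [(None, None)] * n + [(0, None)] * n

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b, bounds=bounds)
x_hat = res.x[:n]
```

With 3 nonzeros and 20 Gaussian measurements, the LP returns the sparse target exactly, in line with the paper's "sparsity up to the dimension of the measurement" regime.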
From the latest e-publications at http://www.optimization-online.org, there are three papers of interest:
Reconstruction of CT Images from Parsimonious Angular Measurements via Compressed Sensing by Necdet Serhat Aybat, Amit Chakraborty. The abstract reads:
Computed Tomography is one of the most popular diagnostic tools available to medical professionals. However, its diagnostic power comes at a cost to the patient: significant radiation exposure. The amount of radiation exposure is a function of the number of angular measurements necessary to successfully reconstruct the imaged volume. Compressed sensing, on the other hand, is a technique that allows one to reconstruct signals from a very limited set of samples, provided that the target signal is compressible in a transform domain. Combining the two gives clinicians the benefits of CT while at the same time limiting the risk posed by excessive radiation. In this work we formulate the computed tomography reconstruction problem within a compressed sensing framework using the partial pseudo-polar Fourier transform. Simulated results indicate that the number of angular projections, and hence the associated radiation, can be cut by a factor of 4 or more without noticeable loss of image quality.
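The paper's 2D pseudo-polar Fourier transform is beyond a short snippet, but the basic ingredient of any Fourier-domain CS reconstruction, a matrix-free forward/adjoint operator pair for partial Fourier sampling, can be sketched in a few lines (1D here for brevity; the retained frequency rows stand in for the retained projection angles):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 64, 16
rows = np.sort(rng.choice(n, size=m, replace=False))  # kept frequencies

# Explicit partial (unitary) Fourier matrix, for checking only.
F = np.fft.fft(np.eye(n), norm="ortho")[rows, :]

def forward(x):
    """Measurements y = F x, computed matrix-free via the FFT."""
    return np.fft.fft(x, norm="ortho")[rows]

def adjoint(y):
    """F^H y: zero-fill the missing frequencies, then inverse FFT.
    Iterative CS solvers only ever need this forward/adjoint pair."""
    z = np.zeros(n, dtype=complex)
    z[rows] = y
    return np.fft.ifft(z, norm="ortho")

x = rng.standard_normal(n)
y = rng.standard_normal(m) + 1j * rng.standard_normal(m)
```

The point of the matrix-free form is that n is huge in imaging, so storing F explicitly is out of the question; solvers like the one in the next paper only require fast Ax and A^T y products.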

A First-Order Smoothed Penalty Method for Compressed Sensing by Necdet Serhat Aybat, Garud Iyengar. The abstract reads:
We propose a first-order smoothed penalty algorithm (SPA) to solve the sparse recovery problem min{||x||_1 : Ax=b}. SPA is efficient as long as the matrix-vector products Ax and A^T y can be computed efficiently; in particular, A need not be an orthogonal projection matrix. SPA converges to the target signal by solving a sequence of penalized optimization sub-problems, each solved using Nesterov's optimal algorithm for simple sets [13, 14]. We show that the SPA iterates x_k are eps-feasible, i.e. ||Ax_k - b||_2 <= eps, and eps-optimal, i.e. | ||x_k||_1 - ||x*||_1 | <= eps, after O(eps^(-3/2)) iterations. We also bound the sub-optimality | ||x_k||_1 - ||x*||_1 | for any iterate x_k; thus, the user can stop the algorithm at any iteration k with a guarantee on the sub-optimality. SPA works with an L_1, L_2 or L_infinity penalty on the infeasibility, and it extends easily to the relaxed recovery problem min{||x||_1 : ||Ax-b||_2 <= eps}.


And finally, while not CS per se, the use of the nuclear norm as a regularization technique is interesting: Trace Norm Regularization: Reformulations, Algorithms, and Multi-task Learning by Ting Kei Pong, Paul Tseng, Shuiwang Ji, Jieping Ye. The abstract reads:
We consider a recently proposed optimization formulation of multi-task learning based on trace norm regularized least squares. While this problem may be formulated as a semidefinite program (SDP), its size is beyond general SDP solvers. Previous solution approaches apply proximal gradient methods to solve the primal problem. We derive new primal and dual reformulations of this problem, including a
reduced dual formulation that involves minimizing a convex quadratic function over an operator-norm ball in matrix space. This reduced dual problem may be solved by gradient-projection methods, with each projection involving a singular value decomposition. The dual approach is compared with existing approaches and its practical effectiveness is illustrated on simulations and an application to gene expression pattern analysis.
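The per-iteration primitive of that gradient-projection scheme, projecting onto the operator-norm ball, is a single SVD with the singular values clipped at the radius. A minimal numpy version:

```python
import numpy as np

def project_operator_norm_ball(Y, radius=1.0):
    """Project Y onto {Z : ||Z||_op <= radius}: compute one SVD and
    clip the singular values at the radius. This is the projection
    a gradient-projection method on the reduced dual would call."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ (np.minimum(s, radius)[:, None] * Vt)

rng = np.random.default_rng(4)
Y = rng.standard_normal((8, 5)) * 3.0
Z = project_operator_norm_ball(Y)
```

Clipping (rather than zeroing) the singular values is what distinguishes this from the soft-thresholding used in proximal methods for the trace norm itself: the two operations are dual to each other.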
We also have five contributions from the folks who brought us the Smoothed L0 technique (SL0). Sparse Decomposition of Two Dimensional Signals by A. Ghafari, Massoud Babaie-Zadeh and Christian Jutten. The abstract reads:
In this paper, we consider sparse decomposition (SD) of two-dimensional (2D) signals on overcomplete dictionaries with separable atoms. Although this problem can be solved by converting it to the SD of one-dimensional (1D) signals, this approach requires a tremendous amount of memory and computational cost. Moreover, the uniqueness constraint obtained by this approach is too restrictive. We therefore present an algorithm to be used directly for sparse decomposition of 2D signals on dictionaries with separable atoms, and we state another uniqueness constraint for this class of decomposition. Our algorithm is obtained by modifying the Smoothed L0 (SL0) algorithm, and hence we call it two-dimensional SL0 (2D-SL0).
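The memory argument rests on the separable structure: a dictionary with separable atoms is a Kronecker product, so the 2D synthesis A X B^T never requires forming the (much larger) Kronecker matrix. A quick numerical check of the identity, using numpy's row-major vectorization convention:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 6))   # atoms acting along one dimension
B = rng.standard_normal((5, 7))   # atoms acting along the other
X = rng.standard_normal((6, 7))   # 2D coefficient matrix

# Separable 2D synthesis equals the Kronecker dictionary acting on
# the vectorized coefficients (row-major vec: vec(A X B^T) = (A kron B) vec(X)).
lhs = (A @ X @ B.T).ravel()
rhs = np.kron(A, B) @ X.ravel()
```

Here the explicit Kronecker matrix is already 20 x 42 versus storing only a 4 x 6 and a 5 x 7 factor; for image-sized signals the gap is what makes the direct 2D formulation worthwhile.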

Sparse Decomposition Over Non-Full-Rank Dictionaries by Massoud Babaie-Zadeh, Vincent Vigneron and Christian Jutten. The abstract reads:
Sparse Decomposition (SD) of a signal on an overcomplete dictionary has recently attracted a lot of interest in signal processing and statistics, because of its potential application in many different areas including Compressive Sensing (CS). However, in the current literature, the dictionary matrix has generally been assumed to be of full rank. In this paper, we consider non-full-rank dictionaries (which are not even necessarily overcomplete), and extend the definition of SD over these dictionaries. Moreover, we present an approach that enables the use of previously developed SD algorithms in this non-full-rank case. Besides this general approach, for the special case of the Smoothed L0-norm (SL0) algorithm, we show that a slight modification of it automatically covers non-full-rank dictionaries.
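For readers who have not seen SL0 itself, here is a minimal sketch of the algorithm these papers modify: smooth the L0 "norm" with a Gaussian surrogate, take descent steps on it, and project back onto the solution set of Ax = b, while gradually shrinking the smoothing width sigma. The step size, inner-loop count and schedule below are illustrative choices, not the authors' published defaults.

```python
import numpy as np

def sl0(A, b, sigma_min=1e-4, sigma_factor=0.5, mu=1.0, L=10):
    """A minimal SL0 sketch. The surrogate sum(exp(-x_i^2 / 2 sigma^2))
    counts (approximately) the zero entries of x; maximizing it while
    staying on {x : Ax = b} yields increasingly sparse feasible points
    as sigma shrinks."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ b                        # minimum-L2-norm feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(L):
            x = x - mu * x * np.exp(-x**2 / (2.0 * sigma**2))
            x = x - A_pinv @ (A @ x - b)  # project back onto Ax = b
        sigma *= sigma_factor
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((20, 40))
x0 = np.zeros(40)
x0[[2, 11, 30]] = [2.0, -1.5, 1.0]
x_hat = sl0(A, A @ x0)
```

Note that the projection step uses the pseudo-inverse, which is exactly where the full-rank assumption enters in the standard algorithm, and what the slight modification in the paper above addresses.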

Bayesian Pursuit Algorithm for Sparse Representation by Hadi Zayyani, Massoud Babaie-Zadeh and Christian Jutten. The abstract reads:
In this paper, we propose a Bayesian Pursuit algorithm for sparse representation. It combines the simplicity of pursuit algorithms with an optimal Bayesian framework to determine the active atoms in the sparse representation of a signal. We show that using Bayesian hypothesis testing to determine the active atoms from the correlations leads to an efficient activity measure. Simulation results show that, in most cases, our suggested algorithm performs better than the other algorithms implemented in our simulations.
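The "pursuit" baseline that the Bayesian variant refines can be illustrated with classical Orthogonal Matching Pursuit, which also selects active atoms from correlations with the residual; this is the textbook greedy algorithm, not the authors' Bayesian hypothesis test.

```python
import numpy as np

def omp(A, b, k):
    """Orthogonal Matching Pursuit: greedily pick the atom most
    correlated with the current residual, then refit the selected
    atoms by least squares and update the residual."""
    residual, support = b.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(6)
A = rng.standard_normal((40, 80))
x0 = np.zeros(80)
x0[[7, 21, 55]] = [1.8, -1.2, 1.0]
x_hat = omp(A, A @ x0, k=3)
```

The Bayesian Pursuit idea, as the abstract describes it, is to replace the simple argmax-over-correlations selection with a principled hypothesis test on atom activity.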

Inflating Compressed Samples: A Joint Source-Channel Coding Approach for Noise-Resistant Compressed Sensing by A. Hesam Mohseni, Massoud Babaie-Zadeh and Christian Jutten. The abstract reads:
Recently, a lot of research has been done on compressed sensing: capturing compressible signals using random linear projections to a space of radically lower dimension than the ambient dimension of the signal. The main impetus for this is that the radically dimension-lowering linear projection step can be done entirely in analog hardware, in some cases even in constant time, to avoid the bottleneck in the sensing and quantization steps, where a large number of samples need to be sensed and quantized in short order, mandating the use of a large number of fast, expensive sensors and A/D converters. Reconstruction algorithms from these projections have been found that come within distortion levels comparable to the state of the art in lossy compression algorithms. This paper considers a variation on compressed sensing that makes it resistant to spiky noise. This is achieved by an analog real-field error-correction coding step. It results in a small asymptotic overhead in the number of samples, but makes exact reconstruction possible under spiky measurement noise, one type of which is the salt-and-pepper noise in imaging devices. Simulations corroborate our claim and in fact show substantially improved reconstruction under unreliable sensing characteristics, stable even under small perturbations with Gaussian noise.

Robust-SL0 For Stable Sparse Representation in Noisy Settings by Armin Eftekhari, Massoud Babaie-Zadeh, Christian Jutten and Hamid Abrishami Moghaddam. The abstract reads:
In the last few years, we have witnessed an explosion in applications of sparse representation, the majority of which share the need for finding sparse solutions of underdetermined systems of linear equations (USLEs). Based on the recently proposed smoothed l0-norm (SL0), we develop a noise-tolerant algorithm for sparse representation, namely Robust-SL0, enjoying the same computational advantages as SL0 while demonstrating remarkable robustness against noise. The proposed algorithm is developed by adapting the corresponding optimization problem to noisy settings, followed by a theoretically justified approximation to reduce the complexity. Stability properties of Robust-SL0 are rigorously analyzed, both analytically and experimentally, revealing a remarkable improvement in performance over SL0 and other competing algorithms in the presence of noise.


Finally, the paper entitled Sequential adaptive compressed sampling via Huffman codes by Akram Aldroubi, Haichao Wang, Kourosh Zarringhalam has a new version out.

Credit: Hugh McLeod, more evil plans.

