Monday, April 13, 2009

CS: split bregman, Bregmanized Nonlocal Regularization, One Sketch For All, A Sparse and Non-Negative Solution of Ax=b, MMSE Est. for Sparse Rep.


Today we have papers that are not strictly CS but rather hover around the subject; however, if you feel up for a challenge, you may want to take a look at this first one.

Geometric Applications of the Split Bregman Method: Segmentation and Surface Reconstruction by Tom Goldstein, Xavier Bresson and Stanley Osher. The attendant code is here. The abstract reads:
Variational models for image segmentation have many applications, but can be slow to compute. Recently, globally convex segmentation models have been introduced which are very reliable, but contain TV-regularizers, making them difficult to compute. The previously introduced Split Bregman method is a technique for fast minimization of L1 regularized functionals, and has been applied to denoising and compressed sensing problems. By applying the Split Bregman concept to image segmentation problems, we build fast solvers which can out-perform more conventional schemes, such as duality based methods and graph-cuts. We also consider the related problem of surface reconstruction from unorganized data points, which is used for constructing level set representations in 3 dimensions.
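For those who want to see the mechanics rather than the segmentation application, here is a toy sketch (in no way the attendant code linked above) of a Split Bregman iteration for a generic l1-regularized least-squares problem; the matrix A, the parameters mu and lam, and the iteration count are all illustrative choices of mine:

```python
import numpy as np

def split_bregman_l1(A, f, mu=10.0, lam=1.0, n_iter=100):
    """Minimize ||u||_1 + (mu/2)||A u - f||^2 using the splitting d = u."""
    m, n = A.shape
    u, d, b = np.zeros(n), np.zeros(n), np.zeros(n)
    M = mu * A.T @ A + lam * np.eye(n)        # fixed matrix of the u-update
    Atf = mu * A.T @ f
    for _ in range(n_iter):
        u = np.linalg.solve(M, Atf + lam * (d - b))                       # quadratic u-update
        d = np.sign(u + b) * np.maximum(np.abs(u + b) - 1.0 / lam, 0.0)   # shrinkage on d
        b += u - d                                                        # Bregman update
    return u

# tiny usage example on a synthetic sparse recovery problem
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x0 = np.zeros(100); x0[rng.choice(100, 5, replace=False)] = 1.0
u = split_bregman_l1(A, A @ x0)
```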

Bregmanized Nonlocal Regularization for Deconvolution and Sparse Reconstruction by Xiaoqun Zhang, Martin Burger, Xavier Bresson, and Stanley Osher. The abstract reads:
We propose two algorithms based on Bregman iteration and operator splitting technique for nonlocal TV regularization problems. The convergence of the algorithms is analyzed and applications to deconvolution and sparse reconstruction are presented.
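As a reminder of what Bregman iteration does (the nonlocal TV and operator splitting parts of the paper are not reproduced here), here is a hedged toy sketch of the basic "add back the residual" scheme for min ||u||_1 s.t. Au = f, with a plain ISTA inner solver; every parameter value is an arbitrary choice of mine:

```python
import numpy as np

def shrink(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def bregman_l1(A, f, mu=0.1, outer=10, inner=200):
    """Approach min ||u||_1 s.t. Au = f by repeatedly solving an l1-regularized
    least-squares subproblem while adding the residual back to the data."""
    m, n = A.shape
    t = 1.0 / np.linalg.norm(A, 2) ** 2        # ISTA step size (1 / spectral norm squared)
    u, fk = np.zeros(n), np.zeros(m)
    for _ in range(outer):
        fk = f + (fk - A @ u)                  # Bregman step: add back the residual
        for _ in range(inner):                 # inner solve of mu||u||_1 + 0.5||Au - fk||^2
            u = shrink(u - t * A.T @ (A @ u - fk), t * mu)
    return u
```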


One Sketch For All: Theory and Application of Conditional Random Sampling by Ping Li, Kenneth W. Church and Trevor J. Hastie. The abstract reads:
Conditional Random Sampling (CRS) was originally proposed for efficiently computing pairwise (l2, l1) distances, in static, large-scale, and sparse data. This study modifies the original CRS and extends CRS to handle dynamic or streaming data, which much better reflect the real-world situation than assuming static data. Compared with many other sketching algorithms for dimension reductions such as stable random projections, CRS exhibits a significant advantage in that it is “one-sketch-for-all.” In particular, we demonstrate the effectiveness of CRS in efficiently computing the Hamming norm, the Hamming distance, the lp distance, and the l2 distance. A generic estimator and an approximate variance formula are also provided, for approximating any type of distances. We recommend CRS as a promising tool for building highly scalable systems, in machine learning, data mining, recommender systems, and information retrieval.
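To give an idea of what "one sketch for all" looks like in practice, here is a small sketch (in the programming sense) of the basic CRS construction and an l2 estimator built on top of it. The sampling rule follows the description above, but the function names, the bias handling and the parameter choices are mine, not the authors':

```python
import numpy as np

def crs_sketch(x, perm, k):
    """Keep the first k nonzero entries of x when coordinates are scanned
    in the (shared) random order `perm`; store them as {permuted_id: value}."""
    ids = np.flatnonzero(x)
    order = np.argsort(perm[ids])
    ids = ids[order][:k]
    return {int(perm[i]): float(x[i]) for i in ids}

def crs_l2_estimate(s1, s2, D):
    """Estimate the squared l2 distance of two D-dimensional vectors from their sketches."""
    Ds = min(max(s1), max(s2))                 # effective sample range (both sketches cover it)
    keys = {i for i in set(s1) | set(s2) if i <= Ds}
    sample = sum((s1.get(i, 0.0) - s2.get(i, 0.0)) ** 2 for i in keys)
    return sample * D / (Ds + 1)               # rescale the conditional sample (0-indexed ids)

# usage: two sparse vectors, one shared permutation, one sketch each
rng = np.random.default_rng(0)
D = 10_000
x1, x2 = np.zeros(D), np.zeros(D)
x1[rng.choice(D, 300, replace=False)] = rng.standard_normal(300)
x2[rng.choice(D, 300, replace=False)] = rng.standard_normal(300)
perm = rng.permutation(D)
s1, s2 = crs_sketch(x1, perm, 50), crs_sketch(x2, perm, 50)
print(crs_l2_estimate(s1, s2, D), np.sum((x1 - x2) ** 2))
```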


Compressible Dictionary Learning for Fast Sparse Approximation by Mehrdad Yaghoobi and Mike Davies (also in Poster). The abstract reads:

By solving a linear inverse problem under a sparsity constraint, one can successfully recover the coefficients, if there exists such a sparse approximation for the proposed class of signals. In this framework the dictionary can be adapted to a given set of signals using dictionary learning methods. The learned dictionary often does not have useful structures for a fast implementation, i.e. fast matrix-vector multiplication. This prevents such a dictionary from being used in real applications or large scale problems. The structure can be induced on the dictionary throughout the learning process. Examples of such structures are shift-invariance and being multi-scale. These dictionaries can be efficiently implemented using a filter bank. In this paper a well-known structure, called compressibility, is adapted to be used in the dictionary learning problem. As a result, the complexity of the implementation of a compressible dictionary can be reduced by wisely choosing a generative model. By some simulations, it has been shown that the learned dictionary provides sparser approximations, while it does not increase the computational complexity of the algorithms, with respect to the pre-designed fast structured dictionaries.
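The paper's specific generative model is not reproduced here, but the alternation it plugs into looks roughly like the sketch below: a crude thresholding-based sparse coding step, a MOD-style dictionary update, and a stand-in "compressibility" projection (keeping a few Fourier coefficients per atom) that is only meant to show where such a constraint would enter; all sizes and names are mine:

```python
import numpy as np

def threshold_code(D, Y, s):
    """Crude sparse coding: keep the s largest-magnitude coefficients per signal."""
    X = D.T @ Y
    thr = -np.sort(-np.abs(X), axis=0)[s - 1]          # per-signal threshold
    X[np.abs(X) < thr] = 0.0
    return X

def compress_atoms(D, m):
    """Hypothetical compressibility step: keep only m Fourier coefficients per atom."""
    F = np.fft.rfft(D, axis=0)
    keep = np.argsort(-np.abs(F), axis=0)[:m]
    mask = np.zeros(F.shape, dtype=bool)
    np.put_along_axis(mask, keep, True, axis=0)
    D = np.fft.irfft(np.where(mask, F, 0), n=D.shape[0], axis=0)
    return D / (np.linalg.norm(D, axis=0) + 1e-12)

# alternating dictionary learning on synthetic data
rng = np.random.default_rng(0)
n, K, N, s = 64, 96, 500, 5
Y = rng.standard_normal((n, N))
D = rng.standard_normal((n, K)); D /= np.linalg.norm(D, axis=0)
for _ in range(20):
    X = threshold_code(D, Y, s)
    D = Y @ np.linalg.pinv(X)                  # MOD dictionary update
    D = compress_atoms(D, m=8)                 # induce compressible atoms (assumption)
```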

A little old (2008), but I don't recall having featured it before, is a presentation by Michael Elad entitled:
A Sparse and Non-Negative Solution of Ax=b is Necessarily Unique (also in pdf) by Alfred Bruckstein, Michael Elad, and Michael Zibulevsky. The abstract reads:

An underdetermined linear system of equations Ax = b with non-negativity constraint is considered. It is shown that for matrices A with a row-span intersecting the positive orthant, if this problem admits a sufficiently sparse solution, it is necessarily unique. The bound on the required sparsity depends on a coherence property of the matrix A. It is further shown that A undergoes a conditioning stage with some degrees of freedom, which may be used to improve the coherence measure and strengthen the claimed result.
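A quick way to see the claim in action: build an underdetermined system whose row span meets the positive orthant (here simply by making A entrywise non-negative), plant a sparse non-negative solution, and recover it with any convenient convex program, e.g. a linear program minimizing 1'x over the feasible set. The toy setup below is my own illustration, not the authors' experiment:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 20, 50, 3                            # underdetermined system, k-sparse planted solution
A = np.abs(rng.standard_normal((m, n)))        # non-negative rows => row span meets the positive orthant
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.random(k) + 0.5
b = A @ x_true

# if the sparse non-negative solution is unique, any objective over {x >= 0 : Ax = b}
# returns it; minimizing 1'x is a convenient LP formulation
res = linprog(c=np.ones(n), A_eq=A, b_eq=b, bounds=(0, None), method="highs")
print(np.linalg.norm(res.x - x_true))          # close to zero when the uniqueness condition holds
```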

Newer is MMSE Estimation for Sparse Representation Modeling (also in ppt) by Michael Elad. The abstract reads:

Cleaning of noise from signals is a classical and long-studied problem in signal processing. Algorithms for this task necessarily rely on a-priori knowledge about the signal characteristics, along with information about the noise properties. For signals that admit sparse representations over a known dictionary, a commonly used denoising technique is to seek the sparsest representation that synthesizes a signal close enough to the corrupted one. As this problem is too complex in general, approximation methods, such as greedy pursuit algorithms, are often employed. In this line of reasoning, we are led to believe that detection of the sparsest representation is key in the success of the denoising goal. Does this mean that other competitive and slightly inferior sparse representations are meaningless? Suppose we are served with a group of competing sparse representations, each claiming to explain the signal differently. Can those be fused somehow to lead to a better result? Surprisingly, the answer to this question is positive; merging these representations can form a more accurate, yet dense, estimate of the original signal even when the latter is known to be sparse. In this talk we demonstrate this behavior, propose a practical way to generate such a collection of representations by randomizing the Orthogonal Matching Pursuit (OMP) algorithm, and produce a clear analytical justification for the superiority of the associated Randomized OMP (RandOMP) algorithm. We show that while the Maximum a-posteriori Probability (MAP) estimator aims to find and use the sparsest representation, the Minimum Mean-Squared-Error (MMSE) estimator leads to a fusion of representations to form its result. Thus, working with an appropriate mixture of candidate representations, we are surpassing the MAP and tending towards the MMSE estimate, and thereby getting a far more accurate estimation, especially at medium and low SNR. Another topic covered in this talk concerns the case of a unitary dictionary. In such a case it is well-known that the MAP estimator has a closed-form and exact solution, and OMP computes it accurately. Can a similar result be derived for MMSE? We show that this is indeed possible, obtaining a recursive formula that computes the MMSE simply and exactly.
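In case you want to play with the idea, here is a back-of-the-envelope version of RandOMP: each pass is an OMP run where the atom choice is randomized according to the current correlations (the exact RandOMP selection probabilities from the talk are not reproduced; the exp(c * correlation^2) weights below are a simplified stand-in), and averaging many passes gives a dense, MMSE-flavored estimate:

```python
import numpy as np

def rand_omp(A, y, k, c=1.0, rng=None):
    """One randomized-OMP pass: draw each new atom with probability roughly
    proportional to exp(c * correlation^2), then least-squares fit the support."""
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[1]
    support, r = [], y.astype(float).copy()
    for _ in range(k):
        corr = A.T @ r
        w = np.exp(c * (corr**2 - (corr**2).max()))   # numerically stabilized weights
        w[support] = 0.0                              # never re-pick an atom
        j = int(rng.choice(n, p=w / w.sum()))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x

# averaging many randomized runs yields a dense estimate that tends toward the MMSE behavior
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 80)); A /= np.linalg.norm(A, axis=0)
x0 = np.zeros(80); x0[rng.choice(80, 4, replace=False)] = rng.standard_normal(4)
y = A @ x0 + 0.1 * rng.standard_normal(30)
x_mmse = np.mean([rand_omp(A, y, 4, rng=rng) for _ in range(100)], axis=0)
```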

Credit & Copyright: Ralf Vandebergh. This photo shows an astronaut walking next to one of the trusses of the International Space Station. It was taken from the ground (i.e., from about 250 miles away). Via APOD.
