Thursday, February 12, 2009

CS: Unified Approach to Sparse Signal Processing, Using the IHT Algo, Optimal CS Algo, CfP Signal Processing, Stephen Boyd's Class on Youtube, SODA

I'll be in a place where The Google and the Interwebs have trouble reaching people. So while this may look like a long post, I have not written any more posts to be published later this week.

From Arxiv we have: A Unified Approach to Sparse Signal Processing by Farokh Marvasti, A. Amini, F. Haddadi, Mahdi Soltanolkotabi, Babak Khalaj, Akram Aldroubi, Sverre Holm, Saeid Sanei, Jonathon Chambers. The abstract reads:

A unified view of sparse signal processing is presented in tutorial form by bringing together various fields. For each of these fields, various algorithms and techniques, which have been developed to leverage sparsity, are described succinctly. The common benefits of significant reduction in sampling rate and processing manipulations are revealed. The key applications of sparse signal processing are sampling, coding, spectral estimation, array processing, component analysis, and multipath channel estimation. In terms of reconstruction algorithms, linkages are made with random sampling, compressed sensing and rate of innovation. The redundancy introduced by channel coding in finite/real Galois fields is then related to sampling with similar reconstruction algorithms. The methods of Prony, Pisarenko, and MUSIC are next discussed for sparse frequency domain representations. Specifically, the relations of the approach of Prony to an annihilating filter and Error Locator Polynomials in coding are emphasized; the Pisarenko and MUSIC methods are further improvements of the Prony method. Such spectral estimation methods are then related to multi-source location and DOA estimation in array processing. The notions of sparse array beamforming and sparse sensor networks are also introduced. Sparsity in unobservable source signals is also shown to facilitate source separation in SCA; the algorithms developed in this area are also widely used in compressed sensing. Finally, the multipath channel estimation problem is shown to have a sparse formulation; algorithms similar to sampling and coding are used to estimate OFDM channels.
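As a reminder of the Prony/annihilating-filter connection emphasized in the abstract, here is a minimal numpy sketch (my own toy illustration, not code from the paper): with K complex exponentials, 2K noiseless samples determine an annihilating polynomial whose roots are the unknown exponentials. The frequencies 0.1 and 0.27 below are arbitrary choices for the demo.

```python
import numpy as np

# Toy Prony's method: recover K = 2 complex exponentials from 2K noiseless
# samples via the annihilating filter (frequencies are made up for the demo).
K = 2
z_true = np.exp(2j * np.pi * np.array([0.1, 0.27]))
n = np.arange(2 * K)
x = z_true[0] ** n + z_true[1] ** n          # unit-amplitude mixture

# The samples obey x[k+K] + h1*x[k+K-1] + ... + hK*x[k] = 0; solve for h.
A = np.array([[x[k + K - 1 - j] for j in range(K)] for k in range(K)])
b = -x[K:2 * K]
h = np.linalg.solve(A, b)

# The roots of z^K + h1*z^(K-1) + ... + hK are the sought exponentials.
z_est = np.roots(np.concatenate(([1.0], h)))
```

The same linear-recurrence idea is what reappears as the Error Locator Polynomial in coding, as the abstract points out.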
Supporting the recent Sparsify Version 0.4, here are two papers:
How To Use The Iterated Hard Thresholding Algorithm by Thomas Blumensath and Mike Davies. The abstract reads:

Several computationally efficient algorithms have been shown to offer near optimal recovery of sparse signals from a small number of linear measurements. However, whilst many of the methods have similar guarantees whenever the measurements satisfy the so called restricted isometry property, empirical performance of the methods can vary significantly in a regime in which this condition is not satisfied. We here modify the Iterative Hard Thresholding algorithm by including an automatic step-size calculation. This makes the method independent from an arbitrary scaling of the measurement system and leads to a method that shows state of the art empirical performance. What is more, theoretical guarantees derived for the unmodified algorithm carry over to the new method with only minor changes.

A Simple, Efficient and Near Optimal Algorithm for Compressed Sensing by Thomas Blumensath and Mike Davies. The abstract reads:
When sampling signals below the Nyquist rate, efficient and accurate reconstruction is nevertheless possible, whenever the sampling system is well behaved and the signal is well approximated by a sparse vector. This statement has been formalised in the recently developed theory of compressed sensing, which developed conditions on the sampling system and proved the performance of several efficient algorithms for signal reconstruction under these conditions. In this paper, we prove that a very simple and efficient algorithm, known as Iterative Hard Thresholding, has near optimal performance guarantees rivalling those derived for other state of the art approaches.
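For the curious reader, here is a bare-bones sketch of Iterative Hard Thresholding (my own toy version, not the authors' code): the iteration is x ← H_s(x + Aᵀ(y − Ax)), where H_s keeps the s largest entries. Note the manual rescaling of the sensing matrix so its spectral norm stays below one; that is exactly the arbitrary-scaling dependence the automatic step size of the first paper removes. All problem sizes below are made up for the demo.

```python
import numpy as np

# Toy Iterative Hard Thresholding on a synthetic sparse recovery problem.
rng = np.random.default_rng(0)
m, n, s = 50, 100, 2

A = rng.standard_normal((m, n)) / np.sqrt(m)   # i.i.d. Gaussian sensing matrix
A /= 1.01 * np.linalg.norm(A, 2)               # manual rescaling: ||A||_2 < 1
x_true = np.zeros(n)
x_true[[10, 60]] = [1.0, -2.0]                 # a 2-sparse vector (support arbitrary)
y = A @ x_true

def hard_threshold(v, s):
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]          # indices of the s largest entries
    out[keep] = v[keep]
    return out

x = np.zeros(n)
for _ in range(500):
    x = hard_threshold(x + A.T @ (y - A @ x), s)
```

In this easy regime the iterate converges to x_true; the papers above quantify when such recovery is guaranteed.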


Laurent Duval just let me know of a new Call for Papers:
Call for papers Special issue on Advances in Multirate Filter Bank Structures and Multiscale Representations
Scope
A century after the first outbreak of wavelets in Alfred Haar's thesis in 1909, and twenty years after the advent of Multiresolution Analysis, filter banks and wavelet transforms lie at the heart of many digital signal processing and communication systems. During the last thirty years, they have been the focus of tremendous theoretical advances and practical applications in a growing digital world. They are for instance present, as local linear expansions, at the core of many existing or forthcoming audio, image or video compression algorithms.

Beyond standards, many exciting developments have emerged in filter banks and wavelets from the confrontation between scientists from different fields (including signal and image processing, computer science, harmonic analysis, approximation theory, statistics, bioengineering, physics). At their confluence, multiscale representations of data, associated with their efficient processing in a multirate manner, have unveiled tools or refreshed methods impacting the whole data management process, from acquisition to interpretation, through communications, recovery and visualization. Multirate structures naturally shelter key concepts such as the duality between redundancy and sparsity, as well as means for extracting low dimensional structures from higher ones. In image processing in particular, various extensions of wavelets provide smart linear tools for building insightful geometrical representations of natural images.

The purpose of this special issue is to report on recent progresses performed and emerging trends in the domain of multirate filter banks and multiscale representations of signals and images. Answers to the challenge of handling an increasing demand of information extraction and processing from large data sets will be explored.
Topics (not exclusive)
  • Sampling theory, compressive sensing
  • Sparse representations
  • Multiscale models
  • Multiscale processing: interpolation, inpainting, restoration
  • Wavelet shrinkage and denoising
  • Oversampled filter banks, discrete frames
  • Rational and non-uniform multirate systems
  • Directional, steerable filter banks and wavelets
  • Nonlinear filter banks
  • (Multidimensional) filter bank design and optimization
  • Hybrid analog/digital filter banks
  • Fast and low-power schemes (lifting, integer design)
  • Multiscale and multirate applications to source and channel coding, equalization, adaptive filtering, ...
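To make the "lifting" item in the topics list concrete, here is a small illustrative sketch (mine, not from the call) of the Haar filter bank written as two lifting steps, which yields an integer-to-integer transform that inverts exactly:

```python
import numpy as np

# Haar analysis/synthesis via lifting: a predict step produces the detail
# coefficients, an update step the approximation coefficients. Using integer
# floor division keeps the transform integer-to-integer yet exactly invertible.
def haar_lift(x):
    a, b = x[0::2].copy(), x[1::2].copy()
    d = b - a              # predict: detail coefficients
    s = a + d // 2         # update: (integer) approximation coefficients
    return s, d

def haar_unlift(s, d):
    a = s - d // 2         # invert the update step
    b = d + a              # invert the predict step
    x = np.empty(a.size + b.size, dtype=a.dtype)
    x[0::2], x[1::2] = a, b
    return x
```

Each lifting step is trivially invertible by subtracting what was added, which is why the scheme gives perfect reconstruction regardless of rounding.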

Important dates:
  • Deadline for submission: 15 December 2009
  • First round of reviews/decisions: 1 April 2010
  • Resubmission of revised papers: 1 July 2010
Submission
Guidelines for authors can be found at http://www.elsevier.com/locate/sigpro. Authors should submit their manuscripts before the submission deadline via http://ees.elsevier.com/sigpro/ selecting ‘‘Special Issue: Multirate/Multiscale’’ as Article Type.


I already knew of Stephen Boyd's class on Youtube, such as this one:



What I did not realize was that somebody had gone through the pain of transcribing these videos. Wow, just Wow. For instance, the text of the video above can be found here. The rest of Stephen Boyd's excellent class can be found here and here with the attendant text.

The papers presented at SODA are now available, among them some I am surely going to be reading (after some advice from Frank Nielsen that I should know about coresets).



Over at the CAVE lab at Columbia, they just announced an intriguing technical report entitled "Generalized Assorted Pixel Camera: Post-Capture Control of Resolution, Dynamic Range and Spectrum," by F. Yasuma, T. Mitsunaga, D. Iso, and S.K. Nayar. Yet the PDF link does not seem to work for the moment.

Also of interest is the ASPLOS 2008 Tutorial: CUDA: A Heterogeneous Parallel Programming Model for Manycore Computing blog entry.

In a different area, Shenzhou VII passed by the International Space Station; here is a simulation by the folks at AGI. I also look forward to seeing how the recent collision between an Iridium satellite and a Russian satellite will disturb that low Earth orbit environment with space debris. To see how bad this collision is, you need to check out the calculations on the Bad Astronomy blog. Have you ever been rear-ended by something at 17,000 mph? Let us recall that we are really close to a prompt critical situation in this area, and this is worrisome as most of our daily lives depend on satellites these days.
