Thursday, February 04, 2010

CS: Single-letter Characterization of Signal Estimation, The Gelfand widths of $\ell_p$-balls, K-Dimensional Coding Schemes, A CS-EEG for BCI X-Prize

Today, we have an upcoming presentation and three preprints with a mathematical and information-theoretic flavor. The more applied part comes at the end. The presentation, by Dongning Guo, Dror Baron, and Shlomo Shamai, is entitled Single-letter Characterization of Signal Estimation from Linear Measurements. More on that later.

The following preprints showed up on Arxiv:

The Gelfand widths of $\ell_p$-balls for $0 < p \leq 1$ by Simon Foucart, Alain Pajor, Holger Rauhut, Tino Ullrich. The abstract reads:
We provide sharp lower and upper bounds for the Gelfand widths of $\ell_p$-balls in the $N$-dimensional $\ell_q^N$-space for $0 < p \le 1$ and $p < q \le 2$. Such estimates are highly relevant to the novel theory of compressive sensing, and our proofs rely on methods from this area.
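As a reminder (this definition is standard and not taken from the paper itself), the Gelfand width of order $m$ of a subset $K$ of a normed space $X$ measures how well $K$ is captured by subspaces of codimension $m$:

$$d^m(K, X) \;=\; \inf_{\mathrm{codim}(L) \le m} \;\; \sup_{x \in K \cap L} \|x\|_X,$$

where the infimum runs over subspaces $L$ of $\mathbb{R}^N$ of codimension at most $m$. In compressive sensing these widths govern the best achievable worst-case recovery error from $m$ linear measurements. If I recall the result correctly, for this range of $p$ and $q$ the sharp order is $\min\{1, \frac{\ln(N/m)+1}{m}\}^{1/p - 1/q}$ up to constants depending on $p$ and $q$, but see the paper for the precise statement.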
K-Dimensional Coding Schemes in Hilbert Spaces by Andreas Maurer and Massimiliano Pontil. The abstract reads:
This paper presents a general coding method where data in a Hilbert space are represented by finite dimensional coding vectors. The method is based on empirical risk minimization within a certain class of linear operators, which map the set of coding vectors to the Hilbert space. Two results bounding the expected reconstruction error of the method are derived, which highlight the role played by the codebook and the class of linear operators. The results are specialized to some cases of practical importance, including K-means clustering, nonnegative matrix factorization and other sparse coding methods.
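To make the coding-scheme viewpoint concrete, here is a minimal toy sketch (my own illustration, not the authors' code) of K-means seen as a K-dimensional coding scheme in $\mathbb{R}^d$: the columns of the operator $T$ form the codebook of centroids, the coding vectors are canonical basis vectors $e_k$, and Lloyd's algorithm alternately minimizes the empirical reconstruction error over codes and codebook.

```python
import numpy as np

def kmeans_reconstruction(X, K, n_iter=50, seed=0):
    """Toy Lloyd's algorithm: K-means viewed as a coding scheme where each
    sample x is approximated by T e_k, with T the (d x K) codebook of
    centroids and e_k a canonical basis ("coding") vector."""
    rng = np.random.default_rng(seed)
    # Initialize the codebook with K distinct samples.
    T = X[rng.choice(len(X), size=K, replace=False)].T  # shape (d, K)
    for _ in range(n_iter):
        # Coding step: assign each sample to its nearest centroid.
        d2 = ((X[:, :, None] - T[None, :, :]) ** 2).sum(axis=1)  # (n, K)
        codes = d2.argmin(axis=1)
        # Codebook update: each centroid is the mean of its cluster.
        for k in range(K):
            if np.any(codes == k):
                T[:, k] = X[codes == k].mean(axis=0)
    # Empirical reconstruction error: mean squared distance to the codebook.
    err = ((X - T[:, codes].T) ** 2).sum(axis=1).mean()
    return T, codes, err

# Two well-separated blobs: with K=2 each blob gets its own codebook column.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
               rng.normal(5.0, 0.1, (50, 2))])
T, codes, err = kmeans_reconstruction(X, K=2)
print(err)  # small: each blob is coded by one centroid
```

The generalization bounds in the paper concern how the expected reconstruction error over the data distribution relates to this empirical quantity.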
and of related interest:
Dominated concentration by Andreas Maurer. The abstract reads:
The concentration properties of one random variable may be governed by the values of another random variable which is concentrated and more easily analyzed. We present a general concentration inequality to handle such cases and apply it to the eigenvalues of the Gram matrix for a sample of independent vectors distributed in the unit ball of a Hilbert space. For large samples the deviation of the eigenvalues from their mean is shown to scale with the largest eigenvalue.
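As a quick numerical illustration of the setting (my own sketch, assuming i.i.d. vectors drawn uniformly from the unit ball of $\mathbb{R}^d$, not the paper's experiment), one can check how tightly the spectrum of the empirical Gram matrix concentrates across independent samples:

```python
import numpy as np

def gram_eigenvalues(n, d, rng):
    """Eigenvalues of the normalized Gram matrix of n i.i.d. vectors
    drawn uniformly from the unit ball of R^d."""
    # Uniform sample in the ball: Gaussian direction, radius ~ U^(1/d).
    g = rng.normal(size=(n, d))
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    r = rng.uniform(size=(n, 1)) ** (1.0 / d)
    X = r * g
    return np.linalg.eigvalsh(X.T @ X / n)

rng = np.random.default_rng(0)
# Repeat the experiment: the spectra barely move from trial to trial.
trials = np.array([gram_eigenvalues(n=2000, d=5, rng=rng) for _ in range(20)])
mean_spectrum = trials.mean(axis=0)
max_dev = np.abs(trials - mean_spectrum).max()
print(mean_spectrum, max_dev)
```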


It looks like there is going to be an X-Prize for BCI. For those who don't know what an X-Prize is, it's essentially the equivalent of the DARPA Grand Challenge, but more dedicated to civilian approaches, and it is non-governmental, which makes it unlikely to carry citizenship restrictions. The most recent X-Prize was won by Burt Rutan, who built SpaceShipOne and flew it into space twice. Well, this time Peter Diamandis (hey Peter, what's up with this picture of you in microgravity in a Russian plane? I thought you were floating in your own 737 :-)), the originator of the X-Prize, is building up some "steam" to get people to pitch in money for the BCI X-Prize. Can a compressive sensing EEG system compete? Time will tell :-). In the meantime, here is Ray Kurzweil's view on the prize:




Credit: NASA, ESA, David Jewitt. Hubble captures a picture of an asteroid collision! (via the Bad Astronomy blog).
