Thursday, July 03, 2014

Some Comments: Universal Compressed Sensing of Markov Sources, A survey of uncertainty principles and some signal processing applications, Refined support and entropic uncertainty inequalities


Benjamin Ricaud commented:

Hi Igor,
To add some complementary information to your nice post: you may be interested in our work on the uncertainty principle in signal processing (and the references we cite therein). One part is a review of several versions of the uncertainty principle:
http://arxiv.org/abs/1211.5914
We explain the relation between l^p-norms and Shannon entropy via the Rényi entropies (already noticed in the paper you cite) and give some other insights. You are right, these quantities give a family of uncertainty principles which differ from the Heisenberg one in that they do not take into account any localization of a function around a particular value (its mean position). Indeed, if you change the labelling of the points in the function's domain, the norms do not change (the l^p norms and the entropy stay the same), but the variance of the function will change.
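To make the relabelling remark concrete, here is a minimal numerical sketch (an added illustration with arbitrary numbers, not taken from the papers): permuting the samples of a signal leaves its l^p norms and the Rényi entropies of |x_i|^2 unchanged, while the Heisenberg-style positional variance moves.

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy of order alpha of a probability vector p;
    the limit alpha -> 1 recovers the Shannon entropy."""
    p = p[p > 0]
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(p))           # Shannon limit
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

x = np.array([0.10, 0.70, 0.50, 0.05, 0.30, 0.02])
x /= np.linalg.norm(x)                          # unit l^2 norm
y = np.random.default_rng(0).permutation(x)     # same values, relabelled points

p_x, p_y = x**2, y**2                           # |x_i|^2 sums to 1
for alpha in (0.5, 1.0, 2.0):
    print(alpha, renyi_entropy(p_x, alpha), renyi_entropy(p_y, alpha))  # identical

n = np.arange(len(x))
variance = lambda p: np.sum(p * n**2) - np.sum(p * n)**2
print(variance(p_x), variance(p_y))             # the positional variances differ
```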
We also wrote a paper on uncertainty in two different frames, a generalization of the results for two different bases: the bound in the uncertainty inequality depends on the coherence between the two frames, the same coherence that is also involved in compressive sensing. We also propose a slightly different definition of the coherence.
I hope this helps in understanding the uncertainty principle better.
We are also preparing a study of some uncertainty principles for signals defined on graphs, for the coming weeks! I will send you a mail when it is ready.
Thanks for sharing your views on the uncertainty principle.


and he subsequently added:
A few additional facts on the uncertainty principle that I want to mention, and that I was happy to learn.
If we look at the uncertainty principle between a signal and its Fourier transform: the minimizer of the Heisenberg uncertainty principle (the function for which the uncertainty equals the bound) is a Gaussian function, and the same holds for the entropic uncertainty principle in infinite dimension (the continuous case). In the finite-dimensional case, the minimizer of the l^p-norm family of uncertainty principles (in particular the l^0-norm) is a "picket fence" signal, i.e. regularly spaced Kronecker deltas. A striking result is that the Fourier transform of a Gaussian is a Gaussian, and the (discrete) Fourier transform of a picket fence is a picket fence. Another important thing to notice is the difference between continuous and discrete domains: one minimizer is well localized around a mean value, whereas the other is completely delocalized, with a few peaks of energy evenly distributed over the signal domain. If you build a function which approximates a Gaussian well in the discrete case, it will have a small uncertainty, but the picket fence beats it. In the continuous case, the picket fence cannot exist, as it is not square integrable.
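The picket-fence claim is easy to check numerically; a small sketch (an added illustration, with N = 16 and spacing 4 chosen arbitrarily):

```python
import numpy as np

N, d = 16, 4                       # spacing d must divide N
x = np.zeros(N)
x[::d] = 1.0                       # picket fence: N/d evenly spaced Kronecker deltas

X = np.fft.fft(x)
print(np.nonzero(x)[0])                              # [ 0  4  8 12]
print(np.nonzero(np.round(np.abs(X), 10))[0])        # [ 0  4  8 12]: again a picket fence

# l^0 uncertainty product: (N/d) * d = N, i.e. the Donoho-Stark
# bound ||x||_0 * ||Fx||_0 >= N is attained with equality.
print(np.count_nonzero(x) * np.count_nonzero(np.round(np.abs(X), 10)))   # 16
```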
Continuing with the continuous vs. discrete case, it is difficult to find an equivalent of the Heisenberg principle and its variance measure in the discrete setting. You have to adapt the definition to each domain (when that is possible). Even in the continuous case, take functions living on a circle: choose a point of the circle to be your zero coordinate and define your domain to be [-pi,pi] with periodic boundary conditions. If you translate a nice, well-localized function to a point around pi, part of the function will reappear around -pi because we are on a circle. If you then compute the standard variance as defined on (-infinity,infinity), it will give a huge value, yet on the circle the function is still the same, with the same concentration.
I did not find any general formula for the variance in the discrete case.
Hence, we have to be careful when comparing uncertainty in the continuous and the discrete case (a numerical sketch of the circle example follows below).
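One standard way around the circle problem comes from directional statistics (an addition for context, not part of the comment): replace the line variance by the circular variance 1 - |sum_j p_j e^{i theta_j}|, which does not depend on where the zero coordinate is placed. A quick sketch:

```python
import numpy as np

M = 512
theta = np.linspace(-np.pi, np.pi, M, endpoint=False)
bump = np.exp(-theta**2 / (2 * 0.1**2))     # well-localized bump at 0
bump /= bump.sum()                          # treat |f|^2 as a probability

shifted = np.roll(bump, M // 2)             # same bump, translated to around +/- pi

line_var = lambda p: (p * theta**2).sum() - ((p * theta).sum())**2
circ_var = lambda p: 1 - abs((p * np.exp(1j * theta)).sum())

print(line_var(bump), line_var(shifted))    # tiny vs. huge: the line variance is fooled
print(circ_var(bump), circ_var(shifted))    # identical: the circular spread is not
```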

The uncertainty principle exists between two representations, i.e. the expressions of the function in two different bases of a Hilbert space. But it also exists for two frames (at least in the finite-dimensional case), meaning that you get an uncertainty principle each time you use two invertible linear transformations (even if you have to use a pseudo-inverse). This means the uncertainty principle is also related to information: if you lose some information by looking at your function in some representation, you no longer have an uncertainty principle (the bound is zero).
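For the classical case of two orthonormal bases (which the frame results above generalize), the role of the coherence mu = max |&lt;a_i, b_j&gt;| is easy to exhibit; a sketch with the standard and Fourier bases of C^N (an added illustration):

```python
import numpy as np

N = 16
F = np.fft.fft(np.eye(N)) / np.sqrt(N)   # unitary DFT matrix, the second basis
mu = np.max(np.abs(F))                    # coherence against the standard basis
print(mu, 1 / np.sqrt(N))                 # 0.25 = 1/sqrt(N): maximally incoherent pair

# Elad-Bruckstein support inequality: ||x||_0 + ||Fx||_0 >= 2/mu.
x = np.zeros(N)
x[::4] = 1.0                              # picket fence with sqrt(N) spikes
Fx = F @ x
lhs = np.count_nonzero(x) + np.count_nonzero(np.round(np.abs(Fx), 10))
print(lhs, 2 / mu)                        # 8 >= 8: the picket fence attains the bound
```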
Thomas Arildsen also added:

Rényi's information dimension is also used in this very recent paper on a universal approach to compressed sensing: http://arxiv.org/abs/1406.7807, which also cites [2].

To summarize, we have:


Universal Compressed Sensing of Markov Sources by Shirin Jalali, H. Vincent Poor

The main promise of compressed sensing is accurate recovery of high-dimensional structured signals from an underdetermined set of randomized linear projections. Several types of structure, such as sparsity and low-rankness, have already been explored in the literature. For each type of structure, a recovery algorithm has been developed based on the properties of signals with that type of structure. Such algorithms recover any signal complying with the assumed model from a sufficient number of its linear projections. However, in many practical situations the underlying structure of the signal is not known, or is only partially known. Moreover, it is desirable to have recovery algorithms that can be applied to signals with different types of structure.
In this paper, the problem of developing "universal" algorithms for compressed sensing of stochastic processes is studied. First, Rényi's notion of information dimension is generalized to analog stationary processes. This provides a measure of complexity for such processes and is connected to the number of measurements required for their accurate recovery. Then a minimum entropy pursuit (MEP) algorithm is proposed, and it is proven that it can reliably and robustly recover any Markov process of any order from a sufficient number of randomized linear measurements, without any prior information about the distribution of the process. An implementable version of MEP with the same performance guarantees is also provided.
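For readers meeting it for the first time, Rényi's information dimension of a real random variable X is classically defined (the standard scalar definition, which the paper extends to stationary processes) as

d(X) = \lim_{m \to \infty} \frac{H(\lfloor m X \rfloor)}{\log m},

where H is the Shannon entropy. It equals 0 for a discrete X, 1 for an X with a density, and s for a mixture (1-s)(discrete) + s(absolutely continuous); this last case matches a sparse signal model, which is why the information dimension governs the number of measurements.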



A survey of uncertainty principles and some signal processing applications by Benjamin Ricaud, Bruno Torrésani

The goal of this paper is to review the main trends in the domain of uncertainty principles and localization, emphasize their mutual connections and investigate practical consequences. The discussion is strongly oriented towards, and motivated by, signal processing problems, in which significant advances have been made recently. Relations with sparse approximation and coding problems are emphasized.

Refined support and entropic uncertainty inequalities by Benjamin Ricaud, Bruno Torrésani

Generalized versions of the entropic (Hirschman-Beckner) and support (Elad-Bruckstein) uncertainty principles are presented for frame representations. Moreover, a sharpened version of the support inequality is obtained by introducing a generalization of the coherence. In the finite-dimensional case and under certain conditions, minimizers of these inequalities are given as constant functions on their support. In addition, ℓp-norm inequalities are obtained as byproducts of the entropic inequalities.
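For orientation, the base cases being generalized here are classical; quoting the standard statements for a signal x expanded in two orthonormal bases A and B of C^N with mutual coherence \mu = \max_{i,j} |\langle a_i, b_j \rangle|:

\|A^*x\|_0 \cdot \|B^*x\|_0 \ge 1/\mu^2 \quad \text{and} \quad \|A^*x\|_0 + \|B^*x\|_0 \ge 2/\mu   (Elad-Bruckstein, support)

H(\{|\langle a_i, x\rangle|^2\}_i) + H(\{|\langle b_j, x\rangle|^2\}_j) \ge -2\log\mu   (Maassen-Uffink, the finite-dimensional entropic analogue of Hirschman-Beckner)

The paper sharpens the support inequality through a generalized coherence and extends both statements from bases to frames.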
Thanks Thomas and Benjamin!

Join the CompressiveSensing subreddit or the Google+ Community and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
