Tuesday, August 04, 2009

CS: Sparsity in Random Matrix Theory, Approximation of functions of few variables in high dimensions, YALL1, KSVD-Box v11


Going back from "applied" math to "pure" mathematics: On the role of sparsity in Compressed Sensing and Random Matrix Theory by Roman Vershynin. The abstract reads:
We discuss applications of some concepts of Compressed Sensing in the recent work on invertibility of random matrices due to Rudelson and the author. We sketch an argument leading to the optimal bound \Omega(N^{-1/2}) on the median of the smallest singular value of an N × N matrix with random independent entries. We highlight the parts of the argument where sparsity ideas played a key role.
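
Since the bound in question is a scaling law, a quick Monte Carlo check makes it concrete. The snippet below is my own sketch (not from the paper), using Gaussian entries as one instance of the i.i.d. model: if the median of the smallest singular value behaves like N^{-1/2}, then sqrt(N) times that median should stay roughly constant as N grows.

```python
import numpy as np

# Monte Carlo illustration of the \Omega(N^{-1/2}) scaling of the median
# smallest singular value, for i.i.d. standard Gaussian entries.
rng = np.random.default_rng(0)
for N in (100, 200, 400):
    s_min = [np.linalg.svd(rng.standard_normal((N, N)), compute_uv=False)[-1]
             for _ in range(50)]
    med = float(np.median(s_min))
    print(f"N = {N:3d}   median s_min = {med:.4f}   sqrt(N) * median = {np.sqrt(N) * med:.3f}")
```
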
Following up on some elements shown in the third Paris Lecture of Ron DeVore, here is the more substantive paper: Approximation of functions of few variables in high dimensions by Ron DeVore, Guergana Petrova, Przemyslaw Wojtaszczyk. The abstract reads:

Let f be a continuous function defined on \Omega := [0, 1]^N which depends on only l coordinate variables, f(x_1, ..., x_N) = g(x_{i_1}, ..., x_{i_l}). We assume that we are given m and are allowed to ask for the values of f at m points in \Omega. If g is in Lip1 and the coordinates i_1, ..., i_l are known to us, then by asking for the values of f at m = L^l uniformly spaced points, we could recover f to the accuracy |g|_{Lip1} L^{-1} in the norm of C(\Omega). This paper studies whether we can obtain similar results when the coordinates i_1, ..., i_l are not known to us. A prototypical result of this paper is that by asking for C(l) L^l (log_2 N) adaptively chosen point values of f, we can recover f in the uniform norm to accuracy |g|_{Lip1} L^{-1} when g \in Lip1. Similar results are proven for more general smoothness conditions on g. Results are also proven under the assumption that f can be approximated to some tolerance \epsilon (which is not known) by functions of l variables.
I note that the authors make a connection to the Juntas problem as discussed by Dick Lipton recently and mentioned here.
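
The abstract's easy baseline (known coordinates, uniform grid) is simple enough to sketch. The snippet below is my own illustration with made-up function and sizes; it only covers the known-coordinates case and does not attempt the paper's adaptive coordinate-discovery step.

```python
import itertools
import numpy as np

# Known-coordinates baseline: if f on [0,1]^N depends only on the coordinates
# in `active` and g is Lipschitz, sampling f on an L^l uniform grid in those
# coordinates and returning the nearest grid value recovers f to accuracy on
# the order of |g|_{Lip1} / L.  The paper's adaptive search for the unknown
# coordinates (with only C(l) L^l log_2 N queries) is not reproduced here.

def recover_known_coords(f, N, active, L):
    """Query f at L^len(active) points and return a piecewise-constant approximant."""
    grid = (np.arange(L) + 0.5) / L                       # L cell centers per active coordinate
    table = {}
    for idx in itertools.product(range(L), repeat=len(active)):
        x = np.zeros(N)                                   # inactive coordinates do not matter
        x[list(active)] = grid[list(idx)]
        table[idx] = f(x)                                 # one point query
    def f_hat(x):
        cell = np.clip((np.asarray(x)[list(active)] * L).astype(int), 0, L - 1)
        return table[tuple(cell)]
    return f_hat

# toy example: N = 20 variables, but f depends only on coordinates 3 and 11
N, active, L = 20, (3, 11), 16
f = lambda x: abs(x[3] - 0.4) + 0.5 * x[11]               # Lipschitz in the active variables
f_hat = recover_known_coords(f, N, active, L)
xs = np.random.default_rng(1).random((1000, N))
err = max(abs(f(x) - f_hat(x)) for x in xs)
print(f"sup-norm error on samples ~ {err:.4f}  (compare with 1.5/(2L) = {1.5 / (2 * L):.4f})")
```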

Also, following up on yesterday's release of a dictionary learning technique (without the code yet), today we have new upgrades to a dictionary learning tool and to a reconstruction solver:

  • Ron Rubinstein has released a new version of the K-SVD dictionary learning technique (a minimal sketch of the K-SVD iteration appears after this list). From the page:

    KSVD-Box v11: Implementation of the K-SVD and Approximate K-SVD dictionary training algorithms, and the K-SVD Denoising algorithm. Requires OMP-Box v9.

    Update (August 3 2009): KSVD-Box v11 has been released, and adds 1-D signal handling!


  • Yin Zhang just released YALL1 version beta-6, which can solve two more models for A*A' = I (with bug fixes); a small sketch of one of these models follows after this list. From the page:


    This Matlab code can currently be applied to the following eight (8) L1-minimization problems:

    • (BP) min ||Wx||_{w,1} s.t. Ax = b
    • (L1/L1) min ||Wx||_{w,1} + (1/ν)||Ax - b||_1
    • (L1/L2) min ||Wx||_{w,1} + (1/2ρ)||Ax - b||_2^2
    • (L1/L2con) min ||Wx||_{w,1} s.t. ||Ax - b||_2 \le δ
    • (BP+) min ||x||_{w,1} s.t. Ax = b, x \ge 0
    • (L1/L1+) min ||x||_{w,1} + (1/ν)||Ax - b||_1 s.t. x \ge 0
    • (L1/L2+) min ||x||_{w,1} + (1/2ρ)||Ax - b||_2^2 s.t. x \ge 0
    • (L1/L2con+) min ||x||_{w,1} s.t. ||Ax - b||_2 \le δ, x \ge 0
    where A is m by n with m < n, and the solution x (or its representation Wx) is supposed to be (approximately) sparse. The data (A, b) can be real or complex, and the signal x can also be complex in the cases without nonnegativity constraints. A unitary sparsifying basis W is allowed, and the 1-norm of x (or Wx) can be weighted by a vector w \ge 0. The capability of solving models (L1/L2con) and (L1/L2con+) has been added to version beta-6 for A*A' = I.



  • Image credit: NASA, ESA, and H. Hammel (Space Science Institute, Boulder, Colo.), and the Jupiter Impact Team. If you recall the Another Space Odyssey entry, this picture shows that impact as viewed by Hubble. Via the Bad Astronomy blog.
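
The KSVD-Box release note above names the algorithms but does not describe them, so here is a compact, purely illustrative re-implementation of the K-SVD iteration in Python/NumPy (sparse coding with a basic OMP, dictionary update by a per-atom rank-1 SVD). It is a sketch of the algorithm the toolbox implements, not Ron Rubinstein's optimized MATLAB code, and the training data below is random.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: approximate y with k atoms of D."""
    x = np.zeros(D.shape[1])
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))       # atom most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def ksvd(Y, n_atoms, sparsity, n_iter=10, seed=0):
    """Learn a dictionary D and sparse codes X such that Y ~ D @ X."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)                       # unit-norm atoms
    for _ in range(n_iter):
        # sparse coding stage: OMP, one signal at a time
        X = np.column_stack([omp(D, y, sparsity) for y in Y.T])
        # dictionary update stage: one rank-1 SVD per atom
        for j in range(n_atoms):
            used = np.nonzero(X[j, :])[0]                # signals that use atom j
            if used.size == 0:
                continue
            # residual with atom j's own contribution removed
            E = Y[:, used] - D @ X[:, used] + np.outer(D[:, j], X[j, used])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, j] = U[:, 0]                            # best rank-1 fit updates the atom
            X[j, used] = s[0] * Vt[0, :]                 # ...and its coefficients
    return D, X

# toy usage: 200 random training signals of dimension 20, 40 atoms, 3-sparse codes
Y = np.random.default_rng(1).standard_normal((20, 200))
D, X = ksvd(Y, n_atoms=40, sparsity=3, n_iter=5)
print("relative residual:", np.linalg.norm(Y - D @ X) / np.linalg.norm(Y))
```
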
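
To make the YALL1 models concrete, here is a minimal proximal-gradient (ISTA) sketch for the unweighted (L1/L2) model with W = I and w = 1. This is not YALL1's algorithm (YALL1 relies on alternating direction methods and covers all eight models, weights, and complex data); it is only meant to illustrate the optimization problem, with made-up sizes and parameters.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def l1_l2(A, b, rho, n_iter=2000):
    """ISTA for min_x ||x||_1 + (1/(2*rho)) * ||Ax - b||_2^2."""
    x = np.zeros(A.shape[1])
    t = rho / np.linalg.norm(A, 2) ** 2                  # step size <= rho / ||A||_2^2
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b) / rho                   # gradient of the quadratic term
        x = soft_threshold(x - t * grad, t)
    return x

# toy usage: recover a 10-sparse vector from 80 noisy measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 256)) / np.sqrt(80)
x0 = np.zeros(256)
x0[rng.choice(256, 10, replace=False)] = rng.standard_normal(10)
b = A @ x0 + 0.01 * rng.standard_normal(80)
x_hat = l1_l2(A, b, rho=0.01)
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```
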
