Tuesday, October 28, 2008

CS: Seismic CS, CS audio and image tampering, Exploiting structure in wavelet-based Bayesian CS, Practical near-optimal sparse recovery

Here is a new batch from the Rice repository. I am listing only those that I have not covered before:

Efficient Seismic Forward Modeling using Simultaneous Random Sources and Sparsity by Ramesh Neelamani, C. Krohn, J. Krebs, M. Deffenbaugh, J. E. Anderson and Justin Romberg. The abstract reads:

This paper proposes an approach to speed up seismic forward modeling when the Green's function of a given model is structured in the sense that the Green's function has a sparse representation in some known transform domain. The first step of our approach runs a forward finite-difference (FD) simulation for a duration longer than each conventional run, but with all sources activated simultaneously using different long random noise waveforms. The cumulative responses to the simultaneous sources are measured at all receivers. The second step separates the interfering Green's functions from the receiver measurements by exploiting prior knowledge of the random waveforms and the sparsity of the Green's function in a suitable domain. Simulation results demonstrate such a simultaneous source approach is indeed promising.
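To get a feel for why the separation step can work at all, here is a small numpy sketch of mine (not the authors' code, all parameters made up): each source fires an independent random waveform, the receiver records the superposition, and cross-correlation with the known waveforms already pulls out the spiky Green's functions. The paper's sparsity-promoting inversion would do much better than this crude correlation stand-in.

```python
import numpy as np

rng = np.random.default_rng(3)
T, L, S = 4096, 128, 4  # record length, Green's function length, number of sources

# Sparse "Green's functions": a few spikes (arrival times/amplitudes) per source
greens = np.zeros((S, L))
for s in range(S):
    greens[s, rng.choice(L, size=3, replace=False)] = rng.normal(size=3)

# All sources fire simultaneously with independent long random waveforms
sources = rng.normal(size=(S, T))
record = sum(np.convolve(sources[s], greens[s])[:T] for s in range(S))

# Separate by cross-correlating with each known random waveform (the paper
# uses sparsity-promoting inversion; correlation is a cruder stand-in)
for s in range(S):
    est = np.correlate(record, sources[s], mode="full")[T - 1:T - 1 + L] / T
    top = np.sort(np.argsort(np.abs(est))[-3:])
    print(f"source {s}: true spikes at {np.nonzero(greens[s])[0]}, "
          f"top estimates at {top}")
```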

We mentioned them before here:

Identification of sparse audio tampering using distributed source coding and compressive sensing techniques by Giorgio Prandi, Giuseppe Valenzise, Marco Tagliasacchi, and Augusto Sarti [See also related conference publication: DAFX 2008]. The abstract reads:

The increasing development of peer-to-peer networks for delivering and sharing multimedia files poses the problem of how to protect these contents from unauthorized and, possibly, malicious manipulations. In the past few years, a large number of techniques, including multimedia hashes and digital watermarking, have been proposed to identify whether a multimedia content has been illegally tampered with or not. Nevertheless, very few efforts have been devoted to identifying which kind of attack has been carried out, with the aim of assessing whether the modified content is still meaningful for the final user and, hopefully, of recovering the original content semantics. One of the main issues that have prevented multimedia hashes from being used for tampering identification is the large amount of data required for this task. Generally, the size of the hash should be kept as small as possible to reduce the bandwidth overhead. To overcome this limitation, we propose a novel hashing scheme which exploits the paradigms of compressive sensing and distributed source coding to generate a compact hash signature, and apply it to the case of audio content protection. The audio content provider produces a small hash signature by computing a limited number of random projections of a perceptual, time-frequency representation of the original audio stream; the audio hash is given by the syndrome bits of an LDPC code applied to the projections. At the content user side, the hash is decoded using distributed source coding tools, provided that the distortion introduced by tampering is not too high. If the tampering is sparsifiable or compressible in some orthonormal basis or redundant dictionary (e.g. DCT or wavelet), it is possible to identify the time-frequency position of the attack, with a hash size as small as 200 bits/second: the bit saving obtained by introducing distributed source coding ranges between 20% and 70%.
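As a toy illustration of the projection side of this scheme (leaving out the perceptual model, quantization, and the LDPC syndrome coding entirely), here is a numpy sketch I put together: the hash is a handful of random projections of a magnitude spectrogram, and the difference between the provider's and the user's projections is a compressive sketch of the (sparse) tampering. The one-step thresholding at the end is a crude stand-in for the paper's sparse recovery; all names and parameters below are mine.

```python
import numpy as np

rng = np.random.default_rng(2)

def spectrogram(x, win=256, hop=128):
    # Magnitude STFT frames, standing in for the paper's perceptual
    # time-frequency representation
    frames = [np.abs(np.fft.rfft(x[i:i + win]))
              for i in range(0, len(x) - win, hop)]
    return np.stack(frames).ravel()

fs = 8000
audio = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)  # 1 s of a 440 Hz tone
tampered = audio.copy()
tampered[3000:3200] += 0.3 * np.sin(2 * np.pi * 1500 * np.arange(200) / fs)

s_orig, s_recv = spectrogram(audio), spectrogram(tampered)

# Producer side: the hash is a few random projections of the TF map
# (the actual scheme sends only LDPC syndrome bits of these projections)
m = 300
P = rng.normal(size=(m, s_orig.size)) / np.sqrt(m)
audio_hash = P @ s_orig

# User side: the projection difference is a compressive sketch of the
# (sparse) tampering; one crude thresholding step localizes it
sketch_diff = P @ s_recv - audio_hash
backproj = P.T @ sketch_diff
print("strongest tampered TF bins:", np.argsort(np.abs(backproj))[-5:])
```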


Hash-based identification of sparse image tampering by Giorgio Prandi, Giuseppe Valenzise, Marco Tagliasacchi, and Augusto Sarti [See also related conference publication: ICIP 2008]. The abstract reads:

In the last decade, the increased possibility to produce, edit and disseminate multimedia contents has not been adequately balanced by similar advances in protecting these contents from unauthorized diffusion of forged copies. When the goal is to detect whether or not a digital content has been tampered with in order to alter its semantics, the use of multimedia hashes turns out to be an effective solution to offer proof of legitimacy and to eventually identify the introduced tampering. We propose an image hashing algorithm based on compressive sensing principles, which solves both the authentication and the tampering identification problems. The original content producer generates a hash using a small bit budget by quantizing a limited number of random projections of the authentic image. The content user receives the (possibly altered) image, and uses the hash to estimate the mean square error distortion between the original and the received image. In addition, if the introduced tampering is sparse in some orthonormal basis or redundant dictionary, an approximation is given in the pixel domain. We emphasize that the hash is universal, i.e., the same hash signature can be used to detect and identify different types of tampering. At the cost of additional complexity at the decoder, the proposed algorithm is robust to moderate content-preserving transformations including cropping, scaling and rotation. In addition, in order to keep the size of the hash small, hash encoding/decoding takes advantage of distributed source codes.
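The MSE-estimation part is easy to illustrate: by concentration of Gaussian random projections, the energy of the projected difference is an unbiased estimate of the energy of the pixel-domain difference. Here is a small numpy sketch of mine, ignoring quantization and the distributed source coding, with made-up sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 64 * 64, 400  # pixels in the image, number of random projections

# Random projection matrix shared via a seed; the real scheme also
# quantizes the projections and sends syndromes to shrink the hash
P = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))

original = rng.random(n)
tampered = original.copy()
tampered[:50] += 0.5  # a sparse, localized alteration

hash_vec = P @ original                               # producer side
mse_est = np.sum((P @ tampered - hash_vec) ** 2) / n  # user side
mse_true = np.mean((tampered - original) ** 2)
print(f"estimated MSE {mse_est:.5f} vs true MSE {mse_true:.5f}")
```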

There is also Exploiting structure in wavelet-based Bayesian compressed sensing by Lihan He and Lawrence Carin. The abstract reads:

Bayesian compressive sensing (CS) is considered for signals and images that are sparse in a wavelet basis. The statistical structure of the wavelet coefficients is exploited explicitly in the proposed model, and therefore this framework goes beyond simply assuming that the data are compressible in a wavelet basis. The structure exploited within the wavelet coefficients is consistent with that used in wavelet-based compression algorithms. A hierarchical Bayesian model is constituted, with efficient inference via Markov chain Monte Carlo (MCMC) sampling. The algorithm is fully developed and demonstrated using several natural images, with performance comparisons to many state-of-the-art compressive-sensing inversion algorithms.
As far as I can tell, the TSW-CS algorithm is not available yet.
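In the meantime, the structure being exploited is easy to visualize. The toy numpy sketch below (mine, not the authors' TSW-CS code, with made-up parameters) computes a Haar wavelet decomposition of a piecewise-constant signal and checks the well-known parent-child persistence: large coefficients at a fine scale tend to sit under large coefficients at the coarser scale, which is exactly the dependency a hierarchical prior can encode beyond flat sparsity.

```python
import numpy as np

rng = np.random.default_rng(4)

def haar_level(x):
    # One level of the (orthonormal) Haar transform: approximation, detail
    even, odd = x[0::2], x[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

# Piecewise-constant signal with a jump at an odd sample, plus a little noise
n = 256
x = np.zeros(n)
x[131:] = 1.0
x += 0.01 * rng.normal(size=n)

# Multi-level decomposition; details[j] holds the scale-j coefficients
details, approx = [], x
while approx.size > 1:
    approx, d = haar_level(approx)
    details.append(d)

# Parent-child persistence: the parent (coarser scale, same location) of a
# large coefficient tends to be large too -- the structure TSW-CS exploits
for j in range(4):
    child, parent = details[j], details[j + 1]
    big = np.nonzero(np.abs(child) > 0.1)[0]
    if big.size:
        print(f"level {j}: mean |parent| over large children = "
              f"{np.mean(np.abs(parent[big // 2])):.3f}, "
              f"overall mean |parent| = {np.mean(np.abs(parent)):.3f}")
```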

Finally, I mentioned their wiki but had forgotten to mention one of the papers that is the basis of the wiki: Practical near-optimal sparse recovery in the ℓ1 norm by Radu Berinde, Piotr Indyk, and Milan Ruzic. The abstract reads:

We consider the approximate sparse recovery problem, where the goal is to (approximately) recover a high-dimensional vector x ∈ R^n from its lower-dimensional sketch Ax ∈ R^m. Specifically, we focus on the sparse recovery problem in the ℓ1 norm: for a parameter k, given the sketch Ax, compute an approximation x̂ of x such that the ℓ1 approximation error ||x − x̂||_1 is close to min_{x′} ||x − x′||_1, where x′ ranges over all vectors with at most k terms. The sparse recovery problem has been subject to extensive research over the last few years. Many solutions to this problem have been discovered, achieving different trade-offs between various attributes, such as the sketch length, encoding and recovery times. A recent paper [IR08] provided a sparse recovery scheme which achieved close to optimal performance on virtually all attributes (see Figure 1). In particular, this was the first recovery scheme that guaranteed O(k log(n/k)) sketch length and near-linear O(n log(n/k)) recovery time simultaneously. This was achieved by using sketching matrices A which were themselves very sparse. The matrix sparsity enabled decreasing the amount of computation spent on encoding and recovery. In this paper we present a new practical variant of that algorithm, that we call Sparse Matching Pursuit, or SMP. The running time of the new algorithm is slightly higher (by a logarithmic factor) than that of its predecessor, and its sketch length bound remains unchanged. However, its empirical sketch length is substantially lower. This makes our scheme an attractive option for sparse recovery problems, both in theory and in practice.
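From the description, SMP alternates a per-coordinate median vote over a sparse binary matrix with pruning to the largest coefficients. Here is my toy rendition in numpy; the real algorithm uses expander-based matrices with guarantees that this random construction only approximates, and the sizes below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k, d = 1000, 400, 10, 8  # dimension, sketch length, sparsity, column weight

# Sparse binary sketching matrix: d ones per column (a random stand-in
# for the expander-based matrices the paper actually uses)
A = np.zeros((m, n))
for j in range(n):
    A[rng.choice(m, size=d, replace=False), j] = 1.0

# k-sparse signal and its sketch
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
b = A @ x

def sparsify(v, s):
    # Keep the s largest-magnitude entries, zero out the rest
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]
    out[keep] = v[keep]
    return out

# SMP-style iterations: median vote per coordinate, then prune
xhat = np.zeros(n)
for _ in range(10):
    r = b - A @ xhat
    u = np.array([np.median(r[A[:, j] > 0]) for j in range(n)])
    xhat = sparsify(xhat + sparsify(u, 2 * k), k)

print("relative l1 error:", np.abs(x - xhat).sum() / np.abs(x).sum())
```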
