Saturday, August 31, 2013

Nuit Blanche in Review (August 2013)

Since the last Nuit Blanche in Review (July 2013), we featured one implementation:


We also focused on several subjects:

We mentioned two ideas (more on those later)

We reported and featured the videos and slides of the meetings of importance in the field this Summer:
and finally, we had several videos around the genomics subject:

Credit: NASA/ESA




Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Saturday Morning Videos: Gingko, Videos of the 2013 Summer meetings

I have no stake in Gingko, a tree-based word processor, but I like the idea.

All the videos of the Summer meetings of note can be found here:
Thank you to all the organizers of these meetings for making these videos a reality!



Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Friday, August 30, 2013

Cleaning Up Toxic Waste: Removing Nefarious Contributions to Recommendation Systems

Here is an interesting use of the low rank + sparse decomposition:
Cleaning Up Toxic Waste: Removing Nefarious Contributions to Recommendation Systems by Adam Charles, Ali Ahmed, Aditya Joshi, Stephen Conover, Christopher Turnes, Mark Davenport.
Recommendation systems are becoming increasingly important, as evidenced by the popularity of the Netflix prize and the sophistication of various online shopping systems. With this increase in interest, a new problem of nefarious or false rankings that compromise a recommendation system’s integrity has surfaced. We consider such purposefully erroneous rankings to be a form of “toxic waste,” corrupting the performance of the underlying algorithm. In this paper, we propose an adaptive reweighted algorithm as a possible approach towards correcting this problem. Our algorithm relies on finding a low-rank-plus-sparse decomposition of the recommendation matrix, where the adaptation of the weights aids in rejecting the malicious contributions. Simulations suggest that our algorithm converges fairly rapidly and produces accurate results.
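
For readers who want to play with the core idea, here is a minimal numerical sketch of the low-rank + sparse split the abstract builds on. To be clear, this is not the authors' adaptive reweighted algorithm: it is a plain alternating-thresholding heuristic on a toy ratings matrix, and all the parameter choices below are illustrative assumptions.

```python
# A minimal low-rank + sparse decomposition by alternating thresholding.
# NOT the authors' adaptive reweighted algorithm; only the decomposition
# their method builds on. Thresholds are illustrative assumptions.
import numpy as np

def soft(x, t):
    # Entrywise soft-thresholding (the proximal operator of the l1 norm).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def low_rank_plus_sparse(M, tau, lam, iters=200):
    # Alternate singular-value thresholding (low-rank part L) with
    # entrywise soft-thresholding (sparse part S).
    L, S = np.zeros_like(M), np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * soft(s, tau)) @ Vt
        S = soft(M - L, lam)
    return L, S

rng = np.random.default_rng(0)
truth = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))  # rank-2 "ratings"
toxic = np.zeros_like(truth)
toxic.flat[rng.choice(truth.size, 40, replace=False)] = 10 * rng.standard_normal(40)
M = truth + toxic
L, S = low_rank_plus_sparse(M, tau=0.1 * np.linalg.norm(M, 2), lam=0.3)
print("relative recovery error:", np.linalg.norm(L - truth) / np.linalg.norm(truth))
```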


Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

SLRA: Structured Low-Rank Approximation as Optimization on a Grassmann Manifold - implementation -


From this presentation on Structured Low-Rank Approximation as Optimization on a Grassmann Manifold by Konstantin Usevich, here is SLRA, a structured low-rank approximation algorithm using an optimization on a Grassmann manifold. The Matlab/Octave/R code is on GitHub at: https://github.com/slra/slra/
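
To give a feel for the problem SLRA solves, here is a toy sketch of structured low-rank approximation by Cadzow-style alternating projections. Note that SLRA itself uses the Grassmann-manifold optimization described in the slides, not this naive scheme; the window length and ranks below are illustrative assumptions.

```python
# Structured (Hankel) low-rank approximation by alternating projections:
# project onto rank-r matrices, then back onto the Hankel structure.
# This is a toy illustration, not the SLRA package's algorithm.
import numpy as np

def hankel(x, r):
    # r x (len(x)-r+1) Hankel matrix built from the series x.
    n = len(x) - r + 1
    return np.array([x[i:i + n] for i in range(r)])

def cadzow(x, r, rank, iters=50):
    y = x.copy()
    for _ in range(iters):
        H = hankel(y, r)
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        H = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank projection
        # project back onto Hankel structure by averaging anti-diagonals
        y = np.zeros_like(x)
        counts = np.zeros_like(x)
        for i in range(H.shape[0]):
            for j in range(H.shape[1]):
                y[i + j] += H[i, j]
                counts[i + j] += 1
        y /= counts
    return y

# Toy example: a noisy sum of two sinusoids has a rank-4 Hankel matrix.
rng = np.random.default_rng(1)
t = np.arange(200, dtype=float)
clean = np.sin(0.3 * t) + 0.5 * np.sin(0.8 * t)
noisy = clean + 0.3 * rng.standard_normal(len(t))
approx = cadzow(noisy, r=40, rank=4)
print("error before: %.2f  after: %.2f" %
      (np.linalg.norm(noisy - clean), np.linalg.norm(approx - clean)))
```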



Structured Low-Rank Approximation as Optimization on a Grassmann Manifold

Konstantin Usevich

Attendant relevant papers include:



Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Wednesday, August 28, 2013

Videos: ROKS 2013, International Workshop on Advances in Regularization, Optimization, Kernel Methods and Support Vector Machines



I was somehow led to believe that no video would be taken at ROKS 2013 (International Workshop on Advances in Regularization, Optimization, Kernel Methods and Support Vector Machines). Well, I was wrong: it looks like the organizers contracted the videolectures folks. Here is the list of videos available from the workshop, but first a big thank you to the organizing committee (Johan Suykens, Andreas Argyriou, Kris De Brabanter, Moritz Diehl, Kristiaan Pelckmans, Marco Signoretto, Vanya Van Belle, Joos Vandewalle) for having the forethought of videotaping most of the talks. Enjoy!

Opening

Invited Talks
Oral session 1: Feature selection and sparsity
Oral session 2: Optimization algorithms
Oral session 3: Kernel methods and support vector machines
Oral session 4: Structured low-rank approximation
Oral session 5: Robustness
Closing

Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Compressive Multiplexing of Correlated Signals


Compressive Multiplexing of Correlated Signals by Ali Ahmed, Justin Romberg

We present a general architecture for the acquisition of ensembles of correlated signals. The signals are multiplexed onto a single line by mixing each one against a different code and then adding them together, and the resulting signal is sampled at a high rate. We show that if the $M$ signals, each bandlimited to $W/2$ Hz, can be approximated by a superposition of $R < M$ underlying signals, then the ensemble can be recovered by sampling at a rate within a logarithmic factor of $RW$ (as compared to the Nyquist rate of $MW$). This sampling theorem shows that the correlation structure of the signal ensemble can be exploited in the acquisition process even though it is unknown a priori.
The reconstruction of the ensemble is recast as a low-rank matrix recovery problem from linear measurements. The architectures we are considering impose a certain type of structure on the linear operators. Although our results depend on the mixing forms being random, this imposed structure results in a very different type of random projection than those analyzed in the low-rank recovery literature to date.
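
In case the architecture is unclear, here is a toy simulation of the forward model described in the abstract; the recovery step, which the paper poses as low-rank matrix recovery, is not implemented here. All the sizes and the Gaussian mixing are assumptions for illustration only.

```python
# Forward model of the compressive multiplexer: M correlated channels are
# chipped against random +/-1 codes and summed onto a single line.
# Recovery (low-rank matrix recovery) is deliberately omitted.
import numpy as np

rng = np.random.default_rng(0)
M, R, N = 8, 2, 1024                 # channels, underlying signals, time samples
latent = rng.standard_normal((R, N))             # R underlying signals
X = rng.standard_normal((M, R)) @ latent         # M correlated channels (rank R)

codes = rng.choice([-1.0, 1.0], size=(M, N))     # one random chipping code per channel
y = (codes * X).sum(axis=0)                      # everything summed onto a single line

print("rank of the channel ensemble:", np.linalg.matrix_rank(X))
print("one line of %d samples instead of %d" % (N, M * N))
```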

Tuesday, August 27, 2013

Is Somebody Watching Your Facebook Newsfeed?

Finding intruders in your Facebook feed using an l_1 regularizer, uh!

Is Somebody Watching Your Facebook Newsfeed? by Shan-Hung Wu, Man-Ju Chou, Ming-Hung Wang, Chun-Hsiung Tseng, Yuh-Jye Lee, Kuan-Ta Chen

With the popularity of Social Networking Services (SNS), more and more sensitive information are stored online and associated with SNS accounts. The obvious value of SNS accounts motivates the usage stealing problem -- unauthorized, stealthy use of SNS accounts on the devices owned/used by account owners without any technology hacks. For example, anxious parents may use their kids' SNS accounts to inspect the kids' social status; husbands/wives may use their spouses' SNS accounts to spot possible affairs. Usage stealing could happen anywhere in any form, and seriously invades the privacy of account owners. However, there is no any currently known defense against such usage stealing. To an SNS operator (e.g., Facebook Inc.), usage stealing is hard to detect using traditional methods because such attackers come from the same IP addresses/devices, use the same credentials, and share the same accounts as the owners do.
In this paper, we propose a novel continuous authentication approach that analyzes user browsing behavior to detect SNS usage stealing incidents. We use Facebook as a case study and show that it is possible to detect such incidents by analyzing SNS browsing behavior. Our experiment results show that our proposal can achieve higher than 80% detection accuracy within 2 minutes, and higher than 90% detection accuracy after 7 minutes of observation time.
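
The note above mentions an l_1 regularizer. For illustration only, here is a generic sketch of sparse feature selection for this kind of continuous-authentication task, using l1-penalized logistic regression on synthetic session features. The feature set and data are entirely made up and are not the authors' behavioral features.

```python
# l1-penalized logistic regression as a generic sparse detector of
# "intruder" sessions. Synthetic features only; this is a sketch of the
# l1 idea, not the authors' pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 2000, 50                         # sessions, behavioral features
X = rng.standard_normal((n, d))         # e.g., dwell times, click/scroll rates
w = np.zeros(d)
w[:5] = 2.0                             # only a few features separate owner from intruder
y = (X @ w + 0.5 * rng.standard_normal(n) > 0).astype(int)

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
print("nonzero feature weights:", np.count_nonzero(clf.coef_))
print("training accuracy: %.2f" % clf.score(X, y))
```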

Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Monday, August 26, 2013

Convex Optimization Approaches for Blind Sensor Calibration using Sparsity / A Conjugate Gradient Algorithm for Blind Sensor Calibration in Sparse Recovery


One of the most important findings in recent years, besides the idea of compressive sensing itself, is that there are sharp phase transitions between the parameter regimes where a system works and where it doesn't. In compressive sensing, this was shown by Jared Tanner and David Donoho in Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing (see also Phase Transitions of the Regular Polytopes and Cone at the University of Edinburgh), but Joel Tropp and colleagues showed it to be a property of a much larger class of problems than sparse recovery alone (Living on the edge: A geometric theory of phase transitions in convex optimization, and the SAHD talk of the same title by Joel Tropp). The importance of these results cannot be overstated. They do two worthwhile things in my opinion:
  • they remove the asymptotic conditions that are littering the field (RIP, NSP, etc...)
  • they are making it easier for hardware makers to have guidelines for actually making hardware. 

Case in point: the use of these phase transition curves to explore hardware calibration issues.
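
To see how sharp these transitions are, here is a small empirical sketch. For speed it uses orthogonal matching pursuit instead of the l1 minimization analyzed by Donoho-Tanner and Tropp et al., but the same abrupt success/failure boundary shows up as one sweeps the sparsity level against the number of measurements.

```python
# Empirical phase transition for sparse recovery. OMP stands in for l1
# minimization purely for speed; grid sizes are illustrative assumptions.
import numpy as np

def omp(A, y, k):
    # Orthogonal matching pursuit: greedily select k columns of A.
    r, idx = y.copy(), []
    x = np.zeros(0)
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ r))))
        x, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        r = y - A[:, idx] @ x
    out = np.zeros(A.shape[1])
    out[idx] = x
    return out

rng = np.random.default_rng(0)
n, trials = 100, 20
for m in (20, 40, 60, 80):                      # number of measurements
    row = []
    for k in (2, 5, 10, 20, 30):                # sparsity of the signal
        ok = 0
        for _ in range(trials):
            A = rng.standard_normal((m, n)) / np.sqrt(m)
            x = np.zeros(n)
            x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
            ok += np.linalg.norm(omp(A, A @ x, k) - x) < 1e-6
        row.append(ok / trials)
    print("m=%2d, success rates over k=(2,5,10,20,30):" % m, row)
```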


Convex Optimization Approaches for Blind Sensor Calibration using Sparsity by Cagdas Bilen, Gilles Puy, Remi Gribonval, Laurent Daudet

We investigate a compressive sensing framework in which the sensors introduce a distortion to the measurements in the form of unknown gains. We focus on blind calibration, using measurements on multiple unknown (but sparse) signals and formulate the joint recovery of the gains and the sparse signals as a convex optimization problem. The first proposed approach is an extension to the basis pursuit optimization which can estimate the unknown gains along with the unknown sparse signals. Demonstrating that this approach is successful for sufficient number of input signals except in cases where the phase shifts among the unknown gains varies significantly, a second approach is proposed that makes use of quadratic basis pursuit optimization to calibrate for constant amplitude gains with maximum variance in the phases. An alternative form of this approach is also formulated to reduce the complexity and memory requirements and provide scalability with respect to the number of input signals. Finally a third approach is formulated which combines the first two approaches for calibration of systems with any variation in the gains. The performance of the proposed algorithms are investigated extensively through numerical simulations, which demonstrate that simultaneous signal recovery and calibration is possible when sufficiently many (unknown, but sparse) calibrating signals are provided.
and A Conjugate Gradient Algorithm for Blind Sensor Calibration in Sparse Recovery by Hao Shen, Martin Kleinsteuber, Cagdas Bilen, Remi Gribonval

This work studies the problem of blind sensor calibration (BSC) in linear inverse problems, such as compressive sensing. It aims to estimate the unknown complex gains on each sensor, given a set of measurements of some unknown training signals. We assume that the unknown training signals are all sparse. Instead of solving the problem by using convex optimization, we propose a cost function on a suitable manifold, namely, the set of complex diagonal matrices with determinant one. Such a construction can enhance numerical stabilities of the proposed algorithm. By exploring a global parameterization of the manifold, we tackle the BSC problem with a conjugate gradient method. Several numerical experiments are provided to oppose our approach to the solutions given by convex optimization and to demonstrate its performance.
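
For the curious, here is a toy sketch of the convex blind calibration idea behind the first paper: with unknown positive gains, substituting their inverses makes the measurement constraints linear in the gains and the signals jointly, so one can minimize an l1 objective subject to linear constraints. It requires cvxpy, the sizes are illustrative, and the scale normalization below cheats by using the true scale just to keep the demo short.

```python
# Convex blind calibration sketch: with gains d_i > 0, write delta_i = 1/d_i
# so that diag(delta) y_l = A x_l is linear in (delta, x_l) jointly.
# Toy sizes; scale ambiguity fixed using the truth (demo shortcut only).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n, L, k = 30, 64, 5, 3                 # sensors, dimension, training signals, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
d = np.exp(0.3 * rng.standard_normal(m))  # unknown positive sensor gains
X = np.zeros((n, L))
for l in range(L):
    X[rng.choice(n, k, replace=False), l] = rng.standard_normal(k)
Y = np.diag(d) @ A @ X                    # uncalibrated measurements

delta = cp.Variable(m)                    # plays the role of 1/d
Xv = cp.Variable((n, L))
cons = [cp.multiply(delta, Y[:, l]) == A @ Xv[:, l] for l in range(L)]
cons.append(cp.sum(delta) == np.sum(1.0 / d))   # fix the scale (demo only)
cp.Problem(cp.Minimize(cp.sum(cp.abs(Xv))), cons).solve()
print("relative gain error:",
      np.linalg.norm(1.0 / delta.value - d) / np.linalg.norm(d))
```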


Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Sunday, August 25, 2013

Sunday Morning Insight: Using Muon Tomography to Image Salt Domes ?

In Louisiana and Texas, there is an interest in imaging salt domes. This is for two reasons: an industrial one, as oil and gas deposits seem to be neighbors to these structures, and an environmental one, which can be briefly summarized in these two videos of the 1980 Lake Peigneur sinkhole disaster and the recent appearance of a sinkhole in Napoleonville, LA.

 



 


What is a salt dome and why does it matter that it collapses ? In that region, it may be a sign that there is a more structural connection to the Gulf of Mexico since, as soon as water hits salt, the structural dome becomes liquid and unstable.
 
There are currently two major means of performing this imaging: acoustic/seismic imaging and gravity. Both involve drilling most of the time, since performing the survey from the surface only makes it hard to image the near vertical structure of the dome. But as we know and see from the sinkhole examples, in some cases drilling may not be appropriate. Here is another idea that ought to be investigated and which could provide a richer set of elements: Muon Tomography [2].
 
from [5]

from [4]

from [6]



Friday, August 23, 2013

Coded Acquisition of High Frame Rate Video

Here is a different approach to temporal coded aperture (see Duke's previous effort, featured recently in these entries: Coded aperture compressive temporal imaging and Video Compressive Sensing by Larry Carin (compressive hyperspectral camera and Compressive video)). Here, we have a different architecture where several cameras are used.


Coded Acquisition of High Frame Rate Video by Reza Pournaghi, Xiaolin Wu

High frame video (HFV) is an important investigational tool in sciences, engineering and military. In ultra-high speed imaging, the obtainable temporal, spatial and spectral resolutions are limited by the sustainable throughput of in-camera mass memory, the lower bound of exposure time, and illumination conditions. In order to break these bottlenecks, we propose a new coded video acquisition framework that employs K larger than 2 conventional cameras, each of which makes random measurements of the 3D video signal in both temporal and spatial domains. For each of the K cameras, this multi-camera strategy greatly relaxes the stringent requirements in memory speed, shutter speed, and illumination strength. The recovery of HFV from these random measurements is posed and solved as a large scale l1 minimization problem by exploiting joint temporal and spatial sparsities of the 3D signal. Three coded video acquisition techniques of varied trade offs between performance and hardware complexity are developed: frame-wise coded acquisition, pixel-wise coded acquisition, and column-row-wise coded acquisition. The performances of these techniques are analyzed in relation to the sparsity of the underlying video signal. Simulations of these new HFV capture techniques are carried out and experimental results are reported.
I wish they would show some video results somewhere.
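
In the meantime, here is a toy simulation of the pixel-wise coded acquisition scheme (forward model only; the large-scale l1 recovery is omitted). The mask statistics and sizes are assumptions for illustration.

```python
# Pixel-wise coded acquisition, forward model only: each of K slow cameras
# integrates the high-frame-rate scene against its own random binary
# per-pixel temporal mask. Recovery via l1 minimization is omitted here.
import numpy as np

rng = np.random.default_rng(0)
T, H, W, K = 32, 64, 64, 4            # HFV frames per exposure, frame size, cameras
video = rng.standard_normal((T, H, W))            # stand-in for the 3D scene

masks = rng.integers(0, 2, size=(K, T, H, W)).astype(float)
snapshots = (masks * video).sum(axis=1)           # K coded low-rate frames, (K, H, W)

print("unknowns per exposure: %d, coded measurements: %d"
      % (T * H * W, K * H * W))
```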


Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Thursday, August 22, 2013

Compressive Sensing in Control Theory

Masaaki Nagahara, whom we know from his blog, has been busy augmenting his list of preprints on arXiv:

Maximum-Hands-Off Control and L1 Optimality by Masaaki Nagahara, Daniel E. Quevedo, Dragan Nesic
In this article, we propose a new paradigm of control, called a maximum-hands-off control. A hands-off control is defined as a control that has a much shorter support than the horizon length. The maximum-hands-off control is the minimum-support (or sparsest) control among all admissible controls. We first prove that a solution to an L1-optimal control problem gives a maximum-hands-off control, and vice versa. This result rationalizes the use of L1 optimality in computing a maximum-hands-off control. The solution has in general the "bang-off-bang" property, and hence the control may be discontinuous. We then propose an L1/L2-optimal control to obtain a continuous hands-off control. Examples are shown to illustrate the effectiveness of the proposed control method.
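
Here is a minimal sketch of the L1-optimality result in action: discretize a double integrator, minimize the l1 norm of the input subject to reaching the origin, and watch the solution come out mostly zero ("hands-off"). This linear-programming toy is my illustration, not the authors' code; the system, horizon and input bound are assumptions.

```python
# L1-optimal (hands-off) control of a double integrator via a linear
# program: min sum|u_k| subject to reaching the origin at step N.
# Illustrative sketch only; not the authors' formulation or code.
import numpy as np
from scipy.optimize import linprog

dt, N = 0.1, 50
A = np.array([[1.0, dt], [0.0, 1.0]])      # discretized double integrator
B = np.array([[0.0], [dt]])
x0 = np.array([1.0, 0.0])                  # start at position 1, at rest

# x_N = A^N x0 + sum_k A^(N-1-k) B u_k  must equal 0
Phi = np.hstack([np.linalg.matrix_power(A, N - 1 - k) @ B for k in range(N)])
b_eq = -np.linalg.matrix_power(A, N) @ x0

# split u = u_plus - u_minus with u_plus, u_minus in [0, umax]
umax = 2.0
c = np.ones(2 * N)                          # minimizes sum(u_plus + u_minus) = ||u||_1
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, umax)] * (2 * N))
u = res.x[:N] - res.x[N:]
print("fraction of time steps with (near) zero input:",
      np.mean(np.abs(u) < 1e-6))
```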


L1-Optimal Splines for Outlier Rejection by Masaaki Nagahara, Clyde F. Martin

In this article, we consider control theoretic splines with L1 optimization for rejecting outliers in data. Control theoretic splines are either interpolating or smoothing splines, depending on a cost function with a constraint defined by linear differential equations. Control theoretic splines are effective for Gaussian noise in data since the estimation is based on L2 optimization. However, in practice, there may be outliers in data, which may occur with vanishingly small probability under the Gaussian assumption of noise, to which L2-optimized spline regression may be very sensitive. To achieve robustness against outliers, we propose to use L1 optimality, which is also used in support vector regression. A numerical example shows the effectiveness of the proposed method.

Packetized Predictive Control for Rate-Limited Networks via Sparse Representation by Masaaki Nagahara, Daniel E. Quevedo, Jan Ostergaard

We study a networked control architecture for linear time-invariant plants in which an unreliable data-rate limited network is placed between the controller and the plant input. To achieve robustness with respect to dropouts, the controller transmits data packets containing plant input predictions, which minimize a finite horizon cost function. In our formulation, we design sparse packets for rate-limited networks, by adopting an ell-0 optimization, which can be effectively solved by an orthogonal matching pursuit method. Our formulation ensures asymptotic stability of the control loop in the presence of bounded packet dropouts. Simulation results indicate that the proposed controller provides sparse control packets, thereby giving bit-rate reductions for the case of memoryless scalar coding schemes when compared to the use of, more common, quadratic cost functions, as in linear quadratic (LQ) control.

Sparse Packetized Predictive Control for Networked Control over Erasure Channels by Masaaki Nagahara, Daniel E. Quevedo, Jan Ostergaard

We study feedback control over erasure channels with packet-dropouts. To achieve robustness with respect to packet-dropouts, the controller transmits data packets containing plant input predictions, which minimize a finite horizon cost function. To reduce the data size of packets, we propose to adopt sparsity-promoting optimizations, namely, L1 and L2-constrained L1 optimizations, for which efficient algorithms exist. We derive sufficient conditions on design parameters, which guarantee (practical) stability of the resulting feedback control systems when the number of consecutive packet-dropouts is bounded.


Sparse Command Generator for Remote Control by Masaaki Nagahara, Daniel E. Quevedo, Jan Ostergaard, Takahiro Matsuda, Kazunori Hayashi

In this article, we consider remote-controlled systems, where the command generator and the controlled object are connected with a bandwidth-limited communication link. In the remote-controlled systems, efficient representation of control commands is one of the crucial issues because of the bandwidth limitations of the link. We propose a new representation method for control commands based on compressed sensing. In the proposed method, compressed sensing reduces the number of bits in each control signal by representing it as a sparse vector. The compressed sensing problem is solved by an L1-L2 optimization, which can be effectively implemented with an iterative shrinkage algorithm. A design example also shows the effectiveness of the proposed method.



Compressive Sampling for Remote Control Systems by Masaaki Nagahara, Takahiro Matsuda, Kazunori Hayashi

In remote control, efficient compression or representation of control signals is essential to send them through rate-limited channels. For this purpose, we propose an approach of sparse control signal representation using the compressive sampling technique. The problem of obtaining sparse representation is formulated by cardinality-constrained L2 optimization of the control performance, which is reducible to L1-L2 optimization. The low rate random sampling employed in the proposed method based on the compressive sampling, in addition to the fact that the L1-L2 optimization can be effectively solved by a fast iteration method, enables us to generate the sparse control signal with reduced computational complexity, which is preferable in remote control systems where computation delays seriously degrade the performance. We give a theoretical result for control performance analysis based on the notion of restricted isometry property (RIP). An example is shown to illustrate the effectiveness of the proposed approach via numerical experiments.


Compressive Sampling for Networked Feedback Control by Masaaki Nagahara, Daniel E. Quevedo, Takahiro Matsuda, Kazunori Hayashi

We investigate the use of compressive sampling for networked feedback control systems. The method proposed serves to compress the control vectors which are transmitted through rate-limited channels without much deterioration of control performance. The control vectors are obtained by an L1-L2 optimization, which can be solved very efficiently by FISTA (Fast Iterative Shrinkage-Thresholding Algorithm). Simulation results show that the proposed sparsity-promoting control scheme gives a better control performance than a conventional energy-limiting L2-optimal control.
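
Since FISTA is what makes these L1-L2 formulations practical, here is a generic minimal FISTA implementation for the lasso problem min_u 0.5||Au - b||^2 + lambda ||u||_1. It is a textbook version run on random data, not the authors' control formulation.

```python
# Generic FISTA (accelerated proximal gradient) for the lasso problem.
# Textbook sketch on random data, not the papers' control formulation.
import numpy as np

def fista(A, b, lam, iters=300):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(iters):
        g = A.T @ (A @ z - b)              # gradient of the smooth term at z
        w = z - g / L
        x_new = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # soft threshold
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)                # momentum step
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
u_true = np.zeros(100)
u_true[rng.choice(100, 5, replace=False)] = 1.0
u = fista(A, A @ u_true, lam=0.01)
print("recovered support:", np.flatnonzero(np.abs(u) > 0.1))
```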

Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Frequency Multiplexed Magnetometry via Compressive Sensing / Compressing measurements in quantum dynamic parameter estimation

The keyword today is Magnetometry.


Frequency Multiplexed Magnetometry via Compressive Sensing by Graciana Puentes, Gerald Waldherr, Philipp Neumann, Jörg Wrachtrup

Quantum sensors based on single Nitrogen-Vacancy (NV) defects in diamond are state-of-the-art tools for nano-scale magnetometry with precision scaling inversely with total measurement time $\sigma_{B} \propto 1/T$ (Heisenberg scaling) rather than as the inverse of the square root of $T$, with $\sigma_{B} =1/\sqrt{T}$ the Shot-Noise limit. This scaling can be achieved by means of phase estimation algorithms (PEAs) using adaptive or non-adaptive feedback, in combination with single-shot readout techniques. Despite their accuracy, the range of applicability of PEAs is limited to periodic signals involving single frequencies with negligible temporal fluctuations. In this Letter, we propose an alternative method for precision magnetometry in frequency multiplexed signals via compressive sensing (CS) techniques. We show that CS can provide for precision scaling approximately as $\sigma_{B} \approx 1/T$, both in the case of single frequency and frequency multiplexed signals, as well as for a 5-fold increase in sensitivity over dynamic-range gain, in addition to reducing the total number of resources required.


Compressing measurements in quantum dynamic parameter estimation by Easwar Magesan, Alexandre Cooper, Paola Cappellaro

We present methods that can provide an exponential savings in the resources required to perform dynamic parameter estimation using quantum systems. The key idea is to merge classical compressive sensing techniques with quantum control methods to efficiently estimate time-dependent parameters in the system Hamiltonian. We show that incoherent measurement bases and, more generally, suitable random measurement matrices can be created by performing simple control sequences on the quantum system. Since random measurement matrices satisfying the restricted isometry property can be used to reconstruct any sparse signal in an efficient manner, and many physical processes are approximately sparse in some basis, these methods can potentially be useful in a variety of applications such as quantum sensing and magnetometry. We illustrate the theoretical results throughout the presentation with various practically relevant numerical examples.






Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Wednesday, August 21, 2013

Startups News - August 2013

One of the most important aspects of the ideas surrounding the themes covered by Nuit Blanche is that they are, to a large extent, different from traditional approaches. While large companies can inspect these new ideas, it is very likely that some of them will initially target niche markets. The reason for this new series of startup news is to cover exactly these new and disruptive technologies and how they go from the ideas of academia covered here to actual products.

Inview



Centice



Aqueti

Tuesday, August 20, 2013

Around the blogs in 78 Summer Hours: More high dimensional body shape references and more....

In the last Around the blogs in 78 summer hours, I mentioned What does a high dimensional body look like. As a follow-up on Twitter, Giuseppe mentioned
while Pierre mentioned

So here are the two links plus one:

The SPARC 2013 Coding, Complexity, and Sparsity Workshop was summarized by Dick Lipton in Move the Cheese. Sebastien Bubeck mentioned the release of the COLT 2013 videos. Both meetings were featured here. In other interesting blog entries, we could read:

Hein The Exact Standard Deviation of the Sample Median
Larry
Patrick
Muthu Interspecies Internet
Dick
Gael Scikit-learn 0.14 release: features and benchmarks
John
Mikael Spotify music graph meetup
Josh
John Robots Podcast #136: Drone Journalism
Brian
Rich
Terry Polymath8: Writing the paper
David
László Visualizing the languages of the world
Suresh The Sa-battle-cal starts..

While on Nuit Blanche, we had:




Image Credit: NASA/JPL/Space Science Institute
W00083940.jpg was taken on August 18, 2013 and received on Earth August 19, 2013. The camera was pointing toward SUN at approximately 914,414,892 miles (1,471,608,120 kilometers) away, and the image was taken using the IR2 and IRP90 filters. This image has not been validated or calibrated. A validated/calibrated image will be archived with the NASA Planetary Data System in 2014.

Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Sunday, August 18, 2013

Sunday Morning Insight: Thinking about a Compressive Genome Sequencer

If you have been reading a few entries on the subject here on Nuit Blanche, you know that genomic sequencing is a revolutionary technology capable of drastically changing how medicine works. In particular, there is one technology that has been very promising for the past fifteen years and yet still has not delivered more rapid genome decoding: nanopore sequencing.


We mentioned a few ideas on nanopore sequencing before (see Of Well Logging and Nanopore Sequencing and Imagine: A Faster Nanopore DNA Sequencing Technique), but here is another one.

If you read [1,2,3], you'll note that one of the ideas of nanopore sequencing is that one needs to use biological processes to slow down the translocation (movement) of the DNA through the nanopore. This needed slowdown (or "rate control") of about three orders of magnitude according to [1] allows the sampling to be performed "accurately" and thereby provides a way to decide distinctly which of the bases (G, T, A, C) goes through the nanopore (with its attendant voltage readings).

As mentioned in the Wikipedia entry on nanopores:
Coupling an exonuclease to the biological pore would slow the translocation of the DNA through the pore, and increase the accuracy of data acquisition.
Or from [1]
For bandwidth and noise levels common to nanopore experiments, the specification for rate reduction is that the DNA should be slowed at least three orders of magnitude, from the un-impeded 1–3 µs/nt [12] to 1 ms/nt or slower [4].

In other words, over the past ten years, much technology improvement has focused on slowing down the DNA movement through the pores in order to be able to nicely sample the voltage recording and map it to a particular base. Let us also note that, even then, researchers are considering running the same DNA strand through several pores in parallel [2] in order to provide redundancy and eventually reduce the overall voltage reading errors. Current results of the technology show a still too low accuracy.

It turns out that, in compressive sensing, several folks have taken a stab at this exact problem: if a very rapid phenomenon cannot be sampled with current technology, one can find a solution if one has a modulating technology that goes as fast as the phenomenon at play. If you have this modulating capability, then there is probably a way to use these new randomized Analog to Information samplers. Some of these efforts are summarized in the A2I webpage set up by Emmanuel Candes at Stanford. In the case of nanopore technology, if one uses several batteries of DNA through several pores [2], and a switching technology based on, say, a different voltage across the different pores at different times, then one might be able to forget about slowing down the DNA translocation through the pores and use directly the randomized readings of several pores to get data that can then be deconvoluted. What about sparsity ? Well, for one, there is already a generic known map of the Human genome. Any particular human genome must not be more than 2% different from that reference. The difference between the two is sparse. Easier said than done, I know, but it's important.
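
Here is a toy numerical sketch of that closing argument: if an individual genome is a known reference plus a sparse difference, randomized multiplexed readings can be deconvoluted by solving for that sparse difference. The numeric base encoding and the Gaussian measurement matrix below are crude stand-ins for a real nanopore readout model, purely to illustrate the sparsity argument.

```python
# Sparse-difference recovery: individual = reference + sparse variants,
# observed through randomized multiplexed readings. The linear encoding
# and Gaussian Phi are stand-ins for a real nanopore readout model.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, m, k = 2000, 400, 20                           # sequence length, readings, variants
reference = rng.integers(0, 4, n).astype(float)   # known reference (G,T,A,C coded 0..3)
diff = np.zeros(n)
sites = rng.choice(n, k, replace=False)
diff[sites] = rng.integers(1, 4, k).astype(float) # sparse set of variants
individual = reference + diff                     # crude linear stand-in

Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # randomized multiplexed readout
y = Phi @ individual
resid = y - Phi @ reference                       # subtract the known reference part
est = Lasso(alpha=0.01, fit_intercept=False, max_iter=5000).fit(Phi, resid).coef_
found = set(np.flatnonzero(np.abs(est) > 0.3))
print("variant sites recovered: %d of %d" % (len(found & set(sites)), k))
```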




The prospect of nanopores as a next-generation sequencing platform has been a topic of growing interest and considerable government-sponsored research for more than a decade. Oxford Nanopore Technologies recently announced the first commercial nanopore sequencing devices, to be made available by the end of 2012, while other companies (Life, Roche, and IBM) are also pursuing nanopore sequencing approaches. In this paper, the state of the art in nanopore sequencing is reviewed, focusing on the most recent contributions that have or promise to have next-generation sequencing commercial potential. We consider also the scalability of the circuitry to support multichannel arrays of nanopores in future sequencing devices, which is critical to commercial viability.


This numerical study provides an error analysis of an idealized nanopore sequencing method in which ionic current measurements are used to sequence intact single-stranded DNA in the pore, while an enzyme controls DNA motion. Examples of systematic channel errors when more than one nucleotide affects the current amplitude are detailed, which if present will persist regardless of coverage. Absent such errors, random errors associated with tracking through homopolymer regions are shown to necessitate reading known sequences (Escherichia coli K-12) at least 140 times to achieve 99.99% accuracy (Q40). By exploiting the ability to reread each strand at each pore in an array, arbitrary positioning on an error rate versus throughput tradeoff curve is possible if systematic errors are absent, with throughput governed by the number of pores in the array and the enzyme turnover rate.


ABSTRACT: Complexes formed between the bacteriophage phi29 DNA polymerase (DNAP) and DNA fluctuate between the pre-translocation and post-translocation states on the millisecond time scale. These fluctuations can be directly observed with single-nucleotide precision in real-time ionic current traces when individual complexes are captured atop the α-hemolysin nanopore in an applied electric field. We recently quantified the equilibrium across the translocation step as a function of applied force (voltage), active-site proximal DNA sequences, and the binding of complementary dNTP. To gain insight into the mechanism of this step in the DNAP catalytic cycle, in this study, we have examined the stochastic dynamics of the translocation step. The survival probability of complexes in each of the two states decayed at a single exponential rate, indicating that the observed fluctuations are between two discrete states. We used a robust mathematical formulation based on the autocorrelation function to extract the forward and reverse rates of the transitions between the pre-translocation state and the post-translocation state from ionic current traces of captured phi29 DNAP−DNA binary complexes. We evaluated each transition rate as a function of applied voltage to examine the energy landscape of the phi29 DNAP translocation step. The analysis reveals that active-site proximal DNA sequences influence the depth of the pre-translocation and post-translocation state energy wells and affect the location of the transition state along the direction of the translocation.
H/t to Jerry Zon's Three Takeaways from the 3rd Next-Generation Sequencing Conference blog entry.

Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
