Friday, December 02, 2016

Slides: New Directions for Learning with Kernels and Gaussian Processes, Dagstuhl Seminar

 

Arthur Gretton, Philipp Hennig, Carl Edward Rasmussen, Bernhard Schölkopf organized the Dagstuhl seminar on New Directions for Learning with Kernels and Gaussian Processes
Here are the slides of the presentations available on this site:
 
 
 
 
 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Thursday, December 01, 2016

Nuit Blanche in Review (November 2016)


Since the last Nuit Blanche in Review (October 2016), a very interesting idea came out in the form of this paper.
Identifying how certain parts of the brain perform a specific computation is indeed an awesome idea !

We had one implementation and a few in-depth papers, but we also saw the release of numerous papers for NIPS and about 500 submissions for a two-year-old conference (ICLR) ! Among these papers, one has drawn particular attention:
It seems to promise faster training times in Deep Learning. We'll see. In the meantime, the Paris Machine Learning meetup held two meetups (one regular and one 'Hors série') but, most importantly, we now have a website:

Enjoy the rest of the review !

Implementation
In-depth

Conferences

Theses
Paris Machine Learning
Job
Videos:
Sunday Morning Insight



Efficient Convolutional Auto-Encoding via Random Convexification and Frequency-Domain Minimization

Random Convexification ?
The omnipresence of deep learning architectures such as deep convolutional neural networks (CNN)s is fueled by the synergistic combination of ever-increasing labeled datasets and specialized hardware. Despite the indisputable success, the reliance on huge amounts of labeled data and specialized hardware can be a limiting factor when approaching new applications. To help alleviate these limitations, we propose an efficient learning strategy for layer-wise unsupervised training of deep CNNs on conventional hardware in acceptable time. Our proposed strategy consists of randomly convexifying the reconstruction contractive auto-encoding (RCAE) learning objective and solving the resulting large-scale convex minimization problem in the frequency domain via coordinate descent (CD). The main advantages of our proposed learning strategy are: (1) single tunable optimization parameter; (2) fast and guaranteed convergence; (3) possibilities for full parallelization. Numerical experiments show that our proposed learning strategy scales (in the worst case) linearly with image size, number of filters and filter size.
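The core trick — fixing a randomly initialized encoder so that only the reconstruction weights remain to be learned, which makes the objective convex and decouples it in the frequency domain — can be sketched in a toy 1-D setting. All names and sizes below are illustrative; the paper's actual RCAE objective (contractive term, nonlinearity, coordinate-descent solver) is not reproduced here, and the decoupled per-frequency ridge problems are solved in closed form rather than by CD:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
x = rng.standard_normal(n)            # one toy 1-D "image"

# Encoder: a FIXED random filter, so the remaining problem is convex.
w_enc = rng.standard_normal(n) / np.sqrt(n)
h = np.fft.ifft(np.fft.fft(x) * np.fft.fft(w_enc)).real   # circular conv

# Decoder filter w_dec minimizes ||w_dec * h - x||^2 + lam ||w_dec||^2.
# By Parseval, this decouples into independent scalar ridge problems,
# one per frequency bin (convolution is pointwise multiplication there).
lam = 1e-3
H, X = np.fft.fft(h), np.fft.fft(x)
W = np.conj(H) * X / (np.abs(H) ** 2 + lam)
x_rec = np.fft.ifft(W * H).real

rel_err = np.linalg.norm(x_rec - x) / np.linalg.norm(x)
print(rel_err)   # small reconstruction error
```

The convexity comes entirely from freezing the encoder: with both filters free the objective is bilinear and non-convex.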



Tuesday, November 29, 2016

CfP: SPARS 2017, Signal Processing with Adaptive Sparse Structured Representations

 
 
 
  Mark just sent me the following:
 
 
Dear Igor, 
A quick reminder for Nuit Blanche readers:
The deadline for submissions to SPARS 2017 is just 2 weeks away, on 12 December!
Best wishes,
 
Mark
 
Here is the announcement:
 
================================================

 
  SPARS 2017

  Signal Processing with Adaptive Sparse Structured Representations

  Lisbon, Portugal - June 5-8, 2017

  Submission deadline: December 12, 2016

  http://spars2017.lx.it.pt/

  ------------------------------------------

CALL FOR PAPERS

The Signal Processing with Adaptive Sparse Structured Representations (SPARS) workshop aims to bring together people from statistics, engineering, mathematics, and computer science, fostering the exchange and dissemination of new ideas and results, both applied and theoretical, on the general area of sparsity-related techniques and computational methods, for high dimensional data analysis, signal processing, and related applications.

Contributions (talks and demos) are solicited as one-page abstracts, which may extend to a second page in order to include figures, tables and references. Talks should present recent and novel research results. We welcome abstract submissions for technological demonstrations of the mathematical topics within our scope.

Topics of interest include (but are not limited to):

 * Sparse coding and representations, and dictionary learning
 * Sparse and low-rank approximation algorithms
 * Compressive sensing and learning
 * Dimensionality reduction and feature extraction
 * Sparsity in approximation theory, information theory, and statistics
 * Low-complexity/low-dimensional regularization
 * Statistical/Bayesian models and algorithms for sparsity
 * Sparse network theory and analysis
 * Sparsity and low-rank regularization
 * Applications including but not limited to: communications,
   geophysics, neuroscience, audio & music, imaging, denoising, genetics.

PLENARY SPEAKERS:

 * Yoram Bresler, University of Illinois, USA
 * Volkan Cevher, École Polytechnique Fédérale de Lausanne, Switzerland
 * Jalal Fadili, École Nationale Supérieure d'Ingénieurs de Caen, France
 * Anders Hansen, University of Cambridge, UK
 * Gitta Kutyniok, Technische Universität Berlin, Germany
 * Philip Schniter, Ohio State University, USA
 * Eero Simoncelli, Howard Hughes Medical Institute, NYU, USA
 * Rebecca Willett, University of Wisconsin, USA

VENUE:

SPARS 2017 will be held at Instituto Superior Técnico (IST), the
engineering school of the University of Lisbon, Portugal.

IMPORTANT DATES:

* Submission deadline: December 12, 2016
* Notification of acceptance: March 27, 2017
* Summer School: May 31-June 2, 2017 (tbc)
* Workshop: June 5-8, 2017

CHAIRS:

 Mario A. T. Figueiredo, Instituto Superior Técnico
 Mark Plumbley, University of Surrey


FURTHER INFORMATION:  http://spars2017.lx.it.pt/

============================================================

Prof Mark D Plumbley
Professor of Signal Processing
Centre for Vision, Speech and Signal Processing (CVSSP)
University of Surrey, Guildford, Surrey, GU2 7XH, UK
Email: m.plumbley@surrey.ac.uk
 
 
 
 
 
 

Thesis: Robust Large Margin Approaches for Machine Learning in Adversarial Settings

Congratulations Ali ! Maybe I missed something, but this is the first time I have seen a connection between Random Features and Dropout in non-linear SVMs. Nice !

Robust Large Margin Approaches for Machine Learning in Adversarial Settings by Mohamad Ali Torkamani.
Machine learning algorithms are invented to learn from data and to use data to perform predictions and analyses. Many agencies are now using machine learning algorithms to present services and to perform tasks that used to be done by humans. These services and tasks include making high-stake decisions. Determining the right decision strongly relies on the correctness of the input data. This fact provides a tempting incentive for criminals to try to deceive machine learning algorithms by manipulating the data that is fed to the algorithms. And yet, traditional machine learning algorithms are not designed to be safe when confronting unexpected inputs. In this dissertation, we address the problem of adversarial machine learning; i.e., our goal is to build safe machine learning algorithms that are robust in the presence of noisy or adversarially manipulated data. Many complex questions -- to which a machine learning system must respond -- have complex answers. Such outputs of the machine learning algorithm can have some internal structure, with exponentially many possible values. Adversarial machine learning will be more challenging when the output that we want to predict has a complex structure itself. In this dissertation, a significant focus is on adversarial machine learning for predicting structured outputs. In this thesis, first, we develop a new algorithm that reliably performs collective classification: It jointly assigns labels to the nodes of graphed data. It is robust to malicious changes that an adversary can make in the properties of the different nodes of the graph. The learning method is highly efficient and is formulated as a convex quadratic program. Empirical evaluations confirm that this technique not only secures the prediction algorithm in the presence of an adversary, but it also generalizes to future inputs better, even if there is no adversary. 
While our robust collective classification method is efficient, it is not applicable to generic structured prediction problems. Next, we investigate the problem of parameter learning for robust, structured prediction models. This method constructs regularization functions based on the limitations of the adversary in altering the feature space of the structured prediction algorithm. The proposed regularization techniques secure the algorithm against adversarial data changes, with little additional computational cost. In this dissertation, we prove that robustness to adversarial manipulation of data is equivalent to some regularization for large-margin structured prediction, and vice versa. This confirms some of the previous results for simpler problems. As a matter of fact, an ordinary adversary regularly either does not have enough computational power to design the ultimate optimal attack, or it does not have sufficient information about the learner's model to do so. Therefore, it often tries to apply many random changes to the input in a hope of making a breakthrough. This fact implies that if we minimize the expected loss function under adversarial noise, we will obtain robustness against mediocre adversaries. Dropout training resembles such a noise injection scenario. Dropout training was initially proposed as a regularization technique for neural networks. The procedure is simple: At each iteration of training, randomly selected features are set to zero. We derive a regularization method for large-margin parameter learning based on dropout. Our method calculates the expected loss function under all possible dropout values. This method results in a simple objective function that is efficient to optimize. We extend dropout regularization to non-linear kernels in several different directions. 
We define the concept of dropout for input space, feature space, and input dimensions, and we introduce methods for approximate marginalization over feature space, even if the feature space is infinite-dimensional. Empirical evaluations show that our techniques consistently outperform the baselines on different datasets.
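The dropout-marginalization idea the thesis builds on — replacing sampled dropout noise by its expectation, which turns the noise into a data-dependent regularizer — can be illustrated in a textbook special case: a linear model with squared loss (not the thesis' structured-prediction or kernel setting; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 20
x = rng.standard_normal(d)
w = rng.standard_normal(d)
y = 1.0
p = 0.5                      # dropout probability
q = 1.0 - p                  # keep probability (inverted dropout scaling)

# Closed-form expected squared loss under dropout:
#   E[(y - w . (m*x))^2] = (y - w . x)^2 + (p/q) * sum_i w_i^2 x_i^2
# i.e. the clean loss plus a data-dependent quadratic regularizer,
# since each mask entry m_i has mean 1 and variance p/q.
closed = (y - w @ x) ** 2 + (p / q) * np.sum(w**2 * x**2)

# Monte-Carlo check of the marginalization
m = rng.binomial(1, q, size=(200_000, d)) / q
mc = np.mean((y - (m * x) @ w) ** 2)
print(closed, mc)   # the two values should agree closely
```

This is why minimizing the *expected* loss under dropout noise needs no sampling at all for such models; the thesis extends this marginalization to large-margin objectives and (infinite-dimensional) kernel feature spaces.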







Monday, November 28, 2016

Paris Machine Learning Hors Serie #4 Season 4 on Deeplearning4j: Applying Deep Learning to business problems in production

Here is the video stream of tonight's Paris Machine Learning Hors Série #4 Season 4 on Deeplearning4j by Chris Nicholson. The title of the presentation is "Deeplearning4j: Applying Deep Learning to business problems in production". The slides are here.
  
 
 
 

Faster Kernel Ridge Regression Using Sketching and Preconditioning / Sharper Bounds for Regression and Low-Rank Approximation with Regularization


Faster Kernel Ridge Regression Using Sketching and Preconditioning by Haim Avron, Kenneth L. Clarkson, David P. Woodruff

Random feature maps, such as random Fourier features, have recently emerged as a powerful technique for speeding up and scaling the training of kernel-based methods such as kernel ridge regression. However, random feature maps only provide crude approximations to the kernel function, so delivering state-of-the-art results requires the number of random features to be very large. Nevertheless, in some cases, even when the number of random features is driven to be as large as the training size, full recovery of the performance of the exact kernel method is not attained. In order to address this issue, we propose to use random feature maps to form preconditioners to be used in solving kernel ridge regression to high accuracy. We provide theoretical conditions on when this yields an effective preconditioner, and empirically evaluate our method and show it is highly effective for datasets of up to one million training examples.
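A minimal sketch of the idea — using a random Fourier feature map not as a substitute for the kernel, but to build a preconditioner for solving kernel ridge regression to high accuracy — might look as follows. Sizes, bandwidth, and regularization are illustrative, and this is not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, D = 400, 5, 200          # n points, d dims, D random features
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
lam, gamma = 1e-1, 0.5

# Exact Gaussian kernel matrix k(x, x') = exp(-gamma ||x - x'||^2)
sq = np.sum(X**2, axis=1)
K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
A = K + lam * np.eye(n)

# Random Fourier features (Rahimi-Recht) approximating the same kernel
W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))
b = rng.uniform(0, 2 * np.pi, D)
Z = np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Preconditioner P = Z Z^T + lam*I, applied via the Woodbury identity:
#   P^{-1} v = (v - Z (Z^T Z + lam*I)^{-1} Z^T v) / lam
L = np.linalg.cholesky(Z.T @ Z + lam * np.eye(D))
def apply_Pinv(v):
    t = np.linalg.solve(L.T, np.linalg.solve(L, Z.T @ v))
    return (v - Z @ t) / lam

# Preconditioned conjugate gradient on (K + lam*I) alpha = y
alpha = np.zeros(n)
r = y.copy()
z = apply_Pinv(r)
p = z.copy()
for it in range(200):
    Ap = A @ p
    a = (r @ z) / (p @ Ap)
    alpha += a * p
    r_new = r - a * Ap
    if np.linalg.norm(r_new) < 1e-10 * np.linalg.norm(y):
        break
    z_new = apply_Pinv(r_new)
    p = z_new + ((r_new @ z_new) / (r @ z)) * p
    r, z = r_new, z_new

print(it, np.linalg.norm(A @ alpha - y))   # residual driven to near zero
```

The point of the paper is that even a crude feature map can make an excellent preconditioner: the CG iterates still converge to the *exact* kernel solution, typically in far fewer iterations than plain CG.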

Sharper Bounds for Regression and Low-Rank Approximation with Regularization by Haim Avron, Kenneth L. Clarkson, David P. Woodruff

The technique of matrix sketching, such as the use of random projections, has been shown in recent years to be a powerful tool for accelerating many important statistical learning techniques. Research has so far focused largely on using sketching for the "vanilla" un-regularized versions of these techniques. 
Here we study sketching methods for regularized variants of linear regression, low-rank approximation, and canonical correlation analysis. We study regularization both in a fairly broad setting, and in the specific context of the popular and widely used technique of ridge regularization; for the latter, as applied to each of these problems, we show algorithmic resource bounds in which the statistical dimension appears in places where in previous bounds the rank would appear. The statistical dimension is always smaller than the rank, and decreases as the amount of regularization increases. In particular, for the ridge low-rank approximation problem $\min_{Y,X} \|YX - A\|_F^2 + \lambda\|Y\|_F^2 + \lambda\|X\|_F^2$, where $Y \in \mathbb{R}^{n \times k}$ and $X \in \mathbb{R}^{k \times d}$, we give an approximation algorithm needing
$O(\mathrm{nnz}(A)) + \tilde{O}\big((n+d)\,\varepsilon^{-1} k \min\{k, \varepsilon^{-1} \mathrm{sd}_\lambda(Y^*)\}\big) + \tilde{O}\big(\varepsilon^{-8} \mathrm{sd}_\lambda(Y^*)^3\big)$
time, where $\mathrm{sd}_\lambda(Y^*) \le k$ is the statistical dimension of $Y^*$, $Y^*$ is an optimal $Y$, $\varepsilon$ is an error parameter, and $\mathrm{nnz}(A)$ is the number of nonzero entries of $A$.
We also study regularization in a much more general setting. For example, we obtain sketching-based algorithms for the low-rank approximation problem $\min_{X,Y} \|YX - A\|_F^2 + f(Y,X)$ where $f(\cdot,\cdot)$ is a regularizing function satisfying some very general conditions (chiefly, invariance under orthogonal transformations).
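The statistical dimension driving these bounds, $\mathrm{sd}_\lambda(A) = \sum_i \sigma_i^2/(\sigma_i^2 + \lambda)$ over the singular values $\sigma_i$, is easy to compute and to check against the rank numerically. A small illustration (the matrix and the $\lambda$ values are made up, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
# A matrix with a geometrically decaying spectrum
U, _ = np.linalg.qr(rng.standard_normal((100, 30)))
V, _ = np.linalg.qr(rng.standard_normal((80, 30)))
s = 2.0 ** -np.arange(30)
A = U @ np.diag(s) @ V.T

def stat_dim(A, lam):
    """Statistical dimension sd_lambda(A) = sum_i s_i^2 / (s_i^2 + lam)."""
    sv = np.linalg.svd(A, compute_uv=False)
    return np.sum(sv**2 / (sv**2 + lam))

rank = np.linalg.matrix_rank(A)
for lam in [1e-8, 1e-4, 1e-1]:
    # sd_lambda never exceeds the rank and shrinks as lambda grows
    print(lam, stat_dim(A, lam), "<= rank =", rank)
```

Intuitively, $\mathrm{sd}_\lambda$ counts only the directions whose singular values survive the regularization, which is why it can replace the rank in the sketch-size bounds above.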

Image Credit: NASA/JPL-Caltech/Space Science Institute
W00102494.jpg was taken on 2016-11-25 17:59 (UTC) and received on Earth 2016-11-26 04:45 (UTC). The camera was pointing toward Saturn, and the image was taken using the MT3 and CL2 filters.



