Wednesday, August 31, 2016

Fast brain decoding with random sampling and random projections

Ah ! Random features applied to different sets of discrimination tasks performed by the brain.
 

Fast brain decoding with random sampling and random projections by Andrés Hoyos-Idrobo, Gaël Varoquaux, Bertrand Thirion
Abstract: Machine learning from brain images is a central tool for image-based diagnosis and disease characterization. Predicting behavior from functional imaging, brain decoding, analyzes brain activity in terms of the behavior that it implies. While these multivariate techniques are becoming standard brain-mapping tools, like mass-univariate analysis, they entail much larger computational costs. In a time of growing data sizes, with larger cohorts and higher-resolution imaging, this cost is increasingly a burden. Here we consider the use of random sampling and projections as fast data approximation techniques for brain images. We evaluate their prediction accuracy and computation time on various datasets and discrimination tasks. We show that the weight maps obtained after random sampling are highly consistent with those obtained with the whole feature space, while having a fair prediction performance. Altogether, we present the practical advantage of random sampling methods in neuroimaging, showing a simple way to embed back the reduced coefficients, with only a small loss of information.
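For readers who want to try the idea, here is a minimal sketch of the kind of pipeline the abstract describes, using scikit-learn on synthetic data. The projection size, classifier and data are illustrative choices of mine, not the authors' exact setup:

import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X = rng.randn(200, 20000)           # 200 scans x 20k voxels (synthetic stand-in)
y = rng.randint(0, 2, size=200)     # two-condition discrimination task

# reduce the feature space with a random projection, then fit a linear model
proj = GaussianRandomProjection(n_components=500, random_state=0)
X_red = proj.fit_transform(X)
clf = LogisticRegression().fit(X_red, y)

# embed the reduced coefficients back into voxel space to get a weight map;
# for a Gaussian projection P, P^T approximately inverts P up to scaling
weight_map = proj.components_.T @ clf.coef_.ravel()
print(weight_map.shape)             # (20000,) -> a map in the original voxel space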


 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Scaling up Vector Autoregressive Models With Operator-Valued Random Fourier Features

Ah ! Random Features used in autoregressive modeling:



Scaling up Vector Autoregressive Models With Operator-Valued Random Fourier Features by Romain Brault, Néhémy Lim, and Florence d'Alché-Buc

We consider a nonparametric approach to Vector Autoregressive modeling by working in vector-valued Reproducing Kernel Hilbert Spaces (vv-RKHS). The main idea is to build vector-valued models (OKVAR) using Operator-Valued Kernels (OVK). As in the scalar case, regression with OVK boils down to learning as many weight parameters as data points, except that here the weights are vectors. To avoid the inherent time and memory complexity of dealing with kernels, we introduce Operator-Valued Random Fourier Features (ORFF) that extend Random Fourier Features, devoted to scalar-valued kernel approximation. Applying the approach to decomposable kernels, we show that ORFF-VAR is able to compete with OKVAR in terms of accuracy on stationary nonlinear time series while keeping a low execution time, comparable to VAR. Results on simulated datasets as well as real datasets are presented.
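As a rough illustration of the ORFF-VAR idea (my sketch, not the authors' code): for a decomposable kernel the operator-valued feature map reduces to scalar random Fourier features, so a nonlinear one-step-ahead vector autoregression can be fit with plain ridge regression in feature space. The dimensions and toy series below are arbitrary:

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.RandomState(0)
T, d, D = 500, 3, 200                # series length, state dim, number of features
X = rng.randn(T, d).cumsum(axis=0)   # toy multivariate time series

sigma = 1.0
W = rng.randn(d, D) / sigma          # frequencies drawn per Bochner's theorem
b = rng.uniform(0, 2 * np.pi, D)

def phi(x):
    # random Fourier feature map approximating a Gaussian kernel
    return np.sqrt(2.0 / D) * np.cos(x @ W + b)

# one-step-ahead model in feature space: x_{t+1} ~ C phi(x_t)
Phi = phi(X[:-1])
model = Ridge(alpha=1e-2).fit(Phi, X[1:])
x_next = model.predict(phi(X[-1:]))  # forecast the next state
print(x_next.shape)                  # (1, d)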
Some code information can be found here.
And earlier: Random Fourier Features for Operator-Valued Kernels by Romain Brault, Florence d'Alché-Buc, Markus Heinonen

Devoted to multi-task learning and structured output learning, operator-valued kernels provide a flexible tool to build vector-valued functions in the context of Reproducing Kernel Hilbert Spaces. To scale up these methods, we extend the celebrated Random Fourier Feature methodology to get an approximation of operator-valued kernels. We propose a general principle for Operator-valued Random Fourier Feature construction relying on a generalization of Bochner's theorem for translation-invariant operator-valued Mercer kernels. We prove the uniform convergence of the kernel approximation for bounded and unbounded operator random Fourier features using an appropriate Bernstein matrix concentration inequality. An experimental proof-of-concept shows the quality of the approximation and the efficiency of the corresponding linear models on example datasets.
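A quick numerical sanity check of the construction (mine, not the paper's experiments): for a decomposable kernel K(x, z) = k(x, z) A with a Gaussian scalar part, the random-feature estimate should converge to the exact kernel as the number of features D grows. All dimensions below are toy values:

import numpy as np

rng = np.random.RandomState(1)
d, p, D, sigma = 4, 3, 5000, 1.0
x, z = rng.randn(d), rng.randn(d)
B = rng.randn(p, p)
A = B @ B.T                          # positive semidefinite operator part

W = rng.randn(d, D) / sigma          # Bochner: spectral measure of the Gaussian kernel
b = rng.uniform(0, 2 * np.pi, D)
phi = lambda v: np.sqrt(2.0 / D) * np.cos(v @ W + b)

K_approx = (phi(x) @ phi(z)) * A     # ORFF-style estimate of K(x, z) = k(x, z) A
K_exact = np.exp(-np.sum((x - z) ** 2) / (2 * sigma ** 2)) * A
print(np.abs(K_approx - K_exact).max())   # shrinks as D grows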


Tuesday, August 30, 2016

Functional Hashing for Compressing Neural Networks

Among the different ways of mapping Machine Learning to hardware (especially on mobile platforms), here is a new approach for compressing redundant deep learning architectures.


Functional Hashing for Compressing Neural Networks by Lei Shi, Shikun Feng, Zhifan Zhu

As the complexity of deep neural networks (DNNs) tends to grow to absorb the increasing sizes of data, memory and energy consumption has been receiving more and more attention for industrial applications, especially on mobile devices. This paper presents a novel structure based on functional hashing to compress DNNs, namely FunHashNN. For each entry in a deep net, FunHashNN uses multiple low-cost hash functions to fetch values in the compression space, and then employs a small reconstruction network to recover that entry. The reconstruction network is plugged into the whole network and trained jointly. FunHashNN includes the recently proposed HashedNets as a degenerate case, and benefits from larger value capacity and smaller reconstruction loss. We further discuss extensions with dual space hashing and multi-hops. On several benchmark datasets, FunHashNN demonstrates high compression ratios with little loss of prediction accuracy.
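Schematically, the abstract describes something like the following (my reading, in NumPy; the hash functions, pool size and the linear "reconstruction network" here are placeholders, whereas the paper trains a small network jointly with the model):

import numpy as np

rng = np.random.RandomState(0)
K, M = 4, 1024                       # number of hash functions, compressed pool size
values = rng.randn(M) * 0.01         # shared trainable parameter pool
g_w = rng.randn(K)                   # toy linear stand-in for the reconstruction net

def h(i, j, k):
    # cheap stand-in hash; real implementations use e.g. xxhash
    return (i * 1000003 + j * 7919 + k * 104729) % M

def weight(i, j):
    # fetch K values from the pool and recover the virtual entry W[i, j]
    fetched = np.array([values[h(i, j, k)] for k in range(K)])
    return g_w @ fetched

W = np.array([[weight(i, j) for j in range(64)] for i in range(128)])
print(W.shape)                       # a 128x64 layer stored with only M + K numbers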

and earlier from a different group:


Compressing Convolutional Neural Networks by Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, Yixin Chen

Convolutional neural networks (CNN) are increasingly used in many areas of computer vision. They are particularly attractive because of their ability to "absorb" great quantities of labeled data through millions of parameters. However, as model sizes increase, so do the storage and memory requirements of the classifiers. We present a novel network architecture, Frequency-Sensitive Hashed Nets (FreshNets), which exploits inherent redundancy in both convolutional layers and fully-connected layers of a deep learning model, leading to dramatic savings in memory and storage consumption. Based on the key observation that the weights of learned convolutional filters are typically smooth and low-frequency, we first convert filter weights to the frequency domain with a discrete cosine transform (DCT) and use a low-cost hash function to randomly group frequency parameters into hash buckets. All parameters assigned the same hash bucket share a single value learned with standard back-propagation. To further reduce model size we allocate fewer hash buckets to high-frequency components, which are generally less important. We evaluate FreshNets on eight data sets, and show that it leads to drastically better compressed performance than several relevant baselines.
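In sketch form (again mine, not the released code), the frequency-domain hashing looks something like this; the bucket budgets and the low/high frequency split are illustrative:

import numpy as np
from scipy.fftpack import dct, idct

rng = np.random.RandomState(0)
filt = rng.randn(8, 8)               # stand-in for a learned 8x8 convolutional filter

dct2 = lambda a: dct(dct(a, axis=0, norm='ortho'), axis=1, norm='ortho')
idct2 = lambda a: idct(idct(a, axis=0, norm='ortho'), axis=1, norm='ortho')

F = dct2(filt)                       # filter weights in the frequency domain
n_low, n_high = 12, 4                # more buckets for low frequencies
low_vals, high_vals = rng.randn(n_low), rng.randn(n_high)

rec = np.empty_like(F)
for i in range(8):
    for j in range(8):
        if i + j < 8:                # low-frequency coefficients
            rec[i, j] = low_vals[(i * 31 + j) % n_low]
        else:                        # high frequencies share fewer values
            rec[i, j] = high_vals[(i * 31 + j) % n_high]

filt_rec = idct2(rec)                # reconstructed filter in the spatial domain
print(filt_rec.shape)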


Monday, August 29, 2016

Densely Connected Convolutional Networks

If neural networks are instances of the iterations of a solver or a dynamical system, they seldom use more than the previous iterate at each iteration. There is an ongoing line of research that seems to be getting good (CIFAR) results by "remembering" some of these past iterates. From the paper:

Many recent publications address this or related problems. ResNets (He et al., 2015b) and Highway Networks (Srivastava et al., 2015) bypass signal from one layer to the next via identity connections. Stochastic Depth (Huang et al., 2016) shortens ResNets by randomly dropping layers during training to allow better information and gradient flow. Recently, Larsson et al. (2016) introduced FractalNets, which repeatedly combine several parallel layer sequences with different numbers of convolutional blocks to obtain a large nominal depth, while maintaining many short paths in the network. Although these different approaches vary in network topology and training procedure, we observe a key characteristic shared by all of them: they create short paths from earlier layers near the input to those later layers near the output. In this paper we propose an architecture that distills this insight into a simple and clean connectivity pattern. The idea is straightforward, yet compelling: to ensure maximum information flow between layers in the network, we connect all layers directly with each other. To preserve the feed-forward nature, each layer obtains additional inputs from all preceding layers and passes on its own feature maps to all subsequent layers.





Densely Connected Convolutional Networks by Gao Huang, Zhuang Liu, Kilian Q. Weinberger
Recent work has shown that convolutional networks can be substantially deeper, more accurate and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper we embrace this observation and introduce the Dense Convolutional Network (DenseNet), where each layer is directly connected to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections, one between each layer and its subsequent layer (treating the input as layer 0), our network has L(L+1)/2 direct connections. For each layer, the feature maps of all preceding layers are treated as separate inputs whereas its own feature maps are passed on as inputs to all subsequent layers. Our proposed connectivity pattern has several compelling advantages: it alleviates the vanishing gradient problem and strengthens feature propagation; despite the increase in connections, it encourages feature reuse and leads to a substantial reduction of parameters; its models tend to generalize surprisingly well. We evaluate our proposed architecture on five highly competitive object recognition benchmark tasks. The DenseNet obtains significant improvements over the state-of-the-art on all five of them (e.g., yielding 3.74% test error on CIFAR-10, 19.25% on CIFAR-100 and 1.59% on SVHN).
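The connectivity pattern is easy to express; here is a toy sketch in the Keras functional API (my illustration, not the authors' code, and the growth rate and block depth here are arbitrary):

from keras.layers import Input, Conv2D, Concatenate, BatchNormalization, Activation
from keras.models import Model

def dense_block(x, n_layers=4, growth_rate=12):
    for _ in range(n_layers):
        h = BatchNormalization()(x)
        h = Activation('relu')(h)
        h = Conv2D(growth_rate, (3, 3), padding='same')(h)
        x = Concatenate()([x, h])    # pass ALL previous feature maps forward
    return x

inp = Input(shape=(32, 32, 16))
out = dense_block(inp)
model = Model(inp, out)
model.summary()                      # channel count grows by growth_rate per layer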


 An implementation is available on GitHub: https://github.com/liuzhuang13/DenseNet

See also the comments on Reddit.
 
 

Saturday, August 27, 2016

Paris Machine Learning Newsletter, Summer 2016



Paris Machine Learning Newsletter, Summer 2016

Table of contents

1- [English Version] Foreword by Franck and Igor, “We live in interesting times: Dare Mighty Things”
1- [French Version] Foreword by Franck and Igor, “We live in interesting times: Dare Mighty Things”
2- Our calendar
3- france is AI in Paris, Sep 16-18th
4- The first Paris NLP meetup, September 28th
5- Workshop: Big Data Story from Collection to Visualization. Next Step!, Paris, October 7th
6- Discount on Re-work summits

1- [English Version] A foreword by Franck and Igor, “We live in interesting times: Dare Mighty Things”

We’ve had more than 150 speakers in the past three seasons. Two of them made the news this summer: Danny Bickson (E9, Season 1), one of the co-founders of GraphLab, then Dato, then Turi, and Arjun Bansal from Nervana Systems (E12, Season 3). Turi just got acquired by Apple for $300M, and Nervana got acquired by Intel for $350M.

In a different direction, at the last meetup Raymond Francis explained to us what the LA Times would pick up a month later: Curiosity now uses Machine Learning on Mars (AI: NASA's Curiosity rover can now choose its own laser targets on Mars). This news is exciting on two levels: first, robots can now explore the universe better; second, it definitely brings some perspective when we talk about the dichotomy between exploration and exploitation in our discussions.

Unless you have been hiding in a cave, you know that CRISPR continues to be the subject of many fantasies; Jennifer Listgarten talked about it at meetup #4 last season. No subject can hide from our meetups.

There were many other equally interesting presentations and attendant conversations last season, and they are all here.

It is thanks to you that we can organize the meetups and have great speakers, but it is central that we have a number of companies and organizations that host us and sponsor our networking events.

These companies and associations hosted us during season 3

These companies and associations sponsored the very important networking events of season 3

A big thank you goes to these companies and associations, which understand the value of having a lively community around Machine Learning and Data Science here in Paris. If you want to host or sponsor our meetups and networking events for season 4, contact Franck (bardolfranck+sponsorMLParis@gmail.com) or me (igor.carron+sponsorMLParis@gmail.com); our calendar is listed below. We’ll have the first meetup of season 4 at DojoCrea/DojoEvents on September 14th.

This community of more than 3800 members is one of the largest meetups in the world on the subject of Machine Learning. But this is only the tip of the iceberg, as there is a large diversity of other meetups on similar themes. One of them is just starting: the first meetup of the Paris NLP meetup will take place at TheFamily on September 28th. More information can be found below.

We are regularly contacted by various stakeholders and entities (VCs…) who want to engage with our community:

One of them is Paul Strachman, a VC from ISAI Capital, who has decided to organize france is AI in Paris from September 16th to 18th at BPIFrance. Lots of goodness will occur during those days, in particular a workshop on Keras on Friday morning. All relevant information is listed below.

Another is the Office parlementaire d'évaluation des choix scientifiques et technologiques (OPECST), an entity of the French parliament that has started to work on AI in order to write a report to be presented to French lawmakers and decision makers from the executive branch. The two “rapporteurs” are Senator Dominique Gillot, a former minister, and a representative from the Assemblée Nationale, Monsieur Claude de Ganay. We believe this is an excellent initiative in light of what has already happened in the US, where the White House and the OSTP have already put in place a series of workshops to understand the benefits and risks of AI. Meanwhile, the technicians at DARPA have begun to figure out the research themes they could fund to make Machine Learning models more explainable. The faster we develop these tools, the faster AI will become mainstream.

Going back to the OPECST study, Franck and I have put up a Google form to get your input on the matter. The responses to this Google form will be viewable by the secretary of that study. It would be best if you put forth your thoughts before September 5th, 2016. The form is here: https://goo.gl/forms/gNdyEiwTgCmvG7il2

On the academic and research side, NIPS will take place in Barcelona in December, and Yann LeCun announced this summer that ICLR 2017 will take place in Toulon. Igor had submitted a proposal for a workshop at NIPS on “Mapping Machine Learning to Hardware”. We had a prestigious line-up mixing people from ML and electronics, but it was not enough for the gods of NIPS. The hardware barbarians will sit for another year around the ML village :-) It’s only a question of time. This summer several schools and conferences made their slides and videos available; here is a sample:


We are also beginning to see interesting newsletters, such as Jack Clark's Import AI. To register, go here. If you know of others, don't hesitate to let us know.

This summer there were also a number of interesting Q&As on Reddit and Quora; here is a sample:

If you want to pass information to our community through this newsletter, fill out this form: https://goo.gl/forms/S1iHB1EDONjimEHL2

We also have a Facebook page (197 likes), a Google+ page (354 members) and a group on LinkedIn (we have 1344 professionals in the Paris area; if you are looking for data scientists, do not forget to put [JOB] in the title of your announcement). You can also contact us on Twitter: @ParisMLgroup, or use #MLParis and we will RT if it relates to our community.

We live in interesting times, dare mighty things !


1- [French Version] Foreword by Franck and Igor, “We live in interesting times: Dare Mighty Things”

We have had more than 150 speakers over the past three seasons. Two of them made the news recently: Danny Bickson (E9, Season 1), one of the co-founders of GraphLab, which became Turi, and Arjun Bansal of Nervana Systems (E12, Season 3). Turi was acquired by Apple for $300M, and Nervana by Intel for $350M.

In another vein, and as Raymond Francis had explained to us at the last meetup a month earlier, the news was finally announced in the LA Times: Curiosity now uses Machine Learning on Mars (AI: NASA's Curiosity rover can now choose its own laser targets on Mars). This news is rather wonderful: robots can really explore the universe and, above all, it illustrates the dichotomy between exploitation and exploration that animates a large part of our discussions.

CRISPR continues to be the subject of many fantasies; Jennifer Listgarten told us about it at meetup #4 last season. No subject is spared.

There were many other interesting presentations last season; everything is in the archives, here.

It is thanks to you that we can organize meetups and host exceptional guests, but also, and above all, thanks to the companies and associations that hosted us and sponsored the networking events after the presentations.

The following companies and associations hosted us during season 3

The following companies and associations sponsored the networking events of season 3

A big thank you to these companies and associations, which have understood the value of a lively Machine Learning and Data Science community in Paris. If you would like to host us or sponsor the meetups and networking events of season 4, contact Franck (bardolfranck+sponsorMLParis@gmail.com) or me (igor.carron+sponsorMLParis@gmail.com); our calendar of dates is below. We will hold the first meetup of season 4 at DojoCrea/DojoEvents on September 14th.

This community of more than 3800 people makes ours one of the largest Machine Learning meetups in the world. But that is only the tip of the iceberg: there is a great diversity of other meetups on similar themes. One of them is about to hold its first event: the Paris NLP meetup, at TheFamily on September 28th. You will find more information below.

We are regularly contacted by various people and entities (VCs, …) who would like to engage with or consult our community:

One of them is Paul Strachman of the ISAI fund, who is organizing france is AI in Paris from September 16th to 18th at BPIFrance. Lots of good things will happen during those days, in particular a workshop on Keras on Friday morning. All the important information is below.

Another is the Office parlementaire d'évaluation des choix scientifiques et technologiques (OPECST) of the French parliament, which has started working on artificial intelligence with a view to writing a report. The two rapporteurs are Senator Dominique Gillot, a former minister, and Representative Claude de Ganay. We think this is a very good initiative, given that in the United States the White House and the OSTP have already put in place a series of workshops to understand the benefits and risks of AI. That said, the technical side is not standing still either: DARPA is starting to look at the research areas it could fund to make models more explainable. The more we develop these tools, the more artificial intelligence and Machine Learning will become mainstream.

Coming back to the OPECST study, Franck and I have set up a Google form that lets you share your views on the subject. This form will be readable directly by the rapporteur. Please include a way to reach you in case the rapporteur considers that your contribution should be presented and heard by parliament. It would be best if you submitted your views before September 5th, 2016. The form is here: https://goo.gl/forms/gNdyEiwTgCmvG7il2

On the academic and research side, NIPS will take place in Barcelona in December, and Yann LeCun announced over the summer that ICLR 2017 will take place in Toulon. Igor had proposed a workshop at NIPS on “Mapping Machine Learning to Hardware”. We had a great line-up, but it seems that was not enough for the gods of NIPS. The hardware barbarians will have to camp around the ML conferences for another year :-) It is only a matter of time. Also this summer, several ML/Deep Learning schools took place and put their slides and videos online; here is a sample:

Finally, we are starting to see interesting newsletters blossom, such as Jack Clark's, called Import AI. To sign up, it's here. If you know of others, don't hesitate to tell us about them.

This summer there were also several AMA sessions on Reddit and on Quora with the following people:

If you have information to pass along to the community through this newsletter, you can do so by filling out this form: https://goo.gl/forms/S1iHB1EDONjimEHL2

You can also do so by posting directly on our Facebook page (197 likes), our Google+ page (354 members) or our LinkedIn group (more than 1344 professionals in the Paris area; if you are looking for data scientists, don't forget to put [JOB] in the title of your announcement). You can also reach us on our Twitter account @ParisMLgroup, or use #MLParis and we will RT.

If after all that you are not fired up: We live in interesting times, dare mighty things!





2- Our calendar

Want to present? Please fill out this form.
Want to host us? Or sponsor the networking events? Or both? Contact Franck (bardolfranck+sponsorMLParis@gmail.com) or me (igor.carron+sponsorMLParis@gmail.com).

To attend, just register when the meetups open (usually about a week in advance). Franck and I have decided not to keep a waiting list because it does not work. If there are more registrations than the room can hold, only the first to arrive and the presenters will get in. We always set up streaming and, above all, all the presentations should be in our archives BEFORE the meetup.

3- france is AI in Paris, Sep 16-18th

Join us for the largest AI event in France: france is AI in Paris, Sep 16-18th.

This event is organized by ISAI, BPI and French Tech to bring together all the players (startups, tech companies, research centers, meetups…) in the ecosystem for fascinating panels, engaging discussions, a startup competition and networking.

At the event, we will publish a map of the ecosystem and the startup landscape: so make sure to register (or contact paul@isai.vc if you have any questions).

IF YOU ARE A STARTUP, ENTER THE COMPETITION: be one of the 5 finalists to pitch during the event and win a trip to NY and SF to meet the US AI ecosystem and interact with top-tier US VCs. The winner will be automatically accepted into the famous Stanford accelerator program StartX.

IF YOU ARE INTERESTED IN AI, REGISTER FOR THE CONFERENCE: Come listen to AI experts discuss the big challenges ahead, share their experience and network with startups, tech companies and VCs.
Featured speakers include the CTO of Microsoft France, the CEO of INRIA, the CEO of Snips.ai, and executives from IBM Watson...

4- The first Paris NLP meetup, September 28th

Antoine Dusséaux, one of the members of the meetup, has decided to create a new meetup! Here is what he sent us:


Interested in applications of natural language processing (NLP) in your favorite field? Join us for the first Paris NLP Meetup on Sept 28th at TheFamily!

We'll talk about NLP techniques, applications and ongoing research, and we will discuss both traditional and modern NLP approaches, from hand-designed rules to deep learning.

Please RSVP on Meetup: http://bit.ly/ParisNLP1

5- Workshop: Big Data Story from Collection to Visualization. Next Step!, Paris, Friday October 7th, 2016. Mustapha Lebbah writes:

Hello,

We are organizing a one-day workshop (“Workshop: Big Data Story from Collection to Visualization. Next Step!”) to present the results of the Square Predict project (an “investissement d'avenir” Big Data project funded by BPI France), as well as innovative R&D work in data science carried out in the “Data Lab” teams of companies such as AXA, SAFRAN, SARENZA and Data-Fellas. The day is supported by the DMA (Data Mining et Apprentissage) group of the SFdS (Société Française de Statistique). Attendance is free, but registration is mandatory: https://squarepredict.sciencesconf.org/
The workshop will take place at Paris-Descartes. The room will be confirmed shortly.



6- Discount on Re-work summits

Sophie from Re-work writes the following:

Hi Igor

I just wanted to also let you know that we're holding a special offer of 20% off all ticket types to all upcoming summits! Ticket buyers simply enter the discount code SUMMER20 at the checkout.

It also applies to Super Early Bird tickets and passes for Startups and Academics, so it takes down the price dramatically to attend our events and I felt your community may benefit from this.

If you could share this with your community and network we'd be really grateful! I've attached an image if you'd like to share one. There's also a blog post here with the full information if you'd like to share that: https://re-work.co/blog/summer-special-offer-discount-on-all-tickets-to-all-summits-2016

If you'd be so kind as to share the offer, I've written some tweet text with a few variations if you're able to share on Twitter:
Many thanks,
Sophie


That's all for this newsletter!



Credit photo: "Courtesy of NASA/SDO and the AIA, EVE, and HMI science teams."
