Learn how to code papers with TensorFlow, MXNet, PyTorch, CNTK and more @ UAI 2017

August 11th

This summer the heart of machine learning is beating in Australia. Three major conferences, ICML, UAI and IJCAI, are taking place back to back in Sydney and Melbourne from August 6th to August 25th. On August 11th, UAI and ICML are co-locating the ICML workshops and the UAI tutorials. As part of these events, and in an effort to bridge the gap between academic papers and their implementation, MLTrain is teaming up with UAI to provide a full day of training events. We will host several 3-hour sessions in which instructors teach how to implement academic papers using state-of-the-art frameworks such as TensorFlow, PyTorch, MXNet, CNTK and others. The sessions combine lectures with hands-on exercises using IPython notebooks and applications on real data. Events like these will help participants take influential academic papers and apply them successfully in industry. Please follow us on Facebook and Twitter for updates and news.

Registration for the MLTrain training event is independent of registration for the UAI conference.

To register, click here.

Program

Time            Room 1 @ICC C3.1              Room 2 @ICC C4.2
9:00 - 12:00    Generative Models             Neural Abstract Machines
                (Sponsored by Disney)         (Sponsored by Uber)
13:00 - 16:00   LSTMs and Text                Reinforcement Learning
16:00 - 19:00   Deep Recommender Systems      Speech and Time series
                (Sponsored by Amazon)         (Sponsored by NVIDIA)

Instructions

Dear Attendees,

We are very excited to kick off MLTrain @UAI 2017. This is the first time an event like this has been introduced at an academic conference, and we hope that it will become a regular part of the conference.

We have prepared a virtual machine loaded with all the notebooks, papers and other material for the training. You can download it and install it on your machine: you will need to install VirtualBox and import the VM file (mltrain_vm.ova). Instructions for importing a virtual machine can be found here. We are also working on Amazon EC2 instances, so that you can access the material through the cloud.

The notebooks were provided by the authors and are written in the framework of their preference. Although they all work, keep in mind that training models takes time, and you will most likely not be able to train a model end to end during the class. The focus of the workshop is on understanding the papers and how they are coded. The material is rich, and we will most likely not cover all of it during the class; you will have access to the material of every session. Chris Fregly will demonstrate deployment on the cloud in three of the sessions.

Finally, we would like to thank Amazon AWS, Uber and Disney for making this event possible.

Trainers

Chris Fregly

Chris Fregly is Founder and Research Engineer at PipelineIO, a streaming machine learning and artificial intelligence startup based in San Francisco. He is also an Apache Spark contributor, a Netflix Open Source committer, founder of the Global Advanced Spark and TensorFlow Meetup, and author of the O'Reilly training and video series "High Performance TensorFlow in Production." Previously, Chris was a Distributed Systems Engineer at Netflix, a Data Solutions Engineer at Databricks, and a founding member and Principal Engineer at the IBM Spark Technology Center in San Francisco.

Nikolaos Vasiloglou

Nikolaos Vasiloglou holds a PhD from the Department of Electrical and Computer Engineering at the Georgia Institute of Technology. His thesis focused on scalable machine learning over massive datasets. After graduating from Georgia Tech he founded Analytics1305 LLC and Ismion Inc. He architected and developed the PaperBoat machine learning library, which has been successfully integrated into and used in the LogicBlox and HPCC Systems platforms. He has also served as a machine learning consultant for Predictix, Revolution Analytics, Damballa, Tapad and LexisNexis. Vasiloglou has recently focused on Google's TensorFlow and has been active in developing the syllabus for a series of TensorFlow training events.

Alex Dimakis

Alex Dimakis is an Associate Professor in the Electrical & Computer Engineering department at The University of Texas at Austin. Prof. Dimakis received his Ph.D. in 2008 and M.S. in 2005 in electrical engineering and computer sciences from UC Berkeley, and his Diploma from the National Technical University of Athens in 2003. During 2009 he was a CMI postdoctoral scholar at Caltech. He received an NSF CAREER award in 2011, a Google Faculty Research Award in 2012 and the Eli Jury dissertation award in 2008. He is the co-recipient of several best-paper awards, including the joint Information Theory and Communications Society Best Paper Award in 2012.


Sessions

Chris Fregly will demonstrate how you can easily deploy TensorFlow models on the cloud with the Spark framework.
We have also created a service with summaries of the papers: www.snap-learn.com.

Generative Models

One of the holy grails of machine learning is the ability to generate structured data from noise. Modern deep neural networks such as Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs) make it possible to generate synthetic images that look very realistic. In this session you will learn the theory and implementation of several versions of these algorithms.
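
Below is a minimal sketch of one GAN training step in PyTorch, just to illustrate the adversarial setup; the network sizes, hyperparameters and data shapes are illustrative and not those of the session notebooks.

    import torch
    import torch.nn as nn

    noise_dim, data_dim = 64, 784  # illustrative sizes (e.g. flattened 28x28 images)

    # Generator maps noise to synthetic data; discriminator scores real vs. fake.
    G = nn.Sequential(nn.Linear(noise_dim, 128), nn.ReLU(),
                      nn.Linear(128, data_dim), nn.Tanh())
    D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(),
                      nn.Linear(128, 1), nn.Sigmoid())

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCELoss()

    def train_step(real_batch):
        n = real_batch.size(0)
        ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

        # Discriminator step: push real data toward label 1, fakes toward 0.
        fake = G(torch.randn(n, noise_dim))
        loss_d = bce(D(real_batch), ones) + bce(D(fake.detach()), zeros)
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Generator step: try to make the discriminator label fakes as real.
        loss_g = bce(D(fake), ones)
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
        return loss_d.item(), loss_g.item()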

Reinforcement Learning

One of the biggest successes of AI is the explosion of autonomous agents that can play Atari games on their own and beat human performance, thanks to recent advances in reinforcement learning. In this tutorial you will learn about the seminal papers that contributed to the advancement of the field. You will code them and see how to train game-playing agents with the OpenAI Gym framework.
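
For a feel of the training loop, here is a minimal random-policy rollout using the classic Gym API of that era (reset returns an observation, step returns a 4-tuple); in the session the random action is replaced by a learned policy.

    import gym

    env = gym.make("CartPole-v0")
    for episode in range(5):
        obs = env.reset()
        total_reward, done = 0.0, False
        while not done:
            action = env.action_space.sample()  # stand-in for a learned policy
            obs, reward, done, info = env.step(action)
            total_reward += reward
        print("episode", episode, "reward", total_reward)
    env.close()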

Speech and Time series

Speech recognition is a machine learning research problem that has been open for several decades. LSTMs combined with multilayer perceptrons deliver excellent performance in automatic speech recognition (ASR), as they can model both the acoustic component and the language model of speech. We will also see how the same techniques can be applied to other time-series data.
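
As a rough illustration of this architecture, here is a minimal Keras sketch stacking an LSTM (for the temporal acoustic structure) under a small multilayer-perceptron head; all shapes and sizes below are made up for the example.

    from keras.models import Sequential
    from keras.layers import LSTM, Dense

    # Hypothetical input: 100 time steps of 40-dimensional acoustic features,
    # classified into one of 30 phoneme-like classes.
    model = Sequential([
        LSTM(128, input_shape=(100, 40)),   # temporal model over the sequence
        Dense(64, activation="relu"),       # multilayer-perceptron head
        Dense(30, activation="softmax"),    # class probabilities
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy")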

Neural Abstract Machines and Logical Tasks

One of the applications of AI is solving logical tasks like the ones described by the bAbI dataset. Deep learning has supercharged theorem provers and has produced new architectures that can solve Sudoku puzzles. Another milestone is creating computers that are differentiable end to end; such computers can learn how to execute algorithms from examples.

LSTMs and Text

Understanding text and human language is the epitome of AI. In this tutorial you will see how to extract information from text with traditional word2vec models and with more advanced models such as memory networks. You will then build a simple system that can answer questions and generate text with recurrent neural networks.
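
As a warm-up for the word2vec part, here is a toy example with gensim (parameter names follow recent gensim releases; the corpus is obviously made up):

    from gensim.models import Word2Vec

    sentences = [
        ["the", "king", "rules", "the", "kingdom"],
        ["the", "queen", "rules", "the", "kingdom"],
        ["dogs", "and", "cats", "are", "animals"],
    ]
    # Train small embeddings on the toy corpus.
    model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, epochs=100)
    print(model.wv.most_similar("king"))  # nearby words in embedding space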

Deep Recommender Systems

One of the most traditional and successful areas of machine learning is recommender systems. Although classical methods like matrix factorization have been successful, modern AI and deep learning methods are improving their performance. In this tutorial you will learn how to improve your models by incorporating timestamps with dynamical systems. You will also find out how content and rating information can be better fused together.
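
As a baseline for comparison with the deep models, here is a minimal matrix-factorization sketch trained with SGD on a few made-up (user, item, rating) triples:

    import numpy as np

    ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 1.0)]  # toy data
    n_users, n_items, k = 3, 2, 4

    rng = np.random.default_rng(0)
    U = 0.1 * rng.standard_normal((n_users, k))   # user latent factors
    V = 0.1 * rng.standard_normal((n_items, k))   # item latent factors

    lr, reg = 0.05, 0.01
    for epoch in range(200):
        for u, i, r in ratings:
            pu, qi = U[u].copy(), V[i].copy()
            err = r - pu @ qi                     # prediction error
            U[u] += lr * (err * qi - reg * pu)    # gradient steps with
            V[i] += lr * (err * pu - reg * qi)    # L2 regularization

    print(U @ V.T)  # reconstructed rating matrix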

Pricing

                 Early bird (ends June 30th)   Late bird
1 session        $200                          $250
2 sessions       $350                          $400
3 sessions       $450                          $500

* UAI offers some very compelling sponsorship packages for big companies and startups. For more information and custom packages, contact us at uai2017org@gmail.com.

** All proceeds from the event will go to AUAI, the non-profit organization that organizes the annual Conference on Uncertainty in Artificial Intelligence (UAI) and, more generally, promotes research in pursuit of advances in knowledge representation, learning and reasoning under uncertainty.





Sponsors

Golden Sponsors (3)
Bronze Sponsors (2)
Training Session Sponsor
Startup Sponsor
Media Sponsor