
Prof. Max Welling

Vice President Technologies, Qualcomm
Professor of Machine Learning, University of Amsterdam

Bio:

Max Welling is a research chair in Machine Learning at the University of Amsterdam and a Vice President Technologies at Qualcomm. He has a secondary appointment at the Canadian Institute for Advanced Research (CIFAR). In the past he held postdoctoral positions at Caltech ('98-'00), UCL ('00-'01) and the University of Toronto ('01-'03). He received his PhD in '98 under the supervision of Prof. G. 't Hooft. Max Welling served as associate editor-in-chief of IEEE TPAMI from 2011 to 2015. He has served on the board of the NIPS Foundation since 2015 and was program chair and general chair of NIPS in 2013 and 2014, respectively. He was also program chair of AISTATS in 2009 and ECCV in 2016. He has served on the editorial boards of JMLR and JML and was an associate editor for Neurocomputing, JCGS and TPAMI. He received an NSF CAREER Award in 2005 and the ECCV Koenderink Prize in 2010. He is on the board of the Data Science Research Center in Amsterdam. Besides AMLAB, he co-directs deep learning labs at the University of Amsterdam funded by Qualcomm, Bosch, Philips, Microsoft and SAP. He has co-authored over 200 publications in machine learning.

Title:

Powering Deep Learning

Abstract:

Deep Learning has been amazingly successful in applications such as speech recognition, image and video analysis, and machine translation. Yet, compared with the human brain, it is still extremely inefficient, both in terms of data and power. In this talk we will discuss a number of directions for improvement along both of these dimensions. First, we will discuss how symmetries in the data can be exploited to extract more information from each data point, through the use of group convolutional networks. Then we will discuss how a Bayesian view of deep learning can help us compress neural networks, sometimes by a very large amount, thus improving their power efficiency. Finally, we will discuss how spiking neural networks can improve the efficiency of deep learning in the temporal domain.
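
To make the symmetry idea concrete, here is a minimal sketch of a lifting convolution for the p4 group (rotations by 0, 90, 180 and 270 degrees), in the spirit of group convolutional networks: one shared filter is applied in four rotated copies, so rotating the input permutes the responses rather than destroying them. The class name `P4LiftingConv` and all parameters are illustrative assumptions, not code from the talk.

```python
import torch
import torch.nn.functional as F

class P4LiftingConv(torch.nn.Module):
    """Lifting convolution for the p4 group (rotations by 90 degrees).
    Applies rotated copies of one shared filter, so each data point
    informs the filter about all four orientations at once."""

    def __init__(self, in_channels, out_channels, kernel_size):
        super().__init__()
        self.weight = torch.nn.Parameter(
            0.1 * torch.randn(out_channels, in_channels, kernel_size, kernel_size))

    def forward(self, x):  # x: (batch, in_channels, H, W)
        # One response map per rotation of the shared filter.
        responses = [
            F.conv2d(x, torch.rot90(self.weight, r, dims=(2, 3)), padding="same")
            for r in range(4)
        ]
        # The output carries an extra group axis: (batch, out_channels, 4, H, W).
        # Rotating the input rotates the feature maps and cyclically shifts
        # this axis -- the layer is equivariant, not merely invariant.
        return torch.stack(responses, dim=2)

x = torch.randn(2, 3, 32, 32)
y = P4LiftingConv(3, 8, 3)(x)
print(y.shape)  # torch.Size([2, 8, 4, 32, 32])
```

This sketch covers only the lifting step; a full group convolutional network would follow it with convolutions over both the spatial and group axes.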