Can machine learning survive the artificial intelligence revolution?

Speaker
Francis Bach

Data and algorithms are ubiquitous in all scientific, industrial and personal domains. Data now come in multiple forms (text, images, video, web, sensors, etc.), are massive, and require increasingly complex processing beyond mere indexing or the computation of simple statistics, such as recognizing objects in images or translating texts. For all of these tasks, commonly referred to as artificial intelligence (AI), significant recent progress has allowed algorithms to reach levels of performance that were deemed unreachable a few years ago, making these algorithms useful to everyone.

Many scientific fields contribute to AI, but most of the visible progress comes from machine learning and tightly connected fields such as computer vision and natural language processing. Indeed, many of the recent advances are due to the availability of massive data to learn from, large computing infrastructures and new machine learning models (in particular deep neural networks).

Beyond the widely publicized advances, machine learning has always been a field characterized by a constant exchange between theory and practice, with a stream of algorithms that exhibit both good empirical performance on real-world problems and some form of theoretical guarantee. Is this still possible?

In this talk, I will present recent machine learning successes as illustrations and propose some answers to the question above.

Francis Bach is the Distinguished Visiting Faculty of the NSF-TRIPODS Algorithmic Foundations of Data Science Institute. The seminar is part of the CORE Seminar Series, the Data Science Seminar Series, and the ML Seminar Series.
