# Best Research Papers From ICML 2020

This year’s virtual ICML conference hosted over 10,800 attendees from 75 countries. The virtual format evidently makes large research conferences such as ICML more accessible to the AI community worldwide.

With almost 5,000 research papers submitted and an acceptance rate of 21.8%, ICML 2020 featured a total of 1,088 papers. As usual, Outstanding Paper awards were given to exemplary papers at this year’s conference. To help you stay up to date on the most prominent AI research breakthroughs, we’ve summarized the key ideas of these papers.

If you’d like to skip around, here are the papers we featured:

- On Learning Sets of Symmetric Elements
- Tuning-free Plug-and-Play Proximal Algorithm for Inverse Imaging Problems
- Generative Pretraining from Pixels
- Efficiently Sampling Functions from Gaussian Process Posteriors


## ICML 2020 Best Paper Awards

### 1. On Learning Sets of Symmetric Elements, by Haggai Maron, Or Litany, Gal Chechik, Ethan Fetaya

#### Original Abstract

Learning from unordered sets is a fundamental learning setup, recently attracting increasing attention. Research in this area has focused on the case where elements of the set are represented by feature vectors, and far less emphasis has been given to the common case where set elements themselves adhere to their own symmetries. That case is relevant to numerous applications, from deblurring image bursts to multi-view 3D shape recognition and reconstruction.

In this paper, we present a principled approach to learning sets of general symmetric elements. We first characterize the space of linear layers that are equivariant both to element reordering and to the inherent symmetries of elements, like translation in the case of images. We further show that networks that are composed of these layers, called Deep Sets for Symmetric Elements layers (DSS), are universal approximators of both invariant and equivariant functions. DSS layers are also straightforward to implement. Finally, we show that they improve over existing set-learning architectures in a series of experiments with images, graphs, and point-clouds.

#### Our Summary

The research paper focuses on learning sets whose elements exhibit certain symmetries of their own. That case is relevant when learning with sets of images, sets of point clouds, or sets of graphs. The research team from NVIDIA Research, Stanford University, and Bar-Ilan University introduces a principled approach to learning such sets: they first characterize the space of linear layers that are equivariant both to element reordering and to the inherent symmetries of the elements, and then show that networks composed of these layers, called Deep Sets for Symmetric Elements (DSS) layers, are universal approximators of both invariant and equivariant functions.
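To make the idea concrete, here is a minimal pure-Python sketch (not the authors’ implementation) of a DSS-style linear layer for the toy case where each set element is a 1D periodic signal, so the inherent element symmetry is circular translation. The layer combines a Siamese per-element term with a term computed from the permutation-invariant sum of the set, and both terms use circular convolution; the function names and weight shapes are illustrative assumptions.

```python
def circ_conv(x, w):
    """1D circular convolution: a translation-equivariant per-element operation."""
    n = len(x)
    return [sum(w[k] * x[(i - k) % n] for k in range(len(w))) for i in range(n)]

def dss_layer(X, w1, w2):
    """One linear DSS-style layer for a set X of 1D periodic signals.

    Output element i = circ_conv(x_i, w1) + circ_conv(sum_j x_j, w2).
    Because the aggregation term uses a permutation-invariant sum and both
    terms use circular convolution, the layer is equivariant to element
    reordering and to circular translation of the signals.
    """
    n = len(X[0])
    agg = [sum(x[i] for x in X) for i in range(n)]  # permutation-invariant sum
    agg_part = circ_conv(agg, w2)
    return [[a + b for a, b in zip(circ_conv(x, w1), agg_part)] for x in X]
```

Checking the two equivariance properties directly (permuting the set permutes the output rows; circularly shifting every signal circularly shifts every output) is a quick way to convince yourself the construction is sound.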