A New Representation of Successor Features for Transfer across Dissimilar Environments
Majid Abdolshah, Hung Le, Thommen Karimpanal George, Sunil Gupta, Santu Rana, Svetha Venkatesh
Transfer in reinforcement learning is usually achieved through generalisation across tasks. Whilst many studies have investigated transferring knowledge when the reward function changes, they have assumed that the dynamics of the environments remain consistent. Many real-world RL problems require transfer among environ...
https://proceedings.mlr.press/v139/abdolshah21a.html
http://proceedings.mlr.press/v139/abdolshah21a/abdolshah21a.pdf
ICML 2021
Massively Parallel and Asynchronous Tsetlin Machine Architecture Supporting Almost Constant-Time Scaling
Kuruge Darshana Abeyrathna, Bimal Bhattarai, Morten Goodwin, Saeed Rahimi Gorji, Ole-Christoffer Granmo, Lei Jiao, Rupsa Saha, Rohan K. Yadav
Using logical clauses to represent patterns, Tsetlin Machines (TMs) have recently obtained competitive performance in terms of accuracy, memory footprint, energy, and learning speed on several benchmarks. Each TM clause votes for or against a particular class, with classification resolved using a majority vote. While the...
https://proceedings.mlr.press/v139/abeyrathna21a.html
http://proceedings.mlr.press/v139/abeyrathna21a/abeyrathna21a.pdf
ICML 2021
Debiasing Model Updates for Improving Personalized Federated Training
Durmus Alp Emre Acar, Yue Zhao, Ruizhao Zhu, Ramon Matas, Matthew Mattina, Paul Whatmough, Venkatesh Saligrama
We propose a novel method for federated learning that is customized specifically to the objective of a given edge device. In our proposed method, a server trains a global meta-model by collaborating with devices without actually sharing data. The trained global meta-model is then personalized locally by each device to ...
https://proceedings.mlr.press/v139/acar21a.html
http://proceedings.mlr.press/v139/acar21a/acar21a.pdf
ICML 2021
Memory Efficient Online Meta Learning
Durmus Alp Emre Acar, Ruizhao Zhu, Venkatesh Saligrama
We propose a novel algorithm for online meta learning where task instances are sequentially revealed with limited supervision and a learner is expected to meta learn them in each round, so as to allow the learner to customize a task-specific model rapidly with little task-level supervision. A fundamental concern arisin...
https://proceedings.mlr.press/v139/acar21b.html
http://proceedings.mlr.press/v139/acar21b/acar21b.pdf
ICML 2021
Robust Testing and Estimation under Manipulation Attacks
Jayadev Acharya, Ziteng Sun, Huanyu Zhang
We study robust testing and estimation of discrete distributions in the strong contamination model. Our results cover both centralized setting and distributed setting with general local information constraints including communication and LDP constraints. Our technique relates the strength of manipulation attacks to the...
https://proceedings.mlr.press/v139/acharya21a.html
http://proceedings.mlr.press/v139/acharya21a/acharya21a.pdf
ICML 2021
GP-Tree: A Gaussian Process Classifier for Few-Shot Incremental Learning
Idan Achituve, Aviv Navon, Yochai Yemini, Gal Chechik, Ethan Fetaya
Gaussian processes (GPs) are non-parametric, flexible models that work well in many tasks. Combining GPs with deep learning methods via deep kernel learning (DKL) is especially compelling due to the strong representational power induced by the network. However, inference in GPs, whether with or without DKL, can be com...
https://proceedings.mlr.press/v139/achituve21a.html
http://proceedings.mlr.press/v139/achituve21a/achituve21a.pdf
ICML 2021
f-Domain Adversarial Learning: Theory and Algorithms
David Acuna, Guojun Zhang, Marc T. Law, Sanja Fidler
Unsupervised domain adaptation is used in many machine learning applications where, during training, a model has access to unlabeled data in the target domain, and a related labeled dataset. In this paper, we introduce a novel and general domain-adversarial framework. Specifically, we derive a novel generalization boun...
https://proceedings.mlr.press/v139/acuna21a.html
http://proceedings.mlr.press/v139/acuna21a/acuna21a.pdf
ICML 2021
Towards Rigorous Interpretations: a Formalisation of Feature Attribution
Darius Afchar, Vincent Guigue, Romain Hennequin
Feature attribution is often loosely presented as the process of selecting a subset of relevant features as a rationale of a prediction. Task-dependent by nature, precise definitions of "relevance" encountered in the literature are however not always consistent. This lack of clarity stems from the fact that we usually ...
https://proceedings.mlr.press/v139/afchar21a.html
http://proceedings.mlr.press/v139/afchar21a/afchar21a.pdf
ICML 2021
Acceleration via Fractal Learning Rate Schedules
Naman Agarwal, Surbhi Goel, Cyril Zhang
In practical applications of iterative first-order optimization, the learning rate schedule remains notoriously difficult to understand and expensive to tune. We demonstrate the presence of these subtleties even in the innocuous case when the objective is a convex quadratic. We reinterpret an iterative algorithm from t...
https://proceedings.mlr.press/v139/agarwal21a.html
http://proceedings.mlr.press/v139/agarwal21a/agarwal21a.pdf
ICML 2021
A Regret Minimization Approach to Iterative Learning Control
Naman Agarwal, Elad Hazan, Anirudha Majumdar, Karan Singh
We consider the setting of iterative learning control, or model-based policy learning in the presence of uncertain, time-varying dynamics. In this setting, we propose a new performance metric, planning regret, which replaces the standard stochastic uncertainty assumptions with worst case regret. Based on recent advance...
https://proceedings.mlr.press/v139/agarwal21b.html
http://proceedings.mlr.press/v139/agarwal21b/agarwal21b.pdf
ICML 2021
Towards the Unification and Robustness of Perturbation and Gradient Based Explanations
Sushant Agarwal, Shahin Jabbari, Chirag Agarwal, Sohini Upadhyay, Steven Wu, Himabindu Lakkaraju
As machine learning black boxes are increasingly being deployed in critical domains such as healthcare and criminal justice, there has been a growing emphasis on developing techniques for explaining these black boxes in a post hoc manner. In this work, we analyze two popular post hoc interpretation techniques: SmoothGr...
https://proceedings.mlr.press/v139/agarwal21c.html
http://proceedings.mlr.press/v139/agarwal21c/agarwal21c.pdf
ICML 2021
Label Inference Attacks from Log-loss Scores
Abhinav Aggarwal, Shiva Kasiviswanathan, Zekun Xu, Oluwaseyi Feyisetan, Nathanael Teissier
The log-loss (also known as cross-entropy loss) metric is ubiquitously used across machine learning applications to assess the performance of classification algorithms. In this paper, we investigate the problem of inferring the labels of a dataset from single (or multiple) log-loss score(s), without any other access to the...
https://proceedings.mlr.press/v139/aggarwal21a.html
http://proceedings.mlr.press/v139/aggarwal21a/aggarwal21a.pdf
ICML 2021
Deep kernel processes
Laurence Aitchison, Adam Yang, Sebastian W. Ober
We define deep kernel processes in which positive definite Gram matrices are progressively transformed by nonlinear kernel functions and by sampling from (inverse) Wishart distributions. Remarkably, we find that deep Gaussian processes (DGPs), Bayesian neural networks (BNNs), infinite BNNs, and infinite BNNs with bottl...
https://proceedings.mlr.press/v139/aitchison21a.html
http://proceedings.mlr.press/v139/aitchison21a/aitchison21a.pdf
ICML 2021
How Does Loss Function Affect Generalization Performance of Deep Learning? Application to Human Age Estimation
Ali Akbari, Muhammad Awais, Manijeh Bashar, Josef Kittler
Good generalization performance across a wide variety of domains caused by many external and internal factors is the fundamental goal of any machine learning algorithm. This paper theoretically proves that the choice of loss function matters for improving the generalization performance of deep learning-based systems. B...
https://proceedings.mlr.press/v139/akbari21a.html
http://proceedings.mlr.press/v139/akbari21a/akbari21a.pdf
ICML 2021
On Learnability via Gradient Method for Two-Layer ReLU Neural Networks in Teacher-Student Setting
Shunta Akiyama, Taiji Suzuki
Deep learning empirically achieves high performance in many applications, but its training dynamics has not been fully understood theoretically. In this paper, we explore theoretical analysis on training two-layer ReLU neural networks in a teacher-student regression model, in which a student network learns an unknown t...
https://proceedings.mlr.press/v139/akiyama21a.html
http://proceedings.mlr.press/v139/akiyama21a/akiyama21a.pdf
ICML 2021
Slot Machines: Discovering Winning Combinations of Random Weights in Neural Networks
Maxwell M Aladago, Lorenzo Torresani
In contrast to traditional weight optimization in a continuous space, we demonstrate the existence of effective random networks whose weights are never updated. By selecting a weight among a fixed set of random values for each individual connection, our method uncovers combinations of random weights that match the perf...
https://proceedings.mlr.press/v139/aladago21a.html
http://proceedings.mlr.press/v139/aladago21a/aladago21a.pdf
ICML 2021
A large-scale benchmark for few-shot program induction and synthesis
Ferran Alet, Javier Lopez-Contreras, James Koppel, Maxwell Nye, Armando Solar-Lezama, Tomas Lozano-Perez, Leslie Kaelbling, Joshua Tenenbaum
A landmark challenge for AI is to learn flexible, powerful representations from small numbers of examples. On an important class of tasks, hypotheses in the form of programs provide extreme generalization capabilities from surprisingly few examples. However, whereas large natural few-shot learning image benchmarks have...
https://proceedings.mlr.press/v139/alet21a.html
http://proceedings.mlr.press/v139/alet21a/alet21a.pdf
ICML 2021
Robust Pure Exploration in Linear Bandits with Limited Budget
Ayya Alieva, Ashok Cutkosky, Abhimanyu Das
We consider the pure exploration problem in the fixed-budget linear bandit setting. We provide a new algorithm that identifies the best arm with high probability while being robust to unknown levels of observation noise as well as to moderate levels of misspecification in the linear model. Our technique combines prior ...
https://proceedings.mlr.press/v139/alieva21a.html
http://proceedings.mlr.press/v139/alieva21a/alieva21a.pdf
ICML 2021
Communication-Efficient Distributed Optimization with Quantized Preconditioners
Foivos Alimisis, Peter Davies, Dan Alistarh
We investigate fast and communication-efficient algorithms for the classic problem of minimizing a sum of strongly convex and smooth functions that are distributed among $n$ different nodes, which can communicate using a limited number of bits. Most previous communication-efficient approaches for this problem are limit...
https://proceedings.mlr.press/v139/alimisis21a.html
http://proceedings.mlr.press/v139/alimisis21a/alimisis21a.pdf
ICML 2021
Non-Exponentially Weighted Aggregation: Regret Bounds for Unbounded Loss Functions
Pierre Alquier
We tackle the problem of online optimization with a general, possibly unbounded, loss function. It is well known that when the loss is bounded, the exponentially weighted aggregation strategy (EWA) leads to a regret in $\sqrt{T}$ after $T$ steps. In this paper, we study a generalized aggregation strategy, where the wei...
https://proceedings.mlr.press/v139/alquier21a.html
http://proceedings.mlr.press/v139/alquier21a/alquier21a.pdf
ICML 2021
Dataset Dynamics via Gradient Flows in Probability Space
David Alvarez-Melis, Nicolò Fusi
Various machine learning tasks, from generative modeling to domain adaptation, revolve around the concept of dataset transformation and manipulation. While various methods exist for transforming unlabeled datasets, principled methods to do so for labeled (e.g., classification) datasets are missing. In this work, we pro...
https://proceedings.mlr.press/v139/alvarez-melis21a.html
http://proceedings.mlr.press/v139/alvarez-melis21a/alvarez-melis21a.pdf
ICML 2021
Submodular Maximization subject to a Knapsack Constraint: Combinatorial Algorithms with Near-optimal Adaptive Complexity
Georgios Amanatidis, Federico Fusco, Philip Lazos, Stefano Leonardi, Alberto Marchetti-Spaccamela, Rebecca Reiffenhäuser
The growing need to deal with massive instances motivates the design of algorithms balancing the quality of the solution with applicability. For the latter, an important measure is the \emph{adaptive complexity}, capturing the number of sequential rounds of parallel computation needed. In this work we obtain the first ...
https://proceedings.mlr.press/v139/amanatidis21a.html
http://proceedings.mlr.press/v139/amanatidis21a/amanatidis21a.pdf
ICML 2021
Safe Reinforcement Learning with Linear Function Approximation
Sanae Amani, Christos Thrampoulidis, Lin Yang
Safety in reinforcement learning has become increasingly important in recent years. Yet, existing solutions either fail to strictly avoid choosing unsafe actions, which may lead to catastrophic results in safety-critical systems, or fail to provide regret guarantees for settings where safety constraints need to be lear...
https://proceedings.mlr.press/v139/amani21a.html
http://proceedings.mlr.press/v139/amani21a/amani21a.pdf
ICML 2021
Automatic variational inference with cascading flows
Luca Ambrogioni, Gianluigi Silvestri, Marcel van Gerven
The automation of probabilistic reasoning is one of the primary aims of machine learning. Recently, the confluence of variational inference and deep learning has led to powerful and flexible automatic inference methods that can be trained by stochastic gradient descent. In particular, normalizing flows are highly param...
https://proceedings.mlr.press/v139/ambrogioni21a.html
http://proceedings.mlr.press/v139/ambrogioni21a/ambrogioni21a.pdf
ICML 2021
Sparse Bayesian Learning via Stepwise Regression
Sebastian E. Ament, Carla P. Gomes
Sparse Bayesian Learning (SBL) is a powerful framework for attaining sparsity in probabilistic models. Herein, we propose a coordinate ascent algorithm for SBL termed Relevance Matching Pursuit (RMP) and show that, as its noise variance parameter goes to zero, RMP exhibits a surprising connection to Stepwise Regression...
https://proceedings.mlr.press/v139/ament21a.html
http://proceedings.mlr.press/v139/ament21a/ament21a.pdf
ICML 2021
Locally Persistent Exploration in Continuous Control Tasks with Sparse Rewards
Susan Amin, Maziar Gomrokchi, Hossein Aboutalebi, Harsh Satija, Doina Precup
A major challenge in reinforcement learning is the design of exploration strategies, especially for environments with sparse reward structures and continuous state and action spaces. Intuitively, if the reinforcement signal is very scarce, the agent should rely on some form of short-term memory in order to cover its en...
https://proceedings.mlr.press/v139/amin21a.html
http://proceedings.mlr.press/v139/amin21a/amin21a.pdf
ICML 2021
Preferential Temporal Difference Learning
Nishanth Anand, Doina Precup
Temporal-Difference (TD) learning is a general and very useful tool for estimating the value function of a given policy, which in turn is required to find good policies. Generally speaking, TD learning updates states whenever they are visited. When the agent lands in a state, its value can be used to compute the TD-err...
https://proceedings.mlr.press/v139/anand21a.html
http://proceedings.mlr.press/v139/anand21a/anand21a.pdf
ICML 2021
Unitary Branching Programs: Learnability and Lower Bounds
Fidel Ernesto Diaz Andino, Maria Kokkou, Mateus De Oliveira Oliveira, Farhad Vadiee
Bounded width branching programs are a formalism that can be used to capture the notion of non-uniform constant-space computation. In this work, we study a generalized version of bounded width branching programs where instructions are defined by unitary matrices of bounded dimension. We introduce a new learning framewo...
https://proceedings.mlr.press/v139/andino21a.html
http://proceedings.mlr.press/v139/andino21a/andino21a.pdf
ICML 2021
The Logical Options Framework
Brandon Araki, Xiao Li, Kiran Vodrahalli, Jonathan Decastro, Micah Fry, Daniela Rus
Learning composable policies for environments with complex rules and tasks is a challenging problem. We introduce a hierarchical reinforcement learning framework called the Logical Options Framework (LOF) that learns policies that are satisfying, optimal, and composable. LOF efficiently learns policies that satisfy tas...
https://proceedings.mlr.press/v139/araki21a.html
http://proceedings.mlr.press/v139/araki21a/araki21a.pdf
ICML 2021
Annealed Flow Transport Monte Carlo
Michael Arbel, Alex Matthews, Arnaud Doucet
Annealed Importance Sampling (AIS) and its Sequential Monte Carlo (SMC) extensions are state-of-the-art methods for estimating normalizing constants of probability distributions. We propose here a novel Monte Carlo algorithm, Annealed Flow Transport (AFT), that builds upon AIS and SMC and combines them with normalizing...
https://proceedings.mlr.press/v139/arbel21a.html
http://proceedings.mlr.press/v139/arbel21a/arbel21a.pdf
ICML 2021
Permutation Weighting
David Arbour, Drew Dimmery, Arjun Sondhi
A commonly applied approach for estimating causal effects from observational data is to apply weights which render treatments independent of observed pre-treatment covariates. Recently, emphasis has been placed on deriving balancing weights which explicitly target this independence condition. In this work we introduce p...
https://proceedings.mlr.press/v139/arbour21a.html
http://proceedings.mlr.press/v139/arbour21a/arbour21a.pdf
ICML 2021
Analyzing the tree-layer structure of Deep Forests
Ludovic Arnould, Claire Boyer, Erwan Scornet
Random forests on the one hand, and neural networks on the other hand, have met great success in the machine learning community for their predictive performance. Combinations of both have been proposed in the literature, notably leading to the so-called deep forests (DF) (Zhou & Feng, 2019). In this paper, our aim is no...
https://proceedings.mlr.press/v139/arnould21a.html
http://proceedings.mlr.press/v139/arnould21a/arnould21a.pdf
ICML 2021
Dropout: Explicit Forms and Capacity Control
Raman Arora, Peter Bartlett, Poorya Mianjy, Nathan Srebro
We investigate the capacity control provided by dropout in various machine learning problems. First, we study dropout for matrix completion, where it induces a distribution-dependent regularizer that equals the weighted trace-norm of the product of the factors. In deep learning, we show that the distribution-dependent ...
https://proceedings.mlr.press/v139/arora21a.html
http://proceedings.mlr.press/v139/arora21a/arora21a.pdf
ICML 2021
Tighter Bounds on the Log Marginal Likelihood of Gaussian Process Regression Using Conjugate Gradients
Artem Artemev, David R. Burt, Mark van der Wilk
We propose a lower bound on the log marginal likelihood of Gaussian process regression models that can be computed without matrix factorisation of the full kernel matrix. We show that approximate maximum likelihood learning of model parameters by maximising our lower bound retains many benefits of the sparse variationa...
https://proceedings.mlr.press/v139/artemev21a.html
http://proceedings.mlr.press/v139/artemev21a/artemev21a.pdf
ICML 2021
Deciding What to Learn: A Rate-Distortion Approach
Dilip Arumugam, Benjamin Van Roy
Agents that learn to select optimal actions represent a prominent focus of the sequential decision-making literature. In the face of a complex environment or constraints on time and resources, however, aiming to synthesize such an optimal policy can become infeasible. These scenarios give rise to an important trade-off...
https://proceedings.mlr.press/v139/arumugam21a.html
http://proceedings.mlr.press/v139/arumugam21a/arumugam21a.pdf
ICML 2021
Private Adaptive Gradient Methods for Convex Optimization
Hilal Asi, John Duchi, Alireza Fallah, Omid Javidbakht, Kunal Talwar
We study adaptive methods for differentially private convex optimization, proposing and analyzing differentially private variants of a Stochastic Gradient Descent (SGD) algorithm with adaptive stepsizes, as well as the AdaGrad algorithm. We provide upper bounds on the regret of both algorithms and show that the bounds ...
https://proceedings.mlr.press/v139/asi21a.html
http://proceedings.mlr.press/v139/asi21a/asi21a.pdf
ICML 2021
Private Stochastic Convex Optimization: Optimal Rates in L1 Geometry
Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar
Stochastic convex optimization over an $\ell_1$-bounded domain is ubiquitous in machine learning applications such as LASSO but remains poorly understood when learning with differential privacy. We show that, up to logarithmic factors the optimal excess population loss of any $(\epsilon,\delta)$-differentially private ...
https://proceedings.mlr.press/v139/asi21b.html
http://proceedings.mlr.press/v139/asi21b/asi21b.pdf
ICML 2021
Combinatorial Blocking Bandits with Stochastic Delays
Alexia Atsidakou, Orestis Papadigenopoulos, Soumya Basu, Constantine Caramanis, Sanjay Shakkottai
Recent work has considered natural variations of the {\em multi-armed bandit} problem, where the reward distribution of each arm is a special function of the time passed since its last pulling. In this direction, a simple (yet widely applicable) model is that of {\em blocking bandits}, where an arm becomes unavailable ...
https://proceedings.mlr.press/v139/atsidakou21a.html
http://proceedings.mlr.press/v139/atsidakou21a/atsidakou21a.pdf
ICML 2021
Dichotomous Optimistic Search to Quantify Human Perception
Julien Audiffren
In this paper we address a variant of the continuous multi-armed bandits problem, called the threshold estimation problem, which is at the heart of many psychometric experiments. Here, the objective is to estimate the sensitivity threshold for an unknown psychometric function Psi, which is assumed to be non-decreasing ...
https://proceedings.mlr.press/v139/audiffren21a.html
http://proceedings.mlr.press/v139/audiffren21a/audiffren21a.pdf
ICML 2021
Federated Learning under Arbitrary Communication Patterns
Dmitrii Avdiukhin, Shiva Kasiviswanathan
Federated Learning is a distributed learning setting where the goal is to train a centralized model with training data distributed over a large number of heterogeneous clients, each with unreliable and relatively slow network connections. A common optimization approach used in federated learning is based on the idea of...
https://proceedings.mlr.press/v139/avdiukhin21a.html
http://proceedings.mlr.press/v139/avdiukhin21a/avdiukhin21a.pdf
ICML 2021
Asynchronous Distributed Learning: Adapting to Gradient Delays without Prior Knowledge
Rotem Zamir Aviv, Ido Hakimi, Assaf Schuster, Kfir Yehuda Levy
We consider stochastic convex optimization problems, where several machines act asynchronously in parallel while sharing a common memory. We propose a robust training method for the constrained setting and derive non-asymptotic convergence guarantees that do not depend on prior knowledge of update delays, objective smo...
https://proceedings.mlr.press/v139/aviv21a.html
http://proceedings.mlr.press/v139/aviv21a/aviv21a.pdf
ICML 2021
Decomposable Submodular Function Minimization via Maximum Flow
Kyriakos Axiotis, Adam Karczmarz, Anish Mukherjee, Piotr Sankowski, Adrian Vladu
This paper bridges discrete and continuous optimization approaches for decomposable submodular function minimization, in both the standard and parametric settings. We provide improved running times for this problem by reducing it to a number of calls to a maximum flow oracle. When each function in the decomposition act...
https://proceedings.mlr.press/v139/axiotis21a.html
http://proceedings.mlr.press/v139/axiotis21a/axiotis21a.pdf
ICML 2021
Differentially Private Query Release Through Adaptive Projection
Sergul Aydore, William Brown, Michael Kearns, Krishnaram Kenthapadi, Luca Melis, Aaron Roth, Ankit A. Siva
We propose, implement, and evaluate a new algorithm for releasing answers to very large numbers of statistical queries like k-way marginals, subject to differential privacy. Our algorithm makes adaptive use of a continuous relaxation of the Projection Mechanism, which answers queries on the private dataset using simple p...
https://proceedings.mlr.press/v139/aydore21a.html
http://proceedings.mlr.press/v139/aydore21a/aydore21a.pdf
ICML 2021
On the Implicit Bias of Initialization Shape: Beyond Infinitesimal Mirror Descent
Shahar Azulay, Edward Moroshko, Mor Shpigel Nacson, Blake E Woodworth, Nathan Srebro, Amir Globerson, Daniel Soudry
Recent work has highlighted the role of initialization scale in determining the structure of the solutions that gradient methods converge to. In particular, it was shown that large initialization leads to the neural tangent kernel regime solution, whereas small initialization leads to so-called “rich regimes”. However,...
https://proceedings.mlr.press/v139/azulay21a.html
http://proceedings.mlr.press/v139/azulay21a/azulay21a.pdf
ICML 2021
On-Off Center-Surround Receptive Fields for Accurate and Robust Image Classification
Zahra Babaiee, Ramin Hasani, Mathias Lechner, Daniela Rus, Radu Grosu
Robustness to variations in lighting conditions is a key objective for any deep vision system. To this end, our paper extends the receptive field of convolutional neural networks with two residual components, ubiquitous in the visual processing system of vertebrates: On-center and off-center pathways, with an excitator...
https://proceedings.mlr.press/v139/babaiee21a.html
http://proceedings.mlr.press/v139/babaiee21a/babaiee21a.pdf
ICML 2021
Uniform Convergence, Adversarial Spheres and a Simple Remedy
Gregor Bachmann, Seyed-Mohsen Moosavi-Dezfooli, Thomas Hofmann
Previous work has cast doubt on the general framework of uniform convergence and its ability to explain generalization in neural networks. By considering a specific dataset, it was observed that a neural network completely misclassifies a projection of the training data (adversarial set), rendering any existing general...
https://proceedings.mlr.press/v139/bachmann21a.html
http://proceedings.mlr.press/v139/bachmann21a/bachmann21a.pdf
ICML 2021
Faster Kernel Matrix Algebra via Density Estimation
Arturs Backurs, Piotr Indyk, Cameron Musco, Tal Wagner
We study fast algorithms for computing basic properties of an n x n positive semidefinite kernel matrix K corresponding to n points x_1,...,x_n in R^d. In particular, we consider estimating the sum of kernel matrix entries, along with its top eigenvalue and eigenvector. These are some of the most basic problems def...
https://proceedings.mlr.press/v139/backurs21a.html
http://proceedings.mlr.press/v139/backurs21a/backurs21a.pdf
ICML 2021
Robust Reinforcement Learning using Least Squares Policy Iteration with Provable Performance Guarantees
Kishan Panaganti Badrinath, Dileep Kalathil
This paper addresses the problem of model-free reinforcement learning for Robust Markov Decision Process (RMDP) with large state spaces. The goal of the RMDPs framework is to find a policy that is robust against the parameter uncertainties due to the mismatch between the simulator model and real-world settings. We firs...
https://proceedings.mlr.press/v139/badrinath21a.html
https://proceedings.mlr.press/v139/badrinath21a.html
https://proceedings.mlr.press/v139/badrinath21a.html
http://proceedings.mlr.press/v139/badrinath21a/badrinath21a.pdf
ICML 2021
Skill Discovery for Exploration and Planning using Deep Skill Graphs
Akhil Bagaria, Jason K Senthil, George Konidaris
We introduce a new skill-discovery algorithm that builds a discrete graph representation of large continuous MDPs, where nodes correspond to skill subgoals and the edges to skill policies. The agent constructs this graph during an unsupervised training phase where it interleaves discovering skills and planning using th...
https://proceedings.mlr.press/v139/bagaria21a.html
https://proceedings.mlr.press/v139/bagaria21a.html
https://proceedings.mlr.press/v139/bagaria21a.html
http://proceedings.mlr.press/v139/bagaria21a/bagaria21a.pdf
ICML 2021
Locally Adaptive Label Smoothing Improves Predictive Churn
Dara Bahri, Heinrich Jiang
Training modern neural networks is an inherently noisy process that can lead to high \emph{prediction churn}– disagreements between re-trainings of the same model due to factors such as randomization in the parameter initialization and mini-batches– even when the trained models all attain similar accuracies. Such predi...
https://proceedings.mlr.press/v139/bahri21a.html
https://proceedings.mlr.press/v139/bahri21a.html
https://proceedings.mlr.press/v139/bahri21a.html
http://proceedings.mlr.press/v139/bahri21a/bahri21a.pdf
ICML 2021
How Important is the Train-Validation Split in Meta-Learning?
Yu Bai, Minshuo Chen, Pan Zhou, Tuo Zhao, Jason Lee, Sham Kakade, Huan Wang, Caiming Xiong
Meta-learning aims to perform fast adaptation on a new task through learning a “prior” from multiple existing tasks. A common practice in meta-learning is to perform a train-validation split (\emph{train-val method}) where the prior adapts to the task on one split of the data, and the resulting predictor is evaluated o...
https://proceedings.mlr.press/v139/bai21a.html
https://proceedings.mlr.press/v139/bai21a.html
https://proceedings.mlr.press/v139/bai21a.html
http://proceedings.mlr.press/v139/bai21a/bai21a.pdf
ICML 2021
Stabilizing Equilibrium Models by Jacobian Regularization
Shaojie Bai, Vladlen Koltun, Zico Kolter
Deep equilibrium networks (DEQs) are a new class of models that eschews traditional depth in favor of finding the fixed point of a single non-linear layer. These models have been shown to achieve performance competitive with the state-of-the-art deep networks while using significantly less memory. Yet they are also slo...
https://proceedings.mlr.press/v139/bai21b.html
https://proceedings.mlr.press/v139/bai21b.html
https://proceedings.mlr.press/v139/bai21b.html
http://proceedings.mlr.press/v139/bai21b/bai21b.pdf
ICML 2021
Don’t Just Blame Over-parametrization for Over-confidence: Theoretical Analysis of Calibration in Binary Classification
Yu Bai, Song Mei, Huan Wang, Caiming Xiong
Modern machine learning models with high accuracy are often miscalibrated—the predicted top probability does not reflect the actual accuracy, and tends to be \emph{over-confident}. It is commonly believed that such over-confidence is mainly due to \emph{over-parametrization}, in particular when the model is large enoug...
https://proceedings.mlr.press/v139/bai21c.html
https://proceedings.mlr.press/v139/bai21c.html
https://proceedings.mlr.press/v139/bai21c.html
http://proceedings.mlr.press/v139/bai21c/bai21c.pdf
ICML 2021
Principled Exploration via Optimistic Bootstrapping and Backward Induction
Chenjia Bai, Lingxiao Wang, Lei Han, Jianye Hao, Animesh Garg, Peng Liu, Zhaoran Wang
One principled approach for provably efficient exploration is incorporating the upper confidence bound (UCB) into the value function as a bonus. However, UCB is specified to deal with linear and tabular settings and is incompatible with Deep Reinforcement Learning (DRL). In this paper, we propose a principled explorati...
https://proceedings.mlr.press/v139/bai21d.html
https://proceedings.mlr.press/v139/bai21d.html
https://proceedings.mlr.press/v139/bai21d.html
http://proceedings.mlr.press/v139/bai21d/bai21d.pdf
ICML 2021
GLSearch: Maximum Common Subgraph Detection via Learning to Search
Yunsheng Bai, Derek Xu, Yizhou Sun, Wei Wang
Detecting the Maximum Common Subgraph (MCS) between two input graphs is fundamental for applications in drug synthesis, malware detection, cloud computing, etc. However, MCS computation is NP-hard, and state-of-the-art MCS solvers rely on heuristic search algorithms which in practice cannot find good solutions for large...
https://proceedings.mlr.press/v139/bai21e.html
https://proceedings.mlr.press/v139/bai21e.html
https://proceedings.mlr.press/v139/bai21e.html
http://proceedings.mlr.press/v139/bai21e/bai21e.pdf
ICML 2021
Breaking the Limits of Message Passing Graph Neural Networks
Muhammet Balcilar, Pierre Heroux, Benoit Gauzere, Pascal Vasseur, Sebastien Adam, Paul Honeine
Since the Message Passing (Graph) Neural Networks (MPNNs) have a linear complexity with respect to the number of nodes when applied to sparse graphs, they have been widely implemented and still raise a lot of interest even though their theoretical expressive power is limited to the first order Weisfeiler-Lehman test (1...
https://proceedings.mlr.press/v139/balcilar21a.html
https://proceedings.mlr.press/v139/balcilar21a.html
https://proceedings.mlr.press/v139/balcilar21a.html
http://proceedings.mlr.press/v139/balcilar21a/balcilar21a.pdf
ICML 2021
Instance Specific Approximations for Submodular Maximization
Eric Balkanski, Sharon Qian, Yaron Singer
The predominant measure for the performance of an algorithm is its worst-case approximation guarantee. While worst-case approximations give desirable robustness guarantees, they can differ significantly from the performance of an algorithm in practice. For the problem of monotone submodular maximization under a cardina...
https://proceedings.mlr.press/v139/balkanski21a.html
https://proceedings.mlr.press/v139/balkanski21a.html
https://proceedings.mlr.press/v139/balkanski21a.html
http://proceedings.mlr.press/v139/balkanski21a/balkanski21a.pdf
ICML 2021
Augmented World Models Facilitate Zero-Shot Dynamics Generalization From a Single Offline Environment
Philip J Ball, Cong Lu, Jack Parker-Holder, Stephen Roberts
Reinforcement learning from large-scale offline datasets provides us with the ability to learn policies without potentially unsafe or impractical exploration. Significant progress has been made in the past few years in dealing with the challenge of correcting for differing behavior between the data collection and learn...
https://proceedings.mlr.press/v139/ball21a.html
https://proceedings.mlr.press/v139/ball21a.html
https://proceedings.mlr.press/v139/ball21a.html
http://proceedings.mlr.press/v139/ball21a/ball21a.pdf
ICML 2021
Regularized Online Allocation Problems: Fairness and Beyond
Santiago Balseiro, Haihao Lu, Vahab Mirrokni
Online allocation problems with resource constraints have a rich history in computer science and operations research. In this paper, we introduce the regularized online allocation problem, a variant that includes a non-linear regularizer acting on the total resource consumption. In this problem, requests repeatedly arr...
https://proceedings.mlr.press/v139/balseiro21a.html
https://proceedings.mlr.press/v139/balseiro21a.html
https://proceedings.mlr.press/v139/balseiro21a.html
http://proceedings.mlr.press/v139/balseiro21a/balseiro21a.pdf
ICML 2021
Predict then Interpolate: A Simple Algorithm to Learn Stable Classifiers
Yujia Bao, Shiyu Chang, Regina Barzilay
We propose Predict then Interpolate (PI), a simple algorithm for learning correlations that are stable across environments. The algorithm follows from the intuition that when using a classifier trained on one environment to make predictions on examples from another environment, its mistakes are informative as to which ...
https://proceedings.mlr.press/v139/bao21a.html
https://proceedings.mlr.press/v139/bao21a.html
https://proceedings.mlr.press/v139/bao21a.html
http://proceedings.mlr.press/v139/bao21a/bao21a.pdf
ICML 2021
Variational (Gradient) Estimate of the Score Function in Energy-based Latent Variable Models
Fan Bao, Kun Xu, Chongxuan Li, Lanqing Hong, Jun Zhu, Bo Zhang
This paper presents new estimates of the score function and its gradient with respect to the model parameters in a general energy-based latent variable model (EBLVM). The score function and its gradient can be expressed as combinations of expectation and covariance terms over the (generally intractable) posterior of th...
https://proceedings.mlr.press/v139/bao21b.html
https://proceedings.mlr.press/v139/bao21b.html
https://proceedings.mlr.press/v139/bao21b.html
http://proceedings.mlr.press/v139/bao21b/bao21b.pdf
ICML 2021
Compositional Video Synthesis with Action Graphs
Amir Bar, Roei Herzig, Xiaolong Wang, Anna Rohrbach, Gal Chechik, Trevor Darrell, Amir Globerson
Videos of actions are complex signals containing rich compositional structure in space and time. Current video generation methods lack the ability to condition the generation on multiple coordinated and potentially simultaneous timed actions. To address this challenge, we propose to represent the actions in a graph str...
https://proceedings.mlr.press/v139/bar21a.html
https://proceedings.mlr.press/v139/bar21a.html
https://proceedings.mlr.press/v139/bar21a.html
http://proceedings.mlr.press/v139/bar21a/bar21a.pdf
ICML 2021
Approximating a Distribution Using Weight Queries
Nadav Barak, Sivan Sabato
We consider a novel challenge: approximating a distribution without the ability to randomly sample from that distribution. We study how such an approximation can be obtained using *weight queries*. Given some data set of examples, a weight query presents one of the examples to an oracle, which returns the probability, ...
https://proceedings.mlr.press/v139/barak21a.html
https://proceedings.mlr.press/v139/barak21a.html
https://proceedings.mlr.press/v139/barak21a.html
http://proceedings.mlr.press/v139/barak21a/barak21a.pdf
ICML 2021
Graph Convolution for Semi-Supervised Classification: Improved Linear Separability and Out-of-Distribution Generalization
Aseem Baranwal, Kimon Fountoulakis, Aukosh Jagannath
Recently there has been increased interest in semi-supervised classification in the presence of graphical information. A new class of learning models has emerged that relies, at its most basic level, on classifying the data after first applying a graph convolution. To understand the merits of this approach, we study th...
https://proceedings.mlr.press/v139/baranwal21a.html
https://proceedings.mlr.press/v139/baranwal21a.html
https://proceedings.mlr.press/v139/baranwal21a.html
http://proceedings.mlr.press/v139/baranwal21a/baranwal21a.pdf
ICML 2021
Training Quantized Neural Networks to Global Optimality via Semidefinite Programming
Burak Bartan, Mert Pilanci
Neural networks (NNs) have been extremely successful across many tasks in machine learning. Quantization of NN weights has become an important topic due to its impact on their energy efficiency, inference time and deployment on hardware. Although post-training quantization is well-studied, training optimal quantized NN...
https://proceedings.mlr.press/v139/bartan21a.html
https://proceedings.mlr.press/v139/bartan21a.html
https://proceedings.mlr.press/v139/bartan21a.html
http://proceedings.mlr.press/v139/bartan21a/bartan21a.pdf
ICML 2021
Beyond $\log^2(T)$ regret for decentralized bandits in matching markets
Soumya Basu, Karthik Abinav Sankararaman, Abishek Sankararaman
We design decentralized algorithms for regret minimization in the two-sided matching market with one-sided bandit feedback that significantly improve upon the prior works (Liu et al.\,2020a, Sankararaman et al.\,2020, Liu et al.\,2020b). First, for general markets, for any $\varepsilon > 0$, we design an algorithm tha...
https://proceedings.mlr.press/v139/basu21a.html
https://proceedings.mlr.press/v139/basu21a.html
https://proceedings.mlr.press/v139/basu21a.html
http://proceedings.mlr.press/v139/basu21a/basu21a.pdf
ICML 2021
Optimal Thompson Sampling strategies for support-aware CVaR bandits
Dorian Baudry, Romain Gautron, Emilie Kaufmann, Odalric Maillard
In this paper we study a multi-arm bandit problem in which the quality of each arm is measured by the Conditional Value at Risk (CVaR) at some level alpha of the reward distribution. While existing works in this setting mainly focus on Upper Confidence Bound algorithms, we introduce a new Thompson Sampling approach for...
https://proceedings.mlr.press/v139/baudry21a.html
https://proceedings.mlr.press/v139/baudry21a.html
https://proceedings.mlr.press/v139/baudry21a.html
http://proceedings.mlr.press/v139/baudry21a/baudry21a.pdf
ICML 2021
On Limited-Memory Subsampling Strategies for Bandits
Dorian Baudry, Yoan Russac, Olivier Cappé
There has been a recent surge of interest in non-parametric bandit algorithms based on subsampling. One drawback however of these approaches is the additional complexity required by random subsampling and the storage of the full history of rewards. Our first contribution is to show that a simple deterministic subsampli...
https://proceedings.mlr.press/v139/baudry21b.html
https://proceedings.mlr.press/v139/baudry21b.html
https://proceedings.mlr.press/v139/baudry21b.html
http://proceedings.mlr.press/v139/baudry21b/baudry21b.pdf
ICML 2021
Generalized Doubly Reparameterized Gradient Estimators
Matthias Bauer, Andriy Mnih
Efficient low-variance gradient estimation enabled by the reparameterization trick (RT) has been essential to the success of variational autoencoders. Doubly-reparameterized gradients (DReGs) improve on the RT for multi-sample variational bounds by applying reparameterization a second time for an additional reduction i...
https://proceedings.mlr.press/v139/bauer21a.html
https://proceedings.mlr.press/v139/bauer21a.html
https://proceedings.mlr.press/v139/bauer21a.html
http://proceedings.mlr.press/v139/bauer21a/bauer21a.pdf
ICML 2021
Directional Graph Networks
Dominique Beaini, Saro Passaro, Vincent Létourneau, Will Hamilton, Gabriele Corso, Pietro Lió
The lack of anisotropic kernels in graph neural networks (GNNs) strongly limits their expressiveness, contributing to well-known issues such as over-smoothing. To overcome this limitation, we propose the first globally consistent anisotropic kernels for GNNs, allowing for graph convolutions that are defined according t...
https://proceedings.mlr.press/v139/beaini21a.html
https://proceedings.mlr.press/v139/beaini21a.html
https://proceedings.mlr.press/v139/beaini21a.html
http://proceedings.mlr.press/v139/beaini21a/beaini21a.pdf
ICML 2021
Policy Analysis using Synthetic Controls in Continuous-Time
Alexis Bellot, Mihaela van der Schaar
Counterfactual estimation using synthetic controls is one of the most successful recent methodological developments in causal inference. Despite its popularity, the current description only considers time series aligned across units and synthetic controls expressed as linear combinations of observed control units. We p...
https://proceedings.mlr.press/v139/bellot21a.html
https://proceedings.mlr.press/v139/bellot21a.html
https://proceedings.mlr.press/v139/bellot21a.html
http://proceedings.mlr.press/v139/bellot21a/bellot21a.pdf
ICML 2021
Loss Surface Simplexes for Mode Connecting Volumes and Fast Ensembling
Gregory Benton, Wesley Maddox, Sanae Lotfi, Andrew Gordon Wilson
With a better understanding of the loss surfaces for multilayer networks, we can build more robust and accurate training procedures. Recently it was discovered that independently trained SGD solutions can be connected along one-dimensional paths of near-constant training loss. In this paper, we in fact demonstrate the ...
https://proceedings.mlr.press/v139/benton21a.html
https://proceedings.mlr.press/v139/benton21a.html
https://proceedings.mlr.press/v139/benton21a.html
http://proceedings.mlr.press/v139/benton21a/benton21a.pdf
ICML 2021
TFix: Learning to Fix Coding Errors with a Text-to-Text Transformer
Berkay Berabi, Jingxuan He, Veselin Raychev, Martin Vechev
The problem of fixing errors in programs has attracted substantial interest over the years. The key challenge for building an effective code fixing tool is to capture a wide range of errors and meanwhile maintain high accuracy. In this paper, we address this challenge and present a new learning-based system, called TFi...
https://proceedings.mlr.press/v139/berabi21a.html
https://proceedings.mlr.press/v139/berabi21a.html
https://proceedings.mlr.press/v139/berabi21a.html
http://proceedings.mlr.press/v139/berabi21a/berabi21a.pdf
ICML 2021
Learning Queueing Policies for Organ Transplantation Allocation using Interpretable Counterfactual Survival Analysis
Jeroen Berrevoets, Ahmed Alaa, Zhaozhi Qian, James Jordon, Alexander E. S. Gimson, Mihaela van der Schaar
Organ transplantation is often the last resort for treating end-stage illnesses, but managing transplant wait-lists is challenging because of organ scarcity and the complexity of assessing donor-recipient compatibility. In this paper, we develop a data-driven model for (real-time) organ allocation using observational d...
https://proceedings.mlr.press/v139/berrevoets21a.html
https://proceedings.mlr.press/v139/berrevoets21a.html
https://proceedings.mlr.press/v139/berrevoets21a.html
http://proceedings.mlr.press/v139/berrevoets21a/berrevoets21a.pdf
ICML 2021
Learning from Biased Data: A Semi-Parametric Approach
Patrice Bertail, Stephan Clémençon, Yannick Guyonvarch, Nathan Noiry
We consider risk minimization problems where the (source) distribution $P_S$ of the training observations $Z_1, \ldots, Z_n$ differs from the (target) distribution $P_T$ involved in the risk that one seeks to minimize. Under the natural assumption that $P_S$ dominates $P_T$, \textit{i.e.} $P_T \ll P_S$...
https://proceedings.mlr.press/v139/bertail21a.html
https://proceedings.mlr.press/v139/bertail21a.html
https://proceedings.mlr.press/v139/bertail21a.html
http://proceedings.mlr.press/v139/bertail21a/bertail21a.pdf
ICML 2021
Is Space-Time Attention All You Need for Video Understanding?
Gedas Bertasius, Heng Wang, Lorenzo Torresani
We present a convolution-free approach to video classification built exclusively on self-attention over space and time. Our method, named “TimeSformer,” adapts the standard Transformer architecture to video by enabling spatiotemporal feature learning directly from a sequence of frame-level patches. Our experimental stu...
https://proceedings.mlr.press/v139/bertasius21a.html
https://proceedings.mlr.press/v139/bertasius21a.html
https://proceedings.mlr.press/v139/bertasius21a.html
http://proceedings.mlr.press/v139/bertasius21a/bertasius21a.pdf
ICML 2021
Confidence Scores Make Instance-dependent Label-noise Learning Possible
Antonin Berthon, Bo Han, Gang Niu, Tongliang Liu, Masashi Sugiyama
In learning with noisy labels, for every instance, its label can randomly walk to other classes following a transition distribution which is named a noise model. Well-studied noise models are all instance-independent, namely, the transition depends only on the original label but not the instance itself, and thus they a...
https://proceedings.mlr.press/v139/berthon21a.html
https://proceedings.mlr.press/v139/berthon21a.html
https://proceedings.mlr.press/v139/berthon21a.html
http://proceedings.mlr.press/v139/berthon21a/berthon21a.pdf
ICML 2021
Size-Invariant Graph Representations for Graph Classification Extrapolations
Beatrice Bevilacqua, Yangze Zhou, Bruno Ribeiro
In general, graph representation learning methods assume that the train and test data come from the same distribution. In this work we consider an underexplored area of an otherwise rapidly developing field of graph representation learning: The task of out-of-distribution (OOD) graph classification, where train and tes...
https://proceedings.mlr.press/v139/bevilacqua21a.html
https://proceedings.mlr.press/v139/bevilacqua21a.html
https://proceedings.mlr.press/v139/bevilacqua21a.html
http://proceedings.mlr.press/v139/bevilacqua21a/bevilacqua21a.pdf
ICML 2021
Principal Bit Analysis: Autoencoding with Schur-Concave Loss
Sourbh Bhadane, Aaron B Wagner, Jayadev Acharya
We consider a linear autoencoder in which the latent variables are quantized, or corrupted by noise, and the constraint is Schur-concave in the set of latent variances. Although finding the optimal encoder/decoder pair for this setup is a nonconvex optimization problem, we show that decomposing the source into its prin...
https://proceedings.mlr.press/v139/bhadane21a.html
https://proceedings.mlr.press/v139/bhadane21a.html
https://proceedings.mlr.press/v139/bhadane21a.html
http://proceedings.mlr.press/v139/bhadane21a/bhadane21a.pdf
ICML 2021
Lower Bounds on Cross-Entropy Loss in the Presence of Test-time Adversaries
Arjun Nitin Bhagoji, Daniel Cullina, Vikash Sehwag, Prateek Mittal
Understanding the fundamental limits of robust supervised learning has emerged as a problem of immense interest, from both practical and theoretical standpoints. In particular, it is critical to determine classifier-agnostic bounds on the training loss to establish when learning is possible. In this paper, we determine...
https://proceedings.mlr.press/v139/bhagoji21a.html
https://proceedings.mlr.press/v139/bhagoji21a.html
https://proceedings.mlr.press/v139/bhagoji21a.html
http://proceedings.mlr.press/v139/bhagoji21a/bhagoji21a.pdf
ICML 2021
Additive Error Guarantees for Weighted Low Rank Approximation
Aditya Bhaskara, Aravinda Kanchana Ruwanpathirana, Maheshakya Wijewardena
Low-rank approximation is a classic tool in data analysis, where the goal is to approximate a matrix $A$ with a low-rank matrix $L$ so as to minimize the error $\norm{A - L}_F^2$. However in many applications, approximating some entries is more important than others, which leads to the weighted low rank approximation p...
https://proceedings.mlr.press/v139/bhaskara21a.html
https://proceedings.mlr.press/v139/bhaskara21a.html
https://proceedings.mlr.press/v139/bhaskara21a.html
http://proceedings.mlr.press/v139/bhaskara21a/bhaskara21a.pdf
ICML 2021
Sample Complexity of Robust Linear Classification on Separated Data
Robi Bhattacharjee, Somesh Jha, Kamalika Chaudhuri
We consider the sample complexity of learning with adversarial robustness. Most prior theoretical results for this problem have considered a setting where different classes in the data are close together or overlapping. We consider, in contrast, the well-separated case where there exists a classifier with perfect accur...
https://proceedings.mlr.press/v139/bhattacharjee21a.html
https://proceedings.mlr.press/v139/bhattacharjee21a.html
https://proceedings.mlr.press/v139/bhattacharjee21a.html
http://proceedings.mlr.press/v139/bhattacharjee21a/bhattacharjee21a.pdf
ICML 2021
Finding k in Latent $k-$ polytope
Chiranjib Bhattacharyya, Ravindran Kannan, Amit Kumar
The recently introduced Latent $k-$ Polytope($\LkP$) encompasses several stochastic Mixed Membership models including Topic Models. The problem of finding $k$, the number of extreme points of $\LkP$, is a fundamental challenge and includes several important open problems such as determination of number of components in...
https://proceedings.mlr.press/v139/bhattacharyya21a.html
https://proceedings.mlr.press/v139/bhattacharyya21a.html
https://proceedings.mlr.press/v139/bhattacharyya21a.html
http://proceedings.mlr.press/v139/bhattacharyya21a/bhattacharyya21a.pdf
ICML 2021
Non-Autoregressive Electron Redistribution Modeling for Reaction Prediction
Hangrui Bi, Hengyi Wang, Chence Shi, Connor Coley, Jian Tang, Hongyu Guo
Reliably predicting the products of chemical reactions presents a fundamental challenge in synthetic chemistry. Existing machine learning approaches typically produce a reaction product by sequentially forming its subparts or intermediate molecules. Such autoregressive methods, however, not only require a pre-defined o...
https://proceedings.mlr.press/v139/bi21a.html
https://proceedings.mlr.press/v139/bi21a.html
https://proceedings.mlr.press/v139/bi21a.html
http://proceedings.mlr.press/v139/bi21a/bi21a.pdf
ICML 2021
TempoRL: Learning When to Act
André Biedenkapp, Raghu Rajan, Frank Hutter, Marius Lindauer
Reinforcement learning is a powerful approach to learn behaviour through interactions with an environment. However, behaviours are usually learned in a purely reactive fashion, where an appropriate action is selected based on an observation. In this form, it is challenging to learn when it is necessary to execute new d...
https://proceedings.mlr.press/v139/biedenkapp21a.html
https://proceedings.mlr.press/v139/biedenkapp21a.html
https://proceedings.mlr.press/v139/biedenkapp21a.html
http://proceedings.mlr.press/v139/biedenkapp21a/biedenkapp21a.pdf
ICML 2021
Follow-the-Regularized-Leader Routes to Chaos in Routing Games
Jakub Bielawski, Thiparat Chotibut, Fryderyk Falniowski, Grzegorz Kosiorowski, Michał Misiurewicz, Georgios Piliouras
We study the emergence of chaotic behavior of Follow-the-Regularized Leader (FoReL) dynamics in games. We focus on the effects of increasing the population size or the scale of costs in congestion games, and generalize recent results on unstable, chaotic behaviors in the Multiplicative Weights Update dynamics to a much...
https://proceedings.mlr.press/v139/bielawski21a.html
https://proceedings.mlr.press/v139/bielawski21a.html
https://proceedings.mlr.press/v139/bielawski21a.html
http://proceedings.mlr.press/v139/bielawski21a/bielawski21a.pdf
ICML 2021
Neural Symbolic Regression that scales
Luca Biggio, Tommaso Bendinelli, Alexander Neitz, Aurelien Lucchi, Giambattista Parascandolo
Symbolic equations are at the core of scientific discovery. The task of discovering the underlying equation from a set of input-output pairs is called symbolic regression. Traditionally, symbolic regression methods use hand-designed strategies that do not improve with experience. In this paper, we introduce the first s...
https://proceedings.mlr.press/v139/biggio21a.html
https://proceedings.mlr.press/v139/biggio21a.html
https://proceedings.mlr.press/v139/biggio21a.html
http://proceedings.mlr.press/v139/biggio21a/biggio21a.pdf
ICML 2021
Model Distillation for Revenue Optimization: Interpretable Personalized Pricing
Max Biggs, Wei Sun, Markus Ettl
Data-driven pricing strategies are becoming increasingly common, where customers are offered a personalized price based on features that are predictive of their valuation of a product. It is desirable for this pricing policy to be simple and interpretable, so it can be verified, checked for fairness, and easily impleme...
https://proceedings.mlr.press/v139/biggs21a.html
https://proceedings.mlr.press/v139/biggs21a.html
https://proceedings.mlr.press/v139/biggs21a.html
http://proceedings.mlr.press/v139/biggs21a/biggs21a.pdf
ICML 2021
Scalable Normalizing Flows for Permutation Invariant Densities
Marin Biloš, Stephan Günnemann
Modeling sets is an important problem in machine learning since this type of data can be found in many domains. A promising approach defines a family of permutation invariant densities with continuous normalizing flows. This allows us to maximize the likelihood directly and sample new realizations with ease. In this wo...
https://proceedings.mlr.press/v139/bilos21a.html
https://proceedings.mlr.press/v139/bilos21a.html
https://proceedings.mlr.press/v139/bilos21a.html
http://proceedings.mlr.press/v139/bilos21a/bilos21a.pdf
ICML 2021
Online Learning for Load Balancing of Unknown Monotone Resource Allocation Games
Ilai Bistritz, Nicholas Bambos
Consider N players that each uses a mixture of K resources. Each of the players’ reward functions includes a linear pricing term for each resource that is controlled by the game manager. We assume that the game is strongly monotone, so if each player runs gradient descent, the dynamics converge to a unique Nash equilib...
https://proceedings.mlr.press/v139/bistritz21a.html
https://proceedings.mlr.press/v139/bistritz21a.html
https://proceedings.mlr.press/v139/bistritz21a.html
http://proceedings.mlr.press/v139/bistritz21a/bistritz21a.pdf
ICML 2021
Low-Precision Reinforcement Learning: Running Soft Actor-Critic in Half Precision
Johan Björck, Xiangyu Chen, Christopher De Sa, Carla P Gomes, Kilian Weinberger
Low-precision training has become a popular approach to reduce compute requirements, memory footprint, and energy consumption in supervised learning. In contrast, this promising approach has not yet enjoyed similarly widespread adoption within the reinforcement learning (RL) community, partly because RL agents can be n...
https://proceedings.mlr.press/v139/bjorck21a.html
https://proceedings.mlr.press/v139/bjorck21a.html
https://proceedings.mlr.press/v139/bjorck21a.html
http://proceedings.mlr.press/v139/bjorck21a/bjorck21a.pdf
ICML 2021
Multiplying Matrices Without Multiplying
Davis Blalock, John Guttag
Multiplying matrices is among the most fundamental and most computationally demanding operations in machine learning and scientific computing. Consequently, the task of efficiently approximating matrix products has received significant attention. We introduce a learning-based algorithm for this task that greatly outper...
https://proceedings.mlr.press/v139/blalock21a.html
https://proceedings.mlr.press/v139/blalock21a.html
https://proceedings.mlr.press/v139/blalock21a.html
http://proceedings.mlr.press/v139/blalock21a/blalock21a.pdf
ICML 2021
One for One, or All for All: Equilibria and Optimality of Collaboration in Federated Learning
Avrim Blum, Nika Haghtalab, Richard Lanas Phillips, Han Shao
In recent years, federated learning has been embraced as an approach for bringing about collaboration across large populations of learning agents. However, little is known about how collaboration protocols should take agents’ incentives into account when allocating individual resources for communal learning in order to...
https://proceedings.mlr.press/v139/blum21a.html
https://proceedings.mlr.press/v139/blum21a.html
https://proceedings.mlr.press/v139/blum21a.html
http://proceedings.mlr.press/v139/blum21a/blum21a.pdf
ICML 2021
Black-box density function estimation using recursive partitioning
Erik Bodin, Zhenwen Dai, Neill Campbell, Carl Henrik Ek
We present a novel approach to Bayesian inference and general Bayesian computation that is defined through a sequential decision loop. Our method defines a recursive partitioning of the sample space. It neither relies on gradients nor requires any problem-specific tuning, and is asymptotically exact for any density fun...
https://proceedings.mlr.press/v139/bodin21a.html
https://proceedings.mlr.press/v139/bodin21a.html
https://proceedings.mlr.press/v139/bodin21a.html
http://proceedings.mlr.press/v139/bodin21a/bodin21a.pdf
ICML 2021
Weisfeiler and Lehman Go Topological: Message Passing Simplicial Networks
Cristian Bodnar, Fabrizio Frasca, Yuguang Wang, Nina Otter, Guido F Montufar, Pietro Lió, Michael Bronstein
The pairwise interaction paradigm of graph machine learning has predominantly governed the modelling of relational systems. However, graphs alone cannot capture the multi-level interactions present in many complex systems and the expressive power of such schemes was proven to be limited. To overcome these limitations, ...
https://proceedings.mlr.press/v139/bodnar21a.html
https://proceedings.mlr.press/v139/bodnar21a.html
https://proceedings.mlr.press/v139/bodnar21a.html
http://proceedings.mlr.press/v139/bodnar21a/bodnar21a.pdf
ICML 2021
The Hintons in your Neural Network: a Quantum Field Theory View of Deep Learning
Roberto Bondesan, Max Welling
In this work we develop a quantum field theory formalism for deep learning, where input signals are encoded in Gaussian states, a generalization of Gaussian processes which encode the agent’s uncertainty about the input signal. We show how to represent linear and non-linear layers as unitary quantum gates, and interpre...
https://proceedings.mlr.press/v139/bondesan21a.html
https://proceedings.mlr.press/v139/bondesan21a.html
https://proceedings.mlr.press/v139/bondesan21a.html
http://proceedings.mlr.press/v139/bondesan21a/bondesan21a.pdf
ICML 2021
Offline Contextual Bandits with Overparameterized Models
David Brandfonbrener, William Whitney, Rajesh Ranganath, Joan Bruna
Recent results in supervised learning suggest that while overparameterized models have the capacity to overfit, they in fact generalize quite well. We ask whether the same phenomenon occurs for offline contextual bandits. Our results are mixed. Value-based algorithms benefit from the same generalization behavior as ove...
https://proceedings.mlr.press/v139/brandfonbrener21a.html
https://proceedings.mlr.press/v139/brandfonbrener21a.html
https://proceedings.mlr.press/v139/brandfonbrener21a.html
http://proceedings.mlr.press/v139/brandfonbrener21a/brandfonbrener21a.pdf
ICML 2021
High-Performance Large-Scale Image Recognition Without Normalization
Andy Brock, Soham De, Samuel L Smith, Karen Simonyan
Batch normalization is a key component of most image classification models, but it has many undesirable properties stemming from its dependence on the batch size and interactions between examples. Although recent work has succeeded in training deep ResNets without normalization layers, these models do not match the tes...
https://proceedings.mlr.press/v139/brock21a.html
https://proceedings.mlr.press/v139/brock21a.html
https://proceedings.mlr.press/v139/brock21a.html
http://proceedings.mlr.press/v139/brock21a/brock21a.pdf
ICML 2021
Evaluating the Implicit Midpoint Integrator for Riemannian Hamiltonian Monte Carlo
James Brofos, Roy R Lederman
Riemannian manifold Hamiltonian Monte Carlo is traditionally carried out using the generalized leapfrog integrator. However, this integrator is not the only choice and other integrators yielding valid Markov chain transition operators may be considered. In this work, we examine the implicit midpoint integrator as an al...
https://proceedings.mlr.press/v139/brofos21a.html
https://proceedings.mlr.press/v139/brofos21a.html
https://proceedings.mlr.press/v139/brofos21a.html
http://proceedings.mlr.press/v139/brofos21a/brofos21a.pdf
ICML 2021
Reinforcement Learning of Implicit and Explicit Control Flow Instructions
Ethan Brooks, Janarthanan Rajendran, Richard L Lewis, Satinder Singh
Learning to flexibly follow task instructions in dynamic environments poses interesting challenges for reinforcement learning agents. We focus here on the problem of learning control flow that deviates from a strict step-by-step execution of instructions, that is, control flow that may skip forward over parts of the i...
https://proceedings.mlr.press/v139/brooks21a.html
https://proceedings.mlr.press/v139/brooks21a.html
https://proceedings.mlr.press/v139/brooks21a.html
http://proceedings.mlr.press/v139/brooks21a/brooks21a.pdf
ICML 2021