Abstract
We study a robust alternative to empirical risk minimization called distributionally robust learning (DRL), in which one learns to perform against an adversary who can choose the data distribution from a specified set of distributions. We illustrate a problem with current DRL formulations, which rely on an overly broad definition of allowed distributions for the adversary, leading to learned classifiers that are unable to predict with any confidence. We propose a solution that incorporates unlabeled data into the DRL problem to further constrain the adversary. We show that this new formulation is tractable for stochastic gradient-based optimization and yields a computable guarantee on the future performance of the learned classifier, analogous to, but tighter than, guarantees from conventional DRL. We examine the performance of this new formulation on 14 real data sets and find that it often yields effective classifiers with nontrivial performance guarantees in situations where conventional DRL produces neither. Inspired by these results, we extend our DRL formulation to active learning with a novel, distributionally robust version of the standard model-change heuristic. Our active learning algorithm often achieves superior learning performance to the original heuristic on real data sets.
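For orientation, the min-max setup the abstract refers to can be written schematically. The notation below is illustrative shorthand, not the paper's exact formulation: P̂_n denotes the empirical distribution of the training sample, W an optimal-transport (Wasserstein) distance as suggested by the keywords, ε the adversary's budget, and ℓ(θ; x, y) a pointwise loss.

```latex
% Schematic DRL objective (illustrative notation, not the paper's exact
% formulation): the learner chooses parameters \theta while an adversary
% chooses a distribution Q from a Wasserstein ball of radius \varepsilon
% around the empirical distribution \hat{P}_n of the training data.
\min_{\theta} \;
  \sup_{Q \,:\, W(Q,\, \hat{P}_n) \le \varepsilon} \;
  \mathbb{E}_{(x,y) \sim Q}\!\left[ \ell(\theta;\, x, y) \right]
```

Read against this sketch, the paper's proposal per the abstract amounts to replacing the ambiguity set {Q : W(Q, P̂_n) ≤ ε} with a smaller set informed by unlabeled data, further constraining the adversary.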
| Original language | English |
| --- | --- |
| Journal | Journal of Machine Learning Research |
| Volume | 22 |
| State | Published - 2021 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2021 Charlie Frogner, Sebastian Claici, Edward Chien, and Justin Solomon.
Funding
The authors acknowledge the generous support of Army Research Office grants W911NF1710068 and W911NF2010168, of Air Force Office of Scientific Research award FA9550-19-1-031, of National Science Foundation grant IIS-1838071, from the MIT-IBM Watson AI Laboratory, from the Toyota-CSAIL Joint Research Center, from the QCRI–CSAIL Computer Science Research Program, from the MIT CSAIL Systems that Learn initiative, from the Skoltech–MIT Next Generation Program, and from a gift from Adobe Systems. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of these organizations. The authors also thank Nestor Guillen for helping them to understand duality over spaces of measures.
| Funders | Funder number |
| --- | --- |
| MIT Next Generation Program | |
| QCRI | |
| Toyota-CSAIL Joint Research Center | |
| National Science Foundation | IIS-1838071 |
| Air Force Office of Scientific Research | FA9550-19-1-031 |
| Army Research Office | W911NF2010168, W911NF1710068 |
| Massachusetts Institute of Technology | |
Keywords
- Active learning
- Distributionally robust optimization
- Optimal transport
- Supervised learning
- Wasserstein distance