Incorporating unlabeled data into distributionally-robust learning

Charlie Frogner, Sebastian Claici, Edward Chien, Justin Solomon

Research output: Contribution to journal › Article › peer-review

5 Scopus citations

Abstract

We study a robust alternative to empirical risk minimization called distributionally robust learning (DRL), in which one learns to perform against an adversary who can choose the data distribution from a specified set of distributions. We illustrate a problem with current DRL formulations, which rely on an overly broad definition of allowed distributions for the adversary, leading to learned classifiers that are unable to predict with any confidence. We propose a solution that incorporates unlabeled data into the DRL problem to further constrain the adversary. We show that this new formulation is tractable for stochastic gradient-based optimization and yields a computable guarantee on the future performance of the learned classifier, analogous to—but tighter than—guarantees from conventional DRL. We examine the performance of this new formulation on 14 real data sets and find that it often yields effective classifiers with nontrivial performance guarantees in situations where conventional DRL produces neither. Inspired by these results, we extend our DRL formulation to active learning with a novel, distributionally-robust version of the standard model-change heuristic. Our active learning algorithm often achieves superior learning performance to the original heuristic on real data sets.
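The abstract describes learning against an adversary who perturbs the data distribution within a specified set. As a purely illustrative sketch (not the paper's formulation, which constrains the adversary with unlabeled data), Wasserstein-type distributional robustness is commonly approximated by a Lagrangian-penalized inner maximization over perturbed inputs. The functions, the logistic-loss setting, and the hyperparameters `gamma`, `steps`, and `lr` below are all assumptions chosen for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(w, X, y):
    """Per-example logistic loss; labels y are in {-1, +1}."""
    return np.log1p(np.exp(-y * (X @ w)))

def robust_loss(w, X, y, gamma=2.0, steps=15, lr=0.1):
    """Approximate the adversarial objective
        sup_Z  mean[ loss(Z) - gamma * ||Z - X||^2 ],
    a Lagrangian relaxation of a Wasserstein-ball constraint,
    by a few gradient-ascent steps on the perturbed inputs Z."""
    Z = X.copy()
    for _ in range(steps):
        margin = y * (Z @ w)
        # Gradient of the per-example loss with respect to Z.
        grad_loss = (-y * sigmoid(-margin))[:, None] * w[None, :]
        # Ascent on the penalized objective keeps Z near X.
        Z += lr * (grad_loss - 2.0 * gamma * (Z - X))
    return logistic_loss(w, Z, y).mean()
```

Because the adversary starts at the observed data and only moves to increase the penalized loss, the robust objective upper-bounds the plain empirical risk, which is the source of the pessimism the abstract refers to: with an overly broad adversary set, this bound can become vacuous.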

Original language: English
Journal: Journal of Machine Learning Research
Volume: 22
State: Published - 2021
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2021 Charlie Frogner, Sebastian Claici, Edward Chien, and Justin Solomon.

Funding

The authors acknowledge the generous support of Army Research Office grants W911NF1710068 and W911NF2010168, of Air Force Office of Scientific Research award FA9550-19-1-031, of National Science Foundation grant IIS-1838071, from the MIT-IBM Watson AI Laboratory, from the Toyota-CSAIL Joint Research Center, from the QCRI–CSAIL Computer Science Research Program, from the MIT CSAIL Systems that Learn initiative, from the Skoltech–MIT Next Generation Program, and from a gift from Adobe Systems. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of these organizations. The authors also thank Nestor Guillen for helping them understand duality over spaces of measures.

Funders (funder number)

MIT Next Generation Program
QCRI
Toyota-CSAIL Joint Research Center
National Science Foundation (IIS-1838071)
Air Force Office of Scientific Research (FA9550-19-1-031)
Army Research Office (W911NF2010168, W911NF1710068)
Massachusetts Institute of Technology

Keywords

• Active learning
• Distributionally robust optimization
• Optimal transport
• Supervised learning
• Wasserstein distance
