Learning efficient random maximum a-posteriori predictors with non-decomposable loss functions

Tamir Hazan, Subhransu Maji, Joseph Keshet, Tommi Jaakkola

Research output: Contribution to journal › Conference article › peer-review


Abstract

In this work we develop efficient methods for learning random MAP predictors for structured label problems. In particular, we construct posterior distributions over perturbations that can be adjusted via stochastic gradient methods. We show that any smooth posterior distribution suffices to define a smooth PAC-Bayesian risk bound amenable to gradient methods. In addition, we relate the posterior distributions to computational properties of the MAP predictors. We suggest multiplicative posteriors to learn supermodular potential functions that admit specialized MAP predictors such as graph-cuts. We also describe label-augmented posterior models that can use efficient MAP approximations, such as those arising from linear programming relaxations.
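The core idea of a random MAP predictor is to perturb the potential function with noise drawn from a posterior distribution and return the maximizing label. The following is a minimal illustrative sketch, not the authors' code: it uses i.i.d. Gumbel perturbations on a toy unnormalized score vector `theta` (a hypothetical example), exploiting the max-stability of Gumbel noise, under which the perturbed argmax is a sample from the corresponding Gibbs distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_map_predict(theta, rng):
    """Random MAP predictor: argmax of perturbed potentials.

    With i.i.d. standard Gumbel perturbations, the argmax is
    distributed as the Gibbs distribution proportional to exp(theta).
    """
    gamma = rng.gumbel(size=theta.shape)  # perturbation drawn from the posterior
    return int(np.argmax(theta + gamma))

# Toy potentials over 3 labels (illustrative values only).
theta = np.array([1.0, 2.0, 0.5])

# Empirical label frequencies of the random predictor...
counts = np.bincount(
    [random_map_predict(theta, rng) for _ in range(5000)], minlength=3
)
freqs = counts / counts.sum()

# ...approximate the Gibbs distribution exp(theta) / Z.
gibbs = np.exp(theta) / np.exp(theta).sum()
```

In the paper's setting the perturbation posterior itself is parameterized and tuned by stochastic gradient descent on a PAC-Bayesian risk bound; this sketch only shows the prediction step with a fixed Gumbel posterior.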

Original language: English
Journal: Advances in Neural Information Processing Systems
State: Published - 2013
Event: 27th Annual Conference on Neural Information Processing Systems, NIPS 2013 - Lake Tahoe, NV, United States
Duration: 5 Dec 2013 - 10 Dec 2013

