Abstract
The disagreement coefficient of Hanneke has become a central concept in proving active learning rates. It has been shown in various ways that a concept class with low complexity, together with a bound on the disagreement coefficient at an optimal solution, allows active learning rates that are superior to passive learning ones. We present a different tool for pool-based active learning which follows from the existence of a certain uniform version of a low disagreement coefficient, but is not equivalent to it. In fact, we present two fundamental active learning problems of significant interest for which our approach allows nontrivial active learning bounds, whereas any general-purpose method relying only on disagreement coefficient bounds fails to guarantee any useful bounds for these problems. The tool we use is based on the learner's ability to compute an estimator of the difference between the loss of any hypothesis and some fixed "pivotal" hypothesis, to within an absolute error of at most ε times the ℓ1 distance (the disagreement measure) between the two hypotheses. We prove that such an estimator implies the existence of a learning algorithm which, at each iteration, reduces its excess risk by a constant multiplicative factor. Each iteration replaces the current pivotal hypothesis with the minimizer of the estimated loss-difference function with respect to the previous pivotal hypothesis. The label complexity essentially becomes that of computing this estimator. The two applications of interest are learning to rank from pairwise preferences and clustering with side information (a.k.a. semi-supervised clustering). Both are fundamental problems that have started receiving more attention from active learning theoreticians and practitioners.
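As a rough formalization of the iterative scheme described in the abstract (the notation below is ours, not necessarily the paper's): given a pivotal hypothesis $h_t$, the learner queries enough labels to construct an estimator $f_{h_t}$ of the relative loss satisfying

$$\bigl| f_{h_t}(h) - \bigl(\operatorname{err}(h) - \operatorname{err}(h_t)\bigr) \bigr| \;\le\; \varepsilon \, d(h, h_t) \qquad \text{for all } h \in \mathcal{H},$$

where $d(\cdot,\cdot)$ denotes the ℓ1 (disagreement) distance, and then sets $h_{t+1} = \arg\min_{h \in \mathcal{H}} f_{h_t}(h)$. Under this condition the excess risk shrinks geometrically across iterations, so the overall label complexity is dominated by the cost of constructing the estimators $f_{h_t}$.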
| Original language | English |
| --- | --- |
| Pages (from-to) | 19.1-19.20 |
| Journal | Journal of Machine Learning Research |
| Volume | 23 |
| State | Published - 2012 |
| Externally published | Yes |
| Event | 25th Annual Conference on Learning Theory, COLT 2012 - Edinburgh, United Kingdom. Duration: 25 Jun 2012 → 27 Jun 2012 |
Keywords
- Active learning
- Clustering with side information
- Disagreement coefficient
- Learning to rank from pairwise preferences
- Semi-supervised clustering
- Smooth relative regret approximation