Abstract
Learning problems in the text processing domain often map the text to a space whose dimensions are the measured features of the text, e.g., its words. Three characteristic properties of this domain are (a) very high dimensionality, (b) both the learned concepts and the instances reside very sparsely in the feature space, and (c) a high variation in the number of active features in an instance. In this work we study three mistake-driven learning algorithms for a typical task of this nature: text categorization. We argue that these algorithms, which categorize documents by learning a linear separator in the feature space, have a few properties that make them ideal for this domain. We then show that a quantum leap in performance is achieved when we further modify the algorithms to better address some of the specific characteristics of the domain. In particular, we demonstrate (1) how variation in document length can be tolerated by either normalizing feature weights or by using negative weights, (2) the positive effect of applying a threshold range in training, (3) alternatives in considering feature frequency, and (4) the benefits of discarding features while training. Overall, we present an algorithm, a variation of Littlestone's Winnow, which performs significantly better than any other algorithm tested on this task using a similar feature set.
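The abstract builds on Littlestone's Winnow, a mistake-driven linear-threshold learner with multiplicative weight updates. For orientation, below is a minimal sketch of the basic positive Winnow update on sparse binary bag-of-words features; it is not the paper's modified variant (no weight normalization, threshold range, feature-frequency handling, or feature discarding), and the parameter names `alpha` and `theta` and the helper functions are illustrative assumptions.

```python
# Minimal sketch of Littlestone's (positive) Winnow for text categorization.
# Documents are sparse binary feature vectors, represented as sets of active
# feature indices. This shows only the basic mistake-driven multiplicative
# update, not the paper's modified algorithm.

def train_winnow(documents, labels, num_features, alpha=2.0, theta=None):
    """Learn a linear separator with Winnow.

    documents : list of sets of active feature indices
    labels    : list of booleans (True = positive class)
    """
    if theta is None:
        theta = num_features / 2.0      # a common default threshold choice
    weights = [1.0] * num_features      # all weights initialized to 1

    for doc, label in zip(documents, labels):
        score = sum(weights[i] for i in doc)   # dot product over active features
        predicted = score >= theta
        if predicted == label:
            continue                           # mistake-driven: update only on errors
        if label:
            for i in doc:                      # false negative: promote active features
                weights[i] *= alpha
        else:
            for i in doc:                      # false positive: demote active features
                weights[i] /= alpha
    return weights, theta


def predict(weights, theta, doc):
    """Classify a document given learned weights and threshold."""
    return sum(weights[i] for i in doc) >= theta
```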
Original language | English |
---|---|
Pages | 55-63 |
Number of pages | 9 |
State | Published - 1997 |
Event | 2nd Conference on Empirical Methods in Natural Language Processing, EMNLP 1997 - Providence, United States |
Duration | 1 Aug 1997 → 2 Aug 1997 |
Conference
Conference | 2nd Conference on Empirical Methods in Natural Language Processing, EMNLP 1997 |
---|---|
Country/Territory | United States |
City | Providence |
Period | 1/08/97 → 2/08/97 |
Bibliographical note
Publisher Copyright: © Proceedings of the 2nd Conference on Empirical Methods in Natural Language Processing, EMNLP 1997. All rights reserved.