Identifying the information contained in a flawed theory

Sean P Engelson, M. Koppel

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

One common approach to using a prior domain theory as a learning bias is to revise the theory in accordance with a set of training examples. More recently, another class of methods has arisen in which the theory is reinterpreted, either by probabilizing it, or by using its components in constructive induction. Revision-based methods tend to work best when flaws in the given theory are localized, whereas reinterpretation methods tend to work well when flaws are distributed evenly throughout the theory. This paper describes a 'meta-learning' algorithm which, given a flawed domain theory, determines the general nature of the theory's flaws by analyzing the information flow in the theory. The method works by first 'probabilizing' the theory, and then selectively 'de-probabilizing' components, based on the theory's performance on a preclassified set of training examples. This method distinguishes between those parts of the theory which should be interpreted as given and those which need to be revised or reinterpreted. This allows us to directly determine the nature of the information contained in the theory, and hence to exploit the theory in the best way possible.
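
The sketch below is a minimal, hypothetical illustration of the "probabilize, then selectively de-probabilize" idea the abstract describes. The rule representation, scoring function, and all names are assumptions made for illustration; this is not the paper's actual algorithm or data.

```python
# Illustrative sketch only (assumed representation): a domain theory is a list
# of rule predicates over attribute dictionaries. Each rule gets a tunable
# weight ("probabilized"); rules whose weights can be fixed back to 1.0 without
# hurting training accuracy are treated as given, the rest as flawed.

def probabilize(theory):
    # Attach an initial soft weight to every theory component.
    return {rule: 0.9 for rule in theory}

def predict(weighted_theory, example):
    # Toy inference: average the weights of the rules the example satisfies.
    votes = [w for rule, w in weighted_theory.items() if rule(example)]
    return (sum(votes) / len(votes) > 0.5) if votes else False

def accuracy(weighted_theory, examples):
    return sum(predict(weighted_theory, x) == y for x, y in examples) / len(examples)

def analyze_theory(theory, examples):
    # Greedily "de-probabilize" a rule (fix its weight to 1.0) whenever doing so
    # does not reduce accuracy on the preclassified examples. Rules left
    # probabilistic are candidates for revision or reinterpretation.
    weighted = probabilize(theory)
    for rule in theory:
        trial = dict(weighted)
        trial[rule] = 1.0
        if accuracy(trial, examples) >= accuracy(weighted, examples):
            weighted = trial
    as_given = [r for r, w in weighted.items() if w == 1.0]
    to_revise = [r for r, w in weighted.items() if w < 1.0]
    return as_given, to_revise

# Hypothetical usage with a deliberately flawed toy theory:
rules = [lambda x: x["wings"], lambda x: x["feathers"], lambda x: x["barks"]]
data = [({"wings": 1, "feathers": 1, "barks": 0}, True),
        ({"wings": 0, "feathers": 0, "barks": 1}, False)]
given, flawed = analyze_theory(rules, data)
```

The design choice here is purely greedy for brevity; the point is only to show how performance on labeled examples can separate components to interpret as given from components whose information needs revision or reinterpretation.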
Original language: American English
Title of host publication: ICML
State: Published - 1996

Bibliographical note

Place of conference: USA
