Abstract
The idea that a trained network can assign a confidence number to its prediction, indicating how reliable that prediction is, is addressed and exemplified through an analytical examination of a perceptron with discrete and continuous output units. Results are derived for both the Gibbs and the Bayes scenarios. The information gained from the confidence number is estimated by various entropy measures.
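The abstract's central notion can be illustrated with a minimal sketch: a perceptron classifies by the sign of its local field, and the field's magnitude can be mapped to a confidence number. The mapping below (a simple linear rescaling of the normalized field) is a hypothetical placeholder, not the paper's analytical Gibbs or Bayes result.

```python
import numpy as np

def perceptron_predict_with_confidence(w, x):
    """Return the perceptron's prediction sign(w.x) together with a
    heuristic confidence derived from the normalized local field.
    Illustrative only; the paper derives confidence analytically."""
    # Normalized local field h lies in [-1, 1] (cosine of the angle
    # between the weight vector and the input).
    h = np.dot(w, x) / (np.linalg.norm(w) * np.linalg.norm(x))
    label = 1 if h >= 0 else -1
    # Map |h| in [0, 1] to a confidence in [0.5, 1.0]: a field near
    # the decision boundary yields chance-level confidence.
    confidence = 0.5 * (1.0 + abs(h))
    return label, confidence

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.1, 0.8])
label, conf = perceptron_predict_with_confidence(w, x)
print(label, round(conf, 3))
```

Inputs far from the decision hyperplane receive confidence near 1, while inputs near it receive confidence near 0.5, which is the qualitative behavior a confidence number is meant to capture.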
| Original language | English |
| --- | --- |
| Pages (from-to) | 799-802 |
| Number of pages | 4 |
| Journal | Physical Review E |
| Volume | 60 |
| Issue number | 1 |
| DOIs | |
| State | Published - Jul 1999 |