Ideal observer approximation using Bayesian classification neural networks

M A Kupinski et al. IEEE Trans Med Imaging. 2001 Sep;20(9):886-99. doi: 10.1109/42.952727.

Abstract

It is well understood that the optimal classification decision variable is the likelihood ratio or any monotonic transformation of it. An automated classifier that maps from an input space to one of the likelihood-ratio family of decision variables is an optimal classifier, or "ideal observer." Artificial neural networks (ANNs) are frequently used as classifiers for many problems. In the limit of large training sample sizes, an ANN approximates a mapping function that is a monotonic transformation of the likelihood ratio, i.e., it estimates an ideal observer decision variable. A principal disadvantage of conventional ANNs is potential over-parameterization of the mapping function, which results in a poor approximation of the optimal mapping function for smaller training samples. Recently, Bayesian methods have been applied to ANNs to regularize training and improve the robustness of the classifier. The goal of training a Bayesian ANN with finite sample sizes is, as with unlimited data, to approximate the ideal observer. We have evaluated the accuracy of Bayesian ANN models of ideal observer decision variables as a function of the number of hidden units used, the signal-to-noise ratio of the data, and the number of features or dimensionality of the data. We show that when enough training data are present, excess hidden units do not substantially degrade the accuracy of Bayesian ANNs. However, the minimum number of hidden units required to best model the optimal mapping function varies with the complexity of the data.
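To see why a classifier output that estimates the posterior is an ideal observer decision variable: for classes "signal" (s) and "noise" (n) with priors P(s) and P(n), the posterior is p(s|x) = L(x) / (L(x) + P(n)/P(s)), where L(x) = p(x|s)/p(x|n) is the likelihood ratio, so the posterior is a monotonic transformation of L(x). The sketch below is illustrative only and is not from the paper: it uses scikit-learn's L2 weight penalty (alpha) as a crude stand-in for full Bayesian regularization, and the data model, hidden-unit count, and penalty strength are all assumptions made for the example. It trains a small ANN on a two-Gaussian problem whose exact log-likelihood ratio is known in closed form, then checks that the network output is rank-correlated with it.

# A minimal sketch (not from the paper): a small MLP with an L2 weight
# penalty -- a crude stand-in for full Bayesian regularization -- trained
# on a toy problem where the exact likelihood ratio is known in closed form.
# All parameter values here are illustrative assumptions.
import numpy as np
from scipy.stats import spearmanr
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Two-class Gaussian data: "noise" centered at 0, "signal" at mean mu.
n, d, mu = 2000, 4, 0.75
x0 = rng.normal(0.0, 1.0, size=(n, d))   # class 0 (noise)
x1 = rng.normal(mu, 1.0, size=(n, d))    # class 1 (signal)
X = np.vstack([x0, x1])
y = np.concatenate([np.zeros(n), np.ones(n)])

# For equal-covariance Gaussians the exact log-likelihood ratio is
# linear in x: log L(x) = mu * sum(x_i) - d * mu**2 / 2.
log_lr = mu * X.sum(axis=1) - d * mu**2 / 2

# Small regularized ANN: few hidden units, L2 penalty (alpha) as the
# regularizer. The sigmoid output estimates p(signal | x), which is a
# monotonic transformation of the likelihood ratio.
ann = MLPClassifier(hidden_layer_sizes=(5,), alpha=1.0,
                    max_iter=2000, random_state=0).fit(X, y)
p_signal = ann.predict_proba(X)[:, 1]

# If the network approximates the ideal observer, its output should be
# (nearly) monotonically related to the true likelihood ratio.
rho, _ = spearmanr(p_signal, log_lr)
print(f"Spearman rank correlation with true log-LR: {rho:.3f}")

A rank correlation near 1 indicates the network output is close to a monotonic transformation of the true likelihood ratio, consistent with the monotonic-transformation property described in the abstract; it is not the paper's own evaluation metric.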
