Ideal observer approximation using Bayesian classification neural networks
- PMID: 11585206
- DOI: 10.1109/42.952727
Abstract
It is well understood that the optimal classification decision variable is the likelihood ratio or any monotonic transformation of the likelihood ratio. An automated classifier that maps from an input space to one of the likelihood ratio family of decision variables is an optimal classifier, or "ideal observer." Artificial neural networks (ANNs) are frequently used as classifiers for many problems. In the limit of large training sample sizes, an ANN approximates a mapping function that is a monotonic transformation of the likelihood ratio, i.e., it estimates an ideal observer decision variable. A principal disadvantage of conventional ANNs is the potential over-parameterization of the mapping function, which results in a poor approximation of an optimal mapping function for smaller training samples. Recently, Bayesian methods have been applied to ANNs in order to regularize training and improve the robustness of the classifier. The goal of training a Bayesian ANN with finite sample sizes is, as with unlimited data, to approximate the ideal observer. We have evaluated the accuracy of Bayesian ANN models of ideal observer decision variables as a function of the number of hidden units used, the signal-to-noise ratio of the data, and the number of features (dimensionality) of the data. We show that when enough training data are present, excess hidden units do not substantially degrade the accuracy of Bayesian ANNs. However, the minimum number of hidden units required to best model the optimal mapping function varies with the complexity of the data.
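The abstract's central claim is that a well-trained classification network estimates a monotonic transformation of the likelihood ratio, so thresholding the network output is equivalent to thresholding the likelihood ratio itself. A minimal sketch of that relationship, using two 1D Gaussian classes (the means and standard deviation here are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Two hypothetical 1D Gaussian classes; parameters are illustrative only
mu0, mu1, sigma = 0.0, 1.0, 1.0

def gauss_pdf(x, mu, sigma):
    """Gaussian probability density, computed directly with NumPy."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-4.0, 5.0, 200)

# The likelihood ratio: the ideal observer's decision variable
lr = gauss_pdf(x, mu1, sigma) / gauss_pdf(x, mu0, sigma)

# Posterior probability of class 1 under equal priors; this is the
# quantity a classification network's output approximates in the
# large-sample limit
posterior = lr / (1.0 + lr)

# The posterior is a monotonic transformation of the likelihood ratio,
# so thresholding either quantity gives the same decision rule (and ROC)
assert np.all(np.diff(lr) > 0)         # LR increases with x when mu1 > mu0
assert np.all(np.diff(posterior) > 0)  # so does the posterior
```

Because the posterior is monotone in the likelihood ratio, sweeping a threshold over either quantity traces out the same ROC curve, which is why an ANN output can serve as an ideal observer decision variable.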
Similar articles
- Wavelet neural network classification of EEG signals by using AR model with MLE preprocessing. Neural Netw. 2005 Sep;18(7):985-97. doi: 10.1016/j.neunet.2005.01.006. PMID: 15921885
- Forecasting the prognosis of choroidal melanoma with an artificial neural network. Ophthalmology. 2005 Sep;112(9):1608. doi: 10.1016/j.ophtha.2005.04.008. PMID: 16023213
- Generation of optimal artificial neural networks using a pattern search algorithm: application to approximation of chemical systems. Neural Comput. 2008 Feb;20(2):573-601. doi: 10.1162/neco.2007.08-06-316. PMID: 18045024
- Prediction of survival in patients with esophageal carcinoma using artificial neural networks. Cancer. 2005 Apr 15;103(8):1596-605. doi: 10.1002/cncr.20938. PMID: 15751017. Review.
- Applications of neural networks in histopathology. Pathologica. 1995 Jun;87(3):246-54. PMID: 8570285. Review.
Cited by
- Prevalence scaling: applications to an intelligent workstation for the diagnosis of breast cancer. Acad Radiol. 2008 Nov;15(11):1446-57. doi: 10.1016/j.acra.2008.04.022. PMID: 18995195. Free PMC article.
- Exploring nonlinear feature space dimension reduction and data representation in breast CADx with Laplacian eigenmaps and t-SNE. Med Phys. 2010 Jan;37(1):339-51. doi: 10.1118/1.3267037. PMID: 20175497. Free PMC article.
- Performance of breast ultrasound computer-aided diagnosis: dependence on image selection. Acad Radiol. 2008 Oct;15(10):1234-45. doi: 10.1016/j.acra.2008.04.016. PMID: 18790394. Free PMC article.
- Combined use of T2-weighted MRI and T1-weighted dynamic contrast-enhanced MRI in the automated analysis of breast lesions. Magn Reson Med. 2011 Aug;66(2):555-64. doi: 10.1002/mrm.22800. Epub 2011 Apr 26. PMID: 21523818. Free PMC article.
- Computerized three-class classification of MRI-based prognostic markers for breast cancer. Phys Med Biol. 2011 Sep 21;56(18):5995-6008. doi: 10.1088/0031-9155/56/18/014. Epub 2011 Aug 22. PMID: 21860079. Free PMC article.