Front Physiol. 2023 Jun 13;14:1175881.
doi: 10.3389/fphys.2023.1175881. eCollection 2023.

Multi-stage glaucoma classification using pre-trained convolutional neural networks and voting-based classifier fusion


Vijaya Kumar Velpula et al. Front Physiol. 2023.

Abstract

Aim: To design an automated system for the early detection of glaucoma from fundus images.

Background: Glaucoma is a serious eye disease that can cause vision loss and permanent blindness. Early detection and prevention are crucial for effective treatment, but traditional diagnostic approaches are time-consuming, manual, and often inaccurate, making automated glaucoma diagnosis necessary.

Objective: To propose an automated glaucoma stage classification model using pre-trained deep convolutional neural network (CNN) models and classifier fusion.

Methods: The proposed model uses five pre-trained CNN models: ResNet50, AlexNet, VGG19, DenseNet-201, and Inception-ResNet-v2. It was evaluated on four public datasets: ACRIMA, RIM-ONE, Harvard Dataverse (HVD), and Drishti. Classifier fusion merges the decisions of all five CNN models using a maximum voting-based approach.

Results: The proposed model achieved an area under the curve (AUC) of 1.0 and an accuracy of 99.57% on the ACRIMA dataset. On the HVD dataset it achieved an AUC of 0.97 and an accuracy of 85.43%; the accuracies on Drishti and RIM-ONE were 90.55% and 94.95%, respectively. The experimental results show that the proposed model outperforms state-of-the-art methods in classifying glaucoma in its early stages. Model predictions are interpreted using both attribution-based methods (activations and gradient-weighted class activation mapping) and perturbation-based methods (locally interpretable model-agnostic explanations and occlusion sensitivity), which generate heatmaps of the image regions that drive the model's prediction.

Conclusion: The proposed automated glaucoma stage classification model using pre-trained CNN models and classifier fusion is an effective method for the early detection of glaucoma, achieving high accuracy and superior performance compared to existing methods.

Keywords: classifier fusion; convolutional neural network; deep learning; fundus image; hybrid model; transfer learning.
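The fusion step described in the abstract can be sketched as a maximum-voting rule over the per-model class predictions. The following is a minimal illustration only; the function name and data layout are assumptions, not taken from the paper:

```python
from collections import Counter

def majority_vote(per_model_preds):
    """Fuse class predictions from several classifiers by maximum voting.

    per_model_preds: one list of predicted class labels per model, all of
    equal length (one label per test sample). Returns the fused label for
    each sample; on a tie, the label encountered first among the tied
    candidates wins (Counter preserves insertion order).
    """
    fused = []
    for sample_votes in zip(*per_model_preds):  # all models' votes for one sample
        fused.append(Counter(sample_votes).most_common(1)[0][0])
    return fused

# Five models voting on two samples (0 = normal, 1 = glaucoma):
preds = [[1, 0], [1, 1], [0, 0], [1, 0], [1, 1]]
print(majority_vote(preds))  # -> [1, 0]
```

With an odd number of binary classifiers, as in the paper's five-model ensemble, ties cannot occur; for the three-class setting a tie-breaking convention such as the one above is needed.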


Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figures

FIGURE 1. (A) Structure of the eye representing CDR and NRR. (B–D) Sample images of the HVD dataset showing the variation in the cup-to-disc ratio. (E–H) Sample glaucoma-affected images from the datasets used in this work.
FIGURE 2. (A) Workflow diagram for glaucoma classification. (B) Basic CNN architecture.
FIGURE 3. Comparison of augmented images of the glaucoma class for each dataset.
FIGURE 4. Classifier fusion to improve the performance of the system in glaucoma classification.
FIGURE 5. (A–E) Confusion matrices of the two-class and three-class datasets using CNN models. (F) Training graph of ResNet50 for the ACRIMA dataset.
FIGURE 6. AUCs of CNN models for each dataset. (A) ACRIMA-ResNet50, (B) HVD+Drishti-VGG19, (C) HVD-ResNet50, (D) RIM-ONE-ResNet50, and (E) Drishti-Inception-ResNet-v2.
FIGURE 7. Network predictions for class detection using visualization techniques. (A) Input image, (B) activations of L1, (C) activations of L2, (D) GCAM, (E) LIME, and (F) OS.
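Of the interpretation methods listed in the FIGURE 7 caption, occlusion sensitivity is the simplest to sketch: slide an occluding patch over the input and record how much the model's class score drops at each position. Below is a minimal NumPy illustration with a toy scoring function; the function names and parameters are illustrative assumptions, not the paper's implementation, which applies the technique to CNN outputs on fundus images:

```python
import numpy as np

def occlusion_sensitivity(score_fn, image, patch=4, stride=4, fill=0.0):
    """Heatmap of how much score_fn drops when each region is occluded.

    score_fn: callable mapping a 2-D image to a scalar class score.
    Larger heatmap values mark regions the prediction depends on most.
    """
    base = score_fn(image)
    h, w = image.shape
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            occluded = image.copy()
            occluded[i*stride:i*stride+patch, j*stride:j*stride+patch] = fill
            heat[i, j] = base - score_fn(occluded)
    return heat

# Toy "model" whose score depends only on the top-left quadrant:
img = np.zeros((8, 8))
img[:4, :4] = 1.0
heat = occlusion_sensitivity(lambda im: im[:4, :4].mean(), img)
print(heat)  # only the top-left heatmap cell is nonzero
```

The resulting heatmap is upsampled and overlaid on the input image to visualize which regions, such as the optic cup and disc, drive the glaucoma prediction.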
