Review
Front Artif Intell. 2023 Sep 19;6:1203546. doi: 10.3389/frai.2023.1203546. eCollection 2023.

Explainable deep learning in plant phenotyping

Sakib Mostafa et al.

Abstract

The increasing human population and variable weather conditions, due to climate change, pose a threat to the world's food security. To improve global food security, we need to provide breeders with tools to develop crop cultivars that are more resilient to extreme weather conditions and provide growers with tools to more effectively manage biotic and abiotic stresses in their crops. Plant phenotyping, the measurement of a plant's structural and functional characteristics, has the potential to inform, improve and accelerate both breeders' selections and growers' management decisions. To improve the speed, reliability and scale of plant phenotyping procedures, many researchers have adopted deep learning methods to estimate phenotypic information from images of plants and crops. Despite the successful results of these image-based phenotyping studies, the representations learned by deep learning models remain difficult to interpret, understand, and explain. For this reason, deep learning models are still considered to be black boxes. Explainable AI (XAI) is a promising approach for opening the deep learning model's black box and providing plant scientists with image-based phenotypic information that is interpretable and trustworthy. Although various fields of study have adopted XAI to advance their understanding of deep learning models, it has yet to be well-studied in the context of plant phenotyping research. In this review article, we reviewed existing XAI studies in plant shoot phenotyping, as well as related domains, to help plant researchers understand the benefits of XAI and make it easier for them to integrate XAI into their future studies. An elucidation of the representations within a deep learning model can help researchers explain the model's decisions, relate the features detected by the model to the underlying plant physiology, and enhance the trustworthiness of image-based phenotypic information used in food production systems.

Keywords: agriculture; data bias; deep learning; explainable AI; plant phenotyping.


Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figures

Figure 1
Graphical representation of the categories of XAI techniques.
Figure 2
Plot of the distribution of images in different classes of the PlantVillage dataset.
Figure 3
Examples of synthetic images generated by AR-GAN. The leftmost column shows the real images and the rest of the columns show synthetic images with effects of canker, miner, whitefly, powdery mildew, plague, and low temperature (left to right), respectively (Nazki et al., 2020).
Figure 4
Samples of augmented images generated by ARIGAN. The bottom right represents the number of leaves (Valerio Giuffrida et al., 2017).
Figure 5
Visualization of the feature maps representing significant features contributing to the diagnosis of calyx-end cracking for the VGG16 model (Akagi et al., 2020). (A) Original image. (B) Grad-CAM. (C) Guided backpropagation. (D) LRP-Sequential B. (E) LRP-Epsilon. (F) Guided Grad-CAM.
Figure 6
Visualization of the learning of a VGG16-based regression model using GBP and LRP (Dobrescu et al., 2019). (A) GBP and LRP visualization of different plants. (B) Average activations at the end of several convolutional blocks at the start and end of training.
Figure 7
Overview of a framework incorporating XAI into deep learning-based plant phenotyping studies.
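Several of the figures above reference saliency techniques such as Grad-CAM. As a rough illustration of the idea (not the authors' implementation), the sketch below computes a classic class activation map (CAM) in NumPy: for a network that ends in global average pooling followed by a linear classifier, Grad-CAM's gradient-derived channel weights coincide with the classifier's weights for the target class. All array shapes and names here are illustrative stand-ins for a real CNN such as VGG16.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy final-layer feature maps: 16 channels, each an 8x8 spatial map
# (in practice these come from the last convolutional layer of a CNN).
fmaps = rng.random((16, 8, 8))

# Linear classifier over globally average-pooled features: 3 classes.
W = rng.normal(size=(3, 16))

def cam(fmaps, W, target_class):
    """Class activation map: weight each feature map by its classifier
    weight for the target class, sum over channels, then ReLU-normalize."""
    weights = W[target_class]                    # (16,) channel importances
    heat = np.tensordot(weights, fmaps, axes=1)  # (8, 8) weighted sum
    heat = np.maximum(heat, 0.0)                 # keep positive evidence only
    return heat / (heat.max() + 1e-8)            # scale into [0, 1]

heatmap = cam(fmaps, W, target_class=0)
print(heatmap.shape)
```

Upsampled to the input resolution and overlaid on the image, such a heatmap highlights the regions the classifier treats as evidence for the chosen class, which is how the leaf- and lesion-level visualizations in the figures above are produced.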

References

    1. Adadi A., Berrada M. (2018). Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 6, 52138–52160. 10.1109/ACCESS.2018.2870052 - DOI
    2. Adebayo J., Gilmer J., Goodfellow I., Kim B. (2018a). Local explanation methods for deep neural networks lack sensitivity to parameter values. arXiv:1810.03307v1.
    3. Adebayo J., Gilmer J., Muelly M., Goodfellow I., Hardt M., Kim B. (2018b). Sanity checks for saliency maps. Adv. Neural Inf. Process. Syst. 31.
    4. Agarwal R., Melnick L., Frosst N., Zhang X., Lengerich B., Caruana R., et al. (2021). Neural additive models: interpretable machine learning with neural nets. Adv. Neural Inf. Process. Syst. 34, 4699–4711. Available online at: https://proceedings.neurips.cc/paper_files/paper/2021/file/251bd0442dfcc... - PubMed
    5. Aich S., Stavness I. (2017). “Leaf counting with deep convolutional and deconvolutional networks,” in Proceedings of the IEEE International Conference on Computer Vision Workshops (Italy), 2080–2089.

Grants and funding

This research was undertaken thanks in part to funding from the Canada First Research Excellence Fund.
