Small hand-designed convolutional neural networks outperform transfer learning in automated cell shape detection in confluent tissues

Louis Combe et al. PLoS One. 2023 Feb 16;18(2):e0281931. doi: 10.1371/journal.pone.0281931. eCollection 2023.

Abstract

Mechanical cues such as stresses and strains are now recognized as essential regulators of many biological processes, including cell division, gene expression and morphogenesis. Studying the interplay between these mechanical cues and biological responses requires experimental tools to measure them. In large-scale tissues, this can be achieved by segmenting individual cells to extract their shapes and deformations, which in turn report on their mechanical environment. Historically, this has been done with segmentation methods that are notoriously time consuming and error prone. In this context, however, a cell-level description is not always necessary, and a coarse-grained approach that does not rely on segmentation can be more efficient. The advent of machine learning and deep neural networks has revolutionized image analysis in recent years, including in biomedical research, and with the democratization of these techniques more and more researchers are applying them to their own biological systems. In this paper, we tackle a cell shape measurement problem using a large annotated dataset. We develop simple Convolutional Neural Networks (CNNs), which we thoroughly optimize in terms of architecture and complexity in order to question the construction rules usually applied. We find that increasing network complexity quickly stops yielding performance improvements, and that the number of kernels in each convolutional layer is the most important parameter for achieving good results. In addition, we compare our step-by-step approach with transfer learning and find that our simple, optimized CNNs give better predictions, are faster to train and to run, and require no more technical knowledge to implement. Overall, we offer a roadmap for developing optimized models and argue that their complexity should be limited. We conclude by illustrating this strategy on a similar problem and dataset.

Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1
Fig 1. Illustration of the problem.
A: full raw image of the Drosophila dorsal thorax. Small black circles represent the size of the images used in our models. B: three examples of such images along with their segmented counterparts and representations of the average cell shapes with the three key targets: long axis L, short axis l and orientation α.
Fig 2
Fig 2. Exploration of the data set.
A: Relative gray level distribution over the entire dataset. B: Distributions of the long axis (blue) and short axis (red) in the training set. C: Distribution of the raw orientation in the training set. D: Relationship between short and long axes in the training set; the red dashed line shows y = x. E-F: Distributions of the cosine (E) and sine (F) of twice the orientation.
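The networks regress the cosine and sine of twice the orientation, presumably because the orientation of a shape axis is defined only modulo 180°, and the angle is recovered afterwards by an inverse transformation (as in Fig 4C). A minimal sketch of this encoding and decoding, assuming angles in radians and using only NumPy (function names are illustrative, not taken from the paper's code):

    import numpy as np

    def encode_orientation(alpha):
        # Map an orientation alpha (radians) to the two regression targets.
        return np.cos(2 * alpha), np.sin(2 * alpha)

    def decode_orientation(cos_2a, sin_2a):
        # Recover alpha in (-pi/2, pi/2] from the regressed cosine and sine.
        return 0.5 * np.arctan2(sin_2a, cos_2a)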
Fig 3
Fig 3. Schematics of the networks’ architecture.
Models are built from repeated convolutional and max-pooling layers. Dropout layers are omitted for clarity. Each of the four neurons in the final dense layer corresponds to one of the model's targets.
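A minimal Keras sketch of this kind of architecture, assuming 128*128 grayscale inputs and four regression outputs (long axis, short axis, and the cosine and sine of twice the orientation); the layer count, filter number, filter size and dropout rate below are placeholders, not the paper's optimized values:

    from tensorflow import keras
    from tensorflow.keras import layers

    def build_small_cnn(n_conv=2, n_filters=32, filter_size=5, input_shape=(128, 128, 1)):
        # Repeated convolution / max-pooling / dropout blocks, then a dense output layer.
        model = keras.Sequential([keras.Input(shape=input_shape)])
        for _ in range(n_conv):
            model.add(layers.Conv2D(n_filters, filter_size, activation="relu", padding="same"))
            model.add(layers.MaxPooling2D(2))
            model.add(layers.Dropout(0.2))
        model.add(layers.Flatten())
        model.add(layers.Dense(4))  # long axis, short axis, cos 2a, sin 2a
        model.compile(optimizer="adam", loss="mae")
        return model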
Fig 4
Fig 4. Grid search on 1-conv models.
A: MAE averaged over 5 folds, shown both in color and as numbers, as a function of the number of filters in the convolutional layer (N_filters_1) and the size of these filters (filter_size_1). B: Comparison between ground truth and model predictions for the optimal 1-conv model after retraining; long axis in blue, short axis in red. C: Similar comparison for orientation after inverse transformation from cosine and sine. In B and C, black dashed lines are a guide for the eye representing identity.
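A sketch of the grid search behind panel A, assuming a builder like build_small_cnn above and a 5-fold split from scikit-learn; the hyperparameter lists and the number of epochs are illustrative, not the values used in the paper:

    import numpy as np
    from sklearn.model_selection import KFold

    def grid_search_1conv(X, y, n_filters_list, filter_size_list, epochs=30):
        # Average the validation MAE over 5 folds for each hyperparameter pair.
        results = {}
        for n_filters in n_filters_list:
            for filter_size in filter_size_list:
                fold_maes = []
                for train_idx, val_idx in KFold(n_splits=5, shuffle=True).split(X):
                    model = build_small_cnn(n_conv=1, n_filters=n_filters,
                                            filter_size=filter_size)
                    model.fit(X[train_idx], y[train_idx], epochs=epochs, verbose=0)
                    fold_maes.append(model.evaluate(X[val_idx], y[val_idx], verbose=0))
                results[(n_filters, filter_size)] = float(np.mean(fold_maes))
        return results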
Fig 5
Fig 5. Grid search on 2-conv models.
A: Effect of filter size in each layer; the averaged MAE is shown in colors and numbers. B: Same analysis for the number of filters in both layers. Numbers in A and B represent the same quantity as in Fig 4A. C: Comparison between the true performance of 2-conv models and the performance predicted by a decision tree regressor; the red line is a guide for the eye representing identity. D: Feature importance of all four parameters in the decision tree regressor. E: Pearson correlation coefficient of each parameter with the performance of the models; negative values indicate that increasing the parameter tends to reduce the MAE and hence increase performance. F: Comparison of predicted and true long and short axes for the re-trained optimal 2-conv model. G: Comparison of predicted and true orientation for the same model. In F and G, the dashed black line represents identity.
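The analysis in panels C-E can be sketched as follows, assuming the grid search results are collected into an array of hyperparameter settings and their averaged MAEs (scikit-learn and SciPy; the function and variable names are hypothetical):

    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.tree import DecisionTreeRegressor

    def analyse_grid(params, maes, names):
        # params: (n_runs, n_params) array of hyperparameter settings,
        # maes: (n_runs,) array of averaged MAEs, names: parameter labels.
        tree = DecisionTreeRegressor().fit(params, maes)
        importances = dict(zip(names, tree.feature_importances_))
        correlations = {name: pearsonr(params[:, i], maes)[0]
                        for i, name in enumerate(names)}
        return importances, correlations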
Fig 6
Fig 6. Performance of the best 3-conv model.
A: comparison of predicted and true long and short axes for the re-trained optimal 3-conv model. B: comparison of predicted and true orientation for the same model. In A and B, dashed black lines represent identity. C: effect of anisotropy on the performance of the model in predicting orientation.
Fig 7
Fig 7. Transfer learning.
A: Comparison of true and predicted long and short axes for the best 2-conv model (blue) and transfer from VGG19 (red). The black dashed line represents identity. B-C-D: distribution of absolute errors on the test set for all four models and for long axes (B), short axes (C) and orientation (D). E: Loss on validation set during the final training of each model as a function of the number of epochs. F: Loss on validation set as a function of time spent training.
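A minimal sketch of a transfer-learning baseline of this kind, assuming a frozen VGG19 convolutional base from Keras Applications and the same four regression targets; the exact head, input handling and fine-tuning schedule used in the paper may differ, and grayscale images would have to be stacked into three channels to match VGG19's expected input:

    from tensorflow import keras
    from tensorflow.keras import layers

    def build_vgg19_transfer(input_shape=(128, 128, 3)):
        # Frozen ImageNet-pretrained VGG19 base with a small regression head.
        base = keras.applications.VGG19(include_top=False, weights="imagenet",
                                        input_shape=input_shape)
        base.trainable = False
        inputs = keras.Input(shape=input_shape)
        x = keras.applications.vgg19.preprocess_input(inputs)
        x = base(x, training=False)
        x = layers.Flatten()(x)
        x = layers.Dense(64, activation="relu")(x)
        outputs = layers.Dense(4)(x)  # long axis, short axis, cos 2a, sin 2a
        model = keras.Model(inputs, outputs)
        model.compile(optimizer="adam", loss="mae")
        return model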
Fig 8
Fig 8. Effect of training set size.
Performance of either the best 3-conv model (blue) or transfer from VGG19 (red) as a function of the number of images used in their training. MAE is shown for long axis (A), short axis (B) and orientation (C).
Fig 9
Fig 9. Illustration of strategy.
A: Original image used to create a training set; it is the top part of a larger image that was split in two. B: Two examples of 128*128 images extracted from A. C: Anisotropy map of the full image; each image in B corresponds to one pixel of this map. D: Original image used to create a test set, split into similar 128*128 windows that were passed to the trained CNN. E: Predicted anisotropy map of the image in D, to be compared with the ground-truth map in C.
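A sketch of the tiling step behind panels D-E, assuming a trained CNN that returns one anisotropy value per 128*128 window and non-overlapping windows (the paper's exact windowing may differ):

    import numpy as np

    def predict_anisotropy_map(image, model, window=128):
        # Cut a large grayscale image into non-overlapping windows; each window
        # becomes one pixel of the predicted anisotropy map.
        n_rows, n_cols = image.shape[0] // window, image.shape[1] // window
        tiles = np.stack([
            image[i * window:(i + 1) * window, j * window:(j + 1) * window]
            for i in range(n_rows) for j in range(n_cols)
        ])[..., np.newaxis]  # add the channel axis expected by the CNN
        preds = model.predict(tiles, verbose=0)
        return preds.reshape(n_rows, n_cols)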

Grants and funding

This work was supported by the LABEX iMUST of the University of Lyon (ANR-10-LABX-0064). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.