This is not the official repo for RETFound, a foundation model for generalizable disease detection from retinal images. The official implementation is based on MAE (a Keras version implemented by Yuka Kihara can be found here).
This is a repo for playing with RETFound.
Currently, it supports:
- Fine-tuning RETFound on your own data
- Using the wonderful Transformer-MM-Explainability repo by @hila-chefer to explain RETFound's predictions
- An attempt at using the RETFound weights to train a Color Fundus Photo (CFP) to Fluorescein Angiography (FA) encoder-decoder model
- RETFound is pre-trained on 1.6 million retinal images with self-supervised learning
- RETFound has been validated in multiple disease detection tasks
- RETFound can be efficiently adapted to customised tasks
Before you do anything else, you need to get a clean environment set up.
- Create environment with conda:
```
conda create -n retfound python=3.7.5 -y
conda activate retfound
```
- Install dependencies
```
git clone https://github.com/beswift/RETFound_MAE.git
cd RETFound_MAE
pip install -r requirements.txt
```
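To confirm the environment is usable before moving on, a quick check like the one below (assuming requirements.txt installs torch and timm, which the RETFound code relies on) should print your library versions and whether a GPU is visible:

```python
# Quick environment sanity check (assumes requirements.txt installed torch and timm)
import torch
import timm

print("torch:", torch.__version__)
print("timm:", timm.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```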
To fine-tune RETFound on your own data:
Did you follow the steps above to get your environment set up? Do that first!
- Download the RETFound pre-trained weights:
| ViT-Large | Download |
|---|---|
| Colour fundus image | download |
| OCT | download |
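Once a weights file is downloaded, it can be worth a quick sanity check before training. The sketch below assumes the file is named RETFound_cfp_weights.pth and that, like other MAE-style checkpoints, it stores the state dict under a "model" key; adjust both to match the file you actually downloaded.

```python
import torch

# Load the checkpoint on CPU just to inspect it (the file name is an assumption;
# use whatever name your downloaded weights actually have)
ckpt = torch.load("RETFound_cfp_weights.pth", map_location="cpu")

# MAE-style checkpoints usually keep the state dict under a "model" key
state_dict = ckpt.get("model", ckpt)
print(f"{len(state_dict)} tensors in checkpoint")
for name in list(state_dict)[:5]:
    print(name, tuple(state_dict[name].shape))
```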
- Organise your images into folders so that each folder is named after the "class" its images belong to, like this:
```
├── data
│   ├── class_1
│   │   ├── image_1.jpg
│   │   ├── image_2.jpg
│   │   ├── image_3.jpg
│   ├── class_2
│   │   ├── image_1.jpg
│   │   ├── image_2.jpg
│   │   ├── image_3.jpg
│   ├── class_3
│   │   ├── image_1.jpg
│   │   ├── image_2.jpg
│   │   ├── image_3.jpg
```
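This class-per-folder layout is exactly what torchvision's ImageFolder expects, so you can verify the dataset is being read the way you intend before launching training (the ./data path below is whatever folder you created above):

```python
from torchvision import datasets, transforms

# Minimal check that the folder layout maps to the expected classes
dataset = datasets.ImageFolder(
    "./data",
    transform=transforms.Compose([
        transforms.Resize((224, 224)),   # RETFound's ViT backbone expects 224x224 inputs
        transforms.ToTensor(),
    ]),
)
print("classes:", dataset.classes)       # e.g. ['class_1', 'class_2', 'class_3']
print("number of images:", len(dataset))
```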
- In a terminal, run the following command to start fine-tuning (using IDRiD as an example). A fine-tuned checkpoint will be saved during training, and evaluation is run after training.
```
python train.py
```
- To evaluate the fine-tuned model, run the following command:
```
python test.py
```
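For context, fine-tuning amounts to loading the MAE-pretrained ViT-Large backbone and training a new classification head on your classes. The sketch below shows that idea with timm; it is not what train.py literally does, and the checkpoint name, number of classes, and "model" key are assumptions to adjust for your setup.

```python
import torch
import timm

NUM_CLASSES = 3  # set to the number of class folders in ./data

# Build a ViT-Large/16 classifier and drop in the MAE-pretrained weights,
# skipping the head, which is newly initialised for your task.
model = timm.create_model("vit_large_patch16_224", num_classes=NUM_CLASSES)
ckpt = torch.load("RETFound_cfp_weights.pth", map_location="cpu")
state_dict = ckpt.get("model", ckpt)
msg = model.load_state_dict(state_dict, strict=False)
print("missing keys (expected: the new head):", msg.missing_keys)
```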
Using the RETFound weights to train a Color Fundus Photo (CFP) to Fluorescein Angiography (FA) encoder-decoder model
To train a CFP to FA encoder-decoder model using the RETFound weights:
Did you follow the steps above to get your environment set up? Do that first!
- Download the RETFound weights (see the table above)
- Organise your images into two folders named for the modality they contain (cfp and fa), like this:
```
├── data
│   ├── cfp
│   │   ├── image_1.jpg
│   │   ├── image_2.jpg
│   │   ├── image_3.jpg
│   ├── fa
│   │   ├── image_1.jpg
│   │   ├── image_2.jpg
│   │   ├── image_3.jpg
```
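For this task the two folders are modalities rather than classes, so the usual class-folder dataset loader does not apply. The sketch below is one way to load paired images, assuming a CFP image and its FA counterpart share a filename; leaky.py may pair images differently, so treat it purely as an illustration.

```python
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class PairedCfpFaDataset(Dataset):
    """Loads (CFP, FA) image pairs matched by filename (an assumption)."""

    def __init__(self, root="./data", size=224):
        self.cfp_dir = os.path.join(root, "cfp")
        self.fa_dir = os.path.join(root, "fa")
        # Only keep files that exist in both modality folders
        self.names = sorted(
            set(os.listdir(self.cfp_dir)) & set(os.listdir(self.fa_dir))
        )
        self.tf = transforms.Compose(
            [transforms.Resize((size, size)), transforms.ToTensor()]
        )

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        cfp = Image.open(os.path.join(self.cfp_dir, name)).convert("RGB")
        fa = Image.open(os.path.join(self.fa_dir, name)).convert("RGB")
        return self.tf(cfp), self.tf(fa)
```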
- In a terminal, run the following command to start training:
```
python leaky.py
```
- To evaluate the model, run the following command:
```
python leakyTest.py
```
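Conceptually, the CFP-to-FA model reuses the RETFound ViT-Large as an image encoder and trains a decoder to reconstruct the FA image from the encoded patch tokens. The sketch below shows one way to wire that up; the frozen encoder, the convolutional decoder, the L1 loss, and the checkpoint name are all assumptions rather than a description of what leaky.py actually does.

```python
import torch
import torch.nn as nn
import timm

class CfpToFaModel(nn.Module):
    """RETFound ViT-Large encoder + small upsampling decoder (illustrative sketch)."""

    def __init__(self, ckpt_path="RETFound_cfp_weights.pth"):
        super().__init__()
        self.vit = timm.create_model("vit_large_patch16_224", num_classes=0)
        state = torch.load(ckpt_path, map_location="cpu")
        self.vit.load_state_dict(state.get("model", state), strict=False)
        for p in self.vit.parameters():   # keep the pre-trained encoder frozen
            p.requires_grad = False
        # Decoder: 14x14 grid of 1024-d patch tokens -> 224x224 RGB FA image
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(1024, 256, kernel_size=4, stride=4),  # 14 -> 56
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(256, 64, kernel_size=2, stride=2),    # 56 -> 112
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, kernel_size=2, stride=2),      # 112 -> 224
            nn.Sigmoid(),
        )

    def encode(self, x):
        # Run the ViT by hand so we get per-patch tokens rather than a pooled vector
        x = self.vit.patch_embed(x)                                   # (B, 196, 1024)
        cls = self.vit.cls_token.expand(x.shape[0], -1, -1)
        x = torch.cat([cls, x], dim=1) + self.vit.pos_embed
        for blk in self.vit.blocks:
            x = blk(x)
        return self.vit.norm(x)[:, 1:, :]                             # drop the CLS token

    def forward(self, cfp):
        tokens = self.encode(cfp)                                     # (B, 196, 1024)
        grid = tokens.transpose(1, 2).reshape(-1, 1024, 14, 14)
        return self.decoder(grid)

# One illustrative training step with an L1 reconstruction loss
model = CfpToFaModel()
optimiser = torch.optim.AdamW(model.decoder.parameters(), lr=1e-4)
cfp, fa = torch.rand(2, 3, 224, 224), torch.rand(2, 3, 224, 224)
nn.functional.l1_loss(model(cfp), fa).backward()
optimiser.step()
```

Freezing the encoder keeps the pre-trained retinal features intact and makes training much lighter, at the cost of flexibility; unfreezing it (or only its last blocks) is the obvious variation to try.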
See the original README for more details on how to use the base RETFound scripts.
If you find the RETFound repository useful, please consider citing this paper:
```
@article{zhou2023foundation,
  title={A foundation model for generalizable disease detection from retinal images},
  author={Zhou, Yukun and Chia, Mark A and Wagner, Siegfried K and Ayhan, Murat S and Williamson, Dominic J and Struyven, Robbert R and Liu, Timing and Xu, Moucheng and Lozano, Mateo G and Woodward-Court, Peter and others},
  journal={Nature},
  pages={1--8},
  year={2023},
  publisher={Nature Publishing Group UK London}
}
```
If you find this repo useful, that's amazing and unexpected! If you are interested in working or partnering on eye care, chronic disease or wellness related projects, we'd love to work with you over at Unified Imaging!