RIP-AV: Joint Representative Instance Pre-training and Context Aware Network for Retinal Artery/Vein Segmentation
▶️ The first attempt to jointly combine Representative Instance Pre-training (RIP) with contextual analysis of local patches for retinal A/V segmentation.
▶️ The RIP task is proposed to learn latent arteriovenous features from diverse spatial regions.
▶️ Distance Aware and Patch Context Fusion modules are designed to explore the relationship between a patch and its context based on vascular structures.
▶️ Validated on three datasets, outperforming state-of-the-art methods.
Conda environment settings:

```shell
conda create -n rip_av python=3.10 -y
conda activate rip_av
pip install poetry
poetry install
pip install tensorflow==2.9.1
pip install torch==1.13.1 torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu117 --force
```
Download the public datasets AV-DRIVE, LES-AV, and HRF (v1, v2), and place them under `./data`:
```
dataset
├─test
│  ├─av
│  └─images
└─training
   ├─av
   └─images
```
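Before training, it can help to verify that the folders match the layout above. The following is a minimal stdlib sketch (the `check_layout` helper is not part of the repo, only an illustration of the expected tree):

```python
from pathlib import Path

# Expected sub-folders for each split, following the tree above.
EXPECTED = {
    "training": ("av", "images"),
    "test": ("av", "images"),
}

def check_layout(root: str) -> list[str]:
    """Return the list of expected sub-folders missing under `root`."""
    missing = []
    base = Path(root)
    for split, subdirs in EXPECTED.items():
        for sub in subdirs:
            path = base / split / sub
            if not path.is_dir():
                missing.append(str(path))
    return missing

if __name__ == "__main__":
    problems = check_layout("./data/dataset")
    if problems:
        print("Missing folders:", *problems, sep="\n  ")
    else:
        print("Dataset layout looks OK.")
```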
There are two stages: the RIP stage and the segmentation stage.
Run `Preprocessing/generate_patch_selection_for_RIP.py` to generate representative patches for the RIP stage, or directly use the pretrained weights.

```shell
python generate_patch_selection_for_RIP.py \
    --dataset_path <dataset path> \
    --train_or_test <training|test> \
    --out <output path>
```
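As a rough illustration of what "representative" patch selection can look like, the sketch below ranks non-overlapping patch origins in a binary vessel mask by vessel-pixel density and keeps the top `k`. This is an assumption for illustration only; the repo's script may use different selection criteria:

```python
def top_patches(mask, patch, k):
    """Rank non-overlapping `patch`-sized windows of a binary vessel
    mask (list of lists of 0/1) by vessel-pixel count; return the
    (row, col) origins of the k highest-scoring windows."""
    h, w = len(mask), len(mask[0])
    scored = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            score = sum(mask[yy][xx]
                        for yy in range(y, y + patch)
                        for xx in range(x, x + patch))
            scored.append((score, (y, x)))
    scored.sort(key=lambda s: -s[0])
    return [pos for _, pos in scored[:k]]
```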
```shell
cd ./RIP/
python train.py
```
| Dataset | Pretrained weight |
|---|---|
| AV-DRIVE | RIP_pretrain_drive |
| LES-AV | RIP_pretrain_les |
| HRF | RIP_pretrain_hrf |
- AV-DRIVE
- LES-AV
- HRF
```shell
# train and evaluate
cd AV/
python main.py config/config_train_general
# change the config file and run the test with visualization
python test_with_vis.py
```
The hyper-parameters $\lambda_a$ and $\lambda_s$ can be adjusted in `./AV/config/config_train_general.py`.
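The exact roles of $\lambda_a$ and $\lambda_s$ are defined in the config; as a loose sketch only, assuming they weight two terms of a combined training objective (the names `aux_loss` and `seg_loss` here are hypothetical, not the paper's notation):

```python
def combined_loss(seg_loss: float, aux_loss: float,
                  lambda_a: float, lambda_s: float) -> float:
    """Weighted sum of two loss terms, sketching how lambda_a and
    lambda_s from the config could balance the overall objective."""
    return lambda_a * aux_loss + lambda_s * seg_loss
```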
| Dataset | Checkpoint |
|---|---|
| AV-DRIVE | RIP_checkpoint_drive |
| LES-AV | RIP_checkpoint_les |
| HRF | RIP_checkpoint_hrf |
Full prediction results are available for AV-DRIVE, LES-AV, and HRF.
- training and test code
- visualization demo
- test on more retinal datasets