# RIP-AV: Joint Representative Instance Pre-training and Context Aware Network for Retinal Artery/Vein Segmentation

## ⭕ Contribution

▶️ The first attempt to combine Representative Instance Pre-training (RIP) with contextual analysis of local patches for retinal A/V segmentation.

▶️ The RIP task is proposed to learn latent arteriovenous features from diverse spatial regions.

▶️ Distance Aware and Patch Context Fusion modules are designed to explore the relationship between a patch and its context based on vascular structures.

▶️ Validated on three datasets, outperforming state-of-the-art methods.

## 🔧 Usage

### ⚡ Dependencies


Conda environment settings:

```shell
conda create -n rip_av python=3.10 -y
conda activate rip_av
pip install poetry
poetry install
pip install tensorflow==2.9.1
pip install torch==1.13.1 torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu117 --force
```

## 🚀 Preparing Segmentation Datasets

Download the public datasets AV-DRIVE, LES-AV, and HRF (v1, v2) and place them in ./data with the following layout:

```
dataset
├─test
│  ├─av
│  └─images
└─training
   ├─av
   └─images
```
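Before training, it can help to confirm the folders above are in place. This is a minimal sketch (not part of the repository; `check_dataset_layout` is a hypothetical helper) that checks for the layout shown above:

```python
from pathlib import Path

def check_dataset_layout(root: str) -> list[str]:
    """Return the expected subfolders that are missing under `root`.

    Checks the layout described in this README: test/{av,images} and
    training/{av,images}. Adjust the names if your local copy differs.
    """
    expected = [
        Path(root) / split / sub
        for split in ("test", "training")
        for sub in ("av", "images")
    ]
    return [str(p) for p in expected if not p.is_dir()]

missing = check_dataset_layout("dataset")
print(missing)  # lists any expected folders not yet present
```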

## ☀️ Training Stage

Training proceeds in two stages: the RIP stage and the segmentation stage.

### 1️⃣ RIP Stage

Run Preprocessing/generate_patch_selection_for_RIP.py to generate representative patches for the RIP stage, or directly use the pretrained weights.

```shell
python generate_patch_selection_for_RIP.py \
    --dataset_path <dataset path> \
    --train_or_test <training|test> \
    --out <output path>
cd ./RIP/
python train.py
```
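The RIP stage operates on patches cropped from diverse spatial regions of the fundus images. As a rough illustration only (not the repository's actual selection logic, which lives in generate_patch_selection_for_RIP.py and additionally scores patches for representativeness), non-overlapping patches can be cut from a 2-D image like this:

```python
def extract_patches(image, patch_size):
    """Cut a 2-D image (a list of rows) into non-overlapping
    patch_size x patch_size tiles, dropping partial tiles at the
    borders. A toy stand-in for the repository's patch selection."""
    h, w = len(image), len(image[0])
    patches = []
    for top in range(0, h - patch_size + 1, patch_size):
        for left in range(0, w - patch_size + 1, patch_size):
            patches.append(
                [row[left:left + patch_size]
                 for row in image[top:top + patch_size]]
            )
    return patches

# A 4x6 toy "image" yields 2x3 = 6 patches of size 2x2.
toy = [[r * 6 + c for c in range(6)] for r in range(4)]
print(len(extract_patches(toy, 2)))  # 6
```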

*(figure: RIP training loss)*

| Dataset  | Pretrained weight  |
|----------|--------------------|
| AV-DRIVE | RIP_pretrain_drive |
| LES-AV   | RIP_pretrain_les   |
| HRF      | RIP_pretrain_hrf   |

### 🌈 t-SNE visualization results (with vs. without RIP)

*(figures: t-SNE embeddings on AV-DRIVE, LES-AV, and HRF)*

### 2️⃣ Segmentation Stage

```shell
# train and evaluate
cd AV/
python main config/config_train_general

# change the config file and run test with visualization
python test_with_vis.py
```

*(figure: segmentation loss)*

where the hyper-parameters $\lambda_a$, $\lambda_s$, and $\lambda_d$ are calibrated to balance the three losses and are set to 0.01, 5, and 1, respectively; see ./AV/config/config_train_general.py.
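As a sketch of how these weights combine, the objective is a weighted sum of the three terms. The weights (0.01, 5, 1) come from this README, but the term names below (`loss_a`, `loss_s`, `loss_d`) are illustrative placeholders; the actual definitions are in the AV training code:

```python
# Weights as stated in the README; the loss terms are placeholders.
LAMBDA_A, LAMBDA_S, LAMBDA_D = 0.01, 5.0, 1.0

def total_loss(loss_a: float, loss_s: float, loss_d: float) -> float:
    """Weighted sum of the three loss terms."""
    return LAMBDA_A * loss_a + LAMBDA_S * loss_s + LAMBDA_D * loss_d

print(total_loss(1.0, 1.0, 1.0))
```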

| Dataset  | Checkpoint           |
|----------|----------------------|
| AV-DRIVE | RIP_checkpoint_drive |
| LES-AV   | RIP_checkpoint_les   |
| HRF      | RIP_checkpoint_hrf   |

### 🌈 Segmentation Performance

Full prediction results are available for AV-DRIVE, LES-AV, and HRF.

## 📝 To-Do List

- training and test code
- visualization demo
- test on more retinal datasets
