PyTorch implementation of PTNet for high-resolution and longitudinal infant MRI synthesis.
PTNet: A High-Resolution Infant MRI Synthesizer Based on Transformer
Xuzhe Zhang1, Xinzi He1, Jia Guo2, Nabil Ettehadi1, Natalie Aw2, David Semanek2, Jonathan Posner2, Andrew Laine1, Yun Wang2
1Columbia University Department of Biomedical Engineering, 2CUMC Department of Psychiatry
This 2D-only PTNet repo has been deprecated. Please visit our latest repo, which contains both 2D and 3D versions with a better data sampling strategy: https://github.com/XuzheZ/PTNet3D
This repo contains the code for the first (preprint) version of our paper. This version of PTNet is designed only for a pure MAE/MSE loss; combining it with adversarial training will significantly impair performance. If you want to integrate an adversarial training framework, please refer to the updated version for our journal paper, which introduces substantial improvements (e.g., a 3D version, perceptual and adversarial losses): https://github.com/XuzheZ/PTNet3D
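Since this version of PTNet is trained with a plain reconstruction objective (MAE or MSE) rather than an adversarial one, the two losses are worth contrasting. Below is a minimal, framework-agnostic NumPy sketch; the function names, array shapes, and toy data are illustrative and not taken from this repo:

```python
import numpy as np

def mae_loss(pred, target):
    """Mean absolute error (L1): penalizes errors linearly."""
    return np.abs(pred - target).mean()

def mse_loss(pred, target):
    """Mean squared error (L2): penalizes large errors quadratically."""
    return ((pred - target) ** 2).mean()

# Toy "synthesized" and "ground-truth" 2D slices.
rng = np.random.default_rng(0)
target = rng.random((256, 256)).astype(np.float32)
pred = target + 0.1  # uniform intensity offset of 0.1

print(float(mae_loss(pred, target)))  # ≈ 0.1
print(float(mse_loss(pred, target)))  # ≈ 0.01
```

In an actual training loop the same objective would be applied between the network output and the target modality (e.g., via `torch.nn.L1Loss` or `torch.nn.MSELoss` in PyTorch).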
The goal of this project is to synthesize high-resolution infant brain MRI.
- Linux
- Python 3.6
- NVIDIA GPU (11 GB memory or larger) + CUDA + cuDNN
git clone https://github.com/XuzheZ/PTNet.git
coming soon
In our first (preprint) version of the paper, we conducted experiments only on the dHCP dataset (http://www.developingconnectome.org/). For more challenging longitudinal tasks, please refer to the updated version for our journal paper: https://github.com/XuzheZ/PTNet3D
coming soon
coming soon
If you find this work useful for your research, please cite:
@article{zhang2021ptnet,
  title={PTNet: A High-Resolution Infant MRI Synthesizer Based on Transformer},
  author={Zhang, Xuzhe and He, Xinzi and Guo, Jia and Ettehadi, Nabil and Aw, Natalie and Semanek, David and Posner, Jonathan and Laine, Andrew and Wang, Yun},
  journal={arXiv preprint arXiv:2105.13993},
  year={2021}
}
This code borrows heavily from Tokens-to-Token ViT (Training Vision Transformers from Scratch on ImageNet), pix2pixHD, and pytorch-CycleGAN-and-pix2pix.