U-Net
U-Net is an encoder-decoder architecture originally developed for image segmentation. The first half of the network is a downsampling convolutional path that acts as a feature extractor for the input images. The second half upsamples these feature maps and reconstructs an image by combining the downsampling results with the upsampled images via skip connections. This page contains information to help you train U-Net in Google Colab using your own images.
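As an illustration of this encoder-decoder symmetry, the sketch below (plain Python, tracking spatial sizes only; the depth of 4 and input size of 256 are illustrative assumptions, not the notebook's defaults) traces how feature-map side lengths shrink during downsampling and are restored during upsampling:

```python
def unet_spatial_sizes(input_size=256, depth=4):
    """Trace feature-map side lengths through a U-Net-style network.

    input_size and depth are illustrative assumptions; 2x2 max-pooling
    halves each side on the way down, 2x up-convolution doubles it on
    the way up.
    """
    encoder = [input_size]
    for _ in range(depth):
        encoder.append(encoder[-1] // 2)   # pooling halves each side
    decoder = [encoder[-1]]
    for _ in range(depth):
        decoder.append(decoder[-1] * 2)    # upsampling doubles each side
    return encoder, decoder

enc, dec = unet_spatial_sizes()
print(enc)  # [256, 128, 64, 32, 16]
print(dec)  # [16, 32, 64, 128, 256]
# Skip connections combine encoder level i with the decoder level of the
# same size, which is how downsampled features re-enter the upsampling path.
```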
U-Net was described in the following papers:
- U-Net: Convolutional Networks for Biomedical Image Segmentation by Olaf Ronneberger, Philipp Fischer and Thomas Brox
- U-Net: deep learning for cell counting, detection, and morphometry by Thorsten Falk et al.
The original U-Net code and documentation are freely available on GitHub.
Please also cite these original papers when using U-Net with our notebook.
To train a U-Net you need matching pairs of input images (8-bit TIF) and masks (8-bit TIF). Additionally, the pixel values of the masks in the training dataset should be either 0 or 255.
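Masks that are not strictly binary (for example, masks exported with anti-aliased edges) can be re-binarised before training. A minimal sketch in plain Python, assuming the 0/255 convention above (the threshold of 127 and the `binarise_mask` helper are illustrative, not part of the notebook):

```python
def binarise_mask(mask, threshold=127):
    """Map every pixel of an 8-bit mask to 0 or 255.

    `mask` is a 2D list of pixel values; the threshold of 127 is an
    illustrative choice, adjust it to your data.
    """
    return [[255 if px > threshold else 0 for px in row] for row in mask]

mask = [[0, 40, 200],
        [255, 128, 3]]
print(binarise_mask(mask))  # [[0, 0, 255], [255, 255, 0]]
```

In practice you would apply the same thresholding to every mask image in your training folder before uploading to Colab.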
As an example dataset for training U-Net in our notebook, we use the dataset the original U-Net was designed for: EM images of neuronal membranes and their segmentation masks from the ISBI segmentation challenge 2012 (more information can be found here). However, segmentation masks can also be drawn by hand using the drawing tools available in Fiji/ImageJ; see, for example, the instructions for StarDist. This is not limited to EM images and could also work e.g. for brightfield images. The masks should be saved in the same 8-bit format as the example masks provided in our dataset.
!! For the U-Net example dataset, the ISBI website appears to be currently broken, so please use the alternative link provided. !!
When using this link to download the dataset, please follow the steps described below:
- Download the repository that contains the data by clicking on "Clone or download" and saving it as a .zip file.
- Once you have downloaded and unzipped the folder, the data we are interested in are the "train-labels.tif" and "train-volume.tif" stacks.
- Load the "train-volume.tif" stack into Fiji and save the stack as an image sequence. The data will be saved as individual images in a folder of your choice.
- Repeat the operation for the "train-labels.tif" stack. You can also set aside some of this data as a test dataset.
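Setting aside part of the exported image sequence as a test dataset amounts to splitting the file list. A hedged sketch in plain Python (the filenames, the 80/20 ratio, and the `split_dataset` helper are illustrative assumptions; Fiji's image-sequence export naming may differ on your system):

```python
import random

def split_dataset(filenames, test_fraction=0.2, seed=0):
    """Randomly split filenames into (train, test) lists.

    The 80/20 ratio and the fixed seed are illustrative assumptions.
    """
    rng = random.Random(seed)
    files = list(filenames)
    rng.shuffle(files)
    n_test = max(1, int(len(files) * test_fraction))
    return files[n_test:], files[:n_test]

# Hypothetical names mimicking a Fiji image-sequence export:
names = [f"train-volume{i:04d}.tif" for i in range(30)]
train, test = split_dataset(names)
print(len(train), len(test))  # 24 6
# Apply the SAME split to the corresponding label images so that each
# image keeps its matching mask.
```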
To train U-Net in Google Colab:
| Network | Link to example training and test dataset | Direct link to notebook in Colab |
| --- | --- | --- |
| U-Net (2D) | ISBI challenge | |
| U-Net (3D) | EPFL dataset | |
or:
- Download our streamlined ZeroCostDL4Mic notebooks.
- Open Google Colab.
- Once the notebook is open, follow the instructions.