Self-supervised Few-shot Learning for Semantic Segmentation: An Annotation-free Approach
MICCAI 2023 PRIME Workshop
This is the implementation of the paper "Self-supervised Few-shot Learning for Semantic Segmentation: An Annotation-free Approach" by Sanaz Karimijafarbigloo, Reza Azad, and Dorit Merhof. Implemented in Python 3.7 with PyTorch 1.5.1.
For more information, check out our paper on [arXiv].
- Python 3.7
- PyTorch 1.5.1
- CUDA 10.1
Conda environment settings:
conda create -n fewshot python=3.7
conda activate fewshot
conda install pytorch=1.5.1 torchvision cudatoolkit=10.1 -c pytorch
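Optionally, a quick sanity check (an illustrative snippet, not part of the repository) to confirm that PyTorch and CUDA are picked up as expected:

```python
import torch
import torchvision

# With the environment above, PyTorch should report 1.5.1 and CUDA 10.1.
print("PyTorch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA runtime:", torch.version.cuda)
    print("GPU:", torch.cuda.get_device_name(0))
```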
Download the following dataset:
- FSS-1000 images and annotations from our [Google Drive].
Create a directory 'Dataset' and pass its path to the train/test arguments.
To accelerate inference, the spectral method for generating support masks is run as a separate step. Go to 'create_mask' and run the 'creating_mask.ipynb' notebook to create masks for all samples. Our data loader uses these pre-computed masks at inference time on the test set, so the method predicts new samples without requiring support annotations.
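For reference, the idea behind spectral support-mask generation can be sketched as below. This is a simplified, hypothetical illustration (spectral bi-partitioning of a patch-feature affinity graph), not the exact code from 'creating_mask.ipynb':

```python
import numpy as np

def spectral_support_mask(features):
    """Simplified sketch: binary pseudo-mask from the Fiedler vector of the
    graph Laplacian built on per-patch deep features.

    features: (N, C) array, one row per image patch (hypothetical input layout).
    Returns a boolean mask of shape (N,).
    """
    # Cosine-similarity affinity between patches, clipped at zero
    # (an assumption; the notebook may use a different affinity).
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    W = np.clip(f @ f.T, 0, None)

    # Unnormalized graph Laplacian L = D - W.
    L = np.diag(W.sum(axis=1)) - W

    # Second-smallest eigenvector (Fiedler vector) bi-partitions the graph.
    _, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]

    # Split by sign; deciding which side is foreground still needs a heuristic
    # (e.g., the segment that is smaller or more central in the image).
    return fiedler > 0
```

The resulting per-patch mask would then be upsampled to the image resolution and saved, so that the data loader can read it at test time.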
python train.py --backbone resnet50 --benchmark fss --lr 1e-3 --bsz 20 --logpath "your_experiment_name"
- Training takes approx. 12 hours on an RTX A5000 GPU.
python test.py --backbone resnet50 --benchmark fss --nshot 1 --load "path_to_trained_model/weight.pt"
- Inference takes approx. 5 minutes on an RTX A5000 GPU.
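If you want to inspect the checkpoint passed to --load outside of test.py, here is a minimal sketch (assuming weight.pt was written with torch.save and holds a plain state_dict; adjust if the file stores a full checkpoint dictionary instead):

```python
import torch

# Load on CPU so no GPU is required just for inspection.
state = torch.load("path_to_trained_model/weight.pt", map_location="cpu")

# Print a few parameter names and shapes (assumes a plain state_dict).
if isinstance(state, dict):
    for name, value in list(state.items())[:5]:
        shape = tuple(value.shape) if hasattr(value, "shape") else type(value)
        print(name, shape)
```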
If you use this code for your research, please consider citing:
@article{karimijafarbigloo2023self,
title={Self-supervised Few-shot Learning for Semantic Segmentation: An Annotation-free Approach},
author={Karimijafarbigloo, Sanaz and Azad, Reza and Merhof, Dorit},
journal={arXiv preprint arXiv:2307.14446},
year={2023}
}