diff --git a/.gitignore b/.gitignore
index d34c57548..db2eae9b9 100644
--- a/.gitignore
+++ b/.gitignore
@@ -115,4 +115,5 @@
 ENV/
 trains/
 .nnunetv2-venv/
-sanbox.ipynb
\ No newline at end of file
+sanbox.ipynb
+!documentation/assets/scribble_example.png
diff --git a/documentation/assets/scribble_example.png b/documentation/assets/scribble_example.png
new file mode 100644
index 000000000..62e775cfd
Binary files /dev/null and b/documentation/assets/scribble_example.png differ
diff --git a/documentation/ignore_label.md b/documentation/ignore_label.md
new file mode 100644
index 000000000..fc8127feb
--- /dev/null
+++ b/documentation/ignore_label.md
@@ -0,0 +1,104 @@
+# Ignore Label
+
+The _ignore label_ can be used to mark regions that should be ignored by nnU-Net. This makes it possible to
+learn from images where only sparse annotations are available, for example in the form of scribbles or a limited
+number of annotated slices. Internally, this is accomplished with partial losses, i.e. losses that are only
+computed on annotated pixels while ignoring the rest. Take a look at our
+[`DC_and_BCE_loss`](../nnunetv2/training/loss/compound_losses.py) to see how this is done; a minimal sketch of the
+mechanism is shown below. During inference (validation and prediction), nnU-Net will always predict dense
+segmentations. Metric computation during validation is of course only done on annotated pixels.
+
+Sparse annotations can be used to train a model for application to new, unseen images or to autocomplete the
+provided training cases given the sparse labels.
+
+(See our [paper](https://arxiv.org/abs/2403.12834) for more information.)
+
+Typical use cases for the ignore label are:
+- Saving annotation time through sparse annotation schemes
+  - Annotation of all or a subset of slices with scribbles (Scribble Supervision)
+  - Dense annotation of a subset of slices
+  - Dense annotation of chosen patches/cubes within an image
+- Coarsely masking out faulty regions in the reference segmentations
+- Masking areas for other reasons
+
+If you are using nnU-Net's ignore label, please cite the following paper in addition to the original nnU-Net paper:
+
+```
+Gotkowski, K., Lüth, C., Jäger, P. F., Ziegler, S., Krämer, L., Denner, S., Xiao, S., Disch, N., Maier-Hein, K. H.,
+& Isensee, F. (2024). Embarrassingly Simple Scribble Supervision for 3D Medical Segmentation. arXiv.
+https://arxiv.org/abs/2403.12834
+```
+
+## Use Cases
+
+### Scribble Supervision
+
+Scribbles are free-form drawings used to coarsely annotate an image. As we have demonstrated in our recent
+[paper](https://arxiv.org/abs/2403.12834), nnU-Net's partial loss implementation enables state-of-the-art learning
+from partially annotated data and even surpasses many purpose-built methods for learning from scribbles. As a
+starting point, for each image slice and each class (including background), an interior and a border scribble
+should be generated:
+
+- Interior scribble: a scribble placed randomly within the interior of a class instance
+- Border scribble: a scribble roughly delineating a small part of the border of a class instance
+
+An example of such scribble annotations is depicted in Figure 1 (and animated in Animation 1). Depending on the
+availability and variability of the data, it is also possible to annotate only a subset of selected slices. Encoded
+as a training label map, every pixel that is not covered by a scribble simply receives the ignore label, as
+sketched below.
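+The following is a hypothetical sketch of this encoding (the `-1` convention for unannotated pixels and the
+`scribbles_to_training_label` helper are illustrations of ours, not part of nnU-Net). In nnU-Net, the ignore label
+is declared in `dataset.json` as a regular label named `ignore` that carries the highest label value:
+
+```python
+import numpy as np
+
+# Hypothetical two-class setup. In dataset.json this would read:
+#   "labels": {"background": 0, "organ": 1, "ignore": 2}
+# with "ignore" carrying the highest label value.
+IGNORE_VALUE = 2
+
+
+def scribbles_to_training_label(scribbles: np.ndarray) -> np.ndarray:
+    # `scribbles` holds a valid class index wherever a scribble was drawn
+    # and -1 everywhere else (a convention of this sketch only).
+    label = np.full(scribbles.shape, IGNORE_VALUE, dtype=np.uint8)
+    annotated = scribbles >= 0
+    label[annotated] = scribbles[annotated].astype(np.uint8)
+    return label
+```
+
+The same encoding covers the other sparse schemes listed above: densely annotate a subset of slices or patches and
+fill everything else with the ignore value.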
+<p align="center">
+    <img src="assets/scribble_example.png" />
+</p>
+<p align="center">Figure 1: Example of scribble annotations with interior and border scribbles.</p>
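+Given such a sparsely annotated label map, training proceeds with the partial loss described in the introduction.
+The following is a minimal sketch of that mechanism, assuming PyTorch tensors; it is not nnU-Net's actual
+[`DC_and_BCE_loss`](../nnunetv2/training/loss/compound_losses.py), which additionally includes a Dice term:
+
+```python
+import torch
+import torch.nn.functional as F
+
+
+def partial_cross_entropy(logits: torch.Tensor, target: torch.Tensor, ignore_label: int) -> torch.Tensor:
+    # Per-pixel loss; `ignore_index` zeroes the contribution of unannotated pixels.
+    per_pixel = F.cross_entropy(logits, target, ignore_index=ignore_label, reduction="none")
+    annotated = target != ignore_label
+    # Average over annotated pixels only; the clamp avoids division by zero
+    # for patches that contain no annotation at all.
+    return per_pixel.sum() / annotated.sum().clamp_min(1)
+```
+
+Because the loss is normalized over annotated pixels only, a patch without any annotation contributes zero loss
+instead of producing NaNs.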