Commit

Merge branch 'MIC-DKFZ:master' into master
fitzjalen authored Apr 2, 2024
2 parents fbf6d6a + c7f85b7 commit 6b9602d
Showing 3 changed files with 27 additions and 2 deletions.
10 changes: 10 additions & 0 deletions nnunetv2/inference/predict_from_raw_data.py
@@ -422,6 +422,16 @@ def predict_single_npy_array(self, input_image: np.ndarray, image_properties: di
output_file_truncated: str = None,
save_or_return_probabilities: bool = False):
"""
WARNING: SLOW. ONLY USE THIS IF YOU CANNOT GIVE NNUNET MULTIPLE IMAGES AT ONCE FOR SOME REASON.
input_image: Make sure to load the image the way nnU-Net expects! nnU-Net is trained with a certain axis
ordering which must be preserved at inference time, otherwise you will get bad results.
The easiest way to achieve that is to use the same I/O class
for loading images as was used during nnU-Net preprocessing! You can find that class in your
plans.json file under the key "image_reader_writer". If you decide to freestyle, know that the
default axis ordering for medical images is the one from SimpleITK. If you load with nibabel,
you need to transpose your axes AND your spacing from [x,y,z] to [z,y,x]!
image_properties must only have a 'spacing' key!
"""
ppa = PreprocessAdapterFromNpy([input_image], [segmentation_previous_stage], [image_properties],
18 changes: 17 additions & 1 deletion nnunetv2/inference/readme.md
@@ -147,6 +147,11 @@ cons:

tldr:
- you give one image as npy array
- axis ordering must match that of the corresponding training data. The easiest way to achieve that is to use the same I/O class
for loading images as was used during nnU-Net preprocessing! You can find that class in your
plans.json file under the key "image_reader_writer". If you decide to freestyle, know that the
default axis ordering for medical images is the one from SimpleITK. If you load with nibabel,
you need to transpose your axes AND your spacing from [x,y,z] to [z,y,x]!
- everything is done in the main process: preprocessing, prediction, resampling, (export)
- no interlacing, slowest variant!
- ONLY USE THIS IF YOU CANNOT GIVE NNUNET MULTIPLE IMAGES AT ONCE FOR SOME REASON
@@ -160,9 +165,20 @@ cons:
- never the right choice unless you can only give a single image at a time to nnU-Net

```python
# predict a single numpy array (SimpleITKIO)
img, props = SimpleITKIO().read_images([join(nnUNet_raw, 'Dataset003_Liver/imagesTr/liver_63_0000.nii.gz')])
ret = predictor.predict_single_npy_array(img, props, None, None, False)

# predict a single numpy array (NibabelIO)
img, props = NibabelIO().read_images([join(nnUNet_raw, 'Dataset003_Liver/imagesTr/liver_63_0000.nii.gz')])
ret = predictor.predict_single_npy_array(img, props, None, None, False)

# The following IS NOT RECOMMENDED. Use nnunetv2.imageio!
# When loading with nibabel directly, we need to transpose axes and spacing to match the nnU-Net default (SimpleITK) axis ordering:
img_nii = nib.load('Dataset003_Liver/imagesTr/liver_63_0000.nii.gz')
img = np.asanyarray(img_nii.dataobj).transpose([2, 1, 0]) # reverse axis order to match SITK
props = {'spacing': img_nii.header.get_zooms()[::-1]} # reverse axis order to match SITK
ret = predictor.predict_single_npy_array(img, props, None, None, False)
```
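The axis-reversal step in the nibabel branch above can be sketched with plain numpy. This is a minimal illustration on a synthetic array, not a real medical image; the shape and spacing values are made up for demonstration:

```python
import numpy as np

# Pretend nibabel load: arrays come back in [x, y, z] order.
vol_xyz = np.arange(24).reshape(2, 3, 4)   # hypothetical volume, shape (x=2, y=3, z=4)
spacing_xyz = (1.0, 0.5, 2.0)              # hypothetical header zooms in [x, y, z]

# Reverse the axis order to match SimpleITK's [z, y, x] convention.
vol_zyx = vol_xyz.transpose([2, 1, 0])
# The spacing must be reversed in the same way, or it no longer matches the axes.
spacing_zyx = spacing_xyz[::-1]

assert vol_zyx.shape == (4, 3, 2)
assert spacing_zyx == (2.0, 0.5, 1.0)
# A voxel at [x, y, z] in the original is at [z, y, x] after the transpose:
assert vol_zyx[3, 2, 1] == vol_xyz[1, 2, 3]
```

In the readme snippet, `np.asanyarray(img_nii.dataobj)` and `img_nii.header.get_zooms()` play the roles of `vol_xyz` and `spacing_xyz`; forgetting to reverse either the array axes or the spacing silently produces wrong-resolution predictions.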

## Predicting with a custom data iterator
1 change: 0 additions & 1 deletion nnunetv2/training/dataloading/base_data_loader.py
@@ -19,7 +19,6 @@ def __init__(self,
pad_sides: Union[List[int], Tuple[int, ...], np.ndarray] = None,
probabilistic_oversampling: bool = False):
super().__init__(data, batch_size, 1, None, True, False, True, sampling_probabilities)
assert isinstance(data, nnUNetDataset), 'nnUNetDataLoaderBase only supports dictionaries as data'
self.indices = list(data.keys())

self.oversample_foreground_percent = oversample_foreground_percent