This repository contains the code (in PyTorch) for the framework introduced in the following paper:
ImplicitVol: Sensorless 3D Ultrasound Reconstruction with Deep Implicit Representation [Paper] [Project Page]

    @article{yeung2021implicitvol,
      title   = {ImplicitVol: Sensorless 3D Ultrasound Reconstruction with Deep Implicit Representation},
      author  = {Yeung, Pak-Hei and Hesse, Linde and Aliasi, Moska and Haak, Monique and Xie, Weidi and Namburete, Ana IL and others},
      journal = {arXiv preprint arXiv:2109.12108},
      year    = {2021},
    }
The following dependencies are required:
- Python (3.7), other versions should also work
- PyTorch (1.12), other versions should also work
- scipy
- skimage
- nibabel
Due to data privacy restrictions on the ultrasound data used in our study, this repository uses an example MRI volume `example/sub-feta001_T2w.nii.gz` from FeTA for demonstration. At a high level, running `train.py` does the following:
- A set of 2D slices is sampled from `example/sub-feta001_T2w.nii.gz`, with known plane locations in the volume. This mimics acquiring the 2D ultrasound videos and predicting their plane locations.
- The set of 2D slices is used to train the implicit representation model (see the sketch after this list).
- Novel views (i.e. 2D slices sampled from the volume, perpendicular to the training slices) are generated from the trained model.
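
As a rough illustration of the steps above, the sketch below samples axial slices (with known plane locations) from the example volume and fits a small coordinate MLP to them. It is not the actual `train.py`: the sampling scheme, normalisation, and network architecture in the repository differ, and the names `sample_axial_slices` and `ImplicitMLP` are hypothetical.

```python
# Minimal sketch (not the actual train.py): sample 2D slices with known plane
# locations from the example volume and fit a small implicit model to them.
import nibabel as nib
import numpy as np
import torch
import torch.nn as nn

vol = nib.load("example/sub-feta001_T2w.nii.gz").get_fdata().astype(np.float32)
vol = (vol - vol.min()) / (vol.max() - vol.min())  # normalise intensities to [0, 1]
D, H, W = vol.shape

def sample_axial_slices(step=4):
    """Return (coords, intensities) for every `step`-th axial slice.
    Coordinates are normalised to [-1, 1]; the first coordinate encodes the plane location."""
    coords, vals = [], []
    for z in range(0, D, step):
        yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
        zz = np.full_like(yy, z)
        xyz = np.stack([zz, yy, xx], -1).reshape(-1, 3).astype(np.float32)
        xyz = 2 * xyz / np.array([D - 1, H - 1, W - 1], dtype=np.float32) - 1
        coords.append(xyz)
        vals.append(vol[z].reshape(-1, 1))
    return torch.from_numpy(np.concatenate(coords)), torch.from_numpy(np.concatenate(vals))

class ImplicitMLP(nn.Module):
    """Coordinate MLP: (z, y, x) -> intensity; a stand-in for the paper's network."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

coords, target = sample_axial_slices()
model = ImplicitMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for it in range(200):  # a few iterations only, for illustration
    idx = torch.randint(0, coords.shape[0], (4096,))
    loss = nn.functional.mse_loss(model(coords[idx]), target[idx])
    opt.zero_grad(); loss.backward(); opt.step()

# Novel views (e.g. planes perpendicular to the training slices) can then be
# rendered by evaluating the trained model on a grid of coordinates for the
# desired plane.
```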
To adapt the code to your own data, you may need to modify the following:
- Localize the images in 3D space, using approaches such as PlaneInVol. If the plane location of each 2D image is already known, you can skip this step.
- Save the 2D images and their corresponding plane locations (one possible format is sketched after this list).
- Modify the `Dataset_volume_video` class in `dataset.py` to import the saved file.
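
The exact saved-file format and the interface of `Dataset_volume_video` are defined by `dataset.py`; the sketch below only illustrates one plausible layout, and the file name `my_frames.npz`, the field names `images` and `plane_locations`, and the class `MyVideoDataset` are all hypothetical.

```python
# Sketch of one possible way to store 2D images with their plane locations
# and expose them through a PyTorch Dataset; names are illustrative and do
# not match dataset.py exactly.
import numpy as np
import torch
from torch.utils.data import Dataset

# Saving: images as an (N, H, W) array, plane locations as (N, 4, 4) pose matrices.
# np.savez("my_frames.npz", images=images, plane_locations=poses)

class MyVideoDataset(Dataset):
    """Loads saved 2D frames and their plane locations from an .npz file."""
    def __init__(self, path):
        data = np.load(path)
        self.images = torch.from_numpy(data["images"]).float()
        self.plane_locations = torch.from_numpy(data["plane_locations"]).float()

    def __len__(self):
        return self.images.shape[0]

    def __getitem__(self, idx):
        return self.images[idx], self.plane_locations[idx]
```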