Reconstructing a 3D volume from 2D freehand ultrasound images by training a deep network to implicitly represent the volume.

ImplicitVol: Sensorless 3D Ultrasound Reconstruction with Deep Implicit Representation

This repository contains the code (in PyTorch) for the framework introduced in the following paper:

ImplicitVol: Sensorless 3D Ultrasound Reconstruction with Deep Implicit Representation [Paper] [Project Page]

@article{yeung2021implicitvol,
	title = {ImplicitVol: Sensorless 3D Ultrasound Reconstruction with Deep Implicit Representation},
	author = {Yeung, Pak-Hei and Hesse, Linde and Aliasi, Moska and Haak, Monique and Xie, Weidi and Namburete, Ana IL and others},
	journal = {arXiv preprint arXiv:2109.12108},
	year = {2021},
}

Contents

Dependencies

  • Python (3.7), other versions should also work
  • PyTorch (1.12), other versions should also work
  • scipy
  • skimage
  • nibabel

Reconstruct from an example volume

Due to privacy restrictions on the ultrasound data used in our study, this repository uses an example MRI volume, example/sub-feta001_T2w.nii.gz from FeTA, for demonstration. At a high level, running train.py does the following:

  1. A set of 2D slices is sampled from example/sub-feta001_T2w.nii.gz, with known plane locations in the volume. This mimics acquiring 2D ultrasound videos and predicting their plane locations.
  2. The set of 2D slices is used to train the implicit representation model.
  3. Novel views (i.e. 2D slices sampled from the volume, perpendicular to the training slices) are generated from the trained model.
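Step 1 above amounts to resampling the volume along planes with known locations. A minimal sketch of such plane sampling with scipy, shown on a tiny synthetic volume (the actual logic lives in train.py; the function name and plane parameterization here are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def sample_plane(volume, origin, u_dir, v_dir, height, width):
    """Bilinearly sample a (height, width) plane from a 3D volume.
    `origin` is the 3D voxel position of pixel (0, 0); `u_dir`/`v_dir`
    are 3D direction vectors for one pixel step along rows/columns."""
    rows, cols = np.mgrid[0:height, 0:width]          # pixel index grid
    pts = (origin[:, None, None]
           + rows[None] * u_dir[:, None, None]
           + cols[None] * v_dir[:, None, None])       # (3, H, W) voxel coords
    return map_coordinates(volume, pts, order=1, mode='nearest')

# Demo on a synthetic volume (a real volume would be loaded with nibabel,
# e.g. nib.load('example/sub-feta001_T2w.nii.gz').get_fdata()).
vol = np.arange(4 * 4 * 4, dtype=float).reshape(4, 4, 4)
axial = sample_plane(vol,
                     origin=np.array([1.0, 0.0, 0.0]),  # fix first axis at 1
                     u_dir=np.array([0.0, 1.0, 0.0]),
                     v_dir=np.array([0.0, 0.0, 1.0]),
                     height=4, width=4)
assert np.allclose(axial, vol[1])  # axis-aligned case recovers the slice
```

Oblique planes are obtained the same way, by passing non-axis-aligned (e.g. rotated) direction vectors.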

Modify for your own images

To adapt the code for your own data, you may need to modify the following:

Plane localization

  1. If the plane location of each 2D image is known, you can skip this stage.
  2. Otherwise, localize the images in 3D space using approaches such as PlaneInVol.
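A common way to represent a plane location is a rigid-body pose: a rotation orienting the in-plane axes and normal, plus a translation placing the plane in the volume. A minimal sketch assuming this parameterization (PlaneInVol's actual output format may differ; all names here are illustrative):

```python
import numpy as np

def plane_to_world(rotation, translation, height, width, spacing=1.0):
    """Map every pixel of an H x W image to 3D coordinates.
    Pixel (i, j) -> R @ [(i - H/2)*s, (j - W/2)*s, 0] + t,
    where R is a 3x3 rotation and t a 3-vector translation."""
    rows, cols = np.mgrid[0:height, 0:width]
    local = np.stack([(rows - height / 2) * spacing,
                      (cols - width / 2) * spacing,
                      np.zeros_like(rows, dtype=float)], axis=-1)  # (H, W, 3)
    return local @ rotation.T + translation

# Identity rotation: the plane lies in the z = 5 slice of the volume.
coords = plane_to_world(np.eye(3), np.array([0.0, 0.0, 5.0]), 4, 4)
assert coords.shape == (4, 4, 3)
assert np.all(coords[..., 2] == 5.0)  # constant along the plane normal
```

Whatever representation you use, it must let you compute a 3D coordinate for every pixel, since those coordinates are the inputs to the implicit model.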

Reconstruction

  1. Save the 2D images and their corresponding plane locations.
  2. Modify the Dataset_volume_video class in dataset.py to import the saved file.
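A minimal sketch of step 2, assuming the images and poses are saved together in a single .npz file (the real Dataset_volume_video class and its expected file layout may differ; the class name, keys, and pose format here are assumptions):

```python
import numpy as np
import torch
from torch.utils.data import Dataset

class PlaneImageDataset(Dataset):
    """Illustrative stand-in for Dataset_volume_video in dataset.py:
    loads 2D frames and their plane locations from one .npz file."""

    def __init__(self, npz_path):
        data = np.load(npz_path)
        self.images = torch.from_numpy(data['images']).float()  # (N, H, W)
        self.poses = torch.from_numpy(data['poses']).float()    # (N, 4, 4)

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        # One training sample: a 2D frame and its plane-to-volume pose.
        return self.images[idx], self.poses[idx]

# Saving your own data in this assumed layout:
# np.savez('my_scan.npz', images=frames, poses=plane_poses)
```

The dataset can then be wrapped in a standard torch.utils.data.DataLoader for training.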
