
DeepVelo and GPUs #8

Open
mionarankovic opened this issue Jan 15, 2024 · 2 comments
@mionarankovic

Hi,

I'm interested in understanding whether it's feasible to run deepvelo on multiple GPUs (instead of just a single GPU).

My particular questions are the following:

  1. Does DeepVelo support parallelization, and can it take advantage of multiple GPUs?
  2. If so, does this significantly speed up the analysis?

Best,
Miona Rankovic

@hsmaan hsmaan self-assigned this Jan 22, 2024
@hsmaan
Member

hsmaan commented Jan 22, 2024

Hi Miona,

Currently, we don't use mini-batch training, due to constraints in the model. We would have to modify the model to support mini-batch training on the DeepVelo graph; this is on our to-do list, and we'll open a PR for it. Without mini-batching, we can't take advantage of the torch DataParallel or DistributedDataParallel (DDP) classes (https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html).
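To illustrate what this would look like once mini-batch training lands, here is a minimal sketch of wrapping a model for multi-GPU data parallelism. Note that `TinyVeloNet` is a hypothetical stand-in module, not DeepVelo's actual architecture, and the batch dimensions are made up for the example:

```python
# Sketch: multi-GPU data parallelism over mini-batches.
# TinyVeloNet is a hypothetical placeholder model, NOT DeepVelo's architecture.
import torch
import torch.nn as nn

class TinyVeloNet(nn.Module):
    def __init__(self, n_genes=50, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_genes, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_genes),
        )

    def forward(self, x):
        # x: (batch of cells, genes) -> predicted velocity per gene
        return self.net(x)

model = TinyVeloNet()
if torch.cuda.device_count() > 1:
    # Splits each mini-batch across all visible GPUs in one process.
    # For multi-node or better scaling, DistributedDataParallel is preferred.
    model = nn.DataParallel(model)

batch = torch.randn(8, 50)  # mini-batch of 8 cells x 50 genes
velocity = model(batch)
print(velocity.shape)  # torch.Size([8, 50])
```

Both wrappers require the data to arrive in mini-batches, which is exactly the piece that is missing today: the current model consumes the whole cell graph at once, so there is no batch axis to split across devices.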

But even with one GPU, we found that DeepVelo runs in under a minute for >10000 cells, so it will scale quite well to larger datasets.

We have not explicitly run experiments comparing single- vs. multi-GPU usage, but I'm confident multi-GPU support will lower training time once enabled; I'm just not sure by what factor it will scale.

In the meantime, please let us know if you find any limitations in training the model for your data.

@hsmaan
Member

hsmaan commented Jan 22, 2024

PR for mini-batch/multi-gpu training opened here: minibatch_Training
