This repository has been archived by the owner on Jun 26, 2021. It is now read-only.

Additional backends #102

Merged
justusschock merged 292 commits into master from additional_backends on Aug 2, 2019

Conversation

justusschock (Member) commented May 20, 2019

This PR will add additional backends (TorchScript in #98 and TF Eager Execution in #101) as well as the move to the new apex.amp API (#97).

Currently this PR is only open to track conversations and the progress of its different parts. It will be merged after #66, #97, #98 and #101 are merged.

Progress so far:

The following changes will be done in subsequent PRs:

  • MXNet backend (WIP: Mxnet backend #119) merged
  • Provide examples for all backends
  • Add all backend-specific files to the docs
  • Add all backends to the README
  • Add all backends to the docs' getting-started section

Other possible backends include

  • Theano: Could be easy - quite similar to TF
  • Caffe2: Probably won't integrate due to incorporation into PyTorch
  • CNTK: Probably won't integrate because it is no longer actively developed and not that commonly used
  • PaddlePaddle (by Baidu)

If individual backends from this list are to be included, this should be done in separate issues/PRs afterwards, since it is not yet clear which of these backends we want to integrate.

@justusschock justusschock self-assigned this May 20, 2019
@justusschock justusschock changed the base branch from master to parallel_master June 4, 2019 11:24
justusschock (Member, Author) commented:

@ORippler Tests are passing now. It would be great if you could make some time for a review.

@justusschock justusschock added the "ready for review" label and removed the "help wanted" and "in progress" labels on Jul 30, 2019
ORippler (Collaborator) left a comment:


  • Change
    self.save_state(os.path.join(self.save_path, "checkpoint_epoch_0"))
    to make use of self.start_epoch (see the sketch after this list).
  • Look at the docstrings. Oftentimes, e.g. for the base trainer, they are wrong (they still wrongly state that a NotImplementedError is raised; I think this dates from before the trainer merger).
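A minimal sketch of the checkpoint-naming change suggested in the first point, assuming the trainer exposes save_state, save_path and start_epoch as in the snippet above (the surrounding trainer code is not shown):

```python
import os

# Inside the trainer's setup; "self" is the trainer instance (sketch only).

# before: the initial checkpoint is always labelled epoch 0
self.save_state(os.path.join(self.save_path, "checkpoint_epoch_0"))

# after: derive the suffix from self.start_epoch, so a training resumed
# from a later epoch does not overwrite the original epoch-0 checkpoint
self.save_state(
    os.path.join(self.save_path, "checkpoint_epoch_%d" % self.start_epoch)
)
```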

In general, tfeager and tfgraph look quite nice. One note: we don't actually need to do the tf.convert_to_tensor, as TF accepts np.ndarrays. But that is nitpicking.
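For illustration, a small standalone example of that point (not delira code; TF 1.x with eager execution is assumed):

```python
import numpy as np
import tensorflow as tf

tf.enable_eager_execution()  # TF 1.x only; TF 2.x is eager by default

data = np.random.rand(4, 3).astype(np.float32)

# explicit conversion, as currently done in the backend
explicit = tf.reduce_sum(tf.convert_to_tensor(data))

# implicit conversion: most TF ops accept np.ndarrays directly
# and convert them to tensors internally
implicit = tf.reduce_sum(data)

print(explicit.numpy() == implicit.numpy())  # True
```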

I also noted some points for discussion in the inline comments.

Inline review threads (resolved):

  • delira/io/tf.py (two threads)
  • delira/training/backends/torch/trainer.py (two threads, one outdated)
justusschock (Member, Author) commented Aug 1, 2019

I'll have a look at all of them and either address them here or in separate PRs:

Regarding the conversion to tensor type: I added this on purpose, since explicit is better than implicit and one can do some optimization (and maybe even quantization) after converting to tensors. It also makes clear where the conversion happens, as tensors and numpy arrays are not completely identical in their syntax. It is also important to note that for TF eager one will probably start gradient taping right after this (see the sketch below). But if you prefer removing this part, that's fine with me.
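A small standalone sketch of that argument (not the actual trainer code; the model and loss are made up for illustration): convert explicitly, then start taping right afterwards.

```python
import numpy as np
import tensorflow as tf

tf.enable_eager_execution()  # TF 1.x only; TF 2.x is eager by default

model = tf.keras.layers.Dense(1)
batch = np.random.rand(8, 3).astype(np.float32)

# the explicit conversion makes the numpy -> tensor boundary visible ...
inputs = tf.convert_to_tensor(batch)

# ... and gradient taping starts right after it
with tf.GradientTape() as tape:
    preds = model(inputs)
    loss = tf.reduce_mean(tf.square(preds))

grads = tape.gradient(loss, model.trainable_variables)
```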

@justusschock justusschock merged commit e6b6593 into master Aug 2, 2019
@justusschock justusschock deleted the additional_backends branch August 2, 2019 13:00