Releases: OpenNMT/OpenNMT-py
OpenNMT-py v0.7.1
Many fixes and code refactoring, thanks to @bpopeters, @flauted, and @guillaumekln.
New features
Random sampling decoding, thanks to @daphnei.
Enabled sharding of huge files at translation.
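A minimal sketch combining the two features (the flag names -random_sampling_topk, -random_sampling_temp, and a translate-time -shard_size are assumptions based on this release; check `python translate.py -h` for the exact options):

```bash
# Translate a huge file in shards of 10k examples, sampling from the
# top-10 tokens at each step instead of running beam search.
# Flag names are assumptions; verify with `python translate.py -h`.
python translate.py \
    -model model.pt \
    -src huge_test_file.txt \
    -output pred.txt \
    -shard_size 10000 \
    -beam_size 1 \
    -random_sampling_topk 10 \
    -random_sampling_temp 1.0
```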
OpenNMT-py v0.7.0
Many fixes and code refactoring, thanks to @bpopeters.
Migrated to PyTorch 1.0.
OpenNMT-py v0.6.0
Mostly fixes and code improvements.
New: yml config files; see the config folder.
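A minimal sketch of the idea, assuming the usual train.py option names double as YAML keys and are read through the -config flag (the config folder has the real examples):

```bash
# Hypothetical minimal config; any train.py flag can live in the yml
# instead of on the command line.
cat > my_config.yml <<'EOF'
data: data/demo
save_model: models/demo-model
train_steps: 100000
valid_steps: 10000
EOF

python train.py -config my_config.yml
```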
OpenNMT-py v0.5.0
Ability to reset the optimizer when using -train_from.
-reset_optim takes one of ['none', 'all', 'states', 'keep_states']:
- none: default behavior, same as before
- all: reset the optimizer entirely (note that steps start at zero again)
- states: reset only the optimizer states, keeping all other parameters from the checkpoint
- keep_states: keep the optimizer states from the checkpoint, but allow other parameters (the learning_rate, for instance) to be changed
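For example, to resume from a checkpoint while keeping the optimizer states but lowering the learning rate, a command along these lines should work (a minimal sketch; the data and checkpoint paths are placeholders):

```bash
# Resume training from a checkpoint, keep the optimizer states,
# but override the learning rate from the command line.
python train.py \
    -data data/demo \
    -save_model models/demo-model \
    -train_from models/demo-model_step_50000.pt \
    -reset_optim keep_states \
    -learning_rate 0.05
```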
Bug fixes.
Tested with PyTorch 1.0 RC; works fine.
OpenNMT-py v0.4.1
- Fix preprocess filenames broken by the new sharding.
OpenNMT-py v0.4
Fixed Speech2Text training (thanks Yuntian)
Removed -max_shard_size; it is replaced by -shard_size, the number of examples in a shard.
The default value of 1M works fine for most text datasets (and avoids RAM OOM in most cases).
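A sketch of the corresponding preprocess call (paths are placeholders; -shard_size 1000000 reproduces the default):

```bash
# Shard the training data into files of 1M examples each to keep
# preprocessing memory bounded; -shard_size counts examples, not bytes.
python preprocess.py \
    -train_src data/train.src.txt -train_tgt data/train.tgt.txt \
    -valid_src data/valid.src.txt -valid_tgt data/valid.tgt.txt \
    -save_data data/demo \
    -shard_size 1000000
```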
OpenNMT-py v0.3
Now requires PyTorch 0.4.1.
Multi-node Multi-GPU with Torch Distributed
New options are:
-master_ip: IP address of the master node
-master_port: port number of the master node
-world_size: total number of processes to be run (total GPUs across all nodes)
-gpu_ranks: list of indices of the processes across all nodes
-gpuid is deprecated
See examples in https://github.com/OpenNMT/OpenNMT-py/blob/master/docs/source/FAQ.md
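As a sketch, training on two nodes with two GPUs each could be launched as follows (the IP address and data/model paths are placeholders; the FAQ above has the authoritative examples):

```bash
# Node 0 (the master node), running ranks 0 and 1:
python train.py -data data/demo -save_model models/demo-model \
    -master_ip 192.168.1.1 -master_port 10000 \
    -world_size 4 -gpu_ranks 0 1

# Node 1, running ranks 2 and 3:
python train.py -data data/demo -save_model models/demo-model \
    -master_ip 192.168.1.1 -master_port 10000 \
    -world_size 4 -gpu_ranks 2 3
```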
Fixes to img2text; it is now working.
New sharding based on number of examples
Fixes to avoid functions deprecated in PyTorch 0.4.1.
OpenNMT-py v0.2.1
Fixes and improvements
- First compatibility steps with PyTorch 0.4.1 (non-breaking)
- Fix TranslationServer (when multiple requests try to load the same model at the same time)
- Fix StopIteration error (Python 3.7)
New features
- Ensemble decoding at inference (thanks @Waino); see the FAQ and the sketch below.
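Ensembling is done by passing several checkpoints to -model; their output distributions are combined at each decoding step. A sketch (checkpoint names are placeholders):

```bash
# Decode with an ensemble of three checkpoints (e.g. different seeds);
# the decoder combines their output distributions at every step.
python translate.py \
    -model model_seed1.pt model_seed2.pt model_seed3.pt \
    -src test.src.txt \
    -output pred.txt
```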
Last PyTorch 0.4.0 version
New in this release:
Multi-GPU based on torch.distributed (acknowledgement to Fairseq)
Change from epoch-based to step-based training (see opts.py)
Average Attention Network (AAN) for the Transformer (thanks @francoishernandez)
New fast beam search with the -fast flag in translate.py (thanks @guillaumekln); a sketch follows below
Sparse attention / sparsemax (thanks to @bpopeters)
and many fixes.
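As a sketch of the fast beam search mentioned above (paths are placeholders; assuming -fast simply toggles the faster decoder implementation):

```bash
# Decode with the faster beam search implementation.
python translate.py -model model.pt -src test.src.txt -output pred.txt \
    -beam_size 5 -fast
```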
This is the last version supporting PyTorch 0.4.0.
The next release, for PyTorch 0.4.1, includes breaking changes.
PyTorch 0.3 Last Release
Merge pull request #680 from OpenNMT/torch0.4: Fix softmaxes