Releases: mllam/neural-lam
v0.2.0
Highlights
This release focuses on setting up the neural-lam repository and codebase to enable collaboration.
Detailed Changes
Added
- Added tests for loading the dataset, creating the graph, and training the model based on a reduced MEPS dataset stored on AWS S3, along with automatic running of tests on push/PR to GitHub, including pushes to the main branch. Added caching of test data to speed up test runs. #38, #55 @SimonKamuk
- Replaced `constants.py` with `data_config.yaml` for data configuration management #31 @sadamov
- New metrics (`nll` and `crps_gauss`) and `metrics` submodule, stddiv output option (a closed-form sketch of the Gaussian CRPS is given after this list) c14b6b4 @joeloskarsson
- Ability to "watch" metrics and log c14b6b4 @joeloskarsson
- Pre-commit setup for linting and formatting #6, #8 @sadamov, @joeloskarsson
- Added a GitHub pull-request template to ease the contribution and review process #53 @leifdenby
- CI/CD setup for running both CPU- and GPU-based testing, with both pdm- and pip-based installs #37 @khintz, @leifdenby
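For context on the `crps_gauss` metric, below is a minimal sketch of the standard closed-form CRPS of a Gaussian predictive distribution. The function name, signature and reduction behaviour are assumptions for illustration, not the actual `neural_lam.metrics` API.

```python
import math

import torch


def crps_gauss(mean, std, target):
    """Closed-form CRPS of a Gaussian N(mean, std^2) against an observed target.

    Illustrative sketch only; the real implementation lives in neural_lam.metrics
    and may differ in signature, masking and reduction over dimensions.
    """
    standard_normal = torch.distributions.Normal(0.0, 1.0)
    z = (target - mean) / std  # standardized error
    pdf = torch.exp(standard_normal.log_prob(z))
    cdf = standard_normal.cdf(z)
    # Standard closed form: CRPS = sigma * (z * (2*Phi(z) - 1) + 2*phi(z) - 1/sqrt(pi))
    return std * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))
```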
Changed
- Clarify routine around requesting reviewer and assignee in PR template #74 @joeloskarsson
- Argument parser updated to use `action="store_true"` instead of 0/1 for boolean arguments #72 @ErikLarssonDev
- Optional multi-core/GPU support for statistics calculation in `create_parameter_weights.py` #22 @sadamov
- Robust restoration of optimizer and scheduler using `ckpt_path` #17 @sadamov
- Updated scripts and modules to use `data_config.yaml` instead of `constants.py` #31 @sadamov
- Added new flags in `train_model.py` for configuration previously in `constants.py` #31 @sadamov
- Moved batch-static features ("water cover") into the forcing component returned by `WeatherDataset` #13 @joeloskarsson
- Change validation metric from `mae` to `rmse` c14b6b4 @joeloskarsson
- Change RMSE definition to compute the square root after all averaging (see the sketch after this list) #10 @joeloskarsson
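To make the RMSE change in #10 concrete, the sketch below contrasts taking the square root per sample before batch averaging with taking it after averaging over all dimensions. The tensor shapes, dimension layout and the assumed old behaviour are illustrative only.

```python
import torch

# Hypothetical prediction/target tensors with shape (batch, grid_node, variable)
prediction = torch.randn(4, 100, 3)
target = torch.randn(4, 100, 3)
squared_error = (prediction - target) ** 2

# Sqrt before the final averaging: mean over the batch of per-sample RMSEs
rmse_per_sample_mean = torch.sqrt(squared_error.mean(dim=(1, 2))).mean()

# Sqrt after all averaging (the definition adopted in #10, as sketched here)
rmse_overall = torch.sqrt(squared_error.mean())

# The two generally differ; since sqrt is concave, the mean of per-sample
# RMSEs can never exceed the overall RMSE (Jensen's inequality).
print(rmse_per_sample_mean.item(), rmse_overall.item())
```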
Removed
- `WeatherDataset(torch.Dataset)` no longer returns the "batch-static" component of a training item (only `prev_state`, `target_state` and `forcing`); the batch-static features are instead included in the forcing component #13 @joeloskarsson
Maintenance
- Simplify pre-commit setup by 1) reducing linting to only cover static analysis, excluding imports from external dependencies (this will be handled in the build/test CI/CD action introduced later), 2) pinning versions of linting tools in the pre-commit config (and removing them from `requirements.txt`) and 3) using a GitHub action to run pre-commit #29 @leifdenby
- Change copyright formulation in license to encompass all contributors #47 @joeloskarsson
- Fix incorrect ordering of x- and y-dimensions in comments describing tensor shapes for MEPS data #52 @joeloskarsson
- Cap numpy version to < 2.0.0 (this cap was removed in #37, see below) #68 @joeloskarsson
- Remove numpy < 2.0.0 version cap #37 @leifdenby
- Turn `neural-lam` into a Python package by moving all `*.py` files into the `neural_lam/` source directory and updating imports accordingly. This means all CLI functions are now invoked through the package name, e.g. `python -m neural_lam.train_model` instead of `python train_model.py` (and can be run anywhere once the package has been installed) #32 @leifdenby
- Move from `requirements.txt` to `pyproject.toml` for defining package dependencies #37 @leifdenby
- Add Slack and new publication info to README #78 @joeloskarsson
Compatibility
This version has been tested with Python 3.9-3.12.
Upgrade Steps
To upgrade to neural-lam v0.2.0:
- Update your local version: `git pull` the latest changes from the repository and install locally with `pip install -e .`. Note: this release is not yet available on PyPI, so you will need to install it from the GitHub repository.
- Adapt any code relying on constants previously defined in `neural_lam/constants.py` to instead read them from the new YAML config file (a minimal sketch is given after this list).
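A minimal sketch of the second step, assuming a PyYAML-based read of the config; the file path and key name below are hypothetical and must be replaced by whatever your `data_config.yaml` actually defines.

```python
import yaml

# Before (v0.1.0 style): values were imported from a Python module, e.g.
#   from neural_lam import constants
#   grid_shape = constants.GRID_SHAPE   # hypothetical constant name

# After (v0.2.0 style, sketch): read the equivalent values from the YAML config
with open("data_config.yaml", "r", encoding="utf-8") as config_file:
    data_config = yaml.safe_load(config_file)

grid_shape = data_config["grid_shape"]  # hypothetical key; use your config's keys
```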
Links
v0.1.0: Initial version of Neural-LAM, reproducing the workshop paper.