
Argument Parser updated to use action="store_true" instead of 0/1 for boolean arguments. #72

Merged · 8 commits merged into mllam:main on Sep 5, 2024

Conversation

@ErikLarssonDev (Contributor) commented Sep 5, 2024

Describe your changes

I have replaced the parser.add_argument calls where the convention was to use an integer (0 = False, 1 = True) with action="store_true" instead.

Example:

OLD:

    parser.add_argument(
        "--restore_opt",
        type=int,
        default=0,
        help="If optimizer state should be restored with model "
        "(default: 0 (false))",
    )

NEW:

    parser.add_argument(
        "--restore_opt",
        action="store_true",
        help="If optimizer state should be restored with model "
        "(default: false)",
    )

This saves time and characters when running the scripts from the command line, and it is easier to understand, since the parsed variables are supposed to be booleans.
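To illustrate the behavior change, here is a minimal runnable sketch (not code from the PR itself) showing how an action="store_true" flag works: it defaults to False when omitted and becomes True when the bare flag is passed, with no 0/1 value needed.

```python
# Minimal sketch of the store_true pattern used in this PR.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--restore_opt",
    action="store_true",
    help="If optimizer state should be restored with model "
    "(default: false)",
)

# Flag omitted: the value defaults to False.
args = parser.parse_args([])
print(args.restore_opt)  # False

# Bare flag passed (no "=1" needed): the value becomes True.
args = parser.parse_args(["--restore_opt"])
print(args.restore_opt)  # True
```

Compared to the old `--restore_opt=1`, the call becomes just `--restore_opt`, and omitting the flag entirely replaces `--restore_opt=0`.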

Issue Link

I don't have an issue.

Type of change

The scripts can no longer be run from the command line using the old 0/1 syntax for False/True.

  • 🐛 Bug fix (non-breaking change that fixes an issue)
  • ✨ New feature (non-breaking change that adds functionality)
  • 💥 Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • 📖 Documentation (Addition or improvements to documentation)

Checklist before requesting a review

  • My branch is up-to-date with the target branch - if not update your fork with the changes from the target branch (use pull with --rebase option if possible).
  • I have performed a self-review of my code
  • For any new/modified functions/classes I have added docstrings that clearly describe their purpose, expected inputs and returned values
  • I have placed in-line comments to clarify the intent of any hard-to-understand passages of my code
  • I have updated the README to cover introduced code changes
  • I have added tests that prove my fix is effective or that my feature works
  • I have given the PR a name that clearly describes the change, written in imperative form (context).
  • I have requested a reviewer and an assignee (assignee is responsible for merging)

Checklist for reviewers

Each PR comes with its own improvements and flaws. The reviewer should check the following:

  • the code is readable
  • the code is well tested
  • the code is documented (including return types and parameters)
  • the code is easy to maintain

Author checklist after completed review

  • I have added a line to the CHANGELOG describing this change, in a section
    reflecting type of change (add section where missing):
    • added: when you have added new functionality
    • changed: when default behaviour of the code has been changed
    • fixes: when your contribution fixes a bug

Checklist for assignee

  • PR is up to date with the base branch
  • the tests pass
  • author has added an entry to the changelog (and designated the change as added, changed or fixed)
  • Once the PR is ready to be merged, squash commits and merge the PR.

@sadamov (Collaborator) left a comment


Yes, store_true was designed to do exactly that, a better choice. Thanks @ErikLarssonDev and welcome to NeuralLAM 🤝 🚀

@sadamov (Collaborator) commented Sep 5, 2024

Oh, I just saw that the tests are failing now; you will have to update the last few lines of https://github.com/mllam/neural-lam/blob/main/tests/test_mllam_dataset.py:

def test_create_graph_reduced_meps_dataset():
    args = [
        "--graph=hierarchical",
        "--hierarchical=1",
        "--data_config=data/meps_example_reduced/data_config.yaml",
        "--levels=2",
    ]
    create_mesh(args)


def test_train_model_reduced_meps_dataset():
    args = [
        "--model=hi_lam",
        "--data_config=data/meps_example_reduced/data_config.yaml",
        "--n_workers=4",
        "--epochs=1",
        "--graph=hierarchical",
        "--hidden_dim=16",
        "--hidden_layers=1",
        "--processor_layers=1",
        "--ar_steps=1",
        "--eval=val",
        "--n_example_pred=0",
    ]
    train_model(args)

The failing tests report: `pytest: error: argument --hierarchical: ignored explicit argument '1'`
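This error occurs because a store_true flag rejects an explicit value. Here is a hedged, self-contained sketch (using a simplified stand-in parser, not the repo's actual argument setup) that reproduces the failure and shows the likely fix for the test args: pass the bare flag instead of `--hierarchical=1`.

```python
# Sketch reproducing the test failure: a store_true flag takes no value,
# so "--hierarchical=1" is rejected with "ignored explicit argument '1'".
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--graph", type=str)
parser.add_argument("--hierarchical", action="store_true")

# Old test args fail: argparse errors out and raises SystemExit.
try:
    parser.parse_args(["--graph=hierarchical", "--hierarchical=1"])
except SystemExit:
    print("old-style '--hierarchical=1' rejected, as in the failing test")

# Updated args: pass the bare flag; the attribute is then True.
args = parser.parse_args(["--graph=hierarchical", "--hierarchical"])
print(args.hierarchical)  # True
```

Applied to the test file, this would mean replacing `"--hierarchical=1"` with `"--hierarchical"` in `test_create_graph_reduced_meps_dataset`.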

@joeloskarsson self-assigned this Sep 5, 2024

@joeloskarsson (Collaborator) left a comment

Looks good! Fix the tests as Simon pointed out and also add an entry in the changelog, then I can merge this in 😄

@joeloskarsson (Collaborator) left a comment

Tests work now, merging

@joeloskarsson joeloskarsson merged commit 68399f7 into mllam:main Sep 5, 2024
9 checks passed