
fix dora tune_to_peft_adapter_weights #1884

Open

wants to merge 1 commit into main
Conversation


@huiwy huiwy commented Oct 23, 2024

Context

What is the purpose of this PR? Is it to

  • add a new feature
  • fix a bug
  • update tests and/or documentation
  • other (please add here)

Please link to any issues this PR addresses.

Changelog

What are the changes made in this PR?

Fix DoRA in tune_to_peft_adapter_weights. When converting adapter weights, the original code uses xxx.magnitude.weight as a key to map torchtune weight names to PEFT weight names. However, magnitude is an nn.Parameter, so its state-dict key carries no .weight suffix. With the original code, fine-tuning with DoRA raises the following error when saving PEFT adapters:

[rank0]: Traceback (most recent call last):
[rank0]:   File "/mnt/data/torchtune/torchtune/models/convert_weights.py", line 55, in get_mapped_key
[rank0]:     new_key = mapping_dict[abstract_key]
[rank0]: KeyError: 'layers.{}.attn.q_proj.magnitude'
[rank0]: The above exception was the direct cause of the following exception:
[rank0]: Traceback (most recent call last):
[rank0]:   File "/mnt/data/torchtune/recipes/lora_finetune_distributed.py", line 910, in <module>
[rank0]:     sys.exit(recipe_main())
[rank0]:   File "/mnt/data/torchtune/torchtune/config/_parse.py", line 99, in wrapper
[rank0]:     recipe.train()
[rank0]:   File "/mnt/data/torchtune/recipes/lora_finetune_distributed.py", line 726, in save_checkpoint
[rank0]:     self._checkpointer.save_checkpoint(
[rank0]:   File "/mnt/data/torchtune/torchtune/training/checkpointing/_checkpointer.py", line 637, in save_checkpoint
[rank0]:     ] = convert_weights.tune_to_peft_adapter_weights(
[rank0]:   File "/mnt/data/torchtune/torchtune/models/convert_weights.py", line 285, in tune_to_peft_adapter_weights
[rank0]:     new_key = get_mapped_key(key, full_mapping)
[rank0]:   File "/mnt/data/torchtune/torchtune/models/convert_weights.py", line 60, in get_mapped_key
[rank0]: Exception: Error converting the state dict. Found unexpected key: "layers.0.attn.q_proj.magnitude". Please make sure you're loading a checkpoint with the right format.
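To illustrate the root cause, here is a minimal sketch (a hypothetical stand-in module, not torchtune's actual DoRA layer) showing that a bare nn.Parameter attribute appears in a module's state dict under its own name, with no .weight suffix, which is why the magnitude.weight key never matches:

```python
import torch
from torch import nn


class DoRAProj(nn.Module):
    # Hypothetical stand-in for a DoRA-adapted projection layer.
    def __init__(self, dim: int):
        super().__init__()
        # nn.Linear is a submodule: its parameters appear as
        # "linear.weight" and "linear.bias" in the state dict.
        self.linear = nn.Linear(dim, dim)
        # magnitude is a bare nn.Parameter: it appears in the state
        # dict as "magnitude", NOT "magnitude.weight".
        self.magnitude = nn.Parameter(torch.ones(dim))


keys = list(DoRAProj(4).state_dict().keys())
print(sorted(keys))  # ['linear.bias', 'linear.weight', 'magnitude']
```

So the torchtune-to-PEFT mapping table has to key on xxx.magnitude rather than xxx.magnitude.weight for DoRA layers.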

Test plan

Please make sure to do each of the following if applicable to your PR. If you're unsure about any one of these just ask and we will happily help. We also have a contributing page for some guidance on contributing.

  • run pre-commit hooks and linters (make sure you've first installed via pre-commit install)
  • add unit tests for any new functionality
  • update docstrings for any new or updated methods or classes
  • run unit tests via pytest tests
  • run recipe tests via pytest tests -m integration_test
  • manually run any new or modified recipes with sufficient proof of correctness
  • include relevant commands and any other artifacts in this summary (pastes of loss curves, eval results, etc.)

UX

If your function changed a public API, please add a dummy example of what the user experience will look like when calling it.
Here is a docstring example
and a tutorial example

  • I did not change any public API
  • I have added an example to docs or docstrings


pytorch-bot bot commented Oct 23, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/1884

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 3 New Failures, 6 Cancelled Jobs, 2 Unrelated Failures

As of commit d33ff55 with merge base 1e5f0d5 (image):

NEW FAILURES - The following jobs have failed:

  • Build Docs / build_docs (3.11) (gh)
    sphinx.errors.ExtensionError: Handler <function process_generate_options at 0x7f38c30756c0> for event 'builder-inited' threw an exception (exception: no module named torchtune.training)
  • GPU tests / gpu_test (3.11, stable) (gh)
    E ImportError: cannot import name 'TensorCoreTiledLayout' from 'torchao.dtypes' (/home/ec2-user/actions-runner/_work/torchtune/torchtune/3/envs/test/lib/python3.11/site-packages/torchao/dtypes/__init__.py)
  • Lint / lint (3.10) (gh)
    ##[error]Process completed with exit code 1.

CANCELLED JOBS - The following jobs were cancelled. Please retry:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

BROKEN TRUNK - The following job failed but was present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

  • Unit Test / unit_tests (3.11) (gh) (trunk failure)
    E ImportError: cannot import name 'TensorCoreTiledLayout' from 'torchao.dtypes' (/usr/share/miniconda3/envs/test/lib/python3.11/site-packages/torchao/dtypes/__init__.py)

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot

Hi @huiwy!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (eg your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

@facebook-github-bot

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Oct 23, 2024
@joecummings
Contributor

Hey @huiwy - thanks for the PR! I'll be taking a look later today.

Contributor

@pbontrager pbontrager left a comment


Thank you for updating this.

@joecummings
Contributor

@huiwy Can you make sure to run linting and merge from main?

Labels
CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed.
4 participants