
(4/n) Data Refactor - Finetuning Scripts #950

Merged: 76 commits merged into wip from refactor/data on Feb 29, 2024

Conversation

@awaelchli (Contributor) commented Feb 23, 2024

Fixes #954
Fixes #951

@awaelchli mentioned this pull request Feb 26, 2024
Review comment on lit_gpt/datasets/alpaca.py (outdated, resolved)
@rasbt (Collaborator) commented Feb 26, 2024

When running with 1 epoch and an epoch size of 50000 on Alpaca (default global batch size of 64 and micro-batch size of 1), this is how the learning rate currently looks over the course of training:

[Plot: learning rate schedule over the full training run]

This looks good and is exactly what I would expect.

When I reduced the training epoch size to 1000 and adjusted the warmup steps from 100 to 10, the scheduler seems to be doing something weird:

[Plot: learning rate schedule with epoch size 1000 and 10 warmup steps]

I think we need to adjust the code so that the scheduler steps here

    steps_per_epoch = len(train_dataloader) // train.gradient_accumulation_iters(devices)
    lr_max_steps = train.epochs * steps_per_epoch

are perhaps computed from train.epoch_size * train.epochs so that this doesn't happen? We could then fall back to len(train_dataloader) // train.gradient_accumulation_iters(devices) when train.epoch_size is None. What do you think?
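
A minimal sketch of how that fallback could look, reusing the names from the snippet above (train, train_dataloader, devices); this is only meant to illustrate the suggestion, not the merged implementation:

    # Hypothetical sketch of the suggested logic, not the actual merged code.
    if train.epoch_size is not None:
        # Base the schedule length on the configured epoch size so that
        # shrinking the epoch size also shrinks the LR schedule with it.
        lr_max_steps = train.epochs * train.epoch_size
    else:
        # Fall back to the dataloader length when no explicit epoch size is set.
        steps_per_epoch = len(train_dataloader) // train.gradient_accumulation_iters(devices)
        lr_max_steps = train.epochs * steps_per_epoch

Whether the epoch_size branch should also divide by train.gradient_accumulation_iters(devices) is left open here.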

@awaelchli changed the title from "[WIP] Data Refactor" to "(3/n) Data Refactor - Finetuning Scripts" on Feb 29, 2024
@awaelchli changed the title from "(3/n) Data Refactor - Finetuning Scripts" to "(4/n) Data Refactor - Finetuning Scripts" on Feb 29, 2024
@awaelchli marked this pull request as ready for review on February 29, 2024 at 12:58
@awaelchli changed the base branch from main to wip on February 29, 2024 at 16:16
@carmocca (Contributor) left a comment:

My comments would apply to all the other files too

Review comments (resolved) on: finetune/adapter.py, pretrain/openwebtext.py, pretrain/redpajama.py, tests/conftest.py
@rasbt mentioned this pull request Feb 29, 2024
@awaelchli merged commit f814cad into wip on Feb 29, 2024 (8 checks passed)
@awaelchli deleted the refactor/data branch on February 29, 2024 at 18:38
@carmocca added this to the Configurability milestone on Mar 1, 2024
awaelchli added a commit that referenced this pull request Mar 15, 2024
Co-authored-by: rasbt <[email protected]>
Co-authored-by: Carlos Mocholí <[email protected]>
awaelchli added a commit that referenced this pull request Mar 15, 2024
Co-authored-by: rasbt <[email protected]>
Co-authored-by: Carlos Mocholí <[email protected]>
rasbt added a commit that referenced this pull request Mar 18, 2024
Co-authored-by: rasbt <[email protected]>
Co-authored-by: Carlos Mocholí <[email protected]>

Successfully merging this pull request may close these issues: Data Refactor Proposal, Trainer args consistency.