
I-JEPA #1273

Merged 40 commits into lightly-ai:master on Jul 14, 2023

Conversation

@Natyren Natyren commented Jun 7, 2023

closes #issue_number

Description

  • This PR adds an I-JEPA implementation to lightly.

Documentation

  • My implementation still needs docs.

Implications / comments / further issues

  • Added I-JEPA.


@Natyren Natyren left a comment

Draft version of the PR.

@Natyren Natyren changed the title from "del line" to "I-JEPA" on Jun 7, 2023
@IgorSusmelj (Contributor) commented:

Hey @Natyren, can we help you with the PR? Adding I-JEPA to the library would be awesome :)

Natyren commented Jul 2, 2023

Hey, @IgorSusmelj! Sorry for the long wait, I've been really busy. Just wanted to give you a heads up that I'll be adding a new model to the framework next week.

Natyren commented Jul 9, 2023

Added collator and model (encoder and predictor, according to original paper and code).
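
A minimal sketch (not the lightly API) of how the mask indices produced by the collator select patch tokens for the encoder and predictor; the helper name gather_tokens and the shapes are assumptions that mirror the apply_masks logic in the official I-JEPA code.

from typing import List

import torch


def gather_tokens(x: torch.Tensor, masks: List[torch.Tensor]) -> torch.Tensor:
    # Hypothetical helper: x is (B, N, D) patch embeddings, masks is a list of
    # (B, K) index tensors. Returns the selected tokens for every mask,
    # concatenated along the batch dimension.
    out = []
    for m in masks:
        idx = m.unsqueeze(-1).expand(-1, -1, x.size(-1))  # (B, K, D)
        out.append(torch.gather(x, dim=1, index=idx))
    return torch.cat(out, dim=0)


# Example: 2 images, 16 patches of dim 8, one context mask keeping 4 patches.
tokens = torch.randn(2, 16, 8)
context_mask = [torch.randint(0, 16, (2, 4))]
print(gather_tokens(tokens, context_mask).shape)  # torch.Size([2, 4, 8])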

Natyren commented Jul 13, 2023

@guarin Please review the completed implementation and the provided example, which uses the plain PyTorch framework only. What should be done next?

@Natyren Natyren marked this pull request as ready for review July 13, 2023 12:53

Natyren commented Jul 13, 2023

Furthermore, I have checked the current setup in the examples by training the model on an A100 for 10 epochs. During this training, the loss decreased consistently, indicating that the model is learning effectively.
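
A rough sketch of what such a training step optimizes, with assumed names and hyperparameters rather than the code in this PR: the predictor output is regressed onto the EMA target encoder's representations (the paper uses an average L2 distance), and the target encoder is updated as an exponential moving average of the context encoder.

import torch
import torch.nn.functional as F


def ijepa_loss(predicted: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    # Regression in representation space between predicted and target patch tokens.
    return F.mse_loss(predicted, targets)


@torch.no_grad()
def update_target_encoder(context_encoder, target_encoder, momentum: float = 0.996) -> None:
    # EMA update: target <- m * target + (1 - m) * context.
    # 0.996 is an assumed starting value; the paper schedules the momentum towards 1.0.
    for p_c, p_t in zip(context_encoder.parameters(), target_encoder.parameters()):
        p_t.data.mul_(momentum).add_((1.0 - momentum) * p_c.data)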

guarin commented Jul 13, 2023

Thanks a lot for the PR @Natyren! This looks great! I'll have a closer look ASAP.

@guarin guarin left a comment

This looks really good!

I would suggest that we merge it as is and mark it as experimental until we have a fully tested version. There are also some points we'll have to address in follow-up issues/PRs (let me know if you are interested in helping out @Natyren):

  • Torchvision ViT has no stochastic depth (here) while the I-JEPA ViT has it (here). I had the same issue when working on MAE. The annoying thing is that drop path is not even used during pretraining but only during finetuning. Without it we will sadly not be able to properly evaluate the model. For MAE I added drop path only during finetuning, but it is pretty hacky, see here. I guess our best bet is to switch to the timm vision transformer, which already has support for it, see Add timm dependency? #1301
  • I believe torchvision ViT also doesn't do this part of the initialization but I have to check in detail.
  • The collator looks like it should be in the transform instead. The only thing I can find that depends on the whole batch is the number of masks here. But maybe I am missing something.
  • The positional embedding functions should be moved from lightly/models/modules/i_jepa to lightly/models/utils (see the sketch after this list). Note that I already partially implemented them in the MAE PR: Guarin lig 3056 add mae imagenet benchmark #1263
  • Some docstrings are missing or in the wrong format. For example the apply_masks method or IJEPAMaskCollator.
  • We should also add references to the original paper/code in major classes/functions. For example in IJEPAMaskCollator.
  • Use ijepa instead of i_jepa for filenames to make it more consistent with the class names.
  • Use camel case in all class names. For example, change IJEPA_predictor to IJEPAPredictor etc.
  • Add tests
  • Finish examples and docs
  • Add imagenet benchmark to lightly/benchmarks
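
Regarding the positional-embedding point, below is a sketch of the kind of 2D sine-cosine helpers that could live in lightly/models/utils; it follows the public MAE/I-JEPA reference implementations and is not necessarily identical to the functions in this PR.

import numpy as np


def get_1d_sincos_pos_embed_from_grid(embed_dim: int, pos: np.ndarray) -> np.ndarray:
    # Sine-cosine embedding of 1D positions; embed_dim must be even.
    omega = np.arange(embed_dim // 2, dtype=float) / (embed_dim / 2.0)
    omega = 1.0 / 10000**omega                           # (D/2,)
    out = np.einsum("m,d->md", pos.reshape(-1), omega)   # (M, D/2)
    return np.concatenate([np.sin(out), np.cos(out)], axis=1)  # (M, D)


def get_2d_sincos_pos_embed(embed_dim: int, grid_size: int, cls_token: bool = False) -> np.ndarray:
    # Fixed 2D sine-cosine positional embedding for a grid_size x grid_size patch grid.
    grid_h = np.arange(grid_size, dtype=float)
    grid_w = np.arange(grid_size, dtype=float)
    grid = np.stack(np.meshgrid(grid_w, grid_h), axis=0)  # (2, H, W)
    emb_h = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[0])
    emb_w = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[1])
    pos_embed = np.concatenate([emb_h, emb_w], axis=1)    # (H*W, D)
    if cls_token:
        pos_embed = np.concatenate([np.zeros([1, embed_dim]), pos_embed], axis=0)
    return pos_embed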

codecov bot commented Jul 14, 2023

Codecov Report

Patch coverage: 4.31% and project coverage change: -3.85 ⚠️

Comparison is base (b7acb35) 87.37% compared to head (3dafc05) 83.52%.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1273      +/-   ##
==========================================
- Coverage   87.37%   83.52%   -3.85%     
==========================================
  Files         126      128       +2     
  Lines        5275     5518     +243     
==========================================
  Hits         4609     4609              
- Misses        666      909     +243     
Impacted Files                          Coverage Δ
lightly/models/modules/ijepa.py         0.00% <0.00%> (ø)
lightly/transforms/ijepa_transform.py   0.00% <0.00%> (ø)
lightly/data/collate.py                 63.56% <8.33%> (-30.66%) ⬇️
lightly/models/utils.py                 77.92% <25.00%> (-4.48%) ⬇️

... and 14 files with indirect coverage changes


Natyren commented Jul 14, 2023

@guarin Hi! This is great; let's move forward and keep working on it! What are our next steps?

guarin commented Jul 14, 2023

Let's do these and then merge:

  • We should also add references to the original paper/code in major classes/functions, for example in IJEPAMaskCollator (see the docstring sketch after this list).
  • Use ijepa instead of i_jepa for filenames to make it more consistent with the class names.
  • Use camel case in all class names. For example, change IJEPA_predictor to IJEPAPredictor etc.
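
As a purely illustrative example of the reference and naming items above (the docstring wording is hypothetical; the paper and code links are the actual I-JEPA references):

import torch.nn as nn


class IJEPAPredictor(nn.Module):  # camel case, previously IJEPA_predictor
    """Predictor for I-JEPA.

    Predicts the representations of masked target blocks from the encoded
    context block.

    References:
        - Assran et al., "Self-Supervised Learning from Images with a
          Joint-Embedding Predictive Architecture", 2023.
          https://arxiv.org/abs/2301.08243
        - Official code: https://github.com/facebookresearch/ijepa
    """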

Would you have time to work on it? Otherwise I can do it.

Natyren commented Jul 14, 2023

@guarin I will do it today, thank you for the quick reply.

Natyren commented Jul 14, 2023

@guarin Done. Please take a look

guarin commented Jul 14, 2023

Awesome! Thanks for the super fast update. I added some small changes and the note about experimental support. Tested the code and it is running 🥳

Will merge once the tests pass.

@guarin guarin merged commit a55cf44 into lightly-ai:master Jul 14, 2023
4 of 6 checks passed