One last thing
SalmanMohammadi committed Oct 29, 2024
1 parent b208c94 commit 0448ed7
Showing 1 changed file with 3 additions and 5 deletions.
8 changes: 3 additions & 5 deletions docs/source/tutorials/memory_optimizations.rst
@@ -301,8 +301,8 @@ As above, these parameters are also specified under the ``model`` flag or config
tune run lora_finetune_single_device --config llama3/8B_lora_single_device \
model.apply_lora_to_mlp=True \
model.lora_attn_modules=["q_proj","k_proj","v_proj"] \
-model.lora_rank=128 \
-model.lora_alpha=256
+model.lora_rank=32 \
+model.lora_alpha=64
.. code-block:: yaml
@@ -311,7 +311,7 @@ As above, these parameters are also specified under the ``model`` flag or config
apply_lora_to_mlp: True
lora_attn_modules: ["q_proj", "k_proj", "v_proj"]
-lora_rank: 64
+lora_rank: 32
lora_alpha: 64
.. note::

@@ -369,8 +369,6 @@ or, by modifying a config:
-lora_rank: 32
-lora_alpha: 64
.. _glossary_dora:

Weight-Decomposed Low-Rank Adaptation (DoRA)
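Taken together, the hunks settle the tutorial's example on a rank-32, alpha-64 LoRA setup. For reference, a minimal sketch of the resulting ``model`` entry, assuming the ``llama3/8B_lora_single_device`` config and its ``torchtune.models.llama3.lora_llama3_8b`` builder (the builder line itself is not shown in this diff):

.. code-block:: yaml

   model:
     _component_: torchtune.models.llama3.lora_llama3_8b  # assumed builder; not part of this diff
     apply_lora_to_mlp: True
     lora_attn_modules: ["q_proj", "k_proj", "v_proj"]
     lora_rank: 32   # rank of the low-rank update matrices
     lora_alpha: 64  # scaling factor for the LoRA update; these examples use 2x the rank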
