forked from vllm-project/vllm
Contiguous PA #424
Merged: michalkuligowski merged 9 commits into habana_main from dev/mfylcek/contiguous_pa_main_24_10 on Oct 25, 2024
+29 −27
Conversation
Hi @mfylcek, we've been following this branch and just ran a test on Gaudi3 with a static batch_size of 128. Test script:
@mfylcek @michalkuligowski, here is the performance I measured with PR 426:
From observation, after warmup,
michalkuligowski approved these changes on Oct 25, 2024
madamczykhabana added a commit that referenced this pull request on Oct 25, 2024:
This reverts commit 5b7f685.
afierka-intel pushed a commit that referenced this pull request on Oct 26, 2024:
Contiguous cache fetching to avoid using costly gather operation. Requires changes in vllm-hpu-extension (HabanaAI/vllm-hpu-extension#17) to work. Introduces redundant calculations in decoding phase. In all tested cases improves performance over the entire run (5-12%). For even better performance cache defragmentation is required. Only compatible with v2-block-manager.
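For intuition, here is a minimal sketch of the difference between gather-based and contiguous block fetching. The function names, tensor layout, and masking remark below are illustrative assumptions, not the actual code in vllm-hpu-extension.

```python
import torch


def fetch_blocks_gather(kv_cache: torch.Tensor, block_table: torch.Tensor) -> torch.Tensor:
    """Baseline paged-attention fetch: gather exactly the blocks a sequence owns.

    kv_cache:    [num_blocks, block_size, num_kv_heads, head_dim]
    block_table: 1-D int64 tensor of block indices for one sequence
    """
    # index_select lowers to a gather, which is the costly op this PR avoids on HPU
    return kv_cache.index_select(0, block_table)


def fetch_blocks_contiguous(kv_cache: torch.Tensor, block_table: torch.Tensor) -> torch.Tensor:
    """Contiguous fetch: slice one range that covers all of the sequence's blocks.

    Blocks inside [lo, hi) that the sequence does not own are fetched and
    processed anyway (their attention scores would presumably be masked out),
    trading redundant decode-phase compute for the removal of the gather.
    """
    lo = int(block_table.min())
    hi = int(block_table.max()) + 1
    return kv_cache[lo:hi]  # plain contiguous slice, no gather
```

The contiguous slice touches every block between the lowest and highest used index, including blocks belonging to other sequences, which is where the redundant decode-phase work comes from; a less fragmented cache keeps that range tight, which is why cache defragmentation would improve performance further. The v2-block-manager requirement presumably corresponds to vLLM's --use-v2-block-manager option at the time.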