
[Feature][habana-main]: HPUAttentionImpl support for 'Encoder_Decoder' #370

Open

xuechendi opened this issue Oct 7, 2024 · 0 comments
xuechendi commented Oct 7, 2024

🚀 The feature, motivation and pitch

Llama 3.2 Vision (Mllama) models require the "Encoder_Decoder_Model_Runner" model runner,
which includes:

  1. prepare "encoder_seq_lens" and "encoder_seq_lens_tensor" when preparing input data (a rough sketch follows this list)
    necessary fix for "HPUModelRunner - prepare_input_tensors": xuechendi@1f5a702
  2. enable "Encoder self-attention" and "encoder/decoder cross-attention" in HPUAttentionImpl (see the mask sketch after the error message below)
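
For item 1, here is a minimal, self-contained sketch (not the actual vLLM code) of how the two encoder-length fields might be collected next to the existing decoder lengths during input preparation. The dataclass and function names are made up for illustration; the real change is in the linked commit.

```python
# Illustrative sketch only -- names below are NOT vLLM internals.
from dataclasses import dataclass
from typing import List, Optional

import torch


@dataclass
class EncoderDecoderMetadataSketch:
    # Decoder-side sequence lengths (what the HPU metadata already carries).
    seq_lens_tensor: torch.Tensor
    # Encoder-side lengths needed by cross-attention (the missing piece).
    encoder_seq_lens: Optional[List[int]] = None
    encoder_seq_lens_tensor: Optional[torch.Tensor] = None


def prepare_encoder_seq_lens(
    encoder_token_counts: List[int],
    decoder_seq_lens: List[int],
    device: str = "cpu",  # vLLM would place these on "hpu"
) -> EncoderDecoderMetadataSketch:
    """Collect per-sequence encoder lengths alongside the decoder lengths."""
    return EncoderDecoderMetadataSketch(
        seq_lens_tensor=torch.tensor(decoder_seq_lens, dtype=torch.int32,
                                     device=device),
        encoder_seq_lens=list(encoder_token_counts),
        encoder_seq_lens_tensor=torch.tensor(encoder_token_counts,
                                             dtype=torch.int32,
                                             device=device),
    )


# Example with arbitrary illustrative lengths: two requests whose image
# encoders produced 32 and 48 tokens, with decoder prompts of 17 and 23.
meta = prepare_encoder_seq_lens(encoder_token_counts=[32, 48],
                                decoder_seq_lens=[17, 23])
```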

test cmd:

```bash
python offline_inference_vision_language.py --model_type mllama
```

Error msg:

```
File "/workspace/vllm/vllm/attention/backends/hpu_attn.py", line 159, in forward
[rank0]:     raise NotImplementedError("Encoder self-attention and "
[rank0]: NotImplementedError: Encoder self-attention and encoder/decoder cross-attention are not implemented for HPUAttentionImpl
```
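
The error is the guard in `forward` for the encoder and encoder/decoder attention types. As a rough illustration of the kind of logic item 2 asks for, the snippet below builds a cross-attention padding bias from "encoder_seq_lens_tensor"; the helper name and shapes are assumptions for this sketch, not HPUAttentionImpl internals.

```python
# Hedged sketch: derive an additive cross-attention bias from the
# per-sequence encoder lengths, so padded encoder positions are masked out.
import torch


def make_cross_attn_bias(encoder_seq_lens_tensor: torch.Tensor,
                         max_encoder_len: int) -> torch.Tensor:
    """Additive bias of shape [batch, 1, 1, max_encoder_len]:
    0.0 at valid encoder positions, -inf at padding."""
    positions = torch.arange(max_encoder_len,
                             device=encoder_seq_lens_tensor.device)
    # valid[b, t] is True when token t is within sequence b's encoder length.
    valid = positions.unsqueeze(0) < encoder_seq_lens_tensor.unsqueeze(1)
    bias = torch.zeros(valid.shape, dtype=torch.float32, device=valid.device)
    bias = bias.masked_fill(~valid, float("-inf"))
    # Broadcast over attention heads and decoder query positions.
    return bias.unsqueeze(1).unsqueeze(1)


# Example: encoder lengths 5 and 3, padded to 6 -> trailing positions masked.
bias = make_cross_attn_bias(torch.tensor([5, 3]), max_encoder_len=6)
print(bias.shape)  # torch.Size([2, 1, 1, 6])
```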

Alternatives

No response

Additional context

No response

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.