🚀 The feature, motivation and pitch

Llama 3.2 vision (Mllama) models require the model runner to be "Encoder_Decoder_Model_Runner", which includes:

- preparing "encoder_seq_lens" and "encoder_seq_lens_tensor" when preparing input data (see the metadata sketch below)
- a necessary fix for "HPUModelRunner - prepare_input_tensors": xuechendi@1f5a702
- enabling "Encoder self-attention" and "encoder/decoder cross-attention" in HPUAttentionImpl (see the attention-dispatch sketch below)
File "/workspace/vllm/vllm/attention/backends/hpu_attn.py", line 159, in forward
[rank0]: raise NotImplementedError("Encoder self-attention and "
[rank0]: NotImplementedError: Encoder self-attention and encoder/decoder cross-attention are not implemented for HPUAttentionImpl
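To make the first item concrete, here is a minimal sketch of what the encoder-length bookkeeping might look like. It is modeled on vLLM's GPU EncoderDecoderModelRunner, not the actual HPU code; the helper name `_prepare_encoder_seq_lens` is hypothetical.

```python
import torch

# Hypothetical helper -- a sketch of the encoder-length bookkeeping that
# HPUModelRunner would need; not the actual vLLM HPU implementation.
def _prepare_encoder_seq_lens(seq_group_metadata_list, device="hpu"):
    """Collect per-request encoder lengths so cross-attention can size
    its KV cache and attention masks."""
    encoder_seq_lens = []
    for seq_group_metadata in seq_group_metadata_list:
        encoder_seq_data = seq_group_metadata.encoder_seq_data
        # An encoder length of 0 means the request carries no encoder input.
        encoder_seq_lens.append(
            encoder_seq_data.get_len() if encoder_seq_data is not None else 0)
    encoder_seq_lens_tensor = torch.tensor(encoder_seq_lens,
                                           dtype=torch.int32,
                                           device=device)
    return encoder_seq_lens, encoder_seq_lens_tensor
```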
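For the attention side, the error above comes from a guard that rejects every non-decoder attention type. The sketch below shows one way to characterize the three paths HPUAttentionImpl.forward would need to dispatch on; it is a sketch under assumptions, not vLLM's actual API. The helper `_attn_config` and its (is_causal, reads_kv_cache) framing are hypothetical; only `AttentionType` and its `DECODER`/`ENCODER`/`ENCODER_DECODER` members come from vLLM.

```python
from vllm.attention.backends.abstract import AttentionType

# Hypothetical helper -- sketches the two knobs that distinguish the three
# attention paths HPUAttentionImpl.forward would need instead of raising
# NotImplementedError for non-decoder types.
def _attn_config(attn_type: str):
    """Return (is_causal, reads_kv_cache) for a given attention type."""
    if attn_type == AttentionType.DECODER:
        # Decoder self-attention: causal mask, KV cache appended each step.
        return True, True
    if attn_type == AttentionType.ENCODER:
        # Encoder self-attention: bidirectional over the encoder (image)
        # tokens, computed once per request with no KV cache.
        return False, False
    if attn_type == AttentionType.ENCODER_DECODER:
        # Cross-attention: decoder queries attend to cached encoder K/V,
        # sized by attn_metadata.encoder_seq_lens_tensor.
        return False, True
    raise NotImplementedError(f"Unsupported attention type: {attn_type}")
```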
Alternatives
No response
Additional context
No response