Remove redundant set_active_loras call during warmup (#413)
On CUDA, warmup runs go through `capture` and actual runs through `execute_model`, so each phase calls `set_active_loras` exactly once. On HPU, both warmup and actual runs go through `execute_model`, which already calls `set_active_loras` internally, so the extra call made during warmup is redundant. It is also incorrect: it causes the out-of-bound slicing in the decode phase reported in #405. This PR removes the special-cased `set_active_loras` call from warmup runs, resolving #405.
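A minimal sketch of the call structure being fixed, not the actual vLLM HPU worker code: the class name `HPUModelRunnerSketch`, the `warmup_scenario` method, and the simplified LoRA bookkeeping are illustrative assumptions; only the `set_active_loras` and `execute_model` names come from the commit itself.

```python
class HPUModelRunnerSketch:
    """Simplified stand-in for the HPU model runner's LoRA handling."""

    def __init__(self):
        self.active_loras = None

    def set_active_loras(self, lora_requests, lora_mapping):
        # In the real runner this slices LoRA index tensors for the
        # current batch; calling it with warmup-time shapes and then
        # running a decode batch is what led to the out-of-bound
        # slicing reported in #405.
        self.active_loras = (lora_requests, lora_mapping)

    def execute_model(self, batch, lora_requests=None, lora_mapping=None):
        # execute_model already configures the active LoRAs itself,
        # so callers must not set them beforehand.
        self.set_active_loras(lora_requests, lora_mapping)
        return f"ran batch of size {len(batch)}"

    def warmup_scenario(self, batch):
        # Before this change, warmup made a redundant call here:
        #     self.set_active_loras(warmup_requests, warmup_mapping)
        # After this change, warmup relies on execute_model's
        # internal call, matching the actual-run path.
        return self.execute_model(batch)


if __name__ == "__main__":
    runner = HPUModelRunnerSketch()
    print(runner.warmup_scenario(batch=[0] * 8))
```

Because warmup and actual runs now share the single `set_active_loras` call inside `execute_model`, the HPU path mirrors the CUDA invariant of one call per phase.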