Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "lib/python3.12/site-packages/whisper_s2t/backends/__init__.py", line 187, in transcribe_with_vad
res = self.generate_segment_batched(mels.to(self.device), prompts, seq_len, seg_metadata)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "lib/python3.12/site-packages/whisper_s2t/backends/huggingface/model.py", line 82, in generate_segment_batched
predicted_ids = self.model.generate(features[idx_list],
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "lib/python3.12/site-packages/transformers/models/whisper/generation_whisper.py", line 652, in generate
self._set_max_new_tokens_and_length(
File "lib/python3.12/site-packages/transformers/models/whisper/generation_whisper.py", line 1689, in _set_max_new_tokens_and_length
raise ValueError(
ValueError: The length of `decoder_input_ids` equal `prompt_ids` plus special start tokens is 4, and the `max_new_tokens` is 448. Thus, the combined length of `decoder_input_ids` and `max_new_tokens` is: 452. This exceeds the `max_target_positions` of the Whisper model: 448. You should either reduce the length of your prompt, or reduce the value of `max_new_tokens`, so that their combined length is less than 448.
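The error boils down to a simple length budget: Whisper's decoder has a fixed context of 448 positions, so the prompt/start tokens already in `decoder_input_ids` plus `max_new_tokens` must fit inside it. A minimal sketch of that check and the obvious workaround (clamping `max_new_tokens` before calling `generate`) — the helper name and constant here are illustrative, not part of either library's API:

```python
# Whisper's decoder context length (max_target_positions for large models).
MAX_TARGET_POSITIONS = 448

def clamp_max_new_tokens(num_decoder_input_ids: int,
                         requested_max_new_tokens: int) -> int:
    """Return the largest max_new_tokens that still fits in the decoder context.

    transformers raises ValueError when
    num_decoder_input_ids + max_new_tokens > MAX_TARGET_POSITIONS,
    so we shrink the request to the remaining budget.
    """
    budget = MAX_TARGET_POSITIONS - num_decoder_input_ids
    return min(requested_max_new_tokens, budget)

# As in the traceback: 4 prompt/start tokens plus the default request of 448
# new tokens would give 452 > 448, so the request must be clamped to 444.
print(clamp_max_new_tokens(4, 448))  # 444
```

In practice this means passing a smaller `max_new_tokens` (e.g. 444 when 4 start tokens are prepended) through whatever generation-kwargs path WhisperS2T exposes, rather than letting the default of 448 collide with the prompt tokens.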
I tried every Whisper large model (v2, v3, turbo) and none of them work; I get the same error regardless of model.