Update feature_extractor.py #1038
Conversation
@BBC-Esq
Thanks for your work
```python
if padding:
    waveform = torch.nn.functional.pad(waveform, (0, self.n_samples))

window = torch.hann_window(self.n_fft).to(waveform.device)
```
Why was the `hann_window` deleted?
I'll take a look...
This is actually an optimization. In the old version, a new Hann window was created and moved to the device every time the extractor was called. The new version creates it once during initialization and caches it as an instance variable (`self.window`).
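A minimal sketch of the pattern being described, assuming a `FeatureExtractor`-like class (class and attribute names here are illustrative, not the exact PR code):

```python
import torch


class FeatureExtractor:
    def __init__(self, n_fft: int = 400, device: str = "cpu"):
        self.n_fft = n_fft
        self.device = device
        # Create the Hann window once at init and cache it, instead of
        # rebuilding it and transferring it to the device on every call.
        self.window = torch.hann_window(self.n_fft).to(self.device)

    def __call__(self, waveform: torch.Tensor) -> torch.Tensor:
        # Reuse the cached window; no per-call allocation or transfer.
        return torch.stft(
            waveform, self.n_fft, window=self.window, return_complex=True
        )
```

The per-call version pays both a tensor allocation and (on GPU) a host-to-device copy on every invocation; caching moves that cost to construction time.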
faster_whisper/feature_extractor.py
```python
    else waveform
)
# Move waveform to the target device if necessary
if self.device == "cuda" and not waveform.is_cuda:
```
Improved readability, thank you.
Hello, the way you implemented caching is not going to work, because the only parameter that might change when creating the mel filters is

In the future, it's preferable to use a caching decorator such as
Added mel filter bank caching to the FeatureExtractor class to optimize memory usage and reduce computational overhead when processing multiple audio files with identical parameters; this is particularly beneficial for batch processing scenarios.