Replies: 1 comment
-
Maybe it triggered
-
I use post_api.py for inference. For exactly the same reference audio and text to be generated, when I don't pass the streaming argument the inference takes about 2 seconds, but if I pass the streaming argument the inference time increases to about 8 seconds. What could be the reason?
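For reference, here is a minimal sketch of how the two modes could be timed end to end. The endpoint URL, payload keys, and the `streaming` flag are illustrative assumptions, not the project's documented API; adjust them to match whatever post_api.py actually sends. One common pitfall is that a streaming response must be drained chunk by chunk, otherwise the timer stops before the server has finished generating audio.

```python
import time

import requests

# Hypothetical endpoint and payload shape -- adjust to match what
# post_api.py actually sends; these names are assumptions.
URL = "http://localhost:8000/v1/tts"
PAYLOAD = {
    "text": "Hello, this is a latency test.",
    "reference_audio": "ref.wav",
}


def time_request(streaming: bool) -> float:
    """Send one synthesis request and return wall-clock seconds.

    In the streaming case the chunks are consumed explicitly so the
    measurement covers the full generation, not just the first byte.
    """
    start = time.perf_counter()
    resp = requests.post(
        URL, json={**PAYLOAD, "streaming": streaming}, stream=streaming
    )
    if streaming:
        for _ in resp.iter_content(chunk_size=4096):
            pass  # drain the stream to completion
    else:
        _ = resp.content  # read the whole body
    return time.perf_counter() - start


print(f"non-streaming: {time_request(False):.2f}s")
print(f"streaming:     {time_request(True):.2f}s")
```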