Is there a way to increase the context length of pretrained models? #858
Unanswered
PARSA-MHMDI asked this question in Q&A
Replies: 1 comment
- Same issue here: #857
-
Hi,
I want to use pre-trained models to write image captions, but the context length is limited to 77 tokens for the CoCa models. Is there a way to increase this limit, or are there other models that can be used for this purpose? I want the output text to be longer than 77 tokens.
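For context: the 77-token limit in CLIP/CoCa comes from a learned positional-embedding table with a fixed number of rows, so the model cannot attend past position 77 without modification. One common workaround is to interpolate that table to a longer length and then fine-tune. This is a minimal NumPy sketch of the idea (the function name and the 512-dim table are illustrative, not part of any library API):

```python
import numpy as np

def interpolate_pos_embed(pos_embed: np.ndarray, new_len: int) -> np.ndarray:
    """Linearly interpolate a learned positional-embedding table
    of shape (old_len, dim) to shape (new_len, dim)."""
    old_len, dim = pos_embed.shape
    # Map both old and new positions onto [0, 1] and resample each
    # embedding dimension independently.
    old_x = np.linspace(0.0, 1.0, old_len)
    new_x = np.linspace(0.0, 1.0, new_len)
    return np.stack(
        [np.interp(new_x, old_x, pos_embed[:, d]) for d in range(dim)],
        axis=1,
    )

# Example: stretch a 77-position table (as in CLIP/CoCa text towers) to 154.
pos = np.random.randn(77, 512)
longer = interpolate_pos_embed(pos, 154)
print(longer.shape)  # (154, 512)
```

After replacing the checkpoint's positional table with the interpolated one, some fine-tuning on long captions is usually needed, since the model was never trained on positions beyond 77.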