Exclude bad words in text generation #3434
-
Is there a way to exclude bad tokens when generating? This is different from the stop_words mentioned in the vLLM docs. Looking for something similar to this HF doc.
Answered by hverma-forrester on Mar 13, 2024
Replies: 1 comment
-
Just setting the logits of the bad-word token IDs to negative infinity in a logits processor worked for me:

def bad_word_processor(token_ids, logits):
    # Mask each banned token ID so it can never be sampled.
    logits[121] = float("-inf")
    logits[345] = float("-inf")
    logits[420] = float("-inf")
    return logits

sampling_params = SamplingParams(temperature=0.2, top_p=0.99, max_tokens=512,
                                 frequency_penalty=1.1,
                                 logits_processors=[bad_word_processor])
outputs = llm.generate(prompts, sampling_params)
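The hard-coded IDs above (121, 345, 420) are placeholders from the answer; in practice the banned IDs would normally be looked up with the model's tokenizer. Below is a minimal sketch of that (not from the original answer), assuming a Hugging Face tokenizer and single-token bad words; the model name and word list are hypothetical.

from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

# Hypothetical model and bad-word list, for illustration only.
MODEL = "facebook/opt-125m"
BAD_WORDS = ["foo", "bar"]

tokenizer = AutoTokenizer.from_pretrained(MODEL)

# Collect the token IDs of each banned word. Encoding both "word" and
# " word" covers tokenizers that assign different IDs to a word at the
# start of text versus after a space.
bad_token_ids = set()
for word in BAD_WORDS:
    for variant in (word, " " + word):
        bad_token_ids.update(tokenizer.encode(variant, add_special_tokens=False))

def bad_word_processor(token_ids, logits):
    # vLLM calls this with the token IDs generated so far and the logits
    # for the next token; assigning -inf means the ID is never sampled.
    for tid in bad_token_ids:
        logits[tid] = float("-inf")
    return logits

llm = LLM(model=MODEL)
sampling_params = SamplingParams(temperature=0.2, top_p=0.99, max_tokens=512,
                                 logits_processors=[bad_word_processor])
outputs = llm.generate(["An example prompt"], sampling_params)

Note that a bad word which tokenizes into several sub-tokens has all of its sub-token IDs banned here, which can also suppress unrelated words that share those sub-tokens; handling that cleanly would need a processor that checks token_ids for a partial match before masking.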
0 replies
Answer selected by richardliaw