
Remove unnecessary Tokenizer library calls #155

Open
wants to merge 3 commits into base: main

Conversation

@dyastremsky (Contributor) commented Oct 25, 2024

This pull request accomplishes two things by removing unnecessary Tokenizer calls:

  • Removes the default value for the tokenizer in InputsConfig, so that GenAI-Perf can be used in an air-gapped environment. Today it cannot, because of the default tokenizer value in InputsConfig: GenAI-Perf tries to fetch the default tokenizer from HuggingFace even when a local path is provided for the tokenizer.
  • Reduces the time for a --help call or parser error return by 1.5-4x by delaying the Tokenizer import until it is needed (see the sketch below).
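
For context, a minimal sketch of the deferred-import pattern the second bullet refers to. The `load_tokenizer` helper name is an illustrative assumption, not the exact GenAI-Perf code; the constructor signature matches the Tokenizer diff shown later in this PR.

```python
# Sketch: no module-level "from genai_perf.tokenizer import Tokenizer", so the
# CLI module stays cheap to import for --help and early parser errors.


def load_tokenizer(name: str, trust_remote_code: bool, revision: str):
    # Import only when a tokenizer is actually needed; this is the point where
    # the (slow) transformers machinery is finally pulled in.
    from genai_perf.tokenizer import Tokenizer

    return Tokenizer(name, trust_remote_code, revision)
```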

Loading times: genai-perf profile --help

First call: [before/after screenshots]
Repeated calls: [before/after screenshots]

Loading times: genai-perf error, first run

[before/after screenshots]

Remove unnecessary import
@@ -58,7 +57,7 @@ def __init__(self, name: str, trust_remote_code: bool, revision: str) -> None:
         self._encode_args = {"add_special_tokens": False}
         self._decode_args = {"skip_special_tokens": True}

-    def __call__(self, text, **kwargs) -> BatchEncoding:
+    def __call__(self, text, **kwargs) -> "BatchEncoding":
@dyastremsky (Contributor, Author):
This is a forward reference, as defined in PEP 484. It defers resolution of the type annotation until it is needed. The type is only needed by MyPy, which can still check it because the import above occurs only during static type checking.
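
For reference, a minimal sketch of the pattern being described. That BatchEncoding comes from a `transformers` import guarded by TYPE_CHECKING is an assumption based on this comment, not copied from the PR.

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Evaluated only by static type checkers such as MyPy, never at runtime,
    # so importing transformers here costs nothing when the tool starts up.
    from transformers import BatchEncoding


class Tokenizer:
    def __call__(self, text, **kwargs) -> "BatchEncoding":
        # "BatchEncoding" is a PEP 484 forward reference: the interpreter sees
        # a plain string, while MyPy resolves it via the import above.
        ...
```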

@@ -50,9 +51,10 @@ def convert(
     ) -> Dict[Any, Any]:
         request_body: Dict[str, Any] = {"data": []}

+        tokenizer = cast(Tokenizer, config.tokenizer)
Contributor:
This casting stuff seems strange - why isn't config.tokenizer already of type Tokenizer? It looks to me like the genai_perf.tokenizer.Tokenizer class has the wrapper methods needed to be used in place, e.g. __call__, encode, decode. So instead of casting, is config.tokenizer set to the right type/value wherever it's initialized?

Contributor:
It seems more error-prone to have these extra places that use tokenizer instead of self.config.tokenizer, possibly losing the single source of truth across all the places that touch it.

@dyastremsky (Contributor, Author) commented Oct 25, 2024:

Thanks for reviewing! Yes, it's casting it to the genai_perf.tokenizer.Tokenizer class. The cast only matters for static type checking (i.e. for MyPy); at runtime it is a no-op. It's how we deal with situations where a variable is Optional (i.e. can be either the specified type or None).

This is necessary because of this change. If the type is not Optional, then it needs a default value other than None, which would require a call to HuggingFace and fail in an air-gapped environment. If the type is Optional, then static type checkers complain without the cast.

The two options I see are:

  1. Remove the default value in InputsConfig, since None isn't really a default value. However, parameters without default values need to be listed first, so I'd have to break the consistency of the file, where every parameter has a default value and is grouped.
  2. In-line the cast, which might reduce legibility but avoids the need for a second variable and a copy at runtime. It also maintains a single source of truth.

I'll push a commit with approach 2. If you prefer 1 or a different option, let me know.
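
For illustration, a minimal sketch of approach 2. Modeling InputsConfig as a dataclass and the simplified convert signature are assumptions for the example; the encode wrapper method is referenced in the discussion above.

```python
from dataclasses import dataclass
from typing import Optional, cast

from genai_perf.tokenizer import Tokenizer


@dataclass
class InputsConfig:
    # No eager tokenizer construction: a None default keeps the config usable
    # in an air-gapped environment, at the cost of an Optional type.
    tokenizer: Optional[Tokenizer] = None


def convert(config: InputsConfig, prompt: str):
    # Approach 2: the in-line cast is a runtime no-op that only tells MyPy the
    # tokenizer has been set by this point, and keeps config.tokenizer as the
    # single source of truth (no second variable).
    return cast(Tokenizer, config.tokenizer).encode(prompt)
```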
