
Commit: Address comments

apsonawane committed Oct 20, 2024
1 parent 2687e60 commit 4333766
Showing 2 changed files with 19 additions and 23 deletions.
28 changes: 14 additions & 14 deletions src/python/py/models/README.md
@@ -100,20 +100,6 @@ python3 -m onnxruntime_genai.models.builder -i path_to_local_folder_on_disk -o p
python3 builder.py -i path_to_local_folder_on_disk -o path_to_output_folder -p int4 -e execution_provider -c cache_dir_to_store_temp_files
```

### LoRA Models

This scenario is where you have a fine-tuned model that supports LoRA and want to get an optimized model. To use the adapter model, pass the path of the model as input.

path_to_local_folder_on_disk --> location where adapter_config.json and adapter weights are present

```
# From wheel:
python3 -m onnxruntime_genai.models.builder -o path_to_output_folder -p fp16 -e execution_provider -c cache_dir_to_store_temp_files --extra_options adapter_path=path_to_adapter_files
# From source:
python3 builder.py -o path_to_output_folder -p fp16 -e execution_provider -c cache_dir_to_store_temp_files --extra_options adapter_path=path_to_adapter_files
```

### GGUF Model

This scenario is where your float16/float32 GGUF model is already on disk.
@@ -231,6 +217,20 @@ python3 -m onnxruntime_genai.models.builder -i path_to_local_folder_on_disk -o p
python3 builder.py -i path_to_local_folder_on_disk -o path_to_output_folder -p precision -e execution_provider -c cache_dir_to_store_temp_files --extra_options use_qdq=1
```

#### LoRA Models

This scenario is where you have a fine-tuned model with LoRA adapters that can be loaded in the Hugging Face style via [PEFT](https://github.com/huggingface/peft).

- path_to_local_folder_on_disk = location where the base model's weights are present

```
# From wheel:
python3 -m onnxruntime_genai.models.builder -i path_to_local_folder_on_disk -o path_to_output_folder -p fp16 -e execution_provider -c cache_dir_to_store_temp_files --extra_options adapter_path=path_to_adapter_files
# From source:
python3 builder.py -i path_to_local_folder_on_disk -o path_to_output_folder -p fp16 -e execution_provider -c cache_dir_to_store_temp_files --extra_options adapter_path=path_to_adapter_files
```
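
As a quick sanity check before running the builder, the adapter folder can be inspected with PEFT directly. A minimal sketch (the path is a placeholder; requires `peft` to be installed):

```
# Verify an adapter folder before passing it as adapter_path.
# "path_to_adapter_files" is a placeholder; the folder must contain
# adapter_config.json and the adapter weights.
from peft import PeftConfig

peft_config = PeftConfig.from_pretrained("path_to_adapter_files")
print(peft_config.peft_type)                # e.g. LORA
print(peft_config.base_model_name_or_path)  # base model the adapter was trained on
```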

### Unit Testing Models

This scenario is where your PyTorch model is already downloaded locally (either in the default Hugging Face cache directory or in a local folder on disk). If it is not already downloaded locally, here is an example of how you can download it.
14 changes: 5 additions & 9 deletions src/python/py/models/builder.py
@@ -3050,18 +3050,13 @@ def create_model(model_name, input_path, output_dir, precision, execution_provid

# Load model config
extra_kwargs = {} if os.path.isdir(input_path) else {"cache_dir": cache_dir}
hf_name = input_path if os.path.isdir(input_path) and "config.json" in os.listdir(input_path) else model_name
hf_name = input_path if os.path.isdir(input_path) else model_name
hf_token = parse_hf_token(extra_options.get("hf_token", "true"))

is_peft = os.path.isdir(input_path) and "adapter_config.json" in os.listdir(input_path)
peft_model_name = input_path if is_peft else None
peft_config = None
if is_peft:
    from peft import PeftConfig
    peft_config = PeftConfig.from_pretrained(peft_model_name, token=hf_token, trust_remote_code=True, **extra_kwargs)

config = AutoConfig.from_pretrained(hf_name, token=hf_token, trust_remote_code=True, **extra_kwargs)
if is_peft:
if "adapter_path" in extra_options:
    from peft import PeftConfig
    peft_config = PeftConfig.from_pretrained(extra_options["adapter_path"], token=hf_token, trust_remote_code=True, **extra_kwargs)
    config.update(peft_config.__dict__)

# Set input/output precision of ONNX model
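
For readers skimming the hunk above: the new flow drops the upfront adapter_config.json directory check and instead keys off the adapter_path extra option. A condensed, self-contained sketch of the resulting logic (placeholder values stand in for create_model's arguments; this is not the full function):

```
import os
from transformers import AutoConfig

# Placeholders standing in for create_model's arguments.
input_path = "path_to_local_folder_on_disk"
model_name = None
cache_dir = "cache_dir_to_store_temp_files"
hf_token = None
extra_options = {"adapter_path": "path_to_adapter_files"}

# Use the local folder when one is given; otherwise fall back to the model
# name and let Hugging Face manage the cache directory.
extra_kwargs = {} if os.path.isdir(input_path) else {"cache_dir": cache_dir}
hf_name = input_path if os.path.isdir(input_path) else model_name

config = AutoConfig.from_pretrained(hf_name, token=hf_token, trust_remote_code=True, **extra_kwargs)
if "adapter_path" in extra_options:
    # Merge the LoRA adapter's settings into the base model config.
    from peft import PeftConfig
    peft_config = PeftConfig.from_pretrained(extra_options["adapter_path"], token=hf_token, trust_remote_code=True, **extra_kwargs)
    config.update(peft_config.__dict__)
```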
@@ -3210,6 +3205,7 @@ def get_args():
hf_token = false/token: Use this to disable authentication with Hugging Face or provide a custom authentication token that differs from the one stored in your environment. Default behavior is to use the authentication token stored by `huggingface-cli login`.
If you have already authenticated via `huggingface-cli login`, you do not need to use this flag because Hugging Face has already stored your authentication token for you.
use_qdq = 1 : Use the QDQ decomposition for quantized MatMul instead of the MatMulNBits operator.
adapter_path = Path to folder on disk containing the adapter files (adapter_config.json and adapter model weights).
"""),
)
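
The hf_token option documented above composes the same way as adapter_path. A hedged example invocation disabling Hugging Face authentication for a fully local build (all paths are placeholders; `cpu` stands in for your execution provider):

```
python3 builder.py -i path_to_local_folder_on_disk -o path_to_output_folder -p fp16 -e cpu -c cache_dir_to_store_temp_files --extra_options hf_token=false
```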

