In this directory, you will find examples of how to apply IPEX-LLM INT4 optimizations on GLM-4 models on Intel GPUs. For illustration purposes, we utilize THUDM/glm-4-9b-chat as a reference GLM-4 model.
To run these examples with IPEX-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer to here for more information.
On Linux, we suggest using conda to manage environment:
conda create -n llm python=3.11
conda activate llm
# the below command will install intel_extension_for_pytorch==2.1.10+xpu by default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
# install tiktoken required for GLM-4
pip install "tiktoken>=0.7.0"
On Windows, we suggest using conda to manage environment:
conda create -n llm python=3.11 libuv
conda activate llm
# the below command will install intel_extension_for_pytorch==2.1.10+xpu by default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
# install tiktoken required for GLM-4
pip install "tiktoken>=0.7.0"
Note: Skip this step if you are running on Windows. On Linux, this step is required when oneAPI was installed via APT or the offline installer; skip it if oneAPI was installed via pip.
source /opt/intel/oneapi/setvars.sh
For optimal performance, it is recommended to set several environment variables. Please check out the suggestions based on your device.
For Intel Arc™ A-Series Graphics and Intel Data Center GPU Flex Series (on Linux):
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1
For Intel Data Center GPU Max Series (on Linux):
export LD_PRELOAD=${LD_PRELOAD}:${CONDA_PREFIX}/lib/libtcmalloc.so
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1
export ENABLE_SDP_FUSION=1
Note: libtcmalloc.so can be installed with conda install -c conda-forge -y gperftools=2.10.
For Intel iGPU (on Linux):
export SYCL_CACHE_PERSISTENT=1
export BIGDL_LLM_XMX_DISABLED=1
For Intel iGPU (on Windows):
set SYCL_CACHE_PERSISTENT=1
set BIGDL_LLM_XMX_DISABLED=1
For Intel Arc™ A-Series Graphics (on Windows):
set SYCL_CACHE_PERSISTENT=1
Note: The first time each model runs on an Intel iGPU, Intel Arc™ A300-Series, or Pro A60, it may take several minutes to compile.
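If you prefer configuring from inside a script rather than the shell, the same variables can also be set via os.environ before the first GPU operation, since the runtime reads them at initialization. A minimal sketch (pick the variables for your device from the lists above):
import os

# set before any GPU work: the runtime reads these at initialization
os.environ["SYCL_CACHE_PERSISTENT"] = "1"
os.environ["BIGDL_LLM_XMX_DISABLED"] = "1"  # only needed on Intel iGPU (see above)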
In the example generate.py, we show a basic use case for a GLM-4 model to predict the next N tokens using the generate() API, with IPEX-LLM INT4 optimizations on Intel GPUs (a condensed sketch follows the argument list below).
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
Arguments info:
--repo-id-or-model-path REPO_ID_OR_MODEL_PATH: argument defining the huggingface repo id for the GLM-4 model (e.g. THUDM/glm-4-9b-chat) to be downloaded, or the path to the huggingface checkpoint folder. The default is 'THUDM/glm-4-9b-chat'.
--prompt PROMPT: argument defining the prompt to be inferred (with integrated prompt format for chat). The default is 'AI是什么?'.
--n-predict N_PREDICT: argument defining the max number of tokens to predict. The default is 32.
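For reference, the core of generate.py boils down to roughly the following sketch. It is condensed and simplified here, and the script in this directory remains the authoritative version; load_in_4bit=True is what applies the IPEX-LLM INT4 optimization at load time:
import torch
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "THUDM/glm-4-9b-chat"
prompt = "<|user|>\nAI是什么?\n<|assistant|>"  # GLM-4 chat prompt format

# load_in_4bit=True loads the model with IPEX-LLM INT4 optimizations
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model = model.to("xpu")  # move the optimized model to the Intel GPU

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
input_ids = tokenizer.encode(prompt, return_tensors="pt").to("xpu")

with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))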
Sample output (default Chinese prompt):
Inference time: xxxx s
-------------------- Prompt --------------------
<|user|>
AI是什么?
<|assistant|>
-------------------- Output --------------------
AI是什么?
AI,即人工智能(Artificial Intelligence),是指由人创造出来的,能够模拟、延伸和扩展人的智能的计算机系统或机器。人工智能的目标
Sample output (English prompt):
Inference time: xxxx s
-------------------- Prompt --------------------
<|user|>
What is AI?
<|assistant|>
-------------------- Output --------------------
What is AI?
Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term "art
In the example streamchat.py, we show a basic use case for a GLM-4 model to stream chat, with IPEX-LLM INT4 optimizations (a condensed sketch follows the argument list below).
Stream chat using the stream_chat() API:
python ./streamchat.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --question QUESTION
Chat using the chat() API:
python ./streamchat.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --question QUESTION --disable-stream
Arguments info:
--repo-id-or-model-path REPO_ID_OR_MODEL_PATH: argument defining the huggingface repo id for the GLM-4 model to be downloaded, or the path to the huggingface checkpoint folder. The default is 'THUDM/glm-4-9b-chat'.
--question QUESTION: argument defining the question to ask. The default is "AI是什么?".
--disable-stream: argument defining whether to stream chat. If --disable-stream is included when running the script, streaming is disabled and the chat() API is used.
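For reference, the streaming path of streamchat.py looks roughly like the sketch below. Note that chat() and stream_chat() are helpers provided by the model's remote code (hence trust_remote_code=True), as in the ChatGLM family, rather than part of the transformers API itself:
import torch
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "THUDM/glm-4-9b-chat"
question = "AI是什么?"

model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True).to("xpu")
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

with torch.inference_mode():
    # stream_chat() yields the response accumulated so far; print only the new part
    printed = ""
    for response, history in model.stream_chat(tokenizer, question, history=[]):
        print(response[len(printed):], end="", flush=True)
        printed = response
print()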