root@c68c31f45482:/workspace/zt/code/FastChat# python3 -m fastchat.model.apply_delta --base-model-path ../../model/Llama-2-7b-hf --target-model-path ../Sequence-Scheduling/ckpts/vicuna-7b --delta-path lmsys/vicuna-7b-delta-v1.1
Loading the delta weights from lmsys/vicuna-7b-delta-v1.1
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This is expected, and simply means that the legacy (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set legacy=False. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in huggingface/transformers#24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message
Downloading shards: 0%| | 0/2 [00:00<?, ?it/s]
I just want to apply the Vicuna delta weights to the original LLaMA model. The first time I executed the command described in the tutorial, it ran normally, but I noticed I had set a wrong output path, so I canceled the command and ran it again. Unfortunately, it now shows the output above, and I have been waiting more than 30 minutes at this download step. What is causing this, and how can I fix it? (I use a VPN, so it is not a network connection issue.)
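
A possible workaround, in case the canceled run left a stale partial download in the Hugging Face cache, is to fetch the delta weights into a local folder first and point --delta-path at that folder. This is only a sketch: it assumes huggingface_hub is installed and that apply_delta accepts a local directory for --delta-path (it loads the delta with from_pretrained, which generally handles local paths); the ./vicuna-7b-delta-v1.1 directory name is just an example.

```python
# Sketch: pre-fetch the Vicuna delta weights so apply_delta does not have to
# stream them from the Hub mid-run. snapshot_download skips files that are
# already fully downloaded, so an interrupted fetch can simply be re-run.
from huggingface_hub import snapshot_download

local_delta = snapshot_download(
    repo_id="lmsys/vicuna-7b-delta-v1.1",
    local_dir="./vicuna-7b-delta-v1.1",  # hypothetical local target directory
)
print(f"Delta weights available at {local_delta}")

# Then re-run apply_delta against the local copy (shell):
# python3 -m fastchat.model.apply_delta \
#     --base-model-path ../../model/Llama-2-7b-hf \
#     --target-model-path ../Sequence-Scheduling/ckpts/vicuna-7b \
#     --delta-path ./vicuna-7b-delta-v1.1
```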