
Error when converting types using LLama V2 #35

Open · pereconteur opened this issue Nov 9, 2023 · 0 comments
pereconteur commented Nov 9, 2023

During the conversion, it seems that the tokenizer and the weights do not match. Here is my error using python3.10 with vigogne-2-7b-chat:

Loading model file models/vigogne-2-7b-chat_path/consolidated.00.pth
params = Params(n_vocab=-1, n_embd=4096, n_layer=32, n_ctx=2048, n_ff=11008, n_head=32, n_head_kv=32, f_norm_eps=1e-06, rope_scaling_type=None, f_rope_freq_base=None, f_rope_scale=None, n_orig_ctx=None, rope_finetuned=None, ftype=None, path_model=PosixPath('models/vigogne-2-7b-chat_path'))
Loading vocab file 'models/tokenizer.model', type 'spm'
tok_embeddings.weight                            -> token_embd.weight                        | F16    | [32000, 4096]
layers.0.attention.wq.weight                     -> blk.0.attn_q.weight                      | F16    | [4096, 4096]
layers.0.attention.wk.weight                     -> blk.0.attn_k.weight                      | F16    | [4096, 4096]
[...]
output.weight                                    -> output.weight                            | F16    | [32000, 4096]
Writing models/vigogne-2-7b-chat_path/ggml-model-f16.gguf, format 1
Traceback (most recent call last):
  File "/Users/pereconteur/Desktop/IA/vigogne/scripts/../../llama.cpp/convert.py", line 1204, in <module>
    main()
  File "/Users/pereconteur/Desktop/IA/vigogne/scripts/../../llama.cpp/convert.py", line 1199, in main
    OutputFile.write_all(outfile, ftype, params, model, vocab, special_vocab, concurrency = args.concurrency, endianess=endianess)
  File "/Users/pereconteur/Desktop/IA/vigogne/scripts/../../llama.cpp/convert.py", line 909, in write_all
    check_vocab_size(params, vocab)
  File "/Users/pereconteur/Desktop/IA/vigogne/scripts/../../llama.cpp/convert.py", line 796, in check_vocab_size
    raise Exception(msg)
Exception: Vocab size mismatch (model has -1, but models/tokenizer.model has 32000).
Traceback (most recent call last):
  File "/Users/pereconteur/Desktop/IA/Vigogne_auto_installer/vigogne_V2_auto_install_Mac_Linux.py", line 269, in <module>
    check_requirements()
  File "/Users/pereconteur/Desktop/IA/Vigogne_auto_installer/vigogne_V2_auto_install_Mac_Linux.py", line 263, in check_requirements
    goForVigogneInstallation(model_vig, poids_B, type_vig)
  File "/Users/pereconteur/Desktop/IA/Vigogne_auto_installer/vigogne_V2_auto_install_Mac_Linux.py", line 205, in goForVigogneInstallation
    subprocess.run(commandConvert, check=True)
  File "/Users/pereconteur/miniconda3/lib/python3.10/subprocess.py", line 526, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['python3.10', '../../llama.cpp/convert.py', 'models/vigogne-2-7b-chat_path']' returned non-zero exit status 1.
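
For what it's worth, the `n_vocab=-1` in the `params` line above usually comes from the checkpoint's `params.json`: the stock LLaMA-2 release ships it with `"vocab_size": -1`, so `convert.py` cannot reconcile it with the 32000-entry `tokenizer.model` and `check_vocab_size()` raises. A minimal workaround sketch, assuming your `params.json` sits next to `consolidated.00.pth` and carries the same `-1` placeholder (the path and the 32000 figure are taken from the log above, not verified against this checkpoint):

```python
import json
from pathlib import Path

# Path taken from the log above; adjust if your layout differs.
params_path = Path("models/vigogne-2-7b-chat_path/params.json")

params = json.loads(params_path.read_text())
print("current vocab_size:", params.get("vocab_size"))  # expected: -1

# Replace the -1 placeholder with the tokenizer's actual vocab size
# (32000, per the error message) so check_vocab_size() passes.
if params.get("vocab_size", -1) == -1:
    params["vocab_size"] = 32000
    params_path.write_text(json.dumps(params, indent=2))
    print("patched vocab_size to 32000")
```

After patching, re-running the same `python3.10 ../../llama.cpp/convert.py models/vigogne-2-7b-chat_path` command should get past the vocab size check.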