-
Already resolved. The locally downloaded model was the problem: it couldn't be read, so the code fell back to fetching from the network. Try re-downloading the model. git clone failed for me, so I re-downloaded it with ModelScope instead:

# Model download
from modelscope import snapshot_download
model_dir = snapshot_download('ZhipuAI/chatglm3-6b')
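To load the ModelScope copy without touching huggingface.co, point `from_pretrained` at the directory that `snapshot_download` returned, and check first that the custom tokenizer file is actually present. A minimal sketch — the `missing_files` helper is illustrative, not part of either library, and the required-file list is an assumption based on what the traceback below asks for:

```python
from pathlib import Path

# Files that loading chatglm3-6b with trust_remote_code expects to find
# locally (per the "Could not locate tokenization_chatglm.py" error).
REQUIRED = ["config.json", "tokenization_chatglm.py"]

def missing_files(model_dir: str) -> list[str]:
    """Return which required files are absent from a local model directory."""
    root = Path(model_dir)
    return [name for name in REQUIRED if not (root / name).exists()]

# Example: only attempt an offline load if the snapshot is complete.
# model_dir = snapshot_download('ZhipuAI/chatglm3-6b')
# if not missing_files(model_dir):
#     tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
```

If `missing_files` reports anything, the download is incomplete and transformers will fall back to the Hub, which is exactly the timeout seen below.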
-
Error:
You can now view your Streamlit app in your browser.
Local URL: http://localhost:8501
Network URL: http://192.168.1.117:8501
Could not locate the tokenization_chatglm.py inside THUDM/chatglm3-6b.
2024-02-27 15:05:55.664 Uncaught app exception
Traceback (most recent call last):
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/urllib3/connection.py", line 198, in _new_conn
sock = connection.create_connection(
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
raise err
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/urllib3/util/connection.py", line 73, in create_connection
sock.connect(sa)
TimeoutError: timed out
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 793, in urlopen
response = self._make_request(
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 491, in _make_request
raise new_e
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 467, in _make_request
self._validate_conn(conn)
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1099, in _validate_conn
conn.connect()
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/urllib3/connection.py", line 616, in connect
self.sock = sock = self._new_conn()
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/urllib3/connection.py", line 207, in _new_conn
raise ConnectTimeoutError(
urllib3.exceptions.ConnectTimeoutError: (<urllib3.connection.HTTPSConnection object at 0x7f15ce18d360>, 'Connection to huggingface.co timed out. (connect timeout=10)')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 847, in urlopen
retries = retries.increment(
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/urllib3/util/retry.py", line 515, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /THUDM/chatglm3-6b/resolve/main/tokenization_chatglm.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f15ce18d360>, 'Connection to huggingface.co timed out. (connect timeout=10)'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1238, in hf_hub_download
metadata = get_hf_file_metadata(
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1631, in get_hf_file_metadata
r = _request_wrapper(
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 385, in _request_wrapper
response = _request_wrapper(
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 408, in _request_wrapper
response = get_session().request(method=method, url=url, **params)
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 67, in send
return super().send(request, *args, **kwargs)
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/requests/adapters.py", line 507, in send
raise ConnectTimeout(e, request=request)
requests.exceptions.ConnectTimeout: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /THUDM/chatglm3-6b/resolve/main/tokenization_chatglm.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f15ce18d360>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: b8163dee-d3ee-4344-bafc-b58f009b7e4a)')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/transformers/utils/hub.py", line 385, in cached_file
resolved_file = hf_hub_download(
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1371, in hf_hub_download
raise LocalEntryNotFoundError(
huggingface_hub.utils._errors.LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 535, in _run_script
exec(code, module.__dict__)
File "/home/taichu/git/LLM/ChatGLM3/basic_demo/web_demo_streamlit.py", line 42, in <module>
tokenizer, model = get_model()
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 212, in wrapper
return cached_func(*args, **kwargs)
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 241, in __call__
return self._get_or_create_cached_value(args, kwargs)
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 268, in _get_or_create_cached_value
return self._handle_cache_miss(cache, value_key, func_args, func_kwargs)
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 324, in _handle_cache_miss
computed_value = self._info.func(*func_args, **func_kwargs)
File "/home/taichu/git/LLM/ChatGLM3/basic_demo/web_demo_streamlit.py", line 35, in get_model
tokenizer = AutoTokenizer.from_pretrained("/home/taichu/Downloads/ChatGlm3-6b/", trust_remote_code=True)
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 797, in from_pretrained
tokenizer_class = get_class_from_dynamic_module(class_ref, pretrained_model_name_or_path, **kwargs)
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/transformers/dynamic_module_utils.py", line 488, in get_class_from_dynamic_module
final_module = get_cached_module_file(
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/transformers/dynamic_module_utils.py", line 294, in get_cached_module_file
resolved_module_file = cached_file(
File "/home/taichu/git/LLM/ChatGLM3/venv/lib/python3.10/site-packages/transformers/utils/hub.py", line 425, in cached_file
raise EnvironmentError(
OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like THUDM/chatglm3-6b is not the path to a directory containing a file named tokenization_chatglm.py.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
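The traceback shows transformers falling back to huggingface.co because it could not resolve tokenization_chatglm.py locally, then hanging until the 10-second connect timeout. One way to fail fast instead is to force offline mode before importing the library. The environment variable names below come from the Hugging Face offline-mode docs; whether they cover every code path of `trust_remote_code` resolution on your installed version is worth verifying:

```python
import os

# Must be set before transformers / huggingface_hub are imported:
# in offline mode, a missing local file raises immediately instead of
# retrying huggingface.co until the connect timeout expires.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

# from transformers import AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained(
#     "/home/taichu/Downloads/ChatGlm3-6b/", trust_remote_code=True)
```

Note the demo was loading "/home/taichu/Downloads/ChatGlm3-6b/", yet the error mentions THUDM/chatglm3-6b — a sign the local directory lacked the remote-code files, so the loader fell through to the Hub repo id.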
(The same ConnectTimeout → MaxRetryError → LocalEntryNotFoundError → OSError traceback repeated at 15:06:15 on the next Streamlit rerun.)