Is there an existing issue for this?
Current Behavior
After P-tuning finishes successfully, I run web_demo.py and enter a prompt into the input box (one that already appears in the original training dataset). The backend service log then throws the exception shown under Steps To Reproduce.
Expected Behavior
No response
Steps To Reproduce
```
This share link expires in 72 hours. For free permanent hosting and GPU upgrades (NEW!), check out Spaces: https://huggingface.co/spaces
Traceback (most recent call last):
  File "/root/.conda/envs/python39/lib/python3.9/site-packages/gradio/routes.py", line 393, in run_predict
    output = await app.get_blocks().process_api(
  File "/root/.conda/envs/python39/lib/python3.9/site-packages/gradio/blocks.py", line 1108, in process_api
    result = await self.call_function(
  File "/root/.conda/envs/python39/lib/python3.9/site-packages/gradio/blocks.py", line 929, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/root/.conda/envs/python39/lib/python3.9/site-packages/anyio/to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/root/.conda/envs/python39/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/root/.conda/envs/python39/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "/root/.conda/envs/python39/lib/python3.9/site-packages/gradio/utils.py", line 490, in async_iteration
    return next(iterator)
  File "/root/llm/ChatGLM-6B-main/web_demo.py", line 63, in predict
    for response, history in model.stream_chat(tokenizer, input, history, max_length=max_length, top_p=top_p,
  File "/root/.conda/envs/python39/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 35, in generator_context
    response = gen.send(None)
  File "/root/.cache/huggingface/modules/transformers_modules/checkpoint-3000/modeling_chatglm.py", line 1281, in stream_chat
    for outputs in self.stream_generate(**inputs, **gen_kwargs):
  File "/root/.conda/envs/python39/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 35, in generator_context
    response = gen.send(None)
  File "/root/.cache/huggingface/modules/transformers_modules/checkpoint-3000/modeling_chatglm.py", line 1356, in stream_generate
    model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/checkpoint-3000/modeling_chatglm.py", line 1091, in prepare_inputs_for_generation
    mask_positions = [seq.index(mask_token) for seq in seqs]
  File "/root/.cache/huggingface/modules/transformers_modules/checkpoint-3000/modeling_chatglm.py", line 1091, in <listcomp>
    mask_positions = [seq.index(mask_token) for seq in seqs]
ValueError: 130000 is not in list
```
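For context on the error: the list comprehension at modeling_chatglm.py line 1091 searches each tokenized sequence for a mask token, and in the chatglm-6b vocabulary 130000 is the `[MASK]` token id (130001 is `[gMASK]`). The ValueError therefore means the encoded prompt contains no mask token at all, which usually points to tokenizer files that do not match the modeling code, e.g. a stale `ice_text.model` in the checkpoint-3000 directory. A minimal diagnostic sketch follows; `MODEL_PATH` is a placeholder for whichever directory web_demo.py actually loads its tokenizer from:

```python
from transformers import AutoTokenizer

# Placeholder path: point this at whichever directory web_demo.py loads
# its tokenizer from (the base model or the checkpoint-3000 directory).
MODEL_PATH = "THUDM/chatglm-6b"

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)

# encode() adds the model's special tokens; a healthy chatglm-6b tokenizer
# appends a mask token ([MASK] = 130000 or [gMASK] = 130001) to the prompt.
ids = tokenizer.encode("测试")
print(ids)
print("mask token present:", 130000 in ids or 130001 in ids)
```

If no mask token shows up, the usual remedy is to load the tokenizer from the original chatglm-6b directory (or copy its tokenizer files into the checkpoint directory) rather than from checkpoint-3000.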
Does web_demo.py still need to load the original chatglm-6b .bin weight files?
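On that question: a checkpoint directory produced by P-tuning typically holds only the prefix-encoder weights (plus config and code files), so the base chatglm-6b .bin shards are still required. Below is a minimal loading sketch following the pattern in the repository's P-tuning documentation; the paths and `pre_seq_len=128` are assumptions that must match the actual training setup:

```python
import os
import torch
from transformers import AutoConfig, AutoModel, AutoTokenizer

BASE_MODEL = "THUDM/chatglm-6b"        # directory holding the original .bin shards
CHECKPOINT = "output/checkpoint-3000"  # P-tuning output directory (placeholder)

# Load the tokenizer from the base model, not from the checkpoint directory.
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL, trust_remote_code=True)

# pre_seq_len must equal the PRE_SEQ_LEN used during P-tuning (128 is an assumption).
config = AutoConfig.from_pretrained(BASE_MODEL, trust_remote_code=True, pre_seq_len=128)
model = AutoModel.from_pretrained(BASE_MODEL, config=config, trust_remote_code=True)

# The checkpoint's pytorch_model.bin holds the transformer.prefix_encoder.* tensors;
# copy them into the freshly loaded base model.
prefix_state_dict = torch.load(os.path.join(CHECKPOINT, "pytorch_model.bin"))
new_prefix_state_dict = {}
for k, v in prefix_state_dict.items():
    if k.startswith("transformer.prefix_encoder."):
        new_prefix_state_dict[k[len("transformer.prefix_encoder."):]] = v
model.transformer.prefix_encoder.load_state_dict(new_prefix_state_dict)

model = model.half().cuda()
model.transformer.prefix_encoder.float()
model = model.eval()
```

Loading the tokenizer from the base model directory also sidesteps the stale-tokenizer cause of the ValueError above.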
Environment
- OS: CentOS 7.6
- Python: 3.9
- Transformers: 4.28.0
- PyTorch: 2.0.0+cu117
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`): True
Anything else?
No response