The config file points to a local path, but I still get: OSError: Can't load tokenizer for 'bert-base-uncased'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'bert-base-uncased' is the correct path to a directory containing all relevant files for a BertTokenizer tokenizer.
#172 · Open
Asmallsoldier opened this issue on Sep 15, 2024 · 1 comment
I configured local paths for the models, but it still tries to download from the remote hub and fails to connect.
My config file is as follows:
```yaml
model:
  arch: video_llama
  model_type: pretrain_vicuna
  freeze_vit: True
  freeze_qformer: True
  max_txt_len: 512
  end_sym: "###"
  low_resource: False
  frozen_llama_proj: False

  # If you want to use LLaMA-2-chat,
  # some ckpts can be downloaded from our provided huggingface repo,
  # i.e. https://huggingface.co/DAMO-NLP-SG/Video-LLaMA-2-13B-Finetuned
  # llama_model: "ckpt/vicuna-13b/" or "ckpt/vicuna-7b/" or "ckpt/llama-2-7b-chat-hf" or "ckpt/llama-2-13b-chat-hf"
  # imagebind_ckpt_path: "ckpt/imagebind_path/"
  # ckpt: 'path/pretrained_visual_branch_ckpt'  # you can use our pretrained ckpt from https://huggingface.co/DAMO-NLP-SG/Video-LLaMA-2-13B-Pretrained/
  # ckpt_2: 'path/pretrained_audio_branch_ckpt'

  llama_model: "/home/mao/Video-LLaMA-main/eval_configs/ckpt/Video-LLaMA-2-7B-Finetuned/llama-2-7b-chat-hf/"
  imagebind_ckpt_path: "/home/mao/Video-LLaMA-main/eval_configs/ckpt/Video-LLaMA-2-7B-Finetuned/imagebind_huge.pth"
  ckpt: '/home/mao/Video-LLaMA-main/eval_configs/ckpt/Video-LLaMA-2-7B-Finetuned/VL_LLaMA_2_7B_Finetuned.pth'  # you can use our pretrained ckpt from https://huggingface.co/DAMO-NLP-SG/Video-LLaMA-2-13B-Pretrained/
  ckpt_2: '/home/mao/Video-LLaMA-main/eval_configs/ckpt/Video-LLaMA-2-7B-Finetuned/AL_LLaMA_2_7B_Finetuned.pth'
  equip_audio_branch: False  # whether to equip the audio branch
  fusion_head_layers: 2
  max_frame_pos: 32
  fusion_header_type: "seqTransf"

datasets:
  webvid:
    vis_processor:
      train:
        name: "alpro_video_eval"
        n_frms: 8
        image_size: 224
    text_processor:
      train:
        name: "blip_caption"

run:
  task: video_text_pretrain
```
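Before launching the demo, a quick stdlib check (my own addition, not part of the repo) can confirm that every local checkpoint path in the config actually exists; a typo in any of these paths makes the loader fall back to treating the name as a remote Hugging Face repo id:

```python
# Verify that each local checkpoint path from the eval config exists.
# The paths below mirror the config above; adjust them to your setup.
import os

paths = {
    "llama_model": "/home/mao/Video-LLaMA-main/eval_configs/ckpt/Video-LLaMA-2-7B-Finetuned/llama-2-7b-chat-hf/",
    "imagebind_ckpt_path": "/home/mao/Video-LLaMA-main/eval_configs/ckpt/Video-LLaMA-2-7B-Finetuned/imagebind_huge.pth",
    "ckpt": "/home/mao/Video-LLaMA-main/eval_configs/ckpt/Video-LLaMA-2-7B-Finetuned/VL_LLaMA_2_7B_Finetuned.pth",
}

for key, path in paths.items():
    status = "ok" if os.path.exists(path) else "MISSING"
    print(f"{key}: {status}")
```

Note that even when all of these paths check out, the traceback below fails on a different model entirely (`bert-base-uncased`), which is not covered by this config.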
The directory structure is also correct and every required file is present. Does anyone know where the problem is? I can't run the model locally.
Here is the full error output:
```
/home/mao/yes/envs/videollama1/lib/python3.9/site-packages/torchvision/transforms/_functional_video.py:6: UserWarning: The 'torchvision.transforms._functional_video' module is deprecated since 0.12 and will be removed in 0.14. Please use the 'torchvision.transforms.functional' module instead.
  warnings.warn(
/home/mao/yes/envs/videollama1/lib/python3.9/site-packages/torchvision/transforms/_transforms_video.py:25: UserWarning: The 'torchvision.transforms._transforms_video' module is deprecated since 0.12 and will be removed in 0.14. Please use the 'torchvision.transforms' module instead.
  warnings.warn(
Initializing Chat
/home/mao/yes/envs/videollama1/lib/python3.9/site-packages/huggingface_hub/file_download.py:1150: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Traceback (most recent call last):
  File "/home/mao/Video-LLaMA-main/demo_video.py", line 67, in <module>
    model = model_cls.from_config(model_config).to('cuda:{}'.format(args.gpu_id))
  File "/home/mao/Video-LLaMA-main/video_llama/models/video_llama.py", line 574, in from_config
    model = cls(
  File "/home/mao/Video-LLaMA-main/video_llama/models/video_llama.py", line 81, in __init__
    self.tokenizer = self.init_tokenizer()
  File "/home/mao/Video-LLaMA-main/video_llama/models/blip2.py", line 32, in init_tokenizer
    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
  File "/home/mao/yes/envs/videollama1/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1795, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'bert-base-uncased'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'bert-base-uncased' is the correct path to a directory containing all relevant files for a BertTokenizer tokenizer.
```
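The failing call is `BertTokenizer.from_pretrained("bert-base-uncased")` in `blip2.py`, which is a hard-coded hub id unrelated to the paths in the eval config, so it always needs either network access or a locally cached copy. A common workaround for this class of error (my suggestion, not confirmed in this thread) is to download `bert-base-uncased` once on a machine with access, place it in the Hugging Face cache, and then force offline resolution via environment variables before the demo script imports `transformers`; the paths below are hypothetical placeholders:

```python
# Hypothetical workaround: make transformers resolve "bert-base-uncased"
# from a local snapshot instead of contacting huggingface.co.
import os

# Point the Hugging Face cache at a directory that already contains the
# tokenizer files (vocab.txt, tokenizer_config.json, ...), then disable
# network access so from_pretrained never attempts a download.
os.environ["HF_HOME"] = "/home/mao/hf_cache"   # hypothetical cache dir
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

# Alternatively, patch blip2.py line 32 to load from an explicit local
# directory holding the tokenizer files, e.g.:
#   tokenizer = BertTokenizer.from_pretrained("/home/mao/hf_cache/bert-base-uncased")
print(os.environ["TRANSFORMERS_OFFLINE"])
```

These variables must be set (or exported in the shell) before `transformers` performs its first hub lookup; setting them after the tokenizer call has already failed has no effect.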