I tried to load the model directly:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("DAMO-NLP-SG/VideoLLaMA2.1-7B-16F")
```
but got this error:

```
ValueError: The checkpoint you are trying to load has model type `videollama2_qwen2` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
```
I tried various Transformers versions, including the suggested 4.40.0 and the latest 4.48.0, but all hit the same issue.