
output strangeness #123

Open
babyta opened this issue Nov 14, 2024 · 4 comments


babyta commented Nov 14, 2024

[screenshot: strange model output]
I don't know what happened; I was testing the a+v (audio + video) mode.

@xinyifei99 (Collaborator)

Hi, we have re-tested locally and also asked colleagues with no prior experience with the VideoLLaMA2 project to build the environment from scratch and run the inference demo, but the results were all fine. So we are currently unable to reproduce the problem. Perhaps you can share more details to help us locate it.

@Danielement321

Is this your fine-tuned model? My fine-tuned model ran into the same problem.

@xinyifei99 (Collaborator)

You can re-clone our repository and follow the instructions. Someone else also encountered this problem early on, and it has since been solved (#119).

lixin4ever reopened this on Jan 9, 2025
@lixin4ever (Contributor)

lixin4ever commented Jan 9, 2025

Hi @babyta @Danielement321, we have internally reproduced this bug, and it stems from the installed version of transformers. Please run pip install transformers==4.42.3 instead and see if that fixes it.
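If it helps others hitting this, a minimal sketch of a startup check that warns when the installed transformers version differs from the one the maintainers validated (the function name and message wording are illustrative, not part of the VideoLLaMA2 code):

```python
# Sketch: verify the installed transformers version matches the one
# confirmed working in this thread (4.42.3). Uses only the stdlib.
from importlib.metadata import version, PackageNotFoundError

REQUIRED = "4.42.3"  # version the maintainers confirmed working

def check_transformers(required: str = REQUIRED) -> str:
    """Return a human-readable status message about the transformers install."""
    try:
        installed = version("transformers")
    except PackageNotFoundError:
        return f"transformers not installed; run: pip install transformers=={required}"
    if installed != required:
        return (f"transformers {installed} found, but this repo was validated "
                f"with {required}; run: pip install transformers=={required}")
    return f"transformers {installed} OK"

if __name__ == "__main__":
    print(check_transformers())
```

Running this before inference makes the version mismatch visible immediately instead of surfacing as garbled output.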
