lmms-eval evaluation #144

Open
G-JWLee opened this issue Jan 6, 2025 · 0 comments

G-JWLee commented Jan 6, 2025

Hello, I am currently trying to run various video benchmarks, including VideoMME and EgoSchema, through the lmms-eval evaluation framework (https://github.com/EvolvingLMMs-Lab/lmms-eval).

Since lmms-eval does not support VideoLLaMA2, I implemented VideoLLaMA2 in lmms-eval myself, but I failed to reproduce results close to the reported numbers on benchmarks such as VideoMME.

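For context, my port follows roughly the skeleton below. This is a minimal sketch only: the import paths assume lmms-eval keeps the lm-eval-harness-style model registry (`lmms_eval.api.model.lmms` and `lmms_eval.api.registry.register_model`), and the registry name `videollama2` is just a placeholder of mine.

```python
# Minimal sketch of wiring a new model into lmms-eval. The import paths and
# the registry decorator reflect lmms-eval's lm-eval-harness lineage (an
# assumption on my part); "videollama2" is a placeholder registry name, and
# the method bodies are stubs for the actual VideoLLaMA2 code.
from typing import List, Tuple

from lmms_eval.api.model import lmms
from lmms_eval.api.registry import register_model


@register_model("videollama2")
class VideoLLaMA2(lmms):
    def __init__(self, pretrained: str, **kwargs) -> None:
        super().__init__()
        # Load the VideoLLaMA2 checkpoint, tokenizer, and video processor
        # here, keeping the same preprocessing (frame count, resolution,
        # prompt template) as the original VideoLLaMA2 evaluation scripts.
        ...

    def generate_until(self, requests) -> List[str]:
        # Free-form generation path used by tasks such as videomme: build
        # the multimodal prompt for each request and return the answers.
        ...

    def loglikelihood(self, requests) -> List[Tuple[float, bool]]:
        # Ranking path that multiple-choice tasks may call instead of
        # generate_until.
        ...
```

With the model registered, I launch the evaluation with something like `python -m lmms_eval --model videollama2 --tasks videomme --batch_size 1` (task names as listed in the lmms-eval repository).
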
It seems that VideoLLaMA2 should work with lmms-eval, as indicated in https://github.com/LLaVA-VL/LLaVA-NeXT/blob/main/docs/LLaVA-NeXT-Video_0716.md.
Have you tried evaluating the models with lmms-eval, and were you able to reproduce the reported performance?

Thanks for the help!
