Hello, I am currently trying to run various video benchmarks, including Video-MME and EgoSchema, through the lmms-eval evaluation framework (https://github.com/EvolvingLMMs-Lab/lmms-eval).
Since lmms-eval does not support VideoLLaMA2, I implemented VideoLLaMA2 support in lmms-eval myself, but I was unable to reproduce the reported results on benchmarks such as Video-MME.
It seems that VideoLLaMA2 should work with lmms-eval, as indicated in https://github.com/LLaVA-VL/LLaVA-NeXT/blob/main/docs/LLaVA-NeXT-Video_0716.md
Have you tried evaluating the models with lmms-eval, and if so, were you able to reproduce the reported performance?
Thanks for the help!