
Introducing the MMWorld benchmark #677

Open
jkooy opened this issue Jan 27, 2025 · 1 comment

Comments


jkooy commented Jan 27, 2025

Dear Qwen team,

We are big fans of the Qwen series and noticed that you have evaluated your models on several video-language benchmarks. We were wondering whether you might be interested in evaluating your models on our MMWorld benchmark (https://arxiv.org/abs/2406.08407). MMWorld is designed to assess models' reasoning capabilities across a range of tasks and disciplines, and it could serve as a useful evaluation benchmark for your model development. Thank you!

ShuaiBai623 (Collaborator) commented

Thank you for the recognition and for your video evaluation work. We will test it after the Spring Festival.
