How to evaluate the Open-Vocabulary Segmentation results in Table 2? #5
Comments
Hi, @Glupapa
Thanks for your prompt response!
@Glupapa The groundtruth masks are used in calculating these metrics.
Hi @CircleRadon! I have the same question as @Glupayy. How can we calculate the PQ, AP, and mIoU values, since Osprey cannot output masks? My guess is that for each sample, the PQ, AP, and mIoU can each only be 1 (when the predicted label is correct) or 0 (when the predicted label is wrong), and these 'binary' scores are averaged across all samples to obtain the values in Table 2. I was wondering whether this is correct. Thanks!
If this is the case, could you please share more insight into why these metrics are used rather than simply computing accuracy? Thanks!
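To make the guessed protocol above concrete: if the masks are taken from ground truth and the model only predicts a class label per region, mIoU could be computed roughly as sketched below. This is a hypothetical illustration of that reading, not the authors' actual evaluation code; the function name and its signature are invented for this example.

```python
import numpy as np

def miou_with_gt_masks(gt_masks, gt_labels, pred_labels, num_classes):
    """Hypothetical sketch: masks come from ground truth, and only the
    class label of each region is predicted. mIoU then reduces to
    per-class pixel agreement between predicted and true labels.

    gt_masks:    list of binary arrays, one per region
    gt_labels:   true class index per region
    pred_labels: predicted class index per region
    """
    inter = np.zeros(num_classes)
    union = np.zeros(num_classes)
    for mask, gt_c, pr_c in zip(gt_masks, gt_labels, pred_labels):
        area = mask.sum()
        if pr_c == gt_c:
            # Correct label: the whole region counts as intersection.
            inter[gt_c] += area
            union[gt_c] += area
        else:
            # Wrong label: the region is a miss for the true class
            # and a false positive for the predicted class.
            union[gt_c] += area
            union[pr_c] += area
    valid = union > 0
    return (inter[valid] / union[valid]).mean()
```

Under this reading, a region with a correct label contributes IoU 1 for its class and a wrongly labeled region contributes 0 (plus a false-positive penalty on the predicted class), which matches the "binary score averaged over samples" intuition while still weighting by region area.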
Hi,
Thank you for sharing your impressive work!
I got confused about Table 2: How are the open vocabulary segmentation metrics calculated?
Also, could you please explain how Osprey outputs the mask to calculate these metrics?
Thanks for your help!