Why don't you keep image ratio? #139
I see that in the torch inference code you resize without keeping the aspect ratio (the same as during training). But for ONNX inference you "resize an image while maintaining aspect ratio and pad it". Is there a reason for that? I would assume you lose accuracy if you train on squeezed images and then keep the ratio during ONNX inference.

Overall I really like your work and would like to contribute. What do you think about this aspect-ratio issue? I would do this: implement aspect-ratio preservation as a flag for both training and inference. During inference I would also cut the grey padding, so we don't waste time computing the 114-valued pixels (this has worked great for me before, with several YOLO models).
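To make the "cut the grey padding" idea concrete: instead of padding the ratio-preserved image all the way to a full square, you only pad each side up to the next multiple of the network stride. A minimal sketch of the shape arithmetic, assuming a target size of 640 and a stride of 32 (typical for FPN-style backbones; the function name is hypothetical):

```python
import math

def minimal_pad_shape(h, w, size=640, stride=32):
    """Scale so the longer side fits `size`, keep the aspect ratio,
    and pad only up to the next stride multiple instead of a full square."""
    r = min(size / h, size / w)
    new_h, new_w = round(h * r), round(w * r)
    pad_h = math.ceil(new_h / stride) * stride
    pad_w = math.ceil(new_w / stride) * stride
    return (new_h, new_w), (pad_h, pad_w)

# A 720x1280 frame becomes 360x640, padded to 384x640 --
# roughly 40% fewer pixels than a full 640x640 square.
print(minimal_pad_shape(720, 1280))
```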
@Peterande
Let me know if you guys are interested in any of these or have other ideas for contribution.
We are excited about these ideas! They all seem super valuable and will take our project to the next level. We're looking forward to your contributions with great anticipation.
@ArgoHA I also noticed the aspect ratio issue. Let me know if I can help. |
Is there a reason you train D-FINE without keeping the image ratio? You just use a resize function that squeezes the image to a square, whereas detectors usually use a letterbox resize. Is there a reason you are not doing that, and that I should not add it to the training pipeline?
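For reference, the letterbox resize mentioned above might look like this minimal numpy sketch (in real pipelines you would use `cv2.resize` with proper interpolation; the nearest-neighbor indexing here just keeps the example dependency-free, and the grey value 114 follows YOLO convention):

```python
import numpy as np

def letterbox(img, size=640, pad_value=114):
    """Resize keeping the aspect ratio, then pad to a size x size square."""
    h, w = img.shape[:2]
    r = min(size / h, size / w)
    new_h, new_w = round(h * r), round(w * r)
    # Nearest-neighbor resize via index mapping (cv2.resize in practice).
    ys = (np.arange(new_h) / r).astype(int).clip(0, h - 1)
    xs = (np.arange(new_w) / r).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    # Centre the resized image on a grey canvas.
    out = np.full((size, size) + img.shape[2:], pad_value, dtype=img.dtype)
    top = (size - new_h) // 2
    left = (size - new_w) // 2
    out[top:top + new_h, left:left + new_w] = resized
    return out, r, (top, left)
```

The returned scale `r` and offsets `(top, left)` are what you need to map predicted boxes back to the original image coordinates.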