Question about discrete sampling in RT-DETRv2 #515
Also, why is there a deployment constraint when torch.grid_sample is used?
When you deploy, some toolchains and other proprietary inference engines for some NPUs may not support the grid_sample operator, so discrete sampling is used instead. As for the specific speed, it depends on the particular device and software; we only report the difference in theoretical computation.
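To make the deployment point concrete, the discrete lookup can be expressed with only basic ops (multiply, round, clamp, indexing) that virtually every engine supports, whereas the left path needs a dedicated GridSample operator. This is a hedged sketch, not the repository's actual implementation; shapes and names are illustrative:

```python
import torch
import torch.nn.functional as F

# Toy feature map: (N, C, H, W)
value = torch.arange(2 * 3 * 4 * 5, dtype=torch.float32).reshape(2, 3, 4, 5)
N, C, H, W = value.shape

# Normalized sampling locations in [0, 1], shape (N, P, 2) as (x, y).
loc = torch.rand(N, 7, 2)

# --- grid_sample path: requires the GridSample operator at inference time ---
grid = 2.0 * loc - 1.0                        # map [0, 1] -> [-1, 1]
g = F.grid_sample(value, grid.unsqueeze(2),   # grid shape (N, P, 1, 2)
                  mode='nearest', align_corners=False)  # -> (N, C, P, 1)

# --- discrete path: plain arithmetic + integer indexing, no special op ---
# With align_corners=False, grid_sample's source index is loc * size - 0.5,
# so rounding that quantity reproduces nearest-mode sampling.
x = (loc[..., 0] * W - 0.5).round().long().clamp(0, W - 1)
y = (loc[..., 1] * H - 0.5).round().long().clamp(0, H - 1)
b = torch.arange(N).view(N, 1)                # broadcast batch index
d = value[b, :, y, x]                         # -> (N, P, C)
```

For in-range locations the two paths pick the same pixels; the difference is purely which operators the exported graph contains.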
Thank you so much for your well-explained answer. If, for deployment, you use discrete sampling, why is the model not trained using torch.grid_sample(..., mode='nearest')?
@lyuwenyu:
Note that I had no issues with the […]. Note also that I have modified the library for 1-channel grayscale images, but I don't think I have broken it in such a specific way, and as late in the process as the transformer.
Adding separate clamping for width and height made it work for me (it didn't seem to affect performance, though I only ran one tuning epoch to be sure).
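The actual snippet did not survive extraction here. A minimal sketch of what "separate clamping for width and height" might look like, assuming a hypothetical discrete-sampling helper: clamping the x index by W - 1 and the y index by H - 1 independently, so non-square feature maps can never produce out-of-range indices (a single shared bound would break whenever H != W):

```python
import torch

def discrete_sample(value: torch.Tensor, loc: torch.Tensor) -> torch.Tensor:
    """Illustrative discrete sampling with per-axis clamping.

    value: (N, C, H, W) feature map; loc: (N, P, 2) normalized (x, y) in [0, 1].
    Returns sampled features of shape (N, P, C).
    """
    N, C, H, W = value.shape
    x = (loc[..., 0] * W).long().clamp(0, W - 1)  # clamp x with the width bound
    y = (loc[..., 1] * H).long().clamp(0, H - 1)  # clamp y with the height bound
    b = torch.arange(N, device=value.device).view(N, 1)
    return value[b, :, y, x]
```

The key point is the pair of clamps: each axis uses its own size, which is what the fix above describes.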
Hello,
I love your work. I have a question regarding the discrete sampling.
You state in the paper: […]
Could you please explain how much faster the model is when you use discrete sampling?
Also, what is the difference between your proposed grid sampling and torch.grid_sample with mode='nearest'?