Resample2D function to CPU compatible code #190
Comments
Yeah, I'd also be interested in that.
zsameem added a commit to zsameem/flownet2-pytorch that referenced this issue on May 16, 2020:
…ple2D layer and the ChannelNorm layer in native PyTorch and C++ to support inference on CPU.

The main bottleneck is the Correlation layer, on which the FlowNetC architecture relies. This PR provides two implementations of the Correlation layer:
- PyTorch-native implementation. This requires no extra setup.
- Optimized C++ implementation for inference on CPU.

Also provided are PyTorch-native implementations of Resample2D and ChannelNorm. Since the PyTorch implementation is quite efficient (completely vectorized, with no Python for loops), a C++ implementation is not needed. These layers also run by default on the GPU, depending on whether the input tensors are on the GPU, and are slightly slower than the provided CUDA implementation. See the comments at the top of models.py and networks/FlowNetC.py for more details and for how to switch to CPU mode. Backward passes are not yet implemented but will be added in the future.

run_a_pair.py is replaced with a generic script called test.py that simply tests functionality; run_a_pair.py had hardcoded paths. Two frames from Sintel are also added in the test_images dir so that functionality and setup can be checked quickly.

Resolves: NVIDIA#190
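For readers who just need a starting point, a vectorized Resample2D warp and a ChannelNorm can be sketched in a few lines of plain PyTorch. This is an illustrative sketch, not the code from the referenced commit; the function names, the assumed flow channel order (x, then y), and the align_corners choice are assumptions.

```python
import torch
import torch.nn.functional as F

def resample2d(img, flow):
    """Bilinearly warp img (B, C, H, W) by a dense flow field (B, 2, H, W).
    Runs on CPU or GPU; a hypothetical stand-in for the CUDA Resample2D layer."""
    _, _, H, W = img.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, device=img.device, dtype=img.dtype),
        torch.arange(W, device=img.device, dtype=img.dtype),
        indexing="ij",
    )
    # Assumed channel order: flow[:, 0] = horizontal (x), flow[:, 1] = vertical (y)
    grid_x = xs.unsqueeze(0) + flow[:, 0]
    grid_y = ys.unsqueeze(0) + flow[:, 1]
    # grid_sample expects sampling locations normalized to [-1, 1]
    grid_x = 2.0 * grid_x / max(W - 1, 1) - 1.0
    grid_y = 2.0 * grid_y / max(H - 1, 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)  # (B, H, W, 2)
    return F.grid_sample(img, grid, mode="bilinear",
                         padding_mode="zeros", align_corners=True)

def channel_norm(x, eps=1e-8):
    """Per-pixel L2 norm over channels: (B, C, H, W) -> (B, 1, H, W)."""
    return torch.sqrt((x * x).sum(dim=1, keepdim=True) + eps)
```

In FlowNet2-style pipelines these are typically combined as `channel_norm(img1 - resample2d(img2, flow))` to obtain a per-pixel warping error.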
What does the CUDA code for resample2d, correlation and channel_norm do? Is it possible to write CPU-equivalent code, so that inference can be run on CPU devices?
Thanks
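To summarize what the three CUDA ops compute: resample2d bilinearly warps a tensor by a flow field (see the sketch after the commit message above), channel_norm takes a per-pixel L2 norm over the channel dimension, and correlation compares each location of one feature map against displaced locations of another to build a cost volume. A naive, device-agnostic correlation can be sketched in PyTorch as below; it assumes unit strides, a pointwise kernel, and mean normalization over channels, whereas the repository's CUDA layer exposes additional parameters (padding, kernel size, strides) that this sketch ignores.

```python
import torch
import torch.nn.functional as F

def correlation_naive(f1, f2, max_disp=4):
    """Cost volume between feature maps f1, f2 of shape (B, C, H, W).
    For every integer displacement (dy, dx) with |dy|, |dx| <= max_disp,
    one output channel holds the channel-wise mean of f1 * shifted(f2).
    Output shape: (B, (2 * max_disp + 1) ** 2, H, W)."""
    _, _, H, W = f1.shape
    # Zero-pad f2 so every shifted crop stays in bounds
    f2_pad = F.pad(f2, (max_disp, max_disp, max_disp, max_disp))
    out = []
    for dy in range(2 * max_disp + 1):
        for dx in range(2 * max_disp + 1):
            f2_shift = f2_pad[:, :, dy:dy + H, dx:dx + W]
            out.append((f1 * f2_shift).mean(dim=1, keepdim=True))
    return torch.cat(out, dim=1)
```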