Issue with GPU memory allocation for pangu and graphcast #48
Comments
I met the same problem with pangu on my device (a 4060 with 32 GB GPU) today, but yesterday the model worked.
I tried creating a new Python environment: after `pip install ai-models`, I installed onnxruntime via conda, suspecting that the issue might be related to the numpy version (the version running smoothly for me is 2.0.0).
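For reference, a quick way to check which versions the environment actually resolved to, and to pin numpy to the version reported above (a sketch; 2.0.0 is this commenter's working version, not an officially documented requirement):

```shell
# Print the numpy and onnxruntime versions the active environment resolves to
python -c "import numpy, onnxruntime; print(numpy.__version__, onnxruntime.__version__)"

# Pin numpy to the version reported to work in this thread (assumption,
# not a documented requirement of ai-models)
pip install "numpy==2.0.0"
```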
I also ran into insufficient GPU memory. I wonder if there is any way to decrease the batch size when doing the prediction.
I fixed the problem: you need a single GPU with enough memory. For pangu, 27 GiB of GPU memory is required.
Hi
I'm trying to set up GraphCast and Pangu to run on a 3060 12GB GPU and am getting memory allocation errors for both models.
Pangu:
GraphCast:
I am using CUDA 12.4 with Pangu and 12.3 with GraphCast; I have tried CUDA 11, but it does not recognise my GPU. I am using cudnn=8.9.7.29. I have also tried setting XLA_PYTHON_CLIENT_PREALLOCATE=false, setting XLA_PYTHON_CLIENT_MEM_FRACTION to smaller values, and setting XLA_PYTHON_CLIENT_ALLOCATOR=platform. The model also runs fine on the CPU, just very slowly. Is there a fix for this, or is it simply that my GPU does not have enough VRAM?
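For reference, the XLA memory knobs mentioned above can be set like this before launching GraphCast (a sketch; the `ai-models` invocation and its flags are an assumption, check `ai-models --help` for the actual options):

```shell
# JAX/XLA GPU memory flags (GraphCast runs on JAX):
export XLA_PYTHON_CLIENT_PREALLOCATE=false   # don't preallocate ~75% of VRAM up front
export XLA_PYTHON_CLIENT_MEM_FRACTION=0.5    # or cap the preallocated fraction instead
export XLA_PYTHON_CLIENT_ALLOCATOR=platform  # allocate/free on demand (slower, less fragmentation)

# Then run the model (sketch; exact flags may differ)
ai-models graphcast
```

Note these flags only change how JAX reserves VRAM; if the model's working set genuinely exceeds the card's 12 GB, they will not avoid the allocation failure.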
Thanks.