[3.2.x] `ptx_get_version` cannot handle CUDA>12.6 #5737
ToT tree should be fine. Here is the code:
I see that b39c1e1 landed on main, which is almost certainly too big for backporting, but I'm wondering if

```diff
 if major == 12:
     if minor < 6:
         return 80 + minor
-    elif minor == 6:
+    elif minor >= 6:
         return 85
 if major == 11:
     return 70 + minor
```

would be acceptable for 3.2.x.
We're going to need Triton 3.2 for PyTorch 2.6, and it would be a pity if that cannot be used with CUDA 12.8. I'm not talking about sm100 support, just being able to use a CUDA 12.8 toolchain.
@bertmaher is handling the release branch, I'll defer to him.
Thanks. Can you please reopen the issue in the meantime? Otherwise the reduced visibility makes it all too easy for it to fall through the cracks.
@atalman Can we still patch this to
From our limited testing, I can confirm that

```diff
 if major == 12:
     if minor < 6:
         return 80 + minor
-    elif minor == 6:
+    elif minor >= 6:
         return 85
 if major == 11:
     return 70 + minor
```

works.
NVIDIA recently released CUDA 12.8, and I'm seeing failures while running Triton if it is present:

IMO it would be more appropriate to use `>= 6` in `triton/third_party/nvidia/backend/compiler.py` (lines 51 to 55 in 9641643), as it's less of an issue whether an older PTX version is used than if the whole thing errors out.