
Use GCC 13 in CUDA 12 conda builds. #6221

Merged

Conversation

@bdice (Contributor) commented on Jan 13, 2025

Description

conda-forge now uses GCC 13 for its CUDA 12 builds. This PR updates our CUDA 12 conda builds to GCC 13 to stay aligned.
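
The compiler pin typically lives in a conda_build_config.yaml under the repo's conda recipes. As a minimal sketch of the kind of change involved (hypothetical path and keys; the actual recipe layout may differ):

    # conda/recipes/<package>/conda_build_config.yaml (illustrative only)
    c_compiler_version:
      - 13  # GCC 13, matching conda-forge's CUDA 12 builds
    cxx_compiler_version:
      - 13  # keep the C and C++ compiler pins in lockstep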

These PRs should be merged in a specific order, see rapidsai/build-planning#129 for details.

@bdice added the non-breaking (Non-breaking change) and improvement (Improvement / enhancement to an existing function) labels on Jan 13, 2025
copy-pr-bot (bot) commented on Jan 13, 2025

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.


@bdice marked this pull request as ready for review on January 13, 2025 18:52
@bdice requested review from a team as code owners on January 13, 2025 18:52
@bdice added the 5 - DO NOT MERGE (Hold off on merging; see PR for details) label on Jan 13, 2025
@bdice self-assigned this on Jan 13, 2025
@jameslamb removed the request for review from msarahan on January 13, 2025 19:58
@jakirkham (Member) commented:

Seeing the following error on CI:

2025-01-13T19:13:00.8954431Z     inlined from 'static cudaError_t cub::CUB_200700_700_750_800_860_900_NS::DeviceReduce::TransformReduce(void*, size_t&, InputIteratorT, OutputIteratorT, NumItemsT, ReductionOpT, TransformOpT, T, cudaStream_t) [with InputIteratorT = int*; OutputIteratorT = int*; ReductionOpT = thrust::plus<int>; TransformOpT = cuda::__4::__detail::__return_type_wrapper<bool, __nv_dl_wrapper_t<__nv_dl_trailing_return_tag<void (ML::HDBSCAN::Common::CondensedHierarchy<int, float>::*)(int*, int*, float*, int*, int), &ML::HDBSCAN::Common::CondensedHierarchy<int, float>::condense, bool, 1> > >; T = int; NumItemsT = int]' at $SRC_DIR/cpp/build/_deps/cccl-src/cub/cub/cmake/../../cub/device/device_reduce.cuh:1000:143:
2025-01-13T19:13:00.8957470Z $SRC_DIR/cpp/build/_deps/cccl-src/thrust/thrust/cmake/../../thrust/system/cuda/detail/core/triple_chevron_launch.h:143:19: error: 'dispatch' may be used uninitialized [-Werror=maybe-uninitialized]
2025-01-13T19:13:00.8958686Z   143 |     NV_IF_TARGET(NV_IS_HOST, (return doit_host(k, args...);), (return doit_device(k, args...);));
2025-01-13T19:13:00.8959168Z       |          ~~~~~~~~~^~~~~~~~~~~~

A review thread on cpp/test/CMakeLists.txt (now outdated) was resolved.
@jakirkham (Member) commented:

Seeing some failures related to cuDF's __dataframe__ deprecation: rapidsai/cudf#17736

Will follow up offline

@bdice (Contributor, Author) commented on Jan 16, 2025

I am trying to address the __dataframe__ issues in #6229, but there are some problems yet to solve. I would consider admin-merging this PR since the builds appear to be working. @dantegd Would you be okay with that? We are down to just one other build failure in cuvs to address, then I want to make this change across all of RAPIDS at once.

@bdice removed the 5 - DO NOT MERGE (Hold off on merging; see PR for details) label on Jan 17, 2025
@AyodeAwe merged commit d95cae5 into rapidsai:branch-25.02 on Jan 17, 2025 (56 of 66 checks passed)
Labels
CMake, conda (conda issue), CUDA/C++, improvement (Improvement / enhancement to an existing function), non-breaking (Non-breaking change)

6 participants