Implement experimental GPU two-phase occlusion culling for the standard 3D mesh pipeline. #17413
Conversation
*Occlusion culling* allows the GPU to skip the vertex and fragment shading overhead for objects that can be quickly proved to be invisible because they're behind other geometry. A depth prepass already eliminates most fragment shading overhead for occluded objects, but the vertex shading overhead, as well as the cost of testing and rejecting fragments against the Z-buffer, is presently unavoidable for standard meshes. We currently perform occlusion culling only for meshlets. But other meshes, such as skinned meshes, can benefit from occlusion culling too, in order to avoid the transform and skinning overhead for unseen meshes.

This commit adapts the same [*two-phase occlusion culling*] technique that meshlets use to Bevy's standard 3D mesh pipeline, active when both the new `OcclusionCulling` component and the `DepthPrepass` component are present on the camera. It has these steps:

1. *Early depth prepass*: We use the hierarchical Z-buffer from the previous frame to cull meshes for the initial depth prepass, effectively rendering only the meshes that were visible in the last frame.
2. *Early depth downsample*: We downsample the depth buffer to create another hierarchical Z-buffer, this time with the current view transform.
3. *Late depth prepass*: We use the new hierarchical Z-buffer to test all meshes that weren't rendered in the early depth prepass. Any meshes that pass this check are rendered.
4. *Late depth downsample*: Again, we downsample the depth buffer to create a hierarchical Z-buffer in preparation for the early depth prepass of the next frame. This step is done after all the rendering, in order to account for custom phase items that might write to the depth buffer.

Note that this patch has no effect on the per-mesh CPU overhead for occluded objects, which remains high for a GPU-driven renderer due to the lack of `cold-specialization` and retained bins.

If `cold-specialization` and retained bins weren't on the horizon, then a more traditional approach like potentially visible sets (PVS) or low-res CPU rendering would probably be more efficient than the GPU-driven approach that this patch implements for most scenes. However, at this point the amount of effort required to implement a PVS baking tool or a low-res CPU renderer would probably be greater than landing `cold-specialization` and retained bins, and the GPU-driven approach is the more modern one anyway. It does mean that the performance improvements from occlusion culling as implemented in this patch *today* are likely to be limited, because of the high CPU overhead for occluded meshes.

Note also that this patch currently doesn't implement occlusion culling for 2D objects or shadow maps. Those can be addressed in a follow-up. Additionally, note that the techniques in this patch require compute shaders, which excludes support for WebGL 2.

This PR is marked experimental because of known precision issues with the downsampling approach when applied to non-power-of-two framebuffer sizes (i.e. most of them). These precision issues can, in rare cases, cause objects to be judged occluded that in fact are not. (I've never seen this in practice, but I know it's possible; it tends to be likelier to happen with small meshes.) As a follow-up to this patch, we'd like to switch to the [SPD-based hi-Z buffer shader from the Granite engine], which doesn't suffer from these problems, at which point we should be able to graduate this feature from experimental status. I opted not to include that rewrite in this patch for two reasons: (1) @JMS55 is planning on doing the rewrite to coincide with the new availability of image atomic operations in Naga; (2) to reduce the scope of this patch.

[*two-phase occlusion culling*]: https://medium.com/@mil_kru/two-pass-occlusion-culling-4100edcad501
[Aaltonen SIGGRAPH 2015]: https://www.advances.realtimerendering.com/s2015/aaltonenhaar_siggraph2015_combined_final_footer_220dpi.pdf
[Some literature]: https://gist.github.com/reduz/c5769d0e705d8ab7ac187d63be0099b5?permalink_comment_id=5040452#gistcomment-5040452
[SPD-based hi-Z buffer shader from the Granite engine]: https://github.com/Themaister/Granite/blob/master/assets/shaders/post/hiz.comp
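The early/late flow described above can be sketched as CPU-side pseudologic (illustrative only: the real implementation runs as GPU compute shaders over indirect draw lists, and none of these type or function names are Bevy APIs):

```rust
// Illustrative sketch of two-phase occlusion culling; not Bevy code.
#[derive(Clone, Copy)]
struct Mesh {
    id: u32,
    // What the previous frame's hierarchical Z-buffer reported.
    visible_last_frame: bool,
    // What a test against the freshly downsampled hi-Z would report.
    occluded_now: bool,
}

/// Returns (ids drawn in the early prepass, ids drawn in the late prepass).
fn two_phase_cull(meshes: &[Mesh]) -> (Vec<u32>, Vec<u32>) {
    // 1. Early depth prepass: draw only what was visible last frame.
    let early: Vec<u32> = meshes
        .iter()
        .filter(|m| m.visible_last_frame)
        .map(|m| m.id)
        .collect();

    // 2. Early depth downsample: rebuild the hi-Z from the depth buffer (elided).

    // 3. Late depth prepass: retest everything skipped in step 1 against the
    //    new hi-Z and draw whatever passes.
    let late: Vec<u32> = meshes
        .iter()
        .filter(|m| !m.visible_last_frame && !m.occluded_now)
        .map(|m| m.id)
        .collect();

    // 4. Late depth downsample for next frame's early pass happens after all
    //    rendering (elided).
    (early, late)
}
```

The key property: every mesh is either drawn in exactly one of the two passes or proven occluded against a hi-Z built from the current frame's depth, so the result is conservative and correct despite reusing last frame's visibility.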
Code looks good, just a minor comment around the experimental module and marking it as `#[doc(hidden)]` for SemVer reasons. I unfortunately couldn't get the new `occlusion_culling` example to run on my laptop (Intel i5-1240p iGPU, Windows 10) with either the DX12 or Vulkan backends.
```wgsl
#endif  // MULTISAMPLE
#endif  // MESHLET
#endif  // MESHLET_VISIBILITY_BUFFER_RASTER_PASS_OUTPUT
```
I am reminded of how spoiled I am getting to just write Rust.
```diff
@@ -14,6 +14,7 @@ pub mod core_2d;
 pub mod core_3d;
 pub mod deferred;
 pub mod dof;
+pub mod experimental;
```
Might be good to annotate this with `#[doc(hidden)]`. This makes it SemVer-compatible to include breaking changes in this module.
Do we care about SemVer compatibility here though if we aren't shipping this in a point release? My concern about `#[doc(hidden)]` is that it makes the feature less discoverable, and we want testing on it, as it's the kind of thing that could have a lot of bugs.
Fair point! While Bevy is pre-1.0 it's probably not important anyway, since every release is a breaking release.
Can confirm the example now runs on my i5-1240p. In the DX12 backend it says my platform doesn't support occlusion culling, but runs the example fine otherwise. On Vulkan it works as expected, culling approximately 30 meshes. Nice work!
Seeing a panic on my M2 MBP:

```
2025-01-20T01:09:54.639116Z ERROR wgpu::backend::wgpu_core: Handling wgpu errors as fatal by default
thread 'Compute Task Pool (4)' panicked at /Users/char/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/wgpu-23.0.1/src/backend/wgpu_core.rs:996:18:
wgpu error: Validation Error

Caused by:
  In Device::create_bind_group, label = 'preprocess_late_indexed_gpu_occlusion_culling_bind_group'
    Buffer offset 320 does not respect device's requested `min_storage_buffer_offset_alignment` limit 256

note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Encountered a panic in system `bevy_pbr::render::gpu_preprocess::prepare_preprocess_bind_groups`!
```
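The validation error above is an offset-alignment violation: bind-group buffer offsets must be multiples of the device's `min_storage_buffer_offset_alignment` limit (256 on this device), and 320 isn't. One common fix is to round each per-view offset up to the limit before building the bind group. A minimal sketch of that rounding (`align_offset` is a hypothetical helper, not Bevy or wgpu code):

```rust
/// Round `offset` up to the next multiple of `alignment`, which must be a
/// power of two (as wgpu's `min_storage_buffer_offset_alignment` always is).
fn align_offset(offset: u64, alignment: u64) -> u64 {
    debug_assert!(alignment.is_power_of_two());
    // Add (alignment - 1), then clear the low bits to snap down to a multiple.
    (offset + alignment - 1) & !(alignment - 1)
}
```

With a 256-byte limit, an offset of 320 would be padded up to 512; offsets that are already aligned are left unchanged.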
```rust
texture_storage_2d(TextureFormat::R32Float, StorageTextureAccess::WriteOnly),
texture_storage_2d(TextureFormat::R32Float, StorageTextureAccess::WriteOnly),
texture_storage_2d(TextureFormat::R32Float, StorageTextureAccess::WriteOnly),
texture_storage_2d(TextureFormat::R32Float, StorageTextureAccess::ReadWrite),
```
Is there a reason this one is marked `ReadWrite`?
We call `textureStore` on it. See `mip_6` in `downsample_depth.wgsl`.
Ah yup, I see it's the handoff point between the first and second downsample passes.
```diff
@@ -16,7 +16,9 @@ pub mod graph {
 #[derive(Debug, Hash, PartialEq, Eq, Clone, RenderLabel)]
 pub enum Node3d {
     MsaaWriteback,
-    Prepass,
+    EarlyPrepass,
```
Is the early prepass running the full prepass? E.g. for deferred is it doing the gbuffer rendering too?
Not sure, but it might make more sense to do depth only in the early pass, and then depth + other attachments in the late pass.
I'm pretty sure that Griffin said that you usually split the full prepass into early and late phases rather than having a separate z-prepass, but I'd rather not make that change here as this patch is too big as it is. Instead I just documented that occlusion culling is currently incompatible with deferred. We can add support for deferred in a followup.
```diff
@@ -1,8 +1,16 @@
 #ifdef MESHLET_VISIBILITY_BUFFER_RASTER_PASS_OUTPUT
 @group(0) @binding(0) var<storage, read> mip_0: array<u64>; // Per pixel
```
Just as a note: I believe all the meshlet-specific stuff is going to disappear here once wgpu 24 is merged and I can switch back to an image-based visbuffer.
Did part of my review, will do the rest another time. Focused mainly on the meshlet, depth downsample, and culling test parts. Haven't yet looked at the code for applying the occlusion culling to our main pipeline.
I believe I've fixed all the issues that I know about and have addressed the review comments. See the commit descriptions for more information.
Crash on mac fixed. Lgtm! ✨
Another herculean effort from you on all fronts; the docs are great, thanks.

A few questions:

- What's the path forward for occlusion culling shadow views? Not planned, not worth it, or worth it but annoying to do/not now?
- How does the hi-Z test interact with TAA jitter?
- What frustum are you culling by for the early depth pass? Culling by only the previous view frustum will render more than necessary, while culling only by the current view frustum will test against unwritten parts of the previous hi-Z, likely resulting in overdraw. This overdraw cost has to be eaten either in the early pass or the depth pass though, because we don't have information for the newly disoccluded region of the screen, so I don't think it matters, meaning culling by the current view is ideal.
```diff
         }
     }
 }

 fn preprocess_direct_bind_group_layout_entries() -> DynamicBindGroupLayoutEntries {
-    DynamicBindGroupLayoutEntries::sequential(
+    DynamicBindGroupLayoutEntries::new_with_indices(
```
I wasn't aware we had a `new_with_indices`; this is much nicer to use.
```wgsl
let uv_pos = ndc_to_uv(ndc_pos.xy);

// Update the AABB and maximum view-space depth.
if (i == 0u) {
```
This `i == 0u` case can be removed by initializing `max_depth_view` to `-inf`, and `aabb` to `vec4(inf, inf, -inf, -inf)`, I think.
OK, I did that. Note that I had to use a bitcast because Naga complained if I tried `1.0 / 0.0` or `-1.0 / 0.0`.
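For reference, a small Rust sketch of the same trick (illustrative, not part of the patch): `0x7f800000` and `0xff800000` are the IEEE-754 single-precision bit patterns for positive and negative infinity, which is what a WGSL `bitcast<f32>(...)` of those constants produces, and why initializing the min/max fold with them removes the first-iteration special case:

```rust
// IEEE-754 f32 infinities by bit pattern, mirroring a WGSL bitcast when the
// constant expression `1.0 / 0.0` is rejected at shader compile time.
fn f32_inf() -> f32 {
    f32::from_bits(0x7f80_0000)
}

fn f32_neg_inf() -> f32 {
    f32::from_bits(0xff80_0000)
}
```

Because `min(+inf, x) == x` and `max(-inf, x) == x` for any finite `x`, seeding an AABB with `(inf, inf, -inf, -inf)` makes the first loop iteration behave identically to the old `i == 0u` branch.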
```wgsl
let depth_quad_a = textureLoad(depth_pyramid, aabb_top_left, depth_level).x;
let depth_quad_b = textureLoad(depth_pyramid, aabb_top_left + vec2(1u, 0u), depth_level).x;
let depth_quad_c = textureLoad(depth_pyramid, aabb_top_left + vec2(0u, 1u), depth_level).x;
let depth_quad_d = textureLoad(depth_pyramid, aabb_top_left + vec2(1u, 1u), depth_level).x;
```
I believe it's recommended to sample 16 pixels from a one-step-finer mip level, which gives a tighter (less conservative) depth bound. This is definitely something to punt to another PR though.
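For context, the 2x2 fetch above relies on choosing a pyramid level at which the mesh's screen-space AABB fits under two texels per axis. A hedged Rust sketch of that conventional selection (the function name and exact clamping are illustrative, not this patch's code):

```rust
/// Choose the depth-pyramid mip level at which an AABB of the given
/// screen-space extents (measured in mip-0 texels) is conservatively
/// covered by a 2x2 texel fetch.
fn select_hiz_mip(extent_x: f32, extent_y: f32) -> u32 {
    let max_extent = extent_x.max(extent_y).max(1.0);
    // At mip L, one texel spans 2^L mip-0 texels, so two texels span
    // 2^(L+1); solve 2^(L+1) >= max_extent for the smallest integer L.
    (max_extent / 2.0).log2().ceil().max(0.0) as u32
}
```

Sampling 16 texels from one level finer, as suggested, trades a few extra loads for a tighter depth bound and therefore fewer false "visible" results.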
```rust
/// a significant slowdown.
///
/// Occlusion culling currently requires a `DepthPrepass`. If no depth prepass
/// is present on the view, the [`OcclusionCulling`] component will be ignored.
```
Maybe have `OcclusionCulling` require `DepthPrepass` as a required component?
I realize that the current way just aligns with how all the other prepasses do it. Let's punt this to a required-components prepass migration PR.
I tried to do that, but the problem is that `DepthPrepass` lives in `bevy_core_pipeline`, which is downstream of `bevy_render`, so `OcclusionCulling` can't refer to it.
Shadow views: planned, but I didn't want to do it in this patch since it'd make it bigger and more complex.
I think TAA jitter is basically just a different view matrix, so it's essentially just the same as a regular camera movement (i.e. it should just work).
It tests against the current frame's frustum.
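As background for the frustum question, a minimal sketch of the standard conservative AABB-vs-plane test such culling relies on, using the "p-vertex" trick of checking only the corner furthest along the plane normal. Names and the plane convention (inside iff `normal . x + d >= 0`) are illustrative, not Bevy's actual frustum code:

```rust
// Hypothetical frustum-plane test; not Bevy API.
struct Plane {
    normal: [f32; 3],
    d: f32, // point x is inside iff dot(normal, x) + d >= 0
}

/// Conservative test: true only if the whole box is strictly behind `plane`.
fn aabb_outside_plane(min: [f32; 3], max: [f32; 3], plane: &Plane) -> bool {
    // The p-vertex: per axis, take max if the normal points positive, else min.
    let mut p = [0.0f32; 3];
    for i in 0..3 {
        p[i] = if plane.normal[i] >= 0.0 { max[i] } else { min[i] };
    }
    // If even the most-inside corner is behind the plane, the box is out.
    plane.normal[0] * p[0] + plane.normal[1] * p[1] + plane.normal[2] * p[2] + plane.d < 0.0
}
```

Running this per frustum plane with the current view's planes, as the patch does, rejects meshes outside the current frustum before the hi-Z depth test ever runs.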
A new example, `occlusion_culling`, has been added. It demonstrates objects becoming quickly occluded and disoccluded by dynamic geometry and shows the number of objects that are actually being rendered. Also, a new `--occlusion-culling` switch has been added to `scene_viewer`, in order to make it easy to test this patch with large scenes like Bistro.

Migration guide

Create work item buffers with `bevy::render::batching::gpu_preprocessing::get_or_create_work_item_buffer`, not `PreprocessWorkItemBuffers::new`. See the `specialized_mesh_pipeline` example.

Showcase

Occlusion culling example:

Bistro zoomed out, before occlusion culling:

Bistro zoomed out, after occlusion culling:

In this scene, occlusion culling reduces the number of meshes Bevy has to render from 1591 to 585.