Huge memory usage for GPU when using textures #23
Comments
I've uploaded my code here. The huge memory usage shows up when enabling the GPU renderer.

Also, I'm getting a lot of artefacts when enabling clipping (even on the CPU): the first render is fine, but once I compose again (due to changes in the scene), everything but the first layer seems to disappear. I'm probably doing something wrong here.
Thanks for reporting this. I'll take a look at the high memory consumption and report back.
Part of the issue with the memory usage is the large atlas size for textures (4096 x 4096), but also the format, which is Rgba16Float. This is easy to improve upon, but there is another underlying issue: performance does not seem great on macOS GPUs. If someone has more time to investigate which of the shaders is taking too long, it would be appreciated, since wgpu does not have timestamp queries implemented on Metal. My intuition is that the sorter is the main issue.
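For a rough sense of scale (my arithmetic, not a figure from the thread): Rgba16Float is four 16-bit channels, i.e. 8 bytes per texel, so a single full-size atlas already accounts for 128 MiB.

```rust
// Back-of-the-envelope atlas footprint, assuming one 4096 x 4096
// Rgba16Float texture (4 channels x 2 bytes per texel).
fn main() {
    let side: u64 = 4096;
    let bytes_per_texel: u64 = 4 * 2;
    let bytes = side * side * bytes_per_texel;
    println!("atlas footprint: {} MiB", bytes / (1024 * 1024)); // 128 MiB
}
```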
About the cropping, I don't think I understand the issue. If you're trying to crop the geometry that lies outside of the screen, this is something we should optimize anyway. See #25.
Thanks for the reply!
Yeah, that's what I was trying to do by having one "base" layer (the size of the window) and then clipping all additional layers after that. It renders fine at first, but when the scene changes (e.g. I zoom or translate the transform), most layers seem to disappear (at least that's how I interpret it): clipping.mp4

However, #25 would be the much better solution anyway.
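A minimal sketch of the layer setup described above, going from memory of forma's styling types (Props, Func::Clip, Func::Draw); the exact field and method names are assumptions rather than verified API, but the shape of the idea is a clip layer followed by the layers it masks:

```rust
use forma::styling::{Color, Fill, FillRule, Func, Props, Style};

// Hypothetical sketch: the clip layer masks the N layers that follow it
// in draw order; drawn layers opt in via `is_clipped`. Names are from
// memory of forma's styling module, not verified against the current API.
fn clip_props(layers_to_clip: usize) -> Props {
    Props {
        fill_rule: FillRule::NonZero,
        func: Func::Clip(layers_to_clip),
    }
}

fn clipped_draw_props(color: Color) -> Props {
    Props {
        fill_rule: FillRule::NonZero,
        func: Func::Draw(Style {
            is_clipped: true, // assumption: layers must opt in to the clip
            fill: Fill::Solid(color),
            ..Default::default()
        }),
    }
}
```

(Purely a guess: if the clip's layer count or ordering goes stale when the scene is recomposed, everything after the clip would be masked out, which would look like the disappearance in the video.)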
I merged #29, which should improve performance when zooming in. We can slowly improve on the other issues as well.
Thanks! I'll try it out tomorrow |
@terhechte How is the parley integration coming? Did the performance improve with the PR from @dragostis? |
@FoundationKen Yeah, performance and memory usage did improve. Then I got distracted by another side project 🤷 |
Hey, so I had some fun porting text rendering to Forma using parley:
However, when using the GPU, the memory usage is quite high (e.g. for this demo, in release mode, ~400MB on my Mac). When I then start resizing the window, it quickly grows to ~600MB and beyond.
This doesn't happen when using a CPU runner. If I render text without emoji, I don't see this behaviour with the GPU runner either.
Emoji are rendered by rasterizing each glyph to a small image (16x16 right now) and then drawing that image with a 16x16 Path.
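A minimal sketch of that per-emoji setup, assuming forma exposes a texture fill along the lines of Fill::Texture(Texture { transform, image }); those names, and the PathBuilder calls, are from memory and should be treated as assumptions:

```rust
use forma::prelude::*; // assumption: these types are re-exported in a prelude

// Build a 16x16 quad at the glyph position; the rasterized emoji bitmap
// arrives as `image`. Texture/Image names are assumptions, not verified API.
fn emoji_path(x: f32, y: f32) -> Path {
    let mut builder = PathBuilder::new();
    builder.move_to(Point::new(x, y));
    builder.line_to(Point::new(x + 16.0, y));
    builder.line_to(Point::new(x + 16.0, y + 16.0));
    builder.line_to(Point::new(x, y + 16.0));
    builder.build()
}

fn emoji_props(image: Image, transform: AffineTransform) -> Props {
    Props {
        fill_rule: FillRule::NonZero,
        func: Func::Draw(Style {
            fill: Fill::Texture(Texture { transform, image }),
            ..Default::default()
        }),
    }
}
```

If each glyph instance allocates a fresh Image, every one presumably claims a new slot in the texture atlas, which would line up with memory growing while the window is resized and the scene is recomposed.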
So it's only when rendering emoji images on the GPU that I see the huge (and steadily increasing) memory consumption.
Do I need to perform some sort of cleanup when rendering images using Forma with the GPU renderer?