
[Docker] Shared iGPU RAM/VRAM leaks after container stops #771

Open
rpolyano opened this issue Jan 15, 2025 · 5 comments

rpolyano commented Jan 15, 2025

Describe the bug

When running from docker.io/intel/intel-extension-for-pytorch:2.5.10-xpu on an iGPU that shares host RAM with VRAM, the host RAM used while loading a model and running inference remains reserved and unusable, both as system RAM and as VRAM, even after the container exits.

The issue occurs with both podman and docker, on both Fedora and Arch, and with both the default and jemalloc allocators (I was not able to get ipex to pick up tcmalloc, even after adding google-perftools to the container).
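
For reference, one way to force a specific allocator is to preload it explicitly rather than relying on ipexrun to pick it up; this is only a sketch, and the library path below is an assumption about where Ubuntu-based images install it from the google-perftools packages (check with dpkg -L libgoogle-perftools4 inside the container):

# Hypothetical: preload tcmalloc directly; adjust the .so path to whatever the image actually ships.
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4 python3 main.py --listen --use-pytorch-cross-attention --lowvram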

The issue can be easily reproduced (on this hardware) by simply running the ComfyUI flux example:

Dockerfile

FROM docker.io/intel/intel-extension-for-pytorch:2.5.10-xpu
WORKDIR /comfyui
ADD "https://github.com/comfyanonymous/ComfyUI.git#v0.3.1" ./
RUN python3 -m pip install -r requirements.txt
ENTRYPOINT ["ipexrun", "main.py", "--listen", "--use-pytorch-cross-attention", "--lowvram"]
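
Note that ADD with a git URL and a #v0.3.1 fragment requires BuildKit; if the build fails on that line, forcing BuildKit explicitly may help (a sketch, assuming a Docker release recent enough to support git sources in ADD):

# BuildKit is the default builder in recent Docker releases; this forces it on older setups.
DOCKER_BUILDKIT=1 docker build -t localhost/comfy-ipex:latest .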

Steps to reproduce

  1. Build and start the container:

docker build -t localhost/comfy-ipex:latest .

docker run --rm --name comfyui -it --privileged -v /dev/dri/by-path:/dev/dri/by-path --ipc=host -p 8188:8188 -v /.../dir with checkpoints/single flux fp8 model>:/comfyui/models:Z localhost/comfy-ipex:latest
  2. Run the example workflow using the single fp8 checkpoint provided: https://comfyanonymous.github.io/ComfyUI_examples/flux/#flux-dev-1
  3. Observe that the inference completes.
  4. Stop + exit the container.
  5. Observe that memory usage in top (or similar) is still reported as used, even though no single process is using it (see the sketch after this list for a way to quantify this).
  6. Restart from step 1.
  7. Observe that in the second run, the process errors with OOM (killed by the OS, or UR_ERROR_OUT_OF_MEMORY + UR_ERROR_DEVICE_LOST).
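
A quick way to quantify the leak from the host is to snapshot memory counters before step 2 and again after step 4 (a diagnostic sketch; the assumption is that the shared iGPU allocations show up under Shmem in /proc/meminfo):

# Compare these counters between the two snapshots.
free -h
grep -E 'MemAvailable|Shmem' /proc/meminfo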

Versions

PyTorch version: N/A
PyTorch CXX11 ABI: N/A
IPEX version: N/A
IPEX commit: N/A
Build type: N/A

OS: Fedora Linux 41.20250111.0 (Sway Atomic) (x86_64)
GCC version: (GCC) 14.2.1 20240912 (Red Hat 14.2.1-3)
Clang version: N/A
IGC version: N/A
CMake version: version 3.30.5
Libc version: glibc-2.40

Python version: 3.13.1 (main, Dec 9 2024, 00:00:00) [GCC 14.2.1 20240912 (Red Hat 14.2.1-3)] (64-bit runtime)
Python platform: Linux-6.12.9-400.vanilla.fc41.x86_64-x86_64-with-glibc2.40
Is XPU available: N/A
DPCPP runtime: N/A
MKL version: N/A

GPU models and configuration onboard:
Intel Arc Graphics 130V / 140V @ 1.95 GHz [Integrated]

GPU models and configuration detected:
N/A

Driver version:

  • intel_opencl: N/A
  • level_zero: N/A

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 42 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) Ultra 7 258V
CPU family: 6
Model: 189
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
CPU(s) scaling MHz: 19%
CPU max MHz: 4800.0000
CPU min MHz: 400.0000
BogoMIPS: 6604.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb intel_ppin ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni lam wbnoinvd dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid bus_lock_detect movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 320 KiB (8 instances)
L1i cache: 512 KiB (8 instances)
L2 cache: 14 MiB (5 instances)
L3 cache: 12 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

@rpolyano changed the title from "[Docker] Shared iGPU RAM leak" to "[Docker] Shared iGPU RAM/VRAM leaks after container stops" on Jan 15, 2025
@feng-intel self-assigned this Jan 16, 2025
@feng-intel

How do you "Stop + Exit the container"?
Does the command below still show the container?
$ docker ps -a


rpolyano commented Jan 17, 2025

The container is running with -it, so usually killing the entrypoint in that terminal stops the container as well. I've also tried docker kill, which says it's not running: Error response from daemon: cannot kill container: comfyui: No such container: comfyui (same thing if I use the hash instead of the name).

docker ps -a shows no output after the container stops. While the container is running, it is shown as Up.

I've even tried stopping the daemon entirely (sudo systemctl stop docker), and the memory is still used.
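
Since the container runs with --ipc=host, one thing worth checking on the host after it exits (a diagnostic sketch, not a known fix) is whether any shared-memory files or segments were left behind:

# POSIX shared memory created by the container survives its exit when --ipc=host is used.
ls -lh /dev/shm
# System V shared memory segments, if any.
ipcs -m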

@rpolyano

Is there at least some way to manually free the memory? Otherwise I have to restart my entire computer every single time after running a single inference task.

@feng-intel

  2. Run the example workflow using the single fp8 checkpoint provided:
    https://comfyanonymous.github.io/ComfyUI_examples/flux/#flux-dev-1
  3. Observe that the inference completes
  4. Stop + exit the container

Can you check the memory usage after step 3 (when the inference completes)? Does your process/example exit normally?


rpolyano commented Jan 23, 2025

Yes, there is a point during step 3 where the model is "moved" to the "GPU", at which the memory usage of the python process in top decreases, but overall memory usage does not.

The server catches the keyboard interrupt exception, so I believe that counts as "exiting normally", but I am not sure it has any other graceful shutdown mechanism. Even if it does, it crashes fairly often (due to UR_ERROR_RESULT_DEVICE_LOST), so the memory would still leak.
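
One more host-side check that might narrow down where the memory is held (a sketch; renderD128 is an assumed device name, adjust to whatever appears under /dev/dri on this machine):

# If nothing shows up here after the container exits but the memory stays reserved,
# the allocation is most likely held inside the kernel GPU driver rather than by a leftover userspace process.
sudo lsof /dev/dri/renderD128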
