
[bug]: Crash using Flux Dev.1 #7523

Open

DJP1973 opened this issue Jan 6, 2025 · 7 comments
Labels
bug Something isn't working

Comments


DJP1973 commented Jan 6, 2025

Is there an existing issue for this problem?

  • I have searched the existing issues

Operating system

Windows

GPU vendor

Nvidia (CUDA)

GPU model

3060

GPU VRAM

12 GB

Version number

5.5.0

Browser

Edge

Python dependencies

Local System

accelerate     1.0.1
compel         2.0.2
cuda           12.4
diffusers      0.31.0
numpy          1.26.3
opencv         4.9.0.80
onnx           1.16.1
pillow         10.2.0
python         3.11.11
torch          2.4.1+cu124
torchvision    0.19.1+cu124
transformers   4.46.3
xformers       Not Installed

What happened

Click Generate and it exits with:
[2025-01-06 18:09:22,644]::[InvokeAI]::INFO --> Cleaned database (freed 0.01MB)
[2025-01-06 18:09:22,644]::[InvokeAI]::INFO --> Invoke running on http://0.0.0.0:9090/ (Press CTRL+C to quit)
[2025-01-06 18:11:10,050]::[InvokeAI]::INFO --> Executing queue item 91, session 1d1a923c-b431-4d28-b25e-0f1bc38690fa
C:\InvokeAI\.venv\Lib\site-packages\bitsandbytes\autograd\_functions.py:316: UserWarning: MatMul8bitLt: inputs will be cast from torch.bfloat16 to float16 during quantization
warnings.warn(f"MatMul8bitLt: inputs will be cast from {A.dtype} to float16 during quantization")
C:\InvokeAI\.venv\Lib\site-packages\transformers\models\clip\modeling_clip.py:540: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:555.)
attn_output = torch.nn.functional.scaled_dot_product_attention(
Process exited with code: 3221225477
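
For reference, that exit status is a Windows NTSTATUS value; a minimal decode (plain Python, no extra dependencies) shows it is an access violation, i.e. a crash in native code rather than a Python exception:

```python
# Decode the exit status reported in the log above.
# 0xC0000005 is STATUS_ACCESS_VIOLATION on Windows, meaning the
# process died with a native segfault rather than raising a
# Python exception.
exit_code = 3221225477
print(hex(exit_code))  # -> 0xc0000005
```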

What you expected to happen

I expected the image to generate; instead it loads and then exits within about 20 seconds.

How to reproduce the problem

Generate anything with the Flux Dev model in non-quantized mode.

Additional context

Thank you; I am very new to this but trying hard to learn.

Discord username

No response

DJP1973 added the bug label on Jan 6, 2025

gigend commented Jan 9, 2025

Same error.


KudintG commented Jan 9, 2025

I'm getting a similar error with Flux Schnell on a 2070 Super, with Invoke installed via Stability Matrix:

\StabilityMatrix\Packages\InvokeAI\venv\lib\site-packages\transformers\models\clip\modeling_clip.py:540: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:555.)
attn_output = torch.nn.functional.scaled_dot_product_attention(

@tensorflow73

Same here... it started happening a few days ago. No idea what caused it. So annoying! Does anyone know a fix, please?

@TiddlyWiddly

I get this:

[2025-01-12 03:08:45,916]::[ModelInstallService]::INFO --> Model download complete: black-forest-labs/FLUX.1-dev
[2025-01-12 03:08:45,920]::[ModelInstallService]::INFO --> Model install started: black-forest-labs/FLUX.1-dev
[2025-01-12 03:08:45,924]::[ModelInstallService]::ERROR --> Model install error: black-forest-labs/FLUX.1-dev InvalidModelConfigException: Unknown base model for /invokeai/models/tmpinstall_fvh2hdgc/FLUX.1-dev

@RyanJDick
Collaborator

@gigend @KudintG @tensorflow73 Just to confirm, are you all seeing Process exited with code: 3221225477? Or just the same warnings that lead up to it? And, can you all confirm what version of Invoke you are seeing this on?

@TiddlyWiddly your error is unrelated to the main issue here. Please open a new bug report.

@tensorflow73

I've found that the base Flux schnell model with the standard T5 works fine, but other flux models crash it out with the error pointing to the flash attention build issue. Pony, sd1.x and sdxl models all function normally.

@RyanJDick
Collaborator

> I've found that the base Flux schnell model with the standard T5 works fine, but other flux models crash it out with the error pointing to the flash attention build issue. Pony, sd1.x and sdxl models all function normally.

The warnings about flash attention are expected on Windows. The real error in the original bug report is Process exited with code: 3221225477. Are you seeing this same error? Or something else? And, what version of Invoke are you running?
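
For anyone verifying this locally, here is a quick sketch (assuming the torch 2.4.1+cu124 install reported above) that prints which scaled_dot_product_attention backends are enabled. Windows wheels typically ship without flash attention compiled in, and PyTorch falls back to the memory-efficient or math kernels, so the warning by itself does not explain the crash:

```python
import torch

# Show which SDPA backends this torch build has enabled for dispatch.
# These flags are toggles, not proof a kernel was compiled in: Windows
# wheels usually lack flash attention, which produces the "Torch was
# not compiled with flash attention" warning, and PyTorch then falls
# back to the memory-efficient or math implementations.
print("flash sdp enabled:        ", torch.backends.cuda.flash_sdp_enabled())
print("mem-efficient sdp enabled:", torch.backends.cuda.mem_efficient_sdp_enabled())
print("math sdp enabled:         ", torch.backends.cuda.math_sdp_enabled())
```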
