
[BUG] Error: "Did not receive done or success response in stream" when using Flowise with Ollama #3862

Open
Manish-va opened this issue Jan 13, 2025 · 1 comment


@Manish-va

Describe the bug
When running a chatflow or making an internal prediction request in Flowise, the server fails with the error: Did not receive done or success response in stream. The error appears after a request is submitted: the response from the AI model is never received correctly, and the operation fails.
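For reference, here is a minimal Node.js/TypeScript sketch (outside Flowise) of the streaming path that produces this message. The host URL and model name below are placeholders for my local setup, not taken from the Flowise config; as far as I understand, the ollama client raises this exact error when the HTTP stream closes before a chunk with done: true arrives:

import { Ollama } from 'ollama'

// Default Ollama port; adjust the host if Ollama runs elsewhere.
const ollama = new Ollama({ host: 'http://127.0.0.1:11434' })

async function main() {
  // Stream a chat completion the same way the ChatOllama node does.
  // If the stream ends without a final done: true chunk, the client throws
  // "Did not receive done or success response in stream."
  const stream = await ollama.chat({
    model: 'llama3.2:3b', // example model; substitute the one configured in the chatflow
    messages: [{ role: 'user', content: 'Hello' }],
    stream: true,
  })
  for await (const part of stream) {
    process.stdout.write(part.message.content)
  }
}

main().catch(console.error)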

To Reproduce
Steps to reproduce the behavior:

  1. Start the Flowise server by running docker start flowise.
  2. Attach to the running Flowise container using docker attach flowise.
  3. Set up a chatflow and trigger a prediction request using the configured model (e.g., Ollama).
  4. Observe the error logs as the system tries to complete the request.

Expected behavior
The system should process the request correctly and receive a "done" or "success" response from the model. The chatflow or prediction should proceed without errors.

Screenshots
[Screenshot from 2025-01-13 17-52-10]
[Screenshot from 2025-01-13 17-27-52]

Terminal output:
manishkumar@manishkumar-OMEN-Laptop-15-en0xxx:$ sudo docker start flowise
flowise
manishkumar@manishkumar-OMEN-Laptop-15-en0xxx:$ sudo docker attach flowise
2025-01-13 12:03:12 [INFO]: 📦 [server]: Data Source has been initialized!
2025-01-13 12:03:12 [INFO]: ⚡️ [server]: Flowise Server is listening at :3000
2025-01-13 12:03:21 [INFO]: 🖊 PUT /api/v1/chatflows/0fb8a684-4057-4cb4-855a-e4021ccc72d4
2025-01-13 12:03:24 [INFO]: ⬆️ POST /api/v1/internal-prediction/0fb8a684-4057-4cb4-855a-e4021ccc72d4
2025-01-13 12:03:24 [INFO]: [server]: Chatflow 0fb8a684-4057-4cb4-855a-e4021ccc72d4 added into ChatflowPool
2025-01-13 12:03:29 [ERROR]: [server]: Error: Did not receive done or success response in stream.
Error: Did not receive done or success response in stream.
at [Symbol.asyncIterator] (/usr/src/node_modules/.pnpm/[email protected]/node_modules/ollama/dist/shared/ollama.6a7c6e0f.cjs:47:11)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async ChatOllama.streamResponseChunks (/usr/src/node_modules/.pnpm/@langchain[email protected]@langchain[email protected][email protected][email protected][email protected]__/node_modules/@langchain/ollama/dist/chat_models.cjs:760:26)
at async ChatOllama.generate (/usr/src/node_modules/.pnpm/@langchain[email protected]@langchain[email protected][email protected][email protected][email protected]__/node_modules/@langchain/ollama/dist/chat_models.cjs:687:26)
at async Promise.allSettled (index 0)
at async ChatOllama.generateUncached (/usr/src/node_modules/.pnpm/@langchain[email protected][email protected][email protected][email protected]/node_modules/@langchain/core/dist/language_models/chat_models.cjs:215:29)
at async ChatOllama.invoke (/usr/src/node_modules/.pnpm/@langchain[email protected][email protected][email protected][email protected]_/node_modules/@langchain/core/dist/language_models/chat_models.cjs:60:24)
at async RunnableSequence.invoke (/usr/src/node_modules/.pnpm/@langchain[email protected][email protected][email protected][email protected]_/node_modules/@langchain/core/dist/runnables/base.cjs:1256:33)
at async ConversationChain_Chains.run (/usr/src/packages/components/dist/nodes/chains/ConversationChain/ConversationChain.js:111:19)
at async utilBuildChatflow (/usr/src/packages/server/dist/utils/buildChatflow.js:343:22)
at async createAndStreamInternalPrediction (/usr/src/packages/server/dist/controllers/internal-predictions/index.js:33:29)


Setup

  • Installation: docker (Flowise container)
  • Flowise Version: [e.g., 1.2.11]
  • OS: Linux (Ubuntu 20.04)
  • Browser: N/A (issue occurs during server interaction)
  • Model: Ollama (or any other model used in the configuration)

Additional context
This issue may be linked to the Ollama integration within Flowise. The error message originates from the file /usr/src/node_modules/.pnpm/[email protected]/node_modules/ollama/dist/shared/ollama.6a7c6e0f.cjs. Investigating timeout or response-handling settings in Flowise or in the model integration could help resolve the issue.
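One thing I plan to rule out (an assumption on my side, not confirmed for this setup): since Flowise runs in a Docker container, the Base URL in the ChatOllama node has to point at an address the container can actually reach, e.g. http://host.docker.internal:11434 rather than http://localhost:11434. A quick connectivity probe, run from the same environment as Flowise:

// Sketch: verify the Ollama endpoint is reachable and list the pulled models.
// OLLAMA_HOST is a placeholder; set it to whatever address the Flowise container uses.
const OLLAMA_HOST = process.env.OLLAMA_HOST ?? 'http://host.docker.internal:11434'

async function probe() {
  const res = await fetch(`${OLLAMA_HOST}/api/tags`) // Ollama REST endpoint that lists local models
  if (!res.ok) throw new Error(`Ollama responded with HTTP ${res.status}`)
  const body = await res.json() as { models?: { name: string }[] }
  console.log(body.models?.map((m) => m.name))
}

probe().catch((err) => console.error('Cannot reach Ollama:', err))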
@ghitapop

I got the same behavior with the latest Flowise version, 2.2.3: it doesn't seem to work with local Llama models anymore. I tried with llama3.2:3b, for example, just lately.

Setup:

  • OS: Windows 10
  • IDE: IntelliJ 2023.3.8
  • Ollama desktop server: 0.5.5
  • Local installation of Flowise, version 2.2.3
  • Local models installed: llama3.2:3b

Screenshots

[screenshot attached]

Here is the trace log from today:

2025-01-14 11:32:08 [ERROR]: [server]: Error: fetch failed
TypeError: fetch failed
at Object.fetch (node:internal/deps/undici/undici:11576:11)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async post (D:\Projects\workspace_ai\Flowise\node_modules.pnpm\[email protected]\node_modules\ollama\dist\shared\ollama.11c1a3a8.cjs:119:20)
at async Ollama.processStreamableRequest (D:\Projects\workspace_ai\Flowise\node_modules.pnpm\[email protected]\node_modules\ollama\dist\shared\ollama.11c1a3a8.cjs:236:25)
at async ChatOllama.streamResponseChunks (D:\Projects\workspace_ai\Flowise\node_modules.pnpm@[email protected]@langchain[email protected][email protected][email protected][email protected]__\node_modules@langchain\ollama\dist\chat_models.cjs:754:24)
at async ChatOllama.generate (D:\Projects\workspace_ai\Flowise\node_modules.pnpm@[email protected]@langchain[email protected][email protected][email protected][email protected]__\node_modules@langchain\ollama\dist\chat_models.cjs:687:26)
at async Promise.allSettled (index 0)
at async ChatOllama.generateUncached (D:\Projects\workspace_ai\Flowise\node_modules.pnpm@[email protected][email protected][email protected][email protected]\node_modules@langchain\core\dist\language_models\chat_models.cjs:215:29)
at async ChatOllama.invoke (D:\Projects\workspace_ai\Flowise\node_modules.pnpm@[email protected][email protected][email protected][email protected]_\node_modules@langchain\core\dist\language_models\chat_models.cjs:60:24)
at async RunnableSequence.invoke (D:\Projects\workspace_ai\Flowise\node_modules.pnpm@[email protected][email protected][email protected][email protected]_\node_modules@langchain\core\dist\runnables\base.cjs:1256:33)
at async ConversationChain_Chains.run (D:\Projects\workspace_ai\Flowise\packages\components\dist\nodes\chains\ConversationChain\ConversationChain.js:111:19)
at async utilBuildChatflow (D:\Projects\workspace_ai\Flowise\packages\server\dist\utils\buildChatflow.js:342:22)
at async createAndStreamInternalPrediction (D:\Projects\workspace_ai\Flowise\packages\server\dist\controllers\internal-predictions\index.js:33:29)
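
In case it helps narrow this down: undici's generic TypeError: fetch failed usually wraps a lower-level network error (for example ECONNREFUSED when the Ollama server isn't reachable from the process). A tiny sketch to surface the underlying cause; the URL is just the Ollama default on my machine, not something I've confirmed for this setup:

// Sketch: expose the real cause behind Node's "fetch failed" (undici attaches it as `cause`).
async function check() {
  try {
    await fetch('http://127.0.0.1:11434/api/version') // Ollama version endpoint
    console.log('Ollama endpoint is reachable')
  } catch (err) {
    console.error((err as Error).cause ?? err)
  }
}

check()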

Thanks and keep up the good work.
