I was using 4o-mini to build and test, and once everything was running well (as in the code runs from start to finish, not that the quality of the output was fantastic), I switched to Claude 3.5 Sonnet. That was when I started getting the "Received None or empty response from LLM call" error.
I have been experiencing the same issue using the newest version of Claude 3.5 Sonnet through Amazon Bedrock, so it doesn't seem to be related to model capability.
The only thing I have found to work is setting `Agent(..., use_system_prompt=False)`. That would be my first recommendation, since you are also using Anthropic.
The system prompt does not seem to play nicely with Anthropic, but I am not sure why.
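For anyone trying to reproduce the workaround, a minimal sketch of wiring a CrewAI agent to Claude 3.5 Sonnet with the system prompt disabled might look like the following. The role/goal/backstory strings and the model ID are placeholders, and you'd swap in your Bedrock model string if you're going through Amazon Bedrock rather than the Anthropic API directly:

```python
from crewai import Agent, Crew, Task, LLM

# Anthropic model via CrewAI's LLM wrapper (placeholder model string;
# use your Bedrock model ID if you are calling through Amazon Bedrock)
claude = LLM(model="anthropic/claude-3-5-sonnet-20240620")

researcher = Agent(
    role="Researcher",                 # placeholder
    goal="Summarize the given topic",  # placeholder
    backstory="An example agent for testing the workaround.",
    llm=claude,
    use_system_prompt=False,  # the workaround discussed in this thread
)

task = Task(
    description="Summarize the topic in two sentences.",
    expected_output="A short summary.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task])
# result = crew.kickoff()  # requires ANTHROPIC_API_KEY (or Bedrock credentials)
```

With `use_system_prompt=False`, CrewAI stops sending its generated instructions as a separate system message, which is what appears to trip up the Anthropic integration here.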
It did work for me, but there's no answer from the CrewAI team about why or how this happened, and I'm not too comfortable with resorting to `use_system_prompt=False` in production when I don't know what system prompts are being removed.
Anyone else encountering this problem?
I tried the solution outlined here: https://community.crewai.com/t/why-am-i-getting-the-invalid-response-from-llm-call-none-or-empty-error-with-my-custom-tool-if-using-anthropic-llm-but-not-with-openai-llm/1571/8
It did work for me, but there's no answer from the CrewAI team about why or how this happened, and I'm not too comfortable with resorting to `use_system_prompt=False` in production when I don't know what system prompts are being removed. Can someone from the CrewAI team please address this?