-
Hello everyone. I have built a ReAct agent using the Llama3.2 model from Ollama. For pretty printing, I am using the following code to `astream` over the agent:
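Roughly like this (a minimal sketch of my setup; the `get_weather` tool and the prompt are stand-ins):

```python
import asyncio

from langchain_core.tools import tool
from langchain_ollama import ChatOllama
from langgraph.prebuilt import create_react_agent


@tool
def get_weather(city: str) -> str:
    """Return a canned weather report for a city."""
    return f"It is always sunny in {city}."


agent = create_react_agent(ChatOllama(model="llama3.2"), tools=[get_weather])


async def main():
    # Pretty-print the latest message of each state snapshot.
    async for chunk in agent.astream(
        {"messages": [("user", "What is the weather in Berlin?")]},
        stream_mode="values",
    ):
        chunk["messages"][-1].pretty_print()


asyncio.run(main())
```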
This works, but the behavior is odd: there is no per-token streaming; instead the whole answer is printed at once. I recall reading somewhere that tool calling and streaming are incompatible in LangGraph. Can someone make sense of this?
-
Additionally, other Ollama models do not work either...
-
Thank you very much!
I responded on issue #3259 as well -- you need to use `stream_mode="messages"` in the `.stream()` / `.astream()` method.
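For example (a minimal sketch, assuming the `agent` from the question above):

```python
import asyncio


async def main():
    # stream_mode="messages" yields (message_chunk, metadata) tuples as
    # tokens are produced, instead of whole node outputs at the end.
    async for token, metadata in agent.astream(
        {"messages": [("user", "What is the weather in Berlin?")]},
        stream_mode="messages",
    ):
        if token.content:
            print(token.content, end="", flush=True)


asyncio.run(main())
```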