What's the proper way to use memory with langgraph? #352
-
Context: When trying this example:

```python
from langchain.agents import create_react_agent, AgentExecutor
from langchain.memory import ConversationBufferMemory
from langchain_community.llms import Ollama

def create_agent(tools):
    llm = Ollama(
        model="mistral:latest",
        temperature=0.0,
        num_ctx=16384,
    )
    prompt = create_prompt()
    memory = ConversationBufferMemory(memory_key="chat_history")
    agent = create_react_agent(llm=llm, tools=tools, prompt=prompt)
    agent_executor = AgentExecutor(
        agent=agent,  # type: ignore
        tools=tools,
        memory=memory,
        verbose=True,
        max_iterations=15,
        handle_parsing_errors=True,
    )
    return agent_executor
```

If I use this, it fails with:

```
File "xxx/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 1166, in _iter_next_step
    output = self.agent.plan(
             ^^^^^^^^^^^^^^^^
TypeError: RunnableAgent.plan() got multiple values for argument 'intermediate_steps'
```

when directly using `memory = ConversationBufferMemory(memory_key="chat_history")`. Since the agent executor doesn't work, what's the proper way to have memory along with the LangGraph app?
Replies: 15 comments 30 replies
-
I am also curious about what the official recommendation (if any) is for memory for agents created with LangGraph, but while waiting for that I have implemented it myself using the LangChain memory methods, see https://python.langchain.com/docs/modules/memory/types/buffer/ Basically, when building the prompt I read the memory out. It is rather simple actually.
-
@XiaoConstantine did you find a solution?
-
I ended up doing something similar to what @zoltan-fedor proposed. I don't think the current LangGraph API accepts the memory classes yet? Maybe someone with a better understanding can chime in.
-
Hi all - in LangGraph, memory is just checkpointing/persistence (see, e.g., the tutorial) - no need for other memory classes. All you have to do is … This is much more powerful than LangChain's older forms of chat memory for a couple of reasons:
-
In my mind those two - the conversation message memory (history) and the graph state checkpointing - serve different purposes. Typically the graph state represents a single execution of the graph, which is basically one message from the user and how that message is handled by the agent. The message memory, though, stores the message history of the given chat discussion, where each message from the user was handled by a graph execution.
-
Can we get a way to customize memory in LangGraph? For example, with our previous agents' memory we have a thread stored in a Django model, so each user's agent variables are stored like that as well, and the memory has an FK to it. It would be nice to have a documented way to customize the memory and inherit the methods as needed without being constrained to the ones from langgraph.checkpoint.
-
Hello, can we use Redis for checkpointing? Also, is there a way to limit the messages stored (e.g. to the last five), similar to the Redis memory from LangChain? Thanks!
-
Checking in here to see if there have been any documentation updates related to custom checkpointing? I'm hacking away at putting one together, and will post some approaches if I have any luck!
-
I see there are a few new checkpoint examples already, e.g. the Redis one. I already have a way of storing the long-term memory (conversations). I was thinking of using the checkpointer as short-term memory only (so I guess Redis with an expire time will do the job). Not sure if there is a way to do something similar with the default :memory: checkpointer.
-
Hey everyone, if it helps, I've got some examples of how to add memory to a LangGraph agent using the MemorySaver class. You can check out my Python and Node.js implementations in the repository. These should give you a good starting point for integrating memory using the MemorySaver class. Also, feel free to watch a YouTube tutorial. Happy coding! 🚀
-
This might be a bit different from what people want, but whatever. I, for example, do want to manage the memory entirely by myself. Therefore I wanted to be able to just serialize the state, dump it somewhere, and then recover the agent again. I think I managed to do so with something like this:
-
Hey all! We recently released a new API for memory in LangGraph (reference doc). This API allows for storing data and having it persist across threads/sessions. It's independent from our checkpointer API, thus allowing persistence across threads. It also exposes an API for getting, putting, searching, and listing data in the store outside of your graph, allowing you to interact with your data, say, from an external API (this can be done through the LangGraph SDK). Here are the conceptual docs on memory: Here are some how-tos for memory:
-
Can we add custom information to the agent's memory using MemorySaver(), so that whenever the agent is generating output it can make use of this memory? Is that possible, or does memory only start getting stored once the agent starts a conversation?
-
How do you just clear all the memory of a LangGraph app?
-
I am using the above code for …