Issue combining Memory with Multiple Schemas #2072
SkipvdMeer asked this question in Q&A · Unanswered · 0 replies
Hi Everyone,
I am running into an issue related to the use of memory in LangGraph.
I am looking for a way to put part of the output of my nodes into memory, while storing other parts in objects that won't be taken up in memory. In my example, the input/output should be part of messages, and the output of the guardrails node should go into guardrails_state.
I found this: https://github.com/langchain-ai/langchain-academy/blob/main/module-2/multiple-schemas.ipynb
However, I am having a hard time bringing that together with the following class:
```python
class State(TypedDict):
    guardrails_state: Literal['Yes', 'No']
    messages: Annotated[list[AnyMessage], add_messages]
```
So in the example below, I would like to exclude the output of node_guardrails from the messages object and store it in guardrails_state instead. That way the conversation memory would contain just the input and the output.
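Roughly what I am after is the following minimal sketch (node_guardrails and node_answer are placeholder names for my actual nodes, and the guardrail decision is hard-coded just for illustration):

```python
from typing import Annotated, Literal, TypedDict

from langchain_core.messages import AIMessage, AnyMessage, HumanMessage
from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import add_messages


class State(TypedDict):
    guardrails_state: Literal['Yes', 'No']
    messages: Annotated[list[AnyMessage], add_messages]


def node_guardrails(state: State) -> dict:
    # Returns only guardrails_state; since no "messages" key is returned,
    # nothing from this node is appended to the conversation memory.
    decision = "Yes"  # placeholder for the actual guardrail check
    return {"guardrails_state": decision}


def node_answer(state: State) -> dict:
    # Returns only messages; this is what should end up in memory.
    return {"messages": [AIMessage(content="Here is my answer...")]}


builder = StateGraph(State)
builder.add_node("node_guardrails", node_guardrails)
builder.add_node("node_answer", node_answer)
builder.add_edge(START, "node_guardrails")
builder.add_edge("node_guardrails", "node_answer")
builder.add_edge("node_answer", END)
graph = builder.compile()

result = graph.invoke({"messages": [HumanMessage(content="Hello")]})
# result["messages"] should hold only the human input and the answer,
# while result["guardrails_state"] holds the guardrail decision separately.
```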
Can someone help me?