
run does not change config based on assistant_id #3199

Open

weinberg opened this issue Jan 24, 2025 · 0 comments


weinberg commented Jan 24, 2025

Checked other resources

  • This is a bug, not a usage question. For questions, please use GitHub Discussions.
  • I added a clear and detailed title that summarizes the issue.
  • I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
  • I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.

Example Code

"""
Langgraph Server code
"""

from dataclasses import dataclass, field, fields
from typing import Annotated, Dict, List, Optional, Sequence, cast

from langchain_core.messages import AIMessage, AnyMessage
from langchain_core.runnables import RunnableConfig, ensure_config
from langgraph.graph import StateGraph, add_messages

from react_agent.state import InputState, State
from react_agent.tools import TOOLS
from react_agent.utils import load_chat_model


@dataclass
class State(InputState):
    messages: Annotated[Sequence[AnyMessage], add_messages] = field(
        default_factory=list
    )


@dataclass(kw_only=True)
class Configuration:
    prompt: str = field(default="Just reply with MISSING_PROMPT")

    @classmethod
    def from_runnable_config(
        cls, config: Optional[RunnableConfig] = None
    ) -> "Configuration":
        """Create a Configuration instance from a RunnableConfig object."""
        config = ensure_config(config)
        configurable = config.get("configurable") or {}
        _fields = {f.name for f in fields(cls) if f.init}
        return cls(**{k: v for k, v in configurable.items() if k in _fields})


async def call_model(
    state: State, config: RunnableConfig
) -> Dict[str, List[AIMessage]]:
    configuration = Configuration.from_runnable_config(config)
    model = load_chat_model("anthropic/claude-3-5-sonnet-20240620")
    system_message = configuration.prompt
    response = cast(
        AIMessage,
        await model.ainvoke(
            [{"role": "system", "content": system_message}, *state.messages], config
        ),
    )
    return {"messages": [response]}


builder = StateGraph(State, input=InputState, config_schema=Configuration)
builder.add_node(call_model)
builder.add_edge("__start__", "call_model")
builder.add_edge("call_model", "__end__")

graph = builder.compile()

Error Message and Stack Trace (if applicable)

From a jupyter notebook:

%pip install langgraph-sdk






import getpass
import os


def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")


_set_env("LANGSMITH_API_KEY")





from langgraph_sdk import get_client

URL="http://localhost:2024"
client = get_client(url=URL,api_key=os.getenv('LANGSMITH_API_KEY'))
assistant_id = "agent"
thread = await client.threads.create()




assistant_one = await client.assistants.create(
    graph_id="agent",
    config={"configurable": {"prompt": "just respond with ASSISTANT ONE"}},
    assistant_id="11111111-1111-1111-1111-111111111111",
    if_exists="do_nothing",
    name="assistant one"
)




assistant_two = await client.assistants.create(
    graph_id="agent",
    config={"configurable": {"prompt": "just respond with ASSISTANT TWO"}},
    assistant_id="22222222-2222-2222-2222-222222222222",
    if_exists="do_nothing",
    name="assistant two"
)




thread = await client.threads.create(
    metadata={"number":1},
    if_exists="raise"
)




thread





    {'thread_id': 'fba20607-a345-4c52-99e2-270122b5604c',
     'created_at': '2025-01-24T18:39:41.939930+00:00',
     'updated_at': '2025-01-24T18:39:41.939934+00:00',
     'metadata': {'number': 1},
     'status': 'idle',
     'config': {},
     'values': None}





result_a = await client.runs.wait(
    thread_id=thread["thread_id"],
    assistant_id=assistant_one["assistant_id"],
    input={"messages": [{"role": "user", "content": "hello"}]},
)
result_a["messages"][1]["content"]





    'ASSISTANT ONE'





result_b = await client.runs.wait(
    thread_id=thread["thread_id"],
    assistant_id=assistant_two["assistant_id"],
    input={"messages": [{"role": "user", "content": "hello"}]},
)
result_b["messages"][1]["content"]





    'ASSISTANT ONE'

Description

I have two assistants. I start a thread and complete a run with the first assistant, then submit a run on the same thread with the second assistant. The config provided to the node is always the first assistant's. Notice in the notebook output above that it prints "ASSISTANT ONE" twice, even though the second run uses assistant_two and should print "ASSISTANT TWO". If I reverse the order and call assistant_two first, it prints "ASSISTANT TWO" twice.

I expected that the assistant_id I pass in with the run would cause the configuration for that assistant to be provided to the node.

I notice that in LangGraph Studio I can pick an assistant for each run, and that actually works. But looking at the network request, I can see Studio passes a config parameter containing all of the selected assistant's configuration, including the prompt. I don't want to have to do this, and it seems counter-intuitive that I should; I expected the assistant_id alone to be enough.
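As a stopgap, the behavior Studio exhibits can be mirrored from the SDK by passing the assistant's config explicitly with each run. This is a sketch, assuming `runs.wait` forwards a `config` argument that takes precedence over the stored assistant config (the hardcoded `run_config` here duplicates what was set on assistant_two):

```python
# Explicit per-run config, duplicating assistant_two's stored configuration.
run_config = {"configurable": {"prompt": "just respond with ASSISTANT TWO"}}


async def run_with_explicit_config(client, thread, assistant_two):
    # client, thread, and assistant_two come from the notebook cells above.
    return await client.runs.wait(
        thread_id=thread["thread_id"],
        assistant_id=assistant_two["assistant_id"],
        config=run_config,  # override whatever config the thread last saw
        input={"messages": [{"role": "user", "content": "hello"}]},
    )
```

This defeats the purpose of storing config on the assistant, though, which is why the assistant_id-based behavior matters.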

Thanks!

System Info

System Information

OS: Darwin
OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
Python Version: 3.11.11 (main, Jan 18 2025, 10:11:10) [Clang 16.0.0 (clang-1600.0.26.6)]

Package Information

langchain_core: 0.3.30
langchain: 0.3.14
langchain_community: 0.3.14
langsmith: 0.2.11
langchain_anthropic: 0.3.3
langchain_fireworks: 0.2.6
langchain_openai: 0.3.0
langchain_text_splitters: 0.3.5
langgraph_api: 0.0.16
langgraph_cli: 0.1.67
langgraph_license: Installed. No version info available.
langgraph_sdk: 0.1.51
langgraph_storage: Installed. No version info available.

Optional packages not installed

langserve

Other Dependencies

aiohttp: 3.11.11
anthropic: 0.43.1
async-timeout: Installed. No version info available.
click: 8.1.8
cryptography: 43.0.3
dataclasses-json: 0.6.7
defusedxml: 0.7.1
fireworks-ai: 0.15.11
httpx: 0.28.1
httpx-sse: 0.4.0
jsonpatch: 1.33
jsonschema-rs: 0.25.1
langgraph: 0.2.64
langgraph-checkpoint: 2.0.10
langsmith-pyo3: Installed. No version info available.
numpy: 1.26.4
openai: 1.59.8
orjson: 3.10.15
packaging: 24.2
pydantic: 2.10.5
pydantic-settings: 2.7.1
pyjwt: 2.10.1
python-dotenv: 1.0.1
PyYAML: 6.0.2
requests: 2.32.3
requests-toolbelt: 1.0.0
SQLAlchemy: 2.0.37
sse-starlette: 2.1.3
starlette: 0.45.2
structlog: 24.4.0
tenacity: 8.5.0
tiktoken: 0.8.0
typing-extensions: 4.12.2
uvicorn: 0.34.0
watchfiles: 1.0.4
zstandard: Installed. No version info available.
