
Trace with LangGraph (Python and JS/TS)

LangSmith integrates seamlessly with LangGraph (Python and JS) to help you trace agent workflows, whether you are using LangChain modules or other SDKs.

With LangChain

If you are using LangChain modules within LangGraph, you only need to set a few environment variables to enable tracing.

This guide walks through a basic example. For more details on configuration, see the Trace with LangChain guide.

1. Installation

Install the LangGraph library and the OpenAI integration for Python and JS (we use the OpenAI integration in the code snippets below).

For a full list of available packages, see the LangChain Python docs and the LangChain JS docs.

pip install langchain_openai langgraph

2. Configure your environment

export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY=<your-api-key>
# This example uses OpenAI, but you can use any LLM provider of choice
export OPENAI_API_KEY=<your-openai-api-key>
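
If you are working in a notebook or script rather than a shell, here is a minimal sketch of the equivalent setup in Python (set these before running any traced code):

import os

# Equivalent to the shell exports above.
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_API_KEY"] = "<your-api-key>"
os.environ["OPENAI_API_KEY"] = "<your-openai-api-key>"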

3. Log a trace

Once you have set up your environment, you can invoke LangChain runnables as you normally would. LangSmith will infer the proper tracing config:

from typing import Literal

from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from langgraph.graph import StateGraph, MessagesState
from langgraph.prebuilt import ToolNode

@tool
def search(query: str):
    """Call to surf the web."""
    if "sf" in query.lower() or "san francisco" in query.lower():
        return "It's 60 degrees and foggy."
    return "It's 90 degrees and sunny."

tools = [search]

tool_node = ToolNode(tools)

model = ChatOpenAI(model="gpt-4o", temperature=0).bind_tools(tools)

def should_continue(state: MessagesState) -> Literal["tools", "__end__"]:
    messages = state["messages"]
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"
    return "__end__"


def call_model(state: MessagesState):
    messages = state["messages"]

    # Invoking `model` will automatically infer the correct tracing context
    response = model.invoke(messages)
    return {"messages": [response]}


workflow = StateGraph(MessagesState)

workflow.add_node("agent", call_model)
workflow.add_node("tools", tool_node)

workflow.add_edge("__start__", "agent")
workflow.add_conditional_edges(
    "agent",
    should_continue,
)
workflow.add_edge("tools", "agent")

app = workflow.compile()

final_state = app.invoke(
    {"messages": [HumanMessage(content="what is the weather in sf")]},
    config={"configurable": {"thread_id": 42}},
)
final_state["messages"][-1].content

An example trace from running the above code looks like this:

[Image: trace tree for a LangGraph run with LangChain]
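
If you want more context attached to the resulting trace, LangChain runnables (including compiled LangGraph apps) accept tracing-related fields on the config passed to invoke. A minimal sketch, where the run name, tag, and metadata values are illustrative:

# Tags, metadata, and the run name set on the invocation config
# are picked up by LangSmith and shown on the resulting trace.
final_state = app.invoke(
    {"messages": [HumanMessage(content="what is the weather in sf")]},
    config={
        "run_name": "weather-agent",          # illustrative run name
        "tags": ["langgraph-quickstart"],     # illustrative tag
        "metadata": {"user_id": "user-123"},  # illustrative metadata
        "configurable": {"thread_id": 42},
    },
)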

Without LangChain

If you are using other SDKs or custom functions within LangGraph, you will need to wrap or decorate them appropriately (with the @traceable decorator in Python, the traceable function in JS, or an SDK wrapper like wrap_openai). If you do so, LangSmith will automatically nest traces from those wrapped methods.
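
To make the pattern concrete before the full example below, here is a minimal sketch of wrapping in isolation; my_tool is a hypothetical helper, not part of the example that follows:

import openai
from langsmith import traceable
from langsmith.wrappers import wrap_openai

# Any custom function becomes a traced run once decorated.
@traceable
def my_tool(query: str) -> str:  # hypothetical helper for illustration
    return f"results for {query}"

# Wrapping the OpenAI client traces every completion call it makes,
# nested under whichever traced function invoked it.
client = wrap_openai(openai.Client())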

Below is a full, end-to-end example. You can also check out this page for more information.

1. Installation

Install the LangGraph library and the OpenAI SDK for Python and JS (we use the OpenAI SDK in the code snippets below).

pip install openai langsmith langgraph

2. Configure your environment

export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY=<your-api-key>
# This example uses OpenAI, but you can use any LLM provider of choice
export OPENAI_API_KEY=<your-openai-api-key>

3. Log a trace

Once you have set up your environment, wrap or decorate the custom functions/SDKs you want to trace. LangSmith will then infer the proper tracing config:

import json
import openai
import operator

from langsmith import traceable
from langsmith.wrappers import wrap_openai

from typing import Annotated, Literal, TypedDict

from langgraph.graph import StateGraph

class State(TypedDict):
    messages: Annotated[list, operator.add]

tool_schema = {
    "type": "function",
    "function": {
        "name": "search",
        "description": "Call to surf the web.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}

# Decorating the tool function will automatically trace it with the correct context
@traceable(run_type="tool", name="Search Tool")
def search(query: str):
    """Call to surf the web."""
    if "sf" in query.lower() or "san francisco" in query.lower():
        return "It's 60 degrees and foggy."
    return "It's 90 degrees and sunny."

tools = [search]

def call_tools(state):
    function_name_to_function = {"search": search}
    messages = state["messages"]

    tool_call = messages[-1]["tool_calls"][0]
    function_name = tool_call["function"]["name"]
    function_arguments = tool_call["function"]["arguments"]
    arguments = json.loads(function_arguments)

    function_response = function_name_to_function[function_name](**arguments)
    tool_message = {
        "tool_call_id": tool_call["id"],
        "role": "tool",
        "name": function_name,
        "content": function_response,
    }
    return {"messages": [tool_message]}

wrapped_client = wrap_openai(openai.Client())

def should_continue(state: State) -> Literal["tools", "__end__"]:
    messages = state["messages"]
    last_message = messages[-1]
    if last_message["tool_calls"]:
        return "tools"
    return "__end__"


def call_model(state: State):
    messages = state["messages"]
    # Calling the wrapped client will automatically infer the correct tracing context
    response = wrapped_client.chat.completions.create(
        messages=messages, model="gpt-4o-mini", tools=[tool_schema]
    )
    raw_tool_calls = response.choices[0].message.tool_calls
    tool_calls = [tool_call.to_dict() for tool_call in raw_tool_calls] if raw_tool_calls else []
    response_message = {
        "role": "assistant",
        "content": response.choices[0].message.content,
        "tool_calls": tool_calls,
    }
    return {"messages": [response_message]}


workflow = StateGraph(State)

workflow.add_node("agent", call_model)
workflow.add_node("tools", call_tools)

workflow.add_edge("__start__", "agent")
workflow.add_conditional_edges(
    "agent",
    should_continue,
)
workflow.add_edge("tools", "agent")

app = workflow.compile()

final_state = app.invoke(
    {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
)
final_state["messages"][-1]["content"]

An example trace from running the above code looks like this:

[Image: trace tree for a LangGraph run without LangChain]
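
If you need to attach extra information to a specific call of a traceable-decorated function, the decorator also accepts a langsmith_extra keyword argument at call time. A minimal sketch, where the metadata values are illustrative:

# Metadata passed via `langsmith_extra` is attached to this run only,
# without changing the function's signature.
search(
    "what is the weather in sf",
    langsmith_extra={"metadata": {"user_id": "user-123"}},  # illustrative metadata
)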

