Annotate code for tracing
If you've decided you no longer want to trace your runs, you can remove the LANGSMITH_TRACING environment variable. Note that this does not affect RunTree objects or API users, as these are meant to be low-level tools and are unaffected by the tracing toggle.
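For reference, the toggle itself is just an environment variable. Below is a minimal sketch (with a placeholder API key, set in-process for illustration) of turning tracing on; unsetting LANGSMITH_TRACING turns it back off:

import os

# LANGSMITH_TRACING toggles tracing for the decorators and wrappers shown
# below; removing/unsetting it stops new traces from being logged.
# The API key value here is a placeholder.
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_API_KEY"] = "<your-api-key>"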
There are several ways to log traces to LangSmith.
If you are using LangChain (either Python or JS/TS), you can skip this section and go directly to the LangChain-specific instructions.
Use @traceable / traceable
LangSmith makes it easy to log traces with minimal changes to your existing code, using the @traceable decorator in Python and the traceable function in TypeScript.
- Python
- TypeScript
The @traceable decorator is a simple way to log traces from the LangSmith Python SDK. Simply decorate any function with @traceable.
from langsmith import traceable
from openai import Client

openai = Client()

@traceable
def format_prompt(subject):
    return [
        {
            "role": "system",
            "content": "You are a helpful assistant.",
        },
        {
            "role": "user",
            "content": f"What's a good name for a store that sells {subject}?"
        }
    ]

@traceable(run_type="llm")
def invoke_llm(messages):
    return openai.chat.completions.create(
        messages=messages, model="gpt-4o-mini", temperature=0
    )

@traceable
def parse_output(response):
    return response.choices[0].message.content

@traceable
def run_pipeline():
    messages = format_prompt("colorful socks")
    response = invoke_llm(messages)
    return parse_output(response)

run_pipeline()
The traceable function is a simple way to log traces from the LangSmith TypeScript SDK. Simply wrap any function with traceable.
Note that when wrapping a synchronous function with traceable (for example, formatPrompt in the example below), you should use the await keyword when calling it to ensure the trace is logged correctly.
import { traceable } from "langsmith/traceable";
import OpenAI from "openai";

const openai = new OpenAI();

const formatPrompt = traceable(
  (subject: string) => {
    return [
      {
        role: "system" as const,
        content: "You are a helpful assistant.",
      },
      {
        role: "user" as const,
        content: `What's a good name for a store that sells ${subject}?`,
      },
    ];
  },
  { name: "formatPrompt" }
);

const invokeLLM = traceable(
  async ({ messages }: { messages: { role: string; content: string }[] }) => {
    return openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages: messages,
      temperature: 0,
    });
  },
  { run_type: "llm", name: "invokeLLM" }
);

const parseOutput = traceable(
  (response: any) => {
    return response.choices[0].message.content;
  },
  { name: "parseOutput" }
);

const runPipeline = traceable(
  async () => {
    // formatPrompt is synchronous, but it must still be awaited so the
    // trace is logged correctly.
    const messages = await formatPrompt("colorful socks");
    const response = await invokeLLM({ messages });
    return parseOutput(response);
  },
  { name: "runPipeline" }
);

await runPipeline();
Use the trace context manager (Python only)
In Python, you can use the trace context manager to log traces to LangSmith. This is useful in situations where:
- You want to log traces for a specific block of code.
- You want control over the inputs, outputs, and other attributes of the trace.
- It is not feasible to use a decorator or wrapper.
- Any or all of the above.
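For instance, here is a minimal sketch (toy function and values assumed for illustration) of tracing one specific block of code with no LLM involved, using the same trace signature as the fuller example below:

import langsmith as ls

def fibonacci(n: int) -> int:
    return n if n < 2 else fibonacci(n - 1) + fibonacci(n - 2)

# Trace just this block: "chain" is the run type, and rt is the run tree
# for the enclosing run, used to attach outputs.
with ls.trace("Fibonacci", "chain", inputs={"n": 10}) as rt:
    result = fibonacci(10)
    rt.end(outputs={"result": result})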
The context manager also integrates seamlessly with the traceable decorator and the wrap_openai wrapper, so you can use them together in the same application, as in the fuller example below.
import openai
import langsmith as ls
from langsmith.wrappers import wrap_openai

client = wrap_openai(openai.Client())

@ls.traceable(run_type="tool", name="Retrieve Context")
def my_tool(question: str) -> str:
    return "During this morning's meeting, we solved all world conflict."

def chat_pipeline(question: str):
    context = my_tool(question)
    messages = [
        { "role": "system", "content": "You are a helpful assistant. Please respond to the user's request only based on the given context." },
        { "role": "user", "content": f"Question: {question}\nContext: {context}"}
    ]
    chat_completion = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages
    )
    return chat_completion.choices[0].message.content

app_inputs = {"input": "Can you summarize this morning's meetings?"}

with ls.trace("Chat Pipeline", "chain", project_name="my_test", inputs=app_inputs) as rt:
    output = chat_pipeline("Can you summarize this morning's meetings?")
    rt.end(outputs={"output": output})
Wrap the OpenAI client
The wrap_openai/wrapOpenAI methods in Python/TypeScript allow you to wrap your OpenAI client in order to automatically log traces; no decorator or function wrapping is required! Using the wrapper ensures that messages, including tool calls and multimodal content blocks, render nicely in LangSmith. Also note that the wrapper works seamlessly with the @traceable decorator or traceable function, and you can use both in the same application.
- Python
- TypeScript
import openai
from langsmith import traceable
from langsmith.wrappers import wrap_openai

client = wrap_openai(openai.Client())

@traceable(run_type="tool", name="Retrieve Context")
def my_tool(question: str) -> str:
    return "During this morning's meeting, we solved all world conflict."

@traceable(name="Chat Pipeline")
def chat_pipeline(question: str):
    context = my_tool(question)
    messages = [
        { "role": "system", "content": "You are a helpful assistant. Please respond to the user's request only based on the given context." },
        { "role": "user", "content": f"Question: {question}\nContext: {context}"}
    ]
    chat_completion = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages
    )
    return chat_completion.choices[0].message.content

chat_pipeline("Can you summarize this morning's meetings?")
import OpenAI from "openai";
import { traceable } from "langsmith/traceable";
import { wrapOpenAI } from "langsmith/wrappers";

const client = wrapOpenAI(new OpenAI());

const myTool = traceable(async (question: string) => {
  return "During this morning's meeting, we solved all world conflict.";
}, { name: "Retrieve Context", run_type: "tool" });

const chatPipeline = traceable(async (question: string) => {
  const context = await myTool(question);
  const messages = [
    {
      role: "system",
      content:
        "You are a helpful assistant. Please respond to the user's request only based on the given context.",
    },
    { role: "user", content: `Question: ${question} Context: ${context}` },
  ];
  const chatCompletion = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: messages,
  });
  return chatCompletion.choices[0].message.content;
}, { name: "Chat Pipeline" });

await chatPipeline("Can you summarize this morning's meetings?");
Wrap the Anthropic client (Python only)
The wrap_anthropic method in Python allows you to wrap your Anthropic client in order to automatically log traces; no decorator or function wrapping is required! Using the wrapper ensures that messages, including tool calls and multimodal content blocks, render nicely in LangSmith. The wrapper works seamlessly with the @traceable decorator or traceable function, and you can use both in the same application.
import anthropic
from langsmith import traceable
from langsmith.wrappers import wrap_anthropic

client = wrap_anthropic(anthropic.Anthropic())
# You can also wrap the async client as well
# async_client = wrap_anthropic(anthropic.AsyncAnthropic())

@traceable(run_type="tool", name="Retrieve Context")
def my_tool(question: str) -> str:
    return "During this morning's meeting, we solved all world conflict."

@traceable(name="Chat Pipeline")
def chat_pipeline(question: str):
    context = my_tool(question)
    messages = [
        { "role": "user", "content": f"Question: {question}\nContext: {context}"}
    ]
    chat_completion = client.messages.create(
        model="claude-sonnet-4-20250514",
        messages=messages,
        max_tokens=300,
        system="You are a helpful assistant. Please respond to the user's request only based on the given context."
    )
    # Anthropic returns a list of content blocks rather than OpenAI-style choices
    return chat_completion.content[0].text

chat_pipeline("Can you summarize this morning's meetings?")
Use the RunTree API
Another, more explicit way to log traces to LangSmith is via the RunTree API. This API allows you more control over your tracing: you can manually create runs and child runs to assemble your trace. You still need to set your LANGSMITH_API_KEY, but LANGSMITH_TRACING is not necessary for this method.
This method is not recommended, as it is easier to make mistakes when propagating trace context.
- Python
- TypeScript
import openai
from langsmith.run_trees import RunTree

# This can be a user input to your app
question = "Can you summarize this morning's meetings?"

# Create a top-level run
pipeline = RunTree(
    name="Chat Pipeline",
    run_type="chain",
    inputs={"question": question}
)
pipeline.post()

# This can be retrieved in a retrieval step
context = "During this morning's meeting, we solved all world conflict."
messages = [
    { "role": "system", "content": "You are a helpful assistant. Please respond to the user's request only based on the given context." },
    { "role": "user", "content": f"Question: {question}\nContext: {context}"}
]

# Create a child run
child_llm_run = pipeline.create_child(
    name="OpenAI Call",
    run_type="llm",
    inputs={"messages": messages},
)
child_llm_run.post()

# Generate a completion
client = openai.Client()
chat_completion = client.chat.completions.create(
    model="gpt-4o-mini", messages=messages
)

# End the runs and log them
child_llm_run.end(outputs=chat_completion)
child_llm_run.patch()

pipeline.end(outputs={"answer": chat_completion.choices[0].message.content})
pipeline.patch()
import OpenAI from "openai";
import { RunTree } from "langsmith";

// This can be a user input to your app
const question = "Can you summarize this morning's meetings?";

const pipeline = new RunTree({
  name: "Chat Pipeline",
  run_type: "chain",
  inputs: { question }
});
await pipeline.postRun();

// This can be retrieved in a retrieval step
const context = "During this morning's meeting, we solved all world conflict.";
const messages = [
  { role: "system", content: "You are a helpful assistant. Please respond to the user's request only based on the given context." },
  { role: "user", content: `Question: ${question}\nContext: ${context}` }
];

// Create a child run
const childRun = await pipeline.createChild({
  name: "OpenAI Call",
  run_type: "llm",
  inputs: { messages },
});
await childRun.postRun();

// Generate a completion
const client = new OpenAI();
const chatCompletion = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: messages,
});

// End the runs and log them
childRun.end(chatCompletion);
await childRun.patchRun();

pipeline.end({ outputs: { answer: chatCompletion.choices[0].message.content } });
await pipeline.patchRun();
Example usage
You can extend the utilities above to conveniently trace any code. Here are some example extensions:
Trace any public method on a class:
from typing import Any, Type, TypeVar

from langsmith import traceable

T = TypeVar("T")

def traceable_cls(cls: Type[T]) -> Type[T]:
    """Instrument all public methods in a class."""

    def wrap_method(name: str, method: Any) -> Any:
        if callable(method) and not name.startswith("__"):
            return traceable(name=f"{cls.__name__}.{name}")(method)
        return method

    # Handle __dict__ case
    for name in dir(cls):
        if not name.startswith("_"):
            try:
                method = getattr(cls, name)
                setattr(cls, name, wrap_method(name, method))
            except AttributeError:
                # Skip attributes that can't be set (e.g., some descriptors)
                pass

    # Handle __slots__ case
    if hasattr(cls, "__slots__"):
        for slot in cls.__slots__:  # type: ignore[attr-defined]
            if not slot.startswith("__"):
                try:
                    method = getattr(cls, slot)
                    setattr(cls, slot, wrap_method(slot, method))
                except AttributeError:
                    # Skip slots that don't have a value yet
                    pass

    return cls

@traceable_cls
class MyClass:
    def __init__(self, some_val: int):
        self.some_val = some_val

    def combine(self, other_val: int):
        return self.some_val + other_val

# See trace: https://smith.langchain.com/public/882f9ecf-5057-426a-ae98-0edf84fdcaf9/r
MyClass(13).combine(29)
Ensure all traces are submitted before exiting
In LangSmith, tracing is done in a background thread to avoid obstructing your production application. This means that your process may end before all traces are successfully posted to LangSmith. Here are some options for ensuring all traces are submitted before your application exits.
Using the LangSmith SDK
If you are using the LangSmith SDK standalone, you can use the flush method before exiting.
- Python
- TypeScript
from langsmith import Client, traceable

client = Client()

@traceable(client=client)
async def my_traced_func():
    # Your code here...
    pass

try:
    await my_traced_func()
finally:
    # Client.flush() is synchronous in the Python SDK
    client.flush()
import { Client } from "langsmith";
import { traceable } from "langsmith/traceable";

const langsmithClient = new Client({});

const myTracedFunc = traceable(
  async () => {
    // Your code here...
  },
  { client: langsmithClient }
);

try {
  await myTracedFunc();
} finally {
  await langsmithClient.flush();
}
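Alternatively, for a standalone Python script, one possible pattern (a sketch using only the standard library, not an SDK-specific feature) is to register flush as an exit hook so it runs even if you forget a try/finally:

import atexit

from langsmith import Client

client = Client()

# Assumption: registering Client.flush with the stdlib atexit hook drains the
# background tracing queue on normal interpreter shutdown. It does not run if
# the process is killed abruptly.
atexit.register(client.flush)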
Using LangChain
If you are using LangChain, please refer to our guide on tracing with LangChain.
If you prefer a video tutorial, check out the Tracing Basics video from the Introduction to LangSmith course.