
How to define an LLM-as-a-judge evaluator

LLM applications often produce conversational text with no single correct answer, which can make them challenging to evaluate.

This guide shows you how to define an LLM-as-a-judge evaluator for offline evaluation using the LangSmith SDK or UI. Note: to run evaluations on production traces in real time, see set up online evaluations.

Pre-built evaluators

Pre-built evaluators are a useful starting point for setting up evaluations. For how to use pre-built evaluators in LangSmith, see pre-built evaluators.
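As a rough sketch of what this looks like (assuming the open-source openevals package that LangSmith's pre-built evaluators draw from; the model identifier and feedback_key below are illustrative choices, not requirements):

from openevals.llm import create_llm_as_judge
from openevals.prompts import CONCISENESS_PROMPT

# Build a pre-built LLM-as-a-judge evaluator that grades answer conciseness.
conciseness_evaluator = create_llm_as_judge(
    prompt=CONCISENESS_PROMPT,
    model="openai:gpt-4o-mini",  # illustrative model identifier
    feedback_key="conciseness",
)

# The evaluator can be invoked directly on an input/output pair.
result = conciseness_evaluator(
    inputs="How is the weather in San Francisco?",
    outputs="Thanks for asking! The weather in San Francisco is sunny today.",
)

The result is a feedback dict (score plus the judge's comment) that can be recorded in LangSmith like any other evaluator output.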

Create your own LLM-as-a-judge evaluator

For complete control over the evaluator logic, create your own LLM-as-a-judge evaluator and run it with the LangSmith SDK (Python / TypeScript).

Requires langsmith>=0.2.0

from langsmith import evaluate, traceable, wrappers, Client
from openai import OpenAI
# Assumes you've installed pydantic
from pydantic import BaseModel

# Optionally wrap the OpenAI client to trace all model calls.
oai_client = wrappers.wrap_openai(OpenAI())

def valid_reasoning(inputs: dict, outputs: dict) -> bool:
    """Use an LLM to judge if the reasoning and the answer are consistent."""

    instructions = """\
Given the following question, answer, and reasoning, determine if the reasoning \
for the answer is logically valid and consistent with the question and the answer.\
"""

    # Structured output schema for the judge's verdict.
    class Response(BaseModel):
        reasoning_is_valid: bool

    msg = f"Question: {inputs['question']}\nAnswer: {outputs['answer']}\nReasoning: {outputs['reasoning']}"
    response = oai_client.beta.chat.completions.parse(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": instructions},
            {"role": "user", "content": msg},
        ],
        response_format=Response,
    )
    return response.choices[0].message.parsed.reasoning_is_valid

# Optionally add the 'traceable' decorator to trace the inputs/outputs of this function.
@traceable
def dummy_app(inputs: dict) -> dict:
    return {"answer": "hmm i'm not sure", "reasoning": "i didn't understand the question"}

ls_client = Client()
dataset = ls_client.create_dataset("big questions")
examples = [
    {"inputs": {"question": "how will the universe end"}},
    {"inputs": {"question": "are we alone"}},
]
ls_client.create_examples(dataset_id=dataset.id, examples=examples)

results = evaluate(
    dummy_app,
    data=dataset,
    evaluators=[valid_reasoning],
)

For more information on how to write custom evaluators, see here.
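Custom evaluators are not limited to returning a boolean; they can also return a feedback dict with a key, score, and comment. A rough sketch of that variant, reusing oai_client and BaseModel from the example above (the judge prompt and the 0-1 scale are illustrative assumptions):

def reasoning_quality(inputs: dict, outputs: dict) -> dict:
    """Grade the reasoning on a 0-1 scale and attach the judge's explanation."""

    # Structured output schema for the judge's grade.
    class Grade(BaseModel):
        score: float  # 0.0 (invalid reasoning) to 1.0 (fully valid)
        explanation: str

    msg = f"Question: {inputs['question']}\nAnswer: {outputs['answer']}\nReasoning: {outputs['reasoning']}"
    response = oai_client.beta.chat.completions.parse(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Grade how logically valid the reasoning is on a scale from 0 to 1 and explain why."},
            {"role": "user", "content": msg},
        ],
        response_format=Grade,
    )
    grade = response.choices[0].message.parsed
    # Return the grade as feedback with an explicit key, score, and comment.
    return {"key": "reasoning_quality", "score": grade.score, "comment": grade.explanation}

This evaluator can be passed to evaluate(...) in place of (or alongside) valid_reasoning.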

