How to define an LLM-as-a-judge evaluator

Evaluating LLM applications can be challenging because they often generate conversational text for which there is no single correct answer.

This guide shows you how to define an LLM-as-a-judge evaluator for offline evaluation using the LangSmith SDK or UI. Note: to run evaluations in real time on production traces, see how to set up online evaluations.

Prebuilt evaluators

Prebuilt evaluators are a useful starting point for setting up evaluations. For how to use prebuilt evaluators with LangSmith, see the prebuilt evaluators documentation.
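
As a brief illustration, here is a minimal sketch of calling one prebuilt LLM-as-a-judge evaluator directly. It assumes the openevals package is installed and an OpenAI API key is configured; create_llm_as_judge, CONCISENESS_PROMPT, and the "openai:o3-mini" model string follow that package's documented usage and may differ in your installed version.

from openevals.llm import create_llm_as_judge
from openevals.prompts import CONCISENESS_PROMPT

# Build a judge that scores how concise an output is.
conciseness_evaluator = create_llm_as_judge(
    prompt=CONCISENESS_PROMPT,
    feedback_key="conciseness",
    model="openai:o3-mini",
)

# Run the judge on a single input/output pair.
eval_result = conciseness_evaluator(
    inputs="How is the weather in San Francisco?",
    outputs="Thanks for asking! The current weather in San Francisco is sunny and 90 degrees.",
)
print(eval_result)  # e.g. a dict with the feedback key, score, and comment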

Create your own LLM-as-a-judge evaluator

For complete control over the evaluator logic, create your own LLM-as-a-judge evaluator and run it with the LangSmith SDK (Python / TypeScript).

Requires langsmith>=0.2.0

from langsmith import evaluate, traceable, wrappers, Client
from openai import OpenAI
# Assumes you've installed pydantic
from pydantic import BaseModel

# Optionally wrap the OpenAI client to trace all model calls.
oai_client = wrappers.wrap_openai(OpenAI())

def valid_reasoning(inputs: dict, outputs: dict) -> bool:
    """Use an LLM to judge if the reasoning and the answer are consistent."""

    instructions = """\
Given the following question, answer, and reasoning, determine if the reasoning \
for the answer is logically valid and consistent with the question and the answer.\
"""

    # Schema for the judge's structured verdict.
    class Response(BaseModel):
        reasoning_is_valid: bool

    msg = f"Question: {inputs['question']}\nAnswer: {outputs['answer']}\nReasoning: {outputs['reasoning']}"
    response = oai_client.beta.chat.completions.parse(
        model="gpt-4o",
        messages=[{"role": "system", "content": instructions}, {"role": "user", "content": msg}],
        response_format=Response,
    )
    return response.choices[0].message.parsed.reasoning_is_valid

# Optionally add the 'traceable' decorator to trace the inputs/outputs of this function.
@traceable
def dummy_app(inputs: dict) -> dict:
    return {"answer": "hmm i'm not sure", "reasoning": "i didn't understand the question"}

ls_client = Client()
dataset = ls_client.create_dataset("big questions")
examples = [
    {"inputs": {"question": "how will the universe end"}},
    {"inputs": {"question": "are we alone"}},
]
ls_client.create_examples(dataset_id=dataset.id, examples=examples)

results = evaluate(
    dummy_app,
    data=dataset,
    evaluators=[valid_reasoning],
)
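
Each evaluator's return value is logged as feedback on the experiment, which you can browse in the LangSmith UI. As a small follow-up sketch (assuming the optional pandas dependency is installed and that your langsmith version exposes to_pandas() on the results object), you can also inspect the results locally:

# Flatten the experiment results into a DataFrame, one row per example,
# including inputs, outputs, and the judge's feedback scores.
df = results.to_pandas()
print(df.head())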

For more information on how to write custom evaluators, see here.
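
Custom evaluators are not limited to returning a boolean. As a hedged illustration (the evaluator name concise_answer and its 50-word threshold are hypothetical), an evaluator can also return a dict that names the feedback metric and attaches an explanation:

# Hypothetical evaluator showing the dict return form: "key" names the
# feedback metric, "score" holds its value, and "comment" stores an explanation.
def concise_answer(inputs: dict, outputs: dict) -> dict:
    word_count = len(outputs["answer"].split())
    return {
        "key": "concise_answer",
        "score": word_count < 50,
        "comment": f"Answer contains {word_count} words.",
    }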

