# How to run an evaluation locally (Python only)
Sometimes it is helpful to run an evaluation locally without uploading any results to LangSmith. For example, you may not want to record evaluations when you are quickly iterating on a prompt and want to smoke-test it on a handful of examples, or when you are verifying that your target and evaluator functions are defined correctly.
You can do this with the LangSmith Python SDK by passing `upload_results=False` to `evaluate()` / `aevaluate()`.
This will run your application and evaluators exactly as usual and return the same outputs, but nothing will be recorded to LangSmith. This includes not only the experiment results but also the application and evaluator traces.
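The example below uses the synchronous `evaluate()` method. For async applications the same flag applies to `aevaluate()`; here is a minimal sketch of the async path, where the dataset name `my-dataset`, the toy target, and the evaluator are placeholders:

```python
import asyncio

from langsmith import Client

ls_client = Client()

async def chatbot(inputs: dict) -> dict:
    # Stand-in async target; swap in your real application.
    return {"answer": inputs["question"] + " is a good question. I don't know the answer."}

def is_concise(outputs: dict, reference_outputs: dict) -> bool:
    return len(outputs["answer"]) < (3 * len(reference_outputs["answer"]))

async def main() -> None:
    # Same 'upload_results=False' flag as the sync API: nothing is logged to LangSmith.
    experiment = await ls_client.aevaluate(
        chatbot,
        data="my-dataset",  # hypothetical dataset name
        evaluators=[is_concise],
        upload_results=False,
    )
    # Results stream back as an async iterator.
    async for result in experiment:
        print(result["evaluation_results"]["results"][0].score)

asyncio.run(main())
```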
## Example
Let's look at an example.
Requires `langsmith>=0.2.0`. The example also uses `pandas`.
```python
from langsmith import Client

# 1. Create and/or select your dataset
ls_client = Client()
dataset = ls_client.clone_public_dataset(
    "https://smith.langchain.com/public/a63525f9-bdf2-4512-83e3-077dc9417f96/d"
)

# 2. Define an evaluator
def is_concise(outputs: dict, reference_outputs: dict) -> bool:
    return len(outputs["answer"]) < (3 * len(reference_outputs["answer"]))

# 3. Define the interface to your app
def chatbot(inputs: dict) -> dict:
    return {"answer": inputs["question"] + " is a good question. I don't know the answer."}

# 4. Run an evaluation
experiment = ls_client.evaluate(
    chatbot,
    data=dataset,
    evaluators=[is_concise],
    experiment_prefix="my-first-experiment",
    # 'upload_results' is the relevant arg.
    upload_results=False,
)

# 5. Analyze results locally
results = list(experiment)

# Check if 'is_concise' returned False.
failed = [r for r in results if not r["evaluation_results"]["results"][0].score]

# Explore the failed inputs and outputs.
for r in failed:
    print(r["example"].inputs)
    print(r["run"].outputs)

# Explore the results as a Pandas DataFrame.
# Must have 'pandas' installed.
df = experiment.to_pandas()
df[["inputs.question", "outputs.answer", "reference.answer", "feedback.is_concise"]]
```
```
{'question': 'What is the largest mammal?'}
{'answer': "What is the largest mammal? is a good question. I don't know the answer."}
{'question': 'What do mammals and birds have in common?'}
{'answer': "What do mammals and birds have in common? is a good question. I don't know the answer."}
```
| | inputs.question | outputs.answer | reference.answer | feedback.is_concise |
|---|---|---|---|---|
| 0 | What is the largest mammal? | What is the largest mammal? is a good question. I don't know the answer. | blue whale | False |
| 1 | What do mammals and birds have in common? | What do mammals and birds have in common? is a good question. I don't know the answer. | They are both warm-blooded | False |
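Since everything stays local, you can keep analyzing the results with ordinary pandas operations. For instance, here is a minimal sketch that summarizes the `feedback.is_concise` column from the `df` above and saves the results to disk (the CSV filename is arbitrary):

```python
# Fraction of examples where 'is_concise' passed (boolean column, so mean = pass rate).
pass_rate = df["feedback.is_concise"].mean()
print(f"is_concise pass rate: {pass_rate:.0%}")

# Persist the full experiment results locally for later inspection.
df.to_csv("my-first-experiment-results.csv", index=False)
```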