ToolRelevancyScorer
Overview
ToolRelevancyScorer evaluates whether an agent selected appropriate tools for a given task. It assesses whether the chosen tools align with the task's requirements and whether better alternatives were available, using LLM-based analysis to score tool-selection quality.
Use Cases
- Autonomous AI agent evaluation
How It Works
This scorer uses the LLM-as-Judge approach: it prompts a large language model with specific evaluation criteria and the content to assess, then parses the model's judgment into a score and detailed reasoning.
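The sketch below shows one way this pattern can be implemented. It is a minimal illustration, not Noveum's internals: `call_llm` stands for any caller-supplied chat-completion client, and the judge prompt and JSON contract are assumptions.

```python
# Minimal LLM-as-Judge sketch for tool-relevancy scoring.
# Illustrative only: `call_llm` is any caller-supplied chat client,
# and the prompt/JSON contract is an assumption, not Noveum's actual judge.
import json
from typing import Callable

JUDGE_PROMPT = """You are evaluating an AI agent's tool selection.

Task: {task}
Tools available: {tools_available}
Tool calls made: {tool_calls}

Rate tool-selection relevancy on a 0-10 scale and explain your judgment.
Respond as JSON: {{"score": <float>, "reasoning": "<text>"}}"""


def score_tool_relevancy(
    task: str,
    tools_available: list,
    tool_calls: list,
    call_llm: Callable[[str], str],
    threshold: float = 7.0,
) -> dict:
    """Build the judge prompt, query the LLM, and parse its verdict."""
    prompt = JUDGE_PROMPT.format(
        task=task,
        tools_available=json.dumps(tools_available),
        tool_calls=json.dumps(tool_calls),
    )
    judgment = json.loads(call_llm(prompt))  # expects {"score": ..., "reasoning": ...}
    score = float(judgment["score"])
    return {
        "score": score,
        "passed": score >= threshold,
        "reasoning": judgment["reasoning"],
        "metadata": {"scorer": "ToolRelevancyScorer", "threshold": threshold},
    }
```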
Input Schema
| Parameter | Type | Required | Description |
|---|---|---|---|
| agent_data.tools_available | list \| str | Yes | List of available tools or tool schema |
| agent_data.tool_calls | list[ToolCall] \| str | Yes | List of tool calls made by the agent |
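To make the schema concrete, here is one plausible `agent_data` payload. The top-level field names come from the table above; the inner structure of each tool and `ToolCall` (name/description/arguments keys) is an assumption based on common tool-calling conventions, not the documented schema.

```python
# One plausible agent_data payload. Field names match the schema above;
# the inner tool/ToolCall structure is assumed for illustration.
agent_data = {
    "tools_available": [
        {"name": "web_search", "description": "Search the web for current information"},
        {"name": "calculator", "description": "Evaluate arithmetic expressions"},
        {"name": "send_email", "description": "Send an email on the user's behalf"},
    ],
    "tool_calls": [
        # For a current-events question, web_search is the relevant choice.
        {"name": "web_search", "arguments": {"query": "2024 Nobel Prize in Physics winner"}},
    ],
}
```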
Output Schema
| Field | Type | Description |
|---|---|---|
| score | float | Score (0-10 scale) |
| passed | bool | True if score meets threshold |
| reasoning | str | Detailed evaluation explanation |
| metadata | dict | Scorer-specific details |
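A short sketch of consuming a result with these fields; the literal values here are invented for illustration.

```python
# Handling a scorer result with the fields listed above.
# All values are invented for illustration.
result = {
    "score": 8.5,
    "passed": True,
    "reasoning": "web_search directly matches the information-seeking task; "
                 "no better alternative was available.",
    "metadata": {"scorer": "ToolRelevancyScorer", "threshold": 7.0},
}

if not result["passed"]:
    print(f"Tool selection failed ({result['score']}/10): {result['reasoning']}")
```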
Score Interpretation
Default threshold: 7/10. Scores at or above the threshold are marked as passed; scores below it fail.
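Concretely, "meets threshold" is taken here to mean score >= threshold, an assumption consistent with the `passed` field in the output schema:

```python
# Pass/fail semantics of the threshold: assumed to be score >= threshold,
# consistent with "True if score meets threshold" above.
def meets_threshold(score: float, threshold: float = 7.0) -> bool:
    return score >= threshold

assert meets_threshold(7.0)        # exactly at the default threshold: passed
assert not meets_threshold(6.9)    # just below it: failed
```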
Frequently Asked Questions
When should I use this scorer?
Use ToolRelevancyScorer when you need to evaluate the agent- and tool-usage aspects of your AI outputs. It is particularly useful for autonomous AI agent evaluation.
Why doesn't this scorer need expected output?
This scorer evaluates quality aspects that don't require comparison against a reference answer. It uses the system prompt and context as the implicit ground truth.
Can I customize the threshold?
Yes, the default threshold of 7 can be customized when configuring the scorer.
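If the scorer is configured programmatically, a custom threshold might look like the following sketch. The import path and constructor keyword are illustrative assumptions, not the documented Noveum API.

```python
# Hypothetical configuration sketch. The import path and the `threshold`
# keyword are assumptions for illustration, not the documented API.
from noveum.scorers import ToolRelevancyScorer  # assumed import path

strict_scorer = ToolRelevancyScorer(threshold=8.5)  # stricter than the 7.0 default
```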
Ready to try ToolRelevancyScorer?
Start evaluating your AI agents with Noveum.ai's comprehensive scorer library.