ToolRelevancyScorer

Overview

Evaluates whether the agent selected appropriate tools for the given task, assessing whether the chosen tools align with the task requirements and whether better alternatives existed. Uses LLM-based analysis to score tool-selection quality.

Tags: agent, tool-usage, llm-judge, trace-evaluation

Use Cases

  • Autonomous AI agent evaluation

How It Works

This scorer uses LLM-as-Judge technology to evaluate responses. It prompts a large language model with specific evaluation criteria and the content to assess, then analyzes the LLM's judgment to produce a score and detailed reasoning.
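
To make that pattern concrete, here is a minimal sketch of such a judge loop. The `complete` callable, the prompt wording, and all function names are illustrative assumptions, not Noveum.ai's actual implementation; only the 0-10 scale, the default threshold of 7, and the output fields come from this page.

```python
import json
from typing import Callable

# Hypothetical judge prompt -- the wording and JSON contract are
# illustrative assumptions, not Noveum.ai's actual prompts.
JUDGE_PROMPT = """You are evaluating an AI agent's tool selection.
Available tools: {tools_available}
Tool calls made: {tool_calls}

Rate the tool selection from 0 to 10 and explain your reasoning.
Respond as JSON: {{"score": <number>, "reasoning": "<explanation>"}}"""


def judge_tool_relevancy(
    tools_available: list | str,
    tool_calls: list | str,
    complete: Callable[[str], str],  # any chat-completion wrapper: prompt in, text out
    threshold: float = 7.0,
) -> dict:
    # Build the evaluation prompt from the agent's trace data.
    prompt = JUDGE_PROMPT.format(
        tools_available=tools_available, tool_calls=tool_calls
    )
    # Parse the judge model's JSON reply into a structured verdict.
    verdict = json.loads(complete(prompt))
    score = float(verdict["score"])
    return {
        "score": score,
        "passed": score >= threshold,
        "reasoning": verdict["reasoning"],
        "metadata": {"judge_prompt": prompt},
    }
```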

Input Schema

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `agent_data.tools_available` | `list \| str` | Yes | List of available tools or tool schema |
| `agent_data.tool_calls` | `list[ToolCall] \| str` | Yes | List of tool calls made by the agent |

Output Schema

| Field | Type | Description |
|-------|------|-------------|
| `score` | `float` | Score (0-10 scale) |
| `passed` | `bool` | True if score meets threshold |
| `reasoning` | `str` | Detailed evaluation explanation |
| `metadata` | `dict` | Scorer-specific details |
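
To make the two schemas concrete, the sketch below pairs a hypothetical input payload with a plausible result. Every value is invented for illustration; the exact shape of a `ToolCall` may differ in the real SDK.

```python
# Example payload matching the input schema (all values illustrative).
agent_data = {
    "tools_available": [
        {"name": "web_search", "description": "Search the web for information"},
        {"name": "calculator", "description": "Evaluate arithmetic expressions"},
    ],
    "tool_calls": [
        {"name": "web_search", "arguments": {"query": "EUR to USD exchange rate"}},
    ],
}

# A result following the output schema might look like:
result = {
    "score": 8.5,    # float on the 0-10 scale
    "passed": True,  # 8.5 >= default threshold of 7
    "reasoning": "web_search is appropriate for retrieving a live exchange rate.",
    "metadata": {},  # scorer-specific details
}
```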

Score Interpretation

Default threshold: 7/10

| Score | Rating | Description |
|-------|--------|-------------|
| 9-10 | Excellent | Response fully meets all evaluation criteria |
| 7-8 | Good | Response meets most criteria with minor issues |
| 5-6 | Fair | Response partially meets criteria; needs improvement |
| 3-4 | Poor | Response has significant issues |
| 0-2 | Failing | Response fails to meet basic criteria |

Frequently Asked Questions

When should I use this scorer?

Use ToolRelevancyScorer when you need to evaluate the agent and tool-usage aspects of your AI outputs. It is particularly useful for evaluating autonomous AI agents.

Why doesn't this scorer need expected output?

This scorer evaluates quality aspects that don't require comparison against a reference answer. It uses the system prompt and context as the implicit ground truth.

Can I customize the threshold?

Yes, the default threshold of 7 can be customized when configuring the scorer.
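
As a minimal sketch of what the threshold controls (the helper below is illustrative, not part of the SDK), the `passed` flag is simply a comparison of the judge's score against the configured threshold:

```python
def passed_at(score: float, threshold: float = 7.0) -> bool:
    """Pass/fail is a comparison against the configured threshold."""
    return score >= threshold

print(passed_at(7.5))                 # True with the default threshold of 7
print(passed_at(7.5, threshold=8.0))  # False once the threshold is raised
```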

Quick Info

  • Category: Agent
  • Evaluation Type: LLM-as-Judge
  • Requires Expected Output: No
  • Default Threshold: 7/10

Ready to try ToolRelevancyScorer?

Start evaluating your AI agents with Noveum.ai's comprehensive scorer library.

Explore More Scorers

Discover 68+ LLM-as-Judge scorers for comprehensive AI evaluation