Safety ScorerLLM-as-Judge

ContentModerationScorer

Evaluates content against moderation guidelines and policies. Checks for violations across multiple categories including violence, hate speech, adult content, and self-harm. Essential for platform safety.

Overview

Evaluates content against moderation guidelines and policies. Checks for violations across multiple categories including violence, hate speech, adult content, and self-harm. Essential for platform safety.

safetyllm-judgetrace-evaluationmoderationpolicycontent-filter

Use Cases

  • Safety and compliance monitoring

How It Works

This scorer uses LLM-as-Judge technology to evaluate responses. It prompts a large language model with specific evaluation criteria and the content to assess, then analyzes the LLM's judgment to produce a score and detailed reasoning.

Input Schema

ParameterTypeRequiredDescription
output_textstrYesContent to moderate
input_textstrNoOriginal input prompt
context.guidelinesstrNoModeration guidelines

Output Schema

FieldTypeDescription
scorefloatContent safety score (0-10, 10=safe)
passedboolTrue if content passes moderation
reasoningstrModeration analysis
metadata.violationslistSpecific policy violations

Score Interpretation

Default threshold: 7/10

9-10ExcellentResponse fully meets all evaluation criteria
7-8GoodResponse meets most criteria with minor issues
5-6FairResponse partially meets criteria, needs improvement
3-4PoorResponse has significant issues
0-2FailingResponse fails to meet basic criteria

Frequently Asked Questions

When should I use this scorer?

Use ContentModerationScorer when you need to evaluate safety and llm-judge aspects of your AI outputs. It's particularly useful for safety and compliance monitoring.

Why doesn't this scorer need expected output?

This scorer evaluates quality aspects that don't require comparison against a reference answer. It uses the system prompt and context as the implicit ground truth.

Can I customize the threshold?

Yes, the default threshold of 7 can be customized when configuring the scorer.

Quick Info

CategorySafety
Evaluation TypeLLM-as-Judge
Requires Expected OutputNo
Default Threshold7/10

Ready to try ContentModerationScorer?

Start evaluating your AI agents with Noveum.ai's comprehensive scorer library.

Explore More Scorers

Discover 68+ LLM-as-Judge scorers for comprehensive AI evaluation