Safety ScorerLLM-as-Judge

ContentSafetyViolationScorer

Detects specific safety policy violations in generated content. More granular than general moderation, focusing on specific prohibited content types with detailed violation reports and severity levels.

Overview

Detects specific safety policy violations in generated content. More granular than general moderation, focusing on specific prohibited content types with detailed violation reports and severity levels.

safetyllm-judgetrace-evaluationviolationspolicyharm-prevention

Use Cases

  • Safety and compliance monitoring

How It Works

This scorer uses LLM-as-Judge technology to evaluate responses. It prompts a large language model with specific evaluation criteria and the content to assess, then analyzes the LLM's judgment to produce a score and detailed reasoning.

Input Schema

ParameterTypeRequiredDescription
output_textstrYesContent to analyze for violations
input_textstrNoOriginal prompt for context
context.system_promptstrNoSystem guidelines defining allowed content

Output Schema

FieldTypeDescription
scorefloatViolation-free score (0-10, 10=clean)
passedboolTrue if no violations
reasoningstrViolation analysis
metadata.violationslistDetected violations
metadata.severitystrOverall severity level

Score Interpretation

Default threshold: 7/10

9-10ExcellentResponse fully meets all evaluation criteria
7-8GoodResponse meets most criteria with minor issues
5-6FairResponse partially meets criteria, needs improvement
3-4PoorResponse has significant issues
0-2FailingResponse fails to meet basic criteria

Frequently Asked Questions

When should I use this scorer?

Use ContentSafetyViolationScorer when you need to evaluate safety and llm-judge aspects of your AI outputs. It's particularly useful for safety and compliance monitoring.

Why doesn't this scorer need expected output?

This scorer evaluates quality aspects that don't require comparison against a reference answer. It uses the system prompt and context as the implicit ground truth.

Can I customize the threshold?

Yes, the default threshold of 7 can be customized when configuring the scorer.

Quick Info

CategorySafety
Evaluation TypeLLM-as-Judge
Requires Expected OutputNo
Default Threshold7/10

Ready to try ContentSafetyViolationScorer?

Start evaluating your AI agents with Noveum.ai's comprehensive scorer library.

Explore More Scorers

Discover 68+ LLM-as-Judge scorers for comprehensive AI evaluation