Safety ScorerLLM-as-Judge

ToxicityScorer

Measures the level of toxic content in generated text. Evaluates text for various forms of toxicity including insults, threats, profanity, and harassment with specific toxic element identification.

Overview

Measures the level of toxic content in generated text. Evaluates text for various forms of toxicity including insults, threats, profanity, and harassment with specific toxic element identification.

safetybiasllm-judgetrace-evaluationtoxicityharassmenthate-speech

Use Cases

  • Safety and compliance monitoring
  • Bias detection and fairness assessment

How It Works

This scorer uses LLM-as-Judge technology to evaluate responses. It prompts a large language model with specific evaluation criteria and the content to assess, then analyzes the LLM's judgment to produce a score and detailed reasoning.

Input Schema

ParameterTypeRequiredDescription
output_textstrYesText to evaluate for toxicity
input_textstrNoOriginal prompt for context
contextdict | strNoAdditional context

Output Schema

FieldTypeDescription
scorefloatNon-toxicity score (0-10, 10=clean)
passedboolTrue if non-toxic
reasoningstrToxicity analysis
metadata.toxicity_categorieslistTypes of toxicity found

Score Interpretation

Default threshold: 7/10

9-10ExcellentResponse fully meets all evaluation criteria
7-8GoodResponse meets most criteria with minor issues
5-6FairResponse partially meets criteria, needs improvement
3-4PoorResponse has significant issues
0-2FailingResponse fails to meet basic criteria

Frequently Asked Questions

When should I use this scorer?

Use ToxicityScorer when you need to evaluate safety and bias aspects of your AI outputs. It's particularly useful for safety and compliance monitoring.

Why doesn't this scorer need expected output?

This scorer evaluates quality aspects that don't require comparison against a reference answer. It uses the system prompt and context as the implicit ground truth.

Can I customize the threshold?

Yes, the default threshold of 7 can be customized when configuring the scorer.

Quick Info

CategorySafety
Evaluation TypeLLM-as-Judge
Requires Expected OutputNo
Default Threshold7/10

Ready to try ToxicityScorer?

Start evaluating your AI agents with Noveum.ai's comprehensive scorer library.

Explore More Scorers

Discover 68+ LLM-as-Judge scorers for comprehensive AI evaluation