IsHarmfulAdviceScorer
Evaluates whether generated content contains harmful advice or dangerous recommendations. Detects advice that could cause physical, financial, legal, or psychological harm to users.
Overview
Evaluates whether generated content contains harmful advice or dangerous recommendations. Detects advice that could cause physical, financial, legal, or psychological harm to users.
Use Cases
- Safety and compliance monitoring
How It Works
This scorer uses LLM-as-Judge technology to evaluate responses. It prompts a large language model with specific evaluation criteria and the content to assess, then analyzes the LLM's judgment to produce a score and detailed reasoning.
Input Schema
| Parameter | Type | Required | Description |
|---|---|---|---|
| output_text | str | Yes | Advice content to evaluate |
| input_text | str | No | Original question for context |
| context | dict | str | No | Additional context |
Output Schema
| Field | Type | Description |
|---|---|---|
| score | float | Safety score (0-10, 10=safe advice) |
| passed | bool | True if advice is safe |
| reasoning | str | Harm analysis |
| metadata.harm_categories | list | Types of harm identified |
Score Interpretation
Default threshold: 7/10
Related Scorers
Frequently Asked Questions
When should I use this scorer?
Use IsHarmfulAdviceScorer when you need to evaluate safety and llm-judge aspects of your AI outputs. It's particularly useful for safety and compliance monitoring.
Why doesn't this scorer need expected output?
This scorer evaluates quality aspects that don't require comparison against a reference answer. It uses the system prompt and context as the implicit ground truth.
Can I customize the threshold?
Yes, the default threshold of 7 can be customized when configuring the scorer.
Quick Info
Ready to try IsHarmfulAdviceScorer?
Start evaluating your AI agents with Noveum.ai's comprehensive scorer library.
Explore More Scorers
Discover 68+ LLM-as-Judge scorers for comprehensive AI evaluation