# ContextualPrecisionScorerPP
Computes precision for retrieved chunks using LLM-based relevance assessment: precision = (number of relevant chunks retrieved) ÷ (total number of chunks retrieved).
## Overview

This scorer measures retrieval precision: the fraction of retrieved chunks that are actually relevant to the original query. An LLM judge assesses each chunk's relevance, and the precision ratio (relevant chunks ÷ total chunks) is reported on a 0-10 scale.
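The arithmetic is simple once per-chunk verdicts exist. Below is a minimal sketch of the formula; the assumption that the 0-10 score is the raw precision ratio scaled by ten is ours, not confirmed by this page:

```python
# Minimal sketch of the precision formula above. The x10 rescaling is an
# assumption about how the ratio maps onto the reported 0-10 score range.
def contextual_precision(relevance_flags: list[bool]) -> float:
    """Precision = relevant chunks / total chunks, scaled to 0-10."""
    if not relevance_flags:
        return 0.0
    return 10.0 * sum(relevance_flags) / len(relevance_flags)

# Example: 3 of 4 retrieved chunks judged relevant -> 7.5
print(contextual_precision([True, True, False, True]))  # 7.5
```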
## Use Cases
- RAG-based question answering systems
## How It Works

This scorer uses an LLM-as-Judge approach: it prompts a large language model with the original query, the retrieved chunks, and explicit relevance criteria, then parses the model's judgments into a numeric score and detailed reasoning.
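As an illustration, a per-chunk relevance judgment might look like the sketch below, assuming an OpenAI-style chat client. The prompt wording and model name are our assumptions; the scorer's actual prompt and backend are not documented here.

```python
# Illustrative LLM-as-Judge relevance check. The prompt text and model
# choice are assumptions, not the scorer's actual internals.
from openai import OpenAI

client = OpenAI()

def judge_chunk_relevance(query: str, chunk: str) -> bool:
    prompt = (
        "You are judging retrieval quality.\n"
        f"Query: {query}\n"
        f"Chunk: {chunk}\n"
        "Reply with exactly one word: RELEVANT or IRRELEVANT."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic judgments
    )
    verdict = response.choices[0].message.content.strip().upper()
    return verdict == "RELEVANT"
```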
## Input Schema
| Parameter | Type | Required | Description |
|---|---|---|---|
| input_text | str | Yes | Original query |
| context.chunks | list[str] | Yes | Retrieved chunks to evaluate |
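For concreteness, a payload matching this schema might look like the following (values are illustrative only):

```python
# Example payload shaped like the input schema above; values are
# illustrative, not real data.
payload = {
    "input_text": "What is the capital of France?",
    "context": {
        "chunks": [
            "Paris is the capital and largest city of France.",
            "France is located in Western Europe.",
            "The Eiffel Tower was completed in 1889.",
        ]
    },
}
```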
## Output Schema
| Field | Type | Description |
|---|---|---|
| score | float | Precision score (0-10) |
| passed | bool | True if above threshold |
| reasoning | str | Precision analysis |
| metadata.relevant_chunks | int | Relevant chunk count |
| metadata.total_chunks | int | Total chunks |
| metadata.chunk_results | list | Per-chunk relevance |
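A result shaped like this schema might look as follows. The structure of the individual `chunk_results` entries is an assumption; only the field names in the table above are documented.

```python
# Illustrative result matching the output schema above. The shape of the
# `chunk_results` entries is an assumption.
result = {
    "score": 6.7,     # 2 of 3 chunks relevant -> 2/3 scaled to 0-10
    "passed": False,  # below the default threshold of 7
    "reasoning": "2 of 3 retrieved chunks directly address the query.",
    "metadata": {
        "relevant_chunks": 2,
        "total_chunks": 3,
        "chunk_results": [
            {"chunk_index": 0, "relevant": True},
            {"chunk_index": 1, "relevant": True},
            {"chunk_index": 2, "relevant": False},
        ],
    },
}
```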
## Score Interpretation

The default threshold is 7/10. Scores above the threshold set `passed` to `True`.
## Frequently Asked Questions
### When should I use this scorer?

Use ContextualPrecisionScorerPP when you need to measure how much of your retrieval pipeline's output is actually relevant to the query. It's particularly useful for RAG-based question answering systems, where irrelevant chunks dilute the context passed to the generator.
### Why doesn't this scorer need expected output?

This scorer evaluates retrieval quality without comparing against a reference answer: the original query and the retrieved chunks serve as the implicit ground truth.
### Can I customize the threshold?

Yes. The default threshold of 7 can be overridden when configuring the scorer, as in the sketch below.
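For example, a stricter configuration might look like this. The import path and the `threshold` keyword are hypothetical, inferred from this page rather than a confirmed constructor signature:

```python
# Hypothetical configuration with a stricter threshold. Both the import
# path and the `threshold` parameter name are assumptions.
from noveum.scorers import ContextualPrecisionScorerPP

scorer = ContextualPrecisionScorerPP(threshold=8.0)  # default is 7
```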