NovaEval - AI Evaluation Engine
Production-ready AI model evaluation with 73+ specialized scorers and intelligent metric selection
NovaEval is Noveum's AI model evaluation engine, built for production use. It provides an intelligent, automated interface for evaluating language models across datasets, metrics, and deployment scenarios, with results integrated seamlessly into your Noveum dashboard.
About NovaEval
NovaEval powers Noveum's intelligent model assessment. Drawing on 73+ specialized scorers across multiple domains, it automatically selects the most relevant evaluation metrics for your use case and presents the results directly in your Noveum dashboard.
Key Capabilities
- Comprehensive Scorer Library: 73+ specialized scorers across accuracy, conversational AI, RAG, LLM-as-judge, and agent evaluation domains
- Intelligent Selection: Automatic scorer selection based on your AI application type and use case
- Custom Scorer Support: Create domain-specific evaluation metrics when standard scorers don't meet your needs
- Seamless Integration: Automatic trace processing and dashboard integration with no manual configuration required
- Real-time Results: Live scoring and performance metrics presented in your Noveum dashboard
- Production Ready: Built for enterprise-scale evaluation with robust error handling and scalability
Why Model Evaluation Matters
AI models perform differently across tasks, domains, and input distributions, and understanding their strengths and weaknesses is crucial for:
- Performance Optimization: Identify which models work best for specific use cases
- Cost Efficiency: Find the most cost-effective models without sacrificing quality
- Quality Assurance: Ensure consistent, reliable outputs across different scenarios
- Continuous Improvement: Track model performance over time and identify degradation
Comprehensive Scoring Framework
NovaEval provides a comprehensive suite of scorers organized by evaluation domain. All scorers implement the BaseScorer interface and support both synchronous and asynchronous evaluation.
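As a mental model, the sketch below shows what a shared scorer contract with sync and async entry points could look like. It is illustrative only: the names here (BaseScorer, ScoreResult, score, ascore) and the toy ExactMatchScorer implementation are assumptions for this sketch, not a verbatim copy of NovaEval's API.

```python
# Illustrative sketch only: BaseScorer, ScoreResult, score, and ascore are
# assumptions for this example, not a verbatim copy of NovaEval's API.
import asyncio
from dataclasses import dataclass

@dataclass
class ScoreResult:
    score: float         # normalized to [0, 1]
    reasoning: str = ""  # optional explanation of the verdict

class BaseScorer:
    """Shared contract: every scorer exposes sync and async entry points."""

    def score(self, prediction: str, reference: str, **context) -> ScoreResult:
        raise NotImplementedError

    async def ascore(self, prediction: str, reference: str, **context) -> ScoreResult:
        # Default async path: run the sync implementation in a worker thread.
        return await asyncio.to_thread(self.score, prediction, reference, **context)

class ExactMatchScorer(BaseScorer):
    """Toy exact-match scorer with a case-sensitivity switch."""

    def __init__(self, case_sensitive: bool = False):
        self.case_sensitive = case_sensitive

    def score(self, prediction: str, reference: str, **context) -> ScoreResult:
        a, b = ((prediction, reference) if self.case_sensitive
                else (prediction.lower(), reference.lower()))
        matched = a.strip() == b.strip()
        return ScoreResult(1.0 if matched else 0.0,
                           "exact match" if matched else "strings differ")

print(ExactMatchScorer().score("Paris", " paris ").score)  # 1.0
```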
🎯 Accuracy & Classification Metrics
- ExactMatchScorer: Exact string matching with case-sensitive/insensitive options.
- AccuracyScorer: Classification accuracy with intelligent answer extraction.
- MultiPatternAccuracyScorer: Multiple matching patterns and strategies.
- F1Scorer: Token-level F1 score with precision and recall (see the worked sketch below).
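For instance, token-level F1 can be pictured as the standard SQuAD-style computation sketched here; this is a common formulation, not necessarily F1Scorer's exact logic:

```python
# Worked sketch of token-level F1 (SQuAD-style precision/recall over shared
# tokens); a common formulation, not necessarily F1Scorer's exact logic.
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    pred, ref = prediction.split(), reference.split()
    overlap = sum((Counter(pred) & Counter(ref)).values())  # shared token count
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(token_f1("the cat sat", "the cat sat down"))  # ~0.857
```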
📊 NLP Metrics
- BLEUScorer: Translation quality evaluation with n-gram precision.
- ROUGEScorer: Summarization quality with n-gram overlap.
- LevenshteinSimilarityScorer: Edit distance-based similarity measurement.
💬 Conversational AI Metrics
- KnowledgeRetentionScorer: Evaluates information retention across conversations.
- ConversationRelevancyScorer: Response relevance to conversation context.
- ConversationCompletenessScorer: Assesses whether user intentions are fully addressed.
- RoleAdherenceScorer: Consistency with assigned persona or role.
- ConversationalMetricsScorer: Comprehensive conversational evaluation.
🔍 RAG (Retrieval-Augmented Generation) Metrics
- AnswerRelevancyScorer: Answer relevance using semantic similarity.
- FaithfulnessScorer: Measures faithfulness to context without hallucinations.
- ContextualPrecisionScorer: Precision of retrieved context relevance.
- ContextualRecallScorer: Recall of necessary information in context.
- RAGASScorer: Composite RAGAS methodology combining multiple metrics.
- ContextualPrecisionScorerPP: LLM-based chunk precision evaluation.
- ContextualRecallScorerPP: LLM-based chunk recall evaluation.
- RetrievalF1Scorer: F1 score for retrieval performance.
- RetrievalRankingScorer: Ranking quality using NDCG and Average Precision.
- SemanticSimilarityScorer: Semantic alignment using embeddings.
- RetrievalDiversityScorer: Evaluates diversity and non-redundancy.
- AggregateRAGScorer: Aggregates multiple RAG metrics.
📝 Answer Quality Metrics
- InformationDensityScorer: Information richness and content value.
- ClarityAndCoherenceScorer: Language clarity and logical flow.
- RAGAnswerQualityScorer: Comprehensive answer quality for RAG systems.
- CitationQualityScorer: Citation relevance and accuracy.
- AnswerCompletenessScorer: Completeness in addressing questions.
- QuestionAnswerAlignmentScorer: Answer alignment with question intent.
- TechnicalAccuracyScorer: Domain-specific technical accuracy.
- ToneConsistencyScorer: Consistency of tone throughout the response.
- TerminologyConsistencyScorer: Consistent use of terminology.
🛡️ Hallucination Detection Metrics
- FactualAccuracyScorer: Factual accuracy against context.
- ClaimVerificationScorer: Verifies individual claims against context.
- HallucinationDetectionScorer: Detects unsupported or fabricated information.
🔀 Multi-Context Metrics
- ConflictResolutionScorer: Handles contradictory information across contexts.
- ContextPrioritizationScorer: Prioritizes the most relevant context information.
- ContextFaithfulnessScorerPP: Faithfulness across multiple context sources.
- ContextGroundednessScorer: Measures grounding in provided contexts.
- ContextCompletenessScorer: Utilization of all relevant context.
- ContextConsistencyScorer: Consistency across multiple contexts.
🔒 Safety & Moderation Metrics
- AnswerRefusalScorer: Appropriate refusal of out-of-scope questions.
- ContentModerationScorer: Detects harmful or inappropriate content.
- ContentSafetyViolationScorer: Detects specific safety violations.
- ToxicityScorer: Detects toxic or offensive language.
- IsHarmfulAdviceScorer: Detects harmful advice.
⚖️ Bias Detection Metrics
- BiasDetectionScorer: General bias detection across multiple bias types.
- NoGenderBiasScorer: Detects gender bias and stereotypes.
- NoRacialBiasScorer: Detects racial bias and stereotypes.
- NoAgeBiasScorer: Detects age bias and stereotypes.
- CulturalSensitivityScorer: Evaluates cultural sensitivity and inclusivity.
✅ Format Validation Metrics
- IsJSONScorer: Valid JSON format validation (sketched below).
- IsEmailScorer: Valid email address format validation.
- ValidLinksScorer: URL and link validation.
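Format checks like these are deterministic; a JSON validity check, for example, reduces to a parse attempt, roughly as below (illustrative, not IsJSONScorer's actual source):

```python
# Illustrative JSON validity check; deterministic, no LLM involved. Not
# IsJSONScorer's actual source.
import json

def is_valid_json(text: str) -> float:
    """Binary score: 1.0 if the text parses as JSON, else 0.0."""
    try:
        json.loads(text)
        return 1.0
    except (json.JSONDecodeError, TypeError):
        return 0.0

print(is_valid_json('{"status": "ok"}'))  # 1.0
print(is_valid_json("not json"))          # 0.0
```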
🤖 LLM-as-Judge Metrics
- GEvalScorer: LLM evaluation with chain-of-thought reasoning (see the pattern sketch below).
- PanelOfJudgesScorer: Multi-LLM evaluation with diverse perspectives.
- SpecializedPanelScorer: Specialized panel configurations with expertise.
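The common pattern behind these scorers is to hand a rubric, the question, and the answer to a judge model and parse a numeric verdict. A minimal sketch follows; the `complete` callable stands in for whatever LLM client you use, and the prompt and parsing conventions are assumptions of this sketch:

```python
# Sketch of the LLM-as-judge pattern; `complete` is a stand-in for any LLM
# client, and the prompt/parsing conventions are assumptions of this sketch.
from typing import Callable

def llm_judge_score(question: str, answer: str, rubric: str,
                    complete: Callable[[str], str]) -> float:
    """Ask a judge model for a 0-10 rating and normalize it to [0, 1]."""
    prompt = (
        f"Rubric: {rubric}\n"
        f"Question: {question}\n"
        f"Answer: {answer}\n"
        "Reason step by step, then end your reply with a single integer from 0 to 10."
    )
    reply = complete(prompt)                        # any chat/completions wrapper
    return float(reply.strip().split()[-1]) / 10.0  # take the trailing number
```

A panel variant in the spirit of PanelOfJudgesScorer would run the same rubric against several judge models and aggregate their verdicts, for example by averaging.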
🎭 Agent Evaluation Metrics
- Tool Relevancy Scoring: Appropriateness of tool call selection.
- Tool Correctness Scoring: Correctness of tool call execution.
- Parameter Correctness Scoring: Correctness of tool parameters.
- Task Progression Scoring: Progress toward assigned tasks.
- Context Relevancy Scoring: Response appropriateness for role and task.
- Role Adherence Scoring: Consistency with assigned agent role.
- Goal Achievement Scoring: Overall goal accomplishment measurement.
- Conversation Coherence Scoring: Logical flow and context maintenance.
🔧 RAG Pipeline Metrics
- QueryProcessingEvaluator: Query understanding and preprocessing.
- RetrievalStageEvaluator: Retrieval quality and relevance.
- RerankingEvaluator: Reranking and generation stage assessment.
- RAGPipelineEvaluator: Comprehensive pipeline evaluation.
- PipelineCoordinationScorer: Stage coordination and integration.
- LatencyAnalysisScorer: Performance and bottleneck analysis.
- ResourceUtilizationScorer: Resource efficiency evaluation.
- ErrorPropagationScorer: Error handling across stages.
How Noveum Handles Evaluation
Noveum handles the entire evaluation process for you automatically:
1. Automatic Trace Processing
- Your AI traces are automatically processed and converted into evaluation datasets
- No manual data preparation or configuration required
- Real-world data from your actual AI interactions
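Conceptually, each trace is flattened into an evaluation sample that pairs the captured input and output with any retrieved context and metadata. The shape below is a hypothetical illustration, not Noveum's actual schema:

```python
# Hypothetical shape of an evaluation sample derived from a trace; the field
# names and example values are illustrative, not Noveum's actual schema.
from dataclasses import dataclass, field

@dataclass
class EvalSample:
    trace_id: str                                   # links back to the source trace
    input: str                                      # user query captured by the SDK
    output: str                                     # model response from the same trace
    retrieved_context: list[str] = field(default_factory=list)  # RAG chunks, if any
    metadata: dict = field(default_factory=dict)    # model name, latency, etc.

sample = EvalSample(
    trace_id="tr_abc123",
    input="What is our refund policy?",
    output="Refunds are available within 30 days of purchase.",
    retrieved_context=["Refund policy: purchases may be returned within 30 days."],
    metadata={"model": "gpt-4o", "latency_ms": 840},
)
```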
2. Intelligent Scorer Selection
Based on your specific use case and AI application type, Noveum automatically selects the most relevant scorers from our comprehensive suite:
- RAG Applications: Contextual precision, faithfulness, answer relevancy
- Conversational AI: Knowledge retention, conversation relevancy, role adherence
- Agent Systems: Tool correctness, task progression, goal achievement
- Classification Tasks: Accuracy, F1 scores, exact match evaluation
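One way to picture the selection logic is a mapping from application type to a default scorer set, as in the sketch below. This is conceptual only: the scorer names echo the catalog above, the agent entries are descriptive names rather than class names, and Noveum's real selection also weighs your traces and configuration.

```python
# Conceptual sketch of use-case-driven scorer selection; not Noveum's
# internal selection code.
DEFAULT_SCORERS = {
    "rag": ["ContextualPrecisionScorer", "FaithfulnessScorer", "AnswerRelevancyScorer"],
    "conversational": ["KnowledgeRetentionScorer", "ConversationRelevancyScorer",
                       "RoleAdherenceScorer"],
    "agent": ["tool_correctness", "task_progression", "goal_achievement"],
    "classification": ["AccuracyScorer", "F1Scorer", "ExactMatchScorer"],
}

def select_scorers(app_type: str) -> list[str]:
    """Return a default scorer set for an application type."""
    return DEFAULT_SCORERS.get(app_type, ["AccuracyScorer"])

print(select_scorers("rag"))
```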
3. Custom Scorer Creation
When standard scorers don't meet your specific requirements, Noveum enables you to:
- Create custom evaluation metrics tailored to your business needs
- Define domain-specific scoring criteria
- Implement proprietary evaluation logic
- Integrate with your existing evaluation frameworks
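Building on the hypothetical BaseScorer sketch shown earlier, a domain-specific custom scorer might look like the following; the class, the POL- citation convention, and the scoring rule are all invented for illustration:

```python
# Invented example reusing the hypothetical BaseScorer/ScoreResult sketch from
# earlier; the POL- citation rule is a made-up domain convention.
class PolicyComplianceScorer(BaseScorer):
    """Example business rule: responses must cite an internal policy ID."""

    REQUIRED_PREFIX = "POL-"  # hypothetical convention for policy references

    def score(self, prediction: str, reference: str = "", **context) -> ScoreResult:
        cited = self.REQUIRED_PREFIX in prediction
        return ScoreResult(
            score=1.0 if cited else 0.0,
            reasoning="policy ID cited" if cited else "no policy ID found",
        )
```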
4. Dashboard Integration
All evaluation results are automatically presented in your Noveum dashboard:
- Real-time scoring and performance metrics
- Comparative analysis across different models
- Historical performance tracking
- Actionable insights and recommendations
Choosing the Right Scorers
With our extensive collection of 73+ specialized scorers, selecting the right combination is crucial for meaningful evaluation:
For RAG Applications
- AnswerRelevancyScorer: Ensures answers directly address the questions
- FaithfulnessScorer: Prevents hallucinations and maintains factual accuracy
- ContextualPrecisionScorer: Validates relevance of retrieved context
- ContextualRecallScorer: Ensures comprehensive information coverage
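These scorers are typically run as a suite and combined, in the spirit of AggregateRAGScorer. The sketch below shows one way to weight individual scores into a composite; the Scorer protocol, float return values, and weighting scheme are assumptions of this sketch:

```python
# Sketch of weighting several RAG scorers into one composite, in the spirit
# of AggregateRAGScorer; the protocol and weights are assumptions.
from typing import Protocol

class Scorer(Protocol):
    def score(self, prediction: str, reference: str) -> float: ...

def aggregate_rag_score(prediction: str, reference: str,
                        scorers: dict[str, Scorer],
                        weights: dict[str, float]) -> float:
    """Weighted mean of per-scorer results, each assumed to lie in [0, 1]."""
    total = sum(weights[name] * scorers[name].score(prediction, reference)
                for name in scorers)
    return total / sum(weights.values())
```

Weighting faithfulness above the other metrics, for instance, biases the composite toward hallucination resistance.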
For Conversational AI
- KnowledgeRetentionScorer: Tracks information retention across conversations
- ConversationRelevancyScorer: Maintains context-aware responses
- RoleAdherenceScorer: Ensures consistent persona maintenance
- ConversationCompletenessScorer: Validates complete request fulfillment
For Agent Systems
- Tool Relevancy Scoring: Validates appropriate tool selection
- Task Progression Scoring: Measures goal advancement
- Goal Achievement Scoring: End-to-end task completion assessment
- Context Relevancy Scoring: Role-task alignment validation
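To make the agent metrics concrete, here is a toy version of a tool-correctness check that compares the calls an agent made against an expected sequence; the function, the search_flights tool, and the dict layout are hypothetical:

```python
# Toy version of a tool-correctness check: compare the calls an agent made
# against the expected sequence. The search_flights tool is hypothetical.
def tool_correctness(expected_calls: list[dict], actual_calls: list[dict]) -> float:
    """Fraction of expected tool calls matched in order by name and parameters."""
    if not expected_calls:
        return 1.0
    matched = sum(
        1 for exp, act in zip(expected_calls, actual_calls)
        if exp["tool"] == act["tool"] and exp.get("params") == act.get("params")
    )
    return matched / len(expected_calls)

print(tool_correctness(
    expected_calls=[{"tool": "search_flights", "params": {"to": "SFO"}}],
    actual_calls=[{"tool": "search_flights", "params": {"to": "SFO"}}],
))  # 1.0
```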
For Custom Requirements
When standard scorers don't meet your needs, create custom evaluation metrics that align with your specific business objectives and domain requirements.
Next Steps
- SDK Integration - Set up tracing to capture data for evaluation
- Dashboard Guide - Learn how to interpret evaluation results
- Integration Examples - See evaluation in action
- Best Practices - Optimize your evaluation strategy
Ready to optimize your AI models? Start by setting up SDK Integration to capture data for evaluation!
Get Early Access to Noveum.ai Platform
Be the first to get notified when we open the Noveum Platform to more users. All users get free access to the Observability suite; early users also receive free eval jobs and premium support for the first year.