Simple LLM Integration
Essential pattern for tracing basic LLM calls with Noveum
This example shows how to trace a basic LLM call using Noveum. You'll learn how to set up tracing, add context, and view results in the dashboard.
🎯 Use Case
Customer Support Chatbot: A simple chatbot that answers customer questions using GPT-4. We'll trace the LLM call to monitor performance, costs, and response quality.
🚀 Essential Integration Pattern
1. Initialize Noveum
Add this once at the start of your application:
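A minimal initialization sketch is below. The function name init() and its parameters (api_key, project) are assumptions based on typical tracing SDKs — check the Noveum docs for the exact signature. The project name "support-chatbot" is a placeholder.

```python
import os

# Sketch only: noveum_trace.init() and its parameters are assumptions;
# verify the exact signature against the SDK reference.
api_key = os.environ.get("NOVEUM_API_KEY")

if api_key:
    import noveum_trace

    noveum_trace.init(
        api_key=api_key,
        project="support-chatbot",  # hypothetical project name
    )
```

Reading the key from an environment variable keeps credentials out of source control and matches the troubleshooting step below that checks NOVEUM_API_KEY.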
2. Trace Your LLM Call
Wrap your LLM call with the tracing context managers:
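A sketch of the wrapped call is below, using the trace_operation() and trace_llm() context managers named in this guide; their exact parameters are assumptions, as is the span name "handle_support_request". The OpenAI client usage follows the standard openai Python SDK.

```python
import os

def handle_question(question: str) -> str:
    # Sketch: trace_operation()/trace_llm() are the context managers named
    # in this guide; their exact parameters are assumptions.
    import noveum_trace
    from openai import OpenAI

    with noveum_trace.trace_operation("handle_support_request"):
        with noveum_trace.trace_llm(model="gpt-4"):
            client = OpenAI()  # reads OPENAI_API_KEY from the environment
            response = client.chat.completions.create(
                model="gpt-4",
                messages=[
                    {"role": "system", "content": "You are a helpful support agent."},
                    {"role": "user", "content": question},
                ],
            )
            return response.choices[0].message.content

# Run only when both services are configured.
if os.environ.get("NOVEUM_API_KEY") and os.environ.get("OPENAI_API_KEY"):
    print(handle_question("How do I reset my password?"))
```

The outer context manager traces the whole support request; the inner one traces just the model call, so the dashboard shows the LLM's share of total latency.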
📊 Key Concepts
Trace Structure
- trace_operation() creates a parent span for the entire operation
- trace_llm() creates a child span specifically for the LLM call
- Spans are nested to show the relationship between operations
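The parent/child relationship can be illustrated with a conceptual stand-in (this is not the Noveum SDK — just a few lines showing how nested context managers yield a span tree):

```python
# Conceptual stand-in, NOT the Noveum SDK: nested context managers
# record each span's parent, producing a tree.
from contextlib import contextmanager

_stack = []   # names of currently open spans
spans = []    # finished spans with their parent recorded

@contextmanager
def span(name):
    parent = _stack[-1] if _stack else None
    _stack.append(name)
    try:
        yield
    finally:
        _stack.pop()
        spans.append({"name": name, "parent": parent})

with span("handle_support_request"):   # parent span
    with span("llm_call"):             # child span
        pass
# spans now records that llm_call's parent is handle_support_request
```

This is the same shape the dashboard renders: the inner LLM span hangs off the outer operation span.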
Adding Context
Use set_attributes() to add metadata that helps you filter and analyze traces:
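For example, a plain dict of key/value pairs works well; the keys below are illustrative, not a fixed schema, and the set_attributes() call shape is assumed:

```python
# Illustrative attribute keys, not a required schema.
attributes = {
    "user.id": "cust_123",
    "session.id": "sess_456",
    "llm.model": "gpt-4",
    "llm.temperature": 0.7,
}

# Inside a traced block you would attach these to the active span,
# e.g. span.set_attributes(attributes)  # call shape assumed
```

Consistent, dotted key names make it easy to filter traces by user, session, or model in the dashboard.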
Tracking Token Usage
Always track token usage for cost monitoring and optimization:
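A small helper for turning the token counts reported by the API into a cost estimate is sketched below. The per-token prices are hypothetical placeholders — always check your provider's current pricing page:

```python
# Hypothetical per-token prices (USD per token); check your provider's
# current rates before relying on these numbers.
PRICING = {
    "gpt-4": {"prompt": 30.00 / 1_000_000, "completion": 60.00 / 1_000_000},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Rough cost estimate from the token counts the API reports."""
    rates = PRICING[model]
    return prompt_tokens * rates["prompt"] + completion_tokens * rates["completion"]

# e.g. counts taken from response.usage on an OpenAI chat completion
usage = {"prompt_tokens": 120, "completion_tokens": 80}
cost = estimate_cost("gpt-4", usage["prompt_tokens"], usage["completion_tokens"])
```

Recording both the raw counts and the estimate as span attributes lets you aggregate spend per user, per session, or per model in the dashboard.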
📈 What You'll See in the Dashboard
Your traces will appear in the Noveum dashboard with:
- Nested span structure showing the operation hierarchy
- Duration for each operation
- Token usage and estimated costs
- Custom attributes you've added
- Timeline of events
🔧 Optional: Adding More Context
You can add custom attributes to track business-specific metrics:
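One way to do this is a small helper that derives business attributes from the interaction; the key names below are examples, not a required schema:

```python
# Illustrative business attributes for a support interaction
# (key names are examples, not a required schema).
def business_attributes(category: str, resolved: bool, response_text: str) -> dict:
    return {
        "support.category": category,
        "support.resolved": resolved,
        "support.response_length": len(response_text),
    }

attrs = business_attributes("billing", True, "Your refund has been issued.")
```

Attaching attributes like these to the parent span lets you slice traces by ticket category or resolution rate rather than only by model metrics.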
🔍 Quick Troubleshooting
If traces aren't appearing in the dashboard:
- Verify your NOVEUM_API_KEY is set correctly
- Wait 30-60 seconds for traces to process
- Check that you're viewing the correct project
✅ Next Steps
Once you see your traces in the dashboard, you can:
- Explore advanced patterns in the LangChain and LangGraph examples
- Learn about Context Managers for complex workflows
- Set up Evaluation to assess your AI agents' responses