Basic LLM Tracing
Learn how to trace basic LangChain LLM calls with Noveum Trace
This guide shows you how to trace basic LLM calls using LangChain with Noveum Trace. You'll learn the simplest integration pattern that works with any LangChain LLM.
🎯 Use Case
Customer Support Chatbot: A simple chatbot that answers customer questions using GPT-4. We'll trace the LLM call to monitor performance, costs, and response quality.
🚀 Complete Working Example
Here's a complete, working example you can copy and run:
📋 Prerequisites
Set your environment variables:
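For example (placeholder values, replace with your own keys):

```shell
export NOVEUM_API_KEY="your-noveum-api-key"   # from the Noveum dashboard
export OPENAI_API_KEY="your-openai-api-key"   # needed by ChatOpenAI
```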
🔧 How It Works
1. Callback Handler Setup
The `NoveumTraceCallbackHandler` automatically captures:
- Input messages and prompts
- Model responses
- Token usage and costs
- Latency metrics
- Error information
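The real handler ships in Noveum's SDK; as a rough, dependency-free sketch, a handler of this shape hooks into LangChain's callback lifecycle (`on_llm_start` / `on_llm_end` / `on_llm_error`) and records each of the fields listed above. All names here are illustrative, not Noveum's actual implementation:

```python
import time

class SketchTraceHandler:
    """Illustrative stand-in for a tracing callback handler.

    Mirrors the LangChain callback hooks on_llm_start / on_llm_end /
    on_llm_error. Not the real NoveumTraceCallbackHandler.
    """

    def __init__(self):
        self.spans = []          # finished spans, oldest first
        self._start = None
        self._prompts = None

    def on_llm_start(self, serialized, prompts, **kwargs):
        # Capture the input messages and start the latency clock.
        self._start = time.perf_counter()
        self._prompts = list(prompts)

    def on_llm_end(self, response, **kwargs):
        # Record output, token usage, and latency for this call.
        self.spans.append({
            "input": self._prompts,
            "output": response.get("text"),
            "token_usage": response.get("token_usage"),
            "latency_ms": (time.perf_counter() - self._start) * 1000.0,
            "error": None,
        })

    def on_llm_error(self, error, **kwargs):
        # Capture error information instead of a normal output.
        self.spans.append({
            "input": self._prompts,
            "output": None,
            "token_usage": None,
            "latency_ms": (time.perf_counter() - self._start) * 1000.0,
            "error": str(error),
        })

# Simulated call, so the lifecycle is visible without an API key:
h = SketchTraceHandler()
h.on_llm_start({}, ["What are your support hours?"])
h.on_llm_end({"text": "9am-5pm, Mon-Fri.", "token_usage": {"total_tokens": 18}})
print(h.spans[0]["latency_ms"])
```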
2. Automatic Tracing
When you call `llm.invoke()`, the callback handler:
- Creates a trace span
- Captures the input message
- Records the model call
- Captures the response
- Calculates performance metrics
3. Dashboard Visibility
View your traces in the Noveum dashboard:
- Traces: See the complete conversation flow
- Spans: Individual LLM call details
- Metrics: Performance and cost analysis
🎨 Advanced Examples
Multiple Messages
With Custom Metadata
📊 What You'll See in the Dashboard
After running these examples, check your Noveum dashboard:
Trace View
- Complete conversation flow
- Input and output messages
- Timing information
- Error details (if any)
Span Details
- Model used (GPT-4)
- Token counts (input/output)
- Cost information
- Latency metrics
Analytics
- Total requests
- Average response time
- Cost per request
- Success rate
🔍 Troubleshooting
Common Issues
No traces appearing?
- Check that `NOVEUM_API_KEY` is set correctly
- Verify the callback handler is added to your LLM
- Ensure you're actually calling `llm.invoke()` (not just creating the LLM)
Missing metadata?
- Make sure to add attributes before making the call
- Check that the callback handler is properly initialized
Performance issues?
- The callback adds minimal overhead (~1-2ms per call)
- For high-frequency applications, consider batching
🚀 Next Steps
Now that you've mastered basic LLM tracing, explore these advanced patterns:
- Chain Tracing - Multi-step workflows
💡 Pro Tips
- Use callbacks consistently: Add the callback handler to all your LangChain components
- Add context: Use custom attributes to track business-specific information
- Monitor costs: Set up alerts for unexpected spending patterns
- Debug errors: Use trace details to identify and fix issues quickly
Get Early Access to Noveum.ai Platform
Be the first to get notified when we open the Noveum Platform to more users. All users get free access to the Observability suite; early users also get free eval jobs and premium support for the first year.