Simple LLM Integration
Complete working example of basic LLM call tracing with Noveum
This example shows how to trace a basic LLM call using Noveum. You'll learn how to set up tracing, add context, and view results in the dashboard.
🎯 Use Case
Customer Support Chatbot: A simple chatbot that answers customer questions using GPT-4. We'll trace the LLM call to monitor performance, costs, and response quality.
🚀 Complete Working Example
Here's a complete, working example you can copy and run:
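The sketch below illustrates the flow. The OpenAI client usage is standard, but the Noveum calls (noveum_trace.init, start_span, set_attribute, add_event) are assumptions based on common tracing SDKs, so check the Noveum SDK reference for the exact names before running it.

```python
# NOTE: the Noveum calls below (init, start_span, set_attribute, add_event)
# are assumptions based on common tracing SDKs -- check the SDK reference
# for the exact API before running this.
import os

import noveum_trace
from openai import OpenAI

# Initialize tracing once at startup (project name is illustrative)
noveum_trace.init(
    api_key=os.environ["NOVEUM_API_KEY"],
    project="customer-support-bot",
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def answer_customer_query(customer_id: str, query: str, query_type: str = "general") -> str:
    # Root span: the entire customer interaction
    with noveum_trace.start_span("customer-support-query") as root:
        root.set_attribute("customer.id", customer_id)
        root.set_attribute("customer.query_type", query_type)
        root.set_attribute("customer.query_length", len(query))
        root.set_attribute("bot.version", "1.0.0")
        root.add_event("customer.query.received")

        try:
            # Child span: the LLM call inside the interaction
            with noveum_trace.start_span("gpt-4") as llm:
                llm.set_attribute("ai.model", "gpt-4")
                llm.set_attribute("ai.provider", "openai")
                llm.set_attribute("ai.temperature", 0.7)
                llm.add_event("ai.completion.started")

                response = client.chat.completions.create(
                    model="gpt-4",
                    temperature=0.7,
                    messages=[
                        {"role": "system",
                         "content": "You are a helpful customer support agent."},
                        {"role": "user", "content": query},
                    ],
                )

                llm.add_event("ai.completion.finished")
                llm.set_attribute("ai.tokens.input", response.usage.prompt_tokens)
                llm.set_attribute("ai.tokens.output", response.usage.completion_tokens)

            answer = response.choices[0].message.content
            root.set_attribute("response.length", len(answer))
            root.add_event("customer.query.answered")
            return answer

        except Exception as exc:
            # Record the failure on the trace before propagating it
            root.add_event("error", {"message": str(exc)})
            root.set_attribute("status", "error")
            raise


if __name__ == "__main__":
    print(answer_customer_query("cust_123", "How do I reset my password?", "password_reset"))
```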
📊 What This Example Does
1. Trace Structure
- Root Span: customer-support-query, covering the entire customer interaction
- Child Span: gpt-4, the LLM call within the interaction
- Events: a timeline of what happened during the interaction (see the tree sketch below)
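Conceptually, the trace produced by the sketch above has this shape:

```text
customer-support-query              (root span: the whole interaction)
├── customer.query.received         (event)
├── gpt-4                           (child span: the LLM call)
│   ├── ai.completion.started       (event)
│   └── ai.completion.finished      (event)
└── customer.query.answered         (event)
```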
2. Attributes Added
- Customer Context: ID, query length, query type
- AI Context: Model, provider, temperature, token usage
- Response Context: Length, quality, cost (a cost estimate is sketched after this list)
- System Context: Bot version, timestamps
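The cost attribute is not computed automatically in the sketch above; a rough estimate can be derived from the token usage inside the gpt-4 span. The prices below are illustrative GPT-4 list prices, so check OpenAI's current pricing before relying on them.

```python
# Rough cost estimate from token usage (illustrative GPT-4 list prices)
input_cost = response.usage.prompt_tokens * 0.03 / 1000
output_cost = response.usage.completion_tokens * 0.06 / 1000
llm.set_attribute("ai.cost.estimate_usd", round(input_cost + output_cost, 4))
```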
3. Events Tracked
- Query Received: When the customer asks a question
- AI Started: When the LLM call begins
- AI Finished: When the LLM call completes
- Query Answered: When the response is ready
- Error Events: If something goes wrong (an example event payload follows this list)
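If the SDK accepts a payload of details on an event (an assumption here, mirroring the error event in the sketch above), each event can carry a little structured context:

```python
# Attach details to an event (payload support and field names are assumptions)
root.add_event("customer.query.received", {
    "query_type": "password_reset",
    "channel": "web_chat",
})
```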
🎯 Expected Output
When you run this example, you'll see:
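Assuming the sketch above is saved as support_bot.py (a hypothetical name), the console output is simply the model's reply and will vary from run to run; the trace itself appears in the dashboard shortly afterwards:

```text
$ python support_bot.py
You can reset your password from the login page by selecting "Forgot
password" and following the instructions in the email we send you. Let me
know if the reset email doesn't arrive within a few minutes.
```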
📈 Dashboard Visualization
In the Noveum dashboard, you'll see:
Trace View
- A waterfall of the customer-support-query root span with the gpt-4 child span nested inside it, showing when each operation started and how long it ran
Span Details
- Duration: How long each operation took
- Token Usage: Input and output tokens
- Cost: Estimated cost of the LLM call
- Status: Success or error
- Attributes: All the metadata we added
Events Timeline
- 10:30:00.000: customer.query.received
- 10:30:00.100: ai.completion.started
- 10:30:01.800: ai.completion.finished
- 10:30:01.900: customer.query.answered
🔧 Customization Ideas
Add More Context
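For example, you could attach more customer and session context to the root span before returning the answer. The attribute names and the customer dict below are illustrative, not a fixed schema:

```python
def add_customer_context(span, customer: dict, session_id: str) -> None:
    # Illustrative attribute names -- adapt them to your own schema
    span.set_attribute("customer.tier", customer.get("tier", "standard"))
    span.set_attribute("customer.channel", customer.get("channel", "web_chat"))
    span.set_attribute("customer.account_age_days", customer.get("account_age_days", 0))
    span.set_attribute("session.id", session_id)
```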
Track Response Quality
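A simple heuristic score can be recorded alongside the response; the checks below are placeholders for whatever evaluation logic you actually use:

```python
def record_response_quality(span, answer: str) -> None:
    # Placeholder heuristics -- swap in real evaluation logic
    span.set_attribute("response.length", len(answer))
    span.set_attribute("response.mentions_next_step", "contact support" in answer.lower())
    span.set_attribute("response.quality.heuristic", 1.0 if len(answer) > 40 else 0.5)
```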
Add Business Metrics
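Business-level attributes can also live on the root span; the names below are examples rather than a required schema, so name them however your team reports on them:

```python
def add_business_metrics(span, resolved: bool, escalated: bool) -> None:
    # Example business attributes for support reporting
    span.set_attribute("business.resolved_on_first_reply", resolved)
    span.set_attribute("business.escalated_to_human", escalated)
    span.set_attribute("business.support_queue", "tier_1")
```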
🔍 Troubleshooting
Common Issues
"API key not found" error:
"No traces appearing" in dashboard:
- Wait 30-60 seconds for traces to appear
- Check that your API key is correct
- Ensure you're looking at the right project
"OpenAI API error":
- Verify your OpenAI API key is valid
- Check that you have credits in your OpenAI account
- Ensure the model name is correct
🎉 Success Checklist
Before moving on, make sure you can:
- See your traces in the Noveum dashboard
- View token usage and cost information
- Understand the trace timeline
- Add custom attributes to your traces
- Handle errors gracefully in your traces
Congratulations! You've successfully traced your first LLM call. This foundation will help you build more complex AI applications with full observability.