
Simple LLM Integration

Essential pattern for tracing basic LLM calls with Noveum

This example shows how to trace a basic LLM call using Noveum. You'll learn how to set up tracing, add context, and view results in the dashboard.

🎯 Use Case

Customer Support Chatbot: A simple chatbot that answers customer questions using GPT-4. We'll trace the LLM call to monitor performance, costs, and response quality.

🚀 Essential Integration Pattern

1. Initialize Noveum

Add this once at the start of your application:

import noveum_trace
import os
 
noveum_trace.init(
    api_key=os.getenv("NOVEUM_API_KEY"),
    project="customer-support-bot",
    environment="development"
)

2. Trace Your LLM Call

Wrap your LLM call with the tracing context managers — shown here inside a small helper function so the example can return the assistant's reply:

from noveum_trace import trace_llm, trace_operation
import openai
 
def answer_customer_question(customer_id: str, user_question: str) -> str:
    """Answer one customer question and trace the full operation."""
    # Trace the entire operation
    with trace_operation("customer-support-query") as main_span:
        # Add context to the main span
        main_span.set_attributes({
            "customer.id": customer_id,
            "query.type": "customer_support"
        })
 
        # Trace the specific LLM call
        with trace_llm(model="gpt-4", provider="openai") as llm_span:
            # Make your LLM call
            response = openai.chat.completions.create(
                model="gpt-4",
                messages=[
                    {"role": "system", "content": "You are a helpful assistant."},
                    {"role": "user", "content": user_question}
                ]
            )
 
            # Track token usage for cost monitoring
            llm_span.set_usage_attributes(
                input_tokens=response.usage.prompt_tokens,
                output_tokens=response.usage.completion_tokens
            )
 
        return response.choices[0].message.content
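
To try it end to end, call the helper with a customer ID and a question. The values below are illustrative placeholders, not real data:

# Example call — the ID and question are placeholders.
answer = answer_customer_question(
    customer_id="cust_12345",
    user_question="How do I update the payment method on my account?"
)
print(answer)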

📊 Key Concepts

Trace Structure

  • trace_operation() creates a parent span for the entire operation
  • trace_llm() creates a child span specifically for the LLM call
  • Spans are nested to show the relationship between operations (see the condensed skeleton below)
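
Condensed, the nesting from the example above looks like this — a minimal sketch of the same two context managers, not additional API surface:

# The parent span wraps the whole request; each LLM call becomes a child span.
with trace_operation("customer-support-query") as main_span:
    with trace_llm(model="gpt-4", provider="openai") as llm_span:
        ...  # make the LLM call and record usage here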

Adding Context

Use set_attributes() to add metadata that helps you filter and analyze traces:

span.set_attributes({
    "customer.id": customer_id,
    "query.type": "support",
    "environment": "production"
})

Tracking Token Usage

Always track token usage for cost monitoring and optimization:

llm_span.set_usage_attributes(
    input_tokens=response.usage.prompt_tokens,
    output_tokens=response.usage.completion_tokens
)
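
The dashboard derives estimated costs from these token counts for you; if you also want a rough figure in your own code, a sketch like the one below works. The per-1K-token rates and the llm.estimated_cost_usd attribute name are placeholders for illustration, not values from Noveum or OpenAI:

# Rough cost estimate from the tracked token counts.
# The rates below are placeholders — substitute your provider's current pricing.
INPUT_COST_PER_1K = 0.03   # placeholder USD per 1K input tokens
OUTPUT_COST_PER_1K = 0.06  # placeholder USD per 1K output tokens
 
estimated_cost = (
    response.usage.prompt_tokens / 1000 * INPUT_COST_PER_1K
    + response.usage.completion_tokens / 1000 * OUTPUT_COST_PER_1K
)
llm_span.set_attributes({"llm.estimated_cost_usd": round(estimated_cost, 6)})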

📈 What You'll See in the Dashboard

Your traces will appear in the Noveum dashboard with:

  • Nested span structure showing the operation hierarchy
  • Duration for each operation
  • Token usage and estimated costs
  • Custom attributes you've added
  • Timeline of events

🔧 Optional: Adding More Context

You can add custom attributes to track business-specific metrics:

# Add custom business context
main_span.set_attributes({
    "customer.tier": "premium",
    "customer.region": "us-west",
    "query.category": "billing"
})
 
# Track quality metrics
llm_span.set_attributes({
    "response.quality_score": 0.95,
    "response.helpfulness": "high"
})

🔍 Quick Troubleshooting

If traces aren't appearing in the dashboard:

  • Verify your NOVEUM_API_KEY is set correctly (a quick check is sketched after this list)
  • Wait 30-60 seconds for traces to process
  • Check that you're viewing the correct project
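
As a quick sanity check on the first point, you can confirm the key is visible to your process before calling noveum_trace.init() — this uses only the standard library:

import os
 
# A missing or empty NOVEUM_API_KEY is the most common reason traces never arrive.
if not os.getenv("NOVEUM_API_KEY"):
    raise RuntimeError("NOVEUM_API_KEY is not set in this environment")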

✅ Next Steps

Once you see your traces in the dashboard, you're ready to explore the other integration examples.

