Basic LLM Tracing

Learn how to trace basic LangChain LLM calls with Noveum Trace

This guide shows you how to trace basic LLM calls using LangChain with Noveum Trace. You'll learn the simplest integration pattern that works with any LangChain LLM.

🎯 Use Case

Customer Support Chatbot: A simple chatbot that answers customer questions using GPT-4. We'll trace the LLM call to monitor performance, costs, and response quality.

🚀 Complete Working Example

Here's a complete, working example you can copy and run:

import os
from dotenv import load_dotenv
import noveum_trace
from noveum_trace import NoveumTraceCallbackHandler
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
 
load_dotenv()
 
# Initialize Noveum Trace
noveum_trace.init(
    api_key=os.getenv("NOVEUM_API_KEY"),
    project="customer-support-bot",
    environment="development"
)
 
def basic_llm_tracing():
    """Example: Basic LLM call tracing with LangChain."""
    print("=== Basic LLM Tracing ===")
    
    # Initialize the callback handler
    callback_handler = NoveumTraceCallbackHandler()
    
    # Create LLM with callback
    llm = ChatOpenAI(
        model="gpt-4",
        temperature=0.7,
        callbacks=[callback_handler]
    )
    
    # Make a simple call
    response = llm.invoke([
        HumanMessage(content="What is the capital of France?")
    ])
    
    print(f"Response: {response.content}")
    return response
 
if __name__ == "__main__":
    basic_llm_tracing()
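
Once the prerequisites below are in place, save the script (any filename works; basic_llm_tracing.py is used here purely for illustration) and run it. The model's exact wording will vary, but you should see output along these lines:

python basic_llm_tracing.py
=== Basic LLM Tracing ===
Response: The capital of France is Paris.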

📋 Prerequisites

pip install noveum-trace langchain-openai python-dotenv

Set your environment variables:

export NOVEUM_API_KEY="your-noveum-api-key"
export OPENAI_API_KEY="your-openai-api-key"
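
Alternatively, since the example calls load_dotenv(), you can put the same values in a .env file next to the script:

NOVEUM_API_KEY=your-noveum-api-key
OPENAI_API_KEY=your-openai-api-key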

🔧 How It Works

1. Callback Handler Setup

The NoveumTraceCallbackHandler automatically captures:

  • Input messages and prompts
  • Model responses
  • Token usage and costs
  • Latency metrics
  • Error information

2. Automatic Tracing

When you call llm.invoke(), the callback handler (a minimal sketch of the underlying hooks follows this list):

  • Creates a trace span
  • Captures the input message
  • Records the model call
  • Captures the response
  • Calculates performance metrics
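
To make this lifecycle concrete, here is a toy handler built on LangChain's public BaseCallbackHandler, the same hook points NoveumTraceCallbackHandler plugs into. This is an illustration only, not Noveum's implementation:

import time
from langchain_core.callbacks import BaseCallbackHandler

class LifecycleLogger(BaseCallbackHandler):
    """Toy handler: prints the hook points a tracing handler builds spans from."""

    def on_chat_model_start(self, serialized, messages, **kwargs):
        # Fired when llm.invoke() begins; `messages` holds the input batch
        self._started = time.time()
        print(f"start: {len(messages[0])} input message(s)")

    def on_llm_end(self, response, **kwargs):
        # Fired when the model responds; token usage is typically in response.llm_output
        print(f"end: took {time.time() - self._started:.2f}s")

    def on_llm_error(self, error, **kwargs):
        # Fired if the call raises, so errors can be attached to the span
        print(f"error: {error}")

Pass it the same way as the Noveum handler: ChatOpenAI(callbacks=[LifecycleLogger()]).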

3. Dashboard Visibility

View your traces in the Noveum dashboard:

  • Traces: See the complete conversation flow
  • Spans: Individual LLM call details
  • Metrics: Performance and cost analysis

🎨 Advanced Examples

Multiple Messages

def multi_message_tracing():
    """Example: Tracing multiple messages in a conversation."""
    callback_handler = NoveumTraceCallbackHandler()
    llm = ChatOpenAI(callbacks=[callback_handler])
    
    messages = [
        HumanMessage(content="Hello, I need help with my order."),
        HumanMessage(content="My order number is 12345."),
        HumanMessage(content="Can you check its status?")
    ]
    
    response = llm.invoke(messages)
    return response
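
Three HumanMessages in a row works, but a real conversation history usually interleaves roles. The tracing is unchanged; a sketch using LangChain's standard message types:

from langchain_core.messages import SystemMessage, AIMessage, HumanMessage

messages = [
    SystemMessage(content="You are a helpful customer support agent."),
    HumanMessage(content="Hello, I need help with my order."),
    AIMessage(content="Of course! What's your order number?"),
    HumanMessage(content="It's 12345. Can you check its status?"),
]
response = llm.invoke(messages)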

With Custom Metadata

def custom_metadata_tracing():
    """Example: Adding custom metadata to traces."""
    callback_handler = NoveumTraceCallbackHandler()
    llm = ChatOpenAI(callbacks=[callback_handler])
    
    # Add custom attributes
    callback_handler.add_attributes({
        "user_id": "user_123",
        "session_id": "session_456",
        "feature": "customer_support"
    })
    
    response = llm.invoke([
        HumanMessage(content="Help me with my account")
    ])
    
    return response
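
LangChain also supports per-call metadata through its standard config argument, which is forwarded to callback handlers. Whether Noveum Trace records metadata supplied this way (as opposed to via add_attributes) is an assumption to verify against your handler version:

response = llm.invoke(
    [HumanMessage(content="Help me with my account")],
    config={"metadata": {"user_id": "user_123", "session_id": "session_456"}},
)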

📊 What You'll See in the Dashboard

After running these examples, check your Noveum dashboard:

Trace View

  • Complete conversation flow
  • Input and output messages
  • Timing information
  • Error details (if any)

Span Details

  • Model used (GPT-4)
  • Token counts (input/output)
  • Cost information
  • Latency metrics

Analytics

  • Total requests
  • Average response time
  • Cost per request
  • Success rate

🔍 Troubleshooting

Common Issues

No traces appearing?

  • Check that NOVEUM_API_KEY is set correctly (a quick check follows this list)
  • Verify the callback handler is added to your LLM
  • Ensure you're calling llm.invoke() (not just creating the LLM)
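
A quick way to rule out a missing key:

import os

for var in ("NOVEUM_API_KEY", "OPENAI_API_KEY"):
    print(f"{var}: {'set' if os.getenv(var) else 'MISSING'}")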

Missing metadata?

  • Make sure to add attributes before making the call
  • Check that the callback handler is properly initialized

Performance issues?

  • The callback adds minimal overhead (~1-2ms per call)
  • For high-frequency applications, consider batching your LLM calls (see the sketch below)
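
If batching the calls themselves is an option, LangChain's standard .batch() runs inputs concurrently with the same callbacks attached. Whether each batched call shows up as its own trace is an assumption to verify in your dashboard:

callback_handler = NoveumTraceCallbackHandler()
llm = ChatOpenAI(model="gpt-4", callbacks=[callback_handler])

# Each inner list is one chat input; the calls run concurrently
responses = llm.batch([
    [HumanMessage(content="Where is my order?")],
    [HumanMessage(content="How do I reset my password?")],
])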

🚀 Next Steps

Now that you've mastered basic LLM tracing, explore the more advanced LangChain integration patterns covered in the other guides in this section.

💡 Pro Tips

  1. Use callbacks consistently: Add the callback handler to all your LangChain components
  2. Add context: Use custom attributes to track business-specific information
  3. Monitor costs: Set up alerts for unexpected spending patterns
  4. Debug errors: Use trace details to identify and fix issues quickly