
LangChain Integration Overview

Comprehensive guide to integrating Noveum Trace with LangChain applications for automatic AI tracing and observability

Noveum Trace provides seamless integration with LangChain applications, automatically capturing detailed traces of your AI workflows without requiring code changes to your core logic. This integration helps you monitor, debug, and optimize your LangChain applications with comprehensive observability.

What You Get

  • Automatic Tracing: Zero-code integration with LangChain components
  • Complete Visibility: Track LLM calls, chains, agents, tools, and retrieval operations
  • Performance Metrics: Monitor latency, token usage, and costs
  • Error Tracking: Identify and debug issues in your AI workflows
  • Cost Optimization: Analyze spending patterns and find cost-effective alternatives

Installation

pip install noveum-trace

Note: There's no special noveum-trace[langchain] package. The base noveum-trace package includes full LangChain support.

Quick Start

Integrate Noveum Trace with LangChain using the NoveumTraceCallbackHandler. There are two approaches:

Approach 1: LLM-Level Callbacks

Pass callbacks directly to the LLM during construction:

import os
import noveum_trace
from noveum_trace import NoveumTraceCallbackHandler
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
 
# Initialize Noveum Trace
noveum_trace.init(
    api_key=os.getenv("NOVEUM_API_KEY"),
    project="demo-chatbot",
    environment="development"
)
 
# Create callback handler
handler = NoveumTraceCallbackHandler()
 
# Create chain using LCEL with callbacks on LLM
prompt = ChatPromptTemplate.from_template("Summarize: {text}")
chain = prompt | ChatOpenAI(callbacks=[handler]) | StrOutputParser()
 
# Run the chain
result = chain.invoke({"text": "Your document here"})

Approach 2: Config-Based Callbacks

Pass callbacks via the config parameter at runtime:

import os
import noveum_trace
from noveum_trace import NoveumTraceCallbackHandler
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
 
# Initialize Noveum Trace
noveum_trace.init(
    api_key=os.getenv("NOVEUM_API_KEY"),
    project="demo-chatbot",
    environment="development"
)
 
# Create callback handler
handler = NoveumTraceCallbackHandler()
 
# Create chain using LCEL
prompt = ChatPromptTemplate.from_template("Summarize: {text}")
chain = prompt | ChatOpenAI() | StrOutputParser()
 
# Pass callbacks via config at runtime
result = chain.invoke(
    {"text": "Your document here"},
    config={"callbacks": [handler]}
)

Both approaches work equally well. Choose based on your use case:

  • Use LLM-level for consistent tracing across all executions
  • Use config-based for selective tracing or dynamic callback configuration
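As a concrete example of the config-based route, a small helper can decide per call whether to attach the handler. This is a sketch under assumptions: `build_config` is a hypothetical name, not part of the noveum-trace API.

```python
from typing import Any


def build_config(handler: Any, trace_enabled: bool) -> dict:
    """Return a LangChain config that attaches the handler only when tracing is enabled."""
    # Hypothetical helper: an empty callbacks list simply runs the chain untraced.
    return {"callbacks": [handler] if trace_enabled else []}


# With tracing on, the handler is passed through; with it off, the call runs untraced:
# result = chain.invoke({"text": "Your document here"},
#                       config=build_config(handler, trace_enabled=True))
```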

Important: Concurrent Execution

The NoveumTraceCallbackHandler maintains internal state and should not be shared across concurrent operations like asyncio.gather() or ThreadPoolExecutor calls, as this will mix traces together. For concurrent execution, create a new handler instance for each call. Sequential operations can safely reuse the same handler instance.

import asyncio

# ❌ Don't share one handler across concurrent calls
handler = NoveumTraceCallbackHandler()
results = await asyncio.gather(
    chain.ainvoke({"text": "doc1"}, config={"callbacks": [handler]}),
    chain.ainvoke({"text": "doc2"}, config={"callbacks": [handler]})
)
 
# ✅ Create a new handler for each concurrent call
results = await asyncio.gather(
    chain.ainvoke({"text": "doc1"}, config={"callbacks": [NoveumTraceCallbackHandler()]}),
    chain.ainvoke({"text": "doc2"}, config={"callbacks": [NoveumTraceCallbackHandler()]})
)

Integration Patterns

Noveum Trace supports comprehensive tracing for all LangChain components:

1. Basic LLM Calls

Trace individual LLM interactions with automatic context capture. → Learn more

2. Chains

Monitor multi-step workflows and chain compositions. → Learn more

3. Agents & Advanced Patterns

Track agent decision-making, tool usage, error handling, and complex workflows. → Learn more

Key Features

  • Zero Configuration: Works out of the box with existing LangChain code
  • Rich Context: Automatically captures inputs, outputs, and metadata
  • Performance Insights: Detailed metrics on latency and resource usage
  • Error Handling: Comprehensive error tracking and debugging information
  • Cost Analysis: Track spending across different models and operations
  • Manual Trace Control: Advanced control over trace lifecycle for complex workflows
  • Custom Parent Relationships: Explicit parent-child span relationships with metadata
  • LangGraph Integration: Full support for LangGraph routing decisions and node transitions
  • Routing Decision Tracking: Capture and analyze conditional routing logic

Advanced Features

Manual Trace Control

For complex workflows, you can manually control the trace lifecycle with the start_trace() and end_trace() methods.

Custom Parent Span Relationships

Set explicit parent-child relationships between spans using metadata configuration:

metadata = {
    "noveum": {
        "name": "custom_span_name",
        "parent_name": "parent_span_name"
    }
}
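To reduce typos in the nested dict, a tiny builder can assemble this block. `noveum_span_metadata` below is a hypothetical helper, and passing the result through LangChain's runtime `config={"metadata": ...}` is an assumption here, not a documented requirement.

```python
from typing import Optional


def noveum_span_metadata(name: str, parent_name: Optional[str] = None) -> dict:
    """Assemble the 'noveum' metadata block for custom span naming and parenting."""
    block: dict = {"name": name}
    if parent_name is not None:
        block["parent_name"] = parent_name
    return {"noveum": block}


# Possible usage (assumed wiring via LangChain's config metadata):
# chain.invoke(
#     {"text": "..."},
#     config={"callbacks": [handler],
#             "metadata": noveum_span_metadata("custom_span_name", "parent_span_name")},
# )
```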

LangGraph Integration

Full support for LangGraph workflows including:

  • Node execution tracing
  • Routing decision tracking
  • State transition monitoring
  • Custom event emission

Routing Decision Attributes

When tracking routing decisions, the following attributes are captured:

  • Source and target nodes
  • Decision reasoning and confidence
  • State snapshots
  • Alternative options

Next Steps

Ready to dive deeper? Explore the detailed guides linked above for basic LLM calls, chains, and agents & advanced patterns.

Need Help?

  • Documentation: Browse our comprehensive guides
  • Examples: Check out LangGraph and LiveKit examples
  • Community: Join our Discord for support and discussions
  • Support: Contact our team for enterprise support