
Chain Tracing

Learn how to trace LangChain chains and multi-step workflows with Noveum Trace

This guide shows you how to trace LangChain chains and multi-step workflows using Noveum Trace. You'll learn how to monitor chain execution, intermediate steps, and data flow.

🎯 Use Case

Document Processing Chain: A multi-step chain that processes documents through summarization, analysis, and formatting. We'll trace each step to monitor performance and data quality.

🚀 Complete Working Example

Here's a complete, working example you can copy and run:

import os
from dotenv import load_dotenv
import noveum_trace
from noveum_trace import NoveumTraceCallbackHandler
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
 
load_dotenv()
 
# Initialize Noveum Trace
noveum_trace.init(
    api_key=os.getenv("NOVEUM_API_KEY"),
    project="document-processing",
    environment="development"
)
 
def create_document_processing_chain():
    """Create a document processing chain with tracing."""
    # Initialize callback handler
    callback_handler = NoveumTraceCallbackHandler()
    
    # Create LLM with callback
    llm = ChatOpenAI(
        model="gpt-4",
        temperature=0.7,
        callbacks=[callback_handler]
    )
    
    # Define prompts for each step
    summarize_prompt = ChatPromptTemplate.from_template("""
    Summarize the following document in 2-3 sentences:
    
    Document: {document}
    
    Summary:
    """)
    
    analyze_prompt = ChatPromptTemplate.from_template("""
    Analyze the following summary and identify key themes:
    
    Summary: {summary}
    
    Key themes:
    """)
    
    format_prompt = ChatPromptTemplate.from_template("""
    Format the following analysis into a structured report:
    
    Analysis: {analysis}
    
    Structured Report:
    """)
    
    # Create the chain
    chain = (
        {"document": RunnablePassthrough()}
        | summarize_prompt
        | llm
        | StrOutputParser()
        | {"summary": RunnablePassthrough()}
        | analyze_prompt
        | llm
        | StrOutputParser()
        | {"analysis": RunnablePassthrough()}
        | format_prompt
        | llm
        | StrOutputParser()
    )
    
    return chain
 
def run_document_processing():
    """Run the document processing chain with tracing."""
    print("=== Document Processing Chain Tracing ===")
    
    # Create the chain
    chain = create_document_processing_chain()
    
    # Process a document
    document = """
    Artificial Intelligence (AI) is transforming industries across the globe. 
    From healthcare to finance, AI technologies are enabling new capabilities 
    and improving efficiency. Machine learning algorithms can process vast 
    amounts of data to identify patterns and make predictions. However, 
    challenges remain in areas like bias, transparency, and ethical considerations.
    """
    
    result = chain.invoke(document)
    
    print(f"Final result: {result}")
    return result
 
if __name__ == "__main__":
    run_document_processing()

📋 Prerequisites

pip install noveum-trace langchain-openai python-dotenv

The LangGraph examples below additionally require:

pip install langgraph

Set your environment variables:

export NOVEUM_API_KEY="your-noveum-api-key"
export OPENAI_API_KEY="your-openai-api-key"
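
Because the example loads configuration with python-dotenv, the same variables can instead live in a local .env file next to your script (placeholder values shown):

```
NOVEUM_API_KEY=your-noveum-api-key
OPENAI_API_KEY=your-openai-api-key
```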

🔧 How It Works

1. Chain Structure

The chain processes data through multiple steps:

  • Step 1: Document summarization
  • Step 2: Theme analysis
  • Step 3: Report formatting
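
The data flow between these steps can be sketched in plain Python. The summarize/analyze/format functions below are hypothetical stand-ins for the `prompt | llm | StrOutputParser()` segments; the point is how each step's string output is wrapped into a dict keyed for the next prompt, mirroring the `{"summary": RunnablePassthrough()}` pattern in the chain:

```python
def summarize(inputs: dict) -> str:
    # Stand-in for summarize_prompt | llm | StrOutputParser()
    return f"summary of: {inputs['document'][:20]}"

def analyze(inputs: dict) -> str:
    # Stand-in for analyze_prompt | llm | StrOutputParser()
    return f"themes in: {inputs['summary']}"

def format_report(inputs: dict) -> str:
    # Stand-in for format_prompt | llm | StrOutputParser()
    return f"report: {inputs['analysis']}"

def run_chain(document: str) -> str:
    # Each step's output is re-keyed as the next step's input,
    # just like the RunnablePassthrough dict wrappers in the chain
    summary = summarize({"document": document})
    analysis = analyze({"summary": summary})
    return format_report({"analysis": analysis})

result = run_chain("AI is transforming industries.")
```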

2. Automatic Tracing

Each step in the chain is automatically traced:

  • Input data for each step
  • LLM calls and responses
  • Intermediate outputs
  • Step execution times
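
To build intuition for what the callback handler records, here is a toy tracer that captures the same kinds of data per step (inputs, output, duration). This is purely illustrative; NoveumTraceCallbackHandler does this for you, and the `trace_step` helper here is a hypothetical sketch, not part of the SDK:

```python
import time

trace_log = []

def trace_step(name, fn, inputs):
    # Record inputs, output, and wall-clock duration for one step
    start = time.perf_counter()
    output = fn(inputs)
    trace_log.append({
        "step": name,
        "inputs": inputs,
        "output": output,
        "duration_s": time.perf_counter() - start,
    })
    return output

summary = trace_step("summarize", lambda d: d["document"].upper(), {"document": "ai trends"})
```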

3. Data Flow Visibility

The dashboard shows:

  • Complete data flow through the chain
  • Intermediate results at each step
  • Performance metrics per step
  • Error handling and debugging

🎨 Advanced Examples

Manual Trace Control

For advanced use cases, you can manually control trace lifecycle:

from noveum_trace import NoveumTraceCallbackHandler
 
# Create callback handler
handler = NoveumTraceCallbackHandler()
 
# Manually start a trace
handler.start_trace("my-custom-trace")
 
# Your LangChain operations here
llm = ChatOpenAI(callbacks=[handler])
response = llm.invoke("Hello world")
 
# Manually end the trace
handler.end_trace()

Custom Parent Span Relationships

You can explicitly set parent-child relationships between spans using custom names:

from langchain.chains import LLMChain

# Create a parent span with custom name
llm = ChatOpenAI(
    callbacks=[handler],
    metadata={"noveum": {"name": "parent_llm"}}
)

# Create child spans that reference the parent
# (`prompt` is any ChatPromptTemplate defined elsewhere)
chain = LLMChain(
    llm=llm,
    prompt=prompt,
    callbacks=[handler],
    metadata={"noveum": {"parent_name": "parent_llm"}}
)

Metadata Structure: The metadata parameter supports a noveum configuration object:

metadata = {
    "noveum": {
        "name": "custom_span_name",        # Custom name for this span
        "parent_name": "parent_span_name"  # Name of parent span to attach to
    }
}

Note: When using custom parent relationships, you must manually control trace lifecycle with start_trace() and end_trace().

LangChain Parent ID Support

For LangGraph and complex workflows, you can use LangChain's built-in parent run IDs:

# Enable LangChain parent ID resolution
handler = NoveumTraceCallbackHandler(use_langchain_assigned_parent=True)
 
# LangChain will automatically resolve parent relationships
# based on parent_run_id in the callback events

LangGraph Routing Decision Tracking

Track routing decisions in LangGraph workflows as separate spans:

from langgraph.graph import StateGraph, END
from langchain_core.runnables import RunnableConfig
 
def route_function(state, config):
    """Routing function that emits routing events."""
    
    # Make routing decision
    decision = "next_node" if state["count"] < 5 else "finish"
    
    # Emit routing event (if callbacks available)
    if config and config.get("callbacks"):
        callbacks = config["callbacks"]
        
        # Normalize callbacks into an iterable
        if not isinstance(callbacks, (list, tuple)):
            callbacks = [callbacks]
        
        # Iterate over each callback handler
        for handler in callbacks:
            if hasattr(handler, 'on_custom_event'):
                handler.on_custom_event(
                    "langgraph.routing_decision",
                    {
                        "source_node": "current_node",
                        "target_node": decision,
                        "decision": decision,
                        "reason": f"Count {state['count']} {'< 5' if state['count'] < 5 else '>= 5'}",
                        "confidence": 0.9,
                        "state_snapshot": state,
                    }
                )
    
    return decision
 
# Define the graph state and node functions used below
from typing import TypedDict

class State(TypedDict):
    count: int

def process_node(state: State) -> State:
    return {"count": state["count"] + 1}

def finish_node(state: State) -> State:
    return state

# Create graph with routing
workflow = StateGraph(State)
workflow.add_node("process", process_node)
workflow.add_node("finish", finish_node)
workflow.set_entry_point("process")
workflow.add_conditional_edges(
    "process",
    route_function,
    {"next_node": "process", "finish": "finish"}
)
workflow.add_edge("finish", END)

# Run with the callback handler created above
app = workflow.compile()
result = app.invoke(
    {"count": 0},
    config={"callbacks": [handler]}
)

Conditional Chain

from langchain_core.runnables import RunnableBranch

def create_conditional_chain():
    """Create a chain with conditional logic."""
    callback_handler = NoveumTraceCallbackHandler()
    llm = ChatOpenAI(callbacks=[callback_handler])
    
    def is_technical(inputs: dict) -> bool:
        """Route technical documents based on content."""
        return "technical" in inputs["document"].lower()
    
    def is_financial(inputs: dict) -> bool:
        """Route financial documents based on content."""
        return "financial" in inputs["document"].lower()
    
    # Different processing for different document types
    technical_prompt = ChatPromptTemplate.from_template("""
    Provide a technical analysis of: {document}
    """)
    
    financial_prompt = ChatPromptTemplate.from_template("""
    Provide a financial analysis of: {document}
    """)
    
    general_prompt = ChatPromptTemplate.from_template("""
    Provide a general analysis of: {document}
    """)
    
    # RunnableBranch runs exactly one branch: the first condition
    # that matches, falling back to the general branch
    chain = (
        {"document": RunnablePassthrough()}
        | RunnableBranch(
            (is_technical, technical_prompt | llm | StrOutputParser()),
            (is_financial, financial_prompt | llm | StrOutputParser()),
            general_prompt | llm | StrOutputParser(),
        )
    )
    
    return chain

🔍 What Gets Traced

The integration automatically captures:

  • LLM Calls: Model, prompts, responses, token usage
  • Chains: Input/output flow, execution steps
  • Agents: Decision-making, tool usage, reasoning
  • Tools: Function calls, inputs, outputs
  • Retrievers: Queries, document results
  • LangGraph Nodes: Graph execution, node transitions
  • Routing Decisions: Conditional routing logic and decisions

Routing Decision Attributes

When you emit routing decisions, the following attributes are automatically captured:

  • routing.source_node: The node making the routing decision
  • routing.target_node: The destination node
  • routing.decision: The routing decision value
  • routing.reason: Human-readable reason for the decision
  • routing.confidence: Confidence score (0.0 to 1.0)
  • routing.state_snapshot: State at the time of routing
  • routing.alternatives: Other possible routing options
  • routing.tool_scores: Tool selection scores (if applicable)
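
Conceptually, the event payload you emit maps onto these attribute names by prefixing each key with routing.. The flattening below is an illustrative sketch of that mapping, not the SDK's actual implementation:

```python
# Payload as passed to on_custom_event("langgraph.routing_decision", ...)
payload = {
    "source_node": "process",
    "target_node": "finish",
    "decision": "finish",
    "reason": "Count 5 >= 5",
    "confidence": 0.9,
}

# Each payload key becomes a routing.* span attribute
attributes = {f"routing.{key}": value for key, value in payload.items()}
```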

📊 What You'll See in the Dashboard

After running these examples, check your Noveum dashboard:

Trace View

  • Complete chain execution flow
  • Step-by-step processing
  • Data transformations
  • Intermediate results

Span Details

  • Individual step execution times
  • Input/output data for each step
  • LLM call details
  • Error information (if any)

Analytics

  • Chain performance metrics
  • Step-by-step timing analysis
  • Data quality insights
  • Error patterns and debugging

🔍 Troubleshooting

Common Issues

Chain not executing?

  • Check that all steps are properly connected
  • Verify input/output data types match
  • Ensure callbacks are added to the LLM

Missing intermediate steps?

  • Make sure each step in the chain is traced
  • Check that RunnablePassthrough is used correctly
  • Verify prompt templates are properly formatted

Performance issues?

  • Monitor step execution times
  • Check for bottlenecks in specific steps
  • Consider parallel processing for independent steps
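
The last tip can be sketched with plain Python threads: when two steps depend only on the same upstream output, they can run concurrently instead of sequentially. The analyze_themes and analyze_sentiment functions are hypothetical stand-ins for independent chain branches (in LCEL you would typically express this with a RunnableParallel / dict of runnables instead):

```python
from concurrent.futures import ThreadPoolExecutor

def analyze_themes(summary: str) -> str:
    # Stand-in for an independent analysis branch
    return f"themes({summary})"

def analyze_sentiment(summary: str) -> str:
    # Stand-in for another independent analysis branch
    return f"sentiment({summary})"

summary = "AI adoption is accelerating."

# Both branches consume the same summary, so they can run in parallel
with ThreadPoolExecutor() as pool:
    themes_future = pool.submit(analyze_themes, summary)
    sentiment_future = pool.submit(analyze_sentiment, summary)
    results = {
        "themes": themes_future.result(),
        "sentiment": sentiment_future.result(),
    }
```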

🚀 Next Steps

Now that you've mastered chain tracing, explore the other LangChain integration guides to trace agents, tools, and retrievers.

💡 Pro Tips

  1. Use descriptive step names: Make chain steps easy to identify in traces
  2. Monitor data quality: Check intermediate results for consistency
  3. Handle errors gracefully: Add error handling to each step
  4. Optimize performance: Identify and optimize slow steps
  5. Test edge cases: Ensure your chain handles various input types