
Advanced LangChain Integration

Advanced patterns for LangChain tracing, including agents, tools, error handling, and complete workflows

This guide covers advanced LangChain integration patterns with Noveum Trace, including agents with tools, error handling, and complete multi-pattern workflows.

🎯 What You'll Learn

  • Agent Tracing: Track agent decision-making and tool usage
  • Tool Execution: Monitor custom tool calls and results
  • Error Handling: Capture and trace errors automatically
  • Complete Workflows: Combine multiple patterns in production applications

📋 Prerequisites

pip install noveum-trace langchain langchain-openai langchain-community python-dotenv

Set your environment variables:

export NOVEUM_API_KEY="your-noveum-api-key"
export OPENAI_API_KEY="your-openai-api-key"

🤖 Agent with Tool Usage

Agents use LLMs to decide which tools to use and in what order. Here's how to trace them:

import os
from dotenv import load_dotenv
import noveum_trace
from noveum_trace import NoveumTraceCallbackHandler
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain.tools import Tool
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
 
load_dotenv()
 
# Initialize Noveum Trace
noveum_trace.init(
    api_key=os.getenv("NOVEUM_API_KEY"),
    project="my-langchain-app",
    environment="production",
    transport_config={"batch_size": 1, "batch_timeout": 5.0},
)
 
def agent_with_tools_example():
    """Example: Agent with custom tools."""
    print("=== Agent with Tool Usage ===")
    
    # Create callback handler
    callback_handler = NoveumTraceCallbackHandler()
    
    # Define custom tools
    def calculator(expression: str) -> str:
        """Simple calculator tool."""
        try:
            # NOTE: eval() on untrusted input is unsafe; for demo purposes only
            result = eval(expression)
            return f"The result is: {result}"
        except Exception as e:
            return f"Error: {str(e)}"
    
    def word_counter(text: str) -> str:
        """Count words in text."""
        word_count = len(text.split())
        return f"Word count: {word_count}"
    
    # Create tools list
    tools = [
        Tool(
            name="Calculator",
            func=calculator,
            description="Use this to perform mathematical calculations. Input should be a valid mathematical expression.",
        ),
        Tool(
            name="WordCounter",
            func=word_counter,
            description="Use this to count words in a text. Input should be the text to count.",
        )
    ]
    
    # Create LLM (without callbacks)
    llm = ChatOpenAI(
        model="gpt-3.5-turbo", 
        temperature=0
    )
    
    # Create agent prompt
    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a helpful assistant. Use the tools available to answer questions."),
        ("human", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ])
    
    # Create agent and executor
    agent = create_tool_calling_agent(llm, tools, prompt)
    agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
    
    # Run agent with callbacks via config (recommended approach)
    result = agent_executor.invoke(
        {"input": "Calculate 15 * 23 and then count the words in this sentence"},
        config={"callbacks": [callback_handler]}
    )
    
    print(f"Agent result: {result['output']}")
    
    return result
 
if __name__ == "__main__":
    agent_with_tools_example()
    noveum_trace.flush()

What Gets Traced

When running the agent, Noveum Trace automatically captures:

  • Agent Reasoning: The thought process and decision-making
  • Tool Selection: Which tools the agent chooses and why
  • Tool Execution: Input parameters and output results
  • Iteration Steps: Each step in the agent's reasoning loop
  • Performance Metrics: Timing for each tool and LLM call
  • Error Handling: Any errors during tool execution
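
Beyond these automatic fields, you can attach business context to a traced run through LangChain's config metadata. A minimal sketch, assuming the handler records config metadata alongside the run (the metadata keys below are hypothetical):

# A minimal sketch: attach business context via LangChain's config metadata.
# LangChain forwards this metadata to callback handlers; recording extra keys
# as span attributes is an assumption here, and the keys are hypothetical.
result = agent_executor.invoke(
    {"input": "Calculate 15 * 23"},
    config={
        "callbacks": [callback_handler],
        "metadata": {"customer_id": "cust-123", "feature": "math-helper"},
    },
)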

🔍 Error Handling

Errors are automatically captured and traced with full context:

from langchain_openai import ChatOpenAI
from noveum_trace import NoveumTraceCallbackHandler
 
def error_handling_example():
    """Example: Error handling in tracing."""
    print("=== Error Handling ===")
    
    # Create callback handler
    callback_handler = NoveumTraceCallbackHandler()
    
    # Scenario 1: Invalid API key
    llm = ChatOpenAI(
        model="gpt-3.5-turbo",
        api_key="invalid-key"
    )
    
    try:
        # This will fail and be traced as an error
        # Pass callbacks via config (recommended approach)
        llm.invoke(
            "This will fail",
            config={"callbacks": [callback_handler]}
        )
    except Exception as e:
        print(f"Expected error occurred: {type(e).__name__}")
        print("Error was traced and recorded in span")
    
    # Scenario 2: Tool execution error
    def failing_tool(query: str) -> str:
        """A tool that always fails."""
        raise ValueError(f"Tool failed with input: {query}")
    
    from langchain.agents import create_tool_calling_agent, AgentExecutor
    from langchain.tools import Tool
    from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
    
    tools = [Tool(
        name="FailingTool",
        func=failing_tool,
        description="A tool that always fails"
    )]
    
    # Create agent with working credentials
    callback_handler = NoveumTraceCallbackHandler()
    llm = ChatOpenAI(model="gpt-3.5-turbo")
    
    # Create agent prompt
    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a helpful assistant."),
        ("human", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ])
    
    agent = create_tool_calling_agent(llm, tools, prompt)
    agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
    
    try:
        # Agent will try to use the failing tool
        # Pass callbacks via config (recommended approach)
        agent_executor.invoke(
            {"input": "Use the failing tool"},
            config={"callbacks": [callback_handler]}
        )
    except Exception as e:
        print(f"Tool error occurred: {type(e).__name__}")
        print("Error was traced with full context")
 
if __name__ == "__main__":
    error_handling_example()

Error Trace Details

When an error occurs, the trace includes:

  • Error Type: Exception class name
  • Error Message: Full error description
  • Stack Trace: Complete traceback
  • Context: Input data and state when error occurred
  • Span Status: Marked as error with details
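
By default, an exception raised inside a tool aborts the run (as in Scenario 2 above). If you want the failure recorded in the trace while the agent keeps going, LangChain tools can convert a ToolException into an observation the agent sees. A minimal sketch of that pattern:

from langchain.tools import Tool
from langchain_core.tools import ToolException

def flaky_lookup(query: str) -> str:
    """A tool that may fail."""
    raise ToolException(f"Lookup failed for: {query}")

# handle_tool_error=True turns the ToolException into the tool's string
# output, so the agent observes the failure and the run (and trace) continues
tools = [Tool(
    name="FlakyLookup",
    func=flaky_lookup,
    description="Look up information (may fail)",
    handle_tool_error=True,
)]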

🎯 Complete Integration Example

Here's a production-ready example combining multiple patterns:

"""
Complete LangChain Integration Example for Noveum Trace SDK.
 
This demonstrates:
- Basic LLM tracing
- Chain execution
- Agent with tools
- Error handling
- Production configuration
"""
 
import os
from dotenv import load_dotenv
import noveum_trace
from noveum_trace import NoveumTraceCallbackHandler
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.output_parsers import StrOutputParser
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain.tools import Tool
 
load_dotenv()
 
# Initialize Noveum Trace with production config
noveum_trace.init(
    api_key=os.getenv("NOVEUM_API_KEY"),
    project="my-langchain-app",
    environment="production",
    transport_config={"batch_size": 1, "batch_timeout": 5.0},
)
 
def example_basic_llm():
    """Example 1: Basic LLM tracing."""
    print("\n=== Example 1: Basic LLM ===")
    
    callback_handler = NoveumTraceCallbackHandler()
    llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.7)
    
    # Pass callbacks via config
    response = llm.invoke(
        "What is the capital of France?",
        config={"callbacks": [callback_handler]}
    )
    print(f"Response: {response.content}")
    return response
 
def example_chain():
    """Example 2: Chain tracing."""
    print("\n=== Example 2: Chain Tracing ===")
    
    callback_handler = NoveumTraceCallbackHandler()
    
    prompt = ChatPromptTemplate.from_template(
        "Write a brief summary about {topic}:"
    )
    
    llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.5)
    chain = prompt | llm | StrOutputParser()
    
    # Pass callbacks via config
    result = chain.invoke(
        {"topic": "artificial intelligence"},
        config={"callbacks": [callback_handler]}
    )
    print(f"Chain result: {result[:100]}...")
    return result
 
def example_agent_with_tools():
    """Example 3: Agent with tools."""
    print("\n=== Example 3: Agent with Tools ===")
    
    callback_handler = NoveumTraceCallbackHandler()
    
    # Define tools
    def calculator(expression: str) -> str:
        """Calculate mathematical expressions."""
        try:
            # NOTE: eval() on untrusted input is unsafe; for demo purposes only
            result = eval(expression)
            return f"Result: {result}"
        except Exception as e:
            return f"Error: {e}"
    
    def text_analyzer(text: str) -> str:
        """Analyze text properties."""
        return f"Length: {len(text)}, Words: {len(text.split())}"
    
    tools = [
        Tool(
            name="Calculator",
            func=calculator,
            description="Perform mathematical calculations"
        ),
        Tool(
            name="TextAnalyzer",
            func=text_analyzer,
            description="Analyze text properties"
        )
    ]
    
    llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
    
    # Create agent prompt
    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a helpful assistant."),
        ("human", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ])
    
    agent = create_tool_calling_agent(llm, tools, prompt)
    agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
    
    # Pass callbacks via config
    result = agent_executor.invoke(
        {"input": "Calculate 15 * 23"},
        config={"callbacks": [callback_handler]}
    )
    print(f"Agent result: {result['output']}")
    return result
 
def example_error_handling():
    """Example 4: Error handling."""
    print("\n=== Example 4: Error Handling ===")
    
    callback_handler = NoveumTraceCallbackHandler()
    llm = ChatOpenAI(model="gpt-3.5-turbo", api_key="invalid-key")
    
    try:
        # Pass callbacks via config
        llm.invoke(
            "This will fail",
            config={"callbacks": [callback_handler]}
        )
    except Exception as e:
        print(f"Expected error: {type(e).__name__}")
        print("Error was traced and recorded")
 
def main():
    """Run all examples."""
    print("=" * 60)
    print("Noveum Trace - Complete LangChain Integration")
    print("=" * 60)
    
    # Check API keys
    if not os.getenv("NOVEUM_API_KEY"):
        print("Warning: NOVEUM_API_KEY not set")
    
    if not os.getenv("OPENAI_API_KEY"):
        print("Warning: OPENAI_API_KEY not set")
    
    # Run examples
    try:
        example_basic_llm()
        example_chain()
        example_agent_with_tools()
        example_error_handling()
    except Exception as e:
        print(f"Error running examples: {e}")
    
    print("\n" + "=" * 60)
    print("Examples Complete!")
    print("=" * 60)
    print("\nCheck your Noveum dashboard to see the traced operations!")
    print("Dashboard: https://app.noveum.ai")
    
    # Flush any pending traces
    noveum_trace.flush()
 
if __name__ == "__main__":
    main()

🔧 Callback Configuration

Two Approaches to Pass Callbacks

1. Config-Based (Recommended)

# Pass callbacks via config - best practice for all cases
result = agent.invoke(
    {"input": "Your input"},
    config={"callbacks": [callback_handler]}
)
 
# Works with chains too
result = chain.invoke(
    {"topic": "AI"},
    config={"callbacks": [callback_handler]}
)

Benefits:

  • Callbacks automatically propagate through all components
  • Works with LCEL (LangChain Expression Language)
  • Best for agents, chains, and complex workflows
  • Cleaner code separation
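
Propagation covers the other LCEL entry points as well. For example, the same config works when streaming (a minimal sketch reusing the chain and handler from above):

# Callbacks passed via config also fire while streaming LCEL output
for chunk in chain.stream(
    {"topic": "AI"},
    config={"callbacks": [callback_handler]},
):
    print(chunk, end="", flush=True)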

2. Direct to Component (Alternative)

# Add callbacks directly to LLM
llm = ChatOpenAI(callbacks=[callback_handler])
chain = prompt | llm | StrOutputParser()  # The handler fires for the LLM's calls inside the chain
 
# For agents, create executor with callbacks
agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, callbacks=[callback_handler])

When to use:

  • Simple single-component use cases
  • When callbacks should persist across multiple invocations (see the sketch below)
  • Quick prototypes
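
For instance, a handler attached at construction fires for every subsequent call without being passed again (a minimal sketch):

# The handler bound to the LLM is reused on every invocation
llm = ChatOpenAI(model="gpt-3.5-turbo", callbacks=[callback_handler])
llm.invoke("First call")   # traced
llm.invoke("Second call")  # traced by the same handler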

Custom Parent-Child Span Relationships

When you need to set explicit parent-child relationships between spans (e.g., connecting LangChain operations to external traces), you have two options:

Option 1: Metadata-Based

Set parent-child relationships using metadata:

# Pass metadata via config to set custom span names and parents
result = chain.invoke(
    {"input": "Your input"},
    config={
        "callbacks": [handler],
        "metadata": {
            "noveum": {
                "name": "custom_span_name",
                "parent_name": "parent_span_name"
            }
        }
    }
)

Option 2: Manual Lifecycle Control

⚠️ Manual trace lifecycle control is required when you need to set explicit parent-child relationships across different trace contexts, or to connect LangChain operations to an external parent span.

Use it only when you need that explicit control; for all other cases, the automatic callback-based approach is sufficient and recommended.

from noveum_trace import NoveumTraceCallbackHandler
 
# Create handler
handler = NoveumTraceCallbackHandler()
 
# Manually start a named trace to serve as the shared parent
handler.start_trace("my-custom-trace")
 
# Your operations - these will be children of the manually started trace
llm = ChatOpenAI()
response = llm.invoke(
    "Hello world",
    config={"callbacks": [handler]}
)
 
# Manually end the trace to close the parent span
handler.end_trace()

When to use manual lifecycle control:

  • Connecting LangChain operations to external tracing systems
  • Linking multiple LangChain workflows under a single parent trace (sketched below)
  • Integrating with non-LangChain components that already have trace context
  • Bridging different tracing frameworks

When NOT to use:

  • Standard LangChain workflows (callbacks handle this automatically)
  • Simple agent or chain operations
  • Normal production use cases
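
As an example of the second "when to use" case, two separate chains can be grouped under one parent with the manual lifecycle API. A minimal sketch, where summarize_chain and review_chain stand in for your own LCEL chains:

handler = NoveumTraceCallbackHandler()

# Both invocations become children of the same manually started trace.
# summarize_chain and review_chain are hypothetical LCEL chains.
handler.start_trace("report-pipeline")
summary = summarize_chain.invoke({"text": "..."}, config={"callbacks": [handler]})
review = review_chain.invoke({"summary": summary}, config={"callbacks": [handler]})
handler.end_trace()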

📊 What You'll See in the Dashboard

After running these examples:

Trace View

  • Complete workflow execution
  • Agent reasoning steps
  • Tool calls and results
  • Error details with context

Span Details

  • Individual operation timing
  • Input/output data
  • Token usage and costs
  • Error stack traces

Analytics

  • Success/failure rates
  • Tool usage patterns
  • Performance bottlenecks
  • Cost analysis

💡 Best Practices

  1. Always use callbacks: Attach a callback handler to all LangChain components
  2. Add context: Use custom attributes for business-specific data
  3. Handle errors gracefully: Errors are automatically traced with full context
  4. Monitor costs: Track spending across different models and operations
  5. Use descriptive names: Make traces easy to identify and search
  6. Flush on exit: Call noveum_trace.flush() before application exit
  7. Don't share handlers: Create a separate NoveumTraceCallbackHandler instance for each concurrent or parallel operation (threads, async tasks); handlers are not thread-safe. See the sketch below.
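
A minimal sketch of practices 6 and 7 together, assuming noveum_trace.init() has already run as in the examples above:

import atexit
import concurrent.futures

import noveum_trace
from noveum_trace import NoveumTraceCallbackHandler
from langchain_openai import ChatOpenAI

# Practice 6: flush pending traces when the process exits
atexit.register(noveum_trace.flush)

def traced_question(question: str) -> str:
    # Practice 7: a fresh handler per task, since handlers are not thread-safe
    handler = NoveumTraceCallbackHandler()
    llm = ChatOpenAI(model="gpt-3.5-turbo")
    return llm.invoke(question, config={"callbacks": [handler]}).content

with concurrent.futures.ThreadPoolExecutor(max_workers=3) as pool:
    answers = list(pool.map(traced_question, [
        "What is a trace?",
        "What is a span?",
        "What is a callback?",
    ]))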

Need Help?

  • Documentation: Browse our comprehensive guides
  • Community: Join our Discord for support
  • Support: Contact our team for enterprise support