
Basic LangGraph Agent

Learn how to trace basic LangGraph agent workflows with Noveum Trace

This guide shows you how to trace basic LangGraph agent workflows using Noveum Trace. You'll learn how to monitor agent decision-making, tool usage, and state management.

🎯 Use Case

Research Assistant Agent: A simple agent that can search for information and provide answers. We'll trace the agent's decision-making process, tool usage, and state transitions.

🚀 Complete Working Example

Here's a complete, working example you can copy and run:

import os
from typing import Annotated, Literal, TypedDict
from dotenv import load_dotenv
import noveum_trace
from noveum_trace import NoveumTraceCallbackHandler
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.graph import END, StateGraph
 
load_dotenv()
 
# Initialize Noveum Trace
noveum_trace.init(
    api_key=os.getenv("NOVEUM_API_KEY"),
    project="customer-support-bot",
    environment="development"
)
 
# Define the agent state
class AgentState(TypedDict):
    messages: Annotated[list, "The messages in the conversation"]
    research_complete: bool
 
# Define tools
@tool
def search_web(query: str) -> str:
    """Search the web for information about a query."""
    # Simulate web search
    return f"Search results for: {query}"
 
@tool
def analyze_information(info: str) -> str:
    """Analyze and summarize information."""
    return f"Analysis: {info} is a comprehensive topic with many aspects."
 
def research_node(state: AgentState):
    """Node that performs research using tools."""
    print("🔍 Researching...")
    
    # Get the last human message
    last_message = state["messages"][-1].content
    
    # Search for information (@tool-decorated functions are run with .invoke())
    search_results = search_web.invoke(f"research about {last_message}")
    
    # Analyze the results
    analysis = analyze_information.invoke(search_results)
    
    # Add the research results to messages
    state["messages"].append(AIMessage(content=f"Research completed: {analysis}"))
    state["research_complete"] = True
    
    return state
 
def should_continue(state: AgentState) -> Literal["research", "end"]:
    """Decide whether to continue researching or end."""
    if state["research_complete"]:
        return "end"
    return "research"
 
def create_research_agent():
    """Create a basic research agent with tracing."""
    # Initialize callback handler
    callback_handler = NoveumTraceCallbackHandler()
    
    # Create LLM with callback
    llm = ChatOpenAI(
        model="gpt-4",
        temperature=0.7,
        callbacks=[callback_handler]
    )
    
    # Create the graph
    graph = StateGraph(AgentState)
    
    # Add nodes
    graph.add_node("research", research_node)
    
    # Add edges: route from the research node, either looping back or ending
    graph.add_conditional_edges(
        "research",
        should_continue,
        {
            "research": "research",
            "end": END
        }
    )
    
    # Set entry point
    graph.set_entry_point("research")
    
    return graph.compile()
 
def run_research_agent():
    """Run the research agent with tracing."""
    print("=== Basic LangGraph Agent Tracing ===")
    
    # Create the agent
    agent = create_research_agent()
    
    # Run the agent
    result = agent.invoke({
        "messages": [HumanMessage(content="Tell me about artificial intelligence")],
        "research_complete": False
    })
    
    print(f"Final result: {result['messages'][-1].content}")
    return result
 
if __name__ == "__main__":
    run_research_agent()

📋 Prerequisites

pip install noveum-trace langchain-openai langgraph python-dotenv

Set your environment variables:

export NOVEUM_API_KEY="your-noveum-api-key"
export OPENAI_API_KEY="your-openai-api-key"

🔧 How It Works

1. State Management

The AgentState TypedDict defines the state structure:

  • messages: Conversation history
  • research_complete: Boolean flag for completion
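
The nodes in this guide mutate and return the full state, which works, but you can also let LangGraph merge updates for you. Here is a minimal sketch, assuming a langgraph release that provides the add_messages reducer, where each node returns only the keys it changed:

from typing import Annotated, TypedDict
from langchain_core.messages import AIMessage
from langgraph.graph.message import add_messages

class AgentState(TypedDict):
    # add_messages appends any returned messages to the existing list
    messages: Annotated[list, add_messages]
    research_complete: bool

def research_node(state: AgentState):
    # Return only the changed keys; LangGraph merges them into the state
    return {
        "messages": [AIMessage(content="Research completed")],
        "research_complete": True,
    }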

2. Node Tracing

Each node execution is automatically traced:

  • Input state
  • Processing logic
  • Output state changes
  • Tool calls and results
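
If node spans don't show up with the handler attached only to the LLM, one option is to also pass the handler when you invoke the compiled graph. This sketch assumes NoveumTraceCallbackHandler behaves like a standard LangChain callback handler, which then receives the chain events emitted for each node:

from noveum_trace import NoveumTraceCallbackHandler

handler = NoveumTraceCallbackHandler()

# Callbacks in the invoke config are propagated to every node run in the graph
result = agent.invoke(
    {
        "messages": [HumanMessage(content="Tell me about artificial intelligence")],
        "research_complete": False,
    },
    config={"callbacks": [handler]},
)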

3. Conditional Routing

The should_continue function determines the next step:

  • Traced as a decision point
  • Shows routing logic in the dashboard
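
Routing functions aren't limited to two outcomes. A sketch with hypothetical node names ("decision", "review") and a hypothetical needs_review helper, showing how each return value maps to a target node:

def route_next(state: AgentState) -> str:
    """Pick the next node based on the current state."""
    if not state["research_complete"]:
        return "research"
    if needs_review(state):  # hypothetical helper
        return "review"
    return "end"

graph.add_conditional_edges(
    "decision",
    route_next,
    {"research": "research", "review": "review", "end": END}
)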

4. Tool Integration

Tools are automatically traced:

  • Input parameters
  • Execution time
  • Output results
  • Error handling
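
In the example above the node calls the tools directly. If you would rather let the model choose which tool to run, you can bind the tools to the LLM with LangChain's bind_tools; since the LLM already carries the Noveum callback handler, the resulting tool-call decisions should show up in the trace as well. A sketch:

llm_with_tools = llm.bind_tools([search_web, analyze_information])

def tool_calling_node(state: AgentState):
    """Let the model decide which tool (if any) to call."""
    response = llm_with_tools.invoke(state["messages"])
    state["messages"].append(response)
    return state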

🎨 Advanced Examples

Multi-Step Agent

def create_multi_step_agent():
    """Create an agent with multiple processing steps."""
    callback_handler = NoveumTraceCallbackHandler()
    llm = ChatOpenAI(callbacks=[callback_handler])
    
    def planning_node(state: AgentState):
        """Plan the research approach."""
        print("📋 Planning research approach...")
        # Planning logic here
        return state
    
    def execution_node(state: AgentState):
        """Execute the research plan."""
        print("⚡ Executing research...")
        # Execution logic here
        return state
    
    def review_node(state: AgentState):
        """Review and finalize results."""
        print("📝 Reviewing results...")
        # Review logic here
        return state
    
    graph = StateGraph(AgentState)
    graph.add_node("planning", planning_node)
    graph.add_node("execution", execution_node)
    graph.add_node("review", review_node)
    
    # Linear flow
    graph.add_edge("planning", "execution")
    graph.add_edge("execution", "review")
    graph.add_edge("review", END)
    
    graph.set_entry_point("planning")
    return graph.compile()
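
Running the multi-step agent works the same way as the basic one, for example:

multi_step_agent = create_multi_step_agent()

result = multi_step_agent.invoke({
    "messages": [HumanMessage(content="Summarize recent AI research")],
    "research_complete": False
})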

Agent with LLM Integration

def create_llm_agent():
    """Create an agent that uses LLM for decision making."""
    callback_handler = NoveumTraceCallbackHandler()
    llm = ChatOpenAI(callbacks=[callback_handler])
    
    def llm_decision_node(state: AgentState):
        """Use LLM to make decisions."""
        print("🤖 LLM making decision...")
        
        # Use LLM to decide next action
        response = llm.invoke([
            HumanMessage(content=f"Based on this context: {state['messages'][-1].content}, what should I do next?")
        ])
        
        # Add LLM response to state
        state["messages"].append(response)
        return state
    
    graph = StateGraph(AgentState)
    graph.add_node("llm_decision", llm_decision_node)
    graph.add_edge("llm_decision", END)
    graph.set_entry_point("llm_decision")
    
    return graph.compile()

📊 What You'll See in the Dashboard

After running these examples, check your Noveum dashboard:

Trace View

  • Complete agent workflow execution
  • Node-by-node execution flow
  • State transitions and changes
  • Tool calls and results

Span Details

  • Individual node execution times
  • State input/output for each node
  • Tool execution details
  • Decision point reasoning

Analytics

  • Workflow execution patterns
  • Node performance metrics
  • Tool usage statistics
  • State transition frequency

🔍 Troubleshooting

Common Issues

No traces appearing?

  • Check your NOVEUM_API_KEY is set correctly
  • Verify the callback handler is added to your LLM
  • Ensure you're calling agent.invoke() with proper state

Missing node traces?

  • Make sure each node function is properly defined
  • Check that the graph is compiled correctly
  • Verify state structure matches your TypedDict

State not updating?

  • Ensure nodes return the updated state
  • Check that state keys match your TypedDict
  • Verify node connections in the graph
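
Before debugging the full graph, a quick sanity check (a sketch, assuming you've already called noveum_trace.init() as shown above) is to confirm your environment variables are set and that a single traced LLM call reaches the dashboard:

import os
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
from noveum_trace import NoveumTraceCallbackHandler

assert os.getenv("NOVEUM_API_KEY"), "NOVEUM_API_KEY is not set"
assert os.getenv("OPENAI_API_KEY"), "OPENAI_API_KEY is not set"

# One traced LLM call; if this span never appears, the problem is setup, not the graph
llm = ChatOpenAI(model="gpt-4", callbacks=[NoveumTraceCallbackHandler()])
print(llm.invoke([HumanMessage(content="ping")]).content)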

🚀 Next Steps

Now that you've traced a basic agent workflow, you can apply the same patterns to more complex graphs: multiple nodes, conditional branches, tool-calling models, and LLM-driven decisions.

💡 Pro Tips

  1. Use TypedDict: Define clear state structures for better tracing
  2. Name your nodes: Use descriptive names for easier debugging
  3. Add logging: Include print statements to track execution flow
  4. Monitor state: Watch how state evolves through your graph
  5. Test edge cases: Ensure your routing logic handles all scenarios