Integrating with the Noveum Trace SDK

The Noveum Trace SDK is a simple, decorator-based tracing library for LLM applications and multi-agent systems. It provides an easy way to add observability to your Python applications with minimal code changes.

1. Why Use Noveum Trace?

  • 🎯 Decorator-First API: Add tracing with a single @trace decorator
  • 🤖 Multi-Agent Support: Built for multi-agent systems and workflows
  • ☁️ Cloud Integration: Send traces to Noveum platform or custom endpoints
  • 🔌 Framework Agnostic: Works with any Python LLM framework
  • 🚀 Zero Configuration: Works out of the box with sensible defaults
  • 📊 Comprehensive Tracing: Capture function calls, LLM interactions, and agent workflows

2. Installation

Install the SDK using pip:

pip install noveum-trace
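
To verify the installation:

pip show noveum-trace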

3. Quick Start

Basic Setup

import noveum_trace
 
# Initialize the SDK
noveum_trace.init(
    api_key="your-api-key",
    project="my-llm-app"
)

Environment Variables

You can also configure the SDK using environment variables:

export NOVEUM_API_KEY="your-api-key"
export NOVEUM_PROJECT="your-project-name"
export NOVEUM_ENVIRONMENT="production"
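
With these variables set, initialization needs no arguments. A minimal sketch, assuming init() falls back to the NOVEUM_* environment variables when no explicit values are passed, consistent with the zero-configuration defaults above:

import noveum_trace

# Assumption: with no arguments, init() reads NOVEUM_API_KEY,
# NOVEUM_PROJECT, and NOVEUM_ENVIRONMENT from the environment
noveum_trace.init()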

4. Basic Usage

Tracing Functions

import noveum_trace
 
# Trace any function
@noveum_trace.trace
def process_document(document_id: str) -> dict:
    # Your function logic here
    return {"status": "processed", "id": document_id}
 
# With performance tracking
@noveum_trace.trace(capture_performance=True, capture_args=True)
def expensive_function(data: list) -> dict:
    # Function implementation
    return {"processed": len(data)}

Tracing LLM Calls

@noveum_trace.trace_llm
def call_openai(prompt: str) -> str:
    import openai
    client = openai.OpenAI()
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content
 
# With provider specification and token metrics
@noveum_trace.trace_llm(provider="openai", capture_tokens=True)
def call_openai_with_metrics(prompt: str) -> str:
    client = openai.OpenAI()
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

Tracing Agent Workflows

@noveum_trace.trace_agent(agent_id="researcher")
def research_task(query: str) -> dict:
    # Agent logic here
    return {"findings": "...", "confidence": 0.95}
 
# With full configuration
@noveum_trace.trace_agent(
    agent_id="researcher",
    role="information_gatherer",
    capabilities=["web_search", "document_analysis"]
)
def research_agent(query: str) -> dict:
    # Research implementation
    return {"findings": "...", "sources": [...]}

5. Multi-Agent Example

import noveum_trace
 
noveum_trace.init(
    api_key="your-api-key",
    project="multi-agent-system"
)
 
@noveum_trace.trace_agent(agent_id="orchestrator")
def orchestrate_workflow(task: str) -> dict:
    # Coordinate multiple agents
    research_result = research_agent(task)
    analysis_result = analysis_agent(research_result)
    return synthesis_agent(research_result, analysis_result)
 
@noveum_trace.trace_agent(agent_id="researcher")
def research_agent(task: str) -> dict:
    # Research implementation
    return {"data": "...", "sources": [...]}
 
@noveum_trace.trace_agent(agent_id="analyst")
def analysis_agent(data: dict) -> dict:
    # Analysis implementation
    return {"insights": "...", "metrics": {...}}

6. Context Managers

For scenarios where you need finer-grained control, such as tracing only part of a function, or where adding a decorator isn't practical:

import noveum_trace
from openai import OpenAI

openai_client = OpenAI()

def process_user_query(user_input: str) -> str:
    # Pre-processing (not traced)
    cleaned_input = user_input.strip().lower()
 
    # Trace just the LLM call
    with noveum_trace.trace_llm_call(model="gpt-4", provider="openai") as span:
        response = openai_client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": cleaned_input}]
        )
 
        # Add custom attributes
        span.set_attributes({
            "llm.input_tokens": response.usage.prompt_tokens,
            "llm.output_tokens": response.usage.completion_tokens
        })
 
    # Post-processing (not traced); format_response is a user-defined helper
    return format_response(response.choices[0].message.content)
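
The same pattern works for error handling. A sketch, assuming attributes set on the span before the with block exits are attached to the recorded trace; the llm.error attribute name is illustrative:

import noveum_trace
from openai import OpenAI

client = OpenAI()

def safe_llm_call(prompt: str) -> str:
    with noveum_trace.trace_llm_call(model="gpt-4", provider="openai") as span:
        try:
            response = client.chat.completions.create(
                model="gpt-4",
                messages=[{"role": "user", "content": prompt}],
            )
        except Exception as exc:
            # Record the failure on the span before re-raising
            span.set_attributes({"llm.error": str(exc)})
            raise
        return response.choices[0].message.content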

7. Auto-Instrumentation

Automatically trace existing code without modifications:

import noveum_trace
 
# Initialize with auto-instrumentation
noveum_trace.init(
    api_key="your-api-key",
    project="my-project",
    auto_instrument=["openai", "anthropic", "langchain"]
)
 
# Now all OpenAI calls are automatically traced
import openai
client = openai.OpenAI()
 
# This call is automatically traced
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello, world!"}]
)

8. Available Decorators

@trace - General Purpose Tracing

@noveum_trace.trace
def my_function(arg1: str, arg2: int) -> dict:
    return {"result": f"{arg1}_{arg2}"}

@trace_llm - LLM Call Tracing

@noveum_trace.trace_llm
def call_llm(prompt: str) -> str:
    response = ""  # LLM call implementation goes here
    return response

@trace_agent - Agent Workflow Tracing

@noveum_trace.trace_agent(agent_id="my_agent")
def agent_function(task: str) -> dict:
    result = {}  # Agent implementation goes here
    return result

@trace_tool - Tool Usage Tracing

@noveum_trace.trace_tool
def search_web(query: str) -> list:
    results = []  # Tool implementation goes here
    return results

@trace_retrieval - Retrieval Operation Tracing

@noveum_trace.trace_retrieval
def retrieve_documents(query: str) -> list:
    documents = []  # Retrieval implementation goes here
    return documents
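
These decorators compose naturally in a pipeline. A sketch of a simple retrieval-augmented flow built from the functions above; the prompt format is illustrative:

@noveum_trace.trace
def answer_question(query: str) -> str:
    # Each decorated step below is recorded as its own span
    documents = retrieve_documents(query)   # @trace_retrieval
    context = "\n".join(str(doc) for doc in documents)
    return call_llm(f"Context:\n{context}\n\nQuestion: {query}")  # @trace_llm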

9. Advanced Configuration

Programmatic Configuration

import noveum_trace
 
# Advanced configuration with transport settings
noveum_trace.init(
    api_key="your-api-key",
    project="my-project",
    environment="production",
    transport_config={
        "batch_size": 50,
        "batch_timeout": 2.0,
        "retry_attempts": 3,
        "timeout": 30
    },
    tracing_config={
        "sample_rate": 1.0,
        "capture_errors": True,
        "capture_stack_traces": False
    }
)
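
Configuration values can be computed at startup. A sketch using only the parameters shown above, assuming sample_rate is the fraction of calls that are traced:

import os
import noveum_trace

env = os.environ.get("NOVEUM_ENVIRONMENT", "development")

noveum_trace.init(
    api_key=os.environ["NOVEUM_API_KEY"],
    project="my-project",
    environment=env,
    # Trace everything outside production, sample 10% in production
    tracing_config={"sample_rate": 0.1 if env == "production" else 1.0},
)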

Thread Management

Track conversation threads and multi-turn interactions:

from noveum_trace import ThreadContext
 
# Create and manage conversation threads
with ThreadContext(name="customer_support") as thread:
    thread.add_message("user", "Hello, I need help with my order")
 
    # LLM response within thread context
    with noveum_trace.trace_llm_call(model="gpt-4") as span:
        response = llm_client.chat.completions.create(...)
        thread.add_message("assistant", response.choices[0].message.content)

10. Streaming Support

Trace streaming LLM responses with real-time metrics:

from noveum_trace import trace_streaming
from openai import OpenAI

openai_client = OpenAI()

def stream_openai_response(prompt: str):
    with trace_streaming(model="gpt-4", provider="openai") as manager:
        stream = openai_client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
            stream=True
        )
 
        for chunk in stream:
            if chunk.choices[0].delta.content:
                content = chunk.choices[0].delta.content
                manager.add_token(content)
                yield content
 
        # Streaming metrics are automatically captured
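
Consuming the generator is unchanged; tokens print as they arrive while the manager records streaming metrics:

for token in stream_openai_response("Write a haiku about tracing"):
    print(token, end="", flush=True)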

Examples

We provide comprehensive examples to help you get started with Noveum Trace.
