Framework Integrations
Deep-dive integration guides for Next.js, Express.js, FastAPI, Flask, and other popular frameworks
Noveum.ai provides native integrations for popular web frameworks, making it easy to add comprehensive tracing to your AI applications. This guide covers framework-specific setup, best practices, and advanced patterns.
🚀 Quick Framework Overview
| Framework | Language | Integration Type | Difficulty |
|---|---|---|---|
| Next.js | TypeScript | Middleware + Wrappers | ⭐ Easy |
| Express.js | TypeScript | Middleware | ⭐ Easy |
| Hono | TypeScript | Middleware + Wrappers | ⭐ Easy |
| FastAPI | Python | Middleware + Decorators | ⭐⭐ Moderate |
| Flask | Python | Extensions + Decorators | ⭐⭐ Moderate |
| Django | Python | Middleware + Decorators | ⭐⭐⭐ Advanced |
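All examples below assume the SDK package names used in their import statements — `@noveum/trace` on the TypeScript side and `noveum-trace` on the Python side. Install whichever matches your stack:

```shell
# TypeScript / Node.js projects
npm install @noveum/trace

# Python projects
pip install noveum-trace
```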
📘 TypeScript Frameworks
Next.js Integration
Next.js is one of the most popular frameworks for AI applications. Noveum provides seamless integration for both App Router and Pages Router.
App Router Setup
1. Initialize Noveum (Root Layout)
// app/layout.tsx
import { initializeClient } from '@noveum/trace';
// Initialize once at app startup
const client = initializeClient({
apiKey: process.env.NOVEUM_API_KEY!,
project: process.env.NOVEUM_PROJECT!,
environment: process.env.NOVEUM_ENVIRONMENT || 'development',
});
export default function RootLayout({
children,
}: {
children: React.ReactNode;
}) {
return (
<html lang="en">
<body>{children}</body>
</html>
);
}
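The three `process.env` variables above are read once at startup; in local development they can live in `.env.local` (placeholder values shown — use your real key from the Noveum dashboard):

```shell
# .env.local — do not commit real credentials
NOVEUM_API_KEY=your-api-key
NOVEUM_PROJECT=my-nextjs-app
NOVEUM_ENVIRONMENT=development
```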
2. API Route Tracing
// app/api/chat/route.ts
import { withNoveumTracing } from '@noveum/trace/integrations/nextjs';
import { NextRequest, NextResponse } from 'next/server';
export const POST = withNoveumTracing(
async (request: NextRequest) => {
const { message, userId } = await request.json();
// Your AI logic here - automatically traced
const response = await processMessage(message, userId);
return NextResponse.json({
response,
timestamp: new Date().toISOString()
});
},
{
spanName: 'chat-completion',
captureRequest: true,
captureResponse: true,
attributes: {
'api.route': '/api/chat',
'api.method': 'POST',
},
}
);
3. Advanced API Route with Custom Tracing
// app/api/rag/query/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { trace, span } from '@noveum/trace';
import { withNoveumTracing } from '@noveum/trace/integrations/nextjs';
export const POST = withNoveumTracing(
async (request: NextRequest) => {
const { question, userId } = await request.json();
return await trace('rag-query-endpoint', async (traceInstance) => {
traceInstance.setAttribute('user.id', userId);
traceInstance.setAttribute('question.length', question.length);
// Step 1: Generate embeddings
const embeddings = await span('generate-embeddings', async () => {
return await generateEmbeddings(question);
});
// Step 2: Retrieve documents
const documents = await span('retrieve-documents', async (spanInstance) => {
const docs = await vectorSearch(embeddings, 5);
spanInstance.setAttribute('documents.retrieved', docs.length);
return docs;
});
// Step 3: Generate answer
const answer = await span('generate-answer', async (spanInstance) => {
const result = await generateAnswer(question, documents);
spanInstance.setAttribute('answer.length', result.length);
spanInstance.setAttribute('answer.confidence', result.confidence);
return result;
});
return NextResponse.json({ answer, sources: documents });
});
},
{
spanName: 'rag-query',
captureRequest: true,
}
);
Server Actions Tracing
// app/actions/chat.ts
'use server';
import { trace, span } from '@noveum/trace';
export async function chatAction(message: string) {
return await trace('chat-server-action', async (traceInstance) => {
traceInstance.setAttribute('message.length', message.length);
traceInstance.setAttribute('action.type', 'server-action');
// Process the message with tracing
const response = await span('llm-processing', async () => {
return await callOpenAI(message);
});
return response;
});
}
Express.js Integration
Express.js integration provides automatic tracing for all routes and middleware.
1. Setup Middleware
// app.ts
import express from 'express';
import { initializeClient } from '@noveum/trace';
import { noveumMiddleware } from '@noveum/trace/integrations/express';
const app = express();
// Initialize Noveum client
const client = initializeClient({
apiKey: process.env.NOVEUM_API_KEY!,
project: 'express-ai-app',
environment: process.env.NODE_ENV || 'development',
});
// Add Noveum middleware
app.use(noveumMiddleware({
client,
captureRequest: true,
captureResponse: true,
ignoreRoutes: ['/health', '/metrics'],
attributes: {
'service.name': 'ai-api',
'service.version': '1.0.0',
},
}));
app.use(express.json());
2. Manual Route Tracing
// routes/chat.ts
import { trace, span } from '@noveum/trace';
import { Router } from 'express';
const router = Router();
router.post('/chat', async (req, res) => {
await trace('chat-endpoint', async (traceInstance) => {
const { message, userId } = req.body;
traceInstance.setAttribute('user.id', userId);
traceInstance.setAttribute('message.length', message.length);
try {
// Step 1: Validate input
await span('input-validation', async () => {
validateChatInput(message);
});
// Step 2: Process with LLM
const response = await span('llm-processing', async (spanInstance) => {
const result = await processWithLLM(message);
spanInstance.setAttribute('response.length', result.length);
spanInstance.setAttribute('llm.model', 'gpt-4');
return result;
});
// Step 3: Log conversation
await span('conversation-logging', async () => {
await logConversation(userId, message, response);
});
res.json({ response, timestamp: new Date().toISOString() });
} catch (error) {
traceInstance.setAttribute('error.occurred', true);
traceInstance.setAttribute('error.message', error instanceof Error ? error.message : String(error));
res.status(500).json({ error: 'Processing failed' });
}
});
});
export default router;
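The route above calls `validateChatInput` without defining it; a minimal sketch is shown below. The helper and its 4000-character limit are illustrative choices, not part of the SDK:

```typescript
// validation/chat.ts — hypothetical helper used by the /chat route
export function validateChatInput(message: string): void {
  if (message.trim().length === 0) {
    // Empty or whitespace-only input is rejected before hitting the LLM
    throw new Error('Message must be a non-empty string');
  }
  if (message.length > 4000) {
    // Arbitrary cap to keep prompts within model context limits
    throw new Error('Message exceeds the 4000-character limit');
  }
}
```

Because the call runs inside the `input-validation` span, a thrown error surfaces in the trace attached to the failing step.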
3. Middleware with Custom Logic
// middleware/auth.ts
import { span } from '@noveum/trace';
export const authenticateUser = async (req, res, next) => {
await span('user-authentication', async (spanInstance) => {
const token = req.headers.authorization?.replace('Bearer ', '');
spanInstance.setAttribute('auth.token_present', !!token);
if (!token) {
spanInstance.setAttribute('auth.result', 'missing_token');
return res.status(401).json({ error: 'Missing token' });
}
try {
const user = await verifyToken(token);
spanInstance.setAttribute('auth.result', 'success');
spanInstance.setAttribute('user.id', user.id);
spanInstance.setAttribute('user.plan', user.plan);
req.user = user;
next();
} catch (error) {
spanInstance.setAttribute('auth.result', 'invalid_token');
spanInstance.setAttribute('auth.error', error instanceof Error ? error.message : String(error));
res.status(401).json({ error: 'Invalid token' });
}
});
};
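One caveat in the middleware above: `replace('Bearer ', '')` passes a non-Bearer header (e.g. `Basic …`) through unchanged, so a garbage token reaches `verifyToken`. A stricter parser — a hypothetical helper, not part of the SDK — avoids that:

```typescript
// auth/token.ts — hypothetical Authorization-header parser
export function extractBearerToken(header: string | undefined): string | null {
  if (!header) return null;
  const [scheme, token] = header.split(' ');
  // Accept only the Bearer scheme with a non-empty token
  return scheme === 'Bearer' && token ? token : null;
}
```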
Hono Integration
Hono is a lightweight framework perfect for edge computing and AI applications.
// app.ts
import { Hono } from 'hono';
import { initializeClient } from '@noveum/trace';
import { noveumMiddleware, traced } from '@noveum/trace/integrations/hono';
const app = new Hono();
// Initialize client
const client = initializeClient({
apiKey: process.env.NOVEUM_API_KEY!,
project: 'hono-ai-app',
environment: 'edge',
});
// Add middleware
app.use('*', noveumMiddleware({ client }));
// Traced route
app.post('/chat', traced(async (c) => {
const { message } = await c.req.json();
// Automatically traced
const response = await processMessage(message);
return c.json({ response });
}, 'chat-endpoint', { client }));
export default app;
🐍 Python Frameworks
FastAPI Integration
FastAPI is excellent for building high-performance AI APIs with automatic documentation.
1. Setup with Middleware
# main.py
from fastapi import FastAPI, Depends
from noveum_trace.integrations.fastapi import NoveumMiddleware
import noveum_trace
# Initialize Noveum
noveum_trace.init(
api_key="your-api-key",
project="fastapi-ai-app",
environment="production"
)
app = FastAPI(title="AI API", version="1.0.0")
# Add Noveum middleware
app.add_middleware(
NoveumMiddleware,
capture_request=True,
capture_response=True,
ignore_paths=["/health", "/docs", "/openapi.json"]
)
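With the middleware registered, start the app as usual; every request outside `ignore_paths` is traced automatically:

```shell
uvicorn main:app --reload --port 8000
```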
2. Traced Endpoints
# routers/chat.py
from fastapi import APIRouter, HTTPException
from pydantic import BaseModel
import noveum_trace
router = APIRouter(prefix="/chat", tags=["chat"])
class ChatRequest(BaseModel):
message: str
user_id: str
class ChatResponse(BaseModel):
response: str
confidence: float
@router.post("/", response_model=ChatResponse)
@noveum_trace.trace("chat-endpoint")
async def chat(request: ChatRequest):
"""Chat endpoint with comprehensive tracing"""
# Add request attributes
noveum_trace.set_attribute("user.id", request.user_id)
noveum_trace.set_attribute("message.length", len(request.message))
noveum_trace.set_attribute("endpoint.path", "/chat")
try:
# Step 1: Input validation
with noveum_trace.trace_step("input-validation"):
if len(request.message.strip()) == 0:
raise HTTPException(status_code=400, detail="Empty message")
# Step 2: LLM processing
with noveum_trace.trace_step("llm-processing") as step:
response_text, confidence = await process_with_llm(request.message)
step.set_attribute("llm.model", "gpt-4")
step.set_attribute("response.confidence", confidence)
step.set_attribute("response.length", len(response_text))
# Step 3: Store conversation
with noveum_trace.trace_step("conversation-storage"):
await store_conversation(request.user_id, request.message, response_text)
return ChatResponse(response=response_text, confidence=confidence)
except HTTPException:
noveum_trace.set_attribute("error.type", "validation_error")
raise
except Exception as e:
noveum_trace.set_attribute("error.type", "processing_error")
noveum_trace.set_attribute("error.message", str(e))
raise HTTPException(status_code=500, detail="Processing failed")
3. RAG Endpoint with Detailed Tracing
# routers/rag.py
from fastapi import APIRouter
import noveum_trace
router = APIRouter(prefix="/rag", tags=["rag"])
@router.post("/query")
@noveum_trace.trace("rag-query")
async def rag_query(question: str, user_id: str):
"""RAG query with detailed step tracing"""
noveum_trace.set_attribute("user.id", user_id)
noveum_trace.set_attribute("question.length", len(question))
# Phase 1: Query analysis
with noveum_trace.trace_step("query-analysis") as step:
intent = await analyze_query_intent(question)
complexity = calculate_query_complexity(question)
step.set_attribute("query.intent", intent)
step.set_attribute("query.complexity", complexity)
step.set_attribute("query.category", categorize_question(question))
# Phase 2: Document retrieval
with noveum_trace.trace_step("document-retrieval") as step:
# Generate embeddings
embeddings = await generate_embeddings(question)
step.set_attribute("embeddings.model", "text-embedding-ada-002")
step.set_attribute("embeddings.dimensions", len(embeddings))
# Vector search
documents = await vector_search(embeddings, k=5)
step.set_attribute("documents.retrieved", len(documents))
step.set_attribute("documents.avg_similarity",
sum(doc.similarity for doc in documents) / len(documents))
# Phase 3: Answer generation
with noveum_trace.trace_step("answer-generation") as step:
context = build_context_from_documents(documents)
answer = await generate_answer_with_context(question, context)
step.set_attribute("context.length", len(context))
step.set_attribute("answer.length", len(answer))
step.set_attribute("answer.model", "gpt-4")
step.set_attribute("generation.temperature", 0.7)
return {
"answer": answer,
"sources": [{"title": doc.title, "similarity": doc.similarity}
for doc in documents],
"metadata": {
"intent": intent,
"complexity": complexity,
"documents_used": len(documents)
}
}
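One edge case in the retrieval phase above: `documents.avg_similarity` divides by `len(documents)`, which raises `ZeroDivisionError` when the search returns nothing. A small guard keeps the attribute well-defined:

```python
def average_similarity(documents):
    """Mean similarity across retrieved documents, or 0.0 when none were found."""
    if not documents:
        return 0.0
    return sum(doc.similarity for doc in documents) / len(documents)
```

With this helper, `step.set_attribute("documents.avg_similarity", average_similarity(documents))` works even for empty result sets.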
Flask Integration
The Flask integration slots into existing applications with minimal changes: an extension wires up request tracing, and decorators cover individual routes and background tasks.
1. Setup with Extensions
# app.py
from flask import Flask
from noveum_trace.integrations.flask import NoveumFlask
import noveum_trace
# Initialize Noveum
noveum_trace.init(
api_key="your-api-key",
project="flask-ai-app",
environment="production"
)
app = Flask(__name__)
# Initialize Noveum Flask extension
noveum_flask = NoveumFlask(app)
2. Traced Routes
# routes/ai.py
from flask import Blueprint, request, jsonify
import noveum_trace
ai_bp = Blueprint('ai', __name__, url_prefix='/ai')
@ai_bp.route('/chat', methods=['POST'])
@noveum_trace.trace("flask-chat")
def chat():
"""Flask chat endpoint with tracing"""
data = request.get_json()
message = data.get('message', '')
user_id = data.get('user_id', '')
# Add attributes
noveum_trace.set_attribute("user.id", user_id)
noveum_trace.set_attribute("message.length", len(message))
noveum_trace.set_attribute("request.method", "POST")
try:
# Process message
with noveum_trace.trace_step("message-processing"):
response = process_message(message)
return jsonify({
"response": response,
"status": "success"
})
except Exception as e:
noveum_trace.set_attribute("error.occurred", True)
noveum_trace.set_attribute("error.message", str(e))
return jsonify({
"error": "Processing failed",
"status": "error"
}), 500
3. Background Task Tracing
# tasks/background.py
import noveum_trace
from celery import Celery
celery_app = Celery('ai_tasks')
@celery_app.task
@noveum_trace.trace("background-document-processing")
def process_document_batch(document_ids):
"""Background document processing with tracing"""
noveum_trace.set_attribute("documents.count", len(document_ids))
noveum_trace.set_attribute("task.type", "batch_processing")
processed = 0
failed = 0
for doc_id in document_ids:
with noveum_trace.trace_step("process-document") as step:
step.set_attribute("document.id", doc_id)
try:
result = process_single_document(doc_id)
step.set_attribute("processing.success", True)
step.set_attribute("processing.result_size", len(result))
processed += 1
except Exception as e:
step.set_attribute("processing.success", False)
step.set_attribute("processing.error", str(e))
failed += 1
noveum_trace.set_attribute("documents.processed", processed)
noveum_trace.set_attribute("documents.failed", failed)
noveum_trace.set_attribute("processing.success_rate", processed / len(document_ids) if document_ids else 0.0)
return {"processed": processed, "failed": failed}
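For very large batches it is common to split the id list and enqueue one task per chunk, so no single trace accumulates thousands of spans. A generic helper (hypothetical, not part of the SDK) might look like:

```python
def chunk_ids(ids, size):
    """Split a list of document ids into chunks of at most `size` items."""
    if size < 1:
        raise ValueError("size must be >= 1")
    return [ids[i:i + size] for i in range(0, len(ids), size)]
```

Each chunk is then dispatched with the usual Celery call, e.g. `process_document_batch.delay(chunk)`.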
🔧 Advanced Patterns
Environment-Specific Configuration
# config.py
import os
import noveum_trace
def configure_noveum():
environment = os.getenv("ENVIRONMENT", "development")
config = {
"api_key": os.getenv("NOVEUM_API_KEY"),
"project": os.getenv("NOVEUM_PROJECT", "my-ai-app"),
"environment": environment,
}
if environment == "production":
config.update({
"sampling_rate": 0.1, # 10% sampling in production
"batch_size": 100,
"flush_interval": 5000,
})
elif environment == "development":
config.update({
"sampling_rate": 1.0, # 100% sampling in development
"debug": True,
})
noveum_trace.init(**config)
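To make the production `sampling_rate` of 0.1 concrete: one common scheme hashes the trace id and keeps the trace when the hash lands below the threshold, so the keep/drop decision is deterministic per trace. This is an illustration of the idea, not the SDK's actual algorithm:

```python
import hashlib

def should_sample(trace_id: str, rate: float) -> bool:
    """Deterministically keep roughly `rate` of traces, keyed on the trace id."""
    digest = hashlib.sha256(trace_id.encode()).digest()
    # Map the first 8 bytes of the hash to a float in [0, 1)
    bucket = int.from_bytes(digest[:8], "big") / 2**64
    return bucket < rate
```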
Custom Middleware
// middleware/custom-noveum.ts
import { trace, span } from '@noveum/trace';
import type { Request, Response, NextFunction } from 'express';
export const customNoveumMiddleware = (options: Record<string, unknown> = {}) => {
return async (req: Request, res: Response, next: NextFunction) => {
await trace('custom-request', async (traceInstance) => {
// Add custom request attributes
traceInstance.setAttribute('request.url', req.url);
traceInstance.setAttribute('request.method', req.method);
traceInstance.setAttribute('request.user_agent', req.get('User-Agent') ?? '');
const startTime = Date.now();
res.on('finish', () => {
traceInstance.setAttribute('response.status_code', res.statusCode);
traceInstance.setAttribute('response.duration_ms', Date.now() - startTime);
});
await span('request-processing', async () => next());
});
};
};
Error Boundary Integration
// components/TracedErrorBoundary.tsx
import React from 'react';
import { span } from '@noveum/trace';
import { ErrorBoundary } from 'react-error-boundary';
function ErrorFallback({ error, resetErrorBoundary }) {
// Log error to Noveum
React.useEffect(() => {
span('error-boundary-caught', async (spanInstance) => {
spanInstance.setAttribute('error.name', error.name);
spanInstance.setAttribute('error.message', error.message);
spanInstance.setAttribute('error.stack', error.stack);
spanInstance.setAttribute('error.component', 'error-boundary');
});
}, [error]);
return (
<div role="alert">
<h2>Something went wrong:</h2>
<pre>{error.message}</pre>
<button onClick={resetErrorBoundary}>Try again</button>
</div>
);
}
export function TracedErrorBoundary({ children }) {
return (
<ErrorBoundary FallbackComponent={ErrorFallback}>
{children}
</ErrorBoundary>
);
}
🎯 Next Steps
Choose your framework and dive deeper:
- Implement Basic Integration - Start with the basics
- Learn Tracing Concepts - Understand the fundamentals
- Explore Advanced Patterns - Custom instrumentation
- Master the Dashboard - Analyze your traces
Framework not listed? Check our Custom Integration Guide or contact our team for specific framework support.