CrewAI Integration
Trace CrewAI crews, agents, tasks, and tools with Noveum Trace for full observability of multi-agent workflows
Noveum Trace provides deep integration with CrewAI, automatically capturing crew executions, agent decisions, task completions, tool calls, LLM interactions, and multi-agent coordination patterns.
What You Get
- Crew and task visibility: Spans for crew kickoff, tasks, agents, LLM calls, and tools
- Rich context: Optional capture of inputs, outputs, tool schemas, and agent snapshots (configurable on the listener)
- Flows and extras: Flow execution, memory, knowledge, A2A, MCP, and streaming can be traced when you use those features
Requirements
- Python 3.10+ (required for the noveum-trace[crewai] extra)
- CrewAI >= 0.177.0
- A Noveum API key and project
Installation
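Install the SDK with the CrewAI extra (the extra name comes from the requirements above):

```shell
pip install "noveum-trace[crewai]"
```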
Quick Start
Set credentials using environment variables:
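For example (NOVEUM_API_KEY is the variable referenced later in this guide; the project variable name shown here is an assumption, so check the SDK docs for the exact name):

```shell
export NOVEUM_API_KEY="your-api-key"
# Hypothetical variable name for selecting the Noveum project:
export NOVEUM_PROJECT="your-project"
```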
Initialize the SDK once, then attach the returned listener to your crew before kickoff(). Call listener.shutdown() when the process is done (especially important in tests or short-lived scripts).
setup_crewai_tracing() must run after noveum_trace.init(). Configure your CrewAI LLM providers the same way you do today — Noveum only observes the interactions.
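A minimal sketch of the flow described above. The import path for setup_crewai_tracing() and the exact init() parameters are assumptions; adjust them to match the installed SDK:

```python
import os

import noveum_trace
# Assumed import location for the helper named in this guide:
from noveum_trace import setup_crewai_tracing
from crewai import Agent, Task, Crew

# 1. Initialize the SDK once per process (parameter names are assumptions).
noveum_trace.init(
    api_key=os.environ["NOVEUM_API_KEY"],
    project="my-crewai-project",
)

# 2. Attach tracing; must run AFTER noveum_trace.init().
listener = setup_crewai_tracing()

# 3. Build and run your crew as usual; Noveum only observes.
researcher = Agent(
    role="Researcher",
    goal="Summarize recent AI news",
    backstory="A diligent analyst.",
)
task = Task(
    description="Write a three-bullet summary of recent AI news.",
    expected_output="Three bullet points.",
    agent=researcher,
)
crew = Crew(agents=[researcher], tasks=[task])

result = crew.kickoff()
print(result)

# 4. Flush spans before the process exits (important in tests/scripts).
listener.shutdown()
```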
Verification
Run the crew once with a valid NOVEUM_API_KEY, open the Noveum dashboard, pick your project, and confirm new traces appear. In short-lived processes call noveum_trace.flush() before exit to ensure batched spans are sent.
What Gets Traced
| Event | What's captured |
|---|---|
| Crew kickoff / completion | Inputs, final output, execution time |
| Agent execution | Agent role, goal, task assignment, iteration count |
| Task execution | Task description, assigned agent, output, status |
| LLM calls | Model, tokens (input/output), cost, latency, messages |
| Tool calls | Tool name, inputs, outputs, execution time |
| Agent-to-agent delegation | Source agent, target agent, task details |
| Memory operations | Memory query/save events |
| MCP server calls | Server name, tool called, parameters, result |
| Flow events | CrewAI Flow state transitions |
| Reasoning steps | Chain-of-thought reasoning tokens |
| Guardrail evaluations | Validation results, pass/fail status |
Manual Instantiation
For more control over what gets captured, instantiate the listener directly:
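A sketch of direct instantiation using the options from the table below. The listener class name and import path are assumptions (the guide does not name them); the keyword arguments mirror the documented configuration options:

```python
# Hypothetical class name and import path; check the SDK for the real ones.
from noveum_trace.integrations.crewai import CrewAIListener

listener = CrewAIListener(
    capture_inputs=True,
    capture_outputs=True,
    capture_llm_messages=False,  # e.g. omit full message history for privacy
    capture_tool_schemas=True,
    capture_agent_snapshot=True,
    trace_name_prefix="crewai",
    verbose=True,                # enable debug logging while integrating
)
```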
Configuration Options
| Option | Default | Description |
|---|---|---|
| capture_inputs | True | Capture task inputs, agent goals, tool arguments |
| capture_outputs | True | Capture task results, agent outputs, tool results |
| capture_llm_messages | True | Capture full message history (system + user) |
| capture_tool_schemas | True | Capture tool definitions |
| capture_agent_snapshot | True | Capture agent goal/backstory at start |
| capture_crew_snapshot | True | Capture crew agents/tasks at kickoff |
| capture_memory | True | Capture memory operations |
| capture_a2a | True | Capture agent-to-agent delegation |
| capture_mcp | True | Capture MCP server calls |
| capture_flow | True | Capture CrewAI Flow events |
| capture_reasoning | True | Capture reasoning steps |
| trace_name_prefix | "crewai" | Prefix for trace names |
| verbose | False | Enable debug logging |
Async Kickoff
The integration supports both synchronous and asynchronous crew execution:
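For async execution, CrewAI's kickoff_async() can be awaited; the attached listener traces it the same way as a synchronous kickoff(). A minimal sketch (agent/task values are illustrative):

```python
import asyncio

from crewai import Agent, Task, Crew

agent = Agent(role="Writer", goal="Draft a haiku", backstory="A poet.")
task = Task(
    description="Write a haiku about tracing.",
    expected_output="A haiku.",
    agent=agent,
)
crew = Crew(agents=[agent], tasks=[task])

async def main() -> None:
    # Runs the crew without blocking the event loop.
    result = await crew.kickoff_async()
    print(result)

asyncio.run(main())
```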
Complete Example
For a complete end-to-end example with tests, see:
Next Steps
- LangGraph Integration — Complex stateful agent workflows
- LangChain Integration — Callback-based tracing
- Context Managers — Manual tracing patterns
- SDK Integration — Init options, transports, and other frameworks