Pipecat Integration Overview
Add automatic tracing to Pipecat voice pipelines with Noveum Trace
Noveum Trace adds automatic tracing to your Pipecat voice pipeline in minutes. Every conversation is recorded as a structured trace with per-turn spans for STT, LLM, and TTS; tool/function-call details are attached to the LLM span as attributes (when available), along with latency and token usage.
Prerequisites
- Python 3.11+
- A working Pipecat pipeline (`pipecat-ai`)
- A Noveum API key (get one at noveum.ai)
Installation
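The original install snippet appears to have been lost in extraction. Assuming the SDK is published under the package name `noveum-trace` (the package name is not stated on this page), installation would look like:

```shell
pip install noveum-trace pipecat-ai
```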
Quick Start
You only need three changes:

- Initialize `noveum_trace` once at startup.
- Create a `NoveumTraceObserver`.
- Attach the observer to your `PipelineTask` (and ensure turn tracking wiring happens before the runner starts).

Traces are flushed automatically when the pipeline ends (`EndFrame` / `CancelFrame`).
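A minimal sketch of the three changes, using the names this page documents (`noveum_trace.init()`, `NoveumTraceObserver`, `attach_to_task`). The `NoveumTraceObserver` import path, the project name, and the processor list are assumptions for illustration:

```python
import os

import noveum_trace
from noveum_trace.integrations.pipecat import NoveumTraceObserver  # assumed import path

from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.runner import PipelineRunner
from pipecat.pipeline.task import PipelineParams, PipelineTask


async def main(processors):
    # 1. Initialize noveum_trace once at startup.
    noveum_trace.init(
        api_key=os.environ["NOVEUM_API_KEY"],
        project="my-voice-agent",  # must match the project in your Noveum dashboard
    )

    # 2. Create the observer.
    trace_obs = NoveumTraceObserver()

    pipeline = Pipeline(processors)

    # Enable metrics so token counts and latency reach the trace.
    task = PipelineTask(
        pipeline,
        params=PipelineParams(enable_metrics=True, enable_usage_metrics=True),
        observers=[trace_obs],
    )

    # 3. Wire turn tracking before the runner starts.
    await trace_obs.attach_to_task(task)

    await PipelineRunner().run(task)
    # Traces flush automatically on EndFrame / CancelFrame.
```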
What Gets Traced
Each pipeline session produces one conversation trace containing a turn span per conversational exchange. Each turn has child spans for STT, LLM, and TTS. When `record_audio=True` and an `AudioBufferProcessor` is present, a full-conversation recording span is also created at the root.
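The resulting hierarchy looks roughly like this (illustrative layout; only `pipecat.full_conversation` is a literal span name stated on this page):

```
conversation (trace)
├── turn 1
│   ├── stt
│   ├── llm   ← tool calls, token usage, latency as attributes
│   └── tts
├── turn 2
│   └── ...
└── pipecat.full_conversation   ← only when record_audio=True
```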
Configuration Options
Common tweaks:

- Turn off `capture_text` if you want less text stored in spans.
- Set `capture_function_calls=False` if your LLM never emits function-call frames.
- `record_audio=True` does two things:
  - Uploads per-span audio and adds `stt.audio_uuid` / `tts.audio_uuid` attributes.
  - Captures a full stereo conversation WAV as a `pipecat.full_conversation` span. This requires `AudioBufferProcessor(num_channels=2)` to be present in your pipeline (see the basic example).
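A sketch of these options together. The keyword arguments mirror the option names documented above; the exact constructor signature and the `AudioBufferProcessor` import path are assumptions:

```python
from pipecat.processors.audio.audio_buffer_processor import AudioBufferProcessor  # assumed path

trace_obs = NoveumTraceObserver(
    capture_text=False,            # store less text in spans
    capture_function_calls=False,  # skip function-call frames
    record_audio=True,             # per-span audio + full-conversation WAV
)

# record_audio=True also requires a stereo audio buffer in the pipeline:
audio_buffer = AudioBufferProcessor(num_channels=2)
```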
You can also use the convenience factory instead of constructing `NoveumTraceObserver` directly.
Troubleshooting
No traces appearing
- Verify `noveum_trace.init()` is called before the pipeline starts.
- Confirm `await trace_obs.attach_to_task(task)` is called (from async code) after `PipelineTask` is constructed and before `runner.run(task)`.
- Confirm your API key and the `project` name match what you configured in the Noveum dashboard.
Turn spans missing or not splitting correctly
- `await trace_obs.attach_to_task(task)` is required for accurate turn boundaries.
- Ensure turn tracking is enabled in your Pipecat version so `attach_to_task()` can wire external boundaries.
LLM token counts not appearing
- Confirm `PipelineParams(enable_metrics=True, enable_usage_metrics=True)` is set on your `PipelineTask` — Pipecat only emits `MetricsFrame` when metrics are enabled.
- Token counts come from Pipecat's `MetricsFrame`. Most standard Pipecat LLM services emit them when metrics are enabled.
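The metrics flags can be sketched like this, assuming `pipeline` and the observer are already constructed:

```python
from pipecat.pipeline.task import PipelineParams, PipelineTask

task = PipelineTask(
    pipeline,
    params=PipelineParams(
        enable_metrics=True,        # emit MetricsFrame (TTFB, processing time)
        enable_usage_metrics=True,  # include LLM token usage
    ),
)
```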
Function call spans missing
- Set `capture_function_calls=True`.
- Confirm your LLM processor emits `FunctionCallInProgressFrame` / `FunctionCallResultFrame`.
System prompt (`llm.system_prompt`) not appearing
- This attribute is read from the LLM processor's `_settings.system_instruction` (or `_settings.system_prompt`) at the time `LLMFullResponseStartFrame` fires, then falls back to scanning the message history for a `role: "system"` entry.
- If neither source has the value (e.g. you use a custom or pre-cached LLM processor that does not extend Pipecat's `BaseLLMService`), the attribute will be absent.
- Fix: call `trace.set_attributes({"llm.system_prompt": YOUR_SYSTEM_PROMPT})` on the active trace before the first turn, or ensure your custom LLM processor emits `LLMContextFrame` containing the system role message.
STT spans show only `pipecat_span_status: cancelled` with no transcript
- This is expected for interrupted turns — when the user or bot interrupts before STT finishes, the span is closed with cancellation status.
- Starting with SDK v1.6.x, cancelled spans also capture `stt.partial_transcript` (last interim text), `stt.interim_count`, and `stt.vad_to_cancel_ms`, so partial speech data is not lost.
Latency breakdown (`turn.latency.*`) not appearing
- These attributes come from Pipecat's `UserBotLatencyObserver.on_latency_breakdown` event. They require:
  - `await trace_obs.attach_to_task(task)` to be called (wires the latency observer).
  - `PipelineParams(enable_metrics=True)` to be set on `PipelineTask` — otherwise TTFB fields in the breakdown are empty.
  - SDK v1.6.x or later (earlier versions only captured the total `turn.user_bot_latency_seconds`).
User audio missing from `full_conversation` (mono instead of stereo)
- If a custom processor between transport input and `AudioBufferProcessor` consumes `InputAudioRawFrame` without re-emitting it downstream, `AudioBufferProcessor` never receives the user audio. The SDK cannot work around this.
- Fix: modify your custom processor to re-emit `InputAudioRawFrame` after processing it internally, so frames continue downstream to `AudioBufferProcessor`.
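A sketch of the fix: a custom processor that inspects user audio but always pushes the frame onward. The class and helper names are hypothetical, and the Pipecat import paths are assumed from its usual module layout:

```python
from pipecat.frames.frames import Frame, InputAudioRawFrame
from pipecat.processors.frame_processor import FrameDirection, FrameProcessor


class MyAudioTap(FrameProcessor):
    """Inspects user audio without consuming it."""

    async def process_frame(self, frame: Frame, direction: FrameDirection):
        await super().process_frame(frame, direction)

        if isinstance(frame, InputAudioRawFrame):
            self._inspect(frame.audio)  # your custom logic (hypothetical helper)

        # Re-emit every frame so AudioBufferProcessor still receives user audio.
        await self.push_frame(frame, direction)

    def _inspect(self, audio: bytes) -> None:
        pass  # placeholder for custom processing
```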
Next Steps
- Explore a complete example: Basic Pipecat Voice Pipeline