Synthetic Voice Testing

Test your LiveKit voice agent at scale with NovaSynth β€” realistic AI-generated callers via LiveKit rooms, with automatic tracing, datasets, and NovaEval evaluations

NovaSynth joins your LiveKit room as a participant and conducts a realistic voice conversation with your agent. The synthetic caller is driven by a persona and scenario β€” your agent handles it as a normal LiveKit session. Your existing trace wrappers (LiveKitSTTWrapper, LiveKitTTSWrapper, setup_livekit_tracing) capture every STT, LLM, and TTS span automatically. The resulting traces are available in your Noveum dashboard for dataset creation and NovaEval evaluations.

Private Beta β€” NovaSynth synthetic voice testing is currently available to select customers. To enable it for your account, contact support@noveum.ai.

How It Works

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  NovaSynth Synthetic Caller                                   β”‚
β”‚  Persona: goal, patience, tone, language, knowledge base      β”‚
β”‚  Scenario: conversation flow with fixed steps + branches      β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                         β”‚  Real audio
              (LiveKit room join β€” NovaSynth as participant)
                         ↓
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  Your LiveKit Voice Agent                                     β”‚
β”‚  AgentSession runs as normal                                  β”‚
β”‚  LiveKitSTTWrapper + LiveKitTTSWrapper capture every span     β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                         β”‚  Traces
                         ↓
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  Noveum Platform                                              β”‚
β”‚  Traces dashboard  β†’  per-turn audio, transcripts, latency   β”‚
β”‚  Datasets          β†’  curated trace collections              β”‚
β”‚  NovaEval          β†’  automated quality scoring + model      β”‚
β”‚                        comparison, regression detection       β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Why each layer matters:

  • Without the LiveKit trace wrappers: the session runs but produces no data.
  • Without datasets: traces exist but cannot be evaluated systematically.
  • Without NovaEval: no quality measurement, no regression detection, no model comparison.
  • With all three: every test run produces a scored, comparable, auditable result.

Before You Begin

Complete these steps before running synthetic tests.

  1. Integrate Noveum Trace β€” follow the LiveKit Integration Overview and the Basic LiveKit Voice Agent guide to add the trace wrappers to your agent.
  2. Verify tracing works β€” run a few test conversations with your agent and confirm the traces appear in your dashboard with the expected STT, LLM, and TTS spans.
  3. Use NovaSynth to build your initial dataset β€” run synthetic tests to generate your first batch of traced conversations. Create a dataset from those traces in the Noveum dashboard (Traces β†’ select traces β†’ Create Dataset).
  4. Run a NovaEval evaluation β€” go to Evaluations β†’ New Evaluation, select your dataset, and start an evaluation job to establish a quality baseline.

NovaSynth is designed to generate the conversations that make up your evaluation datasets β€” you do not need an existing dataset before your first run.

Step 1: Expose Your Agent's Audio Endpoint

NovaSynth joins your LiveKit room as a participant and speaks to your agent with an AI-generated voice. Your agent handles the session as a normal LiveKit job β€” no changes to your agent code are required.

# Your existing LiveKit agent setup β€” no changes needed
from noveum_trace.integrations.livekit import (
    LiveKitSTTWrapper,
    LiveKitTTSWrapper,
    setup_livekit_tracing,
    extract_job_context,
)
from livekit.agents import Agent, AgentSession, JobContext

This snippet shows the imports required for Noveum tracing β€” your LiveKit server URL, room credentials, and API key are configured in the Noveum dashboard, not in agent code.

Register your LiveKit server URL and API secret under Project Settings β†’ Agent Endpoints in the Noveum dashboard. NovaSynth uses these credentials to generate room tokens and connect to your server.

Step 2: Create Personas

Note: All code and curl examples below use placeholder values β€” $NOVEUM_API_KEY, "my-voice-agent", "persona_abc123", "wss://yourapp.livekit.cloud", and other sample strings. Replace them with your actual API key, project name, IDs, and LiveKit server URL before running.

Personas can be created and managed from the Noveum dashboard under Synthetic Testing β†’ Personas, or via the API examples below.

A persona is the synthetic caller's identity β€” who they are, how they speak, and what they want from the call. Realistic, diverse personas surface the failure modes that matter most.

Persona fields:

  • name, description β€” character identity
  • goal β€” what they want to accomplish in this session
  • patience_level β€” 0.0 (immediately frustrated) to 1.0 (very patient)
  • personality_traits β€” e.g. ["direct", "impatient", "tech-savvy"]
  • tone_preference β€” "casual", "formal", "friendly", "curt", "aggressive"
  • primary_language β€” supports multilingual callers, e.g. ["Hindi", "English"]
  • knowledge_base β€” what the caller already knows (menu items, pricing, account history)
  • Optional demographics: age, occupation, location

Paste your agent's system prompt and Noveum automatically generates a diverse set of personas covering a range of personalities and goals:

curl -X POST https://api.noveum.ai/v1/synthetic/personas/generate \
  -H "Authorization: Bearer $NOVEUM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "project": "my-voice-agent",
    "system_prompt": "You are a friendly drive-thru order taker for BurgerPlace. Help customers place food and drink orders, answer questions about the menu, and confirm orders before finalizing.",
    "count": 5
  }'

Manual persona creation

Create specific personas to target known problem areas:

curl -X POST https://api.noveum.ai/v1/synthetic/personas \
  -H "Authorization: Bearer $NOVEUM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "project": "my-voice-agent",
    "name": "Alex Chen",
    "description": "Impatient professional on a short lunch break",
    "goal": "Order a burger and fries as fast as possible",
    "patience_level": 0.2,
    "personality_traits": ["direct", "impatient", "efficiency-focused"],
    "tone_preference": "curt",
    "primary_language": ["English"],
    "knowledge_base": ["knows the full menu", "ordered here many times before"]
  }'

A contrasting example β€” patient first-timer who needs guidance:

{
  "project": "my-voice-agent",
  "name": "Meena Patel",
  "description": "Elderly first-time caller, unfamiliar with voice ordering",
  "goal": "Order food for a family of four",
  "patience_level": 0.9,
  "personality_traits": ["polite", "hesitant", "detail-oriented"],
  "tone_preference": "formal",
  "primary_language": ["English", "Gujarati"],
  "knowledge_base": ["has never ordered by voice before", "not sure what sizes are available"]
}

List all personas for a project:

curl "https://api.noveum.ai/v1/synthetic/personas?project=my-voice-agent" \
  -H "Authorization: Bearer $NOVEUM_API_KEY"

Step 3: Create Scenarios

Scenarios can be created and managed from the Noveum dashboard under Synthetic Testing β†’ Scenarios, or via the API examples below.

A scenario is the conversation plan. It defines what the caller wants to accomplish, in what order, and how the conversation branches based on agent responses.

Scenario structure:

  • name, description β€” what this scenario tests
  • events β€” a tree of conversation steps:
    • id β€” unique step identifier
    • parent_id β€” which step this follows (null for the opening step)
    • action β€” what the synthetic caller does at this step
    • condition β€” optional: this step only fires if this condition is met in the conversation
    • fixed: true β€” this step always happens regardless of what the agent says

Steps with fixed: true form the backbone of the conversation. Steps with a condition create branches β€” the synthetic caller responds adaptively, the same way a real person would.
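
To see the split concretely, here is a small sketch in plain Python over the event structure above (the helper is illustrative, not an SDK function), using the happy-path events from the manual example below:

```python
def split_events(events):
    """Partition a scenario's events into the fixed backbone and the
    conditional branches, following the structure described above."""
    backbone = [e for e in events if e.get("fixed")]
    branches = [e for e in events if not e.get("fixed")]
    return backbone, branches

events = [
    {"id": "e1", "action": "Greet the agent and say you want to place an order", "fixed": True},
    {"id": "e2", "parent_id": "e1", "action": "Order one burger", "fixed": True},
    {"id": "e3", "parent_id": "e2", "condition": "agent asks about sides or drinks",
     "action": "Order fries, decline a drink"},
    {"id": "e4", "parent_id": "e2", "condition": "agent confirms the order total",
     "action": "Confirm and end the session"},
]

backbone, branches = split_events(events)
print([e["id"] for e in backbone])   # ['e1', 'e2']  -- always happen
print([e["id"] for e in branches])   # ['e3', 'e4']  -- fire only if their condition is met
```

Like personas, scenarios can also be generated automatically from your agent's system prompt: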

curl -X POST https://api.noveum.ai/v1/synthetic/scenarios/generate \
  -H "Authorization: Bearer $NOVEUM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "project": "my-voice-agent",
    "system_prompt": "You are a drive-thru order taker for BurgerPlace...",
    "count": 3,
    "focus": "include edge cases and failure modes"
  }'

Manual scenario β€” happy path

curl -X POST https://api.noveum.ai/v1/synthetic/scenarios \
  -H "Authorization: Bearer $NOVEUM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "project": "my-voice-agent",
    "name": "Quick single-item order",
    "description": "Customer places one item and confirms immediately",
    "events": [
      { "id": "e1", "action": "Greet the agent and say you want to place an order", "fixed": true },
      { "id": "e2", "parent_id": "e1", "action": "Order one burger", "fixed": true },
      { "id": "e3", "parent_id": "e2", "condition": "agent asks about sides or drinks", "action": "Order fries, decline a drink" },
      { "id": "e4", "parent_id": "e2", "condition": "agent confirms the order total", "action": "Confirm and end the session" }
    ]
  }'

Manual scenario β€” edge case (unavailable item)

{
  "project": "my-voice-agent",
  "name": "Unavailable menu item",
  "description": "Customer asks for an item not on the menu and handles the agent's response",
  "events": [
    { "id": "e1", "action": "Ask for the spicy chicken sandwich", "fixed": true },
    { "id": "e2", "parent_id": "e1", "condition": "agent says it is unavailable", "action": "Express disappointment, ask what chicken options are available" },
    { "id": "e3", "parent_id": "e2", "action": "Order the closest alternative the agent suggests" },
    { "id": "e4", "parent_id": "e1", "condition": "agent confirms the item without flagging it as unavailable", "action": "Place the order and end the session" }
  ]
}
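
Branch selection can also be simulated locally. In this hedged sketch, eligible_steps is an illustrative helper (not an SDK function) that mimics how the caller picks its next step: a child step fires unless it carries a condition that has not been met.

```python
def eligible_steps(events, current_id, met_conditions):
    """Return the children of current_id that would fire next:
    steps without a condition (including fixed steps) always fire;
    conditional steps fire only when their condition has been met."""
    children = [e for e in events if e.get("parent_id") == current_id]
    return [e for e in children
            if "condition" not in e or e["condition"] in met_conditions]

events = [
    {"id": "e1", "action": "Ask for the spicy chicken sandwich", "fixed": True},
    {"id": "e2", "parent_id": "e1",
     "condition": "agent says it is unavailable",
     "action": "Express disappointment, ask what chicken options are available"},
    {"id": "e3", "parent_id": "e2",
     "action": "Order the closest alternative the agent suggests"},
    {"id": "e4", "parent_id": "e1",
     "condition": "agent confirms the item without flagging it as unavailable",
     "action": "Place the order and end the session"},
]

# Agent correctly flags the item as unavailable -> only the e2 branch fires.
nxt = eligible_steps(events, "e1", {"agent says it is unavailable"})
print([e["id"] for e in nxt])  # ['e2']
```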

List all scenarios for a project:

curl "https://api.noveum.ai/v1/synthetic/scenarios?project=my-voice-agent" \
  -H "Authorization: Bearer $NOVEUM_API_KEY"

Step 4: Trigger a Synthetic Test Run

With a persona, a scenario, and a registered LiveKit endpoint, trigger a run. Noveum's infrastructure handles everything from here β€” joining the room, driving the conversation, and capturing the trace. Runs can also be started from the Noveum dashboard β†’ Synthetic Testing β†’ New Run.

curl -X POST https://api.noveum.ai/v1/synthetic/runs \
  -H "Authorization: Bearer $NOVEUM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "project": "my-voice-agent",
    "persona_id": "persona_abc123",
    "scenario_id": "scenario_def456",
    "agent_endpoint": {
      "type": "livekit",
      "livekit_url": "wss://yourapp.livekit.cloud",
      "room_name": "test-room-name"
    }
  }'

What happens after the request:

  1. NovaSynth connects to your LiveKit server and joins the specified room as a participant.
  2. The synthetic caller speaks naturally using the persona's voice, tone, and language, following the scenario's event tree.
  3. Your LiveKit agent handles the session as a normal job β€” STT, LLM, and TTS run as normal.
  4. LiveKitSTTWrapper, LiveKitTTSWrapper, and setup_livekit_tracing capture every span with full audio, transcripts, latency, and token data.
  5. When the scenario completes or a natural goodbye occurs, the session ends.
  6. The complete trace appears in your Noveum dashboard, tagged source: synthetic.

Step 5: Monitor a Run

curl "https://api.noveum.ai/v1/synthetic/runs/run_xyz789" \
  -H "Authorization: Bearer $NOVEUM_API_KEY"

Response fields:

Field              Description
id                 Run identifier
status             pending | running | completed | failed
trace_id           Noveum trace ID for this run
trace_url          Direct link to the trace in the dashboard
duration_seconds   Total session duration
turn_count         Number of conversational turns
persona.name       Name of the persona used
scenario.name      Name of the scenario used

Runs are also visible in Noveum dashboard β†’ Synthetic Testing β†’ Runs.
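
A polling loop over this endpoint might look like the sketch below. The fetch_run callable is injected so the loop itself stays plain Python; in practice it would wrap a GET against /v1/synthetic/runs/{run_id} with your Authorization header.

```python
import time

def wait_for_run(run_id, fetch_run, poll_seconds=5.0, timeout_seconds=600.0):
    """Poll a synthetic run until it reaches a terminal status.

    fetch_run(run_id) must return a dict with at least a "status"
    field (pending | running | completed | failed, per the table
    above). Raises TimeoutError if no terminal status is reached.
    """
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        run = fetch_run(run_id)
        if run["status"] in ("completed", "failed"):
            return run
        time.sleep(poll_seconds)
    raise TimeoutError(f"run {run_id} did not finish within {timeout_seconds}s")
```

In production, fetch_run would be something like lambda rid: requests.get(f"{BASE}/synthetic/runs/{rid}", headers=HEADERS).json(), reusing the BASE and HEADERS values from the batch-testing script below.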

Step 6: View Traces

Synthetic traces appear in the Traces section alongside real conversations. The schema is identical to a real session β€” synthetic traces can be filtered by their tags and mixed freely with real traces in datasets.

livekit.session
β”‚  source: synthetic
β”‚  synthetic.persona: "Alex Chen"
β”‚  synthetic.scenario: "Quick single-item order"
β”‚  livekit.job.id, livekit.room.name, livekit.participant.identity
β”‚  session.turn_count, session.total_cost
β”‚
β”œβ”€β”€ livekit.stt Γ— N
β”‚   β”œβ”€β”€ stt.text, stt.is_final, stt.language
β”‚   β”œβ”€β”€ stt.model, stt.confidence
β”‚   β”œβ”€β”€ stt.vad_to_final_ms, stt.first_text_latency_ms
β”‚   └── stt.audio_uuid  (when record=True)
β”‚
β”œβ”€β”€ livekit.llm Γ— N
β”‚   β”œβ”€β”€ llm.model, llm.system_prompt, llm.input, llm.output
β”‚   β”œβ”€β”€ llm.input_tokens, llm.output_tokens, llm.total_tokens
β”‚   β”œβ”€β”€ llm.cost.input, llm.cost.output, llm.cost.total
β”‚   β”œβ”€β”€ llm.time_to_first_token_ms
β”‚   └── llm.function_calls[], llm.function_call_results[]
β”‚
└── livekit.tts Γ— N
    β”œβ”€β”€ tts.input_text, tts.voice, tts.model
    β”œβ”€β”€ tts.time_to_first_byte_ms, tts.characters
    └── tts.audio_uuid  (when record=True)

To isolate synthetic traces in the dashboard, apply the source: synthetic filter in the Traces view.

Step 7: Build Datasets and Run NovaEval Evaluations

Creating a dataset

  1. Go to Traces in the Noveum dashboard.
  2. Filter by source: synthetic and/or date range.
  3. Select the traces to include.
  4. Click Create Dataset.

You can mix synthetic and real traces in the same dataset. A dataset that combines both gives the most representative evaluation results.

Running a NovaEval evaluation

  1. Go to Evaluations β†’ New Evaluation.
  2. Select your dataset.
  3. Noveum's NovaEval engine recommends scorers based on your agent type. For voice agents, this includes:
    • Conversational metrics: knowledge retention, conversation relevancy, role adherence
    • Task completion: goal achievement, tool relevancy, task progression
    • Quality: conversation completeness, response clarity
  4. Optionally, select model variants to compare if you are evaluating a model swap.
  5. Start the evaluation β€” Noveum runs it and presents results in the dashboard.

Results show per-scenario quality scores, aggregate metrics across the full dataset, and side-by-side model comparisons. Edge-case scenarios built with impatient or confused personas reveal exactly where your agent fails before real users do.
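
As a rough illustration of how per-scenario scores roll up into dataset-level metrics, here is a sketch; the nested dict shape is an assumption for illustration, not the NovaEval export format:

```python
from collections import defaultdict

def aggregate_scores(per_scenario):
    """Average each metric across scenarios.

    per_scenario: {scenario_name: {metric_name: score}} -- an assumed
    shape; adapt it to however you export evaluation results.
    """
    totals, counts = defaultdict(float), defaultdict(int)
    for scores in per_scenario.values():
        for metric, value in scores.items():
            totals[metric] += value
            counts[metric] += 1
    return {metric: totals[metric] / counts[metric] for metric in totals}

scores = {
    "Quick single-item order": {"role_adherence": 0.95, "goal_achievement": 1.0},
    "Unavailable menu item":   {"role_adherence": 0.85, "goal_achievement": 0.5},
}
print(aggregate_scores(scores))  # role_adherence β‰ˆ 0.90, goal_achievement β‰ˆ 0.75
```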

Batch Testing

Run every persona Γ— scenario combination for full matrix coverage. After all runs complete, select the traces, create a single dataset, and run one NovaEval evaluation across the entire matrix.

import itertools
import os

import requests

BASE = "https://api.noveum.ai/v1"
NOVEUM_API_KEY = os.environ["NOVEUM_API_KEY"]
HEADERS = {"Authorization": f"Bearer {NOVEUM_API_KEY}", "Content-Type": "application/json"}
PROJECT = "my-voice-agent"

personas = requests.get(
    f"{BASE}/synthetic/personas",
    params={"project": PROJECT},
    headers=HEADERS,
).json()["personas"]

scenarios = requests.get(
    f"{BASE}/synthetic/scenarios",
    params={"project": PROJECT},
    headers=HEADERS,
).json()["scenarios"]

runs = []
for persona, scenario in itertools.product(personas, scenarios):
    run = requests.post(f"{BASE}/synthetic/runs", json={
        "project": PROJECT,
        "persona_id": persona["id"],
        "scenario_id": scenario["id"],
        "agent_endpoint": {
            "type": "livekit",
            "livekit_url": "wss://yourapp.livekit.cloud",  # replace with your LiveKit server URL
            "room_name": "test-room-name",                 # replace with your room name
        },
    }, headers=HEADERS).json()
    runs.append(run)
    print(f"  {persona['name']:25s}  Γ—  {scenario['name']:30s}  β†’  {run['id']}")

print(f"\nStarted {len(runs)} runs  ({len(personas)} personas Γ— {len(scenarios)} scenarios)")

Best Practices

  • Start with 3–5 personas covering a spread: a patient happy-path user, an impatient expert, a confused first-timer, and one multilingual user if your agent serves mixed-language callers.
  • Create at least one edge-case scenario per core conversation flow. Happy paths pass by definition β€” edge cases are where agents break.
  • Use patience_level: 0.1–0.3 to stress-test. Impatient callers expose slow response times, rambling answers, and goal-completion failures.
  • Re-run the exact same persona Γ— scenario matrix after every model swap or prompt change. If the new configuration fails more scenarios than the previous one, it is not ready.
  • Keep record=True in setup_livekit_tracing. Audio recordings of synthetic sessions are invaluable for debugging β€” you can hear how the synthetic caller phrased a request and how your agent responded.
  • Tag batch-test datasets separately from production datasets. Evaluating a pure synthetic matrix is useful for regression testing; mixing a representative sample of real traces with synthetic ones gives a broader picture of overall agent health.
  • Use AI-generated personas and scenarios first to get broad coverage fast, then add manual entries to target specific failure modes you discover.

FAQ

Can I run synthetic testing without Noveum Trace integrated into my agent? No. NovaSynth joins your LiveKit room and conducts a real session, but without LiveKitSTTWrapper, LiveKitTTSWrapper, and setup_livekit_tracing active, the session produces no data. The trace is the entire output of a synthetic run β€” without it, the session simply disappears.

Do synthetic traces look different from real user traces? In structure, no. They use the same schema as real sessions. They carry source: synthetic, synthetic.persona, and synthetic.scenario attributes so you can filter them in the dashboard and keep them separate from production data when needed.
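
If you export traces and need to separate synthetic sessions from real ones client-side, filtering on that source attribute is enough. A sketch, assuming each exported trace carries an attributes mapping (the export shape itself is an assumption):

```python
def split_by_source(traces):
    """Split exported traces into synthetic and real sessions,
    assuming each trace dict has an 'attributes' mapping carrying
    the 'source' key shown in the span tree above."""
    synthetic, real = [], []
    for trace in traces:
        if trace.get("attributes", {}).get("source") == "synthetic":
            synthetic.append(trace)
        else:
            real.append(trace)
    return synthetic, real

traces = [
    {"id": "t1", "attributes": {"source": "synthetic", "synthetic.persona": "Alex Chen"}},
    {"id": "t2", "attributes": {}},  # assumed: real sessions lack the source: synthetic tag
]
synthetic, real = split_by_source(traces)
print([t["id"] for t in synthetic], [t["id"] for t in real])  # ['t1'] ['t2']
```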

Can the synthetic caller handle interruptions? Yes. The synthetic caller delivers real audio into the LiveKit room, so any session behavior that depends on real-time audio β€” VAD, barge-in, end-of-utterance detection β€” works exactly as it would with a real user.

How long does a test run take? Typical voice agent conversations take 30–120 seconds. NovaSynth runs in real time β€” it joins a real LiveKit session and the conversation unfolds at the natural pace of speech.

How many runs can I trigger in parallel? Concurrent run limits depend on your plan. Contact support@noveum.ai for details.

Do I need an existing dataset before my first NovaSynth run? No. NovaSynth generates the conversations that become your dataset. Run your first synthetic tests, select the resulting traces in the Noveum dashboard, and create your dataset from there. You do not need to collect real user data before getting started.

How do I get access to NovaSynth synthetic testing? NovaSynth is currently in private beta and is enabled on request. Reach out to support@noveum.ai and we will enable it for your account.
