POST request to your configured callback_url.
1. Enabling Webhooks for an Agent
You configure webhooks at the agent level using:
- `callback_url` - your publicly reachable HTTPS endpoint
- `callback_events` - a list of event names you want to receive
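As a sketch of what that configuration payload contains (the URL and event list below are placeholder values for illustration, not Trugen defaults):

```python
import json

# Agent-level webhook configuration. The callback_url and the event list
# are placeholder values; use your own endpoint and the events you need.
agent_webhook_config = {
    "callback_url": "https://yourapp.example.com/webhooks/trugen",
    "callback_events": [
        "participant_left",
        "agent.started_speaking",
        "agent.stopped_speaking",
        "agent.interrupted",
        "user.started_speaking",
        "user.stopped_speaking",
        "utterance_committed",
        "max_call_duration_timeout",
    ],
}
print(json.dumps(agent_webhook_config, indent=2))
```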
Replace `<api-key>` with your actual API key and `callback_url` with your HTTPS endpoint.

2. Webhook Request Format
Every webhook request uses the same base format:

Fields
- `timestamp` - Unix timestamp (float) when the event was generated.
- `conversation_id` - Unique identifier for the conversation session.
- `type` - Event source category (for these events: `"pipeline"`).
- `event.name` - The specific event type (e.g. `agent.started_speaking`).
- `event.payload` - Event-specific data payload.
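Putting the fields together, a webhook body looks like the following (all values here are illustrative):

```python
import json

# Illustrative webhook body following the base format above;
# the concrete values are made up for this example.
example_body = """
{
  "timestamp": 1718000000.123,
  "conversation_id": "conv_abc123",
  "type": "pipeline",
  "event": {
    "name": "agent.started_speaking",
    "payload": {"text": "Hello! How can I help you today?"}
  }
}
"""
event = json.loads(example_body)
print(event["event"]["name"])  # agent.started_speaking
```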
Handling Webhooks (Example in Node/Express)
Below are examples of how to receive and handle Trugen webhook events in Node.js and Python.

Keep your webhook handler fast. Do heavy work asynchronously (e.g., queue jobs) after acknowledging the webhook.
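As one minimal Python (Flask) sketch, the route path is your choice and the queue integration is left as a comment; this is an assumption about your setup, not part of the Trugen spec:

```python
from flask import Flask, jsonify, request  # third-party: pip install flask

app = Flask(__name__)

@app.route("/webhooks/trugen", methods=["POST"])  # path is your choice
def trugen_webhook():
    body = request.get_json(force=True)
    name = body["event"]["name"]
    conversation_id = body["conversation_id"]

    # Log by conversation_id and event name for debugging/analytics.
    app.logger.info("webhook %s for conversation %s", name, conversation_id)

    # Offload heavy work (DB writes, LLM calls, notifications) to a
    # background queue here; return 2xx immediately to acknowledge.
    return jsonify({"ok": True}), 200
```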
3. Event Types & Payloads
Below are the supported events, with descriptions and example webhook bodies.

participant_left
Triggered when a participant leaves the conversation (e.g., the user disconnects).
Use cases:
- Clean up resources (rooms, timers, state).
- Mark the conversation as ended in your backend.
- Trigger post-call workflows (surveys, summaries, etc.).
- `payload.id` - Identifier of the participant who left.
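An example body for this event (values are illustrative):

```python
# Illustrative participant_left webhook body (values are made up).
participant_left_event = {
    "timestamp": 1718000123.456,
    "conversation_id": "conv_abc123",
    "type": "pipeline",
    "event": {
        "name": "participant_left",
        "payload": {"id": "participant_789"},
    },
}

# A handler would use payload.id to clean up per-participant state.
left_id = participant_left_event["event"]["payload"]["id"]
print(f"cleaning up state for participant {left_id}")
```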
agent.started_speaking
Triggered when the agent starts speaking (TTS + avatar rendering begins).
Use cases:
- Show an “Agent is speaking” indicator in your UI.
- Animate visual elements (e.g., equalizer, glowing avatar border).
- Log when the agent begins its response.
- `payload.text` - The text content the agent is about to speak.
agent.stopped_speaking
Triggered when the agent finishes speaking its current utterance.
Use cases:
- Hide “Agent is speaking” indicators.
- Enable user input (e.g., unmute mic, show “Your turn”).
- Measure speaking durations.
- `payload.text` - The text that was just spoken.
agent.interrupted
Triggered when the agent is interrupted - typically because the user started speaking before the agent finished.
Use cases:
- Cut off visual indicators of TTS.
- Log interruptions to analyze conversational overlap.
- Adjust LLM behavior (e.g., tune it toward shorter answers).
- `payload` - Currently empty for this event (reserved for future metadata).
user.started_speaking
Triggered when the system detects that the user has started speaking.
Use cases:
- Pause or interrupt the agent if still speaking.
- Update UI to show recording / listening state.
- Trigger analytics on number of user turns.
user.stopped_speaking
Triggered when the user stops speaking (end of an utterance).
Use cases:
- Mark the boundary of user turns for transcription.
- Trigger LLM processing on the completed utterance.
- Use in analytics (speech duration, turn-taking patterns).
utterance_committed
Triggered when a user utterance has been fully captured and committed, usually after final ASR (speech-to-text) is ready.
Use cases:
- Store transcripts in your database.
- Trigger downstream workflows (NLP analysis, sentiment, QA).
- Display final transcript in your UI.
- `payload.text` - The final, committed text of the utterance.
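For example, a handler might append each committed utterance to a per-conversation transcript (an in-memory dict stands in for your database here):

```python
from collections import defaultdict

# Stand-in for a database table: conversation_id -> list of utterances.
transcripts: defaultdict[str, list] = defaultdict(list)

def on_utterance_committed(body: dict) -> None:
    transcripts[body["conversation_id"]].append({
        "timestamp": body["timestamp"],
        "text": body["event"]["payload"]["text"],
    })

# Illustrative event (values are made up).
on_utterance_committed({
    "timestamp": 1718000200.0,
    "conversation_id": "conv_abc123",
    "type": "pipeline",
    "event": {"name": "utterance_committed",
              "payload": {"text": "I'd like to reschedule my appointment."}},
})
print(transcripts["conv_abc123"][0]["text"])
```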
max_call_duration_timeout
Triggered when a conversation reaches the configured maximum call duration.
Use cases:
- Automatically end calls and show a “session ended” UI.
- Offer callbacks or follow-up options.
- Log session lengths and enforce billing / usage limits.
- `payload.call_duration` - Actual call duration in seconds.
- `payload.max_call_duration` - Configured maximum duration in seconds.
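An illustrative body using both payload fields:

```python
# Illustrative max_call_duration_timeout body (values are made up).
timeout_event = {
    "timestamp": 1718000900.0,
    "conversation_id": "conv_abc123",
    "type": "pipeline",
    "event": {
        "name": "max_call_duration_timeout",
        "payload": {"call_duration": 600.4, "max_call_duration": 600},
    },
}

payload = timeout_event["event"]["payload"]
overage = payload["call_duration"] - payload["max_call_duration"]
print(f"call ran {overage:.1f}s past the {payload['max_call_duration']}s limit")
```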
4. Best Practices
- Always return 2xx quickly: Acknowledge webhooks immediately and offload heavy work to background jobs.
- Idempotency: Design handlers so they can safely process the same event multiple times.
- Logging & Monitoring: Log incoming events by `conversation_id` and `event.name` for debugging and analytics.
- Security (Recommended):
  - Use HTTPS for `callback_url`.
  - Restrict by IP, signing secret, or auth token if your infrastructure supports it.