Bots — Persona, Behaviors, and Directives
Every bot in Hydra has three configuration layers that control how it acts. Each layer has a specific job. Don't try to jam everything into one layer — the whole point of the split is that each one is enforced differently.
The three layers
| Layer | What it controls | Enforcement |
|---|---|---|
| Persona | Voice, tone, personality | Soft — biases generation |
| Behaviors | Goals the bot pursues in conversation | Soft — LLM-interpreted, listen-first |
| Directives | When-X-do-Y rules that run code | Hard — compiled to tools, deterministic |
A quick way to decide which layer a rule belongs in:
- "How should the bot sound?" → Persona
- "What should the bot try to accomplish in chat?" → Behavior
- "When the visitor says X, run this action outside of chat (email me, assign to a human, create a ticket, POST a webhook)" → Directive
Persona
Two to four sentences, written in second person. Covers voice, tone, and vibe. Don't put goals or rules here — they won't be enforced.
Example:
You are warm, concise, and enterprise-savvy. You treat every visitor as a potential long-term partner, not a quick sale. You ask good questions before offering solutions.
Behaviors
A Behavior is a goal the bot pursues in conversation. Soft — the bot uses its judgment to decide when the conversation warrants pursuing it. It's NOT a script.
Each behavior has:
- Goal — one sentence, listen-first. Phrase as "Guide qualified visitors toward..." or "Help users with... and when the conversation warrants...". Never write a scripted response.
- Success when — the observable condition that means the goal was met.
- Capture fields (optional) — info to collect if the goal implies it (e.g. name, email, company for a demo-booking behavior). When all required fields are collected, the bot calls the capture_contact tool and a new Lead is created in your CRM.
- Consent required — check this if the behavior captures PII from unknown visitors. The bot will ask for explicit OK before capturing.
- Follow-up link (optional) — a URL the bot shares in its next reply after a successful capture. Typical use: your Calendly or scheduling link after a demo-booking capture.
- Framing guidance (optional) — tells the bot how to share the follow-up link. Tone, what to mention, etc.
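Put together, a fully filled-in behavior might look like this sketch. The field names and the Calendly URL are illustrative assumptions, not Hydra's actual schema:

```python
# Hypothetical demo-booking behavior (field names assumed, not Hydra's schema).
demo_behavior = {
    "goal": ("Help visitors evaluate the product, and when the conversation "
             "warrants, guide qualified visitors toward booking a 30-min demo."),
    "success_when": "Visitor agrees to book a demo and shares contact details.",
    "capture_fields": ["name", "email", "company"],  # complete set triggers capture_contact
    "consent_required": True,            # unknown visitors: ask before capturing PII
    "follow_up_link": "https://calendly.com/acme/demo",  # placeholder URL
    "framing_guidance": "Offer the link casually; mention it's a no-pressure 30 minutes.",
}
```

Note the goal is phrased listen-first ("when the conversation warrants"), not as a script.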
Bot-captured contact info becomes a Lead
When a behavior's capture criteria are met, Hydra creates a row in your CRM as a Lead with source = bot. The Lead is linked back to the originating bot and conversation. Leads are deduped by email — re-capturing the same visitor updates their Lead rather than creating a new one. See the CRM help article for how Leads work.
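The dedupe-by-email rule is an upsert. A minimal sketch of the logic, assuming a simple email-keyed store (not Hydra's actual storage):

```python
# Dedupe Leads by email: re-capturing the same visitor updates the
# existing Lead instead of creating a second row (illustrative sketch).
def upsert_lead(crm: dict, email: str, fields: dict) -> dict:
    key = email.strip().lower()          # normalize so Ada@X.com == ada@x.com
    lead = crm.get(key)
    if lead is None:
        lead = {"email": key, "source": "bot", **fields}  # new Lead, source = bot
        crm[key] = lead
    else:
        lead.update(fields)              # existing Lead: update in place
    return lead

crm = {}
upsert_lead(crm, "ada@example.com", {"name": "Ada"})
upsert_lead(crm, "Ada@Example.com", {"company": "Analytical Engines"})
```

After both captures the store still holds a single Lead, now carrying both the name and the company.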
When to use a follow-up link
Set a follow-up link on any behavior where the right "close" is a specific URL: scheduling a call (Calendly, Cal.com), completing a signup form, viewing a demo video, etc. The bot shares the link after capture, using your framing guidance to decide tone.
Fallback path. If the visitor closes the widget before the bot shares the link, wire a Flow on customer.created with source = bot that emails the link to the captured address via send_email. The flow catches the close case — the bot gets the in-chat case. Together you cover both.
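The fallback Flow can be sketched as a config object. The shape and field names are assumptions for illustration, not Hydra's actual Flow schema, and the Calendly URL is a placeholder:

```python
# Hypothetical fallback Flow: on customer.created with source = bot,
# email the follow-up link to the captured address (shape assumed).
fallback_flow = {
    "trigger": "customer.created",
    "condition": {"source": "bot"},        # only Leads captured by the bot
    "action": {
        "type": "send_email",
        "to": "{{customer.email}}",        # the address the behavior captured
        "body": "Grab a time that works: https://calendly.com/acme/demo",
    },
}
```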
Directives
A Directive is a deterministic when-X-do-Y rule. Each directive compiles to a tool the bot can call. Tools perform real-world side effects outside of chat:
- Assign the conversation to a specific user, channel, or round-robin within a channel
- Send an email (to the account owner, the customer, or a literal address)
- Create a ticket
- POST a webhook
- Add an internal note
Each directive has:
- Trigger (natural language) — clear, plain-English condition ("user asks to speak with a human, requests escalation, or mentions an emergency")
- Action — one of the action types above, with its own config
- Enforcement — soft (fires when the bot notices the trigger) or strict (a second cheap AI pass reviews every turn to catch any missed fires; use for escalation-class rules)
- Priority — lower number = higher priority; affects ordering in the bot's tool list
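A complete directive, sketched as a dict (field names are illustrative, not Hydra's actual schema):

```python
# Hypothetical escalation directive with all fields filled in.
escalation_directive = {
    "trigger": ("user asks to speak with a human, requests escalation, "
                "or mentions an emergency"),
    "action": {"type": "assign_conversation", "target": "support-channel"},
    "enforcement": "strict",   # second cheap AI pass reviews every turn
    "priority": 1,             # lower number = higher priority in the tool list
}
```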
Persona vs behavior vs directive — a worked example
Say you want a bot on your marketing site that:
- Sounds like your brand (warm, SaaS-savvy) → Persona
- Answers product questions, and if the visitor seems qualified, guides them to book a 30-min demo — collecting their name, email, and company, then sharing your Calendly link → Behavior with capture fields + follow-up link
- If a visitor ever asks to speak with a human, immediately assign the conversation to your support channel → Directive (strict enforcement, action = assign_conversation)
That's the right split. Don't try to encode "assign to a human" as a behavior — behaviors are soft and can be missed. Don't try to encode "sound warm" as a directive — there's no tool for tone.
Language
By default, every bot detects the customer's language from their messages and replies in the same language. Detection happens on the first turn (and uses up to the last three user messages for accuracy), and the result is cached on the conversation so it stays consistent across the rest of the chat.
What's detected automatically: English, Spanish, French, German, Italian, Portuguese, Dutch, Japanese, Korean, Chinese, Russian, Arabic, Hebrew, Thai, Greek, and Hindi. If detection isn't confident — typically because the first message is very short ("hi", "thanks") — the bot defaults to English and re-tries on the next turn.
Forcing English-only replies. If your audience is English-only and you don't want the bot to switch languages even when a customer writes in another, open the bot editor → Language card → toggle on Force English. With this on, the bot always replies in English regardless of detected input language.
What it does NOT do. Detection only changes the bot's reply language — it does not translate the customer's incoming message in your inbox, and it does not retranslate stored knowledge-base articles. The bot's grounding sources (your help articles, persona, directives) are sent in the language they were authored in; the bot relies on its own multilingual reasoning to answer in the matched output language.
Switching mid-conversation. Once a language is locked in, the bot keeps responding in that language for the rest of the conversation. If a customer deliberately switches languages partway through, the bot will follow their lead within its own reply, but the cached language for the conversation does not change. To force a re-detection, mark the conversation resolved and start a new one.
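The detect-once-then-cache policy above can be sketched as follows. The detect() stub here is a toy stand-in (Hydra's real detector is model-side and not exposed); the caching and fallback logic is what the sketch illustrates:

```python
# Sketch of language detection with caching, per the policy above.
def detect(messages):
    """Toy stub returning (language, confidence); real detection is model-side."""
    text = " ".join(messages[-3:])           # up to the last three user messages
    if len(text) < 5:                        # very short ("hi", "thanks"): low confidence
        return "en", 0.3
    return ("es", 0.95) if "hola" in text.lower() else ("en", 0.9)

def reply_language(conversation, user_messages, force_english=False):
    if force_english:
        return "en"                          # Force English toggle always wins
    cached = conversation.get("language")
    if cached:
        return cached                        # locked in for the rest of the chat
    lang, conf = detect(user_messages)
    if conf < 0.5:
        return "en"                          # default to English; re-try next turn
    conversation["language"] = lang          # cache on the conversation
    return lang
```

A short first message falls back to English without caching, so the next turn re-detects; once a confident detection lands, it sticks.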
Customer memory
Every bot is automatically given short summaries of the customer's recent prior conversations with your tenant — so it can recognize repeat customers, reference past issues, and avoid asking questions the customer already answered weeks ago.
What's pulled in. Up to 5 prior conversations that meet all of the following: status resolved, a generated summary exists, created within the last 90 days, and the same customer (matched by customer_id) — across any channel the customer has used (widget, email, etc.). The most recent qualifying conversations win; older or unresolved threads are excluded automatically. Anonymous visitors (no customer_id yet) get no memory block — there's nothing to remember.
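The eligibility filter amounts to this (field names assumed for illustration):

```python
from datetime import datetime, timedelta

# Sketch of the memory-eligibility filter described above.
def memory_candidates(conversations, customer_id, now):
    if customer_id is None:
        return []                            # anonymous visitor: no memory block
    cutoff = now - timedelta(days=90)
    eligible = [
        c for c in conversations
        if c["customer_id"] == customer_id
        and c["status"] == "resolved"
        and c.get("summary")                 # must have a generated summary
        and c["created_at"] >= cutoff
    ]
    eligible.sort(key=lambda c: c["created_at"], reverse=True)
    return eligible[:5]                      # most recent 5 qualifying win
```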
What the bot sees. A ## Prior Conversations section appended to its system prompt, formatted like:
1. [Web Chat · Mar 18, 2026] Customer asked about adding a teammate. Resolved by walking through Settings → Team.
2. [Email · Feb 28, 2026] Reported a billing duplicate; refunded after verifying the Stripe charge.
Channel name and date give the bot enough context to reference both the timing ("last month you mentioned…") and the surface the conversation happened on, all without new model spend; these summaries are already generated when conversations are resolved.
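Rendering that block is a straightforward template. A sketch with the format inferred from the example lines above (not Hydra's actual template code):

```python
# Render the ## Prior Conversations prompt block (format inferred from
# the numbered example above).
def render_prior_conversations(summaries):
    lines = ["## Prior Conversations"]
    for i, s in enumerate(summaries, start=1):
        lines.append(f"{i}. [{s['channel']} · {s['date']}] {s['summary']}")
    return "\n".join(lines)

block = render_prior_conversations([
    {"channel": "Web Chat", "date": "Mar 18, 2026",
     "summary": "Customer asked about adding a teammate."},
    {"channel": "Email", "date": "Feb 28, 2026",
     "summary": "Reported a billing duplicate; refunded."},
])
```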
Per-bot opt-out. If you want a specific bot to be stateless-by-policy (compliance-heavy contexts, sensitive support flows where memory bleed would be a feature bug), open the bot editor → Customer memory card → toggle to Memory off. The block stops being injected for that bot immediately; existing conversations don't lose anything because the memory was always reconstructed at request time.
Privacy posture. Summaries stay tenant-scoped — there is no cross-tenant read path; the helper filters every query by tenant_id. If you have Privacy → PII redaction turned on (Model-only mode), the prior-conversation block is redacted at send time alongside the rest of the system prompt, so emails, phone numbers, and card-shaped strings inside summaries don't reach the model. Worth knowing: summaries are generated by a separate Haiku pass when conversations resolve, and that pass doesn't actively scrub PII from its output — so summaries can contain literal customer text. The redaction step at chat-call time is your guardrail.
Cost. Adding ~750 input tokens per turn (5 summaries × ~150 tokens each) costs around $0.002 per Sonnet-driven bot turn. Anthropic's prompt caching means follow-up turns within the same conversation pay near-zero on the system prompt — only the first turn of each new conversation carries the full memory cost. Telemetry is logged to bot_runs.prior_conv_summary_tokens so we can measure aggregate spend and tune the cap if needed.
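The arithmetic behind the ~$0.002 figure, assuming Sonnet input pricing of $3 per million tokens (the rate the quoted number implies):

```python
# Per-turn memory cost: 5 summaries x ~150 tokens at an assumed
# $3 / 1M input tokens.
summaries, tokens_each = 5, 150
memory_tokens = summaries * tokens_each          # 750 input tokens per turn
cost_per_turn = memory_tokens * 3 / 1_000_000    # 0.00225 dollars, i.e. ~$0.002
```

With prompt caching, only the first turn of a conversation pays this in full; subsequent turns read the cached system prompt at a steep discount.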
AI drafting
Every section on the bot editor has an AI draft button that turns a short plain-English description into a suggested configuration. Useful when you're not sure how to phrase a goal or trigger. You can always edit the draft before saving.
