The AI SDR Stack: Clay + Smartlead + AI Personalization

Build an AI-led outbound engine by combining Clay (list + enrichment + intent), Smartlead (multi-inbox sequencing), and an AI personalization layer that outputs compliant, on-brand, per-prospect emails in a fixed schema. This runbook gives you the exact inputs, prompts, QA gates, and steps to go from ICP to booked meetings using AI SDR tools today.

Key takeaways:

  • Clay becomes your deterministic “prospect compiler”: ICP → accounts → contacts → enrichment → routing.
  • Smartlead becomes your “execution runtime”: inbox pools, throttles, sequences, reply handling.
  • The AI layer only works if you enforce a strict output schema + QA rubric before anything sends.

I’ve run growth teams where outbound was the difference between hitting plan and missing it by a quarter. At Uber and Postmates, the lesson was always the same: distribution beats cleverness, but only when your system is reliable. AI SDR workflows fail for one boring reason: teams ship “cool prompts” instead of a deterministic pipeline with gating, fallbacks, and a schema that downstream tools can trust.

This runbook is the operator version of “AI SDR Stack: Clay + Smartlead + AI Personalization.” You’ll set up a repeatable flow that: (1) builds a target list inside Clay, (2) enriches it with the minimum viable data needed to personalize without hallucinating, (3) generates emails that sound like a competent human SDR, and (4) launches sequences through Smartlead with mailbox controls and measurable outcomes.

I’m going to assume you need this running fast. The structure below is designed so you can point Claude/ChatGPT at it, copy prompts, paste into Clay/Smartlead, and ship. The core idea: treat personalization as a data product. If a field is missing, the model must degrade gracefully, not invent.

1. Objective

Launch a working AI SDR outbound pipeline that produces enriched leads in Clay, generates schema-valid personalized emails, and sends them via Smartlead sequences to book qualified meetings.

2. Inputs Required

  • ICP definition (1–3 sentences) including:
    • Target industries
    • Company size range
    • Geo
    • Buyer titles
    • Primary pain trigger
  • Offer: what you’re asking for (demo, 15-min chat, teardown), plus 1–2 proof points you can defend (no made-up metrics).
  • Clay access with credits for enrichment.
  • Smartlead access with:
    • 3–20 sending inboxes (Google Workspace or Outlook)
    • Domain(s) with correct DNS (SPF, DKIM, DMARC)
  • A source of accounts/contacts:
    • Clay’s native sources, or CSV, or CRM export.
  • A calendar link (e.g., Calendly) and a “reply-to” handling plan (who responds, SLA).
  • Compliance constraints (your legal requirements for cold email, opt-out language, and suppression lists).
  • Assumption: you can tolerate early learning cycles. Expect iteration on ICP, deliverability, and messaging in week 1.

3. Tool Stack

Core stack (recommended)

  • Clay (enrichment + workflows)
    • Alternatives: Apollo (list/enrich), Persana, Common Room (community/intent), ZoomInfo (enterprise data).
  • Smartlead (sending + multi-inbox + warmup + sequencing)
    • Alternatives: Instantly, Lemlist, Outreach (enterprise), Salesloft (enterprise).
  • LLM for personalization
    • Recommended: Claude (writing quality) or OpenAI GPT-4.1 (structured outputs + tool calling)
    • Alternatives: Gemini 1.5/2.0, Mistral Large.
  • Workspace for building + iteration
    • Google Sheets (fast) or Airtable (better workflow states)
    • Alternatives: Notion DB, Coda.

Optional but high ROI

  • Mailbox + deliverability
    • Google Postmaster Tools (for Gmail domains)
    • Alternatives: Microsoft SNDS (Outlook-related signals), GlockApps (testing).
  • Programmatic control
    • Cursor + a small Node/Python script for QA + schema validation
    • Alternatives: n8n, Zapier, Make.

4. Prompt Pack

Use these prompts exactly. They assume you will feed the model the required fields (don’t ask the model to “go research” prospects unless you actually provide sources).

# Prompt 1 (Claude / ChatGPT): Define ICP → Targeting Rules → Clay Columns
You are my Growth Engineer. Output a deterministic spec I can implement in Clay.

Context:
- Company: {{YOUR_COMPANY}}
- Product: {{ONE_SENTENCE_PRODUCT}}
- Primary customer: {{WHO_BUYS}}
- ICP (draft): {{YOUR_ICP_TEXT}}
- Offer CTA: {{CTA}}
- Disqualifiers: {{LIST_DISQUALIFIERS}}
- Regions: {{REGIONS}}
- Compliance constraints: {{COMPLIANCE_NOTES}}

Task:
1) Rewrite the ICP as strict filtering rules (industry keywords, employee range, revenue proxy if needed, tech stack signals, geo).
2) List exactly which Clay columns I need to create to support enrichment + personalization.
3) For each column: give (a) data source type (company, person, web, LinkedIn, intent), (b) acceptable null behavior, (c) how it will be used in email copy.
4) Provide 10 example job titles to include and 10 to exclude.

Output format: YAML with keys: icp_rules, clay_columns, include_titles, exclude_titles.
Hard rule: If a field is not available reliably, mark it optional and define fallback copy behavior.
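
For orientation, here's a trimmed, hypothetical example of the YAML shape a good response takes (every value below is an invented placeholder, not a recommendation):

```yaml
icp_rules:
  industry_keywords: ["b2b saas", "analytics"]
  employee_range: [50, 500]
  geo: ["US", "CA"]
clay_columns:
  - name: company_blurb
    source: web             # e.g. company website meta description
    null_behavior: optional # fall back to industry-level copy if missing
    copy_use: one-line company context in the first line
include_titles: ["VP Growth", "Head of Marketing"]
exclude_titles: ["Marketing Intern", "Student"]
```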

# Prompt 2 (GPT-4.1 recommended): Personalization Generator (Schema-Strict)
You are an AI SDR personalization engine. You must NOT invent facts. If data is missing, write copy that does not rely on it.

Inputs (JSON):
{{PROSPECT_JSON}}

Your tasks:
- Produce (1) a subject line, (2) a first line personalization, (3) a 90-130 word email body, (4) a single CTA question.
- Use the prospect’s role + company context when available.
- If you reference a “trigger” (hiring, funding, launch, blog post), you MUST quote the exact input field you used (as evidence).
- Tone: concise, direct, competent. No hype. No jargon.
- Banned: fabricated metrics, “noticed you were…”, and any claim not grounded in inputs.

Return ONLY valid JSON matching this exact schema:
{
  "subject": "",
  "first_line": "",
  "email_body": "",
  "cta_question": "",
  "evidence": {
    "fields_used": [],
    "trigger_quote": ""
  },
  "risk_flags": {
    "missing_critical_fields": [],
    "potential_hallucination_risks": []
  }
}

# Prompt 3 (Claude / ChatGPT): QA Judge for Outbound Email (Pass/Fail + Fixes)
You are my outbound QA gate. You will receive:
- prospect inputs (JSON)
- generated email output (JSON, schema above)

Your job:
1) Validate the output schema is correct.
2) Check that every factual claim is supported by prospect inputs.
3) Score the email using the rubric:
   - Relevance (0-5)
   - Specificity (0-5)
   - Clarity (0-5)
   - Compliance safety (0-5)
   - Deliverability risk (0-5)
4) If any category <4, FAIL and propose exact edits (rewrite subject + first line + 2 sentences in body).
5) Ensure the CTA is a single question and is low-friction.

Return format (JSON):
{
  "pass": true/false,
  "scores": {"relevance":0,"specificity":0,"clarity":0,"compliance_safety":0,"deliverability_risk":0},
  "reasons": [],
  "edits": {
    "subject": "",
    "first_line": "",
    "email_body_patch": ["", ""],
    "cta_question": ""
  }
}
Hard rule: Never suggest adding facts not present in the inputs.
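
The "<4 fails" rule from step 3 is worth enforcing in your own code rather than trusting the judge's self-reported pass field. A minimal sketch (function name and shape are my own, not a Clay or Smartlead API):

```javascript
// Re-derive pass/fail from the rubric scores: any category below 4 fails,
// even if the judge model claimed pass: true.
function applyQaGate(qa) {
  const failing = Object.entries(qa.scores).filter(([, score]) => score < 4);
  return {
    pass: qa.pass && failing.length === 0,
    failingCategories: failing.map(([name]) => name),
  };
}
```

Rows that fail this gate go back through the edit-and-rerun loop described in the execution steps below.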

5. Execution Steps

Follow this sequence exactly. Don’t “optimize” until it’s running end-to-end.

  1. Define your sending posture (before data work)

    • Decide your daily send cap per inbox (start conservative; increase after deliverability is stable).
    • Decide your suppression sources (existing leads, customers, competitors, do-not-contact list).
    • Set your standard opt-out line (consistent across sequences).
  2. Create the Clay table: AI_SDR_Prospects_v1

    • Columns (minimum):
      • company_name, company_domain
      • person_first_name, person_last_name, person_title
      • person_email (or email_guess + validation)
      • linkedin_url (company + person if available)
      • company_industry, company_employee_range, company_hq_country
      • personalization_inputs (JSON string)
      • ai_email_output (JSON string)
      • qa_output (JSON string)
      • send_status (enum: queued, approved, sent, bounced, replied, booked, do_not_contact)
      • sequence_id, mailbox_pool
  3. Populate accounts

    • Use Clay’s list builders / sources (or import CSV).
    • Apply ICP rules as filters, not “vibes.”
    • Remove obvious junk: consumer domains, students, consultants (unless part of ICP).
  4. Find contacts

    • Target 1–2 personas max at first (example: VP Growth + Head of Marketing).
    • Add titles include/exclude lists from Prompt 1.
    • Enrich to get email + LinkedIn, then validate email format where possible.
  5. Enrich the minimum viable personalization fields

    • What you actually need:
      • Role + seniority signal
      • Company description (1 line)
      • One credible trigger (optional): recent post snippet, hiring signal, product page headline
    • Operator note from my Postmates era: teams over-enrich and then blame “Clay credits.” Start lean. Add fields only after you see what improves replies.
  6. Build personalization_inputs JSON in Clay

    • Deterministic mapping. Example keys:
      • first_name, title, company, domain, industry
      • company_blurb (source: website meta/Clay)
      • trigger_type (e.g., hiring, product, content) or null
      • trigger_text (verbatim snippet) or empty string
      • offer_one_liner
      • cta_link (calendar)
      • compliance_optout_line
  7. Generate email JSON via LLM (Prompt 2)

    • Run on each row.
    • Store result in ai_email_output.
    • Hard rule: if the model returns invalid JSON, mark row send_status = queued_fix_json.
  8. Run QA judge (Prompt 3)

    • Store in qa_output.
    • If pass=false, either:
      • Auto-apply the proposed edits into a new ai_email_output_v2, then re-run QA, or
      • Route to human review for the first 50 sends.
  9. Only export “approved” rows to Smartlead

    • Filter in Clay: qa_output.pass == true AND send_status == queued.
    • Export fields Smartlead needs:
      • email, first name, company, any custom vars for subject/body
      • recommended: export fully-rendered subject/body instead of “Smartlead writes it”
  10. Configure Smartlead

  • Create a campaign per persona + offer (don’t mix too many).
  • Add inbox pool(s), set sending windows, enable warmup if appropriate.
  • Paste sequences:
    • Step 1 uses AI-generated subject/body
    • Steps 2–4 can be templated and lightly personalized (role/company only)
  • Add your opt-out line consistently.
  11. Launch in controlled batches
  • Batch 1: 50–200 prospects max.
  • Watch: bounces, spam complaints, reply sentiment, Smartlead inbox health signals.
  12. Close the loop
  • Write back outcomes to Clay (reply classification + booked).
  • Update suppression list daily.
  • Feed “won replies” back into prompt improvements (Iteration Loop).
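
Steps 6 and 7 above can be sketched as a plain object plus a parse guard. Everything below is illustrative (invented prospect values, my own function name), not Clay-specific syntax:

```javascript
// Step 6: a personalization_inputs row (all values hypothetical).
const personalizationInputs = {
  first_name: "Dana",
  title: "VP Growth",
  company: "Acme Analytics",
  domain: "acme.example",
  industry: "B2B SaaS",
  company_blurb: "Acme helps ops teams automate reporting.",
  trigger_type: "hiring",
  trigger_text: "Hiring: Senior Demand Gen Manager",
  offer_one_liner: "We draft grounded first lines so SDRs skip manual research.",
  cta_link: "https://calendly.com/your-handle/15min",
  compliance_optout_line: "Reply 'no thanks' and I won't email again.",
};

// Step 7's hard rule: invalid model JSON never reaches Smartlead.
function parseModelOutput(raw) {
  try {
    return { output: JSON.parse(raw), send_status: "queued" };
  } catch {
    return { output: null, send_status: "queued_fix_json" };
  }
}
```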

Executable schema validation (optional but recommended)

If you’re generating JSON emails, validate them before Smartlead. Here’s a minimal Node script you can run locally.

// validate-email-output.js
import fs from "fs";

function isString(x) { return typeof x === "string"; }

const REQUIRED = ["subject", "first_line", "email_body", "cta_question"];
const rows = JSON.parse(fs.readFileSync(process.argv[2], "utf8"));
const bad = rows.filter((row) => !REQUIRED.every((k) => isString(row[k]) && row[k].trim() !== ""));
console.log(`${bad.length} of ${rows.length} rows failed schema validation`);
process.exit(bad.length ? 1 : 0);

Run it with node validate-email-output.js plus a path to a JSON array of generated email objects; a non-zero exit code means at least one row failed.

## Frequently Asked Questions

### How many personas should I run in parallel?

Start with one persona and one offer until you can predict deliverability and reply handling. Add a second persona only after you’ve shipped one full iteration loop with labeled outcomes.

### Should the AI write follow-ups too?

Yes, but keep follow-ups more templated than step 1. Step 1 earns relevance; steps 2–4 should be short, consistent, and low-risk for deliverability.

### Do I need intent signals in Clay for this to work?

No. Intent helps, but it’s optional early. A tight ICP, clean enrichment, and grounded personalization beat weak intent signals and sloppy copy.

### What’s the simplest way to avoid hallucinations?

Force the model to output an `evidence` object listing fields used and a verbatim `trigger_quote`. Then fail QA if the trigger quote is empty but the email references a trigger.
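
That check is small enough to automate. A sketch, assuming a short list of trigger words you maintain yourself (the list below is illustrative):

```javascript
// Fail-safe: if the copy references a trigger but evidence.trigger_quote is
// empty or missing, the row should fail QA.
const TRIGGER_WORDS = ["hiring", "funding", "launch", "your post", "announcement"];

function evidenceCheck(email) {
  const copy = `${email.first_line} ${email.email_body}`.toLowerCase();
  const mentionsTrigger = TRIGGER_WORDS.some((w) => copy.includes(w));
  const hasQuote = Boolean(email.evidence?.trigger_quote?.trim());
  return !mentionsTrigger || hasQuote; // true = safe to send
}
```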

### Should I export raw fields to Smartlead and personalize inside Smartlead?

If you want deterministic results, export pre-rendered subject/body from Clay. Smartlead templating is fine for simple variable insertion, but it’s not a QA-friendly “generation layer.”

### What deliverability guardrails matter most in week 1?

Conservative ramp, minimal links, plain-text style copy, and strict suppression. If you see bounces climbing, stop and fix list quality before touching prompts.


Ready to build your AI growth engine?

I help CEOs use AI to build the growth engine their board is asking for.

Talk to Isaac