The Vibe Marketing Playbook: Prototype, Test, Scale in Hours
Run vibe marketing as a tight, AI-led loop: spot 10–20 angles fast, build landing pages and ads in parallel, test with synthetic audiences plus small-budget live traffic, then scale only winners with clean handoffs into your growth stack. This runbook gives you deterministic prompts, schemas, QA, and fixes so it works today.
Key takeaways:
- Prototype creative + landing pages in hours with a constrained angle matrix and AI “build agents.”
- Pre-test with synthetic ICP panels to kill bad ideas before spend, then run a minimal live experiment.
- Scale with strict promotion criteria, instrumentation, and a repeatable iteration loop.
I’ve run growth where speed is a feature. At Uber, the cost of waiting was global. At Postmates, we built systems so a small team could ship growth experiments constantly. Vibe marketing is that same operator mindset, updated with AI: prototype, test, scale in hours using agents that write, design, and analyze alongside your team.
The trap is “vibes” becoming chaos. The only way vibe marketing works for a CEO, VP Growth, or growth engineer is if it’s deterministic: clear inputs, strict output formats, test design that won’t lie to you, and promotion rules that prevent shiny-object drift. This playbook is a runbook you can hand to Claude or ChatGPT, pair with a few tools (Cursor, v0/Lovable, ad platform APIs), and use to start shipping by lunchtime.
You’ll follow a Spot → Build → Test → Scale workflow. “Spot” produces a non-overlapping angle matrix. “Build” generates production-grade assets (LP + ads + email). “Test” runs synthetic ICP panels first, then a small, controlled live test with instrumentation. “Scale” turns winners into a repeatable campaign kit with guardrails.
1. Objective
Launch and validate 3–5 vibe marketing campaign candidates (angle + landing page + ads + measurement) in under 6 hours, then scale the top 1–2 with a standardized campaign kit.
2. Inputs Required
- Product basics (no numbers required)
- Product name, category, primary use case, target geos, pricing model (free trial, freemium, annual, usage-based), and top 3 differentiators.
- ICP + constraints
- ICP job titles, company size band, industries, “must-have” pains, top objections, and compliance constraints (claims you cannot make).
- Offer + funnel
- Primary conversion event (trial start, demo request, checkout), secondary event (activation milestone), and the exact URL or route you want to send traffic to (even if it’s a placeholder).
- Creative + brand inputs
- Brand voice guidelines, forbidden words/claims, logo/hex colors (if any), and 3–10 existing ads/emails/LPs to learn from (screenshots OK).
- Measurement + tracking access
- Analytics: GA4/Segment/Amplitude/Mixpanel access, plus ad accounts (Meta/Google/LinkedIn/TikTok) and a way to install pixels/conversion APIs.
- Budget + runway assumptions
- A fixed “learning budget” you are willing to burn to get signal (your call), plus the minimum acceptable conversion rate or CAC ceiling (your internal target).
- Decision owner
- One person who can approve shipping within minutes (if this is you, great).
3. Tool Stack
- LLM (strategy + copy)
- Primary: Claude 3.5 Sonnet (Alt: GPT-4.1 / o3-mini for reasoning-heavy constraints)
- LLM coding + automation
- Primary: Claude Code (Alt: Cursor + GPT-4.1, or Windsurf)
- Rapid prototyping (landing pages)
- Primary: v0 by Vercel (Alt: Lovable, Webflow, Framer)
- Design
- Primary: Figma (Alt: Canva for speed)
- Ads
- Primary: Meta Ads Manager (Alt: Google Ads, LinkedIn Campaign Manager)
- Experiment tracking
- Primary: GA4 + server-side events (Alt: Segment; Amplitude)
- Data + enrichment (optional)
- Primary: Clay (Alt: Apollo + Clearbit)
- Shipping
- Primary: Vercel (Alt: Netlify)
4. Prompt Pack
Use these prompts exactly. They are designed to be deterministic. Replace bracketed fields only.
# PROMPT 1 (Claude / ChatGPT) — Spot: Non-overlapping Angle Matrix for vibe marketing
You are my AI Growth Operator. Produce a vibe marketing angle matrix for a single product.
INPUTS (do not ask follow-ups; make reasonable assumptions and label them):
- Product: [PRODUCT_NAME]
- Category: [CATEGORY]
- ICP: [ICP_DESCRIPTION]
- Geo: [GEO]
- Pricing model: [PRICING_MODEL]
- Differentiators: [DIFF_1], [DIFF_2], [DIFF_3]
- Primary conversion event: [CONVERSION_EVENT]
- Compliance constraints: [CONSTRAINTS]
- Brand voice: [VOICE_NOTES]
- Competitors (if known): [COMPETITORS]
TASK:
1) Generate exactly 15 angles.
2) Each angle must have a UNIQUE mechanism. Add this constraint: "No two angles may share the same mechanism."
3) For each angle, provide:
- Angle name (3–6 words)
- Mechanism (one sentence, must be distinct)
- ICP segment (one of: Core / Edge / New)
- Hypothesis (one sentence: why this should work)
- Proof asset idea (what evidence would make it credible)
- 3 ad hooks (max 12 words each)
- Landing page above-the-fold copy (headline + subhead)
- Primary objection + rebuttal (one sentence each)
4) Output in JSON following this schema EXACTLY:
{
  "angles": [
    {
      "angle_name": "",
      "mechanism": "",
      "icp_segment": "Core|Edge|New",
      "hypothesis": "",
      "proof_asset_idea": "",
      "ad_hooks": ["", "", ""],
      "lp_above_fold": {"headline": "", "subhead": ""},
      "objection": "",
      "rebuttal": ""
    }
  ],
  "assumptions": ["..."]
}
QUALITY BAR:
- No generic claims like "save time" unless tied to the mechanism.
- Avoid banned claims in constraints.
- Keep language concrete and testable.
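The JSON above is the contract. Before the manual uniqueness scan in the QA rubric, you can also back the mechanism constraint with a quick script. A minimal TypeScript sketch (the shared-verb heuristic and the `Angle` shape it checks are illustrative assumptions, not part of the prompt):

```typescript
// Hypothetical helper: flags angle pairs whose mechanisms look too similar.
// Exact-duplicate detection plus a crude shared-verb heuristic; tune to taste.
interface Angle {
  angle_name: string;
  mechanism: string;
}

const GENERIC_VERBS = ["automate", "streamline", "simplify", "save"];

function normalize(text: string): string {
  return text.toLowerCase().replace(/[^a-z\s]/g, "").trim();
}

export function findOverlappingMechanisms(angles: Angle[]): string[] {
  const warnings: string[] = [];
  for (let i = 0; i < angles.length; i++) {
    for (let j = i + 1; j < angles.length; j++) {
      const a = normalize(angles[i].mechanism);
      const b = normalize(angles[j].mechanism);
      if (a === b) {
        warnings.push(`Duplicate mechanism: "${angles[i].angle_name}" and "${angles[j].angle_name}"`);
        continue;
      }
      // Both mechanisms lean on the same generic verb without a distinct "how" -- review manually.
      const sharedVerb = GENERIC_VERBS.find((v) => a.includes(v) && b.includes(v));
      if (sharedVerb) {
        warnings.push(`"${angles[i].angle_name}" and "${angles[j].angle_name}" both rely on "${sharedVerb}" -- confirm the mechanisms differ.`);
      }
    }
  }
  return warnings;
}
```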
# PROMPT 2 (Claude Code / Cursor) — Build: Generate LP + Tracking + Variant System (Next.js)
You are a senior growth engineer. Create a minimal Next.js (App Router) landing page template that supports 5 variants (one per angle), with:
- Route: /v/[variant] where [variant] is a slug
- A config file that stores each variant's headline, subhead, bullets, social proof placeholder, and CTA text
- A single CTA button that fires a "cta_click" event with variant metadata
- A form submit that fires "lead_submit" with variant metadata
- GA4 support via gtag OR a simple Segment analytics wrapper (pick ONE and implement)
- A README with exact deploy steps to Vercel
INPUTS:
- Variants JSON: paste the 5 chosen angles from Prompt 1
- Brand colors (optional): [HEX_VALUES]
- Conversion event name: [CONVERSION_EVENT]
OUTPUT:
- File tree
- Code blocks for each file
- No pseudo-code; runnable code only
- Deterministic instructions; do not offer multiple options
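For orientation, here is a minimal sketch of the kind of variant config and analytics wrapper Prompt 2 should generate. File names, the config shape, and the gtag typing are illustrative assumptions, not the exact output you will get:

```typescript
// variants.ts -- illustrative shape for the per-angle config Prompt 2 generates.
export interface VariantConfig {
  slug: string;          // used in the /v/[variant] route
  angleName: string;
  headline: string;
  subhead: string;
  bullets: string[];
  socialProof: string;   // placeholder on the first run
  ctaText: string;
}

export const variants: VariantConfig[] = [
  {
    slug: "angle-1",
    angleName: "Example angle",
    headline: "Headline from Prompt 1",
    subhead: "Subhead from Prompt 1",
    bullets: ["Bullet one", "Bullet two", "Bullet three"],
    socialProof: "Logos / quote placeholder",
    ctaText: "Start free trial",
  },
  // ...remaining four variants
];

// analytics.ts -- minimal gtag wrapper; assumes GA4 is already loaded on the page.
declare global {
  interface Window {
    gtag?: (...args: unknown[]) => void;
  }
}

export function trackEvent(
  name: "cta_click" | "lead_submit",
  params: { variant: string; angle_name: string }
): void {
  if (typeof window !== "undefined" && window.gtag) {
    window.gtag("event", name, params);
  }
}
```

Wiring trackEvent("cta_click", { variant, angle_name }) to the CTA keeps the variant parameter on every event, which is exactly what the instrumentation gate in the execution steps verifies.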
# PROMPT 3 (Claude / ChatGPT) — Test: Synthetic ICP Panel + Kill/Keep Decisions
You are my experiment analyst. Evaluate 5 campaign variants using a synthetic ICP panel.
INPUTS:
- Product summary: [PASTE]
- The 5 variants: [PASTE JSON SNIPPET]
- Funnel step: cold ad -> landing page -> [CONVERSION_EVENT]
- Constraints: [PASTE]
PANEL SETUP (must follow exactly):
- Create 12 synthetic panelists:
- 6 = Core ICP
- 4 = Edge ICP
- 2 = Skeptics (would resist buying)
- Each panelist must have: title, company type, current workflow, top KPI, and 1 strong bias.
TASK:
For each variant:
1) Score on a 1–5 scale for: Clarity, Credibility, Relevance, Differentiation, Motivation to click.
2) Provide top 3 "verbatim-style" reactions (1 sentence each) from different panelists.
3) Identify the #1 confusion point and the fix (copy-level).
4) Decide: KILL / ITERATE / SHIP to live test.
5) If ITERATE: provide revised headline + one revised hook.
OUTPUT FORMAT:
Return strict JSON:
{
  "panelists": [...],
  "variant_reviews": [
    {
      "variant": "",
      "scores": {"clarity": 0, "credibility": 0, "relevance": 0, "differentiation": 0, "motivation": 0},
      "verbatims": ["", "", ""],
      "confusion": "",
      "fix": "",
      "decision": "KILL|ITERATE|SHIP",
      "revisions_if_iterate": {"headline": "", "hook": ""}
    }
  ],
  "overall_recommendation": {
    "ship_variants": ["", ""],
    "iterate_variants": ["", ""],
    "kill_variants": ["", ""]
  }
}
# PROMPT 4 (Claude / ChatGPT) — Scale: Campaign Kit + Handoff to Ads + CRM
You are my VP Growth. Turn the top 1–2 winning variants into a scale-ready campaign kit.
INPUTS:
- Winning variant(s): [PASTE]
- Channel: [Meta|Google|LinkedIn]
- Offer + CTA: [PASTE]
- Constraints: [PASTE]
- Tracking: GA4 events "cta_click" and "lead_submit" with variant
OUTPUT (markdown, strict sections):
1) Targeting thesis (3 bullets)
2) Creative pack:
- 6 primary texts
- 6 headlines
- 6 descriptions (if channel supports)
- 6 image concepts (plain English)
3) Landing page iteration notes (max 8 bullets)
4) Measurement plan:
- event mapping table
- naming conventions for campaigns/ad sets/ads
5) Launch checklist (15 items)
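To make the naming conventions in the measurement plan deterministic, a small helper can build campaign, ad set, and ad names from the same parts. This sketch assumes a `{product}_{channel}_vibe_{date}_{variant}` pattern, which is one example convention, not a platform requirement:

```typescript
// Hypothetical naming helper: one convention applied to campaign, ad set, and ad names
// so live results can be joined back to variants without guesswork.
interface NameParts {
  product: string;   // e.g. "acmeapp"
  channel: "meta" | "google" | "linkedin";
  variant: string;   // the /v/[variant] slug
  date: string;      // e.g. "2026-02-19"
}

export function campaignName(p: NameParts): string {
  return [p.product, p.channel, "vibe", p.date].join("_");
}

export function adSetName(p: NameParts): string {
  return [campaignName(p), p.variant].join("_");
}

export function adName(p: NameParts, creativeIndex: number): string {
  return [adSetName(p), `cr${creativeIndex}`].join("_");
}
```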
5. Execution Steps
- Freeze the decision surface (15 minutes).
- Pick: one funnel event, one geo, one ICP definition, one offer.
- Write down forbidden claims (compliance, brand, legal).
- Spot: generate angles (20 minutes).
- Run Prompt 1.
- Select 5 angles that are genuinely different. Enforce the mechanism constraint. If two angles feel similar, delete one and rerun for replacements.
- Build: ship a variantable landing page (60–90 minutes).
- Paste the 5 chosen angles into Prompt 2 in Claude Code or Cursor.
- Deploy to Vercel.
- Confirm routes load:
/v/angle-1, /v/angle-2, etc.
- Instrument: verify events (20 minutes).
- Open GA4 Realtime.
- Click the CTA on each variant and submit the form once. Confirm the variant parameter is present.
- Synthetic test: kill bad variants before paid spend (30 minutes).
- Run Prompt 3 with your 5 variants.
- Apply the promotion rule: only SHIP variants with average score ≥ 4.0 across the five dimensions OR with “Differentiation” ≥ 4 and at least one strong verbatim indicating desire (this rule is sketched as code after this list).
- Live test setup: minimal spend, clean read (60 minutes).
- Create one campaign per variant (or one campaign with separate ad sets per variant).
- Use identical targeting across variants.
- Use identical budget across variants.
- Ensure ad → LP URL includes the variant slug.
- Run the test until you have directional signal (same-day or next-day).
- Your stop condition should be deterministic: “stop when each variant has at least N clicks” (pick N that matches your budgets; I often start with 50–100 clicks per variant for a first directional read, depending on CPC and funnel friction). The same sketch after this list includes this check.
- Decide and scale (45 minutes).
- Promote 1–2 winners into Prompt 4.
- Create the campaign kit, naming conventions, and a measurement plan.
- Lock the workflow into a repeatable weekly cadence.
- Add an “Angle Matrix” doc, a “Variants config” file, and a “Test log” sheet your team updates every run.
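The promotion rule and stop condition above translate directly into code. A minimal sketch, assuming you tag the “strong verbatim” judgment manually, since it is not a field in the Prompt 3 schema:

```typescript
// Sketch of the promotion rule and stop condition; thresholds mirror the rules above
// (average score >= 4.0, or Differentiation >= 4 plus a strong desire verbatim).
interface VariantReview {
  variant: string;
  scores: {
    clarity: number;
    credibility: number;
    relevance: number;
    differentiation: number;
    motivation: number;
  };
  hasStrongDesireVerbatim: boolean; // manual judgment, not in the Prompt 3 schema
}

export function shouldShip(review: VariantReview): boolean {
  const s = review.scores;
  const avg = (s.clarity + s.credibility + s.relevance + s.differentiation + s.motivation) / 5;
  return avg >= 4.0 || (s.differentiation >= 4 && review.hasStrongDesireVerbatim);
}

interface LiveResult {
  variant: string;
  clicks: number;
}

// Deterministic stop condition: every variant has accumulated at least minClicks.
export function readyToDecide(results: LiveResult[], minClicks: number): boolean {
  return results.length > 0 && results.every((r) => r.clicks >= minClicks);
}
```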
6. Output Schema
Use this schema to store each run in a single file (so you can diff runs and avoid repeated mistakes).
{
  "run_id": "2026-02-19_vibe_marketing_run_001",
  "product": {
    "name": "string",
    "category": "string",
    "geo": ["string"],
    "pricing_model": "string",
    "differentiators": ["string", "string", "string"]
  },
  "constraints": {
    "forbidden_claims": ["string"],
    "brand_voice": "string"
  },
  "angle_matrix": {
    "angles": [
      {
        "angle_name": "string",
        "mechanism": "string",
        "icp_segment": "Core",
        "hypothesis": "string",
        "proof_asset_idea": "string",
        "ad_hooks": ["string", "string", "string"],
        "lp_above_fold": { "headline": "string", "subhead": "string" },
        "objection": "string",
        "rebuttal": "string"
      }
    ],
    "selected_variants": ["variant_slug_1", "variant_slug_2", "variant_slug_3", "variant_slug_4", "variant_slug_5"]
  },
  "build": {
    "repo_url": "string",
    "vercel_url": "string",
    "routes": ["/v/variant_slug_1"],
    "events": [
      { "name": "cta_click", "params": ["variant", "angle_name"] },
      { "name": "lead_submit", "params": ["variant", "angle_name"] }
    ]
  },
  "synthetic_test": {
    "panelists": [{ "title": "string", "company_type": "string", "workflow": "string", "top_kpi": "string", "bias": "string" }],
    "variant_reviews": [
      {
        "variant": "string",
        "scores": { "clarity": 0, "credibility": 0, "relevance": 0, "differentiation": 0, "motivation": 0 },
        "decision": "KILL"
      }
    ],
    "ship_variants": ["string"]
  },
  "live_test": {
    "channel": "Meta",
    "campaigns": [
      {
        "variant": "string",
        "campaign_name": "string",
        "ad_set_name": "string",
        "ad_name": "string",
        "budget_per_day": "string",
        "targeting_notes": "string",
        "status": "planned|running|stopped"
      }
    ],
    "results_snapshot": [
      {
        "variant": "string",
        "clicks": 0,
        "leads": 0,
        "primary_cvr": 0.0,
        "notes": "string"
      }
    ],
    "winner_variants": ["string"]
  },
  "scale_kit": {
    "winning_variant": "string",
    "creative_pack": { "primary_texts": ["string"], "headlines": ["string"] },
    "measurement_plan": { "naming_convention": "string" }
  }
}
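One simple way to keep each run in a single, diffable file, assuming a Node environment and a `runs/` directory in the same repo (both are assumptions):

```typescript
// saveRun.ts -- writes one run object to runs/<run_id>.json so runs can be diffed in git.
import { writeFileSync, mkdirSync } from "node:fs";
import { join } from "node:path";

export function saveRun(run: { run_id: string } & Record<string, unknown>): string {
  const dir = "runs";
  mkdirSync(dir, { recursive: true });
  const path = join(dir, `${run.run_id}.json`);
  writeFileSync(path, JSON.stringify(run, null, 2) + "\n");
  return path;
}
```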
7. QA Rubric
| Area | Check | Criteria | Score (0–5 or Pass/Fail) |
|---|---|---|---|
| Angle Matrix | Mechanism uniqueness | No two angles share the same mechanism (manual scan) | 0–5 |
| Angle Matrix | Testability | Each angle has a falsifiable hypothesis + proof asset | 0–5 |
| Copy | Specificity | Headlines/hooks are concrete, avoid generic “save time” unless mechanism explains how | 0–5 |
| Compliance | Constraint adherence | No forbidden claims, no unverifiable guarantees | Pass/Fail |
| Build | Variant routing | /v/[variant] works for all 5 variants | Pass/Fail |
| Tracking | Event integrity | cta_click and lead_submit fire with variant parameter | Pass/Fail |
| Synthetic Test | Panel realism | Panelists include workflow + KPI + bias; includes skeptics | 0–5 |
| Decisioning | Promotion rules | SHIP/ITERATE/KILL decisions align with rubric thresholds | 0–5 |
| Live Test | Fair test setup | Targeting and budgets identical across variants | Pass/Fail |
| Scale | Handoff completeness | Naming + measurement + checklist provided | 0–5 |
Minimum bar to proceed to live test: Compliance Pass + Build Pass + Tracking Pass + Mechanism uniqueness score ≥ 4.
Minimum bar to scale: Live test fairness Pass + at least one variant shows clear advantage on your primary metric (your internal target).
8. Failure Modes
- Angles converge into the same idea (“save time with AI” five ways).
Fix: Re-run Spot with the hard constraint already included: “No two angles may share the same mechanism.” Then ban any mechanism that uses the same verbs (automate, streamline, simplify) without a different “how.”
- Synthetic panel says everything is “fine,” nothing stands out.
I saw this pattern when teams asked for “feedback” instead of forced choice.
Fix: Add forced ranking: “Rank the 5 variants by motivation to click; ties not allowed.” Also increase skeptics from 2 to 4 if you’re selling into a cynical buyer.
- LP variants ship, but results are unreadable because tracking is broken.
Fix: Make tracking verification a hard gate. Do not launch ads until you see events in GA4 Realtime with the variant param.
- Ad platform learning confounds variant comparison.
Common on Meta if you mix audiences or optimization events.
Fix: Hold audience constant. Hold optimization constant. Separate ad sets per variant so spend allocation does not drift invisibly.
- You scale a “clicky” angle that doesn’t convert downstream.
I’ve made this mistake personally. CTR dopamine is real.
Fix: Define primary success as your conversion event (trial/demo/checkout). Use CTR only as a diagnostic, not a winner selector.
- Teams overbuild instead of prototyping.
Fix: Use the provided Next.js template and placeholders for proof assets. Your first run needs signal, not polish.
- Compliance risk from AI-generated claims.
Fix: Put constraints into every prompt. Add a final compliance scan step: “List every claim that implies guaranteed outcomes; rewrite as conditional.”
9. Iteration Loop
- After each run, update your “Mechanism Library.”
Keep a list of mechanisms that worked (and failed). Next run starts with new mechanisms only.
- Promote winners by mechanism, not by copy.
Copy can be rewritten fast. Mechanism is the unit of durable differentiation.
- Replace only one variable per iteration.
- If the mechanism is strong but conversion is weak, iterate landing page proof and objection handling.
- If conversion is strong but CPC is high, iterate hooks and creative formats.
- Add proof assets incrementally.
- Run 1: placeholder proof.
- Run 2: screenshot, short demo GIF, or a mini case study snippet (even if anonymized).
- Automate the boring parts.
Once you have one working repo, add a script that ingests the angle JSON and outputs variant config + ad naming strings (a sketch follows below). Your second run should be faster than your first.
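A hedged sketch of that script, assuming the Prompt 1 output is saved as `angles.json` and reusing the naming pattern from the scale kit; file names, the CTA placeholder, and the slug rule are illustrative:

```typescript
// buildVariants.ts -- reads the Prompt 1 angle JSON and emits a variant config
// plus ad naming strings, so run two starts from artifacts instead of copy-paste.
import { readFileSync, writeFileSync } from "node:fs";

interface Angle {
  angle_name: string;
  ad_hooks: string[];
  lp_above_fold: { headline: string; subhead: string };
}

const slugify = (name: string): string =>
  name.toLowerCase().replace(/[^a-z0-9]+/g, "-").replace(/(^-|-$)/g, "");

const { angles } = JSON.parse(readFileSync("angles.json", "utf8")) as { angles: Angle[] };
const date = new Date().toISOString().slice(0, 10);

const variants = angles.map((a) => ({
  slug: slugify(a.angle_name),
  angleName: a.angle_name,
  headline: a.lp_above_fold.headline,
  subhead: a.lp_above_fold.subhead,
  ctaText: "Start free trial", // placeholder -- set per offer
  adSetName: `product_meta_vibe_${date}_${slugify(a.angle_name)}`,
}));

writeFileSync("variants.generated.json", JSON.stringify(variants, null, 2) + "\n");
console.log(`Wrote ${variants.length} variants to variants.generated.json`);
```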
Frequently Asked Questions
What makes vibe marketing different from “just making more ads with AI”?
Vibe marketing is a controlled loop: non-overlapping angles, fast builds, synthetic pre-tests, then a fair live test with promotion rules. The difference is the system, not the volume.
Do synthetic ICP panels actually predict performance?
They predict confusion and credibility failures well, which saves time. They do not replace live tests; they reduce wasted spend by killing variants that are clearly unclear or untrustworthy.
How many variants should I ship on the first run?
Start with 5. That’s enough diversity to learn without blowing up your build and tracking surface area.
Should I test multiple channels at once?
No for the first run. Pick one channel where you can buy impressions today, then port the winning mechanism to other channels after you have a baseline.
What’s the fastest way to get landing pages live?
v0/Lovable for the UI, then a Next.js template with a variant config file so you can change messaging without rebuilding. The key is keeping tracking identical across variants.
Who should own this workflow: growth engineering or marketing?
One owner, cross-functional inputs. In practice, you want a growth engineer to own routing/tracking and a marketer to own angle quality and creative volume.
Ready to build your AI growth engine?
I help CEOs use AI to build the growth engine their board is asking for.
Talk to Isaac