AI Prompts for Google Ads Optimization
Use this runbook to ship an AI-led Google Ads optimization loop in a single working session: extract your search terms + asset performance, have Claude/ChatGPT propose negatives, RSA assets, and budget/bid actions in a strict schema, then push changes through Google Ads Editor or the Google Ads API with QA gates.
Key takeaways:
- AI for Google Ads works best when you constrain it with your account structure, conversion definitions, and a fixed output schema.
- Start with “no-regret” moves: negatives from search terms, RSA asset refresh, and landing-page-to-ad-message alignment.
- Automate safely: staged rollouts, change logs, and deterministic QA checks prevent account damage.
I’ve run Google Ads spend at scales where “a small mistake” turns into a big bill fast. At Uber and Postmates, the only way we moved quickly without breaking things was rigid process: tight inputs, deterministic outputs, and a short feedback loop. That’s the missing piece in most teams trying AI for Google Ads: they paste a vague prompt, get a clever answer, then manually cherry-pick changes without consistent QA.
This page is a deployable runbook. You’ll pull the exact exports that matter (search terms, RSA asset performance, auction insights, landing page content, and conversion definitions). Then you’ll run a prompt pack that forces the model to output: (1) negative keyword recommendations with match types and rationale, (2) RSA headlines/descriptions mapped to intents and compliance constraints, (3) bid/budget actions with guardrails, and (4) an execution plan you can apply in Google Ads Editor or via the API.
If you do this cleanly once, you can rerun it weekly with the same structure and get consistent improvements without “random AI creativity” sneaking into your account.
1. Objective
Produce a deterministic set of Google Ads optimizations (negatives, RSA assets, and bid/budget actions) generated by AI and packaged for immediate implementation via Google Ads Editor or the Google Ads API.
2. Inputs Required
- Google Ads account access with permission to:
- View and download Search terms, Keywords, Ads & assets, Campaigns, Ad groups
- View conversion actions and attribution settings
- Exports (CSV or Google Sheets) for the last 14–30 days:
- Search terms report (at minimum: Search term, Campaign, Ad group, Match type, Clicks, Cost, Conversions, Conv. value if applicable)
- RSA asset performance (Headlines/Descriptions with performance labels and/or impressions)
- Keyword report (Keyword, Match type, Status, Final URL if used, Quality signals if available)
- Auction insights (optional but recommended)
- Landing page inputs:
- Final URLs per ad group (or per campaign if shared)
- The page content (either scraped text or pasted)
- Conversion definition:
- Primary conversion action(s) and what “good” means (lead, purchase, trial start)
- Any offline conversion import or enhanced conversions status (just describe; no stats needed)
- Constraints you must state explicitly:
- Geo, language, brand safety constraints
- Policy/compliance constraints (health, finance, legal claims, restricted terms)
- Budget guardrails (max daily change, max CPC change, or “no bidding changes in Wave 1”)
Assumptions (make them true before you run it):
- Your conversion tracking is functioning (recent conversions recorded).
- Your account naming conventions are stable (campaign/ad group names mean something).
- You will apply changes in waves with a rollback plan.
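Before pasting anything into a prompt, run a quick pre-flight on the export itself. This is a sketch, assuming the lowercase CSV headers specified in Prompt 1 (rename the columns if your export uses Google's display labels); it fails fast on missing columns and warns when the window has zero recorded conversions:

```python
import pandas as pd

# Columns Prompt 1 expects; conv_value is optional so it is not required here.
REQUIRED = {"search_term", "campaign", "ad_group", "match_type",
            "clicks", "cost", "conversions"}

def preflight(path: str) -> pd.DataFrame:
    """Fail fast if the Search Terms export can't support the prompt pack."""
    df = pd.read_csv(path)
    missing = REQUIRED - set(df.columns)
    if missing:
        raise ValueError(f"export missing columns: {sorted(missing)}")
    if df["conversions"].sum() == 0:
        # Assumption from §2: tracking must be working before Wave 1 bidding moves.
        print("WARNING: zero conversions in window; limit Wave 1 to negatives + RSA")
    return df
```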
3. Tool Stack
Core
- LLM: Claude 3.5 Sonnet or ChatGPT (GPT-4.1 / GPT-4o)
- Alternative: Gemini 1.5 Pro (works fine for summarization + clustering)
- Execution:
- Google Ads Editor (fast manual bulk apply)
- Google Ads API (automated apply + logging)
Data handling
- Google Sheets + Apps Script
- Alternative: Excel + Power Query
- Lightweight scripting:
- Python (pandas)
- Alternative: Node.js
Optional but high-value
- Cursor (or VS Code) to run the API scripts safely
- Alternative: Google Colab (if you want “no local setup”)
- Landing page text extraction:
- Python (trafilatura or readability-lxml)
- Alternative: copy/paste page copy directly
4. Prompt Pack
Use these prompts exactly. Do not “improve” them until you’ve run the workflow once end-to-end.
# PROMPT 1 (Claude/ChatGPT): Search Terms → Negatives + New Ad Group Ideas (Strict JSON)
You are my Google Ads optimization engine. Your job: propose negative keywords (with match type) and optional new ad group themes from a Search Terms report.
CONTEXT (read carefully):
- Primary conversion: {{PRIMARY_CONVERSION_DEFINITION}}
- Geo: {{GEO}}
- Language: {{LANGUAGE}}
- Offer + pricing constraints: {{OFFER_CONSTRAINTS}}
- Compliance/policy constraints (absolute): {{COMPLIANCE_CONSTRAINTS}}
- Brand terms to protect: {{BRAND_TERMS}}
- Competitors to exclude/bid on: {{COMPETITOR_POLICY}}
- Current account structure notes: {{STRUCTURE_NOTES}}
INPUT DATA:
I will paste rows from a Search Terms report as CSV with headers:
search_term,campaign,ad_group,match_type,clicks,cost,conversions,conv_value
TASK:
1) Identify irrelevant or low-intent queries that should be added as negatives.
2) Choose match type for each negative: exact, phrase, broad (explain why).
3) Assign a placement level: ad_group or campaign.
4) Also list "new ad group opportunities" for high-intent search terms that don't fit current themes (max 10).
OUTPUT REQUIREMENTS:
- Output ONLY valid JSON matching the schema below.
- Do NOT include any commentary outside JSON.
- Do NOT invent performance numbers. Use only the provided fields.
- Negatives must be specific; avoid blocking core intent.
- If uncertain, mark as "needs_human_review": true.
JSON SCHEMA:
{
"negatives": [
{
"negative_keyword": "string",
"match_type": "exact|phrase|broad",
"level": "campaign|ad_group",
"campaign": "string or null",
"ad_group": "string or null",
"reason": "string",
"evidence_rows": [
{"search_term":"string","clicks":0,"cost":0.0,"conversions":0,"campaign":"string","ad_group":"string"}
],
"risk": "low|medium|high",
"needs_human_review": true
}
],
"new_ad_group_opportunities": [
{
"theme": "string",
"suggested_campaign": "string",
"seed_terms": ["string"],
"reason": "string"
}
]
}
Now wait. I will paste the CSV rows next.
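Don’t eyeball the JSON that comes back — gate it mechanically before anything touches the account. A minimal validator, assuming the schema above; extend the checks each time QA rejects an output:

```python
import json

ALLOWED_MATCH = {"exact", "phrase", "broad"}
ALLOWED_LEVEL = {"campaign", "ad_group"}

def validate_negatives(raw: str) -> list[dict]:
    """Reject Prompt 1 output that is invalid JSON or breaks the schema rules."""
    data = json.loads(raw)  # raises if the model added prose outside the JSON
    problems = []
    for i, n in enumerate(data.get("negatives", [])):
        if n.get("match_type") not in ALLOWED_MATCH:
            problems.append(f"row {i}: bad match_type {n.get('match_type')!r}")
        if n.get("level") not in ALLOWED_LEVEL:
            problems.append(f"row {i}: bad level {n.get('level')!r}")
        if not n.get("evidence_rows"):
            problems.append(f"row {i}: no evidence rows")
        # Single generic words are the riskiest negatives; force human review.
        if len(str(n.get("negative_keyword", "")).split()) == 1 and not n.get("needs_human_review"):
            problems.append(f"row {i}: single-word negative without needs_human_review")
    if problems:
        raise ValueError("; ".join(problems))
    return data["negatives"]
```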
# PROMPT 2 (Claude/ChatGPT): Landing Page + Intent Map → RSA Assets (Headlines/Descriptions)
You are writing RSA assets for Google Search Ads. Your output must be compliant, specific, and mapped to search intent.
CONTEXT:
- Product/service: {{PRODUCT}}
- Differentiators (truthful, provable): {{DIFFERENTIATORS}}
- What we will NOT claim: {{NO_CLAIMS_LIST}}
- Compliance constraints: {{COMPLIANCE_CONSTRAINTS}}
- Target personas: {{PERSONAS}}
- Funnel stage: {{STAGE}} (e.g., bottom-funnel high intent)
- Account structure:
- Campaign: {{CAMPAIGN}}
- Ad group: {{AD_GROUP}}
- Keyword themes: {{KEYWORD_THEMES}}
- Top converting queries (if known): {{TOP_QUERIES}}
LANDING PAGE TEXT:
{{PASTE_LANDING_PAGE_TEXT}}
TASK:
Generate:
- 15 RSA headlines (max 30 characters each)
- 4 RSA descriptions (max 90 characters each)
- 8 callouts (short, max 25 characters)
- 4 sitelinks with 2 description lines each (each line max 35 chars)
RULES:
- Every asset must be consistent with the landing page text.
- Avoid banned claims in NO_CLAIMS_LIST.
- Do not use excessive punctuation.
- Include at least 3 assets that reflect pricing/terms *only if present on the page*.
- Include at least 3 assets that address objections (shipping, setup, cancellation, support) only if truthful.
OUTPUT FORMAT:
Return a Markdown table with columns:
asset_type | text | character_count | intent_bucket | notes
Intent buckets: "price", "feature", "trust", "speed", "support", "comparison", "use_case"
Now generate the assets.
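Lint every generated asset against the character limits in the RULES above before upload. A small helper — the `forbidden` tuple is your NO_CLAIMS_LIST, lowercased:

```python
def lint_assets(headlines: list[str], descriptions: list[str],
                forbidden: tuple[str, ...] = ()) -> list[str]:
    """Return a list of violations; empty list means the assets pass."""
    errors = []
    for h in headlines:
        if len(h) > 30:  # RSA headline limit
            errors.append(f"headline too long ({len(h)}): {h!r}")
    for d in descriptions:
        if len(d) > 90:  # RSA description limit
            errors.append(f"description too long ({len(d)}): {d!r}")
    for text in [*headlines, *descriptions]:
        lowered = text.lower()
        for term in forbidden:
            if term in lowered:
                errors.append(f"forbidden term {term!r} in {text!r}")
    return errors
```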
# PROMPT 3 (Claude/ChatGPT): Bid/Budget Actions with Guardrails (No Surprise Moves)
You are advising bid and budget actions for Google Ads with strict safety constraints.
INPUTS:
- Campaign performance summary (paste as table):
campaign | channel | bidding | budget_per_day | cost | conversions | conv_value | CPA | ROAS | impression_share | search_lost_is_budget | search_lost_is_rank
- Constraints:
- Max budget change per day per campaign: {{MAX_BUDGET_CHANGE_RULE}}
- Max bid strategy change frequency: {{BID_STRATEGY_CHANGE_RULE}}
- Do not change more than {{MAX_CAMPAIGNS_TOUCHED}} campaigns in Wave 1
- If data is insufficient, recommend "no change" and explain what to measure next
TASK:
1) Pick Wave 1 actions: the smallest set of changes that reduce waste and increase high-intent coverage.
2) Provide exact recommended new budgets (or keep) and justify using the provided fields only.
3) If recommending negatives or RSA refresh only, say so clearly.
OUTPUT:
Return YAML with:
wave_1:
- campaign:
action_type: ["budget_increase","budget_decrease","hold","bidding_hold","bidding_change"]
current_budget_per_day:
proposed_budget_per_day:
rationale:
risk: low|medium|high
rollback_plan:
wave_2:
gating_criteria:
candidate_actions:
measurement_plan:
primary_kpi:
secondary_kpis:
checks:
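Enforce the guardrails mechanically rather than trusting the model to respect them. This sketch assumes the wave plan has already been parsed into a list of dicts (e.g. via PyYAML) and that your budget rule is a percentage cap — swap both assumptions for your real constraints:

```python
def check_wave(wave: list[dict], max_change_pct: float = 20.0,
               max_campaigns: int = 3) -> list[str]:
    """Return guardrail violations for a Wave 1 plan; empty means safe to proceed."""
    violations = []
    # "hold" actions don't count against the touched-campaign cap.
    touched = [a for a in wave if a.get("action_type") not in ("hold", "bidding_hold")]
    if len(touched) > max_campaigns:
        violations.append(f"wave touches {len(touched)} campaigns (max {max_campaigns})")
    for a in wave:
        cur = a.get("current_budget_per_day")
        new = a.get("proposed_budget_per_day")
        if cur and new and cur > 0:
            pct = abs(new - cur) / cur * 100
            if pct > max_change_pct:
                violations.append(
                    f"{a.get('campaign')}: {pct:.0f}% budget change exceeds {max_change_pct}%")
    return violations
```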
# PROMPT 4 (Optional, Growth Engineer): Google Ads API Change Plan (Patch List)
You are generating an API-ready change plan for Google Ads based on decisions already made.
INPUTS:
- Approved negatives JSON from Prompt 1
- Approved assets table from Prompt 2
- Approved wave plan YAML from Prompt 3
- Customer ID: {{CUSTOMER_ID}}
- Dry run: true
TASK:
Create a "patch list" JSON describing:
- negative keyword creation operations
- ad asset creation operations (RSA text assets)
- budget update operations
OUTPUT:
Only JSON:
{
"customer_id": "...",
"dry_run": true,
"operations": [
{"type":"create_negative","level":"campaign|ad_group","resource_names":{},"payload":{}},
{"type":"create_rsa_assets","ad_group_resource_name":"...","payload":{}},
{"type":"update_campaign_budget","campaign_resource_name":"...","payload":{}}
]
}
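The gating structure of a dry-run-first applier looks like this. The live branch is deliberately left unimplemented — wire it to the official google-ads client library yourself, since the mutate call signatures depend on your client version:

```python
import json

def apply_patch_list(plan: dict, live: bool = False) -> list[str]:
    """Walk the Prompt 4 patch list; log in dry-run, apply only when live=True."""
    log = []
    for op in plan["operations"]:
        line = f"{op['type']} {json.dumps(op.get('payload', {}), sort_keys=True)}"
        if live:
            # Connect this branch to the google-ads client library (not shown).
            raise NotImplementedError("wire to Google Ads API client before live runs")
        log.append("DRY RUN " + line)
    return log
```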
5. Execution Steps
- Lock scope (Wave 1). Pick 1–3 campaigns you can safely touch today. In my experience, broad “fix everything” passes internal review and fails in production because you can’t isolate causality.
- Export Search Terms (14–30 days).
- Google Ads → Insights and reports → Search terms
- Include: Search term, Campaign, Ad group, Match type, Clicks, Cost, Conversions
- Export RSA asset performance.
- Ads & assets → Assets → filter by campaign/ad group
- Export headlines/descriptions and any performance labels Google provides.
- Capture landing page text per ad group.
- Minimum: paste the hero, pricing, FAQs, and key sections into a doc.
- Better: scrape and clean the page text via Python (snippet below).
- Run Prompt 1 with your Search Terms CSV.
- Apply the Negative QA gate (before adding anything).
- Campaign-level negatives must not block your core intent.
- If a negative could block a valuable variant, downgrade to ad-group-level or mark “needs_human_review”.
- Run Prompt 2 for each priority ad group.
- If you have headline performance data: replace only the bottom-performing headlines; keep proven assets. If you don’t have asset-level performance, replace a small subset and keep the rest stable.
- Run Prompt 3 using campaign summary.
- If you’re not confident in conversion tracking, do “negatives + RSA only” in Wave 1.
- Implement changes.
- Fast path: Google Ads Editor bulk upload negatives and new RSA assets.
- Automation path: generate a patch list and apply via Google Ads API in dry-run, then live-run.
- Log every change.
- Change log sheet: date, campaign/ad group, change type, previous value, new value, owner, link to export.
- Measurement window.
- Use the same date windows for before/after comparisons and annotate the change date in Google Ads.
- Wave 2 only after gates pass.
- Expand to more campaigns or consider bidding strategy changes only after you see stable tracking and no obvious query-quality regressions.
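The change-log step above can be a one-function CSV appender (column order matches the sheet described in the steps; add a link column if you store export URLs):

```python
import csv
from datetime import date

def log_change(path: str, campaign: str, change_type: str,
               prev: str, new: str, owner: str) -> None:
    """Append one change-log row: date, campaign, type, previous, new, owner."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), campaign,
                                change_type, prev, new, owner])
```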
Executable code: landing page text extraction (Python)
Use this if you want consistent input text for Prompt 2.
# pip install trafilatura requests
import requests
import trafilatura

def fetch_clean_text(url: str) -> str:
    html = requests.get(url, timeout=20, headers={"User-Agent": "Mozilla/5.0"}).text
    downloaded = trafilatura.extract(html, include_comments=False, include_tables=False)
    return downloaded or ""

if __name__ == "__main__":
    url = "https://example.com/landing"
    text = fetch_clean_text(url)
    print(text[:2000])
Executable code: deterministic clustering of search terms (optional helper)
This makes Prompt 1 more consistent by pre-grouping queries.
# pip install pandas scikit-learn
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
df = pd.read_csv("search_terms.csv")
terms = df["search_term"].astype(str).fillna("")
vec = TfidfVectorizer(ngram_range=(1,2), min_df=2, stop_words="english")
X = vec.fit_transform(terms)
k = 20 # fixed k for determinism; adjust once, then keep constant
model = KMeans(n_clusters=k, random_state=42, n_init=10)
df["cluster"] = model.fit_predict(X)
df.to_csv("search_terms_clustered.csv", index=False)
print(df.groupby("cluster")["search_term"].head(5))
6. Output Schema
Your workflow produces three artifacts. Store them in versioned files (date-stamped) so you can diff week over week.
{
"run_metadata": {
"date": "2026-02-19",
"account": "customer_id_or_name",
"window": "last_30_days",
"wave": "wave_1",
"owner": "name"
},
"negatives": [
{
"negative_keyword": "free",
"match_type": "phrase",
"level": "campaign",
"campaign": "Search - NonBrand - Core",
"ad_group": null,
"reason": "Filters low-intent 'free' queries that spend without converting",
"evidence_rows": [
{
"search_term": "free [product] template",
"clicks": 12,
"cost": 84.32,
"conversions": 0,
"campaign": "Search - NonBrand - Core",
"ad_group": "Templates"
}
],
"risk": "medium",
"needs_human_review": true
}
],
"rsa_assets": [
{
"campaign": "Search - NonBrand - Core",
"ad_group": "Templates",
"headlines": [
{"text": "Template Library", "intent_bucket": "feature"},
{"text": "Fast Setup", "intent_bucket": "speed"}
],
"descriptions": [
{"text": "Browse templates and publish in minutes.", "intent_bucket": "speed"}
],
"callouts": ["No Code", "Live Support"],
"sitelinks": [
{
"text": "Pricing",
"line1": "See plan options",
"line2": "Pick what fits"
}
]
}
],
"wave_plan": {
"wave_1": [
{
"campaign": "Search - NonBrand - Core",
"action_type": "hold",
"current_budget_per_day": 500,
"proposed_budget_per_day": 500,
"rationale": "Wave 1 focuses on negatives + RSA refresh first",
"risk": "low",
"rollback_plan": "Revert negatives added on 2026-02-19 via change history"
}
],
"measurement_plan": {
"primary_kpi": "conversions",
"secondary_kpis": ["CPA", "conv_rate", "search_term_relevance"],
"checks": ["No drop in brand coverage", "No spike in irrelevant query share"]
}
}
}
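A date-stamped writer keeps these artifacts diffable week over week. The file-naming convention here is an assumption — use whatever your team already versions by:

```python
import json
from pathlib import Path

def save_artifact(artifact: dict, out_dir: str = "runs") -> Path:
    """Write the run artifact to a date-stamped JSON file for later diffing."""
    meta = artifact["run_metadata"]
    path = Path(out_dir) / f"{meta['date']}_{meta['wave']}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    # sort_keys makes week-over-week text diffs stable
    path.write_text(json.dumps(artifact, indent=2, sort_keys=True))
    return path
```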
7. QA Rubric
| Area | Check | Pass/Fail Rule | Score (0–2) |
|---|---|---|---|
| Negatives safety | No core-intent blocking | Fail if any negative blocks top keyword themes | 0/1/2 |
| Negatives precision | Specificity | Fail if >20% of negatives are overly broad (e.g., single generic word) without “needs_human_review” | 0/1/2 |
| Evidence linkage | Evidence rows included | Fail if any negative lacks at least 1 evidence row | 0/1/2 |
| RSA compliance | Claims match landing page | Fail if any asset contains disallowed claims per your constraints | 0/1/2 |
| RSA formatting | Character limits respected | Fail if any headline >30 chars or description >90 chars | 0/1/2 |
| Intent coverage | Assets map to intents | Pass if each ad group has assets across at least 3 intent buckets | 0/1/2 |
| Wave plan safety | Change scope controlled | Fail if Wave 1 touches more campaigns than your constraint | 0/1/2 |
| Determinism | Output matches schema | Fail if JSON/YAML invalid or includes extra prose | 0/1/2 |
Thresholds
- Ship to production: ≥ 14/16, with no Fail on safety/compliance rows.
- Dry-run only: 10–13/16 or any safety concern flagged.
- Reject: ≤ 9/16 or schema invalid.
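The thresholds translate directly into code. A sketch, assuming `scores` holds the eight rubric rows at 0–2 points each (16 max):

```python
def qa_decision(scores: dict[str, int], safety_fail: bool, schema_valid: bool) -> str:
    """Map rubric scores to ship / dry_run / reject per the thresholds above."""
    total = sum(scores.values())
    if not schema_valid or total <= 9:
        return "reject"
    if safety_fail or total <= 13:
        return "dry_run"
    return "ship"  # >= 14 with no safety/compliance fail
```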
8. Failure Modes
- Negative keywords block valuable long-tail intent.
- Symptom: conversion volume drops; search terms become “too narrow.”
- Fix: move risky negatives from campaign-level to ad-group-level; switch broad/phrase to exact; require “needs_human_review: true” for single-word negatives.
- LLM invents claims not on the landing page.
- Symptom: RSAs mention guarantees, “#1”, or unsupported pricing.
- Fix: tighten Prompt 2 inputs: add a “NO_CLAIMS_LIST” and paste pricing section verbatim. If you operate in regulated categories, add explicit forbidden terms.
- RSA assets are “varied” but not strategic.
- Symptom: headlines differ cosmetically; no intent segmentation.
- Fix: require intent buckets and minimum coverage. If you know your top queries, paste them into Prompt 2 so the model anchors language to real demand.
- Bidding recommendations are unstable because the model overreacts to small samples.
- Symptom: it suggests budget shifts everywhere.
- Fix: cap Wave 1 to “negatives + RSA only” unless your tracking is clean and you can explain your conversion lag. Add strict constraints in Prompt 3.
- Account structure drift breaks automation.
- Symptom: API script can’t find ad groups/campaigns referenced in output.
- Fix: use stable IDs/resource names from the API for operations, not human-readable names. If you must use names, enforce exact matching and fail fast.
- Search terms export lacks necessary columns.
- Symptom: model can’t justify negatives; output feels random.
- Fix: re-export with Clicks, Cost, Conversions at minimum. If you have conversion value, include it, but don’t add derived metrics.
- Policy disapprovals spike after AI copy refresh.
- Symptom: ads disapproved for capitalization, restricted terms, or claims.
- Fix: run a “policy lint” step: forbid superlatives, excessive punctuation, and restricted terms in the prompt. Roll out one ad group first, then expand.
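A minimal “policy lint” pass might look like the following — the regexes are illustrative starting points, not Google’s actual policy rules, so extend them with your own restricted terms:

```python
import re

# Assumed superlative/claim patterns; add your category's restricted terms.
SUPERLATIVES = re.compile(r"\b(?:best|guaranteed|cheapest)\b|#1", re.I)

def policy_lint(text: str) -> list[str]:
    """Flag common disapproval patterns in a single ad asset."""
    issues = []
    if SUPERLATIVES.search(text):
        issues.append("superlative or unsupported claim")
    if re.search(r"[!?]{2,}", text) or text.count("!") > 1:
        issues.append("excessive punctuation")
    if text.isupper() and len(text) > 4:
        issues.append("all-caps text")
    return issues
```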
9. Iteration Loop
- Weekly rerun cadence: export the same reports, same date window length, same schema, same prompts. Consistency beats novelty.
- Promote winners, prune losers: if you have asset-level results, keep proven headlines/descriptions stable and only replace underperformers. If you don’t have clean asset performance, rotate fewer assets per cycle.
- Negative refinement: add negatives in smaller batches. Review search terms 48–72 hours after changes to confirm you didn’t block valuable variants.
- Prompt tightening: every time you reject output in QA, encode the reason as a new constraint:
- Add forbidden words
- Add “must include evidence row”
- Add “only campaign-level negatives if appears in 3+ ad groups” (a rule you can enforce with your pre-processing script)
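The “3+ ad groups” rule is enforceable in pre-processing rather than in the prompt. A sketch against the Search Terms export (assumes `search_term` and `ad_group` columns):

```python
import pandas as pd

def promote_to_campaign_level(df: pd.DataFrame, term: str) -> bool:
    """True only if `term` appears in search terms across 3+ ad groups."""
    mask = df["search_term"].str.lower().str.contains(term.lower(), regex=False)
    return df.loc[mask, "ad_group"].nunique() >= 3
```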
- Automation expansion: once Wave 1 is stable, automate:
- Search term ingestion
- LLM recommendation generation
- Human approval
- API apply with dry-run then live-run
- Build your internal “query taxonomy”: maintain a living list of:
- Always-negative categories
- Brand-safe terms
- Competitor policy
This reduces prompt ambiguity and makes AI for Google Ads repeatable across new campaigns.
Frequently Asked Questions
How fast can I deploy this AI workflow?
If you already have Google Ads exports and landing page text, you can generate negatives and RSA assets in under an hour. Implementation time depends on whether you use Google Ads Editor (fastest) or the API (more setup, more control).
Should I let AI change bids automatically on day one?
No, unless your tracking is proven and you have rollback muscle memory. Wave 1 should usually be negatives + ad copy alignment, then bids/budgets after QA gates pass.
What’s the safest first use of AI for Google Ads?
Search term mining for negatives with evidence rows and conservative match types. That reduces waste without changing who you are trying to reach.
Do I need the Google Ads API for this to work?
No. Google Ads Editor plus the prompt pack gets you most of the value. Use the API once you want repeatable weekly runs with logging and dry-run enforcement.
How do I keep the model from writing policy-violating ads?
Provide a hard “NO_CLAIMS_LIST,” paste the exact landing page pricing/terms, and force output into character-limited rows. If you see a single disapproval pattern, add it as a forbidden term and rerun.
What if my landing pages are different per keyword theme?
Run Prompt 2 per ad group with its specific final URL content. Mixed landing page inputs produce generic copy.
Ready to build your AI growth engine?
I help CEOs use AI to build the growth engine their board is asking for.
Talk to Isaac