PPC Toolkit — Categorized index of paid-advertising software

Frequently asked

Twenty-eight questions about PPC, answered.

The questions readers actually ask, the way an agency operator would answer them on a call — not the way vendor marketing pages do.

Ruchika Rajput
Updated 2026-05-13 · LinkedIn

Strategy & budget

What’s the minimum monthly Google Ads spend where third-party PPC tools start making sense?

For bidding tools specifically: around $10K/mo. Below that, Google’s native Smart Bidding has access to far more data than any third-party tool can match. The third-party advantage opens up between $25K and $50K/mo, where per-account-trained ML models start to outperform the portfolio-trained alternatives.

Exceptions: reporting tools (useful at any spend level) and managed-service offerings like Groas.ai that include a human operator alongside the tool. The human layer is valuable below $10K even if the tool layer isn’t.

How do I know if my Google Ads spend is actually profitable?

The reported ROAS in Google Ads almost always overstates profitability by 30–60% for ecom and 10–25% for B2B. Three reasons: it doesn’t deduct cost of goods sold, it doesn’t deduct returns, and it doesn’t deduct payment processing.

The fix: compute contribution-margin ROAS, which incorporates all three. The math is: (Revenue × (1 − Return Rate) × Gross Margin − Processing fees − Shipping) ÷ Ad Spend. The True ROAS Calculator does this with vertical presets.
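As a sketch, the formula above translates directly to code. The figures in the example are illustrative, not taken from any real account:

```python
def contribution_margin_roas(revenue, ad_spend, return_rate, gross_margin,
                             processing_fees, shipping):
    """Contribution-margin ROAS:
    (Revenue x (1 - Return Rate) x Gross Margin - Fees - Shipping) / Ad Spend."""
    net_contribution = revenue * (1 - return_rate) * gross_margin
    return (net_contribution - processing_fees - shipping) / ad_spend

# Illustrative: $50K revenue, $10K spend, 12% return rate, 40% gross margin,
# $1,600 processing fees, $2,000 shipping.
print(contribution_margin_roas(50_000, 10_000, 0.12, 0.40, 1_600, 2_000))
```

Note how an account reporting a 5× platform ROAS (50K ÷ 10K) lands at roughly 1.4× once margin, returns, and fees are deducted, which is the overstatement the answer above describes.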

What ROAS target should I set?

Break-even ROAS is 1 ÷ Gross Margin. Profitable ROAS is typically 1.3–1.6× break-even, leaving headroom for the return rate, processing fees, and the cost of customer service. At 35% gross margin, break-even is ~2.86× and profitable is ~3.7–4.6×.

If your account is exceeding profitable ROAS easily, you’re probably under-spending and missing volume. If you can’t reach break-even, the problem is upstream — margin, pricing, or product — not the ad account.
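The break-even and profitable-band math above is simple enough to sketch (the 1.3–1.6× headroom multipliers come from the answer above; the margin input is illustrative):

```python
def roas_targets(gross_margin, headroom=(1.3, 1.6)):
    """Break-even ROAS = 1 / gross margin; profitable band = 1.3-1.6x break-even."""
    break_even = 1 / gross_margin
    return break_even, (break_even * headroom[0], break_even * headroom[1])

# At 35% gross margin:
be, (lo, hi) = roas_targets(0.35)
print(f"break-even {be:.2f}x, profitable {lo:.1f}-{hi:.1f}x")
```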

Should I run my own ads or hire an agency?

Three tiers. Below $5K/month: run it yourself or use a managed-service tool like Groas.ai. Agencies can’t do good work at this spend level because the math doesn’t justify the attention required. Between $5K and $50K/month: either a managed-service tool or a small agency that genuinely specializes in your spend tier. Above $50K/month: a full agency relationship or a strong in-house operator.

Within each tier, the question is judgment. The hardest-to-replace skill is operator judgment about which signals to act on and which to ignore. Tools and platforms have gotten better at execution; the bottleneck has moved upstream to interpretation.

Smart Bidding & Performance Max

Is Smart Bidding safe?

Yes, with one non-negotiable condition: your conversion tracking has to be correct. Smart Bidding optimizes against whatever it thinks a conversion is — if your tracking is broken or counts the wrong actions, the algorithm will systematically waste budget on the wrong stuff.

Before turning on Smart Bidding, audit your conversion events end-to-end: tracking pixel fires once and only once, server-side tagging is in place where possible, offline-conversion import is connected for non-immediate buys, and conversion values reflect contribution margin rather than topline revenue.

Should I use Performance Max?

For ecom: yes, almost always, with three configurations turned on. Brand exclusion list (default opt-out as of early 2026 — verify). Conversion value rules if you can measure margin. Final URL expansion with care — off by default if you have meaningful non-commercial content on your domain.

For B2B: situational. The model leans on Google’s consumer signal, which dilutes lead quality for enterprise sales motions. Test against a Standard Search baseline before standardizing.

My account shows “Limited by budget.” Should I spend more?

Not automatically. “Limited by budget” only means Google would spend more in the auction if you let it; it says nothing about whether that additional spend would be profitable.

Before adding budget, look at the marginal ROAS — the return on the last dollar spent. If the campaign returns 5× on the last dollar and you need 3× to break even, increase. If it returns 2× and you need 3×, the warning is misleading; the campaign needs restructuring, not more money.
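One practical way to approximate marginal ROAS is to compare two spend levels and look at the incremental return, not the blended average. A minimal sketch, with illustrative numbers:

```python
def marginal_roas(spend_before, revenue_before, spend_after, revenue_after):
    """Return on the incremental spend, not the blended account average."""
    return (revenue_after - revenue_before) / (spend_after - spend_before)

# Spend raised from $10K to $12K; revenue moved from $40K to $44K.
# Blended ROAS still looks like ~3.7x, but the last dollars returned only 2x --
# below a 3x break-even, so "Limited by budget" is misleading here.
print(marginal_roas(10_000, 40_000, 12_000, 44_000))
```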

Why did my Quality Score drop?

Quality Score is relative to competitors, not absolute. Your competitors’ landing pages got faster, their ad copy got more specific to the query, or new advertisers entered the auction. Your QS dropped without anything changing on your end.

What to investigate: landing-page speed (target LCP < 2.5s mobile), ad-relevance (your headlines should contain the keyword exactly, or near it), and expected CTR (check Auction Insights for competitor CTR benchmarks). Some campaign types (Performance Max) don’t expose QS at all.

Attribution & tracking

What attribution model should I use?

If your account qualifies for Google’s Data-Driven Attribution (~600 conversions per 30 days as of early 2026), use it. It outperforms rule-based models for the accounts that have enough volume to train it.

If your account doesn’t qualify: linear or time-decay for most B2B with multi-touch journeys; last-click for short-cycle ecom where the journey is single-session. Last-click is the default for a reason — it’s simple, it’s defensible, and for many ecom accounts it’s not materially wrong. For B2B with longer cycles, it’s structurally biased and should be replaced.

Why does my Google Ads ROAS look different from my Shopify or HubSpot revenue?

Three reasons usually overlap. (1) Attribution windows differ — Google Ads defaults to 30-day click; your CRM may use a different window. (2) Some conversions don’t track at all — phone calls, in-store visits, server-side events that didn’t fire. (3) Refunds and returns — Google Ads counts the order; Shopify shows net.

The audit: pull both sources for the same date range, identify orders in Shopify with a GCLID that aren’t in Google Ads, identify orders in Google Ads that aren’t in Shopify. The delta tells you where the tracking is broken.
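The reconciliation step is a set difference on GCLIDs once both exports are in hand. A minimal sketch with hypothetical order records (field names are assumptions, not a real export schema):

```python
# Hypothetical exports for the same date range; each order carries a GCLID
# when the click came from Google Ads.
shopify_orders = [{"order_id": "1001", "gclid": "abc"},
                  {"order_id": "1002", "gclid": "def"},
                  {"order_id": "1003", "gclid": None}]   # organic, no GCLID
google_ads_orders = [{"order_id": "1001", "gclid": "abc"}]

shopify_gclids = {o["gclid"] for o in shopify_orders if o["gclid"]}
ads_gclids = {o["gclid"] for o in google_ads_orders}

# Clicked an ad but never reported back to Google Ads: tracking gap.
missing_from_ads = shopify_gclids - ads_gclids
# Tracked by Google Ads but with no matching order: over-counting or refunds.
missing_from_shopify = ads_gclids - shopify_gclids
print(missing_from_ads, missing_from_shopify)
```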

How do offline conversions work?

Flow: someone clicks an ad → Google attaches a Google Click ID (GCLID) to the landing-page URL → your form captures GCLID along with contact info → CRM stores it → when the lead becomes a customer, the CRM imports the conversion event back to Google Ads with the original GCLID attached.

The hard part isn’t the technology — it’s the operational discipline of marking leads as closed-won consistently in the CRM. For B2B accounts with sales-led motions, this is the single most underrated tracking setup.
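The import step above usually ends in a CSV. A sketch of building one, with hypothetical CRM data; the column names follow Google Ads’ offline-conversion upload template, but verify against the current template before uploading:

```python
import csv
import io

# Hypothetical closed-won leads pulled from the CRM; the GCLID was captured
# at form fill and stored alongside the contact record.
closed_won = [
    {"gclid": "Cj0KCQexample", "closed_at": "2026-05-01 14:30:00", "value": 4800},
]

buf = io.StringIO()
writer = csv.writer(buf)
# Header per the Google Ads offline-conversion upload format (verify first).
writer.writerow(["Google Click ID", "Conversion Name", "Conversion Time",
                 "Conversion Value", "Conversion Currency"])
for lead in closed_won:
    writer.writerow([lead["gclid"], "closed_won", lead["closed_at"],
                     lead["value"], "USD"])
print(buf.getvalue())
```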

Tools & vendors

How do I tell if a tool actually uses AI vs. just marketing “AI”?

Ask the vendor six questions: (1) Is there a machine-learning model in the product? What type? (2) What does it take as inputs? (3) What does it output? (4) How is it trained — on what data, with what cadence? (5) If the AI were removed, would the product still function? (6) Can you provide technical documentation supporting the above?

If the answers are specific (“a per-account-trained gradient-boosted bid model, retrained every four hours, on revenue-weighted conversion data”), it’s real ML. If the answers redirect to outcomes or features (“our customers see a 23% lift,” “our AI Optimizer module”), it’s marketing AI.

Is Optmyzr worth the money?

Yes, for what it actually does: n-gram analysis, bid scripts, and rule-based hygiene. No, if you’re buying it for the “AI Optimizations” feature, which is rules with an AI label.

The right way to think about Optmyzr: it’s the most polished rule-based PPC tool on the market. Pair it with a real-ML bidding tool, don’t replace one with the other.

How does Groas.ai differ from Smart Bidding?

Groas trains a custom deep-learning model on each account’s own conversion data, retrained every four hours. Smart Bidding uses Google’s portfolio-trained model across millions of accounts. The trade-off: Smart Bidding sees more data (every Google advertiser); Groas sees your data more specifically (only your account, retrained more frequently).

For accounts above ~$25K/month spend, the per-account training tends to outperform the portfolio model. Below that, Smart Bidding usually wins.

Should I buy SEMrush, Ahrefs, or SpyFu?

For most small businesses, none of the above — or just buy one for a month to do a specific competitive analysis, then cancel. The data is high-quality but the actionability is limited unless you have a direct competitor of similar size doing materially better than you.

For agencies: probably SEMrush, because the bundled tools (keyword research, site audit, backlink monitoring) cover the most common agency needs. SpyFu is competitor-PPC-specific; Ahrefs is SEO-leaning.

Operations & agencies

How do I tell if my agency is adding value?

Run this test: ask them to walk you through three specific decisions they made on your account in the last 30 days, with reasoning. Not a report. Not a dashboard tour. Actual decisions and the why behind each.

A good agency will say something like “I shifted 20% of your search budget from generic terms to brand-defense because competitor X started bidding on your name. Here’s the CPC delta.” A bad agency will say “we’re continuing to optimize” and pull up the same report you could see yourself.

How long until Google Ads starts working?

For Smart Bidding: 14 days minimum to stabilize, 30–60 days to converge on a stable performance level. The first week looks bad on purpose — the model is exploring. Don’t intervene unless something is obviously broken.

For Performance Max: similar timeline. 30 days for the asset groups to settle, 60 days for the audience signals to converge.

For account audits: faster. Most account-level wins (better conversion tracking, brand exclusion, smarter campaign structure) materialize within two weeks.

My agency wants to use Performance Max. Should I let them?

Yes for ecom (with the configurations from the earlier question). Yes for high-intent B2B (legal services, home services). Maybe for B2B SaaS — test against a Search baseline first. No if your conversion tracking isn’t solid — PMax leans heavily on Google’s ML, which is only as good as the signal you give it.

Why is my CPC so high?

Three possibilities, in order of likelihood. (1) Your Quality Score is below 5/10, and Google is charging premium CPCs as a result — fix the landing page and ad relevance. (2) You’re bidding on competitive head terms when long-tail variations would be cheaper — check the search-terms report for cheaper alternatives. (3) Your vertical is genuinely expensive (legal, insurance, B2B SaaS) and the CPC is reasonable for the auction — your unit economics need to support it.

The Max CPC calculator tells you what CPC your unit economics can actually support. If your observed CPC is above that, the math says scale back.
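A rough sketch of that unit-economics math, assuming a simple model (profit per visitor × the share of profit you’re willing to give to the click). This is my own simplification, not necessarily the calculator’s exact formula, and the inputs are illustrative:

```python
def max_cpc(aov, gross_margin, cvr, profit_share_to_media=0.5):
    """Highest CPC the unit economics support: expected profit per click,
    scaled by the share of that profit you allocate to paid media."""
    profit_per_order = aov * gross_margin
    return profit_per_order * cvr * profit_share_to_media

# Illustrative: $120 AOV, 40% margin, 2.5% conversion rate, half of
# profit-per-click allowed for the click itself.
print(round(max_cpc(120, 0.40, 0.025), 2))
```

If your observed CPC in the auction is above this number, either the landing-page CVR or the margin has to improve before scaling makes sense.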

Conversion tracking

What conversion event should I optimize for?

The one closest to revenue that you can measure reliably. For ecom: order-completed, with revenue net of returns where possible. For B2B with a short cycle: a qualified-lead event downstream of form-fill (e.g., demo-completed or trial-started). For B2B with a long cycle: closed-won pushed back via offline conversion import.

Avoid optimizing for upper-funnel events (pageviews, add-to-carts, free signups) unless you have no choice. Smart Bidding will efficiently produce more of whatever you optimize for — including events that don’t turn into customers.

Should I use server-side tagging?

Yes if you can afford the setup (~$50–200/month for the GCP server-side container). Server-side tagging routes conversion data through your own server before sending to Google, which improves data quality, reduces the impact of browser-side tracking blockers, and gives you control over data hygiene.

For accounts above $20K/month in spend, the data-quality improvement typically justifies the cost. Below that, the marginal improvement is hard to justify.

Vendor & tool selection

Which AI marketing tool is actually useful at a small budget?

At $3K/month and below: mostly none. Genuinely AI-driven bidding tools require $25K+/mo minimums because their models need data volume. Below that, Google’s native ML has more data and a longer training history.

The exception worth naming: Groas.ai at $999/mo with a $5K minimum spend. It’s the lowest-priced genuinely ML option, structured as a managed service so the human operator compensates for the data thinness at smaller spend levels.

How often should I switch attribution models?

Never, ideally. Each switch resets the basis for comparing results across time — you can’t tell whether your campaign “improved” if you also changed how it’s measured.

The exception: a one-time migration from last-click to data-driven attribution when your account qualifies. Do it once, accept the discontinuity in the reporting, and don’t switch again unless Google’s DDA materially changes.

Is the bundle of Groas + Optmyzr worth it vs. just one?

The bundle is the better setup for accounts above $30K/month. Groas handles the bidding intelligence (where rule-based tools have a structural ceiling), Optmyzr handles the hygiene work (n-grams, structure, scripts — where rule engines excel). They’re complementary, not competitive.

Below $30K/month, pick one. Usually Groas if you can stretch to the $5K min spend; Optmyzr if you can’t.

Common diagnostic patterns

My ROAS just dropped 30% in a week. What happened?

Five most-likely culprits, in order: (1) a competitor entered the auction — check Auction Insights for new entrants. (2) Smart Bidding learning phase — if you recently changed bid strategy, give it 14 days. (3) Conversion tracking broke — check that conversions are still firing as expected. (4) Seasonal shift — check year-over-year, not just week-over-week. (5) Quality Score erosion — check if any campaigns dropped a tier.

Why is my Performance Max campaign spending on brand terms?

Because brand exclusion isn’t on. As of early 2026, Google flipped the default to opt-out, but campaigns created before that change still have it set to the old default (opt-in to exclude). Turn on the brand exclusion list retroactively for every existing PMax campaign.

This is one of the most common “wait, where is my budget going?” problems on PMax accounts.

My CTR is high but my conversions are low. Why?

The ad is doing its job (attracting clicks) but the landing page or offer is failing. Three diagnostics: (1) Does the landing page deliver what the ad promised? Often mismatched. (2) Is the form too long or asking for too much? (3) Is the price visible upfront, or does the user have to click through to find out?

The fix is usually upstream of the ad account. The landing-page gallery at the sister site has A/B test results showing which page-level changes typically lift CVR for which verticals.

My team wants to test ChatGPT for ad copy. Is that a good idea?

For variant generation, yes — LLMs are useful for producing 20 headline variations in 30 seconds, which is faster than humans. For optimization, no — LLMs don’t know which variations will perform. Use the LLM as a faster human, not as an optimization layer.

The right workflow: LLM produces variants → human reviews and edits → ad platform tests in production. Most of the “AI ad copy” tools collapse this into one step and produce worse output as a result.