PPC Toolkit — Categorized index of paid-advertising software

Glossary

The PPC glossary, with the operator’s definitions

Forty-nine terms that come up daily in paid-media work, defined the way an agency operator would explain them on a call — not the way the platform documentation does.

Ruchika Rajput
Maintained quarterly · LinkedIn

The platforms (Google Ads, Microsoft Ads, Meta Ads) document these terms in ways that are technically correct and operationally useless. The definitions below assume you have an account in front of you and want to know what these things mean for your actual work.

Sections
  1. Metrics & KPIs
  2. Bidding & auction
  3. Attribution & tracking
  4. Campaign types
  5. Audiences & targeting
  6. Optimization & structure

Metrics & KPIs

ROAS return on ad spend
Revenue divided by ad spend. The headline metric on most paid-search dashboards. Misleading by default because it usually excludes cost-of-goods, returns, and payment processing — meaning a campaign with a “positive” ROAS may be unprofitable in unit-economic terms. See also: True ROAS, Break-even ROAS.
True ROAS / Contribution-margin ROAS
(Net revenue × Gross margin − Variable costs) ÷ Ad spend. The number that maps to actual profit. For most ecommerce, this is 30–60% lower than reported ROAS. The True ROAS Calculator exposes the gap.
Break-even ROAS
1 ÷ Gross margin. The ROAS at which an account exactly breaks even. At 30% gross margin, break-even is ~3.33x. Operating below break-even loses money even before considering returns and processing fees.
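The two formulas above reduce to a few lines of arithmetic. A minimal sketch, using the definitions as written; the margin, revenue, and cost figures are illustrative:

```python
def break_even_roas(gross_margin: float) -> float:
    """ROAS at which ad spend exactly consumes gross profit: 1 / gross margin."""
    return 1 / gross_margin

def true_roas(net_revenue: float, gross_margin: float,
              variable_costs: float, ad_spend: float) -> float:
    """Contribution-margin ROAS: (net revenue x margin - variable costs) / spend."""
    return (net_revenue * gross_margin - variable_costs) / ad_spend

# Illustrative: 30% margin, $10K spend driving $40K net revenue, $2K variable costs
print(round(break_even_roas(0.30), 2))                    # 3.33
print(round(true_roas(40_000, 0.30, 2_000, 10_000), 2))   # 1.0
```

The reported ROAS here would be 4.0x; the contribution-margin view shows the campaign merely breaking even, which is the gap the glossary entry describes.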
CPC cost per click
What you pay for one click. Reported as the average across an ad group, campaign, or account. Misleading at the aggregate level because branded clicks (cheap, high-converting) lower the average and mask non-branded CPCs (expensive, lower-converting).
Max CPC
The highest profitable bid you can place. Derived from unit economics, not observed in the platform. The Max CPC calculator derives it from deal value, margin, LTV:CAC ratio, and funnel conversion rates.
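The text names the calculator's inputs (deal value, margin, LTV:CAC target, funnel conversion rates) but not its exact formula, so the sketch below is one plausible derivation: compute the allowable CAC from gross profit and the LTV:CAC target, then multiply down the funnel to a per-click ceiling. All figures are illustrative.

```python
def max_cpc(deal_value: float, gross_margin: float, target_ltv_cac: float,
            click_to_lead: float, lead_to_close: float) -> float:
    """Highest bid at which a click is still profitable (one plausible model).

    Allowable CAC = gross profit per deal / target LTV:CAC ratio,
    then scaled by the probability that a click becomes a closed deal.
    """
    allowable_cac = deal_value * gross_margin / target_ltv_cac
    return allowable_cac * click_to_lead * lead_to_close

# Hypothetical B2B funnel: $20K deal, 70% margin, 3:1 LTV:CAC,
# 5% click-to-lead, 20% lead-to-close
print(round(max_cpc(20_000, 0.70, 3, 0.05, 0.20), 2))  # 46.67
```

Any observed CPC above that ceiling means the funnel math, not the bid, is the thing to fix first.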
CPL cost per lead
What you pay to acquire one lead (form fill, demo request, signup). Useful for B2B; meaningless for ecom. The relevant lead-quality threshold is sales-qualified leads (SQLs), not raw form fills.
CPA cost per acquisition
What you pay to acquire one customer (closed-won, paid order, completed signup). The right metric for most accounts. Often confused with CPL, which is upstream of CPA in the funnel.
CAC customer acquisition cost
All-in cost to acquire a customer — not just ad spend but also sales, marketing operations, and tooling. Typically used in finance contexts where the operating-cost framing matters.
LTV lifetime value
Total revenue a customer generates over their relationship with the business. For subscription products, includes recurring revenue. For one-off purchases, often a single transaction. LTV:CAC ratio of 3:1 is the SaaS rule of thumb.
CTR click-through rate
Clicks divided by impressions. A signal of ad relevance and offer quality. Higher CTR doesn’t always mean better — cheaper clicks from less-qualified audiences can inflate CTR while degrading downstream performance.
Conversion rate
Conversions divided by clicks (or impressions, depending on the metric). The percentage of clicks that result in the tracked conversion event. Healthy ecom CVRs run 1.5–3.5%; B2B SaaS demo-request CVRs run 3–8%.
Quality Score
Google’s 1–10 score for ad relevance, landing page experience, and expected CTR. Influences ad rank and CPC. Increasingly opaque (not exposed for Performance Max campaigns), but still a useful diagnostic for traditional Search campaigns.

Bidding & auction

Smart Bidding
Google’s native automated bidding strategies (Target ROAS, Target CPA, Maximize Conversions, etc.). Uses Google’s portfolio-trained ML across millions of accounts. Generally outperforms manual bidding above ~$10K/mo in spend; below that, the model doesn’t have enough account-level data to train meaningfully.
Manual CPC
The legacy non-automated bidding mode. Operator sets bids by hand. Almost never the right choice in 2026 except for tightly controlled brand-defense campaigns.
Target ROAS (tROAS)
Smart Bidding strategy that targets a specific revenue-to-spend ratio. Set the target slightly above break-even ROAS; Google’s model will adjust bids to hit it on average. Requires conversion-value tracking installed correctly.
Target CPA (tCPA)
Smart Bidding strategy that targets a specific cost per conversion. Right for lead-gen accounts where conversion value isn’t tracked. Avoid for ecom where revenue varies materially per conversion.
Maximize Conversions
Smart Bidding strategy that spends the budget aiming for the most conversions, without a target CPA. Useful for budget-constrained accounts trying to learn the cost curve. Risky because no ceiling on what the model will pay per conversion.
Bid adjustment
Percentage modifier on the base bid for specific contexts (device, location, time of day, audience). E.g., “+20% on mobile” multiplies the base bid by 1.2 for mobile clicks. Largely deprecated under Smart Bidding, which handles these signals internally.
Auction
Real-time process Google runs for every search query to decide which ads to show and in what order. Inputs: advertiser’s bid, Quality Score, ad rank thresholds. Lasts roughly 100ms per query. Happens billions of times daily.
Ad rank
The score that determines an ad’s position in the auction. Roughly: Bid × Quality Score × expected impact of ad extensions. Higher ad rank = better position.
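Google does not publish the real ad-rank formula, but the "roughly: Bid × Quality Score × extension impact" relationship above can be made concrete with a toy auction. The numbers are invented; the point is that a higher Quality Score can beat a higher bid:

```python
def ad_rank(bid: float, quality_score: int, extension_impact: float = 1.0) -> float:
    # Toy approximation of the relationship described above;
    # the real formula also includes rank thresholds and query context.
    return bid * quality_score * extension_impact

ads = {
    "A": ad_rank(2.50, 7),        # decent bid, good QS        -> 17.5
    "B": ad_rank(4.00, 4),        # high bid, weak QS          -> 16.0
    "C": ad_rank(1.80, 9, 1.1),   # low bid, excellent QS + assets -> 17.82
}
print(sorted(ads, key=ads.get, reverse=True))  # ['C', 'A', 'B']
```

Advertiser C wins the top position while paying the lowest bid, which is why Quality Score work is cheaper than bid increases.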

Attribution & tracking

Last-click attribution
Gives 100% of conversion credit to the final touchpoint before conversion. The default for most ad platforms. Systematically biased against top-of-funnel campaigns. For B2B with multi-touch journeys, materially understates paid-media contribution.
Data-driven attribution (DDA)
Google’s ML-based attribution model. Distributes credit across touchpoints based on patterns in your conversion data. Requires ~600+ conversions over 30 days to qualify (threshold reduced from 3,000 in early 2026). Recommended for most accounts that qualify.
Multi-touch attribution (MTA)
Any model that distributes credit across multiple touchpoints rather than just the last. Includes linear, U-shaped, time-decay, and data-driven. The attribution simulator shows how each model carves up the same conversion differently.
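The way the models listed above carve up a single conversion can be shown directly. A minimal sketch with illustrative weighting rules (real platform implementations differ, and data-driven weights come from a trained model, not a fixed rule):

```python
def attribute(touchpoints: list[str], model: str = "linear") -> dict[str, float]:
    """Distribute one conversion's credit across an ordered touchpoint path."""
    n = len(touchpoints)
    if model == "last_click":
        weights = [0.0] * (n - 1) + [1.0]
    elif model == "linear":
        weights = [1 / n] * n
    elif model == "u_shaped":
        # 40% to first and last touch, remaining 20% spread across the middle
        if n == 1:
            weights = [1.0]
        elif n == 2:
            weights = [0.5, 0.5]
        else:
            weights = [0.4] + [0.2 / (n - 2)] * (n - 2) + [0.4]
    elif model == "time_decay":
        # each later touch weighted 2x the previous one, then normalized
        raw = [2 ** i for i in range(n)]
        weights = [w / sum(raw) for w in raw]
    return dict(zip(touchpoints, weights))

path = ["YouTube", "Display", "Search"]
print(attribute(path, "last_click"))  # all credit to Search
print(attribute(path, "u_shaped"))    # {'YouTube': 0.4, 'Display': 0.2, 'Search': 0.4}
```

Same conversion, very different campaign-level numbers, which is the entire attribution argument in miniature.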
GCLID
Google Click ID. Unique identifier appended to every click's landing-page URL when auto-tagging is enabled. Captured in your CRM, it lets you import offline conversion events back into Google Ads with the original click attached. Essential for B2B accounts where leads close offline.
Offline conversion import
Pushing data back into Google Ads about which leads became customers. Closes the attribution loop for B2B and services. The most undervalued setup work for accounts whose conversion event isn't same-session.
Enhanced Conversions
Google’s privacy-aware conversion-tracking enhancement. Hashes first-party customer data and sends it server-side. Required for conversion-value rules in Smart Bidding. Worth installing on every account.
Conversion-value rules
Smart Bidding feature that lets you weight conversions by audience attribute (new vs. returning, geography, device). The closest Google has come to margin-aware native bidding. Easy to misconfigure.

Campaign types

Performance Max (PMax)
Google’s automated, multi-inventory campaign type. Combines Search, Shopping, Display, YouTube, Discover, and Gmail placements in one ML-driven campaign. Substantially improved through 2025. Default choice for ecom in 2026; situational for B2B.
Standard Shopping
The traditional Shopping campaign type, predating Performance Max. Largely replaced by PMax but still useful for accounts that need granular bid control or for testing against PMax baselines.
Demand Gen
Replaced Discovery in 2026. YouTube + Gmail + Discover feed placements. Useful for upper-funnel ecom and brand awareness. Often dilutes lead quality if used for B2B without careful audience configuration.
Search
The traditional keyword-targeted text-ad campaign type. Most predictable, most controllable. Still the default for B2B and high-intent verticals.
Display
Image and responsive display ads on Google’s display network (websites, apps). Generally low-intent. Used for retargeting and brand awareness. Easy to waste budget on if not carefully constrained.
YouTube
Video advertising on YouTube. Effective for brand awareness, video-first products, and audiences that don’t respond to text search ads. Conversion attribution is messier than Search.
Local Services Ads (LSAs)
Pay-per-lead format with the Google Guarantee badge for local services. Distinct from regular Google Ads. Higher conversion rates than standard Search for high-intent local queries. Required for most home-services and professional-services advertisers.

Audiences & targeting

Customer Match
Targeting based on uploaded first-party customer data (email lists, phone numbers). Hashed for privacy. Useful for retargeting customers, building lookalikes, and excluding existing customers from acquisition campaigns.
Similar audiences / Optimized targeting
Google’s lookalike targeting. Builds audiences resembling your conversion list or customer list. Increasingly handled inside Performance Max’s audience signals rather than as a standalone targeting layer.
Audience signals
The inputs to Performance Max’s ML targeting. First-party audience lists, custom intent audiences, demographics, in-market segments. PMax uses these as starting points for the model, not strict targeting constraints.
Negative keywords
Keywords for which your ad should NOT appear. Critical for Search campaigns; less relevant for PMax. The ongoing maintenance of negative keyword lists is one of the highest-ROI activities on a search account.
Match types
The breadth of how Google interprets your keyword: broad (loose), phrase (moderate), exact (tight). Broad match has expanded significantly under Smart Bidding and is now the default recommendation for most accounts — a major shift from a few years ago.
Search term
The actual query a user typed that triggered your ad. Visible in the search-terms report. The basis for negative-keyword decisions and for spotting bid-up opportunities.
N-gram analysis
Decomposing search terms into 1-, 2-, or 3-word chunks to identify patterns. Useful for finding negative keywords at scale or spotting underserved query patterns. Optmyzr is the most polished tool for n-gram work.
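The decomposition itself is simple enough to sketch. A minimal version that aggregates spend by bigram across a search-terms report; the queries and costs are invented for illustration:

```python
from collections import Counter
from itertools import islice

def ngram_spend(search_terms, n: int = 2) -> Counter:
    """Aggregate cost by n-gram across (query, cost) pairs from a
    search-terms report export."""
    totals = Counter()
    for query, cost in search_terms:
        words = query.lower().split()
        # slide an n-word window across the query
        for gram in zip(*(islice(words, i, None) for i in range(n))):
            totals[" ".join(gram)] += cost
    return totals

report = [("cheap crm software", 120.0),
          ("free crm software download", 95.0),
          ("crm software pricing", 210.0)]
print(ngram_spend(report, 2).most_common(2))
# [('crm software', 425.0), ('software pricing', 210.0)]
```

Sorting the result by spend surfaces patterns like "free" or "download" that no single search term makes obvious, which is where the negative-keyword candidates come from.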

Optimization & structure

Account structure
How an advertiser organizes their account into campaigns and ad groups. Best practice has evolved away from hyper-granular structures (one ad group per keyword) toward consolidated structures that give Smart Bidding more data to train on.
Ad group
A grouping of related keywords with shared ads. The unit of bid-strategy management in pre-Smart-Bidding accounts; less relevant in PMax-dominant accounts.
Asset groups
The Performance Max equivalent of ad groups. Bundles of creative assets (headlines, descriptions, images, videos) plus audience signals that PMax uses to target.
Budget pacing
How the platform spends your daily budget over the day. Can be standard (even throughout the day) or accelerated (front-loaded). Most accounts should use standard pacing.
Daypart / Ad scheduling
Restricting ads to specific hours/days. Largely deprecated under Smart Bidding, which handles time-based optimization internally. Still relevant for accounts with operational constraints (e.g., a call center that’s closed nights).
Final URL expansion
PMax feature that lets Google’s ML send traffic to any URL on your domain, not just the ones you specify. Useful for ecom inventory with many product pages; dangerous for sites with significant non-commercial content.
Brand exclusion list
Negative-keyword list at the PMax campaign level that prevents the campaign from spending on your own brand terms. Should be on for almost every PMax campaign. Until early 2026 you had to opt in to brand exclusions; they now apply by default unless you opt out (a meaningful UX improvement).
Conversion-event taxonomy
The set of conversion events you’ve defined and how they’re weighted. Often broken on accounts that have run for years — legacy events that no longer reflect business value, duplicate events firing twice, missing offline events. Auditing the taxonomy is the single highest-leverage hour an operator can spend on a new account.