
Google Ads client reporting: the 10 KPIs that matter in 2026

The 2021-era Google Ads KPIs (clicks, CTR, CPC) no longer cut it in 2026. Your client wants business impact: margin, LTV, MER, incrementality. This guide unpacks the 10 KPIs that actually matter — with formulas, healthy thresholds, use cases, and pitfalls — benchmarked across 2,000+ SteerAds accounts.

Angel, Strategy & Audit Lead · 12 min read

70 to 82% of agencies still report only on gross revenue and ROAS in 2026, never mentioning real margin. The 10 business KPIs that follow change the conversation with clients — they speak net margin, LTV:CAC, and real incrementality rather than interface vanity metrics.

Why don't the basic Google Ads KPIs cut it anymore in 2026?

Google Ads client reporting has shifted in 2026. For a decade, agencies and advertisers clung to a reassuring trio: ROAS, CPA, CTR. Comfortable, easily exportable from the interface, universally understood — and completely insufficient for the decision-makers signing budgets today. On the SteerAds 2025-2026 sample, 70-82% of agencies (depending on the source) still report only on gross revenue and ROAS, never mentioning real margin. The outcome: a growing gap between what the reporting shows ("ROAS 4.2, profitable campaign") and what the client's accounting actually measures (negative margin on 30% of orders).

Three shifts make the old trio obsolete. One, gross revenue ROAS doesn't measure real margin; in fashion or consumer electronics, a revenue ROAS of 2.4 often hides a margin ROAS of 1.1 — barely profitable, sometimes negative once returns and logistics are folded in. Two, CPA becomes misleading as soon as LTV is ignored: a high CPA on a customer who renews for 5 years stays profitable; a low CPA on a one-shot customer doesn't. Three, clicks and CTR have become genuine vanity metrics — they fill a dashboard while saying nothing about the business.

The 2026 decision-makers — CMOs, heads of growth, mature e-commerce CEOs — want to see business impact: net margin, payback period, true incremental lift, LTV:CAC. The 10 KPIs that follow are the modern roadmap for reporting that speaks the client's language, not the Google Ads interface's. To set the conceptual groundwork before diving in, read our companion piece understanding ROAS, CPA, and CPC.

Google Ads reporting KPI pyramid — 3 tiers
Business (CEO / CMO): Margin ROAS · LTV:CAC · MER · Incrementality · New customer rate
Performance (Head of Growth): CPA · ROAS · Conversions · Funnel spend share · Offline coverage
Tactical (Traffic Manager): Clicks · CTR · CPC · Impression Share · Quality Score · Time to first conversion

Higher up = fewer KPIs, but each one is worth more

Key insight:

a 2026 client report must cover all 3 tiers — tactical, performance, business — with a strict hierarchy. 3 business KPIs on page 1, 5 performance KPIs on page 2, 2 tactical KPIs in the appendix. In practice, dashboards that honor this hierarchy show roughly 30% higher agency client retention than "12 flat KPIs" dashboards.

What's the difference between margin ROAS and gross revenue ROAS?

Definition. Classic ROAS measures the ratio of revenue to ad spend. Margin ROAS folds in gross margin: it's the only ratio that tells you whether a campaign is profitable in the real accounting sense. Formula: (Revenue × margin%) / spend. Concrete example: a fashion e-commerce with 30% gross margin and a revenue ROAS of 4.2 posts a margin ROAS of 1.26 — profitable but with no headroom. At a revenue ROAS of 2.4, margin ROAS drops to 0.72: a flat loss on every dollar spent.

Healthy threshold by vertical. Fashion e-commerce: margin ROAS ≥ 1.0. Consumer electronics (8-15% margins): ≥ 0.8 minimum, paired with high LTV. Services (70%+ margin): ≥ 2.0. B2B SaaS: margin ROAS isn't the right metric — switch to LTV:CAC. Client use case. Escape the endless budget negotiation ("raise the budget, ROAS is good") by showing that a 4.2 revenue ROAS hides fragile profitability. Excellent for reframing the conversation with an e-com client who pushes for more spend without seeing that margin is shrinking.

Pitfalls to avoid. Forgetting variable costs outside COGS (logistics, returns, Stripe payment fees at 2.9% + $0.30, customer service). Best practice: compute a "contribution margin" that includes all marketing-dependent variable costs, not just product cost. Across the accounts we observe, the median picture is a 2.4 revenue ROAS sitting alongside a 1.1 margin ROAS: an apparently healthy revenue ROAS that is right at break-even once margin is applied. Further reading on Google Ads support.
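To make the distinction concrete, here is a minimal Python sketch of the three ratios side by side, assuming per-order data with hypothetical field names (revenue, COGS, shipping, returns) and the 2.9% + $0.30 payment fee cited above; adapt it to your own cost structure.

```python
from dataclasses import dataclass

@dataclass
class Order:
    revenue: float      # gross revenue collected for the order
    cogs: float         # product cost (COGS)
    shipping: float     # logistics / shipping cost
    returned: bool      # whether the order was returned

def roas_report(orders: list[Order], ad_spend: float,
                payment_fee_rate: float = 0.029, payment_fee_fixed: float = 0.30) -> dict:
    """Compare revenue ROAS, margin ROAS, and contribution-margin ROAS."""
    kept = [o for o in orders if not o.returned]
    revenue = sum(o.revenue for o in kept)
    gross_margin = sum(o.revenue - o.cogs for o in kept)
    # Contribution margin: also subtract logistics and payment fees (variable costs)
    contribution = sum(
        o.revenue - o.cogs - o.shipping
        - (o.revenue * payment_fee_rate + payment_fee_fixed)
        for o in kept
    )
    return {
        "revenue_roas": revenue / ad_spend,
        "margin_roas": gross_margin / ad_spend,
        "contribution_roas": contribution / ad_spend,
    }

# Illustrative data: 100 orders at $90 revenue, 30% gross margin, $6 shipping, 10% returns
orders = [Order(90.0, 63.0, 6.0, returned=(i % 10 == 0)) for i in range(100)]
print({k: round(v, 2) for k, v in roas_report(orders, ad_spend=2_500.0).items()})
```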

How do you calculate the CAC vs cohort LTV ratio?

Definition. CAC (Customer Acquisition Cost) = total spend / new customers. LTV (Lifetime Value) = ARPU × customer lifetime, calculated on a 12-24 month cohort. The LTV:CAC ratio is the only KPI that tells you whether the acquisition business model is sustainable long-term.

Healthy threshold. LTV:CAC ≥ 3 in B2B SaaS, ≥ 2.5 in e-commerce, ≥ 4 in high-repeat local services. Below 2, the model is fragile — a Google Ads CPC uptick or a retention dip tips it into loss. Above 5, there's often under-investment in acquisition. Across the accounts we observe, a median US B2B SaaS posts LTV $4,800 / CAC $1,200 = 4, a comfortable ratio.

Use case. Defending an apparently high CPA to a client who compares "just CPA." An $800 CAC on a $3,200 LTV is excellent; a $150 CAC on a $200 LTV is fatal. Pitfalls. LTV requires cohort data rarely available before 12 months of history. Workaround: LTV proxy computed at M+3 then extrapolated via an empirical multiplier (typically 2.5× for B2B SaaS, 1.8× for repeat e-com). Never use a theoretical business-plan LTV — only observed LTV. To cleanly connect CRM data to Google Ads, see our offline conversions guide.
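A minimal sketch of the CAC and LTV-proxy arithmetic, assuming you already have observed revenue per customer at M+3 and a multiplier calibrated on your own cohorts (the figures below are illustrative, not benchmarks):

```python
def cac(ad_spend: float, new_customers: int) -> float:
    """Customer Acquisition Cost = total spend / new customers acquired."""
    return ad_spend / new_customers

def ltv_proxy(m3_revenue_per_customer: float, multiplier: float) -> float:
    """Extrapolate observed M+3 revenue per customer to a full-lifetime estimate.

    The multiplier is empirical per vertical (e.g. ~2.5 for B2B SaaS, ~1.8 for
    repeat e-commerce, per the figures above); always calibrate it on your own
    observed cohorts rather than a business-plan assumption.
    """
    return m3_revenue_per_customer * multiplier

spend, new_customers = 60_000.0, 50            # one month of acquisition
cac_value = cac(spend, new_customers)          # $1,200
ltv_value = ltv_proxy(m3_revenue_per_customer=1_920.0, multiplier=2.5)  # $4,800
print(f"CAC ${cac_value:,.0f}  LTV ${ltv_value:,.0f}  ratio {ltv_value / cac_value:.1f}")
```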

What is MER (Marketing Efficiency Ratio)?

Definition. MER = total revenue / total marketing spend, where "marketing spend" includes ALL channels (Google Ads, Meta, TikTok, LinkedIn, influencer, SEO) and ideally headcount and tooling costs. MER gives you overall marketing profitability, agnostic to the attribution model.

Healthy threshold. Consumer B2C e-commerce: MER ≥ 3 (typically 3.5 to 5.0 on the best accounts). B2B SaaS: MER ≥ 5 on a trailing 12-month basis (since product margin is higher). Omnichannel retail with brick-and-mortar: MER ≥ 2.5 acceptable if store traffic is measured.

Difference from ROAS. Google Ads ROAS is sensitive to attribution bias — last-click over-credits brand, data-driven over-credits top-of-funnel, and PMax claims roughly +32% lift where geo holdouts measure around +11% real incrementality. MER, by contrast, is aggregated: it doesn't need to "know" which channel drove the sale. Use case. Arbitrating between Google Ads and Meta Ads at fixed total budget without falling into the last-click ROAS war. Pitfalls. Forgetting headcount costs (agency, freelance, in-house salaries) and tools (Looker Studio Pro, ad server, tracker) that can represent 15-25% of actual media budget. A "media only" MER misleads as much as a gross revenue ROAS.
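A minimal sketch of a MER calculation that folds in headcount and tooling alongside media spend; the channel names and amounts are hypothetical placeholders.

```python
def mer(total_revenue: float, channel_spend: dict[str, float],
        headcount_costs: float = 0.0, tooling_costs: float = 0.0) -> float:
    """Marketing Efficiency Ratio = total revenue / ALL marketing spend.

    channel_spend covers every paid channel (Google Ads, Meta, TikTok, ...);
    headcount and tooling are added so the ratio reflects real marketing cost,
    not media-only spend.
    """
    total_spend = sum(channel_spend.values()) + headcount_costs + tooling_costs
    return total_revenue / total_spend

spend = {"google_ads": 40_000, "meta": 25_000, "tiktok": 8_000, "influencers": 5_000}
print(round(mer(total_revenue=312_000, channel_spend=spend,
                headcount_costs=12_000, tooling_costs=2_000), 2))  # ~3.39
```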

How do you measure incrementality with a geo holdout?

Definition. Incrementality measures the revenue genuinely generated by the campaign — the revenue that wouldn't exist without it — vs the revenue attributed by Google (which includes conversions the customer would have made anyway via SEO, direct, or word-of-mouth). The gold-standard method: geo holdout.

Methodology. Turn off a campaign (typically PMax or a channel suspected of over-attribution) for 2 to 4 weeks in 1 representative region (e.g., a mid-sized state, no seasonal bias). Compare total revenue in the test zone to control zones over the same period. The gap is real incrementality.

Healthy threshold. Varies by channel. In our sector panel, median PMax incrementality is +11% real vs +32% attributed — Google Ads over-attributes by roughly 3×. Non-brand search: +65-80% real (close to attributed). Brand search: 10-20% real (the user would have converted direct or via SEO). YouTube / Discovery / Demand Gen: the most over-attributed, real incrementality often 20-40% of attributed. Priority channels to test. PMax, YouTube, Discovery/Demand Gen, Display — all the ones with view-through attribution enabled. Pitfall. A short holdout (< 14 days) in too small a region produces only statistical noise. Minimum 2,000 control-zone conversions over the window for a clean read.
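As an illustration of the arithmetic (not a substitute for a properly powered test), here is a sketch that scales the test zone's pre-period revenue by the control zones' trend to estimate the counterfactual; all revenue figures are hypothetical.

```python
def geo_holdout_incrementality(test_pre: float, test_during: float,
                               control_pre: float, control_during: float) -> float:
    """Estimate the incremental revenue share from a geo holdout.

    The campaign is paused in the test zone during the holdout window. The
    control zones (campaign still running) give the counterfactual trend: the
    test zone's pre-period revenue is scaled by the control zones' growth to
    estimate what it would have done with the campaign left on.
    """
    expected_with_campaign = test_pre * (control_during / control_pre)
    incremental_revenue = expected_with_campaign - test_during
    # Share of campaign-on revenue that was genuinely incremental; compare it
    # to Google-attributed revenue in the same zone to get the over-attribution factor.
    return incremental_revenue / expected_with_campaign

# Hypothetical 3-week window: test zone drops from 100k to 93k while control
# zones stay flat -> roughly 7% of that zone's revenue was incremental.
lift = geo_holdout_incrementality(test_pre=100_000, test_during=93_000,
                                  control_pre=400_000, control_during=400_000)
print(f"real incrementality ~ {lift:.0%}")
```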

How should you split spend across funnel stages?

Definition. The percentage of budget spent per funnel stage — awareness (Display, YouTube, Demand Gen), consideration (non-brand generic Search, Display remarketing), conversion (brand Search, intent remarketing, PMax shopping). This KPI gives you a view of long-term pipeline health, not just immediate performance.

Healthy mix. For an account in steady state: 20% awareness + 40% consideration + 40% conversion. For an account in scaling phase: push awareness to 30%. For an account with tight cash flow: drop to 10% awareness + 30% consideration + 60% conversion, but accept a volume ceiling in 6 months.

Anti-patterns to flag to the client. 80% of budget on brand — pure waste, you're paying dearly for traffic that would have come direct. 100% on conversion-only generic — no nurturing, the pipeline dries up in 6 months. 60% on YouTube awareness with no measurement — money burned with no measurable ROI. Visualization. Classic 3-segment donut chart, or a Sankey diagram if the client is mature. In practice, accounts that starve their consideration tier (under 25% of budget) see conversion volume drop 14-22% at M+6 with no obvious cause — it's the pipeline that quietly drained.
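A minimal sketch of the split calculation, assuming campaigns can be mapped to a funnel stage via their type or naming convention (the mapping and spend figures below are hypothetical):

```python
# Hypothetical mapping from campaign type / naming convention to funnel stage
STAGE_BY_TYPE = {
    "youtube": "awareness", "display": "awareness", "demand_gen": "awareness",
    "search_generic": "consideration", "display_remarketing": "consideration",
    "search_brand": "conversion", "pmax_shopping": "conversion",
    "remarketing_intent": "conversion",
}

def funnel_split(campaign_spend: dict[str, float]) -> dict[str, float]:
    """Share of budget per funnel stage (awareness / consideration / conversion)."""
    totals = {"awareness": 0.0, "consideration": 0.0, "conversion": 0.0}
    for campaign_type, spend in campaign_spend.items():
        totals[STAGE_BY_TYPE[campaign_type]] += spend
    grand_total = sum(totals.values())
    return {stage: round(spend / grand_total, 2) for stage, spend in totals.items()}

spend = {"youtube": 6_000, "search_generic": 14_000, "display_remarketing": 4_000,
         "search_brand": 7_000, "pmax_shopping": 9_000}
print(funnel_split(spend))   # 15% awareness / 45% consideration / 40% conversion
```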

How do you interpret Impression Share (total + lost)?

Definition. Impression Share measures the share of impressions actually served vs eligible. Reporting must show 3 variants: IS total, IS lost budget (impressions lost due to insufficient budget), IS lost rank (impressions lost due to bid or Quality Score too low). Without all 3, diagnosis is impossible.

Healthy thresholds. IS ≥ 60% on brand campaigns and business-critical non-brand keywords. IS lost combined < 40%. Critical signals: IS lost budget > 30% = the account is under-sized, pitch a budget increase with projected ROAS. IS lost rank > 25% = Quality Score or bid strategy issue; fix it before any budget raise.

Use case. The "raise your budget by 30%" pitch becomes credible when you show an IS lost budget of 38%: the account is leaving 38% of the eligible market on the table for lack of budget. Pitfalls. Confusing IS top with IS absolute top (Google has two distinct metrics). Reporting account-level IS when some campaigns are at 95% (brand) and others at 15% (generic) — averaging hides the diagnosis. Always break down by campaign. To dig deeper, see our Google Ads audit checklist.
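A minimal sketch of the campaign-level diagnosis, applying the thresholds above (all values as fractions; the campaign names are hypothetical):

```python
def diagnose_impression_share(campaign: str, is_total: float,
                              is_lost_budget: float, is_lost_rank: float) -> str:
    """Apply the IS thresholds above to one campaign."""
    if is_lost_budget > 0.30:
        return f"{campaign}: under-budgeted ({is_lost_budget:.0%} IS lost to budget) -> pitch an increase"
    if is_lost_rank > 0.25:
        return f"{campaign}: rank problem ({is_lost_rank:.0%} IS lost to rank) -> fix QS/bids before raising budget"
    if is_total >= 0.60:
        return f"{campaign}: healthy ({is_total:.0%} impression share)"
    return f"{campaign}: below 60% IS without a dominant cause -> investigate"

# Always diagnose per campaign, never on the account-level average
for row in [("brand", 0.95, 0.02, 0.03), ("generic_shoes", 0.42, 0.38, 0.20)]:
    print(diagnose_impression_share(*row))
```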

How do you steer the New customer acquisition rate?

Definition. The percentage of conversions coming from new customers vs existing customers. For an e-commerce looking to grow, acquiring new customers is fundamentally different from re-engaging the base — but without segmentation, Google Ads treats both identically.

Healthy threshold. 40 to 60% new customers over the monthly window for a growing e-commerce. Below 30%, the account is essentially cannibalizing its own base — risk of saturation plateau in 6-12 months. Above 75%, watch the opposite imbalance: no retention, high churn hidden by constant acquisition.

Technical setup. Google Ads lets you declare a New Customer Acquisition Goal via Customer Match: you upload the existing customer list and tell the algorithm to bid higher for new customers (or to target only new customers, excluding the existing base). Requires a clean CRM and a working Customer Match sync. Client use case. Quick diagnosis of an e-com client "stagnating on revenue": if New customer rate sits at 22%, the cause isn't the campaign, it's base saturation. Pitfalls. The measurement relies on Customer Match requiring a match rate > 60% — otherwise the algo treats existing customers as new and artificially inflates the KPI. For the setup, see the official Google Ads documentation.
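A minimal sketch of the measurement itself, assuming you can join converting customers against the CRM list used for Customer Match (customer IDs or hashed emails; field layout is an assumption):

```python
def new_customer_rate(converting_customers: list[str],
                      existing_customers: set[str]) -> float:
    """Share of converting customers not present in the CRM base.

    converting_customers: IDs (or hashed emails) of customers who converted via
    Google Ads over the reporting window; existing_customers: the Customer Match
    list as it stood before the window started.
    """
    unique_converters = set(converting_customers)
    new = [c for c in unique_converters if c not in existing_customers]
    return len(new) / len(unique_converters)

converters = ["c1", "c2", "c3", "c4", "c5", "c2"]
crm_base = {"c2", "c5"}
print(f"{new_customer_rate(converters, crm_base):.0%} new customers")  # 60%
```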

Why weight average Quality Score by impressions?

Definition. The arithmetic average of Quality Score across all keywords is misleading: 300 keywords at QS 5 and 5 strategic keywords at QS 2 yield an average of 4.95 that hides the real problem. Impression-weighted Quality Score is the only reliable health indicator for Search. Formula: Σ(QS × impressions) / Σ(impressions), computed on the top 50 keywords by spend.
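A minimal sketch of the weighted calculation, with hypothetical keyword data chosen to show how far the weighted score can sit from the simple average:

```python
def weighted_quality_score(keywords: list[dict]) -> float:
    """Impression-weighted Quality Score: sum(QS * impressions) / sum(impressions).

    keywords: the top keywords by spend, each with 'qs' and 'impressions' keys.
    """
    total_impressions = sum(k["impressions"] for k in keywords)
    return sum(k["qs"] * k["impressions"] for k in keywords) / total_impressions

keywords = [
    {"kw": "running shoes", "qs": 5, "impressions": 1_000},
    {"kw": "buy running shoes online", "qs": 2, "impressions": 40_000},  # strategic, high volume
]
print(round(weighted_quality_score(keywords), 2))  # 2.07 -- the simple average would say 3.5
```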

Healthy threshold. Weighted average QS ≥ 7.0 on core keywords. Between 5 and 7: acceptable but watch-list. Below 5: red alert, relevance issue with ads or landing pages. A Quality Score under 5 on a keyword at 40% of spend means you're paying 50-100% more than your competitors at the same bid rank.

Use case. Early warning signal. A weighted QS dropping from 7.2 to 5.8 in 30 days foreshadows a CTR or landing experience decline — fix before CPA follows. Pitfalls. Reporting a simple (unweighted) average QS with no context. Ignoring the 3 components (Expected CTR, Ad Relevance, Landing Page Experience) that drive QS. For the full diagnostic, see our guide to running Google Ads without an agency, which details the audits to run quarterly.

What does Time to first conversion tell you about your funnel?

Definition. The median delay between the first Google Ads click and the first conversion. Available in the "Path length" and "Time lag" reports under Google Ads attribution, or computable via BigQuery export if tracking is properly set up.

Observed benchmarks. B2B SaaS: 14 to 28 days common, sometimes 60+ on enterprise. Fashion and beauty e-commerce: 1 to 3 days. Consumer electronics e-commerce: 3 to 7 days. Local services: < 24h. Real estate, automotive: 30 to 90 days.

Use case. Understanding why a PMax campaign looks "under-performing" after 14 days in a B2B SaaS context: it needs 28-45 days to materialize conversions. Improvement signal: a drop in time to first conversion indicates a more efficient funnel (sharper landing, stronger remarketing). Major pitfall. A sudden drop can also signal over-attribution (Google re-claims conversions previously attributed to SEO). Always cross-reference with offline coverage and incrementality to arbitrate. Compare to a cohort baseline over a trailing 90 days to separate signal from noise.
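If you compute it yourself from a BigQuery export rather than the attribution reports, here is a minimal sketch of the median calculation (the timestamp format and per-converter pairing are assumptions about your export):

```python
from datetime import datetime
from statistics import median

def median_time_to_first_conversion(paths: list[tuple[str, str]]) -> float:
    """Median days between first Google Ads click and first conversion.

    paths: (first_click_timestamp, first_conversion_timestamp) per converter,
    e.g. assembled from a BigQuery export of click and conversion events.
    """
    fmt = "%Y-%m-%d %H:%M"
    lags = [
        (datetime.strptime(conv, fmt) - datetime.strptime(click, fmt)).total_seconds() / 86_400
        for click, conv in paths
    ]
    return median(lags)

paths = [("2026-01-03 10:00", "2026-01-20 09:30"),
         ("2026-01-05 14:00", "2026-02-02 11:00"),
         ("2026-01-10 08:00", "2026-01-18 16:00")]
print(f"{median_time_to_first_conversion(paths):.1f} days")
```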

What offline conversion coverage threshold fits your vertical?

Definition. The percentage of conversions attributed to Google Ads that come from offline uploads (via Google Ads API, Zapier, or Enhanced Conversions for Leads) vs conversions tracked on-site only. For B2B SaaS or e-commerce with strong customer service, offline is half the truth.

Healthy threshold by vertical. B2B SaaS: target ≥ 70% offline coverage. Without it, Smart Bidding optimizes on junk MQLs (filled forms but unqualified) and degrades pipeline quality. Standard e-commerce: 5-10% typical (returns, refund adjustments). E-commerce with strong support or phone cross-sell: 15-25%. Local services with appointment booking: 40-60% if appointments are fed back.

Use case. For B2B SaaS, this is THE KPI that unlocks Smart Bidding — without offline, tCPA optimizes poorly at best. For e-com, monitoring the trend is critical: a drop in offline coverage = broken CRM pipeline (pixel, endpoint, automation). Pitfall. A "healthy" percentage says nothing about upload latency. If offline conversions come back with 10 days of lag, Smart Bidding learns out of sync and optimization quality drops. Ideal: daily automated upload, latency < 48h. See our offline conversions CRM guide for the end-to-end setup.
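A minimal sketch that computes both coverage and upload latency from a conversion log; the field names and timestamp format are hypothetical, to be adapted to your CRM export.

```python
from datetime import datetime
from statistics import median

def offline_coverage(conversions: list[dict]) -> dict:
    """Offline coverage share and median upload latency for uploaded conversions.

    conversions: one dict per conversion with a 'source' key ('online'/'offline');
    offline rows also carry 'event_time' and 'upload_time' (ISO strings).
    """
    offline = [c for c in conversions if c["source"] == "offline"]
    latencies_h = [
        (datetime.fromisoformat(c["upload_time"]) - datetime.fromisoformat(c["event_time"]))
        .total_seconds() / 3600
        for c in offline
    ]
    return {
        "offline_coverage": len(offline) / len(conversions),
        "median_upload_latency_h": median(latencies_h) if latencies_h else None,
    }

rows = [{"source": "online"}] * 3 + [
    {"source": "offline", "event_time": "2026-03-01T10:00", "upload_time": "2026-03-02T09:00"},
    {"source": "offline", "event_time": "2026-03-01T12:00", "upload_time": "2026-03-04T12:00"},
]
print(offline_coverage(rows))  # 40% coverage; median latency ~47.5h, right at the 48h ceiling
```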

Recap — 10 KPIs, healthy thresholds, and reporting cadence

Warning:

never put all 10 KPIs on the same page. The human brain processes 4 to 7 simultaneous pieces of information before saturation — past that, nothing sticks. A report that tries to "show everything" is a report the client doesn't read. Hierarchize: 3 business on page 1, 5 performance on page 2, 2 tactical in the appendix for operators. In practice, an overloaded dashboard costs roughly 25-35% more client churn vs a hierarchized dashboard.

To industrialize this reporting without building a custom in-house dashboard, our SteerAds Reports module automatically wires the 10 KPIs to Google Ads, Meta, GA4, and your CRM, with margin blending, cohort LTV, and holdout incrementality built in. The free Looker Studio template (available on lookerstudio.google.com via our connectors) covers 80% of the standard case — the rest needs BigQuery or a dedicated tool. For advanced connectors, see the Looker Studio developers documentation.

To move from insight to implementation: launch a free audit (results in 72h) that diagnoses which of the 10 KPIs are currently missing or miswired on your account, then deploy our Reports module to automate weekly + monthly dashboards for your clients. To go deeper by vertical, see our B2B SaaS Google Ads strategy or our 2026 Google Ads e-commerce playbook. Additional external resources: Looker Studio support and Think with Google measurement.


FAQ

What client reporting cadence: weekly, monthly, or both?

Both, with distinct scopes. Weekly is tactical: spend, conversions, CPA/ROAS, anomalies, actions taken that week — 5 to 7 KPIs max, Looker Studio format or a condensed email. Monthly is business-level: margin ROAS, LTV:CAC, MER, incrementality, New customer rate, optimization roadmap — PDF format or a meeting with the decision-maker. Across our internal SteerAds benchmark (2,000+ accounts, 2025-2026), 68% of agencies send a decent monthly report but neglect the weekly one, which creates a 3-week gap between detecting a problem and acting on it. The right cadence is weekly operational + monthly strategic, strictly separated.

Should you show every KPI to the client or just 3-5?

The reflex to cram 10 KPIs onto one page kills reporting. SteerAds rule: 3 business KPIs on page 1 (margin ROAS, LTV:CAC or MER, New customer rate), 5 performance KPIs on page 2 (CAC, incrementality, funnel spend share, IS total/lost, weighted average QS), 2 tactical KPIs in the appendix (time to first conversion, offline coverage). The decision-maker stops at page 1 in 80% of cases. The operational client reads page 2. Both personas consume the same document without cognitive overload. Across our internal SteerAds benchmark, dashboards with 12+ indistinct KPIs show a 31% lower client retention rate.

Is Looker Studio enough, or do you need a paid tool?

Looker Studio covers 80% of Google Ads reporting needs, provided you accept 3 limits: no native margin ROAS (you'll need to blend product cost via Google Sheets or BigQuery), a 12-hour refresh ceiling on Google Ads data, and no real-time alerts. For multi-channel reporting (Meta, TikTok, LinkedIn, offline) or SKU-level margin measurement, a tool like SteerAds Reports automates the blending and pushes Slack/email alerts. Across our internal benchmark, 76% of agencies start on free Looker Studio and migrate to a dedicated tool past 15 accounts or $50k/month in spend.

How do you adapt reporting to client maturity?

Three tiers. Junior client (first Google Ads campaign): revenue, conversions, CPA, budget spent — 4 KPIs, plain-language vocabulary, 1 chart per KPI. Intermediate client (6-18 months of experience): add ROAS, IS, New customer rate, Quality Score — 7 KPIs, monthly PDF + weekly email. Senior client (CMO, head of growth, 2+ years): margin ROAS, LTV:CAC, MER, holdout incrementality, offline coverage — 10 KPIs, quarterly review session for strategic trade-offs. Across our internal SteerAds benchmark, 54% of agency client churn comes from reporting calibrated too low for a client who has become senior and wants more depth.

Ready to optimize your campaigns?

Start a free audit in 2 minutes and discover the ROI potential of your accounts.
