€10M per year of Google Ads spend on a single advertiser account places this profile in the top 0.5% of French Google Ads accounts by volume. In public Google Ads benchmarks, the French median for active B2B SaaS accounts sits around €35-80k/year of spend; the top quartile reaches €250-600k/year; accounts beyond €1M/year represent less than 4% of the panel; and accounts at €10M+/year are extremely rare: typically unicorns or hyper-growth SaaS publishers with an enterprise ICP.
This article is an anonymized long-form study of one of these accounts. €80M ARR, ACV €25k-150k, mid-market CIO ICP 200-2000 employees, 8 European markets covered, 5-person paid team, BigQuery + Looker Studio + server-side GTM data infrastructure, 12 custom Google Ads scripts in production. 18 months of observed steering, from the moment the account moved from €4M/year to €10M/year of spend. No company name. No precise figures that would reveal it. But everything else — org chart, MCC structure, budget allocation per channel, KPIs, tooling stack, incrementality workflow, mistakes made and corrected — is documented with the operational precision you never get in marketing case studies.
Why publish this? Because mid-market French B2B SaaS accounts that scale spend (€1M, €2M, €5M/year) almost all hit the same structural problems past a certain threshold, and public documentation on the real mechanics of a €10M/year account is, in 2026, basically nonexistent. What circulates at SEA conferences is almost always marketing, or "how we cut CPA by 40%" with no context. This article has a different aim: it describes the operational day-to-day of a €10M/year account, with the sharp positions on incrementality, cannibalization and naive metrics that are my trademark. For the Performance Max fundamentals that underpin part of the analysis, see our 2026 Performance Max guide. For the B2B SaaS pillar, see our B2B SaaS Google Ads strategy. To score the health of your acquisition (above 3 = healthy SaaS, under 1 = unprofitable), our LTV:CAC calculator returns the ratio plus an interpretation.
The context: mature B2B SaaS, €80M ARR, 5-person paid team
The publisher studied is a mature vertical B2B SaaS operating in the ITSM/IT Operations segment with sub-modules ServiceDesk, IT asset management and workflow automation. Consolidated ARR of €80M at the time of the study, sustained organic growth of +35% YoY over the last 24 months. Average ACV of €65k/year with bimodal distribution — a mid-market cluster at €25-45k and an enterprise cluster at €80-150k. Median sales cycle 90 days, P75 at 165 days, P90 at 240 days. Primary ICP: CIO or Head of IT Ops at 200-2000 employee companies; secondary ICP: Director Service Management at 2000+ employee enterprises.
8 European markets covered: France (historic market, ~28% of spend), UK (~22%), Germany (~18%), Spain (~9%), Italy (~7%), Netherlands (~6%), Sweden (~5%), Belgium (~5%). No US or APAC presence at this stage — strategic board decision to focus on Europe before intercontinental expansion. Multilingual site in 8 languages, local sales team in each major market (FR, UK, DE), centralized sales team for the others.
Annual Google Ads spend: €10M stable, with moderate seasonality — Q1 and Q3 slightly higher (early-year IT budget cycles and anticipated Q4 planning), Q4 slightly lower (common enterprise IT budget freeze). Monthly distribution varies from €700k (August, December troughs) to €1.1M (March, September peaks). No "always-on" campaign paused for seasonality — the account runs 365 days, the trade-off is on bid aggressiveness rather than on/off.
Why this study is exemplary for the SEA community:
- Maturity — 18 consecutive months of steering at €10M/year with stable team, production data infrastructure, operational incrementality workflow. Not a chaotic hyper-growth case.
- Relevant vertical — mid-market B2B SaaS is the vertical where most aspiring FR publishers scale spend. Learnings transfer.
- Internal team, not agency — 5 in-house paid people, not an external agency talking about its client. Total operational ownership.
- Clean tracking — active offline conversion imports, server-side tracking in production, quarterly incrementality holdout. The numbers we cite are measured, not extrapolated.
The 4 invariants observed over 18 months:
- Consolidated LTV:CAC stays at 4.2:1 median — slightly above the 4:1 board target, despite +60% budget pressure over the period (move from €6M/year to €10M/year of spend).
- The channel mix varies less than 8% over 18 months — 60% PMax, 25% Search, 10% YouTube, 5% Discovery is the stable allocation, validated quarterly by incrementality holdout.
- The qualified pipeline from Google Ads feeds 38 to 44% of the total new business pipeline — the remaining 56-62% comes from outbound SDR (~25%), organic SEO inbound (~22%), and partnerships and events (~15%). Google Ads is the largest individual channel but not a dominant one.
- The paid team didn't grow despite the doubling of spend — went from 4 to 5 people only, productivity gain came from automation (scripts, sGTM, automated incrementality workflow).
This 4th invariant is the most instructive. It indicates that beyond a certain maturity, scaling spend doesn't require scaling team — automation and tracking quality unlock the productivity lever. Many hyper-growth B2B SaaS companies make the opposite mistake: they hire 8-12 paid people for a €3M/year account because they didn't invest in upstream data infrastructure. Operational cost multiplied by 2-3x, no performance gain.
For the audit fundamentals that allow you to identify these invariants on your account, see our Google Ads audit checklist.
Org chart: Head of Paid + 4 specialists (precise roles, ratios)
The 5-person paid team is one of the most counterintuitive aspects of the account. The vast majority of growth-stage B2B SaaS over-recruit generalist "PPC managers" — each manager handles a mix of Search, PMax, YouTube, Display campaigns with similar KPIs. On this account, the structural choice was the opposite: 5 specialists across 5 complementary functions, with clear ownership by function and cross-functional collaboration.
Detailed org chart:
Detail of the 5 roles and their ratios:
Head of Paid Acquisition — profile with 8-12 years of B2B SaaS experience, ideally with a stint at 1-2 hyper-growth publishers. Strategic ownership: cross-channel budget allocation, quarterly board reporting, incrementality roadmap validation, team hiring and training. No direct campaign operations — delegates to the 4 specialists. Time ratio: 30% strategy/board, 30% team review and coaching, 20% cross-team relations (Sales, Product, Finance), 20% transversal projects (tracking overhaul, market expansion, infrastructure optimization).
PMax Specialist — profile with 4-7 years of PMax/Discovery experience in mature SaaS or e-commerce. Ownership of ~€6M/year of spend (60% of the account). Tasks: managing the 8 PMax campaigns (1 per European market), managing asset groups, audience signals, brand exclusions, PMax incrementality holdout, Asset Group Insights reporting. Critical: this role requires strong autonomy on measurement tools because PMax is opaque by default — fine analysis requires BigQuery, Looker Studio and custom scripts.
Search Specialist — profile with 4-6 years of non-brand Search experience on a mature account. Ownership of ~€2.5M/year of spend (25% of the account). Tasks: managing non-brand Search across 8 markets, match types, weekly negatives, Smart Bidding management, competitor monitoring. The role looks classic but its value lies in operational rigor: weekly negative adjustments, systematic Search Term Reports audit, Target CPA target calibration based on seasonality.
Brand & YouTube Specialist — profile with 3-5 years of YouTube/Demand Gen experience. Ownership of ~€1.5M/year of spend (15% of the account). Tasks: brand Search defense, YouTube Demand Gen, Discovery Ads, video creative coordination with the internal Brand team, YouTube holdout. Critical role for top-funnel and brand halo effect — often under-resourced by competitors who underestimate the YouTube impact in B2B SaaS.
Tracking & Data Engineer — profile with 5-9 years of tracking/data engineering experience, ideally with a stint in fintech or mature e-commerce. Cross-functional ownership across 100% of the account. Tasks: maintaining sGTM + Enhanced Conversions, BigQuery pipelines, Looker Studio dashboards, 12 custom Google Ads scripts, HubSpot offline conversion imports. This is the pivot role that unlocks productivity for the whole team — a €10M/year account without an internal Tracking & Data Engineer pays either in over-hiring campaign managers, in agency fees, or in degraded performance from broken tracking.
Why 5 people and not 8-12:
The team made the structural choice to invest massively in automation (scripts, BigQuery, sGTM, Looker) rather than headcount. Observed ratio: 1 paid person per €2M/year of spend, vs 1 person per €600-900k/year at competitors observed in public benchmarks. The 2.5-3x delta on headcount efficiency comes from:
- Automation: 12 custom Google Ads scripts handle 60-70% of routine optimizations (auto negatives, bid adjustments, anomaly alerts, budget pacing).
- Data infrastructure: BigQuery + Looker allow each specialist to produce self-service analyses without involving the central data team.
- Clean tracking: active offline conversions + Enhanced Conversions = reliable Smart Bidding signal, so less manual optimization needed.
- Specialization: each role develops vertical expertise over 18-24 months rather than superficial cross-channel knowledge.
The instructive counter-example: a mid-market FR B2B SaaS publisher observed in parallel with €3.2M/year spend had 6 paid people. Productivity: 1 person per €530k/year. Overall account performance: LTV:CAC 2.8:1, vs 4.2:1 on the studied account. Over-hiring didn't compensate for the absence of data infrastructure — on the contrary, it structurally slowed tool deployment because each manager defended their operational ownership. Lesson: scaling productivity goes through scaling infrastructure, not scaling headcount.
MCC structure: 1 MCC, 8 sub-accounts per European market
The account structure is the other strong invariant: 1 central MCC plus 8 Google Ads sub-accounts, 1 per European market (FR, UK, DE, ES, IT, NL, SE, BE). No sub-account per product, no sub-account per funnel stage, no sub-account per campaign type. This structural decision was made during the move from €4M/year to €10M/year and has never been questioned since; it is validated by experience.
Why 1 sub-account per market and not per product:
The studied B2B SaaS has 4 main product modules that often sell bundled (intra-account cross-sell after signing an initial module). A "1 sub-account per product" structure would have had two pathologies:
- Search cannibalization between sub-accounts — a prospect searching for "mid-market ITSM" would have been bid on by 2-3 sub-accounts of the same company, inflating CPC without adding pipeline. See the Google documentation on MCC structure.
- Fragmented Smart Bidding learning — each sub-account would have had fewer conversions, so less stable Smart Bidding, so higher CPA.
Conversely, 1 sub-account per market brings:
- Consolidated Smart Bidding learning per market — each sub-account aggregates 100% of the market's conversions, maximizing available signal.
- Clean per-market reporting — alignment with Sales organization (which is per market).
- Consistent currency and timezone per sub-account.
- Market-specific dayparting and seasonality — usage patterns vary between FR, UK, DE and Netherlands, and hour/day bid modifiers differ.
Hierarchy within each sub-account:
Each market sub-account has the same internal structure:
- 2-3 PMax campaigns — generally 1 all-modules PMax + 1 verticalized PMax if volume justifies + 1 retargeting PMax for existing customer upsell.
- 8-12 Search campaigns — divided by funnel stage (generic solution intent, specific solution intent, competitor intent) and main product module.
- 1-2 brand Search campaigns — exact match defense, plus 1 brand variant.
- 2-3 YouTube Demand Gen campaigns — top-funnel corporate ICP.
- 1 Discovery campaign — secondary, monitored for drift.
Total ~15-20 campaigns per sub-account, so 120-160 campaigns total across the 8 markets. That's a lot but manageable because the structure is standardized — each market applies the same grid, allowing custom scripts to operate uniformly cross-market.
The 3 structural decisions validated by 18 months of observation:
- No shared Customer Match list across sub-accounts — each sub-account maintains its own market-specific Customer Match list. It is more work (8 lists to maintain) but necessary, because CRM bases are segmented by market and hash quality varies.
- No single pan-European campaign — the temptation of a centralized "EU campaign" to save headcount was tested for 3 months in 2024. Result: CAC +28% vs the per-market structure, because seasonalities, languages and bidding patterns were artificially smoothed. Decision: return to 1 sub-account per market. See our multi-account MCC strategy guide.
- Currency hedging at MCC level, not sub-account level — each sub-account bills in local currency (EUR for FR/DE/IT/ES/BE/NL, GBP for UK, SEK for SE). Currency-fluctuation hedging is managed at corporate Finance level, not at Google Ads sub-account level, which isolates FX variations from daily paid steering.
Cross-sub-account management workflow:
- MCC level Head of Paid — aggregated view of 8 sub-accounts, budget allocation, anomaly alerts.
- Specialist level — cross-sub-account access by specialty (PMax Specialist sees all 8 PMax campaigns, Search Specialist sees all 8 sets of non-brand Search campaigns, etc.).
- Sales/Customer Success sub-account ownership — each market sub-account has a Sales referent who shares pipeline info and lead quality. Weekly 30min coordination between paid and sales per major market.
The 1 sub-account per product structure seems logical in multi-module B2B SaaS, but it fragments Smart Bidding learning and creates inter-sub-account Search cannibalization. On referenced accounts, the per-market structure beats the per-product structure in 8 out of 10 cases — unless products have strictly disjoint audiences (e.g. a publisher with a B2B module and a B2C module, or a publisher with an enterprise module and an SMB freemium module). The golden rule: segment at the level that maximizes Smart Bidding learning without creating audience overlap.
Budget allocation: 60% PMax, 25% Search, 10% YouTube, 5% Discovery
Channel budget allocation is one of the most controversial figures to publish. The dominant 2026 position among players who speak at conferences is that "PMax should be 30-40% maximum, Search remains the pillar." The studied account contradicts this position: PMax represents 60% of annual spend, i.e. €6M/year. Non-brand Search 25%, YouTube Demand Gen 10%, Discovery Ads 5%. This allocation has been validated by 6 successive quarterly incrementality holdouts over 18 months, per market and per channel.
The central question: why 60% PMax?
Three cumulative reasons explaining the decision:
- PMax benefits from non-replicable inventories — Discover feed, YouTube Shorts in-feed, Gmail Promotions, Maps, Search partners, premium Display Network. On mid-market enterprise B2B SaaS, these inventories reach ICP prospects (CIOs scrolling the Discover feed, Heads of IT Ops checking Gmail) who would never have typed the equivalent Search query. Without PMax, these prospects are not addressable on Google Ads. Competitors are not going after them either; it is a silent competitive advantage.
- PMax's algorithm optimizes on closed-won deals, not MQLs — thanks to operational HubSpot offline conversion imports, the algorithm sees the final signal and directs budget toward segments that actually produce deals. Without this tracking infrastructure, PMax would effectively optimize on noisy MQLs and the 60% ratio would be unsustainable. See our offline conversions CRM Google Ads guide.
- The quarterly incrementality holdout validates the performance — across the 6 successive geo holdouts, measured PMax incrementality sits between 68% and 78% depending on the quarter, which is high. Brand Search cannibalization stays below 12% thanks to active Brand Exclusions. PMax is therefore genuinely incremental, not an optical illusion. For the precise technical parameters of Brand Exclusions and audience signals, see the official Performance Max documentation, which details the control options available since 2024.
Critical reading of the figures:
- Lowest CAC: brand Search at €320 — but with 35-55% incrementality, due to strong organic SEO cannibalization. The majority of brand Search conversions would have arrived without brand Google Ads, via SEO or direct navigation. The 5% spend ratio is therefore deliberately capped; increasing the brand Search budget would produce waste.
- Highest CAC: Discovery Ads at €2,800 with 35-58% incrementality — the worst-performing and most monitored channel. Kept at 3% of spend for two reasons: a measured top-funnel halo effect (prospects exposed to Discovery convert +18% on Search 30 days later, observable via BigQuery cohort tracking), and an alternative inventory signal if another major channel drifts. Quarterly monitoring is mandatory; if incrementality drops below 30%, Discovery is cut.
- Non-brand Search remains the channel with the highest LTV:CAC — 4.8:1 median over 24 months, vs 4.1:1 on PMax and 3.2:1 on YouTube. Expected, since non-brand Search captures the most explicitly expressed commercial intent. The reason it represents only 25% of spend and not 50% is a volume ceiling: relevant non-brand Search on mid-market ITSM/IT Ops represents finite monthly demand in France (~4,000-7,000 ICP-target queries/month according to Google Keyword Planner on the keyword cluster). Beyond €2.5M/year of non-brand Search spend on this vertical, marginal return collapses; each additional euro produces less than an additional euro on PMax, which reaches near-inexhaustible latent inventory.
- PMax is the only channel with uncapped scalability on this profile — which is what justifies its 60% ratio. If the board asks tomorrow for a move to €15M/year of spend, PMax will absorb the €5M delta, not non-brand Search, which is already at saturation.
Per-market allocation varies around the consolidated median:
- France: 58% PMax, 27% Search, 10% YouTube, 5% Discovery (Search slightly higher because FR keyword coverage is most mature).
- UK: 62% PMax, 24% Search, 10% YouTube, 4% Discovery (PMax dominant because UK Search competition is very aggressive on ITSM).
- Germany: 55% PMax, 28% Search, 13% YouTube, 4% Discovery (YouTube performs better in DE — strong B2B video culture).
- Spain, Italy: 65% PMax, 22% Search, 8% YouTube, 5% Discovery (less mature Search, PMax compensates).
- Netherlands, Sweden, Belgium: 58% PMax, 26% Search, 11% YouTube, 5% Discovery.
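The per-market shares above all stay within a few points of the consolidated 60/25/10/5 target; the article later mentions an alert when a channel drifts more than 5 points from that target. A minimal sketch of that check, with the targets and threshold taken from this article; the function name and structure are my own illustration:

```python
# Sketch: flag channels whose share of spend drifts more than a threshold
# (in percentage points) from the target mix. Targets and the 5-point
# threshold come from the article; everything else is illustrative.

TARGET_MIX = {"pmax": 60.0, "search": 25.0, "youtube": 10.0, "discovery": 5.0}

def mix_drift(actual_mix, target_mix=TARGET_MIX, threshold_pts=5.0):
    """Return {channel: drift_in_points} for channels beyond the threshold."""
    alerts = {}
    for channel, target in target_mix.items():
        drift = actual_mix.get(channel, 0.0) - target
        if abs(drift) > threshold_pts:
            alerts[channel] = round(drift, 1)
    return alerts

# Germany's observed mix (55/28/13/4) stays within the 5-point band:
de_mix = {"pmax": 55.0, "search": 28.0, "youtube": 13.0, "discovery": 4.0}
print(mix_drift(de_mix))  # → {}
```

Every per-market mix listed above passes this check, which is exactly why the allocation is described as a stable invariant.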
Critical position on the "PMax overrated" view circulating in 2026:
Many voices in 2026 (consultants, agencies, conference speakers) hold a "PMax is a trap, keep 70% on Search" position. In most observed cases, this is a defensive stance from players who lack the tracking infrastructure to steer PMax cleanly. Without active offline conversions, without Brand Exclusions, without a regular incrementality holdout, PMax is indeed a trap — and the "PMax at 30% max" position is rational for that account profile. But with a clean tracking infrastructure, PMax delivers the highest incrementality on the account, empirically validated by geo holdouts. The question is not "is PMax good or bad?" but "do you have the infrastructure to steer PMax without optical illusions?". See our Discovery Ads and incremental truth analysis, which details the typical over-attribution patterns on automated formats and which we apply against the dominant position.
Weekly / monthly / quarterly KPIs: the steering pyramid
Steering a €10M/year account requires a 3-level KPI pyramid — weekly operational, monthly tactical, quarterly strategic. Confusing the levels is the most frequent steering mistake: making strategic decisions on noisy weekly KPIs, or missing operational drifts because you only look at the quarter. The discipline of separation is what distinguishes €10M/year steering from €1M/year steering.
Level 1 — Weekly operational (50+ KPIs monitored):
Owners: 4 specialists. Frequency: 30min weekly paid team review Monday morning + real-time Looker dashboard. Associated decisions: bid adjustments, negative additions, pause underperforming creatives, daily pacing adjustments.
KPIs tracked:
- Weekly conversion volume per channel and market vs forecast — alert if gap above 15%.
- Weekly CAC per campaign — alert if gap above 20% vs target.
- Search Term Reports — systematic weekly review to detect match type drifts and negative opportunities.
- Average Quality Score per ad group — alert if drop of 1+ point on major ad groups.
- Daily budget pacing per sub-account — alert if drift above 15% vs monthly target.
- Anomaly detection scripts — 1 custom Google Ads script run every 4h detects CPC spikes, CTR drops, conv rate drops on major campaigns.
- Creative performance — weekly review of PMax and Demand Gen assets; the bottom 20% of assets by performance are paused.
- Search competitor intelligence — weekly review of new competitors appearing in Auction Insights.
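The production anomaly script behind these alerts is not published in this article, but the core spike-detection idea can be sketched as a trailing z-score check on a daily metric. The window size and z-threshold below are illustrative assumptions, not the account's actual values:

```python
# Minimal sketch of the spike-detection idea behind the anomaly alerts:
# compare today's CPC to a trailing baseline and flag large deviations.
# Window and threshold are hypothetical, not the account's values.
from statistics import mean, stdev

def cpc_anomaly(history, today, z_threshold=3.0):
    """history: trailing daily CPCs; returns True if today is an outlier."""
    if len(history) < 2:
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > z_threshold

baseline = [4.1, 4.3, 3.9, 4.2, 4.0, 4.4, 4.1]  # last 7 days, EUR
print(cpc_anomaly(baseline, 4.2))   # normal day → False
print(cpc_anomaly(baseline, 9.8))   # CPC spike → True
```

The same pattern applies to CTR drops and conversion-rate drops; only the metric series changes.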
Level 2 — Monthly tactical (15-20 consolidated KPIs):
Owner: Head of Paid. Frequency: 90min monthly review with team + Sales + Customer Success first week of month. Associated decisions: marginal adjustment of allocation per channel and per market (±5%), Smart Bidding target adjustment, validation of next month's roadmap. Our MER calculator (Marketing Efficiency Ratio) measures overall marketing efficiency, not channel by channel.
KPIs tracked:
- Average monthly CAC per channel and per market — compared to target and 3-month trend.
- LTV:CAC at D+90 per channel and per market — primary monthly KPI.
- MQL/SQL/Opportunity/Closed-Won mix per channel — detects if a channel produces junk MQLs.
- Monthly budget pacing vs quarterly forecast — alert if drift above 8%.
- Consolidated Quality Score per sub-account.
- Cannibalization index PMax vs brand Search — calculated via custom formula based on month-over-month brand Search uplift.
- Qualified new business pipeline from Google Ads — shared with Sales for coherent pipeline validation.
- Velocity of signed deals from Google Ads — average MQL → Closed-Won time, alert if extension above 20%.
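The cannibalization index in the list above relies on a custom formula that the article does not disclose. The sketch below is a hypothetical stand-in for the month-over-month brand-uplift idea; the function name, the baseline averaging and the demand-trend adjustment are my assumptions, not the account's actual formula:

```python
# Hypothetical stand-in for the undisclosed cannibalization formula:
# compare observed brand Search conversions to an expected baseline
# (prior months, adjusted for overall demand trend) and read the
# shortfall as the share cannibalized by PMax.
from statistics import mean

def cannibalization_index(brand_history, brand_observed, demand_trend=1.0):
    """brand_history: prior monthly brand Search conversions.
    demand_trend: overall market growth factor (1.0 = flat)."""
    expected = mean(brand_history) * demand_trend
    if expected <= 0:
        return 0.0
    return max(0.0, round((expected - brand_observed) / expected, 3))

# 3-month baseline of ~200 conv/month, flat demand, 178 observed → 0.11,
# i.e. under the ~12% ceiling the article reports with Brand Exclusions on.
print(cannibalization_index([205, 195, 200], 178))  # → 0.11
```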
Level 3 — Quarterly strategic (5-8 synthetic KPIs):
Owner: Head of Paid + board members (CRO, CFO, CEO). Frequency: 2h Quarterly Business Review (QBR). Associated decisions: cross-channel budget allocation for next quarter, market expansion validation, incrementality roadmap validation, ICP target adjustment if strategic pivot.
KPIs tracked:
- Incrementality holdout per channel over 4 weeks — raw result + interpretation.
- Consolidated LTV:CAC on rolling 12 months.
- Pipeline contribution from Google Ads % of total new business pipeline.
- Spend efficiency trend — annualized closed-won ARR / annualized spend, on rolling 12 months.
- Cohort ROAS by quarter acquired — performance of cohorts acquired per quarter, measured at 6, 12, 18 months post-acquisition.
- Channel mix validation — alignment with board strategy (60/25/10/5).
- Consolidated competitive intelligence — share of voice vs direct competitors.
The classic trap: confusing levels. Many mid-market B2B SaaS publishers look at weekly CAC daily and make strategic reallocation decisions on every variation. Result: over-reaction to statistical noise, chronic campaign instability, Smart Bidding learning phase constantly relaunched. On a €10M/year account, strategic decisions are made quarterly, period. Weekly noise is handled at the weekly level by operational micro-adjustments that don't disturb the global mix.
For KPI fundamentals and client reporting adapted to more modest contexts, see our 10-KPI Google Ads client reporting guide.
Tooling stack: BigQuery + Looker Studio + sGTM + 12 custom scripts
The tooling stack is the most profitable investment of the account over 18 months. The annual infrastructure cost (BigQuery storage and queries, Looker Studio Pro, server-side GTM hosting, dev maintenance time) is on the order of €180-260k/year — i.e. ~2.2% of annual Google Ads spend. What it unlocks:
- Paid team productivity 2.5-3x vs structure without infrastructure (validated by benchmark against observed competitors).
- Clean offline conversion tracking that makes Smart Bidding reliable — which justifies the 60% PMax allocation.
- Automated incrementality holdout which makes budget arbitration non-naive.
- Institutional-quality board reporting that makes the board approve budgets without friction.
Stack components and their roles:
The 12 custom Google Ads scripts and their functions:
- Anomaly Detector — detects CPC spikes or CTR drops on major campaigns, real-time Slack alert.
- Negative Keywords Auto-Generator — analyzes Search Term Reports, suggests new negatives, automatically applies after weekly validation.
- Pacing Budget Monitor — monitors daily pacing per sub-account, alerts if drift above 15% vs monthly target.
- Quality Score Tracker — measures weekly QS per major ad group, alerts on 1+ point drop.
- Holdout Incrementality Setup — facilitates the setup of quarterly geo holdouts (exclusions, measurement, automated comparison).
- Auction Insights Crawler — monthly Auction Insights data collection, detects new competitors, shares team report.
- PMax Asset Performance Auditor — analyzes PMax asset performance, suggests pause/promote.
- Brand Search Cannibalization Calculator — monthly calculation of PMax → brand Search cannibalization index.
- Geo Performance Optimizer — detects geographic over- or under-performance, suggests bid modifiers.
- Day-Parting Optimizer — analyzes hour/day performance per campaign, suggests custom dayparting.
- Customer Match Refresher — weekly Customer Match list refresh from HubSpot.
- Reporting Aggregator — consolidates 8 sub-account data into a unified report for Looker Studio.
These scripts are stored in an internal Git repo, versioned, code-reviewed before deployment. It's not "copy-paste from a blog" — it's production engineering. Maintenance represents ~15-20% of the Tracking & Data Engineer's time. For an introduction to ready-to-use Google Ads scripts covering basic functions, see our 10 ready-to-copy scripts guide.
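As an illustration of the Pacing Budget Monitor's logic, here is a minimal sketch against linear pacing. Only the 15% alert threshold comes from this article; the function and figures are assumed for the example:

```python
# Sketch of the pacing logic: compare month-to-date spend against a
# linear expectation of the monthly budget and alert past a threshold.
# The 15% threshold is the one cited in the article; the rest is
# illustrative.
def pacing_drift(spend_mtd, monthly_budget, day_of_month, days_in_month):
    """Return (drift_ratio, alert) where drift is vs linear pacing."""
    expected = monthly_budget * day_of_month / days_in_month
    drift = (spend_mtd - expected) / expected
    return round(drift, 3), abs(drift) > 0.15

# Hypothetical FR sub-account: €900k monthly budget, day 10 of 30,
# €380k already spent → 26.7% over linear pace, alert fires.
drift, alert = pacing_drift(380_000, 900_000, 10, 30)
print(drift, alert)  # → 0.267 True
```

In production this would run per sub-account and post to Slack, as described above; a linear expectation is the simplest baseline, and a seasonality-weighted curve would be the natural refinement.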
Server-side GTM in production — the tracking pivot:
The GTM server is hosted on Google Cloud Run, custom domain gtm.[domain].com. Architecture:
- GTM Client — fires events from the website and authenticated app.
- GTM Server — receives, enriches, deduplicates, distributes to Google Ads, GA4, HubSpot, Microsoft Ads.
- Enhanced Conversions — server-side SHA-256 email hash, sent with every conversion to Google Ads for cross-device matching.
- MSCLKID + GCLID + LinkedIn Click ID stored in 90-day first-party cookies, injected into form hidden fields, surfaced in CRM.
- Offline Conversions Pipeline — HubSpot webhook triggered on deal-closed-won → BigQuery → Google Ads Offline Conversions API.
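The Enhanced Conversions hash mentioned in the architecture follows Google's documented normalize-then-SHA-256 scheme. A minimal sketch; verify the exact normalization rules against the current Google Ads documentation before relying on them (the Gmail dot-stripping rule in particular):

```python
# Sketch of the server-side Enhanced Conversions hash: normalize the
# email (trim, lowercase; Google's docs also strip dots in the local
# part of gmail addresses), then SHA-256 hex-encode.
import hashlib

def hash_email_for_ec(email):
    email = email.strip().lower()
    local, _, domain = email.partition("@")
    if domain in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "")
    return hashlib.sha256(f"{local}@{domain}".encode("utf-8")).hexdigest()

# Both inputs normalize to the same address, so the hashes match,
# which is what makes cross-device matching work:
print(hash_email_for_ec("  Jane.Doe@Gmail.com ") ==
      hash_email_for_ec("janedoe@gmail.com"))  # → True
```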
This infrastructure eliminates 4 structural problems that most mid-market B2B SaaS underestimate:
- Cookie blockers + iOS 18+ — server-side tracking recovers the 25-40% of conversions that client-side tracking loses.
- Cross-device deduplication — Enhanced Conversions matches a user who clicked the ad on mobile then converted on desktop.
- Long-cycle latency — offline conversions surface the closed-won deal that arrives 90-180 days after initial click.
- Quality Smart Bidding signal — the algorithm optimizes on real deals rather than noisy MQLs, which divides real CAC by 1.4 to 2 compared with MQL-only optimization.
For server-side tracking setup details, see our 2026 server-side tracking GTM guide.
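The HubSpot-to-Google-Ads step of the offline pipeline boils down to joining closed-won deals to their stored GCLID and emitting upload rows. A simplified sketch with hypothetical field names; the real pipeline transits through BigQuery and the Offline Conversions API rather than in-memory dicts:

```python
# Illustrative sketch of the deal → offline-conversion step. Field names
# ('gclid', 'amount_eur', 'closed_at', 'closed_won') are hypothetical.
from datetime import datetime, timezone

def build_conversion_rows(deals, conversion_action="closed_won"):
    """deals: dicts with 'gclid', 'amount_eur', 'closed_at' (aware datetime)."""
    rows = []
    for d in deals:
        if not d.get("gclid"):        # deal not attributable to a Google click
            continue
        rows.append({
            "gclid": d["gclid"],
            "conversion_action": conversion_action,
            "conversion_time": d["closed_at"].strftime("%Y-%m-%d %H:%M:%S%z"),
            "value_eur": d["amount_eur"],
        })
    return rows

deals = [
    {"gclid": "Cj0KCQ_example", "amount_eur": 65_000,
     "closed_at": datetime(2025, 3, 14, 9, 30, tzinfo=timezone.utc)},
    {"gclid": None, "amount_eur": 40_000,  # e.g. outbound-sourced deal
     "closed_at": datetime(2025, 3, 20, 11, 0, tzinfo=timezone.utc)},
]
print(len(build_conversion_rows(deals)))  # → 1 (non-attributed deal skipped)
```

The 90-day first-party cookie described above is what guarantees a GCLID is still available when the deal closes months after the initial click.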
Mid-market B2B SaaS that scale from €1M to €5M/year in spend without simultaneously investing in tracking infrastructure (sGTM + offline conversions + BigQuery) systematically end up with a degraded account. Real CAC 1.5-2.5x higher than displayed CAC (because Smart Bidding optimizes on noisy MQLs), declining ROAS, board questioning paid allocation. The rule: for each additional euro invested in Google Ads spend beyond €1M/year, allocate 3-5 cents to tracking infrastructure. That's the observed ratio that maintains performance at scale.
Incrementality holdout workflow: 1 per quarter, 4 weeks
The incrementality holdout workflow is what distinguishes the €10M/year account from the aspiring €2M/year account. Most growth-stage mid-market B2B SaaS publishers never run an incrementality holdout. They steer on the apparent ROAS reported by platforms, which over-attributes by 15 to 32% depending on channel. Consequence: they make budget reallocation decisions biased by construction. On a €10M/year account, that's €1.5 to €3.2M/year of potentially misallocated budget — the equivalent of 5-10 paid people's annual salary. Holdout discipline becomes mandatory.
The rhythm: 1 holdout per quarter, per channel:
- Q1: PMax holdout on FR market (the largest sub-account).
- Q2: YouTube Demand Gen holdout on DE market (where YouTube performs best — robustness test).
- Q3: non-brand Search holdout on UK market (validation that Search remains incremental on competitive market).
- Q4: Discovery Ads holdout on ES market (validation of the most marginal channel).
This rotating schedule means each major channel is tested at least once a year, and each major market roughly once over 24 months. Coverage is exhaustive without operational overload.
Standardized holdout methodology:
- Step 1 — Holdout zone selection (D-7): choose 1 representative market region (8-15% of usual conversion volume), geographically isolable, without extreme seasonal bias on the quarter.
- Step 2 — Pause channel on test zone (D+0): geographic exclusion at campaign level, other channels run normally, baseline 14 days before test documented.
- Step 3 — 28-day consecutive measurement (D+1 to D+28): conversion volume per day, per channel, per test zone vs control. Daily BigQuery export.
- Step 4 — Normalization and incrementality calculation (D+29 to D+35): seasonality neutralization, cross-channel cannibalization calculation, real vs claimed incrementality calculation.
- Step 5 — Decision (D+36): budget arbitration for next quarter based on measured incrementality.
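Step 4 above can be sketched numerically: normalize the holdout zone against the control zone using the pre-test baseline ratio, estimate the conversions that actually disappeared while the channel was paused, and divide by what the platform claims for that zone. The figures below are invented for illustration:

```python
# Sketch of the step-4 incrementality calculation. All numbers are
# invented; the real workflow runs on daily BigQuery exports.
def measured_incrementality(test_baseline, control_baseline,
                            test_during, control_during, platform_claimed):
    ratio = test_baseline / control_baseline    # test zone size vs control
    expected = control_during * ratio           # test zone if channel stayed on
    lost = max(0.0, expected - test_during)     # truly incremental conversions
    return round(lost / platform_claimed, 2)

# Test zone normally ~10% of control; channel paused 28 days; the
# platform claimed 120 conversions for the zone, but only ~90 actually
# disappeared → 75% measured incrementality:
print(measured_incrementality(
    test_baseline=50, control_baseline=500,
    test_during=160, control_during=2500,
    platform_claimed=120))  # → 0.75
```

A real implementation would also neutralize seasonality and cross-channel cannibalization, as step 4 specifies; the baseline ratio is only the first normalization.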
Results observed from the last 6 holdouts:
What these results tell us:
- Claimed incrementality is systematically higher than measured incrementality — the delta varies from 12 points (UK non-brand Search) to 58 points (FR brand Search, SEO cannibalization). This is the structural over-attribution of Google Ads platforms, and it exists on all channels to varying degrees.
- PMax with a clean tracking infrastructure is the channel with the best ratio — 72-78% measured incrementality across 2 holdouts. Without clean tracking, this ratio would collapse to 40-55%, based on our observations on other audited accounts.
- YouTube Demand Gen has 65% average incrementality — moderate, but enough to justify 10% of the mix given the measured top-funnel halo effect.
- Brand Search has the lowest incrementality, at 48% — most brand Search conversions would have arrived via SEO or direct navigation. Maintaining it at 5% of the mix is intentional (competitive defense against brand conquesting by competitors), but increasing it would produce net waste.
- Discovery Ads ES at 42% incrementality triggered a partial cut — the channel isn't dead, but its contribution no longer justifies 5% of the mix. Reduced to 2% after Q4 2024.
Critical position on the industry that doesn't run holdouts:
In most referenced accounts, this is the most expensive and most invisible flaw. A €5M/year spend account without incrementality holdout systematically makes biased trade-offs. The defensive position I hear at conferences — "holdouts are complicated to set up, they disturb campaigns" — is, in 8 out of 10 cases, a justification for not facing the truth that half of the allocated budget isn't really incremental. On a mature account with budget above €1M/year, running 1 holdout per quarter isn't a nice-to-have, it's the validity condition of any strategic steering.
For methodological details on setting up a geo incrementality holdout, see our Discovery Ads and incremental truth analysis which details the step-by-step procedure transposable to any channel.
Board reporting: what surfaces, what doesn't
The quarterly board reporting is a discipline in itself. On a €10M/year account, the board (CEO, CFO, CRO) expects a synthetic but critical reading of paid acquisition — not a vanity-metrics dashboard. The rule: what surfaces to the board must be actionable at board level. Tactical detail (CPA per campaign, underperforming creatives, added negatives) stays at Head of Paid level and doesn't pollute the board conversation.
The 7 synthetic KPIs that surface to the quarterly board:
- Consolidated spend efficiency — annualized closed-won ARR / annualized spend. Target 4:1 minimum, ideally 4.5:1. Rolling 4-quarter trend.
- Consolidated LTV:CAC over rolling 12 months — per market segment and global, broken down by acquisition cohort to make recent cohort quality visible.
- Pipeline contribution from Google Ads — % of qualified new-business pipeline sourced from Google Ads. Target 35-45% per board strategy.
- Holdout incrementality summary — the quarter's holdout result plus a 1-paragraph interpretation. No methodological detail, just the actionable result.
- Channel mix validation — alignment with board strategy (60% PMax, 25% Search, 10% YouTube, 5% Discovery). Alert if any channel drifts more than 5 points.
- Cohort ROAS by quarter acquired — performance of cohorts acquired each quarter, measured at 6, 12 and 18 months post-acquisition. The primary indicator of long-term pipeline quality.
- Competitive share of voice — consolidated Auction Insights vs main direct competitors. Trend over 4 quarters to make competitive pressure visible.
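The 5-point mix-drift alert is mechanical enough to sketch in a few lines. This is a hypothetical helper, assuming spend is already aggregated per channel; the target shares are the board mix from the list above.

```python
# Board-approved channel mix (shares of total spend).
BOARD_TARGET_MIX = {"pmax": 0.60, "search": 0.25, "youtube": 0.10, "discovery": 0.05}

def mix_drift_alerts(spend_by_channel, target_mix=BOARD_TARGET_MIX, threshold_pts=5.0):
    """Return {channel: drift in points} for channels whose actual spend
    share deviates from the board target by more than the threshold."""
    total = sum(spend_by_channel.values())
    alerts = {}
    for channel, target_share in target_mix.items():
        actual_share = spend_by_channel.get(channel, 0.0) / total
        drift_pts = (actual_share - target_share) * 100  # percentage points
        if abs(drift_pts) > threshold_pts:
            alerts[channel] = round(drift_pts, 1)
    return alerts
```

For example, a quarter with PMax at 67% of spend and everything else near target would surface a single +7-point PMax alert.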
What doesn't surface to the board (deliberately):
- CPA/CAC per campaign — Head of Paid level, not board. The board cares about consolidated CAC and LTV:CAC, not the campaign-by-campaign detail.
- Quality Score — specialist level, not board. QS is an input, not an outcome — it doesn't deserve the board table.
- Detailed Search Term Reports — Search Specialist level, not board.
- Bid modifiers and match types — operational level, not board.
- Per-creative performance — specialist level, not board.
This "the board only sees synthetic outcomes" discipline is what maintains a healthy paid-board relationship. When the board sees too much tactical detail, it starts to micro-manage operational trade-offs — and steering loses strategic coherence.
QBR (Quarterly Business Review) format:
- Slide 1 — Spend & ROAS quarter summary + rolling 4-quarter trend.
- Slide 2 — LTV:CAC + Pipeline contribution + Cohort ROAS.
- Slide 3 — Quarterly incrementality holdout + interpretation.
- Slide 4 — Channel mix validation + significant variations explained.
- Slide 5 — Competitive share of voice + 2-3 takeaways.
- Slide 6 — Next quarter roadmap: 3 priorities, proposed budget, expectations.
- Slide 7 — Board questions.
Total ~45-60 minutes, of which 30 minutes of Q&A. No magic "hero metric" slide. No unrealistic promises. Assumptions are stated head-on: "if tracking continues to degrade with iOS 19+, we lose 8-12% of visible conversions — here is our mitigation plan." The board appreciates this candor — it's worth far more than a context-free "+18% YoY" slide.
The "what doesn't surface either" discipline is just as important:
- Raw Google Ads ROAS — the ROAS everyone displays at SEA conferences. We don't put it in front of the board because it over-attributes by 15-30% per channel and pushes toward bad strategic decisions. On accounts observed in public Google Ads benchmarks, the gap sits between 12% (mature non-brand Search) and 38% (PMax without Brand Exclusions) — the magnitude depends mostly on tracking infrastructure maturity. What surfaces is measured incrementality, not claimed ROAS.
- "CPA down by X%" — taken in isolation, this metric is misleading. A declining CPA can mean (a) successful optimization, (b) a volume drop (the algorithm focuses on the cheapest conversions and cuts the rest), (c) seasonality, or (d) degraded tracking. The board receives CAC + volume + pipeline contribution, never isolated CPA.
- PMax vanity metrics — noisy Asset Group Insights, non-deduplicated all-type conversion counts, etc. The only conversion figure that surfaces to the board is closed-won deals via offline import.
Mistakes avoided and 18 months of learnings
The most precious learning of a €10M/year account isn't in success stories — it's in narrowly avoided mistakes and difficult corrections. Here are 8 rare operational learnings, documented over 18 months.
Learning 1 — The 1 sub-account per product structure was tested then abandoned in 2024.
For 4 months in 2024, the account experimented with a per-product split: 4 sub-accounts (1 per module) for the FR market instead of a single multi-module sub-account. Hypothesis: better reporting granularity and per-module optimization. Result: FR CAC +22%, inter-sub-account Search cannibalization (the same keywords targeted by 2-3 sub-accounts simultaneously), fragmented Smart Bidding learning. Decision: return to the 1 sub-account per market structure, validated by a post-rollback holdout.
Learning 2 — The single pan-European campaign was tested and failed.
Logical temptation: save headcount by consolidating the 8 sub-accounts into a single multi-language "EU account." Tested for 3 months in Q3 2024. Result: global CAC +28%, artificially smoothed seasonalities, Smart Bidding learning on a signal blended across very different markets. Decision: return to the 1 sub-account per market structure, with additional automation investment to compensate for the cost of fragmentation.
Learning 3 — The PMax without Brand Exclusions trap cost ~€340k in cannibalization Q1 2024.
PMax was launched in Q4 2023 without Brand Exclusions being activated immediately (the feature was available but forgotten). For 11 weeks, PMax bought brand queries at a magnificent apparent ROAS (~12:1) but near-zero real incrementality (measured by a post-correction holdout). Estimated cost of the mistake: ~€340k of partially wasted spend. Correction: Brand Exclusions activated, plus brand Search consolidated in a dedicated Exact Match campaign with a non-shared budget. Learning: Brand Exclusions must be activated from D+0 on PMax — never left off the checklist.
Learning 4 — Smart Bidding optimized on MQL before offline conversions underperformed.
For the first 6 months after the move to €10M/year, Smart Bidding was optimizing on MQLs (demo form submitted). The algorithm found a way to push top-funnel traffic that produced lots of junk MQLs. Real closed-won CAC measured at 2.3x the displayed CPA. Correction: deployment of offline conversion imports via the HubSpot connector, and a Smart Bidding switch to optimizing on closed-won deals. Measured effect: real CAC divided by 1.7 in 90 days, at constant budget. Learning: on 60-180-day sales cycles, optimizing Smart Bidding on the wrong signal costs more than no Smart Bidding at all.
Learning 5 — The July-August period requires custom dayparting.
Pattern observed in 2025-2026 aggregated Google Ads data: mid-market B2B conversions drop 32-45% in July-August in the FR/IT/ES markets (widespread school holidays) and 15-22% in DE/UK/SE. Across observed B2B verticals, the gap ranges from 18% (critical SaaS infrastructure with permanent on-call) to 55% (management-oriented productivity SaaS). Standard Smart Bidding doesn't adapt fast enough — it learns over 7-14 days, so half of July is lost drifting. Solution: custom dayparting programmed manually each year (-30 to -50% bid modifiers in July-August on certain non-brand Search campaigns, PMax maintained with a reduced budget), based on 2 years of history. See our worldwide Google Ads budget seasonality guide.
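The manual seasonal scheme can be sketched as a simple lookup. The values below are illustrative only, picked from the ranges cited above; they are not the account's actual modifiers, and the market codes are the ones used throughout this article.

```python
# Hypothetical July-August bid modifiers per market (fractions, not percent).
# FR/IT/ES sit in the deep holiday trough; DE/UK/SE in the shallower one.
SUMMER_BID_MODIFIERS = {
    "FR": -0.45, "IT": -0.40, "ES": -0.40,
    "DE": -0.20, "UK": -0.20, "SE": -0.20,
}

def summer_bid_modifier(market, month, modifiers=SUMMER_BID_MODIFIERS):
    """Return the manual seasonal bid modifier for a market in a given
    month (1-12); zero outside July-August or for unlisted markets."""
    if month in (7, 8):
        return modifiers.get(market, 0.0)
    return 0.0
```

In practice the real account applies this through a Google Ads script against a 2-year history; the lookup only illustrates the shape of the rule.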
Learning 6 — Lowering Target CPA target too fast crushed volume.
In Q1 2024, the team tried to lower the FR non-brand Search Target CPA from €2,200 to €1,600 in 6 weeks (-27%). Result: FR non-brand Search conversion volume -42% in 4 weeks, the algorithm cutting serving on auctions whose expected cost exceeded the target. Correction: target raised back to €1,950, then a gradual descent in 10% steps every 2 weeks. Volume recovered in 6 weeks; final stable target €1,700. Learning: never lower a Target CPA by more than 10% per step, and never more than one step every 2 weeks. This discipline is non-negotiable.
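The step-down discipline can be sketched as a schedule generator — a hypothetical helper, not the account's actual tooling:

```python
def tcpa_descent_schedule(current, target, max_step_pct=0.10):
    """Plan a safe Target CPA descent: at most max_step_pct per step,
    with one step applied every 2 weeks. Returns the successive targets."""
    steps = []
    tcpa = current
    while tcpa > target:
        # Never drop more than 10% at once, and never overshoot the target.
        tcpa = max(target, round(tcpa * (1 - max_step_pct)))
        steps.append(tcpa)
    return steps
```

Starting from the original €2,200 and aiming at €1,700, this yields three steps (€1,980, €1,782, €1,700), i.e. a 6-week descent instead of the failed 6-week cliff to €1,600.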
Learning 7 — Non-refreshed Customer Match list becomes toxic.
An active Customer Match list of 18,000 ICP contacts (existing customers + qualified MQL+ leads). For 4 months in 2024, the list wasn't refreshed weekly (a Tracking Engineer rotation oversight). Consequence: the list progressively accumulated churned contacts, disqualified leads and expired emails. Smart Bidding kept optimizing as if these contacts were relevant, over-bidding on obsolete audiences. Correction: weekly automated Customer Match refresh via a custom script, with a 24h max SLA on list freshness. Learning: Customer Match freshness matters just as much as initial list quality.
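The 24h freshness SLA can be monitored with a check of this shape. Names are hypothetical; the actual refresh runs as a custom script, and last-refresh timestamps would come from that script's own logs.

```python
from datetime import datetime, timedelta, timezone

def freshness_violations(lists_last_refreshed, sla_hours=24):
    """Given {list_name: last_refresh_timestamp (tz-aware UTC)}, return the
    sorted names of Customer Match lists that breach the freshness SLA."""
    now = datetime.now(timezone.utc)
    sla = timedelta(hours=sla_hours)
    return sorted(name for name, ts in lists_last_refreshed.items()
                  if now - ts > sla)
```

Any non-empty result would page the Tracking Engineer, closing the gap that caused the 4-month drift described above.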
Learning 8 — The experimental channel test saved 8% of future mix.
In Q1 2025, a marginal Reddit Ads B2B test was launched in the US market (in parallel with the EU rollout, for validation). Initial budget €80k/quarter, speculative hypothesis. Result: LTV:CAC of 4.8:1 over the first 6 months, excellent ICP fit on the DevOps/SRE segment, scalability confirmed. Decision: Reddit Ads moves to 5-8% of the mix in markets where DevOps represents a significant share of the secondary ICP. Learning: keeping 5-10% of budget for experimental channel tests is profitable even if 50-60% of tests fail — the 1-2 that succeed unlock new pipeline that is hard to find any other way.
Across 18 months of €10M/year steering, the most profitable performance lever isn't an optimization "hack." It's methodological rigor: quarterly incrementality holdouts, operational offline conversions, a stable MCC structure, a respected 3-level KPI pyramid. These are disciplines, not techniques. They seem obvious in theory. They are rare in practice. That's precisely what distinguishes €10M/year steering from €1M/year steering that aspires to it — not technical expertise, but operational discipline. For accounts that want to professionalize this discipline, launch a free SteerAds audit that specifically evaluates tracking maturity, MCC structure and KPI discipline, and identifies the gaps with the €10M+/year account benchmark.
The verdict: what €10M/year of spend really taught us
To close this anatomy, here are the 6 operational conclusions that emerge from 18 months of €10M/year steering. These are the sharp positions I defend internally and that most mainstream SEA publications refuse to write head-on.
Conclusion 1 — Tracking maturity is the most discriminating variable, ahead of campaign expertise.
On a mature account with spend above €1M/year, what distinguishes performant steering from mediocre steering isn't the finesse of campaign optimizations, bid adjustments or match types. It's the quality of the signal sent to Smart Bidding via offline conversions and Enhanced Conversions. Without this infrastructure, the world's best PPC manager optimizes on noisy signal and plateaus. With this infrastructure, an average PPC manager with operational discipline beats the best PPC manager without infrastructure. The primary investment lever for an account moving from €1M to €5M/year isn't operational headcount — it's tracking infrastructure.
Conclusion 2 — PMax isn't trapped by construction, it's trapped by mediocre tracking.
The dominant 2026 position — "PMax at max 30% of mix" — is rational for accounts without clean tracking. On accounts with closed-won offline conversions, active Brand Exclusions and a quarterly incrementality holdout, PMax delivers the highest incrementality on the account (measured at 68-78% across 6 successive holdouts). The 60% PMax ratio we maintain is empirically validated, not dogmatic. Those who cap PMax at 30% without the tracking infrastructure to steer beyond it make the right choice for their maturity level. Those who cap PMax at 30% while having that infrastructure leave performance on the table out of excess caution.
Conclusion 3 — Per-market MCC structure beats per-product structure in 8 out of 10 cases.
The "1 sub-account per product" instinct is misleading. It fragments Smart Bidding learning, creates inter-sub-account Search cannibalization, and complicates cross-functional reporting with Sales/Customer Success who are organized by market. The 1 sub-account per European market structure is optimal for mid-market B2B SaaS with multi-country presence — except in exceptional cases of products with strictly disjoint audiences.
Conclusion 4 — The KPI pyramid discipline is just as important as the KPIs themselves.
Confusing KPI levels — looking at weekly operational data and making strategic decisions from it — is the most frequent steering mistake on a mature account. Strategic decisions are made quarterly, period. Weekly noise is handled at the weekly level through operational micro-adjustments that don't disturb the global mix. Hyper-growth B2B SaaS companies that reallocate budget every 15 days based on weekly KPIs constantly reset their Smart Bidding learning phase.
Conclusion 5 — Incrementality holdout isn't optional on accounts above €1M/year.
Without a holdout, you make budget reallocation decisions that are biased by construction. The structural over-attribution of Google Ads platforms is 15-32% per channel — i.e. €150-320k/year of potentially misallocated budget on a €1M/year account, and €1.5-3.2M/year on a €10M/year account. For a mature account, running 1 holdout per quarter is a mandatory discipline, not a nice-to-have. SaaS vendors who avoid it — out of perceived methodological complexity or fear of the result — pay dearly in suboptimal trade-offs.
Conclusion 6 — Methodological rigor scales, individual expertise doesn't.
The lever that takes an account from €1M to €10M/year of spend while maintaining LTV:CAC isn't hiring senior PPC people — it's institutionalizing methodological discipline. A 3-level KPI pyramid. A quarterly holdout per channel. Operational offline conversions. A stable MCC structure. A properly funded tooling stack. A documented incrementality roadmap. Once in place, these disciplines scale linearly with spend — the paid team moves from 4 to 5 people while spend doubles, because infrastructure multiplies productivity per person. Conversely, an account betting on individual expertise without infrastructure typically plateaus at €2-3M/year of spend before saturating on operational headcount.
This anatomy of a €10M/year account isn't a success story — it's the operational documentation of a system that works, with its structural choices, its corrected mistakes, and its critical positions on the industry. For mid-market FR B2B SaaS aspiring to scale spend without degrading performance, the learnings transfer — provided you simultaneously invest in tracking infrastructure, in methodological discipline, and in role specialization rather than multi-channel generalists.
For the Performance Max pillar that underpins part of the 60% PMax mix analysis, see our complete 2026 Performance Max guide. For the critical position on accounts that PMax degrades, see our analysis on why PMax destroys 30% of accounts in 2026. For the 90-day Microsoft Ads B2B SaaS case study that completes the mature B2B SaaS tracking panorama, see our Microsoft Ads B2B SaaS case study.
€10M/year isn't an endpoint. It's the threshold beyond which naive trade-offs become ruinous and methodological discipline delivers its true leverage — see also Microsoft Advertising Research for more details.
FAQ
What team-to-spend ratio for a Google Ads account at €10M/year?
On the profile documented here, the observed ratio is 1 paid FTE per €2M/year of spend, i.e. 5 people total (1 Head of Paid + 4 specialists by discipline). In mature B2B SaaS this ratio is healthy — below €1.5M/year of spend per FTE you over-invest in headcount; beyond €3M/year per FTE you leave ROAS on the table for lack of continuous optimization. Also budget 0.5 data-engineer FTE for the BigQuery/Looker stack, otherwise data steering degrades quickly after 18 months.
Why 60% PMax on a mature account at €10M/year?
Three cumulative reasons. First, conversion volume is enough for PMax to exit learning and actually converge (a minimum of 50+ conversions per day). Second, the product/market mix to cover (8 EU markets, 4 product lines) requires automated cross-channel coverage that Search alone doesn't provide. Third, the quarterly holdouts confirmed across consecutive cycles that PMax is genuinely incremental (measured at 68-78%) — not just cannibalizing Search. Without these 3 conditions, the ratio should be lower (40-50% PMax).
How much does the full tooling stack (BigQuery + Looker + sGTM + scripts) cost?
On the documented profile, the tooling stack represents about €4,200/month in direct costs: BigQuery €800-1,200/month depending on query volume, Looker Studio Pro €25/user × 8 = €200/month, sGTM hosted Cloud Run €150-300/month, monitoring + alerting €100/month, and about 2 FTE-days per month of script/dashboard maintenance (≈ €2,500-3,000). That's 0.5% of Google Ads spend — a healthy ratio for this level of data steering.
Which weekly KPIs actually surface to the board?
On this account, the board receives only 4 weekly KPIs: (1) cumulative MQL volume vs the quarterly target, (2) blended CAC across all sources, (3) the paid/organic ratio in MQLs, and (4) a flag on budget-pacing anomalies above 8% per market sub-account. Everything else (CPC, ROAS, CTR, learning phases, scripts) stays at the operational level and only surfaces in case of incident or in the in-depth quarterly report. This filtering discipline is essential — overloading the board with technical metrics hurts strategic decisions.
How do you measure incrementality at this spend level?
Quarterly geographic holdout test over 4 weeks, with 1 channel paused on 1 representative market (typically Spain or Italy as control). The 4 annual cycles successively cover PMax, non-brand Search, YouTube and Discovery. Incrementality measurement: variation in organic conversions + branded Search + other paid channels on test zone vs control zones. Methodological cost: about €12,000-25,000 of lost spend per cycle, but it's the only reliable way to allocate €10M/year with rigor. Without holdout, you steer blind on biased Google Ads attribution.