SteerAds
Free tool · No signup · Audit & checks

Google Ads Quality Score Checker — free

The checker estimates your Quality Score from 10 weighted questions on observable signals. The article that follows explains Google's 3 official components (Expected CTR, Ad Relevance, Landing Experience) in detail, how Google computes each in practice, observable 2026 QS benchmarks by vertical, and the 30-day method to move from an average Quality Score of 5 to a score of 8 on top keywords.

Connect your Google Ads account to analyze these metrics on your actual account.

Free audit 2 min →
Maria — Fundamentals & Education Lead · 9 min read

Answer the 10 questions below (yes / no / I don't know) to estimate the probable average Quality Score of a Google Ads ad group. The weightings match Google's 3 official components (Expected CTR, Ad Relevance, Landing Experience).

1. Does the primary keyword appear in Headline 1 of the RSA?

2. Do you have at least 12 unique headlines per RSA (out of the 15 max)?

3. Do you have a shared negative keyword list with ≥ 200 entries at account level?

4. Does the landing page pass Core Web Vitals (LCP < 2.5s, INP < 200ms, CLS < 0.1)?

5. Does the landing page H1 mirror the keyword or intent of the RSA?

6. Is the match-type mix balanced (≥ 30% exact + ≥ 40% phrase)?

7. Have you enabled sitelinks + callouts + structured snippets + image extensions?

8. Is ad rotation set to 'Optimize' rather than 'Rotate evenly'?

9. Are Consent Mode v2 + Enhanced Conversions for Web enabled?

10. Do you have Smart Bidding (Target CPA / Target ROAS / Max Conversions) active and stable?

How does this checker work?

Google Ads does not publish the exact Quality Score formula, but it documents the 3 weighted components: Expected CTR, Ad Relevance, Landing Experience. Our checker weights 10 observable signals that correlate strongly with these 3 components — each "yes" raises the probable score, each "no" penalizes it.

The displayed score is an estimate, not Google's official value. For the real score, check the Quality Score column in the Google Ads interface (at keyword level). Typical gaps between the estimate and the real value sit at ±1 point across the SteerAds sector panel.

The prioritized actions shown are sorted by effort/impact ratio: the simplest to apply with the biggest QS gain come first. Plan 1 to 2 person-days to apply the top 5, and 14 to 28 days to see the impact on the real Google Ads score.
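The weighting logic can be sketched as follows. The per-question weights and component mapping below are illustrative assumptions for demonstration, not SteerAds' actual values:

```python
# Illustrative sketch of the checker's weighted scoring.
# WEIGHTS maps question number -> (component, weight); values are assumed.
WEIGHTS = {
    1: ("ad_relevance", 1.2),
    2: ("expected_ctr", 1.0),
    3: ("expected_ctr", 0.8),
    4: ("landing_experience", 1.2),
    5: ("landing_experience", 1.0),
    6: ("expected_ctr", 0.8),
    7: ("expected_ctr", 1.2),
    8: ("expected_ctr", 0.6),
    9: ("landing_experience", 0.6),
    10: ("expected_ctr", 0.6),
}

def estimate_qs(answers: dict[int, str]) -> float:
    """Map yes/no/unknown answers to an estimated 1-10 Quality Score.
    A 'yes' earns the full weight, 'unknown' half, 'no' nothing."""
    credit = {"yes": 1.0, "unknown": 0.5, "no": 0.0}
    total = sum(w for _, w in WEIGHTS.values())
    earned = sum(w * credit[answers.get(q, "unknown")]
                 for q, (_, w) in WEIGHTS.items())
    return round(1 + 9 * earned / total, 1)  # scale onto the 1-10 QS range

print(estimate_qs({q: "yes" for q in WEIGHTS}))  # all yes -> 10.0
```

Answering "yes" everywhere yields the ceiling score; every "no" pulls the estimate down in proportion to that signal's assumed weight.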

Across aggregated 2025-2026 Google Ads data (public sources + Google Ads API), Quality Score remains the most powerful and most underused lever to reduce effective CPC without touching bids: a Quality Score of 8 consistently pays 25 to 35% less than a QS 5 on the same keywords. The checker above estimates your probable QS from 10 weighted questions. What follows explains Google's 3 official components in detail, how Google computes each in practice, the 2026 observable QS benchmarks by vertical, and the 30-day method to move an account from an average Quality Score of 5 to a score of 8 on top keywords.

For Quality Score fundamentals, see our complete Quality Score guide. For the RSA copywriting method that drives Expected CTR and Ad Relevance, see our RSA copywriting method guide. For the landing pages that drive Landing Experience, see our landing pages guide.

Google's 3 official Quality Score components

Google Ads does not publish the exact Quality Score formula, but it officially documents the 3 weighted components behind it. Source documentation: support.google.com Quality Score. Understanding these 3 components is the first step to durably steering your QS — without this reading grid, you are working blind.

Component 1 — Expected CTR. This is the expected click-through rate on your ad for that keyword, at corrected average position. Google compares your historical CTR with the median CTR of advertisers on that same keyword at similar position. The rating is Above Average, Average, or Below Average. Empirical observation of Google Ads data suggests Expected CTR accounts for roughly 40-50% of the final 1-10 score.

Component 2 — Ad Relevance. This is the semantic relevance between user query intent and the content of your ad. Google analyzes the matching between ad group keywords and the RSA headlines/descriptions. Rating: Above Average, Average, or Below Average. Accounts for roughly 30-35% of the final score per panel observations.

Component 3 — Landing Page Experience. This is the quality and relevance of the landing page once the user clicks. Google evaluates Core Web Vitals (speed, INP, stability), LP content relevance vs. the query, transparency (legal mentions, About, contact), and user navigation. Accounts for roughly 20-25% of the final score.

Practical implication: if your goal is to move from QS 5 to QS 8 on a keyword, the priority lever depends on which component is Below Average in your keyword-level interface. If Expected CTR is Below Average, rework the RSAs (headlines + extensions). If Ad Relevance is Below Average, rework the keyword/headline matching. If Landing Experience is Below Average, rework the LP. Do not work in random order — each component has its own recompute dynamic.

How Google computes Expected CTR (and how to improve it)

This is the most misunderstood component of Quality Score, and the most powerful CPC lever. Google compares your historical CTR on a keyword with the median CTR of all advertisers on that keyword at identical average position. If you are at position 2 with 6% CTR, Google compares this 6% with the median CTR of all advertisers at position 2 on that keyword over 30-90 days.

Major implication: on a highly competitive keyword (e.g. "CRM software" at position 1-2), the median CTR can sit at 8-12%. To move Above Average, your CTR must exceed roughly 10-14%. On a long-tail keyword, the median CTR is often 2-4%, and 4-6% is enough.
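The Above/Average/Below rating mechanic described above can be sketched as a percentile comparison. Google publishes no thresholds, so the "top/bottom 30%" cutoffs below follow the empirical rule cited later in this article and are assumptions:

```python
# Hedged sketch of the Expected CTR rating: compare your CTR against
# peer CTRs on the same keyword at the same average position.
def rate_expected_ctr(your_ctr: float, peer_ctrs: list[float]) -> str:
    """Rate a keyword's Expected CTR against peer CTRs (assumed cutoffs)."""
    peers = sorted(peer_ctrs)
    top_cut = peers[int(len(peers) * 0.7)]     # ~70th percentile
    bottom_cut = peers[int(len(peers) * 0.3)]  # ~30th percentile
    if your_ctr > top_cut:
        return "Above Average"
    if your_ctr < bottom_cut:
        return "Below Average"
    return "Average"

# Competitive keyword: peer median around 10%, your 6% CTR at position 2
peers = [0.06, 0.08, 0.09, 0.10, 0.11, 0.12, 0.13]
print(rate_expected_ctr(0.06, peers))  # -> Below Average
```

The same 6% CTR against a long-tail peer pool with a 2-4% median would land Above Average — the rating is always relative to the keyword's own auction.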

4 concrete levers to improve Expected CTR, in observed order of impact.

Lever 1 — Structured RSA pinning. Pin Headline 1 in position 1 with the exact primary keyword. Pin Headline 2 with the business USP. The rest in dynamic pool. Effect: +12 to +25% CTR in 14-21 days.

Lever 2 — Activate every extension. Sitelinks (4 minimum), callouts, structured snippets, prices, image, lead form. Documentation: support.google.com Ad Extensions. Effect: +18 to +28% cumulative CTR for 4 active extensions.

Lever 3 — Test 4-5 new RSAs per ad group. The richer the initial pool of headlines/descriptions, the better the algorithm converges. Doubling to 4-5 RSAs typically produces +8 to +15% CTR in 30 days.

Lever 4 — Cut very low CTR keywords. A keyword at 0.5% CTR at position 2-3 is a signal of intent-mismatch. Cutting improves the spend-weighted average CTR of the ad group.

Expected CTR plateaus on brand keywords:

On brand keywords (your trademark), the observed median CTR is often 25-50% — your direct competition is very weak, and the user typed your exact brand name. Reaching Above Average on Expected CTR on these keywords is trivial. Do not confuse QS 10 on brand keywords with QS 10 on generic keywords — the difficulty is completely different.

Ad Relevance: keyword to headline to landing matching

This is the fastest component to improve (3 to 7 days after RSA change) and the one that produces the most visible short-term QS gains. Google analyzes three matching levels: keyword to headlines, headlines to description, and overall semantic coherence of the ad group.

Practical observed rule: to move Ad Relevance from Below Average to Above Average, embed the exact keyword in Headline 1 AND in the ad's display URL path. This double integration typically produces the Above Average jump in 3-7 days. If you cannot embed the exact keyword in Headline 1 (e.g. 35-character keyword, exceeds the headline limit), use Dynamic Keyword Insertion as a backup.

Frequent structural mistake: a single ad group bundling 30+ semantically disparate keywords. Mechanically, no RSA can be relevant for all keywords simultaneously, and average Ad Relevance plateaus at Average at best. The SKAG (Single Keyword Ad Group) rule is too strict for 2026, but the moderate principle remains valid: keep 5-12 semantically close keywords per ad group, and create a dedicated ad group for each distinct semantic cluster.

Average QS below 6 on top keywords? The audit identifies the 3 priority components.

Three minutes after OAuth connection, the audit returns your spend-weighted QS, identifies keywords with QS below 6 capturing 25 to 40% of budget, and lists the 3 priority components (Expected CTR / Ad Relevance / Landing Experience) to fix to target QS 8+ in 30 days.

Run a free Quality Score audit →

2026 Landing Experience: Core Web Vitals + content depth

This is the component that has evolved most between 2022 and 2026 under the combined effect of Core Web Vitals and Helpful Content Updates. Today, Landing Page Experience rests on 4 measurable pillars.

Pillar 1 — Core Web Vitals. LCP under 2.5 seconds, INP under 200ms, CLS under 0.1. Measurable via PageSpeed Insights or Search Console. Roughly 35 to 50% of audited landing pages fail at least one of these 3 criteria, capping Landing Experience at Average or Below Average. Google documentation: web.dev Core Web Vitals.

Pillar 2 — Relevant content depth. At least 600 words of copy relevant to the keyword, H1 aligned with the primary keyword, H2 on sub-themes, structured FAQ. Minimalist showcase LPs (hero + 1 CTA + footer) plateau at Below Average.

Pillar 3 — Trust signals. Accessible legal mentions, About page, real contact (email + phone + address for local services), verifiable customer reviews. On LPs without trust signals, Landing Experience plateaus.

Pillar 4 — Mobile UX. More than 60% of Google Ads traffic is mobile in 2026. A non-mobile-first LP (font too small, CTA out of thumb's reach, form with 8+ fields) caps Landing Experience. Test via Mobile-Friendly Test.
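The Core Web Vitals pass/fail thresholds cited in Pillar 1 can be expressed as a one-line check (a minimal sketch using the web.dev "good" thresholds quoted above):

```python
# Minimal sketch of the Core Web Vitals gate described in Pillar 1.
def passes_core_web_vitals(lcp_s: float, inp_ms: float, cls: float) -> bool:
    """True when all three Core Web Vitals sit in Google's 'good' range:
    LCP < 2.5s, INP < 200ms, CLS < 0.1."""
    return lcp_s < 2.5 and inp_ms < 200 and cls < 0.1

print(passes_core_web_vitals(2.1, 180, 0.05))  # True
print(passes_core_web_vitals(3.0, 180, 0.05))  # slow LCP -> False
```

A single failing metric is enough to fail the gate, which matches the observation above that one bad criterion caps Landing Experience.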

2026 QS benchmarks by vertical

The orders of magnitude below come from aggregated 2025-2026 Google Ads data (public sources + Google Ads API), cross-referenced with public WordStream benchmarks. These are spend-weighted medians — intra-vertical variance remains strong depending on account maturity, tracking quality, and Quality Score investment.

  • Mass-market e-commerce: median QS 5.8, Top 25% above 7.2, Bottom 25% below 4.5.
  • Premium e-commerce: median QS 6.4, Top 25% above 7.8, Bottom 25% below 5.1.
  • Mid-market B2B SaaS: median QS 6.1, Top 25% above 7.5, Bottom 25% below 4.8.
  • High-intent B2C lead gen: median QS 5.5, Top 25% above 7.0, Bottom 25% below 4.2.
  • Local services: median QS 6.8, Top 25% above 8.1, Bottom 25% below 5.5.

Practical implication: if your spend-weighted average QS sits in the vertical median but your effective CPC is high, two typical causes. (1) Your QS on the top 20% keywords is below the vertical median. (2) Your direct competitors have a higher QS, shifting the entire auction upward.
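"Spend-weighted" means each keyword's QS counts in proportion to its cost, so one expensive QS 5 keyword can drag the whole account metric down. A minimal sketch with invented sample data:

```python
# Sketch of the spend-weighted average QS used by the benchmarks above.
# Keyword rows are invented for illustration.
keywords = [
    {"kw": "crm software",     "qs": 5, "cost": 4200.0},
    {"kw": "crm for startups", "qs": 8, "cost": 1100.0},
    {"kw": "best crm 2026",    "qs": 6, "cost": 900.0},
]

weighted_qs = (sum(k["qs"] * k["cost"] for k in keywords)
               / sum(k["cost"] for k in keywords))
print(round(weighted_qs, 2))  # 5.68 -- dominated by the big QS 5 spender
```

An unweighted average of these three keywords would read 6.33, masking the fact that most of the budget flows through the QS 5 keyword.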

30-day method to move from QS 5 to QS 8+

The operational sequence ranked by effort/impact ratio, observed across aggregated 2025-2026 Google Ads data. Accounts that apply this 30-day sequence typically see a spend-weighted average QS move from 5.5 to 7.5 on the top 20% keywords — with a bigger gain if the account started below QS 5.

Week 1 — Diagnosis and prioritization. Export the keyword-level report with QS + components. Filter QS below 6 AND clicks above 100 over 30 days. For each keyword, identify the Below Average component. Sort by descending spend — you have your top 20-30 priority keywords. For audit detail, see our Google Ads audit checklist.
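The Week 1 triage can be sketched on an exported keyword-level report. The field names below are illustrative, not the actual Google Ads report column names:

```python
# Sketch of the Week 1 filter: QS below 6 AND more than 100 clicks over
# 30 days, sorted by descending spend. Sample rows are invented.
keywords = [
    {"kw": "crm software", "qs": 4, "clicks": 310, "cost": 4200.0, "weak": "Expected CTR"},
    {"kw": "crm demo",     "qs": 7, "clicks": 150, "cost": 800.0,  "weak": None},
    {"kw": "crm pricing",  "qs": 5, "clicks": 120, "cost": 1500.0, "weak": "Landing Experience"},
    {"kw": "free crm",     "qs": 3, "clicks": 40,  "cost": 200.0,  "weak": "Ad Relevance"},
]

priority = sorted(
    (k for k in keywords if k["qs"] < 6 and k["clicks"] > 100),
    key=lambda k: k["cost"],
    reverse=True,
)
for k in priority:
    print(k["kw"], "->", k["weak"])
```

The clicks floor matters: "free crm" has the worst QS but too little data to diagnose reliably, so it stays out of the priority list.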

Week 2 — Ad Relevance and RSA work. For Ad Relevance Below Average keywords: rewrite the RSAs. Headline 1 exact keyword (pin position 1), Headline 2 USP (pin position 2), Headlines 3-15 in dynamic pool. 4-5 RSAs per ad group. Complete sitelinks, callouts, structured snippets. Ad Relevance moves Above Average in 3-7 days, Expected CTR rises in 10-14 days.

Week 3 — Landing Experience work. For Landing Experience Below Average keywords: align H1 with the RSA primary keyword. Verify Core Web Vitals via PageSpeed Insights. Add content depth (600 words + FAQ). Verify trust signals. Landing Experience moves Above Average in 7-21 days after recrawl.

Week 4 — Measurement and iteration. Re-export the keyword-level report. Measure QS delta and effective CPC. Identify keywords where QS did not move and plan month 2. Spend-weighted average QS typically moves from 5.5 to 7.5 in 30 days, effective CPC -18 to -28% on targeted keywords.

For complementary post-QS-improvement effective CPC analysis, use our CPC calculator. For Ad Rank decomposition and the QS effect in the auction formula, use our Ad Rank calculator. To measure CTR before/after and its Expected CTR effect, use our CTR calculator. See also our 10 Google Ads mistakes for the classic traps that quietly degrade QS.

Quality Score remains the most powerful and most underused lever to reduce effective CPC without touching bids. The checker above estimates your probable QS from 10 weighted questions. The work begins after: identify keywords with QS below 6 capturing 25 to 40% of spend, diagnose the priority Below Average component, and apply Ad Relevance / RSA / Landing Experience fixes in effort-impact order. On accounts that follow this 30-day method, the observation is stable: spend-weighted average QS moves from 5.5 to 7.5, effective CPC drops 18 to 28% on targeted keywords, and the effect compounds month after month because Expected CTR keeps rising with the improved historical CTR.

FAQ

What are the 3 official Quality Score components?

Google officially documents three weighted components: Expected CTR (expected click-through rate on the keyword at average position), Ad Relevance (alignment between query intent and ad content), and Landing Page Experience (quality and relevance of the landing page). Each component is rated Below Average, Average, or Above Average in the keyword-level interface. The final 1-10 score results from a weighted combination of these 3 ratings — Google does not publish the exact formula, but empirical observation of aggregated Google Ads data suggests Expected CTR accounts for 40-50% of the final score, Ad Relevance 30-35%, and Landing Experience 20-25%.

How exactly does Google compute Expected CTR?

Google compares your historical CTR on the keyword with the median CTR of all advertisers on that same keyword at identical average position (corrected for display position and auction context). If your CTR is in the top 30% of advertisers on that keyword at average position 1-3, you are Above Average. If in the bottom 30%, Below Average. The rest is Average. This mechanic means that on competitive keywords, the CTR threshold to land Above Average is higher (often above 8%) than on long-tail keywords where 2-3% is enough. The calculation runs on a rolling 30-90 days — not in real time.

What CPC impact does a Quality Score 8 have vs. 5?

Across aggregated 2025-2026 Google Ads data, the observed effective CPC gap between QS 5 and QS 8 on the same keywords sits between 25 and 35%. Mechanically, the Google Ads auction formula weights real CPC by your Quality Score: real CPC = (next competitor's Ad Rank / your QS) + EUR 0.01. Doubling your QS nearly halves your effective CPC on auctions where you beat the next competitor. That is mathematically the most powerful lever to reduce CPC without touching bids, and it is free to enable (Ad Relevance / LP / RSA work, no extra media cost).
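The quoted auction formula can be checked with a toy example (all figures invented):

```python
# Worked example of the auction formula quoted above:
#   real CPC = (next competitor's Ad Rank / your QS) + 0.01
def actual_cpc(next_ad_rank: float, your_qs: float) -> float:
    return round(next_ad_rank / your_qs + 0.01, 2)

next_rank = 20.0  # Ad Rank of the advertiser just below you (assumed)
cpc_qs5 = actual_cpc(next_rank, 5)  # 4.01
cpc_qs8 = actual_cpc(next_rank, 8)  # 2.51
print(cpc_qs5, cpc_qs8, round(1 - cpc_qs8 / cpc_qs5, 2))
```

In this toy auction, moving from QS 5 to QS 8 cuts the effective CPC by about 37%; the 25-35% range observed in the panel data is slightly lower, presumably because real competitors and auctions shift at the same time as your QS.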

Should you target QS 10 or is QS 8 enough?

QS 8 is enough in 95% of cases. Moving from QS 8 to QS 10 takes 5 to 10 times more effort than moving from QS 5 to QS 8, for a marginal CPC gain of 8 to 12% instead of 25 to 35%. Practical rule observed from benchmarked public references: target a stable QS of 7-8 on the account's top 20% keywords (those that capture 80% of spend), and an acceptable QS of 5-6 on the long tail where the optimization effort does not pay off. Fixating on QS 10 is typically a sign of over-engineering that consumes time better invested elsewhere (tracking, Smart Bidding, audiences).

Does Quality Score improve with Smart Bidding?

Indirectly, yes. Smart Bidding (Target CPA / Target ROAS / Maximize Conversions) does not set QS, but it favors auctions where predicted conversion probability is high, which mechanically filters out poorly relevant queries and improves average Expected CTR. On accounts switching from Manual CPC to well-calibrated Smart Bidding (at least 30 conversions / rolling 30 days), average QS typically rises +0.5 to +1.2 points in 60-90 days with no other intervention. But Smart Bidding alone is not enough — you still need to work on Ad Relevance and Landing Experience to reach QS 8+.

How long until you see a QS change?

Lead times observed in Google Ads data. For Expected CTR: 14 to 28 days of data after RSA changes for Google to recompute the rating (the calculation runs on a rolling 30-day window). For Ad Relevance: 3 to 7 days after keyword/headline copy changes. For Landing Experience: 7 to 21 days after Core Web Vitals improvement (Google must re-crawl the LP). Overall, plan a minimum of 30 days to see a clear change in the 1-10 score, and 60-90 days to see it stabilize. Anything shorter is an illusion of measurement: QS naturally swings plus or minus 1 point week to week.

Audit your account on 200+ checkpoints

Connect Google Ads via OAuth. Full audit in 2 minutes. No credit card. No commitment.
