
n8n + Google Ads: 6 automation flows for 2026

Self-hosted n8n (open-source, free) to automate Google Ads: CPC monitoring, Slack alerts, GA4 sync, weekly reporting, and offline conversion pushes from your CRM to Google Ads. Here are 6 ready-to-import JSON workflows, plus the docker-compose to host n8n in 10 minutes.

Matt · Tracking & Data Lead · 10 min read

n8n has surpassed 100,000 active self-hosted deployments in 2025 (n8n.io) and 70,000 GitHub stars in Q1 2026, becoming the dominant open-source alternative to Zapier and Make. The economics are decisive: a Zapier workflow that runs 10,000 times per month costs about $98/month on the Professional plan; the same workflow on self-hosted n8n costs $0 beyond the $9 to $17/month VPS. On accounts observed in public Google Ads benchmarks, teams that switch to n8n recover $660 to $2,000/month on the automation budget line, the fastest ROI in the entire tooling stack.

Here's exactly the self-hosted setup, the Google Ads credentials, and 6 ready-to-import JSON workflows to automate Google Ads management. No "n8n is great" generalities: concrete docker-compose files, JSON exports, and transformation snippets. The repo github.com/steerads/n8n-google-ads-flows contains the 6 documented workflows. If you're still comparing with Zapier and Make, read our Zapier vs Make Google Ads comparison in parallel. Our wasted ad spend calculator estimates the dollars burned per month by broad match without negatives or excessive LP bounce.

n8n vs Zapier/Make: why self-hosted for Google Ads

n8n is a low-code, open-source workflow automation tool launched in 2019. Its drag-and-drop visual editor is similar to Zapier or Make, but with two key differences: (1) it can be self-hosted under the Sustainable Use License, and (2) the self-hosted version has no per-execution pricing. You pay for your VPS, period.

The 12-month economic calculation for an account running 50 active workflows (hourly + daily mix):
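
A back-of-envelope sketch, under illustrative assumptions (10 hourly workflows at ~730 executions/month each, plus 40 daily workflows at ~30 executions/month each, i.e. roughly 8,500 executions/month):

Zapier:          ~8,500 executions × ~$0.02 ≈ $170/month ≈ $2,040/year
n8n self-hosted: VPS at $9-17/month regardless of volume ≈ $110-200/year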

When n8n self-hosted wins: high execution volume (above 5,000/month), complex workflows with inline JavaScript, custom integrations not covered by SaaS, data residency constraints (strict privacy regulations, sensitive data), technical teams that can maintain Docker.

When Zapier/Make remain superior: non-technical teams (zero infra ops), simple low-volume workflows (below 1,000 executions/month), very fast start (15 min Zapier vs 1h n8n), need for exotic integrations (Zapier has 5,000+ native apps vs ~400 on n8n side).

For Google Ads specifically, n8n has a key advantage: no quota per execution, so you can run an hourly CPC monitoring workflow for 1 year without paying $1 more than the VPS. On Zapier, this same workflow costs you 8,760 executions × ~$0.02 = ~$175 minimum over the year.

Three deployment patterns coexist in 2026, depending on team profile and infra constraints.

Pattern 1 — Docker Compose on a single VPS (the most widespread): one Hetzner or DigitalOcean VPS at $5-13/month, n8n + Postgres + Caddy in compose, automatic snapshots every 24 hours on the provider side. Recommended for 90% of SMB and mid-market use cases.

Pattern 2 — Managed n8n Cloud (n8n.cloud): from $22/month depending on plan, zero infra ops to manage, ideal if the team absolutely doesn't want to touch Docker. The trade-off: no customization of the runtime environment, and the data doesn't stay with you.

Pattern 3 — Kubernetes for multi-tenant or very high availability: official n8n Helm chart deployment on GKE/EKS/AKS, relevant only for agencies that pool multiple clients or internal SaaS setups under very high load. Plan for $90 to $275/month depending on cluster size.

The Docker Compose vs n8n Cloud choice deserves to be settled at the outset. Docker Compose gives 100% control (data stays with you, custom environment, minimal network latency to Postgres), but requires minimal ops discipline: Docker updates every 1-2 months, disk-space monitoring (the executions table grows quickly), and Let's Encrypt certificate management if Caddy crashes. n8n Cloud removes all this load but introduces an external dependency and a fixed cost that scales linearly with the number of active workflows. For a data team with one part-time DevOps, Docker Compose remains the right default. For a pure marketing team without technical operations, n8n Cloud avoids weeks of technical debt.

n8n setup: Docker, credentials, first workflow

The fastest setup to host n8n in production: Docker Compose on an Ubuntu VPS with Caddy as reverse proxy (automatic HTTPS via Let's Encrypt). Here's exactly the docker-compose.yml to paste:

# docker-compose.yml — n8n self-hosted production-ready
version: "3.8"

services:
  caddy:
    image: caddy:2
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - n8n-network

  postgres:
    image: postgres:16
    restart: unless-stopped
    environment:
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: n8n
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - n8n-network

  n8n:
    image: docker.n8n.io/n8nio/n8n:latest
    restart: unless-stopped
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - N8N_HOST=n8n.yourdomain.com
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://n8n.yourdomain.com/
      - GENERIC_TIMEZONE=America/New_York
      - EXECUTIONS_DATA_PRUNE=true
      - EXECUTIONS_DATA_MAX_AGE=336
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      - postgres
    networks:
      - n8n-network

volumes:
  postgres_data:
  n8n_data:
  caddy_data:
  caddy_config:

networks:
  n8n-network:

The matching Caddyfile for automatic HTTPS reverse proxy:

n8n.yourdomain.com {
    reverse_proxy n8n:5678
    encode gzip
}

The .env to place alongside (DO NOT commit):

POSTGRES_PASSWORD=a_strong_password_minimum_32_chars_xxxxxx
N8N_ENCRYPTION_KEY=encryption_key_32_chars_strict_yyyyyyyy

Generate the N8N_ENCRYPTION_KEY with openssl rand -hex 32. This key encrypts all credentials stored by n8n (OAuth tokens, API keys) — if you lose it, all credentials are unrecoverable.

Launch:

# DNS prerequisite: point n8n.yourdomain.com to the VPS IP
docker-compose up -d
docker-compose logs -f n8n  # verify startup

n8n is accessible at https://n8n.yourdomain.com after ~2 minutes (Caddy delay to generate the Let's Encrypt certificate). First access: create an admin account, set the timezone, and that's it.

Backup strategy for credentials and workflows

Once n8n is in production, losing the encrypted credentials or the Postgres database can cost weeks of reconfiguration. Put three backup layers in place before migrating your critical workflows. Layer 1 — Provider-side disk snapshots. Activate automatic daily snapshots on the VPS ($1-2/month extra at Hetzner or DigitalOcean). In case of corruption or operator error, you can restore to a recent point in under 10 minutes. It's the cheapest and most effective safety net, and it requires no application logic.

Layer 2 — Regular, encrypted Postgres dumps. A daily cron runs pg_dump n8n, encrypts the dump with age or GPG, then pushes it to an S3-compatible bucket (Cloudflare R2, Backblaze B2, Wasabi) with 30-day retention. The encryption step is essential: the dump contains all workflows, but also your Google Ads credentials encrypted with your encryption key. If an unencrypted dump leaks, anyone who also has the encryption key can access your credentials. Store the encryption key separately from the dumps (ideally in a password manager shared with the backup admins).
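
A minimal sketch of such a backup cron, assuming the compose setup above (the recipient file, bucket name, and R2 endpoint are placeholders to adapt):

#!/usr/bin/env bash
# Daily n8n backup: dump -> encrypt -> push to S3-compatible storage (sketch)
set -euo pipefail
STAMP=$(date +%F)

# Dump the n8n database from the postgres container defined in docker-compose.yml
docker compose exec -T postgres pg_dump -U n8n n8n > "/tmp/n8n-${STAMP}.sql"

# Encrypt with age (public-key mode: only the matching private key decrypts)
age -r "$(cat /root/n8n-backup.recipient)" \
    -o "/tmp/n8n-${STAMP}.sql.age" "/tmp/n8n-${STAMP}.sql"

# Push to an S3-compatible bucket (Cloudflare R2 endpoint shown as a placeholder)
aws s3 cp "/tmp/n8n-${STAMP}.sql.age" "s3://n8n-backups/${STAMP}.sql.age" \
    --endpoint-url "https://<account-id>.r2.cloudflarestorage.com"

# Keep nothing on local disk after upload
rm -f "/tmp/n8n-${STAMP}.sql" "/tmp/n8n-${STAMP}.sql.age"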

Layer 3 — JSON export of critical workflows in source control. For each production workflow, export the JSON via "Download" in the n8n UI and commit it to a dedicated private Git repo. Two benefits: versioning and code review of workflow changes, and immediate restoration if the Postgres database is lost. The encryption key is still needed to reimport with credentials, but the workflow structure is preserved. On accounts observed in public Google Ads benchmarks, teams that combine all three layers never lose more than 24 hours of configuration in an incident, vs weeks for those with only one.

Configure Google Ads credentials

Go to Credentials > Add Credential > Google Ads OAuth2 API and fill in:

  • Client ID: from your GCP project (see our Python API setup guide for the generation procedure).
  • Client Secret: from the same GCP project.
  • Scope: https://www.googleapis.com/auth/adwords.
  • Authorization URL: pre-filled by n8n.
  • Token URL: pre-filled.
  • Redirect URL: copy the value provided by n8n and add it to the Authorized redirect URIs of your OAuth client in GCP Console.

Click Connect my account and validate in the OAuth window; n8n automatically retrieves the refresh_token and stores it encrypted.

For the developer_token (which isn't in OAuth), add a separate Header Auth Credential with Header Name: developer-token and Header Value: YOUR_DEVELOPER_TOKEN. HTTP Request nodes that call the Google Ads API will consume this header.

Flow 1-2: CPC monitoring + Slack alerts

The first 2 flows cover daily monitoring: detecting CPC anomalies and alerting in Slack. It's the most profitable use case to start with.

Flow 1 — CPC monitoring with Slack alerts

Trigger: Cron, every hour. Logic: pull the average CPC of ENABLED campaigns over the rolling 24h, compare it to the 7-day average CPC, and send a Slack alert if the variation exceeds +25%.

The exported JSON workflow (excerpt, simplified for readability):

{
  "name": "GoogleAds — CPC Monitoring + Slack Alerts",
  "nodes": [
    {
      "name": "Cron Hourly",
      "type": "n8n-nodes-base.cron",
      "parameters": {
        "triggerTimes": { "item": [{ "mode": "everyHour" }] }
      }
    },
    {
      "name": "Pull GAQL CPC",
      "type": "n8n-nodes-base.httpRequest",
      "parameters": {
        "method": "POST",
        "url": "https://googleads.googleapis.com/v17/customers/{{$env.CUSTOMER_ID}}/googleAds:search",
        "authentication": "predefinedCredentialType",
        "nodeCredentialType": "googleAdsOAuth2Api",
        "headerParameters": {
          "parameters": [
            { "name": "developer-token", "value": "{{$env.DEV_TOKEN}}" },
            { "name": "login-customer-id", "value": "{{$env.LOGIN_CID}}" }
          ]
        },
        "bodyParameters": {
          "parameters": [
            {
              "name": "query",
              "value": "SELECT campaign.id, campaign.name, metrics.average_cpc FROM campaign WHERE campaign.status = 'ENABLED' AND segments.date DURING LAST_7_DAYS"
            }
          ]
        }
      }
    },
    {
      "name": "Compute Variations",
      "type": "n8n-nodes-base.code",
      "parameters": {
        "language": "javaScript",
        "jsCode": "const items = $input.all();\nconst alerts = [];\nfor (const i of items) {\n  const c = i.json;\n  // CPC variation calculation vs baseline\n  const cpcNow = c.metrics.average_cpc / 1000000;\n  const cpcAvg7d = c.baseline_cpc;\n  const variation = (cpcNow - cpcAvg7d) / cpcAvg7d;\n  if (variation > 0.25) {\n    alerts.push({\n      campaign: c.campaign.name,\n      cpc_now: cpcNow.toFixed(2),\n      cpc_baseline: cpcAvg7d.toFixed(2),\n      variation_pct: (variation * 100).toFixed(1)\n    });\n  }\n}\nreturn alerts.map(a => ({ json: a }));"
      }
    },
    {
      "name": "Slack Alert",
      "type": "n8n-nodes-base.slack",
      "parameters": {
        "channel": "#google-ads-alerts",
        "text": "[CPC ALERT] {{$json.campaign}} - CPC +{{$json.variation_pct}}% ({{$json.cpc_baseline}} USD -> {{$json.cpc_now}} USD)"
      }
    }
  ]
}

Variables to customize after import: CUSTOMER_ID (10-digit client account), DEV_TOKEN (Google Ads developer token), LOGIN_CID (parent MCC), Slack channel.

Trigger frequency: hourly during business hours, once nightly (to avoid false positives on low volume). Output: one Slack message per campaign whose CPC exceeds baseline by +25%.

Flow 2 — Spend anomaly detection (above 2x avg 7d)

Trigger: Cron, every 4 hours. Logic: pull total account spend over the rolling window, compare it to the 7-day average times a factor, and if spend exceeds 2x the baseline, alert on Slack with a per-campaign breakdown.

The n8n pattern is similar to Flow 1, with 4 nodes:

  1. Cron every 4h.
  2. HTTP Request GAQL: pull total spend + breakdown per campaign.
  3. Code: variation calculation vs 7d baseline, identification of causing campaigns.
  4. Slack: structured message send if anomaly.

Variables to tune: SPEND_VARIATION_THRESHOLD (2.0 = +100% = double), MIN_SPEND_BASELINE (filter low-volume campaigns to avoid false positives), ALERT_CHANNEL.

Anti-false-positive dev tip:

Always filter out low-volume campaigns before comparing (e.g. skip if baseline spend is below $55). On an account with 30 campaigns, 20 of which are micro-campaigns at $5-11/day, an unfiltered check will spam Slack with 15 alerts/day. The signal-to-noise ratio drops, the team starts ignoring the alerts, and you miss the real anomalies. A minimal sketch of this check follows.
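
A Code-node sketch of the anomaly check with this filter, assuming a prior node has merged the current-window spend and the 7-day baseline onto each item (the field names cost_window and cost_baseline are illustrative):

// Code node (sketch): flag campaigns whose spend blows past the baseline.
// cost_window and cost_baseline are assumed computed by prior GAQL nodes.
const THRESHOLD = parseFloat($env.SPEND_VARIATION_THRESHOLD || "2.0");
const MIN_BASELINE = parseFloat($env.MIN_SPEND_BASELINE || "55");

const anomalies = [];
for (const item of $input.all()) {
  const c = item.json;
  // Skip low-volume campaigns: a $5/day campaign doubling is noise, not signal
  if (c.cost_baseline < MIN_BASELINE) continue;
  const ratio = c.cost_window / c.cost_baseline;
  if (ratio >= THRESHOLD) {
    anomalies.push({
      campaign: c.campaign_name,
      spend_now: c.cost_window.toFixed(2),
      spend_baseline: c.cost_baseline.toFixed(2),
      ratio: ratio.toFixed(2)
    });
  }
}
return anomalies.map(a => ({ json: a }));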

Flow 3-4: sync CRM conversions to Google Ads

Flows 3-4 cover pushing offline conversions from a CRM to Google Ads (Offline Conversion Imports + Customer Match). It's one of the most profitable use cases for B2B SaaS and lead gen with long sales cycles.

Flow 3 — Push HubSpot deals won to Google Ads offline conversions

Trigger: HubSpot webhook (deal stage = closed-won). Logic: receive the webhook, retrieve the deal's gclid (stored in a custom property), format the payload per the Google Ads UploadClickConversion spec, and push it to the API.

Flow architecture:

[Webhook HubSpot] -> [Extract GCLID + value] -> [Format payload Google Ads]
                                              -> [HTTP POST UploadClickConversion]
                                              -> [Log success/error in PostgreSQL]
                                              -> [Slack notify if error]

The Code node that builds the payload for the Google Ads offline conversion upload:

// Code node: format payload
const dealData = $input.first().json;
const gclid = dealData.properties.hs_gclid;
const dealValue = parseFloat(dealData.properties.amount);
const closeDate = dealData.properties.closedate;

if (!gclid) {
  // No GCLID = not trackable (organic deal or non-Google source)
  return [{ json: { skip: true, reason: "no_gclid" } }];
}

// ISO format required by Google Ads
const conversionDateTime = new Date(closeDate)
  .toISOString()
  .replace("T", " ")
  .substring(0, 19) + "+00:00";

return [{
  json: {
    conversions: [{
      gclid,
      conversion_action: `customers/${$env.CUSTOMER_ID}/conversionActions/${$env.CONV_ACTION_ID}`,
      conversion_date_time: conversionDateTime,
      conversion_value: dealValue,
      currency_code: "USD"
    }],
    partial_failure: true,
    validate_only: false
  }
}];
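
The downstream HTTP Request node then POSTs this payload to the conversion upload endpoint, with the same developer-token and login-customer-id headers as Flow 1 (v17 assumed, matching the other flows):

POST https://googleads.googleapis.com/v17/customers/{{$env.CUSTOMER_ID}}:uploadClickConversions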

Variables to customize: CUSTOMER_ID, CONV_ACTION_ID (the ID of the "Closed Deal" conversion action created upstream in Google Ads UI). See our Google Ads conversion tracking guide for conversion action creation and offline imports functional framework.

Critical pitfalls:

  • GCLID expires 90 days after the click. If your sales cycle exceeds 90 days, the push must be done before expiration or use GBRAID/WBRAID for iOS.
  • Deal value must be in account native currency (USD, EUR, GBP). No mix.
  • partial_failure: true allows Google to accept valid conversions even if a single one in the batch is invalid. Without this flag, an error rejects the entire batch.

Flow 4 — Sync GA4 audiences to Customer Match

Trigger: Cron, daily at 3am. Logic: query a GA4 audience (e.g. "users who did pricing_page_view without purchase"), retrieve the emails, hash them in SHA-256, and push to a Google Ads Customer Match list.

The flow chains 4 main nodes: GA4 Data API pull -> SHA-256 hash node (custom Code, sketched below) -> Google Ads OfflineUserDataJobService -> Slack confirmation. See our Customer Match first-party data guide for the complete audience strategy.
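
A sketch of the SHA-256 hash node, assuming NODE_FUNCTION_ALLOW_BUILTIN=crypto is set in the n8n environment so the Code node can require the built-in module:

// Code node (sketch): normalize then SHA-256 hash emails for Customer Match
const crypto = require('crypto');

return $input.all().map(item => {
  // Customer Match requires lowercasing and trimming values before hashing
  const normalized = String(item.json.email).trim().toLowerCase();
  return {
    json: {
      hashed_email: crypto.createHash('sha256').update(normalized).digest('hex')
    }
  };
});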

Flow 5-6: weekly email reporting + Looker Studio

The last 2 flows cover recurring reporting: a weekly email with summary KPIs, and a BigQuery push + Looker Studio refresh for dashboards.

Flow 5 — Weekly performance report HTML email

Trigger: Cron, every Monday 8am. Logic: pull the key KPIs of the last 7 days (spend, clicks, conversions, CPA, ROAS) per campaign, format an HTML email, and send it via SMTP to the distribution list. Our 2-input CPA calculator returns your CPA plus the median for your vertical.

The central Code node that formats the HTML:

// Format HTML weekly report
const campaigns = $input.all().map(i => i.json);

let html = `
<h2 style="font-family: Arial; color: #1a202c;">
  Google Ads Weekly Report - Week ${getWeekNumber()}
</h2>
<p>Period: ${getDateRangeLabel()}</p>
<table border="1" cellpadding="10" style="border-collapse:collapse; font-family:Arial; font-size:13px;">
  <tr style="background:#f5f5f5;">
    <th>Campaign</th><th>Spend</th><th>Clicks</th><th>Conv</th><th>CPA</th><th>ROAS</th>
  </tr>
`;

let totalSpend = 0, totalConv = 0;
for (const c of campaigns) {
  const cpa = c.conversions > 0 ? (c.cost_usd / c.conversions).toFixed(2) : "N/A";
  const roas = c.cost_usd > 0 ? (c.conversions_value / c.cost_usd).toFixed(2) : "N/A";
  html += `
    <tr>
      <td>${c.name}</td>
      <td>${c.cost_usd.toFixed(2)} USD</td>
      <td>${c.clicks}</td>
      <td>${c.conversions.toFixed(1)}</td>
      <td>${cpa} USD</td>
      <td>${roas}</td>
    </tr>
  `;
  totalSpend += c.cost_usd;
  totalConv += c.conversions;
}

html += `
</table>
<p><strong>Total Spend:</strong> ${totalSpend.toFixed(2)} USD</p>
<p><strong>Total Conversions:</strong> ${totalConv.toFixed(1)}</p>
<p><strong>Avg CPA:</strong> ${totalConv > 0 ? (totalSpend / totalConv).toFixed(2) + " USD" : "N/A"}</p>
`;

function getWeekNumber() {
  const now = new Date();
  const start = new Date(now.getFullYear(), 0, 1);
  const days = Math.floor((now - start) / 86400000);
  return Math.ceil((days + start.getDay() + 1) / 7);
}

function getDateRangeLabel() {
  const end = new Date();
  const start = new Date();
  start.setDate(end.getDate() - 7);
  return `${start.toISOString().split("T")[0]} -> ${end.toISOString().split("T")[0]}`;
}

return [{ json: { html_body: html, subject: `Google Ads Report W${getWeekNumber()}` } }];

Variables: EMAIL_RECIPIENTS (comma-separated list), SMTP_* (host, port, user, pass — n8n SMTP credentials).

Flow 6 — Daily push BigQuery + refresh Looker Studio

Trigger: Cron, daily at 4am. Logic: pull all granular KPIs (per campaign, keyword, device, and geo) over the last 24h, append them to a BigQuery table, and trigger a refresh of the Looker Studio dashboard via webhook.

It's the classic data warehousing pattern: BigQuery as single source of truth, Looker Studio (formerly Data Studio) as visualization layer, n8n as ETL orchestrator. Advantage vs Python script: zero scheduler infra to maintain, native observability in n8n.
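
A sketch of the Code node that flattens GAQL rows into BigQuery-ready records ahead of the insert (field names are assumptions; match them to your table schema):

// Code node (sketch): GAQL rows -> flat records for a BigQuery append
const snapshotDate = new Date(Date.now() - 86400000).toISOString().split("T")[0];

return $input.all().map(item => {
  const r = item.json;
  return {
    json: {
      snapshot_date: snapshotDate,
      campaign_id: r.campaign.id,
      campaign_name: r.campaign.name,
      device: r.segments && r.segments.device,
      cost: r.metrics.cost_micros / 1e6,  // Google Ads returns micros
      clicks: r.metrics.clicks,
      conversions: r.metrics.conversions
    }
  };
});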

Best practices: error handling, retry, logging

An n8n workflow that runs 24/7 without error handling fails silently, and you discover the problem 3 weeks later when a client asks why the report no longer arrives. Four critical patterns:

Pattern 1 — Dedicated error workflow. In Settings > Error Workflow, design a specific workflow that triggers when any other workflow fails. The error workflow receives the error payload (workflow name, node, error message, timestamp) and sends a Slack alert. All production workflows must point to this error workflow.

Pattern 2 — Retry on failure. On each critical node (HTTP Request notably), activate Retry On Fail with 3 retries and a 30s delay. For transient errors (rate limit, network timeout), retry suffices. For permanent errors (auth expired, invalid argument), the node fails after 3 attempts and the error workflow activates.

Pattern 3 — Strategic Continue On Fail. For workflows that process N items in parallel (e.g.: push 100 HubSpot conversions to Google Ads), activate Continue On Fail on the mutation node. The workflow continues even if 5 items fail out of 100. Add an IF node afterward to separate successes from errors and log errors in a dedicated Postgres node.

Pattern 4 — Pruning historical executions. In the Docker .env, configure EXECUTIONS_DATA_MAX_AGE=336 (336 hours = 14 days) to automatically purge historical executions. Otherwise the n8n DB grows indefinitely. For 50 active workflows with 100 executions/day each, plan for ~150k rows in 14 days.
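
A reusable Code node that emits a structured log entry, to route into the Postgres logging node from Pattern 3: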

// Code node: log structure for observability
const ts = new Date().toISOString();
const workflowName = $workflow.name;
const executionId = $execution.id;
const nodeOutput = $input.first().json;

return [{
  json: {
    log_level: "INFO",
    timestamp: ts,
    workflow: workflowName,
    execution_id: executionId,
    event: "workflow_completed",
    items_processed: $input.all().length,
    metadata: nodeOutput
  }
}];

Google Ads refresh token:

Google Ads refresh_tokens can be revoked after 90 days of inactivity or if Google detects suspicious behavior (account password change, login from a new device). If your workflow starts failing with INVALID_GRANT, regenerate the refresh_token via the OAuth procedure and update it in n8n Credentials. Set up a dedicated Slack alert on this error code (sketched below) so you don't discover it 3 days later.
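
A sketch of the routing logic inside the global Error Workflow, assuming the standard Error Trigger payload (workflow name plus error message):

// Error Workflow Code node (sketch): route INVALID_GRANT to a dedicated channel
const e = $input.first().json;
const message = ((e.execution || {}).error || {}).message || "";

return [{
  json: {
    channel: message.toLowerCase().includes("invalid_grant")
      ? "#google-ads-auth"   // hypothetical channel for token incidents
      : "#ops-alerts",
    text: `[n8n ERROR] ${e.workflow && e.workflow.name}: ${message}`
  }
}];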

n8n limits vs direct API

n8n is an excellent low-code compromise, but it has limits. Four cases where a standalone Python or Node.js script (see our Python Google Ads API setup guide) remains superior.

Limit 1 — Very high volume (above 500k operations/day). n8n excels at moderate volumes (1,000 to 100,000 ops/day). Beyond that, per-node latency and DB write overhead slow things down. A Python script with native batch operations processes 1M ops in 1h, vs 4-6h on n8n.

Limit 2 — Complex algorithmic logic. For workflows requiring ML inference, statistical regressions, or clustering, the n8n Code node allows JS but in a constrained environment. A Python script with scikit-learn / XGBoost remains more practical.

Limit 3 — Product integration. If you embed Google Ads in an internal product (dashboard, SaaS app), n8n is too "internal tool" and too heavy. Direct API via SDK remains the right choice.

Limit 4 — Hot debugging on prod. n8n offers correct but not optimal observability (no full stacktrace on Code nodes, limited logs). For a mature stack, Python with a logging + Sentry setup surpasses n8n in debugging UX.

For 80% of an SMB or mid-market's Google Ads use cases (monitoring, CRM sync, reporting, alerting), n8n is largely sufficient and the cost/maintenance ratio remains unbeatable. For accounts that want to industrialize without infra ops, our Auto-optimization module covers the 6 flows above in managed mode (zero VPS to maintain, zero refresh_token to renew), with native dashboard and alerting. See also our Google Ads audit checklist, our 10 Google Ads scripts guide, our Zapier vs Make comparison for no-code options, and our MCP Google Ads Claude Desktop guide for the conversational layer.

Common errors to avoid in n8n production

Five errors recur in self-hosted n8n setups observed in audits. Each can cost several days of silent incident or bad Google Ads decisions based on incomplete data. Here are the pitfalls and how to avoid them.

1. No monitoring on the workflow itself. Diagnosis: the workflow starts failing after a Google Ads API update or a revoked refresh_token, but nobody notices until a client asks why the weekly report no longer arrives. Correction: configure a global Error Workflow that posts to Slack #ops-alerts on every failure, plus a heartbeat cron (a trivial workflow that posts a daily OK) to detect cases where the n8n scheduler itself is down. A critical workflow without a heartbeat is a monitoring hole that can last weeks.

2. Storing the encryption_key in the same Git repo as the compose file. Diagnosis: the .env with N8N_ENCRYPTION_KEY gets committed by accident, and anyone who clones the repo can decrypt all Google Ads credentials. Correction: NEVER commit the .env. Use a secret manager (Doppler, Vault, AWS Secrets Manager, GCP Secret Manager) or at minimum store the key in a password manager with audited access. If the key has already been committed publicly, immediately regenerate it and reconfigure all credentials.

3. Active workflows on the test account with prod login_customer_id. Diagnosis: a developer copies a workflow from the test account to prod without changing the login_customer_id, the workflow modifies budgets on the wrong account. Correction: use distinct environment variables per environment (CUSTOMER_ID_TEST, CUSTOMER_ID_PROD) and a strict naming convention (workflow tagged [PROD] or [TEST] in its name). Systematically test in "Execute Workflow" mode on test account before activating in prod.

4. Postgres database saturating without execution pruning. Diagnosis: after 6 months in prod, the Postgres database exceeds 50 GB, workflows slow down, backups take hours. Correction: configure EXECUTIONS_DATA_PRUNE=true and EXECUTIONS_DATA_MAX_AGE=336 (14 days) from initial setup. For critical workflows whose logs you want to keep longer, export them to BigQuery or a separate datastore at execution time rather than bloating the n8n DB.

5. Looping without throttling on Google Ads mutation batches. Diagnosis: a workflow that syncs 5,000 Customer Match emails fails with RESOURCE_EXHAUSTED because it calls the API in parallel for all items. Correction: add a SplitInBatches node with a batch size of 100 and a Wait node of 2 seconds between batches, as shown below. Total throughput drops slightly, but the error rate goes from 30-40% to under 1%. The same logic applies to all batch operations toward Google Ads, HubSpot, Salesforce, and other rate-limited APIs.
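
The node chain, in the same notation as Flow 3:

[SplitInBatches: 100] -> [HTTP Request: mutation] -> [Wait: 2s] -> (back to SplitInBatches until all batches are done)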

For official resources, see the n8n documentation (excellent level of detail) and the official n8n GitHub repo to follow releases.

FAQ

Is n8n really free in self-hosted?

Yes, n8n Community Edition is licensed under the Sustainable Use License (fork-friendly but not for resale as-is). In self-hosted, you only pay the cost of your server ($5 to $22/month on DigitalOcean, Hetzner, OVH for a standard setup). The Cloud version, n8n.cloud, is paid (starting at $22/month) for those who don't want to host. The critical difference vs Zapier/Make: no per-execution pricing. On Zapier, 5,000 zaps/month cost you ~$80/month minimum. On n8n self-hosted, 50,000 executions per day don't change the infra cost. For a Google Ads account that runs 100+ workflow executions per day (hourly checks, daily reports), the ROI flips in favor of n8n from month 3.

Do you need to be a developer to use n8n with Google Ads?

Intermediate level is enough. n8n is a visual low-code tool with a drag-drop editor (Zapier-style) but also allows inline JavaScript for custom transformations. For the 6 flows we detail, you'll need to: understand an HTTP/REST request (n8n's Google Ads nodes call the API directly), know how to read a JSON payload, and ideally have generated an OAuth refresh_token once via the Google console. No need to write Python or TypeScript from scratch — Google Ads integration is preconfigured. For very custom workflows (e.g.: ML inference before decision), the 'Code' node allows embedding JavaScript or Python. Plan for 1 to 2 days of learning to be autonomous on basic flows, 1 week to master advanced flows.

Can n8n workflows run 24/7 without intervention?

Yes, that's exactly what they're designed for. Each workflow has a trigger (cron schedule, webhook, polling, manual) and once active, it runs on that trigger indefinitely. The n8n worker in Docker keeps triggers alive, retries automatically on errors, and logs each execution in the DB. On accounts observed in public Google Ads benchmarks, critical workflows (CPC monitoring, anomaly alerts) run continuously for 12+ months without manual intervention beyond version updates. The only recurring maintenance: refreshing the Google Ads refresh_token every 90 days or so (otherwise Google revokes it), and updating n8n every 1-2 months for security patches.

How to import a JSON workflow into n8n?

Very simple. In the n8n interface, top right, click on the 3 dots then 'Import from File' or 'Import from URL'. Paste JSON content or file URL. n8n loads the workflow with all nodes, connections, and variables. After import, you must reconfigure credentials (Google Ads OAuth, Slack webhook, email SMTP) because they aren't included in the JSON export for security reasons. Plan for 10 to 20 minutes to customize an imported workflow to your specific account. The repo github.com/steerads/n8n-google-ads-flows contains the 6 ready-to-import JSON workflows, with a README explaining variables to customize for each.

n8n vs Google Ads Scripts vs direct API: when to choose what?

Think of it as a complexity pyramid. Google Ads Scripts for fast single-account work (15 min setup), limited JS syntax but Google-hosted. Self-hosted n8n for multi-system workflows that orchestrate Google Ads + CRM + Slack + email + sheets without heavy coding, with native scheduler and retry. Direct Python/Node API for very high volume complex workflows (10,000+ ops/day) or when integrating Google Ads into an internal product. On accounts observed in public benchmarks, the dominant pattern for SMBs and mid-market teams with a light data team is: Scripts for tactical work (hourly budget alerts), n8n for strategic work (CRM sync, reporting), and the Python API only if the data warehouse stack requires it. See our Zapier vs Make comparison for complementary no-code options.
