Loop Over Items in n8n Without Infinite Loops
I learned the hard way that a “simple” loop can quietly multiply executions until it floods logs and duplicates writes, so now I design every loop with a hard stop, a progress guard, and a dedupe key from the first draft. If you want to loop over items in n8n without infinite loops, the goal is straightforward: process many items reliably while guaranteeing the workflow can’t re-trigger itself or re-process the same records forever.
Why infinite loops happen in n8n (and how to spot them fast)
Infinite loops usually appear when the workflow feeds output back into an earlier step without a clear termination condition—or when a trigger “sees” the workflow’s own side effects and fires again. You’ll recognize it by:
- Execution counts rising rapidly (sometimes per second).
- Repeated processing of identical item IDs or payloads.
- Growing memory/queue pressure because the same job is re-enqueued.
- Downstream tools receiving duplicates (CRMs, email tools, databases).
The fix is not “loop less.” The fix is: make the loop provably finite and make processing idempotent (safe to repeat without changing the outcome).
The safest mental model: finite loop + idempotent work
For production-grade looping, keep these rules:
- Finite loop: the loop must progress toward completion and stop after N items or when no items remain.
- Idempotent actions: a repeated run should not create extra rows, extra tickets, extra emails, or extra charges.
- Explicit state: store a cursor, page token, or processed IDs so the workflow knows what “done” means.
- Backpressure: batch processing prevents timeouts and protects APIs with rate limits.
Use the Loop Over Items pattern the safe way
The cleanest approach is to isolate three stages:
- Collect: fetch or generate the list of items to process (from API, DB, webhook, sheet).
- Process: handle items in batches with a stable cursor (index/page token).
- Commit: write results with dedupe protection (unique key / upsert / “already processed” check).
If your workflow mixes “collect” and “process” with side effects in the middle, debugging becomes painful and loops become likely.
Practical guardrails that prevent runaway loops
1) Add a hard stop (max iterations)
If the loop depends on external pagination or dynamic inputs, set a maximum number of iterations. This prevents a bug (or a bad API response) from creating endless paging.
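As a minimal sketch, a Code node placed inside the loop can enforce that cap by counting iterations in workflow static data and aborting once the limit is exceeded (the cap of 500 and the counter name are illustrative; reset the counter at the start of each run, e.g. in the Collect step):

```javascript
// Hard-stop guard: abort the execution once the loop has run too many times.
const MAX_ITERATIONS = 500; // illustrative cap

const staticData = $getWorkflowStaticData('global');
staticData.iterationCount = (staticData.iterationCount || 0) + 1;

if (staticData.iterationCount > MAX_ITERATIONS) {
  throw new Error(`Hard stop: exceeded ${MAX_ITERATIONS} iterations`);
}

return $input.all();
```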
2) Ensure progress (cursor must move)
If you track a page token, offset, or index, make sure it changes each iteration. If the cursor is missing, unchanged, or repeats, stop immediately and log an error event.
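A minimal sketch of that guard in a Code node, assuming the previous node returns a nextPageToken field (the field name is illustrative):

```javascript
// Cursor-progress guard: stop if the page token is missing or did not change.
const staticData = $getWorkflowStaticData('global');
const nextCursor = $input.first().json.nextPageToken; // assumed field name

if (!nextCursor || nextCursor === staticData.lastCursor) {
  throw new Error(`Cursor did not advance (got: ${nextCursor}); aborting loop`);
}

staticData.lastCursor = nextCursor;
return $input.all();
```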
3) Separate triggers from writes that can re-trigger
A common loop is: workflow writes to a table → trigger watches that table → workflow triggers again. Fix it by either:
- Writing to a different table/collection than the trigger monitors, or
- Adding a “source=workflow” flag and filtering it out in the trigger query (see the filter sketch after this list), or
- Switching to scheduled polling with a cursor instead of reactive triggers.
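For the flag approach, a minimal sketch of the filter looks like this, assuming the watched records carry a source column; the same condition can live directly in the trigger’s query instead:

```javascript
// Drop records the workflow itself wrote so they never re-enter the loop.
return $input.all().filter(item => item.json.source !== 'workflow');
```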
4) Make every item action idempotent
Even with perfect loop control, retries happen (network errors, 429 rate limits, transient API issues). If a retry creates duplicates, your loop becomes expensive fast. Use one of:
- Database unique constraints + upsert
- “Processed IDs” registry (DB table, Redis set, n8n static data)
- External tool’s idempotency keys (when available)
A reliable batching workflow you can reuse
This template uses batching, a cursor-like index, and a dedupe key strategy. Adapt the placeholders to your data.
1) Trigger (Webhook/Cron/Manual)
2) Collect items (API/DB query)
   - Output: items[] with stable IDs
3) Batch/Loop node
   - Batch size: 10–100 (based on API limits)
   - Hard stop: maxIterations = 500 (example)
4) For each batch:
   a) Validate item has ID
   b) Dedupe check (DB/Redis/static registry)
   c) Process (API call / transform)
   d) Commit with upsert or unique key
5) If cursor/page token repeats or does not advance:
   - Stop + log error
Idempotency in practice: dedupe keys that actually work
A dedupe key should be deterministic and tied to a real business identity. Examples:
- CRM sync: contactEmail or crmContactId
- Invoices: invoiceNumber + vendorId
- Tickets: sourceSystemId + issueType
- Content pipeline: url or canonicalUrl
If you use random UUIDs created inside the loop, retries will bypass dedupe and create duplicates.
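As a sketch, derive the key once from stable business fields before processing; invoiceNumber and vendorId here are just the example fields from the list above:

```javascript
// Attach a deterministic dedupeKey built from stable business identifiers.
return $input.all().map(item => {
  const { invoiceNumber, vendorId } = item.json;
  return { json: { ...item.json, dedupeKey: `${invoiceNumber}:${vendorId}` } };
});
```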
Option A: Dedupe with n8n static data (fast, but know the limitation)
Static data can be a practical short-term guard for low-volume automations, especially during development. Track processed IDs and skip repeats. The challenge is durability and scale: static registries can grow, and behavior depends on how and where n8n runs.
On each item (n8n Code node, “Run Once for All Items” mode):

```javascript
// Skip any item whose key is already in the processed registry (n8n static data).
const staticData = $getWorkflowStaticData('global');
const registry = staticData.processedKeys || {};

const fresh = [];
for (const item of $input.all()) {
  const key = item.json.id; // or a deterministic dedupeKey
  if (registry[key]) continue;  // key exists: skip this item
  registry[key] = Date.now();   // mark as seen; downstream nodes process the item
  fresh.push(item);
}

staticData.processedKeys = registry;
return fresh;
```
Real limitation: this can become a bottleneck as the registry grows, and it’s not ideal for multi-worker or high-volume production. When you need strong guarantees, move dedupe to a database or a dedicated store.
Option B: Dedupe + locking with Redis (robust under concurrency)
Redis is excellent when multiple executions can overlap (webhooks, parallelism, bursts). You can store processed IDs and also implement short-lived locks that prevent two workers from processing the same item simultaneously.
Official site: Redis
Challenge (be honest): Redis adds operational overhead—network dependency, persistence decisions, and monitoring. If you self-host it, you must handle backups and reliability.
Practical workaround: use a managed Redis service in a U.S. region for low latency, keep lock TTLs short (seconds/minutes), and store only compact keys (hashes) instead of full payloads.
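A minimal sketch of the lock-plus-dedupe check, assuming the ioredis package is available to your Code node (n8n typically requires external modules to be explicitly allowed) and a REDIS_URL environment variable:

```javascript
// Claim each item with SET NX EX: only one execution wins the key, and the
// TTL releases the claim automatically if a worker crashes mid-batch.
const Redis = require('ioredis');
const redis = new Redis(process.env.REDIS_URL);

const claimed = [];
for (const item of $input.all()) {
  const key = `processed:${item.json.dedupeKey}`;
  const result = await redis.set(key, '1', 'EX', 600, 'NX'); // 10-minute TTL
  if (result === 'OK') claimed.push(item); // null = another run already has it
}

await redis.quit();
return claimed;
```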
Option C: Dedupe with PostgreSQL unique constraints (most dependable for records)
If your loop ultimately writes records, a relational database unique constraint is one of the strongest anti-duplication tools you can deploy. Your workflow attempts the insert; duplicates are rejected; your logic moves on.
Official site: PostgreSQL
Challenge (be honest): schema and constraints require planning. Poor indexing or missing upsert logic can slow writes, especially when batches are large.
Practical workaround: create a dedicated table for processed keys with a unique index, and use upserts. Keep the stored data minimal: key + processed_at + status. This protects you from retries, crashes, and redeploys.
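A sketch of that registry pattern with the node-postgres (pg) client; the table, column names, and connection variable are illustrative, and the same insert can be run from n8n’s Postgres node instead:

```javascript
// Registry table (create once):
//   CREATE TABLE processed_items (
//     dedupe_key   text PRIMARY KEY,
//     processed_at timestamptz NOT NULL DEFAULT now(),
//     status       text NOT NULL DEFAULT 'done'
//   );
const { Client } = require('pg');
const client = new Client({ connectionString: process.env.DATABASE_URL });
await client.connect();

async function firstTimeSeen(dedupeKey) {
  // ON CONFLICT DO NOTHING: duplicates are rejected silently by the constraint.
  const res = await client.query(
    'INSERT INTO processed_items (dedupe_key) VALUES ($1) ON CONFLICT (dedupe_key) DO NOTHING',
    [dedupeKey]
  );
  return res.rowCount === 1; // true only for keys never committed before
}

const out = [];
for (const item of $input.all()) {
  if (await firstTimeSeen(item.json.dedupeKey)) out.push(item);
}
await client.end();
return out;
```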
n8n itself: use it strategically, not dangerously
When you build loops in n8n, the platform is not the risk—the workflow design is. Keep your loop finite, your writes idempotent, and your trigger isolated from your side effects.
Official site: n8n
Challenge (be honest): fast iteration can tempt you to “just connect nodes” until it works, then ship it. That’s how self-referential paths and accidental re-triggers slip into production.
Practical workaround: add a dedicated “Safety” step near the top: enforce max iterations, validate required fields, and stop early when the cursor doesn’t advance. Then test with a small sample set and inspect execution logs before you scale batches.
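A sketch of the field-validation part of that Safety step (the iteration cap and cursor guard shown earlier can live in the same Code node); the required field names are illustrative:

```javascript
// Safety step: fail fast if items are missing the fields the loop depends on.
const items = $input.all();
const invalid = items.filter(item => !item.json.id || !item.json.dedupeKey);

if (invalid.length > 0) {
  throw new Error(`Safety check failed: ${invalid.length} item(s) missing id or dedupeKey`);
}

return items;
```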
Comparison table: loop-safety approaches
| Approach | Best for | Main risk | How to mitigate |
|---|---|---|---|
| Batching + hard stop | Preventing runaway pagination | Stopping too early if limits are wrong | Log cursor progress and tune batch/iteration caps |
| Static dedupe registry | Low-volume workflows and quick protection | Doesn’t scale; may not fit multi-worker setups | Expire old keys, or migrate dedupe to DB/Redis |
| Redis lock + dedupe | Concurrency bursts and overlapping executions | Operational dependency | Managed Redis + short TTL locks + compact keys |
| PostgreSQL unique constraint | Guaranteed no-duplicate record creation | Schema/index mistakes can slow writes | Upserts + proper indexes + minimal stored fields |
Common mistakes that create infinite loops (and the clean fixes)
- Mistake: Feeding processed output back into the “collect” step. Fix: Keep “collect” read-only; send writes to a separate branch.
- Mistake: Trigger watches a resource you update inside the workflow. Fix: Filter out workflow-generated events or write elsewhere.
- Mistake: No hard stop when paginating APIs. Fix: Cap iterations and abort on repeated page tokens.
- Mistake: Dedupe key changes on retries (random IDs). Fix: Use deterministic keys from business identifiers.
- Mistake: Processing large batches without rate-limit handling. Fix: Lower batch size and add retry/backoff on 429/5xx, as in the sketch below.
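A generic sketch of that backoff using Node 18+’s global fetch; adapt it to whichever HTTP client or node actually calls the API:

```javascript
// Retry with exponential backoff on 429 and 5xx responses; give up otherwise.
async function fetchWithBackoff(url, maxRetries = 5) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const res = await fetch(url);
    if (res.ok) return res.json();
    if (res.status === 429 || res.status >= 500) {
      await new Promise(r => setTimeout(r, 1000 * 2 ** attempt)); // 1s, 2s, 4s, ...
      continue;
    }
    throw new Error(`Non-retryable HTTP ${res.status}`);
  }
  throw new Error(`Still failing after ${maxRetries} retries`);
}
```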
FAQ
How do you loop over items in n8n without re-triggering the workflow?
Keep the trigger isolated from any side effects it monitors. If your trigger listens to database writes, don’t write to the same table without filtering. If your trigger listens to webhooks, don’t call the same webhook URL from inside the workflow. Use a separate internal route or add a “source” flag and filter it out.
What’s the best way to guarantee “exactly-once” processing?
In practice you aim for “at-least-once execution” with “exactly-once effects.” That means retries can happen, but duplicates don’t. The most reliable method is a database-enforced unique key (plus upsert logic) so repeated attempts cannot create extra records.
What batch size should you use?
Pick the smallest batch size that stays under API limits and completes within your expected execution window. For many U.S.-hosted SaaS APIs, 10–50 is a safe starting range. If you see 429 responses, lower it and add backoff. If you see slow DB writes, lower it and optimize indexes.
How do you stop a loop when pagination breaks or repeats the same page token?
Track the previous cursor (page token, offset, last ID). If the next cursor is missing or identical, stop immediately and log the failure. That single guard prevents infinite paging on buggy endpoints or malformed responses.
Is it safe to use a static registry for processed items?
It’s safe as a temporary guard for small workflows, but it can become fragile at scale. If multiple executions overlap or you run multiple workers, move dedupe to a shared store like a database or Redis so all executions see the same state.
What’s the simplest “loop safety checklist” before you activate a workflow?
- Confirm the loop has a max-iterations cap.
- Confirm the cursor advances every iteration.
- Confirm writes are idempotent (unique key / upsert / dedupe store).
- Confirm the trigger can’t see and react to your own writes.
- Test with a small sample and verify logs show one pass only.
Conclusion
When you loop over items in n8n with a finite loop, a real dedupe key, and a trigger that can’t re-fire on your own side effects, you get workflows that stay calm under pressure: bursty webhooks, flaky APIs, and retries included. Start with batching and a hard stop, then add database- or Redis-backed idempotency the moment your automation touches real customer data.

