How to Migrate n8n from SQLite to Postgres

Ahmed


I’ve migrated n8n instances where a “simple DB switch” turned into broken credentials and missing executions because the encryption key or volumes weren’t handled carefully.


Migrating n8n from SQLite to Postgres comes down to backing up the right data, keeping your encryption key consistent, and moving entities into a clean PostgreSQL database without surprises.



What you gain (and what changes) when you move to Postgres

SQLite is excellent for getting started, but it becomes a bottleneck when you push concurrency, run queue mode, or need strong operational controls (backups, point-in-time recovery, monitoring). PostgreSQL gives you better concurrency, safer multi-user behavior, and more predictable performance under load.


Real-world tradeoff: Postgres adds operational surface area. If you don’t tune connections, set sensible backups, or enforce SSL when required, you can end up with instability that feels worse than SQLite.


SQLite vs Postgres for n8n at a glance

| Area | SQLite (default) | Postgres |
| --- | --- | --- |
| Concurrency | Limited under heavy writes | Designed for concurrent reads/writes |
| Queue mode / multi-instance | Risky and easy to misconfigure | Built for multi-process / multi-worker setups |
| Backups & restore | File-based (simple, but easy to forget volume mounts) | Tooling-rich (dump/restore, automation-friendly) |
| Operational visibility | Minimal | Strong monitoring and diagnostics options |
| Common pitfall | Data loss during container updates if volumes aren't persistent | Connection limits, SSL requirements, permissions |

Before you touch anything: the 10-minute migration safety checklist

  • Find your current n8n “user folder” and confirm it’s on persistent storage (this is where the SQLite file and encryption key typically live).
  • Record your current encryption key behavior (did you set N8N_ENCRYPTION_KEY, or was it auto-generated on first run?).
  • Pause incoming webhooks (temporarily disable upstream triggers or maintenance-mode your reverse proxy if possible).
  • Stop n8n cleanly before exporting/importing to avoid inconsistent snapshots.
  • Plan what you’ll do with execution history (it can be large; decide whether you really need it in the target DB).
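The first two checklist items can be sketched as a small pre-flight script. It is a minimal sketch, assuming the user folder is reachable on the host; the `preflight` function name and the example path are hypothetical, and the auto-generated key check relies on the `config` file n8n keeps in the user folder:

```shell
#!/bin/sh
# Pre-flight check before migrating. Pass the mounted n8n user folder.
preflight() {
  dir="$1"
  ok=0
  # 1) The SQLite database must exist on persistent storage.
  if [ ! -f "$dir/database.sqlite" ]; then
    echo "MISSING: $dir/database.sqlite"
    ok=1
  fi
  # 2) Encryption key: either N8N_ENCRYPTION_KEY is set, or the
  #    auto-generated key lives in the user folder's config file.
  if [ -z "$N8N_ENCRYPTION_KEY" ] && [ ! -f "$dir/config" ]; then
    echo "MISSING: encryption key (no N8N_ENCRYPTION_KEY and no $dir/config)"
    ok=1
  fi
  [ "$ok" -eq 0 ] && echo "preflight OK"
  return "$ok"
}

# Example (hypothetical path -- use your actual mounted folder):
# preflight /var/lib/n8n/.n8n
```

If either check fails, fix it before touching anything else; both findings are cheaper to correct now than mid-migration.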

Identify where SQLite actually lives (Docker and VPS setups)

In many Docker deployments, n8n stores SQLite at /home/node/.n8n/database.sqlite inside the container, and that path should be mounted to a persistent volume. If you’re not mounting it, you’re one update away from losing workflows and executions.


n8n weakness to watch: n8n can appear to “work fine” even when you’re accidentally running with ephemeral storage. The fix is simple: make the user folder persistent (volume mount) and treat it like production data.
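A minimal Docker Compose sketch of that persistent mount, assuming a service named n8n and a named volume n8n_data (both placeholders; merge into your existing compose file):

```yaml
# Persistent mount for the n8n user folder (names are placeholders).
services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    volumes:
      - n8n_data:/home/node/.n8n
volumes:
  n8n_data:
```

A named volume survives `docker compose down` and image updates, which is exactly the failure mode ephemeral storage exposes you to.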


Quick commands to confirm the SQLite file location (Docker)

Check SQLite path inside container
docker exec -u node -it <your-n8n-container> ls -lha /home/node/.n8n

docker exec -u node -it <your-n8n-container> ls -lha /home/node/.n8n/database.sqlite

Choose your migration method

You have two reliable paths. Pick one and stick to it—mixing approaches mid-flight is how migrations get messy.


Method A (recommended): Export/Import entities to move SQLite ➜ Postgres

This is the cleanest approach when you want the Postgres database to become a faithful clone of your existing n8n state.

  • Pros: Moves more than just workflows/credentials (full entity set), and is designed for switching database types.
  • Cons: The target database should be empty before import (or you must explicitly truncate). Execution history can be large if you include it.

n8n weakness to watch: If n8n is still running while you export/import, you can end up with out-of-sync state. Always stop n8n for the migration window.


Method B: Export workflows + credentials (decrypted) and re-import

This is useful when you don’t want to move everything (or when your current database is messy), but it shifts more responsibility onto you to validate what didn’t move (tags, users/projects, variable-like settings, etc.).

  • Pros: Lets you rebuild a “clean room” Postgres n8n without bringing along unwanted baggage.
  • Cons: Decrypted credentials exports are sensitive; you must handle them like secrets. Also, you’ll likely lose some system-level state unless you recreate it.

Step-by-step: Migrate using entities (SQLite ➜ Postgres)

1) Stop n8n and snapshot what you have

Stop the n8n service first. If you’re on Docker Compose, bring the stack down. If you’re on a process manager, stop the process cleanly.

Stop Docker Compose stack
cd /path/to/your/n8n-compose

docker compose down

Then copy the entire n8n user folder (or at minimum keep a safe copy of your database.sqlite and encryption-related files). This gives you a rollback point.
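As a sketch, the rollback copy can be a timestamped directory copy; the `backup_n8n` function name and the example path are assumptions, not part of n8n itself:

```shell
#!/bin/sh
# Copy the n8n user folder (database.sqlite, config, etc.) to a
# timestamped sibling directory and print the backup path.
backup_n8n() {
  src="$1"
  dest="${src%/}-backup-$(date +%Y%m%d-%H%M%S)"
  cp -a "$src" "$dest" && echo "$dest"
}

# Example (hypothetical path):
# backup_n8n /var/lib/n8n/.n8n
```

Keep the backup on a different disk or host if you can; a rollback point on the same failing volume is not a rollback point.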


2) Provision Postgres (local container or managed service)

For production, a managed PostgreSQL service such as Amazon RDS for PostgreSQL gives you mature backups and monitoring out of the box; keep the choice consistent with your infrastructure standards.


Service challenge: Managed Postgres can throttle or disconnect if you exceed connection limits or misconfigure network rules. The fix is to keep n8n’s pool size conservative and verify security groups/firewall rules before cutover.


If you prefer Docker-based Postgres for a self-hosted box, make sure the volume is persistent.
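A minimal compose sketch for that self-hosted option, with placeholder names and credentials (swap in real secrets before use):

```yaml
# Self-hosted Postgres with a persistent data volume (placeholders).
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: n8n
      POSTGRES_USER: n8n_user
      POSTGRES_PASSWORD: change-me
    volumes:
      - pg_data:/var/lib/postgresql/data
volumes:
  pg_data:
```

The `pg_data` named volume is the part people forget; without it, the database dies with the container.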


3) Configure n8n to use Postgres

n8n switches DB backends through environment variables. The critical one is DB_TYPE=postgresdb, plus your host, database name, user, password, port, and schema.
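The core variable set looks like this (host, database, user, and password values below are placeholders for your own infrastructure):

```shell
# Core Postgres settings for n8n (values are placeholders).
export DB_TYPE=postgresdb
export DB_POSTGRESDB_HOST=postgres        # your DB hostname
export DB_POSTGRESDB_PORT=5432
export DB_POSTGRESDB_DATABASE=n8n
export DB_POSTGRESDB_USER=n8n_user
export DB_POSTGRESDB_PASSWORD=change-me   # inject from a secret manager
export DB_POSTGRESDB_SCHEMA=public
```

In Docker Compose, the same names go under the service's `environment:` key; the point is that every n8n process gets the identical set.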


n8n weakness to watch: In multi-container or queue-mode setups, every n8n process must receive the full Postgres env var set. If one container/pod is missing variables, you can silently fall back to SQLite and create a split-brain mess.


See the official n8n supported databases settings reference for the authoritative list of configuration variables.


4) Start n8n against an EMPTY Postgres database

Start n8n with Postgres variables pointing to a new, empty database/schema. Let n8n create tables. Then stop it again before importing entities (this keeps the import clean and predictable).
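Creating that empty database usually happens on the Postgres side first; a sketch of the DDL, assuming the placeholder names from earlier (run it as a user with sufficient privileges):

```sql
-- Names and password are placeholders; match them to your n8n env vars.
CREATE USER n8n_user WITH PASSWORD 'change-me';
CREATE DATABASE n8n OWNER n8n_user;
```

Making `n8n_user` the owner sidesteps most permission issues during n8n's initial table creation.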


5) Export entities from your existing SQLite deployment

Export entities (run where SQLite-based n8n can access its DB)
# If n8n is installed via npm:
n8n export:entities --outputDir=./n8n-entities

# If n8n runs in Docker (run inside the container):
docker exec -u node -it <your-n8n-container> n8n export:entities --outputDir=/home/node/.n8n/n8n-entities

Optional but important: If you truly need execution history and data tables, include them intentionally (they can balloon export size and import time). Don’t “accidentally” migrate gigabytes.


6) Import entities into Postgres

Import entities into the Postgres-backed n8n
# Run where the Postgres-backed n8n instance can access the exported folder
# (and ensure Postgres env vars are set for this n8n process/container)

# npm:
n8n import:entities --inputDir=./n8n-entities --truncateTables true

# Docker:
docker exec -u node -it <your-postgres-n8n-container> n8n import:entities --inputDir=/home/node/.n8n/n8n-entities --truncateTables true

Import challenge: If you import into a non-empty database (or forget truncation), you can get conflicts, duplicates, or partially overwritten state. The fix is to import into an empty database or explicitly truncate first—then re-run import once.


7) Bring n8n up and validate the migration

  • Log in and confirm workflows load correctly.
  • Open a few workflows that use credentials and run a safe test execution.
  • Confirm schedules, webhooks, and triggers behave as expected after you re-enable inbound traffic.

Credential safety: don’t break decryption during the move

n8n encrypts credentials using an encryption key. If you were relying on an auto-generated key stored in the n8n user folder, and you spin up a new environment without that key (or without a consistent N8N_ENCRYPTION_KEY), credentials may appear but fail to decrypt at runtime.


Practical fix: Set N8N_ENCRYPTION_KEY explicitly and keep it identical across all n8n instances that must read the same credentials. Store it in your secret manager and inject it at runtime.
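If you are switching to a fixed key before the migration, you can generate one with standard tooling; a sketch (any sufficiently long random string works -- the `openssl` invocation is just one common way to produce it):

```shell
# Generate a fixed encryption key ONCE, store it in your secret
# manager, and inject the SAME value into every n8n process.
N8N_ENCRYPTION_KEY="$(openssl rand -hex 32)"
echo "$N8N_ENCRYPTION_KEY"
```

Note: only do this for a fresh key strategy. If existing credentials were encrypted with an old key, you must keep that old key, not generate a new one.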


Common migration mistakes (and the fastest fixes)

n8n “migrated” but it’s still using SQLite

  • Cause: Missing DB_TYPE=postgresdb or missing host/user/password variables in one container/pod.
  • Fix: Ensure every n8n process (web + workers) has the full Postgres env var set, then restart all processes.
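A quick way to catch the missing-variable case is to check the env file each process is launched with; a minimal sketch, assuming an env-file based deployment (the `check_db_type` helper and file names are hypothetical):

```shell
#!/bin/sh
# Fail unless an env file explicitly pins n8n to Postgres.
check_db_type() {
  grep -q '^DB_TYPE=postgresdb$' "$1"
}

# Example (hypothetical files -- run once per process):
# check_db_type ./n8n-web.env    || echo "web would fall back to SQLite"
# check_db_type ./n8n-worker.env || echo "worker would fall back to SQLite"
```

Run it against every process's configuration, not just the main container; the split-brain failure comes from the one file you didn't check.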

Credentials show up but fail when you run a workflow

  • Cause: Encryption key changed between environments.
  • Fix: Restore the original encryption key behavior (same N8N_ENCRYPTION_KEY or same user-folder key files) and restart.

Import fails or produces partial data

  • Cause: Importing into a non-empty DB, permission issues, or schema mismatch.
  • Fix: Recreate a clean database/schema and re-import once, or re-run import with truncation where appropriate.

FAQ

Can you migrate n8n from SQLite to Postgres with near-zero downtime?

Yes, if you freeze writes. Pause triggers/webhooks, stop n8n, export entities, import into Postgres, then start n8n and re-enable traffic. The “downtime” becomes the export/import window, which stays small unless you bring over large execution history.


Do you need to move the old encryption key when migrating?

You need consistent credential decryption behavior. If the old instance used a specific N8N_ENCRYPTION_KEY, keep it the same. If it used an auto-generated key stored in the user folder, make sure the new instance uses the same key material (or switch to a fixed N8N_ENCRYPTION_KEY before migrating).


How do you confirm n8n is actually connected to Postgres?

Check your deployment environment variables and logs after restart. If you’re running multiple n8n processes (like web + workers), confirm every process has identical Postgres settings. Then validate by creating a new workflow and confirming it persists across restarts.


Should you migrate execution history?

Only if you have a concrete reason (auditing, incident review, or regulated retention). Execution history can explode in size and extend migration time. A clean cutover often keeps production stable and makes future backups faster.


What Postgres settings matter most for n8n stability?

Connection management and permissions matter first. Keep n8n’s connection pool modest, ensure the database user can create/alter required objects in the target schema, and enable SSL if your provider requires it. Most “random” issues are actually timeouts, exhausted connections, or blocked network rules.
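n8n exposes pool and SSL tuning through environment variables; a sketch of conservative values (variable names follow n8n's database settings reference -- verify them against the docs for your n8n version before relying on them):

```shell
# Conservative connection/SSL tuning (verify names for your n8n version).
export DB_POSTGRESDB_POOL_SIZE=4                   # keep total connections well under the server limit
export DB_POSTGRESDB_SSL_ENABLED=true              # if your provider requires SSL
export DB_POSTGRESDB_SSL_REJECT_UNAUTHORIZED=true  # verify the server certificate
```

Remember the pool size is per process: with one web container and two workers at a pool of 4, you are holding up to 12 connections against the server's limit.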


Is it safer to migrate by exporting workflows and credentials instead of entities?

It can be safer when you want a clean rebuild, but it’s also easier to miss hidden state (like user/project structure or certain internal entities). Use workflows/credentials export when you intentionally want to curate what gets moved; use entities when you want the most faithful migration.



Conclusion

If you keep your n8n user folder and encryption key under control, moving to Postgres is one of the highest-impact upgrades you can make for reliability and scale. Do the migration once with clean exports/imports, validate credentials and triggers carefully, and you’ll end up with an automation stack that behaves like real production infrastructure.

