Migrating the *arr Stack to PostgreSQL on Kubernetes
TL;DR
- All *arr apps (Sonarr, Radarr, Prowlarr, Bazarr) and Seerr support PostgreSQL as a drop-in replacement for SQLite
- Configuration is done via environment variables — no need to modify config.xml
- For PostgreSQL 17, do not use the official migration guide’s pgloader approach directly — it assumes PG14. Use the schema-dump method described in this post
- Fresh start is fine for Prowlarr, Radarr 4K, and Bazarr. For large installs (Sonarr, Radarr 1080p), migrate your data
- If you use GitOps (Flux/ArgoCD), scale to 0 replicas in git before migrating — or your controller will restart the pod mid-migration
- Seerr has its own pgloader image and migration quirks around the migrations table — documented below
Context
My homelab runs a Talos Linux Kubernetes cluster managed via GitOps with FluxCD. All changes go through a private GitHub repository before being applied to the cluster. Secrets are encrypted with SOPS + Age. PostgreSQL is provided by a CloudNativePG cluster running PostgreSQL 17.9.
The media stack consists of:
- Sonarr — TV show management
- Radarr (1080p and 4K) — Movie management
- Prowlarr — Indexer management
- Bazarr — Subtitle management
- Seerr — Media request management (successor to Overseerr/Jellyseerr)
All apps were previously running SQLite on local-path PVCs, pinned to a single node via nodeSelector. The goal was to:
- Move databases to PostgreSQL for reliability and node mobility
- Move config PVCs from local-path to NFS (synology-nfs StorageClass)
- Remove nodeSelector constraints so pods can run on any worker
Prerequisites
- A running PostgreSQL instance accessible from your cluster
- Each app needs its own database(s) — see per-app details below
- A database user with full access to the databases
The examples below assume a PostgreSQL service reachable at postgres.databases.svc.cluster.local on port 5432, with user appuser and password managed via a Kubernetes Secret.
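The databases can be created up front. Here is a minimal sketch — assuming the appuser role already exists and using the database names from the rest of this post (trim the list to the apps you run). It only generates the SQL into a file so you can review it before applying:

```shell
# Generate CREATE DATABASE statements for every database used in this post.
# The names and the appuser role are examples; adjust to your setup.
DBS='prowlarr-main prowlarr-log radarr-4k-main radarr-4k-log
radarr-1080p-main radarr-1080p-log sonarr-main sonarr-log bazarr seerr'
for db in $DBS; do
  printf 'CREATE DATABASE "%s" OWNER appuser;\n' "$db"
done > /tmp/create-dbs.sql
# Review, then apply with:
#   psql -h postgres.databases.svc.cluster.local -U postgres -f /tmp/create-dbs.sql
cat /tmp/create-dbs.sql
```

Note the quoted identifiers: database names containing hyphens (radarr-1080p-main) must be double-quoted in SQL.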
General Pattern
1. Create the Secret
apiVersion: v1
kind: Secret
metadata:
  name: <app>-postgres
  namespace: media
stringData:
  password: your-password-here
2. Add environment variables to the Deployment
Each app uses a different prefix and variable names — covered per-app below.
3. Keep the old PVC in your manifests until migration is complete
If you use GitOps, removing the old PVC from your manifests will cause the controller to delete it. Keep pvc-old.yaml (with the original local-path PVC) in your kustomization until you have verified the migration succeeded. Only then remove it.
4. Scale to 0 before migrating data
Critical for GitOps users: If you simply run kubectl scale deployment/<app> --replicas=0, your GitOps controller will scale it back up within its reconciliation interval (typically 10 minutes). You must set replicas: 0 in git and push the change before starting the data migration.
The PostgreSQL 17 Problem
The official Servarr wiki’s migration guide (github.com/Servarr/Wiki) was written for PostgreSQL 14. On PG15+, running pgloader directly against a freshly created Postgres database fails with duplicate key errors, because the *arr apps populate default rows in several tables on first startup — and pgloader then tries to insert the same rows from SQLite.
The solution is what I’ll call the schema-dump method:
- Start the app once against Postgres → it creates all tables and populates defaults
- Stop the app
- Dump the schema only (pg_dump -s)
- Drop and recreate the databases
- Reimport the schema
- Truncate all tables (they’re now empty but correctly structured)
- Run pgloader with --with "data only"
- Fix sequences
- Start the app
This way, pgloader never sees a conflict — it loads data into clean tables using a schema that was generated by the app itself against PG17.
Prowlarr — Fresh Start
Prowlarr stores indexer configurations and sync history. Since indexers can be re-added in minutes (especially if you use Profilarr or Recyclarr for profiles), a fresh start is the practical choice.
Prowlarr uses two databases: a main database and a log database.
Environment Variables
env:
  - name: PROWLARR__POSTGRES__HOST
    value: postgres.databases.svc.cluster.local
  - name: PROWLARR__POSTGRES__PORT
    value: "5432"
  - name: PROWLARR__POSTGRES__USER
    value: appuser
  - name: PROWLARR__POSTGRES__PASSWORD
    valueFrom:
      secretKeyRef:
        name: prowlarr-postgres
        key: password
  - name: PROWLARR__POSTGRES__MAINDB
    value: prowlarr-main
  - name: PROWLARR__POSTGRES__LOGDB
    value: prowlarr-log
Full Deployment Example
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prowlarr
  namespace: media
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prowlarr
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: prowlarr
    spec:
      containers:
        - name: prowlarr
          image: ghcr.io/linuxserver/prowlarr:2.3.5
          env:
            - name: PUID
              value: "1024"
            - name: PGID
              value: "1024"
            - name: TZ
              value: Europe/Amsterdam
            - name: PROWLARR__POSTGRES__HOST
              value: postgres.databases.svc.cluster.local
            - name: PROWLARR__POSTGRES__PORT
              value: "5432"
            - name: PROWLARR__POSTGRES__USER
              value: appuser
            - name: PROWLARR__POSTGRES__PASSWORD
              valueFrom:
                secretKeyRef:
                  name: prowlarr-postgres
                  key: password
            - name: PROWLARR__POSTGRES__MAINDB
              value: prowlarr-main
            - name: PROWLARR__POSTGRES__LOGDB
              value: prowlarr-log
          ports:
            - containerPort: 9696
              name: http
          volumeMounts:
            - name: config
              mountPath: /config
      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: prowlarr-config
On first startup, Prowlarr will create all tables and apply migrations automatically. No data migration needed.
Radarr 4K — Fresh Start
My 4K library is small and less critical than the 1080p library, so a fresh start was the right call. The movies can be re-added by pointing Radarr at the existing root folder and running a scan.
Radarr also uses two databases.
Environment Variables
env:
  - name: RADARR__POSTGRES__HOST
    value: postgres.databases.svc.cluster.local
  - name: RADARR__POSTGRES__PORT
    value: "5432"
  - name: RADARR__POSTGRES__USER
    value: appuser
  - name: RADARR__POSTGRES__PASSWORD
    valueFrom:
      secretKeyRef:
        name: radarr-4k-postgres
        key: password
  - name: RADARR__POSTGRES__MAINDB
    value: radarr-4k-main
  - name: RADARR__POSTGRES__LOGDB
    value: radarr-4k-log
The deployment is identical in structure to Prowlarr above — just swap the image, port (7878), and env var names.
Bazarr — Fresh Start
Bazarr stores subtitle download history and provider settings. History is not critical, and settings take only a few minutes to reconfigure. Fresh start is the sensible choice.
Bazarr uses a single database and different environment variable names from the *arr apps:
Environment Variables
env:
  - name: POSTGRES_ENABLED
    value: "true"
  - name: POSTGRES_HOST
    value: postgres.databases.svc.cluster.local
  - name: POSTGRES_PORT
    value: "5432"
  - name: POSTGRES_DATABASE
    value: bazarr
  - name: POSTGRES_USERNAME
    value: appuser
  - name: POSTGRES_PASSWORD
    valueFrom:
      secretKeyRef:
        name: bazarr-postgres
        key: password
Note that Bazarr can also be configured via config.yaml instead of environment variables. The env vars take precedence.
Radarr 1080p — Data Migration (PG17 Schema-Dump Method)
My main Radarr library had a 150MB SQLite database with years of history, custom formats, and quality profiles. A fresh start would mean losing all of that — so migration was worth the effort.
Step 1: Set replicas to 0 in git
Commit replicas: 0 to your deployment manifest and push. Wait for your GitOps controller to apply it. This prevents the controller from restarting the pod during migration.
Step 2: Let Radarr create the schema
Temporarily scale to 1 and wait for the log line:
Now listening on: http://[::]:7878
Then immediately scale back to 0 (in git). Radarr has now created all tables in Postgres.
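Step 2 can be sketched as commands — the deployment and namespace names match the examples in this post, and the script is only written to a file and syntax-checked here, so you can review it before running it against your cluster:

```shell
# Sketch of step 2: scale up, wait for the listening log line, scale back down.
# Written to a file and syntax-checked only; run it against your cluster when ready.
cat > /tmp/radarr-step2.sh <<'EOF'
kubectl -n media scale deployment/radarr --replicas=1
kubectl -n media logs deployment/radarr -f | grep -m1 'Now listening'
kubectl -n media scale deployment/radarr --replicas=0   # and commit replicas: 0 in git
EOF
sh -n /tmp/radarr-step2.sh   # syntax check only
```

grep -m1 exits after the first match, so the pipeline terminates as soon as Radarr reports it is listening.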
Step 3: Dump the Postgres schema
pg_dump -h postgres.databases.svc.cluster.local \
-U appuser -s radarr-1080p-main > /tmp/radarr-main-schema.sql
pg_dump -h postgres.databases.svc.cluster.local \
-U appuser -s radarr-1080p-log > /tmp/radarr-log-schema.sql
Step 4: Drop, recreate, and reimport schema
DROP DATABASE "radarr-1080p-main";
CREATE DATABASE "radarr-1080p-main";
ALTER DATABASE "radarr-1080p-main" OWNER TO appuser;
DROP DATABASE "radarr-1080p-log";
CREATE DATABASE "radarr-1080p-log";
ALTER DATABASE "radarr-1080p-log" OWNER TO appuser;
Note: If your Postgres connection pool keeps connections alive, you may need to terminate them first:
SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname IN ('radarr-1080p-main', 'radarr-1080p-log') AND pid <> pg_backend_pid();
Then reimport the schema:
psql -h postgres.databases.svc.cluster.local \
-U appuser -d radarr-1080p-main -f /tmp/radarr-main-schema.sql
psql -h postgres.databases.svc.cluster.local \
-U appuser -d radarr-1080p-log -f /tmp/radarr-log-schema.sql
Step 5: Truncate all tables
Even though the databases were just recreated, Radarr may have populated default rows before you scaled to 0. Truncate everything:
TRUNCATE TABLE
"AlternativeTitles", "AutoTagging", "Blocklist", "Collections", "Commands",
"Config", "Credits", "CustomFilters", "CustomFormats", "DelayProfiles",
"DownloadClientStatus", "DownloadClients", "DownloadHistory", "ExtraFiles",
"History", "ImportExclusions", "ImportListMovies", "ImportListStatus",
"ImportLists", "IndexerStatus", "Indexers", "Metadata", "MetadataFiles",
"MovieFiles", "MovieMetadata", "MovieTranslations", "Movies", "NamingConfig",
"NotificationStatus", "Notifications", "PendingReleases", "QualityDefinitions",
"QualityProfiles", "ReleaseProfiles", "RemotePathMappings", "RootFolders",
"ScheduledTasks", "SubtitleFiles", "Tags", "Users", "VersionInfo"
CASCADE;
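Instead of hard-coding the table names (which drift between versions), you can generate the TRUNCATE statement from the catalog. A sketch, using the connection details assumed throughout this post — the query is written to a file here so you can review it first:

```shell
# Generate a single TRUNCATE ... CASCADE statement covering every table in
# the public schema. format('%I', ...) double-quotes mixed-case names.
cat > /tmp/gen-truncate.sql <<'EOF'
SELECT 'TRUNCATE TABLE '
       || string_agg(format('%I', tablename), ', ')
       || ' CASCADE;'
FROM pg_tables
WHERE schemaname = 'public';
EOF
# Print the generated statement, review it, then execute it:
#   psql -h postgres.databases.svc.cluster.local -U appuser \
#     -d radarr-1080p-main -Atf /tmp/gen-truncate.sql
cat /tmp/gen-truncate.sql
```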
Step 6: Run pgloader
pgloader \
--with "quote identifiers" \
--with "data only" \
/path/to/radarr.db \
postgresql://appuser:password@postgres.databases.svc.cluster.local/radarr-1080p-main
For large databases (150MB+), add batch size options to avoid out-of-memory errors:
pgloader \
--with "quote identifiers" \
--with "data only" \
--with "prefetch rows = 100" \
--with "batch size = 1MB" \
/path/to/radarr.db \
postgresql://appuser:password@postgres.databases.svc.cluster.local/radarr-1080p-main
A successful run ends with:
Total import time ✓ 300231 118.0 MB 4.556s
Zero errors means full success.
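Beyond the pgloader summary, a quick sanity check is to compare row counts between the SQLite source and Postgres for a key table. A sketch, using the paths and host assumed earlier (written to a file and syntax-checked only):

```shell
# Compare SQLite vs Postgres row counts for the Movies table; the two
# numbers must match. Run against your own paths/cluster when ready.
cat > /tmp/verify-counts.sh <<'EOF'
sqlite3 /path/to/radarr.db 'SELECT COUNT(*) FROM "Movies";'
psql -h postgres.databases.svc.cluster.local -U appuser -d radarr-1080p-main \
  -Atc 'SELECT COUNT(*) FROM "Movies";'
EOF
sh -n /tmp/verify-counts.sh   # syntax check only
```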
Step 7: Fix sequences
After pgloader, all sequence values need to be reset to avoid primary key conflicts on new inserts:
SELECT setval('public."Movies_Id_seq"', (SELECT MAX("Id")+1 FROM "Movies"));
SELECT setval('public."MovieFiles_Id_seq"', (SELECT MAX("Id")+1 FROM "MovieFiles"));
SELECT setval('public."History_Id_seq"', (SELECT MAX("Id")+1 FROM "History"));
SELECT setval('public."Commands_Id_seq"', (SELECT MAX("Id")+1 FROM "Commands"));
SELECT setval('public."Profiles_Id_seq"', (SELECT MAX("Id")+1 FROM "QualityProfiles"));
SELECT setval('public."QualityDefinitions_Id_seq"', (SELECT MAX("Id")+1 FROM "QualityDefinitions"));
SELECT setval('public."CustomFormats_Id_seq"', (SELECT MAX("Id")+1 FROM "CustomFormats"));
SELECT setval('public."Tags_Id_seq"', (SELECT MAX("Id")+1 FROM "Tags"));
SELECT setval('public."Users_Id_seq"', (SELECT MAX("Id")+1 FROM "Users"));
-- Repeat for all tables with an _Id_seq sequence
The full list of sequences is available in the Servarr wiki.
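Rather than maintaining the setval list by hand, you can generate it from the pg_sequences view. A sketch — note that sequence names usually follow the "<Table>_Id_seq" pattern, but verify exceptions (e.g. Profiles_Id_seq feeds the QualityProfiles table, as shown above), and review the generated statements before running them:

```shell
# Generate one setval() call per *_Id_seq sequence, derived from the catalog.
# setval(..., max+1, false) makes the next nextval() return max+1.
cat > /tmp/gen-setval.sql <<'EOF'
SELECT format(
  'SELECT setval(%L, (SELECT COALESCE(MAX("Id"), 0) + 1 FROM %I), false);',
  quote_ident(schemaname) || '.' || quote_ident(sequencename),
  regexp_replace(sequencename, '_Id_seq$', ''))
FROM pg_sequences
WHERE schemaname = 'public' AND sequencename LIKE '%_Id_seq';
EOF
# Print the generated statements with:
#   psql -h postgres.databases.svc.cluster.local -U appuser \
#     -d radarr-1080p-main -Atf /tmp/gen-setval.sql
cat /tmp/gen-setval.sql
```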
Step 8: Start Radarr
Commit replicas: 1 to git and push. Radarr will start, connect to Postgres, run any remaining migrations, and be operational with all your existing data.
Sonarr — Data Migration (PG17 Schema-Dump Method)
Sonarr follows the exact same process as Radarr. The key differences:
- The env var prefix is SONARR__POSTGRES__* instead of RADARR__POSTGRES__*
- The database table names differ — get the exact list with \dt in psql after schema creation
- The database is larger (my install was 397MB), so the prefetch rows = 100 and batch size = 1MB options are essential
Environment Variables
env:
  - name: SONARR__POSTGRES__HOST
    value: postgres.databases.svc.cluster.local
  - name: SONARR__POSTGRES__PORT
    value: "5432"
  - name: SONARR__POSTGRES__USER
    value: appuser
  - name: SONARR__POSTGRES__PASSWORD
    valueFrom:
      secretKeyRef:
        name: sonarr-postgres
        key: password
  - name: SONARR__POSTGRES__MAINDB
    value: sonarr-main
  - name: SONARR__POSTGRES__LOGDB
    value: sonarr-log
Truncate for Sonarr
Sonarr’s table names differ slightly from Radarr. After schema creation, truncate using the actual table list:
TRUNCATE TABLE
"AutoTagging", "Blocklist", "Commands", "Config",
"CustomFilters", "CustomFormats", "DelayProfiles", "DownloadClientStatus",
"DownloadClients", "DownloadHistory", "EpisodeFiles", "Episodes",
"ExtraFiles", "History", "ImportListExclusions", "ImportListItems",
"ImportListStatus", "ImportLists", "IndexerStatus", "Indexers",
"Metadata", "MetadataFiles", "NamingConfig", "NotificationStatus",
"Notifications", "PendingReleases", "QualityDefinitions", "QualityProfiles",
"ReleaseProfiles", "RemotePathMappings", "RootFolders", "SceneMappings",
"ScheduledTasks", "Series", "SubtitleFiles", "Tags", "Users", "VersionInfo"
CASCADE;
Tip: Always run \dt in psql after creating the schema and before truncating, to verify the actual table names in your version.
Seerr — Data Migration with Official pgloader Image
Seerr (the successor to Overseerr and Jellyseerr, merged February 2026) has official PostgreSQL support and its own migration documentation at docs.seerr.dev.
Seerr uses a single database and different environment variables:
Environment Variables
env:
  - name: DB_TYPE
    value: postgres
  - name: DB_HOST
    value: postgres.databases.svc.cluster.local
  - name: DB_PORT
    value: "5432"
  - name: DB_USER
    value: appuser
  - name: DB_PASS
    valueFrom:
      secretKeyRef:
        name: seerr-postgres
        key: password
  - name: DB_NAME
    value: seerr
Migration Process
Seerr’s official docs recommend using a specific pgloader image that fixes a column quoting issue present in the standard pgloader release:
docker run --rm \
-v /path/to/config/db/db.sqlite3:/db.sqlite3:ro \
ghcr.io/ralgar/pgloader:pr-1531 \
pgloader --with "quote identifiers" --with "data only" \
/db.sqlite3 \
postgresql://appuser:password@postgres.databases.svc.cluster.local/seerr
On Kubernetes, you can run this as a pod with the SQLite file mounted from the existing PVC.
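The docker run above can be expressed as a one-shot pod. A sketch — the namespace, PVC claim name (seerr-config), and inline password are assumptions to adjust; the manifest is only written to a file here for review:

```shell
# One-shot pgloader pod mounting the SQLite file from the existing Seerr
# config PVC. Apply with kubectl when reviewed; names are examples.
cat > /tmp/seerr-pgloader-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: seerr-pgloader
  namespace: media
spec:
  restartPolicy: Never
  containers:
    - name: pgloader
      image: ghcr.io/ralgar/pgloader:pr-1531
      command:
        - pgloader
        - --with
        - quote identifiers
        - --with
        - data only
        - /config/db/db.sqlite3
        - postgresql://appuser:password@postgres.databases.svc.cluster.local/seerr
      volumeMounts:
        - name: config
          mountPath: /config
          readOnly: true
  volumes:
    - name: config
      persistentVolumeClaim:
        claimName: seerr-config
EOF
# kubectl apply -f /tmp/seerr-pgloader-pod.yaml
# kubectl -n media logs -f seerr-pgloader
```

restartPolicy: Never ensures the pod runs pgloader exactly once instead of retrying on completion.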
The migrations table problem
After pgloader completes, Seerr will fail to start with errors like:
Migration "InitialMigration1734786061496" failed, error: relation "PK_..." already exists
This happens because pgloader copies the SQLite migrations table, which contains the SQLite migration history. Seerr’s Postgres migration runner then tries to apply Postgres-specific migrations that have already been applied to the schema.
The fix is to replace the migrations table contents with the Postgres migration records. First, list the available Postgres migrations:
ls /app/dist/migration/postgres/
# 1734786061496-InitialMigration.js
# 1734786596045-AddTelegramMessageThreadId.js
# ... etc
Then replace the table contents:
TRUNCATE TABLE migrations CASCADE;
INSERT INTO migrations (timestamp, name) VALUES
(1734786061496, 'InitialMigration1734786061496'),
(1734786596045, 'AddTelegramMessageThreadId1734786596045'),
-- insert one row per .js file in /app/dist/migration/postgres/
-- timestamp = the number prefix, name = number + class name without .js
...;
After this, Seerr starts cleanly and applies any remaining migrations.
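The INSERT rows can be generated mechanically from the migration filenames. A sketch — the two sample filenames below are the ones shown above and stand in for the real directory listing from your Seerr container (/app/dist/migration/postgres/):

```shell
# Build the migrations-table INSERT from "<timestamp>-<ClassName>.js" files.
# The touched sample files stand in for your actual migration directory.
mkdir -p /tmp/pg-migrations
touch /tmp/pg-migrations/1734786061496-InitialMigration.js
touch /tmp/pg-migrations/1734786596045-AddTelegramMessageThreadId.js
{
  echo 'INSERT INTO migrations (timestamp, name) VALUES'
  for f in /tmp/pg-migrations/*.js; do
    base=$(basename "$f" .js)   # e.g. 1734786061496-InitialMigration
    ts=${base%%-*}              # numeric prefix
    cls=${base#*-}              # class name
    printf "(%s, '%s%s'),\n" "$ts" "$cls" "$ts"
  done | sed '$ s/,$/;/'        # turn the final comma into a semicolon
} > /tmp/migrations-insert.sql
cat /tmp/migrations-insert.sql
```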
The mediaServerType problem
If you had to reconfigure Seerr via the setup wizard during migration, the wizard may set mediaServerType to the wrong value in settings.json. Plex = 1, Jellyfin = 2, Emby = 3. If the Plex tab is missing from Settings and you see a Jellyfin tab instead, edit settings.json directly:
# With the pod stopped, access the config volume and fix:
sed -i 's/"mediaServerType": 4/"mediaServerType": 1/' /app/config/settings.json
The GitOps Pitfall: Flux Restarting Pods During Migration
If you use FluxCD (or any GitOps operator with reconciliation), be aware that scaling a deployment to 0 via kubectl scale is only temporary. Flux will reconcile the deployment back to its desired state (1 replica) within its configured interval — typically 10 minutes.
During a data migration, this is catastrophic. A Radarr or Sonarr pod that starts mid-migration will write default data to Postgres, causing duplicate key errors when pgloader tries to insert the migrated data.
The correct approach is:
- Set replicas: 0 in the deployment manifest in git
- Commit and push
- Wait for Flux to reconcile (or force it with flux reconcile kustomization <name>)
- Only then start the migration
When done, set replicas: 1 and push again. The same principle applies to ArgoCD or any other GitOps operator.
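After committing replicas: 0, you can force a reconcile rather than waiting out the interval. A sketch — the flux-system source and media kustomization names are assumptions; the script is written to a file and syntax-checked only:

```shell
# Force Flux to pick up the replicas: 0 commit, then verify the pod is gone
# before touching the database. Names are examples; run when reviewed.
cat > /tmp/reconcile.sh <<'EOF'
flux reconcile source git flux-system
flux reconcile kustomization media
kubectl -n media get pods -l app=radarr   # must show no pods before migrating
EOF
sh -n /tmp/reconcile.sh   # syntax check only
```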
Results
After the migration, every app in the stack:
- Runs on PostgreSQL 17.9 instead of SQLite
- Uses an NFS-backed config PVC instead of a node-local PVC
- Has no nodeSelector — it can run on any worker node in the cluster
- Is truly stateless from a node perspective — all state lives in the shared database and NFS
The migration took roughly a day of careful work. The two largest databases (Sonarr at 397MB, Radarr 1080p at 150MB) migrated without data loss using the schema-dump + pgloader method.
References
- Servarr Wiki — Radarr PostgreSQL setup — official migration guide (written for PG14, use the schema-dump method for PG15+)
- tobz gist — schema-dump migration method for PG15+ — the approach this post is based on for Radarr/Sonarr
- Seerr documentation — Configuring the Database — official Seerr PostgreSQL and migration docs
- Seerr release announcement — context on the Overseerr/Jellyseerr merger
- CloudNativePG — the Postgres operator used in this setup
- pgloader — the tool used for SQLite → PostgreSQL data migration
- ghcr.io/ralgar/pgloader:pr-1531 — the Seerr-recommended pgloader build that fixes column quoting issues