PostHog on Hetzner: Self-Host Product Analytics with Docker (€16.41/mo, 45 min)
Goal: a fully working PostHog self-hosted install on a Hetzner CX42 (€16.41/mo) running ClickHouse + PostgreSQL + Redis + the PostHog app, behind Caddy auto-HTTPS, in ~45 minutes. This is the heaviest recipe in the library — but also the most powerful, because PostHog is product analytics, feature flags, session replay, A/B tests, and surveys in one stack.
Tested on a fresh Hetzner CX42 (4 vCPU / 16 GB RAM / 160 GB SSD) in Falkenstein, Ubuntu 24.04 LTS, 2026-05-02. The official Helm chart targets Kubernetes; for a single-node self-host, we use the maintained posthog/posthog Docker Compose setup which is officially supported up to ~5M events/mo per node.
Why PostHog (and when it’s overkill)
PostHog is product analytics, not page analytics. Pick it if you need any of these:
- Cohort analysis (“users who did X then Y”)
- Funnels with conversion windows
- Session replay (full DOM video reconstruction)
- Feature flags + A/B tests + surveys
- Product analytics SDKs for iOS / Android / Flutter / React Native
- SQL-level access to your event data (ClickHouse direct)
If your only goal is to leave GA4 and keep simple pageview dashboards, PostHog is way overkill. Plausible or Umami will serve you faster, lighter, cheaper. PostHog is for SaaS teams who want Mixpanel/Amplitude features without paying $500-$5000/mo.
Hardware sizing (this matters)
ClickHouse alone wants 8+ GB RAM for a comfortable indie-scale install. Add PostgreSQL (~1 GB), Redis (~512 MB), the PostHog app (~2 GB), and the plugin server (~1 GB), plus OS overhead and headroom for query spikes, and 16 GB RAM is the realistic floor.
| Server | RAM | Cost/mo | Up to |
|---|---|---|---|
| CX42 | 16 GB | €16.41 | ~5M events/mo |
| CX52 | 32 GB | €32.50 | ~30M events/mo |
| CCX23 (dedicated) | 32 GB | €42.10 | ~80M events/mo |
Below 5M events/mo you’re paying €16.41/mo. Above that you’re either upgrading or sharding ClickHouse. Don’t try CX22/CX32 — ClickHouse will OOM-kill within hours.
Step 1 — Provision CX42 + DNS
Hetzner Cloud Console → Add Server → Falkenstein → Ubuntu 24.04 → CX42 → SSH key → name posthog-1. €16.41/mo. Note the public IPv4 as $IP.
# DNS
Type: A
Name: app (PostHog at app.your-site.com)
Value: $IP
TTL: 60
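Before moving on, confirm the record has actually propagated; the certificate request in Step 5 will fail against stale DNS. A quick check helper (assumes dig from the dnsutils package; the function name is ours):

```shell
# check_dns <hostname> <expected_ip>: prints whether the A record matches yet.
check_dns() {
  resolved=$(dig +short "$1" A | tail -n1)
  if [ "$resolved" = "$2" ]; then
    echo "DNS OK"
  else
    echo "not yet: got '$resolved'"
  fi
}

check_dns app.your-site.com "$IP"
```

With a TTL of 60 this usually converges within a minute or two of saving the record.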
Step 2 — Bootstrap the server
ssh root@$IP
apt update && apt upgrade -y
apt install -y curl ca-certificates gnupg ufw git
# Docker
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
chmod a+r /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
| tee /etc/apt/sources.list.d/docker.list > /dev/null
apt update && apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Firewall
ufw default deny incoming && ufw default allow outgoing
ufw allow 22/tcp && ufw allow 80/tcp && ufw allow 443/tcp
ufw --force enable
# Disable Linux transparent huge pages (ClickHouse requirement)
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
# Ubuntu 24.04 ships without /etc/rc.local; create it with a shebang so
# systemd's rc-local compat unit will actually execute it on boot
cat > /etc/rc.local <<'EOF'
#!/bin/bash
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
EOF
chmod +x /etc/rc.local
(THP off improves ClickHouse latency by 20%+ on this size of box. Mandatory.)
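If you'd rather not resurrect /etc/rc.local, a systemd one-shot unit persists the same setting more idiomatically on 24.04. A sketch (the unit name disable-thp.service is our choice, not a standard name):

```shell
# Write a one-shot unit that re-applies the THP settings on every boot.
# Type=oneshot is the one service type that allows multiple ExecStart lines.
cat > /etc/systemd/system/disable-thp.service <<'EOF'
[Unit]
Description=Disable transparent huge pages (ClickHouse requirement)
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/enabled'
ExecStart=/bin/sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/defrag'

[Install]
WantedBy=multi-user.target
EOF
```

Then `systemctl daemon-reload && systemctl enable --now disable-thp.service`.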
Step 3 — Clone the official PostHog Docker setup
mkdir -p /opt/posthog && cd /opt/posthog
git clone https://github.com/PostHog/posthog.git .
cp .env.example .env
Edit .env — minimum required. Note that Docker Compose reads .env literally, so run the openssl commands below in your shell and paste their output as the values; $(...) is not expanded inside the file:
SECRET_KEY=$(openssl rand -base64 64 | tr -d '\n')
SITE_URL=https://app.your-site.com
DEPLOYMENT=docker_compose
POSTGRES_PASSWORD=$(openssl rand -base64 32)
REDIS_URL=redis://redis:6379/
DEBUG=0
Save the file, then chmod 600 .env.
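If you'd rather not paste by hand, a convenience sketch that appends freshly generated values (run from /opt/posthog; the generation happens in your shell, and only the literal values land in .env):

```shell
# Generate secrets in the shell, then write the *literal* values into .env.
SECRET_KEY=$(openssl rand -base64 64 | tr -d '\n')
POSTGRES_PASSWORD=$(openssl rand -base64 32 | tr -d '\n')
printf 'SECRET_KEY=%s\nPOSTGRES_PASSWORD=%s\n' \
  "$SECRET_KEY" "$POSTGRES_PASSWORD" >> .env
chmod 600 .env
```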
Step 4 — Bring up the stack
The official docker-compose.hobby.yml is the maintained single-node compose file:
docker compose -f docker-compose.hobby.yml up -d
This pulls ~3 GB of images and starts 8 services: web, worker, plugins, db (Postgres), clickhouse, kafka, zookeeper, redis. First boot takes 4–6 minutes — Kafka topics initialize, ClickHouse runs schema migrations, Postgres bootstraps.
Watch progress:
docker compose -f docker-compose.hobby.yml logs -f web
You’re waiting for Booting worker with pid: ... [INFO] Listening at: http://0.0.0.0:8000.
Verify locally:
curl -I http://127.0.0.1:8000
# HTTP/1.1 302 Found
# Location: /signup
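If you'd rather script the wait than watch logs, a small polling helper against the /_health endpoint (a sketch; tune attempts and delay to taste):

```shell
# wait_for <url> <attempts> <delay_seconds>: poll until the URL answers 2xx.
wait_for() {
  for _ in $(seq 1 "$2"); do
    curl -fsS "$1" >/dev/null 2>&1 && return 0
    sleep "$3"
  done
  return 1
}
```

`wait_for http://127.0.0.1:8000/_health 60 10 && echo up` covers the 4–6 minute first boot with room to spare.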
Step 5 — Caddy + Let’s Encrypt
apt install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' \
| gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' \
| tee /etc/apt/sources.list.d/caddy-stable.list
apt update && apt install -y caddy
cat > /etc/caddy/Caddyfile <<'EOF'
app.your-site.com {
    reverse_proxy 127.0.0.1:8000
}
EOF
systemctl reload caddy
~10 seconds later, https://app.your-site.com serves the PostHog signup page.
Step 6 — First-run signup + project
- Open https://app.your-site.com → "Get started" → create your org + first user.
- Name the project (e.g. "production"). Pick a region (this is just a label for self-host).
- Choose your install type. We're "Docker Compose self-host".
- You land on the dashboard. Settings → Project → Project Settings → copy the Project API Key (looks like phc_...).
Install the JS snippet on the site you want to track:
<script>
!function(t,e){var o,n,p,r;e.__SV||(window.posthog=e,e._i=[],e.init=function(i,s,a){function g(t,e){var o=e.split(".");2==o.length&&(t=t[o[0]],e=o[1]),t[e]=function(){t.push([e].concat(Array.prototype.slice.call(arguments,0)))}}(p=t.createElement("script")).type="text/javascript",p.crossOrigin="anonymous",p.async=!0,p.src=s.api_host.replace(".i.posthog.com","-assets.i.posthog.com")+"/static/array.js",(r=t.getElementsByTagName("script")[0]).parentNode.insertBefore(p,r);var u=e;for(void 0!==a?u=e[a]=[]:a="posthog",u.people=u.people||[],u.toString=function(t){var e="posthog";return"posthog"!==a&&(e+="."+a),t||(e+=" (stub)"),e},u.people.toString=function(){return u.toString(1)+".people (stub)"},o="init me ws ge fs capture calculateEventProperties register register_once register_for_session unregister unregister_for_session getFeatureFlag getFeatureFlagPayload isFeatureEnabled reloadFeatureFlags updateEarlyAccessFeatureEnrollment getEarlyAccessFeatures on onFeatureFlags onSurveysLoaded onSessionId getSurveys getActiveMatchingSurveys renderSurvey canRenderSurvey canRenderSurveyAsync identify setPersonProperties group resetGroups setPersonPropertiesForFlags resetPersonPropertiesForFlags setGroupPropertiesForFlags resetGroupPropertiesForFlags reset get_distinct_id getGroups get_session_id get_session_replay_url alias set_config startSessionRecording stopSessionRecording sessionRecordingStarted captureException loadToolbar get_property getSessionProperty createPersonProfile opt_in_capturing opt_out_capturing has_opted_in_capturing has_opted_out_capturing clear_opt_in_out_capturing debug getPageViewId captureTraceFeedback captureTraceMetric".split(" "),n=0;n<o.length;n++)g(u,o[n]);e._i.push([i,s,a])},e.__SV=1)}(document,window.posthog||[]);
posthog.init('phc_YOUR-KEY-HERE', { api_host: 'https://app.your-site.com' })
</script>
(Replace phc_YOUR-KEY-HERE with the Project API Key you copied in Step 6. Note api_host points at YOUR PostHog, not posthog.com — first-party tracking, ad-blocker-resistant.)
First event arrives in PostHog Live Events stream within 5 seconds.
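Events don't have to come from the browser. A server-side smoke test against the HTTP capture endpoint (the /capture/ path and payload shape follow PostHog's public capture API; substitute your real key and domain):

```shell
# Write the payload to a file so it can be inspected and reused.
cat > /tmp/ph-smoke.json <<'EOF'
{"api_key": "phc_YOUR-KEY-HERE",
 "event": "server_side_test",
 "distinct_id": "smoke-test-1",
 "properties": {"source": "curl"}}
EOF

curl -sS -X POST https://app.your-site.com/capture/ \
  -H 'Content-Type: application/json' \
  --data @/tmp/ph-smoke.json \
  || echo "capture failed: check DNS, TLS, and the phc_ key"
```

A successful capture shows up in the Live Events stream like any browser event.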
What just happened (mechanism)
- web (Django + gunicorn) — serves the dashboard, signup, settings APIs.
- worker (Celery) — async job processor: cohorts, scheduled reports, exports.
- plugins (TypeScript on Node) — runs your event transformations + integrations (Stripe, Hubspot, Slack, custom).
- kafka + zookeeper — event ingestion buffer. Every captured event goes through Kafka before landing in ClickHouse. Lets you ingest 50k events/sec without losing data even when ClickHouse is busy.
- clickhouse — your event store. Columnar, billions-of-rows fast. Direct SQL access via the query API (http://app.your-site.com/api/projects/.../query/) or through PostHog Insights.
- db (Postgres 14) — Django metadata: users, orgs, projects, dashboards, feature flags, cohorts. Small (<1 GB even with thousands of cohorts).
- redis — Celery queue + cache for feature flag evaluations.
- Caddy on host — TLS + HTTP→HTTPS redirect.
Cookies set: 1 by default (ph_phc_*_posthog for distinct_id persistence). Disable in init with persistence: 'memory' for cookieless mode — but then you lose cross-session identity (every page-load looks like a new user). Most teams keep the cookie because product analytics needs visitor continuity.
Backup, updates, monitoring
Backup — three things to capture: Postgres (small, daily), ClickHouse (large, weekly diff), Object storage if you enabled session recordings (S3/MinIO).
cat > /usr/local/bin/posthog-pg-backup.sh <<'EOF'
#!/bin/bash
DATE=$(date +%F)
mkdir -p /backups/posthog
cd /opt/posthog
docker compose -f docker-compose.hobby.yml exec -T db \
pg_dump -U posthog posthog | gzip > /backups/posthog/pg-$DATE.sql.gz
find /backups/posthog -name "pg-*.sql.gz" -mtime +14 -delete
EOF
chmod +x /usr/local/bin/posthog-pg-backup.sh
# append to root's crontab without clobbering any existing entries
( crontab -l 2>/dev/null; echo "0 4 * * * /usr/local/bin/posthog-pg-backup.sh" ) | crontab -
For ClickHouse, use the official clickhouse-backup tool — see PostHog runbook.
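For orientation, here is roughly what that looks like with clickhouse-backup's create/upload subcommands. This is a sketch only: it assumes the clickhouse-backup binary is available inside the clickhouse container, which the stock image may not provide; follow the runbook for the supported path.

```shell
cat > /usr/local/bin/posthog-ch-backup.sh <<'EOF'
#!/bin/bash
# Weekly ClickHouse backup (sketch; verify clickhouse-backup availability first).
set -euo pipefail
NAME="ch-$(date +%F)"
cd /opt/posthog
docker compose -f docker-compose.hobby.yml exec -T clickhouse \
  clickhouse-backup create "$NAME"
# The remote target (S3, Storage Box, ...) lives in clickhouse-backup's own config.
docker compose -f docker-compose.hobby.yml exec -T clickhouse \
  clickhouse-backup upload "$NAME"
EOF
chmod +x /usr/local/bin/posthog-ch-backup.sh
```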
Updates: PostHog releases major versions ~quarterly with database migrations. Always read the release notes before upgrading. Pull + restart:
cd /opt/posthog && git pull
docker compose -f docker-compose.hobby.yml pull
docker compose -f docker-compose.hobby.yml up -d
First post-update boot runs migrations — can take 5–15 minutes for ClickHouse schema changes. Don't interrupt.
Monitoring: PostHog has a built-in System status page at /instance/status. Combine with UptimeRobot on /_health.
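For on-box self-healing between external polls, a minimal cron-driven check (a sketch; it assumes /_health returns non-2xx when the web container is wedged):

```shell
cat > /usr/local/bin/posthog-healthcheck.sh <<'EOF'
#!/bin/bash
# Restart the web container if the health endpoint stops answering.
if ! curl -fsS --max-time 10 http://127.0.0.1:8000/_health >/dev/null; then
  cd /opt/posthog && docker compose -f docker-compose.hobby.yml restart web
fi
EOF
chmod +x /usr/local/bin/posthog-healthcheck.sh
```

Schedule it with `*/5 * * * * /usr/local/bin/posthog-healthcheck.sh`, appended to (not overwriting) the existing crontab.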
3-year cost
| Item | Year 1 | 3-year total |
|---|---|---|
| Hetzner CX42 | €196.92 | €590.76 |
| Backup storage (Hetzner Storage Box 100GB) | €38.40 | €115.20 |
| Total | €235.32 | €705.96 |
PostHog Cloud: 1M events/mo + 5K session recordings = $0 (free tier). Pro at higher volume: $300+/mo. The break-even for self-hosting is around 5M events/mo or 50K recordings.
When NOT to use this setup
- Just need pageview dashboards. Plausible. Half the cost, 10% the operational complexity.
- Free tier (PostHog Cloud) covers your usage. 1M events + 5K recordings is free forever — self-hosting is more work for the same outcome.
- You can't dedicate ops time. 8 services, ClickHouse tuning, Kafka topic management — this isn't a "deploy and forget" stack.
- You want EU-strict data residency without thinking. PostHog Cloud has an EU region. Self-hosting on Hetzner gives you the same physical guarantee but you own the regulatory paperwork.
Troubleshooting
OOM kills on first boot. CX42 just barely fits the stack at idle. If you OOM, raise to CX52 or disable session recording in Settings → Replay. Confirm with dmesg | grep -i kill.
Kafka won't start, Zookeeper logs "session expired". Slow disk + Zookeeper timeouts. Wait 60s, then docker compose -f docker-compose.hobby.yml restart zookeeper kafka. If recurrent, your VPS is undersized.
Events captured but not appearing in insights. ClickHouse async insertion lag. Check the Live Events stream — if events are there but Insights are empty after 5 min, restart the worker: docker compose -f docker-compose.hobby.yml restart worker.
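To separate ingestion problems from query problems, ask ClickHouse directly. A helper (assumes PostHog's default events table name; the function name is ours):

```shell
# Count events ClickHouse stored in the last hour, straight from the source.
recent_events() {
  docker compose -f /opt/posthog/docker-compose.hobby.yml exec -T clickhouse \
    clickhouse-client --query \
    "SELECT count() FROM events WHERE timestamp > now() - INTERVAL 1 HOUR"
}
```

If recent_events climbs while Insights stay empty, ingestion is fine and the worker is the suspect.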
"Failed to evaluate feature flags" in client logs. The flag evaluation endpoint is in the worker; verify worker container is healthy. Cold-start latency is ~3 seconds — that's normal for self-host (PostHog Cloud uses an edge service).
Next
- Stack Picker: confirm PostHog is your right pick — or see if Matomo+plugin covers what you need at lower complexity.
- TCO calculator: compare against PostHog Cloud at your projected volume.
- Matomo recipe: midpoint between Plausible's simplicity and PostHog's power.
Found this useful?
Try the Stack Picker to get a personal recommendation, or browse the install recipe library.