Find your self-hosted analytics stack.
Six questions about your hosting, database, and traffic — get one personalized recommendation across Plausible CE, Matomo, Umami, PostHog, and Rybbit.
Logic is opinionated, not absolute. Edge cases? Send a counter-example.
The 5 self-hosted analytics tools we score
Each card is a candidate the picker can land on. The "Best for" label reflects the typical winner profile, not a guarantee; your inputs determine the actual match.
What the analytics stack picker actually does
The picker is a deterministic decision aid, not an oracle. It scores 5 self-hosted analytics tools (Plausible CE, Matomo, Umami, PostHog, Rybbit) against your inputs and returns the highest-scoring match. Same answers always yield the same tool — there is no ML, no remote call, no per-user randomness. The logic runs entirely in your browser.
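As a rough mental model, the picker behaves like a pure function over the six answers. The sketch below is illustrative TypeScript, not the page's actual source: the type names, the answer values, and the pickTool signature are all assumptions based on the six questions described further down.

```ts
// Illustrative data model only. Same Answers in, same Tool out:
// no network calls, no randomness, everything evaluated client-side.
type Tool = "Plausible CE" | "Matomo" | "Umami" | "PostHog" | "Rybbit";

interface Answers {
  sysadminComfort: "novice" | "docker" | "expert";
  hosting: "eu-only" | "anywhere";
  database: "postgres" | "mariadb-mysql" | "no-preference";
  monthlyEvents: "<100k" | "100k-1M" | "1M-5M" | "5M-10M" | ">10M"; // tier labels are guesses
  features: "pageviews" | "goals-funnels" | "ecommerce" | "replay-flags";
  compliance: "no-banner" | "banner-ok";
}

// Pure function: deterministic mapping from answers to a single tool.
declare function pickTool(answers: Answers): Tool;
```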
How the rules are scored and updated: see how we test analytics tools.
Why these 6 questions filter you to one tool
Each question filters the candidate set on a dimension where the 5 tools differ materially. Below is what each question is actually checking and what trade-off it surfaces — without revealing the scoring weights, so the wizard remains useful.
1. Sysadmin comfort
Filters on operational complexity. Novice (Vercel/Netlify, no SSH) eliminates anything that needs a Hetzner box: Plausible CE, Matomo, PostHog, Rybbit all require a real VPS, leaving Umami on Vercel + Neon as the realistic candidate. Comfortable with Docker opens up Plausible CE (3-container compose) and Matomo (PHP-FPM + MariaDB). Expert — everything is on the table, including PostHog’s 7-service stack with ClickHouse + Kafka + Zookeeper.
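To make the elimination concrete, here is a small sketch of the novice-level filter the paragraph above implies. The tool list and the VPS requirement come from the prose; the map itself is illustrative, not the picker's real rule table.

```ts
// Illustrative sketch: drop anything that needs a real VPS when the
// user answers "Novice" (Vercel/Netlify, no SSH). Not the actual rules.
const needsRealVps: Record<string, boolean> = {
  "Plausible CE": true,
  "Matomo": true,
  "PostHog": true,
  "Rybbit": true,
  "Umami": false, // deployable to Vercel with a Neon Postgres database
};

const noviceCandidates = Object.keys(needsRealVps).filter((t) => !needsRealVps[t]);
console.log(noviceCandidates); // ["Umami"]
```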
2. Where you’ll host
Filters on data residency. EU-only heavily weights Plausible CE and Matomo (both European-origin codebases with explicit CNIL configuration paths) and discounts US-hosted SaaS-style stacks. Anywhere opens up the full set. Schrems II makes US data transfers a regulatory cost, not just a latency cost — if your EU customers care, this question is doing real work.
3. Database preference
Filters on the storage layer you’re willing to operate. Postgres matches Umami v2 and Plausible CE (Plausible also needs ClickHouse, but Postgres handles the metadata side). MariaDB / MySQL matches Matomo, the only candidate built on the LAMP stack. "I’ll let the tool decide" defers the choice and lets the picker score on the other dimensions. PostHog brings its own ClickHouse regardless of preference.
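For reference, the storage expectations above can be summarized in one small structure. This only encodes what this section states; real dependency lists are longer, and the layout is illustrative.

```ts
// Storage layer per tool, as described above. Illustrative structure;
// secondary services (Redis, Kafka, Zookeeper, ...) are omitted.
const storageLayer: Record<string, string[]> = {
  "Umami v2":     ["PostgreSQL"],
  "Plausible CE": ["PostgreSQL", "ClickHouse"], // Postgres for metadata, ClickHouse for events
  "Matomo":       ["MariaDB/MySQL"],            // the only LAMP-stack candidate
  "PostHog":      ["ClickHouse"],               // bundled regardless of your preference
  "Rybbit":       ["ClickHouse"],               // per the traffic-volume note below
};
```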
4. Monthly traffic volume
Filters on the cost curve. Below 100k events/month, every candidate runs comfortably on a €4.51/mo Hetzner CX22. Between 1M and 5M, only Plausible / Matomo / Umami stay on the CX22; PostHog needs CX32 (8 GB RAM minimum, 16 GB recommended) and Rybbit prefers CX32 for ClickHouse smoothness. Above 10M, infrastructure cost dominates feature differences; the 3-year TCO matrix has the explicit numbers across 5 traffic tiers.
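A quick back-of-envelope check on the lowest tier, using only the €4.51/mo figure quoted above. Prices drift, so re-check Hetzner's pricing page for other instance sizes rather than hard-coding them.

```ts
// 3-year cost of a CX22 at the quoted €4.51/mo. For other tiers, plug in
// current Hetzner prices; this sketch deliberately does not guess them.
const cx22MonthlyEur = 4.51;
const months = 36;
console.log(`CX22 over 3 years: €${(cx22MonthlyEur * months).toFixed(2)}`); // €162.36
```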
5. Features beyond pageviews
Filters on report depth. Pageviews only — Umami or GoatCounter-style minimal stacks win on operational simplicity. Goals + funnels — Plausible CE and Matomo both ship native funnels; the deep-dive on Plausible custom events and revenue tracking covers the events/goals layer. Ecommerce — Matomo wins for WooCommerce/Shopify (Matomo ecommerce setup). Session replay + feature flags — only PostHog ships these natively; the trade-off is the heaviest infra and PostHog’s pivot to a Cloud-first roadmap.
6. Compliance constraints
Filters on consent posture. No banner requires the "anonymous audience measurement" exemption shape: cookieless, IP-anonymized, EU-residency, no third-party transfers. Plausible CE wins by default; Matomo with disableCookies is a close second; PostHog memory-mode is technically possible but gives up most of what makes PostHog useful. The cookieless pillar covers the full posture for all 5 tools and links to the CNIL exemption guidance.
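The exemption shape is easy to express as a checklist. The sketch below is illustrative only; the field names are invented for this example, and none of this is legal advice.

```ts
// The four conditions of the no-banner posture described above.
// Field names are illustrative, not from any tool's API.
interface ConsentPosture {
  cookieless: boolean;            // e.g. Matomo with _paq.push(['disableCookies'])
  ipAnonymized: boolean;
  euResidency: boolean;
  noThirdPartyTransfers: boolean;
}

const qualifiesForNoBanner = (p: ConsentPosture): boolean =>
  p.cookieless && p.ipAnonymized && p.euResidency && p.noThirdPartyTransfers;
```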
When this picker won’t help you
The 6 questions cover ~80% of self-hosted analytics decisions. The remaining 20% are edge cases the wizard misses by design — here is where to go when your case is one of them.
How the scoring works — and how we keep it current
Each answer adds 0–4 points to each tool’s score across the 6 categories. The highest-scoring tool wins. Ties are broken by EU-residency posture first (more conservative), then by setup time (faster wins). There is no machine learning, no feedback loop, no per-user variance — the rules are deterministic and version-pinned.
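As a minimal sketch, assuming the 0–4 point scale and the two tie-breakers described above, winner selection could look like this; the field names and ranking helpers are illustrative, not the picker's actual code.

```ts
// Highest total wins; ties go to the more conservative EU-residency
// posture, then to the faster setup. Illustrative types and fields.
type Tool = "Plausible CE" | "Matomo" | "Umami" | "PostHog" | "Rybbit";

interface Scored {
  tool: Tool;
  points: number;          // sum of 0-4 points across the 6 categories
  euResidencyRank: number; // lower = more conservative posture
  setupMinutes: number;    // lower = faster install
}

function pickWinner(scored: Scored[]): Tool {
  return [...scored].sort(
    (a, b) =>
      b.points - a.points ||                   // highest score first
      a.euResidencyRank - b.euResidencyRank || // tie-break 1: EU posture
      a.setupMinutes - b.setupMinutes          // tie-break 2: setup time
  )[0].tool;
}
```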
Update cadence: rules are re-weighted at least quarterly, plus on every major tool release that changes the install path (Matomo 5.x release, PostHog Cloud-first pivot, Umami v2 schema migration). The last-updated stamp at the bottom of this page reflects the most recent re-weighting; if it’s more than 90 days old, double-check vendor pricing pages before relying on a recommendation.
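The 90-day rule is mechanical enough to check yourself; this snippet assumes nothing beyond the footer date at the bottom of this page.

```ts
// Staleness check against the last-updated stamp in the footer.
const lastUpdated = new Date("2026-05-04");
const ageInDays = (Date.now() - lastUpdated.getTime()) / 86_400_000;
if (ageInDays > 90) {
  console.log("Re-weighting is stale: re-check vendor pricing pages first.");
}
```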
The full test environment, pricing-claim rules, and source-of-truth hierarchy (tool source code > first-party docs > regulatory primary sources > community blog posts) are documented in the how we test analytics tools page.
Frequently asked questions
Is this picker free?
Is the recommendation accurate?
How often is it updated?
What if I disagree with the recommendation?
Can I export my answers?
Does it consider hardware sizing?
What about GDPR — which tool wins?
Why isn’t GoatCounter / Fathom / Pirsch / Simple Analytics in the picker?
Can I use the picker output for client work?
Does the picker know about my CMS?
Last updated: May 4, 2026 · Methodology · Maintained independently — no vendor relationships