Is your Apache Superset instance slow? Before blaming the tool, this guide proposes a methodical approach to identifying the real cause (missing cache, poorly indexed queries, undersized infrastructure, overloaded dashboards) and applying the right fix. Updated for 2026.
1. 4-step diagnosis
- Measure: where is the time spent? Browser, network, backend, or database?
- Reproduce: is the slowness systemic or intermittent? One dashboard or all of them?
- Isolate: test chart by chart, query by query.
- Fix: apply the simplest solution first (cache).
If you want an already-optimized Superset, TVL Managed Superset applies cache and tuning best practices by default.
2. Diagnostic tools
- Chrome DevTools → Network: inspect how long each HTTP request takes;
- Superset query history: SQL → Query History shows execution time on the database side;
- pg_stat_statements on the Postgres side: top slow queries;
- Prometheus/Grafana: p95 latency, error rate (see monitoring);
- Redis cache hit ratio: below 80%, the cache is underused (a quick check follows this list).
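To measure the hit ratio, here is a minimal sketch using redis-py; the host and port are placeholders for your own Redis instance:

```python
# Quick check of the Redis cache hit ratio.
# Assumes redis-py is installed and Redis is reachable on localhost:6379.
import redis

r = redis.Redis(host="localhost", port=6379)
stats = r.info("stats")  # server-wide counters from INFO stats
hits = stats["keyspace_hits"]
misses = stats["keyspace_misses"]
total = hits + misses
ratio = hits / total if total else 0.0
print(f"Cache hit ratio: {ratio:.1%}")  # below 80%: review TTLs and cache config
```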
3. Common causes and solutions
Cause 1 — No cache configured
Symptom: every refresh re-queries the database. Solution: configure Redis (see Redis cache) with TTLs adapted to how often each dataset actually changes.
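As a starting point, a minimal superset_config.py sketch; the Redis URL and the TTL values are placeholders to adapt to your deployment:

```python
# superset_config.py: Redis-backed caches, a minimal sketch.
# The Redis URLs and TTLs below are assumptions to adjust.
CACHE_CONFIG = {  # metadata cache
    "CACHE_TYPE": "RedisCache",
    "CACHE_DEFAULT_TIMEOUT": 300,  # 5-minute TTL
    "CACHE_KEY_PREFIX": "superset_meta_",
    "CACHE_REDIS_URL": "redis://localhost:6379/0",
}
DATA_CACHE_CONFIG = {  # chart data (query results) cache
    "CACHE_TYPE": "RedisCache",
    "CACHE_DEFAULT_TIMEOUT": 3600,  # 1-hour TTL
    "CACHE_KEY_PREFIX": "superset_data_",
    "CACHE_REDIS_URL": "redis://localhost:6379/1",
}
```

A shorter TTL keeps data fresher; a longer one absorbs more load. Start with generous TTLs on slowly changing datasets.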
Cause 2 — Non-optimized SQL queries
- No index on filtered column → full scan;
- Multiple JOINs without prior aggregation;
- SELECT * on a columnar table;
- Missing time filter → needless scan over several years of data.
Solution: pre-aggregate via dbt, index filtered columns on the DB side, and limit the default time range; to confirm a full scan first, see the EXPLAIN sketch below.
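One way to confirm a full scan before adding an index is to run an EXPLAIN through SQLAlchemy; in this sketch the DSN, the fact_sales table, and the order_date column are all hypothetical:

```python
# Detect sequential scans on a suspect chart query.
# Assumes SQLAlchemy plus a Postgres driver; DSN and query are placeholders.
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://user:pass@db-host/analytics")  # placeholder DSN
suspect_query = (
    "SELECT SUM(amount) FROM fact_sales "              # hypothetical table
    "WHERE order_date >= now() - interval '30 days'"   # hypothetical column
)

with engine.connect() as conn:
    plan = conn.execute(text("EXPLAIN " + suspect_query)).fetchall()

for (line,) in plan:
    print(line)
if any("Seq Scan" in line for (line,) in plan):
    print("-> full scan: consider an index on the filtered column")
```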
Cause 3 — Overloaded dashboard
20+ charts on one dashboard means 20+ parallel queries. Solution: split it into multiple dashboards, or use tabs in the same dashboard: charts in an inactive tab are only queried when the tab is opened.
Cause 4 — Undersized infrastructure
| Symptom | Infra cause |
|---|---|
| 100% CPU on web pod | Too few gunicorn workers |
| Growing Celery queue | Too few Celery workers |
| Saturated Postgres | Connection pool too small, weak DB sizing |
| Redis evictions | maxmemory too low |
Right-sizing these components is applied by default on TVL Managed Superset, which follows community best practices.
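To illustrate the first row of the table, a hypothetical gunicorn.conf.py sizing sketch; the worker counts are rules of thumb to validate under real load, not a definitive tuning:

```python
# gunicorn.conf.py: sizing sketch for the Superset web pod.
# All values below are starting points to adjust against observed load.
import multiprocessing

workers = multiprocessing.cpu_count() * 2 + 1  # classic (2*CPU)+1 rule of thumb
worker_class = "gthread"  # threaded workers suit I/O-bound dashboard traffic
threads = 4               # concurrent requests per worker
timeout = 120             # allow slow chart queries to finish
keepalive = 5
```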
Cause 5 — Heavy virtual datasets
A virtual dataset is a subquery that runs for every chart built on it; if the subquery is heavy, that cost is multiplied by N charts. Solution: materialize it via dbt into a physical table (e.g. a model configured with {{ config(materialized='table') }}).
Cause 6 — No async queries
Without GLOBAL_ASYNC_QUERIES, the browser holds the request open and waits for the response. With it, queries run in the background and the browser polls for results. Solution: enable it (see Celery), as in the sketch below.
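A minimal superset_config.py sketch to enable it, assuming a recent Superset with Celery and Redis already in place; the JWT secret is a placeholder to replace with your own long random value:

```python
# superset_config.py: async query execution, a minimal sketch.
# Requires a working Celery + Redis setup; the secret below is a placeholder.
FEATURE_FLAGS = {"GLOBAL_ASYNC_QUERIES": True}
GLOBAL_ASYNC_QUERIES_TRANSPORT = "polling"  # or "ws" with the websocket sidecar
GLOBAL_ASYNC_QUERIES_POLLING_DELAY = 500    # ms between client polls
GLOBAL_ASYNC_QUERIES_JWT_SECRET = "change-me-to-a-long-random-secret"
```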
Cause 7 — Frontend not on CDN
Static assets (JS, CSS) served directly by the web pod load slowly and compete with API traffic. Solution: serve them via a CDN (Cloudflare, BunnyCDN, AWS CloudFront).
4. SQL profiling on Superset side
Enable query cost estimation in superset_config.py:
```python
FEATURE_FLAGS = {"ESTIMATE_QUERY_COST": True}
```
Then, in the database connection settings in Superset, check "Allow query cost estimation". Before execution, Superset displays a cost estimate based on EXPLAIN.
5. SQL profiling on DB side
With pg_stat_statements enabled on the analytics database:
```sql
-- Top 10 queries by total execution time
SELECT
  queryid,
  total_exec_time,
  calls,
  mean_exec_time,
  query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```
6. Target metrics
| Metric | Target |
|---|---|
| Time to first chart render | <2s |
| Backend p95 latency | <3s |
| Redis cache hit ratio | >90% |
| Celery queue backlog | <100 tasks |
| Active Postgres connections | < 70% of max |
7. Priority quick wins
- Enable Redis cache with generous TTL;
- Limit default time ranges (30 days instead of 5 years);
- Pre-aggregate fact tables via dbt;
- Index filter columns on DB side;
- Increase Celery workers based on queue.
8. Conclusion
A slow Superset is almost always fixable after a few hours of diagnosis. Redis caching and dbt modeling cover 80% of cases; for the remaining 20%, turn to infrastructure optimization and fine-grained database tuning. The method stays the same: measure, isolate, fix.
Want the benefits of Apache Superset without the friction of installation and maintenance? Deploy your instance in 3 clicks with TVL Managed Superset, hosted in Europe (OVHcloud, Roubaix, France).
For more: Redis cache, Postgres tuning, caching strategies.