For every Celery setup: Django, Flask, FastAPI, or none of the above

Continuous monitoring
for every Celery task you run.

CeleryRadar watches your tasks, workers, queues, and Beat schedules continuously, so you'll know about a problem before your customers do.

Get started for free →
✓ Free tier, forever ✓ No credit card ✓ 5-minute install
api.celeryradar.com / overview live
Success rate
99.7%
24h
Workers online
4 / 4
all healthy
Failed tasks
12
+3 vs yesterday
Tasks · last hour
per-minute breakdown
1,284
↑ 12% vs prev hour
Beat schedule health
4 schedules · 1 late
export_nightly_csv 0 2 * * * 2:00 AM
reconcile_balances */15 * * * * 4m ago
rebuild_search_index 0 */6 * * * 6h ago
sync_stripe_customers 0 * * * * 52m ago
Recent tasks
streaming
Integrations

Works with your stack.

CeleryRadar plugs into the brokers, schedulers, and notification destinations you already use, and alerts you on the things that actually break in production.

Brokers
  • Redis
  • RabbitMQ
  • Amazon SQS
Task tracking works with any of these. Queue depth charts are Redis-only today.
Schedulers
  • Celery PersistentScheduler
  • django-celery-beat
  • RedBeat
Plug it in and your existing schedules are picked up automatically.
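Nothing special is required on your side. A stock Celery beat_schedule like the sketch below (broker URL and task names are illustrative) is discovered as-is:

    from celery import Celery
    from celery.schedules import crontab

    # An ordinary Celery app; nothing here is CeleryRadar-specific.
    app = Celery("billing", broker="redis://localhost:6379/0")

    app.conf.beat_schedule = {
        "reconcile-balances": {
            "task": "tasks.reconcile_balances",
            "schedule": crontab(minute="*/15"),     # every 15 minutes
        },
        "export-nightly-csv": {
            "task": "tasks.export_nightly_csv",
            "schedule": crontab(hour=2, minute=0),  # 2:00 AM daily
        },
    }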
Notifications
  • Slack
  • Discord
  • Email
  • PagerDuty soon
  • Telegram soon
  • Pushover soon
Get alerts where your team already lives. Add as many destinations as you need.
Alerts on
  • Beat schedule missed
  • Queue depth threshold
  • Worker offline
  • Task failure rate
One alert per incident, not one every minute it's broken. Configure thresholds and timing per rule.
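To sketch the shape of a rule (every name below is hypothetical, for illustration only; this is not CeleryRadar's actual configuration API):

    # Hypothetical rule shape, illustration only:
    rule = {
        "trigger": "queue_depth",   # or beat_missed, worker_offline, failure_rate
        "queue": "default",
        "threshold": 500,           # alert when more than 500 tasks are waiting
        "sustained_for": "5m",      # ...for at least five minutes straight
        "notify": ["slack:#oncall", "email:ops@example.com"],
    }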
Diagnostics

Look closer than green and red.

A task that retries twice and then succeeds still shows up green on most dashboards. That's also where the interesting bugs live.

Retry rate, on its own column.

If one of your tasks quietly retries twice every time before finishing, you'd never spot it from the success rate alone; every success rate sits at 99%. We pull retry rate out as its own column so the tasks limping across the finish line stop hiding.
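For instance, a task using Celery's stock autoretry options can limp along like this for months (a sketch; the endpoint and numbers are made up):

    import requests
    from celery import Celery

    app = Celery("app", broker="redis://localhost:6379/0")

    # Succeeds ~99% of the time, but usually only on attempt 2 or 3.
    # The success rate stays green; only the retry rate gives it away.
    @app.task(
        autoretry_for=(requests.ConnectionError, requests.Timeout),
        retry_backoff=2,     # wait 2s, then 4s, then 8s between attempts
        retry_jitter=False,  # keep delays deterministic for the example
        max_retries=3,
    )
    def sync_stripe_customers():
        resp = requests.get("https://api.stripe.com/v1/customers", timeout=5)
        resp.raise_for_status()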

api.celeryradar.com / tasks / per task
1h 24h 7d 30d all
Task Runs Succeeded Failed Retries Fail rate Retry rate Avg p95
send_invoice_email 2,847 2,840 7 23 0.2% 0.8% 218ms 412ms
sync_stripe_customers 412 384 28 84 6.8% 20.4% 1,840ms 4,220ms
process_webhook 8,103 8,088 15 42 0.2% 0.5% 142ms 287ms
compute_embeddings 1,432 1,302 130 312 9.1% 21.8% 884ms 2,104ms
rebuild_search_index 6 6 0 0 0% 0% 14,920ms 16,400ms
Rates are computed from event counts: fail rate is failed runs over total runs, and retry rate is retry events over total runs, so it measures how many retries the average run emitted before terminating.
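Concretely, the sync_stripe_customers row above falls out of its raw counts like this:

    runs, failed, retries = 412, 28, 84

    fail_rate  = failed  / runs    # 28 / 412 ≈ 0.068 -> 6.8%
    retry_rate = retries / runs    # 84 / 412 ≈ 0.204 -> 20.4%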

When one fails, see how it failed.

Click any failed task and you get every attempt on one timeline: what worker picked it up, what error knocked it back, when Celery finally gave up. Beats piecing it together from log files on three different boxes.

api.celeryradar.com / tasks / chain
← Tasks
tasks.send_invoice_email
5e3a4f8b-1c91-4a02-b8d3-77c9d4f29d4c
3 attempts · 6 events
Attempt 1
Started · worker-03
14:02:18 · 4m ago
view event →
Retry · worker-03
14:02:23 · 4m ago
Retry in 2s: ConnectionTimeout: HTTPSConnectionPool(host='api.stripe.com', port=443): Read timed out (read timeout=5)
view event →
Attempt 2
Started · worker-01
14:02:32 · 4m ago
view event →
Retry · worker-01
14:02:37 · 4m ago
Retry in 4s: ConnectionTimeout: HTTPSConnectionPool(host='api.stripe.com', port=443): Read timed out (read timeout=5)
view event →
Attempt 3
Started · worker-02
14:02:48 · 4m ago
view event →
Success · 287ms · worker-02
14:02:48 · 4m ago
view event →
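One way a timeline like that arises (an assumption for illustration, not necessarily how send_invoice_email is written): each self.retry() re-queues the task, so whichever worker is idle picks up the next attempt, which is why the three attempts land on worker-03, worker-01, and worker-02 in turn.

    import requests
    from celery import shared_task

    @shared_task(bind=True, max_retries=5)
    def send_invoice_email(self, invoice_id):
        try:
            # Illustrative call; the read timeout matches the 5s in the error above.
            requests.post(
                f"https://api.stripe.com/v1/invoices/{invoice_id}/send",
                timeout=5,
            )
        except requests.Timeout as exc:
            # 2s after attempt 1, 4s after attempt 2 (matches the timeline).
            raise self.retry(exc=exc, countdown=2 ** (self.request.retries + 1))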
Pricing

Simple, predictable pricing.

Start free, upgrade when you outgrow it. No surprise charges.

Starter
For solo devs and side projects.
Free forever
Start free →
  • 7-day task history
  • Up to 2 workers
  • 3 alert rules
  • Email alerts
  • Beat & queue monitoring
Business
For larger fleets and longer history.
$49/month
Get started →
  • 90-day task history
  • Up to 100 workers
  • Everything in Starter
  • Priority email support
FAQ

Frequently asked questions

Still have questions? Talk to us.

How long does it take to set up?
About five minutes. Install celeryradar-sdk, call celeryradar_sdk.connect(api_key="<your-key>") in your Celery app config, and your first task event lands in the dashboard before the page reloads. No agents, no sidecars, no broker plugins.
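In full, assuming a typical celery_app.py (the broker URL is an example):

    import celeryradar_sdk
    from celery import Celery

    app = Celery("myapp", broker="redis://localhost:6379/0")

    # The one CeleryRadar-specific line: call it wherever your app config runs.
    celeryradar_sdk.connect(api_key="<your-key>")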
Which brokers do you support?
Task event capture works with any Celery broker: we hook Celery's standard task signals, so we don't care what's underneath. Queue depth monitoring is currently Redis-only, and single-instance only; Redis Cluster, Sentinel, and Streams aren't yet supported. RabbitMQ and SQS depth monitoring are on the roadmap. Beat schedule monitoring works with the default Celery scheduler, django-celery-beat, and RedBeat.
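Roughly how broker-agnostic capture works, as an illustration of the mechanism rather than the SDK's actual internals: Celery fires the same signals on every worker, whatever broker sits underneath.

    from celery.signals import task_prerun, task_retry, task_failure

    def record_event(kind: str, **fields) -> None:
        # Stub: a real implementation queues this for async shipping.
        print(kind, fields)

    @task_prerun.connect
    def on_start(sender=None, task_id=None, **kwargs):
        record_event("started", task=sender.name, id=task_id)

    @task_retry.connect
    def on_retry(sender=None, reason=None, **kwargs):
        record_event("retry", task=sender.name, reason=str(reason))

    @task_failure.connect
    def on_failure(sender=None, task_id=None, exception=None, **kwargs):
        record_event("failed", task=sender.name, id=task_id, error=repr(exception))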
Will the SDK slow down my workers?
No. The SDK hooks Celery's standard task signals and ships events asynchronously over a short-timeout HTTP client. If our ingest endpoint is slow or unreachable, your workers don't notice: events drop with a warning rather than backing up your task queue.
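The drop-don't-block idea in miniature (an assumption about the design, with a hypothetical ingest URL; the real SDK also buffers and replays, per the last question below):

    import logging
    import requests

    log = logging.getLogger("celeryradar")

    def ship(event: dict) -> None:
        try:
            # Hypothetical ingest URL; the point is the short timeout.
            requests.post("https://ingest.celeryradar.example/v1/events",
                          json=event, timeout=0.5)
        except requests.RequestException:
            # Never raise into the task: warn and drop.
            log.warning("CeleryRadar ingest unreachable; dropping event")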
What about sensitive data in task arguments?
Pass capture_args=False to celeryradar_sdk.connect() and we'll only see task name, state, runtime, and exception type, never arguments or return values. Args and kwargs are also capped at 4KB per task and replaced with a "truncated" marker if oversized.
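So the metadata-only setup is one extra argument:

    import celeryradar_sdk

    # Only task name, state, runtime, and exception type leave your box.
    celeryradar_sdk.connect(api_key="<your-key>", capture_args=False)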
How does retention work?
Every task event is retained for the window your plan specifies: 7 days on Starter, 90 days on Business. Older events are deleted automatically, so charts beyond your retention window will be empty. Upgrade for longer history.
Can I export my data?
Yes. Every task list view exports to CSV with a single click.
How is this different from Flower?
Flower is a real-time inspector and admin tool, great for browsing the current task queue, revoking tasks, and watching what's running right now. CeleryRadar is the layer above that: persistent history, alerts, dashboards, and trend analysis. Flower answers "what's happening this second"; CeleryRadar answers "did anything go wrong overnight, and how often does this task fail?". They don't conflict.
What if CeleryRadar itself goes down?
The SDK is async and non-blocking. If our ingest endpoint is slow or unreachable, your worker threads don't notice. Events buffer in a small in-process queue and replay automatically when we recover. State events (worker heartbeats, beat fires) get a separate retry queue so a brief outage on our side doesn't fire false "worker offline" alerts at you the moment we come back. Three layers of protection, all on by default.
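The buffering layer, sketched under assumptions (a bounded in-process queue that sheds oldest-first, assuming a single shipper thread consumes it; sizes and names are illustrative):

    import queue

    buf: "queue.Queue[dict]" = queue.Queue(maxsize=10_000)

    def enqueue(event: dict) -> None:
        try:
            buf.put_nowait(event)
        except queue.Full:
            # Bounded on purpose: drop the oldest event rather than grow memory.
            buf.get_nowait()
            buf.put_nowait(event)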