Configuration
Every kwarg `connect()` takes, every environment variable it reads, and the brokers it supports.
connect()
```python
celeryradar_sdk.connect(
    api_key,
    endpoint='https://api.celeryradar.com',
    broker_url=None,
    capture_args=True,
    worker_name=None,
)
```
api_key (required)
Your account's ingest key, from the settings page. Sent as a Bearer token on every request.

endpoint (default: `'https://api.celeryradar.com'`)
Override only if you're testing against a self-hosted instance or a private CeleryRadar deployment. The trailing /ingest/ is appended automatically.

broker_url (default: `None`)
Explicit broker URL for the queue depth poller. Defaults to `app.conf.broker_url` from the active Celery app, which is what you want unless your monitoring credentials are scoped differently from your worker credentials (e.g. a read replica with a separate user).

capture_args (default: `True`)
When true, task arguments are captured and shipped with each event so they appear in the task detail page. Set to `False` if your tasks accept PII or secrets as arguments. With it off, you still get task name, state, runtime, retry count, and exception info — just not the arg values.
Args/kwargs are capped at 4 KB combined; oversized payloads are replaced with a truncation marker. Non-JSON-serializable values are coerced via `repr()`.
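To make the capture rules concrete, here is a minimal sketch of the documented behavior — repr() coercion for non-JSON values and a 4 KB cap. The helper name and the exact truncation-marker text are assumptions, not the SDK's internals:

```python
import json

MAX_PAYLOAD_BYTES = 4096  # combined args/kwargs cap from the docs
TRUNCATION_MARKER = "[truncated: payload exceeded 4 KB]"  # marker text is an assumption

def serialize_task_args(args, kwargs):
    # json.dumps(default=repr) coerces anything JSON can't handle
    # (datetimes, model instances, sets, ...) to its repr() string.
    payload = json.dumps({"args": list(args), "kwargs": kwargs}, default=repr)
    if len(payload.encode("utf-8")) > MAX_PAYLOAD_BYTES:
        return TRUNCATION_MARKER
    return payload
```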
worker_name (default: `None`)
Override the hostname reported to ingest. Without this, the SDK uses `socket.gethostname()`, which in a Kubernetes pod returns an ephemeral pod name that rotates on every restart and accumulates ghost workers in your dashboard. Set this to a stable per-deployment name in your manifest.
The `CELERYRADAR_WORKER_NAME` environment variable takes precedence over this kwarg, so deployment manifests can override the value without code changes.
Environment variables
CELERYRADAR_WORKER_NAME
Stable name for this worker. Wins over the `worker_name=` kwarg and over `socket.gethostname()`. Set this in your k8s manifest, ECS task definition, or systemd unit.
Empty values fall through (so an unset env var doesn't override a configured kwarg).
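The documented precedence can be sketched as a small resolver — the helper name is hypothetical; only the ordering (env var, then kwarg, then hostname, with empty env values falling through) comes from the rules above:

```python
import os
import socket

def resolve_worker_name(worker_name=None):
    # Precedence per the docs:
    # CELERYRADAR_WORKER_NAME env var > worker_name= kwarg > socket.gethostname().
    env_value = os.environ.get("CELERYRADAR_WORKER_NAME", "")
    if env_value:  # empty string falls through, matching the documented behavior
        return env_value
    if worker_name:
        return worker_name
    return socket.gethostname()
```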
That's the full list. The SDK doesn't read any other env vars — there's no global config file, no `.celeryradar.toml`, no auto-discovery beyond what's described above.
Where to call connect()
Anywhere your Celery app is initialized. The most common places:
- Standalone Celery app: right after `app = Celery(...)`.
- Django + Celery: in `config/celery.py` after `app.config_from_object(...)`.
- Celery beat as a separate process: call `connect()` in the beat entrypoint too. Beat schedule monitoring relies on the `beat_init` signal firing in the beat process.
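A minimal wiring sketch for the standalone case, with placeholder names for the module, broker, key, and worker (it isn't runnable without Celery and the SDK installed):

```python
# myapp.py -- connect at module load, before Celery starts dispatching signals.
from celery import Celery

import celeryradar_sdk

app = Celery("myapp", broker="redis://localhost:6379/0")

celeryradar_sdk.connect(
    api_key="YOUR_INGEST_KEY",    # from the settings page
    worker_name="orders-worker",  # stable name; CELERYRADAR_WORKER_NAME wins if set
)

@app.task
def add(x, y):
    return x + y
```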
The SDK hooks Celery signals (`task_prerun`, `task_postrun`, `task_failure`, `task_retry`, `heartbeat_sent`, `beat_init`, `before_task_publish`, `worker_process_init`), so it must be imported and connected before Celery starts dispatching those signals — i.e. at module load time, not lazily inside a task.
Broker support
| Broker | Tasks & workers | Beat | Queue depth |
|---|---|---|---|
| Redis (lists) | ✓ | ✓ | ✓ |
| RabbitMQ | ✓ | ✓ | not yet |
| SQS | ✓ | ✓ | not yet |
| Redis Cluster / Sentinel / Streams | ✓ | ✓ | not yet |
Task lifecycle, worker heartbeats, and beat schedule monitoring are all driven by Celery signals — they work with any broker Celery itself supports. Queue depth polling is the only piece that needs to talk to your broker directly. Today that means a standard Redis list-mode broker (`redis://` or `rediss://`).
If you're on an unsupported broker, queue depth charts will silently stay empty. Everything else still works.
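For the curious, list-mode depth polling amounts to one Redis `LLEN` per queue, since Celery's default Redis transport stores each queue as a plain Redis list. A sketch under that assumption — the function name and queue names are illustrative, and `client` stands in for a real `redis.Redis.from_url(broker_url)` connection:

```python
def poll_queue_depths(client, queues=("celery",)):
    # `client` is anything with an llen(name) method; in production that
    # would be a redis.Redis instance built from the broker URL.
    return {queue: client.llen(queue) for queue in queues}
```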