How to Monitor Django Cron Jobs (With Alerts)
Django doesn't have a built-in task scheduler. That's actually fine because the ecosystem has solid options: management commands run via system cron, django-crontab, Celery beat, or APScheduler. What Django also doesn't have is monitoring. Your scheduled tasks fail silently, and you find out when users complain or data goes stale.
This guide covers how to add heartbeat monitoring to Django scheduled tasks, regardless of which scheduling approach you're using.
This guide covers Django 4.x and 5.x with Python 3.9+. Celery examples use Celery 5.x.
The Problem With Try/Except Error Handling
Most Django developers add error handling like this:
import logging

from django.core.management.base import BaseCommand

logger = logging.getLogger(__name__)


class Command(BaseCommand):
    def handle(self, *args, **options):
        try:
            self.run_backup()
        except Exception as e:
            logger.error(f"Backup failed: {e}")
            # Maybe send an email
This catches crashes, but not silent failures. If your cron daemon dies, the script never runs: no exception gets raised, no email gets sent. You've only solved half the problem.
The Solution: Heartbeat Monitoring
Flip the approach: instead of alerting on failure, alert on the absence of success.
After your task completes successfully, ping an external URL. If that ping doesn't arrive on schedule, you get an alert. This catches every failure mode: crashes, network issues, server reboots, misconfigured cron, OOM kills.
Adding Monitoring to Management Commands
The most common Django scheduling pattern is management commands run via system cron.
Basic Implementation
# app/management/commands/backup_database.py
from django.core.management.base import BaseCommand

# Option 1: Using requests (if installed)
import requests

# Option 2: Using stdlib (no extra dependencies)
from urllib.request import urlopen


class Command(BaseCommand):
    help = 'Backs up the database'

    def handle(self, *args, **options):
        # Your task logic
        self.run_backup()

        # Ping on success (with requests)
        requests.get(
            'https://api.cronsignal.io/ping/YOUR_CHECK_ID',
            timeout=5,
        )
        # Or with stdlib (no extra dependencies):
        # urlopen('https://api.cronsignal.io/ping/YOUR_CHECK_ID', timeout=5)

        self.stdout.write(self.style.SUCCESS('Backup completed'))

    def run_backup(self):
        # Backup logic here
        pass
If run_backup() raises an exception, the ping never fires. Simple and effective.
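One refinement worth considering: if the monitoring endpoint itself is unreachable, the bare requests.get raises and marks an otherwise successful backup as failed. A small helper (a sketch; the ping function name is ours) swallows ping errors so a monitoring hiccup never breaks the job:

import requests


def ping(check_id):
    """Report success to the monitor; never let a ping failure break the task."""
    try:
        requests.get(
            f'https://api.cronsignal.io/ping/{check_id}',
            timeout=5,
        )
    except requests.RequestException:
        pass  # a monitoring outage shouldn't fail the job itself

The trade-off is acceptable: a swallowed ping shows up as a missed check-in, so at worst you get a false alert rather than a real failure going unnoticed.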
Your crontab entry stays the same:
0 2 * * * cd /path/to/project && python manage.py backup_database
Reusable Decorator
For multiple monitored commands, create a decorator:
# app/utils/monitoring.py
import functools

import requests


def monitored_command(check_id):
    """Ping CronSignal after successful command execution."""
    def decorator(handle_func):
        @functools.wraps(handle_func)
        def wrapper(self, *args, **options):
            result = handle_func(self, *args, **options)
            requests.get(
                f'https://api.cronsignal.io/ping/{check_id}',
                timeout=5,
            )
            return result
        return wrapper
    return decorator
Then use it in your commands:
from django.core.management.base import BaseCommand

from app.utils.monitoring import monitored_command


class Command(BaseCommand):
    help = 'Generates weekly reports'

    @monitored_command('YOUR_CHECK_ID')
    def handle(self, *args, **options):
        generate_reports()
        self.stdout.write('Reports generated')
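A possible extension, if your monitoring service accepts an explicit failure signal (the /fail suffix below is hypothetical; check your service's docs): ping a failure endpoint when the command raises, so the alert fires immediately instead of waiting out the grace period.

import functools

import requests


def monitored_command(check_id):
    """Ping on success; report errors to a hypothetical /fail endpoint."""
    base = f'https://api.cronsignal.io/ping/{check_id}'

    def decorator(handle_func):
        @functools.wraps(handle_func)
        def wrapper(self, *args, **options):
            try:
                result = handle_func(self, *args, **options)
            except Exception:
                requests.get(f'{base}/fail', timeout=5)  # hypothetical endpoint
                raise  # still surface the error to cron's logs
            requests.get(base, timeout=5)
            return result
        return wrapper
    return decorator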
Context Manager Approach
If you prefer context managers:
# app/utils/monitoring.py
from contextlib import contextmanager

import requests


@contextmanager
def monitor_task(check_id):
    """Context manager that pings on successful completion."""
    try:
        yield
    except Exception:
        # Let the error propagate; the ping below is skipped.
        raise
    else:
        requests.get(
            f'https://api.cronsignal.io/ping/{check_id}',
            timeout=5,
        )
Usage:
class Command(BaseCommand):
    def handle(self, *args, **options):
        with monitor_task('YOUR_CHECK_ID'):
            self.run_sync()
Monitoring Celery Beat Tasks
If you're using Celery for scheduling, add monitoring to your tasks:
# tasks.py
from celery import shared_task

import requests


@shared_task
def process_orders():
    # Your task logic
    OrderProcessor.run()

    # Ping on success
    requests.get(
        'https://api.cronsignal.io/ping/YOUR_CHECK_ID',
        timeout=5,
    )
For existing functions you don't want to modify, use a generic wrapper task. Note that Celery task arguments must be serializable, so pass the function as a dotted path rather than the function object itself:

from importlib import import_module


@shared_task
def monitored_task(check_id, task_path, *args, **kwargs):
    """Run the callable at task_path, then ping on success."""
    # Resolve the dotted path: Celery task arguments must be
    # serializable (JSON by default), so we can't pass the function.
    module_path, func_name = task_path.rsplit('.', 1)
    task_func = getattr(import_module(module_path), func_name)
    result = task_func(*args, **kwargs)
    requests.get(
        f'https://api.cronsignal.io/ping/{check_id}',
        timeout=5,
    )
    return result
Celery Beat Configuration
In your celery.py or settings, configure the schedule as normal:
# celery.py
from celery.schedules import crontab

app.conf.beat_schedule = {
    'process-orders-every-hour': {
        'task': 'app.tasks.process_orders',
        'schedule': crontab(minute=0),
    },
}
The monitoring happens inside the task, not in the schedule configuration.
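The one exception is the generic monitored_task wrapper from above, which takes its check ID and dotted path as schedule args (app.services.sync_all is a hypothetical path; substitute your own):

app.conf.beat_schedule = {
    'nightly-sync': {
        'task': 'app.tasks.monitored_task',
        'schedule': crontab(hour=2, minute=0),
        # check ID first, then the dotted path to the real task logic
        'args': ('YOUR_CHECK_ID', 'app.services.sync_all'),
    },
}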
Monitoring APScheduler Jobs
If you're using APScheduler (common in smaller Django projects):
# In your Django app's apps.py ready() method
# or in a management command
#
# WARNING: Don't put this in settings.py or models.py
# It will run on every manage.py command (migrate, shell, etc.)
import os

from apscheduler.schedulers.background import BackgroundScheduler
import requests


def sync_inventory():
    # Your task logic
    InventorySync.run()

    # Ping on success
    requests.get(
        'https://api.cronsignal.io/ping/YOUR_CHECK_ID',
        timeout=5,
    )


def start_scheduler():
    scheduler = BackgroundScheduler()
    scheduler.add_job(sync_inventory, 'cron', hour=2)
    scheduler.start()


# In apps.py:
# class MyAppConfig(AppConfig):
#     def ready(self):
#         # RUN_MAIN is set by runserver's autoreloader; this guard prevents
#         # a double start in dev, but it also means the scheduler won't start
#         # under gunicorn/uwsgi, where you'll need a different guard.
#         if os.environ.get('RUN_MAIN'):
#             start_scheduler()
Note: Django-Q and Huey are also popular task queue options. The monitoring pattern is the same: ping after successful task completion.
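For example, a Huey version follows the same shape. A sketch assuming Huey's Django integration (huey.contrib.djhuey); run_cleanup stands in for your actual logic:

from huey import crontab
from huey.contrib.djhuey import periodic_task
import requests


@periodic_task(crontab(hour='3', minute='0'))
def nightly_cleanup():
    run_cleanup()  # placeholder for your actual task logic

    # Ping on success
    requests.get(
        'https://api.cronsignal.io/ping/YOUR_CHECK_ID',
        timeout=5,
    )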
Monitoring django-crontab Jobs
If you're using django-crontab, jobs are declared in settings (run python manage.py crontab add after changing CRONJOBS to sync them to your crontab):
# settings.py
CRONJOBS = [
    ('0 2 * * *', 'app.cron.backup_database'),
]
Add monitoring inside the function:
# app/cron.py
import requests


def backup_database():
    # Your backup logic
    run_backup()

    # Ping on success
    requests.get(
        'https://api.cronsignal.io/ping/YOUR_CHECK_ID',
        timeout=5,
    )
Handling Long-Running Tasks
For tasks that take significant time, ping at start and end:
class Command(BaseCommand):
    def handle(self, *args, **options):
        # Ping start
        requests.get(
            'https://api.cronsignal.io/ping/YOUR_CHECK_ID/start',
            timeout=5,
        )

        # Long-running task
        self.run_etl_pipeline()

        # Ping completion
        requests.get(
            'https://api.cronsignal.io/ping/YOUR_CHECK_ID',
            timeout=5,
        )
If you get a start ping but no completion ping, something is hanging or crashed mid-execution.
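You can fold both pings into a reusable context manager, building on the monitor_task pattern from earlier (a sketch; monitor_run is our name for it):

from contextlib import contextmanager

import requests


@contextmanager
def monitor_run(check_id):
    """Ping /start on entry; ping completion only on a clean exit."""
    base = f'https://api.cronsignal.io/ping/{check_id}'
    requests.get(f'{base}/start', timeout=5)
    yield  # an exception raised in the body skips the completion ping
    requests.get(base, timeout=5)

Usage is the same as monitor_task: wrap the long-running body in with monitor_run('YOUR_CHECK_ID'). Because an exception propagates through the yield, the completion ping is skipped automatically and only the start ping is recorded.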
Testing Your Setup
Verify monitoring works before relying on it.
Run your command manually:
python manage.py backup_database
Check CronSignal to confirm the ping arrived.
Then test failure detection. Add a deliberate error:
def handle(self, *args, **options):
    raise Exception("Test failure")
    # Ping never reached
Run the command again, confirm that no ping arrives, and verify you get an alert once the check's grace period expires.
Getting Started
CronSignal handles the monitoring side for $9/month. Create a check, grab your ping URL, add it to your Django tasks. Takes five minutes. Start with 3 checks free.
Whether you're using management commands, Celery, APScheduler, or django-crontab, the pattern is the same: ping on success, get alerted on absence.
For more on heartbeat monitoring and why it beats try/except error handling, see our guide on how to monitor cron jobs.