Cron Job Fails Silently: How to Detect Hidden Failures
The most dangerous cron failures are the ones you don't know about. Here's how to catch them.
The Silent Failure Problem
Your cron job runs. It even shows up in the cron logs. But the job itself fails — and no one knows because:
- Output goes nowhere (not captured)
- The script doesn't check for errors
- There's no monitoring to detect missing runs
- Cron only logs that it started a job, not if it succeeded
You find out days or weeks later when a backup is missing, a report wasn't sent, or data is stale.
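You can confirm this on most Linux systems: syslog records that cron launched the command, and nothing about how it ended. The paths below are common defaults (/var/log/syslog on Debian/Ubuntu, /var/log/cron on RHEL-style systems), and the sample line is only illustrative:
grep CRON /var/log/syslog | tail -n 3
# Typical entries look roughly like this -- note there is no exit status:
# Mar 10 02:00:01 myhost CRON[12345]: (user) CMD (/home/user/scripts/backup.sh)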
Step 1: Capture All Output
Redirect both stdout and stderr to a log file:
0 2 * * * /home/user/scripts/backup.sh >> /var/log/backup.log 2>&1
Breaking this down:
- >> appends stdout to the log file
- 2>&1 redirects stderr to stdout (so errors get logged too)
Now you can check /var/log/backup.log to see what happened.
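One optional refinement, sketched below: prefix each run with a date header so output from different nights is easy to tell apart in the shared log file:
# Same job as above, with a timestamp line written before each run's output
0 2 * * * { echo "=== $(date) ==="; /home/user/scripts/backup.sh; } >> /var/log/backup.log 2>&1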
Step 2: Make Your Script Exit on Errors
Add set -e at the start of your bash script:
#!/bin/bash
set -e # Exit immediately if any command fails
# If pg_dump fails, the script stops here
pg_dump mydb > /backup/mydb.sql
# This won't run if pg_dump failed
aws s3 cp /backup/mydb.sql s3://my-bucket/
Without set -e, the script keeps running after a command fails. In the example above, a failed pg_dump would still be followed by an S3 upload of an empty or partial dump file.
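A stricter variant, shown below as a sketch of common bash practice rather than anything specific to this script: -u treats unset variables as errors, -o pipefail makes a pipeline fail if any stage fails, and the ERR trap reports which line failed before the script exits:
#!/bin/bash
set -euo pipefail
# Report the line number of the failing command to stderr, then exit
trap 'echo "ERROR: command failed at line $LINENO" >&2' ERR
pg_dump mydb > /backup/mydb.sql
aws s3 cp /backup/mydb.sql s3://my-bucket/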
Step 3: Check Exit Codes Explicitly
For more control, check exit codes yourself:
#!/bin/bash
pg_dump mydb > /backup/mydb.sql
if [ $? -ne 0 ]; then
echo "ERROR: Database dump failed" >&2
exit 1
fi
aws s3 cp /backup/mydb.sql s3://my-bucket/
if [ $? -ne 0 ]; then
echo "ERROR: S3 upload failed" >&2
exit 1
fi
echo "Backup completed successfully"
exit 0
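The same checks can be written more compactly by testing the command directly instead of inspecting $? afterwards; this sketch is equivalent to the script above:
#!/bin/bash
# "if ! command" runs the command and branches when it exits non-zero
if ! pg_dump mydb > /backup/mydb.sql; then
    echo "ERROR: Database dump failed" >&2
    exit 1
fi
if ! aws s3 cp /backup/mydb.sql s3://my-bucket/; then
    echo "ERROR: S3 upload failed" >&2
    exit 1
fi
echo "Backup completed successfully"
exit 0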
Step 4: Add Heartbeat Monitoring
The real solution: alert on the absence of success, not just on failure.
Add a ping that only fires if the job completes successfully:
0 2 * * * /home/user/scripts/backup.sh && curl -fsS https://api.cronsignal.io/ping/abc123
The && means curl only runs if backup.sh exits with code 0. If the script fails (or never runs at all), no ping is sent, and you get an alert.
This catches:
- Scripts that fail with an error
- Scripts that hang and never complete
- Jobs that don't run at all (cron daemon down, crontab deleted)
- Server crashes before the job runs
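You can also move the ping into the script itself, which keeps the crontab line short. A minimal sketch, reusing the placeholder ping URL from the example above:
#!/bin/bash
set -e
# Placeholder -- replace with your own check's ping URL
PING_URL="https://api.cronsignal.io/ping/abc123"
/home/user/scripts/backup.sh
# Only reached if the backup script exited with code 0
curl -fsS --retry 3 "$PING_URL" > /dev/null
Either way the effect is the same: the ping fires only after a successful exit, so a job that fails or never starts still triggers an alert.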
Step 5: Rotate Your Logs
If you're logging output, set up log rotation so logs don't fill your disk:
# /etc/logrotate.d/cron-backup
/var/log/backup.log {
weekly
rotate 4
compress
missingok
notifempty
}
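To confirm the configuration parses correctly before the first scheduled rotation, logrotate can be run in debug mode, which prints what it would do without touching any files:
# Dry run: show the actions logrotate would take for this config
sudo logrotate --debug /etc/logrotate.d/cron-backup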
Putting It All Together
#!/bin/bash
set -e
# Log start time
echo "$(date): Starting backup"
# Your backup logic (compute the filename once so both commands use the same date)
BACKUP_FILE="/backup/mydb_$(date +%Y%m%d).sql"
pg_dump mydb > "$BACKUP_FILE"
aws s3 cp "$BACKUP_FILE" s3://my-bucket/
# Log success
echo "$(date): Backup completed successfully"
# Exit 0 for success
exit 0
And in crontab:
0 2 * * * /home/user/scripts/backup.sh >> /var/log/backup.log 2>&1 && curl -fsS https://api.cronsignal.io/ping/abc123
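Before trusting it overnight, it's worth running the same command once by hand, as the user the crontab belongs to, to confirm the script works, the log file is writable, and the ping goes through (replace user with the actual account name):
# Run the full cron command manually to verify it end to end
sudo -u user /bin/sh -c '/home/user/scripts/backup.sh >> /var/log/backup.log 2>&1 && curl -fsS https://api.cronsignal.io/ping/abc123'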
Now you have:
- Logged output for debugging
- Proper error handling
- External monitoring that alerts you when things go wrong
Never miss a silent failure again
CronSignal alerts you the moment a job doesn't complete. No ping = you get notified. It's that simple.
Start Monitoring Free
3 checks free. No credit card required.