Use Case

Monitor Scheduled Reports

When the CFO doesn't get their Monday morning revenue report, you hear about it. Fast.

The Problem

Business-critical reports run on schedules:

  • Daily revenue summaries at 6 AM
  • Weekly KPI dashboards every Monday
  • Monthly billing reports on the 1st
  • Hourly inventory sync jobs

When they fail, stakeholders notice. But the failures are often silent:

  • Database connection timeout — report never generated
  • API rate limit hit — partial data
  • Email delivery failed — report generated but never sent
  • Disk full — couldn't write the export file
  • Memory exhausted — process killed mid-generation

You find out when someone asks "Where's my report?"

The Solution

Add heartbeat pings after each critical step. Get alerted immediately when reports don't complete.

Basic Report Monitoring

#!/bin/bash
set -e

# Generate the report
python /opt/reports/daily_revenue.py

# Send via email
python /opt/reports/send_email.py --report=/tmp/daily_revenue.pdf

# Ping on success
curl -fsS "https://api.cronsignal.io/ping/YOUR_CHECK_ID"

Python Report Script

import requests
import psycopg2
import pandas as pd
from datetime import datetime

CRONSIGNAL_URL = "https://api.cronsignal.io/ping/YOUR_CHECK_ID"

def generate_report():
    # Connect to database
    conn = psycopg2.connect(
        host="localhost",
        database="analytics",
        user="reports",
        password="..."
    )

    try:
        # Run query
        df = pd.read_sql("""
            SELECT
                date_trunc('day', created_at) as date,
                SUM(amount) as revenue,
                COUNT(*) as orders
            FROM orders
            WHERE created_at >= NOW() - INTERVAL '7 days'
            GROUP BY 1
            ORDER BY 1
        """, conn)
    finally:
        conn.close()

    # Validate before writing: an empty export is worse than no export
    if df.empty:
        raise ValueError("Report contains no data!")

    # Generate Excel report
    output_file = f"/reports/revenue_{datetime.now():%Y%m%d}.xlsx"
    df.to_excel(output_file, index=False)

    return output_file

def send_report(filepath):
    # Your email sending logic here
    # ...
    pass

if __name__ == "__main__":
    try:
        report = generate_report()
        send_report(report)

        # Success - ping CronSignal
        requests.get(CRONSIGNAL_URL, timeout=10)
        print("Report completed successfully")

    except Exception as e:
        print(f"Report failed: {e}")
        raise  # Exit with error code
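If you'd rather not let a monitoring hiccup mask an otherwise successful report, make the success ping best-effort. A minimal sketch; the injectable get parameter is our addition purely so the helper is easy to test, and is not part of any CronSignal API:

```python
import requests

def ping(url, get=requests.get, timeout=10):
    """Best-effort success ping: a monitoring outage shouldn't fail the report."""
    try:
        get(url, timeout=timeout)
        return True
    except requests.RequestException:
        return False
```

Call ping(CRONSIGNAL_URL) after send_report() and, if you want visibility, log the False case instead of raising.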

Node.js Report Script

const axios = require('axios');
const { Pool } = require('pg');
const ExcelJS = require('exceljs');

const CRONSIGNAL_URL = 'https://api.cronsignal.io/ping/YOUR_CHECK_ID';

async function generateReport() {
    const pool = new Pool({ connectionString: process.env.DATABASE_URL });

    try {
        const result = await pool.query(`
            SELECT
                date_trunc('day', created_at) as date,
                SUM(amount) as revenue,
                COUNT(*) as orders
            FROM orders
            WHERE created_at >= NOW() - INTERVAL '7 days'
            GROUP BY 1
            ORDER BY 1
        `);

        if (result.rows.length === 0) {
            throw new Error('Report contains no data!');
        }

        // Create Excel file (column keys must match the row object keys)
        const workbook = new ExcelJS.Workbook();
        const sheet = workbook.addWorksheet('Revenue');
        sheet.columns = [
            { header: 'Date', key: 'date' },
            { header: 'Revenue', key: 'revenue' },
            { header: 'Orders', key: 'orders' },
        ];
        sheet.addRows(result.rows);

        const filepath = `/reports/revenue_${new Date().toISOString().slice(0, 10)}.xlsx`;
        await workbook.xlsx.writeFile(filepath);

        return filepath;
    } finally {
        await pool.end();
    }
}

async function sendReport(filepath) {
    // Your email/Slack/upload logic here
}

async function main() {
    const report = await generateReport();
    await sendReport(report);

    // Success - ping CronSignal
    await axios.get(CRONSIGNAL_URL, { timeout: 10000 });
    console.log('Report completed successfully');
}

main().catch(err => {
    console.error('Report failed:', err);
    process.exit(1);
});

Multi-Step Pipeline Monitoring

For complex pipelines, monitor each stage:

#!/bin/bash
set -e

# Stage 1: Extract data
python /opt/pipeline/extract.py
curl -fsS "https://api.cronsignal.io/ping/EXTRACT_CHECK_ID"

# Stage 2: Transform
python /opt/pipeline/transform.py
curl -fsS "https://api.cronsignal.io/ping/TRANSFORM_CHECK_ID"

# Stage 3: Load to data warehouse
python /opt/pipeline/load.py
curl -fsS "https://api.cronsignal.io/ping/LOAD_CHECK_ID"

# Stage 4: Generate reports
python /opt/pipeline/generate_reports.py
curl -fsS "https://api.cronsignal.io/ping/REPORTS_CHECK_ID"

# Stage 5: Distribute
python /opt/pipeline/distribute.py
curl -fsS "https://api.cronsignal.io/ping/DISTRIBUTE_CHECK_ID"

Because set -e stops the script at the first failing stage, the later pings never fire, so the alerting checks tell you exactly which stage broke.
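The same staged pattern can live in a Python wrapper instead of a shell script. A sketch: the stage scripts and check IDs mirror the shell example above, and the runner and ping parameters are our additions purely so the loop is testable without a network:

```python
import subprocess
import requests

# Stage script -> CronSignal check ID (same IDs as the shell example above)
STAGES = [
    ("/opt/pipeline/extract.py",          "EXTRACT_CHECK_ID"),
    ("/opt/pipeline/transform.py",        "TRANSFORM_CHECK_ID"),
    ("/opt/pipeline/load.py",             "LOAD_CHECK_ID"),
    ("/opt/pipeline/generate_reports.py", "REPORTS_CHECK_ID"),
    ("/opt/pipeline/distribute.py",       "DISTRIBUTE_CHECK_ID"),
]

def run_pipeline(runner=subprocess.run, ping=requests.get):
    """Run each stage in order, pinging its check only after the stage succeeds."""
    for script, check_id in STAGES:
        runner(["python", script], check=True)  # raises on non-zero exit
        ping(f"https://api.cronsignal.io/ping/{check_id}", timeout=10)
```

As in the shell version, a stage failure raises before its ping fires, so the missing ping identifies the broken stage.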

Validate Report Quality

Don't just check if the report ran — verify it contains valid data:

import pandas as pd
import requests

def validate_report(filepath):
    df = pd.read_excel(filepath)

    # Check row count
    if len(df) < 1:
        raise ValueError("Report is empty")

    # Check for null values in critical columns
    if df['revenue'].isnull().any():
        raise ValueError("Revenue column contains nulls")

    # Check for reasonable values
    if df['revenue'].sum() < 0:
        raise ValueError("Negative total revenue - data issue")

    # Check date range
    min_date = df['date'].min()
    max_date = df['date'].max()
    if (max_date - min_date).days < 6:
        raise ValueError("Report doesn't cover full week")

    return True

# Generate report
report = generate_report()

# Validate before marking as success
validate_report(report)

# Send report
send_report(report)

# All good - ping
requests.get(CRONSIGNAL_URL, timeout=10)

Schedule Recommendations

Report Type        Schedule         Grace Period
Daily summary      Every 24 hours   30-60 minutes
Weekly report      Every 7 days     2 hours
Hourly sync        Every 1 hour     10-15 minutes
Monthly billing    Every 30 days    4 hours

Common Failure Points

Database Connections

Connection pools exhausted, timeouts, or credentials rotated. Add connection retry logic and proper error handling.
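Transient connection errors are usually retryable. A minimal backoff sketch; the attempt counts and delays are placeholder values, not recommendations:

```python
import time
import functools

def with_retries(attempts=3, base_delay=1.0, retry_on=(Exception,)):
    """Retry the wrapped call with exponential backoff (1s, 2s, 4s, ...)."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except retry_on:
                    if attempt == attempts - 1:
                        raise  # out of retries: let the job fail loudly
                    time.sleep(base_delay * 2 ** attempt)
        return wrapper
    return decorator
```

For the report script above, you might decorate the connection step with with_retries(retry_on=(psycopg2.OperationalError,)) so only connection-level errors are retried.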

Memory Issues

Large reports can exhaust memory. Stream data in chunks rather than loading everything at once.
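With pandas, the chunksize argument to read_sql streams rows in batches, so only one chunk is in memory at a time. A sketch using an in-memory SQLite table as a stand-in for the analytics database:

```python
import sqlite3
import pandas as pd

def total_revenue(conn, chunksize=10_000):
    """Sum revenue chunk by chunk instead of loading every row at once."""
    total = 0.0
    for chunk in pd.read_sql("SELECT amount FROM orders", conn, chunksize=chunksize):
        total += chunk["amount"].sum()
    return total

# Demo against a tiny in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?)", [(10.0,), (20.0,), (12.5,)])
print(total_revenue(conn, chunksize=2))  # → 42.5
```

The same pattern works with the psycopg2 connection from the report script; only the aggregate, not the full result set, stays in memory.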

Email Delivery

SMTP servers have rate limits. Queue emails and handle bounces gracefully.
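A simple client-side throttle keeps you under an SMTP rate limit. A sketch; the injectable clock and sleep are our additions purely for testability:

```python
import time

class Throttle:
    """Space out sends so at most `rate` go out per `per` seconds."""

    def __init__(self, rate, per=60.0, clock=time.monotonic, sleep=time.sleep):
        self.interval = per / rate
        self.clock = clock
        self.sleep = sleep
        self.next_ok = clock()

    def wait(self):
        """Block until the next send is allowed."""
        now = self.clock()
        if now < self.next_ok:
            self.sleep(self.next_ok - now)
            now = self.next_ok
        self.next_ok = now + self.interval
```

Usage might look like: t = Throttle(rate=30, per=60), then t.wait() before each smtp.send_message(msg). Handling bounces still needs its own queue and retry logic.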

Timezone Confusion

A "daily" report might run at the wrong time if timezones aren't handled correctly. Always use UTC internally.
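For the query window itself, compute boundaries as timezone-aware UTC datetimes and convert to local time only for display. A sketch; the function name and parameters are ours for illustration:

```python
from datetime import datetime, timezone, timedelta

def report_window(days=7, now=None):
    """Return the [start, end) query window as UTC-aware datetimes."""
    end = now or datetime.now(timezone.utc)
    start = end - timedelta(days=days)
    return start, end
```

Passing start and end as query parameters, instead of relying on the database server's NOW(), keeps the report's window independent of the server's timezone and DST rules.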

Know before stakeholders ask

Get alerted the moment a scheduled report fails. Fix it before anyone notices it's missing.

Start Monitoring Free

3 checks free. No credit card required.