You have a cron job that runs nightly. Maybe a backup, maybe a reindex, maybe a bill-runner. When it stops running, you don't notice — there's no email, no log entry, just silence.
Valpero heartbeats are the standard fix. Your script pings a URL each time it runs successfully. If we don't hear from it for N minutes past schedule, we alert you.
Step 1 — Create a heartbeat
Dashboard → Heartbeats → New. Pick a name (nightly-backup), an
expected interval (1440 minutes for a daily job), and a grace period
(15 minutes). You'll get a unique URL like:
https://valpero.com/heartbeat/8f3b9c2d
Step 2 — Hit the URL after a successful run
Append one line to your script.
Bash:
#!/bin/bash
set -euo pipefail
backup_database
# -f: fail on HTTP errors; -sS: quiet, but still print errors; -m 10: 10 s timeout
curl -fsS -m 10 https://valpero.com/heartbeat/8f3b9c2d > /dev/null
Python:
import urllib.request
backup_database()
# bound the request so a hung ping can't stall the job, and close the response
with urllib.request.urlopen("https://valpero.com/heartbeat/8f3b9c2d", timeout=10):
    pass
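One transient network blip shouldn't look like a missed run. Here is a slightly more defensive version of the Python ping, standard library only. The `ping` function, its retry counts, and the injectable `opener` are our own sketch, not a Valpero client:

```python
import time
import urllib.request


def ping(url, attempts=3, delay=2, opener=urllib.request.urlopen):
    """Hit the heartbeat URL, retrying on transient errors.

    Returns the attempt number that succeeded; re-raises the last
    error if every attempt fails. `opener` is injectable for tests.
    """
    for i in range(1, attempts + 1):
        try:
            with opener(url, timeout=10):  # bound the request; close the response
                return i
        except OSError:  # urllib's URLError is a subclass of OSError
            if i == attempts:
                raise
            time.sleep(delay)


# after backup_database() succeeds:
# ping("https://valpero.com/heartbeat/8f3b9c2d")
```

If all attempts fail, the exception propagates, the script exits non-zero, and the missing ping still triggers the alert — which is exactly what you want.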
Step 3 — Test it
Run your script manually. The dashboard should show "received 1 ping just now" within seconds.
What happens when it fails
- Script errors out before reaching the curl line → no ping → after grace expires, alert.
- Script never starts (cron daemon dead, server rebooted) → same.
- Script runs but takes 6h instead of 30m → the ping arrives hours past the grace window, so an alert fires, then clears when the late ping finally lands.
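All three cases above reduce to one comparison on the monitoring side: alert once the clock passes last ping + expected interval + grace. A toy version of that check, as our own illustration of the logic rather than Valpero's actual code:

```python
from datetime import datetime, timedelta


def is_overdue(last_ping, interval_min, grace_min, now):
    """True once `now` is past last_ping + interval + grace."""
    return now > last_ping + timedelta(minutes=interval_min + grace_min)


# last ping at 00:30; a 1440-minute interval plus a 15-minute grace
# means the next ping is due by 00:45 the following day
last = datetime(2024, 6, 1, 0, 30)
assert not is_overdue(last, 1440, 15, datetime(2024, 6, 2, 0, 44))
assert is_overdue(last, 1440, 15, datetime(2024, 6, 2, 0, 46))
```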
Pro tips
- Ping at the start with ?state=running to know the script started.
- Different URLs per environment — staging vs prod heartbeats.
- Combine with HTTP monitors — heartbeat catches "didn't run", HTTP monitor catches "ran and broke."
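The first tip combines naturally with Step 2: ping once with ?state=running before the work starts, and once with the plain URL after it finishes. A sketch — run_with_heartbeat and the injectable opener are our own names; only the ?state=running parameter comes from the tips above:

```python
import urllib.request

HEARTBEAT = "https://valpero.com/heartbeat/8f3b9c2d"


def _ping(url, opener=urllib.request.urlopen):
    with opener(url, timeout=10):
        pass


def run_with_heartbeat(job, opener=urllib.request.urlopen):
    # start ping: tells the dashboard the script at least launched
    _ping(HEARTBEAT + "?state=running", opener)
    job()  # if this raises, the success ping below is never sent
    # success ping: only reached when the job completed
    _ping(HEARTBEAT, opener)
```

With this shape, a crash between the two pings leaves the heartbeat stuck in "running", which is a much clearer signal than silence.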