How to verify your Restic backups
are actually working

A backup that has never been tested is not a backup — it's a hope. Three commands, one drill, and a monitoring loop turn that hope into something you can actually depend on.

The three things you actually need to verify

People talk about “testing backups” like it's a single thing. It's three:

  1. Structural integrity: The repository's metadata is consistent. Pack files reference real blobs, indexes match, no dangling references. restic check covers this.
  2. Content integrity: The actual encrypted data on disk hasn't bit-rotted or been corrupted by storage hardware. restic check --read-data downloads everything and recomputes checksums.
  3. Recoverability: You can actually pull a file back and it matches the original. This is the only one that proves you have a backup. The other two are necessary but not sufficient.

Skip any of these and you have an unverified backup. Most people stop at the first one and find out about the gap during a real outage.

Layer 1: Structural check after every backup

This check is fast and cheap: it runs in seconds to a few minutes depending on repository size, and it should be the last step of every backup script.

restic check

What it does: verifies all pack files are reachable, the index matches the data, and snapshot trees don't reference missing blobs. What it does NOT do: read the actual encrypted data. A pack file that's corrupted on disk but still has the right size and metadata will pass.

Wire it into your backup script:

#!/usr/bin/env bash
set -euo pipefail
source /etc/restic/env

restic backup /home /etc /var/lib
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
restic check

If restic check fails, the next step is to investigate, not to keep backing up. A structurally-broken repo will silently lose data on the next prune.
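One way to enforce that rule is to gate the prune on the structural check, so a broken repo never reaches forget --prune. A minimal sketch for the backup script above; the safe_prune function name is illustrative, not part of restic:

```shell
# Hypothetical guard: only prune if the structural check passes.
# A repo that fails `restic check` should be investigated, not pruned.
safe_prune() {
  if restic check; then
    restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
  else
    echo "restic check failed; refusing to prune" >&2
    return 1
  fi
}
```

This reorders the original script so the check runs before data is discarded, at the cost of one extra check pass per backup run.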

Layer 2: Monthly --read-data verification

This is the expensive one. restic check --read-data downloads every encrypted blob in the repository and recomputes its checksum. It catches bit rot, storage corruption, and any tampering at the storage layer that didn't also touch the metadata.

restic check --read-data

On a 100 GB repository over a 100 Mbps connection, expect ~2 hours. On 1 TB at the same bandwidth, ~20 hours. Plan accordingly.
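The arithmetic behind those numbers is simple enough to script. A hypothetical helper (not part of restic) that estimates wall-clock hours from repository size and link speed:

```shell
# Rough duration estimate for `restic check --read-data`:
# hours = (GB * 8000 Mbit per GB) / Mbps / 3600 s.
# Ignores protocol overhead, so treat the result as a lower bound.
estimate_hours() {
  local repo_gb="$1" link_mbps="$2"
  awk -v gb="$repo_gb" -v mbps="$link_mbps" \
    'BEGIN { printf "%.1f\n", gb * 8000 / mbps / 3600 }'
}

estimate_hours 100 100    # ~2.2 hours for 100 GB at 100 Mbps
estimate_hours 1000 100   # ~22.2 hours for 1 TB at 100 Mbps
```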

For metered storage providers (B2, S3, rsync.net) this gets expensive. Most people compromise with --read-data-subset, which verifies a percentage of randomly-sampled packs:

# Verify 10% of packs every week, full verify quarterly
restic check --read-data-subset 10%

ServerCrate has no egress fees, so --read-data runs free regardless of repository size. Schedule it monthly on Saturday nights.
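One way to wire up both cadences, assuming a crontab-based setup (the times and the /etc/restic/env path are illustrative):

```
# Weekly 10% sample, Sunday 03:00
0 3 * * 0    . /etc/restic/env && restic check --read-data-subset 10%
# Full --read-data on the first Saturday of the month, 23:00
# (dom 1-7 plus a day-of-week test, since cron ORs dom and dow fields)
0 23 1-7 * * [ "$(date +\%u)" -eq 6 ] && . /etc/restic/env && restic check --read-data
```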

Layer 3: The test-restore drill

This is the only verification that proves your backups will actually save you. Pick a known-good file, restore it to a scratch directory, diff it against the original.

# Pick a stable file you know hasn't changed
TARGET="/etc/hostname"

# Get the latest snapshot ID
SNAP=$(restic snapshots --json | jq -r '.[-1].id')

# Restore to a temp dir
mkdir -p /tmp/restic-drill
restic restore "$SNAP" --target /tmp/restic-drill --include "$TARGET"

# Compare
diff "$TARGET" "/tmp/restic-drill$TARGET" && echo "PASS" || echo "FAIL"

# Clean up
rm -rf /tmp/restic-drill

Run this at least quarterly; annually is the absolute floor. Pick a different file each time so you're not just restoring the same byte pattern over and over.

For the more paranoid: also restore from the oldest snapshot in the repo, not just the newest. A repository that can restore the latest snapshot but not the oldest one has a deeper integrity problem worth catching.
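That variant is a one-line change to the drill above. A sketch, assuming jq as in the drill; oldest_snapshot_id is an illustrative helper, not a restic command:

```shell
# restic snapshots --json lists snapshots oldest-first,
# so .[0] is the oldest and .[-1] the newest.
oldest_snapshot_id() {
  restic snapshots --json | jq -r '.[0].id'
}

# Reuse the drill with the oldest snapshot in place of the latest:
# restic restore "$(oldest_snapshot_id)" --target /tmp/restic-drill --include "$TARGET"
```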

Wiring it into a monitoring loop

The verification is only useful if someone notices when it fails. The pattern that works:

  1. Healthchecks.io ping at the end of a clean run: Wrap the backup+check script in curl -fsS --retry 3 https://hc-ping.com/<uuid> on success. Healthchecks.io alerts if a ping doesn't arrive within the expected window.
  2. Separate ping for --read-data: Different schedule, different alert threshold. A weekly check failing is urgent. A monthly --read-data missing one cycle is a yellow flag, not a red one.
  3. Test-restore drill via cron with email-on-fail: Quarterly cron job that runs the drill and emails you the diff result. If the email never arrives, the drill never ran — that itself is a signal.
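The success-ping wrapper from step 1 can be sketched as below. The function name is illustrative; the <uuid> placeholder is your check's ping URL, and Healthchecks.io also accepts a trailing /fail on the same URL to signal an explicit failure:

```shell
HC_URL="https://hc-ping.com/<uuid>"   # your check's ping URL

# Run backup + structural check; ping Healthchecks.io only on a clean run.
backup_and_ping() {
  if restic backup /home /etc /var/lib && restic check; then
    curl -fsS --retry 3 "$HC_URL" >/dev/null
  else
    # Optional: report the failure immediately instead of
    # waiting for the missed-ping alert window.
    curl -fsS --retry 3 "$HC_URL/fail" >/dev/null
    return 1
  fi
}
```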

What people get wrong

  • Treating restic check as proof: It only checks structure. A repo can pass check and still be unrestorable due to storage-layer corruption.
  • Never doing --read-data: Bit rot is rare but real, especially on consumer-grade storage. Once a quarter is the floor.
  • Restoring to the same machine: If your laptop dies, you don't have a laptop. Practice the restore on a different machine, or at minimum a different user/path. Bonus: catches encryption-token-in-shell-history mistakes.
  • Not monitoring the verification: A check script that fails silently is worse than no check — it gives false confidence. If you only learn about a failed verification when you go looking for it, you have a hope, not a backup.

How ServerCrate makes this easier

ServerCrate is a vanilla Restic backend — everything above works exactly as documented. Three things that help specifically:

  • No egress fees: restic check --read-data downloads the entire repository every time you run it. On B2 or S3 that costs real money. On ServerCrate it's free, so you can run full verification as often as you want without watching the meter.
  • ZFS storage with checksums: The storage layer underneath your vault is ZFS with end-to-end checksumming. Silent bit rot gets caught and repaired against the mirror before Restic ever sees a corrupted byte. Your --read-data runs are unlikely to find storage-side problems — they'll only catch issues in transit or in your local restic environment.
  • Backup health surfaced in the portal: The vault page shows last backup time and a basic health status. If a backup hasn't arrived in the expected window, it's visible at a glance — one less monitoring blind spot.

Related guides: restic forget retention policy, restic restore commands, automating restic with systemd.

Backups you can actually verify.
No egress fees, no metered surprises.

10 GB free, no card. Run restic check --read-data as often as you want.

Cancel anytime. 10 GB free tier never expires. No egress fees.