I’d been using sqlite3 mydb .dump > backup.sql for SQLite backups. Works, but slow and holds a read lock long enough to block writers on big databases.

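For reference, here's the dump-based flow (filenames illustrative); restoring means replaying the SQL into a fresh file:

```shell
# Text backup: serialize schema + data as SQL statements.
sqlite3 mydb.sqlite .dump > backup.sql

# Restore: replay the statements into a new database.
sqlite3 restored.sqlite < backup.sql
```
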
The right answer is the .backup command, which uses the SQLite online backup API:

sqlite3 mydb.sqlite ".backup backup.sqlite"

This produces a binary copy of the database. Crucially:

  • No long-held lock. SQLite copies pages in small chunks, briefly taking a read lock for each chunk and yielding to other connections in between, so writers aren't blocked for the duration of the backup.
  • Produces an actual database file, not a SQL dump. You can open the backup and query it immediately.
  • WAL-aware. Backs up WAL-mode databases correctly, including pages that haven't yet been checkpointed into the main file.
  • A single file roughly the size of the original, not inflated by the INSERT-statement overhead of a .dump.

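Because the output is a real database, a quick sanity check is just a query (using backup.sqlite as produced above):

```shell
# The backup opens like any other SQLite database file.
sqlite3 backup.sqlite "PRAGMA integrity_check;"
sqlite3 backup.sqlite "SELECT count(*) FROM sqlite_master;"   # number of schema objects
```
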
Timings on a 4GB database on my laptop:

  • .dump: 47 seconds, 5.2GB of text output.
  • .backup: 8 seconds, 4GB binary copy.

For automated backups, wrap in a script that:

  1. Takes the backup to a tmp filename.
  2. Verifies the backup opens (sqlite3 tmp.sqlite "PRAGMA integrity_check").
  3. Moves it into place.

sqlite3 prod.sqlite ".backup tmp.sqlite"
sqlite3 tmp.sqlite "PRAGMA integrity_check" | grep -q '^ok$' \
  && mv tmp.sqlite "backup-$(date +%F).sqlite"
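
A slightly more defensive version of the same three steps, assuming bash (names like prod.sqlite are illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail

src=prod.sqlite
tmp=tmp.sqlite
dest="backup-$(date +%F).sqlite"

# 1. Take the backup to a tmp filename (online backup API under the hood).
sqlite3 "$src" ".backup $tmp"

# 2. Verify the copy opens and passes integrity_check before trusting it.
if sqlite3 "$tmp" "PRAGMA integrity_check;" | grep -qx ok; then
  # 3. Move it into place only once it verifies.
  mv "$tmp" "$dest"
else
  rm -f "$tmp"                       # don't leave a broken copy lying around
  echo "backup of $src failed integrity_check" >&2
  exit 1
fi
```
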

.dump still has its place — text output, version-to-version migration, human-readable archives — but for “copy my database somewhere,” .backup is the move.
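
One concrete use of that text output: migrating a database across SQLite versions, or rebuilding it from scratch, by piping the dump straight into a new file:

```shell
# Replay old.sqlite's SQL dump into a brand-new database file.
sqlite3 old.sqlite .dump | sqlite3 new.sqlite
```
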