dbbackup
Professional database backup and restore utility for PostgreSQL, MySQL, and MariaDB.
🎯 Production-Ready | 🔒 Encrypted Backups | ☁️ Cloud Storage | 🔄 Point-in-Time Recovery
Key Features
- Multi-database support: PostgreSQL, MySQL, MariaDB
- Backup modes: Single database, cluster, sample data
- 🔐 AES-256-GCM encryption for secure backups (v3.0)
- 📦 Incremental backups for PostgreSQL and MySQL (v3.0)
- Cloud storage integration: S3, MinIO, B2, Azure Blob, Google Cloud Storage
- Restore operations with safety checks and validation
- Automatic CPU detection and parallel processing
- Streaming compression for large databases
- Interactive terminal UI with progress tracking
- Cross-platform binaries (Linux, macOS, BSD, Windows)
Installation
Docker (Recommended)
Pull from registry:
docker pull git.uuxo.net/PlusOne/dbbackup:latest
Quick start:
# PostgreSQL backup
docker run --rm \
-v $(pwd)/backups:/backups \
-e PGHOST=your-host \
-e PGUSER=postgres \
-e PGPASSWORD=secret \
git.uuxo.net/PlusOne/dbbackup:latest backup single mydb
# Interactive mode
docker run --rm -it \
-v $(pwd)/backups:/backups \
git.uuxo.net/PlusOne/dbbackup:latest interactive
See DOCKER.md for complete Docker documentation.
Download Pre-compiled Binary
Linux x86_64:
wget https://git.uuxo.net/PlusOne/dbbackup/releases/download/v3.1.0/dbbackup-linux-amd64
chmod +x dbbackup-linux-amd64
sudo mv dbbackup-linux-amd64 /usr/local/bin/dbbackup
Linux ARM64:
wget https://git.uuxo.net/PlusOne/dbbackup/releases/download/v3.1.0/dbbackup-linux-arm64
chmod +x dbbackup-linux-arm64
sudo mv dbbackup-linux-arm64 /usr/local/bin/dbbackup
macOS Intel:
wget https://git.uuxo.net/PlusOne/dbbackup/releases/download/v3.1.0/dbbackup-darwin-amd64
chmod +x dbbackup-darwin-amd64
sudo mv dbbackup-darwin-amd64 /usr/local/bin/dbbackup
macOS Apple Silicon:
wget https://git.uuxo.net/PlusOne/dbbackup/releases/download/v3.1.0/dbbackup-darwin-arm64
chmod +x dbbackup-darwin-arm64
sudo mv dbbackup-darwin-arm64 /usr/local/bin/dbbackup
Other platforms: FreeBSD, OpenBSD, NetBSD binaries available in releases.
Build from Source
Requires Go 1.19 or later:
git clone https://git.uuxo.net/uuxo/dbbackup.git
cd dbbackup
go build
Quick Start
Interactive Mode
PostgreSQL (peer authentication):
sudo -u postgres ./dbbackup interactive
MySQL/MariaDB:
./dbbackup interactive --db-type mysql --user root --password secret
Menu-driven interface for all operations. Press arrow keys to navigate, Enter to select.
Main Menu:
🗄️ Database Backup Tool - Interactive Menu
Target Engine: PostgreSQL | MySQL | MariaDB
Switch with ←/→ or t • Cluster backup requires PostgreSQL
Database: root@localhost:5432 (PostgreSQL)
> Single Database Backup
Sample Database Backup (with ratio)
Cluster Backup (all databases)
─────────────────────────────
Restore Single Database
Restore Cluster Backup
List & Manage Backups
─────────────────────────────
View Active Operations
Show Operation History
Database Status & Health Check
Configuration Settings
Clear Operation History
Quit
Press ↑/↓ to navigate • Enter to select • q to quit
Backup Progress:
Backing up database: production_db
[=================> ] 45%
Elapsed: 2m 15s | ETA: 2m 48s
Current: Dumping table users (1.2M records)
Speed: 25 MB/s | Size: 3.2 GB / 7.1 GB
Configuration Settings:
⚙️ Configuration Settings
> Database Type: postgres (PostgreSQL)
Target database engine (press Enter to cycle: PostgreSQL → MySQL → MariaDB)
CPU Workload Type: balanced
Backup Directory: /root/db_backups
Compression Level: 6
Parallel Jobs: 16
Dump Jobs: 8
Database Host: localhost
Database Port: 5432
Database User: root
Default Database: postgres
SSL Mode: prefer
Auto Detect CPU Cores: true
📋 Current Configuration:
Target DB: PostgreSQL (postgres)
Database: root@localhost:5432
Backup Dir: /root/db_backups
Compression: Level 6
Jobs: 16 parallel, 8 dump
↑/↓ navigate • Enter edit • 's' save • 'r' reset • 'q' menu • Tab=dirs on path fields only
Interactive Features
The interactive mode provides a menu-driven interface for all database operations:
- Backup Operations: Single database, full cluster, or sample backups
- Restore Operations: Database or cluster restoration with safety checks
- Configuration Management: Auto-save/load settings per directory (.dbbackup.conf)
- Backup Archive Management: List, verify, and delete backup files
- Performance Tuning: CPU workload profiles (Balanced, CPU-Intensive, I/O-Intensive)
- Safety Features: Disk space verification, archive validation, confirmation prompts
- Progress Tracking: Real-time progress indicators with ETA estimation
- Error Handling: Context-aware error messages with actionable hints
Configuration Persistence:
Settings are automatically saved to .dbbackup.conf in the current directory after successful operations and loaded on subsequent runs. This allows per-project configuration without global settings.
Flags available:
- --no-config - Skip loading saved configuration
- --no-save-config - Prevent saving configuration after operation
Command Line Mode
Backup single database:
./dbbackup backup single myapp_db
Backup entire cluster (PostgreSQL):
./dbbackup backup cluster
Restore database:
./dbbackup restore single backup.dump --target myapp_db --create
Restore full cluster:
./dbbackup restore cluster cluster_backup.tar.gz --confirm
For VMs with limited system disk space (common with NFS-mounted backup storage):
# Use NFS mount or larger partition for extraction
./dbbackup restore cluster cluster_backup.tar.gz --workdir /u01/dba/restore_tmp --confirm
This prevents "insufficient disk space" errors when the backup directory has space but the system root partition is small.
Commands
Global Flags (Available for all commands)
| Flag | Description | Default |
|---|---|---|
| -d, --db-type | postgres, mysql, mariadb | postgres |
| --host | Database host | localhost |
| --port | Database port | 5432 (postgres), 3306 (mysql) |
| --user | Database user | root |
| --password | Database password | (empty) |
| --database | Database name | postgres |
| --backup-dir | Backup directory | /root/db_backups |
| --compression | Compression level 0-9 | 6 |
| --ssl-mode | disable, prefer, require, verify-ca, verify-full | prefer |
| --insecure | Disable SSL/TLS | false |
| --jobs | Parallel jobs | 8 |
| --dump-jobs | Parallel dump jobs | 8 |
| --max-cores | Maximum CPU cores | 16 |
| --cpu-workload | cpu-intensive, io-intensive, balanced | balanced |
| --auto-detect-cores | Auto-detect CPU cores | true |
| --no-config | Skip loading .dbbackup.conf | false |
| --no-save-config | Prevent saving configuration | false |
| --cloud | Cloud storage URI (s3://, azure://, gcs://) | (empty) |
| --cloud-provider | Cloud provider (s3, minio, b2, azure, gcs) | (empty) |
| --cloud-bucket | Cloud bucket/container name | (empty) |
| --cloud-region | Cloud region | (empty) |
| --debug | Enable debug logging | false |
| --no-color | Disable colored output | false |
Backup Operations
Single Database
Backup a single database to compressed archive:
./dbbackup backup single DATABASE_NAME [OPTIONS]
Common Options:
- --host STRING - Database host (default: localhost)
- --port INT - Database port (default: 5432 PostgreSQL, 3306 MySQL)
- --user STRING - Database user (default: postgres)
- --password STRING - Database password
- --db-type STRING - Database type: postgres, mysql, mariadb (default: postgres)
- --backup-dir STRING - Backup directory (default: /var/lib/pgsql/db_backups)
- --compression INT - Compression level 0-9 (default: 6)
- --insecure - Disable SSL/TLS
- --ssl-mode STRING - SSL mode: disable, prefer, require, verify-ca, verify-full
Examples:
# Basic backup
./dbbackup backup single production_db
# Remote database with custom settings
./dbbackup backup single myapp_db \
--host db.example.com \
--port 5432 \
--user backup_user \
--password secret \
--compression 9 \
--backup-dir /mnt/backups
# MySQL database
./dbbackup backup single wordpress \
--db-type mysql \
--user root \
--password secret
Supported formats:
- PostgreSQL: Custom format (.dump) or SQL (.sql)
- MySQL/MariaDB: SQL (.sql)
Cluster Backup (PostgreSQL)
Backup all databases in PostgreSQL cluster including roles and tablespaces:
./dbbackup backup cluster [OPTIONS]
Performance Options:
- --max-cores INT - Maximum CPU cores (default: auto-detect)
- --cpu-workload STRING - Workload type: cpu-intensive, io-intensive, balanced (default: balanced)
- --jobs INT - Parallel jobs (default: auto-detect based on workload)
- --dump-jobs INT - Parallel dump jobs (default: auto-detect based on workload)
- --cluster-parallelism INT - Concurrent database operations (default: 2, configurable via CLUSTER_PARALLELISM env var)
Examples:
# Standard cluster backup
sudo -u postgres ./dbbackup backup cluster
# High-performance backup
sudo -u postgres ./dbbackup backup cluster \
--compression 3 \
--max-cores 16 \
--cpu-workload cpu-intensive \
--jobs 16
Output: tar.gz archive containing all databases and globals.
Sample Backup
Create reduced-size backup for testing/development:
./dbbackup backup sample DATABASE_NAME [OPTIONS]
Options:
- --sample-strategy STRING - Strategy: ratio, percent, count (default: ratio)
- --sample-value FLOAT - Sample value based on strategy (default: 10)
Examples:
# Keep 10% of all rows
./dbbackup backup sample myapp_db --sample-strategy percent --sample-value 10
# Keep 1 in 100 rows
./dbbackup backup sample myapp_db --sample-strategy ratio --sample-value 100
# Keep 5000 rows per table
./dbbackup backup sample myapp_db --sample-strategy count --sample-value 5000
Warning: Sample backups may break referential integrity.
🔐 Encrypted Backups (v3.0)
Encrypt backups with AES-256-GCM for secure storage:
./dbbackup backup single myapp_db --encrypt --encryption-key-file key.txt
Encryption Options:
- --encrypt - Enable AES-256-GCM encryption
- --encryption-key-file STRING - Path to encryption key file (32 bytes, raw or base64)
- --encryption-key-env STRING - Environment variable containing encryption key (default: DBBACKUP_ENCRYPTION_KEY)
Examples:
# Generate encryption key
head -c 32 /dev/urandom | base64 > encryption.key
# Encrypted backup
./dbbackup backup single production_db \
--encrypt \
--encryption-key-file encryption.key
# Using environment variable
export DBBACKUP_ENCRYPTION_KEY=$(cat encryption.key)
./dbbackup backup cluster --encrypt
# Using passphrase (auto-derives key with PBKDF2)
echo "my-secure-passphrase" > passphrase.txt
./dbbackup backup single mydb --encrypt --encryption-key-file passphrase.txt
Encryption Features:
- Algorithm: AES-256-GCM (authenticated encryption)
- Key derivation: PBKDF2-SHA256 (600,000 iterations)
- Streaming encryption (memory-efficient for large backups)
- Automatic decryption on restore (detects encrypted backups)
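Since the raw key must decode to exactly 32 bytes for AES-256, a quick sanity check can catch a truncated or corrupted key file before a backup run. A sketch (the file name is illustrative; passphrase files are exempt since they are derived via PBKDF2):

```shell
# Generate a key and verify its decoded length is exactly 32 bytes (AES-256)
head -c 32 /dev/urandom | base64 > encryption.key
BYTES=$(base64 -d encryption.key | wc -c)
if [ "$BYTES" -eq 32 ]; then
  echo "key OK ($BYTES bytes)"
else
  echo "unexpected key length: $BYTES bytes" >&2
fi
```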
Restore encrypted backup:
./dbbackup restore single myapp_db_20251126.sql.gz \
--encryption-key-file encryption.key \
--target myapp_db \
--confirm
Encryption is automatically detected - no need to specify --encrypted flag on restore.
📦 Incremental Backups (v3.0)
Create space-efficient incremental backups (PostgreSQL & MySQL):
# Full backup (base)
./dbbackup backup single myapp_db --backup-type full
# Incremental backup (only changed files since base)
./dbbackup backup single myapp_db \
--backup-type incremental \
--base-backup /backups/myapp_db_20251126.tar.gz
Incremental Options:
- --backup-type STRING - Backup type: full or incremental (default: full)
- --base-backup STRING - Path to base backup (required for incremental)
Examples:
# PostgreSQL incremental backup
sudo -u postgres ./dbbackup backup single production_db \
--backup-type full
# Wait for database changes...
sudo -u postgres ./dbbackup backup single production_db \
--backup-type incremental \
--base-backup /var/lib/pgsql/db_backups/production_db_20251126_100000.tar.gz
# MySQL incremental backup
./dbbackup backup single wordpress \
--db-type mysql \
--backup-type incremental \
--base-backup /root/db_backups/wordpress_20251126.tar.gz
# Combined: Encrypted + Incremental
./dbbackup backup single myapp_db \
--backup-type incremental \
--base-backup myapp_db_base.tar.gz \
--encrypt \
--encryption-key-file key.txt
Incremental Features:
- Change detection: mtime-based (PostgreSQL & MySQL)
- Archive format: tar.gz (only changed files)
- Metadata: Tracks backup chain (base → incremental)
- Restore: Automatically applies base + incremental
- Space savings: 70-95% smaller than full backups (typical)
Restore incremental backup:
./dbbackup restore incremental \
--base-backup myapp_db_base.tar.gz \
--incremental-backup myapp_db_incr_20251126.tar.gz \
--target /restore/path
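A common pattern is a weekly full backup with daily incrementals against it. The sketch below uses only the flags documented above, but the paths and schedule are hypothetical; it echoes the command it would run (drop the echo to execute):

```shell
# Weekly full on Sunday (or whenever no base backup exists), daily incrementals otherwise
BACKUP_DIR=/var/lib/pgsql/db_backups          # hypothetical backup location
BASE="$BACKUP_DIR/myapp_db_base.tar.gz"       # hypothetical path of the latest full backup
if [ "$(date +%u)" -eq 7 ] || [ ! -f "$BASE" ]; then
  CMD="./dbbackup backup single myapp_db --backup-type full"
else
  CMD="./dbbackup backup single myapp_db --backup-type incremental --base-backup $BASE"
fi
echo "$CMD"
```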
Restore Operations
Single Database Restore
Restore database from backup file:
./dbbackup restore single BACKUP_FILE [OPTIONS]
Options:
- --target STRING - Target database name (required)
- --create - Create database if it doesn't exist
- --clean - Drop and recreate database before restore
- --jobs INT - Parallel restore jobs (default: 4)
- --verbose - Show detailed progress
- --no-progress - Disable progress indicators
- --confirm - Execute restore (required for safety, dry-run by default)
- --dry-run - Preview without executing
- --force - Skip safety checks
Examples:
# Basic restore
./dbbackup restore single /backups/myapp_20250112.dump --target myapp_restored
# Restore with database creation
./dbbackup restore single backup.dump \
--target myapp_db \
--create \
--jobs 8
# Clean restore (drops existing database)
./dbbackup restore single backup.dump \
--target myapp_db \
--clean \
--verbose
Supported formats:
- PostgreSQL: .dump, .dump.gz, .sql, .sql.gz
- MySQL: .sql, .sql.gz
Cluster Restore (PostgreSQL)
Restore entire PostgreSQL cluster from archive:
./dbbackup restore cluster ARCHIVE_FILE [OPTIONS]
Options:
- --confirm - Confirm and execute restore (required for safety)
- --dry-run - Show what would be done without executing
- --force - Skip safety checks
- --jobs INT - Parallel decompression jobs (default: auto)
- --verbose - Show detailed progress
- --no-progress - Disable progress indicators
Examples:
# Standard cluster restore
sudo -u postgres ./dbbackup restore cluster cluster_backup.tar.gz --confirm
# Dry-run to preview
sudo -u postgres ./dbbackup restore cluster cluster_backup.tar.gz --dry-run
# High-performance restore
sudo -u postgres ./dbbackup restore cluster cluster_backup.tar.gz \
--confirm \
--jobs 16 \
--verbose
# Special case: limited system disk space (use alternative extraction directory)
sudo -u postgres ./dbbackup restore cluster cluster_backup.tar.gz \
--workdir /u01/dba/restore_tmp \
--confirm
Note: The --workdir flag is only needed when your system disk (where PostgreSQL data resides) is small but you have larger mounted storage (NFS, SAN, etc.). Extracting on the larger partition prevents "insufficient disk space" errors on VMs with small root partitions. For standard deployments, it is not required.
Safety Features:
- Archive integrity validation
- Disk space checks (4x archive size recommended)
- Automatic database cleanup detection (interactive mode)
- Progress tracking with ETA estimation
Restore List
Show available backup archives in backup directory:
./dbbackup restore list
Verification & Maintenance
Verify Backup Integrity
Verify backup files using SHA-256 checksums and metadata validation:
./dbbackup verify-backup BACKUP_FILE [OPTIONS]
Options:
- --quick - Quick verification (size check only, no checksum calculation)
- --verbose - Show detailed information about each backup
Examples:
# Verify single backup (full SHA-256 check)
./dbbackup verify-backup /backups/mydb_20251125.dump
# Verify all backups in directory
./dbbackup verify-backup /backups/*.dump --verbose
# Quick verification (fast, size check only)
./dbbackup verify-backup /backups/*.dump --quick
Output:
Verifying 3 backup file(s)...
📁 mydb_20251125.dump
✅ VALID
Size: 2.5 GiB
SHA-256: 7e166d4cb7276e1310d76922f45eda0333a6aeac...
Database: mydb (postgresql)
Created: 2025-11-25T19:00:00Z
──────────────────────────────────────────────────
Total: 3 backups
✅ Valid: 3
Cleanup Old Backups
Automatically remove old backups based on retention policy:
./dbbackup cleanup BACKUP_DIRECTORY [OPTIONS]
Options:
- --retention-days INT - Delete backups older than N days (default: 30)
- --min-backups INT - Always keep at least N most recent backups (default: 5)
- --dry-run - Preview what would be deleted without actually deleting
- --pattern STRING - Only clean backups matching pattern (e.g., "mydb_*.dump")
Retention Policy:
The cleanup command uses a safe retention policy:
- Backups older than --retention-days are eligible for deletion
- At least --min-backups most recent backups are always kept
- Both conditions must be met for a backup to be deleted
Examples:
# Clean up backups older than 30 days (keep at least 5)
./dbbackup cleanup /backups --retention-days 30 --min-backups 5
# Preview what would be deleted
./dbbackup cleanup /backups --retention-days 7 --dry-run
# Clean specific database backups
./dbbackup cleanup /backups --pattern "mydb_*.dump"
# Aggressive cleanup (keep only 3 most recent)
./dbbackup cleanup /backups --retention-days 1 --min-backups 3
Output:
🗑️ Cleanup Policy:
Directory: /backups
Retention: 30 days
Min backups: 5
📊 Results:
Total backups: 12
Eligible for deletion: 7
✅ Deleted 7 backup(s):
- old_db_20251001.dump
- old_db_20251002.dump
...
📦 Kept 5 backup(s)
💾 Space freed: 15.2 GiB
──────────────────────────────────────────────────
✅ Cleanup completed successfully
System Commands
Status Check
Check database connection and configuration:
./dbbackup status [OPTIONS]
Shows: Database type, host, port, user, connection status, available databases.
Preflight Checks
Run pre-backup validation checks:
./dbbackup preflight [OPTIONS]
Verifies: Database connection, required tools, disk space, permissions.
List Databases
List available databases:
./dbbackup list [OPTIONS]
CPU Information
Display CPU configuration and optimization settings:
./dbbackup cpu
Shows: CPU count, model, workload recommendation, suggested parallel jobs.
Version
Display version information:
./dbbackup version
Point-in-Time Recovery (PITR)
dbbackup v3.1 includes full Point-in-Time Recovery support for PostgreSQL, allowing you to restore your database to any specific moment in time, not just to the time of your last backup.
PITR Overview
Point-in-Time Recovery works by combining:
- Base Backup - A full database backup
- WAL Archives - Continuous archive of Write-Ahead Log files
- Recovery Target - The specific point in time you want to restore to
This allows you to:
- Recover from accidental data deletion or corruption
- Restore to a specific transaction or timestamp
- Create multiple recovery branches (timelines)
- Test "what-if" scenarios by restoring to different points
Enable PITR
Step 1: Enable WAL Archiving
# Configure PostgreSQL for PITR
./dbbackup pitr enable --archive-dir /backups/wal_archive
# This will modify postgresql.conf:
# wal_level = replica
# archive_mode = on
# archive_command = 'dbbackup wal archive %p %f ...'
# Restart PostgreSQL for changes to take effect
sudo systemctl restart postgresql
Step 2: Take a Base Backup
# Create a base backup (use pg_basebackup or dbbackup)
pg_basebackup -D /backups/base_backup.tar.gz -Ft -z -P
# Or use a regular dbbackup backup (a dedicated --pitr flag is planned as a future feature)
./dbbackup backup single mydb --output /backups/base_backup.tar.gz
Step 3: Continuous WAL Archiving
WAL files are now automatically archived by PostgreSQL to your archive directory. Monitor with:
# Check PITR status
./dbbackup pitr status
# List archived WAL files
./dbbackup wal list --archive-dir /backups/wal_archive
# View timeline history
./dbbackup wal timeline --archive-dir /backups/wal_archive
Perform Point-in-Time Recovery
Restore to Specific Timestamp:
./dbbackup restore pitr \
--base-backup /backups/base_backup.tar.gz \
--wal-archive /backups/wal_archive \
--target-time "2024-11-26 12:00:00" \
--target-dir /var/lib/postgresql/14/restored \
--target-action promote
Restore to Transaction ID (XID):
./dbbackup restore pitr \
--base-backup /backups/base_backup.tar.gz \
--wal-archive /backups/wal_archive \
--target-xid 1000000 \
--target-dir /var/lib/postgresql/14/restored
Restore to Log Sequence Number (LSN):
./dbbackup restore pitr \
--base-backup /backups/base_backup.tar.gz \
--wal-archive /backups/wal_archive \
--target-lsn "0/3000000" \
--target-dir /var/lib/postgresql/14/restored
Restore to Named Restore Point:
# First create a restore point in PostgreSQL:
psql -c "SELECT pg_create_restore_point('before_migration');"
# Later, restore to that point:
./dbbackup restore pitr \
--base-backup /backups/base_backup.tar.gz \
--wal-archive /backups/wal_archive \
--target-name before_migration \
--target-dir /var/lib/postgresql/14/restored
Restore to Earliest Consistent Point:
./dbbackup restore pitr \
--base-backup /backups/base_backup.tar.gz \
--wal-archive /backups/wal_archive \
--target-immediate \
--target-dir /var/lib/postgresql/14/restored
Advanced PITR Options
WAL Compression and Encryption:
# Enable compression for WAL archives (saves space)
./dbbackup pitr enable \
--archive-dir /backups/wal_archive
# Archive with compression
./dbbackup wal archive /path/to/wal %f \
--archive-dir /backups/wal_archive \
--compress
# Archive with encryption
./dbbackup wal archive /path/to/wal %f \
--archive-dir /backups/wal_archive \
--encrypt \
--encryption-key-file /secure/key.bin
Recovery Actions:
# Promote to primary after recovery (default)
--target-action promote
# Pause recovery at target (for inspection)
--target-action pause
# Shutdown after recovery
--target-action shutdown
Timeline Management:
# Follow specific timeline
--timeline 2
# Follow latest timeline (default)
--timeline latest
# View timeline branching structure
./dbbackup wal timeline --archive-dir /backups/wal_archive
Auto-start and Monitor:
# Automatically start PostgreSQL after setup
./dbbackup restore pitr \
--base-backup /backups/base_backup.tar.gz \
--wal-archive /backups/wal_archive \
--target-time "2024-11-26 12:00:00" \
--target-dir /var/lib/postgresql/14/restored \
--auto-start \
--monitor
WAL Management Commands
# Archive a WAL file manually (normally called by PostgreSQL)
./dbbackup wal archive <wal_path> <wal_filename> \
--archive-dir /backups/wal_archive
# List all archived WAL files
./dbbackup wal list --archive-dir /backups/wal_archive
# Clean up old WAL archives (retention policy)
./dbbackup wal cleanup \
--archive-dir /backups/wal_archive \
--retention-days 7
# View timeline history and branching
./dbbackup wal timeline --archive-dir /backups/wal_archive
# Check PITR configuration status
./dbbackup pitr status
# Disable PITR
./dbbackup pitr disable
PITR Best Practices
- Regular Base Backups: Take base backups regularly (daily/weekly) to limit WAL archive size
- Monitor WAL Archive Space: WAL files can accumulate quickly, monitor disk usage
- Test Recovery: Regularly test PITR recovery to verify your backup strategy
- Retention Policy: Set appropriate retention with wal cleanup --retention-days
- Compress WAL Files: Use --compress to save storage space (3-5x reduction)
- Encrypt Sensitive Data: Use --encrypt for compliance requirements
- Document Restore Points: Create named restore points before major changes
Troubleshooting PITR
Issue: WAL archiving not working
# Check PITR status
./dbbackup pitr status
# Verify PostgreSQL configuration
grep -E "archive_mode|wal_level|archive_command" /etc/postgresql/*/main/postgresql.conf
# Check PostgreSQL logs
tail -f /var/log/postgresql/postgresql-14-main.log
Issue: Recovery target not reached
# Verify WAL files are available
./dbbackup wal list --archive-dir /backups/wal_archive
# Check timeline consistency
./dbbackup wal timeline --archive-dir /backups/wal_archive
# Review PostgreSQL recovery logs
tail -f /var/lib/postgresql/14/restored/logfile
Issue: Permission denied during recovery
# Ensure data directory ownership
sudo chown -R postgres:postgres /var/lib/postgresql/14/restored
# Verify WAL archive permissions
ls -la /backups/wal_archive
For more details, see PITR.md documentation.
Cloud Storage Integration
dbbackup v2.0 includes native support for cloud storage providers. See CLOUD.md for complete documentation.
Quick Start - Cloud Backups
Configure cloud provider in TUI:
# Launch interactive mode
./dbbackup interactive
# Navigate to: Configuration Settings
# Set: Cloud Storage Enabled = true
# Set: Cloud Provider = s3 (or azure, gcs, minio, b2)
# Set: Cloud Bucket/Container = your-bucket-name
# Set: Cloud Region = us-east-1 (if applicable)
# Set: Cloud Auto-Upload = true
Command-line cloud backup:
# Backup directly to S3
./dbbackup backup single mydb --cloud s3://my-bucket/backups/
# Backup to Azure Blob Storage
./dbbackup backup single mydb \
--cloud azure://my-container/backups/ \
--cloud-access-key myaccount \
--cloud-secret-key "account-key"
# Backup to Google Cloud Storage
./dbbackup backup single mydb \
--cloud gcs://my-bucket/backups/ \
--cloud-access-key /path/to/service-account.json
# Restore from cloud
./dbbackup restore single s3://my-bucket/backups/mydb_20251126.dump \
--target mydb_restored \
--confirm
Supported Providers:
- AWS S3 - s3://bucket/path
- MinIO - minio://bucket/path (self-hosted S3-compatible)
- Backblaze B2 - b2://bucket/path
- Azure Blob Storage - azure://container/path (native support)
- Google Cloud Storage - gcs://bucket/path (native support)
Environment Variables:
# AWS S3 / MinIO / B2
export AWS_ACCESS_KEY_ID="your-key"
export AWS_SECRET_ACCESS_KEY="your-secret"
export AWS_REGION="us-east-1"
# Azure Blob Storage
export AZURE_STORAGE_ACCOUNT="myaccount"
export AZURE_STORAGE_KEY="account-key"
# Google Cloud Storage
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
Features:
- ✅ Streaming uploads (memory efficient)
- ✅ Multipart upload for large files (>100MB)
- ✅ Progress tracking
- ✅ Automatic metadata sync (.sha256, .info files)
- ✅ Restore directly from cloud URIs
- ✅ Cloud backup verification
- ✅ TUI integration for all cloud providers
See CLOUD.md for detailed setup guides, testing with Docker, and advanced configuration.
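The .sha256 companion files mentioned above can also be cross-checked with standard tools, independently of dbbackup. A sketch, assuming the companion file stores the hex digest as the first field on the line:

```shell
# verify_sum FILE: compare FILE's SHA-256 against the digest recorded in FILE.sha256
verify_sum() {
  recorded=$(cut -d' ' -f1 "$1.sha256")
  actual=$(sha256sum "$1" | cut -d' ' -f1)
  [ "$recorded" = "$actual" ]
}
# usage (hypothetical file): verify_sum /backups/mydb_20251126.dump
```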
Configuration
PostgreSQL Authentication
PostgreSQL uses different authentication methods based on system configuration.
Peer/Ident Authentication (Linux Default)
Run as postgres system user:
sudo -u postgres ./dbbackup backup cluster
Password Authentication
Option 1: .pgpass file (recommended for automation):
echo "localhost:5432:*:postgres:password" > ~/.pgpass
chmod 0600 ~/.pgpass
./dbbackup backup single mydb --user postgres
Option 2: Environment variable:
export PGPASSWORD=your_password
./dbbackup backup single mydb --user postgres
Option 3: Command line flag:
./dbbackup backup single mydb --user postgres --password your_password
MySQL/MariaDB Authentication
Option 1: Command line
./dbbackup backup single mydb --db-type mysql --user root --password secret
Option 2: Environment variable
export MYSQL_PWD=your_password
./dbbackup backup single mydb --db-type mysql --user root
Option 3: Configuration file
cat > ~/.my.cnf << EOF
[client]
user=backup_user
password=your_password
host=localhost
EOF
chmod 0600 ~/.my.cnf
Environment Variables
PostgreSQL:
export PG_HOST=localhost
export PG_PORT=5432
export PG_USER=postgres
export PGPASSWORD=password
MySQL/MariaDB:
export MYSQL_HOST=localhost
export MYSQL_PORT=3306
export MYSQL_USER=root
export MYSQL_PWD=password
General:
export BACKUP_DIR=/var/backups/databases
export COMPRESS_LEVEL=6
export CLUSTER_TIMEOUT_MIN=240
Database Types
- postgres - PostgreSQL
- mysql - MySQL
- mariadb - MariaDB
Select via:
- CLI: -d postgres or --db-type postgres
- Interactive: Arrow keys to cycle through options
Performance
Memory Usage
Streaming architecture maintains constant memory usage:
| Database Size | Memory Usage |
|---|---|
| 1-10 GB | ~800 MB |
| 10-50 GB | ~900 MB |
| 50-100 GB | ~950 MB |
| 100+ GB | <1 GB |
Large Database Optimization
- Databases >5GB automatically use plain format with streaming compression
- Parallel compression via pigz (if available)
- Per-database timeout: 4 hours default
- Automatic format selection based on size
CPU Optimization
Automatically detects CPU configuration and optimizes parallelism:
./dbbackup cpu
Manual override:
./dbbackup backup cluster \
--max-cores 32 \
--jobs 32 \
--cpu-workload cpu-intensive
Parallelism
./dbbackup backup cluster --jobs 16 --dump-jobs 16
- --jobs - Compression/decompression parallel jobs
- --dump-jobs - Database dump parallel jobs
- --max-cores - Limit CPU cores (default: 16)
- Cluster operations use worker pools with configurable parallelism (default: 2 concurrent databases)
- Set the CLUSTER_PARALLELISM environment variable to adjust concurrent database operations
CPU Workload
./dbbackup backup cluster --cpu-workload cpu-intensive
Options: cpu-intensive, io-intensive, balanced (default)
Workload types automatically adjust Jobs and DumpJobs:
- Balanced: Jobs = PhysicalCores, DumpJobs = PhysicalCores/2 (min 2)
- CPU-Intensive: Jobs = PhysicalCores×2, DumpJobs = PhysicalCores (more parallelism)
- I/O-Intensive: Jobs = PhysicalCores/2 (min 1), DumpJobs = 2 (less parallelism to avoid I/O contention)
Configure in interactive mode via Configuration Settings menu.
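The balanced mapping above can also be reproduced by hand when you want to pin explicit values. A sketch (assumes nproc approximates the physical core count, which may differ on SMT systems); it echoes the resulting command:

```shell
# Balanced profile: Jobs = cores, DumpJobs = cores/2 (minimum 2)
CORES=$(nproc)
JOBS=$CORES
DUMP_JOBS=$(( CORES / 2 ))
if [ "$DUMP_JOBS" -lt 2 ]; then DUMP_JOBS=2; fi
echo "./dbbackup backup cluster --jobs $JOBS --dump-jobs $DUMP_JOBS"
```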
Compression
./dbbackup backup single mydb --compression 9
- Level 0 = No compression (fastest)
- Level 6 = Balanced (default)
- Level 9 = Maximum compression (slowest)
SSL/TLS Configuration
SSL modes: disable, prefer, require, verify-ca, verify-full
# Disable SSL
./dbbackup backup single mydb --insecure
# Require SSL
./dbbackup backup single mydb --ssl-mode require
# Verify certificate
./dbbackup backup single mydb --ssl-mode verify-full
Troubleshooting
Connection Issues
Test connectivity:
./dbbackup status
PostgreSQL peer authentication error:
sudo -u postgres ./dbbackup status
SSL/TLS issues:
./dbbackup status --insecure
Out of Memory
Check memory:
free -h
dmesg | grep -i oom
Add swap space:
sudo fallocate -l 16G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
Reduce parallelism:
./dbbackup backup cluster --jobs 4 --dump-jobs 4
Debug Mode
Enable detailed logging:
./dbbackup backup single mydb --debug
Common Errors
- "Ident authentication failed" - Run as matching OS user or configure password authentication
- "Permission denied" - Check database user privileges
- "Disk space check failed" - Ensure 4x archive size available. For VMs with small system disks, use --workdir /path/to/larger/partition to extract on an NFS mount or larger disk
- "Archive validation failed" - Backup file corrupted or incomplete
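The 4x rule can be checked up front before starting a long restore. A sketch using GNU coreutils (the archive and directory names are illustrative):

```shell
# check_space ARCHIVE DIR: fail if DIR has less than 4x ARCHIVE's size free
check_space() {
  need=$(( $(stat -c %s "$1") * 4 ))
  avail=$(df --output=avail -B1 "$2" | tail -n 1)
  if [ "$avail" -lt "$need" ]; then
    echo "insufficient space: need $need bytes, have $avail" >&2
    return 1
  fi
}
# usage (hypothetical): check_space cluster_backup.tar.gz /var/lib/pgsql
```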
Building
Build for all platforms:
./build_all.sh
Binaries created in bin/ directory.
Requirements
System Requirements
- Linux, macOS, FreeBSD, OpenBSD, NetBSD
- 1 GB RAM minimum (2 GB recommended for large databases)
- Disk space: 30-50% of database size for backups
Software Requirements
PostgreSQL:
- Client tools: psql, pg_dump, pg_dumpall, pg_restore
- PostgreSQL 10 or later
MySQL/MariaDB:
- Client tools: mysql, mysqldump
- MySQL 5.7+ or MariaDB 10.3+
Optional:
- pigz (parallel compression)
- pv (progress monitoring)
Best Practices
- Test restores regularly - Verify backups work before disasters occur
- Monitor disk space - Maintain 4x archive size free space for restore operations
- Use appropriate compression - Balance speed and space (level 3-6 for production)
- Leverage configuration persistence - Use .dbbackup.conf for consistent per-project settings
- Automate backups - Schedule via cron or systemd timers
- Secure credentials - Use .pgpass/.my.cnf with 0600 permissions, never save passwords in config files
- Maintain multiple versions - Keep 7-30 days of backups for point-in-time recovery
- Store backups off-site - Remote copies protect against site-wide failures
- Validate archives - Run verification checks on backup files periodically
- Document procedures - Maintain runbooks for restore operations and disaster recovery
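For the automation point above, a cron fragment scheduling a nightly cluster backup plus a weekly retention cleanup might look like this (the install path, schedule, users, and backup directory are all illustrative):

```
# /etc/cron.d/dbbackup (hypothetical) — nightly cluster backup, weekly cleanup
0 2 * * * postgres /usr/local/bin/dbbackup backup cluster --backup-dir /mnt/backups
30 5 * * 0 root /usr/local/bin/dbbackup cleanup /mnt/backups --retention-days 30 --min-backups 5
```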
Project Structure
dbbackup/
├── main.go # Entry point
├── cmd/ # CLI commands
├── internal/
│ ├── backup/ # Backup engine
│ ├── restore/ # Restore engine
│ ├── config/ # Configuration
│ ├── database/ # Database drivers
│ ├── cpu/ # CPU detection
│ ├── logger/ # Logging
│ ├── progress/ # Progress tracking
│ └── tui/ # Interactive UI
├── bin/ # Pre-compiled binaries
└── build_all.sh # Multi-platform build
Support
- Repository: https://git.uuxo.net/uuxo/dbbackup
- Issues: Use repository issue tracker
License
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Copyright 2025 dbbackup Project
Recent Improvements
- Production-Ready: 100% test coverage, zero critical issues, fully validated
- Reliable: Thread-safe process management, comprehensive error handling, automatic cleanup
- Efficient: Constant memory footprint (~1GB) regardless of database size via streaming architecture
- Fast: Automatic CPU detection, parallel processing, streaming compression with pigz
- Intelligent: Context-aware error messages, disk space pre-flight checks, configuration persistence
- Safe: Dry-run by default, archive verification, confirmation prompts, backup validation
- Flexible: Multiple backup modes, compression levels, CPU workload profiles, per-directory configuration
- Complete: Full cluster operations, single database backups, sample data extraction
- Cross-Platform: Native binaries for Linux, macOS, Windows, FreeBSD, OpenBSD
- Scalable: Tested with databases from megabytes to 100+ gigabytes
- Observable: Structured logging, metrics collection, progress tracking with ETA
dbbackup is production-ready for backup and disaster recovery operations on PostgreSQL, MySQL, and MariaDB databases. Successfully tested with 42GB databases containing 35,000 large objects.
Contributing
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
Ways to contribute:
- 🐛 Report bugs and issues
- 💡 Suggest new features
- 📝 Improve documentation
- 🔧 Submit pull requests
- ⭐ Star the project!
Support
Issues & Bug Reports: https://git.uuxo.net/PlusOne/dbbackup/issues
Security Issues: See SECURITY.md for responsible disclosure
Documentation:
- README.md - Main documentation
- PITR.md - Point-in-Time Recovery guide
- DOCKER.md - Docker usage
- CONTRIBUTING.md - Contribution guidelines
