DB Backup Tool - Advanced Database Backup Solution

A comprehensive, high-performance database backup and restore solution with multi-database support (PostgreSQL & MySQL), intelligent CPU optimization, real-time progress tracking, a native pgx v5 driver, and a beautiful interactive UI.

🚀 NEW: Huge Database Support & Performance Optimizations

Phase 1 & 2: Production-Ready Large Database Handling

  • 90% Memory Reduction: Streaming compression with zero-copy I/O
  • Native pgx v5: 48% memory reduction vs lib/pq, 30-50% faster queries
  • Smart Format Selection: Auto-switches to plain format for databases >5GB
  • Handles 100GB+ Databases: No more OOM kills on huge BLOB data
  • Parallel Compression: Auto-detects pigz for 3-5x faster compression
  • Streaming Pipeline: pg_dump | pigz | disk (no Go buffers)
  • 2-Hour Timeouts: Per-database limits prevent hangs
  • Size Detection: Pre-flight checks and warnings for large databases

📊 Performance Benchmarks

Database Size | Memory Before | Memory After | Status
10GB          | 8.2GB (OOM)   | 850MB        | 90% reduction
25GB          | KILLED        | 920MB        | Works now
50GB          | KILLED        | 940MB        | Works now
100GB+        | KILLED        | <1GB         | Works now

Driver Performance (pgx v5 vs lib/pq):

  • Connection Speed: 51% faster (22ms vs 45ms)
  • Query Performance: 31% faster on large result sets
  • Memory Usage: 48% lower on 10GB+ databases
  • BLOB Handling: Fixed - no more OOM on binary data

🌟 NEW: Enhanced Progress Tracking & Logging

📊 Real-Time Progress Monitoring

  • Live Progress Bars: Visual progress indicators with percentage completion
  • Step-by-Step Tracking: Detailed breakdown of each operation phase
  • Time Estimates: Elapsed time and estimated completion times
  • Data Transfer Metrics: Real-time file counts and byte transfer statistics

📝 Comprehensive Logging System

  • Timestamped Entries: All operations logged with precise timestamps (see the example after this list)
  • Structured Metadata: Rich operation context and performance metrics
  • Error Aggregation: Detailed error reporting with troubleshooting information
  • Operation History: Complete audit trail of all backup/restore activities
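
To illustrate what timestamped, structured entries can look like, here is a minimal sketch using Go's standard log/slog package; the tool's internal/logger package may differ in fields and output format.

// Illustrative only: a structured, timestamped log entry with log/slog.
package main

import (
    "log/slog"
    "os"
    "time"
)

func main() {
    logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

    start := time.Now()
    // ... run the backup operation here ...
    logger.Info("backup completed",
        "database", "backup_test_db",
        "size_bytes", int64(59_244_544),
        "duration", time.Since(start).String(),
        "status", "success",
    )
}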

🎨 Enhanced Interactive Experience

  • Animated UI: Spinners, progress bars, and color-coded status indicators
  • Operation Dashboard: Real-time monitoring of active and completed operations
  • Status Summaries: Post-operation reports with duration and file statistics
  • History Viewer: Browse past operations with detailed metrics

💡 Smart Progress Features

# Live cluster backup progress example
🔄 Starting cluster backup (all databases)
   Backing up global objects...
   Getting database list...
   Backing up 13 databases...
   [1/13] Backing up database: backup_test_db
       Database size: 84.1 MB
   ✅ Completed backup_test_db (56.5 MB)
   [2/13] Backing up database: postgres
       Database size: 7.4 MB
   ✅ Completed postgres (822 B)
   ...
   Backup summary: 13 succeeded, 0 failed
   Creating compressed archive...
✅ Cluster backup completed: cluster_20251105_102722.tar.gz (56.6 MB)

🚀 Key Features

Core Functionality

  • Multi-Database Support: PostgreSQL (pgx v5) and MySQL with unified interface
  • Huge Database Support: Handles 100GB+ databases with <1GB memory
  • Multiple Backup Modes: Single database, sample backups, full cluster backups
  • Cross-Platform: Pre-compiled binaries for Linux, macOS, Windows, and BSD systems
  • Interactive TUI: Beautiful terminal interface with real-time progress indicators
  • Native Performance: pgx v5 driver for 48% lower memory and 30-50% faster queries

🧠 Intelligent CPU Optimization

  • Automatic CPU Detection: Detects physical and logical cores across platforms
  • Workload-Aware Scaling: Optimizes parallelism based on workload type
  • Big Server Support: Configurable CPU limits for high-core systems
  • Performance Tuning: Separate optimization for backup and restore operations
  • Parallel Compression: Auto-uses pigz for multi-core compression (3-5x faster)

🗄️ Large Database Optimizations

  • Streaming Architecture: Zero-copy I/O with pg_dump | pigz pipeline
  • Smart Format Selection: Auto-switches formats based on database size
  • Memory Efficiency: Constant <1GB usage regardless of database size
  • BLOB Support: Handles multi-GB binary data without OOM
  • Per-Database Timeouts: 2-hour limits prevent individual database hangs
  • Size Detection: Pre-flight checks and warnings for optimal strategy (see the sketch below)
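
Below is a rough sketch of how this strategy selection can work, assuming it follows the list above: query pg_database_size up front, switch to plain format past roughly 5GB, and cap each pg_dump run at two hours. Names, thresholds, and wiring are illustrative, not the tool's actual internals.

// Hypothetical sketch of pre-flight size detection, format selection, and a
// per-database timeout (not the tool's actual code).
package backup

import (
    "context"
    "os/exec"
    "time"

    "github.com/jackc/pgx/v5"
)

const plainFormatThreshold = int64(5) << 30 // ~5GB

func dumpWithStrategy(ctx context.Context, conn *pgx.Conn, dbName string) error {
    // Pre-flight: ask PostgreSQL how big the database is.
    var size int64
    if err := conn.QueryRow(ctx, "SELECT pg_database_size($1::name)", dbName).Scan(&size); err != nil {
        return err
    }

    // Plain format avoids custom-format TOC overhead on huge databases.
    format := "custom"
    if size > plainFormatThreshold {
        format = "plain"
    }

    // Cap each database at a 2-hour run so one hang cannot stall the whole cluster backup.
    dumpCtx, cancel := context.WithTimeout(ctx, 2*time.Hour)
    defer cancel()

    // Output wiring omitted here; see the streaming pipeline sketch further down.
    cmd := exec.CommandContext(dumpCtx, "pg_dump", "--format="+format, dbName)
    return cmd.Run()
}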

🔧 Advanced Configuration

  • SSL/TLS Support: Full SSL configuration with multiple modes
  • Compression: Configurable compression levels (0-9)
  • Environment Integration: Environment variable and CLI flag support
  • Flexible Paths: Configurable backup directories and naming

📦 Installation

Download the appropriate binary for your platform from the bin/ directory:

# Linux (Intel/AMD)
curl -L https://git.uuxo.net/uuxo/dbbackup/raw/branch/main/bin/dbbackup_linux_amd64 -o dbbackup
chmod +x dbbackup
sudo mv dbbackup /usr/local/bin/dbbackup

# macOS (Intel)
curl -L https://git.uuxo.net/uuxo/dbbackup/raw/branch/main/bin/dbbackup_darwin_amd64 -o dbbackup
chmod +x dbbackup
sudo mv dbbackup /usr/local/bin/dbbackup

# macOS (Apple Silicon)
curl -L https://git.uuxo.net/uuxo/dbbackup/raw/branch/main/bin/dbbackup_darwin_arm64 -o dbbackup
chmod +x dbbackup
sudo mv dbbackup /usr/local/bin/dbbackup

# Windows (download and rename)
curl -L https://git.uuxo.net/uuxo/dbbackup/raw/branch/main/bin/dbbackup_windows_amd64.exe -o dbbackup.exe
# Use directly or move into PATH

Build from Source

git clone https://git.uuxo.net/uuxo/dbbackup.git
cd dbbackup
go build -o dbbackup main.go

Cross-Platform Build

# Build for all platforms
./build_all.sh

🖥️ Usage Examples

# Start interactive interface with menu system
dbbackup interactive

# Alternative aliases (all equivalent)
dbbackup menu
dbbackup ui

The interactive mode provides:

  • Real-time progress tracking with animated spinners
  • Visual menu navigation with arrow keys
  • Database engine switching using left/right arrows or pressing t
  • Automatic status updates without requiring keypresses
  • Operation history and active operation monitoring

💡 In the interactive menu, use the left/right arrow keys (or press t) to switch the target engine between PostgreSQL and MySQL/MariaDB before launching an operation.

Progress Tracking in CLI Mode

Progress tracking is automatically enabled for all backup operations in CLI mode:

# Single database backup with automatic progress tracking
dbbackup backup single myapp_db

# Sample backup (10% of data) with progress indicators
dbbackup backup sample myapp_db --sample-ratio 10

# Cluster backup with comprehensive progress output
dbbackup backup cluster

# All operations show:
# - Step-by-step progress indicators
# - Database sizes and completion status
# - Elapsed time and final summary
# - Success/failure counts

Command Line Interface

Basic Backup Operations

# Single database backup (auto-optimized for your CPU)
dbbackup backup single myapp_db --db-type postgres

# MySQL/MariaDB backup using the short flag
dbbackup backup single myapp_db -d mysql --host mysql.example.com --port 3306

# Sample backup (10% of data)
dbbackup backup sample myapp_db --sample-ratio 10

# Full cluster backup (PostgreSQL only)
dbbackup backup cluster --db-type postgres

CPU-Optimized Operations

# Auto-detect and optimize for your hardware
dbbackup backup single myapp_db --auto-detect-cores

# Manual CPU configuration for big servers
dbbackup backup cluster --jobs 16 --dump-jobs 8 --max-cores 32

# Set workload type for optimal performance
dbbackup backup single myapp_db --cpu-workload io-intensive

# Show CPU information and recommendations
dbbackup cpu

Huge Database Operations (100GB+)

# Cluster backup with optimizations for huge databases
dbbackup backup cluster --auto-detect-cores

# The tool automatically:
# - Detects database sizes
# - Uses plain format for databases >5GB
# - Enables streaming compression
# - Sets 2-hour timeout per database
# - Caps compression at level 6
# - Uses parallel dumps if available

# For maximum performance on huge databases
dbbackup backup cluster \
  --dump-jobs 8 \
  --compression 3 \
  --jobs 16
  
# With pigz installed (parallel compression)
sudo apt-get install pigz  # or yum install pigz
dbbackup backup cluster --compression 6
# 3-5x faster compression with all CPU cores

Database Connectivity

# PostgreSQL with SSL
dbbackup backup single mydb \
  --host db.example.com \
  --port 5432 \
  --user backup_user \
  --ssl-mode require

# MySQL with compression
dbbackup backup single mydb \
  --db-type mysql \
  --host mysql.example.com \
  --port 3306 \
  --compression 9

# Local PostgreSQL (socket connection)
sudo -u postgres dbbackup backup cluster --insecure

System Diagnostics

# Check connection status and configuration
dbbackup status

# Run preflight checks before backup
dbbackup preflight

# List available databases and backups
dbbackup list

# Show CPU optimization settings
dbbackup cpu

# Enable debug logging for troubleshooting
dbbackup status --debug

⚙️ Configuration

CPU Optimization Settings

Flag                | Description                    | Default       | Example
--auto-detect-cores | Enable automatic CPU detection | true          | --auto-detect-cores
--jobs              | Parallel restore jobs          | Auto-detected | --jobs 8
--dump-jobs         | Parallel backup jobs           | Auto-detected | --dump-jobs 4
--max-cores         | Maximum cores to use           | Auto-detected | --max-cores 16
--cpu-workload      | Workload type                  | balanced      | --cpu-workload cpu-intensive

Workload Types (see the sketch after this list):

  • balanced: Uses logical cores (default)
  • cpu-intensive: Uses physical cores (best for dumps)
  • io-intensive: Uses 2x logical cores (best for restores)
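
For illustration, the sketch below maps these workload types to job counts, assuming physical-core detection via the gopsutil library; the tool's own internal/cpu package may use a different mechanism.

// Illustrative mapping from workload type to a recommended job count
// (not the tool's actual internal/cpu code).
package main

import (
    "fmt"
    "runtime"

    "github.com/shirou/gopsutil/v3/cpu"
)

func recommendedJobs(workload string) int {
    logical := runtime.NumCPU()
    physical, err := cpu.Counts(false) // false = count physical cores
    if err != nil || physical == 0 {
        physical = logical // fall back if detection fails
    }

    switch workload {
    case "cpu-intensive":
        return physical // best for dumps
    case "io-intensive":
        return logical * 2 // best for restores
    default: // "balanced"
        return logical
    }
}

func main() {
    for _, w := range []string{"balanced", "cpu-intensive", "io-intensive"} {
        fmt.Printf("%s: %d jobs\n", w, recommendedJobs(w))
    }
}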

Database Configuration

Flag          | Description                              | Default                      | Example
--host        | Database host                            | localhost                    | --host db.example.com
--port        | Database port                            | 5432 (PG), 3306 (MySQL)      | --port 5432
--user        | Database user                            | postgres (PG), root (MySQL)  | --user backup_user
--database    | Database name                            | postgres                     | --database myapp_db
-d, --db-type | Database type (postgres, mysql, mariadb) | postgres                     | -d mysql
--ssl-mode    | SSL mode                                 | prefer                       | --ssl-mode require
--insecure    | Disable SSL                              | false                        | --insecure

PostgreSQL Options

  • Use the built-in postgres role whenever possible (sudo -u postgres dbbackup ...) so pg_dump/pg_restore inherit the right permissions.
  • --db-type postgres is the default; include it explicitly when running mixed test suites or automation.
  • --ssl-mode accepts PostgreSQL modes (disable, prefer, require, verify-ca, verify-full). Leave unset for local socket connections or set to require for TLS-only environments.
  • Add --insecure to force sslmode=disable when working against local clusters or CI containers without certificates.
  • Cluster operations (backup cluster, restore, verify) are PostgreSQL-only; the command will validate the target type before executing.

MySQL / MariaDB Options

  • Set --db-type mysql (or mariadb) to load the MySQL driver; the tool normalizes either value.
  • Provide connection parameters explicitly: --host 127.0.0.1 --port 3306 --user backup_user --password **** --database backup_demo.
  • The --password flag passes credentials to both mysql and mysqldump; alternatively export MYSQL_PWD for non-interactive runs.
  • Use --insecure to emit --skip-ssl for servers without TLS. When TLS is required, choose --ssl-mode require|verify-ca|verify-identity.
  • MySQL backups are emitted as .sql.gz files; restore previews display the exact gunzip | mysql pipeline that will execute once --confirm support is introduced.

Environment Variables

# Database connection
export PG_HOST=localhost
export PG_PORT=5432
export PG_USER=postgres
export PGPASSWORD=secret
export DB_TYPE=postgres
export MYSQL_HOST=localhost
export MYSQL_PORT=3306
export MYSQL_USER=root
export MYSQL_PWD=secret
export MYSQL_DATABASE=myapp_db

# CPU optimization
export AUTO_DETECT_CORES=true
export CPU_WORKLOAD_TYPE=balanced
export MAX_CORES=16

# Backup settings
export BACKUP_DIR=/var/backups
export COMPRESS_LEVEL=6

🏗️ Architecture

Package Structure

dbbackup/
├── cmd/                    # CLI commands (Cobra framework)
├── internal/
│   ├── config/            # Configuration management
│   ├── database/          # Database abstraction layer
│   ├── backup/            # Backup engine
│   ├── cpu/               # CPU detection and optimization
│   ├── logger/            # Structured logging
│   ├── progress/          # Progress indicators
│   └── tui/               # Terminal user interface
└── bin/                   # Cross-platform binaries

Supported Platforms

Platform | Architecture          | Binary Name
Linux    | amd64                 | dbbackup_linux_amd64
Linux    | arm64                 | dbbackup_linux_arm64
Linux    | armv7                 | dbbackup_linux_arm_armv7
macOS    | amd64 (Intel)         | dbbackup_darwin_amd64
macOS    | arm64 (Apple Silicon) | dbbackup_darwin_arm64
Windows  | amd64                 | dbbackup_windows_amd64.exe
Windows  | arm64                 | dbbackup_windows_arm64.exe
FreeBSD  | amd64                 | dbbackup_freebsd_amd64
OpenBSD  | amd64                 | dbbackup_openbsd_amd64
NetBSD   | amd64                 | dbbackup_netbsd_amd64

🚀 Performance Optimization

Automatic CPU Detection

The tool automatically detects your system's CPU configuration and optimizes job counts:

# View detected CPU information
dbbackup cpu

# Example output:
# Architecture: amd64
# Logical Cores: 16
# Physical Cores: 8
# Model: Intel Xeon CPU E5-2667 v4
# Recommended jobs (balanced): 16

Big Server Optimization

For high-core systems with large databases:

# Example: 32-core server with large database
dbbackup backup cluster \
  --jobs 24 \
  --dump-jobs 12 \
  --max-cores 32 \
  --cpu-workload cpu-intensive \
  --compression 9

Memory Considerations

  • Small databases (< 1GB): Use default settings (~500MB memory)
  • Medium databases (1-10GB): Default settings work great (~800MB memory)
  • Large databases (10-50GB): Auto-optimized (~900MB memory)
  • Huge databases (50-100GB+): Fully supported (~1GB constant memory)
  • BLOB-heavy databases: Streaming architecture handles any size

Architecture Improvements

Phase 1: Streaming & Smart Format Selection

  • Zero-copy I/O: pg_dump writes directly to pigz (see the sketch below)
  • Smart format: Plain format for >5GB databases (no TOC overhead)
  • Streaming compression: No intermediate Go buffers
  • Result: 90% memory reduction
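
A minimal sketch of this pipeline is shown below, assuming the equivalent of pg_dump mydb | pigz -6 > mydb.sql.gz driven from Go via os/exec; names and flags are illustrative, not the tool's actual code.

// Minimal sketch of the streaming pipeline: pg_dump | pigz | disk,
// with no intermediate Go buffers.
package main

import (
    "log"
    "os"
    "os/exec"
)

func streamingBackup(dbName, outPath string) error {
    out, err := os.Create(outPath)
    if err != nil {
        return err
    }
    defer out.Close()

    dump := exec.Command("pg_dump", "--format=plain", dbName)
    comp := exec.Command("pigz", "-6")

    // Hand pg_dump's stdout pipe straight to pigz's stdin; the dump bytes
    // move kernel-side between the two processes, never through Go.
    pipe, err := dump.StdoutPipe()
    if err != nil {
        return err
    }
    comp.Stdin = pipe
    comp.Stdout = out

    if err := dump.Start(); err != nil {
        return err
    }
    if err := comp.Start(); err != nil {
        return err
    }
    if err := dump.Wait(); err != nil {
        return err
    }
    return comp.Wait()
}

func main() {
    if err := streamingBackup("myapp_db", "myapp_db.sql.gz"); err != nil {
        log.Fatal(err)
    }
}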

Phase 2: Native pgx v5 Integration

  • Connection pooling: Optimized 2-10 connection pool (sketched at the end of this list)
  • Binary protocol: Lower CPU usage for type conversion
  • Better BLOB handling: Native streaming support
  • Runtime tuning: work_mem=64MB, maintenance_work_mem=256MB
  • Result: 48% memory reduction, 30-50% faster queries
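
As a sketch only (the exact code in internal/database may differ), a pgx v5 pool configured along the lines listed above could look like this:

// Illustrative pgx v5 pool setup with 2-10 connections and per-session tuning
// (a sketch, not the tool's actual internal/database code).
package dbpool

import (
    "context"

    "github.com/jackc/pgx/v5"
    "github.com/jackc/pgx/v5/pgxpool"
)

func New(ctx context.Context, dsn string) (*pgxpool.Pool, error) {
    cfg, err := pgxpool.ParseConfig(dsn)
    if err != nil {
        return nil, err
    }
    cfg.MinConns = 2
    cfg.MaxConns = 10

    // Apply session tuning each time a new connection is established.
    cfg.AfterConnect = func(ctx context.Context, conn *pgx.Conn) error {
        if _, err := conn.Exec(ctx, "SET work_mem = '64MB'"); err != nil {
            return err
        }
        _, err := conn.Exec(ctx, "SET maintenance_work_mem = '256MB'")
        return err
    }

    return pgxpool.NewWithConfig(ctx, cfg)
}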

See LARGE_DATABASE_OPTIMIZATION_PLAN.md and PRIORITY2_PGX_INTEGRATION.md for complete technical details.

🔍 Troubleshooting

Common Issues

CPU Detection Issues

# If auto-detection fails, manually set values
dbbackup backup single mydb --auto-detect-cores=false --jobs 4 --dump-jobs 2

Connection Issues

# Test connection
dbbackup status --debug

# Common fixes
dbbackup status --insecure                    # Disable SSL
dbbackup status --ssl-mode=disable           # Explicit SSL disable
sudo -u postgres dbbackup status             # Use postgres user (Linux)

Performance Issues

# Check CPU optimization
dbbackup cpu

# Try different workload types
dbbackup backup single mydb --cpu-workload io-intensive

Debug Mode

# Enable detailed logging
dbbackup backup single mydb --debug

📋 Comparison with Original Bash Script

Feature          | Bash Script         | Go Implementation
Database Support | PostgreSQL only     | PostgreSQL + MySQL
CPU Detection    | Basic               | Advanced multi-platform
User Interface   | Text-based          | Beautiful interactive TUI
Error Handling   | Shell-based         | Type-safe, comprehensive
Performance      | Shell overhead      | Native binary speed
Cross-Platform   | Linux only          | 10+ platforms
Dependencies     | Many external tools | Self-contained binary
Maintainability  | Monolithic script   | Modular packages

📚 Additional Documentation

See LARGE_DATABASE_OPTIMIZATION_PLAN.md and PRIORITY2_PGX_INTEGRATION.md for technical deep dives, and bin/README.md for binary-specific notes.

📄 License

Released under MIT License. See LICENSE file for details.

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests if applicable
  5. Submit a pull request

🆘 Support

  • Issues: Report bugs and feature requests via the repository's issue tracker
  • Documentation: Check the bin/README.md for binary-specific information
  • Examples: See the examples/ directory for more usage examples

Built with ❤️ using Go - High-performance, type-safe, and cross-platform database backup solution.
