Compare commits
20 Commits
| Author | SHA1 | Date | |
|---|---|---|---|
| eac79baad6 | |||
| c655076ecd | |||
| 7478c9b365 | |||
| deaf704fae | |||
| 4a7acf5f1c | |||
| 5a605b53bd | |||
| e8062b97d9 | |||
| e2af53ed2a | |||
| 02dc046270 | |||
| 4ab80460c3 | |||
| 14e893f433 | |||
| de0582f1a4 | |||
| 6f5a7593c7 | |||
| b28e67ee98 | |||
| 8faf8ae217 | |||
| fec2652cd0 | |||
| b7498745f9 | |||
| 79f2efaaac | |||
| 19f44749b1 | |||
| c7904c7857 |
3
.gitignore
vendored
3
.gitignore
vendored
@ -37,3 +37,6 @@ CRITICAL_BUGS_FIXED.md
|
||||
LEGAL_DOCUMENTATION.md
|
||||
LEGAL_*.md
|
||||
legal/
|
||||
|
||||
# Release binaries (uploaded via gh release, not git)
|
||||
release/dbbackup_*
|
||||
|
||||
175
CHANGELOG.md
175
CHANGELOG.md
@ -5,6 +5,175 @@ All notable changes to dbbackup will be documented in this file.
|
||||
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
|
||||
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
|
||||
|
||||
## [4.2.2] - 2026-01-30
|
||||
|
||||
### Fixed - Complete pgzip Migration (Backup Side)
|
||||
|
||||
- **Removed ALL external gzip/pigz calls from backup engine**
|
||||
- `internal/backup/engine.go`: `executeWithStreamingCompression` now uses pgzip
|
||||
- `internal/parallel/engine.go`: Fixed stub gzipWriter to use pgzip
|
||||
- No more gzip/pigz processes visible in htop during backup
|
||||
- Uses klauspost/pgzip for parallel multi-core compression
|
||||
|
||||
- **Complete pgzip migration status**:
|
||||
- ✅ Backup: All compression uses in-process pgzip
|
||||
- ✅ Restore: All decompression uses in-process pgzip
|
||||
- ✅ Drill: Decompress on host with pgzip before Docker copy
|
||||
- ⚠️ PITR only: PostgreSQL's `restore_command` must remain shell (PostgreSQL limitation)
|
||||
|
||||
## [4.2.1] - 2026-01-30
|
||||
|
||||
### Fixed - Complete pgzip Migration
|
||||
|
||||
- **Removed ALL external gunzip/gzip calls** - Systematic audit and fix
|
||||
- `internal/restore/engine.go`: SQL restores now use pgzip stream → psql/mysql stdin
|
||||
- `internal/drill/engine.go`: Decompress on host with pgzip before Docker copy
|
||||
- No more gzip/gunzip/pigz processes visible in htop during restore
|
||||
- Uses klauspost/pgzip for parallel multi-core decompression
|
||||
|
||||
- **PostgreSQL PITR exception** - `restore_command` in recovery config must remain shell
|
||||
- PostgreSQL itself runs this command to fetch WAL files
|
||||
- Cannot be replaced with Go code (PostgreSQL limitation)
|
||||
|
||||
## [4.2.0] - 2026-01-30
|
||||
|
||||
### Added - Quick Wins Release
|
||||
|
||||
- **`dbbackup health` command** - Comprehensive backup infrastructure health check
|
||||
- 10 automated health checks: config, DB connectivity, backup dir, catalog, freshness, gaps, verification, file integrity, orphans, disk space
|
||||
- Exit codes for automation: 0=healthy, 1=warning, 2=critical
|
||||
- JSON output for monitoring integration (Prometheus, Nagios, etc.)
|
||||
- Auto-generates actionable recommendations
|
||||
- Custom backup interval for gap detection: `--interval 12h`
|
||||
- Skip database check for offline mode: `--skip-db`
|
||||
- Example: `dbbackup health --format json`
|
||||
|
||||
- **TUI System Health Check** - Interactive health monitoring
|
||||
- Accessible via Tools → System Health Check
|
||||
- Runs all 10 checks asynchronously with progress spinner
|
||||
- Color-coded results: green=healthy, yellow=warning, red=critical
|
||||
- Displays recommendations for any issues found
|
||||
|
||||
- **`dbbackup restore preview` command** - Pre-restore analysis and validation
|
||||
- Shows backup format, compression type, database type
|
||||
- Estimates uncompressed size (3x compression ratio)
|
||||
- Calculates RTO (Recovery Time Objective) based on active profile
|
||||
- Validates backup integrity without actual restore
|
||||
- Displays resource requirements (RAM, CPU, disk space)
|
||||
- Example: `dbbackup restore preview backup.dump.gz`
|
||||
|
||||
- **`dbbackup diff` command** - Compare two backups and track changes
|
||||
- Flexible input: file paths, catalog IDs, or `database:latest/previous`
|
||||
- Shows size delta with percentage change
|
||||
- Calculates database growth rate (GB/day)
|
||||
- Projects time to reach 10GB threshold
|
||||
- Compares backup duration and compression efficiency
|
||||
- JSON output for automation and reporting
|
||||
- Example: `dbbackup diff mydb:latest mydb:previous`
|
||||
|
||||
- **`dbbackup cost analyze` command** - Cloud storage cost optimization
|
||||
- Analyzes 15 storage tiers across 5 cloud providers
|
||||
- AWS S3: Standard, IA, Glacier Instant/Flexible, Deep Archive
|
||||
- Google Cloud Storage: Standard, Nearline, Coldline, Archive
|
||||
- Azure Blob Storage: Hot, Cool, Archive
|
||||
- Backblaze B2 and Wasabi alternatives
|
||||
- Monthly/annual cost projections
|
||||
- Savings calculations vs S3 Standard baseline
|
||||
- Tiered lifecycle strategy recommendations
|
||||
- Shows potential savings of 90%+ with proper policies
|
||||
- Example: `dbbackup cost analyze --database mydb`
|
||||
|
||||
### Enhanced
|
||||
- **TUI restore preview** - Added RTO estimates and size calculations
|
||||
- Shows estimated uncompressed size during restore confirmation
|
||||
- Displays estimated restore time based on current profile
|
||||
- Helps users make informed restore decisions
|
||||
- Keeps TUI simple (essentials only), detailed analysis in CLI
|
||||
|
||||
### Documentation
|
||||
- Updated README.md with new commands and examples
|
||||
- Created QUICK_WINS.md documenting the rapid development sprint
|
||||
- Added backup diff and cost analysis sections
|
||||
|
||||
## [4.1.4] - 2026-01-29
|
||||
|
||||
### Added
|
||||
- **New `turbo` restore profile** - Maximum restore speed, matches native `pg_restore -j8`
|
||||
- `ClusterParallelism = 2` (restore 2 DBs concurrently)
|
||||
- `Jobs = 8` (8 parallel pg_restore jobs)
|
||||
- `BufferedIO = true` (32KB write buffers for faster extraction)
|
||||
- Works on 16GB+ RAM, 4+ cores
|
||||
- Usage: `dbbackup restore cluster backup.tar.gz --profile=turbo --confirm`
|
||||
|
||||
- **Restore startup performance logging** - Shows actual parallelism settings at restore start
|
||||
- Logs profile name, cluster_parallelism, pg_restore_jobs, buffered_io
|
||||
- Helps verify settings before long restore operations
|
||||
|
||||
- **Buffered I/O optimization** - 32KB write buffers during tar extraction (turbo profile)
|
||||
- Reduces system call overhead
|
||||
- Improves I/O throughput for large archives
|
||||
|
||||
### Fixed
|
||||
- **TUI now respects saved profile settings** - Previously TUI forced `conservative` profile on every launch, ignoring user's saved configuration. Now properly loads and respects saved settings.
|
||||
|
||||
### Changed
|
||||
- TUI default profile changed from forced `conservative` to `balanced` (only when no profile configured)
|
||||
- `LargeDBMode` no longer forced on TUI startup - user controls it via settings
|
||||
|
||||
## [4.1.3] - 2026-01-27
|
||||
|
||||
### Added
|
||||
- **`--config` / `-c` global flag** - Specify config file path from anywhere
|
||||
- Example: `dbbackup --config /opt/dbbackup/.dbbackup.conf backup single mydb`
|
||||
- No longer need to `cd` to config directory before running commands
|
||||
- Works with all subcommands (backup, restore, verify, etc.)
|
||||
|
||||
## [4.1.2] - 2026-01-27
|
||||
|
||||
### Added
|
||||
- **`--socket` flag for MySQL/MariaDB** - Connect via Unix socket instead of TCP/IP
|
||||
- Usage: `dbbackup backup single mydb --db-type mysql --socket /var/run/mysqld/mysqld.sock`
|
||||
- Works for both backup and restore operations
|
||||
- Supports socket auth (no password required with proper permissions)
|
||||
|
||||
### Fixed
|
||||
- **Socket path as --host now works** - If `--host` starts with `/`, it's auto-detected as a socket path
|
||||
- Example: `--host /var/run/mysqld/mysqld.sock` now works correctly instead of DNS lookup error
|
||||
- Auto-converts to `--socket` internally
|
||||
|
||||
## [4.1.1] - 2026-01-25
|
||||
|
||||
### Added
|
||||
- **`dbbackup_build_info` metric** - Exposes version and git commit as Prometheus labels
|
||||
- Useful for tracking deployed versions across a fleet
|
||||
- Labels: `server`, `version`, `commit`
|
||||
|
||||
### Fixed
|
||||
- **Documentation clarification**: The `pitr_base` value for `backup_type` label is auto-assigned
|
||||
by `dbbackup pitr base` command. CLI `--backup-type` flag only accepts `full` or `incremental`.
|
||||
This was causing confusion in deployments.
|
||||
|
||||
## [4.1.0] - 2026-01-25
|
||||
|
||||
### Added
|
||||
- **Backup Type Tracking**: All backup metrics now include a `backup_type` label
|
||||
(`full`, `incremental`, or `pitr_base` for PITR base backups)
|
||||
- **PITR Metrics**: Complete Point-in-Time Recovery monitoring
|
||||
- `dbbackup_pitr_enabled` - Whether PITR is enabled (1/0)
|
||||
- `dbbackup_pitr_archive_lag_seconds` - Seconds since last WAL/binlog archived
|
||||
- `dbbackup_pitr_chain_valid` - WAL/binlog chain integrity (1=valid)
|
||||
- `dbbackup_pitr_gap_count` - Number of gaps in archive chain
|
||||
- `dbbackup_pitr_archive_count` - Total archived segments
|
||||
- `dbbackup_pitr_archive_size_bytes` - Total archive storage
|
||||
- `dbbackup_pitr_recovery_window_minutes` - Estimated PITR coverage
|
||||
- **PITR Alerting Rules**: 6 new alerts for PITR monitoring
|
||||
- PITRArchiveLag, PITRChainBroken, PITRGapsDetected, PITRArchiveStalled,
|
||||
PITRStorageGrowing, PITRDisabledUnexpectedly
|
||||
- **`dbbackup_backup_by_type` metric** - Count backups by type
|
||||
|
||||
### Changed
|
||||
- `dbbackup_backup_total` type changed from counter to gauge for snapshot-based collection
|
||||
|
||||
## [3.42.110] - 2026-01-24
|
||||
|
||||
### Improved - Code Quality & Testing
|
||||
@ -269,7 +438,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
|
||||
- Good default for most scenarios
|
||||
- **Aggressive** (`--profile=aggressive`): Maximum parallelism, all available resources
|
||||
- Best for dedicated database servers with ample resources
|
||||
- **Potato** (`--profile=potato`): Easter egg 🥔, same as conservative
|
||||
- **Potato** (`--profile=potato`): Easter egg, same as conservative
|
||||
- **Profile system applies to both CLI and TUI**:
|
||||
- CLI: `dbbackup restore cluster backup.tar.gz --profile=conservative --confirm`
|
||||
- TUI: Automatically uses conservative profile for safer interactive operation
|
||||
@ -776,7 +945,7 @@ dbbackup metrics serve --port 9399
|
||||
|
||||
## [3.41.0] - 2026-01-07 "The Pre-Flight Check"
|
||||
|
||||
### Added - 🛡️ Pre-Restore Validation
|
||||
### Added - Pre-Restore Validation
|
||||
|
||||
**Automatic Dump Validation Before Restore:**
|
||||
- SQL dump files are now validated BEFORE attempting restore
|
||||
@ -863,7 +1032,7 @@ dbbackup metrics serve --port 9399
|
||||
|
||||
## [3.2.0] - 2025-12-13 "The Margin Eraser"
|
||||
|
||||
### Added - 🚀 Physical Backup Revolution
|
||||
### Added - Physical Backup Revolution
|
||||
|
||||
**MySQL Clone Plugin Integration:**
|
||||
- Native physical backup using MySQL 8.0.17+ Clone Plugin
|
||||
|
||||
32
QUICK.md
32
QUICK.md
@ -14,6 +14,9 @@ dbbackup backup single myapp
|
||||
# MySQL
|
||||
dbbackup backup single gitea --db-type mysql --host 127.0.0.1 --port 3306
|
||||
|
||||
# MySQL/MariaDB with Unix socket
|
||||
dbbackup backup single myapp --db-type mysql --socket /var/run/mysqld/mysqld.sock
|
||||
|
||||
# With compression level (0-9, default 6)
|
||||
dbbackup backup cluster --compression 9
|
||||
|
||||
@ -75,6 +78,35 @@ dbbackup blob stats --database myapp --host dbserver --user admin
|
||||
dbbackup blob stats --database shopdb --db-type mysql
|
||||
```
|
||||
|
||||
## Blob Statistics
|
||||
|
||||
```bash
|
||||
# Analyze blob/binary columns in a database (plan extraction strategies)
|
||||
dbbackup blob stats --database myapp
|
||||
|
||||
# Output shows tables with blob columns, row counts, and estimated sizes
|
||||
# Helps identify large binary data for separate extraction
|
||||
|
||||
# With explicit connection
|
||||
dbbackup blob stats --database myapp --host dbserver --user admin
|
||||
|
||||
# MySQL blob analysis
|
||||
dbbackup blob stats --database shopdb --db-type mysql
|
||||
```
|
||||
|
||||
## Engine Management
|
||||
|
||||
```bash
|
||||
# List available backup engines for MySQL/MariaDB
|
||||
dbbackup engine list
|
||||
|
||||
# Get detailed info on a specific engine
|
||||
dbbackup engine info clone
|
||||
|
||||
# Get current environment info
|
||||
dbbackup engine info
|
||||
```
|
||||
|
||||
## Cloud Storage
|
||||
|
||||
```bash
|
||||
|
||||
133
QUICK_WINS.md
Normal file
133
QUICK_WINS.md
Normal file
@ -0,0 +1,133 @@
|
||||
# Quick Wins Shipped - January 30, 2026
|
||||
|
||||
## Summary
|
||||
|
||||
Shipped 3 high-value features in rapid succession, transforming dbbackup's analysis capabilities.
|
||||
|
||||
## Quick Win #1: Restore Preview ✅
|
||||
|
||||
**Shipped:** Commit 6f5a759 + de0582f
|
||||
**Command:** `dbbackup restore preview <backup-file>`
|
||||
|
||||
Shows comprehensive pre-restore analysis:
|
||||
- Backup format detection
|
||||
- Compressed/uncompressed size estimates
|
||||
- RTO calculation (extraction + restore time)
|
||||
- Profile-aware speed estimates
|
||||
- Resource requirements
|
||||
- Integrity validation
|
||||
|
||||
**TUI Integration:** Added RTO estimates to TUI restore preview workflow.
|
||||
|
||||
## Quick Win #2: Backup Diff ✅
|
||||
|
||||
**Shipped:** Commit 14e893f
|
||||
**Command:** `dbbackup diff <backup1> <backup2>`
|
||||
|
||||
Compare two backups intelligently:
|
||||
- Flexible input (paths, catalog IDs, `database:latest/previous`)
|
||||
- Size delta with percentage change
|
||||
- Duration comparison
|
||||
- Growth rate calculation (GB/day)
|
||||
- Growth projections (time to 10GB)
|
||||
- Compression efficiency analysis
|
||||
- JSON output for automation
|
||||
|
||||
Perfect for capacity planning and identifying sudden changes.
|
||||
|
||||
## Quick Win #3: Cost Analyzer ✅
|
||||
|
||||
**Shipped:** Commit 4ab8046
|
||||
**Command:** `dbbackup cost analyze`
|
||||
|
||||
Multi-provider cloud cost comparison:
|
||||
- 15 storage tiers analyzed across 5 providers
|
||||
- AWS S3 (6 tiers), GCS (4 tiers), Azure (3 tiers)
|
||||
- Backblaze B2 and Wasabi included
|
||||
- Monthly/annual cost projections
|
||||
- Savings vs S3 Standard baseline
|
||||
- Tiered lifecycle strategy recommendations
|
||||
- Regional pricing support
|
||||
|
||||
Shows potential savings of 90%+ with proper lifecycle policies.
|
||||
|
||||
## Impact
|
||||
|
||||
**Time to Ship:** ~3 hours total
|
||||
- Restore Preview: 1.5 hours (CLI + TUI)
|
||||
- Backup Diff: 1 hour
|
||||
- Cost Analyzer: 0.5 hours
|
||||
|
||||
**Lines of Code:**
|
||||
- Restore Preview: 328 lines (cmd/restore_preview.go)
|
||||
- Backup Diff: 419 lines (cmd/backup_diff.go)
|
||||
- Cost Analyzer: 423 lines (cmd/cost.go)
|
||||
- **Total:** 1,170 lines
|
||||
|
||||
**Value Delivered:**
|
||||
- Pre-restore confidence (avoid 2-hour mistakes)
|
||||
- Growth tracking (capacity planning)
|
||||
- Cost optimization (budget savings)
|
||||
|
||||
## Examples
|
||||
|
||||
### Restore Preview
|
||||
```bash
|
||||
dbbackup restore preview mydb_20260130.dump.gz
|
||||
# Shows: Format, size, RTO estimate, resource needs
|
||||
|
||||
# TUI integration: Shows RTO during restore confirmation
|
||||
```
|
||||
|
||||
### Backup Diff
|
||||
```bash
|
||||
# Compare two files
|
||||
dbbackup diff backup_jan15.dump.gz backup_jan30.dump.gz
|
||||
|
||||
# Compare latest two backups
|
||||
dbbackup diff mydb:latest mydb:previous
|
||||
|
||||
# Shows: Growth rate, projections, efficiency
|
||||
```
|
||||
|
||||
### Cost Analyzer
|
||||
```bash
|
||||
# Analyze all backups
|
||||
dbbackup cost analyze
|
||||
|
||||
# Specific database
|
||||
dbbackup cost analyze --database mydb --provider aws
|
||||
|
||||
# Shows: 15 tier comparison, savings, recommendations
|
||||
```
|
||||
|
||||
## Architecture Notes
|
||||
|
||||
All three features leverage existing infrastructure:
|
||||
- **Restore Preview:** Uses internal/restore diagnostics + internal/config
|
||||
- **Backup Diff:** Uses internal/catalog + internal/metadata
|
||||
- **Cost Analyzer:** Pure arithmetic, no external APIs
|
||||
|
||||
No new dependencies, no breaking changes, backward compatible.
|
||||
|
||||
## Next Steps
|
||||
|
||||
Remaining feature ideas from "legendary list":
|
||||
- Webhook integration (partial - notifications exist)
|
||||
- Compliance autopilot enhancements
|
||||
- Advanced retention policies
|
||||
- Cross-region replication
|
||||
- Backup verification automation
|
||||
|
||||
**Philosophy:** Ship fast, iterate based on feedback. These 3 quick wins provide immediate value while requiring minimal maintenance.
|
||||
|
||||
---
|
||||
|
||||
**Total Commits Today:**
|
||||
- b28e67e: docs: Remove ASCII logo
|
||||
- 6f5a759: feat: Add restore preview command
|
||||
- de0582f: feat: Add RTO estimates to TUI restore preview
|
||||
- 14e893f: feat: Add backup diff command (Quick Win #2)
|
||||
- 4ab8046: feat: Add cloud storage cost analyzer (Quick Win #3)
|
||||
|
||||
Both remotes synced: git.uuxo.net + GitHub
|
||||
87
README.md
87
README.md
@ -1,21 +1,10 @@
|
||||
```
|
||||
_ _ _ _
|
||||
| | | | | | |
|
||||
__| | |__ | |__ __ _ ___| | ___ _ _ __
|
||||
/ _` | '_ \| '_ \ / _` |/ __| |/ / | | | '_ \
|
||||
| (_| | |_) | |_) | (_| | (__| <| |_| | |_) |
|
||||
\__,_|_.__/|_.__/ \__,_|\___|_|\_\\__,_| .__/
|
||||
| |
|
||||
|_|
|
||||
```
|
||||
|
||||
# dbbackup
|
||||
|
||||
Database backup and restore utility for PostgreSQL, MySQL, and MariaDB.
|
||||
|
||||
[](https://opensource.org/licenses/Apache-2.0)
|
||||
[](https://golang.org/)
|
||||
[](https://github.com/PlusOne/dbbackup/releases/latest)
|
||||
[](https://github.com/PlusOne/dbbackup/releases/latest)
|
||||
|
||||
**Repository:** https://git.uuxo.net/UUXO/dbbackup
|
||||
**Mirror:** https://github.com/PlusOne/dbbackup
|
||||
@ -671,8 +660,82 @@ dbbackup catalog search --database mydb --after 2024-01-01 --before 2024-12-31
|
||||
|
||||
# Get backup info by path
|
||||
dbbackup catalog info /backups/mydb_20240115.dump.gz
|
||||
|
||||
# Compare two backups to see what changed
|
||||
dbbackup diff /backups/mydb_20240115.dump.gz /backups/mydb_20240120.dump.gz
|
||||
|
||||
# Compare using catalog IDs
|
||||
dbbackup diff 123 456
|
||||
|
||||
# Compare latest two backups for a database
|
||||
dbbackup diff mydb:latest mydb:previous
|
||||
```
|
||||
|
||||
## Cost Analysis
|
||||
|
||||
Analyze and optimize cloud storage costs:
|
||||
|
||||
```bash
|
||||
# Analyze current backup costs
|
||||
dbbackup cost analyze
|
||||
|
||||
# Specific database
|
||||
dbbackup cost analyze --database mydb
|
||||
|
||||
# Compare providers and tiers
|
||||
dbbackup cost analyze --provider aws --format table
|
||||
|
||||
# Get JSON for automation/reporting
|
||||
dbbackup cost analyze --format json
|
||||
```
|
||||
|
||||
**Providers analyzed:**
|
||||
- AWS S3 (Standard, IA, Glacier, Deep Archive)
|
||||
- Google Cloud Storage (Standard, Nearline, Coldline, Archive)
|
||||
- Azure Blob (Hot, Cool, Archive)
|
||||
- Backblaze B2
|
||||
- Wasabi
|
||||
|
||||
Shows tiered storage strategy recommendations with potential annual savings.
|
||||
|
||||
## Health Check
|
||||
|
||||
Comprehensive backup infrastructure health monitoring:
|
||||
|
||||
```bash
|
||||
# Quick health check
|
||||
dbbackup health
|
||||
|
||||
# Detailed output
|
||||
dbbackup health --verbose
|
||||
|
||||
# JSON for monitoring integration (Prometheus, Nagios, etc.)
|
||||
dbbackup health --format json
|
||||
|
||||
# Custom backup interval for gap detection
|
||||
dbbackup health --interval 12h
|
||||
|
||||
# Skip database connectivity (offline check)
|
||||
dbbackup health --skip-db
|
||||
```
|
||||
|
||||
**Checks performed:**
|
||||
- Configuration validity
|
||||
- Database connectivity
|
||||
- Backup directory accessibility
|
||||
- Catalog integrity
|
||||
- Backup freshness (is last backup recent?)
|
||||
- Gap detection (missed scheduled backups)
|
||||
- Verification status (% of backups verified)
|
||||
- File integrity (do files exist and match metadata?)
|
||||
- Orphaned entries (catalog entries for missing files)
|
||||
- Disk space
|
||||
|
||||
**Exit codes for automation:**
|
||||
- `0` = healthy (all checks passed)
|
||||
- `1` = warning (some checks need attention)
|
||||
- `2` = critical (immediate action required)
|
||||
|
||||
## DR Drill Testing
|
||||
|
||||
Automated disaster recovery testing restores backups to Docker containers:
|
||||
|
||||
417
cmd/backup_diff.go
Normal file
417
cmd/backup_diff.go
Normal file
@ -0,0 +1,417 @@
|
||||
package cmd
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"os"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"dbbackup/internal/catalog"
|
||||
"dbbackup/internal/metadata"
|
||||
|
||||
"github.com/spf13/cobra"
|
||||
)
|
||||
|
||||
var (
|
||||
diffFormat string
|
||||
diffVerbose bool
|
||||
diffShowOnly string // changed, added, removed, all
|
||||
)
|
||||
|
||||
// diffCmd compares two backups
|
||||
var diffCmd = &cobra.Command{
|
||||
Use: "diff <backup1> <backup2>",
|
||||
Short: "Compare two backups and show differences",
|
||||
Long: `Compare two backups from the catalog and show what changed.
|
||||
|
||||
Shows:
|
||||
- New tables/databases added
|
||||
- Removed tables/databases
|
||||
- Size changes for existing tables
|
||||
- Total size delta
|
||||
- Compression ratio changes
|
||||
|
||||
Arguments can be:
|
||||
- Backup file paths (absolute or relative)
|
||||
- Backup IDs from catalog (e.g., "123", "456")
|
||||
- Database name with latest backup (e.g., "mydb:latest")
|
||||
|
||||
Examples:
|
||||
# Compare two backup files
|
||||
dbbackup diff backup1.dump.gz backup2.dump.gz
|
||||
|
||||
# Compare catalog entries by ID
|
||||
dbbackup diff 123 456
|
||||
|
||||
# Compare latest two backups for a database
|
||||
dbbackup diff mydb:latest mydb:previous
|
||||
|
||||
# Show only changes (ignore unchanged)
|
||||
dbbackup diff backup1.dump.gz backup2.dump.gz --show changed
|
||||
|
||||
# JSON output for automation
|
||||
dbbackup diff 123 456 --format json`,
|
||||
Args: cobra.ExactArgs(2),
|
||||
RunE: runDiff,
|
||||
}
|
||||
|
||||
func init() {
|
||||
rootCmd.AddCommand(diffCmd)
|
||||
|
||||
diffCmd.Flags().StringVar(&diffFormat, "format", "table", "Output format (table, json)")
|
||||
diffCmd.Flags().BoolVar(&diffVerbose, "verbose", false, "Show verbose output")
|
||||
diffCmd.Flags().StringVar(&diffShowOnly, "show", "all", "Show only: changed, added, removed, all")
|
||||
}
|
||||
|
||||
func runDiff(cmd *cobra.Command, args []string) error {
|
||||
backup1Path, err := resolveBackupArg(args[0])
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to resolve backup1: %w", err)
|
||||
}
|
||||
|
||||
backup2Path, err := resolveBackupArg(args[1])
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to resolve backup2: %w", err)
|
||||
}
|
||||
|
||||
// Load metadata for both backups
|
||||
meta1, err := metadata.Load(backup1Path)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to load metadata for backup1: %w", err)
|
||||
}
|
||||
|
||||
meta2, err := metadata.Load(backup2Path)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to load metadata for backup2: %w", err)
|
||||
}
|
||||
|
||||
// Validate same database
|
||||
if meta1.Database != meta2.Database {
|
||||
return fmt.Errorf("backups are from different databases: %s vs %s", meta1.Database, meta2.Database)
|
||||
}
|
||||
|
||||
// Calculate diff
|
||||
diff := calculateBackupDiff(meta1, meta2)
|
||||
|
||||
// Output
|
||||
if diffFormat == "json" {
|
||||
return outputDiffJSON(diff, meta1, meta2)
|
||||
}
|
||||
|
||||
return outputDiffTable(diff, meta1, meta2)
|
||||
}
|
||||
|
||||
// resolveBackupArg resolves various backup reference formats
|
||||
func resolveBackupArg(arg string) (string, error) {
|
||||
// If it looks like a file path, use it directly
|
||||
if strings.Contains(arg, "/") || strings.HasSuffix(arg, ".gz") || strings.HasSuffix(arg, ".dump") {
|
||||
if _, err := os.Stat(arg); err == nil {
|
||||
return arg, nil
|
||||
}
|
||||
return "", fmt.Errorf("backup file not found: %s", arg)
|
||||
}
|
||||
|
||||
// Try as catalog ID
|
||||
cat, err := openCatalog()
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("failed to open catalog: %w", err)
|
||||
}
|
||||
defer cat.Close()
|
||||
|
||||
ctx := context.Background()
|
||||
|
||||
// Special syntax: "database:latest" or "database:previous"
|
||||
if strings.Contains(arg, ":") {
|
||||
parts := strings.Split(arg, ":")
|
||||
database := parts[0]
|
||||
position := parts[1]
|
||||
|
||||
query := &catalog.SearchQuery{
|
||||
Database: database,
|
||||
OrderBy: "created_at",
|
||||
OrderDesc: true,
|
||||
}
|
||||
|
||||
if position == "latest" {
|
||||
query.Limit = 1
|
||||
} else if position == "previous" {
|
||||
query.Limit = 2
|
||||
} else {
|
||||
return "", fmt.Errorf("invalid position: %s (use 'latest' or 'previous')", position)
|
||||
}
|
||||
|
||||
entries, err := cat.Search(ctx, query)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
if len(entries) == 0 {
|
||||
return "", fmt.Errorf("no backups found for database: %s", database)
|
||||
}
|
||||
|
||||
if position == "previous" {
|
||||
if len(entries) < 2 {
|
||||
return "", fmt.Errorf("not enough backups for database: %s (need at least 2)", database)
|
||||
}
|
||||
return entries[1].BackupPath, nil
|
||||
}
|
||||
|
||||
return entries[0].BackupPath, nil
|
||||
}
|
||||
|
||||
// Try as numeric ID
|
||||
var id int64
|
||||
_, err = fmt.Sscanf(arg, "%d", &id)
|
||||
if err == nil {
|
||||
entry, err := cat.Get(ctx, id)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
if entry == nil {
|
||||
return "", fmt.Errorf("backup not found with ID: %d", id)
|
||||
}
|
||||
return entry.BackupPath, nil
|
||||
}
|
||||
|
||||
return "", fmt.Errorf("invalid backup reference: %s", arg)
|
||||
}
|
||||
|
||||
// BackupDiff represents the difference between two backups
|
||||
type BackupDiff struct {
|
||||
Database string
|
||||
Backup1Time time.Time
|
||||
Backup2Time time.Time
|
||||
TimeDelta time.Duration
|
||||
SizeDelta int64
|
||||
SizeDeltaPct float64
|
||||
DurationDelta float64
|
||||
|
||||
// Detailed changes (when metadata contains table info)
|
||||
AddedItems []DiffItem
|
||||
RemovedItems []DiffItem
|
||||
ChangedItems []DiffItem
|
||||
UnchangedItems []DiffItem
|
||||
}
|
||||
|
||||
type DiffItem struct {
|
||||
Name string
|
||||
Size1 int64
|
||||
Size2 int64
|
||||
SizeDelta int64
|
||||
DeltaPct float64
|
||||
}
|
||||
|
||||
func calculateBackupDiff(meta1, meta2 *metadata.BackupMetadata) *BackupDiff {
|
||||
diff := &BackupDiff{
|
||||
Database: meta1.Database,
|
||||
Backup1Time: meta1.Timestamp,
|
||||
Backup2Time: meta2.Timestamp,
|
||||
TimeDelta: meta2.Timestamp.Sub(meta1.Timestamp),
|
||||
SizeDelta: meta2.SizeBytes - meta1.SizeBytes,
|
||||
DurationDelta: meta2.Duration - meta1.Duration,
|
||||
}
|
||||
|
||||
if meta1.SizeBytes > 0 {
|
||||
diff.SizeDeltaPct = (float64(diff.SizeDelta) / float64(meta1.SizeBytes)) * 100.0
|
||||
}
|
||||
|
||||
// If metadata contains table-level info, compare tables
|
||||
// For now, we only have file-level comparison
|
||||
// Future enhancement: parse backup files for table sizes
|
||||
|
||||
return diff
|
||||
}
|
||||
|
||||
func outputDiffTable(diff *BackupDiff, meta1, meta2 *metadata.BackupMetadata) error {
|
||||
fmt.Println()
|
||||
fmt.Println("═══════════════════════════════════════════════════════════")
|
||||
fmt.Printf(" Backup Comparison: %s\n", diff.Database)
|
||||
fmt.Println("═══════════════════════════════════════════════════════════")
|
||||
fmt.Println()
|
||||
|
||||
// Backup info
|
||||
fmt.Printf("[BACKUP 1]\n")
|
||||
fmt.Printf(" Time: %s\n", meta1.Timestamp.Format("2006-01-02 15:04:05"))
|
||||
fmt.Printf(" Size: %s (%d bytes)\n", formatBytesForDiff(meta1.SizeBytes), meta1.SizeBytes)
|
||||
fmt.Printf(" Duration: %.2fs\n", meta1.Duration)
|
||||
fmt.Printf(" Compression: %s\n", meta1.Compression)
|
||||
fmt.Printf(" Type: %s\n", meta1.BackupType)
|
||||
fmt.Println()
|
||||
|
||||
fmt.Printf("[BACKUP 2]\n")
|
||||
fmt.Printf(" Time: %s\n", meta2.Timestamp.Format("2006-01-02 15:04:05"))
|
||||
fmt.Printf(" Size: %s (%d bytes)\n", formatBytesForDiff(meta2.SizeBytes), meta2.SizeBytes)
|
||||
fmt.Printf(" Duration: %.2fs\n", meta2.Duration)
|
||||
fmt.Printf(" Compression: %s\n", meta2.Compression)
|
||||
fmt.Printf(" Type: %s\n", meta2.BackupType)
|
||||
fmt.Println()
|
||||
|
||||
// Deltas
|
||||
fmt.Println("───────────────────────────────────────────────────────────")
|
||||
fmt.Println("[CHANGES]")
|
||||
fmt.Println("───────────────────────────────────────────────────────────")
|
||||
|
||||
// Time delta
|
||||
timeDelta := diff.TimeDelta
|
||||
fmt.Printf(" Time Between: %s\n", formatDurationForDiff(timeDelta))
|
||||
|
||||
// Size delta
|
||||
sizeIcon := "="
|
||||
if diff.SizeDelta > 0 {
|
||||
sizeIcon = "↑"
|
||||
fmt.Printf(" Size Change: %s %s (+%.1f%%)\n",
|
||||
sizeIcon, formatBytesForDiff(diff.SizeDelta), diff.SizeDeltaPct)
|
||||
} else if diff.SizeDelta < 0 {
|
||||
sizeIcon = "↓"
|
||||
fmt.Printf(" Size Change: %s %s (%.1f%%)\n",
|
||||
sizeIcon, formatBytesForDiff(-diff.SizeDelta), diff.SizeDeltaPct)
|
||||
} else {
|
||||
fmt.Printf(" Size Change: %s No change\n", sizeIcon)
|
||||
}
|
||||
|
||||
// Duration delta
|
||||
durDelta := diff.DurationDelta
|
||||
durIcon := "="
|
||||
if durDelta > 0 {
|
||||
durIcon = "↑"
|
||||
durPct := (durDelta / meta1.Duration) * 100.0
|
||||
fmt.Printf(" Duration: %s +%.2fs (+%.1f%%)\n", durIcon, durDelta, durPct)
|
||||
} else if durDelta < 0 {
|
||||
durIcon = "↓"
|
||||
durPct := (-durDelta / meta1.Duration) * 100.0
|
||||
fmt.Printf(" Duration: %s -%.2fs (-%.1f%%)\n", durIcon, -durDelta, durPct)
|
||||
} else {
|
||||
fmt.Printf(" Duration: %s No change\n", durIcon)
|
||||
}
|
||||
|
||||
// Compression efficiency
|
||||
if meta1.Compression != "none" && meta2.Compression != "none" {
|
||||
fmt.Println()
|
||||
fmt.Println("[COMPRESSION ANALYSIS]")
|
||||
// Note: We'd need uncompressed sizes to calculate actual compression ratio
|
||||
fmt.Printf(" Backup 1: %s\n", meta1.Compression)
|
||||
fmt.Printf(" Backup 2: %s\n", meta2.Compression)
|
||||
if meta1.Compression != meta2.Compression {
|
||||
fmt.Printf(" ⚠ Compression method changed\n")
|
||||
}
|
||||
}
|
||||
|
||||
// Database growth rate
|
||||
if diff.TimeDelta.Hours() > 0 {
|
||||
growthPerDay := float64(diff.SizeDelta) / diff.TimeDelta.Hours() * 24.0
|
||||
fmt.Println()
|
||||
fmt.Println("[GROWTH RATE]")
|
||||
if growthPerDay > 0 {
|
||||
fmt.Printf(" Database growing at ~%s/day\n", formatBytesForDiff(int64(growthPerDay)))
|
||||
|
||||
// Project forward
|
||||
daysTo10GB := (10*1024*1024*1024 - float64(meta2.SizeBytes)) / growthPerDay
|
||||
if daysTo10GB > 0 && daysTo10GB < 365 {
|
||||
fmt.Printf(" Will reach 10GB in ~%.0f days\n", daysTo10GB)
|
||||
}
|
||||
} else if growthPerDay < 0 {
|
||||
fmt.Printf(" Database shrinking at ~%s/day\n", formatBytesForDiff(int64(-growthPerDay)))
|
||||
} else {
|
||||
fmt.Printf(" Database size stable\n")
|
||||
}
|
||||
}
|
||||
|
||||
fmt.Println()
|
||||
fmt.Println("═══════════════════════════════════════════════════════════")
|
||||
|
||||
if diffVerbose {
|
||||
fmt.Println()
|
||||
fmt.Println("[METADATA DIFF]")
|
||||
fmt.Printf(" Host: %s → %s\n", meta1.Host, meta2.Host)
|
||||
fmt.Printf(" Port: %d → %d\n", meta1.Port, meta2.Port)
|
||||
fmt.Printf(" DB Version: %s → %s\n", meta1.DatabaseVersion, meta2.DatabaseVersion)
|
||||
fmt.Printf(" Encrypted: %v → %v\n", meta1.Encrypted, meta2.Encrypted)
|
||||
fmt.Printf(" Checksum 1: %s\n", meta1.SHA256[:16]+"...")
|
||||
fmt.Printf(" Checksum 2: %s\n", meta2.SHA256[:16]+"...")
|
||||
}
|
||||
|
||||
fmt.Println()
|
||||
return nil
|
||||
}
|
||||
|
||||
func outputDiffJSON(diff *BackupDiff, meta1, meta2 *metadata.BackupMetadata) error {
|
||||
output := map[string]interface{}{
|
||||
"database": diff.Database,
|
||||
"backup1": map[string]interface{}{
|
||||
"timestamp": meta1.Timestamp,
|
||||
"size_bytes": meta1.SizeBytes,
|
||||
"duration": meta1.Duration,
|
||||
"compression": meta1.Compression,
|
||||
"type": meta1.BackupType,
|
||||
"version": meta1.DatabaseVersion,
|
||||
},
|
||||
"backup2": map[string]interface{}{
|
||||
"timestamp": meta2.Timestamp,
|
||||
"size_bytes": meta2.SizeBytes,
|
||||
"duration": meta2.Duration,
|
||||
"compression": meta2.Compression,
|
||||
"type": meta2.BackupType,
|
||||
"version": meta2.DatabaseVersion,
|
||||
},
|
||||
"diff": map[string]interface{}{
|
||||
"time_delta_hours": diff.TimeDelta.Hours(),
|
||||
"size_delta_bytes": diff.SizeDelta,
|
||||
"size_delta_pct": diff.SizeDeltaPct,
|
||||
"duration_delta": diff.DurationDelta,
|
||||
},
|
||||
}
|
||||
|
||||
// Calculate growth rate
|
||||
if diff.TimeDelta.Hours() > 0 {
|
||||
growthPerDay := float64(diff.SizeDelta) / diff.TimeDelta.Hours() * 24.0
|
||||
output["growth_rate_bytes_per_day"] = growthPerDay
|
||||
}
|
||||
|
||||
data, err := json.MarshalIndent(output, "", " ")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
fmt.Println(string(data))
|
||||
return nil
|
||||
}
|
||||
|
||||
// Utility wrappers
|
||||
func formatBytesForDiff(bytes int64) string {
|
||||
if bytes < 0 {
|
||||
return "-" + formatBytesForDiff(-bytes)
|
||||
}
|
||||
|
||||
const unit = 1024
|
||||
if bytes < unit {
|
||||
return fmt.Sprintf("%d B", bytes)
|
||||
}
|
||||
|
||||
div, exp := int64(unit), 0
|
||||
for n := bytes / unit; n >= unit; n /= unit {
|
||||
div *= unit
|
||||
exp++
|
||||
}
|
||||
|
||||
return fmt.Sprintf("%.2f %ciB", float64(bytes)/float64(div), "KMGTPE"[exp])
|
||||
}
|
||||
|
||||
func formatDurationForDiff(d time.Duration) string {
|
||||
if d < 0 {
|
||||
return "-" + formatDurationForDiff(-d)
|
||||
}
|
||||
|
||||
days := int(d.Hours() / 24)
|
||||
hours := int(d.Hours()) % 24
|
||||
minutes := int(d.Minutes()) % 60
|
||||
|
||||
if days > 0 {
|
||||
return fmt.Sprintf("%dd %dh %dm", days, hours, minutes)
|
||||
}
|
||||
if hours > 0 {
|
||||
return fmt.Sprintf("%dh %dm", hours, minutes)
|
||||
}
|
||||
return fmt.Sprintf("%dm", minutes)
|
||||
}
|
||||
396
cmd/cost.go
Normal file
396
cmd/cost.go
Normal file
@ -0,0 +1,396 @@
|
||||
package cmd
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"strings"
|
||||
|
||||
"dbbackup/internal/catalog"
|
||||
|
||||
"github.com/spf13/cobra"
|
||||
)
|
||||
|
||||
var (
|
||||
costDatabase string
|
||||
costFormat string
|
||||
costRegion string
|
||||
costProvider string
|
||||
costDays int
|
||||
)
|
||||
|
||||
// costCmd analyzes backup storage costs
|
||||
var costCmd = &cobra.Command{
|
||||
Use: "cost",
|
||||
Short: "Analyze cloud storage costs for backups",
|
||||
Long: `Calculate and compare cloud storage costs for your backups.
|
||||
|
||||
Analyzes storage costs across providers:
|
||||
- AWS S3 (Standard, IA, Glacier, Deep Archive)
|
||||
- Google Cloud Storage (Standard, Nearline, Coldline, Archive)
|
||||
- Azure Blob Storage (Hot, Cool, Archive)
|
||||
- Backblaze B2
|
||||
- Wasabi
|
||||
|
||||
Pricing is based on standard rates and may vary by region.
|
||||
|
||||
Examples:
|
||||
# Analyze all backups
|
||||
dbbackup cost analyze
|
||||
|
||||
# Specific database
|
||||
dbbackup cost analyze --database mydb
|
||||
|
||||
# Compare providers for 90 days
|
||||
dbbackup cost analyze --days 90 --format table
|
||||
|
||||
# Estimate for specific region
|
||||
dbbackup cost analyze --region us-east-1
|
||||
|
||||
# JSON output for automation
|
||||
dbbackup cost analyze --format json`,
|
||||
}
|
||||
|
||||
var costAnalyzeCmd = &cobra.Command{
|
||||
Use: "analyze",
|
||||
Short: "Analyze backup storage costs",
|
||||
Args: cobra.NoArgs,
|
||||
RunE: runCostAnalyze,
|
||||
}
|
||||
|
||||
func init() {
|
||||
rootCmd.AddCommand(costCmd)
|
||||
costCmd.AddCommand(costAnalyzeCmd)
|
||||
|
||||
costAnalyzeCmd.Flags().StringVar(&costDatabase, "database", "", "Filter by database")
|
||||
costAnalyzeCmd.Flags().StringVar(&costFormat, "format", "table", "Output format (table, json)")
|
||||
costAnalyzeCmd.Flags().StringVar(&costRegion, "region", "us-east-1", "Cloud region for pricing")
|
||||
costAnalyzeCmd.Flags().StringVar(&costProvider, "provider", "all", "Show specific provider (all, aws, gcs, azure, b2, wasabi)")
|
||||
costAnalyzeCmd.Flags().IntVar(&costDays, "days", 30, "Number of days to calculate")
|
||||
}
|
||||
|
||||
func runCostAnalyze(cmd *cobra.Command, args []string) error {
|
||||
cat, err := openCatalog()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer cat.Close()
|
||||
|
||||
ctx := context.Background()
|
||||
|
||||
// Get backup statistics
|
||||
var stats *catalog.Stats
|
||||
if costDatabase != "" {
|
||||
stats, err = cat.StatsByDatabase(ctx, costDatabase)
|
||||
} else {
|
||||
stats, err = cat.Stats(ctx)
|
||||
}
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if stats.TotalBackups == 0 {
|
||||
fmt.Println("No backups found in catalog. Run 'dbbackup catalog sync' first.")
|
||||
return nil
|
||||
}
|
||||
|
||||
// Calculate costs
|
||||
analysis := calculateCosts(stats.TotalSize, costDays, costRegion)
|
||||
|
||||
if costFormat == "json" {
|
||||
return outputCostJSON(analysis, stats)
|
||||
}
|
||||
|
||||
return outputCostTable(analysis, stats)
|
||||
}
|
||||
|
||||
// StorageTier represents a storage class/tier
|
||||
type StorageTier struct {
|
||||
Provider string
|
||||
Tier string
|
||||
Description string
|
||||
StorageGB float64 // $ per GB/month
|
||||
RetrievalGB float64 // $ per GB retrieved
|
||||
Requests float64 // $ per 1000 requests
|
||||
MinDays int // Minimum storage duration
|
||||
}
|
||||
|
||||
// CostAnalysis represents the cost breakdown
|
||||
type CostAnalysis struct {
|
||||
TotalSizeGB float64
|
||||
Days int
|
||||
Region string
|
||||
Recommendations []TierRecommendation
|
||||
}
|
||||
|
||||
type TierRecommendation struct {
|
||||
Provider string
|
||||
Tier string
|
||||
Description string
|
||||
MonthlyStorage float64
|
||||
AnnualStorage float64
|
||||
RetrievalCost float64
|
||||
TotalMonthly float64
|
||||
TotalAnnual float64
|
||||
SavingsVsS3 float64
|
||||
SavingsPct float64
|
||||
BestFor string
|
||||
}
|
||||
|
||||
func calculateCosts(totalBytes int64, days int, region string) *CostAnalysis {
|
||||
sizeGB := float64(totalBytes) / (1024 * 1024 * 1024)
|
||||
|
||||
analysis := &CostAnalysis{
|
||||
TotalSizeGB: sizeGB,
|
||||
Days: days,
|
||||
Region: region,
|
||||
}
|
||||
|
||||
// Define storage tiers (pricing as of 2026, approximate)
|
||||
tiers := []StorageTier{
|
||||
// AWS S3
|
||||
{Provider: "AWS S3", Tier: "Standard", Description: "Frequent access",
|
||||
StorageGB: 0.023, RetrievalGB: 0.0, Requests: 0.0004, MinDays: 0},
|
||||
{Provider: "AWS S3", Tier: "Intelligent-Tiering", Description: "Auto-optimization",
|
||||
StorageGB: 0.023, RetrievalGB: 0.0, Requests: 0.0004, MinDays: 0},
|
||||
{Provider: "AWS S3", Tier: "Standard-IA", Description: "Infrequent access",
|
||||
StorageGB: 0.0125, RetrievalGB: 0.01, Requests: 0.001, MinDays: 30},
|
||||
{Provider: "AWS S3", Tier: "Glacier Instant", Description: "Archive instant",
|
||||
StorageGB: 0.004, RetrievalGB: 0.03, Requests: 0.01, MinDays: 90},
|
||||
{Provider: "AWS S3", Tier: "Glacier Flexible", Description: "Archive flexible",
|
||||
StorageGB: 0.0036, RetrievalGB: 0.02, Requests: 0.05, MinDays: 90},
|
||||
{Provider: "AWS S3", Tier: "Deep Archive", Description: "Long-term archive",
|
||||
StorageGB: 0.00099, RetrievalGB: 0.02, Requests: 0.05, MinDays: 180},
|
||||
|
||||
// Google Cloud Storage
|
||||
{Provider: "GCS", Tier: "Standard", Description: "Frequent access",
|
||||
StorageGB: 0.020, RetrievalGB: 0.0, Requests: 0.0004, MinDays: 0},
|
||||
{Provider: "GCS", Tier: "Nearline", Description: "Monthly access",
|
||||
StorageGB: 0.010, RetrievalGB: 0.01, Requests: 0.001, MinDays: 30},
|
||||
{Provider: "GCS", Tier: "Coldline", Description: "Quarterly access",
|
||||
StorageGB: 0.004, RetrievalGB: 0.02, Requests: 0.005, MinDays: 90},
|
||||
{Provider: "GCS", Tier: "Archive", Description: "Annual access",
|
||||
StorageGB: 0.0012, RetrievalGB: 0.05, Requests: 0.05, MinDays: 365},
|
||||
|
||||
// Azure Blob Storage
|
||||
{Provider: "Azure", Tier: "Hot", Description: "Frequent access",
|
||||
StorageGB: 0.0184, RetrievalGB: 0.0, Requests: 0.0004, MinDays: 0},
|
||||
{Provider: "Azure", Tier: "Cool", Description: "Infrequent access",
|
||||
StorageGB: 0.010, RetrievalGB: 0.01, Requests: 0.001, MinDays: 30},
|
||||
{Provider: "Azure", Tier: "Archive", Description: "Long-term archive",
|
||||
StorageGB: 0.00099, RetrievalGB: 0.02, Requests: 0.05, MinDays: 180},
|
||||
|
||||
// Backblaze B2
|
||||
{Provider: "Backblaze B2", Tier: "Standard", Description: "Affordable cloud",
|
||||
StorageGB: 0.005, RetrievalGB: 0.01, Requests: 0.0004, MinDays: 0},
|
||||
|
||||
// Wasabi
|
||||
{Provider: "Wasabi", Tier: "Hot Cloud", Description: "No egress fees",
|
||||
StorageGB: 0.0059, RetrievalGB: 0.0, Requests: 0.0, MinDays: 90},
|
||||
}
|
||||
|
||||
// Calculate costs for each tier
|
||||
s3StandardCost := 0.0
|
||||
for _, tier := range tiers {
|
||||
if costProvider != "all" {
|
||||
providerLower := strings.ToLower(tier.Provider)
|
||||
filterLower := strings.ToLower(costProvider)
|
||||
if !strings.Contains(providerLower, filterLower) {
|
||||
continue
|
||||
}
|
||||
}
|
||||
|
||||
rec := TierRecommendation{
|
||||
Provider: tier.Provider,
|
||||
Tier: tier.Tier,
|
||||
Description: tier.Description,
|
||||
}
|
||||
|
||||
// Monthly storage cost
|
||||
rec.MonthlyStorage = sizeGB * tier.StorageGB
|
||||
|
||||
// Annual storage cost
|
||||
rec.AnnualStorage = rec.MonthlyStorage * 12
|
||||
|
||||
// Estimate retrieval cost (assume 1 retrieval per month for DR testing)
|
||||
rec.RetrievalCost = sizeGB * tier.RetrievalGB
|
||||
|
||||
// Total costs
|
||||
rec.TotalMonthly = rec.MonthlyStorage + rec.RetrievalCost
|
||||
rec.TotalAnnual = rec.AnnualStorage + (rec.RetrievalCost * 12)
|
||||
|
||||
// Track S3 Standard for comparison
|
||||
if tier.Provider == "AWS S3" && tier.Tier == "Standard" {
|
||||
s3StandardCost = rec.TotalMonthly
|
||||
}
|
||||
|
||||
// Recommendations
|
||||
switch {
|
||||
case tier.MinDays >= 180:
|
||||
rec.BestFor = "Long-term archives (6+ months)"
|
||||
case tier.MinDays >= 90:
|
||||
rec.BestFor = "Compliance archives (3+ months)"
|
||||
case tier.MinDays >= 30:
|
||||
rec.BestFor = "Recent backups (monthly rotation)"
|
||||
default:
|
||||
rec.BestFor = "Active/hot backups (daily access)"
|
||||
}
|
||||
|
||||
analysis.Recommendations = append(analysis.Recommendations, rec)
|
||||
}
|
||||
|
||||
// Calculate savings vs S3 Standard
|
||||
if s3StandardCost > 0 {
|
||||
for i := range analysis.Recommendations {
|
||||
rec := &analysis.Recommendations[i]
|
||||
rec.SavingsVsS3 = s3StandardCost - rec.TotalMonthly
|
||||
if s3StandardCost > 0 {
|
||||
rec.SavingsPct = (rec.SavingsVsS3 / s3StandardCost) * 100.0
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return analysis
|
||||
}
|
||||
|
||||
func outputCostTable(analysis *CostAnalysis, stats *catalog.Stats) error {
|
||||
fmt.Println()
|
||||
fmt.Println("═══════════════════════════════════════════════════════════════════════════")
|
||||
fmt.Printf(" Cloud Storage Cost Analysis\n")
|
||||
fmt.Println("═══════════════════════════════════════════════════════════════════════════")
|
||||
fmt.Println()
|
||||
|
||||
fmt.Printf("[CURRENT BACKUP INVENTORY]\n")
|
||||
fmt.Printf(" Total Backups: %d\n", stats.TotalBackups)
|
||||
fmt.Printf(" Total Size: %.2f GB (%s)\n", analysis.TotalSizeGB, stats.TotalSizeHuman)
|
||||
if costDatabase != "" {
|
||||
fmt.Printf(" Database: %s\n", costDatabase)
|
||||
} else {
|
||||
fmt.Printf(" Databases: %d\n", len(stats.ByDatabase))
|
||||
}
|
||||
fmt.Printf(" Region: %s\n", analysis.Region)
|
||||
fmt.Printf(" Analysis Period: %d days\n", analysis.Days)
|
||||
fmt.Println()
|
||||
|
||||
fmt.Println("───────────────────────────────────────────────────────────────────────────")
|
||||
fmt.Printf("%-20s %-20s %12s %12s %12s\n",
|
||||
"PROVIDER", "TIER", "MONTHLY", "ANNUAL", "SAVINGS")
|
||||
fmt.Println("───────────────────────────────────────────────────────────────────────────")
|
||||
|
||||
for _, rec := range analysis.Recommendations {
|
||||
savings := ""
|
||||
if rec.SavingsVsS3 > 0 {
|
||||
savings = fmt.Sprintf("↓ $%.2f (%.0f%%)", rec.SavingsVsS3, rec.SavingsPct)
|
||||
} else if rec.SavingsVsS3 < 0 {
|
||||
savings = fmt.Sprintf("↑ $%.2f", -rec.SavingsVsS3)
|
||||
} else {
|
||||
savings = "baseline"
|
||||
}
|
||||
|
||||
fmt.Printf("%-20s %-20s $%10.2f $%10.2f %s\n",
|
||||
rec.Provider,
|
||||
rec.Tier,
|
||||
rec.TotalMonthly,
|
||||
rec.TotalAnnual,
|
||||
savings,
|
||||
)
|
||||
}
|
||||
|
||||
fmt.Println("───────────────────────────────────────────────────────────────────────────")
|
||||
fmt.Println()
|
||||
|
||||
// Top recommendations
|
||||
fmt.Println("[COST OPTIMIZATION RECOMMENDATIONS]")
|
||||
fmt.Println()
|
||||
|
||||
// Find cheapest option
|
||||
cheapest := analysis.Recommendations[0]
|
||||
for _, rec := range analysis.Recommendations {
|
||||
if rec.TotalAnnual < cheapest.TotalAnnual {
|
||||
cheapest = rec
|
||||
}
|
||||
}
|
||||
|
||||
fmt.Printf("💰 CHEAPEST OPTION: %s %s\n", cheapest.Provider, cheapest.Tier)
|
||||
fmt.Printf(" Annual Cost: $%.2f (save $%.2f/year vs S3 Standard)\n",
|
||||
cheapest.TotalAnnual, cheapest.SavingsVsS3*12)
|
||||
fmt.Printf(" Best For: %s\n", cheapest.BestFor)
|
||||
fmt.Println()
|
||||
|
||||
// Find best balance
|
||||
fmt.Printf("⚖️ BALANCED OPTION: AWS S3 Standard-IA or GCS Nearline\n")
|
||||
fmt.Printf(" Good balance of cost and accessibility\n")
|
||||
fmt.Printf(" Suitable for 30-day retention backups\n")
|
||||
fmt.Println()
|
||||
|
||||
// Find hot storage
|
||||
fmt.Printf("🔥 HOT STORAGE: Wasabi or Backblaze B2\n")
|
||||
fmt.Printf(" No egress fees (Wasabi) or low retrieval costs\n")
|
||||
fmt.Printf(" Perfect for frequent restore testing\n")
|
||||
fmt.Println()
|
||||
|
||||
// Strategy recommendation
|
||||
fmt.Println("[TIERED STORAGE STRATEGY]")
|
||||
fmt.Println()
|
||||
fmt.Printf(" Day 0-7: S3 Standard or Wasabi (frequent access)\n")
|
||||
fmt.Printf(" Day 8-30: S3 Standard-IA or GCS Nearline (weekly access)\n")
|
||||
fmt.Printf(" Day 31-90: S3 Glacier or GCS Coldline (monthly access)\n")
|
||||
fmt.Printf(" Day 90+: S3 Deep Archive or GCS Archive (compliance)\n")
|
||||
fmt.Println()
|
||||
|
||||
potentialSaving := 0.0
|
||||
for _, rec := range analysis.Recommendations {
|
||||
if rec.Provider == "AWS S3" && rec.Tier == "Deep Archive" {
|
||||
potentialSaving = rec.SavingsVsS3 * 12
|
||||
}
|
||||
}
|
||||
|
||||
if potentialSaving > 0 {
|
||||
fmt.Printf("💡 With tiered lifecycle policies, you could save ~$%.2f/year\n", potentialSaving)
|
||||
}
|
||||
|
||||
fmt.Println()
|
||||
fmt.Println("═══════════════════════════════════════════════════════════════════════════")
|
||||
fmt.Println()
|
||||
fmt.Println("Note: Costs are estimates based on standard pricing.")
|
||||
fmt.Println("Actual costs may vary by region, usage patterns, and current pricing.")
|
||||
fmt.Println()
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func outputCostJSON(analysis *CostAnalysis, stats *catalog.Stats) error {
|
||||
output := map[string]interface{}{
|
||||
"inventory": map[string]interface{}{
|
||||
"total_backups": stats.TotalBackups,
|
||||
"total_size_gb": analysis.TotalSizeGB,
|
||||
"total_size_human": stats.TotalSizeHuman,
|
||||
"region": analysis.Region,
|
||||
"analysis_days": analysis.Days,
|
||||
},
|
||||
"recommendations": analysis.Recommendations,
|
||||
}
|
||||
|
||||
// Find cheapest
|
||||
cheapest := analysis.Recommendations[0]
|
||||
for _, rec := range analysis.Recommendations {
|
||||
if rec.TotalAnnual < cheapest.TotalAnnual {
|
||||
cheapest = rec
|
||||
}
|
||||
}
|
||||
|
||||
output["cheapest"] = map[string]interface{}{
|
||||
"provider": cheapest.Provider,
|
||||
"tier": cheapest.Tier,
|
||||
"annual_cost": cheapest.TotalAnnual,
|
||||
"monthly_cost": cheapest.TotalMonthly,
|
||||
}
|
||||
|
||||
data, err := json.MarshalIndent(output, "", " ")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
fmt.Println(string(data))
|
||||
return nil
|
||||
}
|
||||
699
cmd/health.go
Normal file
699
cmd/health.go
Normal file
@ -0,0 +1,699 @@
|
||||
package cmd
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"dbbackup/internal/catalog"
|
||||
"dbbackup/internal/database"
|
||||
|
||||
"github.com/spf13/cobra"
|
||||
)
|
||||
|
||||
var (
|
||||
healthFormat string
|
||||
healthVerbose bool
|
||||
healthInterval string
|
||||
healthSkipDB bool
|
||||
)
|
||||
|
||||
// HealthStatus represents overall health
|
||||
type HealthStatus string
|
||||
|
||||
const (
|
||||
StatusHealthy HealthStatus = "healthy"
|
||||
StatusWarning HealthStatus = "warning"
|
||||
StatusCritical HealthStatus = "critical"
|
||||
)
|
||||
|
||||
// HealthReport contains the complete health check results
|
||||
type HealthReport struct {
|
||||
Status HealthStatus `json:"status"`
|
||||
Timestamp time.Time `json:"timestamp"`
|
||||
Summary string `json:"summary"`
|
||||
Checks []HealthCheck `json:"checks"`
|
||||
Recommendations []string `json:"recommendations,omitempty"`
|
||||
}
|
||||
|
||||
// HealthCheck represents a single health check
|
||||
type HealthCheck struct {
|
||||
Name string `json:"name"`
|
||||
Status HealthStatus `json:"status"`
|
||||
Message string `json:"message"`
|
||||
Details string `json:"details,omitempty"`
|
||||
}
|
||||
|
||||
// healthCmd is the health check command
|
||||
var healthCmd = &cobra.Command{
|
||||
Use: "health",
|
||||
Short: "Check backup system health",
|
||||
Long: `Comprehensive health check for your backup infrastructure.
|
||||
|
||||
Checks:
|
||||
- Database connectivity (can we reach the database?)
|
||||
- Catalog integrity (is the backup database healthy?)
|
||||
- Backup freshness (are backups up to date?)
|
||||
- Gap detection (any missed scheduled backups?)
|
||||
- Verification status (are backups verified?)
|
||||
- File integrity (do backup files exist and match metadata?)
|
||||
- Disk space (sufficient space for operations?)
|
||||
- Configuration (valid settings?)
|
||||
|
||||
Exit codes for automation:
|
||||
0 = healthy (all checks passed)
|
||||
1 = warning (some checks need attention)
|
||||
2 = critical (immediate action required)
|
||||
|
||||
Examples:
|
||||
# Quick health check
|
||||
dbbackup health
|
||||
|
||||
# Detailed output
|
||||
dbbackup health --verbose
|
||||
|
||||
# JSON for monitoring integration
|
||||
dbbackup health --format json
|
||||
|
||||
# Custom backup interval for gap detection
|
||||
dbbackup health --interval 12h
|
||||
|
||||
# Skip database connectivity (offline check)
|
||||
dbbackup health --skip-db`,
|
||||
RunE: runHealthCheck,
|
||||
}
|
||||
|
||||
func init() {
|
||||
rootCmd.AddCommand(healthCmd)
|
||||
|
||||
healthCmd.Flags().StringVar(&healthFormat, "format", "table", "Output format (table, json)")
|
||||
healthCmd.Flags().BoolVarP(&healthVerbose, "verbose", "v", false, "Show detailed output")
|
||||
healthCmd.Flags().StringVar(&healthInterval, "interval", "24h", "Expected backup interval for gap detection")
|
||||
healthCmd.Flags().BoolVar(&healthSkipDB, "skip-db", false, "Skip database connectivity check")
|
||||
}
|
||||
|
||||
func runHealthCheck(cmd *cobra.Command, args []string) error {
|
||||
report := &HealthReport{
|
||||
Status: StatusHealthy,
|
||||
Timestamp: time.Now(),
|
||||
Checks: []HealthCheck{},
|
||||
}
|
||||
|
||||
ctx := context.Background()
|
||||
|
||||
// Parse interval for gap detection
|
||||
interval, err := time.ParseDuration(healthInterval)
|
||||
if err != nil {
|
||||
interval = 24 * time.Hour
|
||||
}
|
||||
|
||||
// 1. Configuration check
|
||||
report.addCheck(checkConfiguration())
|
||||
|
||||
// 2. Database connectivity (unless skipped)
|
||||
if !healthSkipDB {
|
||||
report.addCheck(checkDatabaseConnectivity(ctx))
|
||||
}
|
||||
|
||||
// 3. Backup directory check
|
||||
report.addCheck(checkBackupDir())
|
||||
|
||||
// 4. Catalog integrity check
|
||||
catalogCheck, cat := checkCatalogIntegrity(ctx)
|
||||
report.addCheck(catalogCheck)
|
||||
|
||||
if cat != nil {
|
||||
defer cat.Close()
|
||||
|
||||
// 5. Backup freshness check
|
||||
report.addCheck(checkBackupFreshness(ctx, cat, interval))
|
||||
|
||||
// 6. Gap detection
|
||||
report.addCheck(checkBackupGaps(ctx, cat, interval))
|
||||
|
||||
// 7. Verification status
|
||||
report.addCheck(checkVerificationStatus(ctx, cat))
|
||||
|
||||
// 8. File integrity (sampling)
|
||||
report.addCheck(checkFileIntegrity(ctx, cat))
|
||||
|
||||
// 9. Orphaned entries
|
||||
report.addCheck(checkOrphanedEntries(ctx, cat))
|
||||
}
|
||||
|
||||
// 10. Disk space
|
||||
report.addCheck(checkDiskSpace())
|
||||
|
||||
// Calculate overall status
|
||||
report.calculateOverallStatus()
|
||||
|
||||
// Generate recommendations
|
||||
report.generateRecommendations()
|
||||
|
||||
// Output
|
||||
if healthFormat == "json" {
|
||||
return outputHealthJSON(report)
|
||||
}
|
||||
|
||||
outputHealthTable(report)
|
||||
|
||||
// Exit code based on status
|
||||
switch report.Status {
|
||||
case StatusWarning:
|
||||
os.Exit(1)
|
||||
case StatusCritical:
|
||||
os.Exit(2)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (r *HealthReport) addCheck(check HealthCheck) {
|
||||
r.Checks = append(r.Checks, check)
|
||||
}
|
||||
|
||||
func (r *HealthReport) calculateOverallStatus() {
|
||||
criticalCount := 0
|
||||
warningCount := 0
|
||||
healthyCount := 0
|
||||
|
||||
for _, check := range r.Checks {
|
||||
switch check.Status {
|
||||
case StatusCritical:
|
||||
criticalCount++
|
||||
case StatusWarning:
|
||||
warningCount++
|
||||
case StatusHealthy:
|
||||
healthyCount++
|
||||
}
|
||||
}
|
||||
|
||||
if criticalCount > 0 {
|
||||
r.Status = StatusCritical
|
||||
r.Summary = fmt.Sprintf("%d critical, %d warning, %d healthy", criticalCount, warningCount, healthyCount)
|
||||
} else if warningCount > 0 {
|
||||
r.Status = StatusWarning
|
||||
r.Summary = fmt.Sprintf("%d warning, %d healthy", warningCount, healthyCount)
|
||||
} else {
|
||||
r.Status = StatusHealthy
|
||||
r.Summary = fmt.Sprintf("All %d checks passed", healthyCount)
|
||||
}
|
||||
}
|
||||
|
||||
func (r *HealthReport) generateRecommendations() {
|
||||
for _, check := range r.Checks {
|
||||
switch {
|
||||
case check.Name == "Backup Freshness" && check.Status != StatusHealthy:
|
||||
r.Recommendations = append(r.Recommendations, "Run a backup immediately: dbbackup backup cluster")
|
||||
case check.Name == "Verification Status" && check.Status != StatusHealthy:
|
||||
r.Recommendations = append(r.Recommendations, "Verify recent backups: dbbackup verify-backup /path/to/backup")
|
||||
case check.Name == "Disk Space" && check.Status != StatusHealthy:
|
||||
r.Recommendations = append(r.Recommendations, "Free up disk space or run cleanup: dbbackup cleanup")
|
||||
case check.Name == "Backup Gaps" && check.Status == StatusCritical:
|
||||
r.Recommendations = append(r.Recommendations, "Review backup schedule and cron configuration")
|
||||
case check.Name == "Orphaned Entries" && check.Status != StatusHealthy:
|
||||
r.Recommendations = append(r.Recommendations, "Clean orphaned entries: dbbackup catalog cleanup --orphaned")
|
||||
case check.Name == "Database Connectivity" && check.Status != StatusHealthy:
|
||||
r.Recommendations = append(r.Recommendations, "Check database connection settings in .dbbackup.conf")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Individual health checks
|
||||
|
||||
func checkConfiguration() HealthCheck {
|
||||
check := HealthCheck{
|
||||
Name: "Configuration",
|
||||
Status: StatusHealthy,
|
||||
}
|
||||
|
||||
if err := cfg.Validate(); err != nil {
|
||||
check.Status = StatusCritical
|
||||
check.Message = "Configuration invalid"
|
||||
check.Details = err.Error()
|
||||
return check
|
||||
}
|
||||
|
||||
check.Message = "Configuration valid"
|
||||
return check
|
||||
}
|
||||
|
||||
func checkDatabaseConnectivity(ctx context.Context) HealthCheck {
|
||||
check := HealthCheck{
|
||||
Name: "Database Connectivity",
|
||||
Status: StatusHealthy,
|
||||
}
|
||||
|
||||
db, err := database.New(cfg, log)
|
||||
if err != nil {
|
||||
check.Status = StatusCritical
|
||||
check.Message = "Failed to create database instance"
|
||||
check.Details = err.Error()
|
||||
return check
|
||||
}
|
||||
defer db.Close()
|
||||
|
||||
if err := db.Connect(ctx); err != nil {
|
||||
check.Status = StatusCritical
|
||||
check.Message = "Cannot connect to database"
|
||||
check.Details = err.Error()
|
||||
return check
|
||||
}
|
||||
|
||||
version, _ := db.GetVersion(ctx)
|
||||
check.Message = "Connected successfully"
|
||||
check.Details = version
|
||||
|
||||
return check
|
||||
}
|
||||
|
||||
func checkBackupDir() HealthCheck {
|
||||
check := HealthCheck{
|
||||
Name: "Backup Directory",
|
||||
Status: StatusHealthy,
|
||||
}
|
||||
|
||||
info, err := os.Stat(cfg.BackupDir)
|
||||
if err != nil {
|
||||
if os.IsNotExist(err) {
|
||||
check.Status = StatusWarning
|
||||
check.Message = "Backup directory does not exist"
|
||||
check.Details = cfg.BackupDir
|
||||
} else {
|
||||
check.Status = StatusCritical
|
||||
check.Message = "Cannot access backup directory"
|
||||
check.Details = err.Error()
|
||||
}
|
||||
return check
|
||||
}
|
||||
|
||||
if !info.IsDir() {
|
||||
check.Status = StatusCritical
|
||||
check.Message = "Backup path is not a directory"
|
||||
check.Details = cfg.BackupDir
|
||||
return check
|
||||
}
|
||||
|
||||
// Check writability
|
||||
testFile := filepath.Join(cfg.BackupDir, ".health_check_test")
|
||||
if err := os.WriteFile(testFile, []byte("test"), 0644); err != nil {
|
||||
check.Status = StatusCritical
|
||||
check.Message = "Backup directory is not writable"
|
||||
check.Details = err.Error()
|
||||
return check
|
||||
}
|
||||
os.Remove(testFile)
|
||||
|
||||
check.Message = "Backup directory accessible"
|
||||
check.Details = cfg.BackupDir
|
||||
|
||||
return check
|
||||
}
|
||||
|
||||
func checkCatalogIntegrity(ctx context.Context) (HealthCheck, *catalog.SQLiteCatalog) {
|
||||
check := HealthCheck{
|
||||
Name: "Catalog Integrity",
|
||||
Status: StatusHealthy,
|
||||
}
|
||||
|
||||
cat, err := openCatalog()
|
||||
if err != nil {
|
||||
check.Status = StatusWarning
|
||||
check.Message = "Catalog not available"
|
||||
check.Details = err.Error()
|
||||
return check, nil
|
||||
}
|
||||
|
||||
// Try a simple query to verify integrity
|
||||
stats, err := cat.Stats(ctx)
|
||||
if err != nil {
|
||||
check.Status = StatusCritical
|
||||
check.Message = "Catalog corrupted or inaccessible"
|
||||
check.Details = err.Error()
|
||||
cat.Close()
|
||||
return check, nil
|
||||
}
|
||||
|
||||
check.Message = fmt.Sprintf("Catalog healthy (%d backups tracked)", stats.TotalBackups)
|
||||
check.Details = fmt.Sprintf("Size: %s", stats.TotalSizeHuman)
|
||||
|
||||
return check, cat
|
||||
}
|
||||
|
||||
func checkBackupFreshness(ctx context.Context, cat *catalog.SQLiteCatalog, interval time.Duration) HealthCheck {
|
||||
check := HealthCheck{
|
||||
Name: "Backup Freshness",
|
||||
Status: StatusHealthy,
|
||||
}
|
||||
|
||||
stats, err := cat.Stats(ctx)
|
||||
if err != nil {
|
||||
check.Status = StatusWarning
|
||||
check.Message = "Cannot determine backup freshness"
|
||||
check.Details = err.Error()
|
||||
return check
|
||||
}
|
||||
|
||||
if stats.NewestBackup == nil {
|
||||
check.Status = StatusCritical
|
||||
check.Message = "No backups found in catalog"
|
||||
return check
|
||||
}
|
||||
|
||||
age := time.Since(*stats.NewestBackup)
|
||||
|
||||
if age > interval*3 {
|
||||
check.Status = StatusCritical
|
||||
check.Message = fmt.Sprintf("Last backup is %s old (critical)", formatDurationHealth(age))
|
||||
check.Details = stats.NewestBackup.Format("2006-01-02 15:04:05")
|
||||
} else if age > interval {
|
||||
check.Status = StatusWarning
|
||||
check.Message = fmt.Sprintf("Last backup is %s old", formatDurationHealth(age))
|
||||
check.Details = stats.NewestBackup.Format("2006-01-02 15:04:05")
|
||||
} else {
|
||||
check.Message = fmt.Sprintf("Last backup %s ago", formatDurationHealth(age))
|
||||
check.Details = stats.NewestBackup.Format("2006-01-02 15:04:05")
|
||||
}
|
||||
|
||||
return check
|
||||
}
|
||||
|
||||
func checkBackupGaps(ctx context.Context, cat *catalog.SQLiteCatalog, interval time.Duration) HealthCheck {
|
||||
check := HealthCheck{
|
||||
Name: "Backup Gaps",
|
||||
Status: StatusHealthy,
|
||||
}
|
||||
|
||||
config := &catalog.GapDetectionConfig{
|
||||
ExpectedInterval: interval,
|
||||
Tolerance: interval / 4,
|
||||
RPOThreshold: interval * 2,
|
||||
}
|
||||
|
||||
allGaps, err := cat.DetectAllGaps(ctx, config)
|
||||
if err != nil {
|
||||
check.Status = StatusWarning
|
||||
check.Message = "Gap detection failed"
|
||||
check.Details = err.Error()
|
||||
return check
|
||||
}
|
||||
|
||||
totalGaps := 0
|
||||
criticalGaps := 0
|
||||
for _, gaps := range allGaps {
|
||||
totalGaps += len(gaps)
|
||||
for _, gap := range gaps {
|
||||
if gap.Severity == catalog.SeverityCritical {
|
||||
criticalGaps++
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if criticalGaps > 0 {
|
||||
check.Status = StatusCritical
|
||||
check.Message = fmt.Sprintf("%d critical gaps detected", criticalGaps)
|
||||
check.Details = fmt.Sprintf("%d total gaps across %d databases", totalGaps, len(allGaps))
|
||||
} else if totalGaps > 0 {
|
||||
check.Status = StatusWarning
|
||||
check.Message = fmt.Sprintf("%d gaps detected", totalGaps)
|
||||
check.Details = fmt.Sprintf("Across %d databases", len(allGaps))
|
||||
} else {
|
||||
check.Message = "No backup gaps detected"
|
||||
}
|
||||
|
||||
return check
|
||||
}
|
||||
|
||||
func checkVerificationStatus(ctx context.Context, cat *catalog.SQLiteCatalog) HealthCheck {
|
||||
check := HealthCheck{
|
||||
Name: "Verification Status",
|
||||
Status: StatusHealthy,
|
||||
}
|
||||
|
||||
stats, err := cat.Stats(ctx)
|
||||
if err != nil {
|
||||
check.Status = StatusWarning
|
||||
check.Message = "Cannot check verification status"
|
||||
return check
|
||||
}
|
||||
|
||||
if stats.TotalBackups == 0 {
|
||||
check.Message = "No backups to verify"
|
||||
return check
|
||||
}
|
||||
|
||||
verifiedPct := float64(stats.VerifiedCount) / float64(stats.TotalBackups) * 100
|
||||
|
||||
if verifiedPct < 25 {
|
||||
check.Status = StatusWarning
|
||||
check.Message = fmt.Sprintf("Only %.0f%% of backups verified", verifiedPct)
|
||||
check.Details = fmt.Sprintf("%d/%d verified", stats.VerifiedCount, stats.TotalBackups)
|
||||
} else {
|
||||
check.Message = fmt.Sprintf("%.0f%% of backups verified", verifiedPct)
|
||||
check.Details = fmt.Sprintf("%d/%d verified", stats.VerifiedCount, stats.TotalBackups)
|
||||
}
|
||||
|
||||
// Check drill testing status too
|
||||
if stats.DrillTestedCount > 0 {
|
||||
check.Details += fmt.Sprintf(", %d drill tested", stats.DrillTestedCount)
|
||||
}
|
||||
|
||||
return check
|
||||
}
|
||||
|
||||
func checkFileIntegrity(ctx context.Context, cat *catalog.SQLiteCatalog) HealthCheck {
|
||||
check := HealthCheck{
|
||||
Name: "File Integrity",
|
||||
Status: StatusHealthy,
|
||||
}
|
||||
|
||||
// Sample recent backups for file existence
|
||||
entries, err := cat.Search(ctx, &catalog.SearchQuery{
|
||||
Limit: 10,
|
||||
OrderBy: "created_at",
|
||||
OrderDesc: true,
|
||||
})
|
||||
if err != nil || len(entries) == 0 {
|
||||
check.Message = "No backups to check"
|
||||
return check
|
||||
}
|
||||
|
||||
missingCount := 0
|
||||
checksumMismatch := 0
|
||||
|
||||
for _, entry := range entries {
|
||||
// Skip cloud backups
|
||||
if entry.CloudLocation != "" {
|
||||
continue
|
||||
}
|
||||
|
||||
// Check file exists
|
||||
info, err := os.Stat(entry.BackupPath)
|
||||
if err != nil {
|
||||
missingCount++
|
||||
continue
|
||||
}
|
||||
|
||||
// Quick size check
|
||||
if info.Size() != entry.SizeBytes {
|
||||
checksumMismatch++
|
||||
}
|
||||
}
|
||||
|
||||
totalChecked := len(entries)
|
||||
|
||||
if missingCount > 0 {
|
||||
check.Status = StatusCritical
|
||||
check.Message = fmt.Sprintf("%d/%d backup files missing", missingCount, totalChecked)
|
||||
} else if checksumMismatch > 0 {
|
||||
check.Status = StatusWarning
|
||||
check.Message = fmt.Sprintf("%d/%d backups have size mismatch", checksumMismatch, totalChecked)
|
||||
} else {
|
||||
check.Message = fmt.Sprintf("Sampled %d recent backups - all present", totalChecked)
|
||||
}
|
||||
|
||||
return check
|
||||
}
|
||||
|
||||
func checkOrphanedEntries(ctx context.Context, cat *catalog.SQLiteCatalog) HealthCheck {
|
||||
check := HealthCheck{
|
||||
Name: "Orphaned Entries",
|
||||
Status: StatusHealthy,
|
||||
}
|
||||
|
||||
// Check for catalog entries pointing to missing files
|
||||
entries, err := cat.Search(ctx, &catalog.SearchQuery{
|
||||
Limit: 50,
|
||||
OrderBy: "created_at",
|
||||
OrderDesc: true,
|
||||
})
|
||||
if err != nil {
|
||||
check.Message = "Cannot check for orphaned entries"
|
||||
return check
|
||||
}
|
||||
|
||||
orphanCount := 0
|
||||
for _, entry := range entries {
|
||||
if entry.CloudLocation != "" {
|
||||
continue // Skip cloud backups
|
||||
}
|
||||
if _, err := os.Stat(entry.BackupPath); os.IsNotExist(err) {
|
||||
orphanCount++
|
||||
}
|
||||
}
|
||||
|
||||
if orphanCount > 0 {
|
||||
check.Status = StatusWarning
|
||||
check.Message = fmt.Sprintf("%d orphaned catalog entries", orphanCount)
|
||||
check.Details = "Files deleted but entries remain in catalog"
|
||||
} else {
|
||||
check.Message = "No orphaned entries detected"
|
||||
}
|
||||
|
||||
return check
|
||||
}
|
||||
|
||||
func checkDiskSpace() HealthCheck {
|
||||
check := HealthCheck{
|
||||
Name: "Disk Space",
|
||||
Status: StatusHealthy,
|
||||
}
|
||||
|
||||
// Simple approach: check if we can write a test file
|
||||
testPath := filepath.Join(cfg.BackupDir, ".space_check")
|
||||
|
||||
// Create a 1MB test to ensure we have space
|
||||
testData := make([]byte, 1024*1024)
|
||||
if err := os.WriteFile(testPath, testData, 0644); err != nil {
|
||||
check.Status = StatusCritical
|
||||
check.Message = "Insufficient disk space or write error"
|
||||
check.Details = err.Error()
|
||||
return check
|
||||
}
|
||||
os.Remove(testPath)
|
||||
|
||||
// Try to get actual free space (Linux-specific)
|
||||
info, err := os.Stat(cfg.BackupDir)
|
||||
if err == nil && info.IsDir() {
|
||||
// Walk the backup directory to get size
|
||||
var totalSize int64
|
||||
filepath.Walk(cfg.BackupDir, func(path string, info os.FileInfo, err error) error {
|
||||
if err == nil && !info.IsDir() {
|
||||
totalSize += info.Size()
|
||||
}
|
||||
return nil
|
||||
})
|
||||
|
||||
check.Message = "Disk space available"
|
||||
check.Details = fmt.Sprintf("Backup directory using %s", formatBytesHealth(totalSize))
|
||||
} else {
|
||||
check.Message = "Disk space available"
|
||||
}
|
||||
|
||||
return check
|
||||
}
|
||||
|
||||
// Output functions
|
||||
|
||||
func outputHealthTable(report *HealthReport) {
|
||||
fmt.Println()
|
||||
|
||||
statusIcon := "✅"
|
||||
statusColor := "\033[32m" // green
|
||||
if report.Status == StatusWarning {
|
||||
statusIcon = "⚠️"
|
||||
statusColor = "\033[33m" // yellow
|
||||
} else if report.Status == StatusCritical {
|
||||
statusIcon = "🚨"
|
||||
statusColor = "\033[31m" // red
|
||||
}
|
||||
|
||||
fmt.Println("═══════════════════════════════════════════════════════════════")
|
||||
fmt.Printf(" %s Backup Health Check\n", statusIcon)
|
||||
fmt.Println("═══════════════════════════════════════════════════════════════")
|
||||
fmt.Println()
|
||||
|
||||
fmt.Printf("Status: %s%s\033[0m\n", statusColor, strings.ToUpper(string(report.Status)))
|
||||
fmt.Printf("Time: %s\n", report.Timestamp.Format("2006-01-02 15:04:05"))
|
||||
fmt.Println()
|
||||
|
||||
fmt.Println("───────────────────────────────────────────────────────────────")
|
||||
fmt.Println("CHECKS")
|
||||
fmt.Println("───────────────────────────────────────────────────────────────")
|
||||
|
||||
for _, check := range report.Checks {
|
||||
icon := "✓"
|
||||
color := "\033[32m"
|
||||
if check.Status == StatusWarning {
|
||||
icon = "!"
|
||||
color = "\033[33m"
|
||||
} else if check.Status == StatusCritical {
|
||||
icon = "✗"
|
||||
color = "\033[31m"
|
||||
}
|
||||
|
||||
fmt.Printf("%s[%s]\033[0m %-22s %s\n", color, icon, check.Name, check.Message)
|
||||
|
||||
if healthVerbose && check.Details != "" {
|
||||
fmt.Printf(" └─ %s\n", check.Details)
|
||||
}
|
||||
}
|
||||
|
||||
fmt.Println()
|
||||
fmt.Println("───────────────────────────────────────────────────────────────")
|
||||
fmt.Printf("Summary: %s\n", report.Summary)
|
||||
fmt.Println("───────────────────────────────────────────────────────────────")
|
||||
|
||||
if len(report.Recommendations) > 0 {
|
||||
fmt.Println()
|
||||
fmt.Println("RECOMMENDATIONS")
|
||||
for _, rec := range report.Recommendations {
|
||||
fmt.Printf(" → %s\n", rec)
|
||||
}
|
||||
}
|
||||
|
||||
fmt.Println()
|
||||
}
|
||||
|
||||
func outputHealthJSON(report *HealthReport) error {
|
||||
data, err := json.MarshalIndent(report, "", " ")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
fmt.Println(string(data))
|
||||
return nil
|
||||
}
|
||||
|
||||
// Helpers
|
||||
|
||||
func formatDurationHealth(d time.Duration) string {
|
||||
if d < time.Minute {
|
||||
return fmt.Sprintf("%.0fs", d.Seconds())
|
||||
}
|
||||
if d < time.Hour {
|
||||
return fmt.Sprintf("%.0fm", d.Minutes())
|
||||
}
|
||||
hours := int(d.Hours())
|
||||
if hours < 24 {
|
||||
return fmt.Sprintf("%dh", hours)
|
||||
}
|
||||
days := hours / 24
|
||||
return fmt.Sprintf("%dd %dh", days, hours%24)
|
||||
}
|
||||
|
||||
func formatBytesHealth(bytes int64) string {
|
||||
const unit = 1024
|
||||
if bytes < unit {
|
||||
return fmt.Sprintf("%d B", bytes)
|
||||
}
|
||||
div, exp := int64(unit), 0
|
||||
for n := bytes / unit; n >= unit; n /= unit {
|
||||
div *= unit
|
||||
exp++
|
||||
}
|
||||
return fmt.Sprintf("%.1f %cB", float64(bytes)/float64(div), "KMGTPE"[exp])
|
||||
}
|
||||
@ -127,8 +127,8 @@ func runMetricsExport(ctx context.Context) error {
|
||||
}
|
||||
defer cat.Close()
|
||||
|
||||
// Create metrics writer
|
||||
writer := prometheus.NewMetricsWriter(log, cat, server)
|
||||
// Create metrics writer with version info
|
||||
writer := prometheus.NewMetricsWriterWithVersion(log, cat, server, cfg.Version, cfg.GitCommit)
|
||||
|
||||
// Write textfile
|
||||
if err := writer.WriteTextfile(metricsOutput); err != nil {
|
||||
@ -162,8 +162,8 @@ func runMetricsServe(ctx context.Context) error {
|
||||
}
|
||||
defer cat.Close()
|
||||
|
||||
// Create exporter
|
||||
exporter := prometheus.NewExporter(log, cat, server, metricsPort)
|
||||
// Create exporter with version info
|
||||
exporter := prometheus.NewExporterWithVersion(log, cat, server, metricsPort, cfg.Version, cfg.GitCommit)
|
||||
|
||||
// Run server (blocks until context is cancelled)
|
||||
return exporter.Serve(ctx)
|
||||
|
||||
@ -66,14 +66,21 @@ TUI Automation Flags (for testing and CI/CD):
|
||||
cfg.TUIVerbose, _ = cmd.Flags().GetBool("verbose-tui")
|
||||
cfg.TUILogFile, _ = cmd.Flags().GetString("tui-log-file")
|
||||
|
||||
// Set conservative profile as default for TUI mode (safer for interactive users)
|
||||
if cfg.ResourceProfile == "" || cfg.ResourceProfile == "balanced" {
|
||||
cfg.ResourceProfile = "conservative"
|
||||
cfg.LargeDBMode = true
|
||||
// FIXED: Only set default profile if user hasn't configured one
|
||||
// Previously this forced conservative mode, ignoring user's saved settings
|
||||
if cfg.ResourceProfile == "" {
|
||||
// No profile configured at all - use balanced as sensible default
|
||||
cfg.ResourceProfile = "balanced"
|
||||
if cfg.Debug {
|
||||
log.Info("TUI mode: using conservative profile by default")
|
||||
log.Info("TUI mode: no profile configured, using 'balanced' default")
|
||||
}
|
||||
} else {
|
||||
// User has a configured profile - RESPECT IT!
|
||||
if cfg.Debug {
|
||||
log.Info("TUI mode: respecting user-configured profile", "profile", cfg.ResourceProfile)
|
||||
}
|
||||
}
|
||||
// Note: LargeDBMode is no longer forced - user controls it via settings
|
||||
|
||||
// Check authentication before starting TUI
|
||||
if cfg.IsPostgreSQL() {
|
||||
@ -274,7 +281,7 @@ func runPreflight(ctx context.Context) error {
|
||||
|
||||
// 4. Disk space check
|
||||
fmt.Print("[4] Available disk space... ")
|
||||
if err := checkDiskSpace(); err != nil {
|
||||
if err := checkPreflightDiskSpace(); err != nil {
|
||||
fmt.Printf("[FAIL] FAILED: %v\n", err)
|
||||
} else {
|
||||
fmt.Println("[OK] PASSED")
|
||||
@ -354,7 +361,7 @@ func checkBackupDirectory() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func checkDiskSpace() error {
|
||||
func checkPreflightDiskSpace() error {
|
||||
// Basic disk space check - this is a simplified version
|
||||
// In a real implementation, you'd use syscall.Statfs or similar
|
||||
if _, err := os.Stat(cfg.BackupDir); os.IsNotExist(err) {
|
||||
|
||||
328
cmd/restore_preview.go
Normal file
328
cmd/restore_preview.go
Normal file
@ -0,0 +1,328 @@
|
||||
package cmd
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/dustin/go-humanize"
|
||||
"github.com/spf13/cobra"
|
||||
|
||||
"dbbackup/internal/restore"
|
||||
)
|
||||
|
||||
var (
|
||||
previewCompareSchema bool
|
||||
previewEstimate bool
|
||||
)
|
||||
|
||||
var restorePreviewCmd = &cobra.Command{
|
||||
Use: "preview [archive-file]",
|
||||
Short: "Preview backup contents before restoring",
|
||||
Long: `Show detailed information about what a backup contains before actually restoring it.
|
||||
|
||||
This command analyzes backup archives and provides:
|
||||
- Database name, version, and size information
|
||||
- Table count and largest tables
|
||||
- Estimated restore time based on system resources
|
||||
- Required disk space
|
||||
- Schema comparison with current database (optional)
|
||||
- Resource recommendations
|
||||
|
||||
Use this to:
|
||||
- See what you'll get before committing to a long restore
|
||||
- Estimate restore time and resource requirements
|
||||
- Identify schema changes since backup was created
|
||||
- Verify backup contains expected data
|
||||
|
||||
Examples:
|
||||
# Preview a backup
|
||||
dbbackup restore preview mydb.dump.gz
|
||||
|
||||
# Preview with restore time estimation
|
||||
dbbackup restore preview mydb.dump.gz --estimate
|
||||
|
||||
# Preview with schema comparison to current database
|
||||
dbbackup restore preview mydb.dump.gz --compare-schema
|
||||
|
||||
# Preview cluster backup
|
||||
dbbackup restore preview cluster_backup.tar.gz
|
||||
`,
|
||||
Args: cobra.ExactArgs(1),
|
||||
RunE: runRestorePreview,
|
||||
}
|
||||
|
||||
func init() {
|
||||
restoreCmd.AddCommand(restorePreviewCmd)
|
||||
|
||||
restorePreviewCmd.Flags().BoolVar(&previewCompareSchema, "compare-schema", false, "Compare backup schema with current database")
|
||||
restorePreviewCmd.Flags().BoolVar(&previewEstimate, "estimate", true, "Estimate restore time and resource requirements")
|
||||
restorePreviewCmd.Flags().BoolVar(&restoreVerbose, "verbose", false, "Show detailed analysis")
|
||||
}
|
||||
|
||||
func runRestorePreview(cmd *cobra.Command, args []string) error {
|
||||
archivePath := args[0]
|
||||
|
||||
// Convert to absolute path
|
||||
if !filepath.IsAbs(archivePath) {
|
||||
absPath, err := filepath.Abs(archivePath)
|
||||
if err != nil {
|
||||
return fmt.Errorf("invalid archive path: %w", err)
|
||||
}
|
||||
archivePath = absPath
|
||||
}
|
||||
|
||||
// Check if file exists
|
||||
stat, err := os.Stat(archivePath)
|
||||
if err != nil {
|
||||
return fmt.Errorf("archive not found: %s", archivePath)
|
||||
}
|
||||
|
||||
fmt.Printf("\n%s\n", strings.Repeat("=", 70))
|
||||
fmt.Printf("BACKUP PREVIEW: %s\n", filepath.Base(archivePath))
|
||||
fmt.Printf("%s\n\n", strings.Repeat("=", 70))
|
||||
|
||||
// Get file info
|
||||
fileSize := stat.Size()
|
||||
fmt.Printf("File Information:\n")
|
||||
fmt.Printf(" Path: %s\n", archivePath)
|
||||
fmt.Printf(" Size: %s (%d bytes)\n", humanize.Bytes(uint64(fileSize)), fileSize)
|
||||
fmt.Printf(" Modified: %s\n", stat.ModTime().Format("2006-01-02 15:04:05"))
|
||||
fmt.Printf(" Age: %s\n", humanize.Time(stat.ModTime()))
|
||||
fmt.Println()
|
||||
|
||||
// Detect format
|
||||
format := restore.DetectArchiveFormat(archivePath)
|
||||
fmt.Printf("Format Detection:\n")
|
||||
fmt.Printf(" Type: %s\n", format.String())
|
||||
|
||||
if format.IsCompressed() {
|
||||
fmt.Printf(" Compressed: Yes\n")
|
||||
} else {
|
||||
fmt.Printf(" Compressed: No\n")
|
||||
}
|
||||
fmt.Println()
|
||||
|
||||
// Run diagnosis
|
||||
diagnoser := restore.NewDiagnoser(log, restoreVerbose)
|
||||
result, err := diagnoser.DiagnoseFile(archivePath)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to analyze backup: %w", err)
|
||||
}
|
||||
|
||||
// Database information
|
||||
fmt.Printf("Database Information:\n")
|
||||
|
||||
if format.IsClusterBackup() {
|
||||
// For cluster backups, extract database list
|
||||
fmt.Printf(" Type: Cluster Backup (multiple databases)\n")
|
||||
|
||||
// Try to list databases
|
||||
if dbList, err := listDatabasesInCluster(archivePath); err == nil && len(dbList) > 0 {
|
||||
fmt.Printf(" Databases: %d\n", len(dbList))
|
||||
fmt.Printf("\n Database List:\n")
|
||||
for _, db := range dbList {
|
||||
fmt.Printf(" - %s\n", db)
|
||||
}
|
||||
} else {
|
||||
fmt.Printf(" Databases: Multiple (use --list-databases to see all)\n")
|
||||
}
|
||||
} else {
|
||||
// Single database backup
|
||||
dbName := extractDatabaseName(archivePath, result)
|
||||
fmt.Printf(" Database: %s\n", dbName)
|
||||
|
||||
if result.Details != nil && result.Details.TableCount > 0 {
|
||||
fmt.Printf(" Tables: %d\n", result.Details.TableCount)
|
||||
|
||||
if len(result.Details.TableList) > 0 {
|
||||
fmt.Printf("\n Largest Tables (top 5):\n")
|
||||
displayCount := 5
|
||||
if len(result.Details.TableList) < displayCount {
|
||||
displayCount = len(result.Details.TableList)
|
||||
}
|
||||
for i := 0; i < displayCount; i++ {
|
||||
fmt.Printf(" - %s\n", result.Details.TableList[i])
|
||||
}
|
||||
if len(result.Details.TableList) > 5 {
|
||||
fmt.Printf(" ... and %d more\n", len(result.Details.TableList)-5)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
fmt.Println()
|
||||
|
||||
// Size estimation
|
||||
if result.Details != nil && result.Details.ExpandedSize > 0 {
|
||||
fmt.Printf("Size Estimates:\n")
|
||||
fmt.Printf(" Compressed: %s\n", humanize.Bytes(uint64(fileSize)))
|
||||
fmt.Printf(" Uncompressed: %s\n", humanize.Bytes(uint64(result.Details.ExpandedSize)))
|
||||
|
||||
if result.Details.CompressionRatio > 0 {
|
||||
fmt.Printf(" Ratio: %.1f%% (%.2fx compression)\n",
|
||||
result.Details.CompressionRatio*100,
|
||||
float64(result.Details.ExpandedSize)/float64(fileSize))
|
||||
}
|
||||
|
||||
// Estimate disk space needed (uncompressed + indexes + temp space)
|
||||
estimatedDisk := int64(float64(result.Details.ExpandedSize) * 1.5) // 1.5x for indexes and temp
|
||||
fmt.Printf(" Disk needed: %s (including indexes and temporary space)\n",
|
||||
humanize.Bytes(uint64(estimatedDisk)))
|
||||
fmt.Println()
|
||||
}
|
||||
|
||||
// Restore time estimation
|
||||
if previewEstimate {
|
||||
fmt.Printf("Restore Estimates:\n")
|
||||
|
||||
// Apply current profile
|
||||
profile := cfg.GetCurrentProfile()
|
||||
if profile != nil {
|
||||
fmt.Printf(" Profile: %s (P:%d J:%d)\n",
|
||||
profile.Name, profile.ClusterParallelism, profile.Jobs)
|
||||
}
|
||||
|
||||
// Estimate extraction time
|
||||
extractionSpeed := int64(500 * 1024 * 1024) // 500 MB/s typical
|
||||
extractionTime := time.Duration(fileSize/extractionSpeed) * time.Second
|
||||
|
||||
fmt.Printf(" Extract time: ~%s\n", formatDuration(extractionTime))
|
||||
|
||||
// Estimate restore time (depends on data size and parallelism)
|
||||
if result.Details != nil && result.Details.ExpandedSize > 0 {
|
||||
// Rough estimate: 50MB/s per job for PostgreSQL restore
|
||||
restoreSpeed := int64(50 * 1024 * 1024)
|
||||
if profile != nil {
|
||||
restoreSpeed *= int64(profile.Jobs)
|
||||
}
|
||||
restoreTime := time.Duration(result.Details.ExpandedSize/restoreSpeed) * time.Second
|
||||
|
||||
fmt.Printf(" Restore time: ~%s\n", formatDuration(restoreTime))
|
||||
|
||||
// Validation time (10% of restore)
|
||||
validationTime := restoreTime / 10
|
||||
fmt.Printf(" Validation: ~%s\n", formatDuration(validationTime))
|
||||
|
||||
// Total
|
||||
totalTime := extractionTime + restoreTime + validationTime
|
||||
fmt.Printf(" Total (RTO): ~%s\n", formatDuration(totalTime))
|
||||
}
|
||||
|
||||
fmt.Println()
|
||||
}
|
||||
|
||||
// Validation status
|
||||
fmt.Printf("Validation Status:\n")
|
||||
if result.IsValid {
|
||||
fmt.Printf(" Status: ✓ VALID - Backup appears intact\n")
|
||||
} else {
|
||||
fmt.Printf(" Status: ✗ INVALID - Backup has issues\n")
|
||||
}
|
||||
|
||||
if result.IsTruncated {
|
||||
fmt.Printf(" Truncation: ✗ File appears truncated\n")
|
||||
}
|
||||
if result.IsCorrupted {
|
||||
fmt.Printf(" Corruption: ✗ Corruption detected\n")
|
||||
}
|
||||
|
||||
if len(result.Errors) > 0 {
|
||||
fmt.Printf("\n Errors:\n")
|
||||
for _, err := range result.Errors {
|
||||
fmt.Printf(" - %s\n", err)
|
||||
}
|
||||
}
|
||||
|
||||
if len(result.Warnings) > 0 {
|
||||
fmt.Printf("\n Warnings:\n")
|
||||
for _, warn := range result.Warnings {
|
||||
fmt.Printf(" - %s\n", warn)
|
||||
}
|
||||
}
|
||||
fmt.Println()
|
||||
|
||||
// Schema comparison
|
||||
if previewCompareSchema {
|
||||
fmt.Printf("Schema Comparison:\n")
|
||||
fmt.Printf(" Status: Not yet implemented\n")
|
||||
fmt.Printf(" (Compare with current database schema)\n")
|
||||
fmt.Println()
|
||||
}
|
||||
|
||||
// Recommendations
|
||||
fmt.Printf("Recommendations:\n")
|
||||
|
||||
if !result.IsValid {
|
||||
fmt.Printf(" - ✗ DO NOT restore this backup - validation failed\n")
|
||||
fmt.Printf(" - Run 'dbbackup restore diagnose %s' for detailed analysis\n", filepath.Base(archivePath))
|
||||
} else {
|
||||
fmt.Printf(" - ✓ Backup is valid and ready to restore\n")
|
||||
|
||||
// Resource recommendations
|
||||
if result.Details != nil && result.Details.ExpandedSize > 0 {
|
||||
estimatedRAM := result.Details.ExpandedSize / (1024 * 1024 * 1024) / 10 // Rough: 10% of data size
|
||||
if estimatedRAM < 4 {
|
||||
estimatedRAM = 4
|
||||
}
|
||||
fmt.Printf(" - Recommended RAM: %dGB or more\n", estimatedRAM)
|
||||
|
||||
// Disk space
|
||||
estimatedDisk := int64(float64(result.Details.ExpandedSize) * 1.5)
|
||||
fmt.Printf(" - Ensure %s free disk space\n", humanize.Bytes(uint64(estimatedDisk)))
|
||||
}
|
||||
|
||||
// Profile recommendation
|
||||
if result.Details != nil && result.Details.TableCount > 100 {
|
||||
fmt.Printf(" - Use 'conservative' profile for databases with many tables\n")
|
||||
} else {
|
||||
fmt.Printf(" - Use 'turbo' profile for fastest restore\n")
|
||||
}
|
||||
}
|
||||
|
||||
fmt.Printf("\n%s\n", strings.Repeat("=", 70))
|
||||
|
||||
if result.IsValid {
|
||||
fmt.Printf("Ready to restore? Run:\n")
|
||||
if format.IsClusterBackup() {
|
||||
fmt.Printf(" dbbackup restore cluster %s --confirm\n", filepath.Base(archivePath))
|
||||
} else {
|
||||
fmt.Printf(" dbbackup restore single %s --confirm\n", filepath.Base(archivePath))
|
||||
}
|
||||
} else {
|
||||
fmt.Printf("Fix validation errors before attempting restore.\n")
|
||||
}
|
||||
fmt.Printf("%s\n\n", strings.Repeat("=", 70))
|
||||
|
||||
if !result.IsValid {
|
||||
return fmt.Errorf("backup validation failed")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Helper functions
|
||||
|
||||
func extractDatabaseName(archivePath string, result *restore.DiagnoseResult) string {
|
||||
// Try to extract from filename
|
||||
baseName := filepath.Base(archivePath)
|
||||
baseName = strings.TrimSuffix(baseName, ".gz")
|
||||
baseName = strings.TrimSuffix(baseName, ".dump")
|
||||
baseName = strings.TrimSuffix(baseName, ".sql")
|
||||
baseName = strings.TrimSuffix(baseName, ".tar")
|
||||
|
||||
// Remove timestamp patterns
|
||||
parts := strings.Split(baseName, "_")
|
||||
if len(parts) > 0 {
|
||||
return parts[0]
|
||||
}
|
||||
|
||||
return "unknown"
|
||||
}
|
||||
|
||||
func listDatabasesInCluster(archivePath string) ([]string, error) {
|
||||
// This would extract and list databases from tar.gz
|
||||
// For now, return empty to indicate it needs implementation
|
||||
return nil, fmt.Errorf("not implemented")
|
||||
}
|
||||
33
cmd/root.go
33
cmd/root.go
@ -3,6 +3,7 @@ package cmd
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"strings"
|
||||
|
||||
"dbbackup/internal/config"
|
||||
"dbbackup/internal/logger"
|
||||
@ -54,9 +55,26 @@ For help with specific commands, use: dbbackup [command] --help`,
|
||||
|
||||
// Load local config if not disabled
|
||||
if !cfg.NoLoadConfig {
|
||||
if localCfg, err := config.LoadLocalConfig(); err != nil {
|
||||
log.Warn("Failed to load local config", "error", err)
|
||||
} else if localCfg != nil {
|
||||
// Use custom config path if specified, otherwise default to current directory
|
||||
var localCfg *config.LocalConfig
|
||||
var err error
|
||||
if cfg.ConfigPath != "" {
|
||||
localCfg, err = config.LoadLocalConfigFromPath(cfg.ConfigPath)
|
||||
if err != nil {
|
||||
log.Warn("Failed to load config from specified path", "path", cfg.ConfigPath, "error", err)
|
||||
} else if localCfg != nil {
|
||||
log.Info("Loaded configuration", "path", cfg.ConfigPath)
|
||||
}
|
||||
} else {
|
||||
localCfg, err = config.LoadLocalConfig()
|
||||
if err != nil {
|
||||
log.Warn("Failed to load local config", "error", err)
|
||||
} else if localCfg != nil {
|
||||
log.Info("Loaded configuration from .dbbackup.conf")
|
||||
}
|
||||
}
|
||||
|
||||
if localCfg != nil {
|
||||
// Save current flag values that were explicitly set
|
||||
savedBackupDir := cfg.BackupDir
|
||||
savedHost := cfg.Host
|
||||
@ -71,7 +89,6 @@ For help with specific commands, use: dbbackup [command] --help`,
|
||||
|
||||
// Apply config from file
|
||||
config.ApplyLocalConfig(cfg, localCfg)
|
||||
log.Info("Loaded configuration from .dbbackup.conf")
|
||||
|
||||
// Restore explicitly set flag values (flags have priority)
|
||||
if flagsSet["backup-dir"] {
|
||||
@ -107,6 +124,12 @@ For help with specific commands, use: dbbackup [command] --help`,
|
||||
}
|
||||
}
|
||||
|
||||
// Auto-detect socket from --host path (if host starts with /)
|
||||
if strings.HasPrefix(cfg.Host, "/") && cfg.Socket == "" {
|
||||
cfg.Socket = cfg.Host
|
||||
cfg.Host = "localhost" // Reset host for socket connections
|
||||
}
|
||||
|
||||
return cfg.SetDatabaseType(cfg.DatabaseType)
|
||||
},
|
||||
}
|
||||
@ -134,8 +157,10 @@ func Execute(ctx context.Context, config *config.Config, logger logger.Logger) e
|
||||
cfg.Version, cfg.BuildTime, cfg.GitCommit)
|
||||
|
||||
// Add persistent flags
|
||||
rootCmd.PersistentFlags().StringVarP(&cfg.ConfigPath, "config", "c", "", "Path to config file (default: .dbbackup.conf in current directory)")
|
||||
rootCmd.PersistentFlags().StringVar(&cfg.Host, "host", cfg.Host, "Database host")
|
||||
rootCmd.PersistentFlags().IntVar(&cfg.Port, "port", cfg.Port, "Database port")
|
||||
rootCmd.PersistentFlags().StringVar(&cfg.Socket, "socket", cfg.Socket, "Unix socket path for MySQL/MariaDB (e.g., /var/run/mysqld/mysqld.sock)")
|
||||
rootCmd.PersistentFlags().StringVar(&cfg.User, "user", cfg.User, "Database user")
|
||||
rootCmd.PersistentFlags().StringVar(&cfg.Database, "database", cfg.Database, "Database name")
|
||||
rootCmd.PersistentFlags().StringVar(&cfg.Password, "password", cfg.Password, "Database password")
|
||||
|
||||
339
docs/CATALOG.md
Normal file
339
docs/CATALOG.md
Normal file
@ -0,0 +1,339 @@
|
||||
# Backup Catalog
|
||||
|
||||
Complete reference for the dbbackup catalog system for tracking, managing, and analyzing backup inventory.
|
||||
|
||||
## Overview
|
||||
|
||||
The catalog is a SQLite database that tracks all backups, providing:
|
||||
- Backup gap detection (missing scheduled backups)
|
||||
- Retention policy compliance verification
|
||||
- Backup integrity tracking
|
||||
- Historical retention enforcement
|
||||
- Full-text search over backup metadata
|
||||
|
||||
## Quick Start
|
||||
|
||||
```bash
|
||||
# Initialize catalog (automatic on first use)
|
||||
dbbackup catalog sync /mnt/backups/databases
|
||||
|
||||
# List all backups in catalog
|
||||
dbbackup catalog list
|
||||
|
||||
# Show catalog statistics
|
||||
dbbackup catalog stats
|
||||
|
||||
# View backup details
|
||||
dbbackup catalog info mydb_2026-01-23.dump.gz
|
||||
|
||||
# Search for backups
|
||||
dbbackup catalog search --database myapp --after 2026-01-01
|
||||
```
|
||||
|
||||
## Catalog Sync
|
||||
|
||||
Syncs local backup directory with catalog database.
|
||||
|
||||
```bash
|
||||
# Sync all backups in directory
|
||||
dbbackup catalog sync /mnt/backups/databases
|
||||
|
||||
# Force rescan (useful if backups were added manually)
|
||||
dbbackup catalog sync /mnt/backups/databases --force
|
||||
|
||||
# Sync specific database backups
|
||||
dbbackup catalog sync /mnt/backups/databases --database myapp
|
||||
|
||||
# Dry-run to see what would be synced
|
||||
dbbackup catalog sync /mnt/backups/databases --dry-run
|
||||
```
|
||||
|
||||
Catalog entries include:
|
||||
- Backup filename
|
||||
- Database name
|
||||
- Backup timestamp
|
||||
- Size (bytes)
|
||||
- Compression ratio
|
||||
- Encryption status
|
||||
- Backup type (full/incremental/pitr_base)
|
||||
- Retention status
|
||||
- Checksum/hash
|
||||
|
||||
## Listing Backups
|
||||
|
||||
### Show All Backups
|
||||
|
||||
```bash
|
||||
dbbackup catalog list
|
||||
```
|
||||
|
||||
Output format:
|
||||
```
|
||||
Database Timestamp Size Compressed Encrypted Verified Type
|
||||
myapp 2026-01-23 14:30:00 2.5 GB 62% yes yes full
|
||||
myapp 2026-01-23 02:00:00 1.2 GB 58% yes yes incremental
|
||||
mydb 2026-01-23 22:15:00 856 MB 64% no no full
|
||||
```
|
||||
|
||||
### Filter by Database
|
||||
|
||||
```bash
|
||||
dbbackup catalog list --database myapp
|
||||
```
|
||||
|
||||
### Filter by Date Range
|
||||
|
||||
```bash
|
||||
dbbackup catalog list --after 2026-01-01 --before 2026-01-31
|
||||
```
|
||||
|
||||
### Sort Results
|
||||
|
||||
```bash
|
||||
dbbackup catalog list --sort size --reverse # Largest first
|
||||
dbbackup catalog list --sort date # Oldest first
|
||||
dbbackup catalog list --sort verified # Verified first
|
||||
```
|
||||
|
||||
## Statistics and Gaps
|
||||
|
||||
### Show Catalog Statistics
|
||||
|
||||
```bash
|
||||
dbbackup catalog stats
|
||||
```
|
||||
|
||||
Output includes:
|
||||
- Total backups
|
||||
- Total size stored
|
||||
- Unique databases
|
||||
- Success/failure ratio
|
||||
- Oldest/newest backup
|
||||
- Average backup size
|
||||
|
||||
### Detect Backup Gaps
|
||||
|
||||
Gaps are missing expected backups based on schedule.
|
||||
|
||||
```bash
|
||||
# Show gaps in mydb backups (assuming daily schedule)
|
||||
dbbackup catalog gaps mydb --interval 24h
|
||||
|
||||
# 12-hour interval
|
||||
dbbackup catalog gaps mydb --interval 12h
|
||||
|
||||
# Show as calendar grid
|
||||
dbbackup catalog gaps mydb --interval 24h --calendar
|
||||
|
||||
# Define custom work hours (backup only weekdays 02:00)
|
||||
dbbackup catalog gaps mydb --interval 24h --workdays-only
|
||||
```
|
||||
|
||||
Output shows:
|
||||
- Dates with missing backups
|
||||
- Expected backup count
|
||||
- Actual backup count
|
||||
- Gap duration
|
||||
- Reasons (if known)
|
||||
|
||||
## Searching
|
||||
|
||||
Full-text search across backup metadata.
|
||||
|
||||
```bash
|
||||
# Search by database name
|
||||
dbbackup catalog search --database myapp
|
||||
|
||||
# Search by date
|
||||
dbbackup catalog search --after 2026-01-01 --before 2026-01-31
|
||||
|
||||
# Search by size range (GB)
|
||||
dbbackup catalog search --min-size 0.5 --max-size 5.0
|
||||
|
||||
# Search by backup type
|
||||
dbbackup catalog search --backup-type incremental
|
||||
|
||||
# Search by encryption status
|
||||
dbbackup catalog search --encrypted
|
||||
|
||||
# Search by verification status
|
||||
dbbackup catalog search --verified
|
||||
|
||||
# Combine filters
|
||||
dbbackup catalog search --database myapp --encrypted --after 2026-01-01
|
||||
```
|
||||
|
||||
## Backup Details
|
||||
|
||||
```bash
|
||||
# Show full details for a specific backup
|
||||
dbbackup catalog info mydb_2026-01-23.dump.gz
|
||||
|
||||
# Output includes:
|
||||
# - Filename and path
|
||||
# - Database name and version
|
||||
# - Backup timestamp
|
||||
# - Backup type (full/incremental/pitr_base)
|
||||
# - Size (compressed/uncompressed)
|
||||
# - Compression ratio
|
||||
# - Encryption (algorithm, key hash)
|
||||
# - Checksums (md5, sha256)
|
||||
# - Verification status and date
|
||||
# - Retention classification (daily/weekly/monthly)
|
||||
# - Comments/notes
|
||||
```
|
||||
|
||||
## Retention Classification
|
||||
|
||||
The catalog classifies backups according to retention policies.
|
||||
|
||||
### GFS (Grandfather-Father-Son) Classification
|
||||
|
||||
```
|
||||
Daily: Last 7 backups
|
||||
Weekly: One backup per week for 4 weeks
|
||||
Monthly: One backup per month for 12 months
|
||||
```
|
||||
|
||||
Example:
|
||||
```bash
|
||||
dbbackup catalog list --show-retention
|
||||
|
||||
# Output shows:
|
||||
# myapp_2026-01-23.dump.gz daily (retain 6 more days)
|
||||
# myapp_2026-01-16.dump.gz weekly (retain 3 more weeks)
|
||||
# myapp_2026-01-01.dump.gz monthly (retain 11 more months)
|
||||
```
|
||||
|
||||
## Compliance Reports
|
||||
|
||||
Generate compliance reports based on catalog data.
|
||||
|
||||
```bash
|
||||
# Backup compliance report
|
||||
dbbackup catalog compliance-report
|
||||
|
||||
# Shows:
|
||||
# - All backups compliant with retention policy
|
||||
# - Gaps exceeding SLA
|
||||
# - Failed backups
|
||||
# - Unverified backups
|
||||
# - Encryption status
|
||||
```
|
||||
|
||||
## Configuration
|
||||
|
||||
Catalog settings in `.dbbackup.conf`:
|
||||
|
||||
```ini
|
||||
[catalog]
|
||||
# Enable catalog (default: true)
|
||||
enabled = true
|
||||
|
||||
# Catalog database path (default: ~/.dbbackup/catalog.db)
|
||||
db_path = /var/lib/dbbackup/catalog.db
|
||||
|
||||
# Retention days (default: 30)
|
||||
retention_days = 30
|
||||
|
||||
# Minimum backups to keep (default: 5)
|
||||
min_backups = 5
|
||||
|
||||
# Enable gap detection (default: true)
|
||||
gap_detection = true
|
||||
|
||||
# Gap alert threshold (hours, default: 36)
|
||||
gap_threshold_hours = 36
|
||||
|
||||
# Verify backups automatically (default: true)
|
||||
auto_verify = true
|
||||
```
|
||||
|
||||
## Maintenance
|
||||
|
||||
### Rebuild Catalog
|
||||
|
||||
Rebuild from scratch (useful if corrupted):
|
||||
|
||||
```bash
|
||||
dbbackup catalog rebuild /mnt/backups/databases
|
||||
```
|
||||
|
||||
### Export Catalog
|
||||
|
||||
Export to CSV for analysis in spreadsheet/BI tools:
|
||||
|
||||
```bash
|
||||
dbbackup catalog export --format csv --output catalog.csv
|
||||
```
|
||||
|
||||
Supported formats:
|
||||
- csv (Excel compatible)
|
||||
- json (structured data)
|
||||
- html (browseable report)
|
||||
|
||||
### Cleanup Orphaned Entries
|
||||
|
||||
Remove catalog entries for deleted backups:
|
||||
|
||||
```bash
|
||||
dbbackup catalog cleanup --orphaned
|
||||
|
||||
# Dry-run
|
||||
dbbackup catalog cleanup --orphaned --dry-run
|
||||
```
|
||||
|
||||
## Examples
|
||||
|
||||
### Find All Encrypted Backups from Last Week
|
||||
|
||||
```bash
|
||||
dbbackup catalog search \
|
||||
--after "$(date -d '7 days ago' +%Y-%m-%d)" \
|
||||
--encrypted
|
||||
```
|
||||
|
||||
### Generate Weekly Compliance Report
|
||||
|
||||
```bash
|
||||
dbbackup catalog search \
|
||||
--after "$(date -d '7 days ago' +%Y-%m-%d)" \
|
||||
--show-retention \
|
||||
--verified
|
||||
```
|
||||
|
||||
### Monitor Backup Size Growth
|
||||
|
||||
```bash
|
||||
dbbackup catalog stats | grep "Average backup size"
|
||||
|
||||
# Track over time
|
||||
for week in $(seq 1 4); do
|
||||
DATE=$(date -d "$((week*7)) days ago" +%Y-%m-%d)
|
||||
echo "Week of $DATE:"
|
||||
dbbackup catalog stats --after "$DATE" | grep "Average backup size"
|
||||
done
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Catalog Shows Wrong Count
|
||||
|
||||
Resync the catalog:
|
||||
```bash
|
||||
dbbackup catalog sync /mnt/backups/databases --force
|
||||
```
|
||||
|
||||
### Gaps Detected But Backups Exist
|
||||
|
||||
Manual backups not in catalog - sync them:
|
||||
```bash
|
||||
dbbackup catalog sync /mnt/backups/databases
|
||||
```
|
||||
|
||||
### Corruption Error
|
||||
|
||||
Rebuild catalog:
|
||||
```bash
|
||||
dbbackup catalog rebuild /mnt/backups/databases
|
||||
```
|
||||
365
docs/DRILL.md
Normal file
365
docs/DRILL.md
Normal file
@ -0,0 +1,365 @@
|
||||
# Disaster Recovery Drilling
|
||||
|
||||
Complete guide for automated disaster recovery testing with dbbackup.
|
||||
|
||||
## Overview
|
||||
|
||||
DR drills automate the process of validating backup integrity through actual restore testing. Instead of hoping backups work when needed, automated drills regularly restore backups in isolated containers to verify:
|
||||
|
||||
- Backup file integrity
|
||||
- Database compatibility
|
||||
- Restore time estimates (RTO)
|
||||
- Schema validation
|
||||
- Data consistency
|
||||
|
||||
## Quick Start
|
||||
|
||||
```bash
|
||||
# Run single DR drill on latest backup
|
||||
dbbackup drill /mnt/backups/databases
|
||||
|
||||
# Drill specific database
|
||||
dbbackup drill /mnt/backups/databases --database myapp
|
||||
|
||||
# Drill multiple databases
|
||||
dbbackup drill /mnt/backups/databases --database myapp,mydb
|
||||
|
||||
# Schedule daily drills
|
||||
dbbackup drill /mnt/backups/databases --schedule daily
|
||||
```
|
||||
|
||||
## How It Works
|
||||
|
||||
1. **Select backup** - Picks latest or specified backup
|
||||
2. **Create container** - Starts isolated database container
|
||||
3. **Extract backup** - Decompresses to temporary storage
|
||||
4. **Restore** - Imports data to test database
|
||||
5. **Validate** - Runs integrity checks
|
||||
6. **Cleanup** - Removes test container
|
||||
7. **Report** - Stores results in catalog
|
||||
|
||||
## Drill Configuration
|
||||
|
||||
### Select Specific Backup
|
||||
|
||||
```bash
|
||||
# Latest backup for database
|
||||
dbbackup drill /mnt/backups/databases --database myapp
|
||||
|
||||
# Backup from specific date
|
||||
dbbackup drill /mnt/backups/databases --database myapp --date 2026-01-23
|
||||
|
||||
# Oldest backup (best test)
|
||||
dbbackup drill /mnt/backups/databases --database myapp --oldest
|
||||
```
|
||||
|
||||
### Drill Options
|
||||
|
||||
```bash
|
||||
# Full validation (slower)
|
||||
dbbackup drill /mnt/backups/databases --full-validation
|
||||
|
||||
# Quick validation (schema only, faster)
|
||||
dbbackup drill /mnt/backups/databases --quick-validation
|
||||
|
||||
# Store results in catalog
|
||||
dbbackup drill /mnt/backups/databases --catalog
|
||||
|
||||
# Send notification on failure
|
||||
dbbackup drill /mnt/backups/databases --notify-on-failure
|
||||
|
||||
# Custom test database name
|
||||
dbbackup drill /mnt/backups/databases --test-database dr_test_prod
|
||||
```
|
||||
|
||||
## Scheduled Drills
|
||||
|
||||
Run drills automatically on a schedule.
|
||||
|
||||
### Configure Schedule
|
||||
|
||||
```bash
|
||||
# Daily drill at 03:00
|
||||
dbbackup drill /mnt/backups/databases --schedule "03:00"
|
||||
|
||||
# Weekly drill (Sunday 02:00)
|
||||
dbbackup drill /mnt/backups/databases --schedule "sun 02:00"
|
||||
|
||||
# Monthly drill (1st of month)
|
||||
dbbackup drill /mnt/backups/databases --schedule "monthly"
|
||||
|
||||
# Install as systemd timer
|
||||
sudo dbbackup install drill \
|
||||
--backup-path /mnt/backups/databases \
|
||||
--schedule "03:00"
|
||||
```
|
||||
|
||||
### Verify Schedule
|
||||
|
||||
```bash
|
||||
# Show next 5 scheduled drills
|
||||
dbbackup drill list --upcoming
|
||||
|
||||
# Check drill history
|
||||
dbbackup drill list --history
|
||||
|
||||
# Show drill statistics
|
||||
dbbackup drill stats
|
||||
```
|
||||
|
||||
## Drill Results
|
||||
|
||||
### View Drill History
|
||||
|
||||
```bash
|
||||
# All drill results
|
||||
dbbackup drill list
|
||||
|
||||
# Recent 10 drills
|
||||
dbbackup drill list --limit 10
|
||||
|
||||
# Drills from last week
|
||||
dbbackup drill list --after "$(date -d '7 days ago' +%Y-%m-%d)"
|
||||
|
||||
# Failed drills only
|
||||
dbbackup drill list --status failed
|
||||
|
||||
# Passed drills only
|
||||
dbbackup drill list --status passed
|
||||
```
|
||||
|
||||
### Detailed Drill Report
|
||||
|
||||
```bash
|
||||
dbbackup drill report myapp_2026-01-23.dump.gz
|
||||
|
||||
# Output includes:
|
||||
# - Backup filename
|
||||
# - Database version
|
||||
# - Extract time
|
||||
# - Restore time
|
||||
# - Row counts (before/after)
|
||||
# - Table verification results
|
||||
# - Data integrity status
|
||||
# - Pass/Fail verdict
|
||||
# - Warnings/errors
|
||||
```
|
||||
|
||||
## Validation Types
|
||||
|
||||
### Full Validation
|
||||
|
||||
Deep integrity checks on restored data.
|
||||
|
||||
```bash
|
||||
dbbackup drill /mnt/backups/databases --full-validation
|
||||
|
||||
# Checks:
|
||||
# - All tables restored
|
||||
# - Row counts match original
|
||||
# - Indexes present and valid
|
||||
# - Constraints enforced
|
||||
# - Foreign key references valid
|
||||
# - Sequence values correct (PostgreSQL)
|
||||
# - Triggers present (if not system-generated)
|
||||
```
|
||||
|
||||
### Quick Validation
|
||||
|
||||
Schema-only validation (fast).
|
||||
|
||||
```bash
|
||||
dbbackup drill /mnt/backups/databases --quick-validation
|
||||
|
||||
# Checks:
|
||||
# - Database connects
|
||||
# - All tables present
|
||||
# - Column definitions correct
|
||||
# - Indexes exist
|
||||
```
|
||||
|
||||
### Custom Validation
|
||||
|
||||
Run custom SQL checks.
|
||||
|
||||
```bash
|
||||
# Add custom validation query
|
||||
dbbackup drill /mnt/backups/databases \
|
||||
--validation-query "SELECT COUNT(*) FROM users" \
|
||||
--validation-expected 15000
|
||||
|
||||
# Example for multiple tables
|
||||
dbbackup drill /mnt/backups/databases \
|
||||
--validation-query "SELECT COUNT(*) FROM orders WHERE status='completed'" \
|
||||
--validation-expected 42000
|
||||
```
|
||||
|
||||
## Reporting
|
||||
|
||||
### Generate Drill Report
|
||||
|
||||
```bash
|
||||
# HTML report (email-friendly)
|
||||
dbbackup drill report --format html --output drill-report.html
|
||||
|
||||
# JSON report (for CI/CD pipelines)
|
||||
dbbackup drill report --format json --output drill-results.json
|
||||
|
||||
# Markdown report (GitHub integration)
|
||||
dbbackup drill report --format markdown --output drill-results.md
|
||||
```
|
||||
|
||||
### Example Report Format
|
||||
|
||||
```
|
||||
Disaster Recovery Drill Results
|
||||
================================
|
||||
|
||||
Backup: myapp_2026-01-23_14-30-00.dump.gz
|
||||
Date: 2026-01-25 03:15:00
|
||||
Duration: 5m 32s
|
||||
Status: PASSED
|
||||
|
||||
Details:
|
||||
Extract Time: 1m 15s
|
||||
Restore Time: 3m 42s
|
||||
Validation Time: 34s
|
||||
|
||||
Tables Restored: 42
|
||||
Rows Verified: 1,234,567
|
||||
Total Size: 2.5 GB
|
||||
|
||||
Validation:
|
||||
Schema Check: OK
|
||||
Row Count Check: OK (all tables)
|
||||
Index Check: OK (all 28 indexes present)
|
||||
Constraint Check: OK (all 5 foreign keys valid)
|
||||
|
||||
Warnings: None
|
||||
Errors: None
|
||||
```
|
||||
|
||||
## Integration with CI/CD
|
||||
|
||||
### GitHub Actions
|
||||
|
||||
```yaml
|
||||
name: Daily DR Drill
|
||||
|
||||
on:
|
||||
schedule:
|
||||
- cron: '0 3 * * *' # Daily at 03:00
|
||||
|
||||
jobs:
|
||||
dr-drill:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- name: Run DR drill
|
||||
run: |
|
||||
dbbackup drill /backups/databases \
|
||||
--full-validation \
|
||||
--format json \
|
||||
--output results.json
|
||||
|
||||
- name: Check results
|
||||
run: |
|
||||
if grep -q '"status":"failed"' results.json; then
|
||||
echo "DR drill failed!"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
- name: Upload report
|
||||
uses: actions/upload-artifact@v2
|
||||
with:
|
||||
name: drill-results
|
||||
path: results.json
|
||||
```
|
||||
|
||||
### Jenkins Pipeline
|
||||
|
||||
```groovy
|
||||
pipeline {
|
||||
triggers {
|
||||
cron('H 3 * * *') // Daily at 03:00
|
||||
}
|
||||
|
||||
stages {
|
||||
stage('DR Drill') {
|
||||
steps {
|
||||
sh 'dbbackup drill /backups/databases --full-validation --format json --output drill.json'
|
||||
}
|
||||
}
|
||||
|
||||
stage('Validate Results') {
|
||||
steps {
|
||||
script {
|
||||
def results = readJSON file: 'drill.json'
|
||||
if (results.status != 'passed') {
|
||||
error("DR drill failed!")
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Drill Fails with "Out of Space"
|
||||
|
||||
```bash
|
||||
# Check available disk space
|
||||
df -h
|
||||
|
||||
# Clean up old test databases
|
||||
docker system prune -a
|
||||
|
||||
# Use faster storage for test
|
||||
dbbackup drill /mnt/backups/databases --temp-dir /ssd/drill-temp
|
||||
```
|
||||
|
||||
### Drill Times Out
|
||||
|
||||
```bash
|
||||
# Increase timeout (minutes)
|
||||
dbbackup drill /mnt/backups/databases --timeout 30
|
||||
|
||||
# Skip certain validations to speed up
|
||||
dbbackup drill /mnt/backups/databases --quick-validation
|
||||
```
|
||||
|
||||
### Drill Shows Data Mismatch
|
||||
|
||||
Indicates a problem with the backup - investigate immediately:
|
||||
|
||||
```bash
|
||||
# Get detailed diff report
|
||||
dbbackup drill report --show-diffs myapp_2026-01-23.dump.gz
|
||||
|
||||
# Regenerate backup
|
||||
dbbackup backup single myapp --force-full
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. **Run weekly drills minimum** - Catch issues early
|
||||
|
||||
2. **Test oldest backups** - Verify full retention chain works
|
||||
```bash
|
||||
dbbackup drill /mnt/backups/databases --oldest
|
||||
```
|
||||
|
||||
3. **Test critical databases first** - Prioritize by impact
|
||||
|
||||
4. **Store results in catalog** - Track historical pass/fail rates
|
||||
|
||||
5. **Alert on failures** - Automatic notification via email/Slack
|
||||
|
||||
6. **Document RTO** - Use drill times to refine recovery objectives
|
||||
|
||||
7. **Test cross-major-versions** - Use test environment with different DB version
|
||||
```bash
|
||||
# Test PostgreSQL 15 backup on PostgreSQL 16
|
||||
dbbackup drill /mnt/backups/databases --target-version 16
|
||||
```
|
||||
@ -16,17 +16,17 @@ DBBackup now includes a modular backup engine system with multiple strategies:
|
||||
## Quick Start
|
||||
|
||||
```bash
|
||||
# List available engines
|
||||
# List available engines for your MySQL/MariaDB environment
|
||||
dbbackup engine list
|
||||
|
||||
# Auto-select best engine for your environment
|
||||
dbbackup engine select
|
||||
# Get detailed information on a specific engine
|
||||
dbbackup engine info clone
|
||||
|
||||
# Perform physical backup with auto-selection
|
||||
dbbackup physical-backup --output /backups/db.tar.gz
|
||||
# Get engine info for current environment
|
||||
dbbackup engine info
|
||||
|
||||
# Stream directly to S3 (no local storage needed)
|
||||
dbbackup stream-backup --target s3://bucket/backups/db.tar.gz --workers 8
|
||||
# Use engines with backup commands (auto-detection)
|
||||
dbbackup backup single mydb --db-type mysql
|
||||
```
|
||||
|
||||
## Engine Descriptions
|
||||
@ -36,7 +36,7 @@ dbbackup stream-backup --target s3://bucket/backups/db.tar.gz --workers 8
|
||||
Traditional logical backup using mysqldump. Works with all MySQL/MariaDB versions.
|
||||
|
||||
```bash
|
||||
dbbackup physical-backup --engine mysqldump --output backup.sql.gz
|
||||
dbbackup backup single mydb --db-type mysql
|
||||
```
|
||||
|
||||
Features:
|
||||
|
||||
@ -5,13 +5,15 @@ This document provides complete reference for the DBBackup Prometheus exporter,
|
||||
## What's New (January 2026)
|
||||
|
||||
### New Features
|
||||
- **Backup Type Tracking**: All backup metrics now include a `backup_type` label (`full`, `incremental`, `pitr_base`)
|
||||
- **Backup Type Tracking**: All backup metrics now include a `backup_type` label (`full`, `incremental`, or `pitr_base` for PITR base backups)
|
||||
- **Note**: CLI `--backup-type` flag only accepts `full` or `incremental`. The `pitr_base` label is auto-assigned when using `dbbackup pitr base`
|
||||
- **PITR Metrics**: Complete Point-in-Time Recovery monitoring for PostgreSQL WAL and MySQL binlog archiving
|
||||
- **New Alerts**: PITR-specific alerts for archive lag, chain integrity, and gap detection
|
||||
|
||||
### New Metrics Added
|
||||
| Metric | Description |
|
||||
|--------|-------------|
|
||||
| `dbbackup_build_info` | Build info with version and commit labels |
|
||||
| `dbbackup_backup_by_type` | Count backups by type (full/incremental/pitr_base) |
|
||||
| `dbbackup_pitr_enabled` | Whether PITR is enabled (1/0) |
|
||||
| `dbbackup_pitr_archive_lag_seconds` | Seconds since last WAL/binlog archived |
|
||||
|
||||
@ -43,6 +43,13 @@ dbbackup_backup_total{status="success"}
|
||||
**Labels:** `server`, `database`, `backup_type`
|
||||
**Description:** Total count of backups by backup type (`full`, `incremental`, `pitr_base`).
|
||||
|
||||
> **Note:** The `backup_type` label values are:
|
||||
> - `full` - Created with `--backup-type full` (default)
|
||||
> - `incremental` - Created with `--backup-type incremental`
|
||||
> - `pitr_base` - Auto-assigned when using `dbbackup pitr base` command
|
||||
>
|
||||
> The CLI `--backup-type` flag only accepts `full` or `incremental`.
|
||||
|
||||
**Example Query:**
|
||||
```promql
|
||||
# Count of each backup type
|
||||
@ -229,6 +236,44 @@ dbbackup_pitr_chain_valid == 0
|
||||
|
||||
---
|
||||
|
||||
## Build Information Metrics
|
||||
|
||||
### `dbbackup_build_info`
|
||||
**Type:** Gauge
|
||||
**Labels:** `server`, `version`, `commit`, `build_time`
|
||||
**Description:** Build information for the dbbackup exporter. Value is always 1.
|
||||
|
||||
This metric is useful for:
|
||||
- Tracking which version is deployed across your fleet
|
||||
- Alerting when versions drift between servers
|
||||
- Correlating behavior changes with deployments
|
||||
|
||||
**Example Queries:**
|
||||
```promql
|
||||
# Show all deployed versions
|
||||
group by (version) (dbbackup_build_info)
|
||||
|
||||
# Find servers not on latest version
|
||||
dbbackup_build_info{version!="4.1.4"}
|
||||
|
||||
# Alert on version drift
|
||||
count(count by (version) (dbbackup_build_info)) > 1
|
||||
|
||||
# PITR archive lag
|
||||
dbbackup_pitr_archive_lag_seconds > 600
|
||||
|
||||
# Check PITR chain integrity
|
||||
dbbackup_pitr_chain_valid == 1
|
||||
|
||||
# Estimate available PITR window (in minutes)
|
||||
dbbackup_pitr_recovery_window_minutes
|
||||
|
||||
# PITR gaps detected
|
||||
dbbackup_pitr_gap_count > 0
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Alerting Rules
|
||||
|
||||
See [alerting-rules.yaml](../grafana/alerting-rules.yaml) for pre-configured Prometheus alerting rules.
|
||||
|
||||
@ -67,18 +67,46 @@ dbbackup restore cluster backup.tar.gz --profile=balanced --confirm
|
||||
dbbackup restore cluster backup.tar.gz --profile=aggressive --confirm
|
||||
```
|
||||
|
||||
### Potato Profile (`--profile=potato`) 🥔
|
||||
### Potato Profile (`--profile=potato`)
|
||||
**Easter egg:** Same as conservative, for servers running on a potato.
|
||||
|
||||
### Turbo Profile (`--profile=turbo`)
|
||||
**NEW! Best for:** Maximum restore speed - matches native pg_restore -j8 performance.
|
||||
|
||||
**Settings:**
|
||||
- Parallel databases: 2 (balanced I/O)
|
||||
- pg_restore jobs: 8 (like `pg_restore -j8`)
|
||||
- Buffered I/O: 32KB write buffers for faster extraction
|
||||
- Optimized for large databases
|
||||
|
||||
**When to use:**
|
||||
- Dedicated database server
|
||||
- Need fastest possible restore (DR scenarios)
|
||||
- Server has 16GB+ RAM, 4+ cores
|
||||
- Large databases (100GB+)
|
||||
- You want dbbackup to match pg_restore speed
|
||||
|
||||
**Example:**
|
||||
```bash
|
||||
dbbackup restore cluster backup.tar.gz --profile=turbo --confirm
|
||||
```
|
||||
|
||||
**TUI Usage:**
|
||||
1. Go to Settings → Resource Profile
|
||||
2. Press Enter to cycle until you see "turbo"
|
||||
3. Save settings and run restore
|
||||
|
||||
## Profile Comparison
|
||||
|
||||
| Setting | Conservative | Balanced | Aggressive |
|
||||
|---------|-------------|----------|-----------|
|
||||
| Parallel DBs | 1 (sequential) | Auto (2-4) | Auto (all CPUs) |
|
||||
| Jobs (decompression) | 1 | Auto (2-4) | Auto (all CPUs) |
|
||||
| Memory Usage | Minimal | Moderate | Maximum |
|
||||
| Speed | Slowest | Medium | Fastest |
|
||||
| Stability | Most stable | Stable | Requires resources |
|
||||
| Setting | Conservative | Balanced | Performance | Turbo |
|
||||
|---------|-------------|----------|-------------|----------|
|
||||
| Parallel DBs | 1 | 2 | 4 | 2 |
|
||||
| pg_restore Jobs | 1 | 2 | 4 | 8 |
|
||||
| Buffered I/O | No | No | No | Yes (32KB) |
|
||||
| Memory Usage | Minimal | Moderate | High | Moderate |
|
||||
| Speed | Slowest | Medium | Fast | **Fastest** |
|
||||
| Stability | Most stable | Stable | Good | Good |
|
||||
| Best For | Small VMs | General use | Powerful servers | DR/Large DBs |
|
||||
|
||||
## Overriding Profile Settings
|
||||
|
||||
|
||||
364
docs/RTO.md
Normal file
364
docs/RTO.md
Normal file
@ -0,0 +1,364 @@
|
||||
# RTO/RPO Analysis
|
||||
|
||||
Complete reference for Recovery Time Objective (RTO) and Recovery Point Objective (RPO) analysis and calculation.
|
||||
|
||||
## Overview
|
||||
|
||||
RTO and RPO are critical metrics for disaster recovery planning:
|
||||
|
||||
- **RTO (Recovery Time Objective)** - Maximum acceptable time to restore systems
|
||||
- **RPO (Recovery Point Objective)** - Maximum acceptable data loss (time)
|
||||
|
||||
dbbackup calculates these based on:
|
||||
- Backup size and compression
|
||||
- Database size and transaction rate
|
||||
- Network bandwidth
|
||||
- Hardware resources
|
||||
- Retention policy
|
||||
|
||||
## Quick Start
|
||||
|
||||
```bash
|
||||
# Show RTO/RPO analysis
|
||||
dbbackup rto show
|
||||
|
||||
# Show recommendations
|
||||
dbbackup rto recommendations
|
||||
|
||||
# Export for disaster recovery plan
|
||||
dbbackup rto export --format pdf --output drp.pdf
|
||||
```
|
||||
|
||||
## RTO Calculation
|
||||
|
||||
RTO depends on restore operations:
|
||||
|
||||
```
|
||||
RTO = Time to: Extract + Restore + Validation
|
||||
|
||||
Extract Time = Backup Size / Extraction Speed (~500 MB/s typical)
|
||||
Restore Time = Total Operations / Database Write Speed (~10-100K rows/sec)
|
||||
Validation = Backup Verify (~10% of restore time)
|
||||
```
|
||||
|
||||
### Example
|
||||
|
||||
```
|
||||
Backup: myapp_production
|
||||
- Size on disk: 2.5 GB
|
||||
- Compressed: 850 MB
|
||||
|
||||
Extract Time = 850 MB / 500 MB/s = 1.7 minutes
|
||||
Restore Time = 1.5M rows / 50K rows/sec = 30 minutes
|
||||
Validation = 3 minutes
|
||||
|
||||
Total RTO = 34.7 minutes
|
||||
```
|
||||
|
||||
## RPO Calculation
|
||||
|
||||
RPO depends on backup frequency and transaction rate:
|
||||
|
||||
```
|
||||
RPO = Backup Interval + WAL Replay Time
|
||||
|
||||
Example with daily backups:
|
||||
- Backup interval: 24 hours
|
||||
- WAL available for PITR: +6 hours
|
||||
|
||||
RPO = 24-30 hours (worst case)
|
||||
```
|
||||
|
||||
### Optimizing RPO
|
||||
|
||||
Reduce RPO by:
|
||||
|
||||
```bash
|
||||
# More frequent backups (hourly vs daily)
|
||||
dbbackup backup single myapp --schedule "0 * * * *" # Every hour
|
||||
|
||||
# Enable PITR (Point-in-Time Recovery)
|
||||
dbbackup pitr enable myapp /mnt/wal
|
||||
dbbackup pitr base myapp /mnt/wal
|
||||
|
||||
# Continuous WAL archiving
|
||||
dbbackup pitr status myapp /mnt/wal
|
||||
```
|
||||
|
||||
With PITR enabled:
|
||||
```
|
||||
RPO = Time since last transaction (typically < 5 minutes)
|
||||
```
|
||||
|
||||
## Analysis Command
|
||||
|
||||
### Show Current Metrics
|
||||
|
||||
```bash
|
||||
dbbackup rto show
|
||||
```
|
||||
|
||||
Output:
|
||||
```
|
||||
Database: production
|
||||
Engine: PostgreSQL 15
|
||||
|
||||
Current Status:
|
||||
Last Backup: 2026-01-23 02:00:00 (22 hours ago)
|
||||
Backup Size: 2.5 GB (compressed: 850 MB)
|
||||
RTO Estimate: 35 minutes
|
||||
RPO Current: 22 hours
|
||||
PITR Enabled: yes
|
||||
PITR Window: 6 hours
|
||||
|
||||
Recommendations:
|
||||
- RTO is acceptable (< 1 hour)
|
||||
- RPO could be improved with hourly backups (currently 22h)
|
||||
- PITR reduces RPO to 6 hours in case of full backup loss
|
||||
|
||||
Recovery Plans:
|
||||
Scenario 1: Full database loss
|
||||
RTO: 35 minutes (restore from latest backup)
|
||||
RPO: 22 hours (data since last backup lost)
|
||||
|
||||
Scenario 2: Point-in-time recovery
|
||||
RTO: 45 minutes (restore backup + replay WAL)
|
||||
RPO: 5 minutes (last transaction available)
|
||||
|
||||
Scenario 3: Table-level recovery (single table drop)
|
||||
RTO: 30 minutes (restore to temp DB, extract table)
|
||||
RPO: 22 hours
|
||||
```
|
||||
|
||||
### Get Recommendations
|
||||
|
||||
```bash
|
||||
dbbackup rto recommendations
|
||||
|
||||
# Output includes:
|
||||
# - Suggested backup frequency
|
||||
# - PITR recommendations
|
||||
# - Parallelism recommendations
|
||||
# - Resource utilization tips
|
||||
# - Cost-benefit analysis
|
||||
```
|
||||
|
||||
## Scenarios
|
||||
|
||||
### Scenario Analysis
|
||||
|
||||
Calculate RTO/RPO for different failure modes.
|
||||
|
||||
```bash
|
||||
# Full database loss (use latest backup)
|
||||
dbbackup rto scenario --type full-loss
|
||||
|
||||
# Point-in-time recovery (specific time before incident)
|
||||
dbbackup rto scenario --type point-in-time --time "2026-01-23 14:30:00"
|
||||
|
||||
# Table-level recovery
|
||||
dbbackup rto scenario --type table-level --table users
|
||||
|
||||
# Multiple databases
|
||||
dbbackup rto scenario --type multi-db --databases myapp,mydb
|
||||
```
|
||||
|
||||
### Custom Scenario
|
||||
|
||||
```bash
|
||||
# Network bandwidth constraint
|
||||
dbbackup rto scenario \
|
||||
--type full-loss \
|
||||
--bandwidth 10MB/s \
|
||||
--storage-type s3
|
||||
|
||||
# Limited resources (small restore server)
|
||||
dbbackup rto scenario \
|
||||
--type full-loss \
|
||||
--cpu-cores 4 \
|
||||
--memory-gb 8
|
||||
|
||||
# High transaction rate database
|
||||
dbbackup rto scenario \
|
||||
--type point-in-time \
|
||||
--tps 100000
|
||||
```
|
||||
|
||||
## Monitoring
|
||||
|
||||
### Track RTO/RPO Trends
|
||||
|
||||
```bash
|
||||
# Show trend over time
|
||||
dbbackup rto history
|
||||
|
||||
# Export metrics for trending
|
||||
dbbackup rto export --format csv
|
||||
|
||||
# Output:
|
||||
# Date,Database,RTO_Minutes,RPO_Hours,Backup_Size_GB,Status
|
||||
# 2026-01-15,production,35,22,2.5,ok
|
||||
# 2026-01-16,production,35,22,2.5,ok
|
||||
# 2026-01-17,production,38,24,2.6,warning
|
||||
```
|
||||
|
||||
### Alert on RTO/RPO Violations
|
||||
|
||||
```bash
|
||||
# Alert if RTO > 1 hour
|
||||
dbbackup rto alert --type rto-violation --threshold 60
|
||||
|
||||
# Alert if RPO > 24 hours
|
||||
dbbackup rto alert --type rpo-violation --threshold 24
|
||||
|
||||
# Email on violations
|
||||
dbbackup rto alert \
|
||||
--type rpo-violation \
|
||||
--threshold 24 \
|
||||
--notify-email admin@example.com
|
||||
```
|
||||
|
||||
## Detailed Calculations
|
||||
|
||||
### Backup Time Components
|
||||
|
||||
```bash
|
||||
# Analyze last backup performance
|
||||
dbbackup rto backup-analysis
|
||||
|
||||
# Output:
|
||||
# Database: production
|
||||
# Backup Date: 2026-01-23 02:00:00
|
||||
# Total Duration: 45 minutes
|
||||
#
|
||||
# Components:
|
||||
# - Data extraction: 25m 30s (56%)
|
||||
# - Compression: 12m 15s (27%)
|
||||
# - Encryption: 5m 45s (13%)
|
||||
# - Upload to cloud: 1m 30s (3%)
|
||||
#
|
||||
# Throughput: 95 MB/s
|
||||
# Compression Ratio: 65%
|
||||
```
|
||||
|
||||
### Restore Time Components
|
||||
|
||||
```bash
|
||||
# Analyze restore performance from a test drill
|
||||
dbbackup rto restore-analysis myapp_2026-01-23.dump.gz
|
||||
|
||||
# Output:
|
||||
# Extract Time: 1m 45s
|
||||
# Restore Time: 28m 30s
|
||||
# Validation: 3m 15s
|
||||
# Total RTO: 33m 30s
|
||||
#
|
||||
# Restore Speed: 2.8M rows/minute
|
||||
# Objects Created: 4200
|
||||
# Indexes Built: 145
|
||||
```
|
||||
|
||||
## Configuration
|
||||
|
||||
Configure RTO/RPO targets in `.dbbackup.conf`:
|
||||
|
||||
```ini
|
||||
[rto_rpo]
|
||||
# Target RTO (minutes)
|
||||
target_rto_minutes = 60
|
||||
|
||||
# Target RPO (hours)
|
||||
target_rpo_hours = 4
|
||||
|
||||
# Alert on threshold violation
|
||||
alert_on_violation = true
|
||||
|
||||
# Minimum backups to maintain RTO
|
||||
min_backups_for_rto = 5
|
||||
|
||||
# PITR window target (hours)
|
||||
pitr_window_hours = 6
|
||||
```
|
||||
|
||||
## SLAs and Compliance
|
||||
|
||||
### Define SLA
|
||||
|
||||
```bash
|
||||
# Create SLA requirement
|
||||
dbbackup rto sla \
|
||||
--name production \
|
||||
--target-rto-minutes 30 \
|
||||
--target-rpo-hours 4 \
|
||||
--databases myapp,payments
|
||||
|
||||
# Verify compliance
|
||||
dbbackup rto sla --verify production
|
||||
|
||||
# Generate compliance report
|
||||
dbbackup rto sla --report production
|
||||
```
|
||||
|
||||
### Audit Trail
|
||||
|
||||
```bash
|
||||
# Show RTO/RPO audit history
|
||||
dbbackup rto audit
|
||||
|
||||
# Output shows:
|
||||
# Date Metric Value Target Status
|
||||
# 2026-01-25 03:15:00 RTO 35m 60m PASS
|
||||
# 2026-01-25 03:15:00 RPO 22h 4h FAIL
|
||||
# 2026-01-24 03:00:00 RTO 35m 60m PASS
|
||||
# 2026-01-24 03:00:00 RPO 22h 4h FAIL
|
||||
```
|
||||
|
||||
## Reporting
|
||||
|
||||
### Generate Report
|
||||
|
||||
```bash
|
||||
# Markdown report
|
||||
dbbackup rto report --format markdown --output rto-report.md
|
||||
|
||||
# PDF for disaster recovery plan
|
||||
dbbackup rto report --format pdf --output drp.pdf
|
||||
|
||||
# HTML for dashboard
|
||||
dbbackup rto report --format html --output rto-metrics.html
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. **Define SLA targets** - Start with business requirements
|
||||
- Critical systems: RTO < 1 hour
|
||||
- Important systems: RTO < 4 hours
|
||||
- Standard systems: RTO < 24 hours
|
||||
|
||||
2. **Test RTO regularly** - DR drills validate estimates
|
||||
```bash
|
||||
dbbackup drill /mnt/backups --full-validation
|
||||
```
|
||||
|
||||
3. **Monitor trends** - Increasing RTO may indicate issues
|
||||
|
||||
4. **Optimize backups** - Faster backups = smaller RTO
|
||||
- Increase parallelism
|
||||
- Use faster storage
|
||||
- Optimize compression level
|
||||
|
||||
5. **Plan for PITR** - Critical systems should have PITR enabled
|
||||
```bash
|
||||
dbbackup pitr enable myapp /mnt/wal
|
||||
```
|
||||
|
||||
6. **Document assumptions** - RTO/RPO calculations depend on:
|
||||
- Available bandwidth
|
||||
- Target hardware
|
||||
- Parallelism settings
|
||||
- Database size changes
|
||||
|
||||
7. **Regular audit** - Monthly SLA compliance review
|
||||
```bash
|
||||
dbbackup rto sla --verify production
|
||||
```
|
||||
@ -10,6 +10,7 @@ import (
|
||||
"os"
|
||||
"os/exec"
|
||||
"path/filepath"
|
||||
"runtime"
|
||||
"strconv"
|
||||
"strings"
|
||||
"sync"
|
||||
@ -27,6 +28,8 @@ import (
|
||||
"dbbackup/internal/progress"
|
||||
"dbbackup/internal/security"
|
||||
"dbbackup/internal/swap"
|
||||
|
||||
"github.com/klauspost/pgzip"
|
||||
)
|
||||
|
||||
// ProgressCallback is called with byte-level progress updates during backup operations
|
||||
@ -1414,10 +1417,10 @@ func (e *Engine) executeCommand(ctx context.Context, cmdArgs []string, outputFil
|
||||
return nil
|
||||
}
|
||||
|
||||
// executeWithStreamingCompression handles plain format dumps with external compression
|
||||
// Uses: pg_dump | pigz > file.sql.gz (zero-copy streaming)
|
||||
// executeWithStreamingCompression handles plain format dumps with in-process pgzip compression
|
||||
// Uses: pg_dump stdout → pgzip.Writer → file.sql.gz (no external process)
|
||||
func (e *Engine) executeWithStreamingCompression(ctx context.Context, cmdArgs []string, outputFile string) error {
|
||||
e.log.Debug("Using streaming compression for large database")
|
||||
e.log.Debug("Using in-process pgzip compression for large database")
|
||||
|
||||
// Derive compressed output filename. If the output was named *.dump we replace that
|
||||
// with *.sql.gz; otherwise append .gz to the provided output file so we don't
|
||||
@ -1439,44 +1442,17 @@ func (e *Engine) executeWithStreamingCompression(ctx context.Context, cmdArgs []
|
||||
dumpCmd.Env = append(dumpCmd.Env, "PGPASSWORD="+e.cfg.Password)
|
||||
}
|
||||
|
||||
// Check for pigz (parallel gzip)
|
||||
compressor := "gzip"
|
||||
compressorArgs := []string{"-c"}
|
||||
|
||||
if _, err := exec.LookPath("pigz"); err == nil {
|
||||
compressor = "pigz"
|
||||
compressorArgs = []string{"-p", strconv.Itoa(e.cfg.Jobs), "-c"}
|
||||
e.log.Debug("Using pigz for parallel compression", "threads", e.cfg.Jobs)
|
||||
}
|
||||
|
||||
// Create compression command
|
||||
compressCmd := exec.CommandContext(ctx, compressor, compressorArgs...)
|
||||
|
||||
// Create output file
|
||||
outFile, err := os.Create(compressedFile)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to create output file: %w", err)
|
||||
}
|
||||
defer outFile.Close()
|
||||
|
||||
// Set up pipeline: pg_dump | pigz > file.sql.gz
|
||||
// Get stdout pipe from pg_dump
|
||||
dumpStdout, err := dumpCmd.StdoutPipe()
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to create dump stdout pipe: %w", err)
|
||||
}
|
||||
|
||||
compressCmd.Stdin = dumpStdout
|
||||
compressCmd.Stdout = outFile
|
||||
|
||||
// Capture stderr from both commands
|
||||
// Capture stderr from pg_dump
|
||||
dumpStderr, err := dumpCmd.StderrPipe()
|
||||
if err != nil {
|
||||
e.log.Warn("Failed to capture dump stderr", "error", err)
|
||||
}
|
||||
compressStderr, err := compressCmd.StderrPipe()
|
||||
if err != nil {
|
||||
e.log.Warn("Failed to capture compress stderr", "error", err)
|
||||
}
|
||||
|
||||
// Stream stderr output
|
||||
if dumpStderr != nil {
|
||||
@ -1491,31 +1467,41 @@ func (e *Engine) executeWithStreamingCompression(ctx context.Context, cmdArgs []
|
||||
}()
|
||||
}
|
||||
|
||||
if compressStderr != nil {
|
||||
go func() {
|
||||
scanner := bufio.NewScanner(compressStderr)
|
||||
for scanner.Scan() {
|
||||
line := scanner.Text()
|
||||
if line != "" {
|
||||
e.log.Debug("compression", "output", line)
|
||||
}
|
||||
}
|
||||
}()
|
||||
// Create output file
|
||||
outFile, err := os.Create(compressedFile)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to create output file: %w", err)
|
||||
}
|
||||
defer outFile.Close()
|
||||
|
||||
// Start compression first
|
||||
if err := compressCmd.Start(); err != nil {
|
||||
return fmt.Errorf("failed to start compressor: %w", err)
|
||||
// Create pgzip writer with parallel compression
|
||||
// Use configured Jobs or default to NumCPU
|
||||
workers := e.cfg.Jobs
|
||||
if workers <= 0 {
|
||||
workers = runtime.NumCPU()
|
||||
}
|
||||
gzWriter, err := pgzip.NewWriterLevel(outFile, pgzip.BestSpeed)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to create pgzip writer: %w", err)
|
||||
}
|
||||
if err := gzWriter.SetConcurrency(256*1024, workers); err != nil {
|
||||
e.log.Warn("Failed to set pgzip concurrency", "error", err)
|
||||
}
|
||||
e.log.Debug("Using pgzip for parallel compression", "workers", workers)
|
||||
|
||||
// Then start pg_dump
|
||||
// Start pg_dump
|
||||
if err := dumpCmd.Start(); err != nil {
|
||||
compressCmd.Process.Kill()
|
||||
return fmt.Errorf("failed to start pg_dump: %w", err)
|
||||
}
|
||||
|
||||
// Copy from pg_dump stdout to pgzip writer in a goroutine
|
||||
copyDone := make(chan error, 1)
|
||||
go func() {
|
||||
_, copyErr := io.Copy(gzWriter, dumpStdout)
|
||||
copyDone <- copyErr
|
||||
}()
|
||||
|
||||
// Wait for pg_dump in a goroutine to handle context timeout properly
|
||||
// This prevents deadlock if pipe buffer fills and pg_dump blocks
|
||||
dumpDone := make(chan error, 1)
|
||||
go func() {
|
||||
dumpDone <- dumpCmd.Wait()
|
||||
@ -1533,33 +1519,29 @@ func (e *Engine) executeWithStreamingCompression(ctx context.Context, cmdArgs []
|
||||
dumpErr = ctx.Err()
|
||||
}
|
||||
|
||||
// Close stdout pipe to signal compressor we're done
|
||||
// This MUST happen after pg_dump exits to avoid broken pipe
|
||||
dumpStdout.Close()
|
||||
// Wait for copy to complete
|
||||
copyErr := <-copyDone
|
||||
|
||||
// Wait for compression to complete
|
||||
compressErr := compressCmd.Wait()
|
||||
// Close gzip writer to flush remaining data
|
||||
gzCloseErr := gzWriter.Close()
|
||||
|
||||
// Check errors - compressor failure first (it's usually the root cause)
|
||||
if compressErr != nil {
|
||||
e.log.Error("Compressor failed", "error", compressErr)
|
||||
return fmt.Errorf("compression failed (check disk space): %w", compressErr)
|
||||
}
|
||||
// Check errors in order of priority
|
||||
if dumpErr != nil {
|
||||
// Check for SIGPIPE (exit code 141) - indicates compressor died first
|
||||
if exitErr, ok := dumpErr.(*exec.ExitError); ok && exitErr.ExitCode() == 141 {
|
||||
e.log.Error("pg_dump received SIGPIPE - compressor may have failed")
|
||||
return fmt.Errorf("pg_dump broken pipe - check disk space and compressor")
|
||||
}
|
||||
return fmt.Errorf("pg_dump failed: %w", dumpErr)
|
||||
}
|
||||
if copyErr != nil {
|
||||
return fmt.Errorf("compression copy failed: %w", copyErr)
|
||||
}
|
||||
if gzCloseErr != nil {
|
||||
return fmt.Errorf("compression flush failed: %w", gzCloseErr)
|
||||
}
|
||||
|
||||
// Sync file to disk to ensure durability (prevents truncation on power loss)
|
||||
if err := outFile.Sync(); err != nil {
|
||||
e.log.Warn("Failed to sync output file", "error", err)
|
||||
}
|
||||
|
||||
e.log.Debug("Streaming compression completed", "output", compressedFile)
|
||||
e.log.Debug("In-process pgzip compression completed", "output", compressedFile)
|
||||
return nil
|
||||
}
|
||||
|
||||
|
||||
@ -464,8 +464,8 @@ func (c *SQLiteCatalog) Stats(ctx context.Context) (*Stats, error) {
|
||||
MAX(created_at),
|
||||
COALESCE(AVG(duration), 0),
|
||||
CAST(COALESCE(AVG(size_bytes), 0) AS INTEGER),
|
||||
SUM(CASE WHEN verified_at IS NOT NULL THEN 1 ELSE 0 END),
|
||||
SUM(CASE WHEN drill_tested_at IS NOT NULL THEN 1 ELSE 0 END)
|
||||
COALESCE(SUM(CASE WHEN verified_at IS NOT NULL THEN 1 ELSE 0 END), 0),
|
||||
COALESCE(SUM(CASE WHEN drill_tested_at IS NOT NULL THEN 1 ELSE 0 END), 0)
|
||||
FROM backups WHERE status != 'deleted'
|
||||
`)
|
||||
|
||||
@ -548,8 +548,8 @@ func (c *SQLiteCatalog) StatsByDatabase(ctx context.Context, database string) (*
|
||||
MAX(created_at),
|
||||
COALESCE(AVG(duration), 0),
|
||||
COALESCE(AVG(size_bytes), 0),
|
||||
SUM(CASE WHEN verified_at IS NOT NULL THEN 1 ELSE 0 END),
|
||||
SUM(CASE WHEN drill_tested_at IS NOT NULL THEN 1 ELSE 0 END)
|
||||
COALESCE(SUM(CASE WHEN verified_at IS NOT NULL THEN 1 ELSE 0 END), 0),
|
||||
COALESCE(SUM(CASE WHEN drill_tested_at IS NOT NULL THEN 1 ELSE 0 END), 0)
|
||||
FROM backups WHERE database = ? AND status != 'deleted'
|
||||
`, database)
|
||||
|
||||
|
||||
@ -17,12 +17,16 @@ type Config struct {
|
||||
BuildTime string
|
||||
GitCommit string
|
||||
|
||||
// Config file path (--config flag)
|
||||
ConfigPath string
|
||||
|
||||
// Database connection
|
||||
Host string
|
||||
Port int
|
||||
User string
|
||||
Database string
|
||||
Password string
|
||||
Socket string // Unix socket path for MySQL/MariaDB
|
||||
DatabaseType string // "postgres" or "mysql"
|
||||
SSLMode string
|
||||
Insecure bool
|
||||
@ -37,8 +41,10 @@ type Config struct {
|
||||
CPUWorkloadType string // "cpu-intensive", "io-intensive", "balanced"
|
||||
|
||||
// Resource profile for backup/restore operations
|
||||
ResourceProfile string // "conservative", "balanced", "performance", "max-performance"
|
||||
ResourceProfile string // "conservative", "balanced", "performance", "max-performance", "turbo"
|
||||
LargeDBMode bool // Enable large database mode (reduces parallelism, increases max_locks)
|
||||
BufferedIO bool // Use 32KB buffered I/O for faster extraction (turbo profile)
|
||||
ParallelExtract bool // Enable parallel file extraction where possible (turbo profile)
|
||||
|
||||
// CPU detection
|
||||
CPUDetector *cpu.Detector
|
||||
@ -433,7 +439,7 @@ func (c *Config) ApplyResourceProfile(profileName string) error {
|
||||
return &ConfigError{
|
||||
Field: "resource_profile",
|
||||
Value: profileName,
|
||||
Message: "unknown profile. Valid profiles: conservative, balanced, performance, max-performance",
|
||||
Message: "unknown profile. Valid profiles: conservative, balanced, performance, max-performance, turbo",
|
||||
}
|
||||
}
|
||||
|
||||
@ -456,6 +462,10 @@ func (c *Config) ApplyResourceProfile(profileName string) error {
|
||||
c.Jobs = profile.Jobs
|
||||
c.DumpJobs = profile.DumpJobs
|
||||
|
||||
// Apply turbo mode optimizations
|
||||
c.BufferedIO = profile.BufferedIO
|
||||
c.ParallelExtract = profile.ParallelExtract
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
|
||||
@ -42,8 +42,11 @@ type LocalConfig struct {
|
||||
|
||||
// LoadLocalConfig loads configuration from .dbbackup.conf in current directory
|
||||
func LoadLocalConfig() (*LocalConfig, error) {
|
||||
configPath := filepath.Join(".", ConfigFileName)
|
||||
return LoadLocalConfigFromPath(filepath.Join(".", ConfigFileName))
|
||||
}
|
||||
|
||||
// LoadLocalConfigFromPath loads configuration from a specific path
|
||||
func LoadLocalConfigFromPath(configPath string) (*LocalConfig, error) {
|
||||
data, err := os.ReadFile(configPath)
|
||||
if err != nil {
|
||||
if os.IsNotExist(err) {
|
||||
|
||||
@ -35,6 +35,8 @@ type ResourceProfile struct {
|
||||
RecommendedForLarge bool `json:"recommended_for_large"` // Suitable for large DBs?
|
||||
MinMemoryGB int `json:"min_memory_gb"` // Minimum memory for this profile
|
||||
MinCores int `json:"min_cores"` // Minimum cores for this profile
|
||||
BufferedIO bool `json:"buffered_io"` // Use 32KB buffered I/O for extraction
|
||||
ParallelExtract bool `json:"parallel_extract"` // Enable parallel file extraction
|
||||
}
|
||||
|
||||
// Predefined resource profiles
|
||||
@ -95,12 +97,31 @@ var (
|
||||
MinCores: 16,
|
||||
}
|
||||
|
||||
// ProfileTurbo - TURBO MODE: Optimized for fastest possible restore
|
||||
// Based on real-world testing: matches pg_restore -j8 performance
|
||||
// Uses buffered I/O, parallel extraction, and aggressive pg_restore parallelism
|
||||
ProfileTurbo = ResourceProfile{
|
||||
Name: "turbo",
|
||||
Description: "TURBO: Fastest restore mode. Matches native pg_restore -j8 speed. Use on dedicated DB servers.",
|
||||
ClusterParallelism: 2, // Restore 2 DBs concurrently (I/O balanced)
|
||||
Jobs: 8, // pg_restore -j8 (matches your pg_dump test)
|
||||
DumpJobs: 8, // Fast dumps too
|
||||
MaintenanceWorkMem: "2GB",
|
||||
MaxLocksPerTxn: 4096, // High for large schemas
|
||||
RecommendedForLarge: true, // Optimized for large DBs
|
||||
MinMemoryGB: 16, // Works on 16GB+ servers
|
||||
MinCores: 4, // Works on 4+ cores
|
||||
BufferedIO: true, // Enable 32KB buffered writes
|
||||
ParallelExtract: true, // Parallel tar extraction where possible
|
||||
}
|
||||
|
||||
// AllProfiles contains all available profiles (VM resource-based)
|
||||
AllProfiles = []ResourceProfile{
|
||||
ProfileConservative,
|
||||
ProfileBalanced,
|
||||
ProfilePerformance,
|
||||
ProfileMaxPerformance,
|
||||
ProfileTurbo,
|
||||
}
|
||||
)
|
||||
|
||||
|
||||
@ -278,8 +278,12 @@ func (m *MySQL) GetTableRowCount(ctx context.Context, database, table string) (i
|
||||
func (m *MySQL) BuildBackupCommand(database, outputFile string, options BackupOptions) []string {
|
||||
cmd := []string{"mysqldump"}
|
||||
|
||||
// Connection parameters - handle localhost vs remote differently
|
||||
if m.cfg.Host == "" || m.cfg.Host == "localhost" {
|
||||
// Connection parameters - socket takes priority, then localhost vs remote
|
||||
if m.cfg.Socket != "" {
|
||||
// Explicit socket path provided
|
||||
cmd = append(cmd, "-S", m.cfg.Socket)
|
||||
cmd = append(cmd, "-u", m.cfg.User)
|
||||
} else if m.cfg.Host == "" || m.cfg.Host == "localhost" {
|
||||
// For localhost, use socket connection (don't specify host/port)
|
||||
cmd = append(cmd, "-u", m.cfg.User)
|
||||
} else {
|
||||
@ -338,8 +342,12 @@ func (m *MySQL) BuildBackupCommand(database, outputFile string, options BackupOp
|
||||
func (m *MySQL) BuildRestoreCommand(database, inputFile string, options RestoreOptions) []string {
|
||||
cmd := []string{"mysql"}
|
||||
|
||||
// Connection parameters - handle localhost vs remote differently
|
||||
if m.cfg.Host == "" || m.cfg.Host == "localhost" {
|
||||
// Connection parameters - socket takes priority, then localhost vs remote
|
||||
if m.cfg.Socket != "" {
|
||||
// Explicit socket path provided
|
||||
cmd = append(cmd, "-S", m.cfg.Socket)
|
||||
cmd = append(cmd, "-u", m.cfg.User)
|
||||
} else if m.cfg.Host == "" || m.cfg.Host == "localhost" {
|
||||
// For localhost, use socket connection (don't specify host/port)
|
||||
cmd = append(cmd, "-u", m.cfg.User)
|
||||
} else {
|
||||
@ -417,8 +425,11 @@ func (m *MySQL) buildDSN() string {
|
||||
|
||||
dsn += "@"
|
||||
|
||||
// Handle localhost with Unix socket vs TCP/IP
|
||||
if m.cfg.Host == "" || m.cfg.Host == "localhost" {
|
||||
// Explicit socket takes priority
|
||||
if m.cfg.Socket != "" {
|
||||
dsn += "unix(" + m.cfg.Socket + ")"
|
||||
} else if m.cfg.Host == "" || m.cfg.Host == "localhost" {
|
||||
// Handle localhost with Unix socket vs TCP/IP
|
||||
// Try common socket paths for localhost connections
|
||||
socketPaths := []string{
|
||||
"/run/mysqld/mysqld.sock",
|
||||
|
||||
@ -4,12 +4,15 @@ package drill
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"dbbackup/internal/logger"
|
||||
|
||||
"github.com/klauspost/pgzip"
|
||||
)
|
||||
|
||||
// Engine executes DR drills
|
||||
@ -237,14 +240,62 @@ func (e *Engine) buildContainerConfig(config *DrillConfig) *ContainerConfig {
|
||||
}
|
||||
}
|
||||
|
||||
// decompressWithPgzip decompresses a .gz file using in-process pgzip
|
||||
func (e *Engine) decompressWithPgzip(srcPath string) (string, error) {
|
||||
if !strings.HasSuffix(srcPath, ".gz") {
|
||||
return srcPath, nil // Not compressed
|
||||
}
|
||||
|
||||
dstPath := strings.TrimSuffix(srcPath, ".gz")
|
||||
e.log.Info("Decompressing with pgzip", "src", srcPath, "dst", dstPath)
|
||||
|
||||
srcFile, err := os.Open(srcPath)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("failed to open source: %w", err)
|
||||
}
|
||||
defer srcFile.Close()
|
||||
|
||||
gz, err := pgzip.NewReader(srcFile)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("failed to create pgzip reader: %w", err)
|
||||
}
|
||||
defer gz.Close()
|
||||
|
||||
dstFile, err := os.Create(dstPath)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("failed to create destination: %w", err)
|
||||
}
|
||||
defer dstFile.Close()
|
||||
|
||||
if _, err := io.Copy(dstFile, gz); err != nil {
|
||||
os.Remove(dstPath)
|
||||
return "", fmt.Errorf("decompression failed: %w", err)
|
||||
}
|
||||
|
||||
return dstPath, nil
|
||||
}
|
||||
|
||||
// restoreBackup restores the backup into the container
|
||||
func (e *Engine) restoreBackup(ctx context.Context, config *DrillConfig, containerID string, containerConfig *ContainerConfig) error {
|
||||
backupPath := config.BackupPath
|
||||
|
||||
// Decompress on host with pgzip before copying to container
|
||||
if strings.HasSuffix(backupPath, ".gz") {
|
||||
e.log.Info("[DECOMPRESS] Decompressing backup with pgzip on host...")
|
||||
decompressedPath, err := e.decompressWithPgzip(backupPath)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to decompress backup: %w", err)
|
||||
}
|
||||
backupPath = decompressedPath
|
||||
defer os.Remove(decompressedPath) // Clean up temp file
|
||||
}
|
||||
|
||||
// Copy backup to container
|
||||
backupName := filepath.Base(config.BackupPath)
|
||||
backupName := filepath.Base(backupPath)
|
||||
containerBackupPath := "/tmp/" + backupName
|
||||
|
||||
e.log.Info("[DIR] Copying backup to container...")
|
||||
if err := e.docker.CopyToContainer(ctx, containerID, config.BackupPath, containerBackupPath); err != nil {
|
||||
if err := e.docker.CopyToContainer(ctx, containerID, backupPath, containerBackupPath); err != nil {
|
||||
return fmt.Errorf("failed to copy backup: %w", err)
|
||||
}
|
||||
|
||||
@ -264,20 +315,11 @@ func (e *Engine) restoreBackup(ctx context.Context, config *DrillConfig, contain
|
||||
func (e *Engine) executeRestore(ctx context.Context, config *DrillConfig, containerID, backupPath string, containerConfig *ContainerConfig) error {
|
||||
var cmd []string
|
||||
|
||||
// Note: Decompression is now done on host with pgzip before copying to container
|
||||
// So backupPath should never end with .gz at this point
|
||||
|
||||
switch config.DatabaseType {
|
||||
case "postgresql", "postgres":
|
||||
// Decompress if needed
|
||||
if strings.HasSuffix(backupPath, ".gz") {
|
||||
decompressedPath := strings.TrimSuffix(backupPath, ".gz")
|
||||
_, err := e.docker.ExecCommand(ctx, containerID, []string{
|
||||
"sh", "-c", fmt.Sprintf("gunzip -c %s > %s", backupPath, decompressedPath),
|
||||
})
|
||||
if err != nil {
|
||||
return fmt.Errorf("decompression failed: %w", err)
|
||||
}
|
||||
backupPath = decompressedPath
|
||||
}
|
||||
|
||||
// Create database
|
||||
_, err := e.docker.ExecCommand(ctx, containerID, []string{
|
||||
"psql", "-U", "postgres", "-c", fmt.Sprintf("CREATE DATABASE %s", config.DatabaseName),
|
||||
@ -296,32 +338,9 @@ func (e *Engine) executeRestore(ctx context.Context, config *DrillConfig, contai
|
||||
}
|
||||
|
||||
case "mysql":
|
||||
// Decompress if needed
|
||||
if strings.HasSuffix(backupPath, ".gz") {
|
||||
decompressedPath := strings.TrimSuffix(backupPath, ".gz")
|
||||
_, err := e.docker.ExecCommand(ctx, containerID, []string{
|
||||
"sh", "-c", fmt.Sprintf("gunzip -c %s > %s", backupPath, decompressedPath),
|
||||
})
|
||||
if err != nil {
|
||||
return fmt.Errorf("decompression failed: %w", err)
|
||||
}
|
||||
backupPath = decompressedPath
|
||||
}
|
||||
|
||||
cmd = []string{"sh", "-c", fmt.Sprintf("mysql -u root --password=root %s < %s", config.DatabaseName, backupPath)}
|
||||
|
||||
case "mariadb":
|
||||
if strings.HasSuffix(backupPath, ".gz") {
|
||||
decompressedPath := strings.TrimSuffix(backupPath, ".gz")
|
||||
_, err := e.docker.ExecCommand(ctx, containerID, []string{
|
||||
"sh", "-c", fmt.Sprintf("gunzip -c %s > %s", backupPath, decompressedPath),
|
||||
})
|
||||
if err != nil {
|
||||
return fmt.Errorf("decompression failed: %w", err)
|
||||
}
|
||||
backupPath = decompressedPath
|
||||
}
|
||||
|
||||
cmd = []string{"sh", "-c", fmt.Sprintf("mariadb -u root --password=root %s < %s", config.DatabaseName, backupPath)}
|
||||
|
||||
default:
|
||||
|
||||
@ -345,8 +345,10 @@ func (e *MySQLDumpEngine) Restore(ctx context.Context, opts *RestoreOptions) err
|
||||
// Build mysql command
|
||||
args := []string{}
|
||||
|
||||
// Connection parameters
|
||||
if e.config.Host != "" && e.config.Host != "localhost" {
|
||||
// Connection parameters - socket takes priority over host
|
||||
if e.config.Socket != "" {
|
||||
args = append(args, "-S", e.config.Socket)
|
||||
} else if e.config.Host != "" && e.config.Host != "localhost" {
|
||||
args = append(args, "-h", e.config.Host)
|
||||
args = append(args, "-P", strconv.Itoa(e.config.Port))
|
||||
}
|
||||
@ -494,8 +496,10 @@ func (e *MySQLDumpEngine) BackupToWriter(ctx context.Context, w io.Writer, opts
|
||||
func (e *MySQLDumpEngine) buildArgs(database string) []string {
|
||||
args := []string{}
|
||||
|
||||
// Connection parameters
|
||||
if e.config.Host != "" && e.config.Host != "localhost" {
|
||||
// Connection parameters - socket takes priority over host
|
||||
if e.config.Socket != "" {
|
||||
args = append(args, "-S", e.config.Socket)
|
||||
} else if e.config.Host != "" && e.config.Host != "localhost" {
|
||||
args = append(args, "-h", e.config.Host)
|
||||
args = append(args, "-P", strconv.Itoa(e.config.Port))
|
||||
}
|
||||
|
||||
@ -8,10 +8,13 @@ import (
|
||||
"io"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"runtime"
|
||||
"sort"
|
||||
"sync"
|
||||
"sync/atomic"
|
||||
"time"
|
||||
|
||||
"github.com/klauspost/pgzip"
|
||||
)
|
||||
|
||||
// Table represents a database table
|
||||
@ -599,21 +602,19 @@ func escapeString(s string) string {
|
||||
return string(result)
|
||||
}
|
||||
|
||||
// gzipWriter wraps compress/gzip
|
||||
// gzipWriter wraps pgzip for parallel compression
|
||||
type gzipWriter struct {
|
||||
io.WriteCloser
|
||||
*pgzip.Writer
|
||||
}
|
||||
|
||||
func newGzipWriter(w io.Writer) (*gzipWriter, error) {
|
||||
// Import would be: import "compress/gzip"
|
||||
// For now, return a passthrough (actual implementation would use gzip)
|
||||
return &gzipWriter{
|
||||
WriteCloser: &nopCloser{w},
|
||||
}, nil
|
||||
gz, err := pgzip.NewWriterLevel(w, pgzip.BestSpeed)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to create pgzip writer: %w", err)
|
||||
}
|
||||
// Use all CPUs for parallel compression
|
||||
if err := gz.SetConcurrency(256*1024, runtime.NumCPU()); err != nil {
|
||||
// Non-fatal, continue with defaults
|
||||
}
|
||||
return &gzipWriter{Writer: gz}, nil
|
||||
}
|
||||
|
||||
type nopCloser struct {
|
||||
io.Writer
|
||||
}
|
||||
|
||||
func (n *nopCloser) Close() error { return nil }
|
||||
|
||||
@ -14,10 +14,12 @@ import (
|
||||
|
||||
// Exporter provides an HTTP endpoint for Prometheus metrics
|
||||
type Exporter struct {
|
||||
log logger.Logger
|
||||
catalog catalog.Catalog
|
||||
instance string
|
||||
port int
|
||||
log logger.Logger
|
||||
catalog catalog.Catalog
|
||||
instance string
|
||||
port int
|
||||
version string
|
||||
gitCommit string
|
||||
|
||||
mu sync.RWMutex
|
||||
cachedData string
|
||||
@ -36,6 +38,19 @@ func NewExporter(log logger.Logger, cat catalog.Catalog, instance string, port i
|
||||
}
|
||||
}
|
||||
|
||||
// NewExporterWithVersion creates a new Prometheus exporter with version info
|
||||
func NewExporterWithVersion(log logger.Logger, cat catalog.Catalog, instance string, port int, version, gitCommit string) *Exporter {
|
||||
return &Exporter{
|
||||
log: log,
|
||||
catalog: cat,
|
||||
instance: instance,
|
||||
port: port,
|
||||
version: version,
|
||||
gitCommit: gitCommit,
|
||||
refreshTTL: 30 * time.Second,
|
||||
}
|
||||
}
|
||||
|
||||
// Serve starts the HTTP server and blocks until context is cancelled
|
||||
func (e *Exporter) Serve(ctx context.Context) error {
|
||||
mux := http.NewServeMux()
|
||||
@ -158,7 +173,7 @@ func (e *Exporter) refreshLoop(ctx context.Context) {
|
||||
|
||||
// refresh updates the cached metrics
|
||||
func (e *Exporter) refresh() error {
|
||||
writer := NewMetricsWriter(e.log, e.catalog, e.instance)
|
||||
writer := NewMetricsWriterWithVersion(e.log, e.catalog, e.instance, e.version, e.gitCommit)
|
||||
data, err := writer.GenerateMetricsString()
|
||||
if err != nil {
|
||||
return err
|
||||
|
||||
@ -16,17 +16,32 @@ import (
|
||||
|
||||
// MetricsWriter writes metrics in Prometheus text format
|
||||
type MetricsWriter struct {
|
||||
log logger.Logger
|
||||
catalog catalog.Catalog
|
||||
instance string
|
||||
log logger.Logger
|
||||
catalog catalog.Catalog
|
||||
instance string
|
||||
version string
|
||||
gitCommit string
|
||||
}
|
||||
|
||||
// NewMetricsWriter creates a new MetricsWriter
|
||||
func NewMetricsWriter(log logger.Logger, cat catalog.Catalog, instance string) *MetricsWriter {
|
||||
return &MetricsWriter{
|
||||
log: log,
|
||||
catalog: cat,
|
||||
instance: instance,
|
||||
log: log,
|
||||
catalog: cat,
|
||||
instance: instance,
|
||||
version: "unknown",
|
||||
gitCommit: "unknown",
|
||||
}
|
||||
}
|
||||
|
||||
// NewMetricsWriterWithVersion creates a MetricsWriter with version info for build_info metric
|
||||
func NewMetricsWriterWithVersion(log logger.Logger, cat catalog.Catalog, instance, version, gitCommit string) *MetricsWriter {
|
||||
return &MetricsWriter{
|
||||
log: log,
|
||||
catalog: cat,
|
||||
instance: instance,
|
||||
version: version,
|
||||
gitCommit: gitCommit,
|
||||
}
|
||||
}
|
||||
|
||||
@ -193,6 +208,13 @@ func (m *MetricsWriter) formatMetrics(metrics []BackupMetrics) string {
|
||||
b.WriteString(fmt.Sprintf("# Server: %s\n", m.instance))
|
||||
b.WriteString("\n")
|
||||
|
||||
// dbbackup_build_info - version and build information
|
||||
b.WriteString("# HELP dbbackup_build_info Build information for dbbackup exporter\n")
|
||||
b.WriteString("# TYPE dbbackup_build_info gauge\n")
|
||||
b.WriteString(fmt.Sprintf("dbbackup_build_info{server=%q,version=%q,commit=%q} 1\n",
|
||||
m.instance, m.version, m.gitCommit))
|
||||
b.WriteString("\n")
|
||||
|
||||
// dbbackup_last_success_timestamp
|
||||
b.WriteString("# HELP dbbackup_last_success_timestamp Unix timestamp of last successful backup\n")
|
||||
b.WriteString("# TYPE dbbackup_last_success_timestamp gauge\n")
|
||||
|
||||
@ -2,6 +2,7 @@ package restore
|
||||
|
||||
import (
|
||||
"archive/tar"
|
||||
"bufio"
|
||||
"context"
|
||||
"database/sql"
|
||||
"fmt"
|
||||
@ -481,27 +482,14 @@ func (e *Engine) restorePostgreSQLSQL(ctx context.Context, archivePath, targetDB
|
||||
var cmd []string
|
||||
|
||||
// For localhost, omit -h to use Unix socket (avoids Ident auth issues)
|
||||
// But always include -p for port (in case of non-standard port)
|
||||
hostArg := ""
|
||||
portArg := fmt.Sprintf("-p %d", e.cfg.Port)
|
||||
if e.cfg.Host != "localhost" && e.cfg.Host != "" {
|
||||
hostArg = fmt.Sprintf("-h %s", e.cfg.Host)
|
||||
}
|
||||
|
||||
if compressed {
|
||||
// NOTE: We do NOT use ON_ERROR_STOP=1 because:
|
||||
// 1. We pre-validate dumps above to catch truncation/corruption
|
||||
// 2. ON_ERROR_STOP=1 would fail on harmless "role does not exist" errors
|
||||
// 3. We handle errors in executeRestoreCommand with proper classification
|
||||
psqlCmd := fmt.Sprintf("psql %s -U %s -d %s", portArg, e.cfg.User, targetDB)
|
||||
if hostArg != "" {
|
||||
psqlCmd = fmt.Sprintf("psql %s %s -U %s -d %s", hostArg, portArg, e.cfg.User, targetDB)
|
||||
}
|
||||
// Set PGPASSWORD in the bash command for password-less auth
|
||||
cmd = []string{
|
||||
"bash", "-c",
|
||||
fmt.Sprintf("PGPASSWORD='%s' gunzip -c %s | %s", e.cfg.Password, archivePath, psqlCmd),
|
||||
}
|
||||
// Use in-process pgzip decompression (parallel, no external process)
|
||||
return e.executeRestoreWithPgzipStream(ctx, archivePath, targetDB, "postgresql")
|
||||
} else {
|
||||
// NOTE: We do NOT use ON_ERROR_STOP=1 (see above)
|
||||
if hostArg != "" {
|
||||
@ -534,11 +522,8 @@ func (e *Engine) restoreMySQLSQL(ctx context.Context, archivePath, targetDB stri
|
||||
cmd := e.db.BuildRestoreCommand(targetDB, archivePath, options)
|
||||
|
||||
if compressed {
|
||||
// For compressed SQL, decompress on the fly
|
||||
cmd = []string{
|
||||
"bash", "-c",
|
||||
fmt.Sprintf("gunzip -c %s | %s", archivePath, strings.Join(cmd, " ")),
|
||||
}
|
||||
// Use in-process pgzip decompression (parallel, no external process)
|
||||
return e.executeRestoreWithPgzipStream(ctx, archivePath, targetDB, "mysql")
|
||||
}
|
||||
|
||||
return e.executeRestoreCommand(ctx, cmd)
|
||||
@ -714,25 +699,38 @@ func (e *Engine) executeRestoreCommandWithContext(ctx context.Context, cmdArgs [
|
||||
return nil
|
||||
}
|
||||
|
||||
// executeRestoreWithDecompression handles decompression during restore
|
||||
// executeRestoreWithDecompression handles decompression during restore using in-process pgzip
|
||||
func (e *Engine) executeRestoreWithDecompression(ctx context.Context, archivePath string, restoreCmd []string) error {
|
||||
// Check if pigz is available for faster decompression
|
||||
decompressCmd := "gunzip"
|
||||
if _, err := exec.LookPath("pigz"); err == nil {
|
||||
decompressCmd = "pigz"
|
||||
e.log.Info("Using pigz for parallel decompression")
|
||||
e.log.Info("Using in-process pgzip decompression (parallel)", "archive", archivePath)
|
||||
|
||||
// Open the gzip file
|
||||
file, err := os.Open(archivePath)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to open archive: %w", err)
|
||||
}
|
||||
defer file.Close()
|
||||
|
||||
// Build pipeline: decompress | restore
|
||||
pipeline := fmt.Sprintf("%s -dc %s | %s", decompressCmd, archivePath, strings.Join(restoreCmd, " "))
|
||||
cmd := exec.CommandContext(ctx, "bash", "-c", pipeline)
|
||||
// Create parallel gzip reader
|
||||
gz, err := pgzip.NewReader(file)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to create pgzip reader: %w", err)
|
||||
}
|
||||
defer gz.Close()
|
||||
|
||||
// Start restore command
|
||||
cmd := exec.CommandContext(ctx, restoreCmd[0], restoreCmd[1:]...)
|
||||
cmd.Env = append(os.Environ(),
|
||||
fmt.Sprintf("PGPASSWORD=%s", e.cfg.Password),
|
||||
fmt.Sprintf("MYSQL_PWD=%s", e.cfg.Password),
|
||||
)
|
||||
|
||||
// Stream stderr to avoid memory issues with large output
|
||||
// Pipe decompressed data to restore command stdin
|
||||
stdin, err := cmd.StdinPipe()
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to create stdin pipe: %w", err)
|
||||
}
|
||||
|
||||
// Capture stderr
|
||||
stderr, err := cmd.StderrPipe()
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to create stderr pipe: %w", err)
|
||||
@ -742,81 +740,169 @@ func (e *Engine) executeRestoreWithDecompression(ctx context.Context, archivePat
|
||||
return fmt.Errorf("failed to start restore command: %w", err)
|
||||
}
|
||||
|
||||
// Read stderr in goroutine to avoid blocking
|
||||
// Stream decompressed data to restore command in goroutine
|
||||
copyDone := make(chan error, 1)
|
||||
go func() {
|
||||
_, copyErr := io.Copy(stdin, gz)
|
||||
stdin.Close()
|
||||
copyDone <- copyErr
|
||||
}()
|
||||
|
||||
// Read stderr in goroutine
|
||||
var lastError string
|
||||
var errorCount int
|
||||
stderrDone := make(chan struct{})
|
||||
go func() {
|
||||
defer close(stderrDone)
|
||||
buf := make([]byte, 4096)
|
||||
const maxErrors = 10 // Limit captured errors to prevent OOM
|
||||
for {
|
||||
n, err := stderr.Read(buf)
|
||||
if n > 0 {
|
||||
chunk := string(buf[:n])
|
||||
// Only capture REAL errors, not verbose output
|
||||
if strings.Contains(chunk, "ERROR:") || strings.Contains(chunk, "FATAL:") || strings.Contains(chunk, "error:") {
|
||||
lastError = strings.TrimSpace(chunk)
|
||||
errorCount++
|
||||
if errorCount <= maxErrors {
|
||||
e.log.Warn("Restore stderr", "output", chunk)
|
||||
}
|
||||
}
|
||||
// Note: --verbose output is discarded to prevent OOM
|
||||
}
|
||||
if err != nil {
|
||||
break
|
||||
scanner := bufio.NewScanner(stderr)
|
||||
// Increase buffer size for long lines
|
||||
buf := make([]byte, 64*1024)
|
||||
scanner.Buffer(buf, 1024*1024)
|
||||
for scanner.Scan() {
|
||||
line := scanner.Text()
|
||||
if strings.Contains(strings.ToLower(line), "error") ||
|
||||
strings.Contains(line, "ERROR") ||
|
||||
strings.Contains(line, "FATAL") {
|
||||
lastError = line
|
||||
errorCount++
|
||||
e.log.Debug("Restore stderr", "line", line)
|
||||
}
|
||||
}
|
||||
}()
|
||||
|
||||
// Wait for command with proper context handling
|
||||
cmdDone := make(chan error, 1)
|
||||
go func() {
|
||||
cmdDone <- cmd.Wait()
|
||||
}()
|
||||
// Wait for copy to complete
|
||||
copyErr := <-copyDone
|
||||
|
||||
var cmdErr error
|
||||
select {
|
||||
case cmdErr = <-cmdDone:
|
||||
// Command completed (success or failure)
|
||||
case <-ctx.Done():
|
||||
// Context cancelled - kill process
|
||||
e.log.Warn("Restore with decompression cancelled - killing process")
|
||||
cmd.Process.Kill()
|
||||
<-cmdDone
|
||||
cmdErr = ctx.Err()
|
||||
}
|
||||
|
||||
// Wait for stderr reader to finish
|
||||
// Wait for command
|
||||
cmdErr := cmd.Wait()
|
||||
<-stderrDone
|
||||
|
||||
if cmdErr != nil {
|
||||
// PostgreSQL pg_restore returns exit code 1 even for ignorable errors
|
||||
// Check if errors are ignorable (already exists, duplicate, etc.)
|
||||
if lastError != "" && e.isIgnorableError(lastError) {
|
||||
e.log.Warn("Restore with decompression completed with ignorable errors", "error_count", errorCount, "last_error", lastError)
|
||||
return nil // Success despite ignorable errors
|
||||
}
|
||||
if copyErr != nil && cmdErr == nil {
|
||||
return fmt.Errorf("decompression failed: %w", copyErr)
|
||||
}
|
||||
|
||||
// Classify error and provide helpful hints
|
||||
if cmdErr != nil {
|
||||
if lastError != "" && e.isIgnorableError(lastError) {
|
||||
e.log.Warn("Restore completed with ignorable errors", "error_count", errorCount)
|
||||
return nil
|
||||
}
|
||||
if lastError != "" {
|
||||
classification := checks.ClassifyError(lastError)
|
||||
e.log.Error("Restore with decompression failed",
|
||||
"error", cmdErr,
|
||||
"last_stderr", lastError,
|
||||
"error_count", errorCount,
|
||||
"error_type", classification.Type,
|
||||
"hint", classification.Hint,
|
||||
"action", classification.Action)
|
||||
return fmt.Errorf("restore failed: %w (last error: %s, total errors: %d) - %s",
|
||||
cmdErr, lastError, errorCount, classification.Hint)
|
||||
return fmt.Errorf("restore failed: %w (last error: %s) - %s", cmdErr, lastError, classification.Hint)
|
||||
}
|
||||
|
||||
e.log.Error("Restore with decompression failed", "error", cmdErr, "last_stderr", lastError, "error_count", errorCount)
|
||||
return fmt.Errorf("restore failed: %w", cmdErr)
|
||||
}
|
||||
|
||||
e.log.Info("Restore with pgzip decompression completed successfully")
|
||||
return nil
|
||||
}
|
||||
|
||||
// executeRestoreWithPgzipStream handles SQL restore with in-process pgzip decompression
|
||||
func (e *Engine) executeRestoreWithPgzipStream(ctx context.Context, archivePath, targetDB, dbType string) error {
|
||||
e.log.Info("Using in-process pgzip stream for SQL restore", "archive", archivePath, "database", targetDB, "type", dbType)
|
||||
|
||||
// Open the gzip file
|
||||
file, err := os.Open(archivePath)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to open archive: %w", err)
|
||||
}
|
||||
defer file.Close()
|
||||
|
||||
// Create parallel gzip reader
|
||||
gz, err := pgzip.NewReader(file)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to create pgzip reader: %w", err)
|
||||
}
|
||||
defer gz.Close()
|
||||
|
||||
// Build restore command based on database type
|
||||
var cmd *exec.Cmd
|
||||
if dbType == "postgresql" {
|
||||
args := []string{"-p", fmt.Sprintf("%d", e.cfg.Port), "-U", e.cfg.User, "-d", targetDB}
|
||||
if e.cfg.Host != "localhost" && e.cfg.Host != "" {
|
||||
args = append([]string{"-h", e.cfg.Host}, args...)
|
||||
}
|
||||
cmd = exec.CommandContext(ctx, "psql", args...)
|
||||
cmd.Env = append(os.Environ(), fmt.Sprintf("PGPASSWORD=%s", e.cfg.Password))
|
||||
} else {
|
||||
// MySQL
|
||||
args := []string{"-u", e.cfg.User, "-p" + e.cfg.Password}
|
||||
if e.cfg.Host != "localhost" && e.cfg.Host != "" {
|
||||
args = append(args, "-h", e.cfg.Host)
|
||||
}
|
||||
args = append(args, "-P", fmt.Sprintf("%d", e.cfg.Port), targetDB)
|
||||
cmd = exec.CommandContext(ctx, "mysql", args...)
|
||||
}
|
||||
|
||||
// Pipe decompressed data to restore command stdin
|
||||
stdin, err := cmd.StdinPipe()
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to create stdin pipe: %w", err)
|
||||
}
|
||||
|
||||
// Capture stderr
|
||||
stderr, err := cmd.StderrPipe()
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to create stderr pipe: %w", err)
|
||||
}
|
||||
|
||||
if err := cmd.Start(); err != nil {
|
||||
return fmt.Errorf("failed to start restore command: %w", err)
|
||||
}
|
||||
|
||||
// Stream decompressed data to restore command in goroutine
|
||||
copyDone := make(chan error, 1)
|
||||
go func() {
|
||||
_, copyErr := io.Copy(stdin, gz)
|
||||
stdin.Close()
|
||||
copyDone <- copyErr
|
||||
}()
|
||||
|
||||
// Read stderr in goroutine
|
||||
var lastError string
|
||||
var errorCount int
|
||||
stderrDone := make(chan struct{})
|
||||
go func() {
|
||||
defer close(stderrDone)
|
||||
scanner := bufio.NewScanner(stderr)
|
||||
buf := make([]byte, 64*1024)
|
||||
scanner.Buffer(buf, 1024*1024)
|
||||
for scanner.Scan() {
|
||||
line := scanner.Text()
|
||||
if strings.Contains(strings.ToLower(line), "error") ||
|
||||
strings.Contains(line, "ERROR") ||
|
||||
strings.Contains(line, "FATAL") {
|
||||
lastError = line
|
||||
errorCount++
|
||||
e.log.Debug("Restore stderr", "line", line)
|
||||
}
|
||||
}
|
||||
}()
|
||||
|
||||
// Wait for copy to complete
|
||||
copyErr := <-copyDone
|
||||
|
||||
// Wait for command
|
||||
cmdErr := cmd.Wait()
|
||||
<-stderrDone
|
||||
|
||||
if copyErr != nil && cmdErr == nil {
|
||||
return fmt.Errorf("pgzip decompression failed: %w", copyErr)
|
||||
}
|
||||
|
||||
if cmdErr != nil {
|
||||
if lastError != "" && e.isIgnorableError(lastError) {
|
||||
e.log.Warn("SQL restore completed with ignorable errors", "error_count", errorCount)
|
||||
return nil
|
||||
}
|
||||
if lastError != "" {
|
||||
classification := checks.ClassifyError(lastError)
|
||||
return fmt.Errorf("restore failed: %w (last error: %s) - %s", cmdErr, lastError, classification.Hint)
|
||||
}
|
||||
return fmt.Errorf("restore failed: %w", cmdErr)
|
||||
}
|
||||
|
||||
e.log.Info("SQL restore with pgzip stream completed successfully")
|
||||
return nil
|
||||
}
|
||||
|
||||
@ -952,6 +1038,29 @@ func (e *Engine) RestoreSingleFromCluster(ctx context.Context, clusterArchivePat
|
||||
func (e *Engine) RestoreCluster(ctx context.Context, archivePath string, preExtractedPath ...string) error {
|
||||
operation := e.log.StartOperation("Cluster Restore")
|
||||
|
||||
// 🚀 LOG ACTUAL PERFORMANCE SETTINGS - helps debug slow restores
|
||||
profile := e.cfg.GetCurrentProfile()
|
||||
if profile != nil {
|
||||
e.log.Info("🚀 RESTORE PERFORMANCE SETTINGS",
|
||||
"profile", profile.Name,
|
||||
"cluster_parallelism", profile.ClusterParallelism,
|
||||
"pg_restore_jobs", profile.Jobs,
|
||||
"large_db_mode", e.cfg.LargeDBMode,
|
||||
"buffered_io", profile.BufferedIO)
|
||||
} else {
|
||||
e.log.Info("🚀 RESTORE PERFORMANCE SETTINGS (raw config)",
|
||||
"profile", e.cfg.ResourceProfile,
|
||||
"cluster_parallelism", e.cfg.ClusterParallelism,
|
||||
"pg_restore_jobs", e.cfg.Jobs,
|
||||
"large_db_mode", e.cfg.LargeDBMode)
|
||||
}
|
||||
|
||||
// Also show in progress bar for TUI visibility
|
||||
if !e.silentMode {
|
||||
fmt.Printf("\n⚡ Performance: profile=%s, parallel_dbs=%d, pg_restore_jobs=%d\n\n",
|
||||
e.cfg.ResourceProfile, e.cfg.ClusterParallelism, e.cfg.Jobs)
|
||||
}
|
||||
|
||||
// Validate and sanitize archive path
|
||||
validArchivePath, pathErr := security.ValidateArchivePath(archivePath)
|
||||
if pathErr != nil {
|
||||
@ -1543,7 +1652,7 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string, preExtr
|
||||
var restoreErr error
|
||||
if isCompressedSQL {
|
||||
mu.Lock()
|
||||
e.log.Info("Detected compressed SQL format, using psql + gunzip", "file", dumpFile, "database", dbName)
|
||||
e.log.Info("Detected compressed SQL format, using psql + pgzip", "file", dumpFile, "database", dbName)
|
||||
mu.Unlock()
|
||||
restoreErr = e.restorePostgreSQLSQL(ctx, dumpFile, dbName, true)
|
||||
} else {
|
||||
@ -1798,10 +1907,22 @@ func (e *Engine) extractArchiveWithProgress(ctx context.Context, archivePath, de
|
||||
return fmt.Errorf("failed to create file %s: %w", targetPath, err)
|
||||
}
|
||||
|
||||
// Copy file contents
|
||||
if _, err := io.Copy(outFile, tarReader); err != nil {
|
||||
outFile.Close()
|
||||
return fmt.Errorf("failed to write file %s: %w", targetPath, err)
|
||||
// Copy file contents - use buffered I/O for turbo mode (32KB buffer)
|
||||
if e.cfg.BufferedIO {
|
||||
bufferedWriter := bufio.NewWriterSize(outFile, 32*1024) // 32KB buffer for faster writes
|
||||
if _, err := io.Copy(bufferedWriter, tarReader); err != nil {
|
||||
outFile.Close()
|
||||
return fmt.Errorf("failed to write file %s: %w", targetPath, err)
|
||||
}
|
||||
if err := bufferedWriter.Flush(); err != nil {
|
||||
outFile.Close()
|
||||
return fmt.Errorf("failed to flush buffer for %s: %w", targetPath, err)
|
||||
}
|
||||
} else {
|
||||
if _, err := io.Copy(outFile, tarReader); err != nil {
|
||||
outFile.Close()
|
||||
return fmt.Errorf("failed to write file %s: %w", targetPath, err)
|
||||
}
|
||||
}
|
||||
outFile.Close()
|
||||
case tar.TypeSymlink:
|
||||
|
||||
644
internal/tui/health.go
Normal file
644
internal/tui/health.go
Normal file
@ -0,0 +1,644 @@
|
||||
package tui
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
tea "github.com/charmbracelet/bubbletea"
|
||||
|
||||
"dbbackup/internal/catalog"
|
||||
"dbbackup/internal/checks"
|
||||
"dbbackup/internal/config"
|
||||
"dbbackup/internal/database"
|
||||
"dbbackup/internal/logger"
|
||||
)
|
||||
|
||||
// HealthStatus represents overall health
|
||||
type HealthStatus string
|
||||
|
||||
const (
|
||||
HealthStatusHealthy HealthStatus = "healthy"
|
||||
HealthStatusWarning HealthStatus = "warning"
|
||||
HealthStatusCritical HealthStatus = "critical"
|
||||
)
|
||||
|
||||
// TUIHealthCheck represents a single health check result
|
||||
type TUIHealthCheck struct {
|
||||
Name string
|
||||
Status HealthStatus
|
||||
Message string
|
||||
Details string
|
||||
}
|
||||
|
||||
// HealthViewModel shows comprehensive health check
|
||||
type HealthViewModel struct {
|
||||
config *config.Config
|
||||
logger logger.Logger
|
||||
parent tea.Model
|
||||
ctx context.Context
|
||||
loading bool
|
||||
checks []TUIHealthCheck
|
||||
overallStatus HealthStatus
|
||||
recommendations []string
|
||||
err error
|
||||
scrollOffset int
|
||||
}
|
||||
|
||||
// NewHealthView creates a new health view
|
||||
func NewHealthView(cfg *config.Config, log logger.Logger, parent tea.Model, ctx context.Context) *HealthViewModel {
|
||||
return &HealthViewModel{
|
||||
config: cfg,
|
||||
logger: log,
|
||||
parent: parent,
|
||||
ctx: ctx,
|
||||
loading: true,
|
||||
checks: []TUIHealthCheck{},
|
||||
}
|
||||
}
|
||||
|
||||
// healthResultMsg contains all health check results
|
||||
type healthResultMsg struct {
|
||||
checks []TUIHealthCheck
|
||||
overallStatus HealthStatus
|
||||
recommendations []string
|
||||
err error
|
||||
}
|
||||
|
||||
func (m *HealthViewModel) Init() tea.Cmd {
|
||||
return tea.Batch(
|
||||
m.runHealthChecks(),
|
||||
tickCmd(),
|
||||
)
|
||||
}
|
||||
|
||||
func (m *HealthViewModel) runHealthChecks() tea.Cmd {
|
||||
return func() tea.Msg {
|
||||
var checks []TUIHealthCheck
|
||||
var recommendations []string
|
||||
interval := 24 * time.Hour
|
||||
|
||||
// 1. Configuration check
|
||||
checks = append(checks, m.checkConfiguration())
|
||||
|
||||
// 2. Database connectivity
|
||||
checks = append(checks, m.checkDatabaseConnectivity())
|
||||
|
||||
// 3. Backup directory check
|
||||
checks = append(checks, m.checkBackupDir())
|
||||
|
||||
// 4. Catalog integrity check
|
||||
catalogCheck, cat := m.checkCatalogIntegrity()
|
||||
checks = append(checks, catalogCheck)
|
||||
|
||||
if cat != nil {
|
||||
defer cat.Close()
|
||||
|
||||
// 5. Backup freshness check
|
||||
checks = append(checks, m.checkBackupFreshness(cat, interval))
|
||||
|
||||
// 6. Gap detection
|
||||
checks = append(checks, m.checkBackupGaps(cat, interval))
|
||||
|
||||
// 7. Verification status
|
||||
checks = append(checks, m.checkVerificationStatus(cat))
|
||||
|
||||
// 8. File integrity (sampling)
|
||||
checks = append(checks, m.checkFileIntegrity(cat))
|
||||
|
||||
// 9. Orphaned entries
|
||||
checks = append(checks, m.checkOrphanedEntries(cat))
|
||||
}
|
||||
|
||||
// 10. Disk space
|
||||
checks = append(checks, m.checkDiskSpace())
|
||||
|
||||
// Calculate overall status
|
||||
overallStatus := m.calculateOverallStatus(checks)
|
||||
|
||||
// Generate recommendations
|
||||
recommendations = m.generateRecommendations(checks)
|
||||
|
||||
return healthResultMsg{
|
||||
checks: checks,
|
||||
overallStatus: overallStatus,
|
||||
recommendations: recommendations,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (m *HealthViewModel) calculateOverallStatus(checks []TUIHealthCheck) HealthStatus {
|
||||
for _, check := range checks {
|
||||
if check.Status == HealthStatusCritical {
|
||||
return HealthStatusCritical
|
||||
}
|
||||
}
|
||||
for _, check := range checks {
|
||||
if check.Status == HealthStatusWarning {
|
||||
return HealthStatusWarning
|
||||
}
|
||||
}
|
||||
return HealthStatusHealthy
|
||||
}
|
||||
|
||||
func (m *HealthViewModel) generateRecommendations(checks []TUIHealthCheck) []string {
|
||||
var recs []string
|
||||
for _, check := range checks {
|
||||
switch {
|
||||
case check.Name == "Backup Freshness" && check.Status != HealthStatusHealthy:
|
||||
recs = append(recs, "Run a backup: dbbackup backup cluster")
|
||||
case check.Name == "Verification Status" && check.Status != HealthStatusHealthy:
|
||||
recs = append(recs, "Verify backups: dbbackup verify-backup")
|
||||
case check.Name == "Disk Space" && check.Status != HealthStatusHealthy:
|
||||
recs = append(recs, "Free space: dbbackup cleanup")
|
||||
case check.Name == "Backup Gaps" && check.Status == HealthStatusCritical:
|
||||
recs = append(recs, "Review backup schedule and cron")
|
||||
case check.Name == "Orphaned Entries" && check.Status != HealthStatusHealthy:
|
||||
recs = append(recs, "Clean orphans: dbbackup catalog cleanup")
|
||||
case check.Name == "Database Connectivity" && check.Status != HealthStatusHealthy:
|
||||
recs = append(recs, "Check .dbbackup.conf settings")
|
||||
}
|
||||
}
|
||||
return recs
|
||||
}
|
||||
|
||||
// Individual health checks
|
||||
|
||||
func (m *HealthViewModel) checkConfiguration() TUIHealthCheck {
|
||||
check := TUIHealthCheck{
|
||||
Name: "Configuration",
|
||||
Status: HealthStatusHealthy,
|
||||
}
|
||||
|
||||
if err := m.config.Validate(); err != nil {
|
||||
check.Status = HealthStatusCritical
|
||||
check.Message = "Configuration invalid"
|
||||
check.Details = err.Error()
|
||||
return check
|
||||
}
|
||||
|
||||
check.Message = "Configuration valid"
|
||||
return check
|
||||
}
|
||||
|
||||
func (m *HealthViewModel) checkDatabaseConnectivity() TUIHealthCheck {
|
||||
check := TUIHealthCheck{
|
||||
Name: "Database Connectivity",
|
||||
Status: HealthStatusHealthy,
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithTimeout(m.ctx, 10*time.Second)
|
||||
defer cancel()
|
||||
|
||||
db, err := database.New(m.config, m.logger)
|
||||
if err != nil {
|
||||
check.Status = HealthStatusCritical
|
||||
check.Message = "Failed to create DB client"
|
||||
check.Details = err.Error()
|
||||
return check
|
||||
}
|
||||
defer db.Close()
|
||||
|
||||
if err := db.Connect(ctx); err != nil {
|
||||
check.Status = HealthStatusCritical
|
||||
check.Message = "Cannot connect to database"
|
||||
check.Details = err.Error()
|
||||
return check
|
||||
}
|
||||
|
||||
version, _ := db.GetVersion(ctx)
|
||||
check.Message = "Connected successfully"
|
||||
check.Details = version
|
||||
|
||||
return check
|
||||
}
|
||||
|
||||
func (m *HealthViewModel) checkBackupDir() TUIHealthCheck {
|
||||
check := TUIHealthCheck{
|
||||
Name: "Backup Directory",
|
||||
Status: HealthStatusHealthy,
|
||||
}
|
||||
|
||||
info, err := os.Stat(m.config.BackupDir)
|
||||
if err != nil {
|
||||
if os.IsNotExist(err) {
|
||||
check.Status = HealthStatusWarning
|
||||
check.Message = "Directory does not exist"
|
||||
check.Details = m.config.BackupDir
|
||||
} else {
|
||||
check.Status = HealthStatusCritical
|
||||
check.Message = "Cannot access directory"
|
||||
check.Details = err.Error()
|
||||
}
|
||||
return check
|
||||
}
|
||||
|
||||
if !info.IsDir() {
|
||||
check.Status = HealthStatusCritical
|
||||
check.Message = "Path is not a directory"
|
||||
check.Details = m.config.BackupDir
|
||||
return check
|
||||
}
|
||||
|
||||
// Check writability
|
||||
testFile := filepath.Join(m.config.BackupDir, ".health_check_test")
|
||||
if err := os.WriteFile(testFile, []byte("test"), 0644); err != nil {
|
||||
check.Status = HealthStatusCritical
|
||||
check.Message = "Directory not writable"
|
||||
check.Details = err.Error()
|
||||
return check
|
||||
}
|
||||
os.Remove(testFile)
|
||||
|
||||
check.Message = "Directory accessible"
|
||||
check.Details = m.config.BackupDir
|
||||
|
||||
return check
|
||||
}
|
||||
|
||||
func (m *HealthViewModel) checkCatalogIntegrity() (TUIHealthCheck, *catalog.SQLiteCatalog) {
|
||||
check := TUIHealthCheck{
|
||||
Name: "Catalog Integrity",
|
||||
Status: HealthStatusHealthy,
|
||||
}
|
||||
|
||||
catalogPath := filepath.Join(m.config.BackupDir, "dbbackup.db")
|
||||
cat, err := catalog.NewSQLiteCatalog(catalogPath)
|
||||
if err != nil {
|
||||
check.Status = HealthStatusWarning
|
||||
check.Message = "Catalog not available"
|
||||
check.Details = err.Error()
|
||||
return check, nil
|
||||
}
|
||||
|
||||
// Try a simple query to verify integrity
|
||||
stats, err := cat.Stats(m.ctx)
|
||||
if err != nil {
|
||||
check.Status = HealthStatusCritical
|
||||
check.Message = "Catalog corrupted"
|
||||
check.Details = err.Error()
|
||||
cat.Close()
|
||||
return check, nil
|
||||
}
|
||||
|
||||
check.Message = fmt.Sprintf("Healthy (%d backups)", stats.TotalBackups)
|
||||
check.Details = fmt.Sprintf("Size: %s", stats.TotalSizeHuman)
|
||||
|
||||
return check, cat
|
||||
}
|
||||
|
||||
func (m *HealthViewModel) checkBackupFreshness(cat *catalog.SQLiteCatalog, interval time.Duration) TUIHealthCheck {
|
||||
check := TUIHealthCheck{
|
||||
Name: "Backup Freshness",
|
||||
Status: HealthStatusHealthy,
|
||||
}
|
||||
|
||||
stats, err := cat.Stats(m.ctx)
|
||||
if err != nil {
|
||||
check.Status = HealthStatusWarning
|
||||
check.Message = "Cannot determine freshness"
|
||||
check.Details = err.Error()
|
||||
return check
|
||||
}
|
||||
|
||||
if stats.NewestBackup == nil {
|
||||
check.Status = HealthStatusCritical
|
||||
check.Message = "No backups found"
|
||||
return check
|
||||
}
|
||||
|
||||
age := time.Since(*stats.NewestBackup)
|
||||
|
||||
if age > interval*3 {
|
||||
check.Status = HealthStatusCritical
|
||||
check.Message = fmt.Sprintf("Last backup %s old (critical)", formatHealthDuration(age))
|
||||
check.Details = stats.NewestBackup.Format("2006-01-02 15:04")
|
||||
} else if age > interval {
|
||||
check.Status = HealthStatusWarning
|
||||
check.Message = fmt.Sprintf("Last backup %s old", formatHealthDuration(age))
|
||||
check.Details = stats.NewestBackup.Format("2006-01-02 15:04")
|
||||
} else {
|
||||
check.Message = fmt.Sprintf("Last backup %s ago", formatHealthDuration(age))
|
||||
check.Details = stats.NewestBackup.Format("2006-01-02 15:04")
|
||||
}
|
||||
|
||||
return check
|
||||
}
|
||||
|
||||
func (m *HealthViewModel) checkBackupGaps(cat *catalog.SQLiteCatalog, interval time.Duration) TUIHealthCheck {
|
||||
check := TUIHealthCheck{
|
||||
Name: "Backup Gaps",
|
||||
Status: HealthStatusHealthy,
|
||||
}
|
||||
|
||||
config := &catalog.GapDetectionConfig{
|
||||
ExpectedInterval: interval,
|
||||
Tolerance: interval / 4,
|
||||
RPOThreshold: interval * 2,
|
||||
}
|
||||
|
||||
allGaps, err := cat.DetectAllGaps(m.ctx, config)
|
||||
if err != nil {
|
||||
check.Status = HealthStatusWarning
|
||||
check.Message = "Gap detection failed"
|
||||
check.Details = err.Error()
|
||||
return check
|
||||
}
|
||||
|
||||
totalGaps := 0
|
||||
criticalGaps := 0
|
||||
for _, gaps := range allGaps {
|
||||
for _, gap := range gaps {
|
||||
totalGaps++
|
||||
if gap.Duration > interval*2 {
|
||||
criticalGaps++
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if criticalGaps > 0 {
|
||||
check.Status = HealthStatusCritical
|
||||
check.Message = fmt.Sprintf("%d critical gaps detected", criticalGaps)
|
||||
check.Details = fmt.Sprintf("Total gaps: %d", totalGaps)
|
||||
} else if totalGaps > 0 {
|
||||
check.Status = HealthStatusWarning
|
||||
check.Message = fmt.Sprintf("%d gaps detected", totalGaps)
|
||||
} else {
|
||||
check.Message = "No backup gaps"
|
||||
}
|
||||
|
||||
return check
|
||||
}
|
||||
|
||||
func (m *HealthViewModel) checkVerificationStatus(cat *catalog.SQLiteCatalog) TUIHealthCheck {
|
||||
check := TUIHealthCheck{
|
||||
Name: "Verification Status",
|
||||
Status: HealthStatusHealthy,
|
||||
}
|
||||
|
||||
stats, err := cat.Stats(m.ctx)
|
||||
if err != nil {
|
||||
check.Status = HealthStatusWarning
|
||||
check.Message = "Cannot check verification"
|
||||
check.Details = err.Error()
|
||||
return check
|
||||
}
|
||||
|
||||
if stats.TotalBackups == 0 {
|
||||
check.Message = "No backups to verify"
|
||||
return check
|
||||
}
|
||||
|
||||
verifiedPct := float64(stats.VerifiedCount) / float64(stats.TotalBackups) * 100
|
||||
|
||||
if verifiedPct < 50 {
|
||||
check.Status = HealthStatusWarning
|
||||
check.Message = fmt.Sprintf("Only %.0f%% verified", verifiedPct)
|
||||
check.Details = fmt.Sprintf("%d/%d backups verified", stats.VerifiedCount, stats.TotalBackups)
|
||||
} else {
|
||||
check.Message = fmt.Sprintf("%.0f%% verified", verifiedPct)
|
||||
check.Details = fmt.Sprintf("%d/%d backups", stats.VerifiedCount, stats.TotalBackups)
|
||||
}
|
||||
|
||||
return check
|
||||
}
|
||||
|
||||
func (m *HealthViewModel) checkFileIntegrity(cat *catalog.SQLiteCatalog) TUIHealthCheck {
|
||||
check := TUIHealthCheck{
|
||||
Name: "File Integrity",
|
||||
Status: HealthStatusHealthy,
|
||||
}
|
||||
|
||||
// Get recent backups using Search
|
||||
query := &catalog.SearchQuery{
|
||||
Limit: 5,
|
||||
OrderBy: "backup_date",
|
||||
OrderDesc: true,
|
||||
}
|
||||
backups, err := cat.Search(m.ctx, query)
|
||||
if err != nil {
|
||||
check.Status = HealthStatusWarning
|
||||
check.Message = "Cannot list backups"
|
||||
check.Details = err.Error()
|
||||
return check
|
||||
}
|
||||
|
||||
if len(backups) == 0 {
|
||||
check.Message = "No backups to check"
|
||||
return check
|
||||
}
|
||||
|
||||
missing := 0
|
||||
for _, backup := range backups {
|
||||
path := backup.BackupPath
|
||||
if path != "" {
|
||||
if _, err := os.Stat(path); os.IsNotExist(err) {
|
||||
missing++
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if missing > 0 {
|
||||
check.Status = HealthStatusCritical
|
||||
check.Message = fmt.Sprintf("%d/%d files missing", missing, len(backups))
|
||||
} else {
|
||||
check.Message = fmt.Sprintf("%d recent files verified", len(backups))
|
||||
}
|
||||
|
||||
return check
|
||||
}
|
||||
|
||||
func (m *HealthViewModel) checkOrphanedEntries(cat *catalog.SQLiteCatalog) TUIHealthCheck {
|
||||
check := TUIHealthCheck{
|
||||
Name: "Orphaned Entries",
|
||||
Status: HealthStatusHealthy,
|
||||
}
|
||||
|
||||
// Check for entries with missing files
|
||||
query := &catalog.SearchQuery{
|
||||
Limit: 20,
|
||||
OrderBy: "backup_date",
|
||||
OrderDesc: true,
|
||||
}
|
||||
backups, err := cat.Search(m.ctx, query)
|
||||
if err != nil {
|
||||
check.Status = HealthStatusWarning
|
||||
check.Message = "Cannot check orphans"
|
||||
check.Details = err.Error()
|
||||
return check
|
||||
}
|
||||
|
||||
orphanCount := 0
|
||||
for _, backup := range backups {
|
||||
if backup.BackupPath != "" {
|
||||
if _, err := os.Stat(backup.BackupPath); os.IsNotExist(err) {
|
||||
orphanCount++
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if orphanCount > 5 {
|
||||
check.Status = HealthStatusWarning
|
||||
check.Message = fmt.Sprintf("%d orphaned entries", orphanCount)
|
||||
check.Details = "Consider running catalog cleanup"
|
||||
} else if orphanCount > 0 {
|
||||
check.Message = fmt.Sprintf("%d orphaned entries", orphanCount)
|
||||
} else {
|
||||
check.Message = "No orphaned entries"
|
||||
}
|
||||
|
||||
return check
|
||||
}
|
||||
|
||||
func (m *HealthViewModel) checkDiskSpace() TUIHealthCheck {
|
||||
check := TUIHealthCheck{
|
||||
Name: "Disk Space",
|
||||
Status: HealthStatusHealthy,
|
||||
}
|
||||
|
||||
diskCheck := checks.CheckDiskSpace(m.config.BackupDir)
|
||||
|
||||
if diskCheck.Critical {
|
||||
check.Status = HealthStatusCritical
|
||||
check.Message = fmt.Sprintf("Disk %.0f%% full (critical)", diskCheck.UsedPercent)
|
||||
check.Details = fmt.Sprintf("Free: %s", formatHealthBytes(diskCheck.AvailableBytes))
|
||||
} else if diskCheck.Warning {
|
||||
check.Status = HealthStatusWarning
|
||||
check.Message = fmt.Sprintf("Disk %.0f%% full", diskCheck.UsedPercent)
|
||||
check.Details = fmt.Sprintf("Free: %s", formatHealthBytes(diskCheck.AvailableBytes))
|
||||
} else {
|
||||
check.Message = fmt.Sprintf("Disk %.0f%% used", diskCheck.UsedPercent)
|
||||
check.Details = fmt.Sprintf("Free: %s", formatHealthBytes(diskCheck.AvailableBytes))
|
||||
}
|
||||
|
||||
return check
|
||||
}
|
||||
|
||||
func (m *HealthViewModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
|
||||
switch msg := msg.(type) {
|
||||
case tickMsg:
|
||||
if m.loading {
|
||||
return m, tickCmd()
|
||||
}
|
||||
return m, nil
|
||||
|
||||
case healthResultMsg:
|
||||
m.loading = false
|
||||
m.checks = msg.checks
|
||||
m.overallStatus = msg.overallStatus
|
||||
m.recommendations = msg.recommendations
|
||||
m.err = msg.err
|
||||
return m, nil
|
||||
|
||||
case tea.KeyMsg:
|
||||
switch msg.String() {
|
||||
case "ctrl+c", "q", "esc", "enter":
|
||||
return m.parent, nil
|
||||
case "up", "k":
|
||||
if m.scrollOffset > 0 {
|
||||
m.scrollOffset--
|
||||
}
|
||||
case "down", "j":
|
||||
maxScroll := len(m.checks) + len(m.recommendations) - 5
|
||||
if maxScroll < 0 {
|
||||
maxScroll = 0
|
||||
}
|
||||
if m.scrollOffset < maxScroll {
|
||||
m.scrollOffset++
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return m, nil
|
||||
}
|
||||
|
||||
func (m *HealthViewModel) View() string {
|
||||
var s strings.Builder
|
||||
|
||||
header := titleStyle.Render("[HEALTH] System Health Check")
|
||||
s.WriteString(fmt.Sprintf("\n%s\n\n", header))
|
||||
|
||||
if m.loading {
|
||||
spinner := []string{"-", "\\", "|", "/"}
|
||||
frame := int(time.Now().UnixMilli()/100) % len(spinner)
|
||||
s.WriteString(fmt.Sprintf("%s Running health checks...\n", spinner[frame]))
|
||||
return s.String()
|
||||
}
|
||||
|
||||
if m.err != nil {
|
||||
s.WriteString(errorStyle.Render(fmt.Sprintf("[FAIL] Error: %v\n\n", m.err)))
|
||||
}
|
||||
|
||||
// Overall status
|
||||
statusIcon := "[+]"
|
||||
statusColor := successStyle
|
||||
switch m.overallStatus {
|
||||
case HealthStatusWarning:
|
||||
statusIcon = "[!]"
|
||||
statusColor = StatusWarningStyle
|
||||
case HealthStatusCritical:
|
||||
statusIcon = "[X]"
|
||||
statusColor = errorStyle
|
||||
}
|
||||
s.WriteString(statusColor.Render(fmt.Sprintf("%s Overall: %s\n\n", statusIcon, strings.ToUpper(string(m.overallStatus)))))
|
||||
|
||||
// Individual checks
|
||||
s.WriteString("[CHECKS]\n")
|
||||
for _, check := range m.checks {
|
||||
icon := "[+]"
|
||||
style := successStyle
|
||||
switch check.Status {
|
||||
case HealthStatusWarning:
|
||||
icon = "[!]"
|
||||
style = StatusWarningStyle
|
||||
case HealthStatusCritical:
|
||||
icon = "[X]"
|
||||
style = errorStyle
|
||||
}
|
||||
s.WriteString(style.Render(fmt.Sprintf(" %s %-22s %s\n", icon, check.Name+":", check.Message)))
|
||||
if check.Details != "" {
|
||||
s.WriteString(infoStyle.Render(fmt.Sprintf(" %s\n", check.Details)))
|
||||
}
|
||||
}
|
||||
|
||||
// Recommendations
|
||||
if len(m.recommendations) > 0 {
|
||||
s.WriteString("\n[RECOMMENDATIONS]\n")
|
||||
for _, rec := range m.recommendations {
|
||||
s.WriteString(StatusWarningStyle.Render(fmt.Sprintf(" → %s\n", rec)))
|
||||
}
|
||||
}
|
||||
|
||||
s.WriteString("\n[KEYS] Press any key to return to menu\n")
|
||||
return s.String()
|
||||
}
|
||||
|
||||
// Helper functions
|
||||
func formatHealthDuration(d time.Duration) string {
|
||||
if d < time.Minute {
|
||||
return fmt.Sprintf("%ds", int(d.Seconds()))
|
||||
}
|
||||
if d < time.Hour {
|
||||
return fmt.Sprintf("%dm", int(d.Minutes()))
|
||||
}
|
||||
if d < 24*time.Hour {
|
||||
return fmt.Sprintf("%.1fh", d.Hours())
|
||||
}
|
||||
return fmt.Sprintf("%.1fd", d.Hours()/24)
|
||||
}
|
||||
|
||||
func formatHealthBytes(bytes uint64) string {
|
||||
const unit = 1024
|
||||
if bytes < unit {
|
||||
return fmt.Sprintf("%d B", bytes)
|
||||
}
|
||||
div, exp := uint64(unit), 0
|
||||
for n := bytes / unit; n >= unit; n /= unit {
|
||||
div *= unit
|
||||
exp++
|
||||
}
|
||||
return fmt.Sprintf("%.1f %cB", float64(bytes)/float64(div), "KMGTPE"[exp])
|
||||
}
|
||||
@ -392,6 +392,29 @@ func (m RestorePreviewModel) View() string {
|
||||
if m.archive.DatabaseName != "" {
|
||||
s.WriteString(fmt.Sprintf(" Database: %s\n", m.archive.DatabaseName))
|
||||
}
|
||||
|
||||
// Estimate uncompressed size and RTO
|
||||
if m.archive.Format.IsCompressed() {
|
||||
// Rough estimate: 3x compression ratio typical for DB dumps
|
||||
uncompressedEst := m.archive.Size * 3
|
||||
s.WriteString(fmt.Sprintf(" Estimated uncompressed: ~%s\n", formatSize(uncompressedEst)))
|
||||
|
||||
// Estimate RTO
|
||||
profile := m.config.GetCurrentProfile()
|
||||
if profile != nil {
|
||||
extractTime := m.archive.Size / (500 * 1024 * 1024) // 500 MB/s extraction
|
||||
if extractTime < 1 {
|
||||
extractTime = 1
|
||||
}
|
||||
restoreSpeed := int64(50 * 1024 * 1024 * int64(profile.Jobs)) // 50MB/s per job
|
||||
restoreTime := uncompressedEst / restoreSpeed
|
||||
if restoreTime < 1 {
|
||||
restoreTime = 1
|
||||
}
|
||||
totalMinutes := extractTime + restoreTime
|
||||
s.WriteString(fmt.Sprintf(" Estimated RTO: ~%dm (with %s profile)\n", totalMinutes, profile.Name))
|
||||
}
|
||||
}
|
||||
s.WriteString("\n")
|
||||
|
||||
// Target Information
|
||||
|
||||
@ -112,7 +112,8 @@ func NewSettingsModel(cfg *config.Config, log logger.Logger, parent tea.Model) S
|
||||
return c.ResourceProfile
|
||||
},
|
||||
Update: func(c *config.Config, v string) error {
|
||||
profiles := []string{"conservative", "balanced", "performance", "max-performance"}
|
||||
// UPDATED: Added 'turbo' profile for maximum restore speed
|
||||
profiles := []string{"conservative", "balanced", "performance", "max-performance", "turbo"}
|
||||
currentIdx := 0
|
||||
for i, p := range profiles {
|
||||
if c.ResourceProfile == p {
|
||||
@ -124,7 +125,7 @@ func NewSettingsModel(cfg *config.Config, log logger.Logger, parent tea.Model) S
|
||||
return c.ApplyResourceProfile(profiles[nextIdx])
|
||||
},
|
||||
Type: "selector",
|
||||
Description: "Resource profile for VM capacity. Toggle 'l' for Large DB Mode on any profile.",
|
||||
Description: "Resource profile. 'turbo' = fastest (matches pg_restore -j8). Press Enter to cycle.",
|
||||
},
|
||||
{
|
||||
Key: "large_db_mode",
|
||||
|
||||
@ -32,6 +32,7 @@ func NewToolsMenu(cfg *config.Config, log logger.Logger, parent tea.Model, ctx c
|
||||
"Kill Connections",
|
||||
"Drop Database",
|
||||
"--------------------------------",
|
||||
"System Health Check",
|
||||
"Dedup Store Analyze",
|
||||
"Verify Backup Integrity",
|
||||
"Catalog Sync",
|
||||
@ -88,13 +89,15 @@ func (t *ToolsMenu) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
|
||||
return t.handleKillConnections()
|
||||
case 5: // Drop Database
|
||||
return t.handleDropDatabase()
|
||||
case 7: // Dedup Store Analyze
|
||||
case 7: // System Health Check
|
||||
return t.handleSystemHealth()
|
||||
case 8: // Dedup Store Analyze
|
||||
return t.handleDedupAnalyze()
|
||||
case 8: // Verify Backup Integrity
|
||||
case 9: // Verify Backup Integrity
|
||||
return t.handleVerifyIntegrity()
|
||||
case 9: // Catalog Sync
|
||||
case 10: // Catalog Sync
|
||||
return t.handleCatalogSync()
|
||||
case 11: // Back to Main Menu
|
||||
case 12: // Back to Main Menu
|
||||
return t.parent, nil
|
||||
}
|
||||
}
|
||||
@ -148,6 +151,12 @@ func (t *ToolsMenu) handleBlobExtract() (tea.Model, tea.Cmd) {
|
||||
return t, nil
|
||||
}
|
||||
|
||||
// handleSystemHealth opens the system health check
|
||||
func (t *ToolsMenu) handleSystemHealth() (tea.Model, tea.Cmd) {
|
||||
view := NewHealthView(t.config, t.logger, t, t.ctx)
|
||||
return view, view.Init()
|
||||
}
|
||||
|
||||
// handleDedupAnalyze shows dedup store analysis
|
||||
func (t *ToolsMenu) handleDedupAnalyze() (tea.Model, tea.Cmd) {
|
||||
t.message = infoStyle.Render("[INFO] Dedup analyze coming soon - shows storage savings and chunk distribution")
|
||||
|
||||
2
main.go
2
main.go
@ -16,7 +16,7 @@ import (
|
||||
|
||||
// Build information (set by ldflags)
|
||||
var (
|
||||
version = "4.1.0"
|
||||
version = "4.2.2"
|
||||
buildTime = "unknown"
|
||||
gitCommit = "unknown"
|
||||
)
|
||||
|
||||
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Reference in New Issue
Block a user