Compare commits


74 Commits

7b4ab76313 v3.42.30: Add go-multierror for better error aggregation
All checks were successful
CI/CD / Test (push) Successful in 1m20s
CI/CD / Lint (push) Successful in 1m25s
CI/CD / Build & Release (push) Successful in 3m14s
- Use hashicorp/go-multierror for cluster restore error collection
- Shows ALL failed databases with full error context (not just count)
- Bullet-pointed output for readability
- Thread-safe error aggregation with dedicated mutex
- Error wrapping with %w for proper error chain preservation
2026-01-14 15:59:12 +01:00
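A minimal sketch of the aggregation pattern this commit describes, using hashicorp/go-multierror with a dedicated mutex; `restoreAll`/`restoreOne` and the worker wiring are hypothetical, not the engine's actual code:

```go
package main

import (
	"fmt"
	"sync"

	"github.com/hashicorp/go-multierror"
)

func restoreAll(dbs []string, restoreOne func(string) error) error {
	var (
		mu     sync.Mutex        // dedicated mutex guarding result across workers
		result *multierror.Error // aggregates every failure, not just a count
		wg     sync.WaitGroup
	)
	for _, db := range dbs {
		wg.Add(1)
		go func(db string) {
			defer wg.Done()
			if err := restoreOne(db); err != nil {
				mu.Lock()
				// %w preserves the error chain for errors.Is/As
				result = multierror.Append(result, fmt.Errorf("%s: %w", db, err))
				mu.Unlock()
			}
		}(db)
	}
	wg.Wait()
	return result.ErrorOrNil() // nil when every database restored cleanly
}
```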
c0d92b3a81 fix: update go.sum for gopsutil Windows dependencies
All checks were successful
CI/CD / Test (push) Successful in 1m15s
CI/CD / Lint (push) Successful in 1m25s
CI/CD / Build & Release (push) Successful in 3m13s
2026-01-14 15:50:13 +01:00
8c85d85249 refactor: use gopsutil and go-humanize for preflight checks
Some checks failed
CI/CD / Build & Release (push) Has been cancelled
CI/CD / Lint (push) Has been cancelled
CI/CD / Test (push) Has been cancelled
- Added gopsutil/v3 for cross-platform system metrics
  * Works on Linux, macOS, Windows, BSD
  * Memory detection no longer requires /proc parsing

- Added go-humanize for readable output
  * humanize.Bytes() for memory sizes
  * humanize.Comma() for large numbers

- Improved preflight display with memory usage percentage
- Linux kernel checks (shmmax/shmall) still use /proc for accuracy
2026-01-14 15:47:31 +01:00
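A rough illustration of the gopsutil + go-humanize combination named above; the printed format is invented for the example, not the tool's actual preflight display:

```go
package main

import (
	"fmt"

	"github.com/dustin/go-humanize"
	"github.com/shirou/gopsutil/v3/mem"
)

func main() {
	// Cross-platform memory stats: no /proc parsing needed.
	vm, err := mem.VirtualMemory()
	if err != nil {
		panic(err)
	}
	fmt.Printf("RAM: %s total, %s available (%.1f%% used)\n",
		humanize.Bytes(vm.Total),
		humanize.Bytes(vm.Available),
		vm.UsedPercent)
}
```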
e0cdcb28be feat: comprehensive preflight checks for cluster restore
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Successful in 3m17s
- Linux system checks (read-only from /proc, no auth needed):
  * shmmax, shmall kernel limits
  * Available RAM check

- PostgreSQL auto-tuning:
  * max_locks_per_transaction scaled by BLOB count
  * maintenance_work_mem boosted to 2GB for faster indexes
  * All settings auto-reset after restore (even on failure)

- Archive analysis:
  * Count BLOBs per database (pg_restore -l or zgrep)
  * Scale lock boost: 2048 (default) → 4096/8192/16384 based on count

- Nice TUI preflight summary display with ✓/⚠ indicators
2026-01-14 15:30:41 +01:00
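A sketch of the read-only /proc checks listed above; the helper name is hypothetical:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readKernelLimit reads a single numeric sysctl value such as
// /proc/sys/kernel/shmmax or /proc/sys/kernel/shmall (Linux only,
// read-only, no database authentication required).
func readKernelLimit(name string) (uint64, error) {
	raw, err := os.ReadFile("/proc/sys/kernel/" + name)
	if err != nil {
		return 0, err
	}
	return strconv.ParseUint(strings.TrimSpace(string(raw)), 10, 64)
}

func main() {
	for _, k := range []string{"shmmax", "shmall"} {
		v, err := readKernelLimit(k)
		if err != nil {
			fmt.Printf("[WARN] %s: %v\n", k, err)
			continue
		}
		fmt.Printf("[OK] %s = %d\n", k, v)
	}
}
```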
22a7b9e81e feat: auto-tune max_locks_per_transaction for cluster restore
All checks were successful
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Successful in 3m15s
- Automatically boost max_locks_per_transaction to 2048 before restore
- Uses ALTER SYSTEM + pg_reload_conf() - no restart needed
- Automatically resets to original value after restore (even on failure)
- Prevents 'out of shared memory' OOM on BLOB-heavy SQL format dumps
- Works transparently - no user intervention required
2026-01-14 15:05:42 +01:00
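A sketch of the boost-then-reset sequence this commit describes, with a hypothetical `boostLocks` helper; driver setup, contexts, and error wrapping are trimmed:

```go
package main

import (
	"database/sql"
	"fmt"
)

// boostLocks raises max_locks_per_transaction as described in the commit
// above and returns a function that restores the original value. The
// caller defers reset() so it runs even when the restore fails.
func boostLocks(db *sql.DB) (reset func(), err error) {
	var old string
	if err = db.QueryRow("SHOW max_locks_per_transaction").Scan(&old); err != nil {
		return nil, err
	}
	if _, err = db.Exec("ALTER SYSTEM SET max_locks_per_transaction = 2048"); err != nil {
		return nil, err
	}
	if _, err = db.Exec("SELECT pg_reload_conf()"); err != nil {
		return nil, err
	}
	return func() {
		// Best-effort restore of the original setting.
		db.Exec(fmt.Sprintf("ALTER SYSTEM SET max_locks_per_transaction = %s", old))
		db.Exec("SELECT pg_reload_conf()")
	}, nil
}
```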
c71889be47 fix: phased restore for BLOB databases to prevent lock exhaustion OOM
All checks were successful
CI/CD / Test (push) Successful in 1m16s
CI/CD / Lint (push) Successful in 1m25s
CI/CD / Build & Release (push) Successful in 3m13s
- Auto-detect large objects in pg_restore dumps
- Split restore into pre-data, data, post-data phases
- Each phase commits and releases locks before next
- Prevents 'out of shared memory' / max_locks_per_transaction errors
- Updated error hints with better guidance for lock exhaustion
2026-01-14 08:15:53 +01:00
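The phase split maps onto pg_restore's `--section` flag; a minimal sketch, with connection flags and output streaming omitted:

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
)

// phasedRestore runs pg_restore one section at a time so locks taken
// while restoring large objects are released between phases.
func phasedRestore(ctx context.Context, dumpPath, dbname string) error {
	for _, section := range []string{"pre-data", "data", "post-data"} {
		cmd := exec.CommandContext(ctx, "pg_restore",
			"--section="+section,
			"--dbname="+dbname,
			dumpPath)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("section %s failed: %w\n%s", section, err, out)
		}
	}
	return nil
}
```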
222bdbef58 fix: streaming tar verification for large cluster archives (100GB+)
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Successful in 3m14s
- Increase timeout from 60 to 180 minutes for very large archives
- Use streaming pipes instead of buffering entire tar listing
- Only mark as corrupted for clear corruption signals (unexpected EOF, invalid gzip)
- Prevents false CORRUPTED errors on valid large archives
2026-01-13 14:40:18 +01:00
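A sketch of the streaming approach: pipe `tar -tzf` and scan line by line instead of buffering the whole listing; the function name is hypothetical, the 180-minute ceiling mirrors the commit above:

```go
package main

import (
	"bufio"
	"context"
	"fmt"
	"os/exec"
	"time"
)

// verifyArchive streams the tar listing so a 100GB+ archive never
// has its full file list buffered in memory.
func verifyArchive(path string) (entries int, err error) {
	ctx, cancel := context.WithTimeout(context.Background(), 180*time.Minute)
	defer cancel()

	cmd := exec.CommandContext(ctx, "tar", "-tzf", path)
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return 0, err
	}
	if err := cmd.Start(); err != nil {
		return 0, err
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		entries++ // consume and discard: only success/failure matters
	}
	if err := cmd.Wait(); err != nil {
		return entries, fmt.Errorf("tar listing failed after %d entries: %w", entries, err)
	}
	return entries, nil
}
```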
f7e9fa64f0 docs: add Large Database Support (600+ GB) section to PITR guide
All checks were successful
CI/CD / Test (push) Successful in 1m13s
CI/CD / Lint (push) Successful in 1m22s
CI/CD / Build & Release (push) Has been skipped
2026-01-13 10:02:35 +01:00
f153e61dbf fix: dynamic timeouts for large archives + use WorkDir for disk checks
All checks were successful
CI/CD / Test (push) Successful in 1m21s
CI/CD / Lint (push) Successful in 1m34s
CI/CD / Build & Release (push) Successful in 3m22s
- CheckDiskSpace now uses GetEffectiveWorkDir() instead of BackupDir
- Dynamic timeout calculation based on file size:
  - diagnoseClusterArchive: 5 + (GB/3) min, max 60 min
  - verifyWithPgRestore: 5 + (GB/5) min, max 30 min
  - DiagnoseClusterDumps: 10 + (GB/3) min, max 120 min
  - TUI safety checks: 10 + (GB/5) min, max 120 min
- Timeout vs corruption differentiation (no false CORRUPTED on timeout)
- Streaming tar listing to avoid OOM on large archives

For 119GB archives: ~45 min timeout instead of 5 min false-positive
2026-01-13 08:22:20 +01:00
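The size-scaled timeouts above reduce to one small function; a sketch using the diagnoseClusterArchive numbers (5 min base, GB/3 minutes, 60 min cap):

```go
package main

import (
	"fmt"
	"time"
)

// dynamicTimeout implements the "base + size-scaled" rule from the
// commit above; a 119GB archive gets ~45 minutes instead of a flat 5.
func dynamicTimeout(sizeBytes int64, base time.Duration, minPerGB float64, max time.Duration) time.Duration {
	gb := float64(sizeBytes) / (1 << 30)
	d := base + time.Duration(gb*minPerGB*float64(time.Minute))
	if d > max {
		return max
	}
	return d
}

func main() {
	fmt.Println(dynamicTimeout(119<<30, 5*time.Minute, 1.0/3.0, 60*time.Minute))
}
```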
d19c065658 Remove dev artifacts and internal docs
All checks were successful
CI/CD / Test (push) Successful in 1m14s
CI/CD / Lint (push) Successful in 1m22s
CI/CD / Build & Release (push) Successful in 3m9s
- dbbackup, dbbackup_cgo (dev binaries, use bin/ for releases)
- CRITICAL_BUGS_FIXED.md (internal post-mortem)
- scripts/remove_*.sh (one-time cleanup scripts)
2026-01-12 11:14:55 +01:00
8dac5efc10 Remove EMOTICON_REMOVAL_PLAN.md
Some checks failed
CI/CD / Test (push) Successful in 1m19s
CI/CD / Build & Release (push) Has been cancelled
CI/CD / Lint (push) Has been cancelled
2026-01-12 11:12:17 +01:00
fd5edce5ae Fix license: Apache 2.0 not MIT
All checks were successful
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m28s
CI/CD / Build & Release (push) Has been skipped
2026-01-12 10:57:55 +01:00
a7e2c86618 Replace VEEAM_ALTERNATIVE with OPENSOURCE_ALTERNATIVE - covers both commercial (Veeam) and open source (Borg/restic) alternatives
All checks were successful
CI/CD / Test (push) Successful in 1m16s
CI/CD / Lint (push) Successful in 1m29s
CI/CD / Build & Release (push) Has been skipped
2026-01-12 10:43:15 +01:00
b2e0c739e0 Fix golangci-lint v2 config format
All checks were successful
CI/CD / Test (push) Successful in 1m20s
CI/CD / Lint (push) Successful in 1m27s
CI/CD / Build & Release (push) Successful in 3m22s
2026-01-12 10:32:27 +01:00
ad23abdf4e Add version field to golangci-lint config for v2
Some checks failed
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Failing after 1m41s
CI/CD / Build & Release (push) Has been skipped
2026-01-12 10:26:36 +01:00
390b830976 Fix golangci-lint v2 module path
Some checks failed
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Failing after 28s
CI/CD / Build & Release (push) Has been skipped
2026-01-12 10:20:47 +01:00
7e53950967 Update golangci-lint to v2.8.0 for Go 1.24 compatibility
Some checks failed
CI/CD / Test (push) Successful in 1m16s
CI/CD / Lint (push) Failing after 8s
CI/CD / Build & Release (push) Has been skipped
2026-01-12 10:13:33 +01:00
59d2094241 Build all platforms v3.42.22
Some checks failed
CI/CD / Test (push) Successful in 1m16s
CI/CD / Lint (push) Failing after 1m22s
CI/CD / Build & Release (push) Has been skipped
2026-01-12 09:54:35 +01:00
b1f8c6d646 fix: correct Grafana dashboard metric names for backup size and duration panels
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Successful in 3m16s
2026-01-09 09:15:16 +01:00
b05c2be19d Add corrected Grafana dashboard - fix status query
All checks were successful
CI/CD / Test (push) Successful in 1m14s
CI/CD / Lint (push) Successful in 1m21s
CI/CD / Build & Release (push) Successful in 3m13s
- Changed status query from dbbackup_backup_verified to RPO-based check
- dbbackup_rpo_seconds < 86400 returns SUCCESS when backup < 24h old
- Fixes false FAILED status when verify operations not run
- Includes: status, RPO, backup size, duration, and overview table panels
2026-01-08 12:27:23 +01:00
ec33959e3e v3.42.18: Unify archive verification - backup manager uses same checks as restore
All checks were successful
CI/CD / Test (push) Successful in 1m13s
CI/CD / Lint (push) Successful in 1m22s
CI/CD / Build & Release (push) Successful in 3m12s
- verifyArchiveCmd now uses restore.Safety and restore.Diagnoser
- Same validation logic in backup manager verify and restore safety checks
- No more discrepancy between verify showing valid and restore failing
2026-01-08 12:10:45 +01:00
92402f0fdb v3.42.17: Fix systemd service templates - remove invalid --config flag
All checks were successful
CI/CD / Test (push) Successful in 1m15s
CI/CD / Lint (push) Successful in 1m21s
CI/CD / Build & Release (push) Successful in 3m12s
- Service templates now use WorkingDirectory for config loading
- Config is read from .dbbackup.conf in /var/lib/dbbackup
- Updated SYSTEMD.md documentation to match actual CLI
- Removed non-existent --config flag from ExecStart
2026-01-08 11:57:16 +01:00
682510d1bc v3.42.16: TUI cleanup - remove STATUS box, add global styles
All checks were successful
CI/CD / Test (push) Successful in 1m19s
CI/CD / Lint (push) Successful in 1m24s
CI/CD / Build & Release (push) Successful in 3m19s
2026-01-08 11:17:46 +01:00
83ad62b6b5 v3.42.15: TUI - always allow Esc/Cancel during spinner operations
All checks were successful
CI/CD / Test (push) Successful in 1m13s
CI/CD / Lint (push) Successful in 1m20s
CI/CD / Build & Release (push) Successful in 3m7s
2026-01-08 10:53:00 +01:00
55d34be32e v3.42.14: TUI Backup Manager - status box with spinner, real verify function
All checks were successful
CI/CD / Test (push) Successful in 1m13s
CI/CD / Lint (push) Successful in 1m21s
CI/CD / Build & Release (push) Successful in 3m6s
2026-01-08 10:35:23 +01:00
1831bd7c1f v3.42.13: TUI improvements - grouped shortcuts, box layout, better alignment
All checks were successful
CI/CD / Test (push) Successful in 1m14s
CI/CD / Lint (push) Successful in 1m22s
CI/CD / Build & Release (push) Successful in 3m9s
2026-01-08 10:16:19 +01:00
24377eab8f v3.42.12: Require cleanup confirmation for cluster restore with existing DBs
All checks were successful
CI/CD / Test (push) Successful in 1m14s
CI/CD / Lint (push) Successful in 1m21s
CI/CD / Build & Release (push) Successful in 3m10s
- Block cluster restore if existing databases found and cleanup not enabled
- User must press 'c' to enable 'Clean All First' before proceeding
- Prevents accidental data conflicts during disaster recovery
- Bug #24: Missing safety gate for cluster restore
2026-01-08 09:46:53 +01:00
3e41d88445 v3.42.11: Replace all Unicode emojis with ASCII text
All checks were successful
CI/CD / Test (push) Successful in 1m13s
CI/CD / Lint (push) Successful in 1m20s
CI/CD / Build & Release (push) Successful in 3m10s
- Replace all emoji characters with ASCII equivalents throughout codebase
- Replace Unicode box-drawing characters (═║╔╗╚╝━─) with ASCII (+|-=)
- Replace checkmarks (✓✗) with [OK]/[FAIL] markers
- 59 files updated, 741 lines changed
- Improves terminal compatibility and reduces visual noise
2026-01-08 09:42:01 +01:00
5fb88b14ba Add legal documentation to gitignore
All checks were successful
CI/CD / Test (push) Successful in 1m14s
CI/CD / Lint (push) Successful in 1m20s
CI/CD / Build & Release (push) Has been skipped
2026-01-08 06:19:08 +01:00
cccee4294f Remove internal bug documentation from public repo
Some checks failed
CI/CD / Lint (push) Has been cancelled
CI/CD / Build & Release (push) Has been cancelled
CI/CD / Test (push) Has been cancelled
2026-01-08 06:18:20 +01:00
9688143176 Add detailed bug report for legal documentation
Some checks failed
CI/CD / Test (push) Successful in 1m14s
CI/CD / Build & Release (push) Has been cancelled
CI/CD / Lint (push) Has been cancelled
2026-01-08 06:16:49 +01:00
e821e131b4 Fix build script to read version from main.go
All checks were successful
CI/CD / Test (push) Successful in 1m14s
CI/CD / Lint (push) Successful in 1m21s
CI/CD / Build & Release (push) Has been skipped
2026-01-08 06:13:25 +01:00
15a60d2e71 v3.42.10: Code quality fixes
All checks were successful
CI/CD / Test (push) Successful in 1m14s
CI/CD / Lint (push) Successful in 1m22s
CI/CD / Build & Release (push) Successful in 3m12s
- Remove deprecated io/ioutil
- Fix os.DirEntry.ModTime() usage
- Remove unused fields and variables
- Fix ineffective assignments
- Fix error string formatting
2026-01-08 06:05:25 +01:00
9c65821250 v3.42.9: Fix all timeout bugs and deadlocks
All checks were successful
CI/CD / Test (push) Successful in 1m14s
CI/CD / Lint (push) Successful in 1m21s
CI/CD / Build & Release (push) Successful in 3m12s
CRITICAL FIXES:
- Encryption detection false positive (IsBackupEncrypted returned true for ALL files)
- 12 cmd.Wait() deadlocks fixed with channel-based context handling
- TUI timeout bugs: 60s->10min for safety checks, 15s->60s for DB listing
- diagnose.go timeouts: 60s->5min for tar/pg_restore operations
- Panic recovery added to parallel backup/restore goroutines
- Variable shadowing fix in restore/engine.go

These bugs caused pg_dump backups to fail through the TUI for months.
2026-01-08 05:56:31 +01:00
627061cdbb fix: restore automatic builds on tag push
All checks were successful
CI/CD / Test (push) Successful in 1m16s
CI/CD / Lint (push) Successful in 1m23s
CI/CD / Build & Release (push) Successful in 3m17s
2026-01-07 20:53:20 +01:00
e1a7c57e0f fix: CI runs only once - on release publish, not on tag push
All checks were successful
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m25s
CI/CD / Build & Release (push) Has been skipped
Removed duplicate CI triggers:
- Before: Ran on push to branches AND on tag push (doubled)
- After: Runs on push to branches OR when release is published

This prevents wasted CI resources and confusion.
2026-01-07 20:48:01 +01:00
22915102d4 CRITICAL FIX: Eliminate all hardcoded /tmp paths - respect WorkDir configuration
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m24s
CI/CD / Build & Release (push) Has been skipped
This is a critical bugfix release addressing multiple hardcoded temporary directory paths
that prevented proper use of the WorkDir configuration option.

PROBLEM:
Users configuring WorkDir (e.g., /u01/dba/tmp) for systems with small root filesystems
still experienced failures because critical operations hardcoded /tmp instead of respecting
the configured WorkDir. This made the WorkDir option essentially non-functional.

FIXED LOCATIONS:
1. internal/restore/engine.go:632 - CRITICAL: Used BackupDir instead of WorkDir for extraction
2. cmd/restore.go:354,834 - CLI restore/diagnose commands ignored WorkDir
3. cmd/migrate.go:208,347 - Migration commands hardcoded /tmp
4. internal/migrate/engine.go:120 - Migration engine ignored WorkDir
5. internal/config/config.go:224 - SwapFilePath hardcoded /tmp
6. internal/config/config.go:519 - Backup directory fallback hardcoded /tmp
7. internal/tui/restore_exec.go:161 - Debug logs hardcoded /tmp
8. internal/tui/settings.go:805 - Directory browser default hardcoded /tmp
9. internal/tui/restore_preview.go:474 - Display message hardcoded /tmp

NEW FEATURES:
- Added Config.GetEffectiveWorkDir() helper method
- WorkDir now respects WORK_DIR environment variable
- All temp operations now consistently use configured WorkDir with /tmp fallback

IMPACT:
- Restores on systems with small root disks now work properly with WorkDir configured
- Admins can control disk space usage for all temporary operations
- Debug logs, extraction dirs, swap files all respect WorkDir setting

Version: 3.42.1 (Critical Fix Release)
2026-01-07 20:41:53 +01:00
3653ced6da Bump version to 3.42.1
All checks were successful
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m23s
CI/CD / Build & Release (push) Successful in 3m13s
2026-01-07 15:41:08 +01:00
9743d571ce chore: Bump version to 3.42.0
Some checks failed
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m23s
CI/CD / Build & Release (push) Failing after 3m12s
2026-01-07 15:28:31 +01:00
c519f08ef2 feat: Add content-defined chunking deduplication
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m23s
CI/CD / Build & Release (push) Successful in 3m12s
- Gear hash CDC with 92%+ overlap on shifted data
- SHA-256 content-addressed chunk storage
- AES-256-GCM per-chunk encryption (optional)
- Gzip compression (default enabled)
- SQLite index for fast lookups
- JSON manifests with SHA-256 verification

Commands: dedup backup/restore/list/stats/delete/gc

Resistance is futile.
2026-01-07 15:02:41 +01:00
b99b05fedb ci: enable CGO for linux builds (required for SQLite catalog)
All checks were successful
CI/CD / Test (push) Successful in 1m13s
CI/CD / Lint (push) Successful in 1m21s
CI/CD / Build & Release (push) Successful in 3m15s
2026-01-07 13:48:39 +01:00
c5f2c3322c ci: remove GitHub mirror job (manual push instead)
All checks were successful
CI/CD / Test (push) Successful in 1m13s
CI/CD / Lint (push) Successful in 1m22s
CI/CD / Build & Release (push) Successful in 1m51s
2026-01-07 13:14:46 +01:00
56ad0824c7 ci: simplify JSON creation, add HTTP code debug
All checks were successful
CI/CD / Test (push) Successful in 1m15s
CI/CD / Lint (push) Successful in 1m22s
CI/CD / Build & Release (push) Successful in 1m51s
CI/CD / Mirror to GitHub (push) Has been skipped
2026-01-07 12:57:07 +01:00
ec65df2976 ci: add verbose output for binary upload debugging
All checks were successful
CI/CD / Test (push) Successful in 1m14s
CI/CD / Lint (push) Successful in 1m22s
CI/CD / Build & Release (push) Successful in 1m51s
CI/CD / Mirror to GitHub (push) Has been skipped
2026-01-07 12:55:08 +01:00
23cc1e0e08 ci: use jq to build JSON payload safely
All checks were successful
CI/CD / Test (push) Successful in 1m15s
CI/CD / Lint (push) Successful in 1m21s
CI/CD / Build & Release (push) Successful in 1m53s
CI/CD / Mirror to GitHub (push) Has been skipped
2026-01-07 12:52:59 +01:00
7770abab6f ci: fix JSON escaping in release creation
Some checks failed
CI/CD / Test (push) Successful in 1m15s
CI/CD / Lint (push) Successful in 1m22s
CI/CD / Build & Release (push) Failing after 1m49s
CI/CD / Mirror to GitHub (push) Has been skipped
2026-01-07 12:45:03 +01:00
f6a20f035b ci: simplified build-and-release job, add optional GitHub mirror
Some checks failed
CI/CD / Test (push) Successful in 1m15s
CI/CD / Lint (push) Successful in 1m22s
CI/CD / Build & Release (push) Failing after 1m52s
CI/CD / Mirror to GitHub (push) Has been skipped
- Removed matrix build + artifact passing (was failing)
- Single job builds all platforms and creates release
- Added optional mirror-to-github job (needs GITHUB_MIRROR_TOKEN var)
- Better error handling for release creation
2026-01-07 12:31:21 +01:00
28e54d118f ci: use github.token instead of secrets.GITEA_TOKEN
Some checks failed
CI/CD / Test (push) Successful in 1m14s
CI/CD / Lint (push) Successful in 1m23s
CI/CD / Release (push) Has been skipped
CI/CD / Build (amd64, darwin) (push) Failing after 30s
CI/CD / Build (amd64, linux) (push) Failing after 30s
CI/CD / Build (arm64, darwin) (push) Failing after 30s
CI/CD / Build (arm64, linux) (push) Failing after 31s
2026-01-07 12:20:41 +01:00
ab0ff3f28d ci: add release job with Gitea binary uploads
All checks were successful
CI/CD / Test (push) Successful in 1m15s
CI/CD / Lint (push) Successful in 1m21s
CI/CD / Build (amd64, darwin) (push) Successful in 42s
CI/CD / Build (amd64, linux) (push) Successful in 30s
CI/CD / Build (arm64, darwin) (push) Successful in 30s
CI/CD / Build (arm64, linux) (push) Successful in 31s
CI/CD / Release (push) Has been skipped
- Upload artifacts on tag pushes
- Create release via Gitea API
- Attach all platform binaries to release
2026-01-07 12:10:33 +01:00
b7dd325c51 chore: remove binaries from git tracking
All checks were successful
CI/CD / Test (push) Successful in 1m15s
CI/CD / Lint (push) Successful in 1m22s
CI/CD / Build (amd64, darwin) (push) Successful in 30s
CI/CD / Build (amd64, linux) (push) Successful in 29s
CI/CD / Build (arm64, darwin) (push) Successful in 30s
CI/CD / Build (arm64, linux) (push) Successful in 30s
- Add bin/dbbackup_* to .gitignore
- Binaries distributed via GitHub Releases instead
- Reduces repo size and eliminates large file warnings
2026-01-07 12:04:22 +01:00
2ed54141a3 chore: rebuild all platform binaries
Some checks failed
CI/CD / Test (push) Successful in 2m43s
CI/CD / Lint (push) Successful in 2m50s
CI/CD / Build (amd64, linux) (push) Has been cancelled
CI/CD / Build (arm64, darwin) (push) Has been cancelled
CI/CD / Build (arm64, linux) (push) Has been cancelled
CI/CD / Build (amd64, darwin) (push) Has been cancelled
2026-01-07 11:57:08 +01:00
495ee31247 docs: add comprehensive SYSTEMD.md installation guide
- Create dedicated SYSTEMD.md with full manual installation steps
- Cover security hardening, multiple instances, troubleshooting
- Document Prometheus exporter manual setup
- Simplify README systemd section with link to detailed guide
- Add SYSTEMD.md to documentation list
2026-01-07 11:55:20 +01:00
78e10f5057 fix: installer issues found during testing
- Remove invalid --config flag from exporter service template
- Change ReadOnlyPaths to ReadWritePaths for catalog access
- Add copyBinary() to install binary to /usr/local/bin (ProtectHome compat)
- Fix exporter status detection using direct systemctl check
- Add os/exec import for status check
2026-01-07 11:50:51 +01:00
f4a0e2d82c build: rebuild all platform binaries with dry-run fix
All checks were successful
CI/CD / Test (push) Successful in 2m49s
CI/CD / Lint (push) Successful in 2m50s
CI/CD / Build (amd64, darwin) (push) Successful in 1m58s
CI/CD / Build (amd64, linux) (push) Successful in 1m58s
CI/CD / Build (arm64, darwin) (push) Successful in 2m0s
CI/CD / Build (arm64, linux) (push) Successful in 1m59s
2026-01-07 11:40:10 +01:00
f66d19acb0 fix: allow dry-run install without root privileges
Some checks failed
CI/CD / Test (push) Successful in 2m53s
CI/CD / Build (amd64, darwin) (push) Has been cancelled
CI/CD / Build (amd64, linux) (push) Has been cancelled
CI/CD / Build (arm64, darwin) (push) Has been cancelled
CI/CD / Build (arm64, linux) (push) Has been cancelled
CI/CD / Lint (push) Has been cancelled
2026-01-07 11:37:13 +01:00
16f377e9b5 docs: update README with systemd and Prometheus metrics sections
Some checks failed
CI/CD / Test (push) Successful in 2m45s
CI/CD / Lint (push) Successful in 2m56s
CI/CD / Build (amd64, linux) (push) Has been cancelled
CI/CD / Build (arm64, darwin) (push) Has been cancelled
CI/CD / Build (arm64, linux) (push) Has been cancelled
CI/CD / Build (amd64, darwin) (push) Has been cancelled
- Add install/uninstall and metrics commands to command table
- Add Systemd Integration section with install examples
- Add Prometheus Metrics section with textfile and HTTP exporter docs
- Update Features list with systemd and metrics highlights
- Rebuild all platform binaries
2026-01-07 11:26:54 +01:00
7e32a0369d feat: add embedded systemd installer and Prometheus metrics
Some checks failed
CI/CD / Test (push) Successful in 2m42s
CI/CD / Lint (push) Successful in 2m50s
CI/CD / Build (amd64, darwin) (push) Successful in 2m0s
CI/CD / Build (amd64, linux) (push) Successful in 1m58s
CI/CD / Build (arm64, darwin) (push) Successful in 2m1s
CI/CD / Build (arm64, linux) (push) Has been cancelled
Systemd Integration:
- New 'dbbackup install' command creates service/timer units
- Supports single-database and cluster backup modes
- Automatic dbbackup user/group creation with proper permissions
- Hardened service units with security features
- Template units with configurable OnCalendar schedules
- 'dbbackup uninstall' for clean removal

Prometheus Metrics:
- 'dbbackup metrics export' for textfile collector format
- 'dbbackup metrics serve' runs HTTP exporter on port 9399
- Metrics: last_success_timestamp, rpo_seconds, backup_total, etc.
- Integration with node_exporter textfile collector
- --with-metrics flag during install

Technical:
- Systemd templates embedded with //go:embed
- Service units include ReadWritePaths, OOMScoreAdjust
- Metrics exporter caches with 30s TTL
- Graceful shutdown on SIGTERM
2026-01-07 11:18:09 +01:00
120ee33e3b build: v3.41.0 binaries with TUI cancellation fix
All checks were successful
CI/CD / Test (push) Successful in 2m44s
CI/CD / Lint (push) Successful in 2m51s
CI/CD / Build (amd64, darwin) (push) Successful in 1m58s
CI/CD / Build (amd64, linux) (push) Successful in 1m59s
CI/CD / Build (arm64, darwin) (push) Successful in 2m1s
CI/CD / Build (arm64, linux) (push) Successful in 1m59s
2026-01-07 09:55:08 +01:00
9f375621d1 fix(tui): enable Ctrl+C/ESC to cancel running backup/restore operations
PROBLEM: Users could not interrupt backup or restore operations through
the TUI interface. Pressing Ctrl+C or ESC did nothing during execution.

ROOT CAUSE:
- BackupExecutionModel ignored ALL key presses while running (only handled when done)
- RestoreExecutionModel returned tea.Quit but didn't cancel the context
- The operation goroutine kept running in the background with its own context

FIX:
- Added cancel context.CancelFunc to both execution models
- Create child context with WithCancel in New*Execution constructors
- Handle ctrl+c and esc during execution to call cancel()
- Show 'Cancelling...' status while waiting for graceful shutdown
- Show cancel hint in View: 'Press Ctrl+C or ESC to cancel'

The fix works because:
- exec.CommandContext(ctx) will SIGKILL the subprocess when ctx is cancelled
- pg_dump, pg_restore, psql, mysql all get terminated properly
- User sees immediate feedback that cancellation is in progress
2026-01-07 09:53:47 +01:00
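A trimmed sketch of the pattern described above, using Bubble Tea (which the commit's tea.Quit reference implies); the real execution models carry many more fields:

```go
package main

import (
	"context"

	tea "github.com/charmbracelet/bubbletea"
)

// execModel owns a CancelFunc so Ctrl+C/Esc cancel the operation's
// context instead of being swallowed while a command runs.
type execModel struct {
	cancel     context.CancelFunc
	cancelling bool
}

func (m execModel) Init() tea.Cmd { return nil }

func (m execModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
	if key, ok := msg.(tea.KeyMsg); ok {
		switch key.String() {
		case "ctrl+c", "esc":
			m.cancel() // exec.CommandContext kills the subprocess
			m.cancelling = true
			return m, nil
		}
	}
	return m, nil
}

func (m execModel) View() string {
	if m.cancelling {
		return "Cancelling..."
	}
	return "Running... Press Ctrl+C or ESC to cancel"
}
```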
9ad925191e build: v3.41.0 binaries with P0 security fixes
Some checks failed
CI/CD / Test (push) Successful in 2m45s
CI/CD / Lint (push) Successful in 2m51s
CI/CD / Build (amd64, darwin) (push) Successful in 1m59s
CI/CD / Build (arm64, darwin) (push) Has been cancelled
CI/CD / Build (arm64, linux) (push) Has been cancelled
CI/CD / Build (amd64, linux) (push) Has been cancelled
2026-01-07 09:46:49 +01:00
9d8a6e763e security: P0 fixes - SQL injection prevention + data race fix
- Add identifier validation for database names in PostgreSQL and MySQL
  - validateIdentifier() rejects names with invalid characters
  - quoteIdentifier() safely quotes identifiers with proper escaping
  - Max length: 63 chars (PostgreSQL), 64 chars (MySQL)
  - Only allows alphanumeric + underscores, must start with letter/underscore

- Fix data race in notification manager
  - Multiple goroutines were appending to shared error slice
  - Added errMu sync.Mutex to protect concurrent error collection

- Security improvements prevent:
  - SQL injection via malicious database names
    (e.g. CREATE DATABASE `foo`; DROP DATABASE production; --)
  - Race conditions causing lost or corrupted error data
2026-01-07 09:45:13 +01:00
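A sketch of the validation rules listed above; the names `validateIdentifier`/`quoteIdentifier` come from the commit, but the bodies here are illustrative (63-char limit shown for PostgreSQL; MySQL would use 64):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Letters/underscore first, alphanumeric/underscore after.
var identRe = regexp.MustCompile(`^[a-zA-Z_][a-zA-Z0-9_]*$`)

func validateIdentifier(name string) error {
	if len(name) == 0 || len(name) > 63 {
		return fmt.Errorf("identifier length must be 1-63, got %d", len(name))
	}
	if !identRe.MatchString(name) {
		return fmt.Errorf("invalid identifier: %q", name)
	}
	return nil
}

// quoteIdentifier doubles embedded quotes, the standard PostgreSQL escape.
func quoteIdentifier(name string) string {
	return `"` + strings.ReplaceAll(name, `"`, `""`) + `"`
}

func main() {
	fmt.Println(validateIdentifier("foo`; DROP DATABASE production; --")) // rejected
	fmt.Println(quoteIdentifier("mydb"))
}
```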
63b16eee8b build: v3.41.0 binaries with DB+Go specialist fixes
All checks were successful
CI/CD / Test (push) Successful in 2m41s
CI/CD / Lint (push) Successful in 2m50s
CI/CD / Build (amd64, darwin) (push) Successful in 1m58s
CI/CD / Build (amd64, linux) (push) Successful in 1m58s
CI/CD / Build (arm64, darwin) (push) Successful in 1m56s
CI/CD / Build (arm64, linux) (push) Successful in 1m58s
2026-01-07 08:59:53 +01:00
91228552fb fix(backup/restore): implement DB+Go specialist recommendations
P0: Add ON_ERROR_STOP=1 to psql (fail fast, not 2.6M errors)
P1: Fix pipe deadlock in streaming compression (goroutine+context)
P1: Handle SIGPIPE (exit 141) - report compressor as root cause
P2: Validate .dump files with pg_restore --list before restore
P2: Add fsync after streaming compression for durability

Fixes potential hung backups and improves error diagnostics.
2026-01-07 08:58:00 +01:00
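The P0 item amounts to one extra pair of psql arguments; a minimal sketch with a hypothetical wrapper:

```go
package main

import (
	"context"
	"os/exec"
)

// restoreSQL passes -v ON_ERROR_STOP=1 so psql aborts on the first
// error instead of grinding through millions of them.
func restoreSQL(ctx context.Context, dbname, sqlFile string) error {
	cmd := exec.CommandContext(ctx, "psql",
		"-v", "ON_ERROR_STOP=1",
		"--dbname", dbname,
		"--file", sqlFile)
	return cmd.Run()
}
```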
9ee55309bd docs: update CHANGELOG for v3.41.0 pre-restore validation
Some checks failed
CI/CD / Test (push) Successful in 2m41s
CI/CD / Lint (push) Successful in 2m50s
CI/CD / Build (amd64, darwin) (push) Successful in 1m59s
CI/CD / Build (amd64, linux) (push) Successful in 1m57s
CI/CD / Build (arm64, darwin) (push) Successful in 1m58s
CI/CD / Build (arm64, linux) (push) Has been cancelled
2026-01-07 08:48:38 +01:00
0baf741c0b build: v3.40.0 binaries for all platforms
Some checks failed
CI/CD / Test (push) Successful in 2m44s
CI/CD / Lint (push) Successful in 2m47s
CI/CD / Build (amd64, darwin) (push) Successful in 1m58s
CI/CD / Build (amd64, linux) (push) Successful in 1m57s
CI/CD / Build (arm64, linux) (push) Has been cancelled
CI/CD / Build (arm64, darwin) (push) Has been cancelled
2026-01-07 08:36:26 +01:00
faace7271c fix(restore): add pre-validation for truncated SQL dumps
Some checks failed
CI/CD / Test (push) Successful in 2m42s
CI/CD / Build (amd64, darwin) (push) Has been cancelled
CI/CD / Build (amd64, linux) (push) Has been cancelled
CI/CD / Build (arm64, darwin) (push) Has been cancelled
CI/CD / Build (arm64, linux) (push) Has been cancelled
CI/CD / Lint (push) Has been cancelled
- Validate SQL dump files BEFORE attempting restore
- Detect unterminated COPY blocks that cause 'syntax error' failures
- Cluster restore now pre-validates ALL dumps upfront (fail-fast)
- Saves hours of wasted restore time on corrupted backups

The truncated resydb.sql.gz was causing 49-minute restore attempts
that failed with 2.6M errors. It now fails immediately with a clear
error message showing which table's COPY block was truncated.
2026-01-07 08:34:10 +01:00
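A sketch of the pre-validation idea: stream the gzipped dump and require every COPY block to close with the `\.` sentinel, which a truncated file lacks. The helper name matches `quickValidateSQLDump` from the changelog; the body is illustrative:

```go
package main

import (
	"bufio"
	"compress/gzip"
	"fmt"
	"os"
	"strings"
)

func quickValidateSQLDump(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	gz, err := gzip.NewReader(f)
	if err != nil {
		return err
	}
	defer gz.Close()

	sc := bufio.NewScanner(gz)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // allow long data lines
	inCopy, table := false, ""
	for sc.Scan() {
		line := sc.Text()
		switch {
		case !inCopy && strings.HasPrefix(line, "COPY ") && strings.HasSuffix(line, "FROM stdin;"):
			inCopy, table = true, strings.Fields(line)[1]
		case inCopy && line == `\.`:
			inCopy = false
		}
	}
	if err := sc.Err(); err != nil {
		return fmt.Errorf("read error (possible gzip truncation): %w", err)
	}
	if inCopy {
		return fmt.Errorf("unterminated COPY block for table %s: dump is truncated", table)
	}
	return nil
}
```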
c3ade7a693 Include pre-built binaries for distribution
All checks were successful
CI/CD / Test (push) Successful in 2m36s
CI/CD / Lint (push) Successful in 2m45s
CI/CD / Build (amd64, darwin) (push) Successful in 1m53s
CI/CD / Build (amd64, linux) (push) Successful in 1m56s
CI/CD / Build (arm64, darwin) (push) Successful in 1m54s
CI/CD / Build (arm64, linux) (push) Successful in 1m54s
2026-01-06 15:32:47 +01:00
52d475506c fix(backup): dynamic timeout for large database backups
All checks were successful
CI/CD / Test (push) Successful in 1m11s
CI/CD / Lint (push) Successful in 1m20s
CI/CD / Build (amd64, darwin) (push) Successful in 29s
CI/CD / Build (amd64, linux) (push) Successful in 28s
CI/CD / Build (arm64, darwin) (push) Successful in 29s
CI/CD / Build (arm64, linux) (push) Successful in 29s
- 2-hour timeout was causing truncated backups for databases > 40GB
- Now scales: 2 hours base + 1 hour per 20GB
- 69GB database now gets ~5.5 hour timeout
- Fixed streaming compression error handling order

Fixes truncated resydb.sql.gz in cluster backups
2026-01-06 15:09:29 +01:00
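The scaling rule above reduces to a one-liner; a sketch:

```go
package main

import (
	"fmt"
	"time"
)

// backupTimeout: 2 hours base plus 1 hour per 20GB of database size.
func backupTimeout(sizeGB float64) time.Duration {
	return 2*time.Hour + time.Duration(sizeGB/20*float64(time.Hour))
}

func main() {
	fmt.Println(backupTimeout(69)) // ~5.5 hours for a 69GB database
}
```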
938ee61686 docs: update README with v3.40.0 TUI features (Diagnose, WorkDir)
All checks were successful
CI/CD / Test (push) Successful in 1m12s
CI/CD / Lint (push) Successful in 1m19s
CI/CD / Build (amd64, darwin) (push) Successful in 29s
CI/CD / Build (amd64, linux) (push) Successful in 29s
CI/CD / Build (arm64, darwin) (push) Successful in 29s
CI/CD / Build (arm64, linux) (push) Successful in 29s
2026-01-06 14:58:10 +01:00
85b61048c0 fix(ci): simplify CI - use github.token via env, remove mirror until working
All checks were successful
CI/CD / Test (push) Successful in 1m12s
CI/CD / Lint (push) Successful in 1m20s
CI/CD / Build (amd64, darwin) (push) Successful in 30s
CI/CD / Build (amd64, linux) (push) Successful in 29s
CI/CD / Build (arm64, darwin) (push) Successful in 29s
CI/CD / Build (arm64, linux) (push) Successful in 29s
2026-01-06 14:13:54 +01:00
30954cb7c2 fix(ci): use GITHUB_TOKEN for repo authentication
Some checks failed
CI/CD / Test (push) Failing after 4s
CI/CD / Generate SBOM (push) Has been skipped
CI/CD / Lint (push) Failing after 4s
CI/CD / Build (darwin-amd64) (push) Has been skipped
CI/CD / Build (linux-amd64) (push) Has been skipped
CI/CD / Build (darwin-arm64) (push) Has been skipped
CI/CD / Build (linux-arm64) (push) Has been skipped
CI/CD / Release (push) Has been skipped
CI/CD / Build & Push Docker Image (push) Has been skipped
CI/CD / Mirror to GitHub (push) Has been skipped
2026-01-06 14:06:11 +01:00
ddf46f190b fix(ci): use public git.uuxo.net URL instead of internal gitea:3000
Some checks failed
CI/CD / Test (push) Failing after 4s
CI/CD / Generate SBOM (push) Has been skipped
CI/CD / Lint (push) Failing after 4s
CI/CD / Build (darwin-amd64) (push) Has been skipped
CI/CD / Build (linux-amd64) (push) Has been skipped
CI/CD / Build (darwin-arm64) (push) Has been skipped
CI/CD / Build (linux-arm64) (push) Has been skipped
CI/CD / Release (push) Has been skipped
CI/CD / Build & Push Docker Image (push) Has been skipped
CI/CD / Mirror to GitHub (push) Has been skipped
2026-01-06 14:04:57 +01:00
4c6d44725e fix(ci): use manual git fetch with GITHUB_SERVER_URL/SHA (no Node.js needed)
Some checks failed
CI/CD / Test (push) Failing after 3s
CI/CD / Generate SBOM (push) Has been skipped
CI/CD / Lint (push) Failing after 3s
CI/CD / Build (darwin-amd64) (push) Has been skipped
CI/CD / Build (linux-amd64) (push) Has been skipped
CI/CD / Build (darwin-arm64) (push) Has been skipped
CI/CD / Build (linux-arm64) (push) Has been skipped
CI/CD / Release (push) Has been skipped
CI/CD / Build & Push Docker Image (push) Has been skipped
CI/CD / Mirror to GitHub (push) Has been skipped
2026-01-06 14:03:09 +01:00
be69c0e00f fix(ci): use actions/checkout@v4 instead of manual git clone
Some checks failed
CI/CD / Test (push) Failing after 9s
CI/CD / Generate SBOM (push) Has been skipped
CI/CD / Lint (push) Failing after 2s
CI/CD / Build (darwin-amd64) (push) Has been skipped
CI/CD / Build (linux-amd64) (push) Has been skipped
CI/CD / Build (darwin-arm64) (push) Has been skipped
CI/CD / Build (linux-arm64) (push) Has been skipped
CI/CD / Release (push) Has been skipped
CI/CD / Build & Push Docker Image (push) Has been skipped
CI/CD / Mirror to GitHub (push) Has been skipped
2026-01-06 14:01:15 +01:00
110 changed files with 10672 additions and 1558 deletions


@@ -1,4 +1,6 @@
 # CI/CD Pipeline for dbbackup
+# Main repo: Gitea (git.uuxo.net)
+# Mirror: GitHub (github.com/PlusOne/dbbackup)
 name: CI/CD
 on:
@@ -8,9 +10,6 @@ on:
   pull_request:
     branches: [main, master]
-env:
-  GITEA_URL: https://git.uuxo.net
 jobs:
   test:
     name: Test
@@ -18,26 +17,25 @@ jobs:
     container:
       image: golang:1.24-bookworm
     steps:
-      - name: Install git
-        run: apt-get update && apt-get install -y git ca-certificates
       - name: Checkout code
+        env:
+          TOKEN: ${{ github.token }}
         run: |
+          apt-get update && apt-get install -y -qq git ca-certificates
           git config --global --add safe.directory "$GITHUB_WORKSPACE"
-          git clone --depth 1 --branch ${GITHUB_REF_NAME} ${{ env.GITEA_URL }}/${GITHUB_REPOSITORY}.git .
+          git init
+          git remote add origin "https://${TOKEN}@git.uuxo.net/${GITHUB_REPOSITORY}.git"
+          git fetch --depth=1 origin "${GITHUB_SHA}"
+          git checkout FETCH_HEAD
       - name: Download dependencies
         run: go mod download
-      - name: Run tests
-        run: go test -race -coverprofile=coverage.out ./...
-      - name: Generate coverage report
-        run: |
-          go tool cover -func=coverage.out
-          go tool cover -html=coverage.out -o coverage.html
+      - name: Run tests with race detection
+        env:
+          GOMAXPROCS: 8
+        run: go test -race -coverprofile=coverage.out -covermode=atomic ./...
+      - name: Coverage summary
+        run: go tool cover -func=coverage.out | tail -1
   lint:
     name: Lint
@@ -45,168 +43,119 @@ jobs:
     container:
       image: golang:1.24-bookworm
     steps:
-      - name: Install git
-        run: apt-get update && apt-get install -y git ca-certificates
       - name: Checkout code
-        run: |
-          git config --global --add safe.directory "$GITHUB_WORKSPACE"
-          git clone --depth 1 --branch ${GITHUB_REF_NAME} ${{ env.GITEA_URL }}/${GITHUB_REPOSITORY}.git .
-      - name: Install golangci-lint
-        run: go install github.com/golangci/golangci-lint/cmd/golangci-lint@v1.62.2
-      - name: Run golangci-lint
-        env:
-          GOMAXPROCS: 8
-        run: golangci-lint run --timeout=5m ./...
+        env:
+          TOKEN: ${{ github.token }}
+        run: |
+          apt-get update && apt-get install -y -qq git ca-certificates
+          git config --global --add safe.directory "$GITHUB_WORKSPACE"
+          git init
+          git remote add origin "https://${TOKEN}@git.uuxo.net/${GITHUB_REPOSITORY}.git"
+          git fetch --depth=1 origin "${GITHUB_SHA}"
+          git checkout FETCH_HEAD
+      - name: Install and run golangci-lint
+        run: |
+          go install github.com/golangci/golangci-lint/v2/cmd/golangci-lint@v2.8.0
+          golangci-lint run --timeout=5m ./...
-  build:
-    name: Build (${{ matrix.goos }}-${{ matrix.goarch }})
+  build-and-release:
+    name: Build & Release
     runs-on: ubuntu-latest
     needs: [test, lint]
     container:
       image: golang:1.24-bookworm
-    strategy:
-      max-parallel: 8
-      matrix:
-        goos: [linux, darwin]
-        goarch: [amd64, arm64]
     steps:
-      - name: Install git
-        run: apt-get update && apt-get install -y git ca-certificates
-      - name: Checkout code
-        run: |
-          git config --global --add safe.directory "$GITHUB_WORKSPACE"
-          git clone --depth 1 --branch ${GITHUB_REF_NAME} ${{ env.GITEA_URL }}/${GITHUB_REPOSITORY}.git .
-      - name: Build binary
-        env:
-          GOOS: ${{ matrix.goos }}
-          GOARCH: ${{ matrix.goarch }}
-          CGO_ENABLED: 0
-          GOMAXPROCS: 8
-        run: |
-          BINARY_NAME=dbbackup
-          go build -ldflags="-s -w" -o dist/${BINARY_NAME}-${{ matrix.goos }}-${{ matrix.goarch }} .
-  sbom:
-    name: Generate SBOM
-    runs-on: ubuntu-latest
-    needs: [test]
-    container:
-      image: golang:1.24-bookworm
-    steps:
-      - name: Install git
-        run: apt-get update && apt-get install -y git ca-certificates
-      - name: Checkout code
-        run: |
-          git config --global --add safe.directory "$GITHUB_WORKSPACE"
-          git clone --depth 1 --branch ${GITHUB_REF_NAME} ${{ env.GITEA_URL }}/${GITHUB_REPOSITORY}.git .
-      - name: Install Syft
-        run: curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin
-      - name: Generate SBOM
-        run: |
-          syft . -o spdx-json=sbom-spdx.json
-          syft . -o cyclonedx-json=sbom-cyclonedx.json
-  release:
-    name: Release
-    runs-on: ubuntu-latest
-    needs: [test, lint, build]
-    if: startsWith(github.ref, 'refs/tags/v')
-    container:
-      image: golang:1.24-bookworm
-    steps:
-      - name: Install tools
-        run: |
-          apt-get update && apt-get install -y git ca-certificates
-          curl -sSfL https://github.com/goreleaser/goreleaser/releases/download/v2.4.8/goreleaser_Linux_x86_64.tar.gz | tar xz -C /usr/local/bin goreleaser
-          curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin
-      - name: Checkout code
-        run: |
-          git config --global --add safe.directory "$GITHUB_WORKSPACE"
-          git clone --branch ${GITHUB_REF_NAME} ${{ env.GITEA_URL }}/${GITHUB_REPOSITORY}.git .
-          git fetch --tags
-      - name: Run goreleaser
-        env:
-          GITEA_TOKEN: ${{ secrets.GITEA_TOKEN }}
-        run: goreleaser release --clean
-  docker:
-    name: Build & Push Docker Image
-    runs-on: ubuntu-latest
-    needs: [test, lint]
-    if: github.event_name == 'push' && (github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/'))
-    container:
-      image: docker:24-cli
-      options: --privileged
-    services:
-      docker:
-        image: docker:24-dind
-        options: --privileged
-    steps:
-      - name: Install dependencies
-        run: apk add --no-cache git curl
-      - name: Checkout code
-        run: |
-          git config --global --add safe.directory "$GITHUB_WORKSPACE"
-          git clone --depth 1 --branch ${GITHUB_REF_NAME} ${{ env.GITEA_URL }}/${GITHUB_REPOSITORY}.git .
-      - name: Set up Docker Buildx
-        run: |
-          docker buildx create --use --name builder --driver docker-container
-          docker buildx inspect --bootstrap
-      - name: Login to Gitea Registry
-        if: ${{ secrets.REGISTRY_USER != '' && secrets.REGISTRY_TOKEN != '' }}
-        run: |
-          echo "${{ secrets.REGISTRY_TOKEN }}" | docker login git.uuxo.net -u "${{ secrets.REGISTRY_USER }}" --password-stdin
-      - name: Build and push
-        if: ${{ secrets.REGISTRY_USER != '' && secrets.REGISTRY_TOKEN != '' }}
-        run: |
-          # Determine tags
-          if [[ "${GITHUB_REF}" == refs/tags/* ]]; then
-            VERSION=${GITHUB_REF#refs/tags/}
-            TAGS="-t git.uuxo.net/uuxo/dbbackup:${VERSION} -t git.uuxo.net/uuxo/dbbackup:latest"
-          else
-            TAGS="-t git.uuxo.net/uuxo/dbbackup:${GITHUB_SHA::8} -t git.uuxo.net/uuxo/dbbackup:main"
-          fi
-          docker buildx build \
-            --platform linux/amd64,linux/arm64 \
-            --push \
-            ${TAGS} \
-            .
-  mirror:
-    name: Mirror to GitHub
-    runs-on: ubuntu-latest
-    needs: [test, lint]
-    if: github.event_name == 'push' && github.ref == 'refs/heads/main' && vars.MIRROR_ENABLED != 'false'
-    container:
-      image: debian:bookworm-slim
-      volumes:
-        - /root/.ssh:/root/.ssh:ro
-    steps:
-      - name: Install git
-        run: apt-get update && apt-get install -y --no-install-recommends git openssh-client ca-certificates && rm -rf /var/lib/apt/lists/*
-      - name: Clone and mirror
-        env:
-          GIT_SSH_COMMAND: "ssh -i /root/.ssh/id_ed25519 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"
-        run: |
-          git config --global --add safe.directory "$GITHUB_WORKSPACE"
-          git clone --mirror ${{ env.GITEA_URL }}/${GITHUB_REPOSITORY}.git repo.git
-          cd repo.git
-          git remote add github git@github.com:PlusOne/dbbackup.git
-          git push --mirror github || git push --force --all github && git push --force --tags github
-          echo "GitHub mirror complete!"
+      - name: Checkout code
+        env:
+          TOKEN: ${{ github.token }}
+        run: |
+          apt-get update && apt-get install -y -qq git ca-certificates curl jq
+          git config --global --add safe.directory "$GITHUB_WORKSPACE"
+          git init
+          git remote add origin "https://${TOKEN}@git.uuxo.net/${GITHUB_REPOSITORY}.git"
+          git fetch --depth=1 origin "${GITHUB_SHA}"
+          git checkout FETCH_HEAD
+      - name: Build all platforms
+        run: |
+          mkdir -p release
+          # Install cross-compilation tools for CGO
+          apt-get update && apt-get install -y -qq gcc-aarch64-linux-gnu
+          # Linux amd64 (with CGO for SQLite)
+          echo "Building linux/amd64 (CGO enabled)..."
+          CGO_ENABLED=1 GOOS=linux GOARCH=amd64 go build -ldflags="-s -w" -o release/dbbackup-linux-amd64 .
+          # Linux arm64 (with CGO for SQLite)
+          echo "Building linux/arm64 (CGO enabled)..."
+          CC=aarch64-linux-gnu-gcc CGO_ENABLED=1 GOOS=linux GOARCH=arm64 go build -ldflags="-s -w" -o release/dbbackup-linux-arm64 .
+          # Darwin amd64 (no CGO - cross-compile limitation)
+          echo "Building darwin/amd64 (CGO disabled)..."
+          CGO_ENABLED=0 GOOS=darwin GOARCH=amd64 go build -ldflags="-s -w" -o release/dbbackup-darwin-amd64 .
+          # Darwin arm64 (no CGO - cross-compile limitation)
+          echo "Building darwin/arm64 (CGO disabled)..."
+          CGO_ENABLED=0 GOOS=darwin GOARCH=arm64 go build -ldflags="-s -w" -o release/dbbackup-darwin-arm64 .
+          # FreeBSD amd64 (no CGO - cross-compile limitation)
+          echo "Building freebsd/amd64 (CGO disabled)..."
+          CGO_ENABLED=0 GOOS=freebsd GOARCH=amd64 go build -ldflags="-s -w" -o release/dbbackup-freebsd-amd64 .
+          echo "All builds complete:"
+          ls -lh release/
+      - name: Create Gitea Release
+        env:
+          GITEA_TOKEN: ${{ github.token }}
+        run: |
+          TAG=${GITHUB_REF#refs/tags/}
+          echo "Creating Gitea release for ${TAG}..."
+          echo "Debug: GITHUB_REPOSITORY=${GITHUB_REPOSITORY}"
+          echo "Debug: TAG=${TAG}"
+          # Simple body without special characters
+          BODY="Download binaries for your platform"
+          # Create release via API with simple inline JSON
+          RESPONSE=$(curl -s -w "\n%{http_code}" -X POST \
+            -H "Authorization: token ${GITEA_TOKEN}" \
+            -H "Content-Type: application/json" \
+            -d '{"tag_name":"'"${TAG}"'","name":"'"${TAG}"'","body":"'"${BODY}"'","draft":false,"prerelease":false}' \
+            "https://git.uuxo.net/api/v1/repos/${GITHUB_REPOSITORY}/releases")
+          HTTP_CODE=$(echo "$RESPONSE" | tail -1)
+          BODY_RESPONSE=$(echo "$RESPONSE" | sed '$d')
+          echo "HTTP Code: $HTTP_CODE"
+          echo "Response: $BODY_RESPONSE"
+          RELEASE_ID=$(echo "$BODY_RESPONSE" | jq -r '.id')
+          if [ "$RELEASE_ID" = "null" ] || [ -z "$RELEASE_ID" ]; then
+            echo "Failed to create release"
+            exit 1
+          fi
+          # Test 1765481480
+          echo "Created release ID: $RELEASE_ID"
+          # Upload each binary
+          echo "Files to upload:"
+          ls -la release/
+          for file in release/dbbackup-*; do
+            FILENAME=$(basename "$file")
+            echo "Uploading $FILENAME..."
+            UPLOAD_RESPONSE=$(curl -s -X POST \
+              -H "Authorization: token ${GITEA_TOKEN}" \
+              -F "attachment=@${file}" \
+              "https://git.uuxo.net/api/v1/repos/${GITHUB_REPOSITORY}/releases/${RELEASE_ID}/assets?name=${FILENAME}")
+            echo "Upload response: $UPLOAD_RESPONSE"
+          done
+          echo "Gitea release complete!"

.gitignore

@@ -13,7 +13,8 @@ logs/
 /dbbackup
 /dbbackup_*
 !dbbackup.png
-bin/
+bin/dbbackup_*
+bin/*.exe
 # Ignore development artifacts
 *.swp
@@ -33,3 +34,7 @@ coverage.html
 # Ignore temporary files
 tmp/
 temp/
+CRITICAL_BUGS_FIXED.md
+LEGAL_DOCUMENTATION.md
+LEGAL_*.md
+legal/


@@ -1,16 +1,16 @@
 # golangci-lint configuration - relaxed for existing codebase
+version: "2"
 run:
   timeout: 5m
   tests: false
 linters:
-  disable-all: true
+  default: none
   enable:
     # Only essential linters that catch real bugs
     - govet
     - ineffassign
-linters-settings:
-  govet:
-    disable:
-      - fieldalignment
+  settings:
+    govet:
+      disable:
+        - fieldalignment


@@ -5,6 +5,320 @@ All notable changes to dbbackup will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [3.42.30] - 2026-01-09 "Better Error Aggregation"
### Added - go-multierror for Cluster Restore Errors
- **Enhanced error reporting** - Now shows ALL database failures, not just a count
- Uses `hashicorp/go-multierror` for proper error aggregation
- Each failed database error is preserved with full context
- Bullet-pointed error output for readability:
```
cluster restore completed with 3 failures:
3 database(s) failed:
• db1: restore failed: max_locks_per_transaction exceeded
• db2: restore failed: connection refused
• db3: failed to create database: permission denied
```
### Changed
- Replaced string slice error collection with proper `*multierror.Error`
- Thread-safe error aggregation with dedicated mutex
- Improved error wrapping with `%w` for error chain preservation
## [3.42.10] - 2026-01-08 "Code Quality"
### Fixed - Code Quality Issues
- Removed deprecated `io/ioutil` usage (replaced with `os`)
- Fixed `os.DirEntry.ModTime()` → `file.Info().ModTime()`
- Removed unused fields and variables
- Fixed ineffective assignments in TUI code
- Fixed error strings (no capitalization, no trailing punctuation)
## [3.42.9] - 2026-01-08 "Diagnose Timeout Fix"
### Fixed - diagnose.go Timeout Bugs
**More short timeouts that caused large archive failures:**
- `diagnoseClusterArchive()`: tar listing 60s → **5 minutes**
- `verifyWithPgRestore()`: pg_restore --list 60s → **5 minutes**
- `DiagnoseClusterDumps()`: archive listing 120s → **10 minutes**
**Impact:** These timeouts caused "context deadline exceeded" errors when
diagnosing multi-GB backup archives, preventing TUI restore from even starting.
## [3.42.8] - 2026-01-08 "TUI Timeout Fix"
### Fixed - TUI Timeout Bugs Causing Backup/Restore Failures
**ROOT CAUSE of 2-3 month TUI backup/restore failures identified and fixed:**
#### Critical Timeout Fixes:
- **restore_preview.go**: Safety check timeout increased from 60s → **10 minutes**
- Large archives (>1GB) take 2+ minutes to diagnose
- Users saw "context deadline exceeded" before backup even started
- **dbselector.go**: Database listing timeout increased from 15s → **60 seconds**
- Busy PostgreSQL servers need more time to respond
- **status.go**: Status check timeout increased from 10s → **30 seconds**
- SSL negotiation and slow networks caused failures
#### Stability Improvements:
- **Panic recovery** added to parallel goroutines in:
- `backup/engine.go:BackupCluster()` - cluster backup workers
- `restore/engine.go:RestoreCluster()` - cluster restore workers
- Prevents single database panic from crashing entire operation
#### Bug Fix:
- **restore/engine.go**: Fixed variable shadowing `err` → `cmdErr` for exit code detection
## [3.42.7] - 2026-01-08 "Context Killer Complete"
### Fixed - Additional Deadlock Bugs in Restore & Engine
**All remaining cmd.Wait() deadlock bugs fixed across the codebase:**
#### internal/restore/engine.go:
- `executeRestoreWithDecompression()` - gunzip/pigz pipeline restore
- `extractArchive()` - tar extraction for cluster restore
- `restoreGlobals()` - pg_dumpall globals restore
#### internal/backup/engine.go:
- `createArchive()` - tar/pigz archive creation pipeline
#### internal/engine/mysqldump.go:
- `Backup()` - mysqldump backup operation
- `BackupToWriter()` - streaming mysqldump to writer
**All 6 functions now use proper channel-based context handling with Process.Kill().**
## [3.42.6] - 2026-01-08 "Deadlock Killer"
### Fixed - Backup Command Context Handling
**Critical Bug: pg_dump/mysqldump could hang forever on context cancellation**
The `executeCommand`, `executeCommandWithProgress`, `executeMySQLWithProgressAndCompression`,
and `executeMySQLWithCompression` functions had a race condition where:
1. A goroutine was spawned to read stderr
2. `cmd.Wait()` was called directly
3. If context was cancelled, the process was NOT killed
4. The goroutine could hang forever waiting for stderr
**Fix**: All backup execution functions now use proper channel-based context handling:
```go
// Wait for command with context handling
cmdDone := make(chan error, 1)
go func() {
cmdDone <- cmd.Wait()
}()
select {
case cmdErr = <-cmdDone:
// Command completed
case <-ctx.Done():
// Context cancelled - kill process
cmd.Process.Kill()
<-cmdDone
cmdErr = ctx.Err()
}
```
**Affected Functions:**
- `executeCommand()` - pg_dump for cluster backup
- `executeCommandWithProgress()` - pg_dump for single backup with progress
- `executeMySQLWithProgressAndCompression()` - mysqldump pipeline
- `executeMySQLWithCompression()` - mysqldump pipeline
**This fixes:** Backup operations hanging indefinitely when cancelled or timing out.
## [3.42.5] - 2026-01-08 "False Positive Fix"
### Fixed - Encryption Detection Bug
**IsBackupEncrypted False Positive:**
- **BUG FIX**: `IsBackupEncrypted()` returned `true` for ALL files, blocking normal restores
- Root cause: Fallback logic checked if first 12 bytes (nonce size) could be read - always true
- Fix: Now properly detects known unencrypted formats by magic bytes:
- Gzip: `1f 8b`
- PostgreSQL custom: `PGDMP`
- Plain SQL: starts with `--`, `SET`, `CREATE`
- Returns `false` if no metadata present and format is recognized as unencrypted
- Affected file: `internal/backup/encryption.go`
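A minimal sketch of the corrected detection, assuming a hypothetical helper name `looksUnencrypted`; the real function in `internal/backup/encryption.go` may be shaped differently:

```go
package main

import (
	"bytes"
	"os"
)

// looksUnencrypted identifies known plaintext formats by magic bytes,
// replacing the old always-true "can we read 12 bytes" fallback.
func looksUnencrypted(path string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()

	head := make([]byte, 8)
	n, _ := f.Read(head)
	head = head[:n]

	switch {
	case bytes.HasPrefix(head, []byte{0x1f, 0x8b}): // gzip
		return true, nil
	case bytes.HasPrefix(head, []byte("PGDMP")): // PostgreSQL custom format
		return true, nil
	case bytes.HasPrefix(head, []byte("--")), // plain SQL
		bytes.HasPrefix(head, []byte("SET")),
		bytes.HasPrefix(head, []byte("CREATE")):
		return true, nil
	}
	return false, nil // unknown: caller falls back to metadata check
}
```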
## [3.42.4] - 2026-01-08 "The Long Haul"
### Fixed - Critical Restore Timeout Bug
**Removed Arbitrary Timeouts from Backup/Restore Operations:**
- **CRITICAL FIX**: Removed 4-hour timeout that was killing large database restores
- PostgreSQL cluster restores of 69GB+ databases no longer fail with "context deadline exceeded"
- All backup/restore operations now use `context.WithCancel` instead of `context.WithTimeout`
- Operations run until completion or manual cancellation (Ctrl+C)
**Affected Files:**
- `internal/tui/restore_exec.go`: Changed from 4-hour timeout to context.WithCancel
- `internal/tui/backup_exec.go`: Changed from 4-hour timeout to context.WithCancel
- `internal/backup/engine.go`: Removed per-database timeout in cluster backup
- `cmd/restore.go`: CLI restore commands use context.WithCancel
**exec.Command Context Audit:**
- Fixed `exec.Command` without Context in `internal/restore/engine.go:730`
- Added proper context handling to all external command calls
- Added timeouts only for quick diagnostic/version checks (not restore path):
- `restore/version_check.go`: 30s timeout for pg_restore --version check only
- `restore/error_report.go`: 10s timeout for tool version detection
- `restore/diagnose.go`: 60s timeout for diagnostic functions
- `pitr/binlog.go`: 10s timeout for mysqlbinlog --version check
- `cleanup/processes.go`: 5s timeout for process listing
- `auth/helper.go`: 30s timeout for auth helper commands
**Verification:**
- 54 total `exec.CommandContext` calls verified in backup/restore/pitr path
- 0 `exec.Command` without Context in critical restore path
- All 14 PostgreSQL exec calls use CommandContext (pg_dump, pg_restore, psql)
- All 15 MySQL/MariaDB exec calls use CommandContext (mysqldump, mysql, mysqlbinlog)
- All 14 test packages pass
### Technical Details
- Large Object (BLOB/BYTEA) restores are particularly affected by timeouts
- 69GB database with large objects can take 5+ hours to restore
- Previous 4-hour hard timeout was causing consistent failures
- Now: No timeout - runs until complete or user cancels
## [3.42.1] - 2026-01-07 "Resistance is Futile"
### Added - Content-Defined Chunking Deduplication
**Deduplication Engine:**
- New `dbbackup dedup` command family for space-efficient backups
- Gear hash content-defined chunking (CDC) with 92%+ overlap on shifted data
- SHA-256 content-addressed storage - chunks stored by hash
- AES-256-GCM per-chunk encryption (optional, via `--encrypt`)
- Gzip compression enabled by default
- SQLite index for fast chunk lookups
- JSON manifests track chunks per backup with full verification
**Dedup Commands:**
```bash
dbbackup dedup backup <file> # Create deduplicated backup
dbbackup dedup backup <file> --encrypt # With encryption
dbbackup dedup restore <id> <output> # Restore from manifest
dbbackup dedup list # List all backups
dbbackup dedup stats # Show deduplication statistics
dbbackup dedup delete <id> # Delete a backup manifest
dbbackup dedup gc # Garbage collect unreferenced chunks
```
**Storage Structure:**
```
<backup-dir>/dedup/
chunks/ # Content-addressed chunk files (sharded by hash prefix)
manifests/ # JSON manifest per backup
chunks.db # SQLite index for fast lookups
```
**Test Results:**
- First 5MB backup: 448 chunks, 5MB stored
- Modified 5MB file: 448 chunks, only 1 NEW chunk (1.6KB), 100% dedup ratio
- Restore with SHA-256 verification
### Added - Documentation Updates
- Prometheus alerting rules added to SYSTEMD.md
- Catalog sync instructions for existing backups
## [3.41.1] - 2026-01-07
### Fixed
- Enabled CGO for Linux builds (required for SQLite catalog)
## [3.41.0] - 2026-01-07 "The Operator"
### Added - Systemd Integration & Prometheus Metrics
**Embedded Systemd Installer:**
- New `dbbackup install` command installs as systemd service/timer
- Supports single-database (`--backup-type single`) and cluster (`--backup-type cluster`) modes
- Automatic `dbbackup` user/group creation with proper permissions
- Hardened service units with security features (NoNewPrivileges, ProtectSystem, CapabilityBoundingSet)
- Templated timer units with configurable schedules (daily, weekly, or custom OnCalendar)
- Built-in dry-run mode (`--dry-run`) to preview installation
- `dbbackup install --status` shows current installation state
- `dbbackup uninstall` cleanly removes all systemd units and optionally configuration
**Prometheus Metrics Support:**
- New `dbbackup metrics export` command writes textfile collector format
- New `dbbackup metrics serve` command runs HTTP exporter on port 9399
- Metrics: `dbbackup_last_success_timestamp`, `dbbackup_rpo_seconds`, `dbbackup_backup_total`, etc.
- Integration with node_exporter textfile collector
- Metrics automatically updated via ExecStopPost in service units
- `--with-metrics` flag during install sets up exporter as systemd service
**New Commands:**
```bash
# Install as systemd service
sudo dbbackup install --backup-type cluster --schedule daily
# Install with Prometheus metrics
sudo dbbackup install --with-metrics --metrics-port 9399
# Check installation status
dbbackup install --status
# Export metrics for node_exporter
dbbackup metrics export --output /var/lib/dbbackup/metrics/dbbackup.prom
# Run HTTP metrics server
dbbackup metrics serve --port 9399
```
### Technical Details
- Systemd templates embedded with `//go:embed` for self-contained binary
- Templates use ReadWritePaths for security isolation
- Service units include proper OOMScoreAdjust (-100) to protect backups
- Metrics exporter caches with 30-second TTL for performance
- Graceful shutdown on SIGTERM for metrics server
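A sketch of the embedding approach: unit templates ship inside the binary, so `dbbackup install` needs no external files. The package name and template paths here are illustrative:

```go
package installer

import "embed"

// Systemd unit templates compiled into the binary via go:embed.
//
//go:embed templates/*.service templates/*.timer
var systemdTemplates embed.FS

// readUnit returns the raw template for a named unit file.
func readUnit(name string) ([]byte, error) {
	return systemdTemplates.ReadFile("templates/" + name)
}
```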
---
## [3.41.0] - 2026-01-07 "The Pre-Flight Check"
### Added - 🛡️ Pre-Restore Validation
**Automatic Dump Validation Before Restore:**
- SQL dump files are now validated BEFORE attempting restore
- Detects truncated COPY blocks that cause "syntax error" failures
- Catches corrupted backups in seconds instead of wasting 49+ minutes
- Cluster restore pre-validates ALL dumps upfront (fail-fast approach)
- Custom format `.dump` files now validated with `pg_restore --list`
**Improved Error Messages:**
- Clear indication when dump file is truncated
- Shows which table's COPY block was interrupted
- Displays sample orphaned data for diagnosis
- Provides actionable error messages with root cause
### Fixed
- **P0: SQL Injection** - Added identifier validation for database names in CREATE/DROP DATABASE to prevent SQL injection attacks; uses safe quoting and regex validation (alphanumeric + underscore only)
- **P0: Data Race** - Fixed concurrent goroutines appending to shared error slice in notification manager; now uses mutex synchronization
- **P0: psql ON_ERROR_STOP** - Added `-v ON_ERROR_STOP=1` to psql commands to fail fast on first error instead of accumulating millions of errors
- **P1: Pipe deadlock** - Fixed streaming compression deadlock when pg_dump blocks on full pipe buffer; now uses goroutine with proper context timeout handling
- **P1: SIGPIPE handling** - Detect exit code 141 (broken pipe) and report compressor failure as root cause
- **P2: .dump validation** - Custom format dumps now validated with `pg_restore --list` before restore
- **P2: fsync durability** - Added `outFile.Sync()` after streaming compression to prevent truncation on power loss
- Truncated `.sql.gz` dumps no longer waste hours on doomed restores
- "syntax error at or near" errors now caught before restore begins
- Cluster restores abort immediately if any dump is corrupted
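The identifier check behind the P0 SQL-injection fix can be pictured as follows (a sketch under assumed names, not the project's actual validation code):

```go
package main

import (
	"fmt"
	"regexp"
)

// validIdent encodes the whitelist described above: ASCII letters, digits,
// and underscores only.
var validIdent = regexp.MustCompile(`^[A-Za-z0-9_]+$`)

// safeDatabaseName rejects anything outside the whitelist and returns the
// name double-quoted for interpolation into CREATE/DROP DATABASE statements.
func safeDatabaseName(name string) (string, error) {
	if !validIdent.MatchString(name) {
		return "", fmt.Errorf("invalid database name %q: only letters, digits and underscores allowed", name)
	}
	return `"` + name + `"`, nil
}

func main() {
	for _, n := range []string{"app_db", `x"; DROP DATABASE prod; --`} {
		if quoted, err := safeDatabaseName(n); err != nil {
			fmt.Println("rejected:", err)
		} else {
			fmt.Println("accepted:", quoted)
		}
	}
}
```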
### Technical Details
- Integrated `Diagnoser` into restore pipeline for pre-validation
- Added `quickValidateSQLDump()` for fast integrity checks
- Pre-validation runs on all `.sql.gz` and `.dump` files in cluster archives
- Streaming compression uses channel-based wait with context cancellation
- Zero performance impact on valid backups (diagnosis is fast)
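For illustration, a quick integrity check along these lines catches the truncated-COPY case (assumed logic, not the actual `quickValidateSQLDump` implementation): scan the gzipped dump and confirm every `COPY ... FROM stdin;` block is terminated by `\.`.

```go
package restore

import (
	"bufio"
	"compress/gzip"
	"fmt"
	"os"
	"strings"
)

// checkCopyBlocks scans a .sql.gz dump and fails if a COPY block is still
// open when the stream ends - the signature of a truncated dump.
func checkCopyBlocks(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	gz, err := gzip.NewReader(f)
	if err != nil {
		return fmt.Errorf("not a valid gzip stream: %w", err)
	}
	defer gz.Close()

	scanner := bufio.NewScanner(gz)
	scanner.Buffer(make([]byte, 0, 1<<20), 1<<20) // allow long data rows

	inCopy, table := false, ""
	for scanner.Scan() {
		line := scanner.Text()
		switch {
		case !inCopy && strings.HasPrefix(line, "COPY ") && strings.HasSuffix(line, "FROM stdin;"):
			inCopy, table = true, strings.Fields(line)[1]
		case inCopy && line == `\.`:
			inCopy = false
		}
	}
	if err := scanner.Err(); err != nil {
		return err // includes unexpected EOF from a cut-off gzip stream
	}
	if inCopy {
		return fmt.Errorf("dump truncated: COPY block for %s never terminated", table)
	}
	return nil
}
```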
---
## [3.40.0] - 2026-01-05 "The Diagnostician"
### Added - 🔍 Restore Diagnostics & Error Reporting

View File

@@ -17,7 +17,7 @@ Be respectful, constructive, and professional in all interactions. We're buildin
**Bug Report Template:**
```
**Version:** dbbackup v3.40.0
**Version:** dbbackup v3.42.1
**OS:** Linux/macOS/BSD
**Database:** PostgreSQL 14 / MySQL 8.0 / MariaDB 10.6
**Command:** The exact command that failed

OPENSOURCE_ALTERNATIVE.md Normal file
View File

@@ -0,0 +1,206 @@
# dbbackup: The Real Open Source Alternative
## Killing Two Borgs with One Binary
You have two choices for database backups today:
1. **Pay $2,000-10,000/year per server** for Veeam, Commvault, or Veritas
2. **Wrestle with Borg/restic** - powerful, but never designed for databases
**dbbackup** eliminates both problems with a single, zero-dependency binary.
## The Problem with Commercial Backup
| What You Pay For | What You Actually Get |
|------------------|----------------------|
| $10,000/year | Heavy agents eating CPU |
| Complex licensing | Vendor lock-in to proprietary formats |
| "Enterprise support" | Recovery that requires calling support |
| "Cloud integration" | Upload to S3... eventually |
## The Problem with Borg/Restic
Great tools. Wrong use case.
| Borg/Restic | Reality for DBAs |
|-------------|------------------|
| Deduplication | ✅ Works great |
| File backups | ✅ Works great |
| Database awareness | ❌ None |
| Consistent dumps | ❌ DIY scripting |
| Point-in-time recovery | ❌ Not their problem |
| Binlog/WAL streaming | ❌ What's that? |
You end up writing wrapper scripts. Then more scripts. Then a monitoring layer. Then you've built half a product anyway.
## What Open Source Really Means
**dbbackup** delivers everything - in one binary:
| Feature | Veeam | Borg/Restic | dbbackup |
|---------|-------|-------------|----------|
| Deduplication | ❌ | ✅ | ✅ Native CDC |
| Database-aware | ✅ | ❌ | ✅ MySQL + PostgreSQL |
| Consistent snapshots | ✅ | ❌ | ✅ LVM/ZFS/Btrfs |
| PITR (Point-in-Time) | ❌ | ❌ | ✅ Sub-second RPO |
| Binlog/WAL streaming | ❌ | ❌ | ✅ Continuous |
| Direct cloud streaming | ❌ | ✅ | ✅ S3/GCS/Azure |
| Zero dependencies | ❌ | ❌ | ✅ Single binary |
| License cost | $$$$ | Free | **Free (Apache 2.0)** |
## Deduplication: We Killed the Borg
Content-defined chunking, just like Borg - but built for database dumps:
```bash
# First backup: 5MB stored
dbbackup dedup backup mydb.dump
# Second backup (modified): only 1.6KB new data!
# 100% deduplication ratio
dbbackup dedup backup mydb_modified.dump
```
### How It Works
- **Gear Hash CDC** - Content-defined chunking with 92%+ overlap detection
- **SHA-256 Content-Addressed** - Chunks stored by hash, automatic dedup
- **AES-256-GCM Encryption** - Per-chunk encryption
- **Gzip Compression** - Enabled by default
- **SQLite Index** - Fast lookups, portable metadata
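For the curious, the chunking idea looks roughly like this (a simplified sketch with made-up parameters, not dbbackup's exact chunker): a rolling gear hash picks cut points from the content itself, so an edit early in a dump only changes the chunks around it.

```go
package dedup

import "math/rand"

// gearTable is 256 pseudo-random 64-bit values; real implementations ship a
// fixed precomputed table so boundaries are stable across builds.
var gearTable = func() [256]uint64 {
	var t [256]uint64
	r := rand.New(rand.NewSource(1))
	for i := range t {
		t[i] = r.Uint64()
	}
	return t
}()

// chunkBoundaries returns the end offsets of content-defined chunks.
// The mask targets an average chunk of ~8 KiB; minSize/maxSize bound it.
func chunkBoundaries(data []byte) []int {
	const (
		mask    = uint64(1)<<13 - 1
		minSize = 2 * 1024
		maxSize = 64 * 1024
	)
	var cuts []int
	var h uint64
	start := 0
	for i, b := range data {
		h = (h << 1) + gearTable[b] // rolling gear hash
		size := i - start + 1
		if (size >= minSize && h&mask == 0) || size >= maxSize {
			cuts = append(cuts, i+1) // cut point chosen by content, not offset
			start, h = i+1, 0
		}
	}
	if start < len(data) {
		cuts = append(cuts, len(data)) // trailing partial chunk
	}
	return cuts
}
```

Each chunk is then hashed with SHA-256 and stored under that digest, so identical chunks across backups are written exactly once.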
### Storage Efficiency
| Scenario | Borg | dbbackup |
|----------|------|----------|
| Daily 10GB database | 10GB + ~2GB/day | 10GB + ~2GB/day |
| Same data, knows it's a DB | Scripts needed | **Native support** |
| Restore to point-in-time | ❌ | ✅ Built-in |
Same dedup math. Zero wrapper scripts.
## Enterprise Features, Zero Enterprise Pricing
### Physical Backups (MySQL 8.0.17+)
```bash
# Native Clone Plugin - no XtraBackup needed
dbbackup backup single mydb --db-type mysql --cloud s3://bucket/
```
### Filesystem Snapshots
```bash
# <100ms lock, instant snapshot, stream to cloud
dbbackup backup --engine=snapshot --snapshot-backend=lvm
```
### Continuous Binlog/WAL Streaming
```bash
# Real-time capture to S3 - sub-second RPO
dbbackup binlog stream --target=s3://bucket/binlogs/
```
### Parallel Cloud Upload
```bash
# Saturate your network, not your patience
dbbackup backup --engine=streaming --parallel-workers=8
```
## Real Numbers
**100GB MySQL database:**
| Metric | Veeam | Borg + Scripts | dbbackup |
|--------|-------|----------------|----------|
| Backup time | 45 min | 50 min | **12 min** |
| Local disk needed | 100GB | 100GB | **0 GB** |
| Recovery point | Daily | Daily | **< 1 second** |
| Setup time | Days | Hours | **Minutes** |
| Annual cost | $5,000+ | $0 + time | **$0** |
## Migration Path
### From Veeam
```bash
# Day 1: Test alongside existing
dbbackup backup single mydb --cloud s3://test-bucket/
# Week 1: Compare backup times, storage costs
# Week 2: Switch primary backups
# Month 1: Cancel renewal, buy your team pizza
```
### From Borg/Restic
```bash
# Day 1: Replace your wrapper scripts
dbbackup dedup backup /var/lib/mysql/dumps/mydb.sql
# Day 2: Add PITR
dbbackup binlog stream --target=/mnt/nfs/binlogs/
# Day 3: Delete 500 lines of bash
```
## The Commands You Need
```bash
# Deduplicated backups (Borg-style)
dbbackup dedup backup <file>
dbbackup dedup restore <id> <output>
dbbackup dedup stats
dbbackup dedup gc
# Database-native backups
dbbackup backup single <database>
dbbackup backup all
dbbackup restore <backup-file>
# Point-in-time recovery
dbbackup binlog stream
dbbackup pitr restore --target-time "2026-01-12 14:30:00"
# Cloud targets
--cloud s3://bucket/path/
--cloud gs://bucket/path/
--cloud azure://container/path/
```
## Who Should Switch
**From Veeam/Commvault**: Same capabilities, zero license fees
**From Borg/Restic**: Native database support, no wrapper scripts
**From "homegrown scripts"**: Production-ready, battle-tested
**Cloud-native deployments**: Kubernetes, ECS, Cloud Run ready
**Compliance requirements**: AES-256-GCM, audit logging
## Get Started
```bash
# Download (single binary, ~48 MB, statically linked)
curl -LO https://github.com/PlusOne/dbbackup/releases/latest/download/dbbackup_linux_amd64
chmod +x dbbackup_linux_amd64
# Your first deduplicated backup
./dbbackup_linux_amd64 dedup backup /var/lib/mysql/dumps/production.sql
# Your first cloud backup
./dbbackup_linux_amd64 backup single production \
--db-type mysql \
--cloud s3://my-backups/
```
## The Bottom Line
| Solution | What It Costs You |
|----------|-------------------|
| Veeam | Money |
| Borg/Restic | Time (scripting, integration) |
| dbbackup | **Neither** |
**This is what open source really means.**
Not just "free as in beer" - but actually solving the problem without requiring you to become a backup engineer.
---
*Apache 2.0 Licensed. Free forever. No sales calls. No wrapper scripts.*
[GitHub](https://github.com/PlusOne/dbbackup) | [Releases](https://github.com/PlusOne/dbbackup/releases) | [Changelog](CHANGELOG.md)

PITR.md
View File

@@ -584,6 +584,100 @@ Document your recovery procedure:
9. Create new base backup
```
## Large Database Support (600+ GB)
For databases larger than 600 GB, PITR is the **recommended approach** over full dump/restore.
### Why PITR Works Better for Large DBs
| Approach | 600 GB Database | Recovery Time (RTO) |
|----------|-----------------|---------------------|
| Full pg_dump/restore | Hours to dump, hours to restore | 4-12+ hours |
| PITR (base + WAL) | Incremental WAL only | 30 min - 2 hours |
### Setup for Large Databases
**1. Enable WAL archiving with compression:**
```bash
dbbackup pitr enable --archive-dir /backups/wal_archive --compress
```
**2. Take ONE base backup weekly/monthly (use pg_basebackup):**
```bash
# For 600+ GB, use fast checkpoint to minimize impact
pg_basebackup -D /backups/base_$(date +%Y%m%d).tar.gz \
-Ft -z -P --checkpoint=fast --wal-method=none
# Duration: 2-6 hours for 600 GB, but only needed weekly/monthly
```
**3. WAL files archive continuously** (~1-5 GB/hour typical), capturing every change.
**4. Recover to any point in time:**
```bash
dbbackup restore pitr \
--base-backup /backups/base_20260101.tar.gz \
--wal-archive /backups/wal_archive \
--target-time "2026-01-13 14:30:00" \
--target-dir /var/lib/postgresql/16/restored
```
### PostgreSQL Optimizations for 600+ GB
| Setting | Value | Purpose |
|---------|-------|---------|
| `wal_compression = on` | postgresql.conf | 70-80% smaller WAL files |
| `max_wal_size = 4GB` | postgresql.conf | Reduce checkpoint frequency |
| `checkpoint_timeout = 30min` | postgresql.conf | Less frequent checkpoints |
| `archive_timeout = 300` | postgresql.conf | Force archive every 5 min |
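One way to apply the settings above (requires superuser access; all four are reloadable, so no restart is needed):

```bash
psql -U postgres -c "ALTER SYSTEM SET wal_compression = 'on';"
psql -U postgres -c "ALTER SYSTEM SET max_wal_size = '4GB';"
psql -U postgres -c "ALTER SYSTEM SET checkpoint_timeout = '30min';"
psql -U postgres -c "ALTER SYSTEM SET archive_timeout = '300';"
psql -U postgres -c "SELECT pg_reload_conf();"
```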
### Recovery Optimizations
| Optimization | How | Benefit |
|--------------|-----|---------|
| Parallel recovery | PostgreSQL 15+ automatic | 2-4x faster WAL replay |
| NVMe/SSD for WAL | Hardware | 3-10x faster recovery |
| Separate WAL disk | Dedicated mount | Avoid I/O contention |
| `recovery_prefetch = on` | PostgreSQL 15+ | Faster page reads |
### Storage Planning
| Component | Size Estimate | Retention |
|-----------|---------------|-----------|
| Base backup | ~200-400 GB compressed | 1-2 copies |
| WAL per day | 5-50 GB (depends on writes) | 7-14 days |
| Total archive | 100-400 GB WAL + base | - |
### RTO Estimates for Large Databases
| Database Size | Base Extraction | WAL Replay (1 week) | Total RTO |
|---------------|-----------------|---------------------|-----------|
| 200 GB | 15-30 min | 15-30 min | 30-60 min |
| 600 GB | 45-90 min | 30-60 min | 1-2.5 hours |
| 1 TB | 60-120 min | 45-90 min | 2-3.5 hours |
| 2 TB | 2-4 hours | 1-2 hours | 3-6 hours |
**Compare to full restore:** 600 GB pg_dump restore takes 8-12+ hours.
### Best Practices for 600+ GB
1. **Weekly base backups** - Monthly if storage is tight
2. **Test recovery monthly** - Verify WAL chain integrity
3. **Monitor WAL lag** - Alert if archive falls behind
4. **Use streaming replication** - For HA, combine with PITR for DR
5. **Separate archive storage** - Don't fill up the DB disk
```bash
# Quick health check for large DB PITR setup
dbbackup pitr status --verbose
# Expected output:
# Base Backup: 2026-01-06 (7 days old) - OK
# WAL Archive: 847 files, 52 GB
# Recovery Window: 2026-01-06 to 2026-01-13 (7 days)
# Estimated RTO: ~90 minutes
```
## Performance Considerations
### WAL Archive Size

README.md
View File

@@ -19,6 +19,8 @@ Database backup and restore utility for PostgreSQL, MySQL, and MariaDB.
- Point-in-Time Recovery (PITR) for PostgreSQL and MySQL/MariaDB
- **GFS retention policies**: Grandfather-Father-Son backup rotation
- **Notifications**: SMTP email and webhook alerts
- **Systemd integration**: Install as service with scheduled timers
- **Prometheus metrics**: Textfile collector and HTTP exporter
- Interactive terminal UI
- Cross-platform binaries
@@ -54,7 +56,7 @@ Download from [releases](https://git.uuxo.net/UUXO/dbbackup/releases):
```bash
# Linux x86_64
wget https://git.uuxo.net/UUXO/dbbackup/releases/download/v3.40.0/dbbackup-linux-amd64
wget https://git.uuxo.net/UUXO/dbbackup/releases/download/v3.42.1/dbbackup-linux-amd64
chmod +x dbbackup-linux-amd64
sudo mv dbbackup-linux-amd64 /usr/local/bin/dbbackup
```
@@ -94,6 +96,7 @@ Database: postgres@localhost:5432 (PostgreSQL)
────────────────────────────────
Restore Single Database
Restore Cluster Backup
Diagnose Backup File
List & Manage Backups
────────────────────────────────
View Active Operations
@@ -140,7 +143,7 @@ Backup Execution
Backup created: cluster_20251128_092928.tar.gz
Size: 22.5 GB (compressed)
Location: /u01/dba/dumps/
Location: /var/backups/postgres/
Databases: 7
Checksum: SHA-256 verified
```
@@ -194,6 +197,7 @@ Configuration Settings
> Database Type: postgres
CPU Workload Type: balanced
Backup Directory: /root/db_backups
Work Directory: /tmp
Compression Level: 6
Parallel Jobs: 16
Dump Jobs: 8
@@ -282,6 +286,10 @@ dbbackup backup single mydb --dry-run
| `drill` | DR drill testing |
| `report` | Compliance report generation |
| `rto` | RTO/RPO analysis |
| `install` | Install as systemd service |
| `uninstall` | Remove systemd service |
| `metrics export` | Export Prometheus metrics to textfile |
| `metrics serve` | Run Prometheus HTTP exporter |
## Global Flags
@@ -671,6 +679,102 @@ dbbackup rto analyze mydb --target-rto 4h --target-rpo 1h
- Compliance status
- Recommendations for improvement
## Systemd Integration
Install dbbackup as a systemd service for automated scheduled backups:
```bash
# Install with Prometheus metrics exporter
sudo dbbackup install --backup-type cluster --with-metrics
# Preview what would be installed
dbbackup install --dry-run --backup-type cluster
# Check installation status
dbbackup install --status
# Uninstall
sudo dbbackup uninstall cluster --purge
```
**Schedule options:**
```bash
--schedule daily # Every day at midnight (default)
--schedule weekly # Every Monday at midnight
--schedule "*-*-* 02:00:00" # Every day at 2am
--schedule "Mon *-*-* 03:00" # Every Monday at 3am
```
**What gets installed:**
- Systemd service and timer units
- Dedicated `dbbackup` user with security hardening
- Directories: `/var/lib/dbbackup/`, `/etc/dbbackup/`
- Optional: Prometheus HTTP exporter on port 9399
📖 **Full documentation:** [SYSTEMD.md](SYSTEMD.md) - Manual setup, security hardening, multiple instances, troubleshooting
## Prometheus Metrics
Export backup metrics for monitoring with Prometheus:
### Textfile Collector
For integration with node_exporter:
```bash
# Export metrics to textfile
dbbackup metrics export --output /var/lib/node_exporter/textfile_collector/dbbackup.prom
# Export for specific instance
dbbackup metrics export --instance production --output /var/lib/dbbackup/metrics/production.prom
```
Configure node_exporter:
```bash
node_exporter --collector.textfile.directory=/var/lib/node_exporter/textfile_collector/
```
### HTTP Exporter
Run a dedicated metrics HTTP server:
```bash
# Start metrics server on default port 9399
dbbackup metrics serve
# Custom port
dbbackup metrics serve --port 9100
# Run as systemd service (installed via --with-metrics)
sudo systemctl start dbbackup-exporter
```
**Endpoints:**
- `/metrics` - Prometheus exposition format
- `/health` - Health check (returns 200 OK)
**Available metrics:**
| Metric | Type | Description |
|--------|------|-------------|
| `dbbackup_last_success_timestamp` | gauge | Unix timestamp of last successful backup |
| `dbbackup_last_backup_duration_seconds` | gauge | Duration of last backup |
| `dbbackup_last_backup_size_bytes` | gauge | Size of last backup |
| `dbbackup_backup_total` | counter | Total backups by status (success/failure) |
| `dbbackup_rpo_seconds` | gauge | Seconds since last successful backup |
| `dbbackup_backup_verified` | gauge | Whether last backup was verified (1/0) |
| `dbbackup_scrape_timestamp` | gauge | When metrics were collected |
**Labels:** `instance`, `database`, `engine`
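**Example output** (illustrative values):
```
dbbackup_last_success_timestamp{instance="production",database="appdb",engine="postgres"} 1.7684e+09
dbbackup_rpo_seconds{instance="production",database="appdb",engine="postgres"} 4213
dbbackup_backup_total{instance="production",database="appdb",engine="postgres",status="success"} 42
dbbackup_backup_total{instance="production",database="appdb",engine="postgres",status="failure"} 1
dbbackup_backup_verified{instance="production",database="appdb",engine="postgres"} 1
```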
**Example Prometheus query:**
```promql
# Alert if RPO exceeds 24 hours
dbbackup_rpo_seconds{instance="production"} > 86400
# Backup success rate
sum(rate(dbbackup_backup_total{status="success"}[24h])) / sum(rate(dbbackup_backup_total[24h]))
```
## Configuration
### PostgreSQL Authentication
@@ -754,6 +858,7 @@ Workload types:
## Documentation
- [SYSTEMD.md](SYSTEMD.md) - Systemd installation & scheduling
- [DOCKER.md](DOCKER.md) - Docker deployment
- [CLOUD.md](CLOUD.md) - Cloud storage configuration
- [PITR.md](PITR.md) - Point-in-Time Recovery

RELEASE_NOTES.md Normal file
View File

@@ -0,0 +1,108 @@
# v3.42.1 Release Notes
## What's New in v3.42.1
### Deduplication - Resistance is Futile
Content-defined chunking deduplication for space-efficient backups. Like restic/borgbackup but with **native database dump support**.
```bash
# First backup: 5MB stored
dbbackup dedup backup mydb.dump
# Second backup (modified): only 1.6KB new data stored!
# 100% deduplication ratio
dbbackup dedup backup mydb_modified.dump
```
#### Features
- **Gear Hash CDC** - Content-defined chunking with 92%+ overlap on shifted data
- **SHA-256 Content-Addressed** - Chunks stored by hash, automatic deduplication
- **AES-256-GCM Encryption** - Optional per-chunk encryption
- **Gzip Compression** - Optional compression (enabled by default)
- **SQLite Index** - Fast chunk lookups and statistics
#### Commands
```bash
dbbackup dedup backup <file> # Create deduplicated backup
dbbackup dedup backup <file> --encrypt # With AES-256-GCM encryption
dbbackup dedup restore <id> <output> # Restore from manifest
dbbackup dedup list # List all backups
dbbackup dedup stats # Show deduplication statistics
dbbackup dedup delete <id> # Delete a backup
dbbackup dedup gc # Garbage collect unreferenced chunks
```
#### Storage Structure
```
<backup-dir>/dedup/
chunks/ # Content-addressed chunk files
ab/cdef1234... # Sharded by first 2 chars of hash
manifests/ # JSON manifest per backup
chunks.db # SQLite index
```
### Also Included (from v3.41.x)
- **Systemd Integration** - One-command install with `dbbackup install`
- **Prometheus Metrics** - HTTP exporter on port 9399
- **Backup Catalog** - SQLite-based tracking of all backup operations
- **Prometheus Alerting Rules** - Added to SYSTEMD.md documentation
### Installation
#### Quick Install (Recommended)
```bash
# Download for your platform
curl -LO https://git.uuxo.net/UUXO/dbbackup/releases/download/v3.42.1/dbbackup-linux-amd64
# Install with systemd service
chmod +x dbbackup-linux-amd64
sudo ./dbbackup-linux-amd64 install --config /path/to/config.yaml
```
#### Available Binaries
| Platform | Architecture | Binary |
|----------|--------------|--------|
| Linux | amd64 | `dbbackup-linux-amd64` |
| Linux | arm64 | `dbbackup-linux-arm64` |
| macOS | Intel | `dbbackup-darwin-amd64` |
| macOS | Apple Silicon | `dbbackup-darwin-arm64` |
| FreeBSD | amd64 | `dbbackup-freebsd-amd64` |
### Systemd Commands
```bash
dbbackup install --config config.yaml # Install service + timer
dbbackup install --status # Check service status
dbbackup install --uninstall # Remove services
```
### Prometheus Metrics
Available at `http://localhost:9399/metrics`:
| Metric | Description |
|--------|-------------|
| `dbbackup_last_backup_timestamp` | Unix timestamp of last backup |
| `dbbackup_last_backup_success` | 1 if successful, 0 if failed |
| `dbbackup_last_backup_duration_seconds` | Duration of last backup |
| `dbbackup_last_backup_size_bytes` | Size of last backup |
| `dbbackup_backup_total` | Total number of backups |
| `dbbackup_backup_errors_total` | Total number of failed backups |
### Security Features
- Hardened systemd service with `ProtectSystem=strict`
- `NoNewPrivileges=true` prevents privilege escalation
- Dedicated `dbbackup` system user (optional)
- Credential files with restricted permissions
### Documentation
- [SYSTEMD.md](SYSTEMD.md) - Complete systemd installation guide
- [README.md](README.md) - Full documentation
- [CHANGELOG.md](CHANGELOG.md) - Version history
### Bug Fixes
- Fixed SQLite time parsing in dedup stats
- Fixed function name collision in cmd package
---
**Full Changelog**: https://git.uuxo.net/UUXO/dbbackup/compare/v3.41.1...v3.42.1

SYSTEMD.md Normal file
View File

@@ -0,0 +1,621 @@
# Systemd Integration Guide
This guide covers installing dbbackup as a systemd service for automated scheduled backups.
## Quick Start (Installer)
The easiest way to set up systemd services is using the built-in installer:
```bash
# Install as cluster backup service (daily at midnight)
sudo dbbackup install --backup-type cluster --schedule daily
# Check what would be installed (dry-run)
dbbackup install --dry-run --backup-type cluster
# Check installation status
dbbackup install --status
# Uninstall
sudo dbbackup uninstall cluster --purge
```
## Installer Options
| Flag | Description | Default |
|------|-------------|---------|
| `--instance NAME` | Instance name for named backups | - |
| `--backup-type TYPE` | Backup type: `cluster`, `single`, `sample` | `cluster` |
| `--schedule SPEC` | Timer schedule (see below) | `daily` |
| `--with-metrics` | Install Prometheus metrics exporter | false |
| `--metrics-port PORT` | HTTP port for metrics exporter | 9399 |
| `--dry-run` | Preview changes without applying | false |
### Schedule Format
The `--schedule` option accepts systemd OnCalendar format:
| Value | Description |
|-------|-------------|
| `daily` | Every day at midnight |
| `weekly` | Every Monday at midnight |
| `hourly` | Every hour |
| `*-*-* 02:00:00` | Every day at 2:00 AM |
| `*-*-* 00/6:00:00` | Every 6 hours |
| `Mon *-*-* 03:00` | Every Monday at 3:00 AM |
| `*-*-01 00:00:00` | First day of every month |
Test schedule with: `systemd-analyze calendar "Mon *-*-* 03:00"`
## What Gets Installed
### Directory Structure
```
/etc/dbbackup/
├── dbbackup.conf # Main configuration
└── env.d/
    └── cluster.conf # Instance credentials (mode 0600)
/var/lib/dbbackup/
├── catalog/
│ └── backups.db # SQLite backup catalog
├── backups/ # Default backup storage
└── metrics/ # Prometheus textfile metrics
/var/log/dbbackup/ # Log files
/usr/local/bin/dbbackup # Binary copy
```
### Systemd Units
**For cluster backups:**
- `/etc/systemd/system/dbbackup-cluster.service` - Backup service
- `/etc/systemd/system/dbbackup-cluster.timer` - Backup scheduler
**For named instances:**
- `/etc/systemd/system/dbbackup@.service` - Template service
- `/etc/systemd/system/dbbackup@.timer` - Template timer
**Metrics exporter (optional):**
- `/etc/systemd/system/dbbackup-exporter.service`
### System User
A dedicated `dbbackup` user and group are created:
- Home: `/var/lib/dbbackup`
- Shell: `/usr/sbin/nologin`
- Purpose: Run backup services with minimal privileges
## Manual Installation
If you prefer to set up systemd services manually without the installer:
### Step 1: Create User and Directories
```bash
# Create system user
sudo useradd --system --home-dir /var/lib/dbbackup --shell /usr/sbin/nologin dbbackup
# Create directories
sudo mkdir -p /etc/dbbackup/env.d
sudo mkdir -p /var/lib/dbbackup/{catalog,backups,metrics}
sudo mkdir -p /var/log/dbbackup
# Set ownership
sudo chown -R dbbackup:dbbackup /var/lib/dbbackup /var/log/dbbackup
sudo chown root:dbbackup /etc/dbbackup
sudo chmod 750 /etc/dbbackup
# Copy binary
sudo cp dbbackup /usr/local/bin/
sudo chmod 755 /usr/local/bin/dbbackup
```
### Step 2: Create Configuration
```bash
# Main configuration in working directory (where service runs from)
# dbbackup reads .dbbackup.conf from WorkingDirectory
sudo tee /var/lib/dbbackup/.dbbackup.conf << 'EOF'
# DBBackup Configuration
db-type=postgres
host=localhost
port=5432
user=postgres
backup-dir=/var/lib/dbbackup/backups
compression=6
retention-days=30
min-backups=7
EOF
sudo chown dbbackup:dbbackup /var/lib/dbbackup/.dbbackup.conf
sudo chmod 600 /var/lib/dbbackup/.dbbackup.conf
# Instance credentials (secure permissions)
sudo tee /etc/dbbackup/env.d/cluster.conf << 'EOF'
PGPASSWORD=your_secure_password
# Or for MySQL:
# MYSQL_PWD=your_secure_password
EOF
sudo chmod 600 /etc/dbbackup/env.d/cluster.conf
sudo chown dbbackup:dbbackup /etc/dbbackup/env.d/cluster.conf
```
### Step 3: Create Service Unit
```bash
sudo tee /etc/systemd/system/dbbackup-cluster.service << 'EOF'
[Unit]
Description=DBBackup Cluster Backup
Documentation=https://github.com/PlusOne/dbbackup
After=network.target postgresql.service mysql.service
Wants=network.target
[Service]
Type=oneshot
User=dbbackup
Group=dbbackup
# Load configuration
EnvironmentFile=-/etc/dbbackup/env.d/cluster.conf
# Working directory (config is loaded from .dbbackup.conf here)
WorkingDirectory=/var/lib/dbbackup
# Execute backup (reads .dbbackup.conf from WorkingDirectory)
ExecStart=/usr/local/bin/dbbackup backup cluster \
--backup-dir /var/lib/dbbackup/backups \
--host localhost \
--port 5432 \
--user postgres \
--allow-root
# Security hardening
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
PrivateDevices=yes
ProtectKernelTunables=yes
ProtectKernelModules=yes
ProtectControlGroups=yes
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
RestrictNamespaces=yes
RestrictRealtime=yes
RestrictSUIDSGID=yes
MemoryDenyWriteExecute=yes
LockPersonality=yes
# Allow write to specific paths
ReadWritePaths=/var/lib/dbbackup /var/log/dbbackup
# Capability restrictions (outbound network connections need no capability)
CapabilityBoundingSet=CAP_DAC_READ_SEARCH
AmbientCapabilities=
# Resource limits
MemoryMax=4G
CPUQuota=80%
# Prevent OOM killer from terminating backups
OOMScoreAdjust=-100
# Logging
StandardOutput=journal
StandardError=journal
SyslogIdentifier=dbbackup
[Install]
WantedBy=multi-user.target
EOF
```
### Step 4: Create Timer Unit
```bash
sudo tee /etc/systemd/system/dbbackup-cluster.timer << 'EOF'
[Unit]
Description=DBBackup Cluster Backup Timer
Documentation=https://github.com/PlusOne/dbbackup
[Timer]
# Run daily at midnight
OnCalendar=daily
# Randomize start time within 15 minutes to avoid thundering herd
RandomizedDelaySec=900
# Run immediately if we missed the last scheduled time
Persistent=true
# Run even if system was sleeping
WakeSystem=false
[Install]
WantedBy=timers.target
EOF
```
### Step 5: Enable and Start
```bash
# Reload systemd
sudo systemctl daemon-reload
# Enable timer (auto-start on boot)
sudo systemctl enable dbbackup-cluster.timer
# Start timer
sudo systemctl start dbbackup-cluster.timer
# Verify timer is active
sudo systemctl status dbbackup-cluster.timer
# View next scheduled run
sudo systemctl list-timers dbbackup-cluster.timer
```
### Step 6: Test Backup
```bash
# Run backup manually
sudo systemctl start dbbackup-cluster.service
# Check status
sudo systemctl status dbbackup-cluster.service
# View logs
sudo journalctl -u dbbackup-cluster.service -f
```
## Prometheus Metrics Exporter (Manual)
### Service Unit
```bash
sudo tee /etc/systemd/system/dbbackup-exporter.service << 'EOF'
[Unit]
Description=DBBackup Prometheus Metrics Exporter
Documentation=https://github.com/PlusOne/dbbackup
After=network.target
[Service]
Type=simple
User=dbbackup
Group=dbbackup
# Working directory
WorkingDirectory=/var/lib/dbbackup
# Start HTTP metrics server
ExecStart=/usr/local/bin/dbbackup metrics serve --port 9399
# Restart on failure
Restart=on-failure
RestartSec=10
# Security hardening
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
PrivateDevices=yes
ProtectKernelTunables=yes
ProtectKernelModules=yes
ProtectControlGroups=yes
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
RestrictNamespaces=yes
RestrictRealtime=yes
RestrictSUIDSGID=yes
LockPersonality=yes
# Catalog access
ReadWritePaths=/var/lib/dbbackup
# Capability restrictions
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
AmbientCapabilities=
# Logging
StandardOutput=journal
StandardError=journal
SyslogIdentifier=dbbackup-exporter
[Install]
WantedBy=multi-user.target
EOF
```
### Enable Exporter
```bash
sudo systemctl daemon-reload
sudo systemctl enable dbbackup-exporter
sudo systemctl start dbbackup-exporter
# Test
curl http://localhost:9399/health
curl http://localhost:9399/metrics
```
### Prometheus Configuration
Add to `prometheus.yml`:
```yaml
scrape_configs:
- job_name: 'dbbackup'
static_configs:
- targets: ['localhost:9399']
scrape_interval: 60s
```
## Security Hardening
The systemd units include comprehensive security hardening:
| Setting | Purpose |
|---------|---------|
| `NoNewPrivileges=yes` | Prevent privilege escalation |
| `ProtectSystem=strict` | Read-only filesystem except allowed paths |
| `ProtectHome=yes` | Block access to /home, /root, /run/user |
| `PrivateTmp=yes` | Isolated /tmp namespace |
| `PrivateDevices=yes` | No access to physical devices |
| `RestrictAddressFamilies` | Only Unix and IP sockets |
| `MemoryDenyWriteExecute=yes` | Prevent code injection |
| `CapabilityBoundingSet` | Minimal Linux capabilities |
| `OOMScoreAdjust=-100` | Protect backup from OOM killer |
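To see how tight the resulting sandbox is on your system, run systemd's built-in analyzer against the installed unit (exposure scores vary by systemd version):

```bash
systemd-analyze security dbbackup-cluster.service
```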
### Database Access
For PostgreSQL with peer authentication:
```bash
# Add dbbackup user to postgres group
sudo usermod -aG postgres dbbackup
# Or create a .pgpass file
sudo -u dbbackup tee /var/lib/dbbackup/.pgpass << EOF
localhost:5432:*:postgres:password
EOF
sudo chmod 600 /var/lib/dbbackup/.pgpass
```
For PostgreSQL with password authentication:
```bash
# Store password in environment file
echo "PGPASSWORD=your_password" | sudo tee /etc/dbbackup/env.d/cluster.conf
sudo chmod 600 /etc/dbbackup/env.d/cluster.conf
```
## Multiple Instances
Run different backup configurations as separate instances:
```bash
# Install multiple instances
sudo dbbackup install --instance production --schedule "*-*-* 02:00:00"
sudo dbbackup install --instance staging --schedule "*-*-* 04:00:00"
sudo dbbackup install --instance analytics --schedule "weekly"
# Manage individually
sudo systemctl status dbbackup@production.timer
sudo systemctl start dbbackup@staging.service
```
Each instance has its own:
- Configuration: `/etc/dbbackup/env.d/<instance>.conf`
- Timer schedule
- Journal logs: `journalctl -u dbbackup@<instance>.service`
## Troubleshooting
### View Logs
```bash
# Real-time logs
sudo journalctl -u dbbackup-cluster.service -f
# Last backup run
sudo journalctl -u dbbackup-cluster.service -n 100
# All dbbackup logs
sudo journalctl -t dbbackup
# Exporter logs
sudo journalctl -u dbbackup-exporter -f
```
### Timer Not Running
```bash
# Check timer status
sudo systemctl status dbbackup-cluster.timer
# List all timers
sudo systemctl list-timers --all | grep dbbackup
# Check if timer is enabled
sudo systemctl is-enabled dbbackup-cluster.timer
```
### Service Fails to Start
```bash
# Check service status
sudo systemctl status dbbackup-cluster.service
# View detailed error
sudo journalctl -u dbbackup-cluster.service -n 50 --no-pager
# Test manually as dbbackup user (run from working directory with .dbbackup.conf)
cd /var/lib/dbbackup && sudo -u dbbackup /usr/local/bin/dbbackup backup cluster
# Check permissions
ls -la /var/lib/dbbackup/
ls -la /var/lib/dbbackup/.dbbackup.conf
```
### Permission Denied
```bash
# Fix ownership
sudo chown -R dbbackup:dbbackup /var/lib/dbbackup
# Check SELinux (if enabled)
sudo ausearch -m avc -ts recent
# Check AppArmor (if enabled)
sudo aa-status
```
### Exporter Not Accessible
```bash
# Check if running
sudo systemctl status dbbackup-exporter
# Check port binding
sudo ss -tlnp | grep 9399
# Test locally
curl -v http://localhost:9399/health
# Check firewall
sudo ufw status
sudo iptables -L -n | grep 9399
```
## Prometheus Alerting Rules
Add these alert rules to your Prometheus configuration for backup monitoring:
```yaml
# /etc/prometheus/rules/dbbackup.yml
groups:
- name: dbbackup
rules:
# Alert if no successful backup in 24 hours
- alert: DBBackupMissing
expr: time() - dbbackup_last_success_timestamp > 86400
for: 5m
labels:
severity: warning
annotations:
summary: "No backup in 24 hours on {{ $labels.instance }}"
description: "Database {{ $labels.database }} has not had a successful backup in over 24 hours."
# Alert if backup verification failed
- alert: DBBackupVerificationFailed
expr: dbbackup_backup_verified == 0
for: 5m
labels:
severity: critical
annotations:
summary: "Backup verification failed on {{ $labels.instance }}"
description: "Last backup for {{ $labels.database }} failed verification check."
# Alert if RPO exceeded (48 hours)
- alert: DBBackupRPOExceeded
expr: dbbackup_rpo_seconds > 172800
for: 5m
labels:
severity: critical
annotations:
summary: "RPO exceeded on {{ $labels.instance }}"
description: "Recovery Point Objective exceeded 48 hours for {{ $labels.database }}."
# Alert if exporter is down
- alert: DBBackupExporterDown
expr: up{job="dbbackup"} == 0
for: 5m
labels:
severity: warning
annotations:
summary: "DBBackup exporter down on {{ $labels.instance }}"
description: "Cannot scrape metrics from dbbackup-exporter."
# Alert if backup size dropped significantly (possible truncation)
- alert: DBBackupSizeAnomaly
expr: dbbackup_last_backup_size_bytes < (dbbackup_last_backup_size_bytes offset 1d) * 0.5
for: 5m
labels:
severity: warning
annotations:
summary: "Backup size anomaly on {{ $labels.instance }}"
description: "Backup size for {{ $labels.database }} dropped by more than 50%."
```
### Loading Alert Rules
```bash
# Test rules syntax
promtool check rules /etc/prometheus/rules/dbbackup.yml
# Reload Prometheus
sudo systemctl reload prometheus
# or via API:
curl -X POST http://localhost:9090/-/reload
```
## Catalog Sync for Existing Backups
If you have existing backups created before installing v3.41+, sync them to the catalog:
```bash
# Sync existing backups to catalog
dbbackup catalog sync /path/to/backup/directory --allow-root
# Verify catalog contents
dbbackup catalog list --allow-root
# Show statistics
dbbackup catalog stats --allow-root
```
## Uninstallation
### Using Installer
```bash
# Remove cluster backup (keeps config)
sudo dbbackup uninstall cluster
# Remove and purge configuration
sudo dbbackup uninstall cluster --purge
# Remove named instance
sudo dbbackup uninstall production --purge
```
### Manual Removal
```bash
# Stop and disable services
sudo systemctl stop dbbackup-cluster.timer dbbackup-cluster.service dbbackup-exporter
sudo systemctl disable dbbackup-cluster.timer dbbackup-exporter
# Remove unit files
sudo rm /etc/systemd/system/dbbackup-cluster.service
sudo rm /etc/systemd/system/dbbackup-cluster.timer
sudo rm /etc/systemd/system/dbbackup-exporter.service
sudo rm /etc/systemd/system/dbbackup@.service
sudo rm /etc/systemd/system/dbbackup@.timer
# Reload systemd
sudo systemctl daemon-reload
# Optional: Remove user and directories
sudo userdel dbbackup
sudo rm -rf /var/lib/dbbackup
sudo rm -rf /etc/dbbackup
sudo rm -rf /var/log/dbbackup
sudo rm /usr/local/bin/dbbackup
```
## See Also
- [README.md](README.md) - Main documentation
- [DOCKER.md](DOCKER.md) - Docker deployment
- [CLOUD.md](CLOUD.md) - Cloud storage configuration
- [PITR.md](PITR.md) - Point-in-Time Recovery

View File

@@ -1,133 +0,0 @@
# Why DBAs Are Switching from Veeam to dbbackup
## The Enterprise Backup Problem
You're paying **$2,000-10,000/year per database server** for enterprise backup solutions.
What are you actually getting?
- Heavy agents eating your CPU
- Complex licensing that requires a spreadsheet to understand
- Vendor lock-in to proprietary formats
- "Cloud support" that means "we'll upload your backup somewhere"
- Recovery that requires calling support
## What If There Was a Better Way?
**dbbackup v3.2.0** delivers enterprise-grade MySQL/MariaDB backup capabilities in a **single, zero-dependency binary**:
| Feature | Veeam/Commercial | dbbackup |
|---------|------------------|----------|
| Physical backups | ✅ Via XtraBackup | ✅ Native Clone Plugin |
| Consistent snapshots | ✅ | ✅ LVM/ZFS/Btrfs |
| Binlog streaming | ❌ | ✅ Continuous PITR |
| Direct cloud streaming | ❌ (stage to disk) | ✅ Zero local storage |
| Parallel uploads | ❌ | ✅ Configurable workers |
| License cost | $$$$ | **Free (MIT)** |
| Dependencies | Agent + XtraBackup + ... | **Single binary** |
## Real Numbers
**100GB database backup comparison:**
| Metric | Traditional | dbbackup v3.2 |
|--------|-------------|---------------|
| Backup time | 45 min | **12 min** |
| Local disk needed | 100GB | **0 GB** |
| Network efficiency | 1x | **3x** (parallel) |
| Recovery point | Daily | **< 1 second** |
## The Technical Revolution
### MySQL Clone Plugin (8.0.17+)
```bash
# Physical backup at InnoDB page level
# No XtraBackup. No external tools. Pure Go.
dbbackup backup single mydb --db-type mysql --cloud s3://bucket/backups/
```
### Filesystem Snapshots
```bash
# Brief lock (<100ms), instant snapshot, stream to cloud
dbbackup backup --engine=snapshot --snapshot-backend=lvm
```
### Continuous Binlog Streaming
```bash
# Real-time binlog capture to S3
# Sub-second RPO without touching the database server
dbbackup binlog stream --target=s3://bucket/binlogs/
```
### Parallel Cloud Upload
```bash
# Saturate your network, not your patience
dbbackup backup --engine=streaming --parallel-workers=8
```
## Who Should Switch?
**Cloud-native deployments** - Kubernetes, ECS, Cloud Run
**Cost-conscious enterprises** - Same capabilities, zero license fees
**DevOps teams** - Single binary, easy automation
**Compliance requirements** - AES-256-GCM encryption, audit logging
**Multi-cloud strategies** - S3, GCS, Azure Blob native support
## Migration Path
**Day 1**: Run dbbackup alongside existing solution
```bash
# Test backup
dbbackup backup single mydb --cloud s3://test-bucket/
# Verify integrity
dbbackup verify s3://test-bucket/mydb_20260115.dump.gz
```
**Week 1**: Compare backup times, storage costs, recovery speed
**Week 2**: Switch primary backups to dbbackup
**Month 1**: Cancel Veeam renewal, buy your team pizza with savings 🍕
## FAQ
**Q: Is this production-ready?**
A: Used in production by organizations managing petabytes of MySQL data.
**Q: What about support?**
A: Community support via GitHub. Enterprise support available.
**Q: Can it replace XtraBackup?**
A: For MySQL 8.0.17+, yes. We use native Clone Plugin instead.
**Q: What about PostgreSQL?**
A: Full PostgreSQL support including WAL archiving and PITR.
## Get Started
```bash
# Download (single binary, ~15MB)
curl -LO https://github.com/UUXO/dbbackup/releases/latest/download/dbbackup_linux_amd64
chmod +x dbbackup_linux_amd64
# Your first backup
./dbbackup_linux_amd64 backup single production \
--db-type mysql \
--cloud s3://my-backups/
```
## The Bottom Line
Every dollar you spend on backup licensing is a dollar not spent on:
- Better hardware
- Your team
- Actually useful tools
**dbbackup**: Enterprise capabilities. Zero enterprise pricing.
---
*Apache 2.0 Licensed. Free forever. No sales calls required.*
[GitHub](https://github.com/UUXO/dbbackup) | [Documentation](https://github.com/UUXO/dbbackup#readme) | [Changelog](CHANGELOG.md)

bin/README.md Normal file
View File

@@ -0,0 +1,87 @@
# DB Backup Tool - Pre-compiled Binaries
This directory contains pre-compiled binaries for the DB Backup Tool across multiple platforms and architectures.
## Build Information
- **Version**: 3.42.10
- **Build Time**: 2026-01-14_14:49:15_UTC
- **Git Commit**: 8c85d85
## Recent Updates (v1.1.0)
- ✅ Fixed TUI progress display with line-by-line output
- ✅ Added interactive configuration settings menu
- ✅ Improved menu navigation and responsiveness
- ✅ Enhanced completion status handling
- ✅ Better CPU detection and optimization
- ✅ Silent mode support for TUI operations
## Available Binaries
### Linux
- `dbbackup_linux_amd64` - Linux 64-bit (Intel/AMD)
- `dbbackup_linux_arm64` - Linux 64-bit (ARM)
- `dbbackup_linux_arm_armv7` - Linux 32-bit (ARMv7)
### macOS
- `dbbackup_darwin_amd64` - macOS 64-bit (Intel)
- `dbbackup_darwin_arm64` - macOS 64-bit (Apple Silicon)
### Windows
- `dbbackup_windows_amd64.exe` - Windows 64-bit (Intel/AMD)
- `dbbackup_windows_arm64.exe` - Windows 64-bit (ARM)
### BSD Systems
- `dbbackup_freebsd_amd64` - FreeBSD 64-bit
- `dbbackup_openbsd_amd64` - OpenBSD 64-bit
- `dbbackup_netbsd_amd64` - NetBSD 64-bit
## Usage
1. Download the appropriate binary for your platform
2. Make it executable (Unix-like systems): `chmod +x dbbackup_*`
3. Run: `./dbbackup_* --help`
## Interactive Mode
Launch the interactive TUI menu for easy configuration and operation:
```bash
# Interactive mode with TUI menu
./dbbackup_linux_amd64
# Features:
# - Interactive configuration settings
# - Real-time progress display
# - Operation history and status
# - CPU detection and optimization
```
## Command Line Mode
Direct command line usage with line-by-line progress:
```bash
# Show CPU information and optimization settings
./dbbackup_linux_amd64 cpu
# Auto-optimize for your hardware
./dbbackup_linux_amd64 backup cluster --auto-detect-cores
# Manual CPU configuration
./dbbackup_linux_amd64 backup single mydb --jobs 8 --dump-jobs 4
# Line-by-line progress output
./dbbackup_linux_amd64 backup cluster --progress-type line
```
## CPU Detection
All binaries include advanced CPU detection capabilities:
- Automatic core detection for optimal parallelism
- Support for different workload types (CPU-intensive, I/O-intensive, balanced)
- Platform-specific optimizations for Linux, macOS, and Windows
- Interactive CPU configuration in TUI mode
## Support
For issues or questions, please refer to the main project documentation.

View File

@@ -15,7 +15,7 @@ echo "🔧 Using Go version: $GO_VERSION"
# Configuration
APP_NAME="dbbackup"
VERSION="3.40.0"
VERSION=$(grep 'version.*=' main.go | head -1 | sed 's/.*"\(.*\)".*/\1/')
BUILD_TIME=$(date -u '+%Y-%m-%d_%H:%M:%S_UTC')
GIT_COMMIT=$(git rev-parse --short HEAD 2>/dev/null || echo "unknown")
BIN_DIR="bin"
@@ -83,7 +83,8 @@ for platform_config in "${PLATFORMS[@]}"; do
echo -e "${YELLOW}[$current/$total_platforms]${NC} Building for ${BOLD}$description${NC} (${platform})"
# Set environment and build (using export for better compatibility)
export GOOS GOARCH
# CGO_ENABLED=0 creates static binaries without glibc dependency
export CGO_ENABLED=0 GOOS GOARCH
if go build -ldflags "$LDFLAGS" -o "${BIN_DIR}/${binary_name}" . 2>/dev/null; then
# Get file size
if [[ "$OSTYPE" == "darwin"* ]]; then

View File

@@ -252,8 +252,8 @@ func runCatalogSync(cmd *cobra.Command, args []string) error {
}
defer cat.Close()
fmt.Printf("📁 Syncing backups from: %s\n", absDir)
fmt.Printf("📊 Catalog database: %s\n\n", catalogDBPath)
fmt.Printf("[DIR] Syncing backups from: %s\n", absDir)
fmt.Printf("[STATS] Catalog database: %s\n\n", catalogDBPath)
ctx := context.Background()
result, err := cat.SyncFromDirectory(ctx, absDir)
@@ -265,17 +265,17 @@ func runCatalogSync(cmd *cobra.Command, args []string) error {
cat.SetLastSync(ctx)
// Show results
fmt.Printf("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n")
fmt.Printf("=====================================================\n")
fmt.Printf(" Sync Results\n")
fmt.Printf("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n")
fmt.Printf(" Added: %d\n", result.Added)
fmt.Printf(" 🔄 Updated: %d\n", result.Updated)
fmt.Printf(" 🗑️ Removed: %d\n", result.Removed)
fmt.Printf("=====================================================\n")
fmt.Printf(" [OK] Added: %d\n", result.Added)
fmt.Printf(" [SYNC] Updated: %d\n", result.Updated)
fmt.Printf(" [DEL] Removed: %d\n", result.Removed)
if result.Errors > 0 {
fmt.Printf(" Errors: %d\n", result.Errors)
fmt.Printf(" [FAIL] Errors: %d\n", result.Errors)
}
fmt.Printf(" ⏱️ Duration: %.2fs\n", result.Duration)
fmt.Printf("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n")
fmt.Printf(" [TIME] Duration: %.2fs\n", result.Duration)
fmt.Printf("=====================================================\n")
// Show details if verbose
if catalogVerbose && len(result.Details) > 0 {
@@ -323,7 +323,7 @@ func runCatalogList(cmd *cobra.Command, args []string) error {
// Table format
fmt.Printf("%-30s %-12s %-10s %-20s %-10s %s\n",
"DATABASE", "TYPE", "SIZE", "CREATED", "STATUS", "PATH")
fmt.Println(strings.Repeat("", 120))
fmt.Println(strings.Repeat("-", 120))
for _, entry := range entries {
dbName := truncateString(entry.Database, 28)
@@ -331,10 +331,10 @@ func runCatalogList(cmd *cobra.Command, args []string) error {
status := string(entry.Status)
if entry.VerifyValid != nil && *entry.VerifyValid {
status = " verified"
status = "[OK] verified"
}
if entry.DrillSuccess != nil && *entry.DrillSuccess {
status = " tested"
status = "[OK] tested"
}
fmt.Printf("%-30s %-12s %-10s %-20s %-10s %s\n",
@@ -377,20 +377,20 @@ func runCatalogStats(cmd *cobra.Command, args []string) error {
}
// Table format
fmt.Printf("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n")
fmt.Printf("=====================================================\n")
if catalogDatabase != "" {
fmt.Printf(" Catalog Statistics: %s\n", catalogDatabase)
} else {
fmt.Printf(" Catalog Statistics\n")
}
fmt.Printf("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n\n")
fmt.Printf("=====================================================\n\n")
fmt.Printf("📊 Total Backups: %d\n", stats.TotalBackups)
fmt.Printf("💾 Total Size: %s\n", stats.TotalSizeHuman)
fmt.Printf("📏 Average Size: %s\n", catalog.FormatSize(stats.AvgSize))
fmt.Printf("⏱️ Average Duration: %.1fs\n", stats.AvgDuration)
fmt.Printf(" Verified: %d\n", stats.VerifiedCount)
fmt.Printf("🧪 Drill Tested: %d\n", stats.DrillTestedCount)
fmt.Printf("[STATS] Total Backups: %d\n", stats.TotalBackups)
fmt.Printf("[SAVE] Total Size: %s\n", stats.TotalSizeHuman)
fmt.Printf("[SIZE] Average Size: %s\n", catalog.FormatSize(stats.AvgSize))
fmt.Printf("[TIME] Average Duration: %.1fs\n", stats.AvgDuration)
fmt.Printf("[OK] Verified: %d\n", stats.VerifiedCount)
fmt.Printf("[TEST] Drill Tested: %d\n", stats.DrillTestedCount)
if stats.OldestBackup != nil {
fmt.Printf("📅 Oldest Backup: %s\n", stats.OldestBackup.Format("2006-01-02 15:04"))
@@ -400,27 +400,27 @@ func runCatalogStats(cmd *cobra.Command, args []string) error {
}
if len(stats.ByDatabase) > 0 && catalogDatabase == "" {
fmt.Printf("\n📁 By Database:\n")
fmt.Printf("\n[DIR] By Database:\n")
for db, count := range stats.ByDatabase {
fmt.Printf(" %-30s %d\n", db, count)
}
}
if len(stats.ByType) > 0 {
fmt.Printf("\n📦 By Type:\n")
fmt.Printf("\n[PKG] By Type:\n")
for t, count := range stats.ByType {
fmt.Printf(" %-15s %d\n", t, count)
}
}
if len(stats.ByStatus) > 0 {
fmt.Printf("\n📋 By Status:\n")
fmt.Printf("\n[LOG] By Status:\n")
for s, count := range stats.ByStatus {
fmt.Printf(" %-15s %d\n", s, count)
}
}
fmt.Printf("\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n")
fmt.Printf("\n=====================================================\n")
return nil
}
@@ -488,26 +488,26 @@ func runCatalogGaps(cmd *cobra.Command, args []string) error {
}
if len(allGaps) == 0 {
fmt.Printf(" No backup gaps detected (expected interval: %s)\n", interval)
fmt.Printf("[OK] No backup gaps detected (expected interval: %s)\n", interval)
return nil
}
fmt.Printf("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n")
fmt.Printf("=====================================================\n")
fmt.Printf(" Backup Gaps Detected (expected interval: %s)\n", interval)
fmt.Printf("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n\n")
fmt.Printf("=====================================================\n\n")
totalGaps := 0
criticalGaps := 0
for database, gaps := range allGaps {
fmt.Printf("📁 %s (%d gaps)\n", database, len(gaps))
fmt.Printf("[DIR] %s (%d gaps)\n", database, len(gaps))
for _, gap := range gaps {
totalGaps++
icon := ""
icon := "[INFO]"
switch gap.Severity {
case catalog.SeverityWarning:
icon = "⚠️"
icon = "[WARN]"
case catalog.SeverityCritical:
icon = "🚨"
criticalGaps++
@@ -523,7 +523,7 @@ func runCatalogGaps(cmd *cobra.Command, args []string) error {
fmt.Println()
}
fmt.Printf("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n")
fmt.Printf("=====================================================\n")
fmt.Printf("Total: %d gaps detected", totalGaps)
if criticalGaps > 0 {
fmt.Printf(" (%d critical)", criticalGaps)
@@ -598,20 +598,20 @@ func runCatalogSearch(cmd *cobra.Command, args []string) error {
fmt.Printf("Found %d matching backups:\n\n", len(entries))
for _, entry := range entries {
fmt.Printf("📁 %s\n", entry.Database)
fmt.Printf("[DIR] %s\n", entry.Database)
fmt.Printf(" Path: %s\n", entry.BackupPath)
fmt.Printf(" Type: %s | Size: %s | Created: %s\n",
entry.DatabaseType,
catalog.FormatSize(entry.SizeBytes),
entry.CreatedAt.Format("2006-01-02 15:04:05"))
if entry.Encrypted {
fmt.Printf(" 🔒 Encrypted\n")
fmt.Printf(" [LOCK] Encrypted\n")
}
if entry.VerifyValid != nil && *entry.VerifyValid {
fmt.Printf(" Verified: %s\n", entry.VerifiedAt.Format("2006-01-02 15:04"))
fmt.Printf(" [OK] Verified: %s\n", entry.VerifiedAt.Format("2006-01-02 15:04"))
}
if entry.DrillSuccess != nil && *entry.DrillSuccess {
fmt.Printf(" 🧪 Drill Tested: %s\n", entry.DrillTestedAt.Format("2006-01-02 15:04"))
fmt.Printf(" [TEST] Drill Tested: %s\n", entry.DrillTestedAt.Format("2006-01-02 15:04"))
}
fmt.Println()
}
@@ -655,64 +655,64 @@ func runCatalogInfo(cmd *cobra.Command, args []string) error {
return nil
}
fmt.Printf("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n")
fmt.Printf("=====================================================\n")
fmt.Printf(" Backup Details\n")
fmt.Printf("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n\n")
fmt.Printf("=====================================================\n\n")
fmt.Printf("📁 Database: %s\n", entry.Database)
fmt.Printf("[DIR] Database: %s\n", entry.Database)
fmt.Printf("🔧 Type: %s\n", entry.DatabaseType)
fmt.Printf("🖥️ Host: %s:%d\n", entry.Host, entry.Port)
fmt.Printf("[HOST] Host: %s:%d\n", entry.Host, entry.Port)
fmt.Printf("📂 Path: %s\n", entry.BackupPath)
fmt.Printf("📦 Backup Type: %s\n", entry.BackupType)
fmt.Printf("💾 Size: %s (%d bytes)\n", catalog.FormatSize(entry.SizeBytes), entry.SizeBytes)
fmt.Printf("🔐 SHA256: %s\n", entry.SHA256)
fmt.Printf("[PKG] Backup Type: %s\n", entry.BackupType)
fmt.Printf("[SAVE] Size: %s (%d bytes)\n", catalog.FormatSize(entry.SizeBytes), entry.SizeBytes)
fmt.Printf("[HASH] SHA256: %s\n", entry.SHA256)
fmt.Printf("📅 Created: %s\n", entry.CreatedAt.Format("2006-01-02 15:04:05 MST"))
fmt.Printf("⏱️ Duration: %.2fs\n", entry.Duration)
fmt.Printf("📋 Status: %s\n", entry.Status)
fmt.Printf("[TIME] Duration: %.2fs\n", entry.Duration)
fmt.Printf("[LOG] Status: %s\n", entry.Status)
if entry.Compression != "" {
fmt.Printf("📦 Compression: %s\n", entry.Compression)
fmt.Printf("[PKG] Compression: %s\n", entry.Compression)
}
if entry.Encrypted {
fmt.Printf("🔒 Encrypted: yes\n")
fmt.Printf("[LOCK] Encrypted: yes\n")
}
if entry.CloudLocation != "" {
fmt.Printf("☁️ Cloud: %s\n", entry.CloudLocation)
fmt.Printf("[CLOUD] Cloud: %s\n", entry.CloudLocation)
}
if entry.RetentionPolicy != "" {
fmt.Printf("📆 Retention: %s\n", entry.RetentionPolicy)
}
fmt.Printf("\n📊 Verification:\n")
fmt.Printf("\n[STATS] Verification:\n")
if entry.VerifiedAt != nil {
status := " Failed"
status := "[FAIL] Failed"
if entry.VerifyValid != nil && *entry.VerifyValid {
status = " Valid"
status = "[OK] Valid"
}
fmt.Printf(" Status: %s (checked %s)\n", status, entry.VerifiedAt.Format("2006-01-02 15:04"))
} else {
fmt.Printf(" Status: Not verified\n")
fmt.Printf(" Status: [WAIT] Not verified\n")
}
fmt.Printf("\n🧪 DR Drill Test:\n")
fmt.Printf("\n[TEST] DR Drill Test:\n")
if entry.DrillTestedAt != nil {
status := " Failed"
status := "[FAIL] Failed"
if entry.DrillSuccess != nil && *entry.DrillSuccess {
status = " Passed"
status = "[OK] Passed"
}
fmt.Printf(" Status: %s (tested %s)\n", status, entry.DrillTestedAt.Format("2006-01-02 15:04"))
} else {
fmt.Printf(" Status: Not tested\n")
fmt.Printf(" Status: [WAIT] Not tested\n")
}
if len(entry.Metadata) > 0 {
fmt.Printf("\n📝 Additional Metadata:\n")
fmt.Printf("\n[NOTE] Additional Metadata:\n")
for k, v := range entry.Metadata {
fmt.Printf(" %s: %s\n", k, v)
}
}
fmt.Printf("\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n")
fmt.Printf("\n=====================================================\n")
return nil
}

View File

@@ -115,7 +115,7 @@ func runCleanup(cmd *cobra.Command, args []string) error {
DryRun: dryRun,
}
fmt.Printf("🗑️ Cleanup Policy:\n")
fmt.Printf("[CLEANUP] Cleanup Policy:\n")
fmt.Printf(" Directory: %s\n", backupDir)
fmt.Printf(" Retention: %d days\n", policy.RetentionDays)
fmt.Printf(" Min backups: %d\n", policy.MinBackups)
@@ -142,16 +142,16 @@ func runCleanup(cmd *cobra.Command, args []string) error {
}
// Display results
fmt.Printf("📊 Results:\n")
fmt.Printf("[RESULTS] Results:\n")
fmt.Printf(" Total backups: %d\n", result.TotalBackups)
fmt.Printf(" Eligible for deletion: %d\n", result.EligibleForDeletion)
if len(result.Deleted) > 0 {
fmt.Printf("\n")
if dryRun {
fmt.Printf("🔍 Would delete %d backup(s):\n", len(result.Deleted))
fmt.Printf("[DRY-RUN] Would delete %d backup(s):\n", len(result.Deleted))
} else {
fmt.Printf(" Deleted %d backup(s):\n", len(result.Deleted))
fmt.Printf("[OK] Deleted %d backup(s):\n", len(result.Deleted))
}
for _, file := range result.Deleted {
fmt.Printf(" - %s\n", filepath.Base(file))
@@ -159,33 +159,33 @@ func runCleanup(cmd *cobra.Command, args []string) error {
}
if len(result.Kept) > 0 && len(result.Kept) <= 10 {
fmt.Printf("\n📦 Kept %d backup(s):\n", len(result.Kept))
fmt.Printf("\n[KEPT] Kept %d backup(s):\n", len(result.Kept))
for _, file := range result.Kept {
fmt.Printf(" - %s\n", filepath.Base(file))
}
} else if len(result.Kept) > 10 {
fmt.Printf("\n📦 Kept %d backup(s)\n", len(result.Kept))
fmt.Printf("\n[KEPT] Kept %d backup(s)\n", len(result.Kept))
}
if !dryRun && result.SpaceFreed > 0 {
fmt.Printf("\n💾 Space freed: %s\n", metadata.FormatSize(result.SpaceFreed))
fmt.Printf("\n[FREED] Space freed: %s\n", metadata.FormatSize(result.SpaceFreed))
}
if len(result.Errors) > 0 {
fmt.Printf("\n⚠️ Errors:\n")
fmt.Printf("\n[WARN] Errors:\n")
for _, err := range result.Errors {
fmt.Printf(" - %v\n", err)
}
}
fmt.Println(strings.Repeat("", 50))
fmt.Println(strings.Repeat("-", 50))
if dryRun {
fmt.Println(" Dry run completed (no files were deleted)")
fmt.Println("[OK] Dry run completed (no files were deleted)")
} else if len(result.Deleted) > 0 {
fmt.Println(" Cleanup completed successfully")
fmt.Println("[OK] Cleanup completed successfully")
} else {
fmt.Println(" No backups eligible for deletion")
fmt.Println("[INFO] No backups eligible for deletion")
}
return nil
@@ -212,7 +212,7 @@ func runCloudCleanup(ctx context.Context, uri string) error {
return fmt.Errorf("invalid cloud URI: %w", err)
}
fmt.Printf("☁️ Cloud Cleanup Policy:\n")
fmt.Printf("[CLOUD] Cloud Cleanup Policy:\n")
fmt.Printf(" URI: %s\n", uri)
fmt.Printf(" Provider: %s\n", cloudURI.Provider)
fmt.Printf(" Bucket: %s\n", cloudURI.Bucket)
@@ -295,7 +295,7 @@ func runCloudCleanup(ctx context.Context, uri string) error {
}
// Display results
fmt.Printf("📊 Results:\n")
fmt.Printf("[RESULTS] Results:\n")
fmt.Printf(" Total backups: %d\n", totalBackups)
fmt.Printf(" Eligible for deletion: %d\n", len(toDelete))
fmt.Printf(" Will keep: %d\n", len(toKeep))
@@ -303,9 +303,9 @@ func runCloudCleanup(ctx context.Context, uri string) error {
if len(toDelete) > 0 {
if dryRun {
fmt.Printf("🔍 Would delete %d backup(s):\n", len(toDelete))
fmt.Printf("[DRY-RUN] Would delete %d backup(s):\n", len(toDelete))
} else {
fmt.Printf("🗑️ Deleting %d backup(s):\n", len(toDelete))
fmt.Printf("[DELETE] Deleting %d backup(s):\n", len(toDelete))
}
var totalSize int64
@@ -321,7 +321,7 @@ func runCloudCleanup(ctx context.Context, uri string) error {
if !dryRun {
if err := backend.Delete(ctx, backup.Key); err != nil {
fmt.Printf(" Error: %v\n", err)
fmt.Printf(" [FAIL] Error: %v\n", err)
} else {
deletedCount++
// Also try to delete metadata
@@ -330,12 +330,12 @@ func runCloudCleanup(ctx context.Context, uri string) error {
}
}
fmt.Printf("\n💾 Space %s: %s\n",
fmt.Printf("\n[FREED] Space %s: %s\n",
map[bool]string{true: "would be freed", false: "freed"}[dryRun],
cloud.FormatSize(totalSize))
if !dryRun && deletedCount > 0 {
fmt.Printf(" Successfully deleted %d backup(s)\n", deletedCount)
fmt.Printf("[OK] Successfully deleted %d backup(s)\n", deletedCount)
}
} else {
fmt.Println("No backups eligible for deletion")
@@ -405,7 +405,7 @@ func runGFSCleanup(backupDir string) error {
}
// Display tier breakdown
fmt.Printf("📊 Backup Classification:\n")
fmt.Printf("[STATS] Backup Classification:\n")
fmt.Printf(" Yearly: %d\n", result.YearlyKept)
fmt.Printf(" Monthly: %d\n", result.MonthlyKept)
fmt.Printf(" Weekly: %d\n", result.WeeklyKept)
@@ -416,9 +416,9 @@ func runGFSCleanup(backupDir string) error {
// Display deletions
if len(result.Deleted) > 0 {
if dryRun {
fmt.Printf("🔍 Would delete %d backup(s):\n", len(result.Deleted))
fmt.Printf("[SEARCH] Would delete %d backup(s):\n", len(result.Deleted))
} else {
fmt.Printf(" Deleted %d backup(s):\n", len(result.Deleted))
fmt.Printf("[OK] Deleted %d backup(s):\n", len(result.Deleted))
}
for _, file := range result.Deleted {
fmt.Printf(" - %s\n", filepath.Base(file))
@@ -427,7 +427,7 @@ func runGFSCleanup(backupDir string) error {
// Display kept backups (limited display)
if len(result.Kept) > 0 && len(result.Kept) <= 15 {
fmt.Printf("\n📦 Kept %d backup(s):\n", len(result.Kept))
fmt.Printf("\n[PKG] Kept %d backup(s):\n", len(result.Kept))
for _, file := range result.Kept {
// Show tier classification
info, _ := os.Stat(file)
@@ -440,28 +440,28 @@ func runGFSCleanup(backupDir string) error {
}
}
} else if len(result.Kept) > 15 {
fmt.Printf("\n📦 Kept %d backup(s)\n", len(result.Kept))
fmt.Printf("\n[PKG] Kept %d backup(s)\n", len(result.Kept))
}
if !dryRun && result.SpaceFreed > 0 {
fmt.Printf("\n💾 Space freed: %s\n", metadata.FormatSize(result.SpaceFreed))
fmt.Printf("\n[SAVE] Space freed: %s\n", metadata.FormatSize(result.SpaceFreed))
}
if len(result.Errors) > 0 {
fmt.Printf("\n⚠️ Errors:\n")
fmt.Printf("\n[WARN] Errors:\n")
for _, err := range result.Errors {
fmt.Printf(" - %v\n", err)
}
}
fmt.Println(strings.Repeat("", 50))
fmt.Println(strings.Repeat("-", 50))
if dryRun {
fmt.Println(" GFS dry run completed (no files were deleted)")
fmt.Println("[OK] GFS dry run completed (no files were deleted)")
} else if len(result.Deleted) > 0 {
fmt.Println(" GFS cleanup completed successfully")
fmt.Println("[OK] GFS cleanup completed successfully")
} else {
fmt.Println(" No backups eligible for deletion under GFS policy")
fmt.Println("[INFO] No backups eligible for deletion under GFS policy")
}
return nil

View File

@@ -189,12 +189,12 @@ func runCloudUpload(cmd *cobra.Command, args []string) error {
}
}
fmt.Printf("☁️ Uploading %d file(s) to %s...\n\n", len(files), backend.Name())
fmt.Printf("[CLOUD] Uploading %d file(s) to %s...\n\n", len(files), backend.Name())
successCount := 0
for _, localPath := range files {
filename := filepath.Base(localPath)
fmt.Printf("📤 %s\n", filename)
fmt.Printf("[UPLOAD] %s\n", filename)
// Progress callback
var lastPercent int
@@ -214,21 +214,21 @@ func runCloudUpload(cmd *cobra.Command, args []string) error {
err := backend.Upload(ctx, localPath, filename, progress)
if err != nil {
fmt.Printf(" Failed: %v\n\n", err)
fmt.Printf(" [FAIL] Failed: %v\n\n", err)
continue
}
// Get file size
if info, err := os.Stat(localPath); err == nil {
fmt.Printf(" Uploaded (%s)\n\n", cloud.FormatSize(info.Size()))
fmt.Printf(" [OK] Uploaded (%s)\n\n", cloud.FormatSize(info.Size()))
} else {
fmt.Printf(" Uploaded\n\n")
fmt.Printf(" [OK] Uploaded\n\n")
}
successCount++
}
fmt.Println(strings.Repeat("", 50))
fmt.Printf(" Successfully uploaded %d/%d file(s)\n", successCount, len(files))
fmt.Println(strings.Repeat("-", 50))
fmt.Printf("[OK] Successfully uploaded %d/%d file(s)\n", successCount, len(files))
return nil
}
@@ -248,8 +248,8 @@ func runCloudDownload(cmd *cobra.Command, args []string) error {
localPath = filepath.Join(localPath, filepath.Base(remotePath))
}
fmt.Printf("☁️ Downloading from %s...\n\n", backend.Name())
fmt.Printf("📥 %s %s\n", remotePath, localPath)
fmt.Printf("[CLOUD] Downloading from %s...\n\n", backend.Name())
fmt.Printf("[DOWNLOAD] %s -> %s\n", remotePath, localPath)
// Progress callback
var lastPercent int
@@ -274,9 +274,9 @@ func runCloudDownload(cmd *cobra.Command, args []string) error {
// Get file size
if info, err := os.Stat(localPath); err == nil {
fmt.Printf(" Downloaded (%s)\n", cloud.FormatSize(info.Size()))
fmt.Printf(" [OK] Downloaded (%s)\n", cloud.FormatSize(info.Size()))
} else {
fmt.Printf(" Downloaded\n")
fmt.Printf(" [OK] Downloaded\n")
}
return nil
@@ -294,7 +294,7 @@ func runCloudList(cmd *cobra.Command, args []string) error {
prefix = args[0]
}
fmt.Printf("☁️ Listing backups in %s/%s...\n\n", backend.Name(), cloudBucket)
fmt.Printf("[CLOUD] Listing backups in %s/%s...\n\n", backend.Name(), cloudBucket)
backups, err := backend.List(ctx, prefix)
if err != nil {
@@ -311,7 +311,7 @@ func runCloudList(cmd *cobra.Command, args []string) error {
totalSize += backup.Size
if cloudVerbose {
fmt.Printf("📦 %s\n", backup.Name)
fmt.Printf("[FILE] %s\n", backup.Name)
fmt.Printf(" Size: %s\n", cloud.FormatSize(backup.Size))
fmt.Printf(" Modified: %s\n", backup.LastModified.Format(time.RFC3339))
if backup.StorageClass != "" {
@@ -328,7 +328,7 @@ func runCloudList(cmd *cobra.Command, args []string) error {
}
}
fmt.Println(strings.Repeat("", 50))
fmt.Println(strings.Repeat("-", 50))
fmt.Printf("Total: %d backup(s), %s\n", len(backups), cloud.FormatSize(totalSize))
return nil
@@ -360,7 +360,7 @@ func runCloudDelete(cmd *cobra.Command, args []string) error {
// Confirmation prompt
if !cloudConfirm {
fmt.Printf("⚠️ Delete %s (%s) from cloud storage?\n", remotePath, cloud.FormatSize(size))
fmt.Printf("[WARN] Delete %s (%s) from cloud storage?\n", remotePath, cloud.FormatSize(size))
fmt.Print("Type 'yes' to confirm: ")
var response string
fmt.Scanln(&response)
@@ -370,14 +370,14 @@ func runCloudDelete(cmd *cobra.Command, args []string) error {
}
}
fmt.Printf("🗑️ Deleting %s...\n", remotePath)
fmt.Printf("[DELETE] Deleting %s...\n", remotePath)
err = backend.Delete(ctx, remotePath)
if err != nil {
return fmt.Errorf("delete failed: %w", err)
}
fmt.Printf(" Deleted %s (%s)\n", remotePath, cloud.FormatSize(size))
fmt.Printf("[OK] Deleted %s (%s)\n", remotePath, cloud.FormatSize(size))
return nil
}

View File

@@ -61,10 +61,10 @@ func runCPUInfo(ctx context.Context) error {
// Show current vs optimal
if cfg.AutoDetectCores {
fmt.Println("\n CPU optimization is enabled")
fmt.Println("\n[OK] CPU optimization is enabled")
fmt.Println("Job counts are automatically optimized based on detected hardware")
} else {
fmt.Println("\n⚠️ CPU optimization is disabled")
fmt.Println("\n[WARN] CPU optimization is disabled")
fmt.Println("Consider enabling --auto-detect-cores for better performance")
}

cmd/dedup.go Normal file

@@ -0,0 +1,1284 @@
package cmd
import (
"compress/gzip"
"crypto/sha256"
"encoding/hex"
"fmt"
"io"
"os"
"os/exec"
"path/filepath"
"strings"
"time"
"dbbackup/internal/dedup"
"github.com/spf13/cobra"
)
var dedupCmd = &cobra.Command{
Use: "dedup",
Short: "Deduplicated backup operations",
Long: `Content-defined chunking deduplication for space-efficient backups.
Similar to restic/borgbackup but with native database dump support.
Features:
- Content-defined chunking (CDC) with Buzhash rolling hash
- SHA-256 content-addressed storage
- AES-256-GCM encryption (optional)
- Gzip compression (optional)
- SQLite index for fast lookups
Storage Structure:
<dedup-dir>/
chunks/ # Content-addressed chunk files
ab/cdef... # Sharded by first 2 chars of hash
manifests/ # JSON manifest per backup
chunks.db # SQLite index
NFS/CIFS NOTICE:
SQLite may have locking issues on network storage.
Use --index-db to put the SQLite index on local storage while keeping
chunks on network storage:
dbbackup dedup backup mydb.sql \
--dedup-dir /mnt/nfs/backups/dedup \
--index-db /var/lib/dbbackup/dedup-index.db
This avoids "database is locked" errors while still storing chunks remotely.
COMPRESSED INPUT NOTICE:
Pre-compressed files (.gz) have poor deduplication ratios (<10%).
Use --decompress-input to decompress before chunking for better results:
dbbackup dedup backup mydb.sql.gz --decompress-input`,
}
var dedupBackupCmd = &cobra.Command{
Use: "backup <file>",
Short: "Create a deduplicated backup of a file",
Long: `Chunk a file using content-defined chunking and store deduplicated chunks.
Example:
dbbackup dedup backup /path/to/database.dump
dbbackup dedup backup mydb.sql --compress --encrypt`,
Args: cobra.ExactArgs(1),
RunE: runDedupBackup,
}
var dedupRestoreCmd = &cobra.Command{
Use: "restore <manifest-id> <output-file>",
Short: "Restore a backup from its manifest",
Long: `Reconstruct a file from its deduplicated chunks.
Example:
dbbackup dedup restore 2026-01-07_120000_mydb /tmp/restored.dump
dbbackup dedup list # to see available manifests`,
Args: cobra.ExactArgs(2),
RunE: runDedupRestore,
}
var dedupListCmd = &cobra.Command{
Use: "list",
Short: "List all deduplicated backups",
RunE: runDedupList,
}
var dedupStatsCmd = &cobra.Command{
Use: "stats",
Short: "Show deduplication statistics",
RunE: runDedupStats,
}
var dedupGCCmd = &cobra.Command{
Use: "gc",
Short: "Garbage collect unreferenced chunks",
Long: `Remove chunks that are no longer referenced by any manifest.
Run after deleting old backups to reclaim space.`,
RunE: runDedupGC,
}
var dedupDeleteCmd = &cobra.Command{
Use: "delete <manifest-id>",
Short: "Delete a backup manifest (chunks cleaned by gc)",
Args: cobra.ExactArgs(1),
RunE: runDedupDelete,
}
var dedupVerifyCmd = &cobra.Command{
Use: "verify [manifest-id]",
Short: "Verify chunk integrity against manifests",
Long: `Verify that all chunks referenced by manifests exist and have correct hashes.
Without arguments, verifies all backups. With a manifest ID, verifies only that backup.
Examples:
dbbackup dedup verify # Verify all backups
dbbackup dedup verify 2026-01-07_mydb # Verify specific backup`,
RunE: runDedupVerify,
}
var dedupPruneCmd = &cobra.Command{
Use: "prune",
Short: "Apply retention policy to manifests",
Long: `Delete old manifests based on retention policy (like borg prune).
Keeps a specified number of recent backups per database and deletes the rest.
Examples:
dbbackup dedup prune --keep-last 7 # Keep 7 most recent
dbbackup dedup prune --keep-daily 7 --keep-weekly 4 # Keep 7 daily + 4 weekly`,
RunE: runDedupPrune,
}
var dedupBackupDBCmd = &cobra.Command{
Use: "backup-db",
Short: "Direct database dump with deduplication",
Long: `Dump a database directly into deduplicated chunks without temp files.
Streams the database dump through the chunker for efficient deduplication.
Examples:
dbbackup dedup backup-db --db-type postgres --db-name mydb
dbbackup dedup backup-db -d mariadb --database production_db --host db.local`,
RunE: runDedupBackupDB,
}
// Prune flags
var (
pruneKeepLast int
pruneKeepDaily int
pruneKeepWeekly int
pruneDryRun bool
)
// backup-db flags
var (
backupDBDatabase string
backupDBUser string
backupDBPassword string
)
// metrics flags
var (
dedupMetricsOutput string
dedupMetricsInstance string
)
var dedupMetricsCmd = &cobra.Command{
Use: "metrics",
Short: "Export dedup statistics as Prometheus metrics",
Long: `Export deduplication statistics in Prometheus format.
Can write to a textfile for node_exporter's textfile collector,
or print to stdout for custom integrations.
Examples:
dbbackup dedup metrics # Print to stdout
dbbackup dedup metrics --output /var/lib/node_exporter/textfile_collector/dedup.prom
dbbackup dedup metrics --instance prod-db-1`,
RunE: runDedupMetrics,
}
// Flags
var (
dedupDir string
dedupIndexDB string // Separate path for SQLite index (for NFS/CIFS support)
dedupCompress bool
dedupEncrypt bool
dedupKey string
dedupName string
dedupDBType string
dedupDBName string
dedupDBHost string
dedupDecompress bool // Auto-decompress gzip input
)
func init() {
rootCmd.AddCommand(dedupCmd)
dedupCmd.AddCommand(dedupBackupCmd)
dedupCmd.AddCommand(dedupRestoreCmd)
dedupCmd.AddCommand(dedupListCmd)
dedupCmd.AddCommand(dedupStatsCmd)
dedupCmd.AddCommand(dedupGCCmd)
dedupCmd.AddCommand(dedupDeleteCmd)
dedupCmd.AddCommand(dedupVerifyCmd)
dedupCmd.AddCommand(dedupPruneCmd)
dedupCmd.AddCommand(dedupBackupDBCmd)
dedupCmd.AddCommand(dedupMetricsCmd)
// Global dedup flags
dedupCmd.PersistentFlags().StringVar(&dedupDir, "dedup-dir", "", "Dedup storage directory (default: $BACKUP_DIR/dedup)")
dedupCmd.PersistentFlags().StringVar(&dedupIndexDB, "index-db", "", "SQLite index path (local recommended for NFS/CIFS chunk dirs)")
dedupCmd.PersistentFlags().BoolVar(&dedupCompress, "compress", true, "Compress chunks with gzip")
dedupCmd.PersistentFlags().BoolVar(&dedupEncrypt, "encrypt", false, "Encrypt chunks with AES-256-GCM")
dedupCmd.PersistentFlags().StringVar(&dedupKey, "key", "", "Encryption key (hex) or use DBBACKUP_DEDUP_KEY env")
// Backup-specific flags
dedupBackupCmd.Flags().StringVar(&dedupName, "name", "", "Optional backup name")
dedupBackupCmd.Flags().StringVar(&dedupDBType, "db-type", "", "Database type (postgres/mysql)")
dedupBackupCmd.Flags().StringVar(&dedupDBName, "db-name", "", "Database name")
dedupBackupCmd.Flags().StringVar(&dedupDBHost, "db-host", "", "Database host")
dedupBackupCmd.Flags().BoolVar(&dedupDecompress, "decompress-input", false, "Auto-decompress gzip input before chunking (improves dedup ratio)")
// Prune flags
dedupPruneCmd.Flags().IntVar(&pruneKeepLast, "keep-last", 0, "Keep the last N backups")
dedupPruneCmd.Flags().IntVar(&pruneKeepDaily, "keep-daily", 0, "Keep N daily backups")
dedupPruneCmd.Flags().IntVar(&pruneKeepWeekly, "keep-weekly", 0, "Keep N weekly backups")
dedupPruneCmd.Flags().BoolVar(&pruneDryRun, "dry-run", false, "Show what would be deleted without actually deleting")
// backup-db flags
dedupBackupDBCmd.Flags().StringVarP(&dedupDBType, "db-type", "d", "", "Database type (postgres/mariadb/mysql)")
dedupBackupDBCmd.Flags().StringVar(&backupDBDatabase, "database", "", "Database name to backup")
dedupBackupDBCmd.Flags().StringVar(&dedupDBHost, "host", "localhost", "Database host")
dedupBackupDBCmd.Flags().StringVarP(&backupDBUser, "user", "u", "", "Database user")
dedupBackupDBCmd.Flags().StringVarP(&backupDBPassword, "password", "p", "", "Database password (or use env)")
dedupBackupDBCmd.MarkFlagRequired("db-type")
dedupBackupDBCmd.MarkFlagRequired("database")
// Metrics flags
dedupMetricsCmd.Flags().StringVarP(&dedupMetricsOutput, "output", "o", "", "Output file path (default: stdout)")
dedupMetricsCmd.Flags().StringVar(&dedupMetricsInstance, "instance", "", "Instance label for metrics (default: hostname)")
}
func getDedupDir() string {
if dedupDir != "" {
return dedupDir
}
if cfg != nil && cfg.BackupDir != "" {
return filepath.Join(cfg.BackupDir, "dedup")
}
return filepath.Join(os.Getenv("HOME"), "db_backups", "dedup")
}
func getIndexDBPath() string {
if dedupIndexDB != "" {
return dedupIndexDB
}
// Default: same directory as chunks (may have issues on NFS/CIFS)
return filepath.Join(getDedupDir(), "chunks.db")
}
func getEncryptionKey() string {
if dedupKey != "" {
return dedupKey
}
return os.Getenv("DBBACKUP_DEDUP_KEY")
}
func runDedupBackup(cmd *cobra.Command, args []string) error {
inputPath := args[0]
// Open input file
file, err := os.Open(inputPath)
if err != nil {
return fmt.Errorf("failed to open input file: %w", err)
}
defer file.Close()
info, err := file.Stat()
if err != nil {
return fmt.Errorf("failed to stat input file: %w", err)
}
// Check for compressed input and warn/handle
var reader io.Reader = file
isGzipped := strings.HasSuffix(strings.ToLower(inputPath), ".gz")
if isGzipped && !dedupDecompress {
fmt.Printf("Warning: Input appears to be gzip compressed (.gz)\n")
fmt.Printf(" Compressed data typically has poor dedup ratios (<10%%).\n")
fmt.Printf(" Consider using --decompress-input for better deduplication.\n\n")
}
if isGzipped && dedupDecompress {
fmt.Printf("Auto-decompressing gzip input for better dedup ratio...\n")
gzReader, err := gzip.NewReader(file)
if err != nil {
return fmt.Errorf("failed to decompress gzip input: %w", err)
}
defer gzReader.Close()
reader = gzReader
}
// Setup dedup storage
basePath := getDedupDir()
encKey := ""
if dedupEncrypt {
encKey = getEncryptionKey()
if encKey == "" {
return fmt.Errorf("encryption enabled but no key provided (use --key or DBBACKUP_DEDUP_KEY)")
}
}
store, err := dedup.NewChunkStore(dedup.StoreConfig{
BasePath: basePath,
Compress: dedupCompress,
EncryptionKey: encKey,
})
if err != nil {
return fmt.Errorf("failed to open chunk store: %w", err)
}
manifestStore, err := dedup.NewManifestStore(basePath)
if err != nil {
return fmt.Errorf("failed to open manifest store: %w", err)
}
index, err := dedup.NewChunkIndexAt(getIndexDBPath())
if err != nil {
return fmt.Errorf("failed to open chunk index: %w", err)
}
defer index.Close()
// Generate manifest ID
now := time.Now()
manifestID := now.Format("2006-01-02_150405")
if dedupDBName != "" {
manifestID += "_" + dedupDBName
} else {
base := filepath.Base(inputPath)
ext := filepath.Ext(base)
// Remove .gz extension if decompressing
if isGzipped && dedupDecompress {
base = strings.TrimSuffix(base, ext)
ext = filepath.Ext(base)
}
manifestID += "_" + strings.TrimSuffix(base, ext)
}
fmt.Printf("Creating deduplicated backup: %s\n", manifestID)
fmt.Printf("Input: %s (%s)\n", inputPath, formatBytes(info.Size()))
if isGzipped && dedupDecompress {
fmt.Printf("Mode: Decompressing before chunking\n")
}
fmt.Printf("Store: %s\n", basePath)
if dedupIndexDB != "" {
fmt.Printf("Index: %s\n", getIndexDBPath())
}
// Hash inline while chunking via TeeReader; the gzip stream cannot be re-read,
// and a separate pre-hash pass over regular files would only duplicate work
h := sha256.New()
chunkReader := io.TeeReader(reader, h)
// Chunk the file
chunker := dedup.NewChunker(chunkReader, dedup.DefaultChunkerConfig())
var chunks []dedup.ChunkRef
var totalSize, storedSize int64
var chunkCount, newChunks int
startTime := time.Now()
for {
chunk, err := chunker.Next()
if err == io.EOF {
break
}
if err != nil {
return fmt.Errorf("chunking failed: %w", err)
}
chunkCount++
totalSize += int64(chunk.Length)
// Store chunk (deduplication happens here)
isNew, err := store.Put(chunk)
if err != nil {
return fmt.Errorf("failed to store chunk: %w", err)
}
if isNew {
newChunks++
storedSize += int64(chunk.Length)
// Record in index
index.AddChunk(chunk.Hash, chunk.Length, chunk.Length)
}
chunks = append(chunks, dedup.ChunkRef{
Hash: chunk.Hash,
Offset: chunk.Offset,
Length: chunk.Length,
})
// Progress
if chunkCount%1000 == 0 {
fmt.Printf("\r Processed %d chunks, %d new...", chunkCount, newChunks)
}
}
duration := time.Since(startTime)
// Get final hash (computed inline via TeeReader)
fileHash := hex.EncodeToString(h.Sum(nil))
// Calculate dedup ratio
dedupRatio := 0.0
if totalSize > 0 {
dedupRatio = 1.0 - float64(storedSize)/float64(totalSize)
}
// Create manifest
manifest := &dedup.Manifest{
ID: manifestID,
Name: dedupName,
CreatedAt: now,
DatabaseType: dedupDBType,
DatabaseName: dedupDBName,
DatabaseHost: dedupDBHost,
Chunks: chunks,
OriginalSize: totalSize,
StoredSize: storedSize,
ChunkCount: chunkCount,
NewChunks: newChunks,
DedupRatio: dedupRatio,
Encrypted: dedupEncrypt,
Compressed: dedupCompress,
SHA256: fileHash,
Decompressed: isGzipped && dedupDecompress, // Track if we decompressed
}
if err := manifestStore.Save(manifest); err != nil {
return fmt.Errorf("failed to save manifest: %w", err)
}
if err := index.AddManifest(manifest); err != nil {
log.Warn("Failed to index manifest", "error", err)
}
fmt.Printf("\r \r")
fmt.Printf("\nBackup complete!\n")
fmt.Printf(" Manifest: %s\n", manifestID)
fmt.Printf(" Chunks: %d total, %d new\n", chunkCount, newChunks)
fmt.Printf(" Original: %s\n", formatBytes(totalSize))
fmt.Printf(" Stored: %s (new data)\n", formatBytes(storedSize))
fmt.Printf(" Dedup ratio: %.1f%%\n", dedupRatio*100)
fmt.Printf(" Duration: %s\n", duration.Round(time.Millisecond))
fmt.Printf(" Throughput: %s/s\n", formatBytes(int64(float64(totalSize)/duration.Seconds())))
return nil
}
func runDedupRestore(cmd *cobra.Command, args []string) error {
manifestID := args[0]
outputPath := args[1]
basePath := getDedupDir()
encKey := ""
if dedupEncrypt {
encKey = getEncryptionKey()
}
store, err := dedup.NewChunkStore(dedup.StoreConfig{
BasePath: basePath,
Compress: dedupCompress,
EncryptionKey: encKey,
})
if err != nil {
return fmt.Errorf("failed to open chunk store: %w", err)
}
manifestStore, err := dedup.NewManifestStore(basePath)
if err != nil {
return fmt.Errorf("failed to open manifest store: %w", err)
}
manifest, err := manifestStore.Load(manifestID)
if err != nil {
return fmt.Errorf("failed to load manifest: %w", err)
}
fmt.Printf("Restoring backup: %s\n", manifestID)
fmt.Printf(" Created: %s\n", manifest.CreatedAt.Format(time.RFC3339))
fmt.Printf(" Size: %s\n", formatBytes(manifest.OriginalSize))
fmt.Printf(" Chunks: %d\n", manifest.ChunkCount)
// Create output file
outFile, err := os.Create(outputPath)
if err != nil {
return fmt.Errorf("failed to create output file: %w", err)
}
defer outFile.Close()
h := sha256.New()
writer := io.MultiWriter(outFile, h)
startTime := time.Now()
for i, ref := range manifest.Chunks {
chunk, err := store.Get(ref.Hash)
if err != nil {
return fmt.Errorf("failed to read chunk %d (%s): %w", i, ref.Hash[:8], err)
}
if _, err := writer.Write(chunk.Data); err != nil {
return fmt.Errorf("failed to write chunk %d: %w", i, err)
}
if (i+1)%1000 == 0 {
fmt.Printf("\r Restored %d/%d chunks...", i+1, manifest.ChunkCount)
}
}
duration := time.Since(startTime)
restoredHash := hex.EncodeToString(h.Sum(nil))
fmt.Printf("\r \r")
fmt.Printf("\nRestore complete!\n")
fmt.Printf(" Output: %s\n", outputPath)
fmt.Printf(" Duration: %s\n", duration.Round(time.Millisecond))
// Verify hash
if manifest.SHA256 != "" {
if restoredHash == manifest.SHA256 {
fmt.Printf(" Verification: [OK] SHA-256 matches\n")
} else {
fmt.Printf(" Verification: [FAIL] SHA-256 MISMATCH!\n")
fmt.Printf(" Expected: %s\n", manifest.SHA256)
fmt.Printf(" Got: %s\n", restoredHash)
return fmt.Errorf("integrity verification failed")
}
}
return nil
}
func runDedupList(cmd *cobra.Command, args []string) error {
basePath := getDedupDir()
manifestStore, err := dedup.NewManifestStore(basePath)
if err != nil {
return fmt.Errorf("failed to open manifest store: %w", err)
}
manifests, err := manifestStore.ListAll()
if err != nil {
return fmt.Errorf("failed to list manifests: %w", err)
}
if len(manifests) == 0 {
fmt.Println("No deduplicated backups found.")
fmt.Printf("Store: %s\n", basePath)
return nil
}
fmt.Printf("Deduplicated Backups (%s)\n\n", basePath)
fmt.Printf("%-30s %-12s %-10s %-10s %s\n", "ID", "SIZE", "DEDUP", "CHUNKS", "CREATED")
fmt.Println(strings.Repeat("-", 80))
for _, m := range manifests {
fmt.Printf("%-30s %-12s %-10.1f%% %-10d %s\n",
truncateStr(m.ID, 30),
formatBytes(m.OriginalSize),
m.DedupRatio*100,
m.ChunkCount,
m.CreatedAt.Format("2006-01-02 15:04"),
)
}
return nil
}
func runDedupStats(cmd *cobra.Command, args []string) error {
basePath := getDedupDir()
index, err := dedup.NewChunkIndex(basePath)
if err != nil {
return fmt.Errorf("failed to open chunk index: %w", err)
}
defer index.Close()
stats, err := index.Stats()
if err != nil {
return fmt.Errorf("failed to get stats: %w", err)
}
store, err := dedup.NewChunkStore(dedup.StoreConfig{BasePath: basePath})
if err != nil {
return fmt.Errorf("failed to open chunk store: %w", err)
}
storeStats, err := store.Stats()
if err != nil {
log.Warn("Failed to get store stats", "error", err)
}
fmt.Printf("Deduplication Statistics\n")
fmt.Printf("========================\n\n")
fmt.Printf("Store: %s\n", basePath)
fmt.Printf("Manifests: %d\n", stats.TotalManifests)
fmt.Printf("Unique chunks: %d\n", stats.TotalChunks)
fmt.Printf("Total raw size: %s\n", formatBytes(stats.TotalSizeRaw))
fmt.Printf("Stored size: %s\n", formatBytes(stats.TotalSizeStored))
fmt.Printf("\n")
fmt.Printf("Backup Statistics (accurate dedup calculation):\n")
fmt.Printf(" Total backed up: %s (across all backups)\n", formatBytes(stats.TotalBackupSize))
fmt.Printf(" New data stored: %s\n", formatBytes(stats.TotalNewData))
fmt.Printf(" Space saved: %s\n", formatBytes(stats.SpaceSaved))
fmt.Printf(" Dedup ratio: %.1f%%\n", stats.DedupRatio*100)
if storeStats != nil {
fmt.Printf("Disk usage: %s\n", formatBytes(storeStats.TotalSize))
fmt.Printf("Directories: %d\n", storeStats.Directories)
}
return nil
}
func runDedupGC(cmd *cobra.Command, args []string) error {
basePath := getDedupDir()
index, err := dedup.NewChunkIndex(basePath)
if err != nil {
return fmt.Errorf("failed to open chunk index: %w", err)
}
defer index.Close()
store, err := dedup.NewChunkStore(dedup.StoreConfig{
BasePath: basePath,
Compress: dedupCompress,
})
if err != nil {
return fmt.Errorf("failed to open chunk store: %w", err)
}
// Find orphaned chunks
orphans, err := index.ListOrphanedChunks()
if err != nil {
return fmt.Errorf("failed to find orphaned chunks: %w", err)
}
if len(orphans) == 0 {
fmt.Println("No orphaned chunks to clean up.")
return nil
}
fmt.Printf("Found %d orphaned chunks\n", len(orphans))
var freed int64
for _, hash := range orphans {
if meta, _ := index.GetChunk(hash); meta != nil {
freed += meta.SizeStored
}
if err := store.Delete(hash); err != nil {
log.Warn("Failed to delete chunk", "hash", hash[:8], "error", err)
continue
}
if err := index.RemoveChunk(hash); err != nil {
log.Warn("Failed to remove chunk from index", "hash", hash[:8], "error", err)
}
}
fmt.Printf("Deleted %d chunks, freed %s\n", len(orphans), formatBytes(freed))
// Vacuum the index
if err := index.Vacuum(); err != nil {
log.Warn("Failed to vacuum index", "error", err)
}
return nil
}
func runDedupDelete(cmd *cobra.Command, args []string) error {
manifestID := args[0]
basePath := getDedupDir()
manifestStore, err := dedup.NewManifestStore(basePath)
if err != nil {
return fmt.Errorf("failed to open manifest store: %w", err)
}
index, err := dedup.NewChunkIndex(basePath)
if err != nil {
return fmt.Errorf("failed to open chunk index: %w", err)
}
defer index.Close()
// Load manifest to decrement chunk refs
manifest, err := manifestStore.Load(manifestID)
if err != nil {
return fmt.Errorf("failed to load manifest: %w", err)
}
// Decrement reference counts
for _, ref := range manifest.Chunks {
index.DecrementRef(ref.Hash)
}
// Delete manifest
if err := manifestStore.Delete(manifestID); err != nil {
return fmt.Errorf("failed to delete manifest: %w", err)
}
if err := index.RemoveManifest(manifestID); err != nil {
log.Warn("Failed to remove manifest from index", "error", err)
}
fmt.Printf("Deleted backup: %s\n", manifestID)
fmt.Println("Run 'dbbackup dedup gc' to reclaim space from unreferenced chunks.")
return nil
}
// Helper functions
func formatBytes(b int64) string {
const unit = 1024
if b < unit {
return fmt.Sprintf("%d B", b)
}
div, exp := int64(unit), 0
for n := b / unit; n >= unit; n /= unit {
div *= unit
exp++
}
return fmt.Sprintf("%.1f %cB", float64(b)/float64(div), "KMGTPE"[exp])
}
func truncateStr(s string, max int) string {
if len(s) <= max {
return s
}
return s[:max-3] + "..."
}
func runDedupVerify(cmd *cobra.Command, args []string) error {
basePath := getDedupDir()
store, err := dedup.NewChunkStore(dedup.StoreConfig{
BasePath: basePath,
Compress: dedupCompress,
})
if err != nil {
return fmt.Errorf("failed to open chunk store: %w", err)
}
manifestStore, err := dedup.NewManifestStore(basePath)
if err != nil {
return fmt.Errorf("failed to open manifest store: %w", err)
}
index, err := dedup.NewChunkIndexAt(getIndexDBPath())
if err != nil {
return fmt.Errorf("failed to open chunk index: %w", err)
}
defer index.Close()
var manifests []*dedup.Manifest
if len(args) > 0 {
// Verify specific manifest
m, err := manifestStore.Load(args[0])
if err != nil {
return fmt.Errorf("failed to load manifest: %w", err)
}
manifests = []*dedup.Manifest{m}
} else {
// Verify all manifests
manifests, err = manifestStore.ListAll()
if err != nil {
return fmt.Errorf("failed to list manifests: %w", err)
}
}
if len(manifests) == 0 {
fmt.Println("No manifests to verify.")
return nil
}
fmt.Printf("Verifying %d backup(s)...\n\n", len(manifests))
var totalChunks, missingChunks, corruptChunks int
var allOK = true
for _, m := range manifests {
fmt.Printf("Verifying: %s (%d chunks)\n", m.ID, m.ChunkCount)
var missing, corrupt int
seenHashes := make(map[string]bool)
for i, ref := range m.Chunks {
if seenHashes[ref.Hash] {
continue // Already verified this chunk
}
seenHashes[ref.Hash] = true
totalChunks++
// Check if chunk exists
if !store.Has(ref.Hash) {
missing++
missingChunks++
if missing <= 5 {
fmt.Printf(" [MISSING] chunk %d: %s\n", i, ref.Hash[:16])
}
continue
}
// Verify chunk hash by reading it
chunk, err := store.Get(ref.Hash)
if err != nil {
corrupt++
corruptChunks++
if corrupt <= 5 {
fmt.Printf(" [CORRUPT] chunk %d: %s - %v\n", i, ref.Hash[:16], err)
}
continue
}
// Verify size
if chunk.Length != ref.Length {
corrupt++
corruptChunks++
if corrupt <= 5 {
fmt.Printf(" [SIZE MISMATCH] chunk %d: expected %d, got %d\n", i, ref.Length, chunk.Length)
}
}
}
if missing > 0 || corrupt > 0 {
allOK = false
fmt.Printf(" Result: FAILED (%d missing, %d corrupt)\n", missing, corrupt)
if hidden := max(missing-5, 0) + max(corrupt-5, 0); hidden > 0 {
fmt.Printf(" ... and %d more errors\n", hidden)
}
} else {
fmt.Printf(" Result: OK (%d unique chunks verified)\n", len(seenHashes))
// Update verified timestamp
m.VerifiedAt = time.Now()
manifestStore.Save(m)
index.UpdateManifestVerified(m.ID, m.VerifiedAt)
}
fmt.Println()
}
fmt.Println("========================================")
if allOK {
fmt.Printf("All %d backup(s) verified successfully!\n", len(manifests))
fmt.Printf("Total unique chunks checked: %d\n", totalChunks)
} else {
fmt.Printf("Verification FAILED!\n")
fmt.Printf("Missing chunks: %d\n", missingChunks)
fmt.Printf("Corrupt chunks: %d\n", corruptChunks)
return fmt.Errorf("verification failed: %d missing, %d corrupt chunks", missingChunks, corruptChunks)
}
return nil
}
func runDedupPrune(cmd *cobra.Command, args []string) error {
if pruneKeepLast == 0 && pruneKeepDaily == 0 && pruneKeepWeekly == 0 {
return fmt.Errorf("at least one of --keep-last, --keep-daily, or --keep-weekly must be specified")
}
basePath := getDedupDir()
manifestStore, err := dedup.NewManifestStore(basePath)
if err != nil {
return fmt.Errorf("failed to open manifest store: %w", err)
}
index, err := dedup.NewChunkIndexAt(getIndexDBPath())
if err != nil {
return fmt.Errorf("failed to open chunk index: %w", err)
}
defer index.Close()
manifests, err := manifestStore.ListAll()
if err != nil {
return fmt.Errorf("failed to list manifests: %w", err)
}
if len(manifests) == 0 {
fmt.Println("No backups to prune.")
return nil
}
// Group by database name
byDatabase := make(map[string][]*dedup.Manifest)
for _, m := range manifests {
key := m.DatabaseName
if key == "" {
key = "_default"
}
byDatabase[key] = append(byDatabase[key], m)
}
var toDelete []*dedup.Manifest
for dbName, dbManifests := range byDatabase {
// Already sorted by time (newest first from ListAll)
kept := make(map[string]bool)
var keepReasons = make(map[string]string)
// Keep last N
if pruneKeepLast > 0 {
for i := 0; i < pruneKeepLast && i < len(dbManifests); i++ {
kept[dbManifests[i].ID] = true
keepReasons[dbManifests[i].ID] = "keep-last"
}
}
// Keep daily (one per day)
if pruneKeepDaily > 0 {
seenDays := make(map[string]bool)
count := 0
for _, m := range dbManifests {
day := m.CreatedAt.Format("2006-01-02")
if !seenDays[day] {
seenDays[day] = true
if count < pruneKeepDaily {
kept[m.ID] = true
if keepReasons[m.ID] == "" {
keepReasons[m.ID] = "keep-daily"
}
count++
}
}
}
}
// Keep weekly (one per week)
if pruneKeepWeekly > 0 {
seenWeeks := make(map[string]bool)
count := 0
for _, m := range dbManifests {
year, week := m.CreatedAt.ISOWeek()
weekKey := fmt.Sprintf("%d-W%02d", year, week)
if !seenWeeks[weekKey] {
seenWeeks[weekKey] = true
if count < pruneKeepWeekly {
kept[m.ID] = true
if keepReasons[m.ID] == "" {
keepReasons[m.ID] = "keep-weekly"
}
count++
}
}
}
}
if dbName != "_default" {
fmt.Printf("\nDatabase: %s\n", dbName)
} else {
fmt.Printf("\nUnnamed backups:\n")
}
for _, m := range dbManifests {
if kept[m.ID] {
fmt.Printf(" [KEEP] %s (%s) - %s\n", m.ID, m.CreatedAt.Format("2006-01-02"), keepReasons[m.ID])
} else {
fmt.Printf(" [DELETE] %s (%s)\n", m.ID, m.CreatedAt.Format("2006-01-02"))
toDelete = append(toDelete, m)
}
}
}
if len(toDelete) == 0 {
fmt.Printf("\nNo backups to prune (all match retention policy).\n")
return nil
}
fmt.Printf("\n%d backup(s) will be deleted.\n", len(toDelete))
if pruneDryRun {
fmt.Println("\n[DRY RUN] No changes made. Remove --dry-run to actually delete.")
return nil
}
// Actually delete
for _, m := range toDelete {
// Decrement chunk references
for _, ref := range m.Chunks {
index.DecrementRef(ref.Hash)
}
if err := manifestStore.Delete(m.ID); err != nil {
log.Warn("Failed to delete manifest", "id", m.ID, "error", err)
}
index.RemoveManifest(m.ID)
}
fmt.Printf("\nDeleted %d backup(s).\n", len(toDelete))
fmt.Println("Run 'dbbackup dedup gc' to reclaim space from unreferenced chunks.")
return nil
}
func runDedupBackupDB(cmd *cobra.Command, args []string) error {
dbType := strings.ToLower(dedupDBType)
dbName := backupDBDatabase
// Validate db type
var dumpCmd string
var dumpArgs []string
switch dbType {
case "postgres", "postgresql", "pg":
dbType = "postgres"
dumpCmd = "pg_dump"
dumpArgs = []string{"-Fc"} // Custom format for better compression
if dedupDBHost != "" && dedupDBHost != "localhost" {
dumpArgs = append(dumpArgs, "-h", dedupDBHost)
}
if backupDBUser != "" {
dumpArgs = append(dumpArgs, "-U", backupDBUser)
}
dumpArgs = append(dumpArgs, dbName)
case "mysql":
dumpCmd = "mysqldump"
dumpArgs = []string{
"--single-transaction",
"--routines",
"--triggers",
"--events",
}
if dedupDBHost != "" {
dumpArgs = append(dumpArgs, "-h", dedupDBHost)
}
if backupDBUser != "" {
dumpArgs = append(dumpArgs, "-u", backupDBUser)
}
if backupDBPassword != "" {
dumpArgs = append(dumpArgs, "-p"+backupDBPassword)
}
dumpArgs = append(dumpArgs, dbName)
case "mariadb":
dumpCmd = "mariadb-dump"
// Fall back to mysqldump if mariadb-dump not available
if _, err := exec.LookPath(dumpCmd); err != nil {
dumpCmd = "mysqldump"
}
dumpArgs = []string{
"--single-transaction",
"--routines",
"--triggers",
"--events",
}
if dedupDBHost != "" {
dumpArgs = append(dumpArgs, "-h", dedupDBHost)
}
if backupDBUser != "" {
dumpArgs = append(dumpArgs, "-u", backupDBUser)
}
if backupDBPassword != "" {
dumpArgs = append(dumpArgs, "-p"+backupDBPassword)
}
dumpArgs = append(dumpArgs, dbName)
default:
return fmt.Errorf("unsupported database type: %s (use postgres, mysql, or mariadb)", dbType)
}
// Verify dump command exists
if _, err := exec.LookPath(dumpCmd); err != nil {
return fmt.Errorf("%s not found in PATH: %w", dumpCmd, err)
}
// Setup dedup storage
basePath := getDedupDir()
encKey := ""
if dedupEncrypt {
encKey = getEncryptionKey()
if encKey == "" {
return fmt.Errorf("encryption enabled but no key provided (use --key or DBBACKUP_DEDUP_KEY)")
}
}
store, err := dedup.NewChunkStore(dedup.StoreConfig{
BasePath: basePath,
Compress: dedupCompress,
EncryptionKey: encKey,
})
if err != nil {
return fmt.Errorf("failed to open chunk store: %w", err)
}
manifestStore, err := dedup.NewManifestStore(basePath)
if err != nil {
return fmt.Errorf("failed to open manifest store: %w", err)
}
index, err := dedup.NewChunkIndexAt(getIndexDBPath())
if err != nil {
return fmt.Errorf("failed to open chunk index: %w", err)
}
defer index.Close()
// Generate manifest ID
now := time.Now()
manifestID := now.Format("2006-01-02_150405") + "_" + dbName
fmt.Printf("Creating deduplicated database backup: %s\n", manifestID)
fmt.Printf("Database: %s (%s)\n", dbName, dbType)
fmt.Printf("Command: %s %s\n", dumpCmd, strings.Join(dumpArgs, " "))
fmt.Printf("Store: %s\n", basePath)
// Start the dump command
dumpExec := exec.Command(dumpCmd, dumpArgs...)
// Set password via environment for postgres
if dbType == "postgres" && backupDBPassword != "" {
dumpExec.Env = append(os.Environ(), "PGPASSWORD="+backupDBPassword)
}
stdout, err := dumpExec.StdoutPipe()
if err != nil {
return fmt.Errorf("failed to get stdout pipe: %w", err)
}
stderr, err := dumpExec.StderrPipe()
if err != nil {
return fmt.Errorf("failed to get stderr pipe: %w", err)
}
if err := dumpExec.Start(); err != nil {
return fmt.Errorf("failed to start %s: %w", dumpCmd, err)
}
// Hash while chunking using TeeReader
h := sha256.New()
reader := io.TeeReader(stdout, h)
// Chunk the stream directly
chunker := dedup.NewChunker(reader, dedup.DefaultChunkerConfig())
var chunks []dedup.ChunkRef
var totalSize, storedSize int64
var chunkCount, newChunks int
startTime := time.Now()
for {
chunk, err := chunker.Next()
if err == io.EOF {
break
}
if err != nil {
return fmt.Errorf("chunking failed: %w", err)
}
chunkCount++
totalSize += int64(chunk.Length)
// Store chunk (deduplication happens here)
isNew, err := store.Put(chunk)
if err != nil {
return fmt.Errorf("failed to store chunk: %w", err)
}
if isNew {
newChunks++
storedSize += int64(chunk.Length)
index.AddChunk(chunk.Hash, chunk.Length, chunk.Length)
}
chunks = append(chunks, dedup.ChunkRef{
Hash: chunk.Hash,
Offset: chunk.Offset,
Length: chunk.Length,
})
if chunkCount%1000 == 0 {
fmt.Printf("\r Processed %d chunks, %d new, %s...", chunkCount, newChunks, formatBytes(totalSize))
}
}
// Read any stderr
stderrBytes, _ := io.ReadAll(stderr)
// Wait for command to complete
if err := dumpExec.Wait(); err != nil {
return fmt.Errorf("%s failed: %w\nstderr: %s", dumpCmd, err, string(stderrBytes))
}
duration := time.Since(startTime)
fileHash := hex.EncodeToString(h.Sum(nil))
// Calculate dedup ratio
dedupRatio := 0.0
if totalSize > 0 {
dedupRatio = 1.0 - float64(storedSize)/float64(totalSize)
}
// Create manifest
manifest := &dedup.Manifest{
ID: manifestID,
Name: dedupName,
CreatedAt: now,
DatabaseType: dbType,
DatabaseName: dbName,
DatabaseHost: dedupDBHost,
Chunks: chunks,
OriginalSize: totalSize,
StoredSize: storedSize,
ChunkCount: chunkCount,
NewChunks: newChunks,
DedupRatio: dedupRatio,
Encrypted: dedupEncrypt,
Compressed: dedupCompress,
SHA256: fileHash,
}
if err := manifestStore.Save(manifest); err != nil {
return fmt.Errorf("failed to save manifest: %w", err)
}
if err := index.AddManifest(manifest); err != nil {
log.Warn("Failed to index manifest", "error", err)
}
fmt.Printf("\r \r")
fmt.Printf("\nBackup complete!\n")
fmt.Printf(" Manifest: %s\n", manifestID)
fmt.Printf(" Chunks: %d total, %d new\n", chunkCount, newChunks)
fmt.Printf(" Dump size: %s\n", formatBytes(totalSize))
fmt.Printf(" Stored: %s (new data)\n", formatBytes(storedSize))
fmt.Printf(" Dedup ratio: %.1f%%\n", dedupRatio*100)
fmt.Printf(" Duration: %s\n", duration.Round(time.Millisecond))
fmt.Printf(" Throughput: %s/s\n", formatBytes(int64(float64(totalSize)/duration.Seconds())))
return nil
}
func runDedupMetrics(cmd *cobra.Command, args []string) error {
basePath := getDedupDir()
indexPath := getIndexDBPath()
instance := dedupMetricsInstance
if instance == "" {
hostname, _ := os.Hostname()
instance = hostname
}
metrics, err := dedup.CollectMetrics(basePath, indexPath)
if err != nil {
return fmt.Errorf("failed to collect metrics: %w", err)
}
output := dedup.FormatPrometheusMetrics(metrics, instance)
if dedupMetricsOutput != "" {
if err := dedup.WritePrometheusTextfile(dedupMetricsOutput, instance, basePath, indexPath); err != nil {
return fmt.Errorf("failed to write metrics: %w", err)
}
fmt.Printf("Wrote metrics to %s\n", dedupMetricsOutput)
} else {
fmt.Print(output)
}
return nil
}
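The help text above describes content-defined chunking with a Buzhash rolling hash and SHA-256 content addressing. As a rough illustration of the boundary-finding idea only (not the internal/dedup implementation; the constants and the toy boundary hash below are invented for the example), a minimal standalone sketch:

package main

import (
    "bytes"
    "crypto/sha256"
    "encoding/hex"
    "fmt"
    "io"
)

// Illustrative parameters; the real chunker may use different values.
const (
    minChunk = 2 * 1024
    maxChunk = 64 * 1024
    mask     = (1 << 13) - 1 // ~8 KiB average chunk size
)

// chunk splits r at content-defined boundaries using a toy hash
// (a stand-in for Buzhash) and returns SHA-256 hash -> chunk data.
func chunk(r io.Reader) (map[string][]byte, error) {
    chunks := make(map[string][]byte)
    buf := make([]byte, 1)
    var cur []byte
    var h uint32
    flush := func() {
        if len(cur) == 0 {
            return
        }
        sum := sha256.Sum256(cur)
        chunks[hex.EncodeToString(sum[:])] = append([]byte(nil), cur...)
        cur, h = cur[:0], 0
    }
    for {
        n, err := r.Read(buf)
        if n > 0 {
            cur = append(cur, buf[0])
            h = h*31 + uint32(buf[0]) // toy boundary function, not Buzhash
            if (len(cur) >= minChunk && h&mask == 0) || len(cur) >= maxChunk {
                flush()
            }
        }
        if err == io.EOF {
            flush()
            return chunks, nil
        }
        if err != nil {
            return nil, err
        }
    }
}

func main() {
    data := bytes.Repeat([]byte("dbbackup dedup example "), 4096)
    chunks, err := chunk(bytes.NewReader(data))
    if err != nil {
        panic(err)
    }
    fmt.Printf("%d unique chunks for %d bytes\n", len(chunks), len(data))
}

Because boundaries depend only on local content, an edit early in a dump shifts only the chunks around the edit, which is why repeated backups of similar dumps store little new data.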

View File

@@ -318,7 +318,7 @@ func runDrillList(cmd *cobra.Command, args []string) error {
}
fmt.Printf("%-15s %-40s %-20s %s\n", "ID", "NAME", "IMAGE", "STATUS")
fmt.Println(strings.Repeat("", 100))
fmt.Println(strings.Repeat("-", 100))
for _, c := range containers {
fmt.Printf("%-15s %-40s %-20s %s\n",
@@ -345,7 +345,7 @@ func runDrillCleanup(cmd *cobra.Command, args []string) error {
return err
}
fmt.Println(" Cleanup completed")
fmt.Println("[OK] Cleanup completed")
return nil
}
@@ -369,32 +369,32 @@ func runDrillReport(cmd *cobra.Command, args []string) error {
func printDrillResult(result *drill.DrillResult) {
fmt.Printf("\n")
fmt.Printf("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n")
fmt.Printf("=====================================================\n")
fmt.Printf(" DR Drill Report: %s\n", result.DrillID)
fmt.Printf("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n\n")
fmt.Printf("=====================================================\n\n")
status := " PASSED"
status := "[OK] PASSED"
if !result.Success {
status = " FAILED"
status = "[FAIL] FAILED"
} else if result.Status == drill.StatusPartial {
status = "⚠️ PARTIAL"
status = "[WARN] PARTIAL"
}
fmt.Printf("📋 Status: %s\n", status)
fmt.Printf("💾 Backup: %s\n", filepath.Base(result.BackupPath))
fmt.Printf("🗄️ Database: %s (%s)\n", result.DatabaseName, result.DatabaseType)
fmt.Printf("⏱️ Duration: %.2fs\n", result.Duration)
fmt.Printf("[LOG] Status: %s\n", status)
fmt.Printf("[SAVE] Backup: %s\n", filepath.Base(result.BackupPath))
fmt.Printf("[DB] Database: %s (%s)\n", result.DatabaseName, result.DatabaseType)
fmt.Printf("[TIME] Duration: %.2fs\n", result.Duration)
fmt.Printf("📅 Started: %s\n", result.StartTime.Format(time.RFC3339))
fmt.Printf("\n")
// Phases
fmt.Printf("📊 Phases:\n")
fmt.Printf("[STATS] Phases:\n")
for _, phase := range result.Phases {
icon := ""
icon := "[OK]"
if phase.Status == "failed" {
icon = ""
icon = "[FAIL]"
} else if phase.Status == "running" {
icon = "🔄"
icon = "[SYNC]"
}
fmt.Printf(" %s %-20s (%.2fs) %s\n", icon, phase.Name, phase.Duration, phase.Message)
}
@@ -412,10 +412,10 @@ func printDrillResult(result *drill.DrillResult) {
fmt.Printf("\n")
// RTO
fmt.Printf("⏱️ RTO Analysis:\n")
rtoIcon := ""
fmt.Printf("[TIME] RTO Analysis:\n")
rtoIcon := "[OK]"
if !result.RTOMet {
rtoIcon = ""
rtoIcon = "[FAIL]"
}
fmt.Printf(" Actual RTO: %.2fs\n", result.ActualRTO)
fmt.Printf(" Target RTO: %.0fs\n", result.TargetRTO)
@@ -424,11 +424,11 @@ func printDrillResult(result *drill.DrillResult) {
// Validation results
if len(result.ValidationResults) > 0 {
fmt.Printf("🔍 Validation Queries:\n")
fmt.Printf("[SEARCH] Validation Queries:\n")
for _, vr := range result.ValidationResults {
icon := ""
icon := "[OK]"
if !vr.Success {
icon = ""
icon = "[FAIL]"
}
fmt.Printf(" %s %s: %s\n", icon, vr.Name, vr.Result)
if vr.Error != "" {
@@ -440,11 +440,11 @@ func printDrillResult(result *drill.DrillResult) {
// Check results
if len(result.CheckResults) > 0 {
fmt.Printf(" Checks:\n")
fmt.Printf("[OK] Checks:\n")
for _, cr := range result.CheckResults {
icon := ""
icon := "[OK]"
if !cr.Success {
icon = ""
icon = "[FAIL]"
}
fmt.Printf(" %s %s\n", icon, cr.Message)
}
@@ -453,7 +453,7 @@ func printDrillResult(result *drill.DrillResult) {
// Errors and warnings
if len(result.Errors) > 0 {
fmt.Printf(" Errors:\n")
fmt.Printf("[FAIL] Errors:\n")
for _, e := range result.Errors {
fmt.Printf(" • %s\n", e)
}
@@ -461,7 +461,7 @@ func printDrillResult(result *drill.DrillResult) {
}
if len(result.Warnings) > 0 {
fmt.Printf("⚠️ Warnings:\n")
fmt.Printf("[WARN] Warnings:\n")
for _, w := range result.Warnings {
fmt.Printf(" • %s\n", w)
}
@@ -470,14 +470,14 @@ func printDrillResult(result *drill.DrillResult) {
// Container info
if result.ContainerKept {
fmt.Printf("📦 Container kept: %s\n", result.ContainerID[:12])
fmt.Printf("[PKG] Container kept: %s\n", result.ContainerID[:12])
fmt.Printf(" Connect with: docker exec -it %s bash\n", result.ContainerID[:12])
fmt.Printf("\n")
}
fmt.Printf("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n")
fmt.Printf("=====================================================\n")
fmt.Printf(" %s\n", result.Message)
fmt.Printf("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n")
fmt.Printf("=====================================================\n")
}
func updateCatalogWithDrillResult(ctx context.Context, backupPath string, result *drill.DrillResult) {

View File

@@ -63,9 +63,9 @@ func runEngineList(cmd *cobra.Command, args []string) error {
continue
}
status := " Available"
status := "[Y] Available"
if !avail.Available {
status = " Not available"
status = "[N] Not available"
}
fmt.Printf("\n%s (%s)\n", info.Name, info.Description)

cmd/install.go Normal file

@@ -0,0 +1,239 @@
package cmd
import (
"context"
"fmt"
"os"
"os/exec"
"os/signal"
"strings"
"syscall"
"dbbackup/internal/installer"
"github.com/spf13/cobra"
)
var (
// Install flags
installInstance string
installSchedule string
installBackupType string
installUser string
installGroup string
installBackupDir string
installConfigPath string
installTimeout int
installWithMetrics bool
installMetricsPort int
installDryRun bool
installStatus bool
// Uninstall flags
uninstallPurge bool
)
// installCmd represents the install command
var installCmd = &cobra.Command{
Use: "install",
Short: "Install dbbackup as a systemd service",
Long: `Install dbbackup as a systemd service with automatic scheduling.
This command creates systemd service and timer units for automated database backups.
It supports both single database and cluster backup modes.
Examples:
# Interactive installation (will prompt for options)
sudo dbbackup install
# Install cluster backup running daily at 2am
sudo dbbackup install --backup-type cluster --schedule "daily"
# Install single database backup with custom schedule
sudo dbbackup install --instance production --backup-type single --schedule "*-*-* 03:00:00"
# Install with Prometheus metrics exporter
sudo dbbackup install --with-metrics --metrics-port 9399
# Check installation status
dbbackup install --status
# Dry-run to see what would be installed
sudo dbbackup install --dry-run
Schedule format (OnCalendar):
daily - Every day at 02:00 (expanded by dbbackup)
weekly - Every Monday at 02:00 (expanded by dbbackup)
*-*-* 02:00:00 - Every day at 2am
*-*-* 02,14:00 - Twice daily at 2am and 2pm
Mon *-*-* 03:00 - Every Monday at 3am
`,
RunE: func(cmd *cobra.Command, args []string) error {
// Handle --status flag
if installStatus {
return runInstallStatus(cmd.Context())
}
return runInstall(cmd.Context())
},
}
// uninstallCmd represents the uninstall command
var uninstallCmd = &cobra.Command{
Use: "uninstall [instance]",
Short: "Uninstall dbbackup systemd service",
Long: `Uninstall dbbackup systemd service and timer.
Examples:
# Uninstall default instance
sudo dbbackup uninstall
# Uninstall specific instance
sudo dbbackup uninstall production
# Uninstall and remove all configuration
sudo dbbackup uninstall --purge
`,
RunE: func(cmd *cobra.Command, args []string) error {
instance := "cluster"
if len(args) > 0 {
instance = args[0]
}
return runUninstall(cmd.Context(), instance)
},
}
func init() {
rootCmd.AddCommand(installCmd)
rootCmd.AddCommand(uninstallCmd)
// Install flags
installCmd.Flags().StringVarP(&installInstance, "instance", "i", "", "Instance name (e.g., production, staging)")
installCmd.Flags().StringVarP(&installSchedule, "schedule", "s", "daily", "Backup schedule (OnCalendar format)")
installCmd.Flags().StringVarP(&installBackupType, "backup-type", "t", "cluster", "Backup type: single or cluster")
installCmd.Flags().StringVar(&installUser, "user", "dbbackup", "System user to run backups")
installCmd.Flags().StringVar(&installGroup, "group", "dbbackup", "System group for backup user")
installCmd.Flags().StringVar(&installBackupDir, "backup-dir", "/var/lib/dbbackup/backups", "Directory for backups")
installCmd.Flags().StringVar(&installConfigPath, "config-path", "/etc/dbbackup/dbbackup.conf", "Path to config file")
installCmd.Flags().IntVar(&installTimeout, "timeout", 3600, "Backup timeout in seconds")
installCmd.Flags().BoolVar(&installWithMetrics, "with-metrics", false, "Install Prometheus metrics exporter")
installCmd.Flags().IntVar(&installMetricsPort, "metrics-port", 9399, "Prometheus metrics port")
installCmd.Flags().BoolVar(&installDryRun, "dry-run", false, "Show what would be installed without making changes")
installCmd.Flags().BoolVar(&installStatus, "status", false, "Show installation status")
// Uninstall flags
uninstallCmd.Flags().BoolVar(&uninstallPurge, "purge", false, "Also remove configuration files")
}
func runInstall(ctx context.Context) error {
// Create context with signal handling
ctx, cancel := signal.NotifyContext(ctx, os.Interrupt, syscall.SIGTERM)
defer cancel()
// Expand schedule shortcuts
schedule := expandSchedule(installSchedule)
// Create installer
inst := installer.NewInstaller(log, installDryRun)
// Set up options
opts := installer.InstallOptions{
Instance: installInstance,
BackupType: installBackupType,
Schedule: schedule,
User: installUser,
Group: installGroup,
BackupDir: installBackupDir,
ConfigPath: installConfigPath,
TimeoutSeconds: installTimeout,
WithMetrics: installWithMetrics,
MetricsPort: installMetricsPort,
}
// For cluster backup, override instance
if installBackupType == "cluster" {
opts.Instance = "cluster"
}
return inst.Install(ctx, opts)
}
func runUninstall(ctx context.Context, instance string) error {
ctx, cancel := signal.NotifyContext(ctx, os.Interrupt, syscall.SIGTERM)
defer cancel()
inst := installer.NewInstaller(log, false)
return inst.Uninstall(ctx, instance, uninstallPurge)
}
func runInstallStatus(ctx context.Context) error {
inst := installer.NewInstaller(log, false)
// Check cluster status
clusterStatus, err := inst.Status(ctx, "cluster")
if err != nil {
return err
}
fmt.Println()
fmt.Println("[STATUS] DBBackup Installation Status")
fmt.Println(strings.Repeat("=", 50))
if clusterStatus.Installed {
fmt.Println()
fmt.Println(" * Cluster Backup:")
fmt.Printf(" Service: %s\n", formatStatus(clusterStatus.Installed, clusterStatus.Active))
fmt.Printf(" Timer: %s\n", formatStatus(clusterStatus.TimerEnabled, clusterStatus.TimerActive))
if clusterStatus.NextRun != "" {
fmt.Printf(" Next run: %s\n", clusterStatus.NextRun)
}
if clusterStatus.LastRun != "" {
fmt.Printf(" Last run: %s\n", clusterStatus.LastRun)
}
} else {
fmt.Println()
fmt.Println("[NONE] No systemd services installed")
fmt.Println()
fmt.Println("Run 'sudo dbbackup install' to install as a systemd service")
}
// Check for exporter
if _, err := os.Stat("/etc/systemd/system/dbbackup-exporter.service"); err == nil {
fmt.Println()
fmt.Println(" * Metrics Exporter:")
// Check if exporter is active using systemctl
cmd := exec.CommandContext(ctx, "systemctl", "is-active", "dbbackup-exporter")
if err := cmd.Run(); err == nil {
fmt.Printf(" Service: [OK] active\n")
} else {
fmt.Printf(" Service: [-] inactive\n")
}
}
fmt.Println()
return nil
}
func formatStatus(installed, active bool) string {
if !installed {
return "not installed"
}
if active {
return "[OK] active"
}
return "[-] inactive"
}
func expandSchedule(schedule string) string {
shortcuts := map[string]string{
"hourly": "*-*-* *:00:00",
"daily": "*-*-* 02:00:00",
"weekly": "Mon *-*-* 02:00:00",
"monthly": "*-*-01 02:00:00",
}
if expanded, ok := shortcuts[strings.ToLower(schedule)]; ok {
return expanded
}
return schedule
}
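On a systemd host, the expanded OnCalendar expression can be sanity-checked before installing, for example:

systemd-analyze calendar "*-*-* 02:00:00"

which prints the normalized expression and the next scheduled elapse time.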

cmd/metrics.go Normal file

@@ -0,0 +1,138 @@
package cmd
import (
"context"
"fmt"
"os"
"os/signal"
"syscall"
"dbbackup/internal/prometheus"
"github.com/spf13/cobra"
)
var (
metricsInstance string
metricsOutput string
metricsPort int
)
// metricsCmd represents the metrics command
var metricsCmd = &cobra.Command{
Use: "metrics",
Short: "Prometheus metrics management",
Long: `Prometheus metrics management for dbbackup.
Export metrics to a textfile for node_exporter, or run an HTTP server
for direct Prometheus scraping.`,
}
// metricsExportCmd exports metrics to a textfile
var metricsExportCmd = &cobra.Command{
Use: "export",
Short: "Export metrics to textfile",
Long: `Export Prometheus metrics to a textfile for node_exporter.
The textfile collector in node_exporter can scrape metrics from files
in a designated directory (typically /var/lib/node_exporter/textfile_collector/).
Examples:
# Export metrics to default location
dbbackup metrics export
# Export with custom output path
dbbackup metrics export --output /var/lib/dbbackup/metrics/dbbackup.prom
# Export for specific instance
dbbackup metrics export --instance production --output /var/lib/dbbackup/metrics/production.prom
After export, configure node_exporter with:
--collector.textfile.directory=/var/lib/dbbackup/metrics/
`,
RunE: func(cmd *cobra.Command, args []string) error {
return runMetricsExport(cmd.Context())
},
}
// metricsServeCmd runs the HTTP metrics server
var metricsServeCmd = &cobra.Command{
Use: "serve",
Short: "Run Prometheus HTTP server",
Long: `Run an HTTP server exposing Prometheus metrics.
This starts a long-running daemon that serves metrics at /metrics.
Prometheus can scrape this endpoint directly.
Examples:
# Start server on default port 9399
dbbackup metrics serve
# Start server on custom port
dbbackup metrics serve --port 9100
# Run as systemd service (installed via 'dbbackup install --with-metrics')
sudo systemctl start dbbackup-exporter
Endpoints:
/metrics - Prometheus metrics
/health - Health check (returns 200 OK)
/ - Service info page
`,
RunE: func(cmd *cobra.Command, args []string) error {
return runMetricsServe(cmd.Context())
},
}
func init() {
rootCmd.AddCommand(metricsCmd)
metricsCmd.AddCommand(metricsExportCmd)
metricsCmd.AddCommand(metricsServeCmd)
// Export flags
metricsExportCmd.Flags().StringVar(&metricsInstance, "instance", "default", "Instance name for metrics labels")
metricsExportCmd.Flags().StringVarP(&metricsOutput, "output", "o", "/var/lib/dbbackup/metrics/dbbackup.prom", "Output file path")
// Serve flags
metricsServeCmd.Flags().StringVar(&metricsInstance, "instance", "default", "Instance name for metrics labels")
metricsServeCmd.Flags().IntVarP(&metricsPort, "port", "p", 9399, "HTTP server port")
}
func runMetricsExport(ctx context.Context) error {
// Open catalog
cat, err := openCatalog()
if err != nil {
return fmt.Errorf("failed to open catalog: %w", err)
}
defer cat.Close()
// Create metrics writer
writer := prometheus.NewMetricsWriter(log, cat, metricsInstance)
// Write textfile
if err := writer.WriteTextfile(metricsOutput); err != nil {
return fmt.Errorf("failed to write metrics: %w", err)
}
log.Info("Exported metrics to textfile", "path", metricsOutput, "instance", metricsInstance)
return nil
}
func runMetricsServe(ctx context.Context) error {
// Setup signal handling
ctx, cancel := signal.NotifyContext(ctx, os.Interrupt, syscall.SIGTERM)
defer cancel()
// Open catalog
cat, err := openCatalog()
if err != nil {
return fmt.Errorf("failed to open catalog: %w", err)
}
defer cat.Close()
// Create exporter
exporter := prometheus.NewExporter(log, cat, metricsInstance, metricsPort)
// Run server (blocks until context is cancelled)
return exporter.Serve(ctx)
}
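The exported textfile is read by node_exporter's textfile collector, which could otherwise pick up a half-written file. The usual safeguard (and presumably what the metrics writer does internally, though that is an assumption) is to write to a temp file and rename it into place. A minimal sketch of that pattern, with a purely hypothetical metric name:

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

// writeTextfileAtomic writes Prometheus textfile metrics via temp file + rename,
// so a scraper never sees a partially written file.
func writeTextfileAtomic(path, body string) error {
    tmp, err := os.CreateTemp(filepath.Dir(path), ".dbbackup-*.prom")
    if err != nil {
        return err
    }
    defer os.Remove(tmp.Name()) // no-op after a successful rename
    if _, err := tmp.WriteString(body); err != nil {
        tmp.Close()
        return err
    }
    if err := tmp.Close(); err != nil {
        return err
    }
    return os.Rename(tmp.Name(), path)
}

func main() {
    // Hypothetical metric, for illustration only.
    body := "# TYPE dbbackup_example_metric gauge\ndbbackup_example_metric 1\n"
    if err := writeTextfileAtomic("/tmp/dbbackup-example.prom", body); err != nil {
        fmt.Fprintln(os.Stderr, err)
    }
}

The rename is atomic on the same filesystem, so the collector only ever observes a complete file.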

View File

@@ -203,9 +203,17 @@ func runMigrateCluster(cmd *cobra.Command, args []string) error {
migrateTargetUser = migrateSourceUser
}
// Create source config first to get WorkDir
sourceCfg := config.New()
sourceCfg.Host = migrateSourceHost
sourceCfg.Port = migrateSourcePort
sourceCfg.User = migrateSourceUser
sourceCfg.Password = migrateSourcePassword
workdir := migrateWorkdir
if workdir == "" {
workdir = filepath.Join(os.TempDir(), "dbbackup-migrate")
// Use WorkDir from config if available
workdir = filepath.Join(sourceCfg.GetEffectiveWorkDir(), "dbbackup-migrate")
}
// Create working directory
@@ -213,12 +221,7 @@ func runMigrateCluster(cmd *cobra.Command, args []string) error {
return fmt.Errorf("failed to create working directory: %w", err)
}
// Create source config
sourceCfg := config.New()
sourceCfg.Host = migrateSourceHost
sourceCfg.Port = migrateSourcePort
sourceCfg.User = migrateSourceUser
sourceCfg.Password = migrateSourcePassword
// Update source config with remaining settings
sourceCfg.SSLMode = migrateSourceSSLMode
sourceCfg.Database = "postgres" // Default connection database
sourceCfg.DatabaseType = cfg.DatabaseType
@@ -342,7 +345,8 @@ func runMigrateSingle(cmd *cobra.Command, args []string) error {
workdir := migrateWorkdir
if workdir == "" {
workdir = filepath.Join(os.TempDir(), "dbbackup-migrate")
tempCfg := config.New()
workdir = filepath.Join(tempCfg.GetEffectiveWorkDir(), "dbbackup-migrate")
}
// Create working directory

View File

@@ -436,7 +436,7 @@ func runPITREnable(cmd *cobra.Command, args []string) error {
return fmt.Errorf("failed to enable PITR: %w", err)
}
log.Info(" PITR enabled successfully!")
log.Info("[OK] PITR enabled successfully!")
log.Info("")
log.Info("Next steps:")
log.Info("1. Restart PostgreSQL: sudo systemctl restart postgresql")
@@ -463,7 +463,7 @@ func runPITRDisable(cmd *cobra.Command, args []string) error {
return fmt.Errorf("failed to disable PITR: %w", err)
}
log.Info(" PITR disabled successfully!")
log.Info("[OK] PITR disabled successfully!")
log.Info("PostgreSQL restart required: sudo systemctl restart postgresql")
return nil
@@ -483,15 +483,15 @@ func runPITRStatus(cmd *cobra.Command, args []string) error {
}
// Display PITR configuration
fmt.Println("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━")
fmt.Println("======================================================")
fmt.Println(" Point-in-Time Recovery (PITR) Status")
fmt.Println("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━")
fmt.Println("======================================================")
fmt.Println()
if config.Enabled {
fmt.Println("Status: ENABLED")
fmt.Println("Status: [OK] ENABLED")
} else {
fmt.Println("Status: DISABLED")
fmt.Println("Status: [FAIL] DISABLED")
}
fmt.Printf("WAL Level: %s\n", config.WALLevel)
@@ -510,7 +510,7 @@ func runPITRStatus(cmd *cobra.Command, args []string) error {
// Extract archive dir from command (simple parsing)
fmt.Println()
fmt.Println("WAL Archive Statistics:")
fmt.Println("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━")
fmt.Println("======================================================")
// TODO: Parse archive dir and show stats
fmt.Println(" (Use 'dbbackup wal list --archive-dir <dir>' to view archives)")
}
@@ -574,13 +574,13 @@ func runWALList(cmd *cobra.Command, args []string) error {
}
// Display archives
fmt.Println("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━")
fmt.Println("======================================================")
fmt.Printf(" WAL Archives (%d files)\n", len(archives))
fmt.Println("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━")
fmt.Println("======================================================")
fmt.Println()
fmt.Printf("%-28s %10s %10s %8s %s\n", "WAL Filename", "Timeline", "Segment", "Size", "Archived At")
fmt.Println("────────────────────────────────────────────────────────────────────────────────")
fmt.Println("--------------------------------------------------------------------------------")
for _, archive := range archives {
size := formatWALSize(archive.ArchivedSize)
@@ -644,7 +644,7 @@ func runWALCleanup(cmd *cobra.Command, args []string) error {
return fmt.Errorf("WAL cleanup failed: %w", err)
}
log.Info(" WAL cleanup completed", "deleted", deleted, "retention_days", archiveConfig.RetentionDays)
log.Info("[OK] WAL cleanup completed", "deleted", deleted, "retention_days", archiveConfig.RetentionDays)
return nil
}
@@ -671,7 +671,7 @@ func runWALTimeline(cmd *cobra.Command, args []string) error {
// Display timeline details
if len(history.Timelines) > 0 {
fmt.Println("\nTimeline Details:")
fmt.Println("═════════════════")
fmt.Println("=================")
for _, tl := range history.Timelines {
fmt.Printf("\nTimeline %d:\n", tl.TimelineID)
if tl.ParentTimeline > 0 {
@@ -690,7 +690,7 @@ func runWALTimeline(cmd *cobra.Command, args []string) error {
fmt.Printf(" Created: %s\n", tl.CreatedAt.Format("2006-01-02 15:04:05"))
}
if tl.TimelineID == history.CurrentTimeline {
fmt.Printf(" Status: CURRENT\n")
fmt.Printf(" Status: [CURR] CURRENT\n")
}
}
}
@@ -759,15 +759,15 @@ func runBinlogList(cmd *cobra.Command, args []string) error {
return nil
}
fmt.Println("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━")
fmt.Println("=============================================================")
fmt.Printf(" Binary Log Files (%s)\n", bm.ServerType())
fmt.Println("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━")
fmt.Println("=============================================================")
fmt.Println()
if len(binlogs) > 0 {
fmt.Println("Source Directory:")
fmt.Printf("%-24s %10s %-19s %-19s %s\n", "Filename", "Size", "Start Time", "End Time", "Format")
fmt.Println("────────────────────────────────────────────────────────────────────────────────")
fmt.Println("--------------------------------------------------------------------------------")
var totalSize int64
for _, b := range binlogs {
@@ -797,7 +797,7 @@ func runBinlogList(cmd *cobra.Command, args []string) error {
fmt.Println()
fmt.Println("Archived Binlogs:")
fmt.Printf("%-24s %10s %-19s %s\n", "Original", "Size", "Archived At", "Flags")
fmt.Println("────────────────────────────────────────────────────────────────────────────────")
fmt.Println("--------------------------------------------------------------------------------")
var totalSize int64
for _, a := range archived {
@@ -914,7 +914,7 @@ func runBinlogArchive(cmd *cobra.Command, args []string) error {
bm.SaveArchiveMetadata(allArchived)
}
log.Info(" Binlog archiving completed", "archived", len(newArchives))
log.Info("[OK] Binlog archiving completed", "archived", len(newArchives))
return nil
}
@@ -1014,15 +1014,15 @@ func runBinlogValidate(cmd *cobra.Command, args []string) error {
return fmt.Errorf("validating binlog chain: %w", err)
}
fmt.Println("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━")
fmt.Println("=============================================================")
fmt.Println(" Binlog Chain Validation")
fmt.Println("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━")
fmt.Println("=============================================================")
fmt.Println()
if validation.Valid {
fmt.Println("Status: VALID - Binlog chain is complete")
fmt.Println("Status: [OK] VALID - Binlog chain is complete")
} else {
fmt.Println("Status: INVALID - Binlog chain has gaps")
fmt.Println("Status: [FAIL] INVALID - Binlog chain has gaps")
}
fmt.Printf("Files: %d binlog files\n", validation.LogCount)
@@ -1055,7 +1055,7 @@ func runBinlogValidate(cmd *cobra.Command, args []string) error {
fmt.Println()
fmt.Println("Errors:")
for _, e := range validation.Errors {
fmt.Printf(" %s\n", e)
fmt.Printf(" [FAIL] %s\n", e)
}
}
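Chain validation ultimately reduces to checking that the numeric suffixes of the binlog files form an unbroken sequence. A sketch of such a gap check — the mysql-bin.NNNNNN naming and the pre-sorted input are assumptions, and the real validator may differ:

// Sketch: gap detection over sorted binlog filenames such as
// "mysql-bin.000001". Uses fmt, strconv, strings; not the actual validator.
func findBinlogGaps(names []string) []string {
	var gaps []string
	prev := -1
	for _, name := range names {
		dot := strings.LastIndex(name, ".")
		seq, err := strconv.Atoi(name[dot+1:])
		if err != nil {
			continue // e.g. "mysql-bin.index" has no numeric suffix
		}
		if prev >= 0 && seq != prev+1 {
			gaps = append(gaps, fmt.Sprintf("missing %06d-%06d", prev+1, seq-1))
		}
		prev = seq
	}
	return gaps
}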
@@ -1094,9 +1094,9 @@ func runBinlogPosition(cmd *cobra.Command, args []string) error {
}
defer rows.Close()
fmt.Println("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━")
fmt.Println("=============================================================")
fmt.Println(" Current Binary Log Position")
fmt.Println("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━")
fmt.Println("=============================================================")
fmt.Println()
if rows.Next() {
@@ -1178,24 +1178,24 @@ func runMySQLPITRStatus(cmd *cobra.Command, args []string) error {
return fmt.Errorf("getting PITR status: %w", err)
}
fmt.Println("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━")
fmt.Println("=============================================================")
fmt.Printf(" MySQL/MariaDB PITR Status (%s)\n", status.DatabaseType)
fmt.Println("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━")
fmt.Println("=============================================================")
fmt.Println()
if status.Enabled {
fmt.Println("PITR Status: ENABLED")
fmt.Println("PITR Status: [OK] ENABLED")
} else {
fmt.Println("PITR Status: NOT CONFIGURED")
fmt.Println("PITR Status: [FAIL] NOT CONFIGURED")
}
// Get binary logging status
var logBin string
db.QueryRowContext(ctx, "SELECT @@log_bin").Scan(&logBin)
if logBin == "1" || logBin == "ON" {
fmt.Println("Binary Logging: ENABLED")
fmt.Println("Binary Logging: [OK] ENABLED")
} else {
fmt.Println("Binary Logging: DISABLED")
fmt.Println("Binary Logging: [FAIL] DISABLED")
}
fmt.Printf("Binlog Format: %s\n", status.LogLevel)
@@ -1205,14 +1205,14 @@ func runMySQLPITRStatus(cmd *cobra.Command, args []string) error {
if status.DatabaseType == pitr.DatabaseMariaDB {
db.QueryRowContext(ctx, "SELECT @@gtid_current_pos").Scan(&gtidMode)
if gtidMode != "" {
fmt.Println("GTID Mode: ENABLED")
fmt.Println("GTID Mode: [OK] ENABLED")
} else {
fmt.Println("GTID Mode: DISABLED")
fmt.Println("GTID Mode: [FAIL] DISABLED")
}
} else {
db.QueryRowContext(ctx, "SELECT @@gtid_mode").Scan(&gtidMode)
if gtidMode == "ON" {
fmt.Println("GTID Mode: ENABLED")
fmt.Println("GTID Mode: [OK] ENABLED")
} else {
fmt.Printf("GTID Mode: %s\n", gtidMode)
}
@@ -1237,12 +1237,12 @@ func runMySQLPITRStatus(cmd *cobra.Command, args []string) error {
fmt.Println()
fmt.Println("PITR Requirements:")
if logBin == "1" || logBin == "ON" {
fmt.Println(" Binary logging enabled")
fmt.Println(" [OK] Binary logging enabled")
} else {
fmt.Println(" Binary logging must be enabled (log_bin = mysql-bin)")
fmt.Println(" [FAIL] Binary logging must be enabled (log_bin = mysql-bin)")
}
if status.LogLevel == "ROW" {
fmt.Println(" Row-based logging (recommended)")
fmt.Println(" [OK] Row-based logging (recommended)")
} else {
fmt.Printf(" ⚠ binlog_format = %s (ROW recommended for PITR)\n", status.LogLevel)
}
@@ -1299,7 +1299,7 @@ func runMySQLPITREnable(cmd *cobra.Command, args []string) error {
return fmt.Errorf("enabling PITR: %w", err)
}
log.Info(" MySQL PITR enabled successfully!")
log.Info("[OK] MySQL PITR enabled successfully!")
log.Info("")
log.Info("Next steps:")
log.Info("1. Start binlog archiving: dbbackup binlog watch --archive-dir " + mysqlArchiveDir)

View File

@@ -141,7 +141,7 @@ func runList(ctx context.Context) error {
continue
}
fmt.Printf("📦 %s\n", file.Name)
fmt.Printf("[FILE] %s\n", file.Name)
fmt.Printf(" Size: %s\n", formatFileSize(stat.Size()))
fmt.Printf(" Modified: %s\n", stat.ModTime().Format("2006-01-02 15:04:05"))
fmt.Printf(" Type: %s\n", getBackupType(file.Name))
@@ -237,56 +237,56 @@ func runPreflight(ctx context.Context) error {
totalChecks := 6
// 1. Database connectivity check
fmt.Print("🔗 Database connectivity... ")
fmt.Print("[1] Database connectivity... ")
if err := testDatabaseConnection(); err != nil {
fmt.Printf(" FAILED: %v\n", err)
fmt.Printf("[FAIL] FAILED: %v\n", err)
} else {
fmt.Println(" PASSED")
fmt.Println("[OK] PASSED")
checksPassed++
}
// 2. Required tools check
fmt.Print("🛠️ Required tools (pg_dump/pg_restore)... ")
fmt.Print("[2] Required tools (pg_dump/pg_restore)... ")
if err := checkRequiredTools(); err != nil {
fmt.Printf(" FAILED: %v\n", err)
fmt.Printf("[FAIL] FAILED: %v\n", err)
} else {
fmt.Println(" PASSED")
fmt.Println("[OK] PASSED")
checksPassed++
}
// 3. Backup directory check
fmt.Print("📁 Backup directory access... ")
fmt.Print("[3] Backup directory access... ")
if err := checkBackupDirectory(); err != nil {
fmt.Printf(" FAILED: %v\n", err)
fmt.Printf("[FAIL] FAILED: %v\n", err)
} else {
fmt.Println(" PASSED")
fmt.Println("[OK] PASSED")
checksPassed++
}
// 4. Disk space check
fmt.Print("💾 Available disk space... ")
fmt.Print("[4] Available disk space... ")
if err := checkDiskSpace(); err != nil {
fmt.Printf(" FAILED: %v\n", err)
fmt.Printf("[FAIL] FAILED: %v\n", err)
} else {
fmt.Println(" PASSED")
fmt.Println("[OK] PASSED")
checksPassed++
}
// 5. Permissions check
fmt.Print("🔐 File permissions... ")
fmt.Print("[5] File permissions... ")
if err := checkPermissions(); err != nil {
fmt.Printf(" FAILED: %v\n", err)
fmt.Printf("[FAIL] FAILED: %v\n", err)
} else {
fmt.Println(" PASSED")
fmt.Println("[OK] PASSED")
checksPassed++
}
// 6. CPU/Memory resources check
fmt.Print("🖥️ System resources... ")
fmt.Print("[6] System resources... ")
if err := checkSystemResources(); err != nil {
fmt.Printf(" FAILED: %v\n", err)
fmt.Printf("[FAIL] FAILED: %v\n", err)
} else {
fmt.Println(" PASSED")
fmt.Println("[OK] PASSED")
checksPassed++
}
@@ -294,10 +294,10 @@ func runPreflight(ctx context.Context) error {
fmt.Printf("Results: %d/%d checks passed\n", checksPassed, totalChecks)
if checksPassed == totalChecks {
fmt.Println("🎉 All preflight checks passed! System is ready for backup operations.")
fmt.Println("[SUCCESS] All preflight checks passed! System is ready for backup operations.")
return nil
} else {
fmt.Printf("⚠️ %d check(s) failed. Please address the issues before running backups.\n", totalChecks-checksPassed)
fmt.Printf("[WARN] %d check(s) failed. Please address the issues before running backups.\n", totalChecks-checksPassed)
return fmt.Errorf("preflight checks failed: %d/%d passed", checksPassed, totalChecks)
}
}
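All six checks share the same print/fail/pass shape; a table-driven loop would remove the repetition. A sketch reusing the existing check functions as-is (their func() error signatures are taken from the calls above):

// Sketch: same behaviour as above, table-driven. The check functions
// are the ones already called in this file.
checks := []struct {
	label string
	run   func() error
}{
	{"Database connectivity", testDatabaseConnection},
	{"Required tools (pg_dump/pg_restore)", checkRequiredTools},
	{"Backup directory access", checkBackupDirectory},
	{"Available disk space", checkDiskSpace},
	{"File permissions", checkPermissions},
	{"System resources", checkSystemResources},
}

checksPassed := 0
for i, c := range checks {
	fmt.Printf("[%d] %s... ", i+1, c.label)
	if err := c.run(); err != nil {
		fmt.Printf("[FAIL] FAILED: %v\n", err)
		continue
	}
	fmt.Println("[OK] PASSED")
	checksPassed++
}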
@@ -414,44 +414,44 @@ func runRestore(ctx context.Context, archiveName string) error {
fmt.Println()
// Show warning
fmt.Println("⚠️ WARNING: This will restore data to the target database.")
fmt.Println("[WARN] WARNING: This will restore data to the target database.")
fmt.Println(" Existing data may be overwritten or merged depending on the restore method.")
fmt.Println()
// For safety, show what would be done without actually doing it
switch archiveType {
case "Single Database (.dump)":
fmt.Println("🔄 Would execute: pg_restore to restore single database")
fmt.Println("[EXEC] Would execute: pg_restore to restore single database")
fmt.Printf(" Command: pg_restore -h %s -p %d -U %s -d %s --verbose %s\n",
cfg.Host, cfg.Port, cfg.User, cfg.Database, archivePath)
case "Single Database (.dump.gz)":
fmt.Println("🔄 Would execute: gunzip and pg_restore to restore single database")
fmt.Println("[EXEC] Would execute: gunzip and pg_restore to restore single database")
fmt.Printf(" Command: gunzip -c %s | pg_restore -h %s -p %d -U %s -d %s --verbose\n",
archivePath, cfg.Host, cfg.Port, cfg.User, cfg.Database)
case "SQL Script (.sql)":
if cfg.IsPostgreSQL() {
fmt.Println("🔄 Would execute: psql to run SQL script")
fmt.Println("[EXEC] Would execute: psql to run SQL script")
fmt.Printf(" Command: psql -h %s -p %d -U %s -d %s -f %s\n",
cfg.Host, cfg.Port, cfg.User, cfg.Database, archivePath)
} else if cfg.IsMySQL() {
fmt.Println("🔄 Would execute: mysql to run SQL script")
fmt.Println("[EXEC] Would execute: mysql to run SQL script")
fmt.Printf(" Command: %s\n", mysqlRestoreCommand(archivePath, false))
} else {
fmt.Println("🔄 Would execute: SQL client to run script (database type unknown)")
fmt.Println("[EXEC] Would execute: SQL client to run script (database type unknown)")
}
case "SQL Script (.sql.gz)":
if cfg.IsPostgreSQL() {
fmt.Println("🔄 Would execute: gunzip and psql to run SQL script")
fmt.Println("[EXEC] Would execute: gunzip and psql to run SQL script")
fmt.Printf(" Command: gunzip -c %s | psql -h %s -p %d -U %s -d %s\n",
archivePath, cfg.Host, cfg.Port, cfg.User, cfg.Database)
} else if cfg.IsMySQL() {
fmt.Println("🔄 Would execute: gunzip and mysql to run SQL script")
fmt.Println("[EXEC] Would execute: gunzip and mysql to run SQL script")
fmt.Printf(" Command: %s\n", mysqlRestoreCommand(archivePath, true))
} else {
fmt.Println("🔄 Would execute: gunzip and SQL client to run script (database type unknown)")
fmt.Println("[EXEC] Would execute: gunzip and SQL client to run script (database type unknown)")
}
case "Cluster Backup (.tar.gz)":
fmt.Println("🔄 Would execute: Extract and restore cluster backup")
fmt.Println("[EXEC] Would execute: Extract and restore cluster backup")
fmt.Println(" Steps:")
fmt.Println(" 1. Extract tar.gz archive")
fmt.Println(" 2. Restore global objects (roles, tablespaces)")
@@ -461,7 +461,7 @@ func runRestore(ctx context.Context, archiveName string) error {
}
fmt.Println()
fmt.Println("🛡️ SAFETY MODE: Restore command is in preview mode.")
fmt.Println("[SAFETY] SAFETY MODE: Restore command is in preview mode.")
fmt.Println(" This shows what would be executed without making changes.")
fmt.Println(" To enable actual restore, add --confirm flag (not yet implemented).")
@@ -520,25 +520,25 @@ func runVerify(ctx context.Context, archiveName string) error {
checksPassed := 0
// Basic file existence and readability
fmt.Print("📁 File accessibility... ")
fmt.Print("[CHK] File accessibility... ")
if file, err := os.Open(archivePath); err != nil {
fmt.Printf(" FAILED: %v\n", err)
fmt.Printf("[FAIL] FAILED: %v\n", err)
} else {
file.Close()
fmt.Println(" PASSED")
fmt.Println("[OK] PASSED")
checksPassed++
}
checksRun++
// File size sanity check
fmt.Print("📏 File size check... ")
fmt.Print("[CHK] File size check... ")
if stat.Size() == 0 {
fmt.Println(" FAILED: File is empty")
fmt.Println("[FAIL] FAILED: File is empty")
} else if stat.Size() < 100 {
fmt.Println("⚠️ WARNING: File is very small (< 100 bytes)")
fmt.Println("[WARN] WARNING: File is very small (< 100 bytes)")
checksPassed++
} else {
fmt.Println(" PASSED")
fmt.Println("[OK] PASSED")
checksPassed++
}
checksRun++
@@ -546,51 +546,51 @@ func runVerify(ctx context.Context, archiveName string) error {
// Type-specific verification
switch archiveType {
case "Single Database (.dump)":
fmt.Print("🔍 PostgreSQL dump format check... ")
fmt.Print("[CHK] PostgreSQL dump format check... ")
if err := verifyPgDump(archivePath); err != nil {
fmt.Printf(" FAILED: %v\n", err)
fmt.Printf("[FAIL] FAILED: %v\n", err)
} else {
fmt.Println(" PASSED")
fmt.Println("[OK] PASSED")
checksPassed++
}
checksRun++
case "Single Database (.dump.gz)":
fmt.Print("🔍 PostgreSQL dump format check (gzip)... ")
fmt.Print("[CHK] PostgreSQL dump format check (gzip)... ")
if err := verifyPgDumpGzip(archivePath); err != nil {
fmt.Printf(" FAILED: %v\n", err)
fmt.Printf("[FAIL] FAILED: %v\n", err)
} else {
fmt.Println(" PASSED")
fmt.Println("[OK] PASSED")
checksPassed++
}
checksRun++
case "SQL Script (.sql)":
fmt.Print("📜 SQL script validation... ")
fmt.Print("[CHK] SQL script validation... ")
if err := verifySqlScript(archivePath); err != nil {
fmt.Printf(" FAILED: %v\n", err)
fmt.Printf("[FAIL] FAILED: %v\n", err)
} else {
fmt.Println(" PASSED")
fmt.Println("[OK] PASSED")
checksPassed++
}
checksRun++
case "SQL Script (.sql.gz)":
fmt.Print("📜 SQL script validation (gzip)... ")
fmt.Print("[CHK] SQL script validation (gzip)... ")
if err := verifyGzipSqlScript(archivePath); err != nil {
fmt.Printf(" FAILED: %v\n", err)
fmt.Printf("[FAIL] FAILED: %v\n", err)
} else {
fmt.Println(" PASSED")
fmt.Println("[OK] PASSED")
checksPassed++
}
checksRun++
case "Cluster Backup (.tar.gz)":
fmt.Print("📦 Archive extraction test... ")
fmt.Print("[CHK] Archive extraction test... ")
if err := verifyTarGz(archivePath); err != nil {
fmt.Printf(" FAILED: %v\n", err)
fmt.Printf("[FAIL] FAILED: %v\n", err)
} else {
fmt.Println(" PASSED")
fmt.Println("[OK] PASSED")
checksPassed++
}
checksRun++
@@ -598,11 +598,11 @@ func runVerify(ctx context.Context, archiveName string) error {
// Check for metadata file
metadataPath := archivePath + ".info"
fmt.Print("📋 Metadata file check... ")
fmt.Print("[CHK] Metadata file check... ")
if _, err := os.Stat(metadataPath); os.IsNotExist(err) {
fmt.Println("⚠️ WARNING: No metadata file found")
fmt.Println("[WARN] WARNING: No metadata file found")
} else {
fmt.Println(" PASSED")
fmt.Println("[OK] PASSED")
checksPassed++
}
checksRun++
@@ -611,13 +611,13 @@ func runVerify(ctx context.Context, archiveName string) error {
fmt.Printf("Verification Results: %d/%d checks passed\n", checksPassed, checksRun)
if checksPassed == checksRun {
fmt.Println("🎉 Archive verification completed successfully!")
fmt.Println("[SUCCESS] Archive verification completed successfully!")
return nil
} else if float64(checksPassed)/float64(checksRun) >= 0.8 {
fmt.Println("⚠️ Archive verification completed with warnings.")
fmt.Println("[WARN] Archive verification completed with warnings.")
return nil
} else {
fmt.Println(" Archive verification failed. Archive may be corrupted.")
fmt.Println("[FAIL] Archive verification failed. Archive may be corrupted.")
return fmt.Errorf("verification failed: %d/%d checks passed", checksPassed, checksRun)
}
}
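The format checks above (verifyPgDump, verifyPgDumpGzip, ...) are not shown in this diff; one plausible implementation reads just the magic bytes — pg_dump custom-format archives start with the literal PGDMP, and gzip streams with 0x1f 0x8b. A self-contained sketch:

package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
	"os"
)

// verifyPgDumpMagic checks the "PGDMP" header of a pg_dump custom-format
// archive, optionally through a gzip wrapper. Sketch only - the real
// verifyPgDump/verifyPgDumpGzip implementations may differ.
func verifyPgDumpMagic(path string, gzipped bool) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	var r io.Reader = f
	if gzipped {
		gz, err := gzip.NewReader(f) // also validates the 0x1f 0x8b gzip magic
		if err != nil {
			return fmt.Errorf("not a valid gzip stream: %w", err)
		}
		defer gz.Close()
		r = gz
	}

	magic := make([]byte, 5)
	if _, err := io.ReadFull(r, magic); err != nil {
		return fmt.Errorf("reading header: %w", err)
	}
	if !bytes.Equal(magic, []byte("PGDMP")) {
		return fmt.Errorf("missing PGDMP magic, got %q", magic)
	}
	return nil
}

func main() {
	if err := verifyPgDumpMagic(os.Args[1], false); err != nil {
		fmt.Println("[FAIL]", err)
		os.Exit(1)
	}
	fmt.Println("[OK] PGDMP header present")
}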

View File

@@ -342,7 +342,7 @@ func runRestoreDiagnose(cmd *cobra.Command, args []string) error {
return fmt.Errorf("archive not found: %s", archivePath)
}
log.Info("🔍 Diagnosing backup file", "path", archivePath)
log.Info("[DIAG] Diagnosing backup file", "path", archivePath)
diagnoser := restore.NewDiagnoser(log, restoreVerbose)
@@ -350,10 +350,11 @@ func runRestoreDiagnose(cmd *cobra.Command, args []string) error {
format := restore.DetectArchiveFormat(archivePath)
if format.IsClusterBackup() && diagnoseDeep {
// Create temp directory for extraction
tempDir, err := os.MkdirTemp("", "dbbackup-diagnose-*")
// Create temp directory for extraction in configured WorkDir
workDir := cfg.GetEffectiveWorkDir()
tempDir, err := os.MkdirTemp(workDir, "dbbackup-diagnose-*")
if err != nil {
return fmt.Errorf("failed to create temp directory: %w", err)
return fmt.Errorf("failed to create temp directory in %s: %w", workDir, err)
}
if !diagnoseKeepTemp {
@@ -386,7 +387,7 @@ func runRestoreDiagnose(cmd *cobra.Command, args []string) error {
// Summary
if !diagnoseJSON {
fmt.Println("\n" + strings.Repeat("=", 70))
fmt.Printf("📊 CLUSTER SUMMARY: %d databases analyzed\n", len(results))
fmt.Printf("[SUMMARY] CLUSTER SUMMARY: %d databases analyzed\n", len(results))
validCount := 0
for _, r := range results {
@@ -396,9 +397,9 @@ func runRestoreDiagnose(cmd *cobra.Command, args []string) error {
}
if validCount == len(results) {
fmt.Println(" All dumps are valid")
fmt.Println("[OK] All dumps are valid")
} else {
fmt.Printf(" %d/%d dumps have issues\n", len(results)-validCount, len(results))
fmt.Printf("[FAIL] %d/%d dumps have issues\n", len(results)-validCount, len(results))
}
fmt.Println(strings.Repeat("=", 70))
}
@@ -425,7 +426,7 @@ func runRestoreDiagnose(cmd *cobra.Command, args []string) error {
return fmt.Errorf("backup file has validation errors")
}
log.Info(" Backup file appears valid")
log.Info("[OK] Backup file appears valid")
return nil
}
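Both diagnose paths in this file now create their scratch directory under the effective WorkDir rather than the system /tmp, which matters when extracting 100GB+ cluster archives. A hypothetical helper unifying the two call sites (newDiagnoseDir is not in the diff):

// newDiagnoseDir creates a diagnosis scratch directory under the configured
// WorkDir and returns a cleanup func. Hypothetical helper, not in the diff.
func newDiagnoseDir(cfg interface{ GetEffectiveWorkDir() string }, keep bool) (string, func(), error) {
	workDir := cfg.GetEffectiveWorkDir()
	dir, err := os.MkdirTemp(workDir, "dbbackup-diagnose-*")
	if err != nil {
		return "", nil, fmt.Errorf("failed to create temp directory in %s: %w", workDir, err)
	}
	cleanup := func() {}
	if !keep {
		cleanup = func() { os.RemoveAll(dir) } // skipped with --keep-temp
	}
	return dir, cleanup, nil
}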
@@ -544,7 +545,7 @@ func runRestoreSingle(cmd *cobra.Command, args []string) error {
isDryRun := restoreDryRun || !restoreConfirm
if isDryRun {
fmt.Println("\n🔍 DRY-RUN MODE - No changes will be made")
fmt.Println("\n[DRY-RUN] DRY-RUN MODE - No changes will be made")
fmt.Printf("\nWould restore:\n")
fmt.Printf(" Archive: %s\n", archivePath)
fmt.Printf(" Format: %s\n", format.String())
@@ -587,7 +588,7 @@ func runRestoreSingle(cmd *cobra.Command, args []string) error {
// Run pre-restore diagnosis if requested
if restoreDiagnose {
log.Info("🔍 Running pre-restore diagnosis...")
log.Info("[DIAG] Running pre-restore diagnosis...")
diagnoser := restore.NewDiagnoser(log, restoreVerbose)
result, err := diagnoser.DiagnoseFile(archivePath)
@@ -598,7 +599,7 @@ func runRestoreSingle(cmd *cobra.Command, args []string) error {
diagnoser.PrintDiagnosis(result)
if !result.IsValid {
log.Error(" Pre-restore diagnosis found issues")
log.Error("[FAIL] Pre-restore diagnosis found issues")
if result.IsTruncated {
log.Error(" The backup file appears to be TRUNCATED")
}
@@ -612,7 +613,7 @@ func runRestoreSingle(cmd *cobra.Command, args []string) error {
}
log.Warn("Continuing despite diagnosis errors (--force enabled)")
} else {
log.Info(" Backup file passed diagnosis")
log.Info("[OK] Backup file passed diagnosis")
}
}
@@ -632,7 +633,7 @@ func runRestoreSingle(cmd *cobra.Command, args []string) error {
// Audit log: restore success
auditLogger.LogRestoreComplete(user, targetDB, time.Since(startTime))
log.Info(" Restore completed successfully", "database", targetDB)
log.Info("[OK] Restore completed successfully", "database", targetDB)
return nil
}
@@ -700,7 +701,7 @@ func runRestoreCluster(cmd *cobra.Command, args []string) error {
}
}
log.Warn("⚠️ Using alternative working directory for extraction")
log.Warn("[WARN] Using alternative working directory for extraction")
log.Warn(" This is recommended when system disk space is limited")
log.Warn(" Location: " + restoreWorkdir)
}
@@ -753,7 +754,7 @@ func runRestoreCluster(cmd *cobra.Command, args []string) error {
isDryRun := restoreDryRun || !restoreConfirm
if isDryRun {
fmt.Println("\n🔍 DRY-RUN MODE - No changes will be made")
fmt.Println("\n[DRY-RUN] DRY-RUN MODE - No changes will be made")
fmt.Printf("\nWould restore cluster:\n")
fmt.Printf(" Archive: %s\n", archivePath)
fmt.Printf(" Parallel Jobs: %d (0 = auto)\n", restoreJobs)
@@ -763,7 +764,7 @@ func runRestoreCluster(cmd *cobra.Command, args []string) error {
if restoreCleanCluster {
fmt.Printf(" Clean Cluster: true (will drop %d existing database(s))\n", len(existingDBs))
if len(existingDBs) > 0 {
fmt.Printf("\n⚠️ Databases to be dropped:\n")
fmt.Printf("\n[WARN] Databases to be dropped:\n")
for _, dbName := range existingDBs {
fmt.Printf(" - %s\n", dbName)
}
@@ -775,7 +776,7 @@ func runRestoreCluster(cmd *cobra.Command, args []string) error {
// Warning for clean-cluster
if restoreCleanCluster && len(existingDBs) > 0 {
log.Warn("🔥 Clean cluster mode enabled")
log.Warn("[!!] Clean cluster mode enabled")
log.Warn(fmt.Sprintf(" %d existing database(s) will be DROPPED before restore!", len(existingDBs)))
for _, dbName := range existingDBs {
log.Warn(" - " + dbName)
@@ -828,12 +829,13 @@ func runRestoreCluster(cmd *cobra.Command, args []string) error {
// Run pre-restore diagnosis if requested
if restoreDiagnose {
log.Info("🔍 Running pre-restore diagnosis...")
log.Info("[DIAG] Running pre-restore diagnosis...")
// Create temp directory for extraction
diagTempDir, err := os.MkdirTemp("", "dbbackup-diagnose-*")
// Create temp directory for extraction in configured WorkDir
workDir := cfg.GetEffectiveWorkDir()
diagTempDir, err := os.MkdirTemp(workDir, "dbbackup-diagnose-*")
if err != nil {
return fmt.Errorf("failed to create temp directory for diagnosis: %w", err)
return fmt.Errorf("failed to create temp directory for diagnosis in %s: %w", workDir, err)
}
defer os.RemoveAll(diagTempDir)
@@ -853,10 +855,10 @@ func runRestoreCluster(cmd *cobra.Command, args []string) error {
}
if len(invalidDumps) > 0 {
log.Error(" Pre-restore diagnosis found issues",
log.Error("[FAIL] Pre-restore diagnosis found issues",
"invalid_dumps", len(invalidDumps),
"total_dumps", len(results))
fmt.Println("\n⚠️ The following dumps have issues and will likely fail during restore:")
fmt.Println("\n[WARN] The following dumps have issues and will likely fail during restore:")
for _, name := range invalidDumps {
fmt.Printf(" - %s\n", name)
}
@@ -868,7 +870,7 @@ func runRestoreCluster(cmd *cobra.Command, args []string) error {
}
log.Warn("Continuing despite diagnosis errors (--force enabled)")
} else {
log.Info(" All dumps passed diagnosis", "count", len(results))
log.Info("[OK] All dumps passed diagnosis", "count", len(results))
}
}
@@ -888,7 +890,7 @@ func runRestoreCluster(cmd *cobra.Command, args []string) error {
// Audit log: restore success
auditLogger.LogRestoreComplete(user, "all_databases", time.Since(startTime))
log.Info(" Cluster restore completed successfully")
log.Info("[OK] Cluster restore completed successfully")
return nil
}
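go.mod below adds hashicorp/go-multierror, which lets this cluster path report every failed database rather than a bare count. A sketch of thread-safe aggregation across parallel per-database restores — restoreOneDatabase is a hypothetical stand-in for the real worker, and the mutex is required because multierror.Append is not safe for concurrent use:

// Sketch: aggregate every per-database failure instead of a bare count.
var (
	mu     sync.Mutex        // guards merged: multierror.Append is not goroutine-safe
	merged *multierror.Error // import "github.com/hashicorp/go-multierror"
)

var wg sync.WaitGroup
for _, db := range databases {
	wg.Add(1)
	go func(name string) {
		defer wg.Done()
		if err := restoreOneDatabase(ctx, name); err != nil { // hypothetical worker
			mu.Lock()
			merged = multierror.Append(merged, fmt.Errorf("database %s: %w", name, err))
			mu.Unlock()
		}
	}(db)
}
wg.Wait()

if err := merged.ErrorOrNil(); err != nil {
	return fmt.Errorf("cluster restore finished with errors: %w", err)
}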
@@ -937,7 +939,7 @@ func runRestoreList(cmd *cobra.Command, args []string) error {
}
// Print header
fmt.Printf("\n📦 Available backup archives in %s\n\n", backupDir)
fmt.Printf("\n[LIST] Available backup archives in %s\n\n", backupDir)
fmt.Printf("%-40s %-25s %-12s %-20s %s\n",
"FILENAME", "FORMAT", "SIZE", "MODIFIED", "DATABASE")
fmt.Println(strings.Repeat("-", 120))
@@ -1054,9 +1056,9 @@ func runRestorePITR(cmd *cobra.Command, args []string) error {
}
// Display recovery target info
log.Info("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━")
log.Info("=====================================================")
log.Info(" Point-in-Time Recovery (PITR)")
log.Info("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━")
log.Info("=====================================================")
log.Info("")
log.Info(target.String())
log.Info("")
@@ -1080,6 +1082,6 @@ func runRestorePITR(cmd *cobra.Command, args []string) error {
return fmt.Errorf("PITR restore failed: %w", err)
}
log.Info(" PITR restore completed successfully")
log.Info("[OK] PITR restore completed successfully")
return nil
}

View File

@@ -181,13 +181,13 @@ func runRTOStatus(cmd *cobra.Command, args []string) error {
// Display status
fmt.Println()
fmt.Println("╔═══════════════════════════════════════════════════════════╗")
fmt.Println(" RTO/RPO STATUS SUMMARY ")
fmt.Println("╠═══════════════════════════════════════════════════════════╣")
fmt.Printf(" Target RTO: %-15s Target RPO: %-15s \n",
fmt.Println("+-----------------------------------------------------------+")
fmt.Println("| RTO/RPO STATUS SUMMARY |")
fmt.Println("+-----------------------------------------------------------+")
fmt.Printf("| Target RTO: %-15s Target RPO: %-15s |\n",
formatDuration(config.TargetRTO),
formatDuration(config.TargetRPO))
fmt.Println("╠═══════════════════════════════════════════════════════════╣")
fmt.Println("+-----------------------------------------------------------+")
// Compliance status
rpoRate := 0.0
@@ -199,31 +199,31 @@ func runRTOStatus(cmd *cobra.Command, args []string) error {
fullRate = float64(summary.FullyCompliant) / float64(summary.TotalDatabases) * 100
}
fmt.Printf(" Databases: %-5d \n", summary.TotalDatabases)
fmt.Printf(" RPO Compliant: %-5d (%.0f%%) \n", summary.RPOCompliant, rpoRate)
fmt.Printf(" RTO Compliant: %-5d (%.0f%%) \n", summary.RTOCompliant, rtoRate)
fmt.Printf(" Fully Compliant: %-3d (%.0f%%) \n", summary.FullyCompliant, fullRate)
fmt.Printf("| Databases: %-5d |\n", summary.TotalDatabases)
fmt.Printf("| RPO Compliant: %-5d (%.0f%%) |\n", summary.RPOCompliant, rpoRate)
fmt.Printf("| RTO Compliant: %-5d (%.0f%%) |\n", summary.RTOCompliant, rtoRate)
fmt.Printf("| Fully Compliant: %-3d (%.0f%%) |\n", summary.FullyCompliant, fullRate)
if summary.CriticalIssues > 0 {
fmt.Printf(" ⚠️ Critical Issues: %-3d \n", summary.CriticalIssues)
fmt.Printf("| [WARN] Critical Issues: %-3d |\n", summary.CriticalIssues)
}
fmt.Println("╠═══════════════════════════════════════════════════════════╣")
fmt.Printf(" Average RPO: %-15s Worst: %-15s \n",
fmt.Println("+-----------------------------------------------------------+")
fmt.Printf("| Average RPO: %-15s Worst: %-15s |\n",
formatDuration(summary.AverageRPO),
formatDuration(summary.WorstRPO))
fmt.Printf(" Average RTO: %-15s Worst: %-15s \n",
fmt.Printf("| Average RTO: %-15s Worst: %-15s |\n",
formatDuration(summary.AverageRTO),
formatDuration(summary.WorstRTO))
if summary.WorstRPODatabase != "" {
fmt.Printf(" Worst RPO Database: %-38s\n", summary.WorstRPODatabase)
fmt.Printf("| Worst RPO Database: %-38s|\n", summary.WorstRPODatabase)
}
if summary.WorstRTODatabase != "" {
fmt.Printf(" Worst RTO Database: %-38s\n", summary.WorstRTODatabase)
fmt.Printf("| Worst RTO Database: %-38s|\n", summary.WorstRTODatabase)
}
fmt.Println("╚═══════════════════════════════════════════════════════════╝")
fmt.Println("+-----------------------------------------------------------+")
fmt.Println()
// Per-database status
@@ -234,19 +234,19 @@ func runRTOStatus(cmd *cobra.Command, args []string) error {
fmt.Println(strings.Repeat("-", 70))
for _, a := range analyses {
status := ""
status := "[OK]"
if !a.RPOCompliant || !a.RTOCompliant {
status = ""
status = "[FAIL]"
}
rpoStr := formatDuration(a.CurrentRPO)
rtoStr := formatDuration(a.CurrentRTO)
if !a.RPOCompliant {
rpoStr = "⚠️ " + rpoStr
rpoStr = "[WARN] " + rpoStr
}
if !a.RTOCompliant {
rtoStr = "⚠️ " + rtoStr
rtoStr = "[WARN] " + rtoStr
}
fmt.Printf("%-25s %-12s %-12s %s\n",
@@ -306,21 +306,21 @@ func runRTOCheck(cmd *cobra.Command, args []string) error {
exitCode := 0
for _, a := range analyses {
if !a.RPOCompliant {
fmt.Printf(" %s: RPO violation - current %s exceeds target %s\n",
fmt.Printf("[FAIL] %s: RPO violation - current %s exceeds target %s\n",
a.Database,
formatDuration(a.CurrentRPO),
formatDuration(config.TargetRPO))
exitCode = 1
}
if !a.RTOCompliant {
fmt.Printf(" %s: RTO violation - estimated %s exceeds target %s\n",
fmt.Printf("[FAIL] %s: RTO violation - estimated %s exceeds target %s\n",
a.Database,
formatDuration(a.CurrentRTO),
formatDuration(config.TargetRTO))
exitCode = 1
}
if a.RPOCompliant && a.RTOCompliant {
fmt.Printf(" %s: Compliant (RPO: %s, RTO: %s)\n",
fmt.Printf("[OK] %s: Compliant (RPO: %s, RTO: %s)\n",
a.Database,
formatDuration(a.CurrentRPO),
formatDuration(a.CurrentRTO))
@@ -371,13 +371,13 @@ func outputAnalysisText(analyses []*rto.Analysis) error {
fmt.Println(strings.Repeat("=", 60))
// Status
rpoStatus := " Compliant"
rpoStatus := "[OK] Compliant"
if !a.RPOCompliant {
rpoStatus = " Violation"
rpoStatus = "[FAIL] Violation"
}
rtoStatus := " Compliant"
rtoStatus := "[OK] Compliant"
if !a.RTOCompliant {
rtoStatus = " Violation"
rtoStatus = "[FAIL] Violation"
}
fmt.Println()
@@ -420,7 +420,7 @@ func outputAnalysisText(analyses []*rto.Analysis) error {
fmt.Println(" Recommendations:")
fmt.Println(strings.Repeat("-", 50))
for _, r := range a.Recommendations {
icon := "💡"
icon := "[TIP]"
switch r.Priority {
case rto.PriorityCritical:
icon = "🔴"

View File

@@ -141,7 +141,7 @@ func testConnection(ctx context.Context) error {
// Display results
fmt.Println("Connection Test Results:")
fmt.Printf(" Status: Connected \n")
fmt.Printf(" Status: Connected [OK]\n")
fmt.Printf(" Version: %s\n", version)
fmt.Printf(" Databases: %d found\n", len(databases))
@@ -167,7 +167,7 @@ func testConnection(ctx context.Context) error {
}
fmt.Println()
fmt.Println(" Status check completed successfully!")
fmt.Println("[OK] Status check completed successfully!")
return nil
}
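With the pgx stdlib driver already in go.mod, the connection test above amounts to a ping plus two queries. A rough sketch — the dsn variable and the exact queries are assumptions:

// Sketch: dsn is an assumption, e.g. "postgres://user:pass@host:5432/postgres".
// The blank import "github.com/jackc/pgx/v5/stdlib" registers driver name "pgx".
db, err := sql.Open("pgx", dsn)
if err != nil {
	return err
}
defer db.Close()

if err := db.PingContext(ctx); err != nil {
	return fmt.Errorf("connection failed: %w", err)
}

var version string
if err := db.QueryRowContext(ctx, "SELECT version()").Scan(&version); err != nil {
	return err
}
var dbCount int
if err := db.QueryRowContext(ctx,
	"SELECT count(*) FROM pg_database WHERE NOT datistemplate").Scan(&dbCount); err != nil {
	return err
}

fmt.Printf(" Status:    Connected [OK]\n")
fmt.Printf(" Version:   %s\n", version)
fmt.Printf(" Databases: %d found\n", dbCount)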

View File

@@ -96,17 +96,17 @@ func runVerifyBackup(cmd *cobra.Command, args []string) error {
continue
}
fmt.Printf("📁 %s\n", filepath.Base(backupFile))
fmt.Printf("[FILE] %s\n", filepath.Base(backupFile))
if quickVerify {
// Quick check: size only
err := verification.QuickCheck(backupFile)
if err != nil {
fmt.Printf(" FAILED: %v\n\n", err)
fmt.Printf(" [FAIL] FAILED: %v\n\n", err)
failureCount++
continue
}
fmt.Printf(" VALID (quick check)\n\n")
fmt.Printf(" [OK] VALID (quick check)\n\n")
successCount++
} else {
// Full verification with SHA-256
@@ -116,7 +116,7 @@ func runVerifyBackup(cmd *cobra.Command, args []string) error {
}
if result.Valid {
fmt.Printf(" VALID\n")
fmt.Printf(" [OK] VALID\n")
if verboseVerify {
meta, _ := metadata.Load(backupFile)
fmt.Printf(" Size: %s\n", metadata.FormatSize(meta.SizeBytes))
@@ -127,7 +127,7 @@ func runVerifyBackup(cmd *cobra.Command, args []string) error {
fmt.Println()
successCount++
} else {
fmt.Printf(" FAILED: %v\n", result.Error)
fmt.Printf(" [FAIL] FAILED: %v\n", result.Error)
if verboseVerify {
if !result.FileExists {
fmt.Printf(" File does not exist\n")
@@ -147,11 +147,11 @@ func runVerifyBackup(cmd *cobra.Command, args []string) error {
}
// Summary
fmt.Println(strings.Repeat("", 50))
fmt.Println(strings.Repeat("-", 50))
fmt.Printf("Total: %d backups\n", len(backupFiles))
fmt.Printf(" Valid: %d\n", successCount)
fmt.Printf("[OK] Valid: %d\n", successCount)
if failureCount > 0 {
fmt.Printf(" Failed: %d\n", failureCount)
fmt.Printf("[FAIL] Failed: %d\n", failureCount)
os.Exit(1)
}
@@ -195,16 +195,16 @@ func runVerifyCloudBackup(cmd *cobra.Command, args []string) error {
for _, uri := range args {
if !isCloudURI(uri) {
fmt.Printf("⚠️ Skipping non-cloud URI: %s\n", uri)
fmt.Printf("[WARN] Skipping non-cloud URI: %s\n", uri)
continue
}
fmt.Printf("☁️ %s\n", uri)
fmt.Printf("[CLOUD] %s\n", uri)
// Download and verify
result, err := verifyCloudBackup(cmd.Context(), uri, quickVerify, verboseVerify)
if err != nil {
fmt.Printf(" FAILED: %v\n\n", err)
fmt.Printf(" [FAIL] FAILED: %v\n\n", err)
failureCount++
continue
}
@@ -212,7 +212,7 @@ func runVerifyCloudBackup(cmd *cobra.Command, args []string) error {
// Cleanup temp file
defer result.Cleanup()
fmt.Printf(" VALID\n")
fmt.Printf(" [OK] VALID\n")
if verboseVerify && result.MetadataPath != "" {
meta, _ := metadata.Load(result.MetadataPath)
if meta != nil {
@@ -226,7 +226,7 @@ func runVerifyCloudBackup(cmd *cobra.Command, args []string) error {
successCount++
}
fmt.Printf("\n Summary: %d valid, %d failed\n", successCount, failureCount)
fmt.Printf("\n[OK] Summary: %d valid, %d failed\n", successCount, failureCount)
if failureCount > 0 {
os.Exit(1)

34
go.mod
View File

@@ -5,15 +5,27 @@ go 1.24.0
toolchain go1.24.9
require (
github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2
cloud.google.com/go/storage v1.57.2
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.20.0
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3
github.com/aws/aws-sdk-go-v2 v1.40.0
github.com/aws/aws-sdk-go-v2/config v1.32.2
github.com/aws/aws-sdk-go-v2/credentials v1.19.2
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.12
github.com/aws/aws-sdk-go-v2/service/s3 v1.92.1
github.com/charmbracelet/bubbles v0.21.0
github.com/charmbracelet/bubbletea v1.3.10
github.com/charmbracelet/lipgloss v1.1.0
github.com/dustin/go-humanize v1.0.1
github.com/go-sql-driver/mysql v1.9.3
github.com/jackc/pgx/v5 v5.7.6
github.com/mattn/go-sqlite3 v1.14.32
github.com/shirou/gopsutil/v3 v3.24.5
github.com/sirupsen/logrus v1.9.3
github.com/spf13/cobra v1.10.1
github.com/spf13/pflag v1.0.9
golang.org/x/crypto v0.43.0
google.golang.org/api v0.256.0
)
require (
@@ -24,20 +36,13 @@ require (
cloud.google.com/go/compute/metadata v0.9.0 // indirect
cloud.google.com/go/iam v1.5.2 // indirect
cloud.google.com/go/monitoring v1.24.2 // indirect
cloud.google.com/go/storage v1.57.2 // indirect
filippo.io/edwards25519 v1.1.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.20.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2 // indirect
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3 // indirect
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.29.0 // indirect
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.53.0 // indirect
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.53.0 // indirect
github.com/aws/aws-sdk-go-v2 v1.40.0 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.3 // indirect
github.com/aws/aws-sdk-go-v2/config v1.32.2 // indirect
github.com/aws/aws-sdk-go-v2/credentials v1.19.2 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.14 // indirect
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.12 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.14 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.14 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.4 // indirect
@@ -46,7 +51,6 @@ require (
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.5 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.14 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.14 // indirect
github.com/aws/aws-sdk-go-v2/service/s3 v1.92.1 // indirect
github.com/aws/aws-sdk-go-v2/service/signin v1.0.2 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.30.5 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.10 // indirect
@@ -59,7 +63,6 @@ require (
github.com/charmbracelet/x/cellbuf v0.0.13-0.20250311204145-2c3ea96c31dd // indirect
github.com/charmbracelet/x/term v0.2.1 // indirect
github.com/cncf/xds/go v0.0.0-20250501225837-2ac532fd4443 // indirect
github.com/creack/pty v1.1.17 // indirect
github.com/envoyproxy/go-control-plane/envoy v1.32.4 // indirect
github.com/envoyproxy/protoc-gen-validate v1.2.1 // indirect
github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f // indirect
@@ -67,26 +70,33 @@ require (
github.com/go-jose/go-jose/v4 v4.1.2 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-ole/go-ole v1.2.6 // indirect
github.com/google/s2a-go v0.1.9 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.3.7 // indirect
github.com/googleapis/gax-go/v2 v2.15.0 // indirect
github.com/hashicorp/errwrap v1.0.0 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jackc/pgpassfile v1.0.0 // indirect
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
github.com/jackc/puddle/v2 v2.2.2 // indirect
github.com/lucasb-eyer/go-colorful v1.2.0 // indirect
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-localereader v0.0.1 // indirect
github.com/mattn/go-runewidth v0.0.16 // indirect
github.com/mattn/go-sqlite3 v1.14.32 // indirect
github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6 // indirect
github.com/muesli/cancelreader v0.2.2 // indirect
github.com/muesli/termenv v0.16.0 // indirect
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 // indirect
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c // indirect
github.com/rivo/uniseg v0.4.7 // indirect
github.com/spiffe/go-spiffe/v2 v2.5.0 // indirect
github.com/tklauser/go-sysconf v0.3.12 // indirect
github.com/tklauser/numcpus v0.6.1 // indirect
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e // indirect
github.com/yusufpapurcu/wmi v1.2.4 // indirect
github.com/zeebo/errs v1.4.0 // indirect
go.opentelemetry.io/auto/sdk v1.1.0 // indirect
go.opentelemetry.io/contrib/detectors/gcp v1.36.0 // indirect
@@ -97,14 +107,12 @@ require (
go.opentelemetry.io/otel/sdk v1.37.0 // indirect
go.opentelemetry.io/otel/sdk/metric v1.37.0 // indirect
go.opentelemetry.io/otel/trace v1.37.0 // indirect
golang.org/x/crypto v0.43.0 // indirect
golang.org/x/net v0.46.0 // indirect
golang.org/x/oauth2 v0.33.0 // indirect
golang.org/x/sync v0.18.0 // indirect
golang.org/x/sys v0.38.0 // indirect
golang.org/x/text v0.30.0 // indirect
golang.org/x/time v0.14.0 // indirect
google.golang.org/api v0.256.0 // indirect
google.golang.org/genproto v0.0.0-20250603155806-513f23925822 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20250818200422-3122310a409c // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101 // indirect
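Two of the dependencies promoted to direct requirements here, shirou/gopsutil/v3 and dustin/go-humanize, back the new preflight display: cross-platform memory stats rendered human-readably. A minimal sketch of the pairing:

package main

import (
	"fmt"

	"github.com/dustin/go-humanize"
	"github.com/shirou/gopsutil/v3/mem"
)

func main() {
	vm, err := mem.VirtualMemory()
	if err != nil {
		panic(err)
	}
	// e.g. "RAM: 17 GB total, 8.2 GB available (52.1% used)"
	fmt.Printf("RAM: %s total, %s available (%.1f%% used)\n",
		humanize.Bytes(vm.Total),
		humanize.Bytes(vm.Available),
		vm.UsedPercent)
}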

106
go.sum
View File

@@ -10,36 +10,44 @@ cloud.google.com/go/compute/metadata v0.9.0 h1:pDUj4QMoPejqq20dK0Pg2N4yG9zIkYGdB
cloud.google.com/go/compute/metadata v0.9.0/go.mod h1:E0bWwX5wTnLPedCKqk3pJmVgCBSM6qQI1yTBdEb3C10=
cloud.google.com/go/iam v1.5.2 h1:qgFRAGEmd8z6dJ/qyEchAuL9jpswyODjA2lS+w234g8=
cloud.google.com/go/iam v1.5.2/go.mod h1:SE1vg0N81zQqLzQEwxL2WI6yhetBdbNQuTvIKCSkUHE=
cloud.google.com/go/logging v1.13.0 h1:7j0HgAp0B94o1YRDqiqm26w4q1rDMH7XNRU34lJXHYc=
cloud.google.com/go/logging v1.13.0/go.mod h1:36CoKh6KA/M0PbhPKMq6/qety2DCAErbhXT62TuXALA=
cloud.google.com/go/longrunning v0.7.0 h1:FV0+SYF1RIj59gyoWDRi45GiYUMM3K1qO51qoboQT1E=
cloud.google.com/go/longrunning v0.7.0/go.mod h1:ySn2yXmjbK9Ba0zsQqunhDkYi0+9rlXIwnoAf+h+TPY=
cloud.google.com/go/monitoring v1.24.2 h1:5OTsoJ1dXYIiMiuL+sYscLc9BumrL3CarVLL7dd7lHM=
cloud.google.com/go/monitoring v1.24.2/go.mod h1:x7yzPWcgDRnPEv3sI+jJGBkwl5qINf+6qY4eq0I9B4U=
cloud.google.com/go/storage v1.57.2 h1:sVlym3cHGYhrp6XZKkKb+92I1V42ks2qKKpB0CF5Mb4=
cloud.google.com/go/storage v1.57.2/go.mod h1:n5ijg4yiRXXpCu0sJTD6k+eMf7GRrJmPyr9YxLXGHOk=
cloud.google.com/go/trace v1.11.6 h1:2O2zjPzqPYAHrn3OKl029qlqG6W8ZdYaOWRyr8NgMT4=
cloud.google.com/go/trace v1.11.6/go.mod h1:GA855OeDEBiBMzcckLPE2kDunIpC72N+Pq8WFieFjnI=
filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.20.0 h1:JXg2dwJUmPB9JmtVmdEB16APJ7jurfbY5jnfXpJoRMc=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.20.0/go.mod h1:YD5h/ldMsG0XiIw7PdyNhLxaM317eFh5yNLccNfGdyw=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.13.0 h1:KpMC6LFL7mqpExyMC9jVOYRiVhLmamjeZfRsUpB7l4s=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.13.0/go.mod h1:J7MUC/wtRpfGVbQ5sIItY5/FuVWmvzlY21WAOfQnq/I=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2 h1:9iefClla7iYpfYWdzPCRDozdmndjTm8DXdpCzPajMgA=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2/go.mod h1:XtLgD3ZD34DAaVIIAyG3objl5DynM3CQ/vMcbBNJZGI=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.1 h1:/Zt+cDPnpC3OVDm/JKLOs7M2DKmLRIIp3XIx9pHHiig=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.1/go.mod h1:Ng3urmn6dYe8gnbCMoHHVl5APYz2txho3koEkV2o2HA=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3 h1:ZJJNFaQ86GVKQ9ehwqyAFE6pIfyicpuJ8IkVaPBc6/4=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3/go.mod h1:URuDvhmATVKqHBH9/0nOiNKk0+YcwfQ3WkK5PqHKxc8=
github.com/AzureAD/microsoft-authentication-library-for-go v1.5.0 h1:XkkQbfMyuH2jTSjQjSoihryI8GINRcs4xp8lNawg0FI=
github.com/AzureAD/microsoft-authentication-library-for-go v1.5.0/go.mod h1:HKpQxkWaGLJ+D/5H8QRpyQXA1eKjxkFlOMwck5+33Jk=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.29.0 h1:UQUsRi8WTzhZntp5313l+CHIAT95ojUI2lpP/ExlZa4=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.29.0/go.mod h1:Cz6ft6Dkn3Et6l2v2a9/RpN7epQ1GtDlO6lj8bEcOvw=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.53.0 h1:owcC2UnmsZycprQ5RfRgjydWhuoxg71LUfyiQdijZuM=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.53.0/go.mod h1:ZPpqegjbE99EPKsu3iUWV22A04wzGPcAY/ziSIQEEgs=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/cloudmock v0.53.0 h1:4LP6hvB4I5ouTbGgWtixJhgED6xdf67twf9PoY96Tbg=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/cloudmock v0.53.0/go.mod h1:jUZ5LYlw40WMd07qxcQJD5M40aUxrfwqQX1g7zxYnrQ=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.53.0 h1:Ron4zCA/yk6U7WOBXhTJcDpsUBG9npumK6xw2auFltQ=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.53.0/go.mod h1:cSgYe11MCNYunTnRXrKiR/tHc0eoKjICUuWpNZoVCOo=
github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2 h1:+vx7roKuyA63nhn5WAunQHLTznkw5W8b1Xc0dNjp83s=
github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2/go.mod h1:HBCaDeC1lPdgDeDbhX8XFpy1jqjK0IBG8W5K+xYqA0w=
github.com/aws/aws-sdk-go-v2 v1.40.0 h1:/WMUA0kjhZExjOQN2z3oLALDREea1A7TobfuiBrKlwc=
github.com/aws/aws-sdk-go-v2 v1.40.0/go.mod h1:c9pm7VwuW0UPxAEYGyTmyurVcNrbF6Rt/wixFqDhcjE=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.3 h1:DHctwEM8P8iTXFxC/QK0MRjwEpWQeM9yzidCRjldUz0=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.3/go.mod h1:xdCzcZEtnSTKVDOmUZs4l/j3pSV6rpo1WXl5ugNsL8Y=
github.com/aws/aws-sdk-go-v2/config v1.32.1 h1:iODUDLgk3q8/flEC7ymhmxjfoAnBDwEEYEVyKZ9mzjU=
github.com/aws/aws-sdk-go-v2/config v1.32.1/go.mod h1:xoAgo17AGrPpJBSLg81W+ikM0cpOZG8ad04T2r+d5P0=
github.com/aws/aws-sdk-go-v2/config v1.32.2 h1:4liUsdEpUUPZs5WVapsJLx5NPmQhQdez7nYFcovrytk=
github.com/aws/aws-sdk-go-v2/config v1.32.2/go.mod h1:l0hs06IFz1eCT+jTacU/qZtC33nvcnLADAPL/XyrkZI=
github.com/aws/aws-sdk-go-v2/credentials v1.19.1 h1:JeW+EwmtTE0yXFK8SmklrFh/cGTTXsQJumgMZNlbxfM=
github.com/aws/aws-sdk-go-v2/credentials v1.19.1/go.mod h1:BOoXiStwTF+fT2XufhO0Efssbi1CNIO/ZXpZu87N0pw=
github.com/aws/aws-sdk-go-v2/credentials v1.19.2 h1:qZry8VUyTK4VIo5aEdUcBjPZHL2v4FyQ3QEOaWcFLu4=
github.com/aws/aws-sdk-go-v2/credentials v1.19.2/go.mod h1:YUqm5a1/kBnoK+/NY5WEiMocZihKSo15/tJdmdXnM5g=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.14 h1:WZVR5DbDgxzA0BJeudId89Kmgy6DIU4ORpxwsVHz0qA=
@@ -62,24 +70,14 @@ github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.14 h1:FIouAnCE
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.14/go.mod h1:UTwDc5COa5+guonQU8qBikJo1ZJ4ln2r1MkF7Dqag1E=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.14 h1:FzQE21lNtUor0Fb7QNgnEyiRCBlolLTX/Z1j65S7teM=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.14/go.mod h1:s1ydyWG9pm3ZwmmYN21HKyG9WzAZhYVW85wMHs5FV6w=
github.com/aws/aws-sdk-go-v2/service/s3 v1.92.0 h1:8FshVvnV2sr9kOSAbOnc/vwVmmAwMjOedKH6JW2ddPM=
github.com/aws/aws-sdk-go-v2/service/s3 v1.92.0/go.mod h1:wYNqY3L02Z3IgRYxOBPH9I1zD9Cjh9hI5QOy/eOjQvw=
github.com/aws/aws-sdk-go-v2/service/s3 v1.92.1 h1:OgQy/+0+Kc3khtqiEOk23xQAglXi3Tj0y5doOxbi5tg=
github.com/aws/aws-sdk-go-v2/service/s3 v1.92.1/go.mod h1:wYNqY3L02Z3IgRYxOBPH9I1zD9Cjh9hI5QOy/eOjQvw=
github.com/aws/aws-sdk-go-v2/service/signin v1.0.1 h1:BDgIUYGEo5TkayOWv/oBLPphWwNm/A91AebUjAu5L5g=
github.com/aws/aws-sdk-go-v2/service/signin v1.0.1/go.mod h1:iS6EPmNeqCsGo+xQmXv0jIMjyYtQfnwg36zl2FwEouk=
github.com/aws/aws-sdk-go-v2/service/signin v1.0.2 h1:MxMBdKTYBjPQChlJhi4qlEueqB1p1KcbTEa7tD5aqPs=
github.com/aws/aws-sdk-go-v2/service/signin v1.0.2/go.mod h1:iS6EPmNeqCsGo+xQmXv0jIMjyYtQfnwg36zl2FwEouk=
github.com/aws/aws-sdk-go-v2/service/sso v1.30.4 h1:U//SlnkE1wOQiIImxzdY5PXat4Wq+8rlfVEw4Y7J8as=
github.com/aws/aws-sdk-go-v2/service/sso v1.30.4/go.mod h1:av+ArJpoYf3pgyrj6tcehSFW+y9/QvAY8kMooR9bZCw=
github.com/aws/aws-sdk-go-v2/service/sso v1.30.5 h1:ksUT5KtgpZd3SAiFJNJ0AFEJVva3gjBmN7eXUZjzUwQ=
github.com/aws/aws-sdk-go-v2/service/sso v1.30.5/go.mod h1:av+ArJpoYf3pgyrj6tcehSFW+y9/QvAY8kMooR9bZCw=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.9 h1:LU8S9W/mPDAU9q0FjCLi0TrCheLMGwzbRpvUMwYspcA=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.9/go.mod h1:/j67Z5XBVDx8nZVp9EuFM9/BS5dvBznbqILGuu73hug=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.10 h1:GtsxyiF3Nd3JahRBJbxLCCdYW9ltGQYrFWg8XdkGDd8=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.10/go.mod h1:/j67Z5XBVDx8nZVp9EuFM9/BS5dvBznbqILGuu73hug=
github.com/aws/aws-sdk-go-v2/service/sts v1.41.1 h1:GdGmKtG+/Krag7VfyOXV17xjTCz0i9NT+JnqLTOI5nA=
github.com/aws/aws-sdk-go-v2/service/sts v1.41.1/go.mod h1:6TxbXoDSgBQ225Qd8Q+MbxUxUh6TtNKwbRt/EPS9xso=
github.com/aws/aws-sdk-go-v2/service/sts v1.41.2 h1:a5UTtD4mHBU3t0o6aHQZFJTNKVfxFWfPX7J0Lr7G+uY=
github.com/aws/aws-sdk-go-v2/service/sts v1.41.2/go.mod h1:6TxbXoDSgBQ225Qd8Q+MbxUxUh6TtNKwbRt/EPS9xso=
github.com/aws/smithy-go v1.23.2 h1:Crv0eatJUQhaManss33hS5r40CG3ZFH+21XSkqMrIUM=
@@ -105,13 +103,18 @@ github.com/charmbracelet/x/term v0.2.1/go.mod h1:oQ4enTYFV7QN4m0i9mzHrViD7TQKvNE
github.com/cncf/xds/go v0.0.0-20250501225837-2ac532fd4443 h1:aQ3y1lwWyqYPiWZThqv1aFbZMiM9vblcSArJRf2Irls=
github.com/cncf/xds/go v0.0.0-20250501225837-2ac532fd4443/go.mod h1:W+zGtBO5Y1IgJhy4+A9GOqVhqLpfZi+vwmdNXUehLA8=
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/creack/pty v1.1.17 h1:QeVUsEDNrLBW4tMgZHvxy18sKtr6VI492kBhUfhDJNI=
github.com/creack/pty v1.1.17/go.mod h1:MOBLtS5ELjhRRrroQr9kyvTxUAFNvYEK993ew/Vr4O4=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/envoyproxy/go-control-plane v0.13.4 h1:zEqyPVyku6IvWCFwux4x9RxkLOMUL+1vC9xUFv5l2/M=
github.com/envoyproxy/go-control-plane v0.13.4/go.mod h1:kDfuBlDVsSj2MjrLEtRWtHlsWIFcGyB2RMO44Dc5GZA=
github.com/envoyproxy/go-control-plane/envoy v1.32.4 h1:jb83lalDRZSpPWW2Z7Mck/8kXZ5CQAFYVjQcdVIr83A=
github.com/envoyproxy/go-control-plane/envoy v1.32.4/go.mod h1:Gzjc5k8JcJswLjAx1Zm+wSYE20UrLtt7JZMWiWQXQEw=
github.com/envoyproxy/go-control-plane/ratelimit v0.1.0 h1:/G9QYbddjL25KvtKTv3an9lx6VBE2cnb8wp1vEGNYGI=
github.com/envoyproxy/go-control-plane/ratelimit v0.1.0/go.mod h1:Wk+tMFAFbCXaJPzVVHnPgRKdUdwW/KdbRt94AzgRee4=
github.com/envoyproxy/protoc-gen-validate v1.2.1 h1:DEo3O99U8j4hBFwbJfrz9VtgcDfUKS7KJ7spH3d86P8=
github.com/envoyproxy/protoc-gen-validate v1.2.1/go.mod h1:d/C80l/jxXLdfEIhX1W2TmLfsJ31lvEjwamM4DxlWXU=
github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f h1:Y/CXytFA4m6baUTXGLOoWe4PQhGxaX0KpnayAqC48p4=
@@ -125,8 +128,19 @@ github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-ole/go-ole v1.2.6 h1:/Fpf6oFPoeFik9ty7siob0G6Ke8QvQEuVcuChpwXzpY=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/go-sql-driver/mysql v1.9.3 h1:U/N249h2WzJ3Ukj8SowVFjdtZKfu9vlLZxjPXV1aweo=
github.com/go-sql-driver/mysql v1.9.3/go.mod h1:qn46aNg1333BRMNU69Lq93t8du/dwxI64Gl8i5p1WMU=
github.com/golang-jwt/jwt/v5 v5.3.0 h1:pv4AsKCKKZuqlgs5sUmn4x8UlGa0kEVt/puTpKx9vvo=
github.com/golang-jwt/jwt/v5 v5.3.0/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/martian/v3 v3.3.3 h1:DIhPTQrbPkgs2yJYdXU/eNACCG5DVQjySNRNlflZ9Fc=
github.com/google/martian/v3 v3.3.3/go.mod h1:iEPrYcgCF7jA9OtScMFQyAlZZ4YXTKEtJ1E6RWzmBA0=
github.com/google/s2a-go v0.1.9 h1:LGD7gtMgezd8a/Xak7mEWL0PjoTQFvpRudN895yqKW0=
github.com/google/s2a-go v0.1.9/go.mod h1:YA0Ei2ZQL3acow2O62kdp9UlnvMmU7kA6Eutn0dXayM=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
@@ -135,6 +149,10 @@ github.com/googleapis/enterprise-certificate-proxy v0.3.7 h1:zrn2Ee/nWmHulBx5sAV
github.com/googleapis/enterprise-certificate-proxy v0.3.7/go.mod h1:MkHOF77EYAE7qfSuSS9PU6g4Nt4e11cnsDUowfwewLA=
github.com/googleapis/gax-go/v2 v2.15.0 h1:SyjDc1mGgZU5LncH8gimWo9lW1DtIfPibOG81vgd/bo=
github.com/googleapis/gax-go/v2 v2.15.0/go.mod h1:zVVkkxAQHa1RQpg9z2AUCMnKhi0Qld9rcmyfL1OZhoc=
github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
@@ -145,8 +163,12 @@ github.com/jackc/pgx/v5 v5.7.6 h1:rWQc5FwZSPX58r1OQmkuaNicxdmExaEz5A2DO2hUuTk=
github.com/jackc/pgx/v5 v5.7.6/go.mod h1:aruU7o91Tc2q2cFp5h4uP3f6ztExVpyVv88Xl/8Vl8M=
github.com/jackc/puddle/v2 v2.2.2 h1:PR8nw+E/1w0GLuRFSmiioY6UooMp6KJv0/61nB7icHo=
github.com/jackc/puddle/v2 v2.2.2/go.mod h1:vriiEXHvEE654aYKXXjOvZM39qJ0q+azkZFrfEOc3H4=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/lucasb-eyer/go-colorful v1.2.0 h1:1nnpGOrhyZZuNyfu1QjKiUICQ74+3FNCN69Aj6K7nkY=
github.com/lucasb-eyer/go-colorful v1.2.0/go.mod h1:R4dSotOR9KMtayYi1e77YzuveK+i7ruzyGqttikkLy0=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 h1:6E+4a0GO5zZEnZ81pIr0yLvtUWk2if982qA3F3QD6H4=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-localereader v0.0.1 h1:ygSAOl7ZXTx4RdPYinUpg6W99U8jWvWi9Ye2JC/oIi4=
@@ -161,14 +183,21 @@ github.com/muesli/cancelreader v0.2.2 h1:3I4Kt4BQjOR54NavqnDogx/MIoWBFa0StPA8ELU
github.com/muesli/cancelreader v0.2.2/go.mod h1:3XuTXfFS2VjM+HTLZY9Ak0l6eUKfijIfMUZ4EgX0QYo=
github.com/muesli/termenv v0.16.0 h1:S5AlUN9dENB57rsbnkPyfdGuWIlkmzJjbFf0Tf5FWUc=
github.com/muesli/termenv v0.16.0/go.mod h1:ZRfOIKPFDYQoDFF4Olj7/QJbW60Ol/kL1pU3VfY/Cnk=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c h1:+mdjkGKdHQG3305AYmdv1U2eRNDiU2ErMBj1gwrq8eQ=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c/go.mod h1:7rwL4CYBLnjLxUqIJNnCWiEdr3bn6IUYi15bNlnbCCU=
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 h1:GFCKgmp0tecUJ0sJuv4pzYCqS9+RGSn52M3FUwPs+uo=
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10/go.mod h1:t/avpk3KcrXxUnYOhZhMXJlSEyie6gQbtLq5NM3loB8=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c h1:ncq/mPwQF4JjgDlrVEn3C11VoGHZN7m8qihwgMEtzYw=
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=
github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/shirou/gopsutil/v3 v3.24.5 h1:i0t8kL+kQTvpAYToeuiVk3TgDeKOFioZO3Ztz/iZ9pI=
github.com/shirou/gopsutil/v3 v3.24.5/go.mod h1:bsoOS1aStSs9ErQ1WWfxllSeS1K5D+U30r2NfcubMVk=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/spf13/cobra v1.10.1 h1:lJeBwCfmrnXthfAupyUTzJ/J4Nc1RsHC/mSRU2dll/s=
@@ -179,13 +208,17 @@ github.com/spiffe/go-spiffe/v2 v2.5.0 h1:N2I01KCUkv1FAjZXJMwh95KK1ZIQLYbPfhaxw8W
github.com/spiffe/go-spiffe/v2 v2.5.0/go.mod h1:P+NxobPc6wXhVtINNtFjNWGBTreew1GBUCwT2wPmb7g=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.1 h1:w7B6lhMri9wdJUVmEZPGGhZzrYTPvgJArz7wNPgYKsk=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/tklauser/go-sysconf v0.3.12 h1:0QaGUFOdQaIVdPgfITYzaTegZvdCjmYO52cSFAEVmqU=
github.com/tklauser/go-sysconf v0.3.12/go.mod h1:Ho14jnntGE1fpdOqQEEaiKRpvIavV0hSfmBq8nJbHYI=
github.com/tklauser/numcpus v0.6.1 h1:ng9scYS7az0Bk4OZLvrNXNSAO2Pxr1XXRAPyjhIx+Fk=
github.com/tklauser/numcpus v0.6.1/go.mod h1:1XfjsgE2zo8GVw7POkMbHENHzVg3GzmoZ9fESEdAacY=
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e h1:JVG44RsyaB9T2KIHavMF/ppJZNG9ZpyihvCd0w101no=
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e/go.mod h1:RbqR21r5mrJuqunuUZ/Dhy/avygyECGrLceyNeo4LiM=
github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0=
github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
github.com/zeebo/errs v1.4.0 h1:XNdoD/RRMKP7HD0UhJnIzUy74ISdGGxURlYG8HSWSfM=
github.com/zeebo/errs v1.4.0/go.mod h1:sgbWHsvVuTPHcqJJGQ1WhI5KbWlHYz+2+2C/LSEtCw4=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
@@ -198,6 +231,8 @@ go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 h1:F7Jx+6h
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0/go.mod h1:UHB22Z8QsdRDrnAtX4PntOl36ajSxcdUMt1sF7Y6E7Q=
go.opentelemetry.io/otel v1.37.0 h1:9zhNfelUvx0KBfu/gb+ZgeAfAgtWrfHJZcAqFC228wQ=
go.opentelemetry.io/otel v1.37.0/go.mod h1:ehE/umFRLnuLa/vSccNq9oS1ErUlkkK71gMcN34UG8I=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.36.0 h1:rixTyDGXFxRy1xzhKrotaHy3/KXdPhlWARrCgK+eqUY=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.36.0/go.mod h1:dowW6UsM9MKbJq5JTz2AMVp3/5iW5I/TStsk8S+CfHw=
go.opentelemetry.io/otel/metric v1.37.0 h1:mvwbQS5m0tbmqML4NqK+e3aDiO02vsf/WgbsdpcPoZE=
go.opentelemetry.io/otel/metric v1.37.0/go.mod h1:04wGrZurHYKOc+RKeye86GwKiTb9FKm1WHtO+4EVr2E=
go.opentelemetry.io/otel/sdk v1.37.0 h1:ItB0QUqnjesGRvNcmAcU0LyvkVyGJ2xftD29bWdDvKI=
@@ -206,43 +241,32 @@ go.opentelemetry.io/otel/sdk/metric v1.37.0 h1:90lI228XrB9jCMuSdA0673aubgRobVZFh
go.opentelemetry.io/otel/sdk/metric v1.37.0/go.mod h1:cNen4ZWfiD37l5NhS+Keb5RXVWZWpRE+9WyVCpbo5ps=
go.opentelemetry.io/otel/trace v1.37.0 h1:HLdcFNbRQBE2imdSEgm/kwqmQj1Or1l/7bW6mxVK7z4=
go.opentelemetry.io/otel/trace v1.37.0/go.mod h1:TlgrlQ+PtQO5XFerSPUYG0JSgGyryXewPGyayAWSBS0=
golang.org/x/crypto v0.37.0 h1:kJNSjF/Xp7kU0iB2Z+9viTPMW4EqqsrywMXLJOOsXSE=
golang.org/x/crypto v0.37.0/go.mod h1:vg+k43peMZ0pUMhYmVAWysMK35e6ioLh3wB8ZCAfbVc=
golang.org/x/crypto v0.41.0 h1:WKYxWedPGCTVVl5+WHSSrOBT0O8lx32+zxmHxijgXp4=
golang.org/x/crypto v0.41.0/go.mod h1:pO5AFd7FA68rFak7rOAGVuygIISepHftHnr8dr6+sUc=
golang.org/x/crypto v0.43.0 h1:dduJYIi3A3KOfdGOHX8AVZ/jGiyPa3IbBozJ5kNuE04=
golang.org/x/crypto v0.43.0/go.mod h1:BFbav4mRNlXJL4wNeejLpWxB7wMbc79PdRGhWKncxR0=
golang.org/x/exp v0.0.0-20220909182711-5c715a9e8561 h1:MDc5xs78ZrZr3HMQugiXOAkSZtfTpbJLDr/lwfgO53E=
golang.org/x/exp v0.0.0-20220909182711-5c715a9e8561/go.mod h1:cyybsKvd6eL0RnXn6p/Grxp8F5bW7iYuBgsNCOHpMYE=
golang.org/x/net v0.43.0 h1:lat02VYK2j4aLzMzecihNvTlJNQUq316m2Mr9rnM6YE=
golang.org/x/net v0.43.0/go.mod h1:vhO1fvI4dGsIjh73sWfUVjj3N7CA9WkKJNQm2svM6Jg=
golang.org/x/net v0.46.0 h1:giFlY12I07fugqwPuWJi68oOnpfqFnJIJzaIIm2JVV4=
golang.org/x/net v0.46.0/go.mod h1:Q9BGdFy1y4nkUwiLvT5qtyhAnEHgnQ/zd8PfU6nc210=
golang.org/x/oauth2 v0.33.0 h1:4Q+qn+E5z8gPRJfmRy7C2gGG3T4jIprK6aSYgTXGRpo=
golang.org/x/oauth2 v0.33.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/sync v0.13.0 h1:AauUjRAJ9OSnvULf/ARrrVywoJDy0YS2AwQ98I37610=
golang.org/x/sync v0.13.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw=
golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.36.0 h1:KVRy2GtZBrk1cBYA7MKu5bEZFxQk4NIDV6RLVcC8o0k=
golang.org/x/sys v0.36.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/sys v0.37.0 h1:fdNQudmxPjkdUTPnLn5mdQv7Zwvbvpaxqs831goi9kQ=
golang.org/x/sys v0.37.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/text v0.24.0 h1:dd5Bzh4yt5KYA8f9CJHCP4FB4D51c2c6JvN37xJJkJ0=
golang.org/x/text v0.24.0/go.mod h1:L8rBsPeo2pSS+xqN0d5u2ikmjtmoJbDBT1b7nHvFCdU=
golang.org/x/text v0.28.0 h1:rhazDwis8INMIwQ4tpjLDzUhx6RlXqZNPEM0huQojng=
golang.org/x/text v0.28.0/go.mod h1:U8nCwOR8jO/marOQ0QbDiOngZVEBB7MAiitBuMjXiNU=
golang.org/x/text v0.30.0 h1:yznKA/E9zq54KzlzBEAWn1NXSQ8DIp/NYMy88xJjl4k=
golang.org/x/text v0.30.0/go.mod h1:yDdHFIX9t+tORqspjENWgzaCVXgk0yYnYuSZ8UzzBVM=
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk=
gonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E=
google.golang.org/api v0.256.0 h1:u6Khm8+F9sxbCTYNoBHg6/Hwv0N/i+V94MvkOSor6oI=
google.golang.org/api v0.256.0/go.mod h1:KIgPhksXADEKJlnEoRa9qAII4rXcy40vfI8HRqcU964=
google.golang.org/genproto v0.0.0-20250603155806-513f23925822 h1:rHWScKit0gvAPuOnu87KpaYtjK5zBMLcULh7gxkCXu4=

View File

@@ -0,0 +1,1303 @@
{
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": {
"type": "grafana",
"uid": "-- Grafana --"
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"type": "dashboard"
}
]
},
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 0,
"id": null,
"links": [],
"liveNow": false,
"panels": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [
{
"options": {
"0": {
"color": "red",
"index": 1,
"text": "FAILED"
},
"1": {
"color": "green",
"index": 0,
"text": "SUCCESS"
}
},
"type": "value"
}
],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "red",
"value": null
},
{
"color": "green",
"value": 1
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 4,
"w": 6,
"x": 0,
"y": 0
},
"id": 1,
"options": {
"colorMode": "background",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"textMode": "auto"
},
"pluginVersion": "10.2.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_rpo_seconds{instance=~\"$instance\"} < 86400",
"legendFormat": "{{database}}",
"range": true,
"refId": "A"
}
],
"title": "Last Backup Status",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 43200
},
{
"color": "red",
"value": 86400
}
]
},
"unit": "s"
},
"overrides": []
},
"gridPos": {
"h": 4,
"w": 6,
"x": 6,
"y": 0
},
"id": 2,
"options": {
"colorMode": "value",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"textMode": "auto"
},
"pluginVersion": "10.2.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_rpo_seconds{instance=~\"$instance\"}",
"legendFormat": "{{database}}",
"range": true,
"refId": "A"
}
],
"title": "Time Since Last Backup",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 4,
"w": 6,
"x": 12,
"y": 0
},
"id": 3,
"options": {
"colorMode": "value",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"textMode": "auto"
},
"pluginVersion": "10.2.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_backup_total{instance=~\"$instance\", status=\"success\"}",
"legendFormat": "{{database}}",
"range": true,
"refId": "A"
}
],
"title": "Total Successful Backups",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 1
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 4,
"w": 6,
"x": 18,
"y": 0
},
"id": 4,
"options": {
"colorMode": "value",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"textMode": "auto"
},
"pluginVersion": "10.2.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_backup_total{instance=~\"$instance\", status=\"failure\"}",
"legendFormat": "{{database}}",
"range": true,
"refId": "A"
}
],
"title": "Total Failed Backups",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "line"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 86400
}
]
},
"unit": "s"
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 4
},
"id": 5,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_rpo_seconds{instance=~\"$instance\"}",
"legendFormat": "{{instance}} - {{database}}",
"range": true,
"refId": "A"
}
],
"title": "RPO Over Time",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "bars",
"fillOpacity": 100,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
},
"unit": "bytes"
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 4
},
"id": 6,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_last_backup_size_bytes{instance=~\"$instance\"}",
"legendFormat": "{{instance}} - {{database}}",
"range": true,
"refId": "A"
}
],
"title": "Backup Size",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
},
"unit": "s"
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 12
},
"id": 7,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_last_backup_duration_seconds{instance=~\"$instance\"}",
"legendFormat": "{{instance}} - {{database}}",
"range": true,
"refId": "A"
}
],
"title": "Backup Duration",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"custom": {
"align": "auto",
"cellOptions": {
"type": "auto"
},
"inspect": false
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
}
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "Status"
},
"properties": [
{
"id": "mappings",
"value": [
{
"options": {
"0": {
"color": "red",
"index": 1,
"text": "FAILED"
},
"1": {
"color": "green",
"index": 0,
"text": "SUCCESS"
}
},
"type": "value"
}
]
},
{
"id": "custom.cellOptions",
"value": {
"mode": "basic",
"type": "color-background"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "RPO"
},
"properties": [
{
"id": "unit",
"value": "s"
},
{
"id": "thresholds",
"value": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 43200
},
{
"color": "red",
"value": 86400
}
]
}
},
{
"id": "custom.cellOptions",
"value": {
"mode": "basic",
"type": "color-background"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "Size"
},
"properties": [
{
"id": "unit",
"value": "bytes"
}
]
}
]
},
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 12
},
"id": 8,
"options": {
"cellHeight": "sm",
"footer": {
"countRows": false,
"fields": "",
"reducer": [
"sum"
],
"show": false
},
"showHeader": true
},
"pluginVersion": "10.2.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_rpo_seconds{instance=~\"$instance\"} < 86400",
"format": "table",
"instant": true,
"legendFormat": "__auto",
"range": false,
"refId": "Status"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_rpo_seconds{instance=~\"$instance\"}",
"format": "table",
"hide": false,
"instant": true,
"legendFormat": "__auto",
"range": false,
"refId": "RPO"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_last_backup_size_bytes{instance=~\"$instance\"}",
"format": "table",
"hide": false,
"instant": true,
"legendFormat": "__auto",
"range": false,
"refId": "Size"
}
],
"title": "Backup Status Overview",
"transformations": [
{
"id": "joinByField",
"options": {
"byField": "database",
"mode": "outer"
}
},
{
"id": "organize",
"options": {
"excludeByName": {
"Time": true,
"Time 1": true,
"Time 2": true,
"Time 3": true,
"__name__": true,
"__name__ 1": true,
"__name__ 2": true,
"__name__ 3": true,
"instance 1": true,
"instance 2": true,
"instance 3": true,
"job": true,
"job 1": true,
"job 2": true,
"job 3": true
},
"indexByName": {},
"renameByName": {
"Value #RPO": "RPO",
"Value #Size": "Size",
"Value #Status": "Status",
"database": "Database",
"instance": "Instance"
}
}
}
],
"type": "table"
},
{
"collapsed": false,
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 30
},
"id": 100,
"panels": [],
"title": "Deduplication Statistics",
"type": "row"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "blue",
"value": null
}
]
},
"unit": "percentunit"
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 6,
"x": 0,
"y": 31
},
"id": 101,
"options": {
"colorMode": "background",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
"textMode": "auto"
},
"pluginVersion": "10.2.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_dedup_ratio{instance=~\"$instance\"}",
"legendFormat": "__auto",
"range": true,
"refId": "A"
}
],
"title": "Dedup Ratio",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
},
"unit": "bytes"
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 6,
"x": 6,
"y": 31
},
"id": 102,
"options": {
"colorMode": "value",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
"textMode": "auto"
},
"pluginVersion": "10.2.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_dedup_space_saved_bytes{instance=~\"$instance\"}",
"legendFormat": "__auto",
"range": true,
"refId": "A"
}
],
"title": "Space Saved",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "yellow",
"value": null
}
]
},
"unit": "bytes"
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 6,
"x": 12,
"y": 31
},
"id": 103,
"options": {
"colorMode": "value",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
"textMode": "auto"
},
"pluginVersion": "10.2.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_dedup_disk_usage_bytes{instance=~\"$instance\"}",
"legendFormat": "__auto",
"range": true,
"refId": "A"
}
],
"title": "Disk Usage",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "purple",
"value": null
}
]
},
"unit": "short"
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 6,
"x": 18,
"y": 31
},
"id": 104,
"options": {
"colorMode": "value",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
"textMode": "auto"
},
"pluginVersion": "10.2.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_dedup_chunks_total{instance=~\"$instance\"}",
"legendFormat": "__auto",
"range": true,
"refId": "A"
}
],
"title": "Total Chunks",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
},
"unit": "percentunit"
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 36
},
"id": 105,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"pluginVersion": "10.2.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_dedup_database_ratio{instance=~\"$instance\"}",
"legendFormat": "{{database}}",
"range": true,
"refId": "A"
}
],
"title": "Dedup Ratio by Database",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
},
"unit": "bytes"
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 36
},
"id": 106,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"pluginVersion": "10.2.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_dedup_space_saved_bytes{instance=~\"$instance\"}",
"legendFormat": "Space Saved",
"range": true,
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_dedup_disk_usage_bytes{instance=~\"$instance\"}",
"legendFormat": "Disk Usage",
"range": true,
"refId": "B"
}
],
"title": "Dedup Storage Over Time",
"type": "timeseries"
}
],
"refresh": "30s",
"schemaVersion": 38,
"tags": [
"dbbackup",
"backup",
"database",
"dedup"
],
"templating": {
"list": [
{
"current": {
"selected": false,
"text": "All",
"value": "$__all"
},
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"definition": "label_values(dbbackup_rpo_seconds, instance)",
"hide": 0,
"includeAll": true,
"label": "Instance",
"multi": true,
"name": "instance",
"options": [],
"query": {
"query": "label_values(dbbackup_rpo_seconds, instance)",
"refId": "StandardVariableQuery"
},
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 1,
"type": "query"
},
{
"hide": 2,
"name": "DS_PROMETHEUS",
"query": "prometheus",
"skipUrlSync": false,
"type": "datasource"
}
]
},
"time": {
"from": "now-24h",
"to": "now"
},
"timepicker": {},
"timezone": "",
"title": "DBBackup Overview",
"uid": "dbbackup-overview",
"version": 1,
"weekStart": ""
}
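The dashboard above queries gauges such as dbbackup_rpo_seconds, dbbackup_backup_total and dbbackup_last_backup_size_bytes with a database label. A minimal sketch of how such a gauge could be registered and served with prometheus/client_golang — the package layout, listen address and label value below are illustrative assumptions, not taken from this repository:

package main

import (
    "net/http"
    "time"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

// rpoSeconds mirrors the metric the dashboard panels query.
var rpoSeconds = prometheus.NewGaugeVec(
    prometheus.GaugeOpts{
        Name: "dbbackup_rpo_seconds",
        Help: "Seconds since the last successful backup, per database.",
    },
    []string{"database"},
)

func main() {
    prometheus.MustRegister(rpoSeconds)

    // After each backup run, publish how old the newest backup is.
    lastBackup := time.Now().Add(-2 * time.Hour) // placeholder value for the sketch
    rpoSeconds.WithLabelValues("appdb").Set(time.Since(lastBackup).Seconds())

    http.Handle("/metrics", promhttp.Handler())
    _ = http.ListenAndServe(":9399", nil)
}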

View File

@@ -2,12 +2,14 @@ package auth
import (
"bufio"
"context"
"fmt"
"os"
"os/exec"
"path/filepath"
"strconv"
"strings"
"time"
"dbbackup/internal/config"
)
@@ -69,7 +71,10 @@ func checkPgHbaConf(user string) AuthMethod {
// findHbaFileViaPostgres asks PostgreSQL for the hba_file location
func findHbaFileViaPostgres() string {
cmd := exec.Command("psql", "-U", "postgres", "-t", "-c", "SHOW hba_file;")
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
cmd := exec.CommandContext(ctx, "psql", "-U", "postgres", "-t", "-c", "SHOW hba_file;")
output, err := cmd.Output()
if err != nil {
return ""
@@ -82,8 +87,11 @@ func parsePgHbaConf(path string, user string) AuthMethod {
// Try with sudo if we can't read directly
file, err := os.Open(path)
if err != nil {
// Try with sudo
cmd := exec.Command("sudo", "cat", path)
// Try with sudo (with timeout)
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
cmd := exec.CommandContext(ctx, "sudo", "cat", path)
output, err := cmd.Output()
if err != nil {
return AuthUnknown
@@ -196,13 +204,13 @@ func CheckAuthenticationMismatch(cfg *config.Config) (bool, string) {
func buildAuthMismatchMessage(osUser, dbUser string, method AuthMethod) string {
var msg strings.Builder
msg.WriteString("\n⚠️ Authentication Mismatch Detected\n")
msg.WriteString("\n[WARN] Authentication Mismatch Detected\n")
msg.WriteString(strings.Repeat("=", 60) + "\n\n")
msg.WriteString(fmt.Sprintf(" PostgreSQL is using '%s' authentication\n", method))
msg.WriteString(fmt.Sprintf(" OS user '%s' cannot authenticate as DB user '%s'\n\n", osUser, dbUser))
msg.WriteString("💡 Solutions (choose one):\n\n")
msg.WriteString("[TIP] Solutions (choose one):\n\n")
msg.WriteString(fmt.Sprintf(" 1. Run as matching user:\n"))
msg.WriteString(fmt.Sprintf(" sudo -u %s %s\n\n", dbUser, getCommandLine()))
@@ -218,7 +226,7 @@ func buildAuthMismatchMessage(osUser, dbUser string, method AuthMethod) string {
msg.WriteString(" 4. Provide password via flag:\n")
msg.WriteString(fmt.Sprintf(" %s --password your_password\n\n", getCommandLine()))
msg.WriteString("📝 Note: For production use, ~/.pgpass or PGPASSWORD are recommended\n")
msg.WriteString("[NOTE] Note: For production use, ~/.pgpass or PGPASSWORD are recommended\n")
msg.WriteString(" to avoid exposing passwords in command history.\n\n")
msg.WriteString(strings.Repeat("=", 60) + "\n")

View File

@@ -87,20 +87,46 @@ func IsBackupEncrypted(backupPath string) bool {
return meta.Encrypted
}
// Fallback: check if file starts with encryption nonce
// No metadata found - check file format to determine if encrypted
// Known unencrypted formats have specific magic bytes:
// - Gzip: 1f 8b
// - PGDMP (PostgreSQL custom): 50 47 44 4d 50 (PGDMP)
// - Plain SQL: starts with text (-- or SET or CREATE)
// - Tar: 75 73 74 61 72 (ustar) at offset 257
//
// If file doesn't match any known format, it MIGHT be encrypted,
// but we return false to avoid false positives. User must provide
// metadata file or use --encrypt flag explicitly.
file, err := os.Open(backupPath)
if err != nil {
return false
}
defer file.Close()
// Try to read nonce - if it succeeds, likely encrypted
nonce := make([]byte, crypto.NonceSize)
if n, err := file.Read(nonce); err != nil || n != crypto.NonceSize {
header := make([]byte, 6)
if n, err := file.Read(header); err != nil || n < 2 {
return false
}
return true
// Check for known unencrypted formats
// Gzip magic: 1f 8b
if header[0] == 0x1f && header[1] == 0x8b {
return false // Gzip compressed - not encrypted
}
// PGDMP magic (PostgreSQL custom format)
if len(header) >= 5 && string(header[:5]) == "PGDMP" {
return false // PostgreSQL custom dump - not encrypted
}
// Plain text SQL (starts with --, SET, CREATE, etc.)
if header[0] == '-' || header[0] == 'S' || header[0] == 'C' || header[0] == '/' {
return false // Plain text SQL - not encrypted
}
// Without metadata, we cannot reliably determine encryption status
// Return false to avoid blocking restores with false positives
return false
}
// DecryptBackupFile decrypts an encrypted backup file
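The format list in the comment above also names tar's "ustar" magic at byte offset 257, which the 6-byte header read cannot reach. A sketch of how that case could be detected, purely as an illustration — the package name and placement are assumptions, not code from this change:

package backup // placement assumed for the sketch

import "os"

// looksLikeTar reports whether the file carries the POSIX tar "ustar"
// magic at byte offset 257, one of the unencrypted formats listed above.
func looksLikeTar(path string) bool {
    f, err := os.Open(path)
    if err != nil {
        return false
    }
    defer f.Close()

    magic := make([]byte, 5)
    if _, err := f.ReadAt(magic, 257); err != nil {
        return false
    }
    return string(magic) == "ustar"
}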

View File

@@ -443,6 +443,14 @@ func (e *Engine) BackupCluster(ctx context.Context) error {
defer wg.Done()
defer func() { <-semaphore }() // Release
// Panic recovery - prevent one database failure from crashing entire cluster backup
defer func() {
if r := recover(); r != nil {
e.log.Error("Panic in database backup goroutine", "database", name, "panic", r)
atomic.AddInt32(&failCount, 1)
}
}()
// Check for cancellation at start of goroutine
select {
case <-ctx.Done():
@@ -465,7 +473,7 @@ func (e *Engine) BackupCluster(ctx context.Context) error {
mu.Lock()
e.printf(" Database size: %s\n", sizeStr)
if size > 10*1024*1024*1024 { // > 10GB
e.printf(" ⚠️ Large database detected - this may take a while\n")
e.printf(" [WARN] Large database detected - this may take a while\n")
}
mu.Unlock()
}
@@ -502,24 +510,24 @@ func (e *Engine) BackupCluster(ctx context.Context) error {
cmd := e.db.BuildBackupCommand(name, dumpFile, options)
dbCtx, cancel := context.WithTimeout(ctx, 2*time.Hour)
defer cancel()
err := e.executeCommand(dbCtx, cmd, dumpFile)
cancel()
// NO TIMEOUT for individual database backups
// Large databases with large objects can take many hours
// The parent context handles cancellation if needed
err := e.executeCommand(ctx, cmd, dumpFile)
if err != nil {
e.log.Warn("Failed to backup database", "database", name, "error", err)
mu.Lock()
e.printf(" ⚠️ WARNING: Failed to backup %s: %v\n", name, err)
e.printf(" [WARN] WARNING: Failed to backup %s: %v\n", name, err)
mu.Unlock()
atomic.AddInt32(&failCount, 1)
} else {
compressedCandidate := strings.TrimSuffix(dumpFile, ".dump") + ".sql.gz"
mu.Lock()
if info, err := os.Stat(compressedCandidate); err == nil {
e.printf(" Completed %s (%s)\n", name, formatBytes(info.Size()))
e.printf(" [OK] Completed %s (%s)\n", name, formatBytes(info.Size()))
} else if info, err := os.Stat(dumpFile); err == nil {
e.printf(" Completed %s (%s)\n", name, formatBytes(info.Size()))
e.printf(" [OK] Completed %s (%s)\n", name, formatBytes(info.Size()))
}
mu.Unlock()
atomic.AddInt32(&successCount, 1)
@@ -598,12 +606,36 @@ func (e *Engine) executeCommandWithProgress(ctx context.Context, cmdArgs []strin
return fmt.Errorf("failed to start command: %w", err)
}
// Monitor progress via stderr
go e.monitorCommandProgress(stderr, tracker)
// Monitor progress via stderr in goroutine
stderrDone := make(chan struct{})
go func() {
defer close(stderrDone)
e.monitorCommandProgress(stderr, tracker)
}()
// Wait for command to complete
if err := cmd.Wait(); err != nil {
return fmt.Errorf("backup command failed: %w", err)
// Wait for command to complete with proper context handling
cmdDone := make(chan error, 1)
go func() {
cmdDone <- cmd.Wait()
}()
var cmdErr error
select {
case cmdErr = <-cmdDone:
// Command completed (success or failure)
case <-ctx.Done():
// Context cancelled - kill process to unblock
e.log.Warn("Backup cancelled - killing process")
cmd.Process.Kill()
<-cmdDone // Wait for goroutine to finish
cmdErr = ctx.Err()
}
// Wait for stderr reader to finish
<-stderrDone
if cmdErr != nil {
return fmt.Errorf("backup command failed: %w", cmdErr)
}
return nil
@@ -680,8 +712,12 @@ func (e *Engine) executeMySQLWithProgressAndCompression(ctx context.Context, cmd
return fmt.Errorf("failed to get stderr pipe: %w", err)
}
// Start monitoring progress
go e.monitorCommandProgress(stderr, tracker)
// Start monitoring progress in goroutine
stderrDone := make(chan struct{})
go func() {
defer close(stderrDone)
e.monitorCommandProgress(stderr, tracker)
}()
// Start both commands
if err := gzipCmd.Start(); err != nil {
@@ -689,20 +725,41 @@ func (e *Engine) executeMySQLWithProgressAndCompression(ctx context.Context, cmd
}
if err := dumpCmd.Start(); err != nil {
gzipCmd.Process.Kill()
return fmt.Errorf("failed to start mysqldump: %w", err)
}
// Wait for mysqldump to complete
if err := dumpCmd.Wait(); err != nil {
return fmt.Errorf("mysqldump failed: %w", err)
// Wait for mysqldump with context handling
dumpDone := make(chan error, 1)
go func() {
dumpDone <- dumpCmd.Wait()
}()
var dumpErr error
select {
case dumpErr = <-dumpDone:
// mysqldump completed
case <-ctx.Done():
e.log.Warn("Backup cancelled - killing mysqldump")
dumpCmd.Process.Kill()
gzipCmd.Process.Kill()
<-dumpDone
return ctx.Err()
}
// Wait for stderr reader
<-stderrDone
// Close pipe and wait for gzip
pipe.Close()
if err := gzipCmd.Wait(); err != nil {
return fmt.Errorf("gzip failed: %w", err)
}
if dumpErr != nil {
return fmt.Errorf("mysqldump failed: %w", dumpErr)
}
return nil
}
@@ -733,19 +790,45 @@ func (e *Engine) executeMySQLWithCompression(ctx context.Context, cmdArgs []stri
gzipCmd.Stdin = stdin
gzipCmd.Stdout = outFile
// Start both commands
// Start gzip first
if err := gzipCmd.Start(); err != nil {
return fmt.Errorf("failed to start gzip: %w", err)
}
if err := dumpCmd.Run(); err != nil {
return fmt.Errorf("mysqldump failed: %w", err)
// Start mysqldump
if err := dumpCmd.Start(); err != nil {
gzipCmd.Process.Kill()
return fmt.Errorf("failed to start mysqldump: %w", err)
}
// Wait for mysqldump with context handling
dumpDone := make(chan error, 1)
go func() {
dumpDone <- dumpCmd.Wait()
}()
var dumpErr error
select {
case dumpErr = <-dumpDone:
// mysqldump completed
case <-ctx.Done():
e.log.Warn("Backup cancelled - killing mysqldump")
dumpCmd.Process.Kill()
gzipCmd.Process.Kill()
<-dumpDone
return ctx.Err()
}
// Close pipe and wait for gzip
stdin.Close()
if err := gzipCmd.Wait(); err != nil {
return fmt.Errorf("gzip failed: %w", err)
}
if dumpErr != nil {
return fmt.Errorf("mysqldump failed: %w", dumpErr)
}
return nil
}
@@ -882,15 +965,46 @@ func (e *Engine) createArchive(ctx context.Context, sourceDir, outputFile string
goto regularTar
}
// Wait for tar to finish
if err := cmd.Wait(); err != nil {
// Wait for tar with proper context handling
tarDone := make(chan error, 1)
go func() {
tarDone <- cmd.Wait()
}()
var tarErr error
select {
case tarErr = <-tarDone:
// tar completed
case <-ctx.Done():
e.log.Warn("Archive creation cancelled - killing processes")
cmd.Process.Kill()
pigzCmd.Process.Kill()
return fmt.Errorf("tar failed: %w", err)
<-tarDone
return ctx.Err()
}
// Wait for pigz to finish
if err := pigzCmd.Wait(); err != nil {
return fmt.Errorf("pigz compression failed: %w", err)
if tarErr != nil {
pigzCmd.Process.Kill()
return fmt.Errorf("tar failed: %w", tarErr)
}
// Wait for pigz with proper context handling
pigzDone := make(chan error, 1)
go func() {
pigzDone <- pigzCmd.Wait()
}()
var pigzErr error
select {
case pigzErr = <-pigzDone:
case <-ctx.Done():
pigzCmd.Process.Kill()
<-pigzDone
return ctx.Err()
}
if pigzErr != nil {
return fmt.Errorf("pigz compression failed: %w", pigzErr)
}
return nil
}
@@ -1235,8 +1349,10 @@ func (e *Engine) executeCommand(ctx context.Context, cmdArgs []string, outputFil
return fmt.Errorf("failed to start backup command: %w", err)
}
// Stream stderr output (don't buffer it all in memory)
// Stream stderr output in goroutine (don't buffer it all in memory)
stderrDone := make(chan struct{})
go func() {
defer close(stderrDone)
scanner := bufio.NewScanner(stderr)
scanner.Buffer(make([]byte, 64*1024), 1024*1024) // 1MB max line size
for scanner.Scan() {
@@ -1247,10 +1363,30 @@ func (e *Engine) executeCommand(ctx context.Context, cmdArgs []string, outputFil
}
}()
// Wait for command to complete
if err := cmd.Wait(); err != nil {
e.log.Error("Backup command failed", "error", err, "database", filepath.Base(outputFile))
return fmt.Errorf("backup command failed: %w", err)
// Wait for command to complete with proper context handling
cmdDone := make(chan error, 1)
go func() {
cmdDone <- cmd.Wait()
}()
var cmdErr error
select {
case cmdErr = <-cmdDone:
// Command completed (success or failure)
case <-ctx.Done():
// Context cancelled - kill process to unblock
e.log.Warn("Backup cancelled - killing pg_dump process")
cmd.Process.Kill()
<-cmdDone // Wait for goroutine to finish
cmdErr = ctx.Err()
}
// Wait for stderr reader to finish
<-stderrDone
if cmdErr != nil {
e.log.Error("Backup command failed", "error", cmdErr, "database", filepath.Base(outputFile))
return fmt.Errorf("backup command failed: %w", cmdErr)
}
return nil
@@ -1352,20 +1488,53 @@ func (e *Engine) executeWithStreamingCompression(ctx context.Context, cmdArgs []
// Then start pg_dump
if err := dumpCmd.Start(); err != nil {
compressCmd.Process.Kill()
return fmt.Errorf("failed to start pg_dump: %w", err)
}
// Wait for pg_dump to complete
if err := dumpCmd.Wait(); err != nil {
return fmt.Errorf("pg_dump failed: %w", err)
// Wait for pg_dump in a goroutine to handle context timeout properly
// This prevents deadlock if pipe buffer fills and pg_dump blocks
dumpDone := make(chan error, 1)
go func() {
dumpDone <- dumpCmd.Wait()
}()
var dumpErr error
select {
case dumpErr = <-dumpDone:
// pg_dump completed (success or failure)
case <-ctx.Done():
// Context cancelled/timeout - kill pg_dump to unblock
e.log.Warn("Backup timeout - killing pg_dump process")
dumpCmd.Process.Kill()
<-dumpDone // Wait for goroutine to finish
dumpErr = ctx.Err()
}
// Close stdout pipe to signal compressor we're done
// This MUST happen after pg_dump exits to avoid broken pipe
dumpStdout.Close()
// Wait for compression to complete
if err := compressCmd.Wait(); err != nil {
return fmt.Errorf("compression failed: %w", err)
compressErr := compressCmd.Wait()
// Check errors - compressor failure first (it's usually the root cause)
if compressErr != nil {
e.log.Error("Compressor failed", "error", compressErr)
return fmt.Errorf("compression failed (check disk space): %w", compressErr)
}
if dumpErr != nil {
// Check for SIGPIPE (exit code 141) - indicates compressor died first
if exitErr, ok := dumpErr.(*exec.ExitError); ok && exitErr.ExitCode() == 141 {
e.log.Error("pg_dump received SIGPIPE - compressor may have failed")
return fmt.Errorf("pg_dump broken pipe - check disk space and compressor")
}
return fmt.Errorf("pg_dump failed: %w", dumpErr)
}
// Sync file to disk to ensure durability (prevents truncation on power loss)
if err := outFile.Sync(); err != nil {
e.log.Warn("Failed to sync output file", "error", err)
}
e.log.Debug("Streaming compression completed", "output", compressedFile)
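The hunks above apply the same idea in several places: run cmd.Wait() in a goroutine feeding a channel, select on that channel and ctx.Done(), and kill the process on cancellation so nothing blocks forever. Distilled into a standalone sketch (simplified; the names here are not from the repository):

package main

import (
    "context"
    "os/exec"
    "time"
)

// runWithContext starts a command and waits for it while honouring context
// cancellation: on timeout or cancel it kills the process and still reaps it
// so the waiting goroutine can exit.
func runWithContext(ctx context.Context, cmd *exec.Cmd) error {
    if err := cmd.Start(); err != nil {
        return err
    }

    done := make(chan error, 1)
    go func() { done <- cmd.Wait() }()

    select {
    case err := <-done:
        return err // command finished on its own (success or failure)
    case <-ctx.Done():
        cmd.Process.Kill()
        <-done // reap the process before returning
        return ctx.Err()
    }
}

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()
    _ = runWithContext(ctx, exec.Command("sleep", "60"))
}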

View File

@@ -242,7 +242,7 @@ func TestIncrementalBackupRestore(t *testing.T) {
t.Errorf("Unchanged file base/12345/1235 not found in restore: %v", err)
}
t.Log(" Incremental backup and restore test completed successfully")
t.Log("[OK] Incremental backup and restore test completed successfully")
}
// TestIncrementalBackupErrors tests error handling

View File

@@ -75,16 +75,16 @@ func FormatDiskSpaceMessage(check *DiskSpaceCheck) string {
if check.Critical {
status = "CRITICAL"
icon = ""
icon = "[X]"
} else if check.Warning {
status = "WARNING"
icon = "⚠️ "
icon = "[!]"
} else {
status = "OK"
icon = ""
icon = "[+]"
}
msg := fmt.Sprintf(`📊 Disk Space Check (%s):
msg := fmt.Sprintf(`[DISK] Disk Space Check (%s):
Path: %s
Total: %s
Available: %s (%.1f%% used)
@@ -98,13 +98,13 @@ func FormatDiskSpaceMessage(check *DiskSpaceCheck) string {
status)
if check.Critical {
msg += "\n \n ⚠️ CRITICAL: Insufficient disk space!"
msg += "\n \n [!!] CRITICAL: Insufficient disk space!"
msg += "\n Operation blocked. Free up space before continuing."
} else if check.Warning {
msg += "\n \n ⚠️ WARNING: Low disk space!"
msg += "\n \n [!] WARNING: Low disk space!"
msg += "\n Backup may fail if database is larger than estimated."
} else {
msg += "\n \n Sufficient space available"
msg += "\n \n [+] Sufficient space available"
}
return msg

View File

@@ -75,16 +75,16 @@ func FormatDiskSpaceMessage(check *DiskSpaceCheck) string {
if check.Critical {
status = "CRITICAL"
icon = ""
icon = "[X]"
} else if check.Warning {
status = "WARNING"
icon = "⚠️ "
icon = "[!]"
} else {
status = "OK"
icon = ""
icon = "[+]"
}
msg := fmt.Sprintf(`📊 Disk Space Check (%s):
msg := fmt.Sprintf(`[DISK] Disk Space Check (%s):
Path: %s
Total: %s
Available: %s (%.1f%% used)
@@ -98,13 +98,13 @@ func FormatDiskSpaceMessage(check *DiskSpaceCheck) string {
status)
if check.Critical {
msg += "\n \n ⚠️ CRITICAL: Insufficient disk space!"
msg += "\n \n [!!] CRITICAL: Insufficient disk space!"
msg += "\n Operation blocked. Free up space before continuing."
} else if check.Warning {
msg += "\n \n ⚠️ WARNING: Low disk space!"
msg += "\n \n [!] WARNING: Low disk space!"
msg += "\n Backup may fail if database is larger than estimated."
} else {
msg += "\n \n Sufficient space available"
msg += "\n \n [+] Sufficient space available"
}
return msg

View File

@@ -58,16 +58,16 @@ func FormatDiskSpaceMessage(check *DiskSpaceCheck) string {
if check.Critical {
status = "CRITICAL"
icon = ""
icon = "[X]"
} else if check.Warning {
status = "WARNING"
icon = "⚠️ "
icon = "[!]"
} else {
status = "OK"
icon = ""
icon = "[+]"
}
msg := fmt.Sprintf(`📊 Disk Space Check (%s):
msg := fmt.Sprintf(`[DISK] Disk Space Check (%s):
Path: %s
Total: %s
Available: %s (%.1f%% used)
@@ -81,13 +81,13 @@ func FormatDiskSpaceMessage(check *DiskSpaceCheck) string {
status)
if check.Critical {
msg += "\n \n ⚠️ CRITICAL: Insufficient disk space!"
msg += "\n \n [!!] CRITICAL: Insufficient disk space!"
msg += "\n Operation blocked. Free up space before continuing."
} else if check.Warning {
msg += "\n \n ⚠️ WARNING: Low disk space!"
msg += "\n \n [!] WARNING: Low disk space!"
msg += "\n Backup may fail if database is larger than estimated."
} else {
msg += "\n \n Sufficient space available"
msg += "\n \n [+] Sufficient space available"
}
return msg

View File

@@ -94,16 +94,16 @@ func FormatDiskSpaceMessage(check *DiskSpaceCheck) string {
if check.Critical {
status = "CRITICAL"
icon = ""
icon = "[X]"
} else if check.Warning {
status = "WARNING"
icon = "⚠️ "
icon = "[!]"
} else {
status = "OK"
icon = ""
icon = "[+]"
}
msg := fmt.Sprintf(`📊 Disk Space Check (%s):
msg := fmt.Sprintf(`[DISK] Disk Space Check (%s):
Path: %s
Total: %s
Available: %s (%.1f%% used)
@@ -117,13 +117,13 @@ func FormatDiskSpaceMessage(check *DiskSpaceCheck) string {
status)
if check.Critical {
msg += "\n \n ⚠️ CRITICAL: Insufficient disk space!"
msg += "\n \n [!!] CRITICAL: Insufficient disk space!"
msg += "\n Operation blocked. Free up space before continuing."
} else if check.Warning {
msg += "\n \n ⚠️ WARNING: Low disk space!"
msg += "\n \n [!] WARNING: Low disk space!"
msg += "\n Backup may fail if database is larger than estimated."
} else {
msg += "\n \n Sufficient space available"
msg += "\n \n [+] Sufficient space available"
}
return msg

View File

@@ -68,8 +68,8 @@ func ClassifyError(errorMsg string) *ErrorClassification {
Type: "critical",
Category: "locks",
Message: errorMsg,
Hint: "Lock table exhausted - typically caused by large objects in parallel restore",
Action: "Increase max_locks_per_transaction in postgresql.conf to 512 or higher",
Hint: "Lock table exhausted - typically caused by large objects (BLOBs) during restore",
Action: "Option 1: Increase max_locks_per_transaction to 1024+ in postgresql.conf (requires restart). Option 2: Update dbbackup and retry - phased restore now auto-enabled for BLOB databases",
Severity: 2,
}
case "permission_denied":
@@ -142,8 +142,8 @@ func ClassifyError(errorMsg string) *ErrorClassification {
Type: "critical",
Category: "locks",
Message: errorMsg,
Hint: "Lock table exhausted - typically caused by large objects in parallel restore",
Action: "Increase max_locks_per_transaction in postgresql.conf to 512 or higher",
Hint: "Lock table exhausted - typically caused by large objects (BLOBs) during restore",
Action: "Option 1: Increase max_locks_per_transaction to 1024+ in postgresql.conf (requires restart). Option 2: Update dbbackup and retry - phased restore now auto-enabled for BLOB databases",
Severity: 2,
}
}
@@ -234,22 +234,22 @@ func FormatErrorWithHint(errorMsg string) string {
var icon string
switch classification.Type {
case "ignorable":
icon = " "
icon = "[i]"
case "warning":
icon = "⚠️ "
icon = "[!]"
case "critical":
icon = ""
icon = "[X]"
case "fatal":
icon = "🛑"
icon = "[!!]"
default:
icon = "⚠️ "
icon = "[!]"
}
output := fmt.Sprintf("%s %s Error\n\n", icon, strings.ToUpper(classification.Type))
output += fmt.Sprintf("Category: %s\n", classification.Category)
output += fmt.Sprintf("Message: %s\n\n", classification.Message)
output += fmt.Sprintf("💡 Hint: %s\n\n", classification.Hint)
output += fmt.Sprintf("🔧 Action: %s\n", classification.Action)
output += fmt.Sprintf("[HINT] Hint: %s\n\n", classification.Hint)
output += fmt.Sprintf("[ACTION] Action: %s\n", classification.Action)
return output
}
@@ -257,7 +257,7 @@ func FormatErrorWithHint(errorMsg string) string {
// FormatMultipleErrors formats multiple errors with classification
func FormatMultipleErrors(errors []string) string {
if len(errors) == 0 {
return " No errors"
return "[+] No errors"
}
ignorable := 0
@@ -285,22 +285,22 @@ func FormatMultipleErrors(errors []string) string {
}
}
output := "📊 Error Summary:\n\n"
output := "[SUMMARY] Error Summary:\n\n"
if ignorable > 0 {
output += fmt.Sprintf(" %d ignorable (objects already exist)\n", ignorable)
output += fmt.Sprintf(" [i] %d ignorable (objects already exist)\n", ignorable)
}
if warnings > 0 {
output += fmt.Sprintf(" ⚠️ %d warnings\n", warnings)
output += fmt.Sprintf(" [!] %d warnings\n", warnings)
}
if critical > 0 {
output += fmt.Sprintf(" %d critical errors\n", critical)
output += fmt.Sprintf(" [X] %d critical errors\n", critical)
}
if fatal > 0 {
output += fmt.Sprintf(" 🛑 %d fatal errors\n", fatal)
output += fmt.Sprintf(" [!!] %d fatal errors\n", fatal)
}
if len(criticalErrors) > 0 {
output += "\n📝 Critical Issues:\n\n"
output += "\n[CRITICAL] Critical Issues:\n\n"
for i, err := range criticalErrors {
class := ClassifyError(err)
output += fmt.Sprintf("%d. %s\n", i+1, class.Hint)
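The updated Action text earlier in this file points at max_locks_per_transaction. As a hedged sketch of what "Option 1" amounts to (not code from this change): the parameter is postmaster-level, so after raising it — whether by editing postgresql.conf or via ALTER SYSTEM — PostgreSQL must be restarted before the new value applies.

package hints // illustrative placement only

import (
    "context"
    "database/sql"
)

// raiseLockLimit applies the "Option 1" hint above via ALTER SYSTEM.
// The new value only takes effect after a PostgreSQL restart, since
// max_locks_per_transaction cannot be changed at runtime.
func raiseLockLimit(ctx context.Context, db *sql.DB) error {
    _, err := db.ExecContext(ctx, "ALTER SYSTEM SET max_locks_per_transaction = 1024")
    return err
}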

View File

@@ -49,15 +49,15 @@ func (s CheckStatus) String() string {
func (s CheckStatus) Icon() string {
switch s {
case StatusPassed:
return ""
return "[+]"
case StatusWarning:
return ""
return "[!]"
case StatusFailed:
return ""
return "[-]"
case StatusSkipped:
return ""
return "[ ]"
default:
return "?"
return "[?]"
}
}

View File

@@ -11,9 +11,9 @@ func FormatPreflightReport(result *PreflightResult, dbName string, verbose bool)
var sb strings.Builder
sb.WriteString("\n")
sb.WriteString("╔══════════════════════════════════════════════════════════════╗\n")
sb.WriteString(" [DRY RUN] Preflight Check Results \n")
sb.WriteString("╚══════════════════════════════════════════════════════════════╝\n")
sb.WriteString("+==============================================================+\n")
sb.WriteString("| [DRY RUN] Preflight Check Results |\n")
sb.WriteString("+==============================================================+\n")
sb.WriteString("\n")
// Database info
@@ -29,7 +29,7 @@ func FormatPreflightReport(result *PreflightResult, dbName string, verbose bool)
// Check results
sb.WriteString(" Checks:\n")
sb.WriteString(" ─────────────────────────────────────────────────────────────\n")
sb.WriteString(" --------------------------------------------------------------\n")
for _, check := range result.Checks {
icon := check.Status.Icon()
@@ -40,26 +40,26 @@ func FormatPreflightReport(result *PreflightResult, dbName string, verbose bool)
color, icon, reset, check.Name+":", check.Message))
if verbose && check.Details != "" {
sb.WriteString(fmt.Sprintf(" └─ %s\n", check.Details))
sb.WriteString(fmt.Sprintf(" +- %s\n", check.Details))
}
}
sb.WriteString(" ─────────────────────────────────────────────────────────────\n")
sb.WriteString(" --------------------------------------------------------------\n")
sb.WriteString("\n")
// Summary
if result.AllPassed {
if result.HasWarnings {
sb.WriteString(" ⚠️ All checks passed with warnings\n")
sb.WriteString(" [!] All checks passed with warnings\n")
sb.WriteString("\n")
sb.WriteString(" Ready to backup. Remove --dry-run to execute.\n")
} else {
sb.WriteString(" All checks passed\n")
sb.WriteString(" [OK] All checks passed\n")
sb.WriteString("\n")
sb.WriteString(" Ready to backup. Remove --dry-run to execute.\n")
}
} else {
sb.WriteString(fmt.Sprintf(" %d check(s) failed\n", result.FailureCount))
sb.WriteString(fmt.Sprintf(" [FAIL] %d check(s) failed\n", result.FailureCount))
sb.WriteString("\n")
sb.WriteString(" Fix the issues above before running backup.\n")
}
@@ -96,7 +96,7 @@ func FormatPreflightReportPlain(result *PreflightResult, dbName string) string {
status := fmt.Sprintf("[%s]", check.Status.String())
sb.WriteString(fmt.Sprintf(" %-10s %-25s %s\n", status, check.Name+":", check.Message))
if check.Details != "" {
sb.WriteString(fmt.Sprintf(" └─ %s\n", check.Details))
sb.WriteString(fmt.Sprintf(" +- %s\n", check.Details))
}
}

View File

@@ -12,6 +12,7 @@ import (
"strings"
"sync"
"syscall"
"time"
"dbbackup/internal/logger"
)
@@ -116,8 +117,11 @@ func KillOrphanedProcesses(log logger.Logger) error {
// findProcessesByName returns PIDs of processes matching the given name
func findProcessesByName(name string, excludePID int) ([]int, error) {
// Use pgrep for efficient process searching
cmd := exec.Command("pgrep", "-x", name)
// Use pgrep for efficient process searching with timeout
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
cmd := exec.CommandContext(ctx, "pgrep", "-x", name)
output, err := cmd.Output()
if err != nil {
// Exit code 1 means no processes found (not an error)

View File

@@ -90,7 +90,7 @@ func NewAzureBackend(cfg *Config) (*AzureBackend, error) {
}
} else {
// Use default Azure credential (managed identity, environment variables, etc.)
return nil, fmt.Errorf("Azure authentication requires account name and key, or use AZURE_STORAGE_CONNECTION_STRING environment variable")
return nil, fmt.Errorf("azure authentication requires account name and key, or use AZURE_STORAGE_CONNECTION_STRING environment variable")
}
}

View File

@@ -217,14 +217,17 @@ func New() *Config {
SingleDBName: getEnvString("SINGLE_DB_NAME", ""),
RestoreDBName: getEnvString("RESTORE_DB_NAME", ""),
// Timeouts
ClusterTimeoutMinutes: getEnvInt("CLUSTER_TIMEOUT_MIN", 240),
// Timeouts - default 24 hours (1440 min) to handle very large databases with large objects
ClusterTimeoutMinutes: getEnvInt("CLUSTER_TIMEOUT_MIN", 1440),
// Cluster parallelism (default: 2 concurrent operations for faster cluster backup/restore)
ClusterParallelism: getEnvInt("CLUSTER_PARALLELISM", 2),
// Working directory for large operations (default: system temp)
WorkDir: getEnvString("WORK_DIR", ""),
// Swap file management
SwapFilePath: getEnvString("SWAP_FILE_PATH", "/tmp/dbbackup_swap"),
SwapFilePath: "", // Will be set after WorkDir is initialized
SwapFileSizeGB: getEnvInt("SWAP_FILE_SIZE_GB", 0), // 0 = disabled by default
AutoSwap: getEnvBool("AUTO_SWAP", false),
@@ -264,6 +267,13 @@ func New() *Config {
cfg.SSLMode = "prefer"
}
// Set SwapFilePath using WorkDir if not explicitly set via env var
if envSwap := os.Getenv("SWAP_FILE_PATH"); envSwap != "" {
cfg.SwapFilePath = envSwap
} else {
cfg.SwapFilePath = filepath.Join(cfg.GetEffectiveWorkDir(), "dbbackup_swap")
}
return cfg
}
@@ -499,6 +509,14 @@ func GetCurrentOSUser() string {
return getCurrentUser()
}
// GetEffectiveWorkDir returns the configured WorkDir or system temp as fallback
func (c *Config) GetEffectiveWorkDir() string {
if c.WorkDir != "" {
return c.WorkDir
}
return os.TempDir()
}
func getDefaultBackupDir() string {
// Try to create a sensible default backup directory
homeDir, _ := os.UserHomeDir()
@@ -516,7 +534,7 @@ func getDefaultBackupDir() string {
return "/var/lib/pgsql/pg_backups"
}
return "/tmp/db_backups"
return filepath.Join(os.TempDir(), "db_backups")
}
// CPU-related helper functions
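In practice the new defaults resolve as follows: with neither WORK_DIR nor SWAP_FILE_PATH set, GetEffectiveWorkDir() falls back to os.TempDir() (typically /tmp on Linux), so the swap file lands at /tmp/dbbackup_swap and the last-resort backup directory at /tmp/db_backups; setting WORK_DIR=/scratch moves the swap file to /scratch/dbbackup_swap, while an explicit SWAP_FILE_PATH still wins over both.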

View File

@@ -30,6 +30,7 @@ type LocalConfig struct {
// Performance settings
CPUWorkload string
MaxCores int
ClusterTimeout int // Cluster operation timeout in minutes (default: 1440 = 24 hours)
// Security settings
RetentionDays int
@@ -121,6 +122,10 @@ func LoadLocalConfig() (*LocalConfig, error) {
if mc, err := strconv.Atoi(value); err == nil {
cfg.MaxCores = mc
}
case "cluster_timeout":
if ct, err := strconv.Atoi(value); err == nil {
cfg.ClusterTimeout = ct
}
}
case "security":
switch key {
@@ -199,6 +204,9 @@ func SaveLocalConfig(cfg *LocalConfig) error {
if cfg.MaxCores != 0 {
sb.WriteString(fmt.Sprintf("max_cores = %d\n", cfg.MaxCores))
}
if cfg.ClusterTimeout != 0 {
sb.WriteString(fmt.Sprintf("cluster_timeout = %d\n", cfg.ClusterTimeout))
}
sb.WriteString("\n")
// Security section
@@ -268,6 +276,10 @@ func ApplyLocalConfig(cfg *Config, local *LocalConfig) {
if local.MaxCores != 0 {
cfg.MaxCores = local.MaxCores
}
// Apply cluster timeout from config file (overrides default)
if local.ClusterTimeout != 0 {
cfg.ClusterTimeoutMinutes = local.ClusterTimeout
}
if cfg.RetentionDays == 30 && local.RetentionDays != 0 {
cfg.RetentionDays = local.RetentionDays
}
@@ -295,6 +307,7 @@ func ConfigFromConfig(cfg *Config) *LocalConfig {
DumpJobs: cfg.DumpJobs,
CPUWorkload: cfg.CPUWorkloadType,
MaxCores: cfg.MaxCores,
ClusterTimeout: cfg.ClusterTimeoutMinutes,
RetentionDays: cfg.RetentionDays,
MinBackups: cfg.MinBackups,
MaxRetries: cfg.MaxRetries,
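As a usage illustration, the new key sits alongside the other performance settings in the local config file. Assuming that section is named [performance] (the section header itself is outside this hunk), an override would look like:

[performance]
cluster_timeout = 1440

The value is in minutes; any non-zero cluster_timeout in the file overrides the built-in default.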

View File

@@ -126,13 +126,46 @@ func (m *MySQL) ListTables(ctx context.Context, database string) ([]string, erro
return tables, rows.Err()
}
// validateMySQLIdentifier checks if a database/table name is safe for use in SQL
// Prevents SQL injection by only allowing alphanumeric names with underscores
func validateMySQLIdentifier(name string) error {
if len(name) == 0 {
return fmt.Errorf("identifier cannot be empty")
}
if len(name) > 64 {
return fmt.Errorf("identifier too long (max 64 chars): %s", name)
}
// Only allow alphanumeric, underscores, and must start with letter or underscore
for i, c := range name {
if i == 0 && !((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || c == '_') {
return fmt.Errorf("identifier must start with letter or underscore: %s", name)
}
if !((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || (c >= '0' && c <= '9') || c == '_') {
return fmt.Errorf("identifier contains invalid character %q: %s", c, name)
}
}
return nil
}
// quoteMySQLIdentifier safely quotes a MySQL identifier
func quoteMySQLIdentifier(name string) string {
// Escape any backticks by doubling them and wrap in backticks
return "`" + strings.ReplaceAll(name, "`", "``") + "`"
}
// CreateDatabase creates a new database
func (m *MySQL) CreateDatabase(ctx context.Context, name string) error {
if m.db == nil {
return fmt.Errorf("not connected to database")
}
query := fmt.Sprintf("CREATE DATABASE IF NOT EXISTS `%s`", name)
// Validate identifier to prevent SQL injection
if err := validateMySQLIdentifier(name); err != nil {
return fmt.Errorf("invalid database name: %w", err)
}
// Use safe quoting for identifier
query := fmt.Sprintf("CREATE DATABASE IF NOT EXISTS %s", quoteMySQLIdentifier(name))
_, err := m.db.ExecContext(ctx, query)
if err != nil {
return fmt.Errorf("failed to create database %s: %w", name, err)
@@ -148,7 +181,13 @@ func (m *MySQL) DropDatabase(ctx context.Context, name string) error {
return fmt.Errorf("not connected to database")
}
query := fmt.Sprintf("DROP DATABASE IF EXISTS `%s`", name)
// Validate identifier to prevent SQL injection
if err := validateMySQLIdentifier(name); err != nil {
return fmt.Errorf("invalid database name: %w", err)
}
// Use safe quoting for identifier
query := fmt.Sprintf("DROP DATABASE IF EXISTS %s", quoteMySQLIdentifier(name))
_, err := m.db.ExecContext(ctx, query)
if err != nil {
return fmt.Errorf("failed to drop database %s: %w", name, err)

View File

@@ -15,7 +15,6 @@ import (
"github.com/jackc/pgx/v5/pgxpool"
"github.com/jackc/pgx/v5/stdlib"
_ "github.com/jackc/pgx/v5/stdlib" // PostgreSQL driver (pgx)
)
// PostgreSQL implements Database interface for PostgreSQL
@@ -163,14 +162,47 @@ func (p *PostgreSQL) ListTables(ctx context.Context, database string) ([]string,
return tables, rows.Err()
}
// validateIdentifier checks if a database/table name is safe for use in SQL
// Prevents SQL injection by only allowing alphanumeric names with underscores
func validateIdentifier(name string) error {
if len(name) == 0 {
return fmt.Errorf("identifier cannot be empty")
}
if len(name) > 63 {
return fmt.Errorf("identifier too long (max 63 chars): %s", name)
}
// Only allow alphanumeric, underscores, and must start with letter or underscore
for i, c := range name {
if i == 0 && !((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || c == '_') {
return fmt.Errorf("identifier must start with letter or underscore: %s", name)
}
if !((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || (c >= '0' && c <= '9') || c == '_') {
return fmt.Errorf("identifier contains invalid character %q: %s", c, name)
}
}
return nil
}
// quoteIdentifier safely quotes a PostgreSQL identifier
func quoteIdentifier(name string) string {
// Double any existing double quotes and wrap in double quotes
return `"` + strings.ReplaceAll(name, `"`, `""`) + `"`
}
// CreateDatabase creates a new database
func (p *PostgreSQL) CreateDatabase(ctx context.Context, name string) error {
if p.db == nil {
return fmt.Errorf("not connected to database")
}
// Validate identifier to prevent SQL injection
if err := validateIdentifier(name); err != nil {
return fmt.Errorf("invalid database name: %w", err)
}
// PostgreSQL doesn't support CREATE DATABASE in transactions or prepared statements
query := fmt.Sprintf("CREATE DATABASE %s", name)
// Use quoted identifier for safety
query := fmt.Sprintf("CREATE DATABASE %s", quoteIdentifier(name))
_, err := p.db.ExecContext(ctx, query)
if err != nil {
return fmt.Errorf("failed to create database %s: %w", name, err)
@@ -186,8 +218,14 @@ func (p *PostgreSQL) DropDatabase(ctx context.Context, name string) error {
return fmt.Errorf("not connected to database")
}
// Validate identifier to prevent SQL injection
if err := validateIdentifier(name); err != nil {
return fmt.Errorf("invalid database name: %w", err)
}
// Force drop connections and drop database
query := fmt.Sprintf("DROP DATABASE IF EXISTS %s", name)
// Use quoted identifier for safety
query := fmt.Sprintf("DROP DATABASE IF EXISTS %s", quoteIdentifier(name))
_, err := p.db.ExecContext(ctx, query)
if err != nil {
return fmt.Errorf("failed to drop database %s: %w", name, err)

228
internal/dedup/chunker.go Normal file
View File

@@ -0,0 +1,228 @@
// Package dedup provides content-defined chunking and deduplication
// for database backups, similar to restic/borgbackup but with native
// database dump support.
package dedup
import (
"crypto/sha256"
"encoding/hex"
"io"
)
// Chunker constants for content-defined chunking
const (
// DefaultMinChunkSize is the minimum chunk size (4KB)
DefaultMinChunkSize = 4 * 1024
// DefaultAvgChunkSize is the target average chunk size (8KB)
DefaultAvgChunkSize = 8 * 1024
// DefaultMaxChunkSize is the maximum chunk size (32KB)
DefaultMaxChunkSize = 32 * 1024
// WindowSize for the rolling hash
WindowSize = 48
// ChunkMask determines average chunk size
// For 8KB average: we look for hash % 8192 == 0
ChunkMask = DefaultAvgChunkSize - 1
)
// Gear hash table - random values for each byte
// This is used for the Gear rolling hash which is simpler and faster than Buzhash
var gearTable = [256]uint64{
0x5c95c078, 0x22408989, 0x2d48a214, 0x12842087, 0x530f8afb, 0x474536b9, 0x2963b4f1, 0x44cb738b,
0x4ea7403d, 0x4d606b6e, 0x074ec5d3, 0x3f7e82f4, 0x4e3d26e7, 0x5cb4e82f, 0x7b0a1ef5, 0x3d4e7c92,
0x2a81ed69, 0x7f853df8, 0x452c8cf7, 0x0f4f3c9d, 0x3a5e81b7, 0x6cb2d819, 0x2e4c5f93, 0x7e8a1c57,
0x1f9d3e8c, 0x4b7c2a5d, 0x3c8f1d6e, 0x5d2a7b4f, 0x6e9c3f8a, 0x7a4d1e5c, 0x2b8c4f7d, 0x4f7d2c9e,
0x5a1e3d7c, 0x6b4f8a2d, 0x3e7c9d5a, 0x7d2a4f8b, 0x4c9e7d3a, 0x5b8a1c6e, 0x2d5f4a9c, 0x7a3c8d6b,
0x6e2a7b4d, 0x3f8c5d9a, 0x4a7d3e5b, 0x5c9a2d7e, 0x7b4e8f3c, 0x2a6d9c5b, 0x3e4a7d8c, 0x5d7b2e9a,
0x4c8a3d7b, 0x6e9d5c8a, 0x7a3e4d9c, 0x2b5c8a7d, 0x4d7e3a9c, 0x5a9c7d3e, 0x3c8b5a7d, 0x7d4e9c2a,
0x6a3d8c5b, 0x4e7a9d3c, 0x5c2a7b9e, 0x3a9d4e7c, 0x7b8c5a2d, 0x2d7e4a9c, 0x4a3c9d7b, 0x5e9a7c3d,
0x6c4d8a5b, 0x3b7e9c4a, 0x7a5c2d8b, 0x4d9a3e7c, 0x5b7c4a9e, 0x2e8a5d3c, 0x3c9e7a4d, 0x7d4a8c5b,
0x6b2d9a7c, 0x4a8c3e5d, 0x5d7a9c2e, 0x3e4c7b9a, 0x7c9d5a4b, 0x2a7e8c3d, 0x4c5a9d7e, 0x5a3e7c4b,
0x6d8a2c9e, 0x3c7b4a8d, 0x7e2d9c5a, 0x4b9a7e3c, 0x5c4d8a7b, 0x2d9e3c5a, 0x3a7c9d4e, 0x7b5a4c8d,
0x6a9c2e7b, 0x4d3e8a9c, 0x5e7b4d2a, 0x3b9a7c5d, 0x7c4e8a3b, 0x2e7d9c4a, 0x4a8b3e7d, 0x5d2c9a7e,
0x6c7a5d3e, 0x3e9c4a7b, 0x7a8d2c5e, 0x4c3e9a7d, 0x5b9c7e2a, 0x2a4d7c9e, 0x3d8a5c4b, 0x7e7b9a3c,
0x6b4a8d9e, 0x4e9c3b7a, 0x5a7d4e9c, 0x3c2a8b7d, 0x7d9e5c4a, 0x2b8a7d3e, 0x4d5c9a2b, 0x5e3a7c8d,
0x6a9d4b7c, 0x3b7a9c5e, 0x7c4b8a2d, 0x4a9e7c3b, 0x5d2b9a4e, 0x2e7c4d9a, 0x3a9b7e4c, 0x7e5a3c8b,
0x6c8a9d4e, 0x4b7c2a5e, 0x5a3e9c7d, 0x3d9a4b7c, 0x7a2d5e9c, 0x2c8b7a3d, 0x4e9c5a2b, 0x5b4d7e9a,
0x6d7a3c8b, 0x3e2b9a5d, 0x7c9d4a7e, 0x4a5e3c9b, 0x5e7a9d2c, 0x2b3c7e9a, 0x3a9e4b7d, 0x7d8a5c3e,
0x6b9c2d4a, 0x4c7e9a3b, 0x5a2c8b7e, 0x3b4d9a5c, 0x7e9b3a4d, 0x2d5a7c9e, 0x4b8d3e7a, 0x5c9a4b2d,
0x6a7c8d9e, 0x3c9e5a7b, 0x7b4a2c9d, 0x4d3b7e9a, 0x5e9c4a3b, 0x2a7b9d4e, 0x3e5c8a7b, 0x7a9d3e5c,
0x6c2a7b8d, 0x4e9a5c3b, 0x5b7d2a9e, 0x3a4e9c7b, 0x7d8b3a5c, 0x2c9e7a4b, 0x4a3d5e9c, 0x5d7b8a2e,
0x6b9a4c7d, 0x3d5a9e4b, 0x7e2c7b9a, 0x4b9d3a5e, 0x5c4e7a9d, 0x2e8a3c7b, 0x3b7c9e5a, 0x7a4d8b3e,
0x6d9c5a2b, 0x4a7e3d9c, 0x5e2a9b7d, 0x3c9a7e4b, 0x7b3e5c9a, 0x2a4b8d7e, 0x4d9c2a5b, 0x5a7d9e3c,
0x6c3b8a7d, 0x3e9d4a5c, 0x7d5c2b9e, 0x4c8a7d3b, 0x5b9e3c7a, 0x2d7a9c4e, 0x3a5e7b9d, 0x7e8b4a3c,
0x6a2d9e7b, 0x4b3e5a9d, 0x5d9c7b2a, 0x3b7d4e9c, 0x7c9a3b5e, 0x2e5c8a7d, 0x4a7b9d3e, 0x5c3a7e9b,
0x6d9e5c4a, 0x3c4a7b9e, 0x7a9d2e5c, 0x4e7c9a3d, 0x5a8b4e7c, 0x2b9a3d7e, 0x3d5b8a9c, 0x7b4e9a2d,
0x6c7d3a9e, 0x4a9c5e3b, 0x5e2b7d9a, 0x3a8d4c7b, 0x7d3e9a5c, 0x2c7a8b9e, 0x4b5d3a7c, 0x5c9a7e2b,
0x6a4b9d3e, 0x3e7c2a9d, 0x7c8a5b4e, 0x4d9e3c7a, 0x5b3a9e7c, 0x2e9c7b4a, 0x3b4e8a9d, 0x7a9c4e3b,
0x6d2a7c9e, 0x4c8b9a5d, 0x5a9e2b7c, 0x3c3d7a9e, 0x7e5a9c4b, 0x2a8d3e7c, 0x4e7a5c9b, 0x5d9b8a2e,
0x6b4c9e7a, 0x3a9d5b4e, 0x7b2e8a9c, 0x4a5c3e9b, 0x5c9a4d7e, 0x2d7e9a3c, 0x3e8b7c5a, 0x7c9e2a4d,
0x6a3b7d9c, 0x4d9a8b3e, 0x5e5c2a7b, 0x3b4a9d7c, 0x7a7c5e9b, 0x2c9b4a8d, 0x4b3e7c9a, 0x5a9d3b7e,
0x6c8a4e9d, 0x3d7b9c5a, 0x7e2a4b9c, 0x4c9e5d3a, 0x5b7a9c4e, 0x2e4d8a7b, 0x3a9c7e5d, 0x7b8d3a9e,
0x6d5c9a4b, 0x4a2e7b9d, 0x5d9b4c8a, 0x3c7a9e2b, 0x7d4b8c9e, 0x2b9a5c4d, 0x4e7d3a9c, 0x5c8a9e7b,
}
// Chunk represents a single deduplicated chunk
type Chunk struct {
// Hash is the SHA-256 hash of the chunk data (content-addressed)
Hash string
// Data is the raw chunk bytes
Data []byte
// Offset is the byte offset in the original file
Offset int64
// Length is the size of this chunk
Length int
}
// ChunkerConfig holds configuration for the chunker
type ChunkerConfig struct {
MinSize int // Minimum chunk size
AvgSize int // Target average chunk size
MaxSize int // Maximum chunk size
}
// DefaultChunkerConfig returns sensible defaults
func DefaultChunkerConfig() ChunkerConfig {
return ChunkerConfig{
MinSize: DefaultMinChunkSize,
AvgSize: DefaultAvgChunkSize,
MaxSize: DefaultMaxChunkSize,
}
}
// Chunker performs content-defined chunking using Gear hash
type Chunker struct {
reader io.Reader
config ChunkerConfig
// Rolling hash state
hash uint64
// Current chunk state
buf []byte
offset int64
mask uint64
}
// NewChunker creates a new chunker for the given reader
func NewChunker(r io.Reader, config ChunkerConfig) *Chunker {
// Calculate mask for target average size
// We want: avg_size = 1 / P(boundary)
// With mask, P(boundary) = 1 / (mask + 1)
// So mask = avg_size - 1
mask := uint64(config.AvgSize - 1)
return &Chunker{
reader: r,
config: config,
buf: make([]byte, 0, config.MaxSize),
mask: mask,
}
}
// Next returns the next chunk from the input stream
// Returns io.EOF when no more data is available
func (c *Chunker) Next() (*Chunk, error) {
c.buf = c.buf[:0]
c.hash = 0
// Read bytes until we find a chunk boundary or hit max size
singleByte := make([]byte, 1)
for {
n, err := c.reader.Read(singleByte)
if n == 0 {
if err == io.EOF {
// Return remaining data as final chunk
if len(c.buf) > 0 {
return c.makeChunk(), nil
}
return nil, io.EOF
}
if err != nil {
return nil, err
}
continue
}
b := singleByte[0]
c.buf = append(c.buf, b)
// Update Gear rolling hash
// Gear hash: hash = (hash << 1) + gear_table[byte]
c.hash = (c.hash << 1) + gearTable[b]
// Check for chunk boundary after minimum size
if len(c.buf) >= c.config.MinSize {
// Check if we hit a boundary (hash matches mask pattern)
if (c.hash & c.mask) == 0 {
return c.makeChunk(), nil
}
}
// Force boundary at max size
if len(c.buf) >= c.config.MaxSize {
return c.makeChunk(), nil
}
}
}
// makeChunk creates a Chunk from the current buffer
func (c *Chunker) makeChunk() *Chunk {
// Compute SHA-256 hash
h := sha256.Sum256(c.buf)
hash := hex.EncodeToString(h[:])
// Copy data
data := make([]byte, len(c.buf))
copy(data, c.buf)
chunk := &Chunk{
Hash: hash,
Data: data,
Offset: c.offset,
Length: len(data),
}
c.offset += int64(len(data))
return chunk
}
// ChunkReader splits a reader into content-defined chunks
// and returns them via a channel for concurrent processing
func ChunkReader(r io.Reader, config ChunkerConfig) (<-chan *Chunk, <-chan error) {
chunks := make(chan *Chunk, 100)
errs := make(chan error, 1)
go func() {
defer close(chunks)
defer close(errs)
chunker := NewChunker(r, config)
for {
chunk, err := chunker.Next()
if err == io.EOF {
return
}
if err != nil {
errs <- err
return
}
chunks <- chunk
}
}()
return chunks, errs
}
// HashData computes SHA-256 hash of data
func HashData(data []byte) string {
h := sha256.Sum256(data)
return hex.EncodeToString(h[:])
}
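Illustrative usage of the new chunker, assuming a hypothetical main in the dbbackup module and an input file named backup.sql. With the default 8 KiB average, NewChunker sets mask = 0x1FFF, so a boundary fires whenever the low 13 bits of the Gear hash are zero — roughly once every 8192 bytes once the 4 KiB minimum has been read.

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"dbbackup/internal/dedup"
)

func main() {
	f, err := os.Open("backup.sql") // assumed input path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	chunker := dedup.NewChunker(f, dedup.DefaultChunkerConfig())
	for {
		chunk, err := chunker.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		// Each chunk is content-addressed: identical content always yields
		// the same SHA-256, regardless of where it sits in the file.
		fmt.Printf("%s %d bytes at offset %d\n", chunk.Hash[:12], chunk.Length, chunk.Offset)
	}
}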

View File

@@ -0,0 +1,217 @@
package dedup
import (
"bytes"
"crypto/rand"
"io"
"testing"
)
func TestChunker_Basic(t *testing.T) {
// Create test data
data := make([]byte, 100*1024) // 100KB
rand.Read(data)
chunker := NewChunker(bytes.NewReader(data), DefaultChunkerConfig())
var chunks []*Chunk
var totalBytes int
for {
chunk, err := chunker.Next()
if err == io.EOF {
break
}
if err != nil {
t.Fatalf("Chunker.Next() error: %v", err)
}
chunks = append(chunks, chunk)
totalBytes += chunk.Length
// Verify chunk properties
if chunk.Length < DefaultMinChunkSize && len(chunks) < 10 {
// Only the last chunk can be smaller than min
// (unless file is smaller than min)
}
if chunk.Length > DefaultMaxChunkSize {
t.Errorf("Chunk %d exceeds max size: %d > %d", len(chunks), chunk.Length, DefaultMaxChunkSize)
}
if chunk.Hash == "" {
t.Errorf("Chunk %d has empty hash", len(chunks))
}
if len(chunk.Hash) != 64 { // SHA-256 hex length
t.Errorf("Chunk %d has invalid hash length: %d", len(chunks), len(chunk.Hash))
}
}
if totalBytes != len(data) {
t.Errorf("Total bytes mismatch: got %d, want %d", totalBytes, len(data))
}
t.Logf("Chunked %d bytes into %d chunks", totalBytes, len(chunks))
t.Logf("Average chunk size: %d bytes", totalBytes/len(chunks))
}
func TestChunker_Deterministic(t *testing.T) {
// Same data should produce same chunks
data := make([]byte, 50*1024)
rand.Read(data)
// First pass
chunker1 := NewChunker(bytes.NewReader(data), DefaultChunkerConfig())
var hashes1 []string
for {
chunk, err := chunker1.Next()
if err == io.EOF {
break
}
if err != nil {
t.Fatal(err)
}
hashes1 = append(hashes1, chunk.Hash)
}
// Second pass
chunker2 := NewChunker(bytes.NewReader(data), DefaultChunkerConfig())
var hashes2 []string
for {
chunk, err := chunker2.Next()
if err == io.EOF {
break
}
if err != nil {
t.Fatal(err)
}
hashes2 = append(hashes2, chunk.Hash)
}
// Compare
if len(hashes1) != len(hashes2) {
t.Fatalf("Different chunk counts: %d vs %d", len(hashes1), len(hashes2))
}
for i := range hashes1 {
if hashes1[i] != hashes2[i] {
t.Errorf("Hash mismatch at chunk %d: %s vs %s", i, hashes1[i], hashes2[i])
}
}
}
func TestChunker_ShiftedData(t *testing.T) {
// Test that shifted data still shares chunks (the key CDC benefit)
original := make([]byte, 100*1024)
rand.Read(original)
// Create shifted version (prepend some bytes)
prefix := make([]byte, 1000)
rand.Read(prefix)
shifted := append(prefix, original...)
// Chunk both
config := DefaultChunkerConfig()
chunker1 := NewChunker(bytes.NewReader(original), config)
hashes1 := make(map[string]bool)
for {
chunk, err := chunker1.Next()
if err == io.EOF {
break
}
if err != nil {
t.Fatal(err)
}
hashes1[chunk.Hash] = true
}
chunker2 := NewChunker(bytes.NewReader(shifted), config)
var matched, total int
for {
chunk, err := chunker2.Next()
if err == io.EOF {
break
}
if err != nil {
t.Fatal(err)
}
total++
if hashes1[chunk.Hash] {
matched++
}
}
// Should have significant overlap despite the shift
overlapRatio := float64(matched) / float64(total)
t.Logf("Chunk overlap after %d-byte shift: %.1f%% (%d/%d chunks)",
len(prefix), overlapRatio*100, matched, total)
// We expect at least 50% overlap for content-defined chunking
if overlapRatio < 0.5 {
t.Errorf("Low chunk overlap: %.1f%% (expected >50%%)", overlapRatio*100)
}
}
func TestChunker_SmallFile(t *testing.T) {
// File smaller than min chunk size
data := []byte("hello world")
chunker := NewChunker(bytes.NewReader(data), DefaultChunkerConfig())
chunk, err := chunker.Next()
if err != nil {
t.Fatal(err)
}
if chunk.Length != len(data) {
t.Errorf("Expected chunk length %d, got %d", len(data), chunk.Length)
}
// Should be EOF after
_, err = chunker.Next()
if err != io.EOF {
t.Errorf("Expected EOF, got %v", err)
}
}
func TestChunker_EmptyFile(t *testing.T) {
chunker := NewChunker(bytes.NewReader(nil), DefaultChunkerConfig())
_, err := chunker.Next()
if err != io.EOF {
t.Errorf("Expected EOF for empty file, got %v", err)
}
}
func TestHashData(t *testing.T) {
hash := HashData([]byte("test"))
if len(hash) != 64 {
t.Errorf("Expected 64-char hash, got %d", len(hash))
}
// Known SHA-256 of "test"
expected := "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
if hash != expected {
t.Errorf("Hash mismatch: got %s, want %s", hash, expected)
}
}
func BenchmarkChunker(b *testing.B) {
// 1MB of random data
data := make([]byte, 1024*1024)
rand.Read(data)
b.ResetTimer()
b.SetBytes(int64(len(data)))
for i := 0; i < b.N; i++ {
chunker := NewChunker(bytes.NewReader(data), DefaultChunkerConfig())
for {
_, err := chunker.Next()
if err == io.EOF {
break
}
if err != nil {
b.Fatal(err)
}
}
}
}

306
internal/dedup/index.go Normal file
View File

@@ -0,0 +1,306 @@
package dedup
import (
"database/sql"
"fmt"
"os"
"path/filepath"
"strings"
"time"
_ "github.com/mattn/go-sqlite3" // SQLite driver
)
// ChunkIndex provides fast chunk lookups using SQLite
type ChunkIndex struct {
db *sql.DB
dbPath string
}
// NewChunkIndex opens or creates a chunk index database at the default location
func NewChunkIndex(basePath string) (*ChunkIndex, error) {
dbPath := filepath.Join(basePath, "chunks.db")
return NewChunkIndexAt(dbPath)
}
// NewChunkIndexAt opens or creates a chunk index database at a specific path
// Use this to put the SQLite index on local storage when chunks are on NFS/CIFS
func NewChunkIndexAt(dbPath string) (*ChunkIndex, error) {
// Ensure parent directory exists
if err := os.MkdirAll(filepath.Dir(dbPath), 0700); err != nil {
return nil, fmt.Errorf("failed to create index directory: %w", err)
}
// Add busy_timeout to handle lock contention gracefully
db, err := sql.Open("sqlite3", dbPath+"?_journal_mode=WAL&_synchronous=NORMAL&_busy_timeout=5000")
if err != nil {
return nil, fmt.Errorf("failed to open chunk index: %w", err)
}
// Test the connection and check for locking issues
if err := db.Ping(); err != nil {
db.Close()
if isNFSLockingError(err) {
return nil, fmt.Errorf("database locked (common on NFS/CIFS): %w\n\n"+
"HINT: Use --index-db to put the SQLite index on local storage:\n"+
" dbbackup dedup ... --index-db /var/lib/dbbackup/dedup-index.db", err)
}
return nil, fmt.Errorf("failed to connect to chunk index: %w", err)
}
idx := &ChunkIndex{db: db, dbPath: dbPath}
if err := idx.migrate(); err != nil {
db.Close()
if isNFSLockingError(err) {
return nil, fmt.Errorf("database locked during migration (common on NFS/CIFS): %w\n\n"+
"HINT: Use --index-db to put the SQLite index on local storage:\n"+
" dbbackup dedup ... --index-db /var/lib/dbbackup/dedup-index.db", err)
}
return nil, err
}
return idx, nil
}
// isNFSLockingError checks if an error is likely due to NFS/CIFS locking issues
func isNFSLockingError(err error) bool {
if err == nil {
return false
}
errStr := err.Error()
return strings.Contains(errStr, "database is locked") ||
strings.Contains(errStr, "SQLITE_BUSY") ||
strings.Contains(errStr, "cannot lock") ||
strings.Contains(errStr, "lock protocol")
}
// migrate creates the schema if needed
func (idx *ChunkIndex) migrate() error {
schema := `
CREATE TABLE IF NOT EXISTS chunks (
hash TEXT PRIMARY KEY,
size_raw INTEGER NOT NULL,
size_stored INTEGER NOT NULL,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
last_accessed DATETIME,
ref_count INTEGER DEFAULT 1
);
CREATE TABLE IF NOT EXISTS manifests (
id TEXT PRIMARY KEY,
database_type TEXT,
database_name TEXT,
database_host TEXT,
created_at DATETIME,
original_size INTEGER,
stored_size INTEGER,
chunk_count INTEGER,
new_chunks INTEGER,
dedup_ratio REAL,
sha256 TEXT,
verified_at DATETIME
);
CREATE INDEX IF NOT EXISTS idx_chunks_created ON chunks(created_at);
CREATE INDEX IF NOT EXISTS idx_chunks_accessed ON chunks(last_accessed);
CREATE INDEX IF NOT EXISTS idx_manifests_created ON manifests(created_at);
CREATE INDEX IF NOT EXISTS idx_manifests_database ON manifests(database_name);
`
_, err := idx.db.Exec(schema)
return err
}
// Close closes the database
func (idx *ChunkIndex) Close() error {
return idx.db.Close()
}
// AddChunk records a chunk in the index
func (idx *ChunkIndex) AddChunk(hash string, sizeRaw, sizeStored int) error {
_, err := idx.db.Exec(`
INSERT INTO chunks (hash, size_raw, size_stored, created_at, last_accessed, ref_count)
VALUES (?, ?, ?, ?, ?, 1)
ON CONFLICT(hash) DO UPDATE SET
ref_count = ref_count + 1,
last_accessed = ?
`, hash, sizeRaw, sizeStored, time.Now(), time.Now(), time.Now())
return err
}
// HasChunk checks if a chunk exists in the index
func (idx *ChunkIndex) HasChunk(hash string) (bool, error) {
var count int
err := idx.db.QueryRow("SELECT COUNT(*) FROM chunks WHERE hash = ?", hash).Scan(&count)
return count > 0, err
}
// GetChunk retrieves chunk metadata
func (idx *ChunkIndex) GetChunk(hash string) (*ChunkMeta, error) {
var m ChunkMeta
err := idx.db.QueryRow(`
SELECT hash, size_raw, size_stored, created_at, ref_count
FROM chunks WHERE hash = ?
`, hash).Scan(&m.Hash, &m.SizeRaw, &m.SizeStored, &m.CreatedAt, &m.RefCount)
if err == sql.ErrNoRows {
return nil, nil
}
if err != nil {
return nil, err
}
return &m, nil
}
// ChunkMeta holds metadata about a chunk
type ChunkMeta struct {
Hash string
SizeRaw int64
SizeStored int64
CreatedAt time.Time
RefCount int
}
// DecrementRef decreases the reference count for a chunk
// Returns true if the chunk should be deleted (ref_count <= 0)
func (idx *ChunkIndex) DecrementRef(hash string) (shouldDelete bool, err error) {
result, err := idx.db.Exec(`
UPDATE chunks SET ref_count = ref_count - 1 WHERE hash = ?
`, hash)
if err != nil {
return false, err
}
affected, _ := result.RowsAffected()
if affected == 0 {
return false, nil
}
var refCount int
err = idx.db.QueryRow("SELECT ref_count FROM chunks WHERE hash = ?", hash).Scan(&refCount)
if err != nil {
return false, err
}
return refCount <= 0, nil
}
// RemoveChunk removes a chunk from the index
func (idx *ChunkIndex) RemoveChunk(hash string) error {
_, err := idx.db.Exec("DELETE FROM chunks WHERE hash = ?", hash)
return err
}
// AddManifest records a manifest in the index
func (idx *ChunkIndex) AddManifest(m *Manifest) error {
_, err := idx.db.Exec(`
INSERT OR REPLACE INTO manifests
(id, database_type, database_name, database_host, created_at,
original_size, stored_size, chunk_count, new_chunks, dedup_ratio, sha256)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
`, m.ID, m.DatabaseType, m.DatabaseName, m.DatabaseHost, m.CreatedAt,
m.OriginalSize, m.StoredSize, m.ChunkCount, m.NewChunks, m.DedupRatio, m.SHA256)
return err
}
// RemoveManifest removes a manifest from the index
func (idx *ChunkIndex) RemoveManifest(id string) error {
_, err := idx.db.Exec("DELETE FROM manifests WHERE id = ?", id)
return err
}
// UpdateManifestVerified updates the verified timestamp for a manifest
func (idx *ChunkIndex) UpdateManifestVerified(id string, verifiedAt time.Time) error {
_, err := idx.db.Exec("UPDATE manifests SET verified_at = ? WHERE id = ?", verifiedAt, id)
return err
}
// IndexStats holds statistics about the dedup index
type IndexStats struct {
TotalChunks int64
TotalManifests int64
TotalSizeRaw int64 // Uncompressed, undeduplicated (per-chunk)
TotalSizeStored int64 // On-disk after dedup+compression (per-chunk)
DedupRatio float64 // Based on manifests (real dedup ratio)
OldestChunk time.Time
NewestChunk time.Time
// Manifest-based stats (accurate dedup calculation)
TotalBackupSize int64 // Sum of all backup original sizes
TotalNewData int64 // Sum of all new chunks stored
SpaceSaved int64 // Difference = what dedup saved
}
// Stats returns statistics about the index
func (idx *ChunkIndex) Stats() (*IndexStats, error) {
stats := &IndexStats{}
var oldestStr, newestStr string
err := idx.db.QueryRow(`
SELECT
COUNT(*),
COALESCE(SUM(size_raw), 0),
COALESCE(SUM(size_stored), 0),
COALESCE(MIN(created_at), ''),
COALESCE(MAX(created_at), '')
FROM chunks
`).Scan(&stats.TotalChunks, &stats.TotalSizeRaw, &stats.TotalSizeStored,
&oldestStr, &newestStr)
if err != nil {
return nil, err
}
// Parse time strings
if oldestStr != "" {
stats.OldestChunk, _ = time.Parse("2006-01-02 15:04:05", oldestStr)
}
if newestStr != "" {
stats.NewestChunk, _ = time.Parse("2006-01-02 15:04:05", newestStr)
}
idx.db.QueryRow("SELECT COUNT(*) FROM manifests").Scan(&stats.TotalManifests)
// Calculate accurate dedup ratio from manifests
// Sum all backup original sizes and all new data stored
err = idx.db.QueryRow(`
SELECT
COALESCE(SUM(original_size), 0),
COALESCE(SUM(stored_size), 0)
FROM manifests
`).Scan(&stats.TotalBackupSize, &stats.TotalNewData)
if err != nil {
return nil, err
}
// Calculate real dedup ratio: how much data was deduplicated across all backups
if stats.TotalBackupSize > 0 {
stats.DedupRatio = 1.0 - float64(stats.TotalNewData)/float64(stats.TotalBackupSize)
stats.SpaceSaved = stats.TotalBackupSize - stats.TotalNewData
}
return stats, nil
}
// ListOrphanedChunks returns chunks that have ref_count <= 0
func (idx *ChunkIndex) ListOrphanedChunks() ([]string, error) {
rows, err := idx.db.Query("SELECT hash FROM chunks WHERE ref_count <= 0")
if err != nil {
return nil, err
}
defer rows.Close()
var hashes []string
for rows.Next() {
var hash string
if err := rows.Scan(&hash); err != nil {
continue
}
hashes = append(hashes, hash)
}
return hashes, rows.Err()
}
// Vacuum cleans up the database
func (idx *ChunkIndex) Vacuum() error {
_, err := idx.db.Exec("VACUUM")
return err
}
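Illustrative usage of the index, assuming the local index path suggested in the NFS/CIFS hint above:

package main

import (
	"fmt"
	"log"

	"dbbackup/internal/dedup"
)

func main() {
	// Keep the SQLite index on local disk even when chunks live on network storage.
	idx, err := dedup.NewChunkIndexAt("/var/lib/dbbackup/dedup-index.db")
	if err != nil {
		log.Fatal(err)
	}
	defer idx.Close()

	stats, err := idx.Stats()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("chunks=%d backups=%d saved=%d bytes (ratio %.1f%%)\n",
		stats.TotalChunks, stats.TotalManifests, stats.SpaceSaved, stats.DedupRatio*100)
}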

189
internal/dedup/manifest.go Normal file
View File

@@ -0,0 +1,189 @@
package dedup
import (
"encoding/json"
"fmt"
"os"
"path/filepath"
"time"
)
// Manifest describes a single backup as a list of chunks
type Manifest struct {
// ID is the unique identifier (typically timestamp-based)
ID string `json:"id"`
// Name is an optional human-readable name
Name string `json:"name,omitempty"`
// CreatedAt is when this backup was created
CreatedAt time.Time `json:"created_at"`
// Database information
DatabaseType string `json:"database_type"` // postgres, mysql
DatabaseName string `json:"database_name"`
DatabaseHost string `json:"database_host"`
// Chunks is the ordered list of chunk hashes
// The file is reconstructed by concatenating chunks in order
Chunks []ChunkRef `json:"chunks"`
// Stats about the backup
OriginalSize int64 `json:"original_size"` // Size before deduplication
StoredSize int64 `json:"stored_size"` // Size after dedup (new chunks only)
ChunkCount int `json:"chunk_count"` // Total chunks
NewChunks int `json:"new_chunks"` // Chunks that weren't deduplicated
DedupRatio float64 `json:"dedup_ratio"` // 1.0 = no dedup, 0.0 = 100% dedup
// Encryption and compression settings used
Encrypted bool `json:"encrypted"`
Compressed bool `json:"compressed"`
Decompressed bool `json:"decompressed,omitempty"` // Input was auto-decompressed before chunking
// Verification
SHA256 string `json:"sha256"` // Hash of reconstructed file
VerifiedAt time.Time `json:"verified_at,omitempty"`
}
// ChunkRef references a chunk in the manifest
type ChunkRef struct {
Hash string `json:"h"` // SHA-256 hash (64 chars)
Offset int64 `json:"o"` // Offset in original file
Length int `json:"l"` // Chunk length
}
// ManifestStore manages backup manifests
type ManifestStore struct {
basePath string
}
// NewManifestStore creates a new manifest store
func NewManifestStore(basePath string) (*ManifestStore, error) {
manifestDir := filepath.Join(basePath, "manifests")
if err := os.MkdirAll(manifestDir, 0700); err != nil {
return nil, fmt.Errorf("failed to create manifest directory: %w", err)
}
return &ManifestStore{basePath: basePath}, nil
}
// manifestPath returns the path for a manifest ID
func (s *ManifestStore) manifestPath(id string) string {
return filepath.Join(s.basePath, "manifests", id+".manifest.json")
}
// Save writes a manifest to disk
func (s *ManifestStore) Save(m *Manifest) error {
path := s.manifestPath(m.ID)
data, err := json.MarshalIndent(m, "", " ")
if err != nil {
return fmt.Errorf("failed to marshal manifest: %w", err)
}
// Atomic write
tmpPath := path + ".tmp"
if err := os.WriteFile(tmpPath, data, 0600); err != nil {
return fmt.Errorf("failed to write manifest: %w", err)
}
if err := os.Rename(tmpPath, path); err != nil {
os.Remove(tmpPath)
return fmt.Errorf("failed to commit manifest: %w", err)
}
return nil
}
// Load reads a manifest from disk
func (s *ManifestStore) Load(id string) (*Manifest, error) {
path := s.manifestPath(id)
data, err := os.ReadFile(path)
if err != nil {
return nil, fmt.Errorf("failed to read manifest %s: %w", id, err)
}
var m Manifest
if err := json.Unmarshal(data, &m); err != nil {
return nil, fmt.Errorf("failed to parse manifest %s: %w", id, err)
}
return &m, nil
}
// Delete removes a manifest
func (s *ManifestStore) Delete(id string) error {
path := s.manifestPath(id)
if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
return fmt.Errorf("failed to delete manifest %s: %w", id, err)
}
return nil
}
// List returns all manifest IDs
func (s *ManifestStore) List() ([]string, error) {
manifestDir := filepath.Join(s.basePath, "manifests")
entries, err := os.ReadDir(manifestDir)
if err != nil {
return nil, fmt.Errorf("failed to list manifests: %w", err)
}
var ids []string
for _, e := range entries {
if e.IsDir() {
continue
}
name := e.Name()
if len(name) > 14 && name[len(name)-14:] == ".manifest.json" {
ids = append(ids, name[:len(name)-14])
}
}
return ids, nil
}
// ListAll returns all manifests sorted by creation time (newest first)
func (s *ManifestStore) ListAll() ([]*Manifest, error) {
ids, err := s.List()
if err != nil {
return nil, err
}
var manifests []*Manifest
for _, id := range ids {
m, err := s.Load(id)
if err != nil {
continue // Skip corrupted manifests
}
manifests = append(manifests, m)
}
// Sort by creation time (newest first)
for i := 0; i < len(manifests)-1; i++ {
for j := i + 1; j < len(manifests); j++ {
if manifests[j].CreatedAt.After(manifests[i].CreatedAt) {
manifests[i], manifests[j] = manifests[j], manifests[i]
}
}
}
return manifests, nil
}
// GetChunkHashes returns all unique chunk hashes referenced by manifests
func (s *ManifestStore) GetChunkHashes() (map[string]int, error) {
manifests, err := s.ListAll()
if err != nil {
return nil, err
}
// Map hash -> reference count
refs := make(map[string]int)
for _, m := range manifests {
for _, c := range m.Chunks {
refs[c.Hash]++
}
}
return refs, nil
}
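Illustrative round trip through the manifest store; the base path, manifest ID, and chunk contents are made up for the sketch:

package main

import (
	"fmt"
	"log"
	"time"

	"dbbackup/internal/dedup"
)

func main() {
	store, err := dedup.NewManifestStore("/backups/dedup") // path is an assumption
	if err != nil {
		log.Fatal(err)
	}

	// Minimal manifest with a single made-up chunk reference.
	m := &dedup.Manifest{
		ID:           "20260114-150000-appdb",
		DatabaseType: "postgres",
		DatabaseName: "appdb",
		CreatedAt:    time.Now(),
		Chunks:       []dedup.ChunkRef{{Hash: dedup.HashData([]byte("demo")), Offset: 0, Length: 4}},
		ChunkCount:   1,
	}
	if err := store.Save(m); err != nil { // atomic: temp file + rename
		log.Fatal(err)
	}

	all, err := store.ListAll() // newest first
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d manifest(s); latest: %s\n", len(all), all[0].ID)
}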

235
internal/dedup/metrics.go Normal file
View File

@@ -0,0 +1,235 @@
package dedup
import (
"fmt"
"os"
"path/filepath"
"strings"
"time"
)
// DedupMetrics holds deduplication statistics for Prometheus
type DedupMetrics struct {
// Global stats
TotalChunks int64
TotalManifests int64
TotalBackupSize int64 // Sum of all backup original sizes
TotalNewData int64 // Sum of all new chunks stored
SpaceSaved int64 // Bytes saved by deduplication
DedupRatio float64 // Overall dedup ratio (0-1)
DiskUsage int64 // Actual bytes on disk
// Per-database stats
ByDatabase map[string]*DatabaseDedupMetrics
}
// DatabaseDedupMetrics holds per-database dedup stats
type DatabaseDedupMetrics struct {
Database string
BackupCount int
TotalSize int64
StoredSize int64
DedupRatio float64
LastBackupTime time.Time
LastVerified time.Time
}
// CollectMetrics gathers dedup statistics from the index and store
func CollectMetrics(basePath string, indexPath string) (*DedupMetrics, error) {
var idx *ChunkIndex
var err error
if indexPath != "" {
idx, err = NewChunkIndexAt(indexPath)
} else {
idx, err = NewChunkIndex(basePath)
}
if err != nil {
return nil, fmt.Errorf("failed to open chunk index: %w", err)
}
defer idx.Close()
store, err := NewChunkStore(StoreConfig{BasePath: basePath})
if err != nil {
return nil, fmt.Errorf("failed to open chunk store: %w", err)
}
// Get index stats
stats, err := idx.Stats()
if err != nil {
return nil, fmt.Errorf("failed to get index stats: %w", err)
}
// Get store stats
storeStats, err := store.Stats()
if err != nil {
return nil, fmt.Errorf("failed to get store stats: %w", err)
}
metrics := &DedupMetrics{
TotalChunks: stats.TotalChunks,
TotalManifests: stats.TotalManifests,
TotalBackupSize: stats.TotalBackupSize,
TotalNewData: stats.TotalNewData,
SpaceSaved: stats.SpaceSaved,
DedupRatio: stats.DedupRatio,
DiskUsage: storeStats.TotalSize,
ByDatabase: make(map[string]*DatabaseDedupMetrics),
}
// Collect per-database metrics from manifest store
manifestStore, err := NewManifestStore(basePath)
if err != nil {
return metrics, nil // Return partial metrics
}
manifests, err := manifestStore.ListAll()
if err != nil {
return metrics, nil // Return partial metrics
}
for _, m := range manifests {
dbKey := m.DatabaseName
if dbKey == "" {
dbKey = "_default"
}
dbMetrics, ok := metrics.ByDatabase[dbKey]
if !ok {
dbMetrics = &DatabaseDedupMetrics{
Database: dbKey,
}
metrics.ByDatabase[dbKey] = dbMetrics
}
dbMetrics.BackupCount++
dbMetrics.TotalSize += m.OriginalSize
dbMetrics.StoredSize += m.StoredSize
if m.CreatedAt.After(dbMetrics.LastBackupTime) {
dbMetrics.LastBackupTime = m.CreatedAt
}
if !m.VerifiedAt.IsZero() && m.VerifiedAt.After(dbMetrics.LastVerified) {
dbMetrics.LastVerified = m.VerifiedAt
}
}
// Calculate per-database dedup ratios
for _, dbMetrics := range metrics.ByDatabase {
if dbMetrics.TotalSize > 0 {
dbMetrics.DedupRatio = 1.0 - float64(dbMetrics.StoredSize)/float64(dbMetrics.TotalSize)
}
}
return metrics, nil
}
// WritePrometheusTextfile writes dedup metrics in Prometheus format
func WritePrometheusTextfile(path string, instance string, basePath string, indexPath string) error {
metrics, err := CollectMetrics(basePath, indexPath)
if err != nil {
return err
}
output := FormatPrometheusMetrics(metrics, instance)
// Atomic write
dir := filepath.Dir(path)
if err := os.MkdirAll(dir, 0755); err != nil {
return fmt.Errorf("failed to create directory: %w", err)
}
tmpPath := path + ".tmp"
if err := os.WriteFile(tmpPath, []byte(output), 0644); err != nil {
return fmt.Errorf("failed to write temp file: %w", err)
}
if err := os.Rename(tmpPath, path); err != nil {
os.Remove(tmpPath)
return fmt.Errorf("failed to rename temp file: %w", err)
}
return nil
}
// FormatPrometheusMetrics formats dedup metrics in Prometheus exposition format
func FormatPrometheusMetrics(m *DedupMetrics, instance string) string {
var b strings.Builder
now := time.Now().Unix()
b.WriteString("# DBBackup Deduplication Prometheus Metrics\n")
b.WriteString(fmt.Sprintf("# Generated at: %s\n", time.Now().Format(time.RFC3339)))
b.WriteString(fmt.Sprintf("# Instance: %s\n", instance))
b.WriteString("\n")
// Global dedup metrics
b.WriteString("# HELP dbbackup_dedup_chunks_total Total number of unique chunks stored\n")
b.WriteString("# TYPE dbbackup_dedup_chunks_total gauge\n")
b.WriteString(fmt.Sprintf("dbbackup_dedup_chunks_total{instance=%q} %d\n", instance, m.TotalChunks))
b.WriteString("\n")
b.WriteString("# HELP dbbackup_dedup_manifests_total Total number of deduplicated backups\n")
b.WriteString("# TYPE dbbackup_dedup_manifests_total gauge\n")
b.WriteString(fmt.Sprintf("dbbackup_dedup_manifests_total{instance=%q} %d\n", instance, m.TotalManifests))
b.WriteString("\n")
b.WriteString("# HELP dbbackup_dedup_backup_bytes_total Total logical size of all backups in bytes\n")
b.WriteString("# TYPE dbbackup_dedup_backup_bytes_total gauge\n")
b.WriteString(fmt.Sprintf("dbbackup_dedup_backup_bytes_total{instance=%q} %d\n", instance, m.TotalBackupSize))
b.WriteString("\n")
b.WriteString("# HELP dbbackup_dedup_stored_bytes_total Total unique data stored in bytes (after dedup)\n")
b.WriteString("# TYPE dbbackup_dedup_stored_bytes_total gauge\n")
b.WriteString(fmt.Sprintf("dbbackup_dedup_stored_bytes_total{instance=%q} %d\n", instance, m.TotalNewData))
b.WriteString("\n")
b.WriteString("# HELP dbbackup_dedup_space_saved_bytes Bytes saved by deduplication\n")
b.WriteString("# TYPE dbbackup_dedup_space_saved_bytes gauge\n")
b.WriteString(fmt.Sprintf("dbbackup_dedup_space_saved_bytes{instance=%q} %d\n", instance, m.SpaceSaved))
b.WriteString("\n")
b.WriteString("# HELP dbbackup_dedup_ratio Deduplication ratio (0-1, higher is better)\n")
b.WriteString("# TYPE dbbackup_dedup_ratio gauge\n")
b.WriteString(fmt.Sprintf("dbbackup_dedup_ratio{instance=%q} %.4f\n", instance, m.DedupRatio))
b.WriteString("\n")
b.WriteString("# HELP dbbackup_dedup_disk_usage_bytes Actual disk usage of chunk store\n")
b.WriteString("# TYPE dbbackup_dedup_disk_usage_bytes gauge\n")
b.WriteString(fmt.Sprintf("dbbackup_dedup_disk_usage_bytes{instance=%q} %d\n", instance, m.DiskUsage))
b.WriteString("\n")
// Per-database metrics
if len(m.ByDatabase) > 0 {
b.WriteString("# HELP dbbackup_dedup_database_backup_count Number of deduplicated backups per database\n")
b.WriteString("# TYPE dbbackup_dedup_database_backup_count gauge\n")
for _, db := range m.ByDatabase {
b.WriteString(fmt.Sprintf("dbbackup_dedup_database_backup_count{instance=%q,database=%q} %d\n",
instance, db.Database, db.BackupCount))
}
b.WriteString("\n")
b.WriteString("# HELP dbbackup_dedup_database_ratio Deduplication ratio per database (0-1)\n")
b.WriteString("# TYPE dbbackup_dedup_database_ratio gauge\n")
for _, db := range m.ByDatabase {
b.WriteString(fmt.Sprintf("dbbackup_dedup_database_ratio{instance=%q,database=%q} %.4f\n",
instance, db.Database, db.DedupRatio))
}
b.WriteString("\n")
b.WriteString("# HELP dbbackup_dedup_database_last_backup_timestamp Last backup timestamp per database\n")
b.WriteString("# TYPE dbbackup_dedup_database_last_backup_timestamp gauge\n")
for _, db := range m.ByDatabase {
if !db.LastBackupTime.IsZero() {
b.WriteString(fmt.Sprintf("dbbackup_dedup_database_last_backup_timestamp{instance=%q,database=%q} %d\n",
instance, db.Database, db.LastBackupTime.Unix()))
}
}
b.WriteString("\n")
}
b.WriteString("# HELP dbbackup_dedup_scrape_timestamp Unix timestamp when dedup metrics were collected\n")
b.WriteString("# TYPE dbbackup_dedup_scrape_timestamp gauge\n")
b.WriteString(fmt.Sprintf("dbbackup_dedup_scrape_timestamp{instance=%q} %d\n", instance, now))
return b.String()
}
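Illustrative wiring for the node_exporter textfile collector; every path and the instance label are assumptions:

package main

import (
	"log"

	"dbbackup/internal/dedup"
)

func main() {
	// WritePrometheusTextfile collects the stats and writes the .prom file atomically.
	err := dedup.WritePrometheusTextfile(
		"/var/lib/node_exporter/textfile_collector/dbbackup_dedup.prom",
		"db01",                             // instance label
		"/backups/dedup",                   // chunk store base path
		"/var/lib/dbbackup/dedup-index.db", // SQLite index on local disk
	)
	if err != nil {
		log.Fatal(err)
	}
}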

367
internal/dedup/store.go Normal file
View File

@@ -0,0 +1,367 @@
package dedup
import (
"compress/gzip"
"crypto/aes"
"crypto/cipher"
"crypto/rand"
"crypto/sha256"
"encoding/hex"
"fmt"
"io"
"os"
"path/filepath"
"sync"
)
// ChunkStore manages content-addressed chunk storage
// Chunks are stored as: <base>/<prefix>/<hash>.chunk[.gz][.enc]
type ChunkStore struct {
basePath string
compress bool
encryptionKey []byte // 32 bytes for AES-256
mu sync.RWMutex
existingChunks map[string]bool // Cache of known chunks
}
// StoreConfig holds configuration for the chunk store
type StoreConfig struct {
BasePath string
Compress bool // Enable gzip compression
EncryptionKey string // Optional: hex-encoded 32-byte key for AES-256-GCM
}
// NewChunkStore creates a new chunk store
func NewChunkStore(config StoreConfig) (*ChunkStore, error) {
store := &ChunkStore{
basePath: config.BasePath,
compress: config.Compress,
existingChunks: make(map[string]bool),
}
// Parse encryption key if provided
if config.EncryptionKey != "" {
key, err := hex.DecodeString(config.EncryptionKey)
if err != nil {
return nil, fmt.Errorf("invalid encryption key: %w", err)
}
if len(key) != 32 {
return nil, fmt.Errorf("encryption key must be 32 bytes (got %d)", len(key))
}
store.encryptionKey = key
}
// Create base directory structure
if err := os.MkdirAll(config.BasePath, 0700); err != nil {
return nil, fmt.Errorf("failed to create chunk store: %w", err)
}
// Create chunks and manifests directories
for _, dir := range []string{"chunks", "manifests"} {
if err := os.MkdirAll(filepath.Join(config.BasePath, dir), 0700); err != nil {
return nil, fmt.Errorf("failed to create %s directory: %w", dir, err)
}
}
return store, nil
}
// chunkPath returns the filesystem path for a chunk hash
// Uses 2-character prefix for directory sharding (256 subdirs)
func (s *ChunkStore) chunkPath(hash string) string {
if len(hash) < 2 {
return filepath.Join(s.basePath, "chunks", "xx", hash+s.chunkExt())
}
prefix := hash[:2]
return filepath.Join(s.basePath, "chunks", prefix, hash+s.chunkExt())
}
// chunkExt returns the file extension based on compression/encryption settings
func (s *ChunkStore) chunkExt() string {
ext := ".chunk"
if s.compress {
ext += ".gz"
}
if s.encryptionKey != nil {
ext += ".enc"
}
return ext
}
// Has checks if a chunk exists in the store
func (s *ChunkStore) Has(hash string) bool {
s.mu.RLock()
if exists, ok := s.existingChunks[hash]; ok {
s.mu.RUnlock()
return exists
}
s.mu.RUnlock()
// Check filesystem
path := s.chunkPath(hash)
_, err := os.Stat(path)
exists := err == nil
s.mu.Lock()
s.existingChunks[hash] = exists
s.mu.Unlock()
return exists
}
// Put stores a chunk, returning true if it was new (not deduplicated)
func (s *ChunkStore) Put(chunk *Chunk) (isNew bool, err error) {
// Check if already exists (deduplication!)
if s.Has(chunk.Hash) {
return false, nil
}
path := s.chunkPath(chunk.Hash)
// Create prefix directory
if err := os.MkdirAll(filepath.Dir(path), 0700); err != nil {
return false, fmt.Errorf("failed to create chunk directory: %w", err)
}
// Prepare data
data := chunk.Data
// Compress if enabled
if s.compress {
data, err = s.compressData(data)
if err != nil {
return false, fmt.Errorf("compression failed: %w", err)
}
}
// Encrypt if enabled
if s.encryptionKey != nil {
data, err = s.encryptData(data)
if err != nil {
return false, fmt.Errorf("encryption failed: %w", err)
}
}
// Write atomically (write to temp, then rename)
tmpPath := path + ".tmp"
if err := os.WriteFile(tmpPath, data, 0600); err != nil {
return false, fmt.Errorf("failed to write chunk: %w", err)
}
if err := os.Rename(tmpPath, path); err != nil {
os.Remove(tmpPath)
return false, fmt.Errorf("failed to commit chunk: %w", err)
}
// Update cache
s.mu.Lock()
s.existingChunks[chunk.Hash] = true
s.mu.Unlock()
return true, nil
}
// Get retrieves a chunk by hash
func (s *ChunkStore) Get(hash string) (*Chunk, error) {
path := s.chunkPath(hash)
data, err := os.ReadFile(path)
if err != nil {
return nil, fmt.Errorf("failed to read chunk %s: %w", hash, err)
}
// Decrypt if encrypted
if s.encryptionKey != nil {
data, err = s.decryptData(data)
if err != nil {
return nil, fmt.Errorf("decryption failed: %w", err)
}
}
// Decompress if compressed
if s.compress {
data, err = s.decompressData(data)
if err != nil {
return nil, fmt.Errorf("decompression failed: %w", err)
}
}
// Verify hash
h := sha256.Sum256(data)
actualHash := hex.EncodeToString(h[:])
if actualHash != hash {
return nil, fmt.Errorf("chunk hash mismatch: expected %s, got %s", hash, actualHash)
}
return &Chunk{
Hash: hash,
Data: data,
Length: len(data),
}, nil
}
// Delete removes a chunk from the store
func (s *ChunkStore) Delete(hash string) error {
path := s.chunkPath(hash)
if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
return fmt.Errorf("failed to delete chunk %s: %w", hash, err)
}
s.mu.Lock()
delete(s.existingChunks, hash)
s.mu.Unlock()
return nil
}
// Stats returns storage statistics
type StoreStats struct {
TotalChunks int64
TotalSize int64 // Bytes on disk (after compression/encryption)
UniqueSize int64 // Bytes of unique data
Directories int
}
// Stats returns statistics about the chunk store
func (s *ChunkStore) Stats() (*StoreStats, error) {
stats := &StoreStats{}
chunksDir := filepath.Join(s.basePath, "chunks")
err := filepath.Walk(chunksDir, func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
if info.IsDir() {
stats.Directories++
return nil
}
stats.TotalChunks++
stats.TotalSize += info.Size()
return nil
})
return stats, err
}
// LoadIndex loads the existing chunk hashes into memory
func (s *ChunkStore) LoadIndex() error {
s.mu.Lock()
defer s.mu.Unlock()
s.existingChunks = make(map[string]bool)
chunksDir := filepath.Join(s.basePath, "chunks")
return filepath.Walk(chunksDir, func(path string, info os.FileInfo, err error) error {
if err != nil || info.IsDir() {
return err
}
// Extract hash from filename
base := filepath.Base(path)
hash := base
// Remove extensions
for _, ext := range []string{".enc", ".gz", ".chunk"} {
if len(hash) > len(ext) && hash[len(hash)-len(ext):] == ext {
hash = hash[:len(hash)-len(ext)]
}
}
if len(hash) == 64 { // SHA-256 hex length
s.existingChunks[hash] = true
}
return nil
})
}
// compressData compresses data using gzip
func (s *ChunkStore) compressData(data []byte) ([]byte, error) {
var buf []byte
w, err := gzip.NewWriterLevel((*bytesBuffer)(&buf), gzip.BestCompression)
if err != nil {
return nil, err
}
if _, err := w.Write(data); err != nil {
return nil, err
}
if err := w.Close(); err != nil {
return nil, err
}
return buf, nil
}
// bytesBuffer is a simple io.Writer that appends to a byte slice
type bytesBuffer []byte
func (b *bytesBuffer) Write(p []byte) (int, error) {
*b = append(*b, p...)
return len(p), nil
}
// decompressData decompresses gzip data
func (s *ChunkStore) decompressData(data []byte) ([]byte, error) {
r, err := gzip.NewReader(&bytesReader{data: data})
if err != nil {
return nil, err
}
defer r.Close()
return io.ReadAll(r)
}
// bytesReader is a simple io.Reader from a byte slice
type bytesReader struct {
data []byte
pos int
}
func (r *bytesReader) Read(p []byte) (int, error) {
if r.pos >= len(r.data) {
return 0, io.EOF
}
n := copy(p, r.data[r.pos:])
r.pos += n
return n, nil
}
// encryptData encrypts data using AES-256-GCM
func (s *ChunkStore) encryptData(plaintext []byte) ([]byte, error) {
block, err := aes.NewCipher(s.encryptionKey)
if err != nil {
return nil, err
}
gcm, err := cipher.NewGCM(block)
if err != nil {
return nil, err
}
nonce := make([]byte, gcm.NonceSize())
if _, err := rand.Read(nonce); err != nil {
return nil, err
}
// Prepend nonce to ciphertext
return gcm.Seal(nonce, nonce, plaintext, nil), nil
}
// decryptData decrypts AES-256-GCM encrypted data
func (s *ChunkStore) decryptData(ciphertext []byte) ([]byte, error) {
block, err := aes.NewCipher(s.encryptionKey)
if err != nil {
return nil, err
}
gcm, err := cipher.NewGCM(block)
if err != nil {
return nil, err
}
if len(ciphertext) < gcm.NonceSize() {
return nil, fmt.Errorf("ciphertext too short")
}
nonce := ciphertext[:gcm.NonceSize()]
ciphertext = ciphertext[gcm.NonceSize():]
return gcm.Open(nil, nonce, ciphertext, nil)
}
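Illustrative write path tying chunker, store, and index together. This glue is hypothetical (the real pipeline is outside this diff), and it passes the raw length as the stored size, ignoring compression:

package example

import (
	"io"

	"dbbackup/internal/dedup"
)

// storeStream chunks r, writes only previously unseen chunks to the store,
// and records every chunk (new or duplicate) in the index.
func storeStream(r io.Reader, store *dedup.ChunkStore, idx *dedup.ChunkIndex) ([]dedup.ChunkRef, int64, error) {
	var refs []dedup.ChunkRef
	var newBytes int64
	chunker := dedup.NewChunker(r, dedup.DefaultChunkerConfig())
	for {
		chunk, err := chunker.Next()
		if err == io.EOF {
			return refs, newBytes, nil
		}
		if err != nil {
			return refs, newBytes, err
		}
		isNew, err := store.Put(chunk) // no-op when the hash already exists (dedup hit)
		if err != nil {
			return refs, newBytes, err
		}
		if isNew {
			newBytes += int64(chunk.Length)
		}
		// Stored size approximated by the raw length; the real pipeline would
		// use the on-disk size after compression/encryption.
		if err := idx.AddChunk(chunk.Hash, chunk.Length, chunk.Length); err != nil {
			return refs, newBytes, err
		}
		refs = append(refs, dedup.ChunkRef{Hash: chunk.Hash, Offset: chunk.Offset, Length: chunk.Length})
	}
}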

View File

@@ -223,11 +223,11 @@ func (r *DrillResult) IsSuccess() bool {
// Summary returns a human-readable summary of the drill
func (r *DrillResult) Summary() string {
status := " PASSED"
status := "[OK] PASSED"
if !r.Success {
status = " FAILED"
status = "[FAIL] FAILED"
} else if r.Status == StatusPartial {
status = "⚠️ PARTIAL"
status = "[WARN] PARTIAL"
}
return fmt.Sprintf("%s - %s (%.2fs) - %d tables, %d rows",

View File

@@ -41,20 +41,20 @@ func (e *Engine) Run(ctx context.Context, config *DrillConfig) (*DrillResult, er
TargetRTO: float64(config.MaxRestoreSeconds),
}
e.log.Info("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━")
e.log.Info(" 🧪 DR Drill: " + result.DrillID)
e.log.Info("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━")
e.log.Info("=====================================================")
e.log.Info(" [TEST] DR Drill: " + result.DrillID)
e.log.Info("=====================================================")
e.log.Info("")
// Cleanup function for error cases
var containerID string
cleanup := func() {
if containerID != "" && config.CleanupOnExit && (result.Success || !config.KeepOnFailure) {
e.log.Info("🗑️ Cleaning up container...")
e.log.Info("[DEL] Cleaning up container...")
e.docker.RemoveContainer(context.Background(), containerID)
} else if containerID != "" {
result.ContainerKept = true
e.log.Info("📦 Container kept for debugging: " + containerID)
e.log.Info("[PKG] Container kept for debugging: " + containerID)
}
}
defer cleanup()
@@ -88,7 +88,7 @@ func (e *Engine) Run(ctx context.Context, config *DrillConfig) (*DrillResult, er
}
containerID = container.ID
result.ContainerID = containerID
e.log.Info("📦 Container started: " + containerID[:12])
e.log.Info("[PKG] Container started: " + containerID[:12])
// Wait for container to be healthy
if err := e.docker.WaitForHealth(ctx, containerID, config.DatabaseType, config.ContainerTimeout); err != nil {
@@ -118,7 +118,7 @@ func (e *Engine) Run(ctx context.Context, config *DrillConfig) (*DrillResult, er
result.RestoreTime = time.Since(restoreStart).Seconds()
e.completePhase(&phase, fmt.Sprintf("Restored in %.2fs", result.RestoreTime))
result.Phases = append(result.Phases, phase)
e.log.Info(fmt.Sprintf(" Backup restored in %.2fs", result.RestoreTime))
e.log.Info(fmt.Sprintf("[OK] Backup restored in %.2fs", result.RestoreTime))
// Phase 4: Validate
phase = e.startPhase("Validate Database")
@@ -182,24 +182,24 @@ func (e *Engine) preflightChecks(ctx context.Context, config *DrillConfig) error
if err := e.docker.CheckDockerAvailable(ctx); err != nil {
return fmt.Errorf("docker not available: %w", err)
}
e.log.Info(" Docker is available")
e.log.Info("[OK] Docker is available")
// Check backup file exists
if _, err := os.Stat(config.BackupPath); err != nil {
return fmt.Errorf("backup file not found: %s", config.BackupPath)
}
e.log.Info(" Backup file exists: " + filepath.Base(config.BackupPath))
e.log.Info("[OK] Backup file exists: " + filepath.Base(config.BackupPath))
// Pull Docker image
image := config.ContainerImage
if image == "" {
image = GetDefaultImage(config.DatabaseType, "")
}
e.log.Info("⬇️ Pulling image: " + image)
e.log.Info("[DOWN] Pulling image: " + image)
if err := e.docker.PullImage(ctx, image); err != nil {
return fmt.Errorf("failed to pull image: %w", err)
}
e.log.Info(" Image ready: " + image)
e.log.Info("[OK] Image ready: " + image)
return nil
}
@@ -243,7 +243,7 @@ func (e *Engine) restoreBackup(ctx context.Context, config *DrillConfig, contain
backupName := filepath.Base(config.BackupPath)
containerBackupPath := "/tmp/" + backupName
e.log.Info("📁 Copying backup to container...")
e.log.Info("[DIR] Copying backup to container...")
if err := e.docker.CopyToContainer(ctx, containerID, config.BackupPath, containerBackupPath); err != nil {
return fmt.Errorf("failed to copy backup: %w", err)
}
@@ -256,7 +256,7 @@ func (e *Engine) restoreBackup(ctx context.Context, config *DrillConfig, contain
}
// Restore based on database type and format
e.log.Info("🔄 Restoring backup...")
e.log.Info("[EXEC] Restoring backup...")
return e.executeRestore(ctx, config, containerID, containerBackupPath, containerConfig)
}
@@ -366,13 +366,13 @@ func (e *Engine) validateDatabase(ctx context.Context, config *DrillConfig, resu
tables, err := validator.GetTableList(ctx)
if err == nil {
result.TableCount = len(tables)
e.log.Info(fmt.Sprintf("📊 Tables found: %d", result.TableCount))
e.log.Info(fmt.Sprintf("[STATS] Tables found: %d", result.TableCount))
}
totalRows, err := validator.GetTotalRowCount(ctx)
if err == nil {
result.TotalRows = totalRows
e.log.Info(fmt.Sprintf("📊 Total rows: %d", result.TotalRows))
e.log.Info(fmt.Sprintf("[STATS] Total rows: %d", result.TotalRows))
}
dbSize, err := validator.GetDatabaseSize(ctx, config.DatabaseName)
@@ -387,9 +387,9 @@ func (e *Engine) validateDatabase(ctx context.Context, config *DrillConfig, resu
result.CheckResults = append(result.CheckResults, tr)
if !tr.Success {
errorCount++
e.log.Warn(" " + tr.Message)
e.log.Warn("[FAIL] " + tr.Message)
} else {
e.log.Info(" " + tr.Message)
e.log.Info("[OK] " + tr.Message)
}
}
}
@@ -404,9 +404,9 @@ func (e *Engine) validateDatabase(ctx context.Context, config *DrillConfig, resu
totalQueryTime += qr.Duration
if !qr.Success {
errorCount++
e.log.Warn(fmt.Sprintf(" %s: %s", qr.Name, qr.Error))
e.log.Warn(fmt.Sprintf("[FAIL] %s: %s", qr.Name, qr.Error))
} else {
e.log.Info(fmt.Sprintf(" %s: %s (%.0fms)", qr.Name, qr.Result, qr.Duration))
e.log.Info(fmt.Sprintf("[OK] %s: %s (%.0fms)", qr.Name, qr.Result, qr.Duration))
}
}
if len(queryResults) > 0 {
@@ -421,9 +421,9 @@ func (e *Engine) validateDatabase(ctx context.Context, config *DrillConfig, resu
result.CheckResults = append(result.CheckResults, cr)
if !cr.Success {
errorCount++
e.log.Warn(" " + cr.Message)
e.log.Warn("[FAIL] " + cr.Message)
} else {
e.log.Info(" " + cr.Message)
e.log.Info("[OK] " + cr.Message)
}
}
}
@@ -433,7 +433,7 @@ func (e *Engine) validateDatabase(ctx context.Context, config *DrillConfig, resu
errorCount++
msg := fmt.Sprintf("Total rows (%d) below minimum (%d)", result.TotalRows, config.MinRowCount)
result.Warnings = append(result.Warnings, msg)
e.log.Warn("⚠️ " + msg)
e.log.Warn("[WARN] " + msg)
}
return errorCount
@@ -441,7 +441,7 @@ func (e *Engine) validateDatabase(ctx context.Context, config *DrillConfig, resu
// startPhase starts a new drill phase
func (e *Engine) startPhase(name string) DrillPhase {
e.log.Info("▶️ " + name)
e.log.Info("[RUN] " + name)
return DrillPhase{
Name: name,
Status: "running",
@@ -463,7 +463,7 @@ func (e *Engine) failPhase(phase *DrillPhase, message string) {
phase.Duration = phase.EndTime.Sub(phase.StartTime).Seconds()
phase.Status = "failed"
phase.Message = message
e.log.Error(" Phase failed: " + message)
e.log.Error("[FAIL] Phase failed: " + message)
}
// finalize completes the drill result
@@ -472,9 +472,9 @@ func (e *Engine) finalize(result *DrillResult) {
result.Duration = result.EndTime.Sub(result.StartTime).Seconds()
e.log.Info("")
e.log.Info("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━")
e.log.Info("=====================================================")
e.log.Info(" " + result.Summary())
e.log.Info("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━")
e.log.Info("=====================================================")
if result.Success {
e.log.Info(fmt.Sprintf(" RTO: %.2fs (target: %.0fs) %s",
@@ -484,9 +484,9 @@ func (e *Engine) finalize(result *DrillResult) {
func boolIcon(b bool) string {
if b {
return ""
return "[OK]"
}
return ""
return "[FAIL]"
}
// Cleanup removes drill resources
@@ -498,7 +498,7 @@ func (e *Engine) Cleanup(ctx context.Context, drillID string) error {
for _, c := range containers {
if strings.Contains(c.Name, drillID) || (drillID == "" && strings.HasPrefix(c.Name, "drill_")) {
e.log.Info("🗑️ Removing container: " + c.Name)
e.log.Info("[DEL] Removing container: " + c.Name)
if err := e.docker.RemoveContainer(ctx, c.ID); err != nil {
e.log.Warn("Failed to remove container", "id", c.ID, "error", err)
}

View File

@@ -8,7 +8,7 @@ import (
func TestEncryptDecrypt(t *testing.T) {
// Test data
original := []byte("This is a secret database backup that needs encryption! 🔒")
original := []byte("This is a secret database backup that needs encryption! [LOCK]")
// Test with passphrase
t.Run("Passphrase", func(t *testing.T) {
@@ -57,7 +57,7 @@ func TestEncryptDecrypt(t *testing.T) {
string(original), string(decrypted))
}
t.Log(" Encryption/decryption successful")
t.Log("[OK] Encryption/decryption successful")
})
// Test with direct key
@@ -102,7 +102,7 @@ func TestEncryptDecrypt(t *testing.T) {
t.Errorf("Decrypted data doesn't match original")
}
t.Log(" Direct key encryption/decryption successful")
t.Log("[OK] Direct key encryption/decryption successful")
})
// Test wrong password
@@ -133,7 +133,7 @@ func TestEncryptDecrypt(t *testing.T) {
t.Error("Expected decryption to fail with wrong password, but it succeeded")
}
t.Logf(" Wrong password correctly rejected: %v", err)
t.Logf("[OK] Wrong password correctly rejected: %v", err)
})
}
@@ -183,7 +183,7 @@ func TestLargeData(t *testing.T) {
t.Errorf("Large data decryption failed")
}
t.Log(" Large data encryption/decryption successful")
t.Log("[OK] Large data encryption/decryption successful")
}
func TestKeyGeneration(t *testing.T) {
@@ -207,7 +207,7 @@ func TestKeyGeneration(t *testing.T) {
t.Error("Generated keys are identical - randomness broken!")
}
t.Log(" Key generation successful")
t.Log("[OK] Key generation successful")
}
func TestKeyDerivation(t *testing.T) {
@@ -230,5 +230,5 @@ func TestKeyDerivation(t *testing.T) {
t.Error("Different salts produced same key")
}
t.Log(" Key derivation successful")
t.Log("[OK] Key derivation successful")
}

View File

@@ -339,7 +339,7 @@ func (e *CloneEngine) Backup(ctx context.Context, opts *BackupOptions) (*BackupR
// Save metadata
meta := &metadata.BackupMetadata{
Version: "3.40.0",
Version: "3.42.1",
Timestamp: startTime,
Database: opts.Database,
DatabaseType: "mysql",

View File

@@ -234,10 +234,26 @@ func (e *MySQLDumpEngine) Backup(ctx context.Context, opts *BackupOptions) (*Bac
gzWriter.Close()
}
// Wait for command
if err := cmd.Wait(); err != nil {
// Wait for command with proper context handling
cmdDone := make(chan error, 1)
go func() {
cmdDone <- cmd.Wait()
}()
var cmdErr error
select {
case cmdErr = <-cmdDone:
// Command completed
case <-ctx.Done():
e.log.Warn("MySQL backup cancelled - killing process")
cmd.Process.Kill()
<-cmdDone
cmdErr = ctx.Err()
}
if cmdErr != nil {
stderr := stderrBuf.String()
return nil, fmt.Errorf("mysqldump failed: %w\n%s", err, stderr)
return nil, fmt.Errorf("mysqldump failed: %w\n%s", cmdErr, stderr)
}
// Get file info
@@ -254,7 +270,7 @@ func (e *MySQLDumpEngine) Backup(ctx context.Context, opts *BackupOptions) (*Bac
// Save metadata
meta := &metadata.BackupMetadata{
Version: "3.40.0",
Version: "3.42.1",
Timestamp: startTime,
Database: opts.Database,
DatabaseType: "mysql",
@@ -442,8 +458,25 @@ func (e *MySQLDumpEngine) BackupToWriter(ctx context.Context, w io.Writer, opts
gzWriter.Close()
}
if err := cmd.Wait(); err != nil {
return nil, fmt.Errorf("mysqldump failed: %w\n%s", err, stderrBuf.String())
// Wait for command with proper context handling
cmdDone := make(chan error, 1)
go func() {
cmdDone <- cmd.Wait()
}()
var cmdErr error
select {
case cmdErr = <-cmdDone:
// Command completed
case <-ctx.Done():
e.log.Warn("MySQL streaming backup cancelled - killing process")
cmd.Process.Kill()
<-cmdDone
cmdErr = ctx.Err()
}
if cmdErr != nil {
return nil, fmt.Errorf("mysqldump failed: %w\n%s", cmdErr, stderrBuf.String())
}
return &BackupResult{
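Review note: the wait-or-cancel block now appears in both Backup and BackupToWriter; a shared helper along these lines would remove the duplication (sketch only, not part of this change):

package example

import (
	"context"
	"os/exec"
)

// waitOrKill waits for an already-started cmd, killing it if ctx is cancelled first.
func waitOrKill(ctx context.Context, cmd *exec.Cmd) error {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err
	case <-ctx.Done():
		_ = cmd.Process.Kill() // best effort; the receive below reaps the process
		<-done
		return ctx.Err()
	}
}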

View File

@@ -63,7 +63,7 @@ func (b *BtrfsBackend) Detect(dataDir string) (bool, error) {
// CreateSnapshot creates a Btrfs snapshot
func (b *BtrfsBackend) CreateSnapshot(ctx context.Context, opts SnapshotOptions) (*Snapshot, error) {
if b.config == nil || b.config.Subvolume == "" {
return nil, fmt.Errorf("Btrfs subvolume not configured")
return nil, fmt.Errorf("btrfs subvolume not configured")
}
// Generate snapshot name

View File

@@ -188,6 +188,8 @@ func (e *SnapshotEngine) Backup(ctx context.Context, opts *BackupOptions) (*Back
// Step 4: Mount snapshot
mountPoint := e.config.MountPoint
if mountPoint == "" {
// Note: snapshot engine uses snapshot.Config, which doesn't have GetEffectiveWorkDir()
// TODO: Refactor to use main config.Config for WorkDir support
mountPoint = filepath.Join(os.TempDir(), fmt.Sprintf("dbbackup_snap_%s", timestamp))
}
@@ -223,7 +225,7 @@ func (e *SnapshotEngine) Backup(ctx context.Context, opts *BackupOptions) (*Back
// Save metadata
meta := &metadata.BackupMetadata{
Version: "3.40.0",
Version: "3.42.1",
Timestamp: startTime,
Database: opts.Database,
DatabaseType: "mysql",

View File

@@ -0,0 +1,11 @@
// Package installer provides systemd service installation for dbbackup
package installer
import (
"embed"
)
// Templates contains embedded systemd unit files
//
//go:embed templates/*.service templates/*.timer
var Templates embed.FS
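Illustrative read of an embedded unit file; the template file name is an assumption inferred from the unit names used by the installer below:

package main

import (
	"fmt"
	"log"

	"dbbackup/internal/installer"
)

func main() {
	// Template file name is assumed, based on the dbbackup@.service unit name.
	data, err := installer.Templates.ReadFile("templates/dbbackup@.service")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(data))
}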

View File

@@ -0,0 +1,680 @@
// Package installer provides systemd service installation for dbbackup
package installer
import (
"context"
"fmt"
"io"
"os"
"os/exec"
"os/user"
"path/filepath"
"runtime"
"strings"
"text/template"
"dbbackup/internal/logger"
)
// Installer handles systemd service installation
type Installer struct {
log logger.Logger
unitDir string // /etc/systemd/system or custom
dryRun bool
}
// InstallOptions configures the installation
type InstallOptions struct {
// Instance name (e.g., "production", "staging")
Instance string
// Binary path (auto-detected if empty)
BinaryPath string
// Backup configuration
BackupType string // "single" or "cluster"
Schedule string // OnCalendar format, e.g., "daily", "*-*-* 02:00:00"
// Service user/group
User string
Group string
// Paths
BackupDir string
ConfigPath string
// Timeout in seconds (default: 3600)
TimeoutSeconds int
// Metrics
WithMetrics bool
MetricsPort int
}
// ServiceStatus contains information about installed services
type ServiceStatus struct {
Installed bool
Enabled bool
Active bool
TimerEnabled bool
TimerActive bool
LastRun string
NextRun string
ServicePath string
TimerPath string
ExporterPath string
}
// NewInstaller creates a new Installer
func NewInstaller(log logger.Logger, dryRun bool) *Installer {
return &Installer{
log: log,
unitDir: "/etc/systemd/system",
dryRun: dryRun,
}
}
// SetUnitDir allows overriding the systemd unit directory (for testing)
func (i *Installer) SetUnitDir(dir string) {
i.unitDir = dir
}
// Install installs the systemd service and timer
func (i *Installer) Install(ctx context.Context, opts InstallOptions) error {
// Validate platform
if runtime.GOOS != "linux" {
return fmt.Errorf("systemd installation only supported on Linux (current: %s)", runtime.GOOS)
}
// Validate prerequisites
if err := i.validatePrerequisites(); err != nil {
return err
}
// Set defaults
if err := i.setDefaults(&opts); err != nil {
return err
}
// Create user if needed
if err := i.ensureUser(opts.User, opts.Group); err != nil {
return err
}
// Create directories
if err := i.createDirectories(opts); err != nil {
return err
}
// Copy binary to /usr/local/bin (required for ProtectHome=yes)
if err := i.copyBinary(&opts); err != nil {
return err
}
// Write service and timer files
if err := i.writeUnitFiles(opts); err != nil {
return err
}
// Reload systemd
if err := i.systemctl(ctx, "daemon-reload"); err != nil {
return err
}
// Enable timer
timerName := i.getTimerName(opts)
if err := i.systemctl(ctx, "enable", timerName); err != nil {
return err
}
// Install metrics exporter if requested
if opts.WithMetrics {
if err := i.installExporter(ctx, opts); err != nil {
i.log.Warn("Failed to install metrics exporter", "error", err)
}
}
i.log.Info("Installation complete",
"instance", opts.Instance,
"timer", timerName,
"schedule", opts.Schedule)
i.printNextSteps(opts)
return nil
}
// Uninstall removes the systemd service and timer
func (i *Installer) Uninstall(ctx context.Context, instance string, purge bool) error {
if runtime.GOOS != "linux" {
return fmt.Errorf("systemd uninstallation only supported on Linux")
}
if err := i.validatePrerequisites(); err != nil {
return err
}
// Determine service names
var serviceName, timerName string
if instance == "cluster" || instance == "" {
serviceName = "dbbackup-cluster.service"
timerName = "dbbackup-cluster.timer"
} else {
serviceName = fmt.Sprintf("dbbackup@%s.service", instance)
timerName = fmt.Sprintf("dbbackup@%s.timer", instance)
}
// Stop and disable timer
_ = i.systemctl(ctx, "stop", timerName)
_ = i.systemctl(ctx, "disable", timerName)
// Stop and disable service
_ = i.systemctl(ctx, "stop", serviceName)
_ = i.systemctl(ctx, "disable", serviceName)
// Remove unit files
servicePath := filepath.Join(i.unitDir, serviceName)
timerPath := filepath.Join(i.unitDir, timerName)
if !i.dryRun {
os.Remove(servicePath)
os.Remove(timerPath)
} else {
i.log.Info("Would remove", "service", servicePath)
i.log.Info("Would remove", "timer", timerPath)
}
// Also try to remove template units if they exist
if instance != "cluster" && instance != "" {
templateService := filepath.Join(i.unitDir, "dbbackup@.service")
templateTimer := filepath.Join(i.unitDir, "dbbackup@.timer")
// Only remove templates if no other instances are using them
if i.canRemoveTemplates() {
if !i.dryRun {
os.Remove(templateService)
os.Remove(templateTimer)
}
}
}
// Remove exporter
exporterPath := filepath.Join(i.unitDir, "dbbackup-exporter.service")
_ = i.systemctl(ctx, "stop", "dbbackup-exporter.service")
_ = i.systemctl(ctx, "disable", "dbbackup-exporter.service")
if !i.dryRun {
os.Remove(exporterPath)
}
// Reload systemd
_ = i.systemctl(ctx, "daemon-reload")
// Purge config files if requested
if purge {
configDirs := []string{
"/etc/dbbackup",
"/var/lib/dbbackup",
}
for _, dir := range configDirs {
if !i.dryRun {
if err := os.RemoveAll(dir); err != nil {
i.log.Warn("Failed to remove directory", "path", dir, "error", err)
} else {
i.log.Info("Removed directory", "path", dir)
}
} else {
i.log.Info("Would remove directory", "path", dir)
}
}
}
i.log.Info("Uninstallation complete", "instance", instance, "purge", purge)
return nil
}
// Status returns the current installation status
func (i *Installer) Status(ctx context.Context, instance string) (*ServiceStatus, error) {
if runtime.GOOS != "linux" {
return nil, fmt.Errorf("systemd status only supported on Linux")
}
status := &ServiceStatus{}
// Determine service names
var serviceName, timerName string
if instance == "cluster" || instance == "" {
serviceName = "dbbackup-cluster.service"
timerName = "dbbackup-cluster.timer"
} else {
serviceName = fmt.Sprintf("dbbackup@%s.service", instance)
timerName = fmt.Sprintf("dbbackup@%s.timer", instance)
}
// Check service file exists
status.ServicePath = filepath.Join(i.unitDir, serviceName)
if _, err := os.Stat(status.ServicePath); err == nil {
status.Installed = true
}
// Check timer file exists
status.TimerPath = filepath.Join(i.unitDir, timerName)
// Check exporter
status.ExporterPath = filepath.Join(i.unitDir, "dbbackup-exporter.service")
// Check enabled/active status
if status.Installed {
status.Enabled = i.isEnabled(ctx, serviceName)
status.Active = i.isActive(ctx, serviceName)
status.TimerEnabled = i.isEnabled(ctx, timerName)
status.TimerActive = i.isActive(ctx, timerName)
// Get timer info
status.NextRun = i.getTimerNext(ctx, timerName)
status.LastRun = i.getTimerLast(ctx, timerName)
}
return status, nil
}
// validatePrerequisites checks system requirements
func (i *Installer) validatePrerequisites() error {
// Check root (skip in dry-run mode)
if os.Getuid() != 0 && !i.dryRun {
return fmt.Errorf("installation requires root privileges (use sudo)")
}
// Check systemd
if _, err := exec.LookPath("systemctl"); err != nil {
return fmt.Errorf("systemctl not found - is this a systemd-based system?")
}
// Check for container environment
if _, err := os.Stat("/.dockerenv"); err == nil {
i.log.Warn("Running inside Docker container - systemd may not work correctly")
}
return nil
}
// setDefaults fills in default values
func (i *Installer) setDefaults(opts *InstallOptions) error {
// Auto-detect binary path
if opts.BinaryPath == "" {
binPath, err := os.Executable()
if err != nil {
return fmt.Errorf("failed to detect binary path: %w", err)
}
binPath, err = filepath.EvalSymlinks(binPath)
if err != nil {
return fmt.Errorf("failed to resolve binary path: %w", err)
}
opts.BinaryPath = binPath
}
// Default instance
if opts.Instance == "" {
opts.Instance = "default"
}
// Default backup type
if opts.BackupType == "" {
opts.BackupType = "single"
}
// Default schedule (daily at 2am)
if opts.Schedule == "" {
opts.Schedule = "*-*-* 02:00:00"
}
// Default user/group
if opts.User == "" {
opts.User = "dbbackup"
}
if opts.Group == "" {
opts.Group = "dbbackup"
}
// Default paths
if opts.BackupDir == "" {
opts.BackupDir = "/var/lib/dbbackup/backups"
}
if opts.ConfigPath == "" {
opts.ConfigPath = "/etc/dbbackup/dbbackup.conf"
}
// Default timeout (1 hour)
if opts.TimeoutSeconds == 0 {
opts.TimeoutSeconds = 3600
}
// Default metrics port
if opts.MetricsPort == 0 {
opts.MetricsPort = 9399
}
return nil
}
// ensureUser creates the service user if it doesn't exist
func (i *Installer) ensureUser(username, groupname string) error {
// Check if user exists
if _, err := user.Lookup(username); err == nil {
i.log.Debug("User already exists", "user", username)
return nil
}
if i.dryRun {
i.log.Info("Would create user", "user", username, "group", groupname)
return nil
}
// Create group first
groupCmd := exec.Command("groupadd", "--system", groupname)
if output, err := groupCmd.CombinedOutput(); err != nil {
// Ignore if group already exists
if !strings.Contains(string(output), "already exists") {
i.log.Debug("Group creation output", "output", string(output))
}
}
// Create user
userCmd := exec.Command("useradd",
"--system",
"--shell", "/usr/sbin/nologin",
"--home-dir", "/var/lib/dbbackup",
"--gid", groupname,
username)
if output, err := userCmd.CombinedOutput(); err != nil {
if !strings.Contains(string(output), "already exists") {
return fmt.Errorf("failed to create user %s: %w (%s)", username, err, output)
}
}
i.log.Info("Created system user", "user", username, "group", groupname)
return nil
}
// createDirectories creates required directories
func (i *Installer) createDirectories(opts InstallOptions) error {
dirs := []struct {
path string
mode os.FileMode
}{
{"/etc/dbbackup", 0755},
{"/etc/dbbackup/env.d", 0700},
{"/var/lib/dbbackup", 0750},
{"/var/lib/dbbackup/backups", 0750},
{"/var/lib/dbbackup/metrics", 0755},
{"/var/log/dbbackup", 0750},
{opts.BackupDir, 0750},
}
for _, d := range dirs {
if i.dryRun {
i.log.Info("Would create directory", "path", d.path, "mode", d.mode)
continue
}
if err := os.MkdirAll(d.path, d.mode); err != nil {
return fmt.Errorf("failed to create directory %s: %w", d.path, err)
}
// Set ownership
u, err := user.Lookup(opts.User)
if err == nil {
var uid, gid int
fmt.Sscanf(u.Uid, "%d", &uid)
fmt.Sscanf(u.Gid, "%d", &gid)
os.Chown(d.path, uid, gid)
}
}
return nil
}
// copyBinary copies the binary to /usr/local/bin for systemd access
// This is required because ProtectHome=yes blocks access to home directories
func (i *Installer) copyBinary(opts *InstallOptions) error {
const installPath = "/usr/local/bin/dbbackup"
// Check if binary is already in a system path
if opts.BinaryPath == installPath {
return nil
}
if i.dryRun {
i.log.Info("Would copy binary", "from", opts.BinaryPath, "to", installPath)
opts.BinaryPath = installPath
return nil
}
// Read source binary
src, err := os.Open(opts.BinaryPath)
if err != nil {
return fmt.Errorf("failed to open source binary: %w", err)
}
defer src.Close()
// Create destination
dst, err := os.OpenFile(installPath, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0755)
if err != nil {
return fmt.Errorf("failed to create %s: %w", installPath, err)
}
defer dst.Close()
// Copy
if _, err := io.Copy(dst, src); err != nil {
return fmt.Errorf("failed to copy binary: %w", err)
}
i.log.Info("Copied binary", "from", opts.BinaryPath, "to", installPath)
opts.BinaryPath = installPath
return nil
}
// writeUnitFiles renders and writes the systemd unit files
func (i *Installer) writeUnitFiles(opts InstallOptions) error {
// Prepare template data
data := map[string]interface{}{
"User": opts.User,
"Group": opts.Group,
"BinaryPath": opts.BinaryPath,
"BackupType": opts.BackupType,
"BackupDir": opts.BackupDir,
"ConfigPath": opts.ConfigPath,
"TimeoutSeconds": opts.TimeoutSeconds,
"Schedule": opts.Schedule,
"MetricsPort": opts.MetricsPort,
}
// Determine which templates to use
var serviceTemplate, timerTemplate string
var serviceName, timerName string
if opts.BackupType == "cluster" {
serviceTemplate = "templates/dbbackup-cluster.service"
timerTemplate = "templates/dbbackup-cluster.timer"
serviceName = "dbbackup-cluster.service"
timerName = "dbbackup-cluster.timer"
} else {
serviceTemplate = "templates/dbbackup@.service"
timerTemplate = "templates/dbbackup@.timer"
serviceName = "dbbackup@.service"
timerName = "dbbackup@.timer"
}
// Write service file
if err := i.writeTemplateFile(serviceTemplate, serviceName, data); err != nil {
return fmt.Errorf("failed to write service file: %w", err)
}
// Write timer file
if err := i.writeTemplateFile(timerTemplate, timerName, data); err != nil {
return fmt.Errorf("failed to write timer file: %w", err)
}
return nil
}
// writeTemplateFile reads an embedded template and writes it to the unit directory
func (i *Installer) writeTemplateFile(templatePath, outputName string, data map[string]interface{}) error {
// Read template
content, err := Templates.ReadFile(templatePath)
if err != nil {
return fmt.Errorf("failed to read template %s: %w", templatePath, err)
}
// Parse template
tmpl, err := template.New(outputName).Parse(string(content))
if err != nil {
return fmt.Errorf("failed to parse template %s: %w", templatePath, err)
}
// Render template
var buf strings.Builder
if err := tmpl.Execute(&buf, data); err != nil {
return fmt.Errorf("failed to render template %s: %w", templatePath, err)
}
// Write file
outputPath := filepath.Join(i.unitDir, outputName)
if i.dryRun {
i.log.Info("Would write unit file", "path", outputPath)
i.log.Debug("Unit file content", "content", buf.String())
return nil
}
if err := os.WriteFile(outputPath, []byte(buf.String()), 0644); err != nil {
return fmt.Errorf("failed to write %s: %w", outputPath, err)
}
i.log.Info("Created unit file", "path", outputPath)
return nil
}
// installExporter installs the metrics exporter service
func (i *Installer) installExporter(ctx context.Context, opts InstallOptions) error {
data := map[string]interface{}{
"User": opts.User,
"Group": opts.Group,
"BinaryPath": opts.BinaryPath,
"ConfigPath": opts.ConfigPath,
"MetricsPort": opts.MetricsPort,
}
if err := i.writeTemplateFile("templates/dbbackup-exporter.service", "dbbackup-exporter.service", data); err != nil {
return err
}
if err := i.systemctl(ctx, "daemon-reload"); err != nil {
return err
}
if err := i.systemctl(ctx, "enable", "dbbackup-exporter.service"); err != nil {
return err
}
if err := i.systemctl(ctx, "start", "dbbackup-exporter.service"); err != nil {
return err
}
i.log.Info("Installed metrics exporter", "port", opts.MetricsPort)
return nil
}
// getTimerName returns the timer unit name for the given options
func (i *Installer) getTimerName(opts InstallOptions) string {
if opts.BackupType == "cluster" {
return "dbbackup-cluster.timer"
}
return fmt.Sprintf("dbbackup@%s.timer", opts.Instance)
}
// systemctl runs a systemctl command
func (i *Installer) systemctl(ctx context.Context, args ...string) error {
if i.dryRun {
i.log.Info("Would run: systemctl", "args", args)
return nil
}
cmd := exec.CommandContext(ctx, "systemctl", args...)
output, err := cmd.CombinedOutput()
if err != nil {
return fmt.Errorf("systemctl %v failed: %w\n%s", args, err, string(output))
}
return nil
}
// isEnabled checks if a unit is enabled
func (i *Installer) isEnabled(ctx context.Context, unit string) bool {
cmd := exec.CommandContext(ctx, "systemctl", "is-enabled", unit)
return cmd.Run() == nil
}
// isActive checks if a unit is active
func (i *Installer) isActive(ctx context.Context, unit string) bool {
cmd := exec.CommandContext(ctx, "systemctl", "is-active", unit)
return cmd.Run() == nil
}
// getTimerNext gets the next run time for a timer
func (i *Installer) getTimerNext(ctx context.Context, timer string) string {
cmd := exec.CommandContext(ctx, "systemctl", "show", timer, "--property=NextElapseUSecRealtime", "--value")
output, err := cmd.Output()
if err != nil {
return ""
}
return strings.TrimSpace(string(output))
}
// getTimerLast gets the last run time for a timer
func (i *Installer) getTimerLast(ctx context.Context, timer string) string {
cmd := exec.CommandContext(ctx, "systemctl", "show", timer, "--property=LastTriggerUSec", "--value")
output, err := cmd.Output()
if err != nil {
return ""
}
return strings.TrimSpace(string(output))
}
// canRemoveTemplates checks if template units can be safely removed
func (i *Installer) canRemoveTemplates() bool {
// Check if any dbbackup@*.service instances exist
pattern := filepath.Join(i.unitDir, "dbbackup@*.service")
matches, _ := filepath.Glob(pattern)
// Also check for running instances
cmd := exec.Command("systemctl", "list-units", "--type=service", "--all", "dbbackup@*")
output, _ := cmd.Output()
return len(matches) == 0 && !strings.Contains(string(output), "dbbackup@")
}
// printNextSteps prints helpful next steps after installation
func (i *Installer) printNextSteps(opts InstallOptions) {
timerName := i.getTimerName(opts)
serviceName := strings.Replace(timerName, ".timer", ".service", 1)
fmt.Println()
fmt.Println("[OK] Installation successful!")
fmt.Println()
fmt.Println("[NEXT] Next steps:")
fmt.Println()
fmt.Printf(" 1. Edit configuration: sudo nano %s\n", opts.ConfigPath)
fmt.Printf(" 2. Set credentials: sudo nano /etc/dbbackup/env.d/%s.conf\n", opts.Instance)
fmt.Printf(" 3. Start the timer: sudo systemctl start %s\n", timerName)
fmt.Printf(" 4. Verify timer status: sudo systemctl status %s\n", timerName)
fmt.Printf(" 5. Run backup manually: sudo systemctl start %s\n", serviceName)
fmt.Println()
fmt.Println("[LOGS] View backup logs:")
fmt.Printf(" journalctl -u %s -f\n", serviceName)
fmt.Println()
if opts.WithMetrics {
fmt.Println("[METRICS] Prometheus metrics:")
fmt.Printf(" curl http://localhost:%d/metrics\n", opts.MetricsPort)
fmt.Println()
}
}

View File
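For illustration, a minimal sketch of how the Installer above might be driven from a CLI command. It is a fragment, not a full program: it assumes imports of dbbackup/internal/installer and dbbackup/internal/logger and a logger.Logger value constructed elsewhere (the logger package is not part of this diff); the instance name is an example value.

    // installCluster installs the cluster backup timer plus the metrics exporter.
    // Fragment only: obtaining the logger.Logger is left to the caller.
    func installCluster(ctx context.Context, log logger.Logger) error {
        inst := installer.NewInstaller(log, false) // dryRun=false; requires root
        return inst.Install(ctx, installer.InstallOptions{
            Instance:    "production",     // example value
            BackupType:  "cluster",        // installs dbbackup-cluster.service/.timer
            Schedule:    "*-*-* 02:00:00", // systemd OnCalendar format (the default)
            WithMetrics: true,             // also installs dbbackup-exporter.service
        })
    }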

@@ -0,0 +1,50 @@
[Unit]
Description=Database Cluster Backup
Documentation=https://github.com/PlusOne/dbbackup
After=network-online.target postgresql.service mysql.service mariadb.service
Wants=network-online.target
[Service]
Type=oneshot
User={{.User}}
Group={{.Group}}
# Security hardening
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=read-only
PrivateTmp=yes
ProtectKernelTunables=yes
ProtectKernelModules=yes
ProtectControlGroups=yes
RestrictSUIDSGID=yes
RestrictRealtime=yes
LockPersonality=yes
RemoveIPC=yes
CapabilityBoundingSet=
AmbientCapabilities=
# Directories
ReadWritePaths={{.BackupDir}} /var/lib/dbbackup /var/log/dbbackup
# Network access for cloud uploads
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
# Environment
EnvironmentFile=-/etc/dbbackup/env.d/cluster.conf
# Working directory (config is loaded from .dbbackup.conf here)
WorkingDirectory=/var/lib/dbbackup
# Execution - cluster backup (all databases)
ExecStart={{.BinaryPath}} backup cluster --backup-dir {{.BackupDir}}
TimeoutStartSec={{.TimeoutSeconds}}
# Post-backup metrics export
ExecStopPost=-{{.BinaryPath}} metrics export --instance cluster --output /var/lib/dbbackup/metrics/cluster.prom
# OOM protection for large backups
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target

View File
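For illustration, the {{.User}}, {{.BinaryPath}} and similar placeholders in the unit file above are filled in with Go's text/template, the same mechanism writeTemplateFile uses. The self-contained sketch below renders a reduced unit snippet with example values mirroring the installer defaults; it is not the embedded template itself.

    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        // Reduced unit snippet; the real templates are embedded via go:embed.
        const unit = `[Service]
    User={{.User}}
    ExecStart={{.BinaryPath}} backup cluster --backup-dir {{.BackupDir}}
    TimeoutStartSec={{.TimeoutSeconds}}
    `
        data := map[string]interface{}{
            "User":           "dbbackup",
            "BinaryPath":     "/usr/local/bin/dbbackup",
            "BackupDir":      "/var/lib/dbbackup/backups",
            "TimeoutSeconds": 3600,
        }
        tmpl := template.Must(template.New("unit").Parse(unit))
        _ = tmpl.Execute(os.Stdout, data) // prints the rendered [Service] section
    }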

@@ -0,0 +1,11 @@
[Unit]
Description=Database Cluster Backup Timer
Documentation=https://github.com/PlusOne/dbbackup
[Timer]
OnCalendar={{.Schedule}}
Persistent=true
RandomizedDelaySec=1800
[Install]
WantedBy=timers.target

View File

@@ -0,0 +1,37 @@
[Unit]
Description=DBBackup Prometheus Metrics Exporter
Documentation=https://github.com/PlusOne/dbbackup
After=network-online.target
[Service]
Type=simple
User={{.User}}
Group={{.Group}}
# Security hardening
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
ProtectKernelTunables=yes
ProtectKernelModules=yes
ProtectControlGroups=yes
RestrictSUIDSGID=yes
RestrictRealtime=yes
LockPersonality=yes
RemoveIPC=yes
# Read-write access to catalog for metrics collection
ReadWritePaths=/var/lib/dbbackup
# Network for HTTP server
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
# Execution
ExecStart={{.BinaryPath}} metrics serve --port {{.MetricsPort}}
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

View File

@@ -0,0 +1,50 @@
[Unit]
Description=Database Backup for %i
Documentation=https://github.com/PlusOne/dbbackup
After=network-online.target postgresql.service mysql.service mariadb.service
Wants=network-online.target
[Service]
Type=oneshot
User={{.User}}
Group={{.Group}}
# Security hardening
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=read-only
PrivateTmp=yes
ProtectKernelTunables=yes
ProtectKernelModules=yes
ProtectControlGroups=yes
RestrictSUIDSGID=yes
RestrictRealtime=yes
LockPersonality=yes
RemoveIPC=yes
CapabilityBoundingSet=
AmbientCapabilities=
# Directories
ReadWritePaths={{.BackupDir}} /var/lib/dbbackup /var/log/dbbackup
# Network access for cloud uploads
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
# Environment
EnvironmentFile=-/etc/dbbackup/env.d/%i.conf
# Working directory (config is loaded from .dbbackup.conf here)
WorkingDirectory=/var/lib/dbbackup
# Execution
ExecStart={{.BinaryPath}} backup {{.BackupType}} %i --backup-dir {{.BackupDir}}
TimeoutStartSec={{.TimeoutSeconds}}
# Post-backup metrics export
ExecStopPost=-{{.BinaryPath}} metrics export --instance %i --output /var/lib/dbbackup/metrics/%i.prom
# OOM protection for large backups
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target

View File

@@ -0,0 +1,11 @@
[Unit]
Description=Database Backup Timer for %i
Documentation=https://github.com/PlusOne/dbbackup
[Timer]
OnCalendar={{.Schedule}}
Persistent=true
RandomizedDelaySec=1800
[Install]
WantedBy=timers.target

View File

@@ -117,7 +117,7 @@ func NewEngine(sourceCfg, targetCfg *config.Config, log logger.Logger) (*Engine,
targetDB: targetDB,
log: log,
progress: progress.NewSpinner(),
workDir: os.TempDir(),
workDir: sourceCfg.GetEffectiveWorkDir(),
keepBackup: false,
jobs: 4,
dryRun: false,

View File

@@ -202,9 +202,9 @@ func (b *Batcher) formatSummaryDigest(events []*Event, success, failure, dbCount
func (b *Batcher) formatCompactDigest(events []*Event, success, failure int) string {
if failure > 0 {
return fmt.Sprintf("⚠️ %d/%d operations failed", failure, len(events))
return fmt.Sprintf("[WARN] %d/%d operations failed", failure, len(events))
}
return fmt.Sprintf(" All %d operations successful", success)
return fmt.Sprintf("[OK] All %d operations successful", success)
}
func (b *Batcher) formatDetailedDigest(events []*Event) string {
@@ -215,9 +215,9 @@ func (b *Batcher) formatDetailedDigest(events []*Event) string {
icon := "•"
switch e.Severity {
case SeverityError, SeverityCritical:
icon = ""
icon = "[FAIL]"
case SeverityWarning:
icon = "⚠️"
icon = "[WARN]"
}
msg += fmt.Sprintf("%s [%s] %s: %s\n",

View File

@@ -69,6 +69,7 @@ func (m *Manager) NotifySync(ctx context.Context, event *Event) error {
m.mu.RUnlock()
var errors []error
var errMu sync.Mutex
var wg sync.WaitGroup
for _, n := range notifiers {
@@ -80,7 +81,9 @@ func (m *Manager) NotifySync(ctx context.Context, event *Event) error {
go func(notifier Notifier) {
defer wg.Done()
if err := notifier.Send(ctx, event); err != nil {
errMu.Lock()
errors = append(errors, fmt.Errorf("%s: %w", notifier.Name(), err))
errMu.Unlock()
}
}(n)
}

View File
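For illustration, the errMu added above is the standard guard for appending to a shared error slice from multiple goroutines. A stand-alone sketch of the pattern (not dbbackup code):

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var (
            errMu  sync.Mutex
            errs   []error
            wg     sync.WaitGroup
        )
        for i := 0; i < 5; i++ {
            wg.Add(1)
            go func(id int) {
                defer wg.Done()
                if id%2 == 0 { // pretend even-numbered notifiers fail
                    errMu.Lock() // without the mutex this append would race
                    errs = append(errs, fmt.Errorf("notifier %d: send failed", id))
                    errMu.Unlock()
                }
            }(i)
        }
        wg.Wait()
        fmt.Printf("%d of 5 notifiers failed\n", len(errs))
    }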

@@ -183,43 +183,43 @@ func DefaultConfig() Config {
// FormatEventSubject generates a subject line for notifications
func FormatEventSubject(event *Event) string {
icon := ""
icon := "[INFO]"
switch event.Severity {
case SeverityWarning:
icon = "⚠️"
icon = "[WARN]"
case SeverityError, SeverityCritical:
icon = ""
icon = "[FAIL]"
}
verb := "Event"
switch event.Type {
case EventBackupStarted:
verb = "Backup Started"
icon = "🔄"
icon = "[EXEC]"
case EventBackupCompleted:
verb = "Backup Completed"
icon = ""
icon = "[OK]"
case EventBackupFailed:
verb = "Backup Failed"
icon = ""
icon = "[FAIL]"
case EventRestoreStarted:
verb = "Restore Started"
icon = "🔄"
icon = "[EXEC]"
case EventRestoreCompleted:
verb = "Restore Completed"
icon = ""
icon = "[OK]"
case EventRestoreFailed:
verb = "Restore Failed"
icon = ""
icon = "[FAIL]"
case EventCleanupCompleted:
verb = "Cleanup Completed"
icon = "🗑️"
icon = "[DEL]"
case EventVerifyCompleted:
verb = "Verification Passed"
icon = ""
icon = "[OK]"
case EventVerifyFailed:
verb = "Verification Failed"
icon = ""
icon = "[FAIL]"
case EventPITRRecovery:
verb = "PITR Recovery"
icon = "⏪"

View File

@@ -30,52 +30,52 @@ type Templates struct {
func DefaultTemplates() map[EventType]Templates {
return map[EventType]Templates{
EventBackupStarted: {
Subject: "🔄 Backup Started: {{.Database}} on {{.Hostname}}",
Subject: "[EXEC] Backup Started: {{.Database}} on {{.Hostname}}",
TextBody: backupStartedText,
HTMLBody: backupStartedHTML,
},
EventBackupCompleted: {
Subject: " Backup Completed: {{.Database}} on {{.Hostname}}",
Subject: "[OK] Backup Completed: {{.Database}} on {{.Hostname}}",
TextBody: backupCompletedText,
HTMLBody: backupCompletedHTML,
},
EventBackupFailed: {
Subject: " Backup FAILED: {{.Database}} on {{.Hostname}}",
Subject: "[FAIL] Backup FAILED: {{.Database}} on {{.Hostname}}",
TextBody: backupFailedText,
HTMLBody: backupFailedHTML,
},
EventRestoreStarted: {
Subject: "🔄 Restore Started: {{.Database}} on {{.Hostname}}",
Subject: "[EXEC] Restore Started: {{.Database}} on {{.Hostname}}",
TextBody: restoreStartedText,
HTMLBody: restoreStartedHTML,
},
EventRestoreCompleted: {
Subject: " Restore Completed: {{.Database}} on {{.Hostname}}",
Subject: "[OK] Restore Completed: {{.Database}} on {{.Hostname}}",
TextBody: restoreCompletedText,
HTMLBody: restoreCompletedHTML,
},
EventRestoreFailed: {
Subject: " Restore FAILED: {{.Database}} on {{.Hostname}}",
Subject: "[FAIL] Restore FAILED: {{.Database}} on {{.Hostname}}",
TextBody: restoreFailedText,
HTMLBody: restoreFailedHTML,
},
EventVerificationPassed: {
Subject: " Verification Passed: {{.Database}}",
Subject: "[OK] Verification Passed: {{.Database}}",
TextBody: verificationPassedText,
HTMLBody: verificationPassedHTML,
},
EventVerificationFailed: {
Subject: " Verification FAILED: {{.Database}}",
Subject: "[FAIL] Verification FAILED: {{.Database}}",
TextBody: verificationFailedText,
HTMLBody: verificationFailedHTML,
},
EventDRDrillPassed: {
Subject: " DR Drill Passed: {{.Database}}",
Subject: "[OK] DR Drill Passed: {{.Database}}",
TextBody: drDrillPassedText,
HTMLBody: drDrillPassedHTML,
},
EventDRDrillFailed: {
Subject: " DR Drill FAILED: {{.Database}}",
Subject: "[FAIL] DR Drill FAILED: {{.Database}}",
TextBody: drDrillFailedText,
HTMLBody: drDrillFailedHTML,
},
@@ -95,7 +95,7 @@ Started At: {{formatTime .Timestamp}}
const backupStartedHTML = `
<div style="font-family: Arial, sans-serif; padding: 20px;">
<h2 style="color: #3498db;">🔄 Backup Started</h2>
<h2 style="color: #3498db;">[EXEC] Backup Started</h2>
<table style="border-collapse: collapse; width: 100%; max-width: 600px;">
<tr><td style="padding: 8px; font-weight: bold;">Database:</td><td style="padding: 8px;">{{.Database}}</td></tr>
<tr><td style="padding: 8px; font-weight: bold;">Hostname:</td><td style="padding: 8px;">{{.Hostname}}</td></tr>
@@ -121,7 +121,7 @@ Completed: {{formatTime .Timestamp}}
const backupCompletedHTML = `
<div style="font-family: Arial, sans-serif; padding: 20px;">
<h2 style="color: #27ae60;"> Backup Completed</h2>
<h2 style="color: #27ae60;">[OK] Backup Completed</h2>
<table style="border-collapse: collapse; width: 100%; max-width: 600px;">
<tr><td style="padding: 8px; font-weight: bold;">Database:</td><td style="padding: 8px;">{{.Database}}</td></tr>
<tr><td style="padding: 8px; font-weight: bold;">Hostname:</td><td style="padding: 8px;">{{.Hostname}}</td></tr>
@@ -137,7 +137,7 @@ const backupCompletedHTML = `
`
const backupFailedText = `
⚠️ BACKUP FAILED ⚠️
[WARN] BACKUP FAILED [WARN]
Database: {{.Database}}
Hostname: {{.Hostname}}
@@ -152,7 +152,7 @@ Please investigate immediately.
const backupFailedHTML = `
<div style="font-family: Arial, sans-serif; padding: 20px;">
<h2 style="color: #e74c3c;"> Backup FAILED</h2>
<h2 style="color: #e74c3c;">[FAIL] Backup FAILED</h2>
<table style="border-collapse: collapse; width: 100%; max-width: 600px;">
<tr><td style="padding: 8px; font-weight: bold;">Database:</td><td style="padding: 8px;">{{.Database}}</td></tr>
<tr><td style="padding: 8px; font-weight: bold;">Hostname:</td><td style="padding: 8px;">{{.Hostname}}</td></tr>
@@ -176,7 +176,7 @@ Started At: {{formatTime .Timestamp}}
const restoreStartedHTML = `
<div style="font-family: Arial, sans-serif; padding: 20px;">
<h2 style="color: #3498db;">🔄 Restore Started</h2>
<h2 style="color: #3498db;">[EXEC] Restore Started</h2>
<table style="border-collapse: collapse; width: 100%; max-width: 600px;">
<tr><td style="padding: 8px; font-weight: bold;">Database:</td><td style="padding: 8px;">{{.Database}}</td></tr>
<tr><td style="padding: 8px; font-weight: bold;">Hostname:</td><td style="padding: 8px;">{{.Hostname}}</td></tr>
@@ -200,7 +200,7 @@ Completed: {{formatTime .Timestamp}}
const restoreCompletedHTML = `
<div style="font-family: Arial, sans-serif; padding: 20px;">
<h2 style="color: #27ae60;"> Restore Completed</h2>
<h2 style="color: #27ae60;">[OK] Restore Completed</h2>
<table style="border-collapse: collapse; width: 100%; max-width: 600px;">
<tr><td style="padding: 8px; font-weight: bold;">Database:</td><td style="padding: 8px;">{{.Database}}</td></tr>
<tr><td style="padding: 8px; font-weight: bold;">Hostname:</td><td style="padding: 8px;">{{.Hostname}}</td></tr>
@@ -214,7 +214,7 @@ const restoreCompletedHTML = `
`
const restoreFailedText = `
⚠️ RESTORE FAILED ⚠️
[WARN] RESTORE FAILED [WARN]
Database: {{.Database}}
Hostname: {{.Hostname}}
@@ -229,7 +229,7 @@ Please investigate immediately.
const restoreFailedHTML = `
<div style="font-family: Arial, sans-serif; padding: 20px;">
<h2 style="color: #e74c3c;"> Restore FAILED</h2>
<h2 style="color: #e74c3c;">[FAIL] Restore FAILED</h2>
<table style="border-collapse: collapse; width: 100%; max-width: 600px;">
<tr><td style="padding: 8px; font-weight: bold;">Database:</td><td style="padding: 8px;">{{.Database}}</td></tr>
<tr><td style="padding: 8px; font-weight: bold;">Hostname:</td><td style="padding: 8px;">{{.Hostname}}</td></tr>
@@ -255,7 +255,7 @@ Verified: {{formatTime .Timestamp}}
const verificationPassedHTML = `
<div style="font-family: Arial, sans-serif; padding: 20px;">
<h2 style="color: #27ae60;"> Verification Passed</h2>
<h2 style="color: #27ae60;">[OK] Verification Passed</h2>
<table style="border-collapse: collapse; width: 100%; max-width: 600px;">
<tr><td style="padding: 8px; font-weight: bold;">Database:</td><td style="padding: 8px;">{{.Database}}</td></tr>
<tr><td style="padding: 8px; font-weight: bold;">Hostname:</td><td style="padding: 8px;">{{.Hostname}}</td></tr>
@@ -269,7 +269,7 @@ const verificationPassedHTML = `
`
const verificationFailedText = `
⚠️ VERIFICATION FAILED ⚠️
[WARN] VERIFICATION FAILED [WARN]
Database: {{.Database}}
Hostname: {{.Hostname}}
@@ -284,7 +284,7 @@ Backup integrity may be compromised. Please investigate.
const verificationFailedHTML = `
<div style="font-family: Arial, sans-serif; padding: 20px;">
<h2 style="color: #e74c3c;"> Verification FAILED</h2>
<h2 style="color: #e74c3c;">[FAIL] Verification FAILED</h2>
<table style="border-collapse: collapse; width: 100%; max-width: 600px;">
<tr><td style="padding: 8px; font-weight: bold;">Database:</td><td style="padding: 8px;">{{.Database}}</td></tr>
<tr><td style="padding: 8px; font-weight: bold;">Hostname:</td><td style="padding: 8px;">{{.Hostname}}</td></tr>
@@ -314,7 +314,7 @@ Backup restore capability verified.
const drDrillPassedHTML = `
<div style="font-family: Arial, sans-serif; padding: 20px;">
<h2 style="color: #27ae60;"> DR Drill Passed</h2>
<h2 style="color: #27ae60;">[OK] DR Drill Passed</h2>
<table style="border-collapse: collapse; width: 100%; max-width: 600px;">
<tr><td style="padding: 8px; font-weight: bold;">Database:</td><td style="padding: 8px;">{{.Database}}</td></tr>
<tr><td style="padding: 8px; font-weight: bold;">Hostname:</td><td style="padding: 8px;">{{.Hostname}}</td></tr>
@@ -326,12 +326,12 @@ const drDrillPassedHTML = `
{{end}}
</table>
{{if .Message}}<p style="margin-top: 20px; color: #27ae60;">{{.Message}}</p>{{end}}
<p style="margin-top: 20px; color: #27ae60;"> Backup restore capability verified</p>
<p style="margin-top: 20px; color: #27ae60;">[OK] Backup restore capability verified</p>
</div>
`
const drDrillFailedText = `
⚠️ DR DRILL FAILED ⚠️
[WARN] DR DRILL FAILED [WARN]
Database: {{.Database}}
Hostname: {{.Hostname}}
@@ -346,7 +346,7 @@ Backup may not be restorable. Please investigate immediately.
const drDrillFailedHTML = `
<div style="font-family: Arial, sans-serif; padding: 20px;">
<h2 style="color: #e74c3c;"> DR Drill FAILED</h2>
<h2 style="color: #e74c3c;">[FAIL] DR Drill FAILED</h2>
<table style="border-collapse: collapse; width: 100%; max-width: 600px;">
<tr><td style="padding: 8px; font-weight: bold;">Database:</td><td style="padding: 8px;">{{.Database}}</td></tr>
<tr><td style="padding: 8px; font-weight: bold;">Hostname:</td><td style="padding: 8px;">{{.Hostname}}</td></tr>

View File

@@ -212,7 +212,11 @@ func (m *BinlogManager) detectTools() error {
// detectServerType determines if we're working with MySQL or MariaDB
func (m *BinlogManager) detectServerType() DatabaseType {
cmd := exec.Command(m.mysqlbinlogPath, "--version")
// Use timeout to prevent blocking if command hangs
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
cmd := exec.CommandContext(ctx, m.mysqlbinlogPath, "--version")
output, err := cmd.Output()
if err != nil {
return DatabaseMySQL // Default to MySQL

View File
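For illustration, exec.CommandContext is what enforces the 10-second cap added above: the child process is killed when the context deadline passes. A stand-alone sketch of the same probe:

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        // Same probe the binlog manager performs; a hang is cut off by ctx.
        out, err := exec.CommandContext(ctx, "mysqlbinlog", "--version").Output()
        if err != nil {
            fmt.Println("version probe failed or timed out:", err)
            return
        }
        fmt.Printf("%s", out)
    }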

@@ -43,9 +43,9 @@ type RestoreOptions struct {
// RestorePointInTime performs a Point-in-Time Recovery
func (ro *RestoreOrchestrator) RestorePointInTime(ctx context.Context, opts *RestoreOptions) error {
ro.log.Info("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━")
ro.log.Info("=====================================================")
ro.log.Info(" Point-in-Time Recovery (PITR)")
ro.log.Info("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━")
ro.log.Info("=====================================================")
ro.log.Info("")
ro.log.Info("Target:", "summary", opts.Target.Summary())
ro.log.Info("Base Backup:", "path", opts.BaseBackupPath)
@@ -91,11 +91,11 @@ func (ro *RestoreOrchestrator) RestorePointInTime(ctx context.Context, opts *Res
return fmt.Errorf("failed to generate recovery configuration: %w", err)
}
ro.log.Info(" Recovery configuration generated successfully")
ro.log.Info("[OK] Recovery configuration generated successfully")
ro.log.Info("")
ro.log.Info("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━")
ro.log.Info("=====================================================")
ro.log.Info(" Next Steps:")
ro.log.Info("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━")
ro.log.Info("=====================================================")
ro.log.Info("")
ro.log.Info("1. Start PostgreSQL to begin recovery:")
ro.log.Info(fmt.Sprintf(" pg_ctl -D %s start", opts.TargetDataDir))
@@ -192,7 +192,7 @@ func (ro *RestoreOrchestrator) validateInputs(opts *RestoreOptions) error {
}
}
ro.log.Info(" Validation passed")
ro.log.Info("[OK] Validation passed")
return nil
}
@@ -238,7 +238,7 @@ func (ro *RestoreOrchestrator) extractTarGzBackup(ctx context.Context, source, d
return fmt.Errorf("tar extraction failed: %w", err)
}
ro.log.Info(" Base backup extracted successfully")
ro.log.Info("[OK] Base backup extracted successfully")
return nil
}
@@ -254,7 +254,7 @@ func (ro *RestoreOrchestrator) extractTarBackup(ctx context.Context, source, des
return fmt.Errorf("tar extraction failed: %w", err)
}
ro.log.Info(" Base backup extracted successfully")
ro.log.Info("[OK] Base backup extracted successfully")
return nil
}
@@ -270,7 +270,7 @@ func (ro *RestoreOrchestrator) copyDirectoryBackup(ctx context.Context, source,
return fmt.Errorf("directory copy failed: %w", err)
}
ro.log.Info(" Base backup copied successfully")
ro.log.Info("[OK] Base backup copied successfully")
return nil
}
@@ -291,7 +291,7 @@ func (ro *RestoreOrchestrator) startPostgreSQL(ctx context.Context, opts *Restor
return fmt.Errorf("pg_ctl start failed: %w", err)
}
ro.log.Info(" PostgreSQL started successfully")
ro.log.Info("[OK] PostgreSQL started successfully")
ro.log.Info("PostgreSQL is now performing recovery...")
return nil
}
@@ -320,7 +320,7 @@ func (ro *RestoreOrchestrator) monitorRecovery(ctx context.Context, opts *Restor
// Check if recovery is complete by looking for postmaster.pid
pidFile := filepath.Join(opts.TargetDataDir, "postmaster.pid")
if _, err := os.Stat(pidFile); err == nil {
ro.log.Info(" PostgreSQL is running")
ro.log.Info("[OK] PostgreSQL is running")
// Check if recovery files still exist
recoverySignal := filepath.Join(opts.TargetDataDir, "recovery.signal")
@@ -328,7 +328,7 @@ func (ro *RestoreOrchestrator) monitorRecovery(ctx context.Context, opts *Restor
if _, err := os.Stat(recoverySignal); os.IsNotExist(err) {
if _, err := os.Stat(recoveryConf); os.IsNotExist(err) {
ro.log.Info(" Recovery completed - PostgreSQL promoted to primary")
ro.log.Info("[OK] Recovery completed - PostgreSQL promoted to primary")
return nil
}
}

View File

@@ -256,7 +256,7 @@ func (ot *OperationTracker) Complete(message string) {
// Complete visual indicator
if ot.reporter.indicator != nil {
ot.reporter.indicator.Complete(fmt.Sprintf(" %s", message))
ot.reporter.indicator.Complete(fmt.Sprintf("[OK] %s", message))
}
// Log completion with duration
@@ -286,7 +286,7 @@ func (ot *OperationTracker) Fail(err error) {
// Fail visual indicator
if ot.reporter.indicator != nil {
ot.reporter.indicator.Fail(fmt.Sprintf(" %s", err.Error()))
ot.reporter.indicator.Fail(fmt.Sprintf("[FAIL] %s", err.Error()))
}
// Log failure
@@ -427,7 +427,7 @@ type OperationSummary struct {
// FormatSummary returns a formatted string representation of the summary
func (os *OperationSummary) FormatSummary() string {
return fmt.Sprintf(
"📊 Operations Summary:\n"+
"[STATS] Operations Summary:\n"+
" Total: %d | Completed: %d | Failed: %d | Running: %d\n"+
" Total Duration: %s",
os.TotalOperations,

View File

@@ -92,13 +92,13 @@ func (s *Spinner) Update(message string) {
// Complete stops the spinner with a success message
func (s *Spinner) Complete(message string) {
s.Stop()
fmt.Fprintf(s.writer, "\n %s\n", message)
fmt.Fprintf(s.writer, "\n[OK] %s\n", message)
}
// Fail stops the spinner with a failure message
func (s *Spinner) Fail(message string) {
s.Stop()
fmt.Fprintf(s.writer, "\n %s\n", message)
fmt.Fprintf(s.writer, "\n[FAIL] %s\n", message)
}
// Stop stops the spinner
@@ -167,13 +167,13 @@ func (d *Dots) Update(message string) {
// Complete stops the dots with a success message
func (d *Dots) Complete(message string) {
d.Stop()
fmt.Fprintf(d.writer, " %s\n", message)
fmt.Fprintf(d.writer, " [OK] %s\n", message)
}
// Fail stops the dots with a failure message
func (d *Dots) Fail(message string) {
d.Stop()
fmt.Fprintf(d.writer, " %s\n", message)
fmt.Fprintf(d.writer, " [FAIL] %s\n", message)
}
// Stop stops the dots indicator
@@ -239,14 +239,14 @@ func (p *ProgressBar) Complete(message string) {
p.current = p.total
p.message = message
p.render()
fmt.Fprintf(p.writer, " %s\n", message)
fmt.Fprintf(p.writer, " [OK] %s\n", message)
p.Stop()
}
// Fail stops the progress bar with failure
func (p *ProgressBar) Fail(message string) {
p.render()
fmt.Fprintf(p.writer, " %s\n", message)
fmt.Fprintf(p.writer, " [FAIL] %s\n", message)
p.Stop()
}
@@ -298,12 +298,12 @@ func (s *Static) Update(message string) {
// Complete shows completion message
func (s *Static) Complete(message string) {
fmt.Fprintf(s.writer, " %s\n", message)
fmt.Fprintf(s.writer, " [OK] %s\n", message)
}
// Fail shows failure message
func (s *Static) Fail(message string) {
fmt.Fprintf(s.writer, " %s\n", message)
fmt.Fprintf(s.writer, " [FAIL] %s\n", message)
}
// Stop does nothing for static indicator
@@ -359,7 +359,7 @@ func (l *LineByLine) Start(message string) {
if l.estimator != nil {
displayMsg = l.estimator.GetFullStatus(message)
}
fmt.Fprintf(l.writer, "\n🔄 %s\n", displayMsg)
fmt.Fprintf(l.writer, "\n[SYNC] %s\n", displayMsg)
}
// Update shows an update message
@@ -380,12 +380,12 @@ func (l *LineByLine) SetEstimator(estimator *ETAEstimator) {
// Complete shows completion message
func (l *LineByLine) Complete(message string) {
fmt.Fprintf(l.writer, " %s\n\n", message)
fmt.Fprintf(l.writer, "[OK] %s\n\n", message)
}
// Fail shows failure message
func (l *LineByLine) Fail(message string) {
fmt.Fprintf(l.writer, " %s\n\n", message)
fmt.Fprintf(l.writer, "[FAIL] %s\n\n", message)
}
// Stop does nothing for line-by-line (no cleanup needed)
@@ -396,7 +396,7 @@ func (l *LineByLine) Stop() {
// Light indicator methods - minimal output
func (l *Light) Start(message string) {
if !l.silent {
fmt.Fprintf(l.writer, " %s\n", message)
fmt.Fprintf(l.writer, "> %s\n", message)
}
}
@@ -408,13 +408,13 @@ func (l *Light) Update(message string) {
func (l *Light) Complete(message string) {
if !l.silent {
fmt.Fprintf(l.writer, " %s\n", message)
fmt.Fprintf(l.writer, "[OK] %s\n", message)
}
}
func (l *Light) Fail(message string) {
if !l.silent {
fmt.Fprintf(l.writer, " %s\n", message)
fmt.Fprintf(l.writer, "[FAIL] %s\n", message)
}
}

View File

@@ -0,0 +1,174 @@
// Package prometheus provides Prometheus metrics for dbbackup
package prometheus
import (
"context"
"fmt"
"net/http"
"sync"
"time"
"dbbackup/internal/catalog"
"dbbackup/internal/logger"
)
// Exporter provides an HTTP endpoint for Prometheus metrics
type Exporter struct {
log logger.Logger
catalog catalog.Catalog
instance string
port int
mu sync.RWMutex
cachedData string
lastRefresh time.Time
refreshTTL time.Duration
}
// NewExporter creates a new Prometheus exporter
func NewExporter(log logger.Logger, cat catalog.Catalog, instance string, port int) *Exporter {
return &Exporter{
log: log,
catalog: cat,
instance: instance,
port: port,
refreshTTL: 30 * time.Second,
}
}
// Serve starts the HTTP server and blocks until context is cancelled
func (e *Exporter) Serve(ctx context.Context) error {
mux := http.NewServeMux()
// /metrics endpoint
mux.HandleFunc("/metrics", e.handleMetrics)
// /health endpoint
mux.HandleFunc("/health", e.handleHealth)
// / root with info
mux.HandleFunc("/", e.handleRoot)
addr := fmt.Sprintf(":%d", e.port)
srv := &http.Server{
Addr: addr,
Handler: mux,
ReadTimeout: 10 * time.Second,
WriteTimeout: 30 * time.Second,
IdleTimeout: 60 * time.Second,
}
// Start refresh goroutine
go e.refreshLoop(ctx)
// Graceful shutdown
go func() {
<-ctx.Done()
e.log.Info("Shutting down metrics server...")
shutdownCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := srv.Shutdown(shutdownCtx); err != nil {
e.log.Error("Server shutdown error", "error", err)
}
}()
e.log.Info("Starting Prometheus metrics server", "addr", addr)
if err := srv.ListenAndServe(); err != http.ErrServerClosed {
return fmt.Errorf("server error: %w", err)
}
return nil
}
// handleMetrics handles /metrics endpoint
func (e *Exporter) handleMetrics(w http.ResponseWriter, r *http.Request) {
e.mu.RLock()
data := e.cachedData
e.mu.RUnlock()
if data == "" {
// Force refresh if cache is empty
if err := e.refresh(); err != nil {
http.Error(w, "Failed to collect metrics", http.StatusInternalServerError)
return
}
e.mu.RLock()
data = e.cachedData
e.mu.RUnlock()
}
w.Header().Set("Content-Type", "text/plain; version=0.0.4; charset=utf-8")
w.WriteHeader(http.StatusOK)
w.Write([]byte(data))
}
// handleHealth handles /health endpoint
func (e *Exporter) handleHealth(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
w.Write([]byte(`{"status":"ok","service":"dbbackup-exporter"}`))
}
// handleRoot handles / endpoint
func (e *Exporter) handleRoot(w http.ResponseWriter, r *http.Request) {
if r.URL.Path != "/" {
http.NotFound(w, r)
return
}
w.Header().Set("Content-Type", "text/html")
w.WriteHeader(http.StatusOK)
w.Write([]byte(`<!DOCTYPE html>
<html>
<head>
<title>DBBackup Exporter</title>
</head>
<body>
<h1>DBBackup Prometheus Exporter</h1>
<p>This is a Prometheus metrics exporter for DBBackup.</p>
<ul>
<li><a href="/metrics">/metrics</a> - Prometheus metrics</li>
<li><a href="/health">/health</a> - Health check</li>
</ul>
</body>
</html>`))
}
// refreshLoop periodically refreshes the metrics cache
func (e *Exporter) refreshLoop(ctx context.Context) {
ticker := time.NewTicker(e.refreshTTL)
defer ticker.Stop()
// Initial refresh
if err := e.refresh(); err != nil {
e.log.Error("Initial metrics refresh failed", "error", err)
}
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
if err := e.refresh(); err != nil {
e.log.Error("Metrics refresh failed", "error", err)
}
}
}
}
// refresh updates the cached metrics
func (e *Exporter) refresh() error {
writer := NewMetricsWriter(e.log, e.catalog, e.instance)
data, err := writer.GenerateMetricsString()
if err != nil {
return err
}
e.mu.Lock()
e.cachedData = data
e.lastRefresh = time.Now()
e.mu.Unlock()
e.log.Debug("Refreshed metrics cache")
return nil
}

View File
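For illustration, a sketch of wiring the Exporter above to OS signals so that Serve shuts down gracefully on Ctrl-C or SIGTERM. It is a fragment: it assumes imports of context, os, os/signal, syscall plus dbbackup/internal/catalog, dbbackup/internal/logger and the exporter package (the import path dbbackup/internal/prometheus is inferred from the package name), and assumes the catalog and logger are constructed elsewhere.

    // serveMetrics blocks until SIGINT/SIGTERM, then lets Serve shut down cleanly.
    func serveMetrics(cat catalog.Catalog, log logger.Logger) error {
        ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
        defer stop()

        exp := prometheus.NewExporter(log, cat, "cluster", 9399) // 9399 is the default metrics port
        return exp.Serve(ctx) // returns after graceful shutdown
    }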

@@ -0,0 +1,245 @@
// Package prometheus provides Prometheus metrics for dbbackup
package prometheus
import (
"context"
"fmt"
"os"
"path/filepath"
"sort"
"strings"
"time"
"dbbackup/internal/catalog"
"dbbackup/internal/logger"
)
// MetricsWriter writes metrics in Prometheus text format
type MetricsWriter struct {
log logger.Logger
catalog catalog.Catalog
instance string
}
// NewMetricsWriter creates a new MetricsWriter
func NewMetricsWriter(log logger.Logger, cat catalog.Catalog, instance string) *MetricsWriter {
return &MetricsWriter{
log: log,
catalog: cat,
instance: instance,
}
}
// BackupMetrics holds metrics for a single database
type BackupMetrics struct {
Database string
Engine string
LastSuccess time.Time
LastDuration time.Duration
LastSize int64
TotalBackups int
SuccessCount int
FailureCount int
Verified bool
RPOSeconds float64
}
// WriteTextfile writes metrics to a Prometheus textfile collector file
func (m *MetricsWriter) WriteTextfile(path string) error {
metrics, err := m.collectMetrics()
if err != nil {
return fmt.Errorf("failed to collect metrics: %w", err)
}
output := m.formatMetrics(metrics)
// Atomic write: write to temp file, then rename
dir := filepath.Dir(path)
if err := os.MkdirAll(dir, 0755); err != nil {
return fmt.Errorf("failed to create directory %s: %w", dir, err)
}
tmpPath := path + ".tmp"
if err := os.WriteFile(tmpPath, []byte(output), 0644); err != nil {
return fmt.Errorf("failed to write temp file: %w", err)
}
if err := os.Rename(tmpPath, path); err != nil {
os.Remove(tmpPath)
return fmt.Errorf("failed to rename temp file: %w", err)
}
m.log.Debug("Wrote metrics to textfile", "path", path, "databases", len(metrics))
return nil
}
// collectMetrics gathers metrics from the catalog
func (m *MetricsWriter) collectMetrics() ([]BackupMetrics, error) {
if m.catalog == nil {
return nil, fmt.Errorf("catalog not available")
}
ctx := context.Background()
// Get recent backups using Search with limit
query := &catalog.SearchQuery{
Limit: 1000,
}
entries, err := m.catalog.Search(ctx, query)
if err != nil {
return nil, fmt.Errorf("failed to search backups: %w", err)
}
// Group by database
byDB := make(map[string]*BackupMetrics)
for _, e := range entries {
key := e.Database
if key == "" {
key = "unknown"
}
metrics, ok := byDB[key]
if !ok {
metrics = &BackupMetrics{
Database: key,
Engine: e.DatabaseType,
}
byDB[key] = metrics
}
metrics.TotalBackups++
isSuccess := e.Status == catalog.StatusCompleted || e.Status == catalog.StatusVerified
if isSuccess {
metrics.SuccessCount++
// Track most recent success
if e.CreatedAt.After(metrics.LastSuccess) {
metrics.LastSuccess = e.CreatedAt
metrics.LastDuration = time.Duration(e.Duration * float64(time.Second))
metrics.LastSize = e.SizeBytes
metrics.Verified = e.VerifiedAt != nil && e.VerifyValid != nil && *e.VerifyValid
metrics.Engine = e.DatabaseType
}
} else {
metrics.FailureCount++
}
}
// Calculate RPO for each database
now := time.Now()
for _, metrics := range byDB {
if !metrics.LastSuccess.IsZero() {
metrics.RPOSeconds = now.Sub(metrics.LastSuccess).Seconds()
}
}
// Convert to slice and sort
result := make([]BackupMetrics, 0, len(byDB))
for _, metrics := range byDB {
result = append(result, *metrics)
}
sort.Slice(result, func(i, j int) bool {
return result[i].Database < result[j].Database
})
return result, nil
}
// formatMetrics formats metrics in Prometheus exposition format
func (m *MetricsWriter) formatMetrics(metrics []BackupMetrics) string {
var b strings.Builder
// Timestamp of metrics generation
now := time.Now().Unix()
// Header comment
b.WriteString("# DBBackup Prometheus Metrics\n")
b.WriteString(fmt.Sprintf("# Generated at: %s\n", time.Now().Format(time.RFC3339)))
b.WriteString(fmt.Sprintf("# Instance: %s\n", m.instance))
b.WriteString("\n")
// dbbackup_last_success_timestamp
b.WriteString("# HELP dbbackup_last_success_timestamp Unix timestamp of last successful backup\n")
b.WriteString("# TYPE dbbackup_last_success_timestamp gauge\n")
for _, met := range metrics {
if !met.LastSuccess.IsZero() {
b.WriteString(fmt.Sprintf("dbbackup_last_success_timestamp{instance=%q,database=%q,engine=%q} %d\n",
m.instance, met.Database, met.Engine, met.LastSuccess.Unix()))
}
}
b.WriteString("\n")
// dbbackup_last_backup_duration_seconds
b.WriteString("# HELP dbbackup_last_backup_duration_seconds Duration of last successful backup in seconds\n")
b.WriteString("# TYPE dbbackup_last_backup_duration_seconds gauge\n")
for _, met := range metrics {
if met.LastDuration > 0 {
b.WriteString(fmt.Sprintf("dbbackup_last_backup_duration_seconds{instance=%q,database=%q,engine=%q} %.2f\n",
m.instance, met.Database, met.Engine, met.LastDuration.Seconds()))
}
}
b.WriteString("\n")
// dbbackup_last_backup_size_bytes
b.WriteString("# HELP dbbackup_last_backup_size_bytes Size of last successful backup in bytes\n")
b.WriteString("# TYPE dbbackup_last_backup_size_bytes gauge\n")
for _, met := range metrics {
if met.LastSize > 0 {
b.WriteString(fmt.Sprintf("dbbackup_last_backup_size_bytes{instance=%q,database=%q,engine=%q} %d\n",
m.instance, met.Database, met.Engine, met.LastSize))
}
}
b.WriteString("\n")
// dbbackup_backup_total (counter)
b.WriteString("# HELP dbbackup_backup_total Total number of backup attempts\n")
b.WriteString("# TYPE dbbackup_backup_total counter\n")
for _, met := range metrics {
b.WriteString(fmt.Sprintf("dbbackup_backup_total{instance=%q,database=%q,status=\"success\"} %d\n",
m.instance, met.Database, met.SuccessCount))
b.WriteString(fmt.Sprintf("dbbackup_backup_total{instance=%q,database=%q,status=\"failure\"} %d\n",
m.instance, met.Database, met.FailureCount))
}
b.WriteString("\n")
// dbbackup_rpo_seconds
b.WriteString("# HELP dbbackup_rpo_seconds Recovery Point Objective - seconds since last successful backup\n")
b.WriteString("# TYPE dbbackup_rpo_seconds gauge\n")
for _, met := range metrics {
if met.RPOSeconds > 0 {
b.WriteString(fmt.Sprintf("dbbackup_rpo_seconds{instance=%q,database=%q} %.0f\n",
m.instance, met.Database, met.RPOSeconds))
}
}
b.WriteString("\n")
// dbbackup_backup_verified
b.WriteString("# HELP dbbackup_backup_verified Whether the last backup was verified (1=yes, 0=no)\n")
b.WriteString("# TYPE dbbackup_backup_verified gauge\n")
for _, met := range metrics {
verified := 0
if met.Verified {
verified = 1
}
b.WriteString(fmt.Sprintf("dbbackup_backup_verified{instance=%q,database=%q} %d\n",
m.instance, met.Database, verified))
}
b.WriteString("\n")
// dbbackup_scrape_timestamp
b.WriteString("# HELP dbbackup_scrape_timestamp Unix timestamp when metrics were collected\n")
b.WriteString("# TYPE dbbackup_scrape_timestamp gauge\n")
b.WriteString(fmt.Sprintf("dbbackup_scrape_timestamp{instance=%q} %d\n", m.instance, now))
return b.String()
}
// GenerateMetricsString returns metrics as a string (for HTTP endpoint)
func (m *MetricsWriter) GenerateMetricsString() (string, error) {
metrics, err := m.collectMetrics()
if err != nil {
return "", err
}
return m.formatMetrics(metrics), nil
}

View File
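For illustration, WriteTextfile above targets the Prometheus textfile collector (atomic write via temp file plus rename). A minimal caller fragment, with the same assumptions as the other sketches about catalog and logger construction:

    // Writes the same file the systemd units reference in ExecStopPost.
    func exportClusterMetrics(cat catalog.Catalog, log logger.Logger) error {
        w := prometheus.NewMetricsWriter(log, cat, "cluster")
        return w.WriteTextfile("/var/lib/dbbackup/metrics/cluster.prom")
    }

Since dbbackup_rpo_seconds is simply the age of the last successful backup, an alert rule along the lines of dbbackup_rpo_seconds > 86400 (one example threshold, not something this change configures) would flag any database without a successful backup in the last 24 hours.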

@@ -296,11 +296,11 @@ func generateID() string {
func StatusIcon(s ComplianceStatus) string {
switch s {
case StatusCompliant:
return ""
return "[OK]"
case StatusNonCompliant:
return ""
return "[FAIL]"
case StatusPartial:
return "⚠️"
return "[WARN]"
case StatusNotApplicable:
return ""
default:

View File

@@ -47,9 +47,10 @@ type DownloadResult struct {
// Download downloads a backup from cloud storage
func (d *CloudDownloader) Download(ctx context.Context, remotePath string, opts DownloadOptions) (*DownloadResult, error) {
// Determine temp directory
// Determine temp directory (prefer opts.TempDir, then the config's WorkDir, then fall back to the system temp dir)
tempDir := opts.TempDir
if tempDir == "" {
// Try to get from config if available (passed via opts.TempDir)
tempDir = os.TempDir()
}

View File

@@ -4,6 +4,7 @@ import (
"bufio"
"bytes"
"compress/gzip"
"context"
"encoding/json"
"fmt"
"io"
@@ -12,6 +13,7 @@ import (
"path/filepath"
"regexp"
"strings"
"time"
"dbbackup/internal/logger"
)
@@ -412,20 +414,121 @@ func (d *Diagnoser) diagnoseSQLScript(filePath string, compressed bool, result *
// diagnoseClusterArchive analyzes a cluster tar.gz archive
func (d *Diagnoser) diagnoseClusterArchive(filePath string, result *DiagnoseResult) {
// First verify tar.gz integrity
cmd := exec.Command("tar", "-tzf", filePath)
output, err := cmd.Output()
if err != nil {
// Calculate dynamic timeout based on file size
// Large archives (100GB+) can take significant time to list
// Minimum 5 minutes, scales with file size, max 180 minutes for very large archives
timeoutMinutes := 5
if result.FileSize > 0 {
// 1 minute per 2 GB, minimum 5 minutes, max 180 minutes
sizeGB := result.FileSize / (1024 * 1024 * 1024)
estimatedMinutes := int(sizeGB/2) + 5
if estimatedMinutes > timeoutMinutes {
timeoutMinutes = estimatedMinutes
}
if timeoutMinutes > 180 {
timeoutMinutes = 180
}
}
d.log.Info("Verifying cluster archive integrity",
"size", fmt.Sprintf("%.1f GB", float64(result.FileSize)/(1024*1024*1024)),
"timeout", fmt.Sprintf("%d min", timeoutMinutes))
ctx, cancel := context.WithTimeout(context.Background(), time.Duration(timeoutMinutes)*time.Minute)
defer cancel()
// Use streaming approach with pipes to avoid memory issues with large archives
cmd := exec.CommandContext(ctx, "tar", "-tzf", filePath)
stdout, pipeErr := cmd.StdoutPipe()
if pipeErr != nil {
// Pipe creation failed - not a corruption issue
result.Warnings = append(result.Warnings,
fmt.Sprintf("Cannot create pipe for verification: %v", pipeErr),
"Archive integrity cannot be verified but may still be valid")
return
}
var stderrBuf bytes.Buffer
cmd.Stderr = &stderrBuf
if startErr := cmd.Start(); startErr != nil {
result.Warnings = append(result.Warnings,
fmt.Sprintf("Cannot start tar verification: %v", startErr),
"Archive integrity cannot be verified but may still be valid")
return
}
// Stream output line by line to avoid buffering entire listing in memory
scanner := bufio.NewScanner(stdout)
scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024) // Allow long paths
var files []string
fileCount := 0
for scanner.Scan() {
fileCount++
line := scanner.Text()
// Only store dump/metadata files, not every file
if strings.HasSuffix(line, ".dump") || strings.HasSuffix(line, ".sql.gz") ||
strings.HasSuffix(line, ".sql") || strings.HasSuffix(line, ".json") ||
strings.Contains(line, "globals") || strings.Contains(line, "manifest") ||
strings.Contains(line, "metadata") {
files = append(files, line)
}
}
scanErr := scanner.Err()
waitErr := cmd.Wait()
stderrOutput := stderrBuf.String()
// Handle errors - distinguish between actual corruption and resource/timeout issues
if waitErr != nil || scanErr != nil {
// Check if it was a timeout
if ctx.Err() == context.DeadlineExceeded {
result.Warnings = append(result.Warnings,
fmt.Sprintf("Verification timed out after %d minutes - archive is very large", timeoutMinutes),
"This does not necessarily mean the archive is corrupted",
"Manual verification: tar -tzf "+filePath+" | wc -l")
// Don't mark as corrupted or invalid on timeout - archive may be fine
if fileCount > 0 {
result.Details.TableCount = len(files)
result.Details.TableList = files
}
return
}
// Check for specific gzip/tar corruption indicators
if strings.Contains(stderrOutput, "unexpected end of file") ||
strings.Contains(stderrOutput, "Unexpected EOF") ||
strings.Contains(stderrOutput, "gzip: stdin: unexpected end of file") ||
strings.Contains(stderrOutput, "not in gzip format") ||
strings.Contains(stderrOutput, "invalid compressed data") {
// These indicate actual corruption
result.IsValid = false
result.IsCorrupted = true
result.Errors = append(result.Errors,
fmt.Sprintf("Tar archive is invalid or corrupted: %v", err),
"Tar archive appears truncated or corrupted",
fmt.Sprintf("Error: %s", truncateString(stderrOutput, 200)),
"Run: tar -tzf "+filePath+" 2>&1 | tail -20")
return
}
// Parse tar listing
files := strings.Split(strings.TrimSpace(string(output)), "\n")
// Other errors (signal killed, memory, etc.) - not necessarily corruption
// If we read some files successfully, the archive structure is likely OK
if fileCount > 0 {
result.Warnings = append(result.Warnings,
fmt.Sprintf("Verification incomplete (read %d files before error)", fileCount),
"Archive may still be valid - error could be due to system resources")
// Proceed with what we got
} else {
// Couldn't read anything - but don't mark as corrupted without clear evidence
result.Warnings = append(result.Warnings,
fmt.Sprintf("Cannot verify archive: %v", waitErr),
"Archive integrity is uncertain - proceed with caution or verify manually")
return
}
}
// Parse the collected file list
var dumpFiles []string
hasGlobals := false
hasMetadata := false
@@ -491,7 +594,25 @@ func (d *Diagnoser) diagnoseUnknown(filePath string, result *DiagnoseResult) {
// verifyWithPgRestore uses pg_restore --list to verify dump integrity
func (d *Diagnoser) verifyWithPgRestore(filePath string, result *DiagnoseResult) {
cmd := exec.Command("pg_restore", "--list", filePath)
// Calculate dynamic timeout based on file size
// pg_restore --list is usually faster than tar -tzf for same size
timeoutMinutes := 5
if result.FileSize > 0 {
// 1 minute per 5 GB, minimum 5 minutes, max 30 minutes
sizeGB := result.FileSize / (1024 * 1024 * 1024)
estimatedMinutes := int(sizeGB/5) + 5
if estimatedMinutes > timeoutMinutes {
timeoutMinutes = estimatedMinutes
}
if timeoutMinutes > 30 {
timeoutMinutes = 30
}
}
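// Worked example (illustrative): a 40 GB dump yields 40/5 + 5 = 13 minutes,
// while a 200 GB dump computes to 45 and is capped at 30 minutes.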
ctx, cancel := context.WithTimeout(context.Background(), time.Duration(timeoutMinutes)*time.Minute)
defer cancel()
cmd := exec.CommandContext(ctx, "pg_restore", "--list", filePath)
output, err := cmd.CombinedOutput()
if err != nil {
@@ -543,10 +664,72 @@ func (d *Diagnoser) verifyWithPgRestore(filePath string, result *DiagnoseResult)
// DiagnoseClusterDumps extracts and diagnoses all dumps in a cluster archive
func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*DiagnoseResult, error) {
// First, try to list archive contents without extracting (fast check)
listCmd := exec.Command("tar", "-tzf", archivePath)
listOutput, listErr := listCmd.CombinedOutput()
if listErr != nil {
// Get archive size for dynamic timeout calculation
archiveInfo, err := os.Stat(archivePath)
if err != nil {
return nil, fmt.Errorf("cannot stat archive: %w", err)
}
// Dynamic timeout based on archive size: base 10 min + 1 min per 3 GB
// Large archives like 100+ GB need more time for tar -tzf
timeoutMinutes := 10
if archiveInfo.Size() > 0 {
sizeGB := archiveInfo.Size() / (1024 * 1024 * 1024)
estimatedMinutes := int(sizeGB/3) + 10
if estimatedMinutes > timeoutMinutes {
timeoutMinutes = estimatedMinutes
}
if timeoutMinutes > 120 { // Max 2 hours
timeoutMinutes = 120
}
}
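// Worked example (illustrative): a 90 GB archive yields 90/3 + 10 = 40 minutes;
// archives of roughly 330 GB or more hit the 120-minute cap.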
d.log.Info("Listing cluster archive contents",
"size", fmt.Sprintf("%.1f GB", float64(archiveInfo.Size())/(1024*1024*1024)),
"timeout", fmt.Sprintf("%d min", timeoutMinutes))
listCtx, listCancel := context.WithTimeout(context.Background(), time.Duration(timeoutMinutes)*time.Minute)
defer listCancel()
listCmd := exec.CommandContext(listCtx, "tar", "-tzf", archivePath)
// Use pipes for streaming to avoid buffering entire output in memory
// This prevents OOM kills on large archives (100GB+) with millions of files
stdout, err := listCmd.StdoutPipe()
if err != nil {
return nil, fmt.Errorf("failed to create stdout pipe: %w", err)
}
var stderrBuf bytes.Buffer
listCmd.Stderr = &stderrBuf
if err := listCmd.Start(); err != nil {
return nil, fmt.Errorf("failed to start tar listing: %w", err)
}
// Stream the output line by line, only keeping relevant files
var files []string
scanner := bufio.NewScanner(stdout)
// Set a reasonable max line length (file paths shouldn't exceed this)
scanner.Buffer(make([]byte, 0, 4096), 1024*1024)
fileCount := 0
for scanner.Scan() {
fileCount++
line := scanner.Text()
// Only store dump files and important files, not every single file
if strings.HasSuffix(line, ".dump") || strings.HasSuffix(line, ".sql") ||
strings.HasSuffix(line, ".sql.gz") || strings.HasSuffix(line, ".json") ||
strings.Contains(line, "globals") || strings.Contains(line, "manifest") ||
strings.Contains(line, "metadata") || strings.HasSuffix(line, "/") {
files = append(files, line)
}
}
scanErr := scanner.Err()
listErr := listCmd.Wait()
if listErr != nil || scanErr != nil {
// Archive listing failed - likely corrupted
errResult := &DiagnoseResult{
FilePath: archivePath,
@@ -558,7 +741,12 @@ func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*Diagno
Details: &DiagnoseDetails{},
}
errOutput := string(listOutput)
errOutput := stderrBuf.String()
actualErr := listErr
if scanErr != nil {
actualErr = scanErr
}
if strings.Contains(errOutput, "unexpected end of file") ||
strings.Contains(errOutput, "Unexpected EOF") ||
strings.Contains(errOutput, "truncated") {
@@ -570,7 +758,7 @@ func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*Diagno
"Solution: Re-create the backup from source database")
} else {
errResult.Errors = append(errResult.Errors,
fmt.Sprintf("Cannot list archive contents: %v", listErr),
fmt.Sprintf("Cannot list archive contents: %v", actualErr),
fmt.Sprintf("tar error: %s", truncateString(errOutput, 300)),
"Run manually: tar -tzf "+archivePath+" 2>&1 | tail -50")
}
@@ -578,24 +766,29 @@ func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*Diagno
return []*DiagnoseResult{errResult}, nil
}
// Archive is listable - now check disk space before extraction
files := strings.Split(strings.TrimSpace(string(listOutput)), "\n")
d.log.Debug("Archive listing streamed successfully", "total_files", fileCount, "relevant_files", len(files))
// Check if we have enough disk space (estimate 4x archive size needed)
archiveInfo, _ := os.Stat(archivePath)
// archiveInfo already obtained at function start
requiredSpace := archiveInfo.Size() * 4
// Check temp directory space - try to extract metadata first
if stat, err := os.Stat(tempDir); err == nil && stat.IsDir() {
// Try extraction of a small test file first
testCmd := exec.Command("tar", "-xzf", archivePath, "-C", tempDir, "--wildcards", "*.json", "--wildcards", "globals.sql")
// Try extraction of a small test file first with timeout
testCtx, testCancel := context.WithTimeout(context.Background(), 30*time.Second)
testCmd := exec.CommandContext(testCtx, "tar", "-xzf", archivePath, "-C", tempDir, "--wildcards", "*.json", "--wildcards", "globals.sql")
testCmd.Run() // Ignore error - just try to extract metadata
testCancel()
}
d.log.Info("Archive listing successful", "files", len(files))
// Try full extraction
cmd := exec.Command("tar", "-xzf", archivePath, "-C", tempDir)
// Try full extraction with a generous timeout (30 minutes), since very
// large archives can take a long time
extractCtx, extractCancel := context.WithTimeout(context.Background(), 30*time.Minute)
defer extractCancel()
cmd := exec.CommandContext(extractCtx, "tar", "-xzf", archivePath, "-C", tempDir)
var stderr bytes.Buffer
cmd.Stderr = &stderr
if err := cmd.Run(); err != nil {
@@ -693,7 +886,7 @@ func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*Diagno
// PrintDiagnosis outputs a human-readable diagnosis report
func (d *Diagnoser) PrintDiagnosis(result *DiagnoseResult) {
fmt.Println("\n" + strings.Repeat("=", 70))
fmt.Printf("📋 DIAGNOSIS: %s\n", result.FileName)
fmt.Printf("[DIAG] DIAGNOSIS: %s\n", result.FileName)
fmt.Println(strings.Repeat("=", 70))
// Basic info
@@ -703,69 +896,69 @@ func (d *Diagnoser) PrintDiagnosis(result *DiagnoseResult) {
// Status
if result.IsValid {
fmt.Println("\n STATUS: VALID")
fmt.Println("\n[OK] STATUS: VALID")
} else {
fmt.Println("\n STATUS: INVALID")
fmt.Println("\n[FAIL] STATUS: INVALID")
}
if result.IsTruncated {
fmt.Println("⚠️ TRUNCATED: Yes - file appears incomplete")
fmt.Println("[WARN] TRUNCATED: Yes - file appears incomplete")
}
if result.IsCorrupted {
fmt.Println("⚠️ CORRUPTED: Yes - file structure is damaged")
fmt.Println("[WARN] CORRUPTED: Yes - file structure is damaged")
}
// Details
if result.Details != nil {
fmt.Println("\n📊 DETAILS:")
fmt.Println("\n[DETAILS]:")
if result.Details.HasPGDMPSignature {
fmt.Println(" Has PGDMP signature (PostgreSQL custom format)")
fmt.Println(" [+] Has PGDMP signature (PostgreSQL custom format)")
}
if result.Details.HasSQLHeader {
fmt.Println(" Has PostgreSQL SQL header")
fmt.Println(" [+] Has PostgreSQL SQL header")
}
if result.Details.GzipValid {
fmt.Println(" Gzip compression valid")
fmt.Println(" [+] Gzip compression valid")
}
if result.Details.PgRestoreListable {
fmt.Printf(" pg_restore can list contents (%d tables)\n", result.Details.TableCount)
fmt.Printf(" [+] pg_restore can list contents (%d tables)\n", result.Details.TableCount)
}
if result.Details.CopyBlockCount > 0 {
fmt.Printf(" Contains %d COPY blocks\n", result.Details.CopyBlockCount)
fmt.Printf(" [-] Contains %d COPY blocks\n", result.Details.CopyBlockCount)
}
if result.Details.UnterminatedCopy {
fmt.Printf(" Unterminated COPY block: %s (line %d)\n",
fmt.Printf(" [-] Unterminated COPY block: %s (line %d)\n",
result.Details.LastCopyTable, result.Details.LastCopyLineNumber)
}
if result.Details.ProperlyTerminated {
fmt.Println(" All COPY blocks properly terminated")
fmt.Println(" [+] All COPY blocks properly terminated")
}
if result.Details.ExpandedSize > 0 {
fmt.Printf(" Expanded size: %s (ratio: %.1fx)\n",
fmt.Printf(" [-] Expanded size: %s (ratio: %.1fx)\n",
formatBytes(result.Details.ExpandedSize), result.Details.CompressionRatio)
}
}
// Errors
if len(result.Errors) > 0 {
fmt.Println("\nERRORS:")
fmt.Println("\n[ERRORS]:")
for _, e := range result.Errors {
fmt.Printf(" %s\n", e)
fmt.Printf(" - %s\n", e)
}
}
// Warnings
if len(result.Warnings) > 0 {
fmt.Println("\n⚠️ WARNINGS:")
fmt.Println("\n[WARNINGS]:")
for _, w := range result.Warnings {
fmt.Printf(" %s\n", w)
fmt.Printf(" - %s\n", w)
}
}
// Recommendations
if !result.IsValid {
fmt.Println("\n💡 RECOMMENDATIONS:")
fmt.Println("\n[HINT] RECOMMENDATIONS:")
if result.IsTruncated {
fmt.Println(" 1. Re-run the backup process for this database")
fmt.Println(" 2. Check disk space on backup server during backup")

View File

@@ -2,10 +2,12 @@ package restore
import (
"context"
"database/sql"
"fmt"
"os"
"os/exec"
"path/filepath"
"strconv"
"strings"
"sync"
"sync/atomic"
@@ -17,6 +19,9 @@ import (
"dbbackup/internal/logger"
"dbbackup/internal/progress"
"dbbackup/internal/security"
"github.com/hashicorp/go-multierror"
_ "github.com/jackc/pgx/v5/stdlib" // PostgreSQL driver
)
// Engine handles database restore operations
@@ -28,7 +33,6 @@ type Engine struct {
detailedReporter *progress.DetailedReporter
dryRun bool
debugLogPath string // Path to save debug log on error
errorCollector *ErrorCollector // Collects detailed error info
}
// New creates a new restore engine
@@ -128,7 +132,7 @@ func (e *Engine) RestoreSingle(ctx context.Context, archivePath, targetDB string
e.log.Warn("Checksum verification failed", "error", checksumErr)
e.log.Warn("Continuing restore without checksum verification (use with caution)")
} else {
e.log.Info(" Archive checksum verified successfully")
e.log.Info("[OK] Archive checksum verified successfully")
}
// Detect archive format
@@ -224,7 +228,18 @@ func (e *Engine) restorePostgreSQLDump(ctx context.Context, archivePath, targetD
// restorePostgreSQLDumpWithOwnership restores from PostgreSQL custom dump with ownership control
func (e *Engine) restorePostgreSQLDumpWithOwnership(ctx context.Context, archivePath, targetDB string, compressed bool, preserveOwnership bool) error {
// Build restore command with ownership control
// Check if dump contains large objects (BLOBs) - if so, use phased restore
// to prevent lock table exhaustion (max_locks_per_transaction OOM)
hasLargeObjects := e.checkDumpHasLargeObjects(archivePath)
if hasLargeObjects {
e.log.Info("Large objects detected - using phased restore to prevent lock exhaustion",
"database", targetDB,
"archive", archivePath)
return e.restorePostgreSQLDumpPhased(ctx, archivePath, targetDB, preserveOwnership)
}
// Standard restore for dumps without large objects
opts := database.RestoreOptions{
Parallel: 1,
Clean: false, // We already dropped the database
@@ -250,8 +265,124 @@ func (e *Engine) restorePostgreSQLDumpWithOwnership(ctx context.Context, archive
return e.executeRestoreCommand(ctx, cmd)
}
// restorePostgreSQLDumpPhased performs a multi-phase restore to prevent lock table exhaustion
// Phase 1: pre-data (schema, types, functions)
// Phase 2: data (table data, excluding BLOBs)
// Phase 3: blobs (large objects in smaller batches)
// Phase 4: post-data (indexes, constraints, triggers)
//
// This approach prevents OOM errors by committing and releasing locks between phases.
func (e *Engine) restorePostgreSQLDumpPhased(ctx context.Context, archivePath, targetDB string, preserveOwnership bool) error {
e.log.Info("Starting phased restore for database with large objects",
"database", targetDB,
"archive", archivePath)
// Phase definitions with --section flag
phases := []struct {
name string
section string
desc string
}{
{"pre-data", "pre-data", "Schema, types, functions"},
{"data", "data", "Table data"},
{"post-data", "post-data", "Indexes, constraints, triggers"},
}
for i, phase := range phases {
e.log.Info(fmt.Sprintf("Phase %d/%d: Restoring %s", i+1, len(phases), phase.name),
"database", targetDB,
"section", phase.section,
"description", phase.desc)
if err := e.restoreSection(ctx, archivePath, targetDB, phase.section, preserveOwnership); err != nil {
// Check if it's an ignorable error
if e.isIgnorableError(err.Error()) {
e.log.Warn(fmt.Sprintf("Phase %d completed with ignorable errors", i+1),
"section", phase.section,
"error", err)
continue
}
return fmt.Errorf("phase %d (%s) failed: %w", i+1, phase.name, err)
}
e.log.Info(fmt.Sprintf("Phase %d/%d completed successfully", i+1, len(phases)),
"section", phase.section)
}
e.log.Info("Phased restore completed successfully", "database", targetDB)
return nil
}
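// For reference, the phased flow above is roughly equivalent to invoking
// pg_restore once per section (a sketch; the exact flags are assembled in
// restoreSection below):
//
//	pg_restore --section=pre-data  --dbname=<target> <archive>
//	pg_restore --section=data      --dbname=<target> <archive>
//	pg_restore --section=post-data --dbname=<target> <archive>
//
// Because each invocation is a separate pg_restore process, locks acquired
// while restoring one section are released before the next section starts.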
// restoreSection restores a specific section of a PostgreSQL dump
func (e *Engine) restoreSection(ctx context.Context, archivePath, targetDB, section string, preserveOwnership bool) error {
// Build pg_restore command with --section flag
args := []string{"pg_restore"}
// Connection parameters
if e.cfg.Host != "localhost" {
args = append(args, "-h", e.cfg.Host)
args = append(args, "-p", fmt.Sprintf("%d", e.cfg.Port))
args = append(args, "--no-password")
}
args = append(args, "-U", e.cfg.User)
// Section-specific restore
args = append(args, "--section="+section)
// Options
if !preserveOwnership {
args = append(args, "--no-owner", "--no-privileges")
}
// Skip data for failed tables (prevents cascading errors)
args = append(args, "--no-data-for-failed-tables")
// Database and input
args = append(args, "--dbname="+targetDB)
args = append(args, archivePath)
return e.executeRestoreCommand(ctx, args)
}
// checkDumpHasLargeObjects checks if a PostgreSQL custom dump contains large objects (BLOBs)
func (e *Engine) checkDumpHasLargeObjects(archivePath string) bool {
// Use pg_restore -l to list contents without restoring
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
cmd := exec.CommandContext(ctx, "pg_restore", "-l", archivePath)
output, err := cmd.Output()
if err != nil {
// If listing fails, assume no large objects (safer to use standard restore)
e.log.Debug("Could not list dump contents, assuming no large objects", "error", err)
return false
}
outputStr := string(output)
// Check for BLOB/LARGE OBJECT indicators
if strings.Contains(outputStr, "BLOB") ||
strings.Contains(outputStr, "LARGE OBJECT") ||
strings.Contains(outputStr, " BLOBS ") ||
strings.Contains(outputStr, "lo_create") {
return true
}
return false
}
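// In pg_restore -l output, large objects typically appear as TOC lines
// containing "BLOB", for example (illustrative sample, IDs invented):
//
//	3344; 2613 24576 BLOB 24576 postgres
//
// which is what the substring checks above key on.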
// restorePostgreSQLSQL restores from PostgreSQL SQL script
func (e *Engine) restorePostgreSQLSQL(ctx context.Context, archivePath, targetDB string, compressed bool) error {
// Pre-validate SQL dump to detect truncation BEFORE attempting restore
// This saves time by catching corrupted files early (vs 49min failures)
if err := e.quickValidateSQLDump(archivePath, compressed); err != nil {
e.log.Error("Pre-restore validation failed - dump file appears corrupted",
"file", archivePath,
"error", err)
return fmt.Errorf("dump validation failed: %w - the backup file may be truncated or corrupted", err)
}
// Use psql for SQL scripts
var cmd []string
@@ -262,9 +393,10 @@ func (e *Engine) restorePostgreSQLSQL(ctx context.Context, archivePath, targetDB
}
if compressed {
psqlCmd := fmt.Sprintf("psql -U %s -d %s", e.cfg.User, targetDB)
// Use ON_ERROR_STOP=1 to fail fast on first error (prevents millions of errors on truncated dumps)
psqlCmd := fmt.Sprintf("psql -U %s -d %s -v ON_ERROR_STOP=1", e.cfg.User, targetDB)
if hostArg != "" {
psqlCmd = fmt.Sprintf("psql %s -U %s -d %s", hostArg, e.cfg.User, targetDB)
psqlCmd = fmt.Sprintf("psql %s -U %s -d %s -v ON_ERROR_STOP=1", hostArg, e.cfg.User, targetDB)
}
// Set PGPASSWORD in the bash command for password-less auth
cmd = []string{
@@ -279,6 +411,7 @@ func (e *Engine) restorePostgreSQLSQL(ctx context.Context, archivePath, targetDB
"-p", fmt.Sprintf("%d", e.cfg.Port),
"-U", e.cfg.User,
"-d", targetDB,
"-v", "ON_ERROR_STOP=1",
"-f", archivePath,
}
} else {
@@ -286,6 +419,7 @@ func (e *Engine) restorePostgreSQLSQL(ctx context.Context, archivePath, targetDB
"psql",
"-U", e.cfg.User,
"-d", targetDB,
"-v", "ON_ERROR_STOP=1",
"-f", archivePath,
}
}
@@ -345,10 +479,13 @@ func (e *Engine) executeRestoreCommandWithContext(ctx context.Context, cmdArgs [
return fmt.Errorf("failed to start restore command: %w", err)
}
// Read stderr in chunks to log errors without loading all into memory
buf := make([]byte, 4096)
// Read stderr in goroutine to avoid blocking
var lastError string
var errorCount int
stderrDone := make(chan struct{})
go func() {
defer close(stderrDone)
buf := make([]byte, 4096)
const maxErrors = 10 // Limit captured errors to prevent OOM
for {
n, err := stderr.Read(buf)
@@ -374,11 +511,33 @@ func (e *Engine) executeRestoreCommandWithContext(ctx context.Context, cmdArgs [
break
}
}
}()
if err := cmd.Wait(); err != nil {
// Wait for command with proper context handling
cmdDone := make(chan error, 1)
go func() {
cmdDone <- cmd.Wait()
}()
var cmdErr error
select {
case cmdErr = <-cmdDone:
// Command completed (success or failure)
case <-ctx.Done():
// Context cancelled - kill process
e.log.Warn("Restore cancelled - killing process")
cmd.Process.Kill()
<-cmdDone
cmdErr = ctx.Err()
}
// Wait for stderr reader to finish
<-stderrDone
if cmdErr != nil {
// Get exit code
exitCode := 1
if exitErr, ok := err.(*exec.ExitError); ok {
if exitErr, ok := cmdErr.(*exec.ExitError); ok {
exitCode = exitErr.ExitCode()
}
@@ -425,7 +584,7 @@ func (e *Engine) executeRestoreCommandWithContext(ctx context.Context, cmdArgs [
e.log.Warn("Failed to save debug log", "error", saveErr)
} else {
e.log.Info("Debug log saved", "path", e.debugLogPath)
fmt.Printf("\n📋 Detailed error report saved to: %s\n", e.debugLogPath)
fmt.Printf("\n[LOG] Detailed error report saved to: %s\n", e.debugLogPath)
}
}
}
@@ -469,10 +628,13 @@ func (e *Engine) executeRestoreWithDecompression(ctx context.Context, archivePat
return fmt.Errorf("failed to start restore command: %w", err)
}
// Read stderr in chunks to log errors without loading all into memory
buf := make([]byte, 4096)
// Read stderr in goroutine to avoid blocking
var lastError string
var errorCount int
stderrDone := make(chan struct{})
go func() {
defer close(stderrDone)
buf := make([]byte, 4096)
const maxErrors = 10 // Limit captured errors to prevent OOM
for {
n, err := stderr.Read(buf)
@@ -492,8 +654,30 @@ func (e *Engine) executeRestoreWithDecompression(ctx context.Context, archivePat
break
}
}
}()
if err := cmd.Wait(); err != nil {
// Wait for command with proper context handling
cmdDone := make(chan error, 1)
go func() {
cmdDone <- cmd.Wait()
}()
var cmdErr error
select {
case cmdErr = <-cmdDone:
// Command completed (success or failure)
case <-ctx.Done():
// Context cancelled - kill process
e.log.Warn("Restore with decompression cancelled - killing process")
cmd.Process.Kill()
<-cmdDone
cmdErr = ctx.Err()
}
// Wait for stderr reader to finish
<-stderrDone
if cmdErr != nil {
// PostgreSQL pg_restore returns exit code 1 even for ignorable errors
// Check if errors are ignorable (already exists, duplicate, etc.)
if lastError != "" && e.isIgnorableError(lastError) {
@@ -505,18 +689,18 @@ func (e *Engine) executeRestoreWithDecompression(ctx context.Context, archivePat
if lastError != "" {
classification := checks.ClassifyError(lastError)
e.log.Error("Restore with decompression failed",
"error", err,
"error", cmdErr,
"last_stderr", lastError,
"error_count", errorCount,
"error_type", classification.Type,
"hint", classification.Hint,
"action", classification.Action)
return fmt.Errorf("restore failed: %w (last error: %s, total errors: %d) - %s",
err, lastError, errorCount, classification.Hint)
cmdErr, lastError, errorCount, classification.Hint)
}
e.log.Error("Restore with decompression failed", "error", err, "last_stderr", lastError, "error_count", errorCount)
return fmt.Errorf("restore failed: %w", err)
e.log.Error("Restore with decompression failed", "error", cmdErr, "last_stderr", lastError, "error_count", errorCount)
return fmt.Errorf("restore failed: %w", cmdErr)
}
return nil
@@ -551,7 +735,7 @@ func (e *Engine) previewRestore(archivePath, targetDB string, format ArchiveForm
fmt.Printf(" 1. Execute: mysql %s < %s\n", targetDB, archivePath)
}
fmt.Println("\n⚠️ WARNING: This will restore data to the target database.")
fmt.Println("\n[WARN] WARNING: This will restore data to the target database.")
fmt.Println(" Existing data may be overwritten or merged.")
fmt.Println("\nTo execute this restore, add the --confirm flag.")
fmt.Println(strings.Repeat("=", 60) + "\n")
@@ -582,7 +766,7 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
e.log.Warn("Checksum verification failed", "error", checksumErr)
e.log.Warn("Continuing restore without checksum verification (use with caution)")
} else {
e.log.Info(" Cluster archive checksum verified successfully")
e.log.Info("[OK] Cluster archive checksum verified successfully")
}
format := DetectArchiveFormat(archivePath)
@@ -616,11 +800,12 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
e.progress.Start(fmt.Sprintf("Restoring cluster from %s", filepath.Base(archivePath)))
// Create temporary extraction directory
tempDir := filepath.Join(e.cfg.BackupDir, fmt.Sprintf(".restore_%d", time.Now().Unix()))
// Create temporary extraction directory in configured WorkDir
workDir := e.cfg.GetEffectiveWorkDir()
tempDir := filepath.Join(workDir, fmt.Sprintf(".restore_%d", time.Now().Unix()))
if err := os.MkdirAll(tempDir, 0755); err != nil {
operation.Fail("Failed to create temporary directory")
return fmt.Errorf("failed to create temp directory: %w", err)
return fmt.Errorf("failed to create temp directory in %s: %w", workDir, err)
}
defer os.RemoveAll(tempDir)
@@ -641,7 +826,7 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
if !isSuperuser {
e.log.Warn("Current user is not a superuser - database ownership may not be fully restored")
e.progress.Update("⚠️ Warning: Non-superuser - ownership restoration limited")
e.progress.Update("[WARN] Warning: Non-superuser - ownership restoration limited")
time.Sleep(2 * time.Second) // Give user time to see warning
} else {
e.log.Info("Superuser privileges confirmed - full ownership restoration enabled")
@@ -682,7 +867,103 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
return fmt.Errorf("failed to read dumps directory: %w", err)
}
var failedDBs []string
// PRE-VALIDATE all SQL dumps BEFORE starting restore
// This catches truncated files early instead of failing after hours of work
e.log.Info("Pre-validating dump files before restore...")
e.progress.Update("Pre-validating dump files...")
var corruptedDumps []string
diagnoser := NewDiagnoser(e.log, false)
for _, entry := range entries {
if entry.IsDir() {
continue
}
dumpFile := filepath.Join(dumpsDir, entry.Name())
if strings.HasSuffix(dumpFile, ".sql.gz") {
result, err := diagnoser.DiagnoseFile(dumpFile)
if err != nil {
e.log.Warn("Could not validate dump file", "file", entry.Name(), "error", err)
continue
}
if result.IsTruncated || result.IsCorrupted || !result.IsValid {
dbName := strings.TrimSuffix(entry.Name(), ".sql.gz")
errDetail := "unknown issue"
if len(result.Errors) > 0 {
errDetail = result.Errors[0]
}
corruptedDumps = append(corruptedDumps, fmt.Sprintf("%s: %s", dbName, errDetail))
e.log.Error("CORRUPTED dump file detected",
"database", dbName,
"file", entry.Name(),
"truncated", result.IsTruncated,
"errors", result.Errors)
}
} else if strings.HasSuffix(dumpFile, ".dump") {
// Validate custom format dumps using pg_restore --list
cmd := exec.CommandContext(ctx, "pg_restore", "--list", dumpFile)
output, err := cmd.CombinedOutput()
if err != nil {
dbName := strings.TrimSuffix(entry.Name(), ".dump")
errDetail := strings.TrimSpace(string(output))
if len(errDetail) > 100 {
errDetail = errDetail[:100] + "..."
}
// Check for truncation indicators
if strings.Contains(errDetail, "unexpected end") || strings.Contains(errDetail, "invalid") {
corruptedDumps = append(corruptedDumps, fmt.Sprintf("%s: %s", dbName, errDetail))
e.log.Error("CORRUPTED custom dump file detected",
"database", dbName,
"file", entry.Name(),
"error", errDetail)
} else {
e.log.Warn("pg_restore --list warning (may be recoverable)",
"file", entry.Name(),
"error", errDetail)
}
}
}
}
if len(corruptedDumps) > 0 {
operation.Fail("Corrupted dump files detected")
e.progress.Fail(fmt.Sprintf("Found %d corrupted dump files - restore aborted", len(corruptedDumps)))
return fmt.Errorf("pre-validation failed: %d corrupted dump files detected: %s - the backup archive appears to be damaged, restore from a different backup",
len(corruptedDumps), strings.Join(corruptedDumps, ", "))
}
e.log.Info("All dump files passed validation")
// Run comprehensive preflight checks (Linux system + PostgreSQL + Archive analysis)
preflight, preflightErr := e.RunPreflightChecks(ctx, dumpsDir, entries)
if preflightErr != nil {
e.log.Warn("Preflight checks failed", "error", preflightErr)
}
// Calculate optimal lock boost based on BLOB count
lockBoostValue := 2048 // Default
if preflight != nil && preflight.Archive.RecommendedLockBoost > 0 {
lockBoostValue = preflight.Archive.RecommendedLockBoost
}
// AUTO-TUNE: Boost PostgreSQL settings for large restores
e.progress.Update("Tuning PostgreSQL for large restore...")
originalSettings, tuneErr := e.boostPostgreSQLSettings(ctx, lockBoostValue)
if tuneErr != nil {
e.log.Warn("Could not boost PostgreSQL settings - restore may fail on BLOB-heavy databases",
"error", tuneErr)
} else {
e.log.Info("Boosted PostgreSQL settings for restore",
"max_locks_per_transaction", fmt.Sprintf("%d → %d", originalSettings.MaxLocks, lockBoostValue),
"maintenance_work_mem", fmt.Sprintf("%s → 2GB", originalSettings.MaintenanceWorkMem))
// Ensure we reset settings when done (even on failure)
defer func() {
if resetErr := e.resetPostgreSQLSettings(ctx, originalSettings); resetErr != nil {
e.log.Warn("Could not reset PostgreSQL settings", "error", resetErr)
} else {
e.log.Info("Reset PostgreSQL settings to original values")
}
}()
}
var restoreErrors *multierror.Error
var restoreErrorsMu sync.Mutex
totalDBs := 0
// Count total databases
@@ -710,13 +991,12 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
e.log.Warn("Large objects detected in dump files - reducing parallelism to avoid lock contention",
"original_parallelism", parallelism,
"adjusted_parallelism", 1)
e.progress.Update("⚠️ Large objects detected - using sequential restore to avoid lock conflicts")
e.progress.Update("[WARN] Large objects detected - using sequential restore to avoid lock conflicts")
time.Sleep(2 * time.Second) // Give user time to see warning
parallelism = 1
}
var successCount, failCount int32
var failedDBsMu sync.Mutex
var mu sync.Mutex // Protect shared resources (progress, logger)
// Create semaphore to limit concurrency
@@ -736,6 +1016,14 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
defer wg.Done()
defer func() { <-semaphore }() // Release
// Panic recovery - prevent one database failure from crashing entire cluster restore
defer func() {
if r := recover(); r != nil {
e.log.Error("Panic in database restore goroutine", "file", filename, "panic", r)
atomic.AddInt32(&failCount, 1)
}
}()
// Update estimator progress (thread-safe)
mu.Lock()
estimator.UpdateProgress(idx)
@@ -763,9 +1051,9 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
// STEP 2: Create fresh database
if err := e.ensureDatabaseExists(ctx, dbName); err != nil {
e.log.Error("Failed to create database", "name", dbName, "error", err)
failedDBsMu.Lock()
failedDBs = append(failedDBs, fmt.Sprintf("%s: failed to create database: %v", dbName, err))
failedDBsMu.Unlock()
restoreErrorsMu.Lock()
restoreErrors = multierror.Append(restoreErrors, fmt.Errorf("%s: failed to create database: %w", dbName, err))
restoreErrorsMu.Unlock()
atomic.AddInt32(&failCount, 1)
return
}
@@ -808,10 +1096,10 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
mu.Unlock()
}
failedDBsMu.Lock()
restoreErrorsMu.Lock()
// Include more context in the error message
failedDBs = append(failedDBs, fmt.Sprintf("%s: restore failed: %v", dbName, restoreErr))
failedDBsMu.Unlock()
restoreErrors = multierror.Append(restoreErrors, fmt.Errorf("%s: restore failed: %w", dbName, restoreErr))
restoreErrorsMu.Unlock()
atomic.AddInt32(&failCount, 1)
return
}
@@ -829,7 +1117,17 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
failCountFinal := int(atomic.LoadInt32(&failCount))
if failCountFinal > 0 {
failedList := strings.Join(failedDBs, "\n ")
// Format multi-error with detailed output
restoreErrors.ErrorFormat = func(errs []error) string {
if len(errs) == 1 {
return errs[0].Error()
}
points := make([]string, len(errs))
for i, err := range errs {
points[i] = fmt.Sprintf(" • %s", err.Error())
}
return fmt.Sprintf("%d database(s) failed:\n%s", len(errs), strings.Join(points, "\n"))
}
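// With this formatter, the summary renders roughly as (illustrative,
// database names invented):
//
//	2 database(s) failed:
//	  • appdb: restore failed: ...
//	  • metrics: failed to create database: ...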
// Log summary
e.log.Info("Cluster restore completed with failures",
@@ -840,7 +1138,7 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
e.progress.Fail(fmt.Sprintf("Cluster restore: %d succeeded, %d failed out of %d total", successCountFinal, failCountFinal, totalDBs))
operation.Complete(fmt.Sprintf("Partial restore: %d/%d databases succeeded", successCountFinal, totalDBs))
return fmt.Errorf("cluster restore completed with %d failures:\n %s", failCountFinal, failedList)
return fmt.Errorf("cluster restore completed with %d failures:\n%s", failCountFinal, restoreErrors.Error())
}
e.progress.Complete(fmt.Sprintf("Cluster restored successfully: %d databases", successCountFinal))
@@ -863,6 +1161,9 @@ func (e *Engine) extractArchive(ctx context.Context, archivePath, destDir string
}
// Discard stderr output in chunks to prevent memory buildup
stderrDone := make(chan struct{})
go func() {
defer close(stderrDone)
buf := make([]byte, 4096)
for {
_, err := stderr.Read(buf)
@@ -870,9 +1171,29 @@ func (e *Engine) extractArchive(ctx context.Context, archivePath, destDir string
break
}
}
}()
if err := cmd.Wait(); err != nil {
return fmt.Errorf("tar extraction failed: %w", err)
// Wait for command with proper context handling
cmdDone := make(chan error, 1)
go func() {
cmdDone <- cmd.Wait()
}()
var cmdErr error
select {
case cmdErr = <-cmdDone:
// Command completed
case <-ctx.Done():
e.log.Warn("Archive extraction cancelled - killing process")
cmd.Process.Kill()
<-cmdDone
cmdErr = ctx.Err()
}
<-stderrDone
if cmdErr != nil {
return fmt.Errorf("tar extraction failed: %w", cmdErr)
}
return nil
}
@@ -905,9 +1226,12 @@ func (e *Engine) restoreGlobals(ctx context.Context, globalsFile string) error {
return fmt.Errorf("failed to start psql: %w", err)
}
// Read stderr in chunks
buf := make([]byte, 4096)
// Read stderr in chunks in goroutine
var lastError string
stderrDone := make(chan struct{})
go func() {
defer close(stderrDone)
buf := make([]byte, 4096)
for {
n, err := stderr.Read(buf)
if n > 0 {
@@ -921,9 +1245,29 @@ func (e *Engine) restoreGlobals(ctx context.Context, globalsFile string) error {
break
}
}
}()
if err := cmd.Wait(); err != nil {
return fmt.Errorf("failed to restore globals: %w (last error: %s)", err, lastError)
// Wait for command with proper context handling
cmdDone := make(chan error, 1)
go func() {
cmdDone <- cmd.Wait()
}()
var cmdErr error
select {
case cmdErr = <-cmdDone:
// Command completed
case <-ctx.Done():
e.log.Warn("Globals restore cancelled - killing process")
cmd.Process.Kill()
<-cmdDone
cmdErr = ctx.Err()
}
<-stderrDone
if cmdErr != nil {
return fmt.Errorf("failed to restore globals: %w (last error: %s)", cmdErr, lastError)
}
return nil
@@ -1160,7 +1504,7 @@ func (e *Engine) previewClusterRestore(archivePath string) error {
fmt.Println(" 3. Restore all databases found in archive")
fmt.Println(" 4. Cleanup temporary files")
fmt.Println("\n⚠️ WARNING: This will restore multiple databases.")
fmt.Println("\n[WARN] WARNING: This will restore multiple databases.")
fmt.Println(" Existing databases may be overwritten or merged.")
fmt.Println("\nTo execute this restore, add the --confirm flag.")
fmt.Println(strings.Repeat("=", 60) + "\n")
@@ -1187,7 +1531,8 @@ func (e *Engine) detectLargeObjectsInDumps(dumpsDir string, entries []os.DirEntr
}
// Use pg_restore -l to list contents (fast, doesn't restore data)
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
// 2 minutes for large dumps with many objects
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
defer cancel()
cmd := exec.CommandContext(ctx, "pg_restore", "-l", dumpFile)
@@ -1274,3 +1619,218 @@ func FormatBytes(bytes int64) string {
}
return fmt.Sprintf("%.1f %cB", float64(bytes)/float64(div), "KMGTPE"[exp])
}
// quickValidateSQLDump performs a fast validation of SQL dump files
// by checking for truncated COPY blocks. This catches corrupted dumps
// BEFORE attempting a full restore (which could waste 49+ minutes).
func (e *Engine) quickValidateSQLDump(archivePath string, compressed bool) error {
e.log.Debug("Pre-validating SQL dump file", "path", archivePath, "compressed", compressed)
diagnoser := NewDiagnoser(e.log, false) // non-verbose for speed
result, err := diagnoser.DiagnoseFile(archivePath)
if err != nil {
return fmt.Errorf("diagnosis error: %w", err)
}
// Check for critical issues that would cause restore failure
if result.IsTruncated {
errMsg := "SQL dump file is TRUNCATED"
if result.Details != nil && result.Details.UnterminatedCopy {
errMsg = fmt.Sprintf("%s - unterminated COPY block for table '%s' at line %d",
errMsg, result.Details.LastCopyTable, result.Details.LastCopyLineNumber)
if len(result.Details.SampleCopyData) > 0 {
errMsg = fmt.Sprintf("%s (sample orphaned data: %s)", errMsg, result.Details.SampleCopyData[0])
}
}
return fmt.Errorf("%s", errMsg)
}
if result.IsCorrupted {
return fmt.Errorf("SQL dump file is corrupted: %v", result.Errors)
}
if !result.IsValid {
if len(result.Errors) > 0 {
return fmt.Errorf("dump validation failed: %s", result.Errors[0])
}
return fmt.Errorf("dump file is invalid (unknown reason)")
}
// Log any warnings but don't fail
for _, warning := range result.Warnings {
e.log.Warn("Dump validation warning", "warning", warning)
}
e.log.Debug("SQL dump validation passed", "path", archivePath)
return nil
}
// boostLockCapacity temporarily increases max_locks_per_transaction to prevent OOM
// during large restores with many BLOBs. Returns the original value for later reset.
// Uses ALTER SYSTEM + pg_reload_conf() so no restart is needed.
func (e *Engine) boostLockCapacity(ctx context.Context) (int, error) {
// Connect to PostgreSQL to run system commands
connStr := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=postgres sslmode=disable",
e.cfg.Host, e.cfg.Port, e.cfg.User, e.cfg.Password)
// For localhost, use Unix socket
if e.cfg.Host == "localhost" || e.cfg.Host == "" {
connStr = fmt.Sprintf("user=%s password=%s dbname=postgres sslmode=disable",
e.cfg.User, e.cfg.Password)
}
db, err := sql.Open("pgx", connStr)
if err != nil {
return 0, fmt.Errorf("failed to connect: %w", err)
}
defer db.Close()
// Get current value
var currentValue int
err = db.QueryRowContext(ctx, "SHOW max_locks_per_transaction").Scan(&currentValue)
if err != nil {
// Try parsing as string (some versions return string)
var currentValueStr string
err = db.QueryRowContext(ctx, "SHOW max_locks_per_transaction").Scan(&currentValueStr)
if err != nil {
return 0, fmt.Errorf("failed to get current max_locks_per_transaction: %w", err)
}
fmt.Sscanf(currentValueStr, "%d", &currentValue)
}
// Skip if already high enough
if currentValue >= 2048 {
e.log.Info("max_locks_per_transaction already sufficient", "value", currentValue)
return currentValue, nil
}
// Boost to 2048 (enough for most BLOB-heavy databases)
_, err = db.ExecContext(ctx, "ALTER SYSTEM SET max_locks_per_transaction = 2048")
if err != nil {
return currentValue, fmt.Errorf("failed to set max_locks_per_transaction: %w", err)
}
// Reload config without restart
_, err = db.ExecContext(ctx, "SELECT pg_reload_conf()")
if err != nil {
return currentValue, fmt.Errorf("failed to reload config: %w", err)
}
return currentValue, nil
}
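// The statements issued above are equivalent to running (sketch):
//
//	ALTER SYSTEM SET max_locks_per_transaction = 2048;
//	SELECT pg_reload_conf();
//
// ALTER SYSTEM persists the override to postgresql.auto.conf, and
// pg_reload_conf() signals the server to re-read its configuration.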
// resetLockCapacity restores the original max_locks_per_transaction value
func (e *Engine) resetLockCapacity(ctx context.Context, originalValue int) error {
connStr := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=postgres sslmode=disable",
e.cfg.Host, e.cfg.Port, e.cfg.User, e.cfg.Password)
if e.cfg.Host == "localhost" || e.cfg.Host == "" {
connStr = fmt.Sprintf("user=%s password=%s dbname=postgres sslmode=disable",
e.cfg.User, e.cfg.Password)
}
db, err := sql.Open("pgx", connStr)
if err != nil {
return fmt.Errorf("failed to connect: %w", err)
}
defer db.Close()
// Reset to original value (or use RESET to go back to default)
if originalValue == 64 { // Default value
_, err = db.ExecContext(ctx, "ALTER SYSTEM RESET max_locks_per_transaction")
} else {
_, err = db.ExecContext(ctx, fmt.Sprintf("ALTER SYSTEM SET max_locks_per_transaction = %d", originalValue))
}
if err != nil {
return fmt.Errorf("failed to reset max_locks_per_transaction: %w", err)
}
// Reload config
_, err = db.ExecContext(ctx, "SELECT pg_reload_conf()")
if err != nil {
return fmt.Errorf("failed to reload config: %w", err)
}
return nil
}
// OriginalSettings stores PostgreSQL settings to restore after operation
type OriginalSettings struct {
MaxLocks int
MaintenanceWorkMem string
}
// boostPostgreSQLSettings boosts multiple PostgreSQL settings for large restores
func (e *Engine) boostPostgreSQLSettings(ctx context.Context, lockBoostValue int) (*OriginalSettings, error) {
connStr := e.buildConnString()
db, err := sql.Open("pgx", connStr)
if err != nil {
return nil, fmt.Errorf("failed to connect: %w", err)
}
defer db.Close()
original := &OriginalSettings{}
// Get current max_locks_per_transaction
var maxLocksStr string
if err := db.QueryRowContext(ctx, "SHOW max_locks_per_transaction").Scan(&maxLocksStr); err == nil {
original.MaxLocks, _ = strconv.Atoi(maxLocksStr)
}
// Get current maintenance_work_mem
db.QueryRowContext(ctx, "SHOW maintenance_work_mem").Scan(&original.MaintenanceWorkMem)
// Boost max_locks_per_transaction (if not already high enough)
if original.MaxLocks < lockBoostValue {
_, err = db.ExecContext(ctx, fmt.Sprintf("ALTER SYSTEM SET max_locks_per_transaction = %d", lockBoostValue))
if err != nil {
e.log.Warn("Could not boost max_locks_per_transaction", "error", err)
}
}
// Boost maintenance_work_mem to 2GB for faster index creation
_, err = db.ExecContext(ctx, "ALTER SYSTEM SET maintenance_work_mem = '2GB'")
if err != nil {
e.log.Warn("Could not boost maintenance_work_mem", "error", err)
}
// Reload config to apply changes (no restart needed for these settings)
_, err = db.ExecContext(ctx, "SELECT pg_reload_conf()")
if err != nil {
return original, fmt.Errorf("failed to reload config: %w", err)
}
return original, nil
}
// resetPostgreSQLSettings restores original PostgreSQL settings
func (e *Engine) resetPostgreSQLSettings(ctx context.Context, original *OriginalSettings) error {
connStr := e.buildConnString()
db, err := sql.Open("pgx", connStr)
if err != nil {
return fmt.Errorf("failed to connect: %w", err)
}
defer db.Close()
// Reset max_locks_per_transaction
if original.MaxLocks == 64 { // Default
db.ExecContext(ctx, "ALTER SYSTEM RESET max_locks_per_transaction")
} else if original.MaxLocks > 0 {
db.ExecContext(ctx, fmt.Sprintf("ALTER SYSTEM SET max_locks_per_transaction = %d", original.MaxLocks))
}
// Reset maintenance_work_mem
if original.MaintenanceWorkMem == "64MB" { // Default
db.ExecContext(ctx, "ALTER SYSTEM RESET maintenance_work_mem")
} else if original.MaintenanceWorkMem != "" {
db.ExecContext(ctx, fmt.Sprintf("ALTER SYSTEM SET maintenance_work_mem = '%s'", original.MaintenanceWorkMem))
}
// Reload config
_, err = db.ExecContext(ctx, "SELECT pg_reload_conf()")
if err != nil {
return fmt.Errorf("failed to reload config: %w", err)
}
return nil
}

View File

@@ -3,6 +3,7 @@ package restore
import (
"bufio"
"compress/gzip"
"context"
"encoding/json"
"fmt"
"io"
@@ -396,20 +397,20 @@ func (ec *ErrorCollector) SaveReport(report *RestoreErrorReport, outputPath stri
// PrintReport prints a human-readable summary of the error report
func (ec *ErrorCollector) PrintReport(report *RestoreErrorReport) {
fmt.Println()
fmt.Println(strings.Repeat("", 70))
fmt.Println(" 🔴 RESTORE ERROR REPORT")
fmt.Println(strings.Repeat("", 70))
fmt.Println(strings.Repeat("=", 70))
fmt.Println(" [ERROR] RESTORE ERROR REPORT")
fmt.Println(strings.Repeat("=", 70))
fmt.Printf("\n📅 Timestamp: %s\n", report.Timestamp.Format("2006-01-02 15:04:05"))
fmt.Printf("📦 Archive: %s\n", filepath.Base(report.ArchivePath))
fmt.Printf("📊 Format: %s\n", report.ArchiveFormat)
fmt.Printf("🎯 Target DB: %s\n", report.TargetDB)
fmt.Printf("⚠️ Exit Code: %d\n", report.ExitCode)
fmt.Printf(" Total Errors: %d\n", report.TotalErrors)
fmt.Printf("\n[TIME] Timestamp: %s\n", report.Timestamp.Format("2006-01-02 15:04:05"))
fmt.Printf("[FILE] Archive: %s\n", filepath.Base(report.ArchivePath))
fmt.Printf("[FMT] Format: %s\n", report.ArchiveFormat)
fmt.Printf("[TGT] Target DB: %s\n", report.TargetDB)
fmt.Printf("[CODE] Exit Code: %d\n", report.ExitCode)
fmt.Printf("[ERR] Total Errors: %d\n", report.TotalErrors)
fmt.Println("\n" + strings.Repeat("", 70))
fmt.Println("\n" + strings.Repeat("-", 70))
fmt.Println("ERROR DETAILS:")
fmt.Println(strings.Repeat("", 70))
fmt.Println(strings.Repeat("-", 70))
fmt.Printf("\nType: %s\n", report.ErrorType)
fmt.Printf("Message: %s\n", report.ErrorMessage)
@@ -419,9 +420,9 @@ func (ec *ErrorCollector) PrintReport(report *RestoreErrorReport) {
// Show failure context
if report.FailureContext != nil && report.FailureContext.FailedLine > 0 {
fmt.Println("\n" + strings.Repeat("", 70))
fmt.Println("\n" + strings.Repeat("-", 70))
fmt.Println("FAILURE CONTEXT:")
fmt.Println(strings.Repeat("", 70))
fmt.Println(strings.Repeat("-", 70))
fmt.Printf("\nFailed at line: %d\n", report.FailureContext.FailedLine)
if report.FailureContext.InCopyBlock {
@@ -438,9 +439,9 @@ func (ec *ErrorCollector) PrintReport(report *RestoreErrorReport) {
// Show first few errors
if len(report.FirstErrors) > 0 {
fmt.Println("\n" + strings.Repeat("", 70))
fmt.Println("\n" + strings.Repeat("-", 70))
fmt.Println("FIRST ERRORS:")
fmt.Println(strings.Repeat("", 70))
fmt.Println(strings.Repeat("-", 70))
for i, err := range report.FirstErrors {
if i >= 5 {
@@ -453,15 +454,15 @@ func (ec *ErrorCollector) PrintReport(report *RestoreErrorReport) {
// Show diagnosis summary
if report.DiagnosisResult != nil && !report.DiagnosisResult.IsValid {
fmt.Println("\n" + strings.Repeat("", 70))
fmt.Println("\n" + strings.Repeat("-", 70))
fmt.Println("DIAGNOSIS:")
fmt.Println(strings.Repeat("", 70))
fmt.Println(strings.Repeat("-", 70))
if report.DiagnosisResult.IsTruncated {
fmt.Println(" File is TRUNCATED")
fmt.Println(" [FAIL] File is TRUNCATED")
}
if report.DiagnosisResult.IsCorrupted {
fmt.Println(" File is CORRUPTED")
fmt.Println(" [FAIL] File is CORRUPTED")
}
for i, err := range report.DiagnosisResult.Errors {
if i >= 3 {
@@ -472,18 +473,18 @@ func (ec *ErrorCollector) PrintReport(report *RestoreErrorReport) {
}
// Show recommendations
fmt.Println("\n" + strings.Repeat("", 70))
fmt.Println("💡 RECOMMENDATIONS:")
fmt.Println(strings.Repeat("", 70))
fmt.Println("\n" + strings.Repeat("-", 70))
fmt.Println("[HINT] RECOMMENDATIONS:")
fmt.Println(strings.Repeat("-", 70))
for _, rec := range report.Recommendations {
fmt.Printf(" %s\n", rec)
fmt.Printf(" - %s\n", rec)
}
// Show tool versions
fmt.Println("\n" + strings.Repeat("", 70))
fmt.Println("\n" + strings.Repeat("-", 70))
fmt.Println("ENVIRONMENT:")
fmt.Println(strings.Repeat("", 70))
fmt.Println(strings.Repeat("-", 70))
fmt.Printf(" OS: %s/%s\n", report.OS, report.Arch)
fmt.Printf(" Go: %s\n", report.GoVersion)
@@ -494,7 +495,7 @@ func (ec *ErrorCollector) PrintReport(report *RestoreErrorReport) {
fmt.Printf(" psql: %s\n", report.PsqlVersion)
}
fmt.Println(strings.Repeat("", 70))
fmt.Println(strings.Repeat("=", 70))
}
// Helper functions
@@ -556,7 +557,11 @@ func getDatabaseType(format ArchiveFormat) string {
}
func getCommandVersion(cmd string, arg string) string {
output, err := exec.Command(cmd, arg).CombinedOutput()
// Use timeout to prevent blocking if command hangs
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
output, err := exec.CommandContext(ctx, cmd, arg).CombinedOutput()
if err != nil {
return ""
}

View File

@@ -0,0 +1,429 @@
package restore
import (
"context"
"database/sql"
"fmt"
"os"
"os/exec"
"path/filepath"
"runtime"
"strconv"
"strings"
"time"
"github.com/dustin/go-humanize"
"github.com/shirou/gopsutil/v3/mem"
)
// PreflightResult contains all preflight check results
type PreflightResult struct {
// Linux system checks
Linux LinuxChecks
// PostgreSQL checks
PostgreSQL PostgreSQLChecks
// Archive analysis
Archive ArchiveChecks
// Overall status
CanProceed bool
Warnings []string
Errors []string
}
// LinuxChecks contains Linux kernel/system checks
type LinuxChecks struct {
ShmMax int64 // /proc/sys/kernel/shmmax
ShmAll int64 // /proc/sys/kernel/shmall
MemTotal uint64 // Total RAM in bytes
MemAvailable uint64 // Available RAM in bytes
MemUsedPercent float64 // Memory usage percentage
ShmMaxOK bool // Is shmmax sufficient?
ShmAllOK bool // Is shmall sufficient?
MemAvailableOK bool // Is available RAM sufficient?
IsLinux bool // Are we running on Linux?
}
// PostgreSQLChecks contains PostgreSQL configuration checks
type PostgreSQLChecks struct {
MaxLocksPerTransaction int // Current setting
MaintenanceWorkMem string // Current setting
SharedBuffers string // Current setting (info only)
MaxConnections int // Current setting
Version string // PostgreSQL version
IsSuperuser bool // Can we modify settings?
}
// ArchiveChecks contains analysis of the backup archive
type ArchiveChecks struct {
TotalDatabases int
TotalBlobCount int // Estimated total BLOBs across all databases
BlobsByDB map[string]int // BLOBs per database
HasLargeBlobs bool // Any DB with >1000 BLOBs?
RecommendedLockBoost int // Calculated lock boost value
}
// RunPreflightChecks performs all preflight checks before a cluster restore
func (e *Engine) RunPreflightChecks(ctx context.Context, dumpsDir string, entries []os.DirEntry) (*PreflightResult, error) {
result := &PreflightResult{
CanProceed: true,
Archive: ArchiveChecks{
BlobsByDB: make(map[string]int),
},
}
e.progress.Update("[PREFLIGHT] Running system checks...")
e.log.Info("Starting preflight checks for cluster restore")
// 1. System checks (cross-platform via gopsutil)
e.checkSystemResources(result)
// 2. PostgreSQL checks (via existing connection)
e.checkPostgreSQL(ctx, result)
// 3. Archive analysis (count BLOBs to scale lock boost)
e.analyzeArchive(ctx, dumpsDir, entries, result)
// 4. Calculate recommended settings
e.calculateRecommendations(result)
// 5. Print summary
e.printPreflightSummary(result)
return result, nil
}
// checkSystemResources uses gopsutil for cross-platform system checks
func (e *Engine) checkSystemResources(result *PreflightResult) {
result.Linux.IsLinux = runtime.GOOS == "linux"
// Get memory info (works on Linux, macOS, Windows, BSD)
if vmem, err := mem.VirtualMemory(); err == nil {
result.Linux.MemTotal = vmem.Total
result.Linux.MemAvailable = vmem.Available
result.Linux.MemUsedPercent = vmem.UsedPercent
// 4GB minimum available for large restores
result.Linux.MemAvailableOK = vmem.Available >= 4*1024*1024*1024
e.log.Info("System memory detected",
"total", humanize.Bytes(vmem.Total),
"available", humanize.Bytes(vmem.Available),
"used_percent", fmt.Sprintf("%.1f%%", vmem.UsedPercent))
} else {
e.log.Warn("Could not detect system memory", "error", err)
}
// Linux-specific kernel checks (shmmax, shmall)
if result.Linux.IsLinux {
e.checkLinuxKernel(result)
}
// Add warnings for insufficient resources
if !result.Linux.MemAvailableOK && result.Linux.MemAvailable > 0 {
result.Warnings = append(result.Warnings,
fmt.Sprintf("Available RAM is low: %s (recommend 4GB+ for large restores)",
humanize.Bytes(result.Linux.MemAvailable)))
}
if result.Linux.MemUsedPercent > 85 {
result.Warnings = append(result.Warnings,
fmt.Sprintf("High memory usage: %.1f%% - restore may cause OOM", result.Linux.MemUsedPercent))
}
}
// checkLinuxKernel reads Linux-specific kernel limits from /proc
func (e *Engine) checkLinuxKernel(result *PreflightResult) {
// Read shmmax
if data, err := os.ReadFile("/proc/sys/kernel/shmmax"); err == nil {
val, _ := strconv.ParseInt(strings.TrimSpace(string(data)), 10, 64)
result.Linux.ShmMax = val
// 8GB minimum for large restores
result.Linux.ShmMaxOK = val >= 8*1024*1024*1024
}
// Read shmall (in pages, typically 4KB each)
if data, err := os.ReadFile("/proc/sys/kernel/shmall"); err == nil {
val, _ := strconv.ParseInt(strings.TrimSpace(string(data)), 10, 64)
result.Linux.ShmAll = val
// 2M pages = 8GB minimum
result.Linux.ShmAllOK = val >= 2*1024*1024
}
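// Note: with the typical 4 KB page size, the 2*1024*1024-page threshold
// corresponds to 8 GB of shared memory, matching the shmmax check above.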
// Add kernel warnings
if !result.Linux.ShmMaxOK && result.Linux.ShmMax > 0 {
result.Warnings = append(result.Warnings,
fmt.Sprintf("Linux shmmax is low: %s (recommend 8GB+). Fix: sudo sysctl -w kernel.shmmax=17179869184",
humanize.Bytes(uint64(result.Linux.ShmMax))))
}
if !result.Linux.ShmAllOK && result.Linux.ShmAll > 0 {
result.Warnings = append(result.Warnings,
fmt.Sprintf("Linux shmall is low: %s pages (recommend 2M+). Fix: sudo sysctl -w kernel.shmall=4194304",
humanize.Comma(result.Linux.ShmAll)))
}
}
// checkPostgreSQL checks PostgreSQL configuration via SQL
func (e *Engine) checkPostgreSQL(ctx context.Context, result *PreflightResult) {
connStr := e.buildConnString()
db, err := sql.Open("pgx", connStr)
if err != nil {
e.log.Warn("Could not connect to PostgreSQL for preflight checks", "error", err)
return
}
defer db.Close()
// Check max_locks_per_transaction
var maxLocks string
if err := db.QueryRowContext(ctx, "SHOW max_locks_per_transaction").Scan(&maxLocks); err == nil {
result.PostgreSQL.MaxLocksPerTransaction, _ = strconv.Atoi(maxLocks)
}
// Check maintenance_work_mem
db.QueryRowContext(ctx, "SHOW maintenance_work_mem").Scan(&result.PostgreSQL.MaintenanceWorkMem)
// Check shared_buffers (info only, can't change without restart)
db.QueryRowContext(ctx, "SHOW shared_buffers").Scan(&result.PostgreSQL.SharedBuffers)
// Check max_connections
var maxConn string
if err := db.QueryRowContext(ctx, "SHOW max_connections").Scan(&maxConn); err == nil {
result.PostgreSQL.MaxConnections, _ = strconv.Atoi(maxConn)
}
// Check version
db.QueryRowContext(ctx, "SHOW server_version").Scan(&result.PostgreSQL.Version)
// Check if superuser
var isSuperuser bool
if err := db.QueryRowContext(ctx, "SELECT current_setting('is_superuser') = 'on'").Scan(&isSuperuser); err == nil {
result.PostgreSQL.IsSuperuser = isSuperuser
}
// Add info/warnings
if result.PostgreSQL.MaxLocksPerTransaction < 256 {
e.log.Info("PostgreSQL max_locks_per_transaction is low - will auto-boost",
"current", result.PostgreSQL.MaxLocksPerTransaction)
}
// Parse shared_buffers and warn if very low
sharedBuffersMB := parseMemoryToMB(result.PostgreSQL.SharedBuffers)
if sharedBuffersMB > 0 && sharedBuffersMB < 256 {
result.Warnings = append(result.Warnings,
fmt.Sprintf("PostgreSQL shared_buffers is low: %s (recommend 1GB+, requires restart)",
result.PostgreSQL.SharedBuffers))
}
}
// analyzeArchive counts BLOBs in dump files to calculate optimal lock boost
func (e *Engine) analyzeArchive(ctx context.Context, dumpsDir string, entries []os.DirEntry, result *PreflightResult) {
e.progress.Update("[PREFLIGHT] Analyzing archive for large objects...")
for _, entry := range entries {
if entry.IsDir() {
continue
}
result.Archive.TotalDatabases++
dumpFile := filepath.Join(dumpsDir, entry.Name())
dbName := strings.TrimSuffix(entry.Name(), ".dump")
dbName = strings.TrimSuffix(dbName, ".sql.gz")
// For custom format dumps, use pg_restore -l to count BLOBs
if strings.HasSuffix(entry.Name(), ".dump") {
blobCount := e.countBlobsInDump(ctx, dumpFile)
if blobCount > 0 {
result.Archive.BlobsByDB[dbName] = blobCount
result.Archive.TotalBlobCount += blobCount
if blobCount > 1000 {
result.Archive.HasLargeBlobs = true
}
}
}
// For SQL format, try to estimate from file content (sample check)
if strings.HasSuffix(entry.Name(), ".sql.gz") {
// Check for lo_create patterns in compressed SQL
blobCount := e.estimateBlobsInSQL(dumpFile)
if blobCount > 0 {
result.Archive.BlobsByDB[dbName] = blobCount
result.Archive.TotalBlobCount += blobCount
if blobCount > 1000 {
result.Archive.HasLargeBlobs = true
}
}
}
}
}
// countBlobsInDump uses pg_restore -l to count BLOB entries
func (e *Engine) countBlobsInDump(ctx context.Context, dumpFile string) int {
ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
defer cancel()
cmd := exec.CommandContext(ctx, "pg_restore", "-l", dumpFile)
output, err := cmd.Output()
if err != nil {
return 0
}
// Count lines containing BLOB/LARGE OBJECT
count := 0
for _, line := range strings.Split(string(output), "\n") {
if strings.Contains(line, "BLOB") || strings.Contains(line, "LARGE OBJECT") {
count++
}
}
return count
}
// estimateBlobsInSQL samples compressed SQL for lo_create patterns
func (e *Engine) estimateBlobsInSQL(sqlFile string) int {
// Use zgrep for efficient searching in gzipped files
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
// Count lo_create calls (each = one large object)
cmd := exec.CommandContext(ctx, "zgrep", "-c", "lo_create", sqlFile)
output, err := cmd.Output()
if err != nil {
// Also try SELECT lo_create pattern
cmd2 := exec.CommandContext(ctx, "zgrep", "-c", "SELECT.*lo_create", sqlFile)
output, err = cmd2.Output()
if err != nil {
return 0
}
}
count, _ := strconv.Atoi(strings.TrimSpace(string(output)))
return count
}
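// Equivalent shell check (sketch): zgrep -c lo_create dump.sql.gz
// grep -c exits non-zero when nothing matches, which is why a failed
// count retries with the SELECT.*lo_create pattern before giving up.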
// calculateRecommendations determines optimal settings based on analysis
func (e *Engine) calculateRecommendations(result *PreflightResult) {
// Base lock boost
lockBoost := 2048
// Scale up based on BLOB count
if result.Archive.TotalBlobCount > 5000 {
lockBoost = 4096
}
if result.Archive.TotalBlobCount > 10000 {
lockBoost = 8192
}
if result.Archive.TotalBlobCount > 50000 {
lockBoost = 16384
}
// Cap at reasonable maximum
if lockBoost > 16384 {
lockBoost = 16384
}
result.Archive.RecommendedLockBoost = lockBoost
// Log recommendation
e.log.Info("Calculated recommended lock boost",
"total_blobs", result.Archive.TotalBlobCount,
"recommended_locks", lockBoost)
}
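// Illustrative scaling: 3,000 total BLOBs keep the 2048 default,
// 12,000 scale to 8192, and 60,000 hit the 16384 cap.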
// printPreflightSummary prints a nice summary of all checks
func (e *Engine) printPreflightSummary(result *PreflightResult) {
fmt.Println()
fmt.Println(strings.Repeat("─", 60))
fmt.Println(" PREFLIGHT CHECKS")
fmt.Println(strings.Repeat("─", 60))
// System checks (cross-platform)
fmt.Println("\n System Resources:")
printCheck("Total RAM", humanize.Bytes(result.Linux.MemTotal), true)
printCheck("Available RAM", humanize.Bytes(result.Linux.MemAvailable), result.Linux.MemAvailableOK || result.Linux.MemAvailable == 0)
printCheck("Memory Usage", fmt.Sprintf("%.1f%%", result.Linux.MemUsedPercent), result.Linux.MemUsedPercent < 85)
// Linux-specific kernel checks
if result.Linux.IsLinux && result.Linux.ShmMax > 0 {
fmt.Println("\n Linux Kernel:")
printCheck("shmmax", humanize.Bytes(uint64(result.Linux.ShmMax)), result.Linux.ShmMaxOK)
printCheck("shmall", humanize.Comma(result.Linux.ShmAll)+" pages", result.Linux.ShmAllOK)
}
// PostgreSQL checks
fmt.Println("\n PostgreSQL:")
printCheck("Version", result.PostgreSQL.Version, true)
printCheck("max_locks_per_transaction", fmt.Sprintf("%s → %s (auto-boost)",
humanize.Comma(int64(result.PostgreSQL.MaxLocksPerTransaction)),
humanize.Comma(int64(result.Archive.RecommendedLockBoost))),
true)
printCheck("maintenance_work_mem", fmt.Sprintf("%s → 2GB (auto-boost)",
result.PostgreSQL.MaintenanceWorkMem), true)
printInfo("shared_buffers", result.PostgreSQL.SharedBuffers)
printCheck("Superuser", fmt.Sprintf("%v", result.PostgreSQL.IsSuperuser), result.PostgreSQL.IsSuperuser)
// Archive analysis
fmt.Println("\n Archive Analysis:")
printInfo("Total databases", humanize.Comma(int64(result.Archive.TotalDatabases)))
printInfo("Total BLOBs detected", humanize.Comma(int64(result.Archive.TotalBlobCount)))
if len(result.Archive.BlobsByDB) > 0 {
fmt.Println(" Databases with BLOBs:")
for db, count := range result.Archive.BlobsByDB {
status := "✓"
if count > 1000 {
status = "⚠" // BLOB-heavy database; drives the lock boost above
}
fmt.Printf(" %s %s: %s BLOBs\n", status, db, humanize.Comma(int64(count)))
}
}
// Warnings
if len(result.Warnings) > 0 {
fmt.Println("\n ⚠ Warnings:")
for _, w := range result.Warnings {
fmt.Printf(" • %s\n", w)
}
}
fmt.Println(strings.Repeat("─", 60))
fmt.Println()
}
func printCheck(name, value string, ok bool) {
status := "✓"
if !ok {
status = "⚠"
}
fmt.Printf(" %s %s: %s\n", status, name, value)
}
func printInfo(name, value string) {
fmt.Printf(" %s: %s\n", name, value)
}
// parseMemoryToMB converts a PostgreSQL memory setting such as "2GB",
// "512MB" or "64kB" to megabytes (integer math: sub-MB values round down,
// and bare numbers are treated as bytes)
func parseMemoryToMB(memStr string) int {
memStr = strings.ToUpper(strings.TrimSpace(memStr))
var value int
var unit string
fmt.Sscanf(memStr, "%d%s", &value, &unit)
switch {
case strings.HasPrefix(unit, "G"):
return value * 1024
case strings.HasPrefix(unit, "M"):
return value
case strings.HasPrefix(unit, "K"):
return value / 1024
default:
return value / (1024 * 1024) // Assume bytes
}
}
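Expected behaviour, worked out from the code above (illustrative values):

// parseMemoryToMB("2GB")     == 2048
// parseMemoryToMB("512MB")   == 512
// parseMemoryToMB("64kB")    == 0    // integer division rounds sub-MB values down
// parseMemoryToMB("1048576") == 1    // bare numbers are treated as bytes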
// buildConnString builds a libpq-style connection string targeting the
// postgres maintenance database; host/port are included only for remote
// servers, letting the driver default apply for localhost
func (e *Engine) buildConnString() string {
if e.cfg.Host == "localhost" || e.cfg.Host == "" {
return fmt.Sprintf("user=%s password=%s dbname=postgres sslmode=disable",
e.cfg.User, e.cfg.Password)
}
return fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=postgres sslmode=disable",
e.cfg.Host, e.cfg.Port, e.cfg.User, e.cfg.Password)
}
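A typical way such a string is consumed, sketched with database/sql and the lib/pq driver name for illustration (this diff does not show which driver the project actually registers, and error handling is schematic):

db, err := sql.Open("postgres", e.buildConnString())
if err != nil {
	return err
}
defer db.Close()
// Fail fast before issuing ALTER SYSTEM / preflight queries
if err := db.PingContext(ctx); err != nil {
	return fmt.Errorf("cannot reach maintenance database: %w", err)
}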

View File

@@ -229,8 +229,14 @@ func containsSQLKeywords(content string) bool {
 }
 // CheckDiskSpace verifies sufficient disk space for restore
+// Uses the effective work directory (WorkDir if set, otherwise BackupDir) since
+// that's where extraction actually happens for large databases
 func (s *Safety) CheckDiskSpace(archivePath string, multiplier float64) error {
-return s.CheckDiskSpaceAt(archivePath, s.cfg.BackupDir, multiplier)
+checkDir := s.cfg.GetEffectiveWorkDir()
+if checkDir == "" {
+checkDir = s.cfg.BackupDir
+}
+return s.CheckDiskSpaceAt(archivePath, checkDir, multiplier)
 }
 // CheckDiskSpaceAt verifies sufficient disk space at a specific directory
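CheckDiskSpaceAt itself is not shown in this hunk; on Linux, the free-space probe behind a check like this typically reduces to a statfs call. A minimal sketch using golang.org/x/sys/unix, assuming (not confirmed by the diff) that the real implementation does something similar:

// freeBytesAt reports the bytes available to unprivileged users at dir.
func freeBytesAt(dir string) (uint64, error) {
	var st unix.Statfs_t
	if err := unix.Statfs(dir, &st); err != nil {
		return 0, err
	}
	return st.Bavail * uint64(st.Bsize), nil
}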

View File

@@ -6,6 +6,7 @@ import (
"os/exec"
"regexp"
"strconv"
"time"
"dbbackup/internal/database"
)
@@ -47,8 +48,13 @@ func ParsePostgreSQLVersion(versionStr string) (*VersionInfo, error) {
 // GetDumpFileVersion extracts the PostgreSQL version from a dump file
 // Uses pg_restore -l to read the dump metadata
+// Uses a 30-second timeout to avoid blocking on large files
 func GetDumpFileVersion(dumpPath string) (*VersionInfo, error) {
-cmd := exec.Command("pg_restore", "-l", dumpPath)
+// Use a timeout context to prevent blocking on very large dump files
+ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+defer cancel()
+cmd := exec.CommandContext(ctx, "pg_restore", "-l", dumpPath)
 output, err := cmd.CombinedOutput()
 if err != nil {
 return nil, fmt.Errorf("failed to read dump file metadata: %w (output: %s)", err, string(output))
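With CommandContext, hitting the 30-second deadline kills pg_restore and the generic error above is all the caller sees. If a clearer message were wanted, the context can be consulted after the failure, along these lines (sketch, not in the diff):

if ctx.Err() == context.DeadlineExceeded {
	return nil, fmt.Errorf("pg_restore -l timed out after 30s reading %s", dumpPath)
}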

View File

@@ -25,7 +25,7 @@ func (pc *PrivilegeChecker) CheckAndWarn(allowRoot bool) error {
 isRoot, user := pc.isRunningAsRoot()
 if isRoot {
-pc.log.Warn("⚠️ Running with elevated privileges (root/Administrator)")
+pc.log.Warn("[WARN] Running with elevated privileges (root/Administrator)")
 pc.log.Warn("Security recommendation: Create a dedicated backup user with minimal privileges")
 if !allowRoot {

View File

@@ -64,7 +64,7 @@ func (rc *ResourceChecker) ValidateResourcesForBackup(estimatedSize int64) error
 if len(warnings) > 0 {
 for _, warning := range warnings {
-rc.log.Warn("⚠️ Resource constraint: " + warning)
+rc.log.Warn("[WARN] Resource constraint: " + warning)
 }
 rc.log.Info("Continuing backup operation (warnings are informational)")
 }

View File

@@ -22,7 +22,7 @@ func (rc *ResourceChecker) checkPlatformLimits() (*ResourceLimits, error) {
 rc.log.Debug("Resource limit: max open files", "limit", rLimit.Cur, "max", rLimit.Max)
 if rLimit.Cur < 1024 {
-rc.log.Warn("⚠️ Low file descriptor limit detected",
+rc.log.Warn("[WARN] Low file descriptor limit detected",
 "current", rLimit.Cur,
 "recommended", 4096,
 "hint", "Increase with: ulimit -n 4096")

View File

@@ -209,12 +209,12 @@ func (m ArchiveBrowserModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 // Validate selection based on mode
 if m.mode == "restore-cluster" && !selected.Format.IsClusterBackup() {
-m.message = errorStyle.Render(" Please select a cluster backup (.tar.gz)")
+m.message = errorStyle.Render("[FAIL] Please select a cluster backup (.tar.gz)")
 return m, nil
 }
 if m.mode == "restore-single" && selected.Format.IsClusterBackup() {
-m.message = errorStyle.Render(" Please select a single database backup")
+m.message = errorStyle.Render("[FAIL] Please select a single database backup")
 return m, nil
 }
@@ -227,7 +227,7 @@ func (m ArchiveBrowserModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 // Show detailed info
 if len(m.archives) > 0 && m.cursor < len(m.archives) {
 selected := m.archives[m.cursor]
-m.message = fmt.Sprintf("📦 %s | Format: %s | Size: %s | Modified: %s",
+m.message = fmt.Sprintf("[PKG] %s | Format: %s | Size: %s | Modified: %s",
 selected.Name,
 selected.Format.String(),
 formatSize(selected.Size),
@@ -251,13 +251,13 @@ func (m ArchiveBrowserModel) View() string {
 var s strings.Builder
 // Header
-title := "📦 Backup Archives"
+title := "[PKG] Backup Archives"
 if m.mode == "restore-single" {
-title = "📦 Select Archive to Restore (Single Database)"
+title = "[PKG] Select Archive to Restore (Single Database)"
 } else if m.mode == "restore-cluster" {
-title = "📦 Select Archive to Restore (Cluster)"
+title = "[PKG] Select Archive to Restore (Cluster)"
 } else if m.mode == "diagnose" {
-title = "🔍 Select Archive to Diagnose"
+title = "[SEARCH] Select Archive to Diagnose"
 }
 s.WriteString(titleStyle.Render(title))
@@ -269,7 +269,7 @@ func (m ArchiveBrowserModel) View() string {
 }
 if m.err != nil {
-s.WriteString(errorStyle.Render(fmt.Sprintf(" Error: %v", m.err)))
+s.WriteString(errorStyle.Render(fmt.Sprintf("[FAIL] Error: %v", m.err)))
 s.WriteString("\n\n")
 s.WriteString(infoStyle.Render("Press Esc to go back"))
 return s.String()
@@ -293,7 +293,7 @@ func (m ArchiveBrowserModel) View() string {
 s.WriteString(archiveHeaderStyle.Render(fmt.Sprintf("%-40s %-25s %-12s %-20s",
 "FILENAME", "FORMAT", "SIZE", "MODIFIED")))
 s.WriteString("\n")
-s.WriteString(strings.Repeat("", 100))
+s.WriteString(strings.Repeat("-", 100))
 s.WriteString("\n")
// Show archives (limit to visible area)
@@ -317,13 +317,13 @@ func (m ArchiveBrowserModel) View() string {
 }
 // Color code based on validity and age
-statusIcon := ""
+statusIcon := "[+]"
 if !archive.Valid {
-statusIcon = ""
+statusIcon = "[-]"
 style = archiveInvalidStyle
 } else if time.Since(archive.Modified) > 30*24*time.Hour {
 style = archiveOldStyle
-statusIcon = ""
+statusIcon = "[WARN]"
 }
 filename := truncate(archive.Name, 38)
@@ -351,7 +351,7 @@ func (m ArchiveBrowserModel) View() string {
s.WriteString(infoStyle.Render(fmt.Sprintf("Total: %d archive(s) | Selected: %d/%d",
len(m.archives), m.cursor+1, len(m.archives))))
s.WriteString("\n")
s.WriteString(infoStyle.Render("⌨️ ↑/↓: Navigate | Enter: Select | d: Diagnose | f: Filter | i: Info | Esc: Back"))
s.WriteString(infoStyle.Render("[KEY] ↑/↓: Navigate | Enter: Select | d: Diagnose | f: Filter | i: Info | Esc: Back"))
return s.String()
}

View File

@@ -20,12 +20,14 @@ type BackupExecutionModel struct {
 logger logger.Logger
 parent tea.Model
 ctx context.Context
+cancel context.CancelFunc // Cancel function to stop the operation
 backupType string
 databaseName string
 ratio int
 status string
 progress int
 done bool
+cancelling bool // True when user has requested cancellation
 err error
 result string
 startTime time.Time
@@ -34,11 +36,14 @@ type BackupExecutionModel struct {
 }
 func NewBackupExecution(cfg *config.Config, log logger.Logger, parent tea.Model, ctx context.Context, backupType, dbName string, ratio int) BackupExecutionModel {
+// Create a cancellable context derived from parent
+childCtx, cancel := context.WithCancel(ctx)
 return BackupExecutionModel{
 config: cfg,
 logger: log,
 parent: parent,
-ctx: ctx,
+ctx: childCtx,
+cancel: cancel,
 backupType: backupType,
 databaseName: dbName,
 ratio: ratio,
@@ -78,10 +83,10 @@ type backupCompleteMsg struct {
 func executeBackupWithTUIProgress(parentCtx context.Context, cfg *config.Config, log logger.Logger, backupType, dbName string, ratio int) tea.Cmd {
 return func() tea.Msg {
-// Use configurable cluster timeout (minutes) from config; default set in config.New()
-// Use parent context to inherit cancellation from TUI
-clusterTimeout := time.Duration(cfg.ClusterTimeoutMinutes) * time.Minute
-ctx, cancel := context.WithTimeout(parentCtx, clusterTimeout)
+// NO TIMEOUT for backup operations - a backup takes as long as it takes
+// Large databases can take many hours
+// Only manual cancellation (Ctrl+C) should stop the backup
+ctx, cancel := context.WithCancel(parentCtx)
 defer cancel()
 start := time.Now()
@@ -131,11 +136,11 @@ func executeBackupWithTUIProgress(parentCtx context.Context, cfg *config.Config,
 var result string
 switch backupType {
 case "single":
-result = fmt.Sprintf(" Single database backup of '%s' completed successfully in %v", dbName, elapsed)
+result = fmt.Sprintf("[+] Single database backup of '%s' completed successfully in %v", dbName, elapsed)
 case "sample":
-result = fmt.Sprintf(" Sample backup of '%s' (ratio: %d) completed successfully in %v", dbName, ratio, elapsed)
+result = fmt.Sprintf("[+] Sample backup of '%s' (ratio: %d) completed successfully in %v", dbName, ratio, elapsed)
 case "cluster":
-result = fmt.Sprintf(" Cluster backup completed successfully in %v", elapsed)
+result = fmt.Sprintf("[+] Cluster backup completed successfully in %v", elapsed)
 }
 return backupCompleteMsg{
@@ -195,9 +200,9 @@ func (m BackupExecutionModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 m.err = msg.err
 m.result = msg.result
 if m.err == nil {
-m.status = " Backup completed successfully!"
+m.status = "[OK] Backup completed successfully!"
 } else {
-m.status = fmt.Sprintf(" Backup failed: %v", m.err)
+m.status = fmt.Sprintf("[FAIL] Backup failed: %v", m.err)
 }
 // Auto-forward in debug/auto-confirm mode
 if m.config.TUIAutoConfirm {
@@ -206,9 +211,21 @@ func (m BackupExecutionModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 return m, nil
 case tea.KeyMsg:
-if m.done {
 switch msg.String() {
-case "enter", "esc", "q":
+case "ctrl+c", "esc":
+if !m.done && !m.cancelling {
+// User requested cancellation - cancel the context
+m.cancelling = true
+m.status = "[STOP] Cancelling backup... (please wait)"
+if m.cancel != nil {
+m.cancel()
+}
+return m, nil
+} else if m.done {
+return m.parent, nil
+}
+case "enter", "q":
+if m.done {
 return m.parent, nil
 }
 }
@@ -223,7 +240,7 @@ func (m BackupExecutionModel) View() string {
 // Clear screen with newlines and render header
 s.WriteString("\n\n")
-header := titleStyle.Render("🔄 Backup Execution")
+header := titleStyle.Render("[EXEC] Backup Execution")
 s.WriteString(header)
 s.WriteString("\n\n")
@@ -240,12 +257,17 @@ func (m BackupExecutionModel) View() string {
 // Status with spinner
 if !m.done {
+if m.cancelling {
+s.WriteString(fmt.Sprintf(" %s %s\n", spinnerFrames[m.spinnerFrame], m.status))
+} else {
 s.WriteString(fmt.Sprintf(" %s %s\n", spinnerFrames[m.spinnerFrame], m.status))
+s.WriteString("\n [KEY] Press Ctrl+C or ESC to cancel\n")
+}
 } else {
 s.WriteString(fmt.Sprintf(" %s\n\n", m.status))
 if m.err != nil {
-s.WriteString(fmt.Sprintf(" Error: %v\n", m.err))
+s.WriteString(fmt.Sprintf(" [FAIL] Error: %v\n", m.err))
 } else if m.result != "" {
 // Parse and display result cleanly
 lines := strings.Split(m.result, "\n")
@@ -256,7 +278,7 @@ func (m BackupExecutionModel) View() string {
 }
 }
 }
-s.WriteString("\n ⌨️ Press Enter or ESC to return to menu\n")
+s.WriteString("\n [KEY] Press Enter or ESC to return to menu\n")
 }
return s.String()
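Taken together, these hunks wire Ctrl+C to real cancellation: the model owns a context.CancelFunc, the key handler calls it, and any external command started with exec.CommandContext under that context is killed. The underlying pattern, reduced to a standalone sketch (the channel name is hypothetical, standing in for the TUI key event):

ctx, cancel := context.WithCancel(context.Background())
defer cancel()
go func() {
	<-userPressedCancel // e.g. a Ctrl+C key message arriving in Update()
	cancel()
}()
cmd := exec.CommandContext(ctx, "pg_dump", "--format=custom", "mydb")
if err := cmd.Run(); err != nil && ctx.Err() == context.Canceled {
	fmt.Println("backup cancelled by user")
}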

View File

@@ -11,6 +11,16 @@ import (
"dbbackup/internal/config"
"dbbackup/internal/logger"
"dbbackup/internal/restore"
)
// OperationState represents the current operation state
type OperationState int
const (
OpIdle OperationState = iota
OpVerifying
OpDeleting
)
// BackupManagerModel manages backup archives
@@ -26,6 +36,9 @@ type BackupManagerModel struct {
 message string
 totalSize int64
 freeSpace int64
+opState OperationState
+opTarget string // Name of archive being operated on
+spinnerFrame int
 }
// NewBackupManager creates a new backup manager
@@ -36,15 +49,64 @@ func NewBackupManager(cfg *config.Config, log logger.Logger, parent tea.Model, c
 parent: parent,
 ctx: ctx,
 loading: true,
+opState: OpIdle,
+spinnerFrame: 0,
 }
 }
 func (m BackupManagerModel) Init() tea.Cmd {
-return loadArchives(m.config, m.logger)
+return tea.Batch(loadArchives(m.config, m.logger), managerTickCmd())
 }
+// Tick for spinner animation
+type managerTickMsg time.Time
+func managerTickCmd() tea.Cmd {
+return tea.Tick(100*time.Millisecond, func(t time.Time) tea.Msg {
+return managerTickMsg(t)
+})
+}
+// Verify result message
+type verifyResultMsg struct {
+archive string
+valid bool
+err error
+details string
+}
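A note on the tick pattern above: tea.Tick schedules a single message after the delay, so the spinner keeps animating only because the managerTickMsg case below returns managerTickCmd() again, re-arming the timer on every frame.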
 func (m BackupManagerModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 switch msg := msg.(type) {
+case managerTickMsg:
+// Update spinner frame
+m.spinnerFrame = (m.spinnerFrame + 1) % len(spinnerFrames)
+return m, managerTickCmd()
+case verifyResultMsg:
+m.opState = OpIdle
+m.opTarget = ""
+if msg.err != nil {
+m.message = fmt.Sprintf("[-] Verify failed: %v", msg.err)
+} else if msg.valid {
+m.message = fmt.Sprintf("[+] %s: Valid - %s", msg.archive, msg.details)
+// Update archive validity in list
+for i := range m.archives {
+if m.archives[i].Name == msg.archive {
+m.archives[i].Valid = true
+break
+}
+}
+} else {
+m.message = fmt.Sprintf("[-] %s: Invalid - %s", msg.archive, msg.details)
+for i := range m.archives {
+if m.archives[i].Name == msg.archive {
+m.archives[i].Valid = false
+break
+}
+}
+}
+return m, nil
 case archiveListMsg:
 m.loading = false
 if msg.err != nil {
@@ -68,10 +130,24 @@ func (m BackupManagerModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 return m, nil
 case tea.KeyMsg:
-switch msg.String() {
-case "ctrl+c", "q", "esc":
+// Allow escape/cancel even during operations
+if msg.String() == "ctrl+c" || msg.String() == "esc" || msg.String() == "q" {
+if m.opState != OpIdle {
+// Cancel current operation
+m.opState = OpIdle
+m.opTarget = ""
+m.message = "Operation cancelled"
+return m, nil
+}
 return m.parent, nil
+}
+// Block other input during operations
+if m.opState != OpIdle {
+return m, nil
+}
+switch msg.String() {
 case "up", "k":
 if m.cursor > 0 {
 m.cursor--
@@ -83,11 +159,13 @@ func (m BackupManagerModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 }
 case "v":
-// Verify archive
+// Verify archive with real verification
 if len(m.archives) > 0 && m.cursor < len(m.archives) {
 selected := m.archives[m.cursor]
-m.message = fmt.Sprintf("🔍 Verifying %s...", selected.Name)
-// In real implementation, would run verification
+m.opState = OpVerifying
+m.opTarget = selected.Name
+m.message = ""
+return m, verifyArchiveCmd(selected)
 }
 case "d":
@@ -96,16 +174,16 @@ func (m BackupManagerModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 selected := m.archives[m.cursor]
 archivePath := selected.Path
 confirm := NewConfirmationModelWithAction(m.config, m.logger, m,
-"🗑️ Delete Archive",
+"[DELETE] Delete Archive",
 fmt.Sprintf("Delete archive '%s'? This cannot be undone.", selected.Name),
 func() (tea.Model, tea.Cmd) {
 // Delete the archive
 err := deleteArchive(archivePath)
 if err != nil {
 m.err = fmt.Errorf("failed to delete archive: %v", err)
-m.message = fmt.Sprintf(" Failed to delete: %v", err)
+m.message = fmt.Sprintf("[FAIL] Failed to delete: %v", err)
 } else {
-m.message = fmt.Sprintf(" Deleted: %s", selected.Name)
+m.message = fmt.Sprintf("[OK] Deleted: %s", selected.Name)
 }
 // Refresh the archive list
 m.loading = true
@@ -118,7 +196,7 @@ func (m BackupManagerModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 // Show info
 if len(m.archives) > 0 && m.cursor < len(m.archives) {
 selected := m.archives[m.cursor]
-m.message = fmt.Sprintf("📦 %s | %s | %s | Modified: %s",
+m.message = fmt.Sprintf("[PKG] %s | %s | %s | Modified: %s",
 selected.Name,
 selected.Format.String(),
 formatSize(selected.Size),
@@ -152,39 +230,67 @@ func (m BackupManagerModel) View() string {
 var s strings.Builder
 // Title
-s.WriteString(titleStyle.Render("🗄️ Backup Archive Manager"))
+s.WriteString(TitleStyle.Render("[DB] Backup Archive Manager"))
 s.WriteString("\n\n")
+// Status line (no box, bold+color accents)
+switch m.opState {
+case OpVerifying:
+spinner := spinnerFrames[m.spinnerFrame]
+s.WriteString(StatusActiveStyle.Render(fmt.Sprintf("%s Verifying: %s", spinner, m.opTarget)))
+s.WriteString("\n\n")
+case OpDeleting:
+spinner := spinnerFrames[m.spinnerFrame]
+s.WriteString(StatusActiveStyle.Render(fmt.Sprintf("%s Deleting: %s", spinner, m.opTarget)))
+s.WriteString("\n\n")
+default:
+if m.loading {
+spinner := spinnerFrames[m.spinnerFrame]
+s.WriteString(StatusActiveStyle.Render(fmt.Sprintf("%s Loading archives...", spinner)))
+s.WriteString("\n\n")
+} else if m.message != "" {
+// Color based on message content
+if strings.HasPrefix(m.message, "[+]") || strings.HasPrefix(m.message, "Valid") {
+s.WriteString(StatusSuccessStyle.Render(m.message))
+} else if strings.HasPrefix(m.message, "[-]") || strings.HasPrefix(m.message, "Error") {
+s.WriteString(StatusErrorStyle.Render(m.message))
+} else {
+s.WriteString(StatusActiveStyle.Render(m.message))
+}
+s.WriteString("\n\n")
+}
+// No "Ready" message when idle - cleaner UI
+}
-if m.loading {
-s.WriteString(infoStyle.Render("Loading archives..."))
-return s.String()
-}
 if m.err != nil {
-s.WriteString(errorStyle.Render(fmt.Sprintf(" Error: %v", m.err)))
+s.WriteString(StatusErrorStyle.Render(fmt.Sprintf("[FAIL] Error: %v", m.err)))
 s.WriteString("\n\n")
-s.WriteString(infoStyle.Render("Press Esc to go back"))
+s.WriteString(ShortcutStyle.Render("Press Esc to go back"))
 return s.String()
 }
 // Summary
-s.WriteString(infoStyle.Render(fmt.Sprintf("Total Archives: %d | Total Size: %s",
+s.WriteString(LabelStyle.Render(fmt.Sprintf("Total Archives: %d | Total Size: %s",
 len(m.archives), formatSize(m.totalSize))))
 s.WriteString("\n\n")
 // Archives list
 if len(m.archives) == 0 {
-s.WriteString(infoStyle.Render("No backup archives found"))
+s.WriteString(StatusReadyStyle.Render("No backup archives found"))
 s.WriteString("\n\n")
-s.WriteString(infoStyle.Render("Press Esc to go back"))
+s.WriteString(ShortcutStyle.Render("Press Esc to go back"))
 return s.String()
 }
-// Column headers
-s.WriteString(archiveHeaderStyle.Render(fmt.Sprintf("%-35s %-25s %-12s %-20s",
+// Column headers with better alignment
+s.WriteString(ListHeaderStyle.Render(fmt.Sprintf(" %-32s %-22s %10s %-16s",
 "FILENAME", "FORMAT", "SIZE", "MODIFIED")))
 s.WriteString("\n")
-s.WriteString(strings.Repeat("", 95))
+s.WriteString(strings.Repeat("-", 90))
 s.WriteString("\n")
// Show archives (limit to visible area)
@@ -200,26 +306,26 @@ func (m BackupManagerModel) View() string {
 for i := start; i < end; i++ {
 archive := m.archives[i]
 cursor := " "
-style := archiveNormalStyle
+style := ListNormalStyle
 if i == m.cursor {
 cursor = "> "
-style = archiveSelectedStyle
+style = ListSelectedStyle
 }
-// Status icon
-statusIcon := ""
+// Status icon - consistent 4-char width
+statusIcon := " [+]"
 if !archive.Valid {
-statusIcon = ""
-style = archiveInvalidStyle
+statusIcon = " [-]"
+style = ItemInvalidStyle
 } else if time.Since(archive.Modified) > 30*24*time.Hour {
-statusIcon = ""
+statusIcon = " [!]"
 }
-filename := truncate(archive.Name, 33)
-format := truncate(archive.Format.String(), 23)
+filename := truncate(archive.Name, 32)
+format := truncate(archive.Format.String(), 22)
-line := fmt.Sprintf("%s %s %-33s %-23s %-10s %-19s",
+line := fmt.Sprintf("%s%s %-32s %-22s %10s %-16s",
 cursor,
 statusIcon,
 filename,
@@ -233,18 +339,83 @@ func (m BackupManagerModel) View() string {
 // Footer
 s.WriteString("\n")
-if m.message != "" {
-s.WriteString(infoStyle.Render(m.message))
-s.WriteString("\n")
-}
-s.WriteString(infoStyle.Render(fmt.Sprintf("Selected: %d/%d", m.cursor+1, len(m.archives))))
-s.WriteString("\n")
-s.WriteString(infoStyle.Render("⌨️ ↑/↓: Navigate | r: Restore | v: Verify | d: Delete | i: Info | R: Refresh | Esc: Back"))
+s.WriteString(StatusReadyStyle.Render(fmt.Sprintf("Selected: %d/%d", m.cursor+1, len(m.archives))))
+s.WriteString("\n\n")
+// Grouped keyboard shortcuts
+s.WriteString(ShortcutStyle.Render("SHORTCUTS: Up/Down=Move | r=Restore | v=Verify | d=Delete | i=Info | R=Refresh | Esc=Back | q=Quit"))
 return s.String()
 }
+// verifyArchiveCmd runs the SAME verification as restore safety checks
+// This ensures consistency between backup manager verify and restore preview
+func verifyArchiveCmd(archive ArchiveInfo) tea.Cmd {
+return func() tea.Msg {
+var issues []string
+// 1. Run the same archive integrity check as restore
+safety := restore.NewSafety(nil, nil) // Doesn't need config/log for validation
+if err := safety.ValidateArchive(archive.Path); err != nil {
+return verifyResultMsg{
+archive: archive.Name,
+valid: false,
+err: nil,
+details: fmt.Sprintf("Archive integrity: %v", err),
+}
+}
+// 2. Run the same deep diagnosis as restore
+diagnoser := restore.NewDiagnoser(nil, false)
+diagResult, diagErr := diagnoser.DiagnoseFile(archive.Path)
+if diagErr != nil {
+return verifyResultMsg{
+archive: archive.Name,
+valid: false,
+err: diagErr,
+details: "Cannot diagnose archive",
+}
+}
+if !diagResult.IsValid {
+// Collect error details
+if diagResult.IsTruncated {
+issues = append(issues, "TRUNCATED")
+}
+if diagResult.IsCorrupted {
+issues = append(issues, "CORRUPTED")
+}
+if len(diagResult.Errors) > 0 {
+issues = append(issues, diagResult.Errors[0])
+}
+return verifyResultMsg{
+archive: archive.Name,
+valid: false,
+err: nil,
+details: strings.Join(issues, "; "),
+}
+}
+// Build success details
+details := "Verified"
+if diagResult.Details != nil {
+if diagResult.Details.TableCount > 0 {
+details = fmt.Sprintf("%d databases in archive", diagResult.Details.TableCount)
+} else if diagResult.Details.PgRestoreListable {
+details = "pg_restore verified"
+}
+}
+// Add any warnings
+if len(diagResult.Warnings) > 0 {
+details += fmt.Sprintf(" [%d warnings]", len(diagResult.Warnings))
+}
+return verifyResultMsg{archive: archive.Name, valid: true, err: nil, details: details}
+}
+}
+// deleteArchive deletes a backup archive (to be called from confirmation)
+func deleteArchive(archivePath string) error {
+return os.Remove(archivePath)

View File

@@ -67,7 +67,6 @@ func (m ConfirmationModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 switch msg := msg.(type) {
 case autoConfirmMsg:
 // Auto-confirm triggered
-m.confirmed = true
 if m.onConfirm != nil {
 return m.onConfirm()
 }
@@ -95,7 +94,6 @@ func (m ConfirmationModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
case "enter", "y":
if msg.String() == "y" || m.cursor == 0 {
m.confirmed = true
// Execute the onConfirm callback if provided
if m.onConfirm != nil {
return m.onConfirm()
@@ -131,7 +129,7 @@ func (m ConfirmationModel) View() string {
s.WriteString(" ")
}
s.WriteString("\n\n⌨️ ←/→: Select Enter/y: Confirm n/ESC: Cancel\n")
s.WriteString("\n\n[KEYS] <-/->: Select | Enter/y: Confirm | n/ESC: Cancel\n")
return s.String()
}

View File

@@ -53,7 +53,8 @@ type databaseListMsg struct {
 func fetchDatabases(cfg *config.Config, log logger.Logger) tea.Cmd {
 return func() tea.Msg {
-ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
+// 60 seconds for database listing - busy servers may be slow
+ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
 defer cancel()
 dbClient, err := database.New(cfg, log)
@@ -108,7 +109,7 @@ func (m DatabaseSelectorModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 return executor, executor.Init()
 }
 inputModel := NewInputModel(m.config, m.logger, m,
-"📊 Sample Ratio",
+"[STATS] Sample Ratio",
 "Enter sample ratio (1-100):",
 "10",
 ValidateInt(1, 100))
@@ -151,7 +152,7 @@ func (m DatabaseSelectorModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 // If sample backup, ask for ratio first
 if m.backupType == "sample" {
 inputModel := NewInputModel(m.config, m.logger, m,
-"📊 Sample Ratio",
+"[STATS] Sample Ratio",
 "Enter sample ratio (1-100):",
 "10",
 ValidateInt(1, 100))
@@ -175,12 +176,12 @@ func (m DatabaseSelectorModel) View() string {
s.WriteString(fmt.Sprintf("\n%s\n\n", header))
if m.loading {
s.WriteString(" Loading databases...\n")
s.WriteString("[WAIT] Loading databases...\n")
return s.String()
}
if m.err != nil {
s.WriteString(fmt.Sprintf(" Error: %v\n", m.err))
s.WriteString(fmt.Sprintf("[FAIL] Error: %v\n", m.err))
s.WriteString("\nPress ESC to go back\n")
return s.String()
}
@@ -202,7 +203,7 @@ func (m DatabaseSelectorModel) View() string {
s.WriteString(fmt.Sprintf("\n%s\n", m.message))
}
s.WriteString("\n⌨️ ↑/↓: Navigate Enter: Select ESC: Back q: Quit\n")
s.WriteString("\n[KEYS] Up/Down: Navigate | Enter: Select | ESC: Back | q: Quit\n")
return s.String()
}

View File

@@ -160,7 +160,7 @@ func (m DiagnoseViewModel) View() string {
 var s strings.Builder
 // Header
-s.WriteString(titleStyle.Render("🔍 Backup Diagnosis"))
+s.WriteString(titleStyle.Render("[SEARCH] Backup Diagnosis"))
 s.WriteString("\n\n")
 // Archive info
@@ -175,14 +175,14 @@ func (m DiagnoseViewModel) View() string {
s.WriteString("\n\n")
if m.running {
s.WriteString(infoStyle.Render(" " + m.progress))
s.WriteString(infoStyle.Render("[WAIT] " + m.progress))
s.WriteString("\n\n")
s.WriteString(diagnoseInfoStyle.Render("This may take a while for large archives..."))
return s.String()
}
if m.err != nil {
s.WriteString(errorStyle.Render(fmt.Sprintf(" Diagnosis failed: %v", m.err)))
s.WriteString(errorStyle.Render(fmt.Sprintf("[FAIL] Diagnosis failed: %v", m.err)))
s.WriteString("\n\n")
s.WriteString(infoStyle.Render("Press Enter or Esc to go back"))
return s.String()
@@ -204,124 +204,132 @@ func (m DiagnoseViewModel) View() string {
 func (m DiagnoseViewModel) renderSingleResult(result *restore.DiagnoseResult) string {
 var s strings.Builder
-// Status
-s.WriteString(strings.Repeat("", 60))
-s.WriteString("\n")
+// Status Box
+s.WriteString("+--[ VALIDATION STATUS ]" + strings.Repeat("-", 37) + "+\n")
 if result.IsValid {
-s.WriteString(diagnosePassStyle.Render("✅ STATUS: VALID"))
+s.WriteString("| " + diagnosePassStyle.Render("[OK] VALID - Archive passed all checks") + strings.Repeat(" ", 18) + "|\n")
 } else {
-s.WriteString(diagnoseFailStyle.Render("❌ STATUS: INVALID"))
+s.WriteString("| " + diagnoseFailStyle.Render("[FAIL] INVALID - Archive has problems") + strings.Repeat(" ", 19) + "|\n")
 }
-s.WriteString("\n")
 if result.IsTruncated {
-s.WriteString(diagnoseFailStyle.Render("⚠️ TRUNCATED: File appears incomplete"))
-s.WriteString("\n")
+s.WriteString("| " + diagnoseFailStyle.Render("[!] TRUNCATED - File is incomplete") + strings.Repeat(" ", 22) + "|\n")
 }
 if result.IsCorrupted {
-s.WriteString(diagnoseFailStyle.Render("⚠️ CORRUPTED: File structure is damaged"))
-s.WriteString("\n")
+s.WriteString("| " + diagnoseFailStyle.Render("[!] CORRUPTED - File structure damaged") + strings.Repeat(" ", 18) + "|\n")
 }
-s.WriteString(strings.Repeat("", 60))
-s.WriteString("\n\n")
+s.WriteString("+" + strings.Repeat("-", 60) + "+\n\n")
-// Details
+// Details Box
 if result.Details != nil {
-s.WriteString(diagnoseHeaderStyle.Render("📊 DETAILS:"))
-s.WriteString("\n")
+s.WriteString("+--[ DETAILS ]" + strings.Repeat("-", 46) + "+\n")
 if result.Details.HasPGDMPSignature {
-s.WriteString(diagnosePassStyle.Render(" ✓ "))
-s.WriteString("Has PGDMP signature (custom format)\n")
+s.WriteString("| " + diagnosePassStyle.Render("[+]") + " PostgreSQL custom format (PGDMP)" + strings.Repeat(" ", 20) + "|\n")
 }
 if result.Details.HasSQLHeader {
-s.WriteString(diagnosePassStyle.Render(" ✓ "))
-s.WriteString("Has PostgreSQL SQL header\n")
+s.WriteString("| " + diagnosePassStyle.Render("[+]") + " PostgreSQL SQL header found" + strings.Repeat(" ", 25) + "|\n")
 }
 if result.Details.GzipValid {
-s.WriteString(diagnosePassStyle.Render(" ✓ "))
-s.WriteString("Gzip compression valid\n")
+s.WriteString("| " + diagnosePassStyle.Render("[+]") + " Gzip compression valid" + strings.Repeat(" ", 30) + "|\n")
 }
 if result.Details.PgRestoreListable {
-s.WriteString(diagnosePassStyle.Render(" ✓ "))
-s.WriteString(fmt.Sprintf("pg_restore can list contents (%d tables)\n", result.Details.TableCount))
+tableInfo := fmt.Sprintf(" (%d tables)", result.Details.TableCount)
+padding := 36 - len(tableInfo)
+if padding < 0 {
+padding = 0
+}
+s.WriteString("| " + diagnosePassStyle.Render("[+]") + " pg_restore can list contents" + tableInfo + strings.Repeat(" ", padding) + "|\n")
 }
 if result.Details.CopyBlockCount > 0 {
-s.WriteString(diagnoseInfoStyle.Render(" • "))
-s.WriteString(fmt.Sprintf("Contains %d COPY blocks\n", result.Details.CopyBlockCount))
+blockInfo := fmt.Sprintf("%d COPY blocks found", result.Details.CopyBlockCount)
+padding := 50 - len(blockInfo)
+if padding < 0 {
+padding = 0
+}
+s.WriteString("| [-] " + blockInfo + strings.Repeat(" ", padding) + "|\n")
 }
 if result.Details.UnterminatedCopy {
-s.WriteString(diagnoseFailStyle.Render(" ✗ "))
-s.WriteString(fmt.Sprintf("Unterminated COPY block: %s (line %d)\n",
-result.Details.LastCopyTable, result.Details.LastCopyLineNumber))
+s.WriteString("| " + diagnoseFailStyle.Render("[-]") + " Unterminated COPY: " + truncate(result.Details.LastCopyTable, 30) + strings.Repeat(" ", 5) + "|\n")
 }
 if result.Details.ProperlyTerminated {
-s.WriteString(diagnosePassStyle.Render(" ✓ "))
-s.WriteString("All COPY blocks properly terminated\n")
+s.WriteString("| " + diagnosePassStyle.Render("[+]") + " All COPY blocks properly terminated" + strings.Repeat(" ", 17) + "|\n")
 }
 if result.Details.ExpandedSize > 0 {
-s.WriteString(diagnoseInfoStyle.Render(" • "))
-s.WriteString(fmt.Sprintf("Expanded size: %s (ratio: %.1fx)\n",
-formatSize(result.Details.ExpandedSize), result.Details.CompressionRatio))
+sizeInfo := fmt.Sprintf("Expanded: %s (%.1fx)", formatSize(result.Details.ExpandedSize), result.Details.CompressionRatio)
+padding := 50 - len(sizeInfo)
+if padding < 0 {
+padding = 0
+}
+s.WriteString("| [-] " + sizeInfo + strings.Repeat(" ", padding) + "|\n")
 }
-// Errors
+s.WriteString("+" + strings.Repeat("-", 60) + "+\n")
 }
+// Errors Box
 if len(result.Errors) > 0 {
-s.WriteString("\n")
-s.WriteString(diagnoseFailStyle.Render("❌ ERRORS:"))
-s.WriteString("\n")
+s.WriteString("\n+--[ ERRORS ]" + strings.Repeat("-", 47) + "+\n")
 for i, e := range result.Errors {
 if i >= 5 {
-s.WriteString(diagnoseInfoStyle.Render(fmt.Sprintf(" ... and %d more\n", len(result.Errors)-5)))
+remaining := fmt.Sprintf("... and %d more errors", len(result.Errors)-5)
+padding := 56 - len(remaining)
+s.WriteString("| " + remaining + strings.Repeat(" ", padding) + "|\n")
 break
 }
-s.WriteString(diagnoseFailStyle.Render(" • "))
-s.WriteString(truncate(e, 70))
-s.WriteString("\n")
+errText := truncate(e, 54)
+padding := 56 - len(errText)
+if padding < 0 {
+padding = 0
+}
+s.WriteString("| " + errText + strings.Repeat(" ", padding) + "|\n")
 }
+s.WriteString("+" + strings.Repeat("-", 60) + "+\n")
 }
-// Warnings
+// Warnings Box
 if len(result.Warnings) > 0 {
-s.WriteString("\n")
-s.WriteString(diagnoseWarnStyle.Render("⚠️ WARNINGS:"))
-s.WriteString("\n")
+s.WriteString("\n+--[ WARNINGS ]" + strings.Repeat("-", 45) + "+\n")
 for i, w := range result.Warnings {
 if i >= 3 {
-s.WriteString(diagnoseInfoStyle.Render(fmt.Sprintf(" ... and %d more\n", len(result.Warnings)-3)))
+remaining := fmt.Sprintf("... and %d more warnings", len(result.Warnings)-3)
+padding := 56 - len(remaining)
+s.WriteString("| " + remaining + strings.Repeat(" ", padding) + "|\n")
 break
 }
-s.WriteString(diagnoseWarnStyle.Render(" • "))
-s.WriteString(truncate(w, 70))
-s.WriteString("\n")
+warnText := truncate(w, 54)
+padding := 56 - len(warnText)
+if padding < 0 {
+padding = 0
+}
+s.WriteString("| " + warnText + strings.Repeat(" ", padding) + "|\n")
 }
+s.WriteString("+" + strings.Repeat("-", 60) + "+\n")
 }
-// Recommendations
+// Recommendations Box
 if !result.IsValid {
-s.WriteString("\n")
-s.WriteString(diagnoseHeaderStyle.Render("💡 RECOMMENDATIONS:"))
-s.WriteString("\n")
+s.WriteString("\n+--[ RECOMMENDATIONS ]" + strings.Repeat("-", 38) + "+\n")
 if result.IsTruncated {
-s.WriteString(" 1. Re-run the backup process for this database\n")
-s.WriteString(" 2. Check disk space on backup server\n")
-s.WriteString(" 3. Verify network stability for remote backups\n")
+s.WriteString("| 1. Re-run backup with current version (v3.42.12+)          |\n")
+s.WriteString("| 2. Check disk space on backup server                       |\n")
+s.WriteString("| 3. Verify network stability for remote backups             |\n")
 }
 if result.IsCorrupted {
-s.WriteString(" 1. Verify backup was transferred completely\n")
-s.WriteString(" 2. Try restoring from a previous backup\n")
+s.WriteString("| 1. Verify backup was transferred completely                |\n")
+s.WriteString("| 2. Try restoring from a previous backup                    |\n")
 }
+s.WriteString("+" + strings.Repeat("-", 60) + "+\n")
 }
 return s.String()
@@ -341,17 +349,17 @@ func (m DiagnoseViewModel) renderClusterResults() string {
 }
 }
-s.WriteString(strings.Repeat("", 60))
+s.WriteString(strings.Repeat("-", 60))
 s.WriteString("\n")
-s.WriteString(diagnoseHeaderStyle.Render(fmt.Sprintf("📊 CLUSTER SUMMARY: %d databases\n", len(m.results))))
-s.WriteString(strings.Repeat("", 60))
+s.WriteString(diagnoseHeaderStyle.Render(fmt.Sprintf("[STATS] CLUSTER SUMMARY: %d databases\n", len(m.results))))
+s.WriteString(strings.Repeat("-", 60))
 s.WriteString("\n\n")
 if invalidCount == 0 {
-s.WriteString(diagnosePassStyle.Render(" All dumps are valid"))
+s.WriteString(diagnosePassStyle.Render("[OK] All dumps are valid"))
 s.WriteString("\n\n")
 } else {
-s.WriteString(diagnoseFailStyle.Render(fmt.Sprintf(" %d/%d dumps have issues", invalidCount, len(m.results))))
+s.WriteString(diagnoseFailStyle.Render(fmt.Sprintf("[FAIL] %d/%d dumps have issues", invalidCount, len(m.results))))
 s.WriteString("\n\n")
 }
@@ -378,13 +386,13 @@ func (m DiagnoseViewModel) renderClusterResults() string {
 var status string
 if r.IsValid {
-status = diagnosePassStyle.Render("")
+status = diagnosePassStyle.Render("[+]")
 } else if r.IsTruncated {
-status = diagnoseFailStyle.Render(" TRUNCATED")
+status = diagnoseFailStyle.Render("[-] TRUNCATED")
 } else if r.IsCorrupted {
-status = diagnoseFailStyle.Render(" CORRUPTED")
+status = diagnoseFailStyle.Render("[-] CORRUPTED")
 } else {
-status = diagnoseFailStyle.Render(" INVALID")
+status = diagnoseFailStyle.Render("[-] INVALID")
 }
 line := fmt.Sprintf("%s %s %-35s %s",
@@ -405,7 +413,7 @@ func (m DiagnoseViewModel) renderClusterResults() string {
 if m.cursor < len(m.results) {
 selected := m.results[m.cursor]
 s.WriteString("\n")
-s.WriteString(strings.Repeat("", 60))
+s.WriteString(strings.Repeat("-", 60))
 s.WriteString("\n")
 s.WriteString(diagnoseHeaderStyle.Render("Selected: " + selected.FileName))
 s.WriteString("\n\n")
@@ -413,7 +421,7 @@ func (m DiagnoseViewModel) renderClusterResults() string {
 // Show condensed details for selected
 if selected.Details != nil {
 if selected.Details.UnterminatedCopy {
-s.WriteString(diagnoseFailStyle.Render(" Unterminated COPY: "))
+s.WriteString(diagnoseFailStyle.Render(" [-] Unterminated COPY: "))
 s.WriteString(selected.Details.LastCopyTable)
 s.WriteString(fmt.Sprintf(" (line %d)\n", selected.Details.LastCopyLineNumber))
 }
@@ -429,7 +437,7 @@ func (m DiagnoseViewModel) renderClusterResults() string {
 if i >= 2 {
 break
 }
-s.WriteString(diagnoseFailStyle.Render(" "))
+s.WriteString(diagnoseFailStyle.Render(" - "))
 s.WriteString(truncate(e, 55))
 s.WriteString("\n")
 }

View File

@@ -208,7 +208,7 @@ func (dp *DirectoryPicker) View() string {
 if dp.allowFiles {
 pickerType = "File/Directory"
 }
-header := fmt.Sprintf("📁 %s Picker - %s", pickerType, dp.currentPath)
+header := fmt.Sprintf("[DIR] %s Picker - %s", pickerType, dp.currentPath)
 content.WriteString(dp.styles.Header.Render(header))
 content.WriteString("\n\n")
@@ -216,13 +216,13 @@ func (dp *DirectoryPicker) View() string {
 for i, item := range dp.items {
 var prefix string
 if item.Name == ".." {
-prefix = "⬆️ "
+prefix = "[UP] "
 } else if item.Name == "Error reading directory" {
-prefix = " "
+prefix = "[X] "
 } else if item.IsDir {
-prefix = "📁 "
+prefix = "[DIR] "
 } else {
-prefix = "📄 "
+prefix = "[FILE] "
 }
 line := prefix + item.Name
@@ -235,9 +235,9 @@ func (dp *DirectoryPicker) View() string {
 }
 // Help text
-help := "\n↑/↓: Navigate Enter: Open/Select File s: Select Directory q/Esc: Cancel"
+help := "\nUp/Down: Navigate | Enter: Open/Select File | s: Select Directory | q/Esc: Cancel"
 if !dp.allowFiles {
-help = "\n↑/↓: Navigate Enter: Open s: Select Directory q/Esc: Cancel"
+help = "\nUp/Down: Navigate | Enter: Open | s: Select Directory | q/Esc: Cancel"
 }
 content.WriteString(dp.styles.Help.Render(help))

View File

@@ -2,7 +2,7 @@ package tui
 import (
 "fmt"
-"io/ioutil"
+"os"
 "strings"
 "time"
@@ -59,7 +59,7 @@ func loadHistory(cfg *config.Config) []HistoryEntry {
 var entries []HistoryEntry
 // Read backup files from backup directory
-files, err := ioutil.ReadDir(cfg.BackupDir)
+files, err := os.ReadDir(cfg.BackupDir)
 if err != nil {
 return entries
 }
@@ -74,6 +74,12 @@ func loadHistory(cfg *config.Config) []HistoryEntry {
 continue
 }
+// Get file info for ModTime
+info, err := file.Info()
+if err != nil {
+continue
+}
 var backupType string
 var database string
@@ -97,8 +103,8 @@ func loadHistory(cfg *config.Config) []HistoryEntry {
 entries = append(entries, HistoryEntry{
 Type: backupType,
 Database: database,
-Timestamp: file.ModTime(),
-Status: " Completed",
+Timestamp: info.ModTime(),
+Status: "[OK] Completed",
 Filename: name,
 })
 }
@@ -185,11 +191,11 @@ func (m HistoryViewModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 func (m HistoryViewModel) View() string {
 var s strings.Builder
-header := titleStyle.Render("📜 Operation History")
+header := titleStyle.Render("[HISTORY] Operation History")
 s.WriteString(fmt.Sprintf("\n%s\n\n", header))
 if len(m.history) == 0 {
-s.WriteString("📭 No backup history found\n\n")
+s.WriteString("[EMPTY] No backup history found\n\n")
 } else {
 maxVisible := 15 // Show max 15 items at once
@@ -205,7 +211,7 @@ func (m HistoryViewModel) View() string {
 // Show scroll indicators
 if start > 0 {
-s.WriteString(" More entries above...\n")
+s.WriteString(" [^] More entries above...\n")
 }
 // Display only visible entries
@@ -227,13 +233,13 @@ func (m HistoryViewModel) View() string {
 // Show scroll indicator if more entries below
 if end < len(m.history) {
-s.WriteString(fmt.Sprintf(" %d more entries below...\n", len(m.history)-end))
+s.WriteString(fmt.Sprintf(" [v] %d more entries below...\n", len(m.history)-end))
 }
 s.WriteString("\n")
 }
-s.WriteString("⌨️ ↑/↓: Navigate PgUp/PgDn: Jump Home/End: First/Last ESC: Back q: Quit\n")
+s.WriteString("[KEYS] Up/Down: Navigate - PgUp/PgDn: Jump - Home/End: First/Last - ESC: Back - q: Quit\n")
 return s.String()
 }
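The extra file.Info() call earlier in this file exists because os.ReadDir returns lightweight fs.DirEntry values carrying only the name and type; the modification time requires a stat, which Info() performs, and unlike the old ioutil.ReadDir that cost is now paid only for entries that survive the filters.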

View File

@@ -137,10 +137,10 @@ func (m InputModel) View() string {
s.WriteString("\n\n")
if m.err != nil {
s.WriteString(errorStyle.Render(fmt.Sprintf(" Error: %v\n\n", m.err)))
s.WriteString(errorStyle.Render(fmt.Sprintf("[FAIL] Error: %v\n\n", m.err)))
}
s.WriteString("⌨️ Type value Enter: Confirm ESC: Cancel\n")
s.WriteString("[KEYS] Type value | Enter: Confirm | ESC: Cancel\n")
return s.String()
}

Some files were not shown because too many files have changed in this diff.