Compare commits


103 Commits

670c9af2e7 feat(tui): add resource profiles for backup/restore operations
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m25s
CI/CD / Build & Release (push) Successful in 3m15s
- Add memory detection for Linux, macOS, Windows
- Add 5 predefined profiles: conservative, balanced, performance, max-performance, large-db
- Add Resource Profile and Cluster Parallelism settings in TUI
- Add quick hotkeys: 'l' for large-db, 'c' for conservative, 'p' for recommendations
- Display system resources (CPU cores, memory) and recommended profile
- Auto-detect and recommend profile based on system resources

Fixes 'out of shared memory' errors when restoring large databases on small VMs.
Use 'large-db' or 'conservative' profile for large databases on constrained systems.
2026-01-18 11:38:24 +01:00
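The auto-recommendation described above can be sketched as a small selector over detected resources. The thresholds and the `recommendProfile` name below are illustrative assumptions; the commit does not state the actual cutoffs the tool uses.

```go
package main

import "fmt"

// recommendProfile sketches how one of the predefined profiles might be
// chosen from detected CPU cores and memory. Thresholds are guesses for
// illustration, not dbbackup's real values.
func recommendProfile(cores, memGB int) string {
	switch {
	case memGB < 4 || cores < 2:
		return "conservative"
	case memGB < 16:
		return "balanced"
	case memGB < 64:
		return "performance"
	default:
		return "max-performance"
	}
}

func main() {
	fmt.Println(recommendProfile(2, 3))    // small VM -> conservative
	fmt.Println(recommendProfile(32, 128)) // large host -> max-performance
}
```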
e2cf9adc62 fix: improve cleanup toggle UX when database detection fails
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m28s
CI/CD / Build & Release (push) Successful in 3m13s
- Allow cleanup toggle even when preview detection failed
- Show 'detection pending' message instead of blocking the toggle
- Will re-detect databases at restore execution time
- Always show cleanup toggle option for cluster restores
- Better messaging: 'enabled/disabled' instead of showing 0 count
2026-01-17 17:07:26 +01:00
29e089fe3b fix: re-detect databases at execution time for cluster cleanup
All checks were successful
CI/CD / Test (push) Successful in 1m16s
CI/CD / Lint (push) Successful in 1m25s
CI/CD / Build & Release (push) Successful in 3m10s
- Detection in preview may fail or return stale results
- Re-detect user databases when cleanup is enabled at execution time
- Fall back to preview list if re-detection fails
- Ensures actual databases are dropped, not just what was detected earlier
2026-01-17 17:00:28 +01:00
9396c8e605 fix: add debug logging for database detection
All checks were successful
CI/CD / Test (push) Successful in 1m19s
CI/CD / Lint (push) Successful in 1m34s
CI/CD / Build & Release (push) Successful in 3m24s
- Always set cmd.Env to preserve PGPASSWORD from environment
- Add debug logging for connection parameters and results
- Helps diagnose cluster restore database detection issues
2026-01-17 16:54:20 +01:00
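The PGPASSWORD fix above hinges on a Go `os/exec` detail: a `Cmd` inherits the parent environment only while `cmd.Env` is nil, so assigning a fresh slice silently drops variables like PGPASSWORD. A minimal sketch, with illustrative flags and a hypothetical `psqlCmd` helper:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// psqlCmd builds a psql invocation whose environment starts from
// os.Environ(), so PGPASSWORD set by the caller survives even when
// extra variables are appended.
func psqlCmd(host string, port int, db string) *exec.Cmd {
	cmd := exec.Command("psql",
		"-h", host, "-p", fmt.Sprint(port), "-d", db,
		"-Atc", "SELECT datname FROM pg_database WHERE NOT datistemplate")
	// Assigning a fresh slice here would drop PGPASSWORD; extend the
	// inherited environment instead.
	cmd.Env = append(os.Environ(), "PGCONNECT_TIMEOUT=10")
	return cmd
}

func main() {
	fmt.Println(psqlCmd("localhost", 5433, "postgres").Args)
}
```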
e363e1937f fix: cluster restore database detection and TUI error display
All checks were successful
CI/CD / Test (push) Successful in 1m14s
CI/CD / Lint (push) Successful in 1m25s
CI/CD / Build & Release (push) Successful in 3m16s
- Fixed psql connection for database detection (always use -h flag)
- Use CombinedOutput() to capture stderr for better diagnostics
- Added existingDBError tracking in restore preview
- Show 'Unable to detect' instead of misleading 'None' when listing fails
- Disable cleanup toggle when database detection failed
2026-01-17 16:44:44 +01:00
df1ab2f55b feat: TUI improvements and consistency fixes
All checks were successful
CI/CD / Test (push) Successful in 1m14s
CI/CD / Lint (push) Successful in 1m23s
CI/CD / Build & Release (push) Successful in 3m10s
- Add product branding header to main menu (version + tagline)
- Fix backup success/error report formatting consistency
- Remove extra newline before error box in backup_exec
- Align backup and restore completion screens
2026-01-17 16:26:00 +01:00
0e050b2def fix: cluster backup TUI success report formatting consistency
All checks were successful
CI/CD / Test (push) Successful in 1m15s
CI/CD / Lint (push) Successful in 1m24s
CI/CD / Build & Release (push) Successful in 3m10s
- Aligned box width and indentation with restore success screen
- Removed inconsistent 2-space prefix from success/error boxes
- Standardized content indentation to 4 spaces
- Moved timing section outside else block (always shown)
- Updated footer style to match restore screen
2026-01-17 16:15:16 +01:00
62d58c77af feat(restore): add --parallel-dbs=-1 auto-detection based on CPU/RAM
All checks were successful
CI/CD / Test (push) Successful in 1m16s
CI/CD / Lint (push) Successful in 1m25s
CI/CD / Build & Release (push) Successful in 3m14s
- Add CalculateOptimalParallel() function to preflight.go
- Calculates optimal workers: min(RAM/3GB, CPU cores), capped at 16
- Reduces parallelism by 50% if memory pressure >80%
- Add -1 flag value for auto-detection mode
- Preflight summary now shows CPU cores and recommended parallel
2026-01-17 13:41:28 +01:00
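The auto-detection rule above — min(RAM/3GB, CPU cores), capped at 16, halved above 80% memory pressure — can be reconstructed directly from the message. This is a sketch of the described formula, not the actual `CalculateOptimalParallel()` in preflight.go:

```go
package main

import "fmt"

// calculateOptimalParallel mirrors the rule from the commit message:
// min(RAM/3GB, CPU cores), capped at 16, halved under memory pressure.
func calculateOptimalParallel(memGB, cores int, memUsedPct float64) int {
	workers := memGB / 3
	if cores < workers {
		workers = cores
	}
	if workers > 16 {
		workers = 16
	}
	if memUsedPct > 80 {
		workers /= 2
	}
	if workers < 1 {
		workers = 1
	}
	return workers
}

func main() {
	fmt.Println(calculateOptimalParallel(64, 8, 50))   // CPU-bound: 8
	fmt.Println(calculateOptimalParallel(256, 64, 50)) // capped at 16
	fmt.Println(calculateOptimalParallel(64, 8, 90))   // pressure >80%: halved to 4
}
```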
c5be9bcd2b fix(grafana): update dashboard queries and thresholds
All checks were successful
CI/CD / Test (push) Successful in 1m15s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Successful in 3m13s
- Fix Last Backup Status panel to use bool modifier for proper 1/0 values
- Change RPO threshold from 24h to 7 days (604800s) for status check
- Clean up table transformations to exclude duplicate fields
- Update variable refresh to trigger on time range change
2026-01-17 13:24:54 +01:00
b120f1507e style: format struct field alignment
All checks were successful
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Has been skipped
2026-01-17 11:44:05 +01:00
dd1db844ce fix: improve lock capacity calculation for smaller VMs
All checks were successful
CI/CD / Test (push) Successful in 1m16s
CI/CD / Lint (push) Successful in 1m25s
CI/CD / Build & Release (push) Successful in 3m13s
- Fix boostLockCapacity: max_locks_per_transaction requires RESTART, not reload
- Calculate total lock capacity: max_locks × (max_connections + max_prepared_txns)
- Add TotalLockCapacity to preflight checks with warning if < 200,000
- Update error hints to explain capacity formula and recommend 4096+ for small VMs
- Show max_connections and total capacity in preflight summary

Fixes 'out of shared memory' errors on VMs with reduced resources.
2026-01-17 07:48:17 +01:00
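The capacity formula above is worth seeing with concrete numbers: PostgreSQL's shared lock table holds roughly max_locks_per_transaction slots per allowed connection and prepared transaction, so defaults land far below the 200,000 warning threshold. A sketch of the stated formula:

```go
package main

import "fmt"

// totalLockCapacity applies the formula from the commit message:
// max_locks_per_transaction * (max_connections + max_prepared_transactions).
func totalLockCapacity(maxLocks, maxConns, maxPreparedTxns int) int {
	return maxLocks * (maxConns + maxPreparedTxns)
}

func main() {
	// PostgreSQL defaults: 64 locks, 100 connections, 0 prepared txns.
	capacity := totalLockCapacity(64, 100, 0)
	fmt.Println(capacity, "warn:", capacity < 200_000) // 6400, well under threshold
	// The recommended 4096 for small VMs:
	fmt.Println(totalLockCapacity(4096, 100, 0)) // 409600
}
```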
4ea3ec2cf8 fix(preflight): improve BLOB count detection and block restore when max_locks_per_transaction is critically low
All checks were successful
CI/CD / Test (push) Successful in 1m15s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Successful in 3m14s
- Add higher lock boost tiers for extreme BLOB counts (100K+, 200K+)
- Calculate actual lock requirement: (totalBLOBs / max_connections) * 1.5
- Block restore with CRITICAL error when BLOB count exceeds 2x safe limit
- Improve preflight summary with PASSED/FAILED status display
- Add clearer fix instructions for max_locks_per_transaction
2026-01-17 07:25:45 +01:00
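The blocking rule above can be sketched from the two figures in the message: the requirement `(totalBLOBs / max_connections) * 1.5`, and a hard stop when that exceeds twice the configured limit. Function names here are illustrative, not the code's:

```go
package main

import "fmt"

// requiredLocks applies the sizing rule from the commit:
// (totalBLOBs / max_connections) * 1.5.
func requiredLocks(totalBLOBs, maxConns int) int {
	return int(float64(totalBLOBs) / float64(maxConns) * 1.5)
}

// shouldBlock reports a CRITICAL condition: the requirement exceeds
// twice the safe per-transaction lock budget.
func shouldBlock(totalBLOBs, maxConns, maxLocksPerTxn int) bool {
	return requiredLocks(totalBLOBs, maxConns) > 2*maxLocksPerTxn
}

func main() {
	fmt.Println(requiredLocks(200_000, 100))     // 3000 locks needed
	fmt.Println(shouldBlock(200_000, 100, 64))   // true: far beyond the default 64
	fmt.Println(shouldBlock(200_000, 100, 4096)) // false once the limit is raised
}
```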
9200024e50 fix(restore): add context validity checks to debug cancellation issues
All checks were successful
CI/CD / Test (push) Successful in 1m21s
CI/CD / Lint (push) Successful in 1m28s
CI/CD / Build & Release (push) Successful in 3m17s
Added explicit context checks at critical points:
1. After extraction completes - logs error if context was cancelled
2. Before database restore loop starts - catches premature cancellation

This helps diagnose issues where all database restores fail with
'context cancelled' even though extraction completed successfully.

The user reported this happening after 4h20m extraction - all 6 DBs
showed 'restore skipped (context cancelled)'. These checks will log
exactly when/where the context becomes invalid.
2026-01-16 19:36:52 +01:00
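A checkpoint of this kind is a one-liner around `ctx.Err()`. A minimal sketch of the pattern (the `checkpoint` name is hypothetical):

```go
package main

import (
	"context"
	"fmt"
)

// checkpoint reports where a shared context became invalid; the commit
// adds equivalent checks after extraction and before the restore loop.
func checkpoint(ctx context.Context, stage string) error {
	if err := ctx.Err(); err != nil {
		return fmt.Errorf("context invalid at %s: %w", stage, err)
	}
	return nil
}

// demo runs a checkpoint before and after cancellation.
func demo() (beforeCancel, afterCancel error) {
	ctx, cancel := context.WithCancel(context.Background())
	beforeCancel = checkpoint(ctx, "after extraction")
	cancel()
	afterCancel = checkpoint(ctx, "before database restore loop")
	return
}

func main() {
	a, b := demo()
	fmt.Println(a) // <nil>
	fmt.Println(b) // context invalid at before database restore loop: context canceled
}
```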
698b8a761c feat(restore): add weighted progress, pre-extraction disk check, parallel-dbs flag
All checks were successful
CI/CD / Test (push) Successful in 1m20s
CI/CD / Lint (push) Successful in 1m32s
CI/CD / Build & Release (push) Successful in 3m19s
Three high-value improvements for cluster restore:

1. Weighted progress by database size
   - Progress now shows percentage by data volume, not just count
   - Phase 3/3: Databases (2/7) - 45.2% by size
   - Gives more accurate ETA for clusters with varied DB sizes

2. Pre-extraction disk space check
   - Checks workdir has 3x archive size before extraction
   - Prevents partial extraction failures when disk fills mid-way
   - Clear error message with required vs available GB

3. --parallel-dbs flag for concurrent restores
   - dbbackup restore cluster archive.tar.gz --parallel-dbs=4
   - Overrides CLUSTER_PARALLELISM config setting
   - Set to 1 for sequential restore (safest for large objects)
2026-01-16 18:31:12 +01:00
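Weighted progress as described in item 1 reduces to summing completed bytes over total bytes instead of counting databases. A sketch under that reading (names are illustrative):

```go
package main

import "fmt"

// weightedPercent returns restore progress weighted by database size
// rather than by count. sizes holds every database's size; done marks
// the indices already restored.
func weightedPercent(sizes []int64, done map[int]bool) float64 {
	var total, finished int64
	for i, s := range sizes {
		total += s
		if done[i] {
			finished += s
		}
	}
	if total == 0 {
		return 0
	}
	return 100 * float64(finished) / float64(total)
}

func main() {
	// 2 of 7 databases done, but they happen to hold most of the data.
	sizes := []int64{500, 400, 50, 20, 10, 10, 10}
	fmt.Printf("%.1f%% by size\n", weightedPercent(sizes, map[int]bool{0: true, 1: true}))
}
```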
dd7c4da0eb fix(restore): add 100ms delay between database restores
All checks were successful
CI/CD / Test (push) Successful in 1m19s
CI/CD / Lint (push) Successful in 1m27s
CI/CD / Build & Release (push) Successful in 3m17s
Ensures PostgreSQL fully closes connections before starting next
restore, preventing potential connection pool exhaustion during
rapid sequential cluster restores.
2026-01-16 16:08:42 +01:00
b2a78cad2a fix(dedup): use deterministic seed in TestChunker_ShiftedData
Some checks failed
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m27s
CI/CD / Build & Release (push) Has been cancelled
The test was flaky because it used crypto/rand for random data,
causing non-deterministic chunk boundaries. With small sample sizes
(100KB / 8KB avg = ~12 chunks), variance was high - sometimes only
42.9% overlap instead of the expected >50%.

Fixed by using math/rand with seed 42 for reproducible test results.
Now consistently achieves 91.7% overlap (11/12 chunks).
2026-01-16 16:02:29 +01:00
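The fix is the standard reproducibility trick: `math/rand` with a fixed seed yields the same byte stream on every run, so chunk boundaries (and the measured overlap) stop varying. A minimal sketch:

```go
package main

import (
	"fmt"
	"math/rand"
)

// testData returns n bytes of pseudo-random but reproducible data:
// the fixed seed 42 makes every run produce the same stream, unlike
// crypto/rand.
func testData(n int) []byte {
	rng := rand.New(rand.NewSource(42))
	buf := make([]byte, n)
	rng.Read(buf) // (*rand.Rand).Read never returns an error
	return buf
}

func main() {
	a, b := testData(16), testData(16)
	same := true
	for i := range a {
		if a[i] != b[i] {
			same = false
		}
	}
	fmt.Println("reproducible:", same) // reproducible: true
}
```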
5728b465e6 fix(tui): handle tea.InterruptMsg for proper Ctrl+C cancellation
Some checks failed
CI/CD / Lint (push) Successful in 1m30s
CI/CD / Build & Release (push) Has been skipped
CI/CD / Test (push) Failing after 1m16s
Bubbletea v1.3+ sends InterruptMsg for SIGINT signals instead of
KeyMsg with 'ctrl+c', causing cancellation to not work properly.

- Add tea.InterruptMsg handling to restore_exec.go
- Add tea.InterruptMsg handling to backup_exec.go
- Add tea.InterruptMsg handling to menu.go
- Call cleanup.KillOrphanedProcesses on all interrupt paths
- No zombie pg_dump/pg_restore/gzip processes left behind

Fixes Ctrl+C not working during cluster restore/backup operations.

v3.42.50
2026-01-16 15:53:39 +01:00
bfe99e959c feat(tui): unified cluster backup progress display
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m27s
CI/CD / Build & Release (push) Successful in 3m31s
- Add combined overall progress bar showing all phases (0-100%)
- Phase 1/3: Backing up Globals (0-15% of overall)
- Phase 2/3: Backing up Databases (15-90% of overall)
- Phase 3/3: Compressing Archive (90-100% of overall)
- Show current database name during backup
- Phase-aware progress tracking with overallPhase, phaseDesc
- Dual progress bars: overall + database count
- Consistent with cluster restore progress display

v3.42.49
2026-01-16 15:37:04 +01:00
780beaadfb feat(tui): unified cluster restore progress display
All checks were successful
CI/CD / Test (push) Successful in 1m19s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Successful in 3m27s
- Add combined overall progress bar showing all phases (0-100%)
- Phase 1/3: Extracting Archive (0-60% of overall)
- Phase 2/3: Restoring Globals (60-65% of overall)
- Phase 3/3: Restoring Databases (65-100% of overall)
- Show current database name during restore
- Phase-aware progress tracking with overallPhase, currentDB, extractionDone
- Dual progress bars: overall + phase-specific (bytes or db count)
- Better visual feedback during entire cluster restore operation

v3.42.48
2026-01-16 15:32:24 +01:00
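The phase bands above compose into one overall percentage by linear interpolation: map each phase's local 0..1 fraction into its slice of the 0-100 bar. A sketch of that mapping (the function name is illustrative):

```go
package main

import "fmt"

// overallProgress maps a phase-local fraction (0..1) onto the combined
// 0-100% bar using the bands listed above: extraction 0-60, globals
// 60-65, databases 65-100.
func overallProgress(phase int, frac float64) float64 {
	bands := [3][2]float64{{0, 60}, {60, 65}, {65, 100}}
	lo, hi := bands[phase][0], bands[phase][1]
	return lo + frac*(hi-lo)
}

func main() {
	fmt.Println(overallProgress(0, 0.5))  // 30: halfway through extraction
	fmt.Println(overallProgress(2, 0.25)) // 73.75: a quarter into the database phase
}
```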
838c5b8c15 Fix: PostgreSQL expert review - cluster backup/restore improvements
All checks were successful
CI/CD / Test (push) Successful in 1m16s
CI/CD / Lint (push) Successful in 1m27s
CI/CD / Build & Release (push) Successful in 3m14s
Critical PostgreSQL-specific fixes identified by database expert review:

1. **Port always passed for localhost** (pg_dump, pg_restore, pg_dumpall, psql)
   - Previously, port was only passed for non-localhost connections
   - If user has PostgreSQL on non-standard port (e.g., 5433), commands
     would connect to wrong instance or fail
   - Now always passes -p PORT to all PostgreSQL tools

2. **CREATE DATABASE with encoding/locale preservation**
   - Now creates databases with explicit ENCODING 'UTF8'
   - Detects server's LC_COLLATE and uses it for new databases
   - Prevents encoding mismatch errors during restore
   - Falls back to simple CREATE if encoding fails (older PG versions)

3. **DROP DATABASE WITH (FORCE) for PostgreSQL 13+**
   - Uses new WITH (FORCE) option to atomically terminate connections
   - Prevents race condition where new connections are established
   - Falls back to standard DROP for PostgreSQL < 13
   - Also revokes CONNECT privilege before drop attempt

4. **Improved globals restore error handling**
   - Distinguishes between FATAL errors (real problems) and regular
     ERROR messages (like 'role already exists' which is expected)
   - Only fails on FATAL errors or psql command failures
   - Logs error count summary for visibility

5. **Better error classification in restore logs**
   - Separate log levels for FATAL vs ERROR
   - Debug-level logging for 'already exists' errors (expected)
   - Error count tracking to avoid log spam

These fixes improve reliability for enterprise PostgreSQL deployments
with non-standard configurations and existing data.
2026-01-16 14:36:03 +01:00
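Fix 3's version gate can be sketched as SQL selection by server major version. The helper name is hypothetical, and `%q` only approximates PostgreSQL identifier quoting (real code must double embedded quotes rather than backslash-escape them):

```go
package main

import "fmt"

// dropDatabaseSQL picks the DROP form by server version: WITH (FORCE)
// terminates remaining connections atomically on PostgreSQL 13+, while
// older servers get the plain DROP. Identifier quoting is simplified.
func dropDatabaseSQL(name string, serverMajor int) string {
	if serverMajor >= 13 {
		return fmt.Sprintf(`DROP DATABASE IF EXISTS %q WITH (FORCE)`, name)
	}
	return fmt.Sprintf(`DROP DATABASE IF EXISTS %q`, name)
}

func main() {
	fmt.Println(dropDatabaseSQL("appdb", 16)) // DROP DATABASE IF EXISTS "appdb" WITH (FORCE)
	fmt.Println(dropDatabaseSQL("appdb", 12)) // DROP DATABASE IF EXISTS "appdb"
}
```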
9d95a193db Fix: Enterprise cluster restore (postgres user via su)
All checks were successful
CI/CD / Test (push) Successful in 1m16s
CI/CD / Lint (push) Successful in 1m29s
CI/CD / Build & Release (push) Successful in 3m13s
Critical fixes for enterprise environments where dbbackup runs as
postgres user via 'su postgres' without sudo access:

1. canRestartPostgreSQL(): New function that detects if we can restart
   PostgreSQL. Returns false immediately if running as postgres user
   without sudo access, avoiding wasted time and potential hangs.

2. tryRestartPostgreSQL(): Now calls canRestartPostgreSQL() first to
   skip restart attempts in restricted environments.

3. Changed restart warning from ERROR to WARN level - it's expected
   behavior in enterprise environments, not an error.

4. Context cancellation check: Goroutines now check ctx.Err() before
   starting and properly count cancelled databases as failures.

5. Goroutine accounting: After wg.Wait(), verify all databases were
   accounted for (success + fail = total). Catches goroutine crashes
   or deadlocks.

6. Port argument fix: Always pass -p port to psql for localhost
   restores, fixing non-standard port configurations.

This should fix the issue where cluster restore showed success but
0 databases were actually restored when running on enterprise systems.
2026-01-16 14:17:04 +01:00
3201f0fb6a Fix: Critical bug - cluster restore showing success with 0 databases restored
All checks were successful
CI/CD / Test (push) Successful in 1m14s
CI/CD / Lint (push) Successful in 1m25s
CI/CD / Build & Release (push) Successful in 3m23s
CRITICAL FIXES:
- Add check for successCount == 0 to properly fail when no databases restored
- Fix tryRestartPostgreSQL to use non-interactive sudo (-n flag)
- Add 10-second timeout per restart attempt to prevent blocking
- Try pg_ctl directly for postgres user (no sudo needed)
- Set stdin to nil to prevent sudo from waiting for password input

This fixes the issue where cluster restore showed success but no databases
were actually restored due to sudo blocking on password prompts.
2026-01-16 14:03:02 +01:00
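The sudo fixes above combine three `os/exec` safeguards: a per-attempt timeout via `CommandContext`, nil stdin so nothing can block on a prompt, and sudo's `-n` flag so it errors out rather than asking for a password. A sketch of the pattern (the wrapper name is illustrative):

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// runNonInteractive runs a command with a 10-second timeout and stdin
// explicitly nil, so a misbehaving child is killed instead of hanging
// on input.
func runNonInteractive(name string, args ...string) error {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	cmd := exec.CommandContext(ctx, name, args...)
	cmd.Stdin = nil // never wait on a password prompt
	return cmd.Run()
}

func main() {
	// With -n, sudo exits with an error instead of prompting when a
	// password would be required.
	err := runNonInteractive("sudo", "-n", "systemctl", "restart", "postgresql")
	fmt.Println("restart attempt finished; err:", err)
}
```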
62ddc57fb7 Fix: Remove sudo usage from auth detection to avoid password prompts
All checks were successful
CI/CD / Test (push) Successful in 1m16s
CI/CD / Lint (push) Successful in 1m27s
CI/CD / Build & Release (push) Successful in 3m16s
- Remove sudo cat attempt for reading pg_hba.conf
- Prevents password prompts when running as postgres via 'su postgres'
- Auth detection now relies on connection attempts when file is unreadable
- Fixes issue where returning to menu after restore triggers sudo prompt
2026-01-16 13:52:41 +01:00
510175ff04 TUI: Enhance completion/result screens for backup and restore
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m28s
CI/CD / Build & Release (push) Successful in 3m12s
- Add box-style headers for success/failure states
- Display comprehensive summary with archive info, type, database count
- Show timing section with total time, throughput, and average per-DB stats
- Use consistent styling and formatting across all result views
- Improve visual hierarchy with section separators
2026-01-16 13:37:58 +01:00
a85ad0c88c TUI: Add timing and ETA tracking to cluster restore progress
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m28s
CI/CD / Build & Release (push) Successful in 3m21s
- Add DatabaseProgressWithTimingCallback for timing-aware progress reporting
- Track elapsed time and average duration per database during restore phase
- Display ETA based on completed database restore times
- Show restore phase elapsed time in progress bar
- Enhance cluster restore progress bar with [elapsed / ETA: remaining] format
2026-01-16 09:42:05 +01:00
4938dc1918 fix: max_locks_per_transaction requires PostgreSQL restart
All checks were successful
CI/CD / Test (push) Successful in 1m19s
CI/CD / Lint (push) Successful in 1m31s
CI/CD / Build & Release (push) Successful in 3m34s
- Fixed critical bug where ALTER SYSTEM + pg_reload_conf() was used
  but max_locks_per_transaction requires a full PostgreSQL restart
- Added automatic restart attempt (systemctl, service, pg_ctl)
- Added loud warnings if restart fails with manual fix instructions
- Updated preflight checks to warn about low max_locks_per_transaction
- This was causing 'out of shared memory' errors on BLOB-heavy restores
2026-01-15 18:50:10 +01:00
09a917766f TUI: Add database progress tracking to cluster backup
All checks were successful
CI/CD / Test (push) Successful in 1m27s
CI/CD / Lint (push) Successful in 1m33s
CI/CD / Build & Release (push) Successful in 3m28s
- Add ProgressCallback and DatabaseProgressCallback to backup engine
- Add SetProgressCallback() and SetDatabaseProgressCallback() methods
- Wire database progress callback in backup_exec.go
- Show progress bar (Database: [████░░░] X/Y) during cluster backups
- Hide spinner when progress bar is displayed
- Use shared progress state pattern for thread-safe TUI updates
2026-01-15 15:32:26 +01:00
eeacbfa007 TUI: Add detailed progress tracking with rolling speed and database count
All checks were successful
CI/CD / Test (push) Successful in 1m28s
CI/CD / Lint (push) Successful in 1m41s
CI/CD / Build & Release (push) Successful in 4m9s
- Add TUI-native detailed progress component (detailed_progress.go)
- Hide spinner when progress bar is shown for cleaner display
- Implement rolling window speed calculation (5-sec window, 100 samples)
- Add database count tracking (X/Y) for cluster restore operations
- Wire DatabaseProgressCallback to restore engine for multi-db progress
2026-01-15 15:16:21 +01:00
7711a206ab Fix panic in TUI backup manager verify when logger is nil
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m25s
CI/CD / Build & Release (push) Successful in 3m15s
- Add nil checks before logger calls in diagnose.go (8 places)
- Add nil checks before logger calls in safety.go (2 places)
- Fixes crash when pressing 'v' to verify backup in interactive menu
2026-01-14 17:18:37 +01:00
ba6e8a2b39 v3.42.37: Remove ASCII boxes from diagnose view
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Successful in 3m14s
Cleaner output without box drawing characters:
- [STATUS] Validation section
- [INFO] Details section
- [FAIL] Errors section
- [WARN] Warnings section
- [HINT] Recommendations section
2026-01-14 17:05:43 +01:00
ec5e89eab7 v3.42.36: Fix remaining TUI prefix inconsistencies
All checks were successful
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Successful in 3m13s
- diagnose_view.go: Add [STATS], [LIST], [INFO] section prefixes
- status.go: Add [CONN], [INFO] section prefixes
- settings.go: [LOG] → [INFO] for configuration summary
- menu.go: [DB] → [SELECT]/[CHECK] for selectors
2026-01-14 16:59:24 +01:00
e24d7ab49f v3.42.35: Standardize TUI title prefixes for consistency
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m27s
CI/CD / Build & Release (push) Successful in 3m17s
- [CHECK] for diagnosis, previews, validations
- [STATS] for status, history, metrics views
- [SELECT] for selection/browsing screens
- [EXEC] for execution screens (backup/restore)
- [CONFIG] for settings/configuration

Fixed 8 files with inconsistent prefixes:
- diagnose_view.go: [SEARCH] → [CHECK]
- settings.go: [CFG] → [CONFIG]
- menu.go: [DB] → clean title
- history.go: [HISTORY] → [STATS]
- backup_manager.go: [DB] → [SELECT]
- archive_browser.go: [PKG]/[SEARCH] → [SELECT]
- restore_preview.go: added [CHECK]
- restore_exec.go: [RESTORE] → [EXEC]
2026-01-14 16:36:35 +01:00
721e53fe6a v3.42.34: Add spf13/afero for filesystem abstraction
All checks were successful
CI/CD / Test (push) Successful in 1m16s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Successful in 3m13s
- New internal/fs package for testable filesystem operations
- In-memory filesystem support for unit testing without disk I/O
- Swappable global FS: SetFS(afero.NewMemMapFs())
- Wrapper functions: ReadFile, WriteFile, Mkdir, Walk, Glob, etc.
- Testing helpers: WithMemFs(), SetupTestDir()
- Comprehensive test suite demonstrating usage
- Upgraded afero from v1.10.0 to v1.15.0
2026-01-14 16:24:12 +01:00
4e09066aa5 v3.42.33: Add cenkalti/backoff for exponential backoff retry
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m25s
CI/CD / Build & Release (push) Successful in 3m14s
- Exponential backoff retry for all cloud operations (S3, Azure, GCS)
- RetryConfig presets: Default (5x), Aggressive (10x), Quick (3x)
- Smart error classification: IsPermanentError, IsRetryableError
- Automatic file position reset on upload retry
- Retry logging with wait duration
- Multipart uploads use aggressive retry (more tolerance)
2026-01-14 16:19:40 +01:00
6a24ee39be v3.42.32: Add fatih/color for cross-platform terminal colors
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Successful in 3m13s
- Windows-compatible colors via native console API
- Color helper functions: Success(), Error(), Warning(), Info()
- Text styling: Header(), Dim(), Bold(), Green(), Red(), Yellow(), Cyan()
- Logger CleanFormatter uses fatih/color instead of raw ANSI
- All progress indicators use colored [OK]/[FAIL] status
- Automatic color detection for non-TTY environments
2026-01-14 16:13:00 +01:00
dc6dfd8b2c v3.42.31: Add schollz/progressbar for visual progress display
All checks were successful
CI/CD / Test (push) Successful in 1m16s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Successful in 3m13s
- Visual progress bars for cloud uploads/downloads
  - Byte transfer display, speed, ETA prediction
  - Color-coded Unicode block progress
- Checksum verification with progress bar for large files
- Spinner for indeterminate operations (unknown size)
- New types: NewSchollzBar(), NewSchollzBarItems(), NewSchollzSpinner()
- Progress Writer() method for io.Copy integration
2026-01-14 16:07:04 +01:00
7b4ab76313 v3.42.30: Add go-multierror for better error aggregation
All checks were successful
CI/CD / Test (push) Successful in 1m20s
CI/CD / Lint (push) Successful in 1m25s
CI/CD / Build & Release (push) Successful in 3m14s
- Use hashicorp/go-multierror for cluster restore error collection
- Shows ALL failed databases with full error context (not just count)
- Bullet-pointed output for readability
- Thread-safe error aggregation with dedicated mutex
- Error wrapping with %w for proper error chain preservation
2026-01-14 15:59:12 +01:00
c0d92b3a81 fix: update go.sum for gopsutil Windows dependencies
All checks were successful
CI/CD / Test (push) Successful in 1m15s
CI/CD / Lint (push) Successful in 1m25s
CI/CD / Build & Release (push) Successful in 3m13s
2026-01-14 15:50:13 +01:00
8c85d85249 refactor: use gopsutil and go-humanize for preflight checks
Some checks failed
CI/CD / Build & Release (push) Has been cancelled
CI/CD / Lint (push) Has been cancelled
CI/CD / Test (push) Has been cancelled
- Added gopsutil/v3 for cross-platform system metrics
  * Works on Linux, macOS, Windows, BSD
  * Memory detection no longer requires /proc parsing

- Added go-humanize for readable output
  * humanize.Bytes() for memory sizes
  * humanize.Comma() for large numbers

- Improved preflight display with memory usage percentage
- Linux kernel checks (shmmax/shmall) still use /proc for accuracy
2026-01-14 15:47:31 +01:00
e0cdcb28be feat: comprehensive preflight checks for cluster restore
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Successful in 3m17s
- Linux system checks (read-only from /proc, no auth needed):
  * shmmax, shmall kernel limits
  * Available RAM check

- PostgreSQL auto-tuning:
  * max_locks_per_transaction scaled by BLOB count
  * maintenance_work_mem boosted to 2GB for faster indexes
  * All settings auto-reset after restore (even on failure)

- Archive analysis:
  * Count BLOBs per database (pg_restore -l or zgrep)
  * Scale lock boost: 2048 (default) → 4096/8192/16384 based on count

- Nice TUI preflight summary display with ✓/⚠ indicators
2026-01-14 15:30:41 +01:00
22a7b9e81e feat: auto-tune max_locks_per_transaction for cluster restore
All checks were successful
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Successful in 3m15s
- Automatically boost max_locks_per_transaction to 2048 before restore
- Uses ALTER SYSTEM + pg_reload_conf() - no restart needed
- Automatically resets to original value after restore (even on failure)
- Prevents 'out of shared memory' OOM on BLOB-heavy SQL format dumps
- Works transparently - no user intervention required
2026-01-14 15:05:42 +01:00
c71889be47 fix: phased restore for BLOB databases to prevent lock exhaustion OOM
All checks were successful
CI/CD / Test (push) Successful in 1m16s
CI/CD / Lint (push) Successful in 1m25s
CI/CD / Build & Release (push) Successful in 3m13s
- Auto-detect large objects in pg_restore dumps
- Split restore into pre-data, data, post-data phases
- Each phase commits and releases locks before next
- Prevents 'out of shared memory' / max_locks_per_transaction errors
- Updated error hints with better guidance for lock exhaustion
2026-01-14 08:15:53 +01:00
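Phased restore maps onto pg_restore's `--section` flag: one invocation per section, each committing and releasing its locks before the next starts. A sketch of how the three runs might be assembled (real invocations carry more flags; the helper name is illustrative):

```go
package main

import "fmt"

// phasedRestoreArgs splits a pg_restore run into the three sections
// named in the commit, so locks taken by one phase are released before
// the next phase begins.
func phasedRestoreArgs(dumpPath, db string) [][]string {
	var runs [][]string
	for _, section := range []string{"pre-data", "data", "post-data"} {
		runs = append(runs, []string{
			"pg_restore", "--section=" + section, "-d", db, dumpPath,
		})
	}
	return runs
}

func main() {
	for _, args := range phasedRestoreArgs("cluster/appdb.dump", "appdb") {
		fmt.Println(args)
	}
}
```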
222bdbef58 fix: streaming tar verification for large cluster archives (100GB+)
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Successful in 3m14s
- Increase timeout from 60 to 180 minutes for very large archives
- Use streaming pipes instead of buffering entire tar listing
- Only mark as corrupted for clear corruption signals (unexpected EOF, invalid gzip)
- Prevents false CORRUPTED errors on valid large archives
2026-01-13 14:40:18 +01:00
f7e9fa64f0 docs: add Large Database Support (600+ GB) section to PITR guide
All checks were successful
CI/CD / Test (push) Successful in 1m13s
CI/CD / Lint (push) Successful in 1m22s
CI/CD / Build & Release (push) Has been skipped
2026-01-13 10:02:35 +01:00
f153e61dbf fix: dynamic timeouts for large archives + use WorkDir for disk checks
All checks were successful
CI/CD / Test (push) Successful in 1m21s
CI/CD / Lint (push) Successful in 1m34s
CI/CD / Build & Release (push) Successful in 3m22s
- CheckDiskSpace now uses GetEffectiveWorkDir() instead of BackupDir
- Dynamic timeout calculation based on file size:
  - diagnoseClusterArchive: 5 + (GB/3) min, max 60 min
  - verifyWithPgRestore: 5 + (GB/5) min, max 30 min
  - DiagnoseClusterDumps: 10 + (GB/3) min, max 120 min
  - TUI safety checks: 10 + (GB/5) min, max 120 min
- Timeout vs corruption differentiation (no false CORRUPTED on timeout)
- Streaming tar listing to avoid OOM on large archives

For 119GB archives: ~45 min timeout instead of 5 min false-positive
2026-01-13 08:22:20 +01:00
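The size-scaled timeouts above all share one shape: base minutes plus size divided by a per-GB rate, clamped to a maximum. A sketch of that rule using the diagnoseClusterArchive numbers (5 + GB/3, max 60):

```go
package main

import "fmt"

// dynamicTimeoutMin implements the "base + sizeGB/perGB minutes,
// capped at maxMin" rule from the commit message.
func dynamicTimeoutMin(sizeGB, baseMin, perGB, maxMin int) int {
	m := baseMin + sizeGB/perGB
	if m > maxMin {
		m = maxMin
	}
	return m
}

func main() {
	// The 119GB archive from the commit: ~44 minutes instead of a fixed 5.
	fmt.Println(dynamicTimeoutMin(119, 5, 3, 60), "min")
	// Very large archives hit the cap.
	fmt.Println(dynamicTimeoutMin(600, 5, 3, 60), "min")
}
```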
d19c065658 Remove dev artifacts and internal docs
All checks were successful
CI/CD / Test (push) Successful in 1m14s
CI/CD / Lint (push) Successful in 1m22s
CI/CD / Build & Release (push) Successful in 3m9s
- dbbackup, dbbackup_cgo (dev binaries, use bin/ for releases)
- CRITICAL_BUGS_FIXED.md (internal post-mortem)
- scripts/remove_*.sh (one-time cleanup scripts)
2026-01-12 11:14:55 +01:00
8dac5efc10 Remove EMOTICON_REMOVAL_PLAN.md
Some checks failed
CI/CD / Test (push) Successful in 1m19s
CI/CD / Build & Release (push) Has been cancelled
CI/CD / Lint (push) Has been cancelled
2026-01-12 11:12:17 +01:00
fd5edce5ae Fix license: Apache 2.0 not MIT
All checks were successful
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m28s
CI/CD / Build & Release (push) Has been skipped
2026-01-12 10:57:55 +01:00
a7e2c86618 Replace VEEAM_ALTERNATIVE with OPENSOURCE_ALTERNATIVE - covers both commercial (Veeam) and open source (Borg/restic) alternatives
All checks were successful
CI/CD / Test (push) Successful in 1m16s
CI/CD / Lint (push) Successful in 1m29s
CI/CD / Build & Release (push) Has been skipped
2026-01-12 10:43:15 +01:00
b2e0c739e0 Fix golangci-lint v2 config format
All checks were successful
CI/CD / Test (push) Successful in 1m20s
CI/CD / Lint (push) Successful in 1m27s
CI/CD / Build & Release (push) Successful in 3m22s
2026-01-12 10:32:27 +01:00
ad23abdf4e Add version field to golangci-lint config for v2
Some checks failed
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Failing after 1m41s
CI/CD / Build & Release (push) Has been skipped
2026-01-12 10:26:36 +01:00
390b830976 Fix golangci-lint v2 module path
Some checks failed
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Failing after 28s
CI/CD / Build & Release (push) Has been skipped
2026-01-12 10:20:47 +01:00
7e53950967 Update golangci-lint to v2.8.0 for Go 1.24 compatibility
Some checks failed
CI/CD / Test (push) Successful in 1m16s
CI/CD / Lint (push) Failing after 8s
CI/CD / Build & Release (push) Has been skipped
2026-01-12 10:13:33 +01:00
59d2094241 Build all platforms v3.42.22
Some checks failed
CI/CD / Test (push) Successful in 1m16s
CI/CD / Lint (push) Failing after 1m22s
CI/CD / Build & Release (push) Has been skipped
2026-01-12 09:54:35 +01:00
b1f8c6d646 fix: correct Grafana dashboard metric names for backup size and duration panels
All checks were successful
CI/CD / Test (push) Successful in 1m21s
CI/CD / Lint (push) Successful in 1m24s
CI/CD / Build & Release (push) Successful in 3m14s
2026-01-09 09:15:16 +01:00
b05c2be19d Add corrected Grafana dashboard - fix status query
All checks were successful
CI/CD / Test (push) Successful in 1m14s
CI/CD / Lint (push) Successful in 1m21s
CI/CD / Build & Release (push) Successful in 3m13s
- Changed status query from dbbackup_backup_verified to RPO-based check
- dbbackup_rpo_seconds < 86400 returns SUCCESS when backup < 24h old
- Fixes false FAILED status when verify operations not run
- Includes: status, RPO, backup size, duration, and overview table panels
2026-01-08 12:27:23 +01:00
ec33959e3e v3.42.18: Unify archive verification - backup manager uses same checks as restore
All checks were successful
CI/CD / Test (push) Successful in 1m13s
CI/CD / Lint (push) Successful in 1m22s
CI/CD / Build & Release (push) Successful in 3m12s
- verifyArchiveCmd now uses restore.Safety and restore.Diagnoser
- Same validation logic in backup manager verify and restore safety checks
- No more discrepancy between verify showing valid and restore failing
2026-01-08 12:10:45 +01:00
92402f0fdb v3.42.17: Fix systemd service templates - remove invalid --config flag
All checks were successful
CI/CD / Test (push) Successful in 1m15s
CI/CD / Lint (push) Successful in 1m21s
CI/CD / Build & Release (push) Successful in 3m12s
- Service templates now use WorkingDirectory for config loading
- Config is read from .dbbackup.conf in /var/lib/dbbackup
- Updated SYSTEMD.md documentation to match actual CLI
- Removed non-existent --config flag from ExecStart
2026-01-08 11:57:16 +01:00
682510d1bc v3.42.16: TUI cleanup - remove STATUS box, add global styles
All checks were successful
CI/CD / Test (push) Successful in 1m19s
CI/CD / Lint (push) Successful in 1m24s
CI/CD / Build & Release (push) Successful in 3m19s
2026-01-08 11:17:46 +01:00
83ad62b6b5 v3.42.15: TUI - always allow Esc/Cancel during spinner operations
All checks were successful
CI/CD / Test (push) Successful in 1m13s
CI/CD / Lint (push) Successful in 1m20s
CI/CD / Build & Release (push) Successful in 3m7s
2026-01-08 10:53:00 +01:00
55d34be32e v3.42.14: TUI Backup Manager - status box with spinner, real verify function
All checks were successful
CI/CD / Test (push) Successful in 1m13s
CI/CD / Lint (push) Successful in 1m21s
CI/CD / Build & Release (push) Successful in 3m6s
2026-01-08 10:35:23 +01:00
1831bd7c1f v3.42.13: TUI improvements - grouped shortcuts, box layout, better alignment
All checks were successful
CI/CD / Test (push) Successful in 1m14s
CI/CD / Lint (push) Successful in 1m22s
CI/CD / Build & Release (push) Successful in 3m9s
2026-01-08 10:16:19 +01:00
24377eab8f v3.42.12: Require cleanup confirmation for cluster restore with existing DBs
All checks were successful
CI/CD / Test (push) Successful in 1m14s
CI/CD / Lint (push) Successful in 1m21s
CI/CD / Build & Release (push) Successful in 3m10s
- Block cluster restore if existing databases found and cleanup not enabled
- User must press 'c' to enable 'Clean All First' before proceeding
- Prevents accidental data conflicts during disaster recovery
- Bug #24: Missing safety gate for cluster restore
2026-01-08 09:46:53 +01:00
3e41d88445 v3.42.11: Replace all Unicode emojis with ASCII text
All checks were successful
CI/CD / Test (push) Successful in 1m13s
CI/CD / Lint (push) Successful in 1m20s
CI/CD / Build & Release (push) Successful in 3m10s
- Replace all emoji characters with ASCII equivalents throughout codebase
- Replace Unicode box-drawing characters (═║╔╗╚╝━─) with ASCII (+|-=)
- Replace checkmarks (✓✗) with [OK]/[FAIL] markers
- 59 files updated, 741 lines changed
- Improves terminal compatibility and reduces visual noise
2026-01-08 09:42:01 +01:00
5fb88b14ba Add legal documentation to gitignore
All checks were successful
CI/CD / Test (push) Successful in 1m14s
CI/CD / Lint (push) Successful in 1m20s
CI/CD / Build & Release (push) Has been skipped
2026-01-08 06:19:08 +01:00
cccee4294f Remove internal bug documentation from public repo
Some checks failed
CI/CD / Lint (push) Has been cancelled
CI/CD / Build & Release (push) Has been cancelled
CI/CD / Test (push) Has been cancelled
2026-01-08 06:18:20 +01:00
9688143176 Add detailed bug report for legal documentation
Some checks failed
CI/CD / Test (push) Successful in 1m14s
CI/CD / Build & Release (push) Has been cancelled
CI/CD / Lint (push) Has been cancelled
2026-01-08 06:16:49 +01:00
e821e131b4 Fix build script to read version from main.go
All checks were successful
CI/CD / Test (push) Successful in 1m14s
CI/CD / Lint (push) Successful in 1m21s
CI/CD / Build & Release (push) Has been skipped
2026-01-08 06:13:25 +01:00
15a60d2e71 v3.42.10: Code quality fixes
All checks were successful
CI/CD / Test (push) Successful in 1m14s
CI/CD / Lint (push) Successful in 1m22s
CI/CD / Build & Release (push) Successful in 3m12s
- Remove deprecated io/ioutil
- Fix os.DirEntry.ModTime() usage
- Remove unused fields and variables
- Fix ineffective assignments
- Fix error string formatting
2026-01-08 06:05:25 +01:00
9c65821250 v3.42.9: Fix all timeout bugs and deadlocks
All checks were successful
CI/CD / Test (push) Successful in 1m14s
CI/CD / Lint (push) Successful in 1m21s
CI/CD / Build & Release (push) Successful in 3m12s
CRITICAL FIXES:
- Encryption detection false positive (IsBackupEncrypted returned true for ALL files)
- 12 cmd.Wait() deadlocks fixed with channel-based context handling
- TUI timeout bugs: 60s->10min for safety checks, 15s->60s for DB listing
- diagnose.go timeouts: 60s->5min for tar/pg_restore operations
- Panic recovery added to parallel backup/restore goroutines
- Variable shadowing fix in restore/engine.go

These bugs caused pg_dump backups to fail through the TUI for months.
2026-01-08 05:56:31 +01:00
627061cdbb fix: restore automatic builds on tag push
All checks were successful
CI/CD / Test (push) Successful in 1m16s
CI/CD / Lint (push) Successful in 1m23s
CI/CD / Build & Release (push) Successful in 3m17s
2026-01-07 20:53:20 +01:00
e1a7c57e0f fix: CI runs only once - on release publish, not on tag push
All checks were successful
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m25s
CI/CD / Build & Release (push) Has been skipped
Removed duplicate CI triggers:
- Before: Ran on push to branches AND on tag push (doubled)
- After: Runs on push to branches OR when release is published

This prevents wasted CI resources and confusion.
2026-01-07 20:48:01 +01:00
22915102d4 CRITICAL FIX: Eliminate all hardcoded /tmp paths - respect WorkDir configuration
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m24s
CI/CD / Build & Release (push) Has been skipped
This is a critical bugfix release addressing multiple hardcoded temporary directory paths
that prevented proper use of the WorkDir configuration option.

PROBLEM:
Users configuring WorkDir (e.g., /u01/dba/tmp) for systems with small root filesystems
still experienced failures because critical operations hardcoded /tmp instead of respecting
the configured WorkDir. This made the WorkDir option essentially non-functional.

FIXED LOCATIONS:
1. internal/restore/engine.go:632 - CRITICAL: Used BackupDir instead of WorkDir for extraction
2. cmd/restore.go:354,834 - CLI restore/diagnose commands ignored WorkDir
3. cmd/migrate.go:208,347 - Migration commands hardcoded /tmp
4. internal/migrate/engine.go:120 - Migration engine ignored WorkDir
5. internal/config/config.go:224 - SwapFilePath hardcoded /tmp
6. internal/config/config.go:519 - Backup directory fallback hardcoded /tmp
7. internal/tui/restore_exec.go:161 - Debug logs hardcoded /tmp
8. internal/tui/settings.go:805 - Directory browser default hardcoded /tmp
9. internal/tui/restore_preview.go:474 - Display message hardcoded /tmp

NEW FEATURES:
- Added Config.GetEffectiveWorkDir() helper method
- WorkDir now respects WORK_DIR environment variable
- All temp operations now consistently use configured WorkDir with /tmp fallback

IMPACT:
- Restores on systems with small root disks now work properly with WorkDir configured
- Admins can control disk space usage for all temporary operations
- Debug logs, extraction dirs, swap files all respect WorkDir setting

Version: 3.42.1 (Critical Fix Release)
2026-01-07 20:41:53 +01:00
3653ced6da Bump version to 3.42.1
All checks were successful
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m23s
CI/CD / Build & Release (push) Successful in 3m13s
2026-01-07 15:41:08 +01:00
9743d571ce chore: Bump version to 3.42.0
All checks were successful
CI/CD / Test (push) Successful in 1m22s
CI/CD / Lint (push) Successful in 1m29s
CI/CD / Build & Release (push) Successful in 3m25s
2026-01-07 15:28:31 +01:00
c519f08ef2 feat: Add content-defined chunking deduplication
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m23s
CI/CD / Build & Release (push) Successful in 3m12s
- Gear hash CDC with 92%+ overlap on shifted data
- SHA-256 content-addressed chunk storage
- AES-256-GCM per-chunk encryption (optional)
- Gzip compression (default enabled)
- SQLite index for fast lookups
- JSON manifests with SHA-256 verification

Commands: dedup backup/restore/list/stats/delete/gc

Resistance is futile.
2026-01-07 15:02:41 +01:00
b99b05fedb ci: enable CGO for linux builds (required for SQLite catalog)
All checks were successful
CI/CD / Test (push) Successful in 1m13s
CI/CD / Lint (push) Successful in 1m21s
CI/CD / Build & Release (push) Successful in 3m15s
2026-01-07 13:48:39 +01:00
c5f2c3322c ci: remove GitHub mirror job (manual push instead)
All checks were successful
CI/CD / Test (push) Successful in 1m13s
CI/CD / Lint (push) Successful in 1m22s
CI/CD / Build & Release (push) Successful in 1m51s
2026-01-07 13:14:46 +01:00
56ad0824c7 ci: simplify JSON creation, add HTTP code debug
All checks were successful
CI/CD / Test (push) Successful in 1m15s
CI/CD / Lint (push) Successful in 1m22s
CI/CD / Build & Release (push) Successful in 1m51s
CI/CD / Mirror to GitHub (push) Has been skipped
2026-01-07 12:57:07 +01:00
ec65df2976 ci: add verbose output for binary upload debugging
All checks were successful
CI/CD / Test (push) Successful in 1m14s
CI/CD / Lint (push) Successful in 1m22s
CI/CD / Build & Release (push) Successful in 1m51s
CI/CD / Mirror to GitHub (push) Has been skipped
2026-01-07 12:55:08 +01:00
23cc1e0e08 ci: use jq to build JSON payload safely
All checks were successful
CI/CD / Test (push) Successful in 1m15s
CI/CD / Lint (push) Successful in 1m21s
CI/CD / Build & Release (push) Successful in 1m53s
CI/CD / Mirror to GitHub (push) Has been skipped
2026-01-07 12:52:59 +01:00
7770abab6f ci: fix JSON escaping in release creation
Some checks failed
CI/CD / Test (push) Successful in 1m15s
CI/CD / Lint (push) Successful in 1m22s
CI/CD / Build & Release (push) Failing after 1m49s
CI/CD / Mirror to GitHub (push) Has been skipped
2026-01-07 12:45:03 +01:00
f6a20f035b ci: simplified build-and-release job, add optional GitHub mirror
Some checks failed
CI/CD / Test (push) Successful in 1m15s
CI/CD / Lint (push) Successful in 1m22s
CI/CD / Build & Release (push) Failing after 1m52s
CI/CD / Mirror to GitHub (push) Has been skipped
- Removed matrix build + artifact passing (was failing)
- Single job builds all platforms and creates release
- Added optional mirror-to-github job (needs GITHUB_MIRROR_TOKEN var)
- Better error handling for release creation
2026-01-07 12:31:21 +01:00
28e54d118f ci: use github.token instead of secrets.GITEA_TOKEN
Some checks failed
CI/CD / Test (push) Successful in 1m14s
CI/CD / Lint (push) Successful in 1m23s
CI/CD / Release (push) Has been skipped
CI/CD / Build (amd64, darwin) (push) Failing after 30s
CI/CD / Build (amd64, linux) (push) Failing after 30s
CI/CD / Build (arm64, darwin) (push) Failing after 30s
CI/CD / Build (arm64, linux) (push) Failing after 31s
2026-01-07 12:20:41 +01:00
ab0ff3f28d ci: add release job with Gitea binary uploads
All checks were successful
CI/CD / Test (push) Successful in 1m15s
CI/CD / Lint (push) Successful in 1m21s
CI/CD / Build (amd64, darwin) (push) Successful in 42s
CI/CD / Build (amd64, linux) (push) Successful in 30s
CI/CD / Build (arm64, darwin) (push) Successful in 30s
CI/CD / Build (arm64, linux) (push) Successful in 31s
CI/CD / Release (push) Has been skipped
- Upload artifacts on tag pushes
- Create release via Gitea API
- Attach all platform binaries to release
2026-01-07 12:10:33 +01:00
b7dd325c51 chore: remove binaries from git tracking
All checks were successful
CI/CD / Test (push) Successful in 1m22s
CI/CD / Lint (push) Successful in 1m29s
CI/CD / Build (amd64, darwin) (push) Successful in 34s
CI/CD / Build (amd64, linux) (push) Successful in 34s
CI/CD / Build (arm64, darwin) (push) Successful in 34s
CI/CD / Build (arm64, linux) (push) Successful in 34s
- Add bin/dbbackup_* to .gitignore
- Binaries distributed via GitHub Releases instead
- Reduces repo size and eliminates large file warnings
2026-01-07 12:04:22 +01:00
2ed54141a3 chore: rebuild all platform binaries
Some checks failed
CI/CD / Test (push) Successful in 2m43s
CI/CD / Lint (push) Successful in 2m50s
CI/CD / Build (amd64, linux) (push) Has been cancelled
CI/CD / Build (arm64, darwin) (push) Has been cancelled
CI/CD / Build (arm64, linux) (push) Has been cancelled
CI/CD / Build (amd64, darwin) (push) Has been cancelled
2026-01-07 11:57:08 +01:00
495ee31247 docs: add comprehensive SYSTEMD.md installation guide
- Create dedicated SYSTEMD.md with full manual installation steps
- Cover security hardening, multiple instances, troubleshooting
- Document Prometheus exporter manual setup
- Simplify README systemd section with link to detailed guide
- Add SYSTEMD.md to documentation list
2026-01-07 11:55:20 +01:00
78e10f5057 fix: installer issues found during testing
- Remove invalid --config flag from exporter service template
- Change ReadOnlyPaths to ReadWritePaths for catalog access
- Add copyBinary() to install binary to /usr/local/bin (ProtectHome compat)
- Fix exporter status detection using direct systemctl check
- Add os/exec import for status check
2026-01-07 11:50:51 +01:00
f4a0e2d82c build: rebuild all platform binaries with dry-run fix
All checks were successful
CI/CD / Test (push) Successful in 2m49s
CI/CD / Lint (push) Successful in 2m50s
CI/CD / Build (amd64, darwin) (push) Successful in 1m58s
CI/CD / Build (amd64, linux) (push) Successful in 1m58s
CI/CD / Build (arm64, darwin) (push) Successful in 2m0s
CI/CD / Build (arm64, linux) (push) Successful in 1m59s
2026-01-07 11:40:10 +01:00
f66d19acb0 fix: allow dry-run install without root privileges
Some checks failed
CI/CD / Test (push) Successful in 2m53s
CI/CD / Build (amd64, darwin) (push) Has been cancelled
CI/CD / Build (amd64, linux) (push) Has been cancelled
CI/CD / Build (arm64, darwin) (push) Has been cancelled
CI/CD / Build (arm64, linux) (push) Has been cancelled
CI/CD / Lint (push) Has been cancelled
2026-01-07 11:37:13 +01:00
16f377e9b5 docs: update README with systemd and Prometheus metrics sections
Some checks failed
CI/CD / Test (push) Successful in 2m45s
CI/CD / Lint (push) Successful in 2m56s
CI/CD / Build (amd64, linux) (push) Has been cancelled
CI/CD / Build (arm64, darwin) (push) Has been cancelled
CI/CD / Build (arm64, linux) (push) Has been cancelled
CI/CD / Build (amd64, darwin) (push) Has been cancelled
- Add install/uninstall and metrics commands to command table
- Add Systemd Integration section with install examples
- Add Prometheus Metrics section with textfile and HTTP exporter docs
- Update Features list with systemd and metrics highlights
- Rebuild all platform binaries
2026-01-07 11:26:54 +01:00
7e32a0369d feat: add embedded systemd installer and Prometheus metrics
Some checks failed
CI/CD / Test (push) Successful in 2m42s
CI/CD / Lint (push) Successful in 2m50s
CI/CD / Build (amd64, darwin) (push) Successful in 2m0s
CI/CD / Build (amd64, linux) (push) Successful in 1m58s
CI/CD / Build (arm64, darwin) (push) Successful in 2m1s
CI/CD / Build (arm64, linux) (push) Has been cancelled
Systemd Integration:
- New 'dbbackup install' command creates service/timer units
- Supports single-database and cluster backup modes
- Automatic dbbackup user/group creation with proper permissions
- Hardened service units with security features
- Template units with configurable OnCalendar schedules
- 'dbbackup uninstall' for clean removal

Prometheus Metrics:
- 'dbbackup metrics export' for textfile collector format
- 'dbbackup metrics serve' runs HTTP exporter on port 9399
- Metrics: last_success_timestamp, rpo_seconds, backup_total, etc.
- Integration with node_exporter textfile collector
- --with-metrics flag during install

Technical:
- Systemd templates embedded with //go:embed
- Service units include ReadWritePaths, OOMScoreAdjust
- Metrics exporter caches with 30s TTL
- Graceful shutdown on SIGTERM
2026-01-07 11:18:09 +01:00
120ee33e3b build: v3.41.0 binaries with TUI cancellation fix
All checks were successful
CI/CD / Test (push) Successful in 2m44s
CI/CD / Lint (push) Successful in 2m51s
CI/CD / Build (amd64, darwin) (push) Successful in 1m58s
CI/CD / Build (amd64, linux) (push) Successful in 1m59s
CI/CD / Build (arm64, darwin) (push) Successful in 2m1s
CI/CD / Build (arm64, linux) (push) Successful in 1m59s
2026-01-07 09:55:08 +01:00
9f375621d1 fix(tui): enable Ctrl+C/ESC to cancel running backup/restore operations
PROBLEM: Users could not interrupt backup or restore operations through
the TUI interface. Pressing Ctrl+C or ESC did nothing during execution.

ROOT CAUSE:
- BackupExecutionModel ignored ALL key presses while running (only handled when done)
- RestoreExecutionModel returned tea.Quit but didn't cancel the context
- The operation goroutine kept running in the background with its own context

FIX:
- Added cancel context.CancelFunc to both execution models
- Create child context with WithCancel in New*Execution constructors
- Handle ctrl+c and esc during execution to call cancel()
- Show 'Cancelling...' status while waiting for graceful shutdown
- Show cancel hint in View: 'Press Ctrl+C or ESC to cancel'

The fix works because:
- exec.CommandContext(ctx) will SIGKILL the subprocess when ctx is cancelled
- pg_dump, pg_restore, psql, mysql all get terminated properly
- User sees immediate feedback that cancellation is in progress
2026-01-07 09:53:47 +01:00
9ad925191e build: v3.41.0 binaries with P0 security fixes
Some checks failed
CI/CD / Test (push) Successful in 2m45s
CI/CD / Lint (push) Successful in 2m51s
CI/CD / Build (amd64, darwin) (push) Successful in 1m59s
CI/CD / Build (arm64, darwin) (push) Has been cancelled
CI/CD / Build (arm64, linux) (push) Has been cancelled
CI/CD / Build (amd64, linux) (push) Has been cancelled
2026-01-07 09:46:49 +01:00
9d8a6e763e security: P0 fixes - SQL injection prevention + data race fix
- Add identifier validation for database names in PostgreSQL and MySQL
  - validateIdentifier() rejects names with invalid characters
  - quoteIdentifier() safely quotes identifiers with proper escaping
  - Max length: 63 chars (PostgreSQL), 64 chars (MySQL)
  - Only allows alphanumeric + underscores, must start with letter/underscore

- Fix data race in notification manager
  - Multiple goroutines were appending to shared error slice
  - Added errMu sync.Mutex to protect concurrent error collection

- Security improvements prevent:
  - SQL injection via malicious database names
  - CREATE DATABASE `foo`; DROP DATABASE production; --`
  - Race conditions causing lost or corrupted error data
2026-01-07 09:45:13 +01:00
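The identifier rules described above (alphanumeric plus underscore, letter/underscore first, length-capped) plus PostgreSQL-style quoting can be sketched like this. Function names and error strings are assumptions, not dbbackup's exact code:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// identPattern: only alphanumerics and underscores, starting with a letter
// or underscore -- the rule stated in the commit message.
var identPattern = regexp.MustCompile(`^[A-Za-z_][A-Za-z0-9_]*$`)

// validateIdentifier rejects names that could smuggle SQL into a
// CREATE/DROP DATABASE statement. maxLen would be 63 for PostgreSQL,
// 64 for MySQL.
func validateIdentifier(name string, maxLen int) error {
	if name == "" || len(name) > maxLen {
		return fmt.Errorf("identifier %q: empty or longer than %d chars", name, maxLen)
	}
	if !identPattern.MatchString(name) {
		return fmt.Errorf("identifier %q contains invalid characters", name)
	}
	return nil
}

// quoteIdentifier applies PostgreSQL-style quoting as a second layer of
// defense: embedded double quotes are doubled.
func quoteIdentifier(name string) string {
	return `"` + strings.ReplaceAll(name, `"`, `""`) + `"`
}

func main() {
	fmt.Println(validateIdentifier("prod_db", 63))                            // <nil>
	fmt.Println(validateIdentifier(`foo"; DROP DATABASE production; --`, 63)) // non-nil error
	fmt.Println(quoteIdentifier(`we"ird`))                                    // "we""ird"
}
```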
63b16eee8b build: v3.41.0 binaries with DB+Go specialist fixes
All checks were successful
CI/CD / Test (push) Successful in 2m41s
CI/CD / Lint (push) Successful in 2m50s
CI/CD / Build (amd64, darwin) (push) Successful in 1m58s
CI/CD / Build (amd64, linux) (push) Successful in 1m58s
CI/CD / Build (arm64, darwin) (push) Successful in 1m56s
CI/CD / Build (arm64, linux) (push) Successful in 1m58s
2026-01-07 08:59:53 +01:00
91228552fb fix(backup/restore): implement DB+Go specialist recommendations
P0: Add ON_ERROR_STOP=1 to psql (fail fast, not 2.6M errors)
P1: Fix pipe deadlock in streaming compression (goroutine+context)
P1: Handle SIGPIPE (exit 141) - report compressor as root cause
P2: Validate .dump files with pg_restore --list before restore
P2: Add fsync after streaming compression for durability

Fixes potential hung backups and improves error diagnostics.
2026-01-07 08:58:00 +01:00
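The P0 item — ON_ERROR_STOP=1 — means psql aborts on the first SQL error instead of plowing through millions of follow-on failures. A sketch of such an invocation; the helper name and surrounding flags are illustrative, not dbbackup's exact command line (all flags shown are real psql options):

```go
package main

import "fmt"

// psqlRestoreArgs builds a fail-fast psql invocation for restoring a
// plain-format SQL dump.
func psqlRestoreArgs(dbName, dumpPath string) []string {
	return []string{
		"--no-psqlrc", // ignore user startup files
		"-v", "ON_ERROR_STOP=1", // abort on the first SQL error
		"--single-transaction", // roll everything back on failure
		"-d", dbName,
		"-f", dumpPath,
	}
}

func main() {
	fmt.Println(psqlRestoreArgs("proddb", "/backups/proddb.sql"))
}
```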
9ee55309bd docs: update CHANGELOG for v3.41.0 pre-restore validation
Some checks failed
CI/CD / Test (push) Successful in 2m41s
CI/CD / Lint (push) Successful in 2m50s
CI/CD / Build (amd64, darwin) (push) Successful in 1m59s
CI/CD / Build (amd64, linux) (push) Successful in 1m57s
CI/CD / Build (arm64, darwin) (push) Successful in 1m58s
CI/CD / Build (arm64, linux) (push) Has been cancelled
2026-01-07 08:48:38 +01:00
0baf741c0b build: v3.40.0 binaries for all platforms
Some checks failed
CI/CD / Test (push) Successful in 2m44s
CI/CD / Lint (push) Successful in 2m47s
CI/CD / Build (amd64, darwin) (push) Successful in 1m58s
CI/CD / Build (amd64, linux) (push) Successful in 1m57s
CI/CD / Build (arm64, linux) (push) Has been cancelled
CI/CD / Build (arm64, darwin) (push) Has been cancelled
2026-01-07 08:36:26 +01:00
faace7271c fix(restore): add pre-validation for truncated SQL dumps
Some checks failed
CI/CD / Test (push) Successful in 2m42s
CI/CD / Build (amd64, darwin) (push) Has been cancelled
CI/CD / Build (amd64, linux) (push) Has been cancelled
CI/CD / Build (arm64, darwin) (push) Has been cancelled
CI/CD / Build (arm64, linux) (push) Has been cancelled
CI/CD / Lint (push) Has been cancelled
- Validate SQL dump files BEFORE attempting restore
- Detect unterminated COPY blocks that cause 'syntax error' failures
- Cluster restore now pre-validates ALL dumps upfront (fail-fast)
- Saves hours of wasted restore time on corrupted backups

The truncated resydb.sql.gz was causing 49min restore attempts
that failed with 2.6M errors. Now fails immediately with clear
error message showing which table's COPY block was truncated.
2026-01-07 08:34:10 +01:00
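Detecting an unterminated COPY block is a single pass over the dump: every `COPY ... FROM stdin;` must eventually be closed by a `\.` line, and a dump that ends while still inside a block is truncated. A sketch of the idea (the real check would also handle compressed input and report offsets; names here are illustrative):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// findUnterminatedCopy scans a plain-format SQL dump for a
// "COPY ... FROM stdin;" block that never reaches its `\.` terminator --
// the signature of a truncated dump.
func findUnterminatedCopy(dump string) (table string, truncated bool) {
	inCopy := false
	current := ""
	sc := bufio.NewScanner(strings.NewReader(dump))
	for sc.Scan() {
		line := sc.Text()
		switch {
		case !inCopy && strings.HasPrefix(line, "COPY ") && strings.HasSuffix(line, "FROM stdin;"):
			inCopy = true
			current = strings.Fields(line)[1] // table name follows "COPY"
		case inCopy && line == `\.`:
			inCopy = false
		}
	}
	if inCopy {
		return current, true
	}
	return "", false
}

func main() {
	good := "COPY public.users (id, name) FROM stdin;\n1\talice\n\\.\n"
	bad := "COPY public.orders (id) FROM stdin;\n1\n2\n" // terminator missing
	fmt.Println(findUnterminatedCopy(good))
	fmt.Println(findUnterminatedCopy(bad))
}
```

Failing this check up front is what turns a 49-minute doomed restore into an immediate, named error.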
c3ade7a693 Include pre-built binaries for distribution
All checks were successful
CI/CD / Test (push) Successful in 2m36s
CI/CD / Lint (push) Successful in 2m45s
CI/CD / Build (amd64, darwin) (push) Successful in 1m53s
CI/CD / Build (amd64, linux) (push) Successful in 1m56s
CI/CD / Build (arm64, darwin) (push) Successful in 1m54s
CI/CD / Build (arm64, linux) (push) Successful in 1m54s
2026-01-06 15:32:47 +01:00
151 changed files with 2103 additions and 19280 deletions


@@ -37,90 +37,6 @@ jobs:
- name: Coverage summary
run: go tool cover -func=coverage.out | tail -1
test-integration:
name: Integration Tests
runs-on: ubuntu-latest
needs: [test]
container:
image: golang:1.24-bookworm
services:
postgres:
image: postgres:15
env:
POSTGRES_PASSWORD: postgres
POSTGRES_DB: testdb
ports: ['5432:5432']
mysql:
image: mysql:8
env:
MYSQL_ROOT_PASSWORD: mysql
MYSQL_DATABASE: testdb
ports: ['3306:3306']
steps:
- name: Checkout code
env:
TOKEN: ${{ github.token }}
run: |
apt-get update && apt-get install -y -qq git ca-certificates postgresql-client default-mysql-client
git config --global --add safe.directory "$GITHUB_WORKSPACE"
git init
git remote add origin "https://${TOKEN}@git.uuxo.net/${GITHUB_REPOSITORY}.git"
git fetch --depth=1 origin "${GITHUB_SHA}"
git checkout FETCH_HEAD
- name: Wait for databases
run: |
echo "Waiting for PostgreSQL..."
for i in $(seq 1 30); do
pg_isready -h postgres -p 5432 && break || sleep 1
done
echo "Waiting for MySQL..."
for i in $(seq 1 30); do
mysqladmin ping -h mysql -u root -pmysql --silent && break || sleep 1
done
- name: Build dbbackup
run: go build -o dbbackup .
- name: Test PostgreSQL backup/restore
env:
PGHOST: postgres
PGUSER: postgres
PGPASSWORD: postgres
run: |
# Create test data
psql -h postgres -c "CREATE TABLE test_table (id SERIAL PRIMARY KEY, name TEXT);"
psql -h postgres -c "INSERT INTO test_table (name) VALUES ('test1'), ('test2'), ('test3');"
# Run backup - database name is positional argument
mkdir -p /tmp/backups
./dbbackup backup single testdb --db-type postgres --host postgres --user postgres --password postgres --backup-dir /tmp/backups --no-config --allow-root
# Verify backup file exists
ls -la /tmp/backups/
- name: Test MySQL backup/restore
env:
MYSQL_HOST: mysql
MYSQL_USER: root
MYSQL_PASSWORD: mysql
run: |
# Create test data
mysql -h mysql -u root -pmysql testdb -e "CREATE TABLE test_table (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(255));"
mysql -h mysql -u root -pmysql testdb -e "INSERT INTO test_table (name) VALUES ('test1'), ('test2'), ('test3');"
# Run backup - positional arg is db to backup, --database is connection db
mkdir -p /tmp/mysql_backups
./dbbackup backup single testdb --db-type mysql --host mysql --port 3306 --user root --password mysql --database testdb --backup-dir /tmp/mysql_backups --no-config --allow-root
# Verify backup file exists
ls -la /tmp/mysql_backups/
- name: Test verify-locks command
env:
PGHOST: postgres
PGUSER: postgres
PGPASSWORD: postgres
run: |
./dbbackup verify-locks --host postgres --db-type postgres --no-config --allow-root | tee verify-locks.out
grep -q 'max_locks_per_transaction' verify-locks.out
lint:
name: Lint
runs-on: ubuntu-latest
@@ -160,7 +76,6 @@ jobs:
git init
git remote add origin "https://${TOKEN}@git.uuxo.net/${GITHUB_REPOSITORY}.git"
git fetch --depth=1 origin "${GITHUB_SHA}"
git fetch --tags origin
git checkout FETCH_HEAD
- name: Build all platforms


@@ -1,75 +0,0 @@
# Backup of .gitea/workflows/ci.yml — created before adding integration-verify-locks job
# timestamp: 2026-01-23
# CI/CD Pipeline for dbbackup (backup copy)
# Source: .gitea/workflows/ci.yml
# Created: 2026-01-23
name: CI/CD
on:
push:
branches: [main, master, develop]
tags: ['v*']
pull_request:
branches: [main, master]
jobs:
test:
name: Test
runs-on: ubuntu-latest
container:
image: golang:1.24-bookworm
steps:
- name: Checkout code
env:
TOKEN: ${{ github.token }}
run: |
apt-get update && apt-get install -y -qq git ca-certificates
git config --global --add safe.directory "$GITHUB_WORKSPACE"
git init
git remote add origin "https://${TOKEN}@git.uuxo.net/${GITHUB_REPOSITORY}.git"
git fetch --depth=1 origin "${GITHUB_SHA}"
git checkout FETCH_HEAD
- name: Download dependencies
run: go mod download
- name: Run tests
run: go test -race -coverprofile=coverage.out ./...
- name: Coverage summary
run: go tool cover -func=coverage.out | tail -1
lint:
name: Lint
runs-on: ubuntu-latest
container:
image: golang:1.24-bookworm
steps:
- name: Checkout code
env:
TOKEN: ${{ github.token }}
run: |
apt-get update && apt-get install -y -qq git ca-certificates
git config --global --add safe.directory "$GITHUB_WORKSPACE"
git init
git remote add origin "https://${TOKEN}@git.uuxo.net/${GITHUB_REPOSITORY}.git"
git fetch --depth=1 origin "${GITHUB_SHA}"
git checkout FETCH_HEAD
- name: Install and run golangci-lint
run: |
go install github.com/golangci/golangci-lint/v2/cmd/golangci-lint@v2.8.0
golangci-lint run --timeout=5m ./...
build-and-release:
name: Build & Release
runs-on: ubuntu-latest
needs: [test, lint]
if: startsWith(github.ref, 'refs/tags/v')
container:
image: golang:1.24-bookworm
steps: |
<trimmed for backup>

.gitignore

@@ -13,7 +13,8 @@ logs/
/dbbackup
/dbbackup_*
!dbbackup.png
bin/
bin/dbbackup_*
bin/*.exe
# Ignore development artifacts
*.swp
@@ -37,6 +38,3 @@ CRITICAL_BUGS_FIXED.md
LEGAL_DOCUMENTATION.md
LEGAL_*.md
legal/
# Release binaries (uploaded via gh release, not git)
release/dbbackup_*


@@ -236,8 +236,8 @@ dbbackup cloud download \
# Manual delete
dbbackup cloud delete "azure://prod-backups/postgres/old_backup.sql?account=myaccount&key=KEY"
# Automatic cleanup (keep last 7 days, min 5 backups)
dbbackup cleanup "azure://prod-backups/postgres/?account=myaccount&key=KEY" --retention-days 7 --min-backups 5
# Automatic cleanup (keep last 7 backups)
dbbackup cleanup "azure://prod-backups/postgres/?account=myaccount&key=KEY" --keep 7
```
### Scheduled Backups
@@ -253,7 +253,7 @@ dbbackup backup single production_db \
--compression 9
# Cleanup old backups
dbbackup cleanup "azure://prod-backups/postgres/?account=myaccount&key=${AZURE_STORAGE_KEY}" --retention-days 30 --min-backups 5
dbbackup cleanup "azure://prod-backups/postgres/?account=myaccount&key=${AZURE_STORAGE_KEY}" --keep 30
```
**Crontab:**
@@ -385,7 +385,7 @@ Tests include:
### 4. Reliability
- Test **restore procedures** regularly
- Use **retention policies**: `--retention-days 30`
- Use **retention policies**: `--keep 30`
- Enable **soft delete** in Azure (30-day recovery)
- Monitor backup success with Azure Monitor


@@ -5,376 +5,6 @@ All notable changes to dbbackup will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [4.1.2] - 2026-01-27
### Added
- **`--socket` flag for MySQL/MariaDB** - Connect via Unix socket instead of TCP/IP
- Usage: `dbbackup backup single mydb --db-type mysql --socket /var/run/mysqld/mysqld.sock`
- Works for both backup and restore operations
- Supports socket auth (no password required with proper permissions)
### Fixed
- **Socket path as --host now works** - If `--host` starts with `/`, it's auto-detected as a socket path
- Example: `--host /var/run/mysqld/mysqld.sock` now works correctly instead of DNS lookup error
- Auto-converts to `--socket` internally
## [4.1.1] - 2026-01-25
### Added
- **`dbbackup_build_info` metric** - Exposes version and git commit as Prometheus labels
- Useful for tracking deployed versions across a fleet
- Labels: `server`, `version`, `commit`
### Fixed
- **Documentation clarification**: The `pitr_base` value for `backup_type` label is auto-assigned
by `dbbackup pitr base` command. CLI `--backup-type` flag only accepts `full` or `incremental`.
This was causing confusion in deployments.
## [4.1.0] - 2026-01-25
### Added
- **Backup Type Tracking**: All backup metrics now include a `backup_type` label
(`full`, `incremental`, or `pitr_base` for PITR base backups)
- **PITR Metrics**: Complete Point-in-Time Recovery monitoring
- `dbbackup_pitr_enabled` - Whether PITR is enabled (1/0)
- `dbbackup_pitr_archive_lag_seconds` - Seconds since last WAL/binlog archived
- `dbbackup_pitr_chain_valid` - WAL/binlog chain integrity (1=valid)
- `dbbackup_pitr_gap_count` - Number of gaps in archive chain
- `dbbackup_pitr_archive_count` - Total archived segments
- `dbbackup_pitr_archive_size_bytes` - Total archive storage
- `dbbackup_pitr_recovery_window_minutes` - Estimated PITR coverage
- **PITR Alerting Rules**: 6 new alerts for PITR monitoring
- PITRArchiveLag, PITRChainBroken, PITRGapsDetected, PITRArchiveStalled,
PITRStorageGrowing, PITRDisabledUnexpectedly
- **`dbbackup_backup_by_type` metric** - Count backups by type
### Changed
- `dbbackup_backup_total` type changed from counter to gauge for snapshot-based collection
## [3.42.110] - 2026-01-24
### Improved - Code Quality & Testing
- **Cleaned up 40+ unused code items** found by staticcheck:
- Removed unused functions, variables, struct fields, and type aliases
- Fixed SA4006 warning (unused value assignment in restore engine)
- All packages now pass staticcheck with zero warnings
- **Added golangci-lint integration** to Makefile:
- New `make golangci-lint` target with auto-install
- Updated `lint` target to include golangci-lint
- Updated `install-tools` to install golangci-lint
- **New unit tests** for improved coverage:
- `internal/config/config_test.go` - Tests for config initialization, database types, env helpers
- `internal/security/security_test.go` - Tests for checksums, path validation, rate limiting, audit logging
## [3.42.109] - 2026-01-24
### Added - Grafana Dashboard & Monitoring Improvements
- **Enhanced Grafana dashboard** with comprehensive improvements:
  - Added dashboard description for better discoverability
  - New collapsible "Backup Overview" row for organization
  - New **Verification Status** panel showing last backup verification state
  - Added descriptions to all 17 panels for better understanding
  - Enabled shared crosshair (graphTooltip=1) for correlated analysis
  - Added "monitoring" tag for dashboard discovery
- **New Prometheus alerting rules** (`grafana/alerting-rules.yaml`):
  - `DBBackupRPOCritical` - No backup in 24+ hours (critical)
  - `DBBackupRPOWarning` - No backup in 12+ hours (warning)
  - `DBBackupFailure` - Backup failures detected
  - `DBBackupNotVerified` - Backup not verified in 24h
  - `DBBackupDedupRatioLow` - Dedup ratio below 10%
  - `DBBackupDedupDiskGrowth` - Rapid storage growth prediction
  - `DBBackupExporterDown` - Metrics exporter not responding
  - `DBBackupMetricsStale` - Metrics not updated in 10+ minutes
  - `DBBackupNeverSucceeded` - Database never backed up successfully
### Changed
- **Grafana dashboard layout fixes**:
  - Fixed overlapping dedup panels (y: 31/36 → 22/27/32)
  - Adjusted top row panel widths for better balance (5+5+5+4+5=24)
- **Added Makefile** for streamlined development workflow:
  - `make build` - optimized binary with ldflags
  - `make test`, `make race`, `make cover` - testing targets
  - `make lint` - runs vet + staticcheck
  - `make all-platforms` - cross-platform builds
### Fixed
- Removed deprecated `netErr.Temporary()` call in cloud retry logic (Go 1.18+)
- Fixed staticcheck warnings for redundant `fmt.Sprintf` calls
- Logger optimizations: buffer pooling, early level check, pre-allocated maps
- Clone engine now validates disk space before operations
## [3.42.108] - 2026-01-24
### Added - TUI Tools Expansion
- **Table Sizes** - view top 100 tables sorted by size with row counts, data/index breakdown
  - Supports PostgreSQL (`pg_stat_user_tables`) and MySQL (`information_schema.TABLES`)
  - Shows total/data/index sizes, row counts, schema prefix for non-public schemas
- **Kill Connections** - manage active database connections
  - List all active connections with PID, user, database, state, query preview, duration
  - Kill single connection or all connections to a specific database
  - Useful before restore operations to clear blocking sessions
  - Supports PostgreSQL (`pg_terminate_backend`) and MySQL (`KILL`)
- **Drop Database** - safely drop databases with double confirmation
  - Lists user databases (system DBs hidden: postgres, template0/1, mysql, sys, etc.)
  - Requires two confirmations: y/n, then typing the full database name
  - Auto-terminates connections before drop
  - Supports PostgreSQL and MySQL
## [3.42.107] - 2026-01-24
### Added - Tools Menu & Blob Statistics
- **New "Tools" submenu in TUI** - centralized access to utility functions
  - Blob Statistics - scan database for bytea/blob columns with size analysis
  - Blob Extract - externalize large objects (coming soon)
  - Dedup Store Analyze - storage savings analysis (coming soon)
  - Verify Backup Integrity - backup verification
  - Catalog Sync - synchronize local catalog (coming soon)
- **New `dbbackup blob stats` CLI command** - analyze blob/bytea columns
  - Scans `information_schema` for binary column types
  - Shows row counts, total size, average size, max size per column
  - Identifies tables storing large binary data for optimization
  - Supports both PostgreSQL (bytea, oid) and MySQL (blob, mediumblob, longblob)
  - Provides recommendations for databases with >100MB blob data
## [3.42.106] - 2026-01-24
### Fixed - Cluster Restore Resilience & Performance
- **Fixed cluster restore failing on missing roles** - harmless "role does not exist" errors no longer abort restore
  - Added role-related errors to `isIgnorableError()` with warning log
  - Removed `ON_ERROR_STOP=1` from psql commands (pre-validation catches real corruption)
  - Restore now continues gracefully when referenced roles don't exist in target cluster
  - Previously caused 12h+ restores to fail at 94% completion
- **Fixed TUI output scrambling in screen/tmux sessions** - added terminal detection
  - Uses `go-isatty` to detect non-interactive terminals (backgrounded screen sessions, pipes)
  - Added `viewSimple()` methods for clean line-by-line output without ANSI escape codes
  - TUI menu now shows warning when running in a non-interactive terminal
### Changed - Consistent Parallel Compression (pgzip)
- **Migrated all gzip operations to parallel pgzip** - 2-4x faster compression/decompression on multi-core systems
  - Systematic audit found 17 files using standard `compress/gzip`
  - All converted to `github.com/klauspost/pgzip` for consistent performance
- **Files updated**:
  - `internal/backup/`: incremental_tar.go, incremental_extract.go, incremental_mysql.go
  - `internal/wal/`: compression.go (CompressWALFile, DecompressWALFile, VerifyCompressedFile)
  - `internal/engine/`: clone.go, snapshot_engine.go, mysqldump.go, binlog/file_target.go
  - `internal/restore/`: engine.go, safety.go, formats.go, error_report.go
  - `internal/pitr/`: mysql.go, binlog.go
  - `internal/dedup/`: store.go
  - `cmd/`: dedup.go, placeholder.go
- **Benefit**: Large backup/restore operations now fully utilize available CPU cores
## [3.42.105] - 2026-01-23
### Changed - TUI Visual Cleanup
- **Removed ASCII box characters** from backup/restore success/failure banners
  - Replaced `╔═╗║╚╝` boxes with clean `═══` horizontal line separators
  - Cleaner, more modern appearance in terminal output
- **Consolidated duplicate styles** in TUI components
  - Unified check status styles (passed/failed/warning/pending) into global definitions
  - Reduces code duplication across restore preview and diagnose views
## [3.42.98] - 2026-01-23
### Fixed - Critical Bug Fixes for v3.42.97
- **Fixed CGO/SQLite build issue** - binaries now work when compiled with `CGO_ENABLED=0`
  - Switched from `github.com/mattn/go-sqlite3` (requires CGO) to `modernc.org/sqlite` (pure Go)
  - All cross-compiled binaries now work correctly on all platforms
  - No more "Binary was compiled with 'CGO_ENABLED=0', go-sqlite3 requires cgo to work" errors
- **Fixed MySQL positional database argument being ignored**
  - `dbbackup backup single <dbname> --db-type mysql` now correctly uses `<dbname>`
  - Previously defaulted to 'postgres' regardless of positional argument
  - Also fixed in `backup sample` command
## [3.42.97] - 2026-01-23
### Added - Bandwidth Throttling for Cloud Uploads
- **New `--bandwidth-limit` flag for cloud operations** - prevent network saturation during business hours
  - Works with S3, GCS, Azure Blob Storage, MinIO, Backblaze B2
  - Supports human-readable formats:
    - `10MB/s`, `50MiB/s` - megabytes per second
    - `100KB/s`, `500KiB/s` - kilobytes per second
    - `1GB/s` - gigabytes per second
    - `100Mbps` - megabits per second (for network-minded users)
    - `unlimited` or `0` - no limit (default)
  - Environment variable: `DBBACKUP_BANDWIDTH_LIMIT`
- **Example usage**:
  ```bash
  # Limit upload to 10 MB/s during business hours
  dbbackup cloud upload backup.dump --bandwidth-limit 10MB/s

  # Environment variable for all operations
  export DBBACKUP_BANDWIDTH_LIMIT=50MiB/s
  ```
- **Implementation**: Token-bucket style throttling with 100ms windows for smooth rate limiting
- **DBA-requested feature**: avoid saturating the production network during scheduled backups
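The formats above all normalize to a single bytes-per-second budget before throttling. A sketch of that normalization in plain shell (illustrative only; `to_bytes_per_sec` is an invented helper, not dbbackup's parser):

```shell
# Normalize a human-readable bandwidth limit to bytes per second.
# KB/MB/GB are decimal (powers of 1000), KiB/MiB are binary (powers of 1024),
# and Mbps is megabits, so it divides by 8 to get bytes.
to_bytes_per_sec() {
  case "$1" in
    unlimited|0) echo 0 ;;                              # no throttling
    *KiB/s) echo $(( ${1%KiB/s} * 1024 )) ;;
    *MiB/s) echo $(( ${1%MiB/s} * 1048576 )) ;;
    *KB/s)  echo $(( ${1%KB/s} * 1000 )) ;;
    *MB/s)  echo $(( ${1%MB/s} * 1000000 )) ;;
    *GB/s)  echo $(( ${1%GB/s} * 1000000000 )) ;;
    *Mbps)  echo $(( ${1%Mbps} * 1000000 / 8 )) ;;      # bits -> bytes
    *) echo "unparseable bandwidth limit: $1" >&2; return 1 ;;
  esac
}
```

With 100ms token-bucket windows, the per-window byte budget is simply this value divided by 10.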
## [3.42.96] - 2025-02-01
### Changed - Complete Elimination of Shell tar/gzip Dependencies
- **All tar/gzip operations now 100% in-process** - ZERO shell dependencies for backup/restore
  - Removed ALL remaining `exec.Command("tar", ...)` calls
  - Removed ALL remaining `exec.Command("gzip", ...)` calls
  - Systematic code audit found and eliminated:
    - `diagnose.go`: Replaced `tar -tzf` test with direct file open check
    - `large_restore_check.go`: Replaced `gzip -t` and `gzip -l` with in-process pgzip verification
    - `pitr/restore.go`: Replaced `tar -xf` with in-process tar extraction
- **Benefits**:
  - No external tool dependencies (works in minimal containers)
  - 2-4x faster on multi-core systems using parallel pgzip
  - More reliable error handling with Go-native errors
  - Consistent behavior across all platforms
  - Reduced attack surface (no shell spawning)
- **Verification**: `strace` and `ps aux` show no tar/gzip/gunzip processes during backup/restore
- **Note**: Docker drill container commands still use gunzip for in-container operations (intentional)
## [Unreleased]
### Added - Single Database Extraction from Cluster Backups (CLI + TUI)
- **Extract and restore individual databases from cluster backups** - selective restore without full cluster restoration
- **CLI Commands**:
  - **List databases**: `dbbackup restore cluster backup.tar.gz --list-databases`
    - Shows all databases in cluster backup with sizes
    - Fast scan without full extraction
  - **Extract single database**: `dbbackup restore cluster backup.tar.gz --database myapp --output-dir /tmp/extract`
    - Extracts only the specified database dump
    - No restore, just file extraction
  - **Restore single database from cluster**: `dbbackup restore cluster backup.tar.gz --database myapp --confirm`
    - Extracts and restores only one database
    - Much faster than full cluster restore when you only need one database
  - **Rename on restore**: `dbbackup restore cluster backup.tar.gz --database myapp --target myapp_test --confirm`
    - Restore with different database name (useful for testing)
  - **Extract multiple databases**: `dbbackup restore cluster backup.tar.gz --databases "app1,app2,app3" --output-dir /tmp/extract`
    - Comma-separated list of databases to extract
- **TUI Support**:
  - Press **'s'** on any cluster backup in archive browser to select individual databases
  - New **ClusterDatabaseSelector** view shows all databases with sizes
  - Navigate with arrow keys, select with Enter
  - Automatic handling when a cluster backup is selected in single restore mode
  - Full restore preview and confirmation workflow
- **Benefits**:
  - Faster restores (extract only what you need)
  - Less disk space usage during restore
  - Easy database migration/copying
  - Better testing workflow
  - Selective disaster recovery
### Performance - Cluster Restore Optimization
- **Eliminated duplicate archive extraction in cluster restore** - saves 30-50% time on large restores
  - Previously: Archive was extracted twice (once in preflight validation, once in actual restore)
  - Now: Archive extracted once and reused for both validation and restore
  - **Time savings**:
    - 50 GB cluster: ~3-6 minutes faster
    - 10 GB cluster: ~1-2 minutes faster
    - Small clusters (<5 GB): ~30 seconds faster
  - Optimization automatically enabled when `--diagnose` flag is used
  - New `ValidateAndExtractCluster()` performs combined validation + extraction
  - `RestoreCluster()` accepts optional `preExtractedPath` parameter to reuse extracted directory
  - Disk space checks intelligently skipped when using pre-extracted directory
  - Maintains backward compatibility - works with and without pre-extraction
  - Log output shows optimization: `"Using pre-extracted cluster directory ... optimization: skipping duplicate extraction"`
### Improved - Archive Validation
- **Enhanced tar.gz validation with stream-based checks**
  - Fast header-only validation (validates gzip + tar structure without full extraction)
  - Checks gzip magic bytes (0x1f 0x8b) and tar header signature
  - Reduces preflight validation time from minutes to seconds on large archives
  - Falls back to full extraction only when necessary (with `--diagnose`)
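The same header-only idea can be reproduced with standard tools, which is useful for sanity-checking archives by hand. A minimal sketch (assumes a POSIX shell with `od`, `dd`, and `gzip`; this is not dbbackup's code):

```shell
# Header-only tar.gz sanity check: verify the gzip magic bytes (0x1f 0x8b),
# then peek at the "ustar" magic at offset 257 of the decompressed stream.
# No full extraction is performed.
check_targz() {
  archive=$1
  magic=$(head -c 2 "$archive" | od -An -tx1 | tr -d ' \n')
  [ "$magic" = "1f8b" ] || { echo "invalid: no gzip magic"; return 1; }
  tar_magic=$(gzip -dc "$archive" 2>/dev/null | dd bs=1 skip=257 count=5 2>/dev/null)
  [ "$tar_magic" = "ustar" ] || { echo "invalid: no tar header"; return 1; }
  echo "ok"
}
```

Note that this only validates the leading headers; a truncated archive can still pass, which is why the full `--diagnose` extraction path exists.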
### Added - PostgreSQL lock verification (CLI + preflight)
- **`dbbackup verify-locks`** — new CLI command that probes PostgreSQL GUCs (`max_locks_per_transaction`, `max_connections`, `max_prepared_transactions`) and prints total lock capacity plus actionable restore guidance.
- **Integrated into preflight checks** — preflight now warns/fails when lock settings are insufficient and provides exact remediation commands and recommended restore flags (e.g. `--jobs 1 --parallel-dbs 1`).
- **Implemented in Go (replaces `verify_postgres_locks.sh`)** with robust parsing, sudo/`psql` fallback and unit-tested decision logic.
- **Files:** `cmd/verify_locks.go`, `internal/checks/locks.go`, `internal/checks/locks_test.go`, `internal/checks/preflight.go`.
- **Why:** Prevents repeated parallel-restore failures by surfacing lock-capacity issues early and providing bulletproof guidance.
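For reference, the capacity that `verify-locks` reports follows PostgreSQL's documented lock-table sizing: roughly `max_locks_per_transaction * (max_connections + max_prepared_transactions)` slots. A minimal sketch of that arithmetic (the 10000 threshold here is illustrative, not the cutoff in `internal/checks/locks.go`):

```shell
# Total shared lock-table slots, per PostgreSQL's sizing formula.
lock_capacity() {
  echo $(( $1 * ($2 + $3) ))  # locks_per_xact * (max_conns + max_prepared)
}

cap=$(lock_capacity 64 100 0)  # PostgreSQL defaults: 64, 100, 0
if [ "$cap" -lt 10000 ]; then
  echo "low lock capacity ($cap): use --jobs 1 --parallel-dbs 1 or raise max_locks_per_transaction"
fi
```

With the stock defaults this yields 6400 slots, which is why parallel restores of schema-heavy clusters run out of locks so easily.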
## [3.42.74] - 2026-01-20 "Resource Profile System + Critical Ctrl+C Fix"
### Critical Bug Fix
- **Fixed Ctrl+C not working in TUI backup/restore** - context cancellation was broken in TUI mode
  - `executeBackupWithTUIProgress()` and `executeRestoreWithTUIProgress()` created new contexts with `WithCancel(parentCtx)`
  - When the user pressed Ctrl+C, `model.cancel()` was called on the parent context but execution had a separate context
  - Fixed by using the parent context directly instead of creating a new one
  - Ctrl+C/ESC/q now properly propagate cancellation to running operations
  - Users can now interrupt long-running TUI operations
### Added - Resource Profile System
- **`--profile` flag for restore operations** with three presets:
  - **Conservative** (`--profile=conservative`): Single-threaded (`--parallel=1`), minimal memory usage
    - Best for resource-constrained servers, shared hosting, or when "out of shared memory" errors occur
    - Automatically enables `LargeDBMode` for better resource management
  - **Balanced** (default): Auto-detect resources, moderate parallelism
    - Good default for most scenarios
  - **Aggressive** (`--profile=aggressive`): Maximum parallelism, all available resources
    - Best for dedicated database servers with ample resources
  - **Potato** (`--profile=potato`): Easter egg 🥔, same as conservative
- **Profile system applies to both CLI and TUI**:
  - CLI: `dbbackup restore cluster backup.tar.gz --profile=conservative --confirm`
  - TUI: Automatically uses conservative profile for safer interactive operation
- **User overrides supported**: `--jobs` and `--parallel-dbs` flags override profile settings
- **New `internal/config/profile.go`** module:
  - `GetRestoreProfile(name)` - Returns profile settings
  - `ApplyProfile(cfg, profile, jobs, parallelDBs)` - Applies profile with overrides
  - `GetProfileDescription(name)` - Human-readable descriptions
  - `ListProfiles()` - All available profiles
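How a profile translates into concrete parallelism can be sketched like this (the profile names mirror the presets above, but the arithmetic is an assumption for illustration, not the actual `ApplyProfile` logic):

```shell
# Map a restore profile plus detected core count to a parallel job count.
profile_jobs() {
  profile=$1; cores=$2
  case "$profile" in
    conservative|potato) echo 1 ;;         # single-threaded, minimal memory
    aggressive)          echo "$cores" ;;  # use all available cores
    *)                                      # balanced: roughly half the cores
      j=$(( cores / 2 )); [ "$j" -lt 1 ] && j=1; echo "$j" ;;
  esac
}
```

Explicit `--jobs`/`--parallel-dbs` flags would take precedence over whatever the profile computes.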
### Added - PostgreSQL Diagnostic Tools
- **`diagnose_postgres_memory.sh`** - Comprehensive memory and resource analysis script:
  - System memory overview with usage percentages and warnings
  - Top 15 memory-consuming processes
  - PostgreSQL-specific memory configuration analysis
  - Current locks and connections monitoring
  - Shared memory segments inspection
  - Disk space and swap usage checks
  - Identifies other resource consumers (Nessus, Elastic Agent, monitoring tools)
  - Smart recommendations based on findings
  - Detects temp file usage (indicator of low work_mem)
- **`fix_postgres_locks.sh`** - PostgreSQL lock configuration helper:
  - Automatically increases `max_locks_per_transaction` to 4096
  - Shows current configuration before applying changes
  - Calculates total lock capacity
  - Provides restart commands for different PostgreSQL setups
  - References diagnostic tool for comprehensive analysis
### Added - Documentation
- **`RESTORE_PROFILES.md`** - Complete profile guide with real-world scenarios:
  - Profile comparison table
  - When to use each profile
  - Override examples
  - Troubleshooting guide for "out of shared memory" errors
  - Integration with diagnostic tools
- **`email_infra_team.txt`** - Admin communication template (German):
  - Analysis results template
  - Problem identification section
  - Three solution variants (temporary, permanent, workaround)
  - Includes diagnostic tool references
### Changed - TUI Improvements
- **TUI mode defaults to conservative profile** for safer operation
  - Interactive users benefit from stability over speed
  - Prevents resource exhaustion on shared systems
  - Can be overridden with an environment variable: `export RESOURCE_PROFILE=balanced`
### Fixed
- Context cancellation in TUI backup operations (critical)
- Context cancellation in TUI restore operations (critical)
- Better error diagnostics for "out of shared memory" errors
- Improved resource detection and management
### Technical Details
- Profile system respects explicit user flags (`--jobs`, `--parallel-dbs`)
- Conservative profile sets `cfg.LargeDBMode = true` automatically
- TUI profile selection logged when `Debug` mode enabled
- All profiles support both single and cluster restore operations
## [3.42.50] - 2026-01-16 "Ctrl+C Signal Handling Fix"
### Fixed - Proper Ctrl+C/SIGINT Handling in TUI


@@ -573,7 +573,7 @@ dbbackup cleanup minio://test-backups/test/ --retention-days 7 --dry-run
 3. **Use compression:**
    ```bash
-   --compression 6      # Reduces upload size
+   --compression gzip   # Reduces upload size
    ```
### Reliability
@@ -693,7 +693,7 @@ Error: checksum mismatch: expected abc123, got def456
 for db in db1 db2 db3; do
   dbbackup backup single $db \
     --cloud s3://production-backups/daily/$db/ \
-    --compression 6
+    --compression gzip
 done
# Cleanup old backups (keep 30 days, min 10 backups)


@@ -293,8 +293,8 @@ dbbackup cloud download \
 # Manual delete
 dbbackup cloud delete "gs://prod-backups/postgres/old_backup.sql"
-# Automatic cleanup (keep last 7 days, min 5 backups)
-dbbackup cleanup "gs://prod-backups/postgres/" --retention-days 7 --min-backups 5
+# Automatic cleanup (keep last 7 backups)
+dbbackup cleanup "gs://prod-backups/postgres/" --keep 7
 ```
### Scheduled Backups
@@ -310,7 +310,7 @@ dbbackup backup single production_db \
   --compression 9
 # Cleanup old backups
-dbbackup cleanup "gs://prod-backups/postgres/" --retention-days 30 --min-backups 5
+dbbackup cleanup "gs://prod-backups/postgres/" --keep 30
 ```
**Crontab:**
@@ -482,7 +482,7 @@ Tests include:
 ### 4. Reliability
 - Test **restore procedures** regularly
-- Use **retention policies**: `--retention-days 30`
+- Use **retention policies**: `--keep 30`
 - Enable **object versioning** (30-day recovery)
 - Use **multi-region** buckets for disaster recovery
 - Monitor backup success with Cloud Monitoring

Makefile

@@ -1,126 +0,0 @@
# Makefile for dbbackup
# Provides common development workflows

.PHONY: build test lint vet clean install-tools help race cover golangci-lint

# Build variables
VERSION := $(shell grep 'version.*=' main.go | head -1 | sed 's/.*"\(.*\)".*/\1/')
BUILD_TIME := $(shell date -u '+%Y-%m-%d_%H:%M:%S_UTC')
GIT_COMMIT := $(shell git rev-parse --short HEAD 2>/dev/null || echo "unknown")
LDFLAGS := -w -s -X main.version=$(VERSION) -X main.buildTime=$(BUILD_TIME) -X main.gitCommit=$(GIT_COMMIT)

# Default target
all: lint test build

## build: Build the binary with optimizations
build:
	@echo "🔨 Building dbbackup $(VERSION)..."
	CGO_ENABLED=0 go build -ldflags="$(LDFLAGS)" -o bin/dbbackup .
	@echo "✅ Built bin/dbbackup"

## build-debug: Build with debug symbols (for debugging)
build-debug:
	@echo "🔨 Building dbbackup $(VERSION) with debug symbols..."
	go build -ldflags="-X main.version=$(VERSION) -X main.buildTime=$(BUILD_TIME) -X main.gitCommit=$(GIT_COMMIT)" -o bin/dbbackup-debug .
	@echo "✅ Built bin/dbbackup-debug"

## test: Run tests
test:
	@echo "🧪 Running tests..."
	go test ./...

## race: Run tests with race detector
race:
	@echo "🏃 Running tests with race detector..."
	go test -race ./...

## cover: Run tests with coverage report
cover:
	@echo "📊 Running tests with coverage..."
	go test -cover ./... | tee coverage.txt
	@echo "📄 Coverage saved to coverage.txt"

## cover-html: Generate HTML coverage report
cover-html:
	@echo "📊 Generating HTML coverage report..."
	go test -coverprofile=coverage.out ./...
	go tool cover -html=coverage.out -o coverage.html
	@echo "📄 Coverage report: coverage.html"

## lint: Run all linters
lint: vet staticcheck golangci-lint

## vet: Run go vet
vet:
	@echo "🔍 Running go vet..."
	go vet ./...

## staticcheck: Run staticcheck (install if missing)
staticcheck:
	@echo "🔍 Running staticcheck..."
	@if ! command -v staticcheck >/dev/null 2>&1; then \
		echo "Installing staticcheck..."; \
		go install honnef.co/go/tools/cmd/staticcheck@latest; \
	fi
	$$(go env GOPATH)/bin/staticcheck ./...

## golangci-lint: Run golangci-lint (comprehensive linting)
golangci-lint:
	@echo "🔍 Running golangci-lint..."
	@if ! command -v golangci-lint >/dev/null 2>&1; then \
		echo "Installing golangci-lint..."; \
		go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest; \
	fi
	$$(go env GOPATH)/bin/golangci-lint run --timeout 5m

## install-tools: Install development tools
install-tools:
	@echo "📦 Installing development tools..."
	go install honnef.co/go/tools/cmd/staticcheck@latest
	go install golang.org/x/tools/cmd/goimports@latest
	go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest
	@echo "✅ Tools installed"

## fmt: Format code
fmt:
	@echo "🎨 Formatting code..."
	gofmt -w -s .
	@which goimports > /dev/null && goimports -w . || true

## tidy: Tidy and verify go.mod
tidy:
	@echo "🧹 Tidying go.mod..."
	go mod tidy
	go mod verify

## update: Update dependencies
update:
	@echo "⬆️ Updating dependencies..."
	go get -u ./...
	go mod tidy

## clean: Clean build artifacts
clean:
	@echo "🧹 Cleaning..."
	rm -rf bin/dbbackup bin/dbbackup-debug
	rm -f coverage.out coverage.txt coverage.html
	go clean -cache -testcache

## docker: Build Docker image
docker:
	@echo "🐳 Building Docker image..."
	docker build -t dbbackup:$(VERSION) .

## all-platforms: Build for all platforms (uses build_all.sh)
all-platforms:
	@echo "🌍 Building for all platforms..."
	./build_all.sh

## help: Show this help
help:
	@echo "dbbackup Makefile"
	@echo ""
	@echo "Usage: make [target]"
	@echo ""
	@echo "Targets:"
	@grep -E '^## ' Makefile | sed 's/## / /'

OPENSOURCE_ALTERNATIVE.md (new file)

@@ -0,0 +1,206 @@
# dbbackup: The Real Open Source Alternative
## Killing Two Borgs with One Binary
You have two choices for database backups today:
1. **Pay $2,000-10,000/year per server** for Veeam, Commvault, or Veritas
2. **Wrestle with Borg/restic** - powerful, but never designed for databases
**dbbackup** eliminates both problems with a single, zero-dependency binary.
## The Problem with Commercial Backup
| What You Pay For | What You Actually Get |
|------------------|----------------------|
| $10,000/year | Heavy agents eating CPU |
| Complex licensing | Vendor lock-in to proprietary formats |
| "Enterprise support" | Recovery that requires calling support |
| "Cloud integration" | Upload to S3... eventually |
## The Problem with Borg/Restic
Great tools. Wrong use case.
| Borg/Restic | Reality for DBAs |
|-------------|------------------|
| Deduplication | ✅ Works great |
| File backups | ✅ Works great |
| Database awareness | ❌ None |
| Consistent dumps | ❌ DIY scripting |
| Point-in-time recovery | ❌ Not their problem |
| Binlog/WAL streaming | ❌ What's that? |
You end up writing wrapper scripts. Then more scripts. Then a monitoring layer. Then you've built half a product anyway.
## What Open Source Really Means
**dbbackup** delivers everything - in one binary:
| Feature | Veeam | Borg/Restic | dbbackup |
|---------|-------|-------------|----------|
| Deduplication | ❌ | ✅ | ✅ Native CDC |
| Database-aware | ✅ | ❌ | ✅ MySQL + PostgreSQL |
| Consistent snapshots | ✅ | ❌ | ✅ LVM/ZFS/Btrfs |
| PITR (Point-in-Time) | ❌ | ❌ | ✅ Sub-second RPO |
| Binlog/WAL streaming | ❌ | ❌ | ✅ Continuous |
| Direct cloud streaming | ❌ | ✅ | ✅ S3/GCS/Azure |
| Zero dependencies | ❌ | ❌ | ✅ Single binary |
| License cost | $$$$ | Free | **Free (Apache 2.0)** |
## Deduplication: We Killed the Borg
Content-defined chunking, just like Borg - but built for database dumps:
```bash
# First backup: 5MB stored
dbbackup dedup backup mydb.dump
# Second backup (modified): only 1.6KB new data!
# ~100% deduplication ratio
dbbackup dedup backup mydb_modified.dump
```
### How It Works
- **Gear Hash CDC** - Content-defined chunking with 92%+ overlap detection
- **SHA-256 Content-Addressed** - Chunks stored by hash, automatic dedup
- **AES-256-GCM Encryption** - Per-chunk encryption
- **Gzip Compression** - Enabled by default
- **SQLite Index** - Fast lookups, portable metadata
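The content-addressed part of the list above reduces to "name each chunk by its hash and never write the same hash twice." A toy illustration with fixed 4 KiB chunks (real dbbackup uses content-defined Gear-hash boundaries; `store_chunks` is invented for this example):

```shell
# Store a file as content-addressed chunks; identical chunks are written once.
store_chunks() {
  file=$1; store=$2
  mkdir -p "$store"
  split -b 4096 "$file" "$store/tmp."      # fixed-size chunking (toy version)
  for c in "$store"/tmp.*; do
    h=$(sha256sum "$c" | cut -d' ' -f1)    # the chunk's content address
    if [ -e "$store/$h" ]; then
      rm "$c"                              # duplicate chunk: store nothing new
    else
      mv "$c" "$store/$h"
    fi
  done
  ls "$store" | wc -l                      # number of unique chunks stored
}
```

Content-defined boundaries matter because an insert early in a dump shifts every fixed-size chunk after it; Gear-hash boundaries resynchronize, so most chunks still match.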
### Storage Efficiency
| Scenario | Borg | dbbackup |
|----------|------|----------|
| Daily 10GB database | 10GB + ~2GB/day | 10GB + ~2GB/day |
| Same data, knows it's a DB | Scripts needed | **Native support** |
| Restore to point-in-time | ❌ | ✅ Built-in |
Same dedup math. Zero wrapper scripts.
## Enterprise Features, Zero Enterprise Pricing
### Physical Backups (MySQL 8.0.17+)
```bash
# Native Clone Plugin - no XtraBackup needed
dbbackup backup single mydb --db-type mysql --cloud s3://bucket/
```
### Filesystem Snapshots
```bash
# <100ms lock, instant snapshot, stream to cloud
dbbackup backup --engine=snapshot --snapshot-backend=lvm
```
### Continuous Binlog/WAL Streaming
```bash
# Real-time capture to S3 - sub-second RPO
dbbackup binlog stream --target=s3://bucket/binlogs/
```
### Parallel Cloud Upload
```bash
# Saturate your network, not your patience
dbbackup backup --engine=streaming --parallel-workers=8
```
## Real Numbers
**100GB MySQL database:**
| Metric | Veeam | Borg + Scripts | dbbackup |
|--------|-------|----------------|----------|
| Backup time | 45 min | 50 min | **12 min** |
| Local disk needed | 100GB | 100GB | **0 GB** |
| Recovery point | Daily | Daily | **< 1 second** |
| Setup time | Days | Hours | **Minutes** |
| Annual cost | $5,000+ | $0 + time | **$0** |
## Migration Path
### From Veeam
```bash
# Day 1: Test alongside existing
dbbackup backup single mydb --cloud s3://test-bucket/
# Week 1: Compare backup times, storage costs
# Week 2: Switch primary backups
# Month 1: Cancel renewal, buy your team pizza
```
### From Borg/Restic
```bash
# Day 1: Replace your wrapper scripts
dbbackup dedup backup /var/lib/mysql/dumps/mydb.sql
# Day 2: Add PITR
dbbackup binlog stream --target=/mnt/nfs/binlogs/
# Day 3: Delete 500 lines of bash
```
## The Commands You Need
```bash
# Deduplicated backups (Borg-style)
dbbackup dedup backup <file>
dbbackup dedup restore <id> <output>
dbbackup dedup stats
dbbackup dedup gc
# Database-native backups
dbbackup backup single <database>
dbbackup backup all
dbbackup restore <backup-file>
# Point-in-time recovery
dbbackup binlog stream
dbbackup pitr restore --target-time "2026-01-12 14:30:00"
# Cloud targets
--cloud s3://bucket/path/
--cloud gs://bucket/path/
--cloud azure://container/path/
```
## Who Should Switch
**From Veeam/Commvault**: Same capabilities, zero license fees
**From Borg/Restic**: Native database support, no wrapper scripts
**From "homegrown scripts"**: Production-ready, battle-tested
**Cloud-native deployments**: Kubernetes, ECS, Cloud Run ready
**Compliance requirements**: AES-256-GCM, audit logging
## Get Started
```bash
# Download (single binary, ~48 MB, statically linked)
curl -LO https://github.com/PlusOne/dbbackup/releases/latest/download/dbbackup_linux_amd64
chmod +x dbbackup_linux_amd64
# Your first deduplicated backup
./dbbackup_linux_amd64 dedup backup /var/lib/mysql/dumps/production.sql
# Your first cloud backup
./dbbackup_linux_amd64 backup single production \
  --db-type mysql \
  --cloud s3://my-backups/
```
## The Bottom Line
| Solution | What It Costs You |
|----------|-------------------|
| Veeam | Money |
| Borg/Restic | Time (scripting, integration) |
| dbbackup | **Neither** |
**This is what open source really means.**
Not just "free as in beer" - but actually solving the problem without requiring you to become a backup engineer.
---
*Apache 2.0 Licensed. Free forever. No sales calls. No wrapper scripts.*
[GitHub](https://github.com/PlusOne/dbbackup) | [Releases](https://github.com/PlusOne/dbbackup/releases) | [Changelog](CHANGELOG.md)

QUICK.md

@@ -1,297 +0,0 @@
# dbbackup Quick Reference
Real examples, no fluff.
## Basic Backups
```bash
# PostgreSQL cluster (all databases + globals)
dbbackup backup cluster
# Single database
dbbackup backup single myapp
# MySQL
dbbackup backup single gitea --db-type mysql --host 127.0.0.1 --port 3306
# MySQL/MariaDB with Unix socket
dbbackup backup single myapp --db-type mysql --socket /var/run/mysqld/mysqld.sock
# With compression level (0-9, default 6)
dbbackup backup cluster --compression 9
# As root (requires flag)
sudo dbbackup backup cluster --allow-root
```
## PITR (Point-in-Time Recovery)
```bash
# Enable WAL archiving for a database
dbbackup pitr enable myapp /mnt/backups/wal
# Take base backup (required before PITR works)
dbbackup pitr base myapp /mnt/backups/wal
# Check PITR status
dbbackup pitr status myapp /mnt/backups/wal
# Restore to specific point in time
dbbackup pitr restore myapp /mnt/backups/wal --target-time "2026-01-23 14:30:00"
# Restore to latest available
dbbackup pitr restore myapp /mnt/backups/wal --target-time latest
# Disable PITR
dbbackup pitr disable myapp
```
## Deduplication
```bash
# Backup with dedup (saves ~60-80% space on similar databases)
dbbackup backup all /mnt/backups/databases --dedup
# Check dedup stats
dbbackup dedup stats /mnt/backups/databases
# Prune orphaned chunks (after deleting old backups)
dbbackup dedup prune /mnt/backups/databases
# Verify chunk integrity
dbbackup dedup verify /mnt/backups/databases
```
## Blob Statistics
```bash
# Analyze blob/binary columns in a database (plan extraction strategies)
dbbackup blob stats --database myapp
# Output shows tables with blob columns, row counts, and estimated sizes
# Helps identify large binary data for separate extraction
# With explicit connection
dbbackup blob stats --database myapp --host dbserver --user admin
# MySQL blob analysis
dbbackup blob stats --database shopdb --db-type mysql
```
## Cloud Storage
```bash
# Upload to S3
dbbackup cloud upload /mnt/backups/databases/myapp_2026-01-23.sql.gz \
  --cloud-provider s3 \
  --cloud-bucket my-backups

# Upload to MinIO (self-hosted)
dbbackup cloud upload backup.sql.gz \
  --cloud-provider minio \
  --cloud-bucket backups \
  --cloud-endpoint https://minio.internal:9000

# Upload to Backblaze B2
dbbackup cloud upload backup.sql.gz \
  --cloud-provider b2 \
  --cloud-bucket my-b2-bucket

# With bandwidth limit (don't saturate the network)
dbbackup cloud upload backup.sql.gz --cloud-provider s3 --cloud-bucket backups --bandwidth-limit 10MB/s

# List remote backups
dbbackup cloud list --cloud-provider s3 --cloud-bucket my-backups

# Download
dbbackup cloud download myapp_2026-01-23.sql.gz /tmp/ --cloud-provider s3 --cloud-bucket my-backups

# Delete old backup from cloud
dbbackup cloud delete myapp_2026-01-01.sql.gz --cloud-provider s3 --cloud-bucket my-backups
```
### Cloud Environment Variables
```bash
# S3/MinIO
export AWS_ACCESS_KEY_ID=AKIAXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxx
export AWS_REGION=eu-central-1
# GCS
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
# Azure
export AZURE_STORAGE_ACCOUNT=mystorageaccount
export AZURE_STORAGE_KEY=xxxxxxxx
```
## Encryption
```bash
# Backup with encryption (AES-256-GCM)
dbbackup backup single myapp --encrypt
# Use environment variable for key (recommended)
export DBBACKUP_ENCRYPTION_KEY="my-secret-passphrase"
dbbackup backup cluster --encrypt
# Or use key file
dbbackup backup single myapp --encrypt --encryption-key-file /path/to/keyfile
# Restore encrypted backup (key from environment)
dbbackup restore single myapp_2026-01-23.dump.gz.enc --confirm
```
## Catalog (Backup Inventory)
```bash
# Sync local backups to catalog
dbbackup catalog sync /mnt/backups/databases
# List all backups
dbbackup catalog list
# Show catalog statistics
dbbackup catalog stats
# Show gaps (missing daily backups)
dbbackup catalog gaps mydb --interval 24h
# Search backups
dbbackup catalog search --database myapp --after 2026-01-01
# Show detailed info for a backup
dbbackup catalog info myapp_2026-01-23.dump.gz
```
## Restore
```bash
# Preview restore (dry-run by default)
dbbackup restore single myapp_2026-01-23.dump.gz
# Restore to new database
dbbackup restore single myapp_2026-01-23.dump.gz --target myapp_restored --confirm
# Restore to existing database (clean first)
dbbackup restore single myapp_2026-01-23.dump.gz --clean --confirm
# Restore MySQL
dbbackup restore single gitea_2026-01-23.sql.gz --target gitea_restored \
  --db-type mysql --host 127.0.0.1 --confirm
# Verify restore (restores to temp db, runs checks, drops it)
dbbackup verify-restore myapp_2026-01-23.dump.gz
```
## Retention & Cleanup
```bash
# Delete backups older than 30 days (keep at least 5)
dbbackup cleanup /mnt/backups/databases --retention-days 30 --min-backups 5
# GFS retention: 7 daily, 4 weekly, 12 monthly
dbbackup cleanup /mnt/backups/databases --gfs --gfs-daily 7 --gfs-weekly 4 --gfs-monthly 12
# Dry run (show what would be deleted)
dbbackup cleanup /mnt/backups/databases --retention-days 7 --dry-run
```
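The retention policy above boils down to "delete anything older than N days, but never drop below the minimum count." A rough shell equivalent (no dry-run, no GFS logic, no concurrency guards; `prune_old` is a made-up name, and real dbbackup is far more careful):

```shell
# Delete backups in $dir older than $days, always keeping the $min newest.
prune_old() {
  dir=$1; days=$2; min=$3
  # Newest first; skip the first $min entries so they are always kept.
  ls -1t "$dir" | tail -n +"$((min + 1))" | while read -r f; do
    # Delete only if the file is actually outside the retention window.
    if [ -n "$(find "$dir/$f" -mtime +"$days" -print 2>/dev/null)" ]; then
      rm -- "$dir/$f"
    fi
  done
}
```

Usage would look like `prune_old /mnt/backups/databases 30 5`, mirroring `--retention-days 30 --min-backups 5`.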
## Disaster Recovery Drill
```bash
# Full DR test (restores random backup, verifies, cleans up)
dbbackup drill /mnt/backups/databases
# Test specific database
dbbackup drill /mnt/backups/databases --database myapp
# With email notification (configure via environment variables)
export NOTIFY_SMTP_HOST="smtp.example.com"
export NOTIFY_SMTP_TO="admin@example.com"
dbbackup drill /mnt/backups/databases --database myapp
```
## Monitoring & Metrics
```bash
# Prometheus metrics endpoint
dbbackup metrics serve --port 9101
# One-shot status check (for scripts)
dbbackup status /mnt/backups/databases
echo $? # 0 = OK, 1 = warnings, 2 = critical
# Generate HTML report
dbbackup report /mnt/backups/databases --output backup-report.html
```
## Systemd Timer (Recommended)
```bash
# Install systemd units
sudo dbbackup install systemd --backup-path /mnt/backups/databases --schedule "02:00"
# Creates:
# /etc/systemd/system/dbbackup.service
# /etc/systemd/system/dbbackup.timer
# Check timer
systemctl status dbbackup.timer
systemctl list-timers dbbackup.timer
```
## Common Combinations
```bash
# Full production setup: encrypted, with cloud auto-upload
dbbackup backup cluster \
--encrypt \
--compression 9 \
--cloud-auto-upload \
--cloud-provider s3 \
--cloud-bucket prod-backups
# Quick MySQL backup to S3
dbbackup backup single shopdb --db-type mysql && \
dbbackup cloud upload shopdb_*.sql.gz --cloud-provider s3 --cloud-bucket backups
# PITR-enabled PostgreSQL with cloud upload
dbbackup pitr enable proddb /mnt/wal
dbbackup pitr base proddb /mnt/wal
dbbackup cloud upload /mnt/wal/*.gz --cloud-provider s3 --cloud-bucket wal-archive
```
## Environment Variables
| Variable | Description |
|----------|-------------|
| `DBBACKUP_ENCRYPTION_KEY` | Encryption passphrase |
| `DBBACKUP_BANDWIDTH_LIMIT` | Cloud upload limit (e.g., `10MB/s`) |
| `DBBACKUP_CLOUD_PROVIDER` | Cloud provider (s3, minio, b2) |
| `DBBACKUP_CLOUD_BUCKET` | Cloud bucket name |
| `DBBACKUP_CLOUD_ENDPOINT` | Custom endpoint (for MinIO) |
| `AWS_ACCESS_KEY_ID` | S3/MinIO credentials |
| `AWS_SECRET_ACCESS_KEY` | S3/MinIO secret key |
| `PGHOST`, `PGPORT`, `PGUSER` | PostgreSQL connection |
| `MYSQL_HOST`, `MYSQL_TCP_PORT` | MySQL connection |
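The `DBBACKUP_BANDWIDTH_LIMIT` format shown above (`10MB/s`) can be parsed roughly as follows. This is a sketch of the documented example format only; `parseBandwidthLimit` is a hypothetical helper, and dbbackup's real parser may accept more variants.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseBandwidthLimit converts a rate such as "10MB/s" or "512KB/s"
// into bytes per second, using binary (1024-based) unit multipliers.
func parseBandwidthLimit(s string) (int64, error) {
	s = strings.TrimSuffix(strings.ToUpper(strings.TrimSpace(s)), "/S")
	units := []struct {
		suffix string
		mult   int64
	}{
		{"GB", 1 << 30},
		{"MB", 1 << 20},
		{"KB", 1 << 10},
		{"B", 1},
	}
	for _, u := range units {
		if strings.HasSuffix(s, u.suffix) {
			n, err := strconv.ParseFloat(strings.TrimSuffix(s, u.suffix), 64)
			if err != nil {
				return 0, err
			}
			return int64(n * float64(u.mult)), nil
		}
	}
	return 0, fmt.Errorf("unrecognized rate %q", s)
}

func main() {
	n, _ := parseBandwidthLimit("10MB/s")
	fmt.Println(n) // 10 * 1024 * 1024 bytes/s
}
```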
## Quick Checks
```bash
# What version?
dbbackup --version
# Connection status
dbbackup status
# Test database connection (dry-run)
dbbackup backup single testdb --dry-run
# Verify a backup file
dbbackup verify /mnt/backups/databases/myapp_2026-01-23.dump.gz
# Run preflight checks
dbbackup preflight
```

README.md

@ -1,21 +1,9 @@
```
_ _ _ _
| | | | | | |
__| | |__ | |__ __ _ ___| | ___ _ _ __
/ _` | '_ \| '_ \ / _` |/ __| |/ / | | | '_ \
| (_| | |_) | |_) | (_| | (__| <| |_| | |_) |
\__,_|_.__/|_.__/ \__,_|\___|_|\_\\__,_| .__/
| |
|_|
```
# dbbackup
Database backup and restore utility for PostgreSQL, MySQL, and MariaDB.
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![Go Version](https://img.shields.io/badge/Go-1.21+-00ADD8?logo=go)](https://golang.org/)
[![Release](https://img.shields.io/badge/Release-v4.0.1-green.svg)](https://github.com/PlusOne/dbbackup/releases/latest)
**Repository:** https://git.uuxo.net/UUXO/dbbackup
**Mirror:** https://github.com/PlusOne/dbbackup
@ -68,7 +56,7 @@ Download from [releases](https://git.uuxo.net/UUXO/dbbackup/releases):
```bash
# Linux x86_64
wget https://git.uuxo.net/UUXO/dbbackup/releases/download/v3.42.74/dbbackup-linux-amd64
wget https://git.uuxo.net/UUXO/dbbackup/releases/download/v3.42.35/dbbackup-linux-amd64
chmod +x dbbackup-linux-amd64
sudo mv dbbackup-linux-amd64 /usr/local/bin/dbbackup
```
@ -111,7 +99,6 @@ Database: postgres@localhost:5432 (PostgreSQL)
Diagnose Backup File
List & Manage Backups
────────────────────────────────
Tools
View Active Operations
Show Operation History
Database Status & Health Check
@ -120,22 +107,6 @@ Database: postgres@localhost:5432 (PostgreSQL)
Quit
```
**Tools Menu:**
```
Tools
Advanced utilities for database backup management
> Blob Statistics
Blob Extract (externalize LOBs)
────────────────────────────────
Dedup Store Analyze
Verify Backup Integrity
Catalog Sync
────────────────────────────────
Back to Main Menu
```
**Database Selection:**
```
Single Database Backup
@ -223,59 +194,21 @@ r: Restore | v: Verify | i: Info | d: Diagnose | D: Delete | R: Refresh | Esc: B
```
Configuration Settings
[SYSTEM] Detected Resources
CPU: 8 physical cores, 16 logical cores
Memory: 32GB total, 28GB available
Recommended Profile: balanced
→ 8 cores and 32GB RAM supports moderate parallelism
[CONFIG] Current Settings
Target DB: PostgreSQL (postgres)
Database: postgres@localhost:5432
Backup Dir: /var/backups/postgres
Compression: Level 6
Profile: balanced | Cluster: 2 parallel | Jobs: 4
> Database Type: postgres
CPU Workload Type: balanced
Resource Profile: balanced (P:2 J:4)
Cluster Parallelism: 2
Backup Directory: /var/backups/postgres
Work Directory: (system temp)
Backup Directory: /root/db_backups
Work Directory: /tmp
Compression Level: 6
Parallel Jobs: 4
Dump Jobs: 4
Parallel Jobs: 16
Dump Jobs: 8
Database Host: localhost
Database Port: 5432
Database User: postgres
Database User: root
SSL Mode: prefer
[KEYS] ↑↓ navigate | Enter edit | 'l' toggle LargeDB | 'c' conservative | 'p' recommend | 's' save | 'q' menu
s: Save | r: Reset | q: Menu
```
**Resource Profiles for Large Databases:**
When restoring large databases on VMs with limited resources, use the resource profile settings to prevent "out of shared memory" errors:
| Profile | Cluster Parallel | Jobs | Best For |
|---------|------------------|------|----------|
| conservative | 1 | 1 | Small VMs (<16GB RAM) |
| balanced | 2 | 2-4 | Medium VMs (16-32GB RAM) |
| performance | 4 | 4-8 | Large servers (32GB+ RAM) |
| max-performance | 8 | 8-16 | High-end servers (64GB+) |
**Large DB Mode:** Toggle with `l` key. Reduces parallelism by 50% and sets max_locks_per_transaction=8192 for complex databases with many tables/LOBs.
**Quick shortcuts:** Press `l` to toggle Large DB Mode, `c` for conservative, `p` to show recommendation.
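The auto-recommendation can be modeled as a small selector over the thresholds in the table above. `recommendProfile` is hypothetical, not dbbackup's internal function, and the boundary handling (exactly 32GB counting as "balanced", matching the sample TUI output earlier) is an assumption.

```go
package main

import "fmt"

// recommendProfile maps detected resources to a restore profile using
// the thresholds from the table above.
func recommendProfile(memGB, cores int) string {
	switch {
	case memGB >= 64 && cores >= 8:
		return "max-performance"
	case memGB > 32 && cores >= 4:
		return "performance"
	case memGB >= 16:
		return "balanced"
	default:
		return "conservative"
	}
}

func main() {
	// 8 cores / 32GB is the configuration shown in the TUI sample.
	fmt.Println(recommendProfile(32, 8)) // prints "balanced"
}
```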
**Troubleshooting Tools:**
For PostgreSQL restore issues ("out of shared memory" errors), diagnostic scripts are available:
- **diagnose_postgres_memory.sh** - Comprehensive system memory, PostgreSQL configuration, and resource analysis
- **fix_postgres_locks.sh** - Automatically increase max_locks_per_transaction to 4096
See [RESTORE_PROFILES.md](RESTORE_PROFILES.md) for detailed troubleshooting guidance.
**Database Status:**
```
Database Status & Health Check
@ -315,21 +248,12 @@ dbbackup restore single backup.dump --target myapp_db --create --confirm
# Restore cluster
dbbackup restore cluster cluster_backup.tar.gz --confirm
# Restore with resource profile (for resource-constrained servers)
dbbackup restore cluster backup.tar.gz --profile=conservative --confirm
# Restore with debug logging (saves detailed error report on failure)
dbbackup restore cluster backup.tar.gz --save-debug-log /tmp/restore-debug.json --confirm
# Diagnose backup before restore
dbbackup restore diagnose backup.dump.gz --deep
# Check PostgreSQL lock configuration (preflight for large restores)
# - warns/fails when `max_locks_per_transaction` is insufficient and prints exact remediation
# - safe to run before a restore to determine whether single-threaded restore is required
# Example:
# dbbackup verify-locks
# Cloud backup
dbbackup backup single mydb --cloud s3://my-bucket/backups/
@ -349,7 +273,6 @@ dbbackup backup single mydb --dry-run
| `restore pitr` | Point-in-Time Recovery |
| `restore diagnose` | Diagnose backup file integrity |
| `verify-backup` | Verify backup integrity |
| `verify-locks` | Check PostgreSQL lock settings and get restore guidance |
| `cleanup` | Remove old backups |
| `status` | Check connection status |
| `preflight` | Run pre-backup checks |
@ -363,7 +286,6 @@ dbbackup backup single mydb --dry-run
| `drill` | DR drill testing |
| `report` | Compliance report generation |
| `rto` | RTO/RPO analysis |
| `blob stats` | Analyze blob/bytea columns in database |
| `install` | Install as systemd service |
| `uninstall` | Remove systemd service |
| `metrics export` | Export Prometheus metrics to textfile |
@ -381,7 +303,6 @@ dbbackup backup single mydb --dry-run
| `--backup-dir` | Backup directory | ~/db_backups |
| `--compression` | Compression level (0-9) | 6 |
| `--jobs` | Parallel jobs | 8 |
| `--profile` | Resource profile (conservative/balanced/aggressive) | balanced |
| `--cloud` | Cloud storage URI | - |
| `--encrypt` | Enable encryption | false |
| `--dry-run, -n` | Run preflight checks only | false |
@ -666,11 +587,11 @@ dbbackup catalog stats
# Detect backup gaps (missing scheduled backups)
dbbackup catalog gaps --interval 24h --database mydb
# Search backups by date range
dbbackup catalog search --database mydb --after 2024-01-01 --before 2024-12-31
# Search backups
dbbackup catalog search --database mydb --start 2024-01-01 --end 2024-12-31
# Get backup info by path
dbbackup catalog info /backups/mydb_20240115.dump.gz
# Get backup info
dbbackup catalog info 42
```
## DR Drill Testing
@ -681,8 +602,8 @@ Automated disaster recovery testing restores backups to Docker containers:
# Run full DR drill
dbbackup drill run /backups/mydb_latest.dump.gz \
--database mydb \
--type postgresql \
--timeout 1800
--db-type postgres \
--timeout 30m
# Quick drill (restore + basic validation)
dbbackup drill quick /backups/mydb_latest.dump.gz --database mydb
@ -690,11 +611,11 @@ dbbackup drill quick /backups/mydb_latest.dump.gz --database mydb
# List running drill containers
dbbackup drill list
# Cleanup all drill containers
dbbackup drill cleanup
# Cleanup old drill containers
dbbackup drill cleanup --age 24h
# Display a saved drill report
dbbackup drill report drill_20240115_120000_report.json --format json
# Generate drill report
dbbackup drill report --format html --output drill-report.html
```
**Drill phases:**
@ -739,13 +660,16 @@ Calculate and monitor Recovery Time/Point Objectives:
```bash
# Analyze RTO/RPO for a database
dbbackup rto analyze --database mydb
dbbackup rto analyze mydb
# Show status for all databases
dbbackup rto status
# Check against targets
dbbackup rto check --target-rto 4h --target-rpo 1h
dbbackup rto check --rto 4h --rpo 1h
# Set target objectives
dbbackup rto analyze mydb --target-rto 4h --target-rpo 1h
```
**Analysis includes:**
@ -793,8 +717,6 @@ sudo dbbackup uninstall cluster --purge
Export backup metrics for monitoring with Prometheus:
> **Migration Note (v1.x → v2.x):** The `--instance` flag was renamed to `--server` to avoid collision with Prometheus's reserved `instance` label. Update your cronjobs and scripts accordingly.
### Textfile Collector
For integration with node_exporter:
@ -803,8 +725,8 @@ For integration with node_exporter:
# Export metrics to textfile
dbbackup metrics export --output /var/lib/node_exporter/textfile_collector/dbbackup.prom
# Export for specific server
dbbackup metrics export --server production --output /var/lib/dbbackup/metrics/production.prom
# Export for specific instance
dbbackup metrics export --instance production --output /var/lib/dbbackup/metrics/production.prom
```
Configure node_exporter:
@ -936,29 +858,15 @@ Workload types:
## Documentation
**Quick Start:**
- [QUICK.md](QUICK.md) - Real-world examples cheat sheet
**Guides:**
- [docs/PITR.md](docs/PITR.md) - Point-in-Time Recovery (PostgreSQL)
- [docs/MYSQL_PITR.md](docs/MYSQL_PITR.md) - Point-in-Time Recovery (MySQL)
- [docs/ENGINES.md](docs/ENGINES.md) - Database engine configuration
- [docs/RESTORE_PROFILES.md](docs/RESTORE_PROFILES.md) - Restore resource profiles
**Cloud Storage:**
- [docs/CLOUD.md](docs/CLOUD.md) - Cloud storage overview
- [docs/AZURE.md](docs/AZURE.md) - Azure Blob Storage
- [docs/GCS.md](docs/GCS.md) - Google Cloud Storage
**Deployment:**
- [docs/DOCKER.md](docs/DOCKER.md) - Docker deployment
- [docs/SYSTEMD.md](docs/SYSTEMD.md) - Systemd installation & scheduling
**Reference:**
- [SYSTEMD.md](SYSTEMD.md) - Systemd installation & scheduling
- [DOCKER.md](DOCKER.md) - Docker deployment
- [CLOUD.md](CLOUD.md) - Cloud storage configuration
- [PITR.md](PITR.md) - Point-in-Time Recovery
- [AZURE.md](AZURE.md) - Azure Blob Storage
- [GCS.md](GCS.md) - Google Cloud Storage
- [SECURITY.md](SECURITY.md) - Security considerations
- [CONTRIBUTING.md](CONTRIBUTING.md) - Contribution guidelines
- [CHANGELOG.md](CHANGELOG.md) - Version history
- [docs/LOCK_DEBUGGING.md](docs/LOCK_DEBUGGING.md) - Lock troubleshooting
## License

RELEASE_NOTES.md (new file)

@ -0,0 +1,108 @@
# v3.42.1 Release Notes
## What's New in v3.42.1
### Deduplication - Resistance is Futile
Content-defined chunking deduplication for space-efficient backups. Like restic/borgbackup but with **native database dump support**.
```bash
# First backup: 5MB stored
dbbackup dedup backup mydb.dump
# Second backup (modified): only 1.6KB new data stored!
# 100% deduplication ratio
dbbackup dedup backup mydb_modified.dump
```
#### Features
- **Gear Hash CDC** - Content-defined chunking with 92%+ overlap on shifted data
- **SHA-256 Content-Addressed** - Chunks stored by hash, automatic deduplication
- **AES-256-GCM Encryption** - Optional per-chunk encryption
- **Gzip Compression** - Optional compression (enabled by default)
- **SQLite Index** - Fast chunk lookups and statistics
#### Commands
```bash
dbbackup dedup backup <file> # Create deduplicated backup
dbbackup dedup backup <file> --encrypt # With AES-256-GCM encryption
dbbackup dedup restore <id> <output> # Restore from manifest
dbbackup dedup list # List all backups
dbbackup dedup stats # Show deduplication statistics
dbbackup dedup delete <id> # Delete a backup
dbbackup dedup gc # Garbage collect unreferenced chunks
```
#### Storage Structure
```
<backup-dir>/dedup/
chunks/ # Content-addressed chunk files
ab/cdef1234... # Sharded by first 2 chars of hash
manifests/ # JSON manifest per backup
chunks.db # SQLite index
```
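The sharded layout above maps each chunk's SHA-256 to a path. A minimal sketch (the helper name is hypothetical):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"path/filepath"
)

// chunkPath derives a chunk's on-disk location from its content hash,
// sharding by the first two hex characters as in the layout above.
func chunkPath(baseDir string, chunk []byte) string {
	sum := sha256.Sum256(chunk)
	h := hex.EncodeToString(sum[:])
	return filepath.Join(baseDir, "dedup", "chunks", h[:2], h[2:])
}

func main() {
	fmt.Println(chunkPath("/mnt/backups", []byte("hello")))
}
```

Two-character sharding caps each directory at 256 subdirectories, keeping lookups fast even with millions of chunks.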
### Also Included (from v3.41.x)
- **Systemd Integration** - One-command install with `dbbackup install`
- **Prometheus Metrics** - HTTP exporter on port 9399
- **Backup Catalog** - SQLite-based tracking of all backup operations
- **Prometheus Alerting Rules** - Added to SYSTEMD.md documentation
### Installation
#### Quick Install (Recommended)
```bash
# Download for your platform
curl -LO https://git.uuxo.net/UUXO/dbbackup/releases/download/v3.42.1/dbbackup-linux-amd64
# Install with systemd service
chmod +x dbbackup-linux-amd64
sudo ./dbbackup-linux-amd64 install --config /path/to/config.yaml
```
#### Available Binaries
| Platform | Architecture | Binary |
|----------|--------------|--------|
| Linux | amd64 | `dbbackup-linux-amd64` |
| Linux | arm64 | `dbbackup-linux-arm64` |
| macOS | Intel | `dbbackup-darwin-amd64` |
| macOS | Apple Silicon | `dbbackup-darwin-arm64` |
| FreeBSD | amd64 | `dbbackup-freebsd-amd64` |
### Systemd Commands
```bash
dbbackup install --config config.yaml # Install service + timer
dbbackup install --status # Check service status
dbbackup install --uninstall # Remove services
```
### Prometheus Metrics
Available at `http://localhost:9399/metrics`:
| Metric | Description |
|--------|-------------|
| `dbbackup_last_backup_timestamp` | Unix timestamp of last backup |
| `dbbackup_last_backup_success` | 1 if successful, 0 if failed |
| `dbbackup_last_backup_duration_seconds` | Duration of last backup |
| `dbbackup_last_backup_size_bytes` | Size of last backup |
| `dbbackup_backup_total` | Total number of backups |
| `dbbackup_backup_errors_total` | Total number of failed backups |
### Security Features
- Hardened systemd service with `ProtectSystem=strict`
- `NoNewPrivileges=true` prevents privilege escalation
- Dedicated `dbbackup` system user (optional)
- Credential files with restricted permissions
### Documentation
- [SYSTEMD.md](SYSTEMD.md) - Complete systemd installation guide
- [README.md](README.md) - Full documentation
- [CHANGELOG.md](CHANGELOG.md) - Version history
### Bug Fixes
- Fixed SQLite time parsing in dedup stats
- Fixed function name collision in cmd package
---
**Full Changelog**: https://git.uuxo.net/UUXO/dbbackup/compare/v3.41.1...v3.42.1

bin/README.md (new file)

@ -0,0 +1,87 @@
# DB Backup Tool - Pre-compiled Binaries
This directory contains pre-compiled binaries for the DB Backup Tool across multiple platforms and architectures.
## Build Information
- **Version**: 3.42.50
- **Build Time**: 2026-01-17_16:07:42_UTC
- **Git Commit**: e2cf9ad
## Recent Updates
- ✅ Fixed TUI progress display with line-by-line output
- ✅ Added interactive configuration settings menu
- ✅ Improved menu navigation and responsiveness
- ✅ Enhanced completion status handling
- ✅ Better CPU detection and optimization
- ✅ Silent mode support for TUI operations
## Available Binaries
### Linux
- `dbbackup_linux_amd64` - Linux 64-bit (Intel/AMD)
- `dbbackup_linux_arm64` - Linux 64-bit (ARM)
- `dbbackup_linux_arm_armv7` - Linux 32-bit (ARMv7)
### macOS
- `dbbackup_darwin_amd64` - macOS 64-bit (Intel)
- `dbbackup_darwin_arm64` - macOS 64-bit (Apple Silicon)
### Windows
- `dbbackup_windows_amd64.exe` - Windows 64-bit (Intel/AMD)
- `dbbackup_windows_arm64.exe` - Windows 64-bit (ARM)
### BSD Systems
- `dbbackup_freebsd_amd64` - FreeBSD 64-bit
- `dbbackup_openbsd_amd64` - OpenBSD 64-bit
- `dbbackup_netbsd_amd64` - NetBSD 64-bit
## Usage
1. Download the appropriate binary for your platform
2. Make it executable (Unix-like systems): `chmod +x dbbackup_*`
3. Run: `./dbbackup_* --help`
## Interactive Mode
Launch the interactive TUI menu for easy configuration and operation:
```bash
# Interactive mode with TUI menu
./dbbackup_linux_amd64
# Features:
# - Interactive configuration settings
# - Real-time progress display
# - Operation history and status
# - CPU detection and optimization
```
## Command Line Mode
Direct command line usage with line-by-line progress:
```bash
# Show CPU information and optimization settings
./dbbackup_linux_amd64 cpu
# Auto-optimize for your hardware
./dbbackup_linux_amd64 backup cluster --auto-detect-cores
# Manual CPU configuration
./dbbackup_linux_amd64 backup single mydb --jobs 8 --dump-jobs 4
# Line-by-line progress output
./dbbackup_linux_amd64 backup cluster --progress-type line
```
## CPU Detection
All binaries include advanced CPU detection capabilities:
- Automatic core detection for optimal parallelism
- Support for different workload types (CPU-intensive, I/O-intensive, balanced)
- Platform-specific optimizations for Linux, macOS, and Windows
- Interactive CPU configuration in TUI mode
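The workload-aware parallelism described above can be sketched as a core-count heuristic. The multipliers here are illustrative assumptions, not dbbackup's exact tuning, and `pickJobs` is a hypothetical helper.

```go
package main

import (
	"fmt"
	"runtime"
)

// pickJobs chooses a parallel-job count from the logical core count
// and a workload type: CPU-bound work leaves headroom for the dump
// tools, I/O-bound work oversubscribes to overlap waits.
func pickJobs(logicalCores int, workload string) int {
	switch workload {
	case "cpu-intensive":
		return max(1, logicalCores/2)
	case "io-intensive":
		return logicalCores * 2
	default: // "balanced"
		return max(1, logicalCores)
	}
}

func main() {
	cores := runtime.NumCPU()
	fmt.Printf("balanced: %d jobs on %d cores\n", pickJobs(cores, "balanced"), cores)
}
```

(The `max` builtin requires Go 1.21+, which matches the project's stated minimum.)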
## Support
For issues or questions, please refer to the main project documentation.


@ -33,7 +33,7 @@ CYAN='\033[0;36m'
BOLD='\033[1m'
NC='\033[0m'
# Platform configurations - Linux & macOS only
# Platform configurations
# Format: "GOOS/GOARCH:binary_suffix:description"
PLATFORMS=(
"linux/amd64::Linux 64-bit (Intel/AMD)"
@ -41,6 +41,11 @@ PLATFORMS=(
"linux/arm:_armv7:Linux 32-bit (ARMv7)"
"darwin/amd64::macOS 64-bit (Intel)"
"darwin/arm64::macOS 64-bit (Apple Silicon)"
"windows/amd64:.exe:Windows 64-bit (Intel/AMD)"
"windows/arm64:.exe:Windows 64-bit (ARM)"
"freebsd/amd64::FreeBSD 64-bit (Intel/AMD)"
"openbsd/amd64::OpenBSD 64-bit (Intel/AMD)"
"netbsd/amd64::NetBSD 64-bit (Intel/AMD)"
)
echo -e "${BOLD}${BLUE}🔨 Cross-Platform Build Script for ${APP_NAME}${NC}"


@ -58,13 +58,13 @@ var singleCmd = &cobra.Command{
Backup Types:
--backup-type full - Complete full backup (default)
--backup-type incremental - Incremental backup (only changed files since base)
--backup-type incremental - Incremental backup (only changed files since base) [NOT IMPLEMENTED]
Examples:
# Full backup (default)
dbbackup backup single mydb
# Incremental backup (requires previous full backup)
# Incremental backup (requires previous full backup) [COMING IN v2.2.1]
dbbackup backup single mydb --backup-type incremental --base-backup mydb_20250126.tar.gz`,
Args: cobra.MaximumNArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
@ -114,7 +114,7 @@ func init() {
backupCmd.AddCommand(sampleCmd)
// Incremental backup flags (single backup only) - using global vars to avoid initialization cycle
singleCmd.Flags().StringVar(&backupTypeFlag, "backup-type", "full", "Backup type: full or incremental")
singleCmd.Flags().StringVar(&backupTypeFlag, "backup-type", "full", "Backup type: full or incremental [incremental NOT IMPLEMENTED]")
singleCmd.Flags().StringVar(&baseBackupFlag, "base-backup", "", "Path to base backup (required for incremental)")
// Encryption flags for all backup commands


@ -12,7 +12,6 @@ import (
"dbbackup/internal/checks"
"dbbackup/internal/config"
"dbbackup/internal/database"
"dbbackup/internal/notify"
"dbbackup/internal/security"
)
@ -58,17 +57,6 @@ func runClusterBackup(ctx context.Context) error {
user := security.GetCurrentUser()
auditLogger.LogBackupStart(user, "all_databases", "cluster")
// Track start time for notifications
backupStartTime := time.Now()
// Notify: backup started
if notifyManager != nil {
notifyManager.Notify(notify.NewEvent(notify.EventBackupStarted, notify.SeverityInfo, "Cluster backup started").
WithDatabase("all_databases").
WithDetail("host", cfg.Host).
WithDetail("backup_dir", cfg.BackupDir))
}
// Rate limit connection attempts
host := fmt.Sprintf("%s:%d", cfg.Host, cfg.Port)
if err := rateLimiter.CheckAndWait(host); err != nil {
@ -98,13 +86,6 @@ func runClusterBackup(ctx context.Context) error {
// Perform cluster backup
if err := engine.BackupCluster(ctx); err != nil {
auditLogger.LogBackupFailed(user, "all_databases", err)
// Notify: backup failed
if notifyManager != nil {
notifyManager.Notify(notify.NewEvent(notify.EventBackupFailed, notify.SeverityError, "Cluster backup failed").
WithDatabase("all_databases").
WithError(err).
WithDuration(time.Since(backupStartTime)))
}
return err
}
@ -112,13 +93,6 @@ func runClusterBackup(ctx context.Context) error {
if isEncryptionEnabled() {
if err := encryptLatestClusterBackup(); err != nil {
log.Error("Failed to encrypt backup", "error", err)
// Notify: encryption failed
if notifyManager != nil {
notifyManager.Notify(notify.NewEvent(notify.EventBackupFailed, notify.SeverityError, "Backup encryption failed").
WithDatabase("all_databases").
WithError(err).
WithDuration(time.Since(backupStartTime)))
}
return fmt.Errorf("backup completed successfully but encryption failed. Unencrypted backup remains in %s: %w", cfg.BackupDir, err)
}
log.Info("Cluster backup encrypted successfully")
@ -127,14 +101,6 @@ func runClusterBackup(ctx context.Context) error {
// Audit log: backup success
auditLogger.LogBackupComplete(user, "all_databases", cfg.BackupDir, 0)
// Notify: backup completed
if notifyManager != nil {
notifyManager.Notify(notify.NewEvent(notify.EventBackupCompleted, notify.SeveritySuccess, "Cluster backup completed successfully").
WithDatabase("all_databases").
WithDuration(time.Since(backupStartTime)).
WithDetail("backup_dir", cfg.BackupDir))
}
// Cleanup old backups if retention policy is enabled
if cfg.RetentionDays > 0 {
retentionPolicy := security.NewRetentionPolicy(cfg.RetentionDays, cfg.MinBackups, log)
@ -164,10 +130,6 @@ func runSingleBackup(ctx context.Context, databaseName string) error {
// Update config from environment
cfg.UpdateFromEnvironment()
// IMPORTANT: Set the database name from positional argument
// This overrides the default 'postgres' when using MySQL
cfg.Database = databaseName
// Validate configuration
if err := cfg.Validate(); err != nil {
return fmt.Errorf("configuration error: %w", err)
@ -178,9 +140,10 @@ func runSingleBackup(ctx context.Context, databaseName string) error {
return runBackupPreflight(ctx, databaseName)
}
// Get backup type and base backup from command line flags
backupType := backupTypeFlag
baseBackup := baseBackupFlag
// Get backup type and base backup from command line flags (set via global vars in PreRunE)
// These are populated by cobra flag binding in cmd/backup.go
backupType := "full" // Default to full backup if not specified
baseBackup := "" // Base backup path for incremental backups
// Validate backup type
if backupType != "full" && backupType != "incremental" {
@ -223,17 +186,6 @@ func runSingleBackup(ctx context.Context, databaseName string) error {
user := security.GetCurrentUser()
auditLogger.LogBackupStart(user, databaseName, "single")
// Track start time for notifications
backupStartTime := time.Now()
// Notify: backup started
if notifyManager != nil {
notifyManager.Notify(notify.NewEvent(notify.EventBackupStarted, notify.SeverityInfo, "Database backup started").
WithDatabase(databaseName).
WithDetail("host", cfg.Host).
WithDetail("backup_type", backupType))
}
// Rate limit connection attempts
host := fmt.Sprintf("%s:%d", cfg.Host, cfg.Port)
if err := rateLimiter.CheckAndWait(host); err != nil {
@ -316,13 +268,6 @@ func runSingleBackup(ctx context.Context, databaseName string) error {
if backupErr != nil {
auditLogger.LogBackupFailed(user, databaseName, backupErr)
// Notify: backup failed
if notifyManager != nil {
notifyManager.Notify(notify.NewEvent(notify.EventBackupFailed, notify.SeverityError, "Database backup failed").
WithDatabase(databaseName).
WithError(backupErr).
WithDuration(time.Since(backupStartTime)))
}
return backupErr
}
@ -330,13 +275,6 @@ func runSingleBackup(ctx context.Context, databaseName string) error {
if isEncryptionEnabled() {
if err := encryptLatestBackup(databaseName); err != nil {
log.Error("Failed to encrypt backup", "error", err)
// Notify: encryption failed
if notifyManager != nil {
notifyManager.Notify(notify.NewEvent(notify.EventBackupFailed, notify.SeverityError, "Backup encryption failed").
WithDatabase(databaseName).
WithError(err).
WithDuration(time.Since(backupStartTime)))
}
return fmt.Errorf("backup succeeded but encryption failed: %w", err)
}
log.Info("Backup encrypted successfully")
@ -345,15 +283,6 @@ func runSingleBackup(ctx context.Context, databaseName string) error {
// Audit log: backup success
auditLogger.LogBackupComplete(user, databaseName, cfg.BackupDir, 0)
// Notify: backup completed
if notifyManager != nil {
notifyManager.Notify(notify.NewEvent(notify.EventBackupCompleted, notify.SeveritySuccess, "Database backup completed successfully").
WithDatabase(databaseName).
WithDuration(time.Since(backupStartTime)).
WithDetail("backup_dir", cfg.BackupDir).
WithDetail("backup_type", backupType))
}
// Cleanup old backups if retention policy is enabled
if cfg.RetentionDays > 0 {
retentionPolicy := security.NewRetentionPolicy(cfg.RetentionDays, cfg.MinBackups, log)
@ -383,9 +312,6 @@ func runSampleBackup(ctx context.Context, databaseName string) error {
// Update config from environment
cfg.UpdateFromEnvironment()
// IMPORTANT: Set the database name from positional argument
cfg.Database = databaseName
// Validate configuration
if err := cfg.Validate(); err != nil {
return fmt.Errorf("configuration error: %w", err)


@ -1,318 +0,0 @@
package cmd
import (
"context"
"database/sql"
"fmt"
"os"
"strings"
"text/tabwriter"
"time"
"github.com/spf13/cobra"
_ "github.com/go-sql-driver/mysql"
_ "github.com/jackc/pgx/v5/stdlib" // PostgreSQL driver
)
var blobCmd = &cobra.Command{
Use: "blob",
Short: "Large object (BLOB/BYTEA) operations",
Long: `Analyze and manage large binary objects stored in databases.
Many applications store large binary data (images, PDFs, attachments) directly
in the database. This can cause:
- Slow backups and restores
- Poor deduplication ratios
- Excessive storage usage
The blob commands help you identify and manage this data.
Available Commands:
stats Scan database for blob columns and show size statistics
extract Extract blobs to external storage (coming soon)
rehydrate Restore blobs from external storage (coming soon)`,
}
var blobStatsCmd = &cobra.Command{
Use: "stats",
Short: "Show blob column statistics",
Long: `Scan the database for BLOB/BYTEA columns and display size statistics.
This helps identify tables storing large binary data that might benefit
from blob extraction for faster backups.
PostgreSQL column types detected:
- bytea
- oid (large objects)
MySQL/MariaDB column types detected:
- blob, mediumblob, longblob, tinyblob
- binary, varbinary
Example:
dbbackup blob stats
dbbackup blob stats -d myapp_production`,
RunE: runBlobStats,
}
func init() {
rootCmd.AddCommand(blobCmd)
blobCmd.AddCommand(blobStatsCmd)
}
func runBlobStats(cmd *cobra.Command, args []string) error {
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
defer cancel()
// Connect to database
var db *sql.DB
var err error
if cfg.IsPostgreSQL() {
// PostgreSQL connection string
connStr := fmt.Sprintf("host=%s port=%d user=%s dbname=%s sslmode=disable",
cfg.Host, cfg.Port, cfg.User, cfg.Database)
if cfg.Password != "" {
connStr += fmt.Sprintf(" password=%s", cfg.Password)
}
db, err = sql.Open("pgx", connStr)
} else {
// MySQL DSN
connStr := fmt.Sprintf("%s:%s@tcp(%s:%d)/%s",
cfg.User, cfg.Password, cfg.Host, cfg.Port, cfg.Database)
db, err = sql.Open("mysql", connStr)
}
if err != nil {
return fmt.Errorf("failed to connect: %w", err)
}
defer db.Close()
fmt.Printf("Scanning %s for blob columns...\n\n", cfg.DisplayDatabaseType())
// Discover blob columns
type BlobColumn struct {
Schema string
Table string
Column string
DataType string
RowCount int64
TotalSize int64
AvgSize int64
MaxSize int64
NullCount int64
}
var columns []BlobColumn
if cfg.IsPostgreSQL() {
query := `
SELECT
table_schema,
table_name,
column_name,
data_type
FROM information_schema.columns
WHERE data_type IN ('bytea', 'oid')
AND table_schema NOT IN ('pg_catalog', 'information_schema')
ORDER BY table_schema, table_name, column_name
`
rows, err := db.QueryContext(ctx, query)
if err != nil {
return fmt.Errorf("failed to query columns: %w", err)
}
defer rows.Close()
for rows.Next() {
var col BlobColumn
if err := rows.Scan(&col.Schema, &col.Table, &col.Column, &col.DataType); err != nil {
continue
}
columns = append(columns, col)
}
} else {
query := `
SELECT
TABLE_SCHEMA,
TABLE_NAME,
COLUMN_NAME,
DATA_TYPE
FROM information_schema.COLUMNS
WHERE DATA_TYPE IN ('blob', 'mediumblob', 'longblob', 'tinyblob', 'binary', 'varbinary')
AND TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys')
ORDER BY TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME
`
rows, err := db.QueryContext(ctx, query)
if err != nil {
return fmt.Errorf("failed to query columns: %w", err)
}
defer rows.Close()
for rows.Next() {
var col BlobColumn
if err := rows.Scan(&col.Schema, &col.Table, &col.Column, &col.DataType); err != nil {
continue
}
columns = append(columns, col)
}
}
if len(columns) == 0 {
fmt.Println("✓ No blob columns found in this database")
return nil
}
fmt.Printf("Found %d blob column(s), scanning sizes...\n\n", len(columns))
// Scan each column for size stats
var totalBlobs, totalSize int64
for i := range columns {
col := &columns[i]
var query string
var fullName, colName string
if cfg.IsPostgreSQL() {
fullName = fmt.Sprintf(`"%s"."%s"`, col.Schema, col.Table)
colName = fmt.Sprintf(`"%s"`, col.Column)
query = fmt.Sprintf(`
SELECT
COUNT(*),
COALESCE(SUM(COALESCE(octet_length(%s), 0)), 0),
COALESCE(AVG(COALESCE(octet_length(%s), 0)), 0),
COALESCE(MAX(COALESCE(octet_length(%s), 0)), 0),
COUNT(*) - COUNT(%s)
FROM %s
`, colName, colName, colName, colName, fullName)
} else {
fullName = fmt.Sprintf("`%s`.`%s`", col.Schema, col.Table)
colName = fmt.Sprintf("`%s`", col.Column)
query = fmt.Sprintf(`
SELECT
COUNT(*),
COALESCE(SUM(COALESCE(LENGTH(%s), 0)), 0),
COALESCE(AVG(COALESCE(LENGTH(%s), 0)), 0),
COALESCE(MAX(COALESCE(LENGTH(%s), 0)), 0),
COUNT(*) - COUNT(%s)
FROM %s
`, colName, colName, colName, colName, fullName)
}
scanCtx, scanCancel := context.WithTimeout(ctx, 30*time.Second)
row := db.QueryRowContext(scanCtx, query)
var avgSize float64
err := row.Scan(&col.RowCount, &col.TotalSize, &avgSize, &col.MaxSize, &col.NullCount)
col.AvgSize = int64(avgSize)
scanCancel()
if err != nil {
log.Warn("Failed to scan column", "table", fullName, "column", col.Column, "error", err)
continue
}
totalBlobs += col.RowCount - col.NullCount
totalSize += col.TotalSize
}
// Print summary
fmt.Printf("═══════════════════════════════════════════════════════════════════\n")
fmt.Printf("BLOB STATISTICS SUMMARY\n")
fmt.Printf("═══════════════════════════════════════════════════════════════════\n")
fmt.Printf("Total blob columns: %d\n", len(columns))
fmt.Printf("Total blob values: %s\n", formatNumberWithCommas(totalBlobs))
fmt.Printf("Total blob size: %s\n", formatBytesHuman(totalSize))
fmt.Printf("═══════════════════════════════════════════════════════════════════\n\n")
// Print detailed table
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "SCHEMA\tTABLE\tCOLUMN\tTYPE\tROWS\tNON-NULL\tTOTAL SIZE\tAVG SIZE\tMAX SIZE\n")
fmt.Fprintf(w, "──────\t─────\t──────\t────\t────\t────────\t──────────\t────────\t────────\n")
for _, col := range columns {
nonNull := col.RowCount - col.NullCount
fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\n",
truncateBlobStr(col.Schema, 15),
truncateBlobStr(col.Table, 20),
truncateBlobStr(col.Column, 15),
col.DataType,
formatNumberWithCommas(col.RowCount),
formatNumberWithCommas(nonNull),
formatBytesHuman(col.TotalSize),
formatBytesHuman(col.AvgSize),
formatBytesHuman(col.MaxSize),
)
}
w.Flush()
// Show top tables by size
if len(columns) > 1 {
fmt.Println("\n───────────────────────────────────────────────────────────────────")
fmt.Println("TOP TABLES BY BLOB SIZE:")
// Simple in-place selection sort, descending by TotalSize (fine for small lists)
for i := 0; i < len(columns)-1; i++ {
for j := i + 1; j < len(columns); j++ {
if columns[j].TotalSize > columns[i].TotalSize {
columns[i], columns[j] = columns[j], columns[i]
}
}
}
for i, col := range columns {
if i >= 5 || col.TotalSize == 0 {
break
}
pct := float64(col.TotalSize) / float64(totalSize) * 100
fmt.Printf(" %d. %s.%s.%s: %s (%.1f%%)\n",
i+1, col.Schema, col.Table, col.Column,
formatBytesHuman(col.TotalSize), pct)
}
}
// Recommendations
if totalSize > 100*1024*1024 { // > 100MB
fmt.Println("\n───────────────────────────────────────────────────────────────────")
fmt.Println("RECOMMENDATIONS:")
fmt.Printf(" • You have %s of blob data which could benefit from extraction\n", formatBytesHuman(totalSize))
fmt.Println(" • Consider using 'dbbackup blob extract' to externalize large objects")
fmt.Println(" • This can improve backup speed and deduplication ratios")
}
return nil
}
func formatBytesHuman(bytes int64) string {
const unit = 1024
if bytes < unit {
return fmt.Sprintf("%d B", bytes)
}
div, exp := int64(unit), 0
for n := bytes / unit; n >= unit; n /= unit {
div *= unit
exp++
}
return fmt.Sprintf("%.1f %cB", float64(bytes)/float64(div), "KMGTPE"[exp])
}
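For reference, the byte formatter above uses binary (1024-based) units with one decimal place. A standalone sketch, copying the helper into a small program (the sample values are only illustrative):

```go
package main

import "fmt"

// formatBytesHuman mirrors the helper above: binary units, one decimal place.
func formatBytesHuman(bytes int64) string {
	const unit = 1024
	if bytes < unit {
		return fmt.Sprintf("%d B", bytes)
	}
	div, exp := int64(unit), 0
	for n := bytes / unit; n >= unit; n /= unit {
		div *= unit
		exp++
	}
	return fmt.Sprintf("%.1f %cB", float64(bytes)/float64(div), "KMGTPE"[exp])
}

func main() {
	fmt.Println(formatBytesHuman(512))        // 512 B
	fmt.Println(formatBytesHuman(1536))       // 1.5 KB
	fmt.Println(formatBytesHuman(1073741824)) // 1.0 GB
}
```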
func formatNumberWithCommas(n int64) string {
str := fmt.Sprintf("%d", n)
if len(str) <= 3 {
return str
}
var result strings.Builder
for i, c := range str {
if i > 0 && (len(str)-i)%3 == 0 {
result.WriteRune(',')
}
result.WriteRune(c)
}
return result.String()
}
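The comma formatter walks the decimal string and inserts a separator before every trailing group of three digits. Note that it assumes a non-negative count: a leading minus sign can misplace the first comma when the digit count is a multiple of three. A standalone sketch:

```go
package main

import (
	"fmt"
	"strings"
)

// formatNumberWithCommas mirrors the helper above: insert a comma
// before every trailing group of three digits.
func formatNumberWithCommas(n int64) string {
	str := fmt.Sprintf("%d", n)
	if len(str) <= 3 {
		return str
	}
	var result strings.Builder
	for i, c := range str {
		if i > 0 && (len(str)-i)%3 == 0 {
			result.WriteRune(',')
		}
		result.WriteRune(c)
	}
	return result.String()
}

func main() {
	fmt.Println(formatNumberWithCommas(999))     // 999
	fmt.Println(formatNumberWithCommas(1234567)) // 1,234,567
}
```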
func truncateBlobStr(s string, max int) string {
// Count runes, not bytes, so multi-byte names are not split mid-character
r := []rune(s)
if len(r) <= max {
return s
}
return string(r[:max-1]) + "…"
}


@@ -271,20 +271,12 @@ func runCatalogSync(cmd *cobra.Command, args []string) error {
fmt.Printf(" [OK] Added: %d\n", result.Added)
fmt.Printf(" [SYNC] Updated: %d\n", result.Updated)
fmt.Printf(" [DEL] Removed: %d\n", result.Removed)
if result.Skipped > 0 {
fmt.Printf(" [SKIP] Skipped: %d (legacy files without metadata)\n", result.Skipped)
}
if result.Errors > 0 {
fmt.Printf(" [FAIL] Errors: %d\n", result.Errors)
}
fmt.Printf(" [TIME] Duration: %.2fs\n", result.Duration)
fmt.Printf("=====================================================\n")
// Show legacy backup warning
if result.LegacyWarning != "" {
fmt.Printf("\n[WARN] %s\n", result.LegacyWarning)
}
// Show details if verbose
if catalogVerbose && len(result.Details) > 0 {
fmt.Printf("\nDetails:\n")


@@ -30,12 +30,7 @@ Configuration via flags or environment variables:
--cloud-region DBBACKUP_CLOUD_REGION
--cloud-endpoint DBBACKUP_CLOUD_ENDPOINT
--cloud-access-key DBBACKUP_CLOUD_ACCESS_KEY (or AWS_ACCESS_KEY_ID)
--cloud-secret-key DBBACKUP_CLOUD_SECRET_KEY (or AWS_SECRET_ACCESS_KEY)
--bandwidth-limit DBBACKUP_BANDWIDTH_LIMIT
Bandwidth Limiting:
Limit upload/download speed to avoid saturating network during business hours.
Examples: 10MB/s, 50MiB/s, 100Mbps, unlimited`,
--cloud-secret-key DBBACKUP_CLOUD_SECRET_KEY (or AWS_SECRET_ACCESS_KEY)`,
}
var cloudUploadCmd = &cobra.Command{
@@ -108,16 +103,15 @@ Examples:
}
var (
cloudProvider string
cloudBucket string
cloudRegion string
cloudEndpoint string
cloudAccessKey string
cloudSecretKey string
cloudPrefix string
cloudVerbose bool
cloudConfirm bool
cloudBandwidthLimit string
cloudProvider string
cloudBucket string
cloudRegion string
cloudEndpoint string
cloudAccessKey string
cloudSecretKey string
cloudPrefix string
cloudVerbose bool
cloudConfirm bool
)
func init() {
@@ -133,7 +127,6 @@ func init() {
cmd.Flags().StringVar(&cloudAccessKey, "cloud-access-key", getEnv("DBBACKUP_CLOUD_ACCESS_KEY", getEnv("AWS_ACCESS_KEY_ID", "")), "Access key")
cmd.Flags().StringVar(&cloudSecretKey, "cloud-secret-key", getEnv("DBBACKUP_CLOUD_SECRET_KEY", getEnv("AWS_SECRET_ACCESS_KEY", "")), "Secret key")
cmd.Flags().StringVar(&cloudPrefix, "cloud-prefix", getEnv("DBBACKUP_CLOUD_PREFIX", ""), "Key prefix")
cmd.Flags().StringVar(&cloudBandwidthLimit, "bandwidth-limit", getEnv("DBBACKUP_BANDWIDTH_LIMIT", ""), "Bandwidth limit (e.g., 10MB/s, 100Mbps, 50MiB/s)")
cmd.Flags().BoolVarP(&cloudVerbose, "verbose", "v", false, "Verbose output")
}
@@ -148,40 +141,24 @@ func getEnv(key, defaultValue string) string {
}
func getCloudBackend() (cloud.Backend, error) {
// Parse bandwidth limit
var bandwidthLimit int64
if cloudBandwidthLimit != "" {
var err error
bandwidthLimit, err = cloud.ParseBandwidth(cloudBandwidthLimit)
if err != nil {
return nil, fmt.Errorf("invalid bandwidth limit: %w", err)
}
}
cfg := &cloud.Config{
Provider: cloudProvider,
Bucket: cloudBucket,
Region: cloudRegion,
Endpoint: cloudEndpoint,
AccessKey: cloudAccessKey,
SecretKey: cloudSecretKey,
Prefix: cloudPrefix,
UseSSL: true,
PathStyle: cloudProvider == "minio",
Timeout: 300,
MaxRetries: 3,
BandwidthLimit: bandwidthLimit,
Provider: cloudProvider,
Bucket: cloudBucket,
Region: cloudRegion,
Endpoint: cloudEndpoint,
AccessKey: cloudAccessKey,
SecretKey: cloudSecretKey,
Prefix: cloudPrefix,
UseSSL: true,
PathStyle: cloudProvider == "minio",
Timeout: 300,
MaxRetries: 3,
}
if cfg.Bucket == "" {
return nil, fmt.Errorf("bucket name is required (use --cloud-bucket or DBBACKUP_CLOUD_BUCKET)")
}
// Log bandwidth limit if set
if bandwidthLimit > 0 {
fmt.Printf("📊 Bandwidth limit: %s\n", cloud.FormatBandwidth(bandwidthLimit))
}
backend, err := cloud.NewBackend(cfg)
if err != nil {
return nil, fmt.Errorf("failed to create cloud backend: %w", err)


@@ -1,6 +1,7 @@
package cmd
import (
"compress/gzip"
"crypto/sha256"
"encoding/hex"
"fmt"
@@ -13,7 +14,6 @@ import (
"dbbackup/internal/dedup"
"github.com/klauspost/pgzip"
"github.com/spf13/cobra"
)
@@ -164,8 +164,8 @@ var (
// metrics flags
var (
dedupMetricsOutput string
dedupMetricsServer string
dedupMetricsOutput string
dedupMetricsInstance string
)
var dedupMetricsCmd = &cobra.Command{
@@ -241,7 +241,7 @@ func init() {
// Metrics flags
dedupMetricsCmd.Flags().StringVarP(&dedupMetricsOutput, "output", "o", "", "Output file path (default: stdout)")
dedupMetricsCmd.Flags().StringVar(&dedupMetricsServer, "server", "", "Server label for metrics (default: hostname)")
dedupMetricsCmd.Flags().StringVar(&dedupMetricsInstance, "instance", "", "Instance label for metrics (default: hostname)")
}
func getDedupDir() string {
@@ -295,7 +295,7 @@ func runDedupBackup(cmd *cobra.Command, args []string) error {
if isGzipped && dedupDecompress {
fmt.Printf("Auto-decompressing gzip input for better dedup ratio...\n")
gzReader, err := pgzip.NewReader(file)
gzReader, err := gzip.NewReader(file)
if err != nil {
return fmt.Errorf("failed to decompress gzip input: %w", err)
}
@@ -596,7 +596,7 @@ func runDedupList(cmd *cobra.Command, args []string) error {
func runDedupStats(cmd *cobra.Command, args []string) error {
basePath := getDedupDir()
index, err := dedup.NewChunkIndexAt(getIndexDBPath())
index, err := dedup.NewChunkIndex(basePath)
if err != nil {
return fmt.Errorf("failed to open chunk index: %w", err)
}
@@ -642,7 +642,7 @@ func runDedupStats(cmd *cobra.Command, args []string) error {
func runDedupGC(cmd *cobra.Command, args []string) error {
basePath := getDedupDir()
index, err := dedup.NewChunkIndexAt(getIndexDBPath())
index, err := dedup.NewChunkIndex(basePath)
if err != nil {
return fmt.Errorf("failed to open chunk index: %w", err)
}
@@ -702,7 +702,7 @@ func runDedupDelete(cmd *cobra.Command, args []string) error {
return fmt.Errorf("failed to open manifest store: %w", err)
}
index, err := dedup.NewChunkIndexAt(getIndexDBPath())
index, err := dedup.NewChunkIndex(basePath)
if err != nil {
return fmt.Errorf("failed to open chunk index: %w", err)
}
@@ -1258,10 +1258,10 @@ func runDedupMetrics(cmd *cobra.Command, args []string) error {
basePath := getDedupDir()
indexPath := getIndexDBPath()
server := dedupMetricsServer
if server == "" {
instance := dedupMetricsInstance
if instance == "" {
hostname, _ := os.Hostname()
server = hostname
instance = hostname
}
metrics, err := dedup.CollectMetrics(basePath, indexPath)
@@ -1269,10 +1269,10 @@ func runDedupMetrics(cmd *cobra.Command, args []string) error {
return fmt.Errorf("failed to collect metrics: %w", err)
}
output := dedup.FormatPrometheusMetrics(metrics, server)
output := dedup.FormatPrometheusMetrics(metrics, instance)
if dedupMetricsOutput != "" {
if err := dedup.WritePrometheusTextfile(dedupMetricsOutput, server, basePath, indexPath); err != nil {
if err := dedup.WritePrometheusTextfile(dedupMetricsOutput, instance, basePath, indexPath); err != nil {
return fmt.Errorf("failed to write metrics: %w", err)
}
fmt.Printf("Wrote metrics to %s\n", dedupMetricsOutput)


@@ -16,7 +16,8 @@ import (
)
var (
drillDatabaseName string
drillBackupPath string
drillDatabaseName string
drillDatabaseType string
drillImage string
drillPort int


@@ -65,3 +65,13 @@ func loadEncryptionKey(keyFile, keyEnvVar string) ([]byte, error) {
func isEncryptionEnabled() bool {
return encryptBackupFlag
}
// generateEncryptionKey generates a new random encryption key
func generateEncryptionKey() ([]byte, error) {
salt, err := crypto.GenerateSalt()
if err != nil {
return nil, err
}
// Key generation only: feed the random salt in as both password and salt, yielding a random key
return crypto.DeriveKey(salt, salt), nil
}


@@ -5,19 +5,17 @@ import (
"fmt"
"os"
"os/signal"
"path/filepath"
"syscall"
"dbbackup/internal/catalog"
"dbbackup/internal/prometheus"
"github.com/spf13/cobra"
)
var (
metricsServer string
metricsOutput string
metricsPort int
metricsInstance string
metricsOutput string
metricsPort int
)
// metricsCmd represents the metrics command
@@ -47,7 +45,7 @@ Examples:
dbbackup metrics export --output /var/lib/dbbackup/metrics/dbbackup.prom
# Export for specific instance
dbbackup metrics export --server production --output /var/lib/dbbackup/metrics/production.prom
dbbackup metrics export --instance production --output /var/lib/dbbackup/metrics/production.prom
After export, configure node_exporter with:
--collector.textfile.directory=/var/lib/dbbackup/metrics/
@@ -86,56 +84,37 @@ Endpoints:
},
}
var metricsCatalogDB string
func init() {
rootCmd.AddCommand(metricsCmd)
metricsCmd.AddCommand(metricsExportCmd)
metricsCmd.AddCommand(metricsServeCmd)
// Default catalog path (same as catalog command)
home, _ := os.UserHomeDir()
defaultCatalogPath := filepath.Join(home, ".dbbackup", "catalog.db")
// Export flags
metricsExportCmd.Flags().StringVar(&metricsServer, "server", "", "Server name for metrics labels (default: hostname)")
metricsExportCmd.Flags().StringVar(&metricsInstance, "instance", "default", "Instance name for metrics labels")
metricsExportCmd.Flags().StringVarP(&metricsOutput, "output", "o", "/var/lib/dbbackup/metrics/dbbackup.prom", "Output file path")
metricsExportCmd.Flags().StringVar(&metricsCatalogDB, "catalog-db", defaultCatalogPath, "Path to catalog SQLite database")
// Serve flags
metricsServeCmd.Flags().StringVar(&metricsServer, "server", "", "Server name for metrics labels (default: hostname)")
metricsServeCmd.Flags().StringVar(&metricsInstance, "instance", "default", "Instance name for metrics labels")
metricsServeCmd.Flags().IntVarP(&metricsPort, "port", "p", 9399, "HTTP server port")
metricsServeCmd.Flags().StringVar(&metricsCatalogDB, "catalog-db", defaultCatalogPath, "Path to catalog SQLite database")
}
func runMetricsExport(ctx context.Context) error {
// Auto-detect hostname if server not specified
server := metricsServer
if server == "" {
hostname, err := os.Hostname()
if err != nil {
server = "unknown"
} else {
server = hostname
}
}
// Open catalog using specified path
cat, err := catalog.NewSQLiteCatalog(metricsCatalogDB)
// Open catalog
cat, err := openCatalog()
if err != nil {
return fmt.Errorf("failed to open catalog: %w", err)
}
defer cat.Close()
// Create metrics writer with version info
writer := prometheus.NewMetricsWriterWithVersion(log, cat, server, cfg.Version, cfg.GitCommit)
// Create metrics writer
writer := prometheus.NewMetricsWriter(log, cat, metricsInstance)
// Write textfile
if err := writer.WriteTextfile(metricsOutput); err != nil {
return fmt.Errorf("failed to write metrics: %w", err)
}
log.Info("Exported metrics to textfile", "path", metricsOutput, "server", server)
log.Info("Exported metrics to textfile", "path", metricsOutput, "instance", metricsInstance)
return nil
}
@@ -144,26 +123,15 @@ func runMetricsServe(ctx context.Context) error {
ctx, cancel := signal.NotifyContext(ctx, os.Interrupt, syscall.SIGTERM)
defer cancel()
// Auto-detect hostname if server not specified
server := metricsServer
if server == "" {
hostname, err := os.Hostname()
if err != nil {
server = "unknown"
} else {
server = hostname
}
}
// Open catalog using specified path
cat, err := catalog.NewSQLiteCatalog(metricsCatalogDB)
// Open catalog
cat, err := openCatalog()
if err != nil {
return fmt.Errorf("failed to open catalog: %w", err)
}
defer cat.Close()
// Create exporter with version info
exporter := prometheus.NewExporterWithVersion(log, cat, server, metricsPort, cfg.Version, cfg.GitCommit)
// Create exporter
exporter := prometheus.NewExporter(log, cat, metricsInstance, metricsPort)
// Run server (blocks until context is cancelled)
return exporter.Serve(ctx)


@@ -5,6 +5,7 @@ import (
"database/sql"
"fmt"
"os"
"path/filepath"
"time"
"github.com/spf13/cobra"
@@ -43,6 +44,7 @@ var (
mysqlArchiveInterval string
mysqlRequireRowFormat bool
mysqlRequireGTID bool
mysqlWatchMode bool
)
// pitrCmd represents the pitr command group
@@ -1309,3 +1311,14 @@ func runMySQLPITREnable(cmd *cobra.Command, args []string) error {
return nil
}
// getMySQLBinlogDir attempts to determine the binlog directory from MySQL
func getMySQLBinlogDir(ctx context.Context, db *sql.DB) (string, error) {
var logBinBasename string
err := db.QueryRowContext(ctx, "SELECT @@log_bin_basename").Scan(&logBinBasename)
if err != nil {
return "", err
}
return filepath.Dir(logBinBasename), nil
}


@@ -1,6 +1,7 @@
package cmd
import (
"compress/gzip"
"context"
"fmt"
"io"
@@ -14,7 +15,6 @@ import (
"dbbackup/internal/logger"
"dbbackup/internal/tui"
"github.com/klauspost/pgzip"
"github.com/spf13/cobra"
)
@@ -66,15 +66,6 @@ TUI Automation Flags (for testing and CI/CD):
cfg.TUIVerbose, _ = cmd.Flags().GetBool("verbose-tui")
cfg.TUILogFile, _ = cmd.Flags().GetString("tui-log-file")
// Set conservative profile as default for TUI mode (safer for interactive users)
if cfg.ResourceProfile == "" || cfg.ResourceProfile == "balanced" {
cfg.ResourceProfile = "conservative"
cfg.LargeDBMode = true
if cfg.Debug {
log.Info("TUI mode: using conservative profile by default")
}
}
// Check authentication before starting TUI
if cfg.IsPostgreSQL() {
if mismatch, msg := auth.CheckAuthenticationMismatch(cfg); mismatch {
@@ -391,6 +382,92 @@ func checkSystemResources() error {
return nil
}
// runRestore restores database from backup archive
func runRestore(ctx context.Context, archiveName string) error {
fmt.Println("==============================================================")
fmt.Println(" Database Restore")
fmt.Println("==============================================================")
// Construct full path to archive
archivePath := filepath.Join(cfg.BackupDir, archiveName)
// Check if archive exists
if _, err := os.Stat(archivePath); os.IsNotExist(err) {
return fmt.Errorf("backup archive not found: %s", archivePath)
}
// Detect archive type
archiveType := detectArchiveType(archiveName)
fmt.Printf("Archive: %s\n", archiveName)
fmt.Printf("Type: %s\n", archiveType)
fmt.Printf("Location: %s\n", archivePath)
fmt.Println()
// Get archive info
stat, err := os.Stat(archivePath)
if err != nil {
return fmt.Errorf("cannot access archive: %w", err)
}
fmt.Printf("Size: %s\n", formatFileSize(stat.Size()))
fmt.Printf("Created: %s\n", stat.ModTime().Format("2006-01-02 15:04:05"))
fmt.Println()
// Show warning
fmt.Println("[WARN] WARNING: This will restore data to the target database.")
fmt.Println(" Existing data may be overwritten or merged depending on the restore method.")
fmt.Println()
// For safety, show what would be done without actually doing it
switch archiveType {
case "Single Database (.dump)":
fmt.Println("[EXEC] Would execute: pg_restore to restore single database")
fmt.Printf(" Command: pg_restore -h %s -p %d -U %s -d %s --verbose %s\n",
cfg.Host, cfg.Port, cfg.User, cfg.Database, archivePath)
case "Single Database (.dump.gz)":
fmt.Println("[EXEC] Would execute: gunzip and pg_restore to restore single database")
fmt.Printf(" Command: gunzip -c %s | pg_restore -h %s -p %d -U %s -d %s --verbose\n",
archivePath, cfg.Host, cfg.Port, cfg.User, cfg.Database)
case "SQL Script (.sql)":
if cfg.IsPostgreSQL() {
fmt.Println("[EXEC] Would execute: psql to run SQL script")
fmt.Printf(" Command: psql -h %s -p %d -U %s -d %s -f %s\n",
cfg.Host, cfg.Port, cfg.User, cfg.Database, archivePath)
} else if cfg.IsMySQL() {
fmt.Println("[EXEC] Would execute: mysql to run SQL script")
fmt.Printf(" Command: %s\n", mysqlRestoreCommand(archivePath, false))
} else {
fmt.Println("[EXEC] Would execute: SQL client to run script (database type unknown)")
}
case "SQL Script (.sql.gz)":
if cfg.IsPostgreSQL() {
fmt.Println("[EXEC] Would execute: gunzip and psql to run SQL script")
fmt.Printf(" Command: gunzip -c %s | psql -h %s -p %d -U %s -d %s\n",
archivePath, cfg.Host, cfg.Port, cfg.User, cfg.Database)
} else if cfg.IsMySQL() {
fmt.Println("[EXEC] Would execute: gunzip and mysql to run SQL script")
fmt.Printf(" Command: %s\n", mysqlRestoreCommand(archivePath, true))
} else {
fmt.Println("[EXEC] Would execute: gunzip and SQL client to run script (database type unknown)")
}
case "Cluster Backup (.tar.gz)":
fmt.Println("[EXEC] Would execute: Extract and restore cluster backup")
fmt.Println(" Steps:")
fmt.Println(" 1. Extract tar.gz archive")
fmt.Println(" 2. Restore global objects (roles, tablespaces)")
fmt.Println(" 3. Restore individual databases")
default:
return fmt.Errorf("unsupported archive type: %s", archiveType)
}
fmt.Println()
fmt.Println("[SAFETY] SAFETY MODE: Restore command is in preview mode.")
fmt.Println(" This shows what would be executed without making changes.")
fmt.Println(" To enable actual restore, add --confirm flag (not yet implemented).")
return nil
}
func detectArchiveType(filename string) string {
switch {
case strings.HasSuffix(filename, ".dump.gz"):
@@ -572,7 +649,7 @@ func verifyPgDumpGzip(path string) error {
}
defer file.Close()
gz, err := pgzip.NewReader(file)
gz, err := gzip.NewReader(file)
if err != nil {
return fmt.Errorf("failed to open gzip stream: %w", err)
}
@@ -621,7 +698,7 @@ func verifyGzipSqlScript(path string) error {
}
defer file.Close()
gz, err := pgzip.NewReader(file)
gz, err := gzip.NewReader(file)
if err != nil {
return fmt.Errorf("failed to open gzip stream: %w", err)
}
@@ -689,3 +766,33 @@ func containsSQLKeywords(content string) bool {
return false
}
func mysqlRestoreCommand(archivePath string, compressed bool) string {
parts := []string{"mysql"}
// Only add -h flag if host is not localhost (to use Unix socket)
if cfg.Host != "localhost" && cfg.Host != "127.0.0.1" && cfg.Host != "" {
parts = append(parts, "-h", cfg.Host)
}
parts = append(parts,
"-P", fmt.Sprintf("%d", cfg.Port),
"-u", cfg.User,
)
if cfg.Password != "" {
parts = append(parts, fmt.Sprintf("-p'%s'", cfg.Password))
}
if cfg.Database != "" {
parts = append(parts, cfg.Database)
}
command := strings.Join(parts, " ")
if compressed {
return fmt.Sprintf("gunzip -c %s | %s", archivePath, command)
}
return fmt.Sprintf("%s < %s", command, archivePath)
}
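The helper above only builds a display string for preview output. A standalone sketch with a hypothetical connection struct (the real helper reads the global cfg) shows both output shapes:

```go
package main

import (
	"fmt"
	"strings"
)

// conn holds the handful of settings mysqlRestoreCommand reads from
// the global cfg; the values used below are illustrative only.
type conn struct {
	Host, User, Password, Database string
	Port                           int
}

// restoreCommand mirrors mysqlRestoreCommand above.
func restoreCommand(c conn, archivePath string, compressed bool) string {
	parts := []string{"mysql"}
	// Skip -h for localhost so the client uses the Unix socket.
	if c.Host != "localhost" && c.Host != "127.0.0.1" && c.Host != "" {
		parts = append(parts, "-h", c.Host)
	}
	parts = append(parts, "-P", fmt.Sprintf("%d", c.Port), "-u", c.User)
	if c.Password != "" {
		parts = append(parts, fmt.Sprintf("-p'%s'", c.Password))
	}
	if c.Database != "" {
		parts = append(parts, c.Database)
	}
	command := strings.Join(parts, " ")
	if compressed {
		// Compressed input is streamed through gunzip into the client.
		return fmt.Sprintf("gunzip -c %s | %s", archivePath, command)
	}
	return fmt.Sprintf("%s < %s", command, archivePath)
}

func main() {
	c := conn{Host: "db.example.com", Port: 3306, User: "app", Database: "mydb"}
	fmt.Println(restoreCommand(c, "/backups/mydb.sql", false))
	fmt.Println(restoreCommand(c, "/backups/mydb.sql.gz", true))
}
```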


@@ -13,11 +13,8 @@ import (
"dbbackup/internal/backup"
"dbbackup/internal/cloud"
"dbbackup/internal/config"
"dbbackup/internal/database"
"dbbackup/internal/notify"
"dbbackup/internal/pitr"
"dbbackup/internal/progress"
"dbbackup/internal/restore"
"dbbackup/internal/security"
@@ -25,30 +22,20 @@
)
var (
restoreConfirm bool
restoreDryRun bool
restoreForce bool
restoreClean bool
restoreCreate bool
restoreJobs int
restoreParallelDBs int // Number of parallel database restores
restoreProfile string // Resource profile: conservative, balanced, aggressive
restoreTarget string
restoreVerbose bool
restoreNoProgress bool
restoreWorkdir string
restoreCleanCluster bool
restoreDiagnose bool // Run diagnosis before restore
restoreSaveDebugLog string // Path to save debug log on failure
restoreDebugLocks bool // Enable detailed lock debugging
restoreOOMProtection bool // Enable OOM protection for large restores
restoreLowMemory bool // Force low-memory mode for constrained systems
// Single database extraction from cluster flags
restoreDatabase string // Single database to extract/restore from cluster
restoreDatabases string // Comma-separated list of databases to extract
restoreOutputDir string // Extract to directory (no restore)
restoreListDBs bool // List databases in cluster backup
restoreConfirm bool
restoreDryRun bool
restoreForce bool
restoreClean bool
restoreCreate bool
restoreJobs int
restoreParallelDBs int // Number of parallel database restores
restoreTarget string
restoreVerbose bool
restoreNoProgress bool
restoreWorkdir string
restoreCleanCluster bool
restoreDiagnose bool // Run diagnosis before restore
restoreSaveDebugLog string // Path to save debug log on failure
// Diagnose flags
diagnoseJSON bool
@@ -125,9 +112,6 @@ Examples:
# Restore to different database
dbbackup restore single mydb.dump.gz --target mydb_test --confirm
# Memory-constrained server (single-threaded, minimal memory)
dbbackup restore single mydb.dump.gz --profile=conservative --confirm
# Clean target database before restore
dbbackup restore single mydb.sql.gz --clean --confirm
@@ -147,11 +131,6 @@ var restoreClusterCmd = &cobra.Command{
This command restores all databases that were backed up together
in a cluster backup operation.
Single Database Extraction:
Use --list-databases to see available databases
Use --database to extract/restore a specific database
Use --output-dir to extract without restoring
Safety features:
- Dry-run by default (use --confirm to execute)
- Archive validation and listing
@@ -159,33 +138,12 @@ Safety features:
- Sequential database restoration
Examples:
# List databases in cluster backup
dbbackup restore cluster backup.tar.gz --list-databases
# Extract single database (no restore)
dbbackup restore cluster backup.tar.gz --database myapp --output-dir /tmp/extract
# Restore single database from cluster
dbbackup restore cluster backup.tar.gz --database myapp --confirm
# Restore single database with different name
dbbackup restore cluster backup.tar.gz --database myapp --target myapp_test --confirm
# Extract multiple databases
dbbackup restore cluster backup.tar.gz --databases "app1,app2,app3" --output-dir /tmp/extract
# Preview cluster restore
dbbackup restore cluster cluster_backup_20240101_120000.tar.gz
# Restore full cluster
dbbackup restore cluster cluster_backup_20240101_120000.tar.gz --confirm
# Memory-constrained server (conservative profile)
dbbackup restore cluster cluster_backup.tar.gz --profile=conservative --confirm
# Maximum performance (dedicated server)
dbbackup restore cluster cluster_backup.tar.gz --profile=aggressive --confirm
# Use parallel decompression
dbbackup restore cluster cluster_backup.tar.gz --jobs 4 --confirm
@@ -282,7 +240,7 @@ Use this when:
Checks performed:
- File format detection (custom dump vs SQL)
- PGDMP signature verification
- Compression integrity validation (pgzip)
- Gzip integrity validation
- COPY block termination check
- pg_restore --list verification
- Cluster archive structure validation
@@ -319,27 +277,20 @@ func init() {
restoreSingleCmd.Flags().BoolVar(&restoreClean, "clean", false, "Drop and recreate target database")
restoreSingleCmd.Flags().BoolVar(&restoreCreate, "create", false, "Create target database if it doesn't exist")
restoreSingleCmd.Flags().StringVar(&restoreTarget, "target", "", "Target database name (defaults to original)")
restoreSingleCmd.Flags().StringVar(&restoreProfile, "profile", "balanced", "Resource profile: conservative (--parallel=1, low memory), balanced, aggressive (max performance)")
restoreSingleCmd.Flags().BoolVar(&restoreVerbose, "verbose", false, "Show detailed restore progress")
restoreSingleCmd.Flags().BoolVar(&restoreNoProgress, "no-progress", false, "Disable progress indicators")
restoreSingleCmd.Flags().StringVar(&restoreEncryptionKeyFile, "encryption-key-file", "", "Path to encryption key file (required for encrypted backups)")
restoreSingleCmd.Flags().StringVar(&restoreEncryptionKeyEnv, "encryption-key-env", "DBBACKUP_ENCRYPTION_KEY", "Environment variable containing encryption key")
restoreSingleCmd.Flags().BoolVar(&restoreDiagnose, "diagnose", false, "Run deep diagnosis before restore to detect corruption/truncation")
restoreSingleCmd.Flags().StringVar(&restoreSaveDebugLog, "save-debug-log", "", "Save detailed error report to file on failure (e.g., /tmp/restore-debug.json)")
restoreSingleCmd.Flags().BoolVar(&restoreDebugLocks, "debug-locks", false, "Enable detailed lock debugging (captures PostgreSQL config, Guard decisions, boost attempts)")
// Cluster restore flags
restoreClusterCmd.Flags().BoolVar(&restoreListDBs, "list-databases", false, "List databases in cluster backup and exit")
restoreClusterCmd.Flags().StringVar(&restoreDatabase, "database", "", "Extract/restore single database from cluster")
restoreClusterCmd.Flags().StringVar(&restoreDatabases, "databases", "", "Extract multiple databases (comma-separated)")
restoreClusterCmd.Flags().StringVar(&restoreOutputDir, "output-dir", "", "Extract to directory without restoring (requires --database or --databases)")
restoreClusterCmd.Flags().BoolVar(&restoreConfirm, "confirm", false, "Confirm and execute restore (required)")
restoreClusterCmd.Flags().BoolVar(&restoreDryRun, "dry-run", false, "Show what would be done without executing")
restoreClusterCmd.Flags().BoolVar(&restoreForce, "force", false, "Skip safety checks and confirmations")
restoreClusterCmd.Flags().BoolVar(&restoreCleanCluster, "clean-cluster", false, "Drop all existing user databases before restore (disaster recovery)")
restoreClusterCmd.Flags().StringVar(&restoreProfile, "profile", "conservative", "Resource profile: conservative (single-threaded, prevents lock issues), balanced (auto-detect), aggressive (max speed)")
restoreClusterCmd.Flags().IntVar(&restoreJobs, "jobs", 0, "Number of parallel decompression jobs (0 = auto, overrides profile)")
restoreClusterCmd.Flags().IntVar(&restoreParallelDBs, "parallel-dbs", 0, "Number of databases to restore in parallel (0 = use profile, 1 = sequential, -1 = auto-detect, overrides profile)")
restoreClusterCmd.Flags().IntVar(&restoreJobs, "jobs", 0, "Number of parallel decompression jobs (0 = auto)")
restoreClusterCmd.Flags().IntVar(&restoreParallelDBs, "parallel-dbs", 0, "Number of databases to restore in parallel (0 = use config default, 1 = sequential, -1 = auto-detect based on CPU/RAM)")
restoreClusterCmd.Flags().StringVar(&restoreWorkdir, "workdir", "", "Working directory for extraction (use when system disk is small, e.g. /mnt/storage/restore_tmp)")
restoreClusterCmd.Flags().BoolVar(&restoreVerbose, "verbose", false, "Show detailed restore progress")
restoreClusterCmd.Flags().BoolVar(&restoreNoProgress, "no-progress", false, "Disable progress indicators")
@@ -347,11 +298,6 @@ func init() {
restoreClusterCmd.Flags().StringVar(&restoreEncryptionKeyEnv, "encryption-key-env", "DBBACKUP_ENCRYPTION_KEY", "Environment variable containing encryption key")
restoreClusterCmd.Flags().BoolVar(&restoreDiagnose, "diagnose", false, "Run deep diagnosis on all dumps before restore")
restoreClusterCmd.Flags().StringVar(&restoreSaveDebugLog, "save-debug-log", "", "Save detailed error report to file on failure (e.g., /tmp/restore-debug.json)")
restoreClusterCmd.Flags().BoolVar(&restoreDebugLocks, "debug-locks", false, "Enable detailed lock debugging (captures PostgreSQL config, Guard decisions, boost attempts)")
restoreClusterCmd.Flags().BoolVar(&restoreClean, "clean", false, "Drop and recreate target database (for single DB restore)")
restoreClusterCmd.Flags().BoolVar(&restoreCreate, "create", false, "Create target database if it doesn't exist (for single DB restore)")
restoreClusterCmd.Flags().BoolVar(&restoreOOMProtection, "oom-protection", false, "Enable OOM protection: disable swap, tune PostgreSQL memory, protect from OOM killer")
restoreClusterCmd.Flags().BoolVar(&restoreLowMemory, "low-memory", false, "Force low-memory mode: single-threaded restore with minimal memory (use for <8GB RAM or very large backups)")
// PITR restore flags
restorePITRCmd.Flags().StringVar(&pitrBaseBackup, "base-backup", "", "Path to base backup file (.tar.gz) (required)")
@@ -490,16 +436,6 @@ func runRestoreDiagnose(cmd *cobra.Command, args []string) error {
func runRestoreSingle(cmd *cobra.Command, args []string) error {
archivePath := args[0]
// Apply resource profile
if err := config.ApplyProfile(cfg, restoreProfile, restoreJobs, 0); err != nil {
log.Warn("Invalid profile, using balanced", "error", err)
restoreProfile = "balanced"
_ = config.ApplyProfile(cfg, restoreProfile, restoreJobs, 0)
}
if cfg.Debug && restoreProfile != "balanced" {
log.Info("Using restore profile", "profile", restoreProfile)
}
// Check if this is a cloud URI
var cleanupFunc func() error
@@ -638,12 +574,6 @@ func runRestoreSingle(cmd *cobra.Command, args []string) error {
log.Info("Debug logging enabled", "output", restoreSaveDebugLog)
}
// Enable lock debugging if requested (single restore)
if restoreDebugLocks {
cfg.DebugLocks = true
log.Info("🔍 Lock debugging enabled - will capture PostgreSQL lock config, Guard decisions, boost attempts")
}
// Setup signal handling
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
@@ -697,36 +627,14 @@ func runRestoreSingle(cmd *cobra.Command, args []string) error {
startTime := time.Now()
auditLogger.LogRestoreStart(user, targetDB, archivePath)
// Notify: restore started
if notifyManager != nil {
notifyManager.Notify(notify.NewEvent(notify.EventRestoreStarted, notify.SeverityInfo, "Database restore started").
WithDatabase(targetDB).
WithDetail("archive", filepath.Base(archivePath)))
}
if err := engine.RestoreSingle(ctx, archivePath, targetDB, restoreClean, restoreCreate); err != nil {
auditLogger.LogRestoreFailed(user, targetDB, err)
// Notify: restore failed
if notifyManager != nil {
notifyManager.Notify(notify.NewEvent(notify.EventRestoreFailed, notify.SeverityError, "Database restore failed").
WithDatabase(targetDB).
WithError(err).
WithDuration(time.Since(startTime)))
}
return fmt.Errorf("restore failed: %w", err)
}
// Audit log: restore success
auditLogger.LogRestoreComplete(user, targetDB, time.Since(startTime))
// Notify: restore completed
if notifyManager != nil {
notifyManager.Notify(notify.NewEvent(notify.EventRestoreCompleted, notify.SeveritySuccess, "Database restore completed successfully").
WithDatabase(targetDB).
WithDuration(time.Since(startTime)).
WithDetail("archive", filepath.Base(archivePath)))
}
log.Info("[OK] Restore completed successfully", "database", targetDB)
return nil
}
@@ -749,203 +657,6 @@ func runRestoreCluster(cmd *cobra.Command, args []string) error {
return fmt.Errorf("archive not found: %s", archivePath)
}
// Handle --list-databases flag
if restoreListDBs {
return runListDatabases(archivePath)
}
// Handle single/multiple database extraction
if restoreDatabase != "" || restoreDatabases != "" {
return runExtractDatabases(archivePath)
}
// Otherwise proceed with full cluster restore
return runFullClusterRestore(archivePath)
}
// runListDatabases lists all databases in a cluster backup
func runListDatabases(archivePath string) error {
ctx := context.Background()
log.Info("Scanning cluster backup", "archive", filepath.Base(archivePath))
fmt.Println()
databases, err := restore.ListDatabasesInCluster(ctx, archivePath, log)
if err != nil {
return fmt.Errorf("failed to list databases: %w", err)
}
fmt.Printf("📦 Databases in cluster backup:\n")
var totalSize int64
for _, db := range databases {
sizeStr := formatSize(db.Size)
fmt.Printf(" - %-30s (%s)\n", db.Name, sizeStr)
totalSize += db.Size
}
fmt.Printf("\nTotal: %s across %d database(s)\n", formatSize(totalSize), len(databases))
return nil
}
// runExtractDatabases extracts single or multiple databases from cluster backup
func runExtractDatabases(archivePath string) error {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// Setup signal handling
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, os.Interrupt, syscall.SIGTERM)
defer signal.Stop(sigChan)
go func() {
<-sigChan
log.Warn("Extraction interrupted by user")
cancel()
}()
// Single database extraction
if restoreDatabase != "" {
return handleSingleDatabaseExtraction(ctx, archivePath, restoreDatabase)
}
// Multiple database extraction
if restoreDatabases != "" {
return handleMultipleDatabaseExtraction(ctx, archivePath, restoreDatabases)
}
return nil
}
// handleSingleDatabaseExtraction handles single database extraction or restore
func handleSingleDatabaseExtraction(ctx context.Context, archivePath, dbName string) error {
// Extract-only mode (no restore)
if restoreOutputDir != "" {
return extractSingleDatabase(ctx, archivePath, dbName, restoreOutputDir)
}
// Restore mode
if !restoreConfirm {
fmt.Println("\n[DRY-RUN] DRY-RUN MODE - No changes will be made")
fmt.Printf("\nWould extract and restore:\n")
fmt.Printf(" Database: %s\n", dbName)
fmt.Printf(" From: %s\n", archivePath)
targetDB := restoreTarget
if targetDB == "" {
targetDB = dbName
}
fmt.Printf(" Target: %s\n", targetDB)
if restoreClean {
fmt.Printf(" Clean: true (drop and recreate)\n")
}
if restoreCreate {
fmt.Printf(" Create: true (create if missing)\n")
}
fmt.Println("\nTo execute this restore, add --confirm flag")
return nil
}
// Create database instance
db, err := database.New(cfg, log)
if err != nil {
return fmt.Errorf("failed to create database instance: %w", err)
}
defer db.Close()
// Create restore engine
engine := restore.New(cfg, log, db)
// Determine target database name
targetDB := restoreTarget
if targetDB == "" {
targetDB = dbName
}
log.Info("Restoring single database from cluster", "database", dbName, "target", targetDB)
// Restore single database from cluster
if err := engine.RestoreSingleFromCluster(ctx, archivePath, dbName, targetDB, restoreClean, restoreCreate); err != nil {
return fmt.Errorf("restore failed: %w", err)
}
fmt.Printf("\n✅ Successfully restored '%s' as '%s'\n", dbName, targetDB)
return nil
}
// extractSingleDatabase extracts a single database without restoring
func extractSingleDatabase(ctx context.Context, archivePath, dbName, outputDir string) error {
log.Info("Extracting database", "database", dbName, "output", outputDir)
// Create progress indicator
prog := progress.NewIndicator(!restoreNoProgress, "dots")
extractedPath, err := restore.ExtractDatabaseFromCluster(ctx, archivePath, dbName, outputDir, log, prog)
if err != nil {
return fmt.Errorf("extraction failed: %w", err)
}
fmt.Printf("\n✅ Extracted: %s\n", extractedPath)
fmt.Printf(" Database: %s\n", dbName)
fmt.Printf(" Location: %s\n", outputDir)
return nil
}
// handleMultipleDatabaseExtraction handles multiple database extraction
func handleMultipleDatabaseExtraction(ctx context.Context, archivePath, databases string) error {
if restoreOutputDir == "" {
return fmt.Errorf("--output-dir required when using --databases")
}
// Parse database list
dbNames := strings.Split(databases, ",")
for i := range dbNames {
dbNames[i] = strings.TrimSpace(dbNames[i])
}
log.Info("Extracting multiple databases", "count", len(dbNames), "output", restoreOutputDir)
// Create progress indicator
prog := progress.NewIndicator(!restoreNoProgress, "dots")
extractedPaths, err := restore.ExtractMultipleDatabasesFromCluster(ctx, archivePath, dbNames, restoreOutputDir, log, prog)
if err != nil {
return fmt.Errorf("extraction failed: %w", err)
}
fmt.Printf("\n✅ Extracted %d database(s):\n", len(extractedPaths))
for dbName, path := range extractedPaths {
fmt.Printf(" - %s → %s\n", dbName, filepath.Base(path))
}
fmt.Printf(" Location: %s\n", restoreOutputDir)
return nil
}
// runFullClusterRestore performs a full cluster restore
func runFullClusterRestore(archivePath string) error {
// Apply resource profile
if err := config.ApplyProfile(cfg, restoreProfile, restoreJobs, restoreParallelDBs); err != nil {
log.Warn("Invalid profile, using balanced", "error", err)
restoreProfile = "balanced"
_ = config.ApplyProfile(cfg, restoreProfile, restoreJobs, restoreParallelDBs)
}
if cfg.Debug || restoreProfile != "balanced" {
log.Info("Using restore profile", "profile", restoreProfile, "parallel_dbs", cfg.ClusterParallelism, "jobs", cfg.Jobs)
}
// Convert to absolute path
if !filepath.IsAbs(archivePath) {
absPath, err := filepath.Abs(archivePath)
if err != nil {
return fmt.Errorf("invalid archive path: %w", err)
}
archivePath = absPath
}
// Check if file exists
if _, err := os.Stat(archivePath); err != nil {
return fmt.Errorf("archive not found: %s", archivePath)
}
// Check if backup is encrypted and decrypt if necessary
if backup.IsBackupEncrypted(archivePath) {
log.Info("Encrypted cluster backup detected, decrypting...")
@@ -1094,12 +805,6 @@ func runFullClusterRestore(archivePath string) error {
log.Info("Debug logging enabled", "output", restoreSaveDebugLog)
}
// Enable lock debugging if requested (cluster restore)
if restoreDebugLocks {
cfg.DebugLocks = true
log.Info("🔍 Lock debugging enabled - will capture PostgreSQL lock config, Guard decisions, boost attempts")
}
// Setup signal handling
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
@@ -1135,50 +840,22 @@ func runFullClusterRestore(archivePath string) error {
log.Info("Database cleanup completed")
}
// OPTIMIZATION: Pre-extract archive once for both diagnosis and restore
// This avoids extracting the same tar.gz twice (saves 5-10 min on large clusters)
var extractedDir string
var extractErr error
if restoreDiagnose || restoreConfirm {
log.Info("Pre-extracting cluster archive (shared for validation and restore)...")
extractedDir, extractErr = safety.ValidateAndExtractCluster(ctx, archivePath)
if extractErr != nil {
return fmt.Errorf("failed to extract cluster archive: %w", extractErr)
}
defer os.RemoveAll(extractedDir) // Cleanup at end
log.Info("Archive extracted successfully", "location", extractedDir)
}
// Run pre-restore diagnosis if requested (using already-extracted directory)
// Run pre-restore diagnosis if requested
if restoreDiagnose {
log.Info("[DIAG] Running pre-restore diagnosis on extracted dumps...")
log.Info("[DIAG] Running pre-restore diagnosis...")
// Create temp directory for extraction in configured WorkDir
workDir := cfg.GetEffectiveWorkDir()
diagTempDir, err := os.MkdirTemp(workDir, "dbbackup-diagnose-*")
if err != nil {
return fmt.Errorf("failed to create temp directory for diagnosis in %s: %w", workDir, err)
}
defer os.RemoveAll(diagTempDir)
diagnoser := restore.NewDiagnoser(log, restoreVerbose)
// Diagnose dumps directly from extracted directory
dumpsDir := filepath.Join(extractedDir, "dumps")
if _, err := os.Stat(dumpsDir); err != nil {
return fmt.Errorf("no dumps directory found in extracted archive: %w", err)
}
entries, err := os.ReadDir(dumpsDir)
results, err := diagnoser.DiagnoseClusterDumps(archivePath, diagTempDir)
if err != nil {
return fmt.Errorf("failed to read dumps directory: %w", err)
}
// Diagnose each dump file
var results []*restore.DiagnoseResult
for _, entry := range entries {
if entry.IsDir() {
continue
}
dumpPath := filepath.Join(dumpsDir, entry.Name())
result, err := diagnoser.DiagnoseFile(dumpPath)
if err != nil {
log.Warn("Could not diagnose dump", "file", entry.Name(), "error", err)
continue
}
results = append(results, result)
return fmt.Errorf("diagnosis failed: %w", err)
}
// Check for any invalid dumps
@@ -1218,36 +895,14 @@ func runFullClusterRestore(archivePath string) error {
startTime := time.Now()
auditLogger.LogRestoreStart(user, "all_databases", archivePath)
// Notify: restore started
if notifyManager != nil {
notifyManager.Notify(notify.NewEvent(notify.EventRestoreStarted, notify.SeverityInfo, "Cluster restore started").
WithDatabase("all_databases").
WithDetail("archive", filepath.Base(archivePath)))
}
// Pass pre-extracted directory to avoid double extraction
if err := engine.RestoreCluster(ctx, archivePath, extractedDir); err != nil {
if err := engine.RestoreCluster(ctx, archivePath); err != nil {
auditLogger.LogRestoreFailed(user, "all_databases", err)
// Notify: restore failed
if notifyManager != nil {
notifyManager.Notify(notify.NewEvent(notify.EventRestoreFailed, notify.SeverityError, "Cluster restore failed").
WithDatabase("all_databases").
WithError(err).
WithDuration(time.Since(startTime)))
}
return fmt.Errorf("cluster restore failed: %w", err)
}
// Audit log: restore success
auditLogger.LogRestoreComplete(user, "all_databases", time.Since(startTime))
// Notify: restore completed
if notifyManager != nil {
notifyManager.Notify(notify.NewEvent(notify.EventRestoreCompleted, notify.SeveritySuccess, "Cluster restore completed successfully").
WithDatabase("all_databases").
WithDuration(time.Since(startTime)))
}
log.Info("[OK] Cluster restore completed successfully")
return nil
}


@@ -3,11 +3,9 @@ package cmd
import (
"context"
"fmt"
"strings"
"dbbackup/internal/config"
"dbbackup/internal/logger"
"dbbackup/internal/notify"
"dbbackup/internal/security"
"github.com/spf13/cobra"
@@ -15,11 +13,10 @@ import (
)
var (
cfg *config.Config
log logger.Logger
auditLogger *security.AuditLogger
rateLimiter *security.RateLimiter
notifyManager *notify.Manager
cfg *config.Config
log logger.Logger
auditLogger *security.AuditLogger
rateLimiter *security.RateLimiter
)
// rootCmd represents the base command when called without any subcommands
@@ -108,12 +105,6 @@ For help with specific commands, use: dbbackup [command] --help`,
}
}
// Auto-detect socket from --host path (if host starts with /)
if strings.HasPrefix(cfg.Host, "/") && cfg.Socket == "" {
cfg.Socket = cfg.Host
cfg.Host = "localhost" // Reset host for socket connections
}
return cfg.SetDatabaseType(cfg.DatabaseType)
},
}
@@ -129,13 +120,6 @@ func Execute(ctx context.Context, config *config.Config, logger logger.Logger) e
// Initialize rate limiter
rateLimiter = security.NewRateLimiter(config.MaxRetries, logger)
// Initialize notification manager from environment variables
notifyCfg := notify.ConfigFromEnv()
notifyManager = notify.NewManager(notifyCfg)
if notifyManager.HasEnabledNotifiers() {
logger.Info("Notifications enabled", "smtp", notifyCfg.SMTPEnabled, "webhook", notifyCfg.WebhookEnabled)
}
// Set version info
rootCmd.Version = fmt.Sprintf("%s (built: %s, commit: %s)",
cfg.Version, cfg.BuildTime, cfg.GitCommit)
@@ -143,7 +127,6 @@ func Execute(ctx context.Context, config *config.Config, logger logger.Logger) e
// Add persistent flags
rootCmd.PersistentFlags().StringVar(&cfg.Host, "host", cfg.Host, "Database host")
rootCmd.PersistentFlags().IntVar(&cfg.Port, "port", cfg.Port, "Database port")
rootCmd.PersistentFlags().StringVar(&cfg.Socket, "socket", cfg.Socket, "Unix socket path for MySQL/MariaDB (e.g., /var/run/mysqld/mysqld.sock)")
rootCmd.PersistentFlags().StringVar(&cfg.User, "user", cfg.User, "Database user")
rootCmd.PersistentFlags().StringVar(&cfg.Database, "database", cfg.Database, "Database name")
rootCmd.PersistentFlags().StringVar(&cfg.Password, "password", cfg.Password, "Database password")
@@ -151,7 +134,6 @@ func Execute(ctx context.Context, config *config.Config, logger logger.Logger) e
rootCmd.PersistentFlags().StringVar(&cfg.BackupDir, "backup-dir", cfg.BackupDir, "Backup directory")
rootCmd.PersistentFlags().BoolVar(&cfg.NoColor, "no-color", cfg.NoColor, "Disable colored output")
rootCmd.PersistentFlags().BoolVar(&cfg.Debug, "debug", cfg.Debug, "Enable debug logging")
rootCmd.PersistentFlags().BoolVar(&cfg.DebugLocks, "debug-locks", cfg.DebugLocks, "Enable detailed lock debugging (captures PostgreSQL lock configuration, Large DB Guard decisions, boost attempts)")
rootCmd.PersistentFlags().IntVar(&cfg.Jobs, "jobs", cfg.Jobs, "Number of parallel jobs")
rootCmd.PersistentFlags().IntVar(&cfg.DumpJobs, "dump-jobs", cfg.DumpJobs, "Number of parallel dump jobs")
rootCmd.PersistentFlags().IntVar(&cfg.MaxCores, "max-cores", cfg.MaxCores, "Maximum CPU cores to use")


@@ -1,64 +0,0 @@
package cmd
import (
"context"
"fmt"
"os"
"dbbackup/internal/checks"
"github.com/spf13/cobra"
)
var verifyLocksCmd = &cobra.Command{
Use: "verify-locks",
Short: "Check PostgreSQL lock settings and print restore guidance",
Long: `Probe PostgreSQL for lock-related GUCs (max_locks_per_transaction, max_connections, max_prepared_transactions) and print capacity + recommended restore options.`,
RunE: func(cmd *cobra.Command, args []string) error {
return runVerifyLocks(cmd.Context())
},
}
func runVerifyLocks(ctx context.Context) error {
p := checks.NewPreflightChecker(cfg, log)
res, err := p.RunAllChecks(ctx, cfg.Database)
if err != nil {
return err
}
// Find the Postgres lock check in the preflight results
var chk checks.PreflightCheck
found := false
for _, c := range res.Checks {
if c.Name == "PostgreSQL lock configuration" {
chk = c
found = true
break
}
}
if !found {
fmt.Println("No PostgreSQL lock check available (skipped)")
return nil
}
fmt.Printf("%s\n", chk.Name)
fmt.Printf("Status: %s\n", chk.Status.String())
fmt.Printf("%s\n\n", chk.Message)
if chk.Details != "" {
fmt.Println(chk.Details)
}
// exit non-zero for failures so scripts can react
if chk.Status == checks.StatusFailed {
os.Exit(2)
}
if chk.Status == checks.StatusWarning {
os.Exit(0)
}
return nil
}
func init() {
rootCmd.AddCommand(verifyLocksCmd)
}


@@ -1,371 +0,0 @@
package cmd
import (
"context"
"fmt"
"os"
"strings"
"time"
"dbbackup/internal/logger"
"dbbackup/internal/verification"
"github.com/spf13/cobra"
)
var verifyRestoreCmd = &cobra.Command{
Use: "verify-restore",
Short: "Systematic verification for large database restores",
Long: `Comprehensive verification tool for large database restores with BLOB support.
This tool performs systematic checks to ensure 100% data integrity after restore:
- Table counts and row counts verification
- BLOB/Large Object integrity (PostgreSQL large objects, bytea columns)
- Table checksums (for non-BLOB tables)
- Database-specific integrity checks
- Orphaned object detection
- Index validity checks
Designed to handle very large databases and BLOB-heavy workloads reliably.
Examples:
# Verify a restored PostgreSQL database
dbbackup verify-restore --engine postgres --database mydb
# Verify with connection details
dbbackup verify-restore --engine postgres --host localhost --port 5432 \
--user postgres --password secret --database mydb
# Verify a MySQL database
dbbackup verify-restore --engine mysql --database mydb
# Verify and output JSON report
dbbackup verify-restore --engine postgres --database mydb --json
# Compare source and restored database
dbbackup verify-restore --engine postgres --database source_db --compare restored_db
# Verify a backup file before restore
dbbackup verify-restore --backup-file /backups/mydb.dump
# Verify multiple databases in parallel
dbbackup verify-restore --engine postgres --databases "db1,db2,db3" --parallel 4`,
RunE: runVerifyRestore,
}
var (
verifyEngine string
verifyHost string
verifyPort int
verifyUser string
verifyPassword string
verifyDatabase string
verifyDatabases string
verifyCompareDB string
verifyBackupFile string
verifyJSON bool
verifyParallel int
)
func init() {
rootCmd.AddCommand(verifyRestoreCmd)
verifyRestoreCmd.Flags().StringVar(&verifyEngine, "engine", "postgres", "Database engine (postgres, mysql)")
verifyRestoreCmd.Flags().StringVar(&verifyHost, "host", "localhost", "Database host")
verifyRestoreCmd.Flags().IntVar(&verifyPort, "port", 5432, "Database port")
verifyRestoreCmd.Flags().StringVar(&verifyUser, "user", "", "Database user")
verifyRestoreCmd.Flags().StringVar(&verifyPassword, "password", "", "Database password")
verifyRestoreCmd.Flags().StringVar(&verifyDatabase, "database", "", "Database to verify")
verifyRestoreCmd.Flags().StringVar(&verifyDatabases, "databases", "", "Comma-separated list of databases to verify")
verifyRestoreCmd.Flags().StringVar(&verifyCompareDB, "compare", "", "Compare with another database (source vs restored)")
verifyRestoreCmd.Flags().StringVar(&verifyBackupFile, "backup-file", "", "Verify backup file integrity before restore")
verifyRestoreCmd.Flags().BoolVar(&verifyJSON, "json", false, "Output results as JSON")
verifyRestoreCmd.Flags().IntVar(&verifyParallel, "parallel", 1, "Number of parallel verification workers")
}
func runVerifyRestore(cmd *cobra.Command, args []string) error {
ctx, cancel := context.WithTimeout(context.Background(), 24*time.Hour) // Long timeout for large DBs
defer cancel()
log := logger.New("INFO", "text")
// Get credentials from environment if not provided
if verifyUser == "" {
verifyUser = os.Getenv("PGUSER")
if verifyUser == "" {
verifyUser = os.Getenv("MYSQL_USER")
}
if verifyUser == "" {
verifyUser = "postgres"
}
}
if verifyPassword == "" {
verifyPassword = os.Getenv("PGPASSWORD")
if verifyPassword == "" {
verifyPassword = os.Getenv("MYSQL_PASSWORD")
}
}
// Set default port based on engine
if verifyPort == 5432 && (verifyEngine == "mysql" || verifyEngine == "mariadb") {
verifyPort = 3306
}
checker := verification.NewLargeRestoreChecker(log, verifyEngine, verifyHost, verifyPort, verifyUser, verifyPassword)
// Mode 1: Verify backup file
if verifyBackupFile != "" {
return verifyBackupFileMode(ctx, checker)
}
// Mode 2: Compare two databases
if verifyCompareDB != "" {
return verifyCompareMode(ctx, checker)
}
// Mode 3: Verify multiple databases in parallel
if verifyDatabases != "" {
return verifyMultipleDatabases(ctx, log)
}
// Mode 4: Verify single database
if verifyDatabase == "" {
return fmt.Errorf("--database is required")
}
return verifySingleDatabase(ctx, checker)
}
func verifyBackupFileMode(ctx context.Context, checker *verification.LargeRestoreChecker) error {
fmt.Println()
fmt.Println("╔══════════════════════════════════════════════════════════════╗")
fmt.Println("║ 🔍 BACKUP FILE VERIFICATION ║")
fmt.Println("╚══════════════════════════════════════════════════════════════╝")
fmt.Println()
result, err := checker.VerifyBackupFile(ctx, verifyBackupFile)
if err != nil {
return fmt.Errorf("verification failed: %w", err)
}
if verifyJSON {
return outputJSON(result, "")
}
fmt.Printf(" File: %s\n", result.Path)
fmt.Printf(" Size: %s\n", formatBytes(result.SizeBytes))
fmt.Printf(" Format: %s\n", result.Format)
fmt.Printf(" Checksum: %s\n", result.Checksum)
if result.TableCount > 0 {
fmt.Printf(" Tables: %d\n", result.TableCount)
}
if result.LargeObjectCount > 0 {
fmt.Printf(" Large Objects: %d\n", result.LargeObjectCount)
}
fmt.Println()
if result.Valid {
fmt.Println(" ✅ Backup file verification PASSED")
} else {
fmt.Printf(" ❌ Backup file verification FAILED: %s\n", result.Error)
return fmt.Errorf("verification failed")
}
if len(result.Warnings) > 0 {
fmt.Println()
fmt.Println(" Warnings:")
for _, w := range result.Warnings {
fmt.Printf(" ⚠️ %s\n", w)
}
}
fmt.Println()
return nil
}
func verifyCompareMode(ctx context.Context, checker *verification.LargeRestoreChecker) error {
if verifyDatabase == "" {
return fmt.Errorf("--database (source) is required for comparison")
}
fmt.Println()
fmt.Println("╔══════════════════════════════════════════════════════════════╗")
fmt.Println("║ 🔍 DATABASE COMPARISON ║")
fmt.Println("╚══════════════════════════════════════════════════════════════╝")
fmt.Println()
fmt.Printf(" Source: %s\n", verifyDatabase)
fmt.Printf(" Target: %s\n", verifyCompareDB)
fmt.Println()
result, err := checker.CompareSourceTarget(ctx, verifyDatabase, verifyCompareDB)
if err != nil {
return fmt.Errorf("comparison failed: %w", err)
}
if verifyJSON {
return outputJSON(result, "")
}
if result.Match {
fmt.Println(" ✅ Databases MATCH - restore verified successfully")
} else {
fmt.Println(" ❌ Databases DO NOT MATCH")
fmt.Println()
fmt.Println(" Differences:")
for _, d := range result.Differences {
fmt.Printf(" • %s\n", d)
}
}
fmt.Println()
return nil
}
func verifyMultipleDatabases(ctx context.Context, log logger.Logger) error {
databases := splitDatabases(verifyDatabases)
if len(databases) == 0 {
return fmt.Errorf("no databases specified")
}
fmt.Println()
fmt.Println("╔══════════════════════════════════════════════════════════════╗")
fmt.Println("║ 🔍 PARALLEL DATABASE VERIFICATION ║")
fmt.Println("╚══════════════════════════════════════════════════════════════╝")
fmt.Println()
fmt.Printf(" Databases: %d\n", len(databases))
fmt.Printf(" Workers: %d\n", verifyParallel)
fmt.Println()
results, err := verification.ParallelVerify(ctx, log, verifyEngine, verifyHost, verifyPort, verifyUser, verifyPassword, databases, verifyParallel)
if err != nil {
return fmt.Errorf("parallel verification failed: %w", err)
}
if verifyJSON {
return outputJSON(results, "")
}
allValid := true
for _, r := range results {
if r == nil {
continue
}
status := "✅"
if !r.Valid {
status = "❌"
allValid = false
}
fmt.Printf(" %s %s: %d tables, %d rows, %d BLOBs (%s)\n",
status, r.Database, r.TotalTables, r.TotalRows, r.TotalBlobCount, r.Duration.Round(time.Millisecond))
}
fmt.Println()
if allValid {
fmt.Println(" ✅ All databases verified successfully")
} else {
fmt.Println(" ❌ Some databases failed verification")
return fmt.Errorf("verification failed")
}
fmt.Println()
return nil
}
func verifySingleDatabase(ctx context.Context, checker *verification.LargeRestoreChecker) error {
fmt.Println()
fmt.Println("╔══════════════════════════════════════════════════════════════╗")
fmt.Println("║ 🔍 SYSTEMATIC RESTORE VERIFICATION ║")
fmt.Println("║ For Large Databases & BLOBs ║")
fmt.Println("╚══════════════════════════════════════════════════════════════╝")
fmt.Println()
fmt.Printf(" Database: %s\n", verifyDatabase)
fmt.Printf(" Engine: %s\n", verifyEngine)
fmt.Printf(" Host: %s:%d\n", verifyHost, verifyPort)
fmt.Println()
result, err := checker.CheckDatabase(ctx, verifyDatabase)
if err != nil {
return fmt.Errorf("verification failed: %w", err)
}
if verifyJSON {
return outputJSON(result, "")
}
// Summary
fmt.Println(" ═══════════════════════════════════════════════════════════")
fmt.Println(" VERIFICATION SUMMARY")
fmt.Println(" ═══════════════════════════════════════════════════════════")
fmt.Println()
fmt.Printf(" Tables: %d\n", result.TotalTables)
fmt.Printf(" Total Rows: %d\n", result.TotalRows)
fmt.Printf(" Large Objects: %d\n", result.TotalBlobCount)
fmt.Printf(" BLOB Size: %s\n", formatBytes(result.TotalBlobBytes))
fmt.Printf(" Duration: %s\n", result.Duration.Round(time.Millisecond))
fmt.Println()
// Table details
if len(result.TableChecks) > 0 && len(result.TableChecks) <= 50 {
fmt.Println(" Tables:")
for _, t := range result.TableChecks {
blobIndicator := ""
if t.HasBlobColumn {
blobIndicator = " [BLOB]"
}
status := "✓"
if !t.Valid {
status = "✗"
}
fmt.Printf(" %s %s.%s: %d rows%s\n", status, t.Schema, t.TableName, t.RowCount, blobIndicator)
}
fmt.Println()
}
// Integrity errors
if len(result.IntegrityErrors) > 0 {
fmt.Println(" ❌ INTEGRITY ERRORS:")
for _, e := range result.IntegrityErrors {
fmt.Printf(" • %s\n", e)
}
fmt.Println()
}
// Warnings
if len(result.Warnings) > 0 {
fmt.Println(" ⚠️ WARNINGS:")
for _, w := range result.Warnings {
fmt.Printf(" • %s\n", w)
}
fmt.Println()
}
// Final verdict
fmt.Println(" ═══════════════════════════════════════════════════════════")
if result.Valid {
fmt.Println(" ✅ RESTORE VERIFICATION PASSED - Data integrity confirmed")
} else {
fmt.Println(" ❌ RESTORE VERIFICATION FAILED - See errors above")
return fmt.Errorf("verification failed")
}
fmt.Println(" ═══════════════════════════════════════════════════════════")
fmt.Println()
return nil
}
func splitDatabases(s string) []string {
if s == "" {
return nil
}
var dbs []string
for _, db := range strings.Split(s, ",") {
db = strings.TrimSpace(db)
if db != "" {
dbs = append(dbs, db)
}
}
return dbs
}


@@ -1,62 +0,0 @@
# Deployment Examples for dbbackup
Enterprise deployment configurations for various platforms and orchestration tools.
## Directory Structure
```
deploy/
├── README.md
├── ansible/ # Ansible roles and playbooks
│ ├── basic.yml # Simple installation
│ ├── with-exporter.yml # With Prometheus metrics
│ ├── with-notifications.yml # With email/Slack alerts
│ └── enterprise.yml # Full enterprise setup
├── kubernetes/ # Kubernetes manifests
│ ├── cronjob.yaml # Scheduled backup CronJob
│ ├── configmap.yaml # Configuration
│ └── helm/ # Helm chart
├── terraform/ # Infrastructure as Code
│ ├── aws/ # AWS deployment
│ └── gcp/ # GCP deployment
└── scripts/ # Helper scripts
├── backup-rotation.sh
└── health-check.sh
```
## Quick Start by Platform
### Ansible
```bash
cd ansible
cp inventory.example inventory
ansible-playbook -i inventory enterprise.yml
```
### Kubernetes
```bash
kubectl apply -f kubernetes/
# or with Helm
helm install dbbackup kubernetes/helm/dbbackup
```
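The `cronjob.yaml` referenced in the directory structure could be sketched roughly as follows. The image tag, Secret name, and PVC name are illustrative assumptions, not taken from the repository:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: dbbackup
spec:
  schedule: "0 2 * * *"          # nightly at 02:00
  concurrencyPolicy: Forbid      # never overlap backup runs
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: dbbackup
              image: dbbackup:latest              # illustrative image tag
              args: ["backup", "cluster", "--confirm"]
              envFrom:
                - secretRef:
                    name: dbbackup-credentials    # assumed Secret holding DB credentials
              volumeMounts:
                - name: backups
                  mountPath: /var/backups/databases
          volumes:
            - name: backups
              persistentVolumeClaim:
                claimName: dbbackup-pvc           # assumed PVC for backup storage
```

`concurrencyPolicy: Forbid` is the important choice here: if a large backup overruns the schedule, the next run is skipped instead of competing for the same database and storage.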
### Terraform (AWS)
```bash
cd terraform/aws
terraform init
terraform apply
```
## Feature Matrix
| Feature | basic | with-exporter | with-notifications | enterprise |
|---------|:-----:|:-------------:|:------------------:|:----------:|
| Scheduled Backups | ✓ | ✓ | ✓ | ✓ |
| Retention Policy | ✓ | ✓ | ✓ | ✓ |
| GFS Rotation | | | | ✓ |
| Prometheus Metrics | | ✓ | | ✓ |
| Email Notifications | | | ✓ | ✓ |
| Slack/Webhook | | | ✓ | ✓ |
| Encryption | | | | ✓ |
| Cloud Upload | | | | ✓ |
| Catalog Sync | | | | ✓ |


@@ -1,75 +0,0 @@
# Ansible Deployment for dbbackup
Ansible roles and playbooks for deploying dbbackup in enterprise environments.
## Playbooks
| Playbook | Description |
|----------|-------------|
| `basic.yml` | Simple installation without monitoring |
| `with-exporter.yml` | Installation with Prometheus metrics exporter |
| `with-notifications.yml` | Installation with SMTP/webhook notifications |
| `enterprise.yml` | Full enterprise setup (exporter + notifications + GFS retention) |
## Quick Start
```bash
# Edit inventory
cp inventory.example inventory
vim inventory
# Edit variables
vim group_vars/all.yml
# Deploy basic setup
ansible-playbook -i inventory basic.yml
# Deploy enterprise setup
ansible-playbook -i inventory enterprise.yml
```
## Variables
See `group_vars/all.yml` for all configurable options.
### Required Variables
| Variable | Description | Example |
|----------|-------------|---------|
| `dbbackup_version` | Version to install | `3.42.74` |
| `dbbackup_db_type` | Database type | `postgres` or `mysql` |
| `dbbackup_backup_dir` | Backup storage path | `/var/backups/databases` |
### Optional Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `dbbackup_schedule` | Backup schedule | `daily` |
| `dbbackup_compression` | Compression level | `6` |
| `dbbackup_retention_days` | Retention period | `30` |
| `dbbackup_min_backups` | Minimum backups to keep | `5` |
| `dbbackup_exporter_port` | Prometheus exporter port | `9399` |
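Putting the required and optional variables together, a minimal `group_vars/all.yml` might look like this (values are illustrative, taken from the examples and defaults above):

```yaml
---
# Required
dbbackup_version: "3.42.74"
dbbackup_db_type: postgres            # or: mysql
dbbackup_backup_dir: /var/backups/databases

# Optional (defaults shown)
dbbackup_schedule: daily
dbbackup_compression: 6
dbbackup_retention_days: 30
dbbackup_min_backups: 5
dbbackup_exporter_port: 9399
```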
## Directory Structure
```
ansible/
├── README.md
├── inventory.example
├── group_vars/
│ └── all.yml
├── roles/
│ └── dbbackup/
│ ├── tasks/
│ │ └── main.yml
│ ├── templates/
│ │ ├── dbbackup.conf.j2
│ │ ├── env.j2
│ │ └── systemd-override.conf.j2
│ └── handlers/
│ └── main.yml
├── basic.yml
├── with-exporter.yml
├── with-notifications.yml
└── enterprise.yml
```


@@ -1,42 +0,0 @@
---
# dbbackup Basic Deployment
# Simple installation without monitoring or notifications
#
# Usage:
# ansible-playbook -i inventory basic.yml
#
# Features:
# ✓ Automated daily backups
# ✓ Retention policy (30 days default)
# ✗ No Prometheus exporter
# ✗ No notifications
- name: Deploy dbbackup (basic)
hosts: db_servers
become: yes
vars:
dbbackup_exporter_enabled: false
dbbackup_notify_enabled: false
roles:
- dbbackup
post_tasks:
- name: Verify installation
command: "{{ dbbackup_install_dir }}/dbbackup --version"
register: version_check
changed_when: false
- name: Display version
debug:
msg: "Installed: {{ version_check.stdout }}"
- name: Show timer status
command: systemctl status dbbackup-{{ dbbackup_backup_type }}.timer --no-pager
register: timer_status
changed_when: false
- name: Display next backup time
debug:
msg: "{{ timer_status.stdout_lines | select('search', 'Trigger') | list }}"


@@ -1,153 +0,0 @@
---
# dbbackup Enterprise Deployment
# Full-featured installation with all enterprise capabilities
#
# Usage:
# ansible-playbook -i inventory enterprise.yml
#
# Features:
# ✓ Automated scheduled backups
# ✓ GFS retention policy (Grandfather-Father-Son)
# ✓ Prometheus metrics exporter
# ✓ SMTP email notifications
# ✓ Webhook/Slack notifications
# ✓ Encrypted backups (optional)
# ✓ Cloud storage upload (optional)
# ✓ Catalog for backup tracking
#
# Required Vault Variables:
# dbbackup_db_password
# dbbackup_encryption_key (if encryption enabled)
# dbbackup_notify_smtp_password (if SMTP enabled)
# dbbackup_cloud_access_key (if cloud enabled)
# dbbackup_cloud_secret_key (if cloud enabled)
- name: Deploy dbbackup (Enterprise)
hosts: db_servers
become: yes
vars:
# Full feature set
dbbackup_exporter_enabled: true
dbbackup_exporter_port: 9399
dbbackup_notify_enabled: true
# GFS Retention
dbbackup_gfs_enabled: true
dbbackup_gfs_daily: 7
dbbackup_gfs_weekly: 4
dbbackup_gfs_monthly: 12
dbbackup_gfs_yearly: 3
pre_tasks:
- name: Check for required secrets
assert:
that:
- dbbackup_db_password is defined
fail_msg: "Required secrets not provided. Use ansible-vault for dbbackup_db_password"
- name: Validate encryption key if enabled
assert:
that:
- dbbackup_encryption_key is defined
- dbbackup_encryption_key | length >= 16
fail_msg: "Encryption enabled but key not provided or too short"
when: dbbackup_encryption_enabled | default(false)
roles:
- dbbackup
post_tasks:
# Verify exporter
- name: Wait for exporter to start
wait_for:
port: "{{ dbbackup_exporter_port }}"
timeout: 30
when: dbbackup_exporter_enabled
- name: Test metrics endpoint
uri:
url: "http://localhost:{{ dbbackup_exporter_port }}/metrics"
return_content: yes
register: metrics_response
when: dbbackup_exporter_enabled
# Initialize catalog
- name: Sync existing backups to catalog
command: "{{ dbbackup_install_dir }}/dbbackup catalog sync {{ dbbackup_backup_dir }}"
become_user: dbbackup
changed_when: false
# Run preflight check
- name: Run preflight checks
command: "{{ dbbackup_install_dir }}/dbbackup preflight"
become_user: dbbackup
register: preflight_result
changed_when: false
failed_when: preflight_result.rc > 1 # rc=1 is warnings, rc=2 is failure
- name: Display preflight result
debug:
msg: "{{ preflight_result.stdout_lines }}"
# Summary
- name: Display deployment summary
debug:
msg: |
╔══════════════════════════════════════════════════════════════╗
║ dbbackup Enterprise Deployment Complete ║
╚══════════════════════════════════════════════════════════════╝
Host: {{ inventory_hostname }}
Version: {{ dbbackup_version }}
┌─ Backup Configuration ─────────────────────────────────────────
│ Type: {{ dbbackup_backup_type }}
│ Schedule: {{ dbbackup_schedule }}
│ Directory: {{ dbbackup_backup_dir }}
│ Encryption: {{ 'Enabled' if dbbackup_encryption_enabled else 'Disabled' }}
└────────────────────────────────────────────────────────────────
┌─ Retention Policy (GFS) ───────────────────────────────────────
│ Daily: {{ dbbackup_gfs_daily }} backups
│ Weekly: {{ dbbackup_gfs_weekly }} backups
│ Monthly: {{ dbbackup_gfs_monthly }} backups
│ Yearly: {{ dbbackup_gfs_yearly }} backups
└────────────────────────────────────────────────────────────────
┌─ Monitoring ───────────────────────────────────────────────────
│ Prometheus: http://{{ inventory_hostname }}:{{ dbbackup_exporter_port }}/metrics
└────────────────────────────────────────────────────────────────
┌─ Notifications ────────────────────────────────────────────────
{% if dbbackup_notify_smtp_enabled | default(false) %}
│ SMTP: {{ dbbackup_notify_smtp_to | join(', ') }}
{% endif %}
{% if dbbackup_notify_slack_enabled | default(false) %}
│ Slack: Enabled
{% endif %}
└────────────────────────────────────────────────────────────────
- name: Configure Prometheus scrape targets
hosts: monitoring
become: yes
tasks:
- name: Add dbbackup targets to prometheus
blockinfile:
path: /etc/prometheus/targets/dbbackup.yml
create: yes
block: |
- targets:
{% for host in groups['db_servers'] %}
- {{ host }}:{{ hostvars[host]['dbbackup_exporter_port'] | default(9399) }}
{% endfor %}
labels:
job: dbbackup
notify: reload prometheus
when: "'monitoring' in group_names"
handlers:
- name: reload prometheus
systemd:
name: prometheus
state: reloaded
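The preflight task above lets rc=1 through via `failed_when: preflight_result.rc > 1`, encoding a three-level exit-code convention (0 = ok, 1 = warnings, 2 = failure). A minimal shell sketch of the same policy; the function below is a stand-in for branching on the real `dbbackup preflight` exit code:

```shell
#!/bin/sh
# Map a preflight exit code to an action, mirroring the Ansible
# failed_when: rc > 1 guard (0 = ok, 1 = warnings only, >=2 = hard failure).
handle_preflight_rc() {
  rc="$1"
  if [ "$rc" -eq 0 ]; then
    echo "ok"          # proceed silently
  elif [ "$rc" -eq 1 ]; then
    echo "warnings"    # proceed, but surface the output
  else
    echo "failed"      # abort the deployment
  fi
}

handle_preflight_rc 0   # prints "ok"
handle_preflight_rc 1   # prints "warnings"
handle_preflight_rc 2   # prints "failed"
```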

@@ -1,71 +0,0 @@
# dbbackup Ansible Variables
# =========================
# Version and Installation
dbbackup_version: "3.42.74"
dbbackup_download_url: "https://git.uuxo.net/UUXO/dbbackup/releases/download/v{{ dbbackup_version }}"
dbbackup_install_dir: "/usr/local/bin"
dbbackup_config_dir: "/etc/dbbackup"
dbbackup_data_dir: "/var/lib/dbbackup"
dbbackup_log_dir: "/var/log/dbbackup"
# Database Configuration
dbbackup_db_type: "postgres" # postgres, mysql, mariadb
dbbackup_db_host: "localhost"
dbbackup_db_port: 5432 # 5432 for postgres, 3306 for mysql
dbbackup_db_user: "postgres"
# dbbackup_db_password: "" # Use vault for passwords!
# Backup Configuration
dbbackup_backup_dir: "/var/backups/databases"
dbbackup_backup_type: "cluster" # cluster, single
dbbackup_compression: 6
dbbackup_encryption_enabled: false
# dbbackup_encryption_key: "" # Use vault!
# Schedule (systemd OnCalendar format)
dbbackup_schedule: "daily" # daily, weekly, *-*-* 02:00:00
# Retention Policy
dbbackup_retention_days: 30
dbbackup_min_backups: 5
# GFS Retention (enterprise.yml)
dbbackup_gfs_enabled: false
dbbackup_gfs_daily: 7
dbbackup_gfs_weekly: 4
dbbackup_gfs_monthly: 12
dbbackup_gfs_yearly: 3
# Prometheus Exporter (with-exporter.yml, enterprise.yml)
dbbackup_exporter_enabled: false
dbbackup_exporter_port: 9399
# Cloud Storage (optional)
dbbackup_cloud_enabled: false
dbbackup_cloud_provider: "s3" # s3, minio, b2, azure, gcs
dbbackup_cloud_bucket: ""
dbbackup_cloud_endpoint: "" # For MinIO/B2
# dbbackup_cloud_access_key: "" # Use vault!
# dbbackup_cloud_secret_key: "" # Use vault!
# Notifications (with-notifications.yml, enterprise.yml)
dbbackup_notify_enabled: false
# SMTP Notifications
dbbackup_notify_smtp_enabled: false
dbbackup_notify_smtp_host: ""
dbbackup_notify_smtp_port: 587
dbbackup_notify_smtp_user: ""
# dbbackup_notify_smtp_password: "" # Use vault!
dbbackup_notify_smtp_from: ""
dbbackup_notify_smtp_to: [] # List of recipients
# Webhook Notifications
dbbackup_notify_webhook_enabled: false
dbbackup_notify_webhook_url: ""
# dbbackup_notify_webhook_secret: "" # Use vault for HMAC secret!
# Slack Integration (uses webhook)
dbbackup_notify_slack_enabled: false
dbbackup_notify_slack_webhook: ""

@@ -1,25 +0,0 @@
# dbbackup Ansible Inventory Example
# Copy to 'inventory' and customize
[db_servers]
# PostgreSQL servers
pg-primary.example.com dbbackup_db_type=postgres
pg-replica.example.com dbbackup_db_type=postgres dbbackup_backup_from_replica=true
# MySQL servers
mysql-01.example.com dbbackup_db_type=mysql
[db_servers:vars]
ansible_user=deploy
ansible_become=yes
# Group-level defaults
dbbackup_backup_dir=/var/backups/databases
dbbackup_schedule=daily
[monitoring]
prometheus.example.com
[monitoring:vars]
# Servers where metrics are scraped
dbbackup_exporter_enabled=true

@@ -1,12 +0,0 @@
---
# dbbackup Ansible Role - Handlers
- name: reload systemd
systemd:
daemon_reload: yes
- name: restart dbbackup
systemd:
name: "dbbackup-{{ dbbackup_backup_type }}.service"
state: restarted
when: ansible_service_mgr == 'systemd'

@@ -1,116 +0,0 @@
---
# dbbackup Ansible Role - Main Tasks
- name: Create dbbackup group
group:
name: dbbackup
system: yes
- name: Create dbbackup user
user:
name: dbbackup
group: dbbackup
system: yes
home: "{{ dbbackup_data_dir }}"
shell: /usr/sbin/nologin
create_home: no
- name: Create directories
file:
path: "{{ item }}"
state: directory
owner: dbbackup
group: dbbackup
mode: "0755"
loop:
- "{{ dbbackup_config_dir }}"
- "{{ dbbackup_data_dir }}"
- "{{ dbbackup_data_dir }}/catalog"
- "{{ dbbackup_log_dir }}"
- "{{ dbbackup_backup_dir }}"
- name: Create env.d directory
file:
path: "{{ dbbackup_config_dir }}/env.d"
state: directory
owner: root
group: dbbackup
mode: "0750"
- name: Detect architecture
set_fact:
dbbackup_arch: "{{ 'arm64' if ansible_architecture == 'aarch64' else 'amd64' }}"
- name: Download dbbackup binary
get_url:
url: "{{ dbbackup_download_url }}/dbbackup-linux-{{ dbbackup_arch }}"
dest: "{{ dbbackup_install_dir }}/dbbackup"
mode: "0755"
owner: root
group: root
notify: restart dbbackup
- name: Deploy configuration file
template:
src: dbbackup.conf.j2
dest: "{{ dbbackup_config_dir }}/dbbackup.conf"
owner: root
group: dbbackup
mode: "0640"
notify: restart dbbackup
- name: Deploy environment file
template:
src: env.j2
dest: "{{ dbbackup_config_dir }}/env.d/{{ dbbackup_backup_type }}.conf"
owner: root
group: dbbackup
mode: "0600"
notify: restart dbbackup
- name: Install systemd service
command: >
{{ dbbackup_install_dir }}/dbbackup install
--backup-type {{ dbbackup_backup_type }}
--schedule "{{ dbbackup_schedule }}"
{% if dbbackup_exporter_enabled %}--with-metrics --metrics-port {{ dbbackup_exporter_port }}{% endif %}
args:
creates: "/etc/systemd/system/dbbackup-{{ dbbackup_backup_type }}.service"
notify:
- reload systemd
- restart dbbackup
- name: Create systemd override directory
file:
path: "/etc/systemd/system/dbbackup-{{ dbbackup_backup_type }}.service.d"
state: directory
owner: root
group: root
mode: "0755"
when: dbbackup_notify_enabled or dbbackup_cloud_enabled
- name: Deploy systemd override (if customizations needed)
template:
src: systemd-override.conf.j2
dest: "/etc/systemd/system/dbbackup-{{ dbbackup_backup_type }}.service.d/override.conf"
owner: root
group: root
mode: "0644"
when: dbbackup_notify_enabled or dbbackup_cloud_enabled
notify:
- reload systemd
- restart dbbackup
- name: Enable and start dbbackup timer
systemd:
name: "dbbackup-{{ dbbackup_backup_type }}.timer"
enabled: yes
state: started
daemon_reload: yes
- name: Enable dbbackup exporter service
systemd:
name: dbbackup-exporter
enabled: yes
state: started
when: dbbackup_exporter_enabled

@@ -1,39 +0,0 @@
# dbbackup Configuration
# Managed by Ansible - do not edit manually
# Database
db-type = {{ dbbackup_db_type }}
host = {{ dbbackup_db_host }}
port = {{ dbbackup_db_port }}
user = {{ dbbackup_db_user }}
# Backup
backup-dir = {{ dbbackup_backup_dir }}
compression = {{ dbbackup_compression }}
# Retention
retention-days = {{ dbbackup_retention_days }}
min-backups = {{ dbbackup_min_backups }}
{% if dbbackup_gfs_enabled %}
# GFS Retention Policy
gfs = true
gfs-daily = {{ dbbackup_gfs_daily }}
gfs-weekly = {{ dbbackup_gfs_weekly }}
gfs-monthly = {{ dbbackup_gfs_monthly }}
gfs-yearly = {{ dbbackup_gfs_yearly }}
{% endif %}
{% if dbbackup_encryption_enabled %}
# Encryption
encrypt = true
{% endif %}
{% if dbbackup_cloud_enabled %}
# Cloud Storage
cloud-provider = {{ dbbackup_cloud_provider }}
cloud-bucket = {{ dbbackup_cloud_bucket }}
{% if dbbackup_cloud_endpoint %}
cloud-endpoint = {{ dbbackup_cloud_endpoint }}
{% endif %}
{% endif %}

@@ -1,57 +0,0 @@
# dbbackup Environment Variables
# Managed by Ansible - do not edit manually
# Permissions: 0600 (secrets inside)
{% if dbbackup_db_password is defined %}
# Database Password
{% if dbbackup_db_type == 'postgres' %}
PGPASSWORD={{ dbbackup_db_password }}
{% else %}
MYSQL_PWD={{ dbbackup_db_password }}
{% endif %}
{% endif %}
{% if dbbackup_encryption_enabled and dbbackup_encryption_key is defined %}
# Encryption Key
DBBACKUP_ENCRYPTION_KEY={{ dbbackup_encryption_key }}
{% endif %}
{% if dbbackup_cloud_enabled %}
# Cloud Storage Credentials
{% if dbbackup_cloud_provider in ['s3', 'minio', 'b2'] %}
AWS_ACCESS_KEY_ID={{ dbbackup_cloud_access_key | default('') }}
AWS_SECRET_ACCESS_KEY={{ dbbackup_cloud_secret_key | default('') }}
{% endif %}
{% if dbbackup_cloud_provider == 'azure' %}
AZURE_STORAGE_ACCOUNT={{ dbbackup_cloud_access_key | default('') }}
AZURE_STORAGE_KEY={{ dbbackup_cloud_secret_key | default('') }}
{% endif %}
{% if dbbackup_cloud_provider == 'gcs' %}
GOOGLE_APPLICATION_CREDENTIALS={{ dbbackup_cloud_credentials_file | default('/etc/dbbackup/gcs-credentials.json') }}
{% endif %}
{% endif %}
{% if dbbackup_notify_smtp_enabled %}
# SMTP Notifications
NOTIFY_SMTP_HOST={{ dbbackup_notify_smtp_host }}
NOTIFY_SMTP_PORT={{ dbbackup_notify_smtp_port }}
NOTIFY_SMTP_USER={{ dbbackup_notify_smtp_user }}
{% if dbbackup_notify_smtp_password is defined %}
NOTIFY_SMTP_PASSWORD={{ dbbackup_notify_smtp_password }}
{% endif %}
NOTIFY_SMTP_FROM={{ dbbackup_notify_smtp_from }}
NOTIFY_SMTP_TO={{ dbbackup_notify_smtp_to | join(',') }}
{% endif %}
{% if dbbackup_notify_webhook_enabled %}
# Webhook Notifications
NOTIFY_WEBHOOK_URL={{ dbbackup_notify_webhook_url }}
{% if dbbackup_notify_webhook_secret is defined %}
NOTIFY_WEBHOOK_SECRET={{ dbbackup_notify_webhook_secret }}
{% endif %}
{% endif %}
{% if dbbackup_notify_slack_enabled %}
# Slack Notifications (reuses the webhook channel; if webhook notifications are also enabled above, this overrides NOTIFY_WEBHOOK_URL)
NOTIFY_WEBHOOK_URL={{ dbbackup_notify_slack_webhook }}
{% endif %}
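The role deploys this file with mode 0600 because it holds secrets. A small audit sketch that flags any env file readable by group or other; it is demonstrated against a throwaway directory (point `ENVD` at `/etc/dbbackup/env.d` on a real host; `stat -c` assumes GNU coreutils):

```shell
#!/bin/sh
# Flag secret env files whose permissions are looser than 0600.
ENVD="$(mktemp -d)"
touch "$ENVD/cluster.conf" && chmod 600 "$ENVD/cluster.conf"   # correct
touch "$ENVD/single.conf"  && chmod 644 "$ENVD/single.conf"    # too open

# Any file with a group- or other-read bit set is insecure here.
loose=$(find "$ENVD" -type f \( -perm -g+r -o -perm -o+r \))
for f in $loose; do
  echo "INSECURE: $f ($(stat -c %a "$f"))"
done
```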

@@ -1,6 +0,0 @@
# dbbackup Systemd Override
# Managed by Ansible
[Service]
# Load environment from secure file
EnvironmentFile=-{{ dbbackup_config_dir }}/env.d/{{ dbbackup_backup_type }}.conf

@@ -1,52 +0,0 @@
---
# dbbackup with Prometheus Exporter
# Installation with metrics endpoint for monitoring
#
# Usage:
# ansible-playbook -i inventory with-exporter.yml
#
# Features:
# ✓ Automated daily backups
# ✓ Retention policy
# ✓ Prometheus exporter on port 9399
# ✗ No notifications
- name: Deploy dbbackup with Prometheus exporter
hosts: db_servers
become: yes
vars:
dbbackup_exporter_enabled: true
dbbackup_exporter_port: 9399
dbbackup_notify_enabled: false
roles:
- dbbackup
post_tasks:
- name: Wait for exporter to start
wait_for:
port: "{{ dbbackup_exporter_port }}"
timeout: 30
- name: Test metrics endpoint
uri:
url: "http://localhost:{{ dbbackup_exporter_port }}/metrics"
return_content: yes
register: metrics_response
- name: Verify metrics available
assert:
that:
- "'dbbackup_' in metrics_response.content"
fail_msg: "Metrics endpoint not returning dbbackup metrics"
success_msg: "Prometheus exporter running on port {{ dbbackup_exporter_port }}"
- name: Display Prometheus scrape config
debug:
msg: |
Add to prometheus.yml:
- job_name: 'dbbackup'
static_configs:
- targets: ['{{ inventory_hostname }}:{{ dbbackup_exporter_port }}']

@@ -1,84 +0,0 @@
---
# dbbackup with Notifications
# Installation with SMTP email and/or webhook notifications
#
# Usage:
# # With SMTP notifications
# ansible-playbook -i inventory with-notifications.yml \
# -e dbbackup_notify_smtp_enabled=true \
# -e dbbackup_notify_smtp_host=smtp.example.com \
# -e dbbackup_notify_smtp_from=backups@example.com \
# -e '{"dbbackup_notify_smtp_to": ["admin@example.com", "dba@example.com"]}'
#
# # With Slack notifications
# ansible-playbook -i inventory with-notifications.yml \
# -e dbbackup_notify_slack_enabled=true \
# -e dbbackup_notify_slack_webhook=https://hooks.slack.com/services/XXX
#
# Features:
# ✓ Automated daily backups
# ✓ Retention policy
# ✗ No Prometheus exporter
# ✓ Email notifications (optional)
# ✓ Webhook/Slack notifications (optional)
- name: Deploy dbbackup with notifications
hosts: db_servers
become: yes
vars:
dbbackup_exporter_enabled: false
dbbackup_notify_enabled: true
# Enable one or more notification methods:
# dbbackup_notify_smtp_enabled: true
# dbbackup_notify_webhook_enabled: true
# dbbackup_notify_slack_enabled: true
pre_tasks:
- name: Validate notification configuration
assert:
that:
- dbbackup_notify_smtp_enabled | default(false) or dbbackup_notify_webhook_enabled | default(false) or dbbackup_notify_slack_enabled | default(false)
fail_msg: "At least one notification method must be enabled"
success_msg: "Notification configuration valid"
- name: Validate SMTP configuration
assert:
that:
- dbbackup_notify_smtp_host != ''
- dbbackup_notify_smtp_from != ''
- dbbackup_notify_smtp_to | length > 0
fail_msg: "SMTP configuration incomplete"
when: dbbackup_notify_smtp_enabled | default(false)
- name: Validate webhook configuration
assert:
that:
- dbbackup_notify_webhook_url != ''
fail_msg: "Webhook URL required"
when: dbbackup_notify_webhook_enabled | default(false)
- name: Validate Slack configuration
assert:
that:
- dbbackup_notify_slack_webhook != ''
fail_msg: "Slack webhook URL required"
when: dbbackup_notify_slack_enabled | default(false)
roles:
- dbbackup
post_tasks:
- name: Display notification configuration
debug:
msg: |
Notifications configured:
{% if dbbackup_notify_smtp_enabled | default(false) %}
- SMTP: {{ dbbackup_notify_smtp_to | join(', ') }}
{% endif %}
{% if dbbackup_notify_webhook_enabled | default(false) %}
- Webhook: {{ dbbackup_notify_webhook_url }}
{% endif %}
{% if dbbackup_notify_slack_enabled | default(false) %}
- Slack: Enabled
{% endif %}

@@ -1,48 +0,0 @@
# dbbackup Kubernetes Deployment
Kubernetes manifests for running dbbackup as scheduled CronJobs.
## Quick Start
```bash
# Create namespace
kubectl create namespace dbbackup
# Create secrets
kubectl create secret generic dbbackup-db-credentials \
--namespace dbbackup \
--from-literal=password=your-db-password
# Apply manifests
kubectl apply -f . --namespace dbbackup
# Check CronJob
kubectl get cronjobs -n dbbackup
```
## Components
- `configmap.yaml` - Configuration settings
- `secret.yaml` - Credentials template (use kubectl create secret instead)
- `cronjob.yaml` - Scheduled backup job
- `pvc.yaml` - Persistent volume for backup storage
- `servicemonitor.yaml` - Prometheus ServiceMonitor (optional)
## Customization
Edit `configmap.yaml` to configure:
- Database connection
- Backup schedule
- Retention policy
- Cloud storage
## Helm Chart
For more complex deployments, use the Helm chart:
```bash
helm install dbbackup ./helm/dbbackup \
--set database.host=postgres.default.svc \
--set database.password=secret \
--set schedule="0 2 * * *"
```

@@ -1,27 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: dbbackup-config
labels:
app: dbbackup
data:
# Database Configuration
DB_TYPE: "postgres"
DB_HOST: "postgres.default.svc.cluster.local"
DB_PORT: "5432"
DB_USER: "postgres"
# Backup Configuration
BACKUP_DIR: "/backups"
COMPRESSION: "6"
# Retention
RETENTION_DAYS: "30"
MIN_BACKUPS: "5"
# GFS Retention (enterprise)
GFS_ENABLED: "false"
GFS_DAILY: "7"
GFS_WEEKLY: "4"
GFS_MONTHLY: "12"
GFS_YEARLY: "3"

@@ -1,140 +0,0 @@
apiVersion: batch/v1
kind: CronJob
metadata:
name: dbbackup-cluster
labels:
app: dbbackup
component: backup
spec:
# Daily at 2:00 AM UTC
schedule: "0 2 * * *"
# Keep last 3 successful and 1 failed job
successfulJobsHistoryLimit: 3
failedJobsHistoryLimit: 1
# Don't run if previous job is still running
concurrencyPolicy: Forbid
# Start job within 5 minutes of scheduled time or skip
startingDeadlineSeconds: 300
jobTemplate:
spec:
# Retry up to 2 times on failure
backoffLimit: 2
template:
metadata:
labels:
app: dbbackup
component: backup
spec:
restartPolicy: OnFailure
# Security context
securityContext:
runAsNonRoot: true
runAsUser: 1000
fsGroup: 1000
containers:
- name: dbbackup
image: git.uuxo.net/uuxo/dbbackup:latest
imagePullPolicy: IfNotPresent
args:
- backup
- cluster
- --compression
- "$(COMPRESSION)"
envFrom:
- configMapRef:
name: dbbackup-config
- secretRef:
name: dbbackup-secrets
env:
- name: BACKUP_DIR
value: /backups
volumeMounts:
- name: backup-storage
mountPath: /backups
resources:
requests:
memory: "256Mi"
cpu: "100m"
limits:
memory: "2Gi"
cpu: "2000m"
volumes:
- name: backup-storage
persistentVolumeClaim:
claimName: dbbackup-storage
---
# Cleanup CronJob - runs weekly
apiVersion: batch/v1
kind: CronJob
metadata:
name: dbbackup-cleanup
labels:
app: dbbackup
component: cleanup
spec:
# Weekly on Sunday at 3:00 AM UTC
schedule: "0 3 * * 0"
successfulJobsHistoryLimit: 1
failedJobsHistoryLimit: 1
concurrencyPolicy: Forbid
jobTemplate:
spec:
template:
metadata:
labels:
app: dbbackup
component: cleanup
spec:
restartPolicy: OnFailure
securityContext:
runAsNonRoot: true
runAsUser: 1000
fsGroup: 1000
containers:
- name: dbbackup
image: git.uuxo.net/uuxo/dbbackup:latest
args:
- cleanup
- /backups
- --retention-days
- "$(RETENTION_DAYS)"
- --min-backups
- "$(MIN_BACKUPS)"
envFrom:
- configMapRef:
name: dbbackup-config
volumeMounts:
- name: backup-storage
mountPath: /backups
resources:
requests:
memory: "128Mi"
cpu: "50m"
limits:
memory: "512Mi"
cpu: "500m"
volumes:
- name: backup-storage
persistentVolumeClaim:
claimName: dbbackup-storage

@@ -1,13 +0,0 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: dbbackup-storage
labels:
app: dbbackup
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Gi # Adjust based on database size
# storageClassName: fast-ssd # Uncomment for specific storage class

@@ -1,27 +0,0 @@
# dbbackup Secrets Template
# DO NOT commit this file with real credentials!
# Use: kubectl create secret generic dbbackup-secrets --from-literal=...
apiVersion: v1
kind: Secret
metadata:
name: dbbackup-secrets
labels:
app: dbbackup
type: Opaque
stringData:
# Database Password (required)
PGPASSWORD: "CHANGE_ME"
# Encryption Key (optional - 32+ characters recommended)
# DBBACKUP_ENCRYPTION_KEY: "your-encryption-key-here"
# Cloud Storage Credentials (optional)
# AWS_ACCESS_KEY_ID: "AKIAXXXXXXXX"
# AWS_SECRET_ACCESS_KEY: "your-secret-key"
# SMTP Notifications (optional)
# NOTIFY_SMTP_PASSWORD: "smtp-password"
# Webhook Secret (optional)
# NOTIFY_WEBHOOK_SECRET: "hmac-signing-secret"
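The template above recommends a 32+ character encryption key. One way to generate a 32-byte hex key without committing it; the `kubectl` invocation is illustrative only and left commented out:

```shell
#!/bin/sh
# Generate a 64-hex-char (32-byte) key for DBBACKUP_ENCRYPTION_KEY.
KEY=$(head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "key length: ${#KEY}"   # 64 hex characters

# Load it into the cluster instead of editing secret.yaml:
# kubectl create secret generic dbbackup-secrets \
#   --namespace dbbackup \
#   --from-literal=PGPASSWORD="$DB_PASSWORD" \
#   --from-literal=DBBACKUP_ENCRYPTION_KEY="$KEY"
```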

@@ -1,114 +0,0 @@
# Prometheus ServiceMonitor for dbbackup
# Requires prometheus-operator
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: dbbackup
labels:
app: dbbackup
release: prometheus # Match your Prometheus operator release
spec:
selector:
matchLabels:
app: dbbackup
component: exporter
endpoints:
- port: metrics
interval: 60s
path: /metrics
namespaceSelector:
matchNames:
- dbbackup
---
# Metrics exporter deployment (optional - for continuous metrics)
apiVersion: apps/v1
kind: Deployment
metadata:
name: dbbackup-exporter
labels:
app: dbbackup
component: exporter
spec:
replicas: 1
selector:
matchLabels:
app: dbbackup
component: exporter
template:
metadata:
labels:
app: dbbackup
component: exporter
spec:
securityContext:
runAsNonRoot: true
runAsUser: 1000
containers:
- name: exporter
image: git.uuxo.net/uuxo/dbbackup:latest
args:
- metrics
- serve
- --port
- "9399"
ports:
- name: metrics
containerPort: 9399
protocol: TCP
envFrom:
- configMapRef:
name: dbbackup-config
volumeMounts:
- name: backup-storage
mountPath: /backups
readOnly: true
livenessProbe:
httpGet:
path: /health
port: metrics
initialDelaySeconds: 10
periodSeconds: 30
readinessProbe:
httpGet:
path: /health
port: metrics
initialDelaySeconds: 5
periodSeconds: 10
resources:
requests:
memory: "64Mi"
cpu: "10m"
limits:
memory: "128Mi"
cpu: "100m"
volumes:
- name: backup-storage
persistentVolumeClaim:
claimName: dbbackup-storage
---
apiVersion: v1
kind: Service
metadata:
name: dbbackup-exporter
labels:
app: dbbackup
component: exporter
spec:
ports:
- name: metrics
port: 9399
targetPort: metrics
selector:
app: dbbackup
component: exporter

@@ -1,168 +0,0 @@
# Prometheus Alerting Rules for dbbackup
# Import into your Prometheus/Alertmanager configuration
groups:
- name: dbbackup
rules:
# RPO Alerts - Recovery Point Objective violations
- alert: DBBackupRPOWarning
expr: dbbackup_rpo_seconds > 43200 # 12 hours
for: 5m
labels:
severity: warning
annotations:
summary: "Database backup RPO warning on {{ $labels.server }}"
description: "No successful backup for {{ $labels.database }} in {{ $value | humanizeDuration }}. RPO threshold: 12 hours."
- alert: DBBackupRPOCritical
expr: dbbackup_rpo_seconds > 86400 # 24 hours
for: 5m
labels:
severity: critical
annotations:
summary: "Database backup RPO critical on {{ $labels.server }}"
description: "No successful backup for {{ $labels.database }} in {{ $value | humanizeDuration }}. Immediate attention required!"
runbook_url: "https://wiki.example.com/runbooks/dbbackup-rpo-violation"
# Backup Failure Alerts
- alert: DBBackupFailed
expr: increase(dbbackup_backup_total{status="failure"}[1h]) > 0
for: 1m
labels:
severity: critical
annotations:
summary: "Database backup failed on {{ $labels.server }}"
description: "Backup for {{ $labels.database }} failed. Check logs for details."
- alert: DBBackupFailureRateHigh
expr: |
rate(dbbackup_backup_total{status="failure"}[24h])
/
rate(dbbackup_backup_total[24h]) > 0.1
for: 1h
labels:
severity: warning
annotations:
summary: "High backup failure rate on {{ $labels.server }}"
description: "More than 10% of backups are failing over the last 24 hours."
# Backup Size Anomalies
- alert: DBBackupSizeAnomaly
expr: |
abs(
dbbackup_last_backup_size_bytes
- avg_over_time(dbbackup_last_backup_size_bytes[7d])
)
/ avg_over_time(dbbackup_last_backup_size_bytes[7d]) > 0.5
for: 5m
labels:
severity: warning
annotations:
summary: "Backup size anomaly for {{ $labels.database }}"
description: "Backup size changed by more than 50% compared to 7-day average. Relative change: {{ $value | humanizePercentage }}"
- alert: DBBackupSizeZero
expr: dbbackup_last_backup_size_bytes == 0
for: 5m
labels:
severity: critical
annotations:
summary: "Zero-size backup detected for {{ $labels.database }}"
description: "Last backup file is empty. Backup likely failed silently."
# Duration Alerts
- alert: DBBackupDurationHigh
expr: dbbackup_last_backup_duration_seconds > 3600 # 1 hour
for: 5m
labels:
severity: warning
annotations:
summary: "Backup taking too long for {{ $labels.database }}"
description: "Last backup took {{ $value | humanizeDuration }}. Consider optimizing backup strategy."
# Verification Alerts
- alert: DBBackupNotVerified
expr: dbbackup_backup_verified == 0
for: 24h
labels:
severity: warning
annotations:
summary: "Backup not verified for {{ $labels.database }}"
description: "Last backup was not verified. Run dbbackup verify to check integrity."
# PITR Alerts
- alert: DBBackupPITRArchiveLag
expr: dbbackup_pitr_archive_lag_seconds > 600
for: 5m
labels:
severity: warning
annotations:
summary: "PITR archive lag on {{ $labels.server }}"
description: "WAL/binlog archiving for {{ $labels.database }} is {{ $value | humanizeDuration }} behind."
- alert: DBBackupPITRArchiveCritical
expr: dbbackup_pitr_archive_lag_seconds > 1800
for: 5m
labels:
severity: critical
annotations:
summary: "PITR archive critically behind on {{ $labels.server }}"
description: "WAL/binlog archiving for {{ $labels.database }} is {{ $value | humanizeDuration }} behind. PITR capability at risk!"
- alert: DBBackupPITRChainBroken
expr: dbbackup_pitr_chain_valid == 0
for: 1m
labels:
severity: critical
annotations:
summary: "PITR chain broken for {{ $labels.database }}"
description: "WAL/binlog chain has gaps. Point-in-time recovery NOT possible. New base backup required."
- alert: DBBackupPITRGaps
expr: dbbackup_pitr_gap_count > 0
for: 5m
labels:
severity: warning
annotations:
summary: "PITR chain gaps for {{ $labels.database }}"
description: "{{ $value }} gaps in WAL/binlog chain. Recovery to points within gaps will fail."
# Backup Type Alerts
- alert: DBBackupNoRecentFull
expr: time() - dbbackup_last_success_timestamp{backup_type="full"} > 604800
for: 1h
labels:
severity: warning
annotations:
summary: "No full backup in 7+ days for {{ $labels.database }}"
description: "Consider taking a full backup. Incremental chains depend on valid base."
# Exporter Health
- alert: DBBackupExporterDown
expr: up{job="dbbackup"} == 0
for: 5m
labels:
severity: critical
annotations:
summary: "dbbackup exporter is down on {{ $labels.instance }}"
description: "Cannot scrape metrics from dbbackup exporter. Monitoring is impaired."
# Deduplication Alerts
- alert: DBBackupDedupRatioLow
expr: dbbackup_dedup_ratio < 0.2
for: 24h
labels:
severity: info
annotations:
summary: "Low deduplication ratio on {{ $labels.server }}"
description: "Dedup ratio is {{ $value | humanizePercentage }}. Consider if dedup is beneficial."
# Storage Alerts
- alert: DBBackupStorageHigh
expr: dbbackup_dedup_disk_usage_bytes > 1099511627776 # 1 TiB
for: 1h
labels:
severity: warning
annotations:
summary: "High backup storage usage on {{ $labels.server }}"
description: "Backup storage using {{ $value | humanize1024 }}B. Review retention policy."

@@ -1,48 +0,0 @@
# Prometheus scrape configuration for dbbackup
# Add to your prometheus.yml
scrape_configs:
- job_name: 'dbbackup'
# Scrape interval - backup metrics don't change frequently
scrape_interval: 60s
scrape_timeout: 10s
# Static targets - list your database servers
static_configs:
- targets:
- 'db-server-01:9399'
- 'db-server-02:9399'
- 'db-server-03:9399'
labels:
environment: 'production'
- targets:
- 'db-staging:9399'
labels:
environment: 'staging'
# Relabeling (optional)
relabel_configs:
# Extract hostname from target
- source_labels: [__address__]
target_label: instance
regex: '([^:]+):\d+'
replacement: '$1'
# Alternative: File-based service discovery
# Useful when targets are managed by Ansible/Terraform
- job_name: 'dbbackup-sd'
scrape_interval: 60s
file_sd_configs:
- files:
- '/etc/prometheus/targets/dbbackup/*.yml'
refresh_interval: 5m
# Example target file (/etc/prometheus/targets/dbbackup/production.yml):
# - targets:
# - db-server-01:9399
# - db-server-02:9399
# labels:
# environment: production
# datacenter: us-east-1
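The relabel rule above rewrites `instance` from `host:port` to bare `host` via the regex `([^:]+):\d+`. The same split can be checked in plain shell with parameter expansion, no external tools:

```shell
#!/bin/sh
# Mirror the relabel_configs rule: instance = address minus the :port.
addr="db-server-01:9399"
instance="${addr%%:*}"   # drop the first ':' and everything after it
echo "$instance"         # prints "db-server-01"
```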

@@ -1,65 +0,0 @@
#!/bin/bash
# Backup Rotation Script for dbbackup
# Implements GFS (Grandfather-Father-Son) retention policy
#
# Usage: backup-rotation.sh /path/to/backups [--dry-run]
set -euo pipefail
BACKUP_DIR="${1:-/var/backups/databases}"
DRY_RUN="${2:-}"
# GFS Configuration
DAILY_KEEP=7
WEEKLY_KEEP=4
MONTHLY_KEEP=12
YEARLY_KEEP=3
# Minimum backups to always keep
MIN_BACKUPS=5
echo "═══════════════════════════════════════════════════════════════"
echo " dbbackup GFS Rotation"
echo "═══════════════════════════════════════════════════════════════"
echo ""
echo " Backup Directory: $BACKUP_DIR"
echo " Retention Policy:"
echo " Daily: $DAILY_KEEP backups"
echo " Weekly: $WEEKLY_KEEP backups"
echo " Monthly: $MONTHLY_KEEP backups"
echo " Yearly: $YEARLY_KEEP backups"
echo ""
if [[ "$DRY_RUN" == "--dry-run" ]]; then
echo " [DRY RUN MODE - No files will be deleted]"
echo ""
fi
# Check if dbbackup is available
if ! command -v dbbackup &> /dev/null; then
echo "ERROR: dbbackup command not found"
exit 1
fi
# Build cleanup command
CLEANUP_CMD="dbbackup cleanup $BACKUP_DIR \
--gfs \
--gfs-daily $DAILY_KEEP \
--gfs-weekly $WEEKLY_KEEP \
--gfs-monthly $MONTHLY_KEEP \
--gfs-yearly $YEARLY_KEEP \
--min-backups $MIN_BACKUPS"
if [[ "$DRY_RUN" == "--dry-run" ]]; then
CLEANUP_CMD="$CLEANUP_CMD --dry-run"
fi
echo "Running: $CLEANUP_CMD"
echo ""
$CLEANUP_CMD
echo ""
echo "═══════════════════════════════════════════════════════════════"
echo " Rotation complete"
echo "═══════════════════════════════════════════════════════════════"
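The script delegates tier selection to `dbbackup cleanup --gfs`. For intuition, here is a sketch of how a GFS scheme typically classifies a backup date into a tier; the specific rules (Jan 1 = yearly, 1st of month = monthly, Sunday = weekly) are an illustrative assumption, not dbbackup's actual algorithm, and `date -d` assumes GNU date:

```shell
#!/bin/sh
# Classify a YYYY-MM-DD backup date into a GFS tier (illustrative rules).
gfs_tier() {
  date_str="$1"
  month_day="${date_str#*-}"            # MM-DD
  day="${date_str##*-}"                 # DD
  dow=$(date -d "$date_str" +%w)        # 0 = Sunday (GNU date)
  if [ "$month_day" = "01-01" ]; then echo yearly
  elif [ "$day" = "01" ]; then echo monthly
  elif [ "$dow" -eq 0 ]; then echo weekly
  else echo daily
  fi
}

gfs_tier 2026-01-01   # yearly
gfs_tier 2026-02-01   # monthly
gfs_tier 2026-01-18   # weekly (a Sunday)
gfs_tier 2026-01-20   # daily
```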

@@ -1,92 +0,0 @@
#!/bin/bash
# Health Check Script for dbbackup
# Returns exit codes for monitoring systems:
# 0 = OK (backup within RPO)
# 1 = WARNING (backup older than warning threshold)
# 2 = CRITICAL (backup older than critical threshold or missing)
#
# Usage: health-check.sh [backup-dir] [warning-hours] [critical-hours]
set -euo pipefail
BACKUP_DIR="${1:-/var/backups/databases}"
WARNING_HOURS="${2:-24}"
CRITICAL_HOURS="${3:-48}"
# Convert to seconds
WARNING_SECONDS=$((WARNING_HOURS * 3600))
CRITICAL_SECONDS=$((CRITICAL_HOURS * 3600))
echo "dbbackup Health Check"
echo "====================="
echo "Backup directory: $BACKUP_DIR"
echo "Warning threshold: ${WARNING_HOURS}h"
echo "Critical threshold: ${CRITICAL_HOURS}h"
echo ""
# Check if backup directory exists
if [[ ! -d "$BACKUP_DIR" ]]; then
echo "CRITICAL: Backup directory does not exist"
exit 2
fi
# Find most recent backup file
LATEST_BACKUP=$(find "$BACKUP_DIR" -type f \( -name "*.dump" -o -name "*.dump.gz" -o -name "*.sql" -o -name "*.sql.gz" -o -name "*.tar.gz" \) -printf '%T@ %p\n' 2>/dev/null | sort -rn | head -1)
if [[ -z "$LATEST_BACKUP" ]]; then
echo "CRITICAL: No backup files found in $BACKUP_DIR"
exit 2
fi
# Extract timestamp and path
BACKUP_TIMESTAMP=$(echo "$LATEST_BACKUP" | cut -d' ' -f1 | cut -d'.' -f1)
BACKUP_PATH=$(echo "$LATEST_BACKUP" | cut -d' ' -f2-)
BACKUP_NAME=$(basename "$BACKUP_PATH")
# Calculate age
NOW=$(date +%s)
AGE_SECONDS=$((NOW - BACKUP_TIMESTAMP))
AGE_HOURS=$((AGE_SECONDS / 3600))
AGE_DAYS=$((AGE_HOURS / 24))
# Format age string
if [[ $AGE_DAYS -gt 0 ]]; then
AGE_STR="${AGE_DAYS}d $((AGE_HOURS % 24))h"
else
AGE_STR="${AGE_HOURS}h $((AGE_SECONDS % 3600 / 60))m"
fi
# Get backup size
BACKUP_SIZE=$(du -h "$BACKUP_PATH" 2>/dev/null | cut -f1)
echo "Latest backup:"
echo " File: $BACKUP_NAME"
echo " Size: $BACKUP_SIZE"
echo " Age: $AGE_STR"
echo ""
# Verify backup integrity if dbbackup is available
if command -v dbbackup &> /dev/null; then
echo "Verifying backup integrity..."
if dbbackup verify "$BACKUP_PATH" --quiet 2>/dev/null; then
echo " ✓ Backup integrity verified"
else
echo " ✗ Backup verification failed"
echo ""
echo "CRITICAL: Latest backup is corrupted"
exit 2
fi
echo ""
fi
# Check thresholds
if [[ $AGE_SECONDS -ge $CRITICAL_SECONDS ]]; then
echo "CRITICAL: Last backup is ${AGE_STR} old (threshold: ${CRITICAL_HOURS}h)"
exit 2
elif [[ $AGE_SECONDS -ge $WARNING_SECONDS ]]; then
echo "WARNING: Last backup is ${AGE_STR} old (threshold: ${WARNING_HOURS}h)"
exit 1
else
echo "OK: Last backup is ${AGE_STR} old"
exit 0
fi
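The threshold check at the end of the script reduces to a pure function of three numbers, which makes it easy to test in isolation (no filesystem access needed):

```shell
#!/bin/sh
# Nagios-style status from backup age: 0 = OK, 1 = WARNING, 2 = CRITICAL.
backup_status() {
  age="$1"; warn="$2"; crit="$3"       # all in seconds
  if [ "$age" -ge "$crit" ]; then echo 2
  elif [ "$age" -ge "$warn" ]; then echo 1
  else echo 0
  fi
}

backup_status   3600 86400 172800   # 1h old   -> prints 0 (OK)
backup_status  90000 86400 172800   # 25h old  -> prints 1 (WARNING)
backup_status 200000 86400 172800   # ~55h old -> prints 2 (CRITICAL)
```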

@@ -1,26 +0,0 @@
# dbbackup Terraform - AWS Example
variable "aws_region" {
default = "us-east-1"
}
provider "aws" {
region = var.aws_region
}
module "dbbackup_storage" {
source = "../" # module source must be a directory containing main.tf, not a .tf file; adjust to your layout
environment = "production"
bucket_name = "mycompany-database-backups"
retention_days = 30
glacier_days = 365
}
output "bucket_name" {
value = module.dbbackup_storage.bucket_name
}
output "setup_instructions" {
value = module.dbbackup_storage.dbbackup_cloud_config
}
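With this example's defaults, the lifecycle rule in the module's main.tf transitions objects to Glacier at `retention_days` and (when `glacier_days > 0`) expires them at `retention_days + glacier_days`. A quick arithmetic check:

```shell
#!/bin/sh
# Expiration day = retention_days + glacier_days, matching the
# dynamic "expiration" block in the lifecycle rule.
retention_days=30
glacier_days=365
expire_day=$((retention_days + glacier_days))
echo "$expire_day"   # prints 395
```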

@@ -1,202 +0,0 @@
# dbbackup Terraform Module - AWS Deployment
# Creates S3 bucket for backup storage with proper security
terraform {
required_version = ">= 1.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 4.0"
}
}
}
# Variables
variable "environment" {
description = "Environment name (e.g., production, staging)"
type = string
default = "production"
}
variable "bucket_name" {
description = "S3 bucket name for backups"
type = string
}
variable "retention_days" {
description = "Days to keep backups before transitioning to Glacier"
type = number
default = 30
}
variable "glacier_days" {
description = "Days to keep in Glacier before deletion (0 = keep forever)"
type = number
default = 365
}
variable "enable_encryption" {
description = "Enable server-side encryption"
type = bool
default = true
}
variable "kms_key_arn" {
description = "KMS key ARN for encryption (leave empty for aws/s3 managed key)"
type = string
default = ""
}
# S3 Bucket
resource "aws_s3_bucket" "backups" {
bucket = var.bucket_name
tags = {
Name = "Database Backups"
Environment = var.environment
ManagedBy = "terraform"
Application = "dbbackup"
}
}
# Versioning
resource "aws_s3_bucket_versioning" "backups" {
bucket = aws_s3_bucket.backups.id
versioning_configuration {
status = "Enabled"
}
}
# Encryption
resource "aws_s3_bucket_server_side_encryption_configuration" "backups" {
count = var.enable_encryption ? 1 : 0
bucket = aws_s3_bucket.backups.id
rule {
apply_server_side_encryption_by_default {
sse_algorithm = var.kms_key_arn != "" ? "aws:kms" : "AES256"
kms_master_key_id = var.kms_key_arn != "" ? var.kms_key_arn : null
}
bucket_key_enabled = true
}
}
# Lifecycle Rules
resource "aws_s3_bucket_lifecycle_configuration" "backups" {
bucket = aws_s3_bucket.backups.id
rule {
id = "transition-to-glacier"
status = "Enabled"
filter {
prefix = ""
}
transition {
days = var.retention_days
storage_class = "GLACIER"
}
dynamic "expiration" {
for_each = var.glacier_days > 0 ? [1] : []
content {
days = var.retention_days + var.glacier_days
}
}
noncurrent_version_transition {
noncurrent_days = 30
storage_class = "GLACIER"
}
noncurrent_version_expiration {
noncurrent_days = 90
}
}
}
# Block Public Access
resource "aws_s3_bucket_public_access_block" "backups" {
bucket = aws_s3_bucket.backups.id
block_public_acls = true
block_public_policy = true
ignore_public_acls = true
restrict_public_buckets = true
}
# IAM User for dbbackup
resource "aws_iam_user" "dbbackup" {
name = "dbbackup-${var.environment}"
path = "/service-accounts/"
tags = {
Application = "dbbackup"
Environment = var.environment
}
}
resource "aws_iam_access_key" "dbbackup" {
user = aws_iam_user.dbbackup.name
}
# IAM Policy
resource "aws_iam_user_policy" "dbbackup" {
name = "dbbackup-s3-access"
user = aws_iam_user.dbbackup.name
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Effect = "Allow"
Action = [
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject",
"s3:ListBucket",
"s3:GetBucketLocation"
]
Resource = [
aws_s3_bucket.backups.arn,
"${aws_s3_bucket.backups.arn}/*"
]
}
]
})
}
# Outputs
output "bucket_name" {
description = "S3 bucket name"
value = aws_s3_bucket.backups.id
}
output "bucket_arn" {
description = "S3 bucket ARN"
value = aws_s3_bucket.backups.arn
}
output "access_key_id" {
description = "IAM access key ID for dbbackup"
value = aws_iam_access_key.dbbackup.id
}
output "secret_access_key" {
description = "IAM secret access key for dbbackup"
value = aws_iam_access_key.dbbackup.secret
sensitive = true
}
output "dbbackup_cloud_config" {
description = "Cloud configuration for dbbackup"
value = <<-EOT
# Add to dbbackup environment:
export AWS_ACCESS_KEY_ID="${aws_iam_access_key.dbbackup.id}"
export AWS_SECRET_ACCESS_KEY="<run: terraform output -raw secret_access_key>"
# Use with dbbackup:
dbbackup backup cluster --cloud s3://${aws_s3_bucket.backups.id}/backups/
EOT
}

View File

@ -1,537 +0,0 @@
# DBBackup Prometheus Exporter & Grafana Dashboard
This document provides a complete reference for the DBBackup Prometheus exporter, including all exported metrics, setup instructions, and Grafana dashboard configuration.
## What's New (January 2026)
### New Features
- **Backup Type Tracking**: All backup metrics now include a `backup_type` label (`full`, `incremental`, or `pitr_base` for PITR base backups)
- **Note**: CLI `--backup-type` flag only accepts `full` or `incremental`. The `pitr_base` label is auto-assigned when using `dbbackup pitr base`
- **PITR Metrics**: Complete Point-in-Time Recovery monitoring for PostgreSQL WAL and MySQL binlog archiving
- **New Alerts**: PITR-specific alerts for archive lag, chain integrity, and gap detection
### New Metrics Added
| Metric | Description |
|--------|-------------|
| `dbbackup_build_info` | Build info with version and commit labels |
| `dbbackup_backup_by_type` | Count backups by type (full/incremental/pitr_base) |
| `dbbackup_pitr_enabled` | Whether PITR is enabled (1/0) |
| `dbbackup_pitr_archive_lag_seconds` | Seconds since last WAL/binlog archived |
| `dbbackup_pitr_chain_valid` | WAL/binlog chain integrity (1=valid) |
| `dbbackup_pitr_gap_count` | Number of gaps in archive chain |
| `dbbackup_pitr_archive_count` | Total archived segments |
| `dbbackup_pitr_archive_size_bytes` | Total archive storage |
| `dbbackup_pitr_recovery_window_minutes` | Estimated PITR coverage |
### Label Changes
- `backup_type` label added to: `dbbackup_rpo_seconds`, `dbbackup_last_success_timestamp`, `dbbackup_last_backup_duration_seconds`, `dbbackup_last_backup_size_bytes`
- `dbbackup_backup_total` type changed from counter to gauge (more accurate for snapshot-based collection)
---
## Table of Contents
- [Quick Start](#quick-start)
- [Exporter Modes](#exporter-modes)
- [Complete Metrics Reference](#complete-metrics-reference)
- [Grafana Dashboard Setup](#grafana-dashboard-setup)
- [Alerting Rules](#alerting-rules)
- [Troubleshooting](#troubleshooting)
---
## Quick Start
### Start the Metrics Server
```bash
# Start HTTP exporter on default port 9399 (auto-detects hostname for server label)
dbbackup metrics serve
# Custom port
dbbackup metrics serve --port 9100
# Specify server name for labels (overrides auto-detection)
dbbackup metrics serve --server production-db-01
# Specify custom catalog database location
dbbackup metrics serve --catalog-db /path/to/catalog.db
```
### Export to Textfile (for node_exporter)
```bash
# Export to default location
dbbackup metrics export
# Custom output path
dbbackup metrics export --output /var/lib/node_exporter/textfile_collector/dbbackup.prom
# Specify catalog database and server name
dbbackup metrics export --catalog-db /root/.dbbackup/catalog.db --server myhost
```
### Install as Systemd Service
```bash
# Install with metrics exporter
sudo dbbackup install --with-metrics
# Start the service
sudo systemctl start dbbackup-exporter
```
---
## Exporter Modes
### HTTP Server Mode (`metrics serve`)
Runs a standalone HTTP server exposing metrics for direct Prometheus scraping.
| Endpoint | Description |
|-------------|----------------------------------|
| `/metrics` | Prometheus metrics |
| `/health` | Health check (returns 200 OK) |
| `/` | Service info page |
**Default Port:** 9399
**Server Label:** Auto-detected from hostname (use `--server` to override)
**Catalog Location:** `~/.dbbackup/catalog.db` (use `--catalog-db` to override)
**Configuration:**
```bash
dbbackup metrics serve [--server <instance-name>] [--port <port>] [--catalog-db <path>]
```
| Flag | Default | Description |
|------|---------|-------------|
| `--server` | hostname | Server label for metrics (auto-detected if not set) |
| `--port` | 9399 | HTTP server port |
| `--catalog-db` | ~/.dbbackup/catalog.db | Path to catalog SQLite database |
### Textfile Mode (`metrics export`)
Writes metrics to a file for collection by node_exporter's textfile collector.
**Default Path:** `/var/lib/dbbackup/metrics/dbbackup.prom`
| Flag | Default | Description |
|------|---------|-------------|
| `--server` | hostname | Server label for metrics (auto-detected if not set) |
| `--output` | /var/lib/dbbackup/metrics/dbbackup.prom | Output file path |
| `--catalog-db` | ~/.dbbackup/catalog.db | Path to catalog SQLite database |
**node_exporter Configuration:**
```bash
node_exporter --collector.textfile.directory=/var/lib/dbbackup/metrics/
```
---
## Complete Metrics Reference
All metrics use the `dbbackup_` prefix. Below is the **validated** list of metrics exported by DBBackup.
### Backup Status Metrics
| Metric Name | Type | Labels | Description |
|-------------|------|--------|-------------|
| `dbbackup_last_success_timestamp` | gauge | `server`, `database`, `engine`, `backup_type` | Unix timestamp of last successful backup |
| `dbbackup_last_backup_duration_seconds` | gauge | `server`, `database`, `engine`, `backup_type` | Duration of last successful backup in seconds |
| `dbbackup_last_backup_size_bytes` | gauge | `server`, `database`, `engine`, `backup_type` | Size of last successful backup in bytes |
| `dbbackup_backup_total` | gauge | `server`, `database`, `status` | Total backup attempts (status: `success` or `failure`) |
| `dbbackup_backup_by_type` | gauge | `server`, `database`, `backup_type` | Backup count by type (`full`, `incremental`, `pitr_base`) |
| `dbbackup_rpo_seconds` | gauge | `server`, `database`, `backup_type` | Seconds since last successful backup (RPO) |
| `dbbackup_backup_verified` | gauge | `server`, `database` | Whether last backup was verified (1=yes, 0=no) |
| `dbbackup_scrape_timestamp` | gauge | `server` | Unix timestamp when metrics were collected |
### PITR (Point-in-Time Recovery) Metrics
| Metric Name | Type | Labels | Description |
|-------------|------|--------|-------------|
| `dbbackup_pitr_enabled` | gauge | `server`, `database`, `engine` | Whether PITR is enabled (1=yes, 0=no) |
| `dbbackup_pitr_last_archived_timestamp` | gauge | `server`, `database`, `engine` | Unix timestamp of last archived WAL/binlog |
| `dbbackup_pitr_archive_lag_seconds` | gauge | `server`, `database`, `engine` | Seconds since last archive (lower is better) |
| `dbbackup_pitr_archive_count` | gauge | `server`, `database`, `engine` | Total archived WAL segments or binlog files |
| `dbbackup_pitr_archive_size_bytes` | gauge | `server`, `database`, `engine` | Total size of archived logs in bytes |
| `dbbackup_pitr_chain_valid` | gauge | `server`, `database`, `engine` | Whether archive chain is valid (1=yes, 0=gaps) |
| `dbbackup_pitr_gap_count` | gauge | `server`, `database`, `engine` | Number of gaps in archive chain |
| `dbbackup_pitr_recovery_window_minutes` | gauge | `server`, `database`, `engine` | Estimated PITR coverage window in minutes |
| `dbbackup_pitr_scrape_timestamp` | gauge | `server` | PITR metrics collection timestamp |
### Deduplication Metrics
| Metric Name | Type | Labels | Description |
|-------------|------|--------|-------------|
| `dbbackup_dedup_chunks_total` | gauge | `server` | Total unique chunks stored |
| `dbbackup_dedup_manifests_total` | gauge | `server` | Total number of deduplicated backups |
| `dbbackup_dedup_backup_bytes_total` | gauge | `server` | Total logical size of all backups (bytes) |
| `dbbackup_dedup_stored_bytes_total` | gauge | `server` | Total unique data stored after dedup (bytes) |
| `dbbackup_dedup_space_saved_bytes` | gauge | `server` | Bytes saved by deduplication |
| `dbbackup_dedup_ratio` | gauge | `server` | Dedup efficiency (0-1, higher = better) |
| `dbbackup_dedup_disk_usage_bytes` | gauge | `server` | Actual disk usage of chunk store |
| `dbbackup_dedup_compression_ratio` | gauge | `server` | Compression ratio (0-1, higher = better) |
| `dbbackup_dedup_oldest_chunk_timestamp` | gauge | `server` | Unix timestamp of oldest chunk |
| `dbbackup_dedup_newest_chunk_timestamp` | gauge | `server` | Unix timestamp of newest chunk |
| `dbbackup_dedup_scrape_timestamp` | gauge | `server` | Dedup metrics collection timestamp |
### Per-Database Dedup Metrics
| Metric Name | Type | Labels | Description |
|-------------|------|--------|-------------|
| `dbbackup_dedup_database_backup_count` | gauge | `server`, `database` | Deduplicated backups per database |
| `dbbackup_dedup_database_ratio` | gauge | `server`, `database` | Per-database dedup ratio |
| `dbbackup_dedup_database_last_backup_timestamp` | gauge | `server`, `database` | Last backup timestamp per database |
| `dbbackup_dedup_database_total_bytes` | gauge | `server`, `database` | Total logical size per database |
| `dbbackup_dedup_database_stored_bytes` | gauge | `server`, `database` | Stored bytes per database (after dedup) |
| `dbbackup_rpo_seconds` | gauge | `server`, `database` | Seconds since last backup (same as regular backups for unified alerting) |
> **Note:** The `dbbackup_rpo_seconds` metric is exported by both regular backups and dedup backups, enabling unified alerting without complex PromQL expressions.
---
## Example Metrics Output
```prometheus
# DBBackup Prometheus Metrics
# Generated at: 2026-01-27T10:30:00Z
# Server: production
# HELP dbbackup_last_success_timestamp Unix timestamp of last successful backup
# TYPE dbbackup_last_success_timestamp gauge
dbbackup_last_success_timestamp{server="production",database="myapp",engine="postgres",backup_type="full"} 1737884600
# HELP dbbackup_last_backup_duration_seconds Duration of last successful backup in seconds
# TYPE dbbackup_last_backup_duration_seconds gauge
dbbackup_last_backup_duration_seconds{server="production",database="myapp",engine="postgres",backup_type="full"} 125.50
# HELP dbbackup_last_backup_size_bytes Size of last successful backup in bytes
# TYPE dbbackup_last_backup_size_bytes gauge
dbbackup_last_backup_size_bytes{server="production",database="myapp",engine="postgres",backup_type="full"} 1073741824
# HELP dbbackup_backup_total Total number of backup attempts by type and status
# TYPE dbbackup_backup_total gauge
dbbackup_backup_total{server="production",database="myapp",status="success"} 42
dbbackup_backup_total{server="production",database="myapp",status="failure"} 2
# HELP dbbackup_backup_by_type Total number of backups by backup type
# TYPE dbbackup_backup_by_type gauge
dbbackup_backup_by_type{server="production",database="myapp",backup_type="full"} 30
dbbackup_backup_by_type{server="production",database="myapp",backup_type="incremental"} 12
# HELP dbbackup_rpo_seconds Recovery Point Objective - seconds since last successful backup
# TYPE dbbackup_rpo_seconds gauge
dbbackup_rpo_seconds{server="production",database="myapp",backup_type="full"} 3600
# HELP dbbackup_backup_verified Whether the last backup was verified (1=yes, 0=no)
# TYPE dbbackup_backup_verified gauge
dbbackup_backup_verified{server="production",database="myapp"} 1
# HELP dbbackup_pitr_enabled Whether PITR is enabled for database (1=enabled, 0=disabled)
# TYPE dbbackup_pitr_enabled gauge
dbbackup_pitr_enabled{server="production",database="myapp",engine="postgres"} 1
# HELP dbbackup_pitr_archive_lag_seconds Seconds since last WAL/binlog was archived
# TYPE dbbackup_pitr_archive_lag_seconds gauge
dbbackup_pitr_archive_lag_seconds{server="production",database="myapp",engine="postgres"} 45
# HELP dbbackup_pitr_chain_valid Whether the WAL/binlog chain is valid (1=valid, 0=gaps detected)
# TYPE dbbackup_pitr_chain_valid gauge
dbbackup_pitr_chain_valid{server="production",database="myapp",engine="postgres"} 1
# HELP dbbackup_pitr_recovery_window_minutes Estimated recovery window in minutes
# TYPE dbbackup_pitr_recovery_window_minutes gauge
dbbackup_pitr_recovery_window_minutes{server="production",database="myapp",engine="postgres"} 10080
# HELP dbbackup_dedup_ratio Deduplication ratio (0-1, higher is better)
# TYPE dbbackup_dedup_ratio gauge
dbbackup_dedup_ratio{server="production"} 0.6500
# HELP dbbackup_dedup_space_saved_bytes Bytes saved by deduplication
# TYPE dbbackup_dedup_space_saved_bytes gauge
dbbackup_dedup_space_saved_bytes{server="production"} 5368709120
```
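If Prometheus is not wired up yet, the RPO values can be sanity-checked straight from a metrics snapshot with standard shell tools. A minimal sketch against a captured sample (in practice the snapshot would come from `curl -s http://localhost:9399/metrics`; thresholds taken from the dashboard section below):

```shell
# Write a small sample of scrape output to work against.
cat > /tmp/dbbackup_sample.prom <<'EOF'
dbbackup_rpo_seconds{server="production",database="myapp",backup_type="full"} 3600
dbbackup_rpo_seconds{server="production",database="olddb",backup_type="full"} 90000
EOF

# Flag any series whose RPO breaches the warning/critical thresholds.
awk '/^dbbackup_rpo_seconds/ {
  status = ($NF > 86400) ? "CRITICAL" : ($NF > 43200) ? "WARNING" : "OK"
  print status, $0
}' /tmp/dbbackup_sample.prom
```

The `myapp` series (3600s) prints as OK, while `olddb` (90000s) is flagged CRITICAL.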
---
## Prometheus Scrape Configuration
Add to your `prometheus.yml`:
```yaml
scrape_configs:
- job_name: 'dbbackup'
scrape_interval: 60s
scrape_timeout: 10s
static_configs:
- targets:
- 'db-server-01:9399'
- 'db-server-02:9399'
labels:
environment: 'production'
- targets:
- 'db-staging:9399'
labels:
environment: 'staging'
relabel_configs:
- source_labels: [__address__]
target_label: instance
regex: '([^:]+):\d+'
replacement: '$1'
```
### File-based Service Discovery
```yaml
- job_name: 'dbbackup-sd'
scrape_interval: 60s
file_sd_configs:
- files:
- '/etc/prometheus/targets/dbbackup/*.yml'
refresh_interval: 5m
```
---
## Grafana Dashboard Setup
### Import Dashboard
1. Open Grafana → **Dashboards** → **Import**
2. Upload `grafana/dbbackup-dashboard.json` or paste the JSON
3. Select your Prometheus data source
4. Click **Import**
### Dashboard Panels
The dashboard includes the following panels:
#### Backup Overview Row
| Panel | Metric Used | Description |
|-------|-------------|-------------|
| Last Backup Status | `dbbackup_rpo_seconds < bool 604800` | SUCCESS/FAILED indicator |
| Time Since Last Backup | `dbbackup_rpo_seconds` | Time elapsed since last backup |
| Verification Status | `dbbackup_backup_verified` | VERIFIED/NOT VERIFIED |
| Total Successful Backups | `dbbackup_backup_total{status="success"}` | Counter |
| Total Failed Backups | `dbbackup_backup_total{status="failure"}` | Counter |
| RPO Over Time | `dbbackup_rpo_seconds` | Time series graph |
| Backup Size | `dbbackup_last_backup_size_bytes` | Bar chart |
| Backup Duration | `dbbackup_last_backup_duration_seconds` | Time series |
| Backup Status Overview | Multiple metrics | Table with color-coded status |
#### Deduplication Statistics Row
| Panel | Metric Used | Description |
|-------|-------------|-------------|
| Dedup Ratio | `dbbackup_dedup_ratio` | Percentage efficiency |
| Space Saved | `dbbackup_dedup_space_saved_bytes` | Total bytes saved |
| Disk Usage | `dbbackup_dedup_disk_usage_bytes` | Actual storage used |
| Total Chunks | `dbbackup_dedup_chunks_total` | Chunk count |
| Compression Ratio | `dbbackup_dedup_compression_ratio` | Compression efficiency |
| Oldest Chunk | `dbbackup_dedup_oldest_chunk_timestamp` | Age of oldest data |
| Newest Chunk | `dbbackup_dedup_newest_chunk_timestamp` | Most recent chunk |
| Dedup Ratio by Database | `dbbackup_dedup_database_ratio` | Per-database efficiency |
| Dedup Storage Over Time | `dbbackup_dedup_space_saved_bytes`, `dbbackup_dedup_disk_usage_bytes` | Storage trends |
### Dashboard Variables
| Variable | Query | Description |
|----------|-------|-------------|
| `$server` | `label_values(dbbackup_rpo_seconds, server)` | Filter by server |
| `$DS_PROMETHEUS` | datasource | Prometheus data source |
### Dashboard Thresholds
#### RPO Thresholds
- **Green:** < 12 hours (43200 seconds)
- **Yellow:** 12-24 hours
- **Red:** > 24 hours (86400 seconds)
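When scripting against the exporter (for example in a cron health check), the same thresholds can be applied directly. A small helper sketch using the values above:

```shell
# Map an RPO value in seconds to the dashboard's traffic-light status.
rpo_status() {
  local rpo=$1
  if [ "$rpo" -gt 86400 ]; then echo red      # > 24 hours
  elif [ "$rpo" -gt 43200 ]; then echo yellow # 12-24 hours
  else echo green                             # < 12 hours
  fi
}

rpo_status 3600   # green
rpo_status 50000  # yellow
rpo_status 90000  # red
```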
#### Backup Status Thresholds
- **1 (Green):** SUCCESS
- **0 (Red):** FAILED
---
## Alerting Rules
### Pre-configured Alerts
Import `deploy/prometheus/alerting-rules.yaml` into Prometheus/Alertmanager.
#### Backup Status Alerts
| Alert | Expression | Severity | Description |
|-------|------------|----------|-------------|
| `DBBackupRPOWarning` | `dbbackup_rpo_seconds > 43200` | warning | No backup for 12+ hours |
| `DBBackupRPOCritical` | `dbbackup_rpo_seconds > 86400` | critical | No backup for 24+ hours |
| `DBBackupFailed` | `increase(dbbackup_backup_total{status="failure"}[1h]) > 0` | critical | Backup failed |
| `DBBackupFailureRateHigh` | Failure rate > 10% in 24h | warning | High failure rate |
| `DBBackupSizeAnomaly` | Size changed > 50% vs 7-day avg | warning | Unusual backup size |
| `DBBackupSizeZero` | `dbbackup_last_backup_size_bytes == 0` | critical | Empty backup file |
| `DBBackupDurationHigh` | `dbbackup_last_backup_duration_seconds > 3600` | warning | Backup taking > 1 hour |
| `DBBackupNotVerified` | `dbbackup_backup_verified == 0` for 24h | warning | Backup not verified |
| `DBBackupNoRecentFull` | No full backup in 7+ days | warning | Need full backup for incremental chain |
#### PITR Alerts (New)
| Alert | Expression | Severity | Description |
|-------|------------|----------|-------------|
| `DBBackupPITRArchiveLag` | `dbbackup_pitr_archive_lag_seconds > 600` | warning | Archive 10+ min behind |
| `DBBackupPITRArchiveCritical` | `dbbackup_pitr_archive_lag_seconds > 1800` | critical | Archive 30+ min behind |
| `DBBackupPITRChainBroken` | `dbbackup_pitr_chain_valid == 0` | critical | Gaps in WAL/binlog chain |
| `DBBackupPITRGaps` | `dbbackup_pitr_gap_count > 0` | warning | Gaps detected in archive chain |
| `DBBackupPITRDisabled` | PITR unexpectedly disabled | critical | PITR was enabled but now off |
#### Infrastructure Alerts
| Alert | Expression | Severity | Description |
|-------|------------|----------|-------------|
| `DBBackupExporterDown` | `up{job="dbbackup"} == 0` | critical | Exporter unreachable |
| `DBBackupDedupRatioLow` | `dbbackup_dedup_ratio < 0.2` for 24h | info | Low dedup efficiency |
| `DBBackupStorageHigh` | `dbbackup_dedup_disk_usage_bytes > 1TB` | warning | High storage usage |
### Example Alert Configuration
```yaml
groups:
- name: dbbackup
rules:
- alert: DBBackupRPOCritical
expr: dbbackup_rpo_seconds > 86400
for: 5m
labels:
severity: critical
annotations:
summary: "No backup for {{ $labels.database }} in 24+ hours"
description: "RPO violation on {{ $labels.server }}. Last backup: {{ $value | humanizeDuration }} ago."
- alert: DBBackupPITRChainBroken
expr: dbbackup_pitr_chain_valid == 0
for: 1m
labels:
severity: critical
annotations:
summary: "PITR chain broken for {{ $labels.database }}"
description: "WAL/binlog chain has gaps. Point-in-time recovery is NOT possible. New base backup required."
```
---
## Troubleshooting
### Exporter Not Returning Metrics
1. **Check catalog access:**
```bash
dbbackup catalog list
```
2. **Verify port is open:**
```bash
curl -v http://localhost:9399/metrics
```
3. **Check logs:**
```bash
journalctl -u dbbackup-exporter -f
```
### Missing Dedup Metrics
Dedup metrics are only exported when using deduplication:
```bash
# Ensure dedup is enabled
dbbackup dedup status
```
### Metrics Not Updating
The exporter caches metrics for 30 seconds. The `/health` endpoint can confirm the exporter is running.
### Stale or Empty Metrics (Catalog Location Mismatch)
If the exporter shows stale or no backup data, verify the catalog database location:
```bash
# Check where catalog sync writes
dbbackup catalog sync /path/to/backups
# Output shows: [STATS] Catalog database: /root/.dbbackup/catalog.db
# Ensure exporter reads from the same location
dbbackup metrics serve --catalog-db /root/.dbbackup/catalog.db
```
**Common Issue:** If backup scripts run as root but the exporter runs as a different user, they may use different catalog locations. Use `--catalog-db` to ensure consistency.
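The mismatch comes from `~` expanding differently per account. A quick way to see which file each account would resolve to under the documented default location (home directories below are only examples):

```shell
# Print the default catalog path a given home directory resolves to.
default_catalog() {
  echo "$1/.dbbackup/catalog.db"
}

default_catalog /root           # what a root cron job would use
default_catalog /home/exporter  # hypothetical exporter service account
```

If the two printed paths differ, point both sides at one file with `--catalog-db`.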
### Dashboard Shows "No Data"
1. Verify Prometheus is scraping successfully:
```bash
curl http://prometheus:9090/api/v1/targets | grep dbbackup
```
2. Check metric names match (case-sensitive):
```promql
{__name__=~"dbbackup_.*"}
```
3. Verify `server` label matches dashboard variable.
### Label Mismatch Issues
Ensure the `--server` flag matches across all instances:
```bash
# Consistent naming (or let it auto-detect from hostname)
dbbackup metrics serve --server prod-db-01
```
> **Note:** As of v3.x, the exporter auto-detects hostname if `--server` is not specified. This ensures unique server labels in multi-host deployments.
---
## Metrics Validation Checklist
Use this checklist to validate your exporter setup:
- [ ] `/metrics` endpoint returns HTTP 200
- [ ] `/health` endpoint returns `{"status":"ok"}`
- [ ] `dbbackup_rpo_seconds` shows correct RPO values
- [ ] `dbbackup_backup_total` increments after backups
- [ ] `dbbackup_backup_verified` reflects verification status
- [ ] `dbbackup_last_backup_size_bytes` matches actual backup sizes
- [ ] Prometheus scrape succeeds (check targets page)
- [ ] Grafana dashboard loads without errors
- [ ] Dashboard variables populate correctly
- [ ] All panels show data (no "No Data" messages)
---
## Files Reference
| File | Description |
|------|-------------|
| `grafana/dbbackup-dashboard.json` | Grafana dashboard JSON |
| `grafana/alerting-rules.yaml` | Grafana alerting rules |
| `deploy/prometheus/alerting-rules.yaml` | Prometheus alerting rules |
| `deploy/prometheus/scrape-config.yaml` | Prometheus scrape configuration |
| `docs/METRICS.md` | Metrics documentation |
---
## Version Compatibility
| DBBackup Version | Metrics Version | Dashboard UID |
|------------------|-----------------|---------------|
| 1.0.0+ | v1 | `dbbackup-overview` |
---
## Support
For issues with the exporter or dashboard:
1. Check the [troubleshooting section](#troubleshooting)
2. Review logs: `journalctl -u dbbackup-exporter`
3. Open an issue with metrics output and dashboard screenshots

View File

@ -1,266 +0,0 @@
# Lock Debugging Feature
## Overview
The `--debug-locks` flag provides complete visibility into the lock protection system introduced in v3.42.82. This eliminates the need for blind troubleshooting when diagnosing lock exhaustion issues.
## Problem
When PostgreSQL lock exhaustion occurs during restore:
- User sees "out of shared memory" error after 7 hours
- No visibility into why Large DB Guard chose conservative mode
- Unknown whether lock boost attempts succeeded
- Unclear what actions are required to fix the issue
- Requires 14 days of troubleshooting to understand the problem
## Solution
New `--debug-locks` flag captures every decision point in the lock protection system with detailed logging prefixed by 🔍 [LOCK-DEBUG].
## Usage
### CLI
```bash
# Single database restore with lock debugging
dbbackup restore single mydb.dump --debug-locks --confirm
# Cluster restore with lock debugging
dbbackup restore cluster backup.tar.gz --debug-locks --confirm
# Can also use global flag
dbbackup --debug-locks restore cluster backup.tar.gz --confirm
```
### TUI (Interactive Mode)
```bash
dbbackup # Start interactive mode
# Navigate to restore operation
# Select your archive
# Press 'l' to toggle lock debugging (🔍 icon appears when enabled)
# Press Enter to proceed
```
## What Gets Logged
### 1. Strategy Analysis Entry Point
```
🔍 [LOCK-DEBUG] Large DB Guard: Starting strategy analysis
archive=cluster_backup.tar.gz
dump_count=15
```
### 2. PostgreSQL Configuration Detection
```
🔍 [LOCK-DEBUG] Querying PostgreSQL for lock configuration
host=localhost
port=5432
user=postgres
🔍 [LOCK-DEBUG] Successfully retrieved PostgreSQL lock settings
max_locks_per_transaction=2048
max_connections=256
total_capacity=524288
```
### 3. Guard Decision Logic
```
🔍 [LOCK-DEBUG] PostgreSQL lock configuration detected
max_locks_per_transaction=2048
max_connections=256
calculated_capacity=524288
threshold_required=4096
below_threshold=true
🔍 [LOCK-DEBUG] Guard decision: CONSERVATIVE mode
jobs=1
parallel_dbs=1
reason="Lock threshold not met (max_locks < 4096)"
```
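Judging by the logged values, the guard's check reduces to simple arithmetic. A sketch that reproduces the numbers above (the capacity formula and the 4096 threshold are inferred from the log output, not taken from the source code):

```shell
# Reproduce the guard's logged figures: total lock capacity and mode choice.
max_locks_per_transaction=2048
max_connections=256
threshold=4096

total_capacity=$((max_locks_per_transaction * max_connections))
echo "total_capacity=$total_capacity"   # 524288, matching the log above

if [ "$max_locks_per_transaction" -lt "$threshold" ]; then
  echo "mode=CONSERVATIVE jobs=1 parallel_dbs=1"
else
  echo "mode=NORMAL"
fi
```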
### 4. Lock Boost Attempts
```
🔍 [LOCK-DEBUG] boostPostgreSQLSettings: Starting lock boost procedure
target_lock_value=4096
🔍 [LOCK-DEBUG] Current PostgreSQL lock configuration
current_max_locks=2048
target_max_locks=4096
boost_required=true
🔍 [LOCK-DEBUG] Executing ALTER SYSTEM to boost locks
from=2048
to=4096
🔍 [LOCK-DEBUG] ALTER SYSTEM succeeded - restart required
setting_saved_to=postgresql.auto.conf
active_after="PostgreSQL restart"
```
### 5. PostgreSQL Restart Attempts
```
🔍 [LOCK-DEBUG] Attempting PostgreSQL restart to activate new lock setting
# If restart succeeds:
🔍 [LOCK-DEBUG] PostgreSQL restart SUCCEEDED
🔍 [LOCK-DEBUG] Post-restart verification
new_max_locks=4096
target_was=4096
verification=PASS
# If restart fails:
🔍 [LOCK-DEBUG] PostgreSQL restart FAILED
current_locks=2048
required_locks=4096
setting_saved=true
setting_active=false
verdict="ABORT - Manual restart required"
```
### 6. Final Verification
```
🔍 [LOCK-DEBUG] Lock boost function returned
original_max_locks=2048
target_max_locks=4096
boost_successful=false
🔍 [LOCK-DEBUG] CRITICAL: Lock verification FAILED
actual_locks=2048
required_locks=4096
delta=2048
verdict="ABORT RESTORE"
```
## Example Workflow
### Scenario: Lock Exhaustion on New System
```bash
# Step 1: Run restore with lock debugging enabled
dbbackup restore cluster backup.tar.gz --debug-locks --confirm
# Output shows:
# 🔍 [LOCK-DEBUG] Guard decision: CONSERVATIVE mode
# current_locks=2048, required=4096
# verdict="ABORT - Manual restart required"
# Step 2: Follow the actionable instructions
sudo -u postgres psql -c "ALTER SYSTEM SET max_locks_per_transaction = 4096;"
sudo systemctl restart postgresql
# Step 3: Verify the change
sudo -u postgres psql -c "SHOW max_locks_per_transaction;"
# Output: 4096
# Step 4: Retry restore (can disable debug now)
dbbackup restore cluster backup.tar.gz --confirm
# Success! Restore proceeds with verified lock protection
```
## When to Use
### Enable Lock Debugging When:
- Diagnosing lock exhaustion failures
- Understanding why conservative mode was triggered
- Verifying lock boost attempts worked
- Troubleshooting "out of shared memory" errors
- Setting up restore on new systems with unknown lock config
- Documenting lock requirements for compliance/security
### Leave Disabled For:
- Normal production restores (cleaner logs)
- Scripted/automated restores (less noise)
- When lock config is known to be sufficient
- When restore performance is critical
## Integration Points
### Configuration
- **Config Field:** `cfg.DebugLocks` (bool)
- **CLI Flag:** `--debug-locks` (persistent flag on root command)
- **TUI Toggle:** Press 'l' in restore preview screen
- **Default:** `false` (opt-in only)
### Files Modified
- `internal/config/config.go` - Added DebugLocks field
- `cmd/root.go` - Added --debug-locks persistent flag
- `cmd/restore.go` - Wired flag to single/cluster restore commands
- `internal/restore/large_db_guard.go` - 20+ debug log points
- `internal/restore/engine.go` - 15+ debug log points in boost logic
- `internal/tui/restore_preview.go` - 'l' key toggle with 🔍 icon
### Log Locations
All lock debug logs go to the configured logger (usually syslog or file) with level INFO. The 🔍 [LOCK-DEBUG] prefix makes them easy to grep:
```bash
# Filter lock debug logs
journalctl -u dbbackup | grep 'LOCK-DEBUG'
# Or in log files
grep 'LOCK-DEBUG' /var/log/dbbackup.log
```
## Backward Compatibility
- ✅ No breaking changes
- ✅ Flag defaults to false (no output unless enabled)
- ✅ Existing scripts continue to work unchanged
- ✅ TUI users get new 'l' toggle automatically
- ✅ CLI users can add --debug-locks when needed
## Performance Impact
Negligible - the debug logging only adds:
- ~5 database queries (SHOW commands)
- ~10 conditional if statements checking cfg.DebugLocks
- ~50KB of additional log output when enabled
- No impact on restore performance itself
## Relationship to v3.42.82
This feature completes the lock protection system:
**v3.42.82 (Protection):**
- Fixed Guard to always force conservative mode if max_locks < 4096
- Fixed engine to abort restore if lock boost fails
- Ensures no path allows 7-hour failures
**v3.42.83 (Visibility):**
- Shows why Guard chose conservative mode
- Displays lock config that was detected
- Tracks boost attempts and outcomes
- Explains why restore was aborted
Together: Bulletproof protection + complete transparency.
## Deployment
1. Update to v3.42.83:
```bash
wget https://github.com/PlusOne/dbbackup/releases/download/v3.42.83/dbbackup_linux_amd64
chmod +x dbbackup_linux_amd64
sudo mv dbbackup_linux_amd64 /usr/local/bin/dbbackup
```
2. Test lock debugging:
```bash
dbbackup restore cluster test_backup.tar.gz --debug-locks --dry-run
```
3. Enable for production if diagnosing issues:
```bash
dbbackup restore cluster production_backup.tar.gz --debug-locks --confirm
```
## Support
For issues related to lock debugging:
- Check logs for 🔍 [LOCK-DEBUG] entries
- Verify PostgreSQL version supports ALTER SYSTEM (9.4+)
- Ensure user has SUPERUSER role for ALTER SYSTEM
- Check systemd/init scripts can restart PostgreSQL
Related documentation:
- verify_postgres_locks.sh - Script to check lock configuration
- v3.42.82 release notes - Lock exhaustion bug fixes

View File

@ -1,302 +0,0 @@
# DBBackup Prometheus Metrics
This document describes all Prometheus metrics exposed by DBBackup for monitoring and alerting.
## Backup Status Metrics
### `dbbackup_rpo_seconds`
**Type:** Gauge
**Labels:** `server`, `database`, `backup_type`
**Description:** Time in seconds since the last successful backup (Recovery Point Objective).
**Recommended Thresholds:**
- Green: < 43200 (12 hours)
- Yellow: 43200-86400 (12-24 hours)
- Red: > 86400 (24+ hours)
**Example Query:**
```promql
dbbackup_rpo_seconds{server="prod-db-01"} > 86400
# RPO by backup type
dbbackup_rpo_seconds{backup_type="full"}
dbbackup_rpo_seconds{backup_type="incremental"}
```
---
### `dbbackup_backup_total`
**Type:** Gauge
**Labels:** `server`, `database`, `status`
**Description:** Total count of backup attempts, labeled by status (`success` or `failure`).
**Example Query:**
```promql
# Total successful backups
dbbackup_backup_total{status="success"}
```
---
### `dbbackup_backup_by_type`
**Type:** Gauge
**Labels:** `server`, `database`, `backup_type`
**Description:** Total count of backups by backup type (`full`, `incremental`, `pitr_base`).
> **Note:** The `backup_type` label values are:
> - `full` - Created with `--backup-type full` (default)
> - `incremental` - Created with `--backup-type incremental`
> - `pitr_base` - Auto-assigned when using `dbbackup pitr base` command
>
> The CLI `--backup-type` flag only accepts `full` or `incremental`.
**Example Query:**
```promql
# Count of each backup type
dbbackup_backup_by_type{backup_type="full"}
dbbackup_backup_by_type{backup_type="incremental"}
dbbackup_backup_by_type{backup_type="pitr_base"}
```
---
### `dbbackup_backup_verified`
**Type:** Gauge
**Labels:** `server`, `database`
**Description:** Whether the most recent backup was verified successfully (1 = verified, 0 = not verified).
---
### `dbbackup_last_backup_size_bytes`
**Type:** Gauge
**Labels:** `server`, `database`, `engine`, `backup_type`
**Description:** Size of the last successful backup in bytes.
**Example Query:**
```promql
# Total backup storage across all databases
sum(dbbackup_last_backup_size_bytes)
# Size by backup type
dbbackup_last_backup_size_bytes{backup_type="full"}
```
---
### `dbbackup_last_backup_duration_seconds`
**Type:** Gauge
**Labels:** `server`, `database`, `engine`, `backup_type`
**Description:** Duration of the last backup operation in seconds.
---
### `dbbackup_last_success_timestamp`
**Type:** Gauge
**Labels:** `server`, `database`, `engine`, `backup_type`
**Description:** Unix timestamp of the last successful backup.
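**Example Query:**
```promql
# Cross-check of dbbackup_rpo_seconds, derived from this timestamp
time() - dbbackup_last_success_timestamp
```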
---
## PITR (Point-in-Time Recovery) Metrics
### `dbbackup_pitr_enabled`
**Type:** Gauge
**Labels:** `server`, `database`, `engine`
**Description:** Whether PITR is enabled for the database (1 = enabled, 0 = disabled).
**Example Query:**
```promql
# Check if PITR is enabled
dbbackup_pitr_enabled{database="production"} == 1
```
---
### `dbbackup_pitr_last_archived_timestamp`
**Type:** Gauge
**Labels:** `server`, `database`, `engine`
**Description:** Unix timestamp of the last archived WAL segment (PostgreSQL) or binlog file (MySQL).
---
### `dbbackup_pitr_archive_lag_seconds`
**Type:** Gauge
**Labels:** `server`, `database`, `engine`
**Description:** Seconds since the last WAL/binlog was archived. High values indicate archiving issues.
**Recommended Thresholds:**
- Green: < 300 (5 minutes)
- Yellow: 300-600 (5-10 minutes)
- Red: > 600 (10+ minutes)
**Example Query:**
```promql
# Alert on high archive lag
dbbackup_pitr_archive_lag_seconds > 600
```
---
### `dbbackup_pitr_archive_count`
**Type:** Gauge
**Labels:** `server`, `database`, `engine`
**Description:** Total number of archived WAL segments or binlog files.
---
### `dbbackup_pitr_archive_size_bytes`
**Type:** Gauge
**Labels:** `server`, `database`, `engine`
**Description:** Total size of archived logs in bytes.
---
### `dbbackup_pitr_chain_valid`
**Type:** Gauge
**Labels:** `server`, `database`, `engine`
**Description:** Whether the WAL/binlog chain is valid (1 = valid, 0 = gaps detected).
**Example Query:**
```promql
# Alert on broken chain
dbbackup_pitr_chain_valid == 0
```
---
### `dbbackup_pitr_gap_count`
**Type:** Gauge
**Labels:** `server`, `database`, `engine`
**Description:** Number of gaps detected in the WAL/binlog chain. Any value > 0 requires investigation.
---
### `dbbackup_pitr_recovery_window_minutes`
**Type:** Gauge
**Labels:** `server`, `database`, `engine`
**Description:** Estimated recovery window in minutes - the time span covered by archived logs.
---
## Deduplication Metrics
### `dbbackup_dedup_ratio`
**Type:** Gauge
**Labels:** `server`
**Description:** Overall deduplication efficiency (0-1). A ratio of 0.5 means 50% space savings.
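**Example Query:** (a derivation sketch - this assumes the ratio is defined as saved bytes over the logical total, which matches the byte gauges below but is not stated explicitly)
```promql
# Approximate the ratio from the byte gauges
dbbackup_dedup_space_saved_bytes
  / (dbbackup_dedup_space_saved_bytes + dbbackup_dedup_disk_usage_bytes)
```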
---
### `dbbackup_dedup_database_ratio`
**Type:** Gauge
**Labels:** `server`, `database`
**Description:** Per-database deduplication ratio.
---
### `dbbackup_dedup_space_saved_bytes`
**Type:** Gauge
**Labels:** `server`
**Description:** Total bytes saved by deduplication across all backups.
---
### `dbbackup_dedup_disk_usage_bytes`
**Type:** Gauge
**Labels:** `server`
**Description:** Actual disk usage of the chunk store after deduplication.
---
### `dbbackup_dedup_chunks_total`
**Type:** Gauge
**Labels:** `server`
**Description:** Total number of unique content-addressed chunks in the dedup store.
---
### `dbbackup_dedup_compression_ratio`
**Type:** Gauge
**Labels:** `server`
**Description:** Compression ratio achieved on chunk data (0-1). Higher = better compression.
---
### `dbbackup_dedup_oldest_chunk_timestamp`
**Type:** Gauge
**Labels:** `server`
**Description:** Unix timestamp of the oldest chunk. Useful for monitoring retention policy.
---
### `dbbackup_dedup_newest_chunk_timestamp`
**Type:** Gauge
**Labels:** `server`
**Description:** Unix timestamp of the newest chunk. Confirms dedup is working on recent backups.
---
## Build Information Metrics
### `dbbackup_build_info`
**Type:** Gauge
**Labels:** `server`, `version`, `commit`
**Description:** Build information for the dbbackup exporter. Value is always 1.
This metric is useful for:
- Tracking which version is deployed across your fleet
- Alerting when versions drift between servers
- Correlating behavior changes with deployments
**Example Queries:**
```promql
# Show all deployed versions
group by (version) (dbbackup_build_info)
# Find servers not on latest version
dbbackup_build_info{version!="4.1.1"}
# Alert on version drift
count(count by (version) (dbbackup_build_info)) > 1
```
---
## Alerting Rules
See [alerting-rules.yaml](../grafana/alerting-rules.yaml) for pre-configured Prometheus alerting rules.
### Recommended Alerts
| Alert Name | Condition | Severity |
|------------|-----------|----------|
| BackupStale | `dbbackup_rpo_seconds > 86400` | Critical |
| BackupFailed | `increase(dbbackup_backup_total{status="failure"}[1h]) > 0` | Warning |
| BackupNotVerified | `dbbackup_backup_verified == 0` | Warning |
| DedupDegraded | `dbbackup_dedup_ratio < 0.1` | Info |
| PITRArchiveLag | `dbbackup_pitr_archive_lag_seconds > 600` | Warning |
| PITRChainBroken | `dbbackup_pitr_chain_valid == 0` | Critical |
| PITRDisabled | `dbbackup_pitr_enabled == 0` (unexpected) | Critical |
| NoIncrementalBackups | `dbbackup_backup_by_type{backup_type="incremental"} == 0` for 7d | Info |
---
## Dashboard
Import the [Grafana dashboard](../grafana/dbbackup-dashboard.json) for visualization of all metrics.
## Exporting Metrics
Metrics are exposed at `/metrics` when running with the `--metrics` flag:
```bash
dbbackup backup cluster --metrics --metrics-port 9090
```
Or configure in `.dbbackup.conf`:
```ini
[metrics]
enabled = true
port = 9090
path = /metrics
```
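On the Prometheus side, a minimal scrape job for this endpoint might look like the following (the target hostname is a placeholder; the port matches the `--metrics-port 9090` example above):

```yaml
scrape_configs:
  - job_name: dbbackup
    metrics_path: /metrics
    scrape_interval: 60s
    static_configs:
      - targets: ["db-host:9090"]   # host running dbbackup with --metrics
```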


@ -1,195 +0,0 @@
# Restore Profiles
## Overview
The `--profile` flag allows you to optimize restore operations based on your server's resources and current workload. This is particularly useful when dealing with "out of shared memory" errors or resource-constrained environments.
## Available Profiles
### Conservative Profile (`--profile=conservative`)
**Best for:** Resource-constrained servers, production systems with other running services, or when dealing with "out of shared memory" errors.
**Settings:**
- Single-threaded restore (`--parallel=1`)
- Single-threaded decompression (`--jobs=1`)
- Memory-conservative mode enabled
- Minimal memory footprint
**When to use:**
- Server RAM usage > 70%
- Other critical services running (web servers, monitoring agents)
- "out of shared memory" errors during restore
- Small VMs or shared hosting environments
- Disk I/O is the bottleneck
**Example:**
```bash
dbbackup restore cluster backup.tar.gz --profile=conservative --confirm
```
### Balanced Profile (`--profile=balanced`) - DEFAULT
**Best for:** Most scenarios, general-purpose servers with adequate resources.
**Settings:**
- Auto-detect parallelism based on CPU/RAM
- Moderate resource usage
- Good balance between speed and stability
**When to use:**
- Default choice for most restores
- Dedicated database server with moderate load
- Unknown or variable server conditions
**Example:**
```bash
dbbackup restore cluster backup.tar.gz --confirm
# or explicitly:
dbbackup restore cluster backup.tar.gz --profile=balanced --confirm
```
### Aggressive Profile (`--profile=aggressive`)
**Best for:** Dedicated database servers with ample resources, maintenance windows, performance-critical restores.
**Settings:**
- Maximum parallelism (auto-detect based on CPU cores)
- Maximum resource utilization
- Fastest restore speed
**When to use:**
- Dedicated database server (no other services)
- Server RAM usage < 50%
- Time-critical restores (RTO minimization)
- Maintenance windows with service downtime
- Testing/development environments
**Example:**
```bash
dbbackup restore cluster backup.tar.gz --profile=aggressive --confirm
```
### Potato Profile (`--profile=potato`) 🥔
**Easter egg:** Same as conservative, for servers running on a potato.
## Profile Comparison
| Setting | Conservative | Balanced | Aggressive |
|---------|-------------|----------|-----------|
| Parallel DBs | 1 (sequential) | Auto (2-4) | Auto (all CPUs) |
| Jobs (decompression) | 1 | Auto (2-4) | Auto (all CPUs) |
| Memory Usage | Minimal | Moderate | Maximum |
| Speed | Slowest | Medium | Fastest |
| Stability | Most stable | Stable | Requires resources |
## Overriding Profile Settings
You can override specific profile settings:
```bash
# Use conservative profile but allow 2 parallel jobs for decompression
dbbackup restore cluster backup.tar.gz \
  --profile=conservative \
  --jobs=2 \
  --confirm
# Use aggressive profile but limit to 2 parallel databases
dbbackup restore cluster backup.tar.gz \
  --profile=aggressive \
  --parallel-dbs=2 \
  --confirm
```
## Real-World Scenarios
### Scenario 1: "Out of Shared Memory" Error
**Problem:** PostgreSQL restore fails with `ERROR: out of shared memory`
**Solution:**
```bash
# Step 1: Use conservative profile
dbbackup restore cluster backup.tar.gz --profile=conservative --confirm
# Step 2: If still failing, temporarily stop monitoring agents
sudo systemctl stop nessus-agent elastic-agent
dbbackup restore cluster backup.tar.gz --profile=conservative --confirm
sudo systemctl start nessus-agent elastic-agent
# Step 3: Ask infrastructure team to increase work_mem (see email_infra_team.txt)
```
### Scenario 2: Fast Disaster Recovery
**Goal:** Restore as quickly as possible during maintenance window
**Solution:**
```bash
# Stop all non-essential services first
sudo systemctl stop nginx php-fpm
dbbackup restore cluster backup.tar.gz --profile=aggressive --confirm
sudo systemctl start nginx php-fpm
```
### Scenario 3: Shared Server with Multiple Services
**Environment:** Web server + database + monitoring all on same VM
**Solution:**
```bash
# Always use conservative to avoid impacting other services
dbbackup restore cluster backup.tar.gz --profile=conservative --confirm
```
### Scenario 4: Unknown Server Conditions
**Situation:** Restoring to a new server, unsure of resources
**Solution:**
```bash
# Step 1: Run diagnostics first
./diagnose_postgres_memory.sh > diagnosis.log
# Step 2: Choose profile based on memory usage:
# - If memory > 80%: use conservative
# - If memory 50-80%: use balanced (default)
# - If memory < 50%: use aggressive
# Step 3: Start with balanced and adjust if needed
dbbackup restore cluster backup.tar.gz --confirm
```
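The threshold logic from Step 2 can be scripted. The `pick_profile` helper below is an illustrative sketch, not part of dbbackup; it maps a used-memory percentage to a profile name using the thresholds above:

```shell
#!/bin/sh
# Illustrative helper (not part of dbbackup): map used-memory % to a profile
# using the Step 2 thresholds: >80% conservative, 50-80% balanced, <50% aggressive.
pick_profile() {
    mem_pct=$1                      # used memory as an integer percent
    if [ "$mem_pct" -gt 80 ]; then
        echo conservative
    elif [ "$mem_pct" -ge 50 ]; then
        echo balanced
    else
        echo aggressive
    fi
}

# Example: derive the percentage from `free` on Linux, then restore with it
# mem_pct=$(free | awk '/^Mem:/ {printf "%d", $3/$2*100}')
# dbbackup restore cluster backup.tar.gz --profile="$(pick_profile "$mem_pct")" --confirm
pick_profile 85   # prints: conservative
```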
## Troubleshooting
### Profile Selection Guide
**Use Conservative when:**
- Memory usage > 70%
- Other services running
- Getting "out of shared memory" errors
- Restore keeps failing
- Small VM (< 4 GB RAM)
- High swap usage
**Use Balanced when:**
- Normal operation
- Moderate server load
- Unsure what to use
- Medium VM (4-16 GB RAM)
**Use Aggressive when:**
- Dedicated database server
- Memory usage < 50%
- No other critical services
- Need fastest possible restore
- Large VM (> 16 GB RAM)
- Maintenance window
## Environment Variables
You can set a default profile:
```bash
export RESOURCE_PROFILE=conservative
dbbackup restore cluster backup.tar.gz --confirm
```
## See Also
- [diagnose_postgres_memory.sh](diagnose_postgres_memory.sh) - Analyze system resources before restore
- [fix_postgres_locks.sh](fix_postgres_locks.sh) - Fix PostgreSQL lock exhaustion
- [email_infra_team.txt](email_infra_team.txt) - Template email for infrastructure team

go.mod

@ -13,25 +13,19 @@ require (
github.com/aws/aws-sdk-go-v2/credentials v1.19.2
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.12
github.com/aws/aws-sdk-go-v2/service/s3 v1.92.1
github.com/cenkalti/backoff/v4 v4.3.0
github.com/charmbracelet/bubbles v0.21.0
github.com/charmbracelet/bubbletea v1.3.10
github.com/charmbracelet/lipgloss v1.1.0
github.com/dustin/go-humanize v1.0.1
github.com/fatih/color v1.18.0
github.com/go-sql-driver/mysql v1.9.3
github.com/hashicorp/go-multierror v1.1.1
github.com/jackc/pgx/v5 v5.7.6
github.com/klauspost/pgzip v1.2.6
github.com/schollz/progressbar/v3 v3.19.0
github.com/mattn/go-sqlite3 v1.14.32
github.com/shirou/gopsutil/v3 v3.24.5
github.com/sirupsen/logrus v1.9.3
github.com/spf13/afero v1.15.0
github.com/spf13/cobra v1.10.1
github.com/spf13/pflag v1.0.9
golang.org/x/crypto v0.43.0
google.golang.org/api v0.256.0
modernc.org/sqlite v1.44.3
)
require (
@ -63,6 +57,7 @@ require (
github.com/aws/aws-sdk-go-v2/service/sts v1.41.2 // indirect
github.com/aws/smithy-go v1.23.2 // indirect
github.com/aymanbagabas/go-osc52/v2 v2.0.1 // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/charmbracelet/colorprofile v0.2.3-0.20250311203215-f60798e515dc // indirect
github.com/charmbracelet/x/ansi v0.10.1 // indirect
@ -72,6 +67,7 @@ require (
github.com/envoyproxy/go-control-plane/envoy v1.32.4 // indirect
github.com/envoyproxy/protoc-gen-validate v1.2.1 // indirect
github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f // indirect
github.com/fatih/color v1.18.0 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/go-jose/go-jose/v4 v4.1.2 // indirect
github.com/go-logr/logr v1.4.3 // indirect
@ -82,11 +78,11 @@ require (
github.com/googleapis/enterprise-certificate-proxy v0.3.7 // indirect
github.com/googleapis/gax-go/v2 v2.15.0 // indirect
github.com/hashicorp/errwrap v1.0.0 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jackc/pgpassfile v1.0.0 // indirect
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
github.com/jackc/puddle/v2 v2.2.2 // indirect
github.com/klauspost/compress v1.18.3 // indirect
github.com/lucasb-eyer/go-colorful v1.2.0 // indirect
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
@ -97,11 +93,11 @@ require (
github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6 // indirect
github.com/muesli/cancelreader v0.2.2 // indirect
github.com/muesli/termenv v0.16.0 // indirect
github.com/ncruces/go-strftime v1.0.0 // indirect
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 // indirect
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c // indirect
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
github.com/rivo/uniseg v0.4.7 // indirect
github.com/schollz/progressbar/v3 v3.19.0 // indirect
github.com/spf13/afero v1.15.0 // indirect
github.com/spiffe/go-spiffe/v2 v2.5.0 // indirect
github.com/tklauser/go-sysconf v0.3.12 // indirect
github.com/tklauser/numcpus v0.6.1 // indirect
@ -117,10 +113,9 @@ require (
go.opentelemetry.io/otel/sdk v1.37.0 // indirect
go.opentelemetry.io/otel/sdk/metric v1.37.0 // indirect
go.opentelemetry.io/otel/trace v1.37.0 // indirect
golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 // indirect
golang.org/x/net v0.46.0 // indirect
golang.org/x/oauth2 v0.33.0 // indirect
golang.org/x/sync v0.19.0 // indirect
golang.org/x/sync v0.18.0 // indirect
golang.org/x/sys v0.38.0 // indirect
golang.org/x/term v0.36.0 // indirect
golang.org/x/text v0.30.0 // indirect
@ -130,7 +125,4 @@ require (
google.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101 // indirect
google.golang.org/grpc v1.76.0 // indirect
google.golang.org/protobuf v1.36.10 // indirect
modernc.org/libc v1.67.6 // indirect
modernc.org/mathutil v1.7.1 // indirect
modernc.org/memory v1.11.0 // indirect
)

go.sum

@ -102,8 +102,6 @@ github.com/charmbracelet/x/cellbuf v0.0.13-0.20250311204145-2c3ea96c31dd h1:vy0G
github.com/charmbracelet/x/cellbuf v0.0.13-0.20250311204145-2c3ea96c31dd/go.mod h1:xe0nKWGd3eJgtqZRaN9RjMtK7xUYchjzPr7q6kcvCCs=
github.com/charmbracelet/x/term v0.2.1 h1:AQeHeLZ1OqSXhrAWpYUtZyX1T3zVxfpZuEQMIQaGIAQ=
github.com/charmbracelet/x/term v0.2.1/go.mod h1:oQ4enTYFV7QN4m0i9mzHrViD7TQKvNEEkHUMCmsxdUg=
github.com/chengxilo/virtualterm v1.0.4 h1:Z6IpERbRVlfB8WkOmtbHiDbBANU7cimRIof7mk9/PwM=
github.com/chengxilo/virtualterm v1.0.4/go.mod h1:DyxxBZz/x1iqJjFxTFcr6/x+jSpqN0iwWCOK1q10rlY=
github.com/cncf/xds/go v0.0.0-20250501225837-2ac532fd4443 h1:aQ3y1lwWyqYPiWZThqv1aFbZMiM9vblcSArJRf2Irls=
github.com/cncf/xds/go v0.0.0-20250501225837-2ac532fd4443/go.mod h1:W+zGtBO5Y1IgJhy4+A9GOqVhqLpfZi+vwmdNXUehLA8=
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
@ -147,8 +145,6 @@ github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/martian/v3 v3.3.3 h1:DIhPTQrbPkgs2yJYdXU/eNACCG5DVQjySNRNlflZ9Fc=
github.com/google/martian/v3 v3.3.3/go.mod h1:iEPrYcgCF7jA9OtScMFQyAlZZ4YXTKEtJ1E6RWzmBA0=
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e h1:ijClszYn+mADRFY17kjQEVQ1XRhq2/JR1M3sGqeJoxs=
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e/go.mod h1:boTsfXsheKC2y+lKOCMpSfarhxDeIzfZG1jqGcPl3cA=
github.com/google/s2a-go v0.1.9 h1:LGD7gtMgezd8a/Xak7mEWL0PjoTQFvpRudN895yqKW0=
github.com/google/s2a-go v0.1.9/go.mod h1:YA0Ei2ZQL3acow2O62kdp9UlnvMmU7kA6Eutn0dXayM=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
@ -161,8 +157,6 @@ github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/U
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
@ -173,10 +167,6 @@ github.com/jackc/pgx/v5 v5.7.6 h1:rWQc5FwZSPX58r1OQmkuaNicxdmExaEz5A2DO2hUuTk=
github.com/jackc/pgx/v5 v5.7.6/go.mod h1:aruU7o91Tc2q2cFp5h4uP3f6ztExVpyVv88Xl/8Vl8M=
github.com/jackc/puddle/v2 v2.2.2 h1:PR8nw+E/1w0GLuRFSmiioY6UooMp6KJv0/61nB7icHo=
github.com/jackc/puddle/v2 v2.2.2/go.mod h1:vriiEXHvEE654aYKXXjOvZM39qJ0q+azkZFrfEOc3H4=
github.com/klauspost/compress v1.18.3 h1:9PJRvfbmTabkOX8moIpXPbMMbYN60bWImDDU7L+/6zw=
github.com/klauspost/compress v1.18.3/go.mod h1:R0h/fSBs8DE4ENlcrlib3PsXS61voFxhIs2DeRhCvJ4=
github.com/klauspost/pgzip v1.2.6 h1:8RXeL5crjEUFnR2/Sn6GJNWtSQ3Dk8pq4CL3jvdDyjU=
github.com/klauspost/pgzip v1.2.6/go.mod h1:Ch1tH69qFZu15pkjo5kYi6mth2Zzwzt50oCQKQE9RUs=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/lucasb-eyer/go-colorful v1.2.0 h1:1nnpGOrhyZZuNyfu1QjKiUICQ74+3FNCN69Aj6K7nkY=
@ -192,6 +182,8 @@ github.com/mattn/go-localereader v0.0.1 h1:ygSAOl7ZXTx4RdPYinUpg6W99U8jWvWi9Ye2J
github.com/mattn/go-localereader v0.0.1/go.mod h1:8fBrzywKY7BI3czFoHkuzRoWE9C+EiG4R1k4Cjx5p88=
github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6TULQc=
github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/mattn/go-sqlite3 v1.14.32 h1:JD12Ag3oLy1zQA+BNn74xRgaBbdhbNIDYvQUEuuErjs=
github.com/mattn/go-sqlite3 v1.14.32/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db h1:62I3jR2EmQ4l5rM/4FEfDWcRD+abF5XlKShorW5LRoQ=
github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db/go.mod h1:l0dey0ia/Uv7NcFFVbCLtqEBQbrT4OCwCSKTEv6enCw=
github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6 h1:ZK8zHtRHOkbHy6Mmr5D264iyp3TiX5OmNcI5cIARiQI=
@ -200,8 +192,6 @@ github.com/muesli/cancelreader v0.2.2 h1:3I4Kt4BQjOR54NavqnDogx/MIoWBFa0StPA8ELU
github.com/muesli/cancelreader v0.2.2/go.mod h1:3XuTXfFS2VjM+HTLZY9Ak0l6eUKfijIfMUZ4EgX0QYo=
github.com/muesli/termenv v0.16.0 h1:S5AlUN9dENB57rsbnkPyfdGuWIlkmzJjbFf0Tf5FWUc=
github.com/muesli/termenv v0.16.0/go.mod h1:ZRfOIKPFDYQoDFF4Olj7/QJbW60Ol/kL1pU3VfY/Cnk=
github.com/ncruces/go-strftime v1.0.0 h1:HMFp8mLCTPp341M/ZnA4qaf7ZlsbTc+miZjCLOFAw7w=
github.com/ncruces/go-strftime v1.0.0/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c h1:+mdjkGKdHQG3305AYmdv1U2eRNDiU2ErMBj1gwrq8eQ=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c/go.mod h1:7rwL4CYBLnjLxUqIJNnCWiEdr3bn6IUYi15bNlnbCCU=
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 h1:GFCKgmp0tecUJ0sJuv4pzYCqS9+RGSn52M3FUwPs+uo=
@ -211,8 +201,6 @@ github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRI
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c h1:ncq/mPwQF4JjgDlrVEn3C11VoGHZN7m8qihwgMEtzYw=
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=
github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
@ -268,16 +256,14 @@ go.opentelemetry.io/otel/trace v1.37.0 h1:HLdcFNbRQBE2imdSEgm/kwqmQj1Or1l/7bW6mx
go.opentelemetry.io/otel/trace v1.37.0/go.mod h1:TlgrlQ+PtQO5XFerSPUYG0JSgGyryXewPGyayAWSBS0=
golang.org/x/crypto v0.43.0 h1:dduJYIi3A3KOfdGOHX8AVZ/jGiyPa3IbBozJ5kNuE04=
golang.org/x/crypto v0.43.0/go.mod h1:BFbav4mRNlXJL4wNeejLpWxB7wMbc79PdRGhWKncxR0=
golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 h1:mgKeJMpvi0yx/sU5GsxQ7p6s2wtOnGAHZWCHUM4KGzY=
golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546/go.mod h1:j/pmGrbnkbPtQfxEe5D0VQhZC6qKbfKifgD0oM7sR70=
golang.org/x/mod v0.29.0 h1:HV8lRxZC4l2cr3Zq1LvtOsi/ThTgWnUk/y64QSs8GwA=
golang.org/x/mod v0.29.0/go.mod h1:NyhrlYXJ2H4eJiRy/WDBO6HMqZQ6q9nk4JzS3NuCK+w=
golang.org/x/exp v0.0.0-20220909182711-5c715a9e8561 h1:MDc5xs78ZrZr3HMQugiXOAkSZtfTpbJLDr/lwfgO53E=
golang.org/x/exp v0.0.0-20220909182711-5c715a9e8561/go.mod h1:cyybsKvd6eL0RnXn6p/Grxp8F5bW7iYuBgsNCOHpMYE=
golang.org/x/net v0.46.0 h1:giFlY12I07fugqwPuWJi68oOnpfqFnJIJzaIIm2JVV4=
golang.org/x/net v0.46.0/go.mod h1:Q9BGdFy1y4nkUwiLvT5qtyhAnEHgnQ/zd8PfU6nc210=
golang.org/x/oauth2 v0.33.0 h1:4Q+qn+E5z8gPRJfmRy7C2gGG3T4jIprK6aSYgTXGRpo=
golang.org/x/oauth2 v0.33.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@ -294,8 +280,6 @@ golang.org/x/text v0.30.0 h1:yznKA/E9zq54KzlzBEAWn1NXSQ8DIp/NYMy88xJjl4k=
golang.org/x/text v0.30.0/go.mod h1:yDdHFIX9t+tORqspjENWgzaCVXgk0yYnYuSZ8UzzBVM=
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
golang.org/x/tools v0.38.0 h1:Hx2Xv8hISq8Lm16jvBZ2VQf+RLmbd7wVUsALibYI/IQ=
golang.org/x/tools v0.38.0/go.mod h1:yEsQ/d/YK8cjh0L6rZlY8tgtlKiBNTL14pGDJPJpYQs=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk=
gonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E=
@ -315,31 +299,3 @@ gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
modernc.org/cc/v4 v4.27.1 h1:9W30zRlYrefrDV2JE2O8VDtJ1yPGownxciz5rrbQZis=
modernc.org/cc/v4 v4.27.1/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0=
modernc.org/ccgo/v4 v4.30.1 h1:4r4U1J6Fhj98NKfSjnPUN7Ze2c6MnAdL0hWw6+LrJpc=
modernc.org/ccgo/v4 v4.30.1/go.mod h1:bIOeI1JL54Utlxn+LwrFyjCx2n2RDiYEaJVSrgdrRfM=
modernc.org/fileutil v1.3.40 h1:ZGMswMNc9JOCrcrakF1HrvmergNLAmxOPjizirpfqBA=
modernc.org/fileutil v1.3.40/go.mod h1:HxmghZSZVAz/LXcMNwZPA/DRrQZEVP9VX0V4LQGQFOc=
modernc.org/gc/v2 v2.6.5 h1:nyqdV8q46KvTpZlsw66kWqwXRHdjIlJOhG6kxiV/9xI=
modernc.org/gc/v2 v2.6.5/go.mod h1:YgIahr1ypgfe7chRuJi2gD7DBQiKSLMPgBQe9oIiito=
modernc.org/gc/v3 v3.1.1 h1:k8T3gkXWY9sEiytKhcgyiZ2L0DTyCQ/nvX+LoCljoRE=
modernc.org/gc/v3 v3.1.1/go.mod h1:HFK/6AGESC7Ex+EZJhJ2Gni6cTaYpSMmU/cT9RmlfYY=
modernc.org/goabi0 v0.2.0 h1:HvEowk7LxcPd0eq6mVOAEMai46V+i7Jrj13t4AzuNks=
modernc.org/goabi0 v0.2.0/go.mod h1:CEFRnnJhKvWT1c1JTI3Avm+tgOWbkOu5oPA8eH8LnMI=
modernc.org/libc v1.67.6 h1:eVOQvpModVLKOdT+LvBPjdQqfrZq+pC39BygcT+E7OI=
modernc.org/libc v1.67.6/go.mod h1:JAhxUVlolfYDErnwiqaLvUqc8nfb2r6S6slAgZOnaiE=
modernc.org/mathutil v1.7.1 h1:GCZVGXdaN8gTqB1Mf/usp1Y/hSqgI2vAGGP4jZMCxOU=
modernc.org/mathutil v1.7.1/go.mod h1:4p5IwJITfppl0G4sUEDtCr4DthTaT47/N3aT6MhfgJg=
modernc.org/memory v1.11.0 h1:o4QC8aMQzmcwCK3t3Ux/ZHmwFPzE6hf2Y5LbkRs+hbI=
modernc.org/memory v1.11.0/go.mod h1:/JP4VbVC+K5sU2wZi9bHoq2MAkCnrt2r98UGeSK7Mjw=
modernc.org/opt v0.1.4 h1:2kNGMRiUjrp4LcaPuLY2PzUfqM/w9N23quVwhKt5Qm8=
modernc.org/opt v0.1.4/go.mod h1:03fq9lsNfvkYSfxrfUhZCWPk1lm4cq4N+Bh//bEtgns=
modernc.org/sortutil v1.2.1 h1:+xyoGf15mM3NMlPDnFqrteY07klSFxLElE2PVuWIJ7w=
modernc.org/sortutil v1.2.1/go.mod h1:7ZI3a3REbai7gzCLcotuw9AC4VZVpYMjDzETGsSMqJE=
modernc.org/sqlite v1.44.3 h1:+39JvV/HWMcYslAwRxHb8067w+2zowvFOUrOWIy9PjY=
modernc.org/sqlite v1.44.3/go.mod h1:CzbrU2lSB1DKUusvwGz7rqEKIq+NUd8GWuBBZDs9/nA=
modernc.org/strutil v1.2.1 h1:UneZBkQA+DX2Rp35KcM69cSsNES9ly8mQWD71HKlOA0=
modernc.org/strutil v1.2.1/go.mod h1:EHkiggD70koQxjVdSBM3JKM7k6L0FbGE5eymy9i3B9A=
modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y=
modernc.org/token v1.1.0/go.mod h1:UGzOrNV1mAFSEB63lOFHIpNRUVMvYTc6yu1SMY/XTDM=


@ -1,220 +0,0 @@
# DBBackup Prometheus Alerting Rules
# Deploy these to your Prometheus server or use Grafana Alerting
#
# Usage with Prometheus:
# Add to prometheus.yml:
# rule_files:
# - /path/to/alerting-rules.yaml
#
# Usage with Grafana Alerting:
# Import these as Grafana alert rules via the UI or provisioning
groups:
  - name: dbbackup_alerts
    interval: 1m
    rules:
      # Critical: No backup in 24 hours
      - alert: DBBackupRPOCritical
        expr: dbbackup_rpo_seconds > 86400
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "No backup for {{ $labels.database }} in 24+ hours"
          description: |
            Database {{ $labels.database }} on {{ $labels.server }} has not been
            backed up in {{ $value | humanizeDuration }}. This exceeds the 24-hour
            RPO threshold. Immediate investigation required.
          runbook_url: "https://github.com/your-org/dbbackup/wiki/Runbooks#rpo-critical"
      # Warning: No backup in 12 hours
      - alert: DBBackupRPOWarning
        expr: dbbackup_rpo_seconds > 43200 and dbbackup_rpo_seconds <= 86400
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "No backup for {{ $labels.database }} in 12+ hours"
          description: |
            Database {{ $labels.database }} on {{ $labels.server }} has not been
            backed up in {{ $value | humanizeDuration }}. Check backup schedule.
          runbook_url: "https://github.com/your-org/dbbackup/wiki/Runbooks#rpo-warning"
      # Critical: Backup failures detected
      - alert: DBBackupFailure
        expr: increase(dbbackup_backup_total{status="failure"}[1h]) > 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Backup failure detected for {{ $labels.database }}"
          description: |
            One or more backup attempts failed for {{ $labels.database }} on
            {{ $labels.server }} in the last hour. Check logs for details.
          runbook_url: "https://github.com/your-org/dbbackup/wiki/Runbooks#backup-failure"
      # Warning: Backup not verified
      - alert: DBBackupNotVerified
        expr: dbbackup_backup_verified == 0
        for: 24h
        labels:
          severity: warning
        annotations:
          summary: "Backup for {{ $labels.database }} not verified"
          description: |
            The latest backup for {{ $labels.database }} on {{ $labels.server }}
            has not been verified. Consider running verification to ensure
            backup integrity.
          runbook_url: "https://github.com/your-org/dbbackup/wiki/Runbooks#verification"
      # Warning: Dedup ratio dropping
      - alert: DBBackupDedupRatioLow
        expr: dbbackup_dedup_ratio < 0.1
        for: 1h
        labels:
          severity: warning
        annotations:
          summary: "Low deduplication ratio on {{ $labels.server }}"
          description: |
            Deduplication ratio on {{ $labels.server }} is {{ $value | printf "%.1f%%" }}.
            This may indicate changes in data patterns or dedup configuration issues.
          runbook_url: "https://github.com/your-org/dbbackup/wiki/Runbooks#dedup-low"
      # Warning: Dedup disk usage growing rapidly
      - alert: DBBackupDedupDiskGrowth
        expr: |
          predict_linear(dbbackup_dedup_disk_usage_bytes[7d], 30*24*3600) >
          (dbbackup_dedup_disk_usage_bytes * 2)
        for: 1h
        labels:
          severity: warning
        annotations:
          summary: "Rapid dedup storage growth on {{ $labels.server }}"
          description: |
            Dedup storage on {{ $labels.server }} is growing rapidly.
            At current rate, usage will double in 30 days.
            Current usage: {{ $value | humanize1024 }}B
          runbook_url: "https://github.com/your-org/dbbackup/wiki/Runbooks#storage-growth"
      # PITR: Archive lag high
      - alert: DBBackupPITRArchiveLag
        expr: dbbackup_pitr_archive_lag_seconds > 600
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "PITR archive lag high for {{ $labels.database }}"
          description: |
            WAL/binlog archiving for {{ $labels.database }} on {{ $labels.server }}
            is {{ $value | humanizeDuration }} behind. This reduces the PITR
            recovery point. Check archive process and disk space.
          runbook_url: "https://github.com/your-org/dbbackup/wiki/Runbooks#pitr-archive-lag"
      # PITR: Archive lag critical
      - alert: DBBackupPITRArchiveLagCritical
        expr: dbbackup_pitr_archive_lag_seconds > 1800
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "PITR archive severely behind for {{ $labels.database }}"
          description: |
            WAL/binlog archiving for {{ $labels.database }} is {{ $value | humanizeDuration }}
            behind. Point-in-time recovery capability is at risk. Immediate action required.
          runbook_url: "https://github.com/your-org/dbbackup/wiki/Runbooks#pitr-archive-critical"
      # PITR: Chain broken (gaps detected)
      - alert: DBBackupPITRChainBroken
        expr: dbbackup_pitr_chain_valid == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "PITR chain broken for {{ $labels.database }}"
          description: |
            The WAL/binlog chain for {{ $labels.database }} on {{ $labels.server }}
            has gaps. Point-in-time recovery to arbitrary points is NOT possible.
            A new base backup is required to restore PITR capability.
          runbook_url: "https://github.com/your-org/dbbackup/wiki/Runbooks#pitr-chain-broken"
      # PITR: Gaps in chain
      - alert: DBBackupPITRGapsDetected
        expr: dbbackup_pitr_gap_count > 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "PITR chain has {{ $value }} gaps for {{ $labels.database }}"
          description: |
            {{ $value }} gaps detected in WAL/binlog chain for {{ $labels.database }}.
            Recovery to points within gaps will fail. Consider taking a new base backup.
          runbook_url: "https://github.com/your-org/dbbackup/wiki/Runbooks#pitr-gaps"
      # PITR: Unexpectedly disabled
      - alert: DBBackupPITRDisabled
        expr: |
          dbbackup_pitr_enabled == 0
          and on(database) dbbackup_pitr_archive_count > 0
for: 10m
labels:
severity: critical
annotations:
summary: "PITR unexpectedly disabled for {{ $labels.database }}"
description: |
PITR was previously enabled for {{ $labels.database }} (has archived logs)
but is now disabled. This may indicate a configuration issue or a
database restart without PITR settings.
runbook_url: "https://github.com/your-org/dbbackup/wiki/Runbooks#pitr-disabled"
# Backup type: No full backups recently
- alert: DBBackupNoRecentFullBackup
expr: |
time() - dbbackup_last_success_timestamp{backup_type="full"} > 604800
for: 1h
labels:
severity: warning
annotations:
summary: "No full backup in 7+ days for {{ $labels.database }}"
description: |
Database {{ $labels.database }} has not had a full backup in over 7 days.
Incremental backups depend on a valid full backup base.
runbook_url: "https://github.com/your-org/dbbackup/wiki/Runbooks#no-full-backup"
# Info: Exporter not responding
- alert: DBBackupExporterDown
expr: up{job="dbbackup"} == 0
for: 5m
labels:
severity: warning
annotations:
summary: "DBBackup exporter is down on {{ $labels.instance }}"
description: |
The DBBackup Prometheus exporter on {{ $labels.instance }} is not
responding. Metrics collection is affected.
runbook_url: "https://github.com/your-org/dbbackup/wiki/Runbooks#exporter-down"
# Info: Metrics stale (scrape timestamp old)
- alert: DBBackupMetricsStale
expr: time() - dbbackup_scrape_timestamp > 600
for: 5m
labels:
severity: warning
annotations:
summary: "DBBackup metrics are stale on {{ $labels.server }}"
description: |
Metrics for {{ $labels.server }} haven't been updated in
{{ $value | humanizeDuration }}. The exporter may be having issues.
runbook_url: "https://github.com/your-org/dbbackup/wiki/Runbooks#metrics-stale"
# Critical: No successful backups ever
- alert: DBBackupNeverSucceeded
expr: dbbackup_backup_total{status="success"} == 0
for: 1h
labels:
severity: critical
annotations:
summary: "No successful backups for {{ $labels.database }}"
description: |
Database {{ $labels.database }} on {{ $labels.server }} has never
had a successful backup. This requires immediate attention.
runbook_url: "https://github.com/your-org/dbbackup/wiki/Runbooks#never-succeeded"
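The DBBackupDedupDiskGrowth rule above relies on PromQL's predict_linear, which fits a least-squares line to the range samples and extrapolates 30 days ahead; the alert fires when the projection exceeds twice the current usage. A minimal Go sketch of that extrapolation (illustrative only, not dbbackup code):

```go
package main

import "fmt"

// predictLinear fits a least-squares line to (t, v) samples and
// extrapolates the value at lastSample+horizon, mirroring what
// PromQL's predict_linear does over a range vector.
func predictLinear(ts, vs []float64, horizon float64) float64 {
	n := float64(len(ts))
	var sumT, sumV, sumTT, sumTV float64
	for i := range ts {
		sumT += ts[i]
		sumV += vs[i]
		sumTT += ts[i] * ts[i]
		sumTV += ts[i] * vs[i]
	}
	slope := (n*sumTV - sumT*sumV) / (n*sumTT - sumT*sumT)
	intercept := (sumV - slope*sumT) / n
	last := ts[len(ts)-1]
	return intercept + slope*(last+horizon)
}

func main() {
	// Disk usage growing 1 GiB/day over 7 daily samples (timestamps in seconds).
	ts := []float64{0, 86400, 172800, 259200, 345600, 432000, 518400}
	vs := []float64{100, 101, 102, 103, 104, 105, 106} // GiB
	// Projected usage 30 days after the last sample:
	fmt.Printf("%.1f GiB\n", predictLinear(ts, vs, 30*24*3600))
}
```

With perfectly linear samples the projection is exact; on real data the fit smooths noise, which is one reason the rule also requires `for: 1h` before firing.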


@ -15,33 +15,18 @@
}
]
},
"description": "Comprehensive monitoring dashboard for DBBackup - tracks backup status, RPO, deduplication, and verification across all database servers.",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"graphTooltip": 0,
"id": null,
"links": [],
"liveNow": false,
"panels": [
{
"collapsed": false,
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 0
},
"id": 200,
"panels": [],
"title": "Backup Overview",
"type": "row"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"description": "Shows SUCCESS if RPO is under 7 days, FAILED otherwise. Green = healthy backup schedule.",
"fieldConfig": {
"defaults": {
"color": {
@ -82,9 +67,9 @@
},
"gridPos": {
"h": 4,
"w": 5,
"w": 6,
"x": 0,
"y": 1
"y": 0
},
"id": 1,
"options": {
@ -109,7 +94,7 @@
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_rpo_seconds{server=~\"$server\"} < bool 604800",
"expr": "dbbackup_rpo_seconds{instance=~\"$instance\"} < bool 604800",
"legendFormat": "{{database}}",
"range": true,
"refId": "A"
@ -123,7 +108,6 @@
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"description": "Time elapsed since the last successful backup. Green < 12h, Yellow < 24h, Red > 24h.",
"fieldConfig": {
"defaults": {
"color": {
@ -153,9 +137,9 @@
},
"gridPos": {
"h": 4,
"w": 5,
"x": 5,
"y": 1
"w": 6,
"x": 6,
"y": 0
},
"id": 2,
"options": {
@ -180,7 +164,7 @@
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_rpo_seconds{server=~\"$server\"}",
"expr": "dbbackup_rpo_seconds{instance=~\"$instance\"}",
"legendFormat": "{{database}}",
"range": true,
"refId": "A"
@ -194,89 +178,6 @@
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"description": "Whether the most recent backup was verified successfully. 1 = verified and valid.",
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [
{
"options": {
"0": {
"color": "orange",
"index": 1,
"text": "NOT VERIFIED"
},
"1": {
"color": "green",
"index": 0,
"text": "VERIFIED"
}
},
"type": "value"
}
],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "orange",
"value": null
},
{
"color": "green",
"value": 1
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 4,
"w": 5,
"x": 10,
"y": 1
},
"id": 9,
"options": {
"colorMode": "background",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"textMode": "auto"
},
"pluginVersion": "10.2.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_backup_verified{server=~\"$server\"}",
"legendFormat": "{{database}}",
"range": true,
"refId": "A"
}
],
"title": "Verification Status",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"description": "Total count of successful backup completions.",
"fieldConfig": {
"defaults": {
"color": {
@ -297,9 +198,9 @@
},
"gridPos": {
"h": 4,
"w": 4,
"x": 15,
"y": 1
"w": 6,
"x": 12,
"y": 0
},
"id": 3,
"options": {
@ -324,7 +225,7 @@
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_backup_total{server=~\"$server\", status=\"success\"}",
"expr": "dbbackup_backup_total{instance=~\"$instance\", status=\"success\"}",
"legendFormat": "{{database}}",
"range": true,
"refId": "A"
@ -338,7 +239,6 @@
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"description": "Total count of failed backup attempts. Any value > 0 warrants investigation.",
"fieldConfig": {
"defaults": {
"color": {
@ -363,9 +263,9 @@
},
"gridPos": {
"h": 4,
"w": 5,
"x": 19,
"y": 1
"w": 6,
"x": 18,
"y": 0
},
"id": 4,
"options": {
@ -390,7 +290,7 @@
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_backup_total{server=~\"$server\", status=\"failure\"}",
"expr": "dbbackup_backup_total{instance=~\"$instance\", status=\"failure\"}",
"legendFormat": "{{database}}",
"range": true,
"refId": "A"
@ -404,7 +304,6 @@
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"description": "Recovery Point Objective over time. Shows how long since the last successful backup. Red line at 24h threshold.",
"fieldConfig": {
"defaults": {
"color": {
@ -463,7 +362,7 @@
"h": 8,
"w": 12,
"x": 0,
"y": 5
"y": 4
},
"id": 5,
"options": {
@ -485,8 +384,8 @@
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_rpo_seconds{server=~\"$server\"}",
"legendFormat": "{{server}} - {{database}}",
"expr": "dbbackup_rpo_seconds{instance=~\"$instance\"}",
"legendFormat": "{{instance}} - {{database}}",
"range": true,
"refId": "A"
}
@ -499,7 +398,6 @@
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"description": "Size of each backup over time. Useful for capacity planning and detecting unexpected growth.",
"fieldConfig": {
"defaults": {
"color": {
@ -554,7 +452,7 @@
"h": 8,
"w": 12,
"x": 12,
"y": 5
"y": 4
},
"id": 6,
"options": {
@ -576,8 +474,8 @@
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_last_backup_size_bytes{server=~\"$server\"}",
"legendFormat": "{{server}} - {{database}}",
"expr": "dbbackup_last_backup_size_bytes{instance=~\"$instance\"}",
"legendFormat": "{{instance}} - {{database}}",
"range": true,
"refId": "A"
}
@ -590,7 +488,6 @@
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"description": "How long each backup takes. Monitor for trends that may indicate database growth or performance issues.",
"fieldConfig": {
"defaults": {
"color": {
@ -645,7 +542,7 @@
"h": 8,
"w": 12,
"x": 0,
"y": 13
"y": 12
},
"id": 7,
"options": {
@ -667,8 +564,8 @@
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_last_backup_duration_seconds{server=~\"$server\"}",
"legendFormat": "{{server}} - {{database}}",
"expr": "dbbackup_last_backup_duration_seconds{instance=~\"$instance\"}",
"legendFormat": "{{instance}} - {{database}}",
"range": true,
"refId": "A"
}
@ -681,7 +578,6 @@
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"description": "Summary table showing current status of all databases with color-coded RPO and backup sizes.",
"fieldConfig": {
"defaults": {
"color": {
@ -798,7 +694,7 @@
"h": 8,
"w": 12,
"x": 12,
"y": 13
"y": 12
},
"id": 8,
"options": {
@ -821,7 +717,7 @@
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_rpo_seconds{server=~\"$server\"}",
"expr": "dbbackup_rpo_seconds{instance=~\"$instance\"}",
"format": "table",
"hide": false,
"instant": true,
@ -835,7 +731,7 @@
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_last_backup_size_bytes{server=~\"$server\"}",
"expr": "dbbackup_last_backup_size_bytes{instance=~\"$instance\"}",
"format": "table",
"hide": false,
"instant": true,
@ -896,7 +792,7 @@
"h": 1,
"w": 24,
"x": 0,
"y": 21
"y": 30
},
"id": 100,
"panels": [],
@ -908,7 +804,6 @@
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"description": "Overall deduplication efficiency (0-1). Higher values mean more duplicate data eliminated. 0.5 = 50% space savings.",
"fieldConfig": {
"defaults": {
"color": {
@ -932,7 +827,7 @@
"h": 5,
"w": 6,
"x": 0,
"y": 22
"y": 31
},
"id": 101,
"options": {
@ -955,7 +850,7 @@
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_dedup_ratio{server=~\"$server\"}",
"expr": "dbbackup_dedup_ratio{instance=~\"$instance\"}",
"legendFormat": "__auto",
"range": true,
"refId": "A"
@ -969,7 +864,6 @@
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"description": "Total bytes saved by deduplication across all backups.",
"fieldConfig": {
"defaults": {
"color": {
@ -993,7 +887,7 @@
"h": 5,
"w": 6,
"x": 6,
"y": 22
"y": 31
},
"id": 102,
"options": {
@ -1016,7 +910,7 @@
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_dedup_space_saved_bytes{server=~\"$server\"}",
"expr": "dbbackup_dedup_space_saved_bytes{instance=~\"$instance\"}",
"legendFormat": "__auto",
"range": true,
"refId": "A"
@ -1030,7 +924,6 @@
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"description": "Actual disk usage of the chunk store after deduplication.",
"fieldConfig": {
"defaults": {
"color": {
@ -1054,7 +947,7 @@
"h": 5,
"w": 6,
"x": 12,
"y": 22
"y": 31
},
"id": 103,
"options": {
@ -1077,7 +970,7 @@
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_dedup_disk_usage_bytes{server=~\"$server\"}",
"expr": "dbbackup_dedup_disk_usage_bytes{instance=~\"$instance\"}",
"legendFormat": "__auto",
"range": true,
"refId": "A"
@ -1091,7 +984,6 @@
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"description": "Total number of unique content-addressed chunks in the dedup store.",
"fieldConfig": {
"defaults": {
"color": {
@ -1115,7 +1007,7 @@
"h": 5,
"w": 6,
"x": 18,
"y": 22
"y": 31
},
"id": 104,
"options": {
@ -1138,7 +1030,7 @@
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_dedup_chunks_total{server=~\"$server\"}",
"expr": "dbbackup_dedup_chunks_total{instance=~\"$instance\"}",
"legendFormat": "__auto",
"range": true,
"refId": "A"
@ -1152,190 +1044,6 @@
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"description": "Compression ratio achieved (0-1). Higher = better compression of chunk data.",
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "orange",
"value": null
}
]
},
"unit": "percentunit"
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 4,
"x": 0,
"y": 27
},
"id": 107,
"options": {
"colorMode": "value",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
"textMode": "auto"
},
"pluginVersion": "10.2.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_dedup_compression_ratio{server=~\"$server\"}",
"legendFormat": "__auto",
"range": true,
"refId": "A"
}
],
"title": "Compression Ratio",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"description": "Timestamp of the oldest chunk - useful for monitoring retention policy.",
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "semi-dark-blue",
"value": null
}
]
},
"unit": "dateTimeFromNow"
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 4,
"x": 4,
"y": 27
},
"id": 108,
"options": {
"colorMode": "value",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
"textMode": "auto"
},
"pluginVersion": "10.2.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_dedup_oldest_chunk_timestamp{server=~\"$server\"} * 1000",
"legendFormat": "__auto",
"range": true,
"refId": "A"
}
],
"title": "Oldest Chunk",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"description": "Timestamp of the newest chunk - confirms dedup is working on recent backups.",
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "semi-dark-green",
"value": null
}
]
},
"unit": "dateTimeFromNow"
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 4,
"x": 8,
"y": 27
},
"id": 109,
"options": {
"colorMode": "value",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
"textMode": "auto"
},
"pluginVersion": "10.2.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_dedup_newest_chunk_timestamp{server=~\"$server\"} * 1000",
"legendFormat": "__auto",
"range": true,
"refId": "A"
}
],
"title": "Newest Chunk",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"description": "Per-database deduplication efficiency over time. Compare databases to identify which benefit most from dedup.",
"fieldConfig": {
"defaults": {
"color": {
@ -1391,7 +1099,7 @@
"h": 8,
"w": 12,
"x": 0,
"y": 32
"y": 36
},
"id": 105,
"options": {
@ -1414,7 +1122,7 @@
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_dedup_database_ratio{server=~\"$server\"}",
"expr": "dbbackup_dedup_database_ratio{instance=~\"$instance\"}",
"legendFormat": "{{database}}",
"range": true,
"refId": "A"
@ -1428,7 +1136,6 @@
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"description": "Storage trends: compare space saved by dedup vs actual disk usage over time.",
"fieldConfig": {
"defaults": {
"color": {
@ -1484,7 +1191,7 @@
"h": 8,
"w": 12,
"x": 12,
"y": 32
"y": 36
},
"id": 106,
"options": {
@ -1507,7 +1214,7 @@
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_dedup_space_saved_bytes{server=~\"$server\"}",
"expr": "dbbackup_dedup_space_saved_bytes{instance=~\"$instance\"}",
"legendFormat": "Space Saved",
"range": true,
"refId": "A"
@ -1518,7 +1225,7 @@
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_dedup_disk_usage_bytes{server=~\"$server\"}",
"expr": "dbbackup_dedup_disk_usage_bytes{instance=~\"$instance\"}",
"legendFormat": "Disk Usage",
"range": true,
"refId": "B"
@ -1534,8 +1241,7 @@
"dbbackup",
"backup",
"database",
"dedup",
"monitoring"
"dedup"
],
"templating": {
"list": [
@ -1549,15 +1255,15 @@
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"definition": "label_values(dbbackup_rpo_seconds, server)",
"definition": "label_values(dbbackup_rpo_seconds, instance)",
"hide": 0,
"includeAll": true,
"label": "Server",
"label": "Instance",
"multi": true,
"name": "server",
"name": "instance",
"options": [],
"query": {
"query": "label_values(dbbackup_rpo_seconds, server)",
"query": "label_values(dbbackup_rpo_seconds, instance)",
"refId": "StandardVariableQuery"
},
"refresh": 2,

View File

@ -201,12 +201,12 @@ func buildAuthMismatchMessage(osUser, dbUser string, method AuthMethod) string {
msg.WriteString("\n[WARN] Authentication Mismatch Detected\n")
msg.WriteString(strings.Repeat("=", 60) + "\n\n")
msg.WriteString(" PostgreSQL is using '" + string(method) + "' authentication\n")
msg.WriteString(" OS user '" + osUser + "' cannot authenticate as DB user '" + dbUser + "'\n\n")
msg.WriteString(fmt.Sprintf(" PostgreSQL is using '%s' authentication\n", method))
msg.WriteString(fmt.Sprintf(" OS user '%s' cannot authenticate as DB user '%s'\n\n", osUser, dbUser))
msg.WriteString("[TIP] Solutions (choose one):\n\n")
msg.WriteString(" 1. Run as matching user:\n")
msg.WriteString(fmt.Sprintf(" 1. Run as matching user:\n"))
msg.WriteString(fmt.Sprintf(" sudo -u %s %s\n\n", dbUser, getCommandLine()))
msg.WriteString(" 2. Configure ~/.pgpass file (recommended):\n")
@ -214,11 +214,11 @@ func buildAuthMismatchMessage(osUser, dbUser string, method AuthMethod) string {
msg.WriteString(" chmod 0600 ~/.pgpass\n\n")
msg.WriteString(" 3. Set PGPASSWORD environment variable:\n")
msg.WriteString(" export PGPASSWORD=your_password\n")
msg.WriteString(" " + getCommandLine() + "\n\n")
msg.WriteString(fmt.Sprintf(" export PGPASSWORD=your_password\n"))
msg.WriteString(fmt.Sprintf(" %s\n\n", getCommandLine()))
msg.WriteString(" 4. Provide password via flag:\n")
msg.WriteString(" " + getCommandLine() + " --password your_password\n\n")
msg.WriteString(fmt.Sprintf(" %s --password your_password\n\n", getCommandLine()))
msg.WriteString("[NOTE] Note: For production use, ~/.pgpass or PGPASSWORD are recommended\n")
msg.WriteString(" to avoid exposing passwords in command history.\n\n")


@ -20,7 +20,6 @@ import (
"dbbackup/internal/cloud"
"dbbackup/internal/config"
"dbbackup/internal/database"
"dbbackup/internal/fs"
"dbbackup/internal/logger"
"dbbackup/internal/metadata"
"dbbackup/internal/metrics"
@ -105,6 +104,13 @@ func (e *Engine) SetDatabaseProgressCallback(cb DatabaseProgressCallback) {
e.dbProgressCallback = cb
}
// reportProgress reports progress to the callback if set
func (e *Engine) reportProgress(current, total int64, description string) {
if e.progressCallback != nil {
e.progressCallback(current, total, description)
}
}
// reportDatabaseProgress reports database count progress to the callback if set
func (e *Engine) reportDatabaseProgress(done, total int, dbName string) {
if e.dbProgressCallback != nil {
@ -707,7 +713,6 @@ func (e *Engine) monitorCommandProgress(stderr io.ReadCloser, tracker *progress.
}
// executeMySQLWithProgressAndCompression handles MySQL backup with compression and progress
// Uses in-process pgzip for parallel compression (2-4x faster on multi-core systems)
func (e *Engine) executeMySQLWithProgressAndCompression(ctx context.Context, cmdArgs []string, outputFile string, tracker *progress.OperationTracker) error {
// Create mysqldump command
dumpCmd := exec.CommandContext(ctx, cmdArgs[0], cmdArgs[1:]...)
@ -716,6 +721,9 @@ func (e *Engine) executeMySQLWithProgressAndCompression(ctx context.Context, cmd
dumpCmd.Env = append(dumpCmd.Env, "MYSQL_PWD="+e.cfg.Password)
}
// Create gzip command
gzipCmd := exec.CommandContext(ctx, "gzip", fmt.Sprintf("-%d", e.cfg.CompressionLevel))
// Create output file
outFile, err := os.Create(outputFile)
if err != nil {
@ -723,19 +731,15 @@ func (e *Engine) executeMySQLWithProgressAndCompression(ctx context.Context, cmd
}
defer outFile.Close()
// Create parallel gzip writer using pgzip
gzWriter, err := fs.NewParallelGzipWriter(outFile, e.cfg.CompressionLevel)
if err != nil {
return fmt.Errorf("failed to create gzip writer: %w", err)
}
defer gzWriter.Close()
// Set up pipeline: mysqldump stdout -> pgzip writer -> file
// Set up pipeline: mysqldump | gzip > outputfile
pipe, err := dumpCmd.StdoutPipe()
if err != nil {
return fmt.Errorf("failed to create pipe: %w", err)
}
gzipCmd.Stdin = pipe
gzipCmd.Stdout = outFile
// Get stderr for progress monitoring
stderr, err := dumpCmd.StderrPipe()
if err != nil {
@ -749,17 +753,15 @@ func (e *Engine) executeMySQLWithProgressAndCompression(ctx context.Context, cmd
e.monitorCommandProgress(stderr, tracker)
}()
// Start mysqldump
if err := dumpCmd.Start(); err != nil {
return fmt.Errorf("failed to start mysqldump: %w", err)
// Start both commands
if err := gzipCmd.Start(); err != nil {
return fmt.Errorf("failed to start gzip: %w", err)
}
// Copy mysqldump output through pgzip in a goroutine
copyDone := make(chan error, 1)
go func() {
_, err := io.Copy(gzWriter, pipe)
copyDone <- err
}()
if err := dumpCmd.Start(); err != nil {
gzipCmd.Process.Kill()
return fmt.Errorf("failed to start mysqldump: %w", err)
}
// Wait for mysqldump with context handling
dumpDone := make(chan error, 1)
@ -774,6 +776,7 @@ func (e *Engine) executeMySQLWithProgressAndCompression(ctx context.Context, cmd
case <-ctx.Done():
e.log.Warn("Backup cancelled - killing mysqldump")
dumpCmd.Process.Kill()
gzipCmd.Process.Kill()
<-dumpDone
return ctx.Err()
}
@ -781,14 +784,10 @@ func (e *Engine) executeMySQLWithProgressAndCompression(ctx context.Context, cmd
// Wait for stderr reader
<-stderrDone
// Wait for copy to complete
if copyErr := <-copyDone; copyErr != nil {
return fmt.Errorf("compression failed: %w", copyErr)
}
// Close gzip writer to flush all data
if err := gzWriter.Close(); err != nil {
return fmt.Errorf("failed to close gzip writer: %w", err)
// Close pipe and wait for gzip
pipe.Close()
if err := gzipCmd.Wait(); err != nil {
return fmt.Errorf("gzip failed: %w", err)
}
if dumpErr != nil {
@ -799,7 +798,6 @@ func (e *Engine) executeMySQLWithProgressAndCompression(ctx context.Context, cmd
}
// executeMySQLWithCompression handles MySQL backup with compression
// Uses in-process pgzip for parallel compression (2-4x faster on multi-core systems)
func (e *Engine) executeMySQLWithCompression(ctx context.Context, cmdArgs []string, outputFile string) error {
// Create mysqldump command
dumpCmd := exec.CommandContext(ctx, cmdArgs[0], cmdArgs[1:]...)
@ -808,6 +806,9 @@ func (e *Engine) executeMySQLWithCompression(ctx context.Context, cmdArgs []stri
dumpCmd.Env = append(dumpCmd.Env, "MYSQL_PWD="+e.cfg.Password)
}
// Create gzip command
gzipCmd := exec.CommandContext(ctx, "gzip", fmt.Sprintf("-%d", e.cfg.CompressionLevel))
// Create output file
outFile, err := os.Create(outputFile)
if err != nil {
@ -815,31 +816,25 @@ func (e *Engine) executeMySQLWithCompression(ctx context.Context, cmdArgs []stri
}
defer outFile.Close()
// Create parallel gzip writer using pgzip
gzWriter, err := fs.NewParallelGzipWriter(outFile, e.cfg.CompressionLevel)
if err != nil {
return fmt.Errorf("failed to create gzip writer: %w", err)
}
defer gzWriter.Close()
// Set up pipeline: mysqldump stdout -> pgzip writer -> file
pipe, err := dumpCmd.StdoutPipe()
// Set up pipeline: mysqldump | gzip > outputfile
stdin, err := dumpCmd.StdoutPipe()
if err != nil {
return fmt.Errorf("failed to create pipe: %w", err)
}
gzipCmd.Stdin = stdin
gzipCmd.Stdout = outFile
// Start gzip first
if err := gzipCmd.Start(); err != nil {
return fmt.Errorf("failed to start gzip: %w", err)
}
// Start mysqldump
if err := dumpCmd.Start(); err != nil {
gzipCmd.Process.Kill()
return fmt.Errorf("failed to start mysqldump: %w", err)
}
// Copy mysqldump output through pgzip in a goroutine
copyDone := make(chan error, 1)
go func() {
_, err := io.Copy(gzWriter, pipe)
copyDone <- err
}()
// Wait for mysqldump with context handling
dumpDone := make(chan error, 1)
go func() {
@ -853,18 +848,15 @@ func (e *Engine) executeMySQLWithCompression(ctx context.Context, cmdArgs []stri
case <-ctx.Done():
e.log.Warn("Backup cancelled - killing mysqldump")
dumpCmd.Process.Kill()
gzipCmd.Process.Kill()
<-dumpDone
return ctx.Err()
}
// Wait for copy to complete
if copyErr := <-copyDone; copyErr != nil {
return fmt.Errorf("compression failed: %w", copyErr)
}
// Close gzip writer to flush all data
if err := gzWriter.Close(); err != nil {
return fmt.Errorf("failed to close gzip writer: %w", err)
// Close pipe and wait for gzip
stdin.Close()
if err := gzipCmd.Wait(); err != nil {
return fmt.Errorf("gzip failed: %w", err)
}
if dumpErr != nil {
@ -960,74 +952,125 @@ func (e *Engine) backupGlobals(ctx context.Context, tempDir string) error {
cmd.Env = append(cmd.Env, "PGPASSWORD="+e.cfg.Password)
}
// Use Start/Wait pattern for proper Ctrl+C handling
stdout, err := cmd.StdoutPipe()
output, err := cmd.Output()
if err != nil {
return fmt.Errorf("failed to create stdout pipe: %w", err)
}
if err := cmd.Start(); err != nil {
return fmt.Errorf("failed to start pg_dumpall: %w", err)
}
// Read output in goroutine
var output []byte
var readErr error
readDone := make(chan struct{})
go func() {
defer close(readDone)
output, readErr = io.ReadAll(stdout)
}()
// Wait for command with proper context handling
cmdDone := make(chan error, 1)
go func() {
cmdDone <- cmd.Wait()
}()
var cmdErr error
select {
case cmdErr = <-cmdDone:
// Command completed normally
case <-ctx.Done():
e.log.Warn("Globals backup cancelled - killing pg_dumpall")
cmd.Process.Kill()
<-cmdDone
return ctx.Err()
}
<-readDone
if cmdErr != nil {
return fmt.Errorf("pg_dumpall failed: %w", cmdErr)
}
if readErr != nil {
return fmt.Errorf("failed to read pg_dumpall output: %w", readErr)
return fmt.Errorf("pg_dumpall failed: %w", err)
}
return os.WriteFile(globalsFile, output, 0644)
}
// createArchive creates a compressed tar archive using parallel gzip compression
// Uses in-process pgzip for 2-4x faster compression on multi-core systems
// createArchive creates a compressed tar archive
func (e *Engine) createArchive(ctx context.Context, sourceDir, outputFile string) error {
e.log.Debug("Creating archive with parallel compression",
"source", sourceDir,
"output", outputFile,
"compression", e.cfg.CompressionLevel)
// Use pigz for faster parallel compression if available, otherwise use standard gzip
compressCmd := "tar"
compressArgs := []string{"-czf", outputFile, "-C", sourceDir, "."}
// Use in-process parallel compression with pgzip
err := fs.CreateTarGzParallel(ctx, sourceDir, outputFile, e.cfg.CompressionLevel, func(progress fs.CreateProgress) {
// Optional: log progress for large archives
if progress.FilesCount%100 == 0 && progress.FilesCount > 0 {
e.log.Debug("Archive progress", "files", progress.FilesCount, "bytes", progress.BytesWritten)
// Check if pigz is available for faster parallel compression
if _, err := exec.LookPath("pigz"); err == nil {
// Use pigz with number of cores for parallel compression
compressArgs = []string{"-cf", "-", "-C", sourceDir, "."}
cmd := exec.CommandContext(ctx, "tar", compressArgs...)
// Create output file
outFile, err := os.Create(outputFile)
if err != nil {
// Fallback to regular tar
goto regularTar
}
})
defer outFile.Close()
if err != nil {
return fmt.Errorf("parallel archive creation failed: %w", err)
// Pipe to pigz for parallel compression
pigzCmd := exec.CommandContext(ctx, "pigz", "-p", strconv.Itoa(e.cfg.Jobs))
tarOut, err := cmd.StdoutPipe()
if err != nil {
outFile.Close()
// Fallback to regular tar
goto regularTar
}
pigzCmd.Stdin = tarOut
pigzCmd.Stdout = outFile
// Start both commands
if err := pigzCmd.Start(); err != nil {
outFile.Close()
goto regularTar
}
if err := cmd.Start(); err != nil {
pigzCmd.Process.Kill()
outFile.Close()
goto regularTar
}
// Wait for tar with proper context handling
tarDone := make(chan error, 1)
go func() {
tarDone <- cmd.Wait()
}()
var tarErr error
select {
case tarErr = <-tarDone:
// tar completed
case <-ctx.Done():
e.log.Warn("Archive creation cancelled - killing processes")
cmd.Process.Kill()
pigzCmd.Process.Kill()
<-tarDone
return ctx.Err()
}
if tarErr != nil {
pigzCmd.Process.Kill()
return fmt.Errorf("tar failed: %w", tarErr)
}
// Wait for pigz with proper context handling
pigzDone := make(chan error, 1)
go func() {
pigzDone <- pigzCmd.Wait()
}()
var pigzErr error
select {
case pigzErr = <-pigzDone:
case <-ctx.Done():
pigzCmd.Process.Kill()
<-pigzDone
return ctx.Err()
}
if pigzErr != nil {
return fmt.Errorf("pigz compression failed: %w", pigzErr)
}
return nil
}
regularTar:
// Standard tar with gzip (fallback)
cmd := exec.CommandContext(ctx, compressCmd, compressArgs...)
// Stream stderr to avoid memory issues
// Use io.Copy to ensure goroutine completes when pipe closes
stderr, err := cmd.StderrPipe()
if err == nil {
go func() {
scanner := bufio.NewScanner(stderr)
for scanner.Scan() {
line := scanner.Text()
if line != "" {
e.log.Debug("Archive creation", "output", line)
}
}
// Scanner will exit when stderr pipe closes after cmd.Wait()
}()
}
if err := cmd.Run(); err != nil {
return fmt.Errorf("tar failed: %w", err)
}
// cmd.Run() calls Wait() which closes stderr pipe, terminating the goroutine
return nil
}
@ -1329,27 +1372,6 @@ func (e *Engine) executeCommand(ctx context.Context, cmdArgs []string, outputFil
// NO GO BUFFERING - pg_dump writes directly to disk
cmd := exec.CommandContext(ctx, cmdArgs[0], cmdArgs[1:]...)
// Start heartbeat ticker for backup progress
backupStart := time.Now()
heartbeatCtx, cancelHeartbeat := context.WithCancel(ctx)
heartbeatTicker := time.NewTicker(5 * time.Second)
defer heartbeatTicker.Stop()
defer cancelHeartbeat()
go func() {
for {
select {
case <-heartbeatTicker.C:
elapsed := time.Since(backupStart)
if e.progress != nil {
e.progress.Update(fmt.Sprintf("Backing up database... (elapsed: %s)", formatDuration(elapsed)))
}
case <-heartbeatCtx.Done():
return
}
}
}()
// Set environment variables for database tools
cmd.Env = os.Environ()
if e.cfg.Password != "" {
@ -1576,22 +1598,3 @@ func formatBytes(bytes int64) string {
}
return fmt.Sprintf("%.1f %cB", float64(bytes)/float64(div), "KMGTPE"[exp])
}
// formatDuration formats a duration to human readable format (e.g., "3m 45s", "1h 23m", "45s")
func formatDuration(d time.Duration) string {
if d < time.Second {
return "0s"
}
hours := int(d.Hours())
minutes := int(d.Minutes()) % 60
seconds := int(d.Seconds()) % 60
if hours > 0 {
return fmt.Sprintf("%dh %dm", hours, minutes)
}
if minutes > 0 {
return fmt.Sprintf("%dm %ds", minutes, seconds)
}
return fmt.Sprintf("%ds", seconds)
}

View File

@ -2,13 +2,12 @@ package backup
import (
"archive/tar"
"compress/gzip"
"context"
"fmt"
"io"
"os"
"path/filepath"
"github.com/klauspost/pgzip"
)
// extractTarGz extracts a tar.gz archive to the specified directory
@ -21,8 +20,8 @@ func (e *PostgresIncrementalEngine) extractTarGz(ctx context.Context, archivePat
}
defer archiveFile.Close()
// Create parallel gzip reader for faster decompression
gzReader, err := pgzip.NewReader(archiveFile)
// Create gzip reader
gzReader, err := gzip.NewReader(archiveFile)
if err != nil {
return fmt.Errorf("failed to create gzip reader: %w", err)
}

View File

@ -2,6 +2,7 @@ package backup
import (
"archive/tar"
"compress/gzip"
"context"
"crypto/sha256"
"encoding/hex"
@ -12,8 +13,6 @@ import (
"strings"
"time"
"github.com/klauspost/pgzip"
"dbbackup/internal/logger"
"dbbackup/internal/metadata"
)
@ -368,15 +367,15 @@ func (e *MySQLIncrementalEngine) CalculateFileChecksum(path string) (string, err
// createTarGz creates a tar.gz archive with the specified changed files
func (e *MySQLIncrementalEngine) createTarGz(ctx context.Context, outputFile string, changedFiles []ChangedFile, config *IncrementalBackupConfig) error {
// Create output file
// Import needed for tar/gzip
outFile, err := os.Create(outputFile)
if err != nil {
return fmt.Errorf("failed to create output file: %w", err)
}
defer outFile.Close()
// Create parallel gzip writer for faster compression
gzWriter, err := pgzip.NewWriterLevel(outFile, config.CompressionLevel)
// Create gzip writer
gzWriter, err := gzip.NewWriterLevel(outFile, config.CompressionLevel)
if err != nil {
return fmt.Errorf("failed to create gzip writer: %w", err)
}
@ -461,8 +460,8 @@ func (e *MySQLIncrementalEngine) extractTarGz(ctx context.Context, archivePath,
}
defer archiveFile.Close()
// Create parallel gzip reader for faster decompression
gzReader, err := pgzip.NewReader(archiveFile)
// Create gzip reader
gzReader, err := gzip.NewReader(archiveFile)
if err != nil {
return fmt.Errorf("failed to create gzip reader: %w", err)
}
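These hunks swap `github.com/klauspost/pgzip` for the stdlib `compress/gzip`; the swap is a one-line change because `pgzip.NewWriterLevel`/`pgzip.NewReader` share the stdlib signatures. A minimal in-memory sketch of the `createTarGz`/`extractTarGz` round-trip shape (file names and payload here are illustrative, not from the repo):

```go
package main

import (
	"archive/tar"
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
)

// roundTrip mirrors the shape used in these hunks: payload -> tar writer ->
// gzip writer on the way in, gzip reader -> tar reader on the way out.
func roundTrip(name string, payload []byte, level int) (string, []byte, error) {
	var buf bytes.Buffer
	gzw, err := gzip.NewWriterLevel(&buf, level)
	if err != nil {
		return "", nil, err
	}
	tw := tar.NewWriter(gzw)
	hdr := &tar.Header{Name: name, Mode: 0o600, Size: int64(len(payload))}
	if err := tw.WriteHeader(hdr); err != nil {
		return "", nil, err
	}
	if _, err := tw.Write(payload); err != nil {
		return "", nil, err
	}
	tw.Close() // order matters: close tar before the gzip stream beneath it
	gzw.Close()

	gzr, err := gzip.NewReader(&buf)
	if err != nil {
		return "", nil, err
	}
	tr := tar.NewReader(gzr)
	out, err := tr.Next()
	if err != nil {
		return "", nil, err
	}
	data, err := io.ReadAll(tr)
	return out.Name, data, err
}

func main() {
	name, data, err := roundTrip("ibdata1", []byte("incremental page data"), gzip.BestSpeed)
	fmt.Println(name, len(data), err)
}
```

The trade-off in the commit is throughput, not correctness: pgzip compresses blocks in parallel, while the stdlib writer is single-threaded with no extra dependency.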

View File

@ -2,12 +2,11 @@ package backup
import (
"archive/tar"
"compress/gzip"
"context"
"fmt"
"io"
"os"
"github.com/klauspost/pgzip"
)
// createTarGz creates a tar.gz archive with the specified changed files
@ -19,8 +18,8 @@ func (e *PostgresIncrementalEngine) createTarGz(ctx context.Context, outputFile
}
defer outFile.Close()
// Create parallel gzip writer for faster compression
gzWriter, err := pgzip.NewWriterLevel(outFile, config.CompressionLevel)
// Create gzip writer
gzWriter, err := gzip.NewWriterLevel(outFile, config.CompressionLevel)
if err != nil {
return fmt.Errorf("failed to create gzip writer: %w", err)
}

View File

@ -150,14 +150,12 @@ type Catalog interface {
// SyncResult contains results from a catalog sync operation
type SyncResult struct {
Added int `json:"added"`
Updated int `json:"updated"`
Removed int `json:"removed"`
Skipped int `json:"skipped"` // Files without metadata (legacy backups)
Errors int `json:"errors"`
Duration float64 `json:"duration_seconds"`
Details []string `json:"details,omitempty"`
LegacyWarning string `json:"legacy_warning,omitempty"` // Warning about legacy files
Added int `json:"added"`
Updated int `json:"updated"`
Removed int `json:"removed"`
Errors int `json:"errors"`
Duration float64 `json:"duration_seconds"`
Details []string `json:"details,omitempty"`
}
// FormatSize formats bytes as human-readable string

View File

@ -11,7 +11,7 @@ import (
"strings"
"time"
_ "modernc.org/sqlite" // Pure Go SQLite driver (no CGO required)
_ "github.com/mattn/go-sqlite3"
)
// SQLiteCatalog implements Catalog interface with SQLite storage
@ -28,7 +28,7 @@ func NewSQLiteCatalog(dbPath string) (*SQLiteCatalog, error) {
return nil, fmt.Errorf("failed to create catalog directory: %w", err)
}
db, err := sql.Open("sqlite", dbPath+"?_journal_mode=WAL&_foreign_keys=ON")
db, err := sql.Open("sqlite3", dbPath+"?_journal_mode=WAL&_foreign_keys=ON")
if err != nil {
return nil, fmt.Errorf("failed to open catalog database: %w", err)
}

View File

@ -30,33 +30,6 @@ func (c *SQLiteCatalog) SyncFromDirectory(ctx context.Context, dir string) (*Syn
subMatches, _ := filepath.Glob(subPattern)
matches = append(matches, subMatches...)
// Count legacy backups (files without metadata)
legacySkipped := 0
legacyPatterns := []string{
filepath.Join(dir, "*.sql"),
filepath.Join(dir, "*.sql.gz"),
filepath.Join(dir, "*.sql.lz4"),
filepath.Join(dir, "*.sql.zst"),
filepath.Join(dir, "*.dump"),
filepath.Join(dir, "*.dump.gz"),
filepath.Join(dir, "*", "*.sql"),
filepath.Join(dir, "*", "*.sql.gz"),
}
metaSet := make(map[string]bool)
for _, m := range matches {
// Store the backup file path (without .meta.json)
metaSet[strings.TrimSuffix(m, ".meta.json")] = true
}
for _, pat := range legacyPatterns {
legacyMatches, _ := filepath.Glob(pat)
for _, lm := range legacyMatches {
// Skip if this file has metadata
if !metaSet[lm] {
legacySkipped++
}
}
}
for _, metaPath := range matches {
// Derive backup file path from metadata path
backupPath := strings.TrimSuffix(metaPath, ".meta.json")
@ -124,17 +97,6 @@ func (c *SQLiteCatalog) SyncFromDirectory(ctx context.Context, dir string) (*Syn
}
}
// Set legacy backup warning if applicable
result.Skipped = legacySkipped
if legacySkipped > 0 {
result.LegacyWarning = fmt.Sprintf(
"%d backup file(s) found without .meta.json metadata. "+
"These are likely legacy backups created by raw mysqldump/pg_dump. "+
"Only backups created by 'dbbackup backup' (with metadata) can be imported. "+
"To track legacy backups, re-create them using 'dbbackup backup' command.",
legacySkipped)
}
result.Duration = time.Since(start).Seconds()
return result, nil
}

View File

@ -1,181 +0,0 @@
package checks
import (
"context"
"fmt"
"os"
"os/exec"
"regexp"
"strconv"
"strings"
"time"
)
// lockRecommendation represents a normalized recommendation for locks
type lockRecommendation int
const (
recIncrease lockRecommendation = iota
recSingleThreadedOrIncrease
recSingleThreaded
)
// determineLockRecommendation contains the pure logic (easy to unit-test).
func determineLockRecommendation(locks, conns, prepared int64) (status CheckStatus, rec lockRecommendation) {
// follow same thresholds as legacy script
switch {
case locks < 2048:
return StatusFailed, recIncrease
case locks < 8192:
return StatusWarning, recIncrease
case locks < 65536:
return StatusWarning, recSingleThreadedOrIncrease
default:
return StatusPassed, recSingleThreaded
}
}
var nonDigits = regexp.MustCompile(`[^0-9]+`)
// parseNumeric strips non-digits and parses up to 10 characters (like the shell helper)
func parseNumeric(s string) (int64, error) {
if s == "" {
return 0, fmt.Errorf("empty string")
}
s = nonDigits.ReplaceAllString(s, "")
if len(s) > 10 {
s = s[:10]
}
v, err := strconv.ParseInt(s, 10, 64)
if err != nil {
return 0, fmt.Errorf("parse error: %w", err)
}
return v, nil
}
// execPsql runs psql with the supplied arguments and returns stdout (trimmed).
// It attempts to avoid leaking passwords in error messages.
func execPsql(ctx context.Context, args []string, env []string, useSudo bool) (string, error) {
var cmd *exec.Cmd
if useSudo {
// sudo -u postgres psql --no-psqlrc -t -A -c "..."
all := append([]string{"-u", "postgres", "--"}, "psql")
all = append(all, args...)
cmd = exec.CommandContext(ctx, "sudo", all...)
} else {
cmd = exec.CommandContext(ctx, "psql", args...)
}
cmd.Env = append(os.Environ(), env...)
out, err := cmd.Output()
if err != nil {
// prefer a concise error
return "", fmt.Errorf("psql failed: %w", err)
}
return strings.TrimSpace(string(out)), nil
}
// checkPostgresLocks probes PostgreSQL (via psql) and returns a PreflightCheck.
// It intentionally does not require a live internal/database.Database; it uses
// the configured connection parameters or falls back to local sudo when possible.
func (p *PreflightChecker) checkPostgresLocks(ctx context.Context) PreflightCheck {
check := PreflightCheck{Name: "PostgreSQL lock configuration"}
if !p.cfg.IsPostgreSQL() {
check.Status = StatusSkipped
check.Message = "Skipped (not a PostgreSQL configuration)"
return check
}
// Build common psql args
psqlArgs := []string{"--no-psqlrc", "-t", "-A", "-c"}
queryLocks := "SHOW max_locks_per_transaction;"
queryConns := "SHOW max_connections;"
queryPrepared := "SHOW max_prepared_transactions;"
// Build connection flags
if p.cfg.Host != "" {
psqlArgs = append(psqlArgs, "-h", p.cfg.Host)
}
psqlArgs = append(psqlArgs, "-p", fmt.Sprint(p.cfg.Port))
if p.cfg.User != "" {
psqlArgs = append(psqlArgs, "-U", p.cfg.User)
}
// Use database if provided (helps some setups)
if p.cfg.Database != "" {
psqlArgs = append(psqlArgs, "-d", p.cfg.Database)
}
// Env: prefer PGPASSWORD if configured
env := []string{}
if p.cfg.Password != "" {
env = append(env, "PGPASSWORD="+p.cfg.Password)
}
ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
// helper to run a single SHOW query and parse numeric result
runShow := func(q string) (int64, error) {
args := append(psqlArgs, q)
out, err := execPsql(ctx, args, env, false)
if err != nil {
// If local host and no explicit auth, try sudo -u postgres
if (p.cfg.Host == "" || p.cfg.Host == "localhost" || p.cfg.Host == "127.0.0.1") && p.cfg.Password == "" {
out, err = execPsql(ctx, append(psqlArgs, q), env, true)
if err != nil {
return 0, err
}
} else {
return 0, err
}
}
v, err := parseNumeric(out)
if err != nil {
return 0, fmt.Errorf("non-numeric response from psql: %q", out)
}
return v, nil
}
locks, err := runShow(queryLocks)
if err != nil {
check.Status = StatusFailed
check.Message = "Could not read max_locks_per_transaction"
check.Details = err.Error()
return check
}
conns, err := runShow(queryConns)
if err != nil {
check.Status = StatusFailed
check.Message = "Could not read max_connections"
check.Details = err.Error()
return check
}
prepared, _ := runShow(queryPrepared) // optional; treat errors as zero
// Compute capacity
capacity := locks * (conns + prepared)
status, rec := determineLockRecommendation(locks, conns, prepared)
check.Status = status
check.Message = fmt.Sprintf("locks=%d connections=%d prepared=%d capacity=%d", locks, conns, prepared, capacity)
// Human-friendly details + actionable remediation
detailLines := []string{fmt.Sprintf("max_locks_per_transaction: %d", locks), fmt.Sprintf("max_connections: %d", conns), fmt.Sprintf("max_prepared_transactions: %d", prepared), fmt.Sprintf("Total lock capacity: %d", capacity)}
switch rec {
case recIncrease:
detailLines = append(detailLines, "RECOMMENDATION: Increase to at least 65536 and run restore single-threaded")
detailLines = append(detailLines, " sudo -u postgres psql -c \"ALTER SYSTEM SET max_locks_per_transaction = 65536;\" && sudo systemctl restart postgresql")
check.Details = strings.Join(detailLines, "\n")
case recSingleThreadedOrIncrease:
detailLines = append(detailLines, "RECOMMENDATION: Use single-threaded restore (--jobs 1 --parallel-dbs 1) or increase locks to 65536 and still prefer single-threaded")
check.Details = strings.Join(detailLines, "\n")
case recSingleThreaded:
detailLines = append(detailLines, "RECOMMENDATION: Single-threaded restore is safest for very large DBs")
check.Details = strings.Join(detailLines, "\n")
}
return check
}
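The deleted check computes PostgreSQL's total lock-table capacity as `max_locks_per_transaction * (max_connections + max_prepared_transactions)` and grades it against fixed thresholds. A standalone sketch of that arithmetic and the threshold ladder from `determineLockRecommendation`:

```go
package main

import "fmt"

// lockCapacity mirrors the capacity formula in the deleted check: the shared
// lock table holds max_locks_per_transaction slots per (connection + prepared
// transaction), so small settings exhaust shared memory during parallel restores.
func lockCapacity(locks, conns, prepared int64) int64 {
	return locks * (conns + prepared)
}

// classify mirrors the thresholds in determineLockRecommendation.
func classify(locks int64) string {
	switch {
	case locks < 2048:
		return "FAILED: increase max_locks_per_transaction"
	case locks < 8192:
		return "WARNING: increase max_locks_per_transaction"
	case locks < 65536:
		return "WARNING: restore single-threaded or raise locks to 65536"
	default:
		return "PASSED: single-threaded restore still safest for very large DBs"
	}
}

func main() {
	fmt.Println(lockCapacity(4096, 200, 0)) // 819200
	fmt.Println(classify(4096))
}
```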

View File

@ -1,55 +0,0 @@
package checks
import (
"testing"
)
func TestDetermineLockRecommendation(t *testing.T) {
tests := []struct {
locks int64
conns int64
prepared int64
exStatus CheckStatus
exRec lockRecommendation
}{
{locks: 1024, conns: 100, prepared: 0, exStatus: StatusFailed, exRec: recIncrease},
{locks: 4096, conns: 200, prepared: 0, exStatus: StatusWarning, exRec: recIncrease},
{locks: 16384, conns: 200, prepared: 0, exStatus: StatusWarning, exRec: recSingleThreadedOrIncrease},
{locks: 65536, conns: 200, prepared: 0, exStatus: StatusPassed, exRec: recSingleThreaded},
}
for _, tc := range tests {
st, rec := determineLockRecommendation(tc.locks, tc.conns, tc.prepared)
if st != tc.exStatus {
t.Fatalf("locks=%d: status = %v, want %v", tc.locks, st, tc.exStatus)
}
if rec != tc.exRec {
t.Fatalf("locks=%d: rec = %v, want %v", tc.locks, rec, tc.exRec)
}
}
}
func TestParseNumeric(t *testing.T) {
cases := map[string]int64{
"4096": 4096,
" 4096\n": 4096,
"4096 (default)": 4096,
"unknown": 0, // should error
}
for in, want := range cases {
v, err := parseNumeric(in)
if want == 0 {
if err == nil {
t.Fatalf("expected error parsing %q", in)
}
continue
}
if err != nil {
t.Fatalf("parseNumeric(%q) error: %v", in, err)
}
if v != want {
t.Fatalf("parseNumeric(%q) = %d, want %d", in, v, want)
}
}
}

View File

@ -120,17 +120,6 @@ func (p *PreflightChecker) RunAllChecks(ctx context.Context, dbName string) (*Pr
result.FailureCount++
}
// Postgres lock configuration check (provides explicit restore guidance)
locksCheck := p.checkPostgresLocks(ctx)
result.Checks = append(result.Checks, locksCheck)
if locksCheck.Status == StatusFailed {
result.AllPassed = false
result.FailureCount++
} else if locksCheck.Status == StatusWarning {
result.HasWarnings = true
result.WarningCount++
}
// Extract database info if connection succeeded
if dbCheck.Status == StatusPassed && p.db != nil {
version, _ := p.db.GetVersion(ctx)

View File

@ -162,12 +162,7 @@ func (a *AzureBackend) uploadSimple(ctx context.Context, file *os.File, blobName
blockBlobClient := a.client.ServiceClient().NewContainerClient(a.containerName).NewBlockBlobClient(blobName)
// Wrap reader with progress tracking
var reader io.Reader = NewProgressReader(file, fileSize, progress)
// Apply bandwidth throttling if configured
if a.config.BandwidthLimit > 0 {
reader = NewThrottledReader(ctx, reader, a.config.BandwidthLimit)
}
reader := NewProgressReader(file, fileSize, progress)
// Calculate MD5 hash for integrity
hash := sha256.New()
@ -209,13 +204,6 @@ func (a *AzureBackend) uploadBlocks(ctx context.Context, file *os.File, blobName
hash := sha256.New()
var totalUploaded int64
// Calculate throttle delay per byte if bandwidth limited
var throttleDelay time.Duration
if a.config.BandwidthLimit > 0 {
// Calculate nanoseconds per byte
throttleDelay = time.Duration(float64(time.Second) / float64(a.config.BandwidthLimit) * float64(blockSize))
}
for i := int64(0); i < numBlocks; i++ {
blockID := base64.StdEncoding.EncodeToString([]byte(fmt.Sprintf("block-%08d", i)))
blockIDs = append(blockIDs, blockID)
@ -237,15 +225,6 @@ func (a *AzureBackend) uploadBlocks(ctx context.Context, file *os.File, blobName
// Update hash
hash.Write(blockData)
// Apply throttling between blocks if configured
if a.config.BandwidthLimit > 0 && i > 0 {
select {
case <-ctx.Done():
return ctx.Err()
case <-time.After(throttleDelay):
}
}
// Upload block
reader := bytes.NewReader(blockData)
_, err = blockBlobClient.StageBlock(ctx, blockID, streaming.NopCloser(reader), nil)

View File

@ -121,12 +121,7 @@ func (g *GCSBackend) Upload(ctx context.Context, localPath, remotePath string, p
// Wrap reader with progress tracking and hash calculation
hash := sha256.New()
var reader io.Reader = NewProgressReader(io.TeeReader(file, hash), fileSize, progress)
// Apply bandwidth throttling if configured
if g.config.BandwidthLimit > 0 {
reader = NewThrottledReader(ctx, reader, g.config.BandwidthLimit)
}
reader := NewProgressReader(io.TeeReader(file, hash), fileSize, progress)
// Upload with progress tracking
_, err = io.Copy(writer, reader)

View File

@ -46,19 +46,18 @@ type ProgressCallback func(bytesTransferred, totalBytes int64)
// Config contains common configuration for cloud backends
type Config struct {
Provider string // "s3", "minio", "azure", "gcs", "b2"
Bucket string // Bucket or container name
Region string // Region (for S3)
Endpoint string // Custom endpoint (for MinIO, S3-compatible)
AccessKey string // Access key or account ID
SecretKey string // Secret key or access token
UseSSL bool // Use SSL/TLS (default: true)
PathStyle bool // Use path-style addressing (for MinIO)
Prefix string // Prefix for all operations (e.g., "backups/")
Timeout int // Timeout in seconds (default: 300)
MaxRetries int // Maximum retry attempts (default: 3)
Concurrency int // Upload/download concurrency (default: 5)
BandwidthLimit int64 // Maximum upload/download bandwidth in bytes/sec (0 = unlimited)
Provider string // "s3", "minio", "azure", "gcs", "b2"
Bucket string // Bucket or container name
Region string // Region (for S3)
Endpoint string // Custom endpoint (for MinIO, S3-compatible)
AccessKey string // Access key or account ID
SecretKey string // Secret key or access token
UseSSL bool // Use SSL/TLS (default: true)
PathStyle bool // Use path-style addressing (for MinIO)
Prefix string // Prefix for all operations (e.g., "backups/")
Timeout int // Timeout in seconds (default: 300)
MaxRetries int // Maximum retry attempts (default: 3)
Concurrency int // Upload/download concurrency (default: 5)
}
// NewBackend creates a new cloud storage backend based on the provider

View File

@ -183,10 +183,9 @@ func IsRetryableError(err error) bool {
}
// Network errors are typically retryable
// Note: netErr.Temporary() is deprecated since Go 1.18 - most "temporary" errors are timeouts
var netErr net.Error
if ok := isNetError(err, &netErr); ok {
return netErr.Timeout()
return netErr.Timeout() || netErr.Temporary()
}
errStr := strings.ToLower(err.Error())

View File

@ -138,11 +138,6 @@ func (s *S3Backend) uploadSimple(ctx context.Context, file *os.File, key string,
reader = NewProgressReader(file, fileSize, progress)
}
// Apply bandwidth throttling if configured
if s.config.BandwidthLimit > 0 {
reader = NewThrottledReader(ctx, reader, s.config.BandwidthLimit)
}
// Upload to S3
_, err := s.client.PutObject(ctx, &s3.PutObjectInput{
Bucket: aws.String(s.bucket),
@ -168,21 +163,13 @@ func (s *S3Backend) uploadMultipart(ctx context.Context, file *os.File, key stri
return fmt.Errorf("failed to reset file position: %w", err)
}
// Calculate concurrency based on bandwidth limit
// If limited, reduce concurrency to make throttling more effective
concurrency := 10
if s.config.BandwidthLimit > 0 {
// With bandwidth limiting, use fewer concurrent parts
concurrency = 3
}
// Create uploader with custom options
uploader := manager.NewUploader(s.client, func(u *manager.Uploader) {
// Part size: 10MB
u.PartSize = 10 * 1024 * 1024
// Adjust concurrency
u.Concurrency = concurrency
// Upload up to 10 parts concurrently
u.Concurrency = 10
// Leave parts on failure for debugging
u.LeavePartsOnError = false
@ -194,11 +181,6 @@ func (s *S3Backend) uploadMultipart(ctx context.Context, file *os.File, key stri
reader = NewProgressReader(file, fileSize, progress)
}
// Apply bandwidth throttling if configured
if s.config.BandwidthLimit > 0 {
reader = NewThrottledReader(ctx, reader, s.config.BandwidthLimit)
}
// Upload with multipart
_, err := uploader.Upload(ctx, &s3.PutObjectInput{
Bucket: aws.String(s.bucket),

View File

@ -1,251 +0,0 @@
// Package cloud provides throttled readers for bandwidth limiting during cloud uploads/downloads
package cloud
import (
"context"
"fmt"
"io"
"strings"
"sync"
"time"
)
// ThrottledReader wraps an io.Reader and limits the read rate to a maximum bytes per second.
// This is useful for cloud uploads where you don't want to saturate the network.
type ThrottledReader struct {
reader io.Reader
bytesPerSec int64 // Maximum bytes per second (0 = unlimited)
bytesRead int64 // Bytes read in current window
windowStart time.Time // Start of current measurement window
windowSize time.Duration // Size of the measurement window
mu sync.Mutex // Protects bytesRead and windowStart
ctx context.Context
}
// NewThrottledReader creates a new bandwidth-limited reader.
// bytesPerSec is the maximum transfer rate in bytes per second.
// Set to 0 for unlimited bandwidth.
func NewThrottledReader(ctx context.Context, reader io.Reader, bytesPerSec int64) *ThrottledReader {
return &ThrottledReader{
reader: reader,
bytesPerSec: bytesPerSec,
windowStart: time.Now(),
windowSize: 100 * time.Millisecond, // Measure in 100ms windows for smooth throttling
ctx: ctx,
}
}
// Read implements io.Reader with bandwidth throttling
func (t *ThrottledReader) Read(p []byte) (int, error) {
// No throttling if unlimited
if t.bytesPerSec <= 0 {
return t.reader.Read(p)
}
t.mu.Lock()
// Calculate how many bytes we're allowed in this window
now := time.Now()
elapsed := now.Sub(t.windowStart)
// If we've passed the window, reset
if elapsed >= t.windowSize {
t.bytesRead = 0
t.windowStart = now
elapsed = 0
}
// Calculate bytes allowed per window
bytesPerWindow := int64(float64(t.bytesPerSec) * t.windowSize.Seconds())
// How many bytes can we still read in this window?
remaining := bytesPerWindow - t.bytesRead
if remaining <= 0 {
// We've exhausted our quota for this window - wait for next window
sleepDuration := t.windowSize - elapsed
t.mu.Unlock()
select {
case <-t.ctx.Done():
return 0, t.ctx.Err()
case <-time.After(sleepDuration):
}
// Retry after sleeping
return t.Read(p)
}
// Limit read size to remaining quota
maxRead := len(p)
if int64(maxRead) > remaining {
maxRead = int(remaining)
}
t.mu.Unlock()
// Perform the actual read
n, err := t.reader.Read(p[:maxRead])
// Track bytes read
t.mu.Lock()
t.bytesRead += int64(n)
t.mu.Unlock()
return n, err
}
// ThrottledWriter wraps an io.Writer and limits the write rate.
type ThrottledWriter struct {
writer io.Writer
bytesPerSec int64
bytesWritten int64
windowStart time.Time
windowSize time.Duration
mu sync.Mutex
ctx context.Context
}
// NewThrottledWriter creates a new bandwidth-limited writer.
func NewThrottledWriter(ctx context.Context, writer io.Writer, bytesPerSec int64) *ThrottledWriter {
return &ThrottledWriter{
writer: writer,
bytesPerSec: bytesPerSec,
windowStart: time.Now(),
windowSize: 100 * time.Millisecond,
ctx: ctx,
}
}
// Write implements io.Writer with bandwidth throttling
func (t *ThrottledWriter) Write(p []byte) (int, error) {
if t.bytesPerSec <= 0 {
return t.writer.Write(p)
}
totalWritten := 0
for totalWritten < len(p) {
t.mu.Lock()
now := time.Now()
elapsed := now.Sub(t.windowStart)
if elapsed >= t.windowSize {
t.bytesWritten = 0
t.windowStart = now
elapsed = 0
}
bytesPerWindow := int64(float64(t.bytesPerSec) * t.windowSize.Seconds())
remaining := bytesPerWindow - t.bytesWritten
if remaining <= 0 {
sleepDuration := t.windowSize - elapsed
t.mu.Unlock()
select {
case <-t.ctx.Done():
return totalWritten, t.ctx.Err()
case <-time.After(sleepDuration):
}
continue
}
// Calculate how much to write
toWrite := len(p) - totalWritten
if int64(toWrite) > remaining {
toWrite = int(remaining)
}
t.mu.Unlock()
// Write chunk
n, err := t.writer.Write(p[totalWritten : totalWritten+toWrite])
totalWritten += n
t.mu.Lock()
t.bytesWritten += int64(n)
t.mu.Unlock()
if err != nil {
return totalWritten, err
}
}
return totalWritten, nil
}
// ParseBandwidth parses a human-readable bandwidth string into bytes per second.
// Supports: "10MB/s", "10MiB/s", "100KB/s", "1GB/s", "10Mbps", "100Kbps"
// Returns 0 for empty or "unlimited"
func ParseBandwidth(s string) (int64, error) {
if s == "" || s == "0" || s == "unlimited" {
return 0, nil
}
// Normalize input
s = strings.TrimSpace(s)
s = strings.ToLower(s)
s = strings.TrimSuffix(s, "/s")
s = strings.TrimSuffix(s, "ps") // For mbps/kbps
// Parse unit
var multiplier int64 = 1
var value float64
switch {
case strings.HasSuffix(s, "gib"):
multiplier = 1024 * 1024 * 1024
s = strings.TrimSuffix(s, "gib")
case strings.HasSuffix(s, "gb"):
multiplier = 1000 * 1000 * 1000
s = strings.TrimSuffix(s, "gb")
case strings.HasSuffix(s, "mib"):
multiplier = 1024 * 1024
s = strings.TrimSuffix(s, "mib")
case strings.HasSuffix(s, "mb"):
multiplier = 1000 * 1000
s = strings.TrimSuffix(s, "mb")
case strings.HasSuffix(s, "kib"):
multiplier = 1024
s = strings.TrimSuffix(s, "kib")
case strings.HasSuffix(s, "kb"):
multiplier = 1000
s = strings.TrimSuffix(s, "kb")
case strings.HasSuffix(s, "b"):
multiplier = 1
s = strings.TrimSuffix(s, "b")
default:
// Assume MB if no unit
multiplier = 1000 * 1000
}
// Parse numeric value
_, err := fmt.Sscanf(s, "%f", &value)
if err != nil {
return 0, fmt.Errorf("invalid bandwidth value: %s", s)
}
return int64(value * float64(multiplier)), nil
}
// FormatBandwidth returns a human-readable bandwidth string
func FormatBandwidth(bytesPerSec int64) string {
if bytesPerSec <= 0 {
return "unlimited"
}
const (
KB = 1000
MB = 1000 * KB
GB = 1000 * MB
)
switch {
case bytesPerSec >= GB:
return fmt.Sprintf("%.1f GB/s", float64(bytesPerSec)/float64(GB))
case bytesPerSec >= MB:
return fmt.Sprintf("%.1f MB/s", float64(bytesPerSec)/float64(MB))
case bytesPerSec >= KB:
return fmt.Sprintf("%.1f KB/s", float64(bytesPerSec)/float64(KB))
default:
return fmt.Sprintf("%d B/s", bytesPerSec)
}
}

View File

@ -1,175 +0,0 @@
package cloud
import (
"bytes"
"context"
"io"
"testing"
"time"
)
func TestParseBandwidth(t *testing.T) {
tests := []struct {
input string
expected int64
wantErr bool
}{
// Empty/unlimited
{"", 0, false},
{"0", 0, false},
{"unlimited", 0, false},
// Megabytes per second (SI)
{"10MB/s", 10 * 1000 * 1000, false},
{"10mb/s", 10 * 1000 * 1000, false},
{"10MB", 10 * 1000 * 1000, false},
{"100MB/s", 100 * 1000 * 1000, false},
// Mebibytes per second (binary)
{"10MiB/s", 10 * 1024 * 1024, false},
{"10mib/s", 10 * 1024 * 1024, false},
// Kilobytes
{"500KB/s", 500 * 1000, false},
{"500KiB/s", 500 * 1024, false},
// Gigabytes
{"1GB/s", 1000 * 1000 * 1000, false},
{"1GiB/s", 1024 * 1024 * 1024, false},
// Megabits per second
{"100Mbps", 100 * 1000 * 1000, false},
// Plain bytes
{"1000B/s", 1000, false},
// No unit (assumes MB)
{"50", 50 * 1000 * 1000, false},
// Decimal values
{"1.5MB/s", 1500000, false},
{"0.5GB/s", 500 * 1000 * 1000, false},
}
for _, tt := range tests {
t.Run(tt.input, func(t *testing.T) {
got, err := ParseBandwidth(tt.input)
if (err != nil) != tt.wantErr {
t.Errorf("ParseBandwidth(%q) error = %v, wantErr %v", tt.input, err, tt.wantErr)
return
}
if got != tt.expected {
t.Errorf("ParseBandwidth(%q) = %d, want %d", tt.input, got, tt.expected)
}
})
}
}
func TestFormatBandwidth(t *testing.T) {
tests := []struct {
input int64
expected string
}{
{0, "unlimited"},
{500, "500 B/s"},
{1500, "1.5 KB/s"},
{10 * 1000 * 1000, "10.0 MB/s"},
{1000 * 1000 * 1000, "1.0 GB/s"},
}
for _, tt := range tests {
t.Run(tt.expected, func(t *testing.T) {
got := FormatBandwidth(tt.input)
if got != tt.expected {
t.Errorf("FormatBandwidth(%d) = %q, want %q", tt.input, got, tt.expected)
}
})
}
}
func TestThrottledReader_Unlimited(t *testing.T) {
data := []byte("hello world")
reader := bytes.NewReader(data)
ctx := context.Background()
throttled := NewThrottledReader(ctx, reader, 0) // 0 = unlimited
result, err := io.ReadAll(throttled)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if !bytes.Equal(result, data) {
t.Errorf("got %q, want %q", result, data)
}
}
func TestThrottledReader_Limited(t *testing.T) {
// Create 1KB of data
data := make([]byte, 1024)
for i := range data {
data[i] = byte(i % 256)
}
reader := bytes.NewReader(data)
ctx := context.Background()
// Limit to 512 bytes/second - should take ~2 seconds
throttled := NewThrottledReader(ctx, reader, 512)
start := time.Now()
result, err := io.ReadAll(throttled)
elapsed := time.Since(start)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if !bytes.Equal(result, data) {
t.Errorf("data mismatch: got %d bytes, want %d bytes", len(result), len(data))
}
// Should take at least 1.5 seconds (allowing some margin)
if elapsed < 1500*time.Millisecond {
t.Errorf("read completed too fast: %v (expected ~2s for 1KB at 512B/s)", elapsed)
}
}
func TestThrottledReader_CancelContext(t *testing.T) {
data := make([]byte, 10*1024) // 10KB
reader := bytes.NewReader(data)
ctx, cancel := context.WithCancel(context.Background())
// Very slow rate
throttled := NewThrottledReader(ctx, reader, 100)
// Cancel after 100ms
go func() {
time.Sleep(100 * time.Millisecond)
cancel()
}()
_, err := io.ReadAll(throttled)
if err != context.Canceled {
t.Errorf("expected context.Canceled, got %v", err)
}
}
func TestThrottledWriter_Unlimited(t *testing.T) {
ctx := context.Background()
var buf bytes.Buffer
throttled := NewThrottledWriter(ctx, &buf, 0) // 0 = unlimited
data := []byte("hello world")
n, err := throttled.Write(data)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if n != len(data) {
t.Errorf("wrote %d bytes, want %d", n, len(data))
}
if !bytes.Equal(buf.Bytes(), data) {
t.Errorf("got %q, want %q", buf.Bytes(), data)
}
}

View File

@ -23,7 +23,6 @@ type Config struct {
User string
Database string
Password string
Socket string // Unix socket path for MySQL/MariaDB
DatabaseType string // "postgres" or "mysql"
SSLMode string
Insecure bool
@ -38,8 +37,7 @@ type Config struct {
CPUWorkloadType string // "cpu-intensive", "io-intensive", "balanced"
// Resource profile for backup/restore operations
ResourceProfile string // "conservative", "balanced", "performance", "max-performance"
LargeDBMode bool // Enable large database mode (reduces parallelism, increases max_locks)
ResourceProfile string // "conservative", "balanced", "performance", "max-performance", "large-db"
// CPU detection
CPUDetector *cpu.Detector
@ -51,11 +49,10 @@ type Config struct {
SampleValue int
// Output options
NoColor bool
Debug bool
DebugLocks bool // Extended lock debugging (captures lock detection, Guard decisions, boost attempts)
LogLevel string
LogFormat string
NoColor bool
Debug bool
LogLevel string
LogFormat string
// Config persistence
NoSaveConfig bool
@ -203,16 +200,15 @@ func New() *Config {
SSLMode: sslMode,
Insecure: getEnvBool("INSECURE", false),
// Backup defaults - use recommended profile's settings for small VMs
// Backup defaults
BackupDir: backupDir,
CompressionLevel: getEnvInt("COMPRESS_LEVEL", 6),
Jobs: getEnvInt("JOBS", recommendedProfile.Jobs),
DumpJobs: getEnvInt("DUMP_JOBS", recommendedProfile.DumpJobs),
Jobs: getEnvInt("JOBS", getDefaultJobs(cpuInfo)),
DumpJobs: getEnvInt("DUMP_JOBS", getDefaultDumpJobs(cpuInfo)),
MaxCores: getEnvInt("MAX_CORES", getDefaultMaxCores(cpuInfo)),
AutoDetectCores: getEnvBool("AUTO_DETECT_CORES", true),
CPUWorkloadType: getEnvString("CPU_WORKLOAD_TYPE", "balanced"),
ResourceProfile: defaultProfile,
LargeDBMode: getEnvBool("LARGE_DB_MODE", false),
// CPU and memory detection
CPUDetector: cpuDetector,
@ -237,8 +233,8 @@ func New() *Config {
// Timeouts - default 24 hours (1440 min) to handle very large databases with large objects
ClusterTimeoutMinutes: getEnvInt("CLUSTER_TIMEOUT_MIN", 1440),
// Cluster parallelism - use recommended profile's setting for small VMs
ClusterParallelism: getEnvInt("CLUSTER_PARALLELISM", recommendedProfile.ClusterParallelism),
// Cluster parallelism (default: 2 concurrent operations for faster cluster backup/restore)
ClusterParallelism: getEnvInt("CLUSTER_PARALLELISM", 2),
// Working directory for large operations (default: system temp)
WorkDir: getEnvString("WORK_DIR", ""),
@ -434,7 +430,7 @@ func (c *Config) ApplyResourceProfile(profileName string) error {
return &ConfigError{
Field: "resource_profile",
Value: profileName,
Message: "unknown profile. Valid profiles: conservative, balanced, performance, max-performance",
Message: "unknown profile. Valid profiles: conservative, balanced, performance, max-performance, large-db",
}
}
@ -447,12 +443,6 @@ func (c *Config) ApplyResourceProfile(profileName string) error {
// Apply profile settings
c.ResourceProfile = profile.Name
// If LargeDBMode is enabled, apply its modifiers
if c.LargeDBMode {
profile = cpu.ApplyLargeDBMode(profile)
}
c.ClusterParallelism = profile.ClusterParallelism
c.Jobs = profile.Jobs
c.DumpJobs = profile.DumpJobs
@ -467,19 +457,8 @@ func (c *Config) GetResourceProfileRecommendation(isLargeDB bool) (string, strin
}
// GetCurrentProfile returns the current resource profile details
// If LargeDBMode is enabled, returns a modified profile with reduced parallelism
func (c *Config) GetCurrentProfile() *cpu.ResourceProfile {
profile := cpu.GetProfileByName(c.ResourceProfile)
if profile == nil {
return nil
}
// Apply LargeDBMode modifier if enabled
if c.LargeDBMode {
return cpu.ApplyLargeDBMode(profile)
}
return profile
return cpu.GetProfileByName(c.ResourceProfile)
}
// GetCPUInfo returns CPU information, detecting if necessary
@@ -610,6 +589,37 @@ func getDefaultBackupDir() string {
return filepath.Join(os.TempDir(), "db_backups")
}
// CPU-related helper functions
func getDefaultJobs(cpuInfo *cpu.CPUInfo) int {
if cpuInfo == nil {
return 1
}
// Default to logical cores for restore operations
jobs := cpuInfo.LogicalCores
if jobs < 1 {
jobs = 1
}
if jobs > 16 {
jobs = 16 // Safety limit
}
return jobs
}
func getDefaultDumpJobs(cpuInfo *cpu.CPUInfo) int {
if cpuInfo == nil {
return 1
}
// Use physical cores for dump operations (CPU intensive)
jobs := cpuInfo.PhysicalCores
if jobs < 1 {
jobs = 1
}
if jobs > 8 {
jobs = 8 // Conservative limit for dumps
}
return jobs
}
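The two helpers above share one clamp pattern: floor at 1, then cap at a per-operation ceiling (16 for restore jobs, 8 for dump jobs). A standalone sketch of that pattern (the `clamp` name is illustrative, not part of this diff):

```go
package main

import "fmt"

// clamp bounds v to [lo, hi] — the same floor-then-ceiling logic
// getDefaultJobs and getDefaultDumpJobs apply with different limits.
func clamp(v, lo, hi int) int {
	if v < lo {
		return lo
	}
	if v > hi {
		return hi
	}
	return v
}

func main() {
	fmt.Println(clamp(32, 1, 16)) // a 32-core box still gets 16 restore jobs
	fmt.Println(clamp(0, 1, 8))   // zero/unknown core count falls back to 1
}
```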
func getDefaultMaxCores(cpuInfo *cpu.CPUInfo) int {
if cpuInfo == nil {
return 16


@@ -1,260 +0,0 @@
package config
import (
"os"
"testing"
)
func TestNew(t *testing.T) {
cfg := New()
if cfg == nil {
t.Fatal("expected non-nil config")
}
// Check defaults
if cfg.Host == "" {
t.Error("expected non-empty host")
}
if cfg.Port == 0 {
t.Error("expected non-zero port")
}
if cfg.User == "" {
t.Error("expected non-empty user")
}
if cfg.DatabaseType != "postgres" && cfg.DatabaseType != "mysql" {
t.Errorf("expected valid database type, got %q", cfg.DatabaseType)
}
}
func TestIsPostgreSQL(t *testing.T) {
tests := []struct {
dbType string
expected bool
}{
{"postgres", true},
{"mysql", false},
{"mariadb", false},
{"", false},
}
for _, tt := range tests {
t.Run(tt.dbType, func(t *testing.T) {
cfg := &Config{DatabaseType: tt.dbType}
if got := cfg.IsPostgreSQL(); got != tt.expected {
t.Errorf("IsPostgreSQL() = %v, want %v", got, tt.expected)
}
})
}
}
func TestIsMySQL(t *testing.T) {
tests := []struct {
dbType string
expected bool
}{
{"mysql", true},
{"mariadb", true},
{"postgres", false},
{"", false},
}
for _, tt := range tests {
t.Run(tt.dbType, func(t *testing.T) {
cfg := &Config{DatabaseType: tt.dbType}
if got := cfg.IsMySQL(); got != tt.expected {
t.Errorf("IsMySQL() = %v, want %v", got, tt.expected)
}
})
}
}
func TestSetDatabaseType(t *testing.T) {
tests := []struct {
input string
expected string
shouldError bool
}{
{"postgres", "postgres", false},
{"postgresql", "postgres", false},
{"POSTGRES", "postgres", false},
{"mysql", "mysql", false},
{"MYSQL", "mysql", false},
{"mariadb", "mariadb", false},
{"invalid", "", true},
{"", "", true},
}
for _, tt := range tests {
t.Run(tt.input, func(t *testing.T) {
cfg := &Config{Port: 0}
err := cfg.SetDatabaseType(tt.input)
if tt.shouldError {
if err == nil {
t.Error("expected error, got nil")
}
} else {
if err != nil {
t.Errorf("unexpected error: %v", err)
}
if cfg.DatabaseType != tt.expected {
t.Errorf("DatabaseType = %q, want %q", cfg.DatabaseType, tt.expected)
}
}
})
}
}
func TestSetDatabaseTypePortDefaults(t *testing.T) {
cfg := &Config{Port: 0}
_ = cfg.SetDatabaseType("postgres")
if cfg.Port != 5432 {
t.Errorf("expected PostgreSQL default port 5432, got %d", cfg.Port)
}
cfg = &Config{Port: 0}
_ = cfg.SetDatabaseType("mysql")
if cfg.Port != 3306 {
t.Errorf("expected MySQL default port 3306, got %d", cfg.Port)
}
}
func TestGetEnvString(t *testing.T) {
os.Setenv("TEST_CONFIG_VAR", "test_value")
defer os.Unsetenv("TEST_CONFIG_VAR")
if got := getEnvString("TEST_CONFIG_VAR", "default"); got != "test_value" {
t.Errorf("getEnvString() = %q, want %q", got, "test_value")
}
if got := getEnvString("NONEXISTENT_VAR", "default"); got != "default" {
t.Errorf("getEnvString() = %q, want %q", got, "default")
}
}
func TestGetEnvInt(t *testing.T) {
os.Setenv("TEST_INT_VAR", "42")
defer os.Unsetenv("TEST_INT_VAR")
if got := getEnvInt("TEST_INT_VAR", 0); got != 42 {
t.Errorf("getEnvInt() = %d, want %d", got, 42)
}
os.Setenv("TEST_INT_VAR", "invalid")
if got := getEnvInt("TEST_INT_VAR", 10); got != 10 {
t.Errorf("getEnvInt() with invalid = %d, want %d", got, 10)
}
if got := getEnvInt("NONEXISTENT_INT_VAR", 99); got != 99 {
t.Errorf("getEnvInt() nonexistent = %d, want %d", got, 99)
}
}
func TestGetEnvBool(t *testing.T) {
tests := []struct {
envValue string
expected bool
}{
{"true", true},
{"TRUE", true},
{"1", true},
{"false", false},
{"FALSE", false},
{"0", false},
}
for _, tt := range tests {
t.Run(tt.envValue, func(t *testing.T) {
os.Setenv("TEST_BOOL_VAR", tt.envValue)
defer os.Unsetenv("TEST_BOOL_VAR")
if got := getEnvBool("TEST_BOOL_VAR", false); got != tt.expected {
t.Errorf("getEnvBool(%q) = %v, want %v", tt.envValue, got, tt.expected)
}
})
}
}
func TestCanonicalDatabaseType(t *testing.T) {
tests := []struct {
input string
expected string
ok bool
}{
{"postgres", "postgres", true},
{"postgresql", "postgres", true},
{"pg", "postgres", true},
{"POSTGRES", "postgres", true},
{"mysql", "mysql", true},
{"MYSQL", "mysql", true},
{"mariadb", "mariadb", true},
{"maria", "mariadb", true},
{"invalid", "", false},
{"", "", false},
}
for _, tt := range tests {
t.Run(tt.input, func(t *testing.T) {
got, ok := canonicalDatabaseType(tt.input)
if ok != tt.ok {
t.Errorf("canonicalDatabaseType(%q) ok = %v, want %v", tt.input, ok, tt.ok)
}
if got != tt.expected {
t.Errorf("canonicalDatabaseType(%q) = %q, want %q", tt.input, got, tt.expected)
}
})
}
}
func TestDisplayDatabaseType(t *testing.T) {
tests := []struct {
dbType string
expected string
}{
{"postgres", "PostgreSQL"},
{"mysql", "MySQL"},
{"mariadb", "MariaDB"},
{"unknown", "unknown"},
}
for _, tt := range tests {
t.Run(tt.dbType, func(t *testing.T) {
cfg := &Config{DatabaseType: tt.dbType}
if got := cfg.DisplayDatabaseType(); got != tt.expected {
t.Errorf("DisplayDatabaseType() = %q, want %q", got, tt.expected)
}
})
}
}
func TestConfigError(t *testing.T) {
err := &ConfigError{
Field: "port",
Value: "invalid",
Message: "must be a valid port number",
}
errStr := err.Error()
if errStr == "" {
t.Error("expected non-empty error string")
}
}
func TestGetCurrentOSUser(t *testing.T) {
user := GetCurrentOSUser()
if user == "" {
t.Error("expected non-empty user")
}
}
func TestDefaultPortFor(t *testing.T) {
if port := defaultPortFor("postgres"); port != 5432 {
t.Errorf("defaultPortFor(postgres) = %d, want 5432", port)
}
if port := defaultPortFor("mysql"); port != 3306 {
t.Errorf("defaultPortFor(mysql) = %d, want 3306", port)
}
if port := defaultPortFor("unknown"); port != 5432 {
t.Errorf("defaultPortFor(unknown) = %d, want 5432 (default)", port)
}
}


@@ -28,11 +28,9 @@ type LocalConfig struct {
DumpJobs int
// Performance settings
CPUWorkload string
MaxCores int
ClusterTimeout int // Cluster operation timeout in minutes (default: 1440 = 24 hours)
ResourceProfile string
LargeDBMode bool // Enable large database mode (reduces parallelism, increases locks)
CPUWorkload string
MaxCores int
ClusterTimeout int // Cluster operation timeout in minutes (default: 1440 = 24 hours)
// Security settings
RetentionDays int
@@ -128,10 +126,6 @@ func LoadLocalConfig() (*LocalConfig, error) {
if ct, err := strconv.Atoi(value); err == nil {
cfg.ClusterTimeout = ct
}
case "resource_profile":
cfg.ResourceProfile = value
case "large_db_mode":
cfg.LargeDBMode = value == "true" || value == "1"
}
case "security":
switch key {
@@ -213,12 +207,6 @@ func SaveLocalConfig(cfg *LocalConfig) error {
if cfg.ClusterTimeout != 0 {
sb.WriteString(fmt.Sprintf("cluster_timeout = %d\n", cfg.ClusterTimeout))
}
if cfg.ResourceProfile != "" {
sb.WriteString(fmt.Sprintf("resource_profile = %s\n", cfg.ResourceProfile))
}
if cfg.LargeDBMode {
sb.WriteString("large_db_mode = true\n")
}
sb.WriteString("\n")
// Security section
@@ -292,14 +280,6 @@ func ApplyLocalConfig(cfg *Config, local *LocalConfig) {
if local.ClusterTimeout != 0 {
cfg.ClusterTimeoutMinutes = local.ClusterTimeout
}
// Apply resource profile settings
if local.ResourceProfile != "" {
cfg.ResourceProfile = local.ResourceProfile
}
// LargeDBMode is a boolean - apply if true in config
if local.LargeDBMode {
cfg.LargeDBMode = true
}
if cfg.RetentionDays == 30 && local.RetentionDays != 0 {
cfg.RetentionDays = local.RetentionDays
}
@@ -314,24 +294,22 @@ func ApplyLocalConfig(cfg *Config, local *LocalConfig) {
// ConfigFromConfig creates a LocalConfig from a Config
func ConfigFromConfig(cfg *Config) *LocalConfig {
return &LocalConfig{
DBType: cfg.DatabaseType,
Host: cfg.Host,
Port: cfg.Port,
User: cfg.User,
Database: cfg.Database,
SSLMode: cfg.SSLMode,
BackupDir: cfg.BackupDir,
WorkDir: cfg.WorkDir,
Compression: cfg.CompressionLevel,
Jobs: cfg.Jobs,
DumpJobs: cfg.DumpJobs,
CPUWorkload: cfg.CPUWorkloadType,
MaxCores: cfg.MaxCores,
ClusterTimeout: cfg.ClusterTimeoutMinutes,
ResourceProfile: cfg.ResourceProfile,
LargeDBMode: cfg.LargeDBMode,
RetentionDays: cfg.RetentionDays,
MinBackups: cfg.MinBackups,
MaxRetries: cfg.MaxRetries,
DBType: cfg.DatabaseType,
Host: cfg.Host,
Port: cfg.Port,
User: cfg.User,
Database: cfg.Database,
SSLMode: cfg.SSLMode,
BackupDir: cfg.BackupDir,
WorkDir: cfg.WorkDir,
Compression: cfg.CompressionLevel,
Jobs: cfg.Jobs,
DumpJobs: cfg.DumpJobs,
CPUWorkload: cfg.CPUWorkloadType,
MaxCores: cfg.MaxCores,
ClusterTimeout: cfg.ClusterTimeoutMinutes,
RetentionDays: cfg.RetentionDays,
MinBackups: cfg.MinBackups,
MaxRetries: cfg.MaxRetries,
}
}


@@ -1,128 +0,0 @@
package config
import (
"fmt"
"strings"
)
// RestoreProfile defines resource settings for restore operations
type RestoreProfile struct {
Name string
ParallelDBs int // Number of databases to restore in parallel
Jobs int // Parallel decompression jobs
DisableProgress bool // Disable progress indicators to reduce overhead
MemoryConservative bool // Use memory-conservative settings
}
// GetRestoreProfile returns the profile settings for a given profile name
func GetRestoreProfile(profileName string) (*RestoreProfile, error) {
profileName = strings.ToLower(strings.TrimSpace(profileName))
switch profileName {
case "conservative":
return &RestoreProfile{
Name: "conservative",
ParallelDBs: 1, // Single-threaded restore
Jobs: 1, // Single-threaded decompression
DisableProgress: false,
MemoryConservative: true,
}, nil
case "balanced", "":
return &RestoreProfile{
Name: "balanced",
ParallelDBs: 0, // Use config default or auto-detect
Jobs: 0, // Use config default or auto-detect
DisableProgress: false,
MemoryConservative: false,
}, nil
case "aggressive", "performance", "max":
return &RestoreProfile{
Name: "aggressive",
ParallelDBs: -1, // Auto-detect based on resources
Jobs: -1, // Auto-detect based on CPU
DisableProgress: false,
MemoryConservative: false,
}, nil
case "potato":
// Easter egg: same as conservative but with a fun name
return &RestoreProfile{
Name: "potato",
ParallelDBs: 1,
Jobs: 1,
DisableProgress: false,
MemoryConservative: true,
}, nil
default:
return nil, fmt.Errorf("unknown profile: %s (valid: conservative, balanced, aggressive)", profileName)
}
}
// ApplyProfile applies profile settings to config, respecting explicit user overrides
func ApplyProfile(cfg *Config, profileName string, explicitJobs, explicitParallelDBs int) error {
profile, err := GetRestoreProfile(profileName)
if err != nil {
return err
}
// Show profile being used
if cfg.Debug {
fmt.Printf("Using restore profile: %s\n", profile.Name)
if profile.MemoryConservative {
fmt.Println("Memory-conservative mode enabled")
}
}
// Apply profile settings only if not explicitly overridden
if explicitJobs == 0 && profile.Jobs > 0 {
cfg.Jobs = profile.Jobs
}
if explicitParallelDBs == 0 && profile.ParallelDBs != 0 {
cfg.ClusterParallelism = profile.ParallelDBs
}
// Store profile name
cfg.ResourceProfile = profile.Name
// Conservative profile implies large DB mode settings
if profile.MemoryConservative {
cfg.LargeDBMode = true
}
return nil
}
// GetProfileDescription returns a human-readable description of the profile
func GetProfileDescription(profileName string) string {
profile, err := GetRestoreProfile(profileName)
if err != nil {
return "Unknown profile"
}
switch profile.Name {
case "conservative":
return "Conservative: --parallel=1, single-threaded, minimal memory usage. Best for resource-constrained servers or when other services are running."
case "potato":
return "Potato Mode: Same as conservative, for servers running on a potato 🥔"
case "balanced":
return "Balanced: Auto-detect resources, moderate parallelism. Good default for most scenarios."
case "aggressive":
return "Aggressive: Maximum parallelism, all available resources. Best for dedicated database servers with ample resources."
default:
return profile.Name
}
}
// ListProfiles returns a list of all available profiles with descriptions
func ListProfiles() map[string]string {
return map[string]string{
"conservative": GetProfileDescription("conservative"),
"balanced": GetProfileDescription("balanced"),
"aggressive": GetProfileDescription("aggressive"),
"potato": GetProfileDescription("potato"),
}
}
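The key behavior in the removed `ApplyProfile` is its precedence rule: an explicit user flag (non-zero) always wins, a positive profile value applies otherwise, and a zero profile value (balanced) leaves the auto-detected config setting alone. A minimal sketch of that rule, with illustrative names not taken from the diff:

```go
package main

import "fmt"

// pick mirrors ApplyProfile's precedence for a single setting:
// explicit flag > profile default > current (auto-detected) value.
func pick(explicit, profileDefault, current int) int {
	if explicit != 0 {
		return explicit
	}
	if profileDefault > 0 {
		return profileDefault
	}
	return current
}

func main() {
	fmt.Println(pick(8, 1, 4)) // explicit --jobs=8 beats conservative's Jobs=1
	fmt.Println(pick(0, 1, 4)) // no flag: conservative's Jobs=1 applies
	fmt.Println(pick(0, 0, 4)) // balanced (0): keep the auto-detected 4
}
```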


@@ -1,60 +0,0 @@
// Package constants provides shared constant values used across the application.
package constants
import "time"
// RPO (Recovery Point Objective) threshold constants in seconds.
// These define backup freshness thresholds for monitoring and alerting.
const (
// RPOThreshold12Hours is the warning threshold (12 hours in seconds)
RPOThreshold12Hours = 43200
// RPOThreshold24Hours is the critical threshold (24 hours in seconds)
RPOThreshold24Hours = 86400
// RPOThreshold7Days is the maximum acceptable RPO (7 days in seconds)
RPOThreshold7Days = 604800
)
// RPO thresholds as time.Duration for convenience
const (
RPODuration12Hours = 12 * time.Hour
RPODuration24Hours = 24 * time.Hour
RPODuration7Days = 7 * 24 * time.Hour
)
// Backup operation timeouts
const (
// DefaultClusterTimeout is the default timeout for cluster operations
DefaultClusterTimeout = 4 * time.Hour
// DefaultSingleDBTimeout is the default timeout for single database operations
DefaultSingleDBTimeout = 2 * time.Hour
// DefaultConnectionTimeout is the timeout for database connections
DefaultConnectionTimeout = 30 * time.Second
// DefaultHealthCheckInterval is the interval between health checks
DefaultHealthCheckInterval = 30 * time.Second
)
// Retry configuration
const (
// DefaultMaxRetries is the default number of retry attempts
DefaultMaxRetries = 3
// DefaultRetryDelay is the initial delay between retries
DefaultRetryDelay = 5 * time.Second
// DefaultRetryMaxDelay is the maximum delay between retries (for exponential backoff)
DefaultRetryMaxDelay = 60 * time.Second
)
// Progress update intervals
const (
// ProgressUpdateInterval is how often to update progress indicators
ProgressUpdateInterval = 500 * time.Millisecond
// MetricsUpdateInterval is how often to push metrics
MetricsUpdateInterval = 10 * time.Second
)


@@ -90,17 +90,32 @@ var (
DumpJobs: 16,
MaintenanceWorkMem: "2GB",
MaxLocksPerTxn: 512,
RecommendedForLarge: false, // Large DBs should use LargeDBMode
RecommendedForLarge: false, // Large DBs should use conservative
MinMemoryGB: 64,
MinCores: 16,
}
// AllProfiles contains all available profiles (VM resource-based)
// ProfileLargeDB - Optimized specifically for large databases
ProfileLargeDB = ResourceProfile{
Name: "large-db",
Description: "Optimized for large databases with many tables/BLOBs. Prevents 'out of shared memory' errors.",
ClusterParallelism: 1,
Jobs: 2,
DumpJobs: 2,
MaintenanceWorkMem: "1GB",
MaxLocksPerTxn: 8192,
RecommendedForLarge: true,
MinMemoryGB: 8,
MinCores: 2,
}
// AllProfiles contains all available profiles
AllProfiles = []ResourceProfile{
ProfileConservative,
ProfileBalanced,
ProfilePerformance,
ProfileMaxPerformance,
ProfileLargeDB,
}
)
@@ -114,51 +129,6 @@ func GetProfileByName(name string) *ResourceProfile {
return nil
}
// ApplyLargeDBMode modifies a profile for large database operations.
// This is a modifier that reduces parallelism and increases max_locks_per_transaction
// to prevent "out of shared memory" errors with large databases (many tables, LOBs, etc.).
// It returns a new profile with adjusted settings, leaving the original unchanged.
func ApplyLargeDBMode(profile *ResourceProfile) *ResourceProfile {
if profile == nil {
return nil
}
// Create a copy with adjusted settings
modified := *profile
// Add "(large-db)" suffix to indicate this is modified
modified.Name = profile.Name + " +large-db"
modified.Description = fmt.Sprintf("%s [LargeDBMode: reduced parallelism, high locks]", profile.Description)
// Reduce parallelism to avoid lock exhaustion
// Rule: halve parallelism, minimum 1
modified.ClusterParallelism = max(1, profile.ClusterParallelism/2)
modified.Jobs = max(1, profile.Jobs/2)
modified.DumpJobs = max(2, profile.DumpJobs/2)
// Force high max_locks_per_transaction for large schemas
modified.MaxLocksPerTxn = 8192
// Increase maintenance_work_mem for complex operations
// Keep or boost maintenance work mem
modified.MaintenanceWorkMem = "1GB"
if profile.MinMemoryGB >= 32 {
modified.MaintenanceWorkMem = "2GB"
}
modified.RecommendedForLarge = true
return &modified
}
// max returns the larger of two integers
func max(a, b int) int {
if a > b {
return a
}
return b
}
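The removed `ApplyLargeDBMode` modifier (superseded by the dedicated large-db profile) reduced parallelism with a simple halve-with-floor rule. A sketch of just that rule, with an illustrative name:

```go
package main

import "fmt"

// halve applies the removed LargeDBMode rule: cut parallelism in half,
// but never drop below the given floor (1 for jobs, 2 for dump jobs).
func halve(v, floor int) int {
	h := v / 2
	if h < floor {
		return floor
	}
	return h
}

func main() {
	fmt.Println(halve(8, 1)) // performance profile: 8 jobs -> 4
	fmt.Println(halve(1, 1)) // conservative stays at 1
	fmt.Println(halve(2, 2)) // dump-jobs floor of 2 holds
}
```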
// DetectMemory detects system memory information
func DetectMemory() (*MemoryInfo, error) {
info := &MemoryInfo{
@@ -323,11 +293,11 @@ func RecommendProfile(cpuInfo *CPUInfo, memInfo *MemoryInfo, isLargeDB bool) *Re
memGB = memInfo.TotalGB
}
// Special case: large databases should use conservative profile
// The caller should also enable LargeDBMode for increased MaxLocksPerTxn
// Special case: large databases should always use conservative/large-db profile
if isLargeDB {
// For large DBs, recommend conservative regardless of resources
// LargeDBMode flag will handle the lock settings separately
if memGB >= 32 && cores >= 8 {
return &ProfileLargeDB // Still conservative but with more memory for maintenance
}
return &ProfileConservative
}
@@ -369,7 +339,7 @@ func RecommendProfileWithReason(cpuInfo *CPUInfo, memInfo *MemoryInfo, isLargeDB
profile := RecommendProfile(cpuInfo, memInfo, isLargeDB)
if isLargeDB {
reason.WriteString("Large database mode - using conservative settings. Enable LargeDBMode for higher max_locks.")
reason.WriteString("Large database detected - using conservative settings to avoid 'out of shared memory' errors.")
} else if profile.Name == "conservative" {
reason.WriteString("Limited resources detected - using conservative profile for stability.")
} else if profile.Name == "max-performance" {


@@ -4,6 +4,7 @@ import (
"context"
"database/sql"
"fmt"
"time"
"dbbackup/internal/config"
"dbbackup/internal/logger"
@@ -123,3 +124,11 @@ func (b *baseDatabase) Ping(ctx context.Context) error {
}
return b.db.PingContext(ctx)
}
// buildTimeout creates a context with timeout for database operations
func buildTimeout(ctx context.Context, timeout time.Duration) (context.Context, context.CancelFunc) {
if timeout <= 0 {
timeout = 30 * time.Second
}
return context.WithTimeout(ctx, timeout)
}


@@ -278,12 +278,8 @@ func (m *MySQL) GetTableRowCount(ctx context.Context, database, table string) (i
func (m *MySQL) BuildBackupCommand(database, outputFile string, options BackupOptions) []string {
cmd := []string{"mysqldump"}
// Connection parameters - socket takes priority, then localhost vs remote
if m.cfg.Socket != "" {
// Explicit socket path provided
cmd = append(cmd, "-S", m.cfg.Socket)
cmd = append(cmd, "-u", m.cfg.User)
} else if m.cfg.Host == "" || m.cfg.Host == "localhost" {
// Connection parameters - handle localhost vs remote differently
if m.cfg.Host == "" || m.cfg.Host == "localhost" {
// For localhost, use socket connection (don't specify host/port)
cmd = append(cmd, "-u", m.cfg.User)
} else {
@@ -342,12 +338,8 @@ func (m *MySQL) BuildRestoreCommand(database, inputFile string, options RestoreOp
func (m *MySQL) BuildRestoreCommand(database, inputFile string, options RestoreOptions) []string {
cmd := []string{"mysql"}
// Connection parameters - socket takes priority, then localhost vs remote
if m.cfg.Socket != "" {
// Explicit socket path provided
cmd = append(cmd, "-S", m.cfg.Socket)
cmd = append(cmd, "-u", m.cfg.User)
} else if m.cfg.Host == "" || m.cfg.Host == "localhost" {
// Connection parameters - handle localhost vs remote differently
if m.cfg.Host == "" || m.cfg.Host == "localhost" {
// For localhost, use socket connection (don't specify host/port)
cmd = append(cmd, "-u", m.cfg.User)
} else {
@@ -425,11 +417,8 @@ func (m *MySQL) buildDSN() string {
dsn += "@"
// Explicit socket takes priority
if m.cfg.Socket != "" {
dsn += "unix(" + m.cfg.Socket + ")"
} else if m.cfg.Host == "" || m.cfg.Host == "localhost" {
// Handle localhost with Unix socket vs TCP/IP
// Handle localhost with Unix socket vs TCP/IP
if m.cfg.Host == "" || m.cfg.Host == "localhost" {
// Try common socket paths for localhost connections
socketPaths := []string{
"/run/mysqld/mysqld.sock",


@@ -461,6 +461,61 @@ func (p *PostgreSQL) ValidateBackupTools() error {
return nil
}
// buildDSN constructs PostgreSQL connection string
func (p *PostgreSQL) buildDSN() string {
dsn := fmt.Sprintf("user=%s dbname=%s", p.cfg.User, p.cfg.Database)
if p.cfg.Password != "" {
dsn += " password=" + p.cfg.Password
}
// For localhost connections, try socket first for peer auth
if p.cfg.Host == "localhost" && p.cfg.Password == "" {
// Try Unix socket connection for peer authentication
// Common PostgreSQL socket locations
socketDirs := []string{
"/var/run/postgresql",
"/tmp",
"/var/lib/pgsql",
}
for _, dir := range socketDirs {
socketPath := fmt.Sprintf("%s/.s.PGSQL.%d", dir, p.cfg.Port)
if _, err := os.Stat(socketPath); err == nil {
dsn += " host=" + dir
p.log.Debug("Using PostgreSQL socket", "path", socketPath)
break
}
}
} else if p.cfg.Host != "localhost" || p.cfg.Password != "" {
// Use TCP connection
dsn += " host=" + p.cfg.Host
dsn += " port=" + strconv.Itoa(p.cfg.Port)
}
if p.cfg.SSLMode != "" && !p.cfg.Insecure {
// Map SSL modes to supported values for lib/pq
switch strings.ToLower(p.cfg.SSLMode) {
case "prefer", "preferred":
dsn += " sslmode=require" // lib/pq default, closest to prefer
case "require", "required":
dsn += " sslmode=require"
case "verify-ca":
dsn += " sslmode=verify-ca"
case "verify-full", "verify-identity":
dsn += " sslmode=verify-full"
case "disable", "disabled":
dsn += " sslmode=disable"
default:
dsn += " sslmode=require" // Safe default
}
} else if p.cfg.Insecure {
dsn += " sslmode=disable"
}
return dsn
}
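The SSL-mode switch in `buildDSN` collapses user-facing aliases onto the modes lib/pq actually accepts, with `require` as the safe default. That mapping can be isolated as a pure function for illustration (the `normalizeSSLMode` name is an assumption; the real code also handles the `Insecure` flag separately):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeSSLMode mirrors the switch in buildDSN: aliases collapse onto
// lib/pq-supported values, anything unrecognized becomes "require".
func normalizeSSLMode(mode string) string {
	switch strings.ToLower(mode) {
	case "verify-ca":
		return "verify-ca"
	case "verify-full", "verify-identity":
		return "verify-full"
	case "disable", "disabled":
		return "disable"
	default: // prefer, preferred, require, required, unknown values
		return "require"
	}
}

func main() {
	fmt.Println(normalizeSSLMode("preferred"))       // require
	fmt.Println(normalizeSSLMode("verify-identity")) // verify-full
	fmt.Println(normalizeSSLMode("disabled"))        // disable
}
```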
// buildPgxDSN builds a connection string for pgx
func (p *PostgreSQL) buildPgxDSN() string {
// pgx supports both URL and keyword=value formats


@@ -8,7 +8,7 @@ import (
"strings"
"time"
_ "modernc.org/sqlite" // Pure Go SQLite driver (no CGO required)
_ "github.com/mattn/go-sqlite3" // SQLite driver
)
// ChunkIndex provides fast chunk lookups using SQLite
@@ -32,7 +32,7 @@ func NewChunkIndexAt(dbPath string) (*ChunkIndex, error) {
}
// Add busy_timeout to handle lock contention gracefully
db, err := sql.Open("sqlite", dbPath+"?_journal_mode=WAL&_synchronous=NORMAL&_busy_timeout=5000")
db, err := sql.Open("sqlite3", dbPath+"?_journal_mode=WAL&_synchronous=NORMAL&_busy_timeout=5000")
if err != nil {
return nil, fmt.Errorf("failed to open chunk index: %w", err)
}


@@ -11,16 +13,13 @@ import (
// DedupMetrics holds deduplication statistics for Prometheus
type DedupMetrics struct {
// Global stats
TotalChunks int64
TotalManifests int64
TotalBackupSize int64 // Sum of all backup original sizes
TotalNewData int64 // Sum of all new chunks stored
SpaceSaved int64 // Bytes saved by deduplication
DedupRatio float64 // Overall dedup ratio (0-1)
DiskUsage int64 // Actual bytes on disk
OldestChunkEpoch int64 // Unix timestamp of oldest chunk
NewestChunkEpoch int64 // Unix timestamp of newest chunk
CompressionRatio float64 // Compression ratio (raw vs stored)
TotalChunks int64
TotalManifests int64
TotalBackupSize int64 // Sum of all backup original sizes
TotalNewData int64 // Sum of all new chunks stored
SpaceSaved int64 // Bytes saved by deduplication
DedupRatio float64 // Overall dedup ratio (0-1)
DiskUsage int64 // Actual bytes on disk
// Per-database stats
ByDatabase map[string]*DatabaseDedupMetrics
@@ -80,19 +77,6 @@ func CollectMetrics(basePath string, indexPath string) (*DedupMetrics, error) {
ByDatabase: make(map[string]*DatabaseDedupMetrics),
}
// Add chunk age timestamps
if !stats.OldestChunk.IsZero() {
metrics.OldestChunkEpoch = stats.OldestChunk.Unix()
}
if !stats.NewestChunk.IsZero() {
metrics.NewestChunkEpoch = stats.NewestChunk.Unix()
}
// Calculate compression ratio (raw size vs stored size)
if stats.TotalSizeRaw > 0 {
metrics.CompressionRatio = 1.0 - float64(stats.TotalSizeStored)/float64(stats.TotalSizeRaw)
}
// Collect per-database metrics from manifest store
manifestStore, err := NewManifestStore(basePath)
if err != nil {
@@ -169,85 +153,66 @@ func WritePrometheusTextfile(path string, instance string, basePath string, inde
}
// FormatPrometheusMetrics formats dedup metrics in Prometheus exposition format
func FormatPrometheusMetrics(m *DedupMetrics, server string) string {
func FormatPrometheusMetrics(m *DedupMetrics, instance string) string {
var b strings.Builder
now := time.Now().Unix()
b.WriteString("# DBBackup Deduplication Prometheus Metrics\n")
b.WriteString(fmt.Sprintf("# Generated at: %s\n", time.Now().Format(time.RFC3339)))
b.WriteString(fmt.Sprintf("# Server: %s\n", server))
b.WriteString(fmt.Sprintf("# Instance: %s\n", instance))
b.WriteString("\n")
// Global dedup metrics
b.WriteString("# HELP dbbackup_dedup_chunks_total Total number of unique chunks stored\n")
b.WriteString("# TYPE dbbackup_dedup_chunks_total gauge\n")
b.WriteString(fmt.Sprintf("dbbackup_dedup_chunks_total{server=%q} %d\n", server, m.TotalChunks))
b.WriteString(fmt.Sprintf("dbbackup_dedup_chunks_total{instance=%q} %d\n", instance, m.TotalChunks))
b.WriteString("\n")
b.WriteString("# HELP dbbackup_dedup_manifests_total Total number of deduplicated backups\n")
b.WriteString("# TYPE dbbackup_dedup_manifests_total gauge\n")
b.WriteString(fmt.Sprintf("dbbackup_dedup_manifests_total{server=%q} %d\n", server, m.TotalManifests))
b.WriteString(fmt.Sprintf("dbbackup_dedup_manifests_total{instance=%q} %d\n", instance, m.TotalManifests))
b.WriteString("\n")
b.WriteString("# HELP dbbackup_dedup_backup_bytes_total Total logical size of all backups in bytes\n")
b.WriteString("# TYPE dbbackup_dedup_backup_bytes_total gauge\n")
b.WriteString(fmt.Sprintf("dbbackup_dedup_backup_bytes_total{server=%q} %d\n", server, m.TotalBackupSize))
b.WriteString(fmt.Sprintf("dbbackup_dedup_backup_bytes_total{instance=%q} %d\n", instance, m.TotalBackupSize))
b.WriteString("\n")
b.WriteString("# HELP dbbackup_dedup_stored_bytes_total Total unique data stored in bytes (after dedup)\n")
b.WriteString("# TYPE dbbackup_dedup_stored_bytes_total gauge\n")
b.WriteString(fmt.Sprintf("dbbackup_dedup_stored_bytes_total{server=%q} %d\n", server, m.TotalNewData))
b.WriteString(fmt.Sprintf("dbbackup_dedup_stored_bytes_total{instance=%q} %d\n", instance, m.TotalNewData))
b.WriteString("\n")
b.WriteString("# HELP dbbackup_dedup_space_saved_bytes Bytes saved by deduplication\n")
b.WriteString("# TYPE dbbackup_dedup_space_saved_bytes gauge\n")
b.WriteString(fmt.Sprintf("dbbackup_dedup_space_saved_bytes{server=%q} %d\n", server, m.SpaceSaved))
b.WriteString(fmt.Sprintf("dbbackup_dedup_space_saved_bytes{instance=%q} %d\n", instance, m.SpaceSaved))
b.WriteString("\n")
b.WriteString("# HELP dbbackup_dedup_ratio Deduplication ratio (0-1, higher is better)\n")
b.WriteString("# TYPE dbbackup_dedup_ratio gauge\n")
b.WriteString(fmt.Sprintf("dbbackup_dedup_ratio{server=%q} %.4f\n", server, m.DedupRatio))
b.WriteString(fmt.Sprintf("dbbackup_dedup_ratio{instance=%q} %.4f\n", instance, m.DedupRatio))
b.WriteString("\n")
b.WriteString("# HELP dbbackup_dedup_disk_usage_bytes Actual disk usage of chunk store\n")
b.WriteString("# TYPE dbbackup_dedup_disk_usage_bytes gauge\n")
b.WriteString(fmt.Sprintf("dbbackup_dedup_disk_usage_bytes{server=%q} %d\n", server, m.DiskUsage))
b.WriteString(fmt.Sprintf("dbbackup_dedup_disk_usage_bytes{instance=%q} %d\n", instance, m.DiskUsage))
b.WriteString("\n")
b.WriteString("# HELP dbbackup_dedup_compression_ratio Compression ratio (0-1, higher = better compression)\n")
b.WriteString("# TYPE dbbackup_dedup_compression_ratio gauge\n")
b.WriteString(fmt.Sprintf("dbbackup_dedup_compression_ratio{server=%q} %.4f\n", server, m.CompressionRatio))
b.WriteString("\n")
if m.OldestChunkEpoch > 0 {
b.WriteString("# HELP dbbackup_dedup_oldest_chunk_timestamp Unix timestamp of oldest chunk (for retention monitoring)\n")
b.WriteString("# TYPE dbbackup_dedup_oldest_chunk_timestamp gauge\n")
b.WriteString(fmt.Sprintf("dbbackup_dedup_oldest_chunk_timestamp{server=%q} %d\n", server, m.OldestChunkEpoch))
b.WriteString("\n")
}
if m.NewestChunkEpoch > 0 {
b.WriteString("# HELP dbbackup_dedup_newest_chunk_timestamp Unix timestamp of newest chunk\n")
b.WriteString("# TYPE dbbackup_dedup_newest_chunk_timestamp gauge\n")
b.WriteString(fmt.Sprintf("dbbackup_dedup_newest_chunk_timestamp{server=%q} %d\n", server, m.NewestChunkEpoch))
b.WriteString("\n")
}
// Per-database metrics
if len(m.ByDatabase) > 0 {
b.WriteString("# HELP dbbackup_dedup_database_backup_count Number of deduplicated backups per database\n")
b.WriteString("# TYPE dbbackup_dedup_database_backup_count gauge\n")
for _, db := range m.ByDatabase {
b.WriteString(fmt.Sprintf("dbbackup_dedup_database_backup_count{server=%q,database=%q} %d\n",
server, db.Database, db.BackupCount))
b.WriteString(fmt.Sprintf("dbbackup_dedup_database_backup_count{instance=%q,database=%q} %d\n",
instance, db.Database, db.BackupCount))
}
b.WriteString("\n")
b.WriteString("# HELP dbbackup_dedup_database_ratio Deduplication ratio per database (0-1)\n")
b.WriteString("# TYPE dbbackup_dedup_database_ratio gauge\n")
for _, db := range m.ByDatabase {
b.WriteString(fmt.Sprintf("dbbackup_dedup_database_ratio{server=%q,database=%q} %.4f\n",
server, db.Database, db.DedupRatio))
b.WriteString(fmt.Sprintf("dbbackup_dedup_database_ratio{instance=%q,database=%q} %.4f\n",
instance, db.Database, db.DedupRatio))
}
b.WriteString("\n")
@@ -255,48 +220,16 @@ func FormatPrometheusMetrics(m *DedupMetrics, server string) string {
b.WriteString("# TYPE dbbackup_dedup_database_last_backup_timestamp gauge\n")
for _, db := range m.ByDatabase {
if !db.LastBackupTime.IsZero() {
b.WriteString(fmt.Sprintf("dbbackup_dedup_database_last_backup_timestamp{server=%q,database=%q} %d\n",
server, db.Database, db.LastBackupTime.Unix()))
b.WriteString(fmt.Sprintf("dbbackup_dedup_database_last_backup_timestamp{instance=%q,database=%q} %d\n",
instance, db.Database, db.LastBackupTime.Unix()))
}
}
b.WriteString("\n")
// Add RPO (Recovery Point Objective) metric for dedup backups - same metric name as regular backups
// This enables unified alerting across regular and dedup backup modes
b.WriteString("# HELP dbbackup_rpo_seconds Seconds since last successful backup (Recovery Point Objective)\n")
b.WriteString("# TYPE dbbackup_rpo_seconds gauge\n")
for _, db := range m.ByDatabase {
if !db.LastBackupTime.IsZero() {
rpoSeconds := now - db.LastBackupTime.Unix()
if rpoSeconds < 0 {
rpoSeconds = 0
}
b.WriteString(fmt.Sprintf("dbbackup_rpo_seconds{server=%q,database=%q} %d\n",
server, db.Database, rpoSeconds))
}
}
b.WriteString("\n")
b.WriteString("# HELP dbbackup_dedup_database_total_bytes Total logical size per database\n")
b.WriteString("# TYPE dbbackup_dedup_database_total_bytes gauge\n")
for _, db := range m.ByDatabase {
b.WriteString(fmt.Sprintf("dbbackup_dedup_database_total_bytes{server=%q,database=%q} %d\n",
server, db.Database, db.TotalSize))
}
b.WriteString("\n")
b.WriteString("# HELP dbbackup_dedup_database_stored_bytes Stored bytes per database (after dedup)\n")
b.WriteString("# TYPE dbbackup_dedup_database_stored_bytes gauge\n")
for _, db := range m.ByDatabase {
b.WriteString(fmt.Sprintf("dbbackup_dedup_database_stored_bytes{server=%q,database=%q} %d\n",
server, db.Database, db.StoredSize))
}
b.WriteString("\n")
}
b.WriteString("# HELP dbbackup_dedup_scrape_timestamp Unix timestamp when dedup metrics were collected\n")
b.WriteString("# TYPE dbbackup_dedup_scrape_timestamp gauge\n")
b.WriteString(fmt.Sprintf("dbbackup_dedup_scrape_timestamp{server=%q} %d\n", server, now))
b.WriteString(fmt.Sprintf("dbbackup_dedup_scrape_timestamp{instance=%q} %d\n", instance, now))
return b.String()
}
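The hunk above renames the `server` label to `instance` across every `fmt.Sprintf` call that builds Prometheus text-format lines. The `%q` verb does the label-value quoting and escaping. A minimal sketch of that exposition pattern (the `formatGauge` helper name is mine, not the project's):

```go
package main

import "fmt"

// formatGauge renders one Prometheus text-exposition sample line.
// %q quotes and escapes the label values, matching the pattern used
// throughout FormatPrometheusMetrics above.
func formatGauge(name, instance, database string, value float64) string {
	return fmt.Sprintf("%s{instance=%q,database=%q} %.4f\n", name, instance, database, value)
}

func main() {
	fmt.Print(formatGauge("dbbackup_dedup_database_ratio", "db01:5432", "app", 3.2145))
	// dbbackup_dedup_database_ratio{instance="db01:5432",database="app"} 3.2145
}
```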


@@ -1,6 +1,7 @@
package dedup
import (
"compress/gzip"
"crypto/aes"
"crypto/cipher"
"crypto/rand"
@@ -11,9 +12,6 @@ import (
"os"
"path/filepath"
"sync"
"time"
"github.com/klauspost/pgzip"
)
// ChunkStore manages content-addressed chunk storage
@@ -119,24 +117,12 @@ func (s *ChunkStore) Put(chunk *Chunk) (isNew bool, err error) {
}
path := s.chunkPath(chunk.Hash)
chunkDir := filepath.Dir(path)
// Create prefix directory with verification for CIFS/NFS
// Network filesystems can return success from MkdirAll before
// the directory is actually visible for writes
if err := os.MkdirAll(chunkDir, 0700); err != nil {
// Create prefix directory
if err := os.MkdirAll(filepath.Dir(path), 0700); err != nil {
return false, fmt.Errorf("failed to create chunk directory: %w", err)
}
// Verify directory exists (CIFS workaround)
for i := 0; i < 5; i++ {
if _, err := os.Stat(chunkDir); err == nil {
break
}
time.Sleep(20 * time.Millisecond)
os.MkdirAll(chunkDir, 0700) // retry mkdir
}
// Prepare data
data := chunk.Data
@@ -158,35 +144,13 @@ func (s *ChunkStore) Put(chunk *Chunk) (isNew bool, err error) {
// Write atomically (write to temp, then rename)
tmpPath := path + ".tmp"
// Write with retry for CIFS/NFS directory visibility lag
var writeErr error
for attempt := 0; attempt < 3; attempt++ {
if writeErr = os.WriteFile(tmpPath, data, 0600); writeErr == nil {
break
}
// Directory might not be visible yet on network FS
time.Sleep(20 * time.Millisecond)
os.MkdirAll(chunkDir, 0700)
}
if writeErr != nil {
return false, fmt.Errorf("failed to write chunk: %w", writeErr)
if err := os.WriteFile(tmpPath, data, 0600); err != nil {
return false, fmt.Errorf("failed to write chunk: %w", err)
}
// Rename with retry for CIFS/SMB flakiness
var renameErr error
for attempt := 0; attempt < 3; attempt++ {
if renameErr = os.Rename(tmpPath, path); renameErr == nil {
break
}
// Brief pause before retry on network filesystems
time.Sleep(10 * time.Millisecond)
// Re-ensure directory exists (refresh CIFS cache)
os.MkdirAll(filepath.Dir(path), 0700)
}
if renameErr != nil {
if err := os.Rename(tmpPath, path); err != nil {
os.Remove(tmpPath)
return false, fmt.Errorf("failed to commit chunk: %w", renameErr)
return false, fmt.Errorf("failed to commit chunk: %w", err)
}
// Update cache
@@ -253,10 +217,10 @@ func (s *ChunkStore) Delete(hash string) error {
// Stats returns storage statistics
type StoreStats struct {
TotalChunks int64
TotalSize int64 // Bytes on disk (after compression/encryption)
UniqueSize int64 // Bytes of unique data
Directories int
TotalChunks int64
TotalSize int64 // Bytes on disk (after compression/encryption)
UniqueSize int64 // Bytes of unique data
Directories int
}
// Stats returns statistics about the chunk store
@@ -310,10 +274,10 @@ func (s *ChunkStore) LoadIndex() error {
})
}
// compressData compresses data using parallel gzip
// compressData compresses data using gzip
func (s *ChunkStore) compressData(data []byte) ([]byte, error) {
var buf []byte
w, err := pgzip.NewWriterLevel((*bytesBuffer)(&buf), pgzip.BestCompression)
w, err := gzip.NewWriterLevel((*bytesBuffer)(&buf), gzip.BestCompression)
if err != nil {
return nil, err
}
@@ -336,7 +300,7 @@ func (b *bytesBuffer) Write(p []byte) (int, error) {
// decompressData decompresses gzip data
func (s *ChunkStore) decompressData(data []byte) ([]byte, error) {
r, err := pgzip.NewReader(&bytesReader{data: data})
r, err := gzip.NewReader(&bytesReader{data: data})
if err != nil {
return nil, err
}


@@ -1,6 +1,7 @@
package binlog
import (
"compress/gzip"
"context"
"encoding/json"
"fmt"
@@ -8,8 +9,6 @@ import (
"path/filepath"
"sync"
"time"
"github.com/klauspost/pgzip"
)
// FileTarget writes binlog events to local files
@@ -168,7 +167,7 @@ type CompressedFileTarget struct {
mu sync.Mutex
file *os.File
gzWriter *pgzip.Writer
gzWriter *gzip.Writer
written int64
fileNum int
healthy bool
@@ -262,7 +261,7 @@ func (c *CompressedFileTarget) openNewFile() error {
}
c.file = file
c.gzWriter = pgzip.NewWriter(file)
c.gzWriter = gzip.NewWriter(file)
c.written = 0
return nil
}
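`CompressedFileTarget` tracks `written` and `fileNum` so it can roll over to a fresh gzip file once the current one grows past a size limit. A stripped-down sketch of that size-based rotation logic (the `rotator` type and `add` method are mine; the real type wraps an `*os.File` and a gzip writer and reopens via `openNewFile`):

```go
package main

import "fmt"

// rotator tracks bytes written and signals when a new output file is
// needed. Assumed simplification of CompressedFileTarget's rotation.
type rotator struct {
	written int64 // bytes written to the current file
	limit   int64 // rotate once written reaches this
	fileNum int   // sequence number of the current file
}

// add records n written bytes and reports whether a rotation occurred.
func (r *rotator) add(n int64) (rotated bool) {
	r.written += n
	if r.written >= r.limit {
		r.fileNum++ // in the real target: close gzWriter, open next file
		r.written = 0
		return true
	}
	return false
}

func main() {
	r := &rotator{limit: 100}
	fmt.Println(r.add(60)) // false: 60 < 100
	fmt.Println(r.add(60)) // true: 120 >= 100, roll to next file
	fmt.Println(r.fileNum) // 1
}
```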


@@ -302,6 +302,85 @@ func (s *Streamer) shutdown() error {
return nil
}
// writeBatch writes a batch of events to all targets
func (s *Streamer) writeBatch(ctx context.Context, events []*Event) error {
if len(events) == 0 {
return nil
}
var lastErr error
for _, target := range s.targets {
if err := target.Write(ctx, events); err != nil {
s.log.Error("Failed to write to target", "target", target.Name(), "error", err)
lastErr = err
}
}
// Update state
last := events[len(events)-1]
s.mu.Lock()
s.state.Position = last.Position
s.state.EventCount += uint64(len(events))
s.state.LastUpdate = time.Now()
s.mu.Unlock()
s.eventsProcessed.Add(uint64(len(events)))
s.lastEventTime.Store(last.Timestamp.Unix())
return lastErr
}
// shouldProcess checks if an event should be processed based on filters
func (s *Streamer) shouldProcess(ev *Event) bool {
if s.config.Filter == nil {
return true
}
// Check database filter
if len(s.config.Filter.Databases) > 0 {
found := false
for _, db := range s.config.Filter.Databases {
if db == ev.Database {
found = true
break
}
}
if !found {
return false
}
}
// Check exclude databases
for _, db := range s.config.Filter.ExcludeDatabases {
if db == ev.Database {
return false
}
}
// Check table filter
if len(s.config.Filter.Tables) > 0 {
found := false
for _, t := range s.config.Filter.Tables {
if t == ev.Table {
found = true
break
}
}
if !found {
return false
}
}
// Check exclude tables
for _, t := range s.config.Filter.ExcludeTables {
if t == ev.Table {
return false
}
}
return true
}
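`shouldProcess` applies the same include/exclude shape twice, once for databases and once for tables: an empty include list means "allow all", and an exclusion rejects regardless. A generic sketch of that rule (the `matches` helper is mine; it checks excludes first, which yields the same results since an event must pass both checks):

```go
package main

import "fmt"

// matches reports whether name passes an include/exclude filter:
// an empty include list allows everything; exclusion always wins.
// Sketch of the per-database and per-table checks in shouldProcess.
func matches(name string, include, exclude []string) bool {
	for _, e := range exclude {
		if e == name {
			return false
		}
	}
	if len(include) == 0 {
		return true
	}
	for _, i := range include {
		if i == name {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(matches("orders", []string{"orders", "users"}, nil)) // true
	fmt.Println(matches("tmp", nil, []string{"tmp"}))                // false
	fmt.Println(matches("logs", []string{"orders"}, nil))            // false
}
```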
// checkpointLoop periodically saves checkpoint
func (s *Streamer) checkpointLoop(ctx context.Context) {
ticker := time.NewTicker(s.config.CheckpointInterval)


@@ -2,6 +2,7 @@ package engine
import (
"archive/tar"
"compress/gzip"
"context"
"database/sql"
"fmt"
@@ -13,12 +14,9 @@ import (
"strings"
"time"
"dbbackup/internal/checks"
"dbbackup/internal/logger"
"dbbackup/internal/metadata"
"dbbackup/internal/security"
"github.com/klauspost/pgzip"
)
// CloneEngine implements BackupEngine using MySQL Clone Plugin (8.0.17+)
@@ -576,18 +574,8 @@ func (e *CloneEngine) getCloneStatus(ctx context.Context) (*CloneStatus, error)
func (e *CloneEngine) validatePrerequisites(ctx context.Context) ([]string, error) {
var warnings []string
// Check disk space on target directory
if e.config.DataDirectory != "" {
diskCheck := checks.CheckDiskSpace(e.config.DataDirectory)
if diskCheck.Critical {
return nil, fmt.Errorf("insufficient disk space on %s: only %.1f%% available (%.2f GB free)",
e.config.DataDirectory, 100-diskCheck.UsedPercent, float64(diskCheck.AvailableBytes)/(1024*1024*1024))
}
if diskCheck.Warning {
warnings = append(warnings, fmt.Sprintf("low disk space on %s: %.1f%% used (%.2f GB free)",
e.config.DataDirectory, diskCheck.UsedPercent, float64(diskCheck.AvailableBytes)/(1024*1024*1024)))
}
}
// Check disk space
// TODO: Implement disk space check
// Check that we're not cloning to same directory as source
var datadir string
@@ -609,12 +597,12 @@ func (e *CloneEngine) compressClone(ctx context.Context, sourceDir, targetFile s
}
defer outFile.Close()
// Create parallel gzip writer for faster compression
// Create gzip writer
level := e.config.CompressLevel
if level == 0 {
level = pgzip.DefaultCompression
level = gzip.DefaultCompression
}
gzWriter, err := pgzip.NewWriterLevel(outFile, level)
gzWriter, err := gzip.NewWriterLevel(outFile, level)
if err != nil {
return err
}
@@ -696,8 +684,8 @@ func (e *CloneEngine) extractClone(ctx context.Context, sourceFile, targetDir st
}
defer file.Close()
// Create parallel gzip reader for faster decompression
gzReader, err := pgzip.NewReader(file)
// Create gzip reader
gzReader, err := gzip.NewReader(file)
if err != nil {
return err
}

Some files were not shown because too many files have changed in this diff.