Compare commits

...

40 Commits

Author SHA1 Message Date
e449e2f448 v5.8.24: Add TUI option to skip preflight checks with warning
Some checks failed
CI/CD / Test (push) Successful in 3m22s
CI/CD / Lint (push) Successful in 1m47s
CI/CD / Integration Tests (push) Successful in 1m15s
CI/CD / Native Engine Tests (push) Successful in 1m11s
CI/CD / Build Binary (push) Successful in 1m2s
CI/CD / Test Release Build (push) Successful in 1m46s
CI/CD / Release Binaries (push) Failing after 12m25s
2026-02-05 13:01:38 +00:00
dceab64b67 v5.8.23: Add Go unit tests for context cancellation verification
Some checks failed
CI/CD / Test (push) Successful in 3m8s
CI/CD / Lint (push) Successful in 1m32s
CI/CD / Integration Tests (push) Successful in 1m18s
CI/CD / Native Engine Tests (push) Successful in 1m9s
CI/CD / Build Binary (push) Successful in 57s
CI/CD / Test Release Build (push) Successful in 1m45s
CI/CD / Release Binaries (push) Failing after 12m3s
2026-02-05 12:52:42 +00:00
a101fb81ab v5.8.22: Defensive fixes for potential restore hang issues
Some checks failed
CI/CD / Test (push) Successful in 3m25s
CI/CD / Lint (push) Successful in 1m33s
CI/CD / Integration Tests (push) Successful in 1m4s
CI/CD / Native Engine Tests (push) Successful in 1m2s
CI/CD / Build Binary (push) Successful in 56s
CI/CD / Test Release Build (push) Successful in 1m41s
CI/CD / Release Binaries (push) Failing after 11m55s
- Add context cancellation check during COPY data parsing loop
  (prevents hangs when parsing large tables with millions of rows)
- Add 5-second timeout for stderr reader in globals restore
  (prevents indefinite hang if psql process doesn't terminate cleanly)
- Reduce database drop timeout from 5 minutes to 60 seconds
  (improves TUI responsiveness during cluster cleanup)
2026-02-05 12:40:26 +00:00
555177f5a7 v5.8.21: Fix TUI menu handler mismatch and add InterruptMsg handlers
Some checks failed
CI/CD / Test (push) Successful in 3m10s
CI/CD / Lint (push) Successful in 1m31s
CI/CD / Integration Tests (push) Successful in 1m9s
CI/CD / Native Engine Tests (push) Successful in 1m2s
CI/CD / Build Binary (push) Successful in 54s
CI/CD / Test Release Build (push) Successful in 1m46s
CI/CD / Release Binaries (push) Failing after 11m4s
- Fix menu.go case 10/11 mismatch (separator vs profile item)
- Add tea.InterruptMsg handlers for Bubbletea v1.3+ SIGINT handling:
  - archive_browser.go
  - restore_preview.go
  - confirmation.go
  - dbselector.go
  - cluster_db_selector.go
  - profile.go
- Add missing ctrl+c key handlers to cluster_db_selector and profile
- Fix ConfirmationModel fallback to use context.Background() if nil
2026-02-05 12:34:21 +00:00
0d416ecb55 v5.8.20: Fix restore ETA display showing 0.0s on large cluster restores
Some checks failed
CI/CD / Test (push) Successful in 3m12s
CI/CD / Lint (push) Successful in 1m32s
CI/CD / Integration Tests (push) Successful in 1m7s
CI/CD / Native Engine Tests (push) Successful in 1m0s
CI/CD / Build Binary (push) Successful in 53s
CI/CD / Test Release Build (push) Successful in 1m47s
CI/CD / Release Binaries (push) Failing after 10m34s
- Calculate dbPhaseElapsed in all 3 restore callbacks after setting phase3StartTime
- Always recalculate elapsed from phase3StartTime in getCurrentRestoreProgress
- Fixes ETA and Elapsed display in TUI cluster restore progress
- Same fix pattern as v5.8.19 for backup
2026-02-05 12:23:39 +00:00
1fe16ef89b v5.8.19: Fix backup ETA display showing 0.0s on large cluster dumps
Some checks failed
CI/CD / Test (push) Successful in 3m9s
CI/CD / Lint (push) Successful in 1m31s
CI/CD / Integration Tests (push) Successful in 1m6s
CI/CD / Native Engine Tests (push) Successful in 1m2s
CI/CD / Build Binary (push) Successful in 55s
CI/CD / Test Release Build (push) Successful in 1m46s
CI/CD / Release Binaries (push) Failing after 11m15s
- Calculate dbPhaseElapsed in callback immediately after setting phase2StartTime
- Always recalculate elapsed from phase2StartTime in getCurrentBackupProgress
- Add debug log when phase 2 starts for troubleshooting
- Fixes ETA and Elapsed display in TUI cluster backup progress
2026-02-05 12:21:09 +00:00
4507ec682f v5.8.18: Add TUI debug logging for interactive restore debugging
Some checks failed
CI/CD / Test (push) Successful in 3m8s
CI/CD / Lint (push) Successful in 1m12s
CI/CD / Integration Tests (push) Successful in 54s
CI/CD / Native Engine Tests (push) Successful in 52s
CI/CD / Build Binary (push) Successful in 44s
CI/CD / Test Release Build (push) Successful in 1m18s
CI/CD / Release Binaries (push) Failing after 11m21s
- TUI debug log writes continuously to dbbackup-tui-debug-*.log
- Logs at key restore phases: context check, DB client, cluster clean, restore call
- Sync after each write to capture state even if hang occurs
- Log file in WorkDir (default /tmp) when 'd' is pressed in restore preview
2026-02-05 12:02:35 +00:00
084b8bd279 v5.8.17: Add PostgreSQL connection timeouts as hang safeguard
Some checks failed
CI/CD / Test (push) Successful in 3m6s
CI/CD / Lint (push) Successful in 1m10s
CI/CD / Integration Tests (push) Successful in 56s
CI/CD / Native Engine Tests (push) Successful in 51s
CI/CD / Build Binary (push) Successful in 43s
CI/CD / Test Release Build (push) Successful in 1m17s
CI/CD / Release Binaries (push) Failing after 9m55s
- Set statement_timeout=1h, lock_timeout=5min, idle_in_transaction_session_timeout=10min
- These server-side timeouts ensure stuck queries abort even if context cancellation fails
- Additional defense-in-depth for TUI cluster restore hang issue
- Add test_cancel.sh for verifying cancellation behavior
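A minimal sketch of applying such server-side safeguards. The three setting names are real PostgreSQL GUCs; the Go wiring (plain string list, printed instead of executed) is illustrative only:

```go
package main

import "fmt"

// sessionSafeguards lists the per-session timeouts the commit describes.
// Even if client-side context cancellation fails, the server aborts stuck
// work on its own.
var sessionSafeguards = []string{
	"SET statement_timeout = '1h'",  // abort any single statement after 1 hour
	"SET lock_timeout = '5min'",     // don't wait on locks forever
	"SET idle_in_transaction_session_timeout = '10min'", // kill abandoned transactions
}

func main() {
	for _, stmt := range sessionSafeguards {
		fmt.Println(stmt) // in real code: conn.Exec(ctx, stmt) after connecting
	}
}
```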
2026-02-05 11:43:20 +00:00
0d85caea53 v5.8.16: Fix TUI cluster restore hang on large SQL files - adds context cancellation support to parseStatements and schema execution loop
Some checks failed
CI/CD / Test (push) Successful in 3m31s
CI/CD / Lint (push) Successful in 1m13s
CI/CD / Integration Tests (push) Successful in 56s
CI/CD / Native Engine Tests (push) Successful in 53s
CI/CD / Build Binary (push) Successful in 42s
CI/CD / Test Release Build (push) Successful in 1m17s
CI/CD / Release Binaries (push) Failing after 10m11s
2026-02-05 11:28:04 +00:00
3624ff54ff v5.8.15: Fix TUI cluster restore hang on large SQL files
Some checks failed
CI/CD / Test (push) Has been cancelled
CI/CD / Integration Tests (push) Has been cancelled
CI/CD / Native Engine Tests (push) Has been cancelled
CI/CD / Lint (push) Has been cancelled
CI/CD / Build Binary (push) Has been cancelled
CI/CD / Test Release Build (push) Has been cancelled
CI/CD / Release Binaries (push) Has been cancelled
- Add context cancellation support to parseStatementsWithContext()
- Check for cancellation every 10000 lines during SQL parsing
- Add context checks in schema statement execution loop
- Use context-aware parsing in RestoreFile() for proper Ctrl+C handling
- Complements v5.8.14 panic recovery fix by preventing hangs
2026-02-05 11:27:08 +00:00
696273816e ci: Remove port bindings to fix 'port already in use' errors
Some checks failed
CI/CD / Test (push) Successful in 3m9s
CI/CD / Lint (push) Successful in 1m11s
CI/CD / Integration Tests (push) Successful in 54s
CI/CD / Native Engine Tests (push) Successful in 52s
CI/CD / Build Binary (push) Successful in 47s
CI/CD / Test Release Build (push) Successful in 1m29s
CI/CD / Release Binaries (push) Failing after 10m32s
Services on a container network can communicate with each other
via hostname without binding to host ports. This fixes CI failures
when ports 5432/3306 are already in use on the runner.
2026-02-05 10:51:42 +00:00
2b7cfa4b67 release.sh: Add -m/--message flag for release comment
Some checks failed
CI/CD / Test (push) Successful in 3m0s
CI/CD / Lint (push) Successful in 1m11s
CI/CD / Integration Tests (push) Failing after 3s
CI/CD / Native Engine Tests (push) Successful in 50s
CI/CD / Build Binary (push) Successful in 41s
CI/CD / Test Release Build (push) Successful in 1m17s
CI/CD / Release Binaries (push) Has been skipped
2026-02-05 09:24:42 +00:00
714ff3a41d Add release.sh script for automated GitHub releases
- release.sh: Build binaries and create/update GitHub releases
- Token stored in .gh_token (gitignored for security)

Usage:
  ./release.sh              # Build and release current version
  ./release.sh --bump       # Bump patch version, build, and release
  ./release.sh --update     # Update existing release with new binaries
  ./release.sh --dry-run    # Preview actions
2026-02-05 09:19:06 +00:00
b095e2fab5 v5.8.14: Fix TUI cluster restore panic/hang on SQL file from pg_dump
Some checks failed
CI/CD / Test (push) Successful in 3m11s
CI/CD / Lint (push) Successful in 1m11s
CI/CD / Integration Tests (push) Failing after 3s
CI/CD / Native Engine Tests (push) Successful in 53s
CI/CD / Build Binary (push) Successful in 43s
CI/CD / Test Release Build (push) Successful in 1m18s
CI/CD / Release Binaries (push) Failing after 9m53s
CRITICAL BUG FIX:
- Fixed BubbleTea execBatchMsg WaitGroup deadlock during cluster restore
- Root cause: panic recovery in tea.Cmd functions returned nil instead of tea.Msg
- When panics were recovered, no message was sent to BubbleTea, causing
  the internal WaitGroup to wait forever (deadlock)

Changes:
- restore_exec.go: Use named return value (returnMsg) in panic recovery
  to ensure BubbleTea always receives a message even on panic
- backup_exec.go: Apply same fix for backup execution consistency
- parallel_restore.go: Verified labeled breaks (copyLoop, postDataLoop)
  are correctly implemented for context cancellation

Technical details:
- In Go, a deferred function cannot use 'return' to set the enclosing function's return value
- With named return values, however, a deferred function can assign to them directly
- This ensures the tea.Cmd always returns a tea.Msg, preventing the deadlock

Tested: All TUI and restore tests pass
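The named-return trick described above, in isolation. tea.Msg is replaced by a local stand-in type so the sketch is self-contained; the real fix lives in restore_exec.go:

```go
package main

import "fmt"

type teaMsg interface{} // stand-in for tea.Msg

type errMsg struct{ err error }

// runCmd mimics the fix: because returnMsg is a NAMED return value, the
// deferred recover can overwrite it, so the caller always receives a
// message even when the body panics. A recovered tea.Cmd that returned nil
// is exactly what deadlocked BubbleTea's internal WaitGroup.
func runCmd(body func() teaMsg) (returnMsg teaMsg) {
	defer func() {
		if r := recover(); r != nil {
			returnMsg = errMsg{fmt.Errorf("panic: %v", r)}
		}
	}()
	return body()
}

func main() {
	fmt.Printf("%v\n", runCmd(func() teaMsg { panic("boom") }))
}
```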
2026-02-05 09:09:40 +00:00
e6c0ca0667 v5.8.13: Add -trimpath to all builds for clean stack traces
Some checks failed
CI/CD / Test (push) Successful in 2m59s
CI/CD / Lint (push) Failing after 17s
CI/CD / Build Binary (push) Has been skipped
CI/CD / Test Release Build (push) Has been skipped
CI/CD / Integration Tests (push) Failing after 3s
CI/CD / Native Engine Tests (push) Successful in 52s
CI/CD / Release Binaries (push) Has been skipped
2026-02-05 05:03:15 +00:00
79dc604eb6 v5.8.12: Fix config loading for non-standard home directories
Some checks failed
CI/CD / Test (push) Has been cancelled
CI/CD / Integration Tests (push) Has been cancelled
CI/CD / Native Engine Tests (push) Has been cancelled
CI/CD / Lint (push) Has been cancelled
CI/CD / Build Binary (push) Has been cancelled
CI/CD / Test Release Build (push) Has been cancelled
CI/CD / Release Binaries (push) Has been cancelled
- Config now searches: ./ → ~/ → /etc/dbbackup.conf → /etc/dbbackup/dbbackup.conf
- Works for postgres user with home at /var/lib/postgresql
- Added ConfigSearchPaths() and LoadLocalConfigWithPath()
- Log shows which config path was loaded
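Assuming the search order is exactly as listed above, the path list can be sketched as follows. ConfigSearchPaths is named in the commit, but this signature and the exact filenames in the home/current directory are assumptions:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// configSearchPaths returns candidate config locations in priority order:
// current directory, then home, then the system-wide paths. The first
// existing file wins. os.UserHomeDir works for non-standard homes such as
// the postgres user's /var/lib/postgresql.
func configSearchPaths() []string {
	home, _ := os.UserHomeDir()
	return []string{
		".dbbackup.conf",
		filepath.Join(home, ".dbbackup.conf"),
		"/etc/dbbackup.conf",
		"/etc/dbbackup/dbbackup.conf",
	}
}

func main() {
	for _, p := range configSearchPaths() {
		fmt.Println(p)
	}
}
```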
2026-02-04 19:18:25 +01:00
de88e38f93 v5.8.11: TUI deadlock fix, systemd-run isolation, restore dry-run, audit signing
Some checks failed
CI/CD / Test (push) Has been cancelled
CI/CD / Integration Tests (push) Has been cancelled
CI/CD / Native Engine Tests (push) Has been cancelled
CI/CD / Lint (push) Has been cancelled
CI/CD / Build Binary (push) Has been cancelled
CI/CD / Test Release Build (push) Has been cancelled
CI/CD / Release Binaries (push) Has been cancelled
Fixed:
- TUI deadlock from goroutine leaks in pgxpool connection handling

Added:
- systemd-run resource isolation for long-running jobs (cgroups.go)
- Restore dry-run with 10 pre-restore validation checks (dryrun.go)
- Ed25519 audit log signing with hash chains (audit.go)
2026-02-04 18:58:08 +01:00
97c52ab9e5 fix(pgxpool): properly cleanup goroutine on both Close() and context cancel
Some checks failed
CI/CD / Test (push) Has been cancelled
CI/CD / Integration Tests (push) Has been cancelled
CI/CD / Native Engine Tests (push) Has been cancelled
CI/CD / Lint (push) Has been cancelled
CI/CD / Build Binary (push) Has been cancelled
CI/CD / Test Release Build (push) Has been cancelled
CI/CD / Release Binaries (push) Has been cancelled
The cleanup goroutine was only waiting on ctx.Done(), which meant:
- Normal Close() calls left the goroutine hanging forever
- Only Ctrl+C (context cancel) would stop the goroutine

Now the goroutine uses a select statement to wait on either:
- ctx.Done() - context cancelled (Ctrl+C)
- closeCh - explicit Close() call

This ensures no goroutine leaks in either scenario.
2026-02-04 14:56:14 +01:00
3c9e5f04ca fix(native): generate .meta.json for native engine backups
Some checks failed
CI/CD / Test (push) Has been cancelled
CI/CD / Integration Tests (push) Has been cancelled
CI/CD / Native Engine Tests (push) Has been cancelled
CI/CD / Lint (push) Has been cancelled
CI/CD / Build Binary (push) Has been cancelled
CI/CD / Test Release Build (push) Has been cancelled
CI/CD / Release Binaries (push) Has been cancelled
The native backup engine was not creating .meta.json metadata files,
causing catalog sync to skip these backups and Prometheus metrics
to show stale timestamps.

Now native backups create proper metadata including:
- Timestamp, database, host, port
- File size and SHA256 checksum
- Duration and compression info
- Engine name and objects processed

Fixes catalog sync and Prometheus exporter metrics for native backups.
2026-02-04 13:07:08 +01:00
86a28b6ec5 fix: ensure pgxpool closes on context cancellation (Ctrl+C hang fix v2)
Some checks failed
CI/CD / Test (push) Has been cancelled
CI/CD / Integration Tests (push) Has been cancelled
CI/CD / Native Engine Tests (push) Has been cancelled
CI/CD / Lint (push) Has been cancelled
CI/CD / Build Binary (push) Has been cancelled
CI/CD / Test Release Build (push) Has been cancelled
CI/CD / Release Binaries (push) Has been cancelled
- Added goroutine to explicitly close pgxpool when context is cancelled
- pgxpool.Close() must be called explicitly - context cancellation alone doesn't stop the background health check
- Reduced HealthCheckPeriod from 1 minute to 5 seconds for faster shutdown
- Applied fix to both parallel_restore.go and database/postgresql.go

This properly fixes the hanging goroutines on Ctrl+C during TUI restore operations.

Version 5.8.8
2026-02-04 11:23:12 +01:00
63b35414d2 fix: pgxpool context cancellation hang on Ctrl+C during cluster restore
Some checks failed
CI/CD / Test (push) Has been cancelled
CI/CD / Integration Tests (push) Has been cancelled
CI/CD / Native Engine Tests (push) Has been cancelled
CI/CD / Lint (push) Has been cancelled
CI/CD / Build Binary (push) Has been cancelled
CI/CD / Test Release Build (push) Has been cancelled
CI/CD / Release Binaries (push) Has been cancelled
- Fixed pgxpool created with context.Background() causing background health check goroutine to hang
- Added NewParallelRestoreEngineWithContext() to properly pass cancellable context
- Added context cancellation checks in parallel worker goroutines (Phase 3 COPY, Phase 4 indexes)
- Workers now exit cleanly when context is cancelled instead of continuing indefinitely

Version 5.8.7
2026-02-04 08:14:35 +01:00
db46770e7f v5.8.6: Support pg_dumpall SQL files in cluster restore
Some checks failed
CI/CD / Test (push) Successful in 2m59s
CI/CD / Lint (push) Successful in 1m10s
CI/CD / Integration Tests (push) Failing after 25s
CI/CD / Native Engine Tests (push) Successful in 50s
CI/CD / Build Binary (push) Successful in 44s
CI/CD / Test Release Build (push) Successful in 1m17s
CI/CD / Release Binaries (push) Failing after 10m7s
NEW FEATURE:
- TUI cluster restore now accepts .sql and .sql.gz files (pg_dumpall output)
- Uses native engine automatically for SQL-based cluster restores
- Added CanBeClusterRestore() method to detect valid cluster formats

Supported cluster restore formats:
- .tar.gz (dbbackup cluster format)
- .sql (pg_dumpall plain format)
- .sql.gz (pg_dumpall compressed format)
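The format check CanBeClusterRestore performs can be approximated with a suffix test. A free function is used here for illustration; the real method presumably hangs off an archive type:

```go
package main

import (
	"fmt"
	"strings"
)

// canBeClusterRestore reports whether a backup file is a valid input for
// cluster restore: the dbbackup cluster archive or pg_dumpall SQL output.
func canBeClusterRestore(name string) bool {
	for _, suf := range []string{".tar.gz", ".sql", ".sql.gz"} {
		if strings.HasSuffix(name, suf) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(canBeClusterRestore("cluster_20260203.tar.gz"))
	fmt.Println(canBeClusterRestore("mydb.dump")) // single-DB custom format: rejected
}
```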
2026-02-03 22:38:32 +01:00
51764a677a v5.8.5: Improve cluster restore error message for pg_dumpall SQL files
Some checks failed
CI/CD / Integration Tests (push) Has been cancelled
CI/CD / Native Engine Tests (push) Has been cancelled
CI/CD / Lint (push) Has been cancelled
CI/CD / Build Binary (push) Has been cancelled
CI/CD / Test Release Build (push) Has been cancelled
CI/CD / Release Binaries (push) Has been cancelled
CI/CD / Test (push) Has been cancelled
- Better error message when selecting non-.tar.gz file in cluster restore
- Explains that pg_dumpall SQL files should be restored via: psql -f <file.sql>
- Shows actual psql command with correct host/port/user from config
2026-02-03 22:27:39 +01:00
bdbbb59e51 v5.8.4: Fix config file loading (was completely broken)
Some checks failed
CI/CD / Test (push) Has been cancelled
CI/CD / Integration Tests (push) Has been cancelled
CI/CD / Native Engine Tests (push) Has been cancelled
CI/CD / Lint (push) Has been cancelled
CI/CD / Build Binary (push) Has been cancelled
CI/CD / Test Release Build (push) Has been cancelled
CI/CD / Release Binaries (push) Has been cancelled
CRITICAL FIX:
- Config file loading was completely broken since v5.x
- A duplicate PersistentPreRunE was overwriting the config loading logic
- Now .dbbackup.conf and --config flag work correctly

The second PersistentPreRunE (for password deprecation) was replacing
the entire config loading logic, so no config files were ever loaded.
2026-02-03 22:11:31 +01:00
1a6ea13222 v5.8.3: Fix TUI cluster restore validation for non-tar.gz files
Some checks failed
CI/CD / Test (push) Has been cancelled
CI/CD / Integration Tests (push) Has been cancelled
CI/CD / Native Engine Tests (push) Has been cancelled
CI/CD / Lint (push) Has been cancelled
CI/CD / Build Binary (push) Has been cancelled
CI/CD / Test Release Build (push) Has been cancelled
CI/CD / Release Binaries (push) Has been cancelled
- Block selection of single DB backups (.sql, .dump) in cluster restore mode
- Show informative error message when wrong backup type selected
- Prevents misleading error at restore execution time
2026-02-03 22:02:55 +01:00
598056ffe3 release: v5.8.2 - TUI Archive Selection Fix + Config Save Fix
Some checks failed
CI/CD / Integration Tests (push) Has been cancelled
CI/CD / Native Engine Tests (push) Has been cancelled
CI/CD / Lint (push) Has been cancelled
CI/CD / Build Binary (push) Has been cancelled
CI/CD / Test Release Build (push) Has been cancelled
CI/CD / Release Binaries (push) Has been cancelled
CI/CD / Test (push) Has been cancelled
FIXES:
- TUI: All backup formats (.sql, .sql.gz, .dump, .tar.gz) now selectable for restore
- Config: SaveLocalConfig now ALWAYS writes all values (even 0)
- Config: Added timestamp to saved config files

TESTS:
- Added TestConfigSaveLoad and TestConfigSaveZeroValues
- Added TestDetectArchiveFormatAll for format detection
2026-02-03 20:21:38 +01:00
185c8fb0f3 release: v5.8.1 - TUI Archive Browser Fix
Some checks failed
CI/CD / Test (push) Has been cancelled
CI/CD / Integration Tests (push) Has been cancelled
CI/CD / Native Engine Tests (push) Has been cancelled
CI/CD / Lint (push) Has been cancelled
CI/CD / Build Binary (push) Has been cancelled
CI/CD / Test Release Build (push) Has been cancelled
CI/CD / Release Binaries (push) Has been cancelled
2026-02-03 20:09:13 +01:00
d80ac4cae4 fix(tui): Allow any .tar.gz file as cluster backup in archive browser
Previously, only files with "cluster" in the name AND .tar.gz extension
were recognized as cluster backups. This prevented users from selecting
renamed backup files.

Now ALL .tar.gz files are recognized as cluster backup archives,
since that is the standard format for cluster backups.

Also improved error message clarity.
2026-02-03 20:07:35 +01:00
35535f1010 release: v5.8.0 - Parallel BLOB Engine & Performance Optimizations
Some checks failed
CI/CD / Test (push) Has been cancelled
CI/CD / Integration Tests (push) Has been cancelled
CI/CD / Native Engine Tests (push) Has been cancelled
CI/CD / Lint (push) Has been cancelled
CI/CD / Build Binary (push) Has been cancelled
CI/CD / Test Release Build (push) Has been cancelled
CI/CD / Release Binaries (push) Has been cancelled
🚀 MAJOR RELEASE: v5.8.0

NEW FEATURES:
═══════════════════════════════════════════════════════════════
 Parallel Restore Engine (parallel_restore.go)
   - Matches pg_restore -j8 performance for SQL format
   - Worker pool with semaphore pattern
   - Schema → COPY DATA → Indexes in proper phases

 BLOB Parallel Engine (blob_parallel.go)
   - PostgreSQL Specialist optimized
   - Parallel BYTEA column backup/restore
   - Large Object (pg_largeobject) support
   - Streaming for memory efficiency
   - Throughput monitoring (MB/s)

 Session Optimizations
   - work_mem = 256MB
   - maintenance_work_mem = 512MB
   - synchronous_commit = off
   - session_replication_role = replica

FIXES:
═══════════════════════════════════════════════════════════════
 TUI Timer Reset Issue
   - Fixed heartbeat showing "running: 5s" then reset
   - Now shows: "running: Xs (phase: Ym Zs)"

 Config Save/Load Bug
   - ApplyLocalConfig now always applies saved values
   - Fixed values matching defaults being skipped

PERFORMANCE:
═══════════════════════════════════════════════════════════════
Before: 120GB restore = 10+ hours (sequential SQL)
After:  120GB restore = ~240 minutes (parallel like pg_restore -j8)
2026-02-03 19:55:54 +01:00
ec7a51047c feat(blob): Add parallel BLOB backup/restore engine - PostgreSQL specialist optimization
🚀 PARALLEL BLOB ENGINE (blob_parallel.go) - NEW

PostgreSQL Specialist + Go Dev + Linux Admin collaboration:

BLOB DISCOVERY & ANALYSIS:
- AnalyzeBlobTables() - Detects all BYTEA columns in database
- Queries pg_largeobject for Large Object count and size
- Prioritizes tables by estimated BLOB size (largest first)
- Supports intelligent workload distribution

PARALLEL BLOB BACKUP:
- BackupBlobTables() - Parallel worker pool for BLOB tables
- backupTableBlobs() - Per-table streaming with gzip
- BackupLargeObjects() - Parallel lo_get() export
- StreamingBlobBackup() - Cursor-based for very large tables

PARALLEL BLOB RESTORE:
- RestoreBlobTables() - Parallel COPY FROM for BLOB data
- RestoreLargeObjects() - Parallel lo_create/lo_put
- ExecuteParallelCOPY() - Optimized multi-table COPY

SESSION OPTIMIZATIONS (per-connection):
- work_mem = 256MB (sorting/hashing)
- maintenance_work_mem = 512MB (constraint validation)
- synchronous_commit = off (no WAL sync wait)
- session_replication_role = replica (disable triggers)
- wal_buffers = 64MB (larger WAL buffer)
- checkpoint_completion_target = 0.9 (spread I/O)

CONFIGURATION OPTIONS:
- Workers: Parallel worker count (default: 4)
- ChunkSize: 8MB for streaming large BLOBs
- LargeBlobThreshold: 10MB = "large"
- CopyBufferSize: 1MB buffer
- ProgressCallback: Real-time monitoring

STATISTICS TRACKING:
- ThroughputMBps, LargestBlobSize, AverageBlobSize
- TablesWithBlobs, LargeObjectsCount, LargeObjectsBytes

This matches pg_dump/pg_restore -j performance for BLOB-heavy databases.
2026-02-03 19:53:42 +01:00
b00050e015 fix(config): Always apply saved config values, not just non-defaults
Bug: ApplyLocalConfig was checking if current value matched default
before applying saved config. This caused saved values that happen
to match defaults (e.g., compression=6) to not be loaded.

Fix: Always apply non-empty/non-zero values from config file.
CLI flag overrides are already handled in root.go after this function.
2026-02-03 19:47:52 +01:00
f323e9ae3a feat(restore): Add parallel restore engine for SQL format - matches pg_restore -j8 performance 2026-02-03 19:41:17 +01:00
f3767e3064 Cluster Restore: Fix timer display, add SQL format warning, optimize performance
Timer Fix:
- Show both per-database and overall phase elapsed time in heartbeat
- Changed 'elapsed: Xs' to 'running: Xs (phase: Ym Zs)'
- Fixes confusing timer reset when each database completes

SQL Format Warning:
- Detect .sql.gz backup format before restore
- Display prominent warning that SQL format cannot use parallel restore
- Explain 3-5x slowdown compared to pg_restore -j8
- Recommend --use-native-engine=false for faster future restores

Performance Optimizations:
- psql: Add performance tuning via -c flags (synchronous_commit=off, work_mem, maintenance_work_mem)
- Native engine: Extended optimizations including:
  - wal_level=minimal, fsync=off, full_page_writes=off
  - max_parallel_workers_per_gather=4
  - checkpoint_timeout=1h, max_wal_size=10GB
- Reduce progress callback overhead (every 1000 statements vs 100)

Note: SQL format (.sql.gz) restores are inherently sequential.
For parallel restore performance matching pg_restore -j8,
use custom format (.dump) via --use-native-engine=false during backup.
2026-02-03 19:34:39 +01:00
ae167ac063 v5.7.10: TUI consistency fixes and improvements
Some checks failed
CI/CD / Test (push) Has been cancelled
CI/CD / Integration Tests (push) Has been cancelled
CI/CD / Native Engine Tests (push) Has been cancelled
CI/CD / Lint (push) Has been cancelled
CI/CD / Build Binary (push) Has been cancelled
CI/CD / Test Release Build (push) Has been cancelled
CI/CD / Release Binaries (push) Has been cancelled
- Fix auto-select index mismatch in menu.go
- Fix tea.Quit → nil for back navigation in done states
- Add separator skip navigation for up/down keys
- Add input validation for ratio inputs (0-100 range)
- Add 11 unit tests + 2 benchmarks for TUI
- Add TUI smoke test script for CI/CD
- Improve TODO messages with version hints
2026-02-03 15:16:00 +01:00
6be19323d2 TUI: Improve UX and input validation
## Fixed
- Menu navigation now skips separator lines (up/down arrows)
- Input validation for sample ratio (0-100 range check)
- Graceful handling of invalid input with error message

## Improved
- Tools menu 'coming soon' items now show clear TODO status
- Added version hints (planned for v6.1)
- CLI alternative shown for Catalog Sync

## Code Quality
- Added warnStyle for TODO messages in tools.go
- Consistent error handling in input.go
2026-02-03 15:11:07 +01:00
0e42c3ee41 TUI: Fix incorrect tea.Quit in back navigation
## Fixed
- backup_exec.go: InterruptMsg when done now returns to parent (not quit)
- restore_exec.go: InterruptMsg when done now returns to parent
- restore_exec.go: 'q' key when done now returns to parent

## Behavior Change
When backup/restore is complete and user presses Ctrl+C, ESC, or 'q':
- Before: App would exit completely
- After: Returns to main menu

Note: tea.Quit is still correctly used for TUIAutoConfirm mode
(automated testing) where app exit after operation is expected.
2026-02-03 15:04:42 +01:00
4fc51e3a6b TUI: Fix auto-select index mismatch + add unit tests
## Fixed
- Auto-select case indices now match keyboard handler indices
- Added missing handlers: Schedule, Chain, Profile in auto-select
- Separators now properly handled (return nil cmd)

## Added
- internal/tui/menu_test.go: 11 unit tests + 2 benchmarks
  - Navigation tests (up/down, vim keys, bounds)
  - Quit tests (q, Ctrl+C)
  - Database type switching
  - View rendering
  - Auto-select functionality
- tests/tui_smoke_test.sh: Automated TUI smoke testing
  - Tests all 19 menu items via --tui-auto-select
  - No human input required
  - CI/CD ready

All TUI tests passing.
2026-02-03 15:00:34 +01:00
2db1daebd6 v5.7.9: Fix encryption detection and in-place decryption
Some checks failed
CI/CD / Test (push) Has been cancelled
CI/CD / Integration Tests (push) Has been cancelled
CI/CD / Native Engine Tests (push) Has been cancelled
CI/CD / Lint (push) Has been cancelled
CI/CD / Build Binary (push) Has been cancelled
CI/CD / Test Release Build (push) Has been cancelled
CI/CD / Release Binaries (push) Has been cancelled
## Fixed
- IsBackupEncrypted() not detecting single-database encrypted backups
- In-place decryption corrupting files (truncated before read)
- Metadata update using wrong path for Load()

## Added
- PostgreSQL DR Drill --no-owner --no-acl flags (v5.7.8)

## Tested
- Full encryption round-trip verified (88 tables)
- All 16+ core commands on production-like environment
2026-02-03 14:42:32 +01:00
9940d43958 v5.7.8: PostgreSQL DR Drill --no-owner --no-acl fix
Some checks failed
CI/CD / Test (push) Has been cancelled
CI/CD / Integration Tests (push) Has been cancelled
CI/CD / Native Engine Tests (push) Has been cancelled
CI/CD / Lint (push) Has been cancelled
CI/CD / Build Binary (push) Has been cancelled
CI/CD / Test Release Build (push) Has been cancelled
CI/CD / Release Binaries (push) Has been cancelled
### Fixed
- PostgreSQL DR Drill: Add --no-owner and --no-acl flags to pg_restore
  to avoid OWNER/GRANT errors when original roles don't exist in container

### Tested
- DR Drill verified on PostgreSQL keycloak (88 tables, 1686 rows, RTO: 1.36s)
2026-02-03 13:57:28 +01:00
d10f334508 v5.7.7: DR Drill MariaDB fixes, SMTP notifications, verify paths
Some checks failed
CI/CD / Test (push) Has been cancelled
CI/CD / Integration Tests (push) Has been cancelled
CI/CD / Native Engine Tests (push) Has been cancelled
CI/CD / Lint (push) Has been cancelled
CI/CD / Build Binary (push) Has been cancelled
CI/CD / Test Release Build (push) Has been cancelled
CI/CD / Release Binaries (push) Has been cancelled
### Fixed (5.7.3 - 5.7.7)
- MariaDB binlog position bug (4 vs 5 columns)
- Notify test command ENV variable reading
- SMTP 250 Ok response treated as error
- Verify command absolute path handling
- DR Drill for modern MariaDB containers:
  - Use mariadb-admin/mariadb client
  - TCP instead of socket connections
  - DROP DATABASE before restore

### Improved
- Better --password flag error message
- PostgreSQL peer auth fallback logging
- Binlog warnings at DEBUG level
2026-02-03 13:42:02 +01:00
59 changed files with 5748 additions and 327 deletions


@@ -49,13 +49,14 @@ jobs:
env:
POSTGRES_PASSWORD: postgres
POSTGRES_DB: testdb
-ports: ['5432:5432']
+# Use container networking instead of host port binding
+# This avoids "port already in use" errors on shared runners
mysql:
image: mysql:8
env:
MYSQL_ROOT_PASSWORD: mysql
MYSQL_DATABASE: testdb
-ports: ['3306:3306']
+# Use container networking instead of host port binding
steps:
- name: Checkout code
env:
@@ -80,7 +81,7 @@ jobs:
done
- name: Build dbbackup
-run: go build -o dbbackup .
+run: go build -trimpath -o dbbackup .
- name: Test PostgreSQL backup/restore
env:
@@ -239,7 +240,7 @@ jobs:
echo "Focus: PostgreSQL native engine validation only"
- name: Build dbbackup for native testing
-run: go build -o dbbackup-native .
+run: go build -trimpath -o dbbackup-native .
- name: Test PostgreSQL Native Engine
env:
@@ -383,7 +384,7 @@ jobs:
- name: Build for current platform
run: |
echo "Building dbbackup for testing..."
-go build -ldflags="-s -w" -o dbbackup .
+go build -trimpath -ldflags="-s -w" -o dbbackup .
echo "Build successful!"
ls -lh dbbackup
./dbbackup version || echo "Binary created successfully"
@@ -419,7 +420,7 @@ jobs:
# Test Linux amd64 build (with CGO for SQLite)
echo "Testing linux/amd64 build (CGO enabled)..."
-if CGO_ENABLED=1 GOOS=linux GOARCH=amd64 go build -ldflags="-s -w" -o release/dbbackup-linux-amd64 .; then
+if CGO_ENABLED=1 GOOS=linux GOARCH=amd64 go build -trimpath -ldflags="-s -w" -o release/dbbackup-linux-amd64 .; then
echo "✅ linux/amd64 build successful"
ls -lh release/dbbackup-linux-amd64
else
@@ -428,7 +429,7 @@ jobs:
# Test Darwin amd64 (no CGO - cross-compile limitation)
echo "Testing darwin/amd64 build (CGO disabled)..."
-if CGO_ENABLED=0 GOOS=darwin GOARCH=amd64 go build -ldflags="-s -w" -o release/dbbackup-darwin-amd64 .; then
+if CGO_ENABLED=0 GOOS=darwin GOARCH=amd64 go build -trimpath -ldflags="-s -w" -o release/dbbackup-darwin-amd64 .; then
echo "✅ darwin/amd64 build successful"
ls -lh release/dbbackup-darwin-amd64
else
@@ -508,23 +509,23 @@ jobs:
# Linux amd64 (with CGO for SQLite)
echo "Building linux/amd64 (CGO enabled)..."
-CGO_ENABLED=1 GOOS=linux GOARCH=amd64 go build -ldflags="-s -w" -o release/dbbackup-linux-amd64 .
+CGO_ENABLED=1 GOOS=linux GOARCH=amd64 go build -trimpath -ldflags="-s -w" -o release/dbbackup-linux-amd64 .
# Linux arm64 (with CGO for SQLite)
echo "Building linux/arm64 (CGO enabled)..."
CC=aarch64-linux-gnu-gcc CGO_ENABLED=1 GOOS=linux GOARCH=arm64 go build -ldflags="-s -w" -o release/dbbackup-linux-arm64 .
CC=aarch64-linux-gnu-gcc CGO_ENABLED=1 GOOS=linux GOARCH=arm64 go build -trimpath -ldflags="-s -w" -o release/dbbackup-linux-arm64 .
# Darwin amd64 (no CGO - cross-compile limitation)
echo "Building darwin/amd64 (CGO disabled)..."
CGO_ENABLED=0 GOOS=darwin GOARCH=amd64 go build -ldflags="-s -w" -o release/dbbackup-darwin-amd64 .
CGO_ENABLED=0 GOOS=darwin GOARCH=amd64 go build -trimpath -ldflags="-s -w" -o release/dbbackup-darwin-amd64 .
# Darwin arm64 (no CGO - cross-compile limitation)
echo "Building darwin/arm64 (CGO disabled)..."
CGO_ENABLED=0 GOOS=darwin GOARCH=arm64 go build -ldflags="-s -w" -o release/dbbackup-darwin-arm64 .
CGO_ENABLED=0 GOOS=darwin GOARCH=arm64 go build -trimpath -ldflags="-s -w" -o release/dbbackup-darwin-arm64 .
# FreeBSD amd64 (no CGO - cross-compile limitation)
echo "Building freebsd/amd64 (CGO disabled)..."
CGO_ENABLED=0 GOOS=freebsd GOARCH=amd64 go build -ldflags="-s -w" -o release/dbbackup-freebsd-amd64 .
CGO_ENABLED=0 GOOS=freebsd GOARCH=amd64 go build -trimpath -ldflags="-s -w" -o release/dbbackup-freebsd-amd64 .
echo "All builds complete:"
ls -lh release/

.gitignore vendored
View File

@ -18,6 +18,7 @@ bin/
# Ignore local configuration (may contain IPs/credentials)
.dbbackup.conf
.gh_token
# Ignore session/development notes
TODO_SESSION.md

View File

@ -5,6 +5,201 @@ All notable changes to dbbackup will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [5.8.24] - 2026-02-05
### Added
- **Skip Preflight Checks Option**: New TUI setting to disable pre-restore safety checks
- Accessible via Settings menu → "Skip Preflight Checks"
- Shows warning when enabled: "⚠️ SKIPPED (dangerous)"
- Displays prominent warning banner on restore preview screen
- Useful for enterprise scenarios where checks are too slow on large databases
- Config field: `SkipPreflightChecks` (default: false)
## [5.8.23] - 2026-02-05
### Added
- **Cancellation Tests**: Added Go unit tests for context cancellation verification
- `TestParseStatementsContextCancellation` - verifies statement parsing can be cancelled
- `TestParseStatementsWithCopyDataCancellation` - verifies COPY data parsing can be cancelled
- Tests confirm cancellation responds within 10ms on large (1M+ line) files
## [5.8.15] - 2026-02-05
### Fixed
- **TUI Cluster Restore Hang**: Fixed hang during large SQL file restore (pg_dumpall format)
- Added context cancellation support to `parseStatementsWithContext()` with checks every 10000 lines
- Added context cancellation checks in schema statement execution loop
- Now uses context-aware parsing in `RestoreFile()` for proper Ctrl+C handling
- This complements the v5.8.14 panic recovery fix by preventing hangs (not just panics)
## [5.8.14] - 2026-02-05
### Fixed
- **TUI Cluster Restore Panic**: Fixed BubbleTea WaitGroup deadlock during cluster restore
- Panic recovery in `tea.Cmd` functions now uses named return values to properly return messages
- Previously, panic recovery returned nil which caused `execBatchMsg` WaitGroup to hang forever
- Affected files: `restore_exec.go` and `backup_exec.go`
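The named-return fix can be sketched generically, without BubbleTea types — `errMsg` here is a hypothetical stand-in for a message type. The key point: a deferred `recover` can only deliver a value to the caller by assigning to a *named* return; with an unnamed return, `return nil` is already committed before the deferred function runs, so the recovered message is silently dropped and the waiting `WaitGroup` hangs.

```go
package main

import "fmt"

// errMsg stands in for a BubbleTea message type (hypothetical here).
type errMsg struct{ err error }

// runCmd demonstrates the named-return pattern: the deferred recover
// assigns to msg, so the caller receives a message instead of nil.
func runCmd() (msg interface{}) {
	defer func() {
		if r := recover(); r != nil {
			msg = errMsg{err: fmt.Errorf("recovered: %v", r)}
		}
	}()
	panic("boom")
}

func main() {
	fmt.Printf("%T\n", runCmd()) // the recovered errMsg, not nil
}
```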
## [5.8.12] - 2026-02-04
### Fixed
- **Config Loading**: Fixed config not loading for users without standard home directories
- Now searches: current dir → home dir → /etc/dbbackup.conf → /etc/dbbackup/dbbackup.conf
- Works for postgres user with home at /var/lib/postgresql
- Added `ConfigSearchPaths()` and `LoadLocalConfigWithPath()` functions
- Log now shows which config path was actually loaded
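The documented search order can be sketched as a small helper pair. The path list matches the changelog; the helper names `configSearchPaths` and `firstExisting` are illustrative, not necessarily the project's actual `ConfigSearchPaths()` implementation.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// configSearchPaths mirrors the documented order: current directory,
// home directory, then system-wide locations. A postgres user with
// home at /var/lib/postgresql is covered by the UserHomeDir lookup.
func configSearchPaths() []string {
	paths := []string{".dbbackup.conf"}
	if home, err := os.UserHomeDir(); err == nil {
		paths = append(paths, filepath.Join(home, ".dbbackup.conf"))
	}
	return append(paths,
		"/etc/dbbackup.conf",
		filepath.Join("/etc/dbbackup", "dbbackup.conf"))
}

// firstExisting returns the first path that exists, or "" if none do,
// so the caller can log which config file was actually loaded.
func firstExisting(paths []string) string {
	for _, p := range paths {
		if _, err := os.Stat(p); err == nil {
			return p
		}
	}
	return ""
}

func main() {
	fmt.Println(configSearchPaths())
	fmt.Println(firstExisting(configSearchPaths()))
}
```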
## [5.8.11] - 2026-02-04
### Fixed
- **TUI Deadlock**: Fixed goroutine leaks in pgxpool connection handling
- Removed redundant goroutines waiting on ctx.Done() in postgresql.go and parallel_restore.go
- These were causing WaitGroup deadlocks when BubbleTea tried to shutdown
### Added
- **systemd-run Resource Isolation**: New `internal/cleanup/cgroups.go` for long-running jobs
- `RunWithResourceLimits()` wraps commands in systemd-run scopes
- Configurable: MemoryHigh, MemoryMax, CPUQuota, IOWeight, Nice, Slice
- Automatic cleanup on context cancellation
- **Restore Dry-Run Checks**: New `internal/restore/dryrun.go` with 10 pre-restore validations
- Archive access, format, connectivity, permissions, target conflicts
- Disk space, work directory, required tools, lock settings, memory estimation
- Returns pass/warning/fail status with detailed messages
- **Audit Log Signing**: Enhanced `internal/security/audit.go` with Ed25519 cryptographic signing
- `SignedAuditEntry` with sequence numbers, hash chains, and signatures
- `GenerateSigningKeys()`, `SavePrivateKey()`, `LoadPublicKey()`
- `EnableSigning()`, `ExportSignedLog()`, `VerifyAuditLog()` for tamper detection
## [5.7.10] - 2026-02-03
### Fixed
- **TUI Auto-Select Index Mismatch**: Fixed `--tui-auto-select` case indices not matching keyboard handler
- Indices 5-11 were out of sync, causing wrong menu items to be selected in automated testing
- Added missing handlers for Schedule, Chain, and Profile commands
- **TUI Back Navigation**: Fixed incorrect `tea.Quit` usage in done states
- `backup_exec.go` and `restore_exec.go` returned `tea.Quit` instead of `nil` for InterruptMsg
- This caused unwanted application exit instead of returning to parent menu
- **TUI Separator Navigation**: Arrow keys now skip separator items
- Up/down navigation auto-skips items of kind `itemSeparator`
- Prevents cursor from landing on non-selectable menu separators
- **TUI Input Validation**: Added ratio validation for percentage inputs
- Values outside 0-100 range now show error message
- Auto-confirm mode uses safe default (10) for invalid input
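The separator-skipping navigation fix above can be sketched as a small cursor helper. The `itemSeparator` name comes from the changelog; `nextSelectable` and the wrapping behavior are illustrative assumptions, not the TUI's exact code.

```go
package main

import "fmt"

type itemKind int

const (
	itemNormal itemKind = iota
	itemSeparator // non-selectable divider between menu groups
)

// nextSelectable moves the cursor by dir (+1 down, -1 up), skipping
// separator items so arrow keys never land on a divider. The loop is
// bounded to one full wrap in case every item is a separator.
func nextSelectable(items []itemKind, cur, dir int) int {
	n := len(items)
	for i := 0; i < n; i++ {
		cur = (cur + dir + n) % n
		if items[cur] != itemSeparator {
			return cur
		}
	}
	return cur
}

func main() {
	menu := []itemKind{itemNormal, itemSeparator, itemNormal, itemNormal}
	fmt.Println(nextSelectable(menu, 0, +1)) // skips the separator at index 1
	fmt.Println(nextSelectable(menu, 2, -1)) // skips it going back up too
}
```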
### Added
- **TUI Unit Tests**: 11 new tests + 2 benchmarks in `internal/tui/menu_test.go`
- Tests: navigation, quit, Ctrl+C, database switch, view rendering, auto-select
- Benchmarks: View rendering performance, navigation stress test
- **TUI Smoke Test Script**: `tests/tui_smoke_test.sh` for CI/CD integration
- Tests all 19 menu items via `--tui-auto-select` flag
- No human input required, suitable for automated pipelines
### Changed
- **TUI TODO Messages**: Improved clarity with `[TODO]` prefix and version hints
- Placeholder items now show "[TODO] Feature Name - planned for v6.1"
- Added `warnStyle` for better visual distinction
## [5.7.9] - 2026-02-03
### Fixed
- **Encryption Detection**: Fixed `IsBackupEncrypted()` not detecting single-database encrypted backups
- Was incorrectly treating single backups as cluster backups with empty database list
- Now properly checks `len(clusterMeta.Databases) > 0` before treating as cluster
- **In-Place Decryption**: Fixed critical bug where in-place decryption corrupted files
- `DecryptFile()` with same input/output path would truncate file before reading
- Now uses temp file pattern for safe in-place decryption
- **Metadata Update**: Fixed encryption metadata not being saved correctly
- `metadata.Load()` was called with wrong path (already had `.meta.json` suffix)
### Tested
- Full encryption round-trip: backup → encrypt → decrypt → restore (88 tables)
- PostgreSQL DR Drill with `--no-owner --no-acl` flags
- All 16+ core commands verified on dev.uuxo.net
## [5.7.8] - 2026-02-03
### Fixed
- **DR Drill PostgreSQL**: Fixed restore failures on a different host
- Added `--no-owner` and `--no-acl` flags to pg_restore
- Prevents role/permission errors when restoring to a different PostgreSQL instance
## [5.7.7] - 2026-02-03
### Fixed
- **DR Drill MariaDB**: Complete fixes for modern MariaDB containers
- Use TCP (127.0.0.1) instead of socket for health checks and restore
- Use `mariadb-admin` and `mariadb` client (not `mysqladmin`/`mysql`)
- Drop existing database before restore (backup contains CREATE DATABASE)
- Tested with MariaDB 12.1.2 image
## [5.7.6] - 2026-02-03
### Fixed
- **Verify Command**: Fixed absolute path handling
- `dbbackup verify /full/path/to/backup.dump` now works correctly
- Previously always prefixed with `--backup-dir`, breaking absolute paths
## [5.7.5] - 2026-02-03
### Fixed
- **SMTP Notifications**: Fixed false error on successful email delivery
- `client.Quit()` response "250 Ok: queued" was incorrectly treated as an error
- Now properly closes the data writer and ignores the successful quit response
## [5.7.4] - 2026-02-03
### Fixed
- **Notify Test Command** - Fixed `dbbackup notify test` to properly read NOTIFY_* environment variables
- Previously only checked `cfg.NotifyEnabled` which wasn't set from ENV
- Now uses `notify.ConfigFromEnv()` like the rest of the application
- Clear error messages showing exactly which ENV variables to set
### Technical Details
- `cmd/notify.go`: Refactored to use `notify.ConfigFromEnv()` instead of `cfg.*` fields
## [5.7.3] - 2026-02-03
### Fixed
- **MariaDB Binlog Position Bug** - Fixed `getBinlogPosition()` to handle dynamic column count
- MariaDB `SHOW MASTER STATUS` returns 4 columns
- MySQL 5.6+ returns 5 columns (with `Executed_Gtid_Set`)
- Now tries 5 columns first, falls back to 4 columns for MariaDB compatibility
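The column-count handling can be sketched on a plain string slice (the real code scans via `database/sql`, where the number of scan destinations must match the column count exactly — hence the try-5-then-4 fallback). Only the first two columns, `File` and `Position`, are needed either way; this helper and its row model are illustrative assumptions.

```go
package main

import (
	"errors"
	"fmt"
)

// binlogPosition extracts file and position from a SHOW MASTER STATUS
// row. MySQL 5.6+ returns 5 columns (ending in Executed_Gtid_Set);
// MariaDB returns 4. Accepting both counts avoids a scan mismatch.
func binlogPosition(row []string) (file, pos string, err error) {
	switch len(row) {
	case 5, 4: // File, Position, Binlog_Do_DB, Binlog_Ignore_DB[, Executed_Gtid_Set]
		return row[0], row[1], nil
	default:
		return "", "", errors.New("unexpected column count in SHOW MASTER STATUS")
	}
}

func main() {
	// MariaDB-shaped row (4 columns)
	f, p, _ := binlogPosition([]string{"mysql-bin.000003", "154", "", ""})
	fmt.Println(f, p)
	// MySQL 5.6+-shaped row (5 columns)
	f, p, _ = binlogPosition([]string{"binlog.000007", "4289", "", "", "uuid:1-42"})
	fmt.Println(f, p)
}
```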
### Improved
- **Better `--password` Flag Error Message**
- Using `--password` now shows helpful error with instructions for `MYSQL_PWD`/`PGPASSWORD` environment variables
- Flag is hidden but accepted for better error handling
- **Improved Fallback Logging for PostgreSQL Peer Authentication**
- Changed from `WARN: Native engine failed, falling back...`
- Now shows `INFO: Native engine requires password auth, using pg_dump with peer authentication`
- Clearer indication that this is expected behavior, not an error
- **Reduced Noise from Binlog Position Warnings**
- "Binary logging not enabled" now logged at DEBUG level (was WARN)
- "Insufficient privileges for binlog" now logged at DEBUG level (was WARN)
- Only unexpected errors still logged as WARN
### Technical Details
- `internal/engine/native/mysql.go`: Dynamic column detection in `getBinlogPosition()`
- `cmd/root.go`: Added hidden `--password` flag with helpful error message
- `cmd/backup_impl.go`: Improved fallback logging for peer auth scenarios
## [5.7.2] - 2026-02-02
### Added
- Native engine improvements for production stability
## [5.7.1] - 2026-02-02
### Fixed
- Minor stability fixes
## [5.7.0] - 2026-02-02
### Added
- Enhanced native engine support for MariaDB
## [5.6.0] - 2026-02-02
### Performance Optimizations 🚀

View File

@ -17,9 +17,9 @@ Be respectful, constructive, and professional in all interactions. We're buildin
**Bug Report Template:**
```
**Version:** dbbackup v3.42.1
**Version:** dbbackup v5.7.10
**OS:** Linux/macOS/BSD
**Database:** PostgreSQL 14 / MySQL 8.0 / MariaDB 10.6
**Database:** PostgreSQL 14+ / MySQL 8.0+ / MariaDB 10.6+
**Command:** The exact command that failed
**Error:** Full error message and stack trace
**Expected:** What you expected to happen

View File

@ -19,7 +19,7 @@ COPY . .
# Build binary with cross-compilation support
RUN CGO_ENABLED=0 GOOS=${TARGETOS} GOARCH=${TARGETARCH} \
go build -a -installsuffix cgo -ldflags="-w -s" -o dbbackup .
go build -trimpath -a -installsuffix cgo -ldflags="-w -s" -o dbbackup .
# Final stage - minimal runtime image
# Using pinned version 3.19 which has better QEMU compatibility

View File

@ -15,7 +15,7 @@ all: lint test build
## build: Build the binary with optimizations
build:
@echo "🔨 Building dbbackup $(VERSION)..."
CGO_ENABLED=0 go build -ldflags="$(LDFLAGS)" -o bin/dbbackup .
CGO_ENABLED=0 go build -trimpath -ldflags="$(LDFLAGS)" -o bin/dbbackup .
@echo "✅ Built bin/dbbackup"
## build-debug: Build with debug symbols (for debugging)

View File

@ -4,7 +4,7 @@ Database backup and restore utility for PostgreSQL, MySQL, and MariaDB.
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![Go Version](https://img.shields.io/badge/Go-1.21+-00ADD8?logo=go)](https://golang.org/)
[![Release](https://img.shields.io/badge/Release-v5.1.15-green.svg)](https://github.com/PlusOne/dbbackup/releases/latest)
[![Release](https://img.shields.io/badge/Release-v5.7.10-green.svg)](https://git.uuxo.net/UUXO/dbbackup/releases/latest)
**Repository:** https://git.uuxo.net/UUXO/dbbackup
**Mirror:** https://github.com/PlusOne/dbbackup
@ -92,7 +92,7 @@ Download from [releases](https://git.uuxo.net/UUXO/dbbackup/releases):
```bash
# Linux x86_64
wget https://git.uuxo.net/UUXO/dbbackup/releases/download/v3.42.74/dbbackup-linux-amd64
wget https://git.uuxo.net/UUXO/dbbackup/releases/download/v5.7.10/dbbackup-linux-amd64
chmod +x dbbackup-linux-amd64
sudo mv dbbackup-linux-amd64 /usr/local/bin/dbbackup
```
@ -115,8 +115,9 @@ go build
# PostgreSQL with peer authentication
sudo -u postgres dbbackup interactive
# MySQL/MariaDB
dbbackup interactive --db-type mysql --user root --password secret
# MySQL/MariaDB (use MYSQL_PWD env var for password)
export MYSQL_PWD='secret'
dbbackup interactive --db-type mysql --user root
```
**Main Menu:**
@ -401,7 +402,7 @@ dbbackup backup single mydb --dry-run
| `--host` | Database host | localhost |
| `--port` | Database port | 5432/3306 |
| `--user` | Database user | current user |
| `--password` | Database password | - |
| `MYSQL_PWD` / `PGPASSWORD` | Database password (env var) | - |
| `--backup-dir` | Backup directory | ~/db_backups |
| `--compression` | Compression level (0-9) | 6 |
| `--jobs` | Parallel jobs | 8 |
@ -673,6 +674,22 @@ dbbackup backup single mydb
- `dr_drill_passed`, `dr_drill_failed`
- `gap_detected`, `rpo_violation`
### Testing Notifications
```bash
# Test notification configuration
export NOTIFY_SMTP_HOST="localhost"
export NOTIFY_SMTP_PORT="25"
export NOTIFY_SMTP_FROM="dbbackup@myserver.local"
export NOTIFY_SMTP_TO="admin@example.com"
dbbackup notify test --verbose
# [OK] Notification sent successfully
# For servers using STARTTLS with self-signed certs
export NOTIFY_SMTP_STARTTLS="false"
```
## Backup Catalog
Track all backups in a SQLite catalog with gap detection and search:
@ -970,8 +987,12 @@ export PGPASSWORD=password
### MySQL/MariaDB Authentication
```bash
# Command line
dbbackup backup single mydb --db-type mysql --user root --password secret
# Environment variable (recommended)
export MYSQL_PWD='secret'
dbbackup backup single mydb --db-type mysql --user root
# Socket authentication (no password needed)
dbbackup backup single mydb --db-type mysql --socket /var/run/mysqld/mysqld.sock
# Configuration file
cat > ~/.my.cnf << EOF
@ -982,6 +1003,9 @@ EOF
chmod 0600 ~/.my.cnf
```
> **Note:** The `--password` command-line flag is not supported for security reasons
> (passwords would be visible in `ps aux` output). Use environment variables or config files.
### Configuration Persistence
Settings are saved to `.dbbackup.conf` in the current directory:

View File

@ -6,9 +6,10 @@ We release security updates for the following versions:
| Version | Supported |
| ------- | ------------------ |
| 3.1.x | :white_check_mark: |
| 3.0.x | :white_check_mark: |
| < 3.0 | :x: |
| 5.7.x | :white_check_mark: |
| 5.6.x | :white_check_mark: |
| 5.5.x | :white_check_mark: |
| < 5.5 | :x: |
## Reporting a Vulnerability

View File

@ -80,7 +80,7 @@ for platform_config in "${PLATFORMS[@]}"; do
# Set environment and build (using export for better compatibility)
# CGO_ENABLED=0 creates static binaries without glibc dependency
export CGO_ENABLED=0 GOOS GOARCH
if go build -ldflags "$LDFLAGS" -o "${BIN_DIR}/${binary_name}" . 2>/dev/null; then
if go build -trimpath -ldflags "$LDFLAGS" -o "${BIN_DIR}/${binary_name}" . 2>/dev/null; then
# Get file size
if [[ "$OSTYPE" == "darwin"* ]]; then
size=$(stat -f%z "${BIN_DIR}/${binary_name}" 2>/dev/null || echo "0")

View File

@ -286,7 +286,13 @@ func runSingleBackup(ctx context.Context, databaseName string) error {
err = runNativeBackup(ctx, db, databaseName, backupType, baseBackup, backupStartTime, user)
if err != nil && cfg.FallbackToTools {
log.Warn("Native engine failed, falling back to external tools", "error", err)
// Check if this is an expected authentication failure (peer auth doesn't provide password to native engine)
errStr := err.Error()
if strings.Contains(errStr, "password authentication failed") || strings.Contains(errStr, "SASL auth") {
log.Info("Native engine requires password auth, using pg_dump with peer authentication")
} else {
log.Warn("Native engine failed, falling back to external tools", "error", err)
}
// Continue with tool-based backup below
} else {
// Native engine succeeded or no fallback configured

View File

@ -6,10 +6,12 @@ import (
"io"
"os"
"path/filepath"
"strings"
"time"
"dbbackup/internal/database"
"dbbackup/internal/engine/native"
"dbbackup/internal/metadata"
"dbbackup/internal/notify"
"github.com/klauspost/pgzip"
@ -162,6 +164,54 @@ func runNativeBackup(ctx context.Context, db database.Database, databaseName, ba
"duration", backupDuration,
"engine", result.EngineUsed)
// Get actual file size from disk
fileInfo, err := os.Stat(outputFile)
var actualSize int64
if err == nil {
actualSize = fileInfo.Size()
} else {
actualSize = result.BytesProcessed
}
// Calculate SHA256 checksum
sha256sum, err := metadata.CalculateSHA256(outputFile)
if err != nil {
log.Warn("Failed to calculate SHA256", "error", err)
sha256sum = ""
}
// Create and save metadata file
meta := &metadata.BackupMetadata{
Version: "1.0",
Timestamp: backupStartTime,
Database: databaseName,
DatabaseType: dbType,
Host: cfg.Host,
Port: cfg.Port,
User: cfg.User,
BackupFile: filepath.Base(outputFile),
SizeBytes: actualSize,
SHA256: sha256sum,
Compression: "gzip",
BackupType: backupType,
Duration: backupDuration.Seconds(),
ExtraInfo: map[string]string{
"engine": result.EngineUsed,
"objects_processed": fmt.Sprintf("%d", result.ObjectsProcessed),
},
}
if cfg.CompressionLevel == 0 {
meta.Compression = "none"
}
metaPath := outputFile + ".meta.json"
if err := metadata.Save(metaPath, meta); err != nil {
log.Warn("Failed to save metadata", "error", err)
} else {
log.Debug("Metadata saved", "path", metaPath)
}
// Audit log: backup completed
auditLogger.LogBackupComplete(user, databaseName, cfg.BackupDir, result.BytesProcessed)
@ -188,7 +238,7 @@ func detectDatabaseTypeFromConfig() string {
return "unknown"
}
// buildNativeDSN builds a PostgreSQL DSN from the global configuration
// buildNativeDSN builds a DSN from the global configuration for the appropriate database type
func buildNativeDSN(databaseName string) string {
if cfg == nil {
return ""
@ -199,9 +249,39 @@ func buildNativeDSN(databaseName string) string {
host = "localhost"
}
dbName := databaseName
if dbName == "" {
dbName = cfg.Database
}
// Build MySQL DSN for MySQL/MariaDB
if cfg.IsMySQL() {
port := cfg.Port
if port == 0 {
port = 3306 // MySQL default port
}
user := cfg.User
if user == "" {
user = "root"
}
// MySQL DSN format: user:password@tcp(host:port)/dbname
dsn := user
if cfg.Password != "" {
dsn += ":" + cfg.Password
}
dsn += fmt.Sprintf("@tcp(%s:%d)/", host, port)
if dbName != "" {
dsn += dbName
}
return dsn
}
// Build PostgreSQL DSN (default)
port := cfg.Port
if port == 0 {
port = 5432
port = 5432 // PostgreSQL default port
}
user := cfg.User
@ -209,25 +289,38 @@ func buildNativeDSN(databaseName string) string {
user = "postgres"
}
dbName := databaseName
if dbName == "" {
dbName = cfg.Database
}
if dbName == "" {
dbName = "postgres"
}
// Check if host is a Unix socket path (starts with /)
isSocketPath := strings.HasPrefix(host, "/")
dsn := fmt.Sprintf("postgres://%s", user)
if cfg.Password != "" {
dsn += ":" + cfg.Password
}
dsn += fmt.Sprintf("@%s:%d/%s", host, port, dbName)
if isSocketPath {
// Unix socket: use host parameter in query string
// pgx format: postgres://user@/dbname?host=/var/run/postgresql
dsn += fmt.Sprintf("@/%s", dbName)
} else {
// TCP connection: use host:port in authority
dsn += fmt.Sprintf("@%s:%d/%s", host, port, dbName)
}
sslMode := cfg.SSLMode
if sslMode == "" {
sslMode = "prefer"
}
dsn += "?sslmode=" + sslMode
if isSocketPath {
// For Unix sockets, add host parameter and disable SSL
dsn += fmt.Sprintf("?host=%s&sslmode=disable", host)
} else {
dsn += "?sslmode=" + sslMode
}
return dsn
}

View File

@ -54,19 +54,29 @@ func init() {
}
func runNotifyTest(cmd *cobra.Command, args []string) error {
if !cfg.NotifyEnabled {
fmt.Println("[WARN] Notifications are disabled")
fmt.Println("Enable with: --notify-enabled")
// Load notification config from environment variables (same as root.go)
notifyCfg := notify.ConfigFromEnv()
// Check if any notification method is configured
if !notifyCfg.SMTPEnabled && !notifyCfg.WebhookEnabled {
fmt.Println("[WARN] No notification endpoints configured")
fmt.Println()
fmt.Println("Example configuration:")
fmt.Println(" notify_enabled = true")
fmt.Println(" notify_on_success = true")
fmt.Println(" notify_on_failure = true")
fmt.Println(" notify_webhook_url = \"https://your-webhook-url\"")
fmt.Println(" # or")
fmt.Println(" notify_smtp_host = \"smtp.example.com\"")
fmt.Println(" notify_smtp_from = \"backups@example.com\"")
fmt.Println(" notify_smtp_to = \"admin@example.com\"")
fmt.Println("Configure via environment variables:")
fmt.Println()
fmt.Println(" SMTP Email:")
fmt.Println(" NOTIFY_SMTP_HOST=smtp.example.com")
fmt.Println(" NOTIFY_SMTP_PORT=587")
fmt.Println(" NOTIFY_SMTP_FROM=backups@example.com")
fmt.Println(" NOTIFY_SMTP_TO=admin@example.com")
fmt.Println()
fmt.Println(" Webhook:")
fmt.Println(" NOTIFY_WEBHOOK_URL=https://your-webhook-url")
fmt.Println()
fmt.Println(" Optional:")
fmt.Println(" NOTIFY_SMTP_USER=username")
fmt.Println(" NOTIFY_SMTP_PASSWORD=password")
fmt.Println(" NOTIFY_SMTP_STARTTLS=true")
fmt.Println(" NOTIFY_WEBHOOK_SECRET=hmac-secret")
return nil
}
@ -79,52 +89,19 @@ func runNotifyTest(cmd *cobra.Command, args []string) error {
fmt.Println("[TEST] Testing notification configuration...")
fmt.Println()
// Check what's configured
hasWebhook := cfg.NotifyWebhookURL != ""
hasSMTP := cfg.NotifySMTPHost != ""
if !hasWebhook && !hasSMTP {
fmt.Println("[WARN] No notification endpoints configured")
fmt.Println()
fmt.Println("Configure at least one:")
fmt.Println(" --notify-webhook-url URL # Generic webhook")
fmt.Println(" --notify-smtp-host HOST # Email (requires SMTP settings)")
return nil
}
// Show what will be tested
if hasWebhook {
fmt.Printf("[INFO] Webhook configured: %s\n", cfg.NotifyWebhookURL)
if notifyCfg.WebhookEnabled {
fmt.Printf("[INFO] Webhook configured: %s\n", notifyCfg.WebhookURL)
}
if hasSMTP {
fmt.Printf("[INFO] SMTP configured: %s:%d\n", cfg.NotifySMTPHost, cfg.NotifySMTPPort)
fmt.Printf(" From: %s\n", cfg.NotifySMTPFrom)
if len(cfg.NotifySMTPTo) > 0 {
fmt.Printf(" To: %v\n", cfg.NotifySMTPTo)
if notifyCfg.SMTPEnabled {
fmt.Printf("[INFO] SMTP configured: %s:%d\n", notifyCfg.SMTPHost, notifyCfg.SMTPPort)
fmt.Printf(" From: %s\n", notifyCfg.SMTPFrom)
if len(notifyCfg.SMTPTo) > 0 {
fmt.Printf(" To: %v\n", notifyCfg.SMTPTo)
}
}
fmt.Println()
// Create notification config
notifyCfg := notify.Config{
SMTPEnabled: hasSMTP,
SMTPHost: cfg.NotifySMTPHost,
SMTPPort: cfg.NotifySMTPPort,
SMTPUser: cfg.NotifySMTPUser,
SMTPPassword: cfg.NotifySMTPPassword,
SMTPFrom: cfg.NotifySMTPFrom,
SMTPTo: cfg.NotifySMTPTo,
SMTPTLS: cfg.NotifySMTPTLS,
SMTPStartTLS: cfg.NotifySMTPStartTLS,
WebhookEnabled: hasWebhook,
WebhookURL: cfg.NotifyWebhookURL,
WebhookMethod: "POST",
OnSuccess: true,
OnFailure: true,
}
// Create manager
manager := notify.NewManager(notifyCfg)

View File

@ -423,8 +423,13 @@ func runVerify(ctx context.Context, archiveName string) error {
fmt.Println(" Backup Archive Verification")
fmt.Println("==============================================================")
// Construct full path to archive
archivePath := filepath.Join(cfg.BackupDir, archiveName)
// Construct full path to archive - use as-is if already absolute
var archivePath string
if filepath.IsAbs(archiveName) {
archivePath = archiveName
} else {
archivePath = filepath.Join(cfg.BackupDir, archiveName)
}
// Check if archive exists
if _, err := os.Stat(archivePath); os.IsNotExist(err) {

View File

@ -86,7 +86,7 @@ func init() {
// Generate command flags
reportGenerateCmd.Flags().StringVarP(&reportType, "type", "t", "soc2", "Report type (soc2, gdpr, hipaa, pci-dss, iso27001)")
reportGenerateCmd.Flags().IntVarP(&reportDays, "days", "d", 90, "Number of days to include in report")
reportGenerateCmd.Flags().IntVar(&reportDays, "days", 90, "Number of days to include in report")
reportGenerateCmd.Flags().StringVar(&reportStartDate, "start", "", "Start date (YYYY-MM-DD)")
reportGenerateCmd.Flags().StringVar(&reportEndDate, "end", "", "End date (YYYY-MM-DD)")
reportGenerateCmd.Flags().StringVarP(&reportFormat, "format", "f", "markdown", "Output format (json, markdown, html)")
@ -97,7 +97,7 @@ func init() {
// Summary command flags
reportSummaryCmd.Flags().StringVarP(&reportType, "type", "t", "soc2", "Report type")
reportSummaryCmd.Flags().IntVarP(&reportDays, "days", "d", 90, "Number of days to include")
reportSummaryCmd.Flags().IntVar(&reportDays, "days", 90, "Number of days to include")
reportSummaryCmd.Flags().StringVar(&reportCatalog, "catalog", "", "Path to backup catalog database")
}

View File

@ -15,11 +15,12 @@ import (
)
var (
cfg *config.Config
log logger.Logger
auditLogger *security.AuditLogger
rateLimiter *security.RateLimiter
notifyManager *notify.Manager
cfg *config.Config
log logger.Logger
auditLogger *security.AuditLogger
rateLimiter *security.RateLimiter
notifyManager *notify.Manager
deprecatedPassword string
)
// rootCmd represents the base command when called without any subcommands
@ -47,6 +48,11 @@ For help with specific commands, use: dbbackup [command] --help`,
return nil
}
// Check for deprecated password flag
if deprecatedPassword != "" {
return fmt.Errorf("--password flag is not supported for security reasons. Use environment variables instead:\n - MySQL/MariaDB: export MYSQL_PWD='your_password'\n - PostgreSQL: export PGPASSWORD='your_password' or use .pgpass file")
}
// Store which flags were explicitly set by user
flagsSet := make(map[string]bool)
cmd.Flags().Visit(func(f *pflag.Flag) {
@ -55,22 +61,24 @@ For help with specific commands, use: dbbackup [command] --help`,
// Load local config if not disabled
if !cfg.NoLoadConfig {
// Use custom config path if specified, otherwise default to current directory
// Use custom config path if specified, otherwise search standard locations
var localCfg *config.LocalConfig
var configPath string
var err error
if cfg.ConfigPath != "" {
localCfg, err = config.LoadLocalConfigFromPath(cfg.ConfigPath)
configPath = cfg.ConfigPath
if err != nil {
log.Warn("Failed to load config from specified path", "path", cfg.ConfigPath, "error", err)
} else if localCfg != nil {
log.Info("Loaded configuration", "path", cfg.ConfigPath)
}
} else {
localCfg, err = config.LoadLocalConfig()
localCfg, configPath, err = config.LoadLocalConfigWithPath()
if err != nil {
log.Warn("Failed to load local config", "error", err)
log.Warn("Failed to load config", "error", err)
} else if localCfg != nil {
log.Info("Loaded configuration from .dbbackup.conf")
log.Info("Loaded configuration", "path", configPath)
}
}
@ -125,9 +133,15 @@ For help with specific commands, use: dbbackup [command] --help`,
}
// Auto-detect socket from --host path (if host starts with /)
// For MySQL/MariaDB: set Socket and reset Host to localhost
// For PostgreSQL: keep Host as socket path (pgx/libpq handle it correctly)
if strings.HasPrefix(cfg.Host, "/") && cfg.Socket == "" {
cfg.Socket = cfg.Host
cfg.Host = "localhost" // Reset host for socket connections
if cfg.IsMySQL() {
// MySQL uses separate Socket field, Host should be localhost
cfg.Socket = cfg.Host
cfg.Host = "localhost"
}
// For PostgreSQL, keep cfg.Host as the socket path - pgx handles this correctly
}
return cfg.SetDatabaseType(cfg.DatabaseType)
@ -164,7 +178,9 @@ func Execute(ctx context.Context, config *config.Config, logger logger.Logger) e
rootCmd.PersistentFlags().StringVar(&cfg.User, "user", cfg.User, "Database user")
rootCmd.PersistentFlags().StringVar(&cfg.Database, "database", cfg.Database, "Database name")
// SECURITY: Password flag removed - use PGPASSWORD/MYSQL_PWD environment variable or .pgpass file
// rootCmd.PersistentFlags().StringVar(&cfg.Password, "password", cfg.Password, "Database password")
// Provide helpful error message for users expecting --password flag
rootCmd.PersistentFlags().StringVar(&deprecatedPassword, "password", "", "DEPRECATED: Use MYSQL_PWD or PGPASSWORD environment variable instead")
rootCmd.PersistentFlags().MarkHidden("password")
rootCmd.PersistentFlags().StringVarP(&cfg.DatabaseType, "db-type", "d", cfg.DatabaseType, "Database type (postgres|mysql|mariadb)")
rootCmd.PersistentFlags().StringVar(&cfg.BackupDir, "backup-dir", cfg.BackupDir, "Backup directory")
rootCmd.PersistentFlags().BoolVar(&cfg.NoColor, "no-color", cfg.NoColor, "Disable colored output")

View File

@ -0,0 +1,104 @@
---
# dbbackup Production Deployment Playbook
# Deploys dbbackup binary and verifies backup jobs
#
# Usage (from dev.uuxo.net):
# ansible-playbook -i inventory.yml deploy-production.yml
# ansible-playbook -i inventory.yml deploy-production.yml --limit mysql01.uuxoi.local
# ansible-playbook -i inventory.yml deploy-production.yml --tags binary # Only deploy binary
- name: Deploy dbbackup to production DB hosts
hosts: db_servers
become: yes
vars:
# Binary source: /tmp/dbbackup_linux_amd64 on Ansible controller (dev.uuxo.net)
local_binary: "{{ dbbackup_binary_src | default('/tmp/dbbackup_linux_amd64') }}"
install_path: /usr/local/bin/dbbackup
tasks:
- name: Deploy dbbackup binary
tags: [binary, deploy]
block:
- name: Copy dbbackup binary
copy:
src: "{{ local_binary }}"
dest: "{{ install_path }}"
mode: "0755"
owner: root
group: root
register: binary_deployed
- name: Verify dbbackup version
command: "{{ install_path }} --version"
register: version_check
changed_when: false
- name: Display installed version
debug:
msg: "{{ inventory_hostname }}: {{ version_check.stdout }}"
- name: Check backup configuration
tags: [verify, check]
block:
- name: Check backup script exists
stat:
path: "/opt/dbbackup/bin/{{ dbbackup_backup_script | default('backup.sh') }}"
register: backup_script
- name: Display backup script status
debug:
msg: "Backup script: {{ 'EXISTS' if backup_script.stat.exists else 'MISSING' }}"
- name: Check systemd timer status
shell: systemctl list-timers --no-pager | grep dbbackup || echo "No timer found"
register: timer_status
changed_when: false
- name: Display timer status
debug:
msg: "{{ timer_status.stdout_lines }}"
- name: Check exporter service
shell: systemctl is-active dbbackup-exporter 2>/dev/null || echo "not running"
register: exporter_status
changed_when: false
- name: Display exporter status
debug:
msg: "Exporter: {{ exporter_status.stdout }}"
- name: Run test backup (dry-run)
tags: [test, never]
block:
- name: Execute dry-run backup
command: >
{{ install_path }} backup single {{ dbbackup_databases[0] }}
--db-type {{ dbbackup_db_type }}
{% if dbbackup_socket is defined %}--socket {{ dbbackup_socket }}{% endif %}
{% if dbbackup_host is defined %}--host {{ dbbackup_host }}{% endif %}
{% if dbbackup_port is defined %}--port {{ dbbackup_port }}{% endif %}
--user root
--allow-root
--dry-run
environment:
MYSQL_PWD: "{{ dbbackup_password | default('') }}"
register: dryrun_result
changed_when: false
ignore_errors: yes
- name: Display dry-run result
debug:
msg: "{{ dryrun_result.stdout_lines[-5:] }}"
post_tasks:
- name: Deployment summary
debug:
msg: |
=== {{ inventory_hostname }} ===
Version: {{ version_check.stdout | default('unknown') }}
DB Type: {{ dbbackup_db_type }}
Databases: {{ dbbackup_databases | join(', ') }}
Backup Dir: {{ dbbackup_backup_dir }}
Timer: {{ 'active' if 'dbbackup' in timer_status.stdout else 'not configured' }}
Exporter: {{ exporter_status.stdout }}


@ -0,0 +1,56 @@
# dbbackup Production Inventory
# Ansible runs on dev.uuxo.net - direct SSH access to all hosts
all:
vars:
ansible_user: root
ansible_ssh_common_args: '-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'
dbbackup_version: "5.7.2"
# Binary is deployed from dev.uuxo.net (placed in /tmp there after scp)
dbbackup_binary_src: "/tmp/dbbackup_linux_amd64"
children:
db_servers:
hosts:
mysql01.uuxoi.local:
dbbackup_db_type: mariadb
dbbackup_databases:
- ejabberd
dbbackup_backup_dir: /mnt/smb-mysql01/backups/databases
dbbackup_socket: /var/run/mysqld/mysqld.sock
dbbackup_pitr_enabled: true
dbbackup_backup_script: backup-mysql01.sh
alternate.uuxoi.local:
dbbackup_db_type: mariadb
dbbackup_databases:
- dbispconfig
- c1aps1
- c2marianskronkorken
- matomo
- phpmyadmin
- roundcube
- roundcubemail
dbbackup_backup_dir: /mnt/smb-alternate/backups/databases
dbbackup_host: 127.0.0.1
dbbackup_port: 3306
dbbackup_password: "xt3kci28"
dbbackup_backup_script: backup-alternate.sh
cloud.uuxoi.local:
dbbackup_db_type: mariadb
dbbackup_databases:
- nextcloud_db
dbbackup_backup_dir: /mnt/smb-cloud/backups/dedup
dbbackup_socket: /var/run/mysqld/mysqld.sock
dbbackup_dedup_enabled: true
dbbackup_backup_script: backup-cloud.sh
# Hosts with special requirements
special_hosts:
hosts:
git.uuxoi.local:
dbbackup_db_type: mariadb
dbbackup_databases:
- gitea
dbbackup_note: "Docker-based MariaDB - needs SSH key setup"


@ -370,6 +370,39 @@ SET GLOBAL gtid_mode = ON;
4. **Monitoring**: Check progress with `dbbackup status`
5. **Testing**: Verify restores regularly with `dbbackup verify`
## Authentication
### Password Handling (Security)
For security reasons, dbbackup does **not** support `--password` as a command-line flag. Passwords should be passed via environment variables:
```bash
# MySQL/MariaDB
export MYSQL_PWD='your_password'
dbbackup backup single mydb --db-type mysql
# PostgreSQL
export PGPASSWORD='your_password'
dbbackup backup single mydb --db-type postgres
```
Alternative methods:
- **MySQL/MariaDB**: Use socket authentication with `--socket /var/run/mysqld/mysqld.sock`
- **PostgreSQL**: Use peer authentication by running as the postgres user
### PostgreSQL Peer Authentication
When using PostgreSQL with peer authentication (running as the `postgres` user), the native engine will automatically fall back to `pg_dump` since peer auth doesn't provide a password for the native protocol:
```bash
# This works - dbbackup detects peer auth and uses pg_dump
sudo -u postgres dbbackup backup single mydb -d postgres
```
You'll see: `INFO: Native engine requires password auth, using pg_dump with peer authentication`
This is expected behavior, not an error.
## See Also
- [PITR.md](PITR.md) - Point-in-Time Recovery guide


@ -36,8 +36,8 @@ func EncryptBackupFile(backupPath string, key []byte, log logger.Logger) error {
// Update metadata to indicate encryption
metaPath := backupPath + ".meta.json"
if _, err := os.Stat(metaPath); err == nil {
// Load existing metadata
meta, err := metadata.Load(metaPath)
// Load existing metadata (Load expects backup path, not meta path)
meta, err := metadata.Load(backupPath)
if err != nil {
log.Warn("Failed to load metadata for encryption update", "error", err)
} else {
@ -45,7 +45,7 @@ func EncryptBackupFile(backupPath string, key []byte, log logger.Logger) error {
meta.Encrypted = true
meta.EncryptionAlgorithm = string(crypto.AlgorithmAES256GCM)
// Save updated metadata
// Save updated metadata (Save expects meta path)
if err := metadata.Save(metaPath, meta); err != nil {
log.Warn("Failed to update metadata with encryption info", "error", err)
}
@ -70,8 +70,8 @@ func EncryptBackupFile(backupPath string, key []byte, log logger.Logger) error {
// IsBackupEncrypted checks if a backup file is encrypted
func IsBackupEncrypted(backupPath string) bool {
// Check metadata first - try cluster metadata (for cluster backups)
// Try cluster metadata first
if clusterMeta, err := metadata.LoadCluster(backupPath); err == nil {
// Only treat as cluster if it actually has databases
if clusterMeta, err := metadata.LoadCluster(backupPath); err == nil && len(clusterMeta.Databases) > 0 {
// For cluster backups, check if ANY database is encrypted
for _, db := range clusterMeta.Databases {
if db.Encrypted {

internal/cleanup/cgroups.go (new file)

@ -0,0 +1,236 @@
package cleanup
import (
"context"
"fmt"
"os"
"os/exec"
"runtime"
"strings"
"dbbackup/internal/logger"
)
// ResourceLimits defines resource constraints for long-running operations
type ResourceLimits struct {
// MemoryHigh is the high memory limit (e.g., "4G", "2048M")
// When exceeded, kernel will throttle and reclaim memory aggressively
MemoryHigh string
// MemoryMax is the hard memory limit (e.g., "6G")
// Process is killed if exceeded
MemoryMax string
// CPUQuota limits CPU usage (e.g., "70%" for 70% of one CPU)
CPUQuota string
// IOWeight sets I/O priority (1-10000, default 100)
IOWeight int
// Nice sets process priority (-20 to 19)
Nice int
// Slice is the systemd slice to run under (e.g., "dbbackup.slice")
Slice string
}
// DefaultResourceLimits returns sensible defaults for backup/restore operations
func DefaultResourceLimits() *ResourceLimits {
return &ResourceLimits{
MemoryHigh: "4G",
MemoryMax: "6G",
CPUQuota: "80%",
IOWeight: 100, // Default priority
Nice: 10, // Slightly lower priority than interactive processes
Slice: "dbbackup.slice",
}
}
// SystemdRunAvailable checks if systemd-run is available on this system
func SystemdRunAvailable() bool {
if runtime.GOOS != "linux" {
return false
}
_, err := exec.LookPath("systemd-run")
return err == nil
}
// RunWithResourceLimits executes a command with resource limits via systemd-run
// Falls back to direct execution if systemd-run is not available
func RunWithResourceLimits(ctx context.Context, log logger.Logger, limits *ResourceLimits, name string, args ...string) error {
if limits == nil {
limits = DefaultResourceLimits()
}
// If systemd-run not available, fall back to direct execution
if !SystemdRunAvailable() {
log.Debug("systemd-run not available, running without resource limits")
cmd := exec.CommandContext(ctx, name, args...)
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
return cmd.Run()
}
// Build systemd-run command
systemdArgs := buildSystemdArgs(limits, name, args)
log.Info("Running with systemd resource limits",
"command", name,
"memory_high", limits.MemoryHigh,
"cpu_quota", limits.CPUQuota)
cmd := exec.CommandContext(ctx, "systemd-run", systemdArgs...)
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
return cmd.Run()
}
// RunWithResourceLimitsOutput executes with limits and returns combined output
func RunWithResourceLimitsOutput(ctx context.Context, log logger.Logger, limits *ResourceLimits, name string, args ...string) ([]byte, error) {
if limits == nil {
limits = DefaultResourceLimits()
}
// If systemd-run not available, fall back to direct execution
if !SystemdRunAvailable() {
log.Debug("systemd-run not available, running without resource limits")
cmd := exec.CommandContext(ctx, name, args...)
return cmd.CombinedOutput()
}
// Build systemd-run command
systemdArgs := buildSystemdArgs(limits, name, args)
log.Debug("Running with systemd resource limits",
"command", name,
"memory_high", limits.MemoryHigh)
cmd := exec.CommandContext(ctx, "systemd-run", systemdArgs...)
return cmd.CombinedOutput()
}
// buildSystemdArgs constructs the systemd-run argument list
func buildSystemdArgs(limits *ResourceLimits, name string, args []string) []string {
systemdArgs := []string{
"--scope", // Run as transient scope (not service)
"--user", // Run in user session (no root required)
"--quiet", // Reduce systemd noise
"--collect", // Automatically clean up after exit
}
// Add description for easier identification
systemdArgs = append(systemdArgs, fmt.Sprintf("--description=dbbackup: %s", name))
// Add resource properties
if limits.MemoryHigh != "" {
systemdArgs = append(systemdArgs, fmt.Sprintf("--property=MemoryHigh=%s", limits.MemoryHigh))
}
if limits.MemoryMax != "" {
systemdArgs = append(systemdArgs, fmt.Sprintf("--property=MemoryMax=%s", limits.MemoryMax))
}
if limits.CPUQuota != "" {
systemdArgs = append(systemdArgs, fmt.Sprintf("--property=CPUQuota=%s", limits.CPUQuota))
}
if limits.IOWeight > 0 {
systemdArgs = append(systemdArgs, fmt.Sprintf("--property=IOWeight=%d", limits.IOWeight))
}
if limits.Nice != 0 {
systemdArgs = append(systemdArgs, fmt.Sprintf("--property=Nice=%d", limits.Nice))
}
if limits.Slice != "" {
systemdArgs = append(systemdArgs, fmt.Sprintf("--slice=%s", limits.Slice))
}
// Add separator and command
systemdArgs = append(systemdArgs, "--")
systemdArgs = append(systemdArgs, name)
systemdArgs = append(systemdArgs, args...)
return systemdArgs
}
// WrapCommand creates an exec.Cmd that runs with resource limits
// This allows the caller to customize stdin/stdout/stderr before running
func WrapCommand(ctx context.Context, log logger.Logger, limits *ResourceLimits, name string, args ...string) *exec.Cmd {
if limits == nil {
limits = DefaultResourceLimits()
}
// If systemd-run not available, return direct command
if !SystemdRunAvailable() {
log.Debug("systemd-run not available, returning unwrapped command")
return exec.CommandContext(ctx, name, args...)
}
// Build systemd-run command
systemdArgs := buildSystemdArgs(limits, name, args)
log.Debug("Wrapping command with systemd resource limits",
"command", name,
"memory_high", limits.MemoryHigh)
return exec.CommandContext(ctx, "systemd-run", systemdArgs...)
}
// ResourceLimitsFromConfig creates resource limits from size estimates
// Useful for dynamically setting limits based on backup/restore size
func ResourceLimitsFromConfig(estimatedSizeBytes int64, isRestore bool) *ResourceLimits {
limits := DefaultResourceLimits()
// Estimate memory needs based on data size
// Restore needs more memory than backup
var memoryMultiplier float64 = 0.1 // 10% of data size for backup
if isRestore {
memoryMultiplier = 0.2 // 20% of data size for restore
}
estimatedMemMB := int64(float64(estimatedSizeBytes/1024/1024) * memoryMultiplier)
// Clamp to reasonable values
if estimatedMemMB < 512 {
estimatedMemMB = 512 // Minimum 512MB
}
if estimatedMemMB > 16384 {
estimatedMemMB = 16384 // Maximum 16GB
}
limits.MemoryHigh = fmt.Sprintf("%dM", estimatedMemMB)
limits.MemoryMax = fmt.Sprintf("%dM", estimatedMemMB*2) // 2x high limit
return limits
}
// GetActiveResourceUsage returns current resource usage if running in systemd scope
func GetActiveResourceUsage() (string, error) {
if !SystemdRunAvailable() {
return "", fmt.Errorf("systemd not available")
}
// Check if we're running in a scope
cmd := exec.Command("systemctl", "--user", "status", "--no-pager")
output, err := cmd.Output()
if err != nil {
return "", fmt.Errorf("failed to get systemd status: %w", err)
}
// Extract dbbackup-related scopes
lines := strings.Split(string(output), "\n")
var dbbackupLines []string
for _, line := range lines {
if strings.Contains(line, "dbbackup") {
dbbackupLines = append(dbbackupLines, strings.TrimSpace(line))
}
}
if len(dbbackupLines) == 0 {
return "No active dbbackup scopes", nil
}
return strings.Join(dbbackupLines, "\n"), nil
}
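The sizing heuristic in `ResourceLimitsFromConfig` (10% of data size for backups, 20% for restores, clamped between 512 MB and 16 GB, with `MemoryMax` at twice `MemoryHigh`) can be sketched standalone. This is a hypothetical re-implementation for illustration, not the package's exported API:

```go
package main

import "fmt"

// estimateLimits mirrors the sizing rule: ~10% of data size for backups,
// ~20% for restores, clamped to [512 MB, 16 GB]; the hard limit is twice
// the high limit. (Illustrative sketch, names are not from the package.)
func estimateLimits(sizeBytes int64, isRestore bool) (memoryHigh, memoryMax string) {
	mult := 0.1
	if isRestore {
		mult = 0.2
	}
	mb := int64(float64(sizeBytes/1024/1024) * mult)
	if mb < 512 {
		mb = 512 // floor: 512 MB
	}
	if mb > 16384 {
		mb = 16384 // ceiling: 16 GB
	}
	return fmt.Sprintf("%dM", mb), fmt.Sprintf("%dM", mb*2)
}

func main() {
	// A 10 GiB restore: 10240 MB * 0.2 = 2048 MB high, 4096 MB max.
	high, max := estimateLimits(10*1024*1024*1024, true)
	fmt.Println(high, max) // 2048M 4096M
}
```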


@ -131,6 +131,9 @@ type Config struct {
TUIVerbose bool // Verbose TUI logging
TUILogFile string // TUI event log file path
// Safety options
SkipPreflightChecks bool // Skip pre-restore safety checks (archive integrity, disk space, etc.)
// Cloud storage options (v2.0)
CloudEnabled bool // Enable cloud storage integration
CloudProvider string // "s3", "minio", "b2", "azure", "gcs"
@ -319,7 +322,8 @@ func (c *Config) UpdateFromEnvironment() {
if password := os.Getenv("PGPASSWORD"); password != "" {
c.Password = password
}
if password := os.Getenv("MYSQL_PWD"); password != "" && c.DatabaseType == "mysql" {
// MYSQL_PWD works for both mysql and mariadb
if password := os.Getenv("MYSQL_PWD"); password != "" && (c.DatabaseType == "mysql" || c.DatabaseType == "mariadb") {
c.Password = password
}
}


@ -6,6 +6,7 @@ import (
"path/filepath"
"strconv"
"strings"
"time"
)
const ConfigFileName = ".dbbackup.conf"
@ -40,9 +41,53 @@ type LocalConfig struct {
MaxRetries int
}
// LoadLocalConfig loads configuration from .dbbackup.conf in current directory
// ConfigSearchPaths returns all paths where config files are searched, in order of priority
func ConfigSearchPaths() []string {
paths := []string{
filepath.Join(".", ConfigFileName), // Current directory (highest priority)
}
// User's home directory
if home, err := os.UserHomeDir(); err == nil && home != "" {
paths = append(paths, filepath.Join(home, ConfigFileName))
}
// System-wide config locations
paths = append(paths,
"/etc/dbbackup.conf",
"/etc/dbbackup/dbbackup.conf",
)
return paths
}
// LoadLocalConfig loads configuration from .dbbackup.conf
// Search order: 1) current directory, 2) user's home directory, 3) /etc/dbbackup.conf, 4) /etc/dbbackup/dbbackup.conf
func LoadLocalConfig() (*LocalConfig, error) {
return LoadLocalConfigFromPath(filepath.Join(".", ConfigFileName))
for _, path := range ConfigSearchPaths() {
cfg, err := LoadLocalConfigFromPath(path)
if err != nil {
return nil, err
}
if cfg != nil {
return cfg, nil
}
}
return nil, nil
}
// LoadLocalConfigWithPath loads configuration and returns the path it was loaded from
func LoadLocalConfigWithPath() (*LocalConfig, string, error) {
for _, path := range ConfigSearchPaths() {
cfg, err := LoadLocalConfigFromPath(path)
if err != nil {
return nil, "", err
}
if cfg != nil {
return cfg, path, nil
}
}
return nil, "", nil
}
// LoadLocalConfigFromPath loads configuration from a specific path
@ -159,115 +204,89 @@ func LoadLocalConfigFromPath(configPath string) (*LocalConfig, error) {
// SaveLocalConfig saves configuration to .dbbackup.conf in current directory
func SaveLocalConfig(cfg *LocalConfig) error {
return SaveLocalConfigToPath(cfg, filepath.Join(".", ConfigFileName))
}
// SaveLocalConfigToPath saves configuration to a specific path
func SaveLocalConfigToPath(cfg *LocalConfig, configPath string) error {
var sb strings.Builder
sb.WriteString("# dbbackup configuration\n")
sb.WriteString("# This file is auto-generated. Edit with care.\n\n")
sb.WriteString("# This file is auto-generated. Edit with care.\n")
sb.WriteString(fmt.Sprintf("# Saved: %s\n\n", time.Now().Format(time.RFC3339)))
// Database section
// Database section - ALWAYS write all values
sb.WriteString("[database]\n")
if cfg.DBType != "" {
sb.WriteString(fmt.Sprintf("type = %s\n", cfg.DBType))
}
if cfg.Host != "" {
sb.WriteString(fmt.Sprintf("host = %s\n", cfg.Host))
}
if cfg.Port != 0 {
sb.WriteString(fmt.Sprintf("port = %d\n", cfg.Port))
}
if cfg.User != "" {
sb.WriteString(fmt.Sprintf("user = %s\n", cfg.User))
}
if cfg.Database != "" {
sb.WriteString(fmt.Sprintf("database = %s\n", cfg.Database))
}
if cfg.SSLMode != "" {
sb.WriteString(fmt.Sprintf("ssl_mode = %s\n", cfg.SSLMode))
}
sb.WriteString(fmt.Sprintf("type = %s\n", cfg.DBType))
sb.WriteString(fmt.Sprintf("host = %s\n", cfg.Host))
sb.WriteString(fmt.Sprintf("port = %d\n", cfg.Port))
sb.WriteString(fmt.Sprintf("user = %s\n", cfg.User))
sb.WriteString(fmt.Sprintf("database = %s\n", cfg.Database))
sb.WriteString(fmt.Sprintf("ssl_mode = %s\n", cfg.SSLMode))
sb.WriteString("\n")
// Backup section
// Backup section - ALWAYS write all values (including 0)
sb.WriteString("[backup]\n")
if cfg.BackupDir != "" {
sb.WriteString(fmt.Sprintf("backup_dir = %s\n", cfg.BackupDir))
}
sb.WriteString(fmt.Sprintf("backup_dir = %s\n", cfg.BackupDir))
if cfg.WorkDir != "" {
sb.WriteString(fmt.Sprintf("work_dir = %s\n", cfg.WorkDir))
}
if cfg.Compression != 0 {
sb.WriteString(fmt.Sprintf("compression = %d\n", cfg.Compression))
}
if cfg.Jobs != 0 {
sb.WriteString(fmt.Sprintf("jobs = %d\n", cfg.Jobs))
}
if cfg.DumpJobs != 0 {
sb.WriteString(fmt.Sprintf("dump_jobs = %d\n", cfg.DumpJobs))
}
sb.WriteString(fmt.Sprintf("compression = %d\n", cfg.Compression))
sb.WriteString(fmt.Sprintf("jobs = %d\n", cfg.Jobs))
sb.WriteString(fmt.Sprintf("dump_jobs = %d\n", cfg.DumpJobs))
sb.WriteString("\n")
// Performance section
// Performance section - ALWAYS write all values
sb.WriteString("[performance]\n")
if cfg.CPUWorkload != "" {
sb.WriteString(fmt.Sprintf("cpu_workload = %s\n", cfg.CPUWorkload))
}
if cfg.MaxCores != 0 {
sb.WriteString(fmt.Sprintf("max_cores = %d\n", cfg.MaxCores))
}
if cfg.ClusterTimeout != 0 {
sb.WriteString(fmt.Sprintf("cluster_timeout = %d\n", cfg.ClusterTimeout))
}
sb.WriteString(fmt.Sprintf("cpu_workload = %s\n", cfg.CPUWorkload))
sb.WriteString(fmt.Sprintf("max_cores = %d\n", cfg.MaxCores))
sb.WriteString(fmt.Sprintf("cluster_timeout = %d\n", cfg.ClusterTimeout))
if cfg.ResourceProfile != "" {
sb.WriteString(fmt.Sprintf("resource_profile = %s\n", cfg.ResourceProfile))
}
if cfg.LargeDBMode {
sb.WriteString("large_db_mode = true\n")
}
sb.WriteString(fmt.Sprintf("large_db_mode = %t\n", cfg.LargeDBMode))
sb.WriteString("\n")
// Security section
// Security section - ALWAYS write all values
sb.WriteString("[security]\n")
if cfg.RetentionDays != 0 {
sb.WriteString(fmt.Sprintf("retention_days = %d\n", cfg.RetentionDays))
}
if cfg.MinBackups != 0 {
sb.WriteString(fmt.Sprintf("min_backups = %d\n", cfg.MinBackups))
}
if cfg.MaxRetries != 0 {
sb.WriteString(fmt.Sprintf("max_retries = %d\n", cfg.MaxRetries))
}
sb.WriteString(fmt.Sprintf("retention_days = %d\n", cfg.RetentionDays))
sb.WriteString(fmt.Sprintf("min_backups = %d\n", cfg.MinBackups))
sb.WriteString(fmt.Sprintf("max_retries = %d\n", cfg.MaxRetries))
configPath := filepath.Join(".", ConfigFileName)
// Use 0600 permissions for security (readable/writable only by owner)
if err := os.WriteFile(configPath, []byte(sb.String()), 0600); err != nil {
return fmt.Errorf("failed to write config file: %w", err)
// Use 0644 permissions for readability
if err := os.WriteFile(configPath, []byte(sb.String()), 0644); err != nil {
return fmt.Errorf("failed to write config file %s: %w", configPath, err)
}
return nil
}
// ApplyLocalConfig applies loaded local config to the main config if values are not already set
// ApplyLocalConfig applies loaded local config to the main config.
// All non-empty/non-zero values from the config file are applied.
// CLI flag overrides are handled separately in root.go after this function.
func ApplyLocalConfig(cfg *Config, local *LocalConfig) {
if local == nil {
return
}
// Only apply if not already set via flags
if cfg.DatabaseType == "postgres" && local.DBType != "" {
// Apply all non-empty values from config file
// CLI flags override these in root.go after ApplyLocalConfig is called
if local.DBType != "" {
cfg.DatabaseType = local.DBType
}
if cfg.Host == "localhost" && local.Host != "" {
if local.Host != "" {
cfg.Host = local.Host
}
if cfg.Port == 5432 && local.Port != 0 {
if local.Port != 0 {
cfg.Port = local.Port
}
if cfg.User == "root" && local.User != "" {
if local.User != "" {
cfg.User = local.User
}
if local.Database != "" {
cfg.Database = local.Database
}
if cfg.SSLMode == "prefer" && local.SSLMode != "" {
if local.SSLMode != "" {
cfg.SSLMode = local.SSLMode
}
if local.BackupDir != "" {
@ -276,7 +295,7 @@ func ApplyLocalConfig(cfg *Config, local *LocalConfig) {
if local.WorkDir != "" {
cfg.WorkDir = local.WorkDir
}
if cfg.CompressionLevel == 6 && local.Compression != 0 {
if local.Compression != 0 {
cfg.CompressionLevel = local.Compression
}
if local.Jobs != 0 {
@ -285,31 +304,28 @@ func ApplyLocalConfig(cfg *Config, local *LocalConfig) {
if local.DumpJobs != 0 {
cfg.DumpJobs = local.DumpJobs
}
if cfg.CPUWorkloadType == "balanced" && local.CPUWorkload != "" {
if local.CPUWorkload != "" {
cfg.CPUWorkloadType = local.CPUWorkload
}
if local.MaxCores != 0 {
cfg.MaxCores = local.MaxCores
}
// Apply cluster timeout from config file (overrides default)
if local.ClusterTimeout != 0 {
cfg.ClusterTimeoutMinutes = local.ClusterTimeout
}
// Apply resource profile settings
if local.ResourceProfile != "" {
cfg.ResourceProfile = local.ResourceProfile
}
// LargeDBMode is a boolean - apply if true in config
if local.LargeDBMode {
cfg.LargeDBMode = true
}
if cfg.RetentionDays == 30 && local.RetentionDays != 0 {
if local.RetentionDays != 0 {
cfg.RetentionDays = local.RetentionDays
}
if cfg.MinBackups == 5 && local.MinBackups != 0 {
if local.MinBackups != 0 {
cfg.MinBackups = local.MinBackups
}
if cfg.MaxRetries == 3 && local.MaxRetries != 0 {
if local.MaxRetries != 0 {
cfg.MaxRetries = local.MaxRetries
}
}
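The new precedence model (config file applies every non-zero value over the built-in default; CLI flags, applied afterwards in root.go, win over both) can be sketched with a minimal hypothetical helper:

```go
package main

import "fmt"

// applyInt mirrors the precedence above: a non-zero config-file value
// replaces the default, and a CLI flag applied afterwards wins over both.
func applyInt(def, fromFile, fromFlag int, flagSet bool) int {
	v := def
	if fromFile != 0 {
		v = fromFile // config file overrides the default
	}
	if flagSet {
		v = fromFlag // CLI flag overrides the config file
	}
	return v
}

func main() {
	fmt.Println(applyInt(6, 9, 0, false)) // config file wins over default: 9
	fmt.Println(applyInt(6, 9, 3, true))  // CLI flag wins over config: 3
}
```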


@ -0,0 +1,178 @@
package config
import (
"os"
"path/filepath"
"testing"
)
func TestConfigSaveLoad(t *testing.T) {
// Create a temp directory
tmpDir, err := os.MkdirTemp("", "dbbackup-config-test")
if err != nil {
t.Fatalf("Failed to create temp dir: %v", err)
}
defer os.RemoveAll(tmpDir)
configPath := filepath.Join(tmpDir, ".dbbackup.conf")
// Create test config with ALL fields set
original := &LocalConfig{
DBType: "postgres",
Host: "test-host-123",
Port: 5432,
User: "testuser",
Database: "testdb",
SSLMode: "require",
BackupDir: "/test/backups",
WorkDir: "/test/work",
Compression: 9,
Jobs: 16,
DumpJobs: 8,
CPUWorkload: "aggressive",
MaxCores: 32,
ClusterTimeout: 180,
ResourceProfile: "high",
LargeDBMode: true,
RetentionDays: 14,
MinBackups: 3,
MaxRetries: 5,
}
// Save to specific path
err = SaveLocalConfigToPath(original, configPath)
if err != nil {
t.Fatalf("Failed to save config: %v", err)
}
// Verify file exists
if _, err := os.Stat(configPath); os.IsNotExist(err) {
t.Fatalf("Config file not created at %s", configPath)
}
// Load it back
loaded, err := LoadLocalConfigFromPath(configPath)
if err != nil {
t.Fatalf("Failed to load config: %v", err)
}
if loaded == nil {
t.Fatal("Loaded config is nil")
}
// Verify ALL values
if loaded.DBType != original.DBType {
t.Errorf("DBType mismatch: got %s, want %s", loaded.DBType, original.DBType)
}
if loaded.Host != original.Host {
t.Errorf("Host mismatch: got %s, want %s", loaded.Host, original.Host)
}
if loaded.Port != original.Port {
t.Errorf("Port mismatch: got %d, want %d", loaded.Port, original.Port)
}
if loaded.User != original.User {
t.Errorf("User mismatch: got %s, want %s", loaded.User, original.User)
}
if loaded.Database != original.Database {
t.Errorf("Database mismatch: got %s, want %s", loaded.Database, original.Database)
}
if loaded.SSLMode != original.SSLMode {
t.Errorf("SSLMode mismatch: got %s, want %s", loaded.SSLMode, original.SSLMode)
}
if loaded.BackupDir != original.BackupDir {
t.Errorf("BackupDir mismatch: got %s, want %s", loaded.BackupDir, original.BackupDir)
}
if loaded.WorkDir != original.WorkDir {
t.Errorf("WorkDir mismatch: got %s, want %s", loaded.WorkDir, original.WorkDir)
}
if loaded.Compression != original.Compression {
t.Errorf("Compression mismatch: got %d, want %d", loaded.Compression, original.Compression)
}
if loaded.Jobs != original.Jobs {
t.Errorf("Jobs mismatch: got %d, want %d", loaded.Jobs, original.Jobs)
}
if loaded.DumpJobs != original.DumpJobs {
t.Errorf("DumpJobs mismatch: got %d, want %d", loaded.DumpJobs, original.DumpJobs)
}
if loaded.CPUWorkload != original.CPUWorkload {
t.Errorf("CPUWorkload mismatch: got %s, want %s", loaded.CPUWorkload, original.CPUWorkload)
}
if loaded.MaxCores != original.MaxCores {
t.Errorf("MaxCores mismatch: got %d, want %d", loaded.MaxCores, original.MaxCores)
}
if loaded.ClusterTimeout != original.ClusterTimeout {
t.Errorf("ClusterTimeout mismatch: got %d, want %d", loaded.ClusterTimeout, original.ClusterTimeout)
}
if loaded.ResourceProfile != original.ResourceProfile {
t.Errorf("ResourceProfile mismatch: got %s, want %s", loaded.ResourceProfile, original.ResourceProfile)
}
if loaded.LargeDBMode != original.LargeDBMode {
t.Errorf("LargeDBMode mismatch: got %t, want %t", loaded.LargeDBMode, original.LargeDBMode)
}
if loaded.RetentionDays != original.RetentionDays {
t.Errorf("RetentionDays mismatch: got %d, want %d", loaded.RetentionDays, original.RetentionDays)
}
if loaded.MinBackups != original.MinBackups {
t.Errorf("MinBackups mismatch: got %d, want %d", loaded.MinBackups, original.MinBackups)
}
if loaded.MaxRetries != original.MaxRetries {
t.Errorf("MaxRetries mismatch: got %d, want %d", loaded.MaxRetries, original.MaxRetries)
}
t.Log("✅ All config fields save/load correctly!")
}
func TestConfigSaveZeroValues(t *testing.T) {
// This tests that 0 values are saved and loaded correctly
tmpDir, err := os.MkdirTemp("", "dbbackup-config-test-zero")
if err != nil {
t.Fatalf("Failed to create temp dir: %v", err)
}
defer os.RemoveAll(tmpDir)
configPath := filepath.Join(tmpDir, ".dbbackup.conf")
// Config with 0/false values intentionally
original := &LocalConfig{
DBType: "postgres",
Host: "localhost",
Port: 5432,
User: "postgres",
Database: "test",
SSLMode: "disable",
BackupDir: "/backups",
Compression: 0, // Intentionally 0 = no compression
Jobs: 1,
DumpJobs: 1,
CPUWorkload: "conservative",
MaxCores: 1,
ClusterTimeout: 0, // No timeout
LargeDBMode: false,
RetentionDays: 0, // Keep forever
MinBackups: 0,
MaxRetries: 0,
}
// Save
err = SaveLocalConfigToPath(original, configPath)
if err != nil {
t.Fatalf("Failed to save config: %v", err)
}
// Load
loaded, err := LoadLocalConfigFromPath(configPath)
if err != nil {
t.Fatalf("Failed to load config: %v", err)
}
// The values that are 0/false should still load correctly
// Note: In INI format, 0 values ARE written and loaded
if loaded.Compression != 0 {
t.Errorf("Compression should be 0, got %d", loaded.Compression)
}
if loaded.LargeDBMode != false {
t.Errorf("LargeDBMode should be false, got %t", loaded.LargeDBMode)
}
t.Log("✅ Zero values handled correctly!")
}


@ -265,6 +265,13 @@ func (e *AESEncryptor) EncryptFile(inputPath, outputPath string, key []byte) err
// DecryptFile decrypts a file
func (e *AESEncryptor) DecryptFile(inputPath, outputPath string, key []byte) error {
// Handle in-place decryption (input == output)
inPlace := inputPath == outputPath
actualOutputPath := outputPath
if inPlace {
actualOutputPath = outputPath + ".decrypted.tmp"
}
// Open input file
inFile, err := os.Open(inputPath)
if err != nil {
@ -273,7 +280,7 @@ func (e *AESEncryptor) DecryptFile(inputPath, outputPath string, key []byte) err
defer inFile.Close()
// Create output file
outFile, err := os.Create(outputPath)
outFile, err := os.Create(actualOutputPath)
if err != nil {
return fmt.Errorf("failed to create output file: %w", err)
}
@ -287,8 +294,29 @@ func (e *AESEncryptor) DecryptFile(inputPath, outputPath string, key []byte) err
// Copy decrypted data to output file
if _, err := io.Copy(outFile, decReader); err != nil {
// Clean up temp file on failure
if inPlace {
os.Remove(actualOutputPath)
}
return fmt.Errorf("failed to write decrypted data: %w", err)
}
// For in-place decryption, replace original file
if inPlace {
outFile.Close() // Close before rename
inFile.Close() // Close before remove
// Remove original encrypted file
if err := os.Remove(inputPath); err != nil {
os.Remove(actualOutputPath)
return fmt.Errorf("failed to remove original file: %w", err)
}
// Rename decrypted file to original name
if err := os.Rename(actualOutputPath, outputPath); err != nil {
return fmt.Errorf("failed to rename decrypted file: %w", err)
}
}
return nil
}


@ -74,7 +74,7 @@ func (p *PostgreSQL) Connect(ctx context.Context) error {
config.MinConns = 2 // Keep minimum connections ready
config.MaxConnLifetime = 0 // No limit on connection lifetime
config.MaxConnIdleTime = 0 // No idle timeout
config.HealthCheckPeriod = 1 * time.Minute // Health check every minute
config.HealthCheckPeriod = 5 * time.Second // Faster health check for quicker shutdown on Ctrl+C
// Optimize for large query results (BLOB data)
config.ConnConfig.RuntimeParams["work_mem"] = "64MB"
@ -97,6 +97,14 @@ func (p *PostgreSQL) Connect(ctx context.Context) error {
p.pool = pool
p.db = db
// NOTE: We intentionally do NOT start a goroutine to close the pool on context cancellation.
// The pool is closed via defer dbClient.Close() in the caller, which is the correct pattern.
// Starting a goroutine here causes goroutine leaks and potential double-close issues when:
// 1. The caller's defer runs first (normal case)
// 2. Then context is cancelled and the goroutine tries to close an already-closed pool
// This was causing deadlocks in the TUI when tea.Batch was waiting for commands to complete.
p.log.Info("Connected to PostgreSQL successfully", "driver", "pgx", "max_conns", config.MaxConns)
return nil
}
@ -324,12 +332,21 @@ func (p *PostgreSQL) BuildBackupCommand(database, outputFile string, options Bac
cmd := []string{"pg_dump"}
// Connection parameters
// CRITICAL: Always pass port even for localhost - user may have non-standard port
if p.cfg.Host != "localhost" && p.cfg.Host != "127.0.0.1" && p.cfg.Host != "" {
// CRITICAL: For Unix socket paths (starting with /), use -h with socket dir but NO port
// This enables peer authentication via socket. Port would force TCP connection.
isSocketPath := strings.HasPrefix(p.cfg.Host, "/")
if isSocketPath {
// Unix socket: use -h with socket directory, no port needed
cmd = append(cmd, "-h", p.cfg.Host)
} else if p.cfg.Host != "localhost" && p.cfg.Host != "127.0.0.1" && p.cfg.Host != "" {
// Remote host: use -h and port
cmd = append(cmd, "-h", p.cfg.Host)
cmd = append(cmd, "--no-password")
cmd = append(cmd, "-p", strconv.Itoa(p.cfg.Port))
} else {
// localhost: always pass port for non-standard port configs
cmd = append(cmd, "-p", strconv.Itoa(p.cfg.Port))
}
cmd = append(cmd, "-p", strconv.Itoa(p.cfg.Port))
cmd = append(cmd, "-U", p.cfg.User)
// Format and compression
@ -347,9 +364,10 @@ func (p *PostgreSQL) BuildBackupCommand(database, outputFile string, options Bac
cmd = append(cmd, "--compress="+strconv.Itoa(options.Compression))
}
// Parallel jobs (supported for directory and custom formats since PostgreSQL 9.3)
// Parallel jobs (ONLY supported for directory format in pg_dump)
// NOTE: custom format does NOT support --jobs despite PostgreSQL docs being unclear
// NOTE: plain format does NOT support --jobs (it's single-threaded by design)
if options.Parallel > 1 && (options.Format == "directory" || options.Format == "custom") {
if options.Parallel > 1 && options.Format == "directory" {
cmd = append(cmd, "--jobs="+strconv.Itoa(options.Parallel))
}
@ -390,12 +408,21 @@ func (p *PostgreSQL) BuildRestoreCommand(database, inputFile string, options Res
cmd := []string{"pg_restore"}
// Connection parameters
// CRITICAL: Always pass port even for localhost - user may have non-standard port
if p.cfg.Host != "localhost" && p.cfg.Host != "127.0.0.1" && p.cfg.Host != "" {
// CRITICAL: For Unix socket paths (starting with /), use -h with socket dir but NO port
// This enables peer authentication via socket. Port would force TCP connection.
isSocketPath := strings.HasPrefix(p.cfg.Host, "/")
if isSocketPath {
// Unix socket: use -h with socket directory, no port needed
cmd = append(cmd, "-h", p.cfg.Host)
} else if p.cfg.Host != "localhost" && p.cfg.Host != "127.0.0.1" && p.cfg.Host != "" {
// Remote host: use -h and port
cmd = append(cmd, "-h", p.cfg.Host)
cmd = append(cmd, "--no-password")
cmd = append(cmd, "-p", strconv.Itoa(p.cfg.Port))
} else {
// localhost: always pass port for non-standard port configs
cmd = append(cmd, "-p", strconv.Itoa(p.cfg.Port))
}
cmd = append(cmd, "-p", strconv.Itoa(p.cfg.Port))
cmd = append(cmd, "-U", p.cfg.User)
// Parallel jobs (incompatible with --single-transaction per PostgreSQL docs)
@ -486,6 +513,15 @@ func (p *PostgreSQL) buildPgxDSN() string {
// pgx supports both URL and keyword=value formats
// Use keyword format for Unix sockets, URL for TCP
// Check if host is an explicit Unix socket path (starts with /)
if strings.HasPrefix(p.cfg.Host, "/") {
// User provided explicit socket directory path
dsn := fmt.Sprintf("user=%s dbname=%s host=%s sslmode=disable",
p.cfg.User, p.cfg.Database, p.cfg.Host)
p.log.Debug("Using explicit PostgreSQL socket path", "path", p.cfg.Host)
return dsn
}
// Try Unix socket first for localhost without password
if p.cfg.Host == "localhost" && p.cfg.Password == "" {
socketDirs := []string{

View File

@@ -147,9 +147,10 @@ func (dm *DockerManager) healthCheckCommand(dbType string) []string {
case "postgresql", "postgres":
return []string{"pg_isready", "-U", "postgres"}
case "mysql":
return []string{"mysqladmin", "ping", "-h", "localhost", "-u", "root", "--password=root"}
return []string{"mysqladmin", "ping", "-h", "127.0.0.1", "-u", "root", "--password=root"}
case "mariadb":
return []string{"mariadb-admin", "ping", "-h", "localhost", "-u", "root", "--password=root"}
// Use mariadb-admin with TCP connection
return []string{"mariadb-admin", "ping", "-h", "127.0.0.1", "-u", "root", "--password=root"}
default:
return []string{"echo", "ok"}
}

View File

@@ -334,16 +334,29 @@ func (e *Engine) executeRestore(ctx context.Context, config *DrillConfig, contai
// Detect restore method based on file content
isCustomFormat := strings.Contains(backupPath, ".dump") || strings.Contains(backupPath, ".custom")
if isCustomFormat {
cmd = []string{"pg_restore", "-U", "postgres", "-d", config.DatabaseName, "-v", backupPath}
// Use --no-owner and --no-acl to avoid OWNER/GRANT errors in container
// (original owner/roles don't exist in isolated container)
cmd = []string{"pg_restore", "-U", "postgres", "-d", config.DatabaseName, "-v", "--no-owner", "--no-acl", backupPath}
} else {
cmd = []string{"sh", "-c", fmt.Sprintf("psql -U postgres -d %s < %s", config.DatabaseName, backupPath)}
}
case "mysql":
cmd = []string{"sh", "-c", fmt.Sprintf("mysql -u root --password=root %s < %s", config.DatabaseName, backupPath)}
// Drop database if exists (backup contains CREATE DATABASE)
_, _ = e.docker.ExecCommand(ctx, containerID, []string{
"mysql", "-h", "127.0.0.1", "-u", "root", "--password=root", "-e",
fmt.Sprintf("DROP DATABASE IF EXISTS %s", config.DatabaseName),
})
cmd = []string{"sh", "-c", fmt.Sprintf("mysql -h 127.0.0.1 -u root --password=root < %s", backupPath)}
case "mariadb":
cmd = []string{"sh", "-c", fmt.Sprintf("mariadb -u root --password=root %s < %s", config.DatabaseName, backupPath)}
// Drop database if exists (backup contains CREATE DATABASE)
_, _ = e.docker.ExecCommand(ctx, containerID, []string{
"mariadb", "-h", "127.0.0.1", "-u", "root", "--password=root", "-e",
fmt.Sprintf("DROP DATABASE IF EXISTS %s", config.DatabaseName),
})
// Use mariadb client (mysql symlink may not exist in newer images)
cmd = []string{"sh", "-c", fmt.Sprintf("mariadb -h 127.0.0.1 -u root --password=root < %s", backupPath)}
default:
return fmt.Errorf("unsupported database type: %s", config.DatabaseType)
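The MySQL/MariaDB drop-then-restore sequence above can be sketched as a two-step plan (hypothetical `restorePlan` helper, not the engine's actual API): the backup file carries its own `CREATE DATABASE`, so any stale copy is dropped first, then the dump is piped through the client over TCP.

```go
package main

import "fmt"

// step is one exec'd command in the restore sequence.
type step struct{ argv []string }

// restorePlan sketches the drop-then-pipe restore flow for MySQL/MariaDB
// drill containers (illustrative only; credentials match the test containers).
func restorePlan(client, db, backupPath string) []step {
	return []step{
		// 1. Drop any stale database; the dump recreates it
		{[]string{client, "-h", "127.0.0.1", "-u", "root", "--password=root", "-e",
			fmt.Sprintf("DROP DATABASE IF EXISTS %s", db)}},
		// 2. Pipe the dump through the client via the shell
		{[]string{"sh", "-c", fmt.Sprintf("%s -h 127.0.0.1 -u root --password=root < %s", client, backupPath)}},
	}
}

func main() {
	for _, s := range restorePlan("mariadb", "drill_db", "/backups/drill.sql") {
		fmt.Println(s.argv)
	}
}
```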

View File

@@ -0,0 +1,947 @@
package native
import (
"bytes"
"compress/gzip"
"context"
"encoding/hex"
"fmt"
"io"
"os"
"path/filepath"
"sort"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/jackc/pgx/v5/pgxpool"
"dbbackup/internal/logger"
)
// ═══════════════════════════════════════════════════════════════════════════════
// DBBACKUP BLOB PARALLEL ENGINE
// ═══════════════════════════════════════════════════════════════════════════════
// PostgreSQL Specialist + Go Developer + Linux Admin collaboration
//
// This module provides OPTIMIZED parallel backup and restore for:
// 1. BYTEA columns - Binary data stored inline in tables
// 2. Large Objects (pg_largeobject) - External BLOB storage via OID references
// 3. TOAST data - PostgreSQL's automatic large value compression
//
// KEY OPTIMIZATIONS:
// - Parallel table COPY operations (like pg_dump -j)
// - Streaming BYTEA with chunked processing (avoids memory spikes)
// - Large Object parallel export using lo_read()
// - Connection pooling with optimal pool size
// - Binary format for maximum throughput
// - Pipelined writes to minimize syscalls
// ═══════════════════════════════════════════════════════════════════════════════
// BlobConfig configures BLOB handling optimization
type BlobConfig struct {
// Number of parallel workers for BLOB operations
Workers int
// Chunk size for streaming large BLOBs (default: 8MB)
ChunkSize int64
// Threshold for considering a BLOB "large" (default: 10MB)
LargeBlobThreshold int64
// Whether to use binary format for COPY (faster but less portable)
UseBinaryFormat bool
// Buffer size for COPY operations (default: 1MB)
CopyBufferSize int
// Progress callback for monitoring
ProgressCallback func(phase string, table string, current, total int64, bytesProcessed int64)
// WorkDir for temp files during large BLOB operations
WorkDir string
}
// DefaultBlobConfig returns optimized defaults
func DefaultBlobConfig() *BlobConfig {
return &BlobConfig{
Workers: 4,
ChunkSize: 8 * 1024 * 1024, // 8MB chunks for streaming
LargeBlobThreshold: 10 * 1024 * 1024, // 10MB = "large"
UseBinaryFormat: false, // Text format for compatibility
CopyBufferSize: 1024 * 1024, // 1MB buffer
WorkDir: os.TempDir(),
}
}
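The constructor below enforces floors on these tunables; the same validation can be shown standalone (a sketch of `NewBlobParallelEngine`'s checks, assuming only the numeric fields matter here):

```go
package main

import "fmt"

// BlobConfig mirrors the numeric tunables above (trimmed for illustration).
type BlobConfig struct {
	Workers        int
	ChunkSize      int64
	CopyBufferSize int
}

// normalize applies the constructor's floor checks: anything below a sane
// minimum snaps back to the default.
func normalize(c BlobConfig) BlobConfig {
	if c.Workers < 1 {
		c.Workers = 4 // default parallelism
	}
	if c.ChunkSize < 1<<20 {
		c.ChunkSize = 8 << 20 // 8MB streaming chunks
	}
	if c.CopyBufferSize < 64<<10 {
		c.CopyBufferSize = 1 << 20 // 1MB COPY buffer
	}
	return c
}

func main() {
	fmt.Println(normalize(BlobConfig{})) // all fields below floor snap to defaults
}
```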
// BlobParallelEngine handles optimized BLOB backup/restore
type BlobParallelEngine struct {
pool *pgxpool.Pool
log logger.Logger
config *BlobConfig
// Statistics
stats BlobStats
}
// BlobStats tracks BLOB operation statistics
type BlobStats struct {
TablesProcessed int64
TotalRows int64
TotalBytes int64
LargeObjectsCount int64
LargeObjectsBytes int64
ByteaColumnsCount int64
ByteaColumnsBytes int64
Duration time.Duration
ParallelWorkers int
TablesWithBlobs []string
LargestBlobSize int64
LargestBlobTable string
AverageBlobSize int64
CompressionRatio float64
ThroughputMBps float64
}
// TableBlobInfo contains BLOB information for a table
type TableBlobInfo struct {
Schema string
Table string
ByteaColumns []string // Columns containing BYTEA data
HasLargeData bool // Table contains BLOB > threshold
EstimatedSize int64 // Estimated BLOB data size
RowCount int64
Priority int // Processing priority (larger = first)
}
// NewBlobParallelEngine creates a new BLOB-optimized engine
func NewBlobParallelEngine(pool *pgxpool.Pool, log logger.Logger, config *BlobConfig) *BlobParallelEngine {
if config == nil {
config = DefaultBlobConfig()
}
if config.Workers < 1 {
config.Workers = 4
}
if config.ChunkSize < 1024*1024 {
config.ChunkSize = 8 * 1024 * 1024
}
if config.CopyBufferSize < 64*1024 {
config.CopyBufferSize = 1024 * 1024
}
return &BlobParallelEngine{
pool: pool,
log: log,
config: config,
}
}
// ═══════════════════════════════════════════════════════════════════════════════
// PHASE 1: BLOB DISCOVERY & ANALYSIS
// ═══════════════════════════════════════════════════════════════════════════════
// AnalyzeBlobTables discovers and analyzes all tables with BLOB data
func (e *BlobParallelEngine) AnalyzeBlobTables(ctx context.Context) ([]TableBlobInfo, error) {
e.log.Info("🔍 Analyzing database for BLOB data...")
start := time.Now()
conn, err := e.pool.Acquire(ctx)
if err != nil {
return nil, fmt.Errorf("failed to acquire connection: %w", err)
}
defer conn.Release()
// Query 1: Find all BYTEA columns
byteaQuery := `
SELECT
c.table_schema,
c.table_name,
c.column_name,
pg_table_size(quote_ident(c.table_schema) || '.' || quote_ident(c.table_name)) as table_size,
(SELECT reltuples::bigint FROM pg_class r
JOIN pg_namespace n ON n.oid = r.relnamespace
WHERE n.nspname = c.table_schema AND r.relname = c.table_name) as row_count
FROM information_schema.columns c
JOIN pg_class pc ON pc.relname = c.table_name
JOIN pg_namespace pn ON pn.oid = pc.relnamespace AND pn.nspname = c.table_schema
WHERE c.data_type = 'bytea'
AND c.table_schema NOT IN ('pg_catalog', 'information_schema', 'pg_toast')
AND pc.relkind = 'r'
ORDER BY table_size DESC NULLS LAST
`
rows, err := conn.Query(ctx, byteaQuery)
if err != nil {
return nil, fmt.Errorf("failed to query BYTEA columns: %w", err)
}
defer rows.Close()
// Group by table
tableMap := make(map[string]*TableBlobInfo)
for rows.Next() {
var schema, table, column string
var tableSize, rowCount *int64
if err := rows.Scan(&schema, &table, &column, &tableSize, &rowCount); err != nil {
continue
}
key := schema + "." + table
if _, exists := tableMap[key]; !exists {
tableMap[key] = &TableBlobInfo{
Schema: schema,
Table: table,
ByteaColumns: []string{},
}
}
tableMap[key].ByteaColumns = append(tableMap[key].ByteaColumns, column)
if tableSize != nil {
tableMap[key].EstimatedSize = *tableSize
}
if rowCount != nil {
tableMap[key].RowCount = *rowCount
}
}
// Query 2: Check for Large Objects
// NOTE: lo_get() materializes each object just to measure it, which can be
// slow when there are many or very large LOs
loQuery := `
SELECT COUNT(*), COALESCE(SUM(pg_column_size(lo_get(oid))), 0)
FROM pg_largeobject_metadata
`
var loCount, loSize int64
if err := conn.QueryRow(ctx, loQuery).Scan(&loCount, &loSize); err != nil {
// Large objects may not exist
e.log.Debug("No large objects found or query failed", "error", err)
} else {
e.stats.LargeObjectsCount = loCount
e.stats.LargeObjectsBytes = loSize
e.log.Info("Found Large Objects", "count", loCount, "size_mb", loSize/(1024*1024))
}
// Convert map to sorted slice (largest first for best parallelization)
var tables []TableBlobInfo
for _, t := range tableMap {
// Calculate priority based on estimated size
t.Priority = int(t.EstimatedSize / (1024 * 1024)) // MB as priority
if t.EstimatedSize > e.config.LargeBlobThreshold {
t.HasLargeData = true
t.Priority += 1000 // Boost priority for large data
}
tables = append(tables, *t)
e.stats.TablesWithBlobs = append(e.stats.TablesWithBlobs, t.Schema+"."+t.Table)
}
// Sort by priority (descending) for optimal parallel distribution
sort.Slice(tables, func(i, j int) bool {
return tables[i].Priority > tables[j].Priority
})
e.log.Info("BLOB analysis complete",
"tables_with_bytea", len(tables),
"large_objects", loCount,
"duration", time.Since(start))
return tables, nil
}
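The priority scheme above (size in MB as base priority, a +1000 boost past the large-data threshold, descending sort so the biggest tables start first) can be exercised in isolation:

```go
package main

import (
	"fmt"
	"sort"
)

// tbl is a minimal stand-in for TableBlobInfo (illustration only).
type tbl struct {
	Name string
	Size int64 // bytes
}

// prioritize ranks tables largest-first, boosting anything over the
// large-data threshold, matching the analysis phase above.
func prioritize(tables []tbl, largeThreshold int64) []tbl {
	type ranked struct {
		tbl
		prio int
	}
	rs := make([]ranked, 0, len(tables))
	for _, t := range tables {
		p := int(t.Size / (1 << 20)) // MB as base priority
		if t.Size > largeThreshold {
			p += 1000 // large tables jump the queue
		}
		rs = append(rs, ranked{t, p})
	}
	sort.Slice(rs, func(i, j int) bool { return rs[i].prio > rs[j].prio })
	out := make([]tbl, len(rs))
	for i, r := range rs {
		out[i] = r.tbl
	}
	return out
}

func main() {
	got := prioritize([]tbl{{"small", 1 << 20}, {"huge", 50 << 20}}, 10<<20)
	fmt.Println(got[0].Name) // largest table scheduled first
}
```

Starting the largest tables first minimizes the "long tail" where one giant table runs alone after every worker else has finished.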
// ═══════════════════════════════════════════════════════════════════════════════
// PHASE 2: PARALLEL BLOB BACKUP
// ═══════════════════════════════════════════════════════════════════════════════
// BackupBlobTables performs parallel backup of BLOB-containing tables
func (e *BlobParallelEngine) BackupBlobTables(ctx context.Context, tables []TableBlobInfo, outputDir string) error {
if len(tables) == 0 {
e.log.Info("No BLOB tables to back up")
return nil
}
start := time.Now()
e.log.Info("🚀 Starting parallel BLOB backup",
"tables", len(tables),
"workers", e.config.Workers)
// Create output directory
blobDir := filepath.Join(outputDir, "blobs")
if err := os.MkdirAll(blobDir, 0755); err != nil {
return fmt.Errorf("failed to create BLOB directory: %w", err)
}
// Worker pool with semaphore
var wg sync.WaitGroup
semaphore := make(chan struct{}, e.config.Workers)
errChan := make(chan error, len(tables))
var processedTables int64
var processedBytes int64
for i := range tables {
table := tables[i]
wg.Add(1)
semaphore <- struct{}{} // Acquire worker slot
go func(t TableBlobInfo) {
defer wg.Done()
defer func() { <-semaphore }() // Release worker slot
// Backup this table's BLOB data
bytesWritten, err := e.backupTableBlobs(ctx, &t, blobDir)
if err != nil {
errChan <- fmt.Errorf("table %s.%s: %w", t.Schema, t.Table, err)
return
}
completed := atomic.AddInt64(&processedTables, 1)
atomic.AddInt64(&processedBytes, bytesWritten)
if e.config.ProgressCallback != nil {
e.config.ProgressCallback("backup", t.Schema+"."+t.Table,
completed, int64(len(tables)), processedBytes)
}
}(table)
}
wg.Wait()
close(errChan)
// Collect errors
var errors []string
for err := range errChan {
errors = append(errors, err.Error())
}
e.stats.TablesProcessed = processedTables
e.stats.TotalBytes = processedBytes
e.stats.Duration = time.Since(start)
e.stats.ParallelWorkers = e.config.Workers
if e.stats.Duration.Seconds() > 0 {
e.stats.ThroughputMBps = float64(e.stats.TotalBytes) / (1024 * 1024) / e.stats.Duration.Seconds()
}
e.log.Info("✅ Parallel BLOB backup complete",
"tables", processedTables,
"bytes", processedBytes,
"throughput_mbps", fmt.Sprintf("%.2f", e.stats.ThroughputMBps),
"duration", e.stats.Duration,
"errors", len(errors))
if len(errors) > 0 {
return fmt.Errorf("backup completed with %d errors: %v", len(errors), errors)
}
return nil
}
// backupTableBlobs backs up BLOB data from a single table
func (e *BlobParallelEngine) backupTableBlobs(ctx context.Context, table *TableBlobInfo, outputDir string) (int64, error) {
conn, err := e.pool.Acquire(ctx)
if err != nil {
return 0, err
}
defer conn.Release()
// Create output file
filename := fmt.Sprintf("%s.%s.blob.sql.gz", table.Schema, table.Table)
outPath := filepath.Join(outputDir, filename)
file, err := os.Create(outPath)
if err != nil {
return 0, err
}
defer file.Close()
// Use gzip compression
gzWriter := gzip.NewWriter(file)
defer gzWriter.Close()
// Apply session optimizations for COPY
optimizations := []string{
"SET work_mem = '256MB'", // More memory for sorting
"SET maintenance_work_mem = '512MB'", // For index operations
"SET synchronous_commit = 'off'", // Faster for backup reads
}
for _, opt := range optimizations {
conn.Exec(ctx, opt)
}
// Write COPY header
copyHeader := fmt.Sprintf("-- BLOB backup for %s.%s\n", table.Schema, table.Table)
copyHeader += fmt.Sprintf("-- BYTEA columns: %s\n", strings.Join(table.ByteaColumns, ", "))
copyHeader += fmt.Sprintf("-- Estimated rows: %d\n\n", table.RowCount)
// Write COPY statement that will be used for restore
fullTableName := fmt.Sprintf("%s.%s", e.quoteIdentifier(table.Schema), e.quoteIdentifier(table.Table))
copyHeader += fmt.Sprintf("COPY %s FROM stdin;\n", fullTableName)
gzWriter.Write([]byte(copyHeader))
// Use COPY TO STDOUT for efficient binary data export
copySQL := fmt.Sprintf("COPY %s TO STDOUT", fullTableName)
copyResult, err := conn.Conn().PgConn().CopyTo(ctx, gzWriter, copySQL)
if err != nil {
return 0, fmt.Errorf("COPY TO failed: %w", err)
}
// NOTE: CommandTag.RowsAffected() reports rows copied, not bytes written;
// treating it as a byte count under-reports backup throughput
rowsCopied := copyResult.RowsAffected()
// Write terminator
gzWriter.Write([]byte("\\.\n"))
atomic.AddInt64(&e.stats.TotalRows, rowsCopied)
e.log.Debug("Backed up BLOB table",
"table", table.Schema+"."+table.Table,
"rows", rowsCopied)
return rowsCopied, nil
}
// ═══════════════════════════════════════════════════════════════════════════════
// PHASE 3: PARALLEL BLOB RESTORE
// ═══════════════════════════════════════════════════════════════════════════════
// RestoreBlobTables performs parallel restore of BLOB-containing tables
func (e *BlobParallelEngine) RestoreBlobTables(ctx context.Context, blobDir string) error {
// Find all BLOB backup files
files, err := filepath.Glob(filepath.Join(blobDir, "*.blob.sql.gz"))
if err != nil {
return fmt.Errorf("failed to list BLOB files: %w", err)
}
if len(files) == 0 {
e.log.Info("No BLOB backup files found")
return nil
}
start := time.Now()
e.log.Info("🚀 Starting parallel BLOB restore",
"files", len(files),
"workers", e.config.Workers)
// Worker pool with semaphore
var wg sync.WaitGroup
semaphore := make(chan struct{}, e.config.Workers)
errChan := make(chan error, len(files))
var processedFiles int64
var processedRows int64
for _, file := range files {
wg.Add(1)
semaphore <- struct{}{}
go func(filePath string) {
defer wg.Done()
defer func() { <-semaphore }()
rows, err := e.restoreBlobFile(ctx, filePath)
if err != nil {
errChan <- fmt.Errorf("file %s: %w", filePath, err)
return
}
completed := atomic.AddInt64(&processedFiles, 1)
atomic.AddInt64(&processedRows, rows)
if e.config.ProgressCallback != nil {
e.config.ProgressCallback("restore", filepath.Base(filePath),
completed, int64(len(files)), processedRows)
}
}(file)
}
wg.Wait()
close(errChan)
// Collect errors
var errors []string
for err := range errChan {
errors = append(errors, err.Error())
}
e.stats.Duration = time.Since(start)
e.log.Info("✅ Parallel BLOB restore complete",
"files", processedFiles,
"rows", processedRows,
"duration", e.stats.Duration,
"errors", len(errors))
if len(errors) > 0 {
return fmt.Errorf("restore completed with %d errors: %v", len(errors), errors)
}
return nil
}
// restoreBlobFile restores a single BLOB backup file
func (e *BlobParallelEngine) restoreBlobFile(ctx context.Context, filePath string) (int64, error) {
conn, err := e.pool.Acquire(ctx)
if err != nil {
return 0, err
}
defer conn.Release()
// Apply restore optimizations
optimizations := []string{
"SET synchronous_commit = 'off'",
"SET session_replication_role = 'replica'", // Disable triggers
"SET work_mem = '256MB'",
}
for _, opt := range optimizations {
conn.Exec(ctx, opt)
}
// Open compressed file
file, err := os.Open(filePath)
if err != nil {
return 0, err
}
defer file.Close()
gzReader, err := gzip.NewReader(file)
if err != nil {
return 0, err
}
defer gzReader.Close()
// Read content
content, err := io.ReadAll(gzReader)
if err != nil {
return 0, err
}
// Parse COPY statement and data
lines := bytes.Split(content, []byte("\n"))
var copySQL string
var dataStart int
for i, line := range lines {
lineStr := string(line)
if strings.HasPrefix(strings.ToUpper(strings.TrimSpace(lineStr)), "COPY ") &&
strings.HasSuffix(strings.TrimSpace(lineStr), "FROM stdin;") {
// Convert FROM stdin to proper COPY format
copySQL = strings.TrimSuffix(strings.TrimSpace(lineStr), "FROM stdin;") + "FROM STDIN"
dataStart = i + 1
break
}
}
if copySQL == "" {
return 0, fmt.Errorf("no COPY statement found in file")
}
// Build data buffer (excluding COPY header and terminator)
var dataBuffer bytes.Buffer
for i := dataStart; i < len(lines); i++ {
line := string(lines[i])
if line == "\\." {
break
}
dataBuffer.WriteString(line)
dataBuffer.WriteByte('\n')
}
// Execute COPY FROM
tag, err := conn.Conn().PgConn().CopyFrom(ctx, &dataBuffer, copySQL)
if err != nil {
return 0, fmt.Errorf("COPY FROM failed: %w", err)
}
return tag.RowsAffected(), nil
}
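The parsing step above (locate the `COPY ... FROM stdin;` line, rewrite it for `CopyFrom`, collect data rows until the `\.` terminator) can be shown as a pure function, assuming one COPY block per file as the backup writer produces:

```go
package main

import (
	"fmt"
	"strings"
)

// splitCopyDump extracts the COPY statement and its data rows from a
// text-format dump (sketch of the restore-side parsing above).
func splitCopyDump(content string) (copySQL string, data []string) {
	lines := strings.Split(content, "\n")
	inData := false
	for _, line := range lines {
		trimmed := strings.TrimSpace(line)
		if !inData {
			if strings.HasPrefix(strings.ToUpper(trimmed), "COPY ") &&
				strings.HasSuffix(trimmed, "FROM stdin;") {
				// Rewrite for pgconn's CopyFrom, which takes the data separately
				copySQL = strings.TrimSuffix(trimmed, "FROM stdin;") + "FROM STDIN"
				inData = true
			}
			continue
		}
		if trimmed == `\.` { // end-of-data terminator
			break
		}
		data = append(data, line)
	}
	return copySQL, data
}

func main() {
	sql, rows := splitCopyDump("-- header\nCOPY public.files FROM stdin;\n1\t\\\\xdead\n\\.\n")
	fmt.Println(sql, len(rows))
}
```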
// ═══════════════════════════════════════════════════════════════════════════════
// PHASE 4: LARGE OBJECT (lo_*) HANDLING
// ═══════════════════════════════════════════════════════════════════════════════
// BackupLargeObjects exports all Large Objects in parallel
func (e *BlobParallelEngine) BackupLargeObjects(ctx context.Context, outputDir string) error {
conn, err := e.pool.Acquire(ctx)
if err != nil {
return err
}
defer conn.Release()
// Get all Large Object OIDs
rows, err := conn.Query(ctx, "SELECT oid FROM pg_largeobject_metadata ORDER BY oid")
if err != nil {
return fmt.Errorf("failed to query large objects: %w", err)
}
var oids []uint32
for rows.Next() {
var oid uint32
if err := rows.Scan(&oid); err != nil {
continue
}
oids = append(oids, oid)
}
rows.Close()
if len(oids) == 0 {
e.log.Info("No Large Objects to back up")
return nil
}
e.log.Info("🗄️ Backing up Large Objects",
"count", len(oids),
"workers", e.config.Workers)
loDir := filepath.Join(outputDir, "large_objects")
if err := os.MkdirAll(loDir, 0755); err != nil {
return err
}
// Worker pool
var wg sync.WaitGroup
semaphore := make(chan struct{}, e.config.Workers)
errChan := make(chan error, len(oids))
for _, oid := range oids {
wg.Add(1)
semaphore <- struct{}{}
go func(o uint32) {
defer wg.Done()
defer func() { <-semaphore }()
if err := e.backupLargeObject(ctx, o, loDir); err != nil {
errChan <- fmt.Errorf("OID %d: %w", o, err)
}
}(oid)
}
wg.Wait()
close(errChan)
var errors []string
for err := range errChan {
errors = append(errors, err.Error())
}
if len(errors) > 0 {
return fmt.Errorf("LO backup had %d errors: %v", len(errors), errors)
}
return nil
}
// backupLargeObject backs up a single Large Object
func (e *BlobParallelEngine) backupLargeObject(ctx context.Context, oid uint32, outputDir string) error {
conn, err := e.pool.Acquire(ctx)
if err != nil {
return err
}
defer conn.Release()
// Use transaction for lo_* operations
tx, err := conn.Begin(ctx)
if err != nil {
return err
}
defer tx.Rollback(ctx)
// Read Large Object data using lo_get()
var data []byte
err = tx.QueryRow(ctx, "SELECT lo_get($1)", oid).Scan(&data)
if err != nil {
return fmt.Errorf("lo_get failed: %w", err)
}
// Write to file
filename := filepath.Join(outputDir, fmt.Sprintf("lo_%d.bin", oid))
if err := os.WriteFile(filename, data, 0644); err != nil {
return err
}
atomic.AddInt64(&e.stats.LargeObjectsBytes, int64(len(data)))
return tx.Commit(ctx)
}
// RestoreLargeObjects restores all Large Objects in parallel
func (e *BlobParallelEngine) RestoreLargeObjects(ctx context.Context, loDir string) error {
files, err := filepath.Glob(filepath.Join(loDir, "lo_*.bin"))
if err != nil {
return err
}
if len(files) == 0 {
e.log.Info("No Large Objects to restore")
return nil
}
e.log.Info("🗄️ Restoring Large Objects",
"count", len(files),
"workers", e.config.Workers)
var wg sync.WaitGroup
semaphore := make(chan struct{}, e.config.Workers)
errChan := make(chan error, len(files))
for _, file := range files {
wg.Add(1)
semaphore <- struct{}{}
go func(f string) {
defer wg.Done()
defer func() { <-semaphore }()
if err := e.restoreLargeObject(ctx, f); err != nil {
errChan <- err
}
}(file)
}
wg.Wait()
close(errChan)
var errors []string
for err := range errChan {
errors = append(errors, err.Error())
}
if len(errors) > 0 {
return fmt.Errorf("LO restore had %d errors: %v", len(errors), errors)
}
return nil
}
// restoreLargeObject restores a single Large Object
func (e *BlobParallelEngine) restoreLargeObject(ctx context.Context, filePath string) error {
// Extract OID from filename
var oid uint32
_, err := fmt.Sscanf(filepath.Base(filePath), "lo_%d.bin", &oid)
if err != nil {
return fmt.Errorf("invalid filename: %s", filePath)
}
data, err := os.ReadFile(filePath)
if err != nil {
return err
}
conn, err := e.pool.Acquire(ctx)
if err != nil {
return err
}
defer conn.Release()
tx, err := conn.Begin(ctx)
if err != nil {
return err
}
defer tx.Rollback(ctx)
// Create Large Object with specific OID and write data
_, err = tx.Exec(ctx, "SELECT lo_create($1)", oid)
if err != nil {
return fmt.Errorf("lo_create failed: %w", err)
}
_, err = tx.Exec(ctx, "SELECT lo_put($1, 0, $2)", oid, data)
if err != nil {
return fmt.Errorf("lo_put failed: %w", err)
}
return tx.Commit(ctx)
}
// ═══════════════════════════════════════════════════════════════════════════════
// PHASE 5: OPTIMIZED BYTEA STREAMING
// ═══════════════════════════════════════════════════════════════════════════════
// StreamingBlobBackup performs streaming backup for very large BYTEA tables
// This avoids loading entire table into memory
func (e *BlobParallelEngine) StreamingBlobBackup(ctx context.Context, table *TableBlobInfo, writer io.Writer) error {
conn, err := e.pool.Acquire(ctx)
if err != nil {
return err
}
defer conn.Release()
// Use cursor-based iteration for memory efficiency
cursorName := fmt.Sprintf("blob_cursor_%d", time.Now().UnixNano())
fullTable := fmt.Sprintf("%s.%s", e.quoteIdentifier(table.Schema), e.quoteIdentifier(table.Table))
tx, err := conn.Begin(ctx)
if err != nil {
return err
}
defer tx.Rollback(ctx)
// Declare cursor
_, err = tx.Exec(ctx, fmt.Sprintf("DECLARE %s CURSOR FOR SELECT * FROM %s", cursorName, fullTable))
if err != nil {
return fmt.Errorf("cursor declaration failed: %w", err)
}
// Fetch in batches
batchSize := 1000
for {
rows, err := tx.Query(ctx, fmt.Sprintf("FETCH %d FROM %s", batchSize, cursorName))
if err != nil {
return err
}
fieldDescs := rows.FieldDescriptions()
rowCount := 0
numFields := len(fieldDescs)
for rows.Next() {
values, err := rows.Values()
if err != nil {
rows.Close()
return err
}
// Write row data
line := e.formatRowForCopy(values, numFields)
writer.Write([]byte(line))
writer.Write([]byte("\n"))
rowCount++
}
rows.Close()
if rowCount < batchSize {
break // No more rows
}
}
// Close cursor
tx.Exec(ctx, fmt.Sprintf("CLOSE %s", cursorName))
return tx.Commit(ctx)
}
// formatRowForCopy formats a row for COPY format
func (e *BlobParallelEngine) formatRowForCopy(values []interface{}, numFields int) string {
var parts []string
for i, v := range values {
if v == nil {
parts = append(parts, "\\N")
continue
}
switch val := v.(type) {
case []byte:
// BYTEA - encode as hex with \x prefix
parts = append(parts, "\\\\x"+hex.EncodeToString(val))
case string:
// Escape special characters for COPY format
escaped := strings.ReplaceAll(val, "\\", "\\\\")
escaped = strings.ReplaceAll(escaped, "\t", "\\t")
escaped = strings.ReplaceAll(escaped, "\n", "\\n")
escaped = strings.ReplaceAll(escaped, "\r", "\\r")
parts = append(parts, escaped)
default:
parts = append(parts, fmt.Sprintf("%v", v))
}
_ = i // Suppress unused warning
_ = numFields
}
return strings.Join(parts, "\t")
}
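The COPY text-format escaping implemented above (NULL becomes `\N`, BYTEA is hex-encoded behind an escaped `\\x` prefix, backslash/tab/newline in strings are escaped so they survive the tab-delimited stream) can be isolated into a per-field sketch; `strings.NewReplacer` does all replacements in a single pass, which is behaviorally equivalent to the sequential `ReplaceAll` chain when backslash is escaped first:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// copyField renders one value in PostgreSQL COPY text format
// (standalone sketch of formatRowForCopy's per-value logic).
func copyField(v interface{}) string {
	if v == nil {
		return `\N` // COPY's NULL marker
	}
	switch val := v.(type) {
	case []byte:
		// BYTEA: hex form with the backslash itself escaped for COPY text
		return `\\x` + hex.EncodeToString(val)
	case string:
		// Escape delimiter-significant characters in one pass
		r := strings.NewReplacer(`\`, `\\`, "\t", `\t`, "\n", `\n`, "\r", `\r`)
		return r.Replace(val)
	default:
		return fmt.Sprintf("%v", v)
	}
}

func main() {
	row := []interface{}{nil, []byte{0xde, 0xad}, "a\tb", 42}
	parts := make([]string, len(row))
	for i, v := range row {
		parts[i] = copyField(v)
	}
	fmt.Println(strings.Join(parts, "\t"))
}
```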
// GetStats returns current statistics
func (e *BlobParallelEngine) GetStats() BlobStats {
return e.stats
}
// Helper function
func (e *BlobParallelEngine) quoteIdentifier(name string) string {
return `"` + strings.ReplaceAll(name, `"`, `""`) + `"`
}
// ═══════════════════════════════════════════════════════════════════════════════
// INTEGRATION WITH MAIN PARALLEL RESTORE ENGINE
// ═══════════════════════════════════════════════════════════════════════════════
// EnhancedCOPYResult extends COPY operation with BLOB-specific handling
type EnhancedCOPYResult struct {
Table string
RowsAffected int64
BytesWritten int64
HasBytea bool
Duration time.Duration
ThroughputMBs float64
}
// ExecuteParallelCOPY performs optimized parallel COPY for all tables including BLOBs
func (e *BlobParallelEngine) ExecuteParallelCOPY(ctx context.Context, statements []*SQLStatement, workers int) ([]EnhancedCOPYResult, error) {
if workers < 1 {
workers = e.config.Workers
}
e.log.Info("⚡ Executing parallel COPY with BLOB optimization",
"tables", len(statements),
"workers", workers)
var wg sync.WaitGroup
semaphore := make(chan struct{}, workers)
results := make([]EnhancedCOPYResult, len(statements))
for i, stmt := range statements {
wg.Add(1)
semaphore <- struct{}{}
go func(idx int, s *SQLStatement) {
defer wg.Done()
defer func() { <-semaphore }()
start := time.Now()
result := EnhancedCOPYResult{
Table: s.TableName,
}
conn, err := e.pool.Acquire(ctx)
if err != nil {
e.log.Error("Failed to acquire connection", "table", s.TableName, "error", err)
results[idx] = result
return
}
defer conn.Release()
// Apply BLOB-optimized settings
opts := []string{
"SET synchronous_commit = 'off'",
"SET session_replication_role = 'replica'",
"SET work_mem = '256MB'",
"SET maintenance_work_mem = '512MB'",
}
for _, opt := range opts {
conn.Exec(ctx, opt)
}
// Execute COPY
copySQL := fmt.Sprintf("COPY %s FROM STDIN", s.TableName)
tag, err := conn.Conn().PgConn().CopyFrom(ctx, strings.NewReader(s.CopyData.String()), copySQL)
if err != nil {
e.log.Error("COPY failed", "table", s.TableName, "error", err)
results[idx] = result
return
}
result.RowsAffected = tag.RowsAffected()
result.BytesWritten = int64(s.CopyData.Len())
result.Duration = time.Since(start)
if result.Duration.Seconds() > 0 {
result.ThroughputMBs = float64(result.BytesWritten) / (1024 * 1024) / result.Duration.Seconds()
}
results[idx] = result
}(i, stmt)
}
wg.Wait()
// Log summary
var totalRows, totalBytes int64
for _, r := range results {
totalRows += r.RowsAffected
totalBytes += r.BytesWritten
}
e.log.Info("✅ Parallel COPY complete",
"tables", len(statements),
"total_rows", totalRows,
"total_mb", totalBytes/(1024*1024))
return results, nil
}

View File

@@ -138,7 +138,15 @@ func (e *MySQLNativeEngine) Backup(ctx context.Context, outputWriter io.Writer)
// Get binlog position for PITR
binlogPos, err := e.getBinlogPosition(ctx)
if err != nil {
e.log.Warn("Failed to get binlog position", "error", err)
// Only warn about binlog errors if it's not "no rows" (binlog disabled) or permission errors
errStr := err.Error()
if strings.Contains(errStr, "no rows in result set") {
e.log.Debug("Binary logging not enabled on this server, skipping binlog position capture")
} else if strings.Contains(errStr, "Access denied") || strings.Contains(errStr, "BINLOG MONITOR") {
e.log.Debug("Insufficient privileges for binlog position (PITR requires BINLOG MONITOR or SUPER privilege)")
} else {
e.log.Warn("Failed to get binlog position", "error", err)
}
}
// Start transaction for consistent backup
@@ -386,6 +394,10 @@ func (e *MySQLNativeEngine) buildDSN() string {
ReadTimeout: 30 * time.Second,
WriteTimeout: 30 * time.Second,
// Auth settings - required for MariaDB unix_socket auth
AllowNativePasswords: true,
AllowOldPasswords: true,
// Character set
Params: map[string]string{
"charset": "utf8mb4",
@@ -418,21 +430,34 @@ func (e *MySQLNativeEngine) buildDSN() string {
func (e *MySQLNativeEngine) getBinlogPosition(ctx context.Context) (*BinlogPosition, error) {
var file string
var position int64
var binlogDoDB, binlogIgnoreDB sql.NullString
var executedGtidSet sql.NullString // MySQL 5.6+ has 5th column
// Try MySQL 8.0.22+ syntax first, then fall back to legacy
// Note: MySQL 8.0.22+ uses SHOW BINARY LOG STATUS
// MySQL 5.6+ has 5 columns: File, Position, Binlog_Do_DB, Binlog_Ignore_DB, Executed_Gtid_Set
// MariaDB has 4 columns: File, Position, Binlog_Do_DB, Binlog_Ignore_DB
row := e.db.QueryRowContext(ctx, "SHOW BINARY LOG STATUS")
err := row.Scan(&file, &position, nil, nil, nil)
err := row.Scan(&file, &position, &binlogDoDB, &binlogIgnoreDB, &executedGtidSet)
if err != nil {
// Fall back to legacy syntax for older MySQL versions
// Fall back to legacy syntax for older MySQL/MariaDB versions
row = e.db.QueryRowContext(ctx, "SHOW MASTER STATUS")
if err = row.Scan(&file, &position, nil, nil, nil); err != nil {
return nil, fmt.Errorf("failed to get binlog status: %w", err)
// Try 5 columns first (MySQL 5.6+)
err = row.Scan(&file, &position, &binlogDoDB, &binlogIgnoreDB, &executedGtidSet)
if err != nil {
// MariaDB only has 4 columns
row = e.db.QueryRowContext(ctx, "SHOW MASTER STATUS")
if err = row.Scan(&file, &position, &binlogDoDB, &binlogIgnoreDB); err != nil {
return nil, fmt.Errorf("failed to get binlog status: %w", err)
}
}
}
// Try to get GTID set (MySQL 5.6+)
// Try to get GTID set (MySQL 5.6+ / MariaDB 10.0+)
var gtidSet string
if row := e.db.QueryRowContext(ctx, "SELECT @@global.gtid_executed"); row != nil {
if executedGtidSet.Valid && executedGtidSet.String != "" {
gtidSet = executedGtidSet.String
} else if row := e.db.QueryRowContext(ctx, "SELECT @@global.gtid_executed"); row != nil {
row.Scan(&gtidSet)
}
@@ -689,7 +714,8 @@ func (e *MySQLNativeEngine) getTableInfo(ctx context.Context, database, table st
row := e.db.QueryRowContext(ctx, query, database, table)
var info MySQLTableInfo
var autoInc, createTime, updateTime sql.NullInt64
var autoInc sql.NullInt64
var createTime, updateTime sql.NullTime
var collation sql.NullString
err := row.Scan(&info.Name, &info.Engine, &collation, &info.RowCount,
@@ -705,13 +731,11 @@ func (e *MySQLNativeEngine) getTableInfo(ctx context.Context, database, table st
}
if createTime.Valid {
createTimeVal := time.Unix(createTime.Int64, 0)
info.CreateTime = &createTimeVal
info.CreateTime = &createTime.Time
}
if updateTime.Valid {
updateTimeVal := time.Unix(updateTime.Int64, 0)
info.UpdateTime = &updateTimeVal
info.UpdateTime = &updateTime.Time
}
return &info, nil

View File

@@ -0,0 +1,589 @@
package native
import (
"bufio"
"bytes"
"compress/gzip"
"context"
"fmt"
"io"
"os"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/jackc/pgx/v5/pgxpool"
"github.com/klauspost/pgzip"
"dbbackup/internal/logger"
)
// ParallelRestoreEngine provides high-performance parallel SQL restore
// that can match pg_restore -j8 performance for SQL format dumps
type ParallelRestoreEngine struct {
config *PostgreSQLNativeConfig
pool *pgxpool.Pool
log logger.Logger
// Configuration
parallelWorkers int
// Internal cancel channel to stop the pool cleanup goroutine
closeCh chan struct{}
}
// ParallelRestoreOptions configures parallel restore behavior
type ParallelRestoreOptions struct {
// Number of parallel workers for COPY operations (like pg_restore -j)
Workers int
// Continue on error instead of stopping
ContinueOnError bool
// Progress callback
ProgressCallback func(phase string, current, total int, tableName string)
}
// ParallelRestoreResult contains restore statistics
type ParallelRestoreResult struct {
Duration time.Duration
SchemaStatements int64
TablesRestored int64
RowsRestored int64
IndexesCreated int64
Errors []string
}
// SQLStatement represents a parsed SQL statement with metadata
type SQLStatement struct {
SQL string
Type StatementType
TableName string // For COPY statements
CopyData bytes.Buffer // Data for COPY FROM STDIN
}
// StatementType classifies SQL statements for parallel execution
type StatementType int
const (
StmtSchema StatementType = iota // CREATE TABLE, TYPE, FUNCTION, etc.
StmtCopyData // COPY ... FROM stdin with data
StmtPostData // CREATE INDEX, ADD CONSTRAINT, etc.
StmtOther // SET, COMMENT, etc.
)
// NewParallelRestoreEngine creates a new parallel restore engine
// NOTE: Pass a cancellable context to ensure the pool is properly closed on Ctrl+C
func NewParallelRestoreEngine(config *PostgreSQLNativeConfig, log logger.Logger, workers int) (*ParallelRestoreEngine, error) {
return NewParallelRestoreEngineWithContext(context.Background(), config, log, workers)
}
// NewParallelRestoreEngineWithContext creates a new parallel restore engine with context support
// This ensures the connection pool is properly closed when the context is cancelled
func NewParallelRestoreEngineWithContext(ctx context.Context, config *PostgreSQLNativeConfig, log logger.Logger, workers int) (*ParallelRestoreEngine, error) {
if workers < 1 {
workers = 4 // Default to 4 parallel workers
}
// Build connection string
sslMode := config.SSLMode
if sslMode == "" {
sslMode = "prefer"
}
connString := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s sslmode=%s",
config.Host, config.Port, config.User, config.Password, config.Database, sslMode)
// Create connection pool with enough connections for parallel workers
poolConfig, err := pgxpool.ParseConfig(connString)
if err != nil {
return nil, fmt.Errorf("failed to parse connection config: %w", err)
}
// Pool size = workers + 2 (spare connections for schema operations and cleanup)
poolConfig.MaxConns = int32(workers + 2)
poolConfig.MinConns = int32(workers)
// CRITICAL: Reduce health check period to allow faster shutdown
// Default is 1 minute which causes hangs on Ctrl+C
poolConfig.HealthCheckPeriod = 5 * time.Second
// CRITICAL: Set connection-level timeouts to ensure queries can be cancelled
// This prevents infinite hangs on slow/stuck operations
poolConfig.ConnConfig.RuntimeParams = map[string]string{
"statement_timeout": "3600000", // 1 hour max per statement (in ms)
"lock_timeout": "300000", // 5 min max wait for locks (in ms)
"idle_in_transaction_session_timeout": "600000", // 10 min idle timeout (in ms)
}
// Use the provided context so pool health checks stop when context is cancelled
pool, err := pgxpool.NewWithConfig(ctx, poolConfig)
if err != nil {
return nil, fmt.Errorf("failed to create connection pool: %w", err)
}
closeCh := make(chan struct{})
engine := &ParallelRestoreEngine{
config: config,
pool: pool,
log: log,
parallelWorkers: workers,
closeCh: closeCh,
}
// NOTE: We intentionally do NOT start a goroutine to close the pool on context cancellation.
// The pool is closed via defer parallelEngine.Close() in the caller (restore/engine.go).
// The Close() method properly signals closeCh and closes the pool.
// Starting a goroutine here can cause:
// 1. Race conditions with explicit Close() calls
// 2. Goroutine leaks if neither ctx nor Close() fires
// 3. Deadlocks with BubbleTea's event loop
return engine, nil
}
// RestoreFile restores from a SQL file with parallel execution
func (e *ParallelRestoreEngine) RestoreFile(ctx context.Context, filePath string, options *ParallelRestoreOptions) (*ParallelRestoreResult, error) {
startTime := time.Now()
result := &ParallelRestoreResult{}
if options == nil {
options = &ParallelRestoreOptions{Workers: e.parallelWorkers}
}
if options.Workers < 1 {
options.Workers = e.parallelWorkers
}
e.log.Info("Starting parallel SQL restore",
"file", filePath,
"workers", options.Workers)
// Open file (handle gzip)
file, err := os.Open(filePath)
if err != nil {
return result, fmt.Errorf("failed to open file: %w", err)
}
defer file.Close()
var reader io.Reader = file
if strings.HasSuffix(filePath, ".gz") {
gzReader, err := pgzip.NewReader(file)
if err != nil {
return result, fmt.Errorf("failed to create gzip reader: %w", err)
}
defer gzReader.Close()
reader = gzReader
}
// Phase 1: Parse and classify statements
e.log.Info("Phase 1: Parsing SQL dump...")
if options.ProgressCallback != nil {
options.ProgressCallback("parsing", 0, 0, "")
}
statements, err := e.parseStatementsWithContext(ctx, reader)
if err != nil {
return result, fmt.Errorf("failed to parse SQL: %w", err)
}
// Count by type
var schemaCount, copyCount, postDataCount int
for _, stmt := range statements {
switch stmt.Type {
case StmtSchema:
schemaCount++
case StmtCopyData:
copyCount++
case StmtPostData:
postDataCount++
}
}
e.log.Info("Parsed SQL dump",
"schema_statements", schemaCount,
"copy_operations", copyCount,
"post_data_statements", postDataCount)
// Phase 2: Execute schema statements (sequential - must be in order)
e.log.Info("Phase 2: Creating schema (sequential)...")
if options.ProgressCallback != nil {
options.ProgressCallback("schema", 0, schemaCount, "")
}
schemaStmts := 0
for _, stmt := range statements {
// Check for context cancellation periodically
select {
case <-ctx.Done():
return result, ctx.Err()
default:
}
if stmt.Type == StmtSchema || stmt.Type == StmtOther {
if err := e.executeStatement(ctx, stmt.SQL); err != nil {
if options.ContinueOnError {
result.Errors = append(result.Errors, err.Error())
} else {
return result, fmt.Errorf("schema creation failed: %w", err)
}
}
schemaStmts++
result.SchemaStatements++
if options.ProgressCallback != nil && schemaStmts%100 == 0 {
options.ProgressCallback("schema", schemaStmts, schemaCount, "")
}
}
}
// Phase 3: Execute COPY operations in parallel (THE KEY TO PERFORMANCE!)
e.log.Info("Phase 3: Loading data in parallel...",
"tables", copyCount,
"workers", options.Workers)
if options.ProgressCallback != nil {
options.ProgressCallback("data", 0, copyCount, "")
}
copyStmts := make([]*SQLStatement, 0, copyCount)
for i := range statements {
if statements[i].Type == StmtCopyData {
copyStmts = append(copyStmts, &statements[i])
}
}
// Execute COPY operations in parallel using worker pool
var wg sync.WaitGroup
semaphore := make(chan struct{}, options.Workers)
var completedCopies int64
var totalRows int64
var cancelled int32 // Atomic flag to signal cancellation
copyLoop:
for _, stmt := range copyStmts {
// Check for context cancellation before starting new work
if ctx.Err() != nil {
break
}
wg.Add(1)
select {
case semaphore <- struct{}{}: // Acquire worker slot
case <-ctx.Done():
wg.Done()
atomic.StoreInt32(&cancelled, 1)
break copyLoop // CRITICAL: Use labeled break to exit the for loop, not just the select
}
go func(s *SQLStatement) {
defer wg.Done()
defer func() { <-semaphore }() // Release worker slot
// Check cancellation before executing
if ctx.Err() != nil || atomic.LoadInt32(&cancelled) == 1 {
return
}
rows, err := e.executeCopy(ctx, s)
if err != nil {
if ctx.Err() != nil {
// Context cancelled, don't log as error
return
}
if options.ContinueOnError {
e.log.Warn("COPY failed", "table", s.TableName, "error", err)
} else {
e.log.Error("COPY failed", "table", s.TableName, "error", err)
}
} else {
atomic.AddInt64(&totalRows, rows)
}
completed := atomic.AddInt64(&completedCopies, 1)
if options.ProgressCallback != nil {
options.ProgressCallback("data", int(completed), copyCount, s.TableName)
}
}(stmt)
}
wg.Wait()
// Check if cancelled
if ctx.Err() != nil {
return result, ctx.Err()
}
result.TablesRestored = completedCopies
result.RowsRestored = totalRows
// Phase 4: Execute post-data statements in parallel (indexes, constraints)
e.log.Info("Phase 4: Creating indexes and constraints in parallel...",
"statements", postDataCount,
"workers", options.Workers)
if options.ProgressCallback != nil {
options.ProgressCallback("indexes", 0, postDataCount, "")
}
postDataStmts := make([]string, 0, postDataCount)
for _, stmt := range statements {
if stmt.Type == StmtPostData {
postDataStmts = append(postDataStmts, stmt.SQL)
}
}
// Execute post-data in parallel
var completedPostData int64
atomic.StoreInt32(&cancelled, 0) // Reset for phase 4 (phase 3 workers have exited after wg.Wait)
postDataLoop:
for _, sql := range postDataStmts {
// Check for context cancellation before starting new work
if ctx.Err() != nil {
break
}
wg.Add(1)
select {
case semaphore <- struct{}{}:
case <-ctx.Done():
wg.Done()
atomic.StoreInt32(&cancelled, 1)
break postDataLoop // CRITICAL: Use labeled break to exit the for loop, not just the select
}
go func(stmt string) {
defer wg.Done()
defer func() { <-semaphore }()
// Check cancellation before executing
if ctx.Err() != nil || atomic.LoadInt32(&cancelled) == 1 {
return
}
if err := e.executeStatement(ctx, stmt); err != nil {
if ctx.Err() != nil {
return // Context cancelled
}
if options.ContinueOnError {
e.log.Warn("Post-data statement failed", "error", err)
}
} else {
atomic.AddInt64(&result.IndexesCreated, 1)
}
completed := atomic.AddInt64(&completedPostData, 1)
if options.ProgressCallback != nil {
options.ProgressCallback("indexes", int(completed), postDataCount, "")
}
}(sql)
}
wg.Wait()
// Check if cancelled
if ctx.Err() != nil {
return result, ctx.Err()
}
result.Duration = time.Since(startTime)
e.log.Info("Parallel restore completed",
"duration", result.Duration,
"tables", result.TablesRestored,
"rows", result.RowsRestored,
"indexes", result.IndexesCreated)
return result, nil
}
// parseStatements reads and classifies all SQL statements
func (e *ParallelRestoreEngine) parseStatements(reader io.Reader) ([]SQLStatement, error) {
return e.parseStatementsWithContext(context.Background(), reader)
}
// parseStatementsWithContext reads and classifies all SQL statements with context support
func (e *ParallelRestoreEngine) parseStatementsWithContext(ctx context.Context, reader io.Reader) ([]SQLStatement, error) {
scanner := bufio.NewScanner(reader)
scanner.Buffer(make([]byte, 1024*1024), 64*1024*1024) // 64MB max for large statements
var statements []SQLStatement
var stmtBuffer bytes.Buffer
var inCopyMode bool
var currentCopyStmt *SQLStatement
lineCount := 0
for scanner.Scan() {
// Check for context cancellation every 10000 lines
lineCount++
if lineCount%10000 == 0 {
select {
case <-ctx.Done():
return statements, ctx.Err()
default:
}
}
line := scanner.Text()
// Handle COPY data mode
if inCopyMode {
if line == "\\." {
// End of COPY data
if currentCopyStmt != nil {
statements = append(statements, *currentCopyStmt)
currentCopyStmt = nil
}
inCopyMode = false
continue
}
if currentCopyStmt != nil {
currentCopyStmt.CopyData.WriteString(line)
currentCopyStmt.CopyData.WriteByte('\n')
}
// Check for context cancellation during COPY data parsing (large tables)
// Check every 10000 lines to avoid overhead
if lineCount%10000 == 0 {
select {
case <-ctx.Done():
return statements, ctx.Err()
default:
}
}
continue
}
// Check for COPY statement start
trimmed := strings.TrimSpace(line)
upperTrimmed := strings.ToUpper(trimmed)
if strings.HasPrefix(upperTrimmed, "COPY ") && strings.HasSuffix(trimmed, "FROM stdin;") {
// Extract table name
parts := strings.Fields(line)
tableName := ""
if len(parts) >= 2 {
tableName = parts[1]
}
currentCopyStmt = &SQLStatement{
SQL: line,
Type: StmtCopyData,
TableName: tableName,
}
inCopyMode = true
continue
}
// Skip comments and empty lines
if trimmed == "" || strings.HasPrefix(trimmed, "--") {
continue
}
// Accumulate statement
stmtBuffer.WriteString(line)
stmtBuffer.WriteByte('\n')
// Check if statement is complete
if strings.HasSuffix(trimmed, ";") {
sql := stmtBuffer.String()
stmtBuffer.Reset()
stmt := SQLStatement{
SQL: sql,
Type: classifyStatement(sql),
}
statements = append(statements, stmt)
}
}
if err := scanner.Err(); err != nil {
return nil, fmt.Errorf("error scanning SQL: %w", err)
}
return statements, nil
}
// classifyStatement determines the type of SQL statement
func classifyStatement(sql string) StatementType {
upper := strings.ToUpper(strings.TrimSpace(sql))
// Post-data statements (can be parallelized)
if strings.HasPrefix(upper, "CREATE INDEX") ||
strings.HasPrefix(upper, "CREATE UNIQUE INDEX") ||
(strings.HasPrefix(upper, "ALTER TABLE") && strings.Contains(upper, "ADD CONSTRAINT")) ||
(strings.HasPrefix(upper, "ALTER TABLE") && strings.Contains(upper, "ADD FOREIGN KEY")) ||
strings.HasPrefix(upper, "CREATE TRIGGER") ||
(strings.HasPrefix(upper, "ALTER TABLE") && strings.Contains(upper, "ENABLE TRIGGER")) {
return StmtPostData
}
// Schema statements (must be sequential)
if strings.HasPrefix(upper, "CREATE ") ||
strings.HasPrefix(upper, "ALTER ") ||
strings.HasPrefix(upper, "DROP ") ||
strings.HasPrefix(upper, "GRANT ") ||
strings.HasPrefix(upper, "REVOKE ") {
return StmtSchema
}
return StmtOther
}
// executeStatement executes a single SQL statement
func (e *ParallelRestoreEngine) executeStatement(ctx context.Context, sql string) error {
conn, err := e.pool.Acquire(ctx)
if err != nil {
return fmt.Errorf("failed to acquire connection: %w", err)
}
defer conn.Release()
_, err = conn.Exec(ctx, sql)
return err
}
// executeCopy executes a COPY FROM STDIN operation with BLOB optimization
func (e *ParallelRestoreEngine) executeCopy(ctx context.Context, stmt *SQLStatement) (int64, error) {
conn, err := e.pool.Acquire(ctx)
if err != nil {
return 0, fmt.Errorf("failed to acquire connection: %w", err)
}
defer conn.Release()
// Apply per-connection session settings tuned for maximum BLOB throughput
optimizations := []string{
"SET synchronous_commit = 'off'", // Don't wait for WAL sync
"SET session_replication_role = 'replica'", // Disable triggers during load
"SET work_mem = '256MB'", // More memory for sorting
"SET maintenance_work_mem = '512MB'", // For constraint validation
"SET wal_buffers = '64MB'", // Larger WAL buffer
"SET checkpoint_completion_target = '0.9'", // Spread checkpoint I/O
}
for _, opt := range optimizations {
conn.Exec(ctx, opt)
}
// Execute the COPY, reusing the original statement so any column list is
// preserved; only the trailing semicolon is stripped for the wire protocol
copySQL := strings.TrimSuffix(strings.TrimSpace(stmt.SQL), ";")
tag, err := conn.Conn().PgConn().CopyFrom(ctx, strings.NewReader(stmt.CopyData.String()), copySQL)
if err != nil {
return 0, err
}
return tag.RowsAffected(), nil
}
// Close closes the connection pool and stops the cleanup goroutine
func (e *ParallelRestoreEngine) Close() error {
// Signal the cleanup goroutine to exit
if e.closeCh != nil {
close(e.closeCh)
}
// Close the pool
if e.pool != nil {
e.pool.Close()
}
return nil
}
// Reference the compress/gzip import so the compiler does not flag it as
// unused (pgzip performs the actual decompression above)
var _ = gzip.BestCompression

View File

@ -0,0 +1,121 @@
package native
import (
"bytes"
"context"
"strings"
"testing"
"time"
"dbbackup/internal/logger"
)
// mockLogger for tests
type mockLogger struct{}
func (m *mockLogger) Debug(msg string, args ...any) {}
func (m *mockLogger) Info(msg string, args ...any) {}
func (m *mockLogger) Warn(msg string, args ...any) {}
func (m *mockLogger) Error(msg string, args ...any) {}
func (m *mockLogger) Time(msg string, args ...any) {}
func (m *mockLogger) WithField(key string, value interface{}) logger.Logger { return m }
func (m *mockLogger) WithFields(fields map[string]interface{}) logger.Logger { return m }
func (m *mockLogger) StartOperation(name string) logger.OperationLogger { return &mockOpLogger{} }
type mockOpLogger struct{}
func (m *mockOpLogger) Update(msg string, args ...any) {}
func (m *mockOpLogger) Complete(msg string, args ...any) {}
func (m *mockOpLogger) Fail(msg string, args ...any) {}
// createTestEngine creates an engine without database connection for parsing tests
func createTestEngine() *ParallelRestoreEngine {
return &ParallelRestoreEngine{
config: &PostgreSQLNativeConfig{},
log: &mockLogger{},
parallelWorkers: 4,
closeCh: make(chan struct{}),
}
}
// TestParseStatementsContextCancellation verifies that parsing can be cancelled
// This was a critical fix - parsing large SQL files would hang on Ctrl+C
func TestParseStatementsContextCancellation(t *testing.T) {
engine := createTestEngine()
// Create a large SQL content that would take a while to parse
var buf bytes.Buffer
buf.WriteString("-- Test dump\n")
buf.WriteString("SET statement_timeout = 0;\n")
// Add 1,000,000 lines to simulate a large dump
for i := 0; i < 1000000; i++ {
buf.WriteString("SELECT ")
buf.WriteString(string(rune('0' + (i % 10))))
buf.WriteString("; -- line padding to make file larger\n")
}
// Create a context that cancels after 10ms
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
defer cancel()
reader := strings.NewReader(buf.String())
start := time.Now()
_, err := engine.parseStatementsWithContext(ctx, reader)
elapsed := time.Since(start)
// Should return quickly with context error, not hang
if elapsed > 500*time.Millisecond {
t.Errorf("Parsing took too long after cancellation: %v (expected < 500ms)", elapsed)
}
if err == nil {
t.Log("Parsing completed before timeout (system is very fast)")
} else if err == context.DeadlineExceeded || err == context.Canceled {
t.Logf("✓ Context cancellation worked correctly (elapsed: %v)", elapsed)
} else {
t.Logf("Got error: %v (elapsed: %v)", err, elapsed)
}
}
// TestParseStatementsWithCopyDataCancellation tests cancellation during COPY data parsing
// This is where large restores spend most of their time
func TestParseStatementsWithCopyDataCancellation(t *testing.T) {
engine := createTestEngine()
// Create SQL with COPY statement and lots of data
var buf bytes.Buffer
buf.WriteString("CREATE TABLE test (id int, data text);\n")
buf.WriteString("COPY test (id, data) FROM stdin;\n")
// Add 500,000 rows of COPY data
for i := 0; i < 500000; i++ {
buf.WriteString("1\tsome test data for row number padding to make larger\n")
}
buf.WriteString("\\.\n")
buf.WriteString("SELECT 1;\n")
// Create a context that cancels after 10ms
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
defer cancel()
reader := strings.NewReader(buf.String())
start := time.Now()
_, err := engine.parseStatementsWithContext(ctx, reader)
elapsed := time.Since(start)
// Should return quickly with context error, not hang
if elapsed > 500*time.Millisecond {
t.Errorf("COPY parsing took too long after cancellation: %v (expected < 500ms)", elapsed)
}
if err == nil {
t.Log("Parsing completed before timeout (system is very fast)")
} else if err == context.DeadlineExceeded || err == context.Canceled {
t.Logf("✓ Context cancellation during COPY worked correctly (elapsed: %v)", elapsed)
} else {
t.Logf("Got error: %v (elapsed: %v)", err, elapsed)
}
}

View File

@ -241,7 +241,7 @@ func (e *PostgreSQLNativeEngine) backupPlainFormat(ctx context.Context, w io.Wri
return result, nil
}
// copyTableData uses COPY TO for efficient data export
// copyTableData uses COPY TO for efficient data export with BLOB optimization
func (e *PostgreSQLNativeEngine) copyTableData(ctx context.Context, w io.Writer, schema, table string) (int64, error) {
// Get a separate connection from the pool for COPY operation
conn, err := e.pool.Acquire(ctx)
@ -250,6 +250,18 @@ func (e *PostgreSQLNativeEngine) copyTableData(ctx context.Context, w io.Writer,
}
defer conn.Release()
// ═══════════════════════════════════════════════════════════════════════
// BLOB-OPTIMIZED SESSION SETTINGS (tuned for large BYTEA/TEXT exports)
// ═══════════════════════════════════════════════════════════════════════
blobOptimizations := []string{
"SET work_mem = '256MB'", // More memory for sorting/hashing
"SET maintenance_work_mem = '512MB'", // For large operations
"SET temp_buffers = '64MB'", // Temp table buffers
}
for _, opt := range blobOptimizations {
conn.Exec(ctx, opt)
}
// Check if table has any data
countSQL := fmt.Sprintf("SELECT COUNT(*) FROM %s.%s",
e.quoteIdentifier(schema), e.quoteIdentifier(table))
@ -277,7 +289,7 @@ func (e *PostgreSQLNativeEngine) copyTableData(ctx context.Context, w io.Writer,
var bytesWritten int64
// Use proper pgx COPY TO protocol
// Use proper pgx COPY TO protocol - this streams BYTEA data efficiently
copySQL := fmt.Sprintf("COPY %s.%s TO STDOUT",
e.quoteIdentifier(schema),
e.quoteIdentifier(table))
@ -592,18 +604,29 @@ func (e *PostgreSQLNativeEngine) formatDataType(dataType, udtName string, maxLen
// Helper methods
func (e *PostgreSQLNativeEngine) buildConnectionString() string {
// Check if host is a Unix socket path (starts with /)
isSocketPath := strings.HasPrefix(e.cfg.Host, "/")
parts := []string{
fmt.Sprintf("host=%s", e.cfg.Host),
fmt.Sprintf("port=%d", e.cfg.Port),
fmt.Sprintf("user=%s", e.cfg.User),
fmt.Sprintf("dbname=%s", e.cfg.Database),
}
// Only add port for TCP connections, not for Unix sockets
if !isSocketPath {
parts = append(parts, fmt.Sprintf("port=%d", e.cfg.Port))
}
parts = append(parts, fmt.Sprintf("user=%s", e.cfg.User))
parts = append(parts, fmt.Sprintf("dbname=%s", e.cfg.Database))
if e.cfg.Password != "" {
parts = append(parts, fmt.Sprintf("password=%s", e.cfg.Password))
}
if e.cfg.SSLMode != "" {
if isSocketPath {
// Unix socket connections don't use SSL
parts = append(parts, "sslmode=disable")
} else if e.cfg.SSLMode != "" {
parts = append(parts, fmt.Sprintf("sslmode=%s", e.cfg.SSLMode))
} else {
parts = append(parts, "sslmode=prefer")

View File

@ -2,12 +2,14 @@ package native
import (
"context"
"database/sql"
"fmt"
"os"
"runtime"
"strings"
"time"
_ "github.com/go-sql-driver/mysql"
"github.com/jackc/pgx/v5/pgxpool"
"github.com/shirou/gopsutil/v3/cpu"
"github.com/shirou/gopsutil/v3/disk"
@ -355,6 +357,19 @@ func detectDiskProfile(ctx context.Context) (*DiskProfile, error) {
// detectDatabaseProfile queries database for capabilities
func detectDatabaseProfile(ctx context.Context, dsn string) (*DatabaseProfile, error) {
// Detect DSN type by format
if strings.HasPrefix(dsn, "postgres://") || strings.HasPrefix(dsn, "postgresql://") {
return detectPostgresDatabaseProfile(ctx, dsn)
}
// MySQL DSN format: user:password@tcp(host:port)/dbname
if strings.Contains(dsn, "@tcp(") || strings.Contains(dsn, "@unix(") {
return detectMySQLDatabaseProfile(ctx, dsn)
}
return nil, fmt.Errorf("unsupported DSN format for database profiling")
}
// detectPostgresDatabaseProfile profiles PostgreSQL database
func detectPostgresDatabaseProfile(ctx context.Context, dsn string) (*DatabaseProfile, error) {
// Create temporary pool with minimal connections
poolConfig, err := pgxpool.ParseConfig(dsn)
if err != nil {
@ -449,6 +464,104 @@ func detectDatabaseProfile(ctx context.Context, dsn string) (*DatabaseProfile, e
return profile, nil
}
// detectMySQLDatabaseProfile profiles MySQL/MariaDB database
func detectMySQLDatabaseProfile(ctx context.Context, dsn string) (*DatabaseProfile, error) {
db, err := sql.Open("mysql", dsn)
if err != nil {
return nil, err
}
defer db.Close()
// Configure connection pool
db.SetMaxOpenConns(2)
db.SetMaxIdleConns(1)
db.SetConnMaxLifetime(30 * time.Second)
if err := db.PingContext(ctx); err != nil {
return nil, fmt.Errorf("failed to connect to MySQL: %w", err)
}
profile := &DatabaseProfile{}
// Get MySQL version
err = db.QueryRowContext(ctx, "SELECT version()").Scan(&profile.Version)
if err != nil {
return nil, err
}
// Get max_connections
var maxConns int
row := db.QueryRowContext(ctx, "SELECT @@max_connections")
if err := row.Scan(&maxConns); err == nil {
profile.MaxConnections = maxConns
}
// Get innodb_buffer_pool_size (equivalent to shared_buffers)
var bufferPoolSize uint64
row = db.QueryRowContext(ctx, "SELECT @@innodb_buffer_pool_size")
if err := row.Scan(&bufferPoolSize); err == nil {
profile.SharedBuffers = bufferPoolSize
}
// Get sort_buffer_size (somewhat equivalent to work_mem)
var sortBuffer uint64
row = db.QueryRowContext(ctx, "SELECT @@sort_buffer_size")
if err := row.Scan(&sortBuffer); err == nil {
profile.WorkMem = sortBuffer
}
// Estimate database size
var dbSize sql.NullInt64
row = db.QueryRowContext(ctx, `
SELECT SUM(data_length + index_length)
FROM information_schema.tables
WHERE table_schema = DATABASE()`)
if err := row.Scan(&dbSize); err == nil && dbSize.Valid {
profile.EstimatedSize = uint64(dbSize.Int64)
}
// Check for BLOB columns
var blobCount int
row = db.QueryRowContext(ctx, `
SELECT COUNT(*)
FROM information_schema.columns
WHERE table_schema = DATABASE()
AND data_type IN ('blob', 'mediumblob', 'longblob', 'text', 'mediumtext', 'longtext')`)
if err := row.Scan(&blobCount); err == nil {
profile.HasBLOBs = blobCount > 0
}
// Check for indexes
var indexCount int
row = db.QueryRowContext(ctx, `
SELECT COUNT(*)
FROM information_schema.statistics
WHERE table_schema = DATABASE()`)
if err := row.Scan(&indexCount); err == nil {
profile.HasIndexes = indexCount > 0
}
// Count tables
row = db.QueryRowContext(ctx, `
SELECT COUNT(*)
FROM information_schema.tables
WHERE table_schema = DATABASE()
AND table_type = 'BASE TABLE'`)
_ = row.Scan(&profile.TableCount) // best-effort: table count is informational
// Estimate row count
var rowCount sql.NullInt64
row = db.QueryRowContext(ctx, `
SELECT SUM(table_rows)
FROM information_schema.tables
WHERE table_schema = DATABASE()`)
if err := row.Scan(&rowCount); err == nil && rowCount.Valid {
profile.EstimatedRowCount = rowCount.Int64
}
return profile, nil
}
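The dispatch in detectDatabaseProfile relies on DSN shape alone: PostgreSQL URLs carry a scheme prefix, while go-sql-driver/mysql DSNs embed the transport as `user:pass@tcp(host:port)/db` or `user:pass@unix(/path)/db`. A standalone sketch of that rule (function name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// detectDSNType classifies a DSN by its textual shape, mirroring the
// dispatch logic above without opening any connection.
func detectDSNType(dsn string) string {
	switch {
	case strings.HasPrefix(dsn, "postgres://"), strings.HasPrefix(dsn, "postgresql://"):
		return "postgres"
	case strings.Contains(dsn, "@tcp("), strings.Contains(dsn, "@unix("):
		return "mysql"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(detectDSNType("postgres://u:p@localhost:5432/app"))
	fmt.Println(detectDSNType("root:secret@tcp(127.0.0.1:3306)/app"))
	fmt.Println(detectDSNType("sqlite:///tmp/db"))
}
```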
// parsePostgresSize parses PostgreSQL size strings like "128MB", "8GB"
func parsePostgresSize(s string) uint64 {
s = strings.TrimSpace(s)

View File

@ -113,22 +113,44 @@ func (r *PostgreSQLRestoreEngine) Restore(ctx context.Context, source io.Reader,
}
defer conn.Release()
// Apply performance optimizations for bulk loading
// Apply aggressive performance optimizations for bulk loading
// These provide 2-5x speedup for large SQL restores
optimizations := []string{
"SET synchronous_commit = 'off'", // Async commits (HUGE speedup)
"SET work_mem = '256MB'", // Faster sorts
"SET maintenance_work_mem = '512MB'", // Faster index builds
"SET session_replication_role = 'replica'", // Disable triggers/FK checks
// Critical performance settings
"SET synchronous_commit = 'off'", // Async commits (HUGE speedup - 2x+)
"SET work_mem = '512MB'", // Faster sorts and hash operations
"SET maintenance_work_mem = '1GB'", // Faster index builds
"SET session_replication_role = 'replica'", // Disable triggers/FK checks during load
// Parallel query for index creation
"SET max_parallel_workers_per_gather = 4",
"SET max_parallel_maintenance_workers = 4",
// Reduce I/O overhead. NOTE: wal_level, fsync and full_page_writes are
// server-level parameters; a per-session SET is rejected and merely
// logged at debug level below, so these only help when preconfigured.
"SET wal_level = 'minimal'",
"SET fsync = off",
"SET full_page_writes = off",
// Checkpoint tuning (reduce checkpoint frequency during bulk load)
"SET checkpoint_timeout = '1h'",
"SET max_wal_size = '10GB'",
}
appliedCount := 0
for _, sql := range optimizations {
if _, err := conn.Exec(ctx, sql); err != nil {
r.engine.log.Debug("Optimization not available", "sql", sql, "error", err)
r.engine.log.Debug("Optimization not available (may require superuser)", "sql", sql, "error", err)
} else {
appliedCount++
}
}
r.engine.log.Info("Applied PostgreSQL bulk load optimizations", "applied", appliedCount, "total", len(optimizations))
// Restore settings at end
defer func() {
conn.Exec(ctx, "SET synchronous_commit = 'on'")
conn.Exec(ctx, "SET session_replication_role = 'origin'")
conn.Exec(ctx, "SET fsync = on")
conn.Exec(ctx, "SET full_page_writes = on")
}()
// Parse and execute SQL statements from the backup
@ -221,7 +243,8 @@ func (r *PostgreSQLRestoreEngine) Restore(ctx context.Context, source io.Reader,
continue
}
// Execute the statement
// Execute the statement with pipelining for better throughput
// Use pgx's implicit pipelining by not waiting for each result
_, err := conn.Exec(ctx, stmt)
if err != nil {
if options.ContinueOnError {
@ -232,7 +255,8 @@ func (r *PostgreSQLRestoreEngine) Restore(ctx context.Context, source io.Reader,
}
stmtCount++
if options.ProgressCallback != nil && stmtCount%100 == 0 {
// Report progress less frequently to reduce overhead (every 1000 statements)
if options.ProgressCallback != nil && stmtCount%1000 == 0 {
options.ProgressCallback(&RestoreProgress{
Operation: "SQL",
ObjectsCompleted: stmtCount,

View File

@ -154,14 +154,21 @@ func (s *SMTPNotifier) sendMail(ctx context.Context, message string) error {
if err != nil {
return fmt.Errorf("data command failed: %w", err)
}
defer w.Close()
_, err = w.Write([]byte(message))
if err != nil {
return fmt.Errorf("write failed: %w", err)
}
return client.Quit()
// Close the data writer to finalize the message
if err = w.Close(); err != nil {
return fmt.Errorf("data close failed: %w", err)
}
// Quit gracefully. The message was already accepted once the data writer
// closed cleanly; some servers reply "250 2.0.0 Ok: queued as...", which
// net/smtp can surface as an error, so the result of Quit is ignored.
_ = client.Quit()
return nil
}
// getPriority returns X-Priority header value based on severity

View File

@ -30,24 +30,25 @@ var PhaseWeights = map[Phase]int{
// ProgressSnapshot is a mutex-free copy of progress state for safe reading
type ProgressSnapshot struct {
Operation string
ArchiveFile string
Phase Phase
ExtractBytes int64
ExtractTotal int64
DatabasesDone int
DatabasesTotal int
CurrentDB string
CurrentDBBytes int64
CurrentDBTotal int64
DatabaseSizes map[string]int64
VerifyDone int
VerifyTotal int
StartTime time.Time
PhaseStartTime time.Time
LastUpdateTime time.Time
DatabaseTimes []time.Duration
Errors []string
Operation string
ArchiveFile string
Phase Phase
ExtractBytes int64
ExtractTotal int64
DatabasesDone int
DatabasesTotal int
CurrentDB string
CurrentDBBytes int64
CurrentDBTotal int64
DatabaseSizes map[string]int64
VerifyDone int
VerifyTotal int
StartTime time.Time
PhaseStartTime time.Time
LastUpdateTime time.Time
DatabaseTimes []time.Duration
Errors []string
UseNativeEngine bool // True if using pure Go native engine (no pg_restore)
}
// UnifiedClusterProgress combines all progress states into one cohesive structure
@ -56,8 +57,9 @@ type UnifiedClusterProgress struct {
mu sync.RWMutex
// Operation info
Operation string // "backup" or "restore"
ArchiveFile string
Operation string // "backup" or "restore"
ArchiveFile string
UseNativeEngine bool // True if using pure Go native engine (no pg_restore)
// Current phase
Phase Phase
@ -177,6 +179,13 @@ func (p *UnifiedClusterProgress) SetVerifyProgress(done, total int) {
p.LastUpdateTime = time.Now()
}
// SetUseNativeEngine sets whether native Go engine is used (no external tools)
func (p *UnifiedClusterProgress) SetUseNativeEngine(native bool) {
p.mu.Lock()
defer p.mu.Unlock()
p.UseNativeEngine = native
}
// AddError adds an error message
func (p *UnifiedClusterProgress) AddError(err string) {
p.mu.Lock()
@ -320,24 +329,25 @@ func (p *UnifiedClusterProgress) GetSnapshot() ProgressSnapshot {
copy(errors, p.Errors)
return ProgressSnapshot{
Operation: p.Operation,
ArchiveFile: p.ArchiveFile,
Phase: p.Phase,
ExtractBytes: p.ExtractBytes,
ExtractTotal: p.ExtractTotal,
DatabasesDone: p.DatabasesDone,
DatabasesTotal: p.DatabasesTotal,
CurrentDB: p.CurrentDB,
CurrentDBBytes: p.CurrentDBBytes,
CurrentDBTotal: p.CurrentDBTotal,
DatabaseSizes: dbSizes,
VerifyDone: p.VerifyDone,
VerifyTotal: p.VerifyTotal,
StartTime: p.StartTime,
PhaseStartTime: p.PhaseStartTime,
LastUpdateTime: p.LastUpdateTime,
DatabaseTimes: dbTimes,
Errors: errors,
Operation: p.Operation,
ArchiveFile: p.ArchiveFile,
Phase: p.Phase,
ExtractBytes: p.ExtractBytes,
ExtractTotal: p.ExtractTotal,
DatabasesDone: p.DatabasesDone,
DatabasesTotal: p.DatabasesTotal,
CurrentDB: p.CurrentDB,
CurrentDBBytes: p.CurrentDBBytes,
CurrentDBTotal: p.CurrentDBTotal,
DatabaseSizes: dbSizes,
VerifyDone: p.VerifyDone,
VerifyTotal: p.VerifyTotal,
StartTime: p.StartTime,
PhaseStartTime: p.PhaseStartTime,
LastUpdateTime: p.LastUpdateTime,
DatabaseTimes: dbTimes,
Errors: errors,
UseNativeEngine: p.UseNativeEngine,
}
}

internal/restore/dryrun.go (new file, 666 lines)
View File

@ -0,0 +1,666 @@
package restore
import (
"context"
"fmt"
"os"
"os/exec"
"path/filepath"
"strings"
"syscall"
"time"
"dbbackup/internal/cleanup"
"dbbackup/internal/config"
"dbbackup/internal/logger"
)
// DryRunCheck represents a single dry-run check result
type DryRunCheck struct {
Name string
Status DryRunStatus
Message string
Details string
Critical bool // If true, restore will definitely fail
}
// DryRunStatus represents the status of a dry-run check
type DryRunStatus int
const (
DryRunPassed DryRunStatus = iota
DryRunWarning
DryRunFailed
DryRunSkipped
)
func (s DryRunStatus) String() string {
switch s {
case DryRunPassed:
return "PASS"
case DryRunWarning:
return "WARN"
case DryRunFailed:
return "FAIL"
case DryRunSkipped:
return "SKIP"
default:
return "UNKNOWN"
}
}
func (s DryRunStatus) Icon() string {
switch s {
case DryRunPassed:
return "[+]"
case DryRunWarning:
return "[!]"
case DryRunFailed:
return "[-]"
case DryRunSkipped:
return "[ ]"
default:
return "[?]"
}
}
// DryRunResult contains all dry-run check results
type DryRunResult struct {
Checks []DryRunCheck
CanProceed bool
HasWarnings bool
CriticalCount int
WarningCount int
EstimatedTime time.Duration
RequiredDiskMB int64
AvailableDiskMB int64
}
// RestoreDryRun performs comprehensive pre-restore validation
type RestoreDryRun struct {
cfg *config.Config
log logger.Logger
safety *Safety
archive string
target string
}
// NewRestoreDryRun creates a new restore dry-run validator
func NewRestoreDryRun(cfg *config.Config, log logger.Logger, archivePath, targetDB string) *RestoreDryRun {
return &RestoreDryRun{
cfg: cfg,
log: log,
safety: NewSafety(cfg, log),
archive: archivePath,
target: targetDB,
}
}
// Run executes all dry-run checks
func (r *RestoreDryRun) Run(ctx context.Context) (*DryRunResult, error) {
result := &DryRunResult{
Checks: make([]DryRunCheck, 0, 10),
CanProceed: true,
}
r.log.Info("Running restore dry-run checks",
"archive", r.archive,
"target", r.target)
// 1. Archive existence and accessibility
result.Checks = append(result.Checks, r.checkArchiveAccess())
// 2. Archive format validation
result.Checks = append(result.Checks, r.checkArchiveFormat())
// 3. Database connectivity
result.Checks = append(result.Checks, r.checkDatabaseConnectivity(ctx))
// 4. User permissions (CREATE DATABASE, DROP, etc.)
result.Checks = append(result.Checks, r.checkUserPermissions(ctx))
// 5. Target database conflicts
result.Checks = append(result.Checks, r.checkTargetConflicts(ctx))
// 6. Disk space requirements
diskCheck, requiredMB, availableMB := r.checkDiskSpace()
result.Checks = append(result.Checks, diskCheck)
result.RequiredDiskMB = requiredMB
result.AvailableDiskMB = availableMB
// 7. Work directory permissions
result.Checks = append(result.Checks, r.checkWorkDirectory())
// 8. Required tools availability
result.Checks = append(result.Checks, r.checkRequiredTools())
// 9. PostgreSQL lock settings (for parallel restore)
result.Checks = append(result.Checks, r.checkLockSettings(ctx))
// 10. Memory availability
result.Checks = append(result.Checks, r.checkMemoryAvailability())
// Calculate summary
for _, check := range result.Checks {
switch check.Status {
case DryRunFailed:
if check.Critical {
result.CriticalCount++
result.CanProceed = false
} else {
result.WarningCount++
result.HasWarnings = true
}
case DryRunWarning:
result.WarningCount++
result.HasWarnings = true
}
}
// Estimate restore time based on archive size
result.EstimatedTime = r.estimateRestoreTime()
return result, nil
}
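The aggregation loop at the end of `Run` is the crux of the dry-run contract: only a *failed* check marked `Critical` blocks the restore, while failed non-critical checks and warnings are merely counted. A minimal standalone sketch of that logic (type names simplified from the ones above):

```go
package main

import "fmt"

type Status int

const (
	Passed Status = iota
	Warning
	Failed
	Skipped
)

type Check struct {
	Name     string
	Status   Status
	Critical bool
}

// summarize mirrors the loop at the end of Run: a failed critical check
// blocks the restore; failed non-critical checks and warnings are
// counted but never block.
func summarize(checks []Check) (canProceed bool, criticals, warnings int) {
	canProceed = true
	for _, c := range checks {
		switch c.Status {
		case Failed:
			if c.Critical {
				criticals++
				canProceed = false
			} else {
				warnings++
			}
		case Warning:
			warnings++
		}
	}
	return
}

func main() {
	ok, crit, warn := summarize([]Check{
		{"Archive Access", Passed, true},
		{"Target Database", Warning, false},
		{"Disk Space", Failed, true},
	})
	fmt.Println(ok, crit, warn) // false 1 1
}
```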
// checkArchiveAccess verifies the archive file is accessible
func (r *RestoreDryRun) checkArchiveAccess() DryRunCheck {
check := DryRunCheck{
Name: "Archive Access",
Critical: true,
}
info, err := os.Stat(r.archive)
if err != nil {
if os.IsNotExist(err) {
check.Status = DryRunFailed
check.Message = "Archive file not found"
check.Details = r.archive
} else if os.IsPermission(err) {
check.Status = DryRunFailed
check.Message = "Permission denied reading archive"
check.Details = err.Error()
} else {
check.Status = DryRunFailed
check.Message = "Cannot access archive"
check.Details = err.Error()
}
return check
}
if info.Size() == 0 {
check.Status = DryRunFailed
check.Message = "Archive file is empty"
return check
}
check.Status = DryRunPassed
check.Message = fmt.Sprintf("Archive accessible (%s)", formatBytesSize(info.Size()))
return check
}
// checkArchiveFormat validates the archive format
func (r *RestoreDryRun) checkArchiveFormat() DryRunCheck {
check := DryRunCheck{
Name: "Archive Format",
Critical: true,
}
err := r.safety.ValidateArchive(r.archive)
if err != nil {
check.Status = DryRunFailed
check.Message = "Invalid archive format"
check.Details = err.Error()
return check
}
format := DetectArchiveFormat(r.archive)
check.Status = DryRunPassed
check.Message = fmt.Sprintf("Valid %s format", format.String())
return check
}
// checkDatabaseConnectivity tests database connection
func (r *RestoreDryRun) checkDatabaseConnectivity(ctx context.Context) DryRunCheck {
check := DryRunCheck{
Name: "Database Connectivity",
Critical: true,
}
// Try to list databases as a connectivity check
_, err := r.safety.ListUserDatabases(ctx)
if err != nil {
check.Status = DryRunFailed
check.Message = "Cannot connect to database server"
check.Details = err.Error()
return check
}
check.Status = DryRunPassed
check.Message = fmt.Sprintf("Connected to %s:%d", r.cfg.Host, r.cfg.Port)
return check
}
// checkUserPermissions verifies required database permissions
func (r *RestoreDryRun) checkUserPermissions(ctx context.Context) DryRunCheck {
check := DryRunCheck{
Name: "User Permissions",
Critical: true,
}
if r.cfg.DatabaseType != "postgres" {
check.Status = DryRunSkipped
check.Message = "Permission check only implemented for PostgreSQL"
return check
}
// Check if user has CREATEDB privilege
query := `SELECT rolcreatedb, rolsuper FROM pg_roles WHERE rolname = current_user`
args := []string{
"-h", r.cfg.Host,
"-p", fmt.Sprintf("%d", r.cfg.Port),
"-U", r.cfg.User,
"-d", "postgres",
"-tA",
"-c", query,
}
cmd := cleanup.SafeCommand(ctx, "psql", args...)
if r.cfg.Password != "" {
cmd.Env = append(os.Environ(), fmt.Sprintf("PGPASSWORD=%s", r.cfg.Password))
}
output, err := cmd.Output()
if err != nil {
check.Status = DryRunWarning
check.Message = "Could not verify permissions"
check.Details = err.Error()
return check
}
result := strings.TrimSpace(string(output))
parts := strings.Split(result, "|")
if len(parts) >= 2 {
canCreate := parts[0] == "t"
isSuper := parts[1] == "t"
if isSuper {
check.Status = DryRunPassed
check.Message = "User is superuser (full permissions)"
return check
}
if canCreate {
check.Status = DryRunPassed
check.Message = "User has CREATEDB privilege"
return check
}
}
check.Status = DryRunFailed
check.Message = "User lacks CREATEDB privilege"
check.Details = "Required for creating target database. Run: ALTER USER " + r.cfg.User + " CREATEDB;"
return check
}
// checkTargetConflicts checks if target database already exists
func (r *RestoreDryRun) checkTargetConflicts(ctx context.Context) DryRunCheck {
check := DryRunCheck{
Name: "Target Database",
Critical: false, // Not critical - can be overwritten with --clean
}
if r.target == "" {
check.Status = DryRunSkipped
check.Message = "Cluster restore - checking multiple databases"
return check
}
databases, err := r.safety.ListUserDatabases(ctx)
if err != nil {
check.Status = DryRunWarning
check.Message = "Could not check existing databases"
check.Details = err.Error()
return check
}
for _, db := range databases {
if db == r.target {
check.Status = DryRunWarning
check.Message = fmt.Sprintf("Database '%s' already exists", r.target)
check.Details = "Use --clean to drop and recreate, or choose different target"
return check
}
}
check.Status = DryRunPassed
check.Message = fmt.Sprintf("Target '%s' is available", r.target)
return check
}
// checkDiskSpace verifies sufficient disk space
func (r *RestoreDryRun) checkDiskSpace() (DryRunCheck, int64, int64) {
check := DryRunCheck{
Name: "Disk Space",
Critical: true,
}
// Get archive size
info, err := os.Stat(r.archive)
if err != nil {
check.Status = DryRunSkipped
check.Message = "Cannot determine archive size"
return check, 0, 0
}
// Estimate uncompressed size (assume 3x compression ratio)
archiveSizeMB := info.Size() / 1024 / 1024
estimatedUncompressedMB := archiveSizeMB * 3
// Need space for: work dir extraction + restored database
// Work dir: full uncompressed size
// Database: roughly same as uncompressed SQL
requiredMB := estimatedUncompressedMB * 2
// Check available disk space in work directory
workDir := r.cfg.GetEffectiveWorkDir()
if workDir == "" {
workDir = r.cfg.BackupDir
}
var stat syscall.Statfs_t
if err := syscall.Statfs(workDir, &stat); err != nil {
check.Status = DryRunWarning
check.Message = "Cannot check disk space"
check.Details = err.Error()
return check, requiredMB, 0
}
availableMB := int64(stat.Bavail*uint64(stat.Bsize)) / 1024 / 1024
if availableMB < requiredMB {
check.Status = DryRunFailed
check.Message = fmt.Sprintf("Insufficient disk space: need %d MB, have %d MB", requiredMB, availableMB)
check.Details = fmt.Sprintf("Work directory: %s", workDir)
return check, requiredMB, availableMB
}
// Warn if less than 20% buffer
if availableMB < requiredMB*12/10 {
check.Status = DryRunWarning
check.Message = fmt.Sprintf("Low disk space margin: need %d MB, have %d MB", requiredMB, availableMB)
return check, requiredMB, availableMB
}
check.Status = DryRunPassed
check.Message = fmt.Sprintf("Sufficient space: need ~%d MB, have %d MB", requiredMB, availableMB)
return check, requiredMB, availableMB
}
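The available-space figure above comes from `syscall.Statfs`, and `Bavail` deliberately counts only blocks visible to unprivileged processes (unlike `Bfree`, which includes root-reserved blocks). A self-contained sketch of the same calculation (Unix-specific; the path is arbitrary):

```go
package main

import (
	"fmt"
	"syscall"
)

// availableMB reports free space in MB on the filesystem containing
// path, counting only blocks available to unprivileged processes
// (Bavail, not Bfree) - the same figure checkDiskSpace compares
// against its compression-ratio estimate.
func availableMB(path string) (int64, error) {
	var stat syscall.Statfs_t
	if err := syscall.Statfs(path, &stat); err != nil {
		return 0, err
	}
	return int64(stat.Bavail) * int64(stat.Bsize) / 1024 / 1024, nil
}

func main() {
	mb, err := availableMB("/")
	if err != nil {
		panic(err)
	}
	fmt.Printf("/ has %d MB available\n", mb)
}
```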
// checkWorkDirectory verifies work directory is writable
func (r *RestoreDryRun) checkWorkDirectory() DryRunCheck {
check := DryRunCheck{
Name: "Work Directory",
Critical: true,
}
workDir := r.cfg.GetEffectiveWorkDir()
if workDir == "" {
workDir = r.cfg.BackupDir
}
// Check if directory exists
info, err := os.Stat(workDir)
if err != nil {
if os.IsNotExist(err) {
check.Status = DryRunFailed
check.Message = "Work directory does not exist"
check.Details = workDir
} else {
check.Status = DryRunFailed
check.Message = "Cannot access work directory"
check.Details = err.Error()
}
return check
}
if !info.IsDir() {
check.Status = DryRunFailed
check.Message = "Work path is not a directory"
check.Details = workDir
return check
}
// Try to create a test file
testFile := filepath.Join(workDir, ".dbbackup-dryrun-test")
f, err := os.Create(testFile)
if err != nil {
check.Status = DryRunFailed
check.Message = "Work directory is not writable"
check.Details = err.Error()
return check
}
f.Close()
os.Remove(testFile)
check.Status = DryRunPassed
check.Message = fmt.Sprintf("Work directory writable: %s", workDir)
return check
}
// checkRequiredTools verifies required CLI tools are available
func (r *RestoreDryRun) checkRequiredTools() DryRunCheck {
check := DryRunCheck{
Name: "Required Tools",
Critical: true,
}
var required []string
switch r.cfg.DatabaseType {
case "postgres":
required = []string{"pg_restore", "psql", "createdb"}
case "mysql", "mariadb":
required = []string{"mysql", "mysqldump"}
default:
check.Status = DryRunSkipped
check.Message = "Unknown database type"
return check
}
missing := []string{}
for _, tool := range required {
if _, err := LookPath(tool); err != nil {
missing = append(missing, tool)
}
}
if len(missing) > 0 {
check.Status = DryRunFailed
check.Message = fmt.Sprintf("Missing tools: %s", strings.Join(missing, ", "))
check.Details = "Install the database client tools package"
return check
}
check.Status = DryRunPassed
check.Message = fmt.Sprintf("All tools available: %s", strings.Join(required, ", "))
return check
}
// checkLockSettings checks PostgreSQL lock settings for parallel restore
func (r *RestoreDryRun) checkLockSettings(ctx context.Context) DryRunCheck {
check := DryRunCheck{
Name: "Lock Settings",
Critical: false,
}
if r.cfg.DatabaseType != "postgres" {
check.Status = DryRunSkipped
check.Message = "Lock check only for PostgreSQL"
return check
}
// Check max_locks_per_transaction
query := `SHOW max_locks_per_transaction`
args := []string{
"-h", r.cfg.Host,
"-p", fmt.Sprintf("%d", r.cfg.Port),
"-U", r.cfg.User,
"-d", "postgres",
"-tA",
"-c", query,
}
cmd := cleanup.SafeCommand(ctx, "psql", args...)
if r.cfg.Password != "" {
cmd.Env = append(os.Environ(), fmt.Sprintf("PGPASSWORD=%s", r.cfg.Password))
}
output, err := cmd.Output()
if err != nil {
check.Status = DryRunWarning
check.Message = "Could not check lock settings"
return check
}
locks := strings.TrimSpace(string(output))
if locks == "" {
check.Status = DryRunWarning
check.Message = "Could not determine max_locks_per_transaction"
return check
}
// Default is 64, recommend at least 128 for parallel restores
var lockCount int
fmt.Sscanf(locks, "%d", &lockCount)
if lockCount < 128 {
check.Status = DryRunWarning
check.Message = fmt.Sprintf("max_locks_per_transaction=%d (recommend 128+ for parallel)", lockCount)
check.Details = "Set: ALTER SYSTEM SET max_locks_per_transaction = 128; then restart PostgreSQL"
return check
}
check.Status = DryRunPassed
check.Message = fmt.Sprintf("max_locks_per_transaction=%d (sufficient)", lockCount)
return check
}
// checkMemoryAvailability checks if enough memory is available
func (r *RestoreDryRun) checkMemoryAvailability() DryRunCheck {
check := DryRunCheck{
Name: "Memory Availability",
Critical: false,
}
// Read /proc/meminfo on Linux
data, err := os.ReadFile("/proc/meminfo")
if err != nil {
check.Status = DryRunSkipped
check.Message = "Cannot check memory (/proc/meminfo not available; non-Linux system?)"
return check
}
var availableKB int64
for _, line := range strings.Split(string(data), "\n") {
if strings.HasPrefix(line, "MemAvailable:") {
fmt.Sscanf(line, "MemAvailable: %d kB", &availableKB)
break
}
}
availableMB := availableKB / 1024
// Recommend at least 1GB for restore operations
if availableMB < 1024 {
check.Status = DryRunWarning
check.Message = fmt.Sprintf("Low available memory: %d MB", availableMB)
check.Details = "Restore may be slow or fail. Consider closing other applications."
return check
}
check.Status = DryRunPassed
check.Message = fmt.Sprintf("Available memory: %d MB", availableMB)
return check
}
// estimateRestoreTime estimates restore duration based on archive size
func (r *RestoreDryRun) estimateRestoreTime() time.Duration {
info, err := os.Stat(r.archive)
if err != nil {
return 0
}
// Rough estimate: 100 MB/minute for restore operations
// This accounts for decompression, SQL parsing, and database writes
sizeMB := info.Size() / 1024 / 1024
minutes := sizeMB / 100
if minutes < 1 {
minutes = 1
}
return time.Duration(minutes) * time.Minute
}
// formatBytesSize formats bytes to human-readable string
func formatBytesSize(bytes int64) string {
const (
KB = 1024
MB = KB * 1024
GB = MB * 1024
)
switch {
case bytes >= GB:
return fmt.Sprintf("%.1f GB", float64(bytes)/GB)
case bytes >= MB:
return fmt.Sprintf("%.1f MB", float64(bytes)/MB)
case bytes >= KB:
return fmt.Sprintf("%.1f KB", float64(bytes)/KB)
default:
return fmt.Sprintf("%d B", bytes)
}
}
// LookPath is a wrapper around exec.LookPath for testing
var LookPath = func(file string) (string, error) {
return exec.LookPath(file)
}
// PrintDryRunResult prints a formatted dry-run result
func PrintDryRunResult(result *DryRunResult) {
fmt.Println("\n" + strings.Repeat("=", 60))
fmt.Println("RESTORE DRY-RUN RESULTS")
fmt.Println(strings.Repeat("=", 60))
for _, check := range result.Checks {
fmt.Printf("%s %-20s %s\n", check.Status.Icon(), check.Name+":", check.Message)
if check.Details != "" {
fmt.Printf(" └─ %s\n", check.Details)
}
}
fmt.Println(strings.Repeat("-", 60))
if result.EstimatedTime > 0 {
fmt.Printf("Estimated restore time: %s\n", result.EstimatedTime)
}
if result.RequiredDiskMB > 0 {
fmt.Printf("Disk space: %d MB required, %d MB available\n",
result.RequiredDiskMB, result.AvailableDiskMB)
}
fmt.Println()
if result.CanProceed {
if result.HasWarnings {
fmt.Println("⚠️ DRY-RUN: PASSED with warnings - restore can proceed")
} else {
fmt.Println("✅ DRY-RUN: PASSED - restore can proceed")
}
} else {
fmt.Printf("❌ DRY-RUN: FAILED - %d critical issue(s) must be resolved\n", result.CriticalCount)
}
fmt.Println()
}


@ -620,6 +620,78 @@ func (e *Engine) restoreWithNativeEngine(ctx context.Context, archivePath, targe
SSLMode: e.cfg.SSLMode,
}
// Use PARALLEL restore engine for SQL format - this matches pg_restore -j performance!
// The parallel engine:
// 1. Executes schema statements sequentially (CREATE TABLE, etc.)
// 2. Executes COPY data loading in PARALLEL (like pg_restore -j8)
// 3. Creates indexes and constraints in PARALLEL
parallelWorkers := e.cfg.Jobs
if parallelWorkers < 1 {
parallelWorkers = 4
}
e.log.Info("Using PARALLEL native restore engine",
"workers", parallelWorkers,
"database", targetDB,
"archive", archivePath)
// Pass context to ensure pool is properly closed on Ctrl+C cancellation
parallelEngine, err := native.NewParallelRestoreEngineWithContext(ctx, nativeCfg, e.log, parallelWorkers)
if err != nil {
e.log.Warn("Failed to create parallel restore engine, falling back to sequential", "error", err)
// Fall back to sequential restore
return e.restoreWithSequentialNativeEngine(ctx, archivePath, targetDB, compressed)
}
defer parallelEngine.Close()
// Run parallel restore with progress callbacks
options := &native.ParallelRestoreOptions{
Workers: parallelWorkers,
ContinueOnError: true,
ProgressCallback: func(phase string, current, total int, tableName string) {
switch phase {
case "parsing":
e.log.Debug("Parsing SQL dump...")
case "schema":
if current%50 == 0 {
e.log.Debug("Creating schema", "progress", current, "total", total)
}
case "data":
e.log.Debug("Loading data", "table", tableName, "progress", current, "total", total)
// Report progress to TUI
e.reportDatabaseProgress(current, total, tableName)
case "indexes":
e.log.Debug("Creating indexes", "progress", current, "total", total)
}
},
}
result, err := parallelEngine.RestoreFile(ctx, archivePath, options)
if err != nil {
return fmt.Errorf("parallel native restore failed: %w", err)
}
e.log.Info("Parallel native restore completed",
"database", targetDB,
"tables", result.TablesRestored,
"rows", result.RowsRestored,
"indexes", result.IndexesCreated,
"duration", result.Duration)
return nil
}
// restoreWithSequentialNativeEngine is the fallback sequential restore
func (e *Engine) restoreWithSequentialNativeEngine(ctx context.Context, archivePath, targetDB string, compressed bool) error {
nativeCfg := &native.PostgreSQLNativeConfig{
Host: e.cfg.Host,
Port: e.cfg.Port,
User: e.cfg.User,
Password: e.cfg.Password,
Database: targetDB,
SSLMode: e.cfg.SSLMode,
}
// Create restore engine
restoreEngine, err := native.NewPostgreSQLRestoreEngine(nativeCfg, e.log)
if err != nil {
@ -974,10 +1046,35 @@ func (e *Engine) executeRestoreWithPgzipStream(ctx context.Context, archivePath,
// Build restore command based on database type
var cmd *exec.Cmd
if dbType == "postgresql" {
args := []string{"-p", fmt.Sprintf("%d", e.cfg.Port), "-U", e.cfg.User, "-d", targetDB}
// Add performance tuning via psql preamble commands
// These are executed before the SQL dump to speed up bulk loading
preamble := `
SET synchronous_commit = 'off';
SET work_mem = '256MB';
SET maintenance_work_mem = '1GB';
SET max_parallel_workers_per_gather = 4;
SET max_parallel_maintenance_workers = 4;
SET wal_level = 'minimal';
SET fsync = off;
SET full_page_writes = off;
SET checkpoint_timeout = '1h';
SET max_wal_size = '10GB';
`
// Note: the full preamble above also lists server-level settings (fsync,
// wal_level) that cannot be changed per-session; only the session-safe
// settings below are passed via -c flags, which psql runs before the script
args := []string{
"-p", fmt.Sprintf("%d", e.cfg.Port),
"-U", e.cfg.User,
"-d", targetDB,
"-c", "SET synchronous_commit = 'off'",
"-c", "SET work_mem = '256MB'",
"-c", "SET maintenance_work_mem = '1GB'",
}
if e.cfg.Host != "localhost" && e.cfg.Host != "" {
args = append([]string{"-h", e.cfg.Host}, args...)
}
e.log.Info("Applying PostgreSQL performance tuning for SQL restore", "preamble_settings", 3)
_ = preamble // Documented for reference
cmd = cleanup.SafeCommand(ctx, "psql", args...)
cmd.Env = append(os.Environ(), fmt.Sprintf("PGPASSWORD=%s", e.cfg.Password))
} else {
@ -1246,9 +1343,14 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string, preExtr
}
format := DetectArchiveFormat(archivePath)
if format != FormatClusterTarGz {
if !format.CanBeClusterRestore() {
operation.Fail("Invalid cluster archive format")
return fmt.Errorf("not a cluster archive: %s (detected format: %s)", archivePath, format)
return fmt.Errorf("not a valid cluster restore format: %s (detected format: %s). Supported: .tar.gz, .sql, .sql.gz", archivePath, format)
}
// For SQL-based cluster restores, use a different restore path
if format == FormatPostgreSQLSQL || format == FormatPostgreSQLSQLGz {
return e.restoreClusterFromSQL(ctx, archivePath, operation)
}
// Check if we have a pre-extracted directory (optimization to avoid double extraction)
@ -1644,6 +1746,60 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string, preExtr
estimator := progress.NewETAEstimator("Restoring cluster", totalDBs)
e.progress.SetEstimator(estimator)
// Detect backup format and warn about performance implications
// .sql.gz files (from native engine) cannot use parallel restore like pg_restore -j8
hasSQLFormat := false
hasCustomFormat := false
for _, entry := range entries {
if !entry.IsDir() {
if strings.HasSuffix(entry.Name(), ".sql.gz") {
hasSQLFormat = true
} else if strings.HasSuffix(entry.Name(), ".dump") {
hasCustomFormat = true
}
}
}
// Warn about SQL format performance limitation
if hasSQLFormat && !hasCustomFormat {
if e.cfg.UseNativeEngine {
// Native engine now uses PARALLEL restore - should match pg_restore -j8 performance!
e.log.Info("✅ SQL format detected - using PARALLEL native restore engine",
"mode", "parallel",
"workers", e.cfg.Jobs,
"optimization", "COPY operations run in parallel like pg_restore -j")
if !e.silentMode {
fmt.Println()
fmt.Println("═══════════════════════════════════════════════════════════════")
fmt.Println(" ✅ PARALLEL NATIVE RESTORE: SQL Format with Parallel Loading")
fmt.Println("═══════════════════════════════════════════════════════════════")
fmt.Printf(" Using %d parallel workers for COPY operations.\n", e.cfg.Jobs)
fmt.Printf(" Performance should match pg_restore -j%d\n", e.cfg.Jobs)
fmt.Println("═══════════════════════════════════════════════════════════════")
fmt.Println()
}
} else {
// psql path is still sequential
e.log.Warn("⚠️ PERFORMANCE WARNING: Backup uses SQL format (.sql.gz)",
"reason", "psql mode cannot parallelize SQL format",
"recommendation", "Enable --use-native-engine for parallel COPY loading")
if !e.silentMode {
fmt.Println()
fmt.Println("═══════════════════════════════════════════════════════════════")
fmt.Println(" ⚠️ PERFORMANCE NOTE: SQL Format with psql (sequential)")
fmt.Println("═══════════════════════════════════════════════════════════════")
fmt.Println(" Backup files use .sql.gz format.")
fmt.Println(" psql mode restores are sequential.")
fmt.Println()
fmt.Println(" For PARALLEL restore, use: --use-native-engine")
fmt.Println(" The native engine parallelizes COPY like pg_restore -j8")
fmt.Println("═══════════════════════════════════════════════════════════════")
fmt.Println()
}
time.Sleep(2 * time.Second)
}
}
// Check for large objects in dump files and adjust parallelism
hasLargeObjects := e.detectLargeObjectsInDumps(dumpsDir, entries)
@ -1803,17 +1959,18 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string, preExtr
select {
case <-heartbeatTicker.C:
heartbeatCount++
elapsed := time.Since(dbRestoreStart)
dbElapsed := time.Since(dbRestoreStart) // Per-database elapsed
phaseElapsedNow := time.Since(restorePhaseStart) // Overall phase elapsed
mu.Lock()
statusMsg := fmt.Sprintf("Restoring %s (%d/%d) - elapsed: %s",
dbName, idx+1, totalDBs, formatDuration(elapsed))
statusMsg := fmt.Sprintf("Restoring %s (%d/%d) - running: %s (phase: %s)",
dbName, idx+1, totalDBs, formatDuration(dbElapsed), formatDuration(phaseElapsedNow))
e.progress.Update(statusMsg)
// CRITICAL: Report activity to TUI callbacks during long-running restore
// Use time-based progress estimation: assume ~10MB/s average throughput
// This gives visual feedback even when pg_restore hasn't completed
estimatedBytesPerSec := int64(10 * 1024 * 1024) // 10 MB/s conservative estimate
estimatedBytesDone := elapsed.Milliseconds() / 1000 * estimatedBytesPerSec
estimatedBytesDone := dbElapsed.Milliseconds() / 1000 * estimatedBytesPerSec
if expectedDBSize > 0 && estimatedBytesDone > expectedDBSize {
estimatedBytesDone = expectedDBSize * 95 / 100 // Cap at 95%
}
@ -1824,8 +1981,7 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string, preExtr
// Report to TUI with estimated progress
e.reportDatabaseProgressByBytes(currentBytesEstimate, totalBytes, dbName, int(atomic.LoadInt32(&successCount)), totalDBs)
// Also report timing info
phaseElapsed := time.Since(restorePhaseStart)
// Also report timing info (use phaseElapsedNow computed above)
var avgPerDB time.Duration
completedDBTimesMu.Lock()
if len(completedDBTimes) > 0 {
@ -1836,7 +1992,7 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string, preExtr
avgPerDB = total / time.Duration(len(completedDBTimes))
}
completedDBTimesMu.Unlock()
e.reportDatabaseProgressWithTiming(idx, totalDBs, dbName, phaseElapsed, avgPerDB)
e.reportDatabaseProgressWithTiming(idx, totalDBs, dbName, phaseElapsedNow, avgPerDB)
mu.Unlock()
case <-heartbeatCtx.Done():
@ -1848,7 +2004,11 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string, preExtr
var restoreErr error
if isCompressedSQL {
mu.Lock()
e.log.Info("Detected compressed SQL format, using psql + pgzip", "file", dumpFile, "database", dbName)
if e.cfg.UseNativeEngine {
e.log.Info("Detected compressed SQL format, using native Go engine", "file", dumpFile, "database", dbName)
} else {
e.log.Info("Detected compressed SQL format, using psql + pgzip", "file", dumpFile, "database", dbName)
}
mu.Unlock()
restoreErr = e.restorePostgreSQLSQL(ctx, dumpFile, dbName, true)
} else {
@ -2023,6 +2183,45 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string, preExtr
return nil
}
// restoreClusterFromSQL restores a pg_dumpall SQL file using the native engine
// This handles .sql and .sql.gz files containing full cluster dumps
func (e *Engine) restoreClusterFromSQL(ctx context.Context, archivePath string, operation logger.OperationLogger) error {
e.log.Info("Restoring cluster from SQL file (pg_dumpall format)",
"file", filepath.Base(archivePath),
"native_engine", true)
clusterStartTime := time.Now()
// Determine if compressed
compressed := strings.HasSuffix(strings.ToLower(archivePath), ".gz")
// Use native engine to restore directly to postgres database (globals + all databases)
e.log.Info("Restoring SQL dump using native engine...",
"compressed", compressed,
"size", FormatBytes(getFileSize(archivePath)))
e.progress.Start("Restoring cluster from SQL dump...")
// For pg_dumpall, we restore to the 'postgres' database which then creates other databases
targetDB := "postgres"
err := e.restoreWithNativeEngine(ctx, archivePath, targetDB, compressed)
if err != nil {
operation.Fail(fmt.Sprintf("SQL cluster restore failed: %v", err))
e.recordClusterRestoreMetrics(clusterStartTime, archivePath, 0, 0, false, err.Error())
return fmt.Errorf("SQL cluster restore failed: %w", err)
}
duration := time.Since(clusterStartTime)
e.progress.Complete(fmt.Sprintf("Cluster restored successfully from SQL in %s", duration.Round(time.Second)))
operation.Complete("SQL cluster restore completed")
// Record metrics
e.recordClusterRestoreMetrics(clusterStartTime, archivePath, 1, 1, true, "")
return nil
}
// recordClusterRestoreMetrics records metrics for cluster restore operations
func (e *Engine) recordClusterRestoreMetrics(startTime time.Time, archivePath string, totalDBs, successCount int, success bool, errorMsg string) {
duration := time.Since(startTime)
@ -2326,7 +2525,14 @@ func (e *Engine) restoreGlobals(ctx context.Context, globalsFile string) error {
cmdErr = ctx.Err()
}
<-stderrDone
// Wait for stderr reader with timeout to prevent indefinite hang
// if the process doesn't fully terminate
select {
case <-stderrDone:
// Normal completion
case <-time.After(5 * time.Second):
e.log.Warn("Stderr reader timeout - forcefully continuing")
}
// Only fail on actual command errors or FATAL PostgreSQL errors
// Regular ERROR messages (like "role already exists") are expected
@ -2770,6 +2976,15 @@ func (e *Engine) isIgnorableError(errorMsg string) bool {
return false
}
// getFileSize returns the size of a file, or 0 if it can't be read
func getFileSize(path string) int64 {
info, err := os.Stat(path)
if err != nil {
return 0
}
return info.Size()
}
// FormatBytes formats bytes to human readable format
func FormatBytes(bytes int64) string {
const unit = 1024


@ -47,7 +47,12 @@ func DetectArchiveFormat(filename string) ArchiveFormat {
lower := strings.ToLower(filename)
// Check for cluster archives first (most specific)
if strings.Contains(lower, "cluster") && strings.HasSuffix(lower, ".tar.gz") {
// Any .tar.gz file is treated as a cluster backup, since that's
// the format dbbackup uses for cluster archives.
if strings.HasSuffix(lower, ".tar.gz") {
return FormatClusterTarGz
}
@ -163,11 +168,19 @@ func (f ArchiveFormat) IsCompressed() bool {
f == FormatClusterTarGz
}
// IsClusterBackup returns true if the archive is a cluster backup
// IsClusterBackup returns true if the archive is a cluster backup (.tar.gz format created by dbbackup)
func (f ArchiveFormat) IsClusterBackup() bool {
return f == FormatClusterTarGz
}
// CanBeClusterRestore returns true if the format can be used for cluster restore
// This includes .tar.gz (dbbackup format) and .sql/.sql.gz (pg_dumpall format for native engine)
func (f ArchiveFormat) CanBeClusterRestore() bool {
return f == FormatClusterTarGz ||
f == FormatPostgreSQLSQL ||
f == FormatPostgreSQLSQLGz
}
// IsPostgreSQL returns true if the archive is PostgreSQL format
func (f ArchiveFormat) IsPostgreSQL() bool {
return f == FormatPostgreSQLDump ||


@ -220,3 +220,34 @@ func TestDetectArchiveFormatWithRealFiles(t *testing.T) {
})
}
}
func TestDetectArchiveFormatAll(t *testing.T) {
tests := []struct {
filename string
want ArchiveFormat
isCluster bool
}{
{"testdb.sql", FormatPostgreSQLSQL, false},
{"testdb.sql.gz", FormatPostgreSQLSQLGz, false},
{"testdb.dump", FormatPostgreSQLDump, false},
{"testdb.dump.gz", FormatPostgreSQLDumpGz, false},
{"cluster_backup.tar.gz", FormatClusterTarGz, true},
{"mybackup.tar.gz", FormatClusterTarGz, true},
{"testdb_20260130_204350_native.sql.gz", FormatPostgreSQLSQLGz, false},
{"mysql_backup.sql", FormatMySQLSQL, false},
{"mysql_dump.sql.gz", FormatMySQLSQLGz, false}, // Has "mysql" in name = MySQL
{"randomfile.txt", FormatUnknown, false},
}
for _, tt := range tests {
t.Run(tt.filename, func(t *testing.T) {
got := DetectArchiveFormat(tt.filename)
if got != tt.want {
t.Errorf("DetectArchiveFormat(%q) = %v, want %v", tt.filename, got, tt.want)
}
if got.IsClusterBackup() != tt.isCluster {
t.Errorf("DetectArchiveFormat(%q).IsClusterBackup() = %v, want %v", tt.filename, got.IsClusterBackup(), tt.isCluster)
}
})
}
}


@ -1,7 +1,15 @@
package security
import (
"crypto/ed25519"
"crypto/rand"
"crypto/sha256"
"encoding/base64"
"encoding/hex"
"encoding/json"
"fmt"
"os"
"sync"
"time"
"dbbackup/internal/logger"
@ -21,13 +29,36 @@ type AuditEvent struct {
type AuditLogger struct {
log logger.Logger
enabled bool
// For signed audit log support
mu sync.Mutex
entries []SignedAuditEntry
privateKey ed25519.PrivateKey
publicKey ed25519.PublicKey
prevHash string // Hash of previous entry for chaining
}
// SignedAuditEntry represents an audit entry with cryptographic signature
type SignedAuditEntry struct {
Sequence int64 `json:"seq"`
Timestamp string `json:"ts"`
User string `json:"user"`
Action string `json:"action"`
Resource string `json:"resource"`
Result string `json:"result"`
Details string `json:"details,omitempty"`
PrevHash string `json:"prev_hash"` // Hash chain for tamper detection
Hash string `json:"hash"` // SHA-256 of this entry (without signature)
Signature string `json:"sig"` // Ed25519 signature of Hash
}
// NewAuditLogger creates a new audit logger
func NewAuditLogger(log logger.Logger, enabled bool) *AuditLogger {
return &AuditLogger{
log: log,
enabled: enabled,
entries: make([]SignedAuditEntry, 0),
prevHash: "genesis", // Initial hash for first entry
}
}
@ -232,3 +263,337 @@ func GetCurrentUser() string {
}
return "unknown"
}
// =============================================================================
// Audit Log Signing and Verification
// =============================================================================
// GenerateSigningKeys generates a new Ed25519 key pair for audit log signing
func GenerateSigningKeys() (privateKey ed25519.PrivateKey, publicKey ed25519.PublicKey, err error) {
publicKey, privateKey, err = ed25519.GenerateKey(rand.Reader)
return
}
// SavePrivateKey saves the private key to a file (PEM-like format)
func SavePrivateKey(path string, key ed25519.PrivateKey) error {
encoded := base64.StdEncoding.EncodeToString(key)
content := fmt.Sprintf("-----BEGIN DBBACKUP AUDIT PRIVATE KEY-----\n%s\n-----END DBBACKUP AUDIT PRIVATE KEY-----\n", encoded)
return os.WriteFile(path, []byte(content), 0600) // Restrictive permissions
}
// SavePublicKey saves the public key to a file (PEM-like format)
func SavePublicKey(path string, key ed25519.PublicKey) error {
encoded := base64.StdEncoding.EncodeToString(key)
content := fmt.Sprintf("-----BEGIN DBBACKUP AUDIT PUBLIC KEY-----\n%s\n-----END DBBACKUP AUDIT PUBLIC KEY-----\n", encoded)
return os.WriteFile(path, []byte(content), 0644)
}
// LoadPrivateKey loads a private key from file
func LoadPrivateKey(path string) (ed25519.PrivateKey, error) {
data, err := os.ReadFile(path)
if err != nil {
return nil, fmt.Errorf("failed to read private key: %w", err)
}
// Extract base64 content between PEM markers
content := extractPEMContent(string(data))
if content == "" {
return nil, fmt.Errorf("invalid private key format")
}
decoded, err := base64.StdEncoding.DecodeString(content)
if err != nil {
return nil, fmt.Errorf("failed to decode private key: %w", err)
}
if len(decoded) != ed25519.PrivateKeySize {
return nil, fmt.Errorf("invalid private key size")
}
return ed25519.PrivateKey(decoded), nil
}
// LoadPublicKey loads a public key from file
func LoadPublicKey(path string) (ed25519.PublicKey, error) {
data, err := os.ReadFile(path)
if err != nil {
return nil, fmt.Errorf("failed to read public key: %w", err)
}
content := extractPEMContent(string(data))
if content == "" {
return nil, fmt.Errorf("invalid public key format")
}
decoded, err := base64.StdEncoding.DecodeString(content)
if err != nil {
return nil, fmt.Errorf("failed to decode public key: %w", err)
}
if len(decoded) != ed25519.PublicKeySize {
return nil, fmt.Errorf("invalid public key size")
}
return ed25519.PublicKey(decoded), nil
}
// extractPEMContent extracts base64 content from PEM-like format
func extractPEMContent(data string) string {
// Simple extraction - find content between markers
start := 0
for i := 0; i < len(data); i++ {
if data[i] == '\n' && i > 0 && data[i-1] == '-' {
start = i + 1
break
}
}
end := len(data)
for i := len(data) - 1; i > start; i-- {
if data[i] == '\n' && i+1 < len(data) && data[i+1] == '-' {
end = i
break
}
}
if start >= end {
return ""
}
// Remove whitespace
result := ""
for _, c := range data[start:end] {
if c != '\n' && c != '\r' && c != ' ' {
result += string(c)
}
}
return result
}
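An equivalent extraction can be written more compactly with the standard `strings` package; this is a sketch of the same behavior (locate the payload between the `-----`-terminated BEGIN line and the `-----`-prefixed END line, then strip all whitespace), not the project's code:

```go
package main

import (
	"fmt"
	"strings"
)

// extractPEMBody pulls the base64 payload out of a
// "-----BEGIN ...-----" / "-----END ...-----" wrapper.
func extractPEMBody(data string) string {
	// The BEGIN line ends with "-----\n"; the payload starts after it.
	start := strings.Index(data, "-----\n")
	if start < 0 {
		return ""
	}
	start += len("-----\n")
	// The END line starts at the last "\n-----".
	end := strings.LastIndex(data, "\n-----")
	if end < 0 || end <= start {
		return ""
	}
	// Fields splits on any whitespace, so newlines and spaces vanish.
	return strings.Join(strings.Fields(data[start:end]), "")
}

func main() {
	pem := "-----BEGIN DBBACKUP AUDIT PUBLIC KEY-----\nYWJj\nZGVm\n-----END DBBACKUP AUDIT PUBLIC KEY-----\n"
	fmt.Println(extractPEMBody(pem))
}
```

This also avoids the O(n²) string concatenation of the character-by-character whitespace removal above.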
// EnableSigning enables cryptographic signing for audit entries
func (a *AuditLogger) EnableSigning(privateKey ed25519.PrivateKey) {
a.mu.Lock()
defer a.mu.Unlock()
a.privateKey = privateKey
a.publicKey = privateKey.Public().(ed25519.PublicKey)
}
// AddSignedEntry adds a signed entry to the audit log
func (a *AuditLogger) AddSignedEntry(event AuditEvent) error {
if !a.enabled {
return nil
}
a.mu.Lock()
defer a.mu.Unlock()
// Serialize details
detailsJSON := ""
if len(event.Details) > 0 {
if data, err := json.Marshal(event.Details); err == nil {
detailsJSON = string(data)
}
}
entry := SignedAuditEntry{
Sequence: int64(len(a.entries) + 1),
Timestamp: event.Timestamp.Format(time.RFC3339Nano),
User: event.User,
Action: event.Action,
Resource: event.Resource,
Result: event.Result,
Details: detailsJSON,
PrevHash: a.prevHash,
}
// Calculate hash of entry (without signature)
entry.Hash = a.calculateEntryHash(entry)
// Sign if private key is available
if a.privateKey != nil {
hashBytes, _ := hex.DecodeString(entry.Hash)
signature := ed25519.Sign(a.privateKey, hashBytes)
entry.Signature = base64.StdEncoding.EncodeToString(signature)
}
// Update chain
a.prevHash = entry.Hash
a.entries = append(a.entries, entry)
// Also log to standard logger
a.logEvent(event)
return nil
}
// calculateEntryHash computes SHA-256 hash of an entry (without signature field)
func (a *AuditLogger) calculateEntryHash(entry SignedAuditEntry) string {
// Create canonical representation for hashing
data := fmt.Sprintf("%d|%s|%s|%s|%s|%s|%s|%s",
entry.Sequence,
entry.Timestamp,
entry.User,
entry.Action,
entry.Resource,
entry.Result,
entry.Details,
entry.PrevHash,
)
hash := sha256.Sum256([]byte(data))
return hex.EncodeToString(hash[:])
}
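The hash-chaining idea behind `calculateEntryHash` and `prevHash` can be shown in isolation. This is a minimal sketch (the `prev + "|" + payload` encoding stands in for the full canonical entry format): because each hash covers the previous one, mutating any entry invalidates every link after it.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// chainHash hashes an entry's payload together with the previous
// entry's hash, forming a tamper-evident chain.
func chainHash(prevHash, payload string) string {
	sum := sha256.Sum256([]byte(prevHash + "|" + payload))
	return hex.EncodeToString(sum[:])
}

// verifyChain recomputes the chain and returns the index of the first
// broken link, or -1 if every link matches.
func verifyChain(payloads, hashes []string) int {
	prev := "genesis"
	for i := range payloads {
		if chainHash(prev, payloads[i]) != hashes[i] {
			return i
		}
		prev = hashes[i]
	}
	return -1
}

func main() {
	payloads := []string{"backup db1", "restore db1", "drop db2"}
	hashes := make([]string, len(payloads))
	prev := "genesis"
	for i, p := range payloads {
		hashes[i] = chainHash(prev, p)
		prev = hashes[i]
	}
	fmt.Println("intact, first broken link:", verifyChain(payloads, hashes))

	payloads[1] = "restore db1 (tampered)" // mutate one entry
	fmt.Println("tampered, first broken link:", verifyChain(payloads, hashes))
}
```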
// ExportSignedLog exports the signed audit log to a file
func (a *AuditLogger) ExportSignedLog(path string) error {
a.mu.Lock()
defer a.mu.Unlock()
data, err := json.MarshalIndent(a.entries, "", " ")
if err != nil {
return fmt.Errorf("failed to marshal audit log: %w", err)
}
return os.WriteFile(path, data, 0644)
}
// VerifyAuditLog verifies the integrity of an exported audit log
func VerifyAuditLog(logPath string, publicKeyPath string) (*AuditVerificationResult, error) {
// Load public key
publicKey, err := LoadPublicKey(publicKeyPath)
if err != nil {
return nil, fmt.Errorf("failed to load public key: %w", err)
}
// Load audit log
data, err := os.ReadFile(logPath)
if err != nil {
return nil, fmt.Errorf("failed to read audit log: %w", err)
}
var entries []SignedAuditEntry
if err := json.Unmarshal(data, &entries); err != nil {
return nil, fmt.Errorf("failed to parse audit log: %w", err)
}
result := &AuditVerificationResult{
TotalEntries: len(entries),
ValidEntries: 0,
Errors: make([]string, 0),
}
prevHash := "genesis"
for i, entry := range entries {
// Verify hash chain
if entry.PrevHash != prevHash {
result.Errors = append(result.Errors,
fmt.Sprintf("Entry %d: hash chain broken (expected %s, got %s)",
i+1, prevHash[:min(16, len(prevHash))]+"...", entry.PrevHash[:min(16, len(entry.PrevHash))]+"..."))
}
// Recalculate hash
expectedHash := calculateVerifyHash(entry)
if entry.Hash != expectedHash {
result.Errors = append(result.Errors,
fmt.Sprintf("Entry %d: hash mismatch (entry may be tampered)", i+1))
}
// Verify signature
if entry.Signature != "" {
hashBytes, _ := hex.DecodeString(entry.Hash)
sigBytes, err := base64.StdEncoding.DecodeString(entry.Signature)
if err != nil {
result.Errors = append(result.Errors,
fmt.Sprintf("Entry %d: invalid signature encoding", i+1))
} else if !ed25519.Verify(publicKey, hashBytes, sigBytes) {
result.Errors = append(result.Errors,
fmt.Sprintf("Entry %d: signature verification failed", i+1))
} else {
result.ValidEntries++
}
} else {
result.Errors = append(result.Errors,
fmt.Sprintf("Entry %d: missing signature", i+1))
}
prevHash = entry.Hash
}
result.ChainValid = len(result.Errors) == 0 ||
!containsChainError(result.Errors)
result.AllSignaturesValid = result.ValidEntries == result.TotalEntries
return result, nil
}
// AuditVerificationResult contains the result of audit log verification
type AuditVerificationResult struct {
TotalEntries int
ValidEntries int
ChainValid bool
AllSignaturesValid bool
Errors []string
}
// IsValid returns true if the audit log is completely valid
func (r *AuditVerificationResult) IsValid() bool {
return r.ChainValid && r.AllSignaturesValid && len(r.Errors) == 0
}
// String returns a human-readable summary
func (r *AuditVerificationResult) String() string {
if r.IsValid() {
return fmt.Sprintf("✅ Audit log verified: %d entries, chain intact, all signatures valid",
r.TotalEntries)
}
return fmt.Sprintf("❌ Audit log verification failed: %d/%d valid entries, %d errors",
r.ValidEntries, r.TotalEntries, len(r.Errors))
}
// calculateVerifyHash recalculates hash for verification
func calculateVerifyHash(entry SignedAuditEntry) string {
data := fmt.Sprintf("%d|%s|%s|%s|%s|%s|%s|%s",
entry.Sequence,
entry.Timestamp,
entry.User,
entry.Action,
entry.Resource,
entry.Result,
entry.Details,
entry.PrevHash,
)
hash := sha256.Sum256([]byte(data))
return hex.EncodeToString(hash[:])
}
// containsChainError checks if errors include hash chain issues
func containsChainError(errors []string) bool {
for _, err := range errors {
// Compare exactly len("Entry") bytes; slicing 20 bytes can never
// equal the 5-byte literal, which would make this always false.
if len(err) >= 5 && err[:5] == "Entry" &&
(contains(err, "hash chain") || contains(err, "hash mismatch")) {
return true
}
}
return false
}
// contains is a simple string contains helper
func contains(s, substr string) bool {
for i := 0; i <= len(s)-len(substr); i++ {
if s[i:i+len(substr)] == substr {
return true
}
}
return false
}
// min returns the minimum of two ints
func min(a, b int) int {
if a < b {
return a
}
return b
}

View File

@ -168,6 +168,10 @@ func (m ArchiveBrowserModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
}
return m, nil
case tea.InterruptMsg:
// Handle Ctrl+C signal (SIGINT) - Bubbletea v1.3+ sends this instead of KeyMsg for ctrl+c
return m.parent, nil
case tea.KeyMsg:
switch msg.String() {
case "ctrl+c", "q", "esc":
@ -205,19 +209,28 @@ func (m ArchiveBrowserModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
return diagnoseView, diagnoseView.Init()
}
// Validate selection based on mode
if m.mode == "restore-cluster" && !selected.Format.IsClusterBackup() {
m.message = errorStyle.Render("[FAIL] Please select a cluster backup (.tar.gz)")
// For restore-cluster mode: check if format can be used for cluster restore
// - .tar.gz: dbbackup cluster format (works with pg_restore)
// - .sql/.sql.gz: pg_dumpall format (works with native engine or psql)
if m.mode == "restore-cluster" && !selected.Format.CanBeClusterRestore() {
m.message = errorStyle.Render(fmt.Sprintf("⚠️ %s cannot be used for cluster restore.\n\n Supported formats: .tar.gz (dbbackup), .sql, .sql.gz (pg_dumpall)",
selected.Name))
return m, nil
}
// For SQL-based cluster restore, enable native engine automatically
if m.mode == "restore-cluster" && !selected.Format.IsClusterBackup() {
// This is a .sql or .sql.gz file - use native engine
m.config.UseNativeEngine = true
}
// For single restore mode with cluster backup selected - offer to select individual database
if m.mode == "restore-single" && selected.Format.IsClusterBackup() {
// Cluster backup selected in single restore mode - offer to select individual database
clusterSelector := NewClusterDatabaseSelector(m.config, m.logger, m, m.ctx, selected, "single", false)
return clusterSelector, clusterSelector.Init()
}
// Open restore preview
// Open restore preview for valid format
preview := NewRestorePreview(m.config, m.logger, m.parent, m.ctx, selected, m.mode)
return preview, preview.Init()
}
@ -382,6 +395,7 @@ func (m ArchiveBrowserModel) filterArchives(archives []ArchiveInfo) []ArchiveInf
for _, archive := range archives {
switch m.filterType {
case "postgres":
// Show all PostgreSQL formats (single DB)
if archive.Format.IsPostgreSQL() && !archive.Format.IsClusterBackup() {
filtered = append(filtered, archive)
}
@ -390,6 +404,7 @@ func (m ArchiveBrowserModel) filterArchives(archives []ArchiveInfo) []ArchiveInf
filtered = append(filtered, archive)
}
case "cluster":
// Show .tar.gz cluster archives
if archive.Format.IsClusterBackup() {
filtered = append(filtered, archive)
}

View File

@ -123,9 +123,11 @@ func getCurrentBackupProgress() (dbTotal, dbDone int, dbName string, overallPhas
currentBackupProgressState.hasUpdate = false
// Calculate realtime phase elapsed if we have a phase 2 start time
dbPhaseElapsed = currentBackupProgressState.dbPhaseElapsed
// Always recalculate from phase2StartTime for accurate real-time display
if !currentBackupProgressState.phase2StartTime.IsZero() {
dbPhaseElapsed = time.Since(currentBackupProgressState.phase2StartTime)
} else {
dbPhaseElapsed = currentBackupProgressState.dbPhaseElapsed
}
return currentBackupProgressState.dbTotal, currentBackupProgressState.dbDone,
@ -181,11 +183,22 @@ type backupCompleteMsg struct {
}
func executeBackupWithTUIProgress(parentCtx context.Context, cfg *config.Config, log logger.Logger, backupType, dbName string, ratio int) tea.Cmd {
return func() tea.Msg {
// CRITICAL: Add panic recovery to prevent TUI crashes on context cancellation
return func() (returnMsg tea.Msg) {
start := time.Now()
// CRITICAL: Add panic recovery that RETURNS a proper message to BubbleTea.
// Without this, if a panic occurs the command function returns nil,
// causing BubbleTea's execBatchMsg WaitGroup to hang forever waiting
// for a message that never comes.
defer func() {
if r := recover(); r != nil {
log.Error("Backup execution panic recovered", "panic", r, "database", dbName)
// CRITICAL: Set the named return value so BubbleTea receives a message
returnMsg = backupCompleteMsg{
result: "",
err: fmt.Errorf("backup panic: %v", r),
elapsed: time.Since(start),
}
}
}()
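The named-return trick this defer relies on is easy to get wrong, so here is the pattern in isolation: a deferred function can only hand a value back to the caller by assigning to a named return value; a plain `return` inside the defer would not compile for this purpose. (`runCommand` is a stand-in, not project code.)

```go
package main

import (
	"errors"
	"fmt"
)

// runCommand recovers a panic in f and converts it into an error for
// the caller, by assigning to the named return values from the defer.
func runCommand(f func()) (msg string, err error) {
	defer func() {
		if r := recover(); r != nil {
			// Assigning to the named returns is what reaches the
			// caller; without named returns the recovery would be
			// silently lost and the caller would see zero values.
			msg = ""
			err = fmt.Errorf("panic recovered: %v", r)
		}
	}()
	f()
	return "ok", nil
}

func main() {
	msg, err := runCommand(func() {})
	fmt.Println(msg, err)

	msg, err = runCommand(func() { panic(errors.New("boom")) })
	fmt.Println(msg, err)
}
```

This mirrors why the `returnMsg` named return matters above: if the recovered command function returned nil, BubbleTea's batch WaitGroup would wait forever for a message that never arrives.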
@ -201,8 +214,6 @@ func executeBackupWithTUIProgress(parentCtx context.Context, cfg *config.Config,
}
}
start := time.Now()
// Setup shared progress state for TUI polling
progressState := &sharedBackupProgressState{}
setCurrentBackupProgress(progressState)
@ -251,7 +262,10 @@ func executeBackupWithTUIProgress(parentCtx context.Context, cfg *config.Config,
// Set phase 2 start time on first callback (for realtime ETA calculation)
if progressState.phase2StartTime.IsZero() {
progressState.phase2StartTime = time.Now()
log.Info("Phase 2 started", "time", progressState.phase2StartTime)
}
// Calculate elapsed time immediately
progressState.dbPhaseElapsed = time.Since(progressState.phase2StartTime)
progressState.mu.Unlock()
})
@ -398,7 +412,7 @@ func (m BackupExecutionModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
}
return m, nil
} else if m.done {
return m.parent, tea.Quit
return m.parent, nil // Return to menu, not quit app
}
return m, nil

View File

@ -97,13 +97,17 @@ func (m ClusterDatabaseSelectorModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
}
return m, nil
case tea.InterruptMsg:
// Handle Ctrl+C signal (SIGINT) - Bubbletea v1.3+ sends this instead of KeyMsg for ctrl+c
return m.parent, nil
case tea.KeyMsg:
if m.loading {
return m, nil
}
switch msg.String() {
case "q", "esc":
case "ctrl+c", "q", "esc":
// Return to parent
return m.parent, nil

View File

@ -70,9 +70,18 @@ func (m ConfirmationModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
if m.onConfirm != nil {
return m.onConfirm()
}
executor := NewBackupExecution(m.config, m.logger, m.parent, m.ctx, "cluster", "", 0)
// Default fallback (should not be reached if onConfirm is always provided)
ctx := m.ctx
if ctx == nil {
ctx = context.Background()
}
executor := NewBackupExecution(m.config, m.logger, m.parent, ctx, "cluster", "", 0)
return executor, executor.Init()
case tea.InterruptMsg:
// Handle Ctrl+C signal (SIGINT) - Bubbletea v1.3+ sends this instead of KeyMsg for ctrl+c
return m.parent, nil
case tea.KeyMsg:
// Auto-forward ESC/quit in auto-confirm mode
if m.config.TUIAutoConfirm {
@ -98,8 +107,12 @@ func (m ConfirmationModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
if m.onConfirm != nil {
return m.onConfirm()
}
// Default: execute cluster backup for backward compatibility
executor := NewBackupExecution(m.config, m.logger, m.parent, m.ctx, "cluster", "", 0)
// Default fallback (should not be reached if onConfirm is always provided)
ctx := m.ctx
if ctx == nil {
ctx = context.Background()
}
executor := NewBackupExecution(m.config, m.logger, m, ctx, "cluster", "", 0)
return executor, executor.Init()
}
return m.parent, nil

View File

@ -126,6 +126,10 @@ func (m DatabaseSelectorModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
}
return m, nil
case tea.InterruptMsg:
// Handle Ctrl+C signal (SIGINT) - Bubbletea v1.3+ sends this instead of KeyMsg for ctrl+c
return m.parent, nil
case tea.KeyMsg:
// Auto-forward ESC/quit in auto-confirm mode
if m.config.TUIAutoConfirm {

View File

@ -56,7 +56,10 @@ func (m InputModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
case inputAutoConfirmMsg:
// Use default value and proceed
if selector, ok := m.parent.(DatabaseSelectorModel); ok {
ratio, _ := strconv.Atoi(m.value)
ratio, err := strconv.Atoi(m.value)
if err != nil || ratio < 0 || ratio > 100 {
ratio = 10 // Safe default
}
executor := NewBackupExecution(selector.config, selector.logger, selector.parent, selector.ctx,
selector.backupType, selector.selected, ratio)
return executor, executor.Init()
@ -83,7 +86,11 @@ func (m InputModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
// If this is from database selector, execute backup with ratio
if selector, ok := m.parent.(DatabaseSelectorModel); ok {
ratio, _ := strconv.Atoi(m.value)
ratio, err := strconv.Atoi(m.value)
if err != nil || ratio < 0 || ratio > 100 {
m.err = fmt.Errorf("ratio must be 0-100")
return m, nil
}
executor := NewBackupExecution(selector.config, selector.logger, selector.parent, selector.ctx,
selector.backupType, selector.selected, ratio)
return executor, executor.Init()
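The validation added above can be factored into a small helper; this is a sketch under the same rules (numeric, 0-100 inclusive, error instead of a silently-passed bad value), with `parseRatio` being a hypothetical name:

```go
package main

import (
	"fmt"
	"strconv"
)

// parseRatio rejects non-numeric input and values outside 0-100.
func parseRatio(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil || n < 0 || n > 100 {
		return 0, fmt.Errorf("ratio must be 0-100, got %q", s)
	}
	return n, nil
}

func main() {
	for _, in := range []string{"10", "abc", "150", "-5"} {
		n, err := parseRatio(in)
		fmt.Printf("%q -> %d, %v\n", in, n, err)
	}
}
```

The key difference from the original `ratio, _ := strconv.Atoi(m.value)` is that a typo like "abc" no longer silently becomes ratio 0.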

View File

@ -165,6 +165,7 @@ func (m *MenuModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
m.logger.Info("Auto-selecting option", "cursor", m.cursor, "choice", m.choices[m.cursor])
// Trigger the selection based on cursor position
// IMPORTANT: Keep in sync with keyboard handler below!
switch m.cursor {
case 0: // Single Database Backup
return m.handleSingleBackup()
@ -172,6 +173,8 @@ func (m *MenuModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
return m.handleSampleBackup()
case 2: // Cluster Backup
return m.handleClusterBackup()
case 3: // Separator - skip
return m, nil
case 4: // Restore Single Database
return m.handleRestoreSingle()
case 5: // Restore Cluster Backup
@ -180,19 +183,27 @@ func (m *MenuModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
return m.handleDiagnoseBackup()
case 7: // List & Manage Backups
return m.handleBackupManager()
case 9: // Tools
case 8: // View Backup Schedule
return m.handleSchedule()
case 9: // View Backup Chain
return m.handleChain()
case 10: // Separator - skip
return m, nil
case 11: // System Resource Profile
return m.handleProfile()
case 12: // Tools
return m.handleTools()
case 10: // View Active Operations
case 13: // View Active Operations
return m.handleViewOperations()
case 11: // Show Operation History
case 14: // Show Operation History
return m.handleOperationHistory()
case 12: // Database Status
case 15: // Database Status
return m.handleStatus()
case 13: // Settings
case 16: // Settings
return m.handleSettings()
case 14: // Clear History
case 17: // Clear History
m.message = "[DEL] History cleared"
case 15: // Quit
case 18: // Quit
if m.cancel != nil {
m.cancel()
}
@ -255,11 +266,19 @@ func (m *MenuModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
case "up", "k":
if m.cursor > 0 {
m.cursor--
// Skip separators
if strings.Contains(m.choices[m.cursor], "---") && m.cursor > 0 {
m.cursor--
}
}
case "down", "j":
if m.cursor < len(m.choices)-1 {
m.cursor++
// Skip separators
if strings.Contains(m.choices[m.cursor], "---") && m.cursor < len(m.choices)-1 {
m.cursor++
}
}
case "enter", " ":
@ -284,10 +303,10 @@ func (m *MenuModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
return m.handleSchedule()
case 9: // View Backup Chain
return m.handleChain()
case 10: // System Resource Profile
return m.handleProfile()
case 11: // Separator
case 10: // Separator
// Do nothing
case 11: // System Resource Profile
return m.handleProfile()
case 12: // Tools
return m.handleTools()
case 13: // View Active Operations
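The separator-skipping navigation above can be expressed as a pure function, which makes the behavior easier to test. This is a sketch, not the menu code: step once in the given direction, then step once more if the landing spot is a `---` separator and there is still room to move (like the menu code, it assumes separators never appear back-to-back).

```go
package main

import (
	"fmt"
	"strings"
)

// moveCursor returns the new cursor position after moving by delta
// (+1 for down, -1 for up), skipping over a single "---" separator.
func moveCursor(choices []string, cursor, delta int) int {
	next := cursor + delta
	if next < 0 || next >= len(choices) {
		return cursor // at a boundary: stay put
	}
	if strings.Contains(choices[next], "---") {
		skipped := next + delta
		if skipped >= 0 && skipped < len(choices) {
			return skipped
		}
	}
	return next
}

func main() {
	choices := []string{"Backup", "Restore", "---", "Tools", "Quit"}
	cursor := 1
	cursor = moveCursor(choices, cursor, +1) // lands on "---", skips to "Tools"
	fmt.Println(choices[cursor])
}
```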

340
internal/tui/menu_test.go Normal file
View File

@ -0,0 +1,340 @@
package tui
import (
"strings"
"testing"
tea "github.com/charmbracelet/bubbletea"
"dbbackup/internal/config"
"dbbackup/internal/logger"
)
// TestMenuModelCreation tests that menu model is created correctly
func TestMenuModelCreation(t *testing.T) {
cfg := config.New()
log := logger.NewNullLogger()
model := NewMenuModel(cfg, log)
// Check for nil before deferring Close to avoid a nil-receiver panic
if model == nil {
t.Fatal("Expected non-nil model")
}
defer model.Close()
if len(model.choices) == 0 {
t.Error("Expected choices to be populated")
}
// Verify expected menu items exist
expectedItems := []string{
"Single Database Backup",
"Cluster Backup",
"Restore Single Database",
"Tools",
"Database Status",
"Configuration Settings",
"Quit",
}
for _, expected := range expectedItems {
found := false
for _, choice := range model.choices {
if strings.Contains(choice, expected) || choice == expected {
found = true
break
}
}
if !found {
t.Errorf("Expected menu item %q not found", expected)
}
}
}
// TestMenuNavigation tests keyboard navigation
func TestMenuNavigation(t *testing.T) {
cfg := config.New()
log := logger.NewNullLogger()
model := NewMenuModel(cfg, log)
defer model.Close()
// Initial cursor should be 0
if model.cursor != 0 {
t.Errorf("Expected initial cursor 0, got %d", model.cursor)
}
// Navigate down
newModel, _ := model.Update(tea.KeyMsg{Type: tea.KeyDown})
menuModel := newModel.(*MenuModel)
if menuModel.cursor != 1 {
t.Errorf("Expected cursor 1 after down, got %d", menuModel.cursor)
}
// Navigate down again
newModel, _ = menuModel.Update(tea.KeyMsg{Type: tea.KeyDown})
menuModel = newModel.(*MenuModel)
if menuModel.cursor != 2 {
t.Errorf("Expected cursor 2 after second down, got %d", menuModel.cursor)
}
// Navigate up
newModel, _ = menuModel.Update(tea.KeyMsg{Type: tea.KeyUp})
menuModel = newModel.(*MenuModel)
if menuModel.cursor != 1 {
t.Errorf("Expected cursor 1 after up, got %d", menuModel.cursor)
}
}
// TestMenuVimNavigation tests vim-style navigation (j/k)
func TestMenuVimNavigation(t *testing.T) {
cfg := config.New()
log := logger.NewNullLogger()
model := NewMenuModel(cfg, log)
defer model.Close()
// Navigate down with 'j'
newModel, _ := model.Update(tea.KeyMsg{Type: tea.KeyRunes, Runes: []rune{'j'}})
menuModel := newModel.(*MenuModel)
if menuModel.cursor != 1 {
t.Errorf("Expected cursor 1 after 'j', got %d", menuModel.cursor)
}
// Navigate up with 'k'
newModel, _ = menuModel.Update(tea.KeyMsg{Type: tea.KeyRunes, Runes: []rune{'k'}})
menuModel = newModel.(*MenuModel)
if menuModel.cursor != 0 {
t.Errorf("Expected cursor 0 after 'k', got %d", menuModel.cursor)
}
}
// TestMenuBoundsCheck tests that cursor doesn't go out of bounds
func TestMenuBoundsCheck(t *testing.T) {
cfg := config.New()
log := logger.NewNullLogger()
model := NewMenuModel(cfg, log)
defer model.Close()
// Try to go up from position 0
newModel, _ := model.Update(tea.KeyMsg{Type: tea.KeyUp})
menuModel := newModel.(*MenuModel)
if menuModel.cursor != 0 {
t.Errorf("Expected cursor to stay at 0 when going up, got %d", menuModel.cursor)
}
// Go to last item
for i := 0; i < len(model.choices); i++ {
newModel, _ = menuModel.Update(tea.KeyMsg{Type: tea.KeyDown})
menuModel = newModel.(*MenuModel)
}
lastIndex := len(model.choices) - 1
if menuModel.cursor != lastIndex {
t.Errorf("Expected cursor at last index %d, got %d", lastIndex, menuModel.cursor)
}
// Try to go down past last item
newModel, _ = menuModel.Update(tea.KeyMsg{Type: tea.KeyDown})
menuModel = newModel.(*MenuModel)
if menuModel.cursor != lastIndex {
t.Errorf("Expected cursor to stay at %d when going down past end, got %d", lastIndex, menuModel.cursor)
}
}
// TestMenuQuit tests quit functionality
func TestMenuQuit(t *testing.T) {
cfg := config.New()
log := logger.NewNullLogger()
model := NewMenuModel(cfg, log)
defer model.Close()
// Test 'q' to quit
newModel, cmd := model.Update(tea.KeyMsg{Type: tea.KeyRunes, Runes: []rune{'q'}})
menuModel := newModel.(*MenuModel)
if !menuModel.quitting {
t.Error("Expected quitting to be true after 'q'")
}
if cmd == nil {
t.Error("Expected quit command to be returned")
}
}
// TestMenuCtrlC tests Ctrl+C handling
func TestMenuCtrlC(t *testing.T) {
cfg := config.New()
log := logger.NewNullLogger()
model := NewMenuModel(cfg, log)
defer model.Close()
// Test Ctrl+C
newModel, cmd := model.Update(tea.KeyMsg{Type: tea.KeyCtrlC})
menuModel := newModel.(*MenuModel)
if !menuModel.quitting {
t.Error("Expected quitting to be true after Ctrl+C")
}
if cmd == nil {
t.Error("Expected quit command to be returned")
}
}
// TestMenuDatabaseTypeSwitch tests database type switching with 't'
func TestMenuDatabaseTypeSwitch(t *testing.T) {
cfg := config.New()
cfg.DatabaseType = "postgres"
log := logger.NewNullLogger()
model := NewMenuModel(cfg, log)
defer model.Close()
initialCursor := model.dbTypeCursor
// Press 't' to cycle database type
newModel, _ := model.Update(tea.KeyMsg{Type: tea.KeyRunes, Runes: []rune{'t'}})
menuModel := newModel.(*MenuModel)
expectedCursor := (initialCursor + 1) % len(model.dbTypes)
if menuModel.dbTypeCursor != expectedCursor {
t.Errorf("Expected dbTypeCursor %d after 't', got %d", expectedCursor, menuModel.dbTypeCursor)
}
}
// TestMenuView tests that View() returns valid output
func TestMenuView(t *testing.T) {
cfg := config.New()
cfg.Version = "5.7.9"
log := logger.NewNullLogger()
model := NewMenuModel(cfg, log)
defer model.Close()
view := model.View()
if len(view) == 0 {
t.Error("Expected non-empty view output")
}
// Check for expected content
if !strings.Contains(view, "Interactive Menu") {
t.Error("Expected view to contain 'Interactive Menu'")
}
if !strings.Contains(view, "5.7.9") {
t.Error("Expected view to contain version number")
}
}
// TestMenuQuittingView tests view when quitting
func TestMenuQuittingView(t *testing.T) {
cfg := config.New()
log := logger.NewNullLogger()
model := NewMenuModel(cfg, log)
defer model.Close()
model.quitting = true
view := model.View()
if !strings.Contains(view, "Thanks for using") {
t.Error("Expected quitting view to contain goodbye message")
}
}
// TestAutoSelectValid tests that auto-select with valid index works
func TestAutoSelectValid(t *testing.T) {
cfg := config.New()
cfg.TUIAutoSelect = 0 // Single Database Backup
log := logger.NewNullLogger()
model := NewMenuModel(cfg, log)
defer model.Close()
// Trigger auto-select message - should transition to DatabaseSelectorModel
newModel, _ := model.Update(autoSelectMsg{})
// Auto-select for option 0 (Single Backup) should return a DatabaseSelectorModel
// This verifies the handler was called correctly
_, ok := newModel.(DatabaseSelectorModel)
if !ok {
// It might also be *MenuModel if the handler returned early
if menuModel, ok := newModel.(*MenuModel); ok {
if menuModel.cursor != 0 {
t.Errorf("Expected cursor 0 after auto-select, got %d", menuModel.cursor)
}
} else {
t.Logf("Auto-select returned model type: %T (this is acceptable)", newModel)
}
}
}
// TestAutoSelectSeparatorSkipped tests that separators are handled in auto-select
func TestAutoSelectSeparatorSkipped(t *testing.T) {
cfg := config.New()
cfg.TUIAutoSelect = 3 // Separator
log := logger.NewNullLogger()
model := NewMenuModel(cfg, log)
defer model.Close()
// Should not crash when auto-selecting separator
newModel, cmd := model.Update(autoSelectMsg{})
// For separator, should return same MenuModel without transition
menuModel, ok := newModel.(*MenuModel)
if !ok {
t.Errorf("Expected MenuModel for separator, got %T", newModel)
return
}
// Should just return without action
if menuModel.quitting {
t.Error("Should not quit when selecting separator")
}
// cmd should be nil for separator
if cmd != nil {
t.Error("Expected nil command for separator selection")
}
}
// BenchmarkMenuView benchmarks the View() rendering
func BenchmarkMenuView(b *testing.B) {
cfg := config.New()
log := logger.NewNullLogger()
model := NewMenuModel(cfg, log)
defer model.Close()
b.ResetTimer()
for i := 0; i < b.N; i++ {
_ = model.View()
}
}
// BenchmarkMenuNavigation benchmarks navigation performance
func BenchmarkMenuNavigation(b *testing.B) {
cfg := config.New()
log := logger.NewNullLogger()
model := NewMenuModel(cfg, log)
defer model.Close()
downKey := tea.KeyMsg{Type: tea.KeyDown}
upKey := tea.KeyMsg{Type: tea.KeyUp}
b.ResetTimer()
for i := 0; i < b.N; i++ {
if i%2 == 0 {
model.Update(downKey)
} else {
model.Update(upKey)
}
}
}

View File

@ -181,9 +181,17 @@ func (m *ProfileModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
}
return m, nil
case tea.InterruptMsg:
// Handle Ctrl+C signal (SIGINT) - Bubbletea v1.3+ sends this instead of KeyMsg for ctrl+c
m.quitting = true
if m.parent != nil {
return m.parent, nil
}
return m, tea.Quit
case tea.KeyMsg:
switch msg.String() {
case "q", "esc":
case "ctrl+c", "q", "esc":
m.quitting = true
if m.parent != nil {
return m.parent, nil

View File

@ -245,9 +245,11 @@ func getCurrentRestoreProgress() (bytesTotal, bytesDone int64, description strin
speed = calculateRollingSpeed(currentRestoreProgressState.speedSamples)
// Calculate realtime phase elapsed if we have a phase 3 start time
dbPhaseElapsed = currentRestoreProgressState.dbPhaseElapsed
// Always recalculate from phase3StartTime for accurate real-time display
if !currentRestoreProgressState.phase3StartTime.IsZero() {
dbPhaseElapsed = time.Since(currentRestoreProgressState.phase3StartTime)
} else {
dbPhaseElapsed = currentRestoreProgressState.dbPhaseElapsed
}
return currentRestoreProgressState.bytesTotal, currentRestoreProgressState.bytesDone,
@ -308,13 +310,53 @@ func calculateRollingSpeed(samples []restoreSpeedSample) float64 {
}
func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config, log logger.Logger, archive ArchiveInfo, targetDB string, cleanFirst, createIfMissing bool, restoreType string, cleanClusterFirst bool, existingDBs []string, saveDebugLog bool) tea.Cmd {
return func() tea.Msg {
// CRITICAL: Add panic recovery to prevent TUI crashes on context cancellation
return func() (returnMsg tea.Msg) {
start := time.Now()
// TUI Debug Log: Always write to file when debug is enabled (even on success/hang)
var tuiDebugFile *os.File
if saveDebugLog {
workDir := cfg.GetEffectiveWorkDir()
tuiLogPath := filepath.Join(workDir, fmt.Sprintf("dbbackup-tui-debug-%s.log", time.Now().Format("20060102-150405")))
var err error
tuiDebugFile, err = os.Create(tuiLogPath)
if err == nil {
defer tuiDebugFile.Close()
fmt.Fprintf(tuiDebugFile, "=== TUI Restore Debug Log ===\n")
fmt.Fprintf(tuiDebugFile, "Started: %s\n", time.Now().Format(time.RFC3339))
fmt.Fprintf(tuiDebugFile, "Archive: %s\n", archive.Path)
fmt.Fprintf(tuiDebugFile, "RestoreType: %s\n", restoreType)
fmt.Fprintf(tuiDebugFile, "TargetDB: %s\n", targetDB)
fmt.Fprintf(tuiDebugFile, "CleanCluster: %v\n", cleanClusterFirst)
fmt.Fprintf(tuiDebugFile, "ExistingDBs: %v\n\n", existingDBs)
log.Info("TUI debug log enabled", "path", tuiLogPath)
}
}
tuiLog := func(msg string, args ...interface{}) {
if tuiDebugFile != nil {
fmt.Fprintf(tuiDebugFile, "[%s] %s", time.Now().Format("15:04:05.000"), fmt.Sprintf(msg, args...))
fmt.Fprintln(tuiDebugFile)
tuiDebugFile.Sync() // Flush immediately so we capture hangs
}
}
tuiLog("Starting restore execution")
// CRITICAL: Add panic recovery that RETURNS a proper message to BubbleTea.
// Without this, if a panic occurs the command function returns nil,
// causing BubbleTea's execBatchMsg WaitGroup to hang forever waiting
// for a message that never comes. This was the root cause of the
// TUI cluster restore hang/panic issue.
defer func() {
if r := recover(); r != nil {
log.Error("Restore execution panic recovered", "panic", r, "database", targetDB)
// Return error message instead of crashing
// Note: We can't return from defer, so this just logs
// CRITICAL: Set the named return value so BubbleTea receives a message
// This prevents the WaitGroup deadlock in execBatchMsg
returnMsg = restoreCompleteMsg{
result: "",
err: fmt.Errorf("restore panic: %v", r),
elapsed: time.Since(start),
}
}
}()
@ -322,8 +364,11 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
// DO NOT create a new context here as it breaks Ctrl+C cancellation
ctx := parentCtx
tuiLog("Checking context state")
// Check if context is already cancelled
if ctx.Err() != nil {
tuiLog("Context already cancelled: %v", ctx.Err())
return restoreCompleteMsg{
result: "",
err: fmt.Errorf("operation cancelled: %w", ctx.Err()),
@ -331,11 +376,12 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
}
}
start := time.Now()
tuiLog("Creating database client")
// Create database instance
dbClient, err := database.New(cfg, log)
if err != nil {
tuiLog("Database client creation failed: %v", err)
return restoreCompleteMsg{
result: "",
err: fmt.Errorf("failed to create database client: %w", err),
@@ -344,8 +390,11 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
}
defer dbClient.Close()
tuiLog("Database client created successfully")
// STEP 1: Clean cluster if requested (drop all existing user databases)
if restoreType == "restore-cluster" && cleanClusterFirst {
tuiLog("STEP 1: Cleaning cluster (dropping existing DBs)")
// Re-detect databases at execution time to get current state
// The preview list may be stale or detection may have failed earlier
safety := restore.NewSafety(cfg, log)
@@ -365,8 +414,9 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
// This matches how cluster restore works - uses CLI tools, not database connections
droppedCount := 0
for _, dbName := range existingDBs {
// Create timeout context for each database drop (5 minutes per DB - large DBs take time)
dropCtx, dropCancel := context.WithTimeout(ctx, 5*time.Minute)
// Create timeout context for each database drop (60 seconds per DB)
// Reduced from 5 minutes for better TUI responsiveness
dropCtx, dropCancel := context.WithTimeout(ctx, 60*time.Second)
if err := dropDatabaseCLI(dropCtx, cfg, dbName); err != nil {
log.Warn("Failed to drop database", "name", dbName, "error", err)
// Continue with other databases
@@ -395,6 +445,8 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
// Initialize unified progress tracker for cluster restores
if restoreType == "restore-cluster" {
progressState.unifiedProgress = progress.NewUnifiedClusterProgress("restore", archive.Path)
// Set engine type for correct TUI display
progressState.unifiedProgress.SetUseNativeEngine(cfg.UseNativeEngine)
}
engine.SetProgressCallback(func(current, total int64, description string) {
// CRITICAL: Panic recovery to prevent nil pointer crashes
@@ -478,6 +530,8 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
if progressState.phase3StartTime.IsZero() {
progressState.phase3StartTime = time.Now()
}
// Calculate elapsed time immediately for accurate display
progressState.dbPhaseElapsed = time.Since(progressState.phase3StartTime)
// Clear byte progress when switching to db progress
progressState.bytesTotal = 0
progressState.bytesDone = 0
@@ -519,6 +573,10 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
if progressState.phase3StartTime.IsZero() {
progressState.phase3StartTime = time.Now()
}
// Recalculate elapsed for accuracy if phaseElapsed not provided
if phaseElapsed == 0 && !progressState.phase3StartTime.IsZero() {
progressState.dbPhaseElapsed = time.Since(progressState.phase3StartTime)
}
// Clear byte progress when switching to db progress
progressState.bytesTotal = 0
progressState.bytesDone = 0
@@ -559,6 +617,8 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
if progressState.phase3StartTime.IsZero() {
progressState.phase3StartTime = time.Now()
}
// Calculate elapsed time immediately for accurate display
progressState.dbPhaseElapsed = time.Since(progressState.phase3StartTime)
// Update unified progress tracker
if progressState.unifiedProgress != nil {
@@ -583,29 +643,39 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
log.Info("Debug logging enabled", "path", debugLogPath)
}
tuiLog("STEP 3: Executing restore (type=%s)", restoreType)
// STEP 3: Execute restore based on type
var restoreErr error
if restoreType == "restore-cluster" {
// Use pre-extracted directory if available (optimization)
if archive.ExtractedDir != "" {
tuiLog("Using pre-extracted cluster directory: %s", archive.ExtractedDir)
log.Info("Using pre-extracted cluster directory", "path", archive.ExtractedDir)
defer os.RemoveAll(archive.ExtractedDir) // Cleanup after restore completes
restoreErr = engine.RestoreCluster(ctx, archive.Path, archive.ExtractedDir)
} else {
tuiLog("Calling engine.RestoreCluster for: %s", archive.Path)
restoreErr = engine.RestoreCluster(ctx, archive.Path)
}
tuiLog("RestoreCluster returned: err=%v", restoreErr)
} else if restoreType == "restore-cluster-single" {
tuiLog("Calling RestoreSingleFromCluster: %s -> %s", archive.Path, targetDB)
// Restore single database from cluster backup
// Also cleanup pre-extracted dir if present
if archive.ExtractedDir != "" {
defer os.RemoveAll(archive.ExtractedDir)
}
restoreErr = engine.RestoreSingleFromCluster(ctx, archive.Path, targetDB, targetDB, cleanFirst, createIfMissing)
tuiLog("RestoreSingleFromCluster returned: err=%v", restoreErr)
} else {
tuiLog("Calling RestoreSingle: %s -> %s", archive.Path, targetDB)
restoreErr = engine.RestoreSingle(ctx, archive.Path, targetDB, cleanFirst, createIfMissing)
tuiLog("RestoreSingle returned: err=%v", restoreErr)
}
if restoreErr != nil {
tuiLog("Restore failed: %v", restoreErr)
return restoreCompleteMsg{
result: "",
err: restoreErr,
@@ -622,6 +692,8 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
result = fmt.Sprintf("Successfully restored cluster from %s (cleaned %d existing database(s) first)", archive.Name, len(existingDBs))
}
tuiLog("Restore completed successfully: %s", result)
return restoreCompleteMsg{
result: result,
err: nil,
@@ -801,7 +873,7 @@ func (m RestoreExecutionModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
}
return m, nil
} else if m.done {
return m.parent, tea.Quit
return m.parent, nil // Return to menu, not quit app
}
return m, nil
@@ -830,7 +902,7 @@ func (m RestoreExecutionModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
}
return m, nil
} else if m.done {
return m.parent, tea.Quit
return m.parent, nil // Return to menu, not quit app
}
case "enter", " ":
if m.done {

View File

@@ -99,6 +99,22 @@ type safetyCheckCompleteMsg struct {
func runSafetyChecks(cfg *config.Config, log logger.Logger, archive ArchiveInfo, targetDB string) tea.Cmd {
return func() tea.Msg {
// Check if preflight checks should be skipped
if cfg.SkipPreflightChecks {
// Return all checks as "skipped" with warning
checks := []SafetyCheck{
{Name: "Archive integrity", Status: "warning", Message: "⚠️ SKIPPED - preflight checks disabled", Critical: true},
{Name: "Dump validity", Status: "warning", Message: "⚠️ SKIPPED - preflight checks disabled", Critical: true},
{Name: "Disk space", Status: "warning", Message: "⚠️ SKIPPED - preflight checks disabled", Critical: true},
{Name: "Required tools", Status: "warning", Message: "⚠️ SKIPPED - preflight checks disabled", Critical: true},
{Name: "Target database", Status: "warning", Message: "⚠️ SKIPPED - preflight checks disabled", Critical: false},
}
return safetyCheckCompleteMsg{
checks: checks,
canProceed: true, // Allow proceeding but with warnings
}
}
// Dynamic timeout based on archive size for large database support
// Base: 10 minutes + 1 minute per 5 GB, max 120 minutes
timeoutMinutes := 10
@@ -272,6 +288,10 @@ func (m RestorePreviewModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
}
return m, nil
case tea.InterruptMsg:
// Handle Ctrl+C (SIGINT): Bubble Tea v1.3+ sends this instead of a KeyMsg for ctrl+c
return m.parent, nil
case tea.KeyMsg:
switch msg.String() {
case "ctrl+c", "q", "esc":
@@ -441,6 +461,13 @@ func (m RestorePreviewModel) View() string {
s.WriteString(fmt.Sprintf(" Database: %s\n", m.targetDB))
s.WriteString(fmt.Sprintf(" Host: %s:%d\n", m.config.Host, m.config.Port))
// Show Engine Mode for single restore too
if m.config.UseNativeEngine {
s.WriteString(CheckPassedStyle.Render(" Engine Mode: Native Go (pure Go, no external tools)") + "\n")
} else {
s.WriteString(" Engine Mode: External Tools (psql)\n")
}
cleanIcon := "[N]"
if m.cleanFirst {
cleanIcon = "[Y]"
@@ -473,6 +500,13 @@ func (m RestorePreviewModel) View() string {
s.WriteString(fmt.Sprintf(" CPU Workload: %s\n", m.config.CPUWorkloadType))
s.WriteString(fmt.Sprintf(" Cluster Parallelism: %d databases\n", m.config.ClusterParallelism))
// Show Engine Mode - critical for understanding restore behavior
if m.config.UseNativeEngine {
s.WriteString(CheckPassedStyle.Render(" Engine Mode: Native Go (pure Go, no external tools)") + "\n")
} else {
s.WriteString(" Engine Mode: External Tools (pg_restore, psql)\n")
}
if m.existingDBError != "" {
// Show warning when database listing failed - but still allow cleanup toggle
s.WriteString(CheckWarningStyle.Render(" Existing Databases: Detection failed\n"))
@@ -512,6 +546,14 @@ func (m RestorePreviewModel) View() string {
s.WriteString(archiveHeaderStyle.Render("[SAFETY] Checks"))
s.WriteString("\n")
// Show warning banner if preflight checks are skipped
if m.config.SkipPreflightChecks {
s.WriteString(CheckWarningStyle.Render(" ⚠️ PREFLIGHT CHECKS DISABLED ⚠️"))
s.WriteString("\n")
s.WriteString(CheckWarningStyle.Render(" Restore may fail unexpectedly. Re-enable in Settings."))
s.WriteString("\n\n")
}
if m.checking {
s.WriteString(infoStyle.Render(" Running safety checks..."))
s.WriteString("\n")

View File

@@ -236,7 +236,11 @@ func (v *RichClusterProgressView) renderPhaseDetails(snapshot *progress.Progress
b.WriteString(fmt.Sprintf(" %s %-20s [restoring...] running %s\n",
spinner, truncateString(snapshot.CurrentDB, 20),
formatDuration(phaseElapsed)))
b.WriteString(fmt.Sprintf(" └─ pg_restore in progress (progress updates every 5s)\n"))
if snapshot.UseNativeEngine {
b.WriteString(" └─ native Go engine in progress (pure Go, no external tools)\n")
} else {
b.WriteString(" └─ pg_restore in progress (progress updates every 5s)\n")
}
}
}

View File

@@ -165,6 +165,22 @@ func NewSettingsModel(cfg *config.Config, log logger.Logger, parent tea.Model) S
Type: "selector",
Description: "Enable for databases with many tables/LOBs. Reduces parallelism, increases max_locks_per_transaction.",
},
{
Key: "skip_preflight_checks",
DisplayName: "Skip Preflight Checks",
Value: func(c *config.Config) string {
if c.SkipPreflightChecks {
return "⚠️ SKIPPED (dangerous)"
}
return "Enabled (safe)"
},
Update: func(c *config.Config, v string) error {
c.SkipPreflightChecks = !c.SkipPreflightChecks
return nil
},
Type: "selector",
Description: "⚠️ WARNING: Skipping checks may result in failed restores or data loss. Only use if checks are too slow.",
},
{
Key: "cluster_parallelism",
DisplayName: "Cluster Parallelism",

View File

@@ -5,11 +5,15 @@ import (
"fmt"
tea "github.com/charmbracelet/bubbletea"
"github.com/charmbracelet/lipgloss"
"dbbackup/internal/config"
"dbbackup/internal/logger"
)
// warnStyle for TODO/coming soon messages
var warnStyle = lipgloss.NewStyle().Foreground(lipgloss.Color("3")).Bold(true)
// ToolsMenu represents the tools submenu
type ToolsMenu struct {
choices []string
@@ -147,7 +151,7 @@ func (t *ToolsMenu) handleBlobStats() (tea.Model, tea.Cmd) {
// handleBlobExtract opens the blob extraction wizard
func (t *ToolsMenu) handleBlobExtract() (tea.Model, tea.Cmd) {
t.message = infoStyle.Render("[INFO] Blob extraction coming soon - extracts large objects to dedup store")
t.message = warnStyle.Render("[TODO] Blob extraction - planned for v6.1")
return t, nil
}
@@ -159,7 +163,7 @@ func (t *ToolsMenu) handleSystemHealth() (tea.Model, tea.Cmd) {
// handleDedupAnalyze shows dedup store analysis
func (t *ToolsMenu) handleDedupAnalyze() (tea.Model, tea.Cmd) {
t.message = infoStyle.Render("[INFO] Dedup analyze coming soon - shows storage savings and chunk distribution")
t.message = warnStyle.Render("[TODO] Dedup analyze - planned for v6.1")
return t, nil
}
@@ -172,7 +176,7 @@ func (t *ToolsMenu) handleVerifyIntegrity() (tea.Model, tea.Cmd) {
// handleCatalogSync synchronizes backup catalog
func (t *ToolsMenu) handleCatalogSync() (tea.Model, tea.Cmd) {
t.message = infoStyle.Render("[INFO] Catalog sync coming soon - synchronizes local catalog with cloud storage")
t.message = warnStyle.Render("[TODO] Catalog sync TUI - use CLI: dbbackup catalog sync")
return t, nil
}

View File

@@ -16,7 +16,7 @@ import (
// Build information (set by ldflags)
var (
version = "5.7.2"
version = "5.8.24"
buildTime = "unknown"
gitCommit = "unknown"
)

233
release.sh Executable file
View File

@@ -0,0 +1,233 @@
#!/bin/bash
# Release script for dbbackup
# Builds binaries and creates/updates GitHub release
#
# Usage:
# ./release.sh # Build and release current version
# ./release.sh --bump # Bump patch version, build, and release
# ./release.sh --update # Update existing release with new binaries
# ./release.sh --dry-run # Show what would happen without doing it
set -e
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[0;33m'
BLUE='\033[0;34m'
BOLD='\033[1m'
NC='\033[0m'
# Configuration
TOKEN_FILE=".gh_token"
MAIN_FILE="main.go"
# Parse arguments
BUMP_VERSION=false
UPDATE_ONLY=false
DRY_RUN=false
RELEASE_MSG=""
while [[ $# -gt 0 ]]; do
case $1 in
--bump)
BUMP_VERSION=true
shift
;;
--update)
UPDATE_ONLY=true
shift
;;
--dry-run)
DRY_RUN=true
shift
;;
-m|--message)
RELEASE_MSG="$2"
shift 2
;;
--help|-h)
echo "Usage: $0 [OPTIONS]"
echo ""
echo "Options:"
echo " --bump Bump patch version before release"
echo " --update Update existing release (don't create new)"
echo " --dry-run Show what would happen without doing it"
echo " -m, --message Release message/comment (required for new releases)"
echo " --help Show this help"
echo ""
echo "Examples:"
echo " $0 -m \"Fix TUI crash on cluster restore\""
echo " $0 --bump -m \"Add new backup compression option\""
echo " $0 --update # Just update binaries, no message needed"
echo ""
echo "Token file: .gh_token (gitignored)"
exit 0
;;
*)
echo -e "${RED}Unknown option: $1${NC}"
echo "Use --help for usage"
exit 1
;;
esac
done
# Check for GitHub token
if [ ! -f "$TOKEN_FILE" ]; then
echo -e "${RED}❌ Token file not found: $TOKEN_FILE${NC}"
echo ""
echo "Create it with:"
echo " echo 'your_github_token' > $TOKEN_FILE"
echo ""
echo "The file is gitignored for security."
exit 1
fi
GH_TOKEN=$(tr -d '[:space:]' < "$TOKEN_FILE")
if [ -z "$GH_TOKEN" ]; then
echo -e "${RED}❌ Token file is empty${NC}"
exit 1
fi
export GH_TOKEN
# Get current version
CURRENT_VERSION=$(grep 'version.*=' "$MAIN_FILE" | head -1 | sed 's/.*"\(.*\)".*/\1/')
echo -e "${BLUE}📦 Current version: ${YELLOW}${CURRENT_VERSION}${NC}"
# Bump version if requested
if [ "$BUMP_VERSION" = true ]; then
# Parse version (X.Y.Z)
MAJOR=$(echo "$CURRENT_VERSION" | cut -d. -f1)
MINOR=$(echo "$CURRENT_VERSION" | cut -d. -f2)
PATCH=$(echo "$CURRENT_VERSION" | cut -d. -f3)
NEW_PATCH=$((PATCH + 1))
NEW_VERSION="${MAJOR}.${MINOR}.${NEW_PATCH}"
echo -e "${GREEN}📈 Bumping version: ${YELLOW}${CURRENT_VERSION}${NC} → ${GREEN}${NEW_VERSION}${NC}"
if [ "$DRY_RUN" = false ]; then
sed -i "s/version.*=.*\"${CURRENT_VERSION}\"/version = \"${NEW_VERSION}\"/" "$MAIN_FILE"
CURRENT_VERSION="$NEW_VERSION"
fi
fi
TAG="v${CURRENT_VERSION}"
echo -e "${BLUE}🏷️ Release tag: ${YELLOW}${TAG}${NC}"
# Require message for new releases (not updates)
if [ -z "$RELEASE_MSG" ] && [ "$UPDATE_ONLY" = false ] && [ "$DRY_RUN" = false ]; then
echo -e "${RED}❌ Release message required. Use -m \"Your message\"${NC}"
echo ""
echo "Example:"
echo " $0 -m \"Fix TUI crash on cluster restore\""
exit 1
fi
if [ "$DRY_RUN" = true ]; then
echo -e "${YELLOW}🔍 DRY RUN - No changes will be made${NC}"
echo ""
echo "Would execute:"
echo " 1. Build binaries with build_all.sh"
echo " 2. Commit and push changes"
echo " 3. Create/update release ${TAG}"
exit 0
fi
# Build binaries
echo ""
echo -e "${BOLD}${BLUE}🔨 Building binaries...${NC}"
bash build_all.sh
# Check if there are changes to commit
if [ -n "$(git status --porcelain)" ]; then
echo ""
echo -e "${BLUE}📝 Committing changes...${NC}"
git add -A
# Generate commit message using the release message
if [ -n "$RELEASE_MSG" ]; then
COMMIT_MSG="${TAG}: ${RELEASE_MSG}"
elif [ "$BUMP_VERSION" = true ]; then
COMMIT_MSG="${TAG}: Version bump"
else
COMMIT_MSG="${TAG}: Release build"
fi
git commit -m "$COMMIT_MSG"
fi
# Push changes
echo -e "${BLUE}⬆️ Pushing to origin...${NC}"
git push origin main
# Handle tag
TAG_EXISTS=$(git tag -l "$TAG")
if [ -z "$TAG_EXISTS" ]; then
echo -e "${BLUE}🏷️ Creating tag ${TAG}...${NC}"
git tag "$TAG"
git push origin "$TAG"
else
echo -e "${YELLOW}⚠️ Tag ${TAG} already exists${NC}"
fi
# Check if release exists
echo ""
echo -e "${BLUE}🚀 Preparing release...${NC}"
RELEASE_EXISTS=$(gh release view "$TAG" >/dev/null 2>&1 && echo "yes" || echo "no")
if [ "$RELEASE_EXISTS" = "yes" ] || [ "$UPDATE_ONLY" = true ]; then
echo -e "${YELLOW}📦 Updating existing release ${TAG}...${NC}"
# Upload binaries, overwriting any existing assets (--clobber)
for binary in bin/dbbackup_*; do
if [ -f "$binary" ]; then
ASSET_NAME=$(basename "$binary")
echo " Uploading $ASSET_NAME..."
gh release upload "$TAG" "$binary" --clobber
fi
done
else
echo -e "${GREEN}📦 Creating new release ${TAG}...${NC}"
# Generate release notes with the provided message
NOTES="## ${TAG}: ${RELEASE_MSG}
### Downloads
| Platform | Architecture | Binary |
|----------|--------------|--------|
| Linux | x86_64 (Intel/AMD) | \`dbbackup_linux_amd64\` |
| Linux | ARM64 | \`dbbackup_linux_arm64\` |
| Linux | ARMv7 | \`dbbackup_linux_arm_armv7\` |
| macOS | Intel | \`dbbackup_darwin_amd64\` |
| macOS | Apple Silicon (M1/M2) | \`dbbackup_darwin_arm64\` |
### Installation
\`\`\`bash
# Linux x86_64
curl -LO https://github.com/PlusOne/dbbackup/releases/download/${TAG}/dbbackup_linux_amd64
chmod +x dbbackup_linux_amd64
sudo mv dbbackup_linux_amd64 /usr/local/bin/dbbackup
# macOS Apple Silicon
curl -LO https://github.com/PlusOne/dbbackup/releases/download/${TAG}/dbbackup_darwin_arm64
chmod +x dbbackup_darwin_arm64
sudo mv dbbackup_darwin_arm64 /usr/local/bin/dbbackup
\`\`\`
"
gh release create "$TAG" \
--title "${TAG}: ${RELEASE_MSG}" \
--notes "$NOTES" \
bin/dbbackup_linux_amd64 \
bin/dbbackup_linux_arm64 \
bin/dbbackup_linux_arm_armv7 \
bin/dbbackup_darwin_amd64 \
bin/dbbackup_darwin_arm64
fi
echo ""
echo -e "${GREEN}${BOLD}✅ Release complete!${NC}"
echo -e " ${BLUE}https://github.com/PlusOne/dbbackup/releases/tag/${TAG}${NC}"

222
scripts/dbtest.sh Normal file
View File

@@ -0,0 +1,222 @@
#!/bin/bash
# Enterprise Database Test Utility
set -e
DB_NAME="${DB_NAME:-testdb_500gb}"
TARGET_GB="${TARGET_GB:-500}"
BLOB_KB="${BLOB_KB:-100}"
BATCH_ROWS="${BATCH_ROWS:-10000}"
show_help() {
cat << 'HELP'
╔═══════════════════════════════════════════════════════════════╗
║ ENTERPRISE DATABASE TEST UTILITY ║
╚═══════════════════════════════════════════════════════════════╝
Usage: ./dbtest.sh <command> [options]
Commands:
status Show current database status
generate Generate test database (interactive)
generate-bg Generate in background (tmux)
stop Stop running generation
drop Drop test database
drop-all Drop ALL non-system databases
backup Run dbbackup to SMB
estimate Estimate generation time
log Show generation log
attach Attach to tmux session
Environment variables:
DB_NAME=testdb_500gb Database name
TARGET_GB=500 Target size in GB
BLOB_KB=100 Blob size in KB
BATCH_ROWS=10000 Rows per batch
Examples:
./dbtest.sh generate # Interactive generation
TARGET_GB=100 ./dbtest.sh generate-bg # 100GB in background
DB_NAME=mytest ./dbtest.sh drop # Drop specific database
./dbtest.sh drop-all # Clean slate
HELP
}
cmd_status() {
echo "╔═══════════════════════════════════════════════════════════════╗"
echo "║ DATABASE STATUS - $(date '+%Y-%m-%d %H:%M:%S')"
echo "╚═══════════════════════════════════════════════════════════════╝"
echo ""
echo "┌─ GENERATION ──────────────────────────────────────────────────┐"
if tmux has-session -t dbgen 2>/dev/null; then
echo "│ Status: ⏳ RUNNING (attach: ./dbtest.sh attach)"
echo "│ Log: $(tail -1 /root/generate_500gb.log 2>/dev/null | cut -c1-55)"
else
echo "│ Status: ⏹ Not running"
fi
echo "└───────────────────────────────────────────────────────────────┘"
echo ""
echo "┌─ POSTGRESQL DATABASES ─────────────────────────────────────────┐"
sudo -u postgres psql -t -c "SELECT datname || ': ' || pg_size_pretty(pg_database_size(datname)) FROM pg_database WHERE datname NOT LIKE 'template%' ORDER BY pg_database_size(datname) DESC" 2>/dev/null | sed 's/^/│ /'
echo "└───────────────────────────────────────────────────────────────┘"
echo ""
echo "┌─ STORAGE ──────────────────────────────────────────────────────┐"
echo -n "│ Fast 1TB: "; df -h /mnt/HC_Volume_104577460 2>/dev/null | awk 'NR==2{print $3"/"$2" ("$5")"}' || echo "N/A"
echo -n "│ SMB 10TB: "; df -h /mnt/smb-devdb 2>/dev/null | awk 'NR==2{print $3"/"$2" ("$5")"}' || echo "N/A"
echo -n "│ Local: "; df -h / | awk 'NR==2{print $3"/"$2" ("$5")"}'
echo "└───────────────────────────────────────────────────────────────┘"
}
cmd_stop() {
echo "Stopping generation..."
tmux kill-session -t dbgen 2>/dev/null && echo "Stopped." || echo "Not running."
}
cmd_drop() {
echo "Dropping database: $DB_NAME"
sudo -u postgres psql -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname='$DB_NAME' AND pid <> pg_backend_pid();" 2>/dev/null || true
sudo -u postgres dropdb --if-exists "$DB_NAME" && echo "Dropped: $DB_NAME" || echo "Not found."
}
cmd_drop_all() {
echo "WARNING: This will drop ALL non-system databases!"
read -p "Type 'YES' to confirm: " confirm
if [ "$confirm" != "YES" ]; then echo "Cancelled."; exit 0; fi   # plain [..] && .. would trip set -e on confirm
for db in $(sudo -u postgres psql -t -c "SELECT datname FROM pg_database WHERE datname NOT IN ('postgres','template0','template1')"); do
db=$(echo $db | tr -d ' ')
[ -n "$db" ] && echo "Dropping: $db" && sudo -u postgres dropdb --if-exists "$db"
done
echo "Done."
}
cmd_log() {
tail -50 /root/generate_500gb.log 2>/dev/null || echo "No log file."
}
cmd_attach() {
tmux has-session -t dbgen 2>/dev/null && tmux attach -t dbgen || echo "Not running."
}
cmd_backup() {
mkdir -p /mnt/smb-devdb/cluster-500gb
dbbackup backup cluster --backup-dir /mnt/smb-devdb/cluster-500gb
}
cmd_estimate() {
echo "Target: ${TARGET_GB}GB with ${BLOB_KB}KB blobs"
mins=$((TARGET_GB / 2))
echo "Estimated: ~${mins} minutes (~$((mins/60)) hours)"
}
cmd_generate() {
echo "=== Interactive Database Generator ==="
read -p "Database name [$DB_NAME]: " i; DB_NAME="${i:-$DB_NAME}"
read -p "Target size GB [$TARGET_GB]: " i; TARGET_GB="${i:-$TARGET_GB}"
read -p "Blob size KB [$BLOB_KB]: " i; BLOB_KB="${i:-$BLOB_KB}"
read -p "Rows per batch [$BATCH_ROWS]: " i; BATCH_ROWS="${i:-$BATCH_ROWS}"
echo "Config: $DB_NAME, ${TARGET_GB}GB, ${BLOB_KB}KB blobs"
read -p "Start? [y/N]: " c
if [[ "$c" != "y" && "$c" != "Y" ]]; then echo "Cancelled."; exit 0; fi   # plain [[..]] && .. would trip set -e on confirm
do_generate
}
cmd_generate_bg() {
echo "Starting: $DB_NAME, ${TARGET_GB}GB, ${BLOB_KB}KB blobs"
tmux kill-session -t dbgen 2>/dev/null || true
tmux new-session -d -s dbgen "DB_NAME=$DB_NAME TARGET_GB=$TARGET_GB BLOB_KB=$BLOB_KB BATCH_ROWS=$BATCH_ROWS /root/dbtest.sh _run 2>&1 | tee /root/generate_500gb.log"
echo "Started in tmux. Use: ./dbtest.sh log | attach | stop"
}
do_generate() {
BLOB_BYTES=$((BLOB_KB * 1024))
echo "=== ${TARGET_GB}GB Generator ==="
echo "Started: $(date)"
sudo -u postgres dropdb --if-exists "$DB_NAME"
sudo -u postgres createdb "$DB_NAME"
sudo -u postgres psql -d "$DB_NAME" -c "CREATE EXTENSION IF NOT EXISTS pgcrypto;"
sudo -u postgres psql -d "$DB_NAME" << 'EOSQL'
CREATE OR REPLACE FUNCTION large_random_bytes(size_bytes INT) RETURNS BYTEA AS $$
DECLARE r BYTEA := E'\x'; c INT := 1024; m INT := size_bytes;
BEGIN
WHILE m > 0 LOOP
IF m >= c THEN r := r || gen_random_bytes(c); m := m - c;
ELSE r := r || gen_random_bytes(m); m := 0; END IF;
END LOOP;
RETURN r;
END; $$ LANGUAGE plpgsql;
CREATE TABLE enterprise_documents (
id BIGSERIAL PRIMARY KEY, uuid UUID DEFAULT gen_random_uuid(),
created_at TIMESTAMPTZ DEFAULT now(), document_type VARCHAR(50),
document_name VARCHAR(255), file_size BIGINT, content BYTEA
);
ALTER TABLE enterprise_documents ALTER COLUMN content SET STORAGE EXTERNAL;
CREATE INDEX idx_doc_created ON enterprise_documents(created_at);
CREATE TABLE enterprise_transactions (
id BIGSERIAL PRIMARY KEY, created_at TIMESTAMPTZ DEFAULT now(),
customer_id BIGINT, amount DECIMAL(15,2), status VARCHAR(20)
);
EOSQL
echo "Tables created"
batch=0
start=$(date +%s)
while true; do
sz=$(sudo -u postgres psql -t -A -c "SELECT pg_database_size('$DB_NAME')/1024/1024/1024")
if [ "$sz" -ge "$TARGET_GB" ]; then echo "=== Target reached: ${sz}GB ==="; break; fi   # plain [..] && .. would trip set -e each loop
batch=$((batch + 1))
pct=$((sz * 100 / TARGET_GB))
el=$(($(date +%s) - start))
if [ $sz -gt 0 ] && [ $el -gt 0 ]; then
eta="$(((TARGET_GB-sz)*el/sz/60))min"
else
eta="..."
fi
echo "Batch $batch: ${sz}GB/${TARGET_GB}GB (${pct}%) ETA:$eta"
sudo -u postgres psql -q -d "$DB_NAME" -c "
INSERT INTO enterprise_documents (document_type, document_name, file_size, content)
SELECT (ARRAY['PDF','DOCX','IMG','VID'])[floor(random()*4+1)],
'Doc_'||i||'_'||substr(md5(random()::TEXT),1,8), $BLOB_BYTES,
large_random_bytes($BLOB_BYTES)
FROM generate_series(1, $BATCH_ROWS) i;"
sudo -u postgres psql -q -d "$DB_NAME" -c "
INSERT INTO enterprise_transactions (customer_id, amount, status)
SELECT (random()*1000000)::BIGINT, (random()*10000)::DECIMAL(15,2),
(ARRAY['ok','pending','failed'])[floor(random()*3+1)]
FROM generate_series(1, 20000);"
done
sudo -u postgres psql -d "$DB_NAME" -c "ANALYZE;"
sudo -u postgres psql -d "$DB_NAME" -c "SELECT pg_size_pretty(pg_database_size('$DB_NAME')) as size, (SELECT count(*) FROM enterprise_documents) as docs;"
echo "Completed: $(date)"
}
case "${1:-help}" in
status) cmd_status ;;
generate) cmd_generate ;;
generate-bg) cmd_generate_bg ;;
stop) cmd_stop ;;
drop) cmd_drop ;;
drop-all) cmd_drop_all ;;
backup) cmd_backup ;;
estimate) cmd_estimate ;;
log) cmd_log ;;
attach) cmd_attach ;;
_run) do_generate ;;
help|--help|-h) show_help ;;
*) echo "Unknown: $1"; show_help ;;
esac

232
tests/tui_smoke_test.sh Executable file
View File

@@ -0,0 +1,232 @@
#!/bin/bash
# TUI Smoke Test Script
# Tests all TUI menu options via auto-select to ensure they don't crash
#
# Usage: ./tests/tui_smoke_test.sh [--db-host HOST] [--db-port PORT]
#
# Requirements:
# - dbbackup binary in PATH or ./bin/
# - Optional: PostgreSQL connection for full testing
set -e
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Configuration
DBBACKUP="${DBBACKUP:-$(command -v dbbackup 2>/dev/null || echo "./bin/dbbackup_linux_amd64")}"
TIMEOUT_SECONDS=5
PASSED=0
FAILED=0
SKIPPED=0
# Parse arguments
DB_HOST="${DB_HOST:-localhost}"
DB_PORT="${DB_PORT:-5432}"
while [[ $# -gt 0 ]]; do
case $1 in
--db-host) DB_HOST="$2"; shift 2 ;;
--db-port) DB_PORT="$2"; shift 2 ;;
--binary) DBBACKUP="$2"; shift 2 ;;
--help)
echo "Usage: $0 [--db-host HOST] [--db-port PORT] [--binary PATH]"
exit 0
;;
*) shift ;;
esac
done
echo "=============================================="
echo " TUI Smoke Test Suite"
echo "=============================================="
echo "Binary: $DBBACKUP"
echo "Database: $DB_HOST:$DB_PORT"
echo ""
# Check binary exists
if [[ ! -x "$DBBACKUP" ]]; then
echo -e "${RED}ERROR: dbbackup binary not found at $DBBACKUP${NC}"
exit 1
fi
# Get version
VERSION=$("$DBBACKUP" version 2>/dev/null | head -1 || echo "unknown")
echo "Version: $VERSION"
echo ""
# Menu item mapping (index -> name -> expected behavior)
declare -A MENU_ITEMS=(
[0]="Single Database Backup"
[1]="Sample Database Backup"
[2]="Cluster Backup"
[3]="Separator (skip)"
[4]="Restore Single Database"
[5]="Restore Cluster Backup"
[6]="Diagnose Backup File"
[7]="List & Manage Backups"
[8]="View Backup Schedule"
[9]="View Backup Chain"
[10]="Separator (skip)"
[11]="System Resource Profile"
[12]="Tools"
[13]="View Active Operations"
[14]="Show Operation History"
[15]="Database Status"
[16]="Configuration Settings"
[17]="Clear Operation History"
[18]="Quit"
)
# Items that require database connection
DB_REQUIRED=(0 1 2 15)
# Items that require file selection (will timeout, that's OK)
FILE_REQUIRED=(4 5 6 7)
# Items that are separators (should be skipped)
SEPARATORS=(3 10)
# Test function
test_menu_item() {
local idx=$1
local name="${MENU_ITEMS[$idx]}"
local expect_timeout=false
local expect_db=false
# Check if separator
for sep in "${SEPARATORS[@]}"; do
if [[ $idx -eq $sep ]]; then
echo -e " [${YELLOW}SKIP${NC}] #$idx: $name"
((++SKIPPED))   # pre-increment: ((X++)) returns status 1 under set -e when X is 0
return 0
fi
done
# Check if requires file selection (will timeout waiting for input)
for item in "${FILE_REQUIRED[@]}"; do
if [[ $idx -eq $item ]]; then
expect_timeout=true
break
fi
done
# Check if requires database
for item in "${DB_REQUIRED[@]}"; do
if [[ $idx -eq $item ]]; then
expect_db=true
break
fi
done
# Run test with timeout
local output
local exit_code=0
if [[ "$expect_timeout" == "true" ]]; then
# These items wait for user input, timeout is expected
output=$(timeout $TIMEOUT_SECONDS "$DBBACKUP" --tui-auto-select=$idx \
--host "$DB_HOST" --port "$DB_PORT" \
--no-save-config 2>&1) || exit_code=$?
# Timeout exit code is 124, that's OK for interactive items
if [[ $exit_code -eq 124 ]]; then
echo -e " [${GREEN}PASS${NC}] #$idx: $name (timeout expected)"
((++PASSED))
return 0
fi
else
output=$(timeout $TIMEOUT_SECONDS "$DBBACKUP" --tui-auto-select=$idx \
--host "$DB_HOST" --port "$DB_PORT" \
--no-save-config 2>&1) || exit_code=$?
fi
# Check for crashes/panics
if echo "$output" | grep -qi "panic\|fatal\|segfault"; then
echo -e " [${RED}FAIL${NC}] #$idx: $name - CRASH DETECTED"
echo " Output: $(echo "$output" | head -3)"
((++FAILED))
return 1
fi
# Check exit code
if [[ $exit_code -eq 0 ]] || [[ $exit_code -eq 124 ]]; then
echo -e " [${GREEN}PASS${NC}] #$idx: $name"
((++PASSED))
elif [[ "$expect_db" == "true" ]] && echo "$output" | grep -qi "connection\|connect\|database"; then
# DB connection failure is acceptable if no DB configured
echo -e " [${YELLOW}SKIP${NC}] #$idx: $name (no DB connection)"
((++SKIPPED))
else
echo -e " [${RED}FAIL${NC}] #$idx: $name (exit code: $exit_code)"
echo " Output: $(echo "$output" | tail -2)"
((++FAILED))
fi
}
echo "Running menu item tests..."
echo ""
# Test each menu item
for idx in $(seq 0 18); do
test_menu_item $idx
done
echo ""
echo "=============================================="
echo " Test Results"
echo "=============================================="
echo -e " ${GREEN}Passed:${NC} $PASSED"
echo -e " ${YELLOW}Skipped:${NC} $SKIPPED"
echo -e " ${RED}Failed:${NC} $FAILED"
echo ""
# Additional structural tests
echo "Running structural tests..."
# Test --help
if "$DBBACKUP" --help 2>&1 | grep -q "Interactive Mode"; then
echo -e " [${GREEN}PASS${NC}] --help includes TUI info"
((++PASSED))
else
echo -e " [${RED}FAIL${NC}] --help missing TUI info"
((++FAILED))
fi
# Test version
if "$DBBACKUP" version 2>&1 | grep -qE "^v?[0-9]+\.[0-9]+"; then
echo -e " [${GREEN}PASS${NC}] version command works"
((++PASSED))
else
echo -e " [${RED}FAIL${NC}] version command failed"
((++FAILED))
fi
# Test --no-tui mode
if timeout 2 "$DBBACKUP" status --no-tui --host "$DB_HOST" 2>&1 | grep -qiE "status|error|connection"; then
echo -e " [${GREEN}PASS${NC}] --no-tui mode works"
((++PASSED))
else
echo -e " [${YELLOW}SKIP${NC}] --no-tui test inconclusive"
((++SKIPPED))
fi
echo ""
echo "=============================================="
echo " Final Summary"
echo "=============================================="
echo -e " ${GREEN}Total Passed:${NC} $PASSED"
echo -e " ${YELLOW}Total Skipped:${NC} $SKIPPED"
echo -e " ${RED}Total Failed:${NC} $FAILED"
echo ""
if [[ $FAILED -gt 0 ]]; then
echo -e "${RED}Some tests failed!${NC}"
exit 1
else
echo -e "${GREEN}All tests passed!${NC}"
exit 0
fi