Compare commits

...

11 Commits

Author SHA1 Message Date
4486a5d617 build: v3.42.78 with heartbeat progress for all operations
All checks were successful
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m27s
CI/CD / Build & Release (push) Successful in 3m16s
- Linux amd64/arm64/armv7
- macOS Intel/Apple Silicon
- Windows amd64/arm64
- BSD variants (FreeBSD, NetBSD, OpenBSD)
- All binaries include real-time progress heartbeat
- SHA256 checksums included
2026-01-21 14:03:52 +01:00
75dee1fff5 feat: Add heartbeat progress for extraction, single DB restore, and backups
All checks were successful
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m27s
CI/CD / Build & Release (push) Has been skipped
- Add heartbeat ticker to Phase 1 (tar extraction) in cluster restore
- Add heartbeat ticker to single database restore operations
- Add heartbeat ticker to backup operations (pg_dump, mysqldump)
- All heartbeats update every 5 seconds showing elapsed time
- Prevents frozen progress during long-running operations

Examples:
- 'Extracting archive... (elapsed: 2m 15s)'
- 'Restoring myapp... (elapsed: 5m 30s)'
- 'Backing up database... (elapsed: 8m 45s)'

Completes heartbeat implementation for all major blocking operations.
2026-01-21 14:00:31 +01:00
91d494537d feat: Add real-time progress heartbeat during Phase 2 cluster restore
All checks were successful
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Has been skipped
- Add heartbeat ticker that updates progress every 5 seconds
- Show elapsed time during database restore: 'Restoring myapp (1/5) - elapsed: 3m 45s'
- Prevents frozen progress bar during long-running pg_restore operations
- Implements Phase 1 of restore progress enhancement proposal

Fixes issue where progress bar appeared frozen during large database restores
because pg_restore is a blocking subprocess with no intermediate feedback.
2026-01-21 13:55:39 +01:00
8ffc1ba23c docs: Add TUI support to v3.42.77 release notes
All checks were successful
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Has been skipped
2026-01-21 13:51:09 +01:00
8e8045d8c0 build: Generate binaries and checksums for v3.42.77
Some checks failed
CI/CD / Lint (push) Has been cancelled
CI/CD / Build & Release (push) Has been cancelled
CI/CD / Test (push) Has been cancelled
2026-01-21 13:50:25 +01:00
0e94dcf384 docs: Update CHANGELOG with TUI support for single DB extraction
All checks were successful
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m30s
CI/CD / Build & Release (push) Has been skipped
2026-01-21 13:43:55 +01:00
33adfbdb38 feat: Add TUI support for single database restore from cluster backups
Some checks failed
CI/CD / Lint (push) Has been cancelled
CI/CD / Build & Release (push) Has been cancelled
CI/CD / Test (push) Has been cancelled
New TUI features:
- Press 's' on cluster backup to select individual databases
- New ClusterDatabaseSelector view with database list and sizes
- Single/multiple selection modes
- restore-cluster-single mode in RestoreExecutionModel
- Automatic format detection for extracted dumps

User flow:
1. Browse to cluster backup in archive browser
2. Press 's' (or select cluster in single restore mode)
3. See list of databases with sizes
4. Select database with arrow keys
5. Press Enter to restore

Benefits:
- Faster restores (extract only needed database)
- Less disk space usage
- Easy database migration/testing via TUI
- Consistent with CLI --database flag

Implementation:
- internal/tui/cluster_db_selector.go: New selector view
- archive_browser.go: Added 's' hotkey + smart cluster handling
- restore_exec.go: Support for restore-cluster-single mode
- Uses existing RestoreSingleFromCluster() backend
2026-01-21 13:43:17 +01:00
af34eaa073 feat: Add single database extraction from cluster backups
All checks were successful
CI/CD / Test (push) Successful in 1m21s
CI/CD / Lint (push) Successful in 1m28s
CI/CD / Build & Release (push) Has been skipped
- New --list-databases flag to list all databases in cluster backup
- New --database flag to extract/restore single database from cluster
- New --databases flag to extract multiple databases (comma-separated)
- New --output-dir flag to extract without restoring
- Support for database renaming with --target flag

Use cases:
- Selective disaster recovery (restore only affected databases)
- Database migration between clusters
- Testing workflows (restore with different names)
- Faster restores (extract only what you need)
- Less disk space usage

Implementation:
- ListDatabasesInCluster() - scan and list databases with sizes
- ExtractDatabaseFromCluster() - extract single database
- ExtractMultipleDatabasesFromCluster() - extract multiple databases
- RestoreSingleFromCluster() - extract and restore in one step

Examples:
  dbbackup restore cluster backup.tar.gz --list-databases
  dbbackup restore cluster backup.tar.gz --database myapp --confirm
  dbbackup restore cluster backup.tar.gz --database myapp --target test --confirm
  dbbackup restore cluster backup.tar.gz --databases "app1,app2" --output-dir /tmp
2026-01-21 13:34:24 +01:00
babce7cc83 feat: Add single database extraction from cluster backups
All checks were successful
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m27s
CI/CD / Build & Release (push) Successful in 3m17s
- List databases in cluster backup: --list-databases
- Extract single database: --database <name> --output-dir <dir>
- Restore single database from cluster: --database <name> --confirm
- Rename on restore: --database <name> --target <new_name> --confirm
- Extract multiple databases: --databases "db1,db2,db3" --output-dir <dir>

Benefits:
- Faster selective restores (extract only what you need)
- Less disk space usage during restore
- Easy database migration/copying between clusters
- Better testing workflow (restore with different name)
- Selective disaster recovery

Implementation:
- New internal/restore/extract.go with extraction functions
- ListDatabasesInCluster(): Fast scan of cluster archive
- ExtractDatabaseFromCluster(): Extract single database
- ExtractMultipleDatabasesFromCluster(): Extract multiple databases
- RestoreSingleFromCluster(): Extract + restore single database
- Stream-based extraction with progress feedback

Examples:
  dbbackup restore cluster backup.tar.gz --list-databases
  dbbackup restore cluster backup.tar.gz --database myapp --output-dir /tmp
  dbbackup restore cluster backup.tar.gz --database myapp --confirm
  dbbackup restore cluster backup.tar.gz --database myapp --target myapp_test --confirm
2026-01-21 11:31:38 +01:00
ae8c8fde3d perf: Reduce progress update intervals for smoother real-time feedback
All checks were successful
CI/CD / Test (push) Successful in 1m16s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Successful in 3m16s
- Archive extraction: 100ms → 50ms (2× faster updates)
- Dots animation: 500ms → 100ms (5× faster updates)
- Progress updates now feel more responsive and closer to real time
- Improves user experience during long-running restore operations
2026-01-21 11:04:50 +01:00
346cb7fb61 feat: Extend fix_postgres_locks.sh with work_mem and maintenance_work_mem optimizations
All checks were successful
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m28s
CI/CD / Build & Release (push) Has been skipped
- Added work_mem: 64MB → 256MB (reduces temp file usage)
- Added maintenance_work_mem: 2GB → 4GB (faster restore/vacuum)
- Shows 4 config values before fix (instead of 2)
- Applies 3 optimizations (instead of just locks)
- Better success tracking and error handling
- Enhanced verification commands and benefits list
2026-01-21 10:43:05 +01:00
14 changed files with 1436 additions and 20 deletions


@ -7,6 +7,35 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
### Added - Single Database Extraction from Cluster Backups (CLI + TUI)
- **Extract and restore individual databases from cluster backups** - selective restore without full cluster restoration
- **CLI Commands**:
- **List databases**: `dbbackup restore cluster backup.tar.gz --list-databases`
- Shows all databases in cluster backup with sizes
- Fast scan without full extraction
- **Extract single database**: `dbbackup restore cluster backup.tar.gz --database myapp --output-dir /tmp/extract`
- Extracts only the specified database dump
- No restore, just file extraction
- **Restore single database from cluster**: `dbbackup restore cluster backup.tar.gz --database myapp --confirm`
- Extracts and restores only one database
- Much faster than full cluster restore when you only need one database
- **Rename on restore**: `dbbackup restore cluster backup.tar.gz --database myapp --target myapp_test --confirm`
- Restore with different database name (useful for testing)
- **Extract multiple databases**: `dbbackup restore cluster backup.tar.gz --databases "app1,app2,app3" --output-dir /tmp/extract`
- Comma-separated list of databases to extract
- **TUI Support**:
- Press **'s'** on any cluster backup in archive browser to select individual databases
- New **ClusterDatabaseSelector** view shows all databases with sizes
- Navigate with arrow keys, select with Enter
- Automatic handling when cluster backup selected in single restore mode
- Full restore preview and confirmation workflow
- **Benefits**:
- Faster restores (extract only what you need)
- Less disk space usage during restore
- Easy database migration/copying
- Better testing workflow
- Selective disaster recovery
### Performance - Cluster Restore Optimization
- **Eliminated duplicate archive extraction in cluster restore** - saves 30-50% time on large restores
- Previously: Archive was extracted twice (once in preflight validation, once in actual restore)


@ -0,0 +1,171 @@
# Restore Progress Bar Enhancement Proposal
## Problem
During Phase 2 cluster restore, the progress bar is not real-time because:
- `pg_restore` subprocess blocks until completion
- Progress updates only happen **before** each database restore starts
- No feedback during actual restore execution (which can take hours)
- Users see frozen progress bar during large database restores
## Root Cause
In `internal/restore/engine.go`:
- `executeRestoreCommand()` blocks on `cmd.Wait()`
- Progress is only reported at goroutine entry (line ~1315)
- No streaming progress during pg_restore execution
## Proposed Solutions
### Option 1: Parse pg_restore stderr for progress (RECOMMENDED)
**Pros:**
- Real-time feedback during restore
- Works with existing pg_restore
- No external tools needed
**Implementation:**
```go
// In executeRestoreCommand, modify stderr reader:
go func() {
	scanner := bufio.NewScanner(stderr)
	for scanner.Scan() {
		line := scanner.Text()
		// Parse pg_restore progress lines
		// Format: "pg_restore: processing item 1234 TABLE public users"
		if strings.Contains(line, "processing item") {
			e.reportItemProgress(line) // Update progress bar
		}
		// Capture errors
		if strings.Contains(line, "ERROR:") {
			lastError = line
			errorCount++
		}
	}
}()
```
**Add to RestoreCluster goroutine:**
```go
// Track sub-items within each database
var currentDBItems, totalDBItems int
e.setItemProgressCallback(func(current, total int) {
	currentDBItems = current
	totalDBItems = total
	// Update TUI with sub-progress
	e.reportDatabaseSubProgress(idx, totalDBs, dbName, current, total)
})
```
### Option 2: Verbose mode with line counting
**Pros:**
- More granular progress (row-level)
- Shows exact operation being performed
**Cons:**
- `--verbose` causes massive stderr output (OOM risk on huge DBs)
- Currently disabled for memory safety
- Requires careful memory management
### Option 3: Hybrid approach (BEST)
**Combine both:**
1. **Default**: Parse non-verbose pg_restore output for item counts
2. **Small DBs** (<500MB): Enable verbose for detailed progress
3. **Periodic updates**: Report progress every 5 seconds even without stderr changes
**Implementation:**
```go
// Add periodic progress ticker
progressTicker := time.NewTicker(5 * time.Second)
defer progressTicker.Stop()

go func() {
	for {
		select {
		case <-progressTicker.C:
			// Report heartbeat even if no stderr
			e.reportHeartbeat(dbName, time.Since(dbRestoreStart))
		case <-stderrDone:
			return
		}
	}
}()
```
## Recommended Implementation Plan
### Phase 1: Quick Win (1-2 hours)
1. Add heartbeat ticker in cluster restore goroutines
2. Update TUI to show "Restoring database X... (elapsed: 3m 45s)"
3. No code changes to pg_restore wrapper
### Phase 2: Parse pg_restore Output (4-6 hours)
1. Parse stderr for "processing item" lines
2. Extract current/total item counts
3. Report sub-progress to TUI
4. Update progress bar calculation:
```
dbProgress = baseProgress + (itemsDone/totalItems) * dbWeightedPercent
```
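For illustration, a minimal sketch of that calculation in Go, assuming each database contributes an equal share of the overall bar (the helper name and equal weighting are assumptions, not existing code):
```go
// clusterProgress returns overall percent complete for database dbIndex
// (0-based) out of totalDBs, given item-level progress inside it.
// Hypothetical helper for illustration only.
func clusterProgress(dbIndex, totalDBs, itemsDone, totalItems int) float64 {
	dbWeight := 100.0 / float64(totalDBs) // each DB's share of the bar
	base := float64(dbIndex) * dbWeight   // databases already finished
	if totalItems == 0 {
		return base
	}
	return base + float64(itemsDone)/float64(totalItems)*dbWeight
}
```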
### Phase 3: Smart Verbose Mode (optional)
1. Detect database size before restore
2. Enable verbose for DBs < 500MB
3. Parse verbose output for detailed progress
4. Automatic fallback to item-based for large DBs
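A minimal sketch of the size gate (only the 500MB threshold comes from this proposal; the helper name and how it learns the size are assumptions):
```go
// useVerbose applies the <500MB rule above. dbSizeBytes would come from
// a pre-restore size lookup (e.g. the recorded dump size); the wiring
// is an assumption, only the threshold is from this proposal.
func useVerbose(dbSizeBytes int64) bool {
	const verboseLimit = 500 * 1024 * 1024 // 500MB
	return dbSizeBytes > 0 && dbSizeBytes < verboseLimit
}
```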
## Files to Modify
1. **internal/restore/engine.go**:
- `executeRestoreCommand()` - add progress parsing
- `RestoreCluster()` - add heartbeat ticker
- New: `reportItemProgress()`, `reportHeartbeat()`
2. **internal/tui/restore_exec.go**:
- Update `RestoreExecModel` to handle sub-progress
- Add "elapsed time" display during restore
- Show item counts: "Restoring tables... (234/567)"
3. **internal/progress/indicator.go**:
- Add `UpdateSubProgress(current, total int)` method
- Add `ReportHeartbeat(elapsed time.Duration)` method
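As a rough sketch of those two additions (the names come from this proposal; the free-function form and bodies are assumptions layered on the `Update(string)` call the restore engine already uses elsewhere in this changeset):
```go
// Sketch only: forwards sub-progress and heartbeats through the
// existing Update(string) entry point.
func UpdateSubProgress(ind progress.Indicator, current, total int) {
	ind.Update(fmt.Sprintf("Restoring tables... (%d/%d)", current, total))
}

func ReportHeartbeat(ind progress.Indicator, elapsed time.Duration) {
	ind.Update(fmt.Sprintf("(elapsed: %s)", elapsed.Round(time.Second)))
}
```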
## Example Output
**Before (current):**
```
[====================] Phase 2/3: Restoring Databases (1/5)
Restoring database myapp...
[frozen for 30 minutes]
```
**After (with heartbeat):**
```
[====================] Phase 2/3: Restoring Databases (1/5)
Restoring database myapp... (elapsed: 4m 32s)
[updates every 5 seconds]
```
**After (with item parsing):**
```
[=========>-----------] Phase 2/3: Restoring Databases (1/5)
Restoring database myapp... (processing item 1,234/5,678) (elapsed: 4m 32s)
[smooth progress bar movement]
```
## Testing Strategy
1. Test with small DB (< 100MB) - verify heartbeat works
2. Test with large DB (> 10GB) - verify no OOM, heartbeat works
3. Test with BLOB-heavy DB - verify phased restore shows progress
4. Test parallel cluster restore - verify multiple heartbeats don't conflict
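A cheap unit-level complement to the integration tests above: the `formatDuration` helper this changeset adds to `internal/restore/engine.go` is pure and easy to pin down with a table-driven test (sketch; expected strings follow the helper's own doc comment):
```go
package restore

import (
	"testing"
	"time"
)

func TestFormatDuration(t *testing.T) {
	cases := []struct {
		in   time.Duration
		want string
	}{
		{500 * time.Millisecond, "0s"},
		{45 * time.Second, "45s"},
		{3*time.Minute + 45*time.Second, "3m 45s"},
		{1*time.Hour + 23*time.Minute, "1h 23m"},
	}
	for _, c := range cases {
		if got := formatDuration(c.in); got != c.want {
			t.Errorf("formatDuration(%v) = %q, want %q", c.in, got, c.want)
		}
	}
}
```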
## Risk Assessment
- **Low risk**: Heartbeat ticker (Phase 1)
- **Medium risk**: stderr parsing (Phase 2) - test thoroughly
- **High risk**: Verbose mode (Phase 3) - can cause OOM
## Estimated Implementation Time
- Phase 1 (heartbeat): 1-2 hours
- Phase 2 (item parsing): 4-6 hours
- Phase 3 (smart verbose): 8-10 hours (optional)
**Total for Phases 1+2: 5-8 hours**


@ -4,8 +4,8 @@ This directory contains pre-compiled binaries for the DB Backup Tool across mult
## Build Information
- **Version**: 3.42.50
- **Build Time**: 2026-01-18_20:23:55_UTC
- **Git Commit**: a759f4d
- **Build Time**: 2026-01-21_13:01:17_UTC
- **Git Commit**: 75dee1f
## Recent Updates (v1.1.0)
- ✅ Fixed TUI progress display with line-by-line output

bin/checksums.txt (new file, +10 lines)

@ -0,0 +1,10 @@
674a9cdb28a6b27ebb3004b2a00330cb23708894207681405abb4774975fd92d dbbackup_darwin_amd64
c65808a0b9a3eb5a88d4c30579aa67f10093aeb77db74c1d4747730f8bf33fa6 dbbackup_darwin_arm64
c6dd8effb74c8a69b0232c1eb603c4cebe2b6cdf5d2f764c6b9d4ecc98cff6fd dbbackup_freebsd_amd64
c1f24b324e0afc6b6e59c846d823d09c6c193cf812a92daab103977ec605cb48 dbbackup_linux_amd64
edf31fca271a264a2a3a88c8de8ab0d4c576f5f08199fd60e68791707e0d87a1 dbbackup_linux_arm64
d699561ca3b3b40f8d463bbd3b7eade7fce052f3b4aeea8a56896e8cedab433d dbbackup_linux_arm_armv7
f221ccc7202e425acae81acf880ea666432889ac74289031b4942bf5f9284eed dbbackup_netbsd_amd64
f93486bc4efcc627b23c7b0c4e06ffd54e4fb85be322c58aaec3a913b90735af dbbackup_openbsd_amd64
935dfd4b666760efdc43236d12a1098d86a1c540d9a5ca9534efd7f00d2ab541 dbbackup_windows_amd64.exe
b41d2e467d88c3e4c3fe42bf27cdd10f487564710c3802a842ca7c3e639f44df dbbackup_windows_arm64.exe


@ -16,6 +16,7 @@ import (
"dbbackup/internal/config"
"dbbackup/internal/database"
"dbbackup/internal/pitr"
"dbbackup/internal/progress"
"dbbackup/internal/restore"
"dbbackup/internal/security"
@ -39,6 +40,12 @@ var (
restoreDiagnose bool // Run diagnosis before restore
restoreSaveDebugLog string // Path to save debug log on failure
// Single database extraction from cluster flags
restoreDatabase string // Single database to extract/restore from cluster
restoreDatabases string // Comma-separated list of databases to extract
restoreOutputDir string // Extract to directory (no restore)
restoreListDBs bool // List databases in cluster backup
// Diagnose flags
diagnoseJSON bool
diagnoseDeep bool
@ -136,6 +143,11 @@ var restoreClusterCmd = &cobra.Command{
This command restores all databases that were backed up together
in a cluster backup operation.
Single Database Extraction:
Use --list-databases to see available databases
Use --database to extract/restore a specific database
Use --output-dir to extract without restoring
Safety features:
- Dry-run by default (use --confirm to execute)
- Archive validation and listing
@ -143,6 +155,21 @@ Safety features:
- Sequential database restoration
Examples:
# List databases in cluster backup
dbbackup restore cluster backup.tar.gz --list-databases
# Extract single database (no restore)
dbbackup restore cluster backup.tar.gz --database myapp --output-dir /tmp/extract
# Restore single database from cluster
dbbackup restore cluster backup.tar.gz --database myapp --confirm
# Restore single database with different name
dbbackup restore cluster backup.tar.gz --database myapp --target myapp_test --confirm
# Extract multiple databases
dbbackup restore cluster backup.tar.gz --databases "app1,app2,app3" --output-dir /tmp/extract
# Preview cluster restore
dbbackup restore cluster cluster_backup_20240101_120000.tar.gz
@ -297,6 +324,10 @@ func init() {
restoreSingleCmd.Flags().StringVar(&restoreSaveDebugLog, "save-debug-log", "", "Save detailed error report to file on failure (e.g., /tmp/restore-debug.json)")
// Cluster restore flags
restoreClusterCmd.Flags().BoolVar(&restoreListDBs, "list-databases", false, "List databases in cluster backup and exit")
restoreClusterCmd.Flags().StringVar(&restoreDatabase, "database", "", "Extract/restore single database from cluster")
restoreClusterCmd.Flags().StringVar(&restoreDatabases, "databases", "", "Extract multiple databases (comma-separated)")
restoreClusterCmd.Flags().StringVar(&restoreOutputDir, "output-dir", "", "Extract to directory without restoring (requires --database or --databases)")
restoreClusterCmd.Flags().BoolVar(&restoreConfirm, "confirm", false, "Confirm and execute restore (required)")
restoreClusterCmd.Flags().BoolVar(&restoreDryRun, "dry-run", false, "Show what would be done without executing")
restoreClusterCmd.Flags().BoolVar(&restoreForce, "force", false, "Skip safety checks and confirmations")
@ -311,6 +342,8 @@ func init() {
restoreClusterCmd.Flags().StringVar(&restoreEncryptionKeyEnv, "encryption-key-env", "DBBACKUP_ENCRYPTION_KEY", "Environment variable containing encryption key")
restoreClusterCmd.Flags().BoolVar(&restoreDiagnose, "diagnose", false, "Run deep diagnosis on all dumps before restore")
restoreClusterCmd.Flags().StringVar(&restoreSaveDebugLog, "save-debug-log", "", "Save detailed error report to file on failure (e.g., /tmp/restore-debug.json)")
restoreClusterCmd.Flags().BoolVar(&restoreClean, "clean", false, "Drop and recreate target database (for single DB restore)")
restoreClusterCmd.Flags().BoolVar(&restoreCreate, "create", false, "Create target database if it doesn't exist (for single DB restore)")
// PITR restore flags
restorePITRCmd.Flags().StringVar(&pitrBaseBackup, "base-backup", "", "Path to base backup file (.tar.gz) (required)")
@ -666,6 +699,193 @@ func runRestoreSingle(cmd *cobra.Command, args []string) error {
func runRestoreCluster(cmd *cobra.Command, args []string) error {
archivePath := args[0]
// Convert to absolute path
if !filepath.IsAbs(archivePath) {
absPath, err := filepath.Abs(archivePath)
if err != nil {
return fmt.Errorf("invalid archive path: %w", err)
}
archivePath = absPath
}
// Check if file exists
if _, err := os.Stat(archivePath); err != nil {
return fmt.Errorf("archive not found: %s", archivePath)
}
// Handle --list-databases flag
if restoreListDBs {
return runListDatabases(archivePath)
}
// Handle single/multiple database extraction
if restoreDatabase != "" || restoreDatabases != "" {
return runExtractDatabases(archivePath)
}
// Otherwise proceed with full cluster restore
return runFullClusterRestore(archivePath)
}
// runListDatabases lists all databases in a cluster backup
func runListDatabases(archivePath string) error {
ctx := context.Background()
log.Info("Scanning cluster backup", "archive", filepath.Base(archivePath))
fmt.Println()
databases, err := restore.ListDatabasesInCluster(ctx, archivePath, log)
if err != nil {
return fmt.Errorf("failed to list databases: %w", err)
}
fmt.Printf("📦 Databases in cluster backup:\n")
var totalSize int64
for _, db := range databases {
sizeStr := formatSize(db.Size)
fmt.Printf(" - %-30s (%s)\n", db.Name, sizeStr)
totalSize += db.Size
}
fmt.Printf("\nTotal: %s across %d database(s)\n", formatSize(totalSize), len(databases))
return nil
}
// runExtractDatabases extracts single or multiple databases from cluster backup
func runExtractDatabases(archivePath string) error {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// Setup signal handling
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, os.Interrupt, syscall.SIGTERM)
defer signal.Stop(sigChan)
go func() {
<-sigChan
log.Warn("Extraction interrupted by user")
cancel()
}()
// Single database extraction
if restoreDatabase != "" {
return handleSingleDatabaseExtraction(ctx, archivePath, restoreDatabase)
}
// Multiple database extraction
if restoreDatabases != "" {
return handleMultipleDatabaseExtraction(ctx, archivePath, restoreDatabases)
}
return nil
}
// handleSingleDatabaseExtraction handles single database extraction or restore
func handleSingleDatabaseExtraction(ctx context.Context, archivePath, dbName string) error {
// Extract-only mode (no restore)
if restoreOutputDir != "" {
return extractSingleDatabase(ctx, archivePath, dbName, restoreOutputDir)
}
// Restore mode
if !restoreConfirm {
fmt.Println("\n[DRY-RUN] DRY-RUN MODE - No changes will be made")
fmt.Printf("\nWould extract and restore:\n")
fmt.Printf(" Database: %s\n", dbName)
fmt.Printf(" From: %s\n", archivePath)
targetDB := restoreTarget
if targetDB == "" {
targetDB = dbName
}
fmt.Printf(" Target: %s\n", targetDB)
if restoreClean {
fmt.Printf(" Clean: true (drop and recreate)\n")
}
if restoreCreate {
fmt.Printf(" Create: true (create if missing)\n")
}
fmt.Println("\nTo execute this restore, add --confirm flag")
return nil
}
// Create database instance
db, err := database.New(cfg, log)
if err != nil {
return fmt.Errorf("failed to create database instance: %w", err)
}
defer db.Close()
// Create restore engine
engine := restore.New(cfg, log, db)
// Determine target database name
targetDB := restoreTarget
if targetDB == "" {
targetDB = dbName
}
log.Info("Restoring single database from cluster", "database", dbName, "target", targetDB)
// Restore single database from cluster
if err := engine.RestoreSingleFromCluster(ctx, archivePath, dbName, targetDB, restoreClean, restoreCreate); err != nil {
return fmt.Errorf("restore failed: %w", err)
}
fmt.Printf("\n✅ Successfully restored '%s' as '%s'\n", dbName, targetDB)
return nil
}
// extractSingleDatabase extracts a single database without restoring
func extractSingleDatabase(ctx context.Context, archivePath, dbName, outputDir string) error {
log.Info("Extracting database", "database", dbName, "output", outputDir)
// Create progress indicator
prog := progress.NewIndicator(!restoreNoProgress, "dots")
extractedPath, err := restore.ExtractDatabaseFromCluster(ctx, archivePath, dbName, outputDir, log, prog)
if err != nil {
return fmt.Errorf("extraction failed: %w", err)
}
fmt.Printf("\n✅ Extracted: %s\n", extractedPath)
fmt.Printf(" Database: %s\n", dbName)
fmt.Printf(" Location: %s\n", outputDir)
return nil
}
// handleMultipleDatabaseExtraction handles multiple database extraction
func handleMultipleDatabaseExtraction(ctx context.Context, archivePath, databases string) error {
if restoreOutputDir == "" {
return fmt.Errorf("--output-dir required when using --databases")
}
// Parse database list
dbNames := strings.Split(databases, ",")
for i := range dbNames {
dbNames[i] = strings.TrimSpace(dbNames[i])
}
log.Info("Extracting multiple databases", "count", len(dbNames), "output", restoreOutputDir)
// Create progress indicator
prog := progress.NewIndicator(!restoreNoProgress, "dots")
extractedPaths, err := restore.ExtractMultipleDatabasesFromCluster(ctx, archivePath, dbNames, restoreOutputDir, log, prog)
if err != nil {
return fmt.Errorf("extraction failed: %w", err)
}
fmt.Printf("\n✅ Extracted %d database(s):\n", len(extractedPaths))
for dbName, path := range extractedPaths {
fmt.Printf(" - %s → %s\n", dbName, filepath.Base(path))
}
fmt.Printf(" Location: %s\n", restoreOutputDir)
return nil
}
// runFullClusterRestore performs a full cluster restore
func runFullClusterRestore(archivePath string) error {
// Apply resource profile
if err := config.ApplyProfile(cfg, restoreProfile, restoreJobs, restoreParallelDBs); err != nil {
log.Warn("Invalid profile, using balanced", "error", err)


@ -35,34 +35,77 @@ echo "📊 Current PostgreSQL Configuration:"
echo "────────────────────────────────────────────────────────────"
sudo -u postgres psql -c "SHOW max_locks_per_transaction;" 2>/dev/null || psql -c "SHOW max_locks_per_transaction;" || echo "Unable to query current value"
sudo -u postgres psql -c "SHOW max_connections;" 2>/dev/null || psql -c "SHOW max_connections;" || echo "Unable to query current value"
sudo -u postgres psql -c "SHOW work_mem;" 2>/dev/null || psql -c "SHOW work_mem;" || echo "Unable to query current value"
sudo -u postgres psql -c "SHOW maintenance_work_mem;" 2>/dev/null || psql -c "SHOW maintenance_work_mem;" || echo "Unable to query current value"
echo
# Recommended value
# Recommended values
RECOMMENDED_LOCKS=4096
RECOMMENDED_WORK_MEM="256MB"
RECOMMENDED_MAINTENANCE_WORK_MEM="4GB"
echo "🔧 Applying Fix:"
echo "🔧 Applying Fixes:"
echo "────────────────────────────────────────────────────────────"
echo "Setting max_locks_per_transaction = $RECOMMENDED_LOCKS"
echo "1. Setting max_locks_per_transaction = $RECOMMENDED_LOCKS"
echo "2. Setting work_mem = $RECOMMENDED_WORK_MEM (improves query performance)"
echo "3. Setting maintenance_work_mem = $RECOMMENDED_MAINTENANCE_WORK_MEM (speeds up restore/vacuum)"
echo
# Apply the setting
# Apply the settings
SUCCESS=0
# Fix 1: max_locks_per_transaction
if sudo -u postgres psql -c "ALTER SYSTEM SET max_locks_per_transaction = $RECOMMENDED_LOCKS;" 2>/dev/null; then
echo "✅ Configuration updated successfully"
echo "✅ max_locks_per_transaction updated successfully"
SUCCESS=$((SUCCESS + 1))
elif psql -c "ALTER SYSTEM SET max_locks_per_transaction = $RECOMMENDED_LOCKS;" 2>/dev/null; then
echo "✅ Configuration updated successfully"
echo "✅ max_locks_per_transaction updated successfully"
SUCCESS=$((SUCCESS + 1))
else
echo "❌ Failed to update configuration"
echo "❌ Failed to update max_locks_per_transaction"
fi
# Fix 2: work_mem
if sudo -u postgres psql -c "ALTER SYSTEM SET work_mem = '$RECOMMENDED_WORK_MEM';" 2>/dev/null; then
echo "✅ work_mem updated successfully"
SUCCESS=$((SUCCESS + 1))
elif psql -c "ALTER SYSTEM SET work_mem = '$RECOMMENDED_WORK_MEM';" 2>/dev/null; then
echo "✅ work_mem updated successfully"
SUCCESS=$((SUCCESS + 1))
else
echo "❌ Failed to update work_mem"
fi
# Fix 3: maintenance_work_mem
if sudo -u postgres psql -c "ALTER SYSTEM SET maintenance_work_mem = '$RECOMMENDED_MAINTENANCE_WORK_MEM';" 2>/dev/null; then
echo "✅ maintenance_work_mem updated successfully"
SUCCESS=$((SUCCESS + 1))
elif psql -c "ALTER SYSTEM SET maintenance_work_mem = '$RECOMMENDED_MAINTENANCE_WORK_MEM';" 2>/dev/null; then
echo "✅ maintenance_work_mem updated successfully"
SUCCESS=$((SUCCESS + 1))
else
echo "❌ Failed to update maintenance_work_mem"
fi
if [ $SUCCESS -eq 0 ]; then
echo
echo "❌ All configuration updates failed"
echo
echo "Manual steps:"
echo "1. Connect to PostgreSQL as superuser:"
echo " sudo -u postgres psql"
echo
echo "2. Run this command:"
echo "2. Run these commands:"
echo " ALTER SYSTEM SET max_locks_per_transaction = $RECOMMENDED_LOCKS;"
echo " ALTER SYSTEM SET work_mem = '$RECOMMENDED_WORK_MEM';"
echo " ALTER SYSTEM SET maintenance_work_mem = '$RECOMMENDED_MAINTENANCE_WORK_MEM';"
echo
exit 1
fi
echo
echo "✅ Applied $SUCCESS out of 3 configuration changes"
echo
echo "⚠️ IMPORTANT: PostgreSQL restart required!"
echo "────────────────────────────────────────────────────────────"
@ -73,12 +116,23 @@ echo " • systemd: sudo systemctl restart postgresql"
echo " • pg_ctl: sudo -u postgres pg_ctl restart -D /var/lib/postgresql/data"
echo " • service: sudo service postgresql restart"
echo
echo "Lock capacity after restart will be:"
echo " max_locks_per_transaction × (max_connections + max_prepared_transactions)"
echo " = $RECOMMENDED_LOCKS × (connections + prepared)"
echo "📊 Expected capacity after restart:"
echo "────────────────────────────────────────────────────────────"
echo " Lock capacity: max_locks_per_transaction × (max_connections + max_prepared)"
echo " = $RECOMMENDED_LOCKS × (connections + prepared)"
echo
echo " Work memory: $RECOMMENDED_WORK_MEM per query operation"
echo " Maintenance: $RECOMMENDED_MAINTENANCE_WORK_MEM for restore/vacuum/index"
echo
echo "After restarting, verify with:"
echo " psql -c 'SHOW max_locks_per_transaction;'"
echo " psql -c 'SHOW work_mem;'"
echo " psql -c 'SHOW maintenance_work_mem;'"
echo
echo "💡 Benefits:"
echo " ✓ Prevents 'out of shared memory' errors during restore"
echo " ✓ Reduces temp file usage (better performance)"
echo " ✓ Faster restore, vacuum, and index operations"
echo
echo "🔍 For comprehensive diagnostics, run:"
echo " ./diagnose_postgres_memory.sh"


@ -1372,6 +1372,27 @@ func (e *Engine) executeCommand(ctx context.Context, cmdArgs []string, outputFil
// NO GO BUFFERING - pg_dump writes directly to disk
cmd := exec.CommandContext(ctx, cmdArgs[0], cmdArgs[1:]...)
// Start heartbeat ticker for backup progress
backupStart := time.Now()
heartbeatCtx, cancelHeartbeat := context.WithCancel(ctx)
heartbeatTicker := time.NewTicker(5 * time.Second)
defer heartbeatTicker.Stop()
defer cancelHeartbeat()
go func() {
for {
select {
case <-heartbeatTicker.C:
elapsed := time.Since(backupStart)
if e.progress != nil {
e.progress.Update(fmt.Sprintf("Backing up database... (elapsed: %s)", formatDuration(elapsed)))
}
case <-heartbeatCtx.Done():
return
}
}
}()
// Set environment variables for database tools
cmd.Env = os.Environ()
if e.cfg.Password != "" {
@ -1598,3 +1619,23 @@ func formatBytes(bytes int64) string {
}
return fmt.Sprintf("%.1f %cB", float64(bytes)/float64(div), "KMGTPE"[exp])
}
// formatDuration formats a duration to human readable format (e.g., "3m 45s", "1h 23m", "45s")
func formatDuration(d time.Duration) string {
if d < time.Second {
return "0s"
}
hours := int(d.Hours())
minutes := int(d.Minutes()) % 60
seconds := int(d.Seconds()) % 60
if hours > 0 {
return fmt.Sprintf("%dh %dm", hours, minutes)
}
if minutes > 0 {
return fmt.Sprintf("%dm %ds", minutes, seconds)
}
return fmt.Sprintf("%ds", seconds)
}


@ -146,7 +146,7 @@ func (d *Dots) Start(message string) {
fmt.Fprint(d.writer, message)
go func() {
ticker := time.NewTicker(500 * time.Millisecond)
ticker := time.NewTicker(100 * time.Millisecond)
defer ticker.Stop()
count := 0


@ -292,6 +292,25 @@ func (e *Engine) restorePostgreSQLDump(ctx context.Context, archivePath, targetD
cmd := e.db.BuildRestoreCommand(targetDB, archivePath, opts)
// Start heartbeat ticker for restore progress
restoreStart := time.Now()
heartbeatCtx, cancelHeartbeat := context.WithCancel(ctx)
heartbeatTicker := time.NewTicker(5 * time.Second)
defer heartbeatTicker.Stop()
defer cancelHeartbeat()
go func() {
for {
select {
case <-heartbeatTicker.C:
elapsed := time.Since(restoreStart)
e.progress.Update(fmt.Sprintf("Restoring %s... (elapsed: %s)", targetDB, formatDuration(elapsed)))
case <-heartbeatCtx.Done():
return
}
}
}()
if compressed {
// For compressed dumps, decompress first
return e.executeRestoreWithDecompression(ctx, archivePath, cmd)
@ -820,6 +839,95 @@ func (e *Engine) previewRestore(archivePath, targetDB string, format ArchiveForm
return nil
}
// RestoreSingleFromCluster extracts and restores a single database from a cluster backup
func (e *Engine) RestoreSingleFromCluster(ctx context.Context, clusterArchivePath, dbName, targetDB string, cleanFirst, createIfMissing bool) error {
operation := e.log.StartOperation("Single Database Restore from Cluster")
// Validate and sanitize archive path
validArchivePath, pathErr := security.ValidateArchivePath(clusterArchivePath)
if pathErr != nil {
operation.Fail(fmt.Sprintf("Invalid archive path: %v", pathErr))
return fmt.Errorf("invalid archive path: %w", pathErr)
}
clusterArchivePath = validArchivePath
// Validate archive exists
if _, err := os.Stat(clusterArchivePath); os.IsNotExist(err) {
operation.Fail("Archive not found")
return fmt.Errorf("archive not found: %s", clusterArchivePath)
}
// Verify it's a cluster archive
format := DetectArchiveFormat(clusterArchivePath)
if format != FormatClusterTarGz {
operation.Fail("Not a cluster archive")
return fmt.Errorf("not a cluster archive: %s (format: %s)", clusterArchivePath, format)
}
// Create temporary directory for extraction
workDir := e.cfg.GetEffectiveWorkDir()
tempDir := filepath.Join(workDir, fmt.Sprintf(".extract_%d", time.Now().Unix()))
if err := os.MkdirAll(tempDir, 0755); err != nil {
operation.Fail("Failed to create temporary directory")
return fmt.Errorf("failed to create temp directory: %w", err)
}
defer os.RemoveAll(tempDir)
// Extract the specific database from cluster archive
e.log.Info("Extracting database from cluster backup", "database", dbName, "cluster", filepath.Base(clusterArchivePath))
e.progress.Start(fmt.Sprintf("Extracting '%s' from cluster backup", dbName))
extractedPath, err := ExtractDatabaseFromCluster(ctx, clusterArchivePath, dbName, tempDir, e.log, e.progress)
if err != nil {
e.progress.Fail(fmt.Sprintf("Extraction failed: %v", err))
operation.Fail(fmt.Sprintf("Extraction failed: %v", err))
return fmt.Errorf("failed to extract database: %w", err)
}
e.progress.Update(fmt.Sprintf("Extracted: %s", filepath.Base(extractedPath)))
e.log.Info("Database extracted successfully", "path", extractedPath)
// Now restore the extracted database file
e.progress.Update("Restoring database...")
// Create database if requested and it doesn't exist
if createIfMissing {
e.log.Info("Checking if target database exists", "database", targetDB)
if err := e.ensureDatabaseExists(ctx, targetDB); err != nil {
operation.Fail(fmt.Sprintf("Failed to create database: %v", err))
return fmt.Errorf("failed to create database '%s': %w", targetDB, err)
}
}
// Detect format of extracted file
extractedFormat := DetectArchiveFormat(extractedPath)
e.log.Info("Restoring extracted database", "format", extractedFormat, "target", targetDB)
// Restore based on format
var restoreErr error
switch extractedFormat {
case FormatPostgreSQLDump, FormatPostgreSQLDumpGz:
restoreErr = e.restorePostgreSQLDump(ctx, extractedPath, targetDB, extractedFormat == FormatPostgreSQLDumpGz, cleanFirst)
case FormatPostgreSQLSQL, FormatPostgreSQLSQLGz:
restoreErr = e.restorePostgreSQLSQL(ctx, extractedPath, targetDB, extractedFormat == FormatPostgreSQLSQLGz)
case FormatMySQLSQL, FormatMySQLSQLGz:
restoreErr = e.restoreMySQLSQL(ctx, extractedPath, targetDB, extractedFormat == FormatMySQLSQLGz)
default:
operation.Fail("Unsupported extracted format")
return fmt.Errorf("unsupported extracted format: %s", extractedFormat)
}
if restoreErr != nil {
e.progress.Fail(fmt.Sprintf("Restore failed: %v", restoreErr))
operation.Fail(fmt.Sprintf("Restore failed: %v", restoreErr))
return restoreErr
}
e.progress.Complete(fmt.Sprintf("Database '%s' restored from cluster backup", targetDB))
operation.Complete(fmt.Sprintf("Restored '%s' from cluster as '%s'", dbName, targetDB))
return nil
}
// RestoreCluster restores a full cluster from a tar.gz archive
// If preExtractedPath is non-empty, uses that directory instead of extracting archivePath
// This avoids double extraction when ValidateAndExtractCluster was already called
@ -1251,6 +1359,25 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string, preExtr
preserveOwnership := isSuperuser
isCompressedSQL := strings.HasSuffix(dumpFile, ".sql.gz")
// Start heartbeat ticker to show progress during long-running restore
heartbeatCtx, cancelHeartbeat := context.WithCancel(ctx)
heartbeatTicker := time.NewTicker(5 * time.Second)
go func() {
for {
select {
case <-heartbeatTicker.C:
elapsed := time.Since(dbRestoreStart)
mu.Lock()
statusMsg := fmt.Sprintf("Restoring %s (%d/%d) - elapsed: %s",
dbName, idx+1, totalDBs, formatDuration(elapsed))
e.progress.Update(statusMsg)
mu.Unlock()
case <-heartbeatCtx.Done():
return
}
}
}()
var restoreErr error
if isCompressedSQL {
mu.Lock()
@ -1264,6 +1391,10 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string, preExtr
restoreErr = e.restorePostgreSQLDumpWithOwnership(ctx, dumpFile, dbName, false, preserveOwnership)
}
// Stop heartbeat ticker
heartbeatTicker.Stop()
cancelHeartbeat()
if restoreErr != nil {
mu.Lock()
e.log.Error("Failed to restore database", "name", dbName, "file", dumpFile, "error", restoreErr)
@ -1506,9 +1637,9 @@ func (pr *progressReader) Read(p []byte) (n int, err error) {
n, err = pr.reader.Read(p)
pr.bytesRead += int64(n)
// Throttle progress reporting to every 100ms
// Throttle progress reporting to every 50ms for smoother updates
if pr.reportEvery == 0 {
pr.reportEvery = 100 * time.Millisecond
pr.reportEvery = 50 * time.Millisecond
}
if time.Since(pr.lastReport) > pr.reportEvery {
if pr.callback != nil {
@ -1522,6 +1653,25 @@ func (pr *progressReader) Read(p []byte) (n int, err error) {
// extractArchiveShell extracts using shell tar command (faster but no progress)
func (e *Engine) extractArchiveShell(ctx context.Context, archivePath, destDir string) error {
// Start heartbeat ticker for extraction progress
extractionStart := time.Now()
heartbeatCtx, cancelHeartbeat := context.WithCancel(ctx)
heartbeatTicker := time.NewTicker(5 * time.Second)
defer heartbeatTicker.Stop()
defer cancelHeartbeat()
go func() {
for {
select {
case <-heartbeatTicker.C:
elapsed := time.Since(extractionStart)
e.progress.Update(fmt.Sprintf("Extracting archive... (elapsed: %s)", formatDuration(elapsed)))
case <-heartbeatCtx.Done():
return
}
}
}()
cmd := exec.CommandContext(ctx, "tar", "-xzf", archivePath, "-C", destDir)
// Stream stderr to avoid memory issues - tar can produce lots of output for large archives
@ -2104,6 +2254,25 @@ func FormatBytes(bytes int64) string {
return fmt.Sprintf("%.1f %cB", float64(bytes)/float64(div), "KMGTPE"[exp])
}
// formatDuration formats a duration to human readable format (e.g., "3m 45s", "1h 23m", "45s")
func formatDuration(d time.Duration) string {
if d < time.Second {
return "0s"
}
hours := int(d.Hours())
minutes := int(d.Minutes()) % 60
seconds := int(d.Seconds()) % 60
if hours > 0 {
return fmt.Sprintf("%dh %dm", hours, minutes)
}
if minutes > 0 {
return fmt.Sprintf("%dm %ds", minutes, seconds)
}
return fmt.Sprintf("%ds", seconds)
}
// quickValidateSQLDump performs a fast validation of SQL dump files
// by checking for truncated COPY blocks. This catches corrupted dumps
// BEFORE attempting a full restore (which could waste 49+ minutes).

internal/restore/extract.go (new file, +344 lines)

@ -0,0 +1,344 @@
package restore
import (
"archive/tar"
"compress/gzip"
"context"
"fmt"
"io"
"os"
"path/filepath"
"sort"
"strings"
"dbbackup/internal/logger"
"dbbackup/internal/progress"
)
// DatabaseInfo represents metadata about a database in a cluster backup
type DatabaseInfo struct {
Name string
Filename string
Size int64
}
// ListDatabasesInCluster lists all databases in a cluster backup archive
func ListDatabasesInCluster(ctx context.Context, archivePath string, log logger.Logger) ([]DatabaseInfo, error) {
file, err := os.Open(archivePath)
if err != nil {
return nil, fmt.Errorf("cannot open archive: %w", err)
}
defer file.Close()
gz, err := gzip.NewReader(file)
if err != nil {
return nil, fmt.Errorf("not a valid gzip archive: %w", err)
}
defer gz.Close()
tarReader := tar.NewReader(gz)
databases := make([]DatabaseInfo, 0)
for {
select {
case <-ctx.Done():
return nil, ctx.Err()
default:
}
header, err := tarReader.Next()
if err == io.EOF {
break
}
if err != nil {
return nil, fmt.Errorf("error reading tar archive: %w", err)
}
// Look for files in dumps/ directory
if !header.FileInfo().IsDir() && strings.HasPrefix(header.Name, "dumps/") {
filename := filepath.Base(header.Name)
// Extract database name from filename (remove .dump, .dump.gz, .sql, .sql.gz)
dbName := filename
dbName = strings.TrimSuffix(dbName, ".dump.gz")
dbName = strings.TrimSuffix(dbName, ".dump")
dbName = strings.TrimSuffix(dbName, ".sql.gz")
dbName = strings.TrimSuffix(dbName, ".sql")
databases = append(databases, DatabaseInfo{
Name: dbName,
Filename: filename,
Size: header.Size,
})
}
}
// Sort by name for consistent output
sort.Slice(databases, func(i, j int) bool {
return databases[i].Name < databases[j].Name
})
if len(databases) == 0 {
return nil, fmt.Errorf("no databases found in cluster backup")
}
return databases, nil
}
// ExtractDatabaseFromCluster extracts a single database dump from cluster backup
func ExtractDatabaseFromCluster(ctx context.Context, archivePath, dbName, outputDir string, log logger.Logger, prog progress.Indicator) (string, error) {
file, err := os.Open(archivePath)
if err != nil {
return "", fmt.Errorf("cannot open archive: %w", err)
}
defer file.Close()
stat, err := file.Stat()
if err != nil {
return "", fmt.Errorf("cannot stat archive: %w", err)
}
archiveSize := stat.Size()
gz, err := gzip.NewReader(file)
if err != nil {
return "", fmt.Errorf("not a valid gzip archive: %w", err)
}
defer gz.Close()
tarReader := tar.NewReader(gz)
// Create output directory if needed
if err := os.MkdirAll(outputDir, 0755); err != nil {
return "", fmt.Errorf("cannot create output directory: %w", err)
}
targetPattern := fmt.Sprintf("dumps/%s.", dbName) // Match dbName.dump, dbName.sql, etc.
var extractedPath string
found := false
if prog != nil {
prog.Start(fmt.Sprintf("Extracting database: %s", dbName))
defer prog.Stop()
}
var bytesRead int64
ticker := make(chan struct{})
stopTicker := make(chan struct{})
go func() {
for {
select {
case <-ctx.Done():
return
case <-stopTicker:
return
case <-ticker:
if prog != nil && archiveSize > 0 {
percentage := float64(bytesRead) / float64(archiveSize) * 100
prog.Update(fmt.Sprintf("Scanning: %.1f%%", percentage))
}
}
}
}()
for {
select {
case <-ctx.Done():
close(stopTicker)
return "", ctx.Err()
default:
}
header, err := tarReader.Next()
if err == io.EOF {
break
}
if err != nil {
close(stopTicker)
return "", fmt.Errorf("error reading tar archive: %w", err)
}
bytesRead += header.Size
select {
case ticker <- struct{}{}:
default:
}
// Check if this is the database we're looking for
if strings.HasPrefix(header.Name, targetPattern) && !header.FileInfo().IsDir() {
filename := filepath.Base(header.Name)
extractedPath = filepath.Join(outputDir, filename)
// Extract the file
outFile, err := os.Create(extractedPath)
if err != nil {
close(stopTicker)
return "", fmt.Errorf("cannot create output file: %w", err)
}
if prog != nil {
prog.Update(fmt.Sprintf("Extracting: %s", filename))
}
written, err := io.Copy(outFile, tarReader)
outFile.Close()
if err != nil {
close(stopTicker)
return "", fmt.Errorf("extraction failed: %w", err)
}
log.Info("Database extracted successfully", "database", dbName, "size", formatBytes(written), "path", extractedPath)
found = true
break
}
}
close(stopTicker)
if !found {
return "", fmt.Errorf("database '%s' not found in cluster backup", dbName)
}
return extractedPath, nil
}
// ExtractMultipleDatabasesFromCluster extracts multiple databases from cluster backup
func ExtractMultipleDatabasesFromCluster(ctx context.Context, archivePath string, dbNames []string, outputDir string, log logger.Logger, prog progress.Indicator) (map[string]string, error) {
file, err := os.Open(archivePath)
if err != nil {
return nil, fmt.Errorf("cannot open archive: %w", err)
}
defer file.Close()
stat, err := file.Stat()
if err != nil {
return nil, fmt.Errorf("cannot stat archive: %w", err)
}
archiveSize := stat.Size()
gz, err := gzip.NewReader(file)
if err != nil {
return nil, fmt.Errorf("not a valid gzip archive: %w", err)
}
defer gz.Close()
tarReader := tar.NewReader(gz)
// Create output directory if needed
if err := os.MkdirAll(outputDir, 0755); err != nil {
return nil, fmt.Errorf("cannot create output directory: %w", err)
}
// Build lookup map
targetDBs := make(map[string]bool)
for _, dbName := range dbNames {
targetDBs[dbName] = true
}
extractedPaths := make(map[string]string)
if prog != nil {
prog.Start(fmt.Sprintf("Extracting %d databases", len(dbNames)))
defer prog.Stop()
}
var bytesRead int64
ticker := make(chan struct{})
stopTicker := make(chan struct{})
go func() {
for {
select {
case <-ctx.Done():
return
case <-stopTicker:
return
case <-ticker:
if prog != nil && archiveSize > 0 {
percentage := float64(bytesRead) / float64(archiveSize) * 100
prog.Update(fmt.Sprintf("Scanning: %.1f%% (%d/%d found)", percentage, len(extractedPaths), len(dbNames)))
}
}
}
}()
for {
select {
case <-ctx.Done():
close(stopTicker)
return nil, ctx.Err()
default:
}
header, err := tarReader.Next()
if err == io.EOF {
break
}
if err != nil {
close(stopTicker)
return nil, fmt.Errorf("error reading tar archive: %w", err)
}
bytesRead += header.Size
select {
case ticker <- struct{}{}:
default:
}
// Check if this is one of the databases we're looking for
if strings.HasPrefix(header.Name, "dumps/") && !header.FileInfo().IsDir() {
filename := filepath.Base(header.Name)
// Extract database name
dbName := filename
dbName = strings.TrimSuffix(dbName, ".dump.gz")
dbName = strings.TrimSuffix(dbName, ".dump")
dbName = strings.TrimSuffix(dbName, ".sql.gz")
dbName = strings.TrimSuffix(dbName, ".sql")
if targetDBs[dbName] {
extractedPath := filepath.Join(outputDir, filename)
// Extract the file
outFile, err := os.Create(extractedPath)
if err != nil {
close(stopTicker)
return nil, fmt.Errorf("cannot create output file for %s: %w", dbName, err)
}
if prog != nil {
prog.Update(fmt.Sprintf("Extracting: %s (%d/%d)", dbName, len(extractedPaths)+1, len(dbNames)))
}
written, err := io.Copy(outFile, tarReader)
outFile.Close()
if err != nil {
close(stopTicker)
return nil, fmt.Errorf("extraction failed for %s: %w", dbName, err)
}
log.Info("Database extracted", "database", dbName, "size", formatBytes(written))
extractedPaths[dbName] = extractedPath
// Stop early if we found all databases
if len(extractedPaths) == len(dbNames) {
break
}
}
}
}
close(stopTicker)
// Check if all requested databases were found
missing := make([]string, 0)
for _, dbName := range dbNames {
if _, found := extractedPaths[dbName]; !found {
missing = append(missing, dbName)
}
}
if len(missing) > 0 {
return extractedPaths, fmt.Errorf("databases not found in cluster backup: %s", strings.Join(missing, ", "))
}
return extractedPaths, nil
}


@ -214,14 +214,27 @@ func (m ArchiveBrowserModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
}
if m.mode == "restore-single" && selected.Format.IsClusterBackup() {
m.message = errorStyle.Render("[FAIL] Please select a single database backup")
return m, nil
// Cluster backup selected in single restore mode - offer to select individual database
clusterSelector := NewClusterDatabaseSelector(m.config, m.logger, m, m.ctx, selected, "single", false)
return clusterSelector, clusterSelector.Init()
}
// Open restore preview
preview := NewRestorePreview(m.config, m.logger, m.parent, m.ctx, selected, m.mode)
return preview, preview.Init()
}
case "s":
// Select single database from cluster (shortcut key)
if len(m.archives) > 0 && m.cursor < len(m.archives) {
selected := m.archives[m.cursor]
if selected.Format.IsClusterBackup() {
clusterSelector := NewClusterDatabaseSelector(m.config, m.logger, m, m.ctx, selected, "single", false)
return clusterSelector, clusterSelector.Init()
} else {
m.message = infoStyle.Render("💡 [s] only works with cluster backups")
}
}
case "i":
// Show detailed info
@ -351,7 +364,7 @@ func (m ArchiveBrowserModel) View() string {
s.WriteString(infoStyle.Render(fmt.Sprintf("Total: %d archive(s) | Selected: %d/%d",
len(m.archives), m.cursor+1, len(m.archives))))
s.WriteString("\n")
s.WriteString(infoStyle.Render("[KEY] ↑/↓: Navigate | Enter: Select | d: Diagnose | f: Filter | i: Info | Esc: Back"))
s.WriteString(infoStyle.Render("[KEY] ↑/↓: Navigate | Enter: Select | s: Single DB from Cluster | d: Diagnose | f: Filter | i: Info | Esc: Back"))
return s.String()
}


@ -0,0 +1,281 @@
package tui
import (
"context"
"fmt"
"strings"
tea "github.com/charmbracelet/bubbletea"
"dbbackup/internal/config"
"dbbackup/internal/logger"
"dbbackup/internal/restore"
)
// ClusterDatabaseSelectorModel for selecting databases from a cluster backup
type ClusterDatabaseSelectorModel struct {
config *config.Config
logger logger.Logger
parent tea.Model
ctx context.Context
archive ArchiveInfo
databases []restore.DatabaseInfo
cursor int
selected map[int]bool // Track multiple selections
loading bool
err error
title string
mode string // "single" or "multiple"
extractOnly bool // If true, extract without restoring
}
func NewClusterDatabaseSelector(cfg *config.Config, log logger.Logger, parent tea.Model, ctx context.Context, archive ArchiveInfo, mode string, extractOnly bool) ClusterDatabaseSelectorModel {
return ClusterDatabaseSelectorModel{
config: cfg,
logger: log,
parent: parent,
ctx: ctx,
archive: archive,
databases: nil,
selected: make(map[int]bool),
title: "Select Database(s) from Cluster Backup",
loading: true,
mode: mode,
extractOnly: extractOnly,
}
}
func (m ClusterDatabaseSelectorModel) Init() tea.Cmd {
return fetchClusterDatabases(m.ctx, m.archive, m.logger)
}
type clusterDatabaseListMsg struct {
databases []restore.DatabaseInfo
err error
}
func fetchClusterDatabases(ctx context.Context, archive ArchiveInfo, log logger.Logger) tea.Cmd {
return func() tea.Msg {
databases, err := restore.ListDatabasesInCluster(ctx, archive.Path, log)
if err != nil {
return clusterDatabaseListMsg{databases: nil, err: fmt.Errorf("failed to list databases: %w", err)}
}
return clusterDatabaseListMsg{databases: databases, err: nil}
}
}
func (m ClusterDatabaseSelectorModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
switch msg := msg.(type) {
case clusterDatabaseListMsg:
m.loading = false
if msg.err != nil {
m.err = msg.err
} else {
m.databases = msg.databases
if len(m.databases) > 0 && m.mode == "single" {
m.selected[0] = true // Pre-select first database in single mode
}
}
return m, nil
case tea.KeyMsg:
if m.loading {
return m, nil
}
switch msg.String() {
case "q", "esc":
// Return to parent
return m.parent, nil
case "up", "k":
if m.cursor > 0 {
m.cursor--
}
case "down", "j":
if m.cursor < len(m.databases)-1 {
m.cursor++
}
case " ": // Space to toggle selection (multiple mode)
if m.mode == "multiple" {
m.selected[m.cursor] = !m.selected[m.cursor]
} else {
// Single mode: clear all and select current
m.selected = make(map[int]bool)
m.selected[m.cursor] = true
}
case "enter":
if m.err != nil {
return m.parent, nil
}
if len(m.databases) == 0 {
return m.parent, nil
}
// Get selected database(s)
var selectedDBs []restore.DatabaseInfo
for i, selected := range m.selected {
if selected && i < len(m.databases) {
selectedDBs = append(selectedDBs, m.databases[i])
}
}
if len(selectedDBs) == 0 {
// No selection, use cursor position
selectedDBs = []restore.DatabaseInfo{m.databases[m.cursor]}
}
if m.extractOnly {
// TODO: Implement extraction flow
m.logger.Info("Extract-only mode not yet implemented in TUI")
return m.parent, nil
}
// For restore: proceed to restore preview/confirmation
if len(selectedDBs) == 1 {
// Single database restore from cluster
// Create a temporary archive info for the selected database
dbArchive := ArchiveInfo{
Name: selectedDBs[0].Filename,
Path: m.archive.Path, // Still use cluster archive path
Format: m.archive.Format,
Size: selectedDBs[0].Size,
Modified: m.archive.Modified,
DatabaseName: selectedDBs[0].Name,
}
preview := NewRestorePreview(m.config, m.logger, m.parent, m.ctx, dbArchive, "restore-cluster-single")
return preview, preview.Init()
} else {
// Multiple database restore - not yet implemented
m.logger.Info("Multiple database restore not yet implemented in TUI")
return m.parent, nil
}
}
}
return m, nil
}
func (m ClusterDatabaseSelectorModel) View() string {
if m.loading {
return TitleStyle.Render("Loading databases from cluster backup...") + "\n\nPlease wait..."
}
if m.err != nil {
var s strings.Builder
s.WriteString(TitleStyle.Render("Error"))
s.WriteString("\n\n")
s.WriteString(StatusErrorStyle.Render("Failed to list databases"))
s.WriteString("\n\n")
s.WriteString(m.err.Error())
s.WriteString("\n\n")
s.WriteString(StatusReadyStyle.Render("Press any key to go back"))
return s.String()
}
if len(m.databases) == 0 {
var s strings.Builder
s.WriteString(TitleStyle.Render("No Databases Found"))
s.WriteString("\n\n")
s.WriteString(StatusWarningStyle.Render("The cluster backup appears to be empty or invalid."))
s.WriteString("\n\n")
s.WriteString(StatusReadyStyle.Render("Press any key to go back"))
return s.String()
}
var s strings.Builder
// Title
s.WriteString(TitleStyle.Render(m.title))
s.WriteString("\n\n")
// Archive info
s.WriteString(LabelStyle.Render("Archive: "))
s.WriteString(m.archive.Name)
s.WriteString("\n")
s.WriteString(LabelStyle.Render("Databases: "))
s.WriteString(fmt.Sprintf("%d", len(m.databases)))
s.WriteString("\n\n")
// Instructions
if m.mode == "multiple" {
s.WriteString(StatusReadyStyle.Render("↑/↓: navigate • space: select/deselect • enter: confirm • q/esc: back"))
} else {
s.WriteString(StatusReadyStyle.Render("↑/↓: navigate • enter: select • q/esc: back"))
}
s.WriteString("\n\n")
// Database list
s.WriteString(ListHeaderStyle.Render("Available Databases:"))
s.WriteString("\n\n")
for i, db := range m.databases {
cursor := " "
if m.cursor == i {
cursor = "▶ "
}
checkbox := ""
if m.mode == "multiple" {
if m.selected[i] {
checkbox = "[✓] "
} else {
checkbox = "[ ] "
}
} else {
if m.selected[i] {
checkbox = "● "
} else {
checkbox = "○ "
}
}
sizeStr := formatBytes(db.Size)
line := fmt.Sprintf("%s%s%-40s %10s", cursor, checkbox, db.Name, sizeStr)
if m.cursor == i {
s.WriteString(ListSelectedStyle.Render(line))
} else {
s.WriteString(ListNormalStyle.Render(line))
}
s.WriteString("\n")
}
s.WriteString("\n")
// Selection summary
selectedCount := 0
var totalSize int64
for i, selected := range m.selected {
if selected && i < len(m.databases) {
selectedCount++
totalSize += m.databases[i].Size
}
}
if selectedCount > 0 {
s.WriteString(StatusSuccessStyle.Render(fmt.Sprintf("Selected: %d database(s), Total size: %s", selectedCount, formatBytes(totalSize))))
s.WriteString("\n")
}
return s.String()
}
// formatBytes formats byte count as human-readable string
func formatBytes(bytes int64) string {
const unit = 1024
if bytes < unit {
return fmt.Sprintf("%d B", bytes)
}
div, exp := int64(unit), 0
for n := bytes / unit; n >= unit; n /= unit {
div *= unit
exp++
}
return fmt.Sprintf("%.1f %cB", float64(bytes)/float64(div), "KMGTPE"[exp])
}


@ -430,6 +430,9 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
var restoreErr error
if restoreType == "restore-cluster" {
restoreErr = engine.RestoreCluster(ctx, archive.Path)
} else if restoreType == "restore-cluster-single" {
// Restore single database from cluster backup
restoreErr = engine.RestoreSingleFromCluster(ctx, archive.Path, targetDB, targetDB, cleanFirst, createIfMissing)
} else {
restoreErr = engine.RestoreSingle(ctx, archive.Path, targetDB, cleanFirst, createIfMissing)
}
@ -445,6 +448,8 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
result := fmt.Sprintf("Successfully restored from %s", archive.Name)
if restoreType == "restore-single" {
result = fmt.Sprintf("Successfully restored '%s' from %s", targetDB, archive.Name)
} else if restoreType == "restore-cluster-single" {
result = fmt.Sprintf("Successfully restored '%s' from cluster %s", targetDB, archive.Name)
} else if restoreType == "restore-cluster" && cleanClusterFirst {
result = fmt.Sprintf("Successfully restored cluster from %s (cleaned %d existing database(s) first)", archive.Name, len(existingDBs))
}
@ -658,13 +663,15 @@ func (m RestoreExecutionModel) View() string {
title := "[EXEC] Restoring Database"
if m.restoreType == "restore-cluster" {
title = "[EXEC] Restoring Cluster"
} else if m.restoreType == "restore-cluster-single" {
title = "[EXEC] Restoring Single Database from Cluster"
}
s.WriteString(titleStyle.Render(title))
s.WriteString("\n\n")
// Archive info
s.WriteString(fmt.Sprintf("Archive: %s\n", m.archive.Name))
if m.restoreType == "restore-single" {
if m.restoreType == "restore-single" || m.restoreType == "restore-cluster-single" {
s.WriteString(fmt.Sprintf("Target: %s\n", m.targetDB))
}
s.WriteString("\n")

release-notes-v3.42.77.md (new file, +77 lines)

@ -0,0 +1,77 @@
# dbbackup v3.42.77
## 🎯 New Feature: Single Database Extraction from Cluster Backups
Extract and restore individual databases from cluster backups without full cluster restoration!
### 🆕 New Flags
- **`--list-databases`**: List all databases in cluster backup with sizes
- **`--database <name>`**: Extract/restore a single database from cluster
- **`--databases "db1,db2,db3"`**: Extract multiple databases (comma-separated)
- **`--output-dir <path>`**: Extract to directory without restoring
- **`--target <name>`**: Rename database during restore
### 📖 Examples
```bash
# List databases in cluster backup
dbbackup restore cluster backup.tar.gz --list-databases
# Extract single database (no restore)
dbbackup restore cluster backup.tar.gz --database myapp --output-dir /tmp/extract
# Restore single database from cluster
dbbackup restore cluster backup.tar.gz --database myapp --confirm
# Restore with different name (testing)
dbbackup restore cluster backup.tar.gz --database myapp --target myapp_test --confirm
# Extract multiple databases
dbbackup restore cluster backup.tar.gz --databases "app1,app2,app3" --output-dir /tmp/extract
```
### 💡 Use Cases
- **Selective disaster recovery** - restore only affected databases
- **Database migration** - copy databases between clusters
- **Testing workflows** - restore with different names
- **Faster restores** - extract only what you need
- **Less disk space** - no need to extract entire cluster
### ⚙️ Technical Details
- Stream-based extraction with progress feedback
- Fast cluster archive scanning (no full extraction needed)
- Works with all cluster backup formats (.tar.gz)
- Compatible with existing cluster restore workflow
- Automatic format detection for extracted dumps
### 🖥️ TUI Support (Interactive Mode)
**New in this release**: Press the **`s`** key when viewing a cluster backup to select individual databases!
- Navigate cluster backups in TUI and press `s` for database selection
- Interactive database picker with size information
- Visual selection confirmation before restore
- Seamless integration with existing TUI workflows
**TUI Workflow:**
1. Launch TUI: `dbbackup` (no arguments)
2. Navigate to "Restore" → "Single Database"
3. Select cluster backup archive
4. Press `s` to show database list
5. Select database and confirm restore
## 📦 Installation
Download the binary for your platform below and make it executable:
```bash
chmod +x dbbackup_*
./dbbackup_* --version
```
## 🔍 Checksums
SHA256 checksums in `checksums.txt`.
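To verify a downloaded binary against the list (standard GNU coreutils usage; on macOS use `shasum -a 256 -c` instead):
```bash
# In the directory containing the binary and checksums.txt;
# --ignore-missing skips entries for binaries you did not download.
sha256sum --check --ignore-missing checksums.txt
```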