Compare commits

...

18 Commits

Author SHA1 Message Date
35a9a6e837 Release v3.42.80 - Default conservative profile for lock safety
All checks were successful
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m28s
CI/CD / Build & Release (push) Successful in 3m18s
2026-01-22 08:26:58 +01:00
82378be971 Build v3.42.79 - Lock exhaustion fix
Some checks failed
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m25s
CI/CD / Build & Release (push) Failing after 3m17s
2026-01-22 08:24:32 +01:00
9fec2c79f8 Fix: Change default restore profile to conservative to prevent lock exhaustion
- Set default --profile to 'conservative' (single-threaded)
- Prevents PostgreSQL lock table exhaustion on large database restores
- Users can still use --profile balanced or aggressive for faster restores
- Updated verify_postgres_locks.sh to reflect new default
2026-01-22 08:18:52 +01:00
ae34467b4a chore: Bump version to 3.42.79
All checks were successful
CI/CD / Test (push) Successful in 1m20s
CI/CD / Lint (push) Successful in 1m30s
CI/CD / Build & Release (push) Successful in 3m25s
2026-01-21 21:23:49 +01:00
379ca06146 fix: Clean up trailing whitespace in heartbeat implementation
All checks were successful
CI/CD / Test (push) Successful in 1m21s
CI/CD / Lint (push) Successful in 1m30s
CI/CD / Build & Release (push) Has been skipped
2026-01-21 21:19:38 +01:00
c9bca42f28 fix: Use tr -cd '0-9' to extract only digits
All checks were successful
CI/CD / Test (push) Successful in 1m20s
CI/CD / Lint (push) Successful in 1m28s
CI/CD / Build & Release (push) Has been skipped
- tr -cd '0-9' deletes all characters except digits
- More portable than grep -o with regex
- Works regardless of timing or formatting in output
- Limits output to 10 characters to guard against unexpectedly long or malformed values
2026-01-21 20:52:17 +01:00
c90ec1156e fix: Use --no-psqlrc and grep to extract clean numeric values
All checks were successful
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m30s
CI/CD / Build & Release (push) Has been skipped
- Disables .psqlrc completely with --no-psqlrc flag
- Uses grep -o '[0-9]\+' to extract only digits
- Takes first match with head -1
- Completely bypasses timing and formatting issues
2026-01-21 20:48:00 +01:00
23265a33a4 fix: Strip timing with awk to handle \timing on in psqlrc
All checks were successful
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m29s
CI/CD / Build & Release (push) Has been skipped
- Use awk to extract only first field (numeric value)
- Handles case where user has \timing on in .psqlrc
- Strips 'Time: X.XXX ms' completely
2026-01-21 20:41:48 +01:00
9b9abbfde7 fix: Strip psql timing info from lock verification script
Some checks failed
CI/CD / Test (push) Successful in 1m20s
CI/CD / Build & Release (push) Has been cancelled
CI/CD / Lint (push) Has been cancelled
- Use -t -A -q flags to get clean numeric values
- Prevents 'Time: 0.105 ms' from breaking calculations
- Add error handling for empty values
2026-01-21 20:39:00 +01:00
6282d66693 feat: Add PostgreSQL lock configuration verification script
All checks were successful
CI/CD / Test (push) Successful in 1m20s
CI/CD / Lint (push) Successful in 1m29s
CI/CD / Build & Release (push) Has been skipped
- Verifies if max_locks_per_transaction settings actually took effect
- Calculates total lock capacity from max_locks × (max_connections + max_prepared)
- Shows whether restart is needed or settings are insufficient
- Helps diagnose 'out of shared memory' errors during restore
2026-01-21 20:34:51 +01:00
4486a5d617 build: v3.42.78 with heartbeat progress for all operations
All checks were successful
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m27s
CI/CD / Build & Release (push) Successful in 3m16s
- Linux amd64/arm64/armv7
- macOS Intel/Apple Silicon
- Windows amd64/arm64
- BSD variants (FreeBSD, NetBSD, OpenBSD)
- All binaries include real-time progress heartbeat
- SHA256 checksums included
2026-01-21 14:03:52 +01:00
75dee1fff5 feat: Add heartbeat progress for extraction, single DB restore, and backups
All checks were successful
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m27s
CI/CD / Build & Release (push) Has been skipped
- Add heartbeat ticker to Phase 1 (tar extraction) in cluster restore
- Add heartbeat ticker to single database restore operations
- Add heartbeat ticker to backup operations (pg_dump, mysqldump)
- All heartbeats update every 5 seconds showing elapsed time
- Prevents frozen progress during long-running operations

Examples:
- 'Extracting archive... (elapsed: 2m 15s)'
- 'Restoring myapp... (elapsed: 5m 30s)'
- 'Backing up database... (elapsed: 8m 45s)'

Completes heartbeat implementation for all major blocking operations.
2026-01-21 14:00:31 +01:00
91d494537d feat: Add real-time progress heartbeat during Phase 2 cluster restore
All checks were successful
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Has been skipped
- Add heartbeat ticker that updates progress every 5 seconds
- Show elapsed time during database restore: 'Restoring myapp (1/5) - elapsed: 3m 45s'
- Prevents frozen progress bar during long-running pg_restore operations
- Implements Phase 1 of restore progress enhancement proposal

Fixes issue where progress bar appeared frozen during large database restores
because pg_restore is a blocking subprocess with no intermediate feedback.
2026-01-21 13:55:39 +01:00
8ffc1ba23c docs: Add TUI support to v3.42.77 release notes
All checks were successful
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Has been skipped
2026-01-21 13:51:09 +01:00
8e8045d8c0 build: Generate binaries and checksums for v3.42.77
Some checks failed
CI/CD / Lint (push) Has been cancelled
CI/CD / Build & Release (push) Has been cancelled
CI/CD / Test (push) Has been cancelled
2026-01-21 13:50:25 +01:00
0e94dcf384 docs: Update CHANGELOG with TUI support for single DB extraction
All checks were successful
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m30s
CI/CD / Build & Release (push) Has been skipped
2026-01-21 13:43:55 +01:00
33adfbdb38 feat: Add TUI support for single database restore from cluster backups
Some checks failed
CI/CD / Lint (push) Has been cancelled
CI/CD / Build & Release (push) Has been cancelled
CI/CD / Test (push) Has been cancelled
New TUI features:
- Press 's' on cluster backup to select individual databases
- New ClusterDatabaseSelector view with database list and sizes
- Single/multiple selection modes
- restore-cluster-single mode in RestoreExecutionModel
- Automatic format detection for extracted dumps

User flow:
1. Browse to cluster backup in archive browser
2. Press 's' (or select cluster in single restore mode)
3. See list of databases with sizes
4. Select database with arrow keys
5. Press Enter to restore

Benefits:
- Faster restores (extract only needed database)
- Less disk space usage
- Easy database migration/testing via TUI
- Consistent with CLI --database flag

Implementation:
- internal/tui/cluster_db_selector.go: New selector view
- archive_browser.go: Added 's' hotkey + smart cluster handling
- restore_exec.go: Support for restore-cluster-single mode
- Uses existing RestoreSingleFromCluster() backend
2026-01-21 13:43:17 +01:00
af34eaa073 feat: Add single database extraction from cluster backups
All checks were successful
CI/CD / Test (push) Successful in 1m21s
CI/CD / Lint (push) Successful in 1m28s
CI/CD / Build & Release (push) Has been skipped
- New --list-databases flag to list all databases in cluster backup
- New --database flag to extract/restore single database from cluster
- New --databases flag to extract multiple databases (comma-separated)
- New --output-dir flag to extract without restoring
- Support for database renaming with --target flag

Use cases:
- Selective disaster recovery (restore only affected databases)
- Database migration between clusters
- Testing workflows (restore with different names)
- Faster restores (extract only what you need)
- Less disk space usage

Implementation:
- ListDatabasesInCluster() - scan and list databases with sizes
- ExtractDatabaseFromCluster() - extract single database
- ExtractMultipleDatabasesFromCluster() - extract multiple databases
- RestoreSingleFromCluster() - extract and restore in one step

Examples:
  dbbackup restore cluster backup.tar.gz --list-databases
  dbbackup restore cluster backup.tar.gz --database myapp --confirm
  dbbackup restore cluster backup.tar.gz --database myapp --target test --confirm
  dbbackup restore cluster backup.tar.gz --databases "app1,app2" --output-dir /tmp
2026-01-21 13:34:24 +01:00
13 changed files with 807 additions and 32 deletions

View File

@ -7,21 +7,28 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
### Added - Single Database Extraction from Cluster Backups
### Added - Single Database Extraction from Cluster Backups (CLI + TUI)
- **Extract and restore individual databases from cluster backups** - selective restore without full cluster restoration
- **List databases**: `dbbackup restore cluster backup.tar.gz --list-databases`
- Shows all databases in cluster backup with sizes
- Fast scan without full extraction
- **Extract single database**: `dbbackup restore cluster backup.tar.gz --database myapp --output-dir /tmp/extract`
- Extracts only the specified database dump
- No restore, just file extraction
- **Restore single database from cluster**: `dbbackup restore cluster backup.tar.gz --database myapp --confirm`
- Extracts and restores only one database
- Much faster than full cluster restore when you only need one database
- **Rename on restore**: `dbbackup restore cluster backup.tar.gz --database myapp --target myapp_test --confirm`
- Restore with different database name (useful for testing)
- **Extract multiple databases**: `dbbackup restore cluster backup.tar.gz --databases "app1,app2,app3" --output-dir /tmp/extract`
- Comma-separated list of databases to extract
- **CLI Commands**:
- **List databases**: `dbbackup restore cluster backup.tar.gz --list-databases`
- Shows all databases in cluster backup with sizes
- Fast scan without full extraction
- **Extract single database**: `dbbackup restore cluster backup.tar.gz --database myapp --output-dir /tmp/extract`
- Extracts only the specified database dump
- No restore, just file extraction
- **Restore single database from cluster**: `dbbackup restore cluster backup.tar.gz --database myapp --confirm`
- Extracts and restores only one database
- Much faster than full cluster restore when you only need one database
- **Rename on restore**: `dbbackup restore cluster backup.tar.gz --database myapp --target myapp_test --confirm`
- Restore with different database name (useful for testing)
- **Extract multiple databases**: `dbbackup restore cluster backup.tar.gz --databases "app1,app2,app3" --output-dir /tmp/extract`
- Comma-separated list of databases to extract
- **TUI Support**:
- Press **'s'** on any cluster backup in archive browser to select individual databases
- New **ClusterDatabaseSelector** view shows all databases with sizes
- Navigate with arrow keys, select with Enter
- Automatic handling when cluster backup selected in single restore mode
- Full restore preview and confirmation workflow
- **Benefits**:
- Faster restores (extract only what you need)
- Less disk space usage during restore

View File

@ -0,0 +1,171 @@
# Restore Progress Bar Enhancement Proposal
## Problem
During Phase 2 cluster restore, the progress bar is not real-time because:
- `pg_restore` subprocess blocks until completion
- Progress updates only happen **before** each database restore starts
- No feedback during actual restore execution (which can take hours)
- Users see frozen progress bar during large database restores
## Root Cause
In `internal/restore/engine.go`:
- `executeRestoreCommand()` blocks on `cmd.Wait()`
- Progress is only reported at goroutine entry (line ~1315)
- No streaming progress during pg_restore execution
## Proposed Solutions
### Option 1: Parse pg_restore stderr for progress (RECOMMENDED)
**Pros:**
- Real-time feedback during restore
- Works with existing pg_restore
- No external tools needed
**Implementation:**
```go
// In executeRestoreCommand, modify stderr reader:
go func() {
scanner := bufio.NewScanner(stderr)
for scanner.Scan() {
line := scanner.Text()
// Parse pg_restore progress lines
// Format: "pg_restore: processing item 1234 TABLE public users"
if strings.Contains(line, "processing item") {
e.reportItemProgress(line) // Update progress bar
}
// Capture errors
if strings.Contains(line, "ERROR:") {
lastError = line
errorCount++
}
}
}()
```
**Add to RestoreCluster goroutine:**
```go
// Track sub-items within each database
var currentDBItems, totalDBItems int
e.setItemProgressCallback(func(current, total int) {
currentDBItems = current
totalDBItems = total
// Update TUI with sub-progress
e.reportDatabaseSubProgress(idx, totalDBs, dbName, current, total)
})
```
### Option 2: Verbose mode with line counting
**Pros:**
- More granular progress (row-level)
- Shows exact operation being performed
**Cons:**
- `--verbose` causes massive stderr output (OOM risk on huge DBs)
- Currently disabled for memory safety
- Requires careful memory management
### Option 3: Hybrid approach (BEST)
**Combine both:**
1. **Default**: Parse non-verbose pg_restore output for item counts
2. **Small DBs** (<500MB): Enable verbose for detailed progress
3. **Periodic updates**: Report progress every 5 seconds even without stderr changes
**Implementation:**
```go
// Add periodic progress ticker
progressTicker := time.NewTicker(5 * time.Second)
defer progressTicker.Stop()
go func() {
for {
select {
case <-progressTicker.C:
// Report heartbeat even if no stderr
e.reportHeartbeat(dbName, time.Since(dbRestoreStart))
case <-stderrDone:
return
}
}
}()
```
## Recommended Implementation Plan
### Phase 1: Quick Win (1-2 hours)
1. Add heartbeat ticker in cluster restore goroutines
2. Update TUI to show "Restoring database X... (elapsed: 3m 45s)"
3. No code changes to pg_restore wrapper
### Phase 2: Parse pg_restore Output (4-6 hours)
1. Parse stderr for "processing item" lines
2. Extract current/total item counts
3. Report sub-progress to TUI
4. Update progress bar calculation:
```
dbProgress = baseProgress + (itemsDone/totalItems) * dbWeightedPercent
```
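A minimal sketch of that calculation, assuming every database gets an equal share of the overall bar (the equal-weight split and the function name are assumptions for illustration, not taken from the engine):
```go
// computeClusterProgress returns overall cluster-restore progress in [0, 1],
// assuming each database contributes an equal share of the bar.
// dbIndex is zero-based; itemsDone/totalItems describe the database in progress.
func computeClusterProgress(dbIndex, totalDBs, itemsDone, totalItems int) float64 {
	if totalDBs <= 0 {
		return 0
	}
	dbWeight := 1.0 / float64(totalDBs)         // share of the bar per database
	baseProgress := float64(dbIndex) * dbWeight // databases already completed
	if totalItems <= 0 {
		return baseProgress // no item counts yet: only completed databases count
	}
	return baseProgress + (float64(itemsDone)/float64(totalItems))*dbWeight
}
```
For example, with 5 databases and item 1,234 of 5,678 in the first one, this reports roughly (1234/5678) × 0.2 ≈ 4.3% overall.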
### Phase 3: Smart Verbose Mode (optional)
1. Detect database size before restore
2. Enable verbose for DBs < 500MB
3. Parse verbose output for detailed progress
4. Automatic fallback to item-based for large DBs
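A rough sketch of the size gate described above; the 500MB threshold comes from this plan, while the function and constant names are illustrative:
```go
// verboseSizeLimit is the cutoff below which --verbose output is considered
// safe to parse; larger dumps fall back to item-based progress.
const verboseSizeLimit = 500 * 1024 * 1024 // 500MB

// progressMode picks the progress strategy for one database restore.
func progressMode(dumpSizeBytes int64) string {
	if dumpSizeBytes > 0 && dumpSizeBytes < verboseSizeLimit {
		return "verbose" // row-level progress, acceptable stderr volume
	}
	return "item-counts" // parse only "processing item" lines
}
```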
## Files to Modify
1. **internal/restore/engine.go**:
- `executeRestoreCommand()` - add progress parsing
- `RestoreCluster()` - add heartbeat ticker
- New: `reportItemProgress()`, `reportHeartbeat()`
2. **internal/tui/restore_exec.go**:
- Update `RestoreExecModel` to handle sub-progress
- Add "elapsed time" display during restore
- Show item counts: "Restoring tables... (234/567)"
3. **internal/progress/indicator.go**:
- Add `UpdateSubProgress(current, total int)` method
- Add `ReportHeartbeat(elapsed time.Duration)` method
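A rough sketch of what those two additions might look like; `internal/progress` is not part of this diff, so the `Indicator` struct and its fields here are stand-ins:
```go
import (
	"fmt"
	"io"
	"time"
)

// Indicator is a stand-in for the real progress type in internal/progress.
type Indicator struct {
	label string
	out   io.Writer
}

// UpdateSubProgress reports item-level progress within the current step,
// e.g. "Restoring tables... (234/567)".
func (i *Indicator) UpdateSubProgress(current, total int) {
	fmt.Fprintf(i.out, "\r%s (%d/%d)", i.label, current, total)
}

// ReportHeartbeat reports elapsed time when no finer-grained progress exists,
// e.g. "Restoring myapp... (elapsed: 3m45s)".
func (i *Indicator) ReportHeartbeat(elapsed time.Duration) {
	fmt.Fprintf(i.out, "\r%s (elapsed: %s)", i.label, elapsed.Round(time.Second))
}
```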
## Example Output
**Before (current):**
```
[====================] Phase 2/3: Restoring Databases (1/5)
Restoring database myapp...
[frozen for 30 minutes]
```
**After (with heartbeat):**
```
[====================] Phase 2/3: Restoring Databases (1/5)
Restoring database myapp... (elapsed: 4m 32s)
[updates every 5 seconds]
```
**After (with item parsing):**
```
[=========>-----------] Phase 2/3: Restoring Databases (1/5)
Restoring database myapp... (processing item 1,234/5,678) (elapsed: 4m 32s)
[smooth progress bar movement]
```
## Testing Strategy
1. Test with small DB (< 100MB) - verify heartbeat works
2. Test with large DB (> 10GB) - verify no OOM, heartbeat works
3. Test with BLOB-heavy DB - verify phased restore shows progress
4. Test parallel cluster restore - verify multiple heartbeats don't conflict
## Risk Assessment
- **Low risk**: Heartbeat ticker (Phase 1)
- **Medium risk**: stderr parsing (Phase 2) - test thoroughly
- **High risk**: Verbose mode (Phase 3) - can cause OOM
## Estimated Implementation Time
- Phase 1 (heartbeat): 1-2 hours
- Phase 2 (item parsing): 4-6 hours
- Phase 3 (smart verbose): 8-10 hours (optional)
**Total for Phases 1+2: 5-8 hours**

View File

@ -3,9 +3,9 @@
This directory contains pre-compiled binaries for the DB Backup Tool across multiple platforms and architectures.
## Build Information
- **Version**: 3.42.50
- **Build Time**: 2026-01-21_10:29:01_UTC
- **Git Commit**: ae8c8fd
- **Version**: 3.42.80
- **Build Time**: 2026-01-22_07:26:07_UTC
- **Git Commit**: 82378be
## Recent Updates (v1.1.0)
- ✅ Fixed TUI progress display with line-by-line output

View File

@ -41,10 +41,10 @@ var (
restoreSaveDebugLog string // Path to save debug log on failure
// Single database extraction from cluster flags
restoreDatabase string // Single database to extract/restore from cluster
restoreDatabases string // Comma-separated list of databases to extract
restoreOutputDir string // Extract to directory (no restore)
restoreListDBs bool // List databases in cluster backup
restoreDatabase string // Single database to extract/restore from cluster
restoreDatabases string // Comma-separated list of databases to extract
restoreOutputDir string // Extract to directory (no restore)
restoreListDBs bool // List databases in cluster backup
// Diagnose flags
diagnoseJSON bool
@ -332,7 +332,7 @@ func init() {
restoreClusterCmd.Flags().BoolVar(&restoreDryRun, "dry-run", false, "Show what would be done without executing")
restoreClusterCmd.Flags().BoolVar(&restoreForce, "force", false, "Skip safety checks and confirmations")
restoreClusterCmd.Flags().BoolVar(&restoreCleanCluster, "clean-cluster", false, "Drop all existing user databases before restore (disaster recovery)")
restoreClusterCmd.Flags().StringVar(&restoreProfile, "profile", "balanced", "Resource profile: conservative (--parallel=1, low memory), balanced, aggressive (max performance)")
restoreClusterCmd.Flags().StringVar(&restoreProfile, "profile", "conservative", "Resource profile: conservative (single-threaded, prevents lock issues), balanced (auto-detect), aggressive (max speed)")
restoreClusterCmd.Flags().IntVar(&restoreJobs, "jobs", 0, "Number of parallel decompression jobs (0 = auto, overrides profile)")
restoreClusterCmd.Flags().IntVar(&restoreParallelDBs, "parallel-dbs", 0, "Number of databases to restore in parallel (0 = use profile, 1 = sequential, -1 = auto-detect, overrides profile)")
restoreClusterCmd.Flags().StringVar(&restoreWorkdir, "workdir", "", "Working directory for extraction (use when system disk is small, e.g. /mnt/storage/restore_tmp)")
@ -746,7 +746,7 @@ func runListDatabases(archivePath string) error {
fmt.Printf(" - %-30s (%s)\n", db.Name, sizeStr)
totalSize += db.Size
}
fmt.Printf("\nTotal: %s across %d database(s)\n", formatSize(totalSize), len(databases))
return nil
}

View File

@ -1372,6 +1372,27 @@ func (e *Engine) executeCommand(ctx context.Context, cmdArgs []string, outputFil
// NO GO BUFFERING - pg_dump writes directly to disk
cmd := exec.CommandContext(ctx, cmdArgs[0], cmdArgs[1:]...)
// Start heartbeat ticker for backup progress
backupStart := time.Now()
heartbeatCtx, cancelHeartbeat := context.WithCancel(ctx)
heartbeatTicker := time.NewTicker(5 * time.Second)
defer heartbeatTicker.Stop()
defer cancelHeartbeat()
go func() {
for {
select {
case <-heartbeatTicker.C:
elapsed := time.Since(backupStart)
if e.progress != nil {
e.progress.Update(fmt.Sprintf("Backing up database... (elapsed: %s)", formatDuration(elapsed)))
}
case <-heartbeatCtx.Done():
return
}
}
}()
// Set environment variables for database tools
cmd.Env = os.Environ()
if e.cfg.Password != "" {
@ -1598,3 +1619,22 @@ func formatBytes(bytes int64) string {
}
return fmt.Sprintf("%.1f %cB", float64(bytes)/float64(div), "KMGTPE"[exp])
}
// formatDuration formats a duration to human readable format (e.g., "3m 45s", "1h 23m", "45s")
func formatDuration(d time.Duration) string {
if d < time.Second {
return "0s"
}
hours := int(d.Hours())
minutes := int(d.Minutes()) % 60
seconds := int(d.Seconds()) % 60
if hours > 0 {
return fmt.Sprintf("%dh %dm", hours, minutes)
}
if minutes > 0 {
return fmt.Sprintf("%dm %ds", minutes, seconds)
}
return fmt.Sprintf("%ds", seconds)
}

View File

@ -292,6 +292,25 @@ func (e *Engine) restorePostgreSQLDump(ctx context.Context, archivePath, targetD
cmd := e.db.BuildRestoreCommand(targetDB, archivePath, opts)
// Start heartbeat ticker for restore progress
restoreStart := time.Now()
heartbeatCtx, cancelHeartbeat := context.WithCancel(ctx)
heartbeatTicker := time.NewTicker(5 * time.Second)
defer heartbeatTicker.Stop()
defer cancelHeartbeat()
go func() {
for {
select {
case <-heartbeatTicker.C:
elapsed := time.Since(restoreStart)
e.progress.Update(fmt.Sprintf("Restoring %s... (elapsed: %s)", targetDB, formatDuration(elapsed)))
case <-heartbeatCtx.Done():
return
}
}
}()
if compressed {
// For compressed dumps, decompress first
return e.executeRestoreWithDecompression(ctx, archivePath, cmd)
@ -1340,6 +1359,25 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string, preExtr
preserveOwnership := isSuperuser
isCompressedSQL := strings.HasSuffix(dumpFile, ".sql.gz")
// Start heartbeat ticker to show progress during long-running restore
heartbeatCtx, cancelHeartbeat := context.WithCancel(ctx)
heartbeatTicker := time.NewTicker(5 * time.Second)
go func() {
for {
select {
case <-heartbeatTicker.C:
elapsed := time.Since(dbRestoreStart)
mu.Lock()
statusMsg := fmt.Sprintf("Restoring %s (%d/%d) - elapsed: %s",
dbName, idx+1, totalDBs, formatDuration(elapsed))
e.progress.Update(statusMsg)
mu.Unlock()
case <-heartbeatCtx.Done():
return
}
}
}()
var restoreErr error
if isCompressedSQL {
mu.Lock()
@ -1353,6 +1391,10 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string, preExtr
restoreErr = e.restorePostgreSQLDumpWithOwnership(ctx, dumpFile, dbName, false, preserveOwnership)
}
// Stop heartbeat ticker
heartbeatTicker.Stop()
cancelHeartbeat()
if restoreErr != nil {
mu.Lock()
e.log.Error("Failed to restore database", "name", dbName, "file", dumpFile, "error", restoreErr)
@ -1611,6 +1653,25 @@ func (pr *progressReader) Read(p []byte) (n int, err error) {
// extractArchiveShell extracts using shell tar command (faster but no progress)
func (e *Engine) extractArchiveShell(ctx context.Context, archivePath, destDir string) error {
// Start heartbeat ticker for extraction progress
extractionStart := time.Now()
heartbeatCtx, cancelHeartbeat := context.WithCancel(ctx)
heartbeatTicker := time.NewTicker(5 * time.Second)
defer heartbeatTicker.Stop()
defer cancelHeartbeat()
go func() {
for {
select {
case <-heartbeatTicker.C:
elapsed := time.Since(extractionStart)
e.progress.Update(fmt.Sprintf("Extracting archive... (elapsed: %s)", formatDuration(elapsed)))
case <-heartbeatCtx.Done():
return
}
}
}()
cmd := exec.CommandContext(ctx, "tar", "-xzf", archivePath, "-C", destDir)
// Stream stderr to avoid memory issues - tar can produce lots of output for large archives
@ -2193,6 +2254,25 @@ func FormatBytes(bytes int64) string {
return fmt.Sprintf("%.1f %cB", float64(bytes)/float64(div), "KMGTPE"[exp])
}
// formatDuration formats a duration to human readable format (e.g., "3m 45s", "1h 23m", "45s")
func formatDuration(d time.Duration) string {
if d < time.Second {
return "0s"
}
hours := int(d.Hours())
minutes := int(d.Minutes()) % 60
seconds := int(d.Seconds()) % 60
if hours > 0 {
return fmt.Sprintf("%dh %dm", hours, minutes)
}
if minutes > 0 {
return fmt.Sprintf("%dm %ds", minutes, seconds)
}
return fmt.Sprintf("%ds", seconds)
}
// quickValidateSQLDump performs a fast validation of SQL dump files
// by checking for truncated COPY blocks. This catches corrupted dumps
// BEFORE attempting a full restore (which could waste 49+ minutes).

View File

@ -57,7 +57,7 @@ func ListDatabasesInCluster(ctx context.Context, archivePath string, log logger.
// Look for files in dumps/ directory
if !header.FileInfo().IsDir() && strings.HasPrefix(header.Name, "dumps/") {
filename := filepath.Base(header.Name)
// Extract database name from filename (remove .dump, .dump.gz, .sql, .sql.gz)
dbName := filename
dbName = strings.TrimSuffix(dbName, ".dump.gz")
@ -106,7 +106,7 @@ func ExtractDatabaseFromCluster(ctx context.Context, archivePath, dbName, output
defer gz.Close()
tarReader := tar.NewReader(gz)
// Create output directory if needed
if err := os.MkdirAll(outputDir, 0755); err != nil {
return "", fmt.Errorf("cannot create output directory: %w", err)
@ -222,7 +222,7 @@ func ExtractMultipleDatabasesFromCluster(ctx context.Context, archivePath string
defer gz.Close()
tarReader := tar.NewReader(gz)
// Create output directory if needed
if err := os.MkdirAll(outputDir, 0755); err != nil {
return nil, fmt.Errorf("cannot create output directory: %w", err)
@ -286,7 +286,7 @@ func ExtractMultipleDatabasesFromCluster(ctx context.Context, archivePath string
// Check if this is one of the databases we're looking for
if strings.HasPrefix(header.Name, "dumps/") && !header.FileInfo().IsDir() {
filename := filepath.Base(header.Name)
// Extract database name
dbName := filename
dbName = strings.TrimSuffix(dbName, ".dump.gz")

View File

@ -214,8 +214,9 @@ func (m ArchiveBrowserModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
}
if m.mode == "restore-single" && selected.Format.IsClusterBackup() {
m.message = errorStyle.Render("[FAIL] Please select a single database backup")
return m, nil
// Cluster backup selected in single restore mode - offer to select individual database
clusterSelector := NewClusterDatabaseSelector(m.config, m.logger, m, m.ctx, selected, "single", false)
return clusterSelector, clusterSelector.Init()
}
// Open restore preview
@ -223,6 +224,18 @@ func (m ArchiveBrowserModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
return preview, preview.Init()
}
case "s":
// Select single database from cluster (shortcut key)
if len(m.archives) > 0 && m.cursor < len(m.archives) {
selected := m.archives[m.cursor]
if selected.Format.IsClusterBackup() {
clusterSelector := NewClusterDatabaseSelector(m.config, m.logger, m, m.ctx, selected, "single", false)
return clusterSelector, clusterSelector.Init()
} else {
m.message = infoStyle.Render("💡 [s] only works with cluster backups")
}
}
case "i":
// Show detailed info
if len(m.archives) > 0 && m.cursor < len(m.archives) {
@ -351,7 +364,7 @@ func (m ArchiveBrowserModel) View() string {
s.WriteString(infoStyle.Render(fmt.Sprintf("Total: %d archive(s) | Selected: %d/%d",
len(m.archives), m.cursor+1, len(m.archives))))
s.WriteString("\n")
s.WriteString(infoStyle.Render("[KEY] ↑/↓: Navigate | Enter: Select | d: Diagnose | f: Filter | i: Info | Esc: Back"))
s.WriteString(infoStyle.Render("[KEY] ↑/↓: Navigate | Enter: Select | s: Single DB from Cluster | d: Diagnose | f: Filter | i: Info | Esc: Back"))
return s.String()
}

View File

@ -0,0 +1,281 @@
package tui
import (
"context"
"fmt"
"strings"
tea "github.com/charmbracelet/bubbletea"
"dbbackup/internal/config"
"dbbackup/internal/logger"
"dbbackup/internal/restore"
)
// ClusterDatabaseSelectorModel for selecting databases from a cluster backup
type ClusterDatabaseSelectorModel struct {
config *config.Config
logger logger.Logger
parent tea.Model
ctx context.Context
archive ArchiveInfo
databases []restore.DatabaseInfo
cursor int
selected map[int]bool // Track multiple selections
loading bool
err error
title string
mode string // "single" or "multiple"
extractOnly bool // If true, extract without restoring
}
func NewClusterDatabaseSelector(cfg *config.Config, log logger.Logger, parent tea.Model, ctx context.Context, archive ArchiveInfo, mode string, extractOnly bool) ClusterDatabaseSelectorModel {
return ClusterDatabaseSelectorModel{
config: cfg,
logger: log,
parent: parent,
ctx: ctx,
archive: archive,
databases: nil,
selected: make(map[int]bool),
title: "Select Database(s) from Cluster Backup",
loading: true,
mode: mode,
extractOnly: extractOnly,
}
}
func (m ClusterDatabaseSelectorModel) Init() tea.Cmd {
return fetchClusterDatabases(m.ctx, m.archive, m.logger)
}
type clusterDatabaseListMsg struct {
databases []restore.DatabaseInfo
err error
}
func fetchClusterDatabases(ctx context.Context, archive ArchiveInfo, log logger.Logger) tea.Cmd {
return func() tea.Msg {
databases, err := restore.ListDatabasesInCluster(ctx, archive.Path, log)
if err != nil {
return clusterDatabaseListMsg{databases: nil, err: fmt.Errorf("failed to list databases: %w", err)}
}
return clusterDatabaseListMsg{databases: databases, err: nil}
}
}
func (m ClusterDatabaseSelectorModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
switch msg := msg.(type) {
case clusterDatabaseListMsg:
m.loading = false
if msg.err != nil {
m.err = msg.err
} else {
m.databases = msg.databases
if len(m.databases) > 0 && m.mode == "single" {
m.selected[0] = true // Pre-select first database in single mode
}
}
return m, nil
case tea.KeyMsg:
if m.loading {
return m, nil
}
switch msg.String() {
case "q", "esc":
// Return to parent
return m.parent, nil
case "up", "k":
if m.cursor > 0 {
m.cursor--
}
case "down", "j":
if m.cursor < len(m.databases)-1 {
m.cursor++
}
case " ": // Space to toggle selection (multiple mode)
if m.mode == "multiple" {
m.selected[m.cursor] = !m.selected[m.cursor]
} else {
// Single mode: clear all and select current
m.selected = make(map[int]bool)
m.selected[m.cursor] = true
}
case "enter":
if m.err != nil {
return m.parent, nil
}
if len(m.databases) == 0 {
return m.parent, nil
}
// Get selected database(s)
var selectedDBs []restore.DatabaseInfo
for i, selected := range m.selected {
if selected && i < len(m.databases) {
selectedDBs = append(selectedDBs, m.databases[i])
}
}
if len(selectedDBs) == 0 {
// No selection, use cursor position
selectedDBs = []restore.DatabaseInfo{m.databases[m.cursor]}
}
if m.extractOnly {
// TODO: Implement extraction flow
m.logger.Info("Extract-only mode not yet implemented in TUI")
return m.parent, nil
}
// For restore: proceed to restore preview/confirmation
if len(selectedDBs) == 1 {
// Single database restore from cluster
// Create a temporary archive info for the selected database
dbArchive := ArchiveInfo{
Name: selectedDBs[0].Filename,
Path: m.archive.Path, // Still use cluster archive path
Format: m.archive.Format,
Size: selectedDBs[0].Size,
Modified: m.archive.Modified,
DatabaseName: selectedDBs[0].Name,
}
preview := NewRestorePreview(m.config, m.logger, m.parent, m.ctx, dbArchive, "restore-cluster-single")
return preview, preview.Init()
} else {
// Multiple database restore - not yet implemented
m.logger.Info("Multiple database restore not yet implemented in TUI")
return m.parent, nil
}
}
}
return m, nil
}
func (m ClusterDatabaseSelectorModel) View() string {
if m.loading {
return TitleStyle.Render("Loading databases from cluster backup...") + "\n\nPlease wait..."
}
if m.err != nil {
var s strings.Builder
s.WriteString(TitleStyle.Render("Error"))
s.WriteString("\n\n")
s.WriteString(StatusErrorStyle.Render("Failed to list databases"))
s.WriteString("\n\n")
s.WriteString(m.err.Error())
s.WriteString("\n\n")
s.WriteString(StatusReadyStyle.Render("Press any key to go back"))
return s.String()
}
if len(m.databases) == 0 {
var s strings.Builder
s.WriteString(TitleStyle.Render("No Databases Found"))
s.WriteString("\n\n")
s.WriteString(StatusWarningStyle.Render("The cluster backup appears to be empty or invalid."))
s.WriteString("\n\n")
s.WriteString(StatusReadyStyle.Render("Press any key to go back"))
return s.String()
}
var s strings.Builder
// Title
s.WriteString(TitleStyle.Render(m.title))
s.WriteString("\n\n")
// Archive info
s.WriteString(LabelStyle.Render("Archive: "))
s.WriteString(m.archive.Name)
s.WriteString("\n")
s.WriteString(LabelStyle.Render("Databases: "))
s.WriteString(fmt.Sprintf("%d", len(m.databases)))
s.WriteString("\n\n")
// Instructions
if m.mode == "multiple" {
s.WriteString(StatusReadyStyle.Render("↑/↓: navigate • space: select/deselect • enter: confirm • q/esc: back"))
} else {
s.WriteString(StatusReadyStyle.Render("↑/↓: navigate • enter: select • q/esc: back"))
}
s.WriteString("\n\n")
// Database list
s.WriteString(ListHeaderStyle.Render("Available Databases:"))
s.WriteString("\n\n")
for i, db := range m.databases {
cursor := " "
if m.cursor == i {
cursor = "▶ "
}
checkbox := ""
if m.mode == "multiple" {
if m.selected[i] {
checkbox = "[✓] "
} else {
checkbox = "[ ] "
}
} else {
if m.selected[i] {
checkbox = "● "
} else {
checkbox = "○ "
}
}
sizeStr := formatBytes(db.Size)
line := fmt.Sprintf("%s%s%-40s %10s", cursor, checkbox, db.Name, sizeStr)
if m.cursor == i {
s.WriteString(ListSelectedStyle.Render(line))
} else {
s.WriteString(ListNormalStyle.Render(line))
}
s.WriteString("\n")
}
s.WriteString("\n")
// Selection summary
selectedCount := 0
var totalSize int64
for i, selected := range m.selected {
if selected && i < len(m.databases) {
selectedCount++
totalSize += m.databases[i].Size
}
}
if selectedCount > 0 {
s.WriteString(StatusSuccessStyle.Render(fmt.Sprintf("Selected: %d database(s), Total size: %s", selectedCount, formatBytes(totalSize))))
s.WriteString("\n")
}
return s.String()
}
// formatBytes formats byte count as human-readable string
func formatBytes(bytes int64) string {
const unit = 1024
if bytes < unit {
return fmt.Sprintf("%d B", bytes)
}
div, exp := int64(unit), 0
for n := bytes / unit; n >= unit; n /= unit {
div *= unit
exp++
}
return fmt.Sprintf("%.1f %cB", float64(bytes)/float64(div), "KMGTPE"[exp])
}

View File

@ -430,6 +430,9 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
var restoreErr error
if restoreType == "restore-cluster" {
restoreErr = engine.RestoreCluster(ctx, archive.Path)
} else if restoreType == "restore-cluster-single" {
// Restore single database from cluster backup
restoreErr = engine.RestoreSingleFromCluster(ctx, archive.Path, targetDB, targetDB, cleanFirst, createIfMissing)
} else {
restoreErr = engine.RestoreSingle(ctx, archive.Path, targetDB, cleanFirst, createIfMissing)
}
@ -445,6 +448,8 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
result := fmt.Sprintf("Successfully restored from %s", archive.Name)
if restoreType == "restore-single" {
result = fmt.Sprintf("Successfully restored '%s' from %s", targetDB, archive.Name)
} else if restoreType == "restore-cluster-single" {
result = fmt.Sprintf("Successfully restored '%s' from cluster %s", targetDB, archive.Name)
} else if restoreType == "restore-cluster" && cleanClusterFirst {
result = fmt.Sprintf("Successfully restored cluster from %s (cleaned %d existing database(s) first)", archive.Name, len(existingDBs))
}
@ -658,13 +663,15 @@ func (m RestoreExecutionModel) View() string {
title := "[EXEC] Restoring Database"
if m.restoreType == "restore-cluster" {
title = "[EXEC] Restoring Cluster"
} else if m.restoreType == "restore-cluster-single" {
title = "[EXEC] Restoring Single Database from Cluster"
}
s.WriteString(titleStyle.Render(title))
s.WriteString("\n\n")
// Archive info
s.WriteString(fmt.Sprintf("Archive: %s\n", m.archive.Name))
if m.restoreType == "restore-single" {
if m.restoreType == "restore-single" || m.restoreType == "restore-cluster-single" {
s.WriteString(fmt.Sprintf("Target: %s\n", m.targetDB))
}
s.WriteString("\n")

View File

@ -16,7 +16,7 @@ import (
// Build information (set by ldflags)
var (
version = "3.42.50"
version = "3.42.80"
buildTime = "unknown"
gitCommit = "unknown"
)

release-notes-v3.42.77.md (new file, +77 lines)
View File

@ -0,0 +1,77 @@
# dbbackup v3.42.77
## 🎯 New Feature: Single Database Extraction from Cluster Backups
Extract and restore individual databases from cluster backups without full cluster restoration!
### 🆕 New Flags
- **`--list-databases`**: List all databases in cluster backup with sizes
- **`--database <name>`**: Extract/restore a single database from cluster
- **`--databases "db1,db2,db3"`**: Extract multiple databases (comma-separated)
- **`--output-dir <path>`**: Extract to directory without restoring
- **`--target <name>`**: Rename database during restore
### 📖 Examples
```bash
# List databases in cluster backup
dbbackup restore cluster backup.tar.gz --list-databases
# Extract single database (no restore)
dbbackup restore cluster backup.tar.gz --database myapp --output-dir /tmp/extract
# Restore single database from cluster
dbbackup restore cluster backup.tar.gz --database myapp --confirm
# Restore with different name (testing)
dbbackup restore cluster backup.tar.gz --database myapp --target myapp_test --confirm
# Extract multiple databases
dbbackup restore cluster backup.tar.gz --databases "app1,app2,app3" --output-dir /tmp/extract
```
### 💡 Use Cases
- **Selective disaster recovery** - restore only affected databases
- **Database migration** - copy databases between clusters
- **Testing workflows** - restore with different names
- **Faster restores** - extract only what you need
- **Less disk space** - no need to extract entire cluster
### ⚙️ Technical Details
- Stream-based extraction with progress feedback
- Fast cluster archive scanning (no full extraction needed; see the sketch after this list)
- Works with all cluster backup formats (.tar.gz)
- Compatible with existing cluster restore workflow
- Automatic format detection for extracted dumps
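As a rough illustration of the archive scanning mentioned above: the database list can be built by streaming the tar index and reading headers only, without writing anything to disk. This sketch is illustrative and not the tool's actual implementation:
```go
package main

import (
	"archive/tar"
	"compress/gzip"
	"fmt"
	"io"
	"os"
	"strings"
)

// listClusterDumps prints every dump under dumps/ in a cluster backup.
// It streams through the archive and inspects tar headers only;
// no file contents are extracted to disk.
func listClusterDumps(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	gz, err := gzip.NewReader(f)
	if err != nil {
		return err
	}
	defer gz.Close()

	tr := tar.NewReader(gz)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
		if !hdr.FileInfo().IsDir() && strings.HasPrefix(hdr.Name, "dumps/") {
			fmt.Printf("%-50s %d bytes\n", hdr.Name, hdr.Size)
		}
	}
}

func main() {
	if err := listClusterDumps("backup.tar.gz"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```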
### 🖥️ TUI Support (Interactive Mode)
**New in this release**: Press **`s`** key when viewing a cluster backup to select individual databases!
- Navigate cluster backups in TUI and press `s` for database selection
- Interactive database picker with size information
- Visual selection confirmation before restore
- Seamless integration with existing TUI workflows
**TUI Workflow:**
1. Launch TUI: `dbbackup` (no arguments)
2. Navigate to "Restore" → "Single Database"
3. Select cluster backup archive
4. Press `s` to show database list
5. Select database and confirm restore
## 📦 Installation
Download the binary for your platform below and make it executable:
```bash
chmod +x dbbackup_*
./dbbackup_* --version
```
## 🔍 Checksums
SHA256 checksums in `checksums.txt`.

verify_postgres_locks.sh (new executable file, +99 lines)
View File

@ -0,0 +1,99 @@
#!/bin/bash
#
# PostgreSQL Lock Configuration Check & Restore Guidance
#
echo "════════════════════════════════════════════════════════════"
echo " PostgreSQL Lock Configuration & Restore Strategy"
echo "════════════════════════════════════════════════════════════"
echo
# Get values - extract ONLY digits, remove all non-numeric chars
LOCKS=$(sudo -u postgres psql --no-psqlrc -t -A -c "SHOW max_locks_per_transaction;" 2>/dev/null | tr -cd '0-9' | head -c 10)
CONNS=$(sudo -u postgres psql --no-psqlrc -t -A -c "SHOW max_connections;" 2>/dev/null | tr -cd '0-9' | head -c 10)
PREPARED=$(sudo -u postgres psql --no-psqlrc -t -A -c "SHOW max_prepared_transactions;" 2>/dev/null | tr -cd '0-9' | head -c 10)
if [ -z "$LOCKS" ]; then
LOCKS=$(psql --no-psqlrc -t -A -c "SHOW max_locks_per_transaction;" 2>/dev/null | tr -cd '0-9' | head -c 10)
CONNS=$(psql --no-psqlrc -t -A -c "SHOW max_connections;" 2>/dev/null | tr -cd '0-9' | head -c 10)
PREPARED=$(psql --no-psqlrc -t -A -c "SHOW max_prepared_transactions;" 2>/dev/null | tr -cd '0-9' | head -c 10)
fi
if [ -z "$LOCKS" ] || [ -z "$CONNS" ]; then
echo "❌ ERROR: Could not retrieve PostgreSQL settings"
echo " Ensure PostgreSQL is running and accessible"
exit 1
fi
echo "📊 Current Configuration:"
echo "────────────────────────────────────────────────────────────"
echo " max_locks_per_transaction: $LOCKS"
echo " max_connections: $CONNS"
echo " max_prepared_transactions: ${PREPARED:-0}"
echo
# Calculate capacity
PREPARED=${PREPARED:-0}
CAPACITY=$((LOCKS * (CONNS + PREPARED)))
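# Worked example (typical PostgreSQL defaults; values here are illustrative):
#   max_locks_per_transaction=64, max_connections=100, max_prepared_transactions=0
#   capacity = 64 * (100 + 0) = 6400 lock slots shared across all sessions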
echo " Total Lock Capacity: $CAPACITY locks"
echo
# Determine status
if [ "$LOCKS" -lt 2048 ]; then
STATUS="❌ CRITICAL"
RECOMMENDATION="increase_locks"
elif [ "$LOCKS" -lt 4096 ]; then
STATUS="⚠️ LOW"
RECOMMENDATION="single_threaded"
else
STATUS="✅ OK"
RECOMMENDATION="single_threaded"
fi
echo "Status: $STATUS (locks=$LOCKS, capacity=$CAPACITY)"
echo
echo "════════════════════════════════════════════════════════════"
echo " 🎯 RECOMMENDED RESTORE COMMAND"
echo "════════════════════════════════════════════════════════════"
echo
if [ "$RECOMMENDATION" = "increase_locks" ]; then
echo "CRITICAL: Locks too low. Increase first, THEN use single-threaded:"
echo
echo "1. Increase locks (requires PostgreSQL restart):"
echo " sudo -u postgres psql -c \"ALTER SYSTEM SET max_locks_per_transaction = 4096;\""
echo " sudo systemctl restart postgresql"
echo
echo "2. Run restore with single-threaded mode:"
echo " dbbackup restore cluster <backup-file> \\"
echo " --profile conservative \\"
echo " --parallel-dbs 1 \\"
echo " --jobs 1 \\"
echo " --confirm"
else
echo "✅ Use default CONSERVATIVE profile (single-threaded, prevents lock issues):"
echo
echo " dbbackup restore cluster <backup-file> --confirm"
echo
echo " (Default profile is now 'conservative' = single-threaded)"
echo
echo " For faster restore (if locks are sufficient):"
echo " dbbackup restore cluster <backup-file> --profile balanced --confirm"
echo " dbbackup restore cluster <backup-file> --profile aggressive --confirm"
fi
echo
echo "════════════════════════════════════════════════════════════"
echo " WHY SINGLE-THREADED?"
echo "════════════════════════════════════════════════════════════"
echo
echo " Parallel restore with large databases (especially with BLOBs)"
echo " can exhaust locks EVEN with high max_locks_per_transaction."
echo
echo " --jobs 1 = Single-threaded pg_restore (minimal locks)"
echo " --parallel-dbs 1 = Restore one database at a time"
echo
echo " Trade-off: Slower restore, but GUARANTEED completion."
echo
echo "════════════════════════════════════════════════════════════"