Compare commits

...

4 Commits

Author SHA1 Message Date
f9ff45cf2a v3.42.70: TUI consistency improvements - unified backup/restore views
- Added phase constants (backupPhaseGlobals, backupPhaseDatabases, backupPhaseCompressing)
- Changed title from '[EXEC] Backup Execution' to '[EXEC] Cluster Backup'
- Made phase labels explicit with action verbs (Backing up Globals, Backing up Databases, Compressing Archive)
- Added realtime ETA tracking to backup phase 2 (databases) with phase2StartTime
- Moved duration display from top to bottom as 'Elapsed:' (consistent with restore)
- Standardized keys label to '[KEYS]' everywhere (was '[KEY]')
- Added timing fields: phase2StartTime, dbPhaseElapsed, dbAvgPerDB
- Created renderBackupDatabaseProgressBarWithTiming() with elapsed + ETA display
- Enhanced completion summary with archive info and throughput calculation
- Removed duplicate formatDuration() function (shared with restore_exec.go)

All 10 consistency improvements implemented (high/medium/low priority).
Backup and restore TUI views now provide a unified, professional UX.
2026-01-18 18:52:26 +01:00
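
For context on the "throughput calculation" bullet above: the completion summary derives average throughput as archive size divided by wall-clock time. A minimal, self-contained sketch of that arithmetic (formatBytes here is a stand-in for the TUI's FormatBytes helper, and the sample numbers are illustrative):

```go
package main

import (
	"fmt"
	"time"
)

// formatBytes is a stand-in for the TUI's FormatBytes helper.
func formatBytes(b int64) string {
	const unit = 1024
	if b < unit {
		return fmt.Sprintf("%d B", b)
	}
	div, exp := int64(unit), 0
	for n := b / unit; n >= unit; n /= unit {
		div *= unit
		exp++
	}
	return fmt.Sprintf("%.1f %cB", float64(b)/float64(div), "KMGTPE"[exp])
}

func main() {
	archiveSize := int64(12_884_901_888)        // illustrative: a 12 GiB archive
	elapsed := 3*time.Minute + 11*time.Second   // illustrative wall-clock time

	// Average throughput shown in the completion summary: bytes / wall time.
	if archiveSize > 0 && elapsed.Seconds() > 0 {
		throughput := float64(archiveSize) / elapsed.Seconds()
		fmt.Printf("Throughput: %s/s (average)\n", formatBytes(int64(throughput)))
	}
}
```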
72c06ba5c2 fix(tui): realtime ETA updates during phase 3 cluster restore
Previously, the ETA during phase 3 (database restores) would appear to
hang because dbPhaseElapsed was only updated when a new database started
restoring, not during the restore operation itself.

Fixed by:
- Added phase3StartTime to track when phase 3 begins
- Calculate dbPhaseElapsed in realtime using time.Since(phase3StartTime)
- ETA now updates on every 100ms tick instead of only on database transitions

This ensures the elapsed time and ETA display continuously update during
long-running database restores.
2026-01-18 18:36:48 +01:00
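
The essence of the fix is that elapsed time is derived from a fixed phase-start timestamp on every tick, instead of from a value refreshed only on database transitions. A minimal, runnable sketch of that pattern, assuming the names used in the diff below (the 100ms ticker stands in for the Bubble Tea tick loop, and the counters are fixed for the demo):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Recorded once, when the first phase-3 database callback fires.
	phase3StartTime := time.Now()

	// In the real model these come from the progress callbacks; fixed here for the demo.
	dbTotal, dbDone := 10, 3

	ticker := time.NewTicker(100 * time.Millisecond) // mirrors the TUI's 100ms tick
	defer ticker.Stop()

	for range ticker.C {
		// Elapsed is recomputed from the phase start on every tick, so it keeps
		// moving even while a single long-running restore is in progress.
		dbPhaseElapsed := time.Since(phase3StartTime)

		var eta time.Duration
		if dbDone > 0 && dbDone < dbTotal {
			avgPerDB := dbPhaseElapsed / time.Duration(dbDone)
			eta = avgPerDB * time.Duration(dbTotal-dbDone)
		}
		fmt.Printf("\rElapsed: %s | ETA: %s ", dbPhaseElapsed.Round(time.Second), eta.Round(time.Second))

		if dbPhaseElapsed > 2*time.Second { // stop the demo after a couple of seconds
			fmt.Println()
			break
		}
	}
}
```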
a0a401cab1 fix(tui): suppress preflight stdout output in TUI mode to prevent scrambled display
- Add silentMode field to restore Engine struct
- Set silentMode=true in NewSilent() constructor for TUI mode
- Skip fmt.Println output in printPreflightSummary when in silent mode
- Log summary instead of printing to stdout in TUI mode
- Fixes scrambled output during cluster restore preflight checks
2026-01-18 18:17:00 +01:00
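
The change is essentially a constructor-driven guard: the TUI builds the restore engine via NewSilent, and stdout printing is skipped whenever silentMode is set. A minimal, self-contained sketch of that pattern (this Engine is a pared-down stand-in, not the real struct; the actual guard appears in the preflight diff further down):

```go
package main

import (
	"fmt"
	"log/slog"
)

// Engine is a pared-down stand-in for the restore engine in the diff below.
type Engine struct {
	silentMode bool // true when constructed via NewSilent (TUI mode)
	log        *slog.Logger
}

func (e *Engine) printPreflightSummary(warnings, errs int) {
	if e.silentMode {
		// TUI mode: never write to stdout, it would interleave with the
		// Bubble Tea renderer; log the result instead.
		e.log.Info("Preflight checks complete", "warnings", warnings, "errors", errs)
		return
	}
	// CLI mode: human-readable summary on stdout.
	fmt.Printf("Preflight: %d warnings, %d errors\n", warnings, errs)
}

func main() {
	tui := &Engine{silentMode: true, log: slog.Default()}
	cli := &Engine{silentMode: false, log: slog.Default()}
	tui.printPreflightSummary(1, 0) // goes to the logger
	cli.printPreflightSummary(1, 0) // goes to stdout
}
```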
59a717abe7 refactor(profiles): replace large-db profile with composable LargeDBMode
BREAKING CHANGE: Removed 'large-db' as a standalone profile

New Design:
- Resource Profiles now purely represent VM capacity:
  conservative, balanced, performance, max-performance
- LargeDBMode is a separate boolean toggle that modifies any profile:
  - Reduces ClusterParallelism and Jobs by 50%
  - Forces MaxLocksPerTxn = 8192
  - Increases MaintenanceWorkMem

TUI Changes:
- 'l' key now toggles LargeDBMode ON/OFF instead of applying large-db profile
- New 'Large DB Mode' setting in settings menu
- Settings are persisted to .dbbackup.conf

This allows any resource profile to be combined with large database
optimization, giving users more flexibility on both small and large VMs.
2026-01-18 12:39:21 +01:00
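
To make the composable design concrete, here is a minimal, self-contained sketch of the modifier semantics described above; the ResourceProfile struct is a trimmed stand-in for the one in the cpu package, the numeric values are illustrative, and the real implementation is ApplyLargeDBMode shown in the diff further down:

```go
package main

import "fmt"

// ResourceProfile is a trimmed stand-in for cpu.ResourceProfile.
type ResourceProfile struct {
	Name               string
	ClusterParallelism int
	Jobs               int
	DumpJobs           int
	MaxLocksPerTxn     int
}

// applyLargeDBMode mirrors the modifier semantics: halve parallelism
// (minimum 1, DumpJobs minimum 2), force a high max_locks_per_transaction,
// and return a copy so the base profile is left unchanged.
func applyLargeDBMode(p ResourceProfile) ResourceProfile {
	m := p
	m.Name = p.Name + " +large-db"
	m.ClusterParallelism = max(1, p.ClusterParallelism/2)
	m.Jobs = max(1, p.Jobs/2)
	m.DumpJobs = max(2, p.DumpJobs/2)
	m.MaxLocksPerTxn = 8192
	return m
}

func max(a, b int) int {
	if a > b {
		return a
	}
	return b
}

func main() {
	// Any base profile (pure VM capacity) can be combined with the modifier;
	// the concrete numbers here are illustrative, not the shipped defaults.
	performance := ResourceProfile{Name: "performance", ClusterParallelism: 4, Jobs: 8, DumpJobs: 8, MaxLocksPerTxn: 512}
	fmt.Printf("%+v\n", applyLargeDBMode(performance))
}
```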
11 changed files with 340 additions and 123 deletions

View File

@@ -221,7 +221,7 @@ Configuration Settings
Database User: postgres
SSL Mode: prefer
[KEYS] ↑↓ navigate | Enter edit | 'l' large-db | 'c' conservative | 'p' recommend | 's' save | 'q' menu
[KEYS] ↑↓ navigate | Enter edit | 'l' toggle LargeDB | 'c' conservative | 'p' recommend | 's' save | 'q' menu
```
**Resource Profiles for Large Databases:**
@@ -233,9 +233,11 @@ When restoring large databases on VMs with limited resources, use the resource p
| conservative | 1 | 1 | Small VMs (<16GB RAM) |
| balanced | 2 | 2-4 | Medium VMs (16-32GB RAM) |
| performance | 4 | 4-8 | Large servers (32GB+ RAM) |
| large-db | 1 | 2 | Large databases on any hardware |
| max-performance | 8 | 8-16 | High-end servers (64GB+) |
**Quick shortcuts:** Press `l` to apply large-db profile, `c` for conservative, `p` to show recommendation.
**Large DB Mode:** Toggle with `l` key. Reduces parallelism by 50% and sets max_locks_per_transaction=8192 for complex databases with many tables/LOBs.
**Quick shortcuts:** Press `l` to toggle Large DB Mode, `c` for conservative, `p` to show recommendation.
**Database Status:**
```

View File

@@ -4,8 +4,8 @@ This directory contains pre-compiled binaries for the DB Backup Tool across mult
## Build Information
- **Version**: 3.42.50
- **Build Time**: 2026-01-18_11:06:11_UTC
- **Git Commit**: ea4337e
- **Build Time**: 2026-01-18_17:17:17_UTC
- **Git Commit**: a0a401c
## Recent Updates (v1.1.0)
- ✅ Fixed TUI progress display with line-by-line output

View File

@@ -37,7 +37,8 @@ type Config struct {
CPUWorkloadType string // "cpu-intensive", "io-intensive", "balanced"
// Resource profile for backup/restore operations
ResourceProfile string // "conservative", "balanced", "performance", "max-performance", "large-db"
ResourceProfile string // "conservative", "balanced", "performance", "max-performance"
LargeDBMode bool // Enable large database mode (reduces parallelism, increases max_locks)
// CPU detection
CPUDetector *cpu.Detector
@@ -209,6 +210,7 @@ func New() *Config {
AutoDetectCores: getEnvBool("AUTO_DETECT_CORES", true),
CPUWorkloadType: getEnvString("CPU_WORKLOAD_TYPE", "balanced"),
ResourceProfile: defaultProfile,
LargeDBMode: getEnvBool("LARGE_DB_MODE", false),
// CPU and memory detection
CPUDetector: cpuDetector,
@@ -430,7 +432,7 @@ func (c *Config) ApplyResourceProfile(profileName string) error {
return &ConfigError{
Field: "resource_profile",
Value: profileName,
Message: "unknown profile. Valid profiles: conservative, balanced, performance, max-performance, large-db",
Message: "unknown profile. Valid profiles: conservative, balanced, performance, max-performance",
}
}
@@ -457,8 +459,19 @@ func (c *Config) GetResourceProfileRecommendation(isLargeDB bool) (string, strin
}
// GetCurrentProfile returns the current resource profile details
// If LargeDBMode is enabled, returns a modified profile with reduced parallelism
func (c *Config) GetCurrentProfile() *cpu.ResourceProfile {
return cpu.GetProfileByName(c.ResourceProfile)
profile := cpu.GetProfileByName(c.ResourceProfile)
if profile == nil {
return nil
}
// Apply LargeDBMode modifier if enabled
if c.LargeDBMode {
return cpu.ApplyLargeDBMode(profile)
}
return profile
}
// GetCPUInfo returns CPU information, detecting if necessary

View File

@@ -28,9 +28,11 @@ type LocalConfig struct {
DumpJobs int
// Performance settings
CPUWorkload string
MaxCores int
ClusterTimeout int // Cluster operation timeout in minutes (default: 1440 = 24 hours)
CPUWorkload string
MaxCores int
ClusterTimeout int // Cluster operation timeout in minutes (default: 1440 = 24 hours)
ResourceProfile string
LargeDBMode bool // Enable large database mode (reduces parallelism, increases locks)
// Security settings
RetentionDays int
@@ -126,6 +128,10 @@ func LoadLocalConfig() (*LocalConfig, error) {
if ct, err := strconv.Atoi(value); err == nil {
cfg.ClusterTimeout = ct
}
case "resource_profile":
cfg.ResourceProfile = value
case "large_db_mode":
cfg.LargeDBMode = value == "true" || value == "1"
}
case "security":
switch key {
@@ -207,6 +213,12 @@ func SaveLocalConfig(cfg *LocalConfig) error {
if cfg.ClusterTimeout != 0 {
sb.WriteString(fmt.Sprintf("cluster_timeout = %d\n", cfg.ClusterTimeout))
}
if cfg.ResourceProfile != "" {
sb.WriteString(fmt.Sprintf("resource_profile = %s\n", cfg.ResourceProfile))
}
if cfg.LargeDBMode {
sb.WriteString("large_db_mode = true\n")
}
sb.WriteString("\n")
// Security section
@@ -280,6 +292,14 @@ func ApplyLocalConfig(cfg *Config, local *LocalConfig) {
if local.ClusterTimeout != 0 {
cfg.ClusterTimeoutMinutes = local.ClusterTimeout
}
// Apply resource profile settings
if local.ResourceProfile != "" {
cfg.ResourceProfile = local.ResourceProfile
}
// LargeDBMode is a boolean - apply if true in config
if local.LargeDBMode {
cfg.LargeDBMode = true
}
if cfg.RetentionDays == 30 && local.RetentionDays != 0 {
cfg.RetentionDays = local.RetentionDays
}
@@ -294,22 +314,24 @@ func ApplyLocalConfig(cfg *Config, local *LocalConfig) {
// ConfigFromConfig creates a LocalConfig from a Config
func ConfigFromConfig(cfg *Config) *LocalConfig {
return &LocalConfig{
DBType: cfg.DatabaseType,
Host: cfg.Host,
Port: cfg.Port,
User: cfg.User,
Database: cfg.Database,
SSLMode: cfg.SSLMode,
BackupDir: cfg.BackupDir,
WorkDir: cfg.WorkDir,
Compression: cfg.CompressionLevel,
Jobs: cfg.Jobs,
DumpJobs: cfg.DumpJobs,
CPUWorkload: cfg.CPUWorkloadType,
MaxCores: cfg.MaxCores,
ClusterTimeout: cfg.ClusterTimeoutMinutes,
RetentionDays: cfg.RetentionDays,
MinBackups: cfg.MinBackups,
MaxRetries: cfg.MaxRetries,
DBType: cfg.DatabaseType,
Host: cfg.Host,
Port: cfg.Port,
User: cfg.User,
Database: cfg.Database,
SSLMode: cfg.SSLMode,
BackupDir: cfg.BackupDir,
WorkDir: cfg.WorkDir,
Compression: cfg.CompressionLevel,
Jobs: cfg.Jobs,
DumpJobs: cfg.DumpJobs,
CPUWorkload: cfg.CPUWorkloadType,
MaxCores: cfg.MaxCores,
ClusterTimeout: cfg.ClusterTimeoutMinutes,
ResourceProfile: cfg.ResourceProfile,
LargeDBMode: cfg.LargeDBMode,
RetentionDays: cfg.RetentionDays,
MinBackups: cfg.MinBackups,
MaxRetries: cfg.MaxRetries,
}
}

View File

@@ -90,32 +90,17 @@ var (
DumpJobs: 16,
MaintenanceWorkMem: "2GB",
MaxLocksPerTxn: 512,
RecommendedForLarge: false, // Large DBs should use conservative
RecommendedForLarge: false, // Large DBs should use LargeDBMode
MinMemoryGB: 64,
MinCores: 16,
}
// ProfileLargeDB - Optimized specifically for large databases
ProfileLargeDB = ResourceProfile{
Name: "large-db",
Description: "Optimized for large databases with many tables/BLOBs. Prevents 'out of shared memory' errors.",
ClusterParallelism: 1,
Jobs: 2,
DumpJobs: 2,
MaintenanceWorkMem: "1GB",
MaxLocksPerTxn: 8192,
RecommendedForLarge: true,
MinMemoryGB: 8,
MinCores: 2,
}
// AllProfiles contains all available profiles
// AllProfiles contains all available profiles (VM resource-based)
AllProfiles = []ResourceProfile{
ProfileConservative,
ProfileBalanced,
ProfilePerformance,
ProfileMaxPerformance,
ProfileLargeDB,
}
)
@@ -129,6 +114,51 @@ func GetProfileByName(name string) *ResourceProfile {
return nil
}
// ApplyLargeDBMode modifies a profile for large database operations.
// This is a modifier that reduces parallelism and increases max_locks_per_transaction
// to prevent "out of shared memory" errors with large databases (many tables, LOBs, etc.).
// It returns a new profile with adjusted settings, leaving the original unchanged.
func ApplyLargeDBMode(profile *ResourceProfile) *ResourceProfile {
if profile == nil {
return nil
}
// Create a copy with adjusted settings
modified := *profile
// Add "(large-db)" suffix to indicate this is modified
modified.Name = profile.Name + " +large-db"
modified.Description = fmt.Sprintf("%s [LargeDBMode: reduced parallelism, high locks]", profile.Description)
// Reduce parallelism to avoid lock exhaustion
// Rule: halve parallelism, minimum 1 (DumpJobs keeps a floor of 2)
modified.ClusterParallelism = max(1, profile.ClusterParallelism/2)
modified.Jobs = max(1, profile.Jobs/2)
modified.DumpJobs = max(2, profile.DumpJobs/2)
// Force high max_locks_per_transaction for large schemas
modified.MaxLocksPerTxn = 8192
// Keep or boost maintenance_work_mem for complex operations
modified.MaintenanceWorkMem = "1GB"
if profile.MinMemoryGB >= 32 {
modified.MaintenanceWorkMem = "2GB"
}
modified.RecommendedForLarge = true
return &modified
}
// max returns the larger of two integers
func max(a, b int) int {
if a > b {
return a
}
return b
}
// DetectMemory detects system memory information
func DetectMemory() (*MemoryInfo, error) {
info := &MemoryInfo{
@@ -293,11 +323,11 @@ func RecommendProfile(cpuInfo *CPUInfo, memInfo *MemoryInfo, isLargeDB bool) *Re
memGB = memInfo.TotalGB
}
// Special case: large databases should always use conservative/large-db profile
// Special case: large databases should use conservative profile
// The caller should also enable LargeDBMode for increased MaxLocksPerTxn
if isLargeDB {
if memGB >= 32 && cores >= 8 {
return &ProfileLargeDB // Still conservative but with more memory for maintenance
}
// For large DBs, recommend conservative regardless of resources
// LargeDBMode flag will handle the lock settings separately
return &ProfileConservative
}
@@ -339,7 +369,7 @@ func RecommendProfileWithReason(cpuInfo *CPUInfo, memInfo *MemoryInfo, isLargeDB
profile := RecommendProfile(cpuInfo, memInfo, isLargeDB)
if isLargeDB {
reason.WriteString("Large database detected - using conservative settings to avoid 'out of shared memory' errors.")
reason.WriteString("Large database mode - using conservative settings. Enable LargeDBMode for higher max_locks.")
} else if profile.Name == "conservative" {
reason.WriteString("Limited resources detected - using conservative profile for stability.")
} else if profile.Name == "max-performance" {

View File

@@ -50,6 +50,7 @@ type Engine struct {
progress progress.Indicator
detailedReporter *progress.DetailedReporter
dryRun bool
silentMode bool // Suppress stdout output (for TUI mode)
debugLogPath string // Path to save debug log on error
// TUI progress callback for detailed progress reporting
@@ -86,6 +87,7 @@ func NewSilent(cfg *config.Config, log logger.Logger, db database.Database) *Eng
progress: progressIndicator,
detailedReporter: detailedReporter,
dryRun: false,
silentMode: true, // Suppress stdout for TUI
}
}

View File

@@ -542,7 +542,20 @@ func (e *Engine) calculateRecommendedParallel(result *PreflightResult) int {
}
// printPreflightSummary prints a nice summary of all checks
// In silent mode (TUI), this is skipped and results are logged instead
func (e *Engine) printPreflightSummary(result *PreflightResult) {
// In TUI/silent mode, don't print to stdout - it causes scrambled output
if e.silentMode {
// Log summary instead for debugging
e.log.Info("Preflight checks complete",
"can_proceed", result.CanProceed,
"warnings", len(result.Warnings),
"errors", len(result.Errors),
"total_blobs", result.Archive.TotalBlobCount,
"recommended_locks", result.Archive.RecommendedLockBoost)
return
}
fmt.Println()
fmt.Println(strings.Repeat("─", 60))
fmt.Println(" PREFLIGHT CHECKS")

View File

@@ -13,6 +13,14 @@ import (
"dbbackup/internal/config"
"dbbackup/internal/database"
"dbbackup/internal/logger"
"path/filepath"
)
// Backup phase constants for consistency
const (
backupPhaseGlobals = 1
backupPhaseDatabases = 2
backupPhaseCompressing = 3
)
// BackupExecutionModel handles backup execution with progress
@@ -31,27 +39,36 @@ type BackupExecutionModel struct {
cancelling bool // True when user has requested cancellation
err error
result string
archivePath string // Path to created archive (for summary)
archiveSize int64 // Size of created archive (for summary)
startTime time.Time
elapsed time.Duration // Final elapsed time
details []string
spinnerFrame int
// Database count progress (for cluster backup)
dbTotal int
dbDone int
dbName string // Current database being backed up
overallPhase int // 1=globals, 2=databases, 3=compressing
phaseDesc string // Description of current phase
dbTotal int
dbDone int
dbName string // Current database being backed up
overallPhase int // 1=globals, 2=databases, 3=compressing
phaseDesc string // Description of current phase
phase2StartTime time.Time // When phase 2 (databases) started (for realtime ETA)
dbPhaseElapsed time.Duration // Elapsed time since database backup phase started
dbAvgPerDB time.Duration // Average time per database backup
}
// sharedBackupProgressState holds progress state that can be safely accessed from callbacks
type sharedBackupProgressState struct {
mu sync.Mutex
dbTotal int
dbDone int
dbName string
overallPhase int // 1=globals, 2=databases, 3=compressing
phaseDesc string // Description of current phase
hasUpdate bool
mu sync.Mutex
dbTotal int
dbDone int
dbName string
overallPhase int // 1=globals, 2=databases, 3=compressing
phaseDesc string // Description of current phase
hasUpdate bool
phase2StartTime time.Time // When phase 2 started (for realtime ETA calculation)
dbPhaseElapsed time.Duration // Elapsed time since database backup phase started
dbAvgPerDB time.Duration // Average time per database backup
}
// Package-level shared progress state for backup operations
@@ -72,12 +89,12 @@ func clearCurrentBackupProgress() {
currentBackupProgressState = nil
}
func getCurrentBackupProgress() (dbTotal, dbDone int, dbName string, overallPhase int, phaseDesc string, hasUpdate bool) {
func getCurrentBackupProgress() (dbTotal, dbDone int, dbName string, overallPhase int, phaseDesc string, hasUpdate bool, dbPhaseElapsed, dbAvgPerDB time.Duration, phase2StartTime time.Time) {
currentBackupProgressMu.Lock()
defer currentBackupProgressMu.Unlock()
if currentBackupProgressState == nil {
return 0, 0, "", 0, "", false
return 0, 0, "", 0, "", false, 0, 0, time.Time{}
}
currentBackupProgressState.mu.Lock()
@@ -86,9 +103,17 @@ func getCurrentBackupProgress() (dbTotal, dbDone int, dbName string, overallPhas
hasUpdate = currentBackupProgressState.hasUpdate
currentBackupProgressState.hasUpdate = false
// Calculate realtime phase elapsed if we have a phase 2 start time
dbPhaseElapsed = currentBackupProgressState.dbPhaseElapsed
if !currentBackupProgressState.phase2StartTime.IsZero() {
dbPhaseElapsed = time.Since(currentBackupProgressState.phase2StartTime)
}
return currentBackupProgressState.dbTotal, currentBackupProgressState.dbDone,
currentBackupProgressState.dbName, currentBackupProgressState.overallPhase,
currentBackupProgressState.phaseDesc, hasUpdate
currentBackupProgressState.phaseDesc, hasUpdate,
dbPhaseElapsed, currentBackupProgressState.dbAvgPerDB,
currentBackupProgressState.phase2StartTime
}
func NewBackupExecution(cfg *config.Config, log logger.Logger, parent tea.Model, ctx context.Context, backupType, dbName string, ratio int) BackupExecutionModel {
@@ -132,8 +157,11 @@ type backupProgressMsg struct {
}
type backupCompleteMsg struct {
result string
err error
result string
err error
archivePath string
archiveSize int64
elapsed time.Duration
}
func executeBackupWithTUIProgress(parentCtx context.Context, cfg *config.Config, log logger.Logger, backupType, dbName string, ratio int) tea.Cmd {
@@ -176,9 +204,13 @@ func executeBackupWithTUIProgress(parentCtx context.Context, cfg *config.Config,
progressState.dbDone = done
progressState.dbTotal = total
progressState.dbName = currentDB
progressState.overallPhase = 2 // Phase 2: Backing up databases
progressState.phaseDesc = fmt.Sprintf("Phase 2/3: Databases (%d/%d)", done, total)
progressState.overallPhase = backupPhaseDatabases
progressState.phaseDesc = fmt.Sprintf("Phase 2/3: Backing up Databases (%d/%d)", done, total)
progressState.hasUpdate = true
// Set phase 2 start time on first callback (for realtime ETA calculation)
if progressState.phase2StartTime.IsZero() {
progressState.phase2StartTime = time.Now()
}
progressState.mu.Unlock()
})
@@ -216,8 +248,9 @@ func executeBackupWithTUIProgress(parentCtx context.Context, cfg *config.Config,
}
return backupCompleteMsg{
result: result,
err: nil,
result: result,
err: nil,
elapsed: elapsed,
}
}
}
@@ -230,13 +263,15 @@ func (m BackupExecutionModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
m.spinnerFrame = (m.spinnerFrame + 1) % len(spinnerFrames)
// Poll for database progress updates from callbacks
dbTotal, dbDone, dbName, overallPhase, phaseDesc, hasUpdate := getCurrentBackupProgress()
dbTotal, dbDone, dbName, overallPhase, phaseDesc, hasUpdate, dbPhaseElapsed, dbAvgPerDB, _ := getCurrentBackupProgress()
if hasUpdate {
m.dbTotal = dbTotal
m.dbDone = dbDone
m.dbName = dbName
m.overallPhase = overallPhase
m.phaseDesc = phaseDesc
m.dbPhaseElapsed = dbPhaseElapsed
m.dbAvgPerDB = dbAvgPerDB
}
// Update status based on progress and elapsed time
@@ -284,6 +319,7 @@ func (m BackupExecutionModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
m.done = true
m.err = msg.err
m.result = msg.result
m.elapsed = msg.elapsed
if m.err == nil {
m.status = "[OK] Backup completed successfully!"
} else {
@@ -361,14 +397,52 @@ func renderBackupDatabaseProgressBar(done, total int, dbName string, width int)
return fmt.Sprintf(" Database: [%s] %d/%d", bar, done, total)
}
// renderBackupDatabaseProgressBarWithTiming renders database backup progress with ETA
func renderBackupDatabaseProgressBarWithTiming(done, total int, dbPhaseElapsed, dbAvgPerDB time.Duration) string {
if total == 0 {
return ""
}
// Calculate progress percentage
percent := float64(done) / float64(total)
if percent > 1.0 {
percent = 1.0
}
// Build progress bar
barWidth := 50
filled := int(float64(barWidth) * percent)
if filled > barWidth {
filled = barWidth
}
bar := strings.Repeat("█", filled) + strings.Repeat("░", barWidth-filled)
// Calculate ETA similar to restore
var etaStr string
if done > 0 && done < total {
avgPerDB := dbPhaseElapsed / time.Duration(done)
remaining := total - done
eta := avgPerDB * time.Duration(remaining)
etaStr = fmt.Sprintf(" | ETA: %s", formatDuration(eta))
} else if done == total {
etaStr = " | Complete"
}
return fmt.Sprintf(" Databases: [%s] %d/%d | Elapsed: %s%s\n",
bar, done, total, formatDuration(dbPhaseElapsed), etaStr)
}
func (m BackupExecutionModel) View() string {
var s strings.Builder
s.Grow(512) // Pre-allocate estimated capacity for better performance
// Clear screen with newlines and render header
s.WriteString("\n\n")
header := titleStyle.Render("[EXEC] Backup Execution")
s.WriteString(header)
header := "[EXEC] Backing up Database"
if m.backupType == "cluster" {
header = "[EXEC] Cluster Backup"
}
s.WriteString(titleStyle.Render(header))
s.WriteString("\n\n")
// Backup details - properly aligned
@@ -379,7 +453,6 @@ func (m BackupExecutionModel) View() string {
if m.ratio > 0 {
s.WriteString(fmt.Sprintf(" %-10s %d\n", "Sample:", m.ratio))
}
s.WriteString(fmt.Sprintf(" %-10s %s\n", "Duration:", time.Since(m.startTime).Round(time.Second)))
s.WriteString("\n")
// Status display
@@ -395,11 +468,15 @@ func (m BackupExecutionModel) View() string {
elapsedSec := int(time.Since(m.startTime).Seconds())
if m.overallPhase == 2 && m.dbTotal > 0 {
if m.overallPhase == backupPhaseDatabases && m.dbTotal > 0 {
// Phase 2: Database backups - contributes 15-90%
dbPct := int((int64(m.dbDone) * 100) / int64(m.dbTotal))
overallProgress = 15 + (dbPct * 75 / 100)
phaseLabel = m.phaseDesc
} else if m.overallPhase == backupPhaseCompressing {
// Phase 3: Compressing archive
overallProgress = 92
phaseLabel = "Phase 3/3: Compressing Archive"
} else if elapsedSec < 5 {
// Initial setup
overallProgress = 2
@@ -430,9 +507,9 @@ func (m BackupExecutionModel) View() string {
}
s.WriteString("\n")
// Database progress bar
progressBar := renderBackupDatabaseProgressBar(m.dbDone, m.dbTotal, m.dbName, 50)
s.WriteString(progressBar + "\n")
// Database progress bar with timing
s.WriteString(renderBackupDatabaseProgressBarWithTiming(m.dbDone, m.dbTotal, m.dbPhaseElapsed, m.dbAvgPerDB))
s.WriteString("\n")
} else {
// Intermediate phase (globals)
spinner := spinnerFrames[m.spinnerFrame]
@@ -449,7 +526,10 @@ func (m BackupExecutionModel) View() string {
}
if !m.cancelling {
s.WriteString("\n [KEY] Press Ctrl+C or ESC to cancel\n")
// Elapsed time
s.WriteString(fmt.Sprintf("Elapsed: %s\n", formatDuration(time.Since(m.startTime))))
s.WriteString("\n")
s.WriteString(infoStyle.Render("[KEYS] Press Ctrl+C or ESC to cancel"))
}
} else {
// Show completion summary with detailed stats
@@ -474,6 +554,14 @@ func (m BackupExecutionModel) View() string {
s.WriteString(infoStyle.Render(" ─── Summary ───────────────────────────────────────────────"))
s.WriteString("\n\n")
// Archive info (if available)
if m.archivePath != "" {
s.WriteString(fmt.Sprintf(" Archive: %s\n", filepath.Base(m.archivePath)))
}
if m.archiveSize > 0 {
s.WriteString(fmt.Sprintf(" Archive Size: %s\n", FormatBytes(m.archiveSize)))
}
// Backup type specific info
switch m.backupType {
case "cluster":
@@ -497,12 +585,21 @@ func (m BackupExecutionModel) View() string {
s.WriteString(infoStyle.Render(" ─── Timing ────────────────────────────────────────────────"))
s.WriteString("\n\n")
elapsed := time.Since(m.startTime)
s.WriteString(fmt.Sprintf(" Total Time: %s\n", formatBackupDuration(elapsed)))
elapsed := m.elapsed
if elapsed == 0 {
elapsed = time.Since(m.startTime)
}
s.WriteString(fmt.Sprintf(" Total Time: %s\n", formatDuration(elapsed)))
// Calculate and show throughput if we have size info
if m.archiveSize > 0 && elapsed.Seconds() > 0 {
throughput := float64(m.archiveSize) / elapsed.Seconds()
s.WriteString(fmt.Sprintf(" Throughput: %s/s (average)\n", FormatBytes(int64(throughput))))
}
if m.backupType == "cluster" && m.dbTotal > 0 && m.err == nil {
avgPerDB := elapsed / time.Duration(m.dbTotal)
s.WriteString(fmt.Sprintf(" Avg per DB: %s\n", formatBackupDuration(avgPerDB)))
s.WriteString(fmt.Sprintf(" Avg per DB: %s\n", formatDuration(avgPerDB)))
}
s.WriteString("\n")
@@ -513,18 +610,3 @@ func (m BackupExecutionModel) View() string {
return s.String()
}
// formatBackupDuration formats duration in human readable format
func formatBackupDuration(d time.Duration) string {
if d < time.Minute {
return fmt.Sprintf("%.1fs", d.Seconds())
}
if d < time.Hour {
minutes := int(d.Minutes())
seconds := int(d.Seconds()) % 60
return fmt.Sprintf("%dm %ds", minutes, seconds)
}
hours := int(d.Hours())
minutes := int(d.Minutes()) % 60
return fmt.Sprintf("%dh %dm", hours, minutes)
}

View File

@@ -154,6 +154,7 @@ type sharedProgressState struct {
// Timing info for database restore phase
dbPhaseElapsed time.Duration // Elapsed time since restore phase started
dbAvgPerDB time.Duration // Average time per database restore
phase3StartTime time.Time // When phase 3 started (for realtime ETA calculation)
// Overall phase tracking (1=Extract, 2=Globals, 3=Databases)
overallPhase int
@@ -190,12 +191,12 @@ func clearCurrentRestoreProgress() {
currentRestoreProgressState = nil
}
func getCurrentRestoreProgress() (bytesTotal, bytesDone int64, description string, hasUpdate bool, dbTotal, dbDone int, speed float64, dbPhaseElapsed, dbAvgPerDB time.Duration, currentDB string, overallPhase int, extractionDone bool, dbBytesTotal, dbBytesDone int64) {
func getCurrentRestoreProgress() (bytesTotal, bytesDone int64, description string, hasUpdate bool, dbTotal, dbDone int, speed float64, dbPhaseElapsed, dbAvgPerDB time.Duration, currentDB string, overallPhase int, extractionDone bool, dbBytesTotal, dbBytesDone int64, phase3StartTime time.Time) {
currentRestoreProgressMu.Lock()
defer currentRestoreProgressMu.Unlock()
if currentRestoreProgressState == nil {
return 0, 0, "", false, 0, 0, 0, 0, 0, "", 0, false, 0, 0
return 0, 0, "", false, 0, 0, 0, 0, 0, "", 0, false, 0, 0, time.Time{}
}
currentRestoreProgressState.mu.Lock()
@@ -204,13 +205,20 @@ func getCurrentRestoreProgress() (bytesTotal, bytesDone int64, description strin
// Calculate rolling window speed
speed = calculateRollingSpeed(currentRestoreProgressState.speedSamples)
// Calculate realtime phase elapsed if we have a phase 3 start time
dbPhaseElapsed = currentRestoreProgressState.dbPhaseElapsed
if !currentRestoreProgressState.phase3StartTime.IsZero() {
dbPhaseElapsed = time.Since(currentRestoreProgressState.phase3StartTime)
}
return currentRestoreProgressState.bytesTotal, currentRestoreProgressState.bytesDone,
currentRestoreProgressState.description, currentRestoreProgressState.hasUpdate,
currentRestoreProgressState.dbTotal, currentRestoreProgressState.dbDone, speed,
currentRestoreProgressState.dbPhaseElapsed, currentRestoreProgressState.dbAvgPerDB,
dbPhaseElapsed, currentRestoreProgressState.dbAvgPerDB,
currentRestoreProgressState.currentDB, currentRestoreProgressState.overallPhase,
currentRestoreProgressState.extractionDone,
currentRestoreProgressState.dbBytesTotal, currentRestoreProgressState.dbBytesDone
currentRestoreProgressState.dbBytesTotal, currentRestoreProgressState.dbBytesDone,
currentRestoreProgressState.phase3StartTime
}
// calculateRollingSpeed calculates speed from recent samples (last 5 seconds)
@@ -357,6 +365,10 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
progressState.overallPhase = 3
progressState.extractionDone = true
progressState.hasUpdate = true
// Set phase 3 start time on first callback (for realtime ETA calculation)
if progressState.phase3StartTime.IsZero() {
progressState.phase3StartTime = time.Now()
}
// Clear byte progress when switching to db progress
progressState.bytesTotal = 0
progressState.bytesDone = 0
@@ -375,6 +387,10 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
progressState.dbPhaseElapsed = phaseElapsed
progressState.dbAvgPerDB = avgPerDB
progressState.hasUpdate = true
// Set phase 3 start time on first callback (for realtime ETA calculation)
if progressState.phase3StartTime.IsZero() {
progressState.phase3StartTime = time.Now()
}
// Clear byte progress when switching to db progress
progressState.bytesTotal = 0
progressState.bytesDone = 0
@@ -392,6 +408,10 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
progressState.overallPhase = 3
progressState.extractionDone = true
progressState.hasUpdate = true
// Set phase 3 start time on first callback (for realtime ETA calculation)
if progressState.phase3StartTime.IsZero() {
progressState.phase3StartTime = time.Now()
}
})
// Store progress state in a package-level variable for the ticker to access
@@ -447,7 +467,8 @@ func (m RestoreExecutionModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
m.elapsed = time.Since(m.startTime)
// Poll shared progress state for real-time updates
bytesTotal, bytesDone, description, hasUpdate, dbTotal, dbDone, speed, dbPhaseElapsed, dbAvgPerDB, currentDB, overallPhase, extractionDone, dbBytesTotal, dbBytesDone := getCurrentRestoreProgress()
// Note: dbPhaseElapsed is now calculated in realtime inside getCurrentRestoreProgress()
bytesTotal, bytesDone, description, hasUpdate, dbTotal, dbDone, speed, dbPhaseElapsed, dbAvgPerDB, currentDB, overallPhase, extractionDone, dbBytesTotal, dbBytesDone, _ := getCurrentRestoreProgress()
if hasUpdate && bytesTotal > 0 && !extractionDone {
// Phase 1: Extraction
m.bytesTotal = bytesTotal
@@ -1159,8 +1180,8 @@ func formatRestoreError(errStr string) string {
s.WriteString(" If you reduced VM size or max_connections, you need higher\n")
s.WriteString(" max_locks_per_transaction to compensate.\n\n")
s.WriteString(successStyle.Render(" FIX OPTIONS:\n"))
s.WriteString(" 1. Use 'conservative' or 'large-db' profile in Settings\n")
s.WriteString(" (press 'l' for large-db, 'c' for conservative)\n\n")
s.WriteString(" 1. Enable 'Large DB Mode' in Settings\n")
s.WriteString(" (press 'l' to toggle, reduces parallelism, increases locks)\n\n")
s.WriteString(" 2. Increase PostgreSQL locks:\n")
s.WriteString(" ALTER SYSTEM SET max_locks_per_transaction = 4096;\n")
s.WriteString(" Then RESTART PostgreSQL.\n\n")
@@ -1186,7 +1207,7 @@ func formatRestoreError(errStr string) string {
s.WriteString("\n\n")
s.WriteString(" 1. Check the full error log for details\n")
s.WriteString(" 2. Try restoring with 'conservative' profile (press 'c')\n")
s.WriteString(" 3. For large databases, use 'large-db' profile (press 'l')\n")
s.WriteString(" 3. For complex databases, enable 'Large DB Mode' (press 'l')\n")
}
s.WriteString("\n")

View File

@@ -410,6 +410,10 @@ func (m RestorePreviewModel) View() string {
} else {
s.WriteString(fmt.Sprintf(" Resource Profile: %s\n", m.config.ResourceProfile))
}
// Show Large DB Mode status
if m.config.LargeDBMode {
s.WriteString(" Large DB Mode: ON (reduced parallelism, high locks)\n")
}
s.WriteString(fmt.Sprintf(" CPU Workload: %s\n", m.config.CPUWorkloadType))
s.WriteString(fmt.Sprintf(" Cluster Parallelism: %d databases\n", m.config.ClusterParallelism))

View File

@@ -113,7 +113,7 @@ func NewSettingsModel(cfg *config.Config, log logger.Logger, parent tea.Model) S
return c.ResourceProfile
},
Update: func(c *config.Config, v string) error {
profiles := []string{"conservative", "balanced", "performance", "max-performance", "large-db"}
profiles := []string{"conservative", "balanced", "performance", "max-performance"}
currentIdx := 0
for i, p := range profiles {
if c.ResourceProfile == p {
@@ -125,7 +125,23 @@ func NewSettingsModel(cfg *config.Config, log logger.Logger, parent tea.Model) S
return c.ApplyResourceProfile(profiles[nextIdx])
},
Type: "selector",
Description: "Resource profile for backup/restore. Use 'conservative' or 'large-db' for large databases on small VMs.",
Description: "Resource profile for VM capacity. Toggle 'l' for Large DB Mode on any profile.",
},
{
Key: "large_db_mode",
DisplayName: "Large DB Mode",
Value: func(c *config.Config) string {
if c.LargeDBMode {
return "ON (↓parallelism, ↑locks)"
}
return "OFF"
},
Update: func(c *config.Config, v string) error {
c.LargeDBMode = !c.LargeDBMode
return nil
},
Type: "selector",
Description: "Enable for databases with many tables/LOBs. Reduces parallelism, increases max_locks_per_transaction.",
},
{
Key: "cluster_parallelism",
@@ -574,8 +590,8 @@ func (m SettingsModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
return m.saveSettings()
case "l":
// Quick shortcut: Apply "large-db" profile for large databases
return m.applyLargeDBProfile()
// Quick shortcut: Toggle Large DB Mode
return m.toggleLargeDBMode()
case "c":
// Quick shortcut: Apply "conservative" profile for constrained VMs
@@ -590,13 +606,20 @@ func (m SettingsModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
return m, nil
}
// applyLargeDBProfile applies the large-db profile optimized for large databases
func (m SettingsModel) applyLargeDBProfile() (tea.Model, tea.Cmd) {
if err := m.config.ApplyResourceProfile("large-db"); err != nil {
m.message = errorStyle.Render(fmt.Sprintf("[FAIL] %s", err.Error()))
return m, nil
// toggleLargeDBMode toggles the Large DB Mode flag
func (m SettingsModel) toggleLargeDBMode() (tea.Model, tea.Cmd) {
m.config.LargeDBMode = !m.config.LargeDBMode
if m.config.LargeDBMode {
profile := m.config.GetCurrentProfile()
m.message = successStyle.Render(fmt.Sprintf(
"[ON] Large DB Mode enabled: %s → Parallel=%d, Jobs=%d, MaxLocks=%d",
profile.Name, profile.ClusterParallelism, profile.Jobs, profile.MaxLocksPerTxn))
} else {
profile := m.config.GetCurrentProfile()
m.message = successStyle.Render(fmt.Sprintf(
"[OFF] Large DB Mode disabled: %s → Parallel=%d, Jobs=%d",
profile.Name, profile.ClusterParallelism, profile.Jobs))
}
m.message = successStyle.Render("[OK] Applied 'large-db' profile: Cluster=1, Jobs=2. Optimized for large DBs to avoid 'out of shared memory' errors.")
return m, nil
}
@@ -613,14 +636,19 @@ func (m SettingsModel) applyConservativeProfile() (tea.Model, tea.Cmd) {
// showProfileRecommendation displays the recommended profile based on system resources
func (m SettingsModel) showProfileRecommendation() (tea.Model, tea.Cmd) {
profileName, reason := m.config.GetResourceProfileRecommendation(false)
largeDBProfile, largeDBReason := m.config.GetResourceProfileRecommendation(true)
var largeDBHint string
if m.config.LargeDBMode {
largeDBHint = "Large DB Mode: ON"
} else {
largeDBHint = "Large DB Mode: OFF (press 'l' to enable)"
}
m.message = infoStyle.Render(fmt.Sprintf(
"[RECOMMEND] Default: %s | For Large DBs: %s\n"+
"[RECOMMEND] Profile: %s | %s\n"+
" → %s\n"+
" → Large DB: %s\n"+
" Press 'l' for large-db profile, 'c' for conservative",
profileName, largeDBProfile, reason, largeDBReason))
" Press 'l' to toggle Large DB Mode, 'c' for conservative",
profileName, largeDBHint, reason))
return m, nil
}
@@ -907,9 +935,9 @@ func (m SettingsModel) View() string {
} else {
// Show different help based on current selection
if m.cursor >= 0 && m.cursor < len(m.settings) && m.settings[m.cursor].Type == "path" {
footer = infoStyle.Render("\n[KEYS] ↑↓ navigate | Enter edit | Tab dirs | 'l' large-db | 'c' conservative | 'p' recommend | 's' save | 'q' menu")
footer = infoStyle.Render("\n[KEYS] ↑↓ navigate | Enter edit | Tab dirs | 'l' toggle LargeDB | 'c' conservative | 'p' recommend | 's' save | 'q' menu")
} else {
footer = infoStyle.Render("\n[KEYS] ↑↓ navigate | Enter edit | 'l' large-db profile | 'c' conservative | 'p' recommend | 's' save | 'r' reset | 'q' menu")
footer = infoStyle.Render("\n[KEYS] ↑↓ navigate | Enter edit | 'l' toggle LargeDB mode | 'c' conservative | 'p' recommend | 's' save | 'r' reset | 'q' menu")
}
}
}