Compare commits

..

16 Commits

Author SHA1 Message Date
a759f4d3db v3.42.72: Fix Large DB Mode not applying when changing profiles
All checks were successful
CI/CD / Test (push) Successful in 1m15s
CI/CD / Lint (push) Successful in 1m25s
CI/CD / Build & Release (push) Successful in 3m13s
Critical fix: Large DB Mode settings (reduced parallelism, increased locks)
were not being reapplied when user changed resource profiles in Settings.

This caused cluster restores to fail with 'out of shared memory' errors on
low-resource VMs even when Large DB Mode was enabled.

Now ApplyResourceProfile() properly applies LargeDBMode modifiers after
setting base profile values, ensuring parallelism stays reduced.

Example: balanced profile with Large DB Mode:
- ClusterParallelism: 4 → 2 (halved)
- MaxLocksPerTxn: 2048 → 8192 (increased)

This allows successful cluster restores on low-spec VMs.
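The ordering described above — set the base profile values first, then apply the Large DB Mode modifiers — can be sketched in a few lines of Go. The type and function names here are illustrative stand-ins, not the tool's actual `internal/cpu` API; the numbers match the commit's own example:

```go
package main

import "fmt"

// profile is a minimal stand-in for the tool's resource profile type
// (field names mirror the commit message; not the real internal/cpu type).
type profile struct {
	ClusterParallelism int
	Jobs               int
	MaxLocksPerTxn     int
}

// withLargeDBMode applies the Large DB Mode modifiers AFTER the base
// profile values are set: halve parallelism (minimum 1) and force a
// high lock-table size.
func withLargeDBMode(p profile) profile {
	p.ClusterParallelism = maxInt(1, p.ClusterParallelism/2)
	p.Jobs = maxInt(1, p.Jobs/2)
	p.MaxLocksPerTxn = 8192 // forced high for schemas with many tables/LOBs
	return p
}

func maxInt(a, b int) int {
	if a > b {
		return a
	}
	return b
}

func main() {
	base := profile{ClusterParallelism: 4, Jobs: 4, MaxLocksPerTxn: 2048}
	tuned := withLargeDBMode(base)
	fmt.Printf("%d %d %d\n", tuned.ClusterParallelism, tuned.Jobs, tuned.MaxLocksPerTxn) // 2 2 8192
}
```

The bug was that switching profiles re-ran only the base assignment, so the halved values were silently overwritten.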
2026-01-18 21:23:35 +01:00
7cf1d6f85b Update bin/README.md for v3.42.71
Some checks failed
CI/CD / Test (push) Successful in 1m15s
CI/CD / Lint (push) Successful in 1m25s
CI/CD / Build & Release (push) Has been cancelled
2026-01-18 21:17:11 +01:00
b305d1342e v3.42.71: Fix error message formatting + code alignment
All checks were successful
CI/CD / Test (push) Successful in 1m15s
CI/CD / Lint (push) Successful in 1m24s
CI/CD / Build & Release (push) Successful in 3m12s
- Fixed incorrect diagnostic message for shared memory exhaustion
  (was 'Cannot access file', now 'PostgreSQL lock table exhausted')
- Improved clarity of lock capacity formula display
- Code formatting: aligned struct field comments for consistency
- Files affected: restore_exec.go, backup_exec.go, persist.go
2026-01-18 21:13:35 +01:00
5456da7183 Update bin/README.md for v3.42.70
All checks were successful
CI/CD / Test (push) Successful in 1m15s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Has been skipped
2026-01-18 18:53:44 +01:00
f9ff45cf2a v3.42.70: TUI consistency improvements - unified backup/restore views
All checks were successful
CI/CD / Test (push) Successful in 1m15s
CI/CD / Lint (push) Successful in 1m23s
CI/CD / Build & Release (push) Successful in 3m11s
- Added phase constants (backupPhaseGlobals, backupPhaseDatabases, backupPhaseCompressing)
- Changed title from '[EXEC] Backup Execution' to '[EXEC] Cluster Backup'
- Made phase labels explicit with action verbs (Backing up Globals, Backing up Databases, Compressing Archive)
- Added realtime ETA tracking to backup phase 2 (databases) with phase2StartTime
- Moved duration display from top to bottom as 'Elapsed:' (consistent with restore)
- Standardized keys label to '[KEYS]' everywhere (was '[KEY]')
- Added timing fields: phase2StartTime, dbPhaseElapsed, dbAvgPerDB
- Created renderBackupDatabaseProgressBarWithTiming() with elapsed + ETA display
- Enhanced completion summary with archive info and throughput calculation
- Removed duplicate formatDuration() function (shared with restore_exec.go)

All 10 consistency improvements implemented (high/medium/low priority).
Backup and restore TUI views now provide unified professional UX.
2026-01-18 18:52:26 +01:00
72c06ba5c2 fix(tui): realtime ETA updates during phase 3 cluster restore
All checks were successful
CI/CD / Test (push) Successful in 1m14s
CI/CD / Lint (push) Successful in 1m24s
CI/CD / Build & Release (push) Successful in 3m10s
Previously, the ETA during phase 3 (database restores) would appear to
hang because dbPhaseElapsed was only updated when a new database started
restoring, not during the restore operation itself.

Fixed by:
- Added phase3StartTime to track when phase 3 begins
- Calculate dbPhaseElapsed in realtime using time.Since(phase3StartTime)
- ETA now updates every 100ms tick instead of only on database transitions

This ensures the elapsed time and ETA display continuously update during
long-running database restores.
2026-01-18 18:36:48 +01:00
a0a401cab1 fix(tui): suppress preflight stdout output in TUI mode to prevent scrambled display
All checks were successful
CI/CD / Test (push) Successful in 1m14s
CI/CD / Lint (push) Successful in 1m23s
CI/CD / Build & Release (push) Successful in 3m8s
- Add silentMode field to restore Engine struct
- Set silentMode=true in NewSilent() constructor for TUI mode
- Skip fmt.Println output in printPreflightSummary when in silent mode
- Log summary instead of printing to stdout in TUI mode
- Fixes scrambled output during cluster restore preflight checks
2026-01-18 18:17:00 +01:00
59a717abe7 refactor(profiles): replace large-db profile with composable LargeDBMode
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Successful in 3m15s
BREAKING CHANGE: Removed 'large-db' as standalone profile

New Design:
- Resource Profiles now purely represent VM capacity:
  conservative, balanced, performance, max-performance
- LargeDBMode is a separate boolean toggle that modifies any profile:
  - Reduces ClusterParallelism and Jobs by 50%
  - Forces MaxLocksPerTxn = 8192
  - Increases MaintenanceWorkMem

TUI Changes:
- 'l' key now toggles LargeDBMode ON/OFF instead of applying large-db profile
- New 'Large DB Mode' setting in settings menu
- Settings are persisted to .dbbackup.conf

This allows any resource profile to be combined with large database
optimization, giving users more flexibility on both small and large VMs.
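Based on the persistence keys this change adds, a `.dbbackup.conf` that combines a resource profile with the new toggle might look like the excerpt below. Only the key names (`resource_profile`, `large_db_mode`, `cluster_timeout`) are taken from the diff; the section header is an assumption:

```
# .dbbackup.conf (excerpt, hypothetical section name)
[performance]
cluster_timeout = 1440
resource_profile = balanced
large_db_mode = true
```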
2026-01-18 12:39:21 +01:00
490a12f858 feat(tui): show resource profile and CPU workload on cluster restore preview
All checks were successful
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Successful in 3m15s
- Displays current Resource Profile with parallelism settings
- Shows CPU Workload type (balanced/cpu-intensive/io-intensive)
- Shows Cluster Parallelism (number of concurrent database restores)

This helps users understand what performance settings will be used
before starting a cluster restore operation.
2026-01-18 12:19:28 +01:00
ea4337e298 fix(config): use resource profile defaults for Jobs, DumpJobs, ClusterParallelism
All checks were successful
CI/CD / Test (push) Successful in 1m16s
CI/CD / Lint (push) Successful in 1m25s
CI/CD / Build & Release (push) Successful in 3m17s
- ClusterParallelism now defaults to recommended profile (1 for small VMs)
- Jobs and DumpJobs now use profile values instead of raw CPU counts
- Small VMs (4 cores, 32GB) will now get 'conservative' or 'balanced' profile
  with lower parallelism to avoid 'out of shared memory' errors

This fixes the issue where small VMs defaulted to ClusterParallelism=2
regardless of detected resources, causing restore failures on large DBs.
2026-01-18 12:04:11 +01:00
bbd4f0ceac docs: update TUI screenshots with resource profiles and system info
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m27s
CI/CD / Build & Release (push) Has been skipped
- Added system detection display (CPU cores, memory, recommended profile)
- Added Resource Profile and Cluster Parallelism settings
- Updated hotkeys: 'l' large-db, 'c' conservative, 'p' recommend
- Added resource profiles table for large database operations
- Updated example values to reflect typical PostgreSQL setup
2026-01-18 11:55:30 +01:00
f6f8b04785 fix(tui): improve restore error display formatting
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m27s
CI/CD / Build & Release (push) Successful in 3m16s
- Parse and structure error messages for clean TUI display
- Extract error type, message, hint, and failed databases
- Show specific recommendations based on error type
- Fix for 'out of shared memory' - suggest profile settings
- Limit line width to prevent scrambled display
- Add structured sections: Error Details, Diagnosis, Recommendations
2026-01-18 11:48:07 +01:00
670c9af2e7 feat(tui): add resource profiles for backup/restore operations
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m25s
CI/CD / Build & Release (push) Successful in 3m15s
- Add memory detection for Linux, macOS, Windows
- Add 5 predefined profiles: conservative, balanced, performance, max-performance, large-db
- Add Resource Profile and Cluster Parallelism settings in TUI
- Add quick hotkeys: 'l' for large-db, 'c' for conservative, 'p' for recommendations
- Display system resources (CPU cores, memory) and recommended profile
- Auto-detect and recommend profile based on system resources

Fixes 'out of shared memory' errors when restoring large databases on small VMs.
Use 'large-db' or 'conservative' profile for large databases on constrained systems.
2026-01-18 11:38:24 +01:00
e2cf9adc62 fix: improve cleanup toggle UX when database detection fails
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m28s
CI/CD / Build & Release (push) Successful in 3m13s
- Allow cleanup toggle even when preview detection failed
- Show 'detection pending' message instead of blocking the toggle
- Will re-detect databases at restore execution time
- Always show cleanup toggle option for cluster restores
- Better messaging: 'enabled/disabled' instead of showing 0 count
2026-01-17 17:07:26 +01:00
29e089fe3b fix: re-detect databases at execution time for cluster cleanup
All checks were successful
CI/CD / Test (push) Successful in 1m16s
CI/CD / Lint (push) Successful in 1m25s
CI/CD / Build & Release (push) Successful in 3m10s
- Detection in preview may fail or return stale results
- Re-detect user databases when cleanup is enabled at execution time
- Fall back to preview list if re-detection fails
- Ensures actual databases are dropped, not just what was detected earlier
2026-01-17 17:00:28 +01:00
9396c8e605 fix: add debug logging for database detection
All checks were successful
CI/CD / Test (push) Successful in 1m19s
CI/CD / Lint (push) Successful in 1m34s
CI/CD / Build & Release (push) Successful in 3m24s
- Always set cmd.Env to preserve PGPASSWORD from environment
- Add debug logging for connection parameters and results
- Helps diagnose cluster restore database detection issues
2026-01-17 16:54:20 +01:00
12 changed files with 1246 additions and 135 deletions


@@ -194,21 +194,51 @@ r: Restore | v: Verify | i: Info | d: Diagnose | D: Delete | R: Refresh | Esc: B
 ```
 Configuration Settings
+
+[SYSTEM] Detected Resources
+CPU: 8 physical cores, 16 logical cores
+Memory: 32GB total, 28GB available
+Recommended Profile: balanced
+→ 8 cores and 32GB RAM supports moderate parallelism
+
+[CONFIG] Current Settings
+Target DB: PostgreSQL (postgres)
+Database: postgres@localhost:5432
+Backup Dir: /var/backups/postgres
+Compression: Level 6
+Profile: balanced | Cluster: 2 parallel | Jobs: 4
+
 > Database Type: postgres
 CPU Workload Type: balanced
-Backup Directory: /root/db_backups
-Work Directory: /tmp
+Resource Profile: balanced (P:2 J:4)
+Cluster Parallelism: 2
+Backup Directory: /var/backups/postgres
+Work Directory: (system temp)
 Compression Level: 6
-Parallel Jobs: 16
-Dump Jobs: 8
+Parallel Jobs: 4
+Dump Jobs: 4
 Database Host: localhost
 Database Port: 5432
-Database User: root
+Database User: postgres
 SSL Mode: prefer
-s: Save | r: Reset | q: Menu
+[KEYS] ↑↓ navigate | Enter edit | 'l' toggle LargeDB | 'c' conservative | 'p' recommend | 's' save | 'q' menu
 ```
+
+**Resource Profiles for Large Databases:**
+
+When restoring large databases on VMs with limited resources, use the resource profile settings to prevent "out of shared memory" errors:
+
+| Profile | Cluster Parallel | Jobs | Best For |
+|---------|------------------|------|----------|
+| conservative | 1 | 1 | Small VMs (<16GB RAM) |
+| balanced | 2 | 2-4 | Medium VMs (16-32GB RAM) |
+| performance | 4 | 4-8 | Large servers (32GB+ RAM) |
+| max-performance | 8 | 8-16 | High-end servers (64GB+) |
+
+**Large DB Mode:** Toggle with `l` key. Reduces parallelism by 50% and sets max_locks_per_transaction=8192 for complex databases with many tables/LOBs.
+
+**Quick shortcuts:** Press `l` to toggle Large DB Mode, `c` for conservative, `p` to show recommendation.
+
 **Database Status:**
 ```
 Database Status & Health Check


@@ -4,8 +4,8 @@ This directory contains pre-compiled binaries for the DB Backup Tool across mult
 ## Build Information
 - **Version**: 3.42.50
-- **Build Time**: 2026-01-17_15:26:14_UTC
-- **Git Commit**: df1ab2f
+- **Build Time**: 2026-01-18_20:14:09_UTC
+- **Git Commit**: b305d13
 ## Recent Updates (v1.1.0)
 - ✅ Fixed TUI progress display with line-by-line output


@@ -36,9 +36,14 @@ type Config struct {
 	AutoDetectCores bool
 	CPUWorkloadType string // "cpu-intensive", "io-intensive", "balanced"
 
+	// Resource profile for backup/restore operations
+	ResourceProfile string // "conservative", "balanced", "performance", "max-performance"
+	LargeDBMode     bool   // Enable large database mode (reduces parallelism, increases max_locks)
+
 	// CPU detection
 	CPUDetector *cpu.Detector
 	CPUInfo     *cpu.CPUInfo
+	MemoryInfo  *cpu.MemoryInfo // System memory information
 
 	// Sample backup options
 	SampleStrategy string // "ratio", "percent", "count"
@@ -178,6 +183,13 @@ func New() *Config {
 		sslMode = ""
 	}
 
+	// Detect memory information
+	memInfo, _ := cpu.DetectMemory()
+
+	// Determine recommended resource profile
+	recommendedProfile := cpu.RecommendProfile(cpuInfo, memInfo, false)
+	defaultProfile := getEnvString("RESOURCE_PROFILE", recommendedProfile.Name)
+
 	cfg := &Config{
 		// Database defaults
 		Host: host,
@@ -189,18 +201,21 @@ func New() *Config {
 		SSLMode:  sslMode,
 		Insecure: getEnvBool("INSECURE", false),
 
-		// Backup defaults
+		// Backup defaults - use recommended profile's settings for small VMs
 		BackupDir:        backupDir,
 		CompressionLevel: getEnvInt("COMPRESS_LEVEL", 6),
-		Jobs:             getEnvInt("JOBS", getDefaultJobs(cpuInfo)),
-		DumpJobs:         getEnvInt("DUMP_JOBS", getDefaultDumpJobs(cpuInfo)),
+		Jobs:             getEnvInt("JOBS", recommendedProfile.Jobs),
+		DumpJobs:         getEnvInt("DUMP_JOBS", recommendedProfile.DumpJobs),
 		MaxCores:         getEnvInt("MAX_CORES", getDefaultMaxCores(cpuInfo)),
 		AutoDetectCores:  getEnvBool("AUTO_DETECT_CORES", true),
 		CPUWorkloadType:  getEnvString("CPU_WORKLOAD_TYPE", "balanced"),
+		ResourceProfile:  defaultProfile,
+		LargeDBMode:      getEnvBool("LARGE_DB_MODE", false),
 
-		// CPU detection
+		// CPU and memory detection
 		CPUDetector: cpuDetector,
 		CPUInfo:     cpuInfo,
+		MemoryInfo:  memInfo,
 
 		// Sample backup defaults
 		SampleStrategy: getEnvString("SAMPLE_STRATEGY", "ratio"),
@@ -220,8 +235,8 @@ func New() *Config {
 		// Timeouts - default 24 hours (1440 min) to handle very large databases with large objects
 		ClusterTimeoutMinutes: getEnvInt("CLUSTER_TIMEOUT_MIN", 1440),
 
-		// Cluster parallelism (default: 2 concurrent operations for faster cluster backup/restore)
-		ClusterParallelism: getEnvInt("CLUSTER_PARALLELISM", 2),
+		// Cluster parallelism - use recommended profile's setting for small VMs
+		ClusterParallelism: getEnvInt("CLUSTER_PARALLELISM", recommendedProfile.ClusterParallelism),
 
 		// Working directory for large operations (default: system temp)
 		WorkDir: getEnvString("WORK_DIR", ""),
@@ -409,6 +424,62 @@ func (c *Config) OptimizeForCPU() error {
 	return nil
 }
 
+// ApplyResourceProfile applies a resource profile to the configuration
+// This adjusts parallelism settings based on the chosen profile
+func (c *Config) ApplyResourceProfile(profileName string) error {
+	profile := cpu.GetProfileByName(profileName)
+	if profile == nil {
+		return &ConfigError{
+			Field:   "resource_profile",
+			Value:   profileName,
+			Message: "unknown profile. Valid profiles: conservative, balanced, performance, max-performance",
+		}
+	}
+
+	// Validate profile against current system
+	isValid, warnings := cpu.ValidateProfileForSystem(profile, c.CPUInfo, c.MemoryInfo)
+	if !isValid {
+		// Log warnings but don't block - user may know what they're doing
+		_ = warnings // In production, log these warnings
+	}
+
+	// Apply profile settings
+	c.ResourceProfile = profile.Name
+
+	// If LargeDBMode is enabled, apply its modifiers
+	if c.LargeDBMode {
+		profile = cpu.ApplyLargeDBMode(profile)
+	}
+
+	c.ClusterParallelism = profile.ClusterParallelism
+	c.Jobs = profile.Jobs
+	c.DumpJobs = profile.DumpJobs
+
+	return nil
+}
+
+// GetResourceProfileRecommendation returns the recommended profile and reason
+func (c *Config) GetResourceProfileRecommendation(isLargeDB bool) (string, string) {
+	profile, reason := cpu.RecommendProfileWithReason(c.CPUInfo, c.MemoryInfo, isLargeDB)
+	return profile.Name, reason
+}
+
+// GetCurrentProfile returns the current resource profile details
+// If LargeDBMode is enabled, returns a modified profile with reduced parallelism
+func (c *Config) GetCurrentProfile() *cpu.ResourceProfile {
+	profile := cpu.GetProfileByName(c.ResourceProfile)
+	if profile == nil {
+		return nil
+	}
+
+	// Apply LargeDBMode modifier if enabled
+	if c.LargeDBMode {
+		return cpu.ApplyLargeDBMode(profile)
+	}
+
+	return profile
+}
+
 // GetCPUInfo returns CPU information, detecting if necessary
 func (c *Config) GetCPUInfo() (*cpu.CPUInfo, error) {
 	if c.CPUInfo != nil {


@@ -28,9 +28,11 @@ type LocalConfig struct {
 	DumpJobs int
 
 	// Performance settings
 	CPUWorkload     string
 	MaxCores        int
 	ClusterTimeout  int // Cluster operation timeout in minutes (default: 1440 = 24 hours)
+	ResourceProfile string
+	LargeDBMode     bool // Enable large database mode (reduces parallelism, increases locks)
 
 	// Security settings
 	RetentionDays int
@@ -126,6 +128,10 @@ func LoadLocalConfig() (*LocalConfig, error) {
 			if ct, err := strconv.Atoi(value); err == nil {
 				cfg.ClusterTimeout = ct
 			}
+		case "resource_profile":
+			cfg.ResourceProfile = value
+		case "large_db_mode":
+			cfg.LargeDBMode = value == "true" || value == "1"
 		}
 	case "security":
 		switch key {
@@ -207,6 +213,12 @@ func SaveLocalConfig(cfg *LocalConfig) error {
 	if cfg.ClusterTimeout != 0 {
 		sb.WriteString(fmt.Sprintf("cluster_timeout = %d\n", cfg.ClusterTimeout))
 	}
+	if cfg.ResourceProfile != "" {
+		sb.WriteString(fmt.Sprintf("resource_profile = %s\n", cfg.ResourceProfile))
+	}
+	if cfg.LargeDBMode {
+		sb.WriteString("large_db_mode = true\n")
+	}
 	sb.WriteString("\n")
 
 	// Security section
@@ -280,6 +292,14 @@ func ApplyLocalConfig(cfg *Config, local *LocalConfig) {
 	if local.ClusterTimeout != 0 {
 		cfg.ClusterTimeoutMinutes = local.ClusterTimeout
 	}
+	// Apply resource profile settings
+	if local.ResourceProfile != "" {
+		cfg.ResourceProfile = local.ResourceProfile
+	}
+	// LargeDBMode is a boolean - apply if true in config
+	if local.LargeDBMode {
+		cfg.LargeDBMode = true
+	}
 	if cfg.RetentionDays == 30 && local.RetentionDays != 0 {
 		cfg.RetentionDays = local.RetentionDays
 	}
@@ -294,22 +314,24 @@ func ApplyLocalConfig(cfg *Config, local *LocalConfig) {
 // ConfigFromConfig creates a LocalConfig from a Config
 func ConfigFromConfig(cfg *Config) *LocalConfig {
 	return &LocalConfig{
 		DBType:         cfg.DatabaseType,
 		Host:           cfg.Host,
 		Port:           cfg.Port,
 		User:           cfg.User,
 		Database:       cfg.Database,
 		SSLMode:        cfg.SSLMode,
 		BackupDir:      cfg.BackupDir,
 		WorkDir:        cfg.WorkDir,
 		Compression:    cfg.CompressionLevel,
 		Jobs:           cfg.Jobs,
 		DumpJobs:       cfg.DumpJobs,
 		CPUWorkload:    cfg.CPUWorkloadType,
 		MaxCores:       cfg.MaxCores,
 		ClusterTimeout: cfg.ClusterTimeoutMinutes,
-		RetentionDays:  cfg.RetentionDays,
-		MinBackups:     cfg.MinBackups,
-		MaxRetries:     cfg.MaxRetries,
+		ResourceProfile: cfg.ResourceProfile,
+		LargeDBMode:     cfg.LargeDBMode,
+		RetentionDays:   cfg.RetentionDays,
+		MinBackups:      cfg.MinBackups,
+		MaxRetries:      cfg.MaxRetries,
 	}
 }

internal/cpu/profiles.go (new file, +475 lines)

@@ -0,0 +1,475 @@
package cpu
import (
"bufio"
"fmt"
"os"
"os/exec"
"runtime"
"strconv"
"strings"
)
// MemoryInfo holds system memory information
type MemoryInfo struct {
TotalBytes int64 `json:"total_bytes"`
AvailableBytes int64 `json:"available_bytes"`
FreeBytes int64 `json:"free_bytes"`
UsedBytes int64 `json:"used_bytes"`
SwapTotalBytes int64 `json:"swap_total_bytes"`
SwapFreeBytes int64 `json:"swap_free_bytes"`
TotalGB int `json:"total_gb"`
AvailableGB int `json:"available_gb"`
Platform string `json:"platform"`
}
// ResourceProfile defines a resource allocation profile for backup/restore operations
type ResourceProfile struct {
Name string `json:"name"`
Description string `json:"description"`
ClusterParallelism int `json:"cluster_parallelism"` // Concurrent databases
Jobs int `json:"jobs"` // Parallel jobs within pg_restore
DumpJobs int `json:"dump_jobs"` // Parallel jobs for pg_dump
MaintenanceWorkMem string `json:"maintenance_work_mem"` // PostgreSQL recommendation
MaxLocksPerTxn int `json:"max_locks_per_txn"` // PostgreSQL recommendation
RecommendedForLarge bool `json:"recommended_for_large"` // Suitable for large DBs?
MinMemoryGB int `json:"min_memory_gb"` // Minimum memory for this profile
MinCores int `json:"min_cores"` // Minimum cores for this profile
}
// Predefined resource profiles
var (
// ProfileConservative - Safe for constrained VMs, avoids shared memory issues
ProfileConservative = ResourceProfile{
Name: "conservative",
Description: "Safe for small VMs (2-4 cores, <16GB). Sequential operations, minimal memory pressure. Best for large DBs on limited hardware.",
ClusterParallelism: 1,
Jobs: 1,
DumpJobs: 2,
MaintenanceWorkMem: "256MB",
MaxLocksPerTxn: 4096,
RecommendedForLarge: true,
MinMemoryGB: 4,
MinCores: 2,
}
// ProfileBalanced - Default profile, works for most scenarios
ProfileBalanced = ResourceProfile{
Name: "balanced",
Description: "Balanced for medium VMs (4-8 cores, 16-32GB). Moderate parallelism with good safety margin.",
ClusterParallelism: 2,
Jobs: 2,
DumpJobs: 4,
MaintenanceWorkMem: "512MB",
MaxLocksPerTxn: 2048,
RecommendedForLarge: true,
MinMemoryGB: 16,
MinCores: 4,
}
// ProfilePerformance - Aggressive parallelism for powerful servers
ProfilePerformance = ResourceProfile{
Name: "performance",
Description: "Aggressive for powerful servers (8+ cores, 32GB+). Maximum parallelism for fast operations.",
ClusterParallelism: 4,
Jobs: 4,
DumpJobs: 8,
MaintenanceWorkMem: "1GB",
MaxLocksPerTxn: 1024,
RecommendedForLarge: false, // Large DBs may still need conservative
MinMemoryGB: 32,
MinCores: 8,
}
// ProfileMaxPerformance - Maximum parallelism for high-end servers
ProfileMaxPerformance = ResourceProfile{
Name: "max-performance",
Description: "Maximum for high-end servers (16+ cores, 64GB+). Full CPU utilization.",
ClusterParallelism: 8,
Jobs: 8,
DumpJobs: 16,
MaintenanceWorkMem: "2GB",
MaxLocksPerTxn: 512,
RecommendedForLarge: false, // Large DBs should use LargeDBMode
MinMemoryGB: 64,
MinCores: 16,
}
// AllProfiles contains all available profiles (VM resource-based)
AllProfiles = []ResourceProfile{
ProfileConservative,
ProfileBalanced,
ProfilePerformance,
ProfileMaxPerformance,
}
)
// GetProfileByName returns a profile by its name
func GetProfileByName(name string) *ResourceProfile {
for _, p := range AllProfiles {
if strings.EqualFold(p.Name, name) {
return &p
}
}
return nil
}
// ApplyLargeDBMode modifies a profile for large database operations.
// This is a modifier that reduces parallelism and increases max_locks_per_transaction
// to prevent "out of shared memory" errors with large databases (many tables, LOBs, etc.).
// It returns a new profile with adjusted settings, leaving the original unchanged.
func ApplyLargeDBMode(profile *ResourceProfile) *ResourceProfile {
if profile == nil {
return nil
}
// Create a copy with adjusted settings
modified := *profile
// Append a "+large-db" suffix to the name to indicate this is a modified profile
modified.Name = profile.Name + " +large-db"
modified.Description = fmt.Sprintf("%s [LargeDBMode: reduced parallelism, high locks]", profile.Description)
// Reduce parallelism to avoid lock exhaustion
// Rule: halve parallelism, minimum 1
modified.ClusterParallelism = max(1, profile.ClusterParallelism/2)
modified.Jobs = max(1, profile.Jobs/2)
modified.DumpJobs = max(2, profile.DumpJobs/2)
// Force high max_locks_per_transaction for large schemas
modified.MaxLocksPerTxn = 8192
// Keep or boost maintenance_work_mem for complex operations
modified.MaintenanceWorkMem = "1GB"
if profile.MinMemoryGB >= 32 {
modified.MaintenanceWorkMem = "2GB"
}
modified.RecommendedForLarge = true
return &modified
}
// max returns the larger of two integers
func max(a, b int) int {
if a > b {
return a
}
return b
}
// DetectMemory detects system memory information
func DetectMemory() (*MemoryInfo, error) {
info := &MemoryInfo{
Platform: runtime.GOOS,
}
switch runtime.GOOS {
case "linux":
if err := detectLinuxMemory(info); err != nil {
return info, fmt.Errorf("linux memory detection failed: %w", err)
}
case "darwin":
if err := detectDarwinMemory(info); err != nil {
return info, fmt.Errorf("darwin memory detection failed: %w", err)
}
case "windows":
if err := detectWindowsMemory(info); err != nil {
return info, fmt.Errorf("windows memory detection failed: %w", err)
}
default:
// Fallback: use Go runtime memory stats
var memStats runtime.MemStats
runtime.ReadMemStats(&memStats)
info.TotalBytes = int64(memStats.Sys)
info.AvailableBytes = int64(memStats.Sys - memStats.Alloc)
}
// Calculate GB values
info.TotalGB = int(info.TotalBytes / (1024 * 1024 * 1024))
info.AvailableGB = int(info.AvailableBytes / (1024 * 1024 * 1024))
return info, nil
}
// detectLinuxMemory reads memory info from /proc/meminfo
func detectLinuxMemory(info *MemoryInfo) error {
file, err := os.Open("/proc/meminfo")
if err != nil {
return err
}
defer file.Close()
scanner := bufio.NewScanner(file)
for scanner.Scan() {
line := scanner.Text()
parts := strings.Fields(line)
if len(parts) < 2 {
continue
}
key := strings.TrimSuffix(parts[0], ":")
value, err := strconv.ParseInt(parts[1], 10, 64)
if err != nil {
continue
}
// Values are in kB
valueBytes := value * 1024
switch key {
case "MemTotal":
info.TotalBytes = valueBytes
case "MemAvailable":
info.AvailableBytes = valueBytes
case "MemFree":
info.FreeBytes = valueBytes
case "SwapTotal":
info.SwapTotalBytes = valueBytes
case "SwapFree":
info.SwapFreeBytes = valueBytes
}
}
info.UsedBytes = info.TotalBytes - info.AvailableBytes
return scanner.Err()
}
// detectDarwinMemory detects memory on macOS
func detectDarwinMemory(info *MemoryInfo) error {
// Use sysctl for total memory
if output, err := runCommand("sysctl", "-n", "hw.memsize"); err == nil {
if val, err := strconv.ParseInt(strings.TrimSpace(output), 10, 64); err == nil {
info.TotalBytes = val
}
}
// Use vm_stat for available memory (more complex parsing required)
if output, err := runCommand("vm_stat"); err == nil {
pageSize := int64(4096) // Default page size
var freePages, inactivePages int64
lines := strings.Split(output, "\n")
for _, line := range lines {
if strings.Contains(line, "page size of") {
parts := strings.Fields(line)
for i, p := range parts {
if p == "of" && i+1 < len(parts) {
if ps, err := strconv.ParseInt(parts[i+1], 10, 64); err == nil {
pageSize = ps
}
}
}
} else if strings.Contains(line, "Pages free:") {
val := extractNumberFromLine(line)
freePages = val
} else if strings.Contains(line, "Pages inactive:") {
val := extractNumberFromLine(line)
inactivePages = val
}
}
info.FreeBytes = freePages * pageSize
info.AvailableBytes = (freePages + inactivePages) * pageSize
}
info.UsedBytes = info.TotalBytes - info.AvailableBytes
return nil
}
// detectWindowsMemory detects memory on Windows
func detectWindowsMemory(info *MemoryInfo) error {
// Use wmic for memory info
if output, err := runCommand("wmic", "OS", "get", "TotalVisibleMemorySize,FreePhysicalMemory", "/format:list"); err == nil {
lines := strings.Split(output, "\n")
for _, line := range lines {
line = strings.TrimSpace(line)
if strings.HasPrefix(line, "TotalVisibleMemorySize=") {
val := strings.TrimPrefix(line, "TotalVisibleMemorySize=")
if v, err := strconv.ParseInt(strings.TrimSpace(val), 10, 64); err == nil {
info.TotalBytes = v * 1024 // KB to bytes
}
} else if strings.HasPrefix(line, "FreePhysicalMemory=") {
val := strings.TrimPrefix(line, "FreePhysicalMemory=")
if v, err := strconv.ParseInt(strings.TrimSpace(val), 10, 64); err == nil {
info.FreeBytes = v * 1024
info.AvailableBytes = v * 1024
}
}
}
}
info.UsedBytes = info.TotalBytes - info.AvailableBytes
return nil
}
// RecommendProfile recommends a resource profile based on system resources and workload
func RecommendProfile(cpuInfo *CPUInfo, memInfo *MemoryInfo, isLargeDB bool) *ResourceProfile {
cores := 0
if cpuInfo != nil {
cores = cpuInfo.PhysicalCores
if cores == 0 {
cores = cpuInfo.LogicalCores
}
}
if cores == 0 {
cores = runtime.NumCPU()
}
memGB := 0
if memInfo != nil {
memGB = memInfo.TotalGB
}
// Special case: large databases should use conservative profile
// The caller should also enable LargeDBMode for increased MaxLocksPerTxn
if isLargeDB {
// For large DBs, recommend conservative regardless of resources
// LargeDBMode flag will handle the lock settings separately
return &ProfileConservative
}
// Resource-based selection
if cores >= 16 && memGB >= 64 {
return &ProfileMaxPerformance
} else if cores >= 8 && memGB >= 32 {
return &ProfilePerformance
} else if cores >= 4 && memGB >= 16 {
return &ProfileBalanced
}
// Default to conservative for constrained systems
return &ProfileConservative
}
// RecommendProfileWithReason returns a profile recommendation with explanation
func RecommendProfileWithReason(cpuInfo *CPUInfo, memInfo *MemoryInfo, isLargeDB bool) (*ResourceProfile, string) {
	cores := 0
	if cpuInfo != nil {
		cores = cpuInfo.PhysicalCores
		if cores == 0 {
			cores = cpuInfo.LogicalCores
		}
	}
	if cores == 0 {
		cores = runtime.NumCPU()
	}

	memGB := 0
	if memInfo != nil {
		memGB = memInfo.TotalGB
	}

	// Build reason string
	var reason strings.Builder
	reason.WriteString(fmt.Sprintf("System: %d cores, %dGB RAM. ", cores, memGB))

	profile := RecommendProfile(cpuInfo, memInfo, isLargeDB)

	if isLargeDB {
		reason.WriteString("Large database mode - using conservative settings. Enable LargeDBMode for higher max_locks.")
	} else if profile.Name == "conservative" {
		reason.WriteString("Limited resources detected - using conservative profile for stability.")
	} else if profile.Name == "max-performance" {
		reason.WriteString("High-end server detected - using maximum parallelism.")
	} else if profile.Name == "performance" {
		reason.WriteString("Good resources detected - using performance profile.")
	} else {
		reason.WriteString("Using balanced profile for optimal performance/stability trade-off.")
	}

	return profile, reason.String()
}
// ValidateProfileForSystem checks if a profile is suitable for the current system
func ValidateProfileForSystem(profile *ResourceProfile, cpuInfo *CPUInfo, memInfo *MemoryInfo) (bool, []string) {
	var warnings []string

	cores := 0
	if cpuInfo != nil {
		cores = cpuInfo.PhysicalCores
		if cores == 0 {
			cores = cpuInfo.LogicalCores
		}
	}
	if cores == 0 {
		cores = runtime.NumCPU()
	}

	memGB := 0
	if memInfo != nil {
		memGB = memInfo.TotalGB
	}

	// Check minimum requirements
	if cores < profile.MinCores {
		warnings = append(warnings,
			fmt.Sprintf("Profile '%s' recommends %d+ cores (system has %d)", profile.Name, profile.MinCores, cores))
	}
	if memGB < profile.MinMemoryGB {
		warnings = append(warnings,
			fmt.Sprintf("Profile '%s' recommends %dGB+ RAM (system has %dGB)", profile.Name, profile.MinMemoryGB, memGB))
	}

	// Check for potential issues
	if profile.ClusterParallelism > cores {
		warnings = append(warnings,
			fmt.Sprintf("Cluster parallelism (%d) exceeds CPU cores (%d) - may cause contention",
				profile.ClusterParallelism, cores))
	}

	// Memory pressure warning
	memPerWorker := 2 // Rough estimate: 2GB per parallel worker for large DB operations
	requiredMem := profile.ClusterParallelism * profile.Jobs * memPerWorker
	if memGB > 0 && requiredMem > memGB {
		warnings = append(warnings,
			fmt.Sprintf("High parallelism may require ~%dGB RAM (system has %dGB) - risk of OOM",
				requiredMem, memGB))
	}

	return len(warnings) == 0, warnings
}
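The memory-pressure check above is a simple product: parallel databases × jobs per database × an assumed ~2GB per worker. As a standalone sketch (the function name `estimateWorkerMemGB` is hypothetical; the real check lives inline in ValidateProfileForSystem):

```go
package main

import "fmt"

// estimateWorkerMemGB mirrors the rough memory-pressure estimate used by
// ValidateProfileForSystem: ~2GB per parallel worker (hypothetical helper).
func estimateWorkerMemGB(clusterParallelism, jobs int) int {
	const memPerWorkerGB = 2 // rough per-worker estimate, as in the check above
	return clusterParallelism * jobs * memPerWorkerGB
}

func main() {
	// 4 parallel databases × 4 jobs each × 2GB ≈ 32GB of working memory,
	// so this configuration would trigger the OOM warning on a 16GB system.
	fmt.Println(estimateWorkerMemGB(4, 4))
}
```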
// FormatProfileSummary returns a formatted summary of a profile
func (p *ResourceProfile) FormatProfileSummary() string {
	return fmt.Sprintf("[%s] Parallel: %d DBs, %d jobs | Recommended for large DBs: %v",
		strings.ToUpper(p.Name),
		p.ClusterParallelism,
		p.Jobs,
		p.RecommendedForLarge)
}

// PostgreSQLRecommendations returns PostgreSQL configuration recommendations for this profile
func (p *ResourceProfile) PostgreSQLRecommendations() []string {
	return []string{
		fmt.Sprintf("ALTER SYSTEM SET max_locks_per_transaction = %d;", p.MaxLocksPerTxn),
		fmt.Sprintf("ALTER SYSTEM SET maintenance_work_mem = '%s';", p.MaintenanceWorkMem),
		"-- Restart PostgreSQL after changes to max_locks_per_transaction",
	}
}
// Helper functions

func runCommand(name string, args ...string) (string, error) {
	cmd := exec.Command(name, args...)
	output, err := cmd.Output()
	if err != nil {
		return "", err
	}
	return string(output), nil
}

func extractNumberFromLine(line string) int64 {
	// Extract number before the period at end (e.g., "Pages free: 123456.")
	parts := strings.Fields(line)
	for _, p := range parts {
		p = strings.TrimSuffix(p, ".")
		if val, err := strconv.ParseInt(p, 10, 64); err == nil && val > 0 {
			return val
		}
	}
	return 0
}
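The threshold table in RecommendProfile can be exercised on its own. Below is a sketch under the assumption that a profile reduces to its name string (`pickProfile` is a hypothetical stand-in; the real function returns `*ResourceProfile` values):

```go
package main

import "fmt"

// pickProfile restates RecommendProfile's selection rules with plain strings.
func pickProfile(cores, memGB int, isLargeDB bool) string {
	switch {
	case isLargeDB:
		// Large DBs always get conservative; LargeDBMode handles lock settings.
		return "conservative"
	case cores >= 16 && memGB >= 64:
		return "max-performance"
	case cores >= 8 && memGB >= 32:
		return "performance"
	case cores >= 4 && memGB >= 16:
		return "balanced"
	default:
		return "conservative"
	}
}

func main() {
	fmt.Println(pickProfile(8, 32, false))  // meets the 8-core/32GB tier
	fmt.Println(pickProfile(32, 128, true)) // large DB overrides raw resources
}
```

Note that both cores AND memory must clear a tier's bar; a 16-core/16GB box still lands on "balanced".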


@@ -50,6 +50,7 @@ type Engine struct {
 	progress         progress.Indicator
 	detailedReporter *progress.DetailedReporter
 	dryRun           bool
+	silentMode       bool   // Suppress stdout output (for TUI mode)
 	debugLogPath     string // Path to save debug log on error

 	// TUI progress callback for detailed progress reporting
@@ -86,6 +87,7 @@ func NewSilent(cfg *config.Config, log logger.Logger, db database.Database) *Eng
 		progress:         progressIndicator,
 		detailedReporter: detailedReporter,
 		dryRun:           false,
+		silentMode:       true, // Suppress stdout for TUI
 	}
 }


@@ -542,7 +542,20 @@ func (e *Engine) calculateRecommendedParallel(result *PreflightResult) int {
 }

 // printPreflightSummary prints a nice summary of all checks
+// In silent mode (TUI), this is skipped and results are logged instead
 func (e *Engine) printPreflightSummary(result *PreflightResult) {
+	// In TUI/silent mode, don't print to stdout - it causes scrambled output
+	if e.silentMode {
+		// Log summary instead for debugging
+		e.log.Info("Preflight checks complete",
+			"can_proceed", result.CanProceed,
+			"warnings", len(result.Warnings),
+			"errors", len(result.Errors),
+			"total_blobs", result.Archive.TotalBlobCount,
+			"recommended_locks", result.Archive.RecommendedLockBoost)
+		return
+	}
+
 	fmt.Println()
 	fmt.Println(strings.Repeat("─", 60))
 	fmt.Println("  PREFLIGHT CHECKS")


@@ -417,10 +417,14 @@ func (s *Safety) listPostgresUserDatabases(ctx context.Context) ([]string, error
 	cmd := exec.CommandContext(ctx, "psql", args...)

-	// Set password if provided
+	// Set password - check config first, then environment
+	env := os.Environ()
 	if s.cfg.Password != "" {
-		cmd.Env = append(os.Environ(), fmt.Sprintf("PGPASSWORD=%s", s.cfg.Password))
+		env = append(env, fmt.Sprintf("PGPASSWORD=%s", s.cfg.Password))
 	}
+	cmd.Env = env
+
+	s.log.Debug("Listing PostgreSQL databases", "host", host, "port", s.cfg.Port, "user", s.cfg.User)

 	output, err := cmd.CombinedOutput()
 	if err != nil {
@@ -438,6 +442,8 @@ func (s *Safety) listPostgresUserDatabases(ctx context.Context) ([]string, error
 		}
 	}

+	s.log.Debug("Found user databases", "count", len(databases), "databases", databases, "raw_output", string(output))
+
 	return databases, nil
 }


@@ -13,6 +13,14 @@ import (
 	"dbbackup/internal/config"
 	"dbbackup/internal/database"
 	"dbbackup/internal/logger"
+	"path/filepath"
 )

+// Backup phase constants for consistency
+const (
+	backupPhaseGlobals     = 1
+	backupPhaseDatabases   = 2
+	backupPhaseCompressing = 3
+)
+
 // BackupExecutionModel handles backup execution with progress
@@ -31,27 +39,36 @@ type BackupExecutionModel struct {
 	cancelling   bool // True when user has requested cancellation
 	err          error
 	result       string
+	archivePath  string // Path to created archive (for summary)
+	archiveSize  int64  // Size of created archive (for summary)
 	startTime    time.Time
+	elapsed      time.Duration // Final elapsed time
 	details      []string
 	spinnerFrame int

 	// Database count progress (for cluster backup)
 	dbTotal      int
 	dbDone       int
 	dbName       string // Current database being backed up
 	overallPhase int    // 1=globals, 2=databases, 3=compressing
 	phaseDesc    string // Description of current phase
+	phase2StartTime time.Time     // When phase 2 (databases) started (for realtime ETA)
+	dbPhaseElapsed  time.Duration // Elapsed time since database backup phase started
+	dbAvgPerDB      time.Duration // Average time per database backup
 }

 // sharedBackupProgressState holds progress state that can be safely accessed from callbacks
 type sharedBackupProgressState struct {
 	mu           sync.Mutex
 	dbTotal      int
 	dbDone       int
 	dbName       string
 	overallPhase int    // 1=globals, 2=databases, 3=compressing
 	phaseDesc    string // Description of current phase
 	hasUpdate    bool
+	phase2StartTime time.Time     // When phase 2 started (for realtime ETA calculation)
+	dbPhaseElapsed  time.Duration // Elapsed time since database backup phase started
+	dbAvgPerDB      time.Duration // Average time per database backup
 }

 // Package-level shared progress state for backup operations
@@ -72,12 +89,12 @@ func clearCurrentBackupProgress() {
 	currentBackupProgressState = nil
 }

-func getCurrentBackupProgress() (dbTotal, dbDone int, dbName string, overallPhase int, phaseDesc string, hasUpdate bool) {
+func getCurrentBackupProgress() (dbTotal, dbDone int, dbName string, overallPhase int, phaseDesc string, hasUpdate bool, dbPhaseElapsed, dbAvgPerDB time.Duration, phase2StartTime time.Time) {
 	currentBackupProgressMu.Lock()
 	defer currentBackupProgressMu.Unlock()
 	if currentBackupProgressState == nil {
-		return 0, 0, "", 0, "", false
+		return 0, 0, "", 0, "", false, 0, 0, time.Time{}
 	}
 	currentBackupProgressState.mu.Lock()
@@ -86,9 +103,17 @@ func getCurrentBackupProgress() (dbTotal, dbDone int, dbName string, overallPhas
 	hasUpdate = currentBackupProgressState.hasUpdate
 	currentBackupProgressState.hasUpdate = false

+	// Calculate realtime phase elapsed if we have a phase 2 start time
+	dbPhaseElapsed = currentBackupProgressState.dbPhaseElapsed
+	if !currentBackupProgressState.phase2StartTime.IsZero() {
+		dbPhaseElapsed = time.Since(currentBackupProgressState.phase2StartTime)
+	}
+
 	return currentBackupProgressState.dbTotal, currentBackupProgressState.dbDone,
 		currentBackupProgressState.dbName, currentBackupProgressState.overallPhase,
-		currentBackupProgressState.phaseDesc, hasUpdate
+		currentBackupProgressState.phaseDesc, hasUpdate,
+		dbPhaseElapsed, currentBackupProgressState.dbAvgPerDB,
+		currentBackupProgressState.phase2StartTime
 }

 func NewBackupExecution(cfg *config.Config, log logger.Logger, parent tea.Model, ctx context.Context, backupType, dbName string, ratio int) BackupExecutionModel {
@@ -132,8 +157,11 @@ type backupProgressMsg struct {
 }

 type backupCompleteMsg struct {
 	result      string
 	err         error
+	archivePath string
+	archiveSize int64
+	elapsed     time.Duration
 }

 func executeBackupWithTUIProgress(parentCtx context.Context, cfg *config.Config, log logger.Logger, backupType, dbName string, ratio int) tea.Cmd {
@@ -176,9 +204,13 @@ func executeBackupWithTUIProgress(parentCtx context.Context, cfg *config.Config,
 		progressState.dbDone = done
 		progressState.dbTotal = total
 		progressState.dbName = currentDB
-		progressState.overallPhase = 2 // Phase 2: Backing up databases
-		progressState.phaseDesc = fmt.Sprintf("Phase 2/3: Databases (%d/%d)", done, total)
+		progressState.overallPhase = backupPhaseDatabases
+		progressState.phaseDesc = fmt.Sprintf("Phase 2/3: Backing up Databases (%d/%d)", done, total)
 		progressState.hasUpdate = true
+		// Set phase 2 start time on first callback (for realtime ETA calculation)
+		if progressState.phase2StartTime.IsZero() {
+			progressState.phase2StartTime = time.Now()
+		}
 		progressState.mu.Unlock()
 	})
@@ -216,8 +248,9 @@ func executeBackupWithTUIProgress(parentCtx context.Context, cfg *config.Config,
 		}

 		return backupCompleteMsg{
 			result:  result,
 			err:     nil,
+			elapsed: elapsed,
 		}
 	}
 }
@@ -230,13 +263,15 @@ func (m BackupExecutionModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 		m.spinnerFrame = (m.spinnerFrame + 1) % len(spinnerFrames)

 		// Poll for database progress updates from callbacks
-		dbTotal, dbDone, dbName, overallPhase, phaseDesc, hasUpdate := getCurrentBackupProgress()
+		dbTotal, dbDone, dbName, overallPhase, phaseDesc, hasUpdate, dbPhaseElapsed, dbAvgPerDB, _ := getCurrentBackupProgress()
 		if hasUpdate {
 			m.dbTotal = dbTotal
 			m.dbDone = dbDone
 			m.dbName = dbName
 			m.overallPhase = overallPhase
 			m.phaseDesc = phaseDesc
+			m.dbPhaseElapsed = dbPhaseElapsed
+			m.dbAvgPerDB = dbAvgPerDB
 		}

 		// Update status based on progress and elapsed time
@@ -284,6 +319,7 @@ func (m BackupExecutionModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 		m.done = true
 		m.err = msg.err
 		m.result = msg.result
+		m.elapsed = msg.elapsed
 		if m.err == nil {
 			m.status = "[OK] Backup completed successfully!"
 		} else {
@@ -361,14 +397,52 @@ func renderBackupDatabaseProgressBar(done, total int, dbName string, width int)
 	return fmt.Sprintf("  Database: [%s] %d/%d", bar, done, total)
 }

+// renderBackupDatabaseProgressBarWithTiming renders database backup progress with ETA
+func renderBackupDatabaseProgressBarWithTiming(done, total int, dbPhaseElapsed, dbAvgPerDB time.Duration) string {
+	if total == 0 {
+		return ""
+	}
+
+	// Calculate progress percentage
+	percent := float64(done) / float64(total)
+	if percent > 1.0 {
+		percent = 1.0
+	}
+
+	// Build progress bar
+	barWidth := 50
+	filled := int(float64(barWidth) * percent)
+	if filled > barWidth {
+		filled = barWidth
+	}
+	bar := strings.Repeat("█", filled) + strings.Repeat("░", barWidth-filled)
+
+	// Calculate ETA similar to restore
+	var etaStr string
+	if done > 0 && done < total {
+		avgPerDB := dbPhaseElapsed / time.Duration(done)
+		remaining := total - done
+		eta := avgPerDB * time.Duration(remaining)
+		etaStr = fmt.Sprintf(" | ETA: %s", formatDuration(eta))
+	} else if done == total {
+		etaStr = " | Complete"
+	}
+
+	return fmt.Sprintf("  Databases: [%s] %d/%d | Elapsed: %s%s\n",
+		bar, done, total, formatDuration(dbPhaseElapsed), etaStr)
+}
 func (m BackupExecutionModel) View() string {
 	var s strings.Builder
 	s.Grow(512) // Pre-allocate estimated capacity for better performance

 	// Clear screen with newlines and render header
 	s.WriteString("\n\n")
-	header := titleStyle.Render("[EXEC] Backup Execution")
-	s.WriteString(header)
+	header := "[EXEC] Backing up Database"
+	if m.backupType == "cluster" {
+		header = "[EXEC] Cluster Backup"
+	}
+	s.WriteString(titleStyle.Render(header))
 	s.WriteString("\n\n")

 	// Backup details - properly aligned
@@ -379,7 +453,6 @@ func (m BackupExecutionModel) View() string {
 	if m.ratio > 0 {
 		s.WriteString(fmt.Sprintf("  %-10s %d\n", "Sample:", m.ratio))
 	}
-	s.WriteString(fmt.Sprintf("  %-10s %s\n", "Duration:", time.Since(m.startTime).Round(time.Second)))
 	s.WriteString("\n")

 	// Status display
@@ -395,11 +468,15 @@ func (m BackupExecutionModel) View() string {
 		elapsedSec := int(time.Since(m.startTime).Seconds())
-		if m.overallPhase == 2 && m.dbTotal > 0 {
+		if m.overallPhase == backupPhaseDatabases && m.dbTotal > 0 {
 			// Phase 2: Database backups - contributes 15-90%
 			dbPct := int((int64(m.dbDone) * 100) / int64(m.dbTotal))
 			overallProgress = 15 + (dbPct * 75 / 100)
 			phaseLabel = m.phaseDesc
+		} else if m.overallPhase == backupPhaseCompressing {
+			// Phase 3: Compressing archive
+			overallProgress = 92
+			phaseLabel = "Phase 3/3: Compressing Archive"
 		} else if elapsedSec < 5 {
 			// Initial setup
 			overallProgress = 2
@@ -430,9 +507,9 @@ func (m BackupExecutionModel) View() string {
 		}
 		s.WriteString("\n")

-		// Database progress bar
-		progressBar := renderBackupDatabaseProgressBar(m.dbDone, m.dbTotal, m.dbName, 50)
-		s.WriteString(progressBar + "\n")
+		// Database progress bar with timing
+		s.WriteString(renderBackupDatabaseProgressBarWithTiming(m.dbDone, m.dbTotal, m.dbPhaseElapsed, m.dbAvgPerDB))
+		s.WriteString("\n")
 	} else {
 		// Intermediate phase (globals)
 		spinner := spinnerFrames[m.spinnerFrame]
@@ -449,7 +526,10 @@ func (m BackupExecutionModel) View() string {
 		}

 		if !m.cancelling {
-			s.WriteString("\n  [KEY] Press Ctrl+C or ESC to cancel\n")
+			// Elapsed time
+			s.WriteString(fmt.Sprintf("  Elapsed: %s\n", formatDuration(time.Since(m.startTime))))
+			s.WriteString("\n")
+			s.WriteString(infoStyle.Render("  [KEYS] Press Ctrl+C or ESC to cancel"))
 		}
 	} else {
 		// Show completion summary with detailed stats
@@ -474,6 +554,14 @@ func (m BackupExecutionModel) View() string {
 		s.WriteString(infoStyle.Render("  ─── Summary ───────────────────────────────────────────────"))
 		s.WriteString("\n\n")

+		// Archive info (if available)
+		if m.archivePath != "" {
+			s.WriteString(fmt.Sprintf("  Archive:      %s\n", filepath.Base(m.archivePath)))
+		}
+		if m.archiveSize > 0 {
+			s.WriteString(fmt.Sprintf("  Archive Size: %s\n", FormatBytes(m.archiveSize)))
+		}
+
 		// Backup type specific info
 		switch m.backupType {
 		case "cluster":
@@ -497,12 +585,21 @@ func (m BackupExecutionModel) View() string {
 		s.WriteString(infoStyle.Render("  ─── Timing ────────────────────────────────────────────────"))
 		s.WriteString("\n\n")

-		elapsed := time.Since(m.startTime)
-		s.WriteString(fmt.Sprintf("  Total Time:   %s\n", formatBackupDuration(elapsed)))
+		elapsed := m.elapsed
+		if elapsed == 0 {
+			elapsed = time.Since(m.startTime)
+		}
+		s.WriteString(fmt.Sprintf("  Total Time:   %s\n", formatDuration(elapsed)))
+
+		// Calculate and show throughput if we have size info
+		if m.archiveSize > 0 && elapsed.Seconds() > 0 {
+			throughput := float64(m.archiveSize) / elapsed.Seconds()
+			s.WriteString(fmt.Sprintf("  Throughput:   %s/s (average)\n", FormatBytes(int64(throughput))))
+		}

 		if m.backupType == "cluster" && m.dbTotal > 0 && m.err == nil {
 			avgPerDB := elapsed / time.Duration(m.dbTotal)
-			s.WriteString(fmt.Sprintf("  Avg per DB:   %s\n", formatBackupDuration(avgPerDB)))
+			s.WriteString(fmt.Sprintf("  Avg per DB:   %s\n", formatDuration(avgPerDB)))
 		}

 		s.WriteString("\n")
@@ -513,18 +610,3 @@ func (m BackupExecutionModel) View() string {
 	return s.String()
 }
-
-// formatBackupDuration formats duration in human readable format
-func formatBackupDuration(d time.Duration) string {
-	if d < time.Minute {
-		return fmt.Sprintf("%.1fs", d.Seconds())
-	}
-	if d < time.Hour {
-		minutes := int(d.Minutes())
-		seconds := int(d.Seconds()) % 60
-		return fmt.Sprintf("%dm %ds", minutes, seconds)
-	}
-	hours := int(d.Hours())
-	minutes := int(d.Minutes()) % 60
-	return fmt.Sprintf("%dh %dm", hours, minutes)
-}


@@ -152,8 +152,9 @@ type sharedProgressState struct {
 	currentDB string

 	// Timing info for database restore phase
 	dbPhaseElapsed  time.Duration // Elapsed time since restore phase started
 	dbAvgPerDB      time.Duration // Average time per database restore
+	phase3StartTime time.Time     // When phase 3 started (for realtime ETA calculation)

 	// Overall phase tracking (1=Extract, 2=Globals, 3=Databases)
 	overallPhase int
@@ -190,12 +191,12 @@ func clearCurrentRestoreProgress() {
 	currentRestoreProgressState = nil
 }

-func getCurrentRestoreProgress() (bytesTotal, bytesDone int64, description string, hasUpdate bool, dbTotal, dbDone int, speed float64, dbPhaseElapsed, dbAvgPerDB time.Duration, currentDB string, overallPhase int, extractionDone bool, dbBytesTotal, dbBytesDone int64) {
+func getCurrentRestoreProgress() (bytesTotal, bytesDone int64, description string, hasUpdate bool, dbTotal, dbDone int, speed float64, dbPhaseElapsed, dbAvgPerDB time.Duration, currentDB string, overallPhase int, extractionDone bool, dbBytesTotal, dbBytesDone int64, phase3StartTime time.Time) {
 	currentRestoreProgressMu.Lock()
 	defer currentRestoreProgressMu.Unlock()
 	if currentRestoreProgressState == nil {
-		return 0, 0, "", false, 0, 0, 0, 0, 0, "", 0, false, 0, 0
+		return 0, 0, "", false, 0, 0, 0, 0, 0, "", 0, false, 0, 0, time.Time{}
 	}
 	currentRestoreProgressState.mu.Lock()
@@ -204,13 +205,20 @@ func getCurrentRestoreProgress() (bytesTotal, bytesDone int64, description strin
 	// Calculate rolling window speed
 	speed = calculateRollingSpeed(currentRestoreProgressState.speedSamples)

+	// Calculate realtime phase elapsed if we have a phase 3 start time
+	dbPhaseElapsed = currentRestoreProgressState.dbPhaseElapsed
+	if !currentRestoreProgressState.phase3StartTime.IsZero() {
+		dbPhaseElapsed = time.Since(currentRestoreProgressState.phase3StartTime)
+	}
+
 	return currentRestoreProgressState.bytesTotal, currentRestoreProgressState.bytesDone,
 		currentRestoreProgressState.description, currentRestoreProgressState.hasUpdate,
 		currentRestoreProgressState.dbTotal, currentRestoreProgressState.dbDone, speed,
-		currentRestoreProgressState.dbPhaseElapsed, currentRestoreProgressState.dbAvgPerDB,
+		dbPhaseElapsed, currentRestoreProgressState.dbAvgPerDB,
 		currentRestoreProgressState.currentDB, currentRestoreProgressState.overallPhase,
 		currentRestoreProgressState.extractionDone,
-		currentRestoreProgressState.dbBytesTotal, currentRestoreProgressState.dbBytesDone
+		currentRestoreProgressState.dbBytesTotal, currentRestoreProgressState.dbBytesDone,
+		currentRestoreProgressState.phase3StartTime
 }

 // calculateRollingSpeed calculates speed from recent samples (last 5 seconds)
@@ -273,26 +281,42 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
 	defer dbClient.Close()

 	// STEP 1: Clean cluster if requested (drop all existing user databases)
-	if restoreType == "restore-cluster" && cleanClusterFirst && len(existingDBs) > 0 {
-		log.Info("Dropping existing user databases before cluster restore", "count", len(existingDBs))
-
-		// Drop databases using command-line psql (no connection required)
-		// This matches how cluster restore works - uses CLI tools, not database connections
-		droppedCount := 0
-		for _, dbName := range existingDBs {
-			// Create timeout context for each database drop (5 minutes per DB - large DBs take time)
-			dropCtx, dropCancel := context.WithTimeout(ctx, 5*time.Minute)
-			if err := dropDatabaseCLI(dropCtx, cfg, dbName); err != nil {
-				log.Warn("Failed to drop database", "name", dbName, "error", err)
-				// Continue with other databases
-			} else {
-				droppedCount++
-				log.Info("Dropped database", "name", dbName)
-			}
-			dropCancel() // Clean up context
-		}
-		log.Info("Cluster cleanup completed", "dropped", droppedCount, "total", len(existingDBs))
+	if restoreType == "restore-cluster" && cleanClusterFirst {
+		// Re-detect databases at execution time to get current state
+		// The preview list may be stale or detection may have failed earlier
+		safety := restore.NewSafety(cfg, log)
+		currentDBs, err := safety.ListUserDatabases(ctx)
+		if err != nil {
+			log.Warn("Failed to list databases for cleanup, using preview list", "error", err)
+			currentDBs = existingDBs // Fall back to preview list
+		} else if len(currentDBs) > 0 {
+			log.Info("Re-detected user databases for cleanup", "count", len(currentDBs), "databases", currentDBs)
+			existingDBs = currentDBs // Update with fresh list
+		}
+
+		if len(existingDBs) > 0 {
+			log.Info("Dropping existing user databases before cluster restore", "count", len(existingDBs))
+
+			// Drop databases using command-line psql (no connection required)
+			// This matches how cluster restore works - uses CLI tools, not database connections
+			droppedCount := 0
+			for _, dbName := range existingDBs {
+				// Create timeout context for each database drop (5 minutes per DB - large DBs take time)
+				dropCtx, dropCancel := context.WithTimeout(ctx, 5*time.Minute)
+				if err := dropDatabaseCLI(dropCtx, cfg, dbName); err != nil {
+					log.Warn("Failed to drop database", "name", dbName, "error", err)
+					// Continue with other databases
+				} else {
+					droppedCount++
+					log.Info("Dropped database", "name", dbName)
+				}
+				dropCancel() // Clean up context
+			}
+			log.Info("Cluster cleanup completed", "dropped", droppedCount, "total", len(existingDBs))
+		} else {
+			log.Info("No user databases to clean up")
+		}
 	}

 	// STEP 2: Create restore engine with silent progress (no stdout interference with TUI)
@@ -341,6 +365,10 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
 		progressState.overallPhase = 3
 		progressState.extractionDone = true
 		progressState.hasUpdate = true
+		// Set phase 3 start time on first callback (for realtime ETA calculation)
+		if progressState.phase3StartTime.IsZero() {
+			progressState.phase3StartTime = time.Now()
+		}
 		// Clear byte progress when switching to db progress
 		progressState.bytesTotal = 0
 		progressState.bytesDone = 0
@@ -359,6 +387,10 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
 		progressState.dbPhaseElapsed = phaseElapsed
 		progressState.dbAvgPerDB = avgPerDB
 		progressState.hasUpdate = true
+		// Set phase 3 start time on first callback (for realtime ETA calculation)
+		if progressState.phase3StartTime.IsZero() {
+			progressState.phase3StartTime = time.Now()
+		}
 		// Clear byte progress when switching to db progress
 		progressState.bytesTotal = 0
 		progressState.bytesDone = 0
@@ -376,6 +408,10 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
 		progressState.overallPhase = 3
 		progressState.extractionDone = true
 		progressState.hasUpdate = true
+		// Set phase 3 start time on first callback (for realtime ETA calculation)
+		if progressState.phase3StartTime.IsZero() {
+			progressState.phase3StartTime = time.Now()
+		}
 	})

 	// Store progress state in a package-level variable for the ticker to access
@@ -431,7 +467,8 @@ func (m RestoreExecutionModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 		m.elapsed = time.Since(m.startTime)

 		// Poll shared progress state for real-time updates
-		bytesTotal, bytesDone, description, hasUpdate, dbTotal, dbDone, speed, dbPhaseElapsed, dbAvgPerDB, currentDB, overallPhase, extractionDone, dbBytesTotal, dbBytesDone := getCurrentRestoreProgress()
+		// Note: dbPhaseElapsed is now calculated in realtime inside getCurrentRestoreProgress()
+		bytesTotal, bytesDone, description, hasUpdate, dbTotal, dbDone, speed, dbPhaseElapsed, dbAvgPerDB, currentDB, overallPhase, extractionDone, dbBytesTotal, dbBytesDone, _ := getCurrentRestoreProgress()
 		if hasUpdate && bytesTotal > 0 && !extractionDone {
 			// Phase 1: Extraction
 			m.bytesTotal = bytesTotal
@@ -643,7 +680,13 @@ func (m RestoreExecutionModel) View() string {
 		s.WriteString("\n")
 		s.WriteString(errorStyle.Render("╚══════════════════════════════════════════════════════════════╝"))
 		s.WriteString("\n\n")
-		s.WriteString(errorStyle.Render(fmt.Sprintf("  Error: %v", m.err)))
+
+		// Parse and display error in a clean, structured format
+		errStr := m.err.Error()
+		// Extract key parts from the error message
+		errDisplay := formatRestoreError(errStr)
+		s.WriteString(errDisplay)
 		s.WriteString("\n")
 	} else {
 		s.WriteString(successStyle.Render("╔══════════════════════════════════════════════════════════════╗"))
@@ -989,3 +1032,188 @@ func dropDatabaseCLI(ctx context.Context, cfg *config.Config, dbName string) err
 	return nil
 }
// formatRestoreError formats a restore error message for clean TUI display
func formatRestoreError(errStr string) string {
var s strings.Builder
maxLineWidth := 60
// Common patterns to extract
patterns := []struct {
key string
pattern string
}{
{"Error Type", "ERROR:"},
{"Hint", "HINT:"},
{"Last Error", "last error:"},
{"Total Errors", "total errors:"},
}
// First, try to extract a clean error summary
errLines := strings.Split(errStr, "\n")
// Find the main error message (first line or first ERROR:)
mainError := ""
hint := ""
totalErrors := ""
dbsFailed := []string{}
for _, line := range errLines {
line = strings.TrimSpace(line)
if line == "" {
continue
}
// Extract ERROR messages
if strings.Contains(line, "ERROR:") {
if mainError == "" {
// Get just the ERROR part
idx := strings.Index(line, "ERROR:")
if idx >= 0 {
mainError = strings.TrimSpace(line[idx:])
// Truncate if too long
if len(mainError) > maxLineWidth {
mainError = mainError[:maxLineWidth-3] + "..."
}
}
}
}
// Extract HINT
if strings.Contains(line, "HINT:") {
idx := strings.Index(line, "HINT:")
if idx >= 0 {
hint = strings.TrimSpace(line[idx+5:])
if len(hint) > maxLineWidth {
hint = hint[:maxLineWidth-3] + "..."
}
}
}
// Extract total errors count
if strings.Contains(line, "total errors:") {
idx := strings.Index(line, "total errors:")
if idx >= 0 {
totalErrors = strings.TrimSpace(line[idx+13:])
// Just extract the number
parts := strings.Fields(totalErrors)
if len(parts) > 0 {
totalErrors = parts[0]
}
}
}
// Extract failed database names (for cluster restore)
if strings.Contains(line, ": restore failed:") {
parts := strings.SplitN(line, ":", 2)
if len(parts) > 0 {
dbName := strings.TrimSpace(parts[0])
if dbName != "" && !strings.HasPrefix(dbName, "Error") {
dbsFailed = append(dbsFailed, dbName)
}
}
}
}
// If no structured error found, use the first line
if mainError == "" {
firstLine := errStr
if idx := strings.Index(errStr, "\n"); idx > 0 {
firstLine = errStr[:idx]
}
if len(firstLine) > maxLineWidth*2 {
firstLine = firstLine[:maxLineWidth*2-3] + "..."
}
mainError = firstLine
}
// Build structured error display
s.WriteString(infoStyle.Render(" ─── Error Details ─────────────────────────────────────────"))
s.WriteString("\n\n")
// Error type detection
errorType := "critical"
if strings.Contains(errStr, "out of shared memory") || strings.Contains(errStr, "max_locks_per_transaction") {
errorType = "critical"
} else if strings.Contains(errStr, "connection") {
errorType = "connection"
} else if strings.Contains(errStr, "permission") || strings.Contains(errStr, "access") {
errorType = "permission"
}
s.WriteString(fmt.Sprintf(" Type: %s\n", errorType))
s.WriteString(fmt.Sprintf(" Message: %s\n", mainError))
if hint != "" {
s.WriteString(fmt.Sprintf(" Hint: %s\n", hint))
}
if totalErrors != "" {
s.WriteString(fmt.Sprintf(" Total Errors: %s\n", totalErrors))
}
// Show failed databases (max 5)
if len(dbsFailed) > 0 {
s.WriteString("\n")
s.WriteString(" Failed Databases:\n")
for i, db := range dbsFailed {
if i >= 5 {
s.WriteString(fmt.Sprintf(" ... and %d more\n", len(dbsFailed)-5))
break
}
s.WriteString(fmt.Sprintf(" • %s\n", db))
}
}
s.WriteString("\n")
s.WriteString(infoStyle.Render(" ─── Diagnosis ─────────────────────────────────────────────"))
s.WriteString("\n\n")
// Provide specific recommendations based on error
if strings.Contains(errStr, "out of shared memory") || strings.Contains(errStr, "max_locks_per_transaction") {
s.WriteString(errorStyle.Render(" • PostgreSQL lock table exhausted\n"))
s.WriteString("\n")
s.WriteString(infoStyle.Render(" ─── [HINT] Recommendations ────────────────────────────────"))
s.WriteString("\n\n")
s.WriteString(" Lock capacity = max_locks_per_transaction\n")
s.WriteString(" × (max_connections + max_prepared_transactions)\n\n")
s.WriteString(" If you reduced VM size or max_connections, you need higher\n")
s.WriteString(" max_locks_per_transaction to compensate.\n\n")
s.WriteString(successStyle.Render(" FIX OPTIONS:\n"))
s.WriteString(" 1. Enable 'Large DB Mode' in Settings\n")
s.WriteString(" (press 'l' to toggle, reduces parallelism, increases locks)\n\n")
s.WriteString(" 2. Increase PostgreSQL locks:\n")
s.WriteString(" ALTER SYSTEM SET max_locks_per_transaction = 4096;\n")
s.WriteString(" Then RESTART PostgreSQL.\n\n")
s.WriteString(" 3. Reduce parallel jobs:\n")
s.WriteString(" Set Cluster Parallelism = 1 in Settings\n")
} else if strings.Contains(errStr, "connection") || strings.Contains(errStr, "refused") {
s.WriteString(" • Database connection failed\n\n")
s.WriteString(infoStyle.Render(" ─── [HINT] Recommendations ────────────────────────────────"))
s.WriteString("\n\n")
s.WriteString(" 1. Check database is running\n")
s.WriteString(" 2. Verify host, port, and credentials in Settings\n")
s.WriteString(" 3. Check firewall/network connectivity\n")
} else if strings.Contains(errStr, "permission") || strings.Contains(errStr, "denied") {
s.WriteString(" • Permission denied\n\n")
s.WriteString(infoStyle.Render(" ─── [HINT] Recommendations ────────────────────────────────"))
s.WriteString("\n\n")
s.WriteString(" 1. Verify database user has sufficient privileges\n")
s.WriteString(" 2. Grant CREATE/DROP DATABASE permissions if restoring cluster\n")
s.WriteString(" 3. Check file system permissions on backup directory\n")
} else {
s.WriteString(" See error message above for details.\n\n")
s.WriteString(infoStyle.Render(" ─── [HINT] General Recommendations ────────────────────────"))
s.WriteString("\n\n")
s.WriteString(" 1. Check the full error log for details\n")
s.WriteString(" 2. Try restoring with 'conservative' profile (press 'c')\n")
s.WriteString(" 3. For complex databases, enable 'Large DB Mode' (press 'l')\n")
}
s.WriteString("\n")
// patterns is declared for future structured extraction but not yet used;
// blank-assign it so the file still compiles
_ = patterns
return s.String()
}


@@ -288,17 +288,19 @@ func (m RestorePreviewModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
case "c":
if m.mode == "restore-cluster" {
-	// Prevent toggle if we couldn't detect existing databases
-	if m.existingDBError != "" {
-		m.message = checkWarningStyle.Render("[WARN] Cannot enable cleanup - database detection failed")
-	} else {
-		// Toggle cluster cleanup
-		m.cleanClusterFirst = !m.cleanClusterFirst
-		if m.cleanClusterFirst {
-			m.message = checkWarningStyle.Render(fmt.Sprintf("[WARN] Will drop %d existing database(s) before restore", m.existingDBCount))
-		} else {
-			m.message = fmt.Sprintf("Clean cluster first: disabled")
-		}
-	}
+	// Toggle cluster cleanup - databases will be re-detected at execution time
+	m.cleanClusterFirst = !m.cleanClusterFirst
+	if m.cleanClusterFirst {
+		if m.existingDBError != "" {
+			// Detection failed in preview - will re-detect at execution
+			m.message = checkWarningStyle.Render("[WARN] Will clean existing databases before restore (detection pending)")
+		} else if m.existingDBCount > 0 {
+			m.message = checkWarningStyle.Render(fmt.Sprintf("[WARN] Will drop %d existing database(s) before restore", m.existingDBCount))
+		} else {
+			m.message = infoStyle.Render("[INFO] Cleanup enabled (no databases currently detected)")
+		}
+	} else {
+		m.message = "Clean cluster first: disabled"
+	}
} else {
// Toggle create if missing
@@ -400,10 +402,26 @@ func (m RestorePreviewModel) View() string {
s.WriteString("\n")
s.WriteString(fmt.Sprintf(" Host: %s:%d\n", m.config.Host, m.config.Port))
// Show Resource Profile and CPU Workload settings
profile := m.config.GetCurrentProfile()
if profile != nil {
s.WriteString(fmt.Sprintf(" Resource Profile: %s (Parallel:%d, Jobs:%d)\n",
profile.Name, profile.ClusterParallelism, profile.Jobs))
} else {
s.WriteString(fmt.Sprintf(" Resource Profile: %s\n", m.config.ResourceProfile))
}
// Show Large DB Mode status
if m.config.LargeDBMode {
s.WriteString(" Large DB Mode: ON (reduced parallelism, high locks)\n")
}
s.WriteString(fmt.Sprintf(" CPU Workload: %s\n", m.config.CPUWorkloadType))
s.WriteString(fmt.Sprintf(" Cluster Parallelism: %d databases\n", m.config.ClusterParallelism))
if m.existingDBError != "" {
-	// Show error when database listing failed
-	s.WriteString(checkWarningStyle.Render(fmt.Sprintf(" Existing Databases: Unable to detect (%s)\n", m.existingDBError)))
-	s.WriteString(infoStyle.Render(" (Cleanup option disabled - cannot verify database status)\n"))
+	// Show warning when database listing failed - but still allow cleanup toggle
+	s.WriteString(checkWarningStyle.Render(" Existing Databases: Detection failed\n"))
+	s.WriteString(infoStyle.Render(fmt.Sprintf(" (%s)\n", m.existingDBError)))
+	s.WriteString(infoStyle.Render(" (Will re-detect at restore time)\n"))
} else if m.existingDBCount > 0 {
	s.WriteString(fmt.Sprintf(" Existing Databases: %d found\n", m.existingDBCount))
@@ -417,17 +435,20 @@ func (m RestorePreviewModel) View() string {
}
s.WriteString(fmt.Sprintf(" - %s\n", db))
}
cleanIcon := "[N]"
cleanStyle := infoStyle
if m.cleanClusterFirst {
cleanIcon = "[Y]"
cleanStyle = checkWarningStyle
}
s.WriteString(cleanStyle.Render(fmt.Sprintf(" Clean All First: %s %v (press 'c' to toggle)\n", cleanIcon, m.cleanClusterFirst)))
} else {
s.WriteString(" Existing Databases: None (clean slate)\n")
}
// Always show cleanup toggle for cluster restore
cleanIcon := "[N]"
cleanStyle := infoStyle
if m.cleanClusterFirst {
	cleanIcon = "[Y]" // plain assignment: ':=' here would shadow the outer cleanIcon
	cleanStyle = checkWarningStyle
	s.WriteString(cleanStyle.Render(fmt.Sprintf(" Clean All First: %s enabled (press 'c' to toggle)\n", cleanIcon)))
} else {
	s.WriteString(cleanStyle.Render(fmt.Sprintf(" Clean All First: %s disabled (press 'c' to toggle)\n", cleanIcon)))
}
s.WriteString("\n")
}
@@ -475,10 +496,18 @@ func (m RestorePreviewModel) View() string {
s.WriteString(infoStyle.Render(" All existing data in target database will be dropped!"))
s.WriteString("\n\n")
}
-if m.cleanClusterFirst && m.existingDBCount > 0 {
+if m.cleanClusterFirst {
	s.WriteString(checkWarningStyle.Render("[DANGER] WARNING: Cluster cleanup enabled"))
	s.WriteString("\n")
-	s.WriteString(checkWarningStyle.Render(fmt.Sprintf(" %d existing database(s) will be DROPPED before restore!", m.existingDBCount)))
+	if m.existingDBError != "" {
+		s.WriteString(checkWarningStyle.Render(" Existing databases will be DROPPED before restore!"))
+		s.WriteString("\n")
+		s.WriteString(infoStyle.Render(" (Database count will be detected at restore time)"))
+	} else if m.existingDBCount > 0 {
+		s.WriteString(checkWarningStyle.Render(fmt.Sprintf(" %d existing database(s) will be DROPPED before restore!", m.existingDBCount)))
+	} else {
+		s.WriteString(infoStyle.Render(" No databases currently detected - cleanup will verify at restore time"))
+	}
	s.WriteString("\n")
	s.WriteString(infoStyle.Render(" This ensures a clean disaster recovery scenario"))
	s.WriteString("\n\n")


@@ -10,6 +10,7 @@ import (
"github.com/charmbracelet/lipgloss"
"dbbackup/internal/config"
"dbbackup/internal/cpu"
"dbbackup/internal/logger"
)
@@ -101,6 +102,65 @@ func NewSettingsModel(cfg *config.Config, log logger.Logger, parent tea.Model) S
Type: "selector",
Description: "CPU workload profile (press Enter to cycle: Balanced → CPU-Intensive → I/O-Intensive)",
},
{
Key: "resource_profile",
DisplayName: "Resource Profile",
Value: func(c *config.Config) string {
profile := c.GetCurrentProfile()
if profile != nil {
return fmt.Sprintf("%s (P:%d J:%d)", profile.Name, profile.ClusterParallelism, profile.Jobs)
}
return c.ResourceProfile
},
Update: func(c *config.Config, v string) error {
profiles := []string{"conservative", "balanced", "performance", "max-performance"}
currentIdx := 0
for i, p := range profiles {
if c.ResourceProfile == p {
currentIdx = i
break
}
}
nextIdx := (currentIdx + 1) % len(profiles)
return c.ApplyResourceProfile(profiles[nextIdx])
},
Type: "selector",
Description: "Resource profile for VM capacity. Toggle 'l' for Large DB Mode on any profile.",
},
{
Key: "large_db_mode",
DisplayName: "Large DB Mode",
Value: func(c *config.Config) string {
if c.LargeDBMode {
return "ON (↓parallelism, ↑locks)"
}
return "OFF"
},
Update: func(c *config.Config, v string) error {
c.LargeDBMode = !c.LargeDBMode
return nil
},
Type: "selector",
Description: "Enable for databases with many tables/LOBs. Reduces parallelism, increases max_locks_per_transaction.",
},
{
Key: "cluster_parallelism",
DisplayName: "Cluster Parallelism",
Value: func(c *config.Config) string { return fmt.Sprintf("%d", c.ClusterParallelism) },
Update: func(c *config.Config, v string) error {
val, err := strconv.Atoi(v)
if err != nil {
return fmt.Errorf("cluster parallelism must be a number")
}
if val < 1 {
return fmt.Errorf("cluster parallelism must be at least 1")
}
c.ClusterParallelism = val
return nil
},
Type: "int",
Description: "Concurrent databases during cluster backup/restore (1=sequential, safer for large DBs)",
},
{
Key: "backup_dir",
DisplayName: "Backup Directory",
@@ -528,12 +588,70 @@ func (m SettingsModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
case "s":
return m.saveSettings()
case "l":
// Quick shortcut: Toggle Large DB Mode
return m.toggleLargeDBMode()
case "c":
// Quick shortcut: Apply "conservative" profile for constrained VMs
return m.applyConservativeProfile()
case "p":
// Show profile recommendation
return m.showProfileRecommendation()
}
}
return m, nil
}
// toggleLargeDBMode toggles the Large DB Mode flag
func (m SettingsModel) toggleLargeDBMode() (tea.Model, tea.Cmd) {
	m.config.LargeDBMode = !m.config.LargeDBMode
	profile := m.config.GetCurrentProfile()
	if profile == nil {
		// Guard: GetCurrentProfile can return nil for an unknown profile name
		m.message = successStyle.Render(fmt.Sprintf("Large DB Mode: %v", m.config.LargeDBMode))
		return m, nil
	}
	if m.config.LargeDBMode {
		m.message = successStyle.Render(fmt.Sprintf(
			"[ON] Large DB Mode enabled: %s → Parallel=%d, Jobs=%d, MaxLocks=%d",
			profile.Name, profile.ClusterParallelism, profile.Jobs, profile.MaxLocksPerTxn))
	} else {
		m.message = successStyle.Render(fmt.Sprintf(
			"[OFF] Large DB Mode disabled: %s → Parallel=%d, Jobs=%d",
			profile.Name, profile.ClusterParallelism, profile.Jobs))
	}
	return m, nil
}
// applyConservativeProfile applies the conservative profile for constrained VMs
func (m SettingsModel) applyConservativeProfile() (tea.Model, tea.Cmd) {
if err := m.config.ApplyResourceProfile("conservative"); err != nil {
m.message = errorStyle.Render(fmt.Sprintf("[FAIL] %s", err.Error()))
return m, nil
}
m.message = successStyle.Render("[OK] Applied 'conservative' profile: Cluster=1, Jobs=1. Safe for small VMs with limited memory.")
return m, nil
}
// showProfileRecommendation displays the recommended profile based on system resources
func (m SettingsModel) showProfileRecommendation() (tea.Model, tea.Cmd) {
profileName, reason := m.config.GetResourceProfileRecommendation(false)
var largeDBHint string
if m.config.LargeDBMode {
largeDBHint = "Large DB Mode: ON"
} else {
largeDBHint = "Large DB Mode: OFF (press 'l' to enable)"
}
m.message = infoStyle.Render(fmt.Sprintf(
"[RECOMMEND] Profile: %s | %s\n"+
" → %s\n"+
" Press 'l' to toggle Large DB Mode, 'c' for conservative",
profileName, largeDBHint, reason))
return m, nil
}
// handleEditingInput handles input when editing a setting
func (m SettingsModel) handleEditingInput(msg tea.KeyMsg) (tea.Model, tea.Cmd) {
switch msg.String() {
@@ -747,7 +865,32 @@ func (m SettingsModel) View() string {
// Current configuration summary
if !m.editing {
b.WriteString("\n")
-b.WriteString(infoStyle.Render("[INFO] Current Configuration"))
+b.WriteString(infoStyle.Render("[INFO] System Resources & Configuration"))
b.WriteString("\n")
// System resources
var sysInfo []string
if m.config.CPUInfo != nil {
sysInfo = append(sysInfo, fmt.Sprintf("CPU: %d cores (physical), %d logical",
m.config.CPUInfo.PhysicalCores, m.config.CPUInfo.LogicalCores))
}
if m.config.MemoryInfo != nil {
sysInfo = append(sysInfo, fmt.Sprintf("Memory: %dGB total, %dGB available",
m.config.MemoryInfo.TotalGB, m.config.MemoryInfo.AvailableGB))
}
// Recommended profile
recommendedProfile, reason := m.config.GetResourceProfileRecommendation(false)
sysInfo = append(sysInfo, fmt.Sprintf("Recommended Profile: %s", recommendedProfile))
sysInfo = append(sysInfo, fmt.Sprintf(" → %s", reason))
for _, line := range sysInfo {
b.WriteString(detailStyle.Render(fmt.Sprintf(" %s", line)))
b.WriteString("\n")
}
b.WriteString("\n")
b.WriteString(infoStyle.Render("[CONFIG] Current Settings"))
b.WriteString("\n")
summary := []string{
@@ -755,7 +898,17 @@ func (m SettingsModel) View() string {
fmt.Sprintf("Database: %s@%s:%d", m.config.User, m.config.Host, m.config.Port),
fmt.Sprintf("Backup Dir: %s", m.config.BackupDir),
fmt.Sprintf("Compression: Level %d", m.config.CompressionLevel),
-fmt.Sprintf("Jobs: %d parallel, %d dump", m.config.Jobs, m.config.DumpJobs),
+fmt.Sprintf("Profile: %s | Cluster: %d parallel | Jobs: %d",
+	m.config.ResourceProfile, m.config.ClusterParallelism, m.config.Jobs),
}
// Show profile warnings if applicable
profile := m.config.GetCurrentProfile()
if profile != nil {
isValid, warnings := cpu.ValidateProfileForSystem(profile, m.config.CPUInfo, m.config.MemoryInfo)
if !isValid && len(warnings) > 0 {
summary = append(summary, fmt.Sprintf("⚠️ Warning: %s", warnings[0]))
}
}
if m.config.CloudEnabled {
@@ -782,9 +935,9 @@ func (m SettingsModel) View() string {
} else {
	// Show different help based on current selection
	if m.cursor >= 0 && m.cursor < len(m.settings) && m.settings[m.cursor].Type == "path" {
-		footer = infoStyle.Render("\n[KEYS] Up/Down navigate | Enter edit | Tab browse directories | 's' save | 'r' reset | 'q' menu")
+		footer = infoStyle.Render("\n[KEYS] ↑↓ navigate | Enter edit | Tab dirs | 'l' toggle LargeDB | 'c' conservative | 'p' recommend | 's' save | 'q' menu")
	} else {
-		footer = infoStyle.Render("\n[KEYS] Up/Down navigate | Enter edit | 's' save | 'r' reset | 'q' menu | Tab=dirs on path fields only")
+		footer = infoStyle.Render("\n[KEYS] ↑↓ navigate | Enter edit | 'l' toggle LargeDB mode | 'c' conservative | 'p' recommend | 's' save | 'r' reset | 'q' menu")
	}
}
}