Compare commits

...

13 Commits

Author SHA1 Message Date
838c5b8c15 Fix: PostgreSQL expert review - cluster backup/restore improvements
Critical PostgreSQL-specific fixes identified by database expert review:

1. **Port always passed for localhost** (pg_dump, pg_restore, pg_dumpall, psql)
   - Previously, the port was only passed for non-localhost connections
   - If the user runs PostgreSQL on a non-standard port (e.g., 5433), commands
     would connect to the wrong instance or fail
   - Now always passes -p PORT to all PostgreSQL tools (see the sketch after
     this list)

2. **CREATE DATABASE with encoding/locale preservation**
   - Now creates databases with explicit ENCODING 'UTF8'
   - Detects server's LC_COLLATE and uses it for new databases
   - Prevents encoding mismatch errors during restore
   - Falls back to simple CREATE if encoding fails (older PG versions)

3. **DROP DATABASE WITH (FORCE) for PostgreSQL 13+**
   - Uses new WITH (FORCE) option to atomically terminate connections
   - Prevents race condition where new connections are established
   - Falls back to standard DROP for PostgreSQL < 13
   - Also revokes CONNECT privilege before the drop attempt (the drop
     sequence is sketched at the end of this message)

4. **Improved globals restore error handling**
   - Distinguishes between FATAL errors (real problems) and regular
     ERROR messages (like 'role already exists' which is expected)
   - Only fails on FATAL errors or psql command failures
   - Logs error count summary for visibility

5. **Better error classification in restore logs**
   - Separate log levels for FATAL vs ERROR
   - Debug-level logging for 'already exists' errors (expected)
   - Error count tracking to avoid log spam
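
A minimal Go sketch of the connection-argument rule from item 1, assuming the standard `-h`/`-p`/`-U` flags of the PostgreSQL client tools; `connectionArgs` is a hypothetical helper, not the tool's actual API:

```go
package main

import (
	"fmt"
	"strconv"
)

// connectionArgs always includes -p (the server may listen on a non-standard
// port such as 5433) but omits -h for localhost so that the tools use the
// Unix socket and peer authentication keeps working.
func connectionArgs(host string, port int, user string) []string {
	args := []string{"-p", strconv.Itoa(port), "-U", user}
	if host != "localhost" && host != "127.0.0.1" && host != "" {
		args = append([]string{"-h", host}, args...)
	}
	return args
}

func main() {
	fmt.Println(connectionArgs("localhost", 5433, "postgres"))    // [-p 5433 -U postgres]
	fmt.Println(connectionArgs("db.example.com", 5432, "backup")) // [-h db.example.com -p 5432 -U backup]
}
```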

These fixes improve reliability for enterprise PostgreSQL deployments
with non-standard configurations and existing data.
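
The drop sequence from item 3, condensed into a sketch: `WITH (FORCE)` is the documented PostgreSQL 13+ syntax, and the syntax-error check is how older servers are detected here. `dropDatabase` is illustrative; the real engine additionally revokes CONNECT and terminates sessions first.

```go
package sketch

import (
	"context"
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

// dropDatabase tries DROP DATABASE ... WITH (FORCE) first (PostgreSQL 13+,
// terminates remaining sessions atomically) and falls back to a plain DROP
// when an older server rejects the option with a syntax error.
func dropDatabase(ctx context.Context, dbName string, port int, user string) error {
	run := func(stmt string) ([]byte, error) {
		cmd := exec.CommandContext(ctx, "psql",
			"-p", strconv.Itoa(port), "-U", user, "-d", "postgres", "-c", stmt)
		return cmd.CombinedOutput()
	}
	out, err := run(fmt.Sprintf(`DROP DATABASE IF EXISTS "%s" WITH (FORCE)`, dbName))
	if err == nil {
		return nil
	}
	if strings.Contains(string(out), "syntax error") {
		if out, err = run(fmt.Sprintf(`DROP DATABASE IF EXISTS "%s"`, dbName)); err != nil {
			return fmt.Errorf("drop %q failed: %w\n%s", dbName, err, out)
		}
		return nil
	}
	return fmt.Errorf("drop %q failed: %w\n%s", dbName, err, out)
}
```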
2026-01-16 14:36:03 +01:00
9d95a193db Fix: Enterprise cluster restore (postgres user via su)
Critical fixes for enterprise environments where dbbackup runs as
postgres user via 'su postgres' without sudo access:

1. canRestartPostgreSQL(): New function that detects if we can restart
   PostgreSQL. Returns false immediately if running as postgres user
   without sudo access, avoiding wasted time and potential hangs (see the
   first sketch below).

2. tryRestartPostgreSQL(): Now calls canRestartPostgreSQL() first to
   skip restart attempts in restricted environments.

3. Changed restart warning from ERROR to WARN level - it's expected
   behavior in enterprise environments, not an error.

4. Context cancellation check: Goroutines now check ctx.Err() before
   starting and properly count cancelled databases as failures.

5. Goroutine accounting: After wg.Wait(), verify all databases were
   accounted for (success + fail = total). Catches goroutine crashes
   or deadlocks (see the second sketch at the end of this message).

6. Port argument fix: Always pass -p port to psql for localhost
   restores, fixing non-standard port configurations.
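
A minimal sketch of the privilege probe from item 1, assuming standard `sudo -n` behavior (exit non-zero instead of prompting when passwordless sudo is unavailable):

```go
package sketch

import (
	"context"
	"os"
	"os/exec"
	"time"
)

// canRestartPostgres returns false fast for the restricted su-postgres case:
// `sudo -n true` fails immediately when passwordless sudo is unavailable,
// so nothing can hang on a password prompt.
func canRestartPostgres() bool {
	user := os.Getenv("USER")
	if user == "" {
		user = os.Getenv("LOGNAME")
	}
	if user != "postgres" {
		return true // not the restricted case this commit targets
	}
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	cmd := exec.CommandContext(ctx, "sudo", "-n", "true") // -n: never prompt
	cmd.Stdin = nil                                       // no password input possible
	return cmd.Run() == nil
}
```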

This should fix the issue where cluster restore showed success but
0 databases were actually restored when running on enterprise systems.
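
And a sketch of the cancellation check and accounting invariant from items 4 and 5; `restoreOne` stands in for the real per-database restore:

```go
package sketch

import (
	"context"
	"sync"
	"sync/atomic"
)

// restoreAll runs one goroutine per database. Every goroutine increments
// exactly one counter, so success+fail must equal len(dbs); the defensive
// check at the end turns any unaccounted database (a crashed or deadlocked
// worker) into a reported failure instead of a silent "success".
func restoreAll(ctx context.Context, dbs []string, restoreOne func(context.Context, string) error) (success, fail int) {
	var ok, bad int32
	var wg sync.WaitGroup
	for _, name := range dbs {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			if ctx.Err() != nil { // cancelled before starting: count as failure
				atomic.AddInt32(&bad, 1)
				return
			}
			if restoreOne(ctx, name) == nil {
				atomic.AddInt32(&ok, 1)
			} else {
				atomic.AddInt32(&bad, 1)
			}
		}(name)
	}
	wg.Wait()
	success, fail = int(atomic.LoadInt32(&ok)), int(atomic.LoadInt32(&bad))
	if missing := len(dbs) - success - fail; missing > 0 {
		fail += missing // unaccounted databases are treated as failures
	}
	return success, fail
}
```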
2026-01-16 14:17:04 +01:00
3201f0fb6a Fix: Critical bug - cluster restore showing success with 0 databases restored
CRITICAL FIXES:
- Add check for successCount == 0 to properly fail when no databases restored
- Fix tryRestartPostgreSQL to use non-interactive sudo (-n flag)
- Add 10-second timeout per restart attempt to prevent blocking (see the
  sketch below)
- Try pg_ctl directly for postgres user (no sudo needed)
- Set stdin to nil to prevent sudo from waiting for password input
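
A condensed sketch of the resulting restart attempt, assuming standard `sudo -n`, `systemctl`, and `pg_ctl` behavior; the data directory and service names are illustrative:

```go
package sketch

import (
	"context"
	"os/exec"
	"time"
)

// tryRestart gives each candidate command a hard 10-second budget; sudo runs
// with -n (never prompt) and stdin is explicitly nil, so a password prompt
// fails immediately instead of blocking the restore.
func tryRestart() bool {
	run := func(args ...string) bool {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		cmd := exec.CommandContext(ctx, args[0], args[1:]...)
		cmd.Stdin = nil
		return cmd.Run() == nil
	}
	return run("sudo", "-n", "systemctl", "restart", "postgresql") ||
		run("pg_ctl", "restart", "-D", "/var/lib/postgresql/data", "-m", "fast")
}
```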

This fixes the issue where cluster restore showed success but no databases
were actually restored due to sudo blocking on password prompts.
2026-01-16 14:03:02 +01:00
62ddc57fb7 Fix: Remove sudo usage from auth detection to avoid password prompts
- Remove sudo cat attempt for reading pg_hba.conf
- Prevents password prompts when running as postgres via 'su postgres'
- Auth detection now relies on connection attempts when file is unreadable
- Fixes issue where returning to menu after restore triggers sudo prompt
2026-01-16 13:52:41 +01:00
510175ff04 TUI: Enhance completion/result screens for backup and restore
- Add box-style headers for success/failure states
- Display comprehensive summary with archive info, type, database count
- Show timing section with total time, throughput, and average per-DB stats
- Use consistent styling and formatting across all result views
- Improve visual hierarchy with section separators
2026-01-16 13:37:58 +01:00
a85ad0c88c TUI: Add timing and ETA tracking to cluster restore progress
- Add DatabaseProgressWithTimingCallback for timing-aware progress reporting
- Track elapsed time and average duration per database during restore phase
- Display ETA based on completed database restore times (see the sketch below)
- Show restore phase elapsed time in progress bar
- Enhance cluster restore progress bar with [elapsed / ETA: remaining] format
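
A minimal sketch of the ETA arithmetic described above, assuming per-database restore times are roughly comparable:

```go
package sketch

import "time"

// etaFromCompleted averages the durations of databases restored so far and
// multiplies by the number still pending; with no samples yet there is no
// basis for an estimate, so it returns 0.
func etaFromCompleted(completed []time.Duration, done, total int) time.Duration {
	if len(completed) == 0 || done >= total {
		return 0
	}
	var sum time.Duration
	for _, d := range completed {
		sum += d
	}
	avg := sum / time.Duration(len(completed))
	return avg * time.Duration(total-done)
}
```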
2026-01-16 09:42:05 +01:00
4938dc1918 fix: max_locks_per_transaction requires PostgreSQL restart
- Fixed critical bug where ALTER SYSTEM + pg_reload_conf() was used
  but max_locks_per_transaction requires a full PostgreSQL restart
- Added automatic restart attempt (systemctl, service, pg_ctl)
- Added loud warnings if restart fails with manual fix instructions
- Updated preflight checks to warn about low max_locks_per_transaction
- This was causing 'out of shared memory' errors on BLOB-heavy restores
  (a sketch for detecting the pending restart follows)
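
One way to observe the condition this commit fixes, assuming a live `*sql.DB` connection: `pg_settings.pending_restart` (available since PostgreSQL 9.5) stays true for restart-only parameters even after `pg_reload_conf()`. The helper name is hypothetical:

```go
package sketch

import (
	"context"
	"database/sql"
)

// locksChangePending reports whether an ALTER SYSTEM change to
// max_locks_per_transaction is still waiting for a server restart.
func locksChangePending(ctx context.Context, db *sql.DB) (bool, error) {
	var pending bool
	err := db.QueryRowContext(ctx,
		"SELECT pending_restart FROM pg_settings WHERE name = 'max_locks_per_transaction'",
	).Scan(&pending)
	return pending, err
}
```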
2026-01-15 18:50:10 +01:00
09a917766f TUI: Add database progress tracking to cluster backup
- Add ProgressCallback and DatabaseProgressCallback to backup engine
- Add SetProgressCallback() and SetDatabaseProgressCallback() methods
- Wire database progress callback in backup_exec.go
- Show progress bar (Database: [████░░░] X/Y) during cluster backups
- Hide spinner when progress bar is displayed
- Use shared progress state pattern for thread-safe TUI updates (sketched below)
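
A minimal sketch of the shared progress state pattern, assuming the engine callback runs on a worker goroutine while the Bubble Tea tick handler polls `get()` from the UI side; field names are illustrative:

```go
package sketch

import "sync"

// progressState decouples the backup engine from the TUI: the engine's
// callback writes under a mutex, and the UI polls on its own schedule.
type progressState struct {
	mu          sync.Mutex
	done, total int
	dbName      string
}

func (s *progressState) set(done, total int, name string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.done, s.total, s.dbName = done, total, name
}

func (s *progressState) get() (done, total int, name string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.done, s.total, s.dbName
}
```

Polling from a tick command keeps all model mutation inside the Bubble Tea update loop, which is not safe to touch from foreign goroutines.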
2026-01-15 15:32:26 +01:00
eeacbfa007 TUI: Add detailed progress tracking with rolling speed and database count
- Add TUI-native detailed progress component (detailed_progress.go)
- Hide spinner when progress bar is shown for cleaner display
- Implement rolling window speed calculation (5-sec window, 100 samples; sketched below)
- Add database count tracking (X/Y) for cluster restore operations
- Wire DatabaseProgressCallback to restore engine for multi-db progress
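
A sketch of the rolling-window throughput estimate with the parameters stated above (5-second window, 100-sample cap); the actual `detailed_progress.go` implementation may differ in detail:

```go
package sketch

import "time"

type sample struct {
	t     time.Time
	bytes int64
}

// speedWindow keeps timestamped cumulative byte counts, evicts samples older
// than 5 seconds (capped at 100 entries), and derives speed from the byte
// delta over the time delta across the window.
type speedWindow struct {
	samples []sample
}

func (w *speedWindow) add(totalBytes int64) {
	now := time.Now()
	w.samples = append(w.samples, sample{now, totalBytes})
	cut := 0
	for cut < len(w.samples) && now.Sub(w.samples[cut].t) > 5*time.Second {
		cut++
	}
	if over := len(w.samples) - 100; over > cut {
		cut = over
	}
	w.samples = w.samples[cut:]
}

func (w *speedWindow) bytesPerSec() float64 {
	if len(w.samples) < 2 {
		return 0
	}
	first, last := w.samples[0], w.samples[len(w.samples)-1]
	dt := last.t.Sub(first.t).Seconds()
	if dt <= 0 {
		return 0
	}
	return float64(last.bytes-first.bytes) / dt
}
```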
2026-01-15 15:16:21 +01:00
7711a206ab Fix panic in TUI backup manager verify when logger is nil
- Add nil checks before logger calls in diagnose.go (8 places)
- Add nil checks before logger calls in safety.go (2 places)
- Fixes crash when pressing 'v' to verify backup in interactive menu
  (the guard pattern is sketched below)
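
The guard pattern behind this fix, as a sketch; `Logger` stands in for the project's logger interface. The usual Go caveat applies: the nil check only helps when the field holds a nil interface value, not an interface wrapping a nil concrete pointer:

```go
package sketch

// Logger mirrors the subset of the project's logger interface used here.
type Logger interface {
	Debug(msg string, keysAndValues ...any)
}

// debugIf centralizes the nil guard instead of repeating it at every call site.
func debugIf(log Logger, msg string, kv ...any) {
	if log != nil {
		log.Debug(msg, kv...)
	}
}
```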
2026-01-14 17:18:37 +01:00
ba6e8a2b39 v3.42.37: Remove ASCII boxes from diagnose view
Cleaner output without box drawing characters:
- [STATUS] Validation section
- [INFO] Details section
- [FAIL] Errors section
- [WARN] Warnings section
- [HINT] Recommendations section
2026-01-14 17:05:43 +01:00
ec5e89eab7 v3.42.36: Fix remaining TUI prefix inconsistencies
- diagnose_view.go: Add [STATS], [LIST], [INFO] section prefixes
- status.go: Add [CONN], [INFO] section prefixes
- settings.go: [LOG] → [INFO] for configuration summary
- menu.go: [DB] → [SELECT]/[CHECK] for selectors
2026-01-14 16:59:24 +01:00
e24d7ab49f v3.42.35: Standardize TUI title prefixes for consistency
- [CHECK] for diagnosis, previews, validations
- [STATS] for status, history, metrics views
- [SELECT] for selection/browsing screens
- [EXEC] for execution screens (backup/restore)
- [CONFIG] for settings/configuration

Fixed 8 files with inconsistent prefixes:
- diagnose_view.go: [SEARCH] → [CHECK]
- settings.go: [CFG] → [CONFIG]
- menu.go: [DB] → clean title
- history.go: [HISTORY] → [STATS]
- backup_manager.go: [DB] → [SELECT]
- archive_browser.go: [PKG]/[SEARCH] → [SELECT]
- restore_preview.go: added [CHECK]
- restore_exec.go: [RESTORE] → [EXEC]
2026-01-14 16:36:35 +01:00
22 changed files with 1837 additions and 249 deletions


@@ -5,6 +5,36 @@ All notable changes to dbbackup will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [3.42.35] - 2026-01-15 "TUI Detailed Progress"
### Added - Enhanced TUI Progress Display
- **Detailed progress bar in TUI restore** - schollz-style progress bar with:
  - Byte progress display (e.g., `245 MB / 1.2 GB`)
  - Transfer speed calculation (e.g., `45 MB/s`)
  - ETA prediction for long operations
  - Unicode block-based visual bar
- **Real-time extraction progress** - Archive extraction now reports actual bytes processed
- **Go-native tar extraction** - Uses Go's `archive/tar` + `compress/gzip` when progress callback is set
- **New `DetailedProgress` component** in TUI package:
  - `NewDetailedProgress(total, description)` - Byte-based progress
  - `NewDetailedProgressItems(total, description)` - Item count progress
  - `NewDetailedProgressSpinner(description)` - Indeterminate spinner
  - `RenderProgressBar(width)` - Generate schollz-style output
- **Progress callback API** in restore engine:
  - `SetProgressCallback(func(current, total int64, description string))`
  - Allows TUI to receive real-time progress updates from restore operations
- **Shared progress state** pattern for Bubble Tea integration
### Changed
- TUI restore execution now shows detailed byte progress during archive extraction
- Cluster restore shows extraction progress instead of just spinner
- Falls back to shell `tar` command when no progress callback is set (faster)
### Technical Details
- `progressReader` wrapper tracks bytes read through gzip/tar pipeline
- Throttled progress updates (every 100ms) to avoid UI flooding
- Thread-safe shared state pattern for cross-goroutine progress updates
## [3.42.34] - 2026-01-14 "Filesystem Abstraction"
### Added - spf13/afero for Filesystem Abstraction


@@ -56,7 +56,7 @@ Download from [releases](https://git.uuxo.net/UUXO/dbbackup/releases):
```bash
# Linux x86_64
wget https://git.uuxo.net/UUXO/dbbackup/releases/download/v3.42.1/dbbackup-linux-amd64
wget https://git.uuxo.net/UUXO/dbbackup/releases/download/v3.42.35/dbbackup-linux-amd64
chmod +x dbbackup-linux-amd64
sudo mv dbbackup-linux-amd64 /usr/local/bin/dbbackup
```


@@ -3,9 +3,9 @@
This directory contains pre-compiled binaries for the DB Backup Tool across multiple platforms and architectures.
## Build Information
- **Version**: 3.42.33
- **Build Time**: 2026-01-14_15:19:48_UTC
- **Git Commit**: 4e09066
- **Version**: 3.42.34
- **Build Time**: 2026-01-16_13:17:19_UTC
- **Git Commit**: 9d95a19
## Recent Updates (v1.1.0)
- ✅ Fixed TUI progress display with line-by-line output


@@ -84,20 +84,14 @@ func findHbaFileViaPostgres() string {
// parsePgHbaConf parses pg_hba.conf and returns the authentication method
func parsePgHbaConf(path string, user string) AuthMethod {
// Try with sudo if we can't read directly
// Try to read the file directly - do NOT use sudo as it triggers password prompts
// If we can't read pg_hba.conf, we'll rely on connection attempts to determine auth
file, err := os.Open(path)
if err != nil {
// Try with sudo (with timeout)
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
cmd := exec.CommandContext(ctx, "sudo", "cat", path)
output, err := cmd.Output()
if err != nil {
// If we can't read the file, return unknown and let the connection determine auth
// This avoids sudo password prompts when running as postgres via su
return AuthUnknown
}
return parseHbaContent(string(output), user)
}
defer file.Close()
scanner := bufio.NewScanner(file)


@@ -28,6 +28,12 @@ import (
"dbbackup/internal/swap"
)
// ProgressCallback is called with byte-level progress updates during backup operations
type ProgressCallback func(current, total int64, description string)
// DatabaseProgressCallback is called with database count progress during cluster backup
type DatabaseProgressCallback func(done, total int, dbName string)
// Engine handles backup operations
type Engine struct {
cfg *config.Config
@@ -36,6 +42,8 @@ type Engine struct {
progress progress.Indicator
detailedReporter *progress.DetailedReporter
silent bool // Silent mode for TUI
progressCallback ProgressCallback
dbProgressCallback DatabaseProgressCallback
}
// New creates a new backup engine
@@ -86,6 +94,30 @@ func NewSilent(cfg *config.Config, log logger.Logger, db database.Database, prog
}
}
// SetProgressCallback sets a callback for detailed progress reporting (for TUI mode)
func (e *Engine) SetProgressCallback(cb ProgressCallback) {
e.progressCallback = cb
}
// SetDatabaseProgressCallback sets a callback for database count progress during cluster backup
func (e *Engine) SetDatabaseProgressCallback(cb DatabaseProgressCallback) {
e.dbProgressCallback = cb
}
// reportProgress reports progress to the callback if set
func (e *Engine) reportProgress(current, total int64, description string) {
if e.progressCallback != nil {
e.progressCallback(current, total, description)
}
}
// reportDatabaseProgress reports database count progress to the callback if set
func (e *Engine) reportDatabaseProgress(done, total int, dbName string) {
if e.dbProgressCallback != nil {
e.dbProgressCallback(done, total, dbName)
}
}
// loggerAdapter adapts our logger to the progress.Logger interface
type loggerAdapter struct {
logger logger.Logger
@@ -465,6 +497,8 @@ func (e *Engine) BackupCluster(ctx context.Context) error {
estimator.UpdateProgress(idx)
e.printf(" [%d/%d] Backing up database: %s\n", idx+1, len(databases), name)
quietProgress.Update(fmt.Sprintf("Backing up database %d/%d: %s", idx+1, len(databases), name))
// Report database progress to TUI callback
e.reportDatabaseProgress(idx+1, len(databases), name)
mu.Unlock()
// Check database size and warn if very large
@@ -903,11 +937,15 @@ func (e *Engine) createSampleBackup(ctx context.Context, databaseName, outputFil
func (e *Engine) backupGlobals(ctx context.Context, tempDir string) error {
globalsFile := filepath.Join(tempDir, "globals.sql")
cmd := exec.CommandContext(ctx, "pg_dumpall", "--globals-only")
if e.cfg.Host != "localhost" {
cmd.Args = append(cmd.Args, "-h", e.cfg.Host, "-p", fmt.Sprintf("%d", e.cfg.Port))
// CRITICAL: Always pass port even for localhost - user may have non-standard port
cmd := exec.CommandContext(ctx, "pg_dumpall", "--globals-only",
"-p", fmt.Sprintf("%d", e.cfg.Port),
"-U", e.cfg.User)
// Only add -h flag for non-localhost to use Unix socket for peer auth
if e.cfg.Host != "localhost" && e.cfg.Host != "127.0.0.1" && e.cfg.Host != "" {
cmd.Args = append([]string{cmd.Args[0], "-h", e.cfg.Host}, cmd.Args[1:]...)
}
cmd.Args = append(cmd.Args, "-U", e.cfg.User)
cmd.Env = os.Environ()
if e.cfg.Password != "" {


@@ -316,11 +316,12 @@ func (p *PostgreSQL) BuildBackupCommand(database, outputFile string, options Bac
cmd := []string{"pg_dump"}
// Connection parameters
if p.cfg.Host != "localhost" {
// CRITICAL: Always pass port even for localhost - user may have non-standard port
if p.cfg.Host != "localhost" && p.cfg.Host != "127.0.0.1" && p.cfg.Host != "" {
cmd = append(cmd, "-h", p.cfg.Host)
cmd = append(cmd, "-p", strconv.Itoa(p.cfg.Port))
cmd = append(cmd, "--no-password")
}
cmd = append(cmd, "-p", strconv.Itoa(p.cfg.Port))
cmd = append(cmd, "-U", p.cfg.User)
// Format and compression
@@ -380,11 +381,12 @@ func (p *PostgreSQL) BuildRestoreCommand(database, inputFile string, options Res
cmd := []string{"pg_restore"}
// Connection parameters
if p.cfg.Host != "localhost" {
// CRITICAL: Always pass port even for localhost - user may have non-standard port
if p.cfg.Host != "localhost" && p.cfg.Host != "127.0.0.1" && p.cfg.Host != "" {
cmd = append(cmd, "-h", p.cfg.Host)
cmd = append(cmd, "-p", strconv.Itoa(p.cfg.Port))
cmd = append(cmd, "--no-password")
}
cmd = append(cmd, "-p", strconv.Itoa(p.cfg.Port))
cmd = append(cmd, "-U", p.cfg.User)
// Parallel jobs (incompatible with --single-transaction per PostgreSQL docs)


@@ -368,7 +368,7 @@ func (d *Diagnoser) diagnoseSQLScript(filePath string, compressed bool, result *
}
// Store last line for termination check
if lineNumber > 0 && (lineNumber%100000 == 0) && d.verbose {
if lineNumber > 0 && (lineNumber%100000 == 0) && d.verbose && d.log != nil {
d.log.Debug("Scanning SQL file", "lines_processed", lineNumber)
}
}
@@ -430,9 +430,11 @@ func (d *Diagnoser) diagnoseClusterArchive(filePath string, result *DiagnoseResu
}
}
if d.log != nil {
d.log.Info("Verifying cluster archive integrity",
"size", fmt.Sprintf("%.1f GB", float64(result.FileSize)/(1024*1024*1024)),
"timeout", fmt.Sprintf("%d min", timeoutMinutes))
}
ctx, cancel := context.WithTimeout(context.Background(), time.Duration(timeoutMinutes)*time.Minute)
defer cancel()
@@ -561,7 +563,7 @@ func (d *Diagnoser) diagnoseClusterArchive(filePath string, result *DiagnoseResu
}
// For verbose mode, diagnose individual dumps inside the archive
if d.verbose && len(dumpFiles) > 0 {
if d.verbose && len(dumpFiles) > 0 && d.log != nil {
d.log.Info("Cluster archive contains databases", "count", len(dumpFiles))
for _, df := range dumpFiles {
d.log.Info(" - " + df)
@@ -684,9 +686,11 @@ func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*Diagno
}
}
if d.log != nil {
d.log.Info("Listing cluster archive contents",
"size", fmt.Sprintf("%.1f GB", float64(archiveInfo.Size())/(1024*1024*1024)),
"timeout", fmt.Sprintf("%d min", timeoutMinutes))
}
listCtx, listCancel := context.WithTimeout(context.Background(), time.Duration(timeoutMinutes)*time.Minute)
defer listCancel()
@@ -766,7 +770,9 @@ func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*Diagno
return []*DiagnoseResult{errResult}, nil
}
if d.log != nil {
d.log.Debug("Archive listing streamed successfully", "total_files", fileCount, "relevant_files", len(files))
}
// Check if we have enough disk space (estimate 4x archive size needed)
// archiveInfo already obtained at function start
@@ -781,7 +787,9 @@ func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*Diagno
testCancel()
}
if d.log != nil {
d.log.Info("Archive listing successful", "files", len(files))
}
// Try full extraction - NO TIMEOUT here as large archives can take a long time
// Use a generous timeout (30 minutes) for very large archives
@@ -870,11 +878,15 @@ func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*Diagno
}
dumpPath := filepath.Join(dumpsDir, name)
if d.log != nil {
d.log.Info("Diagnosing dump file", "file", name)
}
result, err := d.DiagnoseFile(dumpPath)
if err != nil {
if d.log != nil {
d.log.Warn("Failed to diagnose file", "file", name, "error", err)
}
continue
}
results = append(results, result)


@@ -1,9 +1,12 @@
package restore
import (
"archive/tar"
"compress/gzip"
"context"
"database/sql"
"fmt"
"io"
"os"
"os/exec"
"path/filepath"
@@ -24,6 +27,17 @@ import (
_ "github.com/jackc/pgx/v5/stdlib" // PostgreSQL driver
)
// ProgressCallback is called with progress updates during long operations
// Parameters: current bytes/items done, total bytes/items, description
type ProgressCallback func(current, total int64, description string)
// DatabaseProgressCallback is called with database count progress during cluster restore
type DatabaseProgressCallback func(done, total int, dbName string)
// DatabaseProgressWithTimingCallback is called with database progress including timing info
// Parameters: done count, total count, database name, elapsed time for current restore phase, avg duration per DB
type DatabaseProgressWithTimingCallback func(done, total int, dbName string, phaseElapsed, avgPerDB time.Duration)
// Engine handles database restore operations
type Engine struct {
cfg *config.Config
@@ -33,6 +47,11 @@ type Engine struct {
detailedReporter *progress.DetailedReporter
dryRun bool
debugLogPath string // Path to save debug log on error
// TUI progress callback for detailed progress reporting
progressCallback ProgressCallback
dbProgressCallback DatabaseProgressCallback
dbProgressTimingCallback DatabaseProgressWithTimingCallback
}
// New creates a new restore engine
@@ -88,6 +107,42 @@ func (e *Engine) SetDebugLogPath(path string) {
e.debugLogPath = path
}
// SetProgressCallback sets a callback for detailed progress reporting (for TUI mode)
func (e *Engine) SetProgressCallback(cb ProgressCallback) {
e.progressCallback = cb
}
// SetDatabaseProgressCallback sets a callback for database count progress during cluster restore
func (e *Engine) SetDatabaseProgressCallback(cb DatabaseProgressCallback) {
e.dbProgressCallback = cb
}
// SetDatabaseProgressWithTimingCallback sets a callback for database progress with timing info
func (e *Engine) SetDatabaseProgressWithTimingCallback(cb DatabaseProgressWithTimingCallback) {
e.dbProgressTimingCallback = cb
}
// reportProgress safely calls the progress callback if set
func (e *Engine) reportProgress(current, total int64, description string) {
if e.progressCallback != nil {
e.progressCallback(current, total, description)
}
}
// reportDatabaseProgress safely calls the database progress callback if set
func (e *Engine) reportDatabaseProgress(done, total int, dbName string) {
if e.dbProgressCallback != nil {
e.dbProgressCallback(done, total, dbName)
}
}
// reportDatabaseProgressWithTiming safely calls the timing-aware callback if set
func (e *Engine) reportDatabaseProgressWithTiming(done, total int, dbName string, phaseElapsed, avgPerDB time.Duration) {
if e.dbProgressTimingCallback != nil {
e.dbProgressTimingCallback(done, total, dbName, phaseElapsed, avgPerDB)
}
}
// loggerAdapter adapts our logger to the progress.Logger interface
type loggerAdapter struct {
logger logger.Logger
@@ -387,16 +442,18 @@ func (e *Engine) restorePostgreSQLSQL(ctx context.Context, archivePath, targetDB
var cmd []string
// For localhost, omit -h to use Unix socket (avoids Ident auth issues)
// But always include -p for port (in case of non-standard port)
hostArg := ""
portArg := fmt.Sprintf("-p %d", e.cfg.Port)
if e.cfg.Host != "localhost" && e.cfg.Host != "" {
hostArg = fmt.Sprintf("-h %s -p %d", e.cfg.Host, e.cfg.Port)
hostArg = fmt.Sprintf("-h %s", e.cfg.Host)
}
if compressed {
// Use ON_ERROR_STOP=1 to fail fast on first error (prevents millions of errors on truncated dumps)
psqlCmd := fmt.Sprintf("psql -U %s -d %s -v ON_ERROR_STOP=1", e.cfg.User, targetDB)
psqlCmd := fmt.Sprintf("psql %s -U %s -d %s -v ON_ERROR_STOP=1", portArg, e.cfg.User, targetDB)
if hostArg != "" {
psqlCmd = fmt.Sprintf("psql %s -U %s -d %s -v ON_ERROR_STOP=1", hostArg, e.cfg.User, targetDB)
psqlCmd = fmt.Sprintf("psql %s %s -U %s -d %s -v ON_ERROR_STOP=1", hostArg, portArg, e.cfg.User, targetDB)
}
// Set PGPASSWORD in the bash command for password-less auth
cmd = []string{
@@ -417,6 +474,7 @@ func (e *Engine) restorePostgreSQLSQL(ctx context.Context, archivePath, targetDB
} else {
cmd = []string{
"psql",
"-p", fmt.Sprintf("%d", e.cfg.Port),
"-U", e.cfg.User,
"-d", targetDB,
"-v", "ON_ERROR_STOP=1",
@@ -999,6 +1057,11 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
var successCount, failCount int32
var mu sync.Mutex // Protect shared resources (progress, logger)
// Timing tracking for restore phase progress
restorePhaseStart := time.Now()
var completedDBTimes []time.Duration // Track duration for each completed DB restore
var completedDBTimesMu sync.Mutex
// Create semaphore to limit concurrency
semaphore := make(chan struct{}, parallelism)
var wg sync.WaitGroup
@@ -1024,6 +1087,19 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
}
}()
// Check for context cancellation before starting
if ctx.Err() != nil {
e.log.Warn("Context cancelled - skipping database restore", "file", filename)
atomic.AddInt32(&failCount, 1)
restoreErrorsMu.Lock()
restoreErrors = multierror.Append(restoreErrors, fmt.Errorf("%s: restore skipped (context cancelled)", strings.TrimSuffix(strings.TrimSuffix(filename, ".dump"), ".sql.gz")))
restoreErrorsMu.Unlock()
return
}
// Track timing for this database restore
dbRestoreStart := time.Now()
// Update estimator progress (thread-safe)
mu.Lock()
estimator.UpdateProgress(idx)
@@ -1036,10 +1112,26 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
dbProgress := 15 + int(float64(idx)/float64(totalDBs)*85.0)
// Calculate average time per DB and report progress with timing
completedDBTimesMu.Lock()
var avgPerDB time.Duration
if len(completedDBTimes) > 0 {
var totalDuration time.Duration
for _, d := range completedDBTimes {
totalDuration += d
}
avgPerDB = totalDuration / time.Duration(len(completedDBTimes))
}
phaseElapsed := time.Since(restorePhaseStart)
completedDBTimesMu.Unlock()
mu.Lock()
statusMsg := fmt.Sprintf("Restoring database %s (%d/%d)", dbName, idx+1, totalDBs)
e.progress.Update(statusMsg)
e.log.Info("Restoring database", "name", dbName, "file", dumpFile, "progress", dbProgress)
// Report database progress for TUI (both callbacks)
e.reportDatabaseProgress(idx, totalDBs, dbName)
e.reportDatabaseProgressWithTiming(idx, totalDBs, dbName, phaseElapsed, avgPerDB)
mu.Unlock()
// STEP 1: Drop existing database completely (clean slate)
@@ -1104,6 +1196,12 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
return
}
// Track completed database restore duration for ETA calculation
dbRestoreDuration := time.Since(dbRestoreStart)
completedDBTimesMu.Lock()
completedDBTimes = append(completedDBTimes, dbRestoreDuration)
completedDBTimesMu.Unlock()
atomic.AddInt32(&successCount, 1)
}(dbIndex, entry.Name())
@@ -1116,6 +1214,35 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
successCountFinal := int(atomic.LoadInt32(&successCount))
failCountFinal := int(atomic.LoadInt32(&failCount))
// SANITY CHECK: Verify all databases were accounted for
// This catches any goroutine that exited without updating counters
accountedFor := successCountFinal + failCountFinal
if accountedFor != totalDBs {
missingCount := totalDBs - accountedFor
e.log.Error("INTERNAL ERROR: Some database restore goroutines did not report status",
"expected", totalDBs,
"success", successCountFinal,
"failed", failCountFinal,
"unaccounted", missingCount)
// Treat unaccounted databases as failures
failCountFinal += missingCount
restoreErrorsMu.Lock()
restoreErrors = multierror.Append(restoreErrors, fmt.Errorf("%d database(s) did not complete (possible goroutine crash or deadlock)", missingCount))
restoreErrorsMu.Unlock()
}
// CRITICAL: Check if no databases were restored at all
if successCountFinal == 0 {
e.progress.Fail(fmt.Sprintf("Cluster restore FAILED: 0 of %d databases restored", totalDBs))
operation.Fail("No databases were restored")
if failCountFinal > 0 && restoreErrors != nil {
return fmt.Errorf("cluster restore failed: all %d database(s) failed:\n%s", failCountFinal, restoreErrors.Error())
}
return fmt.Errorf("cluster restore failed: no databases were restored (0 of %d total). Check PostgreSQL logs for details", totalDBs)
}
if failCountFinal > 0 {
// Format multi-error with detailed output
restoreErrors.ErrorFormat = func(errs []error) string {
@@ -1146,8 +1273,144 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
return nil
}
// extractArchive extracts a tar.gz archive
// extractArchive extracts a tar.gz archive with progress reporting
func (e *Engine) extractArchive(ctx context.Context, archivePath, destDir string) error {
// If progress callback is set, use Go's archive/tar for progress tracking
if e.progressCallback != nil {
return e.extractArchiveWithProgress(ctx, archivePath, destDir)
}
// Otherwise use fast shell tar (no progress)
return e.extractArchiveShell(ctx, archivePath, destDir)
}
// extractArchiveWithProgress extracts using Go's archive/tar with detailed progress reporting
func (e *Engine) extractArchiveWithProgress(ctx context.Context, archivePath, destDir string) error {
// Get archive size for progress calculation
archiveInfo, err := os.Stat(archivePath)
if err != nil {
return fmt.Errorf("failed to stat archive: %w", err)
}
totalSize := archiveInfo.Size()
// Open the archive file
file, err := os.Open(archivePath)
if err != nil {
return fmt.Errorf("failed to open archive: %w", err)
}
defer file.Close()
// Wrap with progress reader
progressReader := &progressReader{
reader: file,
totalSize: totalSize,
callback: e.progressCallback,
desc: "Extracting archive",
}
// Create gzip reader
gzReader, err := gzip.NewReader(progressReader)
if err != nil {
return fmt.Errorf("failed to create gzip reader: %w", err)
}
defer gzReader.Close()
// Create tar reader
tarReader := tar.NewReader(gzReader)
// Extract files
for {
select {
case <-ctx.Done():
return ctx.Err()
default:
}
header, err := tarReader.Next()
if err == io.EOF {
break // End of archive
}
if err != nil {
return fmt.Errorf("failed to read tar header: %w", err)
}
// Sanitize and validate path
targetPath := filepath.Join(destDir, header.Name)
// Security check: ensure path is within destDir (prevent path traversal)
if !strings.HasPrefix(filepath.Clean(targetPath), filepath.Clean(destDir)) {
e.log.Warn("Skipping potentially malicious path in archive", "path", header.Name)
continue
}
switch header.Typeflag {
case tar.TypeDir:
if err := os.MkdirAll(targetPath, 0755); err != nil {
return fmt.Errorf("failed to create directory %s: %w", targetPath, err)
}
case tar.TypeReg:
// Ensure parent directory exists
if err := os.MkdirAll(filepath.Dir(targetPath), 0755); err != nil {
return fmt.Errorf("failed to create parent directory: %w", err)
}
// Create the file
outFile, err := os.OpenFile(targetPath, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, os.FileMode(header.Mode))
if err != nil {
return fmt.Errorf("failed to create file %s: %w", targetPath, err)
}
// Copy file contents
if _, err := io.Copy(outFile, tarReader); err != nil {
outFile.Close()
return fmt.Errorf("failed to write file %s: %w", targetPath, err)
}
outFile.Close()
case tar.TypeSymlink:
// Handle symlinks (common in some archives)
if err := os.Symlink(header.Linkname, targetPath); err != nil {
// Ignore symlink errors (may already exist or not supported)
e.log.Debug("Could not create symlink", "path", targetPath, "target", header.Linkname)
}
}
}
// Final progress update
e.reportProgress(totalSize, totalSize, "Extraction complete")
return nil
}
// progressReader wraps an io.Reader to report read progress
type progressReader struct {
reader io.Reader
totalSize int64
bytesRead int64
callback ProgressCallback
desc string
lastReport time.Time
reportEvery time.Duration
}
func (pr *progressReader) Read(p []byte) (n int, err error) {
n, err = pr.reader.Read(p)
pr.bytesRead += int64(n)
// Throttle progress reporting to every 100ms
if pr.reportEvery == 0 {
pr.reportEvery = 100 * time.Millisecond
}
if time.Since(pr.lastReport) > pr.reportEvery {
if pr.callback != nil {
pr.callback(pr.bytesRead, pr.totalSize, pr.desc)
}
pr.lastReport = time.Now()
}
return n, err
}
// extractArchiveShell extracts using shell tar command (faster but no progress)
func (e *Engine) extractArchiveShell(ctx context.Context, archivePath, destDir string) error {
cmd := exec.CommandContext(ctx, "tar", "-xzf", archivePath, "-C", destDir)
// Stream stderr to avoid memory issues - tar can produce lots of output for large archives
@@ -1199,6 +1462,8 @@ func (e *Engine) extractArchive(ctx context.Context, archivePath, destDir string
}
// restoreGlobals restores global objects (roles, tablespaces)
// Note: psql returns 0 even when some statements fail (e.g., role already exists)
// We track errors but only fail on FATAL errors that would prevent restore
func (e *Engine) restoreGlobals(ctx context.Context, globalsFile string) error {
args := []string{
"-p", fmt.Sprintf("%d", e.cfg.Port),
@@ -1228,6 +1493,8 @@ func (e *Engine) restoreGlobals(ctx context.Context, globalsFile string) error {
// Read stderr in chunks in goroutine
var lastError string
var errorCount int
var fatalError bool
stderrDone := make(chan struct{})
go func() {
defer close(stderrDone)
@@ -1236,9 +1503,23 @@ func (e *Engine) restoreGlobals(ctx context.Context, globalsFile string) error {
n, err := stderr.Read(buf)
if n > 0 {
chunk := string(buf[:n])
if strings.Contains(chunk, "ERROR") || strings.Contains(chunk, "FATAL") {
// Track different error types
if strings.Contains(chunk, "FATAL") {
fatalError = true
lastError = chunk
e.log.Warn("Globals restore stderr", "output", chunk)
e.log.Error("Globals restore FATAL error", "output", chunk)
} else if strings.Contains(chunk, "ERROR") {
errorCount++
lastError = chunk
// Only log first few errors to avoid spam
if errorCount <= 5 {
// Check if it's an ignorable "already exists" error
if strings.Contains(chunk, "already exists") {
e.log.Debug("Globals restore: object already exists (expected)", "output", chunk)
} else {
e.log.Warn("Globals restore error", "output", chunk)
}
}
}
}
if err != nil {
@@ -1266,10 +1547,23 @@ func (e *Engine) restoreGlobals(ctx context.Context, globalsFile string) error {
<-stderrDone
// Only fail on actual command errors or FATAL PostgreSQL errors
// Regular ERROR messages (like "role already exists") are expected
if cmdErr != nil {
return fmt.Errorf("failed to restore globals: %w (last error: %s)", cmdErr, lastError)
}
// If we had FATAL errors, those are real problems
if fatalError {
return fmt.Errorf("globals restore had FATAL error: %s", lastError)
}
// Log summary if there were errors (but don't fail)
if errorCount > 0 {
e.log.Info("Globals restore completed with some errors (usually 'already exists' - expected)",
"error_count", errorCount)
}
return nil
}
@@ -1337,6 +1631,7 @@ func (e *Engine) terminateConnections(ctx context.Context, dbName string) error
}
// dropDatabaseIfExists drops a database completely (clean slate)
// Uses PostgreSQL 13+ WITH (FORCE) option to forcefully drop even with active connections
func (e *Engine) dropDatabaseIfExists(ctx context.Context, dbName string) error {
// First terminate all connections
if err := e.terminateConnections(ctx, dbName); err != nil {
@@ -1346,28 +1641,69 @@ func (e *Engine) dropDatabaseIfExists(ctx context.Context, dbName string) error
// Wait a moment for connections to terminate
time.Sleep(500 * time.Millisecond)
// Drop the database
// Try to revoke new connections (prevents race condition)
// This only works if we have the privilege to do so
revokeArgs := []string{
"-p", fmt.Sprintf("%d", e.cfg.Port),
"-U", e.cfg.User,
"-d", "postgres",
"-c", fmt.Sprintf("REVOKE CONNECT ON DATABASE \"%s\" FROM PUBLIC", dbName),
}
if e.cfg.Host != "localhost" && e.cfg.Host != "127.0.0.1" && e.cfg.Host != "" {
revokeArgs = append([]string{"-h", e.cfg.Host}, revokeArgs...)
}
revokeCmd := exec.CommandContext(ctx, "psql", revokeArgs...)
revokeCmd.Env = append(os.Environ(), fmt.Sprintf("PGPASSWORD=%s", e.cfg.Password))
revokeCmd.Run() // Ignore errors - database might not exist
// Terminate connections again after revoking connect privilege
e.terminateConnections(ctx, dbName)
time.Sleep(200 * time.Millisecond)
// Try DROP DATABASE WITH (FORCE) first (PostgreSQL 13+)
// This forcefully terminates connections and drops the database atomically
forceArgs := []string{
"-p", fmt.Sprintf("%d", e.cfg.Port),
"-U", e.cfg.User,
"-d", "postgres",
"-c", fmt.Sprintf("DROP DATABASE IF EXISTS \"%s\" WITH (FORCE)", dbName),
}
if e.cfg.Host != "localhost" && e.cfg.Host != "127.0.0.1" && e.cfg.Host != "" {
forceArgs = append([]string{"-h", e.cfg.Host}, forceArgs...)
}
forceCmd := exec.CommandContext(ctx, "psql", forceArgs...)
forceCmd.Env = append(os.Environ(), fmt.Sprintf("PGPASSWORD=%s", e.cfg.Password))
output, err := forceCmd.CombinedOutput()
if err == nil {
e.log.Info("Dropped existing database (with FORCE)", "name", dbName)
return nil
}
// If FORCE option failed (PostgreSQL < 13), try regular drop
if strings.Contains(string(output), "syntax error") || strings.Contains(string(output), "WITH (FORCE)") {
e.log.Debug("WITH (FORCE) not supported, using standard DROP", "name", dbName)
args := []string{
"-p", fmt.Sprintf("%d", e.cfg.Port),
"-U", e.cfg.User,
"-d", "postgres",
"-c", fmt.Sprintf("DROP DATABASE IF EXISTS \"%s\"", dbName),
}
// Only add -h flag if host is not localhost (to use Unix socket for peer auth)
if e.cfg.Host != "localhost" && e.cfg.Host != "127.0.0.1" && e.cfg.Host != "" {
args = append([]string{"-h", e.cfg.Host}, args...)
}
cmd := exec.CommandContext(ctx, "psql", args...)
// Always set PGPASSWORD (empty string is fine for peer/ident auth)
cmd.Env = append(os.Environ(), fmt.Sprintf("PGPASSWORD=%s", e.cfg.Password))
output, err := cmd.CombinedOutput()
output, err = cmd.CombinedOutput()
if err != nil {
return fmt.Errorf("failed to drop database '%s': %w\nOutput: %s", dbName, err, string(output))
}
} else if err != nil {
return fmt.Errorf("failed to drop database '%s': %w\nOutput: %s", dbName, err, string(output))
}
e.log.Info("Dropped existing database", "name", dbName)
return nil
@@ -1408,12 +1744,14 @@ func (e *Engine) ensureMySQLDatabaseExists(ctx context.Context, dbName string) e
}
// ensurePostgresDatabaseExists checks if a PostgreSQL database exists and creates it if not
// It attempts to extract encoding/locale from the dump file to preserve original settings
func (e *Engine) ensurePostgresDatabaseExists(ctx context.Context, dbName string) error {
// Skip creation for postgres and template databases - they should already exist
if dbName == "postgres" || dbName == "template0" || dbName == "template1" {
e.log.Info("Skipping create for system database (assume exists)", "name", dbName)
return nil
}
// Build psql command with authentication
buildPsqlCmd := func(ctx context.Context, database, query string) *exec.Cmd {
args := []string{
@@ -1453,14 +1791,31 @@ func (e *Engine) ensurePostgresDatabaseExists(ctx context.Context, dbName string
// Database doesn't exist, create it
// IMPORTANT: Use template0 to avoid duplicate definition errors from local additions to template1
// Also use UTF8 encoding explicitly as it's the most common and safest choice
// See PostgreSQL docs: https://www.postgresql.org/docs/current/app-pgrestore.html#APP-PGRESTORE-NOTES
e.log.Info("Creating database from template0", "name", dbName)
e.log.Info("Creating database from template0 with UTF8 encoding", "name", dbName)
// Get server's default locale for LC_COLLATE and LC_CTYPE
// This ensures compatibility while using the correct encoding
localeCmd := buildPsqlCmd(ctx, "postgres", "SHOW lc_collate")
localeOutput, _ := localeCmd.CombinedOutput()
serverLocale := strings.TrimSpace(string(localeOutput))
if serverLocale == "" {
serverLocale = "en_US.UTF-8" // Fallback to common default
}
// Build CREATE DATABASE command with encoding and locale
// Using ENCODING 'UTF8' explicitly ensures the dump can be restored
createSQL := fmt.Sprintf(
"CREATE DATABASE \"%s\" WITH TEMPLATE template0 ENCODING 'UTF8' LC_COLLATE '%s' LC_CTYPE '%s'",
dbName, serverLocale, serverLocale,
)
createArgs := []string{
"-p", fmt.Sprintf("%d", e.cfg.Port),
"-U", e.cfg.User,
"-d", "postgres",
"-c", fmt.Sprintf("CREATE DATABASE \"%s\" WITH TEMPLATE template0", dbName),
"-c", createSQL,
}
// Only add -h flag if host is not localhost (to use Unix socket for peer auth)
@@ -1475,10 +1830,28 @@ func (e *Engine) ensurePostgresDatabaseExists(ctx context.Context, dbName string
output, err = createCmd.CombinedOutput()
if err != nil {
// Log the error and include the psql output in the returned error to aid debugging
// If encoding/locale fails, try simpler CREATE DATABASE
e.log.Warn("Database creation with encoding failed, trying simple create", "name", dbName, "error", err)
simpleArgs := []string{
"-p", fmt.Sprintf("%d", e.cfg.Port),
"-U", e.cfg.User,
"-d", "postgres",
"-c", fmt.Sprintf("CREATE DATABASE \"%s\" WITH TEMPLATE template0", dbName),
}
if e.cfg.Host != "localhost" && e.cfg.Host != "127.0.0.1" && e.cfg.Host != "" {
simpleArgs = append([]string{"-h", e.cfg.Host}, simpleArgs...)
}
simpleCmd := exec.CommandContext(ctx, "psql", simpleArgs...)
simpleCmd.Env = append(os.Environ(), fmt.Sprintf("PGPASSWORD=%s", e.cfg.Password))
output, err = simpleCmd.CombinedOutput()
if err != nil {
e.log.Warn("Database creation failed", "name", dbName, "error", err, "output", string(output))
return fmt.Errorf("failed to create database '%s': %w (output: %s)", dbName, err, strings.TrimSpace(string(output)))
}
}
e.log.Info("Successfully created database from template0", "name", dbName)
return nil
@@ -1761,6 +2134,8 @@ type OriginalSettings struct {
}
// boostPostgreSQLSettings boosts multiple PostgreSQL settings for large restores
// NOTE: max_locks_per_transaction requires a PostgreSQL RESTART to take effect!
// maintenance_work_mem can be changed with pg_reload_conf().
func (e *Engine) boostPostgreSQLSettings(ctx context.Context, lockBoostValue int) (*OriginalSettings, error) {
connStr := e.buildConnString()
db, err := sql.Open("pgx", connStr)
@@ -1780,30 +2155,156 @@ func (e *Engine) boostPostgreSQLSettings(ctx context.Context, lockBoostValue int
// Get current maintenance_work_mem
db.QueryRowContext(ctx, "SHOW maintenance_work_mem").Scan(&original.MaintenanceWorkMem)
// Boost max_locks_per_transaction (if not already high enough)
// CRITICAL: max_locks_per_transaction requires a PostgreSQL RESTART!
// pg_reload_conf() is NOT sufficient for this parameter.
needsRestart := false
if original.MaxLocks < lockBoostValue {
_, err = db.ExecContext(ctx, fmt.Sprintf("ALTER SYSTEM SET max_locks_per_transaction = %d", lockBoostValue))
if err != nil {
e.log.Warn("Could not boost max_locks_per_transaction", "error", err)
e.log.Warn("Could not set max_locks_per_transaction", "error", err)
} else {
needsRestart = true
e.log.Warn("max_locks_per_transaction requires PostgreSQL restart to take effect",
"current", original.MaxLocks,
"target", lockBoostValue)
}
}
// Boost maintenance_work_mem to 2GB for faster index creation
// (this one CAN be applied via pg_reload_conf)
_, err = db.ExecContext(ctx, "ALTER SYSTEM SET maintenance_work_mem = '2GB'")
if err != nil {
e.log.Warn("Could not boost maintenance_work_mem", "error", err)
}
// Reload config to apply changes (no restart needed for these settings)
// Reload config to apply maintenance_work_mem
_, err = db.ExecContext(ctx, "SELECT pg_reload_conf()")
if err != nil {
return original, fmt.Errorf("failed to reload config: %w", err)
}
// If max_locks_per_transaction needs a restart, try to do it
if needsRestart {
if restarted := e.tryRestartPostgreSQL(ctx); restarted {
e.log.Info("PostgreSQL restarted successfully - max_locks_per_transaction now active")
// Wait for PostgreSQL to be ready
time.Sleep(3 * time.Second)
} else {
// Cannot restart - warn user but continue
// The setting is written to postgresql.auto.conf and will take effect on next restart
e.log.Warn("=" + strings.Repeat("=", 70))
e.log.Warn("NOTE: max_locks_per_transaction change requires PostgreSQL restart")
e.log.Warn("Current value: " + strconv.Itoa(original.MaxLocks) + ", target: " + strconv.Itoa(lockBoostValue))
e.log.Warn("")
e.log.Warn("The setting has been saved to postgresql.auto.conf and will take")
e.log.Warn("effect on the next PostgreSQL restart. If restore fails with")
e.log.Warn("'out of shared memory' errors, ask your DBA to restart PostgreSQL.")
e.log.Warn("")
e.log.Warn("Continuing with restore - this may succeed if your databases")
e.log.Warn("don't have many large objects (BLOBs).")
e.log.Warn("=" + strings.Repeat("=", 70))
// Continue anyway - might work for small restores or DBs without BLOBs
}
}
return original, nil
}
// canRestartPostgreSQL checks if we have the ability to restart PostgreSQL
// Returns false if running in a restricted environment (e.g., su postgres on enterprise systems)
func (e *Engine) canRestartPostgreSQL() bool {
// Check if we're running as postgres user - if so, we likely can't restart
// because PostgreSQL is managed by init/systemd, not directly by pg_ctl
currentUser := os.Getenv("USER")
if currentUser == "" {
currentUser = os.Getenv("LOGNAME")
}
// If we're the postgres user, check if we have sudo access
if currentUser == "postgres" {
// Try a quick sudo check - if this fails, we can't restart
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
defer cancel()
cmd := exec.CommandContext(ctx, "sudo", "-n", "true")
cmd.Stdin = nil
if err := cmd.Run(); err != nil {
e.log.Info("Running as postgres user without sudo access - cannot restart PostgreSQL",
"user", currentUser,
"hint", "Ask system administrator to restart PostgreSQL if needed")
return false
}
}
return true
}
// tryRestartPostgreSQL attempts to restart PostgreSQL using various methods
// Returns true if restart was successful
// IMPORTANT: Uses short timeouts and non-interactive sudo to avoid blocking on password prompts
// NOTE: This function will return false immediately if running as postgres without sudo
func (e *Engine) tryRestartPostgreSQL(ctx context.Context) bool {
// First check if we can even attempt a restart
if !e.canRestartPostgreSQL() {
e.log.Info("Skipping PostgreSQL restart attempt (no privileges)")
return false
}
e.progress.Update("Attempting PostgreSQL restart for lock settings...")
// Use short timeout for each restart attempt (don't block on sudo password prompts)
runWithTimeout := func(args ...string) bool {
cmdCtx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
cmd := exec.CommandContext(cmdCtx, args[0], args[1:]...)
// Set stdin to /dev/null to prevent sudo from waiting for password
cmd.Stdin = nil
return cmd.Run() == nil
}
// Method 1: systemctl (most common on modern Linux) - use sudo -n for non-interactive
if runWithTimeout("sudo", "-n", "systemctl", "restart", "postgresql") {
return true
}
// Method 2: systemctl with version suffix (e.g., postgresql-15)
for _, ver := range []string{"17", "16", "15", "14", "13", "12"} {
if runWithTimeout("sudo", "-n", "systemctl", "restart", "postgresql-"+ver) {
return true
}
}
// Method 3: service command (older systems)
if runWithTimeout("sudo", "-n", "service", "postgresql", "restart") {
return true
}
// Method 4: pg_ctl as postgres user (if we ARE postgres user, no sudo needed)
if runWithTimeout("pg_ctl", "restart", "-D", "/var/lib/postgresql/data", "-m", "fast") {
return true
}
// Method 5: Try common PGDATA paths with pg_ctl directly (for postgres user)
pgdataPaths := []string{
"/var/lib/pgsql/data",
"/var/lib/pgsql/17/data",
"/var/lib/pgsql/16/data",
"/var/lib/pgsql/15/data",
"/var/lib/postgresql/17/main",
"/var/lib/postgresql/16/main",
"/var/lib/postgresql/15/main",
}
for _, pgdata := range pgdataPaths {
if runWithTimeout("pg_ctl", "restart", "-D", pgdata, "-m", "fast") {
return true
}
}
return false
}
// resetPostgreSQLSettings restores original PostgreSQL settings
// NOTE: max_locks_per_transaction changes are written but require restart to take effect.
// We don't restart here since we're done with the restore.
func (e *Engine) resetPostgreSQLSettings(ctx context.Context, original *OriginalSettings) error {
connStr := e.buildConnString()
db, err := sql.Open("pgx", connStr)
@@ -1812,25 +2313,28 @@ func (e *Engine) resetPostgreSQLSettings(ctx context.Context, original *Original
}
defer db.Close()
// Reset max_locks_per_transaction
// Reset max_locks_per_transaction (will take effect on next restart)
if original.MaxLocks == 64 { // Default
db.ExecContext(ctx, "ALTER SYSTEM RESET max_locks_per_transaction")
} else if original.MaxLocks > 0 {
db.ExecContext(ctx, fmt.Sprintf("ALTER SYSTEM SET max_locks_per_transaction = %d", original.MaxLocks))
}
// Reset maintenance_work_mem
// Reset maintenance_work_mem (takes effect immediately with reload)
if original.MaintenanceWorkMem == "64MB" { // Default
db.ExecContext(ctx, "ALTER SYSTEM RESET maintenance_work_mem")
} else if original.MaintenanceWorkMem != "" {
db.ExecContext(ctx, fmt.Sprintf("ALTER SYSTEM SET maintenance_work_mem = '%s'", original.MaintenanceWorkMem))
}
// Reload config
// Reload config (only maintenance_work_mem will take effect immediately)
_, err = db.ExecContext(ctx, "SELECT pg_reload_conf()")
if err != nil {
return fmt.Errorf("failed to reload config: %w", err)
}
e.log.Info("PostgreSQL settings reset queued",
"note", "max_locks_per_transaction will revert on next PostgreSQL restart")
return nil
}


@@ -201,10 +201,19 @@ func (e *Engine) checkPostgreSQL(ctx context.Context, result *PreflightResult) {
result.PostgreSQL.IsSuperuser = isSuperuser
}
// Add info/warnings
// CRITICAL: max_locks_per_transaction requires PostgreSQL RESTART to change!
// Warn users loudly about this - it's the #1 cause of "out of shared memory" errors
if result.PostgreSQL.MaxLocksPerTransaction < 256 {
e.log.Info("PostgreSQL max_locks_per_transaction is low - will auto-boost",
"current", result.PostgreSQL.MaxLocksPerTransaction)
e.log.Warn("PostgreSQL max_locks_per_transaction is LOW",
"current", result.PostgreSQL.MaxLocksPerTransaction,
"recommended", "256+",
"note", "REQUIRES PostgreSQL restart to change!")
result.Warnings = append(result.Warnings,
fmt.Sprintf("max_locks_per_transaction=%d is low (recommend 256+). "+
"This setting requires PostgreSQL RESTART to change. "+
"BLOB-heavy databases may fail with 'out of shared memory' error.",
result.PostgreSQL.MaxLocksPerTransaction))
}
// Parse shared_buffers and warn if very low


@@ -255,7 +255,9 @@ func (s *Safety) CheckDiskSpaceAt(archivePath string, checkDir string, multiplie
// Get available disk space
availableSpace, err := getDiskSpace(checkDir)
if err != nil {
if s.log != nil {
s.log.Warn("Cannot check disk space", "error", err)
}
return nil // Don't fail if we can't check
}
@@ -278,10 +280,12 @@ func (s *Safety) CheckDiskSpaceAt(archivePath string, checkDir string, multiplie
checkDir)
}
if s.log != nil {
s.log.Info("Disk space check passed",
"location", checkDir,
"required", FormatBytes(requiredSpace),
"available", FormatBytes(availableSpace))
}
return nil
}


@@ -251,13 +251,13 @@ func (m ArchiveBrowserModel) View() string {
var s strings.Builder
// Header
title := "[PKG] Backup Archives"
title := "[SELECT] Backup Archives"
if m.mode == "restore-single" {
title = "[PKG] Select Archive to Restore (Single Database)"
title = "[SELECT] Select Archive to Restore (Single Database)"
} else if m.mode == "restore-cluster" {
title = "[PKG] Select Archive to Restore (Cluster)"
title = "[SELECT] Select Archive to Restore (Cluster)"
} else if m.mode == "diagnose" {
title = "[SEARCH] Select Archive to Diagnose"
title = "[SELECT] Select Archive to Diagnose"
}
s.WriteString(titleStyle.Render(title))

internal/tui/backup_exec.go Executable file → Normal file

@@ -4,6 +4,7 @@ import (
"context"
"fmt"
"strings"
"sync"
"time"
tea "github.com/charmbracelet/bubbletea"
@@ -33,6 +34,56 @@ type BackupExecutionModel struct {
startTime time.Time
details []string
spinnerFrame int
// Database count progress (for cluster backup)
dbTotal int
dbDone int
dbName string // Current database being backed up
}
// sharedBackupProgressState holds progress state that can be safely accessed from callbacks
type sharedBackupProgressState struct {
mu sync.Mutex
dbTotal int
dbDone int
dbName string
hasUpdate bool
}
// Package-level shared progress state for backup operations
var (
currentBackupProgressMu sync.Mutex
currentBackupProgressState *sharedBackupProgressState
)
func setCurrentBackupProgress(state *sharedBackupProgressState) {
currentBackupProgressMu.Lock()
defer currentBackupProgressMu.Unlock()
currentBackupProgressState = state
}
func clearCurrentBackupProgress() {
currentBackupProgressMu.Lock()
defer currentBackupProgressMu.Unlock()
currentBackupProgressState = nil
}
func getCurrentBackupProgress() (dbTotal, dbDone int, dbName string, hasUpdate bool) {
currentBackupProgressMu.Lock()
defer currentBackupProgressMu.Unlock()
if currentBackupProgressState == nil {
return 0, 0, "", false
}
currentBackupProgressState.mu.Lock()
defer currentBackupProgressState.mu.Unlock()
hasUpdate = currentBackupProgressState.hasUpdate
currentBackupProgressState.hasUpdate = false
return currentBackupProgressState.dbTotal, currentBackupProgressState.dbDone,
currentBackupProgressState.dbName, hasUpdate
}
func NewBackupExecution(cfg *config.Config, log logger.Logger, parent tea.Model, ctx context.Context, backupType, dbName string, ratio int) BackupExecutionModel {
@@ -55,7 +106,6 @@ func NewBackupExecution(cfg *config.Config, log logger.Logger, parent tea.Model,
}
func (m BackupExecutionModel) Init() tea.Cmd {
// TUI handles all display through View() - no progress callbacks needed
return tea.Batch(
executeBackupWithTUIProgress(m.ctx, m.config, m.logger, m.backupType, m.databaseName, m.ratio),
backupTickCmd(),
@@ -91,6 +141,11 @@ func executeBackupWithTUIProgress(parentCtx context.Context, cfg *config.Config,
start := time.Now()
// Setup shared progress state for TUI polling
progressState := &sharedBackupProgressState{}
setCurrentBackupProgress(progressState)
defer clearCurrentBackupProgress()
dbClient, err := database.New(cfg, log)
if err != nil {
return backupCompleteMsg{
@@ -110,6 +165,16 @@ func executeBackupWithTUIProgress(parentCtx context.Context, cfg *config.Config,
// Pass nil as indicator - TUI itself handles all display, no stdout printing
engine := backup.NewSilent(cfg, log, dbClient, nil)
// Set database progress callback for cluster backups
engine.SetDatabaseProgressCallback(func(done, total int, currentDB string) {
progressState.mu.Lock()
progressState.dbDone = done
progressState.dbTotal = total
progressState.dbName = currentDB
progressState.hasUpdate = true
progressState.mu.Unlock()
})
var backupErr error
switch backupType {
case "single":
@@ -157,10 +222,21 @@ func (m BackupExecutionModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
// Increment spinner frame for smooth animation
m.spinnerFrame = (m.spinnerFrame + 1) % len(spinnerFrames)
// Update status based on elapsed time to show progress
// Poll for database progress updates from callbacks
dbTotal, dbDone, dbName, hasUpdate := getCurrentBackupProgress()
if hasUpdate {
m.dbTotal = dbTotal
m.dbDone = dbDone
m.dbName = dbName
}
// Update status based on progress and elapsed time
elapsedSec := int(time.Since(m.startTime).Seconds())
if elapsedSec < 2 {
if m.dbTotal > 0 && m.dbDone > 0 {
// We have real progress from cluster backup
m.status = fmt.Sprintf("Backing up database: %s", m.dbName)
} else if elapsedSec < 2 {
m.status = "Initializing backup..."
} else if elapsedSec < 5 {
if m.backupType == "cluster" {
@@ -234,6 +310,34 @@ func (m BackupExecutionModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
return m, nil
}
// renderBackupDatabaseProgressBar renders a progress bar for database count progress
func renderBackupDatabaseProgressBar(done, total int, dbName string, width int) string {
if total == 0 {
return ""
}
// Calculate progress percentage
percent := float64(done) / float64(total)
if percent > 1.0 {
percent = 1.0
}
// Calculate filled width
barWidth := width - 20 // Leave room for label and percentage
if barWidth < 10 {
barWidth = 10
}
filled := int(float64(barWidth) * percent)
if filled > barWidth {
filled = barWidth
}
// Build progress bar
bar := strings.Repeat("█", filled) + strings.Repeat("░", barWidth-filled)
return fmt.Sprintf(" Database: [%s] %d/%d", bar, done, total)
}
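For illustration (hand-worked, hypothetical values, not part of this change): with width=50 the bar body is 50-20=30 cells, so done=3, total=10 fills int(30*0.3)=9 cells and renders roughly:

 Database: [█████████░░░░░░░░░░░░░░░░░░░░░] 3/10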
func (m BackupExecutionModel) View() string {
var s strings.Builder
s.Grow(512) // Pre-allocate estimated capacity for better performance
@@ -255,31 +359,104 @@ func (m BackupExecutionModel) View() string {
s.WriteString(fmt.Sprintf(" %-10s %s\n", "Duration:", time.Since(m.startTime).Round(time.Second)))
s.WriteString("\n")
// Status with spinner
// Status display
if !m.done {
// Show database progress bar if we have progress data (cluster backup)
if m.dbTotal > 0 && m.dbDone > 0 {
// Show progress bar instead of spinner when we have real progress
progressBar := renderBackupDatabaseProgressBar(m.dbDone, m.dbTotal, m.dbName, 50)
s.WriteString(progressBar + "\n")
s.WriteString(fmt.Sprintf(" %s\n", m.status))
} else {
// Show spinner during initial phases
s.WriteString(fmt.Sprintf(" %s %s\n", spinnerFrames[m.spinnerFrame], m.status))
}
if !m.cancelling {
s.WriteString("\n [KEY] Press Ctrl+C or ESC to cancel\n")
}
} else {
s.WriteString(fmt.Sprintf(" %s\n\n", m.status))
// Show completion summary with detailed stats
if m.err != nil {
s.WriteString("\n")
s.WriteString(errorStyle.Render(" ╔══════════════════════════════════════════════════════════╗"))
s.WriteString("\n")
s.WriteString(errorStyle.Render(" ║ [FAIL] BACKUP FAILED ║"))
s.WriteString("\n")
s.WriteString(errorStyle.Render(" ╚══════════════════════════════════════════════════════════╝"))
s.WriteString("\n\n")
s.WriteString(errorStyle.Render(fmt.Sprintf(" Error: %v", m.err)))
s.WriteString("\n")
} else {
s.WriteString("\n")
s.WriteString(successStyle.Render(" ╔══════════════════════════════════════════════════════════╗"))
s.WriteString("\n")
s.WriteString(successStyle.Render(" ║ [OK] BACKUP COMPLETED SUCCESSFULLY ║"))
s.WriteString("\n")
s.WriteString(successStyle.Render(" ╚══════════════════════════════════════════════════════════╝"))
s.WriteString("\n\n")
// Summary section
s.WriteString(infoStyle.Render(" ─── Summary ─────────────────────────────────────────────"))
s.WriteString("\n\n")
// Backup type specific info
switch m.backupType {
case "cluster":
s.WriteString(" Type: Cluster Backup\n")
if m.dbTotal > 0 {
s.WriteString(fmt.Sprintf(" Databases: %d backed up\n", m.dbTotal))
}
case "single":
s.WriteString(" Type: Single Database Backup\n")
s.WriteString(fmt.Sprintf(" Database: %s\n", m.databaseName))
case "sample":
s.WriteString(" Type: Sample Backup\n")
s.WriteString(fmt.Sprintf(" Database: %s\n", m.databaseName))
s.WriteString(fmt.Sprintf(" Sample Ratio: %d\n", m.ratio))
}
s.WriteString("\n")
// Timing section
s.WriteString(infoStyle.Render(" ─── Timing ──────────────────────────────────────────────"))
s.WriteString("\n\n")
elapsed := time.Since(m.startTime)
s.WriteString(fmt.Sprintf(" Total Time: %s\n", formatBackupDuration(elapsed)))
if m.backupType == "cluster" && m.dbTotal > 0 {
avgPerDB := elapsed / time.Duration(m.dbTotal)
s.WriteString(fmt.Sprintf(" Avg per DB: %s\n", formatBackupDuration(avgPerDB)))
}
s.WriteString("\n [KEY] Press Enter or ESC to return to menu\n")
s.WriteString("\n")
s.WriteString(infoStyle.Render(" ─────────────────────────────────────────────────────────"))
s.WriteString("\n")
}
s.WriteString("\n")
s.WriteString(" [KEY] Press Enter or ESC to return to menu\n")
}
return s.String()
}
// formatBackupDuration formats a duration in human-readable form
func formatBackupDuration(d time.Duration) string {
if d < time.Minute {
return fmt.Sprintf("%.1fs", d.Seconds())
}
if d < time.Hour {
minutes := int(d.Minutes())
seconds := int(d.Seconds()) % 60
return fmt.Sprintf("%dm %ds", minutes, seconds)
}
hours := int(d.Hours())
minutes := int(d.Minutes()) % 60
return fmt.Sprintf("%dh %dm", hours, minutes)
}
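A few hand-worked samples from the branches above: 42s → "42.0s", 90s → "1m 30s", 61 minutes → "1h 1m".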

View File

@@ -230,7 +230,7 @@ func (m BackupManagerModel) View() string {
var s strings.Builder
// Title
s.WriteString(TitleStyle.Render("[DB] Backup Archive Manager"))
s.WriteString(TitleStyle.Render("[SELECT] Backup Archive Manager"))
s.WriteString("\n\n")
// Status line (no box, bold+color accents)

View File

@@ -0,0 +1,406 @@
package tui
import (
"fmt"
"strings"
"sync"
"time"
)
// DetailedProgress provides schollz-like progress information for TUI rendering
// This is a data structure that can be queried by Bubble Tea's View() method
type DetailedProgress struct {
mu sync.RWMutex
// Core progress
Total int64 // Total bytes or items
Current int64 // Current bytes or items done
// Display info
Description string // What operation is happening
Unit string // "bytes", "files", "databases", etc.
// Timing for ETA/speed calculation
StartTime time.Time
LastUpdate time.Time
SpeedWindow []speedSample // Rolling window for speed calculation
// State
IsIndeterminate bool // True if total is unknown (spinner mode)
IsComplete bool
IsFailed bool
ErrorMessage string
}
type speedSample struct {
timestamp time.Time
bytes int64
}
// NewDetailedProgress creates a progress tracker with known total
func NewDetailedProgress(total int64, description string) *DetailedProgress {
return &DetailedProgress{
Total: total,
Description: description,
Unit: "bytes",
StartTime: time.Now(),
LastUpdate: time.Now(),
SpeedWindow: make([]speedSample, 0, 20),
IsIndeterminate: total <= 0,
}
}
// NewDetailedProgressItems creates a progress tracker for item counts
func NewDetailedProgressItems(total int, description string) *DetailedProgress {
return &DetailedProgress{
Total: int64(total),
Description: description,
Unit: "items",
StartTime: time.Now(),
LastUpdate: time.Now(),
SpeedWindow: make([]speedSample, 0, 20),
IsIndeterminate: total <= 0,
}
}
// NewDetailedProgressSpinner creates an indeterminate progress tracker
func NewDetailedProgressSpinner(description string) *DetailedProgress {
return &DetailedProgress{
Total: -1,
Description: description,
Unit: "",
StartTime: time.Now(),
LastUpdate: time.Now(),
SpeedWindow: make([]speedSample, 0, 20),
IsIndeterminate: true,
}
}
// Add adds to the current progress
func (dp *DetailedProgress) Add(n int64) {
dp.mu.Lock()
defer dp.mu.Unlock()
dp.Current += n
dp.LastUpdate = time.Now()
// Add speed sample
dp.SpeedWindow = append(dp.SpeedWindow, speedSample{
timestamp: dp.LastUpdate,
bytes: dp.Current,
})
// Keep only last 20 samples for speed calculation
if len(dp.SpeedWindow) > 20 {
dp.SpeedWindow = dp.SpeedWindow[len(dp.SpeedWindow)-20:]
}
}
// Set sets the current progress to a specific value
func (dp *DetailedProgress) Set(n int64) {
dp.mu.Lock()
defer dp.mu.Unlock()
dp.Current = n
dp.LastUpdate = time.Now()
// Add speed sample
dp.SpeedWindow = append(dp.SpeedWindow, speedSample{
timestamp: dp.LastUpdate,
bytes: dp.Current,
})
if len(dp.SpeedWindow) > 20 {
dp.SpeedWindow = dp.SpeedWindow[len(dp.SpeedWindow)-20:]
}
}
// SetTotal updates the total (useful when total becomes known during operation)
func (dp *DetailedProgress) SetTotal(total int64) {
dp.mu.Lock()
defer dp.mu.Unlock()
dp.Total = total
dp.IsIndeterminate = total <= 0
}
// SetDescription updates the description
func (dp *DetailedProgress) SetDescription(desc string) {
dp.mu.Lock()
defer dp.mu.Unlock()
dp.Description = desc
}
// Complete marks the progress as complete
func (dp *DetailedProgress) Complete() {
dp.mu.Lock()
defer dp.mu.Unlock()
dp.IsComplete = true
dp.Current = dp.Total
}
// Fail marks the progress as failed
func (dp *DetailedProgress) Fail(errMsg string) {
dp.mu.Lock()
defer dp.mu.Unlock()
dp.IsFailed = true
dp.ErrorMessage = errMsg
}
// GetPercent returns the progress percentage (0-100)
func (dp *DetailedProgress) GetPercent() int {
dp.mu.RLock()
defer dp.mu.RUnlock()
if dp.IsIndeterminate || dp.Total <= 0 {
return 0
}
percent := int((dp.Current * 100) / dp.Total)
if percent > 100 {
return 100
}
return percent
}
// GetSpeed returns the current transfer speed in bytes/second
func (dp *DetailedProgress) GetSpeed() float64 {
dp.mu.RLock()
defer dp.mu.RUnlock()
if len(dp.SpeedWindow) < 2 {
return 0
}
// Use first and last samples in window for smoothed speed
first := dp.SpeedWindow[0]
last := dp.SpeedWindow[len(dp.SpeedWindow)-1]
elapsed := last.timestamp.Sub(first.timestamp).Seconds()
if elapsed <= 0 {
return 0
}
bytesTransferred := last.bytes - first.bytes
return float64(bytesTransferred) / elapsed
}
// GetETA returns the estimated time remaining
func (dp *DetailedProgress) GetETA() time.Duration {
dp.mu.RLock()
defer dp.mu.RUnlock()
if dp.IsIndeterminate || dp.Total <= 0 || dp.Current >= dp.Total {
return 0
}
speed := dp.getSpeedLocked()
if speed <= 0 {
return 0
}
remaining := dp.Total - dp.Current
seconds := float64(remaining) / speed
return time.Duration(seconds) * time.Second
}
func (dp *DetailedProgress) getSpeedLocked() float64 {
if len(dp.SpeedWindow) < 2 {
return 0
}
first := dp.SpeedWindow[0]
last := dp.SpeedWindow[len(dp.SpeedWindow)-1]
elapsed := last.timestamp.Sub(first.timestamp).Seconds()
if elapsed <= 0 {
return 0
}
bytesTransferred := last.bytes - first.bytes
return float64(bytesTransferred) / elapsed
}
// GetElapsed returns the elapsed time since start
func (dp *DetailedProgress) GetElapsed() time.Duration {
dp.mu.RLock()
defer dp.mu.RUnlock()
return time.Since(dp.StartTime)
}
// GetState returns a snapshot of the current state for rendering
func (dp *DetailedProgress) GetState() DetailedProgressState {
dp.mu.RLock()
defer dp.mu.RUnlock()
return DetailedProgressState{
Description: dp.Description,
Current: dp.Current,
Total: dp.Total,
Percent: dp.getPercentLocked(),
Speed: dp.getSpeedLocked(),
ETA: dp.getETALocked(),
Elapsed: time.Since(dp.StartTime),
Unit: dp.Unit,
IsIndeterminate: dp.IsIndeterminate,
IsComplete: dp.IsComplete,
IsFailed: dp.IsFailed,
ErrorMessage: dp.ErrorMessage,
}
}
func (dp *DetailedProgress) getPercentLocked() int {
if dp.IsIndeterminate || dp.Total <= 0 {
return 0
}
percent := int((dp.Current * 100) / dp.Total)
if percent > 100 {
return 100
}
return percent
}
func (dp *DetailedProgress) getETALocked() time.Duration {
if dp.IsIndeterminate || dp.Total <= 0 || dp.Current >= dp.Total {
return 0
}
speed := dp.getSpeedLocked()
if speed <= 0 {
return 0
}
remaining := dp.Total - dp.Current
seconds := float64(remaining) / speed
return time.Duration(seconds) * time.Second
}
// DetailedProgressState is an immutable snapshot for rendering
type DetailedProgressState struct {
Description string
Current int64
Total int64
Percent int
Speed float64 // bytes/sec
ETA time.Duration
Elapsed time.Duration
Unit string
IsIndeterminate bool
IsComplete bool
IsFailed bool
ErrorMessage string
}
// RenderProgressBar renders a TUI-friendly progress bar string
// Returns something like: "Extracting archive [████████░░░░░░░░░░░░] 45% 12.5 MB/s ETA: 2m 30s"
func (s DetailedProgressState) RenderProgressBar(width int) string {
if s.IsIndeterminate {
return s.renderIndeterminate()
}
// Progress bar
barWidth := 30
if width < 80 {
barWidth = 20
}
filled := (s.Percent * barWidth) / 100
if filled > barWidth {
filled = barWidth
}
bar := strings.Repeat("█", filled) + strings.Repeat("░", barWidth-filled)
// Format bytes
currentStr := FormatBytes(s.Current)
totalStr := FormatBytes(s.Total)
// Format speed
speedStr := ""
if s.Speed > 0 {
speedStr = fmt.Sprintf("%s/s", FormatBytes(int64(s.Speed)))
}
// Format ETA
etaStr := ""
if s.ETA > 0 && !s.IsComplete {
etaStr = fmt.Sprintf("ETA: %s", FormatDurationShort(s.ETA))
}
// Build the line
parts := []string{
fmt.Sprintf("[%s]", bar),
fmt.Sprintf("%3d%%", s.Percent),
}
if s.Unit == "bytes" && s.Total > 0 {
parts = append(parts, fmt.Sprintf("%s/%s", currentStr, totalStr))
} else if s.Total > 0 {
parts = append(parts, fmt.Sprintf("%d/%d", s.Current, s.Total))
}
if speedStr != "" {
parts = append(parts, speedStr)
}
if etaStr != "" {
parts = append(parts, etaStr)
}
return strings.Join(parts, " ")
}
func (s DetailedProgressState) renderIndeterminate() string {
elapsed := FormatDurationShort(s.Elapsed)
return fmt.Sprintf("[spinner] %s Elapsed: %s", s.Description, elapsed)
}
// RenderCompact renders a compact single-line progress string
func (s DetailedProgressState) RenderCompact() string {
if s.IsComplete {
return fmt.Sprintf("[OK] %s completed in %s", s.Description, FormatDurationShort(s.Elapsed))
}
if s.IsFailed {
return fmt.Sprintf("[FAIL] %s: %s", s.Description, s.ErrorMessage)
}
if s.IsIndeterminate {
return fmt.Sprintf("[...] %s (%s)", s.Description, FormatDurationShort(s.Elapsed))
}
return fmt.Sprintf("[%3d%%] %s - %s/%s", s.Percent, s.Description,
FormatBytes(s.Current), FormatBytes(s.Total))
}
// FormatBytes formats bytes in human-readable format
func FormatBytes(b int64) string {
const unit = 1024
if b < unit {
return fmt.Sprintf("%d B", b)
}
div, exp := int64(unit), 0
for n := b / unit; n >= unit; n /= unit {
div *= unit
exp++
}
return fmt.Sprintf("%.1f %cB", float64(b)/float64(div), "KMGTPE"[exp])
}
// FormatDurationShort formats duration in short form
func FormatDurationShort(d time.Duration) string {
if d < time.Second {
return "<1s"
}
if d < time.Minute {
return fmt.Sprintf("%ds", int(d.Seconds()))
}
if d < time.Hour {
m := int(d.Minutes())
s := int(d.Seconds()) % 60
if s > 0 {
return fmt.Sprintf("%dm %ds", m, s)
}
return fmt.Sprintf("%dm", m)
}
h := int(d.Hours())
m := int(d.Minutes()) % 60
return fmt.Sprintf("%dh %dm", h, m)
}
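A minimal usage sketch, assuming it sits in the same tui package (the loop and sleep are illustrative stand-ins for real I/O, and exampleDetailedProgressUsage is a hypothetical helper, not part of this change):

func exampleDetailedProgressUsage() {
	dp := NewDetailedProgress(100<<20, "Extracting archive") // 100 MiB total
	for i := 0; i < 10; i++ {
		dp.Add(10 << 20) // each Add appends a speed sample to the rolling window
		time.Sleep(10 * time.Millisecond)
	}
	dp.Complete()

	// A Bubble Tea View() renders from the immutable snapshot,
	// never from the live struct:
	state := dp.GetState()
	fmt.Println(state.RenderProgressBar(80))
	fmt.Println(state.RenderCompact()) // e.g. "[OK] Extracting archive completed in <1s"
}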

View File

@@ -160,7 +160,7 @@ func (m DiagnoseViewModel) View() string {
var s strings.Builder
// Header
s.WriteString(titleStyle.Render("[SEARCH] Backup Diagnosis"))
s.WriteString(titleStyle.Render("[CHECK] Backup Diagnosis"))
s.WriteString("\n\n")
// Archive info
@@ -204,132 +204,111 @@ func (m DiagnoseViewModel) View() string {
func (m DiagnoseViewModel) renderSingleResult(result *restore.DiagnoseResult) string {
var s strings.Builder
// Status Box
s.WriteString("+--[ VALIDATION STATUS ]" + strings.Repeat("-", 37) + "+\n")
// Validation Status
s.WriteString(diagnoseHeaderStyle.Render("[STATUS] Validation"))
s.WriteString("\n")
if result.IsValid {
s.WriteString("| " + diagnosePassStyle.Render("[OK] VALID - Archive passed all checks") + strings.Repeat(" ", 18) + "|\n")
s.WriteString(diagnosePassStyle.Render(" [OK] VALID - Archive passed all checks"))
s.WriteString("\n")
} else {
s.WriteString("| " + diagnoseFailStyle.Render("[FAIL] INVALID - Archive has problems") + strings.Repeat(" ", 19) + "|\n")
s.WriteString(diagnoseFailStyle.Render(" [FAIL] INVALID - Archive has problems"))
s.WriteString("\n")
}
if result.IsTruncated {
s.WriteString("| " + diagnoseFailStyle.Render("[!] TRUNCATED - File is incomplete") + strings.Repeat(" ", 22) + "|\n")
s.WriteString(diagnoseFailStyle.Render(" [!] TRUNCATED - File is incomplete"))
s.WriteString("\n")
}
if result.IsCorrupted {
s.WriteString("| " + diagnoseFailStyle.Render("[!] CORRUPTED - File structure damaged") + strings.Repeat(" ", 18) + "|\n")
s.WriteString(diagnoseFailStyle.Render(" [!] CORRUPTED - File structure damaged"))
s.WriteString("\n")
}
s.WriteString("+" + strings.Repeat("-", 60) + "+\n\n")
s.WriteString("\n")
// Details Box
// Details
if result.Details != nil {
s.WriteString("+--[ DETAILS ]" + strings.Repeat("-", 46) + "+\n")
s.WriteString(diagnoseHeaderStyle.Render("[INFO] Details"))
s.WriteString("\n")
if result.Details.HasPGDMPSignature {
s.WriteString("| " + diagnosePassStyle.Render("[+]") + " PostgreSQL custom format (PGDMP)" + strings.Repeat(" ", 20) + "|\n")
s.WriteString(diagnosePassStyle.Render(" [+]") + " PostgreSQL custom format (PGDMP)\n")
}
if result.Details.HasSQLHeader {
s.WriteString("| " + diagnosePassStyle.Render("[+]") + " PostgreSQL SQL header found" + strings.Repeat(" ", 25) + "|\n")
s.WriteString(diagnosePassStyle.Render(" [+]") + " PostgreSQL SQL header found\n")
}
if result.Details.GzipValid {
s.WriteString("| " + diagnosePassStyle.Render("[+]") + " Gzip compression valid" + strings.Repeat(" ", 30) + "|\n")
s.WriteString(diagnosePassStyle.Render(" [+]") + " Gzip compression valid\n")
}
if result.Details.PgRestoreListable {
tableInfo := fmt.Sprintf(" (%d tables)", result.Details.TableCount)
padding := 36 - len(tableInfo)
if padding < 0 {
padding = 0
}
s.WriteString("| " + diagnosePassStyle.Render("[+]") + " pg_restore can list contents" + tableInfo + strings.Repeat(" ", padding) + "|\n")
s.WriteString(diagnosePassStyle.Render(" [+]") + fmt.Sprintf(" pg_restore can list contents (%d tables)\n", result.Details.TableCount))
}
if result.Details.CopyBlockCount > 0 {
blockInfo := fmt.Sprintf("%d COPY blocks found", result.Details.CopyBlockCount)
padding := 50 - len(blockInfo)
if padding < 0 {
padding = 0
}
s.WriteString("| [-] " + blockInfo + strings.Repeat(" ", padding) + "|\n")
s.WriteString(fmt.Sprintf(" [-] %d COPY blocks found\n", result.Details.CopyBlockCount))
}
if result.Details.UnterminatedCopy {
s.WriteString("| " + diagnoseFailStyle.Render("[-]") + " Unterminated COPY: " + truncate(result.Details.LastCopyTable, 30) + strings.Repeat(" ", 5) + "|\n")
s.WriteString(diagnoseFailStyle.Render(" [-]") + " Unterminated COPY: " + truncate(result.Details.LastCopyTable, 30) + "\n")
}
if result.Details.ProperlyTerminated {
s.WriteString("| " + diagnosePassStyle.Render("[+]") + " All COPY blocks properly terminated" + strings.Repeat(" ", 17) + "|\n")
s.WriteString(diagnosePassStyle.Render(" [+]") + " All COPY blocks properly terminated\n")
}
if result.Details.ExpandedSize > 0 {
sizeInfo := fmt.Sprintf("Expanded: %s (%.1fx)", formatSize(result.Details.ExpandedSize), result.Details.CompressionRatio)
padding := 50 - len(sizeInfo)
if padding < 0 {
padding = 0
}
s.WriteString("| [-] " + sizeInfo + strings.Repeat(" ", padding) + "|\n")
s.WriteString(fmt.Sprintf(" [-] Expanded: %s (%.1fx)\n", formatSize(result.Details.ExpandedSize), result.Details.CompressionRatio))
}
s.WriteString("+" + strings.Repeat("-", 60) + "+\n")
s.WriteString("\n")
}
// Errors Box
// Errors
if len(result.Errors) > 0 {
s.WriteString("\n+--[ ERRORS ]" + strings.Repeat("-", 47) + "+\n")
s.WriteString(diagnoseFailStyle.Render("[FAIL] Errors"))
s.WriteString("\n")
for i, e := range result.Errors {
if i >= 5 {
remaining := fmt.Sprintf("... and %d more errors", len(result.Errors)-5)
padding := 56 - len(remaining)
s.WriteString("| " + remaining + strings.Repeat(" ", padding) + "|\n")
s.WriteString(fmt.Sprintf(" ... and %d more errors\n", len(result.Errors)-5))
break
}
errText := truncate(e, 54)
padding := 56 - len(errText)
if padding < 0 {
padding = 0
}
s.WriteString("| " + errText + strings.Repeat(" ", padding) + "|\n")
s.WriteString(" " + truncate(e, 60) + "\n")
}
s.WriteString("+" + strings.Repeat("-", 60) + "+\n")
s.WriteString("\n")
}
// Warnings Box
// Warnings
if len(result.Warnings) > 0 {
s.WriteString("\n+--[ WARNINGS ]" + strings.Repeat("-", 45) + "+\n")
s.WriteString(diagnoseWarnStyle.Render("[WARN] Warnings"))
s.WriteString("\n")
for i, w := range result.Warnings {
if i >= 3 {
remaining := fmt.Sprintf("... and %d more warnings", len(result.Warnings)-3)
padding := 56 - len(remaining)
s.WriteString("| " + remaining + strings.Repeat(" ", padding) + "|\n")
s.WriteString(fmt.Sprintf(" ... and %d more warnings\n", len(result.Warnings)-3))
break
}
warnText := truncate(w, 54)
padding := 56 - len(warnText)
if padding < 0 {
padding = 0
}
s.WriteString("| " + warnText + strings.Repeat(" ", padding) + "|\n")
s.WriteString(" " + truncate(w, 60) + "\n")
}
s.WriteString("+" + strings.Repeat("-", 60) + "+\n")
s.WriteString("\n")
}
// Recommendations Box
// Recommendations
if !result.IsValid {
s.WriteString("\n+--[ RECOMMENDATIONS ]" + strings.Repeat("-", 38) + "+\n")
s.WriteString(diagnoseInfoStyle.Render("[HINT] Recommendations"))
s.WriteString("\n")
if result.IsTruncated {
s.WriteString("| 1. Re-run backup with current version (v3.42.12+) |\n")
s.WriteString("| 2. Check disk space on backup server |\n")
s.WriteString("| 3. Verify network stability for remote backups |\n")
s.WriteString(" 1. Re-run backup with current version (v3.42+)\n")
s.WriteString(" 2. Check disk space on backup server\n")
s.WriteString(" 3. Verify network stability for remote backups\n")
}
if result.IsCorrupted {
s.WriteString("| 1. Verify backup was transferred completely |\n")
s.WriteString("| 2. Try restoring from a previous backup |\n")
s.WriteString(" 1. Verify backup was transferred completely\n")
s.WriteString(" 2. Try restoring from a previous backup\n")
}
s.WriteString("+" + strings.Repeat("-", 60) + "+\n")
}
return s.String()
@@ -349,10 +328,8 @@ func (m DiagnoseViewModel) renderClusterResults() string {
}
}
s.WriteString(strings.Repeat("-", 60))
s.WriteString("\n")
s.WriteString(diagnoseHeaderStyle.Render(fmt.Sprintf("[STATS] CLUSTER SUMMARY: %d databases\n", len(m.results))))
s.WriteString(strings.Repeat("-", 60))
s.WriteString(diagnoseHeaderStyle.Render(fmt.Sprintf("[STATS] Cluster Summary: %d databases", len(m.results))))
s.WriteString("\n\n")
if invalidCount == 0 {
@@ -364,7 +341,7 @@ func (m DiagnoseViewModel) renderClusterResults() string {
}
// List all dumps with status
s.WriteString(diagnoseHeaderStyle.Render("Database Dumps:"))
s.WriteString(diagnoseHeaderStyle.Render("[LIST] Database Dumps"))
s.WriteString("\n")
// Show visible range based on cursor
@@ -413,9 +390,7 @@ func (m DiagnoseViewModel) renderClusterResults() string {
if m.cursor < len(m.results) {
selected := m.results[m.cursor]
s.WriteString("\n")
s.WriteString(strings.Repeat("-", 60))
s.WriteString("\n")
s.WriteString(diagnoseHeaderStyle.Render("Selected: " + selected.FileName))
s.WriteString(diagnoseHeaderStyle.Render("[INFO] Selected: " + selected.FileName))
s.WriteString("\n\n")
// Show condensed details for selected

View File

@@ -191,7 +191,7 @@ func (m HistoryViewModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
func (m HistoryViewModel) View() string {
var s strings.Builder
header := titleStyle.Render("[HISTORY] Operation History")
header := titleStyle.Render("[STATS] Operation History")
s.WriteString(fmt.Sprintf("\n%s\n\n", header))
if len(m.history) == 0 {

View File

@@ -285,7 +285,7 @@ func (m *MenuModel) View() string {
var s string
// Header
header := titleStyle.Render("[DB] Database Backup Tool - Interactive Menu")
header := titleStyle.Render("Database Backup Tool - Interactive Menu")
s += fmt.Sprintf("\n%s\n\n", header)
if len(m.dbTypes) > 0 {
@@ -334,13 +334,13 @@ func (m *MenuModel) View() string {
// handleSingleBackup opens database selector for single backup
func (m *MenuModel) handleSingleBackup() (tea.Model, tea.Cmd) {
selector := NewDatabaseSelector(m.config, m.logger, m, m.ctx, "[DB] Single Database Backup", "single")
selector := NewDatabaseSelector(m.config, m.logger, m, m.ctx, "[SELECT] Single Database Backup", "single")
return selector, selector.Init()
}
// handleSampleBackup opens database selector for sample backup
func (m *MenuModel) handleSampleBackup() (tea.Model, tea.Cmd) {
selector := NewDatabaseSelector(m.config, m.logger, m, m.ctx, "[STATS] Sample Database Backup", "sample")
selector := NewDatabaseSelector(m.config, m.logger, m, m.ctx, "[SELECT] Sample Database Backup", "sample")
return selector, selector.Init()
}
@@ -356,7 +356,7 @@ func (m *MenuModel) handleClusterBackup() (tea.Model, tea.Cmd) {
return executor, executor.Init()
}
confirm := NewConfirmationModelWithAction(m.config, m.logger, m,
"[DB] Cluster Backup",
"[CHECK] Cluster Backup",
"This will backup ALL databases in the cluster. Continue?",
func() (tea.Model, tea.Cmd) {
executor := NewBackupExecution(m.config, m.logger, m, m.ctx, "cluster", "", 0)

View File

@@ -6,6 +6,7 @@ import (
"os/exec"
"path/filepath"
"strings"
"sync"
"time"
tea "github.com/charmbracelet/bubbletea"
@@ -45,6 +46,21 @@ type RestoreExecutionModel struct {
spinnerFrame int
spinnerFrames []string
// Detailed byte progress for schollz-style display
bytesTotal int64
bytesDone int64
description string
showBytes bool // True when we have real byte progress to show
speed float64 // Rolling window speed in bytes/sec
// Database count progress (for cluster restore)
dbTotal int
dbDone int
// Timing info for database restore phase (ETA calculation)
dbPhaseElapsed time.Duration // Elapsed time since restore phase started
dbAvgPerDB time.Duration // Average time per database restore
// Results
done bool
cancelling bool // True when user has requested cancellation
@@ -101,6 +117,9 @@ type restoreProgressMsg struct {
phase string
progress int
detail string
bytesTotal int64
bytesDone int64
description string
}
type restoreCompleteMsg struct {
@@ -109,6 +128,107 @@ type restoreCompleteMsg struct {
elapsed time.Duration
}
// sharedProgressState holds progress state that can be safely accessed from callbacks
type sharedProgressState struct {
mu sync.Mutex
bytesTotal int64
bytesDone int64
description string
hasUpdate bool
// Database count progress (for cluster restore)
dbTotal int
dbDone int
// Timing info for database restore phase
dbPhaseElapsed time.Duration // Elapsed time since restore phase started
dbAvgPerDB time.Duration // Average time per database restore
// Rolling window for speed calculation
speedSamples []restoreSpeedSample
}
type restoreSpeedSample struct {
timestamp time.Time
bytes int64
}
// Package-level shared progress state for restore operations
var (
currentRestoreProgressMu sync.Mutex
currentRestoreProgressState *sharedProgressState
)
func setCurrentRestoreProgress(state *sharedProgressState) {
currentRestoreProgressMu.Lock()
defer currentRestoreProgressMu.Unlock()
currentRestoreProgressState = state
}
func clearCurrentRestoreProgress() {
currentRestoreProgressMu.Lock()
defer currentRestoreProgressMu.Unlock()
currentRestoreProgressState = nil
}
func getCurrentRestoreProgress() (bytesTotal, bytesDone int64, description string, hasUpdate bool, dbTotal, dbDone int, speed float64, dbPhaseElapsed, dbAvgPerDB time.Duration) {
currentRestoreProgressMu.Lock()
defer currentRestoreProgressMu.Unlock()
if currentRestoreProgressState == nil {
return 0, 0, "", false, 0, 0, 0, 0, 0
}
currentRestoreProgressState.mu.Lock()
defer currentRestoreProgressState.mu.Unlock()
// Calculate rolling window speed
speed = calculateRollingSpeed(currentRestoreProgressState.speedSamples)
return currentRestoreProgressState.bytesTotal, currentRestoreProgressState.bytesDone,
currentRestoreProgressState.description, currentRestoreProgressState.hasUpdate,
currentRestoreProgressState.dbTotal, currentRestoreProgressState.dbDone, speed,
currentRestoreProgressState.dbPhaseElapsed, currentRestoreProgressState.dbAvgPerDB
}
// calculateRollingSpeed calculates speed from recent samples (last 5 seconds)
func calculateRollingSpeed(samples []restoreSpeedSample) float64 {
if len(samples) < 2 {
return 0
}
// Use samples from last 5 seconds for smoothed speed
now := time.Now()
cutoff := now.Add(-5 * time.Second)
var firstInWindow, lastInWindow *restoreSpeedSample
for i := range samples {
if samples[i].timestamp.After(cutoff) {
if firstInWindow == nil {
firstInWindow = &samples[i]
}
lastInWindow = &samples[i]
}
}
// Fall back to first and last if window is empty
if firstInWindow == nil || lastInWindow == nil || firstInWindow == lastInWindow {
firstInWindow = &samples[0]
lastInWindow = &samples[len(samples)-1]
}
elapsed := lastInWindow.timestamp.Sub(firstInWindow.timestamp).Seconds()
if elapsed <= 0 {
return 0
}
bytesTransferred := lastInWindow.bytes - firstInWindow.bytes
return float64(bytesTransferred) / elapsed
}
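A hand-worked example (illustrative numbers): given samples at t=0s/0 B, t=3s/30 MiB, t=6s/90 MiB and now=6s, the cutoff at now-5s=1s keeps only the t=3s and t=6s samples, so speed = (90-30) MiB / 3 s = 20 MiB/s; the window deliberately discounts a slow start-up phase.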
// restoreProgressChannel allows sending progress updates from the restore goroutine
type restoreProgressChannel chan restoreProgressMsg
func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config, log logger.Logger, archive ArchiveInfo, targetDB string, cleanFirst, createIfMissing bool, restoreType string, cleanClusterFirst bool, existingDBs []string, saveDebugLog bool) tea.Cmd {
return func() tea.Msg {
// NO TIMEOUT for restore operations - a restore takes as long as it takes
@@ -156,6 +276,63 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
// STEP 2: Create restore engine with silent progress (no stdout interference with TUI)
engine := restore.NewSilent(cfg, log, dbClient)
// Set up progress callback for detailed progress reporting
// We use a shared pointer that can be queried by the TUI ticker
progressState := &sharedProgressState{
speedSamples: make([]restoreSpeedSample, 0, 100),
}
engine.SetProgressCallback(func(current, total int64, description string) {
progressState.mu.Lock()
defer progressState.mu.Unlock()
progressState.bytesDone = current
progressState.bytesTotal = total
progressState.description = description
progressState.hasUpdate = true
// Add speed sample for rolling window calculation
progressState.speedSamples = append(progressState.speedSamples, restoreSpeedSample{
timestamp: time.Now(),
bytes: current,
})
// Keep only last 100 samples
if len(progressState.speedSamples) > 100 {
progressState.speedSamples = progressState.speedSamples[len(progressState.speedSamples)-100:]
}
})
// Set up database progress callback for cluster restore
engine.SetDatabaseProgressCallback(func(done, total int, dbName string) {
progressState.mu.Lock()
defer progressState.mu.Unlock()
progressState.dbDone = done
progressState.dbTotal = total
progressState.description = fmt.Sprintf("Restoring %s", dbName)
progressState.hasUpdate = true
// Clear byte progress when switching to db progress
progressState.bytesTotal = 0
progressState.bytesDone = 0
})
// Set up timing-aware database progress callback for cluster restore ETA
engine.SetDatabaseProgressWithTimingCallback(func(done, total int, dbName string, phaseElapsed, avgPerDB time.Duration) {
progressState.mu.Lock()
defer progressState.mu.Unlock()
progressState.dbDone = done
progressState.dbTotal = total
progressState.description = fmt.Sprintf("Restoring %s", dbName)
progressState.dbPhaseElapsed = phaseElapsed
progressState.dbAvgPerDB = avgPerDB
progressState.hasUpdate = true
// Clear byte progress when switching to db progress
progressState.bytesTotal = 0
progressState.bytesDone = 0
})
// Store progress state in a package-level variable for the ticker to access
// This is a workaround because tea messages can't be sent from callbacks
setCurrentRestoreProgress(progressState)
defer clearCurrentRestoreProgress()
// Enable debug logging if requested
if saveDebugLog {
// Generate debug log path using configured WorkDir
@@ -165,9 +342,6 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
log.Info("Debug logging enabled", "path", debugLogPath)
}
// Set up progress callback (but it won't work in goroutine - progress is already sent via logs)
// The TUI will just use spinner animation to show activity
// STEP 3: Execute restore based on type
var restoreErr error
if restoreType == "restore-cluster" {
@@ -206,7 +380,31 @@ func (m RestoreExecutionModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
m.spinnerFrame = (m.spinnerFrame + 1) % len(m.spinnerFrames)
m.elapsed = time.Since(m.startTime)
// Update status based on elapsed time to show progress
// Poll shared progress state for real-time updates
bytesTotal, bytesDone, description, hasUpdate, dbTotal, dbDone, speed, dbPhaseElapsed, dbAvgPerDB := getCurrentRestoreProgress()
if hasUpdate && bytesTotal > 0 {
m.bytesTotal = bytesTotal
m.bytesDone = bytesDone
m.description = description
m.showBytes = true
m.speed = speed
// Update status to reflect actual progress
m.status = description
m.phase = "Extracting"
m.progress = int((bytesDone * 100) / bytesTotal)
} else if hasUpdate && dbTotal > 0 {
// Database count progress for cluster restore with timing
m.dbTotal = dbTotal
m.dbDone = dbDone
m.dbPhaseElapsed = dbPhaseElapsed
m.dbAvgPerDB = dbAvgPerDB
m.showBytes = false
m.status = fmt.Sprintf("Restoring database %d of %d...", dbDone+1, dbTotal)
m.phase = "Restore"
m.progress = int((dbDone * 100) / dbTotal)
} else {
// Fallback: Update status based on elapsed time to show progress
// This provides visual feedback even though we don't have real-time progress
elapsedSec := int(m.elapsed.Seconds())
@@ -241,6 +439,7 @@ func (m RestoreExecutionModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
m.phase = "Restore"
}
}
}
return m, restoreTickCmd()
}
@@ -250,6 +449,15 @@ func (m RestoreExecutionModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
m.status = msg.status
m.phase = msg.phase
m.progress = msg.progress
// Update byte-level progress if available
if msg.bytesTotal > 0 {
m.bytesTotal = msg.bytesTotal
m.bytesDone = msg.bytesDone
m.description = msg.description
m.showBytes = true
}
if msg.detail != "" {
m.details = append(m.details, msg.detail)
// Keep only last 5 details
@@ -321,9 +529,9 @@ func (m RestoreExecutionModel) View() string {
s.Grow(512) // Pre-allocate estimated capacity for better performance
// Title
title := "[RESTORE] Restoring Database"
title := "[EXEC] Restoring Database"
if m.restoreType == "restore-cluster" {
title = "[RESTORE] Restoring Cluster"
title = "[EXEC] Restoring Cluster"
}
s.WriteString(titleStyle.Render(title))
s.WriteString("\n\n")
@@ -336,39 +544,108 @@ func (m RestoreExecutionModel) View() string {
s.WriteString("\n")
if m.done {
// Show result
// Show result with comprehensive summary
if m.err != nil {
s.WriteString(errorStyle.Render("[FAIL] Restore Failed"))
s.WriteString(errorStyle.Render("╔══════════════════════════════════════════════════════════════╗"))
s.WriteString("\n")
s.WriteString(errorStyle.Render("║ [FAIL] RESTORE FAILED ║"))
s.WriteString("\n")
s.WriteString(errorStyle.Render("╚══════════════════════════════════════════════════════════════╝"))
s.WriteString("\n\n")
s.WriteString(errorStyle.Render(fmt.Sprintf(" Error: %v", m.err)))
s.WriteString("\n")
} else {
s.WriteString(successStyle.Render("[OK] Restore Completed Successfully"))
s.WriteString(successStyle.Render("╔══════════════════════════════════════════════════════════════╗"))
s.WriteString("\n")
s.WriteString(successStyle.Render("║ [OK] RESTORE COMPLETED SUCCESSFULLY ║"))
s.WriteString("\n")
s.WriteString(successStyle.Render("╚══════════════════════════════════════════════════════════════╝"))
s.WriteString("\n\n")
s.WriteString(successStyle.Render(m.result))
// Summary section
s.WriteString(infoStyle.Render(" ─── Summary ───────────────────────────────────────────────"))
s.WriteString("\n\n")
// Archive info
s.WriteString(fmt.Sprintf(" Archive: %s\n", m.archive.Name))
if m.archive.Size > 0 {
s.WriteString(fmt.Sprintf(" Archive Size: %s\n", FormatBytes(m.archive.Size)))
}
// Restore type specific info
if m.restoreType == "restore-cluster" {
s.WriteString(fmt.Sprintf(" Type: Cluster Restore\n"))
if m.dbTotal > 0 {
s.WriteString(fmt.Sprintf(" Databases: %d restored\n", m.dbTotal))
}
if m.cleanClusterFirst && len(m.existingDBs) > 0 {
s.WriteString(fmt.Sprintf(" Cleaned: %d existing database(s) dropped\n", len(m.existingDBs)))
}
} else {
s.WriteString(fmt.Sprintf(" Type: Single Database Restore\n"))
s.WriteString(fmt.Sprintf(" Target DB: %s\n", m.targetDB))
}
s.WriteString("\n")
}
s.WriteString(fmt.Sprintf("\nElapsed Time: %s\n", formatDuration(m.elapsed)))
// Timing section
s.WriteString(infoStyle.Render(" ─── Timing ────────────────────────────────────────────────"))
s.WriteString("\n\n")
s.WriteString(fmt.Sprintf(" Total Time: %s\n", formatDuration(m.elapsed)))
// Calculate and show throughput if we have size info
if m.archive.Size > 0 && m.elapsed.Seconds() > 0 {
throughput := float64(m.archive.Size) / m.elapsed.Seconds()
s.WriteString(fmt.Sprintf(" Throughput: %s/s (average)\n", FormatBytes(int64(throughput))))
}
if m.dbTotal > 0 && m.err == nil {
avgPerDB := m.elapsed / time.Duration(m.dbTotal)
s.WriteString(fmt.Sprintf(" Avg per DB: %s\n", formatDuration(avgPerDB)))
}
s.WriteString("\n")
s.WriteString(infoStyle.Render(" ───────────────────────────────────────────────────────────"))
s.WriteString("\n\n")
s.WriteString(infoStyle.Render(" [KEYS] Press Enter to continue"))
} else {
// Show progress
s.WriteString(fmt.Sprintf("Phase: %s\n", m.phase))
// Show status with rotating spinner (unified indicator for all operations)
// Show detailed progress bar when we have byte-level information
// In this case, hide the spinner for cleaner display
if m.showBytes && m.bytesTotal > 0 {
// Status line without spinner (progress bar provides activity indication)
s.WriteString(fmt.Sprintf("Status: %s\n", m.status))
s.WriteString("\n")
// Render schollz-style progress bar with bytes, rolling speed, ETA
s.WriteString(renderDetailedProgressBarWithSpeed(m.bytesDone, m.bytesTotal, m.speed))
s.WriteString("\n\n")
} else if m.dbTotal > 0 {
// Database count progress for cluster restore with timing
spinner := m.spinnerFrames[m.spinnerFrame]
s.WriteString(fmt.Sprintf("Status: %s %s\n", spinner, m.status))
s.WriteString("\n")
// Show database progress bar with timing and ETA
s.WriteString(renderDatabaseProgressBarWithTiming(m.dbDone, m.dbTotal, m.dbPhaseElapsed, m.dbAvgPerDB))
s.WriteString("\n\n")
} else {
// Show status with rotating spinner (for phases without detailed progress)
spinner := m.spinnerFrames[m.spinnerFrame]
s.WriteString(fmt.Sprintf("Status: %s %s\n", spinner, m.status))
s.WriteString("\n")
// Only show progress bar for single database restore
// Cluster restore uses spinner only (consistent with CLI behavior)
if m.restoreType == "restore-single" {
// Fallback to simple progress bar for single database restore
progressBar := renderProgressBar(m.progress)
s.WriteString(progressBar)
s.WriteString(fmt.Sprintf(" %d%%\n", m.progress))
s.WriteString("\n")
}
}
// Elapsed time
s.WriteString(fmt.Sprintf("Elapsed: %s\n", formatDuration(m.elapsed)))
@@ -390,6 +667,141 @@ func renderProgressBar(percent int) string {
return successStyle.Render(bar) + infoStyle.Render(empty)
}
// renderDetailedProgressBar renders a schollz-style progress bar with bytes, speed, and ETA
// Uses elapsed time for speed calculation (fallback)
func renderDetailedProgressBar(done, total int64, elapsed time.Duration) string {
speed := 0.0
if elapsed.Seconds() > 0 {
speed = float64(done) / elapsed.Seconds()
}
return renderDetailedProgressBarWithSpeed(done, total, speed)
}
// renderDetailedProgressBarWithSpeed renders a schollz-style progress bar with pre-calculated rolling speed
func renderDetailedProgressBarWithSpeed(done, total int64, speed float64) string {
var s strings.Builder
// Calculate percentage
percent := 0
if total > 0 {
percent = int((done * 100) / total)
if percent > 100 {
percent = 100
}
}
// Render progress bar
width := 30
filled := (percent * width) / 100
barFilled := strings.Repeat("█", filled)
barEmpty := strings.Repeat("░", width-filled)
s.WriteString(successStyle.Render("["))
s.WriteString(successStyle.Render(barFilled))
s.WriteString(infoStyle.Render(barEmpty))
s.WriteString(successStyle.Render("]"))
// Percentage
s.WriteString(fmt.Sprintf(" %3d%%", percent))
// Bytes progress
s.WriteString(fmt.Sprintf(" %s / %s", FormatBytes(done), FormatBytes(total)))
// Speed display (using rolling window speed)
if speed > 0 {
s.WriteString(fmt.Sprintf(" %s/s", FormatBytes(int64(speed))))
// ETA calculation based on rolling speed
if done < total {
remaining := total - done
etaSeconds := float64(remaining) / speed
eta := time.Duration(etaSeconds) * time.Second
s.WriteString(fmt.Sprintf(" ETA: %s", FormatDurationShort(eta)))
}
}
return s.String()
}
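Hand-worked illustration (assumed values): done=56 MiB, total=125 MiB, speed=12.5 MiB/s gives percent int(56*100/125)=44, filled (44*30)/100=13 cells, and an ETA of 69/12.5 ≈ 5s, rendering roughly:

[█████████████░░░░░░░░░░░░░░░░░]  44% 56.0 MB / 125.0 MB 12.5 MB/s ETA: 5s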
// renderDatabaseProgressBar renders a progress bar for database count (cluster restore)
func renderDatabaseProgressBar(done, total int) string {
var s strings.Builder
// Calculate percentage
percent := 0
if total > 0 {
percent = (done * 100) / total
if percent > 100 {
percent = 100
}
}
// Render progress bar
width := 30
filled := (percent * width) / 100
barFilled := strings.Repeat("█", filled)
barEmpty := strings.Repeat("░", width-filled)
s.WriteString(successStyle.Render("["))
s.WriteString(successStyle.Render(barFilled))
s.WriteString(infoStyle.Render(barEmpty))
s.WriteString(successStyle.Render("]"))
// Count and percentage
s.WriteString(fmt.Sprintf(" %3d%% %d / %d databases", percent, done, total))
return s.String()
}
// renderDatabaseProgressBarWithTiming renders a progress bar for database count with timing and ETA
func renderDatabaseProgressBarWithTiming(done, total int, phaseElapsed, avgPerDB time.Duration) string {
var s strings.Builder
// Calculate percentage
percent := 0
if total > 0 {
percent = (done * 100) / total
if percent > 100 {
percent = 100
}
}
// Render progress bar
width := 30
filled := (percent * width) / 100
barFilled := strings.Repeat("█", filled)
barEmpty := strings.Repeat("░", width-filled)
s.WriteString(successStyle.Render("["))
s.WriteString(successStyle.Render(barFilled))
s.WriteString(infoStyle.Render(barEmpty))
s.WriteString(successStyle.Render("]"))
// Count and percentage
s.WriteString(fmt.Sprintf(" %3d%% %d / %d databases", percent, done, total))
// Timing and ETA
if phaseElapsed > 0 {
s.WriteString(fmt.Sprintf(" [%s", FormatDurationShort(phaseElapsed)))
// Calculate ETA based on average time per database
if avgPerDB > 0 && done < total {
remainingDBs := total - done
eta := time.Duration(remainingDBs) * avgPerDB
s.WriteString(fmt.Sprintf(" / ETA: %s", FormatDurationShort(eta)))
} else if done > 0 && done < total {
// Fallback: estimate ETA from overall elapsed time
avgElapsed := phaseElapsed / time.Duration(done)
remainingDBs := total - done
eta := time.Duration(remainingDBs) * avgElapsed
s.WriteString(fmt.Sprintf(" / ETA: ~%s", FormatDurationShort(eta)))
}
s.WriteString("]")
}
return s.String()
}
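Hand-worked illustration (assumed values): done=7, total=10, phaseElapsed=3m30s, avgPerDB=30s gives 70%, (70*30)/100=21 cells, and ETA 3*30s=1m30s, rendering roughly:

[█████████████████████░░░░░░░░░]  70% 7 / 10 databases [3m 30s / ETA: 1m 30s]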
// formatDuration formats a duration in human-readable form
func formatDuration(d time.Duration) string {
if d < time.Minute {

View File

@@ -339,9 +339,9 @@ func (m RestorePreviewModel) View() string {
var s strings.Builder
// Title
title := "Restore Preview"
title := "[CHECK] Restore Preview"
if m.mode == "restore-cluster" {
title = "Cluster Restore Preview"
title = "[CHECK] Cluster Restore Preview"
}
s.WriteString(titleStyle.Render(title))
s.WriteString("\n\n")

View File

@@ -688,7 +688,7 @@ func (m SettingsModel) View() string {
var b strings.Builder
// Header
header := titleStyle.Render("[CFG] Configuration Settings")
header := titleStyle.Render("[CONFIG] Configuration Settings")
b.WriteString(fmt.Sprintf("\n%s\n\n", header))
// Settings list
@@ -747,7 +747,7 @@ func (m SettingsModel) View() string {
// Current configuration summary
if !m.editing {
b.WriteString("\n")
b.WriteString(infoStyle.Render("[LOG] Current Configuration:"))
b.WriteString(infoStyle.Render("[INFO] Current Configuration"))
b.WriteString("\n")
summary := []string{

View File

@@ -173,7 +173,7 @@ func (m StatusViewModel) View() string {
s.WriteString(errorStyle.Render(fmt.Sprintf("[FAIL] Error: %v\n", m.err)))
s.WriteString("\n")
} else {
s.WriteString("Connection Status:\n")
s.WriteString("[CONN] Connection Status\n")
if m.connected {
s.WriteString(successStyle.Render(" [+] Connected\n"))
} else {
@@ -181,6 +181,7 @@ func (m StatusViewModel) View() string {
}
s.WriteString("\n")
s.WriteString("[INFO] Server Details\n")
s.WriteString(fmt.Sprintf(" Database Type: %s (%s)\n", m.config.DisplayDatabaseType(), m.config.DatabaseType))
s.WriteString(fmt.Sprintf(" Host: %s:%d\n", m.config.Host, m.config.Port))
s.WriteString(fmt.Sprintf(" User: %s\n", m.config.User))

View File

@@ -120,12 +120,36 @@ var ShortcutStyle = lipgloss.NewStyle().
// =============================================================================
// HELPER PREFIXES (no emoticons)
// =============================================================================
// Convention for TUI titles/headers:
// [CHECK] - Verification/diagnosis screens
// [STATS] - Statistics/status screens
// [SELECT] - Selection/browser screens
// [EXEC] - Execution/running screens
// [CONFIG] - Configuration/settings screens
//
// Convention for status messages:
// [OK] - Success
// [FAIL] - Error/failure
// [WAIT] - In progress
// [WARN] - Warning
// [INFO] - Information
const (
// Title prefixes (for view headers)
PrefixCheck = "[CHECK]"
PrefixStats = "[STATS]"
PrefixSelect = "[SELECT]"
PrefixExec = "[EXEC]"
PrefixConfig = "[CONFIG]"
// Status prefixes
PrefixOK = "[OK]"
PrefixFail = "[FAIL]"
PrefixWarn = "[!]"
PrefixInfo = "[i]"
PrefixWait = "[WAIT]"
PrefixWarn = "[WARN]"
PrefixInfo = "[INFO]"
// List item prefixes
PrefixPlus = "[+]"
PrefixMinus = "[-]"
PrefixArrow = ">"