Compare commits
6 Commits
| Author | SHA1 | Date |
|---|---|---|
| | 698b8a761c | |
| | dd7c4da0eb | |
| | b2a78cad2a | |
| | 5728b465e6 | |
| | bfe99e959c | |
| | 780beaadfb | |
62 CHANGELOG.md
@@ -5,6 +5,68 @@ All notable changes to dbbackup will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [3.42.50] - 2026-01-16 "Ctrl+C Signal Handling Fix"
+
+### Fixed - Proper Ctrl+C/SIGINT Handling in TUI
+
+- **Added tea.InterruptMsg handling** - Bubbletea v1.3+ sends `InterruptMsg` for SIGINT signals
+  instead of a `KeyMsg` with "ctrl+c", causing cancellation to not work
+- **Fixed cluster restore cancellation** - Ctrl+C now properly cancels running restore operations
+- **Fixed cluster backup cancellation** - Ctrl+C now properly cancels running backup operations
+- **Added interrupt handling to main menu** - Proper cleanup on SIGINT from menu
+- **Orphaned process cleanup** - `cleanup.KillOrphanedProcesses()` called on all interrupt paths
+
+### Changed
+
+- All TUI execution views now handle both `tea.KeyMsg` ("ctrl+c") and `tea.InterruptMsg`
+- Context cancellation properly propagates to child processes via `exec.CommandContext`
+- No zombie pg_dump/pg_restore/gzip processes left behind on cancellation
+
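The first entry above describes the core mechanism: on SIGINT, Bubbletea v1.3+ delivers a `tea.InterruptMsg` rather than a `KeyMsg` for "ctrl+c", so views must handle both. A minimal, self-contained sketch of that wiring; the model and field names here are illustrative, not dbbackup's actual types:

```go
package main

import (
	"context"

	tea "github.com/charmbracelet/bubbletea"
)

// execModel is a hypothetical execution view; only the cancellation wiring
// described in the changelog entry is sketched here.
type execModel struct {
	cancel     context.CancelFunc
	done       bool
	cancelling bool
	status     string
}

func (m execModel) Init() tea.Cmd { return nil }

func (m execModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
	switch msg := msg.(type) {
	case tea.InterruptMsg:
		// Bubbletea v1.3+ delivers SIGINT as InterruptMsg, not as KeyMsg("ctrl+c").
		return m.requestCancel(), nil
	case tea.KeyMsg:
		// Keep handling the key form as well, so both paths cancel.
		if msg.String() == "ctrl+c" || msg.String() == "esc" {
			return m.requestCancel(), nil
		}
	}
	return m, nil
}

func (m execModel) requestCancel() execModel {
	if !m.done && !m.cancelling {
		m.cancelling = true
		m.status = "Cancelling... (please wait)"
		if m.cancel != nil {
			m.cancel() // propagates to children started via exec.CommandContext
		}
	}
	return m
}

func (m execModel) View() string { return m.status }

func main() {
	_, cancel := context.WithCancel(context.Background())
	_, _ = tea.NewProgram(execModel{cancel: cancel, status: "running"}).Run()
}
```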
+## [3.42.49] - 2026-01-16 "Unified Cluster Backup Progress"
+
+### Added - Unified Progress Display for Cluster Backup
+
+- **Combined overall progress bar** for cluster backup showing all phases:
+  - Phase 1/3: Backing up Globals (0-15% of overall)
+  - Phase 2/3: Backing up Databases (15-90% of overall)
+  - Phase 3/3: Compressing Archive (90-100% of overall)
+- **Current database indicator** - Shows which database is currently being backed up
+- **Phase-aware progress tracking** - New fields in backup progress state:
+  - `overallPhase` - Current phase (1=globals, 2=databases, 3=compressing)
+  - `phaseDesc` - Human-readable phase description
+- **Dual progress bars** for cluster backup:
+  - Overall progress bar showing combined operation progress
+  - Database count progress bar showing individual database progress
+
+### Changed
+
+- Cluster backup TUI now shows unified progress display matching restore
+- Progress callbacks now include phase information
+- Better visual feedback during entire cluster backup operation
+
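The phase weights listed above (globals 0-15%, databases 15-90%, compression 90-100%) imply a simple mapping from per-phase progress to one overall percentage. A sketch of that arithmetic under the stated weights; the function and parameters are illustrative, not the project's API:

```go
package main

import "fmt"

// overallBackupProgress maps per-phase progress onto the 0-100 scale of the
// combined bar: globals 0-15%, databases 15-90%, compression 90-100%.
// Phase numbering follows the changelog: 1=globals, 2=databases, 3=compressing.
func overallBackupProgress(phase, dbDone, dbTotal, compressPct int) int {
	switch phase {
	case 1: // globals: report a nominal mid-phase value
		return 10
	case 2: // databases: scale the per-database count into the 15-90% band
		if dbTotal == 0 {
			return 15
		}
		dbPct := dbDone * 100 / dbTotal
		return 15 + dbPct*75/100
	case 3: // compression: scale into the 90-100% band
		return 90 + compressPct*10/100
	default:
		return 0
	}
}

func main() {
	// 7 of 10 databases backed up: 15 + 70% of the 75-point band = 67%.
	fmt.Println(overallBackupProgress(2, 7, 10, 0))
}
```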
+## [3.42.48] - 2026-01-15 "Unified Cluster Restore Progress"
+
+### Added - Unified Progress Display for Cluster Restore
+
+- **Combined overall progress bar** showing progress across all restore phases:
+  - Phase 1/3: Extracting Archive (0-60% of overall)
+  - Phase 2/3: Restoring Globals (60-65% of overall)
+  - Phase 3/3: Restoring Databases (65-100% of overall)
+- **Current database indicator** - Shows which database is currently being restored
+- **Phase-aware progress tracking** - New fields in progress state:
+  - `overallPhase` - Current phase (1=extraction, 2=globals, 3=databases)
+  - `currentDB` - Name of database currently being restored
+  - `extractionDone` - Boolean flag for phase transition
+- **Dual progress bars** for cluster restore:
+  - Overall progress bar showing combined operation progress
+  - Phase-specific progress bar (extraction bytes or database count)
+
+### Changed
+
+- Cluster restore TUI now shows unified progress display
+- Progress callbacks now set phase and current database information
+- Extraction completion triggers automatic transition to globals phase
+- Database restore phase shows current database name with spinner
+
+### Improved
+
+- Better visual feedback during entire cluster restore operation
+- Clear phase indicators help users understand restore progress
+- Overall progress percentage gives better time estimates
+
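A rough sketch of the phase-aware state these entries describe, including the automatic extraction-to-globals transition; struct and method names are illustrative, not the project's actual progress state:

```go
package main

import (
	"fmt"
	"sync"
)

// restoreProgress mirrors the fields named above:
// overallPhase (1=extraction, 2=globals, 3=databases), currentDB, extractionDone.
type restoreProgress struct {
	mu             sync.Mutex
	overallPhase   int
	currentDB      string
	extractionDone bool
}

// onExtractBytes is called from an extraction progress callback; when the
// byte counter reaches the total, it flips to the globals phase automatically.
func (p *restoreProgress) onExtractBytes(done, total int64) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.overallPhase = 1
	if total > 0 && done >= total {
		p.extractionDone = true
		p.overallPhase = 2 // extraction complete -> globals phase
	}
}

// onDatabase is called once per database restore; it marks phase 3 and
// records which database is currently being restored.
func (p *restoreProgress) onDatabase(name string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.extractionDone = true
	p.overallPhase = 3
	p.currentDB = name
}

func main() {
	var p restoreProgress
	p.onExtractBytes(512, 1024)
	p.onExtractBytes(1024, 1024)
	p.onDatabase("appdb")
	fmt.Println(p.overallPhase, p.currentDB, p.extractionDone) // 3 appdb true
}
```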
 ## [3.42.35] - 2026-01-15 "TUI Detailed Progress"
 
 ### Added - Enhanced TUI Progress Display
@@ -3,9 +3,9 @@
 This directory contains pre-compiled binaries for the DB Backup Tool across multiple platforms and architectures.
 
 ## Build Information
-- **Version**: 3.42.34
-- **Build Time**: 2026-01-16_13:17:19_UTC
-- **Git Commit**: 9d95a19
+- **Version**: 3.42.50
+- **Build Time**: 2026-01-16_15:09:21_UTC
+- **Git Commit**: dd7c4da
 
 ## Recent Updates (v1.1.0)
 - ✅ Fixed TUI progress display with line-by-line output
@@ -28,6 +28,7 @@ var (
 restoreClean bool
 restoreCreate bool
 restoreJobs int
+restoreParallelDBs int // Number of parallel database restores
 restoreTarget string
 restoreVerbose bool
 restoreNoProgress bool
@@ -289,6 +290,7 @@ func init() {
 restoreClusterCmd.Flags().BoolVar(&restoreForce, "force", false, "Skip safety checks and confirmations")
 restoreClusterCmd.Flags().BoolVar(&restoreCleanCluster, "clean-cluster", false, "Drop all existing user databases before restore (disaster recovery)")
 restoreClusterCmd.Flags().IntVar(&restoreJobs, "jobs", 0, "Number of parallel decompression jobs (0 = auto)")
+restoreClusterCmd.Flags().IntVar(&restoreParallelDBs, "parallel-dbs", 0, "Number of databases to restore in parallel (0 = use config default, 1 = sequential)")
 restoreClusterCmd.Flags().StringVar(&restoreWorkdir, "workdir", "", "Working directory for extraction (use when system disk is small, e.g. /mnt/storage/restore_tmp)")
 restoreClusterCmd.Flags().BoolVar(&restoreVerbose, "verbose", false, "Show detailed restore progress")
 restoreClusterCmd.Flags().BoolVar(&restoreNoProgress, "no-progress", false, "Disable progress indicators")
@@ -783,6 +785,12 @@ func runRestoreCluster(cmd *cobra.Command, args []string) error {
 }
 }
 
+// Override cluster parallelism if --parallel-dbs is specified
+if restoreParallelDBs > 0 {
+cfg.ClusterParallelism = restoreParallelDBs
+log.Info("Using custom parallelism for database restores", "parallel_dbs", restoreParallelDBs)
+}
+
 // Create restore engine
 engine := restore.New(cfg, log, db)
 
@@ -4,6 +4,7 @@ import (
 "bytes"
 "crypto/rand"
 "io"
+mathrand "math/rand"
 "testing"
 )
 
@@ -100,12 +101,15 @@ func TestChunker_Deterministic(t *testing.T) {
 
 func TestChunker_ShiftedData(t *testing.T) {
 // Test that shifted data still shares chunks (the key CDC benefit)
+// Use deterministic random data for reproducible test results
+rng := mathrand.New(mathrand.NewSource(42))
+
 original := make([]byte, 100*1024)
-rand.Read(original)
+rng.Read(original)
 
 // Create shifted version (prepend some bytes)
 prefix := make([]byte, 1000)
-rand.Read(prefix)
+rng.Read(prefix)
 shifted := append(prefix, original...)
 
 // Chunk both
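The hunk above swaps `crypto/rand` for a seeded `math/rand` source so the test input is identical on every run. A standalone sketch of the same pattern (the chunker itself is not shown; sizes mirror the test):

```go
package main

import (
	"fmt"
	mathrand "math/rand"
)

func main() {
	// A fixed seed yields the same pseudo-random bytes on every run, so a
	// failing test can be reproduced exactly; crypto/rand cannot be seeded.
	rng := mathrand.New(mathrand.NewSource(42))

	original := make([]byte, 100*1024)
	rng.Read(original) // *rand.Rand.Read always fills the slice, never errors

	prefix := make([]byte, 1000)
	rng.Read(prefix)
	shifted := append(prefix, original...)

	fmt.Println(len(original), len(shifted)) // 102400 103400
}
```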
@@ -38,6 +38,10 @@ type DatabaseProgressCallback func(done, total int, dbName string)
 // Parameters: done count, total count, database name, elapsed time for current restore phase, avg duration per DB
 type DatabaseProgressWithTimingCallback func(done, total int, dbName string, phaseElapsed, avgPerDB time.Duration)
 
+// DatabaseProgressByBytesCallback is called with progress weighted by database sizes (bytes)
+// Parameters: bytes completed, total bytes, current database name, databases done count, total database count
+type DatabaseProgressByBytesCallback func(bytesDone, bytesTotal int64, dbName string, dbDone, dbTotal int)
+
 // Engine handles database restore operations
 type Engine struct {
 cfg *config.Config
@@ -49,9 +53,10 @@ type Engine struct {
 debugLogPath string // Path to save debug log on error
 
 // TUI progress callback for detailed progress reporting
 progressCallback ProgressCallback
 dbProgressCallback DatabaseProgressCallback
 dbProgressTimingCallback DatabaseProgressWithTimingCallback
+dbProgressByBytesCallback DatabaseProgressByBytesCallback
 }
 
 // New creates a new restore engine
@@ -122,6 +127,11 @@ func (e *Engine) SetDatabaseProgressWithTimingCallback(cb DatabaseProgressWithTi
 e.dbProgressTimingCallback = cb
 }
 
+// SetDatabaseProgressByBytesCallback sets a callback for progress weighted by database sizes
+func (e *Engine) SetDatabaseProgressByBytesCallback(cb DatabaseProgressByBytesCallback) {
+e.dbProgressByBytesCallback = cb
+}
+
 // reportProgress safely calls the progress callback if set
 func (e *Engine) reportProgress(current, total int64, description string) {
 if e.progressCallback != nil {
@@ -143,6 +153,13 @@ func (e *Engine) reportDatabaseProgressWithTiming(done, total int, dbName string
 }
 }
 
+// reportDatabaseProgressByBytes safely calls the bytes-weighted callback if set
+func (e *Engine) reportDatabaseProgressByBytes(bytesDone, bytesTotal int64, dbName string, dbDone, dbTotal int) {
+if e.dbProgressByBytesCallback != nil {
+e.dbProgressByBytesCallback(bytesDone, bytesTotal, dbName, dbDone, dbTotal)
+}
+}
+
 // loggerAdapter adapts our logger to the progress.Logger interface
 type loggerAdapter struct {
 logger logger.Logger
@@ -861,6 +878,25 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
 // Create temporary extraction directory in configured WorkDir
 workDir := e.cfg.GetEffectiveWorkDir()
 tempDir := filepath.Join(workDir, fmt.Sprintf(".restore_%d", time.Now().Unix()))
+
+// Check disk space for extraction (need ~3x archive size: compressed + extracted + working space)
+if archiveInfo != nil {
+requiredBytes := uint64(archiveInfo.Size()) * 3
+extractionCheck := checks.CheckDiskSpace(workDir)
+if extractionCheck.AvailableBytes < requiredBytes {
+operation.Fail("Insufficient disk space for extraction")
+return fmt.Errorf("insufficient disk space for extraction in %s: need %.1f GB, have %.1f GB (archive size: %.1f GB × 3)",
+workDir,
+float64(requiredBytes)/(1024*1024*1024),
+float64(extractionCheck.AvailableBytes)/(1024*1024*1024),
+float64(archiveInfo.Size())/(1024*1024*1024))
+}
+e.log.Info("Disk space check for extraction passed",
+"workdir", workDir,
+"required_gb", float64(requiredBytes)/(1024*1024*1024),
+"available_gb", float64(extractionCheck.AvailableBytes)/(1024*1024*1024))
+}
+
 if err := os.MkdirAll(tempDir, 0755); err != nil {
 operation.Fail("Failed to create temporary directory")
 return fmt.Errorf("failed to create temp directory in %s: %w", workDir, err)
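The added check above requires roughly 3x the archive size (compressed archive + extracted dumps + working space) before extraction starts. A hedged sketch of the same rule using a plain statfs call on Linux; `availableBytes` is a stand-in helper, not the project's `checks` package:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

// availableBytes reports space available to unprivileged writers at path (Linux/Unix).
func availableBytes(path string) (uint64, error) {
	var st unix.Statfs_t
	if err := unix.Statfs(path, &st); err != nil {
		return 0, err
	}
	return st.Bavail * uint64(st.Bsize), nil
}

func main() {
	archive := "cluster.tar.gz" // illustrative path
	workDir := os.TempDir()

	info, err := os.Stat(archive)
	if err != nil {
		fmt.Println("stat:", err)
		return
	}
	required := uint64(info.Size()) * 3 // compressed + extracted + working space
	avail, err := availableBytes(workDir)
	if err != nil {
		fmt.Println("statfs:", err)
		return
	}
	if avail < required {
		fmt.Printf("insufficient space in %s: need %.1f GB, have %.1f GB\n",
			workDir, float64(required)/(1<<30), float64(avail)/(1<<30))
		return
	}
	fmt.Println("disk space check passed")
}
```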
@@ -1024,12 +1060,27 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
 var restoreErrorsMu sync.Mutex
 totalDBs := 0
 
-// Count total databases
+// Count total databases and calculate total bytes for weighted progress
+var totalBytes int64
+dbSizes := make(map[string]int64) // Map database name to dump file size
 for _, entry := range entries {
 if !entry.IsDir() {
 totalDBs++
+dumpFile := filepath.Join(dumpsDir, entry.Name())
+if info, err := os.Stat(dumpFile); err == nil {
+dbName := entry.Name()
+dbName = strings.TrimSuffix(dbName, ".dump")
+dbName = strings.TrimSuffix(dbName, ".sql.gz")
+dbSizes[dbName] = info.Size()
+totalBytes += info.Size()
+}
 }
 }
+e.log.Info("Calculated total restore size", "databases", totalDBs, "total_bytes", totalBytes)
+
+// Track bytes completed for weighted progress
+var bytesCompleted int64
+var bytesCompletedMu sync.Mutex
+
 // Create ETA estimator for database restores
 estimator := progress.NewETAEstimator("Restoring cluster", totalDBs)
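Summing dump-file sizes up front, as in the hunk above, lets progress be weighted by bytes instead of database count, so one large database no longer makes the bar look nearly finished while most of the data is still pending. A small sketch of the effect with illustrative sizes:

```go
package main

import "fmt"

// weightedPercent returns overall progress when each database contributes in
// proportion to its dump size rather than counting every database equally.
func weightedPercent(bytesDone, bytesTotal int64) int {
	if bytesTotal <= 0 {
		return 0
	}
	return int(bytesDone * 100 / bytesTotal)
}

func main() {
	dbSizes := map[string]int64{"small": 10 << 20, "medium": 90 << 20, "huge": 900 << 20}
	var total, done int64
	for _, s := range dbSizes {
		total += s
	}
	done = dbSizes["small"] + dbSizes["medium"] // two of three databases restored
	// Prints "10% by size vs 66% by count": the count-based bar overstates progress.
	fmt.Printf("%d%% by size vs %d%% by count\n", weightedPercent(done, total), 2*100/3)
}
```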
@@ -1202,7 +1253,21 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
 completedDBTimes = append(completedDBTimes, dbRestoreDuration)
 completedDBTimesMu.Unlock()
+
+// Update bytes completed for weighted progress
+dbSize := dbSizes[dbName]
+bytesCompletedMu.Lock()
+bytesCompleted += dbSize
+currentBytesCompleted := bytesCompleted
+currentSuccessCount := int(atomic.LoadInt32(&successCount)) + 1 // +1 because we're about to increment
+bytesCompletedMu.Unlock()
+
+// Report weighted progress (bytes-based)
+e.reportDatabaseProgressByBytes(currentBytesCompleted, totalBytes, dbName, currentSuccessCount, totalDBs)
+
 atomic.AddInt32(&successCount, 1)
+
+// Small delay to ensure PostgreSQL fully closes connections before next restore
+time.Sleep(100 * time.Millisecond)
 }(dbIndex, entry.Name())
 
 dbIndex++
@@ -36,18 +36,22 @@ type BackupExecutionModel struct {
 spinnerFrame int
 
 // Database count progress (for cluster backup)
 dbTotal int
 dbDone int
 dbName string // Current database being backed up
+overallPhase int // 1=globals, 2=databases, 3=compressing
+phaseDesc string // Description of current phase
 }
 
 // sharedBackupProgressState holds progress state that can be safely accessed from callbacks
 type sharedBackupProgressState struct {
 mu sync.Mutex
 dbTotal int
 dbDone int
 dbName string
-hasUpdate bool
+overallPhase int // 1=globals, 2=databases, 3=compressing
+phaseDesc string // Description of current phase
+hasUpdate bool
 }
 
 // Package-level shared progress state for backup operations
@@ -68,12 +72,12 @@ func clearCurrentBackupProgress() {
 currentBackupProgressState = nil
 }
 
-func getCurrentBackupProgress() (dbTotal, dbDone int, dbName string, hasUpdate bool) {
+func getCurrentBackupProgress() (dbTotal, dbDone int, dbName string, overallPhase int, phaseDesc string, hasUpdate bool) {
 currentBackupProgressMu.Lock()
 defer currentBackupProgressMu.Unlock()
 
 if currentBackupProgressState == nil {
-return 0, 0, "", false
+return 0, 0, "", 0, "", false
 }
 
 currentBackupProgressState.mu.Lock()
@@ -83,7 +87,8 @@ func getCurrentBackupProgress() (dbTotal, dbDone int, dbName string, hasUpdate b
 currentBackupProgressState.hasUpdate = false
 
 return currentBackupProgressState.dbTotal, currentBackupProgressState.dbDone,
-currentBackupProgressState.dbName, hasUpdate
+currentBackupProgressState.dbName, currentBackupProgressState.overallPhase,
+currentBackupProgressState.phaseDesc, hasUpdate
 }
 
 func NewBackupExecution(cfg *config.Config, log logger.Logger, parent tea.Model, ctx context.Context, backupType, dbName string, ratio int) BackupExecutionModel {
@@ -171,6 +176,8 @@ func executeBackupWithTUIProgress(parentCtx context.Context, cfg *config.Config,
 progressState.dbDone = done
 progressState.dbTotal = total
 progressState.dbName = currentDB
+progressState.overallPhase = 2 // Phase 2: Backing up databases
+progressState.phaseDesc = fmt.Sprintf("Phase 2/3: Databases (%d/%d)", done, total)
 progressState.hasUpdate = true
 progressState.mu.Unlock()
 })
@@ -223,11 +230,13 @@ func (m BackupExecutionModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 m.spinnerFrame = (m.spinnerFrame + 1) % len(spinnerFrames)
 
 // Poll for database progress updates from callbacks
-dbTotal, dbDone, dbName, hasUpdate := getCurrentBackupProgress()
+dbTotal, dbDone, dbName, overallPhase, phaseDesc, hasUpdate := getCurrentBackupProgress()
 if hasUpdate {
 m.dbTotal = dbTotal
 m.dbDone = dbDone
 m.dbName = dbName
+m.overallPhase = overallPhase
+m.phaseDesc = phaseDesc
 }
 
 // Update status based on progress and elapsed time
@@ -286,6 +295,20 @@ func (m BackupExecutionModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 }
 return m, nil
 
+case tea.InterruptMsg:
+// Handle Ctrl+C signal (SIGINT) - Bubbletea v1.3+ sends this instead of KeyMsg for ctrl+c
+if !m.done && !m.cancelling {
+m.cancelling = true
+m.status = "[STOP] Cancelling backup... (please wait)"
+if m.cancel != nil {
+m.cancel()
+}
+return m, nil
+} else if m.done {
+return m.parent, tea.Quit
+}
+return m, nil
+
 case tea.KeyMsg:
 switch msg.String() {
 case "ctrl+c", "esc":
@@ -361,19 +384,68 @@ func (m BackupExecutionModel) View() string {
 
 // Status display
 if !m.done {
-// Show database progress bar if we have progress data (cluster backup)
-if m.dbTotal > 0 && m.dbDone > 0 {
-// Show progress bar instead of spinner when we have real progress
-progressBar := renderBackupDatabaseProgressBar(m.dbDone, m.dbTotal, m.dbName, 50)
-s.WriteString(progressBar + "\n")
-s.WriteString(fmt.Sprintf(" %s\n", m.status))
-} else {
-// Show spinner during initial phases
-if m.cancelling {
-s.WriteString(fmt.Sprintf(" %s %s\n", spinnerFrames[m.spinnerFrame], m.status))
-} else {
-s.WriteString(fmt.Sprintf(" %s %s\n", spinnerFrames[m.spinnerFrame], m.status))
+// Unified progress display for cluster backup
+if m.backupType == "cluster" {
+// Calculate overall progress across all phases
+// Phase 1: Globals (0-15%)
+// Phase 2: Databases (15-90%)
+// Phase 3: Compressing (90-100%)
+overallProgress := 0
+phaseLabel := "Starting..."
+
+elapsedSec := int(time.Since(m.startTime).Seconds())
+
+if m.overallPhase == 2 && m.dbTotal > 0 {
+// Phase 2: Database backups - contributes 15-90%
+dbPct := int((int64(m.dbDone) * 100) / int64(m.dbTotal))
+overallProgress = 15 + (dbPct * 75 / 100)
+phaseLabel = m.phaseDesc
+} else if elapsedSec < 5 {
+// Initial setup
+overallProgress = 2
+phaseLabel = "Phase 1/3: Initializing..."
+} else if m.dbTotal == 0 {
+// Phase 1: Globals backup (before databases start)
+overallProgress = 10
+phaseLabel = "Phase 1/3: Backing up Globals"
 }
 }
+
+// Header with phase and overall progress
+s.WriteString(infoStyle.Render(" ─── Cluster Backup Progress ──────────────────────────────"))
+s.WriteString("\n\n")
+s.WriteString(fmt.Sprintf(" %s\n\n", phaseLabel))
+
+// Overall progress bar
+s.WriteString(" Overall: ")
+s.WriteString(renderProgressBar(overallProgress))
+s.WriteString(fmt.Sprintf(" %d%%\n", overallProgress))
+
+// Phase-specific details
+if m.dbTotal > 0 && m.dbDone > 0 {
+// Show current database being backed up
+s.WriteString("\n")
+spinner := spinnerFrames[m.spinnerFrame]
+if m.dbName != "" && m.dbDone <= m.dbTotal {
+s.WriteString(fmt.Sprintf(" Current: %s %s\n", spinner, m.dbName))
+}
+s.WriteString("\n")
+
+// Database progress bar
+progressBar := renderBackupDatabaseProgressBar(m.dbDone, m.dbTotal, m.dbName, 50)
+s.WriteString(progressBar + "\n")
+} else {
+// Intermediate phase (globals)
+spinner := spinnerFrames[m.spinnerFrame]
+s.WriteString(fmt.Sprintf("\n %s %s\n\n", spinner, m.status))
+}
+
+s.WriteString("\n")
+s.WriteString(infoStyle.Render(" ───────────────────────────────────────────────────────────"))
+s.WriteString("\n\n")
+} else {
+// Single/sample database backup - simpler display
+spinner := spinnerFrames[m.spinnerFrame]
+s.WriteString(fmt.Sprintf(" %s %s\n", spinner, m.status))
+}
 
 if !m.cancelling {
@@ -188,6 +188,21 @@ func (m *MenuModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 }
 return m, nil
 
+case tea.InterruptMsg:
+// Handle Ctrl+C signal (SIGINT) - Bubbletea v1.3+ sends this
+if m.cancel != nil {
+m.cancel()
+}
+
+// Clean up any orphaned processes before exit
+m.logger.Info("Cleaning up processes before exit (SIGINT)")
+if err := cleanup.KillOrphanedProcesses(m.logger); err != nil {
+m.logger.Warn("Failed to clean up all processes", "error", err)
+}
+
+m.quitting = true
+return m, tea.Quit
+
 case tea.KeyMsg:
 switch msg.String() {
 case "ctrl+c", "q":
@@ -57,10 +57,18 @@ type RestoreExecutionModel struct {
 dbTotal int
 dbDone int
 
+// Current database being restored (for detailed display)
+currentDB string
+
 // Timing info for database restore phase (ETA calculation)
 dbPhaseElapsed time.Duration // Elapsed time since restore phase started
 dbAvgPerDB time.Duration // Average time per database restore
 
+// Overall progress tracking for unified display
+overallPhase int // 1=Extracting, 2=Globals, 3=Databases
+extractionDone bool
+extractionTime time.Duration // How long extraction took (for ETA calc)
+
 // Results
 done bool
 cancelling bool // True when user has requested cancellation
@@ -140,10 +148,21 @@ type sharedProgressState struct {
 dbTotal int
 dbDone int
 
+// Current database being restored
+currentDB string
+
 // Timing info for database restore phase
 dbPhaseElapsed time.Duration // Elapsed time since restore phase started
 dbAvgPerDB time.Duration // Average time per database restore
 
+// Overall phase tracking (1=Extract, 2=Globals, 3=Databases)
+overallPhase int
+extractionDone bool
+
+// Weighted progress by database sizes (bytes)
+dbBytesTotal int64 // Total bytes across all databases
+dbBytesDone int64 // Bytes completed (sum of finished DB sizes)
+
 // Rolling window for speed calculation
 speedSamples []restoreSpeedSample
 }
@@ -171,12 +190,12 @@ func clearCurrentRestoreProgress() {
 currentRestoreProgressState = nil
 }
 
-func getCurrentRestoreProgress() (bytesTotal, bytesDone int64, description string, hasUpdate bool, dbTotal, dbDone int, speed float64, dbPhaseElapsed, dbAvgPerDB time.Duration) {
+func getCurrentRestoreProgress() (bytesTotal, bytesDone int64, description string, hasUpdate bool, dbTotal, dbDone int, speed float64, dbPhaseElapsed, dbAvgPerDB time.Duration, currentDB string, overallPhase int, extractionDone bool, dbBytesTotal, dbBytesDone int64) {
 currentRestoreProgressMu.Lock()
 defer currentRestoreProgressMu.Unlock()
 
 if currentRestoreProgressState == nil {
-return 0, 0, "", false, 0, 0, 0, 0, 0
+return 0, 0, "", false, 0, 0, 0, 0, 0, "", 0, false, 0, 0
 }
 
 currentRestoreProgressState.mu.Lock()
@@ -188,7 +207,10 @@ func getCurrentRestoreProgress() (bytesTotal, bytesDone int64, description strin
 return currentRestoreProgressState.bytesTotal, currentRestoreProgressState.bytesDone,
 currentRestoreProgressState.description, currentRestoreProgressState.hasUpdate,
 currentRestoreProgressState.dbTotal, currentRestoreProgressState.dbDone, speed,
-currentRestoreProgressState.dbPhaseElapsed, currentRestoreProgressState.dbAvgPerDB
+currentRestoreProgressState.dbPhaseElapsed, currentRestoreProgressState.dbAvgPerDB,
+currentRestoreProgressState.currentDB, currentRestoreProgressState.overallPhase,
+currentRestoreProgressState.extractionDone,
+currentRestoreProgressState.dbBytesTotal, currentRestoreProgressState.dbBytesDone
 }
 
 // calculateRollingSpeed calculates speed from recent samples (last 5 seconds)
@@ -288,6 +310,14 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
 progressState.bytesTotal = total
 progressState.description = description
 progressState.hasUpdate = true
+progressState.overallPhase = 1
+progressState.extractionDone = false
+
+// Check if extraction is complete
+if current >= total && total > 0 {
+progressState.extractionDone = true
+progressState.overallPhase = 2
+}
 
 // Add speed sample for rolling window calculation
 progressState.speedSamples = append(progressState.speedSamples, restoreSpeedSample{
@@ -307,6 +337,9 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
 progressState.dbDone = done
 progressState.dbTotal = total
 progressState.description = fmt.Sprintf("Restoring %s", dbName)
+progressState.currentDB = dbName
+progressState.overallPhase = 3
+progressState.extractionDone = true
 progressState.hasUpdate = true
 // Clear byte progress when switching to db progress
 progressState.bytesTotal = 0
@@ -320,6 +353,9 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
 progressState.dbDone = done
 progressState.dbTotal = total
 progressState.description = fmt.Sprintf("Restoring %s", dbName)
+progressState.currentDB = dbName
+progressState.overallPhase = 3
+progressState.extractionDone = true
 progressState.dbPhaseElapsed = phaseElapsed
 progressState.dbAvgPerDB = avgPerDB
 progressState.hasUpdate = true
@@ -328,6 +364,20 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
 progressState.bytesDone = 0
 })
 
+// Set up weighted (bytes-based) progress callback for accurate cluster restore progress
+engine.SetDatabaseProgressByBytesCallback(func(bytesDone, bytesTotal int64, dbName string, dbDone, dbTotal int) {
+progressState.mu.Lock()
+defer progressState.mu.Unlock()
+progressState.dbBytesDone = bytesDone
+progressState.dbBytesTotal = bytesTotal
+progressState.dbDone = dbDone
+progressState.dbTotal = dbTotal
+progressState.currentDB = dbName
+progressState.overallPhase = 3
+progressState.extractionDone = true
+progressState.hasUpdate = true
+})
+
 // Store progress state in a package-level variable for the ticker to access
 // This is a workaround because tea messages can't be sent from callbacks
 setCurrentRestoreProgress(progressState)
@@ -381,28 +431,54 @@ func (m RestoreExecutionModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 m.elapsed = time.Since(m.startTime)
 
 // Poll shared progress state for real-time updates
-bytesTotal, bytesDone, description, hasUpdate, dbTotal, dbDone, speed, dbPhaseElapsed, dbAvgPerDB := getCurrentRestoreProgress()
-if hasUpdate && bytesTotal > 0 {
+bytesTotal, bytesDone, description, hasUpdate, dbTotal, dbDone, speed, dbPhaseElapsed, dbAvgPerDB, currentDB, overallPhase, extractionDone, dbBytesTotal, dbBytesDone := getCurrentRestoreProgress()
+if hasUpdate && bytesTotal > 0 && !extractionDone {
+// Phase 1: Extraction
 m.bytesTotal = bytesTotal
 m.bytesDone = bytesDone
 m.description = description
 m.showBytes = true
 m.speed = speed
+m.overallPhase = 1
+m.extractionDone = false
 
 // Update status to reflect actual progress
 m.status = description
-m.phase = "Extracting"
+m.phase = "Phase 1/3: Extracting Archive"
 m.progress = int((bytesDone * 100) / bytesTotal)
 } else if hasUpdate && dbTotal > 0 {
-// Database count progress for cluster restore with timing
+// Phase 3: Database restores
 m.dbTotal = dbTotal
 m.dbDone = dbDone
 m.dbPhaseElapsed = dbPhaseElapsed
 m.dbAvgPerDB = dbAvgPerDB
+m.currentDB = currentDB
+m.overallPhase = overallPhase
+m.extractionDone = extractionDone
 m.showBytes = false
-m.status = fmt.Sprintf("Restoring database %d of %d...", dbDone+1, dbTotal)
-m.phase = "Restore"
-m.progress = int((dbDone * 100) / dbTotal)
+
+if dbDone < dbTotal {
+m.status = fmt.Sprintf("Restoring: %s", currentDB)
+} else {
+m.status = "Finalizing..."
+}
+
+// Use weighted progress by bytes if available, otherwise use count
+if dbBytesTotal > 0 {
+weightedPercent := int((dbBytesDone * 100) / dbBytesTotal)
+m.phase = fmt.Sprintf("Phase 3/3: Databases (%d/%d) - %.1f%% by size", dbDone, dbTotal, float64(dbBytesDone*100)/float64(dbBytesTotal))
+m.progress = weightedPercent
+} else {
+m.phase = fmt.Sprintf("Phase 3/3: Databases (%d/%d)", dbDone, dbTotal)
+m.progress = int((dbDone * 100) / dbTotal)
+}
+} else if hasUpdate && extractionDone && dbTotal == 0 {
+// Phase 2: Globals restore (brief phase between extraction and databases)
+m.overallPhase = 2
+m.extractionDone = true
+m.showBytes = false
+m.status = "Restoring global objects (roles, tablespaces)..."
+m.phase = "Phase 2/3: Restoring Globals"
 } else {
 // Fallback: Update status based on elapsed time to show progress
 // This provides visual feedback even though we don't have real-time progress
@@ -487,6 +563,21 @@ func (m RestoreExecutionModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 }
 return m, nil
 
+case tea.InterruptMsg:
+// Handle Ctrl+C signal (SIGINT) - Bubbletea v1.3+ sends this instead of KeyMsg for ctrl+c
+if !m.done && !m.cancelling {
+m.cancelling = true
+m.status = "[STOP] Cancelling restore... (please wait)"
+m.phase = "Cancelling"
+if m.cancel != nil {
+m.cancel()
+}
+return m, nil
+} else if m.done {
+return m.parent, tea.Quit
+}
+return m, nil
+
 case tea.KeyMsg:
 switch msg.String() {
 case "ctrl+c", "esc":
@@ -610,36 +701,88 @@ func (m RestoreExecutionModel) View() string {
 s.WriteString("\n\n")
 s.WriteString(infoStyle.Render(" [KEYS] Press Enter to continue"))
 } else {
-// Show progress
-s.WriteString(fmt.Sprintf("Phase: %s\n", m.phase))
-
-// Show detailed progress bar when we have byte-level information
-// In this case, hide the spinner for cleaner display
-if m.showBytes && m.bytesTotal > 0 {
-// Status line without spinner (progress bar provides activity indication)
-s.WriteString(fmt.Sprintf("Status: %s\n", m.status))
-s.WriteString("\n")
-
-// Render schollz-style progress bar with bytes, rolling speed, ETA
-s.WriteString(renderDetailedProgressBarWithSpeed(m.bytesDone, m.bytesTotal, m.speed))
+// Show unified progress for cluster restore
+if m.restoreType == "restore-cluster" {
+// Calculate overall progress across all phases
+// Phase 1: Extraction (0-60%)
+// Phase 2: Globals (60-65%)
+// Phase 3: Databases (65-100%)
+overallProgress := 0
+phaseLabel := "Starting..."
+
+if m.showBytes && m.bytesTotal > 0 {
+// Phase 1: Extraction - contributes 0-60%
+extractPct := int((m.bytesDone * 100) / m.bytesTotal)
+overallProgress = (extractPct * 60) / 100
+phaseLabel = "Phase 1/3: Extracting Archive"
+} else if m.extractionDone && m.dbTotal == 0 {
+// Phase 2: Globals restore
+overallProgress = 62
+phaseLabel = "Phase 2/3: Restoring Globals"
+} else if m.dbTotal > 0 {
+// Phase 3: Database restores - contributes 65-100%
+dbPct := int((int64(m.dbDone) * 100) / int64(m.dbTotal))
+overallProgress = 65 + (dbPct * 35 / 100)
+phaseLabel = fmt.Sprintf("Phase 3/3: Databases (%d/%d)", m.dbDone, m.dbTotal)
+}
+
+// Header with phase and overall progress
+s.WriteString(infoStyle.Render(" ─── Cluster Restore Progress ─────────────────────────────"))
 s.WriteString("\n\n")
-} else if m.dbTotal > 0 {
-// Database count progress for cluster restore with timing
-spinner := m.spinnerFrames[m.spinnerFrame]
-s.WriteString(fmt.Sprintf("Status: %s %s\n", spinner, m.status))
-s.WriteString("\n")
-
-// Show database progress bar with timing and ETA
-s.WriteString(renderDatabaseProgressBarWithTiming(m.dbDone, m.dbTotal, m.dbPhaseElapsed, m.dbAvgPerDB))
+s.WriteString(fmt.Sprintf(" %s\n\n", phaseLabel))
+
+// Overall progress bar
+s.WriteString(" Overall: ")
+s.WriteString(renderProgressBar(overallProgress))
+s.WriteString(fmt.Sprintf(" %d%%\n", overallProgress))
+
+// Phase-specific details
+if m.showBytes && m.bytesTotal > 0 {
+// Show extraction details
+s.WriteString("\n")
+s.WriteString(fmt.Sprintf(" %s\n", m.status))
+s.WriteString("\n")
+s.WriteString(renderDetailedProgressBarWithSpeed(m.bytesDone, m.bytesTotal, m.speed))
+s.WriteString("\n")
+} else if m.dbTotal > 0 {
+// Show current database being restored
+s.WriteString("\n")
+spinner := m.spinnerFrames[m.spinnerFrame]
+if m.currentDB != "" && m.dbDone < m.dbTotal {
+s.WriteString(fmt.Sprintf(" Current: %s %s\n", spinner, m.currentDB))
+} else if m.dbDone >= m.dbTotal {
+s.WriteString(fmt.Sprintf(" %s Finalizing...\n", spinner))
+}
+s.WriteString("\n")
+
+// Database progress bar with timing
+s.WriteString(renderDatabaseProgressBarWithTiming(m.dbDone, m.dbTotal, m.dbPhaseElapsed, m.dbAvgPerDB))
+s.WriteString("\n")
+} else {
+// Intermediate phase (globals)
+spinner := m.spinnerFrames[m.spinnerFrame]
+s.WriteString(fmt.Sprintf("\n %s %s\n\n", spinner, m.status))
+}
+
+s.WriteString("\n")
+s.WriteString(infoStyle.Render(" ───────────────────────────────────────────────────────────"))
 s.WriteString("\n\n")
 } else {
-// Show status with rotating spinner (for phases without detailed progress)
-spinner := m.spinnerFrames[m.spinnerFrame]
-s.WriteString(fmt.Sprintf("Status: %s %s\n", spinner, m.status))
-s.WriteString("\n")
-
-if m.restoreType == "restore-single" {
-// Fallback to simple progress bar for single database restore
+// Single database restore - simpler display
+s.WriteString(fmt.Sprintf("Phase: %s\n", m.phase))
+
+// Show detailed progress bar when we have byte-level information
+if m.showBytes && m.bytesTotal > 0 {
+s.WriteString(fmt.Sprintf("Status: %s\n", m.status))
+s.WriteString("\n")
+s.WriteString(renderDetailedProgressBarWithSpeed(m.bytesDone, m.bytesTotal, m.speed))
+s.WriteString("\n\n")
+} else {
+spinner := m.spinnerFrames[m.spinnerFrame]
+s.WriteString(fmt.Sprintf("Status: %s %s\n", spinner, m.status))
+s.WriteString("\n")
+
+// Fallback to simple progress bar
 progressBar := renderProgressBar(m.progress)
 s.WriteString(progressBar)
 s.WriteString(fmt.Sprintf(" %d%%\n", m.progress))