Compare commits

...

6 Commits

Author SHA1 Message Date
ec33959e3e v3.42.18: Unify archive verification - backup manager uses same checks as restore
All checks were successful
CI/CD / Test (push) Successful in 1m13s
CI/CD / Lint (push) Successful in 1m22s
CI/CD / Build & Release (push) Successful in 3m12s
- verifyArchiveCmd now uses restore.Safety and restore.Diagnoser
- Same validation logic in backup manager verify and restore safety checks
- No more discrepancy between verify showing valid and restore failing
2026-01-08 12:10:45 +01:00
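The bullets above describe routing the TUI's verify action through the restore package. As a rough standalone sketch (not the repo's actual wiring), the shared checks boil down to the same restore.NewSafety/ValidateArchive and restore.NewDiagnoser/DiagnoseFile calls that appear in the verifyArchiveCmd diff further down; the archive path used here is a made-up example, and the internal package only builds inside this repository.

```go
package main

import (
	"fmt"

	"dbbackup/internal/restore" // internal package of this repo; signatures taken from the diff below
)

// verify reuses the restore-side checks, so "verify says valid" and
// "restore refuses the archive" can no longer disagree.
func verify(path string) error {
	// Same integrity check the restore path runs first
	safety := restore.NewSafety(nil, nil)
	if err := safety.ValidateArchive(path); err != nil {
		return fmt.Errorf("archive integrity: %w", err)
	}

	// Same deep diagnosis the restore path runs before restoring
	diagnoser := restore.NewDiagnoser(nil, false)
	result, err := diagnoser.DiagnoseFile(path)
	if err != nil {
		return fmt.Errorf("diagnosis failed: %w", err)
	}
	if !result.IsValid {
		return fmt.Errorf("archive not restorable (truncated=%v, corrupted=%v)",
			result.IsTruncated, result.IsCorrupted)
	}
	return nil
}

func main() {
	// Example path only
	if err := verify("/var/lib/dbbackup/backups/cluster.tar.gz"); err != nil {
		fmt.Println("verify failed:", err)
		return
	}
	fmt.Println("archive verified")
}
```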
92402f0fdb v3.42.17: Fix systemd service templates - remove invalid --config flag
All checks were successful
CI/CD / Test (push) Successful in 1m15s
CI/CD / Lint (push) Successful in 1m21s
CI/CD / Build & Release (push) Successful in 3m12s
- Service templates now use WorkingDirectory for config loading
- Config is read from .dbbackup.conf in /var/lib/dbbackup
- Updated SYSTEMD.md documentation to match actual CLI
- Removed non-existent --config flag from ExecStart
2026-01-08 11:57:16 +01:00
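Because the `--config` flag does not exist, the service now depends entirely on the working directory to find its configuration. A minimal sketch of that lookup, using only the standard library and assuming the CLI resolves `.dbbackup.conf` relative to its current directory (the real loader lives in the repo's config package):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// With WorkingDirectory=/var/lib/dbbackup in the unit file, Getwd returns
	// that directory when the service starts, so the config is found there.
	wd, err := os.Getwd()
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot determine working directory:", err)
		os.Exit(1)
	}
	cfgPath := filepath.Join(wd, ".dbbackup.conf")
	if _, err := os.Stat(cfgPath); err != nil {
		fmt.Fprintf(os.Stderr, "no config at %s: %v\n", cfgPath, err)
		os.Exit(1)
	}
	fmt.Println("would load config from", cfgPath)
}
```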
682510d1bc v3.42.16: TUI cleanup - remove STATUS box, add global styles
All checks were successful
CI/CD / Test (push) Successful in 1m19s
CI/CD / Lint (push) Successful in 1m24s
CI/CD / Build & Release (push) Successful in 3m19s
2026-01-08 11:17:46 +01:00
83ad62b6b5 v3.42.15: TUI - always allow Esc/Cancel during spinner operations
All checks were successful
CI/CD / Test (push) Successful in 1m13s
CI/CD / Lint (push) Successful in 1m20s
CI/CD / Build & Release (push) Successful in 3m7s
2026-01-08 10:53:00 +01:00
55d34be32e v3.42.14: TUI Backup Manager - status box with spinner, real verify function
All checks were successful
CI/CD / Test (push) Successful in 1m13s
CI/CD / Lint (push) Successful in 1m21s
CI/CD / Build & Release (push) Successful in 3m6s
2026-01-08 10:35:23 +01:00
1831bd7c1f v3.42.13: TUI improvements - grouped shortcuts, box layout, better alignment
All checks were successful
CI/CD / Test (push) Successful in 1m14s
CI/CD / Lint (push) Successful in 1m22s
CI/CD / Build & Release (push) Successful in 3m9s
2026-01-08 10:16:19 +01:00
10 changed files with 476 additions and 154 deletions

View File

@@ -116,8 +116,9 @@ sudo chmod 755 /usr/local/bin/dbbackup
 ### Step 2: Create Configuration

 ```bash
-# Main configuration
-sudo tee /etc/dbbackup/dbbackup.conf << 'EOF'
+# Main configuration in working directory (where service runs from)
+# dbbackup reads .dbbackup.conf from WorkingDirectory
+sudo tee /var/lib/dbbackup/.dbbackup.conf << 'EOF'
 # DBBackup Configuration
 db-type=postgres
 host=localhost
@@ -128,6 +129,8 @@ compression=6
 retention-days=30
 min-backups=7
 EOF
+sudo chown dbbackup:dbbackup /var/lib/dbbackup/.dbbackup.conf
+sudo chmod 600 /var/lib/dbbackup/.dbbackup.conf

 # Instance credentials (secure permissions)
 sudo tee /etc/dbbackup/env.d/cluster.conf << 'EOF'
@@ -157,13 +160,15 @@ Group=dbbackup
 # Load configuration
 EnvironmentFile=-/etc/dbbackup/env.d/cluster.conf

-# Working directory
+# Working directory (config is loaded from .dbbackup.conf here)
 WorkingDirectory=/var/lib/dbbackup

-# Execute backup
+# Execute backup (reads .dbbackup.conf from WorkingDirectory)
 ExecStart=/usr/local/bin/dbbackup backup cluster \
-    --config /etc/dbbackup/dbbackup.conf \
     --backup-dir /var/lib/dbbackup/backups \
+    --host localhost \
+    --port 5432 \
+    --user postgres \
     --allow-root

 # Security hardening
@@ -443,12 +448,12 @@ sudo systemctl status dbbackup-cluster.service
 # View detailed error
 sudo journalctl -u dbbackup-cluster.service -n 50 --no-pager

-# Test manually as dbbackup user
-sudo -u dbbackup /usr/local/bin/dbbackup backup cluster --config /etc/dbbackup/dbbackup.conf
+# Test manually as dbbackup user (run from working directory with .dbbackup.conf)
+cd /var/lib/dbbackup && sudo -u dbbackup /usr/local/bin/dbbackup backup cluster

 # Check permissions
 ls -la /var/lib/dbbackup/
-ls -la /etc/dbbackup/
+ls -la /var/lib/dbbackup/.dbbackup.conf
 ```

 ### Permission Denied

View File

@@ -3,9 +3,9 @@
 This directory contains pre-compiled binaries for the DB Backup Tool across multiple platforms and architectures.

 ## Build Information
-- **Version**: 3.42.1
-- **Build Time**: 2026-01-08_05:03:53_UTC
-- **Git Commit**: 9c65821
+- **Version**: 3.42.10
+- **Build Time**: 2026-01-08_10:59:00_UTC
+- **Git Commit**: 92402f0

 ## Recent Updates (v1.1.0)
 - ✅ Fixed TUI progress display with line-by-line output

View File

@@ -37,9 +37,9 @@ var (
 	restoreSaveDebugLog string // Path to save debug log on failure

 	// Diagnose flags
 	diagnoseJSON     bool
 	diagnoseDeep     bool
 	diagnoseKeepTemp bool

 	// Encryption flags
 	restoreEncryptionKeyFile string
@@ -565,7 +565,7 @@ func runRestoreSingle(cmd *cobra.Command, args []string) error {
 	// Create restore engine
 	engine := restore.New(cfg, log, db)

 	// Enable debug logging if requested
 	if restoreSaveDebugLog != "" {
 		engine.SetDebugLogPath(restoreSaveDebugLog)
@@ -589,15 +589,15 @@ func runRestoreSingle(cmd *cobra.Command, args []string) error {
 	// Run pre-restore diagnosis if requested
 	if restoreDiagnose {
 		log.Info("[DIAG] Running pre-restore diagnosis...")

 		diagnoser := restore.NewDiagnoser(log, restoreVerbose)
 		result, err := diagnoser.DiagnoseFile(archivePath)
 		if err != nil {
 			return fmt.Errorf("diagnosis failed: %w", err)
 		}

 		diagnoser.PrintDiagnosis(result)

 		if !result.IsValid {
 			log.Error("[FAIL] Pre-restore diagnosis found issues")
 			if result.IsTruncated {
@@ -607,7 +607,7 @@ func runRestoreSingle(cmd *cobra.Command, args []string) error {
 				log.Error(" The backup file appears to be CORRUPTED")
 			}

 			fmt.Println("\nUse --force to attempt restore anyway.")
 			if !restoreForce {
 				return fmt.Errorf("aborting restore due to backup file issues")
 			}
@@ -785,7 +785,7 @@ func runRestoreCluster(cmd *cobra.Command, args []string) error {
 	// Create restore engine
 	engine := restore.New(cfg, log, db)

 	// Enable debug logging if requested
 	if restoreSaveDebugLog != "" {
 		engine.SetDebugLogPath(restoreSaveDebugLog)
@@ -830,7 +830,7 @@ func runRestoreCluster(cmd *cobra.Command, args []string) error {
 	// Run pre-restore diagnosis if requested
 	if restoreDiagnose {
 		log.Info("[DIAG] Running pre-restore diagnosis...")

 		// Create temp directory for extraction in configured WorkDir
 		workDir := cfg.GetEffectiveWorkDir()
 		diagTempDir, err := os.MkdirTemp(workDir, "dbbackup-diagnose-*")
@@ -838,13 +838,13 @@ func runRestoreCluster(cmd *cobra.Command, args []string) error {
 			return fmt.Errorf("failed to create temp directory for diagnosis in %s: %w", workDir, err)
 		}
 		defer os.RemoveAll(diagTempDir)

 		diagnoser := restore.NewDiagnoser(log, restoreVerbose)
 		results, err := diagnoser.DiagnoseClusterDumps(archivePath, diagTempDir)
 		if err != nil {
 			return fmt.Errorf("diagnosis failed: %w", err)
 		}

 		// Check for any invalid dumps
 		var invalidDumps []string
 		for _, result := range results {
@@ -853,7 +853,7 @@ func runRestoreCluster(cmd *cobra.Command, args []string) error {
 				diagnoser.PrintDiagnosis(result)
 			}
 		}

 		if len(invalidDumps) > 0 {
 			log.Error("[FAIL] Pre-restore diagnosis found issues",
 				"invalid_dumps", len(invalidDumps),
@@ -864,7 +864,7 @@ func runRestoreCluster(cmd *cobra.Command, args []string) error {
 		}

 		fmt.Println("\nRun 'dbbackup restore diagnose <archive> --deep' for full details.")
 		fmt.Println("Use --force to attempt restore anyway.")
 		if !restoreForce {
 			return fmt.Errorf("aborting restore due to %d invalid dump(s)", len(invalidDumps))
 		}

View File

@@ -53,16 +53,16 @@ type InstallOptions struct {
 // ServiceStatus contains information about installed services
 type ServiceStatus struct {
 	Installed    bool
 	Enabled      bool
 	Active       bool
 	TimerEnabled bool
 	TimerActive  bool
 	LastRun      string
 	NextRun      string
 	ServicePath  string
 	TimerPath    string
 	ExporterPath string
 }

 // NewInstaller creates a new Installer
@@ -188,7 +188,7 @@ func (i *Installer) Uninstall(ctx context.Context, instance string, purge bool)
 	if instance != "cluster" && instance != "" {
 		templateService := filepath.Join(i.unitDir, "dbbackup@.service")
 		templateTimer := filepath.Join(i.unitDir, "dbbackup@.timer")

 		// Only remove templates if no other instances are using them
 		if i.canRemoveTemplates() {
 			if !i.dryRun {
@@ -644,11 +644,11 @@ func (i *Installer) canRemoveTemplates() bool {
 	// Check if any dbbackup@*.service instances exist
 	pattern := filepath.Join(i.unitDir, "dbbackup@*.service")
 	matches, _ := filepath.Glob(pattern)

 	// Also check for running instances
 	cmd := exec.Command("systemctl", "list-units", "--type=service", "--all", "dbbackup@*")
 	output, _ := cmd.Output()

 	return len(matches) == 0 && !strings.Contains(string(output), "dbbackup@")
 }

View File

@@ -33,8 +33,11 @@ RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
 # Environment
 EnvironmentFile=-/etc/dbbackup/env.d/cluster.conf

+# Working directory (config is loaded from .dbbackup.conf here)
+WorkingDirectory=/var/lib/dbbackup
+
 # Execution - cluster backup (all databases)
-ExecStart={{.BinaryPath}} backup cluster --config {{.ConfigPath}}
+ExecStart={{.BinaryPath}} backup cluster --backup-dir {{.BackupDir}}
 TimeoutStartSec={{.TimeoutSeconds}}

 # Post-backup metrics export
View File

@@ -33,8 +33,11 @@ RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
 # Environment
 EnvironmentFile=-/etc/dbbackup/env.d/%i.conf

+# Working directory (config is loaded from .dbbackup.conf here)
+WorkingDirectory=/var/lib/dbbackup
+
 # Execution
-ExecStart={{.BinaryPath}} backup {{.BackupType}} %i --config {{.ConfigPath}}
+ExecStart={{.BinaryPath}} backup {{.BackupType}} %i --backup-dir {{.BackupDir}}
 TimeoutStartSec={{.TimeoutSeconds}}

 # Post-backup metrics export
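Both unit templates above are Go text/template files with `{{.BinaryPath}}`, `{{.BackupDir}}` and `{{.TimeoutSeconds}}` placeholders. A small, self-contained sketch of how such a template can be rendered; the unitData struct here is a hypothetical stand-in for the installer's real data type, not code from the repo:

```go
package main

import (
	"fmt"
	"os"
	"text/template"
)

// unitData mirrors the placeholders visible in the unit templates above.
type unitData struct {
	BinaryPath     string
	BackupDir      string
	TimeoutSeconds int
}

const execLines = `ExecStart={{.BinaryPath}} backup cluster --backup-dir {{.BackupDir}}
TimeoutStartSec={{.TimeoutSeconds}}
`

func main() {
	t := template.Must(template.New("unit").Parse(execLines))
	if err := t.Execute(os.Stdout, unitData{
		BinaryPath:     "/usr/local/bin/dbbackup",
		BackupDir:      "/var/lib/dbbackup/backups",
		TimeoutSeconds: 3600,
	}); err != nil {
		fmt.Fprintln(os.Stderr, "render failed:", err)
	}
}
```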

View File

@@ -11,40 +11,102 @@ import (
 	"dbbackup/internal/config"
 	"dbbackup/internal/logger"
+	"dbbackup/internal/restore"
+)
+
+// OperationState represents the current operation state
+type OperationState int
+
+const (
+	OpIdle OperationState = iota
+	OpVerifying
+	OpDeleting
 )

 // BackupManagerModel manages backup archives
 type BackupManagerModel struct {
 	config       *config.Config
 	logger       logger.Logger
 	parent       tea.Model
 	ctx          context.Context
 	archives     []ArchiveInfo
 	cursor       int
 	loading      bool
 	err          error
 	message      string
 	totalSize    int64
 	freeSpace    int64
+	opState      OperationState
+	opTarget     string // Name of archive being operated on
+	spinnerFrame int
 }

 // NewBackupManager creates a new backup manager
 func NewBackupManager(cfg *config.Config, log logger.Logger, parent tea.Model, ctx context.Context) BackupManagerModel {
 	return BackupManagerModel{
 		config:  cfg,
 		logger:  log,
 		parent:  parent,
 		ctx:     ctx,
 		loading: true,
+		opState: OpIdle,
+		spinnerFrame: 0,
 	}
 }

 func (m BackupManagerModel) Init() tea.Cmd {
-	return loadArchives(m.config, m.logger)
+	return tea.Batch(loadArchives(m.config, m.logger), managerTickCmd())
+}
+
+// Tick for spinner animation
+type managerTickMsg time.Time
+
+func managerTickCmd() tea.Cmd {
+	return tea.Tick(100*time.Millisecond, func(t time.Time) tea.Msg {
+		return managerTickMsg(t)
+	})
+}
+
+// Verify result message
+type verifyResultMsg struct {
+	archive string
+	valid   bool
+	err     error
+	details string
 }

 func (m BackupManagerModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 	switch msg := msg.(type) {
+	case managerTickMsg:
+		// Update spinner frame
+		m.spinnerFrame = (m.spinnerFrame + 1) % len(spinnerFrames)
+		return m, managerTickCmd()
+
+	case verifyResultMsg:
+		m.opState = OpIdle
+		m.opTarget = ""
+		if msg.err != nil {
+			m.message = fmt.Sprintf("[-] Verify failed: %v", msg.err)
+		} else if msg.valid {
+			m.message = fmt.Sprintf("[+] %s: Valid - %s", msg.archive, msg.details)
+			// Update archive validity in list
+			for i := range m.archives {
+				if m.archives[i].Name == msg.archive {
+					m.archives[i].Valid = true
+					break
+				}
+			}
+		} else {
+			m.message = fmt.Sprintf("[-] %s: Invalid - %s", msg.archive, msg.details)
+			for i := range m.archives {
+				if m.archives[i].Name == msg.archive {
+					m.archives[i].Valid = false
+					break
+				}
+			}
+		}
+		return m, nil
+
 	case archiveListMsg:
 		m.loading = false
 		if msg.err != nil {
@@ -68,10 +130,24 @@ func (m BackupManagerModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 		return m, nil

 	case tea.KeyMsg:
-		switch msg.String() {
-		case "ctrl+c", "q", "esc":
+		// Allow escape/cancel even during operations
+		if msg.String() == "ctrl+c" || msg.String() == "esc" || msg.String() == "q" {
+			if m.opState != OpIdle {
+				// Cancel current operation
+				m.opState = OpIdle
+				m.opTarget = ""
+				m.message = "Operation cancelled"
+				return m, nil
+			}
 			return m.parent, nil
+		}
+
+		// Block other input during operations
+		if m.opState != OpIdle {
+			return m, nil
+		}
+
+		switch msg.String() {
 		case "up", "k":
 			if m.cursor > 0 {
 				m.cursor--
@@ -83,11 +159,13 @@ func (m BackupManagerModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 			}

 		case "v":
-			// Verify archive
+			// Verify archive with real verification
 			if len(m.archives) > 0 && m.cursor < len(m.archives) {
 				selected := m.archives[m.cursor]
-				m.message = fmt.Sprintf("[SEARCH] Verifying %s...", selected.Name)
-				// In real implementation, would run verification
+				m.opState = OpVerifying
+				m.opTarget = selected.Name
+				m.message = ""
+				return m, verifyArchiveCmd(selected)
 			}

 		case "d":
@@ -152,39 +230,67 @@ func (m BackupManagerModel) View() string {
 	var s strings.Builder

 	// Title
-	s.WriteString(titleStyle.Render("[DB] Backup Archive Manager"))
+	s.WriteString(TitleStyle.Render("[DB] Backup Archive Manager"))
 	s.WriteString("\n\n")

+	// Status line (no box, bold+color accents)
+	switch m.opState {
+	case OpVerifying:
+		spinner := spinnerFrames[m.spinnerFrame]
+		s.WriteString(StatusActiveStyle.Render(fmt.Sprintf("%s Verifying: %s", spinner, m.opTarget)))
+		s.WriteString("\n\n")
+	case OpDeleting:
+		spinner := spinnerFrames[m.spinnerFrame]
+		s.WriteString(StatusActiveStyle.Render(fmt.Sprintf("%s Deleting: %s", spinner, m.opTarget)))
+		s.WriteString("\n\n")
+	default:
+		if m.loading {
+			spinner := spinnerFrames[m.spinnerFrame]
+			s.WriteString(StatusActiveStyle.Render(fmt.Sprintf("%s Loading archives...", spinner)))
+			s.WriteString("\n\n")
+		} else if m.message != "" {
+			// Color based on message content
+			if strings.HasPrefix(m.message, "[+]") || strings.HasPrefix(m.message, "Valid") {
+				s.WriteString(StatusSuccessStyle.Render(m.message))
+			} else if strings.HasPrefix(m.message, "[-]") || strings.HasPrefix(m.message, "Error") {
+				s.WriteString(StatusErrorStyle.Render(m.message))
+			} else {
+				s.WriteString(StatusActiveStyle.Render(m.message))
+			}
+			s.WriteString("\n\n")
+		}
+		// No "Ready" message when idle - cleaner UI
+	}
+
 	if m.loading {
-		s.WriteString(infoStyle.Render("Loading archives..."))
 		return s.String()
 	}

 	if m.err != nil {
-		s.WriteString(errorStyle.Render(fmt.Sprintf("[FAIL] Error: %v", m.err)))
+		s.WriteString(StatusErrorStyle.Render(fmt.Sprintf("[FAIL] Error: %v", m.err)))
 		s.WriteString("\n\n")
-		s.WriteString(infoStyle.Render("Press Esc to go back"))
+		s.WriteString(ShortcutStyle.Render("Press Esc to go back"))
 		return s.String()
 	}

 	// Summary
-	s.WriteString(infoStyle.Render(fmt.Sprintf("Total Archives: %d | Total Size: %s",
+	s.WriteString(LabelStyle.Render(fmt.Sprintf("Total Archives: %d | Total Size: %s",
 		len(m.archives), formatSize(m.totalSize))))
 	s.WriteString("\n\n")

 	// Archives list
 	if len(m.archives) == 0 {
-		s.WriteString(infoStyle.Render("No backup archives found"))
+		s.WriteString(StatusReadyStyle.Render("No backup archives found"))
 		s.WriteString("\n\n")
-		s.WriteString(infoStyle.Render("Press Esc to go back"))
+		s.WriteString(ShortcutStyle.Render("Press Esc to go back"))
 		return s.String()
 	}

-	// Column headers
-	s.WriteString(archiveHeaderStyle.Render(fmt.Sprintf("%-35s %-25s %-12s %-20s",
+	// Column headers with better alignment
+	s.WriteString(ListHeaderStyle.Render(fmt.Sprintf(" %-32s %-22s %10s %-16s",
 		"FILENAME", "FORMAT", "SIZE", "MODIFIED")))
 	s.WriteString("\n")
-	s.WriteString(strings.Repeat("-", 95))
+	s.WriteString(strings.Repeat("-", 90))
 	s.WriteString("\n")

 	// Show archives (limit to visible area)
@@ -199,27 +305,27 @@ func (m BackupManagerModel) View() string {
 	for i := start; i < end; i++ {
 		archive := m.archives[i]
 		cursor := " "
-		style := archiveNormalStyle
+		style := ListNormalStyle
 		if i == m.cursor {
-			cursor = ">"
-			style = archiveSelectedStyle
+			cursor = "> "
+			style = ListSelectedStyle
 		}

-		// Status icon
-		statusIcon := "[+]"
+		// Status icon - consistent 4-char width
+		statusIcon := " [+]"
 		if !archive.Valid {
-			statusIcon = "[-]"
-			style = archiveInvalidStyle
+			statusIcon = " [-]"
+			style = ItemInvalidStyle
 		} else if time.Since(archive.Modified) > 30*24*time.Hour {
-			statusIcon = "[WARN]"
+			statusIcon = " [!]"
 		}

-		filename := truncate(archive.Name, 33)
-		format := truncate(archive.Format.String(), 23)
+		filename := truncate(archive.Name, 32)
+		format := truncate(archive.Format.String(), 22)

-		line := fmt.Sprintf("%s %s %-33s %-23s %-10s %-19s",
+		line := fmt.Sprintf("%s%s %-32s %-22s %10s %-16s",
 			cursor,
 			statusIcon,
 			filename,
@@ -233,18 +339,83 @@ func (m BackupManagerModel) View() string {
 	// Footer
 	s.WriteString("\n")
-	if m.message != "" {
-		s.WriteString(infoStyle.Render(m.message))
-		s.WriteString("\n")
-	}
-	s.WriteString(infoStyle.Render(fmt.Sprintf("Selected: %d/%d", m.cursor+1, len(m.archives))))
-	s.WriteString("\n")
-	s.WriteString(infoStyle.Render("[KEY] ↑/↓: Navigate | r: Restore | v: Verify | d: Delete | i: Info | R: Refresh | Esc: Back"))
+	s.WriteString(StatusReadyStyle.Render(fmt.Sprintf("Selected: %d/%d", m.cursor+1, len(m.archives))))
+	s.WriteString("\n\n")
+
+	// Grouped keyboard shortcuts
+	s.WriteString(ShortcutStyle.Render("SHORTCUTS: Up/Down=Move | r=Restore | v=Verify | d=Delete | i=Info | R=Refresh | Esc=Back | q=Quit"))

 	return s.String()
 }

+// verifyArchiveCmd runs the SAME verification as restore safety checks
+// This ensures consistency between backup manager verify and restore preview
+func verifyArchiveCmd(archive ArchiveInfo) tea.Cmd {
+	return func() tea.Msg {
+		var issues []string
+
+		// 1. Run the same archive integrity check as restore
+		safety := restore.NewSafety(nil, nil) // Doesn't need config/log for validation
+		if err := safety.ValidateArchive(archive.Path); err != nil {
+			return verifyResultMsg{
+				archive: archive.Name,
+				valid:   false,
+				err:     nil,
+				details: fmt.Sprintf("Archive integrity: %v", err),
+			}
+		}
+
+		// 2. Run the same deep diagnosis as restore
+		diagnoser := restore.NewDiagnoser(nil, false)
+		diagResult, diagErr := diagnoser.DiagnoseFile(archive.Path)
+		if diagErr != nil {
+			return verifyResultMsg{
+				archive: archive.Name,
+				valid:   false,
+				err:     diagErr,
+				details: "Cannot diagnose archive",
+			}
+		}
+
+		if !diagResult.IsValid {
+			// Collect error details
+			if diagResult.IsTruncated {
+				issues = append(issues, "TRUNCATED")
+			}
+			if diagResult.IsCorrupted {
+				issues = append(issues, "CORRUPTED")
+			}
+			if len(diagResult.Errors) > 0 {
+				issues = append(issues, diagResult.Errors[0])
+			}
+			return verifyResultMsg{
+				archive: archive.Name,
+				valid:   false,
+				err:     nil,
+				details: strings.Join(issues, "; "),
+			}
+		}
+
+		// Build success details
+		details := "Verified"
+		if diagResult.Details != nil {
+			if diagResult.Details.TableCount > 0 {
+				details = fmt.Sprintf("%d databases in archive", diagResult.Details.TableCount)
+			} else if diagResult.Details.PgRestoreListable {
+				details = "pg_restore verified"
+			}
+		}
+
+		// Add any warnings
+		if len(diagResult.Warnings) > 0 {
+			details += fmt.Sprintf(" [%d warnings]", len(diagResult.Warnings))
+		}
+
+		return verifyResultMsg{archive: archive.Name, valid: true, err: nil, details: details}
+	}
+}
+
 // deleteArchive deletes a backup archive (to be called from confirmation)
 func deleteArchive(archivePath string) error {
 	return os.Remove(archivePath)
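The managerTickCmd/managerTickMsg pair above is the standard Bubble Tea tick loop: each tick advances the spinner frame and re-arms itself, while key handling still accepts Esc. A stripped-down, standalone sketch of that pattern (the frames and model here are illustrative, not the repo's code; only the 100 ms interval matches the diff):

```go
package main

import (
	"fmt"
	"time"

	tea "github.com/charmbracelet/bubbletea"
)

var frames = []string{"|", "/", "-", "\\"} // illustrative spinner frames

type tickMsg time.Time

func tick() tea.Cmd {
	return tea.Tick(100*time.Millisecond, func(t time.Time) tea.Msg { return tickMsg(t) })
}

type model struct{ frame int }

func (m model) Init() tea.Cmd { return tick() }

func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
	switch msg := msg.(type) {
	case tickMsg:
		m.frame = (m.frame + 1) % len(frames)
		return m, tick() // re-arm the tick, like managerTickCmd
	case tea.KeyMsg:
		if msg.String() == "q" || msg.String() == "esc" || msg.String() == "ctrl+c" {
			return m, tea.Quit // Esc always works, even while "busy"
		}
	}
	return m, nil
}

func (m model) View() string {
	return fmt.Sprintf("%s working... (q/esc to quit)\n", frames[m.frame])
}

func main() {
	if _, err := tea.NewProgram(model{}).Run(); err != nil {
		fmt.Println("error:", err)
	}
}
```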

View File

@@ -204,124 +204,132 @@ func (m DiagnoseViewModel) View() string {
 func (m DiagnoseViewModel) renderSingleResult(result *restore.DiagnoseResult) string {
 	var s strings.Builder

-	// Status
-	s.WriteString(strings.Repeat("-", 60))
-	s.WriteString("\n")
+	// Status Box
+	s.WriteString("+--[ VALIDATION STATUS ]" + strings.Repeat("-", 37) + "+\n")
 	if result.IsValid {
-		s.WriteString(diagnosePassStyle.Render("[OK] STATUS: VALID"))
+		s.WriteString("| " + diagnosePassStyle.Render("[OK] VALID - Archive passed all checks") + strings.Repeat(" ", 18) + "|\n")
 	} else {
-		s.WriteString(diagnoseFailStyle.Render("[FAIL] STATUS: INVALID"))
+		s.WriteString("| " + diagnoseFailStyle.Render("[FAIL] INVALID - Archive has problems") + strings.Repeat(" ", 19) + "|\n")
 	}
-	s.WriteString("\n")
 	if result.IsTruncated {
-		s.WriteString(diagnoseFailStyle.Render("[WARN] TRUNCATED: File appears incomplete"))
-		s.WriteString("\n")
+		s.WriteString("| " + diagnoseFailStyle.Render("[!] TRUNCATED - File is incomplete") + strings.Repeat(" ", 22) + "|\n")
 	}
 	if result.IsCorrupted {
-		s.WriteString(diagnoseFailStyle.Render("[WARN] CORRUPTED: File structure is damaged"))
-		s.WriteString("\n")
+		s.WriteString("| " + diagnoseFailStyle.Render("[!] CORRUPTED - File structure damaged") + strings.Repeat(" ", 18) + "|\n")
 	}
-	s.WriteString(strings.Repeat("-", 60))
-	s.WriteString("\n\n")
+	s.WriteString("+" + strings.Repeat("-", 60) + "+\n\n")

-	// Details
+	// Details Box
 	if result.Details != nil {
-		s.WriteString(diagnoseHeaderStyle.Render("[STATS] DETAILS:"))
-		s.WriteString("\n")
+		s.WriteString("+--[ DETAILS ]" + strings.Repeat("-", 46) + "+\n")
 		if result.Details.HasPGDMPSignature {
-			s.WriteString(diagnosePassStyle.Render(" [+] "))
-			s.WriteString("Has PGDMP signature (custom format)\n")
+			s.WriteString("| " + diagnosePassStyle.Render("[+]") + " PostgreSQL custom format (PGDMP)" + strings.Repeat(" ", 20) + "|\n")
 		}
 		if result.Details.HasSQLHeader {
-			s.WriteString(diagnosePassStyle.Render(" [+] "))
-			s.WriteString("Has PostgreSQL SQL header\n")
+			s.WriteString("| " + diagnosePassStyle.Render("[+]") + " PostgreSQL SQL header found" + strings.Repeat(" ", 25) + "|\n")
 		}
 		if result.Details.GzipValid {
-			s.WriteString(diagnosePassStyle.Render(" [+] "))
-			s.WriteString("Gzip compression valid\n")
+			s.WriteString("| " + diagnosePassStyle.Render("[+]") + " Gzip compression valid" + strings.Repeat(" ", 30) + "|\n")
 		}
 		if result.Details.PgRestoreListable {
-			s.WriteString(diagnosePassStyle.Render(" [+] "))
-			s.WriteString(fmt.Sprintf("pg_restore can list contents (%d tables)\n", result.Details.TableCount))
+			tableInfo := fmt.Sprintf(" (%d tables)", result.Details.TableCount)
+			padding := 36 - len(tableInfo)
+			if padding < 0 {
+				padding = 0
+			}
+			s.WriteString("| " + diagnosePassStyle.Render("[+]") + " pg_restore can list contents" + tableInfo + strings.Repeat(" ", padding) + "|\n")
 		}
 		if result.Details.CopyBlockCount > 0 {
-			s.WriteString(diagnoseInfoStyle.Render(" - "))
-			s.WriteString(fmt.Sprintf("Contains %d COPY blocks\n", result.Details.CopyBlockCount))
+			blockInfo := fmt.Sprintf("%d COPY blocks found", result.Details.CopyBlockCount)
+			padding := 50 - len(blockInfo)
+			if padding < 0 {
+				padding = 0
+			}
+			s.WriteString("| [-] " + blockInfo + strings.Repeat(" ", padding) + "|\n")
 		}
 		if result.Details.UnterminatedCopy {
-			s.WriteString(diagnoseFailStyle.Render(" [-] "))
-			s.WriteString(fmt.Sprintf("Unterminated COPY block: %s (line %d)\n",
-				result.Details.LastCopyTable, result.Details.LastCopyLineNumber))
+			s.WriteString("| " + diagnoseFailStyle.Render("[-]") + " Unterminated COPY: " + truncate(result.Details.LastCopyTable, 30) + strings.Repeat(" ", 5) + "|\n")
 		}
 		if result.Details.ProperlyTerminated {
-			s.WriteString(diagnosePassStyle.Render(" [+] "))
-			s.WriteString("All COPY blocks properly terminated\n")
+			s.WriteString("| " + diagnosePassStyle.Render("[+]") + " All COPY blocks properly terminated" + strings.Repeat(" ", 17) + "|\n")
 		}
 		if result.Details.ExpandedSize > 0 {
-			s.WriteString(diagnoseInfoStyle.Render(" - "))
-			s.WriteString(fmt.Sprintf("Expanded size: %s (ratio: %.1fx)\n",
-				formatSize(result.Details.ExpandedSize), result.Details.CompressionRatio))
+			sizeInfo := fmt.Sprintf("Expanded: %s (%.1fx)", formatSize(result.Details.ExpandedSize), result.Details.CompressionRatio)
+			padding := 50 - len(sizeInfo)
+			if padding < 0 {
+				padding = 0
+			}
+			s.WriteString("| [-] " + sizeInfo + strings.Repeat(" ", padding) + "|\n")
 		}
+		s.WriteString("+" + strings.Repeat("-", 60) + "+\n")
 	}

-	// Errors
+	// Errors Box
 	if len(result.Errors) > 0 {
-		s.WriteString("\n")
-		s.WriteString(diagnoseFailStyle.Render("[FAIL] ERRORS:"))
-		s.WriteString("\n")
+		s.WriteString("\n+--[ ERRORS ]" + strings.Repeat("-", 47) + "+\n")
 		for i, e := range result.Errors {
 			if i >= 5 {
-				s.WriteString(diagnoseInfoStyle.Render(fmt.Sprintf(" ... and %d more\n", len(result.Errors)-5)))
+				remaining := fmt.Sprintf("... and %d more errors", len(result.Errors)-5)
+				padding := 56 - len(remaining)
+				s.WriteString("| " + remaining + strings.Repeat(" ", padding) + "|\n")
 				break
 			}
-			s.WriteString(diagnoseFailStyle.Render(" - "))
-			s.WriteString(truncate(e, 70))
-			s.WriteString("\n")
+			errText := truncate(e, 54)
+			padding := 56 - len(errText)
+			if padding < 0 {
+				padding = 0
+			}
+			s.WriteString("| " + errText + strings.Repeat(" ", padding) + "|\n")
 		}
+		s.WriteString("+" + strings.Repeat("-", 60) + "+\n")
 	}

-	// Warnings
+	// Warnings Box
 	if len(result.Warnings) > 0 {
-		s.WriteString("\n")
-		s.WriteString(diagnoseWarnStyle.Render("[WARN] WARNINGS:"))
-		s.WriteString("\n")
+		s.WriteString("\n+--[ WARNINGS ]" + strings.Repeat("-", 45) + "+\n")
 		for i, w := range result.Warnings {
 			if i >= 3 {
-				s.WriteString(diagnoseInfoStyle.Render(fmt.Sprintf(" ... and %d more\n", len(result.Warnings)-3)))
+				remaining := fmt.Sprintf("... and %d more warnings", len(result.Warnings)-3)
+				padding := 56 - len(remaining)
+				s.WriteString("| " + remaining + strings.Repeat(" ", padding) + "|\n")
 				break
 			}
-			s.WriteString(diagnoseWarnStyle.Render(" - "))
-			s.WriteString(truncate(w, 70))
-			s.WriteString("\n")
+			warnText := truncate(w, 54)
+			padding := 56 - len(warnText)
+			if padding < 0 {
+				padding = 0
+			}
+			s.WriteString("| " + warnText + strings.Repeat(" ", padding) + "|\n")
 		}
+		s.WriteString("+" + strings.Repeat("-", 60) + "+\n")
 	}

-	// Recommendations
+	// Recommendations Box
 	if !result.IsValid {
-		s.WriteString("\n")
-		s.WriteString(diagnoseHeaderStyle.Render("[HINT] RECOMMENDATIONS:"))
-		s.WriteString("\n")
+		s.WriteString("\n+--[ RECOMMENDATIONS ]" + strings.Repeat("-", 38) + "+\n")
 		if result.IsTruncated {
-			s.WriteString(" 1. Re-run the backup process for this database\n")
-			s.WriteString(" 2. Check disk space on backup server\n")
-			s.WriteString(" 3. Verify network stability for remote backups\n")
+			s.WriteString("| 1. Re-run backup with current version (v3.42.12+) |\n")
+			s.WriteString("| 2. Check disk space on backup server |\n")
+			s.WriteString("| 3. Verify network stability for remote backups |\n")
 		}
 		if result.IsCorrupted {
-			s.WriteString(" 1. Verify backup was transferred completely\n")
-			s.WriteString(" 2. Try restoring from a previous backup\n")
+			s.WriteString("| 1. Verify backup was transferred completely |\n")
+			s.WriteString("| 2. Try restoring from a previous backup |\n")
 		}
+		s.WriteString("+" + strings.Repeat("-", 60) + "+\n")
 	}

 	return s.String()
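The box rendering above repeats the same padding arithmetic for every row: compute the padding, clamp it at zero, then append spaces and a closing pipe. A tiny standalone sketch of that layout logic with a hypothetical boxLine helper (not a function in the repo); note that len() counts bytes, which is fine for the ASCII status text used here:

```go
package main

import (
	"fmt"
	"strings"
)

// boxLine pads text to a fixed inner width and frames it with "|" borders.
// Hypothetical helper: the diff above inlines this padding arithmetic instead.
func boxLine(text string, innerWidth int) string {
	if len(text) > innerWidth {
		text = text[:innerWidth]
	}
	return "| " + text + strings.Repeat(" ", innerWidth-len(text)) + " |"
}

func main() {
	const inner = 56 // inner text width; full line width is inner + 4
	fmt.Println("+--[ DETAILS ]" + strings.Repeat("-", inner-11) + "+")
	fmt.Println(boxLine("[+] Gzip compression valid", inner))
	fmt.Println(boxLine("[+] pg_restore can list contents (42 tables)", inner))
	fmt.Println("+" + strings.Repeat("-", inner+2) + "+")
}
```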

View File

@@ -146,11 +146,10 @@ func (m StatusViewModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 		return m, nil

 	case tea.KeyMsg:
-		if !m.loading {
-			switch msg.String() {
-			case "ctrl+c", "q", "esc", "enter":
-				return m.parent, nil
-			}
+		// Always allow escape, even during loading
+		switch msg.String() {
+		case "ctrl+c", "q", "esc", "enter":
+			return m.parent, nil
 		}
 	}
 }

internal/tui/styles.go (new file, 133 lines)

@@ -0,0 +1,133 @@
package tui
import "github.com/charmbracelet/lipgloss"
// =============================================================================
// GLOBAL TUI STYLE DEFINITIONS
// =============================================================================
// Design Language:
// - Bold text for labels and headers
// - Colors for semantic meaning (green=success, red=error, yellow=warning)
// - No emoticons - use simple text prefixes like [OK], [FAIL], [!]
// - No boxes for inline status - use bold+color accents
// - Consistent color palette across all views
// =============================================================================
// Color Palette (ANSI 256 colors for terminal compatibility)
const (
ColorWhite = lipgloss.Color("15") // Bright white
ColorGray = lipgloss.Color("250") // Light gray
ColorDim = lipgloss.Color("244") // Dim gray
ColorDimmer = lipgloss.Color("240") // Darker gray
ColorSuccess = lipgloss.Color("2") // Green
ColorError = lipgloss.Color("1") // Red
ColorWarning = lipgloss.Color("3") // Yellow
ColorInfo = lipgloss.Color("6") // Cyan
ColorAccent = lipgloss.Color("4") // Blue
)
// =============================================================================
// TITLE & HEADER STYLES
// =============================================================================
// TitleStyle - main view title (bold white on gray background)
var TitleStyle = lipgloss.NewStyle().
Bold(true).
Foreground(ColorWhite).
Background(ColorDimmer).
Padding(0, 1)
// HeaderStyle - section headers (bold gray)
var HeaderStyle = lipgloss.NewStyle().
Bold(true).
Foreground(ColorDim)
// LabelStyle - field labels (bold cyan)
var LabelStyle = lipgloss.NewStyle().
Bold(true).
Foreground(ColorInfo)
// =============================================================================
// STATUS STYLES
// =============================================================================
// StatusReadyStyle - idle/ready state (dim)
var StatusReadyStyle = lipgloss.NewStyle().
Foreground(ColorDim)
// StatusActiveStyle - operation in progress (bold cyan)
var StatusActiveStyle = lipgloss.NewStyle().
Bold(true).
Foreground(ColorInfo)
// StatusSuccessStyle - success messages (bold green)
var StatusSuccessStyle = lipgloss.NewStyle().
Bold(true).
Foreground(ColorSuccess)
// StatusErrorStyle - error messages (bold red)
var StatusErrorStyle = lipgloss.NewStyle().
Bold(true).
Foreground(ColorError)
// StatusWarningStyle - warning messages (bold yellow)
var StatusWarningStyle = lipgloss.NewStyle().
Bold(true).
Foreground(ColorWarning)
// =============================================================================
// LIST & TABLE STYLES
// =============================================================================
// ListNormalStyle - unselected list items
var ListNormalStyle = lipgloss.NewStyle().
Foreground(ColorGray)
// ListSelectedStyle - selected/cursor item (bold white)
var ListSelectedStyle = lipgloss.NewStyle().
Foreground(ColorWhite).
Bold(true)
// ListHeaderStyle - column headers (bold dim)
var ListHeaderStyle = lipgloss.NewStyle().
Bold(true).
Foreground(ColorDim)
// =============================================================================
// ITEM STATUS STYLES
// =============================================================================
// ItemValidStyle - valid/OK items (green)
var ItemValidStyle = lipgloss.NewStyle().
Foreground(ColorSuccess)
// ItemInvalidStyle - invalid/failed items (red)
var ItemInvalidStyle = lipgloss.NewStyle().
Foreground(ColorError)
// ItemOldStyle - old/stale items (yellow)
var ItemOldStyle = lipgloss.NewStyle().
Foreground(ColorWarning)
// =============================================================================
// SHORTCUT STYLE
// =============================================================================
// ShortcutStyle - keyboard shortcuts footer (dim)
var ShortcutStyle = lipgloss.NewStyle().
Foreground(ColorDim)
// =============================================================================
// HELPER PREFIXES (no emoticons)
// =============================================================================
const (
PrefixOK = "[OK]"
PrefixFail = "[FAIL]"
PrefixWarn = "[!]"
PrefixInfo = "[i]"
PrefixPlus = "[+]"
PrefixMinus = "[-]"
PrefixArrow = ">"
PrefixSpinner = "" // Spinner character added dynamically
)