Compare commits

...

6 Commits

Author SHA1 Message Date
7711a206ab Fix panic in TUI backup manager verify when logger is nil
- Add nil checks before logger calls in diagnose.go (8 places)
- Add nil checks before logger calls in safety.go (2 places)
- Fixes crash when pressing 'v' to verify backup in interactive menu
2026-01-14 17:18:37 +01:00
ba6e8a2b39 v3.42.37: Remove ASCII boxes from diagnose view
Cleaner output without box drawing characters:
- [STATUS] Validation section
- [INFO] Details section
- [FAIL] Errors section
- [WARN] Warnings section
- [HINT] Recommendations section
2026-01-14 17:05:43 +01:00
ec5e89eab7 v3.42.36: Fix remaining TUI prefix inconsistencies
- diagnose_view.go: Add [STATS], [LIST], [INFO] section prefixes
- status.go: Add [CONN], [INFO] section prefixes
- settings.go: [LOG] → [INFO] for configuration summary
- menu.go: [DB] → [SELECT]/[CHECK] for selectors
2026-01-14 16:59:24 +01:00
e24d7ab49f v3.42.35: Standardize TUI title prefixes for consistency
- [CHECK] for diagnosis, previews, validations
- [STATS] for status, history, metrics views
- [SELECT] for selection/browsing screens
- [EXEC] for execution screens (backup/restore)
- [CONFIG] for settings/configuration

Fixed 8 files with inconsistent prefixes:
- diagnose_view.go: [SEARCH] → [CHECK]
- settings.go: [CFG] → [CONFIG]
- menu.go: [DB] → clean title
- history.go: [HISTORY] → [STATS]
- backup_manager.go: [DB] → [SELECT]
- archive_browser.go: [PKG]/[SEARCH] → [SELECT]
- restore_preview.go: added [CHECK]
- restore_exec.go: [RESTORE] → [EXEC]
2026-01-14 16:36:35 +01:00
721e53fe6a v3.42.34: Add spf13/afero for filesystem abstraction
- New internal/fs package for testable filesystem operations
- In-memory filesystem support for unit testing without disk I/O
- Swappable global FS: SetFS(afero.NewMemMapFs())
- Wrapper functions: ReadFile, WriteFile, Mkdir, Walk, Glob, etc.
- Testing helpers: WithMemFs(), SetupTestDir()
- Comprehensive test suite demonstrating usage
- Upgraded afero from v1.10.0 to v1.15.0
2026-01-14 16:24:12 +01:00
4e09066aa5 v3.42.33: Add cenkalti/backoff for exponential backoff retry
- Exponential backoff retry for all cloud operations (S3, Azure, GCS)
- RetryConfig presets: Default (5x), Aggressive (10x), Quick (3x)
- Smart error classification: IsPermanentError, IsRetryableError
- Automatic file position reset on upload retry
- Retry logging with wait duration
- Multipart uploads use aggressive retry (more tolerance)
2026-01-14 16:19:40 +01:00
24 changed files with 1077 additions and 292 deletions

View File

@@ -5,6 +5,48 @@ All notable changes to dbbackup will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [3.42.34] - 2026-01-14 "Filesystem Abstraction"
### Added - spf13/afero for Filesystem Abstraction
- **New `internal/fs` package** for testable filesystem operations
- **In-memory filesystem** for unit testing without disk I/O
- **Global FS interface** that can be swapped for testing:
```go
fs.SetFS(afero.NewMemMapFs()) // Use memory
fs.ResetFS() // Back to real disk
```
- **Wrapper functions** for all common file operations:
- `ReadFile`, `WriteFile`, `Create`, `Open`, `Remove`, `RemoveAll`
- `Mkdir`, `MkdirAll`, `ReadDir`, `Walk`, `Glob`
- `Exists`, `DirExists`, `IsDir`, `IsEmpty`
- `TempDir`, `TempFile`, `CopyFile`, `FileSize`
- **Testing helpers**:
- `WithMemFs(fn)` - Execute function with temp in-memory FS
- `SetupTestDir(files)` - Create test directory structure
- **Comprehensive test suite** demonstrating usage
### Changed
- Upgraded afero from v1.10.0 to v1.15.0
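
A minimal test sketch of the pattern (the `dbbackup` import path is assumed; only exported helpers listed above are used):

```go
package fs_test

import (
	"testing"

	"github.com/spf13/afero"

	"dbbackup/internal/fs" // module path assumed
)

func TestGlobFindsSeededBackups(t *testing.T) {
	fs.WithMemFs(func(mem afero.Fs) {
		// Seed two fake dumps purely in memory; the real disk is never touched.
		_ = fs.WriteFile("/backups/a.dump", []byte("a"), 0644)
		_ = fs.WriteFile("/backups/b.dump", []byte("b"), 0644)

		matches, err := fs.Glob("/backups/*.dump")
		if err != nil || len(matches) != 2 {
			t.Fatalf("expected 2 dumps, got %v (err=%v)", matches, err)
		}
	})
}
```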
## [3.42.33] - 2026-01-14 "Exponential Backoff Retry"
### Added - cenkalti/backoff for Cloud Operation Retry
- **Exponential backoff retry** for all cloud operations (S3, Azure, GCS)
- **Retry configurations**:
- `DefaultRetryConfig()` - 5 retries, 500ms→30s backoff, 5 min max
- `AggressiveRetryConfig()` - 10 retries, 1s→60s backoff, 15 min max
- `QuickRetryConfig()` - 3 retries, 100ms→5s backoff, 30s max
- **Smart error classification**:
- `IsPermanentError()` - Auth/bucket errors (no retry)
- `IsRetryableError()` - Timeout/network errors (retry)
- **Retry logging** - Each retry attempt is logged with wait duration
### Changed
- S3 simple upload, multipart upload, download now retry on transient failures
- Azure simple upload, download now retry on transient failures
- GCS upload, download now retry on transient failures
- Large file multipart uploads use `AggressiveRetryConfig()` (more retries)
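
For illustration, a sketch of wrapping an operation with one of these presets (`WithRetryConfig` as defined in `internal/cloud/retry.go`; the import path is assumed):

```go
package main

import (
	"context"
	"errors"
	"fmt"

	"dbbackup/internal/cloud" // module path assumed
)

func main() {
	attempts := 0
	// QuickRetryConfig: 3 retries, 100ms→5s backoff, 30s ceiling.
	err := cloud.WithRetryConfig(context.Background(), cloud.QuickRetryConfig(), "probe bucket", func() error {
		attempts++
		if attempts < 3 {
			return errors.New("connection reset") // matches a retryable pattern
		}
		return nil // third attempt succeeds
	})
	fmt.Println("attempts:", attempts, "err:", err)
}
```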
## [3.42.32] - 2026-01-14 "Cross-Platform Colors"
### Added - fatih/color for Cross-Platform Terminal Colors

View File

@@ -56,7 +56,7 @@ Download from [releases](https://git.uuxo.net/UUXO/dbbackup/releases):
```bash
# Linux x86_64
-wget https://git.uuxo.net/UUXO/dbbackup/releases/download/v3.42.1/dbbackup-linux-amd64
+wget https://git.uuxo.net/UUXO/dbbackup/releases/download/v3.42.35/dbbackup-linux-amd64
chmod +x dbbackup-linux-amd64
sudo mv dbbackup-linux-amd64 /usr/local/bin/dbbackup
```

View File

@@ -3,9 +3,9 @@
This directory contains pre-compiled binaries for the DB Backup Tool across multiple platforms and architectures.

## Build Information
-- **Version**: 3.42.31
-- **Build Time**: 2026-01-14_15:07:12_UTC
-- **Git Commit**: dc6dfd8
+- **Version**: 3.42.34
+- **Build Time**: 2026-01-14_16:06:08_UTC
+- **Git Commit**: ba6e8a2

## Recent Updates (v1.1.0)
- ✅ Fixed TUI progress display with line-by-line output

go.mod (2 changes)
View File

@@ -57,6 +57,7 @@ require (
	github.com/aws/aws-sdk-go-v2/service/sts v1.41.2 // indirect
	github.com/aws/smithy-go v1.23.2 // indirect
	github.com/aymanbagabas/go-osc52/v2 v2.0.1 // indirect
+	github.com/cenkalti/backoff/v4 v4.3.0 // indirect
	github.com/cespare/xxhash/v2 v2.3.0 // indirect
	github.com/charmbracelet/colorprofile v0.2.3-0.20250311203215-f60798e515dc // indirect
	github.com/charmbracelet/x/ansi v0.10.1 // indirect
@@ -96,6 +97,7 @@ require (
	github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c // indirect
	github.com/rivo/uniseg v0.4.7 // indirect
	github.com/schollz/progressbar/v3 v3.19.0 // indirect
+	github.com/spf13/afero v1.15.0 // indirect
	github.com/spiffe/go-spiffe/v2 v2.5.0 // indirect
	github.com/tklauser/go-sysconf v0.3.12 // indirect
	github.com/tklauser/numcpus v0.6.1 // indirect

go.sum (4 changes)
View File

@@ -84,6 +84,8 @@ github.com/aws/smithy-go v1.23.2 h1:Crv0eatJUQhaManss33hS5r40CG3ZFH+21XSkqMrIUM=
github.com/aws/smithy-go v1.23.2/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
github.com/aymanbagabas/go-osc52/v2 v2.0.1 h1:HwpRHbFMcZLEVr42D4p7XBqjyuxQH5SMiErDT4WkJ2k=
github.com/aymanbagabas/go-osc52/v2 v2.0.1/go.mod h1:uYgXzlJ7ZpABp8OJ+exZzJJhRNQ2ASbcXHWsFqH8hp8=
+github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8=
+github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/charmbracelet/bubbles v0.21.0 h1:9TdC97SdRVg/1aaXNVWfFH3nnLAwOXr8Fn6u6mfQdFs=
@@ -209,6 +211,8 @@ github.com/shirou/gopsutil/v3 v3.24.5 h1:i0t8kL+kQTvpAYToeuiVk3TgDeKOFioZO3Ztz/i
github.com/shirou/gopsutil/v3 v3.24.5/go.mod h1:bsoOS1aStSs9ErQ1WWfxllSeS1K5D+U30r2NfcubMVk=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
+github.com/spf13/afero v1.15.0 h1:b/YBCLWAJdFWJTN9cLhiXXcD7mzKn9Dm86dNnfyQw1I=
+github.com/spf13/afero v1.15.0/go.mod h1:NC2ByUVxtQs4b3sIUphxK0NioZnmxgyCrfzeuq8lxMg=
github.com/spf13/cobra v1.10.1 h1:lJeBwCfmrnXthfAupyUTzJ/J4Nc1RsHC/mSRU2dll/s=
github.com/spf13/cobra v1.10.1/go.mod h1:7SmJGaTHFVBY0jW4NXGluQoLvhqFQM+6XSKD+P4XaB0=
github.com/spf13/pflag v1.0.9 h1:9exaQaMOCwffKiiiYk6/BndUBv+iRViNW+4lEMi0PvY=

View File

@@ -151,8 +151,14 @@ func (a *AzureBackend) Upload(ctx context.Context, localPath, remotePath string,
	return a.uploadSimple(ctx, file, blobName, fileSize, progress)
}

-// uploadSimple uploads a file using simple upload (single request)
+// uploadSimple uploads a file using simple upload (single request) with retry
func (a *AzureBackend) uploadSimple(ctx context.Context, file *os.File, blobName string, fileSize int64, progress ProgressCallback) error {
+	return RetryOperationWithNotify(ctx, DefaultRetryConfig(), func() error {
+		// Reset file position for retry
+		if _, err := file.Seek(0, 0); err != nil {
+			return fmt.Errorf("failed to reset file position: %w", err)
+		}
	blockBlobClient := a.client.ServiceClient().NewContainerClient(a.containerName).NewBlockBlobClient(blobName)

	// Wrap reader with progress tracking
@@ -182,6 +188,9 @@ func (a *AzureBackend) uploadSimple(ctx context.Context, file *os.File, blobName
	}

	return nil
+	}, func(err error, duration time.Duration) {
+		fmt.Printf("[Azure] Upload retry in %v: %v\n", duration, err)
+	})
}

// uploadBlocks uploads a file using block blob staging (for large files)
@@ -251,7 +260,7 @@ func (a *AzureBackend) uploadBlocks(ctx context.Context, file *os.File, blobName
	return nil
}

-// Download downloads a file from Azure Blob Storage
+// Download downloads a file from Azure Blob Storage with retry
func (a *AzureBackend) Download(ctx context.Context, remotePath, localPath string, progress ProgressCallback) error {
	blobName := strings.TrimPrefix(remotePath, "/")
	blockBlobClient := a.client.ServiceClient().NewContainerClient(a.containerName).NewBlockBlobClient(blobName)
@@ -264,6 +273,7 @@ func (a *AzureBackend) Download(ctx context.Context, remotePath, localPath strin
	fileSize := *props.ContentLength

+	return RetryOperationWithNotify(ctx, DefaultRetryConfig(), func() error {
	// Download blob
	resp, err := blockBlobClient.DownloadStream(ctx, nil)
	if err != nil {
@@ -271,7 +281,7 @@ func (a *AzureBackend) Download(ctx context.Context, remotePath, localPath strin
	}
	defer resp.Body.Close()

-	// Create local file
+	// Create/truncate local file
	file, err := os.Create(localPath)
	if err != nil {
		return fmt.Errorf("failed to create file: %w", err)
@@ -288,6 +298,9 @@ func (a *AzureBackend) Download(ctx context.Context, remotePath, localPath strin
	}

	return nil
+	}, func(err error, duration time.Duration) {
+		fmt.Printf("[Azure] Download retry in %v: %v\n", duration, err)
+	})
}

// Delete deletes a file from Azure Blob Storage

View File

@@ -89,7 +89,7 @@ func (g *GCSBackend) Name() string {
return "gcs" return "gcs"
} }
// Upload uploads a file to Google Cloud Storage // Upload uploads a file to Google Cloud Storage with retry
func (g *GCSBackend) Upload(ctx context.Context, localPath, remotePath string, progress ProgressCallback) error { func (g *GCSBackend) Upload(ctx context.Context, localPath, remotePath string, progress ProgressCallback) error {
file, err := os.Open(localPath) file, err := os.Open(localPath)
if err != nil { if err != nil {
@@ -106,6 +106,12 @@ func (g *GCSBackend) Upload(ctx context.Context, localPath, remotePath string, p
// Remove leading slash from remote path // Remove leading slash from remote path
objectName := strings.TrimPrefix(remotePath, "/") objectName := strings.TrimPrefix(remotePath, "/")
return RetryOperationWithNotify(ctx, DefaultRetryConfig(), func() error {
// Reset file position for retry
if _, err := file.Seek(0, 0); err != nil {
return fmt.Errorf("failed to reset file position: %w", err)
}
bucket := g.client.Bucket(g.bucketName) bucket := g.client.Bucket(g.bucketName)
object := bucket.Object(objectName) object := bucket.Object(objectName)
@@ -142,9 +148,12 @@ func (g *GCSBackend) Upload(ctx context.Context, localPath, remotePath string, p
} }
return nil return nil
}, func(err error, duration time.Duration) {
fmt.Printf("[GCS] Upload retry in %v: %v\n", duration, err)
})
} }
// Download downloads a file from Google Cloud Storage // Download downloads a file from Google Cloud Storage with retry
func (g *GCSBackend) Download(ctx context.Context, remotePath, localPath string, progress ProgressCallback) error { func (g *GCSBackend) Download(ctx context.Context, remotePath, localPath string, progress ProgressCallback) error {
objectName := strings.TrimPrefix(remotePath, "/") objectName := strings.TrimPrefix(remotePath, "/")
@@ -159,6 +168,7 @@ func (g *GCSBackend) Download(ctx context.Context, remotePath, localPath string,
fileSize := attrs.Size fileSize := attrs.Size
return RetryOperationWithNotify(ctx, DefaultRetryConfig(), func() error {
// Create reader // Create reader
reader, err := object.NewReader(ctx) reader, err := object.NewReader(ctx)
if err != nil { if err != nil {
@@ -166,7 +176,7 @@ func (g *GCSBackend) Download(ctx context.Context, remotePath, localPath string,
} }
defer reader.Close() defer reader.Close()
// Create local file // Create/truncate local file
file, err := os.Create(localPath) file, err := os.Create(localPath)
if err != nil { if err != nil {
return fmt.Errorf("failed to create file: %w", err) return fmt.Errorf("failed to create file: %w", err)
@@ -183,6 +193,9 @@ func (g *GCSBackend) Download(ctx context.Context, remotePath, localPath string,
} }
return nil return nil
}, func(err error, duration time.Duration) {
fmt.Printf("[GCS] Download retry in %v: %v\n", duration, err)
})
} }
// Delete deletes a file from Google Cloud Storage // Delete deletes a file from Google Cloud Storage

internal/cloud/retry.go (new file, 257 lines)
View File

@@ -0,0 +1,257 @@
package cloud

import (
	"context"
	"fmt"
	"net"
	"strings"
	"time"

	"github.com/cenkalti/backoff/v4"
)

// RetryConfig configures retry behavior
type RetryConfig struct {
	MaxRetries      int           // Maximum number of retries (0 = unlimited)
	InitialInterval time.Duration // Initial backoff interval
	MaxInterval     time.Duration // Maximum backoff interval
	MaxElapsedTime  time.Duration // Maximum total time for retries
	Multiplier      float64       // Backoff multiplier
}

// DefaultRetryConfig returns sensible defaults for cloud operations
func DefaultRetryConfig() *RetryConfig {
	return &RetryConfig{
		MaxRetries:      5,
		InitialInterval: 500 * time.Millisecond,
		MaxInterval:     30 * time.Second,
		MaxElapsedTime:  5 * time.Minute,
		Multiplier:      2.0,
	}
}

// AggressiveRetryConfig returns config for critical operations that need more retries
func AggressiveRetryConfig() *RetryConfig {
	return &RetryConfig{
		MaxRetries:      10,
		InitialInterval: 1 * time.Second,
		MaxInterval:     60 * time.Second,
		MaxElapsedTime:  15 * time.Minute,
		Multiplier:      1.5,
	}
}

// QuickRetryConfig returns config for operations that should fail fast
func QuickRetryConfig() *RetryConfig {
	return &RetryConfig{
		MaxRetries:      3,
		InitialInterval: 100 * time.Millisecond,
		MaxInterval:     5 * time.Second,
		MaxElapsedTime:  30 * time.Second,
		Multiplier:      2.0,
	}
}

// RetryOperation executes an operation with exponential backoff retry
func RetryOperation(ctx context.Context, cfg *RetryConfig, operation func() error) error {
	if cfg == nil {
		cfg = DefaultRetryConfig()
	}

	// Create exponential backoff
	expBackoff := backoff.NewExponentialBackOff()
	expBackoff.InitialInterval = cfg.InitialInterval
	expBackoff.MaxInterval = cfg.MaxInterval
	expBackoff.MaxElapsedTime = cfg.MaxElapsedTime
	expBackoff.Multiplier = cfg.Multiplier
	expBackoff.Reset()

	// Wrap with max retries if specified
	var b backoff.BackOff = expBackoff
	if cfg.MaxRetries > 0 {
		b = backoff.WithMaxRetries(expBackoff, uint64(cfg.MaxRetries))
	}

	// Add context support
	b = backoff.WithContext(b, ctx)

	// Track attempts for logging
	attempt := 0

	// Wrap operation to handle permanent vs retryable errors
	wrappedOp := func() error {
		attempt++
		err := operation()
		if err == nil {
			return nil
		}
		// Check if error is permanent (should not retry)
		if IsPermanentError(err) {
			return backoff.Permanent(err)
		}
		return err
	}

	return backoff.Retry(wrappedOp, b)
}

// RetryOperationWithNotify executes an operation with retry and calls notify on each retry
func RetryOperationWithNotify(ctx context.Context, cfg *RetryConfig, operation func() error, notify func(err error, duration time.Duration)) error {
	if cfg == nil {
		cfg = DefaultRetryConfig()
	}

	// Create exponential backoff
	expBackoff := backoff.NewExponentialBackOff()
	expBackoff.InitialInterval = cfg.InitialInterval
	expBackoff.MaxInterval = cfg.MaxInterval
	expBackoff.MaxElapsedTime = cfg.MaxElapsedTime
	expBackoff.Multiplier = cfg.Multiplier
	expBackoff.Reset()

	// Wrap with max retries if specified
	var b backoff.BackOff = expBackoff
	if cfg.MaxRetries > 0 {
		b = backoff.WithMaxRetries(expBackoff, uint64(cfg.MaxRetries))
	}

	// Add context support
	b = backoff.WithContext(b, ctx)

	// Wrap operation to handle permanent vs retryable errors
	wrappedOp := func() error {
		err := operation()
		if err == nil {
			return nil
		}
		// Check if error is permanent (should not retry)
		if IsPermanentError(err) {
			return backoff.Permanent(err)
		}
		return err
	}

	return backoff.RetryNotify(wrappedOp, b, notify)
}

// IsPermanentError returns true if the error should not be retried
func IsPermanentError(err error) bool {
	if err == nil {
		return false
	}

	errStr := strings.ToLower(err.Error())

	// Authentication/authorization errors - don't retry
	permanentPatterns := []string{
		"access denied",
		"forbidden",
		"unauthorized",
		"invalid credentials",
		"invalid access key",
		"invalid secret",
		"no such bucket",
		"bucket not found",
		"container not found",
		"nosuchbucket",
		"nosuchkey",
		"invalid argument",
		"malformed",
		"invalid request",
		"permission denied",
		"access control",
		"policy",
	}

	for _, pattern := range permanentPatterns {
		if strings.Contains(errStr, pattern) {
			return true
		}
	}

	return false
}

// IsRetryableError returns true if the error is transient and should be retried
func IsRetryableError(err error) bool {
	if err == nil {
		return false
	}

	// Network errors are typically retryable
	var netErr net.Error
	if ok := isNetError(err, &netErr); ok {
		return netErr.Timeout() || netErr.Temporary()
	}

	errStr := strings.ToLower(err.Error())

	// Transient errors - should retry
	retryablePatterns := []string{
		"timeout",
		"connection reset",
		"connection refused",
		"connection closed",
		"eof",
		"broken pipe",
		"temporary failure",
		"service unavailable",
		"internal server error",
		"bad gateway",
		"gateway timeout",
		"too many requests",
		"rate limit",
		"throttl",
		"slowdown",
		"try again",
		"retry",
	}

	for _, pattern := range retryablePatterns {
		if strings.Contains(errStr, pattern) {
			return true
		}
	}

	return false
}

// isNetError checks if err wraps a net.Error
func isNetError(err error, target *net.Error) bool {
	for err != nil {
		if ne, ok := err.(net.Error); ok {
			*target = ne
			return true
		}
		// Try to unwrap
		if unwrapper, ok := err.(interface{ Unwrap() error }); ok {
			err = unwrapper.Unwrap()
		} else {
			break
		}
	}
	return false
}

// WithRetry is a helper that wraps a function with default retry logic
func WithRetry(ctx context.Context, operationName string, fn func() error) error {
	notify := func(err error, duration time.Duration) {
		// Log retry attempts (caller can provide their own logger if needed)
		fmt.Printf("[RETRY] %s failed, retrying in %v: %v\n", operationName, duration, err)
	}
	return RetryOperationWithNotify(ctx, DefaultRetryConfig(), fn, notify)
}

// WithRetryConfig is a helper that wraps a function with custom retry config
func WithRetryConfig(ctx context.Context, cfg *RetryConfig, operationName string, fn func() error) error {
	notify := func(err error, duration time.Duration) {
		fmt.Printf("[RETRY] %s failed, retrying in %v: %v\n", operationName, duration, err)
	}
	return RetryOperationWithNotify(ctx, cfg, fn, notify)
}
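
A quick illustration of the substring-based classification above (hypothetical error strings; the `dbbackup` import path is assumed):

```go
package main

import (
	"errors"
	"fmt"

	"dbbackup/internal/cloud" // module path assumed
)

func main() {
	// Matches a permanent pattern ("nosuchbucket") → wrapped in backoff.Permanent, never retried.
	fmt.Println(cloud.IsPermanentError(errors.New("NoSuchBucket: the bucket does not exist"))) // true

	// Matches a retryable pattern ("connection reset") → retried with backoff.
	fmt.Println(cloud.IsRetryableError(errors.New("read tcp: connection reset by peer"))) // true

	// Auth failures are permanent, so they fail fast instead of burning the retry budget.
	fmt.Println(cloud.IsRetryableError(errors.New("access denied"))) // false
}
```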

View File

@@ -7,6 +7,7 @@ import (
"os" "os"
"path/filepath" "path/filepath"
"strings" "strings"
"time"
"github.com/aws/aws-sdk-go-v2/aws" "github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/config" "github.com/aws/aws-sdk-go-v2/config"
@@ -123,8 +124,14 @@ func (s *S3Backend) Upload(ctx context.Context, localPath, remotePath string, pr
return s.uploadSimple(ctx, file, key, fileSize, progress) return s.uploadSimple(ctx, file, key, fileSize, progress)
} }
// uploadSimple performs a simple single-part upload // uploadSimple performs a simple single-part upload with retry
func (s *S3Backend) uploadSimple(ctx context.Context, file *os.File, key string, fileSize int64, progress ProgressCallback) error { func (s *S3Backend) uploadSimple(ctx context.Context, file *os.File, key string, fileSize int64, progress ProgressCallback) error {
return RetryOperationWithNotify(ctx, DefaultRetryConfig(), func() error {
// Reset file position for retry
if _, err := file.Seek(0, 0); err != nil {
return fmt.Errorf("failed to reset file position: %w", err)
}
// Create progress reader // Create progress reader
var reader io.Reader = file var reader io.Reader = file
if progress != nil { if progress != nil {
@@ -143,10 +150,19 @@ func (s *S3Backend) uploadSimple(ctx context.Context, file *os.File, key string,
} }
return nil return nil
}, func(err error, duration time.Duration) {
fmt.Printf("[S3] Upload retry in %v: %v\n", duration, err)
})
} }
// uploadMultipart performs a multipart upload for large files // uploadMultipart performs a multipart upload for large files with retry
func (s *S3Backend) uploadMultipart(ctx context.Context, file *os.File, key string, fileSize int64, progress ProgressCallback) error { func (s *S3Backend) uploadMultipart(ctx context.Context, file *os.File, key string, fileSize int64, progress ProgressCallback) error {
return RetryOperationWithNotify(ctx, AggressiveRetryConfig(), func() error {
// Reset file position for retry
if _, err := file.Seek(0, 0); err != nil {
return fmt.Errorf("failed to reset file position: %w", err)
}
// Create uploader with custom options // Create uploader with custom options
uploader := manager.NewUploader(s.client, func(u *manager.Uploader) { uploader := manager.NewUploader(s.client, func(u *manager.Uploader) {
// Part size: 10MB // Part size: 10MB
@@ -177,9 +193,12 @@ func (s *S3Backend) uploadMultipart(ctx context.Context, file *os.File, key stri
} }
return nil return nil
}, func(err error, duration time.Duration) {
fmt.Printf("[S3] Multipart upload retry in %v: %v\n", duration, err)
})
} }
// Download downloads a file from S3 // Download downloads a file from S3 with retry
func (s *S3Backend) Download(ctx context.Context, remotePath, localPath string, progress ProgressCallback) error { func (s *S3Backend) Download(ctx context.Context, remotePath, localPath string, progress ProgressCallback) error {
// Build S3 key // Build S3 key
key := s.buildKey(remotePath) key := s.buildKey(remotePath)
@@ -190,6 +209,12 @@ func (s *S3Backend) Download(ctx context.Context, remotePath, localPath string,
return fmt.Errorf("failed to get object size: %w", err) return fmt.Errorf("failed to get object size: %w", err)
} }
// Create directory for local file
if err := os.MkdirAll(filepath.Dir(localPath), 0755); err != nil {
return fmt.Errorf("failed to create directory: %w", err)
}
return RetryOperationWithNotify(ctx, DefaultRetryConfig(), func() error {
// Download from S3 // Download from S3
result, err := s.client.GetObject(ctx, &s3.GetObjectInput{ result, err := s.client.GetObject(ctx, &s3.GetObjectInput{
Bucket: aws.String(s.bucket), Bucket: aws.String(s.bucket),
@@ -200,11 +225,7 @@ func (s *S3Backend) Download(ctx context.Context, remotePath, localPath string,
} }
defer result.Body.Close() defer result.Body.Close()
// Create local file // Create/truncate local file
if err := os.MkdirAll(filepath.Dir(localPath), 0755); err != nil {
return fmt.Errorf("failed to create directory: %w", err)
}
outFile, err := os.Create(localPath) outFile, err := os.Create(localPath)
if err != nil { if err != nil {
return fmt.Errorf("failed to create local file: %w", err) return fmt.Errorf("failed to create local file: %w", err)
@@ -223,6 +244,9 @@ func (s *S3Backend) Download(ctx context.Context, remotePath, localPath string,
} }
return nil return nil
}, func(err error, duration time.Duration) {
fmt.Printf("[S3] Download retry in %v: %v\n", duration, err)
})
} }
// List lists all backup files in S3 // List lists all backup files in S3

internal/fs/fs.go (new file, 223 lines)
View File

@@ -0,0 +1,223 @@
// Package fs provides filesystem abstraction using spf13/afero for testability.
// It allows swapping the real filesystem with an in-memory mock for unit tests.
package fs

import (
	"io"
	"os"
	"path/filepath"
	"time"

	"github.com/spf13/afero"
)

// FS is the global filesystem interface used throughout the application.
// By default, it uses the real OS filesystem.
// For testing, use SetFS(afero.NewMemMapFs()) to use an in-memory filesystem.
var FS afero.Fs = afero.NewOsFs()

// SetFS sets the global filesystem (useful for testing)
func SetFS(fs afero.Fs) {
	FS = fs
}

// ResetFS resets to the real OS filesystem
func ResetFS() {
	FS = afero.NewOsFs()
}

// NewMemMapFs creates a new in-memory filesystem for testing
func NewMemMapFs() afero.Fs {
	return afero.NewMemMapFs()
}

// NewReadOnlyFs wraps a filesystem to make it read-only
func NewReadOnlyFs(base afero.Fs) afero.Fs {
	return afero.NewReadOnlyFs(base)
}

// NewBasePathFs creates a filesystem rooted at a specific path
func NewBasePathFs(base afero.Fs, path string) afero.Fs {
	return afero.NewBasePathFs(base, path)
}

// --- File Operations (use global FS) ---

// Create creates a file
func Create(name string) (afero.File, error) {
	return FS.Create(name)
}

// Open opens a file for reading
func Open(name string) (afero.File, error) {
	return FS.Open(name)
}

// OpenFile opens a file with specified flags and permissions
func OpenFile(name string, flag int, perm os.FileMode) (afero.File, error) {
	return FS.OpenFile(name, flag, perm)
}

// Remove removes a file or empty directory
func Remove(name string) error {
	return FS.Remove(name)
}

// RemoveAll removes a path and any children it contains
func RemoveAll(path string) error {
	return FS.RemoveAll(path)
}

// Rename renames (moves) a file
func Rename(oldname, newname string) error {
	return FS.Rename(oldname, newname)
}

// Stat returns file info
func Stat(name string) (os.FileInfo, error) {
	return FS.Stat(name)
}

// Chmod changes file mode
func Chmod(name string, mode os.FileMode) error {
	return FS.Chmod(name, mode)
}

// Chown changes file ownership (may not work on all filesystems)
func Chown(name string, uid, gid int) error {
	return FS.Chown(name, uid, gid)
}

// Chtimes changes file access and modification times
func Chtimes(name string, atime, mtime time.Time) error {
	return FS.Chtimes(name, atime, mtime)
}

// --- Directory Operations ---

// Mkdir creates a directory
func Mkdir(name string, perm os.FileMode) error {
	return FS.Mkdir(name, perm)
}

// MkdirAll creates a directory and all parents
func MkdirAll(path string, perm os.FileMode) error {
	return FS.MkdirAll(path, perm)
}

// ReadDir reads a directory
func ReadDir(dirname string) ([]os.FileInfo, error) {
	return afero.ReadDir(FS, dirname)
}

// --- File Content Operations ---

// ReadFile reads an entire file
func ReadFile(filename string) ([]byte, error) {
	return afero.ReadFile(FS, filename)
}

// WriteFile writes data to a file
func WriteFile(filename string, data []byte, perm os.FileMode) error {
	return afero.WriteFile(FS, filename, data, perm)
}

// --- Existence Checks ---

// Exists checks if a file or directory exists
func Exists(path string) (bool, error) {
	return afero.Exists(FS, path)
}

// DirExists checks if a directory exists
func DirExists(path string) (bool, error) {
	return afero.DirExists(FS, path)
}

// IsDir checks if path is a directory
func IsDir(path string) (bool, error) {
	return afero.IsDir(FS, path)
}

// IsEmpty checks if a directory is empty
func IsEmpty(path string) (bool, error) {
	return afero.IsEmpty(FS, path)
}

// --- Utility Functions ---

// Walk walks a directory tree
func Walk(root string, walkFn filepath.WalkFunc) error {
	return afero.Walk(FS, root, walkFn)
}

// Glob returns the names of all files matching pattern
func Glob(pattern string) ([]string, error) {
	return afero.Glob(FS, pattern)
}

// TempDir creates a temporary directory
func TempDir(dir, prefix string) (string, error) {
	return afero.TempDir(FS, dir, prefix)
}

// TempFile creates a temporary file
func TempFile(dir, pattern string) (afero.File, error) {
	return afero.TempFile(FS, dir, pattern)
}

// CopyFile copies a file from src to dst
func CopyFile(src, dst string) error {
	srcFile, err := FS.Open(src)
	if err != nil {
		return err
	}
	defer srcFile.Close()

	srcInfo, err := srcFile.Stat()
	if err != nil {
		return err
	}

	dstFile, err := FS.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, srcInfo.Mode())
	if err != nil {
		return err
	}
	defer dstFile.Close()

	_, err = io.Copy(dstFile, srcFile)
	return err
}

// FileSize returns the size of a file
func FileSize(path string) (int64, error) {
	info, err := FS.Stat(path)
	if err != nil {
		return 0, err
	}
	return info.Size(), nil
}

// --- Testing Helpers ---

// WithMemFs executes a function with an in-memory filesystem, then restores the original
func WithMemFs(fn func(fs afero.Fs)) {
	original := FS
	memFs := afero.NewMemMapFs()
	FS = memFs
	defer func() { FS = original }()
	fn(memFs)
}

// SetupTestDir creates a test directory structure in-memory
func SetupTestDir(files map[string]string) afero.Fs {
	memFs := afero.NewMemMapFs()
	for path, content := range files {
		dir := filepath.Dir(path)
		if dir != "." && dir != "/" {
			_ = memFs.MkdirAll(dir, 0755)
		}
		_ = afero.WriteFile(memFs, path, []byte(content), 0644)
	}
	return memFs
}
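
The constructors compose; a sketch of sandboxing all application file access under one root (behavior per afero's `BasePathFs`; the root path here is made up):

```go
package main

import (
	"fmt"

	"github.com/spf13/afero"

	"dbbackup/internal/fs" // module path assumed
)

func main() {
	// Every fs.* call now resolves inside /var/lib/dbbackup:
	// "/state.json" maps to /var/lib/dbbackup/state.json on the real disk.
	fs.SetFS(fs.NewBasePathFs(afero.NewOsFs(), "/var/lib/dbbackup"))
	defer fs.ResetFS()

	if err := fs.WriteFile("/state.json", []byte("{}"), 0644); err != nil {
		fmt.Println("write failed:", err)
	}
}
```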

internal/fs/fs_test.go (new file, 191 lines)
View File

@@ -0,0 +1,191 @@
package fs

import (
	"os"
	"testing"

	"github.com/spf13/afero"
)

func TestMemMapFs(t *testing.T) {
	// Use in-memory filesystem for testing
	WithMemFs(func(memFs afero.Fs) {
		// Create a file
		err := WriteFile("/test/file.txt", []byte("hello world"), 0644)
		if err != nil {
			t.Fatalf("WriteFile failed: %v", err)
		}

		// Read it back
		content, err := ReadFile("/test/file.txt")
		if err != nil {
			t.Fatalf("ReadFile failed: %v", err)
		}
		if string(content) != "hello world" {
			t.Errorf("expected 'hello world', got '%s'", string(content))
		}

		// Check existence
		exists, err := Exists("/test/file.txt")
		if err != nil {
			t.Fatalf("Exists failed: %v", err)
		}
		if !exists {
			t.Error("file should exist")
		}

		// Check non-existent file
		exists, err = Exists("/nonexistent.txt")
		if err != nil {
			t.Fatalf("Exists failed: %v", err)
		}
		if exists {
			t.Error("file should not exist")
		}
	})
}

func TestSetupTestDir(t *testing.T) {
	// Create test directory structure
	testFs := SetupTestDir(map[string]string{
		"/backups/db1.dump":     "database 1 content",
		"/backups/db2.dump":     "database 2 content",
		"/config/settings.json": `{"key": "value"}`,
	})

	// Verify files exist
	content, err := afero.ReadFile(testFs, "/backups/db1.dump")
	if err != nil {
		t.Fatalf("ReadFile failed: %v", err)
	}
	if string(content) != "database 1 content" {
		t.Errorf("unexpected content: %s", string(content))
	}

	// Verify directory structure
	files, err := afero.ReadDir(testFs, "/backups")
	if err != nil {
		t.Fatalf("ReadDir failed: %v", err)
	}
	if len(files) != 2 {
		t.Errorf("expected 2 files, got %d", len(files))
	}
}

func TestCopyFile(t *testing.T) {
	WithMemFs(func(memFs afero.Fs) {
		// Create source file
		err := WriteFile("/source.txt", []byte("copy me"), 0644)
		if err != nil {
			t.Fatalf("WriteFile failed: %v", err)
		}

		// Copy file
		err = CopyFile("/source.txt", "/dest.txt")
		if err != nil {
			t.Fatalf("CopyFile failed: %v", err)
		}

		// Verify copy
		content, err := ReadFile("/dest.txt")
		if err != nil {
			t.Fatalf("ReadFile failed: %v", err)
		}
		if string(content) != "copy me" {
			t.Errorf("unexpected content: %s", string(content))
		}
	})
}

func TestFileSize(t *testing.T) {
	WithMemFs(func(memFs afero.Fs) {
		data := []byte("12345678901234567890") // 20 bytes
		err := WriteFile("/sized.txt", data, 0644)
		if err != nil {
			t.Fatalf("WriteFile failed: %v", err)
		}

		size, err := FileSize("/sized.txt")
		if err != nil {
			t.Fatalf("FileSize failed: %v", err)
		}
		if size != 20 {
			t.Errorf("expected size 20, got %d", size)
		}
	})
}

func TestTempDir(t *testing.T) {
	WithMemFs(func(memFs afero.Fs) {
		// Create temp dir
		dir, err := TempDir("", "test-")
		if err != nil {
			t.Fatalf("TempDir failed: %v", err)
		}

		// Verify it exists
		isDir, err := IsDir(dir)
		if err != nil {
			t.Fatalf("IsDir failed: %v", err)
		}
		if !isDir {
			t.Error("temp dir should be a directory")
		}

		// Verify it's empty
		isEmpty, err := IsEmpty(dir)
		if err != nil {
			t.Fatalf("IsEmpty failed: %v", err)
		}
		if !isEmpty {
			t.Error("temp dir should be empty")
		}
	})
}

func TestWalk(t *testing.T) {
	WithMemFs(func(memFs afero.Fs) {
		// Create directory structure
		_ = MkdirAll("/root/a/b", 0755)
		_ = WriteFile("/root/file1.txt", []byte("1"), 0644)
		_ = WriteFile("/root/a/file2.txt", []byte("2"), 0644)
		_ = WriteFile("/root/a/b/file3.txt", []byte("3"), 0644)

		var files []string
		err := Walk("/root", func(path string, info os.FileInfo, err error) error {
			if err != nil {
				return err
			}
			if !info.IsDir() {
				files = append(files, path)
			}
			return nil
		})
		if err != nil {
			t.Fatalf("Walk failed: %v", err)
		}
		if len(files) != 3 {
			t.Errorf("expected 3 files, got %d: %v", len(files), files)
		}
	})
}

func TestGlob(t *testing.T) {
	WithMemFs(func(memFs afero.Fs) {
		_ = WriteFile("/data/backup1.dump", []byte("1"), 0644)
		_ = WriteFile("/data/backup2.dump", []byte("2"), 0644)
		_ = WriteFile("/data/config.json", []byte("{}"), 0644)

		matches, err := Glob("/data/*.dump")
		if err != nil {
			t.Fatalf("Glob failed: %v", err)
		}
		if len(matches) != 2 {
			t.Errorf("expected 2 matches, got %d: %v", len(matches), matches)
		}
	})
}

View File

@@ -368,7 +368,7 @@ func (d *Diagnoser) diagnoseSQLScript(filePath string, compressed bool, result *
	}

	// Store last line for termination check
-	if lineNumber > 0 && (lineNumber%100000 == 0) && d.verbose {
+	if lineNumber > 0 && (lineNumber%100000 == 0) && d.verbose && d.log != nil {
		d.log.Debug("Scanning SQL file", "lines_processed", lineNumber)
	}
}
@@ -430,9 +430,11 @@ func (d *Diagnoser) diagnoseClusterArchive(filePath string, result *DiagnoseResu
		}
	}

+	if d.log != nil {
	d.log.Info("Verifying cluster archive integrity",
		"size", fmt.Sprintf("%.1f GB", float64(result.FileSize)/(1024*1024*1024)),
		"timeout", fmt.Sprintf("%d min", timeoutMinutes))
+	}

	ctx, cancel := context.WithTimeout(context.Background(), time.Duration(timeoutMinutes)*time.Minute)
	defer cancel()
@@ -561,7 +563,7 @@ func (d *Diagnoser) diagnoseClusterArchive(filePath string, result *DiagnoseResu
	}

	// For verbose mode, diagnose individual dumps inside the archive
-	if d.verbose && len(dumpFiles) > 0 {
+	if d.verbose && len(dumpFiles) > 0 && d.log != nil {
		d.log.Info("Cluster archive contains databases", "count", len(dumpFiles))
		for _, df := range dumpFiles {
			d.log.Info(" - " + df)
@@ -684,9 +686,11 @@ func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*Diagno
		}
	}

+	if d.log != nil {
	d.log.Info("Listing cluster archive contents",
		"size", fmt.Sprintf("%.1f GB", float64(archiveInfo.Size())/(1024*1024*1024)),
		"timeout", fmt.Sprintf("%d min", timeoutMinutes))
+	}

	listCtx, listCancel := context.WithTimeout(context.Background(), time.Duration(timeoutMinutes)*time.Minute)
	defer listCancel()
@@ -766,7 +770,9 @@ func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*Diagno
		return []*DiagnoseResult{errResult}, nil
	}

+	if d.log != nil {
	d.log.Debug("Archive listing streamed successfully", "total_files", fileCount, "relevant_files", len(files))
+	}

	// Check if we have enough disk space (estimate 4x archive size needed)
	// archiveInfo already obtained at function start
@@ -781,7 +787,9 @@ func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*Diagno
		testCancel()
	}

+	if d.log != nil {
	d.log.Info("Archive listing successful", "files", len(files))
+	}

	// Try full extraction - NO TIMEOUT here as large archives can take a long time
	// Use a generous timeout (30 minutes) for very large archives
@@ -870,11 +878,15 @@ func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*Diagno
		}

		dumpPath := filepath.Join(dumpsDir, name)
+		if d.log != nil {
		d.log.Info("Diagnosing dump file", "file", name)
+		}

		result, err := d.DiagnoseFile(dumpPath)
		if err != nil {
+			if d.log != nil {
			d.log.Warn("Failed to diagnose file", "file", name, "error", err)
+			}
			continue
		}
		results = append(results, result)

View File

@@ -255,7 +255,9 @@ func (s *Safety) CheckDiskSpaceAt(archivePath string, checkDir string, multiplie
	// Get available disk space
	availableSpace, err := getDiskSpace(checkDir)
	if err != nil {
+		if s.log != nil {
		s.log.Warn("Cannot check disk space", "error", err)
+		}
		return nil // Don't fail if we can't check
	}
@@ -278,10 +280,12 @@ func (s *Safety) CheckDiskSpaceAt(archivePath string, checkDir string, multiplie
			checkDir)
	}

+	if s.log != nil {
	s.log.Info("Disk space check passed",
		"location", checkDir,
		"required", FormatBytes(requiredSpace),
		"available", FormatBytes(availableSpace))
+	}

	return nil
}

View File

@@ -251,13 +251,13 @@ func (m ArchiveBrowserModel) View() string {
	var s strings.Builder

	// Header
-	title := "[PKG] Backup Archives"
+	title := "[SELECT] Backup Archives"
	if m.mode == "restore-single" {
-		title = "[PKG] Select Archive to Restore (Single Database)"
+		title = "[SELECT] Select Archive to Restore (Single Database)"
	} else if m.mode == "restore-cluster" {
-		title = "[PKG] Select Archive to Restore (Cluster)"
+		title = "[SELECT] Select Archive to Restore (Cluster)"
	} else if m.mode == "diagnose" {
-		title = "[SEARCH] Select Archive to Diagnose"
+		title = "[SELECT] Select Archive to Diagnose"
	}

	s.WriteString(titleStyle.Render(title))

View File

@@ -230,7 +230,7 @@ func (m BackupManagerModel) View() string {
	var s strings.Builder

	// Title
-	s.WriteString(TitleStyle.Render("[DB] Backup Archive Manager"))
+	s.WriteString(TitleStyle.Render("[SELECT] Backup Archive Manager"))
	s.WriteString("\n\n")

	// Status line (no box, bold+color accents)

View File

@@ -160,7 +160,7 @@ func (m DiagnoseViewModel) View() string {
	var s strings.Builder

	// Header
-	s.WriteString(titleStyle.Render("[SEARCH] Backup Diagnosis"))
+	s.WriteString(titleStyle.Render("[CHECK] Backup Diagnosis"))
	s.WriteString("\n\n")

	// Archive info
@@ -204,132 +204,111 @@ func (m DiagnoseViewModel) View() string {
func (m DiagnoseViewModel) renderSingleResult(result *restore.DiagnoseResult) string {
	var s strings.Builder

-	// Status Box
-	s.WriteString("+--[ VALIDATION STATUS ]" + strings.Repeat("-", 37) + "+\n")
+	// Validation Status
+	s.WriteString(diagnoseHeaderStyle.Render("[STATUS] Validation"))
+	s.WriteString("\n")

	if result.IsValid {
-		s.WriteString("| " + diagnosePassStyle.Render("[OK] VALID - Archive passed all checks") + strings.Repeat(" ", 18) + "|\n")
+		s.WriteString(diagnosePassStyle.Render(" [OK] VALID - Archive passed all checks"))
+		s.WriteString("\n")
	} else {
-		s.WriteString("| " + diagnoseFailStyle.Render("[FAIL] INVALID - Archive has problems") + strings.Repeat(" ", 19) + "|\n")
+		s.WriteString(diagnoseFailStyle.Render(" [FAIL] INVALID - Archive has problems"))
+		s.WriteString("\n")
	}

	if result.IsTruncated {
-		s.WriteString("| " + diagnoseFailStyle.Render("[!] TRUNCATED - File is incomplete") + strings.Repeat(" ", 22) + "|\n")
+		s.WriteString(diagnoseFailStyle.Render(" [!] TRUNCATED - File is incomplete"))
+		s.WriteString("\n")
	}

	if result.IsCorrupted {
-		s.WriteString("| " + diagnoseFailStyle.Render("[!] CORRUPTED - File structure damaged") + strings.Repeat(" ", 18) + "|\n")
+		s.WriteString(diagnoseFailStyle.Render(" [!] CORRUPTED - File structure damaged"))
+		s.WriteString("\n")
	}

-	s.WriteString("+" + strings.Repeat("-", 60) + "+\n\n")
+	s.WriteString("\n")

-	// Details Box
+	// Details
	if result.Details != nil {
-		s.WriteString("+--[ DETAILS ]" + strings.Repeat("-", 46) + "+\n")
+		s.WriteString(diagnoseHeaderStyle.Render("[INFO] Details"))
+		s.WriteString("\n")
		if result.Details.HasPGDMPSignature {
-			s.WriteString("| " + diagnosePassStyle.Render("[+]") + " PostgreSQL custom format (PGDMP)" + strings.Repeat(" ", 20) + "|\n")
+			s.WriteString(diagnosePassStyle.Render(" [+]") + " PostgreSQL custom format (PGDMP)\n")
		}
		if result.Details.HasSQLHeader {
-			s.WriteString("| " + diagnosePassStyle.Render("[+]") + " PostgreSQL SQL header found" + strings.Repeat(" ", 25) + "|\n")
+			s.WriteString(diagnosePassStyle.Render(" [+]") + " PostgreSQL SQL header found\n")
		}
		if result.Details.GzipValid {
-			s.WriteString("| " + diagnosePassStyle.Render("[+]") + " Gzip compression valid" + strings.Repeat(" ", 30) + "|\n")
+			s.WriteString(diagnosePassStyle.Render(" [+]") + " Gzip compression valid\n")
		}
		if result.Details.PgRestoreListable {
-			tableInfo := fmt.Sprintf(" (%d tables)", result.Details.TableCount)
-			padding := 36 - len(tableInfo)
-			if padding < 0 {
-				padding = 0
-			}
-			s.WriteString("| " + diagnosePassStyle.Render("[+]") + " pg_restore can list contents" + tableInfo + strings.Repeat(" ", padding) + "|\n")
+			s.WriteString(diagnosePassStyle.Render(" [+]") + fmt.Sprintf(" pg_restore can list contents (%d tables)\n", result.Details.TableCount))
		}
		if result.Details.CopyBlockCount > 0 {
-			blockInfo := fmt.Sprintf("%d COPY blocks found", result.Details.CopyBlockCount)
-			padding := 50 - len(blockInfo)
-			if padding < 0 {
-				padding = 0
-			}
-			s.WriteString("| [-] " + blockInfo + strings.Repeat(" ", padding) + "|\n")
+			s.WriteString(fmt.Sprintf(" [-] %d COPY blocks found\n", result.Details.CopyBlockCount))
		}
		if result.Details.UnterminatedCopy {
-			s.WriteString("| " + diagnoseFailStyle.Render("[-]") + " Unterminated COPY: " + truncate(result.Details.LastCopyTable, 30) + strings.Repeat(" ", 5) + "|\n")
+			s.WriteString(diagnoseFailStyle.Render(" [-]") + " Unterminated COPY: " + truncate(result.Details.LastCopyTable, 30) + "\n")
		}
		if result.Details.ProperlyTerminated {
-			s.WriteString("| " + diagnosePassStyle.Render("[+]") + " All COPY blocks properly terminated" + strings.Repeat(" ", 17) + "|\n")
+			s.WriteString(diagnosePassStyle.Render(" [+]") + " All COPY blocks properly terminated\n")
		}
		if result.Details.ExpandedSize > 0 {
-			sizeInfo := fmt.Sprintf("Expanded: %s (%.1fx)", formatSize(result.Details.ExpandedSize), result.Details.CompressionRatio)
-			padding := 50 - len(sizeInfo)
-			if padding < 0 {
-				padding = 0
-			}
-			s.WriteString("| [-] " + sizeInfo + strings.Repeat(" ", padding) + "|\n")
+			s.WriteString(fmt.Sprintf(" [-] Expanded: %s (%.1fx)\n", formatSize(result.Details.ExpandedSize), result.Details.CompressionRatio))
		}
-		s.WriteString("+" + strings.Repeat("-", 60) + "+\n")
+		s.WriteString("\n")
	}

-	// Errors Box
+	// Errors
	if len(result.Errors) > 0 {
-		s.WriteString("\n+--[ ERRORS ]" + strings.Repeat("-", 47) + "+\n")
+		s.WriteString(diagnoseFailStyle.Render("[FAIL] Errors"))
+		s.WriteString("\n")
		for i, e := range result.Errors {
			if i >= 5 {
-				remaining := fmt.Sprintf("... and %d more errors", len(result.Errors)-5)
-				padding := 56 - len(remaining)
-				s.WriteString("| " + remaining + strings.Repeat(" ", padding) + "|\n")
+				s.WriteString(fmt.Sprintf(" ... and %d more errors\n", len(result.Errors)-5))
				break
			}
-			errText := truncate(e, 54)
-			padding := 56 - len(errText)
-			if padding < 0 {
-				padding = 0
-			}
-			s.WriteString("| " + errText + strings.Repeat(" ", padding) + "|\n")
+			s.WriteString(" " + truncate(e, 60) + "\n")
		}
-		s.WriteString("+" + strings.Repeat("-", 60) + "+\n")
+		s.WriteString("\n")
	}

-	// Warnings Box
+	// Warnings
	if len(result.Warnings) > 0 {
-		s.WriteString("\n+--[ WARNINGS ]" + strings.Repeat("-", 45) + "+\n")
+		s.WriteString(diagnoseWarnStyle.Render("[WARN] Warnings"))
+		s.WriteString("\n")
		for i, w := range result.Warnings {
			if i >= 3 {
-				remaining := fmt.Sprintf("... and %d more warnings", len(result.Warnings)-3)
-				padding := 56 - len(remaining)
-				s.WriteString("| " + remaining + strings.Repeat(" ", padding) + "|\n")
+				s.WriteString(fmt.Sprintf(" ... and %d more warnings\n", len(result.Warnings)-3))
				break
			}
-			warnText := truncate(w, 54)
-			padding := 56 - len(warnText)
-			if padding < 0 {
-				padding = 0
-			}
-			s.WriteString("| " + warnText + strings.Repeat(" ", padding) + "|\n")
+			s.WriteString(" " + truncate(w, 60) + "\n")
		}
-		s.WriteString("+" + strings.Repeat("-", 60) + "+\n")
+		s.WriteString("\n")
	}

-	// Recommendations Box
+	// Recommendations
	if !result.IsValid {
-		s.WriteString("\n+--[ RECOMMENDATIONS ]" + strings.Repeat("-", 38) + "+\n")
+		s.WriteString(diagnoseInfoStyle.Render("[HINT] Recommendations"))
+		s.WriteString("\n")
		if result.IsTruncated {
-			s.WriteString("| 1. Re-run backup with current version (v3.42.12+) |\n")
-			s.WriteString("| 2. Check disk space on backup server |\n")
-			s.WriteString("| 3. Verify network stability for remote backups |\n")
+			s.WriteString(" 1. Re-run backup with current version (v3.42+)\n")
+			s.WriteString(" 2. Check disk space on backup server\n")
+			s.WriteString(" 3. Verify network stability for remote backups\n")
		}
		if result.IsCorrupted {
-			s.WriteString("| 1. Verify backup was transferred completely |\n")
-			s.WriteString("| 2. Try restoring from a previous backup |\n")
+			s.WriteString(" 1. Verify backup was transferred completely\n")
+			s.WriteString(" 2. Try restoring from a previous backup\n")
		}
-		s.WriteString("+" + strings.Repeat("-", 60) + "+\n")
	}

	return s.String()
@@ -349,10 +328,8 @@ func (m DiagnoseViewModel) renderClusterResults() string {
		}
	}

-	s.WriteString(strings.Repeat("-", 60))
	s.WriteString("\n")
-	s.WriteString(diagnoseHeaderStyle.Render(fmt.Sprintf("[STATS] CLUSTER SUMMARY: %d databases\n", len(m.results))))
-	s.WriteString(strings.Repeat("-", 60))
+	s.WriteString(diagnoseHeaderStyle.Render(fmt.Sprintf("[STATS] Cluster Summary: %d databases", len(m.results))))
	s.WriteString("\n\n")

	if invalidCount == 0 {
@@ -364,7 +341,7 @@ func (m DiagnoseViewModel) renderClusterResults() string {
	}

	// List all dumps with status
-	s.WriteString(diagnoseHeaderStyle.Render("Database Dumps:"))
+	s.WriteString(diagnoseHeaderStyle.Render("[LIST] Database Dumps"))
	s.WriteString("\n")

	// Show visible range based on cursor
@@ -413,9 +390,7 @@ func (m DiagnoseViewModel) renderClusterResults() string {
	if m.cursor < len(m.results) {
		selected := m.results[m.cursor]
		s.WriteString("\n")
-		s.WriteString(strings.Repeat("-", 60))
-		s.WriteString("\n")
-		s.WriteString(diagnoseHeaderStyle.Render("Selected: " + selected.FileName))
+		s.WriteString(diagnoseHeaderStyle.Render("[INFO] Selected: " + selected.FileName))
		s.WriteString("\n\n")

		// Show condensed details for selected

View File

@@ -191,7 +191,7 @@ func (m HistoryViewModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
func (m HistoryViewModel) View() string {
	var s strings.Builder

-	header := titleStyle.Render("[HISTORY] Operation History")
+	header := titleStyle.Render("[STATS] Operation History")
	s.WriteString(fmt.Sprintf("\n%s\n\n", header))

	if len(m.history) == 0 {

View File

@@ -285,7 +285,7 @@ func (m *MenuModel) View() string {
	var s string

	// Header
-	header := titleStyle.Render("[DB] Database Backup Tool - Interactive Menu")
+	header := titleStyle.Render("Database Backup Tool - Interactive Menu")
	s += fmt.Sprintf("\n%s\n\n", header)

	if len(m.dbTypes) > 0 {
@@ -334,13 +334,13 @@ func (m *MenuModel) View() string {
// handleSingleBackup opens database selector for single backup
func (m *MenuModel) handleSingleBackup() (tea.Model, tea.Cmd) {
-	selector := NewDatabaseSelector(m.config, m.logger, m, m.ctx, "[DB] Single Database Backup", "single")
+	selector := NewDatabaseSelector(m.config, m.logger, m, m.ctx, "[SELECT] Single Database Backup", "single")
	return selector, selector.Init()
}

// handleSampleBackup opens database selector for sample backup
func (m *MenuModel) handleSampleBackup() (tea.Model, tea.Cmd) {
-	selector := NewDatabaseSelector(m.config, m.logger, m, m.ctx, "[STATS] Sample Database Backup", "sample")
+	selector := NewDatabaseSelector(m.config, m.logger, m, m.ctx, "[SELECT] Sample Database Backup", "sample")
	return selector, selector.Init()
}
@@ -356,7 +356,7 @@ func (m *MenuModel) handleClusterBackup() (tea.Model, tea.Cmd) {
		return executor, executor.Init()
	}
	confirm := NewConfirmationModelWithAction(m.config, m.logger, m,
-		"[DB] Cluster Backup",
+		"[CHECK] Cluster Backup",
		"This will backup ALL databases in the cluster. Continue?",
		func() (tea.Model, tea.Cmd) {
			executor := NewBackupExecution(m.config, m.logger, m, m.ctx, "cluster", "", 0)

View File

@@ -321,9 +321,9 @@ func (m RestoreExecutionModel) View() string {
	s.Grow(512) // Pre-allocate estimated capacity for better performance

	// Title
-	title := "[RESTORE] Restoring Database"
+	title := "[EXEC] Restoring Database"
	if m.restoreType == "restore-cluster" {
-		title = "[RESTORE] Restoring Cluster"
+		title = "[EXEC] Restoring Cluster"
	}
	s.WriteString(titleStyle.Render(title))
	s.WriteString("\n\n")

View File

@@ -339,9 +339,9 @@ func (m RestorePreviewModel) View() string {
	var s strings.Builder

	// Title
-	title := "Restore Preview"
+	title := "[CHECK] Restore Preview"
	if m.mode == "restore-cluster" {
-		title = "Cluster Restore Preview"
+		title = "[CHECK] Cluster Restore Preview"
	}
	s.WriteString(titleStyle.Render(title))
	s.WriteString("\n\n")

View File

@@ -688,7 +688,7 @@ func (m SettingsModel) View() string {
	var b strings.Builder

	// Header
-	header := titleStyle.Render("[CFG] Configuration Settings")
+	header := titleStyle.Render("[CONFIG] Configuration Settings")
	b.WriteString(fmt.Sprintf("\n%s\n\n", header))

	// Settings list
@@ -747,7 +747,7 @@ func (m SettingsModel) View() string {
	// Current configuration summary
	if !m.editing {
		b.WriteString("\n")
-		b.WriteString(infoStyle.Render("[LOG] Current Configuration:"))
+		b.WriteString(infoStyle.Render("[INFO] Current Configuration"))
		b.WriteString("\n")

		summary := []string{

View File

@@ -173,7 +173,7 @@ func (m StatusViewModel) View() string {
		s.WriteString(errorStyle.Render(fmt.Sprintf("[FAIL] Error: %v\n", m.err)))
		s.WriteString("\n")
	} else {
-		s.WriteString("Connection Status:\n")
+		s.WriteString("[CONN] Connection Status\n")
		if m.connected {
			s.WriteString(successStyle.Render(" [+] Connected\n"))
		} else {
@@ -181,6 +181,7 @@ func (m StatusViewModel) View() string {
		}
		s.WriteString("\n")

+		s.WriteString("[INFO] Server Details\n")
		s.WriteString(fmt.Sprintf(" Database Type: %s (%s)\n", m.config.DisplayDatabaseType(), m.config.DatabaseType))
		s.WriteString(fmt.Sprintf(" Host: %s:%d\n", m.config.Host, m.config.Port))
		s.WriteString(fmt.Sprintf(" User: %s\n", m.config.User))

View File

@@ -120,12 +120,36 @@ var ShortcutStyle = lipgloss.NewStyle().
// =============================================================================
// HELPER PREFIXES (no emoticons)
// =============================================================================
+// Convention for TUI titles/headers:
+//   [CHECK]  - Verification/diagnosis screens
+//   [STATS]  - Statistics/status screens
+//   [SELECT] - Selection/browser screens
+//   [EXEC]   - Execution/running screens
+//   [CONFIG] - Configuration/settings screens
+//
+// Convention for status messages:
+//   [OK]   - Success
+//   [FAIL] - Error/failure
+//   [WAIT] - In progress
+//   [WARN] - Warning
+//   [INFO] - Information
const (
+	// Title prefixes (for view headers)
+	PrefixCheck  = "[CHECK]"
+	PrefixStats  = "[STATS]"
+	PrefixSelect = "[SELECT]"
+	PrefixExec   = "[EXEC]"
+	PrefixConfig = "[CONFIG]"
+
+	// Status prefixes
	PrefixOK   = "[OK]"
	PrefixFail = "[FAIL]"
-	PrefixWarn = "[!]"
-	PrefixInfo = "[i]"
+	PrefixWait = "[WAIT]"
+	PrefixWarn = "[WARN]"
+	PrefixInfo = "[INFO]"
+
+	// List item prefixes
	PrefixPlus  = "[+]"
	PrefixMinus = "[-]"
	PrefixArrow = ">"
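
A sketch of a view header built from these constants (hypothetical helper; `titleStyle` is the lipgloss style the views already use):

```go
// renderHeader is a hypothetical helper following the convention above.
func renderHeader(prefix, title string) string {
	return titleStyle.Render(prefix + " " + title) + "\n\n"
}

// e.g. s.WriteString(renderHeader(PrefixCheck, "Backup Diagnosis"))
```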

View File

@@ -16,7 +16,7 @@ import (
// Build information (set by ldflags)
var (
-	version   = "3.42.32"
+	version   = "3.42.34"
	buildTime = "unknown"
	gitCommit = "unknown"
)