Compare commits


17 Commits

Author SHA1 Message Date
eeacbfa007 TUI: Add detailed progress tracking with rolling speed and database count
All checks were successful
CI/CD / Test (push) Successful in 1m28s
CI/CD / Lint (push) Successful in 1m41s
CI/CD / Build & Release (push) Successful in 4m9s
- Add TUI-native detailed progress component (detailed_progress.go)
- Hide spinner when progress bar is shown for cleaner display
- Implement rolling window speed calculation (5-sec window, 100 samples)
- Add database count tracking (X/Y) for cluster restore operations
- Wire DatabaseProgressCallback to restore engine for multi-db progress
2026-01-15 15:16:21 +01:00
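A rolling-window speed calculation of the kind this commit describes can be sketched as follows (illustrative type and field names, not dbbackup's actual code; assumes the 5-second window capped at 100 samples mentioned above):

```go
package main

import (
	"fmt"
	"time"
)

type sample struct {
	at    time.Time
	bytes int64
}

// speedWindow keeps recent (time, cumulative-bytes) samples and reports
// the average speed over a sliding window (5 s, at most 100 samples).
type speedWindow struct {
	window  time.Duration
	maxKeep int
	samples []sample
}

func (w *speedWindow) Add(totalBytes int64) {
	w.samples = append(w.samples, sample{time.Now(), totalBytes})
	cutoff := time.Now().Add(-w.window)
	// Drop samples that fell out of the window or exceed the cap.
	for len(w.samples) > w.maxKeep || (len(w.samples) > 1 && w.samples[0].at.Before(cutoff)) {
		w.samples = w.samples[1:]
	}
}

// BytesPerSec averages between the oldest and newest retained samples.
func (w *speedWindow) BytesPerSec() float64 {
	if len(w.samples) < 2 {
		return 0
	}
	first, last := w.samples[0], w.samples[len(w.samples)-1]
	dt := last.at.Sub(first.at).Seconds()
	if dt <= 0 {
		return 0
	}
	return float64(last.bytes-first.bytes) / dt
}

func main() {
	w := &speedWindow{window: 5 * time.Second, maxKeep: 100}
	w.Add(0)
	time.Sleep(100 * time.Millisecond)
	w.Add(5 << 20) // 5 MiB transferred so far
	fmt.Printf("%.1f MB/s\n", w.BytesPerSec()/1e6)
}
```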
7711a206ab Fix panic in TUI backup manager verify when logger is nil
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m25s
CI/CD / Build & Release (push) Successful in 3m15s
- Add nil checks before logger calls in diagnose.go (8 places)
- Add nil checks before logger calls in safety.go (2 places)
- Fixes crash when pressing 'v' to verify backup in interactive menu
2026-01-14 17:18:37 +01:00
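The underlying nil-guard pattern, as a minimal sketch (types here are illustrative, not the actual diagnose.go/safety.go code):

```go
package main

import "log"

// Manager's logger may be nil when verify is invoked from the
// interactive TUI menu, so every call site guards before use.
type Manager struct {
	logger *log.Logger
}

func (m *Manager) verify(path string) {
	if m.logger != nil {
		m.logger.Printf("verifying backup: %s", path)
	}
	// ... actual verification work would go here ...
}

func main() {
	(&Manager{}).verify("/backups/db.dump") // no panic with a nil logger
}
```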
ba6e8a2b39 v3.42.37: Remove ASCII boxes from diagnose view
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Successful in 3m14s
Cleaner output without box drawing characters:
- [STATUS] Validation section
- [INFO] Details section
- [FAIL] Errors section
- [WARN] Warnings section
- [HINT] Recommendations section
2026-01-14 17:05:43 +01:00
ec5e89eab7 v3.42.36: Fix remaining TUI prefix inconsistencies
All checks were successful
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Successful in 3m13s
- diagnose_view.go: Add [STATS], [LIST], [INFO] section prefixes
- status.go: Add [CONN], [INFO] section prefixes
- settings.go: [LOG] → [INFO] for configuration summary
- menu.go: [DB] → [SELECT]/[CHECK] for selectors
2026-01-14 16:59:24 +01:00
e24d7ab49f v3.42.35: Standardize TUI title prefixes for consistency
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m27s
CI/CD / Build & Release (push) Successful in 3m17s
- [CHECK] for diagnosis, previews, validations
- [STATS] for status, history, metrics views
- [SELECT] for selection/browsing screens
- [EXEC] for execution screens (backup/restore)
- [CONFIG] for settings/configuration

Fixed 8 files with inconsistent prefixes:
- diagnose_view.go: [SEARCH] → [CHECK]
- settings.go: [CFG] → [CONFIG]
- menu.go: [DB] → clean title
- history.go: [HISTORY] → [STATS]
- backup_manager.go: [DB] → [SELECT]
- archive_browser.go: [PKG]/[SEARCH] → [SELECT]
- restore_preview.go: added [CHECK]
- restore_exec.go: [RESTORE] → [EXEC]
2026-01-14 16:36:35 +01:00
721e53fe6a v3.42.34: Add spf13/afero for filesystem abstraction
All checks were successful
CI/CD / Test (push) Successful in 1m16s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Successful in 3m13s
- New internal/fs package for testable filesystem operations
- In-memory filesystem support for unit testing without disk I/O
- Swappable global FS: SetFS(afero.NewMemMapFs())
- Wrapper functions: ReadFile, WriteFile, Mkdir, Walk, Glob, etc.
- Testing helpers: WithMemFs(), SetupTestDir()
- Comprehensive test suite demonstrating usage
- Upgraded afero from v1.10.0 to v1.15.0
2026-01-14 16:24:12 +01:00
4e09066aa5 v3.42.33: Add cenkalti/backoff for exponential backoff retry
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m25s
CI/CD / Build & Release (push) Successful in 3m14s
- Exponential backoff retry for all cloud operations (S3, Azure, GCS)
- RetryConfig presets: Default (5x), Aggressive (10x), Quick (3x)
- Smart error classification: IsPermanentError, IsRetryableError
- Automatic file position reset on upload retry
- Retry logging with wait duration
- Multipart uploads use aggressive retry (more tolerance)
2026-01-14 16:19:40 +01:00
6a24ee39be v3.42.32: Add fatih/color for cross-platform terminal colors
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Successful in 3m13s
- Windows-compatible colors via native console API
- Color helper functions: Success(), Error(), Warning(), Info()
- Text styling: Header(), Dim(), Bold(), Green(), Red(), Yellow(), Cyan()
- Logger CleanFormatter uses fatih/color instead of raw ANSI
- All progress indicators use colored [OK]/[FAIL] status
- Automatic color detection for non-TTY environments
2026-01-14 16:13:00 +01:00
dc6dfd8b2c v3.42.31: Add schollz/progressbar for visual progress display
All checks were successful
CI/CD / Test (push) Successful in 1m16s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Successful in 3m13s
- Visual progress bars for cloud uploads/downloads
  - Byte transfer display, speed, ETA prediction
  - Color-coded Unicode block progress
- Checksum verification with progress bar for large files
- Spinner for indeterminate operations (unknown size)
- New types: NewSchollzBar(), NewSchollzBarItems(), NewSchollzSpinner()
- Progress Writer() method for io.Copy integration
2026-01-14 16:07:04 +01:00
7b4ab76313 v3.42.30: Add go-multierror for better error aggregation
All checks were successful
CI/CD / Test (push) Successful in 1m20s
CI/CD / Lint (push) Successful in 1m25s
CI/CD / Build & Release (push) Successful in 3m14s
- Use hashicorp/go-multierror for cluster restore error collection
- Shows ALL failed databases with full error context (not just count)
- Bullet-pointed output for readability
- Thread-safe error aggregation with dedicated mutex
- Error wrapping with %w for proper error chain preservation
2026-01-14 15:59:12 +01:00
c0d92b3a81 fix: update go.sum for gopsutil Windows dependencies
All checks were successful
CI/CD / Test (push) Successful in 1m15s
CI/CD / Lint (push) Successful in 1m25s
CI/CD / Build & Release (push) Successful in 3m13s
2026-01-14 15:50:13 +01:00
8c85d85249 refactor: use gopsutil and go-humanize for preflight checks
Some checks failed
CI/CD / Build & Release (push) Has been cancelled
CI/CD / Lint (push) Has been cancelled
CI/CD / Test (push) Has been cancelled
- Added gopsutil/v3 for cross-platform system metrics
  * Works on Linux, macOS, Windows, BSD
  * Memory detection no longer requires /proc parsing

- Added go-humanize for readable output
  * humanize.Bytes() for memory sizes
  * humanize.Comma() for large numbers

- Improved preflight display with memory usage percentage
- Linux kernel checks (shmmax/shmall) still use /proc for accuracy
2026-01-14 15:47:31 +01:00
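A minimal sketch of the gopsutil + go-humanize combination described above (real API calls; the output format is illustrative):

```go
package main

import (
	"fmt"

	"github.com/dustin/go-humanize"
	"github.com/shirou/gopsutil/v3/mem"
)

func main() {
	// Cross-platform memory metrics; no /proc parsing required.
	vm, err := mem.VirtualMemory()
	if err != nil {
		panic(err)
	}
	fmt.Printf("RAM: %s total, %s available (%.1f%% used)\n",
		humanize.Bytes(vm.Total),
		humanize.Bytes(vm.Available),
		vm.UsedPercent)
}
```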
e0cdcb28be feat: comprehensive preflight checks for cluster restore
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Successful in 3m17s
- Linux system checks (read-only from /proc, no auth needed):
  * shmmax, shmall kernel limits
  * Available RAM check

- PostgreSQL auto-tuning:
  * max_locks_per_transaction scaled by BLOB count
  * maintenance_work_mem boosted to 2GB for faster indexes
  * All settings auto-reset after restore (even on failure)

- Archive analysis:
  * Count BLOBs per database (pg_restore -l or zgrep)
  * Scale lock boost: 2048 (default) → 4096/8192/16384 based on count

- Nice TUI preflight summary display with ✓/⚠ indicators
2026-01-14 15:30:41 +01:00
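The read-only kernel checks can be sketched as plain /proc reads (illustrative helper; no elevated privileges needed):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readKernelLimit reads a numeric sysctl value such as
// /proc/sys/kernel/shmmax - read-only, so no authentication is needed.
func readKernelLimit(name string) (uint64, error) {
	raw, err := os.ReadFile("/proc/sys/kernel/" + name)
	if err != nil {
		return 0, err
	}
	return strconv.ParseUint(strings.TrimSpace(string(raw)), 10, 64)
}

func main() {
	for _, k := range []string{"shmmax", "shmall"} {
		v, err := readKernelLimit(k)
		if err != nil {
			fmt.Printf("⚠ %s: %v\n", k, err)
			continue
		}
		fmt.Printf("✓ %s = %d\n", k, v)
	}
}
```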
22a7b9e81e feat: auto-tune max_locks_per_transaction for cluster restore
All checks were successful
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Successful in 3m15s
- Automatically boost max_locks_per_transaction to 2048 before restore
- Uses ALTER SYSTEM + pg_reload_conf() - no restart needed
- Automatically resets to original value after restore (even on failure)
- Prevents 'out of shared memory' OOM on BLOB-heavy SQL format dumps
- Works transparently - no user intervention required
2026-01-14 15:05:42 +01:00
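The boost-and-reset flow could look roughly like this (a hedged sketch using pgx, which the project already depends on; `boostLocks` and the `DATABASE_URL` env var are illustrative, and the reset runs even when the restore fails):

```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/jackc/pgx/v5"
)

// boostLocks raises the setting for the duration of fn, then restores
// the original value (mirrors the commit's boost/reset flow).
func boostLocks(ctx context.Context, conn *pgx.Conn, fn func() error) error {
	var orig string
	if err := conn.QueryRow(ctx, "SHOW max_locks_per_transaction").Scan(&orig); err != nil {
		return err
	}
	if _, err := conn.Exec(ctx, "ALTER SYSTEM SET max_locks_per_transaction = 2048"); err != nil {
		return err
	}
	if _, err := conn.Exec(ctx, "SELECT pg_reload_conf()"); err != nil {
		return err
	}
	defer func() {
		// Always reset, even when the restore below returned an error.
		_, _ = conn.Exec(ctx, fmt.Sprintf("ALTER SYSTEM SET max_locks_per_transaction = %s", orig))
		_, _ = conn.Exec(ctx, "SELECT pg_reload_conf()")
	}()
	return fn()
}

func main() {
	ctx := context.Background()
	conn, err := pgx.Connect(ctx, os.Getenv("DATABASE_URL")) // assumed env var
	if err != nil {
		panic(err)
	}
	defer conn.Close(ctx)
	fmt.Println("restore result:", boostLocks(ctx, conn, func() error {
		return nil // ... run the actual pg_restore work here ...
	}))
}
```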
c71889be47 fix: phased restore for BLOB databases to prevent lock exhaustion OOM
All checks were successful
CI/CD / Test (push) Successful in 1m16s
CI/CD / Lint (push) Successful in 1m25s
CI/CD / Build & Release (push) Successful in 3m13s
- Auto-detect large objects in pg_restore dumps
- Split restore into pre-data, data, post-data phases
- Each phase commits and releases locks before next
- Prevents 'out of shared memory' / max_locks_per_transaction errors
- Updated error hints with better guidance for lock exhaustion
2026-01-14 08:15:53 +01:00
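Phase splitting maps onto pg_restore's standard `--section` flag; a minimal sketch (hypothetical helper, connection flags and stderr handling omitted):

```go
package main

import (
	"fmt"
	"os/exec"
)

// restorePhased runs pg_restore one section at a time so each phase
// commits and releases its locks before the next begins.
func restorePhased(dumpPath, dbname string) error {
	for _, section := range []string{"pre-data", "data", "post-data"} {
		cmd := exec.Command("pg_restore",
			"--dbname", dbname,
			"--section", section,
			dumpPath)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("pg_restore %s phase: %w\n%s", section, err, out)
		}
	}
	return nil
}

func main() {
	if err := restorePhased("/backups/app.dump", "app"); err != nil {
		fmt.Println(err)
	}
}
```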
222bdbef58 fix: streaming tar verification for large cluster archives (100GB+)
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Successful in 3m14s
- Increase timeout from 60 to 180 minutes for very large archives
- Use streaming pipes instead of buffering entire tar listing
- Only mark as corrupted for clear corruption signals (unexpected EOF, invalid gzip)
- Prevents false CORRUPTED errors on valid large archives
2026-01-13 14:40:18 +01:00
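Streaming verification can be sketched with a pipe and a line scanner instead of buffering the whole listing (illustrative helper name):

```go
package main

import (
	"bufio"
	"fmt"
	"os/exec"
)

// verifyTarStreaming lists a .tar.gz without holding the full listing
// in memory: entries are consumed line by line from a pipe.
func verifyTarStreaming(archive string) (int, error) {
	cmd := exec.Command("tar", "-tzf", archive)
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return 0, err
	}
	if err := cmd.Start(); err != nil {
		return 0, err
	}
	count := 0
	sc := bufio.NewScanner(stdout)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // tolerate long paths
	for sc.Scan() {
		count++ // only the fact that listing succeeds matters here
	}
	if err := cmd.Wait(); err != nil {
		return count, fmt.Errorf("tar listing failed: %w", err)
	}
	return count, sc.Err()
}

func main() {
	n, err := verifyTarStreaming("/backups/cluster.tar.gz")
	fmt.Println(n, err)
}
```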
f7e9fa64f0 docs: add Large Database Support (600+ GB) section to PITR guide
All checks were successful
CI/CD / Test (push) Successful in 1m13s
CI/CD / Lint (push) Successful in 1m22s
CI/CD / Build & Release (push) Has been skipped
2026-01-13 10:02:35 +01:00
35 changed files with 3569 additions and 481 deletions

View File

@@ -5,6 +5,133 @@ All notable changes to dbbackup will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [3.42.35] - 2026-01-15 "TUI Detailed Progress"
### Added - Enhanced TUI Progress Display
- **Detailed progress bar in TUI restore** - schollz-style progress bar with:
  - Byte progress display (e.g., `245 MB / 1.2 GB`)
  - Transfer speed calculation (e.g., `45 MB/s`)
  - ETA prediction for long operations
  - Unicode block-based visual bar
- **Real-time extraction progress** - Archive extraction now reports actual bytes processed
- **Go-native tar extraction** - Uses Go's `archive/tar` + `compress/gzip` when progress callback is set
- **New `DetailedProgress` component** in TUI package:
  - `NewDetailedProgress(total, description)` - Byte-based progress
  - `NewDetailedProgressItems(total, description)` - Item count progress
  - `NewDetailedProgressSpinner(description)` - Indeterminate spinner
  - `RenderProgressBar(width)` - Generate schollz-style output
- **Progress callback API** in restore engine:
  - `SetProgressCallback(func(current, total int64, description string))`
  - Allows TUI to receive real-time progress updates from restore operations
- **Shared progress state** pattern for Bubble Tea integration
### Changed
- TUI restore execution now shows detailed byte progress during archive extraction
- Cluster restore shows extraction progress instead of just spinner
- Falls back to shell `tar` command when no progress callback is set (faster)
### Technical Details
- `progressReader` wrapper tracks bytes read through gzip/tar pipeline
- Throttled progress updates (every 100ms) to avoid UI flooding
- Thread-safe shared state pattern for cross-goroutine progress updates
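A sketch of what a throttled `progressReader` of this shape might look like (assumed field names; the 100ms throttle matches the description above):

```go
package main

import (
	"fmt"
	"io"
	"strings"
	"sync/atomic"
	"time"
)

// progressReader counts bytes flowing through a gzip/tar pipeline and
// invokes a callback, throttled so the UI is not flooded.
type progressReader struct {
	r        io.Reader
	total    int64
	read     atomic.Int64
	lastTick atomic.Int64 // unix nanos of the last callback
	onTick   func(current, total int64)
}

func (p *progressReader) Read(b []byte) (int, error) {
	n, err := p.r.Read(b)
	cur := p.read.Add(int64(n))
	now := time.Now().UnixNano()
	last := p.lastTick.Load()
	// At most one callback per 100 ms.
	if now-last >= int64(100*time.Millisecond) && p.lastTick.CompareAndSwap(last, now) {
		p.onTick(cur, p.total)
	}
	return n, err
}

func main() {
	src := &progressReader{
		r:      strings.NewReader(strings.Repeat("x", 1<<20)),
		total:  1 << 20,
		onTick: func(cur, total int64) { fmt.Printf("\r%d/%d bytes", cur, total) },
	}
	_, _ = io.Copy(io.Discard, src)
	fmt.Println()
}
```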
## [3.42.34] - 2026-01-14 "Filesystem Abstraction"
### Added - spf13/afero for Filesystem Abstraction
- **New `internal/fs` package** for testable filesystem operations
- **In-memory filesystem** for unit testing without disk I/O
- **Global FS interface** that can be swapped for testing:
```go
fs.SetFS(afero.NewMemMapFs()) // Use memory
fs.ResetFS() // Back to real disk
```
- **Wrapper functions** for all common file operations:
  - `ReadFile`, `WriteFile`, `Create`, `Open`, `Remove`, `RemoveAll`
  - `Mkdir`, `MkdirAll`, `ReadDir`, `Walk`, `Glob`
  - `Exists`, `DirExists`, `IsDir`, `IsEmpty`
  - `TempDir`, `TempFile`, `CopyFile`, `FileSize`
- **Testing helpers**:
  - `WithMemFs(fn)` - Execute function with temp in-memory FS
  - `SetupTestDir(files)` - Create test directory structure
- **Comprehensive test suite** demonstrating usage
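The swappable-FS pattern presumably looks something like this (a sketch, not the actual `internal/fs` source):

```go
package fs

import (
	"os"

	"github.com/spf13/afero"
)

// AppFs is the process-wide filesystem: production code uses the real
// OS filesystem, tests swap in an in-memory one.
var AppFs afero.Fs = afero.NewOsFs()

// SetFS swaps the global filesystem (e.g. afero.NewMemMapFs() in tests).
func SetFS(f afero.Fs) { AppFs = f }

// ResetFS restores the real on-disk filesystem.
func ResetFS() { AppFs = afero.NewOsFs() }

// Wrappers delegate to whichever Fs is currently active.
func ReadFile(name string) ([]byte, error) { return afero.ReadFile(AppFs, name) }

func WriteFile(name string, data []byte, perm os.FileMode) error {
	return afero.WriteFile(AppFs, name, data, perm)
}
```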
### Changed
- Upgraded afero from v1.10.0 to v1.15.0
## [3.42.33] - 2026-01-14 "Exponential Backoff Retry"
### Added - cenkalti/backoff for Cloud Operation Retry
- **Exponential backoff retry** for all cloud operations (S3, Azure, GCS)
- **Retry configurations**:
  - `DefaultRetryConfig()` - 5 retries, 500ms→30s backoff, 5 min max
  - `AggressiveRetryConfig()` - 10 retries, 1s→60s backoff, 15 min max
  - `QuickRetryConfig()` - 3 retries, 100ms→5s backoff, 30s max
- **Smart error classification**:
  - `IsPermanentError()` - Auth/bucket errors (no retry)
  - `IsRetryableError()` - Timeout/network errors (retry)
- **Retry logging** - Each retry attempt is logged with wait duration
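Mapped onto cenkalti/backoff/v4 primitives, a `DefaultRetryConfig()`-style policy might be wired up like this (a sketch; `doUpload` and `isPermanent` are stand-ins):

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"

	"github.com/cenkalti/backoff/v4"
)

func doUpload() error            { return errors.New("connection reset") }
func isPermanent(err error) bool { return false }

func main() {
	// Exponential growth from 500ms toward 30s, with a bounded attempt
	// count and an overall time budget.
	policy := backoff.NewExponentialBackOff()
	policy.InitialInterval = 500 * time.Millisecond
	policy.MaxInterval = 30 * time.Second
	policy.MaxElapsedTime = 5 * time.Minute

	op := func() error {
		err := doUpload()
		if err != nil && isPermanent(err) {
			return backoff.Permanent(err) // auth/bucket errors: stop retrying
		}
		return err // nil on success, retryable otherwise
	}
	notify := func(err error, wait time.Duration) {
		fmt.Printf("retry in %v: %v\n", wait, err)
	}
	b := backoff.WithContext(backoff.WithMaxRetries(policy, 5), context.Background())
	if err := backoff.RetryNotify(op, b, notify); err != nil {
		fmt.Println("giving up:", err)
	}
}
```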
### Changed
- S3 simple upload, multipart upload, download now retry on transient failures
- Azure simple upload, download now retry on transient failures
- GCS upload, download now retry on transient failures
- Large file multipart uploads use `AggressiveRetryConfig()` (more retries)
## [3.42.32] - 2026-01-14 "Cross-Platform Colors"
### Added - fatih/color for Cross-Platform Terminal Colors
- **Windows-compatible colors** - Native Windows console API support
- **Color helper functions** in `logger` package:
  - `Success()`, `Error()`, `Warning()`, `Info()` - Status messages with icons
  - `Header()`, `Dim()`, `Bold()` - Text styling
  - `Green()`, `Red()`, `Yellow()`, `Cyan()` - Colored text
  - `StatusLine()`, `TableRow()` - Formatted output
  - `DisableColors()`, `EnableColors()` - Runtime control
- **Consistent color scheme** across all log levels
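A sketch of helpers in this style (the `Success`/`Fail` bodies here are illustrative, not the logger package's actual code):

```go
package main

import "github.com/fatih/color"

// fatih/color routes output through a Windows-aware writer, so the same
// code colors correctly on cmd.exe, PowerShell, and Unix terminals.
func Success(msg string) { color.Green("[OK]   %s", msg) }
func Fail(msg string)    { color.Red("[FAIL] %s", msg) }

func main() {
	Success("backup completed")
	Fail("upload failed")
	color.NoColor = true // e.g. when stdout is not a TTY
	Success("this prints uncolored")
}
```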
### Changed
- Logger `CleanFormatter` now uses fatih/color instead of raw ANSI codes
- All progress indicators use fatih/color for `[OK]`/`[FAIL]` status
- Automatic color detection (disabled for non-TTY)
## [3.42.31] - 2026-01-14 "Visual Progress Bars"
### Added - schollz/progressbar for Enhanced Progress Display
- **Visual progress bars** for cloud uploads/downloads with:
  - Byte transfer display (e.g., `245 MB / 1.2 GB`)
  - Transfer speed (e.g., `45 MB/s`)
  - ETA prediction
  - Color-coded progress with Unicode blocks
- **Checksum verification progress** - visual progress while calculating SHA-256
- **Spinner for indeterminate operations** - Braille-style spinner when size unknown
- New progress types: `NewSchollzBar()`, `NewSchollzBarItems()`, `NewSchollzSpinner()`
- Progress bar `Writer()` method for io.Copy integration
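Roughly what the schollz-based bar wraps, and how `Writer()`-style io.Copy integration works (a sketch; the option set is illustrative):

```go
package main

import (
	"io"
	"strings"

	"github.com/schollz/progressbar/v3"
)

func main() {
	data := strings.NewReader(strings.Repeat("x", 4<<20))
	// Byte-based bar with speed and ETA, as described above.
	bar := progressbar.NewOptions64(int64(data.Len()),
		progressbar.OptionSetDescription("Uploading backup.tar.gz"),
		progressbar.OptionShowBytes(true), // shows "4.0 MB / 4.0 MB, 45 MB/s"
	)
	// The bar implements io.Writer, so it slots into io.Copy pipelines.
	_, _ = io.Copy(io.MultiWriter(io.Discard, bar), data)
}
```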
### Changed
- Cloud download shows real-time byte progress instead of 10% log messages
- Cloud upload shows visual progress bar instead of debug logs
- Checksum verification shows progress for large files
## [3.42.30] - 2026-01-09 "Better Error Aggregation"
### Added - go-multierror for Cluster Restore Errors
- **Enhanced error reporting** - Now shows ALL database failures, not just a count
- Uses `hashicorp/go-multierror` for proper error aggregation
- Each failed database error is preserved with full context
- Bullet-pointed error output for readability:
```
cluster restore completed with 3 failures:
3 database(s) failed:
• db1: restore failed: max_locks_per_transaction exceeded
• db2: restore failed: connection refused
• db3: failed to create database: permission denied
```
### Changed
- Replaced string slice error collection with proper `*multierror.Error`
- Thread-safe error aggregation with dedicated mutex
- Improved error wrapping with `%w` for error chain preservation
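The aggregation pattern, sketched with the real go-multierror API (the goroutine setup is illustrative):

```go
package main

import (
	"fmt"
	"sync"

	"github.com/hashicorp/go-multierror"
)

func main() {
	var (
		mu     sync.Mutex
		result *multierror.Error
		wg     sync.WaitGroup
	)
	// Parallel per-database restores append into one aggregate error;
	// the mutex makes Append safe across goroutines.
	for _, db := range []string{"db1", "db2", "db3"} {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			if err := fmt.Errorf("%s: restore failed", name); err != nil {
				mu.Lock()
				result = multierror.Append(result, err)
				mu.Unlock()
			}
		}(db)
	}
	wg.Wait()
	// ErrorOrNil returns nil when nothing was appended.
	fmt.Println(result.ErrorOrNil())
}
```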
## [3.42.10] - 2026-01-08 "Code Quality"
### Fixed - Code Quality Issues

PITR.md
View File

@@ -584,6 +584,100 @@ Document your recovery procedure:
9. Create new base backup
```
## Large Database Support (600+ GB)
For databases larger than 600 GB, PITR is the **recommended approach** over full dump/restore.
### Why PITR Works Better for Large DBs
| Approach | 600 GB Database | Recovery Time (RTO) |
|----------|-----------------|---------------------|
| Full pg_dump/restore | Hours to dump, hours to restore | 4-12+ hours |
| PITR (base + WAL) | Incremental WAL only | 30 min - 2 hours |
### Setup for Large Databases
**1. Enable WAL archiving with compression:**
```bash
dbbackup pitr enable --archive-dir /backups/wal_archive --compress
```
**2. Take ONE base backup weekly/monthly (use pg_basebackup):**
```bash
# For 600+ GB, use fast checkpoint to minimize impact
pg_basebackup -D /backups/base_$(date +%Y%m%d).tar.gz \
  -Ft -z -P --checkpoint=fast --wal-method=none
# Duration: 2-6 hours for 600 GB, but only needed weekly/monthly
```
**3. WAL files archive continuously** (~1-5 GB/hour typical), capturing every change.
**4. Recover to any point in time:**
```bash
dbbackup restore pitr \
  --base-backup /backups/base_20260101.tar.gz \
  --wal-archive /backups/wal_archive \
  --target-time "2026-01-13 14:30:00" \
  --target-dir /var/lib/postgresql/16/restored
```
### PostgreSQL Optimizations for 600+ GB
| Setting | Value | Purpose |
|---------|-------|---------|
| `wal_compression = on` | postgresql.conf | 70-80% smaller WAL files |
| `max_wal_size = 4GB` | postgresql.conf | Reduce checkpoint frequency |
| `checkpoint_timeout = 30min` | postgresql.conf | Less frequent checkpoints |
| `archive_timeout = 300` | postgresql.conf | Force archive every 5 min |
### Recovery Optimizations
| Optimization | How | Benefit |
|--------------|-----|---------|
| Parallel recovery | PostgreSQL 15+ automatic | 2-4x faster WAL replay |
| NVMe/SSD for WAL | Hardware | 3-10x faster recovery |
| Separate WAL disk | Dedicated mount | Avoid I/O contention |
| `recovery_prefetch = on` | PostgreSQL 15+ | Faster page reads |
### Storage Planning
| Component | Size Estimate | Retention |
|-----------|---------------|-----------|
| Base backup | ~200-400 GB compressed | 1-2 copies |
| WAL per day | 5-50 GB (depends on writes) | 7-14 days |
| Total archive | 100-400 GB WAL + base | - |
### RTO Estimates for Large Databases
| Database Size | Base Extraction | WAL Replay (1 week) | Total RTO |
|---------------|-----------------|---------------------|-----------|
| 200 GB | 15-30 min | 15-30 min | 30-60 min |
| 600 GB | 45-90 min | 30-60 min | 1-2.5 hours |
| 1 TB | 60-120 min | 45-90 min | 2-3.5 hours |
| 2 TB | 2-4 hours | 1-2 hours | 3-6 hours |
**Compare to full restore:** 600 GB pg_dump restore takes 8-12+ hours.
### Best Practices for 600+ GB
1. **Weekly base backups** - Monthly if storage is tight
2. **Test recovery monthly** - Verify WAL chain integrity
3. **Monitor WAL lag** - Alert if archive falls behind
4. **Use streaming replication** - For HA, combine with PITR for DR
5. **Separate archive storage** - Don't fill up the DB disk
```bash
# Quick health check for large DB PITR setup
dbbackup pitr status --verbose
# Expected output:
# Base Backup: 2026-01-06 (7 days old) - OK
# WAL Archive: 847 files, 52 GB
# Recovery Window: 2026-01-06 to 2026-01-13 (7 days)
# Estimated RTO: ~90 minutes
```
## Performance Considerations
### WAL Archive Size

View File

@@ -56,7 +56,7 @@ Download from [releases](https://git.uuxo.net/UUXO/dbbackup/releases):
```bash
# Linux x86_64
wget https://git.uuxo.net/UUXO/dbbackup/releases/download/v3.42.1/dbbackup-linux-amd64
wget https://git.uuxo.net/UUXO/dbbackup/releases/download/v3.42.35/dbbackup-linux-amd64
chmod +x dbbackup-linux-amd64
sudo mv dbbackup-linux-amd64 /usr/local/bin/dbbackup
```

View File

@@ -3,9 +3,9 @@
This directory contains pre-compiled binaries for the DB Backup Tool across multiple platforms and architectures.
## Build Information
- **Version**: 3.42.10
- **Build Time**: 2026-01-12_14:25:53_UTC
- **Git Commit**: d19c065
- **Version**: 3.42.34
- **Build Time**: 2026-01-14_16:19:00_UTC
- **Git Commit**: 7711a20
## Recent Updates (v1.1.0)
- ✅ Fixed TUI progress display with line-by-line output

go.mod
View File

@@ -5,15 +5,27 @@ go 1.24.0
toolchain go1.24.9
require (
github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2
cloud.google.com/go/storage v1.57.2
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.20.0
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3
github.com/aws/aws-sdk-go-v2 v1.40.0
github.com/aws/aws-sdk-go-v2/config v1.32.2
github.com/aws/aws-sdk-go-v2/credentials v1.19.2
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.12
github.com/aws/aws-sdk-go-v2/service/s3 v1.92.1
github.com/charmbracelet/bubbles v0.21.0
github.com/charmbracelet/bubbletea v1.3.10
github.com/charmbracelet/lipgloss v1.1.0
github.com/dustin/go-humanize v1.0.1
github.com/go-sql-driver/mysql v1.9.3
github.com/jackc/pgx/v5 v5.7.6
github.com/mattn/go-sqlite3 v1.14.32
github.com/shirou/gopsutil/v3 v3.24.5
github.com/sirupsen/logrus v1.9.3
github.com/spf13/cobra v1.10.1
github.com/spf13/pflag v1.0.9
golang.org/x/crypto v0.43.0
google.golang.org/api v0.256.0
)
require (
@@ -24,20 +36,13 @@ require (
cloud.google.com/go/compute/metadata v0.9.0 // indirect
cloud.google.com/go/iam v1.5.2 // indirect
cloud.google.com/go/monitoring v1.24.2 // indirect
cloud.google.com/go/storage v1.57.2 // indirect
filippo.io/edwards25519 v1.1.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.20.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2 // indirect
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3 // indirect
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.29.0 // indirect
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.53.0 // indirect
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.53.0 // indirect
github.com/aws/aws-sdk-go-v2 v1.40.0 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.3 // indirect
github.com/aws/aws-sdk-go-v2/config v1.32.2 // indirect
github.com/aws/aws-sdk-go-v2/credentials v1.19.2 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.14 // indirect
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.12 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.14 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.14 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.4 // indirect
@@ -46,47 +51,58 @@ require (
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.5 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.14 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.14 // indirect
github.com/aws/aws-sdk-go-v2/service/s3 v1.92.1 // indirect
github.com/aws/aws-sdk-go-v2/service/signin v1.0.2 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.30.5 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.10 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.41.2 // indirect
github.com/aws/smithy-go v1.23.2 // indirect
github.com/aymanbagabas/go-osc52/v2 v2.0.1 // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/charmbracelet/colorprofile v0.2.3-0.20250311203215-f60798e515dc // indirect
github.com/charmbracelet/x/ansi v0.10.1 // indirect
github.com/charmbracelet/x/cellbuf v0.0.13-0.20250311204145-2c3ea96c31dd // indirect
github.com/charmbracelet/x/term v0.2.1 // indirect
github.com/cncf/xds/go v0.0.0-20250501225837-2ac532fd4443 // indirect
github.com/creack/pty v1.1.17 // indirect
github.com/envoyproxy/go-control-plane/envoy v1.32.4 // indirect
github.com/envoyproxy/protoc-gen-validate v1.2.1 // indirect
github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f // indirect
github.com/fatih/color v1.18.0 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/go-jose/go-jose/v4 v4.1.2 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-ole/go-ole v1.2.6 // indirect
github.com/google/s2a-go v0.1.9 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.3.7 // indirect
github.com/googleapis/gax-go/v2 v2.15.0 // indirect
github.com/hashicorp/errwrap v1.0.0 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jackc/pgpassfile v1.0.0 // indirect
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
github.com/jackc/puddle/v2 v2.2.2 // indirect
github.com/lucasb-eyer/go-colorful v1.2.0 // indirect
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-localereader v0.0.1 // indirect
github.com/mattn/go-runewidth v0.0.16 // indirect
github.com/mattn/go-sqlite3 v1.14.32 // indirect
github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db // indirect
github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6 // indirect
github.com/muesli/cancelreader v0.2.2 // indirect
github.com/muesli/termenv v0.16.0 // indirect
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 // indirect
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c // indirect
github.com/rivo/uniseg v0.4.7 // indirect
github.com/schollz/progressbar/v3 v3.19.0 // indirect
github.com/spf13/afero v1.15.0 // indirect
github.com/spiffe/go-spiffe/v2 v2.5.0 // indirect
github.com/tklauser/go-sysconf v0.3.12 // indirect
github.com/tklauser/numcpus v0.6.1 // indirect
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e // indirect
github.com/yusufpapurcu/wmi v1.2.4 // indirect
github.com/zeebo/errs v1.4.0 // indirect
go.opentelemetry.io/auto/sdk v1.1.0 // indirect
go.opentelemetry.io/contrib/detectors/gcp v1.36.0 // indirect
@@ -97,14 +113,13 @@ require (
go.opentelemetry.io/otel/sdk v1.37.0 // indirect
go.opentelemetry.io/otel/sdk/metric v1.37.0 // indirect
go.opentelemetry.io/otel/trace v1.37.0 // indirect
golang.org/x/crypto v0.43.0 // indirect
golang.org/x/net v0.46.0 // indirect
golang.org/x/oauth2 v0.33.0 // indirect
golang.org/x/sync v0.18.0 // indirect
golang.org/x/sys v0.38.0 // indirect
golang.org/x/term v0.36.0 // indirect
golang.org/x/text v0.30.0 // indirect
golang.org/x/time v0.14.0 // indirect
google.golang.org/api v0.256.0 // indirect
google.golang.org/genproto v0.0.0-20250603155806-513f23925822 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20250818200422-3122310a409c // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101 // indirect

go.sum
View File

@@ -10,36 +10,44 @@ cloud.google.com/go/compute/metadata v0.9.0 h1:pDUj4QMoPejqq20dK0Pg2N4yG9zIkYGdB
cloud.google.com/go/compute/metadata v0.9.0/go.mod h1:E0bWwX5wTnLPedCKqk3pJmVgCBSM6qQI1yTBdEb3C10=
cloud.google.com/go/iam v1.5.2 h1:qgFRAGEmd8z6dJ/qyEchAuL9jpswyODjA2lS+w234g8=
cloud.google.com/go/iam v1.5.2/go.mod h1:SE1vg0N81zQqLzQEwxL2WI6yhetBdbNQuTvIKCSkUHE=
cloud.google.com/go/logging v1.13.0 h1:7j0HgAp0B94o1YRDqiqm26w4q1rDMH7XNRU34lJXHYc=
cloud.google.com/go/logging v1.13.0/go.mod h1:36CoKh6KA/M0PbhPKMq6/qety2DCAErbhXT62TuXALA=
cloud.google.com/go/longrunning v0.7.0 h1:FV0+SYF1RIj59gyoWDRi45GiYUMM3K1qO51qoboQT1E=
cloud.google.com/go/longrunning v0.7.0/go.mod h1:ySn2yXmjbK9Ba0zsQqunhDkYi0+9rlXIwnoAf+h+TPY=
cloud.google.com/go/monitoring v1.24.2 h1:5OTsoJ1dXYIiMiuL+sYscLc9BumrL3CarVLL7dd7lHM=
cloud.google.com/go/monitoring v1.24.2/go.mod h1:x7yzPWcgDRnPEv3sI+jJGBkwl5qINf+6qY4eq0I9B4U=
cloud.google.com/go/storage v1.57.2 h1:sVlym3cHGYhrp6XZKkKb+92I1V42ks2qKKpB0CF5Mb4=
cloud.google.com/go/storage v1.57.2/go.mod h1:n5ijg4yiRXXpCu0sJTD6k+eMf7GRrJmPyr9YxLXGHOk=
cloud.google.com/go/trace v1.11.6 h1:2O2zjPzqPYAHrn3OKl029qlqG6W8ZdYaOWRyr8NgMT4=
cloud.google.com/go/trace v1.11.6/go.mod h1:GA855OeDEBiBMzcckLPE2kDunIpC72N+Pq8WFieFjnI=
filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.20.0 h1:JXg2dwJUmPB9JmtVmdEB16APJ7jurfbY5jnfXpJoRMc=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.20.0/go.mod h1:YD5h/ldMsG0XiIw7PdyNhLxaM317eFh5yNLccNfGdyw=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.13.0 h1:KpMC6LFL7mqpExyMC9jVOYRiVhLmamjeZfRsUpB7l4s=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.13.0/go.mod h1:J7MUC/wtRpfGVbQ5sIItY5/FuVWmvzlY21WAOfQnq/I=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2 h1:9iefClla7iYpfYWdzPCRDozdmndjTm8DXdpCzPajMgA=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2/go.mod h1:XtLgD3ZD34DAaVIIAyG3objl5DynM3CQ/vMcbBNJZGI=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.1 h1:/Zt+cDPnpC3OVDm/JKLOs7M2DKmLRIIp3XIx9pHHiig=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.1/go.mod h1:Ng3urmn6dYe8gnbCMoHHVl5APYz2txho3koEkV2o2HA=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3 h1:ZJJNFaQ86GVKQ9ehwqyAFE6pIfyicpuJ8IkVaPBc6/4=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3/go.mod h1:URuDvhmATVKqHBH9/0nOiNKk0+YcwfQ3WkK5PqHKxc8=
github.com/AzureAD/microsoft-authentication-library-for-go v1.5.0 h1:XkkQbfMyuH2jTSjQjSoihryI8GINRcs4xp8lNawg0FI=
github.com/AzureAD/microsoft-authentication-library-for-go v1.5.0/go.mod h1:HKpQxkWaGLJ+D/5H8QRpyQXA1eKjxkFlOMwck5+33Jk=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.29.0 h1:UQUsRi8WTzhZntp5313l+CHIAT95ojUI2lpP/ExlZa4=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.29.0/go.mod h1:Cz6ft6Dkn3Et6l2v2a9/RpN7epQ1GtDlO6lj8bEcOvw=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.53.0 h1:owcC2UnmsZycprQ5RfRgjydWhuoxg71LUfyiQdijZuM=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.53.0/go.mod h1:ZPpqegjbE99EPKsu3iUWV22A04wzGPcAY/ziSIQEEgs=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/cloudmock v0.53.0 h1:4LP6hvB4I5ouTbGgWtixJhgED6xdf67twf9PoY96Tbg=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/cloudmock v0.53.0/go.mod h1:jUZ5LYlw40WMd07qxcQJD5M40aUxrfwqQX1g7zxYnrQ=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.53.0 h1:Ron4zCA/yk6U7WOBXhTJcDpsUBG9npumK6xw2auFltQ=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.53.0/go.mod h1:cSgYe11MCNYunTnRXrKiR/tHc0eoKjICUuWpNZoVCOo=
github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2 h1:+vx7roKuyA63nhn5WAunQHLTznkw5W8b1Xc0dNjp83s=
github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2/go.mod h1:HBCaDeC1lPdgDeDbhX8XFpy1jqjK0IBG8W5K+xYqA0w=
github.com/aws/aws-sdk-go-v2 v1.40.0 h1:/WMUA0kjhZExjOQN2z3oLALDREea1A7TobfuiBrKlwc=
github.com/aws/aws-sdk-go-v2 v1.40.0/go.mod h1:c9pm7VwuW0UPxAEYGyTmyurVcNrbF6Rt/wixFqDhcjE=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.3 h1:DHctwEM8P8iTXFxC/QK0MRjwEpWQeM9yzidCRjldUz0=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.3/go.mod h1:xdCzcZEtnSTKVDOmUZs4l/j3pSV6rpo1WXl5ugNsL8Y=
github.com/aws/aws-sdk-go-v2/config v1.32.1 h1:iODUDLgk3q8/flEC7ymhmxjfoAnBDwEEYEVyKZ9mzjU=
github.com/aws/aws-sdk-go-v2/config v1.32.1/go.mod h1:xoAgo17AGrPpJBSLg81W+ikM0cpOZG8ad04T2r+d5P0=
github.com/aws/aws-sdk-go-v2/config v1.32.2 h1:4liUsdEpUUPZs5WVapsJLx5NPmQhQdez7nYFcovrytk=
github.com/aws/aws-sdk-go-v2/config v1.32.2/go.mod h1:l0hs06IFz1eCT+jTacU/qZtC33nvcnLADAPL/XyrkZI=
github.com/aws/aws-sdk-go-v2/credentials v1.19.1 h1:JeW+EwmtTE0yXFK8SmklrFh/cGTTXsQJumgMZNlbxfM=
github.com/aws/aws-sdk-go-v2/credentials v1.19.1/go.mod h1:BOoXiStwTF+fT2XufhO0Efssbi1CNIO/ZXpZu87N0pw=
github.com/aws/aws-sdk-go-v2/credentials v1.19.2 h1:qZry8VUyTK4VIo5aEdUcBjPZHL2v4FyQ3QEOaWcFLu4=
github.com/aws/aws-sdk-go-v2/credentials v1.19.2/go.mod h1:YUqm5a1/kBnoK+/NY5WEiMocZihKSo15/tJdmdXnM5g=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.14 h1:WZVR5DbDgxzA0BJeudId89Kmgy6DIU4ORpxwsVHz0qA=
@@ -62,30 +70,22 @@ github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.14 h1:FIouAnCE
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.14/go.mod h1:UTwDc5COa5+guonQU8qBikJo1ZJ4ln2r1MkF7Dqag1E=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.14 h1:FzQE21lNtUor0Fb7QNgnEyiRCBlolLTX/Z1j65S7teM=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.14/go.mod h1:s1ydyWG9pm3ZwmmYN21HKyG9WzAZhYVW85wMHs5FV6w=
github.com/aws/aws-sdk-go-v2/service/s3 v1.92.0 h1:8FshVvnV2sr9kOSAbOnc/vwVmmAwMjOedKH6JW2ddPM=
github.com/aws/aws-sdk-go-v2/service/s3 v1.92.0/go.mod h1:wYNqY3L02Z3IgRYxOBPH9I1zD9Cjh9hI5QOy/eOjQvw=
github.com/aws/aws-sdk-go-v2/service/s3 v1.92.1 h1:OgQy/+0+Kc3khtqiEOk23xQAglXi3Tj0y5doOxbi5tg=
github.com/aws/aws-sdk-go-v2/service/s3 v1.92.1/go.mod h1:wYNqY3L02Z3IgRYxOBPH9I1zD9Cjh9hI5QOy/eOjQvw=
github.com/aws/aws-sdk-go-v2/service/signin v1.0.1 h1:BDgIUYGEo5TkayOWv/oBLPphWwNm/A91AebUjAu5L5g=
github.com/aws/aws-sdk-go-v2/service/signin v1.0.1/go.mod h1:iS6EPmNeqCsGo+xQmXv0jIMjyYtQfnwg36zl2FwEouk=
github.com/aws/aws-sdk-go-v2/service/signin v1.0.2 h1:MxMBdKTYBjPQChlJhi4qlEueqB1p1KcbTEa7tD5aqPs=
github.com/aws/aws-sdk-go-v2/service/signin v1.0.2/go.mod h1:iS6EPmNeqCsGo+xQmXv0jIMjyYtQfnwg36zl2FwEouk=
github.com/aws/aws-sdk-go-v2/service/sso v1.30.4 h1:U//SlnkE1wOQiIImxzdY5PXat4Wq+8rlfVEw4Y7J8as=
github.com/aws/aws-sdk-go-v2/service/sso v1.30.4/go.mod h1:av+ArJpoYf3pgyrj6tcehSFW+y9/QvAY8kMooR9bZCw=
github.com/aws/aws-sdk-go-v2/service/sso v1.30.5 h1:ksUT5KtgpZd3SAiFJNJ0AFEJVva3gjBmN7eXUZjzUwQ=
github.com/aws/aws-sdk-go-v2/service/sso v1.30.5/go.mod h1:av+ArJpoYf3pgyrj6tcehSFW+y9/QvAY8kMooR9bZCw=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.9 h1:LU8S9W/mPDAU9q0FjCLi0TrCheLMGwzbRpvUMwYspcA=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.9/go.mod h1:/j67Z5XBVDx8nZVp9EuFM9/BS5dvBznbqILGuu73hug=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.10 h1:GtsxyiF3Nd3JahRBJbxLCCdYW9ltGQYrFWg8XdkGDd8=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.10/go.mod h1:/j67Z5XBVDx8nZVp9EuFM9/BS5dvBznbqILGuu73hug=
github.com/aws/aws-sdk-go-v2/service/sts v1.41.1 h1:GdGmKtG+/Krag7VfyOXV17xjTCz0i9NT+JnqLTOI5nA=
github.com/aws/aws-sdk-go-v2/service/sts v1.41.1/go.mod h1:6TxbXoDSgBQ225Qd8Q+MbxUxUh6TtNKwbRt/EPS9xso=
github.com/aws/aws-sdk-go-v2/service/sts v1.41.2 h1:a5UTtD4mHBU3t0o6aHQZFJTNKVfxFWfPX7J0Lr7G+uY=
github.com/aws/aws-sdk-go-v2/service/sts v1.41.2/go.mod h1:6TxbXoDSgBQ225Qd8Q+MbxUxUh6TtNKwbRt/EPS9xso=
github.com/aws/smithy-go v1.23.2 h1:Crv0eatJUQhaManss33hS5r40CG3ZFH+21XSkqMrIUM=
github.com/aws/smithy-go v1.23.2/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
github.com/aymanbagabas/go-osc52/v2 v2.0.1 h1:HwpRHbFMcZLEVr42D4p7XBqjyuxQH5SMiErDT4WkJ2k=
github.com/aymanbagabas/go-osc52/v2 v2.0.1/go.mod h1:uYgXzlJ7ZpABp8OJ+exZzJJhRNQ2ASbcXHWsFqH8hp8=
github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8=
github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/charmbracelet/bubbles v0.21.0 h1:9TdC97SdRVg/1aaXNVWfFH3nnLAwOXr8Fn6u6mfQdFs=
@@ -105,17 +105,24 @@ github.com/charmbracelet/x/term v0.2.1/go.mod h1:oQ4enTYFV7QN4m0i9mzHrViD7TQKvNE
github.com/cncf/xds/go v0.0.0-20250501225837-2ac532fd4443 h1:aQ3y1lwWyqYPiWZThqv1aFbZMiM9vblcSArJRf2Irls=
github.com/cncf/xds/go v0.0.0-20250501225837-2ac532fd4443/go.mod h1:W+zGtBO5Y1IgJhy4+A9GOqVhqLpfZi+vwmdNXUehLA8=
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/creack/pty v1.1.17 h1:QeVUsEDNrLBW4tMgZHvxy18sKtr6VI492kBhUfhDJNI=
github.com/creack/pty v1.1.17/go.mod h1:MOBLtS5ELjhRRrroQr9kyvTxUAFNvYEK993ew/Vr4O4=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/envoyproxy/go-control-plane v0.13.4 h1:zEqyPVyku6IvWCFwux4x9RxkLOMUL+1vC9xUFv5l2/M=
github.com/envoyproxy/go-control-plane v0.13.4/go.mod h1:kDfuBlDVsSj2MjrLEtRWtHlsWIFcGyB2RMO44Dc5GZA=
github.com/envoyproxy/go-control-plane/envoy v1.32.4 h1:jb83lalDRZSpPWW2Z7Mck/8kXZ5CQAFYVjQcdVIr83A=
github.com/envoyproxy/go-control-plane/envoy v1.32.4/go.mod h1:Gzjc5k8JcJswLjAx1Zm+wSYE20UrLtt7JZMWiWQXQEw=
github.com/envoyproxy/go-control-plane/ratelimit v0.1.0 h1:/G9QYbddjL25KvtKTv3an9lx6VBE2cnb8wp1vEGNYGI=
github.com/envoyproxy/go-control-plane/ratelimit v0.1.0/go.mod h1:Wk+tMFAFbCXaJPzVVHnPgRKdUdwW/KdbRt94AzgRee4=
github.com/envoyproxy/protoc-gen-validate v1.2.1 h1:DEo3O99U8j4hBFwbJfrz9VtgcDfUKS7KJ7spH3d86P8=
github.com/envoyproxy/protoc-gen-validate v1.2.1/go.mod h1:d/C80l/jxXLdfEIhX1W2TmLfsJ31lvEjwamM4DxlWXU=
github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f h1:Y/CXytFA4m6baUTXGLOoWe4PQhGxaX0KpnayAqC48p4=
github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f/go.mod h1:vw97MGsxSvLiUE2X8qFplwetxpGLQrlU1Q9AUEIzCaM=
github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM=
github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/go-jose/go-jose/v4 v4.1.2 h1:TK/7NqRQZfgAh+Td8AlsrvtPoUyiHh0LqVvokh+1vHI=
@@ -125,8 +132,19 @@ github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-ole/go-ole v1.2.6 h1:/Fpf6oFPoeFik9ty7siob0G6Ke8QvQEuVcuChpwXzpY=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/go-sql-driver/mysql v1.9.3 h1:U/N249h2WzJ3Ukj8SowVFjdtZKfu9vlLZxjPXV1aweo=
github.com/go-sql-driver/mysql v1.9.3/go.mod h1:qn46aNg1333BRMNU69Lq93t8du/dwxI64Gl8i5p1WMU=
github.com/golang-jwt/jwt/v5 v5.3.0 h1:pv4AsKCKKZuqlgs5sUmn4x8UlGa0kEVt/puTpKx9vvo=
github.com/golang-jwt/jwt/v5 v5.3.0/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/martian/v3 v3.3.3 h1:DIhPTQrbPkgs2yJYdXU/eNACCG5DVQjySNRNlflZ9Fc=
github.com/google/martian/v3 v3.3.3/go.mod h1:iEPrYcgCF7jA9OtScMFQyAlZZ4YXTKEtJ1E6RWzmBA0=
github.com/google/s2a-go v0.1.9 h1:LGD7gtMgezd8a/Xak7mEWL0PjoTQFvpRudN895yqKW0=
github.com/google/s2a-go v0.1.9/go.mod h1:YA0Ei2ZQL3acow2O62kdp9UlnvMmU7kA6Eutn0dXayM=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
@@ -135,6 +153,10 @@ github.com/googleapis/enterprise-certificate-proxy v0.3.7 h1:zrn2Ee/nWmHulBx5sAV
github.com/googleapis/enterprise-certificate-proxy v0.3.7/go.mod h1:MkHOF77EYAE7qfSuSS9PU6g4Nt4e11cnsDUowfwewLA=
github.com/googleapis/gax-go/v2 v2.15.0 h1:SyjDc1mGgZU5LncH8gimWo9lW1DtIfPibOG81vgd/bo=
github.com/googleapis/gax-go/v2 v2.15.0/go.mod h1:zVVkkxAQHa1RQpg9z2AUCMnKhi0Qld9rcmyfL1OZhoc=
github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
@@ -145,8 +167,15 @@ github.com/jackc/pgx/v5 v5.7.6 h1:rWQc5FwZSPX58r1OQmkuaNicxdmExaEz5A2DO2hUuTk=
github.com/jackc/pgx/v5 v5.7.6/go.mod h1:aruU7o91Tc2q2cFp5h4uP3f6ztExVpyVv88Xl/8Vl8M=
github.com/jackc/puddle/v2 v2.2.2 h1:PR8nw+E/1w0GLuRFSmiioY6UooMp6KJv0/61nB7icHo=
github.com/jackc/puddle/v2 v2.2.2/go.mod h1:vriiEXHvEE654aYKXXjOvZM39qJ0q+azkZFrfEOc3H4=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/lucasb-eyer/go-colorful v1.2.0 h1:1nnpGOrhyZZuNyfu1QjKiUICQ74+3FNCN69Aj6K7nkY=
github.com/lucasb-eyer/go-colorful v1.2.0/go.mod h1:R4dSotOR9KMtayYi1e77YzuveK+i7ruzyGqttikkLy0=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 h1:6E+4a0GO5zZEnZ81pIr0yLvtUWk2if982qA3F3QD6H4=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I=
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-localereader v0.0.1 h1:ygSAOl7ZXTx4RdPYinUpg6W99U8jWvWi9Ye2JC/oIi4=
@@ -155,22 +184,35 @@ github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6T
github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/mattn/go-sqlite3 v1.14.32 h1:JD12Ag3oLy1zQA+BNn74xRgaBbdhbNIDYvQUEuuErjs=
github.com/mattn/go-sqlite3 v1.14.32/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db h1:62I3jR2EmQ4l5rM/4FEfDWcRD+abF5XlKShorW5LRoQ=
github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db/go.mod h1:l0dey0ia/Uv7NcFFVbCLtqEBQbrT4OCwCSKTEv6enCw=
github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6 h1:ZK8zHtRHOkbHy6Mmr5D264iyp3TiX5OmNcI5cIARiQI=
github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6/go.mod h1:CJlz5H+gyd6CUWT45Oy4q24RdLyn7Md9Vj2/ldJBSIo=
github.com/muesli/cancelreader v0.2.2 h1:3I4Kt4BQjOR54NavqnDogx/MIoWBFa0StPA8ELUXHmA=
github.com/muesli/cancelreader v0.2.2/go.mod h1:3XuTXfFS2VjM+HTLZY9Ak0l6eUKfijIfMUZ4EgX0QYo=
github.com/muesli/termenv v0.16.0 h1:S5AlUN9dENB57rsbnkPyfdGuWIlkmzJjbFf0Tf5FWUc=
github.com/muesli/termenv v0.16.0/go.mod h1:ZRfOIKPFDYQoDFF4Olj7/QJbW60Ol/kL1pU3VfY/Cnk=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c h1:+mdjkGKdHQG3305AYmdv1U2eRNDiU2ErMBj1gwrq8eQ=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c/go.mod h1:7rwL4CYBLnjLxUqIJNnCWiEdr3bn6IUYi15bNlnbCCU=
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 h1:GFCKgmp0tecUJ0sJuv4pzYCqS9+RGSn52M3FUwPs+uo=
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10/go.mod h1:t/avpk3KcrXxUnYOhZhMXJlSEyie6gQbtLq5NM3loB8=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c h1:ncq/mPwQF4JjgDlrVEn3C11VoGHZN7m8qihwgMEtzYw=
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=
github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/schollz/progressbar/v3 v3.19.0 h1:Ea18xuIRQXLAUidVDox3AbwfUhD0/1IvohyTutOIFoc=
github.com/schollz/progressbar/v3 v3.19.0/go.mod h1:IsO3lpbaGuzh8zIMzgY3+J8l4C8GjO0Y9S69eFvNsec=
github.com/shirou/gopsutil/v3 v3.24.5 h1:i0t8kL+kQTvpAYToeuiVk3TgDeKOFioZO3Ztz/iZ9pI=
github.com/shirou/gopsutil/v3 v3.24.5/go.mod h1:bsoOS1aStSs9ErQ1WWfxllSeS1K5D+U30r2NfcubMVk=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/spf13/afero v1.15.0 h1:b/YBCLWAJdFWJTN9cLhiXXcD7mzKn9Dm86dNnfyQw1I=
github.com/spf13/afero v1.15.0/go.mod h1:NC2ByUVxtQs4b3sIUphxK0NioZnmxgyCrfzeuq8lxMg=
github.com/spf13/cobra v1.10.1 h1:lJeBwCfmrnXthfAupyUTzJ/J4Nc1RsHC/mSRU2dll/s=
github.com/spf13/cobra v1.10.1/go.mod h1:7SmJGaTHFVBY0jW4NXGluQoLvhqFQM+6XSKD+P4XaB0=
github.com/spf13/pflag v1.0.9 h1:9exaQaMOCwffKiiiYk6/BndUBv+iRViNW+4lEMi0PvY=
@@ -179,13 +221,17 @@ github.com/spiffe/go-spiffe/v2 v2.5.0 h1:N2I01KCUkv1FAjZXJMwh95KK1ZIQLYbPfhaxw8W
github.com/spiffe/go-spiffe/v2 v2.5.0/go.mod h1:P+NxobPc6wXhVtINNtFjNWGBTreew1GBUCwT2wPmb7g=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.1 h1:w7B6lhMri9wdJUVmEZPGGhZzrYTPvgJArz7wNPgYKsk=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/tklauser/go-sysconf v0.3.12 h1:0QaGUFOdQaIVdPgfITYzaTegZvdCjmYO52cSFAEVmqU=
github.com/tklauser/go-sysconf v0.3.12/go.mod h1:Ho14jnntGE1fpdOqQEEaiKRpvIavV0hSfmBq8nJbHYI=
github.com/tklauser/numcpus v0.6.1 h1:ng9scYS7az0Bk4OZLvrNXNSAO2Pxr1XXRAPyjhIx+Fk=
github.com/tklauser/numcpus v0.6.1/go.mod h1:1XfjsgE2zo8GVw7POkMbHENHzVg3GzmoZ9fESEdAacY=
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e h1:JVG44RsyaB9T2KIHavMF/ppJZNG9ZpyihvCd0w101no=
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e/go.mod h1:RbqR21r5mrJuqunuUZ/Dhy/avygyECGrLceyNeo4LiM=
github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0=
github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
github.com/zeebo/errs v1.4.0 h1:XNdoD/RRMKP7HD0UhJnIzUy74ISdGGxURlYG8HSWSfM=
github.com/zeebo/errs v1.4.0/go.mod h1:sgbWHsvVuTPHcqJJGQ1WhI5KbWlHYz+2+2C/LSEtCw4=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
@@ -198,6 +244,8 @@ go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 h1:F7Jx+6h
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0/go.mod h1:UHB22Z8QsdRDrnAtX4PntOl36ajSxcdUMt1sF7Y6E7Q=
go.opentelemetry.io/otel v1.37.0 h1:9zhNfelUvx0KBfu/gb+ZgeAfAgtWrfHJZcAqFC228wQ=
go.opentelemetry.io/otel v1.37.0/go.mod h1:ehE/umFRLnuLa/vSccNq9oS1ErUlkkK71gMcN34UG8I=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.36.0 h1:rixTyDGXFxRy1xzhKrotaHy3/KXdPhlWARrCgK+eqUY=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.36.0/go.mod h1:dowW6UsM9MKbJq5JTz2AMVp3/5iW5I/TStsk8S+CfHw=
go.opentelemetry.io/otel/metric v1.37.0 h1:mvwbQS5m0tbmqML4NqK+e3aDiO02vsf/WgbsdpcPoZE=
go.opentelemetry.io/otel/metric v1.37.0/go.mod h1:04wGrZurHYKOc+RKeye86GwKiTb9FKm1WHtO+4EVr2E=
go.opentelemetry.io/otel/sdk v1.37.0 h1:ItB0QUqnjesGRvNcmAcU0LyvkVyGJ2xftD29bWdDvKI=
@@ -206,43 +254,35 @@ go.opentelemetry.io/otel/sdk/metric v1.37.0 h1:90lI228XrB9jCMuSdA0673aubgRobVZFh
go.opentelemetry.io/otel/sdk/metric v1.37.0/go.mod h1:cNen4ZWfiD37l5NhS+Keb5RXVWZWpRE+9WyVCpbo5ps=
go.opentelemetry.io/otel/trace v1.37.0 h1:HLdcFNbRQBE2imdSEgm/kwqmQj1Or1l/7bW6mxVK7z4=
go.opentelemetry.io/otel/trace v1.37.0/go.mod h1:TlgrlQ+PtQO5XFerSPUYG0JSgGyryXewPGyayAWSBS0=
golang.org/x/crypto v0.37.0 h1:kJNSjF/Xp7kU0iB2Z+9viTPMW4EqqsrywMXLJOOsXSE=
golang.org/x/crypto v0.37.0/go.mod h1:vg+k43peMZ0pUMhYmVAWysMK35e6ioLh3wB8ZCAfbVc=
golang.org/x/crypto v0.41.0 h1:WKYxWedPGCTVVl5+WHSSrOBT0O8lx32+zxmHxijgXp4=
golang.org/x/crypto v0.41.0/go.mod h1:pO5AFd7FA68rFak7rOAGVuygIISepHftHnr8dr6+sUc=
golang.org/x/crypto v0.43.0 h1:dduJYIi3A3KOfdGOHX8AVZ/jGiyPa3IbBozJ5kNuE04=
golang.org/x/crypto v0.43.0/go.mod h1:BFbav4mRNlXJL4wNeejLpWxB7wMbc79PdRGhWKncxR0=
golang.org/x/exp v0.0.0-20220909182711-5c715a9e8561 h1:MDc5xs78ZrZr3HMQugiXOAkSZtfTpbJLDr/lwfgO53E=
golang.org/x/exp v0.0.0-20220909182711-5c715a9e8561/go.mod h1:cyybsKvd6eL0RnXn6p/Grxp8F5bW7iYuBgsNCOHpMYE=
golang.org/x/net v0.43.0 h1:lat02VYK2j4aLzMzecihNvTlJNQUq316m2Mr9rnM6YE=
golang.org/x/net v0.43.0/go.mod h1:vhO1fvI4dGsIjh73sWfUVjj3N7CA9WkKJNQm2svM6Jg=
golang.org/x/net v0.46.0 h1:giFlY12I07fugqwPuWJi68oOnpfqFnJIJzaIIm2JVV4=
golang.org/x/net v0.46.0/go.mod h1:Q9BGdFy1y4nkUwiLvT5qtyhAnEHgnQ/zd8PfU6nc210=
golang.org/x/oauth2 v0.33.0 h1:4Q+qn+E5z8gPRJfmRy7C2gGG3T4jIprK6aSYgTXGRpo=
golang.org/x/oauth2 v0.33.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/sync v0.13.0 h1:AauUjRAJ9OSnvULf/ARrrVywoJDy0YS2AwQ98I37610=
golang.org/x/sync v0.13.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw=
golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.36.0 h1:KVRy2GtZBrk1cBYA7MKu5bEZFxQk4NIDV6RLVcC8o0k=
golang.org/x/sys v0.36.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/sys v0.37.0 h1:fdNQudmxPjkdUTPnLn5mdQv7Zwvbvpaxqs831goi9kQ=
golang.org/x/sys v0.37.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/text v0.24.0 h1:dd5Bzh4yt5KYA8f9CJHCP4FB4D51c2c6JvN37xJJkJ0=
golang.org/x/text v0.24.0/go.mod h1:L8rBsPeo2pSS+xqN0d5u2ikmjtmoJbDBT1b7nHvFCdU=
golang.org/x/text v0.28.0 h1:rhazDwis8INMIwQ4tpjLDzUhx6RlXqZNPEM0huQojng=
golang.org/x/text v0.28.0/go.mod h1:U8nCwOR8jO/marOQ0QbDiOngZVEBB7MAiitBuMjXiNU=
golang.org/x/term v0.36.0 h1:zMPR+aF8gfksFprF/Nc/rd1wRS1EI6nDBGyWAvDzx2Q=
golang.org/x/term v0.36.0/go.mod h1:Qu394IJq6V6dCBRgwqshf3mPF85AqzYEzofzRdZkWss=
golang.org/x/text v0.30.0 h1:yznKA/E9zq54KzlzBEAWn1NXSQ8DIp/NYMy88xJjl4k=
golang.org/x/text v0.30.0/go.mod h1:yDdHFIX9t+tORqspjENWgzaCVXgk0yYnYuSZ8UzzBVM=
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk=
gonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E=
google.golang.org/api v0.256.0 h1:u6Khm8+F9sxbCTYNoBHg6/Hwv0N/i+V94MvkOSor6oI=
google.golang.org/api v0.256.0/go.mod h1:KIgPhksXADEKJlnEoRa9qAII4rXcy40vfI8HRqcU964=
google.golang.org/genproto v0.0.0-20250603155806-513f23925822 h1:rHWScKit0gvAPuOnu87KpaYtjK5zBMLcULh7gxkCXu4=

View File

@@ -1242,23 +1242,29 @@ func (e *Engine) uploadToCloud(ctx context.Context, backupFile string, tracker *
 	filename := filepath.Base(backupFile)
 	e.log.Info("Uploading backup to cloud", "file", filename, "size", cloud.FormatSize(info.Size()))
-	// Progress callback
-	var lastPercent int
+	// Create schollz progressbar for visual upload progress
+	bar := progress.NewSchollzBar(info.Size(), fmt.Sprintf("Uploading %s", filename))
+	// Progress callback with schollz progressbar
+	var lastBytes int64
 	progressCallback := func(transferred, total int64) {
-		percent := int(float64(transferred) / float64(total) * 100)
-		if percent != lastPercent && percent%10 == 0 {
-			e.log.Debug("Upload progress", "percent", percent, "transferred", cloud.FormatSize(transferred), "total", cloud.FormatSize(total))
-			lastPercent = percent
-		}
+		delta := transferred - lastBytes
+		if delta > 0 {
+			_ = bar.Add64(delta)
+		}
+		lastBytes = transferred
 	}
 	// Upload to cloud
 	err = backend.Upload(ctx, backupFile, filename, progressCallback)
 	if err != nil {
+		bar.Fail("Upload failed")
 		uploadStep.Fail(fmt.Errorf("cloud upload failed: %w", err))
 		return err
 	}
+	_ = bar.Finish()
 	// Also upload metadata file
 	metaFile := backupFile + ".meta.json"
 	if _, err := os.Stat(metaFile); err == nil {

View File

@@ -68,8 +68,8 @@ func ClassifyError(errorMsg string) *ErrorClassification {
Type: "critical",
Category: "locks",
Message: errorMsg,
Hint: "Lock table exhausted - typically caused by large objects in parallel restore",
Action: "Increase max_locks_per_transaction in postgresql.conf to 512 or higher",
Hint: "Lock table exhausted - typically caused by large objects (BLOBs) during restore",
Action: "Option 1: Increase max_locks_per_transaction to 1024+ in postgresql.conf (requires restart). Option 2: Update dbbackup and retry - phased restore now auto-enabled for BLOB databases",
Severity: 2,
}
case "permission_denied":
@@ -142,8 +142,8 @@ func ClassifyError(errorMsg string) *ErrorClassification {
Type: "critical",
Category: "locks",
Message: errorMsg,
Hint: "Lock table exhausted - typically caused by large objects in parallel restore",
Action: "Increase max_locks_per_transaction in postgresql.conf to 512 or higher",
Hint: "Lock table exhausted - typically caused by large objects (BLOBs) during restore",
Action: "Option 1: Increase max_locks_per_transaction to 1024+ in postgresql.conf (requires restart). Option 2: Update dbbackup and retry - phased restore now auto-enabled for BLOB databases",
Severity: 2,
}
}

View File

@@ -151,8 +151,14 @@ func (a *AzureBackend) Upload(ctx context.Context, localPath, remotePath string,
return a.uploadSimple(ctx, file, blobName, fileSize, progress)
}
// uploadSimple uploads a file using simple upload (single request)
// uploadSimple uploads a file using simple upload (single request) with retry
func (a *AzureBackend) uploadSimple(ctx context.Context, file *os.File, blobName string, fileSize int64, progress ProgressCallback) error {
return RetryOperationWithNotify(ctx, DefaultRetryConfig(), func() error {
// Reset file position for retry
if _, err := file.Seek(0, 0); err != nil {
return fmt.Errorf("failed to reset file position: %w", err)
}
blockBlobClient := a.client.ServiceClient().NewContainerClient(a.containerName).NewBlockBlobClient(blobName)
// Wrap reader with progress tracking
@@ -182,6 +188,9 @@ func (a *AzureBackend) uploadSimple(ctx context.Context, file *os.File, blobName
}
return nil
}, func(err error, duration time.Duration) {
fmt.Printf("[Azure] Upload retry in %v: %v\n", duration, err)
})
}
// uploadBlocks uploads a file using block blob staging (for large files)
@@ -251,7 +260,7 @@ func (a *AzureBackend) uploadBlocks(ctx context.Context, file *os.File, blobName
return nil
}
// Download downloads a file from Azure Blob Storage
// Download downloads a file from Azure Blob Storage with retry
func (a *AzureBackend) Download(ctx context.Context, remotePath, localPath string, progress ProgressCallback) error {
blobName := strings.TrimPrefix(remotePath, "/")
blockBlobClient := a.client.ServiceClient().NewContainerClient(a.containerName).NewBlockBlobClient(blobName)
@@ -264,6 +273,7 @@ func (a *AzureBackend) Download(ctx context.Context, remotePath, localPath strin
fileSize := *props.ContentLength
return RetryOperationWithNotify(ctx, DefaultRetryConfig(), func() error {
// Download blob
resp, err := blockBlobClient.DownloadStream(ctx, nil)
if err != nil {
@@ -271,7 +281,7 @@ func (a *AzureBackend) Download(ctx context.Context, remotePath, localPath strin
}
defer resp.Body.Close()
// Create local file
// Create/truncate local file
file, err := os.Create(localPath)
if err != nil {
return fmt.Errorf("failed to create file: %w", err)
@@ -288,6 +298,9 @@ func (a *AzureBackend) Download(ctx context.Context, remotePath, localPath strin
}
return nil
}, func(err error, duration time.Duration) {
fmt.Printf("[Azure] Download retry in %v: %v\n", duration, err)
})
}
// Delete deletes a file from Azure Blob Storage

View File

@@ -89,7 +89,7 @@ func (g *GCSBackend) Name() string {
return "gcs"
}
// Upload uploads a file to Google Cloud Storage
// Upload uploads a file to Google Cloud Storage with retry
func (g *GCSBackend) Upload(ctx context.Context, localPath, remotePath string, progress ProgressCallback) error {
file, err := os.Open(localPath)
if err != nil {
@@ -106,6 +106,12 @@ func (g *GCSBackend) Upload(ctx context.Context, localPath, remotePath string, p
// Remove leading slash from remote path
objectName := strings.TrimPrefix(remotePath, "/")
return RetryOperationWithNotify(ctx, DefaultRetryConfig(), func() error {
// Reset file position for retry
if _, err := file.Seek(0, 0); err != nil {
return fmt.Errorf("failed to reset file position: %w", err)
}
bucket := g.client.Bucket(g.bucketName)
object := bucket.Object(objectName)
@@ -142,9 +148,12 @@ func (g *GCSBackend) Upload(ctx context.Context, localPath, remotePath string, p
}
return nil
}, func(err error, duration time.Duration) {
fmt.Printf("[GCS] Upload retry in %v: %v\n", duration, err)
})
}
// Download downloads a file from Google Cloud Storage
// Download downloads a file from Google Cloud Storage with retry
func (g *GCSBackend) Download(ctx context.Context, remotePath, localPath string, progress ProgressCallback) error {
objectName := strings.TrimPrefix(remotePath, "/")
@@ -159,6 +168,7 @@ func (g *GCSBackend) Download(ctx context.Context, remotePath, localPath string,
fileSize := attrs.Size
return RetryOperationWithNotify(ctx, DefaultRetryConfig(), func() error {
// Create reader
reader, err := object.NewReader(ctx)
if err != nil {
@@ -166,7 +176,7 @@ func (g *GCSBackend) Download(ctx context.Context, remotePath, localPath string,
}
defer reader.Close()
// Create local file
// Create/truncate local file
file, err := os.Create(localPath)
if err != nil {
return fmt.Errorf("failed to create file: %w", err)
@@ -183,6 +193,9 @@ func (g *GCSBackend) Download(ctx context.Context, remotePath, localPath string,
}
return nil
}, func(err error, duration time.Duration) {
fmt.Printf("[GCS] Download retry in %v: %v\n", duration, err)
})
}
// Delete deletes a file from Google Cloud Storage

internal/cloud/retry.go Normal file
View File

@@ -0,0 +1,257 @@
package cloud
import (
"context"
"errors"
"fmt"
"net"
"strings"
"time"
"github.com/cenkalti/backoff/v4"
)
// RetryConfig configures retry behavior
type RetryConfig struct {
MaxRetries int // Maximum number of retries (0 = unlimited)
InitialInterval time.Duration // Initial backoff interval
MaxInterval time.Duration // Maximum backoff interval
MaxElapsedTime time.Duration // Maximum total time for retries
Multiplier float64 // Backoff multiplier
}
// DefaultRetryConfig returns sensible defaults for cloud operations
func DefaultRetryConfig() *RetryConfig {
return &RetryConfig{
MaxRetries: 5,
InitialInterval: 500 * time.Millisecond,
MaxInterval: 30 * time.Second,
MaxElapsedTime: 5 * time.Minute,
Multiplier: 2.0,
}
}
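// As a worked example: with DefaultRetryConfig the nominal waits are
// 0.5s, 1s, 2s, 4s and 8s (multiplier 2.0), i.e. roughly 15.5s of backoff
// across the five retries. backoff/v4 also applies its default
// RandomizationFactor of 0.5, so each actual wait is jittered within
// about ±50% of the nominal value.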
// AggressiveRetryConfig returns config for critical operations that need more retries
func AggressiveRetryConfig() *RetryConfig {
return &RetryConfig{
MaxRetries: 10,
InitialInterval: 1 * time.Second,
MaxInterval: 60 * time.Second,
MaxElapsedTime: 15 * time.Minute,
Multiplier: 1.5,
}
}
// QuickRetryConfig returns config for operations that should fail fast
func QuickRetryConfig() *RetryConfig {
return &RetryConfig{
MaxRetries: 3,
InitialInterval: 100 * time.Millisecond,
MaxInterval: 5 * time.Second,
MaxElapsedTime: 30 * time.Second,
Multiplier: 2.0,
}
}
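// Example (illustrative, not part of this file): choosing a config per
// operation; deleteOnce is a hypothetical flaky call.
//
//	err := RetryOperation(ctx, QuickRetryConfig(), func() error {
//	    return deleteOnce(ctx)
//	})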
// RetryOperation executes an operation with exponential backoff retry
func RetryOperation(ctx context.Context, cfg *RetryConfig, operation func() error) error {
if cfg == nil {
cfg = DefaultRetryConfig()
}
// Create exponential backoff
expBackoff := backoff.NewExponentialBackOff()
expBackoff.InitialInterval = cfg.InitialInterval
expBackoff.MaxInterval = cfg.MaxInterval
expBackoff.MaxElapsedTime = cfg.MaxElapsedTime
expBackoff.Multiplier = cfg.Multiplier
expBackoff.Reset()
// Wrap with max retries if specified
var b backoff.BackOff = expBackoff
if cfg.MaxRetries > 0 {
b = backoff.WithMaxRetries(expBackoff, uint64(cfg.MaxRetries))
}
// Add context support
b = backoff.WithContext(b, ctx)
// Wrap operation to handle permanent vs retryable errors
wrappedOp := func() error {
err := operation()
if err == nil {
return nil
}
// Check if error is permanent (should not retry)
if IsPermanentError(err) {
return backoff.Permanent(err)
}
return err
}
return backoff.Retry(wrappedOp, b)
}
// RetryOperationWithNotify executes an operation with retry and calls notify on each retry
func RetryOperationWithNotify(ctx context.Context, cfg *RetryConfig, operation func() error, notify func(err error, duration time.Duration)) error {
if cfg == nil {
cfg = DefaultRetryConfig()
}
// Create exponential backoff
expBackoff := backoff.NewExponentialBackOff()
expBackoff.InitialInterval = cfg.InitialInterval
expBackoff.MaxInterval = cfg.MaxInterval
expBackoff.MaxElapsedTime = cfg.MaxElapsedTime
expBackoff.Multiplier = cfg.Multiplier
expBackoff.Reset()
// Wrap with max retries if specified
var b backoff.BackOff = expBackoff
if cfg.MaxRetries > 0 {
b = backoff.WithMaxRetries(expBackoff, uint64(cfg.MaxRetries))
}
// Add context support
b = backoff.WithContext(b, ctx)
// Wrap operation to handle permanent vs retryable errors
wrappedOp := func() error {
err := operation()
if err == nil {
return nil
}
// Check if error is permanent (should not retry)
if IsPermanentError(err) {
return backoff.Permanent(err)
}
return err
}
return backoff.RetryNotify(wrappedOp, b, notify)
}
// IsPermanentError returns true if the error should not be retried
func IsPermanentError(err error) bool {
if err == nil {
return false
}
errStr := strings.ToLower(err.Error())
// Authentication/authorization errors - don't retry
permanentPatterns := []string{
"access denied",
"forbidden",
"unauthorized",
"invalid credentials",
"invalid access key",
"invalid secret",
"no such bucket",
"bucket not found",
"container not found",
"nosuchbucket",
"nosuchkey",
"invalid argument",
"malformed",
"invalid request",
"permission denied",
"access control",
"policy",
}
for _, pattern := range permanentPatterns {
if strings.Contains(errStr, pattern) {
return true
}
}
return false
}
// IsRetryableError returns true if the error is transient and should be retried
func IsRetryableError(err error) bool {
if err == nil {
return false
}
// Network errors are typically retryable
var netErr net.Error
if isNetError(err, &netErr) {
// Temporary is deprecated upstream but is still set by legacy net errors
return netErr.Timeout() || netErr.Temporary()
}
errStr := strings.ToLower(err.Error())
// Transient errors - should retry
retryablePatterns := []string{
"timeout",
"connection reset",
"connection refused",
"connection closed",
"eof",
"broken pipe",
"temporary failure",
"service unavailable",
"internal server error",
"bad gateway",
"gateway timeout",
"too many requests",
"rate limit",
"throttl",
"slowdown",
"try again",
"retry",
}
for _, pattern := range retryablePatterns {
if strings.Contains(errStr, pattern) {
return true
}
}
return false
}
// isNetError checks if err wraps a net.Error
func isNetError(err error, target *net.Error) bool {
// errors.As walks the full Unwrap chain (including joined errors),
// which a manual unwrap loop would miss.
return errors.As(err, target)
}
// WithRetry is a helper that wraps a function with default retry logic
func WithRetry(ctx context.Context, operationName string, fn func() error) error {
notify := func(err error, duration time.Duration) {
// Log retry attempts (caller can provide their own logger if needed)
fmt.Printf("[RETRY] %s failed, retrying in %v: %v\n", operationName, duration, err)
}
return RetryOperationWithNotify(ctx, DefaultRetryConfig(), fn, notify)
}
// WithRetryConfig is a helper that wraps a function with custom retry config
func WithRetryConfig(ctx context.Context, cfg *RetryConfig, operationName string, fn func() error) error {
notify := func(err error, duration time.Duration) {
fmt.Printf("[RETRY] %s failed, retrying in %v: %v\n", operationName, duration, err)
}
return RetryOperationWithNotify(ctx, cfg, fn, notify)
}
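// Example (illustrative): external callers typically use the helper instead
// of building a config by hand; checkBucket is hypothetical.
//
//	err := cloud.WithRetry(ctx, "verify bucket", func() error {
//	    return checkBucket(ctx)
//	})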

View File

@@ -7,6 +7,7 @@ import (
"os"
"path/filepath"
"strings"
"time"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/config"
@@ -123,8 +124,14 @@ func (s *S3Backend) Upload(ctx context.Context, localPath, remotePath string, pr
return s.uploadSimple(ctx, file, key, fileSize, progress)
}
// uploadSimple performs a simple single-part upload
// uploadSimple performs a simple single-part upload with retry
func (s *S3Backend) uploadSimple(ctx context.Context, file *os.File, key string, fileSize int64, progress ProgressCallback) error {
return RetryOperationWithNotify(ctx, DefaultRetryConfig(), func() error {
// Reset file position for retry
if _, err := file.Seek(0, 0); err != nil {
return fmt.Errorf("failed to reset file position: %w", err)
}
// Create progress reader
var reader io.Reader = file
if progress != nil {
@@ -143,10 +150,19 @@ func (s *S3Backend) uploadSimple(ctx context.Context, file *os.File, key string,
}
return nil
}, func(err error, duration time.Duration) {
fmt.Printf("[S3] Upload retry in %v: %v\n", duration, err)
})
}
// uploadMultipart performs a multipart upload for large files
// uploadMultipart performs a multipart upload for large files with retry
func (s *S3Backend) uploadMultipart(ctx context.Context, file *os.File, key string, fileSize int64, progress ProgressCallback) error {
return RetryOperationWithNotify(ctx, AggressiveRetryConfig(), func() error {
// Reset file position for retry
if _, err := file.Seek(0, 0); err != nil {
return fmt.Errorf("failed to reset file position: %w", err)
}
// Create uploader with custom options
uploader := manager.NewUploader(s.client, func(u *manager.Uploader) {
// Part size: 10MB
@@ -177,9 +193,12 @@ func (s *S3Backend) uploadMultipart(ctx context.Context, file *os.File, key stri
}
return nil
}, func(err error, duration time.Duration) {
fmt.Printf("[S3] Multipart upload retry in %v: %v\n", duration, err)
})
}
// Download downloads a file from S3
// Download downloads a file from S3 with retry
func (s *S3Backend) Download(ctx context.Context, remotePath, localPath string, progress ProgressCallback) error {
// Build S3 key
key := s.buildKey(remotePath)
@@ -190,6 +209,12 @@ func (s *S3Backend) Download(ctx context.Context, remotePath, localPath string,
return fmt.Errorf("failed to get object size: %w", err)
}
// Create directory for local file
if err := os.MkdirAll(filepath.Dir(localPath), 0755); err != nil {
return fmt.Errorf("failed to create directory: %w", err)
}
return RetryOperationWithNotify(ctx, DefaultRetryConfig(), func() error {
// Download from S3
result, err := s.client.GetObject(ctx, &s3.GetObjectInput{
Bucket: aws.String(s.bucket),
@@ -200,11 +225,7 @@ func (s *S3Backend) Download(ctx context.Context, remotePath, localPath string,
}
defer result.Body.Close()
// Create local file
if err := os.MkdirAll(filepath.Dir(localPath), 0755); err != nil {
return fmt.Errorf("failed to create directory: %w", err)
}
// Create/truncate local file
outFile, err := os.Create(localPath)
if err != nil {
return fmt.Errorf("failed to create local file: %w", err)
@@ -223,6 +244,9 @@ func (s *S3Backend) Download(ctx context.Context, remotePath, localPath string,
}
return nil
}, func(err error, duration time.Duration) {
fmt.Printf("[S3] Download retry in %v: %v\n", duration, err)
})
}
// List lists all backup files in S3

internal/fs/fs.go Normal file
View File

@@ -0,0 +1,223 @@
// Package fs provides filesystem abstraction using spf13/afero for testability.
// It allows swapping the real filesystem with an in-memory mock for unit tests.
package fs
import (
"io"
"os"
"path/filepath"
"time"
"github.com/spf13/afero"
)
// FS is the global filesystem interface used throughout the application.
// By default, it uses the real OS filesystem.
// For testing, use SetFS(afero.NewMemMapFs()) to use an in-memory filesystem.
var FS afero.Fs = afero.NewOsFs()
// SetFS sets the global filesystem (useful for testing)
func SetFS(fs afero.Fs) {
FS = fs
}
// ResetFS resets to the real OS filesystem
func ResetFS() {
FS = afero.NewOsFs()
}
// NewMemMapFs creates a new in-memory filesystem for testing
func NewMemMapFs() afero.Fs {
return afero.NewMemMapFs()
}
// NewReadOnlyFs wraps a filesystem to make it read-only
func NewReadOnlyFs(base afero.Fs) afero.Fs {
return afero.NewReadOnlyFs(base)
}
// NewBasePathFs creates a filesystem rooted at a specific path
func NewBasePathFs(base afero.Fs, path string) afero.Fs {
return afero.NewBasePathFs(base, path)
}
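// Example (illustrative): sandboxing all writes of a test run under a single
// directory by combining SetFS with a base-path filesystem (tmpDir is a
// hypothetical test temp directory).
//
//	SetFS(NewBasePathFs(afero.NewOsFs(), tmpDir))
//	defer ResetFS()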
// --- File Operations (use global FS) ---
// Create creates a file
func Create(name string) (afero.File, error) {
return FS.Create(name)
}
// Open opens a file for reading
func Open(name string) (afero.File, error) {
return FS.Open(name)
}
// OpenFile opens a file with specified flags and permissions
func OpenFile(name string, flag int, perm os.FileMode) (afero.File, error) {
return FS.OpenFile(name, flag, perm)
}
// Remove removes a file or empty directory
func Remove(name string) error {
return FS.Remove(name)
}
// RemoveAll removes a path and any children it contains
func RemoveAll(path string) error {
return FS.RemoveAll(path)
}
// Rename renames (moves) a file
func Rename(oldname, newname string) error {
return FS.Rename(oldname, newname)
}
// Stat returns file info
func Stat(name string) (os.FileInfo, error) {
return FS.Stat(name)
}
// Chmod changes file mode
func Chmod(name string, mode os.FileMode) error {
return FS.Chmod(name, mode)
}
// Chown changes file ownership (may not work on all filesystems)
func Chown(name string, uid, gid int) error {
return FS.Chown(name, uid, gid)
}
// Chtimes changes file access and modification times
func Chtimes(name string, atime, mtime time.Time) error {
return FS.Chtimes(name, atime, mtime)
}
// --- Directory Operations ---
// Mkdir creates a directory
func Mkdir(name string, perm os.FileMode) error {
return FS.Mkdir(name, perm)
}
// MkdirAll creates a directory and all parents
func MkdirAll(path string, perm os.FileMode) error {
return FS.MkdirAll(path, perm)
}
// ReadDir reads a directory
func ReadDir(dirname string) ([]os.FileInfo, error) {
return afero.ReadDir(FS, dirname)
}
// --- File Content Operations ---
// ReadFile reads an entire file
func ReadFile(filename string) ([]byte, error) {
return afero.ReadFile(FS, filename)
}
// WriteFile writes data to a file
func WriteFile(filename string, data []byte, perm os.FileMode) error {
return afero.WriteFile(FS, filename, data, perm)
}
// --- Existence Checks ---
// Exists checks if a file or directory exists
func Exists(path string) (bool, error) {
return afero.Exists(FS, path)
}
// DirExists checks if a directory exists
func DirExists(path string) (bool, error) {
return afero.DirExists(FS, path)
}
// IsDir checks if path is a directory
func IsDir(path string) (bool, error) {
return afero.IsDir(FS, path)
}
// IsEmpty checks if a directory is empty
func IsEmpty(path string) (bool, error) {
return afero.IsEmpty(FS, path)
}
// --- Utility Functions ---
// Walk walks a directory tree
func Walk(root string, walkFn filepath.WalkFunc) error {
return afero.Walk(FS, root, walkFn)
}
// Glob returns the names of all files matching pattern
func Glob(pattern string) ([]string, error) {
return afero.Glob(FS, pattern)
}
// TempDir creates a temporary directory
func TempDir(dir, prefix string) (string, error) {
return afero.TempDir(FS, dir, prefix)
}
// TempFile creates a temporary file
func TempFile(dir, pattern string) (afero.File, error) {
return afero.TempFile(FS, dir, pattern)
}
// CopyFile copies a file from src to dst
func CopyFile(src, dst string) error {
srcFile, err := FS.Open(src)
if err != nil {
return err
}
defer srcFile.Close()
srcInfo, err := srcFile.Stat()
if err != nil {
return err
}
dstFile, err := FS.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, srcInfo.Mode())
if err != nil {
return err
}
defer dstFile.Close()
_, err = io.Copy(dstFile, srcFile)
return err
}
// FileSize returns the size of a file
func FileSize(path string) (int64, error) {
info, err := FS.Stat(path)
if err != nil {
return 0, err
}
return info.Size(), nil
}
// --- Testing Helpers ---
// WithMemFs executes a function with an in-memory filesystem, then restores the original
func WithMemFs(fn func(fs afero.Fs)) {
original := FS
memFs := afero.NewMemMapFs()
FS = memFs
defer func() { FS = original }()
fn(memFs)
}
// SetupTestDir creates a test directory structure in-memory
func SetupTestDir(files map[string]string) afero.Fs {
memFs := afero.NewMemMapFs()
for path, content := range files {
dir := filepath.Dir(path)
if dir != "." && dir != "/" {
_ = memFs.MkdirAll(dir, 0755)
}
_ = afero.WriteFile(memFs, path, []byte(content), 0644)
}
return memFs
}

internal/fs/fs_test.go Normal file
View File

@@ -0,0 +1,191 @@
package fs
import (
"os"
"testing"
"github.com/spf13/afero"
)
func TestMemMapFs(t *testing.T) {
// Use in-memory filesystem for testing
WithMemFs(func(memFs afero.Fs) {
// Create a file
err := WriteFile("/test/file.txt", []byte("hello world"), 0644)
if err != nil {
t.Fatalf("WriteFile failed: %v", err)
}
// Read it back
content, err := ReadFile("/test/file.txt")
if err != nil {
t.Fatalf("ReadFile failed: %v", err)
}
if string(content) != "hello world" {
t.Errorf("expected 'hello world', got '%s'", string(content))
}
// Check existence
exists, err := Exists("/test/file.txt")
if err != nil {
t.Fatalf("Exists failed: %v", err)
}
if !exists {
t.Error("file should exist")
}
// Check non-existent file
exists, err = Exists("/nonexistent.txt")
if err != nil {
t.Fatalf("Exists failed: %v", err)
}
if exists {
t.Error("file should not exist")
}
})
}
func TestSetupTestDir(t *testing.T) {
// Create test directory structure
testFs := SetupTestDir(map[string]string{
"/backups/db1.dump": "database 1 content",
"/backups/db2.dump": "database 2 content",
"/config/settings.json": `{"key": "value"}`,
})
// Verify files exist
content, err := afero.ReadFile(testFs, "/backups/db1.dump")
if err != nil {
t.Fatalf("ReadFile failed: %v", err)
}
if string(content) != "database 1 content" {
t.Errorf("unexpected content: %s", string(content))
}
// Verify directory structure
files, err := afero.ReadDir(testFs, "/backups")
if err != nil {
t.Fatalf("ReadDir failed: %v", err)
}
if len(files) != 2 {
t.Errorf("expected 2 files, got %d", len(files))
}
}
func TestCopyFile(t *testing.T) {
WithMemFs(func(memFs afero.Fs) {
// Create source file
err := WriteFile("/source.txt", []byte("copy me"), 0644)
if err != nil {
t.Fatalf("WriteFile failed: %v", err)
}
// Copy file
err = CopyFile("/source.txt", "/dest.txt")
if err != nil {
t.Fatalf("CopyFile failed: %v", err)
}
// Verify copy
content, err := ReadFile("/dest.txt")
if err != nil {
t.Fatalf("ReadFile failed: %v", err)
}
if string(content) != "copy me" {
t.Errorf("unexpected content: %s", string(content))
}
})
}
func TestFileSize(t *testing.T) {
WithMemFs(func(memFs afero.Fs) {
data := []byte("12345678901234567890") // 20 bytes
err := WriteFile("/sized.txt", data, 0644)
if err != nil {
t.Fatalf("WriteFile failed: %v", err)
}
size, err := FileSize("/sized.txt")
if err != nil {
t.Fatalf("FileSize failed: %v", err)
}
if size != 20 {
t.Errorf("expected size 20, got %d", size)
}
})
}
func TestTempDir(t *testing.T) {
WithMemFs(func(memFs afero.Fs) {
// Create temp dir
dir, err := TempDir("", "test-")
if err != nil {
t.Fatalf("TempDir failed: %v", err)
}
// Verify it exists
isDir, err := IsDir(dir)
if err != nil {
t.Fatalf("IsDir failed: %v", err)
}
if !isDir {
t.Error("temp dir should be a directory")
}
// Verify it's empty
isEmpty, err := IsEmpty(dir)
if err != nil {
t.Fatalf("IsEmpty failed: %v", err)
}
if !isEmpty {
t.Error("temp dir should be empty")
}
})
}
func TestWalk(t *testing.T) {
WithMemFs(func(memFs afero.Fs) {
// Create directory structure
_ = MkdirAll("/root/a/b", 0755)
_ = WriteFile("/root/file1.txt", []byte("1"), 0644)
_ = WriteFile("/root/a/file2.txt", []byte("2"), 0644)
_ = WriteFile("/root/a/b/file3.txt", []byte("3"), 0644)
var files []string
err := Walk("/root", func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
if !info.IsDir() {
files = append(files, path)
}
return nil
})
if err != nil {
t.Fatalf("Walk failed: %v", err)
}
if len(files) != 3 {
t.Errorf("expected 3 files, got %d: %v", len(files), files)
}
})
}
func TestGlob(t *testing.T) {
WithMemFs(func(memFs afero.Fs) {
_ = WriteFile("/data/backup1.dump", []byte("1"), 0644)
_ = WriteFile("/data/backup2.dump", []byte("2"), 0644)
_ = WriteFile("/data/config.json", []byte("{}"), 0644)
matches, err := Glob("/data/*.dump")
if err != nil {
t.Fatalf("Glob failed: %v", err)
}
if len(matches) != 2 {
t.Errorf("expected 2 matches, got %d: %v", len(matches), matches)
}
})
}

internal/logger/colors.go Normal file
View File

@@ -0,0 +1,118 @@
package logger
import (
"fmt"
"os"
"github.com/fatih/color"
)
// CLI output helpers using fatih/color for cross-platform support
// Success prints a success message with green checkmark
func Success(format string, args ...interface{}) {
msg := fmt.Sprintf(format, args...)
SuccessColor.Fprint(os.Stdout, "✓ ")
fmt.Println(msg)
}
// Error prints an error message with red X
func Error(format string, args ...interface{}) {
msg := fmt.Sprintf(format, args...)
ErrorColor.Fprint(os.Stderr, "✗ ")
fmt.Fprintln(os.Stderr, msg)
}
// Warning prints a warning message with yellow exclamation
func Warning(format string, args ...interface{}) {
msg := fmt.Sprintf(format, args...)
WarnColor.Fprint(os.Stdout, "⚠ ")
fmt.Println(msg)
}
// Info prints an info message with blue arrow
func Info(format string, args ...interface{}) {
msg := fmt.Sprintf(format, args...)
InfoColor.Fprint(os.Stdout, "→ ")
fmt.Println(msg)
}
// Header prints a bold header
func Header(format string, args ...interface{}) {
msg := fmt.Sprintf(format, args...)
HighlightColor.Println(msg)
}
// Dim prints dimmed/secondary text
func Dim(format string, args ...interface{}) {
msg := fmt.Sprintf(format, args...)
DimColor.Println(msg)
}
// Bold returns bold text
func Bold(text string) string {
return color.New(color.Bold).Sprint(text)
}
// Green returns green text
func Green(text string) string {
return SuccessColor.Sprint(text)
}
// Red returns red text
func Red(text string) string {
return ErrorColor.Sprint(text)
}
// Yellow returns yellow text
func Yellow(text string) string {
return WarnColor.Sprint(text)
}
// Cyan returns cyan text
func Cyan(text string) string {
return InfoColor.Sprint(text)
}
// StatusLine prints a key-value status line
func StatusLine(key, value string) {
DimColor.Printf(" %s: ", key)
fmt.Println(value)
}
// ProgressStatus prints an operation status line with an [OK]/[FAIL] prefix
func ProgressStatus(operation string, status string, isSuccess bool) {
if isSuccess {
SuccessColor.Print("[OK] ")
} else {
ErrorColor.Print("[FAIL] ")
}
fmt.Printf("%s: %s\n", operation, status)
}
// TableRow prints a simple formatted table row
func TableRow(cols ...string) {
for i, col := range cols {
if i == 0 {
InfoColor.Printf("%-20s", col)
} else {
fmt.Printf("%-15s", col)
}
}
fmt.Println()
}
// DisableColors disables all color output (for non-TTY or --no-color flag)
func DisableColors() {
color.NoColor = true
}
// EnableColors enables color output
func EnableColors() {
color.NoColor = false
}
// IsColorEnabled returns whether colors are enabled
func IsColorEnabled() bool {
return !color.NoColor
}
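// Example (illustrative): a main() would usually disable colors for
// non-TTY output; this sketch assumes golang.org/x/term is available.
//
//	if !term.IsTerminal(int(os.Stdout.Fd())) {
//	    logger.DisableColors()
//	}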

View File

@@ -7,9 +7,29 @@ import (
"strings"
"time"
"github.com/fatih/color"
"github.com/sirupsen/logrus"
)
// Color printers for consistent output across the application
var (
// Status colors
SuccessColor = color.New(color.FgGreen, color.Bold)
ErrorColor = color.New(color.FgRed, color.Bold)
WarnColor = color.New(color.FgYellow, color.Bold)
InfoColor = color.New(color.FgCyan)
DebugColor = color.New(color.FgWhite)
// Highlight colors
HighlightColor = color.New(color.FgMagenta, color.Bold)
DimColor = color.New(color.FgHiBlack)
// Data colors
NumberColor = color.New(color.FgYellow)
PathColor = color.New(color.FgBlue, color.Underline)
TimeColor = color.New(color.FgCyan)
)
// Logger defines the interface for logging
type Logger interface {
Debug(msg string, args ...any)
@@ -226,34 +246,32 @@ type CleanFormatter struct{}
func (f *CleanFormatter) Format(entry *logrus.Entry) ([]byte, error) {
timestamp := entry.Time.Format("2006-01-02T15:04:05")
// Color codes for different log levels
var levelColor, levelText string
// Get level color and text using fatih/color
var levelPrinter *color.Color
var levelText string
switch entry.Level {
case logrus.DebugLevel:
levelColor = "\033[36m" // Cyan
levelPrinter = DebugColor
levelText = "DEBUG"
case logrus.InfoLevel:
levelColor = "\033[32m" // Green
levelPrinter = SuccessColor
levelText = "INFO "
case logrus.WarnLevel:
levelColor = "\033[33m" // Yellow
levelPrinter = WarnColor
levelText = "WARN "
case logrus.ErrorLevel:
levelColor = "\033[31m" // Red
levelPrinter = ErrorColor
levelText = "ERROR"
default:
levelColor = "\033[0m" // Reset
levelPrinter = InfoColor
levelText = "INFO "
}
resetColor := "\033[0m"
// Build the message with perfectly aligned columns
var output strings.Builder
// Column 1: Level (with color, fixed width 5 chars)
output.WriteString(levelColor)
output.WriteString(levelText)
output.WriteString(resetColor)
output.WriteString(levelPrinter.Sprint(levelText))
output.WriteString(" ")
// Column 2: Timestamp (fixed format)

View File

@@ -6,6 +6,16 @@ import (
"os"
"strings"
"time"
"github.com/fatih/color"
"github.com/schollz/progressbar/v3"
)
// Color printers for progress indicators
var (
okColor = color.New(color.FgGreen, color.Bold)
failColor = color.New(color.FgRed, color.Bold)
warnColor = color.New(color.FgYellow, color.Bold)
)
// Indicator represents a progress indicator interface
@@ -92,13 +102,15 @@ func (s *Spinner) Update(message string) {
// Complete stops the spinner with a success message
func (s *Spinner) Complete(message string) {
s.Stop()
fmt.Fprintf(s.writer, "\n[OK] %s\n", message)
okColor.Fprint(s.writer, "[OK] ")
fmt.Fprintln(s.writer, message)
}
// Fail stops the spinner with a failure message
func (s *Spinner) Fail(message string) {
s.Stop()
fmt.Fprintf(s.writer, "\n[FAIL] %s\n", message)
failColor.Fprint(s.writer, "[FAIL] ")
fmt.Fprintln(s.writer, message)
}
// Stop stops the spinner
@@ -167,13 +179,15 @@ func (d *Dots) Update(message string) {
// Complete stops the dots with a success message
func (d *Dots) Complete(message string) {
d.Stop()
fmt.Fprintf(d.writer, " [OK] %s\n", message)
okColor.Fprint(d.writer, " [OK] ")
fmt.Fprintln(d.writer, message)
}
// Fail stops the dots with a failure message
func (d *Dots) Fail(message string) {
d.Stop()
fmt.Fprintf(d.writer, " [FAIL] %s\n", message)
failColor.Fprint(d.writer, " [FAIL] ")
fmt.Fprintln(d.writer, message)
}
// Stop stops the dots indicator
@@ -239,14 +253,16 @@ func (p *ProgressBar) Complete(message string) {
p.current = p.total
p.message = message
p.render()
fmt.Fprintf(p.writer, " [OK] %s\n", message)
okColor.Fprint(p.writer, " [OK] ")
fmt.Fprintln(p.writer, message)
p.Stop()
}
// Fail stops the progress bar with failure
func (p *ProgressBar) Fail(message string) {
p.render()
fmt.Fprintf(p.writer, " [FAIL] %s\n", message)
failColor.Fprint(p.writer, " [FAIL] ")
fmt.Fprintln(p.writer, message)
p.Stop()
}
@@ -298,12 +314,14 @@ func (s *Static) Update(message string) {
// Complete shows completion message
func (s *Static) Complete(message string) {
fmt.Fprintf(s.writer, " [OK] %s\n", message)
okColor.Fprint(s.writer, " [OK] ")
fmt.Fprintln(s.writer, message)
}
// Fail shows failure message
func (s *Static) Fail(message string) {
fmt.Fprintf(s.writer, " [FAIL] %s\n", message)
failColor.Fprint(s.writer, " [FAIL] ")
fmt.Fprintln(s.writer, message)
}
// Stop does nothing for static indicator
@@ -380,12 +398,14 @@ func (l *LineByLine) SetEstimator(estimator *ETAEstimator) {
// Complete shows completion message
func (l *LineByLine) Complete(message string) {
fmt.Fprintf(l.writer, "[OK] %s\n\n", message)
okColor.Fprint(l.writer, "[OK] ")
fmt.Fprintf(l.writer, "%s\n\n", message)
}
// Fail shows failure message
func (l *LineByLine) Fail(message string) {
fmt.Fprintf(l.writer, "[FAIL] %s\n\n", message)
failColor.Fprint(l.writer, "[FAIL] ")
fmt.Fprintf(l.writer, "%s\n\n", message)
}
// Stop does nothing for line-by-line (no cleanup needed)
@@ -408,13 +428,15 @@ func (l *Light) Update(message string) {
func (l *Light) Complete(message string) {
if !l.silent {
fmt.Fprintf(l.writer, "[OK] %s\n", message)
okColor.Fprint(l.writer, "[OK] ")
fmt.Fprintln(l.writer, message)
}
}
func (l *Light) Fail(message string) {
if !l.silent {
fmt.Fprintf(l.writer, "[FAIL] %s\n", message)
failColor.Fprint(l.writer, "[FAIL] ")
fmt.Fprintln(l.writer, message)
}
}
@@ -440,6 +462,8 @@ func NewIndicator(interactive bool, indicatorType string) Indicator {
return NewDots()
case "bar":
return NewProgressBar(100) // Default to 100 steps
case "schollz":
return NewSchollzBarItems(100, "Progress")
case "line":
return NewLineByLine()
case "light":
@@ -463,3 +487,161 @@ func (n *NullIndicator) Complete(message string) {}
func (n *NullIndicator) Fail(message string) {}
func (n *NullIndicator) Stop() {}
func (n *NullIndicator) SetEstimator(estimator *ETAEstimator) {}
// SchollzBar wraps schollz/progressbar for enhanced progress display
// Ideal for byte-based operations like archive extraction and file transfers
type SchollzBar struct {
bar *progressbar.ProgressBar
message string
total int64
estimator *ETAEstimator
}
// NewSchollzBar creates a new schollz progressbar with byte-based progress
func NewSchollzBar(total int64, description string) *SchollzBar {
bar := progressbar.NewOptions64(
total,
progressbar.OptionEnableColorCodes(true),
progressbar.OptionShowBytes(true),
progressbar.OptionSetWidth(40),
progressbar.OptionSetDescription(description),
progressbar.OptionSetTheme(progressbar.Theme{
Saucer: "[green]█[reset]",
SaucerHead: "[green]▌[reset]",
SaucerPadding: "░",
BarStart: "[",
BarEnd: "]",
}),
progressbar.OptionShowCount(),
progressbar.OptionSetPredictTime(true),
progressbar.OptionFullWidth(),
progressbar.OptionClearOnFinish(),
)
return &SchollzBar{
bar: bar,
message: description,
total: total,
}
}
// NewSchollzBarItems creates a progressbar for item counts (not bytes)
func NewSchollzBarItems(total int, description string) *SchollzBar {
bar := progressbar.NewOptions(
total,
progressbar.OptionEnableColorCodes(true),
progressbar.OptionShowCount(),
progressbar.OptionSetWidth(40),
progressbar.OptionSetDescription(description),
progressbar.OptionSetTheme(progressbar.Theme{
Saucer: "[cyan]█[reset]",
SaucerHead: "[cyan]▌[reset]",
SaucerPadding: "░",
BarStart: "[",
BarEnd: "]",
}),
progressbar.OptionSetPredictTime(true),
progressbar.OptionFullWidth(),
progressbar.OptionClearOnFinish(),
)
return &SchollzBar{
bar: bar,
message: description,
total: int64(total),
}
}
// NewSchollzSpinner creates an indeterminate spinner for unknown-length operations
func NewSchollzSpinner(description string) *SchollzBar {
bar := progressbar.NewOptions(
-1, // Indeterminate
progressbar.OptionEnableColorCodes(true),
progressbar.OptionSetWidth(40),
progressbar.OptionSetDescription(description),
progressbar.OptionSpinnerType(14), // Braille spinner
progressbar.OptionFullWidth(),
)
return &SchollzBar{
bar: bar,
message: description,
total: -1,
}
}
// Start initializes the progress bar (Indicator interface)
func (s *SchollzBar) Start(message string) {
s.message = message
s.bar.Describe(message)
}
// Update updates the description (Indicator interface)
func (s *SchollzBar) Update(message string) {
s.message = message
s.bar.Describe(message)
}
// Add adds bytes/items to the progress
func (s *SchollzBar) Add(n int) error {
return s.bar.Add(n)
}
// Add64 adds bytes to the progress (for large files)
func (s *SchollzBar) Add64(n int64) error {
return s.bar.Add64(n)
}
// Set sets the current progress value
func (s *SchollzBar) Set(n int) error {
return s.bar.Set(n)
}
// Set64 sets the current progress value (for large files)
func (s *SchollzBar) Set64(n int64) error {
return s.bar.Set64(n)
}
// ChangeMax updates the maximum value
func (s *SchollzBar) ChangeMax(max int) {
s.bar.ChangeMax(max)
s.total = int64(max)
}
// ChangeMax64 updates the maximum value (for large files)
func (s *SchollzBar) ChangeMax64(max int64) {
s.bar.ChangeMax64(max)
s.total = max
}
// Complete finishes with success (Indicator interface)
func (s *SchollzBar) Complete(message string) {
_ = s.bar.Finish()
okColor.Print("[OK] ")
fmt.Println(message)
}
// Fail finishes with failure (Indicator interface)
func (s *SchollzBar) Fail(message string) {
_ = s.bar.Clear()
failColor.Print("[FAIL] ")
fmt.Println(message)
}
// Stop stops the progress bar (Indicator interface)
func (s *SchollzBar) Stop() {
_ = s.bar.Clear()
}
// SetEstimator is a no-op (schollz has built-in ETA)
func (s *SchollzBar) SetEstimator(estimator *ETAEstimator) {
s.estimator = estimator
}
// Writer returns an io.Writer that updates progress as data is written
// Useful for wrapping readers/writers in copy operations
func (s *SchollzBar) Writer() io.Writer {
return s.bar
}
// Finish marks the progress as complete
func (s *SchollzBar) Finish() error {
return s.bar.Finish()
}
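// Example (illustrative): Writer() lets the bar track any io.Copy; src and
// dst are hypothetical. The checksum verification below uses this pattern.
//
//	bar := NewSchollzBar(size, "Copying")
//	_, err := io.Copy(io.MultiWriter(dst, bar.Writer()), src)
//	_ = bar.Finish()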

View File

@@ -12,6 +12,7 @@ import (
"dbbackup/internal/cloud"
"dbbackup/internal/logger"
"dbbackup/internal/metadata"
"dbbackup/internal/progress"
)
// CloudDownloader handles downloading backups from cloud storage
@@ -73,25 +74,43 @@ func (d *CloudDownloader) Download(ctx context.Context, remotePath string, opts
size = 0 // Continue anyway
}
// Progress callback
var lastPercent int
progressCallback := func(transferred, total int64) {
if total > 0 {
percent := int(float64(transferred) / float64(total) * 100)
if percent != lastPercent && percent%10 == 0 {
d.log.Info("Download progress", "percent", percent, "transferred", cloud.FormatSize(transferred), "total", cloud.FormatSize(total))
lastPercent = percent
// Create schollz progressbar for visual download progress
var bar *progress.SchollzBar
if size > 0 {
bar = progress.NewSchollzBar(size, fmt.Sprintf("Downloading %s", filename))
} else {
bar = progress.NewSchollzSpinner(fmt.Sprintf("Downloading %s", filename))
}
// Progress callback with schollz progressbar
var lastBytes int64
progressCallback := func(transferred, total int64) {
if bar != nil {
// Update progress bar with delta
delta := transferred - lastBytes
if delta > 0 {
_ = bar.Add64(delta)
}
lastBytes = transferred
}
}
// Download file
if err := d.backend.Download(ctx, remotePath, localPath, progressCallback); err != nil {
if bar != nil {
bar.Fail("Download failed")
}
// Cleanup on failure
os.RemoveAll(tempSubDir)
return nil, fmt.Errorf("download failed: %w", err)
}
if bar != nil {
_ = bar.Finish()
}
d.log.Info("Download completed", "size", cloud.FormatSize(size))
result := &DownloadResult{
LocalPath: localPath,
RemotePath: remotePath,
@@ -115,7 +134,7 @@ func (d *CloudDownloader) Download(ctx context.Context, remotePath string, opts
// Verify checksum if requested
if opts.VerifyChecksum {
d.log.Info("Verifying checksum...")
checksum, err := calculateSHA256(localPath)
checksum, err := calculateSHA256WithProgress(localPath)
if err != nil {
// Cleanup on verification failure
os.RemoveAll(tempSubDir)
@@ -186,6 +205,35 @@ func calculateSHA256(filePath string) (string, error) {
return hex.EncodeToString(hash.Sum(nil)), nil
}
// calculateSHA256WithProgress calculates SHA-256 with visual progress bar
func calculateSHA256WithProgress(filePath string) (string, error) {
file, err := os.Open(filePath)
if err != nil {
return "", err
}
defer file.Close()
// Get file size for progress bar
stat, err := file.Stat()
if err != nil {
return "", err
}
bar := progress.NewSchollzBar(stat.Size(), "Verifying checksum")
hash := sha256.New()
// Create a multi-writer to update both hash and progress
writer := io.MultiWriter(hash, bar.Writer())
if _, err := io.Copy(writer, file); err != nil {
bar.Fail("Verification failed")
return "", err
}
_ = bar.Finish()
return hex.EncodeToString(hash.Sum(nil)), nil
}
// DownloadFromCloudURI is a convenience function to download from a cloud URI
func DownloadFromCloudURI(ctx context.Context, uri string, opts DownloadOptions) (*DownloadResult, error) {
// Parse URI

View File

@@ -368,7 +368,7 @@ func (d *Diagnoser) diagnoseSQLScript(filePath string, compressed bool, result *
}
// Store last line for termination check
if lineNumber > 0 && (lineNumber%100000 == 0) && d.verbose {
if lineNumber > 0 && (lineNumber%100000 == 0) && d.verbose && d.log != nil {
d.log.Debug("Scanning SQL file", "lines_processed", lineNumber)
}
}
@@ -415,51 +415,122 @@ func (d *Diagnoser) diagnoseSQLScript(filePath string, compressed bool, result *
// diagnoseClusterArchive analyzes a cluster tar.gz archive
func (d *Diagnoser) diagnoseClusterArchive(filePath string, result *DiagnoseResult) {
// Calculate dynamic timeout based on file size
// Assume minimum 50 MB/s throughput for compressed archive listing
// Minimum 5 minutes, scales with file size
// Large archives (100GB+) can take significant time to list
// Minimum 5 minutes, scales with file size, max 180 minutes for very large archives
timeoutMinutes := 5
if result.FileSize > 0 {
// 1 minute per 3 GB, minimum 5 minutes, max 60 minutes
// 1 minute per 2 GB, minimum 5 minutes, max 180 minutes
sizeGB := result.FileSize / (1024 * 1024 * 1024)
estimatedMinutes := int(sizeGB/3) + 5
estimatedMinutes := int(sizeGB/2) + 5
if estimatedMinutes > timeoutMinutes {
timeoutMinutes = estimatedMinutes
}
if timeoutMinutes > 60 {
timeoutMinutes = 60
if timeoutMinutes > 180 {
timeoutMinutes = 180
}
}
if d.log != nil {
d.log.Info("Verifying cluster archive integrity",
"size", fmt.Sprintf("%.1f GB", float64(result.FileSize)/(1024*1024*1024)),
"timeout", fmt.Sprintf("%d min", timeoutMinutes))
}
ctx, cancel := context.WithTimeout(context.Background(), time.Duration(timeoutMinutes)*time.Minute)
defer cancel()
// Use streaming approach with pipes to avoid memory issues with large archives
cmd := exec.CommandContext(ctx, "tar", "-tzf", filePath)
output, err := cmd.Output()
if err != nil {
stdout, pipeErr := cmd.StdoutPipe()
if pipeErr != nil {
// Pipe creation failed - not a corruption issue
result.Warnings = append(result.Warnings,
fmt.Sprintf("Cannot create pipe for verification: %v", pipeErr),
"Archive integrity cannot be verified but may still be valid")
return
}
var stderrBuf bytes.Buffer
cmd.Stderr = &stderrBuf
if startErr := cmd.Start(); startErr != nil {
result.Warnings = append(result.Warnings,
fmt.Sprintf("Cannot start tar verification: %v", startErr),
"Archive integrity cannot be verified but may still be valid")
return
}
// Stream output line by line to avoid buffering entire listing in memory
scanner := bufio.NewScanner(stdout)
scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024) // Allow long paths
var files []string
fileCount := 0
for scanner.Scan() {
fileCount++
line := scanner.Text()
// Only store dump/metadata files, not every file
if strings.HasSuffix(line, ".dump") || strings.HasSuffix(line, ".sql.gz") ||
strings.HasSuffix(line, ".sql") || strings.HasSuffix(line, ".json") ||
strings.Contains(line, "globals") || strings.Contains(line, "manifest") ||
strings.Contains(line, "metadata") {
files = append(files, line)
}
}
scanErr := scanner.Err()
waitErr := cmd.Wait()
stderrOutput := stderrBuf.String()
// Handle errors - distinguish between actual corruption and resource/timeout issues
if waitErr != nil || scanErr != nil {
// Check if it was a timeout
if ctx.Err() == context.DeadlineExceeded {
result.IsValid = false
result.Errors = append(result.Errors,
result.Warnings = append(result.Warnings,
fmt.Sprintf("Verification timed out after %d minutes - archive is very large", timeoutMinutes),
"This does not necessarily mean the archive is corrupted",
"Manual verification: tar -tzf "+filePath+" | wc -l")
// Don't mark as corrupted on timeout
// Don't mark as corrupted or invalid on timeout - archive may be fine
if fileCount > 0 {
result.Details.TableCount = len(files)
result.Details.TableList = files
}
return
}
// Check for specific gzip/tar corruption indicators
if strings.Contains(stderrOutput, "unexpected end of file") ||
strings.Contains(stderrOutput, "Unexpected EOF") ||
strings.Contains(stderrOutput, "gzip: stdin: unexpected end of file") ||
strings.Contains(stderrOutput, "not in gzip format") ||
strings.Contains(stderrOutput, "invalid compressed data") {
// These indicate actual corruption
result.IsValid = false
result.IsCorrupted = true
result.Errors = append(result.Errors,
fmt.Sprintf("Tar archive is invalid or corrupted: %v", err),
"Tar archive appears truncated or corrupted",
fmt.Sprintf("Error: %s", truncateString(stderrOutput, 200)),
"Run: tar -tzf "+filePath+" 2>&1 | tail -20")
return
}
// Parse tar listing
files := strings.Split(strings.TrimSpace(string(output)), "\n")
// Other errors (signal killed, memory, etc.) - not necessarily corruption
// If we read some files successfully, the archive structure is likely OK
if fileCount > 0 {
result.Warnings = append(result.Warnings,
fmt.Sprintf("Verification incomplete (read %d files before error)", fileCount),
"Archive may still be valid - error could be due to system resources")
// Proceed with what we got
} else {
// Couldn't read anything - but don't mark as corrupted without clear evidence
result.Warnings = append(result.Warnings,
fmt.Sprintf("Cannot verify archive: %v", waitErr),
"Archive integrity is uncertain - proceed with caution or verify manually")
return
}
}
// Parse the collected file list
var dumpFiles []string
hasGlobals := false
hasMetadata := false
@@ -492,7 +563,7 @@ func (d *Diagnoser) diagnoseClusterArchive(filePath string, result *DiagnoseResu
}
// For verbose mode, diagnose individual dumps inside the archive
if d.verbose && len(dumpFiles) > 0 {
if d.verbose && len(dumpFiles) > 0 && d.log != nil {
d.log.Info("Cluster archive contains databases", "count", len(dumpFiles))
for _, df := range dumpFiles {
d.log.Info(" - " + df)
@@ -615,9 +686,11 @@ func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*Diagno
}
}
if d.log != nil {
d.log.Info("Listing cluster archive contents",
"size", fmt.Sprintf("%.1f GB", float64(archiveInfo.Size())/(1024*1024*1024)),
"timeout", fmt.Sprintf("%d min", timeoutMinutes))
}
listCtx, listCancel := context.WithTimeout(context.Background(), time.Duration(timeoutMinutes)*time.Minute)
defer listCancel()
@@ -697,7 +770,9 @@ func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*Diagno
return []*DiagnoseResult{errResult}, nil
}
if d.log != nil {
d.log.Debug("Archive listing streamed successfully", "total_files", fileCount, "relevant_files", len(files))
}
// Check if we have enough disk space (estimate 4x archive size needed)
// archiveInfo already obtained at function start
@@ -712,7 +787,9 @@ func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*Diagno
testCancel()
}
if d.log != nil {
d.log.Info("Archive listing successful", "files", len(files))
}
// Try full extraction - NO TIMEOUT here as large archives can take a long time
// Use a generous timeout (30 minutes) for very large archives
@@ -801,11 +878,15 @@ func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*Diagno
}
dumpPath := filepath.Join(dumpsDir, name)
if d.log != nil {
d.log.Info("Diagnosing dump file", "file", name)
}
result, err := d.DiagnoseFile(dumpPath)
if err != nil {
if d.log != nil {
d.log.Warn("Failed to diagnose file", "file", name, "error", err)
}
continue
}
results = append(results, result)

View File

@@ -1,11 +1,16 @@
package restore
import (
"archive/tar"
"compress/gzip"
"context"
"database/sql"
"fmt"
"io"
"os"
"os/exec"
"path/filepath"
"strconv"
"strings"
"sync"
"sync/atomic"
@@ -17,8 +22,18 @@ import (
"dbbackup/internal/logger"
"dbbackup/internal/progress"
"dbbackup/internal/security"
"github.com/hashicorp/go-multierror"
_ "github.com/jackc/pgx/v5/stdlib" // PostgreSQL driver
)
// ProgressCallback is called with progress updates during long operations
// Parameters: current bytes/items done, total bytes/items, description
type ProgressCallback func(current, total int64, description string)
// DatabaseProgressCallback is called with database count progress during cluster restore
type DatabaseProgressCallback func(done, total int, dbName string)
// Engine handles database restore operations
type Engine struct {
cfg *config.Config
@@ -28,6 +43,10 @@ type Engine struct {
detailedReporter *progress.DetailedReporter
dryRun bool
debugLogPath string // Path to save debug log on error
// TUI progress callback for detailed progress reporting
progressCallback ProgressCallback
dbProgressCallback DatabaseProgressCallback
}
// New creates a new restore engine
@@ -83,6 +102,30 @@ func (e *Engine) SetDebugLogPath(path string) {
e.debugLogPath = path
}
// SetProgressCallback sets a callback for detailed progress reporting (for TUI mode)
func (e *Engine) SetProgressCallback(cb ProgressCallback) {
e.progressCallback = cb
}
// SetDatabaseProgressCallback sets a callback for database count progress during cluster restore
func (e *Engine) SetDatabaseProgressCallback(cb DatabaseProgressCallback) {
e.dbProgressCallback = cb
}
// reportProgress safely calls the progress callback if set
func (e *Engine) reportProgress(current, total int64, description string) {
if e.progressCallback != nil {
e.progressCallback(current, total, description)
}
}
// reportDatabaseProgress safely calls the database progress callback if set
func (e *Engine) reportDatabaseProgress(done, total int, dbName string) {
if e.dbProgressCallback != nil {
e.dbProgressCallback(done, total, dbName)
}
}
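// Example (illustrative): a TUI model would wire both callbacks before
// kicking off a cluster restore; program, progressMsg and dbMsg are
// hypothetical names.
//
//	engine.SetProgressCallback(func(cur, total int64, desc string) {
//	    program.Send(progressMsg{cur, total, desc})
//	})
//	engine.SetDatabaseProgressCallback(func(done, total int, db string) {
//	    program.Send(dbMsg{done, total, db})
//	})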
// loggerAdapter adapts our logger to the progress.Logger interface
type loggerAdapter struct {
logger logger.Logger
@@ -223,7 +266,18 @@ func (e *Engine) restorePostgreSQLDump(ctx context.Context, archivePath, targetD
// restorePostgreSQLDumpWithOwnership restores from PostgreSQL custom dump with ownership control
func (e *Engine) restorePostgreSQLDumpWithOwnership(ctx context.Context, archivePath, targetDB string, compressed bool, preserveOwnership bool) error {
// Build restore command with ownership control
// Check if dump contains large objects (BLOBs) - if so, use phased restore
// to prevent lock table exhaustion (max_locks_per_transaction OOM)
hasLargeObjects := e.checkDumpHasLargeObjects(archivePath)
if hasLargeObjects {
e.log.Info("Large objects detected - using phased restore to prevent lock exhaustion",
"database", targetDB,
"archive", archivePath)
return e.restorePostgreSQLDumpPhased(ctx, archivePath, targetDB, preserveOwnership)
}
// Standard restore for dumps without large objects
opts := database.RestoreOptions{
Parallel: 1,
Clean: false, // We already dropped the database
@@ -249,6 +303,113 @@ func (e *Engine) restorePostgreSQLDumpWithOwnership(ctx context.Context, archive
return e.executeRestoreCommand(ctx, cmd)
}
// restorePostgreSQLDumpPhased performs a multi-phase restore to prevent lock table exhaustion
// Phase 1: pre-data (schema, types, functions)
// Phase 2: data (table data and large objects)
// Phase 3: post-data (indexes, constraints, triggers)
//
// This approach prevents OOM errors by committing and releasing locks between phases.
func (e *Engine) restorePostgreSQLDumpPhased(ctx context.Context, archivePath, targetDB string, preserveOwnership bool) error {
e.log.Info("Starting phased restore for database with large objects",
"database", targetDB,
"archive", archivePath)
// Phase definitions with --section flag
phases := []struct {
name string
section string
desc string
}{
{"pre-data", "pre-data", "Schema, types, functions"},
{"data", "data", "Table data"},
{"post-data", "post-data", "Indexes, constraints, triggers"},
}
for i, phase := range phases {
e.log.Info(fmt.Sprintf("Phase %d/%d: Restoring %s", i+1, len(phases), phase.name),
"database", targetDB,
"section", phase.section,
"description", phase.desc)
if err := e.restoreSection(ctx, archivePath, targetDB, phase.section, preserveOwnership); err != nil {
// Check if it's an ignorable error
if e.isIgnorableError(err.Error()) {
e.log.Warn(fmt.Sprintf("Phase %d completed with ignorable errors", i+1),
"section", phase.section,
"error", err)
continue
}
return fmt.Errorf("phase %d (%s) failed: %w", i+1, phase.name, err)
}
e.log.Info(fmt.Sprintf("Phase %d/%d completed successfully", i+1, len(phases)),
"section", phase.section)
}
e.log.Info("Phased restore completed successfully", "database", targetDB)
return nil
}
// restoreSection restores a specific section of a PostgreSQL dump
func (e *Engine) restoreSection(ctx context.Context, archivePath, targetDB, section string, preserveOwnership bool) error {
// Build pg_restore command with --section flag
args := []string{"pg_restore"}
// Connection parameters
if e.cfg.Host != "localhost" {
args = append(args, "-h", e.cfg.Host)
args = append(args, "-p", fmt.Sprintf("%d", e.cfg.Port))
args = append(args, "--no-password")
}
args = append(args, "-U", e.cfg.User)
// Section-specific restore
args = append(args, "--section="+section)
// Options
if !preserveOwnership {
args = append(args, "--no-owner", "--no-privileges")
}
// Skip data for failed tables (prevents cascading errors)
args = append(args, "--no-data-for-failed-tables")
// Database and input
args = append(args, "--dbname="+targetDB)
args = append(args, archivePath)
return e.executeRestoreCommand(ctx, args)
}
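// For a remote host, the command assembled above is equivalent to running
// (illustrative host and database values):
//
//	pg_restore -h db1 -p 5432 --no-password -U postgres \
//	    --section=data --no-owner --no-privileges \
//	    --no-data-for-failed-tables --dbname=mydb /backups/mydb.dump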
// checkDumpHasLargeObjects checks if a PostgreSQL custom dump contains large objects (BLOBs)
func (e *Engine) checkDumpHasLargeObjects(archivePath string) bool {
// Use pg_restore -l to list contents without restoring
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
cmd := exec.CommandContext(ctx, "pg_restore", "-l", archivePath)
output, err := cmd.Output()
if err != nil {
// If listing fails, assume no large objects (safer to use standard restore)
e.log.Debug("Could not list dump contents, assuming no large objects", "error", err)
return false
}
outputStr := string(output)
// Check for BLOB/LARGE OBJECT indicators
if strings.Contains(outputStr, "BLOB") ||
strings.Contains(outputStr, "LARGE OBJECT") ||
strings.Contains(outputStr, " BLOBS ") ||
strings.Contains(outputStr, "lo_create") {
return true
}
return false
}
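// Background: PostgreSQL sizes the shared lock table as roughly
// max_locks_per_transaction * (max_connections + max_prepared_transactions).
// Every large object restored inside one transaction consumes a lock entry,
// which is why BLOB-heavy dumps exhaust the table and need phased restore.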
// restorePostgreSQLSQL restores from PostgreSQL SQL script
func (e *Engine) restorePostgreSQLSQL(ctx context.Context, archivePath, targetDB string, compressed bool) error {
// Pre-validate SQL dump to detect truncation BEFORE attempting restore
@@ -807,7 +968,40 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
}
e.log.Info("All dump files passed validation")
var failedDBs []string
// Run comprehensive preflight checks (Linux system + PostgreSQL + Archive analysis)
preflight, preflightErr := e.RunPreflightChecks(ctx, dumpsDir, entries)
if preflightErr != nil {
e.log.Warn("Preflight checks failed", "error", preflightErr)
}
// Calculate optimal lock boost based on BLOB count
lockBoostValue := 2048 // Default
if preflight != nil && preflight.Archive.RecommendedLockBoost > 0 {
lockBoostValue = preflight.Archive.RecommendedLockBoost
}
// AUTO-TUNE: Boost PostgreSQL settings for large restores
e.progress.Update("Tuning PostgreSQL for large restore...")
originalSettings, tuneErr := e.boostPostgreSQLSettings(ctx, lockBoostValue)
if tuneErr != nil {
e.log.Warn("Could not boost PostgreSQL settings - restore may fail on BLOB-heavy databases",
"error", tuneErr)
} else {
e.log.Info("Boosted PostgreSQL settings for restore",
"max_locks_per_transaction", fmt.Sprintf("%d → %d", originalSettings.MaxLocks, lockBoostValue),
"maintenance_work_mem", fmt.Sprintf("%s → 2GB", originalSettings.MaintenanceWorkMem))
// Ensure we reset settings when done (even on failure)
defer func() {
if resetErr := e.resetPostgreSQLSettings(ctx, originalSettings); resetErr != nil {
e.log.Warn("Could not reset PostgreSQL settings", "error", resetErr)
} else {
e.log.Info("Reset PostgreSQL settings to original values")
}
}()
}
var restoreErrors *multierror.Error
var restoreErrorsMu sync.Mutex
totalDBs := 0
// Count total databases
@@ -841,7 +1035,6 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
}
var successCount, failCount int32
var failedDBsMu sync.Mutex
var mu sync.Mutex // Protect shared resources (progress, logger)
// Create semaphore to limit concurrency
@@ -885,6 +1078,8 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
statusMsg := fmt.Sprintf("Restoring database %s (%d/%d)", dbName, idx+1, totalDBs)
e.progress.Update(statusMsg)
e.log.Info("Restoring database", "name", dbName, "file", dumpFile, "progress", dbProgress)
// Report database progress for TUI
e.reportDatabaseProgress(idx, totalDBs, dbName)
mu.Unlock()
// STEP 1: Drop existing database completely (clean slate)
@@ -896,9 +1091,9 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
// STEP 2: Create fresh database
if err := e.ensureDatabaseExists(ctx, dbName); err != nil {
e.log.Error("Failed to create database", "name", dbName, "error", err)
failedDBsMu.Lock()
failedDBs = append(failedDBs, fmt.Sprintf("%s: failed to create database: %v", dbName, err))
failedDBsMu.Unlock()
restoreErrorsMu.Lock()
restoreErrors = multierror.Append(restoreErrors, fmt.Errorf("%s: failed to create database: %w", dbName, err))
restoreErrorsMu.Unlock()
atomic.AddInt32(&failCount, 1)
return
}
@@ -941,10 +1136,10 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
mu.Unlock()
}
failedDBsMu.Lock()
restoreErrorsMu.Lock()
// Include more context in the error message
failedDBs = append(failedDBs, fmt.Sprintf("%s: restore failed: %v", dbName, restoreErr))
failedDBsMu.Unlock()
restoreErrors = multierror.Append(restoreErrors, fmt.Errorf("%s: restore failed: %w", dbName, restoreErr))
restoreErrorsMu.Unlock()
atomic.AddInt32(&failCount, 1)
return
}
@@ -962,7 +1157,17 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
failCountFinal := int(atomic.LoadInt32(&failCount))
if failCountFinal > 0 {
failedList := strings.Join(failedDBs, "\n ")
// Format multi-error with detailed output
restoreErrors.ErrorFormat = func(errs []error) string {
if len(errs) == 1 {
return errs[0].Error()
}
points := make([]string, len(errs))
for i, err := range errs {
points[i] = fmt.Sprintf(" • %s", err.Error())
}
return fmt.Sprintf("%d database(s) failed:\n%s", len(errs), strings.Join(points, "\n"))
}
// Log summary
e.log.Info("Cluster restore completed with failures",
@@ -973,7 +1178,7 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
e.progress.Fail(fmt.Sprintf("Cluster restore: %d succeeded, %d failed out of %d total", successCountFinal, failCountFinal, totalDBs))
operation.Complete(fmt.Sprintf("Partial restore: %d/%d databases succeeded", successCountFinal, totalDBs))
return fmt.Errorf("cluster restore completed with %d failures:\n %s", failCountFinal, failedList)
return fmt.Errorf("cluster restore completed with %d failures:\n%s", failCountFinal, restoreErrors.Error())
}
e.progress.Complete(fmt.Sprintf("Cluster restored successfully: %d databases", successCountFinal))
@@ -981,8 +1186,144 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
return nil
}
// extractArchive extracts a tar.gz archive
// extractArchive extracts a tar.gz archive with progress reporting
func (e *Engine) extractArchive(ctx context.Context, archivePath, destDir string) error {
// If progress callback is set, use Go's archive/tar for progress tracking
if e.progressCallback != nil {
return e.extractArchiveWithProgress(ctx, archivePath, destDir)
}
// Otherwise use fast shell tar (no progress)
return e.extractArchiveShell(ctx, archivePath, destDir)
}
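// Illustrative call shape for opting into the progress path (a sketch, not
// code from this changeset; NewSilent and SetProgressCallback appear in the
// TUI diff later in this compare):
//
//	engine := restore.NewSilent(cfg, log, dbClient)
//	engine.SetProgressCallback(func(current, total int64, desc string) {
//		// forward to a TUI model or a throttled logger
//	})
//	err := engine.RestoreCluster(ctx, archivePath) // extraction now reports bytes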
// extractArchiveWithProgress extracts using Go's archive/tar with detailed progress reporting
func (e *Engine) extractArchiveWithProgress(ctx context.Context, archivePath, destDir string) error {
// Get archive size for progress calculation
archiveInfo, err := os.Stat(archivePath)
if err != nil {
return fmt.Errorf("failed to stat archive: %w", err)
}
totalSize := archiveInfo.Size()
// Open the archive file
file, err := os.Open(archivePath)
if err != nil {
return fmt.Errorf("failed to open archive: %w", err)
}
defer file.Close()
// Wrap with progress reader
progressReader := &progressReader{
reader: file,
totalSize: totalSize,
callback: e.progressCallback,
desc: "Extracting archive",
}
// Create gzip reader
gzReader, err := gzip.NewReader(progressReader)
if err != nil {
return fmt.Errorf("failed to create gzip reader: %w", err)
}
defer gzReader.Close()
// Create tar reader
tarReader := tar.NewReader(gzReader)
// Extract files
for {
select {
case <-ctx.Done():
return ctx.Err()
default:
}
header, err := tarReader.Next()
if err == io.EOF {
break // End of archive
}
if err != nil {
return fmt.Errorf("failed to read tar header: %w", err)
}
// Sanitize and validate path
targetPath := filepath.Join(destDir, header.Name)
// Security check: ensure path is within destDir (prevent path traversal)
if !strings.HasPrefix(filepath.Clean(targetPath), filepath.Clean(destDir)) {
e.log.Warn("Skipping potentially malicious path in archive", "path", header.Name)
continue
}
switch header.Typeflag {
case tar.TypeDir:
if err := os.MkdirAll(targetPath, 0755); err != nil {
return fmt.Errorf("failed to create directory %s: %w", targetPath, err)
}
case tar.TypeReg:
// Ensure parent directory exists
if err := os.MkdirAll(filepath.Dir(targetPath), 0755); err != nil {
return fmt.Errorf("failed to create parent directory: %w", err)
}
// Create the file
outFile, err := os.OpenFile(targetPath, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, os.FileMode(header.Mode))
if err != nil {
return fmt.Errorf("failed to create file %s: %w", targetPath, err)
}
// Copy file contents
if _, err := io.Copy(outFile, tarReader); err != nil {
outFile.Close()
return fmt.Errorf("failed to write file %s: %w", targetPath, err)
}
outFile.Close()
case tar.TypeSymlink:
// Handle symlinks (common in some archives)
if err := os.Symlink(header.Linkname, targetPath); err != nil {
// Symlink failures are non-fatal (target may already exist or the platform may not support them)
e.log.Debug("Could not create symlink", "path", targetPath, "target", header.Linkname, "error", err)
}
}
}
// Final progress update
e.reportProgress(totalSize, totalSize, "Extraction complete")
return nil
}
// progressReader wraps an io.Reader to report read progress
type progressReader struct {
reader io.Reader
totalSize int64
bytesRead int64
callback ProgressCallback
desc string
lastReport time.Time
reportEvery time.Duration
}
func (pr *progressReader) Read(p []byte) (n int, err error) {
n, err = pr.reader.Read(p)
pr.bytesRead += int64(n)
// Throttle progress reporting to every 100ms
if pr.reportEvery == 0 {
pr.reportEvery = 100 * time.Millisecond
}
if time.Since(pr.lastReport) > pr.reportEvery {
if pr.callback != nil {
pr.callback(pr.bytesRead, pr.totalSize, pr.desc)
}
pr.lastReport = time.Now()
}
return n, err
}
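// Because progressReader is a plain io.Reader decorator, the same pattern can
// meter any stream. A minimal sketch (the file name and callback body are
// hypothetical):
//
//	src, _ := os.Open("cluster.tar.gz")
//	defer src.Close()
//	info, _ := src.Stat()
//	pr := &progressReader{reader: src, totalSize: info.Size(), desc: "Copying",
//		callback: func(done, total int64, desc string) {
//			fmt.Printf("\r%s: %d/%d bytes", desc, done, total)
//		}}
//	_, _ = io.Copy(io.Discard, pr) // each Read passes through the wrapper, firing throttled callbacks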
// extractArchiveShell extracts using shell tar command (faster but no progress)
func (e *Engine) extractArchiveShell(ctx context.Context, archivePath, destDir string) error {
cmd := exec.CommandContext(ctx, "tar", "-xzf", archivePath, "-C", destDir)
// Stream stderr to avoid memory issues - tar can produce lots of output for large archives
@@ -1499,3 +1840,173 @@ func (e *Engine) quickValidateSQLDump(archivePath string, compressed bool) error
e.log.Debug("SQL dump validation passed", "path", archivePath)
return nil
}
// boostLockCapacity temporarily increases max_locks_per_transaction to prevent OOM
// during large restores with many BLOBs. Returns the original value for later reset.
// Uses ALTER SYSTEM + pg_reload_conf() so no restart is needed.
func (e *Engine) boostLockCapacity(ctx context.Context) (int, error) {
// Connect to PostgreSQL to run system commands
// Reuse the shared helper so the localhost/Unix-socket special case lives in one place
connStr := e.buildConnString()
db, err := sql.Open("pgx", connStr)
if err != nil {
return 0, fmt.Errorf("failed to connect: %w", err)
}
defer db.Close()
// Get current value
var currentValue int
err = db.QueryRowContext(ctx, "SHOW max_locks_per_transaction").Scan(&currentValue)
if err != nil {
// SHOW returns text; fall back to scanning as a string if the driver cannot convert to int
var currentValueStr string
err = db.QueryRowContext(ctx, "SHOW max_locks_per_transaction").Scan(&currentValueStr)
if err != nil {
return 0, fmt.Errorf("failed to get current max_locks_per_transaction: %w", err)
}
fmt.Sscanf(currentValueStr, "%d", &currentValue)
}
// Skip if already high enough
if currentValue >= 2048 {
e.log.Info("max_locks_per_transaction already sufficient", "value", currentValue)
return currentValue, nil
}
// Boost to 2048 (enough for most BLOB-heavy databases)
_, err = db.ExecContext(ctx, "ALTER SYSTEM SET max_locks_per_transaction = 2048")
if err != nil {
return currentValue, fmt.Errorf("failed to set max_locks_per_transaction: %w", err)
}
// Reload config without restart
_, err = db.ExecContext(ctx, "SELECT pg_reload_conf()")
if err != nil {
return currentValue, fmt.Errorf("failed to reload config: %w", err)
}
return currentValue, nil
}
// resetLockCapacity restores the original max_locks_per_transaction value
func (e *Engine) resetLockCapacity(ctx context.Context, originalValue int) error {
connStr := e.buildConnString()
db, err := sql.Open("pgx", connStr)
if err != nil {
return fmt.Errorf("failed to connect: %w", err)
}
defer db.Close()
// Reset to original value (or use RESET to go back to default)
if originalValue == 64 { // Default value
_, err = db.ExecContext(ctx, "ALTER SYSTEM RESET max_locks_per_transaction")
} else {
_, err = db.ExecContext(ctx, fmt.Sprintf("ALTER SYSTEM SET max_locks_per_transaction = %d", originalValue))
}
if err != nil {
return fmt.Errorf("failed to reset max_locks_per_transaction: %w", err)
}
// Reload config
_, err = db.ExecContext(ctx, "SELECT pg_reload_conf()")
if err != nil {
return fmt.Errorf("failed to reload config: %w", err)
}
return nil
}
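// The boost/reset pair is meant to bracket a large restore; an illustrative
// call shape (the warn-and-continue policy here is an assumption, not quoted
// from the engine):
//
//	orig, err := e.boostLockCapacity(ctx)
//	if err != nil {
//		e.log.Warn("Could not boost lock capacity", "error", err)
//	} else {
//		defer e.resetLockCapacity(ctx, orig)
//	}
//	// ... run the restore ...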
// OriginalSettings stores PostgreSQL settings to restore after operation
type OriginalSettings struct {
MaxLocks int
MaintenanceWorkMem string
}
// boostPostgreSQLSettings boosts multiple PostgreSQL settings for large restores
func (e *Engine) boostPostgreSQLSettings(ctx context.Context, lockBoostValue int) (*OriginalSettings, error) {
connStr := e.buildConnString()
db, err := sql.Open("pgx", connStr)
if err != nil {
return nil, fmt.Errorf("failed to connect: %w", err)
}
defer db.Close()
original := &OriginalSettings{}
// Get current max_locks_per_transaction
var maxLocksStr string
if err := db.QueryRowContext(ctx, "SHOW max_locks_per_transaction").Scan(&maxLocksStr); err == nil {
original.MaxLocks, _ = strconv.Atoi(maxLocksStr)
}
// Get current maintenance_work_mem
db.QueryRowContext(ctx, "SHOW maintenance_work_mem").Scan(&original.MaintenanceWorkMem)
// Boost max_locks_per_transaction (if not already high enough)
if original.MaxLocks < lockBoostValue {
_, err = db.ExecContext(ctx, fmt.Sprintf("ALTER SYSTEM SET max_locks_per_transaction = %d", lockBoostValue))
if err != nil {
e.log.Warn("Could not boost max_locks_per_transaction", "error", err)
}
}
// Boost maintenance_work_mem to 2GB for faster index creation
_, err = db.ExecContext(ctx, "ALTER SYSTEM SET maintenance_work_mem = '2GB'")
if err != nil {
e.log.Warn("Could not boost maintenance_work_mem", "error", err)
}
// Reload config to apply changes (no restart needed for these settings)
_, err = db.ExecContext(ctx, "SELECT pg_reload_conf()")
if err != nil {
return original, fmt.Errorf("failed to reload config: %w", err)
}
return original, nil
}
// resetPostgreSQLSettings restores original PostgreSQL settings
func (e *Engine) resetPostgreSQLSettings(ctx context.Context, original *OriginalSettings) error {
connStr := e.buildConnString()
db, err := sql.Open("pgx", connStr)
if err != nil {
return fmt.Errorf("failed to connect: %w", err)
}
defer db.Close()
// Reset max_locks_per_transaction
if original.MaxLocks == 64 { // Default
db.ExecContext(ctx, "ALTER SYSTEM RESET max_locks_per_transaction")
} else if original.MaxLocks > 0 {
db.ExecContext(ctx, fmt.Sprintf("ALTER SYSTEM SET max_locks_per_transaction = %d", original.MaxLocks))
}
// Reset maintenance_work_mem
if original.MaintenanceWorkMem == "64MB" { // Default
db.ExecContext(ctx, "ALTER SYSTEM RESET maintenance_work_mem")
} else if original.MaintenanceWorkMem != "" {
db.ExecContext(ctx, fmt.Sprintf("ALTER SYSTEM SET maintenance_work_mem = '%s'", original.MaintenanceWorkMem))
}
// Reload config
_, err = db.ExecContext(ctx, "SELECT pg_reload_conf()")
if err != nil {
return fmt.Errorf("failed to reload config: %w", err)
}
return nil
}

View File

@@ -0,0 +1,429 @@
package restore
import (
"context"
"database/sql"
"fmt"
"os"
"os/exec"
"path/filepath"
"runtime"
"strconv"
"strings"
"time"
"github.com/dustin/go-humanize"
"github.com/shirou/gopsutil/v3/mem"
)
// PreflightResult contains all preflight check results
type PreflightResult struct {
// Linux system checks
Linux LinuxChecks
// PostgreSQL checks
PostgreSQL PostgreSQLChecks
// Archive analysis
Archive ArchiveChecks
// Overall status
CanProceed bool
Warnings []string
Errors []string
}
// LinuxChecks contains Linux kernel/system checks
type LinuxChecks struct {
ShmMax int64 // /proc/sys/kernel/shmmax
ShmAll int64 // /proc/sys/kernel/shmall
MemTotal uint64 // Total RAM in bytes
MemAvailable uint64 // Available RAM in bytes
MemUsedPercent float64 // Memory usage percentage
ShmMaxOK bool // Is shmmax sufficient?
ShmAllOK bool // Is shmall sufficient?
MemAvailableOK bool // Is available RAM sufficient?
IsLinux bool // Are we running on Linux?
}
// PostgreSQLChecks contains PostgreSQL configuration checks
type PostgreSQLChecks struct {
MaxLocksPerTransaction int // Current setting
MaintenanceWorkMem string // Current setting
SharedBuffers string // Current setting (info only)
MaxConnections int // Current setting
Version string // PostgreSQL version
IsSuperuser bool // Can we modify settings?
}
// ArchiveChecks contains analysis of the backup archive
type ArchiveChecks struct {
TotalDatabases int
TotalBlobCount int // Estimated total BLOBs across all databases
BlobsByDB map[string]int // BLOBs per database
HasLargeBlobs bool // Any DB with >1000 BLOBs?
RecommendedLockBoost int // Calculated lock boost value
}
// RunPreflightChecks performs all preflight checks before a cluster restore
func (e *Engine) RunPreflightChecks(ctx context.Context, dumpsDir string, entries []os.DirEntry) (*PreflightResult, error) {
result := &PreflightResult{
CanProceed: true,
Archive: ArchiveChecks{
BlobsByDB: make(map[string]int),
},
}
e.progress.Update("[PREFLIGHT] Running system checks...")
e.log.Info("Starting preflight checks for cluster restore")
// 1. System checks (cross-platform via gopsutil)
e.checkSystemResources(result)
// 2. PostgreSQL checks (via existing connection)
e.checkPostgreSQL(ctx, result)
// 3. Archive analysis (count BLOBs to scale lock boost)
e.analyzeArchive(ctx, dumpsDir, entries, result)
// 4. Calculate recommended settings
e.calculateRecommendations(result)
// 5. Print summary
e.printPreflightSummary(result)
return result, nil
}
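// Presumable wiring ahead of a cluster restore (a sketch, not a quote from the
// engine), feeding the preflight analysis into the settings boost defined in
// the restore engine diff above:
//
//	pre, _ := e.RunPreflightChecks(ctx, dumpsDir, entries)
//	if pre.CanProceed {
//		if orig, err := e.boostPostgreSQLSettings(ctx, pre.Archive.RecommendedLockBoost); err == nil {
//			defer e.resetPostgreSQLSettings(ctx, orig)
//		}
//	}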
// checkSystemResources uses gopsutil for cross-platform system checks
func (e *Engine) checkSystemResources(result *PreflightResult) {
result.Linux.IsLinux = runtime.GOOS == "linux"
// Get memory info (works on Linux, macOS, Windows, BSD)
if vmem, err := mem.VirtualMemory(); err == nil {
result.Linux.MemTotal = vmem.Total
result.Linux.MemAvailable = vmem.Available
result.Linux.MemUsedPercent = vmem.UsedPercent
// 4GB minimum available for large restores
result.Linux.MemAvailableOK = vmem.Available >= 4*1024*1024*1024
e.log.Info("System memory detected",
"total", humanize.Bytes(vmem.Total),
"available", humanize.Bytes(vmem.Available),
"used_percent", fmt.Sprintf("%.1f%%", vmem.UsedPercent))
} else {
e.log.Warn("Could not detect system memory", "error", err)
}
// Linux-specific kernel checks (shmmax, shmall)
if result.Linux.IsLinux {
e.checkLinuxKernel(result)
}
// Add warnings for insufficient resources
if !result.Linux.MemAvailableOK && result.Linux.MemAvailable > 0 {
result.Warnings = append(result.Warnings,
fmt.Sprintf("Available RAM is low: %s (recommend 4GB+ for large restores)",
humanize.Bytes(result.Linux.MemAvailable)))
}
if result.Linux.MemUsedPercent > 85 {
result.Warnings = append(result.Warnings,
fmt.Sprintf("High memory usage: %.1f%% - restore may cause OOM", result.Linux.MemUsedPercent))
}
}
// checkLinuxKernel reads Linux-specific kernel limits from /proc
func (e *Engine) checkLinuxKernel(result *PreflightResult) {
// Read shmmax
if data, err := os.ReadFile("/proc/sys/kernel/shmmax"); err == nil {
val, _ := strconv.ParseInt(strings.TrimSpace(string(data)), 10, 64)
result.Linux.ShmMax = val
// 8GB minimum for large restores
result.Linux.ShmMaxOK = val >= 8*1024*1024*1024
}
// Read shmall (in pages, typically 4KB each)
if data, err := os.ReadFile("/proc/sys/kernel/shmall"); err == nil {
val, _ := strconv.ParseInt(strings.TrimSpace(string(data)), 10, 64)
result.Linux.ShmAll = val
// 2M pages = 8GB minimum
result.Linux.ShmAllOK = val >= 2*1024*1024
}
// Add kernel warnings
if !result.Linux.ShmMaxOK && result.Linux.ShmMax > 0 {
result.Warnings = append(result.Warnings,
fmt.Sprintf("Linux shmmax is low: %s (recommend 8GB+). Fix: sudo sysctl -w kernel.shmmax=17179869184",
humanize.Bytes(uint64(result.Linux.ShmMax))))
}
if !result.Linux.ShmAllOK && result.Linux.ShmAll > 0 {
result.Warnings = append(result.Warnings,
fmt.Sprintf("Linux shmall is low: %s pages (recommend 2M+). Fix: sudo sysctl -w kernel.shmall=4194304",
humanize.Comma(result.Linux.ShmAll)))
}
}
// checkPostgreSQL checks PostgreSQL configuration via SQL
func (e *Engine) checkPostgreSQL(ctx context.Context, result *PreflightResult) {
connStr := e.buildConnString()
db, err := sql.Open("pgx", connStr)
if err != nil {
e.log.Warn("Could not connect to PostgreSQL for preflight checks", "error", err)
return
}
defer db.Close()
// Check max_locks_per_transaction
var maxLocks string
if err := db.QueryRowContext(ctx, "SHOW max_locks_per_transaction").Scan(&maxLocks); err == nil {
result.PostgreSQL.MaxLocksPerTransaction, _ = strconv.Atoi(maxLocks)
}
// Check maintenance_work_mem
db.QueryRowContext(ctx, "SHOW maintenance_work_mem").Scan(&result.PostgreSQL.MaintenanceWorkMem)
// Check shared_buffers (info only, can't change without restart)
db.QueryRowContext(ctx, "SHOW shared_buffers").Scan(&result.PostgreSQL.SharedBuffers)
// Check max_connections
var maxConn string
if err := db.QueryRowContext(ctx, "SHOW max_connections").Scan(&maxConn); err == nil {
result.PostgreSQL.MaxConnections, _ = strconv.Atoi(maxConn)
}
// Check version
db.QueryRowContext(ctx, "SHOW server_version").Scan(&result.PostgreSQL.Version)
// Check if superuser
var isSuperuser bool
if err := db.QueryRowContext(ctx, "SELECT current_setting('is_superuser') = 'on'").Scan(&isSuperuser); err == nil {
result.PostgreSQL.IsSuperuser = isSuperuser
}
// Add info/warnings
if result.PostgreSQL.MaxLocksPerTransaction < 256 {
e.log.Info("PostgreSQL max_locks_per_transaction is low - will auto-boost",
"current", result.PostgreSQL.MaxLocksPerTransaction)
}
// Parse shared_buffers and warn if very low
sharedBuffersMB := parseMemoryToMB(result.PostgreSQL.SharedBuffers)
if sharedBuffersMB > 0 && sharedBuffersMB < 256 {
result.Warnings = append(result.Warnings,
fmt.Sprintf("PostgreSQL shared_buffers is low: %s (recommend 1GB+, requires restart)",
result.PostgreSQL.SharedBuffers))
}
}
// analyzeArchive counts BLOBs in dump files to calculate optimal lock boost
func (e *Engine) analyzeArchive(ctx context.Context, dumpsDir string, entries []os.DirEntry, result *PreflightResult) {
e.progress.Update("[PREFLIGHT] Analyzing archive for large objects...")
for _, entry := range entries {
if entry.IsDir() {
continue
}
result.Archive.TotalDatabases++
dumpFile := filepath.Join(dumpsDir, entry.Name())
dbName := strings.TrimSuffix(entry.Name(), ".dump")
dbName = strings.TrimSuffix(dbName, ".sql.gz")
// For custom format dumps, use pg_restore -l to count BLOBs
if strings.HasSuffix(entry.Name(), ".dump") {
blobCount := e.countBlobsInDump(ctx, dumpFile)
if blobCount > 0 {
result.Archive.BlobsByDB[dbName] = blobCount
result.Archive.TotalBlobCount += blobCount
if blobCount > 1000 {
result.Archive.HasLargeBlobs = true
}
}
}
// For SQL format, try to estimate from file content (sample check)
if strings.HasSuffix(entry.Name(), ".sql.gz") {
// Check for lo_create patterns in compressed SQL
blobCount := e.estimateBlobsInSQL(dumpFile)
if blobCount > 0 {
result.Archive.BlobsByDB[dbName] = blobCount
result.Archive.TotalBlobCount += blobCount
if blobCount > 1000 {
result.Archive.HasLargeBlobs = true
}
}
}
}
}
// countBlobsInDump uses pg_restore -l to count BLOB entries
func (e *Engine) countBlobsInDump(ctx context.Context, dumpFile string) int {
ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
defer cancel()
cmd := exec.CommandContext(ctx, "pg_restore", "-l", dumpFile)
output, err := cmd.Output()
if err != nil {
return 0
}
// Count lines containing BLOB/LARGE OBJECT
count := 0
for _, line := range strings.Split(string(output), "\n") {
if strings.Contains(line, "BLOB") || strings.Contains(line, "LARGE OBJECT") {
count++
}
}
return count
}
// estimateBlobsInSQL samples compressed SQL for lo_create patterns
func (e *Engine) estimateBlobsInSQL(sqlFile string) int {
// Use zgrep for efficient searching in gzipped files
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
// Count lo_create calls (each = one large object)
cmd := exec.CommandContext(ctx, "zgrep", "-c", "lo_create", sqlFile)
output, err := cmd.Output()
if err != nil {
// zgrep -c exits non-zero when there are no matches; try the broader SELECT pattern before giving up
cmd2 := exec.CommandContext(ctx, "zgrep", "-c", "SELECT.*lo_create", sqlFile)
output, err = cmd2.Output()
if err != nil {
return 0
}
}
count, _ := strconv.Atoi(strings.TrimSpace(string(output)))
return count
}
// calculateRecommendations determines optimal settings based on analysis
func (e *Engine) calculateRecommendations(result *PreflightResult) {
// Base lock boost
lockBoost := 2048
// Scale up based on BLOB count
if result.Archive.TotalBlobCount > 5000 {
lockBoost = 4096
}
if result.Archive.TotalBlobCount > 10000 {
lockBoost = 8192
}
if result.Archive.TotalBlobCount > 50000 {
lockBoost = 16384
}
// Cap at reasonable maximum
if lockBoost > 16384 {
lockBoost = 16384
}
result.Archive.RecommendedLockBoost = lockBoost
// Log recommendation
e.log.Info("Calculated recommended lock boost",
"total_blobs", result.Archive.TotalBlobCount,
"recommended_locks", lockBoost)
}
// printPreflightSummary prints a nice summary of all checks
func (e *Engine) printPreflightSummary(result *PreflightResult) {
fmt.Println()
fmt.Println(strings.Repeat("─", 60))
fmt.Println(" PREFLIGHT CHECKS")
fmt.Println(strings.Repeat("─", 60))
// System checks (cross-platform)
fmt.Println("\n System Resources:")
printCheck("Total RAM", humanize.Bytes(result.Linux.MemTotal), true)
printCheck("Available RAM", humanize.Bytes(result.Linux.MemAvailable), result.Linux.MemAvailableOK || result.Linux.MemAvailable == 0)
printCheck("Memory Usage", fmt.Sprintf("%.1f%%", result.Linux.MemUsedPercent), result.Linux.MemUsedPercent < 85)
// Linux-specific kernel checks
if result.Linux.IsLinux && result.Linux.ShmMax > 0 {
fmt.Println("\n Linux Kernel:")
printCheck("shmmax", humanize.Bytes(uint64(result.Linux.ShmMax)), result.Linux.ShmMaxOK)
printCheck("shmall", humanize.Comma(result.Linux.ShmAll)+" pages", result.Linux.ShmAllOK)
}
// PostgreSQL checks
fmt.Println("\n PostgreSQL:")
printCheck("Version", result.PostgreSQL.Version, true)
printCheck("max_locks_per_transaction", fmt.Sprintf("%s → %s (auto-boost)",
humanize.Comma(int64(result.PostgreSQL.MaxLocksPerTransaction)),
humanize.Comma(int64(result.Archive.RecommendedLockBoost))),
true)
printCheck("maintenance_work_mem", fmt.Sprintf("%s → 2GB (auto-boost)",
result.PostgreSQL.MaintenanceWorkMem), true)
printInfo("shared_buffers", result.PostgreSQL.SharedBuffers)
printCheck("Superuser", fmt.Sprintf("%v", result.PostgreSQL.IsSuperuser), result.PostgreSQL.IsSuperuser)
// Archive analysis
fmt.Println("\n Archive Analysis:")
printInfo("Total databases", humanize.Comma(int64(result.Archive.TotalDatabases)))
printInfo("Total BLOBs detected", humanize.Comma(int64(result.Archive.TotalBlobCount)))
if len(result.Archive.BlobsByDB) > 0 {
fmt.Println(" Databases with BLOBs:")
for db, count := range result.Archive.BlobsByDB {
status := "✓"
if count > 1000 {
status = "⚠" // plain assignment; a := here would shadow the outer variable and have no effect
}
fmt.Printf(" %s %s: %s BLOBs\n", status, db, humanize.Comma(int64(count)))
}
}
// Warnings
if len(result.Warnings) > 0 {
fmt.Println("\n ⚠ Warnings:")
for _, w := range result.Warnings {
fmt.Printf(" • %s\n", w)
}
}
fmt.Println(strings.Repeat("─", 60))
fmt.Println()
}
func printCheck(name, value string, ok bool) {
status := "✓"
if !ok {
status = "⚠"
}
fmt.Printf(" %s %s: %s\n", status, name, value)
}
func printInfo(name, value string) {
fmt.Printf(" %s: %s\n", name, value)
}
func parseMemoryToMB(memStr string) int {
memStr = strings.ToUpper(strings.TrimSpace(memStr))
var value int
var unit string
fmt.Sscanf(memStr, "%d%s", &value, &unit)
switch {
case strings.HasPrefix(unit, "G"):
return value * 1024
case strings.HasPrefix(unit, "M"):
return value
case strings.HasPrefix(unit, "K"):
return value / 1024
default:
return value / (1024 * 1024) // Assume bytes
}
}
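// Worked values for parseMemoryToMB, derived from the cases above:
//
//	parseMemoryToMB("2GB")   // 2048
//	parseMemoryToMB("512MB") // 512
//	parseMemoryToMB("64KB")  // 0 (64/1024 truncates)
//	parseMemoryToMB("2048")  // 0 (no unit: treated as bytes)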
func (e *Engine) buildConnString() string {
if e.cfg.Host == "localhost" || e.cfg.Host == "" {
return fmt.Sprintf("user=%s password=%s dbname=postgres sslmode=disable",
e.cfg.User, e.cfg.Password)
}
return fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=postgres sslmode=disable",
e.cfg.Host, e.cfg.Port, e.cfg.User, e.cfg.Password)
}

View File

@@ -255,7 +255,9 @@ func (s *Safety) CheckDiskSpaceAt(archivePath string, checkDir string, multiplie
// Get available disk space
availableSpace, err := getDiskSpace(checkDir)
if err != nil {
if s.log != nil {
s.log.Warn("Cannot check disk space", "error", err)
}
return nil // Don't fail if we can't check
}
@@ -278,10 +280,12 @@ func (s *Safety) CheckDiskSpaceAt(archivePath string, checkDir string, multiplie
checkDir)
}
if s.log != nil {
s.log.Info("Disk space check passed",
"location", checkDir,
"required", FormatBytes(requiredSpace),
"available", FormatBytes(availableSpace))
}
return nil
}

View File

@@ -251,13 +251,13 @@ func (m ArchiveBrowserModel) View() string {
var s strings.Builder
// Header
title := "[PKG] Backup Archives"
title := "[SELECT] Backup Archives"
if m.mode == "restore-single" {
title = "[PKG] Select Archive to Restore (Single Database)"
title = "[SELECT] Select Archive to Restore (Single Database)"
} else if m.mode == "restore-cluster" {
title = "[PKG] Select Archive to Restore (Cluster)"
title = "[SELECT] Select Archive to Restore (Cluster)"
} else if m.mode == "diagnose" {
title = "[SEARCH] Select Archive to Diagnose"
title = "[SELECT] Select Archive to Diagnose"
}
s.WriteString(titleStyle.Render(title))

View File

@@ -230,7 +230,7 @@ func (m BackupManagerModel) View() string {
var s strings.Builder
// Title
s.WriteString(TitleStyle.Render("[DB] Backup Archive Manager"))
s.WriteString(TitleStyle.Render("[SELECT] Backup Archive Manager"))
s.WriteString("\n\n")
// Status line (no box, bold+color accents)

View File

@@ -0,0 +1,406 @@
package tui
import (
"fmt"
"strings"
"sync"
"time"
)
// DetailedProgress provides schollz-like progress information for TUI rendering
// This is a data structure that can be queried by Bubble Tea's View() method
type DetailedProgress struct {
mu sync.RWMutex
// Core progress
Total int64 // Total bytes or items
Current int64 // Current bytes or items done
// Display info
Description string // What operation is happening
Unit string // "bytes", "files", "databases", etc.
// Timing for ETA/speed calculation
StartTime time.Time
LastUpdate time.Time
SpeedWindow []speedSample // Rolling window for speed calculation
// State
IsIndeterminate bool // True if total is unknown (spinner mode)
IsComplete bool
IsFailed bool
ErrorMessage string
}
type speedSample struct {
timestamp time.Time
bytes int64
}
// NewDetailedProgress creates a progress tracker with known total
func NewDetailedProgress(total int64, description string) *DetailedProgress {
return &DetailedProgress{
Total: total,
Description: description,
Unit: "bytes",
StartTime: time.Now(),
LastUpdate: time.Now(),
SpeedWindow: make([]speedSample, 0, 20),
IsIndeterminate: total <= 0,
}
}
// NewDetailedProgressItems creates a progress tracker for item counts
func NewDetailedProgressItems(total int, description string) *DetailedProgress {
return &DetailedProgress{
Total: int64(total),
Description: description,
Unit: "items",
StartTime: time.Now(),
LastUpdate: time.Now(),
SpeedWindow: make([]speedSample, 0, 20),
IsIndeterminate: total <= 0,
}
}
// NewDetailedProgressSpinner creates an indeterminate progress tracker
func NewDetailedProgressSpinner(description string) *DetailedProgress {
return &DetailedProgress{
Total: -1,
Description: description,
Unit: "",
StartTime: time.Now(),
LastUpdate: time.Now(),
SpeedWindow: make([]speedSample, 0, 20),
IsIndeterminate: true,
}
}
// Add adds to the current progress
func (dp *DetailedProgress) Add(n int64) {
dp.mu.Lock()
defer dp.mu.Unlock()
dp.Current += n
dp.LastUpdate = time.Now()
// Add speed sample
dp.SpeedWindow = append(dp.SpeedWindow, speedSample{
timestamp: dp.LastUpdate,
bytes: dp.Current,
})
// Keep only last 20 samples for speed calculation
if len(dp.SpeedWindow) > 20 {
dp.SpeedWindow = dp.SpeedWindow[len(dp.SpeedWindow)-20:]
}
}
// Set sets the current progress to a specific value
func (dp *DetailedProgress) Set(n int64) {
dp.mu.Lock()
defer dp.mu.Unlock()
dp.Current = n
dp.LastUpdate = time.Now()
// Add speed sample
dp.SpeedWindow = append(dp.SpeedWindow, speedSample{
timestamp: dp.LastUpdate,
bytes: dp.Current,
})
if len(dp.SpeedWindow) > 20 {
dp.SpeedWindow = dp.SpeedWindow[len(dp.SpeedWindow)-20:]
}
}
// SetTotal updates the total (useful when total becomes known during operation)
func (dp *DetailedProgress) SetTotal(total int64) {
dp.mu.Lock()
defer dp.mu.Unlock()
dp.Total = total
dp.IsIndeterminate = total <= 0
}
// SetDescription updates the description
func (dp *DetailedProgress) SetDescription(desc string) {
dp.mu.Lock()
defer dp.mu.Unlock()
dp.Description = desc
}
// Complete marks the progress as complete
func (dp *DetailedProgress) Complete() {
dp.mu.Lock()
defer dp.mu.Unlock()
dp.IsComplete = true
if dp.Total > 0 {
dp.Current = dp.Total // snap to 100%; spinner mode has Total <= 0, so leave Current untouched
}
}
// Fail marks the progress as failed
func (dp *DetailedProgress) Fail(errMsg string) {
dp.mu.Lock()
defer dp.mu.Unlock()
dp.IsFailed = true
dp.ErrorMessage = errMsg
}
// GetPercent returns the progress percentage (0-100)
func (dp *DetailedProgress) GetPercent() int {
dp.mu.RLock()
defer dp.mu.RUnlock()
return dp.getPercentLocked()
}
// GetSpeed returns the current transfer speed in bytes/second
func (dp *DetailedProgress) GetSpeed() float64 {
dp.mu.RLock()
defer dp.mu.RUnlock()
if len(dp.SpeedWindow) < 2 {
return 0
}
// Use first and last samples in window for smoothed speed
first := dp.SpeedWindow[0]
last := dp.SpeedWindow[len(dp.SpeedWindow)-1]
elapsed := last.timestamp.Sub(first.timestamp).Seconds()
if elapsed <= 0 {
return 0
}
bytesTransferred := last.bytes - first.bytes
return float64(bytesTransferred) / elapsed
}
// GetETA returns the estimated time remaining
func (dp *DetailedProgress) GetETA() time.Duration {
dp.mu.RLock()
defer dp.mu.RUnlock()
return dp.getETALocked()
}
func (dp *DetailedProgress) getSpeedLocked() float64 {
if len(dp.SpeedWindow) < 2 {
return 0
}
first := dp.SpeedWindow[0]
last := dp.SpeedWindow[len(dp.SpeedWindow)-1]
elapsed := last.timestamp.Sub(first.timestamp).Seconds()
if elapsed <= 0 {
return 0
}
bytesTransferred := last.bytes - first.bytes
return float64(bytesTransferred) / elapsed
}
// GetElapsed returns the elapsed time since start
func (dp *DetailedProgress) GetElapsed() time.Duration {
dp.mu.RLock()
defer dp.mu.RUnlock()
return time.Since(dp.StartTime)
}
// GetState returns a snapshot of the current state for rendering
func (dp *DetailedProgress) GetState() DetailedProgressState {
dp.mu.RLock()
defer dp.mu.RUnlock()
return DetailedProgressState{
Description: dp.Description,
Current: dp.Current,
Total: dp.Total,
Percent: dp.getPercentLocked(),
Speed: dp.getSpeedLocked(),
ETA: dp.getETALocked(),
Elapsed: time.Since(dp.StartTime),
Unit: dp.Unit,
IsIndeterminate: dp.IsIndeterminate,
IsComplete: dp.IsComplete,
IsFailed: dp.IsFailed,
ErrorMessage: dp.ErrorMessage,
}
}
func (dp *DetailedProgress) getPercentLocked() int {
if dp.IsIndeterminate || dp.Total <= 0 {
return 0
}
percent := int((dp.Current * 100) / dp.Total)
if percent > 100 {
return 100
}
return percent
}
func (dp *DetailedProgress) getETALocked() time.Duration {
if dp.IsIndeterminate || dp.Total <= 0 || dp.Current >= dp.Total {
return 0
}
speed := dp.getSpeedLocked()
if speed <= 0 {
return 0
}
remaining := dp.Total - dp.Current
seconds := float64(remaining) / speed
return time.Duration(seconds) * time.Second
}
// DetailedProgressState is an immutable snapshot for rendering
type DetailedProgressState struct {
Description string
Current int64
Total int64
Percent int
Speed float64 // bytes/sec
ETA time.Duration
Elapsed time.Duration
Unit string
IsIndeterminate bool
IsComplete bool
IsFailed bool
ErrorMessage string
}
// RenderProgressBar renders a TUI-friendly progress bar string
// Returns something like: "Extracting archive [████████░░░░░░░░░░░░] 45% 12.5 MB/s ETA: 2m 30s"
func (s DetailedProgressState) RenderProgressBar(width int) string {
if s.IsIndeterminate {
return s.renderIndeterminate()
}
// Progress bar
barWidth := 30
if width < 80 {
barWidth = 20
}
filled := (s.Percent * barWidth) / 100
if filled > barWidth {
filled = barWidth
}
bar := strings.Repeat("█", filled) + strings.Repeat("░", barWidth-filled)
// Format bytes
currentStr := FormatBytes(s.Current)
totalStr := FormatBytes(s.Total)
// Format speed
speedStr := ""
if s.Speed > 0 {
speedStr = fmt.Sprintf("%s/s", FormatBytes(int64(s.Speed)))
}
// Format ETA
etaStr := ""
if s.ETA > 0 && !s.IsComplete {
etaStr = fmt.Sprintf("ETA: %s", FormatDurationShort(s.ETA))
}
// Build the line
parts := []string{
fmt.Sprintf("[%s]", bar),
fmt.Sprintf("%3d%%", s.Percent),
}
if s.Unit == "bytes" && s.Total > 0 {
parts = append(parts, fmt.Sprintf("%s/%s", currentStr, totalStr))
} else if s.Total > 0 {
parts = append(parts, fmt.Sprintf("%d/%d", s.Current, s.Total))
}
if speedStr != "" {
parts = append(parts, speedStr)
}
if etaStr != "" {
parts = append(parts, etaStr)
}
return strings.Join(parts, " ")
}
func (s DetailedProgressState) renderIndeterminate() string {
elapsed := FormatDurationShort(s.Elapsed)
return fmt.Sprintf("[spinner] %s Elapsed: %s", s.Description, elapsed)
}
// RenderCompact renders a compact single-line progress string
func (s DetailedProgressState) RenderCompact() string {
if s.IsComplete {
return fmt.Sprintf("[OK] %s completed in %s", s.Description, FormatDurationShort(s.Elapsed))
}
if s.IsFailed {
return fmt.Sprintf("[FAIL] %s: %s", s.Description, s.ErrorMessage)
}
if s.IsIndeterminate {
return fmt.Sprintf("[...] %s (%s)", s.Description, FormatDurationShort(s.Elapsed))
}
return fmt.Sprintf("[%3d%%] %s - %s/%s", s.Percent, s.Description,
FormatBytes(s.Current), FormatBytes(s.Total))
}
// FormatBytes formats bytes in human-readable format
func FormatBytes(b int64) string {
const unit = 1024
if b < unit {
return fmt.Sprintf("%d B", b)
}
div, exp := int64(unit), 0
for n := b / unit; n >= unit; n /= unit {
div *= unit
exp++
}
return fmt.Sprintf("%.1f %cB", float64(b)/float64(div), "KMGTPE"[exp])
}
// FormatDurationShort formats duration in short form
func FormatDurationShort(d time.Duration) string {
if d < time.Second {
return "<1s"
}
if d < time.Minute {
return fmt.Sprintf("%ds", int(d.Seconds()))
}
if d < time.Hour {
m := int(d.Minutes())
s := int(d.Seconds()) % 60
if s > 0 {
return fmt.Sprintf("%dm %ds", m, s)
}
return fmt.Sprintf("%dm", m)
}
h := int(d.Hours())
m := int(d.Minutes()) % 60
return fmt.Sprintf("%dh %dm", h, m)
}
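// A minimal usage sketch from a Bubble Tea model; only the DetailedProgress API
// above is real, the worker wiring and m.width are hypothetical:
//
//	dp := NewDetailedProgress(totalBytes, "Extracting archive")
//	dp.Add(int64(n))                         // from the worker goroutine, per chunk
//	state := dp.GetState()                   // in View(): immutable snapshot
//	line := state.RenderProgressBar(m.width)
//	// "[████████░░░░░░░░░░░░] 45% 1.2 GB/2.7 GB 12.5 MB/s ETA: 2m 30s"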

View File

@@ -160,7 +160,7 @@ func (m DiagnoseViewModel) View() string {
var s strings.Builder
// Header
s.WriteString(titleStyle.Render("[SEARCH] Backup Diagnosis"))
s.WriteString(titleStyle.Render("[CHECK] Backup Diagnosis"))
s.WriteString("\n\n")
// Archive info
@@ -204,132 +204,111 @@ func (m DiagnoseViewModel) View() string {
func (m DiagnoseViewModel) renderSingleResult(result *restore.DiagnoseResult) string {
var s strings.Builder
// Status Box
s.WriteString("+--[ VALIDATION STATUS ]" + strings.Repeat("-", 37) + "+\n")
// Validation Status
s.WriteString(diagnoseHeaderStyle.Render("[STATUS] Validation"))
s.WriteString("\n")
if result.IsValid {
s.WriteString("| " + diagnosePassStyle.Render("[OK] VALID - Archive passed all checks") + strings.Repeat(" ", 18) + "|\n")
s.WriteString(diagnosePassStyle.Render(" [OK] VALID - Archive passed all checks"))
s.WriteString("\n")
} else {
s.WriteString("| " + diagnoseFailStyle.Render("[FAIL] INVALID - Archive has problems") + strings.Repeat(" ", 19) + "|\n")
s.WriteString(diagnoseFailStyle.Render(" [FAIL] INVALID - Archive has problems"))
s.WriteString("\n")
}
if result.IsTruncated {
s.WriteString("| " + diagnoseFailStyle.Render("[!] TRUNCATED - File is incomplete") + strings.Repeat(" ", 22) + "|\n")
s.WriteString(diagnoseFailStyle.Render(" [!] TRUNCATED - File is incomplete"))
s.WriteString("\n")
}
if result.IsCorrupted {
s.WriteString("| " + diagnoseFailStyle.Render("[!] CORRUPTED - File structure damaged") + strings.Repeat(" ", 18) + "|\n")
s.WriteString(diagnoseFailStyle.Render(" [!] CORRUPTED - File structure damaged"))
s.WriteString("\n")
}
s.WriteString("+" + strings.Repeat("-", 60) + "+\n\n")
s.WriteString("\n")
// Details Box
// Details
if result.Details != nil {
s.WriteString("+--[ DETAILS ]" + strings.Repeat("-", 46) + "+\n")
s.WriteString(diagnoseHeaderStyle.Render("[INFO] Details"))
s.WriteString("\n")
if result.Details.HasPGDMPSignature {
s.WriteString("| " + diagnosePassStyle.Render("[+]") + " PostgreSQL custom format (PGDMP)" + strings.Repeat(" ", 20) + "|\n")
s.WriteString(diagnosePassStyle.Render(" [+]") + " PostgreSQL custom format (PGDMP)\n")
}
if result.Details.HasSQLHeader {
s.WriteString("| " + diagnosePassStyle.Render("[+]") + " PostgreSQL SQL header found" + strings.Repeat(" ", 25) + "|\n")
s.WriteString(diagnosePassStyle.Render(" [+]") + " PostgreSQL SQL header found\n")
}
if result.Details.GzipValid {
s.WriteString("| " + diagnosePassStyle.Render("[+]") + " Gzip compression valid" + strings.Repeat(" ", 30) + "|\n")
s.WriteString(diagnosePassStyle.Render(" [+]") + " Gzip compression valid\n")
}
if result.Details.PgRestoreListable {
tableInfo := fmt.Sprintf(" (%d tables)", result.Details.TableCount)
padding := 36 - len(tableInfo)
if padding < 0 {
padding = 0
}
s.WriteString("| " + diagnosePassStyle.Render("[+]") + " pg_restore can list contents" + tableInfo + strings.Repeat(" ", padding) + "|\n")
s.WriteString(diagnosePassStyle.Render(" [+]") + fmt.Sprintf(" pg_restore can list contents (%d tables)\n", result.Details.TableCount))
}
if result.Details.CopyBlockCount > 0 {
blockInfo := fmt.Sprintf("%d COPY blocks found", result.Details.CopyBlockCount)
padding := 50 - len(blockInfo)
if padding < 0 {
padding = 0
}
s.WriteString("| [-] " + blockInfo + strings.Repeat(" ", padding) + "|\n")
s.WriteString(fmt.Sprintf(" [-] %d COPY blocks found\n", result.Details.CopyBlockCount))
}
if result.Details.UnterminatedCopy {
s.WriteString("| " + diagnoseFailStyle.Render("[-]") + " Unterminated COPY: " + truncate(result.Details.LastCopyTable, 30) + strings.Repeat(" ", 5) + "|\n")
s.WriteString(diagnoseFailStyle.Render(" [-]") + " Unterminated COPY: " + truncate(result.Details.LastCopyTable, 30) + "\n")
}
if result.Details.ProperlyTerminated {
s.WriteString("| " + diagnosePassStyle.Render("[+]") + " All COPY blocks properly terminated" + strings.Repeat(" ", 17) + "|\n")
s.WriteString(diagnosePassStyle.Render(" [+]") + " All COPY blocks properly terminated\n")
}
if result.Details.ExpandedSize > 0 {
sizeInfo := fmt.Sprintf("Expanded: %s (%.1fx)", formatSize(result.Details.ExpandedSize), result.Details.CompressionRatio)
padding := 50 - len(sizeInfo)
if padding < 0 {
padding = 0
}
s.WriteString("| [-] " + sizeInfo + strings.Repeat(" ", padding) + "|\n")
s.WriteString(fmt.Sprintf(" [-] Expanded: %s (%.1fx)\n", formatSize(result.Details.ExpandedSize), result.Details.CompressionRatio))
}
s.WriteString("+" + strings.Repeat("-", 60) + "+\n")
s.WriteString("\n")
}
// Errors Box
// Errors
if len(result.Errors) > 0 {
s.WriteString("\n+--[ ERRORS ]" + strings.Repeat("-", 47) + "+\n")
s.WriteString(diagnoseFailStyle.Render("[FAIL] Errors"))
s.WriteString("\n")
for i, e := range result.Errors {
if i >= 5 {
remaining := fmt.Sprintf("... and %d more errors", len(result.Errors)-5)
padding := 56 - len(remaining)
s.WriteString("| " + remaining + strings.Repeat(" ", padding) + "|\n")
s.WriteString(fmt.Sprintf(" ... and %d more errors\n", len(result.Errors)-5))
break
}
errText := truncate(e, 54)
padding := 56 - len(errText)
if padding < 0 {
padding = 0
}
s.WriteString("| " + errText + strings.Repeat(" ", padding) + "|\n")
s.WriteString(" " + truncate(e, 60) + "\n")
}
s.WriteString("+" + strings.Repeat("-", 60) + "+\n")
s.WriteString("\n")
}
// Warnings Box
// Warnings
if len(result.Warnings) > 0 {
s.WriteString("\n+--[ WARNINGS ]" + strings.Repeat("-", 45) + "+\n")
s.WriteString(diagnoseWarnStyle.Render("[WARN] Warnings"))
s.WriteString("\n")
for i, w := range result.Warnings {
if i >= 3 {
remaining := fmt.Sprintf("... and %d more warnings", len(result.Warnings)-3)
padding := 56 - len(remaining)
s.WriteString("| " + remaining + strings.Repeat(" ", padding) + "|\n")
s.WriteString(fmt.Sprintf(" ... and %d more warnings\n", len(result.Warnings)-3))
break
}
warnText := truncate(w, 54)
padding := 56 - len(warnText)
if padding < 0 {
padding = 0
}
s.WriteString("| " + warnText + strings.Repeat(" ", padding) + "|\n")
s.WriteString(" " + truncate(w, 60) + "\n")
}
s.WriteString("+" + strings.Repeat("-", 60) + "+\n")
s.WriteString("\n")
}
// Recommendations Box
// Recommendations
if !result.IsValid {
s.WriteString("\n+--[ RECOMMENDATIONS ]" + strings.Repeat("-", 38) + "+\n")
s.WriteString(diagnoseInfoStyle.Render("[HINT] Recommendations"))
s.WriteString("\n")
if result.IsTruncated {
s.WriteString("| 1. Re-run backup with current version (v3.42.12+) |\n")
s.WriteString("| 2. Check disk space on backup server |\n")
s.WriteString("| 3. Verify network stability for remote backups |\n")
s.WriteString(" 1. Re-run backup with current version (v3.42+)\n")
s.WriteString(" 2. Check disk space on backup server\n")
s.WriteString(" 3. Verify network stability for remote backups\n")
}
if result.IsCorrupted {
s.WriteString("| 1. Verify backup was transferred completely |\n")
s.WriteString("| 2. Try restoring from a previous backup |\n")
s.WriteString(" 1. Verify backup was transferred completely\n")
s.WriteString(" 2. Try restoring from a previous backup\n")
}
s.WriteString("+" + strings.Repeat("-", 60) + "+\n")
}
return s.String()
@@ -349,10 +328,8 @@ func (m DiagnoseViewModel) renderClusterResults() string {
}
}
s.WriteString(strings.Repeat("-", 60))
s.WriteString("\n")
s.WriteString(diagnoseHeaderStyle.Render(fmt.Sprintf("[STATS] CLUSTER SUMMARY: %d databases\n", len(m.results))))
s.WriteString(strings.Repeat("-", 60))
s.WriteString(diagnoseHeaderStyle.Render(fmt.Sprintf("[STATS] Cluster Summary: %d databases", len(m.results))))
s.WriteString("\n\n")
if invalidCount == 0 {
@@ -364,7 +341,7 @@ func (m DiagnoseViewModel) renderClusterResults() string {
}
// List all dumps with status
s.WriteString(diagnoseHeaderStyle.Render("Database Dumps:"))
s.WriteString(diagnoseHeaderStyle.Render("[LIST] Database Dumps"))
s.WriteString("\n")
// Show visible range based on cursor
@@ -413,9 +390,7 @@ func (m DiagnoseViewModel) renderClusterResults() string {
if m.cursor < len(m.results) {
selected := m.results[m.cursor]
s.WriteString("\n")
s.WriteString(strings.Repeat("-", 60))
s.WriteString("\n")
s.WriteString(diagnoseHeaderStyle.Render("Selected: " + selected.FileName))
s.WriteString(diagnoseHeaderStyle.Render("[INFO] Selected: " + selected.FileName))
s.WriteString("\n\n")
// Show condensed details for selected

View File

@@ -191,7 +191,7 @@ func (m HistoryViewModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
func (m HistoryViewModel) View() string {
var s strings.Builder
header := titleStyle.Render("[HISTORY] Operation History")
header := titleStyle.Render("[STATS] Operation History")
s.WriteString(fmt.Sprintf("\n%s\n\n", header))
if len(m.history) == 0 {

View File

@@ -285,7 +285,7 @@ func (m *MenuModel) View() string {
var s string
// Header
header := titleStyle.Render("[DB] Database Backup Tool - Interactive Menu")
header := titleStyle.Render("Database Backup Tool - Interactive Menu")
s += fmt.Sprintf("\n%s\n\n", header)
if len(m.dbTypes) > 0 {
@@ -334,13 +334,13 @@ func (m *MenuModel) View() string {
// handleSingleBackup opens database selector for single backup
func (m *MenuModel) handleSingleBackup() (tea.Model, tea.Cmd) {
selector := NewDatabaseSelector(m.config, m.logger, m, m.ctx, "[DB] Single Database Backup", "single")
selector := NewDatabaseSelector(m.config, m.logger, m, m.ctx, "[SELECT] Single Database Backup", "single")
return selector, selector.Init()
}
// handleSampleBackup opens database selector for sample backup
func (m *MenuModel) handleSampleBackup() (tea.Model, tea.Cmd) {
selector := NewDatabaseSelector(m.config, m.logger, m, m.ctx, "[STATS] Sample Database Backup", "sample")
selector := NewDatabaseSelector(m.config, m.logger, m, m.ctx, "[SELECT] Sample Database Backup", "sample")
return selector, selector.Init()
}
@@ -356,7 +356,7 @@ func (m *MenuModel) handleClusterBackup() (tea.Model, tea.Cmd) {
return executor, executor.Init()
}
confirm := NewConfirmationModelWithAction(m.config, m.logger, m,
"[DB] Cluster Backup",
"[CHECK] Cluster Backup",
"This will backup ALL databases in the cluster. Continue?",
func() (tea.Model, tea.Cmd) {
executor := NewBackupExecution(m.config, m.logger, m, m.ctx, "cluster", "", 0)

View File

@@ -6,6 +6,7 @@ import (
"os/exec"
"path/filepath"
"strings"
"sync"
"time"
tea "github.com/charmbracelet/bubbletea"
@@ -45,6 +46,17 @@ type RestoreExecutionModel struct {
spinnerFrame int
spinnerFrames []string
// Detailed byte progress for schollz-style display
bytesTotal int64
bytesDone int64
description string
showBytes bool // True when we have real byte progress to show
speed float64 // Rolling window speed in bytes/sec
// Database count progress (for cluster restore)
dbTotal int
dbDone int
// Results
done bool
cancelling bool // True when user has requested cancellation
@@ -101,6 +113,9 @@ type restoreProgressMsg struct {
phase string
progress int
detail string
bytesTotal int64
bytesDone int64
description string
}
type restoreCompleteMsg struct {
@@ -109,6 +124,102 @@ type restoreCompleteMsg struct {
elapsed time.Duration
}
// sharedProgressState holds progress state that can be safely accessed from callbacks
type sharedProgressState struct {
mu sync.Mutex
bytesTotal int64
bytesDone int64
description string
hasUpdate bool
// Database count progress (for cluster restore)
dbTotal int
dbDone int
// Rolling window for speed calculation
speedSamples []restoreSpeedSample
}
type restoreSpeedSample struct {
timestamp time.Time
bytes int64
}
// Package-level shared progress state for restore operations
var (
currentRestoreProgressMu sync.Mutex
currentRestoreProgressState *sharedProgressState
)
func setCurrentRestoreProgress(state *sharedProgressState) {
currentRestoreProgressMu.Lock()
defer currentRestoreProgressMu.Unlock()
currentRestoreProgressState = state
}
func clearCurrentRestoreProgress() {
currentRestoreProgressMu.Lock()
defer currentRestoreProgressMu.Unlock()
currentRestoreProgressState = nil
}
func getCurrentRestoreProgress() (bytesTotal, bytesDone int64, description string, hasUpdate bool, dbTotal, dbDone int, speed float64) {
currentRestoreProgressMu.Lock()
defer currentRestoreProgressMu.Unlock()
if currentRestoreProgressState == nil {
return 0, 0, "", false, 0, 0, 0
}
currentRestoreProgressState.mu.Lock()
defer currentRestoreProgressState.mu.Unlock()
// Calculate rolling window speed
speed = calculateRollingSpeed(currentRestoreProgressState.speedSamples)
return currentRestoreProgressState.bytesTotal, currentRestoreProgressState.bytesDone,
currentRestoreProgressState.description, currentRestoreProgressState.hasUpdate,
currentRestoreProgressState.dbTotal, currentRestoreProgressState.dbDone, speed
}
// calculateRollingSpeed calculates speed from recent samples (last 5 seconds)
func calculateRollingSpeed(samples []restoreSpeedSample) float64 {
if len(samples) < 2 {
return 0
}
// Use samples from last 5 seconds for smoothed speed
now := time.Now()
cutoff := now.Add(-5 * time.Second)
var firstInWindow, lastInWindow *restoreSpeedSample
for i := range samples {
if samples[i].timestamp.After(cutoff) {
if firstInWindow == nil {
firstInWindow = &samples[i]
}
lastInWindow = &samples[i]
}
}
// Fall back to first and last if window is empty
if firstInWindow == nil || lastInWindow == nil || firstInWindow == lastInWindow {
firstInWindow = &samples[0]
lastInWindow = &samples[len(samples)-1]
}
elapsed := lastInWindow.timestamp.Sub(firstInWindow.timestamp).Seconds()
if elapsed <= 0 {
return 0
}
bytesTransferred := lastInWindow.bytes - firstInWindow.bytes
return float64(bytesTransferred) / elapsed
}
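// Worked example (illustrative numbers): with samples (t0, 0 B), (t0+2s, 40 MB),
// (t0+4s, 90 MB) and now = t0+4s, the 5-second cutoff keeps all three samples,
// so speed = (90 MB - 0 B) / 4 s = 22.5 MB/s.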
// restoreProgressChannel allows sending progress updates from the restore goroutine
type restoreProgressChannel chan restoreProgressMsg
func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config, log logger.Logger, archive ArchiveInfo, targetDB string, cleanFirst, createIfMissing bool, restoreType string, cleanClusterFirst bool, existingDBs []string, saveDebugLog bool) tea.Cmd {
return func() tea.Msg {
// NO TIMEOUT for restore operations - a restore takes as long as it takes
@@ -156,6 +267,48 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
// STEP 2: Create restore engine with silent progress (no stdout interference with TUI)
engine := restore.NewSilent(cfg, log, dbClient)
// Set up progress callback for detailed progress reporting
// We use a shared pointer that can be queried by the TUI ticker
progressState := &sharedProgressState{
speedSamples: make([]restoreSpeedSample, 0, 100),
}
engine.SetProgressCallback(func(current, total int64, description string) {
progressState.mu.Lock()
defer progressState.mu.Unlock()
progressState.bytesDone = current
progressState.bytesTotal = total
progressState.description = description
progressState.hasUpdate = true
// Add speed sample for rolling window calculation
progressState.speedSamples = append(progressState.speedSamples, restoreSpeedSample{
timestamp: time.Now(),
bytes: current,
})
// Keep only last 100 samples
if len(progressState.speedSamples) > 100 {
progressState.speedSamples = progressState.speedSamples[len(progressState.speedSamples)-100:]
}
})
// Set up database progress callback for cluster restore
engine.SetDatabaseProgressCallback(func(done, total int, dbName string) {
progressState.mu.Lock()
defer progressState.mu.Unlock()
progressState.dbDone = done
progressState.dbTotal = total
progressState.description = fmt.Sprintf("Restoring %s", dbName)
progressState.hasUpdate = true
// Clear byte progress when switching to db progress
progressState.bytesTotal = 0
progressState.bytesDone = 0
})
// Store progress state in a package-level variable for the ticker to access.
// This is a workaround: the engine callbacks hold no tea.Program reference,
// so they cannot deliver tea.Msg values directly; the tick loop polls instead.
setCurrentRestoreProgress(progressState)
defer clearCurrentRestoreProgress()
// Enable debug logging if requested
if saveDebugLog {
// Generate debug log path using configured WorkDir
@@ -165,9 +318,6 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
log.Info("Debug logging enabled", "path", debugLogPath)
}
// Set up progress callback (but it won't work in goroutine - progress is already sent via logs)
// The TUI will just use spinner animation to show activity
// STEP 3: Execute restore based on type
var restoreErr error
if restoreType == "restore-cluster" {
@@ -206,7 +356,29 @@ func (m RestoreExecutionModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
m.spinnerFrame = (m.spinnerFrame + 1) % len(m.spinnerFrames)
m.elapsed = time.Since(m.startTime)
// Update status based on elapsed time to show progress
// Poll shared progress state for real-time updates
bytesTotal, bytesDone, description, hasUpdate, dbTotal, dbDone, speed := getCurrentRestoreProgress()
if hasUpdate && bytesTotal > 0 {
m.bytesTotal = bytesTotal
m.bytesDone = bytesDone
m.description = description
m.showBytes = true
m.speed = speed
// Update status to reflect actual progress
m.status = description
m.phase = "Extracting"
m.progress = int((bytesDone * 100) / bytesTotal)
} else if hasUpdate && dbTotal > 0 {
// Database count progress for cluster restore
m.dbTotal = dbTotal
m.dbDone = dbDone
m.showBytes = false
m.status = fmt.Sprintf("Restoring database %d of %d...", dbDone+1, dbTotal)
m.phase = "Restore"
m.progress = int((dbDone * 100) / dbTotal)
} else {
// Fallback: Update status based on elapsed time to show progress
// This provides visual feedback even though we don't have real-time progress
elapsedSec := int(m.elapsed.Seconds())
@@ -241,6 +413,7 @@ func (m RestoreExecutionModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
m.phase = "Restore"
}
}
}
return m, restoreTickCmd()
}
@@ -250,6 +423,15 @@ func (m RestoreExecutionModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
m.status = msg.status
m.phase = msg.phase
m.progress = msg.progress
// Update byte-level progress if available
if msg.bytesTotal > 0 {
m.bytesTotal = msg.bytesTotal
m.bytesDone = msg.bytesDone
m.description = msg.description
m.showBytes = true
}
if msg.detail != "" {
m.details = append(m.details, msg.detail)
// Keep only last 5 details
@@ -321,9 +503,9 @@ func (m RestoreExecutionModel) View() string {
s.Grow(512) // Pre-allocate estimated capacity for better performance
// Title
title := "[RESTORE] Restoring Database"
title := "[EXEC] Restoring Database"
if m.restoreType == "restore-cluster" {
title = "[RESTORE] Restoring Cluster"
title = "[EXEC] Restoring Cluster"
}
s.WriteString(titleStyle.Render(title))
s.WriteString("\n\n")
@@ -356,19 +538,39 @@ func (m RestoreExecutionModel) View() string {
// Show progress
s.WriteString(fmt.Sprintf("Phase: %s\n", m.phase))
// Show status with rotating spinner (unified indicator for all operations)
// Show detailed progress bar when we have byte-level information
// In this case, hide the spinner for cleaner display
if m.showBytes && m.bytesTotal > 0 {
// Status line without spinner (progress bar provides activity indication)
s.WriteString(fmt.Sprintf("Status: %s\n", m.status))
s.WriteString("\n")
// Render schollz-style progress bar with bytes, rolling speed, ETA
s.WriteString(renderDetailedProgressBarWithSpeed(m.bytesDone, m.bytesTotal, m.speed))
s.WriteString("\n\n")
} else if m.dbTotal > 0 {
// Database count progress for cluster restore
spinner := m.spinnerFrames[m.spinnerFrame]
s.WriteString(fmt.Sprintf("Status: %s %s\n", spinner, m.status))
s.WriteString("\n")
// Show database progress bar
s.WriteString(renderDatabaseProgressBar(m.dbDone, m.dbTotal))
s.WriteString("\n\n")
} else {
// Show status with rotating spinner (for phases without detailed progress)
spinner := m.spinnerFrames[m.spinnerFrame]
s.WriteString(fmt.Sprintf("Status: %s %s\n", spinner, m.status))
s.WriteString("\n")
// Only show progress bar for single database restore
// Cluster restore uses spinner only (consistent with CLI behavior)
if m.restoreType == "restore-single" {
// Fallback to simple progress bar for single database restore
progressBar := renderProgressBar(m.progress)
s.WriteString(progressBar)
s.WriteString(fmt.Sprintf(" %d%%\n", m.progress))
s.WriteString("\n")
}
}
// Elapsed time
s.WriteString(fmt.Sprintf("Elapsed: %s\n", formatDuration(m.elapsed)))
@@ -390,6 +592,92 @@ func renderProgressBar(percent int) string {
return successStyle.Render(bar) + infoStyle.Render(empty)
}
// renderDetailedProgressBar renders a schollz-style progress bar with bytes, speed, and ETA
// Uses elapsed time for speed calculation (fallback)
func renderDetailedProgressBar(done, total int64, elapsed time.Duration) string {
speed := 0.0
if elapsed.Seconds() > 0 {
speed = float64(done) / elapsed.Seconds()
}
return renderDetailedProgressBarWithSpeed(done, total, speed)
}
// renderDetailedProgressBarWithSpeed renders a schollz-style progress bar with pre-calculated rolling speed
func renderDetailedProgressBarWithSpeed(done, total int64, speed float64) string {
var s strings.Builder
// Calculate percentage
percent := 0
if total > 0 {
percent = int((done * 100) / total)
if percent > 100 {
percent = 100
}
}
// Render progress bar
width := 30
filled := (percent * width) / 100
barFilled := strings.Repeat("█", filled)
barEmpty := strings.Repeat("░", width-filled)
s.WriteString(successStyle.Render("["))
s.WriteString(successStyle.Render(barFilled))
s.WriteString(infoStyle.Render(barEmpty))
s.WriteString(successStyle.Render("]"))
// Percentage
s.WriteString(fmt.Sprintf(" %3d%%", percent))
// Bytes progress
s.WriteString(fmt.Sprintf(" %s / %s", FormatBytes(done), FormatBytes(total)))
// Speed display (using rolling window speed)
if speed > 0 {
s.WriteString(fmt.Sprintf(" %s/s", FormatBytes(int64(speed))))
// ETA calculation based on rolling speed
if done < total {
remaining := total - done
etaSeconds := float64(remaining) / speed
eta := time.Duration(etaSeconds) * time.Second
s.WriteString(fmt.Sprintf(" ETA: %s", FormatDurationShort(eta)))
}
}
return s.String()
}
// renderDatabaseProgressBar renders a progress bar for database count (cluster restore)
func renderDatabaseProgressBar(done, total int) string {
var s strings.Builder
// Calculate percentage
percent := 0
if total > 0 {
percent = (done * 100) / total
if percent > 100 {
percent = 100
}
}
// Render progress bar
width := 30
filled := (percent * width) / 100
barFilled := strings.Repeat("█", filled)
barEmpty := strings.Repeat("░", width-filled)
s.WriteString(successStyle.Render("["))
s.WriteString(successStyle.Render(barFilled))
s.WriteString(infoStyle.Render(barEmpty))
s.WriteString(successStyle.Render("]"))
// Count and percentage
s.WriteString(fmt.Sprintf(" %3d%% %d / %d databases", percent, done, total))
return s.String()
}
// formatDuration formats duration in human readable format
func formatDuration(d time.Duration) string {
if d < time.Minute {

View File

@@ -339,9 +339,9 @@ func (m RestorePreviewModel) View() string {
var s strings.Builder
// Title
title := "Restore Preview"
title := "[CHECK] Restore Preview"
if m.mode == "restore-cluster" {
title = "Cluster Restore Preview"
title = "[CHECK] Cluster Restore Preview"
}
s.WriteString(titleStyle.Render(title))
s.WriteString("\n\n")

View File

@@ -688,7 +688,7 @@ func (m SettingsModel) View() string {
var b strings.Builder
// Header
header := titleStyle.Render("[CFG] Configuration Settings")
header := titleStyle.Render("[CONFIG] Configuration Settings")
b.WriteString(fmt.Sprintf("\n%s\n\n", header))
// Settings list
@@ -747,7 +747,7 @@ func (m SettingsModel) View() string {
// Current configuration summary
if !m.editing {
b.WriteString("\n")
b.WriteString(infoStyle.Render("[LOG] Current Configuration:"))
b.WriteString(infoStyle.Render("[INFO] Current Configuration"))
b.WriteString("\n")
summary := []string{

View File

@@ -173,7 +173,7 @@ func (m StatusViewModel) View() string {
s.WriteString(errorStyle.Render(fmt.Sprintf("[FAIL] Error: %v\n", m.err)))
s.WriteString("\n")
} else {
s.WriteString("Connection Status:\n")
s.WriteString("[CONN] Connection Status\n")
if m.connected {
s.WriteString(successStyle.Render(" [+] Connected\n"))
} else {
@@ -181,11 +181,12 @@ func (m StatusViewModel) View() string {
}
s.WriteString("\n")
s.WriteString(fmt.Sprintf("Database Type: %s (%s)\n", m.config.DisplayDatabaseType(), m.config.DatabaseType))
s.WriteString(fmt.Sprintf("Host: %s:%d\n", m.config.Host, m.config.Port))
s.WriteString(fmt.Sprintf("User: %s\n", m.config.User))
s.WriteString(fmt.Sprintf("Backup Directory: %s\n", m.config.BackupDir))
s.WriteString(fmt.Sprintf("Version: %s\n\n", m.dbVersion))
s.WriteString("[INFO] Server Details\n")
s.WriteString(fmt.Sprintf(" Database Type: %s (%s)\n", m.config.DisplayDatabaseType(), m.config.DatabaseType))
s.WriteString(fmt.Sprintf(" Host: %s:%d\n", m.config.Host, m.config.Port))
s.WriteString(fmt.Sprintf(" User: %s\n", m.config.User))
s.WriteString(fmt.Sprintf(" Backup Directory: %s\n", m.config.BackupDir))
s.WriteString(fmt.Sprintf(" Version: %s\n\n", m.dbVersion))
if m.dbCount > 0 {
s.WriteString(fmt.Sprintf("Databases Found: %s\n", successStyle.Render(fmt.Sprintf("%d", m.dbCount))))

View File

@@ -120,12 +120,36 @@ var ShortcutStyle = lipgloss.NewStyle().
// =============================================================================
// HELPER PREFIXES (no emoticons)
// =============================================================================
// Convention for TUI titles/headers:
// [CHECK] - Verification/diagnosis screens
// [STATS] - Statistics/status screens
// [SELECT] - Selection/browser screens
// [EXEC] - Execution/running screens
// [CONFIG] - Configuration/settings screens
//
// Convention for status messages:
// [OK] - Success
// [FAIL] - Error/failure
// [WAIT] - In progress
// [WARN] - Warning
// [INFO] - Information
const (
// Title prefixes (for view headers)
PrefixCheck = "[CHECK]"
PrefixStats = "[STATS]"
PrefixSelect = "[SELECT]"
PrefixExec = "[EXEC]"
PrefixConfig = "[CONFIG]"
// Status prefixes
PrefixOK = "[OK]"
PrefixFail = "[FAIL]"
PrefixWarn = "[!]"
PrefixInfo = "[i]"
PrefixWait = "[WAIT]"
PrefixWarn = "[WARN]"
PrefixInfo = "[INFO]"
// List item prefixes
PrefixPlus = "[+]"
PrefixMinus = "[-]"
PrefixArrow = ">"

View File

@@ -16,7 +16,7 @@ import (
// Build information (set by ldflags)
var (
version = "3.42.10"
version = "3.42.34"
buildTime = "unknown"
gitCommit = "unknown"
)