Compare commits

5 commits

| SHA1 |
|---|
| `721e53fe6a` |
| `4e09066aa5` |
| `6a24ee39be` |
| `dc6dfd8b2c` |
| `7b4ab76313` |
CHANGELOG.md (+97)
@@ -5,6 +5,103 @@ All notable changes to dbbackup will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [3.42.34] - 2026-01-14 "Filesystem Abstraction"

### Added - spf13/afero for Filesystem Abstraction

- **New `internal/fs` package** for testable filesystem operations
- **In-memory filesystem** for unit testing without disk I/O
- **Global FS interface** that can be swapped for testing:

  ```go
  fs.SetFS(afero.NewMemMapFs()) // Use memory
  fs.ResetFS()                  // Back to real disk
  ```

- **Wrapper functions** for all common file operations:
  - `ReadFile`, `WriteFile`, `Create`, `Open`, `Remove`, `RemoveAll`
  - `Mkdir`, `MkdirAll`, `ReadDir`, `Walk`, `Glob`
  - `Exists`, `DirExists`, `IsDir`, `IsEmpty`
  - `TempDir`, `TempFile`, `CopyFile`, `FileSize`
- **Testing helpers**:
  - `WithMemFs(fn)` - Execute a function with a temporary in-memory FS
  - `SetupTestDir(files)` - Create a test directory structure
- **Comprehensive test suite** demonstrating usage

### Changed

- Upgraded afero from v1.10.0 to v1.15.0
## [3.42.33] - 2026-01-14 "Exponential Backoff Retry"

### Added - cenkalti/backoff for Cloud Operation Retry

- **Exponential backoff retry** for all cloud operations (S3, Azure, GCS)
- **Retry configurations**:
  - `DefaultRetryConfig()` - 5 retries, 500ms→30s backoff, 5 min max
  - `AggressiveRetryConfig()` - 10 retries, 1s→60s backoff, 15 min max
  - `QuickRetryConfig()` - 3 retries, 100ms→5s backoff, 30s max
- **Smart error classification**:
  - `IsPermanentError()` - Auth/bucket errors (no retry)
  - `IsRetryableError()` - Timeout/network errors (retry)
- **Retry logging** - Each retry attempt is logged with its wait duration

### Changed

- S3 simple upload, multipart upload, and download now retry on transient failures
- Azure simple upload and download now retry on transient failures
- GCS upload and download now retry on transient failures
- Large-file multipart uploads use `AggressiveRetryConfig()` (more retries)
## [3.42.32] - 2026-01-14 "Cross-Platform Colors"

### Added - fatih/color for Cross-Platform Terminal Colors

- **Windows-compatible colors** - Native Windows console API support
- **Color helper functions** in the `logger` package:
  - `Success()`, `Error()`, `Warning()`, `Info()` - Status messages with icons
  - `Header()`, `Dim()`, `Bold()` - Text styling
  - `Green()`, `Red()`, `Yellow()`, `Cyan()` - Colored text
  - `StatusLine()`, `TableRow()` - Formatted output
  - `DisableColors()`, `EnableColors()` - Runtime control
- **Consistent color scheme** across all log levels

### Changed

- Logger `CleanFormatter` now uses fatih/color instead of raw ANSI codes
- All progress indicators use fatih/color for `[OK]`/`[FAIL]` status
- Automatic color detection (colors disabled for non-TTY output)
## [3.42.31] - 2026-01-14 "Visual Progress Bars"

### Added - schollz/progressbar for Enhanced Progress Display

- **Visual progress bars** for cloud uploads/downloads, with:
  - Byte transfer display (e.g., `245 MB / 1.2 GB`)
  - Transfer speed (e.g., `45 MB/s`)
  - ETA prediction
  - Color-coded progress with Unicode blocks
- **Checksum verification progress** - Visual progress while calculating SHA-256
- **Spinner for indeterminate operations** - Braille-style spinner when the size is unknown
- New progress types: `NewSchollzBar()`, `NewSchollzBarItems()`, `NewSchollzSpinner()`
- Progress bar `Writer()` method for `io.Copy` integration

### Changed

- Cloud download shows real-time byte progress instead of 10% log messages
- Cloud upload shows a visual progress bar instead of debug logs
- Checksum verification shows progress for large files
## [3.42.30] - 2026-01-09 "Better Error Aggregation"

### Added - go-multierror for Cluster Restore Errors

- **Enhanced error reporting** - Now shows ALL database failures, not just a count
- Uses `hashicorp/go-multierror` for proper error aggregation
- Each failed database error is preserved with full context
- Bullet-pointed error output for readability:

  ```
  cluster restore completed with 3 failures:
  3 database(s) failed:
  • db1: restore failed: max_locks_per_transaction exceeded
  • db2: restore failed: connection refused
  • db3: failed to create database: permission denied
  ```

### Changed

- Replaced string-slice error collection with a proper `*multierror.Error`
- Thread-safe error aggregation with a dedicated mutex
- Improved error wrapping with `%w` for error-chain preservation
## [3.42.10] - 2026-01-08 "Code Quality"

### Fixed - Code Quality Issues
```diff
@@ -3,9 +3,9 @@
 This directory contains pre-compiled binaries for the DB Backup Tool across multiple platforms and architectures.
 
 ## Build Information
-- **Version**: 3.42.10
-- **Build Time**: 2026-01-14_14:49:15_UTC
-- **Git Commit**: 8c85d85
+- **Version**: 3.42.33
+- **Build Time**: 2026-01-14_15:19:48_UTC
+- **Git Commit**: 4e09066
 
 ## Recent Updates (v1.1.0)
 - ✅ Fixed TUI progress display with line-by-line output
```
go.mod (+9)
```diff
@@ -57,6 +57,7 @@ require (
 	github.com/aws/aws-sdk-go-v2/service/sts v1.41.2 // indirect
 	github.com/aws/smithy-go v1.23.2 // indirect
 	github.com/aymanbagabas/go-osc52/v2 v2.0.1 // indirect
+	github.com/cenkalti/backoff/v4 v4.3.0 // indirect
 	github.com/cespare/xxhash/v2 v2.3.0 // indirect
 	github.com/charmbracelet/colorprofile v0.2.3-0.20250311203215-f60798e515dc // indirect
 	github.com/charmbracelet/x/ansi v0.10.1 // indirect
@@ -66,6 +67,7 @@ require (
 	github.com/envoyproxy/go-control-plane/envoy v1.32.4 // indirect
 	github.com/envoyproxy/protoc-gen-validate v1.2.1 // indirect
 	github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f // indirect
+	github.com/fatih/color v1.18.0 // indirect
 	github.com/felixge/httpsnoop v1.0.4 // indirect
 	github.com/go-jose/go-jose/v4 v4.1.2 // indirect
 	github.com/go-logr/logr v1.4.3 // indirect
@@ -75,21 +77,27 @@ require (
 	github.com/google/uuid v1.6.0 // indirect
 	github.com/googleapis/enterprise-certificate-proxy v0.3.7 // indirect
 	github.com/googleapis/gax-go/v2 v2.15.0 // indirect
+	github.com/hashicorp/errwrap v1.0.0 // indirect
+	github.com/hashicorp/go-multierror v1.1.1 // indirect
 	github.com/inconshreveable/mousetrap v1.1.0 // indirect
 	github.com/jackc/pgpassfile v1.0.0 // indirect
 	github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
 	github.com/jackc/puddle/v2 v2.2.2 // indirect
 	github.com/lucasb-eyer/go-colorful v1.2.0 // indirect
 	github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect
+	github.com/mattn/go-colorable v0.1.13 // indirect
 	github.com/mattn/go-isatty v0.0.20 // indirect
 	github.com/mattn/go-localereader v0.0.1 // indirect
 	github.com/mattn/go-runewidth v0.0.16 // indirect
+	github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db // indirect
 	github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6 // indirect
 	github.com/muesli/cancelreader v0.2.2 // indirect
 	github.com/muesli/termenv v0.16.0 // indirect
 	github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 // indirect
 	github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c // indirect
 	github.com/rivo/uniseg v0.4.7 // indirect
+	github.com/schollz/progressbar/v3 v3.19.0 // indirect
+	github.com/spf13/afero v1.15.0 // indirect
 	github.com/spiffe/go-spiffe/v2 v2.5.0 // indirect
 	github.com/tklauser/go-sysconf v0.3.12 // indirect
 	github.com/tklauser/numcpus v0.6.1 // indirect
@@ -109,6 +117,7 @@ require (
 	golang.org/x/oauth2 v0.33.0 // indirect
 	golang.org/x/sync v0.18.0 // indirect
 	golang.org/x/sys v0.38.0 // indirect
+	golang.org/x/term v0.36.0 // indirect
 	golang.org/x/text v0.30.0 // indirect
 	golang.org/x/time v0.14.0 // indirect
 	google.golang.org/genproto v0.0.0-20250603155806-513f23925822 // indirect
```
go.sum (+20)
```diff
@@ -84,6 +84,8 @@ github.com/aws/smithy-go v1.23.2 h1:Crv0eatJUQhaManss33hS5r40CG3ZFH+21XSkqMrIUM=
 github.com/aws/smithy-go v1.23.2/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
 github.com/aymanbagabas/go-osc52/v2 v2.0.1 h1:HwpRHbFMcZLEVr42D4p7XBqjyuxQH5SMiErDT4WkJ2k=
 github.com/aymanbagabas/go-osc52/v2 v2.0.1/go.mod h1:uYgXzlJ7ZpABp8OJ+exZzJJhRNQ2ASbcXHWsFqH8hp8=
+github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8=
+github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
 github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
 github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
 github.com/charmbracelet/bubbles v0.21.0 h1:9TdC97SdRVg/1aaXNVWfFH3nnLAwOXr8Fn6u6mfQdFs=
@@ -119,6 +121,8 @@ github.com/envoyproxy/protoc-gen-validate v1.2.1 h1:DEo3O99U8j4hBFwbJfrz9VtgcDfU
 github.com/envoyproxy/protoc-gen-validate v1.2.1/go.mod h1:d/C80l/jxXLdfEIhX1W2TmLfsJ31lvEjwamM4DxlWXU=
 github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f h1:Y/CXytFA4m6baUTXGLOoWe4PQhGxaX0KpnayAqC48p4=
 github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f/go.mod h1:vw97MGsxSvLiUE2X8qFplwetxpGLQrlU1Q9AUEIzCaM=
+github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM=
+github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU=
 github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
 github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
 github.com/go-jose/go-jose/v4 v4.1.2 h1:TK/7NqRQZfgAh+Td8AlsrvtPoUyiHh0LqVvokh+1vHI=
@@ -149,6 +153,10 @@ github.com/googleapis/enterprise-certificate-proxy v0.3.7 h1:zrn2Ee/nWmHulBx5sAV
 github.com/googleapis/enterprise-certificate-proxy v0.3.7/go.mod h1:MkHOF77EYAE7qfSuSS9PU6g4Nt4e11cnsDUowfwewLA=
 github.com/googleapis/gax-go/v2 v2.15.0 h1:SyjDc1mGgZU5LncH8gimWo9lW1DtIfPibOG81vgd/bo=
 github.com/googleapis/gax-go/v2 v2.15.0/go.mod h1:zVVkkxAQHa1RQpg9z2AUCMnKhi0Qld9rcmyfL1OZhoc=
+github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA=
+github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
+github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
+github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
 github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
 github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
 github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
@@ -165,6 +173,9 @@ github.com/lucasb-eyer/go-colorful v1.2.0 h1:1nnpGOrhyZZuNyfu1QjKiUICQ74+3FNCN69
 github.com/lucasb-eyer/go-colorful v1.2.0/go.mod h1:R4dSotOR9KMtayYi1e77YzuveK+i7ruzyGqttikkLy0=
 github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 h1:6E+4a0GO5zZEnZ81pIr0yLvtUWk2if982qA3F3QD6H4=
 github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I=
+github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
+github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
+github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
 github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
 github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
 github.com/mattn/go-localereader v0.0.1 h1:ygSAOl7ZXTx4RdPYinUpg6W99U8jWvWi9Ye2JC/oIi4=
@@ -173,6 +184,8 @@ github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6T
 github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
 github.com/mattn/go-sqlite3 v1.14.32 h1:JD12Ag3oLy1zQA+BNn74xRgaBbdhbNIDYvQUEuuErjs=
 github.com/mattn/go-sqlite3 v1.14.32/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
+github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db h1:62I3jR2EmQ4l5rM/4FEfDWcRD+abF5XlKShorW5LRoQ=
+github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db/go.mod h1:l0dey0ia/Uv7NcFFVbCLtqEBQbrT4OCwCSKTEv6enCw=
 github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6 h1:ZK8zHtRHOkbHy6Mmr5D264iyp3TiX5OmNcI5cIARiQI=
 github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6/go.mod h1:CJlz5H+gyd6CUWT45Oy4q24RdLyn7Md9Vj2/ldJBSIo=
 github.com/muesli/cancelreader v0.2.2 h1:3I4Kt4BQjOR54NavqnDogx/MIoWBFa0StPA8ELUXHmA=
@@ -192,10 +205,14 @@ github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJ
 github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=
 github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
 github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
+github.com/schollz/progressbar/v3 v3.19.0 h1:Ea18xuIRQXLAUidVDox3AbwfUhD0/1IvohyTutOIFoc=
+github.com/schollz/progressbar/v3 v3.19.0/go.mod h1:IsO3lpbaGuzh8zIMzgY3+J8l4C8GjO0Y9S69eFvNsec=
 github.com/shirou/gopsutil/v3 v3.24.5 h1:i0t8kL+kQTvpAYToeuiVk3TgDeKOFioZO3Ztz/iZ9pI=
 github.com/shirou/gopsutil/v3 v3.24.5/go.mod h1:bsoOS1aStSs9ErQ1WWfxllSeS1K5D+U30r2NfcubMVk=
 github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
 github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
+github.com/spf13/afero v1.15.0 h1:b/YBCLWAJdFWJTN9cLhiXXcD7mzKn9Dm86dNnfyQw1I=
+github.com/spf13/afero v1.15.0/go.mod h1:NC2ByUVxtQs4b3sIUphxK0NioZnmxgyCrfzeuq8lxMg=
 github.com/spf13/cobra v1.10.1 h1:lJeBwCfmrnXthfAupyUTzJ/J4Nc1RsHC/mSRU2dll/s=
 github.com/spf13/cobra v1.10.1/go.mod h1:7SmJGaTHFVBY0jW4NXGluQoLvhqFQM+6XSKD+P4XaB0=
 github.com/spf13/pflag v1.0.9 h1:9exaQaMOCwffKiiiYk6/BndUBv+iRViNW+4lEMi0PvY=
@@ -251,11 +268,14 @@ golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7w
 golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
 golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
+golang.org/x/term v0.36.0 h1:zMPR+aF8gfksFprF/Nc/rd1wRS1EI6nDBGyWAvDzx2Q=
+golang.org/x/term v0.36.0/go.mod h1:Qu394IJq6V6dCBRgwqshf3mPF85AqzYEzofzRdZkWss=
 golang.org/x/text v0.30.0 h1:yznKA/E9zq54KzlzBEAWn1NXSQ8DIp/NYMy88xJjl4k=
 golang.org/x/text v0.30.0/go.mod h1:yDdHFIX9t+tORqspjENWgzaCVXgk0yYnYuSZ8UzzBVM=
 golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
```
```diff
@@ -1242,23 +1242,29 @@ func (e *Engine) uploadToCloud(ctx context.Context, backupFile string, tracker *
 	filename := filepath.Base(backupFile)
 	e.log.Info("Uploading backup to cloud", "file", filename, "size", cloud.FormatSize(info.Size()))
 
-	// Progress callback
-	var lastPercent int
+	// Create schollz progressbar for visual upload progress
+	bar := progress.NewSchollzBar(info.Size(), fmt.Sprintf("Uploading %s", filename))
+
+	// Progress callback with schollz progressbar
+	var lastBytes int64
 	progressCallback := func(transferred, total int64) {
-		percent := int(float64(transferred) / float64(total) * 100)
-		if percent != lastPercent && percent%10 == 0 {
-			e.log.Debug("Upload progress", "percent", percent, "transferred", cloud.FormatSize(transferred), "total", cloud.FormatSize(total))
-			lastPercent = percent
+		delta := transferred - lastBytes
+		if delta > 0 {
+			_ = bar.Add64(delta)
 		}
+		lastBytes = transferred
 	}
 
 	// Upload to cloud
 	err = backend.Upload(ctx, backupFile, filename, progressCallback)
 	if err != nil {
+		bar.Fail("Upload failed")
 		uploadStep.Fail(fmt.Errorf("cloud upload failed: %w", err))
 		return err
 	}
+
+	_ = bar.Finish()
 
 	// Also upload metadata file
 	metaFile := backupFile + ".meta.json"
 	if _, err := os.Stat(metaFile); err == nil {
```
```diff
@@ -151,37 +151,46 @@ func (a *AzureBackend) Upload(ctx context.Context, localPath, remotePath string,
 	return a.uploadSimple(ctx, file, blobName, fileSize, progress)
 }
 
-// uploadSimple uploads a file using simple upload (single request)
+// uploadSimple uploads a file using simple upload (single request) with retry
 func (a *AzureBackend) uploadSimple(ctx context.Context, file *os.File, blobName string, fileSize int64, progress ProgressCallback) error {
-	blockBlobClient := a.client.ServiceClient().NewContainerClient(a.containerName).NewBlockBlobClient(blobName)
+	return RetryOperationWithNotify(ctx, DefaultRetryConfig(), func() error {
+		// Reset file position for retry
+		if _, err := file.Seek(0, 0); err != nil {
+			return fmt.Errorf("failed to reset file position: %w", err)
+		}
 
-	// Wrap reader with progress tracking
-	reader := NewProgressReader(file, fileSize, progress)
+		blockBlobClient := a.client.ServiceClient().NewContainerClient(a.containerName).NewBlockBlobClient(blobName)
 
-	// Calculate MD5 hash for integrity
-	hash := sha256.New()
-	teeReader := io.TeeReader(reader, hash)
+		// Wrap reader with progress tracking
+		reader := NewProgressReader(file, fileSize, progress)
 
-	_, err := blockBlobClient.UploadStream(ctx, teeReader, &blockblob.UploadStreamOptions{
-		BlockSize: 4 * 1024 * 1024, // 4MB blocks
-	})
-	if err != nil {
-		return fmt.Errorf("failed to upload blob: %w", err)
-	}
+		// Calculate SHA-256 hash for integrity
+		hash := sha256.New()
+		teeReader := io.TeeReader(reader, hash)
 
-	// Store checksum as metadata
-	checksum := hex.EncodeToString(hash.Sum(nil))
-	metadata := map[string]*string{
-		"sha256": &checksum,
-	}
+		_, err := blockBlobClient.UploadStream(ctx, teeReader, &blockblob.UploadStreamOptions{
+			BlockSize: 4 * 1024 * 1024, // 4MB blocks
+		})
+		if err != nil {
+			return fmt.Errorf("failed to upload blob: %w", err)
+		}
 
-	_, err = blockBlobClient.SetMetadata(ctx, metadata, nil)
-	if err != nil {
-		// Non-fatal: upload succeeded but metadata failed
-		fmt.Fprintf(os.Stderr, "Warning: failed to set blob metadata: %v\n", err)
-	}
+		// Store checksum as metadata
+		checksum := hex.EncodeToString(hash.Sum(nil))
+		metadata := map[string]*string{
+			"sha256": &checksum,
+		}
 
-	return nil
+		_, err = blockBlobClient.SetMetadata(ctx, metadata, nil)
+		if err != nil {
+			// Non-fatal: upload succeeded but metadata failed
+			fmt.Fprintf(os.Stderr, "Warning: failed to set blob metadata: %v\n", err)
+		}
+
+		return nil
+	}, func(err error, duration time.Duration) {
+		fmt.Printf("[Azure] Upload retry in %v: %v\n", duration, err)
+	})
 }
 
 // uploadBlocks uploads a file using block blob staging (for large files)
```
```diff
@@ -251,7 +260,7 @@ func (a *AzureBackend) uploadBlocks(ctx context.Context, file *os.File, blobName
 	return nil
 }
 
-// Download downloads a file from Azure Blob Storage
+// Download downloads a file from Azure Blob Storage with retry
 func (a *AzureBackend) Download(ctx context.Context, remotePath, localPath string, progress ProgressCallback) error {
 	blobName := strings.TrimPrefix(remotePath, "/")
 	blockBlobClient := a.client.ServiceClient().NewContainerClient(a.containerName).NewBlockBlobClient(blobName)
@@ -264,30 +273,34 @@ func (a *AzureBackend) Download(ctx context.Context, remotePath, localPath strin
 
 	fileSize := *props.ContentLength
 
-	// Download blob
-	resp, err := blockBlobClient.DownloadStream(ctx, nil)
-	if err != nil {
-		return fmt.Errorf("failed to download blob: %w", err)
-	}
-	defer resp.Body.Close()
+	return RetryOperationWithNotify(ctx, DefaultRetryConfig(), func() error {
+		// Download blob
+		resp, err := blockBlobClient.DownloadStream(ctx, nil)
+		if err != nil {
+			return fmt.Errorf("failed to download blob: %w", err)
+		}
+		defer resp.Body.Close()
 
-	// Create local file
-	file, err := os.Create(localPath)
-	if err != nil {
-		return fmt.Errorf("failed to create file: %w", err)
-	}
-	defer file.Close()
+		// Create/truncate local file
+		file, err := os.Create(localPath)
+		if err != nil {
+			return fmt.Errorf("failed to create file: %w", err)
+		}
+		defer file.Close()
 
-	// Wrap reader with progress tracking
-	reader := NewProgressReader(resp.Body, fileSize, progress)
+		// Wrap reader with progress tracking
+		reader := NewProgressReader(resp.Body, fileSize, progress)
 
-	// Copy with progress
-	_, err = io.Copy(file, reader)
-	if err != nil {
-		return fmt.Errorf("failed to write file: %w", err)
-	}
+		// Copy with progress
+		_, err = io.Copy(file, reader)
+		if err != nil {
+			return fmt.Errorf("failed to write file: %w", err)
+		}
 
-	return nil
+		return nil
+	}, func(err error, duration time.Duration) {
+		fmt.Printf("[Azure] Download retry in %v: %v\n", duration, err)
+	})
 }
 
 // Delete deletes a file from Azure Blob Storage
```
@@ -89,7 +89,7 @@ func (g *GCSBackend) Name() string {
	return "gcs"
}

// Upload uploads a file to Google Cloud Storage with retry
func (g *GCSBackend) Upload(ctx context.Context, localPath, remotePath string, progress ProgressCallback) error {
	file, err := os.Open(localPath)
	if err != nil {
@@ -106,45 +106,54 @@ func (g *GCSBackend) Upload(ctx context.Context, localPath, remotePath string, p
	// Remove leading slash from remote path
	objectName := strings.TrimPrefix(remotePath, "/")

	return RetryOperationWithNotify(ctx, DefaultRetryConfig(), func() error {
		// Reset file position for retry
		if _, err := file.Seek(0, 0); err != nil {
			return fmt.Errorf("failed to reset file position: %w", err)
		}

		bucket := g.client.Bucket(g.bucketName)
		object := bucket.Object(objectName)

		// Create writer with automatic chunking for large files
		writer := object.NewWriter(ctx)
		writer.ChunkSize = 16 * 1024 * 1024 // 16MB chunks for streaming

		// Wrap reader with progress tracking and hash calculation
		hash := sha256.New()
		reader := NewProgressReader(io.TeeReader(file, hash), fileSize, progress)

		// Upload with progress tracking
		_, err = io.Copy(writer, reader)
		if err != nil {
			writer.Close()
			return fmt.Errorf("failed to upload object: %w", err)
		}

		// Close writer (finalizes upload)
		if err := writer.Close(); err != nil {
			return fmt.Errorf("failed to finalize upload: %w", err)
		}

		// Store checksum as metadata
		checksum := hex.EncodeToString(hash.Sum(nil))
		_, err = object.Update(ctx, storage.ObjectAttrsToUpdate{
			Metadata: map[string]string{
				"sha256": checksum,
			},
		})
		if err != nil {
			// Non-fatal: upload succeeded but metadata failed
			fmt.Fprintf(os.Stderr, "Warning: failed to set object metadata: %v\n", err)
		}

		return nil
	}, func(err error, duration time.Duration) {
		fmt.Printf("[GCS] Upload retry in %v: %v\n", duration, err)
	})
}

// Download downloads a file from Google Cloud Storage with retry
func (g *GCSBackend) Download(ctx context.Context, remotePath, localPath string, progress ProgressCallback) error {
	objectName := strings.TrimPrefix(remotePath, "/")
@@ -159,30 +168,34 @@ func (g *GCSBackend) Download(ctx context.Context, remotePath, localPath string,

	fileSize := attrs.Size

	return RetryOperationWithNotify(ctx, DefaultRetryConfig(), func() error {
		// Create reader
		reader, err := object.NewReader(ctx)
		if err != nil {
			return fmt.Errorf("failed to download object: %w", err)
		}
		defer reader.Close()

		// Create/truncate local file
		file, err := os.Create(localPath)
		if err != nil {
			return fmt.Errorf("failed to create file: %w", err)
		}
		defer file.Close()

		// Wrap reader with progress tracking
		progressReader := NewProgressReader(reader, fileSize, progress)

		// Copy with progress
		_, err = io.Copy(file, progressReader)
		if err != nil {
			return fmt.Errorf("failed to write file: %w", err)
		}

		return nil
	}, func(err error, duration time.Duration) {
		fmt.Printf("[GCS] Download retry in %v: %v\n", duration, err)
	})
}

// Delete deletes a file from Google Cloud Storage
257  internal/cloud/retry.go  Normal file
@@ -0,0 +1,257 @@
package cloud

import (
	"context"
	"fmt"
	"net"
	"strings"
	"time"

	"github.com/cenkalti/backoff/v4"
)

// RetryConfig configures retry behavior
type RetryConfig struct {
	MaxRetries      int           // Maximum number of retries (0 = unlimited)
	InitialInterval time.Duration // Initial backoff interval
	MaxInterval     time.Duration // Maximum backoff interval
	MaxElapsedTime  time.Duration // Maximum total time for retries
	Multiplier      float64       // Backoff multiplier
}

// DefaultRetryConfig returns sensible defaults for cloud operations
func DefaultRetryConfig() *RetryConfig {
	return &RetryConfig{
		MaxRetries:      5,
		InitialInterval: 500 * time.Millisecond,
		MaxInterval:     30 * time.Second,
		MaxElapsedTime:  5 * time.Minute,
		Multiplier:      2.0,
	}
}

// AggressiveRetryConfig returns config for critical operations that need more retries
func AggressiveRetryConfig() *RetryConfig {
	return &RetryConfig{
		MaxRetries:      10,
		InitialInterval: 1 * time.Second,
		MaxInterval:     60 * time.Second,
		MaxElapsedTime:  15 * time.Minute,
		Multiplier:      1.5,
	}
}

// QuickRetryConfig returns config for operations that should fail fast
func QuickRetryConfig() *RetryConfig {
	return &RetryConfig{
		MaxRetries:      3,
		InitialInterval: 100 * time.Millisecond,
		MaxInterval:     5 * time.Second,
		MaxElapsedTime:  30 * time.Second,
		Multiplier:      2.0,
	}
}

// RetryOperation executes an operation with exponential backoff retry
func RetryOperation(ctx context.Context, cfg *RetryConfig, operation func() error) error {
	if cfg == nil {
		cfg = DefaultRetryConfig()
	}

	// Create exponential backoff
	expBackoff := backoff.NewExponentialBackOff()
	expBackoff.InitialInterval = cfg.InitialInterval
	expBackoff.MaxInterval = cfg.MaxInterval
	expBackoff.MaxElapsedTime = cfg.MaxElapsedTime
	expBackoff.Multiplier = cfg.Multiplier
	expBackoff.Reset()

	// Wrap with max retries if specified
	var b backoff.BackOff = expBackoff
	if cfg.MaxRetries > 0 {
		b = backoff.WithMaxRetries(expBackoff, uint64(cfg.MaxRetries))
	}

	// Add context support
	b = backoff.WithContext(b, ctx)

	// Track attempts for logging
	attempt := 0

	// Wrap operation to handle permanent vs retryable errors
	wrappedOp := func() error {
		attempt++
		err := operation()
		if err == nil {
			return nil
		}

		// Check if error is permanent (should not retry)
		if IsPermanentError(err) {
			return backoff.Permanent(err)
		}

		return err
	}

	return backoff.Retry(wrappedOp, b)
}

// RetryOperationWithNotify executes an operation with retry and calls notify on each retry
func RetryOperationWithNotify(ctx context.Context, cfg *RetryConfig, operation func() error, notify func(err error, duration time.Duration)) error {
	if cfg == nil {
		cfg = DefaultRetryConfig()
	}

	// Create exponential backoff
	expBackoff := backoff.NewExponentialBackOff()
	expBackoff.InitialInterval = cfg.InitialInterval
	expBackoff.MaxInterval = cfg.MaxInterval
	expBackoff.MaxElapsedTime = cfg.MaxElapsedTime
	expBackoff.Multiplier = cfg.Multiplier
	expBackoff.Reset()

	// Wrap with max retries if specified
	var b backoff.BackOff = expBackoff
	if cfg.MaxRetries > 0 {
		b = backoff.WithMaxRetries(expBackoff, uint64(cfg.MaxRetries))
	}

	// Add context support
	b = backoff.WithContext(b, ctx)

	// Wrap operation to handle permanent vs retryable errors
	wrappedOp := func() error {
		err := operation()
		if err == nil {
			return nil
		}

		// Check if error is permanent (should not retry)
		if IsPermanentError(err) {
			return backoff.Permanent(err)
		}

		return err
	}

	return backoff.RetryNotify(wrappedOp, b, notify)
}

// IsPermanentError returns true if the error should not be retried
func IsPermanentError(err error) bool {
	if err == nil {
		return false
	}

	errStr := strings.ToLower(err.Error())

	// Authentication/authorization errors - don't retry
	permanentPatterns := []string{
		"access denied",
		"forbidden",
		"unauthorized",
		"invalid credentials",
		"invalid access key",
		"invalid secret",
		"no such bucket",
		"bucket not found",
		"container not found",
		"nosuchbucket",
		"nosuchkey",
		"invalid argument",
		"malformed",
		"invalid request",
		"permission denied",
		"access control",
		"policy",
	}

	for _, pattern := range permanentPatterns {
		if strings.Contains(errStr, pattern) {
			return true
		}
	}

	return false
}

// IsRetryableError returns true if the error is transient and should be retried
func IsRetryableError(err error) bool {
	if err == nil {
		return false
	}

	// Network errors are typically retryable
	var netErr net.Error
	if ok := isNetError(err, &netErr); ok {
		return netErr.Timeout() || netErr.Temporary()
	}

	errStr := strings.ToLower(err.Error())

	// Transient errors - should retry
	retryablePatterns := []string{
		"timeout",
		"connection reset",
		"connection refused",
		"connection closed",
		"eof",
		"broken pipe",
		"temporary failure",
		"service unavailable",
		"internal server error",
		"bad gateway",
		"gateway timeout",
		"too many requests",
		"rate limit",
		"throttl",
		"slowdown",
		"try again",
		"retry",
	}

	for _, pattern := range retryablePatterns {
		if strings.Contains(errStr, pattern) {
			return true
		}
	}

	return false
}

// isNetError checks if err wraps a net.Error
func isNetError(err error, target *net.Error) bool {
	for err != nil {
		if ne, ok := err.(net.Error); ok {
			*target = ne
			return true
		}
		// Try to unwrap
		if unwrapper, ok := err.(interface{ Unwrap() error }); ok {
			err = unwrapper.Unwrap()
		} else {
			break
		}
	}
	return false
}

// WithRetry is a helper that wraps a function with default retry logic
func WithRetry(ctx context.Context, operationName string, fn func() error) error {
	notify := func(err error, duration time.Duration) {
		// Log retry attempts (caller can provide their own logger if needed)
		fmt.Printf("[RETRY] %s failed, retrying in %v: %v\n", operationName, duration, err)
	}

	return RetryOperationWithNotify(ctx, DefaultRetryConfig(), fn, notify)
}

// WithRetryConfig is a helper that wraps a function with custom retry config
func WithRetryConfig(ctx context.Context, cfg *RetryConfig, operationName string, fn func() error) error {
	notify := func(err error, duration time.Duration) {
		fmt.Printf("[RETRY] %s failed, retrying in %v: %v\n", operationName, duration, err)
	}

	return RetryOperationWithNotify(ctx, cfg, fn, notify)
}
@@ -7,6 +7,7 @@ import (
	"os"
	"path/filepath"
	"strings"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
@@ -123,63 +124,81 @@ func (s *S3Backend) Upload(ctx context.Context, localPath, remotePath string, pr
	return s.uploadSimple(ctx, file, key, fileSize, progress)
}

// uploadSimple performs a simple single-part upload with retry
func (s *S3Backend) uploadSimple(ctx context.Context, file *os.File, key string, fileSize int64, progress ProgressCallback) error {
	return RetryOperationWithNotify(ctx, DefaultRetryConfig(), func() error {
		// Reset file position for retry
		if _, err := file.Seek(0, 0); err != nil {
			return fmt.Errorf("failed to reset file position: %w", err)
		}

		// Create progress reader
		var reader io.Reader = file
		if progress != nil {
			reader = NewProgressReader(file, fileSize, progress)
		}

		// Upload to S3
		_, err := s.client.PutObject(ctx, &s3.PutObjectInput{
			Bucket: aws.String(s.bucket),
			Key:    aws.String(key),
			Body:   reader,
		})
		if err != nil {
			return fmt.Errorf("failed to upload to S3: %w", err)
		}

		return nil
	}, func(err error, duration time.Duration) {
		fmt.Printf("[S3] Upload retry in %v: %v\n", duration, err)
	})
}

// uploadMultipart performs a multipart upload for large files with retry
func (s *S3Backend) uploadMultipart(ctx context.Context, file *os.File, key string, fileSize int64, progress ProgressCallback) error {
	return RetryOperationWithNotify(ctx, AggressiveRetryConfig(), func() error {
		// Reset file position for retry
		if _, err := file.Seek(0, 0); err != nil {
			return fmt.Errorf("failed to reset file position: %w", err)
		}

		// Create uploader with custom options
		uploader := manager.NewUploader(s.client, func(u *manager.Uploader) {
			// Part size: 10MB
			u.PartSize = 10 * 1024 * 1024

			// Upload up to 10 parts concurrently
			u.Concurrency = 10

			// Clean up staged parts on failure
			u.LeavePartsOnError = false
		})

		// Wrap file with progress reader
		var reader io.Reader = file
		if progress != nil {
			reader = NewProgressReader(file, fileSize, progress)
		}

		// Upload with multipart
		_, err := uploader.Upload(ctx, &s3.PutObjectInput{
			Bucket: aws.String(s.bucket),
			Key:    aws.String(key),
			Body:   reader,
		})
		if err != nil {
			return fmt.Errorf("multipart upload failed: %w", err)
		}

		return nil
	}, func(err error, duration time.Duration) {
		fmt.Printf("[S3] Multipart upload retry in %v: %v\n", duration, err)
	})
}

// Download downloads a file from S3 with retry
func (s *S3Backend) Download(ctx context.Context, remotePath, localPath string, progress ProgressCallback) error {
	// Build S3 key
	key := s.buildKey(remotePath)
@@ -190,39 +209,44 @@ func (s *S3Backend) Download(ctx context.Context, remotePath, localPath string,
		return fmt.Errorf("failed to get object size: %w", err)
	}

	// Create directory for local file
	if err := os.MkdirAll(filepath.Dir(localPath), 0755); err != nil {
		return fmt.Errorf("failed to create directory: %w", err)
	}

	return RetryOperationWithNotify(ctx, DefaultRetryConfig(), func() error {
		// Download from S3
		result, err := s.client.GetObject(ctx, &s3.GetObjectInput{
			Bucket: aws.String(s.bucket),
			Key:    aws.String(key),
		})
		if err != nil {
			return fmt.Errorf("failed to download from S3: %w", err)
		}
		defer result.Body.Close()

		// Create/truncate local file
		outFile, err := os.Create(localPath)
		if err != nil {
			return fmt.Errorf("failed to create local file: %w", err)
		}
		defer outFile.Close()

		// Copy with progress tracking
		var reader io.Reader = result.Body
		if progress != nil {
			reader = NewProgressReader(result.Body, size, progress)
		}

		_, err = io.Copy(outFile, reader)
		if err != nil {
			return fmt.Errorf("failed to write file: %w", err)
		}

		return nil
	}, func(err error, duration time.Duration) {
		fmt.Printf("[S3] Download retry in %v: %v\n", duration, err)
	})
}

// List lists all backup files in S3
223  internal/fs/fs.go  Normal file
@@ -0,0 +1,223 @@
// Package fs provides filesystem abstraction using spf13/afero for testability.
// It allows swapping the real filesystem with an in-memory mock for unit tests.
package fs

import (
	"io"
	"os"
	"path/filepath"
	"time"

	"github.com/spf13/afero"
)

// FS is the global filesystem interface used throughout the application.
// By default, it uses the real OS filesystem.
// For testing, use SetFS(afero.NewMemMapFs()) to use an in-memory filesystem.
var FS afero.Fs = afero.NewOsFs()

// SetFS sets the global filesystem (useful for testing)
func SetFS(fs afero.Fs) {
	FS = fs
}

// ResetFS resets to the real OS filesystem
func ResetFS() {
	FS = afero.NewOsFs()
}

// NewMemMapFs creates a new in-memory filesystem for testing
func NewMemMapFs() afero.Fs {
	return afero.NewMemMapFs()
}

// NewReadOnlyFs wraps a filesystem to make it read-only
func NewReadOnlyFs(base afero.Fs) afero.Fs {
	return afero.NewReadOnlyFs(base)
}

// NewBasePathFs creates a filesystem rooted at a specific path
func NewBasePathFs(base afero.Fs, path string) afero.Fs {
	return afero.NewBasePathFs(base, path)
}

// --- File Operations (use global FS) ---

// Create creates a file
func Create(name string) (afero.File, error) {
	return FS.Create(name)
}

// Open opens a file for reading
func Open(name string) (afero.File, error) {
	return FS.Open(name)
}

// OpenFile opens a file with specified flags and permissions
func OpenFile(name string, flag int, perm os.FileMode) (afero.File, error) {
	return FS.OpenFile(name, flag, perm)
}

// Remove removes a file or empty directory
func Remove(name string) error {
	return FS.Remove(name)
}

// RemoveAll removes a path and any children it contains
func RemoveAll(path string) error {
	return FS.RemoveAll(path)
}

// Rename renames (moves) a file
func Rename(oldname, newname string) error {
	return FS.Rename(oldname, newname)
}

// Stat returns file info
func Stat(name string) (os.FileInfo, error) {
	return FS.Stat(name)
}

// Chmod changes file mode
func Chmod(name string, mode os.FileMode) error {
	return FS.Chmod(name, mode)
}

// Chown changes file ownership (may not work on all filesystems)
func Chown(name string, uid, gid int) error {
	return FS.Chown(name, uid, gid)
}

// Chtimes changes file access and modification times
func Chtimes(name string, atime, mtime time.Time) error {
	return FS.Chtimes(name, atime, mtime)
}

// --- Directory Operations ---

// Mkdir creates a directory
func Mkdir(name string, perm os.FileMode) error {
	return FS.Mkdir(name, perm)
}

// MkdirAll creates a directory and all parents
func MkdirAll(path string, perm os.FileMode) error {
	return FS.MkdirAll(path, perm)
}

// ReadDir reads a directory
func ReadDir(dirname string) ([]os.FileInfo, error) {
	return afero.ReadDir(FS, dirname)
}

// --- File Content Operations ---

// ReadFile reads an entire file
func ReadFile(filename string) ([]byte, error) {
	return afero.ReadFile(FS, filename)
}

// WriteFile writes data to a file
func WriteFile(filename string, data []byte, perm os.FileMode) error {
	return afero.WriteFile(FS, filename, data, perm)
}

// --- Existence Checks ---

// Exists checks if a file or directory exists
func Exists(path string) (bool, error) {
	return afero.Exists(FS, path)
}

// DirExists checks if a directory exists
func DirExists(path string) (bool, error) {
	return afero.DirExists(FS, path)
}

// IsDir checks if path is a directory
func IsDir(path string) (bool, error) {
	return afero.IsDir(FS, path)
}

// IsEmpty checks if a directory is empty
func IsEmpty(path string) (bool, error) {
	return afero.IsEmpty(FS, path)
}

// --- Utility Functions ---

// Walk walks a directory tree
func Walk(root string, walkFn filepath.WalkFunc) error {
	return afero.Walk(FS, root, walkFn)
}

// Glob returns the names of all files matching pattern
func Glob(pattern string) ([]string, error) {
	return afero.Glob(FS, pattern)
}

// TempDir creates a temporary directory
func TempDir(dir, prefix string) (string, error) {
	return afero.TempDir(FS, dir, prefix)
}

// TempFile creates a temporary file
func TempFile(dir, pattern string) (afero.File, error) {
	return afero.TempFile(FS, dir, pattern)
}

// CopyFile copies a file from src to dst
func CopyFile(src, dst string) error {
	srcFile, err := FS.Open(src)
	if err != nil {
		return err
	}
	defer srcFile.Close()

	srcInfo, err := srcFile.Stat()
	if err != nil {
		return err
	}

	dstFile, err := FS.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, srcInfo.Mode())
	if err != nil {
		return err
	}
	defer dstFile.Close()

	_, err = io.Copy(dstFile, srcFile)
	return err
}

// FileSize returns the size of a file
func FileSize(path string) (int64, error) {
	info, err := FS.Stat(path)
	if err != nil {
		return 0, err
	}
	return info.Size(), nil
}

// --- Testing Helpers ---

// WithMemFs executes a function with an in-memory filesystem, then restores the original
func WithMemFs(fn func(fs afero.Fs)) {
	original := FS
	memFs := afero.NewMemMapFs()
	FS = memFs
	defer func() { FS = original }()
	fn(memFs)
}

// SetupTestDir creates a test directory structure in-memory
func SetupTestDir(files map[string]string) afero.Fs {
	memFs := afero.NewMemMapFs()
	for path, content := range files {
		dir := filepath.Dir(path)
		if dir != "." && dir != "/" {
			_ = memFs.MkdirAll(dir, 0755)
		}
		_ = afero.WriteFile(memFs, path, []byte(content), 0644)
	}
	return memFs
}
191
internal/fs/fs_test.go
Normal file
@@ -0,0 +1,191 @@
package fs

import (
	"os"
	"testing"

	"github.com/spf13/afero"
)

func TestMemMapFs(t *testing.T) {
	// Use in-memory filesystem for testing
	WithMemFs(func(memFs afero.Fs) {
		// Create a file
		err := WriteFile("/test/file.txt", []byte("hello world"), 0644)
		if err != nil {
			t.Fatalf("WriteFile failed: %v", err)
		}

		// Read it back
		content, err := ReadFile("/test/file.txt")
		if err != nil {
			t.Fatalf("ReadFile failed: %v", err)
		}
		if string(content) != "hello world" {
			t.Errorf("expected 'hello world', got '%s'", string(content))
		}

		// Check existence
		exists, err := Exists("/test/file.txt")
		if err != nil {
			t.Fatalf("Exists failed: %v", err)
		}
		if !exists {
			t.Error("file should exist")
		}

		// Check non-existent file
		exists, err = Exists("/nonexistent.txt")
		if err != nil {
			t.Fatalf("Exists failed: %v", err)
		}
		if exists {
			t.Error("file should not exist")
		}
	})
}

func TestSetupTestDir(t *testing.T) {
	// Create test directory structure
	testFs := SetupTestDir(map[string]string{
		"/backups/db1.dump":     "database 1 content",
		"/backups/db2.dump":     "database 2 content",
		"/config/settings.json": `{"key": "value"}`,
	})

	// Verify files exist
	content, err := afero.ReadFile(testFs, "/backups/db1.dump")
	if err != nil {
		t.Fatalf("ReadFile failed: %v", err)
	}
	if string(content) != "database 1 content" {
		t.Errorf("unexpected content: %s", string(content))
	}

	// Verify directory structure
	files, err := afero.ReadDir(testFs, "/backups")
	if err != nil {
		t.Fatalf("ReadDir failed: %v", err)
	}
	if len(files) != 2 {
		t.Errorf("expected 2 files, got %d", len(files))
	}
}

func TestCopyFile(t *testing.T) {
	WithMemFs(func(memFs afero.Fs) {
		// Create source file
		err := WriteFile("/source.txt", []byte("copy me"), 0644)
		if err != nil {
			t.Fatalf("WriteFile failed: %v", err)
		}

		// Copy file
		err = CopyFile("/source.txt", "/dest.txt")
		if err != nil {
			t.Fatalf("CopyFile failed: %v", err)
		}

		// Verify copy
		content, err := ReadFile("/dest.txt")
		if err != nil {
			t.Fatalf("ReadFile failed: %v", err)
		}
		if string(content) != "copy me" {
			t.Errorf("unexpected content: %s", string(content))
		}
	})
}

func TestFileSize(t *testing.T) {
	WithMemFs(func(memFs afero.Fs) {
		data := []byte("12345678901234567890") // 20 bytes
		err := WriteFile("/sized.txt", data, 0644)
		if err != nil {
			t.Fatalf("WriteFile failed: %v", err)
		}

		size, err := FileSize("/sized.txt")
		if err != nil {
			t.Fatalf("FileSize failed: %v", err)
		}
		if size != 20 {
			t.Errorf("expected size 20, got %d", size)
		}
	})
}

func TestTempDir(t *testing.T) {
	WithMemFs(func(memFs afero.Fs) {
		// Create temp dir
		dir, err := TempDir("", "test-")
		if err != nil {
			t.Fatalf("TempDir failed: %v", err)
		}

		// Verify it exists
		isDir, err := IsDir(dir)
		if err != nil {
			t.Fatalf("IsDir failed: %v", err)
		}
		if !isDir {
			t.Error("temp dir should be a directory")
		}

		// Verify it's empty
		isEmpty, err := IsEmpty(dir)
		if err != nil {
			t.Fatalf("IsEmpty failed: %v", err)
		}
		if !isEmpty {
			t.Error("temp dir should be empty")
		}
	})
}

func TestWalk(t *testing.T) {
	WithMemFs(func(memFs afero.Fs) {
		// Create directory structure
		_ = MkdirAll("/root/a/b", 0755)
		_ = WriteFile("/root/file1.txt", []byte("1"), 0644)
		_ = WriteFile("/root/a/file2.txt", []byte("2"), 0644)
		_ = WriteFile("/root/a/b/file3.txt", []byte("3"), 0644)

		var files []string
		err := Walk("/root", func(path string, info os.FileInfo, err error) error {
			if err != nil {
				return err
			}
			if !info.IsDir() {
				files = append(files, path)
			}
			return nil
		})
		if err != nil {
			t.Fatalf("Walk failed: %v", err)
		}

		if len(files) != 3 {
			t.Errorf("expected 3 files, got %d: %v", len(files), files)
		}
	})
}

func TestGlob(t *testing.T) {
	WithMemFs(func(memFs afero.Fs) {
		_ = WriteFile("/data/backup1.dump", []byte("1"), 0644)
		_ = WriteFile("/data/backup2.dump", []byte("2"), 0644)
		_ = WriteFile("/data/config.json", []byte("{}"), 0644)

		matches, err := Glob("/data/*.dump")
		if err != nil {
			t.Fatalf("Glob failed: %v", err)
		}

		if len(matches) != 2 {
			t.Errorf("expected 2 matches, got %d: %v", len(matches), matches)
		}
	})
}
118
internal/logger/colors.go
Normal file
@@ -0,0 +1,118 @@
package logger

import (
	"fmt"
	"os"

	"github.com/fatih/color"
)

// CLI output helpers using fatih/color for cross-platform support

// Success prints a success message with a green checkmark
func Success(format string, args ...interface{}) {
	msg := fmt.Sprintf(format, args...)
	SuccessColor.Fprint(os.Stdout, "✓ ")
	fmt.Println(msg)
}

// Error prints an error message with a red X
func Error(format string, args ...interface{}) {
	msg := fmt.Sprintf(format, args...)
	ErrorColor.Fprint(os.Stderr, "✗ ")
	fmt.Fprintln(os.Stderr, msg)
}

// Warning prints a warning message with a yellow exclamation mark
func Warning(format string, args ...interface{}) {
	msg := fmt.Sprintf(format, args...)
	WarnColor.Fprint(os.Stdout, "⚠ ")
	fmt.Println(msg)
}

// Info prints an info message with a cyan arrow
func Info(format string, args ...interface{}) {
	msg := fmt.Sprintf(format, args...)
	InfoColor.Fprint(os.Stdout, "→ ")
	fmt.Println(msg)
}

// Header prints a bold header
func Header(format string, args ...interface{}) {
	msg := fmt.Sprintf(format, args...)
	HighlightColor.Println(msg)
}

// Dim prints dimmed/secondary text
func Dim(format string, args ...interface{}) {
	msg := fmt.Sprintf(format, args...)
	DimColor.Println(msg)
}

// Bold returns bold text
func Bold(text string) string {
	return color.New(color.Bold).Sprint(text)
}

// Green returns green text
func Green(text string) string {
	return SuccessColor.Sprint(text)
}

// Red returns red text
func Red(text string) string {
	return ErrorColor.Sprint(text)
}

// Yellow returns yellow text
func Yellow(text string) string {
	return WarnColor.Sprint(text)
}

// Cyan returns cyan text
func Cyan(text string) string {
	return InfoColor.Sprint(text)
}

// StatusLine prints a key-value status line
func StatusLine(key, value string) {
	DimColor.Printf(" %s: ", key)
	fmt.Println(value)
}

// ProgressStatus prints operation status with timing
func ProgressStatus(operation string, status string, isSuccess bool) {
	if isSuccess {
		SuccessColor.Print("[OK] ")
	} else {
		ErrorColor.Print("[FAIL] ")
	}
	fmt.Printf("%s: %s\n", operation, status)
}

// TableRow prints a simple formatted table row
func TableRow(cols ...string) {
	for i, col := range cols {
		if i == 0 {
			InfoColor.Printf("%-20s", col)
		} else {
			fmt.Printf("%-15s", col)
		}
	}
	fmt.Println()
}

// DisableColors disables all color output (for non-TTY or --no-color flag)
func DisableColors() {
	color.NoColor = true
}

// EnableColors enables color output
func EnableColors() {
	color.NoColor = false
}

// IsColorEnabled returns whether colors are enabled
func IsColorEnabled() bool {
	return !color.NoColor
}
@@ -7,9 +7,29 @@ import (
 	"strings"
 	"time"
 
+	"github.com/fatih/color"
 	"github.com/sirupsen/logrus"
 )
 
+// Color printers for consistent output across the application
+var (
+	// Status colors
+	SuccessColor = color.New(color.FgGreen, color.Bold)
+	ErrorColor   = color.New(color.FgRed, color.Bold)
+	WarnColor    = color.New(color.FgYellow, color.Bold)
+	InfoColor    = color.New(color.FgCyan)
+	DebugColor   = color.New(color.FgWhite)
+
+	// Highlight colors
+	HighlightColor = color.New(color.FgMagenta, color.Bold)
+	DimColor       = color.New(color.FgHiBlack)
+
+	// Data colors
+	NumberColor = color.New(color.FgYellow)
+	PathColor   = color.New(color.FgBlue, color.Underline)
+	TimeColor   = color.New(color.FgCyan)
+)
+
 // Logger defines the interface for logging
 type Logger interface {
 	Debug(msg string, args ...any)
@@ -226,34 +246,32 @@ type CleanFormatter struct{}
 func (f *CleanFormatter) Format(entry *logrus.Entry) ([]byte, error) {
 	timestamp := entry.Time.Format("2006-01-02T15:04:05")
 
-	// Color codes for different log levels
-	var levelColor, levelText string
+	// Get level color and text using fatih/color
+	var levelPrinter *color.Color
+	var levelText string
 	switch entry.Level {
 	case logrus.DebugLevel:
-		levelColor = "\033[36m" // Cyan
+		levelPrinter = DebugColor
 		levelText = "DEBUG"
 	case logrus.InfoLevel:
-		levelColor = "\033[32m" // Green
+		levelPrinter = SuccessColor
 		levelText = "INFO "
 	case logrus.WarnLevel:
-		levelColor = "\033[33m" // Yellow
+		levelPrinter = WarnColor
 		levelText = "WARN "
 	case logrus.ErrorLevel:
-		levelColor = "\033[31m" // Red
+		levelPrinter = ErrorColor
 		levelText = "ERROR"
 	default:
-		levelColor = "\033[0m" // Reset
+		levelPrinter = InfoColor
 		levelText = "INFO "
 	}
-	resetColor := "\033[0m"
 
 	// Build the message with perfectly aligned columns
 	var output strings.Builder
 
 	// Column 1: Level (with color, fixed width 5 chars)
-	output.WriteString(levelColor)
-	output.WriteString(levelText)
-	output.WriteString(resetColor)
+	output.WriteString(levelPrinter.Sprint(levelText))
 	output.WriteString(" ")
 
 	// Column 2: Timestamp (fixed format)
@@ -6,6 +6,16 @@ import (
 	"os"
 	"strings"
 	"time"
+
+	"github.com/fatih/color"
+	"github.com/schollz/progressbar/v3"
 )
 
+// Color printers for progress indicators
+var (
+	okColor   = color.New(color.FgGreen, color.Bold)
+	failColor = color.New(color.FgRed, color.Bold)
+	warnColor = color.New(color.FgYellow, color.Bold)
+)
+
 // Indicator represents a progress indicator interface
@@ -92,13 +102,15 @@ func (s *Spinner) Update(message string) {
 // Complete stops the spinner with a success message
 func (s *Spinner) Complete(message string) {
 	s.Stop()
-	fmt.Fprintf(s.writer, "\n[OK] %s\n", message)
+	okColor.Fprint(s.writer, "[OK] ")
+	fmt.Fprintln(s.writer, message)
 }
 
 // Fail stops the spinner with a failure message
 func (s *Spinner) Fail(message string) {
 	s.Stop()
-	fmt.Fprintf(s.writer, "\n[FAIL] %s\n", message)
+	failColor.Fprint(s.writer, "[FAIL] ")
+	fmt.Fprintln(s.writer, message)
 }
 
 // Stop stops the spinner
@@ -167,13 +179,15 @@ func (d *Dots) Update(message string) {
 // Complete stops the dots with a success message
 func (d *Dots) Complete(message string) {
 	d.Stop()
-	fmt.Fprintf(d.writer, " [OK] %s\n", message)
+	okColor.Fprint(d.writer, " [OK] ")
+	fmt.Fprintln(d.writer, message)
 }
 
 // Fail stops the dots with a failure message
 func (d *Dots) Fail(message string) {
 	d.Stop()
-	fmt.Fprintf(d.writer, " [FAIL] %s\n", message)
+	failColor.Fprint(d.writer, " [FAIL] ")
+	fmt.Fprintln(d.writer, message)
 }
 
 // Stop stops the dots indicator
@@ -239,14 +253,16 @@ func (p *ProgressBar) Complete(message string) {
 	p.current = p.total
 	p.message = message
 	p.render()
-	fmt.Fprintf(p.writer, " [OK] %s\n", message)
+	okColor.Fprint(p.writer, " [OK] ")
+	fmt.Fprintln(p.writer, message)
 	p.Stop()
 }
 
 // Fail stops the progress bar with failure
 func (p *ProgressBar) Fail(message string) {
 	p.render()
-	fmt.Fprintf(p.writer, " [FAIL] %s\n", message)
+	failColor.Fprint(p.writer, " [FAIL] ")
+	fmt.Fprintln(p.writer, message)
 	p.Stop()
 }
 
@@ -298,12 +314,14 @@ func (s *Static) Update(message string) {
 
 // Complete shows completion message
 func (s *Static) Complete(message string) {
-	fmt.Fprintf(s.writer, " [OK] %s\n", message)
+	okColor.Fprint(s.writer, " [OK] ")
+	fmt.Fprintln(s.writer, message)
 }
 
 // Fail shows failure message
 func (s *Static) Fail(message string) {
-	fmt.Fprintf(s.writer, " [FAIL] %s\n", message)
+	failColor.Fprint(s.writer, " [FAIL] ")
+	fmt.Fprintln(s.writer, message)
 }
 
 // Stop does nothing for static indicator
@@ -380,12 +398,14 @@ func (l *LineByLine) SetEstimator(estimator *ETAEstimator) {
 
 // Complete shows completion message
 func (l *LineByLine) Complete(message string) {
-	fmt.Fprintf(l.writer, "[OK] %s\n\n", message)
+	okColor.Fprint(l.writer, "[OK] ")
+	fmt.Fprintf(l.writer, "%s\n\n", message)
 }
 
 // Fail shows failure message
 func (l *LineByLine) Fail(message string) {
-	fmt.Fprintf(l.writer, "[FAIL] %s\n\n", message)
+	failColor.Fprint(l.writer, "[FAIL] ")
+	fmt.Fprintf(l.writer, "%s\n\n", message)
 }
 
 // Stop does nothing for line-by-line (no cleanup needed)
@@ -408,13 +428,15 @@ func (l *Light) Update(message string) {
 
 func (l *Light) Complete(message string) {
 	if !l.silent {
-		fmt.Fprintf(l.writer, "[OK] %s\n", message)
+		okColor.Fprint(l.writer, "[OK] ")
+		fmt.Fprintln(l.writer, message)
 	}
 }
 
 func (l *Light) Fail(message string) {
 	if !l.silent {
-		fmt.Fprintf(l.writer, "[FAIL] %s\n", message)
+		failColor.Fprint(l.writer, "[FAIL] ")
+		fmt.Fprintln(l.writer, message)
 	}
 }
 
@@ -440,6 +462,8 @@ func NewIndicator(interactive bool, indicatorType string) Indicator {
 		return NewDots()
 	case "bar":
 		return NewProgressBar(100) // Default to 100 steps
+	case "schollz":
+		return NewSchollzBarItems(100, "Progress")
 	case "line":
 		return NewLineByLine()
 	case "light":
@@ -463,3 +487,161 @@ func (n *NullIndicator) Complete(message string) {}
 func (n *NullIndicator) Fail(message string) {}
 func (n *NullIndicator) Stop() {}
 func (n *NullIndicator) SetEstimator(estimator *ETAEstimator) {}
+
+// SchollzBar wraps schollz/progressbar for enhanced progress display
+// Ideal for byte-based operations like archive extraction and file transfers
+type SchollzBar struct {
+	bar       *progressbar.ProgressBar
+	message   string
+	total     int64
+	estimator *ETAEstimator
+}
+
+// NewSchollzBar creates a new schollz progressbar with byte-based progress
+func NewSchollzBar(total int64, description string) *SchollzBar {
+	bar := progressbar.NewOptions64(
+		total,
+		progressbar.OptionEnableColorCodes(true),
+		progressbar.OptionShowBytes(true),
+		progressbar.OptionSetWidth(40),
+		progressbar.OptionSetDescription(description),
+		progressbar.OptionSetTheme(progressbar.Theme{
+			Saucer:        "[green]█[reset]",
+			SaucerHead:    "[green]▌[reset]",
+			SaucerPadding: "░",
+			BarStart:      "[",
+			BarEnd:        "]",
+		}),
+		progressbar.OptionShowCount(),
+		progressbar.OptionSetPredictTime(true),
+		progressbar.OptionFullWidth(),
+		progressbar.OptionClearOnFinish(),
+	)
+	return &SchollzBar{
+		bar:     bar,
+		message: description,
+		total:   total,
+	}
+}
+
+// NewSchollzBarItems creates a progressbar for item counts (not bytes)
+func NewSchollzBarItems(total int, description string) *SchollzBar {
+	bar := progressbar.NewOptions(
+		total,
+		progressbar.OptionEnableColorCodes(true),
+		progressbar.OptionShowCount(),
+		progressbar.OptionSetWidth(40),
+		progressbar.OptionSetDescription(description),
+		progressbar.OptionSetTheme(progressbar.Theme{
+			Saucer:        "[cyan]█[reset]",
+			SaucerHead:    "[cyan]▌[reset]",
+			SaucerPadding: "░",
+			BarStart:      "[",
+			BarEnd:        "]",
+		}),
+		progressbar.OptionSetPredictTime(true),
+		progressbar.OptionFullWidth(),
+		progressbar.OptionClearOnFinish(),
+	)
+	return &SchollzBar{
+		bar:     bar,
+		message: description,
+		total:   int64(total),
+	}
+}
+
+// NewSchollzSpinner creates an indeterminate spinner for unknown-length operations
+func NewSchollzSpinner(description string) *SchollzBar {
+	bar := progressbar.NewOptions(
+		-1, // Indeterminate
+		progressbar.OptionEnableColorCodes(true),
+		progressbar.OptionSetWidth(40),
+		progressbar.OptionSetDescription(description),
+		progressbar.OptionSpinnerType(14), // Braille spinner
+		progressbar.OptionFullWidth(),
+	)
+	return &SchollzBar{
+		bar:     bar,
+		message: description,
+		total:   -1,
+	}
+}
+
+// Start initializes the progress bar (Indicator interface)
+func (s *SchollzBar) Start(message string) {
+	s.message = message
+	s.bar.Describe(message)
+}
+
+// Update updates the description (Indicator interface)
+func (s *SchollzBar) Update(message string) {
+	s.message = message
+	s.bar.Describe(message)
+}
+
+// Add adds bytes/items to the progress
+func (s *SchollzBar) Add(n int) error {
+	return s.bar.Add(n)
+}
+
+// Add64 adds bytes to the progress (for large files)
+func (s *SchollzBar) Add64(n int64) error {
+	return s.bar.Add64(n)
+}
+
+// Set sets the current progress value
+func (s *SchollzBar) Set(n int) error {
+	return s.bar.Set(n)
+}
+
+// Set64 sets the current progress value (for large files)
+func (s *SchollzBar) Set64(n int64) error {
+	return s.bar.Set64(n)
+}
+
+// ChangeMax updates the maximum value
+func (s *SchollzBar) ChangeMax(max int) {
+	s.bar.ChangeMax(max)
+	s.total = int64(max)
+}
+
+// ChangeMax64 updates the maximum value (for large files)
+func (s *SchollzBar) ChangeMax64(max int64) {
+	s.bar.ChangeMax64(max)
+	s.total = max
+}
+
+// Complete finishes with success (Indicator interface)
+func (s *SchollzBar) Complete(message string) {
+	_ = s.bar.Finish()
+	okColor.Print("[OK] ")
+	fmt.Println(message)
+}
+
+// Fail finishes with failure (Indicator interface)
+func (s *SchollzBar) Fail(message string) {
+	_ = s.bar.Clear()
+	failColor.Print("[FAIL] ")
+	fmt.Println(message)
+}
+
+// Stop stops the progress bar (Indicator interface)
+func (s *SchollzBar) Stop() {
+	_ = s.bar.Clear()
+}
+
+// SetEstimator is a no-op (schollz has built-in ETA)
+func (s *SchollzBar) SetEstimator(estimator *ETAEstimator) {
+	s.estimator = estimator
+}
+
+// Writer returns an io.Writer that updates progress as data is written
+// Useful for wrapping readers/writers in copy operations
+func (s *SchollzBar) Writer() io.Writer {
+	return s.bar
+}
+
+// Finish marks the progress as complete
+func (s *SchollzBar) Finish() error {
+	return s.bar.Finish()
+}
@@ -12,6 +12,7 @@ import (
 	"dbbackup/internal/cloud"
 	"dbbackup/internal/logger"
 	"dbbackup/internal/metadata"
+	"dbbackup/internal/progress"
 )
 
 // CloudDownloader handles downloading backups from cloud storage
@@ -73,25 +74,43 @@ func (d *CloudDownloader) Download(ctx context.Context, remotePath string, opts
 		size = 0 // Continue anyway
 	}
 
-	// Progress callback
-	var lastPercent int
+	// Create schollz progressbar for visual download progress
+	var bar *progress.SchollzBar
+	if size > 0 {
+		bar = progress.NewSchollzBar(size, fmt.Sprintf("Downloading %s", filename))
+	} else {
+		bar = progress.NewSchollzSpinner(fmt.Sprintf("Downloading %s", filename))
+	}
+
+	// Progress callback with schollz progressbar
+	var lastBytes int64
 	progressCallback := func(transferred, total int64) {
-		if total > 0 {
-			percent := int(float64(transferred) / float64(total) * 100)
-			if percent != lastPercent && percent%10 == 0 {
-				d.log.Info("Download progress", "percent", percent, "transferred", cloud.FormatSize(transferred), "total", cloud.FormatSize(total))
-				lastPercent = percent
-			}
+		if bar != nil {
+			// Update progress bar with delta
+			delta := transferred - lastBytes
+			if delta > 0 {
+				_ = bar.Add64(delta)
+			}
+			lastBytes = transferred
 		}
 	}
 
 	// Download file
 	if err := d.backend.Download(ctx, remotePath, localPath, progressCallback); err != nil {
+		if bar != nil {
+			bar.Fail("Download failed")
+		}
 		// Cleanup on failure
 		os.RemoveAll(tempSubDir)
 		return nil, fmt.Errorf("download failed: %w", err)
 	}
 
+	if bar != nil {
+		_ = bar.Finish()
+	}
+
+	d.log.Info("Download completed", "size", cloud.FormatSize(size))
+
 	result := &DownloadResult{
 		LocalPath:  localPath,
 		RemotePath: remotePath,
@@ -115,7 +134,7 @@ func (d *CloudDownloader) Download(ctx context.Context, remotePath string, opts
 	// Verify checksum if requested
 	if opts.VerifyChecksum {
 		d.log.Info("Verifying checksum...")
-		checksum, err := calculateSHA256(localPath)
+		checksum, err := calculateSHA256WithProgress(localPath)
 		if err != nil {
 			// Cleanup on verification failure
 			os.RemoveAll(tempSubDir)
@@ -186,6 +205,35 @@ func calculateSHA256(filePath string) (string, error) {
 	return hex.EncodeToString(hash.Sum(nil)), nil
 }
 
+// calculateSHA256WithProgress calculates SHA-256 with a visual progress bar
+func calculateSHA256WithProgress(filePath string) (string, error) {
+	file, err := os.Open(filePath)
+	if err != nil {
+		return "", err
+	}
+	defer file.Close()
+
+	// Get file size for progress bar
+	stat, err := file.Stat()
+	if err != nil {
+		return "", err
+	}
+
+	bar := progress.NewSchollzBar(stat.Size(), "Verifying checksum")
+	hash := sha256.New()
+
+	// Create a multi-writer to update both hash and progress
+	writer := io.MultiWriter(hash, bar.Writer())
+
+	if _, err := io.Copy(writer, file); err != nil {
+		bar.Fail("Verification failed")
+		return "", err
+	}
+
+	_ = bar.Finish()
+	return hex.EncodeToString(hash.Sum(nil)), nil
+}
+
 // DownloadFromCloudURI is a convenience function to download from a cloud URI
 func DownloadFromCloudURI(ctx context.Context, uri string, opts DownloadOptions) (*DownloadResult, error) {
 	// Parse URI
|
|||||||
"dbbackup/internal/progress"
|
"dbbackup/internal/progress"
|
||||||
"dbbackup/internal/security"
|
"dbbackup/internal/security"
|
||||||
|
|
||||||
|
"github.com/hashicorp/go-multierror"
|
||||||
_ "github.com/jackc/pgx/v5/stdlib" // PostgreSQL driver
|
_ "github.com/jackc/pgx/v5/stdlib" // PostgreSQL driver
|
||||||
)
|
)
|
||||||
|
|
||||||
```diff
@@ -961,7 +962,8 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
 		}()
 	}
 
-	var failedDBs []string
+	var restoreErrors *multierror.Error
+	var restoreErrorsMu sync.Mutex
 	totalDBs := 0
 
 	// Count total databases
```
```diff
@@ -995,7 +997,6 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
 	}
 
 	var successCount, failCount int32
-	var failedDBsMu sync.Mutex
 	var mu sync.Mutex // Protect shared resources (progress, logger)
 
 	// Create semaphore to limit concurrency
```
```diff
@@ -1050,9 +1051,9 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
 			// STEP 2: Create fresh database
 			if err := e.ensureDatabaseExists(ctx, dbName); err != nil {
 				e.log.Error("Failed to create database", "name", dbName, "error", err)
-				failedDBsMu.Lock()
-				failedDBs = append(failedDBs, fmt.Sprintf("%s: failed to create database: %v", dbName, err))
-				failedDBsMu.Unlock()
+				restoreErrorsMu.Lock()
+				restoreErrors = multierror.Append(restoreErrors, fmt.Errorf("%s: failed to create database: %w", dbName, err))
+				restoreErrorsMu.Unlock()
 				atomic.AddInt32(&failCount, 1)
 				return
 			}
```
```diff
@@ -1095,10 +1096,10 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
 					mu.Unlock()
 				}
 
-				failedDBsMu.Lock()
+				restoreErrorsMu.Lock()
 				// Include more context in the error message
-				failedDBs = append(failedDBs, fmt.Sprintf("%s: restore failed: %v", dbName, restoreErr))
+				restoreErrors = multierror.Append(restoreErrors, fmt.Errorf("%s: restore failed: %w", dbName, restoreErr))
-				failedDBsMu.Unlock()
+				restoreErrorsMu.Unlock()
 				atomic.AddInt32(&failCount, 1)
 				return
 			}
```
```diff
@@ -1116,7 +1117,17 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
 	failCountFinal := int(atomic.LoadInt32(&failCount))
 
 	if failCountFinal > 0 {
-		failedList := strings.Join(failedDBs, "\n  ")
+		// Format multi-error with detailed output
+		restoreErrors.ErrorFormat = func(errs []error) string {
+			if len(errs) == 1 {
+				return errs[0].Error()
+			}
+			points := make([]string, len(errs))
+			for i, err := range errs {
+				points[i] = fmt.Sprintf("  • %s", err.Error())
+			}
+			return fmt.Sprintf("%d database(s) failed:\n%s", len(errs), strings.Join(points, "\n"))
+		}
 
 		// Log summary
 		e.log.Info("Cluster restore completed with failures",
```
```diff
@@ -1127,7 +1138,7 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
 		e.progress.Fail(fmt.Sprintf("Cluster restore: %d succeeded, %d failed out of %d total", successCountFinal, failCountFinal, totalDBs))
 		operation.Complete(fmt.Sprintf("Partial restore: %d/%d databases succeeded", successCountFinal, totalDBs))
 
-		return fmt.Errorf("cluster restore completed with %d failures:\n  %s", failCountFinal, failedList)
+		return fmt.Errorf("cluster restore completed with %d failures:\n%s", failCountFinal, restoreErrors.Error())
 	}
 
 	e.progress.Complete(fmt.Sprintf("Cluster restored successfully: %d databases", successCountFinal))
```
```diff
@@ -35,15 +35,15 @@ type PreflightResult struct {
 
 // LinuxChecks contains Linux kernel/system checks
 type LinuxChecks struct {
 	ShmMax         int64   // /proc/sys/kernel/shmmax
 	ShmAll         int64   // /proc/sys/kernel/shmall
 	MemTotal       uint64  // Total RAM in bytes
 	MemAvailable   uint64  // Available RAM in bytes
 	MemUsedPercent float64 // Memory usage percentage
 	ShmMaxOK       bool    // Is shmmax sufficient?
 	ShmAllOK       bool    // Is shmall sufficient?
 	MemAvailableOK bool    // Is available RAM sufficient?
 	IsLinux        bool    // Are we running on Linux?
 }
 
 // PostgreSQLChecks contains PostgreSQL configuration checks
```
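The `LinuxChecks` struct compares kernel shared-memory limits from `/proc/sys/kernel/shmmax` against what a restore needs. A hedged sketch of how such a check might read and evaluate one of those values — the function names and the required size are illustrative, not dbbackup's actual preflight code:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseShm parses a /proc/sys/kernel/shmmax-style value: a single
// integer, usually followed by a trailing newline.
func parseShm(raw string) (int64, error) {
	return strconv.ParseInt(strings.TrimSpace(raw), 10, 64)
}

// shmSufficient reports whether the configured limit covers the requirement,
// mirroring the ShmMaxOK-style boolean fields in the struct.
func shmSufficient(limit, required int64) bool {
	return limit >= required
}

func main() {
	limit, err := parseShm("68719476736\n") // 64 GiB, as /proc reports it
	if err != nil {
		panic(err)
	}
	fmt.Println(shmSufficient(limit, 1<<30)) // need at least 1 GiB
}
```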