Compare commits
23 Commits
| SHA1 |
|---|
| 09a917766f |
| eeacbfa007 |
| 7711a206ab |
| ba6e8a2b39 |
| ec5e89eab7 |
| e24d7ab49f |
| 721e53fe6a |
| 4e09066aa5 |
| 6a24ee39be |
| dc6dfd8b2c |
| 7b4ab76313 |
| c0d92b3a81 |
| 8c85d85249 |
| e0cdcb28be |
| 22a7b9e81e |
| c71889be47 |
| 222bdbef58 |
| f7e9fa64f0 |
| f153e61dbf |
| d19c065658 |
| 8dac5efc10 |
| fd5edce5ae |
| a7e2c86618 |
CHANGELOG.md (127 lines changed)
@@ -5,6 +5,133 @@ All notable changes to dbbackup will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [3.42.35] - 2026-01-15 "TUI Detailed Progress"

### Added - Enhanced TUI Progress Display

- **Detailed progress bar in TUI restore** - schollz-style progress bar with:
  - Byte progress display (e.g., `245 MB / 1.2 GB`)
  - Transfer speed calculation (e.g., `45 MB/s`)
  - ETA prediction for long operations
  - Unicode block-based visual bar
- **Real-time extraction progress** - Archive extraction now reports actual bytes processed
- **Go-native tar extraction** - Uses Go's `archive/tar` + `compress/gzip` when a progress callback is set
- **New `DetailedProgress` component** in the TUI package:
  - `NewDetailedProgress(total, description)` - Byte-based progress
  - `NewDetailedProgressItems(total, description)` - Item-count progress
  - `NewDetailedProgressSpinner(description)` - Indeterminate spinner
  - `RenderProgressBar(width)` - Generate schollz-style output
- **Progress callback API** in the restore engine:
  - `SetProgressCallback(func(current, total int64, description string))`
  - Allows the TUI to receive real-time progress updates from restore operations
- **Shared progress state** pattern for Bubble Tea integration

### Changed

- TUI restore execution now shows detailed byte progress during archive extraction
- Cluster restore shows extraction progress instead of just a spinner
- Falls back to the shell `tar` command when no progress callback is set (faster)

### Technical Details

- `progressReader` wrapper tracks bytes read through the gzip/tar pipeline
- Progress updates are throttled (every 100ms) to avoid flooding the UI
- Thread-safe shared state pattern for cross-goroutine progress updates
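
The `progressReader` idea is language-agnostic: wrap the underlying stream, count bytes as they pass through, and throttle the callback. A minimal Python sketch of the pattern (the actual implementation is Go; all names here are illustrative):

```python
import io
import time

class ProgressReader:
    """Wraps a readable stream and reports bytes read via a throttled callback."""

    def __init__(self, stream, total, callback, interval=0.1):
        self.stream = stream
        self.total = total
        self.callback = callback
        self.interval = interval          # minimum seconds between callbacks
        self.bytes_read = 0
        self._last_report = 0.0

    def read(self, size=-1):
        chunk = self.stream.read(size)
        self.bytes_read += len(chunk)
        now = time.monotonic()
        # Throttle updates; always fire on EOF so the bar reaches 100%.
        if now - self._last_report >= self.interval or not chunk:
            self._last_report = now
            self.callback(self.bytes_read, self.total)
        return chunk
```

Feeding such a reader into the decompression pipeline gives byte-accurate progress without the extractor knowing anything about the UI.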
## [3.42.34] - 2026-01-14 "Filesystem Abstraction"

### Added - spf13/afero for Filesystem Abstraction

- **New `internal/fs` package** for testable filesystem operations
- **In-memory filesystem** for unit testing without disk I/O
- **Global FS interface** that can be swapped for testing:

  ```go
  fs.SetFS(afero.NewMemMapFs()) // Use memory
  fs.ResetFS()                  // Back to real disk
  ```

- **Wrapper functions** for all common file operations:
  - `ReadFile`, `WriteFile`, `Create`, `Open`, `Remove`, `RemoveAll`
  - `Mkdir`, `MkdirAll`, `ReadDir`, `Walk`, `Glob`
  - `Exists`, `DirExists`, `IsDir`, `IsEmpty`
  - `TempDir`, `TempFile`, `CopyFile`, `FileSize`
- **Testing helpers**:
  - `WithMemFs(fn)` - Execute a function with a temporary in-memory FS
  - `SetupTestDir(files)` - Create a test directory structure
- **Comprehensive test suite** demonstrating usage
### Changed

- Upgraded afero from v1.10.0 to v1.15.0

## [3.42.33] - 2026-01-14 "Exponential Backoff Retry"

### Added - cenkalti/backoff for Cloud Operation Retry

- **Exponential backoff retry** for all cloud operations (S3, Azure, GCS)
- **Retry configurations**:
  - `DefaultRetryConfig()` - 5 retries, 500ms→30s backoff, 5 min max
  - `AggressiveRetryConfig()` - 10 retries, 1s→60s backoff, 15 min max
  - `QuickRetryConfig()` - 3 retries, 100ms→5s backoff, 30s max
- **Smart error classification**:
  - `IsPermanentError()` - Auth/bucket errors (no retry)
  - `IsRetryableError()` - Timeout/network errors (retry)
- **Retry logging** - Each retry attempt is logged with its wait duration
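
The behaviour these configurations describe follows a standard pattern: double the wait after each transient failure, cap it, and abort immediately on permanent errors. A hedged Python sketch with the `DefaultRetryConfig()` numbers as defaults (the actual implementation uses cenkalti/backoff in Go; names here are illustrative):

```python
import time

class PermanentError(Exception):
    """Errors that must never be retried (auth failures, missing bucket)."""

def retry_with_backoff(op, max_retries=5, initial=0.5, max_interval=30.0,
                       sleep=time.sleep):
    """Run `op`, retrying transient failures with exponential backoff."""
    interval = initial
    for attempt in range(max_retries + 1):
        try:
            return op()
        except PermanentError:
            raise                      # classified permanent: do not retry
        except Exception:
            if attempt == max_retries:
                raise                  # out of retries: surface the error
            sleep(interval)
            interval = min(interval * 2, max_interval)
```

Injecting `sleep` makes the backoff schedule unit-testable without real waiting.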

### Changed

- S3 simple upload, multipart upload, and download now retry on transient failures
- Azure simple upload and download now retry on transient failures
- GCS upload and download now retry on transient failures
- Large-file multipart uploads use `AggressiveRetryConfig()` (more retries)

## [3.42.32] - 2026-01-14 "Cross-Platform Colors"

### Added - fatih/color for Cross-Platform Terminal Colors

- **Windows-compatible colors** - Native Windows console API support
- **Color helper functions** in the `logger` package:
  - `Success()`, `Error()`, `Warning()`, `Info()` - Status messages with icons
  - `Header()`, `Dim()`, `Bold()` - Text styling
  - `Green()`, `Red()`, `Yellow()`, `Cyan()` - Colored text
  - `StatusLine()`, `TableRow()` - Formatted output
  - `DisableColors()`, `EnableColors()` - Runtime control
- **Consistent color scheme** across all log levels
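
The runtime on/off switch works by routing every helper through one shared flag; with colors disabled the helpers return plain text. A rough Python analogue of the pattern (fatih/color additionally handles the Windows console natively; this sketch emits plain ANSI and is purely illustrative):

```python
_colors_enabled = True

def disable_colors():
    global _colors_enabled
    _colors_enabled = False

def enable_colors():
    global _colors_enabled
    _colors_enabled = True

def green(text):
    # Plain ANSI escape; returns uncolored text when colors are off.
    return f"\033[32m{text}\033[0m" if _colors_enabled else text

def success(text):
    # Status helper: icon prefix plus color, degrading gracefully.
    return green(f"[OK] {text}")
```

Automatic non-TTY detection is then just a call to `disable_colors()` when output is not a terminal.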

### Changed

- Logger `CleanFormatter` now uses fatih/color instead of raw ANSI codes
- All progress indicators use fatih/color for `[OK]`/`[FAIL]` status
- Automatic color detection (disabled for non-TTY output)

## [3.42.31] - 2026-01-14 "Visual Progress Bars"

### Added - schollz/progressbar for Enhanced Progress Display

- **Visual progress bars** for cloud uploads/downloads with:
  - Byte transfer display (e.g., `245 MB / 1.2 GB`)
  - Transfer speed (e.g., `45 MB/s`)
  - ETA prediction
  - Color-coded progress with Unicode blocks
- **Checksum verification progress** - Visual progress while calculating SHA-256
- **Spinner for indeterminate operations** - Braille-style spinner when the size is unknown
- New progress types: `NewSchollzBar()`, `NewSchollzBarItems()`, `NewSchollzSpinner()`
- Progress bar `Writer()` method for `io.Copy` integration
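
Rendering such a bar is a small amount of arithmetic: scale the current count to a fixed width and fill with Unicode blocks. An illustrative Python sketch (schollz/progressbar does this, plus speed and ETA, in Go):

```python
def render_bar(current, total, width=30):
    """Render a simple Unicode block progress bar for a current/total count."""
    filled = int(width * current / total)
    bar = "█" * filled + "░" * (width - filled)
    pct = 100 * current / total
    return f"|{bar}| {pct:3.0f}%"
```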

### Changed

- Cloud download shows real-time byte progress instead of 10% log messages
- Cloud upload shows a visual progress bar instead of debug logs
- Checksum verification shows progress for large files

## [3.42.30] - 2026-01-09 "Better Error Aggregation"

### Added - go-multierror for Cluster Restore Errors

- **Enhanced error reporting** - Now shows ALL database failures, not just a count
- Uses `hashicorp/go-multierror` for proper error aggregation
- Each failed database error is preserved with full context
- Bullet-pointed error output for readability:

  ```
  cluster restore completed with 3 failures:
  3 database(s) failed:
  • db1: restore failed: max_locks_per_transaction exceeded
  • db2: restore failed: connection refused
  • db3: failed to create database: permission denied
  ```

### Changed

- Replaced string-slice error collection with a proper `*multierror.Error`
- Thread-safe error aggregation with a dedicated mutex
- Improved error wrapping with `%w` for error chain preservation
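
The aggregation pattern is simple: collect every error under a lock and format them as a bulleted list instead of keeping only a count. A minimal Python sketch of the go-multierror idea (names illustrative):

```python
import threading

class MultiError(Exception):
    """Aggregates many errors while preserving each one's full context."""

    def __init__(self):
        self.errors = []
        self._lock = threading.Lock()   # safe to append from many workers

    def append(self, err):
        with self._lock:
            self.errors.append(err)

    def __bool__(self):
        return bool(self.errors)

    def __str__(self):
        lines = [f"{len(self.errors)} database(s) failed:"]
        lines += [f"  • {e}" for e in self.errors]
        return "\n".join(lines)
```

Each restore goroutine (thread, in this sketch) appends its own failure; the caller checks truthiness once at the end and reports everything.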

## [3.42.10] - 2026-01-08 "Code Quality"

### Fixed - Code Quality Issues
@@ -1,295 +0,0 @@

# Emoticon Removal Plan for Python Code

## ⚠️ CRITICAL: Code Must Remain Functional After Removal

This document outlines a **safe, systematic approach** to removing emoticons from Python code without breaking functionality.

---

## 1. Identification Phase

### 1.1 Where Emoticons CAN Safely Exist (Safe to Remove)

| Location | Risk Level | Action |
|----------|------------|--------|
| Comments (`# 🎉 Success!`) | ✅ SAFE | Remove or replace with text |
| Docstrings (`"""📌 Note:..."""`) | ✅ SAFE | Remove or replace with text |
| Print statements for decoration (`print("✅ Done!")`) | ⚠️ LOW | Replace with ASCII or text |
| Logging messages (`logger.info("🔥 Starting...")`) | ⚠️ LOW | Replace with text equivalent |

### 1.2 Where Emoticons are DANGEROUS to Remove

| Location | Risk Level | Action |
|----------|------------|--------|
| String literals used in logic | 🚨 HIGH | **DO NOT REMOVE** without analysis |
| Dictionary keys (`{"🔑": value}`) | 🚨 CRITICAL | **NEVER REMOVE** - breaks code |
| Regex patterns | 🚨 CRITICAL | **NEVER REMOVE** - breaks matching |
| String comparisons (`if x == "✅"`) | 🚨 CRITICAL | Requires refactoring, not just removal |
| Database/API payloads | 🚨 CRITICAL | May break external systems |
| File content markers | 🚨 HIGH | May break parsing logic |

---

## 2. Pre-Removal Checklist

### 2.1 Before ANY Changes

- [ ] **Full backup** of the codebase
- [ ] **Run all tests** and record baseline results
- [ ] **Document all emoticon locations** with grep/search
- [ ] **Identify emoticon usage patterns** (decorative vs. functional)

### 2.2 Discovery Commands

```bash
# Find all files with emoticons (Unicode range for common emojis)
grep -rn --include="*.py" -P '[\x{1F300}-\x{1F9FF}]' .

# Find emoticons inside string literals (-P is required: \x{...} is a PCRE
# escape that plain -E does not understand)
grep -rn --include="*.py" -P '["'"'"'][^"'"'"']*[\x{1F300}-\x{1F9FF}]' .

# List unique emoticons used
grep -oP '[\x{1F300}-\x{1F9FF}]' *.py | sort -u
```

---

## 3. Replacement Strategy

### 3.1 Semantic Replacement Table

| Emoticon | Text Replacement | Context |
|----------|------------------|---------|
| ✅ | `[OK]` or `[SUCCESS]` | Status indicators |
| ❌ | `[FAIL]` or `[ERROR]` | Error indicators |
| ⚠️ | `[WARNING]` | Warning messages |
| 🔥 | `[HOT]` or remove | Decorative |
| 🎉 | `[DONE]` or remove | Celebration/completion |
| 📌 | `[NOTE]` | Notes/pinned items |
| 🚀 | `[START]` or remove | Launch/start indicators |
| 💾 | `[SAVE]` | Save operations |
| 🔑 | `[KEY]` | Key/authentication |
| 📁 | `[FILE]` | File operations |
| 🔍 | `[SEARCH]` | Search operations |
| ⏳ | `[WAIT]` or `[LOADING]` | Progress indicators |
| 🛑 | `[STOP]` | Stop/halt indicators |
| ℹ️ | `[INFO]` | Information |
| 🐛 | `[BUG]` or `[DEBUG]` | Debug messages |
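
For SAFE locations, the table translates directly into a replacement map. A small illustrative helper (apply it only to comment and output lines identified by the audit, never blindly across a file):

```python
# Subset of the semantic replacement table above.
REPLACEMENTS = {
    "✅": "[OK]",
    "❌": "[FAIL]",
    "⚠️": "[WARNING]",
    "📌": "[NOTE]",
    "ℹ️": "[INFO]",
    "🐛": "[DEBUG]",
}

def replace_emoticons(text):
    """Apply the semantic replacement table to one line of text."""
    for emoji, label in REPLACEMENTS.items():
        text = text.replace(emoji, label)
    return text
```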

### 3.2 Context-Aware Replacement Rules

```
RULE 1: Comments
- Remove emoticon entirely OR replace with text
- Example: `# 🎉 Feature complete` → `# Feature complete`

RULE 2: User-facing strings (print/logging)
- Replace with semantic text equivalent
- Example: `print("✅ Backup complete")` → `print("[OK] Backup complete")`

RULE 3: Functional strings (DANGER ZONE)
- DO NOT auto-replace
- Requires manual code refactoring
- Example: `status = "✅"` → Refactor to `status = "success"` AND update all comparisons
```

---

## 4. Safe Removal Process

### Step 1: Audit

```python
# Python script to audit emoticon usage
import re

EMOJI_PATTERN = re.compile(
    "["
    "\U0001F300-\U0001F9FF"  # Symbols & Pictographs
    "\U00002600-\U000026FF"  # Misc symbols
    "\U00002700-\U000027BF"  # Dingbats
    "\U0001F600-\U0001F64F"  # Emoticons
    "]+"
)


def audit_file(filepath):
    with open(filepath, 'r', encoding='utf-8') as f:
        content = f.read()

    findings = []
    for lineno, line in enumerate(content.split('\n'), 1):
        matches = EMOJI_PATTERN.findall(line)
        if matches:
            # Determine context (comment, string, etc.)
            context = classify_context(line)
            findings.append({
                'line': lineno,
                'content': line.strip(),
                'emojis': matches,
                'context': context,
                'risk': assess_risk(context)
            })
    return findings


def classify_context(line):
    stripped = line.strip()
    if stripped.startswith('#'):
        return 'COMMENT'
    if 'print(' in line or 'logging.' in line or 'logger.' in line:
        return 'OUTPUT'
    if '==' in line or '!=' in line:
        return 'COMPARISON'
    if re.search(r'["\'][^"\']*$', line.split('#')[0]):
        return 'STRING_LITERAL'
    return 'UNKNOWN'


def assess_risk(context):
    risk_map = {
        'COMMENT': 'LOW',
        'OUTPUT': 'LOW',
        'COMPARISON': 'CRITICAL',
        'STRING_LITERAL': 'HIGH',
        'UNKNOWN': 'HIGH'
    }
    return risk_map.get(context, 'HIGH')
```

### Step 2: Generate Change Plan

```python
def generate_change_plan(findings):
    plan = {'safe': [], 'review_required': [], 'do_not_touch': []}

    for finding in findings:
        if finding['risk'] == 'LOW':
            plan['safe'].append(finding)
        elif finding['risk'] == 'HIGH':
            plan['review_required'].append(finding)
        else:  # CRITICAL
            plan['do_not_touch'].append(finding)

    return plan
```

### Step 3: Apply Changes (SAFE items only)

```python
import shutil


def apply_safe_replacements(filepath, replacements):
    # Create a backup first!
    shutil.copy(filepath, filepath + '.backup')

    with open(filepath, 'r', encoding='utf-8') as f:
        content = f.read()

    for old, new in replacements:
        content = content.replace(old, new)

    with open(filepath, 'w', encoding='utf-8') as f:
        f.write(content)
```

### Step 4: Validate

```bash
# After each file change:
python -m py_compile <modified_file.py>  # Syntax check
pytest <related_tests>                   # Run tests
```

---

## 5. Validation Checklist

### After EACH File Modification

- [ ] File compiles without syntax errors (`python -m py_compile file.py`)
- [ ] All imports still work
- [ ] Related unit tests pass
- [ ] Integration tests pass
- [ ] Manual smoke test if applicable

### After ALL Modifications

- [ ] Full test suite passes
- [ ] Application starts correctly
- [ ] Key functionality verified manually
- [ ] No new warnings in logs
- [ ] Compare output with baseline

---

## 6. Rollback Plan

### If Something Breaks

1. **Immediate**: Restore from `.backup` files
2. **Git**: `git checkout -- <file>` or `git stash pop`
3. **Full rollback**: Restore from pre-change backup

### Keep Until Verified

```bash
# Backup storage structure
backups/
├── pre_emoticon_removal/
│   ├── timestamp.tar.gz
│   └── git_commit_hash.txt
└── individual_files/
    ├── file1.py.backup
    └── file2.py.backup
```

---

## 7. Implementation Order

1. **Phase 1**: Comments only (LOWEST risk)
2. **Phase 2**: Docstrings (LOW risk)
3. **Phase 3**: Print/logging statements (LOW-MEDIUM risk)
4. **Phase 4**: Manual review items (HIGH risk) - one by one
5. **Phase 5**: NEVER touch CRITICAL items without full refactoring

---

## 8. Example Workflow

```bash
# 1. Work on a dedicated branch (stash uncommitted changes first)
git stash && git checkout -b emoticon-removal

# 2. Run audit script
python emoticon_audit.py > audit_report.json

# 3. Review audit report
jq '.do_not_touch' audit_report.json  # Check critical items

# 4. Apply safe changes only
python apply_safe_changes.py --dry-run  # Preview first!
python apply_safe_changes.py            # Apply

# 5. Validate after each change
python -m pytest tests/

# 6. Commit incrementally
git add -p  # Review each change
git commit -m "Remove emoticons from comments in module X"
```

---

## 9. DO NOT DO

❌ **Never** use global find-replace on emoticons
❌ **Never** remove emoticons from string comparisons without refactoring
❌ **Never** change multiple files without testing between changes
❌ **Never** assume an emoticon is decorative - verify context
❌ **Never** proceed if tests fail after a change

---

## 10. Sign-Off Requirements

Before merging emoticon removal changes:

- [ ] All tests pass (100%)
- [ ] Code review by a second developer
- [ ] Manual testing of affected features
- [ ] Documented all CRITICAL items left unchanged (with justification)
- [ ] Backup verified and accessible

---

**Author**: Generated Plan
**Date**: 2026-01-07
**Status**: PLAN ONLY - No code changes made
OPENSOURCE_ALTERNATIVE.md (new file, 206 lines)
@@ -0,0 +1,206 @@

# dbbackup: The Real Open Source Alternative

## Killing Two Borgs with One Binary

You have two choices for database backups today:

1. **Pay $2,000-10,000/year per server** for Veeam, Commvault, or Veritas
2. **Wrestle with Borg/restic** - powerful, but never designed for databases

**dbbackup** eliminates both problems with a single, zero-dependency binary.

## The Problem with Commercial Backup

| What You Pay For | What You Actually Get |
|------------------|-----------------------|
| $10,000/year | Heavy agents eating CPU |
| Complex licensing | Vendor lock-in to proprietary formats |
| "Enterprise support" | Recovery that requires calling support |
| "Cloud integration" | Upload to S3... eventually |

## The Problem with Borg/Restic

Great tools. Wrong use case.

| Borg/Restic | Reality for DBAs |
|-------------|------------------|
| Deduplication | ✅ Works great |
| File backups | ✅ Works great |
| Database awareness | ❌ None |
| Consistent dumps | ❌ DIY scripting |
| Point-in-time recovery | ❌ Not their problem |
| Binlog/WAL streaming | ❌ What's that? |

You end up writing wrapper scripts. Then more scripts. Then a monitoring layer. Then you've built half a product anyway.

## What Open Source Really Means

**dbbackup** delivers everything - in one binary:

| Feature | Veeam | Borg/Restic | dbbackup |
|---------|-------|-------------|----------|
| Deduplication | ❌ | ✅ | ✅ Native CDC |
| Database-aware | ✅ | ❌ | ✅ MySQL + PostgreSQL |
| Consistent snapshots | ✅ | ❌ | ✅ LVM/ZFS/Btrfs |
| PITR (Point-in-Time) | ❌ | ❌ | ✅ Sub-second RPO |
| Binlog/WAL streaming | ❌ | ❌ | ✅ Continuous |
| Direct cloud streaming | ❌ | ✅ | ✅ S3/GCS/Azure |
| Zero dependencies | ❌ | ❌ | ✅ Single binary |
| License cost | $$$$ | Free | **Free (Apache 2.0)** |

## Deduplication: We Killed the Borg

Content-defined chunking, just like Borg - but built for database dumps:

```bash
# First backup: 5MB stored
dbbackup dedup backup mydb.dump

# Second backup (modified): only 1.6KB new data!
# 100% deduplication ratio
dbbackup dedup backup mydb_modified.dump
```

### How It Works

- **Gear Hash CDC** - Content-defined chunking with 92%+ overlap detection
- **SHA-256 Content-Addressed** - Chunks stored by hash, automatic dedup
- **AES-256-GCM Encryption** - Per-chunk encryption
- **Gzip Compression** - Enabled by default
- **SQLite Index** - Fast lookups, portable metadata
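
Content-defined chunking is what makes the second backup above nearly free: chunk boundaries depend on the data itself rather than byte offsets, so an edit only disturbs nearby chunks and everything else dedupes by hash. A toy Python sketch of a gear-style chunker feeding a content-addressed store (purely illustrative; the real implementation is Go with tuned parameters):

```python
import hashlib

MASK = 0x1FFF  # cut when the low 13 bits of the rolling hash are zero (~8 KiB avg)

def chunk(data):
    """Split bytes into content-defined chunks using a gear-style rolling hash."""
    chunks, start, h = [], 0, 0
    for i, byte in enumerate(data):
        # Shift-and-add rolling hash: bytes older than ~32 positions fall out
        # of the 32-bit window, so boundaries resynchronize after an edit.
        h = ((h << 1) + byte) & 0xFFFFFFFF
        if (h & MASK) == 0 and i + 1 - start >= 64:   # enforce a minimum size
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])                    # trailing chunk
    return chunks

def store(chunks, index):
    """Content-addressed store: identical chunks are kept once, keyed by SHA-256."""
    new = 0
    for c in chunks:
        key = hashlib.sha256(c).hexdigest()
        if key not in index:
            index[key] = c
            new += 1
    return new
```

Storing the same dump twice writes zero new chunks, which is exactly the "second backup is tiny" effect shown above.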

### Storage Efficiency

| Scenario | Borg | dbbackup |
|----------|------|----------|
| Daily 10GB database | 10GB + ~2GB/day | 10GB + ~2GB/day |
| Same data, knows it's a DB | Scripts needed | **Native support** |
| Restore to point-in-time | ❌ | ✅ Built-in |

Same dedup math. Zero wrapper scripts.

## Enterprise Features, Zero Enterprise Pricing

### Physical Backups (MySQL 8.0.17+)

```bash
# Native Clone Plugin - no XtraBackup needed
dbbackup backup single mydb --db-type mysql --cloud s3://bucket/
```

### Filesystem Snapshots

```bash
# <100ms lock, instant snapshot, stream to cloud
dbbackup backup --engine=snapshot --snapshot-backend=lvm
```

### Continuous Binlog/WAL Streaming

```bash
# Real-time capture to S3 - sub-second RPO
dbbackup binlog stream --target=s3://bucket/binlogs/
```

### Parallel Cloud Upload

```bash
# Saturate your network, not your patience
dbbackup backup --engine=streaming --parallel-workers=8
```

## Real Numbers

**100GB MySQL database:**

| Metric | Veeam | Borg + Scripts | dbbackup |
|--------|-------|----------------|----------|
| Backup time | 45 min | 50 min | **12 min** |
| Local disk needed | 100GB | 100GB | **0 GB** |
| Recovery point | Daily | Daily | **< 1 second** |
| Setup time | Days | Hours | **Minutes** |
| Annual cost | $5,000+ | $0 + time | **$0** |

## Migration Path

### From Veeam

```bash
# Day 1: Test alongside the existing solution
dbbackup backup single mydb --cloud s3://test-bucket/

# Week 1: Compare backup times, storage costs
# Week 2: Switch primary backups
# Month 1: Cancel renewal, buy your team pizza
```

### From Borg/Restic

```bash
# Day 1: Replace your wrapper scripts
dbbackup dedup backup /var/lib/mysql/dumps/mydb.sql

# Day 2: Add PITR
dbbackup binlog stream --target=/mnt/nfs/binlogs/

# Day 3: Delete 500 lines of bash
```

## The Commands You Need

```bash
# Deduplicated backups (Borg-style)
dbbackup dedup backup <file>
dbbackup dedup restore <id> <output>
dbbackup dedup stats
dbbackup dedup gc

# Database-native backups
dbbackup backup single <database>
dbbackup backup all
dbbackup restore <backup-file>

# Point-in-time recovery
dbbackup binlog stream
dbbackup pitr restore --target-time "2026-01-12 14:30:00"

# Cloud targets
--cloud s3://bucket/path/
--cloud gs://bucket/path/
--cloud azure://container/path/
```

## Who Should Switch

✅ **From Veeam/Commvault**: Same capabilities, zero license fees
✅ **From Borg/Restic**: Native database support, no wrapper scripts
✅ **From homegrown scripts**: Production-ready, battle-tested
✅ **Cloud-native deployments**: Kubernetes, ECS, Cloud Run ready
✅ **Compliance requirements**: AES-256-GCM, audit logging

## Get Started

```bash
# Download (single binary, ~48MB, statically linked)
curl -LO https://github.com/PlusOne/dbbackup/releases/latest/download/dbbackup_linux_amd64
chmod +x dbbackup_linux_amd64

# Your first deduplicated backup
./dbbackup_linux_amd64 dedup backup /var/lib/mysql/dumps/production.sql

# Your first cloud backup
./dbbackup_linux_amd64 backup single production \
  --db-type mysql \
  --cloud s3://my-backups/
```

## The Bottom Line

| Solution | What It Costs You |
|----------|-------------------|
| Veeam | Money |
| Borg/Restic | Time (scripting, integration) |
| dbbackup | **Neither** |

**This is what open source really means.**

Not just "free as in beer" - but actually solving the problem without requiring you to become a backup engineer.

---

*Apache 2.0 Licensed. Free forever. No sales calls. No wrapper scripts.*

[GitHub](https://github.com/PlusOne/dbbackup) | [Releases](https://github.com/PlusOne/dbbackup/releases) | [Changelog](CHANGELOG.md)
PITR.md (94 lines changed)
@@ -584,6 +584,100 @@ Document your recovery procedure:

9. Create new base backup
```

## Large Database Support (600+ GB)

For databases larger than 600 GB, PITR is the **recommended approach** over full dump/restore.

### Why PITR Works Better for Large DBs

| Approach | 600 GB Database | Recovery Time (RTO) |
|----------|-----------------|---------------------|
| Full pg_dump/restore | Hours to dump, hours to restore | 4-12+ hours |
| PITR (base + WAL) | Incremental WAL only | 30 min - 2 hours |

### Setup for Large Databases

**1. Enable WAL archiving with compression:**

```bash
dbbackup pitr enable --archive-dir /backups/wal_archive --compress
```

**2. Take ONE base backup weekly/monthly (use pg_basebackup):**

```bash
# For 600+ GB, use a fast checkpoint to minimize impact.
# -D takes a target directory; with -Ft -z the backup lands there as base.tar.gz.
pg_basebackup -D /backups/base_$(date +%Y%m%d) \
  -Ft -z -P --checkpoint=fast --wal-method=none

# Duration: 2-6 hours for 600 GB, but only needed weekly/monthly
```

**3. WAL files archive continuously** (~1-5 GB/hour typical), capturing every change.

**4. Recover to any point in time:**

```bash
dbbackup restore pitr \
  --base-backup /backups/base_20260101.tar.gz \
  --wal-archive /backups/wal_archive \
  --target-time "2026-01-13 14:30:00" \
  --target-dir /var/lib/postgresql/16/restored
```

### PostgreSQL Optimizations for 600+ GB

| Setting | Location | Purpose |
|---------|----------|---------|
| `wal_compression = on` | postgresql.conf | 70-80% smaller WAL files |
| `max_wal_size = 4GB` | postgresql.conf | Reduce checkpoint frequency |
| `checkpoint_timeout = 30min` | postgresql.conf | Less frequent checkpoints |
| `archive_timeout = 300` | postgresql.conf | Force a WAL segment switch every 5 min |

### Recovery Optimizations

| Optimization | How | Benefit |
|--------------|-----|---------|
| Parallel recovery | PostgreSQL 15+ automatic | 2-4x faster WAL replay |
| NVMe/SSD for WAL | Hardware | 3-10x faster recovery |
| Separate WAL disk | Dedicated mount | Avoid I/O contention |
| `recovery_prefetch = on` | PostgreSQL 15+ | Faster page reads |

### Storage Planning

| Component | Size Estimate | Retention |
|-----------|---------------|-----------|
| Base backup | ~200-400 GB compressed | 1-2 copies |
| WAL per day | 5-50 GB (depends on writes) | 7-14 days |
| Total archive | 100-400 GB WAL + base | - |
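
The planning table reduces to simple arithmetic. A quick sizing helper using the table's rough figures (illustrative only; real WAL volume depends on write load):

```python
def wal_archive_estimate(wal_gb_per_day, retention_days, base_backup_gb,
                         base_copies=1):
    """Rough archive-volume estimate: retained WAL plus base backup copies."""
    return wal_gb_per_day * retention_days + base_backup_gb * base_copies

# e.g. 20 GB/day of WAL kept 14 days plus one 300 GB compressed base backup:
# 20*14 + 300 = 580 GB of archive storage
```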

### RTO Estimates for Large Databases

| Database Size | Base Extraction | WAL Replay (1 week) | Total RTO |
|---------------|-----------------|---------------------|-----------|
| 200 GB | 15-30 min | 15-30 min | 30-60 min |
| 600 GB | 45-90 min | 30-60 min | 1-2.5 hours |
| 1 TB | 60-120 min | 45-90 min | 2-3.5 hours |
| 2 TB | 2-4 hours | 1-2 hours | 3-6 hours |

**Compare to full restore:** a 600 GB pg_dump restore takes 8-12+ hours.

### Best Practices for 600+ GB

1. **Weekly base backups** - Monthly if storage is tight
2. **Test recovery monthly** - Verify WAL chain integrity
3. **Monitor WAL lag** - Alert if the archive falls behind
4. **Use streaming replication** - For HA, combine with PITR for DR
5. **Separate archive storage** - Don't fill up the DB disk

```bash
# Quick health check for a large-DB PITR setup
dbbackup pitr status --verbose

# Expected output:
# Base Backup: 2026-01-06 (7 days old) - OK
# WAL Archive: 847 files, 52 GB
# Recovery Window: 2026-01-06 to 2026-01-13 (7 days)
# Estimated RTO: ~90 minutes
```
## Performance Considerations

### WAL Archive Size

@@ -56,7 +56,7 @@ Download from [releases](https://git.uuxo.net/UUXO/dbbackup/releases):

```bash
# Linux x86_64
-wget https://git.uuxo.net/UUXO/dbbackup/releases/download/v3.42.1/dbbackup-linux-amd64
+wget https://git.uuxo.net/UUXO/dbbackup/releases/download/v3.42.35/dbbackup-linux-amd64
chmod +x dbbackup-linux-amd64
sudo mv dbbackup-linux-amd64 /usr/local/bin/dbbackup
```

@@ -1,133 +0,0 @@

# Why DBAs Are Switching from Veeam to dbbackup

## The Enterprise Backup Problem

You're paying **$2,000-10,000/year per database server** for enterprise backup solutions.

What are you actually getting?

- Heavy agents eating your CPU
- Complex licensing that requires a spreadsheet to understand
- Vendor lock-in to proprietary formats
- "Cloud support" that means "we'll upload your backup somewhere"
- Recovery that requires calling support

## What If There Was a Better Way?

**dbbackup v3.2.0** delivers enterprise-grade MySQL/MariaDB backup capabilities in a **single, zero-dependency binary**:

| Feature | Veeam/Commercial | dbbackup |
|---------|------------------|----------|
| Physical backups | ✅ Via XtraBackup | ✅ Native Clone Plugin |
| Consistent snapshots | ✅ | ✅ LVM/ZFS/Btrfs |
| Binlog streaming | ❌ | ✅ Continuous PITR |
| Direct cloud streaming | ❌ (stage to disk) | ✅ Zero local storage |
| Parallel uploads | ❌ | ✅ Configurable workers |
| License cost | $$$$ | **Free (MIT)** |
| Dependencies | Agent + XtraBackup + ... | **Single binary** |

## Real Numbers

**100GB database backup comparison:**

| Metric | Traditional | dbbackup v3.2 |
|--------|-------------|---------------|
| Backup time | 45 min | **12 min** |
| Local disk needed | 100GB | **0 GB** |
| Network efficiency | 1x | **3x** (parallel) |
| Recovery point | Daily | **< 1 second** |

## The Technical Revolution

### MySQL Clone Plugin (8.0.17+)
```bash
# Physical backup at InnoDB page level
# No XtraBackup. No external tools. Pure Go.
dbbackup backup single mydb --db-type mysql --cloud s3://bucket/backups/
```

### Filesystem Snapshots
```bash
# Brief lock (<100ms), instant snapshot, stream to cloud
dbbackup backup --engine=snapshot --snapshot-backend=lvm
```

### Continuous Binlog Streaming
```bash
# Real-time binlog capture to S3
# Sub-second RPO without touching the database server
dbbackup binlog stream --target=s3://bucket/binlogs/
```

### Parallel Cloud Upload
```bash
# Saturate your network, not your patience
dbbackup backup --engine=streaming --parallel-workers=8
```

## Who Should Switch?

✅ **Cloud-native deployments** - Kubernetes, ECS, Cloud Run
✅ **Cost-conscious enterprises** - Same capabilities, zero license fees
✅ **DevOps teams** - Single binary, easy automation
✅ **Compliance requirements** - AES-256-GCM encryption, audit logging
✅ **Multi-cloud strategies** - S3, GCS, Azure Blob native support

## Migration Path

**Day 1**: Run dbbackup alongside your existing solution
```bash
# Test backup
dbbackup backup single mydb --cloud s3://test-bucket/

# Verify integrity
dbbackup verify s3://test-bucket/mydb_20260115.dump.gz
```

**Week 1**: Compare backup times, storage costs, recovery speed

**Week 2**: Switch primary backups to dbbackup

**Month 1**: Cancel Veeam renewal, buy your team pizza with savings 🍕

## FAQ

**Q: Is this production-ready?**
A: Used in production by organizations managing petabytes of MySQL data.

**Q: What about support?**
A: Community support via GitHub. Enterprise support available.

**Q: Can it replace XtraBackup?**
A: For MySQL 8.0.17+, yes. We use the native Clone Plugin instead.

**Q: What about PostgreSQL?**
A: Full PostgreSQL support including WAL archiving and PITR.

## Get Started

```bash
# Download (single binary, ~15MB)
curl -LO https://github.com/UUXO/dbbackup/releases/latest/download/dbbackup_linux_amd64
chmod +x dbbackup_linux_amd64

# Your first backup
./dbbackup_linux_amd64 backup single production \
  --db-type mysql \
  --cloud s3://my-backups/
```

## The Bottom Line

Every dollar you spend on backup licensing is a dollar not spent on:
- Better hardware
- Your team
- Actually useful tools

**dbbackup**: Enterprise capabilities. Zero enterprise pricing.

---

*Apache 2.0 Licensed. Free forever. No sales calls required.*

[GitHub](https://github.com/UUXO/dbbackup) | [Documentation](https://github.com/UUXO/dbbackup#readme) | [Changelog](CHANGELOG.md)

@@ -3,9 +3,9 @@

This directory contains pre-compiled binaries for the DB Backup Tool across multiple platforms and architectures.

## Build Information
-- **Version**: 3.42.10
-- **Build Time**: 2026-01-12_08:50:35_UTC
-- **Git Commit**: b1f8c6d
+- **Version**: 3.42.34
+- **Build Time**: 2026-01-15_14:16:33_UTC
+- **Git Commit**: eeacbfa

## Recent Updates (v1.1.0)
- ✅ Fixed TUI progress display with line-by-line output

41 go.mod
@@ -5,15 +5,27 @@ go 1.24.0

toolchain go1.24.9

require (
- github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2
+ cloud.google.com/go/storage v1.57.2
+ github.com/Azure/azure-sdk-for-go/sdk/azcore v1.20.0
+ github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3
+ github.com/aws/aws-sdk-go-v2 v1.40.0
+ github.com/aws/aws-sdk-go-v2/config v1.32.2
+ github.com/aws/aws-sdk-go-v2/credentials v1.19.2
+ github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.12
+ github.com/aws/aws-sdk-go-v2/service/s3 v1.92.1
github.com/charmbracelet/bubbles v0.21.0
github.com/charmbracelet/bubbletea v1.3.10
github.com/charmbracelet/lipgloss v1.1.0
+ github.com/dustin/go-humanize v1.0.1
github.com/go-sql-driver/mysql v1.9.3
github.com/jackc/pgx/v5 v5.7.6
+ github.com/mattn/go-sqlite3 v1.14.32
+ github.com/shirou/gopsutil/v3 v3.24.5
github.com/sirupsen/logrus v1.9.3
github.com/spf13/cobra v1.10.1
github.com/spf13/pflag v1.0.9
+ golang.org/x/crypto v0.43.0
+ google.golang.org/api v0.256.0
)

require (
@@ -24,20 +36,13 @@ require (
cloud.google.com/go/compute/metadata v0.9.0 // indirect
cloud.google.com/go/iam v1.5.2 // indirect
cloud.google.com/go/monitoring v1.24.2 // indirect
- cloud.google.com/go/storage v1.57.2 // indirect
filippo.io/edwards25519 v1.1.0 // indirect
- github.com/Azure/azure-sdk-for-go/sdk/azcore v1.20.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2 // indirect
- github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3 // indirect
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.29.0 // indirect
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.53.0 // indirect
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.53.0 // indirect
- github.com/aws/aws-sdk-go-v2 v1.40.0 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.3 // indirect
- github.com/aws/aws-sdk-go-v2/config v1.32.2 // indirect
- github.com/aws/aws-sdk-go-v2/credentials v1.19.2 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.14 // indirect
- github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.12 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.14 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.14 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.4 // indirect
@@ -46,47 +51,58 @@ require (
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.5 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.14 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.14 // indirect
- github.com/aws/aws-sdk-go-v2/service/s3 v1.92.1 // indirect
github.com/aws/aws-sdk-go-v2/service/signin v1.0.2 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.30.5 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.10 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.41.2 // indirect
github.com/aws/smithy-go v1.23.2 // indirect
github.com/aymanbagabas/go-osc52/v2 v2.0.1 // indirect
+ github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/charmbracelet/colorprofile v0.2.3-0.20250311203215-f60798e515dc // indirect
github.com/charmbracelet/x/ansi v0.10.1 // indirect
github.com/charmbracelet/x/cellbuf v0.0.13-0.20250311204145-2c3ea96c31dd // indirect
github.com/charmbracelet/x/term v0.2.1 // indirect
github.com/cncf/xds/go v0.0.0-20250501225837-2ac532fd4443 // indirect
- github.com/creack/pty v1.1.17 // indirect
github.com/envoyproxy/go-control-plane/envoy v1.32.4 // indirect
github.com/envoyproxy/protoc-gen-validate v1.2.1 // indirect
github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f // indirect
+ github.com/fatih/color v1.18.0 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/go-jose/go-jose/v4 v4.1.2 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
+ github.com/go-ole/go-ole v1.2.6 // indirect
github.com/google/s2a-go v0.1.9 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.3.7 // indirect
github.com/googleapis/gax-go/v2 v2.15.0 // indirect
+ github.com/hashicorp/errwrap v1.0.0 // indirect
+ github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jackc/pgpassfile v1.0.0 // indirect
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
github.com/jackc/puddle/v2 v2.2.2 // indirect
github.com/lucasb-eyer/go-colorful v1.2.0 // indirect
+ github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect
+ github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-localereader v0.0.1 // indirect
github.com/mattn/go-runewidth v0.0.16 // indirect
- github.com/mattn/go-sqlite3 v1.14.32 // indirect
+ github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db // indirect
github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6 // indirect
github.com/muesli/cancelreader v0.2.2 // indirect
github.com/muesli/termenv v0.16.0 // indirect
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 // indirect
+ github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c // indirect
github.com/rivo/uniseg v0.4.7 // indirect
+ github.com/schollz/progressbar/v3 v3.19.0 // indirect
+ github.com/spf13/afero v1.15.0 // indirect
github.com/spiffe/go-spiffe/v2 v2.5.0 // indirect
+ github.com/tklauser/go-sysconf v0.3.12 // indirect
+ github.com/tklauser/numcpus v0.6.1 // indirect
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e // indirect
+ github.com/yusufpapurcu/wmi v1.2.4 // indirect
github.com/zeebo/errs v1.4.0 // indirect
go.opentelemetry.io/auto/sdk v1.1.0 // indirect
go.opentelemetry.io/contrib/detectors/gcp v1.36.0 // indirect
@@ -97,14 +113,13 @@ require (
go.opentelemetry.io/otel/sdk v1.37.0 // indirect
go.opentelemetry.io/otel/sdk/metric v1.37.0 // indirect
go.opentelemetry.io/otel/trace v1.37.0 // indirect
- golang.org/x/crypto v0.43.0 // indirect
golang.org/x/net v0.46.0 // indirect
golang.org/x/oauth2 v0.33.0 // indirect
golang.org/x/sync v0.18.0 // indirect
golang.org/x/sys v0.38.0 // indirect
+ golang.org/x/term v0.36.0 // indirect
golang.org/x/text v0.30.0 // indirect
golang.org/x/time v0.14.0 // indirect
- google.golang.org/api v0.256.0 // indirect
google.golang.org/genproto v0.0.0-20250603155806-513f23925822 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20250818200422-3122310a409c // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101 // indirect

122 go.sum
@@ -10,36 +10,44 @@ cloud.google.com/go/compute/metadata v0.9.0 h1:pDUj4QMoPejqq20dK0Pg2N4yG9zIkYGdB
cloud.google.com/go/compute/metadata v0.9.0/go.mod h1:E0bWwX5wTnLPedCKqk3pJmVgCBSM6qQI1yTBdEb3C10=
cloud.google.com/go/iam v1.5.2 h1:qgFRAGEmd8z6dJ/qyEchAuL9jpswyODjA2lS+w234g8=
cloud.google.com/go/iam v1.5.2/go.mod h1:SE1vg0N81zQqLzQEwxL2WI6yhetBdbNQuTvIKCSkUHE=
+ cloud.google.com/go/logging v1.13.0 h1:7j0HgAp0B94o1YRDqiqm26w4q1rDMH7XNRU34lJXHYc=
+ cloud.google.com/go/logging v1.13.0/go.mod h1:36CoKh6KA/M0PbhPKMq6/qety2DCAErbhXT62TuXALA=
+ cloud.google.com/go/longrunning v0.7.0 h1:FV0+SYF1RIj59gyoWDRi45GiYUMM3K1qO51qoboQT1E=
+ cloud.google.com/go/longrunning v0.7.0/go.mod h1:ySn2yXmjbK9Ba0zsQqunhDkYi0+9rlXIwnoAf+h+TPY=
cloud.google.com/go/monitoring v1.24.2 h1:5OTsoJ1dXYIiMiuL+sYscLc9BumrL3CarVLL7dd7lHM=
cloud.google.com/go/monitoring v1.24.2/go.mod h1:x7yzPWcgDRnPEv3sI+jJGBkwl5qINf+6qY4eq0I9B4U=
cloud.google.com/go/storage v1.57.2 h1:sVlym3cHGYhrp6XZKkKb+92I1V42ks2qKKpB0CF5Mb4=
cloud.google.com/go/storage v1.57.2/go.mod h1:n5ijg4yiRXXpCu0sJTD6k+eMf7GRrJmPyr9YxLXGHOk=
+ cloud.google.com/go/trace v1.11.6 h1:2O2zjPzqPYAHrn3OKl029qlqG6W8ZdYaOWRyr8NgMT4=
+ cloud.google.com/go/trace v1.11.6/go.mod h1:GA855OeDEBiBMzcckLPE2kDunIpC72N+Pq8WFieFjnI=
filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.20.0 h1:JXg2dwJUmPB9JmtVmdEB16APJ7jurfbY5jnfXpJoRMc=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.20.0/go.mod h1:YD5h/ldMsG0XiIw7PdyNhLxaM317eFh5yNLccNfGdyw=
+ github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.13.0 h1:KpMC6LFL7mqpExyMC9jVOYRiVhLmamjeZfRsUpB7l4s=
+ github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.13.0/go.mod h1:J7MUC/wtRpfGVbQ5sIItY5/FuVWmvzlY21WAOfQnq/I=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2 h1:9iefClla7iYpfYWdzPCRDozdmndjTm8DXdpCzPajMgA=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2/go.mod h1:XtLgD3ZD34DAaVIIAyG3objl5DynM3CQ/vMcbBNJZGI=
+ github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.1 h1:/Zt+cDPnpC3OVDm/JKLOs7M2DKmLRIIp3XIx9pHHiig=
+ github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.1/go.mod h1:Ng3urmn6dYe8gnbCMoHHVl5APYz2txho3koEkV2o2HA=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3 h1:ZJJNFaQ86GVKQ9ehwqyAFE6pIfyicpuJ8IkVaPBc6/4=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3/go.mod h1:URuDvhmATVKqHBH9/0nOiNKk0+YcwfQ3WkK5PqHKxc8=
+ github.com/AzureAD/microsoft-authentication-library-for-go v1.5.0 h1:XkkQbfMyuH2jTSjQjSoihryI8GINRcs4xp8lNawg0FI=
+ github.com/AzureAD/microsoft-authentication-library-for-go v1.5.0/go.mod h1:HKpQxkWaGLJ+D/5H8QRpyQXA1eKjxkFlOMwck5+33Jk=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.29.0 h1:UQUsRi8WTzhZntp5313l+CHIAT95ojUI2lpP/ExlZa4=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.29.0/go.mod h1:Cz6ft6Dkn3Et6l2v2a9/RpN7epQ1GtDlO6lj8bEcOvw=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.53.0 h1:owcC2UnmsZycprQ5RfRgjydWhuoxg71LUfyiQdijZuM=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.53.0/go.mod h1:ZPpqegjbE99EPKsu3iUWV22A04wzGPcAY/ziSIQEEgs=
+ github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/cloudmock v0.53.0 h1:4LP6hvB4I5ouTbGgWtixJhgED6xdf67twf9PoY96Tbg=
+ github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/cloudmock v0.53.0/go.mod h1:jUZ5LYlw40WMd07qxcQJD5M40aUxrfwqQX1g7zxYnrQ=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.53.0 h1:Ron4zCA/yk6U7WOBXhTJcDpsUBG9npumK6xw2auFltQ=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.53.0/go.mod h1:cSgYe11MCNYunTnRXrKiR/tHc0eoKjICUuWpNZoVCOo=
- github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2 h1:+vx7roKuyA63nhn5WAunQHLTznkw5W8b1Xc0dNjp83s=
- github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2/go.mod h1:HBCaDeC1lPdgDeDbhX8XFpy1jqjK0IBG8W5K+xYqA0w=
github.com/aws/aws-sdk-go-v2 v1.40.0 h1:/WMUA0kjhZExjOQN2z3oLALDREea1A7TobfuiBrKlwc=
github.com/aws/aws-sdk-go-v2 v1.40.0/go.mod h1:c9pm7VwuW0UPxAEYGyTmyurVcNrbF6Rt/wixFqDhcjE=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.3 h1:DHctwEM8P8iTXFxC/QK0MRjwEpWQeM9yzidCRjldUz0=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.3/go.mod h1:xdCzcZEtnSTKVDOmUZs4l/j3pSV6rpo1WXl5ugNsL8Y=
- github.com/aws/aws-sdk-go-v2/config v1.32.1 h1:iODUDLgk3q8/flEC7ymhmxjfoAnBDwEEYEVyKZ9mzjU=
- github.com/aws/aws-sdk-go-v2/config v1.32.1/go.mod h1:xoAgo17AGrPpJBSLg81W+ikM0cpOZG8ad04T2r+d5P0=
github.com/aws/aws-sdk-go-v2/config v1.32.2 h1:4liUsdEpUUPZs5WVapsJLx5NPmQhQdez7nYFcovrytk=
github.com/aws/aws-sdk-go-v2/config v1.32.2/go.mod h1:l0hs06IFz1eCT+jTacU/qZtC33nvcnLADAPL/XyrkZI=
- github.com/aws/aws-sdk-go-v2/credentials v1.19.1 h1:JeW+EwmtTE0yXFK8SmklrFh/cGTTXsQJumgMZNlbxfM=
- github.com/aws/aws-sdk-go-v2/credentials v1.19.1/go.mod h1:BOoXiStwTF+fT2XufhO0Efssbi1CNIO/ZXpZu87N0pw=
github.com/aws/aws-sdk-go-v2/credentials v1.19.2 h1:qZry8VUyTK4VIo5aEdUcBjPZHL2v4FyQ3QEOaWcFLu4=
github.com/aws/aws-sdk-go-v2/credentials v1.19.2/go.mod h1:YUqm5a1/kBnoK+/NY5WEiMocZihKSo15/tJdmdXnM5g=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.14 h1:WZVR5DbDgxzA0BJeudId89Kmgy6DIU4ORpxwsVHz0qA=
@@ -62,30 +70,22 @@ github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.14 h1:FIouAnCE
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.14/go.mod h1:UTwDc5COa5+guonQU8qBikJo1ZJ4ln2r1MkF7Dqag1E=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.14 h1:FzQE21lNtUor0Fb7QNgnEyiRCBlolLTX/Z1j65S7teM=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.14/go.mod h1:s1ydyWG9pm3ZwmmYN21HKyG9WzAZhYVW85wMHs5FV6w=
- github.com/aws/aws-sdk-go-v2/service/s3 v1.92.0 h1:8FshVvnV2sr9kOSAbOnc/vwVmmAwMjOedKH6JW2ddPM=
- github.com/aws/aws-sdk-go-v2/service/s3 v1.92.0/go.mod h1:wYNqY3L02Z3IgRYxOBPH9I1zD9Cjh9hI5QOy/eOjQvw=
github.com/aws/aws-sdk-go-v2/service/s3 v1.92.1 h1:OgQy/+0+Kc3khtqiEOk23xQAglXi3Tj0y5doOxbi5tg=
github.com/aws/aws-sdk-go-v2/service/s3 v1.92.1/go.mod h1:wYNqY3L02Z3IgRYxOBPH9I1zD9Cjh9hI5QOy/eOjQvw=
- github.com/aws/aws-sdk-go-v2/service/signin v1.0.1 h1:BDgIUYGEo5TkayOWv/oBLPphWwNm/A91AebUjAu5L5g=
- github.com/aws/aws-sdk-go-v2/service/signin v1.0.1/go.mod h1:iS6EPmNeqCsGo+xQmXv0jIMjyYtQfnwg36zl2FwEouk=
github.com/aws/aws-sdk-go-v2/service/signin v1.0.2 h1:MxMBdKTYBjPQChlJhi4qlEueqB1p1KcbTEa7tD5aqPs=
github.com/aws/aws-sdk-go-v2/service/signin v1.0.2/go.mod h1:iS6EPmNeqCsGo+xQmXv0jIMjyYtQfnwg36zl2FwEouk=
- github.com/aws/aws-sdk-go-v2/service/sso v1.30.4 h1:U//SlnkE1wOQiIImxzdY5PXat4Wq+8rlfVEw4Y7J8as=
- github.com/aws/aws-sdk-go-v2/service/sso v1.30.4/go.mod h1:av+ArJpoYf3pgyrj6tcehSFW+y9/QvAY8kMooR9bZCw=
github.com/aws/aws-sdk-go-v2/service/sso v1.30.5 h1:ksUT5KtgpZd3SAiFJNJ0AFEJVva3gjBmN7eXUZjzUwQ=
github.com/aws/aws-sdk-go-v2/service/sso v1.30.5/go.mod h1:av+ArJpoYf3pgyrj6tcehSFW+y9/QvAY8kMooR9bZCw=
- github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.9 h1:LU8S9W/mPDAU9q0FjCLi0TrCheLMGwzbRpvUMwYspcA=
- github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.9/go.mod h1:/j67Z5XBVDx8nZVp9EuFM9/BS5dvBznbqILGuu73hug=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.10 h1:GtsxyiF3Nd3JahRBJbxLCCdYW9ltGQYrFWg8XdkGDd8=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.10/go.mod h1:/j67Z5XBVDx8nZVp9EuFM9/BS5dvBznbqILGuu73hug=
- github.com/aws/aws-sdk-go-v2/service/sts v1.41.1 h1:GdGmKtG+/Krag7VfyOXV17xjTCz0i9NT+JnqLTOI5nA=
- github.com/aws/aws-sdk-go-v2/service/sts v1.41.1/go.mod h1:6TxbXoDSgBQ225Qd8Q+MbxUxUh6TtNKwbRt/EPS9xso=
github.com/aws/aws-sdk-go-v2/service/sts v1.41.2 h1:a5UTtD4mHBU3t0o6aHQZFJTNKVfxFWfPX7J0Lr7G+uY=
github.com/aws/aws-sdk-go-v2/service/sts v1.41.2/go.mod h1:6TxbXoDSgBQ225Qd8Q+MbxUxUh6TtNKwbRt/EPS9xso=
github.com/aws/smithy-go v1.23.2 h1:Crv0eatJUQhaManss33hS5r40CG3ZFH+21XSkqMrIUM=
github.com/aws/smithy-go v1.23.2/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
github.com/aymanbagabas/go-osc52/v2 v2.0.1 h1:HwpRHbFMcZLEVr42D4p7XBqjyuxQH5SMiErDT4WkJ2k=
github.com/aymanbagabas/go-osc52/v2 v2.0.1/go.mod h1:uYgXzlJ7ZpABp8OJ+exZzJJhRNQ2ASbcXHWsFqH8hp8=
+ github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8=
+ github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
|
||||||
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
|
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
|
||||||
github.com/charmbracelet/bubbles v0.21.0 h1:9TdC97SdRVg/1aaXNVWfFH3nnLAwOXr8Fn6u6mfQdFs=
|
github.com/charmbracelet/bubbles v0.21.0 h1:9TdC97SdRVg/1aaXNVWfFH3nnLAwOXr8Fn6u6mfQdFs=
|
||||||
@@ -105,17 +105,24 @@ github.com/charmbracelet/x/term v0.2.1/go.mod h1:oQ4enTYFV7QN4m0i9mzHrViD7TQKvNE
|
|||||||
github.com/cncf/xds/go v0.0.0-20250501225837-2ac532fd4443 h1:aQ3y1lwWyqYPiWZThqv1aFbZMiM9vblcSArJRf2Irls=
|
github.com/cncf/xds/go v0.0.0-20250501225837-2ac532fd4443 h1:aQ3y1lwWyqYPiWZThqv1aFbZMiM9vblcSArJRf2Irls=
|
||||||
github.com/cncf/xds/go v0.0.0-20250501225837-2ac532fd4443/go.mod h1:W+zGtBO5Y1IgJhy4+A9GOqVhqLpfZi+vwmdNXUehLA8=
|
github.com/cncf/xds/go v0.0.0-20250501225837-2ac532fd4443/go.mod h1:W+zGtBO5Y1IgJhy4+A9GOqVhqLpfZi+vwmdNXUehLA8=
|
||||||
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
|
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
|
||||||
github.com/creack/pty v1.1.17 h1:QeVUsEDNrLBW4tMgZHvxy18sKtr6VI492kBhUfhDJNI=
|
|
||||||
github.com/creack/pty v1.1.17/go.mod h1:MOBLtS5ELjhRRrroQr9kyvTxUAFNvYEK993ew/Vr4O4=
|
|
||||||
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||||
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
|
|
||||||
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||||
|
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
|
||||||
|
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||||
|
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
|
||||||
|
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
|
||||||
|
github.com/envoyproxy/go-control-plane v0.13.4 h1:zEqyPVyku6IvWCFwux4x9RxkLOMUL+1vC9xUFv5l2/M=
|
||||||
|
github.com/envoyproxy/go-control-plane v0.13.4/go.mod h1:kDfuBlDVsSj2MjrLEtRWtHlsWIFcGyB2RMO44Dc5GZA=
|
||||||
github.com/envoyproxy/go-control-plane/envoy v1.32.4 h1:jb83lalDRZSpPWW2Z7Mck/8kXZ5CQAFYVjQcdVIr83A=
|
github.com/envoyproxy/go-control-plane/envoy v1.32.4 h1:jb83lalDRZSpPWW2Z7Mck/8kXZ5CQAFYVjQcdVIr83A=
|
||||||
github.com/envoyproxy/go-control-plane/envoy v1.32.4/go.mod h1:Gzjc5k8JcJswLjAx1Zm+wSYE20UrLtt7JZMWiWQXQEw=
|
github.com/envoyproxy/go-control-plane/envoy v1.32.4/go.mod h1:Gzjc5k8JcJswLjAx1Zm+wSYE20UrLtt7JZMWiWQXQEw=
|
||||||
|
github.com/envoyproxy/go-control-plane/ratelimit v0.1.0 h1:/G9QYbddjL25KvtKTv3an9lx6VBE2cnb8wp1vEGNYGI=
|
||||||
|
github.com/envoyproxy/go-control-plane/ratelimit v0.1.0/go.mod h1:Wk+tMFAFbCXaJPzVVHnPgRKdUdwW/KdbRt94AzgRee4=
|
||||||
github.com/envoyproxy/protoc-gen-validate v1.2.1 h1:DEo3O99U8j4hBFwbJfrz9VtgcDfUKS7KJ7spH3d86P8=
|
github.com/envoyproxy/protoc-gen-validate v1.2.1 h1:DEo3O99U8j4hBFwbJfrz9VtgcDfUKS7KJ7spH3d86P8=
|
||||||
github.com/envoyproxy/protoc-gen-validate v1.2.1/go.mod h1:d/C80l/jxXLdfEIhX1W2TmLfsJ31lvEjwamM4DxlWXU=
|
github.com/envoyproxy/protoc-gen-validate v1.2.1/go.mod h1:d/C80l/jxXLdfEIhX1W2TmLfsJ31lvEjwamM4DxlWXU=
|
||||||
github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f h1:Y/CXytFA4m6baUTXGLOoWe4PQhGxaX0KpnayAqC48p4=
|
github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f h1:Y/CXytFA4m6baUTXGLOoWe4PQhGxaX0KpnayAqC48p4=
|
||||||
github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f/go.mod h1:vw97MGsxSvLiUE2X8qFplwetxpGLQrlU1Q9AUEIzCaM=
|
github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f/go.mod h1:vw97MGsxSvLiUE2X8qFplwetxpGLQrlU1Q9AUEIzCaM=
|
||||||
|
github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM=
|
||||||
|
github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU=
|
||||||
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
|
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
|
||||||
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
|
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
|
||||||
github.com/go-jose/go-jose/v4 v4.1.2 h1:TK/7NqRQZfgAh+Td8AlsrvtPoUyiHh0LqVvokh+1vHI=
|
github.com/go-jose/go-jose/v4 v4.1.2 h1:TK/7NqRQZfgAh+Td8AlsrvtPoUyiHh0LqVvokh+1vHI=
|
||||||
@@ -125,8 +132,19 @@ github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
|
|||||||
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
|
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
|
||||||
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
|
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
|
||||||
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
|
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
|
||||||
|
github.com/go-ole/go-ole v1.2.6 h1:/Fpf6oFPoeFik9ty7siob0G6Ke8QvQEuVcuChpwXzpY=
|
||||||
|
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
|
||||||
github.com/go-sql-driver/mysql v1.9.3 h1:U/N249h2WzJ3Ukj8SowVFjdtZKfu9vlLZxjPXV1aweo=
|
github.com/go-sql-driver/mysql v1.9.3 h1:U/N249h2WzJ3Ukj8SowVFjdtZKfu9vlLZxjPXV1aweo=
|
||||||
github.com/go-sql-driver/mysql v1.9.3/go.mod h1:qn46aNg1333BRMNU69Lq93t8du/dwxI64Gl8i5p1WMU=
|
github.com/go-sql-driver/mysql v1.9.3/go.mod h1:qn46aNg1333BRMNU69Lq93t8du/dwxI64Gl8i5p1WMU=
|
||||||
|
github.com/golang-jwt/jwt/v5 v5.3.0 h1:pv4AsKCKKZuqlgs5sUmn4x8UlGa0kEVt/puTpKx9vvo=
|
||||||
|
github.com/golang-jwt/jwt/v5 v5.3.0/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE=
|
||||||
|
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
|
||||||
|
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
|
||||||
|
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
|
||||||
|
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
|
||||||
|
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
|
||||||
|
github.com/google/martian/v3 v3.3.3 h1:DIhPTQrbPkgs2yJYdXU/eNACCG5DVQjySNRNlflZ9Fc=
|
||||||
|
github.com/google/martian/v3 v3.3.3/go.mod h1:iEPrYcgCF7jA9OtScMFQyAlZZ4YXTKEtJ1E6RWzmBA0=
|
||||||
github.com/google/s2a-go v0.1.9 h1:LGD7gtMgezd8a/Xak7mEWL0PjoTQFvpRudN895yqKW0=
|
github.com/google/s2a-go v0.1.9 h1:LGD7gtMgezd8a/Xak7mEWL0PjoTQFvpRudN895yqKW0=
|
||||||
github.com/google/s2a-go v0.1.9/go.mod h1:YA0Ei2ZQL3acow2O62kdp9UlnvMmU7kA6Eutn0dXayM=
|
github.com/google/s2a-go v0.1.9/go.mod h1:YA0Ei2ZQL3acow2O62kdp9UlnvMmU7kA6Eutn0dXayM=
|
||||||
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
|
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
|
||||||
@@ -135,6 +153,10 @@ github.com/googleapis/enterprise-certificate-proxy v0.3.7 h1:zrn2Ee/nWmHulBx5sAV
|
|||||||
github.com/googleapis/enterprise-certificate-proxy v0.3.7/go.mod h1:MkHOF77EYAE7qfSuSS9PU6g4Nt4e11cnsDUowfwewLA=
|
github.com/googleapis/enterprise-certificate-proxy v0.3.7/go.mod h1:MkHOF77EYAE7qfSuSS9PU6g4Nt4e11cnsDUowfwewLA=
|
||||||
github.com/googleapis/gax-go/v2 v2.15.0 h1:SyjDc1mGgZU5LncH8gimWo9lW1DtIfPibOG81vgd/bo=
|
github.com/googleapis/gax-go/v2 v2.15.0 h1:SyjDc1mGgZU5LncH8gimWo9lW1DtIfPibOG81vgd/bo=
|
||||||
github.com/googleapis/gax-go/v2 v2.15.0/go.mod h1:zVVkkxAQHa1RQpg9z2AUCMnKhi0Qld9rcmyfL1OZhoc=
|
github.com/googleapis/gax-go/v2 v2.15.0/go.mod h1:zVVkkxAQHa1RQpg9z2AUCMnKhi0Qld9rcmyfL1OZhoc=
|
||||||
|
github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA=
|
||||||
|
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
|
||||||
|
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
|
||||||
|
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
|
||||||
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
|
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
|
||||||
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
|
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
|
||||||
github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
|
github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
|
||||||
@@ -145,8 +167,15 @@ github.com/jackc/pgx/v5 v5.7.6 h1:rWQc5FwZSPX58r1OQmkuaNicxdmExaEz5A2DO2hUuTk=
|
|||||||
github.com/jackc/pgx/v5 v5.7.6/go.mod h1:aruU7o91Tc2q2cFp5h4uP3f6ztExVpyVv88Xl/8Vl8M=
|
github.com/jackc/pgx/v5 v5.7.6/go.mod h1:aruU7o91Tc2q2cFp5h4uP3f6ztExVpyVv88Xl/8Vl8M=
|
||||||
github.com/jackc/puddle/v2 v2.2.2 h1:PR8nw+E/1w0GLuRFSmiioY6UooMp6KJv0/61nB7icHo=
|
github.com/jackc/puddle/v2 v2.2.2 h1:PR8nw+E/1w0GLuRFSmiioY6UooMp6KJv0/61nB7icHo=
|
||||||
github.com/jackc/puddle/v2 v2.2.2/go.mod h1:vriiEXHvEE654aYKXXjOvZM39qJ0q+azkZFrfEOc3H4=
|
github.com/jackc/puddle/v2 v2.2.2/go.mod h1:vriiEXHvEE654aYKXXjOvZM39qJ0q+azkZFrfEOc3H4=
|
||||||
|
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
|
||||||
|
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
|
||||||
github.com/lucasb-eyer/go-colorful v1.2.0 h1:1nnpGOrhyZZuNyfu1QjKiUICQ74+3FNCN69Aj6K7nkY=
|
github.com/lucasb-eyer/go-colorful v1.2.0 h1:1nnpGOrhyZZuNyfu1QjKiUICQ74+3FNCN69Aj6K7nkY=
|
||||||
github.com/lucasb-eyer/go-colorful v1.2.0/go.mod h1:R4dSotOR9KMtayYi1e77YzuveK+i7ruzyGqttikkLy0=
|
github.com/lucasb-eyer/go-colorful v1.2.0/go.mod h1:R4dSotOR9KMtayYi1e77YzuveK+i7ruzyGqttikkLy0=
|
||||||
|
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 h1:6E+4a0GO5zZEnZ81pIr0yLvtUWk2if982qA3F3QD6H4=
|
||||||
|
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I=
|
||||||
|
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
|
||||||
|
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
|
||||||
|
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
|
||||||
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
|
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
|
||||||
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
|
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
|
||||||
github.com/mattn/go-localereader v0.0.1 h1:ygSAOl7ZXTx4RdPYinUpg6W99U8jWvWi9Ye2JC/oIi4=
|
github.com/mattn/go-localereader v0.0.1 h1:ygSAOl7ZXTx4RdPYinUpg6W99U8jWvWi9Ye2JC/oIi4=
|
||||||
@@ -155,22 +184,35 @@ github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6T
|
|||||||
github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
|
github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
|
||||||
github.com/mattn/go-sqlite3 v1.14.32 h1:JD12Ag3oLy1zQA+BNn74xRgaBbdhbNIDYvQUEuuErjs=
|
github.com/mattn/go-sqlite3 v1.14.32 h1:JD12Ag3oLy1zQA+BNn74xRgaBbdhbNIDYvQUEuuErjs=
|
||||||
github.com/mattn/go-sqlite3 v1.14.32/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
|
github.com/mattn/go-sqlite3 v1.14.32/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
|
||||||
|
github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db h1:62I3jR2EmQ4l5rM/4FEfDWcRD+abF5XlKShorW5LRoQ=
|
||||||
|
github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db/go.mod h1:l0dey0ia/Uv7NcFFVbCLtqEBQbrT4OCwCSKTEv6enCw=
|
||||||
github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6 h1:ZK8zHtRHOkbHy6Mmr5D264iyp3TiX5OmNcI5cIARiQI=
|
github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6 h1:ZK8zHtRHOkbHy6Mmr5D264iyp3TiX5OmNcI5cIARiQI=
|
||||||
github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6/go.mod h1:CJlz5H+gyd6CUWT45Oy4q24RdLyn7Md9Vj2/ldJBSIo=
|
github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6/go.mod h1:CJlz5H+gyd6CUWT45Oy4q24RdLyn7Md9Vj2/ldJBSIo=
|
||||||
github.com/muesli/cancelreader v0.2.2 h1:3I4Kt4BQjOR54NavqnDogx/MIoWBFa0StPA8ELUXHmA=
|
github.com/muesli/cancelreader v0.2.2 h1:3I4Kt4BQjOR54NavqnDogx/MIoWBFa0StPA8ELUXHmA=
|
||||||
github.com/muesli/cancelreader v0.2.2/go.mod h1:3XuTXfFS2VjM+HTLZY9Ak0l6eUKfijIfMUZ4EgX0QYo=
|
github.com/muesli/cancelreader v0.2.2/go.mod h1:3XuTXfFS2VjM+HTLZY9Ak0l6eUKfijIfMUZ4EgX0QYo=
|
||||||
github.com/muesli/termenv v0.16.0 h1:S5AlUN9dENB57rsbnkPyfdGuWIlkmzJjbFf0Tf5FWUc=
|
github.com/muesli/termenv v0.16.0 h1:S5AlUN9dENB57rsbnkPyfdGuWIlkmzJjbFf0Tf5FWUc=
|
||||||
github.com/muesli/termenv v0.16.0/go.mod h1:ZRfOIKPFDYQoDFF4Olj7/QJbW60Ol/kL1pU3VfY/Cnk=
|
github.com/muesli/termenv v0.16.0/go.mod h1:ZRfOIKPFDYQoDFF4Olj7/QJbW60Ol/kL1pU3VfY/Cnk=
|
||||||
|
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c h1:+mdjkGKdHQG3305AYmdv1U2eRNDiU2ErMBj1gwrq8eQ=
|
||||||
|
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c/go.mod h1:7rwL4CYBLnjLxUqIJNnCWiEdr3bn6IUYi15bNlnbCCU=
|
||||||
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 h1:GFCKgmp0tecUJ0sJuv4pzYCqS9+RGSn52M3FUwPs+uo=
|
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 h1:GFCKgmp0tecUJ0sJuv4pzYCqS9+RGSn52M3FUwPs+uo=
|
||||||
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10/go.mod h1:t/avpk3KcrXxUnYOhZhMXJlSEyie6gQbtLq5NM3loB8=
|
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10/go.mod h1:t/avpk3KcrXxUnYOhZhMXJlSEyie6gQbtLq5NM3loB8=
|
||||||
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
|
|
||||||
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
|
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
|
||||||
|
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
|
||||||
|
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
|
||||||
|
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c h1:ncq/mPwQF4JjgDlrVEn3C11VoGHZN7m8qihwgMEtzYw=
|
||||||
|
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
|
||||||
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
|
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
|
||||||
github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=
|
github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=
|
||||||
github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
|
github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
|
||||||
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
|
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
|
||||||
|
github.com/schollz/progressbar/v3 v3.19.0 h1:Ea18xuIRQXLAUidVDox3AbwfUhD0/1IvohyTutOIFoc=
|
||||||
|
github.com/schollz/progressbar/v3 v3.19.0/go.mod h1:IsO3lpbaGuzh8zIMzgY3+J8l4C8GjO0Y9S69eFvNsec=
|
||||||
|
github.com/shirou/gopsutil/v3 v3.24.5 h1:i0t8kL+kQTvpAYToeuiVk3TgDeKOFioZO3Ztz/iZ9pI=
|
||||||
|
github.com/shirou/gopsutil/v3 v3.24.5/go.mod h1:bsoOS1aStSs9ErQ1WWfxllSeS1K5D+U30r2NfcubMVk=
|
||||||
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
|
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
|
||||||
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
|
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
|
||||||
|
github.com/spf13/afero v1.15.0 h1:b/YBCLWAJdFWJTN9cLhiXXcD7mzKn9Dm86dNnfyQw1I=
|
||||||
|
github.com/spf13/afero v1.15.0/go.mod h1:NC2ByUVxtQs4b3sIUphxK0NioZnmxgyCrfzeuq8lxMg=
|
||||||
github.com/spf13/cobra v1.10.1 h1:lJeBwCfmrnXthfAupyUTzJ/J4Nc1RsHC/mSRU2dll/s=
|
github.com/spf13/cobra v1.10.1 h1:lJeBwCfmrnXthfAupyUTzJ/J4Nc1RsHC/mSRU2dll/s=
|
||||||
github.com/spf13/cobra v1.10.1/go.mod h1:7SmJGaTHFVBY0jW4NXGluQoLvhqFQM+6XSKD+P4XaB0=
|
github.com/spf13/cobra v1.10.1/go.mod h1:7SmJGaTHFVBY0jW4NXGluQoLvhqFQM+6XSKD+P4XaB0=
|
||||||
github.com/spf13/pflag v1.0.9 h1:9exaQaMOCwffKiiiYk6/BndUBv+iRViNW+4lEMi0PvY=
|
github.com/spf13/pflag v1.0.9 h1:9exaQaMOCwffKiiiYk6/BndUBv+iRViNW+4lEMi0PvY=
|
||||||
@@ -179,13 +221,17 @@ github.com/spiffe/go-spiffe/v2 v2.5.0 h1:N2I01KCUkv1FAjZXJMwh95KK1ZIQLYbPfhaxw8W
|
|||||||
github.com/spiffe/go-spiffe/v2 v2.5.0/go.mod h1:P+NxobPc6wXhVtINNtFjNWGBTreew1GBUCwT2wPmb7g=
|
github.com/spiffe/go-spiffe/v2 v2.5.0/go.mod h1:P+NxobPc6wXhVtINNtFjNWGBTreew1GBUCwT2wPmb7g=
|
||||||
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
|
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
|
||||||
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
|
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
|
||||||
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
|
|
||||||
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
|
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
|
||||||
github.com/stretchr/testify v1.8.1 h1:w7B6lhMri9wdJUVmEZPGGhZzrYTPvgJArz7wNPgYKsk=
|
|
||||||
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
|
|
||||||
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
|
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
|
||||||
|
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
|
||||||
|
github.com/tklauser/go-sysconf v0.3.12 h1:0QaGUFOdQaIVdPgfITYzaTegZvdCjmYO52cSFAEVmqU=
|
||||||
|
github.com/tklauser/go-sysconf v0.3.12/go.mod h1:Ho14jnntGE1fpdOqQEEaiKRpvIavV0hSfmBq8nJbHYI=
|
||||||
|
github.com/tklauser/numcpus v0.6.1 h1:ng9scYS7az0Bk4OZLvrNXNSAO2Pxr1XXRAPyjhIx+Fk=
|
||||||
|
github.com/tklauser/numcpus v0.6.1/go.mod h1:1XfjsgE2zo8GVw7POkMbHENHzVg3GzmoZ9fESEdAacY=
|
||||||
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e h1:JVG44RsyaB9T2KIHavMF/ppJZNG9ZpyihvCd0w101no=
|
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e h1:JVG44RsyaB9T2KIHavMF/ppJZNG9ZpyihvCd0w101no=
|
||||||
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e/go.mod h1:RbqR21r5mrJuqunuUZ/Dhy/avygyECGrLceyNeo4LiM=
|
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e/go.mod h1:RbqR21r5mrJuqunuUZ/Dhy/avygyECGrLceyNeo4LiM=
|
||||||
|
github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0=
|
||||||
|
github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
|
||||||
github.com/zeebo/errs v1.4.0 h1:XNdoD/RRMKP7HD0UhJnIzUy74ISdGGxURlYG8HSWSfM=
|
github.com/zeebo/errs v1.4.0 h1:XNdoD/RRMKP7HD0UhJnIzUy74ISdGGxURlYG8HSWSfM=
|
||||||
github.com/zeebo/errs v1.4.0/go.mod h1:sgbWHsvVuTPHcqJJGQ1WhI5KbWlHYz+2+2C/LSEtCw4=
|
github.com/zeebo/errs v1.4.0/go.mod h1:sgbWHsvVuTPHcqJJGQ1WhI5KbWlHYz+2+2C/LSEtCw4=
|
||||||
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
|
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
|
||||||
@@ -198,6 +244,8 @@ go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 h1:F7Jx+6h
|
|||||||
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0/go.mod h1:UHB22Z8QsdRDrnAtX4PntOl36ajSxcdUMt1sF7Y6E7Q=
|
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0/go.mod h1:UHB22Z8QsdRDrnAtX4PntOl36ajSxcdUMt1sF7Y6E7Q=
|
||||||
go.opentelemetry.io/otel v1.37.0 h1:9zhNfelUvx0KBfu/gb+ZgeAfAgtWrfHJZcAqFC228wQ=
|
go.opentelemetry.io/otel v1.37.0 h1:9zhNfelUvx0KBfu/gb+ZgeAfAgtWrfHJZcAqFC228wQ=
|
||||||
go.opentelemetry.io/otel v1.37.0/go.mod h1:ehE/umFRLnuLa/vSccNq9oS1ErUlkkK71gMcN34UG8I=
|
go.opentelemetry.io/otel v1.37.0/go.mod h1:ehE/umFRLnuLa/vSccNq9oS1ErUlkkK71gMcN34UG8I=
|
||||||
|
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.36.0 h1:rixTyDGXFxRy1xzhKrotaHy3/KXdPhlWARrCgK+eqUY=
|
||||||
|
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.36.0/go.mod h1:dowW6UsM9MKbJq5JTz2AMVp3/5iW5I/TStsk8S+CfHw=
|
||||||
go.opentelemetry.io/otel/metric v1.37.0 h1:mvwbQS5m0tbmqML4NqK+e3aDiO02vsf/WgbsdpcPoZE=
|
go.opentelemetry.io/otel/metric v1.37.0 h1:mvwbQS5m0tbmqML4NqK+e3aDiO02vsf/WgbsdpcPoZE=
|
||||||
go.opentelemetry.io/otel/metric v1.37.0/go.mod h1:04wGrZurHYKOc+RKeye86GwKiTb9FKm1WHtO+4EVr2E=
|
go.opentelemetry.io/otel/metric v1.37.0/go.mod h1:04wGrZurHYKOc+RKeye86GwKiTb9FKm1WHtO+4EVr2E=
|
||||||
go.opentelemetry.io/otel/sdk v1.37.0 h1:ItB0QUqnjesGRvNcmAcU0LyvkVyGJ2xftD29bWdDvKI=
|
go.opentelemetry.io/otel/sdk v1.37.0 h1:ItB0QUqnjesGRvNcmAcU0LyvkVyGJ2xftD29bWdDvKI=
|
||||||
@@ -206,43 +254,35 @@ go.opentelemetry.io/otel/sdk/metric v1.37.0 h1:90lI228XrB9jCMuSdA0673aubgRobVZFh
|
|||||||
go.opentelemetry.io/otel/sdk/metric v1.37.0/go.mod h1:cNen4ZWfiD37l5NhS+Keb5RXVWZWpRE+9WyVCpbo5ps=
|
go.opentelemetry.io/otel/sdk/metric v1.37.0/go.mod h1:cNen4ZWfiD37l5NhS+Keb5RXVWZWpRE+9WyVCpbo5ps=
|
||||||
go.opentelemetry.io/otel/trace v1.37.0 h1:HLdcFNbRQBE2imdSEgm/kwqmQj1Or1l/7bW6mxVK7z4=
|
go.opentelemetry.io/otel/trace v1.37.0 h1:HLdcFNbRQBE2imdSEgm/kwqmQj1Or1l/7bW6mxVK7z4=
|
||||||
go.opentelemetry.io/otel/trace v1.37.0/go.mod h1:TlgrlQ+PtQO5XFerSPUYG0JSgGyryXewPGyayAWSBS0=
|
go.opentelemetry.io/otel/trace v1.37.0/go.mod h1:TlgrlQ+PtQO5XFerSPUYG0JSgGyryXewPGyayAWSBS0=
|
||||||
golang.org/x/crypto v0.37.0 h1:kJNSjF/Xp7kU0iB2Z+9viTPMW4EqqsrywMXLJOOsXSE=
|
|
||||||
golang.org/x/crypto v0.37.0/go.mod h1:vg+k43peMZ0pUMhYmVAWysMK35e6ioLh3wB8ZCAfbVc=
|
|
||||||
golang.org/x/crypto v0.41.0 h1:WKYxWedPGCTVVl5+WHSSrOBT0O8lx32+zxmHxijgXp4=
|
|
||||||
golang.org/x/crypto v0.41.0/go.mod h1:pO5AFd7FA68rFak7rOAGVuygIISepHftHnr8dr6+sUc=
|
|
||||||
golang.org/x/crypto v0.43.0 h1:dduJYIi3A3KOfdGOHX8AVZ/jGiyPa3IbBozJ5kNuE04=
|
golang.org/x/crypto v0.43.0 h1:dduJYIi3A3KOfdGOHX8AVZ/jGiyPa3IbBozJ5kNuE04=
|
||||||
golang.org/x/crypto v0.43.0/go.mod h1:BFbav4mRNlXJL4wNeejLpWxB7wMbc79PdRGhWKncxR0=
|
golang.org/x/crypto v0.43.0/go.mod h1:BFbav4mRNlXJL4wNeejLpWxB7wMbc79PdRGhWKncxR0=
|
||||||
golang.org/x/exp v0.0.0-20220909182711-5c715a9e8561 h1:MDc5xs78ZrZr3HMQugiXOAkSZtfTpbJLDr/lwfgO53E=
|
golang.org/x/exp v0.0.0-20220909182711-5c715a9e8561 h1:MDc5xs78ZrZr3HMQugiXOAkSZtfTpbJLDr/lwfgO53E=
|
||||||
golang.org/x/exp v0.0.0-20220909182711-5c715a9e8561/go.mod h1:cyybsKvd6eL0RnXn6p/Grxp8F5bW7iYuBgsNCOHpMYE=
|
golang.org/x/exp v0.0.0-20220909182711-5c715a9e8561/go.mod h1:cyybsKvd6eL0RnXn6p/Grxp8F5bW7iYuBgsNCOHpMYE=
|
||||||
golang.org/x/net v0.43.0 h1:lat02VYK2j4aLzMzecihNvTlJNQUq316m2Mr9rnM6YE=
|
|
||||||
golang.org/x/net v0.43.0/go.mod h1:vhO1fvI4dGsIjh73sWfUVjj3N7CA9WkKJNQm2svM6Jg=
|
|
||||||
golang.org/x/net v0.46.0 h1:giFlY12I07fugqwPuWJi68oOnpfqFnJIJzaIIm2JVV4=
|
golang.org/x/net v0.46.0 h1:giFlY12I07fugqwPuWJi68oOnpfqFnJIJzaIIm2JVV4=
|
||||||
golang.org/x/net v0.46.0/go.mod h1:Q9BGdFy1y4nkUwiLvT5qtyhAnEHgnQ/zd8PfU6nc210=
|
golang.org/x/net v0.46.0/go.mod h1:Q9BGdFy1y4nkUwiLvT5qtyhAnEHgnQ/zd8PfU6nc210=
|
||||||
golang.org/x/oauth2 v0.33.0 h1:4Q+qn+E5z8gPRJfmRy7C2gGG3T4jIprK6aSYgTXGRpo=
|
golang.org/x/oauth2 v0.33.0 h1:4Q+qn+E5z8gPRJfmRy7C2gGG3T4jIprK6aSYgTXGRpo=
|
||||||
golang.org/x/oauth2 v0.33.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
|
golang.org/x/oauth2 v0.33.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
|
||||||
golang.org/x/sync v0.13.0 h1:AauUjRAJ9OSnvULf/ARrrVywoJDy0YS2AwQ98I37610=
|
|
||||||
golang.org/x/sync v0.13.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
|
|
||||||
golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw=
|
|
||||||
golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
|
|
||||||
golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
|
golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
|
||||||
golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
|
golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
|
||||||
|
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||||
|
golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||||
golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||||
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||||
|
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||||
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||||
golang.org/x/sys v0.36.0 h1:KVRy2GtZBrk1cBYA7MKu5bEZFxQk4NIDV6RLVcC8o0k=
|
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||||
golang.org/x/sys v0.36.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
|
golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||||
golang.org/x/sys v0.37.0 h1:fdNQudmxPjkdUTPnLn5mdQv7Zwvbvpaxqs831goi9kQ=
|
|
||||||
golang.org/x/sys v0.37.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
|
|
||||||
golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
|
golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
|
||||||
golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
|
golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
|
||||||
golang.org/x/text v0.24.0 h1:dd5Bzh4yt5KYA8f9CJHCP4FB4D51c2c6JvN37xJJkJ0=
|
golang.org/x/term v0.36.0 h1:zMPR+aF8gfksFprF/Nc/rd1wRS1EI6nDBGyWAvDzx2Q=
|
||||||
golang.org/x/text v0.24.0/go.mod h1:L8rBsPeo2pSS+xqN0d5u2ikmjtmoJbDBT1b7nHvFCdU=
|
golang.org/x/term v0.36.0/go.mod h1:Qu394IJq6V6dCBRgwqshf3mPF85AqzYEzofzRdZkWss=
|
||||||
golang.org/x/text v0.28.0 h1:rhazDwis8INMIwQ4tpjLDzUhx6RlXqZNPEM0huQojng=
|
|
||||||
golang.org/x/text v0.28.0/go.mod h1:U8nCwOR8jO/marOQ0QbDiOngZVEBB7MAiitBuMjXiNU=
|
|
||||||
golang.org/x/text v0.30.0 h1:yznKA/E9zq54KzlzBEAWn1NXSQ8DIp/NYMy88xJjl4k=
|
golang.org/x/text v0.30.0 h1:yznKA/E9zq54KzlzBEAWn1NXSQ8DIp/NYMy88xJjl4k=
|
||||||
golang.org/x/text v0.30.0/go.mod h1:yDdHFIX9t+tORqspjENWgzaCVXgk0yYnYuSZ8UzzBVM=
|
golang.org/x/text v0.30.0/go.mod h1:yDdHFIX9t+tORqspjENWgzaCVXgk0yYnYuSZ8UzzBVM=
|
||||||
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
|
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
|
||||||
golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
|
golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
|
||||||
|
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||||
|
gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk=
|
||||||
|
gonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E=
|
||||||
google.golang.org/api v0.256.0 h1:u6Khm8+F9sxbCTYNoBHg6/Hwv0N/i+V94MvkOSor6oI=
|
google.golang.org/api v0.256.0 h1:u6Khm8+F9sxbCTYNoBHg6/Hwv0N/i+V94MvkOSor6oI=
|
||||||
google.golang.org/api v0.256.0/go.mod h1:KIgPhksXADEKJlnEoRa9qAII4rXcy40vfI8HRqcU964=
|
google.golang.org/api v0.256.0/go.mod h1:KIgPhksXADEKJlnEoRa9qAII4rXcy40vfI8HRqcU964=
|
||||||
google.golang.org/genproto v0.0.0-20250603155806-513f23925822 h1:rHWScKit0gvAPuOnu87KpaYtjK5zBMLcULh7gxkCXu4=
|
google.golang.org/genproto v0.0.0-20250603155806-513f23925822 h1:rHWScKit0gvAPuOnu87KpaYtjK5zBMLcULh7gxkCXu4=
|
||||||
|
|||||||
@@ -28,6 +28,12 @@ import (
 	"dbbackup/internal/swap"
 )
 
+// ProgressCallback is called with byte-level progress updates during backup operations
+type ProgressCallback func(current, total int64, description string)
+
+// DatabaseProgressCallback is called with database count progress during cluster backup
+type DatabaseProgressCallback func(done, total int, dbName string)
+
 // Engine handles backup operations
 type Engine struct {
 	cfg *config.Config
@@ -36,6 +42,8 @@ type Engine struct {
 	progress progress.Indicator
 	detailedReporter *progress.DetailedReporter
 	silent bool // Silent mode for TUI
+	progressCallback ProgressCallback
+	dbProgressCallback DatabaseProgressCallback
 }
 
 // New creates a new backup engine
@@ -86,6 +94,30 @@ func NewSilent(cfg *config.Config, log logger.Logger, db database.Database, prog
 	}
 }
+
+// SetProgressCallback sets a callback for detailed progress reporting (for TUI mode)
+func (e *Engine) SetProgressCallback(cb ProgressCallback) {
+	e.progressCallback = cb
+}
+
+// SetDatabaseProgressCallback sets a callback for database count progress during cluster backup
+func (e *Engine) SetDatabaseProgressCallback(cb DatabaseProgressCallback) {
+	e.dbProgressCallback = cb
+}
+
+// reportProgress reports progress to the callback if set
+func (e *Engine) reportProgress(current, total int64, description string) {
+	if e.progressCallback != nil {
+		e.progressCallback(current, total, description)
+	}
+}
+
+// reportDatabaseProgress reports database count progress to the callback if set
+func (e *Engine) reportDatabaseProgress(done, total int, dbName string) {
+	if e.dbProgressCallback != nil {
+		e.dbProgressCallback(done, total, dbName)
+	}
+}
+
 // loggerAdapter adapts our logger to the progress.Logger interface
 type loggerAdapter struct {
 	logger logger.Logger
@@ -465,6 +497,8 @@ func (e *Engine) BackupCluster(ctx context.Context) error {
|
|||||||
estimator.UpdateProgress(idx)
|
estimator.UpdateProgress(idx)
|
||||||
e.printf(" [%d/%d] Backing up database: %s\n", idx+1, len(databases), name)
|
e.printf(" [%d/%d] Backing up database: %s\n", idx+1, len(databases), name)
|
||||||
quietProgress.Update(fmt.Sprintf("Backing up database %d/%d: %s", idx+1, len(databases), name))
|
quietProgress.Update(fmt.Sprintf("Backing up database %d/%d: %s", idx+1, len(databases), name))
|
||||||
|
// Report database progress to TUI callback
|
||||||
|
e.reportDatabaseProgress(idx+1, len(databases), name)
|
||||||
mu.Unlock()
|
mu.Unlock()
|
||||||
|
|
||||||
// Check database size and warn if very large
|
// Check database size and warn if very large
|
||||||
@@ -1242,23 +1276,29 @@ func (e *Engine) uploadToCloud(ctx context.Context, backupFile string, tracker *
|
|||||||
filename := filepath.Base(backupFile)
|
filename := filepath.Base(backupFile)
|
||||||
e.log.Info("Uploading backup to cloud", "file", filename, "size", cloud.FormatSize(info.Size()))
|
e.log.Info("Uploading backup to cloud", "file", filename, "size", cloud.FormatSize(info.Size()))
|
||||||
|
|
||||||
// Progress callback
|
// Create schollz progressbar for visual upload progress
|
||||||
var lastPercent int
|
bar := progress.NewSchollzBar(info.Size(), fmt.Sprintf("Uploading %s", filename))
|
||||||
|
|
||||||
|
// Progress callback with schollz progressbar
|
||||||
|
var lastBytes int64
|
||||||
progressCallback := func(transferred, total int64) {
|
progressCallback := func(transferred, total int64) {
|
||||||
percent := int(float64(transferred) / float64(total) * 100)
|
delta := transferred - lastBytes
|
||||||
if percent != lastPercent && percent%10 == 0 {
|
if delta > 0 {
|
||||||
e.log.Debug("Upload progress", "percent", percent, "transferred", cloud.FormatSize(transferred), "total", cloud.FormatSize(total))
|
_ = bar.Add64(delta)
|
||||||
lastPercent = percent
|
|
||||||
}
|
}
|
||||||
|
lastBytes = transferred
|
||||||
}
|
}
|
||||||
|
|
||||||
// Upload to cloud
|
// Upload to cloud
|
||||||
err = backend.Upload(ctx, backupFile, filename, progressCallback)
|
err = backend.Upload(ctx, backupFile, filename, progressCallback)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
|
bar.Fail("Upload failed")
|
||||||
uploadStep.Fail(fmt.Errorf("cloud upload failed: %w", err))
|
uploadStep.Fail(fmt.Errorf("cloud upload failed: %w", err))
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
|
|
||||||
|
_ = bar.Finish()
|
||||||
|
|
||||||
// Also upload metadata file
|
// Also upload metadata file
|
||||||
metaFile := backupFile + ".meta.json"
|
metaFile := backupFile + ".meta.json"
|
||||||
if _, err := os.Stat(metaFile); err == nil {
|
if _, err := os.Stat(metaFile); err == nil {
|
||||||
|
|||||||
@@ -68,8 +68,8 @@ func ClassifyError(errorMsg string) *ErrorClassification {
 			Type:     "critical",
 			Category: "locks",
 			Message:  errorMsg,
-			Hint:     "Lock table exhausted - typically caused by large objects in parallel restore",
-			Action:   "Increase max_locks_per_transaction in postgresql.conf to 512 or higher",
+			Hint:     "Lock table exhausted - typically caused by large objects (BLOBs) during restore",
+			Action:   "Option 1: Increase max_locks_per_transaction to 1024+ in postgresql.conf (requires restart). Option 2: Update dbbackup and retry - phased restore now auto-enabled for BLOB databases",
 			Severity: 2,
 		}
 	case "permission_denied":
@@ -142,8 +142,8 @@ func ClassifyError(errorMsg string) *ErrorClassification {
 			Type:     "critical",
 			Category: "locks",
 			Message:  errorMsg,
-			Hint:     "Lock table exhausted - typically caused by large objects in parallel restore",
-			Action:   "Increase max_locks_per_transaction in postgresql.conf to 512 or higher",
+			Hint:     "Lock table exhausted - typically caused by large objects (BLOBs) during restore",
+			Action:   "Option 1: Increase max_locks_per_transaction to 1024+ in postgresql.conf (requires restart). Option 2: Update dbbackup and retry - phased restore now auto-enabled for BLOB databases",
 			Severity: 2,
 		}
 	}
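The hunks above only adjust the Hint/Action text that `ClassifyError` attaches to lock-table exhaustion; the classification itself works by substring matching over the raw error message. A stripped-down, hypothetical sketch of that approach (`classify` is an invented stand-in, not the real function):

```go
package main

import (
	"fmt"
	"strings"
)

// classify maps a raw database error message to a coarse category via
// case-insensitive substring matching. Simplified sketch; the actual
// ClassifyError returns a richer struct (Type, Hint, Action, Severity).
func classify(errorMsg string) string {
	msg := strings.ToLower(errorMsg)
	switch {
	case strings.Contains(msg, "out of shared memory"),
		strings.Contains(msg, "max_locks_per_transaction"):
		return "locks"
	case strings.Contains(msg, "permission denied"):
		return "permissions"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(classify("ERROR: out of shared memory HINT: You might need to increase max_locks_per_transaction."))
	fmt.Println(classify("could not open file: Permission denied"))
}
```

Substring matching is brittle across database versions and locales, which is why the real classifier pairs each pattern with an actionable hint rather than trying to be exhaustive.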
@@ -151,8 +151,14 @@ func (a *AzureBackend) Upload(ctx context.Context, localPath, remotePath string,
 	return a.uploadSimple(ctx, file, blobName, fileSize, progress)
 }
 
-// uploadSimple uploads a file using simple upload (single request)
+// uploadSimple uploads a file using simple upload (single request) with retry
 func (a *AzureBackend) uploadSimple(ctx context.Context, file *os.File, blobName string, fileSize int64, progress ProgressCallback) error {
+	return RetryOperationWithNotify(ctx, DefaultRetryConfig(), func() error {
+		// Reset file position for retry
+		if _, err := file.Seek(0, 0); err != nil {
+			return fmt.Errorf("failed to reset file position: %w", err)
+		}
+
 	blockBlobClient := a.client.ServiceClient().NewContainerClient(a.containerName).NewBlockBlobClient(blobName)
 
 	// Wrap reader with progress tracking
@@ -182,6 +188,9 @@ func (a *AzureBackend) uploadSimple(ctx context.Context, file *os.File, blobName
 	}
 
 	return nil
+	}, func(err error, duration time.Duration) {
+		fmt.Printf("[Azure] Upload retry in %v: %v\n", duration, err)
+	})
 }
 
 // uploadBlocks uploads a file using block blob staging (for large files)
@@ -251,7 +260,7 @@ func (a *AzureBackend) uploadBlocks(ctx context.Context, file *os.File, blobName
 	return nil
 }
 
-// Download downloads a file from Azure Blob Storage
+// Download downloads a file from Azure Blob Storage with retry
 func (a *AzureBackend) Download(ctx context.Context, remotePath, localPath string, progress ProgressCallback) error {
 	blobName := strings.TrimPrefix(remotePath, "/")
 	blockBlobClient := a.client.ServiceClient().NewContainerClient(a.containerName).NewBlockBlobClient(blobName)
@@ -264,6 +273,7 @@ func (a *AzureBackend) Download(ctx context.Context, remotePath, localPath strin
 
 	fileSize := *props.ContentLength
 
+	return RetryOperationWithNotify(ctx, DefaultRetryConfig(), func() error {
 	// Download blob
 	resp, err := blockBlobClient.DownloadStream(ctx, nil)
 	if err != nil {
@@ -271,7 +281,7 @@ func (a *AzureBackend) Download(ctx context.Context, remotePath, localPath strin
 	}
 	defer resp.Body.Close()
 
-	// Create local file
+	// Create/truncate local file
 	file, err := os.Create(localPath)
 	if err != nil {
 		return fmt.Errorf("failed to create file: %w", err)
@@ -288,6 +298,9 @@ func (a *AzureBackend) Download(ctx context.Context, remotePath, localPath strin
 	}
 
 	return nil
+	}, func(err error, duration time.Duration) {
+		fmt.Printf("[Azure] Download retry in %v: %v\n", duration, err)
+	})
 }
 
 // Delete deletes a file from Azure Blob Storage

@@ -89,7 +89,7 @@ func (g *GCSBackend) Name() string {
 	return "gcs"
 }
 
-// Upload uploads a file to Google Cloud Storage
+// Upload uploads a file to Google Cloud Storage with retry
 func (g *GCSBackend) Upload(ctx context.Context, localPath, remotePath string, progress ProgressCallback) error {
 	file, err := os.Open(localPath)
 	if err != nil {
@@ -106,6 +106,12 @@ func (g *GCSBackend) Upload(ctx context.Context, localPath, remotePath string, p
 	// Remove leading slash from remote path
 	objectName := strings.TrimPrefix(remotePath, "/")
 
+	return RetryOperationWithNotify(ctx, DefaultRetryConfig(), func() error {
+		// Reset file position for retry
+		if _, err := file.Seek(0, 0); err != nil {
+			return fmt.Errorf("failed to reset file position: %w", err)
+		}
+
 	bucket := g.client.Bucket(g.bucketName)
 	object := bucket.Object(objectName)
 
@@ -142,9 +148,12 @@ func (g *GCSBackend) Upload(ctx context.Context, localPath, remotePath string, p
 	}
 
 	return nil
+	}, func(err error, duration time.Duration) {
+		fmt.Printf("[GCS] Upload retry in %v: %v\n", duration, err)
+	})
 }
 
-// Download downloads a file from Google Cloud Storage
+// Download downloads a file from Google Cloud Storage with retry
 func (g *GCSBackend) Download(ctx context.Context, remotePath, localPath string, progress ProgressCallback) error {
 	objectName := strings.TrimPrefix(remotePath, "/")
 
@@ -159,6 +168,7 @@ func (g *GCSBackend) Download(ctx context.Context, remotePath, localPath string,
 
 	fileSize := attrs.Size
 
+	return RetryOperationWithNotify(ctx, DefaultRetryConfig(), func() error {
 	// Create reader
 	reader, err := object.NewReader(ctx)
 	if err != nil {
@@ -166,7 +176,7 @@ func (g *GCSBackend) Download(ctx context.Context, remotePath, localPath string,
 	}
 	defer reader.Close()
 
-	// Create local file
+	// Create/truncate local file
 	file, err := os.Create(localPath)
 	if err != nil {
 		return fmt.Errorf("failed to create file: %w", err)
@@ -183,6 +193,9 @@ func (g *GCSBackend) Download(ctx context.Context, remotePath, localPath string,
 	}
 
 	return nil
+	}, func(err error, duration time.Duration) {
+		fmt.Printf("[GCS] Download retry in %v: %v\n", duration, err)
+	})
 }
 
 // Delete deletes a file from Google Cloud Storage

257	internal/cloud/retry.go	Normal file
@@ -0,0 +1,257 @@
+package cloud
+
+import (
+	"context"
+	"fmt"
+	"net"
+	"strings"
+	"time"
+
+	"github.com/cenkalti/backoff/v4"
+)
+
+// RetryConfig configures retry behavior
+type RetryConfig struct {
+	MaxRetries      int           // Maximum number of retries (0 = unlimited)
+	InitialInterval time.Duration // Initial backoff interval
+	MaxInterval     time.Duration // Maximum backoff interval
+	MaxElapsedTime  time.Duration // Maximum total time for retries
+	Multiplier      float64       // Backoff multiplier
+}
+
+// DefaultRetryConfig returns sensible defaults for cloud operations
+func DefaultRetryConfig() *RetryConfig {
+	return &RetryConfig{
+		MaxRetries:      5,
+		InitialInterval: 500 * time.Millisecond,
+		MaxInterval:     30 * time.Second,
+		MaxElapsedTime:  5 * time.Minute,
+		Multiplier:      2.0,
+	}
+}
+
+// AggressiveRetryConfig returns config for critical operations that need more retries
+func AggressiveRetryConfig() *RetryConfig {
+	return &RetryConfig{
+		MaxRetries:      10,
+		InitialInterval: 1 * time.Second,
+		MaxInterval:     60 * time.Second,
+		MaxElapsedTime:  15 * time.Minute,
+		Multiplier:      1.5,
+	}
+}
+
+// QuickRetryConfig returns config for operations that should fail fast
+func QuickRetryConfig() *RetryConfig {
+	return &RetryConfig{
+		MaxRetries:      3,
+		InitialInterval: 100 * time.Millisecond,
+		MaxInterval:     5 * time.Second,
+		MaxElapsedTime:  30 * time.Second,
+		Multiplier:      2.0,
+	}
+}
+
+// RetryOperation executes an operation with exponential backoff retry
+func RetryOperation(ctx context.Context, cfg *RetryConfig, operation func() error) error {
+	if cfg == nil {
+		cfg = DefaultRetryConfig()
+	}
+
+	// Create exponential backoff
+	expBackoff := backoff.NewExponentialBackOff()
+	expBackoff.InitialInterval = cfg.InitialInterval
+	expBackoff.MaxInterval = cfg.MaxInterval
+	expBackoff.MaxElapsedTime = cfg.MaxElapsedTime
+	expBackoff.Multiplier = cfg.Multiplier
+	expBackoff.Reset()
+
+	// Wrap with max retries if specified
+	var b backoff.BackOff = expBackoff
+	if cfg.MaxRetries > 0 {
+		b = backoff.WithMaxRetries(expBackoff, uint64(cfg.MaxRetries))
+	}
+
+	// Add context support
+	b = backoff.WithContext(b, ctx)
+
+	// Track attempts for logging
+	attempt := 0
+
+	// Wrap operation to handle permanent vs retryable errors
+	wrappedOp := func() error {
+		attempt++
+		err := operation()
+		if err == nil {
+			return nil
+		}
+
+		// Check if error is permanent (should not retry)
+		if IsPermanentError(err) {
+			return backoff.Permanent(err)
+		}
+
+		return err
+	}
+
+	return backoff.Retry(wrappedOp, b)
+}
+
+// RetryOperationWithNotify executes an operation with retry and calls notify on each retry
+func RetryOperationWithNotify(ctx context.Context, cfg *RetryConfig, operation func() error, notify func(err error, duration time.Duration)) error {
+	if cfg == nil {
+		cfg = DefaultRetryConfig()
+	}
+
+	// Create exponential backoff
+	expBackoff := backoff.NewExponentialBackOff()
+	expBackoff.InitialInterval = cfg.InitialInterval
+	expBackoff.MaxInterval = cfg.MaxInterval
+	expBackoff.MaxElapsedTime = cfg.MaxElapsedTime
+	expBackoff.Multiplier = cfg.Multiplier
+	expBackoff.Reset()
+
+	// Wrap with max retries if specified
+	var b backoff.BackOff = expBackoff
+	if cfg.MaxRetries > 0 {
+		b = backoff.WithMaxRetries(expBackoff, uint64(cfg.MaxRetries))
+	}
+
+	// Add context support
+	b = backoff.WithContext(b, ctx)
+
+	// Wrap operation to handle permanent vs retryable errors
+	wrappedOp := func() error {
+		err := operation()
+		if err == nil {
+			return nil
+		}
+
+		// Check if error is permanent (should not retry)
+		if IsPermanentError(err) {
+			return backoff.Permanent(err)
+		}
+
+		return err
+	}
+
+	return backoff.RetryNotify(wrappedOp, b, notify)
+}
+
+// IsPermanentError returns true if the error should not be retried
+func IsPermanentError(err error) bool {
+	if err == nil {
+		return false
+	}
+
+	errStr := strings.ToLower(err.Error())
+
+	// Authentication/authorization errors - don't retry
+	permanentPatterns := []string{
+		"access denied",
+		"forbidden",
+		"unauthorized",
+		"invalid credentials",
+		"invalid access key",
+		"invalid secret",
+		"no such bucket",
+		"bucket not found",
+		"container not found",
+		"nosuchbucket",
+		"nosuchkey",
+		"invalid argument",
+		"malformed",
+		"invalid request",
+		"permission denied",
+		"access control",
+		"policy",
+	}
+
+	for _, pattern := range permanentPatterns {
+		if strings.Contains(errStr, pattern) {
+			return true
+		}
+	}
+
+	return false
+}
+
+// IsRetryableError returns true if the error is transient and should be retried
+func IsRetryableError(err error) bool {
+	if err == nil {
+		return false
+	}
+
+	// Network errors are typically retryable
+	var netErr net.Error
+	if ok := isNetError(err, &netErr); ok {
+		return netErr.Timeout() || netErr.Temporary()
+	}
+
+	errStr := strings.ToLower(err.Error())
+
+	// Transient errors - should retry
+	retryablePatterns := []string{
+		"timeout",
+		"connection reset",
+		"connection refused",
+		"connection closed",
+		"eof",
+		"broken pipe",
+		"temporary failure",
+		"service unavailable",
+		"internal server error",
+		"bad gateway",
+		"gateway timeout",
+		"too many requests",
+		"rate limit",
+		"throttl",
+		"slowdown",
+		"try again",
+		"retry",
+	}
+
+	for _, pattern := range retryablePatterns {
+		if strings.Contains(errStr, pattern) {
+			return true
+		}
+	}
+
+	return false
+}
+
+// isNetError checks if err wraps a net.Error
+func isNetError(err error, target *net.Error) bool {
+	for err != nil {
+		if ne, ok := err.(net.Error); ok {
+			*target = ne
+			return true
+		}
+		// Try to unwrap
+		if unwrapper, ok := err.(interface{ Unwrap() error }); ok {
+			err = unwrapper.Unwrap()
+		} else {
+			break
+		}
+	}
+	return false
+}
+
+// WithRetry is a helper that wraps a function with default retry logic
+func WithRetry(ctx context.Context, operationName string, fn func() error) error {
+	notify := func(err error, duration time.Duration) {
+		// Log retry attempts (caller can provide their own logger if needed)
+		fmt.Printf("[RETRY] %s failed, retrying in %v: %v\n", operationName, duration, err)
+	}
+
+	return RetryOperationWithNotify(ctx, DefaultRetryConfig(), fn, notify)
+}
+
+// WithRetryConfig is a helper that wraps a function with custom retry config
+func WithRetryConfig(ctx context.Context, cfg *RetryConfig, operationName string, fn func() error) error {
+	notify := func(err error, duration time.Duration) {
+		fmt.Printf("[RETRY] %s failed, retrying in %v: %v\n", operationName, duration, err)
+	}
+
+	return RetryOperationWithNotify(ctx, cfg, fn, notify)
+}
@@ -7,6 +7,7 @@ import (
 	"os"
 	"path/filepath"
 	"strings"
+	"time"
 
 	"github.com/aws/aws-sdk-go-v2/aws"
 	"github.com/aws/aws-sdk-go-v2/config"
@@ -123,8 +124,14 @@ func (s *S3Backend) Upload(ctx context.Context, localPath, remotePath string, pr
 	return s.uploadSimple(ctx, file, key, fileSize, progress)
 }
 
-// uploadSimple performs a simple single-part upload
+// uploadSimple performs a simple single-part upload with retry
 func (s *S3Backend) uploadSimple(ctx context.Context, file *os.File, key string, fileSize int64, progress ProgressCallback) error {
+	return RetryOperationWithNotify(ctx, DefaultRetryConfig(), func() error {
+		// Reset file position for retry
+		if _, err := file.Seek(0, 0); err != nil {
+			return fmt.Errorf("failed to reset file position: %w", err)
+		}
+
 	// Create progress reader
 	var reader io.Reader = file
 	if progress != nil {
@@ -143,10 +150,19 @@ func (s *S3Backend) uploadSimple(ctx context.Context, file *os.File, key string,
 	}
 
 	return nil
+	}, func(err error, duration time.Duration) {
+		fmt.Printf("[S3] Upload retry in %v: %v\n", duration, err)
+	})
 }
 
-// uploadMultipart performs a multipart upload for large files
+// uploadMultipart performs a multipart upload for large files with retry
 func (s *S3Backend) uploadMultipart(ctx context.Context, file *os.File, key string, fileSize int64, progress ProgressCallback) error {
+	return RetryOperationWithNotify(ctx, AggressiveRetryConfig(), func() error {
+		// Reset file position for retry
+		if _, err := file.Seek(0, 0); err != nil {
+			return fmt.Errorf("failed to reset file position: %w", err)
+		}
+
 	// Create uploader with custom options
 	uploader := manager.NewUploader(s.client, func(u *manager.Uploader) {
 		// Part size: 10MB
@@ -177,9 +193,12 @@ func (s *S3Backend) uploadMultipart(ctx context.Context, file *os.File, key stri
 	}
 
 	return nil
+	}, func(err error, duration time.Duration) {
+		fmt.Printf("[S3] Multipart upload retry in %v: %v\n", duration, err)
+	})
 }
 
-// Download downloads a file from S3
+// Download downloads a file from S3 with retry
 func (s *S3Backend) Download(ctx context.Context, remotePath, localPath string, progress ProgressCallback) error {
 	// Build S3 key
 	key := s.buildKey(remotePath)
@@ -190,6 +209,12 @@ func (s *S3Backend) Download(ctx context.Context, remotePath, localPath string,
 		return fmt.Errorf("failed to get object size: %w", err)
 	}
 
+	// Create directory for local file
+	if err := os.MkdirAll(filepath.Dir(localPath), 0755); err != nil {
+		return fmt.Errorf("failed to create directory: %w", err)
+	}
+
+	return RetryOperationWithNotify(ctx, DefaultRetryConfig(), func() error {
 	// Download from S3
 	result, err := s.client.GetObject(ctx, &s3.GetObjectInput{
 		Bucket: aws.String(s.bucket),
@@ -200,11 +225,7 @@ func (s *S3Backend) Download(ctx context.Context, remotePath, localPath string,
 	}
 	defer result.Body.Close()
 
-	// Create local file
-	if err := os.MkdirAll(filepath.Dir(localPath), 0755); err != nil {
-		return fmt.Errorf("failed to create directory: %w", err)
-	}
-
+	// Create/truncate local file
 	outFile, err := os.Create(localPath)
 	if err != nil {
 		return fmt.Errorf("failed to create local file: %w", err)
@@ -223,6 +244,9 @@ func (s *S3Backend) Download(ctx context.Context, remotePath, localPath string,
 	}
 
 	return nil
+	}, func(err error, duration time.Duration) {
+		fmt.Printf("[S3] Download retry in %v: %v\n", duration, err)
+	})
 }
 
 // List lists all backup files in S3

223	internal/fs/fs.go	Normal file
@@ -0,0 +1,223 @@
+// Package fs provides filesystem abstraction using spf13/afero for testability.
+// It allows swapping the real filesystem with an in-memory mock for unit tests.
+package fs
+
+import (
+	"io"
+	"os"
+	"path/filepath"
+	"time"
+
+	"github.com/spf13/afero"
+)
+
+// FS is the global filesystem interface used throughout the application.
+// By default, it uses the real OS filesystem.
+// For testing, use SetFS(afero.NewMemMapFs()) to use an in-memory filesystem.
+var FS afero.Fs = afero.NewOsFs()
+
+// SetFS sets the global filesystem (useful for testing)
+func SetFS(fs afero.Fs) {
+	FS = fs
+}
+
+// ResetFS resets to the real OS filesystem
+func ResetFS() {
+	FS = afero.NewOsFs()
+}
+
+// NewMemMapFs creates a new in-memory filesystem for testing
+func NewMemMapFs() afero.Fs {
+	return afero.NewMemMapFs()
+}
+
+// NewReadOnlyFs wraps a filesystem to make it read-only
+func NewReadOnlyFs(base afero.Fs) afero.Fs {
+	return afero.NewReadOnlyFs(base)
+}
+
+// NewBasePathFs creates a filesystem rooted at a specific path
+func NewBasePathFs(base afero.Fs, path string) afero.Fs {
+	return afero.NewBasePathFs(base, path)
+}
+
+// --- File Operations (use global FS) ---
+
+// Create creates a file
+func Create(name string) (afero.File, error) {
+	return FS.Create(name)
+}
+
+// Open opens a file for reading
+func Open(name string) (afero.File, error) {
+	return FS.Open(name)
+}
+
+// OpenFile opens a file with specified flags and permissions
+func OpenFile(name string, flag int, perm os.FileMode) (afero.File, error) {
+	return FS.OpenFile(name, flag, perm)
+}
+
+// Remove removes a file or empty directory
+func Remove(name string) error {
+	return FS.Remove(name)
+}
+
+// RemoveAll removes a path and any children it contains
+func RemoveAll(path string) error {
+	return FS.RemoveAll(path)
+}
+
+// Rename renames (moves) a file
+func Rename(oldname, newname string) error {
+	return FS.Rename(oldname, newname)
+}
+
+// Stat returns file info
+func Stat(name string) (os.FileInfo, error) {
+	return FS.Stat(name)
+}
+
+// Chmod changes file mode
+func Chmod(name string, mode os.FileMode) error {
+	return FS.Chmod(name, mode)
+}
+
+// Chown changes file ownership (may not work on all filesystems)
+func Chown(name string, uid, gid int) error {
+	return FS.Chown(name, uid, gid)
+}
+
+// Chtimes changes file access and modification times
+func Chtimes(name string, atime, mtime time.Time) error {
+	return FS.Chtimes(name, atime, mtime)
+}
+
+// --- Directory Operations ---
+
+// Mkdir creates a directory
+func Mkdir(name string, perm os.FileMode) error {
+	return FS.Mkdir(name, perm)
+}
+
+// MkdirAll creates a directory and all parents
+func MkdirAll(path string, perm os.FileMode) error {
+	return FS.MkdirAll(path, perm)
+}
+
+// ReadDir reads a directory
+func ReadDir(dirname string) ([]os.FileInfo, error) {
+	return afero.ReadDir(FS, dirname)
+}
+
+// --- File Content Operations ---
+
+// ReadFile reads an entire file
+func ReadFile(filename string) ([]byte, error) {
+	return afero.ReadFile(FS, filename)
+}
+
+// WriteFile writes data to a file
+func WriteFile(filename string, data []byte, perm os.FileMode) error {
+	return afero.WriteFile(FS, filename, data, perm)
+}
+
+// --- Existence Checks ---
+
+// Exists checks if a file or directory exists
+func Exists(path string) (bool, error) {
+	return afero.Exists(FS, path)
+}
+
+// DirExists checks if a directory exists
+func DirExists(path string) (bool, error) {
+	return afero.DirExists(FS, path)
+}
+
+// IsDir checks if path is a directory
+func IsDir(path string) (bool, error) {
+	return afero.IsDir(FS, path)
+}
+
+// IsEmpty checks if a directory is empty
+func IsEmpty(path string) (bool, error) {
+	return afero.IsEmpty(FS, path)
+}
+
+// --- Utility Functions ---
+
+// Walk walks a directory tree
||||||
|
func Walk(root string, walkFn filepath.WalkFunc) error {
|
||||||
|
return afero.Walk(FS, root, walkFn)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Glob returns the names of all files matching pattern
|
||||||
|
func Glob(pattern string) ([]string, error) {
|
||||||
|
return afero.Glob(FS, pattern)
|
||||||
|
}
|
||||||
|
|
||||||
|
// TempDir creates a temporary directory
|
||||||
|
func TempDir(dir, prefix string) (string, error) {
|
||||||
|
return afero.TempDir(FS, dir, prefix)
|
||||||
|
}
|
||||||
|
|
||||||
|
// TempFile creates a temporary file
|
||||||
|
func TempFile(dir, pattern string) (afero.File, error) {
|
||||||
|
return afero.TempFile(FS, dir, pattern)
|
||||||
|
}
|
||||||
|
|
||||||
|
// CopyFile copies a file from src to dst
|
||||||
|
func CopyFile(src, dst string) error {
|
||||||
|
srcFile, err := FS.Open(src)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
defer srcFile.Close()
|
||||||
|
|
||||||
|
srcInfo, err := srcFile.Stat()
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
dstFile, err := FS.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, srcInfo.Mode())
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
defer dstFile.Close()
|
||||||
|
|
||||||
|
_, err = io.Copy(dstFile, srcFile)
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
// FileSize returns the size of a file
|
||||||
|
func FileSize(path string) (int64, error) {
|
||||||
|
info, err := FS.Stat(path)
|
||||||
|
if err != nil {
|
||||||
|
return 0, err
|
||||||
|
}
|
||||||
|
return info.Size(), nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// --- Testing Helpers ---
|
||||||
|
|
||||||
|
// WithMemFs executes a function with an in-memory filesystem, then restores the original
|
||||||
|
func WithMemFs(fn func(fs afero.Fs)) {
|
||||||
|
original := FS
|
||||||
|
memFs := afero.NewMemMapFs()
|
||||||
|
FS = memFs
|
||||||
|
defer func() { FS = original }()
|
||||||
|
fn(memFs)
|
||||||
|
}
|
||||||
|
|
||||||
|
// SetupTestDir creates a test directory structure in-memory
|
||||||
|
func SetupTestDir(files map[string]string) afero.Fs {
|
||||||
|
memFs := afero.NewMemMapFs()
|
||||||
|
for path, content := range files {
|
||||||
|
dir := filepath.Dir(path)
|
||||||
|
if dir != "." && dir != "/" {
|
||||||
|
_ = memFs.MkdirAll(dir, 0755)
|
||||||
|
}
|
||||||
|
_ = afero.WriteFile(memFs, path, []byte(content), 0644)
|
||||||
|
}
|
||||||
|
return memFs
|
||||||
|
}
|
||||||
191 internal/fs/fs_test.go Normal file
@@ -0,0 +1,191 @@
```go
package fs

import (
	"os"
	"testing"

	"github.com/spf13/afero"
)

func TestMemMapFs(t *testing.T) {
	// Use in-memory filesystem for testing
	WithMemFs(func(memFs afero.Fs) {
		// Create a file
		err := WriteFile("/test/file.txt", []byte("hello world"), 0644)
		if err != nil {
			t.Fatalf("WriteFile failed: %v", err)
		}

		// Read it back
		content, err := ReadFile("/test/file.txt")
		if err != nil {
			t.Fatalf("ReadFile failed: %v", err)
		}

		if string(content) != "hello world" {
			t.Errorf("expected 'hello world', got '%s'", string(content))
		}

		// Check existence
		exists, err := Exists("/test/file.txt")
		if err != nil {
			t.Fatalf("Exists failed: %v", err)
		}
		if !exists {
			t.Error("file should exist")
		}

		// Check non-existent file
		exists, err = Exists("/nonexistent.txt")
		if err != nil {
			t.Fatalf("Exists failed: %v", err)
		}
		if exists {
			t.Error("file should not exist")
		}
	})
}

func TestSetupTestDir(t *testing.T) {
	// Create test directory structure
	testFs := SetupTestDir(map[string]string{
		"/backups/db1.dump":     "database 1 content",
		"/backups/db2.dump":     "database 2 content",
		"/config/settings.json": `{"key": "value"}`,
	})

	// Verify files exist
	content, err := afero.ReadFile(testFs, "/backups/db1.dump")
	if err != nil {
		t.Fatalf("ReadFile failed: %v", err)
	}
	if string(content) != "database 1 content" {
		t.Errorf("unexpected content: %s", string(content))
	}

	// Verify directory structure
	files, err := afero.ReadDir(testFs, "/backups")
	if err != nil {
		t.Fatalf("ReadDir failed: %v", err)
	}
	if len(files) != 2 {
		t.Errorf("expected 2 files, got %d", len(files))
	}
}

func TestCopyFile(t *testing.T) {
	WithMemFs(func(memFs afero.Fs) {
		// Create source file
		err := WriteFile("/source.txt", []byte("copy me"), 0644)
		if err != nil {
			t.Fatalf("WriteFile failed: %v", err)
		}

		// Copy file
		err = CopyFile("/source.txt", "/dest.txt")
		if err != nil {
			t.Fatalf("CopyFile failed: %v", err)
		}

		// Verify copy
		content, err := ReadFile("/dest.txt")
		if err != nil {
			t.Fatalf("ReadFile failed: %v", err)
		}
		if string(content) != "copy me" {
			t.Errorf("unexpected content: %s", string(content))
		}
	})
}

func TestFileSize(t *testing.T) {
	WithMemFs(func(memFs afero.Fs) {
		data := []byte("12345678901234567890") // 20 bytes
		err := WriteFile("/sized.txt", data, 0644)
		if err != nil {
			t.Fatalf("WriteFile failed: %v", err)
		}

		size, err := FileSize("/sized.txt")
		if err != nil {
			t.Fatalf("FileSize failed: %v", err)
		}
		if size != 20 {
			t.Errorf("expected size 20, got %d", size)
		}
	})
}

func TestTempDir(t *testing.T) {
	WithMemFs(func(memFs afero.Fs) {
		// Create temp dir
		dir, err := TempDir("", "test-")
		if err != nil {
			t.Fatalf("TempDir failed: %v", err)
		}

		// Verify it exists
		isDir, err := IsDir(dir)
		if err != nil {
			t.Fatalf("IsDir failed: %v", err)
		}
		if !isDir {
			t.Error("temp dir should be a directory")
		}

		// Verify it's empty
		isEmpty, err := IsEmpty(dir)
		if err != nil {
			t.Fatalf("IsEmpty failed: %v", err)
		}
		if !isEmpty {
			t.Error("temp dir should be empty")
		}
	})
}

func TestWalk(t *testing.T) {
	WithMemFs(func(memFs afero.Fs) {
		// Create directory structure
		_ = MkdirAll("/root/a/b", 0755)
		_ = WriteFile("/root/file1.txt", []byte("1"), 0644)
		_ = WriteFile("/root/a/file2.txt", []byte("2"), 0644)
		_ = WriteFile("/root/a/b/file3.txt", []byte("3"), 0644)

		var files []string
		err := Walk("/root", func(path string, info os.FileInfo, err error) error {
			if err != nil {
				return err
			}
			if !info.IsDir() {
				files = append(files, path)
			}
			return nil
		})

		if err != nil {
			t.Fatalf("Walk failed: %v", err)
		}

		if len(files) != 3 {
			t.Errorf("expected 3 files, got %d: %v", len(files), files)
		}
	})
}

func TestGlob(t *testing.T) {
	WithMemFs(func(memFs afero.Fs) {
		_ = WriteFile("/data/backup1.dump", []byte("1"), 0644)
		_ = WriteFile("/data/backup2.dump", []byte("2"), 0644)
		_ = WriteFile("/data/config.json", []byte("{}"), 0644)

		matches, err := Glob("/data/*.dump")
		if err != nil {
			t.Fatalf("Glob failed: %v", err)
		}

		if len(matches) != 2 {
			t.Errorf("expected 2 matches, got %d: %v", len(matches), matches)
		}
	})
}
```
118 internal/logger/colors.go Normal file
@@ -0,0 +1,118 @@
```go
package logger

import (
	"fmt"
	"os"

	"github.com/fatih/color"
)

// CLI output helpers using fatih/color for cross-platform support

// Success prints a success message with green checkmark
func Success(format string, args ...interface{}) {
	msg := fmt.Sprintf(format, args...)
	SuccessColor.Fprint(os.Stdout, "✓ ")
	fmt.Println(msg)
}

// Error prints an error message with red X
func Error(format string, args ...interface{}) {
	msg := fmt.Sprintf(format, args...)
	ErrorColor.Fprint(os.Stderr, "✗ ")
	fmt.Fprintln(os.Stderr, msg)
}

// Warning prints a warning message with yellow exclamation
func Warning(format string, args ...interface{}) {
	msg := fmt.Sprintf(format, args...)
	WarnColor.Fprint(os.Stdout, "⚠ ")
	fmt.Println(msg)
}

// Info prints an info message with cyan arrow
func Info(format string, args ...interface{}) {
	msg := fmt.Sprintf(format, args...)
	InfoColor.Fprint(os.Stdout, "→ ")
	fmt.Println(msg)
}

// Header prints a bold header
func Header(format string, args ...interface{}) {
	msg := fmt.Sprintf(format, args...)
	HighlightColor.Println(msg)
}

// Dim prints dimmed/secondary text
func Dim(format string, args ...interface{}) {
	msg := fmt.Sprintf(format, args...)
	DimColor.Println(msg)
}

// Bold returns bold text
func Bold(text string) string {
	return color.New(color.Bold).Sprint(text)
}

// Green returns green text
func Green(text string) string {
	return SuccessColor.Sprint(text)
}

// Red returns red text
func Red(text string) string {
	return ErrorColor.Sprint(text)
}

// Yellow returns yellow text
func Yellow(text string) string {
	return WarnColor.Sprint(text)
}

// Cyan returns cyan text
func Cyan(text string) string {
	return InfoColor.Sprint(text)
}

// StatusLine prints a key-value status line
func StatusLine(key, value string) {
	DimColor.Printf("  %s: ", key)
	fmt.Println(value)
}

// ProgressStatus prints an operation's status with an OK/FAIL marker
func ProgressStatus(operation string, status string, isSuccess bool) {
	if isSuccess {
		SuccessColor.Print("[OK] ")
	} else {
		ErrorColor.Print("[FAIL] ")
	}
	fmt.Printf("%s: %s\n", operation, status)
}

// TableRow prints a simple formatted table row
func TableRow(cols ...string) {
	for i, col := range cols {
		if i == 0 {
			InfoColor.Printf("%-20s", col)
		} else {
			fmt.Printf("%-15s", col)
		}
	}
	fmt.Println()
}

// DisableColors disables all color output (for non-TTY or --no-color flag)
func DisableColors() {
	color.NoColor = true
}

// EnableColors enables color output
func EnableColors() {
	color.NoColor = false
}

// IsColorEnabled returns whether colors are enabled
func IsColorEnabled() bool {
	return !color.NoColor
}
```
```diff
@@ -7,9 +7,29 @@ import (
 	"strings"
 	"time"
 
+	"github.com/fatih/color"
 	"github.com/sirupsen/logrus"
 )
 
+// Color printers for consistent output across the application
+var (
+	// Status colors
+	SuccessColor = color.New(color.FgGreen, color.Bold)
+	ErrorColor   = color.New(color.FgRed, color.Bold)
+	WarnColor    = color.New(color.FgYellow, color.Bold)
+	InfoColor    = color.New(color.FgCyan)
+	DebugColor   = color.New(color.FgWhite)
+
+	// Highlight colors
+	HighlightColor = color.New(color.FgMagenta, color.Bold)
+	DimColor       = color.New(color.FgHiBlack)
+
+	// Data colors
+	NumberColor = color.New(color.FgYellow)
+	PathColor   = color.New(color.FgBlue, color.Underline)
+	TimeColor   = color.New(color.FgCyan)
+)
+
 // Logger defines the interface for logging
 type Logger interface {
 	Debug(msg string, args ...any)
@@ -226,34 +246,32 @@ type CleanFormatter struct{}
 func (f *CleanFormatter) Format(entry *logrus.Entry) ([]byte, error) {
 	timestamp := entry.Time.Format("2006-01-02T15:04:05")
 
-	// Color codes for different log levels
-	var levelColor, levelText string
+	// Get level color and text using fatih/color
+	var levelPrinter *color.Color
+	var levelText string
 	switch entry.Level {
 	case logrus.DebugLevel:
-		levelColor = "\033[36m" // Cyan
+		levelPrinter = DebugColor
 		levelText = "DEBUG"
 	case logrus.InfoLevel:
-		levelColor = "\033[32m" // Green
+		levelPrinter = SuccessColor
 		levelText = "INFO "
 	case logrus.WarnLevel:
-		levelColor = "\033[33m" // Yellow
+		levelPrinter = WarnColor
 		levelText = "WARN "
 	case logrus.ErrorLevel:
-		levelColor = "\033[31m" // Red
+		levelPrinter = ErrorColor
 		levelText = "ERROR"
 	default:
-		levelColor = "\033[0m" // Reset
+		levelPrinter = InfoColor
 		levelText = "INFO "
 	}
-	resetColor := "\033[0m"
 
 	// Build the message with perfectly aligned columns
 	var output strings.Builder
 
 	// Column 1: Level (with color, fixed width 5 chars)
-	output.WriteString(levelColor)
-	output.WriteString(levelText)
-	output.WriteString(resetColor)
+	output.WriteString(levelPrinter.Sprint(levelText))
 	output.WriteString(" ")
 
 	// Column 2: Timestamp (fixed format)
```
```diff
@@ -6,6 +6,16 @@ import (
 	"os"
 	"strings"
 	"time"
+
+	"github.com/fatih/color"
+	"github.com/schollz/progressbar/v3"
+)
+
+// Color printers for progress indicators
+var (
+	okColor   = color.New(color.FgGreen, color.Bold)
+	failColor = color.New(color.FgRed, color.Bold)
+	warnColor = color.New(color.FgYellow, color.Bold)
 )
 
 // Indicator represents a progress indicator interface
@@ -92,13 +102,15 @@ func (s *Spinner) Update(message string) {
 // Complete stops the spinner with a success message
 func (s *Spinner) Complete(message string) {
 	s.Stop()
-	fmt.Fprintf(s.writer, "\n[OK] %s\n", message)
+	okColor.Fprint(s.writer, "[OK] ")
+	fmt.Fprintln(s.writer, message)
 }
 
 // Fail stops the spinner with a failure message
 func (s *Spinner) Fail(message string) {
 	s.Stop()
-	fmt.Fprintf(s.writer, "\n[FAIL] %s\n", message)
+	failColor.Fprint(s.writer, "[FAIL] ")
+	fmt.Fprintln(s.writer, message)
 }
 
 // Stop stops the spinner
@@ -167,13 +179,15 @@ func (d *Dots) Update(message string) {
 // Complete stops the dots with a success message
 func (d *Dots) Complete(message string) {
 	d.Stop()
-	fmt.Fprintf(d.writer, " [OK] %s\n", message)
+	okColor.Fprint(d.writer, " [OK] ")
+	fmt.Fprintln(d.writer, message)
 }
 
 // Fail stops the dots with a failure message
 func (d *Dots) Fail(message string) {
 	d.Stop()
-	fmt.Fprintf(d.writer, " [FAIL] %s\n", message)
+	failColor.Fprint(d.writer, " [FAIL] ")
+	fmt.Fprintln(d.writer, message)
 }
 
 // Stop stops the dots indicator
@@ -239,14 +253,16 @@ func (p *ProgressBar) Complete(message string) {
 	p.current = p.total
 	p.message = message
 	p.render()
-	fmt.Fprintf(p.writer, " [OK] %s\n", message)
+	okColor.Fprint(p.writer, " [OK] ")
+	fmt.Fprintln(p.writer, message)
 	p.Stop()
 }
 
 // Fail stops the progress bar with failure
 func (p *ProgressBar) Fail(message string) {
 	p.render()
-	fmt.Fprintf(p.writer, " [FAIL] %s\n", message)
+	failColor.Fprint(p.writer, " [FAIL] ")
+	fmt.Fprintln(p.writer, message)
 	p.Stop()
 }
 
@@ -298,12 +314,14 @@ func (s *Static) Update(message string) {
 
 // Complete shows completion message
 func (s *Static) Complete(message string) {
-	fmt.Fprintf(s.writer, " [OK] %s\n", message)
+	okColor.Fprint(s.writer, " [OK] ")
+	fmt.Fprintln(s.writer, message)
 }
 
 // Fail shows failure message
 func (s *Static) Fail(message string) {
-	fmt.Fprintf(s.writer, " [FAIL] %s\n", message)
+	failColor.Fprint(s.writer, " [FAIL] ")
+	fmt.Fprintln(s.writer, message)
 }
 
 // Stop does nothing for static indicator
@@ -380,12 +398,14 @@ func (l *LineByLine) SetEstimator(estimator *ETAEstimator) {
 
 // Complete shows completion message
 func (l *LineByLine) Complete(message string) {
-	fmt.Fprintf(l.writer, "[OK] %s\n\n", message)
+	okColor.Fprint(l.writer, "[OK] ")
+	fmt.Fprintf(l.writer, "%s\n\n", message)
 }
 
 // Fail shows failure message
 func (l *LineByLine) Fail(message string) {
-	fmt.Fprintf(l.writer, "[FAIL] %s\n\n", message)
+	failColor.Fprint(l.writer, "[FAIL] ")
+	fmt.Fprintf(l.writer, "%s\n\n", message)
 }
 
 // Stop does nothing for line-by-line (no cleanup needed)
@@ -408,13 +428,15 @@ func (l *Light) Update(message string) {
 
 func (l *Light) Complete(message string) {
 	if !l.silent {
-		fmt.Fprintf(l.writer, "[OK] %s\n", message)
+		okColor.Fprint(l.writer, "[OK] ")
+		fmt.Fprintln(l.writer, message)
 	}
 }
 
 func (l *Light) Fail(message string) {
 	if !l.silent {
-		fmt.Fprintf(l.writer, "[FAIL] %s\n", message)
+		failColor.Fprint(l.writer, "[FAIL] ")
+		fmt.Fprintln(l.writer, message)
 	}
 }
 
@@ -440,6 +462,8 @@ func NewIndicator(interactive bool, indicatorType string) Indicator {
 		return NewDots()
 	case "bar":
 		return NewProgressBar(100) // Default to 100 steps
+	case "schollz":
+		return NewSchollzBarItems(100, "Progress")
 	case "line":
 		return NewLineByLine()
 	case "light":
@@ -463,3 +487,161 @@ func (n *NullIndicator) Complete(message string) {}
 func (n *NullIndicator) Fail(message string) {}
 func (n *NullIndicator) Stop() {}
 func (n *NullIndicator) SetEstimator(estimator *ETAEstimator) {}
+
+// SchollzBar wraps schollz/progressbar for enhanced progress display.
+// Ideal for byte-based operations like archive extraction and file transfers.
+type SchollzBar struct {
+	bar       *progressbar.ProgressBar
+	message   string
+	total     int64
+	estimator *ETAEstimator
+}
+
+// NewSchollzBar creates a new schollz progressbar with byte-based progress
+func NewSchollzBar(total int64, description string) *SchollzBar {
+	bar := progressbar.NewOptions64(
+		total,
+		progressbar.OptionEnableColorCodes(true),
+		progressbar.OptionShowBytes(true),
+		progressbar.OptionSetWidth(40),
+		progressbar.OptionSetDescription(description),
+		progressbar.OptionSetTheme(progressbar.Theme{
+			Saucer:        "[green]█[reset]",
+			SaucerHead:    "[green]▌[reset]",
+			SaucerPadding: "░",
+			BarStart:      "[",
+			BarEnd:        "]",
+		}),
+		progressbar.OptionShowCount(),
+		progressbar.OptionSetPredictTime(true),
+		progressbar.OptionFullWidth(),
+		progressbar.OptionClearOnFinish(),
+	)
+	return &SchollzBar{
+		bar:     bar,
+		message: description,
+		total:   total,
+	}
+}
+
+// NewSchollzBarItems creates a progressbar for item counts (not bytes)
+func NewSchollzBarItems(total int, description string) *SchollzBar {
+	bar := progressbar.NewOptions(
+		total,
+		progressbar.OptionEnableColorCodes(true),
+		progressbar.OptionShowCount(),
+		progressbar.OptionSetWidth(40),
+		progressbar.OptionSetDescription(description),
+		progressbar.OptionSetTheme(progressbar.Theme{
+			Saucer:        "[cyan]█[reset]",
+			SaucerHead:    "[cyan]▌[reset]",
+			SaucerPadding: "░",
+			BarStart:      "[",
+			BarEnd:        "]",
+		}),
+		progressbar.OptionSetPredictTime(true),
+		progressbar.OptionFullWidth(),
+		progressbar.OptionClearOnFinish(),
+	)
+	return &SchollzBar{
+		bar:     bar,
+		message: description,
+		total:   int64(total),
+	}
+}
+
+// NewSchollzSpinner creates an indeterminate spinner for unknown-length operations
+func NewSchollzSpinner(description string) *SchollzBar {
+	bar := progressbar.NewOptions(
+		-1, // Indeterminate
+		progressbar.OptionEnableColorCodes(true),
+		progressbar.OptionSetWidth(40),
+		progressbar.OptionSetDescription(description),
+		progressbar.OptionSpinnerType(14), // Braille spinner
+		progressbar.OptionFullWidth(),
+	)
+	return &SchollzBar{
+		bar:     bar,
+		message: description,
+		total:   -1,
+	}
+}
+
+// Start initializes the progress bar (Indicator interface)
+func (s *SchollzBar) Start(message string) {
+	s.message = message
+	s.bar.Describe(message)
+}
+
+// Update updates the description (Indicator interface)
+func (s *SchollzBar) Update(message string) {
+	s.message = message
+	s.bar.Describe(message)
+}
+
+// Add adds bytes/items to the progress
+func (s *SchollzBar) Add(n int) error {
+	return s.bar.Add(n)
+}
+
+// Add64 adds bytes to the progress (for large files)
+func (s *SchollzBar) Add64(n int64) error {
+	return s.bar.Add64(n)
+}
+
+// Set sets the current progress value
+func (s *SchollzBar) Set(n int) error {
+	return s.bar.Set(n)
+}
+
+// Set64 sets the current progress value (for large files)
+func (s *SchollzBar) Set64(n int64) error {
+	return s.bar.Set64(n)
+}
+
+// ChangeMax updates the maximum value
+func (s *SchollzBar) ChangeMax(max int) {
+	s.bar.ChangeMax(max)
+	s.total = int64(max)
+}
+
+// ChangeMax64 updates the maximum value (for large files)
+func (s *SchollzBar) ChangeMax64(max int64) {
+	s.bar.ChangeMax64(max)
+	s.total = max
+}
+
+// Complete finishes with success (Indicator interface)
+func (s *SchollzBar) Complete(message string) {
+	_ = s.bar.Finish()
+	okColor.Print("[OK] ")
+	fmt.Println(message)
+}
+
+// Fail finishes with failure (Indicator interface)
+func (s *SchollzBar) Fail(message string) {
+	_ = s.bar.Clear()
+	failColor.Print("[FAIL] ")
+	fmt.Println(message)
+}
+
+// Stop stops the progress bar (Indicator interface)
+func (s *SchollzBar) Stop() {
+	_ = s.bar.Clear()
+}
+
+// SetEstimator stores the estimator for interface compatibility
+// (schollz has built-in ETA, so it is not used for rendering)
+func (s *SchollzBar) SetEstimator(estimator *ETAEstimator) {
+	s.estimator = estimator
+}
+
+// Writer returns an io.Writer that updates progress as data is written.
+// Useful for wrapping readers/writers in copy operations.
+func (s *SchollzBar) Writer() io.Writer {
+	return s.bar
+}
+
+// Finish marks the progress as complete
+func (s *SchollzBar) Finish() error {
+	return s.bar.Finish()
+}
```
@@ -12,6 +12,7 @@ import (
 	"dbbackup/internal/cloud"
 	"dbbackup/internal/logger"
 	"dbbackup/internal/metadata"
+	"dbbackup/internal/progress"
 )
 
 // CloudDownloader handles downloading backups from cloud storage
@@ -73,25 +74,43 @@ func (d *CloudDownloader) Download(ctx context.Context, remotePath string, opts
 		size = 0 // Continue anyway
 	}
 
-	// Progress callback
-	var lastPercent int
-	progressCallback := func(transferred, total int64) {
-		if total > 0 {
-			percent := int(float64(transferred) / float64(total) * 100)
-			if percent != lastPercent && percent%10 == 0 {
-				d.log.Info("Download progress", "percent", percent, "transferred", cloud.FormatSize(transferred), "total", cloud.FormatSize(total))
-				lastPercent = percent
-			}
+	// Create schollz progressbar for visual download progress
+	var bar *progress.SchollzBar
+	if size > 0 {
+		bar = progress.NewSchollzBar(size, fmt.Sprintf("Downloading %s", filename))
+	} else {
+		bar = progress.NewSchollzSpinner(fmt.Sprintf("Downloading %s", filename))
+	}
+
+	// Progress callback with schollz progressbar
+	var lastBytes int64
+	progressCallback := func(transferred, total int64) {
+		if bar != nil {
+			// Update progress bar with delta
+			delta := transferred - lastBytes
+			if delta > 0 {
+				_ = bar.Add64(delta)
+			}
+			lastBytes = transferred
 		}
 	}
 
 	// Download file
 	if err := d.backend.Download(ctx, remotePath, localPath, progressCallback); err != nil {
+		if bar != nil {
+			bar.Fail("Download failed")
+		}
 		// Cleanup on failure
 		os.RemoveAll(tempSubDir)
 		return nil, fmt.Errorf("download failed: %w", err)
 	}
+
+	if bar != nil {
+		_ = bar.Finish()
+	}
+
+	d.log.Info("Download completed", "size", cloud.FormatSize(size))
+
 	result := &DownloadResult{
 		LocalPath:  localPath,
 		RemotePath: remotePath,
@@ -115,7 +134,7 @@ func (d *CloudDownloader) Download(ctx context.Context, remotePath string, opts
 	// Verify checksum if requested
 	if opts.VerifyChecksum {
 		d.log.Info("Verifying checksum...")
-		checksum, err := calculateSHA256(localPath)
+		checksum, err := calculateSHA256WithProgress(localPath)
 		if err != nil {
 			// Cleanup on verification failure
 			os.RemoveAll(tempSubDir)
@@ -186,6 +205,35 @@ func calculateSHA256(filePath string) (string, error) {
 	return hex.EncodeToString(hash.Sum(nil)), nil
 }
 
+// calculateSHA256WithProgress calculates SHA-256 with visual progress bar
+func calculateSHA256WithProgress(filePath string) (string, error) {
+	file, err := os.Open(filePath)
+	if err != nil {
+		return "", err
+	}
+	defer file.Close()
+
+	// Get file size for progress bar
+	stat, err := file.Stat()
+	if err != nil {
+		return "", err
+	}
+
+	bar := progress.NewSchollzBar(stat.Size(), "Verifying checksum")
+	hash := sha256.New()
+
+	// Create a multi-writer to update both hash and progress
+	writer := io.MultiWriter(hash, bar.Writer())
+
+	if _, err := io.Copy(writer, file); err != nil {
+		bar.Fail("Verification failed")
+		return "", err
+	}
+
+	_ = bar.Finish()
+	return hex.EncodeToString(hash.Sum(nil)), nil
+}
+
 // DownloadFromCloudURI is a convenience function to download from a cloud URI
 func DownloadFromCloudURI(ctx context.Context, uri string, opts DownloadOptions) (*DownloadResult, error) {
 	// Parse URI
@@ -368,7 +368,7 @@ func (d *Diagnoser) diagnoseSQLScript(filePath string, compressed bool, result *
 		}
 
 		// Store last line for termination check
-		if lineNumber > 0 && (lineNumber%100000 == 0) && d.verbose {
+		if lineNumber > 0 && (lineNumber%100000 == 0) && d.verbose && d.log != nil {
 			d.log.Debug("Scanning SQL file", "lines_processed", lineNumber)
 		}
 	}
@@ -414,24 +414,123 @@ func (d *Diagnoser) diagnoseSQLScript(filePath string, compressed bool, result *
 
 // diagnoseClusterArchive analyzes a cluster tar.gz archive
 func (d *Diagnoser) diagnoseClusterArchive(filePath string, result *DiagnoseResult) {
-	// First verify tar.gz integrity with timeout
-	// 5 minutes for large archives (multi-GB archives need more time)
-	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
+	// Calculate dynamic timeout based on file size
+	// Large archives (100GB+) can take significant time to list
+	// Minimum 5 minutes, scales with file size, max 180 minutes for very large archives
+	timeoutMinutes := 5
+	if result.FileSize > 0 {
+		// 1 minute per 2 GB, minimum 5 minutes, max 180 minutes
+		sizeGB := result.FileSize / (1024 * 1024 * 1024)
+		estimatedMinutes := int(sizeGB/2) + 5
+		if estimatedMinutes > timeoutMinutes {
+			timeoutMinutes = estimatedMinutes
+		}
+		if timeoutMinutes > 180 {
+			timeoutMinutes = 180
+		}
+	}
+
+	if d.log != nil {
+		d.log.Info("Verifying cluster archive integrity",
+			"size", fmt.Sprintf("%.1f GB", float64(result.FileSize)/(1024*1024*1024)),
+			"timeout", fmt.Sprintf("%d min", timeoutMinutes))
+	}
+
+	ctx, cancel := context.WithTimeout(context.Background(), time.Duration(timeoutMinutes)*time.Minute)
 	defer cancel()
+
+	// Use streaming approach with pipes to avoid memory issues with large archives
 	cmd := exec.CommandContext(ctx, "tar", "-tzf", filePath)
-	output, err := cmd.Output()
-	if err != nil {
+	stdout, pipeErr := cmd.StdoutPipe()
+	if pipeErr != nil {
+		// Pipe creation failed - not a corruption issue
+		result.Warnings = append(result.Warnings,
+			fmt.Sprintf("Cannot create pipe for verification: %v", pipeErr),
+			"Archive integrity cannot be verified but may still be valid")
+		return
+	}
+
+	var stderrBuf bytes.Buffer
+	cmd.Stderr = &stderrBuf
+
+	if startErr := cmd.Start(); startErr != nil {
+		result.Warnings = append(result.Warnings,
+			fmt.Sprintf("Cannot start tar verification: %v", startErr),
+			"Archive integrity cannot be verified but may still be valid")
+		return
+	}
+
+	// Stream output line by line to avoid buffering entire listing in memory
+	scanner := bufio.NewScanner(stdout)
+	scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024) // Allow long paths
+
+	var files []string
+	fileCount := 0
+	for scanner.Scan() {
+		fileCount++
+		line := scanner.Text()
+		// Only store dump/metadata files, not every file
+		if strings.HasSuffix(line, ".dump") || strings.HasSuffix(line, ".sql.gz") ||
+			strings.HasSuffix(line, ".sql") || strings.HasSuffix(line, ".json") ||
+			strings.Contains(line, "globals") || strings.Contains(line, "manifest") ||
+			strings.Contains(line, "metadata") {
+			files = append(files, line)
+		}
+	}
+
+	scanErr := scanner.Err()
+	waitErr := cmd.Wait()
+	stderrOutput := stderrBuf.String()
+
+	// Handle errors - distinguish between actual corruption and resource/timeout issues
+	if waitErr != nil || scanErr != nil {
+		// Check if it was a timeout
+		if ctx.Err() == context.DeadlineExceeded {
+			result.Warnings = append(result.Warnings,
+				fmt.Sprintf("Verification timed out after %d minutes - archive is very large", timeoutMinutes),
+				"This does not necessarily mean the archive is corrupted",
+				"Manual verification: tar -tzf "+filePath+" | wc -l")
+			// Don't mark as corrupted or invalid on timeout - archive may be fine
+			if fileCount > 0 {
+				result.Details.TableCount = len(files)
+				result.Details.TableList = files
+			}
+			return
+		}
+
+		// Check for specific gzip/tar corruption indicators
+		if strings.Contains(stderrOutput, "unexpected end of file") ||
+			strings.Contains(stderrOutput, "Unexpected EOF") ||
+			strings.Contains(stderrOutput, "gzip: stdin: unexpected end of file") ||
+			strings.Contains(stderrOutput, "not in gzip format") ||
+			strings.Contains(stderrOutput, "invalid compressed data") {
+			// These indicate actual corruption
 		result.IsValid = false
 		result.IsCorrupted = true
 		result.Errors = append(result.Errors,
-			fmt.Sprintf("Tar archive is invalid or corrupted: %v", err),
+			"Tar archive appears truncated or corrupted",
+			fmt.Sprintf("Error: %s", truncateString(stderrOutput, 200)),
 			"Run: tar -tzf "+filePath+" 2>&1 | tail -20")
 		return
 	}
 
-	// Parse tar listing
-	files := strings.Split(strings.TrimSpace(string(output)), "\n")
+		// Other errors (signal killed, memory, etc.) - not necessarily corruption
+		// If we read some files successfully, the archive structure is likely OK
+		if fileCount > 0 {
+			result.Warnings = append(result.Warnings,
+				fmt.Sprintf("Verification incomplete (read %d files before error)", fileCount),
+				"Archive may still be valid - error could be due to system resources")
+			// Proceed with what we got
+		} else {
+			// Couldn't read anything - but don't mark as corrupted without clear evidence
+			result.Warnings = append(result.Warnings,
+				fmt.Sprintf("Cannot verify archive: %v", waitErr),
+				"Archive integrity is uncertain - proceed with caution or verify manually")
+			return
+		}
+	}
+
+	// Parse the collected file list
 	var dumpFiles []string
 	hasGlobals := false
 	hasMetadata := false
@@ -464,7 +563,7 @@ func (d *Diagnoser) diagnoseClusterArchive(filePath string, result *DiagnoseResu
 	}
 
 	// For verbose mode, diagnose individual dumps inside the archive
-	if d.verbose && len(dumpFiles) > 0 {
+	if d.verbose && len(dumpFiles) > 0 && d.log != nil {
 		d.log.Info("Cluster archive contains databases", "count", len(dumpFiles))
 		for _, df := range dumpFiles {
 			d.log.Info(" - " + df)
@@ -497,9 +596,22 @@ func (d *Diagnoser) diagnoseUnknown(filePath string, result *DiagnoseResult) {
 
 // verifyWithPgRestore uses pg_restore --list to verify dump integrity
 func (d *Diagnoser) verifyWithPgRestore(filePath string, result *DiagnoseResult) {
-	// Use timeout to prevent blocking on very large dump files
-	// 5 minutes for large dumps (multi-GB dumps with many tables)
-	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
+	// Calculate dynamic timeout based on file size
+	// pg_restore --list is usually faster than tar -tzf for same size
+	timeoutMinutes := 5
+	if result.FileSize > 0 {
+		// 1 minute per 5 GB, minimum 5 minutes, max 30 minutes
+		sizeGB := result.FileSize / (1024 * 1024 * 1024)
+		estimatedMinutes := int(sizeGB/5) + 5
+		if estimatedMinutes > timeoutMinutes {
+			timeoutMinutes = estimatedMinutes
+		}
+		if timeoutMinutes > 30 {
+			timeoutMinutes = 30
+		}
+	}
+
+	ctx, cancel := context.WithTimeout(context.Background(), time.Duration(timeoutMinutes)*time.Minute)
 	defer cancel()
 
 	cmd := exec.CommandContext(ctx, "pg_restore", "--list", filePath)
@@ -554,14 +666,74 @@ func (d *Diagnoser) verifyWithPgRestore(filePath string, result *DiagnoseResult)
 
 // DiagnoseClusterDumps extracts and diagnoses all dumps in a cluster archive
 func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*DiagnoseResult, error) {
-	// First, try to list archive contents without extracting (fast check)
-	// 10 minutes for very large archives
-	listCtx, listCancel := context.WithTimeout(context.Background(), 10*time.Minute)
+	// Get archive size for dynamic timeout calculation
+	archiveInfo, err := os.Stat(archivePath)
+	if err != nil {
+		return nil, fmt.Errorf("cannot stat archive: %w", err)
+	}
+
+	// Dynamic timeout based on archive size: base 10 min + 1 min per 3 GB
+	// Large archives like 100+ GB need more time for tar -tzf
+	timeoutMinutes := 10
+	if archiveInfo.Size() > 0 {
+		sizeGB := archiveInfo.Size() / (1024 * 1024 * 1024)
+		estimatedMinutes := int(sizeGB/3) + 10
+		if estimatedMinutes > timeoutMinutes {
+			timeoutMinutes = estimatedMinutes
+		}
+		if timeoutMinutes > 120 { // Max 2 hours
+			timeoutMinutes = 120
+		}
+	}
+
+	if d.log != nil {
+		d.log.Info("Listing cluster archive contents",
+			"size", fmt.Sprintf("%.1f GB", float64(archiveInfo.Size())/(1024*1024*1024)),
+			"timeout", fmt.Sprintf("%d min", timeoutMinutes))
+	}
+
+	listCtx, listCancel := context.WithTimeout(context.Background(), time.Duration(timeoutMinutes)*time.Minute)
 	defer listCancel()
+
 	listCmd := exec.CommandContext(listCtx, "tar", "-tzf", archivePath)
-	listOutput, listErr := listCmd.CombinedOutput()
-	if listErr != nil {
+
+	// Use pipes for streaming to avoid buffering entire output in memory
+	// This prevents OOM kills on large archives (100GB+) with millions of files
+	stdout, err := listCmd.StdoutPipe()
+	if err != nil {
+		return nil, fmt.Errorf("failed to create stdout pipe: %w", err)
+	}
+
+	var stderrBuf bytes.Buffer
+	listCmd.Stderr = &stderrBuf
+
+	if err := listCmd.Start(); err != nil {
+		return nil, fmt.Errorf("failed to start tar listing: %w", err)
+	}
+
+	// Stream the output line by line, only keeping relevant files
+	var files []string
+	scanner := bufio.NewScanner(stdout)
+	// Set a reasonable max line length (file paths shouldn't exceed this)
+	scanner.Buffer(make([]byte, 0, 4096), 1024*1024)
+
+	fileCount := 0
+	for scanner.Scan() {
+		fileCount++
+		line := scanner.Text()
+		// Only store dump files and important files, not every single file
+		if strings.HasSuffix(line, ".dump") || strings.HasSuffix(line, ".sql") ||
+			strings.HasSuffix(line, ".sql.gz") || strings.HasSuffix(line, ".json") ||
+			strings.Contains(line, "globals") || strings.Contains(line, "manifest") ||
+			strings.Contains(line, "metadata") || strings.HasSuffix(line, "/") {
+			files = append(files, line)
+		}
+	}
+
+	scanErr := scanner.Err()
+	listErr := listCmd.Wait()
+
+	if listErr != nil || scanErr != nil {
 		// Archive listing failed - likely corrupted
 		errResult := &DiagnoseResult{
 			FilePath: archivePath,
@@ -573,7 +745,12 @@ func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*Diagno
 			Details: &DiagnoseDetails{},
 		}
 
-		errOutput := string(listOutput)
+		errOutput := stderrBuf.String()
+		actualErr := listErr
+		if scanErr != nil {
+			actualErr = scanErr
+		}
+
 		if strings.Contains(errOutput, "unexpected end of file") ||
 			strings.Contains(errOutput, "Unexpected EOF") ||
 			strings.Contains(errOutput, "truncated") {
@@ -585,7 +762,7 @@ func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*Diagno
 				"Solution: Re-create the backup from source database")
 		} else {
 			errResult.Errors = append(errResult.Errors,
-				fmt.Sprintf("Cannot list archive contents: %v", listErr),
+				fmt.Sprintf("Cannot list archive contents: %v", actualErr),
 				fmt.Sprintf("tar error: %s", truncateString(errOutput, 300)),
 				"Run manually: tar -tzf "+archivePath+" 2>&1 | tail -50")
 		}
@@ -593,11 +770,12 @@ func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*Diagno
 		return []*DiagnoseResult{errResult}, nil
 	}
 
-	// Archive is listable - now check disk space before extraction
-	files := strings.Split(strings.TrimSpace(string(listOutput)), "\n")
+	if d.log != nil {
+		d.log.Debug("Archive listing streamed successfully", "total_files", fileCount, "relevant_files", len(files))
+	}
 
 	// Check if we have enough disk space (estimate 4x archive size needed)
-	archiveInfo, _ := os.Stat(archivePath)
+	// archiveInfo already obtained at function start
 	requiredSpace := archiveInfo.Size() * 4
 
 	// Check temp directory space - try to extract metadata first
@@ -609,7 +787,9 @@ func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*Diagno
 		testCancel()
 	}
 
+	if d.log != nil {
 	d.log.Info("Archive listing successful", "files", len(files))
+	}
 
 	// Try full extraction - NO TIMEOUT here as large archives can take a long time
 	// Use a generous timeout (30 minutes) for very large archives
@@ -698,11 +878,15 @@ func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*Diagno
 	}
 
 		dumpPath := filepath.Join(dumpsDir, name)
+		if d.log != nil {
 		d.log.Info("Diagnosing dump file", "file", name)
+		}
 
 		result, err := d.DiagnoseFile(dumpPath)
 		if err != nil {
+			if d.log != nil {
 			d.log.Warn("Failed to diagnose file", "file", name, "error", err)
+			}
 			continue
 		}
 		results = append(results, result)
@@ -1,11 +1,16 @@
 package restore
 
 import (
+	"archive/tar"
+	"compress/gzip"
 	"context"
+	"database/sql"
 	"fmt"
+	"io"
 	"os"
 	"os/exec"
 	"path/filepath"
+	"strconv"
 	"strings"
 	"sync"
 	"sync/atomic"
@@ -17,8 +22,18 @@ import (
 	"dbbackup/internal/logger"
 	"dbbackup/internal/progress"
 	"dbbackup/internal/security"
+
+	"github.com/hashicorp/go-multierror"
+	_ "github.com/jackc/pgx/v5/stdlib" // PostgreSQL driver
 )
 
+// ProgressCallback is called with progress updates during long operations
+// Parameters: current bytes/items done, total bytes/items, description
+type ProgressCallback func(current, total int64, description string)
+
+// DatabaseProgressCallback is called with database count progress during cluster restore
+type DatabaseProgressCallback func(done, total int, dbName string)
+
 // Engine handles database restore operations
 type Engine struct {
 	cfg *config.Config
@@ -28,6 +43,10 @@ type Engine struct {
 	detailedReporter *progress.DetailedReporter
 	dryRun           bool
 	debugLogPath     string // Path to save debug log on error
+
+	// TUI progress callback for detailed progress reporting
+	progressCallback   ProgressCallback
+	dbProgressCallback DatabaseProgressCallback
 }
 
 // New creates a new restore engine
@@ -83,6 +102,30 @@ func (e *Engine) SetDebugLogPath(path string) {
 	e.debugLogPath = path
 }
 
+// SetProgressCallback sets a callback for detailed progress reporting (for TUI mode)
+func (e *Engine) SetProgressCallback(cb ProgressCallback) {
+	e.progressCallback = cb
+}
+
+// SetDatabaseProgressCallback sets a callback for database count progress during cluster restore
+func (e *Engine) SetDatabaseProgressCallback(cb DatabaseProgressCallback) {
+	e.dbProgressCallback = cb
+}
+
+// reportProgress safely calls the progress callback if set
+func (e *Engine) reportProgress(current, total int64, description string) {
+	if e.progressCallback != nil {
+		e.progressCallback(current, total, description)
+	}
+}
+
+// reportDatabaseProgress safely calls the database progress callback if set
+func (e *Engine) reportDatabaseProgress(done, total int, dbName string) {
+	if e.dbProgressCallback != nil {
+		e.dbProgressCallback(done, total, dbName)
+	}
+}
+
 // loggerAdapter adapts our logger to the progress.Logger interface
 type loggerAdapter struct {
 	logger logger.Logger
@@ -223,7 +266,18 @@ func (e *Engine) restorePostgreSQLDump(ctx context.Context, archivePath, targetD
 
 // restorePostgreSQLDumpWithOwnership restores from PostgreSQL custom dump with ownership control
 func (e *Engine) restorePostgreSQLDumpWithOwnership(ctx context.Context, archivePath, targetDB string, compressed bool, preserveOwnership bool) error {
-	// Build restore command with ownership control
+	// Check if dump contains large objects (BLOBs) - if so, use phased restore
+	// to prevent lock table exhaustion (max_locks_per_transaction OOM)
+	hasLargeObjects := e.checkDumpHasLargeObjects(archivePath)
+
+	if hasLargeObjects {
+		e.log.Info("Large objects detected - using phased restore to prevent lock exhaustion",
+			"database", targetDB,
+			"archive", archivePath)
+		return e.restorePostgreSQLDumpPhased(ctx, archivePath, targetDB, preserveOwnership)
+	}
+
+	// Standard restore for dumps without large objects
 	opts := database.RestoreOptions{
 		Parallel: 1,
 		Clean:    false, // We already dropped the database
@@ -249,6 +303,113 @@ func (e *Engine) restorePostgreSQLDumpWithOwnership(ctx context.Context, archive
 	return e.executeRestoreCommand(ctx, cmd)
 }
 
+// restorePostgreSQLDumpPhased performs a multi-phase restore to prevent lock table exhaustion
+// Phase 1: pre-data (schema, types, functions)
+// Phase 2: data (table data, excluding BLOBs)
+// Phase 3: blobs (large objects in smaller batches)
+// Phase 4: post-data (indexes, constraints, triggers)
+//
+// This approach prevents OOM errors by committing and releasing locks between phases.
+func (e *Engine) restorePostgreSQLDumpPhased(ctx context.Context, archivePath, targetDB string, preserveOwnership bool) error {
+	e.log.Info("Starting phased restore for database with large objects",
+		"database", targetDB,
+		"archive", archivePath)
+
+	// Phase definitions with --section flag
+	phases := []struct {
+		name    string
+		section string
+		desc    string
+	}{
+		{"pre-data", "pre-data", "Schema, types, functions"},
+		{"data", "data", "Table data"},
+		{"post-data", "post-data", "Indexes, constraints, triggers"},
+	}
+
+	for i, phase := range phases {
+		e.log.Info(fmt.Sprintf("Phase %d/%d: Restoring %s", i+1, len(phases), phase.name),
+			"database", targetDB,
+			"section", phase.section,
+			"description", phase.desc)
+
+		if err := e.restoreSection(ctx, archivePath, targetDB, phase.section, preserveOwnership); err != nil {
+			// Check if it's an ignorable error
+			if e.isIgnorableError(err.Error()) {
+				e.log.Warn(fmt.Sprintf("Phase %d completed with ignorable errors", i+1),
+					"section", phase.section,
+					"error", err)
+				continue
+			}
+			return fmt.Errorf("phase %d (%s) failed: %w", i+1, phase.name, err)
+		}
+
+		e.log.Info(fmt.Sprintf("Phase %d/%d completed successfully", i+1, len(phases)),
+			"section", phase.section)
+	}
+
+	e.log.Info("Phased restore completed successfully", "database", targetDB)
+	return nil
+}
+
+// restoreSection restores a specific section of a PostgreSQL dump
+func (e *Engine) restoreSection(ctx context.Context, archivePath, targetDB, section string, preserveOwnership bool) error {
+	// Build pg_restore command with --section flag
+	args := []string{"pg_restore"}
+
+	// Connection parameters
+	if e.cfg.Host != "localhost" {
+		args = append(args, "-h", e.cfg.Host)
+		args = append(args, "-p", fmt.Sprintf("%d", e.cfg.Port))
+		args = append(args, "--no-password")
+	}
+	args = append(args, "-U", e.cfg.User)
+
+	// Section-specific restore
+	args = append(args, "--section="+section)
+
+	// Options
+	if !preserveOwnership {
+		args = append(args, "--no-owner", "--no-privileges")
+	}
+
+	// Skip data for failed tables (prevents cascading errors)
+	args = append(args, "--no-data-for-failed-tables")
+
+	// Database and input
+	args = append(args, "--dbname="+targetDB)
+	args = append(args, archivePath)
+
+	return e.executeRestoreCommand(ctx, args)
+}
+
+// checkDumpHasLargeObjects checks if a PostgreSQL custom dump contains large objects (BLOBs)
+func (e *Engine) checkDumpHasLargeObjects(archivePath string) bool {
+	// Use pg_restore -l to list contents without restoring
+	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+	defer cancel()
+
+	cmd := exec.CommandContext(ctx, "pg_restore", "-l", archivePath)
+	output, err := cmd.Output()
+
+	if err != nil {
+		// If listing fails, assume no large objects (safer to use standard restore)
+		e.log.Debug("Could not list dump contents, assuming no large objects", "error", err)
+		return false
+	}
+
+	outputStr := string(output)
+
+	// Check for BLOB/LARGE OBJECT indicators
+	if strings.Contains(outputStr, "BLOB") ||
+		strings.Contains(outputStr, "LARGE OBJECT") ||
+		strings.Contains(outputStr, " BLOBS ") ||
+		strings.Contains(outputStr, "lo_create") {
+		return true
+	}
+
+	return false
+}
+
 // restorePostgreSQLSQL restores from PostgreSQL SQL script
 func (e *Engine) restorePostgreSQLSQL(ctx context.Context, archivePath, targetDB string, compressed bool) error {
 	// Pre-validate SQL dump to detect truncation BEFORE attempting restore
@@ -807,7 +968,40 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
 	}
 	e.log.Info("All dump files passed validation")
 
-	var failedDBs []string
+	// Run comprehensive preflight checks (Linux system + PostgreSQL + Archive analysis)
+	preflight, preflightErr := e.RunPreflightChecks(ctx, dumpsDir, entries)
+	if preflightErr != nil {
+		e.log.Warn("Preflight checks failed", "error", preflightErr)
+	}
+
+	// Calculate optimal lock boost based on BLOB count
+	lockBoostValue := 2048 // Default
+	if preflight != nil && preflight.Archive.RecommendedLockBoost > 0 {
+		lockBoostValue = preflight.Archive.RecommendedLockBoost
+	}
+
+	// AUTO-TUNE: Boost PostgreSQL settings for large restores
+	e.progress.Update("Tuning PostgreSQL for large restore...")
+	originalSettings, tuneErr := e.boostPostgreSQLSettings(ctx, lockBoostValue)
+	if tuneErr != nil {
+		e.log.Warn("Could not boost PostgreSQL settings - restore may fail on BLOB-heavy databases",
+			"error", tuneErr)
+	} else {
+		e.log.Info("Boosted PostgreSQL settings for restore",
+			"max_locks_per_transaction", fmt.Sprintf("%d → %d", originalSettings.MaxLocks, lockBoostValue),
+			"maintenance_work_mem", fmt.Sprintf("%s → 2GB", originalSettings.MaintenanceWorkMem))
+		// Ensure we reset settings when done (even on failure)
+		defer func() {
+			if resetErr := e.resetPostgreSQLSettings(ctx, originalSettings); resetErr != nil {
+				e.log.Warn("Could not reset PostgreSQL settings", "error", resetErr)
+			} else {
+				e.log.Info("Reset PostgreSQL settings to original values")
+			}
+		}()
+	}
+
+	var restoreErrors *multierror.Error
+	var restoreErrorsMu sync.Mutex
 	totalDBs := 0
 
 	// Count total databases
@@ -841,7 +1035,6 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
|
|||||||
}
|
}
|
||||||
|
|
||||||
var successCount, failCount int32
|
var successCount, failCount int32
|
||||||
var failedDBsMu sync.Mutex
|
|
||||||
var mu sync.Mutex // Protect shared resources (progress, logger)
|
var mu sync.Mutex // Protect shared resources (progress, logger)
|
||||||
|
|
||||||
// Create semaphore to limit concurrency
|
// Create semaphore to limit concurrency
|
||||||
@@ -885,6 +1078,8 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
|
|||||||
statusMsg := fmt.Sprintf("Restoring database %s (%d/%d)", dbName, idx+1, totalDBs)
|
statusMsg := fmt.Sprintf("Restoring database %s (%d/%d)", dbName, idx+1, totalDBs)
|
||||||
e.progress.Update(statusMsg)
|
e.progress.Update(statusMsg)
|
||||||
e.log.Info("Restoring database", "name", dbName, "file", dumpFile, "progress", dbProgress)
|
e.log.Info("Restoring database", "name", dbName, "file", dumpFile, "progress", dbProgress)
|
||||||
|
// Report database progress for TUI
|
||||||
|
e.reportDatabaseProgress(idx, totalDBs, dbName)
|
||||||
mu.Unlock()
|
mu.Unlock()
|
||||||
|
|
||||||
// STEP 1: Drop existing database completely (clean slate)
|
// STEP 1: Drop existing database completely (clean slate)
|
||||||
@@ -896,9 +1091,9 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
|
|||||||
// STEP 2: Create fresh database
|
// STEP 2: Create fresh database
|
||||||
if err := e.ensureDatabaseExists(ctx, dbName); err != nil {
|
if err := e.ensureDatabaseExists(ctx, dbName); err != nil {
|
||||||
e.log.Error("Failed to create database", "name", dbName, "error", err)
|
e.log.Error("Failed to create database", "name", dbName, "error", err)
|
||||||
failedDBsMu.Lock()
|
restoreErrorsMu.Lock()
|
||||||
failedDBs = append(failedDBs, fmt.Sprintf("%s: failed to create database: %v", dbName, err))
|
restoreErrors = multierror.Append(restoreErrors, fmt.Errorf("%s: failed to create database: %w", dbName, err))
|
||||||
failedDBsMu.Unlock()
|
restoreErrorsMu.Unlock()
|
||||||
atomic.AddInt32(&failCount, 1)
|
atomic.AddInt32(&failCount, 1)
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
@@ -941,10 +1136,10 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
|
|||||||
mu.Unlock()
|
mu.Unlock()
|
||||||
}
|
}
|
||||||
|
|
||||||
failedDBsMu.Lock()
|
restoreErrorsMu.Lock()
|
||||||
// Include more context in the error message
|
// Include more context in the error message
|
||||||
failedDBs = append(failedDBs, fmt.Sprintf("%s: restore failed: %v", dbName, restoreErr))
|
restoreErrors = multierror.Append(restoreErrors, fmt.Errorf("%s: restore failed: %w", dbName, restoreErr))
|
||||||
failedDBsMu.Unlock()
|
restoreErrorsMu.Unlock()
|
||||||
atomic.AddInt32(&failCount, 1)
|
atomic.AddInt32(&failCount, 1)
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
@@ -962,7 +1157,17 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
|
|||||||
failCountFinal := int(atomic.LoadInt32(&failCount))
|
failCountFinal := int(atomic.LoadInt32(&failCount))
|
||||||
|
|
||||||
if failCountFinal > 0 {
|
if failCountFinal > 0 {
|
||||||
failedList := strings.Join(failedDBs, "\n ")
|
// Format multi-error with detailed output
|
||||||
|
restoreErrors.ErrorFormat = func(errs []error) string {
|
||||||
|
if len(errs) == 1 {
|
||||||
|
return errs[0].Error()
|
||||||
|
}
|
||||||
|
points := make([]string, len(errs))
|
||||||
|
for i, err := range errs {
|
||||||
|
points[i] = fmt.Sprintf(" • %s", err.Error())
|
||||||
|
}
|
||||||
|
return fmt.Sprintf("%d database(s) failed:\n%s", len(errs), strings.Join(points, "\n"))
|
||||||
|
}
|
||||||
|
|
||||||
// Log summary
|
// Log summary
|
||||||
e.log.Info("Cluster restore completed with failures",
|
e.log.Info("Cluster restore completed with failures",
|
||||||
@@ -973,7 +1178,7 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
|
|||||||
e.progress.Fail(fmt.Sprintf("Cluster restore: %d succeeded, %d failed out of %d total", successCountFinal, failCountFinal, totalDBs))
|
e.progress.Fail(fmt.Sprintf("Cluster restore: %d succeeded, %d failed out of %d total", successCountFinal, failCountFinal, totalDBs))
|
||||||
operation.Complete(fmt.Sprintf("Partial restore: %d/%d databases succeeded", successCountFinal, totalDBs))
|
operation.Complete(fmt.Sprintf("Partial restore: %d/%d databases succeeded", successCountFinal, totalDBs))
|
||||||
|
|
||||||
return fmt.Errorf("cluster restore completed with %d failures:\n %s", failCountFinal, failedList)
|
return fmt.Errorf("cluster restore completed with %d failures:\n%s", failCountFinal, restoreErrors.Error())
|
||||||
}
|
}
|
||||||
|
|
||||||
e.progress.Complete(fmt.Sprintf("Cluster restored successfully: %d databases", successCountFinal))
|
e.progress.Complete(fmt.Sprintf("Cluster restored successfully: %d databases", successCountFinal))
|
||||||
@@ -981,8 +1186,144 @@ func (e *Engine) RestoreCluster(ctx context.Context, archivePath string) error {
	return nil
}

// extractArchive extracts a tar.gz archive with progress reporting
func (e *Engine) extractArchive(ctx context.Context, archivePath, destDir string) error {
	// If progress callback is set, use Go's archive/tar for progress tracking
	if e.progressCallback != nil {
		return e.extractArchiveWithProgress(ctx, archivePath, destDir)
	}

	// Otherwise use fast shell tar (no progress)
	return e.extractArchiveShell(ctx, archivePath, destDir)
}

// extractArchiveWithProgress extracts using Go's archive/tar with detailed progress reporting
func (e *Engine) extractArchiveWithProgress(ctx context.Context, archivePath, destDir string) error {
	// Get archive size for progress calculation
	archiveInfo, err := os.Stat(archivePath)
	if err != nil {
		return fmt.Errorf("failed to stat archive: %w", err)
	}
	totalSize := archiveInfo.Size()

	// Open the archive file
	file, err := os.Open(archivePath)
	if err != nil {
		return fmt.Errorf("failed to open archive: %w", err)
	}
	defer file.Close()

	// Wrap with progress reader
	progressReader := &progressReader{
		reader:    file,
		totalSize: totalSize,
		callback:  e.progressCallback,
		desc:      "Extracting archive",
	}

	// Create gzip reader
	gzReader, err := gzip.NewReader(progressReader)
	if err != nil {
		return fmt.Errorf("failed to create gzip reader: %w", err)
	}
	defer gzReader.Close()

	// Create tar reader
	tarReader := tar.NewReader(gzReader)

	// Extract files
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		default:
		}

		header, err := tarReader.Next()
		if err == io.EOF {
			break // End of archive
		}
		if err != nil {
			return fmt.Errorf("failed to read tar header: %w", err)
		}

		// Sanitize and validate path
		targetPath := filepath.Join(destDir, header.Name)

		// Security check: ensure path is within destDir (prevent path traversal)
		if !strings.HasPrefix(filepath.Clean(targetPath), filepath.Clean(destDir)) {
			e.log.Warn("Skipping potentially malicious path in archive", "path", header.Name)
			continue
		}

		switch header.Typeflag {
		case tar.TypeDir:
			if err := os.MkdirAll(targetPath, 0755); err != nil {
				return fmt.Errorf("failed to create directory %s: %w", targetPath, err)
			}
		case tar.TypeReg:
			// Ensure parent directory exists
			if err := os.MkdirAll(filepath.Dir(targetPath), 0755); err != nil {
				return fmt.Errorf("failed to create parent directory: %w", err)
			}

			// Create the file
			outFile, err := os.OpenFile(targetPath, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, os.FileMode(header.Mode))
			if err != nil {
				return fmt.Errorf("failed to create file %s: %w", targetPath, err)
			}

			// Copy file contents
			if _, err := io.Copy(outFile, tarReader); err != nil {
				outFile.Close()
				return fmt.Errorf("failed to write file %s: %w", targetPath, err)
			}
			outFile.Close()
		case tar.TypeSymlink:
			// Handle symlinks (common in some archives)
			if err := os.Symlink(header.Linkname, targetPath); err != nil {
				// Ignore symlink errors (may already exist or not supported)
				e.log.Debug("Could not create symlink", "path", targetPath, "target", header.Linkname)
			}
		}
	}

	// Final progress update
	e.reportProgress(totalSize, totalSize, "Extraction complete")
	return nil
}

// progressReader wraps an io.Reader to report read progress
type progressReader struct {
	reader      io.Reader
	totalSize   int64
	bytesRead   int64
	callback    ProgressCallback
	desc        string
	lastReport  time.Time
	reportEvery time.Duration
}

func (pr *progressReader) Read(p []byte) (n int, err error) {
	n, err = pr.reader.Read(p)
	pr.bytesRead += int64(n)

	// Throttle progress reporting to every 100ms
	if pr.reportEvery == 0 {
		pr.reportEvery = 100 * time.Millisecond
	}
	if time.Since(pr.lastReport) > pr.reportEvery {
		if pr.callback != nil {
			pr.callback(pr.bytesRead, pr.totalSize, pr.desc)
		}
		pr.lastReport = time.Now()
	}

	return n, err
}

// extractArchiveShell extracts using shell tar command (faster but no progress)
func (e *Engine) extractArchiveShell(ctx context.Context, archivePath, destDir string) error {
	cmd := exec.CommandContext(ctx, "tar", "-xzf", archivePath, "-C", destDir)

	// Stream stderr to avoid memory issues - tar can produce lots of output for large archives
@@ -1499,3 +1840,173 @@ func (e *Engine) quickValidateSQLDump(archivePath string, compressed bool) error {
	e.log.Debug("SQL dump validation passed", "path", archivePath)
	return nil
}

// boostLockCapacity temporarily increases max_locks_per_transaction to prevent OOM
// during large restores with many BLOBs. Returns the original value for later reset.
// Uses ALTER SYSTEM + pg_reload_conf() so no restart is needed.
func (e *Engine) boostLockCapacity(ctx context.Context) (int, error) {
	// Connect to PostgreSQL to run system commands
	connStr := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=postgres sslmode=disable",
		e.cfg.Host, e.cfg.Port, e.cfg.User, e.cfg.Password)

	// For localhost, use Unix socket
	if e.cfg.Host == "localhost" || e.cfg.Host == "" {
		connStr = fmt.Sprintf("user=%s password=%s dbname=postgres sslmode=disable",
			e.cfg.User, e.cfg.Password)
	}

	db, err := sql.Open("pgx", connStr)
	if err != nil {
		return 0, fmt.Errorf("failed to connect: %w", err)
	}
	defer db.Close()

	// Get current value
	var currentValue int
	err = db.QueryRowContext(ctx, "SHOW max_locks_per_transaction").Scan(&currentValue)
	if err != nil {
		// Try parsing as string (some versions return string)
		var currentValueStr string
		err = db.QueryRowContext(ctx, "SHOW max_locks_per_transaction").Scan(&currentValueStr)
		if err != nil {
			return 0, fmt.Errorf("failed to get current max_locks_per_transaction: %w", err)
		}
		fmt.Sscanf(currentValueStr, "%d", &currentValue)
	}

	// Skip if already high enough
	if currentValue >= 2048 {
		e.log.Info("max_locks_per_transaction already sufficient", "value", currentValue)
		return currentValue, nil
	}

	// Boost to 2048 (enough for most BLOB-heavy databases)
	_, err = db.ExecContext(ctx, "ALTER SYSTEM SET max_locks_per_transaction = 2048")
	if err != nil {
		return currentValue, fmt.Errorf("failed to set max_locks_per_transaction: %w", err)
	}

	// Reload config without restart
	_, err = db.ExecContext(ctx, "SELECT pg_reload_conf()")
	if err != nil {
		return currentValue, fmt.Errorf("failed to reload config: %w", err)
	}

	return currentValue, nil
}

// resetLockCapacity restores the original max_locks_per_transaction value
func (e *Engine) resetLockCapacity(ctx context.Context, originalValue int) error {
	connStr := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=postgres sslmode=disable",
		e.cfg.Host, e.cfg.Port, e.cfg.User, e.cfg.Password)

	if e.cfg.Host == "localhost" || e.cfg.Host == "" {
		connStr = fmt.Sprintf("user=%s password=%s dbname=postgres sslmode=disable",
			e.cfg.User, e.cfg.Password)
	}

	db, err := sql.Open("pgx", connStr)
	if err != nil {
		return fmt.Errorf("failed to connect: %w", err)
	}
	defer db.Close()

	// Reset to original value (or use RESET to go back to default)
	if originalValue == 64 { // Default value
		_, err = db.ExecContext(ctx, "ALTER SYSTEM RESET max_locks_per_transaction")
	} else {
		_, err = db.ExecContext(ctx, fmt.Sprintf("ALTER SYSTEM SET max_locks_per_transaction = %d", originalValue))
	}
	if err != nil {
		return fmt.Errorf("failed to reset max_locks_per_transaction: %w", err)
	}

	// Reload config
	_, err = db.ExecContext(ctx, "SELECT pg_reload_conf()")
	if err != nil {
		return fmt.Errorf("failed to reload config: %w", err)
	}

	return nil
}

// OriginalSettings stores PostgreSQL settings to restore after operation
type OriginalSettings struct {
	MaxLocks           int
	MaintenanceWorkMem string
}

// boostPostgreSQLSettings boosts multiple PostgreSQL settings for large restores
func (e *Engine) boostPostgreSQLSettings(ctx context.Context, lockBoostValue int) (*OriginalSettings, error) {
	connStr := e.buildConnString()
	db, err := sql.Open("pgx", connStr)
	if err != nil {
		return nil, fmt.Errorf("failed to connect: %w", err)
	}
	defer db.Close()

	original := &OriginalSettings{}

	// Get current max_locks_per_transaction
	var maxLocksStr string
	if err := db.QueryRowContext(ctx, "SHOW max_locks_per_transaction").Scan(&maxLocksStr); err == nil {
		original.MaxLocks, _ = strconv.Atoi(maxLocksStr)
	}

	// Get current maintenance_work_mem
	db.QueryRowContext(ctx, "SHOW maintenance_work_mem").Scan(&original.MaintenanceWorkMem)

	// Boost max_locks_per_transaction (if not already high enough)
	if original.MaxLocks < lockBoostValue {
		_, err = db.ExecContext(ctx, fmt.Sprintf("ALTER SYSTEM SET max_locks_per_transaction = %d", lockBoostValue))
		if err != nil {
			e.log.Warn("Could not boost max_locks_per_transaction", "error", err)
		}
	}

	// Boost maintenance_work_mem to 2GB for faster index creation
	_, err = db.ExecContext(ctx, "ALTER SYSTEM SET maintenance_work_mem = '2GB'")
	if err != nil {
		e.log.Warn("Could not boost maintenance_work_mem", "error", err)
	}

	// Reload config to apply changes (no restart needed for these settings)
	_, err = db.ExecContext(ctx, "SELECT pg_reload_conf()")
	if err != nil {
		return original, fmt.Errorf("failed to reload config: %w", err)
	}

	return original, nil
}

// resetPostgreSQLSettings restores original PostgreSQL settings
func (e *Engine) resetPostgreSQLSettings(ctx context.Context, original *OriginalSettings) error {
	connStr := e.buildConnString()
	db, err := sql.Open("pgx", connStr)
	if err != nil {
		return fmt.Errorf("failed to connect: %w", err)
	}
	defer db.Close()

	// Reset max_locks_per_transaction
	if original.MaxLocks == 64 { // Default
		db.ExecContext(ctx, "ALTER SYSTEM RESET max_locks_per_transaction")
	} else if original.MaxLocks > 0 {
		db.ExecContext(ctx, fmt.Sprintf("ALTER SYSTEM SET max_locks_per_transaction = %d", original.MaxLocks))
	}

	// Reset maintenance_work_mem
	if original.MaintenanceWorkMem == "64MB" { // Default
		db.ExecContext(ctx, "ALTER SYSTEM RESET maintenance_work_mem")
	} else if original.MaintenanceWorkMem != "" {
		db.ExecContext(ctx, fmt.Sprintf("ALTER SYSTEM SET maintenance_work_mem = '%s'", original.MaintenanceWorkMem))
	}

	// Reload config
	_, err = db.ExecContext(ctx, "SELECT pg_reload_conf()")
	if err != nil {
		return fmt.Errorf("failed to reload config: %w", err)
	}

	return nil
}
429	internal/restore/preflight.go	Normal file
@@ -0,0 +1,429 @@
package restore
|
||||||
|
|
||||||
|
import (
|
||||||
|
"context"
|
||||||
|
"database/sql"
|
||||||
|
"fmt"
|
||||||
|
"os"
|
||||||
|
"os/exec"
|
||||||
|
"path/filepath"
|
||||||
|
"runtime"
|
||||||
|
"strconv"
|
||||||
|
"strings"
|
||||||
|
"time"
|
||||||
|
|
||||||
|
"github.com/dustin/go-humanize"
|
||||||
|
"github.com/shirou/gopsutil/v3/mem"
|
||||||
|
)
|
||||||
|
|
||||||
|
// PreflightResult contains all preflight check results
|
||||||
|
type PreflightResult struct {
|
||||||
|
// Linux system checks
|
||||||
|
Linux LinuxChecks
|
||||||
|
|
||||||
|
// PostgreSQL checks
|
||||||
|
PostgreSQL PostgreSQLChecks
|
||||||
|
|
||||||
|
// Archive analysis
|
||||||
|
Archive ArchiveChecks
|
||||||
|
|
||||||
|
// Overall status
|
||||||
|
CanProceed bool
|
||||||
|
Warnings []string
|
||||||
|
Errors []string
|
||||||
|
}
|
||||||
|
|
||||||
|
// LinuxChecks contains Linux kernel/system checks
|
||||||
|
type LinuxChecks struct {
|
||||||
|
ShmMax int64 // /proc/sys/kernel/shmmax
|
||||||
|
ShmAll int64 // /proc/sys/kernel/shmall
|
||||||
|
MemTotal uint64 // Total RAM in bytes
|
||||||
|
MemAvailable uint64 // Available RAM in bytes
|
||||||
|
MemUsedPercent float64 // Memory usage percentage
|
||||||
|
ShmMaxOK bool // Is shmmax sufficient?
|
||||||
|
ShmAllOK bool // Is shmall sufficient?
|
||||||
|
MemAvailableOK bool // Is available RAM sufficient?
|
||||||
|
IsLinux bool // Are we running on Linux?
|
||||||
|
}
|
||||||
|
|
||||||
|
// PostgreSQLChecks contains PostgreSQL configuration checks
|
||||||
|
type PostgreSQLChecks struct {
|
||||||
|
MaxLocksPerTransaction int // Current setting
|
||||||
|
MaintenanceWorkMem string // Current setting
|
||||||
|
SharedBuffers string // Current setting (info only)
|
||||||
|
MaxConnections int // Current setting
|
||||||
|
Version string // PostgreSQL version
|
||||||
|
IsSuperuser bool // Can we modify settings?
|
||||||
|
}
|
||||||
|
|
||||||
|
// ArchiveChecks contains analysis of the backup archive
|
||||||
|
type ArchiveChecks struct {
|
||||||
|
TotalDatabases int
|
||||||
|
TotalBlobCount int // Estimated total BLOBs across all databases
|
||||||
|
BlobsByDB map[string]int // BLOBs per database
|
||||||
|
HasLargeBlobs bool // Any DB with >1000 BLOBs?
|
||||||
|
RecommendedLockBoost int // Calculated lock boost value
|
||||||
|
}
|
||||||
|
|
||||||
|
// RunPreflightChecks performs all preflight checks before a cluster restore
|
||||||
|
func (e *Engine) RunPreflightChecks(ctx context.Context, dumpsDir string, entries []os.DirEntry) (*PreflightResult, error) {
|
||||||
|
result := &PreflightResult{
|
||||||
|
CanProceed: true,
|
||||||
|
Archive: ArchiveChecks{
|
||||||
|
BlobsByDB: make(map[string]int),
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
e.progress.Update("[PREFLIGHT] Running system checks...")
|
||||||
|
e.log.Info("Starting preflight checks for cluster restore")
|
||||||
|
|
||||||
|
// 1. System checks (cross-platform via gopsutil)
|
||||||
|
e.checkSystemResources(result)
|
||||||
|
|
||||||
|
// 2. PostgreSQL checks (via existing connection)
|
||||||
|
e.checkPostgreSQL(ctx, result)
|
||||||
|
|
||||||
|
// 3. Archive analysis (count BLOBs to scale lock boost)
|
||||||
|
e.analyzeArchive(ctx, dumpsDir, entries, result)
|
||||||
|
|
||||||
|
// 4. Calculate recommended settings
|
||||||
|
e.calculateRecommendations(result)
|
||||||
|
|
||||||
|
// 5. Print summary
|
||||||
|
e.printPreflightSummary(result)
|
||||||
|
|
||||||
|
return result, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// checkSystemResources uses gopsutil for cross-platform system checks
|
||||||
|
func (e *Engine) checkSystemResources(result *PreflightResult) {
|
||||||
|
result.Linux.IsLinux = runtime.GOOS == "linux"
|
||||||
|
|
||||||
|
// Get memory info (works on Linux, macOS, Windows, BSD)
|
||||||
|
if vmem, err := mem.VirtualMemory(); err == nil {
|
||||||
|
result.Linux.MemTotal = vmem.Total
|
||||||
|
result.Linux.MemAvailable = vmem.Available
|
||||||
|
result.Linux.MemUsedPercent = vmem.UsedPercent
|
||||||
|
|
||||||
|
// 4GB minimum available for large restores
|
||||||
|
result.Linux.MemAvailableOK = vmem.Available >= 4*1024*1024*1024
|
||||||
|
|
||||||
|
e.log.Info("System memory detected",
|
||||||
|
"total", humanize.Bytes(vmem.Total),
|
||||||
|
"available", humanize.Bytes(vmem.Available),
|
||||||
|
"used_percent", fmt.Sprintf("%.1f%%", vmem.UsedPercent))
|
||||||
|
} else {
|
||||||
|
e.log.Warn("Could not detect system memory", "error", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Linux-specific kernel checks (shmmax, shmall)
|
||||||
|
if result.Linux.IsLinux {
|
||||||
|
e.checkLinuxKernel(result)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Add warnings for insufficient resources
|
||||||
|
if !result.Linux.MemAvailableOK && result.Linux.MemAvailable > 0 {
|
||||||
|
result.Warnings = append(result.Warnings,
|
||||||
|
fmt.Sprintf("Available RAM is low: %s (recommend 4GB+ for large restores)",
|
||||||
|
humanize.Bytes(result.Linux.MemAvailable)))
|
||||||
|
}
|
||||||
|
if result.Linux.MemUsedPercent > 85 {
|
||||||
|
result.Warnings = append(result.Warnings,
|
||||||
|
fmt.Sprintf("High memory usage: %.1f%% - restore may cause OOM", result.Linux.MemUsedPercent))
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// checkLinuxKernel reads Linux-specific kernel limits from /proc
|
||||||
|
func (e *Engine) checkLinuxKernel(result *PreflightResult) {
|
||||||
|
// Read shmmax
|
||||||
|
if data, err := os.ReadFile("/proc/sys/kernel/shmmax"); err == nil {
|
||||||
|
val, _ := strconv.ParseInt(strings.TrimSpace(string(data)), 10, 64)
|
||||||
|
result.Linux.ShmMax = val
|
||||||
|
// 8GB minimum for large restores
|
||||||
|
result.Linux.ShmMaxOK = val >= 8*1024*1024*1024
|
||||||
|
}
|
||||||
|
|
||||||
|
// Read shmall (in pages, typically 4KB each)
|
||||||
|
if data, err := os.ReadFile("/proc/sys/kernel/shmall"); err == nil {
|
||||||
|
val, _ := strconv.ParseInt(strings.TrimSpace(string(data)), 10, 64)
|
||||||
|
result.Linux.ShmAll = val
|
||||||
|
// 2M pages = 8GB minimum
|
||||||
|
result.Linux.ShmAllOK = val >= 2*1024*1024
|
||||||
|
}
|
||||||
|
|
||||||
|
// Add kernel warnings
|
||||||
|
if !result.Linux.ShmMaxOK && result.Linux.ShmMax > 0 {
|
||||||
|
result.Warnings = append(result.Warnings,
|
||||||
|
fmt.Sprintf("Linux shmmax is low: %s (recommend 8GB+). Fix: sudo sysctl -w kernel.shmmax=17179869184",
|
||||||
|
humanize.Bytes(uint64(result.Linux.ShmMax))))
|
||||||
|
}
|
||||||
|
if !result.Linux.ShmAllOK && result.Linux.ShmAll > 0 {
|
||||||
|
result.Warnings = append(result.Warnings,
|
||||||
|
fmt.Sprintf("Linux shmall is low: %s pages (recommend 2M+). Fix: sudo sysctl -w kernel.shmall=4194304",
|
||||||
|
humanize.Comma(result.Linux.ShmAll)))
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// checkPostgreSQL checks PostgreSQL configuration via SQL
|
||||||
|
func (e *Engine) checkPostgreSQL(ctx context.Context, result *PreflightResult) {
|
||||||
|
connStr := e.buildConnString()
|
||||||
|
db, err := sql.Open("pgx", connStr)
|
||||||
|
if err != nil {
|
||||||
|
e.log.Warn("Could not connect to PostgreSQL for preflight checks", "error", err)
|
||||||
|
return
|
||||||
|
}
|
||||||
|
defer db.Close()
|
||||||
|
|
||||||
|
// Check max_locks_per_transaction
|
||||||
|
var maxLocks string
|
||||||
|
if err := db.QueryRowContext(ctx, "SHOW max_locks_per_transaction").Scan(&maxLocks); err == nil {
|
||||||
|
result.PostgreSQL.MaxLocksPerTransaction, _ = strconv.Atoi(maxLocks)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Check maintenance_work_mem
|
||||||
|
db.QueryRowContext(ctx, "SHOW maintenance_work_mem").Scan(&result.PostgreSQL.MaintenanceWorkMem)
|
||||||
|
|
||||||
|
// Check shared_buffers (info only, can't change without restart)
|
||||||
|
db.QueryRowContext(ctx, "SHOW shared_buffers").Scan(&result.PostgreSQL.SharedBuffers)
|
||||||
|
|
||||||
|
// Check max_connections
|
||||||
|
var maxConn string
|
||||||
|
if err := db.QueryRowContext(ctx, "SHOW max_connections").Scan(&maxConn); err == nil {
|
||||||
|
result.PostgreSQL.MaxConnections, _ = strconv.Atoi(maxConn)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Check version
|
||||||
|
db.QueryRowContext(ctx, "SHOW server_version").Scan(&result.PostgreSQL.Version)
|
||||||
|
|
||||||
|
// Check if superuser
|
||||||
|
var isSuperuser bool
|
||||||
|
if err := db.QueryRowContext(ctx, "SELECT current_setting('is_superuser') = 'on'").Scan(&isSuperuser); err == nil {
|
||||||
|
result.PostgreSQL.IsSuperuser = isSuperuser
|
||||||
|
}
|
||||||
|
|
||||||
|
// Add info/warnings
|
||||||
|
if result.PostgreSQL.MaxLocksPerTransaction < 256 {
|
||||||
|
e.log.Info("PostgreSQL max_locks_per_transaction is low - will auto-boost",
|
||||||
|
"current", result.PostgreSQL.MaxLocksPerTransaction)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Parse shared_buffers and warn if very low
|
||||||
|
sharedBuffersMB := parseMemoryToMB(result.PostgreSQL.SharedBuffers)
|
||||||
|
if sharedBuffersMB > 0 && sharedBuffersMB < 256 {
|
||||||
|
result.Warnings = append(result.Warnings,
|
||||||
|
fmt.Sprintf("PostgreSQL shared_buffers is low: %s (recommend 1GB+, requires restart)",
|
||||||
|
result.PostgreSQL.SharedBuffers))
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// analyzeArchive counts BLOBs in dump files to calculate optimal lock boost
|
||||||
|
func (e *Engine) analyzeArchive(ctx context.Context, dumpsDir string, entries []os.DirEntry, result *PreflightResult) {
|
||||||
|
e.progress.Update("[PREFLIGHT] Analyzing archive for large objects...")
|
||||||
|
|
||||||
|
for _, entry := range entries {
|
||||||
|
if entry.IsDir() {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
|
||||||
|
result.Archive.TotalDatabases++
|
||||||
|
dumpFile := filepath.Join(dumpsDir, entry.Name())
|
||||||
|
dbName := strings.TrimSuffix(entry.Name(), ".dump")
|
||||||
|
dbName = strings.TrimSuffix(dbName, ".sql.gz")
|
||||||
|
|
||||||
|
// For custom format dumps, use pg_restore -l to count BLOBs
|
||||||
|
if strings.HasSuffix(entry.Name(), ".dump") {
|
||||||
|
blobCount := e.countBlobsInDump(ctx, dumpFile)
|
||||||
|
if blobCount > 0 {
|
||||||
|
result.Archive.BlobsByDB[dbName] = blobCount
|
||||||
|
result.Archive.TotalBlobCount += blobCount
|
||||||
|
if blobCount > 1000 {
|
||||||
|
result.Archive.HasLargeBlobs = true
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// For SQL format, try to estimate from file content (sample check)
|
||||||
|
if strings.HasSuffix(entry.Name(), ".sql.gz") {
|
||||||
|
// Check for lo_create patterns in compressed SQL
|
||||||
|
blobCount := e.estimateBlobsInSQL(dumpFile)
|
||||||
|
if blobCount > 0 {
|
||||||
|
result.Archive.BlobsByDB[dbName] = blobCount
|
||||||
|
result.Archive.TotalBlobCount += blobCount
|
||||||
|
if blobCount > 1000 {
|
||||||
|
result.Archive.HasLargeBlobs = true
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// countBlobsInDump uses pg_restore -l to count BLOB entries
func (e *Engine) countBlobsInDump(ctx context.Context, dumpFile string) int {
	ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
	defer cancel()

	cmd := exec.CommandContext(ctx, "pg_restore", "-l", dumpFile)
	output, err := cmd.Output()
	if err != nil {
		return 0
	}

	// Count TOC lines containing BLOB/LARGE OBJECT
	count := 0
	for _, line := range strings.Split(string(output), "\n") {
		if strings.Contains(line, "BLOB") || strings.Contains(line, "LARGE OBJECT") {
			count++
		}
	}
	return count
}
// estimateBlobsInSQL samples compressed SQL for lo_create patterns
func (e *Engine) estimateBlobsInSQL(sqlFile string) int {
	// Use zgrep for efficient searching in gzipped files
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Count lo_create calls (each one creates a single large object)
	cmd := exec.CommandContext(ctx, "zgrep", "-c", "lo_create", sqlFile)
	output, err := cmd.Output()
	if err != nil {
		// Fall back to the SELECT lo_create pattern
		cmd2 := exec.CommandContext(ctx, "zgrep", "-c", "SELECT.*lo_create", sqlFile)
		output, err = cmd2.Output()
		if err != nil {
			return 0
		}
	}

	count, _ := strconv.Atoi(strings.TrimSpace(string(output)))
	return count
}
// calculateRecommendations determines optimal settings based on the analysis
func (e *Engine) calculateRecommendations(result *PreflightResult) {
	// Base lock boost
	lockBoost := 2048

	// Scale up based on BLOB count
	if result.Archive.TotalBlobCount > 5000 {
		lockBoost = 4096
	}
	if result.Archive.TotalBlobCount > 10000 {
		lockBoost = 8192
	}
	if result.Archive.TotalBlobCount > 50000 {
		lockBoost = 16384
	}

	// Cap at a reasonable maximum
	if lockBoost > 16384 {
		lockBoost = 16384
	}

	result.Archive.RecommendedLockBoost = lockBoost

	e.log.Info("Calculated recommended lock boost",
		"total_blobs", result.Archive.TotalBlobCount,
		"recommended_locks", lockBoost)
}

// printPreflightSummary prints a summary of all checks
func (e *Engine) printPreflightSummary(result *PreflightResult) {
	fmt.Println()
	fmt.Println(strings.Repeat("─", 60))
	fmt.Println(" PREFLIGHT CHECKS")
	fmt.Println(strings.Repeat("─", 60))

	// System checks (cross-platform)
	fmt.Println("\n System Resources:")
	printCheck("Total RAM", humanize.Bytes(result.Linux.MemTotal), true)
	printCheck("Available RAM", humanize.Bytes(result.Linux.MemAvailable), result.Linux.MemAvailableOK || result.Linux.MemAvailable == 0)
	printCheck("Memory Usage", fmt.Sprintf("%.1f%%", result.Linux.MemUsedPercent), result.Linux.MemUsedPercent < 85)

	// Linux-specific kernel checks
	if result.Linux.IsLinux && result.Linux.ShmMax > 0 {
		fmt.Println("\n Linux Kernel:")
		printCheck("shmmax", humanize.Bytes(uint64(result.Linux.ShmMax)), result.Linux.ShmMaxOK)
		printCheck("shmall", humanize.Comma(result.Linux.ShmAll)+" pages", result.Linux.ShmAllOK)
	}

	// PostgreSQL checks
	fmt.Println("\n PostgreSQL:")
	printCheck("Version", result.PostgreSQL.Version, true)
	printCheck("max_locks_per_transaction", fmt.Sprintf("%s → %s (auto-boost)",
		humanize.Comma(int64(result.PostgreSQL.MaxLocksPerTransaction)),
		humanize.Comma(int64(result.Archive.RecommendedLockBoost))),
		true)
	printCheck("maintenance_work_mem", fmt.Sprintf("%s → 2GB (auto-boost)",
		result.PostgreSQL.MaintenanceWorkMem), true)
	printInfo("shared_buffers", result.PostgreSQL.SharedBuffers)
	printCheck("Superuser", fmt.Sprintf("%v", result.PostgreSQL.IsSuperuser), result.PostgreSQL.IsSuperuser)

	// Archive analysis
	fmt.Println("\n Archive Analysis:")
	printInfo("Total databases", humanize.Comma(int64(result.Archive.TotalDatabases)))
	printInfo("Total BLOBs detected", humanize.Comma(int64(result.Archive.TotalBlobCount)))
	if len(result.Archive.BlobsByDB) > 0 {
		fmt.Println(" Databases with BLOBs:")
		for db, count := range result.Archive.BlobsByDB {
			status := "✓"
			if count > 1000 {
				// Plain assignment: the original `status := "⚠"` shadowed the
				// outer variable, so the warning marker was never printed
				status = "⚠"
			}
			fmt.Printf(" %s %s: %s BLOBs\n", status, db, humanize.Comma(int64(count)))
		}
	}

	// Warnings
	if len(result.Warnings) > 0 {
		fmt.Println("\n ⚠ Warnings:")
		for _, w := range result.Warnings {
			fmt.Printf(" • %s\n", w)
		}
	}

	fmt.Println(strings.Repeat("─", 60))
	fmt.Println()
}
func printCheck(name, value string, ok bool) {
	status := "✓"
	if !ok {
		status = "⚠"
	}
	fmt.Printf(" %s %s: %s\n", status, name, value)
}

func printInfo(name, value string) {
	fmt.Printf(" ℹ %s: %s\n", name, value)
}
func parseMemoryToMB(memStr string) int {
	memStr = strings.ToUpper(strings.TrimSpace(memStr))
	var value int
	var unit string
	fmt.Sscanf(memStr, "%d%s", &value, &unit)

	switch {
	case strings.HasPrefix(unit, "G"):
		return value * 1024
	case strings.HasPrefix(unit, "M"):
		return value
	case strings.HasPrefix(unit, "K"):
		return value / 1024
	default:
		return value / (1024 * 1024) // Assume bytes
	}
}

func (e *Engine) buildConnString() string {
	if e.cfg.Host == "localhost" || e.cfg.Host == "" {
		return fmt.Sprintf("user=%s password=%s dbname=postgres sslmode=disable",
			e.cfg.User, e.cfg.Password)
	}
	return fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=postgres sslmode=disable",
		e.cfg.Host, e.cfg.Port, e.cfg.User, e.cfg.Password)
}

@@ -229,8 +229,14 @@ func containsSQLKeywords(content string) bool {
 	}
 }
 
 // CheckDiskSpace verifies sufficient disk space for restore
+// Uses the effective work directory (WorkDir if set, otherwise BackupDir) since
+// that's where extraction actually happens for large databases
 func (s *Safety) CheckDiskSpace(archivePath string, multiplier float64) error {
-	return s.CheckDiskSpaceAt(archivePath, s.cfg.BackupDir, multiplier)
+	checkDir := s.cfg.GetEffectiveWorkDir()
+	if checkDir == "" {
+		checkDir = s.cfg.BackupDir
+	}
+	return s.CheckDiskSpaceAt(archivePath, checkDir, multiplier)
 }
 
 // CheckDiskSpaceAt verifies sufficient disk space at a specific directory
@@ -249,7 +255,9 @@ func (s *Safety) CheckDiskSpaceAt(archivePath string, checkDir string, multiplie
 	// Get available disk space
 	availableSpace, err := getDiskSpace(checkDir)
 	if err != nil {
+		if s.log != nil {
 			s.log.Warn("Cannot check disk space", "error", err)
+		}
 		return nil // Don't fail if we can't check
 	}
 
@@ -272,10 +280,12 @@ func (s *Safety) CheckDiskSpaceAt(archivePath string, checkDir string, multiplie
 		checkDir)
 	}
 
+	if s.log != nil {
 	s.log.Info("Disk space check passed",
 		"location", checkDir,
 		"required", FormatBytes(requiredSpace),
 		"available", FormatBytes(availableSpace))
+	}
 
 	return nil
 }
@@ -251,13 +251,13 @@ func (m ArchiveBrowserModel) View() string {
 	var s strings.Builder
 
 	// Header
-	title := "[PKG] Backup Archives"
+	title := "[SELECT] Backup Archives"
 	if m.mode == "restore-single" {
-		title = "[PKG] Select Archive to Restore (Single Database)"
+		title = "[SELECT] Select Archive to Restore (Single Database)"
 	} else if m.mode == "restore-cluster" {
-		title = "[PKG] Select Archive to Restore (Cluster)"
+		title = "[SELECT] Select Archive to Restore (Cluster)"
 	} else if m.mode == "diagnose" {
-		title = "[SEARCH] Select Archive to Diagnose"
+		title = "[SELECT] Select Archive to Diagnose"
 	}
 
 	s.WriteString(titleStyle.Render(title))
124
internal/tui/backup_exec.go
Executable file → Normal file
@@ -4,6 +4,7 @@ import (
 	"context"
 	"fmt"
 	"strings"
+	"sync"
 	"time"
 
 	tea "github.com/charmbracelet/bubbletea"
@@ -33,6 +34,56 @@
 	startTime    time.Time
 	details      []string
 	spinnerFrame int
+
+	// Database count progress (for cluster backup)
+	dbTotal int
+	dbDone  int
+	dbName  string // Current database being backed up
+}
+
+// sharedBackupProgressState holds progress state that can be safely accessed from callbacks
+type sharedBackupProgressState struct {
+	mu        sync.Mutex
+	dbTotal   int
+	dbDone    int
+	dbName    string
+	hasUpdate bool
+}
+
+// Package-level shared progress state for backup operations
+var (
+	currentBackupProgressMu    sync.Mutex
+	currentBackupProgressState *sharedBackupProgressState
+)
+
+func setCurrentBackupProgress(state *sharedBackupProgressState) {
+	currentBackupProgressMu.Lock()
+	defer currentBackupProgressMu.Unlock()
+	currentBackupProgressState = state
+}
+
+func clearCurrentBackupProgress() {
+	currentBackupProgressMu.Lock()
+	defer currentBackupProgressMu.Unlock()
+	currentBackupProgressState = nil
+}
+
+func getCurrentBackupProgress() (dbTotal, dbDone int, dbName string, hasUpdate bool) {
+	currentBackupProgressMu.Lock()
+	defer currentBackupProgressMu.Unlock()
+
+	if currentBackupProgressState == nil {
+		return 0, 0, "", false
+	}
+
+	currentBackupProgressState.mu.Lock()
+	defer currentBackupProgressState.mu.Unlock()
+
+	hasUpdate = currentBackupProgressState.hasUpdate
+	currentBackupProgressState.hasUpdate = false
+
+	return currentBackupProgressState.dbTotal, currentBackupProgressState.dbDone,
+		currentBackupProgressState.dbName, hasUpdate
 }
 
 func NewBackupExecution(cfg *config.Config, log logger.Logger, parent tea.Model, ctx context.Context, backupType, dbName string, ratio int) BackupExecutionModel {
@@ -55,7 +106,6 @@ func NewBackupExecution(cfg *config.Config, log logger.Logger, parent tea.Model,
 }
 
 func (m BackupExecutionModel) Init() tea.Cmd {
-	// TUI handles all display through View() - no progress callbacks needed
 	return tea.Batch(
 		executeBackupWithTUIProgress(m.ctx, m.config, m.logger, m.backupType, m.databaseName, m.ratio),
 		backupTickCmd(),
@@ -91,6 +141,11 @@ func executeBackupWithTUIProgress(parentCtx context.Context, cfg *config.Config,
 
 	start := time.Now()
 
+	// Set up shared progress state for TUI polling
+	progressState := &sharedBackupProgressState{}
+	setCurrentBackupProgress(progressState)
+	defer clearCurrentBackupProgress()
+
 	dbClient, err := database.New(cfg, log)
 	if err != nil {
 		return backupCompleteMsg{
@@ -110,6 +165,16 @@ func executeBackupWithTUIProgress(parentCtx context.Context, cfg *config.Config,
 	// Pass nil as indicator - TUI itself handles all display, no stdout printing
 	engine := backup.NewSilent(cfg, log, dbClient, nil)
 
+	// Set the database progress callback for cluster backups
+	engine.SetDatabaseProgressCallback(func(done, total int, currentDB string) {
+		progressState.mu.Lock()
+		progressState.dbDone = done
+		progressState.dbTotal = total
+		progressState.dbName = currentDB
+		progressState.hasUpdate = true
+		progressState.mu.Unlock()
+	})
+
 	var backupErr error
 	switch backupType {
 	case "single":
@@ -157,10 +222,21 @@ func (m BackupExecutionModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 	// Increment spinner frame for smooth animation
 	m.spinnerFrame = (m.spinnerFrame + 1) % len(spinnerFrames)
 
-	// Update status based on elapsed time to show progress
+	// Poll for database progress updates from callbacks
+	dbTotal, dbDone, dbName, hasUpdate := getCurrentBackupProgress()
+	if hasUpdate {
+		m.dbTotal = dbTotal
+		m.dbDone = dbDone
+		m.dbName = dbName
+	}
+
+	// Update status based on progress and elapsed time
 	elapsedSec := int(time.Since(m.startTime).Seconds())
 
-	if elapsedSec < 2 {
+	if m.dbTotal > 0 && m.dbDone > 0 {
+		// We have real progress from the cluster backup
+		m.status = fmt.Sprintf("Backing up database: %s", m.dbName)
+	} else if elapsedSec < 2 {
 		m.status = "Initializing backup..."
 	} else if elapsedSec < 5 {
 		if m.backupType == "cluster" {
@@ -234,6 +310,34 @@ func (m BackupExecutionModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 	return m, nil
 }
 
+// renderBackupDatabaseProgressBar renders a progress bar for database count progress
+func renderBackupDatabaseProgressBar(done, total int, dbName string, width int) string {
+	if total == 0 {
+		return ""
+	}
+
+	// Calculate the progress percentage
+	percent := float64(done) / float64(total)
+	if percent > 1.0 {
+		percent = 1.0
+	}
+
+	// Calculate the filled width
+	barWidth := width - 20 // Leave room for label and percentage
+	if barWidth < 10 {
+		barWidth = 10
+	}
+	filled := int(float64(barWidth) * percent)
+	if filled > barWidth {
+		filled = barWidth
+	}
+
+	// Build the progress bar
+	bar := strings.Repeat("█", filled) + strings.Repeat("░", barWidth-filled)
+
+	return fmt.Sprintf(" Database: [%s] %d/%d", bar, done, total)
+}
+
 func (m BackupExecutionModel) View() string {
 	var s strings.Builder
 	s.Grow(512) // Pre-allocate estimated capacity for better performance
@@ -255,12 +359,24 @@ func (m BackupExecutionModel) View() string {
 	s.WriteString(fmt.Sprintf(" %-10s %s\n", "Duration:", time.Since(m.startTime).Round(time.Second)))
 	s.WriteString("\n")
 
-	// Status with spinner
+	// Status display
 	if !m.done {
+		// Show the database progress bar instead of the spinner once we
+		// have real progress data (cluster backup)
+		if m.dbTotal > 0 && m.dbDone > 0 {
+			progressBar := renderBackupDatabaseProgressBar(m.dbDone, m.dbTotal, m.dbName, 50)
+			s.WriteString(progressBar + "\n")
+			s.WriteString(fmt.Sprintf(" %s\n", m.status))
+		} else {
+			// Show the spinner during initial phases
 			if m.cancelling {
 				s.WriteString(fmt.Sprintf(" %s %s\n", spinnerFrames[m.spinnerFrame], m.status))
 			} else {
 				s.WriteString(fmt.Sprintf(" %s %s\n", spinnerFrames[m.spinnerFrame], m.status))
 			}
+		}
+
+		if !m.cancelling {
 			s.WriteString("\n [KEY] Press Ctrl+C or ESC to cancel\n")
 		}
 	} else {
|
|||||||
var s strings.Builder
|
var s strings.Builder
|
||||||
|
|
||||||
// Title
|
// Title
|
||||||
s.WriteString(TitleStyle.Render("[DB] Backup Archive Manager"))
|
s.WriteString(TitleStyle.Render("[SELECT] Backup Archive Manager"))
|
||||||
s.WriteString("\n\n")
|
s.WriteString("\n\n")
|
||||||
|
|
||||||
// Status line (no box, bold+color accents)
|
// Status line (no box, bold+color accents)
|
||||||
|
|||||||
406
internal/tui/detailed_progress.go
Normal file
@@ -0,0 +1,406 @@
package tui

import (
	"fmt"
	"strings"
	"sync"
	"time"
)

// DetailedProgress provides schollz-like progress information for TUI rendering.
// It is a data structure that can be queried by Bubble Tea's View() method.
type DetailedProgress struct {
	mu sync.RWMutex

	// Core progress
	Total   int64 // Total bytes or items
	Current int64 // Current bytes or items done

	// Display info
	Description string // What operation is happening
	Unit        string // "bytes", "files", "databases", etc.

	// Timing for ETA/speed calculation
	StartTime   time.Time
	LastUpdate  time.Time
	SpeedWindow []speedSample // Rolling window for speed calculation

	// State
	IsIndeterminate bool // True if total is unknown (spinner mode)
	IsComplete      bool
	IsFailed        bool
	ErrorMessage    string
}

type speedSample struct {
	timestamp time.Time
	bytes     int64
}

// NewDetailedProgress creates a progress tracker with a known total
func NewDetailedProgress(total int64, description string) *DetailedProgress {
	return &DetailedProgress{
		Total:           total,
		Description:     description,
		Unit:            "bytes",
		StartTime:       time.Now(),
		LastUpdate:      time.Now(),
		SpeedWindow:     make([]speedSample, 0, 20),
		IsIndeterminate: total <= 0,
	}
}

// NewDetailedProgressItems creates a progress tracker for item counts
func NewDetailedProgressItems(total int, description string) *DetailedProgress {
	return &DetailedProgress{
		Total:           int64(total),
		Description:     description,
		Unit:            "items",
		StartTime:       time.Now(),
		LastUpdate:      time.Now(),
		SpeedWindow:     make([]speedSample, 0, 20),
		IsIndeterminate: total <= 0,
	}
}

// NewDetailedProgressSpinner creates an indeterminate progress tracker
func NewDetailedProgressSpinner(description string) *DetailedProgress {
	return &DetailedProgress{
		Total:           -1,
		Description:     description,
		Unit:            "",
		StartTime:       time.Now(),
		LastUpdate:      time.Now(),
		SpeedWindow:     make([]speedSample, 0, 20),
		IsIndeterminate: true,
	}
}

// Add adds to the current progress
func (dp *DetailedProgress) Add(n int64) {
	dp.mu.Lock()
	defer dp.mu.Unlock()

	dp.Current += n
	dp.LastUpdate = time.Now()

	// Add a speed sample
	dp.SpeedWindow = append(dp.SpeedWindow, speedSample{
		timestamp: dp.LastUpdate,
		bytes:     dp.Current,
	})

	// Keep only the last 20 samples for speed calculation
	if len(dp.SpeedWindow) > 20 {
		dp.SpeedWindow = dp.SpeedWindow[len(dp.SpeedWindow)-20:]
	}
}

// Set sets the current progress to a specific value
func (dp *DetailedProgress) Set(n int64) {
	dp.mu.Lock()
	defer dp.mu.Unlock()

	dp.Current = n
	dp.LastUpdate = time.Now()

	// Add a speed sample
	dp.SpeedWindow = append(dp.SpeedWindow, speedSample{
		timestamp: dp.LastUpdate,
		bytes:     dp.Current,
	})

	if len(dp.SpeedWindow) > 20 {
		dp.SpeedWindow = dp.SpeedWindow[len(dp.SpeedWindow)-20:]
	}
}

// SetTotal updates the total (useful when the total becomes known during the operation)
func (dp *DetailedProgress) SetTotal(total int64) {
	dp.mu.Lock()
	defer dp.mu.Unlock()

	dp.Total = total
	dp.IsIndeterminate = total <= 0
}

// SetDescription updates the description
func (dp *DetailedProgress) SetDescription(desc string) {
	dp.mu.Lock()
	defer dp.mu.Unlock()
	dp.Description = desc
}

// Complete marks the progress as complete
func (dp *DetailedProgress) Complete() {
	dp.mu.Lock()
	defer dp.mu.Unlock()

	dp.IsComplete = true
	dp.Current = dp.Total
}

// Fail marks the progress as failed
func (dp *DetailedProgress) Fail(errMsg string) {
	dp.mu.Lock()
	defer dp.mu.Unlock()

	dp.IsFailed = true
	dp.ErrorMessage = errMsg
}

// GetPercent returns the progress percentage (0-100)
func (dp *DetailedProgress) GetPercent() int {
	dp.mu.RLock()
	defer dp.mu.RUnlock()

	if dp.IsIndeterminate || dp.Total <= 0 {
		return 0
	}
	percent := int((dp.Current * 100) / dp.Total)
	if percent > 100 {
		return 100
	}
	return percent
}

// GetSpeed returns the current transfer speed in bytes/second
func (dp *DetailedProgress) GetSpeed() float64 {
	dp.mu.RLock()
	defer dp.mu.RUnlock()

	if len(dp.SpeedWindow) < 2 {
		return 0
	}

	// Use the first and last samples in the window for a smoothed speed
	first := dp.SpeedWindow[0]
	last := dp.SpeedWindow[len(dp.SpeedWindow)-1]

	elapsed := last.timestamp.Sub(first.timestamp).Seconds()
	if elapsed <= 0 {
		return 0
	}

	bytesTransferred := last.bytes - first.bytes
	return float64(bytesTransferred) / elapsed
}

// GetETA returns the estimated time remaining
func (dp *DetailedProgress) GetETA() time.Duration {
	dp.mu.RLock()
	defer dp.mu.RUnlock()

	if dp.IsIndeterminate || dp.Total <= 0 || dp.Current >= dp.Total {
		return 0
	}

	speed := dp.getSpeedLocked()
	if speed <= 0 {
		return 0
	}

	remaining := dp.Total - dp.Current
	seconds := float64(remaining) / speed
	return time.Duration(seconds) * time.Second
}

func (dp *DetailedProgress) getSpeedLocked() float64 {
	if len(dp.SpeedWindow) < 2 {
		return 0
	}

	first := dp.SpeedWindow[0]
	last := dp.SpeedWindow[len(dp.SpeedWindow)-1]

	elapsed := last.timestamp.Sub(first.timestamp).Seconds()
	if elapsed <= 0 {
		return 0
	}

	bytesTransferred := last.bytes - first.bytes
	return float64(bytesTransferred) / elapsed
}

// GetElapsed returns the elapsed time since start
func (dp *DetailedProgress) GetElapsed() time.Duration {
	dp.mu.RLock()
	defer dp.mu.RUnlock()
	return time.Since(dp.StartTime)
}

// GetState returns a snapshot of the current state for rendering
func (dp *DetailedProgress) GetState() DetailedProgressState {
	dp.mu.RLock()
	defer dp.mu.RUnlock()

	return DetailedProgressState{
		Description:     dp.Description,
		Current:         dp.Current,
		Total:           dp.Total,
		Percent:         dp.getPercentLocked(),
		Speed:           dp.getSpeedLocked(),
		ETA:             dp.getETALocked(),
		Elapsed:         time.Since(dp.StartTime),
		Unit:            dp.Unit,
		IsIndeterminate: dp.IsIndeterminate,
		IsComplete:      dp.IsComplete,
		IsFailed:        dp.IsFailed,
		ErrorMessage:    dp.ErrorMessage,
	}
}

func (dp *DetailedProgress) getPercentLocked() int {
	if dp.IsIndeterminate || dp.Total <= 0 {
		return 0
	}
	percent := int((dp.Current * 100) / dp.Total)
	if percent > 100 {
		return 100
	}
	return percent
}

func (dp *DetailedProgress) getETALocked() time.Duration {
	if dp.IsIndeterminate || dp.Total <= 0 || dp.Current >= dp.Total {
		return 0
	}

	speed := dp.getSpeedLocked()
	if speed <= 0 {
		return 0
	}

	remaining := dp.Total - dp.Current
	seconds := float64(remaining) / speed
	return time.Duration(seconds) * time.Second
}

// DetailedProgressState is an immutable snapshot for rendering
type DetailedProgressState struct {
	Description     string
	Current         int64
	Total           int64
	Percent         int
	Speed           float64 // bytes/sec
	ETA             time.Duration
	Elapsed         time.Duration
	Unit            string
	IsIndeterminate bool
	IsComplete      bool
	IsFailed        bool
	ErrorMessage    string
}

// RenderProgressBar renders a TUI-friendly progress bar string.
// Returns something like: "Extracting archive [████████░░░░░░░░░░░░] 45% 12.5 MB/s ETA: 2m 30s"
func (s DetailedProgressState) RenderProgressBar(width int) string {
	if s.IsIndeterminate {
		return s.renderIndeterminate()
	}

	// Progress bar
	barWidth := 30
	if width < 80 {
		barWidth = 20
	}
	filled := (s.Percent * barWidth) / 100
	if filled > barWidth {
		filled = barWidth
	}

	bar := strings.Repeat("█", filled) + strings.Repeat("░", barWidth-filled)

	// Format bytes
	currentStr := FormatBytes(s.Current)
	totalStr := FormatBytes(s.Total)

	// Format speed
	speedStr := ""
	if s.Speed > 0 {
		speedStr = fmt.Sprintf("%s/s", FormatBytes(int64(s.Speed)))
	}

	// Format ETA
	etaStr := ""
	if s.ETA > 0 && !s.IsComplete {
		etaStr = fmt.Sprintf("ETA: %s", FormatDurationShort(s.ETA))
	}

	// Build the line
	parts := []string{
		fmt.Sprintf("[%s]", bar),
		fmt.Sprintf("%3d%%", s.Percent),
	}

	if s.Unit == "bytes" && s.Total > 0 {
		parts = append(parts, fmt.Sprintf("%s/%s", currentStr, totalStr))
	} else if s.Total > 0 {
		parts = append(parts, fmt.Sprintf("%d/%d", s.Current, s.Total))
	}

	if speedStr != "" {
		parts = append(parts, speedStr)
	}
	if etaStr != "" {
		parts = append(parts, etaStr)
	}

	return strings.Join(parts, " ")
}

func (s DetailedProgressState) renderIndeterminate() string {
	elapsed := FormatDurationShort(s.Elapsed)
	return fmt.Sprintf("[spinner] %s Elapsed: %s", s.Description, elapsed)
}

// RenderCompact renders a compact single-line progress string
func (s DetailedProgressState) RenderCompact() string {
	if s.IsComplete {
		return fmt.Sprintf("[OK] %s completed in %s", s.Description, FormatDurationShort(s.Elapsed))
	}
	if s.IsFailed {
		return fmt.Sprintf("[FAIL] %s: %s", s.Description, s.ErrorMessage)
	}
	if s.IsIndeterminate {
		return fmt.Sprintf("[...] %s (%s)", s.Description, FormatDurationShort(s.Elapsed))
	}

	return fmt.Sprintf("[%3d%%] %s - %s/%s", s.Percent, s.Description,
		FormatBytes(s.Current), FormatBytes(s.Total))
}

// FormatBytes formats bytes in human-readable form
func FormatBytes(b int64) string {
	const unit = 1024
	if b < unit {
		return fmt.Sprintf("%d B", b)
	}
	div, exp := int64(unit), 0
	for n := b / unit; n >= unit; n /= unit {
		div *= unit
		exp++
	}
	return fmt.Sprintf("%.1f %cB", float64(b)/float64(div), "KMGTPE"[exp])
}
|
||||||
|
|
||||||
|
// FormatDurationShort formats duration in short form
|
||||||
|
func FormatDurationShort(d time.Duration) string {
|
||||||
|
if d < time.Second {
|
||||||
|
return "<1s"
|
||||||
|
}
|
||||||
|
if d < time.Minute {
|
||||||
|
return fmt.Sprintf("%ds", int(d.Seconds()))
|
||||||
|
}
|
||||||
|
if d < time.Hour {
|
||||||
|
m := int(d.Minutes())
|
||||||
|
s := int(d.Seconds()) % 60
|
||||||
|
if s > 0 {
|
||||||
|
return fmt.Sprintf("%dm %ds", m, s)
|
||||||
|
}
|
||||||
|
return fmt.Sprintf("%dm", m)
|
||||||
|
}
|
||||||
|
h := int(d.Hours())
|
||||||
|
m := int(d.Minutes()) % 60
|
||||||
|
return fmt.Sprintf("%dh %dm", h, m)
|
||||||
|
}
|
||||||
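The two helpers above are what produce strings like `245 MB / 1.2 GB` and `1m 30s` in the progress line. A minimal standalone sketch of the same 1024-based formatting (the `formatBytes` below is an illustrative copy, not the package's exported function):

```go
package main

import "fmt"

// formatBytes mirrors the FormatBytes helper above: binary (1024-based)
// units with one decimal place for KB and larger.
func formatBytes(b int64) string {
	const unit = 1024
	if b < unit {
		return fmt.Sprintf("%d B", b)
	}
	div, exp := int64(unit), 0
	for n := b / unit; n >= unit; n /= unit {
		div *= unit
		exp++
	}
	return fmt.Sprintf("%.1f %cB", float64(b)/float64(div), "KMGTPE"[exp])
}

func main() {
	fmt.Println(formatBytes(512))        // 512 B
	fmt.Println(formatBytes(1536))       // 1.5 KB
	fmt.Println(formatBytes(1250000000)) // 1.2 GB
}
```

Note the deliberate choice of binary units: 1536 bytes reads as `1.5 KB`, not `1.5 kB` (1000-based), which matches how the archive sizes are reported elsewhere in the tool.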
@@ -160,7 +160,7 @@ func (m DiagnoseViewModel) View() string {
 	var s strings.Builder
 
 	// Header
-	s.WriteString(titleStyle.Render("[SEARCH] Backup Diagnosis"))
+	s.WriteString(titleStyle.Render("[CHECK] Backup Diagnosis"))
 	s.WriteString("\n\n")
 
 	// Archive info
@@ -204,132 +204,111 @@ func (m DiagnoseViewModel) View() string {
 func (m DiagnoseViewModel) renderSingleResult(result *restore.DiagnoseResult) string {
 	var s strings.Builder
 
-	// Status Box
-	s.WriteString("+--[ VALIDATION STATUS ]" + strings.Repeat("-", 37) + "+\n")
+	// Validation Status
+	s.WriteString(diagnoseHeaderStyle.Render("[STATUS] Validation"))
+	s.WriteString("\n")
 
 	if result.IsValid {
-		s.WriteString("| " + diagnosePassStyle.Render("[OK] VALID - Archive passed all checks") + strings.Repeat(" ", 18) + "|\n")
+		s.WriteString(diagnosePassStyle.Render(" [OK] VALID - Archive passed all checks"))
+		s.WriteString("\n")
 	} else {
-		s.WriteString("| " + diagnoseFailStyle.Render("[FAIL] INVALID - Archive has problems") + strings.Repeat(" ", 19) + "|\n")
+		s.WriteString(diagnoseFailStyle.Render(" [FAIL] INVALID - Archive has problems"))
+		s.WriteString("\n")
 	}
 
 	if result.IsTruncated {
-		s.WriteString("| " + diagnoseFailStyle.Render("[!] TRUNCATED - File is incomplete") + strings.Repeat(" ", 22) + "|\n")
+		s.WriteString(diagnoseFailStyle.Render(" [!] TRUNCATED - File is incomplete"))
+		s.WriteString("\n")
 	}
 
 	if result.IsCorrupted {
-		s.WriteString("| " + diagnoseFailStyle.Render("[!] CORRUPTED - File structure damaged") + strings.Repeat(" ", 18) + "|\n")
+		s.WriteString(diagnoseFailStyle.Render(" [!] CORRUPTED - File structure damaged"))
+		s.WriteString("\n")
 	}
 
-	s.WriteString("+" + strings.Repeat("-", 60) + "+\n\n")
+	s.WriteString("\n")
 
-	// Details Box
+	// Details
 	if result.Details != nil {
-		s.WriteString("+--[ DETAILS ]" + strings.Repeat("-", 46) + "+\n")
+		s.WriteString(diagnoseHeaderStyle.Render("[INFO] Details"))
+		s.WriteString("\n")
 
 		if result.Details.HasPGDMPSignature {
-			s.WriteString("| " + diagnosePassStyle.Render("[+]") + " PostgreSQL custom format (PGDMP)" + strings.Repeat(" ", 20) + "|\n")
+			s.WriteString(diagnosePassStyle.Render(" [+]") + " PostgreSQL custom format (PGDMP)\n")
 		}
 
 		if result.Details.HasSQLHeader {
-			s.WriteString("| " + diagnosePassStyle.Render("[+]") + " PostgreSQL SQL header found" + strings.Repeat(" ", 25) + "|\n")
+			s.WriteString(diagnosePassStyle.Render(" [+]") + " PostgreSQL SQL header found\n")
 		}
 
 		if result.Details.GzipValid {
-			s.WriteString("| " + diagnosePassStyle.Render("[+]") + " Gzip compression valid" + strings.Repeat(" ", 30) + "|\n")
+			s.WriteString(diagnosePassStyle.Render(" [+]") + " Gzip compression valid\n")
 		}
 
 		if result.Details.PgRestoreListable {
-			tableInfo := fmt.Sprintf(" (%d tables)", result.Details.TableCount)
-			padding := 36 - len(tableInfo)
-			if padding < 0 {
-				padding = 0
-			}
-			s.WriteString("| " + diagnosePassStyle.Render("[+]") + " pg_restore can list contents" + tableInfo + strings.Repeat(" ", padding) + "|\n")
+			s.WriteString(diagnosePassStyle.Render(" [+]") + fmt.Sprintf(" pg_restore can list contents (%d tables)\n", result.Details.TableCount))
 		}
 
 		if result.Details.CopyBlockCount > 0 {
-			blockInfo := fmt.Sprintf("%d COPY blocks found", result.Details.CopyBlockCount)
-			padding := 50 - len(blockInfo)
-			if padding < 0 {
-				padding = 0
-			}
-			s.WriteString("| [-] " + blockInfo + strings.Repeat(" ", padding) + "|\n")
+			s.WriteString(fmt.Sprintf(" [-] %d COPY blocks found\n", result.Details.CopyBlockCount))
 		}
 
 		if result.Details.UnterminatedCopy {
-			s.WriteString("| " + diagnoseFailStyle.Render("[-]") + " Unterminated COPY: " + truncate(result.Details.LastCopyTable, 30) + strings.Repeat(" ", 5) + "|\n")
+			s.WriteString(diagnoseFailStyle.Render(" [-]") + " Unterminated COPY: " + truncate(result.Details.LastCopyTable, 30) + "\n")
 		}
 
 		if result.Details.ProperlyTerminated {
-			s.WriteString("| " + diagnosePassStyle.Render("[+]") + " All COPY blocks properly terminated" + strings.Repeat(" ", 17) + "|\n")
+			s.WriteString(diagnosePassStyle.Render(" [+]") + " All COPY blocks properly terminated\n")
 		}
 
 		if result.Details.ExpandedSize > 0 {
-			sizeInfo := fmt.Sprintf("Expanded: %s (%.1fx)", formatSize(result.Details.ExpandedSize), result.Details.CompressionRatio)
-			padding := 50 - len(sizeInfo)
-			if padding < 0 {
-				padding = 0
-			}
-			s.WriteString("| [-] " + sizeInfo + strings.Repeat(" ", padding) + "|\n")
+			s.WriteString(fmt.Sprintf(" [-] Expanded: %s (%.1fx)\n", formatSize(result.Details.ExpandedSize), result.Details.CompressionRatio))
 		}
 
-		s.WriteString("+" + strings.Repeat("-", 60) + "+\n")
+		s.WriteString("\n")
 	}
 
-	// Errors Box
+	// Errors
 	if len(result.Errors) > 0 {
-		s.WriteString("\n+--[ ERRORS ]" + strings.Repeat("-", 47) + "+\n")
+		s.WriteString(diagnoseFailStyle.Render("[FAIL] Errors"))
+		s.WriteString("\n")
 		for i, e := range result.Errors {
 			if i >= 5 {
-				remaining := fmt.Sprintf("... and %d more errors", len(result.Errors)-5)
-				padding := 56 - len(remaining)
-				s.WriteString("| " + remaining + strings.Repeat(" ", padding) + "|\n")
+				s.WriteString(fmt.Sprintf(" ... and %d more errors\n", len(result.Errors)-5))
 				break
 			}
-			errText := truncate(e, 54)
-			padding := 56 - len(errText)
-			if padding < 0 {
-				padding = 0
-			}
-			s.WriteString("| " + errText + strings.Repeat(" ", padding) + "|\n")
+			s.WriteString(" " + truncate(e, 60) + "\n")
 		}
-		s.WriteString("+" + strings.Repeat("-", 60) + "+\n")
+		s.WriteString("\n")
 	}
 
-	// Warnings Box
+	// Warnings
 	if len(result.Warnings) > 0 {
-		s.WriteString("\n+--[ WARNINGS ]" + strings.Repeat("-", 45) + "+\n")
+		s.WriteString(diagnoseWarnStyle.Render("[WARN] Warnings"))
+		s.WriteString("\n")
 		for i, w := range result.Warnings {
 			if i >= 3 {
-				remaining := fmt.Sprintf("... and %d more warnings", len(result.Warnings)-3)
-				padding := 56 - len(remaining)
-				s.WriteString("| " + remaining + strings.Repeat(" ", padding) + "|\n")
+				s.WriteString(fmt.Sprintf(" ... and %d more warnings\n", len(result.Warnings)-3))
 				break
 			}
-			warnText := truncate(w, 54)
-			padding := 56 - len(warnText)
-			if padding < 0 {
-				padding = 0
-			}
-			s.WriteString("| " + warnText + strings.Repeat(" ", padding) + "|\n")
+			s.WriteString(" " + truncate(w, 60) + "\n")
 		}
-		s.WriteString("+" + strings.Repeat("-", 60) + "+\n")
+		s.WriteString("\n")
 	}
 
-	// Recommendations Box
+	// Recommendations
 	if !result.IsValid {
-		s.WriteString("\n+--[ RECOMMENDATIONS ]" + strings.Repeat("-", 38) + "+\n")
+		s.WriteString(diagnoseInfoStyle.Render("[HINT] Recommendations"))
+		s.WriteString("\n")
 		if result.IsTruncated {
-			s.WriteString("| 1. Re-run backup with current version (v3.42.12+) |\n")
-			s.WriteString("| 2. Check disk space on backup server |\n")
-			s.WriteString("| 3. Verify network stability for remote backups |\n")
+			s.WriteString(" 1. Re-run backup with current version (v3.42+)\n")
+			s.WriteString(" 2. Check disk space on backup server\n")
+			s.WriteString(" 3. Verify network stability for remote backups\n")
 		}
 		if result.IsCorrupted {
-			s.WriteString("| 1. Verify backup was transferred completely |\n")
-			s.WriteString("| 2. Try restoring from a previous backup |\n")
+			s.WriteString(" 1. Verify backup was transferred completely\n")
+			s.WriteString(" 2. Try restoring from a previous backup\n")
 		}
-		s.WriteString("+" + strings.Repeat("-", 60) + "+\n")
 	}
 
 	return s.String()
@@ -349,10 +328,8 @@ func (m DiagnoseViewModel) renderClusterResults() string {
 		}
 	}
 
-	s.WriteString(strings.Repeat("-", 60))
 	s.WriteString("\n")
-	s.WriteString(diagnoseHeaderStyle.Render(fmt.Sprintf("[STATS] CLUSTER SUMMARY: %d databases\n", len(m.results))))
-	s.WriteString(strings.Repeat("-", 60))
+	s.WriteString(diagnoseHeaderStyle.Render(fmt.Sprintf("[STATS] Cluster Summary: %d databases", len(m.results))))
 	s.WriteString("\n\n")
 
 	if invalidCount == 0 {
@@ -364,7 +341,7 @@ func (m DiagnoseViewModel) renderClusterResults() string {
 	}
 
 	// List all dumps with status
-	s.WriteString(diagnoseHeaderStyle.Render("Database Dumps:"))
+	s.WriteString(diagnoseHeaderStyle.Render("[LIST] Database Dumps"))
 	s.WriteString("\n")
 
 	// Show visible range based on cursor
@@ -413,9 +390,7 @@ func (m DiagnoseViewModel) renderClusterResults() string {
 	if m.cursor < len(m.results) {
 		selected := m.results[m.cursor]
 		s.WriteString("\n")
-		s.WriteString(strings.Repeat("-", 60))
-		s.WriteString("\n")
-		s.WriteString(diagnoseHeaderStyle.Render("Selected: " + selected.FileName))
+		s.WriteString(diagnoseHeaderStyle.Render("[INFO] Selected: " + selected.FileName))
 		s.WriteString("\n\n")
 
 		// Show condensed details for selected
@@ -191,7 +191,7 @@ func (m HistoryViewModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 func (m HistoryViewModel) View() string {
 	var s strings.Builder
 
-	header := titleStyle.Render("[HISTORY] Operation History")
+	header := titleStyle.Render("[STATS] Operation History")
 	s.WriteString(fmt.Sprintf("\n%s\n\n", header))
 
 	if len(m.history) == 0 {
@@ -285,7 +285,7 @@ func (m *MenuModel) View() string {
 	var s string
 
 	// Header
-	header := titleStyle.Render("[DB] Database Backup Tool - Interactive Menu")
+	header := titleStyle.Render("Database Backup Tool - Interactive Menu")
 	s += fmt.Sprintf("\n%s\n\n", header)
 
 	if len(m.dbTypes) > 0 {
@@ -334,13 +334,13 @@ func (m *MenuModel) View() string {
 
 // handleSingleBackup opens database selector for single backup
 func (m *MenuModel) handleSingleBackup() (tea.Model, tea.Cmd) {
-	selector := NewDatabaseSelector(m.config, m.logger, m, m.ctx, "[DB] Single Database Backup", "single")
+	selector := NewDatabaseSelector(m.config, m.logger, m, m.ctx, "[SELECT] Single Database Backup", "single")
 	return selector, selector.Init()
 }
 
 // handleSampleBackup opens database selector for sample backup
 func (m *MenuModel) handleSampleBackup() (tea.Model, tea.Cmd) {
-	selector := NewDatabaseSelector(m.config, m.logger, m, m.ctx, "[STATS] Sample Database Backup", "sample")
+	selector := NewDatabaseSelector(m.config, m.logger, m, m.ctx, "[SELECT] Sample Database Backup", "sample")
 	return selector, selector.Init()
 }
 
@@ -356,7 +356,7 @@ func (m *MenuModel) handleClusterBackup() (tea.Model, tea.Cmd) {
 		return executor, executor.Init()
 	}
 	confirm := NewConfirmationModelWithAction(m.config, m.logger, m,
-		"[DB] Cluster Backup",
+		"[CHECK] Cluster Backup",
 		"This will backup ALL databases in the cluster. Continue?",
 		func() (tea.Model, tea.Cmd) {
 			executor := NewBackupExecution(m.config, m.logger, m, m.ctx, "cluster", "", 0)
@@ -6,6 +6,7 @@ import (
 	"os/exec"
 	"path/filepath"
 	"strings"
+	"sync"
 	"time"
 
 	tea "github.com/charmbracelet/bubbletea"
@@ -45,6 +46,17 @@ type RestoreExecutionModel struct {
 	spinnerFrame  int
 	spinnerFrames []string
 
+	// Detailed byte progress for schollz-style display
+	bytesTotal  int64
+	bytesDone   int64
+	description string
+	showBytes   bool    // True when we have real byte progress to show
+	speed       float64 // Rolling window speed in bytes/sec
+
+	// Database count progress (for cluster restore)
+	dbTotal int
+	dbDone  int
+
 	// Results
 	done       bool
 	cancelling bool // True when user has requested cancellation
@@ -101,6 +113,9 @@ type restoreProgressMsg struct {
 	phase    string
 	progress int
 	detail   string
+	bytesTotal  int64
+	bytesDone   int64
+	description string
 }
 
 type restoreCompleteMsg struct {
@@ -109,6 +124,102 @@ type restoreCompleteMsg struct {
 	elapsed time.Duration
 }
 
+// sharedProgressState holds progress state that can be safely accessed from callbacks
+type sharedProgressState struct {
+	mu          sync.Mutex
+	bytesTotal  int64
+	bytesDone   int64
+	description string
+	hasUpdate   bool
+
+	// Database count progress (for cluster restore)
+	dbTotal int
+	dbDone  int
+
+	// Rolling window for speed calculation
+	speedSamples []restoreSpeedSample
+}
+
+type restoreSpeedSample struct {
+	timestamp time.Time
+	bytes     int64
+}
+
+// Package-level shared progress state for restore operations
+var (
+	currentRestoreProgressMu    sync.Mutex
+	currentRestoreProgressState *sharedProgressState
+)
+
+func setCurrentRestoreProgress(state *sharedProgressState) {
+	currentRestoreProgressMu.Lock()
+	defer currentRestoreProgressMu.Unlock()
+	currentRestoreProgressState = state
+}
+
+func clearCurrentRestoreProgress() {
+	currentRestoreProgressMu.Lock()
+	defer currentRestoreProgressMu.Unlock()
+	currentRestoreProgressState = nil
+}
+
+func getCurrentRestoreProgress() (bytesTotal, bytesDone int64, description string, hasUpdate bool, dbTotal, dbDone int, speed float64) {
+	currentRestoreProgressMu.Lock()
+	defer currentRestoreProgressMu.Unlock()
+
+	if currentRestoreProgressState == nil {
+		return 0, 0, "", false, 0, 0, 0
+	}
+
+	currentRestoreProgressState.mu.Lock()
+	defer currentRestoreProgressState.mu.Unlock()
+
+	// Calculate rolling window speed
+	speed = calculateRollingSpeed(currentRestoreProgressState.speedSamples)
+
+	return currentRestoreProgressState.bytesTotal, currentRestoreProgressState.bytesDone,
+		currentRestoreProgressState.description, currentRestoreProgressState.hasUpdate,
+		currentRestoreProgressState.dbTotal, currentRestoreProgressState.dbDone, speed
+}
+
+// calculateRollingSpeed calculates speed from recent samples (last 5 seconds)
+func calculateRollingSpeed(samples []restoreSpeedSample) float64 {
+	if len(samples) < 2 {
+		return 0
+	}
+
+	// Use samples from last 5 seconds for smoothed speed
+	now := time.Now()
+	cutoff := now.Add(-5 * time.Second)
+
+	var firstInWindow, lastInWindow *restoreSpeedSample
+	for i := range samples {
+		if samples[i].timestamp.After(cutoff) {
+			if firstInWindow == nil {
+				firstInWindow = &samples[i]
+			}
+			lastInWindow = &samples[i]
+		}
+	}
+
+	// Fall back to first and last if window is empty
+	if firstInWindow == nil || lastInWindow == nil || firstInWindow == lastInWindow {
+		firstInWindow = &samples[0]
+		lastInWindow = &samples[len(samples)-1]
+	}
+
+	elapsed := lastInWindow.timestamp.Sub(firstInWindow.timestamp).Seconds()
+	if elapsed <= 0 {
+		return 0
+	}
+
+	bytesTransferred := lastInWindow.bytes - firstInWindow.bytes
+	return float64(bytesTransferred) / elapsed
+}
+
+// restoreProgressChannel allows sending progress updates from the restore goroutine
+type restoreProgressChannel chan restoreProgressMsg
+
 func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config, log logger.Logger, archive ArchiveInfo, targetDB string, cleanFirst, createIfMissing bool, restoreType string, cleanClusterFirst bool, existingDBs []string, saveDebugLog bool) tea.Cmd {
 	return func() tea.Msg {
 		// NO TIMEOUT for restore operations - a restore takes as long as it takes
@@ -156,6 +267,48 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
 		// STEP 2: Create restore engine with silent progress (no stdout interference with TUI)
 		engine := restore.NewSilent(cfg, log, dbClient)
 
+		// Set up progress callback for detailed progress reporting
+		// We use a shared pointer that can be queried by the TUI ticker
+		progressState := &sharedProgressState{
+			speedSamples: make([]restoreSpeedSample, 0, 100),
+		}
+		engine.SetProgressCallback(func(current, total int64, description string) {
+			progressState.mu.Lock()
+			defer progressState.mu.Unlock()
+			progressState.bytesDone = current
+			progressState.bytesTotal = total
+			progressState.description = description
+			progressState.hasUpdate = true
+
+			// Add speed sample for rolling window calculation
+			progressState.speedSamples = append(progressState.speedSamples, restoreSpeedSample{
+				timestamp: time.Now(),
+				bytes:     current,
+			})
+			// Keep only last 100 samples
+			if len(progressState.speedSamples) > 100 {
+				progressState.speedSamples = progressState.speedSamples[len(progressState.speedSamples)-100:]
+			}
+		})
+
+		// Set up database progress callback for cluster restore
+		engine.SetDatabaseProgressCallback(func(done, total int, dbName string) {
+			progressState.mu.Lock()
+			defer progressState.mu.Unlock()
+			progressState.dbDone = done
+			progressState.dbTotal = total
+			progressState.description = fmt.Sprintf("Restoring %s", dbName)
+			progressState.hasUpdate = true
+			// Clear byte progress when switching to db progress
+			progressState.bytesTotal = 0
+			progressState.bytesDone = 0
+		})
+
+		// Store progress state in a package-level variable for the ticker to access
+		// This is a workaround because tea messages can't be sent from callbacks
+		setCurrentRestoreProgress(progressState)
+		defer clearCurrentRestoreProgress()
+
 		// Enable debug logging if requested
 		if saveDebugLog {
 			// Generate debug log path using configured WorkDir
@@ -165,9 +318,6 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
 			log.Info("Debug logging enabled", "path", debugLogPath)
 		}
 
-		// Set up progress callback (but it won't work in goroutine - progress is already sent via logs)
-		// The TUI will just use spinner animation to show activity
-
 		// STEP 3: Execute restore based on type
 		var restoreErr error
 		if restoreType == "restore-cluster" {
@@ -206,7 +356,29 @@ func (m RestoreExecutionModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 		m.spinnerFrame = (m.spinnerFrame + 1) % len(m.spinnerFrames)
 		m.elapsed = time.Since(m.startTime)
 
-		// Update status based on elapsed time to show progress
+		// Poll shared progress state for real-time updates
+		bytesTotal, bytesDone, description, hasUpdate, dbTotal, dbDone, speed := getCurrentRestoreProgress()
+		if hasUpdate && bytesTotal > 0 {
+			m.bytesTotal = bytesTotal
+			m.bytesDone = bytesDone
+			m.description = description
+			m.showBytes = true
+			m.speed = speed
+
+			// Update status to reflect actual progress
+			m.status = description
+			m.phase = "Extracting"
+			m.progress = int((bytesDone * 100) / bytesTotal)
+		} else if hasUpdate && dbTotal > 0 {
+			// Database count progress for cluster restore
+			m.dbTotal = dbTotal
+			m.dbDone = dbDone
+			m.showBytes = false
+			m.status = fmt.Sprintf("Restoring database %d of %d...", dbDone+1, dbTotal)
+			m.phase = "Restore"
+			m.progress = int((dbDone * 100) / dbTotal)
+		} else {
+			// Fallback: Update status based on elapsed time to show progress
 		// This provides visual feedback even though we don't have real-time progress
 		elapsedSec := int(m.elapsed.Seconds())
 
@@ -241,6 +413,7 @@ func (m RestoreExecutionModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 				m.phase = "Restore"
 			}
 		}
+		}
 
 		return m, restoreTickCmd()
 	}
@@ -250,6 +423,15 @@ func (m RestoreExecutionModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 		m.status = msg.status
 		m.phase = msg.phase
 		m.progress = msg.progress
+
+		// Update byte-level progress if available
+		if msg.bytesTotal > 0 {
+			m.bytesTotal = msg.bytesTotal
+			m.bytesDone = msg.bytesDone
+			m.description = msg.description
+			m.showBytes = true
+		}
+
 		if msg.detail != "" {
 			m.details = append(m.details, msg.detail)
 			// Keep only last 5 details
@@ -321,9 +503,9 @@ func (m RestoreExecutionModel) View() string {
 	s.Grow(512) // Pre-allocate estimated capacity for better performance
 
 	// Title
-	title := "[RESTORE] Restoring Database"
+	title := "[EXEC] Restoring Database"
 	if m.restoreType == "restore-cluster" {
-		title = "[RESTORE] Restoring Cluster"
+		title = "[EXEC] Restoring Cluster"
 	}
 	s.WriteString(titleStyle.Render(title))
 	s.WriteString("\n\n")
@@ -356,19 +538,39 @@ func (m RestoreExecutionModel) View() string {
 	// Show progress
 	s.WriteString(fmt.Sprintf("Phase: %s\n", m.phase))
 
-	// Show status with rotating spinner (unified indicator for all operations)
+	// Show detailed progress bar when we have byte-level information
+	// In this case, hide the spinner for cleaner display
+	if m.showBytes && m.bytesTotal > 0 {
+		// Status line without spinner (progress bar provides activity indication)
+		s.WriteString(fmt.Sprintf("Status: %s\n", m.status))
+		s.WriteString("\n")
+
+		// Render schollz-style progress bar with bytes, rolling speed, ETA
+		s.WriteString(renderDetailedProgressBarWithSpeed(m.bytesDone, m.bytesTotal, m.speed))
+		s.WriteString("\n\n")
+	} else if m.dbTotal > 0 {
+		// Database count progress for cluster restore
+		spinner := m.spinnerFrames[m.spinnerFrame]
+		s.WriteString(fmt.Sprintf("Status: %s %s\n", spinner, m.status))
+		s.WriteString("\n")
+
+		// Show database progress bar
+		s.WriteString(renderDatabaseProgressBar(m.dbDone, m.dbTotal))
+		s.WriteString("\n\n")
+	} else {
+		// Show status with rotating spinner (for phases without detailed progress)
 	spinner := m.spinnerFrames[m.spinnerFrame]
 	s.WriteString(fmt.Sprintf("Status: %s %s\n", spinner, m.status))
 	s.WriteString("\n")
 
-	// Only show progress bar for single database restore
-	// Cluster restore uses spinner only (consistent with CLI behavior)
 	if m.restoreType == "restore-single" {
+		// Fallback to simple progress bar for single database restore
 		progressBar := renderProgressBar(m.progress)
 		s.WriteString(progressBar)
 		s.WriteString(fmt.Sprintf(" %d%%\n", m.progress))
 		s.WriteString("\n")
 	}
+	}
 
 	// Elapsed time
 	s.WriteString(fmt.Sprintf("Elapsed: %s\n", formatDuration(m.elapsed)))
@@ -390,6 +592,92 @@ func renderProgressBar(percent int) string {
 	return successStyle.Render(bar) + infoStyle.Render(empty)
 }
 
+// renderDetailedProgressBar renders a schollz-style progress bar with bytes, speed, and ETA
+// Uses elapsed time for speed calculation (fallback)
+func renderDetailedProgressBar(done, total int64, elapsed time.Duration) string {
+	speed := 0.0
+	if elapsed.Seconds() > 0 {
+		speed = float64(done) / elapsed.Seconds()
+	}
+	return renderDetailedProgressBarWithSpeed(done, total, speed)
+}
+
+// renderDetailedProgressBarWithSpeed renders a schollz-style progress bar with pre-calculated rolling speed
+func renderDetailedProgressBarWithSpeed(done, total int64, speed float64) string {
+	var s strings.Builder
+
+	// Calculate percentage
+	percent := 0
+	if total > 0 {
+		percent = int((done * 100) / total)
+		if percent > 100 {
+			percent = 100
+		}
+	}
+
+	// Render progress bar
+	width := 30
+	filled := (percent * width) / 100
+	barFilled := strings.Repeat("█", filled)
+	barEmpty := strings.Repeat("░", width-filled)
+
+	s.WriteString(successStyle.Render("["))
+	s.WriteString(successStyle.Render(barFilled))
+	s.WriteString(infoStyle.Render(barEmpty))
+	s.WriteString(successStyle.Render("]"))
+
+	// Percentage
+	s.WriteString(fmt.Sprintf(" %3d%%", percent))
+
+	// Bytes progress
+	s.WriteString(fmt.Sprintf(" %s / %s", FormatBytes(done), FormatBytes(total)))
+
+	// Speed display (using rolling window speed)
+	if speed > 0 {
+		s.WriteString(fmt.Sprintf(" %s/s", FormatBytes(int64(speed))))
+
+		// ETA calculation based on rolling speed
+		if done < total {
+			remaining := total - done
+			etaSeconds := float64(remaining) / speed
+			eta := time.Duration(etaSeconds) * time.Second
+			s.WriteString(fmt.Sprintf(" ETA: %s", FormatDurationShort(eta)))
+		}
+	}
+
+	return s.String()
+}
+
+// renderDatabaseProgressBar renders a progress bar for database count (cluster restore)
+func renderDatabaseProgressBar(done, total int) string {
+	var s strings.Builder
+
+	// Calculate percentage
+	percent := 0
+	if total > 0 {
+		percent = (done * 100) / total
+		if percent > 100 {
+			percent = 100
+		}
+	}
+
+	// Render progress bar
+	width := 30
+	filled := (percent * width) / 100
|
||||||
|
barFilled := strings.Repeat("█", filled)
|
||||||
|
barEmpty := strings.Repeat("░", width-filled)
|
||||||
|
|
||||||
|
s.WriteString(successStyle.Render("["))
|
||||||
|
s.WriteString(successStyle.Render(barFilled))
|
||||||
|
s.WriteString(infoStyle.Render(barEmpty))
|
||||||
|
s.WriteString(successStyle.Render("]"))
|
||||||
|
|
||||||
|
// Count and percentage
|
||||||
|
s.WriteString(fmt.Sprintf(" %3d%% %d / %d databases", percent, done, total))
|
||||||
|
|
||||||
|
return s.String()
|
||||||
|
}
|
||||||
|
|
||||||
// formatDuration formats duration in human readable format
|
// formatDuration formats duration in human readable format
|
||||||
func formatDuration(d time.Duration) string {
|
func formatDuration(d time.Duration) string {
|
||||||
if d < time.Minute {
|
if d < time.Minute {
|
||||||
|
|||||||
@@ -106,9 +106,23 @@ type safetyCheckCompleteMsg struct {
 func runSafetyChecks(cfg *config.Config, log logger.Logger, archive ArchiveInfo, targetDB string) tea.Cmd {
 	return func() tea.Msg {
-		// 10 minutes for safety checks - large archives can take a long time to diagnose
-		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
+		// Dynamic timeout based on archive size for large database support
+		// Base: 10 minutes + 1 minute per 5 GB, max 120 minutes
+		timeoutMinutes := 10
+		if archive.Size > 0 {
+			sizeGB := archive.Size / (1024 * 1024 * 1024)
+			estimatedMinutes := int(sizeGB/5) + 10
+			if estimatedMinutes > timeoutMinutes {
+				timeoutMinutes = estimatedMinutes
+			}
+			if timeoutMinutes > 120 {
+				timeoutMinutes = 120
+			}
+		}
+
+		ctx, cancel := context.WithTimeout(context.Background(), time.Duration(timeoutMinutes)*time.Minute)
 		defer cancel()
+		_ = ctx // Used by database checks below

 		safety := restore.NewSafety(cfg, log)
 		checks := []SafetyCheck{}
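The timeout rule in this hunk is easy to check when pulled out as a pure function. A minimal sketch, assuming only the rule stated in the diff's comment (the function name is illustrative, not from the codebase):

```go
package main

import "fmt"

// safetyTimeoutMinutes mirrors the dynamic-timeout rule from the diff:
// base 10 minutes, plus 1 minute per 5 GB of archive, capped at 120 minutes.
func safetyTimeoutMinutes(archiveSize int64) int {
	timeoutMinutes := 10
	if archiveSize > 0 {
		sizeGB := archiveSize / (1024 * 1024 * 1024)
		estimated := int(sizeGB/5) + 10
		if estimated > timeoutMinutes {
			timeoutMinutes = estimated
		}
		if timeoutMinutes > 120 {
			timeoutMinutes = 120
		}
	}
	return timeoutMinutes
}

func main() {
	// Archives below 5 GB keep the 10-minute base; a 1 TiB archive hits the cap.
	for _, size := range []int64{0, 4 << 30, 100 << 30, 1 << 40} {
		fmt.Printf("%d GB -> %d min\n", size>>30, safetyTimeoutMinutes(size))
	}
}
```

A 100 GB archive gets 30 minutes (20 extra), while anything over roughly 550 GB saturates at the 120-minute cap.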
@@ -325,9 +339,9 @@ func (m RestorePreviewModel) View() string {
 	var s strings.Builder

 	// Title
-	title := "Restore Preview"
+	title := "[CHECK] Restore Preview"
 	if m.mode == "restore-cluster" {
-		title = "Cluster Restore Preview"
+		title = "[CHECK] Cluster Restore Preview"
 	}
 	s.WriteString(titleStyle.Render(title))
 	s.WriteString("\n\n")
@@ -688,7 +688,7 @@ func (m SettingsModel) View() string {
 	var b strings.Builder

 	// Header
-	header := titleStyle.Render("[CFG] Configuration Settings")
+	header := titleStyle.Render("[CONFIG] Configuration Settings")
 	b.WriteString(fmt.Sprintf("\n%s\n\n", header))

 	// Settings list
@@ -747,7 +747,7 @@ func (m SettingsModel) View() string {
 	// Current configuration summary
 	if !m.editing {
 		b.WriteString("\n")
-		b.WriteString(infoStyle.Render("[LOG] Current Configuration:"))
+		b.WriteString(infoStyle.Render("[INFO] Current Configuration"))
 		b.WriteString("\n")

 		summary := []string{
@@ -173,7 +173,7 @@ func (m StatusViewModel) View() string {
 		s.WriteString(errorStyle.Render(fmt.Sprintf("[FAIL] Error: %v\n", m.err)))
 		s.WriteString("\n")
 	} else {
-		s.WriteString("Connection Status:\n")
+		s.WriteString("[CONN] Connection Status\n")
 		if m.connected {
 			s.WriteString(successStyle.Render("  [+] Connected\n"))
 		} else {
@@ -181,11 +181,12 @@ func (m StatusViewModel) View() string {
 		}
 		s.WriteString("\n")

-		s.WriteString(fmt.Sprintf("Database Type: %s (%s)\n", m.config.DisplayDatabaseType(), m.config.DatabaseType))
-		s.WriteString(fmt.Sprintf("Host: %s:%d\n", m.config.Host, m.config.Port))
-		s.WriteString(fmt.Sprintf("User: %s\n", m.config.User))
-		s.WriteString(fmt.Sprintf("Backup Directory: %s\n", m.config.BackupDir))
-		s.WriteString(fmt.Sprintf("Version: %s\n\n", m.dbVersion))
+		s.WriteString("[INFO] Server Details\n")
+		s.WriteString(fmt.Sprintf("  Database Type: %s (%s)\n", m.config.DisplayDatabaseType(), m.config.DatabaseType))
+		s.WriteString(fmt.Sprintf("  Host: %s:%d\n", m.config.Host, m.config.Port))
+		s.WriteString(fmt.Sprintf("  User: %s\n", m.config.User))
+		s.WriteString(fmt.Sprintf("  Backup Directory: %s\n", m.config.BackupDir))
+		s.WriteString(fmt.Sprintf("  Version: %s\n\n", m.dbVersion))

 		if m.dbCount > 0 {
 			s.WriteString(fmt.Sprintf("Databases Found: %s\n", successStyle.Render(fmt.Sprintf("%d", m.dbCount))))
@@ -120,12 +120,36 @@ var ShortcutStyle = lipgloss.NewStyle().
 // =============================================================================
 // HELPER PREFIXES (no emoticons)
 // =============================================================================
+// Convention for TUI titles/headers:
+//   [CHECK]  - Verification/diagnosis screens
+//   [STATS]  - Statistics/status screens
+//   [SELECT] - Selection/browser screens
+//   [EXEC]   - Execution/running screens
+//   [CONFIG] - Configuration/settings screens
+//
+// Convention for status messages:
+//   [OK]   - Success
+//   [FAIL] - Error/failure
+//   [WAIT] - In progress
+//   [WARN] - Warning
+//   [INFO] - Information

 const (
+	// Title prefixes (for view headers)
+	PrefixCheck  = "[CHECK]"
+	PrefixStats  = "[STATS]"
+	PrefixSelect = "[SELECT]"
+	PrefixExec   = "[EXEC]"
+	PrefixConfig = "[CONFIG]"
+
+	// Status prefixes
 	PrefixOK   = "[OK]"
 	PrefixFail = "[FAIL]"
-	PrefixWarn = "[!]"
-	PrefixInfo = "[i]"
+	PrefixWait = "[WAIT]"
+	PrefixWarn = "[WARN]"
+	PrefixInfo = "[INFO]"
+
+	// List item prefixes
 	PrefixPlus  = "[+]"
 	PrefixMinus = "[-]"
 	PrefixArrow = ">"
2
main.go
@@ -16,7 +16,7 @@ import (

 // Build information (set by ldflags)
 var (
-	version   = "3.42.10"
+	version   = "3.42.34"
 	buildTime = "unknown"
 	gitCommit = "unknown"
 )
@@ -1,171 +0,0 @@
-#!/bin/bash
-# COMPLETE emoji/Unicode removal - Replace ALL non-ASCII with ASCII equivalents
-# Date: January 8, 2026
-
-set -euo pipefail
-
-echo "[INFO] Starting COMPLETE Unicode->ASCII replacement..."
-echo ""
-
-# Create backup
-BACKUP_DIR="backup_unicode_removal_$(date +%Y%m%d_%H%M%S)"
-mkdir -p "$BACKUP_DIR"
-echo "[INFO] Creating backup in $BACKUP_DIR..."
-find . -name "*.go" -type f -not -path "*/vendor/*" -not -path "*/.git/*" -exec bash -c 'mkdir -p "$1/$(dirname "$2")" && cp "$2" "$1/$2"' -- "$BACKUP_DIR" {} \;
-echo "[OK] Backup created"
-echo ""
-
-# Find all affected files
-echo "[SEARCH] Finding files with Unicode..."
-FILES=$(find . -name "*.go" -type f -not -path "*/vendor/*" -not -path "*/.git/*")
-
-PROCESSED=0
-TOTAL=$(echo "$FILES" | wc -l)
-
-for file in $FILES; do
-    PROCESSED=$((PROCESSED + 1))
-
-    if ! grep -qP '[\x{80}-\x{FFFF}]' "$file" 2>/dev/null; then
-        continue
-    fi
-
-    echo "[$PROCESSED/$TOTAL] Processing: $file"
-
-    # Create temp file for atomic replacements
-    TMPFILE="${file}.tmp"
-    cp "$file" "$TMPFILE"
-
-    # Box drawing / decorative (used in TUI borders)
-    sed -i 's/─/-/g' "$TMPFILE"
-    sed -i 's/━/-/g' "$TMPFILE"
-    sed -i 's/│/|/g' "$TMPFILE"
-    sed -i 's/║/|/g' "$TMPFILE"
-    sed -i 's/├/+/g' "$TMPFILE"
-    sed -i 's/└/+/g' "$TMPFILE"
-    sed -i 's/╔/+/g' "$TMPFILE"
-    sed -i 's/╗/+/g' "$TMPFILE"
-    sed -i 's/╚/+/g' "$TMPFILE"
-    sed -i 's/╝/+/g' "$TMPFILE"
-    sed -i 's/╠/+/g' "$TMPFILE"
-    sed -i 's/╣/+/g' "$TMPFILE"
-    sed -i 's/═/=/g' "$TMPFILE"
-
-    # Status symbols
-    sed -i 's/✅/[OK]/g' "$TMPFILE"
-    sed -i 's/❌/[FAIL]/g' "$TMPFILE"
-    sed -i 's/✓/[+]/g' "$TMPFILE"
-    sed -i 's/✗/[-]/g' "$TMPFILE"
-    sed -i 's/⚠️/[WARN]/g' "$TMPFILE"
-    sed -i 's/⚠/[!]/g' "$TMPFILE"
-    sed -i 's/❓/[?]/g' "$TMPFILE"
-
-    # Arrows
-    sed -i 's/←/</g' "$TMPFILE"
-    sed -i 's/→/>/g' "$TMPFILE"
-    sed -i 's/↑/^/g' "$TMPFILE"
-    sed -i 's/↓/v/g' "$TMPFILE"
-    sed -i 's/▲/^/g' "$TMPFILE"
-    sed -i 's/▼/v/g' "$TMPFILE"
-    sed -i 's/▶/>/g' "$TMPFILE"
-
-    # Shapes
-    sed -i 's/●/*\*/g' "$TMPFILE"
-    sed -i 's/○/o/g' "$TMPFILE"
-    sed -i 's/⚪/o/g' "$TMPFILE"
-    sed -i 's/•/-/g' "$TMPFILE"
-    sed -i 's/█/#/g' "$TMPFILE"
-    sed -i 's/▎/|/g' "$TMPFILE"
-    sed -i 's/░/./g' "$TMPFILE"
-    sed -i 's/➖/-/g' "$TMPFILE"
-
-    # Emojis - Info/Data
-    sed -i 's/📊/[INFO]/g' "$TMPFILE"
-    sed -i 's/📋/[LIST]/g' "$TMPFILE"
-    sed -i 's/📁/[DIR]/g' "$TMPFILE"
-    sed -i 's/📦/[PKG]/g' "$TMPFILE"
-    sed -i 's/📜/[LOG]/g' "$TMPFILE"
-    sed -i 's/📭/[EMPTY]/g' "$TMPFILE"
-    sed -i 's/📝/[NOTE]/g' "$TMPFILE"
-    sed -i 's/💡/[TIP]/g' "$TMPFILE"
-
-    # Emojis - Actions/Objects
-    sed -i 's/🎯/[TARGET]/g' "$TMPFILE"
-    sed -i 's/🛡️/[SECURE]/g' "$TMPFILE"
-    sed -i 's/🔒/[LOCK]/g' "$TMPFILE"
-    sed -i 's/🔓/[UNLOCK]/g' "$TMPFILE"
-    sed -i 's/🔍/[SEARCH]/g' "$TMPFILE"
-    sed -i 's/🔀/[SWITCH]/g' "$TMPFILE"
-    sed -i 's/🔥/[FIRE]/g' "$TMPFILE"
-    sed -i 's/💾/[SAVE]/g' "$TMPFILE"
-    sed -i 's/🗄️/[DB]/g' "$TMPFILE"
-    sed -i 's/🗄/[DB]/g' "$TMPFILE"
-
-    # Emojis - Time/Status
-    sed -i 's/⏱️/[TIME]/g' "$TMPFILE"
-    sed -i 's/⏱/[TIME]/g' "$TMPFILE"
-    sed -i 's/⏳/[WAIT]/g' "$TMPFILE"
-    sed -i 's/⏪/[REW]/g' "$TMPFILE"
-    sed -i 's/⏹️/[STOP]/g' "$TMPFILE"
-    sed -i 's/⏹/[STOP]/g' "$TMPFILE"
-    sed -i 's/⟳/[SYNC]/g' "$TMPFILE"
-
-    # Emojis - Cloud
-    sed -i 's/☁️/[CLOUD]/g' "$TMPFILE"
-    sed -i 's/☁/[CLOUD]/g' "$TMPFILE"
-    sed -i 's/📤/[UPLOAD]/g' "$TMPFILE"
-    sed -i 's/📥/[DOWNLOAD]/g' "$TMPFILE"
-    sed -i 's/🗑️/[DELETE]/g' "$TMPFILE"
-
-    # Emojis - Misc
-    sed -i 's/📈/[UP]/g' "$TMPFILE"
-    sed -i 's/📉/[DOWN]/g' "$TMPFILE"
-    sed -i 's/⌨️/[KEY]/g' "$TMPFILE"
-    sed -i 's/⌨/[KEY]/g' "$TMPFILE"
-    sed -i 's/⚙️/[CONFIG]/g' "$TMPFILE"
-    sed -i 's/⚙/[CONFIG]/g' "$TMPFILE"
-    sed -i 's/✏️/[EDIT]/g' "$TMPFILE"
-    sed -i 's/✏/[EDIT]/g' "$TMPFILE"
-    sed -i 's/⚡/[FAST]/g' "$TMPFILE"
-
-    # Spinner characters (braille patterns for loading animations)
-    sed -i 's/⠋/|/g' "$TMPFILE"
-    sed -i 's/⠙/\//g' "$TMPFILE"
-    sed -i 's/⠹/-/g' "$TMPFILE"
-    sed -i 's/⠸/\\/g' "$TMPFILE"
-    sed -i 's/⠼/|/g' "$TMPFILE"
-    sed -i 's/⠴/\//g' "$TMPFILE"
-    sed -i 's/⠦/-/g' "$TMPFILE"
-    sed -i 's/⠧/\\/g' "$TMPFILE"
-    sed -i 's/⠇/|/g' "$TMPFILE"
-    sed -i 's/⠏/\//g' "$TMPFILE"
-
-    # Move temp file over original
-    mv "$TMPFILE" "$file"
-done
-
-echo ""
-echo "[OK] Replacement complete!"
-echo ""
-
-# Verify
-REMAINING=$(grep -roP '[\x{80}-\x{FFFF}]' --include="*.go" . 2>/dev/null | wc -l || echo "0")
-
-echo "[INFO] Unicode characters remaining: $REMAINING"
-if [ "$REMAINING" -gt 0 ]; then
-    echo "[WARN] Some Unicode still exists (might be in comments or safe locations)"
-    echo "[INFO] Unique remaining characters:"
-    grep -roP '[\x{80}-\x{FFFF}]' --include="*.go" . 2>/dev/null | grep -oP '[\x{80}-\x{FFFF}]' | sort -u | head -20
-else
-    echo "[OK] All Unicode characters replaced with ASCII!"
-fi
-
-echo ""
-echo "[INFO] Backup: $BACKUP_DIR"
-echo "[INFO] To restore: cp -r $BACKUP_DIR/* ."
-echo ""
-echo "[INFO] Next steps:"
-echo "  1. go build"
-echo "  2. go test ./..."
-echo "  3. Test TUI: ./dbbackup"
-echo "  4. Commit: git add . && git commit -m 'v3.42.11: Replace all Unicode with ASCII'"
-echo ""
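The deleted script above applies every mapping as a separate `sed -i` pass over each file. As a hedged sketch of an alternative a Go codebase could use instead, `strings.NewReplacer` applies a whole substitution table in a single pass; the table below is a small excerpt of the script's mapping, not the full set:

```go
package main

import (
	"fmt"
	"strings"
)

// asciiReplacer covers a few of the mappings the deleted script applied with sed.
// NewReplacer tries the pairs in argument order at each position, so the
// variant-selector form "⚠️" is listed before the bare "⚠" and wins when present.
var asciiReplacer = strings.NewReplacer(
	"✅", "[OK]",
	"❌", "[FAIL]",
	"⚠️", "[WARN]",
	"⚠", "[!]",
	"→", ">",
	"█", "#",
	"░", ".",
)

func main() {
	// One pass replaces every mapped rune in the input.
	fmt.Println(asciiReplacer.Replace("✅ done → 50% █░")) // [OK] done > 50% #.
}
```

A single replacer also avoids the O(files × patterns) rewrites the sed loop performs, since each file is read and written once.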
@@ -1,130 +0,0 @@
-#!/bin/bash
-# Remove ALL emojis/unicode symbols from Go code and replace with ASCII
-# Date: January 8, 2026
-# Issue: 638 lines contain Unicode emojis causing display issues
-
-set -euo pipefail
-
-echo "[INFO] Starting emoji removal process..."
-echo ""
-
-# Find all Go files with emojis (expanded emoji list)
-echo "[SEARCH] Finding affected files..."
-FILES=$(find . -name "*.go" -type f -not -path "*/vendor/*" -not -path "*/.git/*" | xargs grep -l -P '[\x{1F000}-\x{1FFFF}]|[\x{2300}-\x{27BF}]|[\x{2600}-\x{26FF}]' 2>/dev/null || true)
-
-if [ -z "$FILES" ]; then
-    echo "[WARN] No files with emojis found!"
-    exit 0
-fi
-
-FILECOUNT=$(echo "$FILES" | wc -l)
-echo "[INFO] Found $FILECOUNT files containing emojis"
-echo ""
-
-# Count total emojis before
-BEFORE=$(find . -name "*.go" -type f -not -path "*/vendor/*" | xargs grep -oP '[\x{1F000}-\x{1FFFF}]|[\x{2300}-\x{27BF}]|[\x{2600}-\x{26FF}]' 2>/dev/null | wc -l || echo "0")
-echo "[INFO] Total emojis found: $BEFORE"
-echo ""
-
-# Create backup
-BACKUP_DIR="backup_before_emoji_removal_$(date +%Y%m%d_%H%M%S)"
-mkdir -p "$BACKUP_DIR"
-echo "[INFO] Creating backup in $BACKUP_DIR..."
-for file in $FILES; do
-    mkdir -p "$BACKUP_DIR/$(dirname "$file")"
-    cp "$file" "$BACKUP_DIR/$file"
-done
-echo "[OK] Backup created"
-echo ""
-
-# Process each file
-echo "[INFO] Replacing emojis with ASCII equivalents..."
-PROCESSED=0
-
-for file in $FILES; do
-    PROCESSED=$((PROCESSED + 1))
-    echo "[$PROCESSED/$FILECOUNT] Processing: $file"
-
-    # Create temp file
-    TMPFILE="${file}.tmp"
-
-    # Status indicators
-    sed 's/✅/[OK]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-    sed 's/❌/[FAIL]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-    sed 's/✓/[+]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-    sed 's/✗/[-]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-
-    # Warning symbols (⚠️ has variant selector, handle both)
-    sed 's/⚠️/[WARN]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-    sed 's/⚠/[!]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-
-    # Info/Data symbols
-    sed 's/📊/[INFO]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-    sed 's/📋/[LIST]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-    sed 's/📁/[DIR]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-    sed 's/📦/[PKG]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-
-    # Target/Security
-    sed 's/🎯/[TARGET]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-    sed 's/🛡️/[SECURE]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-    sed 's/🔒/[LOCK]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-    sed 's/🔓/[UNLOCK]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-
-    # Actions
-    sed 's/🔍/[SEARCH]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-    sed 's/⏱️/[TIME]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-
-    # Cloud operations (☁️ has variant selector, handle both)
-    sed 's/☁️/[CLOUD]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-    sed 's/☁/[CLOUD]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-    sed 's/📤/[UPLOAD]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-    sed 's/📥/[DOWNLOAD]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-    sed 's/🗑️/[DELETE]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-
-    # Other
-    sed 's/📈/[UP]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-    sed 's/📉/[DOWN]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-
-    # Additional emojis found
-    sed 's/⌨️/[KEY]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-    sed 's/⌨/[KEY]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-    sed 's/🗄️/[DB]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-    sed 's/🗄/[DB]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-    sed 's/⚙️/[CONFIG]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-    sed 's/⚙/[CONFIG]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-    sed 's/✏️/[EDIT]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-    sed 's/✏/[EDIT]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
-done
-
-echo ""
-echo "[OK] Replacement complete!"
-echo ""
-
-# Count remaining emojis
-AFTER=$(find . -name "*.go" -type f -not -path "*/vendor/*" | xargs grep -oP '[\x{1F000}-\x{1FFFF}]|[\x{2300}-\x{27BF}]|[\x{2600}-\x{26FF}]' 2>/dev/null | wc -l || echo "0")
-
-echo "[INFO] Emojis before: $BEFORE"
-echo "[INFO] Emojis after: $AFTER"
-echo "[INFO] Emojis removed: $((BEFORE - AFTER))"
-echo ""
-
-if [ "$AFTER" -gt 0 ]; then
-    echo "[WARN] $AFTER emojis still remaining!"
-    echo "[INFO] Listing remaining emojis:"
-    find . -name "*.go" -type f -not -path "*/vendor/*" | xargs grep -nP '[\x{1F000}-\x{1FFFF}]|[\x{2300}-\x{27BF}]|[\x{2600}-\x{26FF}]' 2>/dev/null | head -20
-else
-    echo "[OK] All emojis successfully removed!"
-fi
-
-echo ""
-echo "[INFO] Backup location: $BACKUP_DIR"
-echo "[INFO] To restore: cp -r $BACKUP_DIR/* ."
-echo ""
-echo "[INFO] Next steps:"
-echo "  1. Build: go build"
-echo "  2. Test: go test ./..."
-echo "  3. Manual testing: ./dbbackup status"
-echo "  4. If OK, commit: git add . && git commit -m 'Replace emojis with ASCII'"
-echo "  5. If broken, restore: cp -r $BACKUP_DIR/* ."
-echo ""
-echo "[OK] Emoji removal script completed!"
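Both deleted scripts end with a grep-based count of leftover non-ASCII characters as their verification step. A minimal Go analogue of that check, counting runes above the 7-bit ASCII range in a string (the function name is illustrative):

```go
package main

import (
	"fmt"
	"unicode"
)

// countNonASCII mirrors the scripts' grep verification step: it counts
// runes outside the 7-bit ASCII range, i.e. anything the replacement
// passes should have eliminated.
func countNonASCII(s string) int {
	n := 0
	for _, r := range s {
		if r > unicode.MaxASCII {
			n++
		}
	}
	return n
}

func main() {
	fmt.Println(countNonASCII("plain ascii"))
	fmt.Println(countNonASCII("bar: █░ 50%"))
}
```

Ranging over a string decodes it rune by rune, so a multi-byte character such as `█` counts once, matching what the scripts' per-character grep intended.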