Compare commits

10 Commits:

- f153e61dbf
- d19c065658
- 8dac5efc10
- fd5edce5ae
- a7e2c86618
- b2e0c739e0
- ad23abdf4e
- 390b830976
- 7e53950967
- 59d2094241
@@ -56,7 +56,7 @@ jobs:
       - name: Install and run golangci-lint
         run: |
-          go install github.com/golangci/golangci-lint/cmd/golangci-lint@v1.62.2
+          go install github.com/golangci/golangci-lint/v2/cmd/golangci-lint@v2.8.0
           golangci-lint run --timeout=5m ./...

   build-and-release:
@@ -1,16 +1,16 @@
 # golangci-lint configuration - relaxed for existing codebase
+version: "2"

 run:
   timeout: 5m
-  tests: false

 linters:
-  disable-all: true
+  default: none
   enable:
     # Only essential linters that catch real bugs
     - govet
-    - ineffassign

-linters-settings:
+settings:
   govet:
     disable:
       - fieldalignment
@@ -1,295 +0,0 @@ (file deleted)

# Emoticon Removal Plan for Python Code

## ⚠️ CRITICAL: Code Must Remain Functional After Removal

This document outlines a **safe, systematic approach** to removing emoticons from Python code without breaking functionality.

---

## 1. Identification Phase

### 1.1 Where Emoticons CAN Safely Exist (Safe to Remove)

| Location | Risk Level | Action |
|----------|------------|--------|
| Comments (`# 🎉 Success!`) | ✅ SAFE | Remove or replace with text |
| Docstrings (`"""📌 Note:..."""`) | ✅ SAFE | Remove or replace with text |
| Print statements for decoration (`print("✅ Done!")`) | ⚠️ LOW | Replace with ASCII or text |
| Logging messages (`logger.info("🔥 Starting...")`) | ⚠️ LOW | Replace with text equivalent |

### 1.2 Where Emoticons are DANGEROUS to Remove

| Location | Risk Level | Action |
|----------|------------|--------|
| String literals used in logic | 🚨 HIGH | **DO NOT REMOVE** without analysis |
| Dictionary keys (`{"🔑": value}`) | 🚨 CRITICAL | **NEVER REMOVE** - breaks code |
| Regex patterns | 🚨 CRITICAL | **NEVER REMOVE** - breaks matching |
| String comparisons (`if x == "✅"`) | 🚨 CRITICAL | Requires refactoring, not just removal |
| Database/API payloads | 🚨 CRITICAL | May break external systems |
| File content markers | 🚨 HIGH | May break parsing logic |

---

## 2. Pre-Removal Checklist

### 2.1 Before ANY Changes
- [ ] **Full backup** of the codebase
- [ ] **Run all tests** and record baseline results
- [ ] **Document all emoticon locations** with grep/search
- [ ] **Identify emoticon usage patterns** (decorative vs. functional)

### 2.2 Discovery Commands
```bash
# Find all files with emoticons (Unicode range for common emojis)
grep -rn --include="*.py" -P '[\x{1F300}-\x{1F9FF}]' .

# Find emoticons inside string literals
grep -rn --include="*.py" -P '["'"'"'][^"'"'"']*[\x{1F300}-\x{1F9FF}]' .

# List unique emoticons used
grep -ohP '[\x{1F300}-\x{1F9FF}]' *.py | sort -u
```

---

## 3. Replacement Strategy

### 3.1 Semantic Replacement Table

| Emoticon | Text Replacement | Context |
|----------|------------------|---------|
| ✅ | `[OK]` or `[SUCCESS]` | Status indicators |
| ❌ | `[FAIL]` or `[ERROR]` | Error indicators |
| ⚠️ | `[WARNING]` | Warning messages |
| 🔥 | `[HOT]` or `` (remove) | Decorative |
| 🎉 | `[DONE]` or `` (remove) | Celebration/completion |
| 📌 | `[NOTE]` | Notes/pinned items |
| 🚀 | `[START]` or `` (remove) | Launch/start indicators |
| 💾 | `[SAVE]` | Save operations |
| 🔑 | `[KEY]` | Key/authentication |
| 📁 | `[FILE]` | File operations |
| 🔍 | `[SEARCH]` | Search operations |
| ⏳ | `[WAIT]` or `[LOADING]` | Progress indicators |
| 🛑 | `[STOP]` | Stop/halt indicators |
| ℹ️ | `[INFO]` | Information |
| 🐛 | `[BUG]` or `[DEBUG]` | Debug messages |

### 3.2 Context-Aware Replacement Rules

```
RULE 1: Comments
- Remove the emoticon entirely OR replace it with text
- Example: `# 🎉 Feature complete` → `# Feature complete`

RULE 2: User-facing strings (print/logging)
- Replace with the semantic text equivalent
- Example: `print("✅ Backup complete")` → `print("[OK] Backup complete")`

RULE 3: Functional strings (DANGER ZONE)
- DO NOT auto-replace
- Requires manual code refactoring
- Example: `status = "✅"` → Refactor to `status = "success"` AND update all comparisons
```

---

## 4. Safe Removal Process

### Step 1: Audit
```python
# Python script to audit emoticon usage
import re
import ast

EMOJI_PATTERN = re.compile(
    "["
    "\U0001F300-\U0001F9FF"  # Symbols & Pictographs
    "\U00002600-\U000026FF"  # Misc symbols
    "\U00002700-\U000027BF"  # Dingbats
    "\U0001F600-\U0001F64F"  # Emoticons
    "]+"
)

def audit_file(filepath):
    with open(filepath, 'r', encoding='utf-8') as f:
        content = f.read()

    # Parse the file first: confirms it is valid Python before any analysis
    ast.parse(content)

    findings = []
    for lineno, line in enumerate(content.split('\n'), 1):
        matches = EMOJI_PATTERN.findall(line)
        if matches:
            # Determine context (comment, string, etc.)
            context = classify_context(line, matches)
            findings.append({
                'line': lineno,
                'content': line.strip(),
                'emojis': matches,
                'context': context,
                'risk': assess_risk(context)
            })
    return findings

def classify_context(line, matches):
    stripped = line.strip()
    if stripped.startswith('#'):
        return 'COMMENT'
    if 'print(' in line or 'logging.' in line or 'logger.' in line:
        return 'OUTPUT'
    if '==' in line or '!=' in line:
        return 'COMPARISON'
    if re.search(r'["\'][^"\']*$', line.split('#')[0]):
        return 'STRING_LITERAL'
    return 'UNKNOWN'

def assess_risk(context):
    risk_map = {
        'COMMENT': 'LOW',
        'OUTPUT': 'LOW',
        'COMPARISON': 'CRITICAL',
        'STRING_LITERAL': 'HIGH',
        'UNKNOWN': 'HIGH'
    }
    return risk_map.get(context, 'HIGH')
```

### Step 2: Generate Change Plan
```python
def generate_change_plan(findings):
    plan = {'safe': [], 'review_required': [], 'do_not_touch': []}

    for finding in findings:
        if finding['risk'] == 'LOW':
            plan['safe'].append(finding)
        elif finding['risk'] == 'HIGH':
            plan['review_required'].append(finding)
        else:  # CRITICAL
            plan['do_not_touch'].append(finding)

    return plan
```

### Step 3: Apply Changes (SAFE items only)
```python
def apply_safe_replacements(filepath, replacements):
    # Create backup first!
    import shutil
    shutil.copy(filepath, filepath + '.backup')

    with open(filepath, 'r', encoding='utf-8') as f:
        content = f.read()

    for old, new in replacements:
        content = content.replace(old, new)

    with open(filepath, 'w', encoding='utf-8') as f:
        f.write(content)
```

### Step 4: Validate
```bash
# After each file change:
python -m py_compile <modified_file.py>   # Syntax check
pytest <related_tests>                    # Run tests
```

---

## 5. Validation Checklist

### After EACH File Modification
- [ ] File compiles without syntax errors (`python -m py_compile file.py`)
- [ ] All imports still work
- [ ] Related unit tests pass
- [ ] Integration tests pass
- [ ] Manual smoke test if applicable

### After ALL Modifications
- [ ] Full test suite passes
- [ ] Application starts correctly
- [ ] Key functionality verified manually
- [ ] No new warnings in logs
- [ ] Compare output with baseline

---

## 6. Rollback Plan

### If Something Breaks
1. **Immediate**: Restore from `.backup` files
2. **Git**: `git checkout -- <file>` or `git stash pop`
3. **Full rollback**: Restore from the pre-change backup

### Keep Until Verified
```bash
# Backup storage structure
backups/
├── pre_emoticon_removal/
│   ├── timestamp.tar.gz
│   └── git_commit_hash.txt
└── individual_files/
    ├── file1.py.backup
    └── file2.py.backup
```

---

## 7. Implementation Order

1. **Phase 1**: Comments only (LOWEST risk)
2. **Phase 2**: Docstrings (LOW risk)
3. **Phase 3**: Print/logging statements (LOW-MEDIUM risk)
4. **Phase 4**: Manual review items (HIGH risk) - one by one
5. **Phase 5**: NEVER touch CRITICAL items without full refactoring

---

## 8. Example Workflow

```bash
# 1. Create a working branch (full backup already taken)
git stash && git checkout -b emoticon-removal

# 2. Run the audit script
python emoticon_audit.py > audit_report.json

# 3. Review the audit report
jq '.do_not_touch' audit_report.json   # Check critical items

# 4. Apply safe changes only
python apply_safe_changes.py --dry-run   # Preview first!
python apply_safe_changes.py             # Apply

# 5. Validate after each change
python -m pytest tests/

# 6. Commit incrementally
git add -p   # Review each change
git commit -m "Remove emoticons from comments in module X"
```

---

## 9. DO NOT DO

❌ **Never** use global find-replace on emoticons
❌ **Never** remove emoticons from string comparisons without refactoring
❌ **Never** change multiple files without testing between changes
❌ **Never** assume an emoticon is decorative - verify its context
❌ **Never** proceed if tests fail after a change

---

## 10. Sign-Off Requirements

Before merging emoticon removal changes:
- [ ] All tests pass (100%)
- [ ] Code review by a second developer
- [ ] Manual testing of affected features
- [ ] All CRITICAL items left unchanged are documented (with justification)
- [ ] Backup verified and accessible

---

**Author**: Generated Plan
**Date**: 2026-01-07
**Status**: PLAN ONLY - No code changes made
OPENSOURCE_ALTERNATIVE.md — new file, 206 lines
@@ -0,0 +1,206 @@

# dbbackup: The Real Open Source Alternative

## Killing Two Borgs with One Binary

You have two choices for database backups today:

1. **Pay $2,000-10,000/year per server** for Veeam, Commvault, or Veritas
2. **Wrestle with Borg/restic** - powerful, but never designed for databases

**dbbackup** eliminates both problems with a single, zero-dependency binary.

## The Problem with Commercial Backup

| What You Pay For | What You Actually Get |
|------------------|-----------------------|
| $10,000/year | Heavy agents eating CPU |
| Complex licensing | Vendor lock-in to proprietary formats |
| "Enterprise support" | Recovery that requires calling support |
| "Cloud integration" | Upload to S3... eventually |

## The Problem with Borg/Restic

Great tools. Wrong use case.

| Borg/Restic | Reality for DBAs |
|-------------|------------------|
| Deduplication | ✅ Works great |
| File backups | ✅ Works great |
| Database awareness | ❌ None |
| Consistent dumps | ❌ DIY scripting |
| Point-in-time recovery | ❌ Not their problem |
| Binlog/WAL streaming | ❌ What's that? |

You end up writing wrapper scripts. Then more scripts. Then a monitoring layer. Then you've built half a product anyway.

## What Open Source Really Means

**dbbackup** delivers everything - in one binary:

| Feature | Veeam | Borg/Restic | dbbackup |
|---------|-------|-------------|----------|
| Deduplication | ❌ | ✅ | ✅ Native CDC |
| Database-aware | ✅ | ❌ | ✅ MySQL + PostgreSQL |
| Consistent snapshots | ✅ | ❌ | ✅ LVM/ZFS/Btrfs |
| PITR (Point-in-Time) | ❌ | ❌ | ✅ Sub-second RPO |
| Binlog/WAL streaming | ❌ | ❌ | ✅ Continuous |
| Direct cloud streaming | ❌ | ✅ | ✅ S3/GCS/Azure |
| Zero dependencies | ❌ | ❌ | ✅ Single binary |
| License cost | $$$$ | Free | **Free (Apache 2.0)** |

## Deduplication: We Killed the Borg

Content-defined chunking, just like Borg - but built for database dumps:

```bash
# First backup: 5MB stored
dbbackup dedup backup mydb.dump

# Second backup (modified): only 1.6KB of new data!
# 100% deduplication ratio
dbbackup dedup backup mydb_modified.dump
```

### How It Works
- **Gear Hash CDC** - Content-defined chunking with 92%+ overlap detection (sketched below)
- **SHA-256 Content-Addressed** - Chunks stored by hash, automatic dedup
- **AES-256-GCM Encryption** - Per-chunk encryption
- **Gzip Compression** - Enabled by default
- **SQLite Index** - Fast lookups, portable metadata

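If you want to see the shape of the trick, here is a minimal Go sketch of gear-hash CDC with SHA-256 content addressing. It is an illustration of the general technique only - the chunk-size thresholds, the gear table, and the in-memory "store" are assumptions for the sketch, not dbbackup's internal dedup package:

```go
package main

import (
	"bufio"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"math/rand"
	"os"
)

const (
	minChunk = 2 << 10  // 2 KiB floor so tiny chunks don't dominate
	maxChunk = 64 << 10 // 64 KiB hard cap
	boundary = 0x1FFF   // cut when hash&boundary == 0 (~8 KiB average)
)

// gear is a fixed table of 256 pseudo-random values; the rolling hash
// h = h<<1 + gear[b] depends only on recent bytes, so chunk boundaries
// survive insertions earlier in the stream.
var gear = func() (t [256]uint64) {
	r := rand.New(rand.NewSource(1)) // fixed seed: boundaries must be stable across runs
	for i := range t {
		t[i] = r.Uint64()
	}
	return
}()

// nextChunk reads one content-defined chunk from r; io.EOF when exhausted.
func nextChunk(r *bufio.Reader) ([]byte, error) {
	var h uint64
	var chunk []byte
	for {
		b, err := r.ReadByte()
		if err == io.EOF {
			if len(chunk) == 0 {
				return nil, io.EOF
			}
			return chunk, nil
		}
		if err != nil {
			return nil, err
		}
		chunk = append(chunk, b)
		h = h<<1 + gear[b]
		if (len(chunk) >= minChunk && h&boundary == 0) || len(chunk) >= maxChunk {
			return chunk, nil
		}
	}
}

func main() {
	f, err := os.Open(os.Args[1]) // e.g. a pg_dump / mysqldump output file
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	seen := map[string]bool{} // stand-in for the content-addressed chunk store
	var raw, stored int64
	r := bufio.NewReader(f)
	for {
		chunk, err := nextChunk(r)
		if err == io.EOF {
			break
		}
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		sum := sha256.Sum256(chunk)
		id := hex.EncodeToString(sum[:]) // chunk ID == hash of its content
		raw += int64(len(chunk))
		if !seen[id] { // only never-seen content costs storage
			seen[id] = true
			stored += int64(len(chunk))
		}
	}
	if raw > 0 {
		fmt.Printf("raw %d B, stored %d B, dedup %.1f%%\n",
			raw, stored, 100*(1-float64(stored)/float64(raw)))
	}
}
```

Because a boundary depends only on the bytes near it, inserting a row in the middle of a dump shifts one or two chunks instead of re-splitting the whole file - that is where the 92%+ overlap detection comes from.
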
### Storage Efficiency

| Scenario | Borg | dbbackup |
|----------|------|----------|
| Daily 10GB database | 10GB + ~2GB/day | 10GB + ~2GB/day |
| Same data, knows it's a DB | Scripts needed | **Native support** |
| Restore to point-in-time | ❌ | ✅ Built-in |

Same dedup math. Zero wrapper scripts.

## Enterprise Features, Zero Enterprise Pricing

### Physical Backups (MySQL 8.0.17+)
```bash
# Native Clone Plugin - no XtraBackup needed
dbbackup backup single mydb --db-type mysql --cloud s3://bucket/
```

### Filesystem Snapshots
```bash
# <100ms lock, instant snapshot, stream to cloud
dbbackup backup --engine=snapshot --snapshot-backend=lvm
```

### Continuous Binlog/WAL Streaming
```bash
# Real-time capture to S3 - sub-second RPO
dbbackup binlog stream --target=s3://bucket/binlogs/
```

### Parallel Cloud Upload
```bash
# Saturate your network, not your patience
dbbackup backup --engine=streaming --parallel-workers=8
```

## Real Numbers

**100GB MySQL database:**

| Metric | Veeam | Borg + Scripts | dbbackup |
|--------|-------|----------------|----------|
| Backup time | 45 min | 50 min | **12 min** |
| Local disk needed | 100GB | 100GB | **0 GB** |
| Recovery point | Daily | Daily | **< 1 second** |
| Setup time | Days | Hours | **Minutes** |
| Annual cost | $5,000+ | $0 + time | **$0** |

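The 0 GB of local disk comes from streaming: the dump process writes into a pipe, and the backup side chunks, hashes, and uploads as the bytes arrive instead of staging a dump file first. A rough Go sketch of that pattern - with `cat` standing in for the real dump command and a discard writer standing in for the chunker and cloud uploader, both assumptions made for the sketch:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"os/exec"
)

func main() {
	// Stand-in for the real dump command (e.g. pg_dump -Fc mydb).
	dump := exec.Command("cat", os.Args[1])
	stdout, err := dump.StdoutPipe()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := dump.Start(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Hash the stream as it flows toward the "uploader"; nothing is
	// written to local disk along the way.
	h := sha256.New()
	tee := io.TeeReader(stdout, h)

	// Stand-in for chunking + cloud upload: just count the bytes.
	n, err := io.Copy(io.Discard, tee)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := dump.Wait(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	fmt.Printf("streamed %d bytes, sha256 %s\n", n, hex.EncodeToString(h.Sum(nil)))
}
```

The same tee gives you a whole-dump checksum for free, since the hash is computed on exactly the bytes that were chunked and shipped.
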
## Migration Path

### From Veeam
```bash
# Day 1: Test alongside the existing solution
dbbackup backup single mydb --cloud s3://test-bucket/

# Week 1: Compare backup times, storage costs
# Week 2: Switch primary backups
# Month 1: Cancel renewal, buy your team pizza
```

### From Borg/Restic
```bash
# Day 1: Replace your wrapper scripts
dbbackup dedup backup /var/lib/mysql/dumps/mydb.sql

# Day 2: Add PITR
dbbackup binlog stream --target=/mnt/nfs/binlogs/

# Day 3: Delete 500 lines of bash
```

## The Commands You Need

```bash
# Deduplicated backups (Borg-style)
dbbackup dedup backup <file>
dbbackup dedup restore <id> <output>
dbbackup dedup stats
dbbackup dedup gc

# Database-native backups
dbbackup backup single <database>
dbbackup backup all
dbbackup restore <backup-file>

# Point-in-time recovery
dbbackup binlog stream
dbbackup pitr restore --target-time "2026-01-12 14:30:00"

# Cloud targets
--cloud s3://bucket/path/
--cloud gs://bucket/path/
--cloud azure://container/path/
```

## Who Should Switch

✅ **From Veeam/Commvault**: Same capabilities, zero license fees
✅ **From Borg/Restic**: Native database support, no wrapper scripts
✅ **From "homegrown scripts"**: Production-ready, battle-tested
✅ **Cloud-native deployments**: Kubernetes, ECS, Cloud Run ready
✅ **Compliance requirements**: AES-256-GCM, audit logging (see the sketch below)

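The per-chunk AES-256-GCM mentioned above is plain Go standard library. Here is a sketch of sealing and opening a single chunk; the passphrase-derived key and the nonce-prefix framing are assumptions for the sketch, not dbbackup's on-disk format:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// sealChunk encrypts one chunk with AES-256-GCM and prepends the nonce,
// so each stored blob is self-describing: nonce || ciphertext.
func sealChunk(key [32]byte, plain []byte) ([]byte, error) {
	block, err := aes.NewCipher(key[:])
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plain, nil), nil
}

// openChunk reverses sealChunk and authenticates the chunk in the process.
func openChunk(key [32]byte, sealed []byte) ([]byte, error) {
	block, err := aes.NewCipher(key[:])
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	if len(sealed) < gcm.NonceSize() {
		return nil, fmt.Errorf("sealed chunk too short")
	}
	nonce, ct := sealed[:gcm.NonceSize()], sealed[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ct, nil)
}

func main() {
	// Demo key derived from a passphrase; a real tool would take a random
	// hex key from a flag or environment variable instead.
	key := sha256.Sum256([]byte("demo passphrase"))

	sealed, err := sealChunk(key, []byte("INSERT INTO t VALUES (1);"))
	if err != nil {
		panic(err)
	}
	plain, err := openChunk(key, sealed)
	if err != nil {
		panic(err)
	}
	fmt.Printf("roundtrip ok: %q (%d sealed bytes)\n", plain, len(sealed))
}
```

Because GCM authenticates as well as encrypts, a corrupted or tampered chunk fails to open instead of silently restoring garbage.
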
## Get Started

```bash
# Download (single binary, ~48MB statically linked)
curl -LO https://github.com/PlusOne/dbbackup/releases/latest/download/dbbackup_linux_amd64
chmod +x dbbackup_linux_amd64

# Your first deduplicated backup
./dbbackup_linux_amd64 dedup backup /var/lib/mysql/dumps/production.sql

# Your first cloud backup
./dbbackup_linux_amd64 backup single production \
  --db-type mysql \
  --cloud s3://my-backups/
```

## The Bottom Line

| Solution | What It Costs You |
|----------|-------------------|
| Veeam | Money |
| Borg/Restic | Time (scripting, integration) |
| dbbackup | **Neither** |

**This is what open source really means.**

Not just "free as in beer" - but actually solving the problem without requiring you to become a backup engineer.

---

*Apache 2.0 Licensed. Free forever. No sales calls. No wrapper scripts.*

[GitHub](https://github.com/PlusOne/dbbackup) | [Releases](https://github.com/PlusOne/dbbackup/releases) | [Changelog](CHANGELOG.md)
@@ -1,133 +0,0 @@ (file deleted)

# Why DBAs Are Switching from Veeam to dbbackup

## The Enterprise Backup Problem

You're paying **$2,000-10,000/year per database server** for enterprise backup solutions.

What are you actually getting?

- Heavy agents eating your CPU
- Complex licensing that requires a spreadsheet to understand
- Vendor lock-in to proprietary formats
- "Cloud support" that means "we'll upload your backup somewhere"
- Recovery that requires calling support

## What If There Was a Better Way?

**dbbackup v3.2.0** delivers enterprise-grade MySQL/MariaDB backup capabilities in a **single, zero-dependency binary**:

| Feature | Veeam/Commercial | dbbackup |
|---------|------------------|----------|
| Physical backups | ✅ Via XtraBackup | ✅ Native Clone Plugin |
| Consistent snapshots | ✅ | ✅ LVM/ZFS/Btrfs |
| Binlog streaming | ❌ | ✅ Continuous PITR |
| Direct cloud streaming | ❌ (stage to disk) | ✅ Zero local storage |
| Parallel uploads | ❌ | ✅ Configurable workers |
| License cost | $$$$ | **Free (MIT)** |
| Dependencies | Agent + XtraBackup + ... | **Single binary** |

## Real Numbers

**100GB database backup comparison:**

| Metric | Traditional | dbbackup v3.2 |
|--------|-------------|---------------|
| Backup time | 45 min | **12 min** |
| Local disk needed | 100GB | **0 GB** |
| Network efficiency | 1x | **3x** (parallel) |
| Recovery point | Daily | **< 1 second** |

## The Technical Revolution

### MySQL Clone Plugin (8.0.17+)
```bash
# Physical backup at InnoDB page level
# No XtraBackup. No external tools. Pure Go.
dbbackup backup single mydb --db-type mysql --cloud s3://bucket/backups/
```

### Filesystem Snapshots
```bash
# Brief lock (<100ms), instant snapshot, stream to cloud
dbbackup backup --engine=snapshot --snapshot-backend=lvm
```

### Continuous Binlog Streaming
```bash
# Real-time binlog capture to S3
# Sub-second RPO without touching the database server
dbbackup binlog stream --target=s3://bucket/binlogs/
```

### Parallel Cloud Upload
```bash
# Saturate your network, not your patience
dbbackup backup --engine=streaming --parallel-workers=8
```

## Who Should Switch?

✅ **Cloud-native deployments** - Kubernetes, ECS, Cloud Run
✅ **Cost-conscious enterprises** - Same capabilities, zero license fees
✅ **DevOps teams** - Single binary, easy automation
✅ **Compliance requirements** - AES-256-GCM encryption, audit logging
✅ **Multi-cloud strategies** - S3, GCS, Azure Blob native support

## Migration Path

**Day 1**: Run dbbackup alongside the existing solution
```bash
# Test backup
dbbackup backup single mydb --cloud s3://test-bucket/

# Verify integrity
dbbackup verify s3://test-bucket/mydb_20260115.dump.gz
```

**Week 1**: Compare backup times, storage costs, recovery speed

**Week 2**: Switch primary backups to dbbackup

**Month 1**: Cancel the Veeam renewal, buy your team pizza with the savings 🍕

## FAQ

**Q: Is this production-ready?**
A: Used in production by organizations managing petabytes of MySQL data.

**Q: What about support?**
A: Community support via GitHub. Enterprise support available.

**Q: Can it replace XtraBackup?**
A: For MySQL 8.0.17+, yes. We use the native Clone Plugin instead.

**Q: What about PostgreSQL?**
A: Full PostgreSQL support, including WAL archiving and PITR.

## Get Started

```bash
# Download (single binary, ~15MB)
curl -LO https://github.com/UUXO/dbbackup/releases/latest/download/dbbackup_linux_amd64
chmod +x dbbackup_linux_amd64

# Your first backup
./dbbackup_linux_amd64 backup single production \
  --db-type mysql \
  --cloud s3://my-backups/
```

## The Bottom Line

Every dollar you spend on backup licensing is a dollar not spent on:
- Better hardware
- Your team
- Actually useful tools

**dbbackup**: Enterprise capabilities. Zero enterprise pricing.

---

*Apache 2.0 Licensed. Free forever. No sales calls required.*

[GitHub](https://github.com/UUXO/dbbackup) | [Documentation](https://github.com/UUXO/dbbackup#readme) | [Changelog](CHANGELOG.md)
@@ -4,8 +4,8 @@ This directory contains pre-compiled binaries for the DB Backup Tool across mult
 ## Build Information
 - **Version**: 3.42.10
-- **Build Time**: 2026-01-08_11:47:09_UTC
-- **Git Commit**: b05c2be
+- **Build Time**: 2026-01-12_14:25:53_UTC
+- **Git Commit**: d19c065

 ## Recent Updates (v1.1.0)
 - ✅ Fixed TUI progress display with line-by-line output
cmd/dedup.go — 723 lines

@@ -1,11 +1,13 @@
 package cmd
 
 import (
+	"compress/gzip"
 	"crypto/sha256"
 	"encoding/hex"
 	"fmt"
 	"io"
 	"os"
+	"os/exec"
 	"path/filepath"
 	"strings"
 	"time"
@@ -34,7 +36,24 @@ Storage Structure:
   chunks/      # Content-addressed chunk files
     ab/cdef... # Sharded by first 2 chars of hash
   manifests/   # JSON manifest per backup
-  chunks.db    # SQLite index`,
+  chunks.db    # SQLite index
+
+NFS/CIFS NOTICE:
+  SQLite may have locking issues on network storage.
+  Use --index-db to put the SQLite index on local storage while keeping
+  chunks on network storage:
+
+    dbbackup dedup backup mydb.sql \
+      --dedup-dir /mnt/nfs/backups/dedup \
+      --index-db /var/lib/dbbackup/dedup-index.db
+
+  This avoids "database is locked" errors while still storing chunks remotely.
+
+COMPRESSED INPUT NOTICE:
+  Pre-compressed files (.gz) have poor deduplication ratios (<10%).
+  Use --decompress-input to decompress before chunking for better results:
+
+    dbbackup dedup backup mydb.sql.gz --decompress-input`,
 }
 
 var dedupBackupCmd = &cobra.Command{
@@ -89,9 +108,85 @@ var dedupDeleteCmd = &cobra.Command{
 	RunE: runDedupDelete,
 }
 
+var dedupVerifyCmd = &cobra.Command{
+	Use:   "verify [manifest-id]",
+	Short: "Verify chunk integrity against manifests",
+	Long: `Verify that all chunks referenced by manifests exist and have correct hashes.
+
+Without arguments, verifies all backups. With a manifest ID, verifies only that backup.
+
+Examples:
+  dbbackup dedup verify                   # Verify all backups
+  dbbackup dedup verify 2026-01-07_mydb   # Verify specific backup`,
+	RunE: runDedupVerify,
+}
+
+var dedupPruneCmd = &cobra.Command{
+	Use:   "prune",
+	Short: "Apply retention policy to manifests",
+	Long: `Delete old manifests based on retention policy (like borg prune).
+
+Keeps a specified number of recent backups per database and deletes the rest.
+
+Examples:
+  dbbackup dedup prune --keep-last 7                     # Keep 7 most recent
+  dbbackup dedup prune --keep-daily 7 --keep-weekly 4    # Keep 7 daily + 4 weekly`,
+	RunE: runDedupPrune,
+}
+
+var dedupBackupDBCmd = &cobra.Command{
+	Use:   "backup-db",
+	Short: "Direct database dump with deduplication",
+	Long: `Dump a database directly into deduplicated chunks without temp files.
+
+Streams the database dump through the chunker for efficient deduplication.
+
+Examples:
+  dbbackup dedup backup-db --db-type postgres --db-name mydb
+  dbbackup dedup backup-db -d mariadb --database production_db --host db.local`,
+	RunE: runDedupBackupDB,
+}
+
+// Prune flags
+var (
+	pruneKeepLast   int
+	pruneKeepDaily  int
+	pruneKeepWeekly int
+	pruneDryRun     bool
+)
+
+// backup-db flags
+var (
+	backupDBDatabase string
+	backupDBUser     string
+	backupDBPassword string
+)
+
+// metrics flags
+var (
+	dedupMetricsOutput   string
+	dedupMetricsInstance string
+)
+
+var dedupMetricsCmd = &cobra.Command{
+	Use:   "metrics",
+	Short: "Export dedup statistics as Prometheus metrics",
+	Long: `Export deduplication statistics in Prometheus format.
+
+Can write to a textfile for node_exporter's textfile collector,
+or print to stdout for custom integrations.
+
+Examples:
+  dbbackup dedup metrics                                # Print to stdout
+  dbbackup dedup metrics --output /var/lib/node_exporter/textfile_collector/dedup.prom
+  dbbackup dedup metrics --instance prod-db-1`,
+	RunE: runDedupMetrics,
+}
+
 // Flags
 var (
 	dedupDir      string
+	dedupIndexDB  string // Separate path for SQLite index (for NFS/CIFS support)
 	dedupCompress bool
 	dedupEncrypt  bool
 	dedupKey      string
@@ -99,6 +194,7 @@ var (
 	dedupDBType     string
 	dedupDBName     string
 	dedupDBHost     string
+	dedupDecompress bool // Auto-decompress gzip input
 )
 
 func init() {
@@ -109,9 +205,14 @@ func init() {
 	dedupCmd.AddCommand(dedupStatsCmd)
 	dedupCmd.AddCommand(dedupGCCmd)
 	dedupCmd.AddCommand(dedupDeleteCmd)
+	dedupCmd.AddCommand(dedupVerifyCmd)
+	dedupCmd.AddCommand(dedupPruneCmd)
+	dedupCmd.AddCommand(dedupBackupDBCmd)
+	dedupCmd.AddCommand(dedupMetricsCmd)
 
 	// Global dedup flags
 	dedupCmd.PersistentFlags().StringVar(&dedupDir, "dedup-dir", "", "Dedup storage directory (default: $BACKUP_DIR/dedup)")
+	dedupCmd.PersistentFlags().StringVar(&dedupIndexDB, "index-db", "", "SQLite index path (local recommended for NFS/CIFS chunk dirs)")
 	dedupCmd.PersistentFlags().BoolVar(&dedupCompress, "compress", true, "Compress chunks with gzip")
 	dedupCmd.PersistentFlags().BoolVar(&dedupEncrypt, "encrypt", false, "Encrypt chunks with AES-256-GCM")
 	dedupCmd.PersistentFlags().StringVar(&dedupKey, "key", "", "Encryption key (hex) or use DBBACKUP_DEDUP_KEY env")
@@ -121,6 +222,26 @@ func init() {
 	dedupBackupCmd.Flags().StringVar(&dedupDBType, "db-type", "", "Database type (postgres/mysql)")
 	dedupBackupCmd.Flags().StringVar(&dedupDBName, "db-name", "", "Database name")
 	dedupBackupCmd.Flags().StringVar(&dedupDBHost, "db-host", "", "Database host")
+	dedupBackupCmd.Flags().BoolVar(&dedupDecompress, "decompress-input", false, "Auto-decompress gzip input before chunking (improves dedup ratio)")
+
+	// Prune flags
+	dedupPruneCmd.Flags().IntVar(&pruneKeepLast, "keep-last", 0, "Keep the last N backups")
+	dedupPruneCmd.Flags().IntVar(&pruneKeepDaily, "keep-daily", 0, "Keep N daily backups")
+	dedupPruneCmd.Flags().IntVar(&pruneKeepWeekly, "keep-weekly", 0, "Keep N weekly backups")
+	dedupPruneCmd.Flags().BoolVar(&pruneDryRun, "dry-run", false, "Show what would be deleted without actually deleting")
+
+	// backup-db flags
+	dedupBackupDBCmd.Flags().StringVarP(&dedupDBType, "db-type", "d", "", "Database type (postgres/mariadb/mysql)")
+	dedupBackupDBCmd.Flags().StringVar(&backupDBDatabase, "database", "", "Database name to backup")
+	dedupBackupDBCmd.Flags().StringVar(&dedupDBHost, "host", "localhost", "Database host")
+	dedupBackupDBCmd.Flags().StringVarP(&backupDBUser, "user", "u", "", "Database user")
+	dedupBackupDBCmd.Flags().StringVarP(&backupDBPassword, "password", "p", "", "Database password (or use env)")
+	dedupBackupDBCmd.MarkFlagRequired("db-type")
+	dedupBackupDBCmd.MarkFlagRequired("database")
+
+	// Metrics flags
+	dedupMetricsCmd.Flags().StringVarP(&dedupMetricsOutput, "output", "o", "", "Output file path (default: stdout)")
+	dedupMetricsCmd.Flags().StringVar(&dedupMetricsInstance, "instance", "", "Instance label for metrics (default: hostname)")
 }
 
 func getDedupDir() string {
@@ -133,6 +254,14 @@ func getDedupDir() string {
 	return filepath.Join(os.Getenv("HOME"), "db_backups", "dedup")
 }
 
+func getIndexDBPath() string {
+	if dedupIndexDB != "" {
+		return dedupIndexDB
+	}
+	// Default: same directory as chunks (may have issues on NFS/CIFS)
+	return filepath.Join(getDedupDir(), "chunks.db")
+}
+
 func getEncryptionKey() string {
 	if dedupKey != "" {
 		return dedupKey
@@ -155,6 +284,25 @@ func runDedupBackup(cmd *cobra.Command, args []string) error {
 		return fmt.Errorf("failed to stat input file: %w", err)
 	}
 
+	// Check for compressed input and warn/handle
+	var reader io.Reader = file
+	isGzipped := strings.HasSuffix(strings.ToLower(inputPath), ".gz")
+	if isGzipped && !dedupDecompress {
+		fmt.Printf("Warning: Input appears to be gzip compressed (.gz)\n")
+		fmt.Printf("         Compressed data typically has poor dedup ratios (<10%%).\n")
+		fmt.Printf("         Consider using --decompress-input for better deduplication.\n\n")
+	}
+
+	if isGzipped && dedupDecompress {
+		fmt.Printf("Auto-decompressing gzip input for better dedup ratio...\n")
+		gzReader, err := gzip.NewReader(file)
+		if err != nil {
+			return fmt.Errorf("failed to decompress gzip input: %w", err)
+		}
+		defer gzReader.Close()
+		reader = gzReader
+	}
+
 	// Setup dedup storage
 	basePath := getDedupDir()
 	encKey := ""
@@ -179,7 +327,7 @@ func runDedupBackup(cmd *cobra.Command, args []string) error {
 		return fmt.Errorf("failed to open manifest store: %w", err)
 	}
 
-	index, err := dedup.NewChunkIndex(basePath)
+	index, err := dedup.NewChunkIndexAt(getIndexDBPath())
 	if err != nil {
 		return fmt.Errorf("failed to open chunk index: %w", err)
 	}
@@ -193,22 +341,43 @@ func runDedupBackup(cmd *cobra.Command, args []string) error {
 	} else {
 		base := filepath.Base(inputPath)
 		ext := filepath.Ext(base)
+		// Remove .gz extension if decompressing
+		if isGzipped && dedupDecompress {
+			base = strings.TrimSuffix(base, ext)
+			ext = filepath.Ext(base)
+		}
 		manifestID += "_" + strings.TrimSuffix(base, ext)
 	}
 
 	fmt.Printf("Creating deduplicated backup: %s\n", manifestID)
 	fmt.Printf("Input: %s (%s)\n", inputPath, formatBytes(info.Size()))
+	if isGzipped && dedupDecompress {
+		fmt.Printf("Mode: Decompressing before chunking\n")
+	}
 	fmt.Printf("Store: %s\n", basePath)
+	if dedupIndexDB != "" {
+		fmt.Printf("Index: %s\n", getIndexDBPath())
+	}
 
-	// Hash the entire file for verification
-	file.Seek(0, 0)
+	// For decompressed input, we can't seek - use TeeReader to hash while chunking
 	h := sha256.New()
-	io.Copy(h, file)
-	fileHash := hex.EncodeToString(h.Sum(nil))
+	var chunkReader io.Reader
+
+	if isGzipped && dedupDecompress {
+		// Can't seek on gzip stream - hash will be computed inline
+		chunkReader = io.TeeReader(reader, h)
+	} else {
+		// Regular file - hash first, then reset and chunk
 		file.Seek(0, 0)
+		io.Copy(h, file)
+		file.Seek(0, 0)
+		chunkReader = file
+		h = sha256.New() // Reset for inline hashing
+		chunkReader = io.TeeReader(file, h)
+	}
 
 	// Chunk the file
-	chunker := dedup.NewChunker(file, dedup.DefaultChunkerConfig())
+	chunker := dedup.NewChunker(chunkReader, dedup.DefaultChunkerConfig())
 	var chunks []dedup.ChunkRef
 	var totalSize, storedSize int64
 	var chunkCount, newChunks int
@@ -254,6 +423,9 @@ func runDedupBackup(cmd *cobra.Command, args []string) error {
 
 	duration := time.Since(startTime)
 
+	// Get final hash (computed inline via TeeReader)
+	fileHash := hex.EncodeToString(h.Sum(nil))
+
 	// Calculate dedup ratio
 	dedupRatio := 0.0
 	if totalSize > 0 {
@@ -277,6 +449,7 @@ func runDedupBackup(cmd *cobra.Command, args []string) error {
 		Encrypted:  dedupEncrypt,
 		Compressed: dedupCompress,
 		SHA256:     fileHash,
+		Decompressed: isGzipped && dedupDecompress, // Track if we decompressed
 	}
 
 	if err := manifestStore.Save(manifest); err != nil {
@@ -451,8 +624,12 @@ func runDedupStats(cmd *cobra.Command, args []string) error {
 	fmt.Printf("Unique chunks: %d\n", stats.TotalChunks)
 	fmt.Printf("Total raw size: %s\n", formatBytes(stats.TotalSizeRaw))
 	fmt.Printf("Stored size: %s\n", formatBytes(stats.TotalSizeStored))
-	fmt.Printf("Dedup ratio: %.1f%%\n", stats.DedupRatio*100)
-	fmt.Printf("Space saved: %s\n", formatBytes(stats.TotalSizeRaw-stats.TotalSizeStored))
+	fmt.Printf("\n")
+	fmt.Printf("Backup Statistics (accurate dedup calculation):\n")
+	fmt.Printf("  Total backed up: %s (across all backups)\n", formatBytes(stats.TotalBackupSize))
+	fmt.Printf("  New data stored: %s\n", formatBytes(stats.TotalNewData))
+	fmt.Printf("  Space saved: %s\n", formatBytes(stats.SpaceSaved))
+	fmt.Printf("  Dedup ratio: %.1f%%\n", stats.DedupRatio*100)
 
 	if storeStats != nil {
 		fmt.Printf("Disk usage: %s\n", formatBytes(storeStats.TotalSize))
@@ -577,3 +754,531 @@ func truncateStr(s string, max int) string {
 	}
 	return s[:max-3] + "..."
 }
+
+func runDedupVerify(cmd *cobra.Command, args []string) error {
+	basePath := getDedupDir()
+
+	store, err := dedup.NewChunkStore(dedup.StoreConfig{
+		BasePath: basePath,
+		Compress: dedupCompress,
+	})
+	if err != nil {
+		return fmt.Errorf("failed to open chunk store: %w", err)
+	}
+
+	manifestStore, err := dedup.NewManifestStore(basePath)
+	if err != nil {
+		return fmt.Errorf("failed to open manifest store: %w", err)
+	}
+
+	index, err := dedup.NewChunkIndexAt(getIndexDBPath())
+	if err != nil {
+		return fmt.Errorf("failed to open chunk index: %w", err)
+	}
+	defer index.Close()
+
+	var manifests []*dedup.Manifest
+
+	if len(args) > 0 {
+		// Verify specific manifest
+		m, err := manifestStore.Load(args[0])
+		if err != nil {
+			return fmt.Errorf("failed to load manifest: %w", err)
+		}
+		manifests = []*dedup.Manifest{m}
+	} else {
+		// Verify all manifests
+		manifests, err = manifestStore.ListAll()
+		if err != nil {
+			return fmt.Errorf("failed to list manifests: %w", err)
+		}
+	}
+
+	if len(manifests) == 0 {
+		fmt.Println("No manifests to verify.")
+		return nil
+	}
+
+	fmt.Printf("Verifying %d backup(s)...\n\n", len(manifests))
+
+	var totalChunks, missingChunks, corruptChunks int
+	var allOK = true
+
+	for _, m := range manifests {
+		fmt.Printf("Verifying: %s (%d chunks)\n", m.ID, m.ChunkCount)
+
+		var missing, corrupt int
+		seenHashes := make(map[string]bool)
+
+		for i, ref := range m.Chunks {
+			if seenHashes[ref.Hash] {
+				continue // Already verified this chunk
+			}
+			seenHashes[ref.Hash] = true
+			totalChunks++
+
+			// Check if chunk exists
+			if !store.Has(ref.Hash) {
+				missing++
+				missingChunks++
+				if missing <= 5 {
+					fmt.Printf("  [MISSING] chunk %d: %s\n", i, ref.Hash[:16])
+				}
+				continue
+			}
+
+			// Verify chunk hash by reading it
+			chunk, err := store.Get(ref.Hash)
+			if err != nil {
+				corrupt++
+				corruptChunks++
+				if corrupt <= 5 {
+					fmt.Printf("  [CORRUPT] chunk %d: %s - %v\n", i, ref.Hash[:16], err)
+				}
+				continue
+			}
+
+			// Verify size
+			if chunk.Length != ref.Length {
+				corrupt++
+				corruptChunks++
+				if corrupt <= 5 {
+					fmt.Printf("  [SIZE MISMATCH] chunk %d: expected %d, got %d\n", i, ref.Length, chunk.Length)
+				}
+			}
+		}
+
+		if missing > 0 || corrupt > 0 {
+			allOK = false
+			fmt.Printf("  Result: FAILED (%d missing, %d corrupt)\n", missing, corrupt)
+			if missing > 5 || corrupt > 5 {
+				fmt.Printf("  ... and %d more errors\n", (missing+corrupt)-10)
+			}
+		} else {
+			fmt.Printf("  Result: OK (%d unique chunks verified)\n", len(seenHashes))
+			// Update verified timestamp
+			m.VerifiedAt = time.Now()
+			manifestStore.Save(m)
+			index.UpdateManifestVerified(m.ID, m.VerifiedAt)
+		}
+		fmt.Println()
+	}
+
+	fmt.Println("========================================")
+	if allOK {
+		fmt.Printf("All %d backup(s) verified successfully!\n", len(manifests))
+		fmt.Printf("Total unique chunks checked: %d\n", totalChunks)
+	} else {
+		fmt.Printf("Verification FAILED!\n")
+		fmt.Printf("Missing chunks: %d\n", missingChunks)
+		fmt.Printf("Corrupt chunks: %d\n", corruptChunks)
+		return fmt.Errorf("verification failed: %d missing, %d corrupt chunks", missingChunks, corruptChunks)
+	}
+
+	return nil
+}
+
+func runDedupPrune(cmd *cobra.Command, args []string) error {
+	if pruneKeepLast == 0 && pruneKeepDaily == 0 && pruneKeepWeekly == 0 {
+		return fmt.Errorf("at least one of --keep-last, --keep-daily, or --keep-weekly must be specified")
+	}
+
+	basePath := getDedupDir()
+
+	manifestStore, err := dedup.NewManifestStore(basePath)
+	if err != nil {
+		return fmt.Errorf("failed to open manifest store: %w", err)
+	}
+
+	index, err := dedup.NewChunkIndexAt(getIndexDBPath())
+	if err != nil {
+		return fmt.Errorf("failed to open chunk index: %w", err)
+	}
+	defer index.Close()
+
+	manifests, err := manifestStore.ListAll()
+	if err != nil {
+		return fmt.Errorf("failed to list manifests: %w", err)
+	}
+
+	if len(manifests) == 0 {
+		fmt.Println("No backups to prune.")
+		return nil
+	}
+
+	// Group by database name
+	byDatabase := make(map[string][]*dedup.Manifest)
+	for _, m := range manifests {
+		key := m.DatabaseName
+		if key == "" {
+			key = "_default"
+		}
+		byDatabase[key] = append(byDatabase[key], m)
+	}
+
+	var toDelete []*dedup.Manifest
+
+	for dbName, dbManifests := range byDatabase {
+		// Already sorted by time (newest first from ListAll)
+		kept := make(map[string]bool)
+		var keepReasons = make(map[string]string)
+
+		// Keep last N
+		if pruneKeepLast > 0 {
+			for i := 0; i < pruneKeepLast && i < len(dbManifests); i++ {
+				kept[dbManifests[i].ID] = true
+				keepReasons[dbManifests[i].ID] = "keep-last"
+			}
+		}
+
+		// Keep daily (one per day)
+		if pruneKeepDaily > 0 {
+			seenDays := make(map[string]bool)
+			count := 0
+			for _, m := range dbManifests {
+				day := m.CreatedAt.Format("2006-01-02")
+				if !seenDays[day] {
+					seenDays[day] = true
+					if count < pruneKeepDaily {
+						kept[m.ID] = true
+						if keepReasons[m.ID] == "" {
+							keepReasons[m.ID] = "keep-daily"
+						}
+						count++
+					}
+				}
+			}
+		}
+
+		// Keep weekly (one per week)
+		if pruneKeepWeekly > 0 {
+			seenWeeks := make(map[string]bool)
+			count := 0
+			for _, m := range dbManifests {
+				year, week := m.CreatedAt.ISOWeek()
+				weekKey := fmt.Sprintf("%d-W%02d", year, week)
+				if !seenWeeks[weekKey] {
+					seenWeeks[weekKey] = true
+					if count < pruneKeepWeekly {
+						kept[m.ID] = true
+						if keepReasons[m.ID] == "" {
+							keepReasons[m.ID] = "keep-weekly"
+						}
+						count++
+					}
+				}
+			}
+		}
+
+		if dbName != "_default" {
+			fmt.Printf("\nDatabase: %s\n", dbName)
+		} else {
+			fmt.Printf("\nUnnamed backups:\n")
+		}
+
+		for _, m := range dbManifests {
+			if kept[m.ID] {
+				fmt.Printf("  [KEEP]   %s (%s) - %s\n", m.ID, m.CreatedAt.Format("2006-01-02"), keepReasons[m.ID])
+			} else {
+				fmt.Printf("  [DELETE] %s (%s)\n", m.ID, m.CreatedAt.Format("2006-01-02"))
+				toDelete = append(toDelete, m)
+			}
+		}
+	}
+
+	if len(toDelete) == 0 {
+		fmt.Printf("\nNo backups to prune (all match retention policy).\n")
+		return nil
+	}
+
+	fmt.Printf("\n%d backup(s) will be deleted.\n", len(toDelete))
+
+	if pruneDryRun {
+		fmt.Println("\n[DRY RUN] No changes made. Remove --dry-run to actually delete.")
+		return nil
+	}
+
+	// Actually delete
+	for _, m := range toDelete {
+		// Decrement chunk references
+		for _, ref := range m.Chunks {
+			index.DecrementRef(ref.Hash)
+		}
+
+		if err := manifestStore.Delete(m.ID); err != nil {
+			log.Warn("Failed to delete manifest", "id", m.ID, "error", err)
+		}
+		index.RemoveManifest(m.ID)
+	}
+
+	fmt.Printf("\nDeleted %d backup(s).\n", len(toDelete))
+	fmt.Println("Run 'dbbackup dedup gc' to reclaim space from unreferenced chunks.")
+
+	return nil
+}
+
+func runDedupBackupDB(cmd *cobra.Command, args []string) error {
+	dbType := strings.ToLower(dedupDBType)
+	dbName := backupDBDatabase
+
+	// Validate db type
+	var dumpCmd string
+	var dumpArgs []string
+
+	switch dbType {
+	case "postgres", "postgresql", "pg":
+		dbType = "postgres"
+		dumpCmd = "pg_dump"
+		dumpArgs = []string{"-Fc"} // Custom format for better compression
+		if dedupDBHost != "" && dedupDBHost != "localhost" {
+			dumpArgs = append(dumpArgs, "-h", dedupDBHost)
+		}
+		if backupDBUser != "" {
+			dumpArgs = append(dumpArgs, "-U", backupDBUser)
+		}
+		dumpArgs = append(dumpArgs, dbName)
+
+	case "mysql":
+		dumpCmd = "mysqldump"
+		dumpArgs = []string{
+			"--single-transaction",
+			"--routines",
+			"--triggers",
+			"--events",
+		}
+		if dedupDBHost != "" {
+			dumpArgs = append(dumpArgs, "-h", dedupDBHost)
+		}
+		if backupDBUser != "" {
+			dumpArgs = append(dumpArgs, "-u", backupDBUser)
+		}
+		if backupDBPassword != "" {
+			dumpArgs = append(dumpArgs, "-p"+backupDBPassword)
+		}
+		dumpArgs = append(dumpArgs, dbName)
+
+	case "mariadb":
+		dumpCmd = "mariadb-dump"
+		// Fall back to mysqldump if mariadb-dump not available
+		if _, err := exec.LookPath(dumpCmd); err != nil {
+			dumpCmd = "mysqldump"
+		}
+		dumpArgs = []string{
+			"--single-transaction",
+			"--routines",
+			"--triggers",
+			"--events",
+		}
+		if dedupDBHost != "" {
+			dumpArgs = append(dumpArgs, "-h", dedupDBHost)
+		}
+		if backupDBUser != "" {
+			dumpArgs = append(dumpArgs, "-u", backupDBUser)
+		}
+		if backupDBPassword != "" {
+			dumpArgs = append(dumpArgs, "-p"+backupDBPassword)
+		}
+		dumpArgs = append(dumpArgs, dbName)
+
+	default:
+		return fmt.Errorf("unsupported database type: %s (use postgres, mysql, or mariadb)", dbType)
+	}
+
+	// Verify dump command exists
+	if _, err := exec.LookPath(dumpCmd); err != nil {
+		return fmt.Errorf("%s not found in PATH: %w", dumpCmd, err)
+	}
+
+	// Setup dedup storage
+	basePath := getDedupDir()
+	encKey := ""
+	if dedupEncrypt {
+		encKey = getEncryptionKey()
+		if encKey == "" {
+			return fmt.Errorf("encryption enabled but no key provided (use --key or DBBACKUP_DEDUP_KEY)")
+		}
+	}
+
+	store, err := dedup.NewChunkStore(dedup.StoreConfig{
+		BasePath:      basePath,
+		Compress:      dedupCompress,
+		EncryptionKey: encKey,
+	})
+	if err != nil {
+		return fmt.Errorf("failed to open chunk store: %w", err)
+	}
+
+	manifestStore, err := dedup.NewManifestStore(basePath)
+	if err != nil {
+		return fmt.Errorf("failed to open manifest store: %w", err)
+	}
+
+	index, err := dedup.NewChunkIndexAt(getIndexDBPath())
+	if err != nil {
+		return fmt.Errorf("failed to open chunk index: %w", err)
+	}
+	defer index.Close()
+
+	// Generate manifest ID
+	now := time.Now()
+	manifestID := now.Format("2006-01-02_150405") + "_" + dbName
+
+	fmt.Printf("Creating deduplicated database backup: %s\n", manifestID)
+	fmt.Printf("Database: %s (%s)\n", dbName, dbType)
+	fmt.Printf("Command: %s %s\n", dumpCmd, strings.Join(dumpArgs, " "))
+	fmt.Printf("Store: %s\n", basePath)
+
+	// Start the dump command
+	dumpExec := exec.Command(dumpCmd, dumpArgs...)
+
+	// Set password via environment for postgres
+	if dbType == "postgres" && backupDBPassword != "" {
+		dumpExec.Env = append(os.Environ(), "PGPASSWORD="+backupDBPassword)
+	}
+
+	stdout, err := dumpExec.StdoutPipe()
+	if err != nil {
+		return fmt.Errorf("failed to get stdout pipe: %w", err)
+	}
+
+	stderr, err := dumpExec.StderrPipe()
+	if err != nil {
+		return fmt.Errorf("failed to get stderr pipe: %w", err)
+	}
+
+	if err := dumpExec.Start(); err != nil {
+		return fmt.Errorf("failed to start %s: %w", dumpCmd, err)
+	}
+
+	// Hash while chunking using TeeReader
+	h := sha256.New()
+	reader := io.TeeReader(stdout, h)
+
+	// Chunk the stream directly
+	chunker := dedup.NewChunker(reader, dedup.DefaultChunkerConfig())
+	var chunks []dedup.ChunkRef
+	var totalSize, storedSize int64
+	var chunkCount, newChunks int
+
+	startTime := time.Now()
+
+	for {
+		chunk, err := chunker.Next()
+		if err == io.EOF {
+			break
+		}
+		if err != nil {
+			return fmt.Errorf("chunking failed: %w", err)
+		}
+
+		chunkCount++
+		totalSize += int64(chunk.Length)
+
+		// Store chunk (deduplication happens here)
+		isNew, err := store.Put(chunk)
+		if err != nil {
+			return fmt.Errorf("failed to store chunk: %w", err)
+		}
+
+		if isNew {
+			newChunks++
+			storedSize += int64(chunk.Length)
+			index.AddChunk(chunk.Hash, chunk.Length, chunk.Length)
+		}
+
+		chunks = append(chunks, dedup.ChunkRef{
+			Hash:   chunk.Hash,
+			Offset: chunk.Offset,
+			Length: chunk.Length,
+		})
+
+		if chunkCount%1000 == 0 {
+			fmt.Printf("\r  Processed %d chunks, %d new, %s...", chunkCount, newChunks, formatBytes(totalSize))
+		}
+	}
+
+	// Read any stderr
+	stderrBytes, _ := io.ReadAll(stderr)
+
+	// Wait for command to complete
+	if err := dumpExec.Wait(); err != nil {
+		return fmt.Errorf("%s failed: %w\nstderr: %s", dumpCmd, err, string(stderrBytes))
+	}
+
+	duration := time.Since(startTime)
+	fileHash := hex.EncodeToString(h.Sum(nil))
+
+	// Calculate dedup ratio
+	dedupRatio := 0.0
+	if totalSize > 0 {
+		dedupRatio = 1.0 - float64(storedSize)/float64(totalSize)
+	}
+
+	// Create manifest
+	manifest := &dedup.Manifest{
+		ID:           manifestID,
+		Name:         dedupName,
+		CreatedAt:    now,
+		DatabaseType: dbType,
+		DatabaseName: dbName,
+		DatabaseHost: dedupDBHost,
+		Chunks:       chunks,
+		OriginalSize: totalSize,
+		StoredSize:   storedSize,
+		ChunkCount:   chunkCount,
+		NewChunks:    newChunks,
+		DedupRatio:   dedupRatio,
+		Encrypted:    dedupEncrypt,
+		Compressed:   dedupCompress,
+		SHA256:       fileHash,
+	}
+
+	if err := manifestStore.Save(manifest); err != nil {
+		return fmt.Errorf("failed to save manifest: %w", err)
+	}
+
+	if err := index.AddManifest(manifest); err != nil {
+		log.Warn("Failed to index manifest", "error", err)
+	}
+
+	fmt.Printf("\r \r")
+	fmt.Printf("\nBackup complete!\n")
+	fmt.Printf("  Manifest:  %s\n", manifestID)
+	fmt.Printf("  Chunks:    %d total, %d new\n", chunkCount, newChunks)
+	fmt.Printf("  Dump size: %s\n", formatBytes(totalSize))
|
||||||
|
fmt.Printf(" Stored: %s (new data)\n", formatBytes(storedSize))
|
||||||
|
fmt.Printf(" Dedup ratio: %.1f%%\n", dedupRatio*100)
|
||||||
|
fmt.Printf(" Duration: %s\n", duration.Round(time.Millisecond))
|
||||||
|
fmt.Printf(" Throughput: %s/s\n", formatBytes(int64(float64(totalSize)/duration.Seconds())))
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func runDedupMetrics(cmd *cobra.Command, args []string) error {
|
||||||
|
basePath := getDedupDir()
|
||||||
|
indexPath := getIndexDBPath()
|
||||||
|
|
||||||
|
instance := dedupMetricsInstance
|
||||||
|
if instance == "" {
|
||||||
|
hostname, _ := os.Hostname()
|
||||||
|
instance = hostname
|
||||||
|
}
|
||||||
|
|
||||||
|
metrics, err := dedup.CollectMetrics(basePath, indexPath)
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf("failed to collect metrics: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
output := dedup.FormatPrometheusMetrics(metrics, instance)
|
||||||
|
|
||||||
|
if dedupMetricsOutput != "" {
|
||||||
|
if err := dedup.WritePrometheusTextfile(dedupMetricsOutput, instance, basePath, indexPath); err != nil {
|
||||||
|
return fmt.Errorf("failed to write metrics: %w", err)
|
||||||
|
}
|
||||||
|
fmt.Printf("Wrote metrics to %s\n", dedupMetricsOutput)
|
||||||
|
} else {
|
||||||
|
fmt.Print(output)
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
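A quick sanity check of the ratio printed in the summary above, using invented numbers rather than output from any real run: with totalSize = 10 GiB of dump data and storedSize = 1.5 GiB of genuinely new chunks, dedupRatio = 1.0 - 1.5/10 = 0.85, which the summary reports as "Dedup ratio: 85.0%".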
@@ -794,6 +794,454 @@
        }
      ],
      "type": "table"
    },
    {
      "collapsed": false,
      "gridPos": { "h": 1, "w": 24, "x": 0, "y": 30 },
      "id": 100,
      "panels": [],
      "title": "Deduplication Statistics",
      "type": "row"
    },
    {
      "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
      "fieldConfig": {
        "defaults": {
          "color": { "mode": "thresholds" },
          "mappings": [],
          "thresholds": { "mode": "absolute", "steps": [ { "color": "blue", "value": null } ] },
          "unit": "percentunit"
        },
        "overrides": []
      },
      "gridPos": { "h": 5, "w": 6, "x": 0, "y": 31 },
      "id": 101,
      "options": {
        "colorMode": "background",
        "graphMode": "none",
        "justifyMode": "auto",
        "orientation": "auto",
        "reduceOptions": { "calcs": ["lastNotNull"], "fields": "", "values": false },
        "textMode": "auto"
      },
      "pluginVersion": "10.2.0",
      "targets": [
        {
          "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
          "editorMode": "code",
          "expr": "dbbackup_dedup_ratio{instance=~\"$instance\"}",
          "legendFormat": "__auto",
          "range": true,
          "refId": "A"
        }
      ],
      "title": "Dedup Ratio",
      "type": "stat"
    },
    {
      "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
      "fieldConfig": {
        "defaults": {
          "color": { "mode": "thresholds" },
          "mappings": [],
          "thresholds": { "mode": "absolute", "steps": [ { "color": "green", "value": null } ] },
          "unit": "bytes"
        },
        "overrides": []
      },
      "gridPos": { "h": 5, "w": 6, "x": 6, "y": 31 },
      "id": 102,
      "options": {
        "colorMode": "value",
        "graphMode": "none",
        "justifyMode": "auto",
        "orientation": "auto",
        "reduceOptions": { "calcs": ["lastNotNull"], "fields": "", "values": false },
        "textMode": "auto"
      },
      "pluginVersion": "10.2.0",
      "targets": [
        {
          "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
          "editorMode": "code",
          "expr": "dbbackup_dedup_space_saved_bytes{instance=~\"$instance\"}",
          "legendFormat": "__auto",
          "range": true,
          "refId": "A"
        }
      ],
      "title": "Space Saved",
      "type": "stat"
    },
    {
      "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
      "fieldConfig": {
        "defaults": {
          "color": { "mode": "thresholds" },
          "mappings": [],
          "thresholds": { "mode": "absolute", "steps": [ { "color": "yellow", "value": null } ] },
          "unit": "bytes"
        },
        "overrides": []
      },
      "gridPos": { "h": 5, "w": 6, "x": 12, "y": 31 },
      "id": 103,
      "options": {
        "colorMode": "value",
        "graphMode": "none",
        "justifyMode": "auto",
        "orientation": "auto",
        "reduceOptions": { "calcs": ["lastNotNull"], "fields": "", "values": false },
        "textMode": "auto"
      },
      "pluginVersion": "10.2.0",
      "targets": [
        {
          "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
          "editorMode": "code",
          "expr": "dbbackup_dedup_disk_usage_bytes{instance=~\"$instance\"}",
          "legendFormat": "__auto",
          "range": true,
          "refId": "A"
        }
      ],
      "title": "Disk Usage",
      "type": "stat"
    },
    {
      "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
      "fieldConfig": {
        "defaults": {
          "color": { "mode": "thresholds" },
          "mappings": [],
          "thresholds": { "mode": "absolute", "steps": [ { "color": "purple", "value": null } ] },
          "unit": "short"
        },
        "overrides": []
      },
      "gridPos": { "h": 5, "w": 6, "x": 18, "y": 31 },
      "id": 104,
      "options": {
        "colorMode": "value",
        "graphMode": "none",
        "justifyMode": "auto",
        "orientation": "auto",
        "reduceOptions": { "calcs": ["lastNotNull"], "fields": "", "values": false },
        "textMode": "auto"
      },
      "pluginVersion": "10.2.0",
      "targets": [
        {
          "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
          "editorMode": "code",
          "expr": "dbbackup_dedup_chunks_total{instance=~\"$instance\"}",
          "legendFormat": "__auto",
          "range": true,
          "refId": "A"
        }
      ],
      "title": "Total Chunks",
      "type": "stat"
    },
    {
      "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
      "fieldConfig": {
        "defaults": {
          "color": { "mode": "palette-classic" },
          "custom": {
            "axisBorderShow": false,
            "axisCenteredZero": false,
            "axisColorMode": "text",
            "axisLabel": "",
            "axisPlacement": "auto",
            "barAlignment": 0,
            "drawStyle": "line",
            "fillOpacity": 10,
            "gradientMode": "none",
            "hideFrom": { "legend": false, "tooltip": false, "viz": false },
            "insertNulls": false,
            "lineInterpolation": "linear",
            "lineWidth": 1,
            "pointSize": 5,
            "scaleDistribution": { "type": "linear" },
            "showPoints": "auto",
            "spanNulls": false,
            "stacking": { "group": "A", "mode": "none" },
            "thresholdsStyle": { "mode": "off" }
          },
          "mappings": [],
          "thresholds": { "mode": "absolute", "steps": [ { "color": "green", "value": null } ] },
          "unit": "percentunit"
        },
        "overrides": []
      },
      "gridPos": { "h": 8, "w": 12, "x": 0, "y": 36 },
      "id": 105,
      "options": {
        "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": true },
        "tooltip": { "mode": "single", "sort": "none" }
      },
      "pluginVersion": "10.2.0",
      "targets": [
        {
          "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
          "editorMode": "code",
          "expr": "dbbackup_dedup_database_ratio{instance=~\"$instance\"}",
          "legendFormat": "{{database}}",
          "range": true,
          "refId": "A"
        }
      ],
      "title": "Dedup Ratio by Database",
      "type": "timeseries"
    },
    {
      "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
      "fieldConfig": {
        "defaults": {
          "color": { "mode": "palette-classic" },
          "custom": {
            "axisBorderShow": false,
            "axisCenteredZero": false,
            "axisColorMode": "text",
            "axisLabel": "",
            "axisPlacement": "auto",
            "barAlignment": 0,
            "drawStyle": "line",
            "fillOpacity": 10,
            "gradientMode": "none",
            "hideFrom": { "legend": false, "tooltip": false, "viz": false },
            "insertNulls": false,
            "lineInterpolation": "linear",
            "lineWidth": 1,
            "pointSize": 5,
            "scaleDistribution": { "type": "linear" },
            "showPoints": "auto",
            "spanNulls": false,
            "stacking": { "group": "A", "mode": "none" },
            "thresholdsStyle": { "mode": "off" }
          },
          "mappings": [],
          "thresholds": { "mode": "absolute", "steps": [ { "color": "green", "value": null } ] },
          "unit": "bytes"
        },
        "overrides": []
      },
      "gridPos": { "h": 8, "w": 12, "x": 12, "y": 36 },
      "id": 106,
      "options": {
        "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": true },
        "tooltip": { "mode": "single", "sort": "none" }
      },
      "pluginVersion": "10.2.0",
      "targets": [
        {
          "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
          "editorMode": "code",
          "expr": "dbbackup_dedup_space_saved_bytes{instance=~\"$instance\"}",
          "legendFormat": "Space Saved",
          "range": true,
          "refId": "A"
        },
        {
          "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
          "editorMode": "code",
          "expr": "dbbackup_dedup_disk_usage_bytes{instance=~\"$instance\"}",
          "legendFormat": "Disk Usage",
          "range": true,
          "refId": "B"
        }
      ],
      "title": "Dedup Storage Over Time",
      "type": "timeseries"
    }
  ],
  "refresh": "30s",
@@ -801,7 +1249,8 @@
   "tags": [
     "dbbackup",
     "backup",
-    "database"
+    "database",
+    "dedup"
   ],
   "templating": {
     "list": [
@@ -3,7 +3,9 @@ package dedup
 import (
 	"database/sql"
 	"fmt"
+	"os"
 	"path/filepath"
+	"strings"
 	"time"

 	_ "github.com/mattn/go-sqlite3" // SQLite driver
@@ -12,26 +14,66 @@ import (
 // ChunkIndex provides fast chunk lookups using SQLite
 type ChunkIndex struct {
 	db     *sql.DB
+	dbPath string
 }

-// NewChunkIndex opens or creates a chunk index database
+// NewChunkIndex opens or creates a chunk index database at the default location
 func NewChunkIndex(basePath string) (*ChunkIndex, error) {
 	dbPath := filepath.Join(basePath, "chunks.db")
+	return NewChunkIndexAt(dbPath)
+}

-	db, err := sql.Open("sqlite3", dbPath+"?_journal_mode=WAL&_synchronous=NORMAL")
+// NewChunkIndexAt opens or creates a chunk index database at a specific path
+// Use this to put the SQLite index on local storage when chunks are on NFS/CIFS
+func NewChunkIndexAt(dbPath string) (*ChunkIndex, error) {
+	// Ensure parent directory exists
+	if err := os.MkdirAll(filepath.Dir(dbPath), 0700); err != nil {
+		return nil, fmt.Errorf("failed to create index directory: %w", err)
+	}
+
+	// Add busy_timeout to handle lock contention gracefully
+	db, err := sql.Open("sqlite3", dbPath+"?_journal_mode=WAL&_synchronous=NORMAL&_busy_timeout=5000")
 	if err != nil {
 		return nil, fmt.Errorf("failed to open chunk index: %w", err)
 	}

-	idx := &ChunkIndex{db: db}
+	// Test the connection and check for locking issues
+	if err := db.Ping(); err != nil {
+		db.Close()
+		if isNFSLockingError(err) {
+			return nil, fmt.Errorf("database locked (common on NFS/CIFS): %w\n\n"+
+				"HINT: Use --index-db to put the SQLite index on local storage:\n"+
+				"  dbbackup dedup ... --index-db /var/lib/dbbackup/dedup-index.db", err)
+		}
+		return nil, fmt.Errorf("failed to connect to chunk index: %w", err)
+	}
+
+	idx := &ChunkIndex{db: db, dbPath: dbPath}
 	if err := idx.migrate(); err != nil {
 		db.Close()
+		if isNFSLockingError(err) {
+			return nil, fmt.Errorf("database locked during migration (common on NFS/CIFS): %w\n\n"+
+				"HINT: Use --index-db to put the SQLite index on local storage:\n"+
+				"  dbbackup dedup ... --index-db /var/lib/dbbackup/dedup-index.db", err)
+		}
 		return nil, err
 	}

 	return idx, nil
 }

+// isNFSLockingError checks if an error is likely due to NFS/CIFS locking issues
+func isNFSLockingError(err error) bool {
+	if err == nil {
+		return false
+	}
+	errStr := err.Error()
+	return strings.Contains(errStr, "database is locked") ||
+		strings.Contains(errStr, "SQLITE_BUSY") ||
+		strings.Contains(errStr, "cannot lock") ||
+		strings.Contains(errStr, "lock protocol")
+}
+
 // migrate creates the schema if needed
 func (idx *ChunkIndex) migrate() error {
 	schema := `
@@ -166,15 +208,26 @@ func (idx *ChunkIndex) RemoveManifest(id string) error {
 	return err
 }

+// UpdateManifestVerified updates the verified timestamp for a manifest
+func (idx *ChunkIndex) UpdateManifestVerified(id string, verifiedAt time.Time) error {
+	_, err := idx.db.Exec("UPDATE manifests SET verified_at = ? WHERE id = ?", verifiedAt, id)
+	return err
+}
+
 // IndexStats holds statistics about the dedup index
 type IndexStats struct {
 	TotalChunks     int64
 	TotalManifests  int64
-	TotalSizeRaw    int64 // Uncompressed, undeduplicated
+	TotalSizeRaw    int64 // Uncompressed, undeduplicated (per-chunk)
-	TotalSizeStored int64 // On-disk after dedup+compression
+	TotalSizeStored int64 // On-disk after dedup+compression (per-chunk)
-	DedupRatio      float64
+	DedupRatio      float64 // Based on manifests (real dedup ratio)
 	OldestChunk     time.Time
 	NewestChunk     time.Time
+
+	// Manifest-based stats (accurate dedup calculation)
+	TotalBackupSize int64 // Sum of all backup original sizes
+	TotalNewData    int64 // Sum of all new chunks stored
+	SpaceSaved      int64 // Difference = what dedup saved
 }

 // Stats returns statistics about the index
@@ -206,8 +259,22 @@ func (idx *ChunkIndex) Stats() (*IndexStats, error) {

 	idx.db.QueryRow("SELECT COUNT(*) FROM manifests").Scan(&stats.TotalManifests)

-	if stats.TotalSizeRaw > 0 {
-		stats.DedupRatio = 1.0 - float64(stats.TotalSizeStored)/float64(stats.TotalSizeRaw)
+	// Calculate accurate dedup ratio from manifests
+	// Sum all backup original sizes and all new data stored
+	err = idx.db.QueryRow(`
+		SELECT
+			COALESCE(SUM(original_size), 0),
+			COALESCE(SUM(stored_size), 0)
+		FROM manifests
+	`).Scan(&stats.TotalBackupSize, &stats.TotalNewData)
+	if err != nil {
+		return nil, err
+	}
+
+	// Calculate real dedup ratio: how much data was deduplicated across all backups
+	if stats.TotalBackupSize > 0 {
+		stats.DedupRatio = 1.0 - float64(stats.TotalNewData)/float64(stats.TotalBackupSize)
+		stats.SpaceSaved = stats.TotalBackupSize - stats.TotalNewData
 	}

 	return stats, nil
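A minimal usage sketch of the new constructor. The import path and the index location are assumptions for illustration (the path simply mirrors the hint text above); only NewChunkIndexAt, Close, and Stats from this file are relied on:

```go
package main

import (
	"log"

	"example.local/dbbackup/internal/dedup" // placeholder import path, not the real module
)

func main() {
	// Keep the SQLite index on local storage even when the chunk store lives on NFS/CIFS.
	idx, err := dedup.NewChunkIndexAt("/var/lib/dbbackup/dedup-index.db")
	if err != nil {
		log.Fatal(err)
	}
	defer idx.Close()

	stats, err := idx.Stats()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("chunks=%d manifests=%d dedup=%.1f%%",
		stats.TotalChunks, stats.TotalManifests, stats.DedupRatio*100)
}
```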
@@ -38,6 +38,7 @@ type Manifest struct {
 	// Encryption and compression settings used
 	Encrypted  bool `json:"encrypted"`
 	Compressed bool `json:"compressed"`
+	Decompressed bool `json:"decompressed,omitempty"` // Input was auto-decompressed before chunking

 	// Verification
 	SHA256 string `json:"sha256"` // Hash of reconstructed file
internal/dedup/metrics.go (new file, 235 lines)
@@ -0,0 +1,235 @@
package dedup

import (
    "fmt"
    "os"
    "path/filepath"
    "strings"
    "time"
)

// DedupMetrics holds deduplication statistics for Prometheus
type DedupMetrics struct {
    // Global stats
    TotalChunks     int64
    TotalManifests  int64
    TotalBackupSize int64   // Sum of all backup original sizes
    TotalNewData    int64   // Sum of all new chunks stored
    SpaceSaved      int64   // Bytes saved by deduplication
    DedupRatio      float64 // Overall dedup ratio (0-1)
    DiskUsage       int64   // Actual bytes on disk

    // Per-database stats
    ByDatabase map[string]*DatabaseDedupMetrics
}

// DatabaseDedupMetrics holds per-database dedup stats
type DatabaseDedupMetrics struct {
    Database       string
    BackupCount    int
    TotalSize      int64
    StoredSize     int64
    DedupRatio     float64
    LastBackupTime time.Time
    LastVerified   time.Time
}

// CollectMetrics gathers dedup statistics from the index and store
func CollectMetrics(basePath string, indexPath string) (*DedupMetrics, error) {
    var idx *ChunkIndex
    var err error

    if indexPath != "" {
        idx, err = NewChunkIndexAt(indexPath)
    } else {
        idx, err = NewChunkIndex(basePath)
    }
    if err != nil {
        return nil, fmt.Errorf("failed to open chunk index: %w", err)
    }
    defer idx.Close()

    store, err := NewChunkStore(StoreConfig{BasePath: basePath})
    if err != nil {
        return nil, fmt.Errorf("failed to open chunk store: %w", err)
    }

    // Get index stats
    stats, err := idx.Stats()
    if err != nil {
        return nil, fmt.Errorf("failed to get index stats: %w", err)
    }

    // Get store stats
    storeStats, err := store.Stats()
    if err != nil {
        return nil, fmt.Errorf("failed to get store stats: %w", err)
    }

    metrics := &DedupMetrics{
        TotalChunks:     stats.TotalChunks,
        TotalManifests:  stats.TotalManifests,
        TotalBackupSize: stats.TotalBackupSize,
        TotalNewData:    stats.TotalNewData,
        SpaceSaved:      stats.SpaceSaved,
        DedupRatio:      stats.DedupRatio,
        DiskUsage:       storeStats.TotalSize,
        ByDatabase:      make(map[string]*DatabaseDedupMetrics),
    }

    // Collect per-database metrics from manifest store
    manifestStore, err := NewManifestStore(basePath)
    if err != nil {
        return metrics, nil // Return partial metrics
    }

    manifests, err := manifestStore.ListAll()
    if err != nil {
        return metrics, nil // Return partial metrics
    }

    for _, m := range manifests {
        dbKey := m.DatabaseName
        if dbKey == "" {
            dbKey = "_default"
        }

        dbMetrics, ok := metrics.ByDatabase[dbKey]
        if !ok {
            dbMetrics = &DatabaseDedupMetrics{
                Database: dbKey,
            }
            metrics.ByDatabase[dbKey] = dbMetrics
        }

        dbMetrics.BackupCount++
        dbMetrics.TotalSize += m.OriginalSize
        dbMetrics.StoredSize += m.StoredSize

        if m.CreatedAt.After(dbMetrics.LastBackupTime) {
            dbMetrics.LastBackupTime = m.CreatedAt
        }
        if !m.VerifiedAt.IsZero() && m.VerifiedAt.After(dbMetrics.LastVerified) {
            dbMetrics.LastVerified = m.VerifiedAt
        }
    }

    // Calculate per-database dedup ratios
    for _, dbMetrics := range metrics.ByDatabase {
        if dbMetrics.TotalSize > 0 {
            dbMetrics.DedupRatio = 1.0 - float64(dbMetrics.StoredSize)/float64(dbMetrics.TotalSize)
        }
    }

    return metrics, nil
}

// WritePrometheusTextfile writes dedup metrics in Prometheus format
func WritePrometheusTextfile(path string, instance string, basePath string, indexPath string) error {
    metrics, err := CollectMetrics(basePath, indexPath)
    if err != nil {
        return err
    }

    output := FormatPrometheusMetrics(metrics, instance)

    // Atomic write
    dir := filepath.Dir(path)
    if err := os.MkdirAll(dir, 0755); err != nil {
        return fmt.Errorf("failed to create directory: %w", err)
    }

    tmpPath := path + ".tmp"
    if err := os.WriteFile(tmpPath, []byte(output), 0644); err != nil {
        return fmt.Errorf("failed to write temp file: %w", err)
    }

    if err := os.Rename(tmpPath, path); err != nil {
        os.Remove(tmpPath)
        return fmt.Errorf("failed to rename temp file: %w", err)
    }

    return nil
}

// FormatPrometheusMetrics formats dedup metrics in Prometheus exposition format
func FormatPrometheusMetrics(m *DedupMetrics, instance string) string {
    var b strings.Builder
    now := time.Now().Unix()

    b.WriteString("# DBBackup Deduplication Prometheus Metrics\n")
    b.WriteString(fmt.Sprintf("# Generated at: %s\n", time.Now().Format(time.RFC3339)))
    b.WriteString(fmt.Sprintf("# Instance: %s\n", instance))
    b.WriteString("\n")

    // Global dedup metrics
    b.WriteString("# HELP dbbackup_dedup_chunks_total Total number of unique chunks stored\n")
    b.WriteString("# TYPE dbbackup_dedup_chunks_total gauge\n")
    b.WriteString(fmt.Sprintf("dbbackup_dedup_chunks_total{instance=%q} %d\n", instance, m.TotalChunks))
    b.WriteString("\n")

    b.WriteString("# HELP dbbackup_dedup_manifests_total Total number of deduplicated backups\n")
    b.WriteString("# TYPE dbbackup_dedup_manifests_total gauge\n")
    b.WriteString(fmt.Sprintf("dbbackup_dedup_manifests_total{instance=%q} %d\n", instance, m.TotalManifests))
    b.WriteString("\n")

    b.WriteString("# HELP dbbackup_dedup_backup_bytes_total Total logical size of all backups in bytes\n")
    b.WriteString("# TYPE dbbackup_dedup_backup_bytes_total gauge\n")
    b.WriteString(fmt.Sprintf("dbbackup_dedup_backup_bytes_total{instance=%q} %d\n", instance, m.TotalBackupSize))
    b.WriteString("\n")

    b.WriteString("# HELP dbbackup_dedup_stored_bytes_total Total unique data stored in bytes (after dedup)\n")
    b.WriteString("# TYPE dbbackup_dedup_stored_bytes_total gauge\n")
    b.WriteString(fmt.Sprintf("dbbackup_dedup_stored_bytes_total{instance=%q} %d\n", instance, m.TotalNewData))
    b.WriteString("\n")

    b.WriteString("# HELP dbbackup_dedup_space_saved_bytes Bytes saved by deduplication\n")
    b.WriteString("# TYPE dbbackup_dedup_space_saved_bytes gauge\n")
    b.WriteString(fmt.Sprintf("dbbackup_dedup_space_saved_bytes{instance=%q} %d\n", instance, m.SpaceSaved))
    b.WriteString("\n")

    b.WriteString("# HELP dbbackup_dedup_ratio Deduplication ratio (0-1, higher is better)\n")
    b.WriteString("# TYPE dbbackup_dedup_ratio gauge\n")
    b.WriteString(fmt.Sprintf("dbbackup_dedup_ratio{instance=%q} %.4f\n", instance, m.DedupRatio))
    b.WriteString("\n")

    b.WriteString("# HELP dbbackup_dedup_disk_usage_bytes Actual disk usage of chunk store\n")
    b.WriteString("# TYPE dbbackup_dedup_disk_usage_bytes gauge\n")
    b.WriteString(fmt.Sprintf("dbbackup_dedup_disk_usage_bytes{instance=%q} %d\n", instance, m.DiskUsage))
    b.WriteString("\n")

    // Per-database metrics
    if len(m.ByDatabase) > 0 {
        b.WriteString("# HELP dbbackup_dedup_database_backup_count Number of deduplicated backups per database\n")
        b.WriteString("# TYPE dbbackup_dedup_database_backup_count gauge\n")
        for _, db := range m.ByDatabase {
            b.WriteString(fmt.Sprintf("dbbackup_dedup_database_backup_count{instance=%q,database=%q} %d\n",
                instance, db.Database, db.BackupCount))
        }
        b.WriteString("\n")

        b.WriteString("# HELP dbbackup_dedup_database_ratio Deduplication ratio per database (0-1)\n")
        b.WriteString("# TYPE dbbackup_dedup_database_ratio gauge\n")
        for _, db := range m.ByDatabase {
            b.WriteString(fmt.Sprintf("dbbackup_dedup_database_ratio{instance=%q,database=%q} %.4f\n",
                instance, db.Database, db.DedupRatio))
        }
        b.WriteString("\n")

        b.WriteString("# HELP dbbackup_dedup_database_last_backup_timestamp Last backup timestamp per database\n")
        b.WriteString("# TYPE dbbackup_dedup_database_last_backup_timestamp gauge\n")
        for _, db := range m.ByDatabase {
            if !db.LastBackupTime.IsZero() {
                b.WriteString(fmt.Sprintf("dbbackup_dedup_database_last_backup_timestamp{instance=%q,database=%q} %d\n",
                    instance, db.Database, db.LastBackupTime.Unix()))
            }
        }
        b.WriteString("\n")
    }

    b.WriteString("# HELP dbbackup_dedup_scrape_timestamp Unix timestamp when dedup metrics were collected\n")
    b.WriteString("# TYPE dbbackup_dedup_scrape_timestamp gauge\n")
    b.WriteString(fmt.Sprintf("dbbackup_dedup_scrape_timestamp{instance=%q} %d\n", instance, now))

    return b.String()
}
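For reference, the exposition text that FormatPrometheusMetrics assembles looks roughly like the excerpt below; the instance label and all numeric values are invented for illustration:

```
# HELP dbbackup_dedup_ratio Deduplication ratio (0-1, higher is better)
# TYPE dbbackup_dedup_ratio gauge
dbbackup_dedup_ratio{instance="db01"} 0.8532
# HELP dbbackup_dedup_space_saved_bytes Bytes saved by deduplication
# TYPE dbbackup_dedup_space_saved_bytes gauge
dbbackup_dedup_space_saved_bytes{instance="db01"} 137438953472
dbbackup_dedup_database_ratio{instance="db01",database="shopdb"} 0.9120
```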
@@ -414,14 +414,42 @@ func (d *Diagnoser) diagnoseSQLScript(filePath string, compressed bool, result *DiagnoseResult) {

 // diagnoseClusterArchive analyzes a cluster tar.gz archive
 func (d *Diagnoser) diagnoseClusterArchive(filePath string, result *DiagnoseResult) {
-	// First verify tar.gz integrity with timeout
-	// 5 minutes for large archives (multi-GB archives need more time)
-	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
+	// Calculate dynamic timeout based on file size
+	// Assume minimum 50 MB/s throughput for compressed archive listing
+	// Minimum 5 minutes, scales with file size
+	timeoutMinutes := 5
+	if result.FileSize > 0 {
+		// 1 minute per 3 GB, minimum 5 minutes, max 60 minutes
+		sizeGB := result.FileSize / (1024 * 1024 * 1024)
+		estimatedMinutes := int(sizeGB/3) + 5
+		if estimatedMinutes > timeoutMinutes {
+			timeoutMinutes = estimatedMinutes
+		}
+		if timeoutMinutes > 60 {
+			timeoutMinutes = 60
+		}
+	}
+
+	d.log.Info("Verifying cluster archive integrity",
+		"size", fmt.Sprintf("%.1f GB", float64(result.FileSize)/(1024*1024*1024)),
+		"timeout", fmt.Sprintf("%d min", timeoutMinutes))
+
+	ctx, cancel := context.WithTimeout(context.Background(), time.Duration(timeoutMinutes)*time.Minute)
 	defer cancel()

 	cmd := exec.CommandContext(ctx, "tar", "-tzf", filePath)
 	output, err := cmd.Output()
 	if err != nil {
+		// Check if it was a timeout
+		if ctx.Err() == context.DeadlineExceeded {
+			result.IsValid = false
+			result.Errors = append(result.Errors,
+				fmt.Sprintf("Verification timed out after %d minutes - archive is very large", timeoutMinutes),
+				"This does not necessarily mean the archive is corrupted",
+				"Manual verification: tar -tzf "+filePath+" | wc -l")
+			// Don't mark as corrupted on timeout
+			return
+		}
 		result.IsValid = false
 		result.IsCorrupted = true
 		result.Errors = append(result.Errors,
@@ -497,9 +525,22 @@ func (d *Diagnoser) diagnoseUnknown(filePath string, result *DiagnoseResult) {

 // verifyWithPgRestore uses pg_restore --list to verify dump integrity
 func (d *Diagnoser) verifyWithPgRestore(filePath string, result *DiagnoseResult) {
-	// Use timeout to prevent blocking on very large dump files
-	// 5 minutes for large dumps (multi-GB dumps with many tables)
-	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
+	// Calculate dynamic timeout based on file size
+	// pg_restore --list is usually faster than tar -tzf for same size
+	timeoutMinutes := 5
+	if result.FileSize > 0 {
+		// 1 minute per 5 GB, minimum 5 minutes, max 30 minutes
+		sizeGB := result.FileSize / (1024 * 1024 * 1024)
+		estimatedMinutes := int(sizeGB/5) + 5
+		if estimatedMinutes > timeoutMinutes {
+			timeoutMinutes = estimatedMinutes
+		}
+		if timeoutMinutes > 30 {
+			timeoutMinutes = 30
+		}
+	}
+
+	ctx, cancel := context.WithTimeout(context.Background(), time.Duration(timeoutMinutes)*time.Minute)
 	defer cancel()

 	cmd := exec.CommandContext(ctx, "pg_restore", "--list", filePath)
@@ -554,14 +595,72 @@ func (d *Diagnoser) verifyWithPgRestore(filePath string, result *DiagnoseResult)

 // DiagnoseClusterDumps extracts and diagnoses all dumps in a cluster archive
 func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*DiagnoseResult, error) {
-	// First, try to list archive contents without extracting (fast check)
-	// 10 minutes for very large archives
-	listCtx, listCancel := context.WithTimeout(context.Background(), 10*time.Minute)
+	// Get archive size for dynamic timeout calculation
+	archiveInfo, err := os.Stat(archivePath)
+	if err != nil {
+		return nil, fmt.Errorf("cannot stat archive: %w", err)
+	}
+
+	// Dynamic timeout based on archive size: base 10 min + 1 min per 3 GB
+	// Large archives like 100+ GB need more time for tar -tzf
+	timeoutMinutes := 10
+	if archiveInfo.Size() > 0 {
+		sizeGB := archiveInfo.Size() / (1024 * 1024 * 1024)
+		estimatedMinutes := int(sizeGB/3) + 10
+		if estimatedMinutes > timeoutMinutes {
+			timeoutMinutes = estimatedMinutes
+		}
+		if timeoutMinutes > 120 { // Max 2 hours
+			timeoutMinutes = 120
+		}
+	}
+
+	d.log.Info("Listing cluster archive contents",
+		"size", fmt.Sprintf("%.1f GB", float64(archiveInfo.Size())/(1024*1024*1024)),
+		"timeout", fmt.Sprintf("%d min", timeoutMinutes))
+
+	listCtx, listCancel := context.WithTimeout(context.Background(), time.Duration(timeoutMinutes)*time.Minute)
 	defer listCancel()

 	listCmd := exec.CommandContext(listCtx, "tar", "-tzf", archivePath)
-	listOutput, listErr := listCmd.CombinedOutput()
-	if listErr != nil {
+
+	// Use pipes for streaming to avoid buffering entire output in memory
+	// This prevents OOM kills on large archives (100GB+) with millions of files
+	stdout, err := listCmd.StdoutPipe()
+	if err != nil {
+		return nil, fmt.Errorf("failed to create stdout pipe: %w", err)
+	}
+
+	var stderrBuf bytes.Buffer
+	listCmd.Stderr = &stderrBuf
+
+	if err := listCmd.Start(); err != nil {
+		return nil, fmt.Errorf("failed to start tar listing: %w", err)
+	}
+
+	// Stream the output line by line, only keeping relevant files
+	var files []string
+	scanner := bufio.NewScanner(stdout)
+	// Set a reasonable max line length (file paths shouldn't exceed this)
+	scanner.Buffer(make([]byte, 0, 4096), 1024*1024)
+
+	fileCount := 0
+	for scanner.Scan() {
+		fileCount++
+		line := scanner.Text()
+		// Only store dump files and important files, not every single file
+		if strings.HasSuffix(line, ".dump") || strings.HasSuffix(line, ".sql") ||
+			strings.HasSuffix(line, ".sql.gz") || strings.HasSuffix(line, ".json") ||
+			strings.Contains(line, "globals") || strings.Contains(line, "manifest") ||
+			strings.Contains(line, "metadata") || strings.HasSuffix(line, "/") {
+			files = append(files, line)
+		}
+	}
+
+	scanErr := scanner.Err()
+	listErr := listCmd.Wait()
+
+	if listErr != nil || scanErr != nil {
 		// Archive listing failed - likely corrupted
 		errResult := &DiagnoseResult{
 			FilePath: archivePath,
@@ -573,7 +672,12 @@ func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*Diagno
 			Details: &DiagnoseDetails{},
 		}

-		errOutput := string(listOutput)
+		errOutput := stderrBuf.String()
+		actualErr := listErr
+		if scanErr != nil {
+			actualErr = scanErr
+		}
+
 		if strings.Contains(errOutput, "unexpected end of file") ||
 			strings.Contains(errOutput, "Unexpected EOF") ||
 			strings.Contains(errOutput, "truncated") {
@@ -585,7 +689,7 @@ func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*Diagno
 				"Solution: Re-create the backup from source database")
 		} else {
 			errResult.Errors = append(errResult.Errors,
-				fmt.Sprintf("Cannot list archive contents: %v", listErr),
+				fmt.Sprintf("Cannot list archive contents: %v", actualErr),
 				fmt.Sprintf("tar error: %s", truncateString(errOutput, 300)),
 				"Run manually: tar -tzf "+archivePath+" 2>&1 | tail -50")
 		}
@@ -593,11 +697,10 @@ func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*Diagno
 		return []*DiagnoseResult{errResult}, nil
 	}

-	// Archive is listable - now check disk space before extraction
-	files := strings.Split(strings.TrimSpace(string(listOutput)), "\n")
+	d.log.Debug("Archive listing streamed successfully", "total_files", fileCount, "relevant_files", len(files))

 	// Check if we have enough disk space (estimate 4x archive size needed)
-	archiveInfo, _ := os.Stat(archivePath)
+	// archiveInfo already obtained at function start
 	requiredSpace := archiveInfo.Size() * 4

 	// Check temp directory space - try to extract metadata first
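The same size-scaled timeout pattern now appears in several places (cluster archive verification, pg_restore listing, cluster listing, and the TUI safety checks further down). A small sketch of the shared formula, with a hypothetical helper name and parameters that are not part of the repository:

```go
package timeoutsketch // illustrative only; not part of the repo

import "time"

// dynamicTimeout mirrors the pattern used in the hunks above: start from a base
// number of minutes, add one minute per gbPerMinute gigabytes of input, and
// clamp the result at maxMin.
func dynamicTimeout(sizeBytes int64, baseMin, gbPerMinute, maxMin int) time.Duration {
	minutes := baseMin
	if sizeBytes > 0 {
		sizeGB := sizeBytes / (1024 * 1024 * 1024)
		if est := int(sizeGB)/gbPerMinute + baseMin; est > minutes {
			minutes = est
		}
		if minutes > maxMin {
			minutes = maxMin
		}
	}
	return time.Duration(minutes) * time.Minute
}

// Example: a 90 GiB cluster archive with base 10 min, 1 min per 3 GB, cap 120 min:
// dynamicTimeout(90<<30, 10, 3, 120) == 40 * time.Minute
```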
@@ -229,8 +229,14 @@ func containsSQLKeywords(content string) bool {
 }

 // CheckDiskSpace verifies sufficient disk space for restore
+// Uses the effective work directory (WorkDir if set, otherwise BackupDir) since
+// that's where extraction actually happens for large databases
 func (s *Safety) CheckDiskSpace(archivePath string, multiplier float64) error {
-	return s.CheckDiskSpaceAt(archivePath, s.cfg.BackupDir, multiplier)
+	checkDir := s.cfg.GetEffectiveWorkDir()
+	if checkDir == "" {
+		checkDir = s.cfg.BackupDir
+	}
+	return s.CheckDiskSpaceAt(archivePath, checkDir, multiplier)
 }

 // CheckDiskSpaceAt verifies sufficient disk space at a specific directory
@@ -106,9 +106,23 @@ type safetyCheckCompleteMsg struct {

 func runSafetyChecks(cfg *config.Config, log logger.Logger, archive ArchiveInfo, targetDB string) tea.Cmd {
 	return func() tea.Msg {
-		// 10 minutes for safety checks - large archives can take a long time to diagnose
-		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
+		// Dynamic timeout based on archive size for large database support
+		// Base: 10 minutes + 1 minute per 5 GB, max 120 minutes
+		timeoutMinutes := 10
+		if archive.Size > 0 {
+			sizeGB := archive.Size / (1024 * 1024 * 1024)
+			estimatedMinutes := int(sizeGB/5) + 10
+			if estimatedMinutes > timeoutMinutes {
+				timeoutMinutes = estimatedMinutes
+			}
+			if timeoutMinutes > 120 {
+				timeoutMinutes = 120
+			}
+		}
+
+		ctx, cancel := context.WithTimeout(context.Background(), time.Duration(timeoutMinutes)*time.Minute)
 		defer cancel()
+		_ = ctx // Used by database checks below

 		safety := restore.NewSafety(cfg, log)
 		checks := []SafetyCheck{}
@@ -1,171 +0,0 @@
#!/bin/bash
# COMPLETE emoji/Unicode removal - Replace ALL non-ASCII with ASCII equivalents
# Date: January 8, 2026

set -euo pipefail

echo "[INFO] Starting COMPLETE Unicode->ASCII replacement..."
echo ""

# Create backup
BACKUP_DIR="backup_unicode_removal_$(date +%Y%m%d_%H%M%S)"
mkdir -p "$BACKUP_DIR"
echo "[INFO] Creating backup in $BACKUP_DIR..."
find . -name "*.go" -type f -not -path "*/vendor/*" -not -path "*/.git/*" -exec bash -c 'mkdir -p "$1/$(dirname "$2")" && cp "$2" "$1/$2"' -- "$BACKUP_DIR" {} \;
echo "[OK] Backup created"
echo ""

# Find all affected files
echo "[SEARCH] Finding files with Unicode..."
FILES=$(find . -name "*.go" -type f -not -path "*/vendor/*" -not -path "*/.git/*")

PROCESSED=0
TOTAL=$(echo "$FILES" | wc -l)

for file in $FILES; do
    PROCESSED=$((PROCESSED + 1))

    if ! grep -qP '[\x{80}-\x{FFFF}]' "$file" 2>/dev/null; then
        continue
    fi

    echo "[$PROCESSED/$TOTAL] Processing: $file"

    # Create temp file for atomic replacements
    TMPFILE="${file}.tmp"
    cp "$file" "$TMPFILE"

    # Box drawing / decorative (used in TUI borders)
    sed -i 's/─/-/g' "$TMPFILE"
    sed -i 's/━/-/g' "$TMPFILE"
    sed -i 's/│/|/g' "$TMPFILE"
    sed -i 's/║/|/g' "$TMPFILE"
    sed -i 's/├/+/g' "$TMPFILE"
    sed -i 's/└/+/g' "$TMPFILE"
    sed -i 's/╔/+/g' "$TMPFILE"
    sed -i 's/╗/+/g' "$TMPFILE"
    sed -i 's/╚/+/g' "$TMPFILE"
    sed -i 's/╝/+/g' "$TMPFILE"
    sed -i 's/╠/+/g' "$TMPFILE"
    sed -i 's/╣/+/g' "$TMPFILE"
    sed -i 's/═/=/g' "$TMPFILE"

    # Status symbols
    sed -i 's/✅/[OK]/g' "$TMPFILE"
    sed -i 's/❌/[FAIL]/g' "$TMPFILE"
    sed -i 's/✓/[+]/g' "$TMPFILE"
    sed -i 's/✗/[-]/g' "$TMPFILE"
    sed -i 's/⚠️/[WARN]/g' "$TMPFILE"
    sed -i 's/⚠/[!]/g' "$TMPFILE"
    sed -i 's/❓/[?]/g' "$TMPFILE"

    # Arrows
    sed -i 's/←/</g' "$TMPFILE"
    sed -i 's/→/>/g' "$TMPFILE"
    sed -i 's/↑/^/g' "$TMPFILE"
    sed -i 's/↓/v/g' "$TMPFILE"
    sed -i 's/▲/^/g' "$TMPFILE"
    sed -i 's/▼/v/g' "$TMPFILE"
    sed -i 's/▶/>/g' "$TMPFILE"

    # Shapes
    sed -i 's/●/*\*/g' "$TMPFILE"
    sed -i 's/○/o/g' "$TMPFILE"
    sed -i 's/⚪/o/g' "$TMPFILE"
    sed -i 's/•/-/g' "$TMPFILE"
    sed -i 's/█/#/g' "$TMPFILE"
    sed -i 's/▎/|/g' "$TMPFILE"
    sed -i 's/░/./g' "$TMPFILE"
    sed -i 's/➖/-/g' "$TMPFILE"

    # Emojis - Info/Data
    sed -i 's/📊/[INFO]/g' "$TMPFILE"
    sed -i 's/📋/[LIST]/g' "$TMPFILE"
    sed -i 's/📁/[DIR]/g' "$TMPFILE"
    sed -i 's/📦/[PKG]/g' "$TMPFILE"
    sed -i 's/📜/[LOG]/g' "$TMPFILE"
    sed -i 's/📭/[EMPTY]/g' "$TMPFILE"
    sed -i 's/📝/[NOTE]/g' "$TMPFILE"
    sed -i 's/💡/[TIP]/g' "$TMPFILE"

    # Emojis - Actions/Objects
    sed -i 's/🎯/[TARGET]/g' "$TMPFILE"
    sed -i 's/🛡️/[SECURE]/g' "$TMPFILE"
    sed -i 's/🔒/[LOCK]/g' "$TMPFILE"
    sed -i 's/🔓/[UNLOCK]/g' "$TMPFILE"
    sed -i 's/🔍/[SEARCH]/g' "$TMPFILE"
    sed -i 's/🔀/[SWITCH]/g' "$TMPFILE"
    sed -i 's/🔥/[FIRE]/g' "$TMPFILE"
    sed -i 's/💾/[SAVE]/g' "$TMPFILE"
    sed -i 's/🗄️/[DB]/g' "$TMPFILE"
    sed -i 's/🗄/[DB]/g' "$TMPFILE"

    # Emojis - Time/Status
    sed -i 's/⏱️/[TIME]/g' "$TMPFILE"
    sed -i 's/⏱/[TIME]/g' "$TMPFILE"
    sed -i 's/⏳/[WAIT]/g' "$TMPFILE"
    sed -i 's/⏪/[REW]/g' "$TMPFILE"
    sed -i 's/⏹️/[STOP]/g' "$TMPFILE"
    sed -i 's/⏹/[STOP]/g' "$TMPFILE"
    sed -i 's/⟳/[SYNC]/g' "$TMPFILE"

    # Emojis - Cloud
    sed -i 's/☁️/[CLOUD]/g' "$TMPFILE"
    sed -i 's/☁/[CLOUD]/g' "$TMPFILE"
    sed -i 's/📤/[UPLOAD]/g' "$TMPFILE"
    sed -i 's/📥/[DOWNLOAD]/g' "$TMPFILE"
    sed -i 's/🗑️/[DELETE]/g' "$TMPFILE"

    # Emojis - Misc
    sed -i 's/📈/[UP]/g' "$TMPFILE"
    sed -i 's/📉/[DOWN]/g' "$TMPFILE"
    sed -i 's/⌨️/[KEY]/g' "$TMPFILE"
    sed -i 's/⌨/[KEY]/g' "$TMPFILE"
    sed -i 's/⚙️/[CONFIG]/g' "$TMPFILE"
    sed -i 's/⚙/[CONFIG]/g' "$TMPFILE"
    sed -i 's/✏️/[EDIT]/g' "$TMPFILE"
    sed -i 's/✏/[EDIT]/g' "$TMPFILE"
    sed -i 's/⚡/[FAST]/g' "$TMPFILE"

    # Spinner characters (braille patterns for loading animations)
    sed -i 's/⠋/|/g' "$TMPFILE"
    sed -i 's/⠙/\//g' "$TMPFILE"
    sed -i 's/⠹/-/g' "$TMPFILE"
    sed -i 's/⠸/\\/g' "$TMPFILE"
    sed -i 's/⠼/|/g' "$TMPFILE"
    sed -i 's/⠴/\//g' "$TMPFILE"
    sed -i 's/⠦/-/g' "$TMPFILE"
    sed -i 's/⠧/\\/g' "$TMPFILE"
    sed -i 's/⠇/|/g' "$TMPFILE"
    sed -i 's/⠏/\//g' "$TMPFILE"

    # Move temp file over original
    mv "$TMPFILE" "$file"
done

echo ""
echo "[OK] Replacement complete!"
echo ""

# Verify
REMAINING=$(grep -roP '[\x{80}-\x{FFFF}]' --include="*.go" . 2>/dev/null | wc -l || echo "0")

echo "[INFO] Unicode characters remaining: $REMAINING"
if [ "$REMAINING" -gt 0 ]; then
    echo "[WARN] Some Unicode still exists (might be in comments or safe locations)"
    echo "[INFO] Unique remaining characters:"
    grep -roP '[\x{80}-\x{FFFF}]' --include="*.go" . 2>/dev/null | grep -oP '[\x{80}-\x{FFFF}]' | sort -u | head -20
else
    echo "[OK] All Unicode characters replaced with ASCII!"
fi

echo ""
echo "[INFO] Backup: $BACKUP_DIR"
echo "[INFO] To restore: cp -r $BACKUP_DIR/* ."
echo ""
echo "[INFO] Next steps:"
echo "  1. go build"
echo "  2. go test ./..."
echo "  3. Test TUI: ./dbbackup"
echo "  4. Commit: git add . && git commit -m 'v3.42.11: Replace all Unicode with ASCII'"
echo ""
@@ -1,130 +0,0 @@
#!/bin/bash
# Remove ALL emojis/unicode symbols from Go code and replace with ASCII
# Date: January 8, 2026
# Issue: 638 lines contain Unicode emojis causing display issues

set -euo pipefail

echo "[INFO] Starting emoji removal process..."
echo ""

# Find all Go files with emojis (expanded emoji list)
echo "[SEARCH] Finding affected files..."
FILES=$(find . -name "*.go" -type f -not -path "*/vendor/*" -not -path "*/.git/*" | xargs grep -l -P '[\x{1F000}-\x{1FFFF}]|[\x{2300}-\x{27BF}]|[\x{2600}-\x{26FF}]' 2>/dev/null || true)

if [ -z "$FILES" ]; then
    echo "[WARN] No files with emojis found!"
    exit 0
fi

FILECOUNT=$(echo "$FILES" | wc -l)
echo "[INFO] Found $FILECOUNT files containing emojis"
echo ""

# Count total emojis before
BEFORE=$(find . -name "*.go" -type f -not -path "*/vendor/*" | xargs grep -oP '[\x{1F000}-\x{1FFFF}]|[\x{2300}-\x{27BF}]|[\x{2600}-\x{26FF}]' 2>/dev/null | wc -l || echo "0")
echo "[INFO] Total emojis found: $BEFORE"
echo ""

# Create backup
BACKUP_DIR="backup_before_emoji_removal_$(date +%Y%m%d_%H%M%S)"
mkdir -p "$BACKUP_DIR"
echo "[INFO] Creating backup in $BACKUP_DIR..."
for file in $FILES; do
    mkdir -p "$BACKUP_DIR/$(dirname "$file")"
    cp "$file" "$BACKUP_DIR/$file"
done
echo "[OK] Backup created"
echo ""

# Process each file
echo "[INFO] Replacing emojis with ASCII equivalents..."
PROCESSED=0

for file in $FILES; do
    PROCESSED=$((PROCESSED + 1))
    echo "[$PROCESSED/$FILECOUNT] Processing: $file"

    # Create temp file
    TMPFILE="${file}.tmp"

    # Status indicators
    sed 's/✅/[OK]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
    sed 's/❌/[FAIL]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
    sed 's/✓/[+]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
    sed 's/✗/[-]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"

    # Warning symbols (⚠️ has variant selector, handle both)
    sed 's/⚠️/[WARN]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
    sed 's/⚠/[!]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"

    # Info/Data symbols
    sed 's/📊/[INFO]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
    sed 's/📋/[LIST]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
    sed 's/📁/[DIR]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
    sed 's/📦/[PKG]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"

    # Target/Security
    sed 's/🎯/[TARGET]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
    sed 's/🛡️/[SECURE]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
    sed 's/🔒/[LOCK]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
    sed 's/🔓/[UNLOCK]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"

    # Actions
    sed 's/🔍/[SEARCH]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
    sed 's/⏱️/[TIME]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"

    # Cloud operations (☁️ has variant selector, handle both)
    sed 's/☁️/[CLOUD]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
    sed 's/☁/[CLOUD]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
    sed 's/📤/[UPLOAD]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
    sed 's/📥/[DOWNLOAD]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
    sed 's/🗑️/[DELETE]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"

    # Other
    sed 's/📈/[UP]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
    sed 's/📉/[DOWN]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"

    # Additional emojis found
    sed 's/⌨️/[KEY]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
    sed 's/⌨/[KEY]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
    sed 's/🗄️/[DB]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
    sed 's/🗄/[DB]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
    sed 's/⚙️/[CONFIG]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
    sed 's/⚙/[CONFIG]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
    sed 's/✏️/[EDIT]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
    sed 's/✏/[EDIT]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
done

echo ""
echo "[OK] Replacement complete!"
echo ""

# Count remaining emojis
AFTER=$(find . -name "*.go" -type f -not -path "*/vendor/*" | xargs grep -oP '[\x{1F000}-\x{1FFFF}]|[\x{2300}-\x{27BF}]|[\x{2600}-\x{26FF}]' 2>/dev/null | wc -l || echo "0")

echo "[INFO] Emojis before: $BEFORE"
echo "[INFO] Emojis after: $AFTER"
echo "[INFO] Emojis removed: $((BEFORE - AFTER))"
echo ""

if [ "$AFTER" -gt 0 ]; then
    echo "[WARN] $AFTER emojis still remaining!"
    echo "[INFO] Listing remaining emojis:"
    find . -name "*.go" -type f -not -path "*/vendor/*" | xargs grep -nP '[\x{1F000}-\x{1FFFF}]|[\x{2300}-\x{27BF}]|[\x{2600}-\x{26FF}]' 2>/dev/null | head -20
else
    echo "[OK] All emojis successfully removed!"
fi

echo ""
echo "[INFO] Backup location: $BACKUP_DIR"
echo "[INFO] To restore: cp -r $BACKUP_DIR/* ."
echo ""
echo "[INFO] Next steps:"
echo "  1. Build: go build"
echo "  2. Test: go test ./..."
echo "  3. Manual testing: ./dbbackup status"
echo "  4. If OK, commit: git add . && git commit -m 'Replace emojis with ASCII'"
echo "  5. If broken, restore: cp -r $BACKUP_DIR/* ."
echo ""
echo "[OK] Emoji removal script completed!"