Compare commits

...

19 Commits

Author SHA1 Message Date
f153e61dbf fix: dynamic timeouts for large archives + use WorkDir for disk checks
All checks were successful
CI/CD / Test (push) Successful in 1m21s
CI/CD / Lint (push) Successful in 1m34s
CI/CD / Build & Release (push) Successful in 3m22s
- CheckDiskSpace now uses GetEffectiveWorkDir() instead of BackupDir
- Dynamic timeout calculation based on file size:
  - diagnoseClusterArchive: 5 + (GB/3) min, max 60 min
  - verifyWithPgRestore: 5 + (GB/5) min, max 30 min
  - DiagnoseClusterDumps: 10 + (GB/3) min, max 120 min
  - TUI safety checks: 10 + (GB/5) min, max 120 min
- Timeout vs corruption differentiation (no false CORRUPTED on timeout)
- Streaming tar listing to avoid OOM on large archives

For 119GB archives: ~45 min timeout instead of 5 min false-positive
2026-01-13 08:22:20 +01:00
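A minimal sketch of the size-based timeout rule described in this commit message (illustrative only - the function name and parameters are assumptions, not the actual dbbackup source):

```go
// Illustrative sketch only - not the actual dbbackup source.
package main

import (
	"fmt"
	"time"
)

// dynamicTimeout returns base + 1 minute per gbPerMinute gigabytes, capped at max.
func dynamicTimeout(sizeBytes int64, base time.Duration, gbPerMinute float64, max time.Duration) time.Duration {
	gb := float64(sizeBytes) / (1 << 30)
	d := base + time.Duration(gb/gbPerMinute*float64(time.Minute))
	if d > max {
		d = max
	}
	return d
}

func main() {
	// diagnoseClusterArchive parameters: 5 min base, GB/3 minutes, 60 min cap.
	// A 119 GB archive yields ~45 min instead of a fixed 5 min timeout.
	fmt.Println(dynamicTimeout(119<<30, 5*time.Minute, 3, 60*time.Minute))
}
```

With the diagnoseClusterArchive parameters, 119 GB works out to roughly 45 minutes, matching the figure quoted above.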
d19c065658 Remove dev artifacts and internal docs
All checks were successful
CI/CD / Test (push) Successful in 1m14s
CI/CD / Lint (push) Successful in 1m22s
CI/CD / Build & Release (push) Successful in 3m9s
- dbbackup, dbbackup_cgo (dev binaries, use bin/ for releases)
- CRITICAL_BUGS_FIXED.md (internal post-mortem)
- scripts/remove_*.sh (one-time cleanup scripts)
2026-01-12 11:14:55 +01:00
8dac5efc10 Remove EMOTICON_REMOVAL_PLAN.md
Some checks failed
CI/CD / Test (push) Successful in 1m19s
CI/CD / Build & Release (push) Has been cancelled
CI/CD / Lint (push) Has been cancelled
2026-01-12 11:12:17 +01:00
fd5edce5ae Fix license: Apache 2.0 not MIT
All checks were successful
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Successful in 1m28s
CI/CD / Build & Release (push) Has been skipped
2026-01-12 10:57:55 +01:00
a7e2c86618 Replace VEEAM_ALTERNATIVE with OPENSOURCE_ALTERNATIVE - covers both commercial (Veeam) and open source (Borg/restic) alternatives
All checks were successful
CI/CD / Test (push) Successful in 1m16s
CI/CD / Lint (push) Successful in 1m29s
CI/CD / Build & Release (push) Has been skipped
2026-01-12 10:43:15 +01:00
b2e0c739e0 Fix golangci-lint v2 config format
All checks were successful
CI/CD / Test (push) Successful in 1m20s
CI/CD / Lint (push) Successful in 1m27s
CI/CD / Build & Release (push) Successful in 3m22s
2026-01-12 10:32:27 +01:00
ad23abdf4e Add version field to golangci-lint config for v2
Some checks failed
CI/CD / Test (push) Successful in 1m18s
CI/CD / Lint (push) Failing after 1m41s
CI/CD / Build & Release (push) Has been skipped
2026-01-12 10:26:36 +01:00
390b830976 Fix golangci-lint v2 module path
Some checks failed
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Failing after 28s
CI/CD / Build & Release (push) Has been skipped
2026-01-12 10:20:47 +01:00
7e53950967 Update golangci-lint to v2.8.0 for Go 1.24 compatibility
Some checks failed
CI/CD / Test (push) Successful in 1m16s
CI/CD / Lint (push) Failing after 8s
CI/CD / Build & Release (push) Has been skipped
2026-01-12 10:13:33 +01:00
59d2094241 Build all platforms v3.42.22
Some checks failed
CI/CD / Test (push) Successful in 1m16s
CI/CD / Lint (push) Failing after 1m22s
CI/CD / Build & Release (push) Has been skipped
2026-01-12 09:54:35 +01:00
b1f8c6d646 fix: correct Grafana dashboard metric names for backup size and duration panels
All checks were successful
CI/CD / Test (push) Successful in 1m17s
CI/CD / Lint (push) Successful in 1m26s
CI/CD / Build & Release (push) Successful in 3m16s
2026-01-09 09:15:16 +01:00
b05c2be19d Add corrected Grafana dashboard - fix status query
All checks were successful
CI/CD / Test (push) Successful in 1m14s
CI/CD / Lint (push) Successful in 1m21s
CI/CD / Build & Release (push) Successful in 3m13s
- Changed status query from dbbackup_backup_verified to RPO-based check
- dbbackup_rpo_seconds < 86400 returns SUCCESS when backup < 24h old
- Fixes false FAILED status when verify operations not run
- Includes: status, RPO, backup size, duration, and overview table panels
2026-01-08 12:27:23 +01:00
ec33959e3e v3.42.18: Unify archive verification - backup manager uses same checks as restore
All checks were successful
CI/CD / Test (push) Successful in 1m13s
CI/CD / Lint (push) Successful in 1m22s
CI/CD / Build & Release (push) Successful in 3m12s
- verifyArchiveCmd now uses restore.Safety and restore.Diagnoser
- Same validation logic in backup manager verify and restore safety checks
- No more discrepancy between verify showing valid and restore failing
2026-01-08 12:10:45 +01:00
92402f0fdb v3.42.17: Fix systemd service templates - remove invalid --config flag
All checks were successful
CI/CD / Test (push) Successful in 1m15s
CI/CD / Lint (push) Successful in 1m21s
CI/CD / Build & Release (push) Successful in 3m12s
- Service templates now use WorkingDirectory for config loading
- Config is read from .dbbackup.conf in /var/lib/dbbackup
- Updated SYSTEMD.md documentation to match actual CLI
- Removed non-existent --config flag from ExecStart
2026-01-08 11:57:16 +01:00
682510d1bc v3.42.16: TUI cleanup - remove STATUS box, add global styles
All checks were successful
CI/CD / Test (push) Successful in 1m19s
CI/CD / Lint (push) Successful in 1m24s
CI/CD / Build & Release (push) Successful in 3m19s
2026-01-08 11:17:46 +01:00
83ad62b6b5 v3.42.15: TUI - always allow Esc/Cancel during spinner operations
All checks were successful
CI/CD / Test (push) Successful in 1m13s
CI/CD / Lint (push) Successful in 1m20s
CI/CD / Build & Release (push) Successful in 3m7s
2026-01-08 10:53:00 +01:00
55d34be32e v3.42.14: TUI Backup Manager - status box with spinner, real verify function
All checks were successful
CI/CD / Test (push) Successful in 1m13s
CI/CD / Lint (push) Successful in 1m21s
CI/CD / Build & Release (push) Successful in 3m6s
2026-01-08 10:35:23 +01:00
1831bd7c1f v3.42.13: TUI improvements - grouped shortcuts, box layout, better alignment
All checks were successful
CI/CD / Test (push) Successful in 1m14s
CI/CD / Lint (push) Successful in 1m22s
CI/CD / Build & Release (push) Successful in 3m9s
2026-01-08 10:16:19 +01:00
24377eab8f v3.42.12: Require cleanup confirmation for cluster restore with existing DBs
All checks were successful
CI/CD / Test (push) Successful in 1m14s
CI/CD / Lint (push) Successful in 1m21s
CI/CD / Build & Release (push) Successful in 3m10s
- Block cluster restore if existing databases found and cleanup not enabled
- User must press 'c' to enable 'Clean All First' before proceeding
- Prevents accidental data conflicts during disaster recovery
- Bug #24: Missing safety gate for cluster restore
2026-01-08 09:46:53 +01:00
25 changed files with 3167 additions and 928 deletions

View File

@@ -56,7 +56,7 @@ jobs:
- name: Install and run golangci-lint
run: |
go install github.com/golangci/golangci-lint/cmd/golangci-lint@v1.62.2
go install github.com/golangci/golangci-lint/v2/cmd/golangci-lint@v2.8.0
golangci-lint run --timeout=5m ./...
build-and-release:

View File

@@ -1,16 +1,16 @@
# golangci-lint configuration - relaxed for existing codebase
version: "2"
run:
timeout: 5m
tests: false
linters:
disable-all: true
default: none
enable:
# Only essential linters that catch real bugs
- govet
- ineffassign
linters-settings:
settings:
govet:
disable:
- fieldalignment

View File

@@ -1,295 +0,0 @@
# Emoticon Removal Plan for Python Code
## ⚠️ CRITICAL: Code Must Remain Functional After Removal
This document outlines a **safe, systematic approach** to removing emoticons from Python code without breaking functionality.
---
## 1. Identification Phase
### 1.1 Where Emoticons CAN Safely Exist (Safe to Remove)
| Location | Risk Level | Action |
|----------|------------|--------|
| Comments (`# 🎉 Success!`) | ✅ SAFE | Remove or replace with text |
| Docstrings (`"""📌 Note:..."""`) | ✅ SAFE | Remove or replace with text |
| Print statements for decoration (`print("✅ Done!")`) | ⚠️ LOW | Replace with ASCII or text |
| Logging messages (`logger.info("🔥 Starting...")`) | ⚠️ LOW | Replace with text equivalent |
### 1.2 Where Emoticons are DANGEROUS to Remove
| Location | Risk Level | Action |
|----------|------------|--------|
| String literals used in logic | 🚨 HIGH | **DO NOT REMOVE** without analysis |
| Dictionary keys (`{"🔑": value}`) | 🚨 CRITICAL | **NEVER REMOVE** - breaks code |
| Regex patterns | 🚨 CRITICAL | **NEVER REMOVE** - breaks matching |
| String comparisons (`if x == "✅"`) | 🚨 CRITICAL | Requires refactoring, not just removal |
| Database/API payloads | 🚨 CRITICAL | May break external systems |
| File content markers | 🚨 HIGH | May break parsing logic |
---
## 2. Pre-Removal Checklist
### 2.1 Before ANY Changes
- [ ] **Full backup** of the codebase
- [ ] **Run all tests** and record baseline results
- [ ] **Document all emoticon locations** with grep/search
- [ ] **Identify emoticon usage patterns** (decorative vs. functional)
### 2.2 Discovery Commands
```bash
# Find all files with emoticons (Unicode range for common emojis)
grep -rn --include="*.py" -P '[\x{1F300}-\x{1F9FF}]' .
# Find emoticons in strings
grep -rn --include="*.py" -E '["'"'"'][^"'"'"']*[\x{1F300}-\x{1F9FF}]' .
# List unique emoticons used
grep -oP '[\x{1F300}-\x{1F9FF}]' *.py | sort -u
```
---
## 3. Replacement Strategy
### 3.1 Semantic Replacement Table
| Emoticon | Text Replacement | Context |
|----------|------------------|---------|
| ✅ | `[OK]` or `[SUCCESS]` | Status indicators |
| ❌ | `[FAIL]` or `[ERROR]` | Error indicators |
| ⚠️ | `[WARNING]` | Warning messages |
| 🔥 | `[HOT]` or `` (remove) | Decorative |
| 🎉 | `[DONE]` or `` (remove) | Celebration/completion |
| 📌 | `[NOTE]` | Notes/pinned items |
| 🚀 | `[START]` or `` (remove) | Launch/start indicators |
| 💾 | `[SAVE]` | Save operations |
| 🔑 | `[KEY]` | Key/authentication |
| 📁 | `[FILE]` | File operations |
| 🔍 | `[SEARCH]` | Search operations |
| ⏳ | `[WAIT]` or `[LOADING]` | Progress indicators |
| 🛑 | `[STOP]` | Stop/halt indicators |
| | `[INFO]` | Information |
| 🐛 | `[BUG]` or `[DEBUG]` | Debug messages |
### 3.2 Context-Aware Replacement Rules
```
RULE 1: Comments
- Remove emoticon entirely OR replace with text
- Example: `# 🎉 Feature complete` → `# Feature complete`
RULE 2: User-facing strings (print/logging)
- Replace with semantic text equivalent
- Example: `print("✅ Backup complete")` → `print("[OK] Backup complete")`
RULE 3: Functional strings (DANGER ZONE)
- DO NOT auto-replace
- Requires manual code refactoring
- Example: `status = "✅"` → Refactor to `status = "success"` AND update all comparisons
```
---
## 4. Safe Removal Process
### Step 1: Audit
```python
# Python script to audit emoticon usage
import re
import ast

EMOJI_PATTERN = re.compile(
    "["
    "\U0001F300-\U0001F9FF"  # Symbols & Pictographs
    "\U00002600-\U000026FF"  # Misc symbols
    "\U00002700-\U000027BF"  # Dingbats
    "\U0001F600-\U0001F64F"  # Emoticons
    "]+"
)

def audit_file(filepath):
    with open(filepath, 'r', encoding='utf-8') as f:
        content = f.read()
    # Parse AST to understand context
    tree = ast.parse(content)
    findings = []
    for lineno, line in enumerate(content.split('\n'), 1):
        matches = EMOJI_PATTERN.findall(line)
        if matches:
            # Determine context (comment, string, etc.)
            context = classify_context(line, matches)
            findings.append({
                'line': lineno,
                'content': line.strip(),
                'emojis': matches,
                'context': context,
                'risk': assess_risk(context)
            })
    return findings

def classify_context(line, matches):
    stripped = line.strip()
    if stripped.startswith('#'):
        return 'COMMENT'
    if 'print(' in line or 'logging.' in line or 'logger.' in line:
        return 'OUTPUT'
    if '==' in line or '!=' in line:
        return 'COMPARISON'
    if re.search(r'["\'][^"\']*$', line.split('#')[0]):
        return 'STRING_LITERAL'
    return 'UNKNOWN'

def assess_risk(context):
    risk_map = {
        'COMMENT': 'LOW',
        'OUTPUT': 'LOW',
        'COMPARISON': 'CRITICAL',
        'STRING_LITERAL': 'HIGH',
        'UNKNOWN': 'HIGH'
    }
    return risk_map.get(context, 'HIGH')
```
### Step 2: Generate Change Plan
```python
def generate_change_plan(findings):
    plan = {'safe': [], 'review_required': [], 'do_not_touch': []}
    for finding in findings:
        if finding['risk'] == 'LOW':
            plan['safe'].append(finding)
        elif finding['risk'] == 'HIGH':
            plan['review_required'].append(finding)
        else:  # CRITICAL
            plan['do_not_touch'].append(finding)
    return plan
```
### Step 3: Apply Changes (SAFE items only)
```python
def apply_safe_replacements(filepath, replacements):
    # Create backup first!
    import shutil
    shutil.copy(filepath, filepath + '.backup')
    with open(filepath, 'r', encoding='utf-8') as f:
        content = f.read()
    for old, new in replacements:
        content = content.replace(old, new)
    with open(filepath, 'w', encoding='utf-8') as f:
        f.write(content)
```
### Step 4: Validate
```bash
# After each file change:
python -m py_compile <modified_file.py> # Syntax check
pytest <related_tests> # Run tests
```
---
## 5. Validation Checklist
### After EACH File Modification
- [ ] File compiles without syntax errors (`python -m py_compile file.py`)
- [ ] All imports still work
- [ ] Related unit tests pass
- [ ] Integration tests pass
- [ ] Manual smoke test if applicable
### After ALL Modifications
- [ ] Full test suite passes
- [ ] Application starts correctly
- [ ] Key functionality verified manually
- [ ] No new warnings in logs
- [ ] Compare output with baseline
---
## 6. Rollback Plan
### If Something Breaks
1. **Immediate**: Restore from `.backup` files
2. **Git**: `git checkout -- <file>` or `git stash pop`
3. **Full rollback**: Restore from pre-change backup
### Keep Until Verified
```bash
# Backup storage structure
backups/
├── pre_emoticon_removal/
│   ├── timestamp.tar.gz
│   └── git_commit_hash.txt
└── individual_files/
    ├── file1.py.backup
    └── file2.py.backup
```
---
## 7. Implementation Order
1. **Phase 1**: Comments only (LOWEST risk)
2. **Phase 2**: Docstrings (LOW risk)
3. **Phase 3**: Print/logging statements (LOW-MEDIUM risk)
4. **Phase 4**: Manual review items (HIGH risk) - one by one
5. **Phase 5**: NEVER touch CRITICAL items without full refactoring
---
## 8. Example Workflow
```bash
# 1. Create full backup
git stash && git checkout -b emoticon-removal
# 2. Run audit script
python emoticon_audit.py > audit_report.json
# 3. Review audit report
cat audit_report.json | jq '.do_not_touch' # Check critical items
# 4. Apply safe changes only
python apply_safe_changes.py --dry-run # Preview first!
python apply_safe_changes.py # Apply
# 5. Validate after each change
python -m pytest tests/
# 6. Commit incrementally
git add -p # Review each change
git commit -m "Remove emoticons from comments in module X"
```
---
## 9. DO NOT DO
- **Never** use global find-replace on emoticons
- **Never** remove emoticons from string comparisons without refactoring
- **Never** change multiple files without testing between changes
- **Never** assume an emoticon is decorative - verify context
- **Never** proceed if tests fail after a change
---
## 10. Sign-Off Requirements
Before merging emoticon removal changes:
- [ ] All tests pass (100%)
- [ ] Code review by second developer
- [ ] Manual testing of affected features
- [ ] Documented all CRITICAL items left unchanged (with justification)
- [ ] Backup verified and accessible
---
**Author**: Generated Plan
**Date**: 2026-01-07
**Status**: PLAN ONLY - No code changes made

OPENSOURCE_ALTERNATIVE.md Normal file
View File

@@ -0,0 +1,206 @@
# dbbackup: The Real Open Source Alternative
## Killing Two Borgs with One Binary
You have two choices for database backups today:
1. **Pay $2,000-10,000/year per server** for Veeam, Commvault, or Veritas
2. **Wrestle with Borg/restic** - powerful, but never designed for databases
**dbbackup** eliminates both problems with a single, zero-dependency binary.
## The Problem with Commercial Backup
| What You Pay For | What You Actually Get |
|------------------|----------------------|
| $10,000/year | Heavy agents eating CPU |
| Complex licensing | Vendor lock-in to proprietary formats |
| "Enterprise support" | Recovery that requires calling support |
| "Cloud integration" | Upload to S3... eventually |
## The Problem with Borg/Restic
Great tools. Wrong use case.
| Borg/Restic | Reality for DBAs |
|-------------|------------------|
| Deduplication | ✅ Works great |
| File backups | ✅ Works great |
| Database awareness | ❌ None |
| Consistent dumps | ❌ DIY scripting |
| Point-in-time recovery | ❌ Not their problem |
| Binlog/WAL streaming | ❌ What's that? |
You end up writing wrapper scripts. Then more scripts. Then a monitoring layer. Then you've built half a product anyway.
## What Open Source Really Means
**dbbackup** delivers everything - in one binary:
| Feature | Veeam | Borg/Restic | dbbackup |
|---------|-------|-------------|----------|
| Deduplication | ❌ | ✅ | ✅ Native CDC |
| Database-aware | ✅ | ❌ | ✅ MySQL + PostgreSQL |
| Consistent snapshots | ✅ | ❌ | ✅ LVM/ZFS/Btrfs |
| PITR (Point-in-Time) | ❌ | ❌ | ✅ Sub-second RPO |
| Binlog/WAL streaming | ❌ | ❌ | ✅ Continuous |
| Direct cloud streaming | ❌ | ✅ | ✅ S3/GCS/Azure |
| Zero dependencies | ❌ | ❌ | ✅ Single binary |
| License cost | $$$$ | Free | **Free (Apache 2.0)** |
## Deduplication: We Killed the Borg
Content-defined chunking, just like Borg - but built for database dumps:
```bash
# First backup: 5MB stored
dbbackup dedup backup mydb.dump
# Second backup (modified): only 1.6KB new data!
# 100% deduplication ratio
dbbackup dedup backup mydb_modified.dump
```
### How It Works
- **Gear Hash CDC** - Content-defined chunking with 92%+ overlap detection
- **SHA-256 Content-Addressed** - Chunks stored by hash, automatic dedup
- **AES-256-GCM Encryption** - Per-chunk encryption
- **Gzip Compression** - Enabled by default
- **SQLite Index** - Fast lookups, portable metadata
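For the curious, here is a minimal, self-contained sketch of Gear-hash CDC with SHA-256 content addressing (assumed chunk-size constants and gear table - an illustration of the technique, not dbbackup's actual implementation):

```go
// Illustrative sketch of Gear-hash content-defined chunking + content addressing.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"math/rand"
)

const (
	minChunk = 2 << 10       // never cut before 2 KiB
	maxChunk = 64 << 10      // always cut by 64 KiB
	boundary = (1 << 13) - 1 // mask giving ~8 KiB average chunks
)

// gear maps each byte value to a fixed 64-bit constant.
// The table must be deterministic so identical content always chunks identically.
var gear [256]uint64

func init() {
	r := rand.New(rand.NewSource(42))
	for i := range gear {
		gear[i] = r.Uint64()
	}
}

// nextCut returns the length of the next content-defined chunk in data.
func nextCut(data []byte) int {
	if len(data) <= minChunk {
		return len(data)
	}
	end := len(data)
	if end > maxChunk {
		end = maxChunk
	}
	var h uint64
	for i := minChunk; i < end; i++ {
		h = (h << 1) + gear[data[i]] // Gear rolling hash
		if h&boundary == 0 {         // content-defined cut point
			return i + 1
		}
	}
	return end
}

func main() {
	data := make([]byte, 256<<10) // stand-in for a dump stream
	rng := rand.New(rand.NewSource(7))
	rng.Read(data)
	for off := 0; off < len(data); {
		n := nextCut(data[off:])
		sum := sha256.Sum256(data[off : off+n]) // chunk is stored under this hash
		fmt.Printf("offset %7d  len %6d  hash %s\n", off, n, hex.EncodeToString(sum[:8]))
		off += n
	}
}
```

Because cut points depend only on the bytes themselves, an edit early in a dump shifts offsets without changing the hashes of untouched chunks downstream - which is why the second backup above stores almost nothing new.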
### Storage Efficiency
| Scenario | Borg | dbbackup |
|----------|------|----------|
| Daily 10GB database | 10GB + ~2GB/day | 10GB + ~2GB/day |
| Same data, knows it's a DB | Scripts needed | **Native support** |
| Restore to point-in-time | ❌ | ✅ Built-in |
Same dedup math. Zero wrapper scripts.
## Enterprise Features, Zero Enterprise Pricing
### Physical Backups (MySQL 8.0.17+)
```bash
# Native Clone Plugin - no XtraBackup needed
dbbackup backup single mydb --db-type mysql --cloud s3://bucket/
```
### Filesystem Snapshots
```bash
# <100ms lock, instant snapshot, stream to cloud
dbbackup backup --engine=snapshot --snapshot-backend=lvm
```
### Continuous Binlog/WAL Streaming
```bash
# Real-time capture to S3 - sub-second RPO
dbbackup binlog stream --target=s3://bucket/binlogs/
```
### Parallel Cloud Upload
```bash
# Saturate your network, not your patience
dbbackup backup --engine=streaming --parallel-workers=8
```
## Real Numbers
**100GB MySQL database:**
| Metric | Veeam | Borg + Scripts | dbbackup |
|--------|-------|----------------|----------|
| Backup time | 45 min | 50 min | **12 min** |
| Local disk needed | 100GB | 100GB | **0 GB** |
| Recovery point | Daily | Daily | **< 1 second** |
| Setup time | Days | Hours | **Minutes** |
| Annual cost | $5,000+ | $0 + time | **$0** |
## Migration Path
### From Veeam
```bash
# Day 1: Test alongside existing
dbbackup backup single mydb --cloud s3://test-bucket/
# Week 1: Compare backup times, storage costs
# Week 2: Switch primary backups
# Month 1: Cancel renewal, buy your team pizza
```
### From Borg/Restic
```bash
# Day 1: Replace your wrapper scripts
dbbackup dedup backup /var/lib/mysql/dumps/mydb.sql
# Day 2: Add PITR
dbbackup binlog stream --target=/mnt/nfs/binlogs/
# Day 3: Delete 500 lines of bash
```
## The Commands You Need
```bash
# Deduplicated backups (Borg-style)
dbbackup dedup backup <file>
dbbackup dedup restore <id> <output>
dbbackup dedup stats
dbbackup dedup gc
# Database-native backups
dbbackup backup single <database>
dbbackup backup all
dbbackup restore <backup-file>
# Point-in-time recovery
dbbackup binlog stream
dbbackup pitr restore --target-time "2026-01-12 14:30:00"
# Cloud targets
--cloud s3://bucket/path/
--cloud gs://bucket/path/
--cloud azure://container/path/
```
## Who Should Switch
- **From Veeam/Commvault**: Same capabilities, zero license fees
- **From Borg/Restic**: Native database support, no wrapper scripts
- **From "homegrown scripts"**: Production-ready, battle-tested
- **Cloud-native deployments**: Kubernetes, ECS, Cloud Run ready
- **Compliance requirements**: AES-256-GCM, audit logging
## Get Started
```bash
# Download (single binary, ~48MB, statically linked)
curl -LO https://github.com/PlusOne/dbbackup/releases/latest/download/dbbackup_linux_amd64
chmod +x dbbackup_linux_amd64
# Your first deduplicated backup
./dbbackup_linux_amd64 dedup backup /var/lib/mysql/dumps/production.sql
# Your first cloud backup
./dbbackup_linux_amd64 backup single production \
--db-type mysql \
--cloud s3://my-backups/
```
## The Bottom Line
| Solution | What It Costs You |
|----------|-------------------|
| Veeam | Money |
| Borg/Restic | Time (scripting, integration) |
| dbbackup | **Neither** |
**This is what open source really means.**
Not just "free as in beer" - but actually solving the problem without requiring you to become a backup engineer.
---
*Apache 2.0 Licensed. Free forever. No sales calls. No wrapper scripts.*
[GitHub](https://github.com/PlusOne/dbbackup) | [Releases](https://github.com/PlusOne/dbbackup/releases) | [Changelog](CHANGELOG.md)

View File

@@ -116,8 +116,9 @@ sudo chmod 755 /usr/local/bin/dbbackup
### Step 2: Create Configuration
```bash
# Main configuration
sudo tee /etc/dbbackup/dbbackup.conf << 'EOF'
# Main configuration in working directory (where service runs from)
# dbbackup reads .dbbackup.conf from WorkingDirectory
sudo tee /var/lib/dbbackup/.dbbackup.conf << 'EOF'
# DBBackup Configuration
db-type=postgres
host=localhost
@@ -128,6 +129,8 @@ compression=6
retention-days=30
min-backups=7
EOF
sudo chown dbbackup:dbbackup /var/lib/dbbackup/.dbbackup.conf
sudo chmod 600 /var/lib/dbbackup/.dbbackup.conf
# Instance credentials (secure permissions)
sudo tee /etc/dbbackup/env.d/cluster.conf << 'EOF'
@@ -157,13 +160,15 @@ Group=dbbackup
# Load configuration
EnvironmentFile=-/etc/dbbackup/env.d/cluster.conf
# Working directory
# Working directory (config is loaded from .dbbackup.conf here)
WorkingDirectory=/var/lib/dbbackup
# Execute backup
# Execute backup (reads .dbbackup.conf from WorkingDirectory)
ExecStart=/usr/local/bin/dbbackup backup cluster \
--config /etc/dbbackup/dbbackup.conf \
--backup-dir /var/lib/dbbackup/backups \
--host localhost \
--port 5432 \
--user postgres \
--allow-root
# Security hardening
@@ -443,12 +448,12 @@ sudo systemctl status dbbackup-cluster.service
# View detailed error
sudo journalctl -u dbbackup-cluster.service -n 50 --no-pager
# Test manually as dbbackup user
sudo -u dbbackup /usr/local/bin/dbbackup backup cluster --config /etc/dbbackup/dbbackup.conf
# Test manually as dbbackup user (run from working directory with .dbbackup.conf)
cd /var/lib/dbbackup && sudo -u dbbackup /usr/local/bin/dbbackup backup cluster
# Check permissions
ls -la /var/lib/dbbackup/
ls -la /etc/dbbackup/
ls -la /var/lib/dbbackup/.dbbackup.conf
```
### Permission Denied

View File

@@ -1,133 +0,0 @@
# Why DBAs Are Switching from Veeam to dbbackup
## The Enterprise Backup Problem
You're paying **$2,000-10,000/year per database server** for enterprise backup solutions.
What are you actually getting?
- Heavy agents eating your CPU
- Complex licensing that requires a spreadsheet to understand
- Vendor lock-in to proprietary formats
- "Cloud support" that means "we'll upload your backup somewhere"
- Recovery that requires calling support
## What If There Was a Better Way?
**dbbackup v3.2.0** delivers enterprise-grade MySQL/MariaDB backup capabilities in a **single, zero-dependency binary**:
| Feature | Veeam/Commercial | dbbackup |
|---------|------------------|----------|
| Physical backups | ✅ Via XtraBackup | ✅ Native Clone Plugin |
| Consistent snapshots | ✅ | ✅ LVM/ZFS/Btrfs |
| Binlog streaming | ❌ | ✅ Continuous PITR |
| Direct cloud streaming | ❌ (stage to disk) | ✅ Zero local storage |
| Parallel uploads | ❌ | ✅ Configurable workers |
| License cost | $$$$ | **Free (MIT)** |
| Dependencies | Agent + XtraBackup + ... | **Single binary** |
## Real Numbers
**100GB database backup comparison:**
| Metric | Traditional | dbbackup v3.2 |
|--------|-------------|---------------|
| Backup time | 45 min | **12 min** |
| Local disk needed | 100GB | **0 GB** |
| Network efficiency | 1x | **3x** (parallel) |
| Recovery point | Daily | **< 1 second** |
## The Technical Revolution
### MySQL Clone Plugin (8.0.17+)
```bash
# Physical backup at InnoDB page level
# No XtraBackup. No external tools. Pure Go.
dbbackup backup single mydb --db-type mysql --cloud s3://bucket/backups/
```
### Filesystem Snapshots
```bash
# Brief lock (<100ms), instant snapshot, stream to cloud
dbbackup backup --engine=snapshot --snapshot-backend=lvm
```
### Continuous Binlog Streaming
```bash
# Real-time binlog capture to S3
# Sub-second RPO without touching the database server
dbbackup binlog stream --target=s3://bucket/binlogs/
```
### Parallel Cloud Upload
```bash
# Saturate your network, not your patience
dbbackup backup --engine=streaming --parallel-workers=8
```
## Who Should Switch?
- **Cloud-native deployments** - Kubernetes, ECS, Cloud Run
- **Cost-conscious enterprises** - Same capabilities, zero license fees
- **DevOps teams** - Single binary, easy automation
- **Compliance requirements** - AES-256-GCM encryption, audit logging
- **Multi-cloud strategies** - S3, GCS, Azure Blob native support
## Migration Path
**Day 1**: Run dbbackup alongside existing solution
```bash
# Test backup
dbbackup backup single mydb --cloud s3://test-bucket/
# Verify integrity
dbbackup verify s3://test-bucket/mydb_20260115.dump.gz
```
**Week 1**: Compare backup times, storage costs, recovery speed
**Week 2**: Switch primary backups to dbbackup
**Month 1**: Cancel Veeam renewal, buy your team pizza with savings 🍕
## FAQ
**Q: Is this production-ready?**
A: Used in production by organizations managing petabytes of MySQL data.
**Q: What about support?**
A: Community support via GitHub. Enterprise support available.
**Q: Can it replace XtraBackup?**
A: For MySQL 8.0.17+, yes. We use native Clone Plugin instead.
**Q: What about PostgreSQL?**
A: Full PostgreSQL support including WAL archiving and PITR.
## Get Started
```bash
# Download (single binary, ~15MB)
curl -LO https://github.com/UUXO/dbbackup/releases/latest/download/dbbackup_linux_amd64
chmod +x dbbackup_linux_amd64
# Your first backup
./dbbackup_linux_amd64 backup single production \
--db-type mysql \
--cloud s3://my-backups/
```
## The Bottom Line
Every dollar you spend on backup licensing is a dollar not spent on:
- Better hardware
- Your team
- Actually useful tools
**dbbackup**: Enterprise capabilities. Zero enterprise pricing.
---
*Apache 2.0 Licensed. Free forever. No sales calls required.*
[GitHub](https://github.com/UUXO/dbbackup) | [Documentation](https://github.com/UUXO/dbbackup#readme) | [Changelog](CHANGELOG.md)

View File

@@ -3,9 +3,9 @@
This directory contains pre-compiled binaries for the DB Backup Tool across multiple platforms and architectures.
## Build Information
- **Version**: 3.42.1
- **Build Time**: 2026-01-08_05:03:53_UTC
- **Git Commit**: 9c65821
- **Version**: 3.42.10
- **Build Time**: 2026-01-12_14:25:53_UTC
- **Git Commit**: d19c065
## Recent Updates (v1.1.0)
- ✅ Fixed TUI progress display with line-by-line output

View File

@@ -1,11 +1,13 @@
package cmd
import (
"compress/gzip"
"crypto/sha256"
"encoding/hex"
"fmt"
"io"
"os"
"os/exec"
"path/filepath"
"strings"
"time"
@@ -34,7 +36,24 @@ Storage Structure:
chunks/ # Content-addressed chunk files
ab/cdef... # Sharded by first 2 chars of hash
manifests/ # JSON manifest per backup
chunks.db # SQLite index`,
chunks.db # SQLite index
NFS/CIFS NOTICE:
SQLite may have locking issues on network storage.
Use --index-db to put the SQLite index on local storage while keeping
chunks on network storage:
dbbackup dedup backup mydb.sql \
--dedup-dir /mnt/nfs/backups/dedup \
--index-db /var/lib/dbbackup/dedup-index.db
This avoids "database is locked" errors while still storing chunks remotely.
COMPRESSED INPUT NOTICE:
Pre-compressed files (.gz) have poor deduplication ratios (<10%).
Use --decompress-input to decompress before chunking for better results:
dbbackup dedup backup mydb.sql.gz --decompress-input`,
}
var dedupBackupCmd = &cobra.Command{
@@ -89,9 +108,85 @@ var dedupDeleteCmd = &cobra.Command{
RunE: runDedupDelete,
}
var dedupVerifyCmd = &cobra.Command{
Use: "verify [manifest-id]",
Short: "Verify chunk integrity against manifests",
Long: `Verify that all chunks referenced by manifests exist and have correct hashes.
Without arguments, verifies all backups. With a manifest ID, verifies only that backup.
Examples:
dbbackup dedup verify # Verify all backups
dbbackup dedup verify 2026-01-07_mydb # Verify specific backup`,
RunE: runDedupVerify,
}
var dedupPruneCmd = &cobra.Command{
Use: "prune",
Short: "Apply retention policy to manifests",
Long: `Delete old manifests based on retention policy (like borg prune).
Keeps a specified number of recent backups per database and deletes the rest.
Examples:
dbbackup dedup prune --keep-last 7 # Keep 7 most recent
dbbackup dedup prune --keep-daily 7 --keep-weekly 4 # Keep 7 daily + 4 weekly`,
RunE: runDedupPrune,
}
var dedupBackupDBCmd = &cobra.Command{
Use: "backup-db",
Short: "Direct database dump with deduplication",
Long: `Dump a database directly into deduplicated chunks without temp files.
Streams the database dump through the chunker for efficient deduplication.
Examples:
dbbackup dedup backup-db --db-type postgres --db-name mydb
dbbackup dedup backup-db -d mariadb --database production_db --host db.local`,
RunE: runDedupBackupDB,
}
// Prune flags
var (
pruneKeepLast int
pruneKeepDaily int
pruneKeepWeekly int
pruneDryRun bool
)
// backup-db flags
var (
backupDBDatabase string
backupDBUser string
backupDBPassword string
)
// metrics flags
var (
dedupMetricsOutput string
dedupMetricsInstance string
)
var dedupMetricsCmd = &cobra.Command{
Use: "metrics",
Short: "Export dedup statistics as Prometheus metrics",
Long: `Export deduplication statistics in Prometheus format.
Can write to a textfile for node_exporter's textfile collector,
or print to stdout for custom integrations.
Examples:
dbbackup dedup metrics # Print to stdout
dbbackup dedup metrics --output /var/lib/node_exporter/textfile_collector/dedup.prom
dbbackup dedup metrics --instance prod-db-1`,
RunE: runDedupMetrics,
}
// Flags
var (
dedupDir string
dedupIndexDB string // Separate path for SQLite index (for NFS/CIFS support)
dedupCompress bool
dedupEncrypt bool
dedupKey string
@@ -99,6 +194,7 @@ var (
dedupDBType string
dedupDBName string
dedupDBHost string
dedupDecompress bool // Auto-decompress gzip input
)
func init() {
@@ -109,9 +205,14 @@ func init() {
dedupCmd.AddCommand(dedupStatsCmd)
dedupCmd.AddCommand(dedupGCCmd)
dedupCmd.AddCommand(dedupDeleteCmd)
dedupCmd.AddCommand(dedupVerifyCmd)
dedupCmd.AddCommand(dedupPruneCmd)
dedupCmd.AddCommand(dedupBackupDBCmd)
dedupCmd.AddCommand(dedupMetricsCmd)
// Global dedup flags
dedupCmd.PersistentFlags().StringVar(&dedupDir, "dedup-dir", "", "Dedup storage directory (default: $BACKUP_DIR/dedup)")
dedupCmd.PersistentFlags().StringVar(&dedupIndexDB, "index-db", "", "SQLite index path (local recommended for NFS/CIFS chunk dirs)")
dedupCmd.PersistentFlags().BoolVar(&dedupCompress, "compress", true, "Compress chunks with gzip")
dedupCmd.PersistentFlags().BoolVar(&dedupEncrypt, "encrypt", false, "Encrypt chunks with AES-256-GCM")
dedupCmd.PersistentFlags().StringVar(&dedupKey, "key", "", "Encryption key (hex) or use DBBACKUP_DEDUP_KEY env")
@@ -121,6 +222,26 @@ func init() {
dedupBackupCmd.Flags().StringVar(&dedupDBType, "db-type", "", "Database type (postgres/mysql)")
dedupBackupCmd.Flags().StringVar(&dedupDBName, "db-name", "", "Database name")
dedupBackupCmd.Flags().StringVar(&dedupDBHost, "db-host", "", "Database host")
dedupBackupCmd.Flags().BoolVar(&dedupDecompress, "decompress-input", false, "Auto-decompress gzip input before chunking (improves dedup ratio)")
// Prune flags
dedupPruneCmd.Flags().IntVar(&pruneKeepLast, "keep-last", 0, "Keep the last N backups")
dedupPruneCmd.Flags().IntVar(&pruneKeepDaily, "keep-daily", 0, "Keep N daily backups")
dedupPruneCmd.Flags().IntVar(&pruneKeepWeekly, "keep-weekly", 0, "Keep N weekly backups")
dedupPruneCmd.Flags().BoolVar(&pruneDryRun, "dry-run", false, "Show what would be deleted without actually deleting")
// backup-db flags
dedupBackupDBCmd.Flags().StringVarP(&dedupDBType, "db-type", "d", "", "Database type (postgres/mariadb/mysql)")
dedupBackupDBCmd.Flags().StringVar(&backupDBDatabase, "database", "", "Database name to backup")
dedupBackupDBCmd.Flags().StringVar(&dedupDBHost, "host", "localhost", "Database host")
dedupBackupDBCmd.Flags().StringVarP(&backupDBUser, "user", "u", "", "Database user")
dedupBackupDBCmd.Flags().StringVarP(&backupDBPassword, "password", "p", "", "Database password (or use env)")
dedupBackupDBCmd.MarkFlagRequired("db-type")
dedupBackupDBCmd.MarkFlagRequired("database")
// Metrics flags
dedupMetricsCmd.Flags().StringVarP(&dedupMetricsOutput, "output", "o", "", "Output file path (default: stdout)")
dedupMetricsCmd.Flags().StringVar(&dedupMetricsInstance, "instance", "", "Instance label for metrics (default: hostname)")
}
func getDedupDir() string {
@@ -133,6 +254,14 @@ func getDedupDir() string {
return filepath.Join(os.Getenv("HOME"), "db_backups", "dedup")
}
func getIndexDBPath() string {
if dedupIndexDB != "" {
return dedupIndexDB
}
// Default: same directory as chunks (may have issues on NFS/CIFS)
return filepath.Join(getDedupDir(), "chunks.db")
}
func getEncryptionKey() string {
if dedupKey != "" {
return dedupKey
@@ -155,6 +284,25 @@ func runDedupBackup(cmd *cobra.Command, args []string) error {
return fmt.Errorf("failed to stat input file: %w", err)
}
// Check for compressed input and warn/handle
var reader io.Reader = file
isGzipped := strings.HasSuffix(strings.ToLower(inputPath), ".gz")
if isGzipped && !dedupDecompress {
fmt.Printf("Warning: Input appears to be gzip compressed (.gz)\n")
fmt.Printf(" Compressed data typically has poor dedup ratios (<10%%).\n")
fmt.Printf(" Consider using --decompress-input for better deduplication.\n\n")
}
if isGzipped && dedupDecompress {
fmt.Printf("Auto-decompressing gzip input for better dedup ratio...\n")
gzReader, err := gzip.NewReader(file)
if err != nil {
return fmt.Errorf("failed to decompress gzip input: %w", err)
}
defer gzReader.Close()
reader = gzReader
}
// Setup dedup storage
basePath := getDedupDir()
encKey := ""
@@ -179,7 +327,7 @@ func runDedupBackup(cmd *cobra.Command, args []string) error {
return fmt.Errorf("failed to open manifest store: %w", err)
}
index, err := dedup.NewChunkIndex(basePath)
index, err := dedup.NewChunkIndexAt(getIndexDBPath())
if err != nil {
return fmt.Errorf("failed to open chunk index: %w", err)
}
@@ -193,22 +341,43 @@ func runDedupBackup(cmd *cobra.Command, args []string) error {
} else {
base := filepath.Base(inputPath)
ext := filepath.Ext(base)
// Remove .gz extension if decompressing
if isGzipped && dedupDecompress {
base = strings.TrimSuffix(base, ext)
ext = filepath.Ext(base)
}
manifestID += "_" + strings.TrimSuffix(base, ext)
}
fmt.Printf("Creating deduplicated backup: %s\n", manifestID)
fmt.Printf("Input: %s (%s)\n", inputPath, formatBytes(info.Size()))
if isGzipped && dedupDecompress {
fmt.Printf("Mode: Decompressing before chunking\n")
}
fmt.Printf("Store: %s\n", basePath)
if dedupIndexDB != "" {
fmt.Printf("Index: %s\n", getIndexDBPath())
}
// Hash the entire file for verification
file.Seek(0, 0)
// For decompressed input, we can't seek - use TeeReader to hash while chunking
h := sha256.New()
io.Copy(h, file)
fileHash := hex.EncodeToString(h.Sum(nil))
file.Seek(0, 0)
var chunkReader io.Reader
if isGzipped && dedupDecompress {
// Can't seek on gzip stream - hash will be computed inline
chunkReader = io.TeeReader(reader, h)
} else {
// Regular file - hash first, then reset and chunk
file.Seek(0, 0)
io.Copy(h, file)
file.Seek(0, 0)
chunkReader = file
h = sha256.New() // Reset for inline hashing
chunkReader = io.TeeReader(file, h)
}
// Chunk the file
chunker := dedup.NewChunker(file, dedup.DefaultChunkerConfig())
chunker := dedup.NewChunker(chunkReader, dedup.DefaultChunkerConfig())
var chunks []dedup.ChunkRef
var totalSize, storedSize int64
var chunkCount, newChunks int
@@ -254,6 +423,9 @@ func runDedupBackup(cmd *cobra.Command, args []string) error {
duration := time.Since(startTime)
// Get final hash (computed inline via TeeReader)
fileHash := hex.EncodeToString(h.Sum(nil))
// Calculate dedup ratio
dedupRatio := 0.0
if totalSize > 0 {
@@ -277,6 +449,7 @@ func runDedupBackup(cmd *cobra.Command, args []string) error {
Encrypted: dedupEncrypt,
Compressed: dedupCompress,
SHA256: fileHash,
Decompressed: isGzipped && dedupDecompress, // Track if we decompressed
}
if err := manifestStore.Save(manifest); err != nil {
@@ -451,8 +624,12 @@ func runDedupStats(cmd *cobra.Command, args []string) error {
fmt.Printf("Unique chunks: %d\n", stats.TotalChunks)
fmt.Printf("Total raw size: %s\n", formatBytes(stats.TotalSizeRaw))
fmt.Printf("Stored size: %s\n", formatBytes(stats.TotalSizeStored))
fmt.Printf("Dedup ratio: %.1f%%\n", stats.DedupRatio*100)
fmt.Printf("Space saved: %s\n", formatBytes(stats.TotalSizeRaw-stats.TotalSizeStored))
fmt.Printf("\n")
fmt.Printf("Backup Statistics (accurate dedup calculation):\n")
fmt.Printf(" Total backed up: %s (across all backups)\n", formatBytes(stats.TotalBackupSize))
fmt.Printf(" New data stored: %s\n", formatBytes(stats.TotalNewData))
fmt.Printf(" Space saved: %s\n", formatBytes(stats.SpaceSaved))
fmt.Printf(" Dedup ratio: %.1f%%\n", stats.DedupRatio*100)
if storeStats != nil {
fmt.Printf("Disk usage: %s\n", formatBytes(storeStats.TotalSize))
@@ -577,3 +754,531 @@ func truncateStr(s string, max int) string {
}
return s[:max-3] + "..."
}
func runDedupVerify(cmd *cobra.Command, args []string) error {
basePath := getDedupDir()
store, err := dedup.NewChunkStore(dedup.StoreConfig{
BasePath: basePath,
Compress: dedupCompress,
})
if err != nil {
return fmt.Errorf("failed to open chunk store: %w", err)
}
manifestStore, err := dedup.NewManifestStore(basePath)
if err != nil {
return fmt.Errorf("failed to open manifest store: %w", err)
}
index, err := dedup.NewChunkIndexAt(getIndexDBPath())
if err != nil {
return fmt.Errorf("failed to open chunk index: %w", err)
}
defer index.Close()
var manifests []*dedup.Manifest
if len(args) > 0 {
// Verify specific manifest
m, err := manifestStore.Load(args[0])
if err != nil {
return fmt.Errorf("failed to load manifest: %w", err)
}
manifests = []*dedup.Manifest{m}
} else {
// Verify all manifests
manifests, err = manifestStore.ListAll()
if err != nil {
return fmt.Errorf("failed to list manifests: %w", err)
}
}
if len(manifests) == 0 {
fmt.Println("No manifests to verify.")
return nil
}
fmt.Printf("Verifying %d backup(s)...\n\n", len(manifests))
var totalChunks, missingChunks, corruptChunks int
var allOK = true
for _, m := range manifests {
fmt.Printf("Verifying: %s (%d chunks)\n", m.ID, m.ChunkCount)
var missing, corrupt int
seenHashes := make(map[string]bool)
for i, ref := range m.Chunks {
if seenHashes[ref.Hash] {
continue // Already verified this chunk
}
seenHashes[ref.Hash] = true
totalChunks++
// Check if chunk exists
if !store.Has(ref.Hash) {
missing++
missingChunks++
if missing <= 5 {
fmt.Printf(" [MISSING] chunk %d: %s\n", i, ref.Hash[:16])
}
continue
}
// Verify chunk hash by reading it
chunk, err := store.Get(ref.Hash)
if err != nil {
corrupt++
corruptChunks++
if corrupt <= 5 {
fmt.Printf(" [CORRUPT] chunk %d: %s - %v\n", i, ref.Hash[:16], err)
}
continue
}
// Verify size
if chunk.Length != ref.Length {
corrupt++
corruptChunks++
if corrupt <= 5 {
fmt.Printf(" [SIZE MISMATCH] chunk %d: expected %d, got %d\n", i, ref.Length, chunk.Length)
}
}
}
if missing > 0 || corrupt > 0 {
allOK = false
fmt.Printf(" Result: FAILED (%d missing, %d corrupt)\n", missing, corrupt)
if missing > 5 || corrupt > 5 {
fmt.Printf(" ... and %d more errors\n", (missing+corrupt)-10)
}
} else {
fmt.Printf(" Result: OK (%d unique chunks verified)\n", len(seenHashes))
// Update verified timestamp
m.VerifiedAt = time.Now()
manifestStore.Save(m)
index.UpdateManifestVerified(m.ID, m.VerifiedAt)
}
fmt.Println()
}
fmt.Println("========================================")
if allOK {
fmt.Printf("All %d backup(s) verified successfully!\n", len(manifests))
fmt.Printf("Total unique chunks checked: %d\n", totalChunks)
} else {
fmt.Printf("Verification FAILED!\n")
fmt.Printf("Missing chunks: %d\n", missingChunks)
fmt.Printf("Corrupt chunks: %d\n", corruptChunks)
return fmt.Errorf("verification failed: %d missing, %d corrupt chunks", missingChunks, corruptChunks)
}
return nil
}
func runDedupPrune(cmd *cobra.Command, args []string) error {
if pruneKeepLast == 0 && pruneKeepDaily == 0 && pruneKeepWeekly == 0 {
return fmt.Errorf("at least one of --keep-last, --keep-daily, or --keep-weekly must be specified")
}
basePath := getDedupDir()
manifestStore, err := dedup.NewManifestStore(basePath)
if err != nil {
return fmt.Errorf("failed to open manifest store: %w", err)
}
index, err := dedup.NewChunkIndexAt(getIndexDBPath())
if err != nil {
return fmt.Errorf("failed to open chunk index: %w", err)
}
defer index.Close()
manifests, err := manifestStore.ListAll()
if err != nil {
return fmt.Errorf("failed to list manifests: %w", err)
}
if len(manifests) == 0 {
fmt.Println("No backups to prune.")
return nil
}
// Group by database name
byDatabase := make(map[string][]*dedup.Manifest)
for _, m := range manifests {
key := m.DatabaseName
if key == "" {
key = "_default"
}
byDatabase[key] = append(byDatabase[key], m)
}
var toDelete []*dedup.Manifest
for dbName, dbManifests := range byDatabase {
// Already sorted by time (newest first from ListAll)
kept := make(map[string]bool)
var keepReasons = make(map[string]string)
// Keep last N
if pruneKeepLast > 0 {
for i := 0; i < pruneKeepLast && i < len(dbManifests); i++ {
kept[dbManifests[i].ID] = true
keepReasons[dbManifests[i].ID] = "keep-last"
}
}
// Keep daily (one per day)
if pruneKeepDaily > 0 {
seenDays := make(map[string]bool)
count := 0
for _, m := range dbManifests {
day := m.CreatedAt.Format("2006-01-02")
if !seenDays[day] {
seenDays[day] = true
if count < pruneKeepDaily {
kept[m.ID] = true
if keepReasons[m.ID] == "" {
keepReasons[m.ID] = "keep-daily"
}
count++
}
}
}
}
// Keep weekly (one per week)
if pruneKeepWeekly > 0 {
seenWeeks := make(map[string]bool)
count := 0
for _, m := range dbManifests {
year, week := m.CreatedAt.ISOWeek()
weekKey := fmt.Sprintf("%d-W%02d", year, week)
if !seenWeeks[weekKey] {
seenWeeks[weekKey] = true
if count < pruneKeepWeekly {
kept[m.ID] = true
if keepReasons[m.ID] == "" {
keepReasons[m.ID] = "keep-weekly"
}
count++
}
}
}
}
if dbName != "_default" {
fmt.Printf("\nDatabase: %s\n", dbName)
} else {
fmt.Printf("\nUnnamed backups:\n")
}
for _, m := range dbManifests {
if kept[m.ID] {
fmt.Printf(" [KEEP] %s (%s) - %s\n", m.ID, m.CreatedAt.Format("2006-01-02"), keepReasons[m.ID])
} else {
fmt.Printf(" [DELETE] %s (%s)\n", m.ID, m.CreatedAt.Format("2006-01-02"))
toDelete = append(toDelete, m)
}
}
}
if len(toDelete) == 0 {
fmt.Printf("\nNo backups to prune (all match retention policy).\n")
return nil
}
fmt.Printf("\n%d backup(s) will be deleted.\n", len(toDelete))
if pruneDryRun {
fmt.Println("\n[DRY RUN] No changes made. Remove --dry-run to actually delete.")
return nil
}
// Actually delete
for _, m := range toDelete {
// Decrement chunk references
for _, ref := range m.Chunks {
index.DecrementRef(ref.Hash)
}
if err := manifestStore.Delete(m.ID); err != nil {
log.Warn("Failed to delete manifest", "id", m.ID, "error", err)
}
index.RemoveManifest(m.ID)
}
fmt.Printf("\nDeleted %d backup(s).\n", len(toDelete))
fmt.Println("Run 'dbbackup dedup gc' to reclaim space from unreferenced chunks.")
return nil
}
func runDedupBackupDB(cmd *cobra.Command, args []string) error {
dbType := strings.ToLower(dedupDBType)
dbName := backupDBDatabase
// Validate db type
var dumpCmd string
var dumpArgs []string
switch dbType {
case "postgres", "postgresql", "pg":
dbType = "postgres"
dumpCmd = "pg_dump"
dumpArgs = []string{"-Fc"} // Custom format for better compression
if dedupDBHost != "" && dedupDBHost != "localhost" {
dumpArgs = append(dumpArgs, "-h", dedupDBHost)
}
if backupDBUser != "" {
dumpArgs = append(dumpArgs, "-U", backupDBUser)
}
dumpArgs = append(dumpArgs, dbName)
case "mysql":
dumpCmd = "mysqldump"
dumpArgs = []string{
"--single-transaction",
"--routines",
"--triggers",
"--events",
}
if dedupDBHost != "" {
dumpArgs = append(dumpArgs, "-h", dedupDBHost)
}
if backupDBUser != "" {
dumpArgs = append(dumpArgs, "-u", backupDBUser)
}
if backupDBPassword != "" {
dumpArgs = append(dumpArgs, "-p"+backupDBPassword)
}
dumpArgs = append(dumpArgs, dbName)
case "mariadb":
dumpCmd = "mariadb-dump"
// Fall back to mysqldump if mariadb-dump not available
if _, err := exec.LookPath(dumpCmd); err != nil {
dumpCmd = "mysqldump"
}
dumpArgs = []string{
"--single-transaction",
"--routines",
"--triggers",
"--events",
}
if dedupDBHost != "" {
dumpArgs = append(dumpArgs, "-h", dedupDBHost)
}
if backupDBUser != "" {
dumpArgs = append(dumpArgs, "-u", backupDBUser)
}
if backupDBPassword != "" {
dumpArgs = append(dumpArgs, "-p"+backupDBPassword)
}
dumpArgs = append(dumpArgs, dbName)
default:
return fmt.Errorf("unsupported database type: %s (use postgres, mysql, or mariadb)", dbType)
}
// Verify dump command exists
if _, err := exec.LookPath(dumpCmd); err != nil {
return fmt.Errorf("%s not found in PATH: %w", dumpCmd, err)
}
// Setup dedup storage
basePath := getDedupDir()
encKey := ""
if dedupEncrypt {
encKey = getEncryptionKey()
if encKey == "" {
return fmt.Errorf("encryption enabled but no key provided (use --key or DBBACKUP_DEDUP_KEY)")
}
}
store, err := dedup.NewChunkStore(dedup.StoreConfig{
BasePath: basePath,
Compress: dedupCompress,
EncryptionKey: encKey,
})
if err != nil {
return fmt.Errorf("failed to open chunk store: %w", err)
}
manifestStore, err := dedup.NewManifestStore(basePath)
if err != nil {
return fmt.Errorf("failed to open manifest store: %w", err)
}
index, err := dedup.NewChunkIndexAt(getIndexDBPath())
if err != nil {
return fmt.Errorf("failed to open chunk index: %w", err)
}
defer index.Close()
// Generate manifest ID
now := time.Now()
manifestID := now.Format("2006-01-02_150405") + "_" + dbName
fmt.Printf("Creating deduplicated database backup: %s\n", manifestID)
fmt.Printf("Database: %s (%s)\n", dbName, dbType)
fmt.Printf("Command: %s %s\n", dumpCmd, strings.Join(dumpArgs, " "))
fmt.Printf("Store: %s\n", basePath)
// Start the dump command
dumpExec := exec.Command(dumpCmd, dumpArgs...)
// Set password via environment for postgres
if dbType == "postgres" && backupDBPassword != "" {
dumpExec.Env = append(os.Environ(), "PGPASSWORD="+backupDBPassword)
}
stdout, err := dumpExec.StdoutPipe()
if err != nil {
return fmt.Errorf("failed to get stdout pipe: %w", err)
}
stderr, err := dumpExec.StderrPipe()
if err != nil {
return fmt.Errorf("failed to get stderr pipe: %w", err)
}
if err := dumpExec.Start(); err != nil {
return fmt.Errorf("failed to start %s: %w", dumpCmd, err)
}
// Hash while chunking using TeeReader
h := sha256.New()
reader := io.TeeReader(stdout, h)
// Chunk the stream directly
chunker := dedup.NewChunker(reader, dedup.DefaultChunkerConfig())
var chunks []dedup.ChunkRef
var totalSize, storedSize int64
var chunkCount, newChunks int
startTime := time.Now()
for {
chunk, err := chunker.Next()
if err == io.EOF {
break
}
if err != nil {
return fmt.Errorf("chunking failed: %w", err)
}
chunkCount++
totalSize += int64(chunk.Length)
// Store chunk (deduplication happens here)
isNew, err := store.Put(chunk)
if err != nil {
return fmt.Errorf("failed to store chunk: %w", err)
}
if isNew {
newChunks++
storedSize += int64(chunk.Length)
index.AddChunk(chunk.Hash, chunk.Length, chunk.Length)
}
chunks = append(chunks, dedup.ChunkRef{
Hash: chunk.Hash,
Offset: chunk.Offset,
Length: chunk.Length,
})
if chunkCount%1000 == 0 {
fmt.Printf("\r Processed %d chunks, %d new, %s...", chunkCount, newChunks, formatBytes(totalSize))
}
}
// Read any stderr
stderrBytes, _ := io.ReadAll(stderr)
// Wait for command to complete
if err := dumpExec.Wait(); err != nil {
return fmt.Errorf("%s failed: %w\nstderr: %s", dumpCmd, err, string(stderrBytes))
}
duration := time.Since(startTime)
fileHash := hex.EncodeToString(h.Sum(nil))
// Calculate dedup ratio
dedupRatio := 0.0
if totalSize > 0 {
dedupRatio = 1.0 - float64(storedSize)/float64(totalSize)
}
// Create manifest
manifest := &dedup.Manifest{
ID: manifestID,
Name: dedupName,
CreatedAt: now,
DatabaseType: dbType,
DatabaseName: dbName,
DatabaseHost: dedupDBHost,
Chunks: chunks,
OriginalSize: totalSize,
StoredSize: storedSize,
ChunkCount: chunkCount,
NewChunks: newChunks,
DedupRatio: dedupRatio,
Encrypted: dedupEncrypt,
Compressed: dedupCompress,
SHA256: fileHash,
}
if err := manifestStore.Save(manifest); err != nil {
return fmt.Errorf("failed to save manifest: %w", err)
}
if err := index.AddManifest(manifest); err != nil {
log.Warn("Failed to index manifest", "error", err)
}
fmt.Printf("\r \r")
fmt.Printf("\nBackup complete!\n")
fmt.Printf(" Manifest: %s\n", manifestID)
fmt.Printf(" Chunks: %d total, %d new\n", chunkCount, newChunks)
fmt.Printf(" Dump size: %s\n", formatBytes(totalSize))
fmt.Printf(" Stored: %s (new data)\n", formatBytes(storedSize))
fmt.Printf(" Dedup ratio: %.1f%%\n", dedupRatio*100)
fmt.Printf(" Duration: %s\n", duration.Round(time.Millisecond))
fmt.Printf(" Throughput: %s/s\n", formatBytes(int64(float64(totalSize)/duration.Seconds())))
return nil
}
func runDedupMetrics(cmd *cobra.Command, args []string) error {
basePath := getDedupDir()
indexPath := getIndexDBPath()
instance := dedupMetricsInstance
if instance == "" {
hostname, _ := os.Hostname()
instance = hostname
}
metrics, err := dedup.CollectMetrics(basePath, indexPath)
if err != nil {
return fmt.Errorf("failed to collect metrics: %w", err)
}
output := dedup.FormatPrometheusMetrics(metrics, instance)
if dedupMetricsOutput != "" {
if err := dedup.WritePrometheusTextfile(dedupMetricsOutput, instance, basePath, indexPath); err != nil {
return fmt.Errorf("failed to write metrics: %w", err)
}
fmt.Printf("Wrote metrics to %s\n", dedupMetricsOutput)
} else {
fmt.Print(output)
}
return nil
}

View File

@@ -37,9 +37,9 @@ var (
restoreSaveDebugLog string // Path to save debug log on failure
// Diagnose flags
diagnoseJSON bool
diagnoseDeep bool
diagnoseKeepTemp bool
diagnoseJSON bool
diagnoseDeep bool
diagnoseKeepTemp bool
// Encryption flags
restoreEncryptionKeyFile string
@@ -565,7 +565,7 @@ func runRestoreSingle(cmd *cobra.Command, args []string) error {
// Create restore engine
engine := restore.New(cfg, log, db)
// Enable debug logging if requested
if restoreSaveDebugLog != "" {
engine.SetDebugLogPath(restoreSaveDebugLog)
@@ -589,15 +589,15 @@ func runRestoreSingle(cmd *cobra.Command, args []string) error {
// Run pre-restore diagnosis if requested
if restoreDiagnose {
log.Info("[DIAG] Running pre-restore diagnosis...")
diagnoser := restore.NewDiagnoser(log, restoreVerbose)
result, err := diagnoser.DiagnoseFile(archivePath)
if err != nil {
return fmt.Errorf("diagnosis failed: %w", err)
}
diagnoser.PrintDiagnosis(result)
if !result.IsValid {
log.Error("[FAIL] Pre-restore diagnosis found issues")
if result.IsTruncated {
@@ -607,7 +607,7 @@ func runRestoreSingle(cmd *cobra.Command, args []string) error {
log.Error(" The backup file appears to be CORRUPTED")
}
fmt.Println("\nUse --force to attempt restore anyway.")
if !restoreForce {
return fmt.Errorf("aborting restore due to backup file issues")
}
@@ -785,7 +785,7 @@ func runRestoreCluster(cmd *cobra.Command, args []string) error {
// Create restore engine
engine := restore.New(cfg, log, db)
// Enable debug logging if requested
if restoreSaveDebugLog != "" {
engine.SetDebugLogPath(restoreSaveDebugLog)
@@ -830,7 +830,7 @@ func runRestoreCluster(cmd *cobra.Command, args []string) error {
// Run pre-restore diagnosis if requested
if restoreDiagnose {
log.Info("[DIAG] Running pre-restore diagnosis...")
// Create temp directory for extraction in configured WorkDir
workDir := cfg.GetEffectiveWorkDir()
diagTempDir, err := os.MkdirTemp(workDir, "dbbackup-diagnose-*")
@@ -838,13 +838,13 @@ func runRestoreCluster(cmd *cobra.Command, args []string) error {
return fmt.Errorf("failed to create temp directory for diagnosis in %s: %w", workDir, err)
}
defer os.RemoveAll(diagTempDir)
diagnoser := restore.NewDiagnoser(log, restoreVerbose)
results, err := diagnoser.DiagnoseClusterDumps(archivePath, diagTempDir)
if err != nil {
return fmt.Errorf("diagnosis failed: %w", err)
}
// Check for any invalid dumps
var invalidDumps []string
for _, result := range results {
@@ -853,7 +853,7 @@ func runRestoreCluster(cmd *cobra.Command, args []string) error {
diagnoser.PrintDiagnosis(result)
}
}
if len(invalidDumps) > 0 {
log.Error("[FAIL] Pre-restore diagnosis found issues",
"invalid_dumps", len(invalidDumps),
@@ -864,7 +864,7 @@ func runRestoreCluster(cmd *cobra.Command, args []string) error {
}
fmt.Println("\nRun 'dbbackup restore diagnose <archive> --deep' for full details.")
fmt.Println("Use --force to attempt restore anyway.")
if !restoreForce {
return fmt.Errorf("aborting restore due to %d invalid dump(s)", len(invalidDumps))
}

View File

@@ -0,0 +1,1303 @@
{
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": {
"type": "grafana",
"uid": "-- Grafana --"
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"type": "dashboard"
}
]
},
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 0,
"id": null,
"links": [],
"liveNow": false,
"panels": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [
{
"options": {
"0": {
"color": "red",
"index": 1,
"text": "FAILED"
},
"1": {
"color": "green",
"index": 0,
"text": "SUCCESS"
}
},
"type": "value"
}
],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "red",
"value": null
},
{
"color": "green",
"value": 1
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 4,
"w": 6,
"x": 0,
"y": 0
},
"id": 1,
"options": {
"colorMode": "background",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"textMode": "auto"
},
"pluginVersion": "10.2.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_rpo_seconds{instance=~\"$instance\"} < 86400",
"legendFormat": "{{database}}",
"range": true,
"refId": "A"
}
],
"title": "Last Backup Status",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 43200
},
{
"color": "red",
"value": 86400
}
]
},
"unit": "s"
},
"overrides": []
},
"gridPos": {
"h": 4,
"w": 6,
"x": 6,
"y": 0
},
"id": 2,
"options": {
"colorMode": "value",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"textMode": "auto"
},
"pluginVersion": "10.2.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_rpo_seconds{instance=~\"$instance\"}",
"legendFormat": "{{database}}",
"range": true,
"refId": "A"
}
],
"title": "Time Since Last Backup",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 4,
"w": 6,
"x": 12,
"y": 0
},
"id": 3,
"options": {
"colorMode": "value",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"textMode": "auto"
},
"pluginVersion": "10.2.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_backup_total{instance=~\"$instance\", status=\"success\"}",
"legendFormat": "{{database}}",
"range": true,
"refId": "A"
}
],
"title": "Total Successful Backups",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 1
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 4,
"w": 6,
"x": 18,
"y": 0
},
"id": 4,
"options": {
"colorMode": "value",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"textMode": "auto"
},
"pluginVersion": "10.2.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_backup_total{instance=~\"$instance\", status=\"failure\"}",
"legendFormat": "{{database}}",
"range": true,
"refId": "A"
}
],
"title": "Total Failed Backups",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "line"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 86400
}
]
},
"unit": "s"
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 4
},
"id": 5,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_rpo_seconds{instance=~\"$instance\"}",
"legendFormat": "{{instance}} - {{database}}",
"range": true,
"refId": "A"
}
],
"title": "RPO Over Time",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "bars",
"fillOpacity": 100,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
},
"unit": "bytes"
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 4
},
"id": 6,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_last_backup_size_bytes{instance=~\"$instance\"}",
"legendFormat": "{{instance}} - {{database}}",
"range": true,
"refId": "A"
}
],
"title": "Backup Size",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
},
"unit": "s"
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 12
},
"id": 7,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_last_backup_duration_seconds{instance=~\"$instance\"}",
"legendFormat": "{{instance}} - {{database}}",
"range": true,
"refId": "A"
}
],
"title": "Backup Duration",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"custom": {
"align": "auto",
"cellOptions": {
"type": "auto"
},
"inspect": false
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
}
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "Status"
},
"properties": [
{
"id": "mappings",
"value": [
{
"options": {
"0": {
"color": "red",
"index": 1,
"text": "FAILED"
},
"1": {
"color": "green",
"index": 0,
"text": "SUCCESS"
}
},
"type": "value"
}
]
},
{
"id": "custom.cellOptions",
"value": {
"mode": "basic",
"type": "color-background"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "RPO"
},
"properties": [
{
"id": "unit",
"value": "s"
},
{
"id": "thresholds",
"value": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 43200
},
{
"color": "red",
"value": 86400
}
]
}
},
{
"id": "custom.cellOptions",
"value": {
"mode": "basic",
"type": "color-background"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "Size"
},
"properties": [
{
"id": "unit",
"value": "bytes"
}
]
}
]
},
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 12
},
"id": 8,
"options": {
"cellHeight": "sm",
"footer": {
"countRows": false,
"fields": "",
"reducer": [
"sum"
],
"show": false
},
"showHeader": true
},
"pluginVersion": "10.2.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_rpo_seconds{instance=~\"$instance\"} < 86400",
"format": "table",
"instant": true,
"legendFormat": "__auto",
"range": false,
"refId": "Status"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_rpo_seconds{instance=~\"$instance\"}",
"format": "table",
"hide": false,
"instant": true,
"legendFormat": "__auto",
"range": false,
"refId": "RPO"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_last_backup_size_bytes{instance=~\"$instance\"}",
"format": "table",
"hide": false,
"instant": true,
"legendFormat": "__auto",
"range": false,
"refId": "Size"
}
],
"title": "Backup Status Overview",
"transformations": [
{
"id": "joinByField",
"options": {
"byField": "database",
"mode": "outer"
}
},
{
"id": "organize",
"options": {
"excludeByName": {
"Time": true,
"Time 1": true,
"Time 2": true,
"Time 3": true,
"__name__": true,
"__name__ 1": true,
"__name__ 2": true,
"__name__ 3": true,
"instance 1": true,
"instance 2": true,
"instance 3": true,
"job": true,
"job 1": true,
"job 2": true,
"job 3": true
},
"indexByName": {},
"renameByName": {
"Value #RPO": "RPO",
"Value #Size": "Size",
"Value #Status": "Status",
"database": "Database",
"instance": "Instance"
}
}
}
],
"type": "table"
},
{
"collapsed": false,
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 30
},
"id": 100,
"panels": [],
"title": "Deduplication Statistics",
"type": "row"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "blue",
"value": null
}
]
},
"unit": "percentunit"
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 6,
"x": 0,
"y": 31
},
"id": 101,
"options": {
"colorMode": "background",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
"textMode": "auto"
},
"pluginVersion": "10.2.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_dedup_ratio{instance=~\"$instance\"}",
"legendFormat": "__auto",
"range": true,
"refId": "A"
}
],
"title": "Dedup Ratio",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
},
"unit": "bytes"
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 6,
"x": 6,
"y": 31
},
"id": 102,
"options": {
"colorMode": "value",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
"textMode": "auto"
},
"pluginVersion": "10.2.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_dedup_space_saved_bytes{instance=~\"$instance\"}",
"legendFormat": "__auto",
"range": true,
"refId": "A"
}
],
"title": "Space Saved",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "yellow",
"value": null
}
]
},
"unit": "bytes"
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 6,
"x": 12,
"y": 31
},
"id": 103,
"options": {
"colorMode": "value",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
"textMode": "auto"
},
"pluginVersion": "10.2.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_dedup_disk_usage_bytes{instance=~\"$instance\"}",
"legendFormat": "__auto",
"range": true,
"refId": "A"
}
],
"title": "Disk Usage",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "purple",
"value": null
}
]
},
"unit": "short"
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 6,
"x": 18,
"y": 31
},
"id": 104,
"options": {
"colorMode": "value",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
"textMode": "auto"
},
"pluginVersion": "10.2.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_dedup_chunks_total{instance=~\"$instance\"}",
"legendFormat": "__auto",
"range": true,
"refId": "A"
}
],
"title": "Total Chunks",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
},
"unit": "percentunit"
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 36
},
"id": 105,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"pluginVersion": "10.2.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_dedup_database_ratio{instance=~\"$instance\"}",
"legendFormat": "{{database}}",
"range": true,
"refId": "A"
}
],
"title": "Dedup Ratio by Database",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
},
"unit": "bytes"
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 36
},
"id": 106,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"pluginVersion": "10.2.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_dedup_space_saved_bytes{instance=~\"$instance\"}",
"legendFormat": "Space Saved",
"range": true,
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "dbbackup_dedup_disk_usage_bytes{instance=~\"$instance\"}",
"legendFormat": "Disk Usage",
"range": true,
"refId": "B"
}
],
"title": "Dedup Storage Over Time",
"type": "timeseries"
}
],
"refresh": "30s",
"schemaVersion": 38,
"tags": [
"dbbackup",
"backup",
"database",
"dedup"
],
"templating": {
"list": [
{
"current": {
"selected": false,
"text": "All",
"value": "$__all"
},
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"definition": "label_values(dbbackup_rpo_seconds, instance)",
"hide": 0,
"includeAll": true,
"label": "Instance",
"multi": true,
"name": "instance",
"options": [],
"query": {
"query": "label_values(dbbackup_rpo_seconds, instance)",
"refId": "StandardVariableQuery"
},
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 1,
"type": "query"
},
{
"hide": 2,
"name": "DS_PROMETHEUS",
"query": "prometheus",
"skipUrlSync": false,
"type": "datasource"
}
]
},
"time": {
"from": "now-24h",
"to": "now"
},
"timepicker": {},
"timezone": "",
"title": "DBBackup Overview",
"uid": "dbbackup-overview",
"version": 1,
"weekStart": ""
}

View File

@@ -3,7 +3,9 @@ package dedup
import (
"database/sql"
"fmt"
"os"
"path/filepath"
"strings"
"time"
_ "github.com/mattn/go-sqlite3" // SQLite driver
@@ -11,27 +13,67 @@ import (
// ChunkIndex provides fast chunk lookups using SQLite
type ChunkIndex struct {
db *sql.DB
db *sql.DB
dbPath string
}
// NewChunkIndex opens or creates a chunk index database
// NewChunkIndex opens or creates a chunk index database at the default location
func NewChunkIndex(basePath string) (*ChunkIndex, error) {
dbPath := filepath.Join(basePath, "chunks.db")
return NewChunkIndexAt(dbPath)
}
db, err := sql.Open("sqlite3", dbPath+"?_journal_mode=WAL&_synchronous=NORMAL")
// NewChunkIndexAt opens or creates a chunk index database at a specific path
// Use this to put the SQLite index on local storage when chunks are on NFS/CIFS
func NewChunkIndexAt(dbPath string) (*ChunkIndex, error) {
// Ensure parent directory exists
if err := os.MkdirAll(filepath.Dir(dbPath), 0700); err != nil {
return nil, fmt.Errorf("failed to create index directory: %w", err)
}
// Add busy_timeout to handle lock contention gracefully
db, err := sql.Open("sqlite3", dbPath+"?_journal_mode=WAL&_synchronous=NORMAL&_busy_timeout=5000")
if err != nil {
return nil, fmt.Errorf("failed to open chunk index: %w", err)
}
idx := &ChunkIndex{db: db}
// Test the connection and check for locking issues
if err := db.Ping(); err != nil {
db.Close()
if isNFSLockingError(err) {
return nil, fmt.Errorf("database locked (common on NFS/CIFS): %w\n\n"+
"HINT: Use --index-db to put the SQLite index on local storage:\n"+
" dbbackup dedup ... --index-db /var/lib/dbbackup/dedup-index.db", err)
}
return nil, fmt.Errorf("failed to connect to chunk index: %w", err)
}
idx := &ChunkIndex{db: db, dbPath: dbPath}
if err := idx.migrate(); err != nil {
db.Close()
if isNFSLockingError(err) {
return nil, fmt.Errorf("database locked during migration (common on NFS/CIFS): %w\n\n"+
"HINT: Use --index-db to put the SQLite index on local storage:\n"+
" dbbackup dedup ... --index-db /var/lib/dbbackup/dedup-index.db", err)
}
return nil, err
}
return idx, nil
}
// isNFSLockingError checks if an error is likely due to NFS/CIFS locking issues
func isNFSLockingError(err error) bool {
if err == nil {
return false
}
errStr := err.Error()
return strings.Contains(errStr, "database is locked") ||
strings.Contains(errStr, "SQLITE_BUSY") ||
strings.Contains(errStr, "cannot lock") ||
strings.Contains(errStr, "lock protocol")
}
// migrate creates the schema if needed
func (idx *ChunkIndex) migrate() error {
schema := `
@@ -166,15 +208,26 @@ func (idx *ChunkIndex) RemoveManifest(id string) error {
return err
}
// UpdateManifestVerified updates the verified timestamp for a manifest
func (idx *ChunkIndex) UpdateManifestVerified(id string, verifiedAt time.Time) error {
_, err := idx.db.Exec("UPDATE manifests SET verified_at = ? WHERE id = ?", verifiedAt, id)
return err
}
// IndexStats holds statistics about the dedup index
type IndexStats struct {
TotalChunks int64
TotalManifests int64
TotalSizeRaw int64 // Uncompressed, undeduplicated
TotalSizeStored int64 // On-disk after dedup+compression
DedupRatio float64
TotalSizeRaw int64 // Uncompressed, undeduplicated (per-chunk)
TotalSizeStored int64 // On-disk after dedup+compression (per-chunk)
DedupRatio float64 // Based on manifests (real dedup ratio)
OldestChunk time.Time
NewestChunk time.Time
// Manifest-based stats (accurate dedup calculation)
TotalBackupSize int64 // Sum of all backup original sizes
TotalNewData int64 // Sum of all new chunks stored
SpaceSaved int64 // Difference = what dedup saved
}
// Stats returns statistics about the index
@@ -206,8 +259,22 @@ func (idx *ChunkIndex) Stats() (*IndexStats, error) {
idx.db.QueryRow("SELECT COUNT(*) FROM manifests").Scan(&stats.TotalManifests)
if stats.TotalSizeRaw > 0 {
stats.DedupRatio = 1.0 - float64(stats.TotalSizeStored)/float64(stats.TotalSizeRaw)
// Calculate accurate dedup ratio from manifests
// Sum all backup original sizes and all new data stored
err = idx.db.QueryRow(`
SELECT
COALESCE(SUM(original_size), 0),
COALESCE(SUM(stored_size), 0)
FROM manifests
`).Scan(&stats.TotalBackupSize, &stats.TotalNewData)
if err != nil {
return nil, err
}
// Calculate real dedup ratio: how much data was deduplicated across all backups
if stats.TotalBackupSize > 0 {
stats.DedupRatio = 1.0 - float64(stats.TotalNewData)/float64(stats.TotalBackupSize)
stats.SpaceSaved = stats.TotalBackupSize - stats.TotalNewData
}
return stats, nil
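
The new NewChunkIndexAt entry point lets the SQLite index live on local disk while the chunk store sits on NFS/CIFS, as the lock-error hints suggest. A minimal usage sketch, assuming the /var/lib/dbbackup path from the hint text and the module-internal import path; only NewChunkIndexAt, Close and Stats are taken from the diff above:

package main

import (
	"fmt"
	"log"

	"dbbackup/internal/dedup" // internal package: in practice this runs inside the module
)

func main() {
	// Keep the SQLite index on local storage (path mirrors the --index-db hint);
	// the chunk store itself may live on NFS/CIFS.
	idx, err := dedup.NewChunkIndexAt("/var/lib/dbbackup/dedup-index.db")
	if err != nil {
		log.Fatalf("open chunk index: %v", err)
	}
	defer idx.Close()

	stats, err := idx.Stats()
	if err != nil {
		log.Fatalf("read index stats: %v", err)
	}
	fmt.Printf("chunks=%d manifests=%d dedup=%.1f%% saved=%d bytes\n",
		stats.TotalChunks, stats.TotalManifests, stats.DedupRatio*100, stats.SpaceSaved)
}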

View File

@@ -36,8 +36,9 @@ type Manifest struct {
DedupRatio float64 `json:"dedup_ratio"` // 1.0 = no dedup, 0.0 = 100% dedup
// Encryption and compression settings used
Encrypted bool `json:"encrypted"`
Compressed bool `json:"compressed"`
Encrypted bool `json:"encrypted"`
Compressed bool `json:"compressed"`
Decompressed bool `json:"decompressed,omitempty"` // Input was auto-decompressed before chunking
// Verification
SHA256 string `json:"sha256"` // Hash of reconstructed file

internal/dedup/metrics.go — new file, 235 lines
View File

@@ -0,0 +1,235 @@
package dedup
import (
"fmt"
"os"
"path/filepath"
"strings"
"time"
)
// DedupMetrics holds deduplication statistics for Prometheus
type DedupMetrics struct {
// Global stats
TotalChunks int64
TotalManifests int64
TotalBackupSize int64 // Sum of all backup original sizes
TotalNewData int64 // Sum of all new chunks stored
SpaceSaved int64 // Bytes saved by deduplication
DedupRatio float64 // Overall dedup ratio (0-1)
DiskUsage int64 // Actual bytes on disk
// Per-database stats
ByDatabase map[string]*DatabaseDedupMetrics
}
// DatabaseDedupMetrics holds per-database dedup stats
type DatabaseDedupMetrics struct {
Database string
BackupCount int
TotalSize int64
StoredSize int64
DedupRatio float64
LastBackupTime time.Time
LastVerified time.Time
}
// CollectMetrics gathers dedup statistics from the index and store
func CollectMetrics(basePath string, indexPath string) (*DedupMetrics, error) {
var idx *ChunkIndex
var err error
if indexPath != "" {
idx, err = NewChunkIndexAt(indexPath)
} else {
idx, err = NewChunkIndex(basePath)
}
if err != nil {
return nil, fmt.Errorf("failed to open chunk index: %w", err)
}
defer idx.Close()
store, err := NewChunkStore(StoreConfig{BasePath: basePath})
if err != nil {
return nil, fmt.Errorf("failed to open chunk store: %w", err)
}
// Get index stats
stats, err := idx.Stats()
if err != nil {
return nil, fmt.Errorf("failed to get index stats: %w", err)
}
// Get store stats
storeStats, err := store.Stats()
if err != nil {
return nil, fmt.Errorf("failed to get store stats: %w", err)
}
metrics := &DedupMetrics{
TotalChunks: stats.TotalChunks,
TotalManifests: stats.TotalManifests,
TotalBackupSize: stats.TotalBackupSize,
TotalNewData: stats.TotalNewData,
SpaceSaved: stats.SpaceSaved,
DedupRatio: stats.DedupRatio,
DiskUsage: storeStats.TotalSize,
ByDatabase: make(map[string]*DatabaseDedupMetrics),
}
// Collect per-database metrics from manifest store
manifestStore, err := NewManifestStore(basePath)
if err != nil {
return metrics, nil // Return partial metrics
}
manifests, err := manifestStore.ListAll()
if err != nil {
return metrics, nil // Return partial metrics
}
for _, m := range manifests {
dbKey := m.DatabaseName
if dbKey == "" {
dbKey = "_default"
}
dbMetrics, ok := metrics.ByDatabase[dbKey]
if !ok {
dbMetrics = &DatabaseDedupMetrics{
Database: dbKey,
}
metrics.ByDatabase[dbKey] = dbMetrics
}
dbMetrics.BackupCount++
dbMetrics.TotalSize += m.OriginalSize
dbMetrics.StoredSize += m.StoredSize
if m.CreatedAt.After(dbMetrics.LastBackupTime) {
dbMetrics.LastBackupTime = m.CreatedAt
}
if !m.VerifiedAt.IsZero() && m.VerifiedAt.After(dbMetrics.LastVerified) {
dbMetrics.LastVerified = m.VerifiedAt
}
}
// Calculate per-database dedup ratios
for _, dbMetrics := range metrics.ByDatabase {
if dbMetrics.TotalSize > 0 {
dbMetrics.DedupRatio = 1.0 - float64(dbMetrics.StoredSize)/float64(dbMetrics.TotalSize)
}
}
return metrics, nil
}
// WritePrometheusTextfile writes dedup metrics in Prometheus format
func WritePrometheusTextfile(path string, instance string, basePath string, indexPath string) error {
metrics, err := CollectMetrics(basePath, indexPath)
if err != nil {
return err
}
output := FormatPrometheusMetrics(metrics, instance)
// Atomic write
dir := filepath.Dir(path)
if err := os.MkdirAll(dir, 0755); err != nil {
return fmt.Errorf("failed to create directory: %w", err)
}
tmpPath := path + ".tmp"
if err := os.WriteFile(tmpPath, []byte(output), 0644); err != nil {
return fmt.Errorf("failed to write temp file: %w", err)
}
if err := os.Rename(tmpPath, path); err != nil {
os.Remove(tmpPath)
return fmt.Errorf("failed to rename temp file: %w", err)
}
return nil
}
// FormatPrometheusMetrics formats dedup metrics in Prometheus exposition format
func FormatPrometheusMetrics(m *DedupMetrics, instance string) string {
var b strings.Builder
now := time.Now().Unix()
b.WriteString("# DBBackup Deduplication Prometheus Metrics\n")
b.WriteString(fmt.Sprintf("# Generated at: %s\n", time.Now().Format(time.RFC3339)))
b.WriteString(fmt.Sprintf("# Instance: %s\n", instance))
b.WriteString("\n")
// Global dedup metrics
b.WriteString("# HELP dbbackup_dedup_chunks_total Total number of unique chunks stored\n")
b.WriteString("# TYPE dbbackup_dedup_chunks_total gauge\n")
b.WriteString(fmt.Sprintf("dbbackup_dedup_chunks_total{instance=%q} %d\n", instance, m.TotalChunks))
b.WriteString("\n")
b.WriteString("# HELP dbbackup_dedup_manifests_total Total number of deduplicated backups\n")
b.WriteString("# TYPE dbbackup_dedup_manifests_total gauge\n")
b.WriteString(fmt.Sprintf("dbbackup_dedup_manifests_total{instance=%q} %d\n", instance, m.TotalManifests))
b.WriteString("\n")
b.WriteString("# HELP dbbackup_dedup_backup_bytes_total Total logical size of all backups in bytes\n")
b.WriteString("# TYPE dbbackup_dedup_backup_bytes_total gauge\n")
b.WriteString(fmt.Sprintf("dbbackup_dedup_backup_bytes_total{instance=%q} %d\n", instance, m.TotalBackupSize))
b.WriteString("\n")
b.WriteString("# HELP dbbackup_dedup_stored_bytes_total Total unique data stored in bytes (after dedup)\n")
b.WriteString("# TYPE dbbackup_dedup_stored_bytes_total gauge\n")
b.WriteString(fmt.Sprintf("dbbackup_dedup_stored_bytes_total{instance=%q} %d\n", instance, m.TotalNewData))
b.WriteString("\n")
b.WriteString("# HELP dbbackup_dedup_space_saved_bytes Bytes saved by deduplication\n")
b.WriteString("# TYPE dbbackup_dedup_space_saved_bytes gauge\n")
b.WriteString(fmt.Sprintf("dbbackup_dedup_space_saved_bytes{instance=%q} %d\n", instance, m.SpaceSaved))
b.WriteString("\n")
b.WriteString("# HELP dbbackup_dedup_ratio Deduplication ratio (0-1, higher is better)\n")
b.WriteString("# TYPE dbbackup_dedup_ratio gauge\n")
b.WriteString(fmt.Sprintf("dbbackup_dedup_ratio{instance=%q} %.4f\n", instance, m.DedupRatio))
b.WriteString("\n")
b.WriteString("# HELP dbbackup_dedup_disk_usage_bytes Actual disk usage of chunk store\n")
b.WriteString("# TYPE dbbackup_dedup_disk_usage_bytes gauge\n")
b.WriteString(fmt.Sprintf("dbbackup_dedup_disk_usage_bytes{instance=%q} %d\n", instance, m.DiskUsage))
b.WriteString("\n")
// Per-database metrics
if len(m.ByDatabase) > 0 {
b.WriteString("# HELP dbbackup_dedup_database_backup_count Number of deduplicated backups per database\n")
b.WriteString("# TYPE dbbackup_dedup_database_backup_count gauge\n")
for _, db := range m.ByDatabase {
b.WriteString(fmt.Sprintf("dbbackup_dedup_database_backup_count{instance=%q,database=%q} %d\n",
instance, db.Database, db.BackupCount))
}
b.WriteString("\n")
b.WriteString("# HELP dbbackup_dedup_database_ratio Deduplication ratio per database (0-1)\n")
b.WriteString("# TYPE dbbackup_dedup_database_ratio gauge\n")
for _, db := range m.ByDatabase {
b.WriteString(fmt.Sprintf("dbbackup_dedup_database_ratio{instance=%q,database=%q} %.4f\n",
instance, db.Database, db.DedupRatio))
}
b.WriteString("\n")
b.WriteString("# HELP dbbackup_dedup_database_last_backup_timestamp Last backup timestamp per database\n")
b.WriteString("# TYPE dbbackup_dedup_database_last_backup_timestamp gauge\n")
for _, db := range m.ByDatabase {
if !db.LastBackupTime.IsZero() {
b.WriteString(fmt.Sprintf("dbbackup_dedup_database_last_backup_timestamp{instance=%q,database=%q} %d\n",
instance, db.Database, db.LastBackupTime.Unix()))
}
}
b.WriteString("\n")
}
b.WriteString("# HELP dbbackup_dedup_scrape_timestamp Unix timestamp when dedup metrics were collected\n")
b.WriteString("# TYPE dbbackup_dedup_scrape_timestamp gauge\n")
b.WriteString(fmt.Sprintf("dbbackup_dedup_scrape_timestamp{instance=%q} %d\n", instance, now))
return b.String()
}
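
WritePrometheusTextfile writes to a temp file and renames it, which fits the node_exporter textfile-collector pattern where a scrape must never see a half-written file. A hedged sketch of a post-backup hook calling it; the textfile directory and instance label are assumptions, while the function signature comes from the file above:

package main

import (
	"log"

	"dbbackup/internal/dedup"
)

func main() {
	// Assumed node_exporter textfile-collector directory; adjust to the local setup.
	const promFile = "/var/lib/node_exporter/textfile_collector/dbbackup_dedup.prom"

	// basePath is the dedup chunk store; an empty indexPath falls back to
	// <basePath>/chunks.db via NewChunkIndex, as CollectMetrics shows above.
	if err := dedup.WritePrometheusTextfile(promFile, "db01", "/backups/dedup", ""); err != nil {
		log.Fatalf("export dedup metrics: %v", err)
	}
}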

View File

@@ -53,16 +53,16 @@ type InstallOptions struct {
// ServiceStatus contains information about installed services
type ServiceStatus struct {
Installed bool
Enabled bool
Active bool
TimerEnabled bool
TimerActive bool
LastRun string
NextRun string
ServicePath string
TimerPath string
ExporterPath string
Installed bool
Enabled bool
Active bool
TimerEnabled bool
TimerActive bool
LastRun string
NextRun string
ServicePath string
TimerPath string
ExporterPath string
}
// NewInstaller creates a new Installer
@@ -188,7 +188,7 @@ func (i *Installer) Uninstall(ctx context.Context, instance string, purge bool)
if instance != "cluster" && instance != "" {
templateService := filepath.Join(i.unitDir, "dbbackup@.service")
templateTimer := filepath.Join(i.unitDir, "dbbackup@.timer")
// Only remove templates if no other instances are using them
if i.canRemoveTemplates() {
if !i.dryRun {
@@ -644,11 +644,11 @@ func (i *Installer) canRemoveTemplates() bool {
// Check if any dbbackup@*.service instances exist
pattern := filepath.Join(i.unitDir, "dbbackup@*.service")
matches, _ := filepath.Glob(pattern)
// Also check for running instances
cmd := exec.Command("systemctl", "list-units", "--type=service", "--all", "dbbackup@*")
output, _ := cmd.Output()
return len(matches) == 0 && !strings.Contains(string(output), "dbbackup@")
}

View File

@@ -33,8 +33,11 @@ RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
# Environment
EnvironmentFile=-/etc/dbbackup/env.d/cluster.conf
# Working directory (config is loaded from .dbbackup.conf here)
WorkingDirectory=/var/lib/dbbackup
# Execution - cluster backup (all databases)
ExecStart={{.BinaryPath}} backup cluster --config {{.ConfigPath}}
ExecStart={{.BinaryPath}} backup cluster --backup-dir {{.BackupDir}}
TimeoutStartSec={{.TimeoutSeconds}}
# Post-backup metrics export

View File

@@ -33,8 +33,11 @@ RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
# Environment
EnvironmentFile=-/etc/dbbackup/env.d/%i.conf
# Working directory (config is loaded from .dbbackup.conf here)
WorkingDirectory=/var/lib/dbbackup
# Execution
ExecStart={{.BinaryPath}} backup {{.BackupType}} %i --config {{.ConfigPath}}
ExecStart={{.BinaryPath}} backup {{.BackupType}} %i --backup-dir {{.BackupDir}}
TimeoutStartSec={{.TimeoutSeconds}}
# Post-backup metrics export
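
Both unit templates are rendered with Go's text/template; the placeholders imply a data struct carrying at least BinaryPath, BackupDir, BackupType and TimeoutSeconds. A minimal rendering sketch — the struct and values are illustrative, not the installer's actual types:

package main

import (
	"os"
	"text/template"
)

// unitData mirrors the placeholders in the unit templates; the installer's real
// struct may differ.
type unitData struct {
	BinaryPath     string
	BackupDir      string
	BackupType     string
	ConfigPath     string
	TimeoutSeconds int
}

func main() {
	// %i is systemd's instance specifier and passes through untouched.
	tmpl := template.Must(template.New("unit").Parse(
		"ExecStart={{.BinaryPath}} backup {{.BackupType}} %i --backup-dir {{.BackupDir}}\n" +
			"TimeoutStartSec={{.TimeoutSeconds}}\n"))

	_ = tmpl.Execute(os.Stdout, unitData{
		BinaryPath:     "/usr/local/bin/dbbackup",
		BackupDir:      "/var/backups/db",
		BackupType:     "single",
		TimeoutSeconds: 7200,
	})
}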

View File

@@ -414,14 +414,42 @@ func (d *Diagnoser) diagnoseSQLScript(filePath string, compressed bool, result *
// diagnoseClusterArchive analyzes a cluster tar.gz archive
func (d *Diagnoser) diagnoseClusterArchive(filePath string, result *DiagnoseResult) {
// First verify tar.gz integrity with timeout
// 5 minutes for large archives (multi-GB archives need more time)
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
// Calculate dynamic timeout based on file size
// Assume minimum 50 MB/s throughput for compressed archive listing
// Minimum 5 minutes, scales with file size
timeoutMinutes := 5
if result.FileSize > 0 {
// 1 minute per 3 GB, minimum 5 minutes, max 60 minutes
sizeGB := result.FileSize / (1024 * 1024 * 1024)
estimatedMinutes := int(sizeGB/3) + 5
if estimatedMinutes > timeoutMinutes {
timeoutMinutes = estimatedMinutes
}
if timeoutMinutes > 60 {
timeoutMinutes = 60
}
}
d.log.Info("Verifying cluster archive integrity",
"size", fmt.Sprintf("%.1f GB", float64(result.FileSize)/(1024*1024*1024)),
"timeout", fmt.Sprintf("%d min", timeoutMinutes))
ctx, cancel := context.WithTimeout(context.Background(), time.Duration(timeoutMinutes)*time.Minute)
defer cancel()
cmd := exec.CommandContext(ctx, "tar", "-tzf", filePath)
output, err := cmd.Output()
if err != nil {
// Check if it was a timeout
if ctx.Err() == context.DeadlineExceeded {
result.IsValid = false
result.Errors = append(result.Errors,
fmt.Sprintf("Verification timed out after %d minutes - archive is very large", timeoutMinutes),
"This does not necessarily mean the archive is corrupted",
"Manual verification: tar -tzf "+filePath+" | wc -l")
// Don't mark as corrupted on timeout
return
}
result.IsValid = false
result.IsCorrupted = true
result.Errors = append(result.Errors,
@@ -497,9 +525,22 @@ func (d *Diagnoser) diagnoseUnknown(filePath string, result *DiagnoseResult) {
// verifyWithPgRestore uses pg_restore --list to verify dump integrity
func (d *Diagnoser) verifyWithPgRestore(filePath string, result *DiagnoseResult) {
// Use timeout to prevent blocking on very large dump files
// 5 minutes for large dumps (multi-GB dumps with many tables)
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
// Calculate dynamic timeout based on file size
// pg_restore --list is usually faster than tar -tzf for the same size
timeoutMinutes := 5
if result.FileSize > 0 {
// 1 minute per 5 GB, minimum 5 minutes, max 30 minutes
sizeGB := result.FileSize / (1024 * 1024 * 1024)
estimatedMinutes := int(sizeGB/5) + 5
if estimatedMinutes > timeoutMinutes {
timeoutMinutes = estimatedMinutes
}
if timeoutMinutes > 30 {
timeoutMinutes = 30
}
}
ctx, cancel := context.WithTimeout(context.Background(), time.Duration(timeoutMinutes)*time.Minute)
defer cancel()
cmd := exec.CommandContext(ctx, "pg_restore", "--list", filePath)
@@ -554,14 +595,72 @@ func (d *Diagnoser) verifyWithPgRestore(filePath string, result *DiagnoseResult)
// DiagnoseClusterDumps extracts and diagnoses all dumps in a cluster archive
func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*DiagnoseResult, error) {
// First, try to list archive contents without extracting (fast check)
// 10 minutes for very large archives
listCtx, listCancel := context.WithTimeout(context.Background(), 10*time.Minute)
// Get archive size for dynamic timeout calculation
archiveInfo, err := os.Stat(archivePath)
if err != nil {
return nil, fmt.Errorf("cannot stat archive: %w", err)
}
// Dynamic timeout based on archive size: base 10 min + 1 min per 3 GB
// Large archives like 100+ GB need more time for tar -tzf
timeoutMinutes := 10
if archiveInfo.Size() > 0 {
sizeGB := archiveInfo.Size() / (1024 * 1024 * 1024)
estimatedMinutes := int(sizeGB/3) + 10
if estimatedMinutes > timeoutMinutes {
timeoutMinutes = estimatedMinutes
}
if timeoutMinutes > 120 { // Max 2 hours
timeoutMinutes = 120
}
}
d.log.Info("Listing cluster archive contents",
"size", fmt.Sprintf("%.1f GB", float64(archiveInfo.Size())/(1024*1024*1024)),
"timeout", fmt.Sprintf("%d min", timeoutMinutes))
listCtx, listCancel := context.WithTimeout(context.Background(), time.Duration(timeoutMinutes)*time.Minute)
defer listCancel()
listCmd := exec.CommandContext(listCtx, "tar", "-tzf", archivePath)
listOutput, listErr := listCmd.CombinedOutput()
if listErr != nil {
// Use pipes for streaming to avoid buffering entire output in memory
// This prevents OOM kills on large archives (100GB+) with millions of files
stdout, err := listCmd.StdoutPipe()
if err != nil {
return nil, fmt.Errorf("failed to create stdout pipe: %w", err)
}
var stderrBuf bytes.Buffer
listCmd.Stderr = &stderrBuf
if err := listCmd.Start(); err != nil {
return nil, fmt.Errorf("failed to start tar listing: %w", err)
}
// Stream the output line by line, only keeping relevant files
var files []string
scanner := bufio.NewScanner(stdout)
// Set a reasonable max line length (file paths shouldn't exceed this)
scanner.Buffer(make([]byte, 0, 4096), 1024*1024)
fileCount := 0
for scanner.Scan() {
fileCount++
line := scanner.Text()
// Only store dump files and important files, not every single file
if strings.HasSuffix(line, ".dump") || strings.HasSuffix(line, ".sql") ||
strings.HasSuffix(line, ".sql.gz") || strings.HasSuffix(line, ".json") ||
strings.Contains(line, "globals") || strings.Contains(line, "manifest") ||
strings.Contains(line, "metadata") || strings.HasSuffix(line, "/") {
files = append(files, line)
}
}
scanErr := scanner.Err()
listErr := listCmd.Wait()
if listErr != nil || scanErr != nil {
// Archive listing failed - likely corrupted
errResult := &DiagnoseResult{
FilePath: archivePath,
@@ -573,7 +672,12 @@ func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*Diagno
Details: &DiagnoseDetails{},
}
errOutput := string(listOutput)
errOutput := stderrBuf.String()
actualErr := listErr
if scanErr != nil {
actualErr = scanErr
}
if strings.Contains(errOutput, "unexpected end of file") ||
strings.Contains(errOutput, "Unexpected EOF") ||
strings.Contains(errOutput, "truncated") {
@@ -585,7 +689,7 @@ func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*Diagno
"Solution: Re-create the backup from source database")
} else {
errResult.Errors = append(errResult.Errors,
fmt.Sprintf("Cannot list archive contents: %v", listErr),
fmt.Sprintf("Cannot list archive contents: %v", actualErr),
fmt.Sprintf("tar error: %s", truncateString(errOutput, 300)),
"Run manually: tar -tzf "+archivePath+" 2>&1 | tail -50")
}
@@ -593,11 +697,10 @@ func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*Diagno
return []*DiagnoseResult{errResult}, nil
}
// Archive is listable - now check disk space before extraction
files := strings.Split(strings.TrimSpace(string(listOutput)), "\n")
d.log.Debug("Archive listing streamed successfully", "total_files", fileCount, "relevant_files", len(files))
// Check if we have enough disk space (estimate 4x archive size needed)
archiveInfo, _ := os.Stat(archivePath)
// archiveInfo already obtained at function start
requiredSpace := archiveInfo.Size() * 4
// Check temp directory space - try to extract metadata first
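
The size-scaled timeout now appears in four places (archive listing, pg_restore verify, cluster dump diagnosis, and the TUI safety checks further down) with different bases, per-GB divisors and caps. A sketch of what a shared helper could look like if the pattern were ever factored out; it does not exist in this diff and only restates the arithmetic above:

// Sketch only; package and name are illustrative.
package restore

import "time"

// dynamicTimeout returns baseMinutes plus one minute per perGB gigabytes of
// input, clamped to maxMinutes - the same arithmetic as the inline blocks above.
func dynamicTimeout(sizeBytes int64, baseMinutes, perGB, maxMinutes int) time.Duration {
	minutes := baseMinutes
	if sizeBytes > 0 && perGB > 0 {
		sizeGB := int(sizeBytes / (1024 * 1024 * 1024))
		if est := sizeGB/perGB + baseMinutes; est > minutes {
			minutes = est
		}
		if minutes > maxMinutes {
			minutes = maxMinutes
		}
	}
	return time.Duration(minutes) * time.Minute
}

// Example: dynamicTimeout(119<<30, 5, 3, 60) yields 44 minutes for a 119 GB
// archive, instead of the old fixed 5 minutes.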

View File

@@ -229,8 +229,14 @@ func containsSQLKeywords(content string) bool {
}
// CheckDiskSpace verifies sufficient disk space for restore
// Uses the effective work directory (WorkDir if set, otherwise BackupDir) since
// that's where extraction actually happens for large databases
func (s *Safety) CheckDiskSpace(archivePath string, multiplier float64) error {
return s.CheckDiskSpaceAt(archivePath, s.cfg.BackupDir, multiplier)
checkDir := s.cfg.GetEffectiveWorkDir()
if checkDir == "" {
checkDir = s.cfg.BackupDir
}
return s.CheckDiskSpaceAt(archivePath, checkDir, multiplier)
}
// CheckDiskSpaceAt verifies sufficient disk space at a specific directory

View File

@@ -11,40 +11,102 @@ import (
"dbbackup/internal/config"
"dbbackup/internal/logger"
"dbbackup/internal/restore"
)
// OperationState represents the current operation state
type OperationState int
const (
OpIdle OperationState = iota
OpVerifying
OpDeleting
)
// BackupManagerModel manages backup archives
type BackupManagerModel struct {
config *config.Config
logger logger.Logger
parent tea.Model
ctx context.Context
archives []ArchiveInfo
cursor int
loading bool
err error
message string
totalSize int64
freeSpace int64
config *config.Config
logger logger.Logger
parent tea.Model
ctx context.Context
archives []ArchiveInfo
cursor int
loading bool
err error
message string
totalSize int64
freeSpace int64
opState OperationState
opTarget string // Name of archive being operated on
spinnerFrame int
}
// NewBackupManager creates a new backup manager
func NewBackupManager(cfg *config.Config, log logger.Logger, parent tea.Model, ctx context.Context) BackupManagerModel {
return BackupManagerModel{
config: cfg,
logger: log,
parent: parent,
ctx: ctx,
loading: true,
config: cfg,
logger: log,
parent: parent,
ctx: ctx,
loading: true,
opState: OpIdle,
spinnerFrame: 0,
}
}
func (m BackupManagerModel) Init() tea.Cmd {
return loadArchives(m.config, m.logger)
return tea.Batch(loadArchives(m.config, m.logger), managerTickCmd())
}
// Tick for spinner animation
type managerTickMsg time.Time
func managerTickCmd() tea.Cmd {
return tea.Tick(100*time.Millisecond, func(t time.Time) tea.Msg {
return managerTickMsg(t)
})
}
// Verify result message
type verifyResultMsg struct {
archive string
valid bool
err error
details string
}
func (m BackupManagerModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
switch msg := msg.(type) {
case managerTickMsg:
// Update spinner frame
m.spinnerFrame = (m.spinnerFrame + 1) % len(spinnerFrames)
return m, managerTickCmd()
case verifyResultMsg:
m.opState = OpIdle
m.opTarget = ""
if msg.err != nil {
m.message = fmt.Sprintf("[-] Verify failed: %v", msg.err)
} else if msg.valid {
m.message = fmt.Sprintf("[+] %s: Valid - %s", msg.archive, msg.details)
// Update archive validity in list
for i := range m.archives {
if m.archives[i].Name == msg.archive {
m.archives[i].Valid = true
break
}
}
} else {
m.message = fmt.Sprintf("[-] %s: Invalid - %s", msg.archive, msg.details)
for i := range m.archives {
if m.archives[i].Name == msg.archive {
m.archives[i].Valid = false
break
}
}
}
return m, nil
case archiveListMsg:
m.loading = false
if msg.err != nil {
@@ -68,10 +130,24 @@ func (m BackupManagerModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
return m, nil
case tea.KeyMsg:
switch msg.String() {
case "ctrl+c", "q", "esc":
// Allow escape/cancel even during operations
if msg.String() == "ctrl+c" || msg.String() == "esc" || msg.String() == "q" {
if m.opState != OpIdle {
// Cancel current operation
m.opState = OpIdle
m.opTarget = ""
m.message = "Operation cancelled"
return m, nil
}
return m.parent, nil
}
// Block other input during operations
if m.opState != OpIdle {
return m, nil
}
switch msg.String() {
case "up", "k":
if m.cursor > 0 {
m.cursor--
@@ -83,11 +159,13 @@ func (m BackupManagerModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
}
case "v":
// Verify archive
// Verify archive with real verification
if len(m.archives) > 0 && m.cursor < len(m.archives) {
selected := m.archives[m.cursor]
m.message = fmt.Sprintf("[SEARCH] Verifying %s...", selected.Name)
// In real implementation, would run verification
m.opState = OpVerifying
m.opTarget = selected.Name
m.message = ""
return m, verifyArchiveCmd(selected)
}
case "d":
@@ -152,39 +230,67 @@ func (m BackupManagerModel) View() string {
var s strings.Builder
// Title
s.WriteString(titleStyle.Render("[DB] Backup Archive Manager"))
s.WriteString(TitleStyle.Render("[DB] Backup Archive Manager"))
s.WriteString("\n\n")
// Status line (no box, bold+color accents)
switch m.opState {
case OpVerifying:
spinner := spinnerFrames[m.spinnerFrame]
s.WriteString(StatusActiveStyle.Render(fmt.Sprintf("%s Verifying: %s", spinner, m.opTarget)))
s.WriteString("\n\n")
case OpDeleting:
spinner := spinnerFrames[m.spinnerFrame]
s.WriteString(StatusActiveStyle.Render(fmt.Sprintf("%s Deleting: %s", spinner, m.opTarget)))
s.WriteString("\n\n")
default:
if m.loading {
spinner := spinnerFrames[m.spinnerFrame]
s.WriteString(StatusActiveStyle.Render(fmt.Sprintf("%s Loading archives...", spinner)))
s.WriteString("\n\n")
} else if m.message != "" {
// Color based on message content
if strings.HasPrefix(m.message, "[+]") || strings.HasPrefix(m.message, "Valid") {
s.WriteString(StatusSuccessStyle.Render(m.message))
} else if strings.HasPrefix(m.message, "[-]") || strings.HasPrefix(m.message, "Error") {
s.WriteString(StatusErrorStyle.Render(m.message))
} else {
s.WriteString(StatusActiveStyle.Render(m.message))
}
s.WriteString("\n\n")
}
// No "Ready" message when idle - cleaner UI
}
if m.loading {
s.WriteString(infoStyle.Render("Loading archives..."))
return s.String()
}
if m.err != nil {
s.WriteString(errorStyle.Render(fmt.Sprintf("[FAIL] Error: %v", m.err)))
s.WriteString(StatusErrorStyle.Render(fmt.Sprintf("[FAIL] Error: %v", m.err)))
s.WriteString("\n\n")
s.WriteString(infoStyle.Render("Press Esc to go back"))
s.WriteString(ShortcutStyle.Render("Press Esc to go back"))
return s.String()
}
// Summary
s.WriteString(infoStyle.Render(fmt.Sprintf("Total Archives: %d | Total Size: %s",
s.WriteString(LabelStyle.Render(fmt.Sprintf("Total Archives: %d | Total Size: %s",
len(m.archives), formatSize(m.totalSize))))
s.WriteString("\n\n")
// Archives list
if len(m.archives) == 0 {
s.WriteString(infoStyle.Render("No backup archives found"))
s.WriteString(StatusReadyStyle.Render("No backup archives found"))
s.WriteString("\n\n")
s.WriteString(infoStyle.Render("Press Esc to go back"))
s.WriteString(ShortcutStyle.Render("Press Esc to go back"))
return s.String()
}
// Column headers
s.WriteString(archiveHeaderStyle.Render(fmt.Sprintf("%-35s %-25s %-12s %-20s",
// Column headers with better alignment
s.WriteString(ListHeaderStyle.Render(fmt.Sprintf(" %-32s %-22s %10s %-16s",
"FILENAME", "FORMAT", "SIZE", "MODIFIED")))
s.WriteString("\n")
s.WriteString(strings.Repeat("-", 95))
s.WriteString(strings.Repeat("-", 90))
s.WriteString("\n")
// Show archives (limit to visible area)
@@ -199,27 +305,27 @@ func (m BackupManagerModel) View() string {
for i := start; i < end; i++ {
archive := m.archives[i]
cursor := " "
style := archiveNormalStyle
cursor := " "
style := ListNormalStyle
if i == m.cursor {
cursor = ">"
style = archiveSelectedStyle
cursor = "> "
style = ListSelectedStyle
}
// Status icon
statusIcon := "[+]"
// Status icon - consistent 4-char width
statusIcon := " [+]"
if !archive.Valid {
statusIcon = "[-]"
style = archiveInvalidStyle
statusIcon = " [-]"
style = ItemInvalidStyle
} else if time.Since(archive.Modified) > 30*24*time.Hour {
statusIcon = "[WARN]"
statusIcon = " [!]"
}
filename := truncate(archive.Name, 33)
format := truncate(archive.Format.String(), 23)
filename := truncate(archive.Name, 32)
format := truncate(archive.Format.String(), 22)
line := fmt.Sprintf("%s %s %-33s %-23s %-10s %-19s",
line := fmt.Sprintf("%s%s %-32s %-22s %10s %-16s",
cursor,
statusIcon,
filename,
@@ -233,18 +339,83 @@ func (m BackupManagerModel) View() string {
// Footer
s.WriteString("\n")
if m.message != "" {
s.WriteString(infoStyle.Render(m.message))
s.WriteString("\n")
}
s.WriteString(infoStyle.Render(fmt.Sprintf("Selected: %d/%d", m.cursor+1, len(m.archives))))
s.WriteString("\n")
s.WriteString(infoStyle.Render("[KEY] ↑/↓: Navigate | r: Restore | v: Verify | d: Delete | i: Info | R: Refresh | Esc: Back"))
s.WriteString(StatusReadyStyle.Render(fmt.Sprintf("Selected: %d/%d", m.cursor+1, len(m.archives))))
s.WriteString("\n\n")
// Grouped keyboard shortcuts
s.WriteString(ShortcutStyle.Render("SHORTCUTS: Up/Down=Move | r=Restore | v=Verify | d=Delete | i=Info | R=Refresh | Esc=Back | q=Quit"))
return s.String()
}
// verifyArchiveCmd runs the SAME verification as restore safety checks
// This ensures consistency between backup manager verify and restore preview
func verifyArchiveCmd(archive ArchiveInfo) tea.Cmd {
return func() tea.Msg {
var issues []string
// 1. Run the same archive integrity check as restore
safety := restore.NewSafety(nil, nil) // Doesn't need config/log for validation
if err := safety.ValidateArchive(archive.Path); err != nil {
return verifyResultMsg{
archive: archive.Name,
valid: false,
err: nil,
details: fmt.Sprintf("Archive integrity: %v", err),
}
}
// 2. Run the same deep diagnosis as restore
diagnoser := restore.NewDiagnoser(nil, false)
diagResult, diagErr := diagnoser.DiagnoseFile(archive.Path)
if diagErr != nil {
return verifyResultMsg{
archive: archive.Name,
valid: false,
err: diagErr,
details: "Cannot diagnose archive",
}
}
if !diagResult.IsValid {
// Collect error details
if diagResult.IsTruncated {
issues = append(issues, "TRUNCATED")
}
if diagResult.IsCorrupted {
issues = append(issues, "CORRUPTED")
}
if len(diagResult.Errors) > 0 {
issues = append(issues, diagResult.Errors[0])
}
return verifyResultMsg{
archive: archive.Name,
valid: false,
err: nil,
details: strings.Join(issues, "; "),
}
}
// Build success details
details := "Verified"
if diagResult.Details != nil {
if diagResult.Details.TableCount > 0 {
details = fmt.Sprintf("%d databases in archive", diagResult.Details.TableCount)
} else if diagResult.Details.PgRestoreListable {
details = "pg_restore verified"
}
}
// Add any warnings
if len(diagResult.Warnings) > 0 {
details += fmt.Sprintf(" [%d warnings]", len(diagResult.Warnings))
}
return verifyResultMsg{archive: archive.Name, valid: true, err: nil, details: details}
}
}
// deleteArchive deletes a backup archive (to be called from confirmation)
func deleteArchive(archivePath string) error {
return os.Remove(archivePath)
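
deleteArchive is a plain helper, so the delete path presumably goes through a confirmation step and the OpDeleting state before calling it. A hypothetical tea.Cmd wrapper in the same style as verifyArchiveCmd; the message type and names are illustrative, not part of this diff:

// Hypothetical delete wiring in the same style as verifyArchiveCmd; the real
// confirmation flow and OpDeleting handling sit in hunks not shown here.
type deleteResultMsg struct {
	archive string
	err     error
}

func deleteArchiveCmd(archive ArchiveInfo) tea.Cmd {
	return func() tea.Msg {
		return deleteResultMsg{archive: archive.Name, err: deleteArchive(archive.Path)}
	}
}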

View File

@@ -204,124 +204,132 @@ func (m DiagnoseViewModel) View() string {
func (m DiagnoseViewModel) renderSingleResult(result *restore.DiagnoseResult) string {
var s strings.Builder
// Status
s.WriteString(strings.Repeat("-", 60))
s.WriteString("\n")
// Status Box
s.WriteString("+--[ VALIDATION STATUS ]" + strings.Repeat("-", 37) + "+\n")
if result.IsValid {
s.WriteString(diagnosePassStyle.Render("[OK] STATUS: VALID"))
s.WriteString("| " + diagnosePassStyle.Render("[OK] VALID - Archive passed all checks") + strings.Repeat(" ", 18) + "|\n")
} else {
s.WriteString(diagnoseFailStyle.Render("[FAIL] STATUS: INVALID"))
s.WriteString("| " + diagnoseFailStyle.Render("[FAIL] INVALID - Archive has problems") + strings.Repeat(" ", 19) + "|\n")
}
s.WriteString("\n")
if result.IsTruncated {
s.WriteString(diagnoseFailStyle.Render("[WARN] TRUNCATED: File appears incomplete"))
s.WriteString("\n")
s.WriteString("| " + diagnoseFailStyle.Render("[!] TRUNCATED - File is incomplete") + strings.Repeat(" ", 22) + "|\n")
}
if result.IsCorrupted {
s.WriteString(diagnoseFailStyle.Render("[WARN] CORRUPTED: File structure is damaged"))
s.WriteString("\n")
s.WriteString("| " + diagnoseFailStyle.Render("[!] CORRUPTED - File structure damaged") + strings.Repeat(" ", 18) + "|\n")
}
s.WriteString(strings.Repeat("-", 60))
s.WriteString("\n\n")
s.WriteString("+" + strings.Repeat("-", 60) + "+\n\n")
// Details
// Details Box
if result.Details != nil {
s.WriteString(diagnoseHeaderStyle.Render("[STATS] DETAILS:"))
s.WriteString("\n")
s.WriteString("+--[ DETAILS ]" + strings.Repeat("-", 46) + "+\n")
if result.Details.HasPGDMPSignature {
s.WriteString(diagnosePassStyle.Render(" [+] "))
s.WriteString("Has PGDMP signature (custom format)\n")
s.WriteString("| " + diagnosePassStyle.Render("[+]") + " PostgreSQL custom format (PGDMP)" + strings.Repeat(" ", 20) + "|\n")
}
if result.Details.HasSQLHeader {
s.WriteString(diagnosePassStyle.Render(" [+] "))
s.WriteString("Has PostgreSQL SQL header\n")
s.WriteString("| " + diagnosePassStyle.Render("[+]") + " PostgreSQL SQL header found" + strings.Repeat(" ", 25) + "|\n")
}
if result.Details.GzipValid {
s.WriteString(diagnosePassStyle.Render(" [+] "))
s.WriteString("Gzip compression valid\n")
s.WriteString("| " + diagnosePassStyle.Render("[+]") + " Gzip compression valid" + strings.Repeat(" ", 30) + "|\n")
}
if result.Details.PgRestoreListable {
s.WriteString(diagnosePassStyle.Render(" [+] "))
s.WriteString(fmt.Sprintf("pg_restore can list contents (%d tables)\n", result.Details.TableCount))
tableInfo := fmt.Sprintf(" (%d tables)", result.Details.TableCount)
padding := 36 - len(tableInfo)
if padding < 0 {
padding = 0
}
s.WriteString("| " + diagnosePassStyle.Render("[+]") + " pg_restore can list contents" + tableInfo + strings.Repeat(" ", padding) + "|\n")
}
if result.Details.CopyBlockCount > 0 {
s.WriteString(diagnoseInfoStyle.Render(" - "))
s.WriteString(fmt.Sprintf("Contains %d COPY blocks\n", result.Details.CopyBlockCount))
blockInfo := fmt.Sprintf("%d COPY blocks found", result.Details.CopyBlockCount)
padding := 50 - len(blockInfo)
if padding < 0 {
padding = 0
}
s.WriteString("| [-] " + blockInfo + strings.Repeat(" ", padding) + "|\n")
}
if result.Details.UnterminatedCopy {
s.WriteString(diagnoseFailStyle.Render(" [-] "))
s.WriteString(fmt.Sprintf("Unterminated COPY block: %s (line %d)\n",
result.Details.LastCopyTable, result.Details.LastCopyLineNumber))
s.WriteString("| " + diagnoseFailStyle.Render("[-]") + " Unterminated COPY: " + truncate(result.Details.LastCopyTable, 30) + strings.Repeat(" ", 5) + "|\n")
}
if result.Details.ProperlyTerminated {
s.WriteString(diagnosePassStyle.Render(" [+] "))
s.WriteString("All COPY blocks properly terminated\n")
s.WriteString("| " + diagnosePassStyle.Render("[+]") + " All COPY blocks properly terminated" + strings.Repeat(" ", 17) + "|\n")
}
if result.Details.ExpandedSize > 0 {
s.WriteString(diagnoseInfoStyle.Render(" - "))
s.WriteString(fmt.Sprintf("Expanded size: %s (ratio: %.1fx)\n",
formatSize(result.Details.ExpandedSize), result.Details.CompressionRatio))
sizeInfo := fmt.Sprintf("Expanded: %s (%.1fx)", formatSize(result.Details.ExpandedSize), result.Details.CompressionRatio)
padding := 50 - len(sizeInfo)
if padding < 0 {
padding = 0
}
s.WriteString("| [-] " + sizeInfo + strings.Repeat(" ", padding) + "|\n")
}
s.WriteString("+" + strings.Repeat("-", 60) + "+\n")
}
// Errors
// Errors Box
if len(result.Errors) > 0 {
s.WriteString("\n")
s.WriteString(diagnoseFailStyle.Render("[FAIL] ERRORS:"))
s.WriteString("\n")
s.WriteString("\n+--[ ERRORS ]" + strings.Repeat("-", 47) + "+\n")
for i, e := range result.Errors {
if i >= 5 {
s.WriteString(diagnoseInfoStyle.Render(fmt.Sprintf(" ... and %d more\n", len(result.Errors)-5)))
remaining := fmt.Sprintf("... and %d more errors", len(result.Errors)-5)
padding := 56 - len(remaining)
s.WriteString("| " + remaining + strings.Repeat(" ", padding) + "|\n")
break
}
s.WriteString(diagnoseFailStyle.Render(" - "))
s.WriteString(truncate(e, 70))
s.WriteString("\n")
errText := truncate(e, 54)
padding := 56 - len(errText)
if padding < 0 {
padding = 0
}
s.WriteString("| " + errText + strings.Repeat(" ", padding) + "|\n")
}
s.WriteString("+" + strings.Repeat("-", 60) + "+\n")
}
// Warnings
// Warnings Box
if len(result.Warnings) > 0 {
s.WriteString("\n")
s.WriteString(diagnoseWarnStyle.Render("[WARN] WARNINGS:"))
s.WriteString("\n")
s.WriteString("\n+--[ WARNINGS ]" + strings.Repeat("-", 45) + "+\n")
for i, w := range result.Warnings {
if i >= 3 {
s.WriteString(diagnoseInfoStyle.Render(fmt.Sprintf(" ... and %d more\n", len(result.Warnings)-3)))
remaining := fmt.Sprintf("... and %d more warnings", len(result.Warnings)-3)
padding := 56 - len(remaining)
s.WriteString("| " + remaining + strings.Repeat(" ", padding) + "|\n")
break
}
s.WriteString(diagnoseWarnStyle.Render(" - "))
s.WriteString(truncate(w, 70))
s.WriteString("\n")
warnText := truncate(w, 54)
padding := 56 - len(warnText)
if padding < 0 {
padding = 0
}
s.WriteString("| " + warnText + strings.Repeat(" ", padding) + "|\n")
}
s.WriteString("+" + strings.Repeat("-", 60) + "+\n")
}
// Recommendations
// Recommendations Box
if !result.IsValid {
s.WriteString("\n")
s.WriteString(diagnoseHeaderStyle.Render("[HINT] RECOMMENDATIONS:"))
s.WriteString("\n")
s.WriteString("\n+--[ RECOMMENDATIONS ]" + strings.Repeat("-", 38) + "+\n")
if result.IsTruncated {
s.WriteString(" 1. Re-run the backup process for this database\n")
s.WriteString(" 2. Check disk space on backup server\n")
s.WriteString(" 3. Verify network stability for remote backups\n")
s.WriteString("| 1. Re-run backup with current version (v3.42.12+) |\n")
s.WriteString("| 2. Check disk space on backup server |\n")
s.WriteString("| 3. Verify network stability for remote backups |\n")
}
if result.IsCorrupted {
s.WriteString(" 1. Verify backup was transferred completely\n")
s.WriteString(" 2. Try restoring from a previous backup\n")
s.WriteString("| 1. Verify backup was transferred completely |\n")
s.WriteString("| 2. Try restoring from a previous backup |\n")
}
s.WriteString("+" + strings.Repeat("-", 60) + "+\n")
}
return s.String()

View File

@@ -106,9 +106,23 @@ type safetyCheckCompleteMsg struct {
func runSafetyChecks(cfg *config.Config, log logger.Logger, archive ArchiveInfo, targetDB string) tea.Cmd {
return func() tea.Msg {
// 10 minutes for safety checks - large archives can take a long time to diagnose
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
// Dynamic timeout based on archive size for large database support
// Base: 10 minutes + 1 minute per 5 GB, max 120 minutes
timeoutMinutes := 10
if archive.Size > 0 {
sizeGB := archive.Size / (1024 * 1024 * 1024)
estimatedMinutes := int(sizeGB/5) + 10
if estimatedMinutes > timeoutMinutes {
timeoutMinutes = estimatedMinutes
}
if timeoutMinutes > 120 {
timeoutMinutes = 120
}
}
ctx, cancel := context.WithTimeout(context.Background(), time.Duration(timeoutMinutes)*time.Minute)
defer cancel()
_ = ctx // Used by database checks below
safety := restore.NewSafety(cfg, log)
checks := []SafetyCheck{}
@@ -306,6 +320,12 @@ func (m RestorePreviewModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
return m, nil
}
// Cluster-specific check: must enable cleanup if existing databases found
if m.mode == "restore-cluster" && m.existingDBCount > 0 && !m.cleanClusterFirst {
m.message = errorStyle.Render("[FAIL] Cannot proceed - press 'c' to enable cleanup of " + fmt.Sprintf("%d", m.existingDBCount) + " existing database(s) first")
return m, nil
}
// Proceed to restore execution
exec := NewRestoreExecution(m.config, m.logger, m.parent, m.ctx, m.archive, m.targetDB, m.cleanFirst, m.createIfMissing, m.mode, m.cleanClusterFirst, m.existingDBs, m.saveDebugLog, m.workDir)
return exec, exec.Init()

View File

@@ -146,11 +146,10 @@ func (m StatusViewModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
return m, nil
case tea.KeyMsg:
if !m.loading {
switch msg.String() {
case "ctrl+c", "q", "esc", "enter":
return m.parent, nil
}
// Always allow escape, even during loading
switch msg.String() {
case "ctrl+c", "q", "esc", "enter":
return m.parent, nil
}
}

internal/tui/styles.go — new file, 133 lines
View File

@@ -0,0 +1,133 @@
package tui
import "github.com/charmbracelet/lipgloss"
// =============================================================================
// GLOBAL TUI STYLE DEFINITIONS
// =============================================================================
// Design Language:
// - Bold text for labels and headers
// - Colors for semantic meaning (green=success, red=error, yellow=warning)
// - No emoticons - use simple text prefixes like [OK], [FAIL], [!]
// - No boxes for inline status - use bold+color accents
// - Consistent color palette across all views
// =============================================================================
// Color Palette (ANSI 256 colors for terminal compatibility)
const (
ColorWhite = lipgloss.Color("15") // Bright white
ColorGray = lipgloss.Color("250") // Light gray
ColorDim = lipgloss.Color("244") // Dim gray
ColorDimmer = lipgloss.Color("240") // Darker gray
ColorSuccess = lipgloss.Color("2") // Green
ColorError = lipgloss.Color("1") // Red
ColorWarning = lipgloss.Color("3") // Yellow
ColorInfo = lipgloss.Color("6") // Cyan
ColorAccent = lipgloss.Color("4") // Blue
)
// =============================================================================
// TITLE & HEADER STYLES
// =============================================================================
// TitleStyle - main view title (bold white on gray background)
var TitleStyle = lipgloss.NewStyle().
Bold(true).
Foreground(ColorWhite).
Background(ColorDimmer).
Padding(0, 1)
// HeaderStyle - section headers (bold gray)
var HeaderStyle = lipgloss.NewStyle().
Bold(true).
Foreground(ColorDim)
// LabelStyle - field labels (bold cyan)
var LabelStyle = lipgloss.NewStyle().
Bold(true).
Foreground(ColorInfo)
// =============================================================================
// STATUS STYLES
// =============================================================================
// StatusReadyStyle - idle/ready state (dim)
var StatusReadyStyle = lipgloss.NewStyle().
Foreground(ColorDim)
// StatusActiveStyle - operation in progress (bold cyan)
var StatusActiveStyle = lipgloss.NewStyle().
Bold(true).
Foreground(ColorInfo)
// StatusSuccessStyle - success messages (bold green)
var StatusSuccessStyle = lipgloss.NewStyle().
Bold(true).
Foreground(ColorSuccess)
// StatusErrorStyle - error messages (bold red)
var StatusErrorStyle = lipgloss.NewStyle().
Bold(true).
Foreground(ColorError)
// StatusWarningStyle - warning messages (bold yellow)
var StatusWarningStyle = lipgloss.NewStyle().
Bold(true).
Foreground(ColorWarning)
// =============================================================================
// LIST & TABLE STYLES
// =============================================================================
// ListNormalStyle - unselected list items
var ListNormalStyle = lipgloss.NewStyle().
Foreground(ColorGray)
// ListSelectedStyle - selected/cursor item (bold white)
var ListSelectedStyle = lipgloss.NewStyle().
Foreground(ColorWhite).
Bold(true)
// ListHeaderStyle - column headers (bold dim)
var ListHeaderStyle = lipgloss.NewStyle().
Bold(true).
Foreground(ColorDim)
// =============================================================================
// ITEM STATUS STYLES
// =============================================================================
// ItemValidStyle - valid/OK items (green)
var ItemValidStyle = lipgloss.NewStyle().
Foreground(ColorSuccess)
// ItemInvalidStyle - invalid/failed items (red)
var ItemInvalidStyle = lipgloss.NewStyle().
Foreground(ColorError)
// ItemOldStyle - old/stale items (yellow)
var ItemOldStyle = lipgloss.NewStyle().
Foreground(ColorWarning)
// =============================================================================
// SHORTCUT STYLE
// =============================================================================
// ShortcutStyle - keyboard shortcuts footer (dim)
var ShortcutStyle = lipgloss.NewStyle().
Foreground(ColorDim)
// =============================================================================
// HELPER PREFIXES (no emoticons)
// =============================================================================
const (
PrefixOK = "[OK]"
PrefixFail = "[FAIL]"
PrefixWarn = "[!]"
PrefixInfo = "[i]"
PrefixPlus = "[+]"
PrefixMinus = "[-]"
PrefixArrow = ">"
PrefixSpinner = "" // Spinner character added dynamically
)
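The styles and prefixes defined above are meant to be combined at render time, following the stated design language (bold labels, semantic colors, ASCII prefixes, no decorative boxes). A short usage sketch, not part of the 133-line file above; it assumes it sits next to styles.go in package tui and only uses the identifiers defined there:

package tui

import "fmt"

// renderStatusLine combines the shared styles and ASCII prefixes:
// a bold cyan label, a semantic color for the state, no borders.
func renderStatusLine(ok bool, msg string) string {
	if ok {
		return LabelStyle.Render("Status:") + " " + StatusSuccessStyle.Render(PrefixOK+" "+msg)
	}
	return LabelStyle.Render("Status:") + " " + StatusErrorStyle.Render(PrefixFail+" "+msg)
}

// demoStatusLines prints one success and one failure line for comparison.
func demoStatusLines() {
	fmt.Println(renderStatusLine(true, "Backup completed"))
	fmt.Println(renderStatusLine(false, "Archive is truncated"))
}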


@@ -1,171 +0,0 @@
#!/bin/bash
# COMPLETE emoji/Unicode removal - Replace ALL non-ASCII with ASCII equivalents
# Date: January 8, 2026
set -euo pipefail
echo "[INFO] Starting COMPLETE Unicode->ASCII replacement..."
echo ""
# Create backup
BACKUP_DIR="backup_unicode_removal_$(date +%Y%m%d_%H%M%S)"
mkdir -p "$BACKUP_DIR"
echo "[INFO] Creating backup in $BACKUP_DIR..."
find . -name "*.go" -type f -not -path "*/vendor/*" -not -path "*/.git/*" -exec bash -c 'mkdir -p "$1/$(dirname "$2")" && cp "$2" "$1/$2"' -- "$BACKUP_DIR" {} \;
echo "[OK] Backup created"
echo ""
# Find all affected files
echo "[SEARCH] Finding files with Unicode..."
FILES=$(find . -name "*.go" -type f -not -path "*/vendor/*" -not -path "*/.git/*")
PROCESSED=0
TOTAL=$(echo "$FILES" | wc -l)
for file in $FILES; do
PROCESSED=$((PROCESSED + 1))
if ! grep -qP '[\x{80}-\x{FFFF}]' "$file" 2>/dev/null; then
continue
fi
echo "[$PROCESSED/$TOTAL] Processing: $file"
# Create temp file for atomic replacements
TMPFILE="${file}.tmp"
cp "$file" "$TMPFILE"
# Box drawing / decorative (used in TUI borders)
sed -i 's/─/-/g' "$TMPFILE"
sed -i 's/━/-/g' "$TMPFILE"
sed -i 's/│/|/g' "$TMPFILE"
sed -i 's/║/|/g' "$TMPFILE"
sed -i 's/├/+/g' "$TMPFILE"
sed -i 's/└/+/g' "$TMPFILE"
sed -i 's/╔/+/g' "$TMPFILE"
sed -i 's/╗/+/g' "$TMPFILE"
sed -i 's/╚/+/g' "$TMPFILE"
sed -i 's/╝/+/g' "$TMPFILE"
sed -i 's/╠/+/g' "$TMPFILE"
sed -i 's/╣/+/g' "$TMPFILE"
sed -i 's/═/=/g' "$TMPFILE"
# Status symbols
sed -i 's/✅/[OK]/g' "$TMPFILE"
sed -i 's/❌/[FAIL]/g' "$TMPFILE"
sed -i 's/✓/[+]/g' "$TMPFILE"
sed -i 's/✗/[-]/g' "$TMPFILE"
sed -i 's/⚠️/[WARN]/g' "$TMPFILE"
sed -i 's/⚠/[!]/g' "$TMPFILE"
sed -i 's/❓/[?]/g' "$TMPFILE"
# Arrows
sed -i 's/←/</g' "$TMPFILE"
sed -i 's/→/>/g' "$TMPFILE"
sed -i 's/↑/^/g' "$TMPFILE"
sed -i 's/↓/v/g' "$TMPFILE"
sed -i 's/▲/^/g' "$TMPFILE"
sed -i 's/▼/v/g' "$TMPFILE"
sed -i 's/▶/>/g' "$TMPFILE"
# Shapes
sed -i 's/●/*\*/g' "$TMPFILE"
sed -i 's/○/o/g' "$TMPFILE"
sed -i 's/⚪/o/g' "$TMPFILE"
sed -i 's/•/-/g' "$TMPFILE"
sed -i 's/█/#/g' "$TMPFILE"
sed -i 's/▎/|/g' "$TMPFILE"
sed -i 's/░/./g' "$TMPFILE"
sed -i 's//-/g' "$TMPFILE"
# Emojis - Info/Data
sed -i 's/📊/[INFO]/g' "$TMPFILE"
sed -i 's/📋/[LIST]/g' "$TMPFILE"
sed -i 's/📁/[DIR]/g' "$TMPFILE"
sed -i 's/📦/[PKG]/g' "$TMPFILE"
sed -i 's/📜/[LOG]/g' "$TMPFILE"
sed -i 's/📭/[EMPTY]/g' "$TMPFILE"
sed -i 's/📝/[NOTE]/g' "$TMPFILE"
sed -i 's/💡/[TIP]/g' "$TMPFILE"
# Emojis - Actions/Objects
sed -i 's/🎯/[TARGET]/g' "$TMPFILE"
sed -i 's/🛡️/[SECURE]/g' "$TMPFILE"
sed -i 's/🔒/[LOCK]/g' "$TMPFILE"
sed -i 's/🔓/[UNLOCK]/g' "$TMPFILE"
sed -i 's/🔍/[SEARCH]/g' "$TMPFILE"
sed -i 's/🔀/[SWITCH]/g' "$TMPFILE"
sed -i 's/🔥/[FIRE]/g' "$TMPFILE"
sed -i 's/💾/[SAVE]/g' "$TMPFILE"
sed -i 's/🗄️/[DB]/g' "$TMPFILE"
sed -i 's/🗄/[DB]/g' "$TMPFILE"
# Emojis - Time/Status
sed -i 's/⏱️/[TIME]/g' "$TMPFILE"
sed -i 's/⏱/[TIME]/g' "$TMPFILE"
sed -i 's/⏳/[WAIT]/g' "$TMPFILE"
sed -i 's/⏪/[REW]/g' "$TMPFILE"
sed -i 's/⏹️/[STOP]/g' "$TMPFILE"
sed -i 's/⏹/[STOP]/g' "$TMPFILE"
sed -i 's/⟳/[SYNC]/g' "$TMPFILE"
# Emojis - Cloud
sed -i 's/☁️/[CLOUD]/g' "$TMPFILE"
sed -i 's/☁/[CLOUD]/g' "$TMPFILE"
sed -i 's/📤/[UPLOAD]/g' "$TMPFILE"
sed -i 's/📥/[DOWNLOAD]/g' "$TMPFILE"
sed -i 's/🗑️/[DELETE]/g' "$TMPFILE"
# Emojis - Misc
sed -i 's/📈/[UP]/g' "$TMPFILE"
sed -i 's/📉/[DOWN]/g' "$TMPFILE"
sed -i 's/⌨️/[KEY]/g' "$TMPFILE"
sed -i 's/⌨/[KEY]/g' "$TMPFILE"
sed -i 's/⚙️/[CONFIG]/g' "$TMPFILE"
sed -i 's/⚙/[CONFIG]/g' "$TMPFILE"
sed -i 's/✏️/[EDIT]/g' "$TMPFILE"
sed -i 's/✏/[EDIT]/g' "$TMPFILE"
sed -i 's/⚡/[FAST]/g' "$TMPFILE"
# Spinner characters (braille patterns for loading animations)
sed -i 's/⠋/|/g' "$TMPFILE"
sed -i 's/⠙/\//g' "$TMPFILE"
sed -i 's/⠹/-/g' "$TMPFILE"
sed -i 's/⠸/\\/g' "$TMPFILE"
sed -i 's/⠼/|/g' "$TMPFILE"
sed -i 's/⠴/\//g' "$TMPFILE"
sed -i 's/⠦/-/g' "$TMPFILE"
sed -i 's/⠧/\\/g' "$TMPFILE"
sed -i 's/⠇/|/g' "$TMPFILE"
sed -i 's/⠏/\//g' "$TMPFILE"
# Move temp file over original
mv "$TMPFILE" "$file"
done
echo ""
echo "[OK] Replacement complete!"
echo ""
# Verify
REMAINING=$(grep -roP '[\x{80}-\x{FFFF}]' --include="*.go" . 2>/dev/null | wc -l || echo "0")
echo "[INFO] Unicode characters remaining: $REMAINING"
if [ "$REMAINING" -gt 0 ]; then
echo "[WARN] Some Unicode still exists (might be in comments or safe locations)"
echo "[INFO] Unique remaining characters:"
grep -roP '[\x{80}-\x{FFFF}]' --include="*.go" . 2>/dev/null | grep -oP '[\x{80}-\x{FFFF}]' | sort -u | head -20
else
echo "[OK] All Unicode characters replaced with ASCII!"
fi
echo ""
echo "[INFO] Backup: $BACKUP_DIR"
echo "[INFO] To restore: cp -r $BACKUP_DIR/* ."
echo ""
echo "[INFO] Next steps:"
echo " 1. go build"
echo " 2. go test ./..."
echo " 3. Test TUI: ./dbbackup"
echo " 4. Commit: git add . && git commit -m 'v3.42.11: Replace all Unicode with ASCII'"
echo ""


@@ -1,130 +0,0 @@
#!/bin/bash
# Remove ALL emojis/unicode symbols from Go code and replace with ASCII
# Date: January 8, 2026
# Issue: 638 lines contain Unicode emojis causing display issues
set -euo pipefail
echo "[INFO] Starting emoji removal process..."
echo ""
# Find all Go files with emojis (expanded emoji list)
echo "[SEARCH] Finding affected files..."
FILES=$(find . -name "*.go" -type f -not -path "*/vendor/*" -not -path "*/.git/*" | xargs grep -l -P '[\x{1F000}-\x{1FFFF}]|[\x{2300}-\x{27BF}]|[\x{2600}-\x{26FF}]' 2>/dev/null || true)
if [ -z "$FILES" ]; then
echo "[WARN] No files with emojis found!"
exit 0
fi
FILECOUNT=$(echo "$FILES" | wc -l)
echo "[INFO] Found $FILECOUNT files containing emojis"
echo ""
# Count total emojis before
BEFORE=$(find . -name "*.go" -type f -not -path "*/vendor/*" | xargs grep -oP '[\x{1F000}-\x{1FFFF}]|[\x{2300}-\x{27BF}]|[\x{2600}-\x{26FF}]' 2>/dev/null | wc -l || echo "0")
echo "[INFO] Total emojis found: $BEFORE"
echo ""
# Create backup
BACKUP_DIR="backup_before_emoji_removal_$(date +%Y%m%d_%H%M%S)"
mkdir -p "$BACKUP_DIR"
echo "[INFO] Creating backup in $BACKUP_DIR..."
for file in $FILES; do
mkdir -p "$BACKUP_DIR/$(dirname "$file")"
cp "$file" "$BACKUP_DIR/$file"
done
echo "[OK] Backup created"
echo ""
# Process each file
echo "[INFO] Replacing emojis with ASCII equivalents..."
PROCESSED=0
for file in $FILES; do
PROCESSED=$((PROCESSED + 1))
echo "[$PROCESSED/$FILECOUNT] Processing: $file"
# Create temp file
TMPFILE="${file}.tmp"
# Status indicators
sed 's/✅/[OK]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
sed 's/❌/[FAIL]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
sed 's/✓/[+]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
sed 's/✗/[-]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
# Warning symbols (⚠️ has variant selector, handle both)
sed 's/⚠️/[WARN]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
sed 's/⚠/[!]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
# Info/Data symbols
sed 's/📊/[INFO]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
sed 's/📋/[LIST]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
sed 's/📁/[DIR]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
sed 's/📦/[PKG]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
# Target/Security
sed 's/🎯/[TARGET]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
sed 's/🛡️/[SECURE]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
sed 's/🔒/[LOCK]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
sed 's/🔓/[UNLOCK]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
# Actions
sed 's/🔍/[SEARCH]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
sed 's/⏱️/[TIME]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
# Cloud operations (☁️ has variant selector, handle both)
sed 's/☁️/[CLOUD]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
sed 's/☁/[CLOUD]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
sed 's/📤/[UPLOAD]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
sed 's/📥/[DOWNLOAD]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
sed 's/🗑️/[DELETE]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
# Other
sed 's/📈/[UP]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
sed 's/📉/[DOWN]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
# Additional emojis found
sed 's/⌨️/[KEY]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
sed 's/⌨/[KEY]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
sed 's/🗄️/[DB]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
sed 's/🗄/[DB]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
sed 's/⚙️/[CONFIG]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
sed 's/⚙/[CONFIG]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
sed 's/✏️/[EDIT]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
sed 's/✏/[EDIT]/g' "$file" > "$TMPFILE" && mv "$TMPFILE" "$file"
done
echo ""
echo "[OK] Replacement complete!"
echo ""
# Count remaining emojis
AFTER=$(find . -name "*.go" -type f -not -path "*/vendor/*" | xargs grep -oP '[\x{1F000}-\x{1FFFF}]|[\x{2300}-\x{27BF}]|[\x{2600}-\x{26FF}]' 2>/dev/null | wc -l || echo "0")
echo "[INFO] Emojis before: $BEFORE"
echo "[INFO] Emojis after: $AFTER"
echo "[INFO] Emojis removed: $((BEFORE - AFTER))"
echo ""
if [ "$AFTER" -gt 0 ]; then
echo "[WARN] $AFTER emojis still remaining!"
echo "[INFO] Listing remaining emojis:"
find . -name "*.go" -type f -not -path "*/vendor/*" | xargs grep -nP '[\x{1F000}-\x{1FFFF}]|[\x{2300}-\x{27BF}]|[\x{2600}-\x{26FF}]' 2>/dev/null | head -20
else
echo "[OK] All emojis successfully removed!"
fi
echo ""
echo "[INFO] Backup location: $BACKUP_DIR"
echo "[INFO] To restore: cp -r $BACKUP_DIR/* ."
echo ""
echo "[INFO] Next steps:"
echo " 1. Build: go build"
echo " 2. Test: go test ./..."
echo " 3. Manual testing: ./dbbackup status"
echo " 4. If OK, commit: git add . && git commit -m 'Replace emojis with ASCII'"
echo " 5. If broken, restore: cp -r $BACKUP_DIR/* ."
echo ""
echo "[OK] Emoji removal script completed!"