Compare commits

...

5 Commits

Author SHA1 Message Date
9a650e339d v3.42.99: Fix dedup CIFS/SMB rename bug
All checks were successful
CI/CD / Test (push) Successful in 1m10s
CI/CD / Lint (push) Successful in 1m0s
CI/CD / Integration Tests (push) Successful in 43s
CI/CD / Build & Release (push) Successful in 2m11s
On network filesystems (CIFS/SMB), atomic renames can fail with
'no such file or directory' due to stale directory caches.

Fix:
- Add MkdirAll before rename to refresh directory cache
- Retry rename up to 3 times with 10ms delay
- Re-ensure directory exists on each retry attempt
2026-01-23 13:07:53 +01:00
056094281e Reorganize docs: move to docs/, clean up obsolete files
All checks were successful
CI/CD / Test (push) Successful in 1m8s
CI/CD / Lint (push) Successful in 1m1s
CI/CD / Integration Tests (push) Successful in 43s
CI/CD / Build & Release (push) Has been skipped
Moved to docs/:
- AZURE.md, GCS.md, CLOUD.md (cloud storage)
- PITR.md, MYSQL_PITR.md (point-in-time recovery)
- ENGINES.md, DOCKER.md, SYSTEMD.md (deployment)
- RESTORE_PROFILES.md, LOCK_DEBUGGING.md (troubleshooting)
- LEGAL_DOCUMENTATION.md, GARANTIE.md, OPENSOURCE_ALTERNATIVE.md

Removed obsolete:
- RELEASE_85_FALLBACK.md
- release-notes-v3.42.77.md
- CODE_FLOW_PROOF.md
- RESTORE_PROGRESS_PROPOSAL.md
- RELEASE_NOTES.md (superseded by CHANGELOG.md)

Root now has only: README, QUICK, CHANGELOG, CONTRIBUTING, SECURITY, LICENSE
2026-01-23 13:00:36 +01:00
c52afc5004 Remove email_infra_team.txt 2026-01-23 12:58:31 +01:00
047c3b25f5 Add QUICK.md - real-world examples cheat sheet 2026-01-23 12:57:15 +01:00
ba435de895 v3.42.98: Fix CGO/SQLite and MySQL db name bugs
All checks were successful
CI/CD / Test (push) Successful in 1m9s
CI/CD / Lint (push) Successful in 1m0s
CI/CD / Integration Tests (push) Successful in 43s
CI/CD / Build & Release (push) Successful in 2m11s
FIXES:
- Switch from mattn/go-sqlite3 (CGO) to modernc.org/sqlite (pure Go)
  Binaries compiled with CGO_ENABLED=0 now work correctly
- Fix MySQL positional database argument being ignored
  'dbbackup backup single gitea --db-type mysql' now uses 'gitea' correctly
2026-01-23 12:11:30 +01:00
28 changed files with 399 additions and 747 deletions

View File

@ -5,6 +5,19 @@ All notable changes to dbbackup will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [3.42.98] - 2026-01-23
### Fixed - Critical Bug Fixes for v3.42.97
- **Fixed CGO/SQLite build issue** - binaries now work when compiled with `CGO_ENABLED=0`
- Switched from `github.com/mattn/go-sqlite3` (requires CGO) to `modernc.org/sqlite` (pure Go)
- All cross-compiled binaries now work correctly on all platforms
- No more "Binary was compiled with 'CGO_ENABLED=0', go-sqlite3 requires cgo to work" errors
- **Fixed MySQL positional database argument being ignored**
- `dbbackup backup single <dbname> --db-type mysql` now correctly uses `<dbname>`
- Previously defaulted to 'postgres' regardless of positional argument
- Also fixed in `backup sample` command
## [3.42.97] - 2026-01-23
### Added - Bandwidth Throttling for Cloud Uploads

View File

@ -1,229 +0,0 @@
# EXACT CODE FLOW - PROOF THAT IT WORKS
## YOUR PROBLEM (16 DAYS):
- `max_locks_per_transaction = 4096`
- Restore starts in parallel (ClusterParallelism=2, Jobs=4)
- After 4+ hours: "ERROR: out of shared memory"
- All of that time lost
## WHAT THE CODE DOES NOW (line by line):
### 1. PREFLIGHT CHECK (internal/restore/engine.go:1210-1249)
```go
// Line 1210: Calculate how many locks we need
lockBoostValue := 2048 // Default
if preflight != nil && preflight.Archive.RecommendedLockBoost > 0 {
lockBoostValue = preflight.Archive.RecommendedLockBoost // = 65536 for BLOBs
}
// Line 1220: Try to raise the locks (will fail without a restart)
originalSettings, tuneErr := e.boostPostgreSQLSettings(ctx, lockBoostValue)
// Line 1249: CRITICAL CHECK - this is where the fix kicks in
if originalSettings.MaxLocks < lockBoostValue { // 4096 < 65536 = TRUE
```
### 2. AUTO-FALLBACK (internal/restore/engine.go:1250-1283)
```go
// Line 1250-1256: Warning
e.log.Warn("PostgreSQL locks insufficient - AUTO-ENABLING single-threaded mode",
"current_locks", originalSettings.MaxLocks, // 4096
"optimal_locks", lockBoostValue, // 65536
"auto_action", "forcing sequential restore")
// Line 1273-1275: CONFIG IS CHANGED
e.cfg.Jobs = 1 // From 4 → 1
e.cfg.ClusterParallelism = 1 // From 2 → 1
strategy.UseConservative = true
// Line 1279: Accept the locks that are available
lockBoostValue = originalSettings.MaxLocks // Use 4096 instead of 65536
```
**AFTER THIS CODE:**
- `e.cfg.ClusterParallelism = 1`
- `e.cfg.Jobs = 1`
### 3. RESTORE LOOP START (internal/restore/engine.go:1344-1383)
```go
// Line 1344: READS the changed config
parallelism := e.cfg.ClusterParallelism // Reads: 1 ✅
// Line 1346: Ensures at least 1
if parallelism < 1 {
parallelism = 1
}
// Line 1378-1383: Semaphore limits parallelism
semaphore := make(chan struct{}, parallelism) // Channel size = 1 ✅
var wg sync.WaitGroup
// Line 1385+: Database loop
for _, entry := range entries {
wg.Add(1)
semaphore <- struct{}{} // BLOCKS when the channel is full (size 1)
go func() {
defer func() { <-semaphore }() // Releases the slot
// Only 1 goroutine can be in here because of semaphore size 1 ✅
```
**RESULT:** Only 1 database is restored at a time
### 4. SINGLE DATABASE RESTORE (internal/restore/engine.go:323-337)
```go
// Line 326: Check whether the database has BLOBs
hasLargeObjects := e.checkDumpHasLargeObjects(archivePath)
if hasLargeObjects {
// Line 329: PHASED RESTORE for BLOBs
return e.restorePostgreSQLDumpPhased(ctx, archivePath, targetDB, preserveOwnership)
}
// Line 336: Standard restore (without BLOBs)
opts := database.RestoreOptions{
Parallel: 1, // HARDCODED: only 1 pg_restore worker ✅
```
**RESULT:** Each database uses only 1 worker
### 5. PHASED RESTORE FOR BLOBs (internal/restore/engine.go:368-405)
```go
// Line 368: Phased restore in 3 phases
phases := []struct {
name string
section string
}{
{"pre-data", "pre-data"}, // Schema only
{"data", "data"}, // Data only
{"post-data", "post-data"}, // Indexes only
}
// Line 386: Restore each phase individually
for i, phase := range phases {
if err := e.restoreSection(ctx, archivePath, targetDB, phase.section, ...); err != nil {
```
**RESULT:** BLOBs are restored in small pieces
### 6. RUNTIME LOCK DETECTION (internal/restore/engine.go:643-664)
```go
// Line 643: Error classification
if lastError != "" {
classification = checks.ClassifyError(lastError)
// Line 647: NEW DETECTION
if strings.Contains(lastError, "out of shared memory") ||
strings.Contains(lastError, "max_locks_per_transaction") {
// Line 654: Return special error
return fmt.Errorf("LOCK_EXHAUSTION: %s - max_locks_per_transaction insufficient (error: %w)", lastError, cmdErr)
}
}
```
### 7. LOCK ERROR HANDLER (internal/restore/engine.go:1503-1530)
```go
// Line 1503: In the database restore loop
if restoreErr != nil {
errMsg := restoreErr.Error()
// Line 1507: Check for LOCK_EXHAUSTION
if strings.Contains(errMsg, "LOCK_EXHAUSTION:") ||
strings.Contains(errMsg, "out of shared memory") {
// Line 1512: FORCE SEQUENTIAL for everything that follows
e.cfg.ClusterParallelism = 1
e.cfg.Jobs = 1
// Line 1525: ABORT IMMEDIATELY
return // Stops all goroutines
}
}
```
**RESULT:** On a lock error, an immediate stop instead of running on for 4 more hours
## LOCK USAGE CALCULATION:
### BEFORE (16 days of failures):
```
ClusterParallelism = 2 → 2 DBs in parallel
Jobs = 4 → 4 workers per DB
Total workers = 2 × 4 = 8
Locks per worker = ~8000 (BLOBs)
TOTAL LOCKS NEEDED = 64000
AVAILABLE = 4096
→ OUT OF SHARED MEMORY ❌
```
### NOW (with the fix):
```
ClusterParallelism = 1 → 1 DB at a time
Jobs = 1 → 1 worker
Phased = yes → 3 phases of ~1000 locks each
TOTAL LOCKS NEEDED = 1000 (per phase)
AVAILABLE = 4096
HEADROOM = 4096 - 1000 = 3096 locks free
→ SUCCESS ✅
```
## WHY IT WORKS THIS TIME:
1. **Line 1249**: Check `if originalSettings.MaxLocks < lockBoostValue`
- With 4096 locks: `4096 < 65536` = **TRUE**
- Triggers the auto-fallback
2. **Line 1274**: `e.cfg.ClusterParallelism = 1`
- Is set BEFORE the restore loop
3. **Line 1344**: `parallelism := e.cfg.ClusterParallelism`
- Reads the value 1
4. **Line 1383**: `semaphore := make(chan struct{}, 1)`
- Channel size = 1 = only 1 DB in parallel
5. **Line 337**: `Parallel: 1`
- Only 1 worker per DB
6. **Line 368+**: Phased restore for BLOBs
- 3 small phases instead of 1 big one
**MATH:**
- 1 DB × 1 worker × ~1000 locks = 1000 locks
- Available = 4096 locks
- **75% HEADROOM**
## YOUR DEPLOYMENT:
```bash
# 1. Copy the binary to the server
scp /home/renz/source/dbbackup/bin/dbbackup_linux_amd64 user@server:/tmp/
# 2. On the server, as the postgres user
sudo su - postgres
cp /tmp/dbbackup_linux_amd64 /usr/local/bin/dbbackup
chmod +x /usr/local/bin/dbbackup
# 3. Start the restore (NO FLAGS NEEDED - auto-detection works)
dbbackup restore cluster cluster_20260113_091134.tar.gz --confirm
```
**IT WILL:**
1. Check the locks (4096 < 65536)
2. Auto-enable sequential mode
3. Restore 1 DB at a time
4. Restore BLOBs in phases
5. **RUN THROUGH TO THE END**
Otherwise your 180 + 2 months + job are gone.
**NO GUARANTEE - JUST CODE.**

QUICK.md (new file, 272 lines)
View File

@ -0,0 +1,272 @@
# dbbackup Quick Reference
Real examples, no fluff.
## Basic Backups
```bash
# PostgreSQL (auto-detects all databases)
dbbackup backup all /mnt/backups/databases
# Single database
dbbackup backup single myapp /mnt/backups/databases
# MySQL
dbbackup backup single gitea --db-type mysql --db-host 127.0.0.1 --db-port 3306 /mnt/backups/databases
# With compression level (1-9, default 6)
dbbackup backup all /mnt/backups/databases --compression-level 9
# As root (requires flag)
sudo dbbackup backup all /mnt/backups/databases --allow-root
```
## PITR (Point-in-Time Recovery)
```bash
# Enable WAL archiving for a database
dbbackup pitr enable myapp /mnt/backups/wal
# Take base backup (required before PITR works)
dbbackup pitr base myapp /mnt/backups/wal
# Check PITR status
dbbackup pitr status myapp /mnt/backups/wal
# Restore to specific point in time
dbbackup pitr restore myapp /mnt/backups/wal --target-time "2026-01-23 14:30:00"
# Restore to latest available
dbbackup pitr restore myapp /mnt/backups/wal --target-time latest
# Disable PITR
dbbackup pitr disable myapp
```
## Deduplication
```bash
# Backup with dedup (saves ~60-80% space on similar databases)
dbbackup backup all /mnt/backups/databases --dedup
# Check dedup stats
dbbackup dedup stats /mnt/backups/databases
# Prune orphaned chunks (after deleting old backups)
dbbackup dedup prune /mnt/backups/databases
# Verify chunk integrity
dbbackup dedup verify /mnt/backups/databases
```
## Cloud Storage
```bash
# Upload to S3/MinIO
dbbackup cloud upload /mnt/backups/databases/myapp_2026-01-23.sql.gz \
--provider s3 \
--bucket my-backups \
--endpoint https://s3.amazonaws.com
# Upload to MinIO (self-hosted)
dbbackup cloud upload backup.sql.gz \
--provider s3 \
--bucket backups \
--endpoint https://minio.internal:9000
# Upload to Google Cloud Storage
dbbackup cloud upload backup.sql.gz \
--provider gcs \
--bucket my-gcs-bucket
# Upload to Azure Blob
dbbackup cloud upload backup.sql.gz \
--provider azure \
--bucket mycontainer
# With bandwidth limit (don't saturate the network)
dbbackup cloud upload backup.sql.gz --provider s3 --bucket backups --bandwidth-limit 10MB/s
# List remote backups
dbbackup cloud list --provider s3 --bucket my-backups
# Download
dbbackup cloud download myapp_2026-01-23.sql.gz /tmp/ --provider s3 --bucket my-backups
# Sync local backup dir to cloud
dbbackup cloud sync /mnt/backups/databases --provider s3 --bucket my-backups
```
### Cloud Environment Variables
```bash
# S3/MinIO
export AWS_ACCESS_KEY_ID=AKIAXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxx
export AWS_REGION=eu-central-1
# GCS
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
# Azure
export AZURE_STORAGE_ACCOUNT=mystorageaccount
export AZURE_STORAGE_KEY=xxxxxxxx
```
## Encryption
```bash
# Backup with encryption (AES-256-GCM)
dbbackup backup all /mnt/backups/databases --encrypt --encrypt-key "my-secret-passphrase"
# Or use environment variable
export DBBACKUP_ENCRYPT_KEY="my-secret-passphrase"
dbbackup backup all /mnt/backups/databases --encrypt
# Restore encrypted backup
dbbackup restore /mnt/backups/databases/myapp_2026-01-23.sql.gz.enc myapp_restored \
--encrypt-key "my-secret-passphrase"
```
## Catalog (Backup Inventory)
```bash
# Sync local backups to catalog
dbbackup catalog sync /mnt/backups/databases
# List all backups
dbbackup catalog list
# Show gaps (missing daily backups)
dbbackup catalog gaps
# Search backups
dbbackup catalog search myapp
# Export catalog to JSON
dbbackup catalog export --format json > backups.json
```
## Restore
```bash
# Restore to new database
dbbackup restore /mnt/backups/databases/myapp_2026-01-23.sql.gz myapp_restored
# Restore to existing database (overwrites!)
dbbackup restore /mnt/backups/databases/myapp_2026-01-23.sql.gz myapp --force
# Restore MySQL
dbbackup restore /mnt/backups/databases/gitea_2026-01-23.sql.gz gitea_restored \
--db-type mysql --db-host 127.0.0.1
# Verify restore (restores to temp db, runs checks, drops it)
dbbackup verify-restore /mnt/backups/databases/myapp_2026-01-23.sql.gz
```
## Retention & Cleanup
```bash
# Delete backups older than 30 days
dbbackup cleanup /mnt/backups/databases --older-than 30d
# Keep 7 daily, 4 weekly, 12 monthly (GFS)
dbbackup cleanup /mnt/backups/databases --keep-daily 7 --keep-weekly 4 --keep-monthly 12
# Dry run (show what would be deleted)
dbbackup cleanup /mnt/backups/databases --older-than 30d --dry-run
```
## Disaster Recovery Drill
```bash
# Full DR test (restores random backup, verifies, cleans up)
dbbackup drill /mnt/backups/databases
# Test specific database
dbbackup drill /mnt/backups/databases --database myapp
# With email report
dbbackup drill /mnt/backups/databases --notify admin@example.com
```
## Monitoring & Metrics
```bash
# Prometheus metrics endpoint
dbbackup metrics serve --port 9101
# One-shot status check (for scripts)
dbbackup status /mnt/backups/databases
echo $? # 0 = OK, 1 = warnings, 2 = critical
# Generate HTML report
dbbackup report /mnt/backups/databases --output backup-report.html
```
## Systemd Timer (Recommended)
```bash
# Install systemd units
sudo dbbackup install systemd --backup-path /mnt/backups/databases --schedule "02:00"
# Creates:
# /etc/systemd/system/dbbackup.service
# /etc/systemd/system/dbbackup.timer
# Check timer
systemctl status dbbackup.timer
systemctl list-timers dbbackup.timer
```
## Common Combinations
```bash
# Full production setup: encrypted, deduplicated, uploaded to S3
dbbackup backup all /mnt/backups/databases \
--dedup \
--encrypt \
--compression-level 9
dbbackup cloud sync /mnt/backups/databases \
--provider s3 \
--bucket prod-backups \
--bandwidth-limit 50MB/s
# Quick MySQL backup to S3
dbbackup backup single shopdb --db-type mysql /tmp/backup && \
dbbackup cloud upload /tmp/backup/shopdb_*.sql.gz --provider s3 --bucket backups
# PITR-enabled PostgreSQL with cloud sync
dbbackup pitr enable proddb /mnt/wal
dbbackup pitr base proddb /mnt/wal
dbbackup cloud sync /mnt/wal --provider gcs --bucket wal-archive
```
## Environment Variables
| Variable | Description |
|----------|-------------|
| `DBBACKUP_ENCRYPT_KEY` | Encryption passphrase |
| `DBBACKUP_BANDWIDTH_LIMIT` | Cloud upload limit (e.g., `10MB/s`) |
| `PGHOST`, `PGPORT`, `PGUSER` | PostgreSQL connection |
| `MYSQL_HOST`, `MYSQL_TCP_PORT` | MySQL connection |
| `AWS_ACCESS_KEY_ID` | S3/MinIO credentials |
| `GOOGLE_APPLICATION_CREDENTIALS` | GCS service account JSON path |
| `AZURE_STORAGE_ACCOUNT` | Azure storage account name |
## Quick Checks
```bash
# What version?
dbbackup --version
# What's installed?
dbbackup status
# Test database connection
dbbackup backup single testdb /tmp --dry-run
# Verify a backup file
dbbackup verify /mnt/backups/databases/myapp_2026-01-23.sql.gz
```

View File

@ -907,16 +907,30 @@ Workload types:
## Documentation
-- [RESTORE_PROFILES.md](RESTORE_PROFILES.md) - Restore resource profiles & troubleshooting
-- [SYSTEMD.md](SYSTEMD.md) - Systemd installation & scheduling
-- [DOCKER.md](DOCKER.md) - Docker deployment
-- [CLOUD.md](CLOUD.md) - Cloud storage configuration
-- [PITR.md](PITR.md) - Point-in-Time Recovery
-- [AZURE.md](AZURE.md) - Azure Blob Storage
-- [GCS.md](GCS.md) - Google Cloud Storage
+**Quick Start:**
+- [QUICK.md](QUICK.md) - Real-world examples cheat sheet
+**Guides:**
+- [docs/PITR.md](docs/PITR.md) - Point-in-Time Recovery (PostgreSQL)
+- [docs/MYSQL_PITR.md](docs/MYSQL_PITR.md) - Point-in-Time Recovery (MySQL)
+- [docs/ENGINES.md](docs/ENGINES.md) - Database engine configuration
+- [docs/RESTORE_PROFILES.md](docs/RESTORE_PROFILES.md) - Restore resource profiles
+**Cloud Storage:**
+- [docs/CLOUD.md](docs/CLOUD.md) - Cloud storage overview
+- [docs/AZURE.md](docs/AZURE.md) - Azure Blob Storage
+- [docs/GCS.md](docs/GCS.md) - Google Cloud Storage
+**Deployment:**
+- [docs/DOCKER.md](docs/DOCKER.md) - Docker deployment
+- [docs/SYSTEMD.md](docs/SYSTEMD.md) - Systemd installation & scheduling
+**Reference:**
+- [SECURITY.md](SECURITY.md) - Security considerations
+- [CONTRIBUTING.md](CONTRIBUTING.md) - Contribution guidelines
+- [CHANGELOG.md](CHANGELOG.md) - Version history
+- [docs/LOCK_DEBUGGING.md](docs/LOCK_DEBUGGING.md) - Lock troubleshooting
+- [docs/LEGAL_DOCUMENTATION.md](docs/LEGAL_DOCUMENTATION.md) - Legal & compliance
## License

View File

@ -1,21 +0,0 @@
# Fallback instructions for release 85
If you need to hard reset to the last known good release (v3.42.85):
1. Fetch the tag from remote:
git fetch --tags
2. Checkout the release tag:
git checkout v3.42.85
3. (Optional) Hard reset main to this tag:
git checkout main
git reset --hard v3.42.85
git push --force origin main
git push --force github main
4. Re-run CI to verify stability.
# Note
- This will revert all changes after v3.42.85.
- Only use if CI and builds are broken and cannot be fixed quickly.

View File

@ -1,108 +0,0 @@
# v3.42.1 Release Notes
## What's New in v3.42.1
### Deduplication - Resistance is Futile
Content-defined chunking deduplication for space-efficient backups. Like restic/borgbackup but with **native database dump support**.
```bash
# First backup: 5MB stored
dbbackup dedup backup mydb.dump
# Second backup (modified): only 1.6KB new data stored!
# 100% deduplication ratio
dbbackup dedup backup mydb_modified.dump
```
#### Features
- **Gear Hash CDC** - Content-defined chunking with 92%+ overlap on shifted data (see the sketch below)
- **SHA-256 Content-Addressed** - Chunks stored by hash, automatic deduplication
- **AES-256-GCM Encryption** - Optional per-chunk encryption
- **Gzip Compression** - Optional compression (enabled by default)
- **SQLite Index** - Fast chunk lookups and statistics
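To make the chunking idea concrete, here is a minimal, self-contained sketch of gear-hash content-defined chunking. It is illustrative only, not dbbackup's actual implementation: the table seed, boundary mask, and size bounds are assumptions chosen for demonstration (a 13-bit mask yields roughly 8 KiB average chunks past the minimum).
```go
package main

import (
	"fmt"
	"math/rand"
)

// gearTable maps each byte value to a random 64-bit constant.
// Fixed seed so boundaries are deterministic across runs (assumption).
var gearTable = func() [256]uint64 {
	rng := rand.New(rand.NewSource(42))
	var t [256]uint64
	for i := range t {
		t[i] = rng.Uint64()
	}
	return t
}()

// cutChunks splits data wherever the rolling gear hash matches the mask,
// so boundaries depend on content, not offsets: inserting bytes only
// shifts nearby boundaries, and the rest of the chunks still dedupe.
func cutChunks(data []byte, minSize, maxSize int, mask uint64) [][]byte {
	var chunks [][]byte
	start := 0
	var h uint64
	for i, b := range data {
		h = (h << 1) + gearTable[b] // gear hash update: shift and add
		size := i - start + 1
		if (size >= minSize && h&mask == 0) || size >= maxSize {
			chunks = append(chunks, data[start:i+1])
			start = i + 1
			h = 0
		}
	}
	if start < len(data) {
		chunks = append(chunks, data[start:])
	}
	return chunks
}

func main() {
	data := make([]byte, 1<<20)
	rand.New(rand.NewSource(1)).Read(data)
	chunks := cutChunks(data, 2048, 65536, (1<<13)-1) // ~8 KiB average
	fmt.Printf("%d chunks from %d bytes\n", len(chunks), len(data))
}
```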
#### Commands
```bash
dbbackup dedup backup <file> # Create deduplicated backup
dbbackup dedup backup <file> --encrypt # With AES-256-GCM encryption
dbbackup dedup restore <id> <output> # Restore from manifest
dbbackup dedup list # List all backups
dbbackup dedup stats # Show deduplication statistics
dbbackup dedup delete <id> # Delete a backup
dbbackup dedup gc # Garbage collect unreferenced chunks
```
#### Storage Structure
```
<backup-dir>/dedup/
chunks/ # Content-addressed chunk files
ab/cdef1234... # Sharded by first 2 chars of hash
manifests/ # JSON manifest per backup
chunks.db # SQLite index
```
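Deriving a chunk's location in that layout is a one-liner over the SHA-256 digest. A minimal sketch, with `chunkPath` as a hypothetical helper rather than the tool's actual function:
```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"path/filepath"
)

// chunkPath builds the content-addressed path for a chunk: the SHA-256
// hex digest, sharded by its first two characters as shown above.
func chunkPath(baseDir string, data []byte) string {
	sum := sha256.Sum256(data)
	hexSum := hex.EncodeToString(sum[:])
	// e.g. <base>/dedup/chunks/ab/cdef1234...
	return filepath.Join(baseDir, "dedup", "chunks", hexSum[:2], hexSum[2:])
}

func main() {
	fmt.Println(chunkPath("/mnt/backups", []byte("example chunk")))
}
```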
### Also Included (from v3.41.x)
- **Systemd Integration** - One-command install with `dbbackup install`
- **Prometheus Metrics** - HTTP exporter on port 9399
- **Backup Catalog** - SQLite-based tracking of all backup operations
- **Prometheus Alerting Rules** - Added to SYSTEMD.md documentation
### Installation
#### Quick Install (Recommended)
```bash
# Download for your platform
curl -LO https://git.uuxo.net/UUXO/dbbackup/releases/download/v3.42.1/dbbackup-linux-amd64
# Install with systemd service
chmod +x dbbackup-linux-amd64
sudo ./dbbackup-linux-amd64 install --config /path/to/config.yaml
```
#### Available Binaries
| Platform | Architecture | Binary |
|----------|--------------|--------|
| Linux | amd64 | `dbbackup-linux-amd64` |
| Linux | arm64 | `dbbackup-linux-arm64` |
| macOS | Intel | `dbbackup-darwin-amd64` |
| macOS | Apple Silicon | `dbbackup-darwin-arm64` |
| FreeBSD | amd64 | `dbbackup-freebsd-amd64` |
### Systemd Commands
```bash
dbbackup install --config config.yaml # Install service + timer
dbbackup install --status # Check service status
dbbackup install --uninstall # Remove services
```
### Prometheus Metrics
Available at `http://localhost:9399/metrics`:
| Metric | Description |
|--------|-------------|
| `dbbackup_last_backup_timestamp` | Unix timestamp of last backup |
| `dbbackup_last_backup_success` | 1 if successful, 0 if failed |
| `dbbackup_last_backup_duration_seconds` | Duration of last backup |
| `dbbackup_last_backup_size_bytes` | Size of last backup |
| `dbbackup_backup_total` | Total number of backups |
| `dbbackup_backup_errors_total` | Total number of failed backups |
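For a quick look at these from a shell, plain `curl` against the endpoint above is enough; nothing below is dbbackup-specific beyond the metric name prefix:
```bash
# Dump the exporter's dbbackup_* metrics listed above
curl -s http://localhost:9399/metrics | grep '^dbbackup_'
```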
### Security Features
- Hardened systemd service with `ProtectSystem=strict`
- `NoNewPrivileges=true` prevents privilege escalation
- Dedicated `dbbackup` system user (optional)
- Credential files with restricted permissions
### Documentation
- [SYSTEMD.md](SYSTEMD.md) - Complete systemd installation guide
- [README.md](README.md) - Full documentation
- [CHANGELOG.md](CHANGELOG.md) - Version history
### Bug Fixes
- Fixed SQLite time parsing in dedup stats
- Fixed function name collision in cmd package
---
**Full Changelog**: https://git.uuxo.net/UUXO/dbbackup/compare/v3.41.1...v3.42.1

View File

@ -1,171 +0,0 @@
# Restore Progress Bar Enhancement Proposal
## Problem
During Phase 2 cluster restore, the progress bar is not real-time because:
- `pg_restore` subprocess blocks until completion
- Progress updates only happen **before** each database restore starts
- No feedback during actual restore execution (which can take hours)
- Users see frozen progress bar during large database restores
## Root Cause
In `internal/restore/engine.go`:
- `executeRestoreCommand()` blocks on `cmd.Wait()`
- Progress is only reported at goroutine entry (line ~1315)
- No streaming progress during pg_restore execution
## Proposed Solutions
### Option 1: Parse pg_restore stderr for progress (RECOMMENDED)
**Pros:**
- Real-time feedback during restore
- Works with existing pg_restore
- No external tools needed
**Implementation:**
```go
// In executeRestoreCommand, modify stderr reader:
go func() {
scanner := bufio.NewScanner(stderr)
for scanner.Scan() {
line := scanner.Text()
// Parse pg_restore progress lines
// Format: "pg_restore: processing item 1234 TABLE public users"
if strings.Contains(line, "processing item") {
e.reportItemProgress(line) // Update progress bar
}
// Capture errors
if strings.Contains(line, "ERROR:") {
lastError = line
errorCount++
}
}
}()
```
**Add to RestoreCluster goroutine:**
```go
// Track sub-items within each database
var currentDBItems, totalDBItems int
e.setItemProgressCallback(func(current, total int) {
currentDBItems = current
totalDBItems = total
// Update TUI with sub-progress
e.reportDatabaseSubProgress(idx, totalDBs, dbName, current, total)
})
```
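For reference, a minimal sketch of the parsing such a `reportItemProgress` helper could do. The line format is the one quoted in the snippet above; the regex and the helper are proposal-level assumptions rather than existing code, and a total item count would have to come from `pg_restore -l` up front:
```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// itemRe matches pg_restore's verbose "processing item <id> ..." lines.
var itemRe = regexp.MustCompile(`processing item (\d+)`)

// parseItemID extracts the dump item ID from one stderr line.
func parseItemID(line string) (int, bool) {
	m := itemRe.FindStringSubmatch(line)
	if m == nil {
		return 0, false
	}
	id, err := strconv.Atoi(m[1])
	return id, err == nil
}

func main() {
	id, ok := parseItemID("pg_restore: processing item 1234 TABLE public users")
	fmt.Println(id, ok) // 1234 true
}
```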
### Option 2: Verbose mode with line counting
**Pros:**
- More granular progress (row-level)
- Shows exact operation being performed
**Cons:**
- `--verbose` causes massive stderr output (OOM risk on huge DBs)
- Currently disabled for memory safety
- Requires careful memory management
### Option 3: Hybrid approach (BEST)
**Combine both:**
1. **Default**: Parse non-verbose pg_restore output for item counts
2. **Small DBs** (<500MB): Enable verbose for detailed progress
3. **Periodic updates**: Report progress every 5 seconds even without stderr changes
**Implementation:**
```go
// Add periodic progress ticker
progressTicker := time.NewTicker(5 * time.Second)
defer progressTicker.Stop()
go func() {
for {
select {
case <-progressTicker.C:
// Report heartbeat even if no stderr
e.reportHeartbeat(dbName, time.Since(dbRestoreStart))
case <-stderrDone:
return
}
}
}()
```
## Recommended Implementation Plan
### Phase 1: Quick Win (1-2 hours)
1. Add heartbeat ticker in cluster restore goroutines
2. Update TUI to show "Restoring database X... (elapsed: 3m 45s)"
3. No code changes to pg_restore wrapper
### Phase 2: Parse pg_restore Output (4-6 hours)
1. Parse stderr for "processing item" lines
2. Extract current/total item counts
3. Report sub-progress to TUI
4. Update progress bar calculation:
```
dbProgress = baseProgress + (itemsDone/totalItems) * dbWeightedPercent
```
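As a sanity check, the same calculation in runnable form (float math, since integer division would floor the ratio to zero; the names mirror the formula):
```go
package main

import "fmt"

// dbProgress interpolates within the share of the overall bar that is
// allotted to the current database.
func dbProgress(base, dbWeight float64, itemsDone, totalItems int) float64 {
	if totalItems <= 0 {
		return base // avoid division by zero before totals are known
	}
	return base + float64(itemsDone)/float64(totalItems)*dbWeight
}

func main() {
	// DB #2 of 5 owns 20% of the bar starting at 20%; 234 of 567 items done.
	fmt.Printf("%.1f%%\n", dbProgress(20, 20, 234, 567)) // 28.3%
}
```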
### Phase 3: Smart Verbose Mode (optional)
1. Detect database size before restore
2. Enable verbose for DBs < 500MB
3. Parse verbose output for detailed progress
4. Automatic fallback to item-based for large DBs
## Files to Modify
1. **internal/restore/engine.go**:
- `executeRestoreCommand()` - add progress parsing
- `RestoreCluster()` - add heartbeat ticker
- New: `reportItemProgress()`, `reportHeartbeat()`
2. **internal/tui/restore_exec.go**:
- Update `RestoreExecModel` to handle sub-progress
- Add "elapsed time" display during restore
- Show item counts: "Restoring tables... (234/567)"
3. **internal/progress/indicator.go**:
- Add `UpdateSubProgress(current, total int)` method
- Add `ReportHeartbeat(elapsed time.Duration)` method
## Example Output
**Before (current):**
```
[====================] Phase 2/3: Restoring Databases (1/5)
Restoring database myapp...
[frozen for 30 minutes]
```
**After (with heartbeat):**
```
[====================] Phase 2/3: Restoring Databases (1/5)
Restoring database myapp... (elapsed: 4m 32s)
[updates every 5 seconds]
```
**After (with item parsing):**
```
[=========>-----------] Phase 2/3: Restoring Databases (1/5)
Restoring database myapp... (processing item 1,234/5,678) (elapsed: 4m 32s)
[smooth progress bar movement]
```
## Testing Strategy
1. Test with small DB (< 100MB) - verify heartbeat works
2. Test with large DB (> 10GB) - verify no OOM, heartbeat works
3. Test with BLOB-heavy DB - verify phased restore shows progress
4. Test parallel cluster restore - verify multiple heartbeats don't conflict
## Risk Assessment
- **Low risk**: Heartbeat ticker (Phase 1)
- **Medium risk**: stderr parsing (Phase 2) - test thoroughly
- **High risk**: Verbose mode (Phase 3) - can cause OOM
## Estimated Implementation Time
- Phase 1 (heartbeat): 1-2 hours
- Phase 2 (item parsing): 4-6 hours
- Phase 3 (smart verbose): 8-10 hours (optional)
**Total for Phases 1+2: 5-8 hours**

View File

@ -3,9 +3,9 @@
This directory contains pre-compiled binaries for the DB Backup Tool across multiple platforms and architectures.
## Build Information
-- **Version**: 3.42.81
-- **Build Time**: 2026-01-23_09:30:32_UTC
-- **Git Commit**: a33e09d
+- **Version**: 3.42.97
+- **Build Time**: 2026-01-23_11:09:43_UTC
+- **Git Commit**: a18947a
## Recent Updates (v1.1.0)
- ✅ Fixed TUI progress display with line-by-line output

View File

@ -130,6 +130,10 @@ func runSingleBackup(ctx context.Context, databaseName string) error {
// Update config from environment
cfg.UpdateFromEnvironment()
// IMPORTANT: Set the database name from positional argument
// This overrides the default 'postgres' when using MySQL
cfg.Database = databaseName
// Validate configuration
if err := cfg.Validate(); err != nil {
return fmt.Errorf("configuration error: %w", err)
@ -312,6 +316,9 @@ func runSampleBackup(ctx context.Context, databaseName string) error {
// Update config from environment
cfg.UpdateFromEnvironment()
// IMPORTANT: Set the database name from positional argument
cfg.Database = databaseName
// Validate configuration
if err := cfg.Validate(); err != nil {
return fmt.Errorf("configuration error: %w", err)

View File


@ -1,112 +0,0 @@
Subject: PostgreSQL restore failures - "out of shared memory" on the RST server
Hello Infra Team,
we are seeing repeated restore failures with "out of shared memory" messages on the RST PostgreSQL server (PostgreSQL 17.4).
═══════════════════════════════════════════════════════════
ANALYSIS (as of 2026-01-20)
═══════════════════════════════════════════════════════════
Server specs:
• RAM: 31 GB (currently 19.6 GB in use = 63.9%)
• PostgreSQL itself uses only ~118 MB for its own processes
• Swap: 4 GB (6.4% in use)
Lock configuration:
• max_locks_per_transaction: 4096 ✓ (already correct)
• max_connections: 100
• Lock capacity: 409,600 ✓ (sufficient)
═══════════════════════════════════════════════════════════
PROBLEM IDENTIFICATION
═══════════════════════════════════════════════════════════
1. MEMORY CONSUMERS (non-PostgreSQL):
• Nessus Agent: ~173 MB
• Elastic Agent: ~300 MB (multiple components)
• Icinga: ~24 MB
• Other monitoring: ~100+ MB
2. WORK_MEM TOO LOW:
• Current: 64 MB
• 4 databases are writing temp files (an indicator of too little work_mem; see the query below):
- prodkc: 201 MB temp files
- keycloak: 45 MB temp files
- d7030: 6 MB temp files
- pgbench_db: 2 MB temp files
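The temp file numbers above can be re-checked at any time from PostgreSQL's
cumulative statistics (pg_stat_database and its temp_files/temp_bytes
columns are stock PostgreSQL):
SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_size
FROM pg_stat_database ORDER BY temp_bytes DESC;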
═══════════════════════════════════════════════════════════
RECOMMENDED ACTIONS
═══════════════════════════════════════════════════════════
OPTION A - Temporary, for large restores:
-------------------------------------------
1. Stop the monitoring agents (frees ~500 MB):
sudo systemctl stop nessus-agent
sudo systemctl stop elastic-agent
2. Increase work_mem:
sudo -u postgres psql -c "ALTER SYSTEM SET work_mem = '256MB';"
sudo systemctl restart postgresql
3. Run the restore
4. Start the agents again:
sudo systemctl start nessus-agent
sudo systemctl start elastic-agent
OPTION B - Permanent solution:
-------------------------------------------
1. Increase work_mem to 256 MB (instead of 64 MB)
2. Optionally increase maintenance_work_mem to 4 GB (instead of 2 GB)
3. If possible: move the monitoring agents to a dedicated server
SQL commands:
ALTER SYSTEM SET work_mem = '256MB';
ALTER SYSTEM SET maintenance_work_mem = '4GB';
-- Then restart PostgreSQL
OPTION C - If no config change is possible:
-------------------------------------------
• Run the restore with --profile=conservative (reduces memory pressure)
dbbackup restore cluster backup.tar.gz --profile=conservative --confirm
• Or use TUI mode (automatically uses the conservative profile):
dbbackup interactive
• Disable monitoring during the restore window
═══════════════════════════════════════════════════════════
DETAILED REPORT
═══════════════════════════════════════════════════════════
The full diagnostic report is attached and can be regenerated at any
time with this script:
/path/to/diagnose_postgres_memory.sh
The script analyzes:
• System memory usage
• PostgreSQL configuration
• Lock usage
• Temp file usage
• Blocking queries
• Shared memory segments
═══════════════════════════════════════════════════════════
Our preference would be Option B (a permanent work_mem increase), so that
future large restores run through without manual intervention.
Please let us know which option you will implement, or whether you need
any further information.
Thanks & regards
[Your Name]
---
Attachment: diagnose_postgres_memory.sh (if not already present)
Error log: /a01/dba/tmp/dbbackup-restore-debug-20260119-221730.json

go.mod (20 changed lines)
View File

@ -13,19 +13,25 @@ require (
github.com/aws/aws-sdk-go-v2/credentials v1.19.2
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.12
github.com/aws/aws-sdk-go-v2/service/s3 v1.92.1
github.com/cenkalti/backoff/v4 v4.3.0
github.com/charmbracelet/bubbles v0.21.0
github.com/charmbracelet/bubbletea v1.3.10
github.com/charmbracelet/lipgloss v1.1.0
github.com/dustin/go-humanize v1.0.1
github.com/fatih/color v1.18.0
github.com/go-sql-driver/mysql v1.9.3
github.com/hashicorp/go-multierror v1.1.1
github.com/jackc/pgx/v5 v5.7.6
-	github.com/mattn/go-sqlite3 v1.14.32
github.com/klauspost/pgzip v1.2.6
github.com/schollz/progressbar/v3 v3.19.0
github.com/shirou/gopsutil/v3 v3.24.5
github.com/sirupsen/logrus v1.9.3
github.com/spf13/afero v1.15.0
github.com/spf13/cobra v1.10.1
github.com/spf13/pflag v1.0.9
golang.org/x/crypto v0.43.0
google.golang.org/api v0.256.0
modernc.org/sqlite v1.44.3
)
require (
@ -57,7 +63,6 @@ require (
github.com/aws/aws-sdk-go-v2/service/sts v1.41.2 // indirect
github.com/aws/smithy-go v1.23.2 // indirect
github.com/aymanbagabas/go-osc52/v2 v2.0.1 // indirect
-	github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/charmbracelet/colorprofile v0.2.3-0.20250311203215-f60798e515dc // indirect
github.com/charmbracelet/x/ansi v0.10.1 // indirect
@ -67,7 +72,6 @@ require (
github.com/envoyproxy/go-control-plane/envoy v1.32.4 // indirect
github.com/envoyproxy/protoc-gen-validate v1.2.1 // indirect
github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f // indirect
-	github.com/fatih/color v1.18.0 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/go-jose/go-jose/v4 v4.1.2 // indirect
github.com/go-logr/logr v1.4.3 // indirect
@ -78,13 +82,11 @@ require (
github.com/googleapis/enterprise-certificate-proxy v0.3.7 // indirect
github.com/googleapis/gax-go/v2 v2.15.0 // indirect
github.com/hashicorp/errwrap v1.0.0 // indirect
-	github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jackc/pgpassfile v1.0.0 // indirect
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
github.com/jackc/puddle/v2 v2.2.2 // indirect
github.com/klauspost/compress v1.18.3 // indirect
-	github.com/klauspost/pgzip v1.2.6 // indirect
github.com/lucasb-eyer/go-colorful v1.2.0 // indirect
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
@ -95,11 +97,11 @@ require (
github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6 // indirect
github.com/muesli/cancelreader v0.2.2 // indirect
github.com/muesli/termenv v0.16.0 // indirect
github.com/ncruces/go-strftime v1.0.0 // indirect
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 // indirect
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c // indirect
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
github.com/rivo/uniseg v0.4.7 // indirect
-	github.com/schollz/progressbar/v3 v3.19.0 // indirect
-	github.com/spf13/afero v1.15.0 // indirect
github.com/spiffe/go-spiffe/v2 v2.5.0 // indirect
github.com/tklauser/go-sysconf v0.3.12 // indirect
github.com/tklauser/numcpus v0.6.1 // indirect
@ -115,6 +117,7 @@ require (
go.opentelemetry.io/otel/sdk v1.37.0 // indirect
go.opentelemetry.io/otel/sdk/metric v1.37.0 // indirect
go.opentelemetry.io/otel/trace v1.37.0 // indirect
golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 // indirect
golang.org/x/net v0.46.0 // indirect
golang.org/x/oauth2 v0.33.0 // indirect
golang.org/x/sync v0.19.0 // indirect
@ -127,4 +130,7 @@ require (
google.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101 // indirect
google.golang.org/grpc v1.76.0 // indirect
google.golang.org/protobuf v1.36.10 // indirect
modernc.org/libc v1.67.6 // indirect
modernc.org/mathutil v1.7.1 // indirect
modernc.org/memory v1.11.0 // indirect
)

go.sum (50 changed lines)
View File

@ -102,6 +102,8 @@ github.com/charmbracelet/x/cellbuf v0.0.13-0.20250311204145-2c3ea96c31dd h1:vy0G
github.com/charmbracelet/x/cellbuf v0.0.13-0.20250311204145-2c3ea96c31dd/go.mod h1:xe0nKWGd3eJgtqZRaN9RjMtK7xUYchjzPr7q6kcvCCs=
github.com/charmbracelet/x/term v0.2.1 h1:AQeHeLZ1OqSXhrAWpYUtZyX1T3zVxfpZuEQMIQaGIAQ=
github.com/charmbracelet/x/term v0.2.1/go.mod h1:oQ4enTYFV7QN4m0i9mzHrViD7TQKvNEEkHUMCmsxdUg=
github.com/chengxilo/virtualterm v1.0.4 h1:Z6IpERbRVlfB8WkOmtbHiDbBANU7cimRIof7mk9/PwM=
github.com/chengxilo/virtualterm v1.0.4/go.mod h1:DyxxBZz/x1iqJjFxTFcr6/x+jSpqN0iwWCOK1q10rlY=
github.com/cncf/xds/go v0.0.0-20250501225837-2ac532fd4443 h1:aQ3y1lwWyqYPiWZThqv1aFbZMiM9vblcSArJRf2Irls=
github.com/cncf/xds/go v0.0.0-20250501225837-2ac532fd4443/go.mod h1:W+zGtBO5Y1IgJhy4+A9GOqVhqLpfZi+vwmdNXUehLA8=
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
@ -145,6 +147,8 @@ github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/martian/v3 v3.3.3 h1:DIhPTQrbPkgs2yJYdXU/eNACCG5DVQjySNRNlflZ9Fc=
github.com/google/martian/v3 v3.3.3/go.mod h1:iEPrYcgCF7jA9OtScMFQyAlZZ4YXTKEtJ1E6RWzmBA0=
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e h1:ijClszYn+mADRFY17kjQEVQ1XRhq2/JR1M3sGqeJoxs=
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e/go.mod h1:boTsfXsheKC2y+lKOCMpSfarhxDeIzfZG1jqGcPl3cA=
github.com/google/s2a-go v0.1.9 h1:LGD7gtMgezd8a/Xak7mEWL0PjoTQFvpRudN895yqKW0=
github.com/google/s2a-go v0.1.9/go.mod h1:YA0Ei2ZQL3acow2O62kdp9UlnvMmU7kA6Eutn0dXayM=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
@ -157,6 +161,8 @@ github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/U
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
@ -186,8 +192,6 @@ github.com/mattn/go-localereader v0.0.1 h1:ygSAOl7ZXTx4RdPYinUpg6W99U8jWvWi9Ye2J
github.com/mattn/go-localereader v0.0.1/go.mod h1:8fBrzywKY7BI3czFoHkuzRoWE9C+EiG4R1k4Cjx5p88=
github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6TULQc=
github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/mattn/go-sqlite3 v1.14.32 h1:JD12Ag3oLy1zQA+BNn74xRgaBbdhbNIDYvQUEuuErjs=
github.com/mattn/go-sqlite3 v1.14.32/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db h1:62I3jR2EmQ4l5rM/4FEfDWcRD+abF5XlKShorW5LRoQ=
github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db/go.mod h1:l0dey0ia/Uv7NcFFVbCLtqEBQbrT4OCwCSKTEv6enCw=
github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6 h1:ZK8zHtRHOkbHy6Mmr5D264iyp3TiX5OmNcI5cIARiQI=
@ -196,6 +200,8 @@ github.com/muesli/cancelreader v0.2.2 h1:3I4Kt4BQjOR54NavqnDogx/MIoWBFa0StPA8ELU
github.com/muesli/cancelreader v0.2.2/go.mod h1:3XuTXfFS2VjM+HTLZY9Ak0l6eUKfijIfMUZ4EgX0QYo=
github.com/muesli/termenv v0.16.0 h1:S5AlUN9dENB57rsbnkPyfdGuWIlkmzJjbFf0Tf5FWUc=
github.com/muesli/termenv v0.16.0/go.mod h1:ZRfOIKPFDYQoDFF4Olj7/QJbW60Ol/kL1pU3VfY/Cnk=
github.com/ncruces/go-strftime v1.0.0 h1:HMFp8mLCTPp341M/ZnA4qaf7ZlsbTc+miZjCLOFAw7w=
github.com/ncruces/go-strftime v1.0.0/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c h1:+mdjkGKdHQG3305AYmdv1U2eRNDiU2ErMBj1gwrq8eQ=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c/go.mod h1:7rwL4CYBLnjLxUqIJNnCWiEdr3bn6IUYi15bNlnbCCU=
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 h1:GFCKgmp0tecUJ0sJuv4pzYCqS9+RGSn52M3FUwPs+uo=
@ -205,6 +211,8 @@ github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRI
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c h1:ncq/mPwQF4JjgDlrVEn3C11VoGHZN7m8qihwgMEtzYw=
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=
github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
@ -260,14 +268,14 @@ go.opentelemetry.io/otel/trace v1.37.0 h1:HLdcFNbRQBE2imdSEgm/kwqmQj1Or1l/7bW6mx
go.opentelemetry.io/otel/trace v1.37.0/go.mod h1:TlgrlQ+PtQO5XFerSPUYG0JSgGyryXewPGyayAWSBS0=
golang.org/x/crypto v0.43.0 h1:dduJYIi3A3KOfdGOHX8AVZ/jGiyPa3IbBozJ5kNuE04=
golang.org/x/crypto v0.43.0/go.mod h1:BFbav4mRNlXJL4wNeejLpWxB7wMbc79PdRGhWKncxR0=
golang.org/x/exp v0.0.0-20220909182711-5c715a9e8561 h1:MDc5xs78ZrZr3HMQugiXOAkSZtfTpbJLDr/lwfgO53E=
golang.org/x/exp v0.0.0-20220909182711-5c715a9e8561/go.mod h1:cyybsKvd6eL0RnXn6p/Grxp8F5bW7iYuBgsNCOHpMYE=
golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 h1:mgKeJMpvi0yx/sU5GsxQ7p6s2wtOnGAHZWCHUM4KGzY=
golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546/go.mod h1:j/pmGrbnkbPtQfxEe5D0VQhZC6qKbfKifgD0oM7sR70=
golang.org/x/mod v0.29.0 h1:HV8lRxZC4l2cr3Zq1LvtOsi/ThTgWnUk/y64QSs8GwA=
golang.org/x/mod v0.29.0/go.mod h1:NyhrlYXJ2H4eJiRy/WDBO6HMqZQ6q9nk4JzS3NuCK+w=
golang.org/x/net v0.46.0 h1:giFlY12I07fugqwPuWJi68oOnpfqFnJIJzaIIm2JVV4=
golang.org/x/net v0.46.0/go.mod h1:Q9BGdFy1y4nkUwiLvT5qtyhAnEHgnQ/zd8PfU6nc210=
golang.org/x/oauth2 v0.33.0 h1:4Q+qn+E5z8gPRJfmRy7C2gGG3T4jIprK6aSYgTXGRpo=
golang.org/x/oauth2 v0.33.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@ -286,6 +294,8 @@ golang.org/x/text v0.30.0 h1:yznKA/E9zq54KzlzBEAWn1NXSQ8DIp/NYMy88xJjl4k=
golang.org/x/text v0.30.0/go.mod h1:yDdHFIX9t+tORqspjENWgzaCVXgk0yYnYuSZ8UzzBVM=
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
golang.org/x/tools v0.38.0 h1:Hx2Xv8hISq8Lm16jvBZ2VQf+RLmbd7wVUsALibYI/IQ=
golang.org/x/tools v0.38.0/go.mod h1:yEsQ/d/YK8cjh0L6rZlY8tgtlKiBNTL14pGDJPJpYQs=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk=
gonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E=
@ -305,3 +315,31 @@ gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
modernc.org/cc/v4 v4.27.1 h1:9W30zRlYrefrDV2JE2O8VDtJ1yPGownxciz5rrbQZis=
modernc.org/cc/v4 v4.27.1/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0=
modernc.org/ccgo/v4 v4.30.1 h1:4r4U1J6Fhj98NKfSjnPUN7Ze2c6MnAdL0hWw6+LrJpc=
modernc.org/ccgo/v4 v4.30.1/go.mod h1:bIOeI1JL54Utlxn+LwrFyjCx2n2RDiYEaJVSrgdrRfM=
modernc.org/fileutil v1.3.40 h1:ZGMswMNc9JOCrcrakF1HrvmergNLAmxOPjizirpfqBA=
modernc.org/fileutil v1.3.40/go.mod h1:HxmghZSZVAz/LXcMNwZPA/DRrQZEVP9VX0V4LQGQFOc=
modernc.org/gc/v2 v2.6.5 h1:nyqdV8q46KvTpZlsw66kWqwXRHdjIlJOhG6kxiV/9xI=
modernc.org/gc/v2 v2.6.5/go.mod h1:YgIahr1ypgfe7chRuJi2gD7DBQiKSLMPgBQe9oIiito=
modernc.org/gc/v3 v3.1.1 h1:k8T3gkXWY9sEiytKhcgyiZ2L0DTyCQ/nvX+LoCljoRE=
modernc.org/gc/v3 v3.1.1/go.mod h1:HFK/6AGESC7Ex+EZJhJ2Gni6cTaYpSMmU/cT9RmlfYY=
modernc.org/goabi0 v0.2.0 h1:HvEowk7LxcPd0eq6mVOAEMai46V+i7Jrj13t4AzuNks=
modernc.org/goabi0 v0.2.0/go.mod h1:CEFRnnJhKvWT1c1JTI3Avm+tgOWbkOu5oPA8eH8LnMI=
modernc.org/libc v1.67.6 h1:eVOQvpModVLKOdT+LvBPjdQqfrZq+pC39BygcT+E7OI=
modernc.org/libc v1.67.6/go.mod h1:JAhxUVlolfYDErnwiqaLvUqc8nfb2r6S6slAgZOnaiE=
modernc.org/mathutil v1.7.1 h1:GCZVGXdaN8gTqB1Mf/usp1Y/hSqgI2vAGGP4jZMCxOU=
modernc.org/mathutil v1.7.1/go.mod h1:4p5IwJITfppl0G4sUEDtCr4DthTaT47/N3aT6MhfgJg=
modernc.org/memory v1.11.0 h1:o4QC8aMQzmcwCK3t3Ux/ZHmwFPzE6hf2Y5LbkRs+hbI=
modernc.org/memory v1.11.0/go.mod h1:/JP4VbVC+K5sU2wZi9bHoq2MAkCnrt2r98UGeSK7Mjw=
modernc.org/opt v0.1.4 h1:2kNGMRiUjrp4LcaPuLY2PzUfqM/w9N23quVwhKt5Qm8=
modernc.org/opt v0.1.4/go.mod h1:03fq9lsNfvkYSfxrfUhZCWPk1lm4cq4N+Bh//bEtgns=
modernc.org/sortutil v1.2.1 h1:+xyoGf15mM3NMlPDnFqrteY07klSFxLElE2PVuWIJ7w=
modernc.org/sortutil v1.2.1/go.mod h1:7ZI3a3REbai7gzCLcotuw9AC4VZVpYMjDzETGsSMqJE=
modernc.org/sqlite v1.44.3 h1:+39JvV/HWMcYslAwRxHb8067w+2zowvFOUrOWIy9PjY=
modernc.org/sqlite v1.44.3/go.mod h1:CzbrU2lSB1DKUusvwGz7rqEKIq+NUd8GWuBBZDs9/nA=
modernc.org/strutil v1.2.1 h1:UneZBkQA+DX2Rp35KcM69cSsNES9ly8mQWD71HKlOA0=
modernc.org/strutil v1.2.1/go.mod h1:EHkiggD70koQxjVdSBM3JKM7k6L0FbGE5eymy9i3B9A=
modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y=
modernc.org/token v1.1.0/go.mod h1:UGzOrNV1mAFSEB63lOFHIpNRUVMvYTc6yu1SMY/XTDM=

View File

@ -11,7 +11,7 @@ import (
"strings"
"time"
_ "github.com/mattn/go-sqlite3"
_ "modernc.org/sqlite" // Pure Go SQLite driver (no CGO required)
)
// SQLiteCatalog implements Catalog interface with SQLite storage
@ -28,7 +28,7 @@ func NewSQLiteCatalog(dbPath string) (*SQLiteCatalog, error) {
return nil, fmt.Errorf("failed to create catalog directory: %w", err)
}
db, err := sql.Open("sqlite3", dbPath+"?_journal_mode=WAL&_foreign_keys=ON")
db, err := sql.Open("sqlite", dbPath+"?_journal_mode=WAL&_foreign_keys=ON")
if err != nil {
return nil, fmt.Errorf("failed to open catalog database: %w", err)
}

View File

@ -8,7 +8,7 @@ import (
"strings"
"time"
_ "github.com/mattn/go-sqlite3" // SQLite driver
_ "modernc.org/sqlite" // Pure Go SQLite driver (no CGO required)
)
// ChunkIndex provides fast chunk lookups using SQLite
@ -32,7 +32,7 @@ func NewChunkIndexAt(dbPath string) (*ChunkIndex, error) {
}
// Add busy_timeout to handle lock contention gracefully
db, err := sql.Open("sqlite3", dbPath+"?_journal_mode=WAL&_synchronous=NORMAL&_busy_timeout=5000")
db, err := sql.Open("sqlite", dbPath+"?_journal_mode=WAL&_synchronous=NORMAL&_busy_timeout=5000")
if err != nil {
return nil, fmt.Errorf("failed to open chunk index: %w", err)
}

View File

@ -12,6 +12,7 @@ import (
"os"
"path/filepath"
"sync"
"time"
)
// ChunkStore manages content-addressed chunk storage
@ -148,9 +149,28 @@ func (s *ChunkStore) Put(chunk *Chunk) (isNew bool, err error) {
return false, fmt.Errorf("failed to write chunk: %w", err)
}
-	if err := os.Rename(tmpPath, path); err != nil {
+	// Ensure shard directory exists before rename (CIFS/SMB compatibility)
+	// Network filesystems can have stale directory caches that cause
+	// "no such file or directory" errors on rename even when the dir exists
+	if err := os.MkdirAll(filepath.Dir(path), 0700); err != nil {
		os.Remove(tmpPath)
-		return false, fmt.Errorf("failed to commit chunk: %w", err)
+		return false, fmt.Errorf("failed to ensure chunk directory: %w", err)
	}
+	// Rename with retry for CIFS/SMB flakiness
+	var renameErr error
+	for attempt := 0; attempt < 3; attempt++ {
+		if renameErr = os.Rename(tmpPath, path); renameErr == nil {
+			break
+		}
+		// Brief pause before retry on network filesystems
+		time.Sleep(10 * time.Millisecond)
+		// Re-ensure directory exists (refresh CIFS cache)
+		os.MkdirAll(filepath.Dir(path), 0700)
+	}
+	if renameErr != nil {
+		os.Remove(tmpPath)
+		return false, fmt.Errorf("failed to commit chunk: %w", renameErr)
+	}
// Update cache

View File

@ -1,77 +0,0 @@
# dbbackup v3.42.77
## 🎯 New Feature: Single Database Extraction from Cluster Backups
Extract and restore individual databases from cluster backups without full cluster restoration!
### 🆕 New Flags
- **`--list-databases`**: List all databases in cluster backup with sizes
- **`--database <name>`**: Extract/restore a single database from cluster
- **`--databases "db1,db2,db3"`**: Extract multiple databases (comma-separated)
- **`--output-dir <path>`**: Extract to directory without restoring
- **`--target <name>`**: Rename database during restore
### 📖 Examples
```bash
# List databases in cluster backup
dbbackup restore cluster backup.tar.gz --list-databases
# Extract single database (no restore)
dbbackup restore cluster backup.tar.gz --database myapp --output-dir /tmp/extract
# Restore single database from cluster
dbbackup restore cluster backup.tar.gz --database myapp --confirm
# Restore with different name (testing)
dbbackup restore cluster backup.tar.gz --database myapp --target myapp_test --confirm
# Extract multiple databases
dbbackup restore cluster backup.tar.gz --databases "app1,app2,app3" --output-dir /tmp/extract
```
### 💡 Use Cases
**Selective disaster recovery** - restore only affected databases
**Database migration** - copy databases between clusters
**Testing workflows** - restore with different names
**Faster restores** - extract only what you need
**Less disk space** - no need to extract entire cluster
### ⚙️ Technical Details
- Stream-based extraction with progress feedback
- Fast cluster archive scanning (no full extraction needed)
- Works with all cluster backup formats (.tar.gz)
- Compatible with existing cluster restore workflow
- Automatic format detection for extracted dumps
### 🖥️ TUI Support (Interactive Mode)
**New in this release**: Press **`s`** key when viewing a cluster backup to select individual databases!
- Navigate cluster backups in TUI and press `s` for database selection
- Interactive database picker with size information
- Visual selection confirmation before restore
- Seamless integration with existing TUI workflows
**TUI Workflow:**
1. Launch TUI: `dbbackup` (no arguments)
2. Navigate to "Restore" → "Single Database"
3. Select cluster backup archive
4. Press `s` to show database list
5. Select database and confirm restore
## 📦 Installation
Download the binary for your platform below and make it executable:
```bash
chmod +x dbbackup_*
./dbbackup_* --version
```
## 🔍 Checksums
SHA256 checksums in `checksums.txt`.
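A standard way to verify a downloaded binary against that file (stock GNU coreutils, nothing dbbackup-specific):
```bash
# Compare the downloaded binary's SHA-256 against the published list
sha256sum -c checksums.txt --ignore-missing
```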