Compare commits

...

10 Commits

Author SHA1 Message Date
ee1f58efdb chore: ignore bin/ directory to prevent repository bloat
Some checks failed
CI/CD / Test (push) Failing after 4s
CI/CD / Generate SBOM (push) Has been skipped
CI/CD / Lint (push) Failing after 4s
CI/CD / Build (darwin-amd64) (push) Has been skipped
CI/CD / Build (linux-amd64) (push) Has been skipped
CI/CD / Build (darwin-arm64) (push) Has been skipped
CI/CD / Build (linux-arm64) (push) Has been skipped
CI/CD / Release (push) Has been skipped
CI/CD / Build & Push Docker Image (push) Has been skipped
CI/CD / Mirror to GitHub (push) Has been skipped
2026-01-06 13:39:47 +01:00
5959d7313d fix(diagnose): add debug logging for WorkDir usage 2026-01-06 12:34:00 +01:00
b856d8b3f8 feat(tui): add Work Directory setting for large archive operations
- Added WorkDir to Config for custom temp directory
- TUI Settings: new 'Work Directory' option to set alternative temp location
- Restore Preview: press 'w' to toggle work directory (uses backup dir as default)
- Diagnose View: now uses configured WorkDir for cluster extraction
- Config persistence: WorkDir saved to .dbbackup.conf

This fixes diagnosis/restore failures when /tmp is too small for large archives.
Use cases: servers with limited /tmp, 70GB+ archives needing 280GB+ extraction space.
2026-01-06 11:11:22 +01:00
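A minimal sketch of the persisted setting, assuming the `work_dir` key shown in the local-config diff further down (the path is illustrative):

```bash
# Hand-edit equivalent of the TUI 'Work Directory' setting (illustrative path)
echo "work_dir = /mnt/bigdisk/dbbackup-work" >> .dbbackup.conf
```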
886aa4810a fix(diagnose): improve cluster archive diagnosis error handling
- Better error messages when tar extraction fails
- Detect truncated/corrupted archives without full extraction
- Show archive contents even when extraction fails
- Provide helpful hints for disk space and corruption issues
- Exit status 2 from tar now shows detailed diagnostics
2026-01-06 10:42:38 +01:00
14bd1f848c feat(tui): add Diagnose Backup File option to interactive menu
- Added 'Diagnose Backup File' as menu option in TUI
- Archive browser now supports 'diagnose' mode
- Allows users to run deep diagnosis on backups before restore
- Helps identify truncation/corruption issues in large backups
2026-01-06 09:44:22 +01:00
4c171c0e44 v3.40.0: Restore diagnostics and error reporting
Features:
- restore diagnose command for backup file analysis
- Deep COPY block verification for truncated dump detection
- PGDMP signature and gzip integrity validation
- Detailed error reports with --save-debug-log flag
- Ring buffer stderr capture (prevents OOM on 2M+ errors)
- Error classification with actionable recommendations

TUI Enhancements:
- Automatic dump validity safety check before restore
- Press 'd' in archive browser to diagnose backups
- Press 'd' in restore preview for debug log toggle
- Debug logs saved to /tmp on failure when enabled

Documentation:
- Updated README with diagnose command and examples
- Updated CHANGELOG with full feature list
- Updated restore preview screenshots
2026-01-05 15:17:54 +01:00
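A short usage sketch of the workflow described above, using commands and flags listed in this release (file and log paths are illustrative):

```bash
# Check a dump before restoring; keep a JSON error report if the restore still fails
dbbackup restore diagnose mydb_20260105.dump.gz --deep
dbbackup restore single mydb_20260105.dump.gz --target mydb \
  --save-debug-log /tmp/restore-debug.json --confirm
```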
e7f0a9f5eb docs: update documentation to match current CLI syntax
- AZURE.md, GCS.md: Replace 'backup postgres' with 'backup single'
- AZURE.md, GCS.md: Replace 'restore postgres --source' with proper syntax
- AZURE.md, GCS.md: Remove non-existent --output, --source flags
- VEEAM_ALTERNATIVE.md: Fix command examples and broken link
- CONTRIBUTING.md: Remove RELEASE_NOTES step from release process
- CHANGELOG.md: Remove reference to deleted file
- Remove RELEASE_NOTES_v3.1.md (content is in CHANGELOG.md)
2026-01-05 12:41:18 +01:00
2e942f04a4 docs: remove undocumented --notify flag from README
The --notify CLI flag was documented but not implemented.
Notifications are configured via environment variables only.
2026-01-05 12:35:33 +01:00
f29e6fe102 docs: fix MYSQL_PITR.md - remove non-existent --pitr flag
Regular backups already capture binlog position automatically when
PITR is enabled at the MySQL level. No special flag needed.
2025-12-15 15:12:50 +01:00
51fc570fc7 chore: bump version to 3.2.0 across all files 2025-12-15 15:09:34 +01:00
27 changed files with 2607 additions and 566 deletions

.gitignore

@@ -9,10 +9,11 @@ logs/
*.trace
*.err
# Ignore built binaries in root (keep bin/ directory for releases)
# Ignore built binaries (built fresh via build_all.sh on release)
/dbbackup
/dbbackup_*
!dbbackup.png
bin/
# Ignore development artifacts
*.swp

AZURE.md

@@ -28,21 +28,16 @@ This guide covers using **Azure Blob Storage** with `dbbackup` for secure, scala
```bash
# Backup PostgreSQL to Azure
dbbackup backup postgres \
--host localhost \
--database mydb \
--output backup.sql \
--cloud "azure://mycontainer/backups/db.sql?account=myaccount&key=ACCOUNT_KEY"
dbbackup backup single mydb \
--cloud "azure://mycontainer/backups/?account=myaccount&key=ACCOUNT_KEY"
```
### 3. Restore from Azure
```bash
# Restore from Azure backup
dbbackup restore postgres \
--source "azure://mycontainer/backups/db.sql?account=myaccount&key=ACCOUNT_KEY" \
--host localhost \
--database mydb_restored
# Download backup from Azure and restore
dbbackup cloud download "azure://mycontainer/backups/mydb.dump.gz?account=myaccount&key=ACCOUNT_KEY" ./mydb.dump.gz
dbbackup restore single ./mydb.dump.gz --target mydb_restored --confirm
```
## URI Syntax
@@ -99,7 +94,7 @@ export AZURE_STORAGE_ACCOUNT="myaccount"
export AZURE_STORAGE_KEY="YOUR_ACCOUNT_KEY"
# Use simplified URI (credentials from environment)
dbbackup backup postgres --cloud "azure://container/path/backup.sql"
dbbackup backup single mydb --cloud "azure://container/path/"
```
### Method 3: Connection String
@@ -109,7 +104,7 @@ Use Azure connection string:
```bash
export AZURE_STORAGE_CONNECTION_STRING="DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=YOUR_KEY;EndpointSuffix=core.windows.net"
dbbackup backup postgres --cloud "azure://container/path/backup.sql"
dbbackup backup single mydb --cloud "azure://container/path/"
```
### Getting Your Account Key
@@ -196,11 +191,8 @@ Configure automatic tier transitions:
```bash
# PostgreSQL backup with automatic Azure upload
dbbackup backup postgres \
--host localhost \
--database production_db \
--output /backups/db.sql \
--cloud "azure://prod-backups/postgres/$(date +%Y%m%d_%H%M%S).sql?account=myaccount&key=KEY" \
dbbackup backup single production_db \
--cloud "azure://prod-backups/postgres/?account=myaccount&key=KEY" \
--compression 6
```
@@ -208,10 +200,7 @@ dbbackup backup postgres \
```bash
# Backup entire PostgreSQL cluster to Azure
dbbackup backup postgres \
--host localhost \
--all-databases \
--output-dir /backups \
dbbackup backup cluster \
--cloud "azure://prod-backups/postgres/cluster/?account=myaccount&key=KEY"
```
@@ -257,13 +246,9 @@ dbbackup cleanup "azure://prod-backups/postgres/?account=myaccount&key=KEY" --ke
#!/bin/bash
# Azure backup script (run via cron)
DATE=$(date +%Y%m%d_%H%M%S)
AZURE_URI="azure://prod-backups/postgres/${DATE}.sql?account=myaccount&key=${AZURE_STORAGE_KEY}"
AZURE_URI="azure://prod-backups/postgres/?account=myaccount&key=${AZURE_STORAGE_KEY}"
dbbackup backup postgres \
--host localhost \
--database production_db \
--output /tmp/backup.sql \
dbbackup backup single production_db \
--cloud "${AZURE_URI}" \
--compression 9
@@ -289,35 +274,25 @@ For large files (>256MB), dbbackup automatically uses Azure Block Blob staging:
```bash
# Large database backup (automatically uses block blob)
dbbackup backup postgres \
--host localhost \
--database huge_db \
--output /backups/huge.sql \
--cloud "azure://backups/huge.sql?account=myaccount&key=KEY"
dbbackup backup single huge_db \
--cloud "azure://backups/?account=myaccount&key=KEY"
```
### Progress Tracking
```bash
# Backup with progress display
dbbackup backup postgres \
--host localhost \
--database mydb \
--output backup.sql \
--cloud "azure://backups/backup.sql?account=myaccount&key=KEY" \
--progress
dbbackup backup single mydb \
--cloud "azure://backups/?account=myaccount&key=KEY"
```
### Concurrent Operations
```bash
# Backup multiple databases in parallel
dbbackup backup postgres \
--host localhost \
--all-databases \
--output-dir /backups \
# Backup cluster with parallel jobs
dbbackup backup cluster \
--cloud "azure://backups/cluster/?account=myaccount&key=KEY" \
--parallelism 4
--jobs 4
```
### Custom Metadata
@@ -365,11 +340,8 @@ Endpoint: http://localhost:10000/devstoreaccount1
```bash
# Backup to Azurite
dbbackup backup postgres \
--host localhost \
--database testdb \
--output test.sql \
--cloud "azure://test-backups/test.sql?endpoint=http://localhost:10000&account=devstoreaccount1&key=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
dbbackup backup single testdb \
--cloud "azure://test-backups/?endpoint=http://localhost:10000&account=devstoreaccount1&key=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
```
### Run Integration Tests
@@ -492,8 +464,8 @@ Tests include:
Enable debug mode:
```bash
dbbackup backup postgres \
--cloud "azure://container/backup.sql?account=myaccount&key=KEY" \
dbbackup backup single mydb \
--cloud "azure://container/?account=myaccount&key=KEY" \
--debug
```

CHANGELOG.md

@@ -5,6 +5,101 @@ All notable changes to dbbackup will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [3.40.0] - 2026-01-05 "The Diagnostician"
### Added - 🔍 Restore Diagnostics & Error Reporting
**Backup Diagnosis Command:**
- `restore diagnose <archive>` - Deep analysis of backup files before restore
- Detects truncated dumps, corrupted archives, incomplete COPY blocks
- PGDMP signature validation for PostgreSQL custom format
- Gzip integrity verification with decompression test
- `pg_restore --list` validation for custom format archives
- `--deep` flag for exhaustive line-by-line analysis
- `--json` flag for machine-readable output
- Cluster archive diagnosis scans all contained dumps
**Detailed Error Reporting:**
- Comprehensive error collector captures stderr during restore
- Ring buffer prevents OOM on high-error restores (2M+ errors)
- Error classification with actionable hints and recommendations
- `--save-debug-log <path>` saves JSON report on failure
- Reports include: exit codes, last errors, line context, tool versions
- Automatic recommendations based on error patterns
**TUI Restore Enhancements:**
- **Dump validity** safety check runs automatically before restore
- Detects truncated/corrupted backups in restore preview
- Press **`d`** to toggle debug log saving in Advanced Options
- Debug logs saved to `/tmp/dbbackup-restore-debug-*.json` on failure
- Press **`d`** in archive browser to run diagnosis on any backup
**New Commands:**
- `restore diagnose` - Analyze backup file integrity and structure
**New Flags:**
- `--save-debug-log <path>` - Save detailed JSON error report on failure
- `--diagnose` - Run deep diagnosis before cluster restore
- `--deep` - Enable exhaustive diagnosis (line-by-line analysis)
- `--json` - Output diagnosis in JSON format
- `--keep-temp` - Keep temporary files after diagnosis
- `--verbose` - Show detailed diagnosis progress
### Technical Details
- 1,200+ lines of new diagnostic code
- Error classification system with 15+ error patterns
- Ring buffer stderr capture (1MB max, 10K lines)
- Zero memory growth on high-error restores
- Full TUI integration for diagnostics
---
## [3.2.0] - 2025-12-13 "The Margin Eraser"
### Added - 🚀 Physical Backup Revolution
**MySQL Clone Plugin Integration:**
- Native physical backup using MySQL 8.0.17+ Clone Plugin
- No XtraBackup dependency - pure Go implementation
- Real-time progress monitoring via performance_schema
- Support for both local and remote clone operations
**Filesystem Snapshot Orchestration:**
- LVM snapshot support with automatic cleanup
- ZFS snapshot integration with send/receive
- Btrfs subvolume snapshot support
- Brief table lock (<100ms) for consistency
- Automatic snapshot backend detection
**Continuous Binlog Streaming:**
- Real-time binlog capture using MySQL replication protocol
- Multiple targets: file, compressed file, S3 direct streaming
- Sub-second RPO without impacting database server
- Automatic position tracking and checkpointing
**Parallel Cloud Streaming:**
- Direct database-to-S3 streaming (zero local storage)
- Configurable worker pool for parallel uploads
- S3 multipart upload with automatic retry
- Support for S3, GCS, and Azure Blob Storage
**Smart Engine Selection:**
- Automatic engine selection based on environment
- MySQL version detection and capability checking
- Filesystem type detection for optimal snapshot backend
- Database size-based recommendations
**New Commands:**
- `engine list` - List available backup engines
- `engine info <name>` - Show detailed engine information
- `backup --engine=<name>` - Use specific backup engine
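A minimal usage sketch of these commands; note that the changelog only states `backup --engine=<name>`, so combining `--engine` with `backup single` is an assumption:

```bash
dbbackup engine list
dbbackup engine info clone
# Assumed composition with 'backup single' (see VEEAM_ALTERNATIVE.md below for --db-type usage)
dbbackup backup single mydb --db-type mysql --engine=clone
```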
### Technical Details
- 7,559 lines of new code
- Zero new external dependencies
- 10/10 platform builds successful
- Full test coverage for new engines
## [3.1.0] - 2025-11-26
### Added - 🔄 Point-in-Time Recovery (PITR)
@@ -117,7 +212,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Documentation
- Added comprehensive PITR.md guide (complete PITR documentation)
- Updated README.md with PITR section (200+ lines)
- Added RELEASE_NOTES_v3.1.md (full feature list)
- Updated CHANGELOG.md with v3.1.0 details
- Added NOTICE file for Apache License attribution
- Created comprehensive test suite (tests/pitr_complete_test.go - 700+ lines)

CONTRIBUTING.md

@@ -17,7 +17,7 @@ Be respectful, constructive, and professional in all interactions. We're buildin
**Bug Report Template:**
```
**Version:** dbbackup v3.1.0
**Version:** dbbackup v3.40.0
**OS:** Linux/macOS/BSD
**Database:** PostgreSQL 14 / MySQL 8.0 / MariaDB 10.6
**Command:** The exact command that failed
@@ -274,12 +274,11 @@ Fixes #56
1. Update version in `main.go`
2. Update `CHANGELOG.md`
3. Create release notes (`RELEASE_NOTES_vX.Y.Z.md`)
4. Commit: `git commit -m "Release vX.Y.Z"`
5. Tag: `git tag -a vX.Y.Z -m "Release vX.Y.Z"`
6. Push: `git push origin main vX.Y.Z`
7. Build binaries: `./build_all.sh`
8. Create GitHub Release with binaries
3. Commit: `git commit -m "Release vX.Y.Z"`
4. Tag: `git tag -a vX.Y.Z -m "Release vX.Y.Z"`
5. Push: `git push origin main vX.Y.Z`
6. Build binaries: `./build_all.sh`
7. Create GitHub Release with binaries
## Questions?

GCS.md

@@ -28,21 +28,16 @@ This guide covers using **Google Cloud Storage (GCS)** with `dbbackup` for secur
```bash
# Backup PostgreSQL to GCS (using ADC)
dbbackup backup postgres \
--host localhost \
--database mydb \
--output backup.sql \
--cloud "gs://mybucket/backups/db.sql"
dbbackup backup single mydb \
--cloud "gs://mybucket/backups/"
```
### 3. Restore from GCS
```bash
# Restore from GCS backup
dbbackup restore postgres \
--source "gs://mybucket/backups/db.sql" \
--host localhost \
--database mydb_restored
# Download backup from GCS and restore
dbbackup cloud download "gs://mybucket/backups/mydb.dump.gz" ./mydb.dump.gz
dbbackup restore single ./mydb.dump.gz --target mydb_restored --confirm
```
## URI Syntax
@@ -107,7 +102,7 @@ gcloud auth application-default login
gcloud auth activate-service-account --key-file=/path/to/key.json
# Use simplified URI (credentials from environment)
dbbackup backup postgres --cloud "gs://mybucket/backups/backup.sql"
dbbackup backup single mydb --cloud "gs://mybucket/backups/"
```
### Method 2: Service Account JSON
@@ -121,14 +116,14 @@ Download service account key from GCP Console:
**Use in URI:**
```bash
dbbackup backup postgres \
--cloud "gs://mybucket/backup.sql?credentials=/path/to/service-account.json"
dbbackup backup single mydb \
--cloud "gs://mybucket/?credentials=/path/to/service-account.json"
```
**Or via environment:**
```bash
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
dbbackup backup postgres --cloud "gs://mybucket/backup.sql"
dbbackup backup single mydb --cloud "gs://mybucket/"
```
### Method 3: Workload Identity (GKE)
@@ -147,7 +142,7 @@ metadata:
Then use ADC in your pod:
```bash
dbbackup backup postgres --cloud "gs://mybucket/backup.sql"
dbbackup backup single mydb --cloud "gs://mybucket/"
```
### Required IAM Permissions
@@ -250,11 +245,8 @@ gsutil mb -l eu gs://mybucket/
```bash
# PostgreSQL backup with automatic GCS upload
dbbackup backup postgres \
--host localhost \
--database production_db \
--output /backups/db.sql \
--cloud "gs://prod-backups/postgres/$(date +%Y%m%d_%H%M%S).sql" \
dbbackup backup single production_db \
--cloud "gs://prod-backups/postgres/" \
--compression 6
```
@@ -262,10 +254,7 @@ dbbackup backup postgres \
```bash
# Backup entire PostgreSQL cluster to GCS
dbbackup backup postgres \
--host localhost \
--all-databases \
--output-dir /backups \
dbbackup backup cluster \
--cloud "gs://prod-backups/postgres/cluster/"
```
@@ -314,13 +303,9 @@ dbbackup cleanup "gs://prod-backups/postgres/" --keep 7
#!/bin/bash
# GCS backup script (run via cron)
DATE=$(date +%Y%m%d_%H%M%S)
GCS_URI="gs://prod-backups/postgres/${DATE}.sql"
GCS_URI="gs://prod-backups/postgres/"
dbbackup backup postgres \
--host localhost \
--database production_db \
--output /tmp/backup.sql \
dbbackup backup single production_db \
--cloud "${GCS_URI}" \
--compression 9
@@ -360,35 +345,25 @@ For large files, dbbackup automatically uses GCS chunked upload:
```bash
# Large database backup (automatically uses chunked upload)
dbbackup backup postgres \
--host localhost \
--database huge_db \
--output /backups/huge.sql \
--cloud "gs://backups/huge.sql"
dbbackup backup single huge_db \
--cloud "gs://backups/"
```
### Progress Tracking
```bash
# Backup with progress display
dbbackup backup postgres \
--host localhost \
--database mydb \
--output backup.sql \
--cloud "gs://backups/backup.sql" \
--progress
dbbackup backup single mydb \
--cloud "gs://backups/"
```
### Concurrent Operations
```bash
# Backup multiple databases in parallel
dbbackup backup postgres \
--host localhost \
--all-databases \
--output-dir /backups \
# Backup cluster with parallel jobs
dbbackup backup cluster \
--cloud "gs://backups/cluster/" \
--parallelism 4
--jobs 4
```
### Custom Metadata
@@ -460,11 +435,8 @@ curl -X POST "http://localhost:4443/storage/v1/b?project=test-project" \
```bash
# Backup to fake-gcs-server
dbbackup backup postgres \
--host localhost \
--database testdb \
--output test.sql \
--cloud "gs://test-backups/test.sql?endpoint=http://localhost:4443/storage/v1"
dbbackup backup single testdb \
--cloud "gs://test-backups/?endpoint=http://localhost:4443/storage/v1"
```
### Run Integration Tests
@@ -593,8 +565,8 @@ Tests include:
Enable debug mode:
```bash
dbbackup backup postgres \
--cloud "gs://bucket/backup.sql" \
dbbackup backup single mydb \
--cloud "gs://bucket/" \
--debug
```

MYSQL_PITR.md

@@ -110,10 +110,12 @@ dbbackup pitr mysql-enable --archive-dir /backups/binlog_archive
### 3. Create a Base Backup
```bash
# Create a PITR-capable backup
dbbackup backup single mydb --pitr
# Create a backup - binlog position is automatically recorded
dbbackup backup single mydb
```
> **Note:** All backups automatically capture the current binlog position when PITR is enabled at the MySQL level. This position is stored in the backup metadata and used as the starting point for binlog replay during recovery.
### 4. Start Binlog Archiving
```bash

README.md

@@ -54,7 +54,7 @@ Download from [releases](https://git.uuxo.net/UUXO/dbbackup/releases):
```bash
# Linux x86_64
wget https://git.uuxo.net/UUXO/dbbackup/releases/download/v3.1.0/dbbackup-linux-amd64
wget https://git.uuxo.net/UUXO/dbbackup/releases/download/v3.40.0/dbbackup-linux-amd64
chmod +x dbbackup-linux-amd64
sudo mv dbbackup-linux-amd64 /usr/local/bin/dbbackup
```
@@ -161,11 +161,15 @@ Cluster Restore Options
Safety Checks
[OK] Archive integrity verified
[OK] Dump validity verified
[OK] Disk space: 140 GB available
[OK] Required tools found
[OK] Target database accessible
c: Toggle cleanup | Enter: Proceed | Esc: Cancel
Advanced Options
✗ Debug Log: false (press 'd' to toggle)
c: Toggle cleanup | d: Debug log | Enter: Proceed | Esc: Cancel
```
**Backup Manager:**
@@ -180,7 +184,7 @@ FILENAME FORMAT SIZE MODIFIED
[OK] myapp_prod_20250114.dump.gz PostgreSQL Custom 12.3 GB 2025-01-14
[!!] users_db_20241220.dump.gz PostgreSQL Custom 850 MB 2024-12-20
r: Restore | v: Verify | i: Info | d: Delete | R: Refresh | Esc: Back
r: Restore | v: Verify | i: Info | d: Diagnose | D: Delete | R: Refresh | Esc: Back
```
**Configuration Settings:**
@@ -240,6 +244,12 @@ dbbackup restore single backup.dump --target myapp_db --create --confirm
# Restore cluster
dbbackup restore cluster cluster_backup.tar.gz --confirm
# Restore with debug logging (saves detailed error report on failure)
dbbackup restore cluster backup.tar.gz --save-debug-log /tmp/restore-debug.json --confirm
# Diagnose backup before restore
dbbackup restore diagnose backup.dump.gz --deep
# Cloud backup
dbbackup backup single mydb --cloud s3://my-bucket/backups/
@@ -257,6 +267,7 @@ dbbackup backup single mydb --dry-run
| `restore single` | Restore single database |
| `restore cluster` | Restore full cluster |
| `restore pitr` | Point-in-Time Recovery |
| `restore diagnose` | Diagnose backup file integrity |
| `verify-backup` | Verify backup integrity |
| `cleanup` | Remove old backups |
| `status` | Check connection status |
@@ -287,8 +298,8 @@ dbbackup backup single mydb --dry-run
| `--cloud` | Cloud storage URI | - |
| `--encrypt` | Enable encryption | false |
| `--dry-run, -n` | Run preflight checks only | false |
| `--notify` | Enable notifications | false |
| `--debug` | Enable debug logging | false |
| `--save-debug-log` | Save error report to file on failure | - |
## Encryption
@@ -436,9 +447,64 @@ dbbackup backup cluster -n # Short flag
Ready to backup. Remove --dry-run to execute.
```
## Backup Diagnosis
Diagnose backup files before restore to detect corruption or truncation:
```bash
# Diagnose a backup file
dbbackup restore diagnose backup.dump.gz
# Deep analysis (line-by-line COPY block verification)
dbbackup restore diagnose backup.dump.gz --deep
# JSON output for automation
dbbackup restore diagnose backup.dump.gz --json
# Diagnose cluster archive (checks all contained dumps)
dbbackup restore diagnose cluster_backup.tar.gz --deep
```
**Checks performed:**
- PGDMP signature validation (PostgreSQL custom format)
- Gzip integrity verification
- COPY block termination (detects truncated dumps)
- `pg_restore --list` validation
- Archive structure analysis
**Example output:**
```
🔍 Backup Diagnosis Report
══════════════════════════════════════════════════════════════
📁 File: mydb_20260105.dump.gz
Format: PostgreSQL Custom (gzip)
Size: 2.5 GB
🔬 Analysis Results:
✅ Gzip integrity: Valid
✅ PGDMP signature: Valid
✅ pg_restore --list: Success (245 objects)
❌ COPY block check: TRUNCATED
⚠️ Issues Found:
- COPY block for table 'orders' not terminated
- Dump appears truncated at line 1,234,567
💡 Recommendations:
- Re-run the backup for this database
- Check disk space on backup server
- Verify network stability during backup
```
**In Interactive Mode:**
- Press `d` in archive browser to diagnose any backup
- Automatic dump validity check in restore preview
- Toggle debug logging with `d` in restore options
## Notifications
Get alerted on backup events via email or webhooks.
Get alerted on backup events via email or webhooks. Configure via environment variables.
### SMTP Email
@@ -451,8 +517,8 @@ export NOTIFY_SMTP_PASSWORD="secret"
export NOTIFY_SMTP_FROM="dbbackup@example.com"
export NOTIFY_SMTP_TO="admin@example.com,dba@example.com"
# Enable notifications
dbbackup backup single mydb --notify
# Run backup (notifications triggered when SMTP is configured)
dbbackup backup single mydb
```
### Webhooks
@@ -465,7 +531,8 @@ export NOTIFY_WEBHOOK_SECRET="signing-secret" # Optional HMAC signing
# Slack webhook
export NOTIFY_WEBHOOK_URL="https://hooks.slack.com/services/T00/B00/XXX"
dbbackup backup single mydb --notify
# Run backup (notifications triggered when webhook is configured)
dbbackup backup single mydb
```
**Webhook payload:**

RELEASE_NOTES_v3.1.md (deleted)

@@ -1,396 +0,0 @@
# dbbackup v3.1.0 - Enterprise Backup Solution
**Released:** November 26, 2025
---
## 🎉 Major Features
### Point-in-Time Recovery (PITR)
Complete PostgreSQL Point-in-Time Recovery implementation:
- **WAL Archiving**: Continuous archiving of Write-Ahead Log files
- **WAL Monitoring**: Real-time monitoring of archive status and statistics
- **Timeline Management**: Track and visualize PostgreSQL timeline branching
- **Recovery Targets**: Restore to any point in time:
- Specific timestamp (`--target-time "2024-11-26 12:00:00"`)
- Transaction ID (`--target-xid 1000000`)
- Log Sequence Number (`--target-lsn "0/3000000"`)
- Named restore point (`--target-name before_migration`)
- Earliest consistent point (`--target-immediate`)
- **Version Support**: Both PostgreSQL 12+ (modern) and legacy formats
- **Recovery Actions**: Promote to primary, pause for inspection, or shutdown
- **Comprehensive Testing**: 700+ lines of tests with 100% pass rate
**New Commands:**
- `pitr enable/disable/status` - PITR configuration management
- `wal archive/list/cleanup/timeline` - WAL archive operations
- `restore pitr` - Point-in-time recovery with multiple target types
### Cloud Storage Integration
Multi-cloud backend support with streaming efficiency:
- **Amazon S3 / MinIO**: Full S3-compatible storage support
- **Azure Blob Storage**: Native Azure integration
- **Google Cloud Storage**: GCS backend support
- **Streaming Operations**: Memory-efficient uploads/downloads
- **Cloud-Native**: Direct backup to cloud, no local disk required
**Features:**
- Automatic multipart uploads for large files
- Resumable downloads with retry logic
- Cloud-side encryption support
- Metadata preservation in cloud storage
### Incremental Backups
Space-efficient backup strategies:
- **PostgreSQL**: File-level incremental backups
- Track changed files since base backup
- Automatic base backup detection
- Efficient restore chain resolution
- **MySQL/MariaDB**: Binary log incremental backups
- Capture changes via binlog
- Automatic log rotation handling
- Point-in-time restore capability
**Benefits:**
- 70-90% reduction in backup size
- Faster backup completion times
- Automated backup chain management
- Intelligent dependency tracking
### AES-256-GCM Encryption
Military-grade encryption for data protection:
- **Algorithm**: AES-256-GCM authenticated encryption
- **Key Derivation**: PBKDF2-SHA256 with 600,000 iterations (OWASP 2023)
- **Streaming**: Memory-efficient for large backups
- **Key Sources**: File (raw/base64), environment variable, or passphrase
- **Auto-Detection**: Restore automatically detects encrypted backups
- **Tamper Protection**: Authenticated encryption prevents tampering
**Security:**
- Unique nonce per encryption (no key reuse)
- Cryptographically secure random generation
- 56-byte header with algorithm metadata
- ~1-2 GB/s encryption throughput
### Foundation Features
Production-ready backup operations:
- **SHA-256 Verification**: Cryptographic backup integrity checking
- **Intelligent Retention**: Day-based policies with minimum backup guarantees
- **Safe Cleanup**: Dry-run mode, safety checks, detailed reporting
- **Multi-Database**: PostgreSQL, MySQL, MariaDB support
- **Interactive TUI**: Beautiful terminal UI with progress tracking
- **CLI Mode**: Full command-line interface for automation
- **Cross-Platform**: Linux, macOS, FreeBSD, OpenBSD, NetBSD
- **Docker Support**: Official container images
- **100% Test Coverage**: Comprehensive test suite
---
## ✅ Production Validated
**Real-World Deployment:**
- ✅ 2 hosts in the production environment
- ✅ 8 databases backed up nightly
- ✅ 30-day retention with minimum 5 backups
- ✅ ~10MB/night backup volume
- ✅ Scheduled at 02:09 and 02:25 CET
- **Resolved 4-day backup failure immediately**
**User Feedback (Ansible Claude):**
> "cleanup command is SO gut, dass es alle verwenden sollten"
> "--dry-run feature: chef's kiss!" 💋
> "Modern tooling in place, pragmatic and maintainable"
> "CLI design: Professional & polished"
**Impact:**
- Fixed failing backup infrastructure on first deployment
- Stable operation in production environment
- Positive feedback from DevOps team
- Validation of feature set and UX design
---
## 📦 Installation
### Download Pre-compiled Binary
**Linux (x86_64):**
```bash
wget https://git.uuxo.net/PlusOne/dbbackup/releases/download/v3.1.0/dbbackup-linux-amd64
chmod +x dbbackup-linux-amd64
sudo mv dbbackup-linux-amd64 /usr/local/bin/dbbackup
```
**Linux (ARM64):**
```bash
wget https://git.uuxo.net/PlusOne/dbbackup/releases/download/v3.1.0/dbbackup-linux-arm64
chmod +x dbbackup-linux-arm64
sudo mv dbbackup-linux-arm64 /usr/local/bin/dbbackup
```
**macOS (Intel):**
```bash
wget https://git.uuxo.net/PlusOne/dbbackup/releases/download/v3.1.0/dbbackup-darwin-amd64
chmod +x dbbackup-darwin-amd64
sudo mv dbbackup-darwin-amd64 /usr/local/bin/dbbackup
```
**macOS (Apple Silicon):**
```bash
wget https://git.uuxo.net/PlusOne/dbbackup/releases/download/v3.1.0/dbbackup-darwin-arm64
chmod +x dbbackup-darwin-arm64
sudo mv dbbackup-darwin-arm64 /usr/local/bin/dbbackup
```
### Build from Source
```bash
git clone https://git.uuxo.net/PlusOne/dbbackup.git
cd dbbackup
go build -o dbbackup
sudo mv dbbackup /usr/local/bin/
```
### Docker
```bash
docker pull git.uuxo.net/PlusOne/dbbackup:v3.1.0
docker pull git.uuxo.net/PlusOne/dbbackup:latest
```
---
## 🚀 Quick Start Examples
### Basic Backup
```bash
# Simple database backup
dbbackup backup single mydb
# Backup with verification
dbbackup backup single mydb
dbbackup verify mydb_backup.sql.gz
```
### Cloud Backup
```bash
# Backup to S3
dbbackup backup single mydb --cloud s3://my-bucket/backups/
# Backup to Azure
dbbackup backup single mydb --cloud azure://container/backups/
# Backup to GCS
dbbackup backup single mydb --cloud gs://my-bucket/backups/
```
### Encrypted Backup
```bash
# Generate encryption key
head -c 32 /dev/urandom | base64 > encryption.key
# Encrypted backup
dbbackup backup single mydb --encrypt --encryption-key-file encryption.key
# Restore (automatic decryption)
dbbackup restore single mydb_backup.sql.gz --encryption-key-file encryption.key
```
### Incremental Backup
```bash
# Create base backup
dbbackup backup single mydb --backup-type full
# Create incremental backup
dbbackup backup single mydb --backup-type incremental \
--base-backup mydb_base_20241126_120000.tar.gz
# Restore (automatic chain resolution)
dbbackup restore single mydb_incr_20241126_150000.tar.gz
```
### Point-in-Time Recovery
```bash
# Enable PITR
dbbackup pitr enable --archive-dir /backups/wal_archive
# Take base backup
pg_basebackup -D /backups/base.tar.gz -Ft -z -P
# Perform PITR
dbbackup restore pitr \
--base-backup /backups/base.tar.gz \
--wal-archive /backups/wal_archive \
--target-time "2024-11-26 12:00:00" \
--target-dir /var/lib/postgresql/14/restored
# Monitor WAL archiving
dbbackup pitr status
dbbackup wal list
```
### Retention & Cleanup
```bash
# Cleanup old backups (dry-run first!)
dbbackup cleanup --retention-days 30 --min-backups 5 --dry-run
# Actually cleanup
dbbackup cleanup --retention-days 30 --min-backups 5
```
### Cluster Operations
```bash
# Backup entire cluster
dbbackup backup cluster
# Restore entire cluster
dbbackup restore cluster --backups /path/to/backups/ --confirm
```
---
## 🔮 What's Next (v3.2)
Based on production feedback from Ansible Claude:
### High Priority
1. **Config File Support** (2-3h)
- Persist flags like `--allow-root` in `.dbbackup.conf`
- Per-directory configuration management
- Better automation support
2. **Socket Auth Auto-Detection** (1-2h)
- Auto-detect Unix socket authentication
- Skip password prompts for socket connections
- Improved UX for root users
### Medium Priority
3. **Inline Backup Verification** (2-3h)
- Automatic verification after backup
- Immediate corruption detection
- Better workflow integration
4. **Progress Indicators** (4-6h)
- Progress bars for mysqldump operations
- Real-time backup size tracking
- ETA for large backups
### Additional Features
5. **Ansible Module** (4-6h)
- Native Ansible integration
- Declarative backup configuration
- DevOps automation support
---
## 📊 Performance Metrics
**Backup Performance:**
- PostgreSQL: 50-150 MB/s (network dependent)
- MySQL: 30-100 MB/s (with compression)
- Encryption: ~1-2 GB/s (streaming)
- Compression: 70-80% size reduction (typical)
**PITR Performance:**
- WAL archiving: 100-200 MB/s
- WAL encryption: ~1-2 GB/s
- Recovery replay: 10-100 MB/s (disk I/O dependent)
**Resource Usage:**
- Memory: ~1GB constant (streaming architecture)
- CPU: 1-4 cores (configurable)
- Disk I/O: Streaming (no intermediate files)
---
## 🏗️ Architecture Highlights
**Split-Brain Development:**
- Human architects system design
- AI implements features and tests
- Micro-task decomposition (1-2h phases)
- Progressive enhancement approach
- **Result:** 52% faster development (5.75h vs 12h planned)
**Key Innovations:**
- Streaming architecture for constant memory usage
- Interface-first design for clean modularity
- Comprehensive test coverage (700+ test lines)
- Production validation in parallel with development
---
## 📄 Documentation
**Core Documentation:**
- [README.md](README.md) - Complete feature overview and setup
- [PITR.md](PITR.md) - Comprehensive PITR guide
- [DOCKER.md](DOCKER.md) - Docker usage and deployment
- [CHANGELOG.md](CHANGELOG.md) - Detailed version history
**Getting Started:**
- [QUICKRUN.md](QUICKRUN.MD) - Quick start guide
- [PROGRESS_IMPLEMENTATION.md](PROGRESS_IMPLEMENTATION.md) - Progress tracking
---
## 📜 License
Apache License 2.0
Copyright 2025 dbbackup Project
Licensed under the Apache License, Version 2.0. See [LICENSE](LICENSE) for details.
---
## 🙏 Credits
**Development:**
- Built using Multi-Claude collaboration architecture
- Split-brain development pattern (human architecture + AI implementation)
- 5.75 hours intensive development (52% time savings)
**Production Validation:**
- Deployed in production environments
- Real-world testing and feedback
- DevOps validation and feature requests
**Technologies:**
- Go 1.21+
- PostgreSQL 9.5-17
- MySQL/MariaDB 5.7+
- AWS SDK, Azure SDK, Google Cloud SDK
- Cobra CLI framework
---
## 🐛 Known Issues
None reported in production deployment.
If you encounter issues, please report them at:
https://git.uuxo.net/PlusOne/dbbackup/issues
---
## 📞 Support
**Documentation:** See [README.md](README.md) and [PITR.md](PITR.md)
**Issues:** https://git.uuxo.net/PlusOne/dbbackup/issues
**Repository:** https://git.uuxo.net/PlusOne/dbbackup
---
**Thank you for using dbbackup!** 🎉
*Professional database backup and restore utility for PostgreSQL, MySQL, and MariaDB.*

SECURITY.md

@@ -85,7 +85,7 @@ We release security updates for the following versions:
- Never store unencrypted backups on public storage
**Docker Usage:**
- Use specific version tags (`:v3.1.0` not `:latest`)
- Use specific version tags (`:v3.2.0` not `:latest`)
- Run as non-root user (default in our image)
- Mount volumes read-only when possible
- Use Docker secrets for credentials
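For example, a pinned pull might look like this (registry path taken from the v3.1.0 release notes elsewhere in this compare; a matching v3.2.0 image tag is assumed to exist):

```bash
docker pull git.uuxo.net/PlusOne/dbbackup:v3.2.0
```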

VEEAM_ALTERNATIVE.md

@@ -43,7 +43,7 @@ What are you actually getting?
```bash
# Physical backup at InnoDB page level
# No XtraBackup. No external tools. Pure Go.
dbbackup backup --engine=clone --output=s3://bucket/backup
dbbackup backup single mydb --db-type mysql --cloud s3://bucket/backups/
```
### Filesystem Snapshots
@@ -78,10 +78,10 @@ dbbackup backup --engine=streaming --parallel-workers=8
**Day 1**: Run dbbackup alongside existing solution
```bash
# Test backup
dbbackup backup --database=mydb --output=s3://test-bucket/
dbbackup backup single mydb --cloud s3://test-bucket/
# Verify integrity
dbbackup verify s3://test-bucket/backup.sql.gz.enc
dbbackup verify s3://test-bucket/mydb_20260115.dump.gz
```
**Week 1**: Compare backup times, storage costs, recovery speed
@@ -112,10 +112,9 @@ curl -LO https://github.com/UUXO/dbbackup/releases/latest/download/dbbackup_linu
chmod +x dbbackup_linux_amd64
# Your first backup
./dbbackup_linux_amd64 backup \
--database=production \
--engine=auto \
--output=s3://my-backups/$(date +%Y%m%d)/
./dbbackup_linux_amd64 backup single production \
--db-type mysql \
--cloud s3://my-backups/
```
## The Bottom Line
@@ -131,4 +130,4 @@ Every dollar you spend on backup licensing is a dollar not spent on:
*Apache 2.0 Licensed. Free forever. No sales calls required.*
[GitHub](https://github.com/UUXO/dbbackup) | [Documentation](https://github.com/UUXO/dbbackup#readme) | [Release Notes](RELEASE_NOTES_v3.2.md)
[GitHub](https://github.com/UUXO/dbbackup) | [Documentation](https://github.com/UUXO/dbbackup#readme) | [Changelog](CHANGELOG.md)

build_all.sh

@@ -15,7 +15,7 @@ echo "🔧 Using Go version: $GO_VERSION"
# Configuration
APP_NAME="dbbackup"
VERSION="3.1.0"
VERSION="3.40.0"
BUILD_TIME=$(date -u '+%Y-%m-%d_%H:%M:%S_UTC')
GIT_COMMIT=$(git rev-parse --short HEAD 2>/dev/null || echo "unknown")
BIN_DIR="bin"

Restore command (Go source)

@@ -33,6 +33,13 @@ var (
restoreNoProgress bool
restoreWorkdir string
restoreCleanCluster bool
restoreDiagnose bool // Run diagnosis before restore
restoreSaveDebugLog string // Path to save debug log on failure
// Diagnose flags
diagnoseJSON bool
diagnoseDeep bool
diagnoseKeepTemp bool
// Encryption flags
restoreEncryptionKeyFile string
@@ -214,12 +221,53 @@ Examples:
RunE: runRestorePITR,
}
// restoreDiagnoseCmd diagnoses backup files before restore
var restoreDiagnoseCmd = &cobra.Command{
Use: "diagnose [archive-file]",
Short: "Diagnose backup file integrity and format",
Long: `Perform deep analysis of backup files to detect issues before restore.
This command validates backup archives and provides detailed diagnostics
including truncation detection, format verification, and COPY block integrity.
Use this when:
- Restore fails with syntax errors
- You suspect backup corruption or truncation
- You want to verify backup integrity before restore
- Restore reports millions of errors
Checks performed:
- File format detection (custom dump vs SQL)
- PGDMP signature verification
- Gzip integrity validation
- COPY block termination check
- pg_restore --list verification
- Cluster archive structure validation
Examples:
# Diagnose a single dump file
dbbackup restore diagnose mydb.dump.gz
# Diagnose with verbose output
dbbackup restore diagnose mydb.sql.gz --verbose
# Diagnose cluster archive and all contained dumps
dbbackup restore diagnose cluster_backup.tar.gz --deep
# Output as JSON for scripting
dbbackup restore diagnose mydb.dump --json
`,
Args: cobra.ExactArgs(1),
RunE: runRestoreDiagnose,
}
func init() {
rootCmd.AddCommand(restoreCmd)
restoreCmd.AddCommand(restoreSingleCmd)
restoreCmd.AddCommand(restoreClusterCmd)
restoreCmd.AddCommand(restoreListCmd)
restoreCmd.AddCommand(restorePITRCmd)
restoreCmd.AddCommand(restoreDiagnoseCmd)
// Single restore flags
restoreSingleCmd.Flags().BoolVar(&restoreConfirm, "confirm", false, "Confirm and execute restore (required)")
@@ -232,6 +280,8 @@ func init() {
restoreSingleCmd.Flags().BoolVar(&restoreNoProgress, "no-progress", false, "Disable progress indicators")
restoreSingleCmd.Flags().StringVar(&restoreEncryptionKeyFile, "encryption-key-file", "", "Path to encryption key file (required for encrypted backups)")
restoreSingleCmd.Flags().StringVar(&restoreEncryptionKeyEnv, "encryption-key-env", "DBBACKUP_ENCRYPTION_KEY", "Environment variable containing encryption key")
restoreSingleCmd.Flags().BoolVar(&restoreDiagnose, "diagnose", false, "Run deep diagnosis before restore to detect corruption/truncation")
restoreSingleCmd.Flags().StringVar(&restoreSaveDebugLog, "save-debug-log", "", "Save detailed error report to file on failure (e.g., /tmp/restore-debug.json)")
// Cluster restore flags
restoreClusterCmd.Flags().BoolVar(&restoreConfirm, "confirm", false, "Confirm and execute restore (required)")
@@ -244,6 +294,8 @@ func init() {
restoreClusterCmd.Flags().BoolVar(&restoreNoProgress, "no-progress", false, "Disable progress indicators")
restoreClusterCmd.Flags().StringVar(&restoreEncryptionKeyFile, "encryption-key-file", "", "Path to encryption key file (required for encrypted backups)")
restoreClusterCmd.Flags().StringVar(&restoreEncryptionKeyEnv, "encryption-key-env", "DBBACKUP_ENCRYPTION_KEY", "Environment variable containing encryption key")
restoreClusterCmd.Flags().BoolVar(&restoreDiagnose, "diagnose", false, "Run deep diagnosis on all dumps before restore")
restoreClusterCmd.Flags().StringVar(&restoreSaveDebugLog, "save-debug-log", "", "Save detailed error report to file on failure (e.g., /tmp/restore-debug.json)")
// PITR restore flags
restorePITRCmd.Flags().StringVar(&pitrBaseBackup, "base-backup", "", "Path to base backup file (.tar.gz) (required)")
@@ -264,6 +316,117 @@ func init() {
restorePITRCmd.MarkFlagRequired("base-backup")
restorePITRCmd.MarkFlagRequired("wal-archive")
restorePITRCmd.MarkFlagRequired("target-dir")
// Diagnose flags
restoreDiagnoseCmd.Flags().BoolVar(&diagnoseJSON, "json", false, "Output diagnosis as JSON")
restoreDiagnoseCmd.Flags().BoolVar(&diagnoseDeep, "deep", false, "For cluster archives, extract and diagnose all contained dumps")
restoreDiagnoseCmd.Flags().BoolVar(&diagnoseKeepTemp, "keep-temp", false, "Keep temporary extraction directory (for debugging)")
restoreDiagnoseCmd.Flags().BoolVar(&restoreVerbose, "verbose", false, "Show detailed analysis progress")
}
// runRestoreDiagnose diagnoses backup files
func runRestoreDiagnose(cmd *cobra.Command, args []string) error {
archivePath := args[0]
// Convert to absolute path
if !filepath.IsAbs(archivePath) {
absPath, err := filepath.Abs(archivePath)
if err != nil {
return fmt.Errorf("invalid archive path: %w", err)
}
archivePath = absPath
}
// Check if file exists
if _, err := os.Stat(archivePath); err != nil {
return fmt.Errorf("archive not found: %s", archivePath)
}
log.Info("🔍 Diagnosing backup file", "path", archivePath)
diagnoser := restore.NewDiagnoser(log, restoreVerbose)
// Check if it's a cluster archive that needs deep analysis
format := restore.DetectArchiveFormat(archivePath)
if format.IsClusterBackup() && diagnoseDeep {
// Create temp directory for extraction
tempDir, err := os.MkdirTemp("", "dbbackup-diagnose-*")
if err != nil {
return fmt.Errorf("failed to create temp directory: %w", err)
}
if !diagnoseKeepTemp {
defer os.RemoveAll(tempDir)
} else {
log.Info("Temp directory preserved", "path", tempDir)
}
log.Info("Extracting cluster archive for deep analysis...")
// Extract and diagnose all dumps
results, err := diagnoser.DiagnoseClusterDumps(archivePath, tempDir)
if err != nil {
return fmt.Errorf("cluster diagnosis failed: %w", err)
}
// Output results
var hasErrors bool
for _, result := range results {
if diagnoseJSON {
diagnoser.PrintDiagnosisJSON(result)
} else {
diagnoser.PrintDiagnosis(result)
}
if !result.IsValid {
hasErrors = true
}
}
// Summary
if !diagnoseJSON {
fmt.Println("\n" + strings.Repeat("=", 70))
fmt.Printf("📊 CLUSTER SUMMARY: %d databases analyzed\n", len(results))
validCount := 0
for _, r := range results {
if r.IsValid {
validCount++
}
}
if validCount == len(results) {
fmt.Println("✅ All dumps are valid")
} else {
fmt.Printf("❌ %d/%d dumps have issues\n", len(results)-validCount, len(results))
}
fmt.Println(strings.Repeat("=", 70))
}
if hasErrors {
return fmt.Errorf("one or more dumps have validation errors")
}
return nil
}
// Single file diagnosis
result, err := diagnoser.DiagnoseFile(archivePath)
if err != nil {
return fmt.Errorf("diagnosis failed: %w", err)
}
if diagnoseJSON {
diagnoser.PrintDiagnosisJSON(result)
} else {
diagnoser.PrintDiagnosis(result)
}
if !result.IsValid {
return fmt.Errorf("backup file has validation errors")
}
log.Info("✅ Backup file appears valid")
return nil
}
// runRestoreSingle restores a single database
@@ -401,6 +564,12 @@ func runRestoreSingle(cmd *cobra.Command, args []string) error {
// Create restore engine
engine := restore.New(cfg, log, db)
// Enable debug logging if requested
if restoreSaveDebugLog != "" {
engine.SetDebugLogPath(restoreSaveDebugLog)
log.Info("Debug logging enabled", "output", restoreSaveDebugLog)
}
// Setup signal handling
ctx, cancel := context.WithCancel(context.Background())
@@ -416,6 +585,37 @@ func runRestoreSingle(cmd *cobra.Command, args []string) error {
cancel()
}()
// Run pre-restore diagnosis if requested
if restoreDiagnose {
log.Info("🔍 Running pre-restore diagnosis...")
diagnoser := restore.NewDiagnoser(log, restoreVerbose)
result, err := diagnoser.DiagnoseFile(archivePath)
if err != nil {
return fmt.Errorf("diagnosis failed: %w", err)
}
diagnoser.PrintDiagnosis(result)
if !result.IsValid {
log.Error("❌ Pre-restore diagnosis found issues")
if result.IsTruncated {
log.Error(" The backup file appears to be TRUNCATED")
}
if result.IsCorrupted {
log.Error(" The backup file appears to be CORRUPTED")
}
fmt.Println("\nUse --force to attempt restore anyway.")
if !restoreForce {
return fmt.Errorf("aborting restore due to backup file issues")
}
log.Warn("Continuing despite diagnosis errors (--force enabled)")
} else {
log.Info("✅ Backup file passed diagnosis")
}
}
// Execute restore
log.Info("Starting restore...", "database", targetDB)
@@ -584,6 +784,12 @@ func runRestoreCluster(cmd *cobra.Command, args []string) error {
// Create restore engine
engine := restore.New(cfg, log, db)
// Enable debug logging if requested
if restoreSaveDebugLog != "" {
engine.SetDebugLogPath(restoreSaveDebugLog)
log.Info("Debug logging enabled", "output", restoreSaveDebugLog)
}
// Setup signal handling
ctx, cancel := context.WithCancel(context.Background())
@@ -620,6 +826,52 @@ func runRestoreCluster(cmd *cobra.Command, args []string) error {
log.Info("Database cleanup completed")
}
// Run pre-restore diagnosis if requested
if restoreDiagnose {
log.Info("🔍 Running pre-restore diagnosis...")
// Create temp directory for extraction
diagTempDir, err := os.MkdirTemp("", "dbbackup-diagnose-*")
if err != nil {
return fmt.Errorf("failed to create temp directory for diagnosis: %w", err)
}
defer os.RemoveAll(diagTempDir)
diagnoser := restore.NewDiagnoser(log, restoreVerbose)
results, err := diagnoser.DiagnoseClusterDumps(archivePath, diagTempDir)
if err != nil {
return fmt.Errorf("diagnosis failed: %w", err)
}
// Check for any invalid dumps
var invalidDumps []string
for _, result := range results {
if !result.IsValid {
invalidDumps = append(invalidDumps, result.FileName)
diagnoser.PrintDiagnosis(result)
}
}
if len(invalidDumps) > 0 {
log.Error("❌ Pre-restore diagnosis found issues",
"invalid_dumps", len(invalidDumps),
"total_dumps", len(results))
fmt.Println("\n⚠ The following dumps have issues and will likely fail during restore:")
for _, name := range invalidDumps {
fmt.Printf(" - %s\n", name)
}
fmt.Println("\nRun 'dbbackup restore diagnose <archive> --deep' for full details.")
fmt.Println("Use --force to attempt restore anyway.")
if !restoreForce {
return fmt.Errorf("aborting restore due to %d invalid dump(s)", len(invalidDumps))
}
log.Warn("Continuing despite diagnosis errors (--force enabled)")
} else {
log.Info("✅ All dumps passed diagnosis", "count", len(results))
}
}
// Execute cluster restore
log.Info("Starting cluster restore...")

Configuration struct (Go source)

@@ -64,6 +64,9 @@ type Config struct {
// Cluster parallelism
ClusterParallelism int // Number of concurrent databases during cluster operations (0 = sequential)
// Working directory for large operations (extraction, diagnosis)
WorkDir string // Alternative temp directory for large operations (default: system temp)
// Swap file management (for large backups)
SwapFilePath string // Path to temporary swap file
SwapFileSizeGB int // Size in GB (0 = disabled)

Local config persistence (Go source)

@@ -22,6 +22,7 @@ type LocalConfig struct {
// Backup settings
BackupDir string
WorkDir string // Working directory for large operations
Compression int
Jobs int
DumpJobs int
@@ -97,6 +98,8 @@ func LoadLocalConfig() (*LocalConfig, error) {
switch key {
case "backup_dir":
cfg.BackupDir = value
case "work_dir":
cfg.WorkDir = value
case "compression":
if c, err := strconv.Atoi(value); err == nil {
cfg.Compression = c
@@ -174,6 +177,9 @@ func SaveLocalConfig(cfg *LocalConfig) error {
if cfg.BackupDir != "" {
sb.WriteString(fmt.Sprintf("backup_dir = %s\n", cfg.BackupDir))
}
if cfg.WorkDir != "" {
sb.WriteString(fmt.Sprintf("work_dir = %s\n", cfg.WorkDir))
}
if cfg.Compression != 0 {
sb.WriteString(fmt.Sprintf("compression = %d\n", cfg.Compression))
}
@@ -244,6 +250,9 @@ func ApplyLocalConfig(cfg *Config, local *LocalConfig) {
if local.BackupDir != "" {
cfg.BackupDir = local.BackupDir
}
if local.WorkDir != "" {
cfg.WorkDir = local.WorkDir
}
if cfg.CompressionLevel == 6 && local.Compression != 0 {
cfg.CompressionLevel = local.Compression
}
@@ -280,6 +289,7 @@ func ConfigFromConfig(cfg *Config) *LocalConfig {
Database: cfg.Database,
SSLMode: cfg.SSLMode,
BackupDir: cfg.BackupDir,
WorkDir: cfg.WorkDir,
Compression: cfg.CompressionLevel,
Jobs: cfg.Jobs,
DumpJobs: cfg.DumpJobs,

Clone engine metadata (Go source)

@@ -339,7 +339,7 @@ func (e *CloneEngine) Backup(ctx context.Context, opts *BackupOptions) (*BackupR
// Save metadata
meta := &metadata.BackupMetadata{
Version: "3.1.0",
Version: "3.40.0",
Timestamp: startTime,
Database: opts.Database,
DatabaseType: "mysql",

mysqldump engine metadata (Go source)

@@ -254,7 +254,7 @@ func (e *MySQLDumpEngine) Backup(ctx context.Context, opts *BackupOptions) (*Bac
// Save metadata
meta := &metadata.BackupMetadata{
Version: "3.1.0",
Version: "3.40.0",
Timestamp: startTime,
Database: opts.Database,
DatabaseType: "mysql",

Snapshot engine metadata (Go source)

@@ -223,7 +223,7 @@ func (e *SnapshotEngine) Backup(ctx context.Context, opts *BackupOptions) (*Back
// Save metadata
meta := &metadata.BackupMetadata{
Version: "3.1.0",
Version: "3.40.0",
Timestamp: startTime,
Database: opts.Database,
DatabaseType: "mysql",

Restore diagnoser (new Go source file, package restore)

@@ -0,0 +1,829 @@
package restore
import (
"bufio"
"bytes"
"compress/gzip"
"encoding/json"
"fmt"
"io"
"os"
"os/exec"
"path/filepath"
"regexp"
"strings"
"dbbackup/internal/logger"
)
// DiagnoseResult contains the results of a dump file diagnosis
type DiagnoseResult struct {
FilePath string `json:"file_path"`
FileName string `json:"file_name"`
FileSize int64 `json:"file_size"`
Format ArchiveFormat `json:"format"`
DetectedFormat string `json:"detected_format"`
IsValid bool `json:"is_valid"`
IsTruncated bool `json:"is_truncated"`
IsCorrupted bool `json:"is_corrupted"`
Errors []string `json:"errors,omitempty"`
Warnings []string `json:"warnings,omitempty"`
Details *DiagnoseDetails `json:"details,omitempty"`
}
// DiagnoseDetails contains detailed analysis of the dump file
type DiagnoseDetails struct {
// Header info
HasPGDMPSignature bool `json:"has_pgdmp_signature,omitempty"`
HasSQLHeader bool `json:"has_sql_header,omitempty"`
FirstBytes string `json:"first_bytes,omitempty"`
LastBytes string `json:"last_bytes,omitempty"`
// COPY block analysis (for SQL dumps)
CopyBlockCount int `json:"copy_block_count,omitempty"`
UnterminatedCopy bool `json:"unterminated_copy,omitempty"`
LastCopyTable string `json:"last_copy_table,omitempty"`
LastCopyLineNumber int `json:"last_copy_line_number,omitempty"`
SampleCopyData []string `json:"sample_copy_data,omitempty"`
// Structure analysis
HasCreateStatements bool `json:"has_create_statements,omitempty"`
HasInsertStatements bool `json:"has_insert_statements,omitempty"`
HasCopyStatements bool `json:"has_copy_statements,omitempty"`
HasTransactionBlock bool `json:"has_transaction_block,omitempty"`
ProperlyTerminated bool `json:"properly_terminated,omitempty"`
// pg_restore analysis (for custom format)
PgRestoreListable bool `json:"pg_restore_listable,omitempty"`
PgRestoreError string `json:"pg_restore_error,omitempty"`
TableCount int `json:"table_count,omitempty"`
TableList []string `json:"table_list,omitempty"`
// Compression analysis
GzipValid bool `json:"gzip_valid,omitempty"`
GzipError string `json:"gzip_error,omitempty"`
ExpandedSize int64 `json:"expanded_size,omitempty"`
CompressionRatio float64 `json:"compression_ratio,omitempty"`
}
// Diagnoser performs deep analysis of backup files
type Diagnoser struct {
log logger.Logger
verbose bool
}
// NewDiagnoser creates a new diagnoser
func NewDiagnoser(log logger.Logger, verbose bool) *Diagnoser {
return &Diagnoser{
log: log,
verbose: verbose,
}
}
// DiagnoseFile performs comprehensive diagnosis of a backup file
func (d *Diagnoser) DiagnoseFile(filePath string) (*DiagnoseResult, error) {
result := &DiagnoseResult{
FilePath: filePath,
FileName: filepath.Base(filePath),
Details: &DiagnoseDetails{},
IsValid: true, // Assume valid until proven otherwise
}
// Check file exists and get size
stat, err := os.Stat(filePath)
if err != nil {
result.IsValid = false
result.Errors = append(result.Errors, fmt.Sprintf("Cannot access file: %v", err))
return result, nil
}
result.FileSize = stat.Size()
if result.FileSize == 0 {
result.IsValid = false
result.IsTruncated = true
result.Errors = append(result.Errors, "File is empty (0 bytes)")
return result, nil
}
// Detect format
result.Format = DetectArchiveFormat(filePath)
result.DetectedFormat = result.Format.String()
// Analyze based on format
switch result.Format {
case FormatPostgreSQLDump:
d.diagnosePgDump(filePath, result)
case FormatPostgreSQLDumpGz:
d.diagnosePgDumpGz(filePath, result)
case FormatPostgreSQLSQL:
d.diagnoseSQLScript(filePath, false, result)
case FormatPostgreSQLSQLGz:
d.diagnoseSQLScript(filePath, true, result)
case FormatClusterTarGz:
d.diagnoseClusterArchive(filePath, result)
default:
result.Warnings = append(result.Warnings, "Unknown format - limited diagnosis available")
d.diagnoseUnknown(filePath, result)
}
return result, nil
}
// diagnosePgDump analyzes PostgreSQL custom format dump
func (d *Diagnoser) diagnosePgDump(filePath string, result *DiagnoseResult) {
file, err := os.Open(filePath)
if err != nil {
result.IsValid = false
result.Errors = append(result.Errors, fmt.Sprintf("Cannot open file: %v", err))
return
}
defer file.Close()
// Read first 512 bytes
header := make([]byte, 512)
n, err := file.Read(header)
if err != nil && err != io.EOF {
result.IsValid = false
result.Errors = append(result.Errors, fmt.Sprintf("Cannot read header: %v", err))
return
}
// Check PGDMP signature
if n >= 5 && string(header[:5]) == "PGDMP" {
result.Details.HasPGDMPSignature = true
result.Details.FirstBytes = "PGDMP..."
} else {
result.IsValid = false
result.IsCorrupted = true
result.Details.HasPGDMPSignature = false
result.Details.FirstBytes = fmt.Sprintf("%q", header[:minInt(n, 20)])
result.Errors = append(result.Errors,
"Missing PGDMP signature - file is NOT PostgreSQL custom format",
"This file may be SQL format incorrectly named as .dump",
"Try: file "+filePath+" to check actual file type")
return
}
// Try pg_restore --list to verify dump integrity
d.verifyWithPgRestore(filePath, result)
}
// diagnosePgDumpGz analyzes compressed PostgreSQL custom format dump
func (d *Diagnoser) diagnosePgDumpGz(filePath string, result *DiagnoseResult) {
file, err := os.Open(filePath)
if err != nil {
result.IsValid = false
result.Errors = append(result.Errors, fmt.Sprintf("Cannot open file: %v", err))
return
}
defer file.Close()
// Verify gzip integrity
gz, err := gzip.NewReader(file)
if err != nil {
result.IsValid = false
result.IsCorrupted = true
result.Details.GzipValid = false
result.Details.GzipError = err.Error()
result.Errors = append(result.Errors,
fmt.Sprintf("Invalid gzip format: %v", err),
"The file may be truncated or corrupted during transfer")
return
}
result.Details.GzipValid = true
// Read and check header
header := make([]byte, 512)
n, err := gz.Read(header)
if err != nil && err != io.EOF {
result.IsValid = false
result.Errors = append(result.Errors, fmt.Sprintf("Cannot read decompressed header: %v", err))
gz.Close()
return
}
gz.Close()
// Check PGDMP signature
if n >= 5 && string(header[:5]) == "PGDMP" {
result.Details.HasPGDMPSignature = true
result.Details.FirstBytes = "PGDMP..."
} else {
result.Details.HasPGDMPSignature = false
result.Details.FirstBytes = fmt.Sprintf("%q", header[:minInt(n, 20)])
// Check if it's actually SQL content
content := string(header[:n])
if strings.Contains(content, "PostgreSQL") || strings.Contains(content, "pg_dump") ||
strings.Contains(content, "SET ") || strings.Contains(content, "CREATE ") {
result.Details.HasSQLHeader = true
result.Warnings = append(result.Warnings,
"File contains SQL text but has .dump extension",
"This appears to be SQL format, not custom format",
"Restore should use psql, not pg_restore")
} else {
result.IsValid = false
result.IsCorrupted = true
result.Errors = append(result.Errors,
"Missing PGDMP signature in decompressed content",
"File is neither custom format nor valid SQL")
}
return
}
// Verify full gzip stream integrity by reading to end
file.Seek(0, 0)
gz, _ = gzip.NewReader(file)
var totalRead int64
buf := make([]byte, 32*1024)
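// Read to EOF in fixed-size chunks; a mid-stream error means the gzip payload is truncated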
for {
n, err := gz.Read(buf)
totalRead += int64(n)
if err == io.EOF {
break
}
if err != nil {
result.IsValid = false
result.IsTruncated = true
result.Details.ExpandedSize = totalRead
result.Errors = append(result.Errors,
fmt.Sprintf("Gzip stream truncated after %d bytes: %v", totalRead, err),
"The backup file appears to be incomplete",
"Check if backup process completed successfully")
gz.Close()
return
}
}
gz.Close()
result.Details.ExpandedSize = totalRead
if result.FileSize > 0 {
result.Details.CompressionRatio = float64(totalRead) / float64(result.FileSize)
}
}
// diagnoseSQLScript analyzes SQL script format
func (d *Diagnoser) diagnoseSQLScript(filePath string, compressed bool, result *DiagnoseResult) {
var reader io.Reader
var file *os.File
var gz *gzip.Reader
var err error
file, err = os.Open(filePath)
if err != nil {
result.IsValid = false
result.Errors = append(result.Errors, fmt.Sprintf("Cannot open file: %v", err))
return
}
defer file.Close()
if compressed {
gz, err = gzip.NewReader(file)
if err != nil {
result.IsValid = false
result.IsCorrupted = true
result.Details.GzipValid = false
result.Details.GzipError = err.Error()
result.Errors = append(result.Errors, fmt.Sprintf("Invalid gzip format: %v", err))
return
}
result.Details.GzipValid = true
reader = gz
defer gz.Close()
} else {
reader = file
}
// Analyze SQL content
scanner := bufio.NewScanner(reader)
// Increase buffer size for large lines (COPY data can have long lines)
buf := make([]byte, 0, 1024*1024)
scanner.Buffer(buf, 10*1024*1024)
var lineNumber int
var inCopyBlock bool
var lastCopyTable string
var copyStartLine int
var copyDataSamples []string
copyBlockPattern := regexp.MustCompile(`^COPY\s+("?[\w\."]+)"?\s+\(`)
copyEndPattern := regexp.MustCompile(`^\\\.`)
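// A dump truncated mid-export typically ends inside a COPY block: the closing "\."
// never appears, so on restore the remaining rows are parsed as SQL and flood the log
// with syntax errors. Tracking open blocks lets us report the exact table and line.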
for scanner.Scan() {
lineNumber++
line := scanner.Text()
// Check first few lines for header
if lineNumber <= 10 {
if strings.Contains(line, "PostgreSQL") || strings.Contains(line, "pg_dump") {
result.Details.HasSQLHeader = true
}
}
// Track structure
upperLine := strings.ToUpper(strings.TrimSpace(line))
if strings.HasPrefix(upperLine, "CREATE ") {
result.Details.HasCreateStatements = true
}
if strings.HasPrefix(upperLine, "INSERT ") {
result.Details.HasInsertStatements = true
}
if strings.HasPrefix(upperLine, "BEGIN") {
result.Details.HasTransactionBlock = true
}
// Track COPY blocks
if copyBlockPattern.MatchString(line) {
if inCopyBlock {
// Previous COPY block wasn't terminated!
result.Details.UnterminatedCopy = true
result.IsTruncated = true
result.IsValid = false
result.Errors = append(result.Errors,
fmt.Sprintf("COPY block for '%s' starting at line %d was never terminated",
lastCopyTable, copyStartLine))
}
inCopyBlock = true
result.Details.HasCopyStatements = true
result.Details.CopyBlockCount++
matches := copyBlockPattern.FindStringSubmatch(line)
if len(matches) > 1 {
lastCopyTable = matches[1]
}
copyStartLine = lineNumber
copyDataSamples = nil
} else if copyEndPattern.MatchString(line) {
inCopyBlock = false
} else if inCopyBlock {
// We're in COPY data
if len(copyDataSamples) < 3 {
copyDataSamples = append(copyDataSamples, truncateString(line, 100))
}
}
// Periodically log progress when scanning very large files
if lineNumber > 0 && (lineNumber%100000 == 0) && d.verbose {
d.log.Debug("Scanning SQL file", "lines_processed", lineNumber)
}
}
if err := scanner.Err(); err != nil {
result.IsValid = false
result.IsTruncated = true
result.Errors = append(result.Errors,
fmt.Sprintf("Error reading file at line %d: %v", lineNumber, err),
"File may be truncated or contain invalid data")
}
// Check if we ended while still in a COPY block
if inCopyBlock {
result.Details.UnterminatedCopy = true
result.Details.LastCopyTable = lastCopyTable
result.Details.LastCopyLineNumber = copyStartLine
result.Details.SampleCopyData = copyDataSamples
result.IsTruncated = true
result.IsValid = false
result.Errors = append(result.Errors,
fmt.Sprintf("File ends inside COPY block for table '%s' (started at line %d)",
lastCopyTable, copyStartLine),
"The backup was truncated during data export",
"This explains the 'syntax error' during restore - COPY data is being interpreted as SQL")
if len(copyDataSamples) > 0 {
result.Errors = append(result.Errors,
fmt.Sprintf("Sample orphaned data: %s", copyDataSamples[0]))
}
} else {
result.Details.ProperlyTerminated = true
}
// Read last bytes for additional context
if !compressed {
file.Seek(-min(500, result.FileSize), io.SeekEnd)
lastBytes := make([]byte, 500)
n, _ := file.Read(lastBytes)
result.Details.LastBytes = strings.TrimSpace(string(lastBytes[:n]))
}
}
// diagnoseClusterArchive analyzes a cluster tar.gz archive
func (d *Diagnoser) diagnoseClusterArchive(filePath string, result *DiagnoseResult) {
// First verify tar.gz integrity
cmd := exec.Command("tar", "-tzf", filePath)
output, err := cmd.Output()
if err != nil {
result.IsValid = false
result.IsCorrupted = true
result.Errors = append(result.Errors,
fmt.Sprintf("Tar archive is invalid or corrupted: %v", err),
"Run: tar -tzf "+filePath+" 2>&1 | tail -20")
return
}
// Parse tar listing
files := strings.Split(strings.TrimSpace(string(output)), "\n")
var dumpFiles []string
hasGlobals := false
hasMetadata := false
for _, f := range files {
if strings.HasSuffix(f, ".dump") || strings.HasSuffix(f, ".sql.gz") {
dumpFiles = append(dumpFiles, f)
}
if strings.Contains(f, "globals.sql") {
hasGlobals = true
}
if strings.Contains(f, "manifest.json") || strings.Contains(f, "metadata.json") {
hasMetadata = true
}
}
result.Details.TableCount = len(dumpFiles)
result.Details.TableList = dumpFiles
if len(dumpFiles) == 0 {
result.Warnings = append(result.Warnings, "No database dump files found in archive")
}
if !hasGlobals {
result.Warnings = append(result.Warnings, "No globals.sql found - roles/tablespaces won't be restored")
}
if !hasMetadata {
result.Warnings = append(result.Warnings, "No manifest/metadata found - limited validation possible")
}
// For verbose mode, diagnose individual dumps inside the archive
if d.verbose && len(dumpFiles) > 0 {
d.log.Info("Cluster archive contains databases", "count", len(dumpFiles))
for _, df := range dumpFiles {
d.log.Info(" - " + df)
}
}
}
// diagnoseUnknown handles unknown format files
func (d *Diagnoser) diagnoseUnknown(filePath string, result *DiagnoseResult) {
file, err := os.Open(filePath)
if err != nil {
return
}
defer file.Close()
header := make([]byte, 512)
n, _ := file.Read(header)
result.Details.FirstBytes = fmt.Sprintf("%q", header[:minInt(n, 50)])
// Try to identify by content
content := string(header[:n])
if strings.Contains(content, "PGDMP") {
result.Warnings = append(result.Warnings, "File appears to be PostgreSQL custom format - rename to .dump")
} else if strings.Contains(content, "PostgreSQL") || strings.Contains(content, "pg_dump") {
result.Warnings = append(result.Warnings, "File appears to be PostgreSQL SQL - rename to .sql")
} else if bytes.HasPrefix(header, []byte{0x1f, 0x8b}) {
result.Warnings = append(result.Warnings, "File appears to be gzip compressed - add .gz extension")
}
}
// verifyWithPgRestore uses pg_restore --list to verify dump integrity
func (d *Diagnoser) verifyWithPgRestore(filePath string, result *DiagnoseResult) {
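// Listing the TOC is cheap (no data is restored) and reliably surfaces truncation,
// which pg_restore reports as "unexpected end of file".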
cmd := exec.Command("pg_restore", "--list", filePath)
output, err := cmd.CombinedOutput()
if err != nil {
result.Details.PgRestoreListable = false
result.Details.PgRestoreError = string(output)
// Check for specific errors
errStr := string(output)
if strings.Contains(errStr, "unexpected end of file") ||
strings.Contains(errStr, "invalid large-object TOC entry") {
result.IsTruncated = true
result.IsValid = false
result.Errors = append(result.Errors,
"pg_restore reports truncated or incomplete dump file",
fmt.Sprintf("Error: %s", truncateString(errStr, 200)))
} else if strings.Contains(errStr, "not a valid archive") {
result.IsCorrupted = true
result.IsValid = false
result.Errors = append(result.Errors,
"pg_restore reports file is not a valid archive",
"File may be corrupted or wrong format")
} else {
result.Warnings = append(result.Warnings,
fmt.Sprintf("pg_restore --list warning: %s", truncateString(errStr, 200)))
}
return
}
result.Details.PgRestoreListable = true
// Count tables in the TOC
lines := strings.Split(string(output), "\n")
tableCount := 0
var tables []string
for _, line := range lines {
if strings.Contains(line, " TABLE DATA ") {
tableCount++
if len(tables) < 20 {
parts := strings.Fields(line)
if len(parts) > 3 {
tables = append(tables, parts[len(parts)-1])
}
}
}
}
result.Details.TableCount = tableCount
result.Details.TableList = tables
}
// DiagnoseClusterDumps extracts and diagnoses all dumps in a cluster archive
func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*DiagnoseResult, error) {
// First, try to list archive contents without extracting (fast check)
listCmd := exec.Command("tar", "-tzf", archivePath)
listOutput, listErr := listCmd.CombinedOutput()
if listErr != nil {
// Archive listing failed - likely corrupted
errResult := &DiagnoseResult{
FilePath: archivePath,
FileName: filepath.Base(archivePath),
Format: FormatClusterTarGz,
DetectedFormat: "Cluster Archive (tar.gz)",
IsValid: false,
IsCorrupted: true,
Details: &DiagnoseDetails{},
}
errOutput := string(listOutput)
if strings.Contains(errOutput, "unexpected end of file") ||
strings.Contains(errOutput, "Unexpected EOF") ||
strings.Contains(errOutput, "truncated") {
errResult.IsTruncated = true
errResult.Errors = append(errResult.Errors,
"Archive appears to be TRUNCATED - incomplete download or backup",
fmt.Sprintf("tar error: %s", truncateString(errOutput, 300)),
"Possible causes: disk full during backup, interrupted transfer, network timeout",
"Solution: Re-create the backup from source database")
} else {
errResult.Errors = append(errResult.Errors,
fmt.Sprintf("Cannot list archive contents: %v", listErr),
fmt.Sprintf("tar error: %s", truncateString(errOutput, 300)),
"Run manually: tar -tzf "+archivePath+" 2>&1 | tail -50")
}
return []*DiagnoseResult{errResult}, nil
}
// Archive is listable - estimate the disk space extraction will need (~4x archive size)
files := strings.Split(strings.TrimSpace(string(listOutput)), "\n")
var archiveSize int64
if archiveInfo, statErr := os.Stat(archivePath); statErr == nil {
archiveSize = archiveInfo.Size()
}
requiredSpace := archiveSize * 4
// Best-effort pre-extraction of metadata files into the temp directory
if stat, err := os.Stat(tempDir); err == nil && stat.IsDir() {
testCmd := exec.Command("tar", "-xzf", archivePath, "-C", tempDir, "--wildcards", "*.json", "globals.sql")
testCmd.Run() // Ignore error - just try to extract metadata
}
d.log.Info("Archive listing successful", "files", len(files))
// Try full extraction
cmd := exec.Command("tar", "-xzf", archivePath, "-C", tempDir)
var stderr bytes.Buffer
cmd.Stderr = &stderr
if err := cmd.Run(); err != nil {
// Extraction failed
errResult := &DiagnoseResult{
FilePath: archivePath,
FileName: filepath.Base(archivePath),
Format: FormatClusterTarGz,
DetectedFormat: "Cluster Archive (tar.gz)",
IsValid: false,
Details: &DiagnoseDetails{},
}
errOutput := stderr.String()
if strings.Contains(errOutput, "No space left") ||
strings.Contains(errOutput, "cannot write") ||
strings.Contains(errOutput, "Disk quota exceeded") {
errResult.Errors = append(errResult.Errors,
"INSUFFICIENT DISK SPACE to extract archive for diagnosis",
fmt.Sprintf("Archive size: %s (needs ~%s for extraction)",
formatBytes(archiveInfo.Size()), formatBytes(requiredSpace)),
"Use CLI diagnosis instead: dbbackup restore diagnose "+archivePath,
"Or use --workdir flag to specify a location with more space")
} else if strings.Contains(errOutput, "unexpected end of file") ||
strings.Contains(errOutput, "Unexpected EOF") {
errResult.IsTruncated = true
errResult.IsCorrupted = true
errResult.Errors = append(errResult.Errors,
"Archive is TRUNCATED - extraction failed mid-way",
fmt.Sprintf("Error: %s", truncateString(errOutput, 200)),
"The backup file is incomplete and cannot be restored",
"Solution: Re-create the backup from source database")
} else {
errResult.IsCorrupted = true
errResult.Errors = append(errResult.Errors,
fmt.Sprintf("Extraction failed: %v", err),
fmt.Sprintf("tar error: %s", truncateString(errOutput, 300)))
}
// Still report what files we found in the listing
var dumpFiles []string
for _, f := range files {
if strings.HasSuffix(f, ".dump") || strings.HasSuffix(f, ".sql.gz") {
dumpFiles = append(dumpFiles, filepath.Base(f))
}
}
if len(dumpFiles) > 0 {
errResult.Details.TableList = dumpFiles
errResult.Details.TableCount = len(dumpFiles)
errResult.Warnings = append(errResult.Warnings,
fmt.Sprintf("Archive contains %d database dumps (listing only)", len(dumpFiles)))
}
return []*DiagnoseResult{errResult}, nil
}
// Find dump files
dumpsDir := filepath.Join(tempDir, "dumps")
entries, err := os.ReadDir(dumpsDir)
if err != nil {
// Try without dumps subdirectory
entries, err = os.ReadDir(tempDir)
if err != nil {
return nil, fmt.Errorf("cannot read extracted files: %w", err)
}
dumpsDir = tempDir
}
var results []*DiagnoseResult
for _, entry := range entries {
if entry.IsDir() {
continue
}
name := entry.Name()
if !strings.HasSuffix(name, ".dump") && !strings.HasSuffix(name, ".sql.gz") &&
!strings.HasSuffix(name, ".sql") {
continue
}
dumpPath := filepath.Join(dumpsDir, name)
d.log.Info("Diagnosing dump file", "file", name)
result, err := d.DiagnoseFile(dumpPath)
if err != nil {
d.log.Warn("Failed to diagnose file", "file", name, "error", err)
continue
}
results = append(results, result)
}
return results, nil
}
// PrintDiagnosis outputs a human-readable diagnosis report
func (d *Diagnoser) PrintDiagnosis(result *DiagnoseResult) {
fmt.Println("\n" + strings.Repeat("=", 70))
fmt.Printf("📋 DIAGNOSIS: %s\n", result.FileName)
fmt.Println(strings.Repeat("=", 70))
// Basic info
fmt.Printf("\nFile: %s\n", result.FilePath)
fmt.Printf("Size: %s\n", formatBytes(result.FileSize))
fmt.Printf("Format: %s\n", result.DetectedFormat)
// Status
if result.IsValid {
fmt.Println("\n✅ STATUS: VALID")
} else {
fmt.Println("\n❌ STATUS: INVALID")
}
if result.IsTruncated {
fmt.Println("⚠️ TRUNCATED: Yes - file appears incomplete")
}
if result.IsCorrupted {
fmt.Println("⚠️ CORRUPTED: Yes - file structure is damaged")
}
// Details
if result.Details != nil {
fmt.Println("\n📊 DETAILS:")
if result.Details.HasPGDMPSignature {
fmt.Println(" ✓ Has PGDMP signature (PostgreSQL custom format)")
}
if result.Details.HasSQLHeader {
fmt.Println(" ✓ Has PostgreSQL SQL header")
}
if result.Details.GzipValid {
fmt.Println(" ✓ Gzip compression valid")
}
if result.Details.PgRestoreListable {
fmt.Printf(" ✓ pg_restore can list contents (%d tables)\n", result.Details.TableCount)
}
if result.Details.CopyBlockCount > 0 {
fmt.Printf(" • Contains %d COPY blocks\n", result.Details.CopyBlockCount)
}
if result.Details.UnterminatedCopy {
fmt.Printf(" ✗ Unterminated COPY block: %s (line %d)\n",
result.Details.LastCopyTable, result.Details.LastCopyLineNumber)
}
if result.Details.ProperlyTerminated {
fmt.Println(" ✓ All COPY blocks properly terminated")
}
if result.Details.ExpandedSize > 0 {
fmt.Printf(" • Expanded size: %s (ratio: %.1fx)\n",
formatBytes(result.Details.ExpandedSize), result.Details.CompressionRatio)
}
}
// Errors
if len(result.Errors) > 0 {
fmt.Println("\n❌ ERRORS:")
for _, e := range result.Errors {
fmt.Printf(" • %s\n", e)
}
}
// Warnings
if len(result.Warnings) > 0 {
fmt.Println("\n⚠ WARNINGS:")
for _, w := range result.Warnings {
fmt.Printf(" • %s\n", w)
}
}
// Recommendations
if !result.IsValid {
fmt.Println("\n💡 RECOMMENDATIONS:")
if result.IsTruncated {
fmt.Println(" 1. Re-run the backup process for this database")
fmt.Println(" 2. Check disk space on backup server during backup")
fmt.Println(" 3. Verify network stability if backup was remote")
fmt.Println(" 4. Check backup logs for errors during the backup")
}
if result.IsCorrupted {
fmt.Println(" 1. Verify backup file was transferred completely")
fmt.Println(" 2. Check if backup file was modified after creation")
fmt.Println(" 3. Try restoring from a previous backup")
}
}
fmt.Println(strings.Repeat("=", 70))
}
// PrintDiagnosisJSON outputs diagnosis as JSON
func (d *Diagnoser) PrintDiagnosisJSON(result *DiagnoseResult) error {
output, err := json.MarshalIndent(result, "", " ")
if err != nil {
return err
}
fmt.Println(string(output))
return nil
}
// Helper functions
func truncateString(s string, maxLen int) string {
if len(s) <= maxLen {
return s
}
if maxLen <= 3 {
return s[:maxLen]
}
return s[:maxLen-3] + "..."
}
func formatBytes(bytes int64) string {
const unit = 1024
if bytes < unit {
return fmt.Sprintf("%d B", bytes)
}
div, exp := int64(unit), 0
for n := bytes / unit; n >= unit; n /= unit {
div *= unit
exp++
}
return fmt.Sprintf("%.1f %cB", float64(bytes)/float64(div), "KMGTPE"[exp])
}
func min(a, b int64) int64 {
if a < b {
return a
}
return b
}
func minInt(a, b int) int {
if a < b {
return a
}
return b
}

View File

@@ -27,6 +27,8 @@ type Engine struct {
progress progress.Indicator
detailedReporter *progress.DetailedReporter
dryRun bool
debugLogPath string // Path to save debug log on error
errorCollector *ErrorCollector // Collects detailed error info
}
// New creates a new restore engine
@@ -77,6 +79,11 @@ func NewWithProgress(cfg *config.Config, log logger.Logger, db database.Database
}
}
// SetDebugLogPath enables saving detailed error reports on failure
func (e *Engine) SetDebugLogPath(path string) {
e.debugLogPath = path
}
// loggerAdapter adapts our logger to the progress.Logger interface
type loggerAdapter struct {
logger logger.Logger
@@ -306,6 +313,11 @@ func (e *Engine) restoreMySQLSQL(ctx context.Context, archivePath, targetDB stri
// executeRestoreCommand executes a restore command
func (e *Engine) executeRestoreCommand(ctx context.Context, cmdArgs []string) error {
return e.executeRestoreCommandWithContext(ctx, cmdArgs, "", "", FormatUnknown)
}
// executeRestoreCommandWithContext executes a restore command with error collection context
func (e *Engine) executeRestoreCommandWithContext(ctx context.Context, cmdArgs []string, archivePath, targetDB string, format ArchiveFormat) error {
e.log.Info("Executing restore command", "command", strings.Join(cmdArgs, " "))
cmd := exec.CommandContext(ctx, cmdArgs[0], cmdArgs[1:]...)
@@ -316,6 +328,12 @@ func (e *Engine) executeRestoreCommand(ctx context.Context, cmdArgs []string) er
fmt.Sprintf("MYSQL_PWD=%s", e.cfg.Password),
)
// Create error collector if debug log path is set
var collector *ErrorCollector
if e.debugLogPath != "" {
collector = NewErrorCollector(e.cfg, e.log, archivePath, targetDB, format, true)
}
// Stream stderr to avoid memory issues with large output
// Don't use CombinedOutput() as it loads everything into memory
stderr, err := cmd.StderrPipe()
@@ -336,6 +354,12 @@ func (e *Engine) executeRestoreCommand(ctx context.Context, cmdArgs []string) er
n, err := stderr.Read(buf)
if n > 0 {
chunk := string(buf[:n])
// Feed to error collector if enabled
if collector != nil {
collector.CaptureStderr(chunk)
}
// Only capture REAL errors, not verbose output
if strings.Contains(chunk, "ERROR:") || strings.Contains(chunk, "FATAL:") || strings.Contains(chunk, "error:") {
lastError = strings.TrimSpace(chunk)
@@ -352,6 +376,12 @@ func (e *Engine) executeRestoreCommand(ctx context.Context, cmdArgs []string) er
}
if err := cmd.Wait(); err != nil {
// Get exit code
exitCode := 1
if exitErr, ok := err.(*exec.ExitError); ok {
exitCode = exitErr.ExitCode()
}
// PostgreSQL pg_restore returns exit code 1 even for ignorable errors
// Check if errors are ignorable (already exists, duplicate, etc.)
if lastError != "" && e.isIgnorableError(lastError) {
@@ -360,8 +390,12 @@ func (e *Engine) executeRestoreCommand(ctx context.Context, cmdArgs []string) er
}
// Classify error and provide helpful hints
var classification *checks.ErrorClassification
var errType, errHint string
if lastError != "" {
classification := checks.ClassifyError(lastError)
classification = checks.ClassifyError(lastError)
errType = classification.Type
errHint = classification.Hint
e.log.Error("Restore command failed",
"error", err,
"last_stderr", lastError,
@@ -369,11 +403,37 @@ func (e *Engine) executeRestoreCommand(ctx context.Context, cmdArgs []string) er
"error_type", classification.Type,
"hint", classification.Hint,
"action", classification.Action)
return fmt.Errorf("restore failed: %w (last error: %s, total errors: %d) - %s",
err, lastError, errorCount, classification.Hint)
} else {
e.log.Error("Restore command failed", "error", err, "error_count", errorCount)
}
e.log.Error("Restore command failed", "error", err, "last_stderr", lastError, "error_count", errorCount)
// Generate and save error report if collector is enabled
if collector != nil {
collector.SetExitCode(exitCode)
report := collector.GenerateReport(
lastError,
errType,
errHint,
)
// Print report to console
collector.PrintReport(report)
// Save to file
if e.debugLogPath != "" {
if saveErr := collector.SaveReport(report, e.debugLogPath); saveErr != nil {
e.log.Warn("Failed to save debug log", "error", saveErr)
} else {
e.log.Info("Debug log saved", "path", e.debugLogPath)
fmt.Printf("\n📋 Detailed error report saved to: %s\n", e.debugLogPath)
}
}
}
if lastError != "" {
return fmt.Errorf("restore failed: %w (last error: %s, total errors: %d) - %s",
err, lastError, errorCount, errHint)
}
return fmt.Errorf("restore failed: %w", err)
}

View File

@@ -0,0 +1,569 @@
package restore
import (
"bufio"
"compress/gzip"
"encoding/json"
"fmt"
"io"
"os"
"os/exec"
"path/filepath"
"runtime"
"strings"
"time"
"dbbackup/internal/config"
"dbbackup/internal/logger"
)
// RestoreErrorReport contains comprehensive information about a restore failure
type RestoreErrorReport struct {
// Metadata
Timestamp time.Time `json:"timestamp"`
Version string `json:"version"`
GoVersion string `json:"go_version"`
OS string `json:"os"`
Arch string `json:"arch"`
// Archive info
ArchivePath string `json:"archive_path"`
ArchiveSize int64 `json:"archive_size"`
ArchiveFormat string `json:"archive_format"`
// Database info
TargetDB string `json:"target_db"`
DatabaseType string `json:"database_type"`
// Error details
ExitCode int `json:"exit_code"`
ErrorMessage string `json:"error_message"`
ErrorType string `json:"error_type"`
ErrorHint string `json:"error_hint"`
TotalErrors int `json:"total_errors"`
// Captured output
LastStderr []string `json:"last_stderr"`
FirstErrors []string `json:"first_errors"`
// Context around failure
FailureContext *FailureContext `json:"failure_context,omitempty"`
// Diagnosis results
DiagnosisResult *DiagnoseResult `json:"diagnosis_result,omitempty"`
// Environment (sanitized)
PostgresVersion string `json:"postgres_version,omitempty"`
PgRestoreVersion string `json:"pg_restore_version,omitempty"`
PsqlVersion string `json:"psql_version,omitempty"`
// Recommendations
Recommendations []string `json:"recommendations"`
}
// FailureContext captures context around where the failure occurred
type FailureContext struct {
// For SQL/COPY errors
FailedLine int `json:"failed_line,omitempty"`
FailedStatement string `json:"failed_statement,omitempty"`
SurroundingLines []string `json:"surrounding_lines,omitempty"`
// For COPY block errors
InCopyBlock bool `json:"in_copy_block,omitempty"`
CopyTableName string `json:"copy_table_name,omitempty"`
CopyStartLine int `json:"copy_start_line,omitempty"`
SampleCopyData []string `json:"sample_copy_data,omitempty"`
// File position info
BytePosition int64 `json:"byte_position,omitempty"`
PercentComplete float64 `json:"percent_complete,omitempty"`
}
// ErrorCollector captures detailed error information during restore
type ErrorCollector struct {
log logger.Logger
cfg *config.Config
archivePath string
targetDB string
format ArchiveFormat
// Captured data
stderrLines []string
firstErrors []string
lastErrors []string
totalErrors int
exitCode int
// Limits
maxStderrLines int
maxErrorCapture int
// State
startTime time.Time
enabled bool
}
// NewErrorCollector creates a new error collector
func NewErrorCollector(cfg *config.Config, log logger.Logger, archivePath, targetDB string, format ArchiveFormat, enabled bool) *ErrorCollector {
return &ErrorCollector{
log: log,
cfg: cfg,
archivePath: archivePath,
targetDB: targetDB,
format: format,
stderrLines: make([]string, 0, 100),
firstErrors: make([]string, 0, 10),
lastErrors: make([]string, 0, 10),
maxStderrLines: 100,
maxErrorCapture: 10,
startTime: time.Now(),
enabled: enabled,
}
}
// CaptureStderr processes and captures stderr output
func (ec *ErrorCollector) CaptureStderr(chunk string) {
if !ec.enabled {
return
}
lines := strings.Split(chunk, "\n")
for _, line := range lines {
line = strings.TrimSpace(line)
if line == "" {
continue
}
// Store last N lines of stderr
if len(ec.stderrLines) >= ec.maxStderrLines {
// Shift array, drop oldest
ec.stderrLines = ec.stderrLines[1:]
}
ec.stderrLines = append(ec.stderrLines, line)
// Check if this is an error line
if isErrorLine(line) {
ec.totalErrors++
// Capture first N errors
if len(ec.firstErrors) < ec.maxErrorCapture {
ec.firstErrors = append(ec.firstErrors, line)
}
// Keep last N errors (ring buffer style)
if len(ec.lastErrors) >= ec.maxErrorCapture {
ec.lastErrors = ec.lastErrors[1:]
}
ec.lastErrors = append(ec.lastErrors, line)
}
}
}
// SetExitCode records the exit code
func (ec *ErrorCollector) SetExitCode(code int) {
ec.exitCode = code
}
// GenerateReport creates a comprehensive error report
func (ec *ErrorCollector) GenerateReport(errMessage string, errType string, errHint string) *RestoreErrorReport {
report := &RestoreErrorReport{
Timestamp: time.Now(),
Version: "1.0.0", // TODO: inject actual version
GoVersion: runtime.Version(),
OS: runtime.GOOS,
Arch: runtime.GOARCH,
ArchivePath: ec.archivePath,
ArchiveFormat: ec.format.String(),
TargetDB: ec.targetDB,
DatabaseType: getDatabaseType(ec.format),
ExitCode: ec.exitCode,
ErrorMessage: errMessage,
ErrorType: errType,
ErrorHint: errHint,
TotalErrors: ec.totalErrors,
LastStderr: ec.stderrLines,
FirstErrors: ec.firstErrors,
}
// Get archive size
if stat, err := os.Stat(ec.archivePath); err == nil {
report.ArchiveSize = stat.Size()
}
// Get tool versions
report.PostgresVersion = getCommandVersion("postgres", "--version")
report.PgRestoreVersion = getCommandVersion("pg_restore", "--version")
report.PsqlVersion = getCommandVersion("psql", "--version")
// Analyze failure context
report.FailureContext = ec.analyzeFailureContext()
// Run diagnosis if not already done
diagnoser := NewDiagnoser(ec.log, false)
if diagResult, err := diagnoser.DiagnoseFile(ec.archivePath); err == nil {
report.DiagnosisResult = diagResult
}
// Generate recommendations
report.Recommendations = ec.generateRecommendations(report)
return report
}
// analyzeFailureContext extracts context around the failure
func (ec *ErrorCollector) analyzeFailureContext() *FailureContext {
ctx := &FailureContext{}
// Look for line number in errors
for _, errLine := range ec.lastErrors {
if lineNum := extractLineNumber(errLine); lineNum > 0 {
ctx.FailedLine = lineNum
break
}
}
// Look for COPY-related errors
for _, errLine := range ec.lastErrors {
if strings.Contains(errLine, "COPY") || strings.Contains(errLine, "syntax error") {
ctx.InCopyBlock = true
// Try to extract table name
if tableName := extractTableName(errLine); tableName != "" {
ctx.CopyTableName = tableName
}
break
}
}
// If we have a line number, try to get surrounding context from the dump
if ctx.FailedLine > 0 && ec.archivePath != "" {
ctx.SurroundingLines = ec.getSurroundingLines(ctx.FailedLine, 5)
}
return ctx
}
// getSurroundingLines reads lines around a specific line number from the dump
func (ec *ErrorCollector) getSurroundingLines(lineNum int, context int) []string {
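// Note: line-based context is only meaningful for plain SQL dumps (optionally
// gzip-compressed); custom-format archives are binary and yield no useful lines.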
var reader io.Reader
var lines []string
file, err := os.Open(ec.archivePath)
if err != nil {
return nil
}
defer file.Close()
// Handle compressed files
if strings.HasSuffix(ec.archivePath, ".gz") {
gz, err := gzip.NewReader(file)
if err != nil {
return nil
}
defer gz.Close()
reader = gz
} else {
reader = file
}
scanner := bufio.NewScanner(reader)
buf := make([]byte, 0, 1024*1024)
scanner.Buffer(buf, 10*1024*1024)
currentLine := 0
startLine := lineNum - context
endLine := lineNum + context
if startLine < 1 {
startLine = 1
}
for scanner.Scan() {
currentLine++
if currentLine >= startLine && currentLine <= endLine {
prefix := " "
if currentLine == lineNum {
prefix = "> "
}
lines = append(lines, fmt.Sprintf("%s%d: %s", prefix, currentLine, truncateString(scanner.Text(), 100)))
}
if currentLine > endLine {
break
}
}
return lines
}
// generateRecommendations provides actionable recommendations based on the error
func (ec *ErrorCollector) generateRecommendations(report *RestoreErrorReport) []string {
var recs []string
// Check diagnosis results
if report.DiagnosisResult != nil {
if report.DiagnosisResult.IsTruncated {
recs = append(recs,
"CRITICAL: Backup file is truncated/incomplete",
"Action: Re-run the backup for the affected database",
"Check: Verify disk space was available during backup",
"Check: Verify network was stable during backup transfer",
)
}
if report.DiagnosisResult.IsCorrupted {
recs = append(recs,
"CRITICAL: Backup file appears corrupted",
"Action: Restore from a previous backup",
"Action: Verify backup file checksum if available",
)
}
if report.DiagnosisResult.Details != nil && report.DiagnosisResult.Details.UnterminatedCopy {
recs = append(recs,
fmt.Sprintf("ISSUE: COPY block for table '%s' was not terminated",
report.DiagnosisResult.Details.LastCopyTable),
"Cause: Backup was interrupted during data export",
"Action: Re-run backup ensuring it completes fully",
)
}
}
// Check error patterns
if report.TotalErrors > 1000000 {
recs = append(recs,
"ISSUE: Millions of errors indicate structural problem, not individual data issues",
"Cause: Likely wrong restore method or truncated dump",
"Check: Verify dump format matches restore command",
)
}
// Check for common error types
errLower := strings.ToLower(report.ErrorMessage)
if strings.Contains(errLower, "syntax error") {
recs = append(recs,
"ISSUE: SQL syntax errors during restore",
"Cause: COPY data being interpreted as SQL commands",
"Check: Run 'dbbackup restore diagnose <archive>' for detailed analysis",
)
}
if strings.Contains(errLower, "permission denied") {
recs = append(recs,
"ISSUE: Permission denied",
"Action: Check database user has sufficient privileges",
"Action: For ownership preservation, use a superuser account",
)
}
if strings.Contains(errLower, "does not exist") {
recs = append(recs,
"ISSUE: Missing object reference",
"Action: Ensure globals.sql was restored first (for roles/tablespaces)",
"Action: Check if target database was created",
)
}
if len(recs) == 0 {
recs = append(recs,
"Run 'dbbackup restore diagnose <archive>' for detailed analysis",
"Check the stderr output above for specific error messages",
"Review the PostgreSQL/MySQL logs on the target server",
)
}
return recs
}
// SaveReport saves the error report to a file
func (ec *ErrorCollector) SaveReport(report *RestoreErrorReport, outputPath string) error {
// Create directory if needed
dir := filepath.Dir(outputPath)
if err := os.MkdirAll(dir, 0755); err != nil {
return fmt.Errorf("failed to create directory: %w", err)
}
// Marshal to JSON with indentation
data, err := json.MarshalIndent(report, "", " ")
if err != nil {
return fmt.Errorf("failed to marshal report: %w", err)
}
// Write file
if err := os.WriteFile(outputPath, data, 0644); err != nil {
return fmt.Errorf("failed to write report: %w", err)
}
return nil
}
// PrintReport prints a human-readable summary of the error report
func (ec *ErrorCollector) PrintReport(report *RestoreErrorReport) {
fmt.Println()
fmt.Println(strings.Repeat("═", 70))
fmt.Println(" 🔴 RESTORE ERROR REPORT")
fmt.Println(strings.Repeat("═", 70))
fmt.Printf("\n📅 Timestamp: %s\n", report.Timestamp.Format("2006-01-02 15:04:05"))
fmt.Printf("📦 Archive: %s\n", filepath.Base(report.ArchivePath))
fmt.Printf("📊 Format: %s\n", report.ArchiveFormat)
fmt.Printf("🎯 Target DB: %s\n", report.TargetDB)
fmt.Printf("⚠️ Exit Code: %d\n", report.ExitCode)
fmt.Printf("❌ Total Errors: %d\n", report.TotalErrors)
fmt.Println("\n" + strings.Repeat("─", 70))
fmt.Println("ERROR DETAILS:")
fmt.Println(strings.Repeat("─", 70))
fmt.Printf("\nType: %s\n", report.ErrorType)
fmt.Printf("Message: %s\n", report.ErrorMessage)
if report.ErrorHint != "" {
fmt.Printf("Hint: %s\n", report.ErrorHint)
}
// Show failure context
if report.FailureContext != nil && report.FailureContext.FailedLine > 0 {
fmt.Println("\n" + strings.Repeat("─", 70))
fmt.Println("FAILURE CONTEXT:")
fmt.Println(strings.Repeat("─", 70))
fmt.Printf("\nFailed at line: %d\n", report.FailureContext.FailedLine)
if report.FailureContext.InCopyBlock {
fmt.Printf("Inside COPY block for table: %s\n", report.FailureContext.CopyTableName)
}
if len(report.FailureContext.SurroundingLines) > 0 {
fmt.Println("\nSurrounding lines:")
for _, line := range report.FailureContext.SurroundingLines {
fmt.Println(line)
}
}
}
// Show first few errors
if len(report.FirstErrors) > 0 {
fmt.Println("\n" + strings.Repeat("─", 70))
fmt.Println("FIRST ERRORS:")
fmt.Println(strings.Repeat("─", 70))
for i, err := range report.FirstErrors {
if i >= 5 {
fmt.Printf("... and %d more\n", len(report.FirstErrors)-5)
break
}
fmt.Printf(" %d. %s\n", i+1, truncateString(err, 100))
}
}
// Show diagnosis summary
if report.DiagnosisResult != nil && !report.DiagnosisResult.IsValid {
fmt.Println("\n" + strings.Repeat("─", 70))
fmt.Println("DIAGNOSIS:")
fmt.Println(strings.Repeat("─", 70))
if report.DiagnosisResult.IsTruncated {
fmt.Println(" ❌ File is TRUNCATED")
}
if report.DiagnosisResult.IsCorrupted {
fmt.Println(" ❌ File is CORRUPTED")
}
for i, err := range report.DiagnosisResult.Errors {
if i >= 3 {
break
}
fmt.Printf(" • %s\n", err)
}
}
// Show recommendations
fmt.Println("\n" + strings.Repeat("─", 70))
fmt.Println("💡 RECOMMENDATIONS:")
fmt.Println(strings.Repeat("─", 70))
for _, rec := range report.Recommendations {
fmt.Printf(" • %s\n", rec)
}
// Show tool versions
fmt.Println("\n" + strings.Repeat("─", 70))
fmt.Println("ENVIRONMENT:")
fmt.Println(strings.Repeat("─", 70))
fmt.Printf(" OS: %s/%s\n", report.OS, report.Arch)
fmt.Printf(" Go: %s\n", report.GoVersion)
if report.PgRestoreVersion != "" {
fmt.Printf(" pg_restore: %s\n", report.PgRestoreVersion)
}
if report.PsqlVersion != "" {
fmt.Printf(" psql: %s\n", report.PsqlVersion)
}
fmt.Println(strings.Repeat("═", 70))
}
// Helper functions
func isErrorLine(line string) bool {
return strings.Contains(line, "ERROR:") ||
strings.Contains(line, "FATAL:") ||
strings.Contains(line, "error:") ||
strings.Contains(line, "PANIC:")
}
func extractLineNumber(errLine string) int {
// Look for patterns like "LINE 1:" or "line 123"
patterns := []string{"LINE ", "line "}
for _, pattern := range patterns {
if idx := strings.Index(errLine, pattern); idx >= 0 {
numStart := idx + len(pattern)
numEnd := numStart
for numEnd < len(errLine) && errLine[numEnd] >= '0' && errLine[numEnd] <= '9' {
numEnd++
}
if numEnd > numStart {
var num int
fmt.Sscanf(errLine[numStart:numEnd], "%d", &num)
return num
}
}
}
return 0
}
func extractTableName(errLine string) string {
// Look for patterns like 'COPY "tablename"' or 'table "tablename"'
patterns := []string{"COPY ", "table "}
for _, pattern := range patterns {
if idx := strings.Index(errLine, pattern); idx >= 0 {
start := idx + len(pattern)
// Skip optional quote
if start < len(errLine) && errLine[start] == '"' {
start++
}
end := start
for end < len(errLine) && errLine[end] != '"' && errLine[end] != ' ' && errLine[end] != '(' {
end++
}
if end > start {
return errLine[start:end]
}
}
}
return ""
}
func getDatabaseType(format ArchiveFormat) string {
if format.IsMySQL() {
return "mysql"
}
return "postgresql"
}
func getCommandVersion(cmd string, arg string) string {
output, err := exec.Command(cmd, arg).CombinedOutput()
if err != nil {
return ""
}
// Return first line only
lines := strings.Split(string(output), "\n")
if len(lines) > 0 {
return strings.TrimSpace(lines[0])
}
return ""
}

View File

@@ -201,6 +201,12 @@ func (m ArchiveBrowserModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
if len(m.archives) > 0 && m.cursor < len(m.archives) {
selected := m.archives[m.cursor]
// Handle diagnose mode - go directly to diagnosis view
if m.mode == "diagnose" {
diagnoseView := NewDiagnoseView(m.config, m.logger, m.parent, m.ctx, selected)
return diagnoseView, diagnoseView.Init()
}
// Validate selection based on mode
if m.mode == "restore-cluster" && !selected.Format.IsClusterBackup() {
m.message = errorStyle.Render("❌ Please select a cluster backup (.tar.gz)")
@@ -227,6 +233,14 @@ func (m ArchiveBrowserModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
formatSize(selected.Size),
selected.Modified.Format("2006-01-02 15:04:05"))
}
case "d":
// Run diagnosis on selected archive
if len(m.archives) > 0 && m.cursor < len(m.archives) {
selected := m.archives[m.cursor]
diagnoseView := NewDiagnoseView(m.config, m.logger, m, m.ctx, selected)
return diagnoseView, diagnoseView.Init()
}
}
}
@@ -242,6 +256,8 @@ func (m ArchiveBrowserModel) View() string {
title = "📦 Select Archive to Restore (Single Database)"
} else if m.mode == "restore-cluster" {
title = "📦 Select Archive to Restore (Cluster)"
} else if m.mode == "diagnose" {
title = "🔍 Select Archive to Diagnose"
}
s.WriteString(titleStyle.Render(title))
@@ -335,7 +351,7 @@ func (m ArchiveBrowserModel) View() string {
s.WriteString(infoStyle.Render(fmt.Sprintf("Total: %d archive(s) | Selected: %d/%d",
len(m.archives), m.cursor+1, len(m.archives))))
s.WriteString("\n")
s.WriteString(infoStyle.Render("⌨️ ↑/↓: Navigate | Enter: Select | f: Filter | i: Info | Esc: Back"))
s.WriteString(infoStyle.Render("⌨️ ↑/↓: Navigate | Enter: Select | d: Diagnose | f: Filter | i: Info | Esc: Back"))
return s.String()
}

View File

@@ -0,0 +1,463 @@
package tui
import (
"context"
"fmt"
"os"
"strings"
tea "github.com/charmbracelet/bubbletea"
"github.com/charmbracelet/lipgloss"
"dbbackup/internal/config"
"dbbackup/internal/logger"
"dbbackup/internal/restore"
)
var (
diagnoseBoxStyle = lipgloss.NewStyle().
Border(lipgloss.RoundedBorder()).
BorderForeground(lipgloss.Color("63")).
Padding(1, 2)
diagnosePassStyle = lipgloss.NewStyle().
Foreground(lipgloss.Color("2")).
Bold(true)
diagnoseFailStyle = lipgloss.NewStyle().
Foreground(lipgloss.Color("1")).
Bold(true)
diagnoseWarnStyle = lipgloss.NewStyle().
Foreground(lipgloss.Color("3"))
diagnoseInfoStyle = lipgloss.NewStyle().
Foreground(lipgloss.Color("244"))
diagnoseHeaderStyle = lipgloss.NewStyle().
Foreground(lipgloss.Color("63")).
Bold(true)
)
// DiagnoseViewModel shows backup file diagnosis results
type DiagnoseViewModel struct {
config *config.Config
logger logger.Logger
parent tea.Model
ctx context.Context
archive ArchiveInfo
result *restore.DiagnoseResult
results []*restore.DiagnoseResult // For cluster archives
running bool
completed bool
progress string
cursor int // For scrolling through cluster results
err error
}
// NewDiagnoseView creates a new diagnose view
func NewDiagnoseView(cfg *config.Config, log logger.Logger, parent tea.Model, ctx context.Context, archive ArchiveInfo) DiagnoseViewModel {
return DiagnoseViewModel{
config: cfg,
logger: log,
parent: parent,
ctx: ctx,
archive: archive,
running: true,
progress: "Starting diagnosis...",
}
}
func (m DiagnoseViewModel) Init() tea.Cmd {
return runDiagnosis(m.config, m.logger, m.archive)
}
type diagnoseCompleteMsg struct {
result *restore.DiagnoseResult
results []*restore.DiagnoseResult
err error
}
type diagnoseProgressMsg struct {
message string
}
func runDiagnosis(cfg *config.Config, log logger.Logger, archive ArchiveInfo) tea.Cmd {
return func() tea.Msg {
diagnoser := restore.NewDiagnoser(log, true)
// For cluster archives, we can do deep analysis
if archive.Format.IsClusterBackup() {
// Create temp directory (use WorkDir if configured for large archives)
log.Info("Creating temp directory for diagnosis", "workDir", cfg.WorkDir)
tempDir, err := createTempDirIn(cfg.WorkDir, "dbbackup-diagnose-*")
if err != nil {
return diagnoseCompleteMsg{err: fmt.Errorf("failed to create temp dir (workDir=%s): %w", cfg.WorkDir, err)}
}
log.Info("Using temp directory", "path", tempDir)
defer removeTempDir(tempDir)
// Diagnose all dumps in the cluster
results, err := diagnoser.DiagnoseClusterDumps(archive.Path, tempDir)
if err != nil {
return diagnoseCompleteMsg{err: err}
}
return diagnoseCompleteMsg{results: results}
}
// Single file diagnosis
result, err := diagnoser.DiagnoseFile(archive.Path)
if err != nil {
return diagnoseCompleteMsg{err: err}
}
return diagnoseCompleteMsg{result: result}
}
}
func (m DiagnoseViewModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
switch msg := msg.(type) {
case diagnoseCompleteMsg:
m.running = false
m.completed = true
if msg.err != nil {
m.err = msg.err
return m, nil
}
m.result = msg.result
m.results = msg.results
return m, nil
case diagnoseProgressMsg:
m.progress = msg.message
return m, nil
case tea.KeyMsg:
switch msg.String() {
case "ctrl+c", "q", "esc":
return m.parent, nil
case "up", "k":
if len(m.results) > 0 && m.cursor > 0 {
m.cursor--
}
case "down", "j":
if len(m.results) > 0 && m.cursor < len(m.results)-1 {
m.cursor++
}
case "enter", " ":
return m.parent, nil
}
}
return m, nil
}
func (m DiagnoseViewModel) View() string {
var s strings.Builder
// Header
s.WriteString(titleStyle.Render("🔍 Backup Diagnosis"))
s.WriteString("\n\n")
// Archive info
s.WriteString(diagnoseHeaderStyle.Render("Archive: "))
s.WriteString(m.archive.Name)
s.WriteString("\n")
s.WriteString(diagnoseHeaderStyle.Render("Format: "))
s.WriteString(m.archive.Format.String())
s.WriteString("\n")
s.WriteString(diagnoseHeaderStyle.Render("Size: "))
s.WriteString(formatSize(m.archive.Size))
s.WriteString("\n\n")
if m.running {
s.WriteString(infoStyle.Render("⏳ " + m.progress))
s.WriteString("\n\n")
s.WriteString(diagnoseInfoStyle.Render("This may take a while for large archives..."))
return s.String()
}
if m.err != nil {
s.WriteString(errorStyle.Render(fmt.Sprintf("❌ Diagnosis failed: %v", m.err)))
s.WriteString("\n\n")
s.WriteString(infoStyle.Render("Press Enter or Esc to go back"))
return s.String()
}
// For cluster archives, show summary + details
if len(m.results) > 0 {
s.WriteString(m.renderClusterResults())
} else if m.result != nil {
s.WriteString(m.renderSingleResult(m.result))
}
s.WriteString("\n")
s.WriteString(infoStyle.Render("Press Enter or Esc to go back"))
return s.String()
}
func (m DiagnoseViewModel) renderSingleResult(result *restore.DiagnoseResult) string {
var s strings.Builder
// Status
s.WriteString(strings.Repeat("─", 60))
s.WriteString("\n")
if result.IsValid {
s.WriteString(diagnosePassStyle.Render("✅ STATUS: VALID"))
} else {
s.WriteString(diagnoseFailStyle.Render("❌ STATUS: INVALID"))
}
s.WriteString("\n")
if result.IsTruncated {
s.WriteString(diagnoseFailStyle.Render("⚠️ TRUNCATED: File appears incomplete"))
s.WriteString("\n")
}
if result.IsCorrupted {
s.WriteString(diagnoseFailStyle.Render("⚠️ CORRUPTED: File structure is damaged"))
s.WriteString("\n")
}
s.WriteString(strings.Repeat("─", 60))
s.WriteString("\n\n")
// Details
if result.Details != nil {
s.WriteString(diagnoseHeaderStyle.Render("📊 DETAILS:"))
s.WriteString("\n")
if result.Details.HasPGDMPSignature {
s.WriteString(diagnosePassStyle.Render(" ✓ "))
s.WriteString("Has PGDMP signature (custom format)\n")
}
if result.Details.HasSQLHeader {
s.WriteString(diagnosePassStyle.Render(" ✓ "))
s.WriteString("Has PostgreSQL SQL header\n")
}
if result.Details.GzipValid {
s.WriteString(diagnosePassStyle.Render(" ✓ "))
s.WriteString("Gzip compression valid\n")
}
if result.Details.PgRestoreListable {
s.WriteString(diagnosePassStyle.Render(" ✓ "))
s.WriteString(fmt.Sprintf("pg_restore can list contents (%d tables)\n", result.Details.TableCount))
}
if result.Details.CopyBlockCount > 0 {
s.WriteString(diagnoseInfoStyle.Render(" • "))
s.WriteString(fmt.Sprintf("Contains %d COPY blocks\n", result.Details.CopyBlockCount))
}
if result.Details.UnterminatedCopy {
s.WriteString(diagnoseFailStyle.Render(" ✗ "))
s.WriteString(fmt.Sprintf("Unterminated COPY block: %s (line %d)\n",
result.Details.LastCopyTable, result.Details.LastCopyLineNumber))
}
if result.Details.ProperlyTerminated {
s.WriteString(diagnosePassStyle.Render(" ✓ "))
s.WriteString("All COPY blocks properly terminated\n")
}
if result.Details.ExpandedSize > 0 {
s.WriteString(diagnoseInfoStyle.Render(" • "))
s.WriteString(fmt.Sprintf("Expanded size: %s (ratio: %.1fx)\n",
formatSize(result.Details.ExpandedSize), result.Details.CompressionRatio))
}
}
// Errors
if len(result.Errors) > 0 {
s.WriteString("\n")
s.WriteString(diagnoseFailStyle.Render("❌ ERRORS:"))
s.WriteString("\n")
for i, e := range result.Errors {
if i >= 5 {
s.WriteString(diagnoseInfoStyle.Render(fmt.Sprintf(" ... and %d more\n", len(result.Errors)-5)))
break
}
s.WriteString(diagnoseFailStyle.Render(" • "))
s.WriteString(truncate(e, 70))
s.WriteString("\n")
}
}
// Warnings
if len(result.Warnings) > 0 {
s.WriteString("\n")
s.WriteString(diagnoseWarnStyle.Render("⚠️ WARNINGS:"))
s.WriteString("\n")
for i, w := range result.Warnings {
if i >= 3 {
s.WriteString(diagnoseInfoStyle.Render(fmt.Sprintf(" ... and %d more\n", len(result.Warnings)-3)))
break
}
s.WriteString(diagnoseWarnStyle.Render(" • "))
s.WriteString(truncate(w, 70))
s.WriteString("\n")
}
}
// Recommendations
if !result.IsValid {
s.WriteString("\n")
s.WriteString(diagnoseHeaderStyle.Render("💡 RECOMMENDATIONS:"))
s.WriteString("\n")
if result.IsTruncated {
s.WriteString(" 1. Re-run the backup process for this database\n")
s.WriteString(" 2. Check disk space on backup server\n")
s.WriteString(" 3. Verify network stability for remote backups\n")
}
if result.IsCorrupted {
s.WriteString(" 1. Verify backup was transferred completely\n")
s.WriteString(" 2. Try restoring from a previous backup\n")
}
}
return s.String()
}
func (m DiagnoseViewModel) renderClusterResults() string {
var s strings.Builder
// Summary
validCount := 0
invalidCount := 0
for _, r := range m.results {
if r.IsValid {
validCount++
} else {
invalidCount++
}
}
s.WriteString(strings.Repeat("─", 60))
s.WriteString("\n")
s.WriteString(diagnoseHeaderStyle.Render(fmt.Sprintf("📊 CLUSTER SUMMARY: %d databases\n", len(m.results))))
s.WriteString(strings.Repeat("─", 60))
s.WriteString("\n\n")
if invalidCount == 0 {
s.WriteString(diagnosePassStyle.Render("✅ All dumps are valid"))
s.WriteString("\n\n")
} else {
s.WriteString(diagnoseFailStyle.Render(fmt.Sprintf("❌ %d/%d dumps have issues", invalidCount, len(m.results))))
s.WriteString("\n\n")
}
// List all dumps with status
s.WriteString(diagnoseHeaderStyle.Render("Database Dumps:"))
s.WriteString("\n")
// Show visible range based on cursor
start := m.cursor - 5
if start < 0 {
start = 0
}
end := start + 12
if end > len(m.results) {
end = len(m.results)
}
for i := start; i < end; i++ {
r := m.results[i]
cursor := " "
if i == m.cursor {
cursor = ">"
}
var status string
if r.IsValid {
status = diagnosePassStyle.Render("✓")
} else if r.IsTruncated {
status = diagnoseFailStyle.Render("✗ TRUNCATED")
} else if r.IsCorrupted {
status = diagnoseFailStyle.Render("✗ CORRUPTED")
} else {
status = diagnoseFailStyle.Render("✗ INVALID")
}
line := fmt.Sprintf("%s %s %-35s %s",
cursor,
status,
truncate(r.FileName, 35),
formatSize(r.FileSize))
if i == m.cursor {
s.WriteString(archiveSelectedStyle.Render(line))
} else {
s.WriteString(line)
}
s.WriteString("\n")
}
// Show selected dump details
if m.cursor < len(m.results) {
selected := m.results[m.cursor]
s.WriteString("\n")
s.WriteString(strings.Repeat("─", 60))
s.WriteString("\n")
s.WriteString(diagnoseHeaderStyle.Render("Selected: " + selected.FileName))
s.WriteString("\n\n")
// Show condensed details for selected
if selected.Details != nil {
if selected.Details.UnterminatedCopy {
s.WriteString(diagnoseFailStyle.Render(" ✗ Unterminated COPY: "))
s.WriteString(selected.Details.LastCopyTable)
s.WriteString(fmt.Sprintf(" (line %d)\n", selected.Details.LastCopyLineNumber))
}
if len(selected.Details.SampleCopyData) > 0 {
s.WriteString(diagnoseInfoStyle.Render(" Sample orphaned data: "))
s.WriteString(truncate(selected.Details.SampleCopyData[0], 50))
s.WriteString("\n")
}
}
if len(selected.Errors) > 0 {
for i, e := range selected.Errors {
if i >= 2 {
break
}
s.WriteString(diagnoseFailStyle.Render(" • "))
s.WriteString(truncate(e, 55))
s.WriteString("\n")
}
}
}
s.WriteString("\n")
s.WriteString(infoStyle.Render("Use ↑/↓ to browse, Enter/Esc to go back"))
return s.String()
}
// Helper functions for temp directory management
func createTempDir(pattern string) (string, error) {
return os.MkdirTemp("", pattern)
}
func createTempDirIn(baseDir, pattern string) (string, error) {
if baseDir == "" {
return os.MkdirTemp("", pattern)
}
// Ensure base directory exists
if err := os.MkdirAll(baseDir, 0755); err != nil {
return "", fmt.Errorf("cannot create work directory: %w", err)
}
return os.MkdirTemp(baseDir, pattern)
}
func removeTempDir(path string) error {
return os.RemoveAll(path)
}

View File

@@ -92,6 +92,7 @@ func NewMenuModel(cfg *config.Config, log logger.Logger) *MenuModel {
"────────────────────────────────",
"Restore Single Database",
"Restore Cluster Backup",
"Diagnose Backup File",
"List & Manage Backups",
"────────────────────────────────",
"View Active Operations",
@@ -163,19 +164,21 @@ func (m *MenuModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
return m.handleRestoreSingle()
case 5: // Restore Cluster Backup
return m.handleRestoreCluster()
case 6: // List & Manage Backups
case 6: // Diagnose Backup File
return m.handleDiagnoseBackup()
case 7: // List & Manage Backups
return m.handleBackupManager()
case 8: // View Active Operations
case 9: // View Active Operations
return m.handleViewOperations()
case 9: // Show Operation History
case 10: // Show Operation History
return m.handleOperationHistory()
case 10: // Database Status
case 11: // Database Status
return m.handleStatus()
case 11: // Settings
case 12: // Settings
return m.handleSettings()
case 12: // Clear History
case 13: // Clear History
m.message = "🗑️ History cleared"
case 13: // Quit
case 14: // Quit
if m.cancel != nil {
m.cancel()
}
@@ -244,21 +247,23 @@ func (m *MenuModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
return m.handleRestoreSingle()
case 5: // Restore Cluster Backup
return m.handleRestoreCluster()
case 6: // List & Manage Backups
case 6: // Diagnose Backup File
return m.handleDiagnoseBackup()
case 7: // List & Manage Backups
return m.handleBackupManager()
case 7: // Separator
case 8: // Separator
// Do nothing
case 8: // View Active Operations
case 9: // View Active Operations
return m.handleViewOperations()
case 9: // Show Operation History
case 10: // Show Operation History
return m.handleOperationHistory()
case 10: // Database Status
case 11: // Database Status
return m.handleStatus()
case 11: // Settings
case 12: // Settings
return m.handleSettings()
case 12: // Clear History
case 13: // Clear History
m.message = "🗑️ History cleared"
case 13: // Quit
case 14: // Quit
if m.cancel != nil {
m.cancel()
}
@@ -407,6 +412,12 @@ func (m *MenuModel) handleBackupManager() (tea.Model, tea.Cmd) {
return manager, manager.Init()
}
// handleDiagnoseBackup opens archive browser for diagnosis
func (m *MenuModel) handleDiagnoseBackup() (tea.Model, tea.Cmd) {
browser := NewArchiveBrowser(m.config, m.logger, m, m.ctx, "diagnose")
return browser, browser.Init()
}
func (m *MenuModel) applyDatabaseSelection() {
if m == nil || len(m.dbTypes) == 0 {
return

View File

@@ -31,6 +31,8 @@ type RestoreExecutionModel struct {
restoreType string
cleanClusterFirst bool // Drop all user databases before cluster restore
existingDBs []string // List of databases to drop
saveDebugLog bool // Save detailed error report on failure
workDir string // Custom work directory for extraction
// Progress tracking
status string
@@ -49,7 +51,7 @@ type RestoreExecutionModel struct {
}
// NewRestoreExecution creates a new restore execution model
func NewRestoreExecution(cfg *config.Config, log logger.Logger, parent tea.Model, ctx context.Context, archive ArchiveInfo, targetDB string, cleanFirst, createIfMissing bool, restoreType string, cleanClusterFirst bool, existingDBs []string) RestoreExecutionModel {
func NewRestoreExecution(cfg *config.Config, log logger.Logger, parent tea.Model, ctx context.Context, archive ArchiveInfo, targetDB string, cleanFirst, createIfMissing bool, restoreType string, cleanClusterFirst bool, existingDBs []string, saveDebugLog bool, workDir string) RestoreExecutionModel {
return RestoreExecutionModel{
config: cfg,
logger: log,
@@ -62,6 +64,8 @@ func NewRestoreExecution(cfg *config.Config, log logger.Logger, parent tea.Model
restoreType: restoreType,
cleanClusterFirst: cleanClusterFirst,
existingDBs: existingDBs,
saveDebugLog: saveDebugLog,
workDir: workDir,
status: "Initializing...",
phase: "Starting",
startTime: time.Now(),
@@ -73,7 +77,7 @@ func NewRestoreExecution(cfg *config.Config, log logger.Logger, parent tea.Model
func (m RestoreExecutionModel) Init() tea.Cmd {
return tea.Batch(
executeRestoreWithTUIProgress(m.ctx, m.config, m.logger, m.archive, m.targetDB, m.cleanFirst, m.createIfMissing, m.restoreType, m.cleanClusterFirst, m.existingDBs),
executeRestoreWithTUIProgress(m.ctx, m.config, m.logger, m.archive, m.targetDB, m.cleanFirst, m.createIfMissing, m.restoreType, m.cleanClusterFirst, m.existingDBs, m.saveDebugLog),
restoreTickCmd(),
)
}
@@ -99,7 +103,7 @@ type restoreCompleteMsg struct {
elapsed time.Duration
}
func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config, log logger.Logger, archive ArchiveInfo, targetDB string, cleanFirst, createIfMissing bool, restoreType string, cleanClusterFirst bool, existingDBs []string) tea.Cmd {
func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config, log logger.Logger, archive ArchiveInfo, targetDB string, cleanFirst, createIfMissing bool, restoreType string, cleanClusterFirst bool, existingDBs []string, saveDebugLog bool) tea.Cmd {
return func() tea.Msg {
// Use configurable cluster timeout (minutes) from config; default set in config.New()
// Use parent context to inherit cancellation from TUI
@@ -146,6 +150,14 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
// STEP 2: Create restore engine with silent progress (no stdout interference with TUI)
engine := restore.NewSilent(cfg, log, dbClient)
// Enable debug logging if requested
if saveDebugLog {
// Generate debug log path based on archive name and timestamp
debugLogPath := fmt.Sprintf("/tmp/dbbackup-restore-debug-%s.json", time.Now().Format("20060102-150405"))
engine.SetDebugLogPath(debugLogPath)
log.Info("Debug logging enabled", "path", debugLogPath)
}
// Set up progress callback (but it won't work in goroutine - progress is already sent via logs)
// The TUI will just use spinner animation to show activity

View File

@@ -59,6 +59,8 @@ type RestorePreviewModel struct {
checking bool
canProceed bool
message string
saveDebugLog bool // Save detailed error report on failure
workDir string // Custom work directory for extraction
}
// NewRestorePreview creates a new restore preview
@@ -80,8 +82,10 @@ func NewRestorePreview(cfg *config.Config, log logger.Logger, parent tea.Model,
cleanFirst: false,
createIfMissing: true,
checking: true,
workDir: cfg.WorkDir, // Use configured work directory
safetyChecks: []SafetyCheck{
{Name: "Archive integrity", Status: "pending", Critical: true},
{Name: "Dump validity", Status: "pending", Critical: true},
{Name: "Disk space", Status: "pending", Critical: true},
{Name: "Required tools", Status: "pending", Critical: true},
{Name: "Target database", Status: "pending", Critical: false},
@@ -102,7 +106,7 @@ type safetyCheckCompleteMsg struct {
func runSafetyChecks(cfg *config.Config, log logger.Logger, archive ArchiveInfo, targetDB string) tea.Cmd {
return func() tea.Msg {
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
defer cancel()
safety := restore.NewSafety(cfg, log)
@@ -121,7 +125,33 @@ func runSafetyChecks(cfg *config.Config, log logger.Logger, archive ArchiveInfo,
}
checks = append(checks, check)
// 2. Disk space
// 2. Dump validity (deep diagnosis)
check = SafetyCheck{Name: "Dump validity", Status: "checking", Critical: true}
diagnoser := restore.NewDiagnoser(log, false)
diagResult, diagErr := diagnoser.DiagnoseFile(archive.Path)
if diagErr != nil {
check.Status = "warning"
check.Message = fmt.Sprintf("Cannot diagnose: %v", diagErr)
} else if !diagResult.IsValid {
check.Status = "failed"
check.Critical = true
if diagResult.IsTruncated {
check.Message = "Dump is TRUNCATED - restore will fail"
} else if diagResult.IsCorrupted {
check.Message = "Dump is CORRUPTED - restore will fail"
} else if len(diagResult.Errors) > 0 {
check.Message = diagResult.Errors[0]
} else {
check.Message = "Dump has validation errors"
}
canProceed = false
} else {
check.Status = "passed"
check.Message = "Dump structure verified"
}
checks = append(checks, check)
// 3. Disk space
check = SafetyCheck{Name: "Disk space", Status: "checking", Critical: true}
multiplier := 3.0
if archive.Format.IsClusterBackup() {
@@ -137,7 +167,7 @@ func runSafetyChecks(cfg *config.Config, log logger.Logger, archive ArchiveInfo,
}
checks = append(checks, check)
// 3. Required tools
// 4. Required tools
check = SafetyCheck{Name: "Required tools", Status: "checking", Critical: true}
dbType := "postgres"
if archive.Format.IsMySQL() {
@@ -153,7 +183,7 @@ func runSafetyChecks(cfg *config.Config, log logger.Logger, archive ArchiveInfo,
}
checks = append(checks, check)
// 4. Target database check (skip for cluster restores)
// 5. Target database check (skip for cluster restores)
existingDBCount := 0
existingDBs := []string{}
@@ -243,6 +273,27 @@ func (m RestorePreviewModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
m.message = fmt.Sprintf("Create if missing: %v", m.createIfMissing)
}
case "d":
// Toggle debug log saving
m.saveDebugLog = !m.saveDebugLog
if m.saveDebugLog {
m.message = infoStyle.Render("📋 Debug log: enabled (will save detailed report on failure)")
} else {
m.message = "Debug log: disabled"
}
case "w":
// Toggle/set work directory
if m.workDir == "" {
// Set to backup directory as default alternative
m.workDir = m.config.BackupDir
m.message = infoStyle.Render(fmt.Sprintf("📁 Work directory set to: %s", m.workDir))
} else {
// Clear work directory (use system temp)
m.workDir = ""
m.message = "Work directory: using system temp"
}
case "enter", " ":
if m.checking {
m.message = "Please wait for safety checks to complete..."
@@ -255,7 +306,7 @@ func (m RestorePreviewModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
}
// Proceed to restore execution
exec := NewRestoreExecution(m.config, m.logger, m.parent, m.ctx, m.archive, m.targetDB, m.cleanFirst, m.createIfMissing, m.mode, m.cleanClusterFirst, m.existingDBs)
exec := NewRestoreExecution(m.config, m.logger, m.parent, m.ctx, m.archive, m.targetDB, m.cleanFirst, m.createIfMissing, m.mode, m.cleanClusterFirst, m.existingDBs, m.saveDebugLog, m.workDir)
return exec, exec.Init()
}
}
@@ -390,6 +441,41 @@ func (m RestorePreviewModel) View() string {
s.WriteString("\n\n")
}
// Advanced Options
s.WriteString(archiveHeaderStyle.Render("⚙️ Advanced Options"))
s.WriteString("\n")
// Work directory option
workDirIcon := "✗"
workDirStyle := infoStyle
workDirValue := "(system temp)"
if m.workDir != "" {
workDirIcon = "✓"
workDirStyle = checkPassedStyle
workDirValue = m.workDir
}
s.WriteString(workDirStyle.Render(fmt.Sprintf(" %s Work Dir: %s (press 'w' to toggle)", workDirIcon, workDirValue)))
s.WriteString("\n")
if m.workDir == "" {
s.WriteString(infoStyle.Render("   ⚠️  Large archives may need more space than /tmp provides"))
s.WriteString("\n")
}
// Debug log option
debugIcon := "✗"
debugStyle := infoStyle
if m.saveDebugLog {
debugIcon = "✓"
debugStyle = checkPassedStyle
}
s.WriteString(debugStyle.Render(fmt.Sprintf(" %s Debug Log: %v (press 'd' to toggle)", debugIcon, m.saveDebugLog)))
s.WriteString("\n")
if m.saveDebugLog {
s.WriteString(infoStyle.Render(" Saves detailed error report to /tmp on failure"))
s.WriteString("\n")
}
s.WriteString("\n")
// Message
if m.message != "" {
s.WriteString(m.message)
@@ -403,15 +489,15 @@ func (m RestorePreviewModel) View() string {
s.WriteString(successStyle.Render("✅ Ready to restore"))
s.WriteString("\n")
if m.mode == "restore-single" {
s.WriteString(infoStyle.Render("⌨️ t: Toggle clean-first | c: Toggle create | Enter: Proceed | Esc: Cancel"))
s.WriteString(infoStyle.Render("⌨️ t: Clean-first | c: Create | w: WorkDir | d: Debug | Enter: Proceed | Esc: Cancel"))
} else if m.mode == "restore-cluster" {
if m.existingDBCount > 0 {
s.WriteString(infoStyle.Render("⌨️ c: Toggle cleanup | Enter: Proceed | Esc: Cancel"))
s.WriteString(infoStyle.Render("⌨️ c: Cleanup | w: WorkDir | d: Debug | Enter: Proceed | Esc: Cancel"))
} else {
s.WriteString(infoStyle.Render("⌨️ Enter: Proceed | Esc: Cancel"))
s.WriteString(infoStyle.Render("⌨️ w: WorkDir | d: Debug | Enter: Proceed | Esc: Cancel"))
}
} else {
s.WriteString(infoStyle.Render("⌨️ Enter: Proceed | Esc: Cancel"))
s.WriteString(infoStyle.Render("⌨️ w: WorkDir | d: Debug | Enter: Proceed | Esc: Cancel"))
}
} else {
s.WriteString(errorStyle.Render("❌ Cannot proceed - please fix errors above"))
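
Note: the new "Dump validity" safety check above relies on restore.NewDiagnoser and DiagnoseFile. A minimal sketch of calling the same diagnoser outside the TUI is shown below; the boolean flag to NewDiagnoser is assumed to control verbosity, imports are omitted because the module path is not visible in this diff, and only the field names (IsValid, IsTruncated, IsCorrupted, Errors) are taken from the hunk above.

// Illustrative sketch -- imports omitted; NewDiagnoser's boolean flag is
// assumed to control verbosity. Field names match the hunk above.
func quickDiagnose(log logger.Logger, archivePath string) bool {
	diagnoser := restore.NewDiagnoser(log, false)
	result, err := diagnoser.DiagnoseFile(archivePath)
	if err != nil {
		log.Info("diagnosis could not run", "error", err)
		return false
	}
	switch {
	case result.IsTruncated:
		log.Info("dump is TRUNCATED - restore will fail")
	case result.IsCorrupted:
		log.Info("dump is CORRUPTED - restore will fail")
	case !result.IsValid && len(result.Errors) > 0:
		log.Info("dump failed validation", "error", result.Errors[0])
	case !result.IsValid:
		log.Info("dump has validation errors")
	default:
		log.Info("dump structure verified")
		return true
	}
	return false
}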

View File

@@ -115,6 +115,26 @@ func NewSettingsModel(cfg *config.Config, log logger.Logger, parent tea.Model) S
Type: "path",
Description: "Directory where backup files will be stored",
},
{
Key: "work_dir",
DisplayName: "Work Directory",
Value: func(c *config.Config) string {
if c.WorkDir == "" {
return "(system temp)"
}
return c.WorkDir
},
Update: func(c *config.Config, v string) error {
if v == "" || v == "(system temp)" {
c.WorkDir = ""
return nil
}
c.WorkDir = filepath.Clean(v)
return nil
},
Type: "path",
Description: "Working directory for large operations (extraction, diagnosis). Use when /tmp is too small.",
},
{
Key: "compression_level",
DisplayName: "Compression Level",

View File

@@ -16,7 +16,7 @@ import (
// Build information (set by ldflags)
var (
version = "3.1.0"
version = "3.40.0"
buildTime = "unknown"
gitCommit = "unknown"
)