v3.40.0: Restore diagnostics and error reporting

Features:
- Restore `diagnose` command for backup file analysis
- Deep COPY block verification for truncated dump detection
- PGDMP signature and gzip integrity validation
- Detailed error reports with --save-debug-log flag
- Ring buffer stderr capture (prevents OOM on 2M+ errors)
- Error classification with actionable recommendations

TUI Enhancements:
- Automatic dump validity safety check before restore
- Press 'd' in archive browser to diagnose backups
- Press 'd' in restore preview for debug log toggle
- Debug logs saved to /tmp on failure when enabled

Documentation:
- Updated README with diagnose command and examples
- Updated CHANGELOG with full feature list
- Updated restore preview screenshots
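The "ring buffer stderr capture" called out above is what keeps memory flat when a failing restore emits millions of error lines. A minimal sketch of the idea, with hypothetical names (the shipped ErrorCollector in internal/restore is the real implementation, capped at 1MB / 10K lines per the changelog):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// lineRing keeps only the most recent maxLines lines; older entries are
// overwritten in place, so memory stays constant however much is written.
type lineRing struct {
	lines []string
	next  int
	full  bool
}

func newLineRing(maxLines int) *lineRing {
	return &lineRing{lines: make([]string, maxLines)}
}

func (r *lineRing) Add(line string) {
	r.lines[r.next] = line
	r.next = (r.next + 1) % len(r.lines)
	if r.next == 0 {
		r.full = true
	}
}

// Last returns the buffered lines in chronological order.
func (r *lineRing) Last() []string {
	if !r.full {
		return r.lines[:r.next]
	}
	return append(r.lines[r.next:], r.lines[:r.next]...)
}

func main() {
	ring := newLineRing(3)
	in := bufio.NewScanner(strings.NewReader("e1\ne2\ne3\ne4\ne5"))
	for in.Scan() {
		ring.Add(in.Text()) // constant memory, even for 2M+ stderr lines
	}
	fmt.Println(ring.Last()) // [e3 e4 e5] - only the tail survives
}
```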
CHANGELOG.md (49 lines changed)
@@ -5,6 +5,55 @@ All notable changes to dbbackup will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+
+## [3.40.0] - 2026-01-05 "The Diagnostician"
+
+### Added - 🔍 Restore Diagnostics & Error Reporting
+
+**Backup Diagnosis Command:**
+- `restore diagnose <archive>` - Deep analysis of backup files before restore
+- Detects truncated dumps, corrupted archives, incomplete COPY blocks
+- PGDMP signature validation for PostgreSQL custom format
+- Gzip integrity verification with decompression test
+- `pg_restore --list` validation for custom format archives
+- `--deep` flag for exhaustive line-by-line analysis
+- `--json` flag for machine-readable output
+- Cluster archive diagnosis scans all contained dumps
+
+**Detailed Error Reporting:**
+- Comprehensive error collector captures stderr during restore
+- Ring buffer prevents OOM on high-error restores (2M+ errors)
+- Error classification with actionable hints and recommendations
+- `--save-debug-log <path>` saves JSON report on failure
+- Reports include: exit codes, last errors, line context, tool versions
+- Automatic recommendations based on error patterns
+
+**TUI Restore Enhancements:**
+- **Dump validity** safety check runs automatically before restore
+- Detects truncated/corrupted backups in restore preview
+- Press **`d`** to toggle debug log saving in Advanced Options
+- Debug logs saved to `/tmp/dbbackup-restore-debug-*.json` on failure
+- Press **`d`** in archive browser to run diagnosis on any backup
+
+**New Commands:**
+- `restore diagnose` - Analyze backup file integrity and structure
+
+**New Flags:**
+- `--save-debug-log <path>` - Save detailed JSON error report on failure
+- `--diagnose` - Run deep diagnosis before cluster restore
+- `--deep` - Enable exhaustive diagnosis (line-by-line analysis)
+- `--json` - Output diagnosis in JSON format
+- `--keep-temp` - Keep temporary files after diagnosis
+- `--verbose` - Show detailed diagnosis progress
+
+### Technical Details
+- 1,200+ lines of new diagnostic code
+- Error classification system with 15+ error patterns
+- Ring buffer stderr capture (1MB max, 10K lines)
+- Zero memory growth on high-error restores
+- Full TUI integration for diagnostics
+
+---
 
 ## [3.2.0] - 2025-12-13 "The Margin Eraser"
 
 ### Added - 🚀 Physical Backup Revolution
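The "error classification system with 15+ error patterns" in the changelog pairs known stderr signatures with actionable hints. A hedged sketch of that pattern-matching shape - the two patterns and all names here are illustrative only, not the actual tables shipped in the diagnostics code:

```go
package main

import (
	"fmt"
	"regexp"
)

// errorPattern maps a known stderr signature to an actionable hint.
// These entries are examples; the real system ships 15+ patterns.
type errorPattern struct {
	re   *regexp.Regexp
	hint string
}

var patterns = []errorPattern{
	{regexp.MustCompile(`syntax error at or near`),
		"dump may be truncated - run 'restore diagnose' with --deep"},
	{regexp.MustCompile(`role ".*" does not exist`),
		"restore globals.sql first or create the missing role"},
}

// classify returns the first matching hint, or a generic fallback.
func classify(stderrLine string) string {
	for _, p := range patterns {
		if p.re.MatchString(stderrLine) {
			return p.hint
		}
	}
	return "unclassified - see saved debug log"
}

func main() {
	fmt.Println(classify(`ERROR: syntax error at or near "1234"`))
}
```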
@@ -17,7 +17,7 @@ Be respectful, constructive, and professional in all interactions. We're buildin
 
 **Bug Report Template:**
 ```
-**Version:** dbbackup v3.2.0
+**Version:** dbbackup v3.40.0
 **OS:** Linux/macOS/BSD
 **Database:** PostgreSQL 14 / MySQL 8.0 / MariaDB 10.6
 **Command:** The exact command that failed
README.md (73 lines changed)
@@ -54,7 +54,7 @@ Download from [releases](https://git.uuxo.net/UUXO/dbbackup/releases):
 ```bash
 # Linux x86_64
-wget https://git.uuxo.net/UUXO/dbbackup/releases/download/v3.2.0/dbbackup-linux-amd64
+wget https://git.uuxo.net/UUXO/dbbackup/releases/download/v3.40.0/dbbackup-linux-amd64
 chmod +x dbbackup-linux-amd64
 sudo mv dbbackup-linux-amd64 /usr/local/bin/dbbackup
 ```

@@ -161,11 +161,15 @@ Cluster Restore Options
 
 Safety Checks
 [OK] Archive integrity verified
+[OK] Dump validity verified
 [OK] Disk space: 140 GB available
 [OK] Required tools found
 [OK] Target database accessible
 
-c: Toggle cleanup | Enter: Proceed | Esc: Cancel
+Advanced Options
+  ✗ Debug Log: false (press 'd' to toggle)
+
+c: Toggle cleanup | d: Debug log | Enter: Proceed | Esc: Cancel
 ```
 
 **Backup Manager:**

@@ -180,7 +184,7 @@ FILENAME FORMAT SIZE MODIFIED
 
 [OK] myapp_prod_20250114.dump.gz   PostgreSQL Custom   12.3 GB   2025-01-14
 [!!] users_db_20241220.dump.gz     PostgreSQL Custom   850 MB    2024-12-20
 
-r: Restore | v: Verify | i: Info | d: Delete | R: Refresh | Esc: Back
+r: Restore | v: Verify | i: Info | d: Diagnose | D: Delete | R: Refresh | Esc: Back
 ```
 
 **Configuration Settings:**

@@ -240,6 +244,12 @@ dbbackup restore single backup.dump --target myapp_db --create --confirm
 
 # Restore cluster
 dbbackup restore cluster cluster_backup.tar.gz --confirm
 
+# Restore with debug logging (saves detailed error report on failure)
+dbbackup restore cluster backup.tar.gz --save-debug-log /tmp/restore-debug.json --confirm
+
+# Diagnose backup before restore
+dbbackup restore diagnose backup.dump.gz --deep
+
 # Cloud backup
 dbbackup backup single mydb --cloud s3://my-bucket/backups/

@@ -257,6 +267,7 @@ dbbackup backup single mydb --dry-run
 
 | `restore single` | Restore single database |
 | `restore cluster` | Restore full cluster |
 | `restore pitr` | Point-in-Time Recovery |
+| `restore diagnose` | Diagnose backup file integrity |
 | `verify-backup` | Verify backup integrity |
 | `cleanup` | Remove old backups |
 | `status` | Check connection status |

@@ -288,6 +299,7 @@ dbbackup backup single mydb --dry-run
 
 | `--encrypt` | Enable encryption | false |
 | `--dry-run, -n` | Run preflight checks only | false |
 | `--debug` | Enable debug logging | false |
+| `--save-debug-log` | Save error report to file on failure | - |
 
 ## Encryption

@@ -435,6 +447,61 @@ dbbackup backup cluster -n # Short flag
 
 Ready to backup. Remove --dry-run to execute.
 ```
 
+## Backup Diagnosis
+
+Diagnose backup files before restore to detect corruption or truncation:
+
+```bash
+# Diagnose a backup file
+dbbackup restore diagnose backup.dump.gz
+
+# Deep analysis (line-by-line COPY block verification)
+dbbackup restore diagnose backup.dump.gz --deep
+
+# JSON output for automation
+dbbackup restore diagnose backup.dump.gz --json
+
+# Diagnose cluster archive (checks all contained dumps)
+dbbackup restore diagnose cluster_backup.tar.gz --deep
+```
+
+**Checks performed:**
+- PGDMP signature validation (PostgreSQL custom format)
+- Gzip integrity verification
+- COPY block termination (detects truncated dumps)
+- `pg_restore --list` validation
+- Archive structure analysis
+
+**Example output:**
+```
+🔍 Backup Diagnosis Report
+══════════════════════════════════════════════════════════════
+
+📁 File: mydb_20260105.dump.gz
+   Format: PostgreSQL Custom (gzip)
+   Size: 2.5 GB
+
+🔬 Analysis Results:
+   ✅ Gzip integrity: Valid
+   ✅ PGDMP signature: Valid
+   ✅ pg_restore --list: Success (245 objects)
+   ❌ COPY block check: TRUNCATED
+
+⚠️ Issues Found:
+   - COPY block for table 'orders' not terminated
+   - Dump appears truncated at line 1,234,567
+
+💡 Recommendations:
+   - Re-run the backup for this database
+   - Check disk space on backup server
+   - Verify network stability during backup
+```
+
+**In Interactive Mode:**
+- Press `d` in archive browser to diagnose any backup
+- Automatic dump validity check in restore preview
+- Toggle debug logging with `d` in restore options
+
 ## Notifications
 
 Get alerted on backup events via email or webhooks. Configure via environment variables.
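Since `--json` is meant for automation, the diagnosis report can also be consumed programmatically. A small Go sketch, assuming only the field names visible in the JSON tags of DiagnoseResult later in this commit (file_name, is_valid, is_truncated, errors); error handling for the exec call is deliberately loose because the CLI exits non-zero on invalid dumps:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// report mirrors a subset of DiagnoseResult's JSON tags.
type report struct {
	FileName    string   `json:"file_name"`
	IsValid     bool     `json:"is_valid"`
	IsTruncated bool     `json:"is_truncated"`
	Errors      []string `json:"errors"`
}

func main() {
	// The command exits non-zero on invalid dumps, so capture stdout first
	// and ignore the exit error; the JSON report is still printed.
	out, _ := exec.Command("dbbackup", "restore", "diagnose",
		"backup.dump.gz", "--json").Output()

	var r report
	if err := json.Unmarshal(out, &r); err != nil {
		log.Fatalf("cannot parse diagnosis: %v", err)
	}
	if !r.IsValid {
		fmt.Printf("%s failed diagnosis (truncated=%v): %v\n",
			r.FileName, r.IsTruncated, r.Errors)
	}
}
```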
@@ -15,7 +15,7 @@ echo "🔧 Using Go version: $GO_VERSION"
 
 # Configuration
 APP_NAME="dbbackup"
-VERSION="3.2.0"
+VERSION="3.40.0"
 BUILD_TIME=$(date -u '+%Y-%m-%d_%H:%M:%S_UTC')
 GIT_COMMIT=$(git rev-parse --short HEAD 2>/dev/null || echo "unknown")
 BIN_DIR="bin"
cmd/restore.go (252 lines changed)
@@ -33,6 +33,13 @@ var (
 	restoreNoProgress   bool
 	restoreWorkdir      string
 	restoreCleanCluster bool
+	restoreDiagnose     bool   // Run diagnosis before restore
+	restoreSaveDebugLog string // Path to save debug log on failure
+
+	// Diagnose flags
+	diagnoseJSON     bool
+	diagnoseDeep     bool
+	diagnoseKeepTemp bool
 
 	// Encryption flags
 	restoreEncryptionKeyFile string

@@ -214,12 +221,53 @@ Examples:
 	RunE: runRestorePITR,
 }
+
+// restoreDiagnoseCmd diagnoses backup files before restore
+var restoreDiagnoseCmd = &cobra.Command{
+	Use:   "diagnose [archive-file]",
+	Short: "Diagnose backup file integrity and format",
+	Long: `Perform deep analysis of backup files to detect issues before restore.
+
+This command validates backup archives and provides detailed diagnostics
+including truncation detection, format verification, and COPY block integrity.
+
+Use this when:
+  - Restore fails with syntax errors
+  - You suspect backup corruption or truncation
+  - You want to verify backup integrity before restore
+  - Restore reports millions of errors
+
+Checks performed:
+  - File format detection (custom dump vs SQL)
+  - PGDMP signature verification
+  - Gzip integrity validation
+  - COPY block termination check
+  - pg_restore --list verification
+  - Cluster archive structure validation
+
+Examples:
+  # Diagnose a single dump file
+  dbbackup restore diagnose mydb.dump.gz
+
+  # Diagnose with verbose output
+  dbbackup restore diagnose mydb.sql.gz --verbose
+
+  # Diagnose cluster archive and all contained dumps
+  dbbackup restore diagnose cluster_backup.tar.gz --deep
+
+  # Output as JSON for scripting
+  dbbackup restore diagnose mydb.dump --json
+`,
+	Args: cobra.ExactArgs(1),
+	RunE: runRestoreDiagnose,
+}
 
 func init() {
 	rootCmd.AddCommand(restoreCmd)
 	restoreCmd.AddCommand(restoreSingleCmd)
 	restoreCmd.AddCommand(restoreClusterCmd)
 	restoreCmd.AddCommand(restoreListCmd)
 	restoreCmd.AddCommand(restorePITRCmd)
+	restoreCmd.AddCommand(restoreDiagnoseCmd)
 
 	// Single restore flags
 	restoreSingleCmd.Flags().BoolVar(&restoreConfirm, "confirm", false, "Confirm and execute restore (required)")

@@ -232,6 +280,8 @@ func init() {
 	restoreSingleCmd.Flags().BoolVar(&restoreNoProgress, "no-progress", false, "Disable progress indicators")
 	restoreSingleCmd.Flags().StringVar(&restoreEncryptionKeyFile, "encryption-key-file", "", "Path to encryption key file (required for encrypted backups)")
 	restoreSingleCmd.Flags().StringVar(&restoreEncryptionKeyEnv, "encryption-key-env", "DBBACKUP_ENCRYPTION_KEY", "Environment variable containing encryption key")
+	restoreSingleCmd.Flags().BoolVar(&restoreDiagnose, "diagnose", false, "Run deep diagnosis before restore to detect corruption/truncation")
+	restoreSingleCmd.Flags().StringVar(&restoreSaveDebugLog, "save-debug-log", "", "Save detailed error report to file on failure (e.g., /tmp/restore-debug.json)")
 
 	// Cluster restore flags
 	restoreClusterCmd.Flags().BoolVar(&restoreConfirm, "confirm", false, "Confirm and execute restore (required)")

@@ -244,6 +294,8 @@ func init() {
 	restoreClusterCmd.Flags().BoolVar(&restoreNoProgress, "no-progress", false, "Disable progress indicators")
 	restoreClusterCmd.Flags().StringVar(&restoreEncryptionKeyFile, "encryption-key-file", "", "Path to encryption key file (required for encrypted backups)")
 	restoreClusterCmd.Flags().StringVar(&restoreEncryptionKeyEnv, "encryption-key-env", "DBBACKUP_ENCRYPTION_KEY", "Environment variable containing encryption key")
+	restoreClusterCmd.Flags().BoolVar(&restoreDiagnose, "diagnose", false, "Run deep diagnosis on all dumps before restore")
+	restoreClusterCmd.Flags().StringVar(&restoreSaveDebugLog, "save-debug-log", "", "Save detailed error report to file on failure (e.g., /tmp/restore-debug.json)")
 
 	// PITR restore flags
 	restorePITRCmd.Flags().StringVar(&pitrBaseBackup, "base-backup", "", "Path to base backup file (.tar.gz) (required)")

@@ -264,6 +316,117 @@ func init() {
 	restorePITRCmd.MarkFlagRequired("base-backup")
 	restorePITRCmd.MarkFlagRequired("wal-archive")
 	restorePITRCmd.MarkFlagRequired("target-dir")
+
+	// Diagnose flags
+	restoreDiagnoseCmd.Flags().BoolVar(&diagnoseJSON, "json", false, "Output diagnosis as JSON")
+	restoreDiagnoseCmd.Flags().BoolVar(&diagnoseDeep, "deep", false, "For cluster archives, extract and diagnose all contained dumps")
+	restoreDiagnoseCmd.Flags().BoolVar(&diagnoseKeepTemp, "keep-temp", false, "Keep temporary extraction directory (for debugging)")
+	restoreDiagnoseCmd.Flags().BoolVar(&restoreVerbose, "verbose", false, "Show detailed analysis progress")
+}
+
+// runRestoreDiagnose diagnoses backup files
+func runRestoreDiagnose(cmd *cobra.Command, args []string) error {
+	archivePath := args[0]
+
+	// Convert to absolute path
+	if !filepath.IsAbs(archivePath) {
+		absPath, err := filepath.Abs(archivePath)
+		if err != nil {
+			return fmt.Errorf("invalid archive path: %w", err)
+		}
+		archivePath = absPath
+	}
+
+	// Check if file exists
+	if _, err := os.Stat(archivePath); err != nil {
+		return fmt.Errorf("archive not found: %s", archivePath)
+	}
+
+	log.Info("🔍 Diagnosing backup file", "path", archivePath)
+
+	diagnoser := restore.NewDiagnoser(log, restoreVerbose)
+
+	// Check if it's a cluster archive that needs deep analysis
+	format := restore.DetectArchiveFormat(archivePath)
+
+	if format.IsClusterBackup() && diagnoseDeep {
+		// Create temp directory for extraction
+		tempDir, err := os.MkdirTemp("", "dbbackup-diagnose-*")
+		if err != nil {
+			return fmt.Errorf("failed to create temp directory: %w", err)
+		}
+		if !diagnoseKeepTemp {
+			defer os.RemoveAll(tempDir)
+		} else {
+			log.Info("Temp directory preserved", "path", tempDir)
+		}
+
+		log.Info("Extracting cluster archive for deep analysis...")
+
+		// Extract and diagnose all dumps
+		results, err := diagnoser.DiagnoseClusterDumps(archivePath, tempDir)
+		if err != nil {
+			return fmt.Errorf("cluster diagnosis failed: %w", err)
+		}
+
+		// Output results
+		var hasErrors bool
+		for _, result := range results {
+			if diagnoseJSON {
+				diagnoser.PrintDiagnosisJSON(result)
+			} else {
+				diagnoser.PrintDiagnosis(result)
+			}
+			if !result.IsValid {
+				hasErrors = true
+			}
+		}
+
+		// Summary
+		if !diagnoseJSON {
+			fmt.Println("\n" + strings.Repeat("=", 70))
+			fmt.Printf("📊 CLUSTER SUMMARY: %d databases analyzed\n", len(results))
+
+			validCount := 0
+			for _, r := range results {
+				if r.IsValid {
+					validCount++
+				}
+			}
+
+			if validCount == len(results) {
+				fmt.Println("✅ All dumps are valid")
+			} else {
+				fmt.Printf("❌ %d/%d dumps have issues\n", len(results)-validCount, len(results))
+			}
+			fmt.Println(strings.Repeat("=", 70))
+		}
+
+		if hasErrors {
+			return fmt.Errorf("one or more dumps have validation errors")
+		}
+		return nil
+	}
+
+	// Single file diagnosis
+	result, err := diagnoser.DiagnoseFile(archivePath)
+	if err != nil {
+		return fmt.Errorf("diagnosis failed: %w", err)
+	}
+
+	if diagnoseJSON {
+		diagnoser.PrintDiagnosisJSON(result)
+	} else {
+		diagnoser.PrintDiagnosis(result)
+	}
+
+	if !result.IsValid {
+		return fmt.Errorf("backup file has validation errors")
+	}
+
+	log.Info("✅ Backup file appears valid")
+	return nil
 }
 
 // runRestoreSingle restores a single database

@@ -402,6 +565,12 @@ func runRestoreSingle(cmd *cobra.Command, args []string) error {
 	// Create restore engine
 	engine := restore.New(cfg, log, db)
+
+	// Enable debug logging if requested
+	if restoreSaveDebugLog != "" {
+		engine.SetDebugLogPath(restoreSaveDebugLog)
+		log.Info("Debug logging enabled", "output", restoreSaveDebugLog)
+	}
 
 	// Setup signal handling
 	ctx, cancel := context.WithCancel(context.Background())
 	defer cancel()

@@ -416,6 +585,37 @@ func runRestoreSingle(cmd *cobra.Command, args []string) error {
 		cancel()
 	}()
+
+	// Run pre-restore diagnosis if requested
+	if restoreDiagnose {
+		log.Info("🔍 Running pre-restore diagnosis...")
+
+		diagnoser := restore.NewDiagnoser(log, restoreVerbose)
+		result, err := diagnoser.DiagnoseFile(archivePath)
+		if err != nil {
+			return fmt.Errorf("diagnosis failed: %w", err)
+		}
+
+		diagnoser.PrintDiagnosis(result)
+
+		if !result.IsValid {
+			log.Error("❌ Pre-restore diagnosis found issues")
+			if result.IsTruncated {
+				log.Error("   The backup file appears to be TRUNCATED")
+			}
+			if result.IsCorrupted {
+				log.Error("   The backup file appears to be CORRUPTED")
+			}
+			fmt.Println("\nUse --force to attempt restore anyway.")
+
+			if !restoreForce {
+				return fmt.Errorf("aborting restore due to backup file issues")
+			}
+			log.Warn("Continuing despite diagnosis errors (--force enabled)")
+		} else {
+			log.Info("✅ Backup file passed diagnosis")
+		}
+	}
+
 	// Execute restore
 	log.Info("Starting restore...", "database", targetDB)

@@ -585,6 +785,12 @@ func runRestoreCluster(cmd *cobra.Command, args []string) error {
 	// Create restore engine
 	engine := restore.New(cfg, log, db)
+
+	// Enable debug logging if requested
+	if restoreSaveDebugLog != "" {
+		engine.SetDebugLogPath(restoreSaveDebugLog)
+		log.Info("Debug logging enabled", "output", restoreSaveDebugLog)
+	}
 
 	// Setup signal handling
 	ctx, cancel := context.WithCancel(context.Background())
 	defer cancel()

@@ -620,6 +826,52 @@ func runRestoreCluster(cmd *cobra.Command, args []string) error {
 		log.Info("Database cleanup completed")
 	}
+
+	// Run pre-restore diagnosis if requested
+	if restoreDiagnose {
+		log.Info("🔍 Running pre-restore diagnosis...")
+
+		// Create temp directory for extraction
+		diagTempDir, err := os.MkdirTemp("", "dbbackup-diagnose-*")
+		if err != nil {
+			return fmt.Errorf("failed to create temp directory for diagnosis: %w", err)
+		}
+		defer os.RemoveAll(diagTempDir)
+
+		diagnoser := restore.NewDiagnoser(log, restoreVerbose)
+		results, err := diagnoser.DiagnoseClusterDumps(archivePath, diagTempDir)
+		if err != nil {
+			return fmt.Errorf("diagnosis failed: %w", err)
+		}
+
+		// Check for any invalid dumps
+		var invalidDumps []string
+		for _, result := range results {
+			if !result.IsValid {
+				invalidDumps = append(invalidDumps, result.FileName)
+				diagnoser.PrintDiagnosis(result)
+			}
+		}
+
+		if len(invalidDumps) > 0 {
+			log.Error("❌ Pre-restore diagnosis found issues",
+				"invalid_dumps", len(invalidDumps),
+				"total_dumps", len(results))
+			fmt.Println("\n⚠️ The following dumps have issues and will likely fail during restore:")
+			for _, name := range invalidDumps {
+				fmt.Printf("   - %s\n", name)
+			}
+			fmt.Println("\nRun 'dbbackup restore diagnose <archive> --deep' for full details.")
+			fmt.Println("Use --force to attempt restore anyway.")
+
+			if !restoreForce {
+				return fmt.Errorf("aborting restore due to %d invalid dump(s)", len(invalidDumps))
+			}
+			log.Warn("Continuing despite diagnosis errors (--force enabled)")
+		} else {
+			log.Info("✅ All dumps passed diagnosis", "count", len(results))
+		}
+	}
+
 	// Execute cluster restore
 	log.Info("Starting cluster restore...")
@@ -339,7 +339,7 @@ func (e *CloneEngine) Backup(ctx context.Context, opts *BackupOptions) (*BackupR
 
 	// Save metadata
 	meta := &metadata.BackupMetadata{
-		Version:      "3.2.0",
+		Version:      "3.40.0",
 		Timestamp:    startTime,
 		Database:     opts.Database,
 		DatabaseType: "mysql",

@@ -254,7 +254,7 @@ func (e *MySQLDumpEngine) Backup(ctx context.Context, opts *BackupOptions) (*Bac
 
 	// Save metadata
 	meta := &metadata.BackupMetadata{
-		Version:      "3.2.0",
+		Version:      "3.40.0",
 		Timestamp:    startTime,
 		Database:     opts.Database,
 		DatabaseType: "mysql",

@@ -223,7 +223,7 @@ func (e *SnapshotEngine) Backup(ctx context.Context, opts *BackupOptions) (*Back
 
 	// Save metadata
 	meta := &metadata.BackupMetadata{
-		Version:      "3.2.0",
+		Version:      "3.40.0",
 		Timestamp:    startTime,
 		Database:     opts.Database,
 		DatabaseType: "mysql",
internal/restore/diagnose.go (726 lines, new file)
@@ -0,0 +1,726 @@
|
|||||||
|
package restore
|
||||||
|
|
||||||
|
import (
|
||||||
|
"bufio"
|
||||||
|
"bytes"
|
||||||
|
"compress/gzip"
|
||||||
|
"encoding/json"
|
||||||
|
"fmt"
|
||||||
|
"io"
|
||||||
|
"os"
|
||||||
|
"os/exec"
|
||||||
|
"path/filepath"
|
||||||
|
"regexp"
|
||||||
|
"strings"
|
||||||
|
|
||||||
|
"dbbackup/internal/logger"
|
||||||
|
)
|
||||||
|
|
||||||
|
// DiagnoseResult contains the results of a dump file diagnosis
|
||||||
|
type DiagnoseResult struct {
|
||||||
|
FilePath string `json:"file_path"`
|
||||||
|
FileName string `json:"file_name"`
|
||||||
|
FileSize int64 `json:"file_size"`
|
||||||
|
Format ArchiveFormat `json:"format"`
|
||||||
|
DetectedFormat string `json:"detected_format"`
|
||||||
|
IsValid bool `json:"is_valid"`
|
||||||
|
IsTruncated bool `json:"is_truncated"`
|
||||||
|
IsCorrupted bool `json:"is_corrupted"`
|
||||||
|
Errors []string `json:"errors,omitempty"`
|
||||||
|
Warnings []string `json:"warnings,omitempty"`
|
||||||
|
Details *DiagnoseDetails `json:"details,omitempty"`
|
||||||
|
}
|
||||||
|
|
||||||
|
// DiagnoseDetails contains detailed analysis of the dump file
|
||||||
|
type DiagnoseDetails struct {
|
||||||
|
// Header info
|
||||||
|
HasPGDMPSignature bool `json:"has_pgdmp_signature,omitempty"`
|
||||||
|
HasSQLHeader bool `json:"has_sql_header,omitempty"`
|
||||||
|
FirstBytes string `json:"first_bytes,omitempty"`
|
||||||
|
LastBytes string `json:"last_bytes,omitempty"`
|
||||||
|
|
||||||
|
// COPY block analysis (for SQL dumps)
|
||||||
|
CopyBlockCount int `json:"copy_block_count,omitempty"`
|
||||||
|
UnterminatedCopy bool `json:"unterminated_copy,omitempty"`
|
||||||
|
LastCopyTable string `json:"last_copy_table,omitempty"`
|
||||||
|
LastCopyLineNumber int `json:"last_copy_line_number,omitempty"`
|
||||||
|
SampleCopyData []string `json:"sample_copy_data,omitempty"`
|
||||||
|
|
||||||
|
// Structure analysis
|
||||||
|
HasCreateStatements bool `json:"has_create_statements,omitempty"`
|
||||||
|
HasInsertStatements bool `json:"has_insert_statements,omitempty"`
|
||||||
|
HasCopyStatements bool `json:"has_copy_statements,omitempty"`
|
||||||
|
HasTransactionBlock bool `json:"has_transaction_block,omitempty"`
|
||||||
|
ProperlyTerminated bool `json:"properly_terminated,omitempty"`
|
||||||
|
|
||||||
|
// pg_restore analysis (for custom format)
|
||||||
|
PgRestoreListable bool `json:"pg_restore_listable,omitempty"`
|
||||||
|
PgRestoreError string `json:"pg_restore_error,omitempty"`
|
||||||
|
TableCount int `json:"table_count,omitempty"`
|
||||||
|
TableList []string `json:"table_list,omitempty"`
|
||||||
|
|
||||||
|
// Compression analysis
|
||||||
|
GzipValid bool `json:"gzip_valid,omitempty"`
|
||||||
|
GzipError string `json:"gzip_error,omitempty"`
|
||||||
|
ExpandedSize int64 `json:"expanded_size,omitempty"`
|
||||||
|
CompressionRatio float64 `json:"compression_ratio,omitempty"`
|
||||||
|
}
|
||||||
|
|
||||||
|
// Diagnoser performs deep analysis of backup files
|
||||||
|
type Diagnoser struct {
|
||||||
|
log logger.Logger
|
||||||
|
verbose bool
|
||||||
|
}
|
||||||
|
|
||||||
|
// NewDiagnoser creates a new diagnoser
|
||||||
|
func NewDiagnoser(log logger.Logger, verbose bool) *Diagnoser {
|
||||||
|
return &Diagnoser{
|
||||||
|
log: log,
|
||||||
|
verbose: verbose,
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// DiagnoseFile performs comprehensive diagnosis of a backup file
|
||||||
|
func (d *Diagnoser) DiagnoseFile(filePath string) (*DiagnoseResult, error) {
|
||||||
|
result := &DiagnoseResult{
|
||||||
|
FilePath: filePath,
|
||||||
|
FileName: filepath.Base(filePath),
|
||||||
|
Details: &DiagnoseDetails{},
|
||||||
|
IsValid: true, // Assume valid until proven otherwise
|
||||||
|
}
|
||||||
|
|
||||||
|
// Check file exists and get size
|
||||||
|
stat, err := os.Stat(filePath)
|
||||||
|
if err != nil {
|
||||||
|
result.IsValid = false
|
||||||
|
result.Errors = append(result.Errors, fmt.Sprintf("Cannot access file: %v", err))
|
||||||
|
return result, nil
|
||||||
|
}
|
||||||
|
result.FileSize = stat.Size()
|
||||||
|
|
||||||
|
if result.FileSize == 0 {
|
||||||
|
result.IsValid = false
|
||||||
|
result.IsTruncated = true
|
||||||
|
result.Errors = append(result.Errors, "File is empty (0 bytes)")
|
||||||
|
return result, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// Detect format
|
||||||
|
result.Format = DetectArchiveFormat(filePath)
|
||||||
|
result.DetectedFormat = result.Format.String()
|
||||||
|
|
||||||
|
// Analyze based on format
|
||||||
|
switch result.Format {
|
||||||
|
case FormatPostgreSQLDump:
|
||||||
|
d.diagnosePgDump(filePath, result)
|
||||||
|
case FormatPostgreSQLDumpGz:
|
||||||
|
d.diagnosePgDumpGz(filePath, result)
|
||||||
|
case FormatPostgreSQLSQL:
|
||||||
|
d.diagnoseSQLScript(filePath, false, result)
|
||||||
|
case FormatPostgreSQLSQLGz:
|
||||||
|
d.diagnoseSQLScript(filePath, true, result)
|
||||||
|
case FormatClusterTarGz:
|
||||||
|
d.diagnoseClusterArchive(filePath, result)
|
||||||
|
default:
|
||||||
|
result.Warnings = append(result.Warnings, "Unknown format - limited diagnosis available")
|
||||||
|
d.diagnoseUnknown(filePath, result)
|
||||||
|
}
|
||||||
|
|
||||||
|
return result, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// diagnosePgDump analyzes PostgreSQL custom format dump
|
||||||
|
func (d *Diagnoser) diagnosePgDump(filePath string, result *DiagnoseResult) {
|
||||||
|
file, err := os.Open(filePath)
|
||||||
|
if err != nil {
|
||||||
|
result.IsValid = false
|
||||||
|
result.Errors = append(result.Errors, fmt.Sprintf("Cannot open file: %v", err))
|
||||||
|
return
|
||||||
|
}
|
||||||
|
defer file.Close()
|
||||||
|
|
||||||
|
// Read first 512 bytes
|
||||||
|
header := make([]byte, 512)
|
||||||
|
n, err := file.Read(header)
|
||||||
|
if err != nil && err != io.EOF {
|
||||||
|
result.IsValid = false
|
||||||
|
result.Errors = append(result.Errors, fmt.Sprintf("Cannot read header: %v", err))
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
// Check PGDMP signature
|
||||||
|
if n >= 5 && string(header[:5]) == "PGDMP" {
|
||||||
|
result.Details.HasPGDMPSignature = true
|
||||||
|
result.Details.FirstBytes = "PGDMP..."
|
||||||
|
} else {
|
||||||
|
result.IsValid = false
|
||||||
|
result.IsCorrupted = true
|
||||||
|
result.Details.HasPGDMPSignature = false
|
||||||
|
result.Details.FirstBytes = fmt.Sprintf("%q", header[:minInt(n, 20)])
|
||||||
|
result.Errors = append(result.Errors,
|
||||||
|
"Missing PGDMP signature - file is NOT PostgreSQL custom format",
|
||||||
|
"This file may be SQL format incorrectly named as .dump",
|
||||||
|
"Try: file "+filePath+" to check actual file type")
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
// Try pg_restore --list to verify dump integrity
|
||||||
|
d.verifyWithPgRestore(filePath, result)
|
||||||
|
}
|
||||||
|
|
||||||
|
// diagnosePgDumpGz analyzes compressed PostgreSQL custom format dump
|
||||||
|
func (d *Diagnoser) diagnosePgDumpGz(filePath string, result *DiagnoseResult) {
|
||||||
|
file, err := os.Open(filePath)
|
||||||
|
if err != nil {
|
||||||
|
result.IsValid = false
|
||||||
|
result.Errors = append(result.Errors, fmt.Sprintf("Cannot open file: %v", err))
|
||||||
|
return
|
||||||
|
}
|
||||||
|
defer file.Close()
|
||||||
|
|
||||||
|
// Verify gzip integrity
|
||||||
|
gz, err := gzip.NewReader(file)
|
||||||
|
if err != nil {
|
||||||
|
result.IsValid = false
|
||||||
|
result.IsCorrupted = true
|
||||||
|
result.Details.GzipValid = false
|
||||||
|
result.Details.GzipError = err.Error()
|
||||||
|
result.Errors = append(result.Errors,
|
||||||
|
fmt.Sprintf("Invalid gzip format: %v", err),
|
||||||
|
"The file may be truncated or corrupted during transfer")
|
||||||
|
return
|
||||||
|
}
|
||||||
|
result.Details.GzipValid = true
|
||||||
|
|
||||||
|
// Read and check header
|
||||||
|
header := make([]byte, 512)
|
||||||
|
n, err := gz.Read(header)
|
||||||
|
if err != nil && err != io.EOF {
|
||||||
|
result.IsValid = false
|
||||||
|
result.Errors = append(result.Errors, fmt.Sprintf("Cannot read decompressed header: %v", err))
|
||||||
|
gz.Close()
|
||||||
|
return
|
||||||
|
}
|
||||||
|
gz.Close()
|
||||||
|
|
||||||
|
// Check PGDMP signature
|
||||||
|
if n >= 5 && string(header[:5]) == "PGDMP" {
|
||||||
|
result.Details.HasPGDMPSignature = true
|
||||||
|
result.Details.FirstBytes = "PGDMP..."
|
||||||
|
} else {
|
||||||
|
result.Details.HasPGDMPSignature = false
|
||||||
|
result.Details.FirstBytes = fmt.Sprintf("%q", header[:minInt(n, 20)])
|
||||||
|
|
||||||
|
// Check if it's actually SQL content
|
||||||
|
content := string(header[:n])
|
||||||
|
if strings.Contains(content, "PostgreSQL") || strings.Contains(content, "pg_dump") ||
|
||||||
|
strings.Contains(content, "SET ") || strings.Contains(content, "CREATE ") {
|
||||||
|
result.Details.HasSQLHeader = true
|
||||||
|
result.Warnings = append(result.Warnings,
|
||||||
|
"File contains SQL text but has .dump extension",
|
||||||
|
"This appears to be SQL format, not custom format",
|
||||||
|
"Restore should use psql, not pg_restore")
|
||||||
|
} else {
|
||||||
|
result.IsValid = false
|
||||||
|
result.IsCorrupted = true
|
||||||
|
result.Errors = append(result.Errors,
|
||||||
|
"Missing PGDMP signature in decompressed content",
|
||||||
|
"File is neither custom format nor valid SQL")
|
||||||
|
}
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
// Verify full gzip stream integrity by reading to end
|
||||||
|
file.Seek(0, 0)
|
||||||
|
gz, _ = gzip.NewReader(file)
|
||||||
|
|
||||||
|
var totalRead int64
|
||||||
|
buf := make([]byte, 32*1024)
|
||||||
|
for {
|
||||||
|
n, err := gz.Read(buf)
|
||||||
|
totalRead += int64(n)
|
||||||
|
if err == io.EOF {
|
||||||
|
break
|
||||||
|
}
|
||||||
|
if err != nil {
|
||||||
|
result.IsValid = false
|
||||||
|
result.IsTruncated = true
|
||||||
|
result.Details.ExpandedSize = totalRead
|
||||||
|
result.Errors = append(result.Errors,
|
||||||
|
fmt.Sprintf("Gzip stream truncated after %d bytes: %v", totalRead, err),
|
||||||
|
"The backup file appears to be incomplete",
|
||||||
|
"Check if backup process completed successfully")
|
||||||
|
gz.Close()
|
||||||
|
return
|
||||||
|
}
|
||||||
|
}
|
||||||
|
gz.Close()
|
||||||
|
|
||||||
|
result.Details.ExpandedSize = totalRead
|
||||||
|
if result.FileSize > 0 {
|
||||||
|
result.Details.CompressionRatio = float64(totalRead) / float64(result.FileSize)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// diagnoseSQLScript analyzes SQL script format
|
||||||
|
func (d *Diagnoser) diagnoseSQLScript(filePath string, compressed bool, result *DiagnoseResult) {
|
||||||
|
var reader io.Reader
|
||||||
|
var file *os.File
|
||||||
|
var gz *gzip.Reader
|
||||||
|
var err error
|
||||||
|
|
||||||
|
file, err = os.Open(filePath)
|
||||||
|
if err != nil {
|
||||||
|
result.IsValid = false
|
||||||
|
result.Errors = append(result.Errors, fmt.Sprintf("Cannot open file: %v", err))
|
||||||
|
return
|
||||||
|
}
|
||||||
|
defer file.Close()
|
||||||
|
|
||||||
|
if compressed {
|
||||||
|
gz, err = gzip.NewReader(file)
|
||||||
|
if err != nil {
|
||||||
|
result.IsValid = false
|
||||||
|
result.IsCorrupted = true
|
||||||
|
result.Details.GzipValid = false
|
||||||
|
result.Details.GzipError = err.Error()
|
||||||
|
result.Errors = append(result.Errors, fmt.Sprintf("Invalid gzip format: %v", err))
|
||||||
|
return
|
||||||
|
}
|
||||||
|
result.Details.GzipValid = true
|
||||||
|
reader = gz
|
||||||
|
defer gz.Close()
|
||||||
|
} else {
|
||||||
|
reader = file
|
||||||
|
}
|
||||||
|
|
||||||
|
// Analyze SQL content
|
||||||
|
scanner := bufio.NewScanner(reader)
|
||||||
|
// Increase buffer size for large lines (COPY data can have long lines)
|
||||||
|
buf := make([]byte, 0, 1024*1024)
|
||||||
|
scanner.Buffer(buf, 10*1024*1024)
|
||||||
|
|
||||||
|
var lineNumber int
|
||||||
|
var inCopyBlock bool
|
||||||
|
var lastCopyTable string
|
||||||
|
var copyStartLine int
|
||||||
|
var copyDataSamples []string
|
||||||
|
|
||||||
|
copyBlockPattern := regexp.MustCompile(`^COPY\s+("?[\w\."]+)"?\s+\(`)
|
||||||
|
copyEndPattern := regexp.MustCompile(`^\\\.`)
|
||||||
|
|
||||||
|
for scanner.Scan() {
|
||||||
|
lineNumber++
|
||||||
|
line := scanner.Text()
|
||||||
|
|
||||||
|
// Check first few lines for header
|
||||||
|
if lineNumber <= 10 {
|
||||||
|
if strings.Contains(line, "PostgreSQL") || strings.Contains(line, "pg_dump") {
|
||||||
|
result.Details.HasSQLHeader = true
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Track structure
|
||||||
|
upperLine := strings.ToUpper(strings.TrimSpace(line))
|
||||||
|
if strings.HasPrefix(upperLine, "CREATE ") {
|
||||||
|
result.Details.HasCreateStatements = true
|
||||||
|
}
|
||||||
|
if strings.HasPrefix(upperLine, "INSERT ") {
|
||||||
|
result.Details.HasInsertStatements = true
|
||||||
|
}
|
||||||
|
if strings.HasPrefix(upperLine, "BEGIN") {
|
||||||
|
result.Details.HasTransactionBlock = true
|
||||||
|
}
|
||||||
|
|
||||||
|
// Track COPY blocks
|
||||||
|
if copyBlockPattern.MatchString(line) {
|
||||||
|
if inCopyBlock {
|
||||||
|
// Previous COPY block wasn't terminated!
|
||||||
|
result.Details.UnterminatedCopy = true
|
||||||
|
result.IsTruncated = true
|
||||||
|
result.IsValid = false
|
||||||
|
result.Errors = append(result.Errors,
|
||||||
|
fmt.Sprintf("COPY block for '%s' starting at line %d was never terminated",
|
||||||
|
lastCopyTable, copyStartLine))
|
||||||
|
}
|
||||||
|
|
||||||
|
inCopyBlock = true
|
||||||
|
result.Details.HasCopyStatements = true
|
||||||
|
result.Details.CopyBlockCount++
|
||||||
|
|
||||||
|
matches := copyBlockPattern.FindStringSubmatch(line)
|
||||||
|
if len(matches) > 1 {
|
||||||
|
lastCopyTable = matches[1]
|
||||||
|
}
|
||||||
|
copyStartLine = lineNumber
|
||||||
|
copyDataSamples = nil
|
||||||
|
|
||||||
|
} else if copyEndPattern.MatchString(line) {
|
||||||
|
inCopyBlock = false
|
||||||
|
|
||||||
|
} else if inCopyBlock {
|
||||||
|
// We're in COPY data
|
||||||
|
if len(copyDataSamples) < 3 {
|
||||||
|
copyDataSamples = append(copyDataSamples, truncateString(line, 100))
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Store last line for termination check
|
||||||
|
if lineNumber > 0 && (lineNumber%100000 == 0) && d.verbose {
|
||||||
|
d.log.Debug("Scanning SQL file", "lines_processed", lineNumber)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := scanner.Err(); err != nil {
|
||||||
|
result.IsValid = false
|
||||||
|
result.IsTruncated = true
|
||||||
|
result.Errors = append(result.Errors,
|
||||||
|
fmt.Sprintf("Error reading file at line %d: %v", lineNumber, err),
|
||||||
|
"File may be truncated or contain invalid data")
|
||||||
|
}
|
||||||
|
|
||||||
|
// Check if we ended while still in a COPY block
|
||||||
|
if inCopyBlock {
|
||||||
|
result.Details.UnterminatedCopy = true
|
||||||
|
result.Details.LastCopyTable = lastCopyTable
|
||||||
|
result.Details.LastCopyLineNumber = copyStartLine
|
||||||
|
result.Details.SampleCopyData = copyDataSamples
|
||||||
|
result.IsTruncated = true
|
||||||
|
result.IsValid = false
|
||||||
|
result.Errors = append(result.Errors,
|
||||||
|
fmt.Sprintf("File ends inside COPY block for table '%s' (started at line %d)",
|
||||||
|
lastCopyTable, copyStartLine),
|
||||||
|
"The backup was truncated during data export",
|
||||||
|
"This explains the 'syntax error' during restore - COPY data is being interpreted as SQL")
|
||||||
|
|
||||||
|
if len(copyDataSamples) > 0 {
|
||||||
|
result.Errors = append(result.Errors,
|
||||||
|
fmt.Sprintf("Sample orphaned data: %s", copyDataSamples[0]))
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
result.Details.ProperlyTerminated = true
|
||||||
|
}
|
||||||
|
|
||||||
|
// Read last bytes for additional context
|
||||||
|
if !compressed {
|
||||||
|
file.Seek(-min(500, result.FileSize), 2)
|
||||||
|
lastBytes := make([]byte, 500)
|
||||||
|
n, _ := file.Read(lastBytes)
|
||||||
|
result.Details.LastBytes = strings.TrimSpace(string(lastBytes[:n]))
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// diagnoseClusterArchive analyzes a cluster tar.gz archive
|
||||||
|
func (d *Diagnoser) diagnoseClusterArchive(filePath string, result *DiagnoseResult) {
|
||||||
|
// First verify tar.gz integrity
|
||||||
|
cmd := exec.Command("tar", "-tzf", filePath)
|
||||||
|
output, err := cmd.Output()
|
||||||
|
if err != nil {
|
||||||
|
result.IsValid = false
|
||||||
|
result.IsCorrupted = true
|
||||||
|
result.Errors = append(result.Errors,
|
||||||
|
fmt.Sprintf("Tar archive is invalid or corrupted: %v", err),
|
||||||
|
"Run: tar -tzf "+filePath+" 2>&1 | tail -20")
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
// Parse tar listing
|
||||||
|
files := strings.Split(strings.TrimSpace(string(output)), "\n")
|
||||||
|
var dumpFiles []string
|
||||||
|
hasGlobals := false
|
||||||
|
hasMetadata := false
|
||||||
|
|
||||||
|
for _, f := range files {
|
||||||
|
if strings.HasSuffix(f, ".dump") || strings.HasSuffix(f, ".sql.gz") {
|
||||||
|
dumpFiles = append(dumpFiles, f)
|
||||||
|
}
|
||||||
|
if strings.Contains(f, "globals.sql") {
|
||||||
|
hasGlobals = true
|
||||||
|
}
|
||||||
|
if strings.Contains(f, "manifest.json") || strings.Contains(f, "metadata.json") {
|
||||||
|
hasMetadata = true
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
result.Details.TableCount = len(dumpFiles)
|
||||||
|
result.Details.TableList = dumpFiles
|
||||||
|
|
||||||
|
if len(dumpFiles) == 0 {
|
||||||
|
result.Warnings = append(result.Warnings, "No database dump files found in archive")
|
||||||
|
}
|
||||||
|
|
||||||
|
if !hasGlobals {
|
||||||
|
result.Warnings = append(result.Warnings, "No globals.sql found - roles/tablespaces won't be restored")
|
||||||
|
}
|
||||||
|
|
||||||
|
if !hasMetadata {
|
||||||
|
result.Warnings = append(result.Warnings, "No manifest/metadata found - limited validation possible")
|
||||||
|
}
|
||||||
|
|
||||||
|
// For verbose mode, diagnose individual dumps inside the archive
|
||||||
|
if d.verbose && len(dumpFiles) > 0 {
|
||||||
|
d.log.Info("Cluster archive contains databases", "count", len(dumpFiles))
|
||||||
|
for _, df := range dumpFiles {
|
||||||
|
d.log.Info(" - " + df)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// diagnoseUnknown handles unknown format files
|
||||||
|
func (d *Diagnoser) diagnoseUnknown(filePath string, result *DiagnoseResult) {
|
||||||
|
file, err := os.Open(filePath)
|
||||||
|
if err != nil {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
defer file.Close()
|
||||||
|
|
||||||
|
header := make([]byte, 512)
|
||||||
|
n, _ := file.Read(header)
|
||||||
|
result.Details.FirstBytes = fmt.Sprintf("%q", header[:minInt(n, 50)])
|
||||||
|
|
||||||
|
// Try to identify by content
|
||||||
|
content := string(header[:n])
|
||||||
|
if strings.Contains(content, "PGDMP") {
|
||||||
|
result.Warnings = append(result.Warnings, "File appears to be PostgreSQL custom format - rename to .dump")
|
||||||
|
} else if strings.Contains(content, "PostgreSQL") || strings.Contains(content, "pg_dump") {
|
||||||
|
result.Warnings = append(result.Warnings, "File appears to be PostgreSQL SQL - rename to .sql")
|
||||||
|
} else if bytes.HasPrefix(header, []byte{0x1f, 0x8b}) {
|
||||||
|
result.Warnings = append(result.Warnings, "File appears to be gzip compressed - add .gz extension")
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// verifyWithPgRestore uses pg_restore --list to verify dump integrity
|
||||||
|
func (d *Diagnoser) verifyWithPgRestore(filePath string, result *DiagnoseResult) {
|
||||||
|
cmd := exec.Command("pg_restore", "--list", filePath)
|
||||||
|
output, err := cmd.CombinedOutput()
|
||||||
|
|
||||||
|
if err != nil {
|
||||||
|
result.Details.PgRestoreListable = false
|
||||||
|
result.Details.PgRestoreError = string(output)
|
||||||
|
|
||||||
|
// Check for specific errors
|
||||||
|
errStr := string(output)
|
||||||
|
if strings.Contains(errStr, "unexpected end of file") ||
|
||||||
|
strings.Contains(errStr, "invalid large-object TOC entry") {
|
||||||
|
result.IsTruncated = true
|
||||||
|
result.IsValid = false
|
||||||
|
result.Errors = append(result.Errors,
|
||||||
|
"pg_restore reports truncated or incomplete dump file",
|
||||||
|
fmt.Sprintf("Error: %s", truncateString(errStr, 200)))
|
||||||
|
} else if strings.Contains(errStr, "not a valid archive") {
|
||||||
|
result.IsCorrupted = true
|
||||||
|
result.IsValid = false
|
||||||
|
result.Errors = append(result.Errors,
|
||||||
|
"pg_restore reports file is not a valid archive",
|
||||||
|
"File may be corrupted or wrong format")
|
||||||
|
} else {
|
||||||
|
result.Warnings = append(result.Warnings,
|
||||||
|
fmt.Sprintf("pg_restore --list warning: %s", truncateString(errStr, 200)))
|
||||||
|
}
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
result.Details.PgRestoreListable = true
|
||||||
|
|
||||||
|
// Count tables in the TOC
|
||||||
|
lines := strings.Split(string(output), "\n")
|
||||||
|
tableCount := 0
|
||||||
|
var tables []string
|
||||||
|
for _, line := range lines {
|
||||||
|
if strings.Contains(line, " TABLE DATA ") {
|
||||||
|
tableCount++
|
||||||
|
if len(tables) < 20 {
|
||||||
|
parts := strings.Fields(line)
|
||||||
|
if len(parts) > 3 {
|
||||||
|
tables = append(tables, parts[len(parts)-1])
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
result.Details.TableCount = tableCount
|
||||||
|
result.Details.TableList = tables
|
||||||
|
}
|
||||||
|
|
||||||
|
// DiagnoseClusterDumps extracts and diagnoses all dumps in a cluster archive
|
||||||
|
func (d *Diagnoser) DiagnoseClusterDumps(archivePath, tempDir string) ([]*DiagnoseResult, error) {
|
||||||
|
// Extract to temp directory
|
||||||
|
cmd := exec.Command("tar", "-xzf", archivePath, "-C", tempDir)
|
||||||
|
if err := cmd.Run(); err != nil {
|
||||||
|
return nil, fmt.Errorf("failed to extract archive: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Find dump files
|
||||||
|
dumpsDir := filepath.Join(tempDir, "dumps")
|
||||||
|
entries, err := os.ReadDir(dumpsDir)
|
||||||
|
if err != nil {
|
||||||
|
// Try without dumps subdirectory
|
||||||
|
entries, err = os.ReadDir(tempDir)
|
||||||
|
if err != nil {
|
||||||
|
return nil, fmt.Errorf("cannot read extracted files: %w", err)
|
||||||
|
}
|
||||||
|
dumpsDir = tempDir
|
||||||
|
}
|
||||||
|
|
||||||
|
var results []*DiagnoseResult
|
||||||
|
for _, entry := range entries {
|
||||||
|
if entry.IsDir() {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
|
||||||
|
name := entry.Name()
|
||||||
|
if !strings.HasSuffix(name, ".dump") && !strings.HasSuffix(name, ".sql.gz") &&
|
||||||
|
!strings.HasSuffix(name, ".sql") {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
|
||||||
|
dumpPath := filepath.Join(dumpsDir, name)
|
||||||
|
d.log.Info("Diagnosing dump file", "file", name)
|
||||||
|
|
||||||
|
result, err := d.DiagnoseFile(dumpPath)
|
||||||
|
if err != nil {
|
||||||
|
d.log.Warn("Failed to diagnose file", "file", name, "error", err)
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
results = append(results, result)
|
||||||
|
}
|
||||||
|
|
||||||
|
return results, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// PrintDiagnosis outputs a human-readable diagnosis report
|
||||||
|
func (d *Diagnoser) PrintDiagnosis(result *DiagnoseResult) {
|
||||||
|
fmt.Println("\n" + strings.Repeat("=", 70))
|
||||||
|
fmt.Printf("📋 DIAGNOSIS: %s\n", result.FileName)
|
||||||
|
fmt.Println(strings.Repeat("=", 70))
|
||||||
|
|
||||||
|
// Basic info
|
||||||
|
fmt.Printf("\nFile: %s\n", result.FilePath)
|
||||||
|
fmt.Printf("Size: %s\n", formatBytes(result.FileSize))
|
||||||
|
fmt.Printf("Format: %s\n", result.DetectedFormat)
|
||||||
|
|
||||||
|
// Status
|
||||||
|
if result.IsValid {
|
||||||
|
fmt.Println("\n✅ STATUS: VALID")
|
||||||
|
} else {
|
||||||
|
fmt.Println("\n❌ STATUS: INVALID")
|
||||||
|
}
|
||||||
|
|
||||||
|
if result.IsTruncated {
|
||||||
|
fmt.Println("⚠️ TRUNCATED: Yes - file appears incomplete")
|
||||||
|
}
|
||||||
|
if result.IsCorrupted {
|
||||||
|
fmt.Println("⚠️ CORRUPTED: Yes - file structure is damaged")
|
||||||
|
}
|
||||||
|
|
||||||
|
// Details
|
||||||
|
if result.Details != nil {
|
||||||
|
fmt.Println("\n📊 DETAILS:")
|
||||||
|
|
||||||
|
if result.Details.HasPGDMPSignature {
|
||||||
|
fmt.Println(" ✓ Has PGDMP signature (PostgreSQL custom format)")
|
||||||
|
}
|
||||||
|
if result.Details.HasSQLHeader {
|
||||||
|
fmt.Println(" ✓ Has PostgreSQL SQL header")
|
||||||
|
}
|
||||||
|
if result.Details.GzipValid {
|
||||||
|
fmt.Println(" ✓ Gzip compression valid")
|
||||||
|
}
|
||||||
|
if result.Details.PgRestoreListable {
|
||||||
|
fmt.Printf(" ✓ pg_restore can list contents (%d tables)\n", result.Details.TableCount)
|
||||||
|
}
|
||||||
|
if result.Details.CopyBlockCount > 0 {
|
||||||
|
fmt.Printf(" • Contains %d COPY blocks\n", result.Details.CopyBlockCount)
|
||||||
|
}
|
||||||
|
if result.Details.UnterminatedCopy {
|
||||||
|
fmt.Printf(" ✗ Unterminated COPY block: %s (line %d)\n",
|
||||||
|
result.Details.LastCopyTable, result.Details.LastCopyLineNumber)
|
||||||
|
}
|
||||||
|
if result.Details.ProperlyTerminated {
|
||||||
|
fmt.Println(" ✓ All COPY blocks properly terminated")
|
||||||
|
}
|
||||||
|
if result.Details.ExpandedSize > 0 {
|
||||||
|
fmt.Printf(" • Expanded size: %s (ratio: %.1fx)\n",
|
||||||
|
formatBytes(result.Details.ExpandedSize), result.Details.CompressionRatio)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Errors
|
||||||
|
if len(result.Errors) > 0 {
|
||||||
|
fmt.Println("\n❌ ERRORS:")
|
||||||
|
for _, e := range result.Errors {
|
||||||
|
fmt.Printf(" • %s\n", e)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Warnings
|
||||||
|
if len(result.Warnings) > 0 {
|
||||||
|
fmt.Println("\n⚠️ WARNINGS:")
|
||||||
|
for _, w := range result.Warnings {
|
||||||
|
fmt.Printf(" • %s\n", w)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Recommendations
|
||||||
|
if !result.IsValid {
|
||||||
|
fmt.Println("\n💡 RECOMMENDATIONS:")
|
||||||
|
if result.IsTruncated {
|
||||||
|
fmt.Println(" 1. Re-run the backup process for this database")
|
||||||
|
fmt.Println(" 2. Check disk space on backup server during backup")
|
||||||
|
fmt.Println(" 3. Verify network stability if backup was remote")
|
||||||
|
fmt.Println(" 4. Check backup logs for errors during the backup")
|
||||||
|
}
|
||||||
|
if result.IsCorrupted {
|
||||||
|
fmt.Println(" 1. Verify backup file was transferred completely")
|
||||||
|
fmt.Println(" 2. Check if backup file was modified after creation")
|
||||||
|
fmt.Println(" 3. Try restoring from a previous backup")
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
fmt.Println(strings.Repeat("=", 70))
|
||||||
|
}
|
||||||
|
|
||||||
|
// PrintDiagnosisJSON outputs diagnosis as JSON
|
||||||
|
func (d *Diagnoser) PrintDiagnosisJSON(result *DiagnoseResult) error {
|
||||||
|
output, err := json.MarshalIndent(result, "", " ")
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
fmt.Println(string(output))
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// Helper functions
|
||||||
|
|
||||||
|
func truncateString(s string, maxLen int) string {
|
||||||
|
if len(s) <= maxLen {
|
||||||
|
return s
|
||||||
|
}
|
||||||
|
return s[:maxLen-3] + "..."
|
||||||
|
}
|
||||||
|
|
||||||
|
func formatBytes(bytes int64) string {
|
||||||
|
const unit = 1024
|
||||||
|
if bytes < unit {
|
||||||
|
return fmt.Sprintf("%d B", bytes)
|
||||||
|
}
|
||||||
|
div, exp := int64(unit), 0
|
||||||
|
for n := bytes / unit; n >= unit; n /= unit {
|
||||||
|
div *= unit
|
||||||
|
exp++
|
||||||
|
}
|
||||||
|
return fmt.Sprintf("%.1f %cB", float64(bytes)/float64(div), "KMGTPE"[exp])
|
||||||
|
}
|
||||||
|
|
||||||
|
func min(a, b int64) int64 {
|
||||||
|
if a < b {
|
||||||
|
return a
|
||||||
|
}
|
||||||
|
return b
|
||||||
|
}
|
||||||
|
|
||||||
|
func minInt(a, b int) int {
|
||||||
|
if a < b {
|
||||||
|
return a
|
||||||
|
}
|
||||||
|
return b
|
||||||
|
}
|
||||||
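
For orientation, a standalone sketch of what the formatBytes helper above produces (the function body is copied from the commit; the driver is illustrative only):

package main

import "fmt"

// Copy of the formatBytes helper above, exercised on a few sizes to
// show the 1024-step unit walk and one-decimal rounding.
func formatBytes(bytes int64) string {
	const unit = 1024
	if bytes < unit {
		return fmt.Sprintf("%d B", bytes)
	}
	div, exp := int64(unit), 0
	for n := bytes / unit; n >= unit; n /= unit {
		div *= unit
		exp++
	}
	return fmt.Sprintf("%.1f %cB", float64(bytes)/float64(div), "KMGTPE"[exp])
}

func main() {
	fmt.Println(formatBytes(512))     // 512 B
	fmt.Println(formatBytes(1536))    // 1.5 KB
	fmt.Println(formatBytes(5 << 30)) // 5.0 GB
}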
@@ -27,6 +27,8 @@ type Engine struct {
 	progress         progress.Indicator
 	detailedReporter *progress.DetailedReporter
 	dryRun           bool
+	debugLogPath     string          // Path to save debug log on error
+	errorCollector   *ErrorCollector // Collects detailed error info
 }
 
 // New creates a new restore engine
@@ -77,6 +79,11 @@ func NewWithProgress(cfg *config.Config, log logger.Logger, db database.Database
 	}
 }
 
+// SetDebugLogPath enables saving detailed error reports on failure
+func (e *Engine) SetDebugLogPath(path string) {
+	e.debugLogPath = path
+}
+
 // loggerAdapter adapts our logger to the progress.Logger interface
 type loggerAdapter struct {
 	logger logger.Logger
@@ -306,6 +313,11 @@ func (e *Engine) restoreMySQLSQL(ctx context.Context, archivePath, targetDB stri
 
 // executeRestoreCommand executes a restore command
 func (e *Engine) executeRestoreCommand(ctx context.Context, cmdArgs []string) error {
+	return e.executeRestoreCommandWithContext(ctx, cmdArgs, "", "", FormatUnknown)
+}
+
+// executeRestoreCommandWithContext executes a restore command with error collection context
+func (e *Engine) executeRestoreCommandWithContext(ctx context.Context, cmdArgs []string, archivePath, targetDB string, format ArchiveFormat) error {
 	e.log.Info("Executing restore command", "command", strings.Join(cmdArgs, " "))
 
 	cmd := exec.CommandContext(ctx, cmdArgs[0], cmdArgs[1:]...)
@@ -316,6 +328,12 @@ func (e *Engine) executeRestoreCommand(ctx context.Context, cmdArgs []string) er
 		fmt.Sprintf("MYSQL_PWD=%s", e.cfg.Password),
 	)
 
+	// Create error collector if debug log path is set
+	var collector *ErrorCollector
+	if e.debugLogPath != "" {
+		collector = NewErrorCollector(e.cfg, e.log, archivePath, targetDB, format, true)
+	}
+
 	// Stream stderr to avoid memory issues with large output
 	// Don't use CombinedOutput() as it loads everything into memory
 	stderr, err := cmd.StderrPipe()
@@ -336,6 +354,12 @@ func (e *Engine) executeRestoreCommand(ctx context.Context, cmdArgs []string) er
 		n, err := stderr.Read(buf)
 		if n > 0 {
 			chunk := string(buf[:n])
+
+			// Feed to error collector if enabled
+			if collector != nil {
+				collector.CaptureStderr(chunk)
+			}
+
 			// Only capture REAL errors, not verbose output
 			if strings.Contains(chunk, "ERROR:") || strings.Contains(chunk, "FATAL:") || strings.Contains(chunk, "error:") {
 				lastError = strings.TrimSpace(chunk)
@@ -352,6 +376,12 @@ func (e *Engine) executeRestoreCommand(ctx context.Context, cmdArgs []string) er
 	}
 
 	if err := cmd.Wait(); err != nil {
+		// Get exit code
+		exitCode := 1
+		if exitErr, ok := err.(*exec.ExitError); ok {
+			exitCode = exitErr.ExitCode()
+		}
+
 		// PostgreSQL pg_restore returns exit code 1 even for ignorable errors
 		// Check if errors are ignorable (already exists, duplicate, etc.)
 		if lastError != "" && e.isIgnorableError(lastError) {
@@ -360,8 +390,12 @@ func (e *Engine) executeRestoreCommand(ctx context.Context, cmdArgs []string) er
 		}
 
 		// Classify error and provide helpful hints
+		var classification *checks.ErrorClassification
+		var errType, errHint string
 		if lastError != "" {
-			classification := checks.ClassifyError(lastError)
+			classification = checks.ClassifyError(lastError)
+			errType = classification.Type
+			errHint = classification.Hint
 			e.log.Error("Restore command failed",
 				"error", err,
 				"last_stderr", lastError,
@@ -369,11 +403,37 @@ func (e *Engine) executeRestoreCommand(ctx context.Context, cmdArgs []string) er
 				"error_type", classification.Type,
 				"hint", classification.Hint,
 				"action", classification.Action)
-			return fmt.Errorf("restore failed: %w (last error: %s, total errors: %d) - %s",
-				err, lastError, errorCount, classification.Hint)
+		} else {
+			e.log.Error("Restore command failed", "error", err, "error_count", errorCount)
 		}
 
-		e.log.Error("Restore command failed", "error", err, "last_stderr", lastError, "error_count", errorCount)
+		// Generate and save error report if collector is enabled
+		if collector != nil {
+			collector.SetExitCode(exitCode)
+			report := collector.GenerateReport(
+				lastError,
+				errType,
+				errHint,
+			)
+
+			// Print report to console
+			collector.PrintReport(report)
+
+			// Save to file
+			if e.debugLogPath != "" {
+				if saveErr := collector.SaveReport(report, e.debugLogPath); saveErr != nil {
+					e.log.Warn("Failed to save debug log", "error", saveErr)
+				} else {
+					e.log.Info("Debug log saved", "path", e.debugLogPath)
+					fmt.Printf("\n📋 Detailed error report saved to: %s\n", e.debugLogPath)
+				}
+			}
+		}
+
+		if lastError != "" {
+			return fmt.Errorf("restore failed: %w (last error: %s, total errors: %d) - %s",
+				err, lastError, errorCount, errHint)
+		}
 		return fmt.Errorf("restore failed: %w", err)
 	}
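
The engine changes above are all driven through one setter. A minimal sketch of how a caller might wire the --save-debug-log value through (cfg, log, dbClient, and debugLogPath are assumed to exist in the calling command; NewSilent appears in the TUI execution hunk further below):

	// Sketch only: wiring a --save-debug-log value into the engine.
	eng := restore.NewSilent(cfg, log, dbClient)
	if debugLogPath != "" { // value of the --save-debug-log flag
		eng.SetDebugLogPath(debugLogPath)
	}
	// ...then run the restore as usual; on failure the engine now prints
	// a report to the console and writes the JSON debug log to debugLogPath.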

internal/restore/error_report.go (new file, 569 lines)
@@ -0,0 +1,569 @@
package restore

import (
	"bufio"
	"compress/gzip"
	"encoding/json"
	"fmt"
	"io"
	"os"
	"os/exec"
	"path/filepath"
	"runtime"
	"strings"
	"time"

	"dbbackup/internal/config"
	"dbbackup/internal/logger"
)

// RestoreErrorReport contains comprehensive information about a restore failure
type RestoreErrorReport struct {
	// Metadata
	Timestamp time.Time `json:"timestamp"`
	Version   string    `json:"version"`
	GoVersion string    `json:"go_version"`
	OS        string    `json:"os"`
	Arch      string    `json:"arch"`

	// Archive info
	ArchivePath   string `json:"archive_path"`
	ArchiveSize   int64  `json:"archive_size"`
	ArchiveFormat string `json:"archive_format"`

	// Database info
	TargetDB     string `json:"target_db"`
	DatabaseType string `json:"database_type"`

	// Error details
	ExitCode     int    `json:"exit_code"`
	ErrorMessage string `json:"error_message"`
	ErrorType    string `json:"error_type"`
	ErrorHint    string `json:"error_hint"`
	TotalErrors  int    `json:"total_errors"`

	// Captured output
	LastStderr  []string `json:"last_stderr"`
	FirstErrors []string `json:"first_errors"`

	// Context around failure
	FailureContext *FailureContext `json:"failure_context,omitempty"`

	// Diagnosis results
	DiagnosisResult *DiagnoseResult `json:"diagnosis_result,omitempty"`

	// Environment (sanitized)
	PostgresVersion  string `json:"postgres_version,omitempty"`
	PgRestoreVersion string `json:"pg_restore_version,omitempty"`
	PsqlVersion      string `json:"psql_version,omitempty"`

	// Recommendations
	Recommendations []string `json:"recommendations"`
}

// FailureContext captures context around where the failure occurred
type FailureContext struct {
	// For SQL/COPY errors
	FailedLine       int      `json:"failed_line,omitempty"`
	FailedStatement  string   `json:"failed_statement,omitempty"`
	SurroundingLines []string `json:"surrounding_lines,omitempty"`

	// For COPY block errors
	InCopyBlock    bool     `json:"in_copy_block,omitempty"`
	CopyTableName  string   `json:"copy_table_name,omitempty"`
	CopyStartLine  int      `json:"copy_start_line,omitempty"`
	SampleCopyData []string `json:"sample_copy_data,omitempty"`

	// File position info
	BytePosition    int64   `json:"byte_position,omitempty"`
	PercentComplete float64 `json:"percent_complete,omitempty"`
}

// ErrorCollector captures detailed error information during restore
type ErrorCollector struct {
	log         logger.Logger
	cfg         *config.Config
	archivePath string
	targetDB    string
	format      ArchiveFormat

	// Captured data
	stderrLines []string
	firstErrors []string
	lastErrors  []string
	totalErrors int
	exitCode    int

	// Limits
	maxStderrLines  int
	maxErrorCapture int

	// State
	startTime time.Time
	enabled   bool
}

// NewErrorCollector creates a new error collector
func NewErrorCollector(cfg *config.Config, log logger.Logger, archivePath, targetDB string, format ArchiveFormat, enabled bool) *ErrorCollector {
	return &ErrorCollector{
		log:             log,
		cfg:             cfg,
		archivePath:     archivePath,
		targetDB:        targetDB,
		format:          format,
		stderrLines:     make([]string, 0, 100),
		firstErrors:     make([]string, 0, 10),
		lastErrors:      make([]string, 0, 10),
		maxStderrLines:  100,
		maxErrorCapture: 10,
		startTime:       time.Now(),
		enabled:         enabled,
	}
}

// CaptureStderr processes and captures stderr output
func (ec *ErrorCollector) CaptureStderr(chunk string) {
	if !ec.enabled {
		return
	}

	lines := strings.Split(chunk, "\n")
	for _, line := range lines {
		line = strings.TrimSpace(line)
		if line == "" {
			continue
		}

		// Store last N lines of stderr
		if len(ec.stderrLines) >= ec.maxStderrLines {
			// Shift array, drop oldest
			ec.stderrLines = ec.stderrLines[1:]
		}
		ec.stderrLines = append(ec.stderrLines, line)

		// Check if this is an error line
		if isErrorLine(line) {
			ec.totalErrors++

			// Capture first N errors
			if len(ec.firstErrors) < ec.maxErrorCapture {
				ec.firstErrors = append(ec.firstErrors, line)
			}

			// Keep last N errors (ring buffer style)
			if len(ec.lastErrors) >= ec.maxErrorCapture {
				ec.lastErrors = ec.lastErrors[1:]
			}
			ec.lastErrors = append(ec.lastErrors, line)
		}
	}
}

// SetExitCode records the exit code
func (ec *ErrorCollector) SetExitCode(code int) {
	ec.exitCode = code
}

// GenerateReport creates a comprehensive error report
func (ec *ErrorCollector) GenerateReport(errMessage string, errType string, errHint string) *RestoreErrorReport {
	report := &RestoreErrorReport{
		Timestamp:     time.Now(),
		Version:       "1.0.0", // TODO: inject actual version
		GoVersion:     runtime.Version(),
		OS:            runtime.GOOS,
		Arch:          runtime.GOARCH,
		ArchivePath:   ec.archivePath,
		ArchiveFormat: ec.format.String(),
		TargetDB:      ec.targetDB,
		DatabaseType:  getDatabaseType(ec.format),
		ExitCode:      ec.exitCode,
		ErrorMessage:  errMessage,
		ErrorType:     errType,
		ErrorHint:     errHint,
		TotalErrors:   ec.totalErrors,
		LastStderr:    ec.stderrLines,
		FirstErrors:   ec.firstErrors,
	}

	// Get archive size
	if stat, err := os.Stat(ec.archivePath); err == nil {
		report.ArchiveSize = stat.Size()
	}

	// Get tool versions
	report.PostgresVersion = getCommandVersion("postgres", "--version")
	report.PgRestoreVersion = getCommandVersion("pg_restore", "--version")
	report.PsqlVersion = getCommandVersion("psql", "--version")

	// Analyze failure context
	report.FailureContext = ec.analyzeFailureContext()

	// Run diagnosis if not already done
	diagnoser := NewDiagnoser(ec.log, false)
	if diagResult, err := diagnoser.DiagnoseFile(ec.archivePath); err == nil {
		report.DiagnosisResult = diagResult
	}

	// Generate recommendations
	report.Recommendations = ec.generateRecommendations(report)

	return report
}

// analyzeFailureContext extracts context around the failure
func (ec *ErrorCollector) analyzeFailureContext() *FailureContext {
	ctx := &FailureContext{}

	// Look for line number in errors
	for _, errLine := range ec.lastErrors {
		if lineNum := extractLineNumber(errLine); lineNum > 0 {
			ctx.FailedLine = lineNum
			break
		}
	}

	// Look for COPY-related errors
	for _, errLine := range ec.lastErrors {
		if strings.Contains(errLine, "COPY") || strings.Contains(errLine, "syntax error") {
			ctx.InCopyBlock = true
			// Try to extract table name
			if tableName := extractTableName(errLine); tableName != "" {
				ctx.CopyTableName = tableName
			}
			break
		}
	}

	// If we have a line number, try to get surrounding context from the dump
	if ctx.FailedLine > 0 && ec.archivePath != "" {
		ctx.SurroundingLines = ec.getSurroundingLines(ctx.FailedLine, 5)
	}

	return ctx
}

// getSurroundingLines reads lines around a specific line number from the dump
func (ec *ErrorCollector) getSurroundingLines(lineNum int, context int) []string {
	var reader io.Reader
	var lines []string

	file, err := os.Open(ec.archivePath)
	if err != nil {
		return nil
	}
	defer file.Close()

	// Handle compressed files
	if strings.HasSuffix(ec.archivePath, ".gz") {
		gz, err := gzip.NewReader(file)
		if err != nil {
			return nil
		}
		defer gz.Close()
		reader = gz
	} else {
		reader = file
	}

	scanner := bufio.NewScanner(reader)
	buf := make([]byte, 0, 1024*1024)
	scanner.Buffer(buf, 10*1024*1024)

	currentLine := 0
	startLine := lineNum - context
	endLine := lineNum + context

	if startLine < 1 {
		startLine = 1
	}

	for scanner.Scan() {
		currentLine++
		if currentLine >= startLine && currentLine <= endLine {
			prefix := "  "
			if currentLine == lineNum {
				prefix = "> "
			}
			lines = append(lines, fmt.Sprintf("%s%d: %s", prefix, currentLine, truncateString(scanner.Text(), 100)))
		}
		if currentLine > endLine {
			break
		}
	}

	return lines
}

// generateRecommendations provides actionable recommendations based on the error
func (ec *ErrorCollector) generateRecommendations(report *RestoreErrorReport) []string {
	var recs []string

	// Check diagnosis results
	if report.DiagnosisResult != nil {
		if report.DiagnosisResult.IsTruncated {
			recs = append(recs,
				"CRITICAL: Backup file is truncated/incomplete",
				"Action: Re-run the backup for the affected database",
				"Check: Verify disk space was available during backup",
				"Check: Verify network was stable during backup transfer",
			)
		}
		if report.DiagnosisResult.IsCorrupted {
			recs = append(recs,
				"CRITICAL: Backup file appears corrupted",
				"Action: Restore from a previous backup",
				"Action: Verify backup file checksum if available",
			)
		}
		if report.DiagnosisResult.Details != nil && report.DiagnosisResult.Details.UnterminatedCopy {
			recs = append(recs,
				fmt.Sprintf("ISSUE: COPY block for table '%s' was not terminated",
					report.DiagnosisResult.Details.LastCopyTable),
				"Cause: Backup was interrupted during data export",
				"Action: Re-run backup ensuring it completes fully",
			)
		}
	}

	// Check error patterns
	if report.TotalErrors > 1000000 {
		recs = append(recs,
			"ISSUE: Millions of errors indicate structural problem, not individual data issues",
			"Cause: Likely wrong restore method or truncated dump",
			"Check: Verify dump format matches restore command",
		)
	}

	// Check for common error types
	errLower := strings.ToLower(report.ErrorMessage)
	if strings.Contains(errLower, "syntax error") {
		recs = append(recs,
			"ISSUE: SQL syntax errors during restore",
			"Cause: COPY data being interpreted as SQL commands",
			"Check: Run 'dbbackup restore diagnose <archive>' for detailed analysis",
		)
	}

	if strings.Contains(errLower, "permission denied") {
		recs = append(recs,
			"ISSUE: Permission denied",
			"Action: Check database user has sufficient privileges",
			"Action: For ownership preservation, use a superuser account",
		)
	}

	if strings.Contains(errLower, "does not exist") {
		recs = append(recs,
			"ISSUE: Missing object reference",
			"Action: Ensure globals.sql was restored first (for roles/tablespaces)",
			"Action: Check if target database was created",
		)
	}

	if len(recs) == 0 {
		recs = append(recs,
			"Run 'dbbackup restore diagnose <archive>' for detailed analysis",
			"Check the stderr output above for specific error messages",
			"Review the PostgreSQL/MySQL logs on the target server",
		)
	}

	return recs
}

// SaveReport saves the error report to a file
func (ec *ErrorCollector) SaveReport(report *RestoreErrorReport, outputPath string) error {
	// Create directory if needed
	dir := filepath.Dir(outputPath)
	if err := os.MkdirAll(dir, 0755); err != nil {
		return fmt.Errorf("failed to create directory: %w", err)
	}

	// Marshal to JSON with indentation
	data, err := json.MarshalIndent(report, "", "  ")
	if err != nil {
		return fmt.Errorf("failed to marshal report: %w", err)
	}

	// Write file
	if err := os.WriteFile(outputPath, data, 0644); err != nil {
		return fmt.Errorf("failed to write report: %w", err)
	}

	return nil
}

// PrintReport prints a human-readable summary of the error report
func (ec *ErrorCollector) PrintReport(report *RestoreErrorReport) {
	fmt.Println()
	fmt.Println(strings.Repeat("═", 70))
	fmt.Println(" 🔴 RESTORE ERROR REPORT")
	fmt.Println(strings.Repeat("═", 70))

	fmt.Printf("\n📅 Timestamp: %s\n", report.Timestamp.Format("2006-01-02 15:04:05"))
	fmt.Printf("📦 Archive: %s\n", filepath.Base(report.ArchivePath))
	fmt.Printf("📊 Format: %s\n", report.ArchiveFormat)
	fmt.Printf("🎯 Target DB: %s\n", report.TargetDB)
	fmt.Printf("⚠️  Exit Code: %d\n", report.ExitCode)
	fmt.Printf("❌ Total Errors: %d\n", report.TotalErrors)

	fmt.Println("\n" + strings.Repeat("─", 70))
	fmt.Println("ERROR DETAILS:")
	fmt.Println(strings.Repeat("─", 70))

	fmt.Printf("\nType: %s\n", report.ErrorType)
	fmt.Printf("Message: %s\n", report.ErrorMessage)
	if report.ErrorHint != "" {
		fmt.Printf("Hint: %s\n", report.ErrorHint)
	}

	// Show failure context
	if report.FailureContext != nil && report.FailureContext.FailedLine > 0 {
		fmt.Println("\n" + strings.Repeat("─", 70))
		fmt.Println("FAILURE CONTEXT:")
		fmt.Println(strings.Repeat("─", 70))

		fmt.Printf("\nFailed at line: %d\n", report.FailureContext.FailedLine)
		if report.FailureContext.InCopyBlock {
			fmt.Printf("Inside COPY block for table: %s\n", report.FailureContext.CopyTableName)
		}

		if len(report.FailureContext.SurroundingLines) > 0 {
			fmt.Println("\nSurrounding lines:")
			for _, line := range report.FailureContext.SurroundingLines {
				fmt.Println(line)
			}
		}
	}

	// Show first few errors
	if len(report.FirstErrors) > 0 {
		fmt.Println("\n" + strings.Repeat("─", 70))
		fmt.Println("FIRST ERRORS:")
		fmt.Println(strings.Repeat("─", 70))

		for i, err := range report.FirstErrors {
			if i >= 5 {
				fmt.Printf("... and %d more\n", len(report.FirstErrors)-5)
				break
			}
			fmt.Printf(" %d. %s\n", i+1, truncateString(err, 100))
		}
	}

	// Show diagnosis summary
	if report.DiagnosisResult != nil && !report.DiagnosisResult.IsValid {
		fmt.Println("\n" + strings.Repeat("─", 70))
		fmt.Println("DIAGNOSIS:")
		fmt.Println(strings.Repeat("─", 70))

		if report.DiagnosisResult.IsTruncated {
			fmt.Println(" ❌ File is TRUNCATED")
		}
		if report.DiagnosisResult.IsCorrupted {
			fmt.Println(" ❌ File is CORRUPTED")
		}
		for i, err := range report.DiagnosisResult.Errors {
			if i >= 3 {
				break
			}
			fmt.Printf(" • %s\n", err)
		}
	}

	// Show recommendations
	fmt.Println("\n" + strings.Repeat("─", 70))
	fmt.Println("💡 RECOMMENDATIONS:")
	fmt.Println(strings.Repeat("─", 70))

	for _, rec := range report.Recommendations {
		fmt.Printf(" • %s\n", rec)
	}

	// Show tool versions
	fmt.Println("\n" + strings.Repeat("─", 70))
	fmt.Println("ENVIRONMENT:")
	fmt.Println(strings.Repeat("─", 70))

	fmt.Printf(" OS: %s/%s\n", report.OS, report.Arch)
	fmt.Printf(" Go: %s\n", report.GoVersion)
	if report.PgRestoreVersion != "" {
		fmt.Printf(" pg_restore: %s\n", report.PgRestoreVersion)
	}
	if report.PsqlVersion != "" {
		fmt.Printf(" psql: %s\n", report.PsqlVersion)
	}

	fmt.Println(strings.Repeat("═", 70))
}

// Helper functions

func isErrorLine(line string) bool {
	return strings.Contains(line, "ERROR:") ||
		strings.Contains(line, "FATAL:") ||
		strings.Contains(line, "error:") ||
		strings.Contains(line, "PANIC:")
}

func extractLineNumber(errLine string) int {
	// Look for patterns like "LINE 1:" or "line 123"
	patterns := []string{"LINE ", "line "}
	for _, pattern := range patterns {
		if idx := strings.Index(errLine, pattern); idx >= 0 {
			numStart := idx + len(pattern)
			numEnd := numStart
			for numEnd < len(errLine) && errLine[numEnd] >= '0' && errLine[numEnd] <= '9' {
				numEnd++
			}
			if numEnd > numStart {
				var num int
				fmt.Sscanf(errLine[numStart:numEnd], "%d", &num)
				return num
			}
		}
	}
	return 0
}

func extractTableName(errLine string) string {
	// Look for patterns like 'COPY "tablename"' or 'table "tablename"'
	patterns := []string{"COPY ", "table "}
	for _, pattern := range patterns {
		if idx := strings.Index(errLine, pattern); idx >= 0 {
			start := idx + len(pattern)
			// Skip optional quote
			if start < len(errLine) && errLine[start] == '"' {
				start++
			}
			end := start
			for end < len(errLine) && errLine[end] != '"' && errLine[end] != ' ' && errLine[end] != '(' {
				end++
			}
			if end > start {
				return errLine[start:end]
			}
		}
	}
	return ""
}

func getDatabaseType(format ArchiveFormat) string {
	if format.IsMySQL() {
		return "mysql"
	}
	return "postgresql"
}

func getCommandVersion(cmd string, arg string) string {
	output, err := exec.Command(cmd, arg).CombinedOutput()
	if err != nil {
		return ""
	}
	// Return first line only
	lines := strings.Split(string(output), "\n")
	if len(lines) > 0 {
		return strings.TrimSpace(lines[0])
	}
	return ""
}
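
The two parsers at the bottom of this file are plain substring scans, not regexes. A standalone sketch of what they extract from a psql-style error line (the message text is illustrative, not real dbbackup output; the function bodies are copied from above):

package main

import (
	"fmt"
	"strings"
)

// Copies of extractLineNumber and extractTableName from error_report.go.
func extractLineNumber(errLine string) int {
	for _, pattern := range []string{"LINE ", "line "} {
		if idx := strings.Index(errLine, pattern); idx >= 0 {
			numStart := idx + len(pattern)
			numEnd := numStart
			for numEnd < len(errLine) && errLine[numEnd] >= '0' && errLine[numEnd] <= '9' {
				numEnd++
			}
			if numEnd > numStart {
				var num int
				fmt.Sscanf(errLine[numStart:numEnd], "%d", &num)
				return num
			}
		}
	}
	return 0
}

func extractTableName(errLine string) string {
	for _, pattern := range []string{"COPY ", "table "} {
		if idx := strings.Index(errLine, pattern); idx >= 0 {
			start := idx + len(pattern)
			if start < len(errLine) && errLine[start] == '"' {
				start++ // skip optional opening quote
			}
			end := start
			for end < len(errLine) && errLine[end] != '"' && errLine[end] != ' ' && errLine[end] != '(' {
				end++
			}
			if end > start {
				return errLine[start:end]
			}
		}
	}
	return ""
}

func main() {
	line := `ERROR:  invalid input syntax in COPY "users" LINE 42: ...`
	fmt.Println(extractLineNumber(line)) // 42
	fmt.Println(extractTableName(line))  // users
}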
@@ -227,6 +227,14 @@ func (m ArchiveBrowserModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 					formatSize(selected.Size),
 					selected.Modified.Format("2006-01-02 15:04:05"))
 			}
 
+		case "d":
+			// Run diagnosis on selected archive
+			if len(m.archives) > 0 && m.cursor < len(m.archives) {
+				selected := m.archives[m.cursor]
+				diagnoseView := NewDiagnoseView(m.config, m.logger, m, m.ctx, selected)
+				return diagnoseView, diagnoseView.Init()
+			}
 		}
 	}
 
@@ -335,7 +343,7 @@ func (m ArchiveBrowserModel) View() string {
 	s.WriteString(infoStyle.Render(fmt.Sprintf("Total: %d archive(s) | Selected: %d/%d",
 		len(m.archives), m.cursor+1, len(m.archives))))
 	s.WriteString("\n")
-	s.WriteString(infoStyle.Render("⌨️  ↑/↓: Navigate | Enter: Select | f: Filter | i: Info | Esc: Back"))
+	s.WriteString(infoStyle.Render("⌨️  ↑/↓: Navigate | Enter: Select | d: Diagnose | f: Filter | i: Info | Esc: Back"))
 
 	return s.String()
 }

internal/tui/diagnose_view.go (new file, 450 lines)
@@ -0,0 +1,450 @@
package tui

import (
	"context"
	"fmt"
	"os"
	"strings"

	tea "github.com/charmbracelet/bubbletea"
	"github.com/charmbracelet/lipgloss"

	"dbbackup/internal/config"
	"dbbackup/internal/logger"
	"dbbackup/internal/restore"
)

var (
	diagnoseBoxStyle = lipgloss.NewStyle().
		Border(lipgloss.RoundedBorder()).
		BorderForeground(lipgloss.Color("63")).
		Padding(1, 2)

	diagnosePassStyle = lipgloss.NewStyle().
		Foreground(lipgloss.Color("2")).
		Bold(true)

	diagnoseFailStyle = lipgloss.NewStyle().
		Foreground(lipgloss.Color("1")).
		Bold(true)

	diagnoseWarnStyle = lipgloss.NewStyle().
		Foreground(lipgloss.Color("3"))

	diagnoseInfoStyle = lipgloss.NewStyle().
		Foreground(lipgloss.Color("244"))

	diagnoseHeaderStyle = lipgloss.NewStyle().
		Foreground(lipgloss.Color("63")).
		Bold(true)
)

// DiagnoseViewModel shows backup file diagnosis results
type DiagnoseViewModel struct {
	config    *config.Config
	logger    logger.Logger
	parent    tea.Model
	ctx       context.Context
	archive   ArchiveInfo
	result    *restore.DiagnoseResult
	results   []*restore.DiagnoseResult // For cluster archives
	running   bool
	completed bool
	progress  string
	cursor    int // For scrolling through cluster results
	err       error
}

// NewDiagnoseView creates a new diagnose view
func NewDiagnoseView(cfg *config.Config, log logger.Logger, parent tea.Model, ctx context.Context, archive ArchiveInfo) DiagnoseViewModel {
	return DiagnoseViewModel{
		config:   cfg,
		logger:   log,
		parent:   parent,
		ctx:      ctx,
		archive:  archive,
		running:  true,
		progress: "Starting diagnosis...",
	}
}

func (m DiagnoseViewModel) Init() tea.Cmd {
	return runDiagnosis(m.config, m.logger, m.archive)
}

type diagnoseCompleteMsg struct {
	result  *restore.DiagnoseResult
	results []*restore.DiagnoseResult
	err     error
}

type diagnoseProgressMsg struct {
	message string
}

func runDiagnosis(cfg *config.Config, log logger.Logger, archive ArchiveInfo) tea.Cmd {
	return func() tea.Msg {
		diagnoser := restore.NewDiagnoser(log, true)

		// For cluster archives, we can do deep analysis
		if archive.Format.IsClusterBackup() {
			// Create temp directory
			tempDir, err := createTempDir("dbbackup-diagnose-*")
			if err != nil {
				return diagnoseCompleteMsg{err: fmt.Errorf("failed to create temp dir: %w", err)}
			}
			defer removeTempDir(tempDir)

			// Diagnose all dumps in the cluster
			results, err := diagnoser.DiagnoseClusterDumps(archive.Path, tempDir)
			if err != nil {
				return diagnoseCompleteMsg{err: err}
			}

			return diagnoseCompleteMsg{results: results}
		}

		// Single file diagnosis
		result, err := diagnoser.DiagnoseFile(archive.Path)
		if err != nil {
			return diagnoseCompleteMsg{err: err}
		}

		return diagnoseCompleteMsg{result: result}
	}
}

func (m DiagnoseViewModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
	switch msg := msg.(type) {
	case diagnoseCompleteMsg:
		m.running = false
		m.completed = true
		if msg.err != nil {
			m.err = msg.err
			return m, nil
		}
		m.result = msg.result
		m.results = msg.results
		return m, nil

	case diagnoseProgressMsg:
		m.progress = msg.message
		return m, nil

	case tea.KeyMsg:
		switch msg.String() {
		case "ctrl+c", "q", "esc":
			return m.parent, nil

		case "up", "k":
			if len(m.results) > 0 && m.cursor > 0 {
				m.cursor--
			}

		case "down", "j":
			if len(m.results) > 0 && m.cursor < len(m.results)-1 {
				m.cursor++
			}

		case "enter", " ":
			return m.parent, nil
		}
	}

	return m, nil
}

func (m DiagnoseViewModel) View() string {
	var s strings.Builder

	// Header
	s.WriteString(titleStyle.Render("🔍 Backup Diagnosis"))
	s.WriteString("\n\n")

	// Archive info
	s.WriteString(diagnoseHeaderStyle.Render("Archive: "))
	s.WriteString(m.archive.Name)
	s.WriteString("\n")
	s.WriteString(diagnoseHeaderStyle.Render("Format: "))
	s.WriteString(m.archive.Format.String())
	s.WriteString("\n")
	s.WriteString(diagnoseHeaderStyle.Render("Size: "))
	s.WriteString(formatSize(m.archive.Size))
	s.WriteString("\n\n")

	if m.running {
		s.WriteString(infoStyle.Render("⏳ " + m.progress))
		s.WriteString("\n\n")
		s.WriteString(diagnoseInfoStyle.Render("This may take a while for large archives..."))
		return s.String()
	}

	if m.err != nil {
		s.WriteString(errorStyle.Render(fmt.Sprintf("❌ Diagnosis failed: %v", m.err)))
		s.WriteString("\n\n")
		s.WriteString(infoStyle.Render("Press Enter or Esc to go back"))
		return s.String()
	}

	// For cluster archives, show summary + details
	if len(m.results) > 0 {
		s.WriteString(m.renderClusterResults())
	} else if m.result != nil {
		s.WriteString(m.renderSingleResult(m.result))
	}

	s.WriteString("\n")
	s.WriteString(infoStyle.Render("Press Enter or Esc to go back"))

	return s.String()
}

func (m DiagnoseViewModel) renderSingleResult(result *restore.DiagnoseResult) string {
	var s strings.Builder

	// Status
	s.WriteString(strings.Repeat("─", 60))
	s.WriteString("\n")

	if result.IsValid {
		s.WriteString(diagnosePassStyle.Render("✅ STATUS: VALID"))
	} else {
		s.WriteString(diagnoseFailStyle.Render("❌ STATUS: INVALID"))
	}
	s.WriteString("\n")

	if result.IsTruncated {
		s.WriteString(diagnoseFailStyle.Render("⚠️  TRUNCATED: File appears incomplete"))
		s.WriteString("\n")
	}

	if result.IsCorrupted {
		s.WriteString(diagnoseFailStyle.Render("⚠️  CORRUPTED: File structure is damaged"))
		s.WriteString("\n")
	}

	s.WriteString(strings.Repeat("─", 60))
	s.WriteString("\n\n")

	// Details
	if result.Details != nil {
		s.WriteString(diagnoseHeaderStyle.Render("📊 DETAILS:"))
		s.WriteString("\n")

		if result.Details.HasPGDMPSignature {
			s.WriteString(diagnosePassStyle.Render("  ✓ "))
			s.WriteString("Has PGDMP signature (custom format)\n")
		}

		if result.Details.HasSQLHeader {
			s.WriteString(diagnosePassStyle.Render("  ✓ "))
			s.WriteString("Has PostgreSQL SQL header\n")
		}

		if result.Details.GzipValid {
			s.WriteString(diagnosePassStyle.Render("  ✓ "))
			s.WriteString("Gzip compression valid\n")
		}

		if result.Details.PgRestoreListable {
			s.WriteString(diagnosePassStyle.Render("  ✓ "))
			s.WriteString(fmt.Sprintf("pg_restore can list contents (%d tables)\n", result.Details.TableCount))
		}

		if result.Details.CopyBlockCount > 0 {
			s.WriteString(diagnoseInfoStyle.Render("  • "))
			s.WriteString(fmt.Sprintf("Contains %d COPY blocks\n", result.Details.CopyBlockCount))
		}

		if result.Details.UnterminatedCopy {
			s.WriteString(diagnoseFailStyle.Render("  ✗ "))
			s.WriteString(fmt.Sprintf("Unterminated COPY block: %s (line %d)\n",
				result.Details.LastCopyTable, result.Details.LastCopyLineNumber))
		}

		if result.Details.ProperlyTerminated {
			s.WriteString(diagnosePassStyle.Render("  ✓ "))
			s.WriteString("All COPY blocks properly terminated\n")
		}

		if result.Details.ExpandedSize > 0 {
			s.WriteString(diagnoseInfoStyle.Render("  • "))
			s.WriteString(fmt.Sprintf("Expanded size: %s (ratio: %.1fx)\n",
				formatSize(result.Details.ExpandedSize), result.Details.CompressionRatio))
		}
	}

	// Errors
	if len(result.Errors) > 0 {
		s.WriteString("\n")
		s.WriteString(diagnoseFailStyle.Render("❌ ERRORS:"))
		s.WriteString("\n")
		for i, e := range result.Errors {
			if i >= 5 {
				s.WriteString(diagnoseInfoStyle.Render(fmt.Sprintf("  ... and %d more\n", len(result.Errors)-5)))
				break
			}
			s.WriteString(diagnoseFailStyle.Render("  • "))
			s.WriteString(truncate(e, 70))
			s.WriteString("\n")
		}
	}

	// Warnings
	if len(result.Warnings) > 0 {
		s.WriteString("\n")
		s.WriteString(diagnoseWarnStyle.Render("⚠️  WARNINGS:"))
		s.WriteString("\n")
		for i, w := range result.Warnings {
			if i >= 3 {
				s.WriteString(diagnoseInfoStyle.Render(fmt.Sprintf("  ... and %d more\n", len(result.Warnings)-3)))
				break
			}
			s.WriteString(diagnoseWarnStyle.Render("  • "))
			s.WriteString(truncate(w, 70))
			s.WriteString("\n")
		}
	}

	// Recommendations
	if !result.IsValid {
		s.WriteString("\n")
		s.WriteString(diagnoseHeaderStyle.Render("💡 RECOMMENDATIONS:"))
		s.WriteString("\n")
		if result.IsTruncated {
			s.WriteString("  1. Re-run the backup process for this database\n")
			s.WriteString("  2. Check disk space on backup server\n")
			s.WriteString("  3. Verify network stability for remote backups\n")
		}
		if result.IsCorrupted {
			s.WriteString("  1. Verify backup was transferred completely\n")
			s.WriteString("  2. Try restoring from a previous backup\n")
		}
	}

	return s.String()
}

func (m DiagnoseViewModel) renderClusterResults() string {
	var s strings.Builder

	// Summary
	validCount := 0
	invalidCount := 0
	for _, r := range m.results {
		if r.IsValid {
			validCount++
		} else {
			invalidCount++
		}
	}

	s.WriteString(strings.Repeat("─", 60))
	s.WriteString("\n")
	s.WriteString(diagnoseHeaderStyle.Render(fmt.Sprintf("📊 CLUSTER SUMMARY: %d databases\n", len(m.results))))
	s.WriteString(strings.Repeat("─", 60))
	s.WriteString("\n\n")

	if invalidCount == 0 {
		s.WriteString(diagnosePassStyle.Render("✅ All dumps are valid"))
		s.WriteString("\n\n")
	} else {
		s.WriteString(diagnoseFailStyle.Render(fmt.Sprintf("❌ %d/%d dumps have issues", invalidCount, len(m.results))))
		s.WriteString("\n\n")
	}

	// List all dumps with status
	s.WriteString(diagnoseHeaderStyle.Render("Database Dumps:"))
	s.WriteString("\n")

	// Show visible range based on cursor
	start := m.cursor - 5
	if start < 0 {
		start = 0
	}
	end := start + 12
	if end > len(m.results) {
		end = len(m.results)
	}

	for i := start; i < end; i++ {
		r := m.results[i]
		cursor := " "
		if i == m.cursor {
			cursor = ">"
		}

		var status string
		if r.IsValid {
			status = diagnosePassStyle.Render("✓")
		} else if r.IsTruncated {
			status = diagnoseFailStyle.Render("✗ TRUNCATED")
		} else if r.IsCorrupted {
			status = diagnoseFailStyle.Render("✗ CORRUPTED")
		} else {
			status = diagnoseFailStyle.Render("✗ INVALID")
		}

		line := fmt.Sprintf("%s %s %-35s %s",
			cursor,
			status,
			truncate(r.FileName, 35),
			formatSize(r.FileSize))

		if i == m.cursor {
			s.WriteString(archiveSelectedStyle.Render(line))
		} else {
			s.WriteString(line)
		}
		s.WriteString("\n")
	}

	// Show selected dump details
	if m.cursor < len(m.results) {
		selected := m.results[m.cursor]
		s.WriteString("\n")
		s.WriteString(strings.Repeat("─", 60))
		s.WriteString("\n")
		s.WriteString(diagnoseHeaderStyle.Render("Selected: " + selected.FileName))
		s.WriteString("\n\n")

		// Show condensed details for selected
		if selected.Details != nil {
			if selected.Details.UnterminatedCopy {
				s.WriteString(diagnoseFailStyle.Render("  ✗ Unterminated COPY: "))
				s.WriteString(selected.Details.LastCopyTable)
				s.WriteString(fmt.Sprintf(" (line %d)\n", selected.Details.LastCopyLineNumber))
			}
			if len(selected.Details.SampleCopyData) > 0 {
				s.WriteString(diagnoseInfoStyle.Render("  Sample orphaned data: "))
				s.WriteString(truncate(selected.Details.SampleCopyData[0], 50))
				s.WriteString("\n")
			}
		}

		if len(selected.Errors) > 0 {
			for i, e := range selected.Errors {
				if i >= 2 {
					break
				}
				s.WriteString(diagnoseFailStyle.Render("  • "))
				s.WriteString(truncate(e, 55))
				s.WriteString("\n")
			}
		}
	}

	s.WriteString("\n")
	s.WriteString(infoStyle.Render("Use ↑/↓ to browse, Enter/Esc to go back"))

	return s.String()
}

// Helper functions for temp directory management
func createTempDir(pattern string) (string, error) {
	return os.MkdirTemp("", pattern)
}

func removeTempDir(path string) error {
	return os.RemoveAll(path)
}
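
Because the view is a self-contained Bubble Tea model, it can be exercised outside the main TUI. A minimal harness sketch (cfg, log, arch, and parentModel are assumed to exist in the host program; tea.NewProgram/Run is the standard Bubble Tea entry point):

	// Sketch only: running the diagnose view standalone. arch is an
	// assumed ArchiveInfo value; parentModel is any tea.Model to return
	// to when the user presses Esc.
	view := NewDiagnoseView(cfg, log, parentModel, context.Background(), arch)
	if _, err := tea.NewProgram(view).Run(); err != nil {
		log.Error("diagnose view exited with error", "error", err)
	}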
@@ -31,6 +31,7 @@ type RestoreExecutionModel struct {
 	restoreType       string
 	cleanClusterFirst bool     // Drop all user databases before cluster restore
 	existingDBs       []string // List of databases to drop
+	saveDebugLog      bool     // Save detailed error report on failure
 
 	// Progress tracking
 	status string
@@ -49,7 +50,7 @@ type RestoreExecutionModel struct {
 }
 
 // NewRestoreExecution creates a new restore execution model
-func NewRestoreExecution(cfg *config.Config, log logger.Logger, parent tea.Model, ctx context.Context, archive ArchiveInfo, targetDB string, cleanFirst, createIfMissing bool, restoreType string, cleanClusterFirst bool, existingDBs []string) RestoreExecutionModel {
+func NewRestoreExecution(cfg *config.Config, log logger.Logger, parent tea.Model, ctx context.Context, archive ArchiveInfo, targetDB string, cleanFirst, createIfMissing bool, restoreType string, cleanClusterFirst bool, existingDBs []string, saveDebugLog bool) RestoreExecutionModel {
 	return RestoreExecutionModel{
 		config: cfg,
 		logger: log,
@@ -62,6 +63,7 @@ func NewRestoreExecution(cfg *config.Config, log logger.Logger, parent tea.Model
 		restoreType:       restoreType,
 		cleanClusterFirst: cleanClusterFirst,
 		existingDBs:       existingDBs,
+		saveDebugLog:      saveDebugLog,
 		status:            "Initializing...",
 		phase:             "Starting",
 		startTime:         time.Now(),
@@ -73,7 +75,7 @@ func NewRestoreExecution(cfg *config.Config, log logger.Logger, parent tea.Model
 
 func (m RestoreExecutionModel) Init() tea.Cmd {
 	return tea.Batch(
-		executeRestoreWithTUIProgress(m.ctx, m.config, m.logger, m.archive, m.targetDB, m.cleanFirst, m.createIfMissing, m.restoreType, m.cleanClusterFirst, m.existingDBs),
+		executeRestoreWithTUIProgress(m.ctx, m.config, m.logger, m.archive, m.targetDB, m.cleanFirst, m.createIfMissing, m.restoreType, m.cleanClusterFirst, m.existingDBs, m.saveDebugLog),
 		restoreTickCmd(),
 	)
 }
@@ -99,7 +101,7 @@ type restoreCompleteMsg struct {
 	elapsed time.Duration
 }
 
-func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config, log logger.Logger, archive ArchiveInfo, targetDB string, cleanFirst, createIfMissing bool, restoreType string, cleanClusterFirst bool, existingDBs []string) tea.Cmd {
+func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config, log logger.Logger, archive ArchiveInfo, targetDB string, cleanFirst, createIfMissing bool, restoreType string, cleanClusterFirst bool, existingDBs []string, saveDebugLog bool) tea.Cmd {
 	return func() tea.Msg {
 		// Use configurable cluster timeout (minutes) from config; default set in config.New()
 		// Use parent context to inherit cancellation from TUI
@@ -146,6 +148,14 @@ func executeRestoreWithTUIProgress(parentCtx context.Context, cfg *config.Config
 		// STEP 2: Create restore engine with silent progress (no stdout interference with TUI)
 		engine := restore.NewSilent(cfg, log, dbClient)
 
+		// Enable debug logging if requested
+		if saveDebugLog {
+			// Generate debug log path based on archive name and timestamp
+			debugLogPath := fmt.Sprintf("/tmp/dbbackup-restore-debug-%s.json", time.Now().Format("20060102-150405"))
+			engine.SetDebugLogPath(debugLogPath)
+			log.Info("Debug logging enabled", "path", debugLogPath)
+		}
+
 		// Set up progress callback (but it won't work in goroutine - progress is already sent via logs)
 		// The TUI will just use spinner animation to show activity
@@ -59,6 +59,7 @@ type RestorePreviewModel struct {
 	checking     bool
 	canProceed   bool
 	message      string
+	saveDebugLog bool // Save detailed error report on failure
 }
 
 // NewRestorePreview creates a new restore preview
@@ -82,6 +83,7 @@ func NewRestorePreview(cfg *config.Config, log logger.Logger, parent tea.Model,
 		checking: true,
 		safetyChecks: []SafetyCheck{
 			{Name: "Archive integrity", Status: "pending", Critical: true},
+			{Name: "Dump validity", Status: "pending", Critical: true},
 			{Name: "Disk space", Status: "pending", Critical: true},
 			{Name: "Required tools", Status: "pending", Critical: true},
 			{Name: "Target database", Status: "pending", Critical: false},
@@ -102,7 +104,7 @@ type safetyCheckCompleteMsg struct {
 
 func runSafetyChecks(cfg *config.Config, log logger.Logger, archive ArchiveInfo, targetDB string) tea.Cmd {
 	return func() tea.Msg {
-		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+		ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
 		defer cancel()
 
 		safety := restore.NewSafety(cfg, log)
@@ -121,7 +123,33 @@ func runSafetyChecks(cfg *config.Config, log logger.Logger, archive ArchiveInfo,
 		}
 		checks = append(checks, check)
 
-		// 2. Disk space
+		// 2. Dump validity (deep diagnosis)
+		check = SafetyCheck{Name: "Dump validity", Status: "checking", Critical: true}
+		diagnoser := restore.NewDiagnoser(log, false)
+		diagResult, diagErr := diagnoser.DiagnoseFile(archive.Path)
+		if diagErr != nil {
+			check.Status = "warning"
+			check.Message = fmt.Sprintf("Cannot diagnose: %v", diagErr)
+		} else if !diagResult.IsValid {
+			check.Status = "failed"
+			check.Critical = true
+			if diagResult.IsTruncated {
+				check.Message = "Dump is TRUNCATED - restore will fail"
+			} else if diagResult.IsCorrupted {
+				check.Message = "Dump is CORRUPTED - restore will fail"
+			} else if len(diagResult.Errors) > 0 {
+				check.Message = diagResult.Errors[0]
+			} else {
+				check.Message = "Dump has validation errors"
+			}
+			canProceed = false
+		} else {
+			check.Status = "passed"
+			check.Message = "Dump structure verified"
+		}
+		checks = append(checks, check)
+
+		// 3. Disk space
 		check = SafetyCheck{Name: "Disk space", Status: "checking", Critical: true}
 		multiplier := 3.0
 		if archive.Format.IsClusterBackup() {
@@ -137,7 +165,7 @@ func runSafetyChecks(cfg *config.Config, log logger.Logger, archive ArchiveInfo,
 		}
 		checks = append(checks, check)
 
-		// 3. Required tools
+		// 4. Required tools
 		check = SafetyCheck{Name: "Required tools", Status: "checking", Critical: true}
 		dbType := "postgres"
 		if archive.Format.IsMySQL() {
@@ -153,7 +181,7 @@ func runSafetyChecks(cfg *config.Config, log logger.Logger, archive ArchiveInfo,
 		}
 		checks = append(checks, check)
 
-		// 4. Target database check (skip for cluster restores)
+		// 5. Target database check (skip for cluster restores)
 		existingDBCount := 0
 		existingDBs := []string{}
 
@@ -243,6 +271,15 @@ func (m RestorePreviewModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 				m.message = fmt.Sprintf("Create if missing: %v", m.createIfMissing)
 			}
 
+		case "d":
+			// Toggle debug log saving
+			m.saveDebugLog = !m.saveDebugLog
+			if m.saveDebugLog {
+				m.message = infoStyle.Render("📋 Debug log: enabled (will save detailed report on failure)")
+			} else {
+				m.message = "Debug log: disabled"
+			}
+
 		case "enter", " ":
 			if m.checking {
 				m.message = "Please wait for safety checks to complete..."
@@ -255,7 +292,7 @@ func (m RestorePreviewModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 			}
 
 			// Proceed to restore execution
-			exec := NewRestoreExecution(m.config, m.logger, m.parent, m.ctx, m.archive, m.targetDB, m.cleanFirst, m.createIfMissing, m.mode, m.cleanClusterFirst, m.existingDBs)
+			exec := NewRestoreExecution(m.config, m.logger, m.parent, m.ctx, m.archive, m.targetDB, m.cleanFirst, m.createIfMissing, m.mode, m.cleanClusterFirst, m.existingDBs, m.saveDebugLog)
 			return exec, exec.Init()
 		}
 	}
@@ -390,6 +427,23 @@ func (m RestorePreviewModel) View() string {
 		s.WriteString("\n\n")
 	}
 
+	// Advanced Options
+	s.WriteString(archiveHeaderStyle.Render("⚙️  Advanced Options"))
+	s.WriteString("\n")
+	debugIcon := "✗"
+	debugStyle := infoStyle
+	if m.saveDebugLog {
+		debugIcon = "✓"
+		debugStyle = checkPassedStyle
+	}
+	s.WriteString(debugStyle.Render(fmt.Sprintf("  %s Debug Log: %v (press 'd' to toggle)", debugIcon, m.saveDebugLog)))
+	s.WriteString("\n")
+	if m.saveDebugLog {
+		s.WriteString(infoStyle.Render("    Saves detailed error report to /tmp on failure"))
+		s.WriteString("\n")
+	}
+	s.WriteString("\n")
+
 	// Message
 	if m.message != "" {
 		s.WriteString(m.message)
@@ -403,15 +457,15 @@ func (m RestorePreviewModel) View() string {
 		s.WriteString(successStyle.Render("✅ Ready to restore"))
 		s.WriteString("\n")
 		if m.mode == "restore-single" {
-			s.WriteString(infoStyle.Render("⌨️  t: Toggle clean-first | c: Toggle create | Enter: Proceed | Esc: Cancel"))
+			s.WriteString(infoStyle.Render("⌨️  t: Clean-first | c: Create | d: Debug log | Enter: Proceed | Esc: Cancel"))
 		} else if m.mode == "restore-cluster" {
 			if m.existingDBCount > 0 {
-				s.WriteString(infoStyle.Render("⌨️  c: Toggle cleanup | Enter: Proceed | Esc: Cancel"))
+				s.WriteString(infoStyle.Render("⌨️  c: Cleanup | d: Debug log | Enter: Proceed | Esc: Cancel"))
 			} else {
-				s.WriteString(infoStyle.Render("⌨️  Enter: Proceed | Esc: Cancel"))
+				s.WriteString(infoStyle.Render("⌨️  d: Debug log | Enter: Proceed | Esc: Cancel"))
			}
 		} else {
-			s.WriteString(infoStyle.Render("⌨️  Enter: Proceed | Esc: Cancel"))
+			s.WriteString(infoStyle.Render("⌨️  d: Debug log | Enter: Proceed | Esc: Cancel"))
 		}
 	} else {
 		s.WriteString(errorStyle.Render("❌ Cannot proceed - please fix errors above"))
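
Once a restore has failed with debug logging enabled, the saved report is plain JSON whose keys match the RestoreErrorReport tags above. A self-contained sketch of reading one back (the filename is an illustrative instance of the /tmp/dbbackup-restore-debug-*.json pattern generated in the execution hunk):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// Sketch: inspecting a saved debug report without importing the internal
// restore package; the struct tags mirror RestoreErrorReport's JSON keys.
func main() {
	data, err := os.ReadFile("/tmp/dbbackup-restore-debug-20260105-143021.json")
	if err != nil {
		log.Fatal(err)
	}
	var rep struct {
		ErrorType       string   `json:"error_type"`
		TotalErrors     int      `json:"total_errors"`
		Recommendations []string `json:"recommendations"`
	}
	if err := json.Unmarshal(data, &rep); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s (%d errors)\n", rep.ErrorType, rep.TotalErrors)
	for _, r := range rep.Recommendations {
		fmt.Println(" •", r)
	}
}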