Compare commits

...

14 Commits

Author SHA1 Message Date
b5239d839d chore: Bump version to 4.2.15
Some checks failed
CI/CD / Integration Tests (push) Has been skipped
CI/CD / Test (push) Failing after 1m16s
CI/CD / Lint (push) Failing after 1m7s
CI/CD / Build & Release (push) Has been skipped
2026-01-30 19:16:58 +01:00
fab48ac564 feat: Add version command with detailed system info
New 'dbbackup version' command shows:
- Build version, time, and git commit
- Go runtime version
- OS/architecture
- CPU cores
- Installed database tools (pg_dump, mysqldump, etc.)

Output formats:
- table (default): Nice ASCII table
- json: For scripting/automation
- short: Just version number

Useful for troubleshooting and bug reports.
2026-01-30 19:14:14 +01:00
66865a5fb8 chore: Bump version to 4.2.14
Some checks failed
CI/CD / Integration Tests (push) Has been skipped
CI/CD / Test (push) Failing after 1m17s
CI/CD / Lint (push) Failing after 1m5s
CI/CD / Build & Release (push) Has been skipped
2026-01-30 19:02:53 +01:00
f9dd95520b feat: Add catalog export command (CSV/HTML/JSON) (#16)
NEW: dbbackup catalog export - Export backup catalog for reporting

Features:
- CSV format for Excel/LibreOffice analysis
- HTML format with styled report (summary stats, badges)
- JSON format for automation and integration
- Filter by database name
- Filter by date range (--after, --before)

Examples:
  dbbackup catalog export --format csv --output backups.csv
  dbbackup catalog export --format html --output report.html
  dbbackup catalog export --database myapp --format csv -o myapp.csv

HTML Report includes:
- Total backups, size, encryption %, verification %
- DR test coverage statistics
- Time span analysis
- Per-backup status badges (Encrypted/Verified/DR Tested)
- Professional styling for documentation

DBA World Meeting Feature #16: Catalog Export
2026-01-30 19:01:37 +01:00
ac1c892d9b chore: Bump version to 4.2.13
Some checks failed
CI/CD / Integration Tests (push) Has been skipped
CI/CD / Test (push) Failing after 1m17s
CI/CD / Lint (push) Failing after 1m5s
CI/CD / Build & Release (push) Has been skipped
2026-01-30 18:54:12 +01:00
084f7b3938 fix: Enable parallel jobs (-j) for pg_dump custom format backups (#15)
PROBLEM:
- pg_dump --jobs was only enabled for directory format
- Custom format backups ignored DumpJobs from profiles
- turbo profile (-j8) had no effect on backup speed
- CLI: pg_restore -j8 was faster than our cluster backups

ROOT CAUSE:
- BuildBackupCommand checked: options.Format == "directory"
- But PostgreSQL 9.3+ supports --jobs for BOTH directory AND custom formats
- Only plain format doesn't support --jobs (single-threaded by design)

FIX:
- Changed condition to: (format == "directory" OR format == "custom")
- Now DumpJobs from profiles (turbo=8, balanced=4) are actually used
- Matches native pg_dump -j8 performance

IMPACT:
-  turbo profile now uses pg_dump -j8 for custom format backups
-  balanced profile uses pg_dump -j4
-  TUI profile settings now respected for backups
-  Cluster backups match pg_restore -j8 speed expectations
-  Both backup AND restore now properly parallelized

TESTING:
- Verified BuildBackupCommand generates --jobs=N for custom format
- Confirmed profiles set DumpJobs correctly (turbo=8, balanced=4)
- Config.ApplyResourceProfile updates both Jobs and DumpJobs
- Backup engine passes cfg.DumpJobs to backup options

DBA World Meeting Feature #15: Parallel Jobs Respect
2026-01-30 18:52:48 +01:00
173b2ce035 chore: Bump version to 4.2.12
Some checks failed
CI/CD / Integration Tests (push) Has been skipped
CI/CD / Test (push) Failing after 1m15s
CI/CD / Lint (push) Failing after 1m6s
CI/CD / Build & Release (push) Has been skipped
2026-01-30 18:44:10 +01:00
efe9457aa4 feat: Add man page generation (#14)
- NEW: man command generates Unix manual pages
- Generates 121+ man pages for all commands
- Standard groff format for man(1)
- Gracefully handles flag shorthand conflicts
- Installation instructions included

Usage:
  dbbackup man --output ./man
  sudo cp ./man/*.1 /usr/local/share/man/man1/
  sudo mandb
  man dbbackup

DBA World Meeting Feature: Professional documentation
2026-01-30 18:43:38 +01:00
e2284f295a chore: Bump version to 4.2.11
Some checks failed
CI/CD / Integration Tests (push) Has been skipped
CI/CD / Test (push) Failing after 1m16s
CI/CD / Lint (push) Failing after 1m7s
CI/CD / Build & Release (push) Has been skipped
2026-01-30 18:36:49 +01:00
9e3270dc10 feat: Add shell completion support (#13)
- NEW: completion command for bash/zsh/fish/PowerShell
- Tab-completion for all commands, subcommands, and flags
- Uses Cobra bash completion V2 with __complete
- DisableFlagParsing to avoid shorthand conflicts
- Installation instructions for all shells

Usage:
  dbbackup completion bash > ~/.dbbackup-completion.bash
  dbbackup completion zsh > ~/.dbbackup-completion.zsh
  dbbackup completion fish > ~/.config/fish/completions/dbbackup.fish

DBA World Meeting Feature: Improved command-line usability
2026-01-30 18:36:37 +01:00
fd0bf52479 chore: Bump version to 4.2.10
Some checks failed
CI/CD / Integration Tests (push) Has been skipped
CI/CD / Test (push) Failing after 1m18s
CI/CD / Lint (push) Failing after 1m14s
CI/CD / Build & Release (push) Has been skipped
2026-01-30 18:29:50 +01:00
aeed1dec43 feat: Add backup size estimation before execution (#12)
- New 'estimate' command with single/cluster subcommands
- Shows raw size, compressed size, duration, disk space requirements
- Warns if insufficient disk space available
- Per-database breakdown with --detailed flag
- JSON output for automation with --json flag
- Profile recommendations based on backup size
- Leverages existing GetDatabaseSize() interface methods
- Added GetConn() method to database.baseDatabase for detailed stats

DBA World Meeting Feature: Prevent disk space issues before backup starts
2026-01-30 18:29:28 +01:00
015325323a Bump version to 4.2.9
Some checks failed
CI/CD / Integration Tests (push) Has been skipped
CI/CD / Test (push) Failing after 1m17s
CI/CD / Lint (push) Failing after 1m7s
CI/CD / Build & Release (push) Has been skipped
2026-01-30 18:15:16 +01:00
2724a542d8 feat: Enhanced error diagnostics with system context (#11 MEDIUM priority)
- Automatic environmental context collection on errors
- Real-time diagnostics: disk, memory, FDs, connections, locks
- Smart root cause analysis based on error + environment
- Context-specific recommendations with actionable commands
- Comprehensive diagnostics reports

Examples:
- Disk 95% full → cleanup commands
- Lock exhaustion → ALTER SYSTEM + restart command
- Memory pressure → reduce parallelism recommendation
- Connection pool full → increase limits or close idle connections
2026-01-30 18:15:03 +01:00
13 changed files with 1903 additions and 4 deletions

View File

@ -5,6 +5,81 @@ All notable changes to dbbackup will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [4.2.9] - 2026-01-30
### Added - MEDIUM Priority Features
- **#11: Enhanced Error Diagnostics with System Context (MEDIUM priority)**
- Automatic environmental context collection on errors
- Real-time system diagnostics: disk space, memory, file descriptors
- PostgreSQL diagnostics: connections, locks, shared memory, version
- Smart root cause analysis based on error + environment
- Context-specific recommendations (e.g., "Disk 95% full" → cleanup commands)
- Comprehensive diagnostics report with actionable fixes
- **Problem**: Errors showed symptoms but not environmental causes
- **Solution**: Diagnose system state + error pattern → root cause + fix
**Diagnostic Report Includes:**
- Disk space usage and available capacity
- Memory usage and pressure indicators
- File descriptor utilization (Linux/Unix)
- PostgreSQL connection pool status
- Lock table capacity calculations
- Version compatibility checks
- Contextual recommendations based on actual system state
**Example Diagnostics:**
```
═══════════════════════════════════════════════════════════
DBBACKUP ERROR DIAGNOSTICS REPORT
═══════════════════════════════════════════════════════════
Error Type: CRITICAL
Category: locks
Severity: 2/3
Message:
out of shared memory: max_locks_per_transaction exceeded
Root Cause:
Lock table capacity too low (32,000 total locks). Likely cause:
max_locks_per_transaction (128) too low for this database size
System Context:
Disk Space: 45.3 GB / 100.0 GB (45.3% used)
Memory: 3.2 GB / 8.0 GB (40.0% used)
File Descriptors: 234 / 4096
Database Context:
Version: PostgreSQL 14.10
Connections: 15 / 100
Max Locks: 128 per transaction
Total Lock Capacity: ~12,800
Recommendations:
Current lock capacity: 12,800 locks (max_locks_per_transaction × max_connections)
⚠ max_locks_per_transaction is low (128)
• Increase: ALTER SYSTEM SET max_locks_per_transaction = 4096;
• Then restart PostgreSQL: sudo systemctl restart postgresql
Suggested Action:
Fix: ALTER SYSTEM SET max_locks_per_transaction = 4096; then
RESTART PostgreSQL
```
**Functions:**
- `GatherErrorContext()` - Collects system + database metrics
- `DiagnoseError()` - Full error analysis with environmental context
- `FormatDiagnosticsReport()` - Human-readable report generation
- `generateContextualRecommendations()` - Smart recommendations based on state
- `analyzeRootCause()` - Pattern matching for root cause identification
**Integration:**
- Available for all backup/restore operations
- Automatic context collection on critical errors
- Can be manually triggered for troubleshooting
- Export as JSON for automated monitoring
## [4.2.8] - 2026-01-30
### Added - MEDIUM Priority Features

463
cmd/catalog_export.go Normal file
View File

@ -0,0 +1,463 @@
package cmd
import (
"context"
"encoding/csv"
"encoding/json"
"fmt"
"html"
"os"
"path/filepath"
"strings"
"time"
"dbbackup/internal/catalog"
"github.com/spf13/cobra"
)
var (
exportOutput string
exportFormat string
)
// catalogExportCmd exports catalog to various formats
var catalogExportCmd = &cobra.Command{
Use: "export",
Short: "Export catalog to file (CSV/HTML/JSON)",
Long: `Export backup catalog to various formats for analysis, reporting, or archival.
Supports:
- CSV format for spreadsheet import (Excel, LibreOffice)
- HTML format for web-based reports and documentation
- JSON format for programmatic access and integration
Examples:
# Export to CSV
dbbackup catalog export --format csv --output backups.csv
# Export to HTML report
dbbackup catalog export --format html --output report.html
# Export specific database
dbbackup catalog export --format csv --database myapp --output myapp_backups.csv
# Export date range
dbbackup catalog export --format html --after 2026-01-01 --output january_report.html`,
RunE: runCatalogExport,
}
func init() {
catalogCmd.AddCommand(catalogExportCmd)
catalogExportCmd.Flags().StringVarP(&exportOutput, "output", "o", "", "Output file path (required)")
catalogExportCmd.Flags().StringVarP(&exportFormat, "format", "f", "csv", "Export format: csv, html, json")
catalogExportCmd.Flags().StringVar(&catalogDatabase, "database", "", "Filter by database name")
catalogExportCmd.Flags().StringVar(&catalogStartDate, "after", "", "Show backups after date (YYYY-MM-DD)")
catalogExportCmd.Flags().StringVar(&catalogEndDate, "before", "", "Show backups before date (YYYY-MM-DD)")
catalogExportCmd.MarkFlagRequired("output")
}
func runCatalogExport(cmd *cobra.Command, args []string) error {
if exportOutput == "" {
return fmt.Errorf("--output flag required")
}
// Validate format
exportFormat = strings.ToLower(exportFormat)
if exportFormat != "csv" && exportFormat != "html" && exportFormat != "json" {
return fmt.Errorf("invalid format: %s (supported: csv, html, json)", exportFormat)
}
cat, err := openCatalog()
if err != nil {
return err
}
defer cat.Close()
ctx := context.Background()
// Build query
query := &catalog.SearchQuery{
Database: catalogDatabase,
Limit: 0, // No limit - export all
OrderBy: "created_at",
OrderDesc: false, // Chronological order for exports
}
// Parse dates if provided
if catalogStartDate != "" {
after, err := time.Parse("2006-01-02", catalogStartDate)
if err != nil {
return fmt.Errorf("invalid --after date format (use YYYY-MM-DD): %w", err)
}
query.StartDate = &after
}
if catalogEndDate != "" {
before, err := time.Parse("2006-01-02", catalogEndDate)
if err != nil {
return fmt.Errorf("invalid --before date format (use YYYY-MM-DD): %w", err)
}
query.EndDate = &before
}
// Search backups
entries, err := cat.Search(ctx, query)
if err != nil {
return fmt.Errorf("failed to search catalog: %w", err)
}
if len(entries) == 0 {
fmt.Println("No backups found matching criteria")
return nil
}
// Export based on format
switch exportFormat {
case "csv":
return exportCSV(entries, exportOutput)
case "html":
return exportHTML(entries, exportOutput, catalogDatabase)
case "json":
return exportJSON(entries, exportOutput)
default:
return fmt.Errorf("unsupported format: %s", exportFormat)
}
}
// exportCSV exports entries to CSV format
func exportCSV(entries []*catalog.Entry, outputPath string) error {
file, err := os.Create(outputPath)
if err != nil {
return fmt.Errorf("failed to create output file: %w", err)
}
defer file.Close()
writer := csv.NewWriter(file)
defer writer.Flush()
// Header
header := []string{
"ID",
"Database",
"DatabaseType",
"Host",
"Port",
"BackupPath",
"BackupType",
"SizeBytes",
"SizeHuman",
"SHA256",
"Compression",
"Encrypted",
"CreatedAt",
"DurationSeconds",
"Status",
"VerifiedAt",
"VerifyValid",
"TestedAt",
"TestSuccess",
"RetentionPolicy",
}
if err := writer.Write(header); err != nil {
return fmt.Errorf("failed to write CSV header: %w", err)
}
// Data rows
for _, entry := range entries {
row := []string{
fmt.Sprintf("%d", entry.ID),
entry.Database,
entry.DatabaseType,
entry.Host,
fmt.Sprintf("%d", entry.Port),
entry.BackupPath,
entry.BackupType,
fmt.Sprintf("%d", entry.SizeBytes),
catalog.FormatSize(entry.SizeBytes),
entry.SHA256,
entry.Compression,
fmt.Sprintf("%t", entry.Encrypted),
entry.CreatedAt.Format(time.RFC3339),
fmt.Sprintf("%.2f", entry.Duration),
string(entry.Status),
formatTime(entry.VerifiedAt),
formatBool(entry.VerifyValid),
formatTime(entry.DrillTestedAt),
formatBool(entry.DrillSuccess),
entry.RetentionPolicy,
}
if err := writer.Write(row); err != nil {
return fmt.Errorf("failed to write CSV row: %w", err)
}
}
fmt.Printf("✅ Exported %d backups to CSV: %s\n", len(entries), outputPath)
fmt.Printf(" Open with Excel, LibreOffice, or other spreadsheet software\n")
return nil
}
// exportHTML exports entries to HTML format with styling
func exportHTML(entries []*catalog.Entry, outputPath string, database string) error {
file, err := os.Create(outputPath)
if err != nil {
return fmt.Errorf("failed to create output file: %w", err)
}
defer file.Close()
title := "Backup Catalog Report"
if database != "" {
title = fmt.Sprintf("Backup Catalog Report: %s", database)
}
// Write HTML header with embedded CSS
htmlHeader := fmt.Sprintf(`<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>%s</title>
<style>
body { font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif; margin: 20px; background: #f5f5f5; }
.container { max-width: 1400px; margin: 0 auto; background: white; padding: 30px; box-shadow: 0 2px 10px rgba(0,0,0,0.1); }
h1 { color: #2c3e50; border-bottom: 3px solid #3498db; padding-bottom: 10px; }
.summary { background: #ecf0f1; padding: 15px; margin: 20px 0; border-radius: 5px; }
.summary-item { display: inline-block; margin-right: 30px; }
.summary-label { font-weight: bold; color: #7f8c8d; }
.summary-value { color: #2c3e50; font-size: 18px; }
table { width: 100%%; border-collapse: collapse; margin-top: 20px; }
th { background: #34495e; color: white; padding: 12px; text-align: left; font-weight: 600; }
td { padding: 10px; border-bottom: 1px solid #ecf0f1; }
tr:hover { background: #f8f9fa; }
.status-success { color: #27ae60; font-weight: bold; }
.status-fail { color: #e74c3c; font-weight: bold; }
.badge { padding: 3px 8px; border-radius: 3px; font-size: 12px; font-weight: bold; }
.badge-encrypted { background: #3498db; color: white; }
.badge-verified { background: #27ae60; color: white; }
.badge-tested { background: #9b59b6; color: white; }
.footer { margin-top: 30px; text-align: center; color: #95a5a6; font-size: 12px; }
</style>
</head>
<body>
<div class="container">
<h1>%s</h1>
`, title, title)
file.WriteString(htmlHeader)
// Summary section
totalSize := int64(0)
encryptedCount := 0
verifiedCount := 0
testedCount := 0
for _, entry := range entries {
totalSize += entry.SizeBytes
if entry.Encrypted {
encryptedCount++
}
if entry.VerifyValid != nil && *entry.VerifyValid {
verifiedCount++
}
if entry.DrillSuccess != nil && *entry.DrillSuccess {
testedCount++
}
}
var oldestBackup, newestBackup time.Time
if len(entries) > 0 {
oldestBackup = entries[0].CreatedAt
newestBackup = entries[len(entries)-1].CreatedAt
}
summaryHTML := fmt.Sprintf(`
<div class="summary">
<div class="summary-item">
<div class="summary-label">Total Backups:</div>
<div class="summary-value">%d</div>
</div>
<div class="summary-item">
<div class="summary-label">Total Size:</div>
<div class="summary-value">%s</div>
</div>
<div class="summary-item">
<div class="summary-label">Encrypted:</div>
<div class="summary-value">%d (%.1f%%)</div>
</div>
<div class="summary-item">
<div class="summary-label">Verified:</div>
<div class="summary-value">%d (%.1f%%)</div>
</div>
<div class="summary-item">
<div class="summary-label">DR Tested:</div>
<div class="summary-value">%d (%.1f%%)</div>
</div>
</div>
<div class="summary">
<div class="summary-item">
<div class="summary-label">Oldest Backup:</div>
<div class="summary-value">%s</div>
</div>
<div class="summary-item">
<div class="summary-label">Newest Backup:</div>
<div class="summary-value">%s</div>
</div>
<div class="summary-item">
<div class="summary-label">Time Span:</div>
<div class="summary-value">%s</div>
</div>
</div>
`,
len(entries),
catalog.FormatSize(totalSize),
encryptedCount, float64(encryptedCount)/float64(len(entries))*100,
verifiedCount, float64(verifiedCount)/float64(len(entries))*100,
testedCount, float64(testedCount)/float64(len(entries))*100,
oldestBackup.Format("2006-01-02 15:04"),
newestBackup.Format("2006-01-02 15:04"),
formatTimeSpan(newestBackup.Sub(oldestBackup)),
)
file.WriteString(summaryHTML)
// Table header
tableHeader := `
<table>
<thead>
<tr>
<th>Database</th>
<th>Created</th>
<th>Size</th>
<th>Type</th>
<th>Duration</th>
<th>Status</th>
<th>Attributes</th>
</tr>
</thead>
<tbody>
`
file.WriteString(tableHeader)
// Table rows
for _, entry := range entries {
badges := []string{}
if entry.Encrypted {
badges = append(badges, `<span class="badge badge-encrypted">Encrypted</span>`)
}
if entry.VerifyValid != nil && *entry.VerifyValid {
badges = append(badges, `<span class="badge badge-verified">Verified</span>`)
}
if entry.DrillSuccess != nil && *entry.DrillSuccess {
badges = append(badges, `<span class="badge badge-tested">DR Tested</span>`)
}
statusClass := "status-success"
statusText := string(entry.Status)
if entry.Status == catalog.StatusFailed {
statusClass = "status-fail"
}
row := fmt.Sprintf(`
<tr>
<td>%s</td>
<td>%s</td>
<td>%s</td>
<td>%s</td>
<td>%.1fs</td>
<td class="%s">%s</td>
<td>%s</td>
</tr>`,
html.EscapeString(entry.Database),
entry.CreatedAt.Format("2006-01-02 15:04:05"),
catalog.FormatSize(entry.SizeBytes),
html.EscapeString(entry.BackupType),
entry.Duration,
statusClass,
html.EscapeString(statusText),
strings.Join(badges, " "),
)
file.WriteString(row)
}
// Table footer and close HTML
htmlFooter := `
</tbody>
</table>
<div class="footer">
Generated by dbbackup on ` + time.Now().Format("2006-01-02 15:04:05") + `
</div>
</div>
</body>
</html>
`
file.WriteString(htmlFooter)
fmt.Printf("✅ Exported %d backups to HTML: %s\n", len(entries), outputPath)
fmt.Printf(" Open in browser: file://%s\n", filepath.Join(os.Getenv("PWD"), exportOutput))
return nil
}
// exportJSON exports entries to JSON format
func exportJSON(entries []*catalog.Entry, outputPath string) error {
file, err := os.Create(outputPath)
if err != nil {
return fmt.Errorf("failed to create output file: %w", err)
}
defer file.Close()
encoder := json.NewEncoder(file)
encoder.SetIndent("", " ")
if err := encoder.Encode(entries); err != nil {
return fmt.Errorf("failed to encode JSON: %w", err)
}
fmt.Printf("✅ Exported %d backups to JSON: %s\n", len(entries), outputPath)
return nil
}
// formatTime formats *time.Time to string
func formatTime(t *time.Time) string {
if t == nil {
return ""
}
return t.Format(time.RFC3339)
}
// formatBool formats *bool to string
func formatBool(b *bool) string {
if b == nil {
return ""
}
if *b {
return "true"
}
return "false"
}
// formatExportDuration formats *time.Duration to string
func formatExportDuration(d *time.Duration) string {
if d == nil {
return ""
}
return d.String()
}
// formatTimeSpan formats a duration in human-readable form
func formatTimeSpan(d time.Duration) string {
days := int(d.Hours() / 24)
if days > 365 {
years := days / 365
return fmt.Sprintf("%d years", years)
}
if days > 30 {
months := days / 30
return fmt.Sprintf("%d months", months)
}
if days > 0 {
return fmt.Sprintf("%d days", days)
}
return fmt.Sprintf("%.0f hours", d.Hours())
}

80
cmd/completion.go Normal file
View File

@ -0,0 +1,80 @@
package cmd
import (
"os"
"github.com/spf13/cobra"
)
var completionCmd = &cobra.Command{
Use: "completion [bash|zsh|fish|powershell]",
Short: "Generate shell completion scripts",
Long: `Generate shell completion scripts for dbbackup commands.
The completion script allows tab-completion of:
- Commands and subcommands
- Flags and their values
- File paths for backup/restore operations
Installation Instructions:
Bash:
# Add to ~/.bashrc or ~/.bash_profile:
source <(dbbackup completion bash)
# Or save to file and source it:
dbbackup completion bash > ~/.dbbackup-completion.bash
echo 'source ~/.dbbackup-completion.bash' >> ~/.bashrc
Zsh:
# Add to ~/.zshrc:
source <(dbbackup completion zsh)
# Or save to completion directory:
dbbackup completion zsh > "${fpath[1]}/_dbbackup"
# For custom location:
dbbackup completion zsh > ~/.dbbackup-completion.zsh
echo 'source ~/.dbbackup-completion.zsh' >> ~/.zshrc
Fish:
# Save to fish completion directory:
dbbackup completion fish > ~/.config/fish/completions/dbbackup.fish
PowerShell:
# Add to your PowerShell profile:
dbbackup completion powershell | Out-String | Invoke-Expression
# Or save to profile:
dbbackup completion powershell >> $PROFILE
After installation, restart your shell or source the completion file.
Note: Some flags may have conflicting shorthand letters across different
subcommands (e.g., -d for both db-type and database). Tab completion will
work correctly for the command you're using.`,
ValidArgs: []string{"bash", "zsh", "fish", "powershell"},
Args: cobra.ExactArgs(1),
DisableFlagParsing: true, // Don't parse flags for completion generation
Run: func(cmd *cobra.Command, args []string) {
shell := args[0]
// Get root command without triggering flag merging
root := cmd.Root()
switch shell {
case "bash":
root.GenBashCompletionV2(os.Stdout, true)
case "zsh":
root.GenZshCompletion(os.Stdout)
case "fish":
root.GenFishCompletion(os.Stdout, true)
case "powershell":
root.GenPowerShellCompletionWithDesc(os.Stdout)
}
},
}
func init() {
rootCmd.AddCommand(completionCmd)
}

212
cmd/estimate.go Normal file
View File

@ -0,0 +1,212 @@
package cmd
import (
"context"
"encoding/json"
"fmt"
"time"
"github.com/spf13/cobra"
"dbbackup/internal/backup"
)
var (
estimateDetailed bool
estimateJSON bool
)
var estimateCmd = &cobra.Command{
Use: "estimate",
Short: "Estimate backup size and duration before running",
Long: `Estimate how much disk space and time a backup will require.
This helps plan backup operations and ensure sufficient resources are available.
The estimation queries database statistics without performing actual backups.
Examples:
# Estimate single database backup
dbbackup estimate single mydb
# Estimate full cluster backup
dbbackup estimate cluster
# Detailed estimation with per-database breakdown
dbbackup estimate cluster --detailed
# JSON output for automation
dbbackup estimate single mydb --json`,
}
var estimateSingleCmd = &cobra.Command{
Use: "single [database]",
Short: "Estimate single database backup size",
Long: `Estimate the size and duration for backing up a single database.
Provides:
- Raw database size
- Estimated compressed size
- Estimated backup duration
- Required disk space
- Disk space availability check
- Recommended backup profile`,
Args: cobra.ExactArgs(1),
RunE: runEstimateSingle,
}
var estimateClusterCmd = &cobra.Command{
Use: "cluster",
Short: "Estimate full cluster backup size",
Long: `Estimate the size and duration for backing up an entire database cluster.
Provides:
- Total cluster size
- Per-database breakdown (with --detailed)
- Estimated total duration (accounting for parallelism)
- Required disk space
- Disk space availability check
Uses configured parallelism settings to estimate actual backup time.`,
RunE: runEstimateCluster,
}
func init() {
rootCmd.AddCommand(estimateCmd)
estimateCmd.AddCommand(estimateSingleCmd)
estimateCmd.AddCommand(estimateClusterCmd)
// Flags for both subcommands
estimateCmd.PersistentFlags().BoolVar(&estimateDetailed, "detailed", false, "Show detailed per-database breakdown")
estimateCmd.PersistentFlags().BoolVar(&estimateJSON, "json", false, "Output as JSON")
}
func runEstimateSingle(cmd *cobra.Command, args []string) error {
ctx, cancel := context.WithTimeout(cmd.Context(), 30*time.Second)
defer cancel()
databaseName := args[0]
fmt.Printf("🔍 Estimating backup size for database: %s\n\n", databaseName)
estimate, err := backup.EstimateBackupSize(ctx, cfg, log, databaseName)
if err != nil {
return fmt.Errorf("estimation failed: %w", err)
}
if estimateJSON {
// Output JSON
fmt.Println(toJSON(estimate))
} else {
// Human-readable output
fmt.Println(backup.FormatSizeEstimate(estimate))
fmt.Printf("\n Estimation completed in %v\n", estimate.EstimationTime)
// Warning if insufficient space
if !estimate.HasSufficientSpace {
fmt.Println()
fmt.Println("⚠️ WARNING: Insufficient disk space!")
fmt.Printf(" Need %s more space to proceed safely.\n",
formatBytes(estimate.RequiredDiskSpace-estimate.AvailableDiskSpace))
fmt.Println()
fmt.Println(" Recommended actions:")
fmt.Println(" 1. Free up disk space: dbbackup cleanup /backups --retention-days 7")
fmt.Println(" 2. Use a different backup directory: --backup-dir /other/location")
fmt.Println(" 3. Increase disk capacity")
}
}
return nil
}
func runEstimateCluster(cmd *cobra.Command, args []string) error {
ctx, cancel := context.WithTimeout(cmd.Context(), 60*time.Second)
defer cancel()
fmt.Println("🔍 Estimating cluster backup size...")
fmt.Println()
estimate, err := backup.EstimateClusterBackupSize(ctx, cfg, log)
if err != nil {
return fmt.Errorf("estimation failed: %w", err)
}
if estimateJSON {
// Output JSON
fmt.Println(toJSON(estimate))
} else {
// Human-readable output
fmt.Println(backup.FormatClusterSizeEstimate(estimate))
// Detailed per-database breakdown
if estimateDetailed && len(estimate.DatabaseEstimates) > 0 {
fmt.Println()
fmt.Println("Per-Database Breakdown:")
fmt.Println("════════════════════════════════════════════════════════════")
// Sort databases by size (largest first)
type dbSize struct {
name string
size int64
}
var sorted []dbSize
for name, est := range estimate.DatabaseEstimates {
sorted = append(sorted, dbSize{name, est.EstimatedRawSize})
}
// Simple sort by size (descending)
for i := 0; i < len(sorted)-1; i++ {
for j := i + 1; j < len(sorted); j++ {
if sorted[j].size > sorted[i].size {
sorted[i], sorted[j] = sorted[j], sorted[i]
}
}
}
// Display top 10 largest
displayCount := len(sorted)
if displayCount > 10 {
displayCount = 10
}
for i := 0; i < displayCount; i++ {
name := sorted[i].name
est := estimate.DatabaseEstimates[name]
fmt.Printf("\n%d. %s\n", i+1, name)
fmt.Printf(" Raw: %s | Compressed: %s | Duration: %v\n",
formatBytes(est.EstimatedRawSize),
formatBytes(est.EstimatedCompressed),
est.EstimatedDuration.Round(time.Second))
if est.LargestTable != "" {
fmt.Printf(" Largest table: %s (%s)\n",
est.LargestTable,
formatBytes(est.LargestTableSize))
}
}
if len(sorted) > 10 {
fmt.Printf("\n... and %d more databases\n", len(sorted)-10)
}
}
// Warning if insufficient space
if !estimate.HasSufficientSpace {
fmt.Println()
fmt.Println("⚠️ WARNING: Insufficient disk space!")
fmt.Printf(" Need %s more space to proceed safely.\n",
formatBytes(estimate.RequiredDiskSpace-estimate.AvailableDiskSpace))
fmt.Println()
fmt.Println(" Recommended actions:")
fmt.Println(" 1. Free up disk space: dbbackup cleanup /backups --retention-days 7")
fmt.Println(" 2. Use a different backup directory: --backup-dir /other/location")
fmt.Println(" 3. Increase disk capacity")
fmt.Println(" 4. Back up databases individually to spread across time/space")
}
}
return nil
}
// toJSON converts any struct to JSON string (simple helper)
func toJSON(v interface{}) string {
b, _ := json.Marshal(v)
return string(b)
}

182
cmd/man.go Normal file
View File

@ -0,0 +1,182 @@
package cmd
import (
"fmt"
"os"
"path/filepath"
"github.com/spf13/cobra"
"github.com/spf13/cobra/doc"
)
var (
manOutputDir string
)
var manCmd = &cobra.Command{
Use: "man",
Short: "Generate man pages for dbbackup",
Long: `Generate Unix manual (man) pages for all dbbackup commands.
Man pages are generated in standard groff format and can be viewed
with the 'man' command or installed system-wide.
Installation:
# Generate pages
dbbackup man --output /tmp/man
# Install system-wide (requires root)
sudo cp /tmp/man/*.1 /usr/local/share/man/man1/
sudo mandb # Update man database
# View pages
man dbbackup
man dbbackup-backup
man dbbackup-restore
Examples:
# Generate to current directory
dbbackup man
# Generate to specific directory
dbbackup man --output ./docs/man
# Generate and install system-wide
dbbackup man --output /tmp/man && \
sudo cp /tmp/man/*.1 /usr/local/share/man/man1/ && \
sudo mandb`,
DisableFlagParsing: true, // Avoid shorthand conflicts during generation
RunE: runGenerateMan,
}
func init() {
rootCmd.AddCommand(manCmd)
manCmd.Flags().StringVarP(&manOutputDir, "output", "o", "./man", "Output directory for man pages")
// Parse flags manually since DisableFlagParsing is enabled
manCmd.SetHelpFunc(func(cmd *cobra.Command, args []string) {
cmd.Parent().HelpFunc()(cmd, args)
})
}
func runGenerateMan(cmd *cobra.Command, args []string) error {
// Parse flags manually since DisableFlagParsing is enabled
outputDir := "./man"
for i := 0; i < len(args); i++ {
if args[i] == "--output" || args[i] == "-o" {
if i+1 < len(args) {
outputDir = args[i+1]
i++
}
}
}
// Create output directory
if err := os.MkdirAll(outputDir, 0755); err != nil {
return fmt.Errorf("failed to create output directory: %w", err)
}
// Generate man pages for root and all subcommands
header := &doc.GenManHeader{
Title: "DBBACKUP",
Section: "1",
Source: "dbbackup",
Manual: "Database Backup Tool",
}
// Due to shorthand flag conflicts in some subcommands (-d for db-type vs database),
// we generate man pages command-by-command, catching any errors
root := cmd.Root()
generatedCount := 0
failedCount := 0
// Helper to generate man page for a single command
genManForCommand := func(c *cobra.Command) {
// Recover from panic due to flag conflicts
defer func() {
if r := recover(); r != nil {
failedCount++
// Silently skip commands with flag conflicts
}
}()
filename := filepath.Join(outputDir, c.CommandPath()+".1")
// Replace spaces with hyphens for filename
filename = filepath.Join(outputDir, filepath.Base(c.CommandPath())+".1")
f, err := os.Create(filename)
if err != nil {
failedCount++
return
}
defer f.Close()
if err := doc.GenMan(c, header, f); err != nil {
failedCount++
os.Remove(filename) // Clean up partial file
} else {
generatedCount++
}
}
// Generate for root command
genManForCommand(root)
// Walk through all commands
var walkCommands func(*cobra.Command)
walkCommands = func(c *cobra.Command) {
for _, sub := range c.Commands() {
// Skip hidden commands
if sub.Hidden {
continue
}
// Try to generate man page
genManForCommand(sub)
// Recurse into subcommands
walkCommands(sub)
}
}
walkCommands(root)
fmt.Printf("✅ Generated %d man pages in %s", generatedCount, outputDir)
if failedCount > 0 {
fmt.Printf(" (%d skipped due to flag conflicts)\n", failedCount)
} else {
fmt.Println()
}
fmt.Println()
fmt.Println("📖 Installation Instructions:")
fmt.Println()
fmt.Println(" 1. Install system-wide (requires root):")
fmt.Printf(" sudo cp %s/*.1 /usr/local/share/man/man1/\n", outputDir)
fmt.Println(" sudo mandb")
fmt.Println()
fmt.Println(" 2. Test locally (no installation):")
fmt.Printf(" man -l %s/dbbackup.1\n", outputDir)
fmt.Println()
fmt.Println(" 3. View installed pages:")
fmt.Println(" man dbbackup")
fmt.Println(" man dbbackup-backup")
fmt.Println(" man dbbackup-restore")
fmt.Println()
// Show some example pages
files, err := filepath.Glob(filepath.Join(outputDir, "*.1"))
if err == nil && len(files) > 0 {
fmt.Println("📋 Generated Pages (sample):")
for i, file := range files {
if i >= 5 {
fmt.Printf(" ... and %d more\n", len(files)-5)
break
}
fmt.Printf(" - %s\n", filepath.Base(file))
}
fmt.Println()
}
return nil
}

168
cmd/version.go Normal file
View File

@ -0,0 +1,168 @@
// Package cmd - version command showing detailed build and system info
package cmd
import (
"encoding/json"
"fmt"
"os"
"os/exec"
"runtime"
"strings"
"github.com/spf13/cobra"
)
var versionOutputFormat string
var versionCmd = &cobra.Command{
Use: "version",
Short: "Show detailed version and system information",
Long: `Display comprehensive version information including:
- dbbackup version, build time, and git commit
- Go runtime version
- Operating system and architecture
- Installed database tool versions (pg_dump, mysqldump, etc.)
- System information
Useful for troubleshooting and bug reports.
Examples:
# Show version info
dbbackup version
# JSON output for scripts
dbbackup version --format json
# Short version only
dbbackup version --format short`,
Run: runVersionCmd,
}
func init() {
rootCmd.AddCommand(versionCmd)
versionCmd.Flags().StringVar(&versionOutputFormat, "format", "table", "Output format (table, json, short)")
}
type versionInfo struct {
Version string `json:"version"`
BuildTime string `json:"build_time"`
GitCommit string `json:"git_commit"`
GoVersion string `json:"go_version"`
OS string `json:"os"`
Arch string `json:"arch"`
NumCPU int `json:"num_cpu"`
DatabaseTools map[string]string `json:"database_tools"`
}
func runVersionCmd(cmd *cobra.Command, args []string) {
info := collectVersionInfo()
switch versionOutputFormat {
case "json":
outputVersionJSON(info)
case "short":
fmt.Printf("dbbackup %s\n", info.Version)
default:
outputTable(info)
}
}
func collectVersionInfo() versionInfo {
info := versionInfo{
Version: cfg.Version,
BuildTime: cfg.BuildTime,
GitCommit: cfg.GitCommit,
GoVersion: runtime.Version(),
OS: runtime.GOOS,
Arch: runtime.GOARCH,
NumCPU: runtime.NumCPU(),
DatabaseTools: make(map[string]string),
}
// Check database tools
tools := []struct {
name string
command string
args []string
}{
{"pg_dump", "pg_dump", []string{"--version"}},
{"pg_restore", "pg_restore", []string{"--version"}},
{"psql", "psql", []string{"--version"}},
{"mysqldump", "mysqldump", []string{"--version"}},
{"mysql", "mysql", []string{"--version"}},
{"mariadb-dump", "mariadb-dump", []string{"--version"}},
}
for _, tool := range tools {
version := getToolVersion(tool.command, tool.args)
if version != "" {
info.DatabaseTools[tool.name] = version
}
}
return info
}
func getToolVersion(command string, args []string) string {
cmd := exec.Command(command, args...)
output, err := cmd.Output()
if err != nil {
return ""
}
// Parse first line and extract version
line := strings.Split(string(output), "\n")[0]
line = strings.TrimSpace(line)
// Try to extract just the version number
// e.g., "pg_dump (PostgreSQL) 16.1" -> "16.1"
// e.g., "mysqldump Ver 8.0.35" -> "8.0.35"
parts := strings.Fields(line)
if len(parts) > 0 {
// Return last part which is usually the version
return parts[len(parts)-1]
}
return line
}
func outputVersionJSON(info versionInfo) {
enc := json.NewEncoder(os.Stdout)
enc.SetIndent("", " ")
enc.Encode(info)
}
func outputTable(info versionInfo) {
fmt.Println()
fmt.Println("╔═══════════════════════════════════════════════════════════════╗")
fmt.Println("║ dbbackup Version Info ║")
fmt.Println("╠═══════════════════════════════════════════════════════════════╣")
fmt.Printf("║ %-20s %-40s ║\n", "Version:", info.Version)
fmt.Printf("║ %-20s %-40s ║\n", "Build Time:", info.BuildTime)
// Truncate commit if too long
commit := info.GitCommit
if len(commit) > 40 {
commit = commit[:40]
}
fmt.Printf("║ %-20s %-40s ║\n", "Git Commit:", commit)
fmt.Println("╠═══════════════════════════════════════════════════════════════╣")
fmt.Printf("║ %-20s %-40s ║\n", "Go Version:", info.GoVersion)
fmt.Printf("║ %-20s %-40s ║\n", "OS/Arch:", fmt.Sprintf("%s/%s", info.OS, info.Arch))
fmt.Printf("║ %-20s %-40d ║\n", "CPU Cores:", info.NumCPU)
fmt.Println("╠═══════════════════════════════════════════════════════════════╣")
fmt.Println("║ Database Tools ║")
fmt.Println("╟───────────────────────────────────────────────────────────────╢")
if len(info.DatabaseTools) == 0 {
fmt.Println("║ (none detected) ║")
} else {
for tool, version := range info.DatabaseTools {
fmt.Printf("║ %-18s %-41s ║\n", tool+":", version)
}
}
fmt.Println("╚═══════════════════════════════════════════════════════════════╝")
fmt.Println()
}

5
go.mod
View File

@ -23,6 +23,7 @@ require (
github.com/hashicorp/go-multierror v1.1.1
github.com/jackc/pgx/v5 v5.7.6
github.com/klauspost/pgzip v1.2.6
github.com/mattn/go-isatty v0.0.20
github.com/schollz/progressbar/v3 v3.19.0
github.com/shirou/gopsutil/v3 v3.24.5
github.com/sirupsen/logrus v1.9.3
@ -69,6 +70,7 @@ require (
github.com/charmbracelet/x/cellbuf v0.0.13-0.20250311204145-2c3ea96c31dd // indirect
github.com/charmbracelet/x/term v0.2.1 // indirect
github.com/cncf/xds/go v0.0.0-20250501225837-2ac532fd4443 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.6 // indirect
github.com/envoyproxy/go-control-plane/envoy v1.32.4 // indirect
github.com/envoyproxy/protoc-gen-validate v1.2.1 // indirect
github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f // indirect
@ -90,7 +92,6 @@ require (
github.com/lucasb-eyer/go-colorful v1.2.0 // indirect
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-localereader v0.0.1 // indirect
github.com/mattn/go-runewidth v0.0.16 // indirect
github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db // indirect
@ -102,6 +103,7 @@ require (
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c // indirect
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
github.com/rivo/uniseg v0.4.7 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/spiffe/go-spiffe/v2 v2.5.0 // indirect
github.com/tklauser/go-sysconf v0.3.12 // indirect
github.com/tklauser/numcpus v0.6.1 // indirect
@ -130,6 +132,7 @@ require (
google.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101 // indirect
google.golang.org/grpc v1.76.0 // indirect
google.golang.org/protobuf v1.36.10 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
modernc.org/libc v1.67.6 // indirect
modernc.org/mathutil v1.7.1 // indirect
modernc.org/memory v1.11.0 // indirect

10
go.sum
View File

@ -106,6 +106,7 @@ github.com/chengxilo/virtualterm v1.0.4 h1:Z6IpERbRVlfB8WkOmtbHiDbBANU7cimRIof7m
github.com/chengxilo/virtualterm v1.0.4/go.mod h1:DyxxBZz/x1iqJjFxTFcr6/x+jSpqN0iwWCOK1q10rlY=
github.com/cncf/xds/go v0.0.0-20250501225837-2ac532fd4443 h1:aQ3y1lwWyqYPiWZThqv1aFbZMiM9vblcSArJRf2Irls=
github.com/cncf/xds/go v0.0.0-20250501225837-2ac532fd4443/go.mod h1:W+zGtBO5Y1IgJhy4+A9GOqVhqLpfZi+vwmdNXUehLA8=
github.com/cpuguy83/go-md2man/v2 v2.0.6 h1:XJtiaUW6dEEqVuZiMTn1ldk455QWwEIsMIJlo5vtkx0=
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
@ -177,6 +178,10 @@ github.com/klauspost/compress v1.18.3 h1:9PJRvfbmTabkOX8moIpXPbMMbYN60bWImDDU7L+
github.com/klauspost/compress v1.18.3/go.mod h1:R0h/fSBs8DE4ENlcrlib3PsXS61voFxhIs2DeRhCvJ4=
github.com/klauspost/pgzip v1.2.6 h1:8RXeL5crjEUFnR2/Sn6GJNWtSQ3Dk8pq4CL3jvdDyjU=
github.com/klauspost/pgzip v1.2.6/go.mod h1:Ch1tH69qFZu15pkjo5kYi6mth2Zzwzt50oCQKQE9RUs=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/lucasb-eyer/go-colorful v1.2.0 h1:1nnpGOrhyZZuNyfu1QjKiUICQ74+3FNCN69Aj6K7nkY=
@ -216,6 +221,9 @@ github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qq
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=
github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=
github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/schollz/progressbar/v3 v3.19.0 h1:Ea18xuIRQXLAUidVDox3AbwfUhD0/1IvohyTutOIFoc=
github.com/schollz/progressbar/v3 v3.19.0/go.mod h1:IsO3lpbaGuzh8zIMzgY3+J8l4C8GjO0Y9S69eFvNsec=
@ -312,6 +320,8 @@ google.golang.org/grpc v1.76.0/go.mod h1:Ju12QI8M6iQJtbcsV+awF5a4hfJMLi4X0JLo94U
google.golang.org/protobuf v1.36.10 h1:AYd7cD/uASjIL6Q9LiTjz8JLcrh/88q5UObnmY3aOOE=
google.golang.org/protobuf v1.36.10/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

315
internal/backup/estimate.go Normal file
View File

@ -0,0 +1,315 @@
package backup
import (
"context"
"database/sql"
"fmt"
"time"
"github.com/shirou/gopsutil/v3/disk"
"dbbackup/internal/config"
"dbbackup/internal/database"
"dbbackup/internal/logger"
)
// SizeEstimate contains backup size estimation results
type SizeEstimate struct {
DatabaseName string `json:"database_name"`
EstimatedRawSize int64 `json:"estimated_raw_size_bytes"`
EstimatedCompressed int64 `json:"estimated_compressed_bytes"`
CompressionRatio float64 `json:"compression_ratio"`
TableCount int `json:"table_count"`
LargestTable string `json:"largest_table,omitempty"`
LargestTableSize int64 `json:"largest_table_size_bytes,omitempty"`
EstimatedDuration time.Duration `json:"estimated_duration"`
RecommendedProfile string `json:"recommended_profile"`
RequiredDiskSpace int64 `json:"required_disk_space_bytes"`
AvailableDiskSpace int64 `json:"available_disk_space_bytes"`
HasSufficientSpace bool `json:"has_sufficient_space"`
EstimationTime time.Duration `json:"estimation_time"`
}
// ClusterSizeEstimate contains cluster-wide size estimation
type ClusterSizeEstimate struct {
TotalDatabases int `json:"total_databases"`
TotalRawSize int64 `json:"total_raw_size_bytes"`
TotalCompressed int64 `json:"total_compressed_bytes"`
LargestDatabase string `json:"largest_database,omitempty"`
LargestDatabaseSize int64 `json:"largest_database_size_bytes,omitempty"`
EstimatedDuration time.Duration `json:"estimated_duration"`
RequiredDiskSpace int64 `json:"required_disk_space_bytes"`
AvailableDiskSpace int64 `json:"available_disk_space_bytes"`
HasSufficientSpace bool `json:"has_sufficient_space"`
DatabaseEstimates map[string]*SizeEstimate `json:"database_estimates,omitempty"`
EstimationTime time.Duration `json:"estimation_time"`
}
// EstimateBackupSize estimates the size of a single database backup
func EstimateBackupSize(ctx context.Context, cfg *config.Config, log logger.Logger, databaseName string) (*SizeEstimate, error) {
startTime := time.Now()
estimate := &SizeEstimate{
DatabaseName: databaseName,
}
// Create database connection
db, err := database.New(cfg, log)
if err != nil {
return nil, fmt.Errorf("failed to create database instance: %w", err)
}
defer db.Close()
if err := db.Connect(ctx); err != nil {
return nil, fmt.Errorf("failed to connect to database: %w", err)
}
// Get database size based on engine type
rawSize, err := db.GetDatabaseSize(ctx, databaseName)
if err != nil {
return nil, fmt.Errorf("failed to get database size: %w", err)
}
estimate.EstimatedRawSize = rawSize
// Get table statistics
tables, err := db.ListTables(ctx, databaseName)
if err == nil {
estimate.TableCount = len(tables)
}
// For PostgreSQL and MySQL, get additional detailed statistics
if cfg.IsPostgreSQL() {
pg := db.(*database.PostgreSQL)
if err := estimatePostgresSize(ctx, pg.GetConn(), databaseName, estimate); err != nil {
log.Debug("Could not get detailed PostgreSQL stats: %v", err)
}
} else if cfg.IsMySQL() {
my := db.(*database.MySQL)
if err := estimateMySQLSize(ctx, my.GetConn(), databaseName, estimate); err != nil {
log.Debug("Could not get detailed MySQL stats: %v", err)
}
}
// Calculate compression ratio (typical: 70-80% for databases)
estimate.CompressionRatio = 0.25 // Assume 75% compression (1/4 of original size)
if cfg.CompressionLevel >= 6 {
estimate.CompressionRatio = 0.20 // Better compression with higher levels
}
estimate.EstimatedCompressed = int64(float64(estimate.EstimatedRawSize) * estimate.CompressionRatio)
// Estimate duration (rough: 50 MB/s for pg_dump, 100 MB/s for mysqldump)
throughputMBps := 50.0
if cfg.IsMySQL() {
throughputMBps = 100.0
}
sizeGB := float64(estimate.EstimatedRawSize) / (1024 * 1024 * 1024)
durationMinutes := (sizeGB * 1024) / throughputMBps / 60
estimate.EstimatedDuration = time.Duration(durationMinutes * float64(time.Minute))
// Recommend profile based on size
if sizeGB < 1 {
estimate.RecommendedProfile = "balanced"
} else if sizeGB < 10 {
estimate.RecommendedProfile = "performance"
} else if sizeGB < 100 {
estimate.RecommendedProfile = "turbo"
} else {
estimate.RecommendedProfile = "conservative" // Large DB, be careful
}
// Calculate required disk space (3x compressed size for safety: temp + compressed + checksum)
estimate.RequiredDiskSpace = estimate.EstimatedCompressed * 3
// Check available disk space
if cfg.BackupDir != "" {
if usage, err := disk.Usage(cfg.BackupDir); err == nil {
estimate.AvailableDiskSpace = int64(usage.Free)
estimate.HasSufficientSpace = estimate.AvailableDiskSpace > estimate.RequiredDiskSpace
}
}
estimate.EstimationTime = time.Since(startTime)
return estimate, nil
}
// EstimateClusterBackupSize estimates the size of a full cluster backup
func EstimateClusterBackupSize(ctx context.Context, cfg *config.Config, log logger.Logger) (*ClusterSizeEstimate, error) {
startTime := time.Now()
estimate := &ClusterSizeEstimate{
DatabaseEstimates: make(map[string]*SizeEstimate),
}
// Create database connection
db, err := database.New(cfg, log)
if err != nil {
return nil, fmt.Errorf("failed to create database instance: %w", err)
}
defer db.Close()
if err := db.Connect(ctx); err != nil {
return nil, fmt.Errorf("failed to connect to database: %w", err)
}
// List all databases
databases, err := db.ListDatabases(ctx)
if err != nil {
return nil, fmt.Errorf("failed to list databases: %w", err)
}
estimate.TotalDatabases = len(databases)
// Estimate each database
for _, dbName := range databases {
dbEstimate, err := EstimateBackupSize(ctx, cfg, log, dbName)
if err != nil {
log.Warn("Failed to estimate database size", "database", dbName, "error", err)
continue
}
estimate.DatabaseEstimates[dbName] = dbEstimate
estimate.TotalRawSize += dbEstimate.EstimatedRawSize
estimate.TotalCompressed += dbEstimate.EstimatedCompressed
// Track largest database
if dbEstimate.EstimatedRawSize > estimate.LargestDatabaseSize {
estimate.LargestDatabase = dbName
estimate.LargestDatabaseSize = dbEstimate.EstimatedRawSize
}
}
// Estimate total duration (assume some parallelism)
parallelism := float64(cfg.Jobs)
if parallelism < 1 {
parallelism = 1
}
// Calculate serial duration first
var serialDuration time.Duration
for _, dbEst := range estimate.DatabaseEstimates {
serialDuration += dbEst.EstimatedDuration
}
// Adjust for parallelism (not perfect but reasonable)
estimate.EstimatedDuration = time.Duration(float64(serialDuration) / parallelism)
// Calculate required disk space
estimate.RequiredDiskSpace = estimate.TotalCompressed * 3
// Check available disk space
if cfg.BackupDir != "" {
if usage, err := disk.Usage(cfg.BackupDir); err == nil {
estimate.AvailableDiskSpace = int64(usage.Free)
estimate.HasSufficientSpace = estimate.AvailableDiskSpace > estimate.RequiredDiskSpace
}
}
estimate.EstimationTime = time.Since(startTime)
return estimate, nil
}
// estimatePostgresSize gets detailed statistics from PostgreSQL
func estimatePostgresSize(ctx context.Context, conn *sql.DB, databaseName string, estimate *SizeEstimate) error {
// Note: EstimatedRawSize and TableCount are already set by interface methods
// Get largest table size
largestQuery := `
SELECT
schemaname || '.' || tablename as table_name,
pg_total_relation_size(schemaname||'.'||tablename) as size_bytes
FROM pg_tables
WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC
LIMIT 1
`
var tableName string
var tableSize int64
if err := conn.QueryRowContext(ctx, largestQuery).Scan(&tableName, &tableSize); err == nil {
estimate.LargestTable = tableName
estimate.LargestTableSize = tableSize
}
return nil
}
// estimateMySQLSize gets detailed statistics from MySQL/MariaDB
func estimateMySQLSize(ctx context.Context, conn *sql.DB, databaseName string, estimate *SizeEstimate) error {
// Note: EstimatedRawSize and TableCount are already set by interface methods
// Get largest table
largestQuery := `
SELECT
table_name,
data_length + index_length as size_bytes
FROM information_schema.TABLES
WHERE table_schema = ?
ORDER BY (data_length + index_length) DESC
LIMIT 1
`
var tableName string
var tableSize int64
if err := conn.QueryRowContext(ctx, largestQuery, databaseName).Scan(&tableName, &tableSize); err == nil {
estimate.LargestTable = tableName
estimate.LargestTableSize = tableSize
}
return nil
}
// FormatSizeEstimate returns a human-readable summary
func FormatSizeEstimate(estimate *SizeEstimate) string {
return fmt.Sprintf(`Database: %s
Raw Size: %s
Compressed Size: %s (%.0f%% compression)
Tables: %d
Largest Table: %s (%s)
Estimated Duration: %s
Recommended Profile: %s
Required Disk Space: %s
Available Space: %s
Status: %s`,
estimate.DatabaseName,
formatBytes(estimate.EstimatedRawSize),
formatBytes(estimate.EstimatedCompressed),
(1.0-estimate.CompressionRatio)*100,
estimate.TableCount,
estimate.LargestTable,
formatBytes(estimate.LargestTableSize),
estimate.EstimatedDuration.Round(time.Second),
estimate.RecommendedProfile,
formatBytes(estimate.RequiredDiskSpace),
formatBytes(estimate.AvailableDiskSpace),
getSpaceStatus(estimate.HasSufficientSpace))
}
// FormatClusterSizeEstimate returns a human-readable summary
func FormatClusterSizeEstimate(estimate *ClusterSizeEstimate) string {
return fmt.Sprintf(`Cluster Backup Estimate:
Total Databases: %d
Total Raw Size: %s
Total Compressed: %s
Largest Database: %s (%s)
Estimated Duration: %s
Required Disk Space: %s
Available Space: %s
Status: %s
Estimation Time: %v`,
estimate.TotalDatabases,
formatBytes(estimate.TotalRawSize),
formatBytes(estimate.TotalCompressed),
estimate.LargestDatabase,
formatBytes(estimate.LargestDatabaseSize),
estimate.EstimatedDuration.Round(time.Second),
formatBytes(estimate.RequiredDiskSpace),
formatBytes(estimate.AvailableDiskSpace),
getSpaceStatus(estimate.HasSufficientSpace),
estimate.EstimationTime)
}
func getSpaceStatus(hasSufficient bool) string {
if hasSufficient {
return "✅ Sufficient"
}
return "⚠️ INSUFFICIENT - Free up space first!"
}

View File

@ -0,0 +1,386 @@
package checks
import (
"context"
"database/sql"
"fmt"
"os"
"runtime"
"strings"
"syscall"
"time"
"github.com/shirou/gopsutil/v3/disk"
"github.com/shirou/gopsutil/v3/mem"
)
// ErrorContext provides environmental context for debugging errors
type ErrorContext struct {
// System info
AvailableDiskSpace uint64 `json:"available_disk_space"`
TotalDiskSpace uint64 `json:"total_disk_space"`
DiskUsagePercent float64 `json:"disk_usage_percent"`
AvailableMemory uint64 `json:"available_memory"`
TotalMemory uint64 `json:"total_memory"`
MemoryUsagePercent float64 `json:"memory_usage_percent"`
OpenFileDescriptors uint64 `json:"open_file_descriptors,omitempty"`
MaxFileDescriptors uint64 `json:"max_file_descriptors,omitempty"`
// Database info (if connection available)
DatabaseVersion string `json:"database_version,omitempty"`
MaxConnections int `json:"max_connections,omitempty"`
CurrentConnections int `json:"current_connections,omitempty"`
MaxLocksPerTxn int `json:"max_locks_per_transaction,omitempty"`
SharedMemory string `json:"shared_memory,omitempty"`
// Network info
CanReachDatabase bool `json:"can_reach_database"`
DatabaseHost string `json:"database_host,omitempty"`
DatabasePort int `json:"database_port,omitempty"`
// Timing
CollectedAt time.Time `json:"collected_at"`
}
// DiagnosticsReport combines error classification with environmental context
type DiagnosticsReport struct {
Classification *ErrorClassification `json:"classification"`
Context *ErrorContext `json:"context"`
Recommendations []string `json:"recommendations"`
RootCause string `json:"root_cause,omitempty"`
}
// GatherErrorContext collects environmental information for error diagnosis
func GatherErrorContext(backupDir string, db *sql.DB) *ErrorContext {
ctx := &ErrorContext{
CollectedAt: time.Now(),
}
// Gather disk space information
if backupDir != "" {
usage, err := disk.Usage(backupDir)
if err == nil {
ctx.AvailableDiskSpace = usage.Free
ctx.TotalDiskSpace = usage.Total
ctx.DiskUsagePercent = usage.UsedPercent
}
}
// Gather memory information
vmStat, err := mem.VirtualMemory()
if err == nil {
ctx.AvailableMemory = vmStat.Available
ctx.TotalMemory = vmStat.Total
ctx.MemoryUsagePercent = vmStat.UsedPercent
}
// Gather file descriptor limits (Linux/Unix only)
if runtime.GOOS != "windows" {
var rLimit syscall.Rlimit
if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rLimit); err == nil {
ctx.MaxFileDescriptors = rLimit.Cur
// Try to get current open FDs (this is platform-specific)
if fds, err := countOpenFileDescriptors(); err == nil {
ctx.OpenFileDescriptors = fds
}
}
}
// Gather database-specific context (if connection available)
if db != nil {
gatherDatabaseContext(db, ctx)
}
return ctx
}
// countOpenFileDescriptors counts currently open file descriptors (Linux only)
func countOpenFileDescriptors() (uint64, error) {
if runtime.GOOS != "linux" {
return 0, fmt.Errorf("not supported on %s", runtime.GOOS)
}
pid := os.Getpid()
fdDir := fmt.Sprintf("/proc/%d/fd", pid)
entries, err := os.ReadDir(fdDir)
if err != nil {
return 0, err
}
return uint64(len(entries)), nil
}
// gatherDatabaseContext collects PostgreSQL-specific diagnostics
func gatherDatabaseContext(db *sql.DB, ctx *ErrorContext) {
// Set timeout for diagnostic queries
diagCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
// Get PostgreSQL version
var version string
if err := db.QueryRowContext(diagCtx, "SELECT version()").Scan(&version); err == nil {
// Extract short version (e.g., "PostgreSQL 14.5")
parts := strings.Fields(version)
if len(parts) >= 2 {
ctx.DatabaseVersion = parts[0] + " " + parts[1]
}
}
// Get max_connections
var maxConns int
if err := db.QueryRowContext(diagCtx, "SHOW max_connections").Scan(&maxConns); err == nil {
ctx.MaxConnections = maxConns
}
// Get current connections
var currConns int
query := "SELECT count(*) FROM pg_stat_activity"
if err := db.QueryRowContext(diagCtx, query).Scan(&currConns); err == nil {
ctx.CurrentConnections = currConns
}
// Get max_locks_per_transaction
var maxLocks int
if err := db.QueryRowContext(diagCtx, "SHOW max_locks_per_transaction").Scan(&maxLocks); err == nil {
ctx.MaxLocksPerTxn = maxLocks
}
// Get shared_buffers
var sharedBuffers string
if err := db.QueryRowContext(diagCtx, "SHOW shared_buffers").Scan(&sharedBuffers); err == nil {
ctx.SharedMemory = sharedBuffers
}
}
// DiagnoseError analyzes an error with full environmental context
func DiagnoseError(errorMsg string, backupDir string, db *sql.DB) *DiagnosticsReport {
classification := ClassifyError(errorMsg)
context := GatherErrorContext(backupDir, db)
report := &DiagnosticsReport{
Classification: classification,
Context: context,
Recommendations: make([]string, 0),
}
// Generate context-specific recommendations
generateContextualRecommendations(report)
// Try to determine root cause
report.RootCause = analyzeRootCause(report)
return report
}
// generateContextualRecommendations creates recommendations based on error + environment
func generateContextualRecommendations(report *DiagnosticsReport) {
ctx := report.Context
classification := report.Classification
// Disk space recommendations
if classification.Category == "disk_space" || ctx.DiskUsagePercent > 90 {
report.Recommendations = append(report.Recommendations,
fmt.Sprintf("⚠ Disk is %.1f%% full (%s available)",
ctx.DiskUsagePercent, formatBytes(ctx.AvailableDiskSpace)))
report.Recommendations = append(report.Recommendations,
"• Clean up old backups: find /mnt/backups -type f -mtime +30 -delete")
report.Recommendations = append(report.Recommendations,
"• Enable automatic cleanup: dbbackup cleanup --retention-days 30")
}
// Memory recommendations
if ctx.MemoryUsagePercent > 85 {
report.Recommendations = append(report.Recommendations,
fmt.Sprintf("⚠ Memory is %.1f%% full (%s available)",
ctx.MemoryUsagePercent, formatBytes(ctx.AvailableMemory)))
report.Recommendations = append(report.Recommendations,
"• Consider reducing parallel jobs: --jobs 2")
report.Recommendations = append(report.Recommendations,
"• Use conservative restore profile: dbbackup restore --profile conservative")
}
// File descriptor recommendations
if ctx.OpenFileDescriptors > 0 && ctx.MaxFileDescriptors > 0 {
fdUsagePercent := float64(ctx.OpenFileDescriptors) / float64(ctx.MaxFileDescriptors) * 100
if fdUsagePercent > 80 {
report.Recommendations = append(report.Recommendations,
fmt.Sprintf("⚠ File descriptors at %.0f%% (%d/%d used)",
fdUsagePercent, ctx.OpenFileDescriptors, ctx.MaxFileDescriptors))
report.Recommendations = append(report.Recommendations,
"• Increase limit: ulimit -n 8192")
report.Recommendations = append(report.Recommendations,
"• Or add to /etc/security/limits.conf: dbbackup soft nofile 8192")
}
}
// PostgreSQL lock recommendations
if classification.Category == "locks" && ctx.MaxLocksPerTxn > 0 {
totalLocks := ctx.MaxLocksPerTxn * (ctx.MaxConnections + 100)
report.Recommendations = append(report.Recommendations,
fmt.Sprintf("Current lock capacity: %d locks (max_locks_per_transaction × max_connections)",
totalLocks))
if ctx.MaxLocksPerTxn < 2048 {
report.Recommendations = append(report.Recommendations,
fmt.Sprintf("⚠ max_locks_per_transaction is low (%d)", ctx.MaxLocksPerTxn))
report.Recommendations = append(report.Recommendations,
"• Increase: ALTER SYSTEM SET max_locks_per_transaction = 4096;")
report.Recommendations = append(report.Recommendations,
"• Then restart PostgreSQL: sudo systemctl restart postgresql")
}
if ctx.MaxConnections < 20 {
report.Recommendations = append(report.Recommendations,
fmt.Sprintf("⚠ Low max_connections (%d) reduces total lock capacity", ctx.MaxConnections))
report.Recommendations = append(report.Recommendations,
"• With fewer connections, you need HIGHER max_locks_per_transaction")
}
}
// Connection recommendations
if classification.Category == "network" && ctx.CurrentConnections > 0 {
connUsagePercent := float64(ctx.CurrentConnections) / float64(ctx.MaxConnections) * 100
if connUsagePercent > 80 {
report.Recommendations = append(report.Recommendations,
fmt.Sprintf("⚠ Connection pool at %.0f%% capacity (%d/%d used)",
connUsagePercent, ctx.CurrentConnections, ctx.MaxConnections))
report.Recommendations = append(report.Recommendations,
"• Close idle connections or increase max_connections")
}
}
// Version recommendations
if classification.Category == "version" && ctx.DatabaseVersion != "" {
report.Recommendations = append(report.Recommendations,
fmt.Sprintf("Database version: %s", ctx.DatabaseVersion))
report.Recommendations = append(report.Recommendations,
"• Check backup was created on same or older PostgreSQL version")
report.Recommendations = append(report.Recommendations,
"• For major version differences, review migration notes")
}
}
// analyzeRootCause attempts to determine the root cause based on error + context
func analyzeRootCause(report *DiagnosticsReport) string {
ctx := report.Context
classification := report.Classification
// Disk space root causes
if classification.Category == "disk_space" {
if ctx.DiskUsagePercent > 95 {
return "Disk is critically full - no space for backup/restore operations"
}
return "Insufficient disk space for operation"
}
// Lock exhaustion root causes
if classification.Category == "locks" {
if ctx.MaxLocksPerTxn > 0 && ctx.MaxConnections > 0 {
totalLocks := ctx.MaxLocksPerTxn * (ctx.MaxConnections + 100)
if totalLocks < 50000 {
return fmt.Sprintf("Lock table capacity too low (%d total locks). Likely cause: max_locks_per_transaction (%d) too low for this database size",
totalLocks, ctx.MaxLocksPerTxn)
}
}
return "PostgreSQL lock table exhausted - need to increase max_locks_per_transaction"
}
// Memory pressure
if ctx.MemoryUsagePercent > 90 {
return "System under memory pressure - may cause slow operations or failures"
}
// Connection exhaustion
if classification.Category == "network" && ctx.MaxConnections > 0 && ctx.CurrentConnections > 0 {
if ctx.CurrentConnections >= ctx.MaxConnections {
return "Connection pool exhausted - all connections in use"
}
}
return ""
}
// FormatDiagnosticsReport creates a human-readable diagnostics report
func FormatDiagnosticsReport(report *DiagnosticsReport) string {
var sb strings.Builder
sb.WriteString("═══════════════════════════════════════════════════════════\n")
sb.WriteString(" DBBACKUP ERROR DIAGNOSTICS REPORT\n")
sb.WriteString("═══════════════════════════════════════════════════════════\n\n")
// Error classification
sb.WriteString(fmt.Sprintf("Error Type: %s\n", strings.ToUpper(report.Classification.Type)))
sb.WriteString(fmt.Sprintf("Category: %s\n", report.Classification.Category))
sb.WriteString(fmt.Sprintf("Severity: %d/3\n\n", report.Classification.Severity))
// Error message
sb.WriteString("Message:\n")
sb.WriteString(fmt.Sprintf(" %s\n\n", report.Classification.Message))
// Hint
if report.Classification.Hint != "" {
sb.WriteString("Hint:\n")
sb.WriteString(fmt.Sprintf(" %s\n\n", report.Classification.Hint))
}
// Root cause (if identified)
if report.RootCause != "" {
sb.WriteString("Root Cause:\n")
sb.WriteString(fmt.Sprintf(" %s\n\n", report.RootCause))
}
// System context
sb.WriteString("System Context:\n")
sb.WriteString(fmt.Sprintf(" Disk Space: %s / %s (%.1f%% used)\n",
formatBytes(report.Context.AvailableDiskSpace),
formatBytes(report.Context.TotalDiskSpace),
report.Context.DiskUsagePercent))
sb.WriteString(fmt.Sprintf(" Memory: %s / %s (%.1f%% used)\n",
formatBytes(report.Context.AvailableMemory),
formatBytes(report.Context.TotalMemory),
report.Context.MemoryUsagePercent))
if report.Context.OpenFileDescriptors > 0 {
sb.WriteString(fmt.Sprintf(" File Descriptors: %d / %d\n",
report.Context.OpenFileDescriptors,
report.Context.MaxFileDescriptors))
}
// Database context
if report.Context.DatabaseVersion != "" {
sb.WriteString("\nDatabase Context:\n")
sb.WriteString(fmt.Sprintf(" Version: %s\n", report.Context.DatabaseVersion))
if report.Context.MaxConnections > 0 {
sb.WriteString(fmt.Sprintf(" Connections: %d / %d\n",
report.Context.CurrentConnections,
report.Context.MaxConnections))
}
if report.Context.MaxLocksPerTxn > 0 {
sb.WriteString(fmt.Sprintf(" Max Locks: %d per transaction\n", report.Context.MaxLocksPerTxn))
totalLocks := report.Context.MaxLocksPerTxn * (report.Context.MaxConnections + 100)
sb.WriteString(fmt.Sprintf(" Total Lock Capacity: ~%d\n", totalLocks))
}
if report.Context.SharedMemory != "" {
sb.WriteString(fmt.Sprintf(" Shared Memory: %s\n", report.Context.SharedMemory))
}
}
// Recommendations
if len(report.Recommendations) > 0 {
sb.WriteString("\nRecommendations:\n")
for _, rec := range report.Recommendations {
sb.WriteString(fmt.Sprintf(" %s\n", rec))
}
}
// Action
if report.Classification.Action != "" {
sb.WriteString("\nSuggested Action:\n")
sb.WriteString(fmt.Sprintf(" %s\n", report.Classification.Action))
}
sb.WriteString("\n═══════════════════════════════════════════════════════════\n")
sb.WriteString(fmt.Sprintf("Report generated: %s\n", report.Context.CollectedAt.Format("2006-01-02 15:04:05")))
sb.WriteString("═══════════════════════════════════════════════════════════\n")
return sb.String()
}

View File

@ -117,6 +117,10 @@ func (b *baseDatabase) Close() error {
return nil
}
func (b *baseDatabase) GetConn() *sql.DB {
return b.db
}
func (b *baseDatabase) Ping(ctx context.Context) error {
if b.db == nil {
return fmt.Errorf("database not connected")

View File

@ -339,8 +339,9 @@ func (p *PostgreSQL) BuildBackupCommand(database, outputFile string, options Bac
cmd = append(cmd, "--compress="+strconv.Itoa(options.Compression))
}
// Parallel jobs (only for directory format)
if options.Parallel > 1 && options.Format == "directory" {
// Parallel jobs (supported for directory and custom formats since PostgreSQL 9.3)
// NOTE: plain format does NOT support --jobs (it's single-threaded by design)
if options.Parallel > 1 && (options.Format == "directory" || options.Format == "custom") {
cmd = append(cmd, "--jobs="+strconv.Itoa(options.Parallel))
}

View File

@ -16,7 +16,7 @@ import (
// Build information (set by ldflags)
var (
version = "4.2.8"
version = "4.2.15"
buildTime = "unknown"
gitCommit = "unknown"
)