feat: add dry-run mode, GFS retention policies, and notifications
- Add --dry-run/-n flag for backup commands with comprehensive preflight checks:
  - Database connectivity validation
  - Required tools availability check
  - Storage target and permissions verification
  - Backup size estimation
  - Encryption and cloud storage configuration validation
- Implement GFS (Grandfather-Father-Son) retention policies:
  - Daily/Weekly/Monthly/Yearly tier classification
  - Configurable retention counts per tier
  - Custom weekly day and monthly day settings
  - ISO week handling for proper week boundaries
- Add notification system with SMTP and webhook support:
  - SMTP email notifications with TLS/STARTTLS
  - Webhook HTTP notifications with HMAC-SHA256 signing
  - Slack-compatible webhook payload format
  - Event types: backup/restore started/completed/failed, cleanup, verify, PITR
  - Configurable severity levels and retry logic
- Update README.md with documentation for all new features

README.md | 132
@@ -12,10 +12,13 @@ Database backup and restore utility for PostgreSQL, MySQL, and MariaDB.

- Multi-database support: PostgreSQL, MySQL, MariaDB
- Backup modes: Single database, cluster, sample data
- **Dry-run mode**: Preflight checks before backup execution
- AES-256-GCM encryption
- Incremental backups
- Cloud storage: S3, MinIO, B2, Azure Blob, Google Cloud Storage
- Point-in-Time Recovery (PITR) for PostgreSQL
- **GFS retention policies**: Grandfather-Father-Son backup rotation
- **Notifications**: SMTP email and webhook alerts
- Interactive terminal UI
- Cross-platform binaries

@@ -229,6 +232,9 @@ dbbackup restore cluster cluster_backup.tar.gz --confirm

# Cloud backup
dbbackup backup single mydb --cloud s3://my-bucket/backups/

# Dry-run mode (preflight checks without execution)
dbbackup backup single mydb --dry-run
```

## Commands

@@ -266,6 +272,8 @@ dbbackup backup single mydb --cloud s3://my-bucket/backups/
| `--jobs` | Parallel jobs | 8 |
| `--cloud` | Cloud storage URI | - |
| `--encrypt` | Enable encryption | false |
| `--dry-run, -n` | Run preflight checks only | false |
| `--notify` | Enable notifications | false |
| `--debug` | Enable debug logging | false |

## Encryption

@@ -347,6 +355,130 @@ dbbackup cleanup /backups --retention-days 30 --min-backups 5
dbbackup cleanup /backups --retention-days 7 --dry-run
```

### GFS Retention Policy

Grandfather-Father-Son (GFS) retention provides tiered backup rotation:

```bash
# GFS retention: 7 daily, 4 weekly, 12 monthly, 3 yearly
dbbackup cleanup /backups --gfs \
  --gfs-daily 7 \
  --gfs-weekly 4 \
  --gfs-monthly 12 \
  --gfs-yearly 3

# Custom weekly day (Saturday) and monthly day (15th)
dbbackup cleanup /backups --gfs \
  --gfs-weekly-day Saturday \
  --gfs-monthly-day 15

# Preview GFS deletions
dbbackup cleanup /backups --gfs --dry-run
```

**GFS Tiers:**
- **Daily**: Most recent N daily backups
- **Weekly**: Best backup from each week (configurable day)
- **Monthly**: Best backup from each month (configurable day)
- **Yearly**: Best backup from January each year
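
The commit message calls out ISO week handling for proper week boundaries. The `retention` package itself is not part of this diff, so the following is only a hedged sketch of how weekly-tier bucketing might work; the `weekKey` function is illustrative, not the project's actual API:

```go
package main

import (
	"fmt"
	"time"
)

// weekKey buckets a backup timestamp by ISO year and week, so backups
// near a year boundary (e.g., Dec 29 - Jan 4) land in one ISO week
// instead of being split across calendar years.
func weekKey(t time.Time) string {
	year, week := t.ISOWeek() // ISO 8601: weeks start on Monday
	return fmt.Sprintf("%04d-W%02d", year, week)
}

func main() {
	// Monday 2024-12-30 belongs to ISO week 1 of 2025.
	t := time.Date(2024, 12, 30, 3, 0, 0, 0, time.UTC)
	fmt.Println(weekKey(t)) // 2025-W01
}
```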

## Dry-Run Mode

Preflight checks validate backup readiness without execution:

```bash
# Run preflight checks only
dbbackup backup single mydb --dry-run
dbbackup backup cluster -n   # Short flag
```

**Checks performed:**
- Database connectivity (connect + ping)
- Required tools availability (pg_dump, mysqldump, etc.)
- Storage target accessibility and permissions
- Backup size estimation
- Encryption configuration validation
- Cloud storage credentials (if configured)

**Example output:**
```
╔══════════════════════════════════════════════════════════════╗
║              [DRY RUN] Preflight Check Results                ║
╚══════════════════════════════════════════════════════════════╝

  Database: PostgreSQL PostgreSQL 15.4
  Target:   postgres@localhost:5432/mydb

  Checks:
  ─────────────────────────────────────────────────────────────
  ✅ Database Connectivity:  Connected successfully
  ✅ Required Tools:         pg_dump 15.4 available
  ✅ Storage Target:         /backups writable (45 GB free)
  ✅ Size Estimation:        ~2.5 GB required
  ─────────────────────────────────────────────────────────────

  ✅ All checks passed

  Ready to backup. Remove --dry-run to execute.
```

## Notifications

Get alerted on backup events via email or webhooks.

### SMTP Email

```bash
# Environment variables
export NOTIFY_SMTP_HOST="smtp.example.com"
export NOTIFY_SMTP_PORT="587"
export NOTIFY_SMTP_USER="alerts@example.com"
export NOTIFY_SMTP_PASSWORD="secret"
export NOTIFY_SMTP_FROM="dbbackup@example.com"
export NOTIFY_SMTP_TO="admin@example.com,dba@example.com"

# Enable notifications
dbbackup backup single mydb --notify
```
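
The SMTP notifier implementation is not included in this diff. As a rough sketch only, STARTTLS delivery on port 587 can be done with just the standard library; names here are illustrative:

```go
package main

import (
	"fmt"
	"net/smtp"
	"strings"
)

// sendMail is a minimal STARTTLS sketch: smtp.SendMail upgrades the
// connection via STARTTLS automatically when the server advertises it,
// and PlainAuth refuses to send credentials over plaintext.
func sendMail(host string, port int, user, pass, from string, to []string, subject, body string) error {
	addr := fmt.Sprintf("%s:%d", host, port)
	auth := smtp.PlainAuth("", user, pass, host)
	msg := []byte("From: " + from + "\r\n" +
		"To: " + strings.Join(to, ", ") + "\r\n" +
		"Subject: " + subject + "\r\n" +
		"\r\n" + body + "\r\n")
	return smtp.SendMail(addr, auth, from, to, msg)
}
```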

### Webhooks

```bash
# Generic webhook
export NOTIFY_WEBHOOK_URL="https://api.example.com/webhooks/backup"
export NOTIFY_WEBHOOK_SECRET="signing-secret"  # Optional HMAC signing

# Slack webhook
export NOTIFY_WEBHOOK_URL="https://hooks.slack.com/services/T00/B00/XXX"

dbbackup backup single mydb --notify
```

**Webhook payload:**
```json
{
  "version": "1.0",
  "event": {
    "type": "backup_completed",
    "severity": "info",
    "timestamp": "2025-01-15T10:30:00Z",
    "database": "mydb",
    "message": "Backup completed successfully",
    "backup_file": "/backups/mydb_20250115.dump.gz",
    "backup_size": 2684354560,
    "hostname": "db-server-01"
  },
  "subject": "✅ [dbbackup] Backup Completed: mydb"
}
```
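
Payloads can be signed with HMAC-SHA256 using the webhook secret. The exact signature header is not shown in this diff; assuming a hypothetical `X-Signature` header carrying a hex digest of the raw body, a receiver could verify it like this:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
)

// verifySignature checks an HMAC-SHA256 hex digest over the raw request
// body against the shared webhook secret, comparing in constant time.
func verifySignature(body []byte, gotHex, secret string) bool {
	mac := hmac.New(sha256.New, []byte(secret))
	mac.Write(body) // hash.Hash.Write never returns an error
	want := mac.Sum(nil)
	got, err := hex.DecodeString(gotHex)
	if err != nil {
		return false
	}
	return hmac.Equal(got, want)
}
```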

**Supported events:**
- `backup_started`, `backup_completed`, `backup_failed`
- `restore_started`, `restore_completed`, `restore_failed`
- `cleanup_completed`
- `verify_completed`, `verify_failed`
- `pitr_recovery`

## Configuration

### PostgreSQL Authentication

@@ -48,6 +48,7 @@ var (
	encryptBackupFlag bool
	encryptionKeyFile string
	encryptionKeyEnv  string
	backupDryRun      bool
)

var singleCmd = &cobra.Command{
@@ -123,6 +124,11 @@ func init() {
		cmd.Flags().StringVar(&encryptionKeyEnv, "encryption-key-env", "DBBACKUP_ENCRYPTION_KEY", "Environment variable containing encryption key/passphrase")
	}

	// Dry-run flag for all backup commands
	for _, cmd := range []*cobra.Command{clusterCmd, singleCmd, sampleCmd} {
		cmd.Flags().BoolVarP(&backupDryRun, "dry-run", "n", false, "Validate configuration without executing backup")
	}

	// Cloud storage flags for all backup commands
	for _, cmd := range []*cobra.Command{clusterCmd, singleCmd, sampleCmd} {
		cmd.Flags().String("cloud", "", "Cloud storage URI (e.g., s3://bucket/path) - takes precedence over individual flags")

@@ -9,6 +9,7 @@ import (
	"time"

	"dbbackup/internal/backup"
	"dbbackup/internal/checks"
	"dbbackup/internal/config"
	"dbbackup/internal/database"
	"dbbackup/internal/security"
@@ -28,6 +29,11 @@ func runClusterBackup(ctx context.Context) error {
		return fmt.Errorf("configuration error: %w", err)
	}

	// Handle dry-run mode
	if backupDryRun {
		return runBackupPreflight(ctx, "")
	}

	// Check privileges
	privChecker := security.NewPrivilegeChecker(log)
	if err := privChecker.CheckAndWarn(cfg.AllowRoot); err != nil {
@@ -124,6 +130,16 @@ func runSingleBackup(ctx context.Context, databaseName string) error {
	// Update config from environment
	cfg.UpdateFromEnvironment()

	// Validate configuration
	if err := cfg.Validate(); err != nil {
		return fmt.Errorf("configuration error: %w", err)
	}

	// Handle dry-run mode
	if backupDryRun {
		return runBackupPreflight(ctx, databaseName)
	}

	// Get backup type and base backup from command line flags (set via global vars in PreRunE)
	// These are populated by cobra flag binding in cmd/backup.go
	backupType := "full" // Default to full backup if not specified
@@ -148,11 +164,6 @@ func runSingleBackup(ctx context.Context, databaseName string) error {
		}
	}

	// Validate configuration
	if err := cfg.Validate(); err != nil {
		return fmt.Errorf("configuration error: %w", err)
	}

	// Check privileges
	privChecker := security.NewPrivilegeChecker(log)
	if err := privChecker.CheckAndWarn(cfg.AllowRoot); err != nil {
@@ -306,6 +317,11 @@ func runSampleBackup(ctx context.Context, databaseName string) error {
		return fmt.Errorf("configuration error: %w", err)
	}

	// Handle dry-run mode
	if backupDryRun {
		return runBackupPreflight(ctx, databaseName)
	}

	// Check privileges
	privChecker := security.NewPrivilegeChecker(log)
	if err := privChecker.CheckAndWarn(cfg.AllowRoot); err != nil {
@@ -536,3 +552,25 @@ func findLatestClusterBackup(backupDir string) (string, error) {

	return latestPath, nil
}

// runBackupPreflight runs preflight checks without executing the backup
func runBackupPreflight(ctx context.Context, databaseName string) error {
	checker := checks.NewPreflightChecker(cfg, log)
	defer checker.Close()

	result, err := checker.RunAllChecks(ctx, databaseName)
	if err != nil {
		return fmt.Errorf("preflight check error: %w", err)
	}

	// Format and print report
	report := checks.FormatPreflightReport(result, databaseName, true)
	fmt.Print(report)

	// Return a non-nil error so the process exits non-zero when checks fail
	if !result.AllPassed {
		return fmt.Errorf("preflight checks failed")
	}

	return nil
}
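
report.go (added below) also ships a JSON formatter; a hypothetical wiring for machine-readable dry-run output could look like the following. The `jsonOutput` flag is an assumption for illustration and is not part of this commit:

```go
// Hypothetical: emit the preflight report as JSON instead of the
// human-readable table (jsonOutput is illustrative only).
if jsonOutput {
	data, err := checks.FormatPreflightReportJSON(result, databaseName)
	if err != nil {
		return fmt.Errorf("format preflight report: %w", err)
	}
	fmt.Println(string(data))
	if !result.AllPassed {
		return fmt.Errorf("preflight checks failed")
	}
	return nil
}
```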

cmd/cleanup.go | 145
@@ -25,6 +25,13 @@ The retention policy ensures:
  2. At least --min-backups most recent backups are always kept
  3. Both conditions must be met for deletion

GFS (Grandfather-Father-Son) Mode:
  When --gfs flag is enabled, a tiered retention policy is applied:
  - Yearly:  Keep one backup per year on the first eligible day
  - Monthly: Keep one backup per month on the specified day
  - Weekly:  Keep one backup per week on the specified weekday
  - Daily:   Keep most recent daily backups

Examples:
  # Clean up backups older than 30 days (keep at least 5)
  dbbackup cleanup /backups --retention-days 30 --min-backups 5
@@ -35,6 +42,12 @@ Examples:
  # Clean up specific database backups only
  dbbackup cleanup /backups --pattern "mydb_*.dump"

  # GFS retention: 7 daily, 4 weekly, 12 monthly, 3 yearly
  dbbackup cleanup /backups --gfs --gfs-daily 7 --gfs-weekly 4 --gfs-monthly 12 --gfs-yearly 3

  # GFS with custom weekly day (Saturday) and monthly day (15th)
  dbbackup cleanup /backups --gfs --gfs-weekly-day Saturday --gfs-monthly-day 15

  # Aggressive cleanup (keep only 3 most recent)
  dbbackup cleanup /backups --retention-days 1 --min-backups 3`,
	Args: cobra.ExactArgs(1),
@@ -46,6 +59,15 @@ var (
	minBackups     int
	dryRun         bool
	cleanupPattern string

	// GFS retention policy flags
	gfsEnabled    bool
	gfsDaily      int
	gfsWeekly     int
	gfsMonthly    int
	gfsYearly     int
	gfsWeeklyDay  string
	gfsMonthlyDay int
)

func init() {
@@ -54,6 +76,15 @@ func init() {
	cleanupCmd.Flags().IntVar(&minBackups, "min-backups", 5, "Always keep at least this many backups")
	cleanupCmd.Flags().BoolVar(&dryRun, "dry-run", false, "Show what would be deleted without actually deleting")
	cleanupCmd.Flags().StringVar(&cleanupPattern, "pattern", "", "Only clean up backups matching this pattern (e.g., 'mydb_*.dump')")

	// GFS retention policy flags
	cleanupCmd.Flags().BoolVar(&gfsEnabled, "gfs", false, "Enable GFS (Grandfather-Father-Son) retention policy")
	cleanupCmd.Flags().IntVar(&gfsDaily, "gfs-daily", 7, "Number of daily backups to keep (GFS mode)")
	cleanupCmd.Flags().IntVar(&gfsWeekly, "gfs-weekly", 4, "Number of weekly backups to keep (GFS mode)")
	cleanupCmd.Flags().IntVar(&gfsMonthly, "gfs-monthly", 12, "Number of monthly backups to keep (GFS mode)")
	cleanupCmd.Flags().IntVar(&gfsYearly, "gfs-yearly", 3, "Number of yearly backups to keep (GFS mode)")
	cleanupCmd.Flags().StringVar(&gfsWeeklyDay, "gfs-weekly-day", "Sunday", "Day of week for weekly backups (e.g., 'Sunday')")
	cleanupCmd.Flags().IntVar(&gfsMonthlyDay, "gfs-monthly-day", 1, "Day of month for monthly backups (1-28)")
}

func runCleanup(cmd *cobra.Command, args []string) error {
@@ -72,6 +103,11 @@ func runCleanup(cmd *cobra.Command, args []string) error {
		return fmt.Errorf("backup directory does not exist: %s", backupDir)
	}

	// Check if GFS mode is enabled
	if gfsEnabled {
		return runGFSCleanup(backupDir)
	}

	// Create retention policy
	policy := retention.Policy{
		RetentionDays: retentionDays,
@@ -333,3 +369,112 @@ func formatBackupAge(t time.Time) string {
		return fmt.Sprintf("%d years", years)
	}
}

// runGFSCleanup applies the GFS (Grandfather-Father-Son) retention policy
func runGFSCleanup(backupDir string) error {
	// Create GFS policy
	policy := retention.GFSPolicy{
		Enabled:    true,
		Daily:      gfsDaily,
		Weekly:     gfsWeekly,
		Monthly:    gfsMonthly,
		Yearly:     gfsYearly,
		WeeklyDay:  retention.ParseWeekday(gfsWeeklyDay),
		MonthlyDay: gfsMonthlyDay,
		DryRun:     dryRun,
	}

	fmt.Printf("📅 GFS Retention Policy:\n")
	fmt.Printf("   Directory: %s\n", backupDir)
	fmt.Printf("   Daily:     %d backups\n", policy.Daily)
	fmt.Printf("   Weekly:    %d backups (on %s)\n", policy.Weekly, gfsWeeklyDay)
	fmt.Printf("   Monthly:   %d backups (day %d)\n", policy.Monthly, policy.MonthlyDay)
	fmt.Printf("   Yearly:    %d backups\n", policy.Yearly)
	if cleanupPattern != "" {
		fmt.Printf("   Pattern:   %s\n", cleanupPattern)
	}
	if dryRun {
		fmt.Printf("   Mode:      DRY RUN (no files will be deleted)\n")
	}
	fmt.Println()

	// Apply GFS policy
	result, err := retention.ApplyGFSPolicy(backupDir, policy)
	if err != nil {
		return fmt.Errorf("GFS cleanup failed: %w", err)
	}

	// Display tier breakdown
	fmt.Printf("📊 Backup Classification:\n")
	fmt.Printf("   Yearly:  %d\n", result.YearlyKept)
	fmt.Printf("   Monthly: %d\n", result.MonthlyKept)
	fmt.Printf("   Weekly:  %d\n", result.WeeklyKept)
	fmt.Printf("   Daily:   %d\n", result.DailyKept)
	fmt.Printf("   Total kept: %d\n", result.TotalKept)
	fmt.Println()

	// Display deletions
	if len(result.Deleted) > 0 {
		if dryRun {
			fmt.Printf("🔍 Would delete %d backup(s):\n", len(result.Deleted))
		} else {
			fmt.Printf("✅ Deleted %d backup(s):\n", len(result.Deleted))
		}
		for _, file := range result.Deleted {
			fmt.Printf("   - %s\n", filepath.Base(file))
		}
	}

	// Display kept backups (limited display)
	if len(result.Kept) > 0 && len(result.Kept) <= 15 {
		fmt.Printf("\n📦 Kept %d backup(s):\n", len(result.Kept))
		for _, file := range result.Kept {
			// Show tier classification
			info, _ := os.Stat(file)
			if info != nil {
				tiers := retention.ClassifyBackup(info.ModTime(), policy)
				tierStr := formatTiers(tiers)
				fmt.Printf("   - %s [%s]\n", filepath.Base(file), tierStr)
			} else {
				fmt.Printf("   - %s\n", filepath.Base(file))
			}
		}
	} else if len(result.Kept) > 15 {
		fmt.Printf("\n📦 Kept %d backup(s)\n", len(result.Kept))
	}

	if !dryRun && result.SpaceFreed > 0 {
		fmt.Printf("\n💾 Space freed: %s\n", metadata.FormatSize(result.SpaceFreed))
	}

	if len(result.Errors) > 0 {
		fmt.Printf("\n⚠️  Errors:\n")
		for _, err := range result.Errors {
			fmt.Printf("   - %v\n", err)
		}
	}

	fmt.Println(strings.Repeat("─", 50))

	if dryRun {
		fmt.Println("✅ GFS dry run completed (no files were deleted)")
	} else if len(result.Deleted) > 0 {
		fmt.Println("✅ GFS cleanup completed successfully")
	} else {
		fmt.Println("ℹ️  No backups eligible for deletion under GFS policy")
	}

	return nil
}

// formatTiers formats a list of tiers as a comma-separated string
func formatTiers(tiers []retention.Tier) string {
	if len(tiers) == 0 {
		return "none"
	}
	parts := make([]string, len(tiers))
	for i, t := range tiers {
		parts[i] = t.String()
	}
	return strings.Join(parts, ",")
}

internal/checks/preflight.go | 545 (new file)
@@ -0,0 +1,545 @@
package checks

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"

	"dbbackup/internal/config"
	"dbbackup/internal/database"
	"dbbackup/internal/logger"
)

// PreflightCheck represents a single preflight check result
type PreflightCheck struct {
	Name    string
	Status  CheckStatus
	Message string
	Details string
}

// CheckStatus represents the status of a preflight check
type CheckStatus int

const (
	StatusPassed CheckStatus = iota
	StatusWarning
	StatusFailed
	StatusSkipped
)

func (s CheckStatus) String() string {
	switch s {
	case StatusPassed:
		return "PASSED"
	case StatusWarning:
		return "WARNING"
	case StatusFailed:
		return "FAILED"
	case StatusSkipped:
		return "SKIPPED"
	default:
		return "UNKNOWN"
	}
}

func (s CheckStatus) Icon() string {
	switch s {
	case StatusPassed:
		return "✓"
	case StatusWarning:
		return "⚠"
	case StatusFailed:
		return "✗"
	case StatusSkipped:
		return "○"
	default:
		return "?"
	}
}

// PreflightResult contains all preflight check results
type PreflightResult struct {
	Checks        []PreflightCheck
	AllPassed     bool
	HasWarnings   bool
	FailureCount  int
	WarningCount  int
	DatabaseInfo  *DatabaseInfo
	StorageInfo   *StorageInfo
	EstimatedSize uint64
}

// DatabaseInfo contains database connection details
type DatabaseInfo struct {
	Type    string
	Version string
	Host    string
	Port    int
	User    string
}

// StorageInfo contains storage target details
type StorageInfo struct {
	Type           string // "local" or "cloud"
	Path           string
	AvailableBytes uint64
	TotalBytes     uint64
}

// PreflightChecker performs preflight checks before backup operations
type PreflightChecker struct {
	cfg *config.Config
	log logger.Logger
	db  database.Database
}

// NewPreflightChecker creates a new preflight checker
func NewPreflightChecker(cfg *config.Config, log logger.Logger) *PreflightChecker {
	return &PreflightChecker{
		cfg: cfg,
		log: log,
	}
}

// RunAllChecks runs all preflight checks for a backup operation
func (p *PreflightChecker) RunAllChecks(ctx context.Context, dbName string) (*PreflightResult, error) {
	result := &PreflightResult{
		Checks:    make([]PreflightCheck, 0),
		AllPassed: true,
	}

	// 1. Database connectivity check
	dbCheck := p.checkDatabaseConnectivity(ctx)
	result.Checks = append(result.Checks, dbCheck)
	if dbCheck.Status == StatusFailed {
		result.AllPassed = false
		result.FailureCount++
	}

	// Extract database info if connection succeeded
	if dbCheck.Status == StatusPassed && p.db != nil {
		version, _ := p.db.GetVersion(ctx)
		result.DatabaseInfo = &DatabaseInfo{
			Type:    p.cfg.DisplayDatabaseType(),
			Version: version,
			Host:    p.cfg.Host,
			Port:    p.cfg.Port,
			User:    p.cfg.User,
		}
	}

	// 2. Required tools check
	toolsCheck := p.checkRequiredTools()
	result.Checks = append(result.Checks, toolsCheck)
	if toolsCheck.Status == StatusFailed {
		result.AllPassed = false
		result.FailureCount++
	}

	// 3. Storage target check
	storageCheck := p.checkStorageTarget()
	result.Checks = append(result.Checks, storageCheck)
	if storageCheck.Status == StatusFailed {
		result.AllPassed = false
		result.FailureCount++
	} else if storageCheck.Status == StatusWarning {
		result.HasWarnings = true
		result.WarningCount++
	}

	// Extract storage info
	diskCheck := CheckDiskSpace(p.cfg.BackupDir)
	result.StorageInfo = &StorageInfo{
		Type:           "local",
		Path:           p.cfg.BackupDir,
		AvailableBytes: diskCheck.AvailableBytes,
		TotalBytes:     diskCheck.TotalBytes,
	}

	// 4. Backup size estimation
	sizeCheck := p.estimateBackupSize(ctx, dbName)
	result.Checks = append(result.Checks, sizeCheck)
	if sizeCheck.Status == StatusFailed {
		result.AllPassed = false
		result.FailureCount++
	} else if sizeCheck.Status == StatusWarning {
		result.HasWarnings = true
		result.WarningCount++
	}

	// 5. Encryption configuration check (if enabled)
	if p.cfg.CloudEnabled || os.Getenv("DBBACKUP_ENCRYPTION_KEY") != "" {
		encCheck := p.checkEncryptionConfig()
		result.Checks = append(result.Checks, encCheck)
		if encCheck.Status == StatusFailed {
			result.AllPassed = false
			result.FailureCount++
		}
	}

	// 6. Cloud storage check (if enabled)
	if p.cfg.CloudEnabled {
		cloudCheck := p.checkCloudStorage(ctx)
		result.Checks = append(result.Checks, cloudCheck)
		if cloudCheck.Status == StatusFailed {
			result.AllPassed = false
			result.FailureCount++
		}

		// Update storage info
		result.StorageInfo.Type = "cloud"
		result.StorageInfo.Path = fmt.Sprintf("%s://%s/%s", p.cfg.CloudProvider, p.cfg.CloudBucket, p.cfg.CloudPrefix)
	}

	// 7. Permissions check
	permCheck := p.checkPermissions()
	result.Checks = append(result.Checks, permCheck)
	if permCheck.Status == StatusFailed {
		result.AllPassed = false
		result.FailureCount++
	}

	return result, nil
}

// checkDatabaseConnectivity verifies the database connection
func (p *PreflightChecker) checkDatabaseConnectivity(ctx context.Context) PreflightCheck {
	check := PreflightCheck{
		Name: "Database Connection",
	}

	// Create database connection
	db, err := database.New(p.cfg, p.log)
	if err != nil {
		check.Status = StatusFailed
		check.Message = "Failed to create database instance"
		check.Details = err.Error()
		return check
	}

	// Connect
	if err := db.Connect(ctx); err != nil {
		check.Status = StatusFailed
		check.Message = "Connection failed"
		check.Details = fmt.Sprintf("Cannot connect to %s@%s:%d - %s",
			p.cfg.User, p.cfg.Host, p.cfg.Port, err.Error())
		return check
	}

	// Ping
	if err := db.Ping(ctx); err != nil {
		check.Status = StatusFailed
		check.Message = "Ping failed"
		check.Details = err.Error()
		db.Close()
		return check
	}

	// Get version
	version, err := db.GetVersion(ctx)
	if err != nil {
		version = "unknown"
	}

	p.db = db
	check.Status = StatusPassed
	check.Message = fmt.Sprintf("OK (%s %s)", p.cfg.DisplayDatabaseType(), version)
	check.Details = fmt.Sprintf("Connected to %s@%s:%d", p.cfg.User, p.cfg.Host, p.cfg.Port)

	return check
}

// checkRequiredTools verifies backup tools are available
func (p *PreflightChecker) checkRequiredTools() PreflightCheck {
	check := PreflightCheck{
		Name: "Required Tools",
	}

	var requiredTools []string
	if p.cfg.IsPostgreSQL() {
		requiredTools = []string{"pg_dump", "pg_dumpall"}
	} else if p.cfg.IsMySQL() {
		requiredTools = []string{"mysqldump"}
	}

	var found []string
	var missing []string
	var versions []string

	for _, tool := range requiredTools {
		if _, err := exec.LookPath(tool); err != nil {
			missing = append(missing, tool)
		} else {
			found = append(found, tool)
			// Try to get version
			if version := getToolVersion(tool); version != "" {
				versions = append(versions, fmt.Sprintf("%s %s", tool, version))
			}
		}
	}

	if len(missing) > 0 {
		check.Status = StatusFailed
		check.Message = fmt.Sprintf("Missing tools: %s", strings.Join(missing, ", "))
		check.Details = "Install required database tools and ensure they're in PATH"
		return check
	}

	check.Status = StatusPassed
	check.Message = fmt.Sprintf("%s found", strings.Join(found, ", "))
	if len(versions) > 0 {
		check.Details = strings.Join(versions, "; ")
	}

	return check
}

// checkStorageTarget verifies the backup directory is writable
func (p *PreflightChecker) checkStorageTarget() PreflightCheck {
	check := PreflightCheck{
		Name: "Storage Target",
	}

	backupDir := p.cfg.BackupDir

	// Check if directory exists
	info, err := os.Stat(backupDir)
	if os.IsNotExist(err) {
		// Try to create it
		if err := os.MkdirAll(backupDir, 0755); err != nil {
			check.Status = StatusFailed
			check.Message = "Cannot create backup directory"
			check.Details = err.Error()
			return check
		}
	} else if err != nil {
		check.Status = StatusFailed
		check.Message = "Cannot access backup directory"
		check.Details = err.Error()
		return check
	} else if !info.IsDir() {
		check.Status = StatusFailed
		check.Message = "Backup path is not a directory"
		check.Details = backupDir
		return check
	}

	// Check disk space
	diskCheck := CheckDiskSpace(backupDir)

	if diskCheck.Critical {
		check.Status = StatusFailed
		check.Message = "Insufficient disk space"
		check.Details = fmt.Sprintf("%s available (%.1f%% used)",
			formatBytes(diskCheck.AvailableBytes), diskCheck.UsedPercent)
		return check
	}

	if diskCheck.Warning {
		check.Status = StatusWarning
		check.Message = fmt.Sprintf("%s (%s available, low space warning)",
			backupDir, formatBytes(diskCheck.AvailableBytes))
		check.Details = fmt.Sprintf("%.1f%% disk usage", diskCheck.UsedPercent)
		return check
	}

	check.Status = StatusPassed
	check.Message = fmt.Sprintf("%s (%s available)", backupDir, formatBytes(diskCheck.AvailableBytes))
	check.Details = fmt.Sprintf("%.1f%% used", diskCheck.UsedPercent)

	return check
}

// estimateBackupSize estimates the backup size
func (p *PreflightChecker) estimateBackupSize(ctx context.Context, dbName string) PreflightCheck {
	check := PreflightCheck{
		Name: "Estimated Backup Size",
	}

	if p.db == nil {
		check.Status = StatusSkipped
		check.Message = "Skipped (no database connection)"
		return check
	}

	// Get database size
	var dbSize int64
	var err error

	if dbName != "" {
		dbSize, err = p.db.GetDatabaseSize(ctx, dbName)
	} else {
		// For cluster backup, we'd need to sum all databases;
		// for now, just use the default database.
		dbSize, err = p.db.GetDatabaseSize(ctx, p.cfg.Database)
	}

	if err != nil {
		check.Status = StatusSkipped
		check.Message = "Could not estimate size"
		check.Details = err.Error()
		return check
	}

	// Estimate compressed size
	estimatedSize := EstimateBackupSize(uint64(dbSize), p.cfg.CompressionLevel)

	// Check if we have enough space
	diskCheck := CheckDiskSpace(p.cfg.BackupDir)
	if diskCheck.AvailableBytes < estimatedSize*2 { // 2x safety buffer
		check.Status = StatusWarning
		check.Message = fmt.Sprintf("~%s (may not fit)", formatBytes(estimatedSize))
		check.Details = fmt.Sprintf("Only %s available, need ~%s with safety margin",
			formatBytes(diskCheck.AvailableBytes), formatBytes(estimatedSize*2))
		return check
	}

	check.Status = StatusPassed
	check.Message = fmt.Sprintf("~%s (from %s database)",
		formatBytes(estimatedSize), formatBytes(uint64(dbSize)))
	check.Details = fmt.Sprintf("Compression level %d", p.cfg.CompressionLevel)

	return check
}
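
`EstimateBackupSize` and `CheckDiskSpace` live elsewhere in the `checks` package and are not part of this diff. As a hedged sketch only, a ratio-based estimator might look like this; the ratios are illustrative guesses, not the project's actual numbers:

```go
// estimateBackupSizeSketch (illustrative only): scale the raw database
// size by a guessed compression ratio for the configured level.
func estimateBackupSizeSketch(rawBytes uint64, compressionLevel int) uint64 {
	// Plain SQL dumps often compress well; assume roughly 70% reduction
	// at higher levels and 50% at lower ones. These are assumptions.
	ratio := 0.5
	if compressionLevel >= 6 {
		ratio = 0.3
	}
	return uint64(float64(rawBytes) * ratio)
}
```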

// checkEncryptionConfig verifies encryption setup
func (p *PreflightChecker) checkEncryptionConfig() PreflightCheck {
	check := PreflightCheck{
		Name: "Encryption",
	}

	// Check for encryption key
	key := os.Getenv("DBBACKUP_ENCRYPTION_KEY")
	if key == "" {
		check.Status = StatusSkipped
		check.Message = "Not configured"
		check.Details = "Set DBBACKUP_ENCRYPTION_KEY to enable encryption"
		return check
	}

	// Validate key length (should be at least 16 characters for AES)
	if len(key) < 16 {
		check.Status = StatusFailed
		check.Message = "Encryption key too short"
		check.Details = "Key must be at least 16 characters (32 recommended for AES-256)"
		return check
	}

	check.Status = StatusPassed
	check.Message = "AES-256 configured"
	check.Details = fmt.Sprintf("Key length: %d characters", len(key))

	return check
}

// checkCloudStorage verifies cloud storage access
func (p *PreflightChecker) checkCloudStorage(ctx context.Context) PreflightCheck {
	check := PreflightCheck{
		Name: "Cloud Storage",
	}

	if !p.cfg.CloudEnabled {
		check.Status = StatusSkipped
		check.Message = "Not configured"
		return check
	}

	// Check required cloud configuration
	if p.cfg.CloudBucket == "" {
		check.Status = StatusFailed
		check.Message = "No bucket configured"
		check.Details = "Set --cloud-bucket or use --cloud URI"
		return check
	}

	if p.cfg.CloudProvider == "" {
		check.Status = StatusFailed
		check.Message = "No provider configured"
		check.Details = "Set --cloud-provider (s3, minio, azure, gcs)"
		return check
	}

	// Note: actually testing cloud connectivity would require initializing
	// the cloud backend; for now, just validate that configuration is present.
	check.Status = StatusPassed
	check.Message = fmt.Sprintf("%s://%s configured", p.cfg.CloudProvider, p.cfg.CloudBucket)
	if p.cfg.CloudPrefix != "" {
		check.Details = fmt.Sprintf("Prefix: %s", p.cfg.CloudPrefix)
	}

	return check
}

// checkPermissions verifies write permissions
func (p *PreflightChecker) checkPermissions() PreflightCheck {
	check := PreflightCheck{
		Name: "Write Permissions",
	}

	// Try to create a test file
	testFile := filepath.Join(p.cfg.BackupDir, ".dbbackup_preflight_test")
	f, err := os.Create(testFile)
	if err != nil {
		check.Status = StatusFailed
		check.Message = "Cannot write to backup directory"
		check.Details = err.Error()
		return check
	}
	f.Close()
	os.Remove(testFile)

	check.Status = StatusPassed
	check.Message = "OK"
	check.Details = fmt.Sprintf("Can write to %s", p.cfg.BackupDir)

	return check
}

// Close closes any resources (like the database connection)
func (p *PreflightChecker) Close() error {
	if p.db != nil {
		return p.db.Close()
	}
	return nil
}

// getToolVersion tries to get the version of a command-line tool
func getToolVersion(tool string) string {
	var cmd *exec.Cmd

	switch tool {
	case "pg_dump", "pg_dumpall", "pg_restore", "psql",
		"mysqldump", "mysql":
		cmd = exec.Command(tool, "--version")
	default:
		return ""
	}

	output, err := cmd.Output()
	if err != nil {
		return ""
	}

	// Extract the version number from output, which is usually
	// "tool (PostgreSQL) X.Y.Z" or "tool Ver X.Y.Z".
	line := strings.TrimSpace(string(output))
	parts := strings.Fields(line)
	if len(parts) >= 3 {
		// Return the first field that starts with a digit
		for _, part := range parts {
			if len(part) > 0 && part[0] >= '0' && part[0] <= '9' {
				return part
			}
		}
	}

	return ""
}

internal/checks/preflight_test.go | 134 (new file)
@@ -0,0 +1,134 @@
package checks

import (
	"testing"
)

func TestPreflightResult(t *testing.T) {
	result := &PreflightResult{
		Checks:    []PreflightCheck{},
		AllPassed: true,
		DatabaseInfo: &DatabaseInfo{
			Type:    "postgres",
			Version: "PostgreSQL 15.0",
			Host:    "localhost",
			Port:    5432,
			User:    "postgres",
		},
		StorageInfo: &StorageInfo{
			Type:           "local",
			Path:           "/backups",
			AvailableBytes: 10 * 1024 * 1024 * 1024,
			TotalBytes:     100 * 1024 * 1024 * 1024,
		},
		EstimatedSize: 1 * 1024 * 1024 * 1024,
	}

	if !result.AllPassed {
		t.Error("Result should be AllPassed")
	}

	if result.DatabaseInfo.Type != "postgres" {
		t.Errorf("DatabaseInfo.Type = %q, expected postgres", result.DatabaseInfo.Type)
	}

	if result.StorageInfo.Path != "/backups" {
		t.Errorf("StorageInfo.Path = %q, expected /backups", result.StorageInfo.Path)
	}
}

func TestPreflightCheck(t *testing.T) {
	check := PreflightCheck{
		Name:    "Database Connectivity",
		Status:  StatusPassed,
		Message: "Connected successfully",
		Details: "PostgreSQL 15.0",
	}

	if check.Status != StatusPassed {
		t.Error("Check status should be passed")
	}

	if check.Name != "Database Connectivity" {
		t.Errorf("Check name = %q", check.Name)
	}
}

func TestCheckStatusString(t *testing.T) {
	tests := []struct {
		status   CheckStatus
		expected string
	}{
		{StatusPassed, "PASSED"},
		{StatusFailed, "FAILED"},
		{StatusWarning, "WARNING"},
		{StatusSkipped, "SKIPPED"},
	}

	for _, tc := range tests {
		result := tc.status.String()
		if result != tc.expected {
			t.Errorf("Status.String() = %q, expected %q", result, tc.expected)
		}
	}
}

func TestFormatPreflightReport(t *testing.T) {
	result := &PreflightResult{
		Checks: []PreflightCheck{
			{Name: "Test Check", Status: StatusPassed, Message: "OK"},
		},
		AllPassed: true,
		DatabaseInfo: &DatabaseInfo{
			Type:    "postgres",
			Version: "PostgreSQL 15.0",
			Host:    "localhost",
			Port:    5432,
		},
		StorageInfo: &StorageInfo{
			Type:           "local",
			Path:           "/backups",
			AvailableBytes: 10 * 1024 * 1024 * 1024,
		},
	}

	report := FormatPreflightReport(result, "testdb", false)
	if report == "" {
		t.Error("Report should not be empty")
	}
}

func TestFormatPreflightReportPlain(t *testing.T) {
	result := &PreflightResult{
		Checks: []PreflightCheck{
			{Name: "Test Check", Status: StatusFailed, Message: "Connection failed"},
		},
		AllPassed:    false,
		FailureCount: 1,
	}

	report := FormatPreflightReportPlain(result, "testdb")
	if report == "" {
		t.Error("Report should not be empty")
	}
}

func TestFormatPreflightReportJSON(t *testing.T) {
	result := &PreflightResult{
		Checks:    []PreflightCheck{},
		AllPassed: true,
	}

	report, err := FormatPreflightReportJSON(result, "testdb")
	if err != nil {
		t.Errorf("FormatPreflightReportJSON() error = %v", err)
	}

	if len(report) == 0 {
		t.Error("Report should not be empty")
	}

	if report[0] != '{' {
		t.Error("Report should start with '{'")
	}
}

internal/checks/report.go | 184 (new file)
@@ -0,0 +1,184 @@
package checks

import (
	"encoding/json"
	"fmt"
	"strings"
)

// FormatPreflightReport formats preflight results for display
func FormatPreflightReport(result *PreflightResult, dbName string, verbose bool) string {
	var sb strings.Builder

	sb.WriteString("\n")
	sb.WriteString("╔══════════════════════════════════════════════════════════════╗\n")
	sb.WriteString("║              [DRY RUN] Preflight Check Results                ║\n")
	sb.WriteString("╚══════════════════════════════════════════════════════════════╝\n")
	sb.WriteString("\n")

	// Database info
	if result.DatabaseInfo != nil {
		sb.WriteString(fmt.Sprintf("  Database: %s %s\n", result.DatabaseInfo.Type, result.DatabaseInfo.Version))
		sb.WriteString(fmt.Sprintf("  Target:   %s@%s:%d",
			result.DatabaseInfo.User, result.DatabaseInfo.Host, result.DatabaseInfo.Port))
		if dbName != "" {
			sb.WriteString(fmt.Sprintf("/%s", dbName))
		}
		sb.WriteString("\n\n")
	}

	// Check results
	sb.WriteString("  Checks:\n")
	sb.WriteString("  ─────────────────────────────────────────────────────────────\n")

	for _, check := range result.Checks {
		icon := check.Status.Icon()
		color := getStatusColor(check.Status)
		reset := "\033[0m"

		sb.WriteString(fmt.Sprintf("  %s%s%s %-25s %s\n",
			color, icon, reset, check.Name+":", check.Message))

		if verbose && check.Details != "" {
			sb.WriteString(fmt.Sprintf("      └─ %s\n", check.Details))
		}
	}

	sb.WriteString("  ─────────────────────────────────────────────────────────────\n")
	sb.WriteString("\n")

	// Summary
	if result.AllPassed {
		if result.HasWarnings {
			sb.WriteString("  ⚠️  All checks passed with warnings\n")
		} else {
			sb.WriteString("  ✅ All checks passed\n")
		}
		sb.WriteString("\n")
		sb.WriteString("  Ready to backup. Remove --dry-run to execute.\n")
	} else {
		sb.WriteString(fmt.Sprintf("  ❌ %d check(s) failed\n", result.FailureCount))
		sb.WriteString("\n")
		sb.WriteString("  Fix the issues above before running backup.\n")
	}

	sb.WriteString("\n")

	return sb.String()
}

// FormatPreflightReportPlain formats preflight results without colors
func FormatPreflightReportPlain(result *PreflightResult, dbName string) string {
	var sb strings.Builder

	sb.WriteString("\n")
	sb.WriteString("[DRY RUN] Preflight Check Results\n")
	sb.WriteString("==================================\n")
	sb.WriteString("\n")

	// Database info
	if result.DatabaseInfo != nil {
		sb.WriteString(fmt.Sprintf("Database: %s %s\n", result.DatabaseInfo.Type, result.DatabaseInfo.Version))
		sb.WriteString(fmt.Sprintf("Target:   %s@%s:%d",
			result.DatabaseInfo.User, result.DatabaseInfo.Host, result.DatabaseInfo.Port))
		if dbName != "" {
			sb.WriteString(fmt.Sprintf("/%s", dbName))
		}
		sb.WriteString("\n\n")
	}

	// Check results
	sb.WriteString("Checks:\n")

	for _, check := range result.Checks {
		status := fmt.Sprintf("[%s]", check.Status.String())
		sb.WriteString(fmt.Sprintf("  %-10s %-25s %s\n", status, check.Name+":", check.Message))
		if check.Details != "" {
			sb.WriteString(fmt.Sprintf("             └─ %s\n", check.Details))
		}
	}

	sb.WriteString("\n")

	// Summary
	if result.AllPassed {
		sb.WriteString("Result: READY\n")
		sb.WriteString("Remove --dry-run to execute backup.\n")
	} else {
		sb.WriteString(fmt.Sprintf("Result: FAILED (%d issues)\n", result.FailureCount))
		sb.WriteString("Fix the issues above before running backup.\n")
	}

	sb.WriteString("\n")

	return sb.String()
}

// FormatPreflightReportJSON formats preflight results as JSON
func FormatPreflightReportJSON(result *PreflightResult, dbName string) ([]byte, error) {
	type CheckJSON struct {
		Name    string `json:"name"`
		Status  string `json:"status"`
		Message string `json:"message"`
		Details string `json:"details,omitempty"`
	}

	type ReportJSON struct {
		DryRun       bool          `json:"dry_run"`
		AllPassed    bool          `json:"all_passed"`
		HasWarnings  bool          `json:"has_warnings"`
		FailureCount int           `json:"failure_count"`
		WarningCount int           `json:"warning_count"`
		Database     *DatabaseInfo `json:"database,omitempty"`
		Storage      *StorageInfo  `json:"storage,omitempty"`
		TargetDB     string        `json:"target_database,omitempty"`
		Checks       []CheckJSON   `json:"checks"`
	}

	report := ReportJSON{
		DryRun:       true,
		AllPassed:    result.AllPassed,
		HasWarnings:  result.HasWarnings,
		FailureCount: result.FailureCount,
		WarningCount: result.WarningCount,
		Database:     result.DatabaseInfo,
		Storage:      result.StorageInfo,
		TargetDB:     dbName,
		Checks:       make([]CheckJSON, len(result.Checks)),
	}

	for i, check := range result.Checks {
		report.Checks[i] = CheckJSON{
			Name:    check.Name,
			Status:  check.Status.String(),
			Message: check.Message,
			Details: check.Details,
		}
	}

	// Use standard library JSON encoding with indentation
	return json.MarshalIndent(report, "", "  ")
}

// getStatusColor returns the ANSI color code for a status
func getStatusColor(status CheckStatus) string {
	switch status {
	case StatusPassed:
		return "\033[32m" // Green
	case StatusWarning:
		return "\033[33m" // Yellow
	case StatusFailed:
		return "\033[31m" // Red
	case StatusSkipped:
		return "\033[90m" // Gray
	default:
		return ""
	}
}
@@ -76,6 +76,15 @@ type Config struct {
	AllowRoot      bool // Allow running as root/Administrator
	CheckResources bool // Check resource limits before operations

	// GFS (Grandfather-Father-Son) retention options
	GFSEnabled    bool   // Enable GFS retention policy
	GFSDaily      int    // Number of daily backups to keep
	GFSWeekly     int    // Number of weekly backups to keep
	GFSMonthly    int    // Number of monthly backups to keep
	GFSYearly     int    // Number of yearly backups to keep
	GFSWeeklyDay  string // Day for weekly backup (e.g., "Sunday")
	GFSMonthlyDay int    // Day of month for monthly backup (1-28)

	// PITR (Point-in-Time Recovery) options
	PITREnabled   bool   // Enable WAL archiving for PITR
	WALArchiveDir string // Directory to store WAL archives
@@ -102,6 +111,22 @@ type Config struct {
	CloudSecretKey  string // Secret key / Account key (Azure)
	CloudPrefix     string // Key/object prefix
	CloudAutoUpload bool   // Automatically upload after backup

	// Notification options
	NotifyEnabled       bool     // Enable notifications
	NotifyOnSuccess     bool     // Send notifications on successful operations
	NotifyOnFailure     bool     // Send notifications on failed operations
	NotifySMTPHost      string   // SMTP server host
	NotifySMTPPort      int      // SMTP server port
	NotifySMTPUser      string   // SMTP username
	NotifySMTPPassword  string   // SMTP password
	NotifySMTPFrom      string   // From address for emails
	NotifySMTPTo        []string // To addresses for emails
	NotifySMTPTLS       bool     // Use direct TLS (port 465)
	NotifySMTPStartTLS  bool     // Use STARTTLS (port 587)
	NotifyWebhookURL    string   // Webhook URL
	NotifyWebhookMethod string   // Webhook HTTP method (POST/GET)
	NotifyWebhookSecret string   // Webhook signing secret
}

// New creates a new configuration with default values
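
How these config fields feed the notify package is not shown in this diff. A plausible wiring sketch, assuming the `notify.Config` defined later in this commit (the `buildNotifyConfig` helper is illustrative, not the project's actual glue code):

```go
package glue

import (
	"dbbackup/internal/config"
	"dbbackup/internal/notify"
)

// buildNotifyConfig maps the main Config onto notify.Config; enabling a
// backend only when notifications are on and that backend is configured.
func buildNotifyConfig(cfg *config.Config) notify.Config {
	return notify.Config{
		SMTPEnabled:    cfg.NotifyEnabled && cfg.NotifySMTPHost != "",
		SMTPHost:       cfg.NotifySMTPHost,
		SMTPPort:       cfg.NotifySMTPPort,
		SMTPUser:       cfg.NotifySMTPUser,
		SMTPPassword:   cfg.NotifySMTPPassword,
		SMTPFrom:       cfg.NotifySMTPFrom,
		SMTPTo:         cfg.NotifySMTPTo,
		SMTPTLS:        cfg.NotifySMTPTLS,
		SMTPStartTLS:   cfg.NotifySMTPStartTLS,
		WebhookEnabled: cfg.NotifyEnabled && cfg.NotifyWebhookURL != "",
		WebhookURL:     cfg.NotifyWebhookURL,
		WebhookMethod:  cfg.NotifyWebhookMethod,
		WebhookSecret:  cfg.NotifyWebhookSecret,
		OnSuccess:      cfg.NotifyOnSuccess,
		OnFailure:      cfg.NotifyOnFailure,
	}
}
```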

internal/notify/manager.go | 256 (new file)
@@ -0,0 +1,256 @@
// Package notify - notification manager for fan-out to multiple backends
package notify

import (
	"context"
	"fmt"
	"os"
	"sync"
)

// Manager manages multiple notification backends
type Manager struct {
	config    Config
	notifiers []Notifier
	mu        sync.RWMutex
	hostname  string
}

// NewManager creates a new notification manager with configured backends
func NewManager(config Config) *Manager {
	hostname, _ := os.Hostname()

	m := &Manager{
		config:    config,
		notifiers: make([]Notifier, 0),
		hostname:  hostname,
	}

	// Initialize enabled backends
	if config.SMTPEnabled {
		m.notifiers = append(m.notifiers, NewSMTPNotifier(config))
	}

	if config.WebhookEnabled {
		m.notifiers = append(m.notifiers, NewWebhookNotifier(config))
	}

	return m
}

// AddNotifier adds a custom notifier to the manager
func (m *Manager) AddNotifier(n Notifier) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.notifiers = append(m.notifiers, n)
}
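
`AddNotifier` accepts any implementation of the `Notifier` interface defined in notify.go, so callers can plug in custom sinks. A minimal hypothetical example (the `logNotifier` type is illustrative, not part of this commit):

```go
package notify

import (
	"context"
	"fmt"
)

// logNotifier is a hypothetical Notifier that just prints events.
type logNotifier struct{}

func (l *logNotifier) Name() string    { return "log" }
func (l *logNotifier) IsEnabled() bool { return true }

func (l *logNotifier) Send(ctx context.Context, event *Event) error {
	fmt.Printf("[%s] %s: %s\n", event.Severity, event.Type, event.Message)
	return nil
}

// Usage: mgr := NewManager(cfg); mgr.AddNotifier(&logNotifier{})
```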

// Notify sends an event to all enabled notification backends.
// It is non-blocking and runs in a goroutine.
func (m *Manager) Notify(event *Event) {
	go m.NotifySync(context.Background(), event)
}

// NotifySync sends an event synchronously to all enabled backends
func (m *Manager) NotifySync(ctx context.Context, event *Event) error {
	// Add hostname if not set
	if event.Hostname == "" && m.hostname != "" {
		event.Hostname = m.hostname
	}

	// Check if we should send based on event type/severity
	if !m.shouldSend(event) {
		return nil
	}

	m.mu.RLock()
	notifiers := make([]Notifier, len(m.notifiers))
	copy(notifiers, m.notifiers)
	m.mu.RUnlock()

	var (
		errMu  sync.Mutex // guards errors: Send calls run concurrently
		errors []error
		wg     sync.WaitGroup
	)

	for _, n := range notifiers {
		if !n.IsEnabled() {
			continue
		}

		wg.Add(1)
		go func(notifier Notifier) {
			defer wg.Done()
			if err := notifier.Send(ctx, event); err != nil {
				errMu.Lock()
				errors = append(errors, fmt.Errorf("%s: %w", notifier.Name(), err))
				errMu.Unlock()
			}
		}(n)
	}

	wg.Wait()

	if len(errors) > 0 {
		return fmt.Errorf("notification errors: %v", errors)
	}
	return nil
}

// shouldSend determines if an event should be sent based on configuration
func (m *Manager) shouldSend(event *Event) bool {
	// Check minimum severity
	if !m.meetsSeverity(event.Severity) {
		return false
	}

	// Check event type filters
	switch event.Type {
	case EventBackupCompleted, EventRestoreCompleted, EventCleanupCompleted, EventVerifyCompleted:
		return m.config.OnSuccess
	case EventBackupFailed, EventRestoreFailed, EventVerifyFailed:
		return m.config.OnFailure
	case EventBackupStarted, EventRestoreStarted:
		return m.config.OnSuccess
	default:
		return true
	}
}

// meetsSeverity checks if event severity meets the minimum threshold
func (m *Manager) meetsSeverity(severity Severity) bool {
	severityOrder := map[Severity]int{
		SeverityInfo:     0,
		SeverityWarning:  1,
		SeverityError:    2,
		SeverityCritical: 3,
	}

	eventLevel, ok := severityOrder[severity]
	if !ok {
		return true
	}

	minLevel, ok := severityOrder[m.config.MinSeverity]
	if !ok {
		return true
	}

	return eventLevel >= minLevel
}
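
No tests for the notify package appear in this diff; a small table-driven sketch of how `meetsSeverity` could be exercised (the test file itself is an assumption):

```go
package notify

import "testing"

func TestMeetsSeverity(t *testing.T) {
	m := &Manager{config: Config{MinSeverity: SeverityWarning}}

	tests := []struct {
		severity Severity
		want     bool
	}{
		{SeverityInfo, false},    // below the warning threshold
		{SeverityWarning, true},  // at the threshold
		{SeverityCritical, true}, // above the threshold
	}

	for _, tc := range tests {
		if got := m.meetsSeverity(tc.severity); got != tc.want {
			t.Errorf("meetsSeverity(%s) = %v, want %v", tc.severity, got, tc.want)
		}
	}
}
```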

// HasEnabledNotifiers returns true if at least one notifier is enabled
func (m *Manager) HasEnabledNotifiers() bool {
	m.mu.RLock()
	defer m.mu.RUnlock()

	for _, n := range m.notifiers {
		if n.IsEnabled() {
			return true
		}
	}
	return false
}

// EnabledNotifiers returns the names of all enabled notifiers
func (m *Manager) EnabledNotifiers() []string {
	m.mu.RLock()
	defer m.mu.RUnlock()

	names := make([]string, 0)
	for _, n := range m.notifiers {
		if n.IsEnabled() {
			names = append(names, n.Name())
		}
	}
	return names
}

// BackupStarted sends a backup started notification
func (m *Manager) BackupStarted(database string) {
	event := NewEvent(EventBackupStarted, SeverityInfo, fmt.Sprintf("Starting backup of database '%s'", database)).
		WithDatabase(database)
	m.Notify(event)
}

// BackupCompleted sends a backup completed notification
func (m *Manager) BackupCompleted(database, backupFile string, size int64, duration interface{}) {
	event := NewEvent(EventBackupCompleted, SeverityInfo, fmt.Sprintf("Backup of database '%s' completed successfully", database)).
		WithDatabase(database).
		WithBackupInfo(backupFile, size)

	if d, ok := duration.(interface{ Seconds() float64 }); ok {
		event.WithDetail("duration_seconds", fmt.Sprintf("%.2f", d.Seconds()))
	}

	m.Notify(event)
}

// BackupFailed sends a backup failed notification
func (m *Manager) BackupFailed(database string, err error) {
	event := NewEvent(EventBackupFailed, SeverityError, fmt.Sprintf("Backup of database '%s' failed", database)).
		WithDatabase(database).
		WithError(err)
	m.Notify(event)
}

// RestoreStarted sends a restore started notification
func (m *Manager) RestoreStarted(database, backupFile string) {
	event := NewEvent(EventRestoreStarted, SeverityInfo, fmt.Sprintf("Starting restore of database '%s' from '%s'", database, backupFile)).
		WithDatabase(database).
		WithBackupInfo(backupFile, 0)
	m.Notify(event)
}

// RestoreCompleted sends a restore completed notification
func (m *Manager) RestoreCompleted(database, backupFile string, duration interface{}) {
	event := NewEvent(EventRestoreCompleted, SeverityInfo, fmt.Sprintf("Restore of database '%s' completed successfully", database)).
		WithDatabase(database).
		WithBackupInfo(backupFile, 0)

	if d, ok := duration.(interface{ Seconds() float64 }); ok {
		event.WithDetail("duration_seconds", fmt.Sprintf("%.2f", d.Seconds()))
	}

	m.Notify(event)
}

// RestoreFailed sends a restore failed notification
func (m *Manager) RestoreFailed(database string, err error) {
	event := NewEvent(EventRestoreFailed, SeverityError, fmt.Sprintf("Restore of database '%s' failed", database)).
		WithDatabase(database).
		WithError(err)
	m.Notify(event)
}

// CleanupCompleted sends a cleanup completed notification
func (m *Manager) CleanupCompleted(directory string, deleted int, spaceFreed int64) {
	event := NewEvent(EventCleanupCompleted, SeverityInfo, fmt.Sprintf("Cleanup completed: %d backups deleted", deleted)).
		WithDetail("directory", directory).
		WithDetail("space_freed", formatBytes(spaceFreed))
	m.Notify(event)
}

// VerifyCompleted sends a verification completed notification
func (m *Manager) VerifyCompleted(backupFile string, isValid bool) {
	if isValid {
		event := NewEvent(EventVerifyCompleted, SeverityInfo, "Backup verification passed").
			WithBackupInfo(backupFile, 0)
		m.Notify(event)
	} else {
		event := NewEvent(EventVerifyFailed, SeverityError, "Backup verification failed").
			WithBackupInfo(backupFile, 0)
		m.Notify(event)
	}
}

// PITRRecovery sends a PITR recovery notification
func (m *Manager) PITRRecovery(database, targetTime string) {
	event := NewEvent(EventPITRRecovery, SeverityInfo, fmt.Sprintf("Point-in-time recovery initiated for '%s' to %s", database, targetTime)).
		WithDatabase(database).
		WithDetail("target_time", targetTime)
	m.Notify(event)
}

// NullManager returns a no-op notification manager
func NullManager() *Manager {
	return &Manager{
		notifiers: make([]Notifier, 0),
	}
}

internal/notify/notify.go | 260 (new file)
@@ -0,0 +1,260 @@
// Package notify provides notification capabilities for backup events
package notify

import (
	"context"
	"fmt"
	"time"
)

// EventType represents the type of notification event
type EventType string

const (
	EventBackupStarted    EventType = "backup_started"
	EventBackupCompleted  EventType = "backup_completed"
	EventBackupFailed     EventType = "backup_failed"
	EventRestoreStarted   EventType = "restore_started"
	EventRestoreCompleted EventType = "restore_completed"
	EventRestoreFailed    EventType = "restore_failed"
	EventCleanupCompleted EventType = "cleanup_completed"
	EventVerifyCompleted  EventType = "verify_completed"
	EventVerifyFailed     EventType = "verify_failed"
	EventPITRRecovery     EventType = "pitr_recovery"
)

// Severity represents the severity level of a notification
type Severity string

const (
	SeverityInfo     Severity = "info"
	SeverityWarning  Severity = "warning"
	SeverityError    Severity = "error"
	SeverityCritical Severity = "critical"
)

// Event represents a notification event
type Event struct {
	Type       EventType         `json:"type"`
	Severity   Severity          `json:"severity"`
	Timestamp  time.Time         `json:"timestamp"`
	Database   string            `json:"database,omitempty"`
	Message    string            `json:"message"`
	Details    map[string]string `json:"details,omitempty"`
	Error      string            `json:"error,omitempty"`
	Duration   time.Duration     `json:"duration,omitempty"`
	BackupFile string            `json:"backup_file,omitempty"`
	BackupSize int64             `json:"backup_size,omitempty"`
	Hostname   string            `json:"hostname,omitempty"`
}

// NewEvent creates a new notification event
func NewEvent(eventType EventType, severity Severity, message string) *Event {
	return &Event{
		Type:      eventType,
		Severity:  severity,
		Timestamp: time.Now(),
		Message:   message,
		Details:   make(map[string]string),
	}
}

// WithDatabase adds the database name to the event
func (e *Event) WithDatabase(db string) *Event {
	e.Database = db
	return e
}

// WithError adds error information to the event
func (e *Event) WithError(err error) *Event {
	if err != nil {
		e.Error = err.Error()
	}
	return e
}

// WithDuration adds a duration to the event
func (e *Event) WithDuration(d time.Duration) *Event {
	e.Duration = d
	return e
}

// WithBackupInfo adds backup file and size information
func (e *Event) WithBackupInfo(file string, size int64) *Event {
	e.BackupFile = file
	e.BackupSize = size
	return e
}

// WithHostname adds the hostname to the event
func (e *Event) WithHostname(hostname string) *Event {
	e.Hostname = hostname
	return e
}

// WithDetail adds a custom detail to the event
func (e *Event) WithDetail(key, value string) *Event {
	if e.Details == nil {
		e.Details = make(map[string]string)
	}
	e.Details[key] = value
	return e
}
|
||||
|
||||
// Notifier is the interface that all notification backends must implement
|
||||
type Notifier interface {
|
||||
// Name returns the name of the notifier (e.g., "smtp", "webhook")
|
||||
Name() string
|
||||
// Send sends a notification event
|
||||
Send(ctx context.Context, event *Event) error
|
||||
// IsEnabled returns whether the notifier is configured and enabled
|
||||
IsEnabled() bool
|
||||
}
|
||||
|
||||
// Config holds configuration for all notification backends
|
||||
type Config struct {
|
||||
// SMTP configuration
|
||||
SMTPEnabled bool
|
||||
SMTPHost string
|
||||
SMTPPort int
|
||||
SMTPUser string
|
||||
SMTPPassword string
|
||||
SMTPFrom string
|
||||
SMTPTo []string
|
||||
SMTPTLS bool
|
||||
SMTPStartTLS bool
|
||||
|
||||
// Webhook configuration
|
||||
WebhookEnabled bool
|
||||
WebhookURL string
|
||||
WebhookMethod string // GET, POST
|
||||
WebhookHeaders map[string]string
|
||||
WebhookSecret string // For signing payloads
|
||||
|
||||
// General settings
|
||||
OnSuccess bool // Send notifications on successful operations
|
||||
OnFailure bool // Send notifications on failed operations
|
||||
OnWarning bool // Send notifications on warnings
|
||||
MinSeverity Severity
|
||||
Retries int // Number of retry attempts
|
||||
RetryDelay time.Duration // Delay between retries
|
||||
}
|
||||
|
||||
// DefaultConfig returns a configuration with sensible defaults
|
||||
func DefaultConfig() Config {
|
||||
return Config{
|
||||
SMTPPort: 587,
|
||||
SMTPTLS: false,
|
||||
SMTPStartTLS: true,
|
||||
WebhookMethod: "POST",
|
||||
OnSuccess: true,
|
||||
OnFailure: true,
|
||||
OnWarning: true,
|
||||
MinSeverity: SeverityInfo,
|
||||
Retries: 3,
|
||||
RetryDelay: 5 * time.Second,
|
||||
}
|
||||
}
|
||||
|
||||
// FormatEventSubject generates a subject line for notifications
|
||||
func FormatEventSubject(event *Event) string {
|
||||
icon := "ℹ️"
|
||||
switch event.Severity {
|
||||
case SeverityWarning:
|
||||
icon = "⚠️"
|
||||
case SeverityError, SeverityCritical:
|
||||
icon = "❌"
|
||||
}
|
||||
|
||||
verb := "Event"
|
||||
switch event.Type {
|
||||
case EventBackupStarted:
|
||||
verb = "Backup Started"
|
||||
icon = "🔄"
|
||||
case EventBackupCompleted:
|
||||
verb = "Backup Completed"
|
||||
icon = "✅"
|
||||
case EventBackupFailed:
|
||||
verb = "Backup Failed"
|
||||
icon = "❌"
|
||||
case EventRestoreStarted:
|
||||
verb = "Restore Started"
|
||||
icon = "🔄"
|
||||
case EventRestoreCompleted:
|
||||
verb = "Restore Completed"
|
||||
icon = "✅"
|
||||
case EventRestoreFailed:
|
||||
verb = "Restore Failed"
|
||||
icon = "❌"
|
||||
case EventCleanupCompleted:
|
||||
verb = "Cleanup Completed"
|
||||
icon = "🗑️"
|
||||
case EventVerifyCompleted:
|
||||
verb = "Verification Passed"
|
||||
icon = "✅"
|
||||
case EventVerifyFailed:
|
||||
verb = "Verification Failed"
|
||||
icon = "❌"
|
||||
case EventPITRRecovery:
|
||||
verb = "PITR Recovery"
|
||||
icon = "⏪"
|
||||
}
|
||||
|
||||
if event.Database != "" {
|
||||
return fmt.Sprintf("%s [dbbackup] %s: %s", icon, verb, event.Database)
|
||||
}
|
||||
return fmt.Sprintf("%s [dbbackup] %s", icon, verb)
|
||||
}
|
||||
|
||||
// FormatEventBody generates a message body for notifications
|
||||
func FormatEventBody(event *Event) string {
|
||||
body := fmt.Sprintf("%s\n\n", event.Message)
|
||||
body += fmt.Sprintf("Time: %s\n", event.Timestamp.Format(time.RFC3339))
|
||||
|
||||
if event.Database != "" {
|
||||
body += fmt.Sprintf("Database: %s\n", event.Database)
|
||||
}
|
||||
|
||||
if event.Hostname != "" {
|
||||
body += fmt.Sprintf("Host: %s\n", event.Hostname)
|
||||
}
|
||||
|
||||
if event.Duration > 0 {
|
||||
body += fmt.Sprintf("Duration: %s\n", event.Duration.Round(time.Second))
|
||||
}
|
||||
|
||||
if event.BackupFile != "" {
|
||||
body += fmt.Sprintf("Backup File: %s\n", event.BackupFile)
|
||||
}
|
||||
|
||||
if event.BackupSize > 0 {
|
||||
body += fmt.Sprintf("Backup Size: %s\n", formatBytes(event.BackupSize))
|
||||
}
|
||||
|
||||
if event.Error != "" {
|
||||
body += fmt.Sprintf("\nError: %s\n", event.Error)
|
||||
}
|
||||
|
||||
if len(event.Details) > 0 {
|
||||
body += "\nDetails:\n"
|
||||
for k, v := range event.Details {
|
||||
body += fmt.Sprintf(" %s: %s\n", k, v)
|
||||
}
|
||||
}
|
||||
|
||||
return body
|
||||
}
|
||||
|
||||
// formatBytes formats bytes as human-readable string
|
||||
func formatBytes(bytes int64) string {
|
||||
const unit = 1024
|
||||
if bytes < unit {
|
||||
return fmt.Sprintf("%d B", bytes)
|
||||
}
|
||||
div, exp := int64(unit), 0
|
||||
for n := bytes / unit; n >= unit; n /= unit {
|
||||
div *= unit
|
||||
exp++
|
||||
}
|
||||
return fmt.Sprintf("%.1f %cB", float64(bytes)/float64(div), "KMGTPE"[exp])
|
||||
}
|
||||
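Any backend satisfying the three-method `Notifier` interface above can be plugged in alongside SMTP and webhooks. A minimal sketch of a log-only notifier, written as if added to this package (the type itself is hypothetical, only the interface is from the source):

```go
// LogNotifier writes events to standard output; a minimal Notifier example.
type LogNotifier struct{}

func (l *LogNotifier) Name() string    { return "log" }
func (l *LogNotifier) IsEnabled() bool { return true }

// Send prints the formatted subject and body instead of delivering anywhere.
func (l *LogNotifier) Send(ctx context.Context, event *Event) error {
	fmt.Printf("%s\n%s", FormatEventSubject(event), FormatEventBody(event))
	return nil
}
```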
279 internal/notify/notify_test.go (new file)
@@ -0,0 +1,279 @@
package notify

import (
    "context"
    "encoding/json"
    "net/http"
    "net/http/httptest"
    "strings"
    "testing"
    "time"
)

func TestNewEvent(t *testing.T) {
    event := NewEvent(EventBackupCompleted, SeverityInfo, "Backup completed")

    if event.Type != EventBackupCompleted {
        t.Errorf("Type = %v, expected %v", event.Type, EventBackupCompleted)
    }

    if event.Severity != SeverityInfo {
        t.Errorf("Severity = %v, expected %v", event.Severity, SeverityInfo)
    }

    if event.Message != "Backup completed" {
        t.Errorf("Message = %q, expected %q", event.Message, "Backup completed")
    }

    if event.Timestamp.IsZero() {
        t.Error("Timestamp should not be zero")
    }
}

func TestEventChaining(t *testing.T) {
    event := NewEvent(EventBackupCompleted, SeverityInfo, "Backup completed").
        WithDatabase("testdb").
        WithBackupInfo("/backups/test.dump", 1024).
        WithHostname("server1").
        WithDetail("custom", "value")

    if event.Database != "testdb" {
        t.Errorf("Database = %q, expected %q", event.Database, "testdb")
    }

    if event.BackupFile != "/backups/test.dump" {
        t.Errorf("BackupFile = %q, expected %q", event.BackupFile, "/backups/test.dump")
    }

    if event.BackupSize != 1024 {
        t.Errorf("BackupSize = %d, expected %d", event.BackupSize, 1024)
    }

    if event.Hostname != "server1" {
        t.Errorf("Hostname = %q, expected %q", event.Hostname, "server1")
    }

    if event.Details["custom"] != "value" {
        t.Errorf("Details[custom] = %q, expected %q", event.Details["custom"], "value")
    }
}

func TestFormatEventSubject(t *testing.T) {
    tests := []struct {
        eventType EventType
        database  string
        contains  string
    }{
        {EventBackupCompleted, "testdb", "Backup Completed"},
        {EventBackupFailed, "testdb", "Backup Failed"},
        {EventRestoreCompleted, "", "Restore Completed"},
        {EventCleanupCompleted, "", "Cleanup Completed"},
    }

    for _, tc := range tests {
        event := NewEvent(tc.eventType, SeverityInfo, "test")
        if tc.database != "" {
            event.WithDatabase(tc.database)
        }

        subject := FormatEventSubject(event)
        if !strings.Contains(subject, tc.contains) {
            t.Errorf("FormatEventSubject() = %q, expected it to contain %q", subject, tc.contains)
        }
    }
}

func TestFormatEventBody(t *testing.T) {
    event := NewEvent(EventBackupCompleted, SeverityInfo, "Backup completed").
        WithDatabase("testdb").
        WithBackupInfo("/backups/test.dump", 1024).
        WithHostname("server1")

    body := FormatEventBody(event)

    if body == "" {
        t.Error("FormatEventBody() returned empty string")
    }

    // Should contain the message and the database name
    if !strings.Contains(body, "Backup completed") || !strings.Contains(body, "testdb") {
        t.Error("Body should contain event information")
    }
}

func TestDefaultConfig(t *testing.T) {
    config := DefaultConfig()

    if config.SMTPPort != 587 {
        t.Errorf("SMTPPort = %d, expected 587", config.SMTPPort)
    }

    if !config.SMTPStartTLS {
        t.Error("SMTPStartTLS should be true by default")
    }

    if config.WebhookMethod != "POST" {
        t.Errorf("WebhookMethod = %q, expected POST", config.WebhookMethod)
    }

    if !config.OnSuccess {
        t.Error("OnSuccess should be true by default")
    }

    if !config.OnFailure {
        t.Error("OnFailure should be true by default")
    }

    if config.Retries != 3 {
        t.Errorf("Retries = %d, expected 3", config.Retries)
    }
}

func TestWebhookNotifierSend(t *testing.T) {
    var receivedPayload WebhookPayload

    server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if r.Method != "POST" {
            t.Errorf("Method = %q, expected POST", r.Method)
        }

        if r.Header.Get("Content-Type") != "application/json" {
            t.Errorf("Content-Type = %q, expected application/json", r.Header.Get("Content-Type"))
        }

        decoder := json.NewDecoder(r.Body)
        if err := decoder.Decode(&receivedPayload); err != nil {
            t.Errorf("Failed to decode payload: %v", err)
        }

        w.WriteHeader(http.StatusOK)
    }))
    defer server.Close()

    config := DefaultConfig()
    config.WebhookEnabled = true
    config.WebhookURL = server.URL

    notifier := NewWebhookNotifier(config)

    event := NewEvent(EventBackupCompleted, SeverityInfo, "Backup completed").
        WithDatabase("testdb")

    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    err := notifier.Send(ctx, event)
    if err != nil {
        t.Errorf("Send() error = %v", err)
    }

    if receivedPayload.Event.Database != "testdb" {
        t.Errorf("Received database = %q, expected testdb", receivedPayload.Event.Database)
    }
}

func TestWebhookNotifierDisabled(t *testing.T) {
    config := DefaultConfig()
    config.WebhookEnabled = false

    notifier := NewWebhookNotifier(config)

    if notifier.IsEnabled() {
        t.Error("Notifier should be disabled")
    }

    event := NewEvent(EventBackupCompleted, SeverityInfo, "test")
    err := notifier.Send(context.Background(), event)
    if err != nil {
        t.Errorf("Send() should not error when disabled: %v", err)
    }
}

func TestSMTPNotifierDisabled(t *testing.T) {
    config := DefaultConfig()
    config.SMTPEnabled = false

    notifier := NewSMTPNotifier(config)

    if notifier.IsEnabled() {
        t.Error("Notifier should be disabled")
    }

    event := NewEvent(EventBackupCompleted, SeverityInfo, "test")
    err := notifier.Send(context.Background(), event)
    if err != nil {
        t.Errorf("Send() should not error when disabled: %v", err)
    }
}

func TestManagerNoNotifiers(t *testing.T) {
    config := DefaultConfig()
    config.SMTPEnabled = false
    config.WebhookEnabled = false

    manager := NewManager(config)

    if manager.HasEnabledNotifiers() {
        t.Error("Manager should have no enabled notifiers")
    }

    names := manager.EnabledNotifiers()
    if len(names) != 0 {
        t.Errorf("EnabledNotifiers() = %v, expected empty", names)
    }
}

func TestManagerWithWebhook(t *testing.T) {
    server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.WriteHeader(http.StatusOK)
    }))
    defer server.Close()

    config := DefaultConfig()
    config.WebhookEnabled = true
    config.WebhookURL = server.URL

    manager := NewManager(config)

    if !manager.HasEnabledNotifiers() {
        t.Error("Manager should have enabled notifiers")
    }

    names := manager.EnabledNotifiers()
    if len(names) != 1 || names[0] != "webhook" {
        t.Errorf("EnabledNotifiers() = %v, expected [webhook]", names)
    }
}

func TestNullManager(t *testing.T) {
    manager := NullManager()

    if manager.HasEnabledNotifiers() {
        t.Error("NullManager should have no enabled notifiers")
    }

    // Should not panic
    manager.BackupStarted("testdb")
    manager.BackupCompleted("testdb", "/backup.dump", 1024, nil)
    manager.BackupFailed("testdb", nil)
}

func TestFormatBytes(t *testing.T) {
    tests := []struct {
        input    int64
        expected string
    }{
        {0, "0 B"},
        {500, "500 B"},
        {1024, "1.0 KB"},
        {1536, "1.5 KB"},
        {1048576, "1.0 MB"},
        {1073741824, "1.0 GB"},
    }

    for _, tc := range tests {
        result := formatBytes(tc.input)
        if result != tc.expected {
            t.Errorf("formatBytes(%d) = %q, expected %q", tc.input, result, tc.expected)
        }
    }
}
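Both this suite and the retention suite further down run with plain `go test` and need no external services (the webhook tests stand up a local `httptest` server):

```bash
go test ./internal/notify/ ./internal/retention/
```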
179 internal/notify/smtp.go (new file)
@@ -0,0 +1,179 @@
// Package notify - SMTP email notifications
package notify

import (
    "context"
    "crypto/tls"
    "fmt"
    "net"
    "net/smtp"
    "strings"
    "time"
)

// SMTPNotifier sends notifications via email
type SMTPNotifier struct {
    config Config
}

// NewSMTPNotifier creates a new SMTP notifier
func NewSMTPNotifier(config Config) *SMTPNotifier {
    return &SMTPNotifier{
        config: config,
    }
}

// Name returns the notifier name
func (s *SMTPNotifier) Name() string {
    return "smtp"
}

// IsEnabled returns whether SMTP notifications are enabled
func (s *SMTPNotifier) IsEnabled() bool {
    return s.config.SMTPEnabled && s.config.SMTPHost != "" && len(s.config.SMTPTo) > 0
}

// Send sends an email notification
func (s *SMTPNotifier) Send(ctx context.Context, event *Event) error {
    if !s.IsEnabled() {
        return nil
    }

    // Build email
    subject := FormatEventSubject(event)
    body := FormatEventBody(event)

    // Build headers
    headers := make(map[string]string)
    headers["From"] = s.config.SMTPFrom
    headers["To"] = strings.Join(s.config.SMTPTo, ", ")
    headers["Subject"] = subject
    headers["MIME-Version"] = "1.0"
    headers["Content-Type"] = "text/plain; charset=UTF-8"
    headers["Date"] = time.Now().Format(time.RFC1123Z)
    headers["X-Priority"] = s.getPriority(event.Severity)

    // Build message
    var msg strings.Builder
    for k, v := range headers {
        msg.WriteString(fmt.Sprintf("%s: %s\r\n", k, v))
    }
    msg.WriteString("\r\n")
    msg.WriteString(body)

    // Send with retries
    var lastErr error
    for attempt := 0; attempt <= s.config.Retries; attempt++ {
        select {
        case <-ctx.Done():
            return ctx.Err()
        default:
        }

        if attempt > 0 {
            // Wait out the retry delay, but honor context cancellation
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(s.config.RetryDelay):
            }
        }

        err := s.sendMail(ctx, msg.String())
        if err == nil {
            return nil
        }
        lastErr = err
    }

    return fmt.Errorf("smtp: failed after %d attempts: %w", s.config.Retries+1, lastErr)
}

// sendMail sends the email message
func (s *SMTPNotifier) sendMail(ctx context.Context, message string) error {
    addr := fmt.Sprintf("%s:%d", s.config.SMTPHost, s.config.SMTPPort)

    // Create connection with timeout
    dialer := &net.Dialer{
        Timeout: 30 * time.Second,
    }

    var conn net.Conn
    var err error

    if s.config.SMTPTLS {
        // Direct TLS connection (port 465)
        tlsConfig := &tls.Config{
            ServerName: s.config.SMTPHost,
        }
        conn, err = tls.DialWithDialer(dialer, "tcp", addr, tlsConfig)
    } else {
        conn, err = dialer.DialContext(ctx, "tcp", addr)
    }
    if err != nil {
        return fmt.Errorf("dial failed: %w", err)
    }
    defer conn.Close()

    // Create SMTP client
    client, err := smtp.NewClient(conn, s.config.SMTPHost)
    if err != nil {
        return fmt.Errorf("smtp client creation failed: %w", err)
    }
    defer client.Close()

    // STARTTLS if needed (and not already using TLS)
    if s.config.SMTPStartTLS && !s.config.SMTPTLS {
        if ok, _ := client.Extension("STARTTLS"); ok {
            tlsConfig := &tls.Config{
                ServerName: s.config.SMTPHost,
            }
            if err = client.StartTLS(tlsConfig); err != nil {
                return fmt.Errorf("starttls failed: %w", err)
            }
        }
    }

    // Authenticate if credentials provided
    if s.config.SMTPUser != "" && s.config.SMTPPassword != "" {
        auth := smtp.PlainAuth("", s.config.SMTPUser, s.config.SMTPPassword, s.config.SMTPHost)
        if err = client.Auth(auth); err != nil {
            return fmt.Errorf("auth failed: %w", err)
        }
    }

    // Set sender
    if err = client.Mail(s.config.SMTPFrom); err != nil {
        return fmt.Errorf("mail from failed: %w", err)
    }

    // Set recipients
    for _, to := range s.config.SMTPTo {
        if err = client.Rcpt(to); err != nil {
            return fmt.Errorf("rcpt to failed: %w", err)
        }
    }
|
||||
// Send message body
|
||||
w, err := client.Data()
|
||||
if err != nil {
|
||||
return fmt.Errorf("data command failed: %w", err)
|
||||
}
|
||||
defer w.Close()
|
||||
|
||||
_, err = w.Write([]byte(message))
|
||||
if err != nil {
|
||||
return fmt.Errorf("write failed: %w", err)
|
||||
}
|
||||
|
||||
return client.Quit()
|
||||
}
|
||||
|
||||
// getPriority returns X-Priority header value based on severity
func (s *SMTPNotifier) getPriority(severity Severity) string {
    switch severity {
    case SeverityCritical:
        return "1" // Highest
    case SeverityError:
        return "2" // High
    case SeverityWarning:
        return "3" // Normal
    default:
        return "3" // Normal
    }
}
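The two TLS flags map to the two common submission setups: implicit TLS on port 465 (`SMTPTLS`) and a STARTTLS upgrade on port 587 (`SMTPStartTLS`, the default). A configuration sketch for a typical STARTTLS provider; host, addresses, and password are placeholders:

```go
cfg := notify.DefaultConfig() // port 587, STARTTLS enabled
cfg.SMTPEnabled = true
cfg.SMTPHost = "smtp.example.com"
cfg.SMTPUser = "alerts@example.com"
cfg.SMTPPassword = "app-password" // placeholder
cfg.SMTPFrom = "alerts@example.com"
cfg.SMTPTo = []string{"ops@example.com"}

// For implicit TLS on port 465 instead:
// cfg.SMTPPort = 465
// cfg.SMTPTLS = true
// cfg.SMTPStartTLS = false
```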
337 internal/notify/webhook.go (new file)
@@ -0,0 +1,337 @@
// Package notify - Webhook HTTP notifications
package notify

import (
    "bytes"
    "context"
    "crypto/hmac"
    "crypto/sha256"
    "encoding/hex"
    "encoding/json"
    "fmt"
    "io"
    "net/http"
    "time"
)

// WebhookNotifier sends notifications via HTTP webhooks
type WebhookNotifier struct {
    config Config
    client *http.Client
}

// NewWebhookNotifier creates a new Webhook notifier
func NewWebhookNotifier(config Config) *WebhookNotifier {
    return &WebhookNotifier{
        config: config,
        client: &http.Client{
            Timeout: 30 * time.Second,
        },
    }
}

// Name returns the notifier name
func (w *WebhookNotifier) Name() string {
    return "webhook"
}

// IsEnabled returns whether webhook notifications are enabled
func (w *WebhookNotifier) IsEnabled() bool {
    return w.config.WebhookEnabled && w.config.WebhookURL != ""
}

// WebhookPayload is the JSON payload sent to webhooks
type WebhookPayload struct {
    Version   string            `json:"version"`
    Event     *Event            `json:"event"`
    Subject   string            `json:"subject"`
    Body      string            `json:"body"`
    Signature string            `json:"signature,omitempty"`
    Metadata  map[string]string `json:"metadata,omitempty"`
}

// Send sends a webhook notification
func (w *WebhookNotifier) Send(ctx context.Context, event *Event) error {
    if !w.IsEnabled() {
        return nil
    }

    // Build payload
    payload := WebhookPayload{
        Version: "1.0",
        Event:   event,
        Subject: FormatEventSubject(event),
        Body:    FormatEventBody(event),
        Metadata: map[string]string{
            "source": "dbbackup",
        },
    }

    // Marshal to JSON
    jsonBody, err := json.Marshal(payload)
    if err != nil {
        return fmt.Errorf("webhook: failed to marshal payload: %w", err)
    }
    // Sign payload if secret is configured. The embedded signature covers
    // the payload as marshaled *without* the signature field; the
    // X-Webhook-Signature header (set in doRequest) covers the final body.
    if w.config.WebhookSecret != "" {
        sig := w.signPayload(jsonBody)
        payload.Signature = sig
        // Re-marshal with signature
        jsonBody, _ = json.Marshal(payload)
    }

    // Send with retries
    var lastErr error
    for attempt := 0; attempt <= w.config.Retries; attempt++ {
        select {
        case <-ctx.Done():
            return ctx.Err()
        default:
        }

        if attempt > 0 {
            // Wait out the retry delay, but honor context cancellation
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(w.config.RetryDelay):
            }
        }

        err := w.doRequest(ctx, jsonBody)
        if err == nil {
            return nil
        }
        lastErr = err
    }

    return fmt.Errorf("webhook: failed after %d attempts: %w", w.config.Retries+1, lastErr)
}

// doRequest performs the HTTP request
func (w *WebhookNotifier) doRequest(ctx context.Context, body []byte) error {
    method := w.config.WebhookMethod
    if method == "" {
        method = "POST"
    }

    req, err := http.NewRequestWithContext(ctx, method, w.config.WebhookURL, bytes.NewReader(body))
    if err != nil {
        return fmt.Errorf("failed to create request: %w", err)
    }

    // Set headers
    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("User-Agent", "dbbackup-notifier/1.0")

    // Add custom headers
    for k, v := range w.config.WebhookHeaders {
        req.Header.Set(k, v)
    }

    // Add signature header if secret is configured
    if w.config.WebhookSecret != "" {
        sig := w.signPayload(body)
        req.Header.Set("X-Webhook-Signature", "sha256="+sig)
    }

    // Send request
    resp, err := w.client.Do(req)
    if err != nil {
        return fmt.Errorf("request failed: %w", err)
    }
    defer resp.Body.Close()

    // Read response body for error messages
    respBody, _ := io.ReadAll(io.LimitReader(resp.Body, 1024))

    // Check status code
    if resp.StatusCode < 200 || resp.StatusCode >= 300 {
        return fmt.Errorf("unexpected status %d: %s", resp.StatusCode, string(respBody))
    }

    return nil
}

// signPayload creates an HMAC-SHA256 signature
func (w *WebhookNotifier) signPayload(payload []byte) string {
    mac := hmac.New(sha256.New, []byte(w.config.WebhookSecret))
    mac.Write(payload)
    return hex.EncodeToString(mac.Sum(nil))
}

// SlackPayload is a Slack-compatible webhook payload
type SlackPayload struct {
    Text        string       `json:"text,omitempty"`
    Username    string       `json:"username,omitempty"`
    IconEmoji   string       `json:"icon_emoji,omitempty"`
    Channel     string       `json:"channel,omitempty"`
    Attachments []Attachment `json:"attachments,omitempty"`
}

// Attachment is a Slack message attachment
type Attachment struct {
    Color      string            `json:"color,omitempty"`
    Title      string            `json:"title,omitempty"`
    Text       string            `json:"text,omitempty"`
    Fields     []AttachmentField `json:"fields,omitempty"`
    Footer     string            `json:"footer,omitempty"`
    FooterIcon string            `json:"footer_icon,omitempty"`
    Timestamp  int64             `json:"ts,omitempty"`
}

// AttachmentField is a field in a Slack attachment
type AttachmentField struct {
    Title string `json:"title"`
    Value string `json:"value"`
    Short bool   `json:"short"`
}

// NewSlackNotifier creates a webhook notifier configured for Slack
func NewSlackNotifier(webhookURL string, config Config) *SlackWebhookNotifier {
    return &SlackWebhookNotifier{
        webhookURL: webhookURL,
        config:     config,
        client: &http.Client{
            Timeout: 30 * time.Second,
        },
    }
}

// SlackWebhookNotifier sends Slack-formatted notifications
type SlackWebhookNotifier struct {
    webhookURL string
    config     Config
    client     *http.Client
}

// Name returns the notifier name
func (s *SlackWebhookNotifier) Name() string {
    return "slack"
}

// IsEnabled returns whether Slack notifications are enabled
func (s *SlackWebhookNotifier) IsEnabled() bool {
    return s.webhookURL != ""
}

// Send sends a Slack notification
func (s *SlackWebhookNotifier) Send(ctx context.Context, event *Event) error {
    if !s.IsEnabled() {
        return nil
    }

    // Build Slack payload
    color := "#36a64f" // Green
    switch event.Severity {
    case SeverityWarning:
        color = "#daa038" // Orange
    case SeverityError, SeverityCritical:
        color = "#cc0000" // Red
    }

    fields := []AttachmentField{}

    if event.Database != "" {
        fields = append(fields, AttachmentField{
            Title: "Database",
            Value: event.Database,
            Short: true,
        })
    }

    if event.Duration > 0 {
        fields = append(fields, AttachmentField{
            Title: "Duration",
            Value: event.Duration.Round(time.Second).String(),
            Short: true,
        })
    }

    if event.BackupSize > 0 {
        fields = append(fields, AttachmentField{
            Title: "Size",
            Value: formatBytes(event.BackupSize),
            Short: true,
        })
    }

    if event.Hostname != "" {
        fields = append(fields, AttachmentField{
            Title: "Host",
            Value: event.Hostname,
            Short: true,
        })
    }

    if event.Error != "" {
        fields = append(fields, AttachmentField{
            Title: "Error",
            Value: event.Error,
            Short: false,
        })
    }

    payload := SlackPayload{
        Username:  "DBBackup",
        IconEmoji: ":database:",
        Attachments: []Attachment{
            {
                Color:     color,
                Title:     FormatEventSubject(event),
                Text:      event.Message,
                Fields:    fields,
                Footer:    "dbbackup",
                Timestamp: event.Timestamp.Unix(),
            },
        },
    }

    // Marshal to JSON
    jsonBody, err := json.Marshal(payload)
    if err != nil {
        return fmt.Errorf("slack: failed to marshal payload: %w", err)
    }

    // Send with retries
    var lastErr error
    for attempt := 0; attempt <= s.config.Retries; attempt++ {
        select {
        case <-ctx.Done():
            return ctx.Err()
        default:
        }

        if attempt > 0 {
            // Wait out the retry delay, but honor context cancellation
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(s.config.RetryDelay):
            }
        }

        err := s.doRequest(ctx, jsonBody)
        if err == nil {
            return nil
        }
        lastErr = err
    }

    return fmt.Errorf("slack: failed after %d attempts: %w", s.config.Retries+1, lastErr)
}

// doRequest performs the HTTP request to Slack
func (s *SlackWebhookNotifier) doRequest(ctx context.Context, body []byte) error {
    req, err := http.NewRequestWithContext(ctx, "POST", s.webhookURL, bytes.NewReader(body))
    if err != nil {
        return fmt.Errorf("failed to create request: %w", err)
    }

    req.Header.Set("Content-Type", "application/json")

    resp, err := s.client.Do(req)
    if err != nil {
        return fmt.Errorf("request failed: %w", err)
    }
    defer resp.Body.Close()

    respBody, _ := io.ReadAll(io.LimitReader(resp.Body, 256))

    if resp.StatusCode != 200 {
        return fmt.Errorf("unexpected status %d: %s", resp.StatusCode, string(respBody))
    }

    return nil
}
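On the receiving side, deliveries can be authenticated by recomputing the HMAC over the raw request body and comparing it with the `X-Webhook-Signature` header. A sketch of a verifying handler in Go; the shared secret is a placeholder that must match `WebhookSecret`:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"io"
	"net/http"
	"strings"
)

var secret = []byte("shared-secret") // placeholder

func handler(w http.ResponseWriter, r *http.Request) {
	body, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, "read error", http.StatusBadRequest)
		return
	}
	// Recompute HMAC-SHA256 over the exact bytes received
	mac := hmac.New(sha256.New, secret)
	mac.Write(body)
	want := hex.EncodeToString(mac.Sum(nil))
	got := strings.TrimPrefix(r.Header.Get("X-Webhook-Signature"), "sha256=")
	// hmac.Equal gives a constant-time comparison
	if !hmac.Equal([]byte(got), []byte(want)) {
		http.Error(w, "bad signature", http.StatusUnauthorized)
		return
	}
	w.WriteHeader(http.StatusOK)
}
```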
363 internal/retention/gfs.go (new file)
@@ -0,0 +1,363 @@
package retention

import (
    "sort"
    "strings"
    "time"

    "dbbackup/internal/metadata"
)

// Tier represents a retention tier in GFS scheme
type Tier int

const (
    TierDaily Tier = iota
    TierWeekly
    TierMonthly
    TierYearly
)

func (t Tier) String() string {
    switch t {
    case TierDaily:
        return "daily"
    case TierWeekly:
        return "weekly"
    case TierMonthly:
        return "monthly"
    case TierYearly:
        return "yearly"
    default:
        return "unknown"
    }
}

// ParseWeekday converts a weekday name to its integer value (0=Sunday, etc.)
func ParseWeekday(name string) int {
    name = strings.ToLower(strings.TrimSpace(name))
    weekdays := map[string]int{
        "sunday":    0,
        "sun":       0,
        "monday":    1,
        "mon":       1,
        "tuesday":   2,
        "tue":       2,
        "wednesday": 3,
        "wed":       3,
        "thursday":  4,
        "thu":       4,
        "friday":    5,
        "fri":       5,
        "saturday":  6,
        "sat":       6,
    }
    if val, ok := weekdays[name]; ok {
        return val
    }
    return 0 // Default to Sunday
}

// GFSPolicy defines a Grandfather-Father-Son retention policy
type GFSPolicy struct {
    Enabled    bool
    Daily      int  // Number of daily backups to keep
    Weekly     int  // Number of weekly backups to keep (e.g., Sunday)
    Monthly    int  // Number of monthly backups to keep (e.g., 1st of month)
    Yearly     int  // Number of yearly backups to keep (e.g., Jan 1st)
    WeeklyDay  int  // Day of week for weekly (0=Sunday, 1=Monday, etc.)
    MonthlyDay int  // Day of month for monthly (1-31, 0 means last day)
    DryRun     bool // Preview mode - don't actually delete
}

// DefaultGFSPolicy returns a sensible default GFS policy
func DefaultGFSPolicy() GFSPolicy {
    return GFSPolicy{
        Enabled:    true,
        Daily:      7,
        Weekly:     4,
        Monthly:    12,
        Yearly:     3,
        WeeklyDay:  0, // Sunday
        MonthlyDay: 1, // 1st of month
        DryRun:     false,
    }
}

// BackupClassification holds the tier classification for a backup
type BackupClassification struct {
    Backup       *metadata.BackupMetadata
    Tiers        []Tier
    IsBestDaily  bool
    IsBestWeekly bool
    IsBestMonth  bool
    IsBestYear   bool
    DayKey       string // YYYY-MM-DD
    WeekKey      string // YYYY-WNN
    MonthKey     string // YYYY-MM
    YearKey      string // YYYY
}

// GFSResult contains the results of GFS policy application
type GFSResult struct {
    TotalBackups int
    ToDelete     []*metadata.BackupMetadata
    ToKeep       []*metadata.BackupMetadata
    Deleted      []string // File paths that were deleted (or would be in dry-run)
    Kept         []string // File paths that are kept
    TotalKept    int      // Total count of kept backups
    DailyKept    int
    WeeklyKept   int
    MonthlyKept  int
    YearlyKept   int
    SpaceFreed   int64
    Errors       []error
}

// ApplyGFSPolicy applies Grandfather-Father-Son retention policy
func ApplyGFSPolicy(backupDir string, policy GFSPolicy) (*GFSResult, error) {
    // Load all backups
    backups, err := metadata.ListBackups(backupDir)
    if err != nil {
        return nil, err
    }

    return ApplyGFSPolicyToBackups(backups, policy)
}

// ApplyGFSPolicyToBackups applies GFS policy to a list of backups
func ApplyGFSPolicyToBackups(backups []*metadata.BackupMetadata, policy GFSPolicy) (*GFSResult, error) {
    result := &GFSResult{
        TotalBackups: len(backups),
        ToDelete:     make([]*metadata.BackupMetadata, 0),
        ToKeep:       make([]*metadata.BackupMetadata, 0),
        Errors:       make([]error, 0),
    }

    if len(backups) == 0 {
        return result, nil
    }

    // Sort backups by timestamp (newest first)
    sort.Slice(backups, func(i, j int) bool {
        return backups[i].Timestamp.After(backups[j].Timestamp)
    })

    // Classify all backups
    classifications := classifyBackups(backups, policy)

    // Select best backup for each tier
    dailySelected := selectBestForTier(classifications, TierDaily, policy.Daily)
    weeklySelected := selectBestForTier(classifications, TierWeekly, policy.Weekly)
    monthlySelected := selectBestForTier(classifications, TierMonthly, policy.Monthly)
    yearlySelected := selectBestForTier(classifications, TierYearly, policy.Yearly)

    // Merge all selected backups
    keepSet := make(map[string]bool)
    for _, b := range dailySelected {
        keepSet[b.BackupFile] = true
        result.DailyKept++
    }
    for _, b := range weeklySelected {
        if !keepSet[b.BackupFile] {
            keepSet[b.BackupFile] = true
            result.WeeklyKept++
        }
    }
    for _, b := range monthlySelected {
        if !keepSet[b.BackupFile] {
            keepSet[b.BackupFile] = true
            result.MonthlyKept++
        }
    }
    for _, b := range yearlySelected {
        if !keepSet[b.BackupFile] {
            keepSet[b.BackupFile] = true
            result.YearlyKept++
        }
    }

    // Categorize backups into keep/delete
    for _, backup := range backups {
        if keepSet[backup.BackupFile] {
            result.ToKeep = append(result.ToKeep, backup)
            result.Kept = append(result.Kept, backup.BackupFile)
        } else {
            result.ToDelete = append(result.ToDelete, backup)
            result.Deleted = append(result.Deleted, backup.BackupFile)
            result.SpaceFreed += backup.SizeBytes
        }
    }

    // Set total kept count
    result.TotalKept = len(result.ToKeep)

    // Execute deletions if not dry run
    if !policy.DryRun {
        for _, backup := range result.ToDelete {
            if err := deleteBackup(backup.BackupFile); err != nil {
                result.Errors = append(result.Errors, err)
            }
        }
    }

    return result, nil
}

// classifyBackups classifies each backup into tiers
func classifyBackups(backups []*metadata.BackupMetadata, policy GFSPolicy) []BackupClassification {
    classifications := make([]BackupClassification, len(backups))

    for i, backup := range backups {
        ts := backup.Timestamp.UTC()

        classifications[i] = BackupClassification{
            Backup:   backup,
            Tiers:    make([]Tier, 0),
            DayKey:   ts.Format("2006-01-02"),
            WeekKey:  getWeekKey(ts),
            MonthKey: ts.Format("2006-01"),
            YearKey:  ts.Format("2006"),
        }

        // Every backup qualifies for daily
        classifications[i].Tiers = append(classifications[i].Tiers, TierDaily)

        // Check if qualifies for weekly (correct day of week)
        if int(ts.Weekday()) == policy.WeeklyDay {
            classifications[i].Tiers = append(classifications[i].Tiers, TierWeekly)
        }

        // Check if qualifies for monthly (correct day of month)
        if isMonthlyQualified(ts, policy.MonthlyDay) {
            classifications[i].Tiers = append(classifications[i].Tiers, TierMonthly)
        }

        // Check if qualifies for yearly (January + monthly day)
        if ts.Month() == time.January && isMonthlyQualified(ts, policy.MonthlyDay) {
            classifications[i].Tiers = append(classifications[i].Tiers, TierYearly)
        }
    }

    return classifications
}

// selectBestForTier selects the best N backups for a tier
func selectBestForTier(classifications []BackupClassification, tier Tier, count int) []*metadata.BackupMetadata {
    if count <= 0 {
        return nil
    }

    // Group by tier key
    groups := make(map[string][]*metadata.BackupMetadata)

    for _, c := range classifications {
        if !hasTier(c.Tiers, tier) {
            continue
        }

        var key string
        switch tier {
        case TierDaily:
            key = c.DayKey
        case TierWeekly:
            key = c.WeekKey
        case TierMonthly:
            key = c.MonthKey
        case TierYearly:
            key = c.YearKey
        }

        groups[key] = append(groups[key], c.Backup)
    }

    // Get unique keys sorted by recency (newest first)
    keys := make([]string, 0, len(groups))
    for key := range groups {
        keys = append(keys, key)
    }
    sort.Slice(keys, func(i, j int) bool {
        return keys[i] > keys[j] // Reverse sort (newest first)
    })

    // Limit to requested count
    if len(keys) > count {
        keys = keys[:count]
    }

    // Select newest backup from each period
    result := make([]*metadata.BackupMetadata, 0, len(keys))
    for _, key := range keys {
        backups := groups[key]
        // Sort by timestamp, newest first
        sort.Slice(backups, func(i, j int) bool {
            return backups[i].Timestamp.After(backups[j].Timestamp)
        })
        result = append(result, backups[0])
    }

    return result
}
// getWeekKey returns the ISO week key (YYYY-WNN), zero-padded so that
// lexicographic order matches chronological order
func getWeekKey(t time.Time) string {
    year, week := t.ISOWeek()
    return padInt(year, 4) + "-W" + padInt(week, 2)
}

// isMonthlyQualified checks if timestamp qualifies for monthly tier
func isMonthlyQualified(ts time.Time, monthlyDay int) bool {
    if monthlyDay == 0 {
        // Last day of month
        nextMonth := time.Date(ts.Year(), ts.Month()+1, 1, 0, 0, 0, 0, ts.Location())
        lastDay := nextMonth.AddDate(0, 0, -1).Day()
        return ts.Day() == lastDay
    }
    return ts.Day() == monthlyDay
}

// hasTier checks if tier list contains a specific tier
func hasTier(tiers []Tier, tier Tier) bool {
    for _, t := range tiers {
        if t == tier {
            return true
        }
    }
    return false
}

// padInt pads an integer with leading zeros
func padInt(n, width int) string {
    s := ""
    for i := 0; i < width; i++ {
        digit := byte('0' + n%10)
        s = string(digit) + s
        n /= 10
    }
    return s
}

// ClassifyBackup classifies a single backup into its tiers
func ClassifyBackup(ts time.Time, policy GFSPolicy) []Tier {
    tiers := make([]Tier, 0, 4)

    // Every backup qualifies for daily
    tiers = append(tiers, TierDaily)

    // Weekly: correct day of week
    if int(ts.Weekday()) == policy.WeeklyDay {
        tiers = append(tiers, TierWeekly)
    }

    // Monthly: correct day of month
    if isMonthlyQualified(ts, policy.MonthlyDay) {
        tiers = append(tiers, TierMonthly)
    }

    // Yearly: January + monthly day
    if ts.Month() == time.January && isMonthlyQualified(ts, policy.MonthlyDay) {
        tiers = append(tiers, TierYearly)
    }

    return tiers
}
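A dry-run preview of what the policy would keep and delete, using the entry points above (`/backups` is a placeholder path):

```go
package main

import (
	"fmt"
	"log"

	"dbbackup/internal/retention"
)

func main() {
	policy := retention.DefaultGFSPolicy()
	policy.DryRun = true // preview only; nothing is deleted

	result, err := retention.ApplyGFSPolicy("/backups", policy)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("keep %d (daily %d, weekly %d, monthly %d, yearly %d), delete %d, free %d bytes\n",
		result.TotalKept, result.DailyKept, result.WeeklyKept,
		result.MonthlyKept, result.YearlyKept,
		len(result.ToDelete), result.SpaceFreed)
}
```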
192 internal/retention/gfs_test.go (new file)
@@ -0,0 +1,192 @@
package retention

import (
    "testing"
    "time"

    "dbbackup/internal/metadata"
)

func TestParseWeekday(t *testing.T) {
    tests := []struct {
        input    string
        expected int
    }{
        {"Sunday", 0},
        {"sunday", 0},
        {"Sun", 0},
        {"Monday", 1},
        {"mon", 1},
        {"Tuesday", 2},
        {"Wed", 3},
        {"Thursday", 4},
        {"Friday", 5},
        {"Saturday", 6},
        {"sat", 6},
        {"invalid", 0},
        {"", 0},
    }

    for _, tc := range tests {
        result := ParseWeekday(tc.input)
        if result != tc.expected {
            t.Errorf("ParseWeekday(%q) = %d, expected %d", tc.input, result, tc.expected)
        }
    }
}

func TestClassifyBackup(t *testing.T) {
    policy := GFSPolicy{
        WeeklyDay:  0,
        MonthlyDay: 1,
    }

    tests := []struct {
        name     string
        time     time.Time
        expected []Tier
    }{
        {
            name:     "Regular weekday",
            time:     time.Date(2024, 1, 15, 10, 0, 0, 0, time.UTC),
            expected: []Tier{TierDaily},
        },
        {
            name:     "Sunday weekly",
            time:     time.Date(2024, 1, 14, 10, 0, 0, 0, time.UTC),
            expected: []Tier{TierDaily, TierWeekly},
        },
        {
            name:     "First of month",
            time:     time.Date(2024, 2, 1, 10, 0, 0, 0, time.UTC),
            expected: []Tier{TierDaily, TierMonthly},
        },
        {
            name:     "First of January yearly",
            time:     time.Date(2024, 1, 1, 10, 0, 0, 0, time.UTC),
            expected: []Tier{TierDaily, TierMonthly, TierYearly},
        },
    }

    for _, tc := range tests {
        t.Run(tc.name, func(t *testing.T) {
            result := ClassifyBackup(tc.time, policy)
            if len(result) != len(tc.expected) {
                t.Errorf("ClassifyBackup() returned %d tiers, expected %d", len(result), len(tc.expected))
                return
            }
            for i, tier := range result {
                if tier != tc.expected[i] {
                    t.Errorf("tier[%d] = %v, expected %v", i, tier, tc.expected[i])
                }
            }
        })
    }
}

func TestApplyGFSPolicyToBackups(t *testing.T) {
    now := time.Now()
    backups := []*metadata.BackupMetadata{
        {BackupFile: "backup_day1.dump", Timestamp: now.AddDate(0, 0, -1), SizeBytes: 1000},
        {BackupFile: "backup_day2.dump", Timestamp: now.AddDate(0, 0, -2), SizeBytes: 1000},
        {BackupFile: "backup_day3.dump", Timestamp: now.AddDate(0, 0, -3), SizeBytes: 1000},
        {BackupFile: "backup_day4.dump", Timestamp: now.AddDate(0, 0, -4), SizeBytes: 1000},
        {BackupFile: "backup_day5.dump", Timestamp: now.AddDate(0, 0, -5), SizeBytes: 1000},
        {BackupFile: "backup_day6.dump", Timestamp: now.AddDate(0, 0, -6), SizeBytes: 1000},
        {BackupFile: "backup_day7.dump", Timestamp: now.AddDate(0, 0, -7), SizeBytes: 1000},
        {BackupFile: "backup_day8.dump", Timestamp: now.AddDate(0, 0, -8), SizeBytes: 1000},
        {BackupFile: "backup_day9.dump", Timestamp: now.AddDate(0, 0, -9), SizeBytes: 1000},
        {BackupFile: "backup_day10.dump", Timestamp: now.AddDate(0, 0, -10), SizeBytes: 1000},
    }

    policy := GFSPolicy{
        Enabled:    true,
        Daily:      5,
        Weekly:     2,
        Monthly:    1,
        Yearly:     1,
        WeeklyDay:  0,
        MonthlyDay: 1,
        DryRun:     true,
    }

    result, err := ApplyGFSPolicyToBackups(backups, policy)
    if err != nil {
        t.Fatalf("ApplyGFSPolicyToBackups() error = %v", err)
    }

    if result.TotalKept < policy.Daily {
        t.Errorf("TotalKept = %d, expected at least %d", result.TotalKept, policy.Daily)
    }

    if result.TotalBackups != len(backups) {
        t.Errorf("TotalBackups = %d, expected %d", result.TotalBackups, len(backups))
    }

    if len(result.ToKeep)+len(result.ToDelete) != result.TotalBackups {
        t.Errorf("ToKeep(%d) + ToDelete(%d) != TotalBackups(%d)",
            len(result.ToKeep), len(result.ToDelete), result.TotalBackups)
    }
}

func TestGFSPolicyWithEmptyBackups(t *testing.T) {
    policy := DefaultGFSPolicy()
    policy.DryRun = true

    result, err := ApplyGFSPolicyToBackups([]*metadata.BackupMetadata{}, policy)
    if err != nil {
        t.Fatalf("ApplyGFSPolicyToBackups() error = %v", err)
    }

    if result.TotalBackups != 0 {
        t.Errorf("TotalBackups = %d, expected 0", result.TotalBackups)
    }

    if result.TotalKept != 0 {
        t.Errorf("TotalKept = %d, expected 0", result.TotalKept)
    }
}

func TestDefaultGFSPolicy(t *testing.T) {
    policy := DefaultGFSPolicy()

    if !policy.Enabled {
        t.Error("DefaultGFSPolicy should be enabled")
    }

    if policy.Daily != 7 {
        t.Errorf("Daily = %d, expected 7", policy.Daily)
    }

    if policy.Weekly != 4 {
        t.Errorf("Weekly = %d, expected 4", policy.Weekly)
    }

    if policy.Monthly != 12 {
        t.Errorf("Monthly = %d, expected 12", policy.Monthly)
    }

    if policy.Yearly != 3 {
        t.Errorf("Yearly = %d, expected 3", policy.Yearly)
    }
}

func TestTierString(t *testing.T) {
    tests := []struct {
        tier     Tier
        expected string
    }{
        {TierDaily, "daily"},
        {TierWeekly, "weekly"},
        {TierMonthly, "monthly"},
        {TierYearly, "yearly"},
        {Tier(99), "unknown"},
    }

    for _, tc := range tests {
        result := tc.tier.String()
        if result != tc.expected {
            t.Errorf("Tier(%d).String() = %q, expected %q", tc.tier, result, tc.expected)
        }
    }
}