Compare commits

2 Commits

| Author | SHA1 | Date |
|--------|------|------|
|        | 222bdbef58 | |
|        | f7e9fa64f0 | |

94 PITR.md

@@ -584,6 +584,100 @@ Document your recovery procedure:

9. Create new base backup
```

## Large Database Support (600+ GB)

For databases larger than 600 GB, PITR is the **recommended approach** over a full dump/restore.

### Why PITR Works Better for Large DBs

| Approach | 600 GB Database | Recovery Time (RTO) |
|----------|-----------------|---------------------|
| Full pg_dump/restore | Hours to dump, hours to restore | 4-12+ hours |
| PITR (base + WAL) | Incremental WAL only | 30 min - 2 hours |

### Setup for Large Databases

**1. Enable WAL archiving with compression:**

```bash
dbbackup pitr enable --archive-dir /backups/wal_archive --compress
```

**2. Take ONE base backup weekly/monthly (use pg_basebackup):**

```bash
# For 600+ GB, use fast checkpoint to minimize impact
# Note: with -Ft, -D names a directory; pg_basebackup writes base.tar.gz inside it
pg_basebackup -D /backups/base_$(date +%Y%m%d).tar.gz \
  -Ft -z -P --checkpoint=fast --wal-method=none

# Duration: 2-6 hours for 600 GB, but only needed weekly/monthly
```
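
As an optional spot check between full restore tests, `pg_verifybackup` (PostgreSQL 13+) can validate a base backup against its manifest. A minimal sketch, assuming the layout produced by the command above; `--no-parse-wal` is needed because `--wal-method=none` keeps WAL out of the backup, and versions before 17 cannot read tar archives directly, hence the extraction step:

```bash
# Sketch only - adjust paths to match your base backup
mkdir -p /tmp/base_check
tar -xzf /backups/base_20260101.tar.gz/base.tar.gz -C /tmp/base_check
pg_verifybackup --no-parse-wal /tmp/base_check
rm -rf /tmp/base_check
```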

**3. WAL files archive continuously** (typically ~1-5 GB/hour), capturing every change.
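
Continuous archiving rides on PostgreSQL's standard `archive_mode`/`archive_command` machinery; the exact command `dbbackup pitr enable` installs is an assumption here, so check postgresql.conf. The server's own stats show whether the archiver is keeping up:

```bash
# Settings continuous archiving depends on (illustrative; set by `dbbackup pitr enable`):
#   wal_level = replica
#   archive_mode = on
#   archive_command = '...'   # copies each finished segment into /backups/wal_archive

# Check archiver progress and failures:
psql -c "SELECT archived_count, last_archived_wal, last_archived_time, failed_count FROM pg_stat_archiver;"
```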

**4. Recover to any point in time:**

```bash
dbbackup restore pitr \
  --base-backup /backups/base_20260101.tar.gz \
  --wal-archive /backups/wal_archive \
  --target-time "2026-01-13 14:30:00" \
  --target-dir /var/lib/postgresql/16/restored
```
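
For reference, the manual PostgreSQL 12+ equivalent of such a restore looks roughly like the following; this is a sketch of the standard recovery procedure, not necessarily what `dbbackup restore pitr` runs internally:

```bash
# Extract the base backup into the target data directory
mkdir -p /var/lib/postgresql/16/restored
tar -xzf /backups/base_20260101.tar.gz -C /var/lib/postgresql/16/restored

# Point recovery at the WAL archive and the target timestamp
cat >> /var/lib/postgresql/16/restored/postgresql.auto.conf <<'EOF'
restore_command = 'cp /backups/wal_archive/%f "%p"'
recovery_target_time = '2026-01-13 14:30:00'
recovery_target_action = 'promote'
EOF

# recovery.signal tells the server to enter targeted recovery on startup
touch /var/lib/postgresql/16/restored/recovery.signal
pg_ctl -D /var/lib/postgresql/16/restored start
```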

### PostgreSQL Optimizations for 600+ GB

| Setting | File | Purpose |
|---------|------|---------|
| `wal_compression = on` | postgresql.conf | 70-80% smaller WAL files |
| `max_wal_size = 4GB` | postgresql.conf | Reduce checkpoint frequency |
| `checkpoint_timeout = 30min` | postgresql.conf | Less frequent checkpoints |
| `archive_timeout = 300` | postgresql.conf | Force archive every 5 min |
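
All four settings are reloadable without a restart; one way to apply them, assuming a superuser session:

```bash
psql <<'SQL'
ALTER SYSTEM SET wal_compression = on;
ALTER SYSTEM SET max_wal_size = '4GB';
ALTER SYSTEM SET checkpoint_timeout = '30min';
ALTER SYSTEM SET archive_timeout = 300;
SELECT pg_reload_conf();
SQL
```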

### Recovery Optimizations

| Optimization | How | Benefit |
|--------------|-----|---------|
| Parallel recovery | PostgreSQL 15+ automatic | 2-4x faster WAL replay |
| NVMe/SSD for WAL | Hardware | 3-10x faster recovery |
| Separate WAL disk | Dedicated mount | Avoid I/O contention |
| `recovery_prefetch = on` | PostgreSQL 15+ | Faster page reads |
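
On PostgreSQL 15+, `recovery_prefetch` defaults to `try`; forcing it on for a restore is one line in the restored cluster's config before startup (path follows the earlier example):

```bash
echo "recovery_prefetch = on" >> /var/lib/postgresql/16/restored/postgresql.auto.conf
```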

### Storage Planning

| Component | Size Estimate | Retention |
|-----------|---------------|-----------|
| Base backup | ~200-400 GB compressed | 1-2 copies |
| WAL per day | 5-50 GB (depends on write volume) | 7-14 days |
| Total archive | 100-400 GB WAL + base | - |
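
Worked example under these estimates: a workload writing 20 GB of WAL per day with 14-day retention needs 20 × 14 = 280 GB for WAL, plus roughly 300 GB for one compressed base backup, so budget about 580 GB of archive storage.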

### RTO Estimates for Large Databases

| Database Size | Base Extraction | WAL Replay (1 week) | Total RTO |
|---------------|-----------------|---------------------|-----------|
| 200 GB | 15-30 min | 15-30 min | 30-60 min |
| 600 GB | 45-90 min | 30-60 min | 1-2.5 hours |
| 1 TB | 60-120 min | 45-90 min | 2-3.5 hours |
| 2 TB | 2-4 hours | 1-2 hours | 3-6 hours |

**Compare with a full restore:** restoring a 600 GB database from pg_dump takes 8-12+ hours.

### Best Practices for 600+ GB

1. **Weekly base backups** - monthly if storage is tight
2. **Test recovery monthly** - verify WAL chain integrity
3. **Monitor WAL lag** - alert if the archive falls behind (see the lag-check sketch after the health check below)
4. **Use streaming replication** - for HA, combine with PITR for DR
5. **Separate archive storage** - don't fill up the database disk

```bash
# Quick health check for large DB PITR setup
dbbackup pitr status --verbose

# Expected output:
# Base Backup: 2026-01-06 (7 days old) - OK
# WAL Archive: 847 files, 52 GB
# Recovery Window: 2026-01-06 to 2026-01-13 (7 days)
# Estimated RTO: ~90 minutes
```
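
For best practice 3 above, a minimal cron-able lag check; the archive path follows this guide's examples, and the 15-minute threshold is an assumption to tune against your `archive_timeout`:

```bash
#!/usr/bin/env bash
# Alert when the newest file in the WAL archive is older than 15 minutes
ARCHIVE=/backups/wal_archive   # assumed path from this guide
THRESHOLD=900                  # seconds; tune to your archive_timeout
latest=$(ls -t "$ARCHIVE" 2>/dev/null | head -n 1)
if [ -z "$latest" ]; then
  echo "WARNING: WAL archive is empty" >&2
  exit 1
fi
age=$(( $(date +%s) - $(stat -c %Y "$ARCHIVE/$latest") ))
if [ "$age" -gt "$THRESHOLD" ]; then
  echo "WARNING: newest archived WAL segment is ${age}s old" >&2
  exit 1
fi
```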

## Performance Considerations

### WAL Archive Size

@@ -4,8 +4,8 @@ This directory contains pre-compiled binaries for the DB Backup Tool across mult

 ## Build Information
 - **Version**: 3.42.10
-- **Build Time**: 2026-01-12_14:25:53_UTC
-- **Git Commit**: d19c065
+- **Build Time**: 2026-01-13_07:23:20_UTC
+- **Git Commit**: f153e61

 ## Recent Updates (v1.1.0)
 - ✅ Fixed TUI progress display with line-by-line output

18 cmd/dedup.go

@@ -185,15 +185,15 @@ Examples:

 // Flags
 var (
     dedupDir        string
     dedupIndexDB    string // Separate path for SQLite index (for NFS/CIFS support)
     dedupCompress   bool
     dedupEncrypt    bool
     dedupKey        string
     dedupName       string
     dedupDBType     string
     dedupDBName     string
     dedupDBHost     string
     dedupDecompress bool // Auto-decompress gzip input
 )

@@ -415,18 +415,18 @@ func (d *Diagnoser) diagnoseSQLScript(filePath string, compressed bool, result *DiagnoseResult) {

 // diagnoseClusterArchive analyzes a cluster tar.gz archive
 func (d *Diagnoser) diagnoseClusterArchive(filePath string, result *DiagnoseResult) {
     // Calculate dynamic timeout based on file size
-    // Assume minimum 50 MB/s throughput for compressed archive listing
-    // Minimum 5 minutes, scales with file size
+    // Large archives (100GB+) can take significant time to list
+    // Minimum 5 minutes, scales with file size, max 180 minutes for very large archives
     timeoutMinutes := 5
     if result.FileSize > 0 {
-        // 1 minute per 3 GB, minimum 5 minutes, max 60 minutes
+        // 1 minute per 2 GB, minimum 5 minutes, max 180 minutes
         sizeGB := result.FileSize / (1024 * 1024 * 1024)
-        estimatedMinutes := int(sizeGB/3) + 5
+        estimatedMinutes := int(sizeGB/2) + 5
         if estimatedMinutes > timeoutMinutes {
             timeoutMinutes = estimatedMinutes
         }
-        if timeoutMinutes > 60 {
-            timeoutMinutes = 60
+        if timeoutMinutes > 180 {
+            timeoutMinutes = 180
         }
     }

@@ -437,29 +437,98 @@ func (d *Diagnoser) diagnoseClusterArchive(filePath string, result *DiagnoseResult) {

     ctx, cancel := context.WithTimeout(context.Background(), time.Duration(timeoutMinutes)*time.Minute)
     defer cancel()

+    // Use streaming approach with pipes to avoid memory issues with large archives
     cmd := exec.CommandContext(ctx, "tar", "-tzf", filePath)
-    output, err := cmd.Output()
-    if err != nil {
-        // Check if it was a timeout
-        if ctx.Err() == context.DeadlineExceeded {
-            result.IsValid = false
-            result.Errors = append(result.Errors,
-                fmt.Sprintf("Verification timed out after %d minutes - archive is very large", timeoutMinutes),
-                "This does not necessarily mean the archive is corrupted",
-                "Manual verification: tar -tzf "+filePath+" | wc -l")
-            // Don't mark as corrupted on timeout
-            return
-        }
-        result.IsValid = false
-        result.IsCorrupted = true
-        result.Errors = append(result.Errors,
-            fmt.Sprintf("Tar archive is invalid or corrupted: %v", err),
-            "Run: tar -tzf "+filePath+" 2>&1 | tail -20")
+    stdout, pipeErr := cmd.StdoutPipe()
+    if pipeErr != nil {
+        // Pipe creation failed - not a corruption issue
+        result.Warnings = append(result.Warnings,
+            fmt.Sprintf("Cannot create pipe for verification: %v", pipeErr),
+            "Archive integrity cannot be verified but may still be valid")
         return
     }

-    // Parse tar listing
-    files := strings.Split(strings.TrimSpace(string(output)), "\n")
+    var stderrBuf bytes.Buffer
+    cmd.Stderr = &stderrBuf
+
+    if startErr := cmd.Start(); startErr != nil {
+        result.Warnings = append(result.Warnings,
+            fmt.Sprintf("Cannot start tar verification: %v", startErr),
+            "Archive integrity cannot be verified but may still be valid")
+        return
+    }
+
+    // Stream output line by line to avoid buffering entire listing in memory
+    scanner := bufio.NewScanner(stdout)
+    scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024) // Allow long paths
+
+    var files []string
+    fileCount := 0
+    for scanner.Scan() {
+        fileCount++
+        line := scanner.Text()
+        // Only store dump/metadata files, not every file
+        if strings.HasSuffix(line, ".dump") || strings.HasSuffix(line, ".sql.gz") ||
+            strings.HasSuffix(line, ".sql") || strings.HasSuffix(line, ".json") ||
+            strings.Contains(line, "globals") || strings.Contains(line, "manifest") ||
+            strings.Contains(line, "metadata") {
+            files = append(files, line)
+        }
+    }
+
+    scanErr := scanner.Err()
+    waitErr := cmd.Wait()
+    stderrOutput := stderrBuf.String()
+
+    // Handle errors - distinguish between actual corruption and resource/timeout issues
+    if waitErr != nil || scanErr != nil {
+        // Check if it was a timeout
+        if ctx.Err() == context.DeadlineExceeded {
+            result.Warnings = append(result.Warnings,
+                fmt.Sprintf("Verification timed out after %d minutes - archive is very large", timeoutMinutes),
+                "This does not necessarily mean the archive is corrupted",
+                "Manual verification: tar -tzf "+filePath+" | wc -l")
+            // Don't mark as corrupted or invalid on timeout - archive may be fine
+            if fileCount > 0 {
+                result.Details.TableCount = len(files)
+                result.Details.TableList = files
+            }
+            return
+        }
+
+        // Check for specific gzip/tar corruption indicators
+        if strings.Contains(stderrOutput, "unexpected end of file") ||
+            strings.Contains(stderrOutput, "Unexpected EOF") ||
+            strings.Contains(stderrOutput, "gzip: stdin: unexpected end of file") ||
+            strings.Contains(stderrOutput, "not in gzip format") ||
+            strings.Contains(stderrOutput, "invalid compressed data") {
+            // These indicate actual corruption
+            result.IsValid = false
+            result.IsCorrupted = true
+            result.Errors = append(result.Errors,
+                "Tar archive appears truncated or corrupted",
+                fmt.Sprintf("Error: %s", truncateString(stderrOutput, 200)),
+                "Run: tar -tzf "+filePath+" 2>&1 | tail -20")
+            return
+        }
+
+        // Other errors (signal killed, memory, etc.) - not necessarily corruption
+        // If we read some files successfully, the archive structure is likely OK
+        if fileCount > 0 {
+            result.Warnings = append(result.Warnings,
+                fmt.Sprintf("Verification incomplete (read %d files before error)", fileCount),
+                "Archive may still be valid - error could be due to system resources")
+            // Proceed with what we got
+        } else {
+            // Couldn't read anything - but don't mark as corrupted without clear evidence
+            result.Warnings = append(result.Warnings,
+                fmt.Sprintf("Cannot verify archive: %v", waitErr),
+                "Archive integrity is uncertain - proceed with caution or verify manually")
+            return
+        }
+    }
+
+    // Parse the collected file list
     var dumpFiles []string
     hasGlobals := false
     hasMetadata := false