Compare commits

...

5 Commits

Author SHA1 Message Date
20b7f1ec04 feat: v2.0 Sprint 2 - Auto-Upload to Cloud (Part 2)
- Add cloud configuration to Config struct
- Integrate automatic upload into backup flow
- Add --cloud-auto-upload flag to all backup commands
- Support environment variables for cloud credentials
- Upload both backup file and metadata to cloud
- Non-blocking: backup succeeds even if cloud upload fails

Usage:
  dbbackup backup single mydb --cloud-auto-upload \
    --cloud-bucket my-backups \
    --cloud-provider s3

Or via environment:
  export CLOUD_ENABLED=true
  export CLOUD_AUTO_UPLOAD=true
  export CLOUD_BUCKET=my-backups
  export AWS_ACCESS_KEY_ID=...
  export AWS_SECRET_ACCESS_KEY=...
  dbbackup backup single mydb

Credentials are read from AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
2025-11-25 19:44:52 +00:00
ae3ed1fea1 feat: v2.0 Sprint 2 - Cloud Storage Support (Part 1)
- Add AWS SDK v2 for S3 integration
- Implement cloud.Backend interface for multi-provider support
- Add full S3 backend with upload/download/list/delete
- Support MinIO and Backblaze B2 (S3-compatible)
- Implement progress tracking for uploads/downloads
- Add cloud commands: upload, download, list, delete

New commands:
- dbbackup cloud upload [files] - Upload backups to cloud
- dbbackup cloud download [remote] [local] - Download from cloud
- dbbackup cloud list [prefix] - List cloud backups
- dbbackup cloud delete [remote] - Delete from cloud

Configuration via flags or environment:
- --cloud-provider, --cloud-bucket, --cloud-region
- --cloud-endpoint (for MinIO/B2)
- --cloud-access-key, --cloud-secret-key
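
Example invocation combining these flags (endpoint, bucket and keys are placeholders):
  dbbackup cloud upload /backups/mydb.dump \
    --cloud-provider minio \
    --cloud-endpoint http://minio.local:9000 \
    --cloud-bucket my-backups \
    --cloud-access-key ... --cloud-secret-key ...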

New packages:
- internal/cloud - Cloud storage abstraction layer
2025-11-25 19:28:51 +00:00
ba5ae8ecb1 feat: v2.0 Sprint 1 - Backup Verification & Retention Policy
- Add SHA-256 checksum generation for all backups
- Implement verify-backup command for integrity validation
- Add JSON metadata format (.meta.json) with full backup info
- Create retention policy engine with smart cleanup
- Add cleanup command with dry-run and pattern matching
- Integrate metadata generation into backup flow
- Maintain backward compatibility with legacy .info files

New commands:
- dbbackup verify-backup [files] - Verify backup integrity
- dbbackup cleanup [dir] - Clean old backups with retention policy
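
Example invocations (paths and values are illustrative):
  dbbackup verify-backup /backups/mydb_20251125.dump --verbose
  dbbackup cleanup /backups --retention-days 30 --min-backups 5 --dry-run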

New packages:
- internal/metadata - Backup metadata management
- internal/verification - Checksum validation
- internal/retention - Retention policy engine
2025-11-25 19:18:07 +00:00
884c8292d6 chore: Add Docker build script 2025-11-25 18:38:49 +00:00
6e04db4a98 feat: Add Docker support for easy distribution
- Multi-stage Dockerfile for minimal image size (~119MB)
- Includes PostgreSQL, MySQL, MariaDB client tools
- Non-root user (UID 1000) for security
- Docker Compose examples for all use cases
- Complete Docker documentation (DOCKER.md)
- Kubernetes CronJob examples
- Support for Docker secrets
- Multi-platform build support

Docker makes deployment trivial:
- No dependency installation needed
- Consistent environment
- Easy CI/CD integration
- Kubernetes-ready
2025-11-25 18:33:34 +00:00
20 changed files with 3155 additions and 17 deletions

.dockerignore

@@ -0,0 +1,21 @@
.git
.gitignore
*.dump
*.dump.gz
*.sql
*.sql.gz
*.tar.gz
*.sha256
*.info
.dbbackup.conf
backups/
test_workspace/
bin/
dbbackup
dbbackup_*
*.log
.vscode/
.idea/
*.swp
*.swo
*~

DOCKER.md

@@ -0,0 +1,250 @@
# Docker Usage Guide
## Quick Start
### Build Image
```bash
docker build -t dbbackup:latest .
```
### Run Container
**PostgreSQL Backup:**
```bash
docker run --rm \
-v $(pwd)/backups:/backups \
-e PGHOST=your-postgres-host \
-e PGUSER=postgres \
-e PGPASSWORD=secret \
dbbackup:latest backup single mydb
```
**MySQL Backup:**
```bash
docker run --rm \
-v $(pwd)/backups:/backups \
-e MYSQL_HOST=your-mysql-host \
-e MYSQL_USER=root \
-e MYSQL_PWD=secret \
dbbackup:latest backup single mydb --db-type mysql
```
**Interactive Mode:**
```bash
docker run --rm -it \
-v $(pwd)/backups:/backups \
-e PGHOST=your-postgres-host \
-e PGUSER=postgres \
-e PGPASSWORD=secret \
dbbackup:latest interactive
```
## Docker Compose
### Start Test Environment
```bash
# Start test databases
docker-compose up -d postgres mysql
# Wait for databases to be ready
sleep 10
# Run backup
docker-compose run --rm postgres-backup
```
### Interactive Mode
```bash
docker-compose run --rm dbbackup-interactive
```
### Scheduled Backups with Cron
Add a crontab entry (for example via `crontab -e`):
```bash
# Daily PostgreSQL backup at 2 AM
0 2 * * * docker run --rm -v /backups:/backups -e PGHOST=postgres -e PGUSER=postgres -e PGPASSWORD=secret dbbackup:latest backup single production_db
```
## Environment Variables
**PostgreSQL:**
- `PGHOST` - Database host
- `PGPORT` - Database port (default: 5432)
- `PGUSER` - Database user
- `PGPASSWORD` - Database password
- `PGDATABASE` - Database name
**MySQL/MariaDB:**
- `MYSQL_HOST` - Database host
- `MYSQL_PORT` - Database port (default: 3306)
- `MYSQL_USER` - Database user
- `MYSQL_PWD` - Database password
- `MYSQL_DATABASE` - Database name
**General:**
- `BACKUP_DIR` - Backup directory (default: /backups)
- `COMPRESS_LEVEL` - Compression level 0-9 (default: 6)
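For example, the general variables can be combined with the database variables above (host name and password here are placeholders):
```bash
docker run --rm \
  -v $(pwd)/backups:/backups \
  -e PGHOST=your-postgres-host \
  -e PGUSER=postgres \
  -e PGPASSWORD=secret \
  -e BACKUP_DIR=/backups \
  -e COMPRESS_LEVEL=9 \
  dbbackup:latest backup single mydb
```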
## Volume Mounts
```bash
# Mount the backup directory and (optionally) a read-only config file
docker run --rm \
  -v /host/backups:/backups \
  -v /host/config/.dbbackup.conf:/home/dbbackup/.dbbackup.conf:ro \
  dbbackup:latest backup single mydb
```
## Docker Hub
Pull pre-built image (when published):
```bash
docker pull uuxo/dbbackup:latest
docker pull uuxo/dbbackup:1.0
```
## Kubernetes Deployment
**CronJob Example:**
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: postgres-backup
spec:
  schedule: "0 2 * * *"  # Daily at 2 AM
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: dbbackup
            image: dbbackup:latest
            args: ["backup", "single", "production_db"]
            env:
            - name: PGHOST
              value: "postgres.default.svc.cluster.local"
            - name: PGUSER
              value: "postgres"
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: password
            volumeMounts:
            - name: backups
              mountPath: /backups
          volumes:
          - name: backups
            persistentVolumeClaim:
              claimName: backup-storage
          restartPolicy: OnFailure
```
## Docker Secrets
**Using Docker Secrets:**
```bash
# Create secrets
echo "mypassword" | docker secret create db_password -
# Use in stack
docker stack deploy -c docker-stack.yml dbbackup
```
**docker-stack.yml:**
```yaml
version: '3.8'

services:
  backup:
    image: dbbackup:latest
    secrets:
      - db_password
    environment:
      - PGHOST=postgres
      - PGUSER=postgres
      - PGPASSWORD_FILE=/run/secrets/db_password
    command: backup single mydb
    volumes:
      - backups:/backups

secrets:
  db_password:
    external: true

volumes:
  backups:
```
## Image Size
**Multi-stage build results:**
- Builder stage: ~500MB (Go + dependencies)
- Final image: ~100MB (Alpine + clients)
- Binary only: ~15MB
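To check the size of a locally built image (illustrative):
```bash
docker image ls dbbackup:latest
```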
## Security
**Non-root user:**
- Runs as UID 1000 (dbbackup user)
- No privileged operations needed
- Read-only config mount recommended
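To confirm the container really runs unprivileged, you can override the entrypoint and check the user (`id` is provided by the Alpine/BusyBox base image; the output shown is the expected result, not a captured log):
```bash
docker run --rm --entrypoint id dbbackup:latest
# expected: uid=1000(dbbackup) gid=1000(dbbackup)
```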
**Network:**
```bash
# Use custom network
docker network create dbnet
docker run --rm \
--network dbnet \
-v $(pwd)/backups:/backups \
dbbackup:latest backup single mydb
```
## Troubleshooting
**Check logs:**
```bash
docker logs dbbackup-postgres
```
**Debug mode:**
```bash
docker run --rm -it \
-v $(pwd)/backups:/backups \
dbbackup:latest backup single mydb --debug
```
**Shell access:**
```bash
docker run --rm -it --entrypoint /bin/sh dbbackup:latest
```
## Building for Multiple Platforms
```bash
# Enable buildx
docker buildx create --use
# Build multi-arch
docker buildx build \
--platform linux/amd64,linux/arm64,linux/arm/v7 \
-t uuxo/dbbackup:latest \
--push .
```
## Registry Push
```bash
# Tag for registry
docker tag dbbackup:latest git.uuxo.net/uuxo/dbbackup:latest
docker tag dbbackup:latest git.uuxo.net/uuxo/dbbackup:1.0
# Push to private registry
docker push git.uuxo.net/uuxo/dbbackup:latest
docker push git.uuxo.net/uuxo/dbbackup:1.0
```

Dockerfile

@@ -0,0 +1,58 @@
# Multi-stage build for minimal image size
FROM golang:1.24-alpine AS builder
# Install build dependencies
RUN apk add --no-cache git make
WORKDIR /build
# Copy go mod files
COPY go.mod go.sum ./
RUN go mod download
# Copy source code
COPY . .
# Build binary
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags="-w -s" -o dbbackup .
# Final stage - minimal runtime image
FROM alpine:3.19
# Install database client tools
RUN apk add --no-cache \
postgresql-client \
mysql-client \
mariadb-client \
pigz \
pv \
ca-certificates \
tzdata
# Create non-root user
RUN addgroup -g 1000 dbbackup && \
adduser -D -u 1000 -G dbbackup dbbackup
# Copy binary from builder
COPY --from=builder /build/dbbackup /usr/local/bin/dbbackup
RUN chmod +x /usr/local/bin/dbbackup
# Create backup directory
RUN mkdir -p /backups && chown dbbackup:dbbackup /backups
# Set working directory
WORKDIR /backups
# Switch to non-root user
USER dbbackup
# Set entrypoint
ENTRYPOINT ["/usr/local/bin/dbbackup"]
# Default command shows help
CMD ["--help"]
# Labels
LABEL maintainer="UUXO"
LABEL version="1.0"
LABEL description="Professional database backup tool for PostgreSQL, MySQL, and MariaDB"

README.md

@@ -16,6 +16,31 @@ Professional database backup and restore utility for PostgreSQL, MySQL, and Mari
## Installation
### Docker (Recommended)
**Pull from registry:**
```bash
docker pull git.uuxo.net/uuxo/dbbackup:latest
```
**Quick start:**
```bash
# PostgreSQL backup
docker run --rm \
-v $(pwd)/backups:/backups \
-e PGHOST=your-host \
-e PGUSER=postgres \
-e PGPASSWORD=secret \
git.uuxo.net/uuxo/dbbackup:latest backup single mydb
# Interactive mode
docker run --rm -it \
-v $(pwd)/backups:/backups \
git.uuxo.net/uuxo/dbbackup:latest interactive
```
See [DOCKER.md](DOCKER.md) for complete Docker documentation.
### Download Pre-compiled Binary
Linux x86_64:
@@ -353,6 +378,111 @@ Restore entire PostgreSQL cluster from archive:
./dbbackup restore cluster ARCHIVE_FILE [OPTIONS]
```
### Verification & Maintenance
#### Verify Backup Integrity
Verify backup files using SHA-256 checksums and metadata validation:
```bash
./dbbackup verify-backup BACKUP_FILE [OPTIONS]
```
**Options:**
- `--quick` - Quick verification (size check only, no checksum calculation)
- `--verbose` - Show detailed information about each backup
**Examples:**
```bash
# Verify single backup (full SHA-256 check)
./dbbackup verify-backup /backups/mydb_20251125.dump
# Verify all backups in directory
./dbbackup verify-backup /backups/*.dump --verbose
# Quick verification (fast, size check only)
./dbbackup verify-backup /backups/*.dump --quick
```
**Output:**
```
Verifying 3 backup file(s)...
📁 mydb_20251125.dump
✅ VALID
Size: 2.5 GiB
SHA-256: 7e166d4cb7276e1310d76922f45eda0333a6aeac...
Database: mydb (postgresql)
Created: 2025-11-25T19:00:00Z
──────────────────────────────────────────────────
Total: 3 backups
✅ Valid: 3
```
#### Cleanup Old Backups
Automatically remove old backups based on retention policy:
```bash
./dbbackup cleanup BACKUP_DIRECTORY [OPTIONS]
```
**Options:**
- `--retention-days INT` - Delete backups older than N days (default: 30)
- `--min-backups INT` - Always keep at least N most recent backups (default: 5)
- `--dry-run` - Preview what would be deleted without actually deleting
- `--pattern STRING` - Only clean backups matching pattern (e.g., "mydb_*.dump")
**Retention Policy:**
The cleanup command uses a safe retention policy:
1. Backups older than `--retention-days` are eligible for deletion
2. At least `--min-backups` most recent backups are always kept
3. Both conditions must be met for a backup to be deleted
**Examples:**
```bash
# Clean up backups older than 30 days (keep at least 5)
./dbbackup cleanup /backups --retention-days 30 --min-backups 5
# Preview what would be deleted
./dbbackup cleanup /backups --retention-days 7 --dry-run
# Clean specific database backups
./dbbackup cleanup /backups --pattern "mydb_*.dump"
# Aggressive cleanup (keep only 3 most recent)
./dbbackup cleanup /backups --retention-days 1 --min-backups 3
```
**Output:**
```
🗑️ Cleanup Policy:
Directory: /backups
Retention: 30 days
Min backups: 5
📊 Results:
Total backups: 12
Eligible for deletion: 7
✅ Deleted 7 backup(s):
- old_db_20251001.dump
- old_db_20251002.dump
...
📦 Kept 5 backup(s)
💾 Space freed: 15.2 GiB
──────────────────────────────────────────────────
✅ Cleanup completed successfully
```
**Options:**
- `--confirm` - Confirm and execute restore (required for safety)

ROADMAP.md

@@ -0,0 +1,523 @@
# dbbackup Version 2.0 Roadmap
## Current Status: v1.1 (Production Ready)
- ✅ 24/24 automated tests passing (100%)
- ✅ PostgreSQL, MySQL, MariaDB support
- ✅ Interactive TUI + CLI
- ✅ Cluster backup/restore
- ✅ Docker support
- ✅ Cross-platform binaries
---
## Version 2.0 Vision: Enterprise-Grade Features
Transform dbbackup into an enterprise-ready backup solution with cloud storage, incremental backups, PITR, and encryption.
**Target Release:** Q2 2026 (3-4 months)
---
## Priority Matrix
```
HIGH IMPACT
┌────────────────────┼────────────────────┐
│ │ │
│ Cloud Storage ⭐ │ Incremental ⭐⭐⭐ │
│ Verification │ PITR ⭐⭐⭐ │
│ Retention │ Encryption ⭐⭐ │
LOW │ │ │ HIGH
EFFORT ─────────────────┼──────────────────── EFFORT
│ │ │
│ Metrics │ Web UI (optional) │
│ Remote Restore │ Replication Slots │
│ │ │
└────────────────────┼────────────────────┘
LOW IMPACT
```
---
## Development Phases
### Phase 1: Foundation (Weeks 1-4)
**Sprint 1: Verification & Retention (2 weeks)**
**Goals:**
- Backup integrity verification with SHA-256 checksums
- Automated retention policy enforcement
- Structured backup metadata
**Features:**
- ✅ Generate SHA-256 checksums during backup
- ✅ Verify backups before/after restore
- ✅ Automatic cleanup of old backups
- ✅ Retention policy: days + minimum count
- ✅ Backup metadata in JSON format
**Deliverables:**
```bash
# New commands
dbbackup verify backup.dump
dbbackup cleanup --retention-days 30 --min-backups 5
# Metadata format
{
"version": "2.0",
"timestamp": "2026-01-15T10:30:00Z",
"database": "production",
"size_bytes": 1073741824,
"sha256": "abc123...",
"db_version": "PostgreSQL 15.3",
"compression": "gzip-9"
}
```
**Implementation:**
- `internal/verification/` - Checksum calculation and validation
- `internal/retention/` - Policy enforcement
- `internal/metadata/` - Backup metadata management
---
**Sprint 2: Cloud Storage (2 weeks)**
**Goals:**
- Upload backups to cloud storage
- Support multiple cloud providers
- Download and restore from cloud
**Providers:**
- ✅ AWS S3
- ✅ MinIO (S3-compatible)
- ✅ Backblaze B2
- ✅ Azure Blob Storage (optional)
- ✅ Google Cloud Storage (optional)
**Configuration:**
```toml
[cloud]
enabled = true
provider = "s3" # s3, minio, azure, gcs, b2
auto_upload = true
[cloud.s3]
bucket = "db-backups"
region = "us-east-1"
endpoint = "s3.amazonaws.com" # Custom for MinIO
access_key = "..." # Or use IAM role
secret_key = "..."
```
**New Commands:**
```bash
# Upload existing backup
dbbackup cloud upload backup.dump
# List cloud backups
dbbackup cloud list
# Download from cloud
dbbackup cloud download backup_id
# Restore directly from cloud
dbbackup restore single s3://bucket/backup.dump --target mydb
```
**Dependencies:**
```go
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
"cloud.google.com/go/storage"
```
---
### Phase 2: Advanced Backup (Weeks 5-10)
**Sprint 3: Incremental Backups (3 weeks)**
**Goals:**
- Reduce backup time and storage
- File-level incremental for PostgreSQL
- Binary log incremental for MySQL
**PostgreSQL Strategy:**
```
Full Backup (Base)
├─ Incremental 1 (changed files since base)
├─ Incremental 2 (changed files since inc1)
└─ Incremental 3 (changed files since inc2)
```
**MySQL Strategy:**
```
Full Backup
├─ Binary Log 1 (changes since full)
├─ Binary Log 2
└─ Binary Log 3
```
**Implementation:**
```bash
# Create base backup
dbbackup backup single mydb --mode full
# Create incremental
dbbackup backup single mydb --mode incremental
# Restore (automatically applies incrementals)
dbbackup restore single backup.dump --apply-incrementals
```
**File Structure:**
```
backups/
├── mydb_full_20260115.dump
├── mydb_full_20260115.meta
├── mydb_incr_20260116.dump # Contains only changes
├── mydb_incr_20260116.meta # Points to base: mydb_full_20260115
└── mydb_incr_20260117.dump
```
---
**Sprint 4: Security & Encryption (2 weeks)**
**Goals:**
- Encrypt backups at rest
- Secure key management
- Encrypted cloud uploads
**Features:**
- ✅ AES-256-GCM encryption
- ✅ Argon2 key derivation
- ✅ Multiple key sources (file, env, vault)
- ✅ Encrypted metadata
**Configuration:**
```toml
[encryption]
enabled = true
algorithm = "aes-256-gcm"
key_file = "/etc/dbbackup/encryption.key"
# Or use environment variable
# DBBACKUP_ENCRYPTION_KEY=base64key...
```
**Commands:**
```bash
# Generate encryption key
dbbackup keys generate
# Encrypt existing backup
dbbackup encrypt backup.dump
# Decrypt backup
dbbackup decrypt backup.dump.enc
# Automatic encryption
dbbackup backup single mydb --encrypt
```
**File Format:**
```
+------------------+
| Encryption Header| (IV, algorithm, key ID)
+------------------+
| Encrypted Data | (AES-256-GCM)
+------------------+
| Auth Tag | (HMAC for integrity)
+------------------+
```
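Putting these pieces together, a possible workflow once Sprint 4 lands (file names are illustrative; the commands are the ones listed above):
```bash
# generate a key and reference it from the [encryption] config
dbbackup keys generate
# back up with encryption enabled
dbbackup backup single mydb --encrypt
# decrypt later when a restore is needed
dbbackup decrypt /backups/mydb_20260115.dump.enc
```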
---
**Sprint 5: Point-in-Time Recovery - PITR (4 weeks)**
**Goals:**
- Restore to any point in time
- WAL archiving for PostgreSQL
- Binary log archiving for MySQL
**PostgreSQL Implementation:**
```toml
[pitr]
enabled = true
wal_archive_dir = "/backups/wal_archive"
wal_retention_days = 7
# PostgreSQL config (auto-configured by dbbackup)
# archive_mode = on
# archive_command = '/usr/local/bin/dbbackup archive-wal %p %f'
```
**Commands:**
```bash
# Enable PITR
dbbackup pitr enable
# Archive WAL manually
dbbackup archive-wal /var/lib/postgresql/pg_wal/000000010000000000000001
# Restore to point-in-time
dbbackup restore single backup.dump \
--target-time "2026-01-15 14:30:00" \
--target mydb
# Show available restore points
dbbackup pitr timeline
```
**WAL Archive Structure:**
```
wal_archive/
├── 000000010000000000000001
├── 000000010000000000000002
├── 000000010000000000000003
└── timeline.json
```
**MySQL Implementation:**
```bash
# Archive binary logs
dbbackup binlog archive --start-datetime "2026-01-15 00:00:00"
# PITR restore
dbbackup restore single backup.sql \
--target-time "2026-01-15 14:30:00" \
--apply-binlogs
```
---
### Phase 3: Enterprise Features (Weeks 11-16)
**Sprint 6: Observability & Integration (3 weeks)**
**Features:**
1. **Prometheus Metrics**
```go
# Exposed metrics
dbbackup_backup_duration_seconds
dbbackup_backup_size_bytes
dbbackup_backup_success_total
dbbackup_restore_duration_seconds
dbbackup_last_backup_timestamp
dbbackup_cloud_upload_duration_seconds
```
**Endpoint:**
```bash
# Start metrics server
dbbackup metrics serve --port 9090
# Scrape endpoint
curl http://localhost:9090/metrics
```
2. **Remote Restore**
```bash
# Restore to remote server
dbbackup restore single backup.dump \
--remote-host db-replica-01 \
--remote-user postgres \
--remote-port 22 \
--confirm
```
3. **Replication Slots (PostgreSQL)**
```bash
# Create replication slot for continuous WAL streaming
dbbackup replication create-slot backup_slot
# Stream WALs via replication
dbbackup replication stream backup_slot
```
4. **Webhook Notifications**
```toml
[notifications]
enabled = true
webhook_url = "https://slack.com/webhook/..."
notify_on = ["backup_complete", "backup_failed", "restore_complete"]
```
---
## Technical Architecture
### New Directory Structure
```
internal/
├── cloud/ # Cloud storage backends
│ ├── interface.go
│ ├── s3.go
│ ├── azure.go
│ └── gcs.go
├── encryption/ # Encryption layer
│ ├── aes.go
│ ├── keys.go
│ └── vault.go
├── incremental/ # Incremental backup engine
│ ├── postgres.go
│ └── mysql.go
├── pitr/ # Point-in-time recovery
│ ├── wal.go
│ ├── binlog.go
│ └── timeline.go
├── verification/ # Backup verification
│ ├── checksum.go
│ └── validate.go
├── retention/ # Retention policy
│ └── cleanup.go
├── metrics/ # Prometheus metrics
│ └── exporter.go
└── replication/ # Replication management
└── slots.go
```
### Required Dependencies
```go
// Cloud storage
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
"cloud.google.com/go/storage"
// Encryption
"crypto/aes"
"crypto/cipher"
"golang.org/x/crypto/argon2"
// Metrics
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
// PostgreSQL replication
"github.com/jackc/pgx/v5/pgconn"
// Fast file scanning for incrementals
"github.com/karrick/godirwalk"
```
---
## Testing Strategy
### v2.0 Test Coverage Goals
- Minimum 90% code coverage
- Integration tests for all cloud providers
- End-to-end PITR scenarios
- Performance benchmarks for incremental backups
- Encryption/decryption validation
- Multi-database restore tests
### New Test Suites
```bash
# Cloud storage tests
./run_qa_tests.sh --suite cloud
# Incremental backup tests
./run_qa_tests.sh --suite incremental
# PITR tests
./run_qa_tests.sh --suite pitr
# Encryption tests
./run_qa_tests.sh --suite encryption
# Full v2.0 suite
./run_qa_tests.sh --suite v2
```
---
## Migration Path
### v1.x → v2.0 Compatibility
- ✅ All v1.x backups readable in v2.0
- ✅ Configuration auto-migration
- ✅ Metadata format upgrade
- ✅ Backward-compatible commands
### Deprecation Timeline
- v2.0: Warning for old config format
- v2.1: Full migration required
- v3.0: Old format no longer supported
---
## Documentation Updates
### New Docs
- `CLOUD.md` - Cloud storage configuration
- `INCREMENTAL.md` - Incremental backup guide
- `PITR.md` - Point-in-time recovery
- `ENCRYPTION.md` - Encryption setup
- `METRICS.md` - Prometheus integration
---
## Success Metrics
### v2.0 Goals
- 🎯 95%+ test coverage
- 🎯 Support 1TB+ databases with incrementals
- 🎯 PITR with <5 minute granularity
- 🎯 Cloud upload/download >100MB/s
- 🎯 Encryption overhead <10%
- 🎯 Full compatibility with pgBackRest for PostgreSQL
- 🎯 Industry-leading MySQL PITR solution
---
## Release Schedule
- **v2.0-alpha** (End Sprint 3): Cloud + Verification
- **v2.0-beta** (End Sprint 5): + Incremental + PITR
- **v2.0-rc1** (End Sprint 6): + Enterprise features
- **v2.0 GA** (Q2 2026): Production release
---
## What Makes v2.0 Unique
After v2.0, dbbackup will be:
- **Only multi-database tool** with full PITR support
- **Best-in-class UX** (TUI + CLI + Docker + K8s)
- **Feature parity** with pgBackRest (PostgreSQL)
- **Superior to mysqldump** with incremental + PITR
- **Cloud-native** with multi-provider support
- **Enterprise-ready** with encryption + metrics
- **Zero-config** for 80% of use cases
---
## Contributing
Want to contribute to v2.0? Check out:
- [CONTRIBUTING.md](CONTRIBUTING.md)
- [Good First Issues](https://git.uuxo.net/uuxo/dbbackup/issues?labels=good-first-issue)
- [v2.0 Milestone](https://git.uuxo.net/uuxo/dbbackup/milestone/2)
---
## Questions?
Open an issue or start a discussion:
- Issues: https://git.uuxo.net/uuxo/dbbackup/issues
- Discussions: https://git.uuxo.net/uuxo/dbbackup/discussions
---
**Next Step:** Sprint 1 - Backup Verification & Retention (January 2026)

build_docker.sh

@@ -0,0 +1,38 @@
#!/bin/bash
# Build and push Docker images
set -e
VERSION="1.1"
REGISTRY="git.uuxo.net/uuxo"
IMAGE_NAME="dbbackup"
echo "=== Building Docker Image ==="
echo "Version: $VERSION"
echo "Registry: $REGISTRY"
echo ""
# Build image
echo "Building image..."
docker build -t ${IMAGE_NAME}:${VERSION} -t ${IMAGE_NAME}:latest .
# Tag for registry
echo "Tagging for registry..."
docker tag ${IMAGE_NAME}:${VERSION} ${REGISTRY}/${IMAGE_NAME}:${VERSION}
docker tag ${IMAGE_NAME}:latest ${REGISTRY}/${IMAGE_NAME}:latest
# Show images
echo ""
echo "Images built:"
docker images ${IMAGE_NAME}
echo ""
echo "✅ Build complete!"
echo ""
echo "To push to registry:"
echo " docker push ${REGISTRY}/${IMAGE_NAME}:${VERSION}"
echo " docker push ${REGISTRY}/${IMAGE_NAME}:latest"
echo ""
echo "To test locally:"
echo " docker run --rm ${IMAGE_NAME}:latest --version"
echo " docker run --rm -it ${IMAGE_NAME}:latest interactive"


@@ -90,6 +90,57 @@ func init() {
backupCmd.AddCommand(singleCmd)
backupCmd.AddCommand(sampleCmd)
// Cloud storage flags for all backup commands
for _, cmd := range []*cobra.Command{clusterCmd, singleCmd, sampleCmd} {
cmd.Flags().Bool("cloud-auto-upload", false, "Automatically upload backup to cloud after completion")
cmd.Flags().String("cloud-provider", "", "Cloud provider (s3, minio, b2)")
cmd.Flags().String("cloud-bucket", "", "Cloud bucket name")
cmd.Flags().String("cloud-region", "us-east-1", "Cloud region")
cmd.Flags().String("cloud-endpoint", "", "Cloud endpoint (for MinIO/B2)")
cmd.Flags().String("cloud-prefix", "", "Cloud key prefix")
// Add PreRunE to update config from flags
originalPreRun := cmd.PreRunE
cmd.PreRunE = func(c *cobra.Command, args []string) error {
// Call original PreRunE if exists
if originalPreRun != nil {
if err := originalPreRun(c, args); err != nil {
return err
}
}
// Update cloud config from flags
if c.Flags().Changed("cloud-auto-upload") {
if autoUpload, _ := c.Flags().GetBool("cloud-auto-upload"); autoUpload {
cfg.CloudEnabled = true
cfg.CloudAutoUpload = true
}
}
if c.Flags().Changed("cloud-provider") {
cfg.CloudProvider, _ = c.Flags().GetString("cloud-provider")
}
if c.Flags().Changed("cloud-bucket") {
cfg.CloudBucket, _ = c.Flags().GetString("cloud-bucket")
}
if c.Flags().Changed("cloud-region") {
cfg.CloudRegion, _ = c.Flags().GetString("cloud-region")
}
if c.Flags().Changed("cloud-endpoint") {
cfg.CloudEndpoint, _ = c.Flags().GetString("cloud-endpoint")
}
if c.Flags().Changed("cloud-prefix") {
cfg.CloudPrefix, _ = c.Flags().GetString("cloud-prefix")
}
return nil
}
}
// Sample backup flags - use local variables to avoid cfg access during init
var sampleStrategy string
var sampleValue int

cmd/cleanup.go

@@ -0,0 +1,152 @@
package cmd
import (
"fmt"
"os"
"path/filepath"
"strings"
"dbbackup/internal/metadata"
"dbbackup/internal/retention"
"github.com/spf13/cobra"
)
var cleanupCmd = &cobra.Command{
Use: "cleanup [backup-directory]",
Short: "Clean up old backups based on retention policy",
Long: `Remove old backup files based on retention policy while maintaining minimum backup count.
The retention policy ensures:
1. Backups older than --retention-days are eligible for deletion
2. At least --min-backups most recent backups are always kept
3. Both conditions must be met for deletion
Examples:
# Clean up backups older than 30 days (keep at least 5)
dbbackup cleanup /backups --retention-days 30 --min-backups 5
# Dry run to see what would be deleted
dbbackup cleanup /backups --retention-days 7 --dry-run
# Clean up specific database backups only
dbbackup cleanup /backups --pattern "mydb_*.dump"
# Aggressive cleanup (keep only 3 most recent)
dbbackup cleanup /backups --retention-days 1 --min-backups 3`,
Args: cobra.ExactArgs(1),
RunE: runCleanup,
}
var (
retentionDays int
minBackups int
dryRun bool
cleanupPattern string
)
func init() {
rootCmd.AddCommand(cleanupCmd)
cleanupCmd.Flags().IntVar(&retentionDays, "retention-days", 30, "Delete backups older than this many days")
cleanupCmd.Flags().IntVar(&minBackups, "min-backups", 5, "Always keep at least this many backups")
cleanupCmd.Flags().BoolVar(&dryRun, "dry-run", false, "Show what would be deleted without actually deleting")
cleanupCmd.Flags().StringVar(&cleanupPattern, "pattern", "", "Only clean up backups matching this pattern (e.g., 'mydb_*.dump')")
}
func runCleanup(cmd *cobra.Command, args []string) error {
backupDir := args[0]
// Validate directory exists
if !dirExists(backupDir) {
return fmt.Errorf("backup directory does not exist: %s", backupDir)
}
// Create retention policy
policy := retention.Policy{
RetentionDays: retentionDays,
MinBackups: minBackups,
DryRun: dryRun,
}
fmt.Printf("🗑️ Cleanup Policy:\n")
fmt.Printf(" Directory: %s\n", backupDir)
fmt.Printf(" Retention: %d days\n", policy.RetentionDays)
fmt.Printf(" Min backups: %d\n", policy.MinBackups)
if cleanupPattern != "" {
fmt.Printf(" Pattern: %s\n", cleanupPattern)
}
if dryRun {
fmt.Printf(" Mode: DRY RUN (no files will be deleted)\n")
}
fmt.Println()
var result *retention.CleanupResult
var err error
// Apply policy
if cleanupPattern != "" {
result, err = retention.CleanupByPattern(backupDir, cleanupPattern, policy)
} else {
result, err = retention.ApplyPolicy(backupDir, policy)
}
if err != nil {
return fmt.Errorf("cleanup failed: %w", err)
}
// Display results
fmt.Printf("📊 Results:\n")
fmt.Printf(" Total backups: %d\n", result.TotalBackups)
fmt.Printf(" Eligible for deletion: %d\n", result.EligibleForDeletion)
if len(result.Deleted) > 0 {
fmt.Printf("\n")
if dryRun {
fmt.Printf("🔍 Would delete %d backup(s):\n", len(result.Deleted))
} else {
fmt.Printf("✅ Deleted %d backup(s):\n", len(result.Deleted))
}
for _, file := range result.Deleted {
fmt.Printf(" - %s\n", filepath.Base(file))
}
}
if len(result.Kept) > 0 && len(result.Kept) <= 10 {
fmt.Printf("\n📦 Kept %d backup(s):\n", len(result.Kept))
for _, file := range result.Kept {
fmt.Printf(" - %s\n", filepath.Base(file))
}
} else if len(result.Kept) > 10 {
fmt.Printf("\n📦 Kept %d backup(s)\n", len(result.Kept))
}
if !dryRun && result.SpaceFreed > 0 {
fmt.Printf("\n💾 Space freed: %s\n", metadata.FormatSize(result.SpaceFreed))
}
if len(result.Errors) > 0 {
fmt.Printf("\n⚠ Errors:\n")
for _, err := range result.Errors {
fmt.Printf(" - %v\n", err)
}
}
fmt.Println(strings.Repeat("─", 50))
if dryRun {
fmt.Println("✅ Dry run completed (no files were deleted)")
} else if len(result.Deleted) > 0 {
fmt.Println("✅ Cleanup completed successfully")
} else {
fmt.Println(" No backups eligible for deletion")
}
return nil
}
func dirExists(path string) bool {
info, err := os.Stat(path)
if err != nil {
return false
}
return info.IsDir()
}

cmd/cloud.go

@@ -0,0 +1,394 @@
package cmd
import (
"context"
"fmt"
"os"
"path/filepath"
"strings"
"time"
"dbbackup/internal/cloud"
"github.com/spf13/cobra"
)
var cloudCmd = &cobra.Command{
Use: "cloud",
Short: "Cloud storage operations",
Long: `Manage backups in cloud storage (S3, MinIO, Backblaze B2).
Supports:
- AWS S3
- MinIO (S3-compatible)
- Backblaze B2 (S3-compatible)
- Any S3-compatible storage
Configuration via flags or environment variables:
--cloud-provider DBBACKUP_CLOUD_PROVIDER
--cloud-bucket DBBACKUP_CLOUD_BUCKET
--cloud-region DBBACKUP_CLOUD_REGION
--cloud-endpoint DBBACKUP_CLOUD_ENDPOINT
--cloud-access-key DBBACKUP_CLOUD_ACCESS_KEY (or AWS_ACCESS_KEY_ID)
--cloud-secret-key DBBACKUP_CLOUD_SECRET_KEY (or AWS_SECRET_ACCESS_KEY)`,
}
var cloudUploadCmd = &cobra.Command{
Use: "upload [backup-file]",
Short: "Upload backup to cloud storage",
Long: `Upload one or more backup files to cloud storage.
Examples:
# Upload single backup
dbbackup cloud upload /backups/mydb.dump
# Upload with progress
dbbackup cloud upload /backups/mydb.dump --verbose
# Upload multiple files
dbbackup cloud upload /backups/*.dump`,
Args: cobra.MinimumNArgs(1),
RunE: runCloudUpload,
}
var cloudDownloadCmd = &cobra.Command{
Use: "download [remote-file] [local-path]",
Short: "Download backup from cloud storage",
Long: `Download a backup file from cloud storage.
Examples:
# Download to current directory
dbbackup cloud download mydb.dump .
# Download to specific path
dbbackup cloud download mydb.dump /backups/mydb.dump
# Download with progress
dbbackup cloud download mydb.dump . --verbose`,
Args: cobra.ExactArgs(2),
RunE: runCloudDownload,
}
var cloudListCmd = &cobra.Command{
Use: "list [prefix]",
Short: "List backups in cloud storage",
Long: `List all backup files in cloud storage.
Examples:
# List all backups
dbbackup cloud list
# List backups with prefix
dbbackup cloud list mydb_
# List with detailed information
dbbackup cloud list --verbose`,
Args: cobra.MaximumNArgs(1),
RunE: runCloudList,
}
var cloudDeleteCmd = &cobra.Command{
Use: "delete [remote-file]",
Short: "Delete backup from cloud storage",
Long: `Delete a backup file from cloud storage.
Examples:
# Delete single backup
dbbackup cloud delete mydb_20251125.dump
# Delete with confirmation
dbbackup cloud delete mydb.dump --confirm`,
Args: cobra.ExactArgs(1),
RunE: runCloudDelete,
}
var (
cloudProvider string
cloudBucket string
cloudRegion string
cloudEndpoint string
cloudAccessKey string
cloudSecretKey string
cloudPrefix string
cloudVerbose bool
cloudConfirm bool
)
func init() {
rootCmd.AddCommand(cloudCmd)
cloudCmd.AddCommand(cloudUploadCmd, cloudDownloadCmd, cloudListCmd, cloudDeleteCmd)
// Cloud configuration flags
for _, cmd := range []*cobra.Command{cloudUploadCmd, cloudDownloadCmd, cloudListCmd, cloudDeleteCmd} {
cmd.Flags().StringVar(&cloudProvider, "cloud-provider", getEnv("DBBACKUP_CLOUD_PROVIDER", "s3"), "Cloud provider (s3, minio, b2)")
cmd.Flags().StringVar(&cloudBucket, "cloud-bucket", getEnv("DBBACKUP_CLOUD_BUCKET", ""), "Bucket name")
cmd.Flags().StringVar(&cloudRegion, "cloud-region", getEnv("DBBACKUP_CLOUD_REGION", "us-east-1"), "Region")
cmd.Flags().StringVar(&cloudEndpoint, "cloud-endpoint", getEnv("DBBACKUP_CLOUD_ENDPOINT", ""), "Custom endpoint (for MinIO)")
cmd.Flags().StringVar(&cloudAccessKey, "cloud-access-key", getEnv("DBBACKUP_CLOUD_ACCESS_KEY", getEnv("AWS_ACCESS_KEY_ID", "")), "Access key")
cmd.Flags().StringVar(&cloudSecretKey, "cloud-secret-key", getEnv("DBBACKUP_CLOUD_SECRET_KEY", getEnv("AWS_SECRET_ACCESS_KEY", "")), "Secret key")
cmd.Flags().StringVar(&cloudPrefix, "cloud-prefix", getEnv("DBBACKUP_CLOUD_PREFIX", ""), "Key prefix")
cmd.Flags().BoolVarP(&cloudVerbose, "verbose", "v", false, "Verbose output")
}
cloudDeleteCmd.Flags().BoolVar(&cloudConfirm, "confirm", false, "Skip confirmation prompt")
}
func getEnv(key, defaultValue string) string {
if value := os.Getenv(key); value != "" {
return value
}
return defaultValue
}
func getCloudBackend() (cloud.Backend, error) {
cfg := &cloud.Config{
Provider: cloudProvider,
Bucket: cloudBucket,
Region: cloudRegion,
Endpoint: cloudEndpoint,
AccessKey: cloudAccessKey,
SecretKey: cloudSecretKey,
Prefix: cloudPrefix,
UseSSL: true,
PathStyle: cloudProvider == "minio",
Timeout: 300,
MaxRetries: 3,
}
if cfg.Bucket == "" {
return nil, fmt.Errorf("bucket name is required (use --cloud-bucket or DBBACKUP_CLOUD_BUCKET)")
}
backend, err := cloud.NewBackend(cfg)
if err != nil {
return nil, fmt.Errorf("failed to create cloud backend: %w", err)
}
return backend, nil
}
func runCloudUpload(cmd *cobra.Command, args []string) error {
backend, err := getCloudBackend()
if err != nil {
return err
}
ctx := context.Background()
// Expand glob patterns
var files []string
for _, pattern := range args {
matches, err := filepath.Glob(pattern)
if err != nil {
return fmt.Errorf("invalid pattern %s: %w", pattern, err)
}
if len(matches) == 0 {
files = append(files, pattern)
} else {
files = append(files, matches...)
}
}
fmt.Printf("☁️ Uploading %d file(s) to %s...\n\n", len(files), backend.Name())
successCount := 0
for _, localPath := range files {
filename := filepath.Base(localPath)
fmt.Printf("📤 %s\n", filename)
// Progress callback
var lastPercent int
progress := func(transferred, total int64) {
if !cloudVerbose {
return
}
percent := int(float64(transferred) / float64(total) * 100)
if percent != lastPercent && percent%10 == 0 {
fmt.Printf(" Progress: %d%% (%s / %s)\n",
percent,
cloud.FormatSize(transferred),
cloud.FormatSize(total))
lastPercent = percent
}
}
err := backend.Upload(ctx, localPath, filename, progress)
if err != nil {
fmt.Printf(" ❌ Failed: %v\n\n", err)
continue
}
// Get file size
if info, err := os.Stat(localPath); err == nil {
fmt.Printf(" ✅ Uploaded (%s)\n\n", cloud.FormatSize(info.Size()))
} else {
fmt.Printf(" ✅ Uploaded\n\n")
}
successCount++
}
fmt.Println(strings.Repeat("─", 50))
fmt.Printf("✅ Successfully uploaded %d/%d file(s)\n", successCount, len(files))
return nil
}
func runCloudDownload(cmd *cobra.Command, args []string) error {
backend, err := getCloudBackend()
if err != nil {
return err
}
ctx := context.Background()
remotePath := args[0]
localPath := args[1]
// If localPath is a directory, use the remote filename
if info, err := os.Stat(localPath); err == nil && info.IsDir() {
localPath = filepath.Join(localPath, filepath.Base(remotePath))
}
fmt.Printf("☁️ Downloading from %s...\n\n", backend.Name())
fmt.Printf("📥 %s → %s\n", remotePath, localPath)
// Progress callback
var lastPercent int
progress := func(transferred, total int64) {
if !cloudVerbose {
return
}
percent := int(float64(transferred) / float64(total) * 100)
if percent != lastPercent && percent%10 == 0 {
fmt.Printf(" Progress: %d%% (%s / %s)\n",
percent,
cloud.FormatSize(transferred),
cloud.FormatSize(total))
lastPercent = percent
}
}
err = backend.Download(ctx, remotePath, localPath, progress)
if err != nil {
return fmt.Errorf("download failed: %w", err)
}
// Get file size
if info, err := os.Stat(localPath); err == nil {
fmt.Printf(" ✅ Downloaded (%s)\n", cloud.FormatSize(info.Size()))
} else {
fmt.Printf(" ✅ Downloaded\n")
}
return nil
}
func runCloudList(cmd *cobra.Command, args []string) error {
backend, err := getCloudBackend()
if err != nil {
return err
}
ctx := context.Background()
prefix := ""
if len(args) > 0 {
prefix = args[0]
}
fmt.Printf("☁️ Listing backups in %s/%s...\n\n", backend.Name(), cloudBucket)
backups, err := backend.List(ctx, prefix)
if err != nil {
return fmt.Errorf("failed to list backups: %w", err)
}
if len(backups) == 0 {
fmt.Println("No backups found")
return nil
}
var totalSize int64
for _, backup := range backups {
totalSize += backup.Size
if cloudVerbose {
fmt.Printf("📦 %s\n", backup.Name)
fmt.Printf(" Size: %s\n", cloud.FormatSize(backup.Size))
fmt.Printf(" Modified: %s\n", backup.LastModified.Format(time.RFC3339))
if backup.StorageClass != "" {
fmt.Printf(" Storage: %s\n", backup.StorageClass)
}
fmt.Println()
} else {
age := time.Since(backup.LastModified)
ageStr := formatAge(age)
fmt.Printf("%-50s %12s %s\n",
backup.Name,
cloud.FormatSize(backup.Size),
ageStr)
}
}
fmt.Println(strings.Repeat("─", 50))
fmt.Printf("Total: %d backup(s), %s\n", len(backups), cloud.FormatSize(totalSize))
return nil
}
func runCloudDelete(cmd *cobra.Command, args []string) error {
backend, err := getCloudBackend()
if err != nil {
return err
}
ctx := context.Background()
remotePath := args[0]
// Check if file exists
exists, err := backend.Exists(ctx, remotePath)
if err != nil {
return fmt.Errorf("failed to check file: %w", err)
}
if !exists {
return fmt.Errorf("file not found: %s", remotePath)
}
// Get file info
size, err := backend.GetSize(ctx, remotePath)
if err != nil {
return fmt.Errorf("failed to get file info: %w", err)
}
// Confirmation prompt
if !cloudConfirm {
fmt.Printf("⚠️ Delete %s (%s) from cloud storage?\n", remotePath, cloud.FormatSize(size))
fmt.Print("Type 'yes' to confirm: ")
var response string
fmt.Scanln(&response)
if response != "yes" {
fmt.Println("Cancelled")
return nil
}
}
fmt.Printf("🗑️ Deleting %s...\n", remotePath)
err = backend.Delete(ctx, remotePath)
if err != nil {
return fmt.Errorf("delete failed: %w", err)
}
fmt.Printf("✅ Deleted %s (%s)\n", remotePath, cloud.FormatSize(size))
return nil
}
func formatAge(d time.Duration) string {
if d < time.Minute {
return "just now"
} else if d < time.Hour {
return fmt.Sprintf("%d min ago", int(d.Minutes()))
} else if d < 24*time.Hour {
return fmt.Sprintf("%d hours ago", int(d.Hours()))
} else {
return fmt.Sprintf("%d days ago", int(d.Hours()/24))
}
}

cmd/verify.go

@@ -0,0 +1,141 @@
package cmd
import (
"fmt"
"os"
"path/filepath"
"strings"
"time"
"dbbackup/internal/metadata"
"dbbackup/internal/verification"
"github.com/spf13/cobra"
)
var verifyBackupCmd = &cobra.Command{
Use: "verify-backup [backup-file]",
Short: "Verify backup file integrity with checksums",
Long: `Verify the integrity of one or more backup files by comparing their SHA-256 checksums
against the stored metadata. This ensures that backups have not been corrupted.
Examples:
# Verify a single backup
dbbackup verify-backup /backups/mydb_20260115.dump
# Verify all backups in a directory
dbbackup verify-backup /backups/*.dump
# Quick verification (size check only, no checksum)
dbbackup verify-backup /backups/mydb.dump --quick
# Verify and show detailed information
dbbackup verify-backup /backups/mydb.dump --verbose`,
Args: cobra.MinimumNArgs(1),
RunE: runVerifyBackup,
}
var (
quickVerify bool
verboseVerify bool
)
func init() {
rootCmd.AddCommand(verifyBackupCmd)
verifyBackupCmd.Flags().BoolVar(&quickVerify, "quick", false, "Quick verification (size check only)")
verifyBackupCmd.Flags().BoolVarP(&verboseVerify, "verbose", "v", false, "Show detailed information")
}
func runVerifyBackup(cmd *cobra.Command, args []string) error {
// Expand glob patterns
var backupFiles []string
for _, pattern := range args {
matches, err := filepath.Glob(pattern)
if err != nil {
return fmt.Errorf("invalid pattern %s: %w", pattern, err)
}
if len(matches) == 0 {
// Not a glob, use as-is
backupFiles = append(backupFiles, pattern)
} else {
backupFiles = append(backupFiles, matches...)
}
}
if len(backupFiles) == 0 {
return fmt.Errorf("no backup files found")
}
fmt.Printf("Verifying %d backup file(s)...\n\n", len(backupFiles))
successCount := 0
failureCount := 0
for _, backupFile := range backupFiles {
// Skip metadata files
if strings.HasSuffix(backupFile, ".meta.json") ||
strings.HasSuffix(backupFile, ".sha256") ||
strings.HasSuffix(backupFile, ".info") {
continue
}
fmt.Printf("📁 %s\n", filepath.Base(backupFile))
if quickVerify {
// Quick check: size only
err := verification.QuickCheck(backupFile)
if err != nil {
fmt.Printf(" ❌ FAILED: %v\n\n", err)
failureCount++
continue
}
fmt.Printf(" ✅ VALID (quick check)\n\n")
successCount++
} else {
// Full verification with SHA-256
result, err := verification.Verify(backupFile)
if err != nil {
return fmt.Errorf("verification error: %w", err)
}
if result.Valid {
fmt.Printf(" ✅ VALID\n")
if verboseVerify {
meta, _ := metadata.Load(backupFile)
fmt.Printf(" Size: %s\n", metadata.FormatSize(meta.SizeBytes))
fmt.Printf(" SHA-256: %s\n", meta.SHA256)
fmt.Printf(" Database: %s (%s)\n", meta.Database, meta.DatabaseType)
fmt.Printf(" Created: %s\n", meta.Timestamp.Format(time.RFC3339))
}
fmt.Println()
successCount++
} else {
fmt.Printf(" ❌ FAILED: %v\n", result.Error)
if verboseVerify {
if !result.FileExists {
fmt.Printf(" File does not exist\n")
} else if !result.MetadataExists {
fmt.Printf(" Metadata file missing\n")
} else if !result.SizeMatch {
fmt.Printf(" Size mismatch\n")
} else {
fmt.Printf(" Expected: %s\n", result.ExpectedSHA256)
fmt.Printf(" Got: %s\n", result.CalculatedSHA256)
}
}
fmt.Println()
failureCount++
}
}
}
// Summary
fmt.Println(strings.Repeat("─", 50))
fmt.Printf("Total: %d backups\n", len(backupFiles))
fmt.Printf("✅ Valid: %d\n", successCount)
if failureCount > 0 {
fmt.Printf("❌ Failed: %d\n", failureCount)
os.Exit(1)
}
return nil
}

docker-compose.yml

@@ -0,0 +1,88 @@
version: '3.8'

services:
  # PostgreSQL backup example
  postgres-backup:
    build: .
    image: dbbackup:latest
    container_name: dbbackup-postgres
    volumes:
      - ./backups:/backups
      - ./config/.dbbackup.conf:/home/dbbackup/.dbbackup.conf:ro
    environment:
      - PGHOST=postgres
      - PGPORT=5432
      - PGUSER=postgres
      - PGPASSWORD=secret
    command: backup single mydb
    depends_on:
      - postgres
    networks:
      - dbnet

  # MySQL backup example
  mysql-backup:
    build: .
    image: dbbackup:latest
    container_name: dbbackup-mysql
    volumes:
      - ./backups:/backups
    environment:
      - MYSQL_HOST=mysql
      - MYSQL_PORT=3306
      - MYSQL_USER=root
      - MYSQL_PWD=secret
    command: backup single mydb --db-type mysql
    depends_on:
      - mysql
    networks:
      - dbnet

  # Interactive mode example
  dbbackup-interactive:
    build: .
    image: dbbackup:latest
    container_name: dbbackup-tui
    volumes:
      - ./backups:/backups
    environment:
      - PGHOST=postgres
      - PGUSER=postgres
      - PGPASSWORD=secret
    command: interactive
    stdin_open: true
    tty: true
    networks:
      - dbnet

  # Test PostgreSQL database
  postgres:
    image: postgres:15-alpine
    container_name: test-postgres
    environment:
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=mydb
    volumes:
      - postgres-data:/var/lib/postgresql/data
    networks:
      - dbnet

  # Test MySQL database
  mysql:
    image: mysql:8.0
    container_name: test-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=secret
      - MYSQL_DATABASE=mydb
    volumes:
      - mysql-data:/var/lib/mysql
    networks:
      - dbnet

volumes:
  postgres-data:
  mysql-data:

networks:
  dbnet:
    driver: bridge

go.mod

@@ -18,6 +18,25 @@ require (
require (
filippo.io/edwards25519 v1.1.0 // indirect
github.com/aws/aws-sdk-go-v2 v1.40.0 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.3 // indirect
github.com/aws/aws-sdk-go-v2/config v1.32.1 // indirect
github.com/aws/aws-sdk-go-v2/credentials v1.19.1 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.14 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.14 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.14 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.4 // indirect
github.com/aws/aws-sdk-go-v2/internal/v4a v1.4.14 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.3 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.5 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.14 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.14 // indirect
github.com/aws/aws-sdk-go-v2/service/s3 v1.92.0 // indirect
github.com/aws/aws-sdk-go-v2/service/signin v1.0.1 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.30.4 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.9 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.41.1 // indirect
github.com/aws/smithy-go v1.23.2 // indirect
github.com/aymanbagabas/go-osc52/v2 v2.0.1 // indirect
github.com/charmbracelet/colorprofile v0.2.3-0.20250311203215-f60798e515dc // indirect
github.com/charmbracelet/x/ansi v0.10.1 // indirect

go.sum

@@ -2,6 +2,44 @@ filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2 h1:+vx7roKuyA63nhn5WAunQHLTznkw5W8b1Xc0dNjp83s=
github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2/go.mod h1:HBCaDeC1lPdgDeDbhX8XFpy1jqjK0IBG8W5K+xYqA0w=
github.com/aws/aws-sdk-go-v2 v1.40.0 h1:/WMUA0kjhZExjOQN2z3oLALDREea1A7TobfuiBrKlwc=
github.com/aws/aws-sdk-go-v2 v1.40.0/go.mod h1:c9pm7VwuW0UPxAEYGyTmyurVcNrbF6Rt/wixFqDhcjE=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.3 h1:DHctwEM8P8iTXFxC/QK0MRjwEpWQeM9yzidCRjldUz0=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.3/go.mod h1:xdCzcZEtnSTKVDOmUZs4l/j3pSV6rpo1WXl5ugNsL8Y=
github.com/aws/aws-sdk-go-v2/config v1.32.1 h1:iODUDLgk3q8/flEC7ymhmxjfoAnBDwEEYEVyKZ9mzjU=
github.com/aws/aws-sdk-go-v2/config v1.32.1/go.mod h1:xoAgo17AGrPpJBSLg81W+ikM0cpOZG8ad04T2r+d5P0=
github.com/aws/aws-sdk-go-v2/credentials v1.19.1 h1:JeW+EwmtTE0yXFK8SmklrFh/cGTTXsQJumgMZNlbxfM=
github.com/aws/aws-sdk-go-v2/credentials v1.19.1/go.mod h1:BOoXiStwTF+fT2XufhO0Efssbi1CNIO/ZXpZu87N0pw=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.14 h1:WZVR5DbDgxzA0BJeudId89Kmgy6DIU4ORpxwsVHz0qA=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.14/go.mod h1:Dadl9QO0kHgbrH1GRqGiZdYtW5w+IXXaBNCHTIaheM4=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.14 h1:PZHqQACxYb8mYgms4RZbhZG0a7dPW06xOjmaH0EJC/I=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.14/go.mod h1:VymhrMJUWs69D8u0/lZ7jSB6WgaG/NqHi3gX0aYf6U0=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.14 h1:bOS19y6zlJwagBfHxs0ESzr1XCOU2KXJCWcq3E2vfjY=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.14/go.mod h1:1ipeGBMAxZ0xcTm6y6paC2C/J6f6OO7LBODV9afuAyM=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.4 h1:WKuaxf++XKWlHWu9ECbMlha8WOEGm0OUEZqm4K/Gcfk=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.4/go.mod h1:ZWy7j6v1vWGmPReu0iSGvRiise4YI5SkR3OHKTZ6Wuc=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.4.14 h1:ITi7qiDSv/mSGDSWNpZ4k4Ve0DQR6Ug2SJQ8zEHoDXg=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.4.14/go.mod h1:k1xtME53H1b6YpZt74YmwlONMWf4ecM+lut1WQLAF/U=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.3 h1:x2Ibm/Af8Fi+BH+Hsn9TXGdT+hKbDd5XOTZxTMxDk7o=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.3/go.mod h1:IW1jwyrQgMdhisceG8fQLmQIydcT/jWY21rFhzgaKwo=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.5 h1:Hjkh7kE6D81PgrHlE/m9gx+4TyyeLHuY8xJs7yXN5C4=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.5/go.mod h1:nPRXgyCfAurhyaTMoBMwRBYBhaHI4lNPAnJmjM0Tslc=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.14 h1:FIouAnCE46kyYqyhs0XEBDFFSREtdnr8HQuLPQPLCrY=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.14/go.mod h1:UTwDc5COa5+guonQU8qBikJo1ZJ4ln2r1MkF7Dqag1E=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.14 h1:FzQE21lNtUor0Fb7QNgnEyiRCBlolLTX/Z1j65S7teM=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.14/go.mod h1:s1ydyWG9pm3ZwmmYN21HKyG9WzAZhYVW85wMHs5FV6w=
github.com/aws/aws-sdk-go-v2/service/s3 v1.92.0 h1:8FshVvnV2sr9kOSAbOnc/vwVmmAwMjOedKH6JW2ddPM=
github.com/aws/aws-sdk-go-v2/service/s3 v1.92.0/go.mod h1:wYNqY3L02Z3IgRYxOBPH9I1zD9Cjh9hI5QOy/eOjQvw=
github.com/aws/aws-sdk-go-v2/service/signin v1.0.1 h1:BDgIUYGEo5TkayOWv/oBLPphWwNm/A91AebUjAu5L5g=
github.com/aws/aws-sdk-go-v2/service/signin v1.0.1/go.mod h1:iS6EPmNeqCsGo+xQmXv0jIMjyYtQfnwg36zl2FwEouk=
github.com/aws/aws-sdk-go-v2/service/sso v1.30.4 h1:U//SlnkE1wOQiIImxzdY5PXat4Wq+8rlfVEw4Y7J8as=
github.com/aws/aws-sdk-go-v2/service/sso v1.30.4/go.mod h1:av+ArJpoYf3pgyrj6tcehSFW+y9/QvAY8kMooR9bZCw=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.9 h1:LU8S9W/mPDAU9q0FjCLi0TrCheLMGwzbRpvUMwYspcA=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.9/go.mod h1:/j67Z5XBVDx8nZVp9EuFM9/BS5dvBznbqILGuu73hug=
github.com/aws/aws-sdk-go-v2/service/sts v1.41.1 h1:GdGmKtG+/Krag7VfyOXV17xjTCz0i9NT+JnqLTOI5nA=
github.com/aws/aws-sdk-go-v2/service/sts v1.41.1/go.mod h1:6TxbXoDSgBQ225Qd8Q+MbxUxUh6TtNKwbRt/EPS9xso=
github.com/aws/smithy-go v1.23.2 h1:Crv0eatJUQhaManss33hS5r40CG3ZFH+21XSkqMrIUM=
github.com/aws/smithy-go v1.23.2/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
github.com/aymanbagabas/go-osc52/v2 v2.0.1 h1:HwpRHbFMcZLEVr42D4p7XBqjyuxQH5SMiErDT4WkJ2k=
github.com/aymanbagabas/go-osc52/v2 v2.0.1/go.mod h1:uYgXzlJ7ZpABp8OJ+exZzJJhRNQ2ASbcXHWsFqH8hp8=
github.com/charmbracelet/bubbles v0.21.0 h1:9TdC97SdRVg/1aaXNVWfFH3nnLAwOXr8Fn6u6mfQdFs=


@@ -17,10 +17,12 @@ import (
"time" "time"
"dbbackup/internal/checks" "dbbackup/internal/checks"
"dbbackup/internal/cloud"
"dbbackup/internal/config" "dbbackup/internal/config"
"dbbackup/internal/database" "dbbackup/internal/database"
"dbbackup/internal/security" "dbbackup/internal/security"
"dbbackup/internal/logger" "dbbackup/internal/logger"
"dbbackup/internal/metadata"
"dbbackup/internal/metrics" "dbbackup/internal/metrics"
"dbbackup/internal/progress" "dbbackup/internal/progress"
"dbbackup/internal/swap" "dbbackup/internal/swap"
@@ -233,6 +235,14 @@ func (e *Engine) BackupSingle(ctx context.Context, databaseName string) error {
metrics.GlobalMetrics.RecordOperation("backup_single", databaseName, time.Now().Add(-time.Minute), info.Size(), true, 0)
}
// Cloud upload if enabled
if e.cfg.CloudEnabled && e.cfg.CloudAutoUpload {
if err := e.uploadToCloud(ctx, outputFile, tracker); err != nil {
e.log.Warn("Cloud upload failed", "error", err)
// Don't fail the backup if cloud upload fails
}
}
// Complete operation
tracker.UpdateProgress(100, "Backup operation completed successfully")
tracker.Complete(fmt.Sprintf("Single database backup completed: %s", filepath.Base(outputFile)))
@@ -541,9 +551,9 @@ func (e *Engine) BackupCluster(ctx context.Context) error {
operation.Complete(fmt.Sprintf("Cluster backup created: %s (%s)", outputFile, size))
}
- // Create metadata file
- if err := e.createMetadata(outputFile, "cluster", "cluster", ""); err != nil {
- e.log.Warn("Failed to create metadata file", "error", err)
// Create cluster metadata file
if err := e.createClusterMetadata(outputFile, databases, successCountFinal, failCountFinal); err != nil {
e.log.Warn("Failed to create cluster metadata file", "error", err)
}
return nil
@@ -910,9 +920,70 @@ regularTar:
// createMetadata creates a metadata file for the backup
func (e *Engine) createMetadata(backupFile, database, backupType, strategy string) error {
- metaFile := backupFile + ".info"
- content := fmt.Sprintf(`{
startTime := time.Now()
// Get backup file information
info, err := os.Stat(backupFile)
if err != nil {
return fmt.Errorf("failed to stat backup file: %w", err)
}
// Calculate SHA-256 checksum
sha256, err := metadata.CalculateSHA256(backupFile)
if err != nil {
return fmt.Errorf("failed to calculate checksum: %w", err)
}
// Get database version
ctx := context.Background()
dbVersion, _ := e.db.GetVersion(ctx)
if dbVersion == "" {
dbVersion = "unknown"
}
// Determine compression format
compressionFormat := "none"
if e.cfg.CompressionLevel > 0 {
if e.cfg.Jobs > 1 {
compressionFormat = fmt.Sprintf("pigz-%d", e.cfg.CompressionLevel)
} else {
compressionFormat = fmt.Sprintf("gzip-%d", e.cfg.CompressionLevel)
}
}
// Create backup metadata
meta := &metadata.BackupMetadata{
Version: "2.0",
Timestamp: startTime,
Database: database,
DatabaseType: e.cfg.DatabaseType,
DatabaseVersion: dbVersion,
Host: e.cfg.Host,
Port: e.cfg.Port,
User: e.cfg.User,
BackupFile: backupFile,
SizeBytes: info.Size(),
SHA256: sha256,
Compression: compressionFormat,
BackupType: backupType,
Duration: time.Since(startTime).Seconds(),
ExtraInfo: make(map[string]string),
}
// Add strategy for sample backups
if strategy != "" {
meta.ExtraInfo["sample_strategy"] = strategy
meta.ExtraInfo["sample_value"] = fmt.Sprintf("%d", e.cfg.SampleValue)
}
// Save metadata
if err := meta.Save(); err != nil {
return fmt.Errorf("failed to save metadata: %w", err)
}
// Also save legacy .info file for backward compatibility
legacyMetaFile := backupFile + ".info"
legacyContent := fmt.Sprintf(`{
"type": "%s",
"database": "%s",
"timestamp": "%s",
"host": "%s",
"port": %d,
"user": "%s",
"db_type": "%s",
- "compression": %d`,
- backupType, database, time.Now().Format("20060102_150405"),
- e.cfg.Host, e.cfg.Port, e.cfg.User, e.cfg.DatabaseType, e.cfg.CompressionLevel)
- if strategy != "" {
- content += fmt.Sprintf(`,
- "sample_strategy": "%s",
- "sample_value": %d`, e.cfg.SampleStrategy, e.cfg.SampleValue)
- }
- if info, err := os.Stat(backupFile); err == nil {
- content += fmt.Sprintf(`,
- "size_bytes": %d`, info.Size())
- }
- content += "\n}"
- return os.WriteFile(metaFile, []byte(content), 0644)
"compression": %d,
"size_bytes": %d
}`, backupType, database, startTime.Format("20060102_150405"),
e.cfg.Host, e.cfg.Port, e.cfg.User, e.cfg.DatabaseType,
e.cfg.CompressionLevel, info.Size())
if err := os.WriteFile(legacyMetaFile, []byte(legacyContent), 0644); err != nil {
e.log.Warn("Failed to save legacy metadata file", "error", err)
}
return nil
}
// createClusterMetadata creates metadata for cluster backups
func (e *Engine) createClusterMetadata(backupFile string, databases []string, successCount, failCount int) error {
startTime := time.Now()
// Get backup file information
info, err := os.Stat(backupFile)
if err != nil {
return fmt.Errorf("failed to stat backup file: %w", err)
}
// Calculate SHA-256 checksum for archive
sha256, err := metadata.CalculateSHA256(backupFile)
if err != nil {
return fmt.Errorf("failed to calculate checksum: %w", err)
}
// Get database version
ctx := context.Background()
dbVersion, _ := e.db.GetVersion(ctx)
if dbVersion == "" {
dbVersion = "unknown"
}
// Create cluster metadata
clusterMeta := &metadata.ClusterMetadata{
Version: "2.0",
Timestamp: startTime,
ClusterName: fmt.Sprintf("%s:%d", e.cfg.Host, e.cfg.Port),
DatabaseType: e.cfg.DatabaseType,
Host: e.cfg.Host,
Port: e.cfg.Port,
Databases: make([]metadata.BackupMetadata, 0),
TotalSize: info.Size(),
Duration: time.Since(startTime).Seconds(),
ExtraInfo: map[string]string{
"database_count": fmt.Sprintf("%d", len(databases)),
"success_count": fmt.Sprintf("%d", successCount),
"failure_count": fmt.Sprintf("%d", failCount),
"archive_sha256": sha256,
"database_version": dbVersion,
},
}
// Add database names to metadata
for _, dbName := range databases {
dbMeta := metadata.BackupMetadata{
Database: dbName,
DatabaseType: e.cfg.DatabaseType,
DatabaseVersion: dbVersion,
Timestamp: startTime,
}
clusterMeta.Databases = append(clusterMeta.Databases, dbMeta)
}
// Save cluster metadata
if err := clusterMeta.Save(backupFile); err != nil {
return fmt.Errorf("failed to save cluster metadata: %w", err)
}
// Also save legacy .info file for backward compatibility
legacyMetaFile := backupFile + ".info"
legacyContent := fmt.Sprintf(`{
"type": "cluster",
"database": "cluster",
"timestamp": "%s",
"host": "%s",
"port": %d,
"user": "%s",
"db_type": "%s",
"compression": %d,
"size_bytes": %d,
"database_count": %d,
"success_count": %d,
"failure_count": %d
}`, startTime.Format("20060102_150405"),
e.cfg.Host, e.cfg.Port, e.cfg.User, e.cfg.DatabaseType,
e.cfg.CompressionLevel, info.Size(), len(databases), successCount, failCount)
if err := os.WriteFile(legacyMetaFile, []byte(legacyContent), 0644); err != nil {
e.log.Warn("Failed to save legacy cluster metadata file", "error", err)
}
return nil
}
// uploadToCloud uploads a backup file to cloud storage
func (e *Engine) uploadToCloud(ctx context.Context, backupFile string, tracker *progress.OperationTracker) error {
uploadStep := tracker.AddStep("cloud_upload", "Uploading to cloud storage")
// Create cloud backend
cloudCfg := &cloud.Config{
Provider: e.cfg.CloudProvider,
Bucket: e.cfg.CloudBucket,
Region: e.cfg.CloudRegion,
Endpoint: e.cfg.CloudEndpoint,
AccessKey: e.cfg.CloudAccessKey,
SecretKey: e.cfg.CloudSecretKey,
Prefix: e.cfg.CloudPrefix,
UseSSL: true,
PathStyle: e.cfg.CloudProvider == "minio",
Timeout: 300,
MaxRetries: 3,
}
backend, err := cloud.NewBackend(cloudCfg)
if err != nil {
uploadStep.Fail(fmt.Errorf("failed to create cloud backend: %w", err))
return err
}
// Get file info
info, err := os.Stat(backupFile)
if err != nil {
uploadStep.Fail(fmt.Errorf("failed to stat backup file: %w", err))
return err
}
filename := filepath.Base(backupFile)
e.log.Info("Uploading backup to cloud", "file", filename, "size", cloud.FormatSize(info.Size()))
// Progress callback
var lastPercent int
progressCallback := func(transferred, total int64) {
percent := int(float64(transferred) / float64(total) * 100)
if percent != lastPercent && percent%10 == 0 {
e.log.Debug("Upload progress", "percent", percent, "transferred", cloud.FormatSize(transferred), "total", cloud.FormatSize(total))
lastPercent = percent
}
}
// Upload to cloud
err = backend.Upload(ctx, backupFile, filename, progressCallback)
if err != nil {
uploadStep.Fail(fmt.Errorf("cloud upload failed: %w", err))
return err
}
// Also upload metadata file
metaFile := backupFile + ".meta.json"
if _, err := os.Stat(metaFile); err == nil {
metaFilename := filepath.Base(metaFile)
if err := backend.Upload(ctx, metaFile, metaFilename, nil); err != nil {
e.log.Warn("Failed to upload metadata file", "error", err)
// Don't fail if metadata upload fails
}
}
uploadStep.Complete(fmt.Sprintf("Uploaded to %s/%s/%s", backend.Name(), e.cfg.CloudBucket, filename))
e.log.Info("Backup uploaded to cloud", "provider", backend.Name(), "bucket", e.cfg.CloudBucket, "file", filename)
return nil
}
// executeCommand executes a backup command (optimized for huge databases)
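A rough sketch of how the pieces above could be gated on the new config flags at the end of a backup run (illustrative only, not the actual call site; it assumes `ctx`, `outputFile` and `tracker` are in scope):

```go
// Sketch only: wiring auto-upload behind the CloudEnabled/CloudAutoUpload flags.
if e.cfg.CloudEnabled && e.cfg.CloudAutoUpload {
	if err := e.uploadToCloud(ctx, outputFile, tracker); err != nil {
		// In this sketch the error is only logged, so a failed upload
		// would not fail the backup run itself.
		e.log.Warn("Cloud upload failed; backup remains available locally", "error", err)
	}
}
```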

internal/cloud/interface.go (new file, +167 lines)

@@ -0,0 +1,167 @@
package cloud
import (
"context"
"fmt"
"io"
"time"
)
// Backend defines the interface for cloud storage providers
type Backend interface {
// Upload uploads a file to cloud storage
Upload(ctx context.Context, localPath, remotePath string, progress ProgressCallback) error
// Download downloads a file from cloud storage
Download(ctx context.Context, remotePath, localPath string, progress ProgressCallback) error
// List lists all backup files in cloud storage
List(ctx context.Context, prefix string) ([]BackupInfo, error)
// Delete deletes a file from cloud storage
Delete(ctx context.Context, remotePath string) error
// Exists checks if a file exists in cloud storage
Exists(ctx context.Context, remotePath string) (bool, error)
// GetSize returns the size of a remote file
GetSize(ctx context.Context, remotePath string) (int64, error)
// Name returns the backend name (e.g., "s3", "azure", "gcs")
Name() string
}
// BackupInfo contains information about a backup in cloud storage
type BackupInfo struct {
Key string // Full path/key in cloud storage
Name string // Base filename
Size int64 // Size in bytes
LastModified time.Time // Last modification time
ETag string // Entity tag (version identifier)
StorageClass string // Storage class (e.g., STANDARD, GLACIER)
}
// ProgressCallback is called during upload/download to report progress
type ProgressCallback func(bytesTransferred, totalBytes int64)
// Config contains common configuration for cloud backends
type Config struct {
Provider string // "s3", "minio", "azure", "gcs", "b2"
Bucket string // Bucket or container name
Region string // Region (for S3)
Endpoint string // Custom endpoint (for MinIO, S3-compatible)
AccessKey string // Access key or account ID
SecretKey string // Secret key or access token
UseSSL bool // Use SSL/TLS (default: true)
PathStyle bool // Use path-style addressing (for MinIO)
Prefix string // Prefix for all operations (e.g., "backups/")
Timeout int // Timeout in seconds (default: 300)
MaxRetries int // Maximum retry attempts (default: 3)
Concurrency int // Upload/download concurrency (default: 5)
}
// NewBackend creates a new cloud storage backend based on the provider
func NewBackend(cfg *Config) (Backend, error) {
switch cfg.Provider {
case "s3", "aws":
return NewS3Backend(cfg)
case "minio":
// MinIO uses S3 backend with custom endpoint
cfg.PathStyle = true
if cfg.Endpoint == "" {
return nil, fmt.Errorf("endpoint required for MinIO")
}
return NewS3Backend(cfg)
case "b2", "backblaze":
// Backblaze B2 uses S3-compatible API
cfg.PathStyle = false
if cfg.Endpoint == "" {
return nil, fmt.Errorf("endpoint required for Backblaze B2")
}
return NewS3Backend(cfg)
default:
return nil, fmt.Errorf("unsupported cloud provider: %s (supported: s3, minio, b2)", cfg.Provider)
}
}
// FormatSize returns human-readable size
func FormatSize(bytes int64) string {
const unit = 1024
if bytes < unit {
return fmt.Sprintf("%d B", bytes)
}
div, exp := int64(unit), 0
for n := bytes / unit; n >= unit; n /= unit {
div *= unit
exp++
}
return fmt.Sprintf("%.1f %ciB", float64(bytes)/float64(div), "KMGTPE"[exp])
}
// DefaultConfig returns a config with sensible defaults
func DefaultConfig() *Config {
return &Config{
Provider: "s3",
UseSSL: true,
PathStyle: false,
Timeout: 300,
MaxRetries: 3,
Concurrency: 5,
}
}
// Validate checks if the configuration is valid
func (c *Config) Validate() error {
if c.Provider == "" {
return fmt.Errorf("provider is required")
}
if c.Bucket == "" {
return fmt.Errorf("bucket name is required")
}
if c.Provider == "s3" || c.Provider == "aws" {
if c.Region == "" && c.Endpoint == "" {
return fmt.Errorf("region or endpoint is required for S3")
}
}
if c.Provider == "minio" || c.Provider == "b2" {
if c.Endpoint == "" {
return fmt.Errorf("endpoint is required for %s", c.Provider)
}
}
return nil
}
// ProgressReader wraps an io.Reader to track progress
type ProgressReader struct {
reader io.Reader
total int64
read int64
callback ProgressCallback
lastReport time.Time
}
// NewProgressReader creates a progress tracking reader
func NewProgressReader(r io.Reader, total int64, callback ProgressCallback) *ProgressReader {
return &ProgressReader{
reader: r,
total: total,
callback: callback,
lastReport: time.Now(),
}
}
func (pr *ProgressReader) Read(p []byte) (int, error) {
n, err := pr.reader.Read(p)
pr.read += int64(n)
// Report progress every 100ms or when complete
now := time.Now()
if now.Sub(pr.lastReport) > 100*time.Millisecond || err == io.EOF {
if pr.callback != nil {
pr.callback(pr.read, pr.total)
}
pr.lastReport = now
}
return n, err
}
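A minimal usage sketch of `cloud.NewBackend` and `Backend.Upload`, pointing the S3 backend at a local MinIO instance. The endpoint, bucket and credentials are placeholders, not values shipped with the tool:

```go
package main

import (
	"context"
	"log"

	"dbbackup/internal/cloud"
)

func main() {
	// Placeholder endpoint, bucket and credentials for a local MinIO instance.
	cfg := cloud.DefaultConfig()
	cfg.Provider = "minio"
	cfg.Endpoint = "http://localhost:9000"
	cfg.Bucket = "my-backups"
	cfg.AccessKey = "minioadmin"
	cfg.SecretKey = "minioadmin"

	// NewBackend switches on Provider and forces path-style addressing for MinIO.
	backend, err := cloud.NewBackend(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Upload a local dump with a simple progress callback.
	err = backend.Upload(context.Background(), "/backups/mydb.dump.gz", "mydb.dump.gz",
		func(done, total int64) {
			log.Printf("uploaded %s of %s", cloud.FormatSize(done), cloud.FormatSize(total))
		})
	if err != nil {
		log.Fatal(err)
	}
}
```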

internal/cloud/s3.go (new file, +324 lines)

@@ -0,0 +1,324 @@
package cloud
import (
"context"
"fmt"
"io"
"os"
"path/filepath"
"strings"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/config"
"github.com/aws/aws-sdk-go-v2/credentials"
"github.com/aws/aws-sdk-go-v2/service/s3"
)
// S3Backend implements the Backend interface for AWS S3 and compatible services
type S3Backend struct {
client *s3.Client
bucket string
prefix string
config *Config
}
// NewS3Backend creates a new S3 backend
func NewS3Backend(cfg *Config) (*S3Backend, error) {
if err := cfg.Validate(); err != nil {
return nil, fmt.Errorf("invalid config: %w", err)
}
ctx := context.Background()
// Build AWS config
var awsCfg aws.Config
var err error
if cfg.AccessKey != "" && cfg.SecretKey != "" {
// Use explicit credentials
credsProvider := credentials.NewStaticCredentialsProvider(
cfg.AccessKey,
cfg.SecretKey,
"",
)
awsCfg, err = config.LoadDefaultConfig(ctx,
config.WithCredentialsProvider(credsProvider),
config.WithRegion(cfg.Region),
)
} else {
// Use default credential chain (environment, IAM role, etc.)
awsCfg, err = config.LoadDefaultConfig(ctx,
config.WithRegion(cfg.Region),
)
}
if err != nil {
return nil, fmt.Errorf("failed to load AWS config: %w", err)
}
// Create S3 client with custom options
clientOptions := []func(*s3.Options){
func(o *s3.Options) {
if cfg.Endpoint != "" {
o.BaseEndpoint = aws.String(cfg.Endpoint)
}
if cfg.PathStyle {
o.UsePathStyle = true
}
},
}
client := s3.NewFromConfig(awsCfg, clientOptions...)
return &S3Backend{
client: client,
bucket: cfg.Bucket,
prefix: cfg.Prefix,
config: cfg,
}, nil
}
// Name returns the backend name
func (s *S3Backend) Name() string {
return "s3"
}
// buildKey creates the full S3 key from filename
func (s *S3Backend) buildKey(filename string) string {
if s.prefix == "" {
return filename
}
return filepath.Join(s.prefix, filename)
}
// Upload uploads a file to S3
func (s *S3Backend) Upload(ctx context.Context, localPath, remotePath string, progress ProgressCallback) error {
// Open local file
file, err := os.Open(localPath)
if err != nil {
return fmt.Errorf("failed to open file: %w", err)
}
defer file.Close()
// Get file size
stat, err := file.Stat()
if err != nil {
return fmt.Errorf("failed to stat file: %w", err)
}
fileSize := stat.Size()
// Create progress reader
var reader io.Reader = file
if progress != nil {
reader = NewProgressReader(file, fileSize, progress)
}
// Build S3 key
key := s.buildKey(remotePath)
// Upload to S3
_, err = s.client.PutObject(ctx, &s3.PutObjectInput{
Bucket: aws.String(s.bucket),
Key: aws.String(key),
Body: reader,
})
if err != nil {
return fmt.Errorf("failed to upload to S3: %w", err)
}
return nil
}
// Download downloads a file from S3
func (s *S3Backend) Download(ctx context.Context, remotePath, localPath string, progress ProgressCallback) error {
// Build S3 key
key := s.buildKey(remotePath)
// Get object size first
size, err := s.GetSize(ctx, remotePath)
if err != nil {
return fmt.Errorf("failed to get object size: %w", err)
}
// Download from S3
result, err := s.client.GetObject(ctx, &s3.GetObjectInput{
Bucket: aws.String(s.bucket),
Key: aws.String(key),
})
if err != nil {
return fmt.Errorf("failed to download from S3: %w", err)
}
defer result.Body.Close()
// Create local file
if err := os.MkdirAll(filepath.Dir(localPath), 0755); err != nil {
return fmt.Errorf("failed to create directory: %w", err)
}
outFile, err := os.Create(localPath)
if err != nil {
return fmt.Errorf("failed to create local file: %w", err)
}
defer outFile.Close()
// Copy with progress tracking
var reader io.Reader = result.Body
if progress != nil {
reader = NewProgressReader(result.Body, size, progress)
}
_, err = io.Copy(outFile, reader)
if err != nil {
return fmt.Errorf("failed to write file: %w", err)
}
return nil
}
// List lists all backup files in S3
func (s *S3Backend) List(ctx context.Context, prefix string) ([]BackupInfo, error) {
// Build full prefix
fullPrefix := s.buildKey(prefix)
// List objects
result, err := s.client.ListObjectsV2(ctx, &s3.ListObjectsV2Input{
Bucket: aws.String(s.bucket),
Prefix: aws.String(fullPrefix),
})
if err != nil {
return nil, fmt.Errorf("failed to list objects: %w", err)
}
// Convert to BackupInfo
var backups []BackupInfo
for _, obj := range result.Contents {
if obj.Key == nil {
continue
}
key := *obj.Key
name := filepath.Base(key)
// Skip if it's just a directory marker
if strings.HasSuffix(key, "/") {
continue
}
info := BackupInfo{
Key: key,
Name: name,
Size: *obj.Size,
LastModified: *obj.LastModified,
}
if obj.ETag != nil {
info.ETag = *obj.ETag
}
if obj.StorageClass != "" {
info.StorageClass = string(obj.StorageClass)
} else {
info.StorageClass = "STANDARD"
}
backups = append(backups, info)
}
return backups, nil
}
// Delete deletes a file from S3
func (s *S3Backend) Delete(ctx context.Context, remotePath string) error {
key := s.buildKey(remotePath)
_, err := s.client.DeleteObject(ctx, &s3.DeleteObjectInput{
Bucket: aws.String(s.bucket),
Key: aws.String(key),
})
if err != nil {
return fmt.Errorf("failed to delete object: %w", err)
}
return nil
}
// Exists checks if a file exists in S3
func (s *S3Backend) Exists(ctx context.Context, remotePath string) (bool, error) {
key := s.buildKey(remotePath)
_, err := s.client.HeadObject(ctx, &s3.HeadObjectInput{
Bucket: aws.String(s.bucket),
Key: aws.String(key),
})
if err != nil {
// Check if it's a "not found" error
if strings.Contains(err.Error(), "NotFound") || strings.Contains(err.Error(), "404") {
return false, nil
}
return false, fmt.Errorf("failed to check object existence: %w", err)
}
return true, nil
}
// GetSize returns the size of a remote file
func (s *S3Backend) GetSize(ctx context.Context, remotePath string) (int64, error) {
key := s.buildKey(remotePath)
result, err := s.client.HeadObject(ctx, &s3.HeadObjectInput{
Bucket: aws.String(s.bucket),
Key: aws.String(key),
})
if err != nil {
return 0, fmt.Errorf("failed to get object metadata: %w", err)
}
if result.ContentLength == nil {
return 0, fmt.Errorf("content length not available")
}
return *result.ContentLength, nil
}
// BucketExists checks if the bucket exists and is accessible
func (s *S3Backend) BucketExists(ctx context.Context) (bool, error) {
_, err := s.client.HeadBucket(ctx, &s3.HeadBucketInput{
Bucket: aws.String(s.bucket),
})
if err != nil {
if strings.Contains(err.Error(), "NotFound") || strings.Contains(err.Error(), "404") {
return false, nil
}
return false, fmt.Errorf("failed to check bucket: %w", err)
}
return true, nil
}
// CreateBucket creates the bucket if it doesn't exist
func (s *S3Backend) CreateBucket(ctx context.Context) error {
exists, err := s.BucketExists(ctx)
if err != nil {
return err
}
if exists {
return nil
}
_, err = s.client.CreateBucket(ctx, &s3.CreateBucketInput{
Bucket: aws.String(s.bucket),
})
if err != nil {
return fmt.Errorf("failed to create bucket: %w", err)
}
return nil
}
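For completeness, a hedged sketch of listing remote backups through the same backend; the bucket and region are placeholders, and credentials are left to the default AWS chain:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"dbbackup/internal/cloud"
)

func main() {
	// Placeholder bucket/region; with AccessKey/SecretKey empty, credentials
	// come from the default AWS chain (environment, shared config, IAM role).
	cfg := cloud.DefaultConfig()
	cfg.Bucket = "my-backups"
	cfg.Region = "us-east-1"

	backend, err := cloud.NewBackend(cfg)
	if err != nil {
		log.Fatal(err)
	}

	backups, err := backend.List(context.Background(), "")
	if err != nil {
		log.Fatal(err)
	}
	for _, b := range backups {
		fmt.Printf("%-40s %10s  %s\n",
			b.Name, cloud.FormatSize(b.Size), b.LastModified.Format("2006-01-02 15:04"))
	}
}
```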


@@ -85,6 +85,17 @@ type Config struct {
TUIDryRun bool // TUI dry-run mode (simulate without execution)
TUIVerbose bool // Verbose TUI logging
TUILogFile string // TUI event log file path
// Cloud storage options (v2.0)
CloudEnabled bool // Enable cloud storage integration
CloudProvider string // "s3", "minio", "b2"
CloudBucket string // Bucket name
CloudRegion string // Region (for S3)
CloudEndpoint string // Custom endpoint (for MinIO, B2)
CloudAccessKey string // Access key
CloudSecretKey string // Secret key
CloudPrefix string // Key prefix
CloudAutoUpload bool // Automatically upload after backup
}
// New creates a new configuration with default values
@@ -192,6 +203,17 @@ func New() *Config {
TUIDryRun: getEnvBool("TUI_DRY_RUN", false), // Execute by default
TUIVerbose: getEnvBool("TUI_VERBOSE", false), // Quiet by default
TUILogFile: getEnvString("TUI_LOG_FILE", ""), // No log file by default
// Cloud storage defaults (v2.0)
CloudEnabled: getEnvBool("CLOUD_ENABLED", false),
CloudProvider: getEnvString("CLOUD_PROVIDER", "s3"),
CloudBucket: getEnvString("CLOUD_BUCKET", ""),
CloudRegion: getEnvString("CLOUD_REGION", "us-east-1"),
CloudEndpoint: getEnvString("CLOUD_ENDPOINT", ""),
CloudAccessKey: getEnvString("CLOUD_ACCESS_KEY", getEnvString("AWS_ACCESS_KEY_ID", "")),
CloudSecretKey: getEnvString("CLOUD_SECRET_KEY", getEnvString("AWS_SECRET_ACCESS_KEY", "")),
CloudPrefix: getEnvString("CLOUD_PREFIX", ""),
CloudAutoUpload: getEnvBool("CLOUD_AUTO_UPLOAD", false),
}
// Ensure canonical defaults are enforced


@@ -0,0 +1,167 @@
package metadata
import (
"crypto/sha256"
"encoding/hex"
"encoding/json"
"fmt"
"io"
"os"
"path/filepath"
"time"
)
// BackupMetadata contains comprehensive information about a backup
type BackupMetadata struct {
Version string `json:"version"`
Timestamp time.Time `json:"timestamp"`
Database string `json:"database"`
DatabaseType string `json:"database_type"` // postgresql, mysql, mariadb
DatabaseVersion string `json:"database_version"` // e.g., "PostgreSQL 15.3"
Host string `json:"host"`
Port int `json:"port"`
User string `json:"user"`
BackupFile string `json:"backup_file"`
SizeBytes int64 `json:"size_bytes"`
SHA256 string `json:"sha256"`
Compression string `json:"compression"` // none, gzip, pigz
BackupType string `json:"backup_type"` // full, incremental (for v2.0)
BaseBackup string `json:"base_backup,omitempty"`
Duration float64 `json:"duration_seconds"`
ExtraInfo map[string]string `json:"extra_info,omitempty"`
}
// ClusterMetadata contains metadata for cluster backups
type ClusterMetadata struct {
Version string `json:"version"`
Timestamp time.Time `json:"timestamp"`
ClusterName string `json:"cluster_name"`
DatabaseType string `json:"database_type"`
Host string `json:"host"`
Port int `json:"port"`
Databases []BackupMetadata `json:"databases"`
TotalSize int64 `json:"total_size_bytes"`
Duration float64 `json:"duration_seconds"`
ExtraInfo map[string]string `json:"extra_info,omitempty"`
}
// CalculateSHA256 computes the SHA-256 checksum of a file
func CalculateSHA256(filePath string) (string, error) {
f, err := os.Open(filePath)
if err != nil {
return "", fmt.Errorf("failed to open file: %w", err)
}
defer f.Close()
hasher := sha256.New()
if _, err := io.Copy(hasher, f); err != nil {
return "", fmt.Errorf("failed to calculate checksum: %w", err)
}
return hex.EncodeToString(hasher.Sum(nil)), nil
}
// Save writes metadata to a .meta.json file
func (m *BackupMetadata) Save() error {
metaPath := m.BackupFile + ".meta.json"
data, err := json.MarshalIndent(m, "", " ")
if err != nil {
return fmt.Errorf("failed to marshal metadata: %w", err)
}
if err := os.WriteFile(metaPath, data, 0644); err != nil {
return fmt.Errorf("failed to write metadata file: %w", err)
}
return nil
}
// Load reads metadata from a .meta.json file
func Load(backupFile string) (*BackupMetadata, error) {
metaPath := backupFile + ".meta.json"
data, err := os.ReadFile(metaPath)
if err != nil {
return nil, fmt.Errorf("failed to read metadata file: %w", err)
}
var meta BackupMetadata
if err := json.Unmarshal(data, &meta); err != nil {
return nil, fmt.Errorf("failed to parse metadata: %w", err)
}
return &meta, nil
}
// SaveCluster writes cluster metadata to a .meta.json file
func (m *ClusterMetadata) Save(targetFile string) error {
metaPath := targetFile + ".meta.json"
data, err := json.MarshalIndent(m, "", " ")
if err != nil {
return fmt.Errorf("failed to marshal cluster metadata: %w", err)
}
if err := os.WriteFile(metaPath, data, 0644); err != nil {
return fmt.Errorf("failed to write cluster metadata file: %w", err)
}
return nil
}
// LoadCluster reads cluster metadata from a .meta.json file
func LoadCluster(targetFile string) (*ClusterMetadata, error) {
metaPath := targetFile + ".meta.json"
data, err := os.ReadFile(metaPath)
if err != nil {
return nil, fmt.Errorf("failed to read cluster metadata file: %w", err)
}
var meta ClusterMetadata
if err := json.Unmarshal(data, &meta); err != nil {
return nil, fmt.Errorf("failed to parse cluster metadata: %w", err)
}
return &meta, nil
}
// ListBackups scans a directory for backup files and returns their metadata
func ListBackups(dir string) ([]*BackupMetadata, error) {
pattern := filepath.Join(dir, "*.meta.json")
matches, err := filepath.Glob(pattern)
if err != nil {
return nil, fmt.Errorf("failed to scan directory: %w", err)
}
var backups []*BackupMetadata
for _, metaFile := range matches {
// Extract backup file path (remove .meta.json suffix)
backupFile := metaFile[:len(metaFile)-len(".meta.json")]
meta, err := Load(backupFile)
if err != nil {
// Skip invalid metadata files
continue
}
backups = append(backups, meta)
}
return backups, nil
}
// FormatSize returns human-readable size
func FormatSize(bytes int64) string {
const unit = 1024
if bytes < unit {
return fmt.Sprintf("%d B", bytes)
}
div, exp := int64(unit), 0
for n := bytes / unit; n >= unit; n /= unit {
div *= unit
exp++
}
return fmt.Sprintf("%.1f %ciB", float64(bytes)/float64(div), "KMGTPE"[exp])
}
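A small sketch of consuming the `.meta.json` sidecars directly, for example to build a custom report; the directory path is a placeholder:

```go
package main

import (
	"fmt"
	"log"

	"dbbackup/internal/metadata"
)

func main() {
	// Placeholder directory; ListBackups scans it for *.meta.json sidecars.
	backups, err := metadata.ListBackups("/backups")
	if err != nil {
		log.Fatal(err)
	}
	for _, b := range backups {
		fmt.Printf("%s  %-20s %10s  %s\n",
			b.Timestamp.Format("2006-01-02 15:04"),
			b.Database,
			metadata.FormatSize(b.SizeBytes),
			b.Compression)
	}
}
```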


@@ -0,0 +1,224 @@
package retention
import (
"fmt"
"os"
"path/filepath"
"sort"
"time"
"dbbackup/internal/metadata"
)
// Policy defines the retention rules
type Policy struct {
RetentionDays int
MinBackups int
DryRun bool
}
// CleanupResult contains information about cleanup operations
type CleanupResult struct {
TotalBackups int
EligibleForDeletion int
Deleted []string
Kept []string
SpaceFreed int64
Errors []error
}
// ApplyPolicy enforces the retention policy on backups in a directory
func ApplyPolicy(backupDir string, policy Policy) (*CleanupResult, error) {
result := &CleanupResult{
Deleted: make([]string, 0),
Kept: make([]string, 0),
Errors: make([]error, 0),
}
// List all backups in directory
backups, err := metadata.ListBackups(backupDir)
if err != nil {
return nil, fmt.Errorf("failed to list backups: %w", err)
}
result.TotalBackups = len(backups)
// Sort backups by timestamp (oldest first)
sort.Slice(backups, func(i, j int) bool {
return backups[i].Timestamp.Before(backups[j].Timestamp)
})
// Calculate cutoff date
cutoffDate := time.Now().AddDate(0, 0, -policy.RetentionDays)
// Determine which backups to delete
for i, backup := range backups {
// Always keep minimum number of backups (most recent ones)
backupsRemaining := len(backups) - i
if backupsRemaining <= policy.MinBackups {
result.Kept = append(result.Kept, backup.BackupFile)
continue
}
// Check if backup is older than retention period
if backup.Timestamp.Before(cutoffDate) {
result.EligibleForDeletion++
if policy.DryRun {
result.Deleted = append(result.Deleted, backup.BackupFile)
} else {
// Delete backup file and associated metadata
if err := deleteBackup(backup.BackupFile); err != nil {
result.Errors = append(result.Errors,
fmt.Errorf("failed to delete %s: %w", backup.BackupFile, err))
} else {
result.Deleted = append(result.Deleted, backup.BackupFile)
result.SpaceFreed += backup.SizeBytes
}
}
} else {
result.Kept = append(result.Kept, backup.BackupFile)
}
}
return result, nil
}
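A sketch of calling `ApplyPolicy` in dry-run mode before committing to deletions; the directory and thresholds are illustrative:

```go
package main

import (
	"fmt"
	"log"

	"dbbackup/internal/retention"
)

func main() {
	// Dry-run first: report what a 30-day / keep-at-least-5 policy would delete.
	policy := retention.Policy{
		RetentionDays: 30,
		MinBackups:    5,
		DryRun:        true,
	}

	result, err := retention.ApplyPolicy("/backups", policy) // placeholder directory
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("%d backups scanned, %d eligible for deletion\n",
		result.TotalBackups, result.EligibleForDeletion)
	for _, f := range result.Deleted {
		fmt.Println("would delete:", f)
	}
}
```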
// deleteBackup removes a backup file and all associated files
func deleteBackup(backupFile string) error {
// Delete main backup file
if err := os.Remove(backupFile); err != nil && !os.IsNotExist(err) {
return fmt.Errorf("failed to delete backup file: %w", err)
}
// Delete metadata file
metaFile := backupFile + ".meta.json"
if err := os.Remove(metaFile); err != nil && !os.IsNotExist(err) {
return fmt.Errorf("failed to delete metadata file: %w", err)
}
// Best-effort removal of the legacy .sha256 file (may not exist for new-format backups)
sha256File := backupFile + ".sha256"
_ = os.Remove(sha256File)
// Best-effort removal of the legacy .info file (may not exist for new-format backups)
infoFile := backupFile + ".info"
_ = os.Remove(infoFile)
return nil
}
// GetOldestBackups returns the N oldest backups in a directory
func GetOldestBackups(backupDir string, count int) ([]*metadata.BackupMetadata, error) {
backups, err := metadata.ListBackups(backupDir)
if err != nil {
return nil, err
}
// Sort by timestamp (oldest first)
sort.Slice(backups, func(i, j int) bool {
return backups[i].Timestamp.Before(backups[j].Timestamp)
})
if count > len(backups) {
count = len(backups)
}
return backups[:count], nil
}
// GetNewestBackups returns the N newest backups in a directory
func GetNewestBackups(backupDir string, count int) ([]*metadata.BackupMetadata, error) {
backups, err := metadata.ListBackups(backupDir)
if err != nil {
return nil, err
}
// Sort by timestamp (newest first)
sort.Slice(backups, func(i, j int) bool {
return backups[i].Timestamp.After(backups[j].Timestamp)
})
if count > len(backups) {
count = len(backups)
}
return backups[:count], nil
}
// CleanupByPattern removes backups matching a specific pattern
func CleanupByPattern(backupDir, pattern string, policy Policy) (*CleanupResult, error) {
result := &CleanupResult{
Deleted: make([]string, 0),
Kept: make([]string, 0),
Errors: make([]error, 0),
}
// Find matching backup files
searchPattern := filepath.Join(backupDir, pattern)
matches, err := filepath.Glob(searchPattern)
if err != nil {
return nil, fmt.Errorf("failed to match pattern: %w", err)
}
// Filter to only .dump or .sql files
var backupFiles []string
for _, match := range matches {
ext := filepath.Ext(match)
if ext == ".dump" || ext == ".sql" {
backupFiles = append(backupFiles, match)
}
}
// Load metadata for matched backups
var backups []*metadata.BackupMetadata
for _, file := range backupFiles {
meta, err := metadata.Load(file)
if err != nil {
// Skip files without metadata
continue
}
backups = append(backups, meta)
}
result.TotalBackups = len(backups)
// Sort by timestamp
sort.Slice(backups, func(i, j int) bool {
return backups[i].Timestamp.Before(backups[j].Timestamp)
})
cutoffDate := time.Now().AddDate(0, 0, -policy.RetentionDays)
// Apply policy
for i, backup := range backups {
backupsRemaining := len(backups) - i
if backupsRemaining <= policy.MinBackups {
result.Kept = append(result.Kept, backup.BackupFile)
continue
}
if backup.Timestamp.Before(cutoffDate) {
result.EligibleForDeletion++
if policy.DryRun {
result.Deleted = append(result.Deleted, backup.BackupFile)
} else {
if err := deleteBackup(backup.BackupFile); err != nil {
result.Errors = append(result.Errors, err)
} else {
result.Deleted = append(result.Deleted, backup.BackupFile)
result.SpaceFreed += backup.SizeBytes
}
}
} else {
result.Kept = append(result.Kept, backup.BackupFile)
}
}
return result, nil
}


@@ -0,0 +1,114 @@
package verification
import (
"fmt"
"os"
"dbbackup/internal/metadata"
)
// Result represents the outcome of a verification operation
type Result struct {
Valid bool
BackupFile string
ExpectedSHA256 string
CalculatedSHA256 string
SizeMatch bool
FileExists bool
MetadataExists bool
Error error
}
// Verify checks the integrity of a backup file
func Verify(backupFile string) (*Result, error) {
result := &Result{
BackupFile: backupFile,
}
// Check if backup file exists
info, err := os.Stat(backupFile)
if err != nil {
result.FileExists = false
result.Error = fmt.Errorf("backup file does not exist: %w", err)
return result, nil
}
result.FileExists = true
// Load metadata
meta, err := metadata.Load(backupFile)
if err != nil {
result.MetadataExists = false
result.Error = fmt.Errorf("failed to load metadata: %w", err)
return result, nil
}
result.MetadataExists = true
result.ExpectedSHA256 = meta.SHA256
// Check size match
if info.Size() != meta.SizeBytes {
result.SizeMatch = false
result.Error = fmt.Errorf("size mismatch: expected %d bytes, got %d bytes",
meta.SizeBytes, info.Size())
return result, nil
}
result.SizeMatch = true
// Calculate actual SHA-256
actualSHA256, err := metadata.CalculateSHA256(backupFile)
if err != nil {
result.Error = fmt.Errorf("failed to calculate checksum: %w", err)
return result, nil
}
result.CalculatedSHA256 = actualSHA256
// Compare checksums
if actualSHA256 != meta.SHA256 {
result.Valid = false
result.Error = fmt.Errorf("checksum mismatch: expected %s, got %s",
meta.SHA256, actualSHA256)
return result, nil
}
// All checks passed
result.Valid = true
return result, nil
}
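A short usage sketch of `Verify`; note that most failure modes are reported through the `Result` rather than the returned error. The path is a placeholder:

```go
package main

import (
	"fmt"
	"log"

	"dbbackup/internal/verification"
)

func main() {
	// Placeholder path; Verify expects a .meta.json sidecar next to the backup file.
	res, err := verification.Verify("/backups/mydb.dump.gz")
	if err != nil {
		log.Fatal(err)
	}
	if res.Valid {
		fmt.Println("OK:", res.BackupFile, "sha256", res.CalculatedSHA256)
		return
	}
	// Missing file, missing metadata, size or checksum mismatch all land here.
	fmt.Println("FAILED:", res.BackupFile, res.Error)
}
```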
// VerifyMultiple verifies multiple backup files
func VerifyMultiple(backupFiles []string) ([]*Result, error) {
var results []*Result
for _, file := range backupFiles {
result, err := Verify(file)
if err != nil {
return nil, fmt.Errorf("verification error for %s: %w", file, err)
}
results = append(results, result)
}
return results, nil
}
// QuickCheck performs a fast check without full checksum calculation
// Only validates metadata existence and file size
func QuickCheck(backupFile string) error {
// Check file exists
info, err := os.Stat(backupFile)
if err != nil {
return fmt.Errorf("backup file does not exist: %w", err)
}
// Load metadata
meta, err := metadata.Load(backupFile)
if err != nil {
return fmt.Errorf("metadata missing or invalid: %w", err)
}
// Check size
if info.Size() != meta.SizeBytes {
return fmt.Errorf("size mismatch: expected %d bytes, got %d bytes",
meta.SizeBytes, info.Size())
}
return nil
}
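And a sketch of using `QuickCheck` as a cheap pre-flight pass over a directory of dumps; the glob pattern is illustrative:

```go
package main

import (
	"fmt"
	"path/filepath"

	"dbbackup/internal/verification"
)

func main() {
	// Illustrative pattern; QuickCheck validates metadata and size only, no checksumming.
	files, err := filepath.Glob("/backups/*.dump")
	if err != nil {
		fmt.Println("bad pattern:", err)
		return
	}
	for _, f := range files {
		if err := verification.QuickCheck(f); err != nil {
			fmt.Printf("SKIP %s: %v\n", f, err)
			continue
		}
		fmt.Println("ok  ", f)
	}
}
```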