Compare commits
6 Commits
v1.1 ... v2.0-sprin

| Author | SHA1 | Date |
|---|---|---|
| | 8929004abc | |
| | bdf9af0650 | |
| | 20b7f1ec04 | |
| | ae3ed1fea1 | |
| | ba5ae8ecb1 | |
| | 884c8292d6 | |

CLOUD.md (new file, 758 lines)
@@ -0,0 +1,758 @@
# Cloud Storage Guide for dbbackup

## Overview

dbbackup v2.0 includes comprehensive cloud storage integration, allowing you to back up directly to S3-compatible storage providers and restore from cloud URIs.

**Supported Providers:**
- AWS S3
- MinIO (self-hosted S3-compatible)
- Backblaze B2
- Google Cloud Storage (via S3 compatibility)
- Any S3-compatible storage

**Key Features:**
- ✅ Direct backup to cloud with `--cloud` URI flag
- ✅ Restore from cloud URIs
- ✅ Verify cloud backup integrity
- ✅ Apply retention policies to cloud storage
- ✅ Multipart upload for large files (>100MB)
- ✅ Progress tracking for uploads/downloads
- ✅ Automatic metadata synchronization
- ✅ Streaming transfers (memory efficient)

---

## Quick Start

### 1. Set Up Credentials

```bash
# For AWS S3
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_REGION="us-east-1"

# For MinIO
export AWS_ACCESS_KEY_ID="minioadmin"
export AWS_SECRET_ACCESS_KEY="minioadmin123"
export AWS_ENDPOINT_URL="http://localhost:9000"

# For Backblaze B2
export AWS_ACCESS_KEY_ID="your-b2-key-id"
export AWS_SECRET_ACCESS_KEY="your-b2-application-key"
export AWS_ENDPOINT_URL="https://s3.us-west-002.backblazeb2.com"
```

### 2. Backup with Cloud URI

```bash
# Backup to S3
dbbackup backup single mydb --cloud s3://my-bucket/backups/

# Backup to MinIO
dbbackup backup single mydb --cloud minio://my-bucket/backups/

# Backup to Backblaze B2
dbbackup backup single mydb --cloud b2://my-bucket/backups/
```

### 3. Restore from Cloud

```bash
# Restore from cloud URI
dbbackup restore single s3://my-bucket/backups/mydb_20260115_120000.dump --confirm

# Restore to different database
dbbackup restore single s3://my-bucket/backups/mydb.dump \
  --target mydb_restored \
  --confirm
```

---

## URI Syntax

Cloud URIs follow this format:

```
<provider>://<bucket>/<path>/<filename>
```

**Supported Providers:**
- `s3://` - AWS S3 or S3-compatible storage
- `minio://` - MinIO (auto-enables path-style addressing)
- `b2://` - Backblaze B2
- `gs://` or `gcs://` - Google Cloud Storage
- `azure://` - Azure Blob Storage (coming soon)

**Examples:**
```bash
s3://production-backups/databases/postgres/
minio://local-backups/dev/mydb/
b2://offsite-backups/daily/
gs://gcp-backups/prod/
```
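
Under the hood, a URI in this form just needs to be split into a provider scheme, a bucket, and an object key. dbbackup does this with its own `cloud.ParseCloudURI` helper (shown later in this changeset); as a rough, simplified sketch of the idea in Go, ignoring the real parser's edge cases:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// splitCloudURI breaks a cloud URI into provider, bucket, and object key.
// Hypothetical helper for illustration only; the real parser is cloud.ParseCloudURI.
func splitCloudURI(raw string) (provider, bucket, key string, err error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", "", "", err
	}
	if u.Scheme == "" || u.Host == "" {
		return "", "", "", fmt.Errorf("expected <provider>://<bucket>/<path>: %s", raw)
	}
	return u.Scheme, u.Host, strings.TrimPrefix(u.Path, "/"), nil
}

func main() {
	p, b, k, _ := splitCloudURI("s3://production-backups/databases/postgres/mydb.dump")
	fmt.Println(p, b, k) // s3 production-backups databases/postgres/mydb.dump
}
```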

---

## Configuration Methods

### Method 1: Cloud URIs (Recommended)

```bash
dbbackup backup single mydb --cloud s3://my-bucket/backups/
```

### Method 2: Individual Flags

```bash
dbbackup backup single mydb \
  --cloud-auto-upload \
  --cloud-provider s3 \
  --cloud-bucket my-bucket \
  --cloud-prefix backups/
```

### Method 3: Environment Variables

```bash
export CLOUD_ENABLED=true
export CLOUD_AUTO_UPLOAD=true
export CLOUD_PROVIDER=s3
export CLOUD_BUCKET=my-bucket
export CLOUD_PREFIX=backups/
export CLOUD_REGION=us-east-1

dbbackup backup single mydb
```

### Method 4: Config File

```toml
# ~/.dbbackup.conf
[cloud]
enabled = true
auto_upload = true
provider = "s3"
bucket = "my-bucket"
prefix = "backups/"
region = "us-east-1"
```

---

## Commands

### Cloud Upload

Upload existing backup files to cloud storage:

```bash
# Upload single file
dbbackup cloud upload /backups/mydb.dump \
  --cloud-provider s3 \
  --cloud-bucket my-bucket

# Upload with cloud URI flags
dbbackup cloud upload /backups/mydb.dump \
  --cloud-provider minio \
  --cloud-bucket local-backups \
  --cloud-endpoint http://localhost:9000

# Upload multiple files
dbbackup cloud upload /backups/*.dump \
  --cloud-provider s3 \
  --cloud-bucket my-bucket \
  --verbose
```

### Cloud Download

Download backups from cloud storage:

```bash
# Download to current directory
dbbackup cloud download mydb.dump . \
  --cloud-provider s3 \
  --cloud-bucket my-bucket

# Download to specific directory
dbbackup cloud download backups/mydb.dump /restore/ \
  --cloud-provider s3 \
  --cloud-bucket my-bucket \
  --verbose
```

### Cloud List

List backups in cloud storage:

```bash
# List all backups
dbbackup cloud list \
  --cloud-provider s3 \
  --cloud-bucket my-bucket

# List with prefix filter
dbbackup cloud list \
  --cloud-provider s3 \
  --cloud-bucket my-bucket \
  --cloud-prefix postgres/

# Verbose output with details
dbbackup cloud list \
  --cloud-provider s3 \
  --cloud-bucket my-bucket \
  --verbose
```

### Cloud Delete

Delete backups from cloud storage:

```bash
# Delete specific backup (with confirmation prompt)
dbbackup cloud delete mydb_old.dump \
  --cloud-provider s3 \
  --cloud-bucket my-bucket

# Delete without confirmation
dbbackup cloud delete mydb_old.dump \
  --cloud-provider s3 \
  --cloud-bucket my-bucket \
  --confirm
```

### Backup with Auto-Upload

```bash
# Backup and automatically upload
dbbackup backup single mydb --cloud s3://my-bucket/backups/

# With individual flags
dbbackup backup single mydb \
  --cloud-auto-upload \
  --cloud-provider s3 \
  --cloud-bucket my-bucket \
  --cloud-prefix backups/
```

### Restore from Cloud

```bash
# Restore from cloud URI (auto-download)
dbbackup restore single s3://my-bucket/backups/mydb.dump --confirm

# Restore to different database
dbbackup restore single s3://my-bucket/backups/mydb.dump \
  --target mydb_restored \
  --confirm

# Restore with database creation
dbbackup restore single s3://my-bucket/backups/mydb.dump \
  --create \
  --confirm
```

### Verify Cloud Backups

```bash
# Verify single cloud backup
dbbackup verify-backup s3://my-bucket/backups/mydb.dump

# Quick verification (size check only)
dbbackup verify-backup s3://my-bucket/backups/mydb.dump --quick

# Verbose output
dbbackup verify-backup s3://my-bucket/backups/mydb.dump --verbose
```

### Cloud Cleanup

Apply retention policies to cloud storage:

```bash
# Cleanup old backups (dry-run)
dbbackup cleanup s3://my-bucket/backups/ \
  --retention-days 30 \
  --min-backups 5 \
  --dry-run

# Actual cleanup
dbbackup cleanup s3://my-bucket/backups/ \
  --retention-days 30 \
  --min-backups 5

# Pattern-based cleanup
dbbackup cleanup s3://my-bucket/backups/ \
  --retention-days 7 \
  --min-backups 3 \
  --pattern "mydb_*.dump"
```

---

## Provider-Specific Setup

### AWS S3

**Prerequisites:**
- AWS account
- S3 bucket created
- IAM user with S3 permissions

**IAM Policy:**
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket/*",
        "arn:aws:s3:::my-bucket"
      ]
    }
  ]
}
```

**Configuration:**
```bash
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
export AWS_REGION="us-east-1"

dbbackup backup single mydb --cloud s3://my-bucket/backups/
```

### MinIO (Self-Hosted)

**Setup with Docker:**
```bash
docker run -d \
  -p 9000:9000 \
  -p 9001:9001 \
  -e "MINIO_ROOT_USER=minioadmin" \
  -e "MINIO_ROOT_PASSWORD=minioadmin123" \
  --name minio \
  minio/minio server /data --console-address ":9001"

# Create bucket
docker exec minio mc alias set local http://localhost:9000 minioadmin minioadmin123
docker exec minio mc mb local/backups
```

**Configuration:**
```bash
export AWS_ACCESS_KEY_ID="minioadmin"
export AWS_SECRET_ACCESS_KEY="minioadmin123"
export AWS_ENDPOINT_URL="http://localhost:9000"

dbbackup backup single mydb --cloud minio://backups/db/
```

**Or use docker-compose:**
```bash
docker-compose -f docker-compose.minio.yml up -d
```

### Backblaze B2

**Prerequisites:**
- Backblaze account
- B2 bucket created
- Application key generated

**Configuration:**
```bash
export AWS_ACCESS_KEY_ID="<your-b2-key-id>"
export AWS_SECRET_ACCESS_KEY="<your-b2-application-key>"
export AWS_ENDPOINT_URL="https://s3.us-west-002.backblazeb2.com"
export AWS_REGION="us-west-002"

dbbackup backup single mydb --cloud b2://my-bucket/backups/
```

### Google Cloud Storage

**Prerequisites:**
- GCP account
- GCS bucket with S3 compatibility enabled
- HMAC keys generated

**Enable S3 Compatibility:**
1. Go to Cloud Storage > Settings > Interoperability
2. Create HMAC keys
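
If you prefer the command line, the HMAC key pair from step 2 can also be created with `gsutil` (assuming the Google Cloud SDK is installed and a service account already exists; the account address below is a placeholder):

```bash
# Prints an access ID and secret to use as AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
gsutil hmac create your-service-account@your-project.iam.gserviceaccount.com
```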

**Configuration:**
```bash
export AWS_ACCESS_KEY_ID="<your-hmac-access-id>"
export AWS_SECRET_ACCESS_KEY="<your-hmac-secret>"
export AWS_ENDPOINT_URL="https://storage.googleapis.com"

dbbackup backup single mydb --cloud gs://my-bucket/backups/
```

---

## Features

### Multipart Upload

Files larger than 100MB automatically use multipart upload for:
- Faster transfers with parallel parts
- Resume capability on failure
- Better reliability for large files

**Configuration:**
- Part size: 10MB
- Concurrency: 10 parallel parts
- Automatic based on file size
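
For reference, this is roughly what a multipart upload with those settings looks like using the `aws-sdk-go-v2` upload manager (one of the dependencies listed in the roadmap). It is a minimal sketch of the general technique, not necessarily dbbackup's internal implementation; bucket, key, and path are placeholders:

```go
package cloudsketch

import (
	"context"
	"os"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/feature/s3/manager"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// uploadMultipart streams a local file to S3 in 10MB parts, 10 parts in parallel.
func uploadMultipart(ctx context.Context, bucket, key, path string) error {
	cfg, err := config.LoadDefaultConfig(ctx) // picks up the AWS_* environment variables
	if err != nil {
		return err
	}
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	uploader := manager.NewUploader(s3.NewFromConfig(cfg), func(u *manager.Uploader) {
		u.PartSize = 10 * 1024 * 1024 // 10MB parts
		u.Concurrency = 10            // 10 parallel parts
	})
	_, err = uploader.Upload(ctx, &s3.PutObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
		Body:   f,
	})
	return err
}
```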

### Progress Tracking

Real-time progress for uploads and downloads:

```bash
Uploading backup to cloud...
Progress: 10%
Progress: 20%
Progress: 30%
...
Upload completed: /backups/mydb.dump (1.2 GB)
```

### Metadata Synchronization

Automatically uploads `.meta.json` with each backup containing:
- SHA-256 checksum
- Database name and type
- Backup timestamp
- File size
- Compression info
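
A sidecar file for `mydb.dump` might therefore look like the sketch below. The field names follow the metadata format sketched in ROADMAP.md; `database_type` is an assumed name for the database-type field and all values are placeholders:

```json
{
  "version": "2.0",
  "timestamp": "2026-01-15T10:30:00Z",
  "database": "mydb",
  "database_type": "postgresql",
  "size_bytes": 1073741824,
  "sha256": "abc123...",
  "compression": "gzip-9"
}
```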

### Automatic Verification

Downloads from cloud include automatic checksum verification:

```bash
Downloading backup from cloud...
Download completed
Verifying checksum...
Checksum verified successfully: sha256=abc123...
```
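
The check itself boils down to streaming the downloaded file through SHA-256 and comparing the digest against the checksum recorded in the backup's metadata. A minimal sketch in Go, standard library only:

```go
package verifysketch

import (
	"crypto/sha256"
	"encoding/hex"
	"io"
	"os"
)

// fileSHA256 hashes a file in a streaming fashion, so even multi-GB backups
// can be verified without loading them into memory.
func fileSHA256(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}
```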

---

## Testing

### Local Testing with MinIO

**1. Start MinIO:**
```bash
docker-compose -f docker-compose.minio.yml up -d
```

**2. Run Integration Tests:**
```bash
./scripts/test_cloud_storage.sh
```

**3. Manual Testing:**
```bash
# Set credentials
export AWS_ACCESS_KEY_ID=minioadmin
export AWS_SECRET_ACCESS_KEY=minioadmin123
export AWS_ENDPOINT_URL=http://localhost:9000

# Test backup
dbbackup backup single mydb --cloud minio://test-backups/test/

# Test restore
dbbackup restore single minio://test-backups/test/mydb.dump --confirm

# Test verify
dbbackup verify-backup minio://test-backups/test/mydb.dump

# Test cleanup
dbbackup cleanup minio://test-backups/test/ --retention-days 7 --dry-run
```

**4. Access MinIO Console:**
- URL: http://localhost:9001
- Username: `minioadmin`
- Password: `minioadmin123`

---

## Best Practices

### Security

1. **Never commit credentials:**
   ```bash
   # Use environment variables or config files
   export AWS_ACCESS_KEY_ID="..."
   ```

2. **Use IAM roles when possible:**
   ```bash
   # On EC2/ECS, credentials are automatic
   dbbackup backup single mydb --cloud s3://bucket/
   ```

3. **Restrict bucket permissions:**
   - Minimum required: GetObject, PutObject, DeleteObject, ListBucket
   - Use bucket policies to limit access

4. **Enable encryption:**
   - S3: Server-side encryption enabled by default
   - MinIO: Configure encryption at rest

### Performance

1. **Use multipart for large backups:**
   - Automatic for files >100MB
   - Configure concurrency based on bandwidth

2. **Choose nearby regions:**
   ```bash
   --cloud-region us-west-2  # Closest to your servers
   ```

3. **Use compression:**
   ```bash
   --compression gzip  # Reduces upload size
   ```

### Reliability

1. **Test restores regularly:**
   ```bash
   # Monthly restore test
   dbbackup restore single s3://bucket/latest.dump --target test_restore
   ```

2. **Verify backups:**
   ```bash
   # Daily verification
   dbbackup verify-backup s3://bucket/backups/*.dump
   ```

3. **Monitor retention:**
   ```bash
   # Weekly cleanup check
   dbbackup cleanup s3://bucket/ --retention-days 30 --dry-run
   ```

### Cost Optimization

1. **Use lifecycle policies:**
   - S3: Transition old backups to Glacier
   - Configure in the AWS Console or with a bucket lifecycle policy (see the example after this list)

2. **Cleanup old backups:**
   ```bash
   dbbackup cleanup s3://bucket/ --retention-days 30 --min-backups 10
   ```

3. **Choose appropriate storage class:**
   - Standard: Frequent access
   - Infrequent Access: Monthly restores
   - Glacier: Long-term archive
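
For the lifecycle-policy point above, a minimal example using the AWS CLI is sketched below. It transitions objects under a `backups/` prefix to Glacier after 90 days; the bucket name, prefix, and day count are placeholders to adapt to your retention needs:

```bash
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "archive-old-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "backups/" },
      "Transitions": [{ "Days": 90, "StorageClass": "GLACIER" }]
    }]
  }'
```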

---

## Troubleshooting

### Connection Issues

**Problem:** Cannot connect to S3/MinIO

```bash
Error: failed to create cloud backend: failed to load AWS config
```

**Solution:**
1. Check credentials:
   ```bash
   echo $AWS_ACCESS_KEY_ID
   echo $AWS_SECRET_ACCESS_KEY
   ```

2. Test connectivity:
   ```bash
   curl $AWS_ENDPOINT_URL
   ```

3. Verify the endpoint URL for MinIO/B2

### Permission Errors

**Problem:** Access denied

```bash
Error: failed to upload to S3: AccessDenied
```

**Solution:**
1. Check that the IAM policy includes the required permissions
2. Verify the bucket name is correct
3. Check that the bucket policy allows your IAM user

### Upload Failures

**Problem:** Large file upload fails

```bash
Error: multipart upload failed: connection timeout
```

**Solution:**
1. Check network stability
2. Retry - multipart uploads resume automatically
3. Increase the timeout in the config
4. Check that the firewall allows outbound HTTPS

### Verification Failures

**Problem:** Checksum mismatch

```bash
Error: checksum mismatch: expected abc123, got def456
```

**Solution:**
1. Re-download the backup
2. Check whether the file was corrupted during upload
3. Verify the original backup integrity locally
4. Re-upload if necessary

---

## Examples

### Full Backup Workflow

```bash
#!/bin/bash
# Daily backup to S3 with retention

# Backup all databases
for db in db1 db2 db3; do
  dbbackup backup single $db \
    --cloud s3://production-backups/daily/$db/ \
    --compression gzip
done

# Cleanup old backups (keep 30 days, min 10 backups)
dbbackup cleanup s3://production-backups/daily/ \
  --retention-days 30 \
  --min-backups 10

# Verify today's backups
dbbackup verify-backup s3://production-backups/daily/*/$(date +%Y%m%d)*.dump
```

### Disaster Recovery

```bash
#!/bin/bash
# Restore from cloud backup

# List available backups
dbbackup cloud list \
  --cloud-provider s3 \
  --cloud-bucket disaster-recovery \
  --verbose

# Restore latest backup
LATEST=$(dbbackup cloud list \
  --cloud-provider s3 \
  --cloud-bucket disaster-recovery | tail -1)

dbbackup restore single "s3://disaster-recovery/$LATEST" \
  --target restored_db \
  --create \
  --confirm
```

### Multi-Cloud Strategy

```bash
#!/bin/bash
# Backup to both AWS S3 and Backblaze B2

# Backup to S3
dbbackup backup single production_db \
  --cloud s3://aws-backups/prod/ \
  --output-dir /tmp/backups

# Also upload to B2
BACKUP_FILE=$(ls -t /tmp/backups/*.dump | head -1)
dbbackup cloud upload "$BACKUP_FILE" \
  --cloud-provider b2 \
  --cloud-bucket b2-offsite-backups \
  --cloud-endpoint https://s3.us-west-002.backblazeb2.com

# Verify both locations
dbbackup verify-backup s3://aws-backups/prod/$(basename $BACKUP_FILE)
dbbackup verify-backup b2://b2-offsite-backups/$(basename $BACKUP_FILE)
```

---

## FAQ

**Q: Can I use dbbackup with my existing S3 buckets?**
A: Yes! Just specify your bucket name and credentials.

**Q: Do I need to keep local backups?**
A: No, use the `--cloud` flag to upload directly without keeping local copies.

**Q: What happens if the upload fails?**
A: The backup still succeeds locally. The upload failure is logged but doesn't fail the backup.

**Q: Can I restore without downloading?**
A: No, backups are downloaded to a temporary directory, then restored and cleaned up.

**Q: How much does cloud storage cost?**
A: Varies by provider:
- AWS S3: ~$0.023/GB/month + transfer
- Backblaze B2: ~$0.005/GB/month + transfer
- MinIO: Self-hosted, hardware costs only

**Q: Can I use multiple cloud providers?**
A: Yes! Use different URIs or upload to multiple destinations.

**Q: Is multipart upload automatic?**
A: Yes, automatically used for files >100MB.

**Q: Can I use S3 Glacier?**
A: Yes, but restore requires thawing. Use lifecycle policies for automatic archival.

---

## Related Documentation

- [README.md](README.md) - Main documentation
- [ROADMAP.md](ROADMAP.md) - Feature roadmap
- [docker-compose.minio.yml](docker-compose.minio.yml) - MinIO test setup
- [scripts/test_cloud_storage.sh](scripts/test_cloud_storage.sh) - Integration tests

---

## Support

For issues or questions:
- GitHub Issues: [Create an issue](https://github.com/yourusername/dbbackup/issues)
- Documentation: Check README.md and inline help
- Examples: See `scripts/test_cloud_storage.sh`

README.md (105 lines changed)
@@ -378,6 +378,111 @@ Restore entire PostgreSQL cluster from archive:
./dbbackup restore cluster ARCHIVE_FILE [OPTIONS]
```

### Verification & Maintenance

#### Verify Backup Integrity

Verify backup files using SHA-256 checksums and metadata validation:

```bash
./dbbackup verify-backup BACKUP_FILE [OPTIONS]
```

**Options:**

- `--quick` - Quick verification (size check only, no checksum calculation)
- `--verbose` - Show detailed information about each backup

**Examples:**

```bash
# Verify single backup (full SHA-256 check)
./dbbackup verify-backup /backups/mydb_20251125.dump

# Verify all backups in directory
./dbbackup verify-backup /backups/*.dump --verbose

# Quick verification (fast, size check only)
./dbbackup verify-backup /backups/*.dump --quick
```

**Output:**
```
Verifying 3 backup file(s)...

📁 mydb_20251125.dump
   ✅ VALID
   Size: 2.5 GiB
   SHA-256: 7e166d4cb7276e1310d76922f45eda0333a6aeac...
   Database: mydb (postgresql)
   Created: 2025-11-25T19:00:00Z

──────────────────────────────────────────────────
Total: 3 backups
✅ Valid: 3
```

#### Cleanup Old Backups

Automatically remove old backups based on retention policy:

```bash
./dbbackup cleanup BACKUP_DIRECTORY [OPTIONS]
```

**Options:**

- `--retention-days INT` - Delete backups older than N days (default: 30)
- `--min-backups INT` - Always keep at least N most recent backups (default: 5)
- `--dry-run` - Preview what would be deleted without actually deleting
- `--pattern STRING` - Only clean backups matching pattern (e.g., "mydb_*.dump")

**Retention Policy:**

The cleanup command uses a safe retention policy:
1. Backups older than `--retention-days` are eligible for deletion
2. At least `--min-backups` most recent backups are always kept
3. Both conditions must be met for a backup to be deleted

**Examples:**

```bash
# Clean up backups older than 30 days (keep at least 5)
./dbbackup cleanup /backups --retention-days 30 --min-backups 5

# Preview what would be deleted
./dbbackup cleanup /backups --retention-days 7 --dry-run

# Clean specific database backups
./dbbackup cleanup /backups --pattern "mydb_*.dump"

# Aggressive cleanup (keep only 3 most recent)
./dbbackup cleanup /backups --retention-days 1 --min-backups 3
```

**Output:**
```
🗑️  Cleanup Policy:
   Directory: /backups
   Retention: 30 days
   Min backups: 5

📊 Results:
   Total backups: 12
   Eligible for deletion: 7

✅ Deleted 7 backup(s):
   - old_db_20251001.dump
   - old_db_20251002.dump
   ...

📦 Kept 5 backup(s)

💾 Space freed: 15.2 GiB
──────────────────────────────────────────────────
✅ Cleanup completed successfully
```

**Options:**

- `--confirm` - Confirm and execute restore (required for safety)

ROADMAP.md (new file, 523 lines)
@@ -0,0 +1,523 @@
# dbbackup Version 2.0 Roadmap

## Current Status: v1.1 (Production Ready)
- ✅ 24/24 automated tests passing (100%)
- ✅ PostgreSQL, MySQL, MariaDB support
- ✅ Interactive TUI + CLI
- ✅ Cluster backup/restore
- ✅ Docker support
- ✅ Cross-platform binaries

---

## Version 2.0 Vision: Enterprise-Grade Features

Transform dbbackup into an enterprise-ready backup solution with cloud storage, incremental backups, PITR, and encryption.

**Target Release:** Q2 2026 (3-4 months)

---

## Priority Matrix

```
                    HIGH IMPACT
                         │
    ┌────────────────────┼────────────────────┐
    │                    │                    │
    │  Cloud Storage ⭐  │  Incremental ⭐⭐⭐ │
    │  Verification      │  PITR ⭐⭐⭐        │
    │  Retention         │  Encryption ⭐⭐    │
LOW │                    │                    │ HIGH
EFFORT ──────────────────┼──────────────────── EFFORT
    │                    │                    │
    │  Metrics           │  Web UI (optional) │
    │  Remote Restore    │  Replication Slots │
    │                    │                    │
    └────────────────────┼────────────────────┘
                         │
                    LOW IMPACT
```

---

## Development Phases

### Phase 1: Foundation (Weeks 1-4)

**Sprint 1: Verification & Retention (2 weeks)**

**Goals:**
- Backup integrity verification with SHA-256 checksums
- Automated retention policy enforcement
- Structured backup metadata

**Features:**
- ✅ Generate SHA-256 checksums during backup
- ✅ Verify backups before/after restore
- ✅ Automatic cleanup of old backups
- ✅ Retention policy: days + minimum count
- ✅ Backup metadata in JSON format

**Deliverables:**
```bash
# New commands
dbbackup verify backup.dump
dbbackup cleanup --retention-days 30 --min-backups 5

# Metadata format
{
  "version": "2.0",
  "timestamp": "2026-01-15T10:30:00Z",
  "database": "production",
  "size_bytes": 1073741824,
  "sha256": "abc123...",
  "db_version": "PostgreSQL 15.3",
  "compression": "gzip-9"
}
```

**Implementation:**
- `internal/verification/` - Checksum calculation and validation
- `internal/retention/` - Policy enforcement
- `internal/metadata/` - Backup metadata management

---

**Sprint 2: Cloud Storage (2 weeks)**

**Goals:**
- Upload backups to cloud storage
- Support multiple cloud providers
- Download and restore from cloud

**Providers:**
- ✅ AWS S3
- ✅ MinIO (S3-compatible)
- ✅ Backblaze B2
- ✅ Azure Blob Storage (optional)
- ✅ Google Cloud Storage (optional)

**Configuration:**
```toml
[cloud]
enabled = true
provider = "s3"  # s3, minio, azure, gcs, b2
auto_upload = true

[cloud.s3]
bucket = "db-backups"
region = "us-east-1"
endpoint = "s3.amazonaws.com"  # Custom for MinIO
access_key = "..."  # Or use IAM role
secret_key = "..."
```

**New Commands:**
```bash
# Upload existing backup
dbbackup cloud upload backup.dump

# List cloud backups
dbbackup cloud list

# Download from cloud
dbbackup cloud download backup_id

# Restore directly from cloud
dbbackup restore single s3://bucket/backup.dump --target mydb
```

**Dependencies:**
```go
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
"cloud.google.com/go/storage"
```

---

### Phase 2: Advanced Backup (Weeks 5-10)

**Sprint 3: Incremental Backups (3 weeks)**

**Goals:**
- Reduce backup time and storage
- File-level incremental for PostgreSQL
- Binary log incremental for MySQL

**PostgreSQL Strategy:**
```
Full Backup (Base)
  ├─ Incremental 1 (changed files since base)
  ├─ Incremental 2 (changed files since inc1)
  └─ Incremental 3 (changed files since inc2)
```

**MySQL Strategy:**
```
Full Backup
  ├─ Binary Log 1 (changes since full)
  ├─ Binary Log 2
  └─ Binary Log 3
```

**Implementation:**
```bash
# Create base backup
dbbackup backup single mydb --mode full

# Create incremental
dbbackup backup single mydb --mode incremental

# Restore (automatically applies incrementals)
dbbackup restore single backup.dump --apply-incrementals
```

**File Structure:**
```
backups/
├── mydb_full_20260115.dump
├── mydb_full_20260115.meta
├── mydb_incr_20260116.dump   # Contains only changes
├── mydb_incr_20260116.meta   # Points to base: mydb_full_20260115
└── mydb_incr_20260117.dump
```

---

**Sprint 4: Security & Encryption (2 weeks)**

**Goals:**
- Encrypt backups at rest
- Secure key management
- Encrypted cloud uploads

**Features:**
- ✅ AES-256-GCM encryption
- ✅ Argon2 key derivation
- ✅ Multiple key sources (file, env, vault)
- ✅ Encrypted metadata

**Configuration:**
```toml
[encryption]
enabled = true
algorithm = "aes-256-gcm"
key_file = "/etc/dbbackup/encryption.key"

# Or use environment variable
# DBBACKUP_ENCRYPTION_KEY=base64key...
```

**Commands:**
```bash
# Generate encryption key
dbbackup keys generate

# Encrypt existing backup
dbbackup encrypt backup.dump

# Decrypt backup
dbbackup decrypt backup.dump.enc

# Automatic encryption
dbbackup backup single mydb --encrypt
```

**File Format:**
```
+------------------+
| Encryption Header|  (IV, algorithm, key ID)
+------------------+
| Encrypted Data   |  (AES-256-GCM)
+------------------+
| Auth Tag         |  (HMAC for integrity)
+------------------+
```
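
As a rough illustration of the encrypt path above, the sketch below derives a 256-bit key with Argon2id and seals the payload with AES-256-GCM, using only the `crypto/aes`, `crypto/cipher`, and `golang.org/x/crypto/argon2` packages already listed as dependencies. The Argon2 cost parameters are illustrative rather than final values, and `Seal` appends GCM's own authentication tag to the ciphertext:

```go
package encryptionsketch

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"

	"golang.org/x/crypto/argon2"
)

// encryptBackup derives a 32-byte key from a passphrase and seals the
// plaintext with AES-256-GCM. The random nonce is prepended to the output.
func encryptBackup(passphrase, salt, plaintext []byte) ([]byte, error) {
	key := argon2.IDKey(passphrase, salt, 1, 64*1024, 4, 32)

	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	// Seal appends the ciphertext and auth tag after the nonce.
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}
```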

---

**Sprint 5: Point-in-Time Recovery - PITR (4 weeks)**

**Goals:**
- Restore to any point in time
- WAL archiving for PostgreSQL
- Binary log archiving for MySQL

**PostgreSQL Implementation:**

```toml
[pitr]
enabled = true
wal_archive_dir = "/backups/wal_archive"
wal_retention_days = 7

# PostgreSQL config (auto-configured by dbbackup)
# archive_mode = on
# archive_command = '/usr/local/bin/dbbackup archive-wal %p %f'
```

**Commands:**
```bash
# Enable PITR
dbbackup pitr enable

# Archive WAL manually
dbbackup archive-wal /var/lib/postgresql/pg_wal/000000010000000000000001

# Restore to point-in-time
dbbackup restore single backup.dump \
  --target-time "2026-01-15 14:30:00" \
  --target mydb

# Show available restore points
dbbackup pitr timeline
```

**WAL Archive Structure:**
```
wal_archive/
├── 000000010000000000000001
├── 000000010000000000000002
├── 000000010000000000000003
└── timeline.json
```

**MySQL Implementation:**
```bash
# Archive binary logs
dbbackup binlog archive --start-datetime "2026-01-15 00:00:00"

# PITR restore
dbbackup restore single backup.sql \
  --target-time "2026-01-15 14:30:00" \
  --apply-binlogs
```

---

### Phase 3: Enterprise Features (Weeks 11-16)

**Sprint 6: Observability & Integration (3 weeks)**

**Features:**

1. **Prometheus Metrics**
   ```
   # Exposed metrics
   dbbackup_backup_duration_seconds
   dbbackup_backup_size_bytes
   dbbackup_backup_success_total
   dbbackup_restore_duration_seconds
   dbbackup_last_backup_timestamp
   dbbackup_cloud_upload_duration_seconds
   ```

   **Endpoint:**
   ```bash
   # Start metrics server
   dbbackup metrics serve --port 9090

   # Scrape endpoint
   curl http://localhost:9090/metrics
   ```

2. **Remote Restore**
   ```bash
   # Restore to remote server
   dbbackup restore single backup.dump \
     --remote-host db-replica-01 \
     --remote-user postgres \
     --remote-port 22 \
     --confirm
   ```

3. **Replication Slots (PostgreSQL)**
   ```bash
   # Create replication slot for continuous WAL streaming
   dbbackup replication create-slot backup_slot

   # Stream WALs via replication
   dbbackup replication stream backup_slot
   ```

4. **Webhook Notifications**
   ```toml
   [notifications]
   enabled = true
   webhook_url = "https://slack.com/webhook/..."
   notify_on = ["backup_complete", "backup_failed", "restore_complete"]
   ```
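
Following on from the webhook configuration above, delivery on the dbbackup side could be as simple as the sketch below. The payload shape is an assumption (the roadmap does not specify one), posted with the standard library only:

```go
package notifysketch

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// notify posts a small JSON event to the configured webhook_url.
// Event names mirror the notify_on values above, e.g. "backup_complete".
func notify(webhookURL, event, database string) error {
	payload, _ := json.Marshal(map[string]string{
		"event":    event,
		"database": database,
	})
	resp, err := http.Post(webhookURL, "application/json", bytes.NewReader(payload))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("webhook returned %s", resp.Status)
	}
	return nil
}
```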

---

## Technical Architecture

### New Directory Structure

```
internal/
├── cloud/           # Cloud storage backends
│   ├── interface.go
│   ├── s3.go
│   ├── azure.go
│   └── gcs.go
├── encryption/      # Encryption layer
│   ├── aes.go
│   ├── keys.go
│   └── vault.go
├── incremental/     # Incremental backup engine
│   ├── postgres.go
│   └── mysql.go
├── pitr/            # Point-in-time recovery
│   ├── wal.go
│   ├── binlog.go
│   └── timeline.go
├── verification/    # Backup verification
│   ├── checksum.go
│   └── validate.go
├── retention/       # Retention policy
│   └── cleanup.go
├── metrics/         # Prometheus metrics
│   └── exporter.go
└── replication/     # Replication management
    └── slots.go
```

### Required Dependencies

```go
// Cloud storage
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
"cloud.google.com/go/storage"

// Encryption
"crypto/aes"
"crypto/cipher"
"golang.org/x/crypto/argon2"

// Metrics
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"

// PostgreSQL replication
"github.com/jackc/pgx/v5/pgconn"

// Fast file scanning for incrementals
"github.com/karrick/godirwalk"
```
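
To give a feel for how `internal/metrics/exporter.go` might use the client_golang packages above, here is a minimal, assumed sketch that registers one of the Sprint 6 metrics and serves the scrape endpoint; it is not the project's actual exporter:

```go
package metricssketch

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// backupDuration corresponds to dbbackup_backup_duration_seconds from Sprint 6.
var backupDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
	Name: "dbbackup_backup_duration_seconds",
	Help: "Duration of backup runs in seconds.",
})

// serveMetrics registers the metric and exposes /metrics for Prometheus to scrape.
func serveMetrics(addr string) error {
	prometheus.MustRegister(backupDuration)
	http.Handle("/metrics", promhttp.Handler())
	return http.ListenAndServe(addr, nil) // e.g. addr = ":9090"
}
```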

---

## Testing Strategy

### v2.0 Test Coverage Goals
- Minimum 90% code coverage
- Integration tests for all cloud providers
- End-to-end PITR scenarios
- Performance benchmarks for incremental backups
- Encryption/decryption validation
- Multi-database restore tests

### New Test Suites
```bash
# Cloud storage tests
./run_qa_tests.sh --suite cloud

# Incremental backup tests
./run_qa_tests.sh --suite incremental

# PITR tests
./run_qa_tests.sh --suite pitr

# Encryption tests
./run_qa_tests.sh --suite encryption

# Full v2.0 suite
./run_qa_tests.sh --suite v2
```

---

## Migration Path

### v1.x → v2.0 Compatibility
- ✅ All v1.x backups readable in v2.0
- ✅ Configuration auto-migration
- ✅ Metadata format upgrade
- ✅ Backward-compatible commands

### Deprecation Timeline
- v2.0: Warning for old config format
- v2.1: Full migration required
- v3.0: Old format no longer supported

---

## Documentation Updates

### New Docs
- `CLOUD.md` - Cloud storage configuration
- `INCREMENTAL.md` - Incremental backup guide
- `PITR.md` - Point-in-time recovery
- `ENCRYPTION.md` - Encryption setup
- `METRICS.md` - Prometheus integration

---

## Success Metrics

### v2.0 Goals
- 🎯 95%+ test coverage
- 🎯 Support 1TB+ databases with incrementals
- 🎯 PITR with <5 minute granularity
- 🎯 Cloud upload/download >100MB/s
- 🎯 Encryption overhead <10%
- 🎯 Full compatibility with pgBackRest for PostgreSQL
- 🎯 Industry-leading MySQL PITR solution

---

## Release Schedule

- **v2.0-alpha** (End Sprint 3): Cloud + Verification
- **v2.0-beta** (End Sprint 5): + Incremental + PITR
- **v2.0-rc1** (End Sprint 6): + Enterprise features
- **v2.0 GA** (Q2 2026): Production release

---

## What Makes v2.0 Unique

After v2.0, dbbackup will be:

✅ **Only multi-database tool** with full PITR support
✅ **Best-in-class UX** (TUI + CLI + Docker + K8s)
✅ **Feature parity** with pgBackRest (PostgreSQL)
✅ **Superior to mysqldump** with incremental + PITR
✅ **Cloud-native** with multi-provider support
✅ **Enterprise-ready** with encryption + metrics
✅ **Zero-config** for 80% of use cases

---

## Contributing

Want to contribute to v2.0? Check out:
- [CONTRIBUTING.md](CONTRIBUTING.md)
- [Good First Issues](https://git.uuxo.net/uuxo/dbbackup/issues?labels=good-first-issue)
- [v2.0 Milestone](https://git.uuxo.net/uuxo/dbbackup/milestone/2)

---

## Questions?

Open an issue or start a discussion:
- Issues: https://git.uuxo.net/uuxo/dbbackup/issues
- Discussions: https://git.uuxo.net/uuxo/dbbackup/discussions

---

**Next Step:** Sprint 1 - Backup Verification & Retention (January 2026)

build_docker.sh (new executable file, 38 lines)
@@ -0,0 +1,38 @@
#!/bin/bash
# Build and push Docker images

set -e

VERSION="1.1"
REGISTRY="git.uuxo.net/uuxo"
IMAGE_NAME="dbbackup"

echo "=== Building Docker Image ==="
echo "Version: $VERSION"
echo "Registry: $REGISTRY"
echo ""

# Build image
echo "Building image..."
docker build -t ${IMAGE_NAME}:${VERSION} -t ${IMAGE_NAME}:latest .

# Tag for registry
echo "Tagging for registry..."
docker tag ${IMAGE_NAME}:${VERSION} ${REGISTRY}/${IMAGE_NAME}:${VERSION}
docker tag ${IMAGE_NAME}:latest ${REGISTRY}/${IMAGE_NAME}:latest

# Show images
echo ""
echo "Images built:"
docker images ${IMAGE_NAME}

echo ""
echo "✅ Build complete!"
echo ""
echo "To push to registry:"
echo "  docker push ${REGISTRY}/${IMAGE_NAME}:${VERSION}"
echo "  docker push ${REGISTRY}/${IMAGE_NAME}:latest"
echo ""
echo "To test locally:"
echo "  docker run --rm ${IMAGE_NAME}:latest --version"
echo "  docker run --rm -it ${IMAGE_NAME}:latest interactive"

@@ -3,6 +3,7 @@ package cmd
import (
	"fmt"

	"dbbackup/internal/cloud"
	"github.com/spf13/cobra"
)

@@ -90,6 +91,65 @@ func init() {
	backupCmd.AddCommand(singleCmd)
	backupCmd.AddCommand(sampleCmd)

	// Cloud storage flags for all backup commands
	for _, cmd := range []*cobra.Command{clusterCmd, singleCmd, sampleCmd} {
		cmd.Flags().String("cloud", "", "Cloud storage URI (e.g., s3://bucket/path) - takes precedence over individual flags")
		cmd.Flags().Bool("cloud-auto-upload", false, "Automatically upload backup to cloud after completion")
		cmd.Flags().String("cloud-provider", "", "Cloud provider (s3, minio, b2)")
		cmd.Flags().String("cloud-bucket", "", "Cloud bucket name")
		cmd.Flags().String("cloud-region", "us-east-1", "Cloud region")
		cmd.Flags().String("cloud-endpoint", "", "Cloud endpoint (for MinIO/B2)")
		cmd.Flags().String("cloud-prefix", "", "Cloud key prefix")

		// Add PreRunE to update config from flags
		originalPreRun := cmd.PreRunE
		cmd.PreRunE = func(c *cobra.Command, args []string) error {
			// Call the original PreRunE if it exists
			if originalPreRun != nil {
				if err := originalPreRun(c, args); err != nil {
					return err
				}
			}

			// Check if the --cloud URI flag is provided (takes precedence)
			if c.Flags().Changed("cloud") {
				if err := parseCloudURIFlag(c); err != nil {
					return err
				}
			} else {
				// Update cloud config from individual flags
				if c.Flags().Changed("cloud-auto-upload") {
					if autoUpload, _ := c.Flags().GetBool("cloud-auto-upload"); autoUpload {
						cfg.CloudEnabled = true
						cfg.CloudAutoUpload = true
					}
				}

				if c.Flags().Changed("cloud-provider") {
					cfg.CloudProvider, _ = c.Flags().GetString("cloud-provider")
				}

				if c.Flags().Changed("cloud-bucket") {
					cfg.CloudBucket, _ = c.Flags().GetString("cloud-bucket")
				}

				if c.Flags().Changed("cloud-region") {
					cfg.CloudRegion, _ = c.Flags().GetString("cloud-region")
				}

				if c.Flags().Changed("cloud-endpoint") {
					cfg.CloudEndpoint, _ = c.Flags().GetString("cloud-endpoint")
				}

				if c.Flags().Changed("cloud-prefix") {
					cfg.CloudPrefix, _ = c.Flags().GetString("cloud-prefix")
				}
			}

			return nil
		}
	}

// Sample backup flags - use local variables to avoid cfg access during init
var sampleStrategy string
var sampleValue int

@@ -127,3 +187,39 @@ func init() {
	// Mark the strategy flags as mutually exclusive
	sampleCmd.MarkFlagsMutuallyExclusive("sample-ratio", "sample-percent", "sample-count")
}

// parseCloudURIFlag parses the --cloud URI flag and updates config
func parseCloudURIFlag(cmd *cobra.Command) error {
	cloudURI, _ := cmd.Flags().GetString("cloud")
	if cloudURI == "" {
		return nil
	}

	// Parse cloud URI
	uri, err := cloud.ParseCloudURI(cloudURI)
	if err != nil {
		return fmt.Errorf("invalid cloud URI: %w", err)
	}

	// Enable cloud and auto-upload
	cfg.CloudEnabled = true
	cfg.CloudAutoUpload = true

	// Update config from URI
	cfg.CloudProvider = uri.Provider
	cfg.CloudBucket = uri.Bucket

	if uri.Region != "" {
		cfg.CloudRegion = uri.Region
	}

	if uri.Endpoint != "" {
		cfg.CloudEndpoint = uri.Endpoint
	}

	if uri.Path != "" {
		cfg.CloudPrefix = uri.Dir()
	}

	return nil
}

cmd/cleanup.go (new file, 334 lines)
@@ -0,0 +1,334 @@
package cmd

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"time"

	"dbbackup/internal/cloud"
	"dbbackup/internal/metadata"
	"dbbackup/internal/retention"

	"github.com/spf13/cobra"
)

var cleanupCmd = &cobra.Command{
	Use:   "cleanup [backup-directory]",
	Short: "Clean up old backups based on retention policy",
	Long: `Remove old backup files based on retention policy while maintaining minimum backup count.

The retention policy ensures:
1. Backups older than --retention-days are eligible for deletion
2. At least --min-backups most recent backups are always kept
3. Both conditions must be met for deletion

Examples:
  # Clean up backups older than 30 days (keep at least 5)
  dbbackup cleanup /backups --retention-days 30 --min-backups 5

  # Dry run to see what would be deleted
  dbbackup cleanup /backups --retention-days 7 --dry-run

  # Clean up specific database backups only
  dbbackup cleanup /backups --pattern "mydb_*.dump"

  # Aggressive cleanup (keep only 3 most recent)
  dbbackup cleanup /backups --retention-days 1 --min-backups 3`,
	Args: cobra.ExactArgs(1),
	RunE: runCleanup,
}

var (
	retentionDays  int
	minBackups     int
	dryRun         bool
	cleanupPattern string
)

func init() {
	rootCmd.AddCommand(cleanupCmd)
	cleanupCmd.Flags().IntVar(&retentionDays, "retention-days", 30, "Delete backups older than this many days")
	cleanupCmd.Flags().IntVar(&minBackups, "min-backups", 5, "Always keep at least this many backups")
	cleanupCmd.Flags().BoolVar(&dryRun, "dry-run", false, "Show what would be deleted without actually deleting")
	cleanupCmd.Flags().StringVar(&cleanupPattern, "pattern", "", "Only clean up backups matching this pattern (e.g., 'mydb_*.dump')")
}

func runCleanup(cmd *cobra.Command, args []string) error {
	backupPath := args[0]

	// Check if this is a cloud URI
	if isCloudURIPath(backupPath) {
		return runCloudCleanup(cmd.Context(), backupPath)
	}

	// Local cleanup
	backupDir := backupPath

	// Validate directory exists
	if !dirExists(backupDir) {
		return fmt.Errorf("backup directory does not exist: %s", backupDir)
	}

	// Create retention policy
	policy := retention.Policy{
		RetentionDays: retentionDays,
		MinBackups:    minBackups,
		DryRun:        dryRun,
	}

	fmt.Printf("🗑️  Cleanup Policy:\n")
	fmt.Printf("   Directory: %s\n", backupDir)
	fmt.Printf("   Retention: %d days\n", policy.RetentionDays)
	fmt.Printf("   Min backups: %d\n", policy.MinBackups)
	if cleanupPattern != "" {
		fmt.Printf("   Pattern: %s\n", cleanupPattern)
	}
	if dryRun {
		fmt.Printf("   Mode: DRY RUN (no files will be deleted)\n")
	}
	fmt.Println()

	var result *retention.CleanupResult
	var err error

	// Apply policy
	if cleanupPattern != "" {
		result, err = retention.CleanupByPattern(backupDir, cleanupPattern, policy)
	} else {
		result, err = retention.ApplyPolicy(backupDir, policy)
	}

	if err != nil {
		return fmt.Errorf("cleanup failed: %w", err)
	}

	// Display results
	fmt.Printf("📊 Results:\n")
	fmt.Printf("   Total backups: %d\n", result.TotalBackups)
	fmt.Printf("   Eligible for deletion: %d\n", result.EligibleForDeletion)

	if len(result.Deleted) > 0 {
		fmt.Printf("\n")
		if dryRun {
			fmt.Printf("🔍 Would delete %d backup(s):\n", len(result.Deleted))
		} else {
			fmt.Printf("✅ Deleted %d backup(s):\n", len(result.Deleted))
		}
		for _, file := range result.Deleted {
			fmt.Printf("   - %s\n", filepath.Base(file))
		}
	}

	if len(result.Kept) > 0 && len(result.Kept) <= 10 {
		fmt.Printf("\n📦 Kept %d backup(s):\n", len(result.Kept))
		for _, file := range result.Kept {
			fmt.Printf("   - %s\n", filepath.Base(file))
		}
	} else if len(result.Kept) > 10 {
		fmt.Printf("\n📦 Kept %d backup(s)\n", len(result.Kept))
	}

	if !dryRun && result.SpaceFreed > 0 {
		fmt.Printf("\n💾 Space freed: %s\n", metadata.FormatSize(result.SpaceFreed))
	}

	if len(result.Errors) > 0 {
		fmt.Printf("\n⚠️  Errors:\n")
		for _, err := range result.Errors {
			fmt.Printf("   - %v\n", err)
		}
	}

	fmt.Println(strings.Repeat("─", 50))

	if dryRun {
		fmt.Println("✅ Dry run completed (no files were deleted)")
	} else if len(result.Deleted) > 0 {
		fmt.Println("✅ Cleanup completed successfully")
	} else {
		fmt.Println("ℹ️  No backups eligible for deletion")
	}

	return nil
}

func dirExists(path string) bool {
	info, err := os.Stat(path)
	if err != nil {
		return false
	}
	return info.IsDir()
}

// isCloudURIPath checks if a path is a cloud URI
func isCloudURIPath(s string) bool {
	return cloud.IsCloudURI(s)
}

// runCloudCleanup applies retention policy to cloud storage
func runCloudCleanup(ctx context.Context, uri string) error {
	// Parse cloud URI
	cloudURI, err := cloud.ParseCloudURI(uri)
	if err != nil {
		return fmt.Errorf("invalid cloud URI: %w", err)
	}

	fmt.Printf("☁️  Cloud Cleanup Policy:\n")
	fmt.Printf("   URI: %s\n", uri)
	fmt.Printf("   Provider: %s\n", cloudURI.Provider)
	fmt.Printf("   Bucket: %s\n", cloudURI.Bucket)
	if cloudURI.Path != "" {
		fmt.Printf("   Prefix: %s\n", cloudURI.Path)
	}
	fmt.Printf("   Retention: %d days\n", retentionDays)
	fmt.Printf("   Min backups: %d\n", minBackups)
	if dryRun {
		fmt.Printf("   Mode: DRY RUN (no files will be deleted)\n")
	}
	fmt.Println()

	// Create cloud backend
	cfg := cloudURI.ToConfig()
	backend, err := cloud.NewBackend(cfg)
	if err != nil {
		return fmt.Errorf("failed to create cloud backend: %w", err)
	}

	// List all backups
	backups, err := backend.List(ctx, cloudURI.Path)
	if err != nil {
		return fmt.Errorf("failed to list cloud backups: %w", err)
	}

	if len(backups) == 0 {
		fmt.Println("No backups found in cloud storage")
		return nil
	}

	fmt.Printf("Found %d backup(s) in cloud storage\n\n", len(backups))

	// Filter backups based on pattern if specified
	var filteredBackups []cloud.BackupInfo
	if cleanupPattern != "" {
		for _, backup := range backups {
			matched, _ := filepath.Match(cleanupPattern, backup.Name)
			if matched {
				filteredBackups = append(filteredBackups, backup)
			}
		}
		fmt.Printf("Pattern matched %d backup(s)\n\n", len(filteredBackups))
	} else {
		filteredBackups = backups
	}

	// Sort by modification time (oldest first)
	// Already sorted by backend.List

	// Calculate retention date
	cutoffDate := time.Now().AddDate(0, 0, -retentionDays)

	// Determine which backups to delete
	var toDelete []cloud.BackupInfo
	var toKeep []cloud.BackupInfo

	for _, backup := range filteredBackups {
		if backup.LastModified.Before(cutoffDate) {
			toDelete = append(toDelete, backup)
		} else {
			toKeep = append(toKeep, backup)
		}
	}

	// Ensure we keep the minimum number of backups
	totalBackups := len(filteredBackups)
	if totalBackups-len(toDelete) < minBackups {
		// Need to keep more backups
		keepCount := minBackups - len(toKeep)
		if keepCount > len(toDelete) {
			keepCount = len(toDelete)
		}

		// Move the newest deletable entries back to toKeep. The stop index is
		// computed up front so shrinking toDelete inside the loop does not
		// move more than keepCount entries.
		stop := len(toDelete) - keepCount
		for i := len(toDelete) - 1; i >= stop && i >= 0; i-- {
			toKeep = append(toKeep, toDelete[i])
			toDelete = toDelete[:i]
		}
	}

	// Display results
	fmt.Printf("📊 Results:\n")
	fmt.Printf("   Total backups: %d\n", totalBackups)
	fmt.Printf("   Eligible for deletion: %d\n", len(toDelete))
|
||||||
|
fmt.Printf(" Will keep: %d\n", len(toKeep))
|
||||||
|
fmt.Println()
|
||||||
|
|
||||||
|
if len(toDelete) > 0 {
|
||||||
|
if dryRun {
|
||||||
|
fmt.Printf("🔍 Would delete %d backup(s):\n", len(toDelete))
|
||||||
|
} else {
|
||||||
|
fmt.Printf("🗑️ Deleting %d backup(s):\n", len(toDelete))
|
||||||
|
}
|
||||||
|
|
||||||
|
var totalSize int64
|
||||||
|
var deletedCount int
|
||||||
|
|
||||||
|
for _, backup := range toDelete {
|
||||||
|
fmt.Printf(" - %s (%s, %s old)\n",
|
||||||
|
backup.Name,
|
||||||
|
cloud.FormatSize(backup.Size),
|
||||||
|
formatBackupAge(backup.LastModified))
|
||||||
|
|
||||||
|
totalSize += backup.Size
|
||||||
|
|
||||||
|
if !dryRun {
|
||||||
|
if err := backend.Delete(ctx, backup.Key); err != nil {
|
||||||
|
fmt.Printf(" ❌ Error: %v\n", err)
|
||||||
|
} else {
|
||||||
|
deletedCount++
|
||||||
|
// Also try to delete metadata
|
||||||
|
backend.Delete(ctx, backup.Key+".meta.json")
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
fmt.Printf("\n💾 Space %s: %s\n",
|
||||||
|
map[bool]string{true: "would be freed", false: "freed"}[dryRun],
|
||||||
|
cloud.FormatSize(totalSize))
|
||||||
|
|
||||||
|
if !dryRun && deletedCount > 0 {
|
||||||
|
fmt.Printf("✅ Successfully deleted %d backup(s)\n", deletedCount)
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
fmt.Println("No backups eligible for deletion")
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// formatBackupAge returns a human-readable age string from a time.Time
|
||||||
|
func formatBackupAge(t time.Time) string {
|
||||||
|
d := time.Since(t)
|
||||||
|
days := int(d.Hours() / 24)
|
||||||
|
|
||||||
|
if days == 0 {
|
||||||
|
return "today"
|
||||||
|
} else if days == 1 {
|
||||||
|
return "1 day"
|
||||||
|
} else if days < 30 {
|
||||||
|
return fmt.Sprintf("%d days", days)
|
||||||
|
} else if days < 365 {
|
||||||
|
months := days / 30
|
||||||
|
if months == 1 {
|
||||||
|
return "1 month"
|
||||||
|
}
|
||||||
|
return fmt.Sprintf("%d months", months)
|
||||||
|
} else {
|
||||||
|
years := days / 365
|
||||||
|
if years == 1 {
|
||||||
|
return "1 year"
|
||||||
|
}
|
||||||
|
return fmt.Sprintf("%d years", years)
|
||||||
|
}
|
||||||
|
}
|
||||||
cmd/cloud.go (new file, 394 lines)
@@ -0,0 +1,394 @@
package cmd

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"time"

	"dbbackup/internal/cloud"
	"github.com/spf13/cobra"
)

var cloudCmd = &cobra.Command{
	Use:   "cloud",
	Short: "Cloud storage operations",
	Long: `Manage backups in cloud storage (S3, MinIO, Backblaze B2).

Supports:
  - AWS S3
  - MinIO (S3-compatible)
  - Backblaze B2 (S3-compatible)
  - Any S3-compatible storage

Configuration via flags or environment variables:
  --cloud-provider     DBBACKUP_CLOUD_PROVIDER
  --cloud-bucket       DBBACKUP_CLOUD_BUCKET
  --cloud-region       DBBACKUP_CLOUD_REGION
  --cloud-endpoint     DBBACKUP_CLOUD_ENDPOINT
  --cloud-access-key   DBBACKUP_CLOUD_ACCESS_KEY (or AWS_ACCESS_KEY_ID)
  --cloud-secret-key   DBBACKUP_CLOUD_SECRET_KEY (or AWS_SECRET_ACCESS_KEY)`,
}

var cloudUploadCmd = &cobra.Command{
	Use:   "upload [backup-file]",
	Short: "Upload backup to cloud storage",
	Long: `Upload one or more backup files to cloud storage.

Examples:
  # Upload single backup
  dbbackup cloud upload /backups/mydb.dump

  # Upload with progress
  dbbackup cloud upload /backups/mydb.dump --verbose

  # Upload multiple files
  dbbackup cloud upload /backups/*.dump`,
	Args: cobra.MinimumNArgs(1),
	RunE: runCloudUpload,
}

var cloudDownloadCmd = &cobra.Command{
	Use:   "download [remote-file] [local-path]",
	Short: "Download backup from cloud storage",
	Long: `Download a backup file from cloud storage.

Examples:
  # Download to current directory
  dbbackup cloud download mydb.dump .

  # Download to specific path
  dbbackup cloud download mydb.dump /backups/mydb.dump

  # Download with progress
  dbbackup cloud download mydb.dump . --verbose`,
	Args: cobra.ExactArgs(2),
	RunE: runCloudDownload,
}

var cloudListCmd = &cobra.Command{
	Use:   "list [prefix]",
	Short: "List backups in cloud storage",
	Long: `List all backup files in cloud storage.

Examples:
  # List all backups
  dbbackup cloud list

  # List backups with prefix
  dbbackup cloud list mydb_

  # List with detailed information
  dbbackup cloud list --verbose`,
	Args: cobra.MaximumNArgs(1),
	RunE: runCloudList,
}

var cloudDeleteCmd = &cobra.Command{
	Use:   "delete [remote-file]",
	Short: "Delete backup from cloud storage",
	Long: `Delete a backup file from cloud storage.

Examples:
  # Delete single backup
  dbbackup cloud delete mydb_20251125.dump

  # Delete with confirmation
  dbbackup cloud delete mydb.dump --confirm`,
	Args: cobra.ExactArgs(1),
	RunE: runCloudDelete,
}

var (
	cloudProvider  string
	cloudBucket    string
	cloudRegion    string
	cloudEndpoint  string
	cloudAccessKey string
	cloudSecretKey string
	cloudPrefix    string
	cloudVerbose   bool
	cloudConfirm   bool
)

func init() {
	rootCmd.AddCommand(cloudCmd)
	cloudCmd.AddCommand(cloudUploadCmd, cloudDownloadCmd, cloudListCmd, cloudDeleteCmd)

	// Cloud configuration flags
	for _, cmd := range []*cobra.Command{cloudUploadCmd, cloudDownloadCmd, cloudListCmd, cloudDeleteCmd} {
		cmd.Flags().StringVar(&cloudProvider, "cloud-provider", getEnv("DBBACKUP_CLOUD_PROVIDER", "s3"), "Cloud provider (s3, minio, b2)")
		cmd.Flags().StringVar(&cloudBucket, "cloud-bucket", getEnv("DBBACKUP_CLOUD_BUCKET", ""), "Bucket name")
		cmd.Flags().StringVar(&cloudRegion, "cloud-region", getEnv("DBBACKUP_CLOUD_REGION", "us-east-1"), "Region")
		cmd.Flags().StringVar(&cloudEndpoint, "cloud-endpoint", getEnv("DBBACKUP_CLOUD_ENDPOINT", ""), "Custom endpoint (for MinIO)")
		cmd.Flags().StringVar(&cloudAccessKey, "cloud-access-key", getEnv("DBBACKUP_CLOUD_ACCESS_KEY", getEnv("AWS_ACCESS_KEY_ID", "")), "Access key")
		cmd.Flags().StringVar(&cloudSecretKey, "cloud-secret-key", getEnv("DBBACKUP_CLOUD_SECRET_KEY", getEnv("AWS_SECRET_ACCESS_KEY", "")), "Secret key")
		cmd.Flags().StringVar(&cloudPrefix, "cloud-prefix", getEnv("DBBACKUP_CLOUD_PREFIX", ""), "Key prefix")
		cmd.Flags().BoolVarP(&cloudVerbose, "verbose", "v", false, "Verbose output")
	}

	cloudDeleteCmd.Flags().BoolVar(&cloudConfirm, "confirm", false, "Skip confirmation prompt")
}

func getEnv(key, defaultValue string) string {
	if value := os.Getenv(key); value != "" {
		return value
	}
	return defaultValue
}

func getCloudBackend() (cloud.Backend, error) {
	cfg := &cloud.Config{
		Provider:   cloudProvider,
		Bucket:     cloudBucket,
		Region:     cloudRegion,
		Endpoint:   cloudEndpoint,
		AccessKey:  cloudAccessKey,
		SecretKey:  cloudSecretKey,
		Prefix:     cloudPrefix,
		UseSSL:     true,
		PathStyle:  cloudProvider == "minio",
		Timeout:    300,
		MaxRetries: 3,
	}

	if cfg.Bucket == "" {
		return nil, fmt.Errorf("bucket name is required (use --cloud-bucket or DBBACKUP_CLOUD_BUCKET)")
	}

	backend, err := cloud.NewBackend(cfg)
	if err != nil {
		return nil, fmt.Errorf("failed to create cloud backend: %w", err)
	}

	return backend, nil
}

func runCloudUpload(cmd *cobra.Command, args []string) error {
	backend, err := getCloudBackend()
	if err != nil {
		return err
	}

	ctx := context.Background()

	// Expand glob patterns
	var files []string
	for _, pattern := range args {
		matches, err := filepath.Glob(pattern)
		if err != nil {
			return fmt.Errorf("invalid pattern %s: %w", pattern, err)
		}
		if len(matches) == 0 {
			files = append(files, pattern)
		} else {
			files = append(files, matches...)
		}
	}

	fmt.Printf("☁️ Uploading %d file(s) to %s...\n\n", len(files), backend.Name())

	successCount := 0
	for _, localPath := range files {
		filename := filepath.Base(localPath)
		fmt.Printf("📤 %s\n", filename)

		// Progress callback
		var lastPercent int
		progress := func(transferred, total int64) {
			if !cloudVerbose {
				return
			}
			percent := int(float64(transferred) / float64(total) * 100)
			if percent != lastPercent && percent%10 == 0 {
				fmt.Printf(" Progress: %d%% (%s / %s)\n",
					percent,
					cloud.FormatSize(transferred),
					cloud.FormatSize(total))
				lastPercent = percent
			}
		}

		err := backend.Upload(ctx, localPath, filename, progress)
		if err != nil {
			fmt.Printf(" ❌ Failed: %v\n\n", err)
			continue
		}

		// Get file size
		if info, err := os.Stat(localPath); err == nil {
			fmt.Printf(" ✅ Uploaded (%s)\n\n", cloud.FormatSize(info.Size()))
		} else {
			fmt.Printf(" ✅ Uploaded\n\n")
		}
		successCount++
	}

	fmt.Println(strings.Repeat("─", 50))
	fmt.Printf("✅ Successfully uploaded %d/%d file(s)\n", successCount, len(files))

	return nil
}

func runCloudDownload(cmd *cobra.Command, args []string) error {
	backend, err := getCloudBackend()
	if err != nil {
		return err
	}

	ctx := context.Background()
	remotePath := args[0]
	localPath := args[1]

	// If localPath is a directory, use the remote filename
	if info, err := os.Stat(localPath); err == nil && info.IsDir() {
		localPath = filepath.Join(localPath, filepath.Base(remotePath))
	}

	fmt.Printf("☁️ Downloading from %s...\n\n", backend.Name())
	fmt.Printf("📥 %s → %s\n", remotePath, localPath)

	// Progress callback
	var lastPercent int
	progress := func(transferred, total int64) {
		if !cloudVerbose {
			return
		}
		percent := int(float64(transferred) / float64(total) * 100)
		if percent != lastPercent && percent%10 == 0 {
			fmt.Printf(" Progress: %d%% (%s / %s)\n",
				percent,
				cloud.FormatSize(transferred),
				cloud.FormatSize(total))
			lastPercent = percent
		}
	}

	err = backend.Download(ctx, remotePath, localPath, progress)
	if err != nil {
		return fmt.Errorf("download failed: %w", err)
	}

	// Get file size
	if info, err := os.Stat(localPath); err == nil {
		fmt.Printf(" ✅ Downloaded (%s)\n", cloud.FormatSize(info.Size()))
	} else {
		fmt.Printf(" ✅ Downloaded\n")
	}

	return nil
}

func runCloudList(cmd *cobra.Command, args []string) error {
	backend, err := getCloudBackend()
	if err != nil {
		return err
	}

	ctx := context.Background()
	prefix := ""
	if len(args) > 0 {
		prefix = args[0]
	}

	fmt.Printf("☁️ Listing backups in %s/%s...\n\n", backend.Name(), cloudBucket)

	backups, err := backend.List(ctx, prefix)
	if err != nil {
		return fmt.Errorf("failed to list backups: %w", err)
	}

	if len(backups) == 0 {
		fmt.Println("No backups found")
		return nil
	}

	var totalSize int64
	for _, backup := range backups {
		totalSize += backup.Size

		if cloudVerbose {
			fmt.Printf("📦 %s\n", backup.Name)
			fmt.Printf(" Size: %s\n", cloud.FormatSize(backup.Size))
			fmt.Printf(" Modified: %s\n", backup.LastModified.Format(time.RFC3339))
			if backup.StorageClass != "" {
				fmt.Printf(" Storage: %s\n", backup.StorageClass)
			}
			fmt.Println()
		} else {
			age := time.Since(backup.LastModified)
			ageStr := formatAge(age)
			fmt.Printf("%-50s %12s %s\n",
				backup.Name,
				cloud.FormatSize(backup.Size),
				ageStr)
		}
	}

	fmt.Println(strings.Repeat("─", 50))
	fmt.Printf("Total: %d backup(s), %s\n", len(backups), cloud.FormatSize(totalSize))

	return nil
}

func runCloudDelete(cmd *cobra.Command, args []string) error {
	backend, err := getCloudBackend()
	if err != nil {
		return err
	}

	ctx := context.Background()
	remotePath := args[0]

	// Check if file exists
	exists, err := backend.Exists(ctx, remotePath)
	if err != nil {
		return fmt.Errorf("failed to check file: %w", err)
	}
	if !exists {
		return fmt.Errorf("file not found: %s", remotePath)
	}

	// Get file info
	size, err := backend.GetSize(ctx, remotePath)
	if err != nil {
		return fmt.Errorf("failed to get file info: %w", err)
	}

	// Confirmation prompt
	if !cloudConfirm {
		fmt.Printf("⚠️ Delete %s (%s) from cloud storage?\n", remotePath, cloud.FormatSize(size))
		fmt.Print("Type 'yes' to confirm: ")
		var response string
		fmt.Scanln(&response)
		if response != "yes" {
			fmt.Println("Cancelled")
			return nil
		}
	}

	fmt.Printf("🗑️ Deleting %s...\n", remotePath)

	err = backend.Delete(ctx, remotePath)
	if err != nil {
		return fmt.Errorf("delete failed: %w", err)
	}

	fmt.Printf("✅ Deleted %s (%s)\n", remotePath, cloud.FormatSize(size))

	return nil
}

func formatAge(d time.Duration) string {
	if d < time.Minute {
		return "just now"
	} else if d < time.Hour {
		return fmt.Sprintf("%d min ago", int(d.Minutes()))
	} else if d < 24*time.Hour {
		return fmt.Sprintf("%d hours ago", int(d.Hours()))
	} else {
		return fmt.Sprintf("%d days ago", int(d.Hours()/24))
	}
}
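All four `cloud` subcommands above resolve their storage through `getCloudBackend`, so flag and environment handling lives in one place. The snippet below is an illustrative sketch, not part of this commit: it drives the same `cloud.Config` / `cloud.NewBackend` / `Backend.Upload` API directly from Go. The bucket name and local path are hypothetical placeholders.

```go
// Illustrative sketch, not part of this commit: upload a backup through the
// Backend interface directly instead of `dbbackup cloud upload`.
// Bucket name and the local path are hypothetical placeholders.
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"dbbackup/internal/cloud"
)

func main() {
	cfg := &cloud.Config{
		Provider:   "s3",
		Bucket:     "my-backups", // hypothetical bucket
		Region:     "us-east-1",
		AccessKey:  os.Getenv("AWS_ACCESS_KEY_ID"),
		SecretKey:  os.Getenv("AWS_SECRET_ACCESS_KEY"),
		UseSSL:     true,
		MaxRetries: 3,
	}

	backend, err := cloud.NewBackend(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Same idea as the CLI's --verbose output, minus the 10%-step throttling.
	progress := func(transferred, total int64) {
		fmt.Printf("\r%s / %s", cloud.FormatSize(transferred), cloud.FormatSize(total))
	}

	if err := backend.Upload(context.Background(), "/backups/mydb.dump", "mydb.dump", progress); err != nil {
		log.Fatal(err)
	}
	fmt.Println("\nuploaded")
}
```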
@@ -10,6 +10,7 @@ import (
 	"syscall"
 	"time"
 
+	"dbbackup/internal/cloud"
 	"dbbackup/internal/database"
 	"dbbackup/internal/restore"
 	"dbbackup/internal/security"
@@ -169,7 +170,36 @@ func init() {
 func runRestoreSingle(cmd *cobra.Command, args []string) error {
 	archivePath := args[0]
 
-	// Convert to absolute path
+	// Check if this is a cloud URI
+	var cleanupFunc func() error
+
+	if cloud.IsCloudURI(archivePath) {
+		log.Info("Detected cloud URI, downloading backup...", "uri", archivePath)
+
+		// Download from cloud
+		result, err := restore.DownloadFromCloudURI(cmd.Context(), archivePath, restore.DownloadOptions{
+			VerifyChecksum: true,
+			KeepLocal:      false, // Delete after restore
+		})
+		if err != nil {
+			return fmt.Errorf("failed to download from cloud: %w", err)
+		}
+
+		archivePath = result.LocalPath
+		cleanupFunc = result.Cleanup
+
+		// Ensure cleanup happens on exit
+		defer func() {
+			if cleanupFunc != nil {
+				if err := cleanupFunc(); err != nil {
+					log.Warn("Failed to cleanup temp files", "error", err)
+				}
+			}
+		}()
+
+		log.Info("Download completed", "local_path", archivePath)
+	} else {
+		// Convert to absolute path for local files
 	if !filepath.IsAbs(archivePath) {
 		absPath, err := filepath.Abs(archivePath)
 		if err != nil {
@@ -182,6 +212,7 @@ func runRestoreSingle(cmd *cobra.Command, args []string) error {
 	if _, err := os.Stat(archivePath); err != nil {
 		return fmt.Errorf("archive not found: %s", archivePath)
 	}
+	}
 
 	// Detect format
 	format := restore.DetectArchiveFormat(archivePath)
cmd/verify.go (new file, 235 lines)
@@ -0,0 +1,235 @@
package cmd

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"time"

	"dbbackup/internal/cloud"
	"dbbackup/internal/metadata"
	"dbbackup/internal/restore"
	"dbbackup/internal/verification"
	"github.com/spf13/cobra"
)

var verifyBackupCmd = &cobra.Command{
	Use:   "verify-backup [backup-file]",
	Short: "Verify backup file integrity with checksums",
	Long: `Verify the integrity of one or more backup files by comparing their SHA-256 checksums
against the stored metadata. This ensures that backups have not been corrupted.

Examples:
  # Verify a single backup
  dbbackup verify-backup /backups/mydb_20260115.dump

  # Verify all backups in a directory
  dbbackup verify-backup /backups/*.dump

  # Quick verification (size check only, no checksum)
  dbbackup verify-backup /backups/mydb.dump --quick

  # Verify and show detailed information
  dbbackup verify-backup /backups/mydb.dump --verbose`,
	Args: cobra.MinimumNArgs(1),
	RunE: runVerifyBackup,
}

var (
	quickVerify   bool
	verboseVerify bool
)

func init() {
	rootCmd.AddCommand(verifyBackupCmd)
	verifyBackupCmd.Flags().BoolVar(&quickVerify, "quick", false, "Quick verification (size check only)")
	verifyBackupCmd.Flags().BoolVarP(&verboseVerify, "verbose", "v", false, "Show detailed information")
}

func runVerifyBackup(cmd *cobra.Command, args []string) error {
	// Check if any argument is a cloud URI
	hasCloudURI := false
	for _, arg := range args {
		if isCloudURI(arg) {
			hasCloudURI = true
			break
		}
	}

	// If cloud URIs detected, handle separately
	if hasCloudURI {
		return runVerifyCloudBackup(cmd, args)
	}

	// Expand glob patterns for local files
	var backupFiles []string
	for _, pattern := range args {
		matches, err := filepath.Glob(pattern)
		if err != nil {
			return fmt.Errorf("invalid pattern %s: %w", pattern, err)
		}
		if len(matches) == 0 {
			// Not a glob, use as-is
			backupFiles = append(backupFiles, pattern)
		} else {
			backupFiles = append(backupFiles, matches...)
		}
	}

	if len(backupFiles) == 0 {
		return fmt.Errorf("no backup files found")
	}

	fmt.Printf("Verifying %d backup file(s)...\n\n", len(backupFiles))

	successCount := 0
	failureCount := 0

	for _, backupFile := range backupFiles {
		// Skip metadata files
		if strings.HasSuffix(backupFile, ".meta.json") ||
			strings.HasSuffix(backupFile, ".sha256") ||
			strings.HasSuffix(backupFile, ".info") {
			continue
		}

		fmt.Printf("📁 %s\n", filepath.Base(backupFile))

		if quickVerify {
			// Quick check: size only
			err := verification.QuickCheck(backupFile)
			if err != nil {
				fmt.Printf(" ❌ FAILED: %v\n\n", err)
				failureCount++
				continue
			}
			fmt.Printf(" ✅ VALID (quick check)\n\n")
			successCount++
		} else {
			// Full verification with SHA-256
			result, err := verification.Verify(backupFile)
			if err != nil {
				return fmt.Errorf("verification error: %w", err)
			}

			if result.Valid {
				fmt.Printf(" ✅ VALID\n")
				if verboseVerify {
					meta, _ := metadata.Load(backupFile)
					fmt.Printf(" Size: %s\n", metadata.FormatSize(meta.SizeBytes))
					fmt.Printf(" SHA-256: %s\n", meta.SHA256)
					fmt.Printf(" Database: %s (%s)\n", meta.Database, meta.DatabaseType)
					fmt.Printf(" Created: %s\n", meta.Timestamp.Format(time.RFC3339))
				}
				fmt.Println()
				successCount++
			} else {
				fmt.Printf(" ❌ FAILED: %v\n", result.Error)
				if verboseVerify {
					if !result.FileExists {
						fmt.Printf(" File does not exist\n")
					} else if !result.MetadataExists {
						fmt.Printf(" Metadata file missing\n")
					} else if !result.SizeMatch {
						fmt.Printf(" Size mismatch\n")
					} else {
						fmt.Printf(" Expected: %s\n", result.ExpectedSHA256)
						fmt.Printf(" Got: %s\n", result.CalculatedSHA256)
					}
				}
				fmt.Println()
				failureCount++
			}
		}
	}

	// Summary
	fmt.Println(strings.Repeat("─", 50))
	fmt.Printf("Total: %d backups\n", len(backupFiles))
	fmt.Printf("✅ Valid: %d\n", successCount)
	if failureCount > 0 {
		fmt.Printf("❌ Failed: %d\n", failureCount)
		os.Exit(1)
	}

	return nil
}

// isCloudURI checks if a string is a cloud URI
func isCloudURI(s string) bool {
	return cloud.IsCloudURI(s)
}

// verifyCloudBackup downloads and verifies a backup from cloud storage
func verifyCloudBackup(ctx context.Context, uri string, quick, verbose bool) (*restore.DownloadResult, error) {
	// Download from cloud with checksum verification
	result, err := restore.DownloadFromCloudURI(ctx, uri, restore.DownloadOptions{
		VerifyChecksum: !quick, // Skip checksum if quick mode
		KeepLocal:      false,
	})
	if err != nil {
		return nil, err
	}

	// If not quick mode, also run full verification
	if !quick {
		_, err := verification.Verify(result.LocalPath)
		if err != nil {
			result.Cleanup()
			return nil, err
		}
	}

	return result, nil
}

// runVerifyCloudBackup verifies backups from cloud storage
func runVerifyCloudBackup(cmd *cobra.Command, args []string) error {
	fmt.Printf("Verifying cloud backup(s)...\n\n")

	successCount := 0
	failureCount := 0

	for _, uri := range args {
		if !isCloudURI(uri) {
			fmt.Printf("⚠️ Skipping non-cloud URI: %s\n", uri)
			continue
		}

		fmt.Printf("☁️ %s\n", uri)

		// Download and verify
		result, err := verifyCloudBackup(cmd.Context(), uri, quickVerify, verboseVerify)
		if err != nil {
			fmt.Printf(" ❌ FAILED: %v\n\n", err)
			failureCount++
			continue
		}

		// Cleanup temp file
		defer result.Cleanup()

		fmt.Printf(" ✅ VALID\n")
		if verboseVerify && result.MetadataPath != "" {
			meta, _ := metadata.Load(result.MetadataPath)
			if meta != nil {
				fmt.Printf(" Size: %s\n", metadata.FormatSize(meta.SizeBytes))
				fmt.Printf(" SHA-256: %s\n", meta.SHA256)
				fmt.Printf(" Database: %s (%s)\n", meta.Database, meta.DatabaseType)
				fmt.Printf(" Created: %s\n", meta.Timestamp.Format(time.RFC3339))
			}
		}
		fmt.Println()
		successCount++
	}

	fmt.Printf("\n✅ Summary: %d valid, %d failed\n", successCount, failureCount)

	if failureCount > 0 {
		os.Exit(1)
	}

	return nil
}
docker-compose.minio.yml (new file, 101 lines)
@@ -0,0 +1,101 @@
version: '3.8'

services:
  # MinIO S3-compatible object storage for testing
  minio:
    image: minio/minio:latest
    container_name: dbbackup-minio
    ports:
      - "9000:9000"   # S3 API
      - "9001:9001"   # Web Console
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin123
      MINIO_REGION: us-east-1
    volumes:
      - minio-data:/data
    command: server /data --console-address ":9001"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
    networks:
      - dbbackup-test

  # PostgreSQL database for backup testing
  postgres:
    image: postgres:16-alpine
    container_name: dbbackup-postgres-test
    environment:
      POSTGRES_USER: testuser
      POSTGRES_PASSWORD: testpass123
      POSTGRES_DB: testdb
      POSTGRES_INITDB_ARGS: "-E UTF8 --locale=C"
    ports:
      - "5433:5432"
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./test_data:/docker-entrypoint-initdb.d
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U testuser"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - dbbackup-test

  # MySQL database for backup testing
  mysql:
    image: mysql:8.0
    container_name: dbbackup-mysql-test
    environment:
      MYSQL_ROOT_PASSWORD: rootpass123
      MYSQL_DATABASE: testdb
      MYSQL_USER: testuser
      MYSQL_PASSWORD: testpass123
    ports:
      - "3307:3306"
    volumes:
      - mysql-data:/var/lib/mysql
      - ./test_data:/docker-entrypoint-initdb.d
    command: --default-authentication-plugin=mysql_native_password
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-u", "root", "-prootpass123"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - dbbackup-test

  # MinIO Client (mc) for bucket management
  minio-mc:
    image: minio/mc:latest
    container_name: dbbackup-minio-mc
    depends_on:
      minio:
        condition: service_healthy
    entrypoint: >
      /bin/sh -c "
      sleep 5;
      /usr/bin/mc alias set myminio http://minio:9000 minioadmin minioadmin123;
      /usr/bin/mc mb --ignore-existing myminio/test-backups;
      /usr/bin/mc mb --ignore-existing myminio/production-backups;
      /usr/bin/mc mb --ignore-existing myminio/dev-backups;
      echo 'MinIO buckets created successfully';
      exit 0;
      "
    networks:
      - dbbackup-test

volumes:
  minio-data:
    driver: local
  postgres-data:
    driver: local
  mysql-data:
    driver: local

networks:
  dbbackup-test:
    driver: bridge
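For local testing, the MinIO service above exposes the S3 API on port 9000 with root credentials `minioadmin`/`minioadmin123`, and the `minio-mc` helper pre-creates the `test-backups`, `production-backups`, and `dev-backups` buckets. Below is a minimal sketch, not part of this commit, of a `cloud.Config` pointed at that instance; the endpoint, credentials, and bucket come from the compose file, and everything else is an assumption (the `minio` provider already forces path-style addressing in `NewBackend`, and plain HTTP is assumed since the test endpoint has no TLS).

```go
// Illustrative sketch, not part of this commit: list backups in the local
// MinIO container from docker-compose.minio.yml. Endpoint, credentials and
// bucket name are taken from that compose file; the rest is assumed.
package main

import (
	"context"
	"fmt"
	"log"

	"dbbackup/internal/cloud"
)

func main() {
	cfg := &cloud.Config{
		Provider:  "minio",
		Endpoint:  "http://localhost:9000", // MinIO S3 API port from the compose file
		Bucket:    "test-backups",          // created by the minio-mc service
		Region:    "us-east-1",
		AccessKey: "minioadmin",
		SecretKey: "minioadmin123",
		UseSSL:    false, // the test endpoint is plain HTTP
	}

	backend, err := cloud.NewBackend(cfg) // sets PathStyle for the "minio" provider
	if err != nil {
		log.Fatal(err)
	}

	backups, err := backend.List(context.Background(), "")
	if err != nil {
		log.Fatal(err)
	}
	for _, b := range backups {
		fmt.Printf("%s  %s  %s\n", b.Name, cloud.FormatSize(b.Size), b.LastModified.Format("2006-01-02"))
	}
}
```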
go.mod (20 lines changed)
@@ -18,6 +18,26 @@ require (
 
 require (
 	filippo.io/edwards25519 v1.1.0 // indirect
+	github.com/aws/aws-sdk-go-v2 v1.40.0 // indirect
+	github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.3 // indirect
+	github.com/aws/aws-sdk-go-v2/config v1.32.2 // indirect
+	github.com/aws/aws-sdk-go-v2/credentials v1.19.2 // indirect
+	github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.14 // indirect
+	github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.12 // indirect
+	github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.14 // indirect
+	github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.14 // indirect
+	github.com/aws/aws-sdk-go-v2/internal/ini v1.8.4 // indirect
+	github.com/aws/aws-sdk-go-v2/internal/v4a v1.4.14 // indirect
+	github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.3 // indirect
+	github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.5 // indirect
+	github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.14 // indirect
+	github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.14 // indirect
+	github.com/aws/aws-sdk-go-v2/service/s3 v1.92.1 // indirect
+	github.com/aws/aws-sdk-go-v2/service/signin v1.0.2 // indirect
+	github.com/aws/aws-sdk-go-v2/service/sso v1.30.5 // indirect
+	github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.10 // indirect
+	github.com/aws/aws-sdk-go-v2/service/sts v1.41.2 // indirect
+	github.com/aws/smithy-go v1.23.2 // indirect
 	github.com/aymanbagabas/go-osc52/v2 v2.0.1 // indirect
 	github.com/charmbracelet/colorprofile v0.2.3-0.20250311203215-f60798e515dc // indirect
 	github.com/charmbracelet/x/ansi v0.10.1 // indirect
|||||||
54
go.sum
54
go.sum
@@ -2,6 +2,60 @@ filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
|
|||||||
filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
|
filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
|
||||||
github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2 h1:+vx7roKuyA63nhn5WAunQHLTznkw5W8b1Xc0dNjp83s=
|
github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2 h1:+vx7roKuyA63nhn5WAunQHLTznkw5W8b1Xc0dNjp83s=
|
||||||
github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2/go.mod h1:HBCaDeC1lPdgDeDbhX8XFpy1jqjK0IBG8W5K+xYqA0w=
|
github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2/go.mod h1:HBCaDeC1lPdgDeDbhX8XFpy1jqjK0IBG8W5K+xYqA0w=
|
||||||
|
github.com/aws/aws-sdk-go-v2 v1.40.0 h1:/WMUA0kjhZExjOQN2z3oLALDREea1A7TobfuiBrKlwc=
|
||||||
|
github.com/aws/aws-sdk-go-v2 v1.40.0/go.mod h1:c9pm7VwuW0UPxAEYGyTmyurVcNrbF6Rt/wixFqDhcjE=
|
||||||
|
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.3 h1:DHctwEM8P8iTXFxC/QK0MRjwEpWQeM9yzidCRjldUz0=
|
||||||
|
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.3/go.mod h1:xdCzcZEtnSTKVDOmUZs4l/j3pSV6rpo1WXl5ugNsL8Y=
|
||||||
|
github.com/aws/aws-sdk-go-v2/config v1.32.1 h1:iODUDLgk3q8/flEC7ymhmxjfoAnBDwEEYEVyKZ9mzjU=
|
||||||
|
github.com/aws/aws-sdk-go-v2/config v1.32.1/go.mod h1:xoAgo17AGrPpJBSLg81W+ikM0cpOZG8ad04T2r+d5P0=
|
||||||
|
github.com/aws/aws-sdk-go-v2/config v1.32.2 h1:4liUsdEpUUPZs5WVapsJLx5NPmQhQdez7nYFcovrytk=
|
||||||
|
github.com/aws/aws-sdk-go-v2/config v1.32.2/go.mod h1:l0hs06IFz1eCT+jTacU/qZtC33nvcnLADAPL/XyrkZI=
|
||||||
|
github.com/aws/aws-sdk-go-v2/credentials v1.19.1 h1:JeW+EwmtTE0yXFK8SmklrFh/cGTTXsQJumgMZNlbxfM=
|
||||||
|
github.com/aws/aws-sdk-go-v2/credentials v1.19.1/go.mod h1:BOoXiStwTF+fT2XufhO0Efssbi1CNIO/ZXpZu87N0pw=
|
||||||
|
github.com/aws/aws-sdk-go-v2/credentials v1.19.2 h1:qZry8VUyTK4VIo5aEdUcBjPZHL2v4FyQ3QEOaWcFLu4=
|
||||||
|
github.com/aws/aws-sdk-go-v2/credentials v1.19.2/go.mod h1:YUqm5a1/kBnoK+/NY5WEiMocZihKSo15/tJdmdXnM5g=
|
||||||
|
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.14 h1:WZVR5DbDgxzA0BJeudId89Kmgy6DIU4ORpxwsVHz0qA=
|
||||||
|
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.14/go.mod h1:Dadl9QO0kHgbrH1GRqGiZdYtW5w+IXXaBNCHTIaheM4=
|
||||||
|
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.12 h1:Zy6Tme1AA13kX8x3CnkHx5cqdGWGaj/anwOiWGnA0Xo=
|
||||||
|
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.12/go.mod h1:ql4uXYKoTM9WUAUSmthY4AtPVrlTBZOvnBJTiCUdPxI=
|
||||||
|
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.14 h1:PZHqQACxYb8mYgms4RZbhZG0a7dPW06xOjmaH0EJC/I=
|
||||||
|
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.14/go.mod h1:VymhrMJUWs69D8u0/lZ7jSB6WgaG/NqHi3gX0aYf6U0=
|
||||||
|
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.14 h1:bOS19y6zlJwagBfHxs0ESzr1XCOU2KXJCWcq3E2vfjY=
|
||||||
|
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.14/go.mod h1:1ipeGBMAxZ0xcTm6y6paC2C/J6f6OO7LBODV9afuAyM=
|
||||||
|
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.4 h1:WKuaxf++XKWlHWu9ECbMlha8WOEGm0OUEZqm4K/Gcfk=
|
||||||
|
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.4/go.mod h1:ZWy7j6v1vWGmPReu0iSGvRiise4YI5SkR3OHKTZ6Wuc=
|
||||||
|
github.com/aws/aws-sdk-go-v2/internal/v4a v1.4.14 h1:ITi7qiDSv/mSGDSWNpZ4k4Ve0DQR6Ug2SJQ8zEHoDXg=
|
||||||
|
github.com/aws/aws-sdk-go-v2/internal/v4a v1.4.14/go.mod h1:k1xtME53H1b6YpZt74YmwlONMWf4ecM+lut1WQLAF/U=
|
||||||
|
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.3 h1:x2Ibm/Af8Fi+BH+Hsn9TXGdT+hKbDd5XOTZxTMxDk7o=
|
||||||
|
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.3/go.mod h1:IW1jwyrQgMdhisceG8fQLmQIydcT/jWY21rFhzgaKwo=
|
||||||
|
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.5 h1:Hjkh7kE6D81PgrHlE/m9gx+4TyyeLHuY8xJs7yXN5C4=
|
||||||
|
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.5/go.mod h1:nPRXgyCfAurhyaTMoBMwRBYBhaHI4lNPAnJmjM0Tslc=
|
||||||
|
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.14 h1:FIouAnCE46kyYqyhs0XEBDFFSREtdnr8HQuLPQPLCrY=
|
||||||
|
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.14/go.mod h1:UTwDc5COa5+guonQU8qBikJo1ZJ4ln2r1MkF7Dqag1E=
|
||||||
|
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.14 h1:FzQE21lNtUor0Fb7QNgnEyiRCBlolLTX/Z1j65S7teM=
|
||||||
|
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.14/go.mod h1:s1ydyWG9pm3ZwmmYN21HKyG9WzAZhYVW85wMHs5FV6w=
|
||||||
|
github.com/aws/aws-sdk-go-v2/service/s3 v1.92.0 h1:8FshVvnV2sr9kOSAbOnc/vwVmmAwMjOedKH6JW2ddPM=
|
||||||
|
github.com/aws/aws-sdk-go-v2/service/s3 v1.92.0/go.mod h1:wYNqY3L02Z3IgRYxOBPH9I1zD9Cjh9hI5QOy/eOjQvw=
|
||||||
|
github.com/aws/aws-sdk-go-v2/service/s3 v1.92.1 h1:OgQy/+0+Kc3khtqiEOk23xQAglXi3Tj0y5doOxbi5tg=
|
||||||
|
github.com/aws/aws-sdk-go-v2/service/s3 v1.92.1/go.mod h1:wYNqY3L02Z3IgRYxOBPH9I1zD9Cjh9hI5QOy/eOjQvw=
|
||||||
|
github.com/aws/aws-sdk-go-v2/service/signin v1.0.1 h1:BDgIUYGEo5TkayOWv/oBLPphWwNm/A91AebUjAu5L5g=
|
||||||
|
github.com/aws/aws-sdk-go-v2/service/signin v1.0.1/go.mod h1:iS6EPmNeqCsGo+xQmXv0jIMjyYtQfnwg36zl2FwEouk=
|
||||||
|
github.com/aws/aws-sdk-go-v2/service/signin v1.0.2 h1:MxMBdKTYBjPQChlJhi4qlEueqB1p1KcbTEa7tD5aqPs=
|
||||||
|
github.com/aws/aws-sdk-go-v2/service/signin v1.0.2/go.mod h1:iS6EPmNeqCsGo+xQmXv0jIMjyYtQfnwg36zl2FwEouk=
|
||||||
|
github.com/aws/aws-sdk-go-v2/service/sso v1.30.4 h1:U//SlnkE1wOQiIImxzdY5PXat4Wq+8rlfVEw4Y7J8as=
|
||||||
|
github.com/aws/aws-sdk-go-v2/service/sso v1.30.4/go.mod h1:av+ArJpoYf3pgyrj6tcehSFW+y9/QvAY8kMooR9bZCw=
|
||||||
|
github.com/aws/aws-sdk-go-v2/service/sso v1.30.5 h1:ksUT5KtgpZd3SAiFJNJ0AFEJVva3gjBmN7eXUZjzUwQ=
|
||||||
|
github.com/aws/aws-sdk-go-v2/service/sso v1.30.5/go.mod h1:av+ArJpoYf3pgyrj6tcehSFW+y9/QvAY8kMooR9bZCw=
|
||||||
|
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.9 h1:LU8S9W/mPDAU9q0FjCLi0TrCheLMGwzbRpvUMwYspcA=
|
||||||
|
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.9/go.mod h1:/j67Z5XBVDx8nZVp9EuFM9/BS5dvBznbqILGuu73hug=
|
||||||
|
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.10 h1:GtsxyiF3Nd3JahRBJbxLCCdYW9ltGQYrFWg8XdkGDd8=
|
||||||
|
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.10/go.mod h1:/j67Z5XBVDx8nZVp9EuFM9/BS5dvBznbqILGuu73hug=
|
||||||
|
github.com/aws/aws-sdk-go-v2/service/sts v1.41.1 h1:GdGmKtG+/Krag7VfyOXV17xjTCz0i9NT+JnqLTOI5nA=
|
||||||
|
github.com/aws/aws-sdk-go-v2/service/sts v1.41.1/go.mod h1:6TxbXoDSgBQ225Qd8Q+MbxUxUh6TtNKwbRt/EPS9xso=
|
||||||
|
github.com/aws/aws-sdk-go-v2/service/sts v1.41.2 h1:a5UTtD4mHBU3t0o6aHQZFJTNKVfxFWfPX7J0Lr7G+uY=
|
||||||
|
github.com/aws/aws-sdk-go-v2/service/sts v1.41.2/go.mod h1:6TxbXoDSgBQ225Qd8Q+MbxUxUh6TtNKwbRt/EPS9xso=
|
||||||
|
github.com/aws/smithy-go v1.23.2 h1:Crv0eatJUQhaManss33hS5r40CG3ZFH+21XSkqMrIUM=
|
||||||
|
github.com/aws/smithy-go v1.23.2/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
|
||||||
github.com/aymanbagabas/go-osc52/v2 v2.0.1 h1:HwpRHbFMcZLEVr42D4p7XBqjyuxQH5SMiErDT4WkJ2k=
|
github.com/aymanbagabas/go-osc52/v2 v2.0.1 h1:HwpRHbFMcZLEVr42D4p7XBqjyuxQH5SMiErDT4WkJ2k=
|
||||||
github.com/aymanbagabas/go-osc52/v2 v2.0.1/go.mod h1:uYgXzlJ7ZpABp8OJ+exZzJJhRNQ2ASbcXHWsFqH8hp8=
|
github.com/aymanbagabas/go-osc52/v2 v2.0.1/go.mod h1:uYgXzlJ7ZpABp8OJ+exZzJJhRNQ2ASbcXHWsFqH8hp8=
|
||||||
github.com/charmbracelet/bubbles v0.21.0 h1:9TdC97SdRVg/1aaXNVWfFH3nnLAwOXr8Fn6u6mfQdFs=
|
github.com/charmbracelet/bubbles v0.21.0 h1:9TdC97SdRVg/1aaXNVWfFH3nnLAwOXr8Fn6u6mfQdFs=
|
||||||
|
|||||||
@@ -17,10 +17,12 @@ import (
 	"time"
 
 	"dbbackup/internal/checks"
+	"dbbackup/internal/cloud"
 	"dbbackup/internal/config"
 	"dbbackup/internal/database"
 	"dbbackup/internal/security"
 	"dbbackup/internal/logger"
+	"dbbackup/internal/metadata"
 	"dbbackup/internal/metrics"
 	"dbbackup/internal/progress"
 	"dbbackup/internal/swap"
@@ -233,6 +235,14 @@ func (e *Engine) BackupSingle(ctx context.Context, databaseName string) error {
 		metrics.GlobalMetrics.RecordOperation("backup_single", databaseName, time.Now().Add(-time.Minute), info.Size(), true, 0)
 	}
 
+	// Cloud upload if enabled
+	if e.cfg.CloudEnabled && e.cfg.CloudAutoUpload {
+		if err := e.uploadToCloud(ctx, outputFile, tracker); err != nil {
+			e.log.Warn("Cloud upload failed", "error", err)
+			// Don't fail the backup if cloud upload fails
+		}
+	}
+
 	// Complete operation
 	tracker.UpdateProgress(100, "Backup operation completed successfully")
 	tracker.Complete(fmt.Sprintf("Single database backup completed: %s", filepath.Base(outputFile)))
@@ -541,9 +551,9 @@ func (e *Engine) BackupCluster(ctx context.Context) error {
 		operation.Complete(fmt.Sprintf("Cluster backup created: %s (%s)", outputFile, size))
 	}
 
-	// Create metadata file
-	if err := e.createMetadata(outputFile, "cluster", "cluster", ""); err != nil {
-		e.log.Warn("Failed to create metadata file", "error", err)
+	// Create cluster metadata file
+	if err := e.createClusterMetadata(outputFile, databases, successCountFinal, failCountFinal); err != nil {
+		e.log.Warn("Failed to create cluster metadata file", "error", err)
 	}
 
 	return nil
@@ -910,9 +920,70 @@ regularTar:
 
 // createMetadata creates a metadata file for the backup
 func (e *Engine) createMetadata(backupFile, database, backupType, strategy string) error {
-	metaFile := backupFile + ".info"
+	startTime := time.Now()
 
-	content := fmt.Sprintf(`{
+	// Get backup file information
+	info, err := os.Stat(backupFile)
+	if err != nil {
+		return fmt.Errorf("failed to stat backup file: %w", err)
+	}
+
+	// Calculate SHA-256 checksum
+	sha256, err := metadata.CalculateSHA256(backupFile)
+	if err != nil {
+		return fmt.Errorf("failed to calculate checksum: %w", err)
+	}
+
+	// Get database version
+	ctx := context.Background()
+	dbVersion, _ := e.db.GetVersion(ctx)
+	if dbVersion == "" {
+		dbVersion = "unknown"
+	}
+
+	// Determine compression format
+	compressionFormat := "none"
+	if e.cfg.CompressionLevel > 0 {
+		if e.cfg.Jobs > 1 {
+			compressionFormat = fmt.Sprintf("pigz-%d", e.cfg.CompressionLevel)
+		} else {
+			compressionFormat = fmt.Sprintf("gzip-%d", e.cfg.CompressionLevel)
+		}
+	}
+
+	// Create backup metadata
+	meta := &metadata.BackupMetadata{
+		Version:         "2.0",
+		Timestamp:       startTime,
+		Database:        database,
+		DatabaseType:    e.cfg.DatabaseType,
+		DatabaseVersion: dbVersion,
+		Host:            e.cfg.Host,
+		Port:            e.cfg.Port,
+		User:            e.cfg.User,
+		BackupFile:      backupFile,
+		SizeBytes:       info.Size(),
+		SHA256:          sha256,
+		Compression:     compressionFormat,
+		BackupType:      backupType,
+		Duration:        time.Since(startTime).Seconds(),
+		ExtraInfo:       make(map[string]string),
+	}
+
+	// Add strategy for sample backups
+	if strategy != "" {
+		meta.ExtraInfo["sample_strategy"] = strategy
+		meta.ExtraInfo["sample_value"] = fmt.Sprintf("%d", e.cfg.SampleValue)
+	}
+
+	// Save metadata
+	if err := meta.Save(); err != nil {
+		return fmt.Errorf("failed to save metadata: %w", err)
+	}
+
+	// Also save legacy .info file for backward compatibility
+	legacyMetaFile := backupFile + ".info"
+	legacyContent := fmt.Sprintf(`{
 	"type": "%s",
 	"database": "%s",
 	"timestamp": "%s",
@@ -920,24 +991,170 @@ func (e *Engine) createMetadata(backupFile, database, backupType, strategy strin
 	"port": %d,
 	"user": "%s",
 	"db_type": "%s",
-	"compression": %d`,
-		backupType, database, time.Now().Format("20060102_150405"),
-		e.cfg.Host, e.cfg.Port, e.cfg.User, e.cfg.DatabaseType, e.cfg.CompressionLevel)
+	"compression": %d,
+	"size_bytes": %d
+}`, backupType, database, startTime.Format("20060102_150405"),
+		e.cfg.Host, e.cfg.Port, e.cfg.User, e.cfg.DatabaseType,
+		e.cfg.CompressionLevel, info.Size())
 
-	if strategy != "" {
-		content += fmt.Sprintf(`,
-	"sample_strategy": "%s",
-	"sample_value": %d`, e.cfg.SampleStrategy, e.cfg.SampleValue)
+	if err := os.WriteFile(legacyMetaFile, []byte(legacyContent), 0644); err != nil {
+		e.log.Warn("Failed to save legacy metadata file", "error", err)
 	}
 
-	if info, err := os.Stat(backupFile); err == nil {
-		content += fmt.Sprintf(`,
-	"size_bytes": %d`, info.Size())
+	return nil
 	}
 
-	content += "\n}"
+// createClusterMetadata creates metadata for cluster backups
+func (e *Engine) createClusterMetadata(backupFile string, databases []string, successCount, failCount int) error {
+	startTime := time.Now()
 
-	return os.WriteFile(metaFile, []byte(content), 0644)
+	// Get backup file information
+	info, err := os.Stat(backupFile)
+	if err != nil {
+		return fmt.Errorf("failed to stat backup file: %w", err)
+	}
+
+	// Calculate SHA-256 checksum for archive
+	sha256, err := metadata.CalculateSHA256(backupFile)
+	if err != nil {
+		return fmt.Errorf("failed to calculate checksum: %w", err)
+	}
+
+	// Get database version
+	ctx := context.Background()
+	dbVersion, _ := e.db.GetVersion(ctx)
+	if dbVersion == "" {
+		dbVersion = "unknown"
+	}
+
+	// Create cluster metadata
+	clusterMeta := &metadata.ClusterMetadata{
+		Version:      "2.0",
+		Timestamp:    startTime,
+		ClusterName:  fmt.Sprintf("%s:%d", e.cfg.Host, e.cfg.Port),
+		DatabaseType: e.cfg.DatabaseType,
+		Host:         e.cfg.Host,
+		Port:         e.cfg.Port,
+		Databases:    make([]metadata.BackupMetadata, 0),
+		TotalSize:    info.Size(),
+		Duration:     time.Since(startTime).Seconds(),
+		ExtraInfo: map[string]string{
+			"database_count":   fmt.Sprintf("%d", len(databases)),
+			"success_count":    fmt.Sprintf("%d", successCount),
+			"failure_count":    fmt.Sprintf("%d", failCount),
+			"archive_sha256":   sha256,
+			"database_version": dbVersion,
+		},
+	}
+
+	// Add database names to metadata
+	for _, dbName := range databases {
+		dbMeta := metadata.BackupMetadata{
+			Database:        dbName,
+			DatabaseType:    e.cfg.DatabaseType,
+			DatabaseVersion: dbVersion,
+			Timestamp:       startTime,
+		}
+		clusterMeta.Databases = append(clusterMeta.Databases, dbMeta)
+	}
+
+	// Save cluster metadata
+	if err := clusterMeta.Save(backupFile); err != nil {
+		return fmt.Errorf("failed to save cluster metadata: %w", err)
+	}
+
+	// Also save legacy .info file for backward compatibility
+	legacyMetaFile := backupFile + ".info"
+	legacyContent := fmt.Sprintf(`{
+	"type": "cluster",
+	"database": "cluster",
+	"timestamp": "%s",
+	"host": "%s",
+	"port": %d,
+	"user": "%s",
+	"db_type": "%s",
+	"compression": %d,
+	"size_bytes": %d,
+	"database_count": %d,
+	"success_count": %d,
+	"failure_count": %d
+}`, startTime.Format("20060102_150405"),
+		e.cfg.Host, e.cfg.Port, e.cfg.User, e.cfg.DatabaseType,
+		e.cfg.CompressionLevel, info.Size(), len(databases), successCount, failCount)
+
+	if err := os.WriteFile(legacyMetaFile, []byte(legacyContent), 0644); err != nil {
+		e.log.Warn("Failed to save legacy cluster metadata file", "error", err)
+	}
+
+	return nil
+}
+
+// uploadToCloud uploads a backup file to cloud storage
+func (e *Engine) uploadToCloud(ctx context.Context, backupFile string, tracker *progress.OperationTracker) error {
+	uploadStep := tracker.AddStep("cloud_upload", "Uploading to cloud storage")
+
+	// Create cloud backend
+	cloudCfg := &cloud.Config{
+		Provider:   e.cfg.CloudProvider,
+		Bucket:     e.cfg.CloudBucket,
+		Region:     e.cfg.CloudRegion,
+		Endpoint:   e.cfg.CloudEndpoint,
+		AccessKey:  e.cfg.CloudAccessKey,
+		SecretKey:  e.cfg.CloudSecretKey,
+		Prefix:     e.cfg.CloudPrefix,
+		UseSSL:     true,
+		PathStyle:  e.cfg.CloudProvider == "minio",
+		Timeout:    300,
+		MaxRetries: 3,
+	}
+
+	backend, err := cloud.NewBackend(cloudCfg)
+	if err != nil {
+		uploadStep.Fail(fmt.Errorf("failed to create cloud backend: %w", err))
+		return err
+	}
+
+	// Get file info
+	info, err := os.Stat(backupFile)
+	if err != nil {
+		uploadStep.Fail(fmt.Errorf("failed to stat backup file: %w", err))
+		return err
+	}
+
+	filename := filepath.Base(backupFile)
+	e.log.Info("Uploading backup to cloud", "file", filename, "size", cloud.FormatSize(info.Size()))
+
+	// Progress callback
+	var lastPercent int
+	progressCallback := func(transferred, total int64) {
+		percent := int(float64(transferred) / float64(total) * 100)
+		if percent != lastPercent && percent%10 == 0 {
+			e.log.Debug("Upload progress", "percent", percent, "transferred", cloud.FormatSize(transferred), "total", cloud.FormatSize(total))
+			lastPercent = percent
+		}
+	}
+
+	// Upload to cloud
+	err = backend.Upload(ctx, backupFile, filename, progressCallback)
+	if err != nil {
+		uploadStep.Fail(fmt.Errorf("cloud upload failed: %w", err))
+		return err
+	}
+
+	// Also upload metadata file
+	metaFile := backupFile + ".meta.json"
+	if _, err := os.Stat(metaFile); err == nil {
+		metaFilename := filepath.Base(metaFile)
+		if err := backend.Upload(ctx, metaFile, metaFilename, nil); err != nil {
+			e.log.Warn("Failed to upload metadata file", "error", err)
+			// Don't fail if metadata upload fails
+		}
+	}
+
+	uploadStep.Complete(fmt.Sprintf("Uploaded to %s/%s/%s", backend.Name(), e.cfg.CloudBucket, filename))
+	e.log.Info("Backup uploaded to cloud", "provider", backend.Name(), "bucket", e.cfg.CloudBucket, "file", filename)
+
+	return nil
 }
 
 // executeCommand executes a backup command (optimized for huge databases)
internal/cloud/interface.go (new file, 167 lines)
@@ -0,0 +1,167 @@
package cloud

import (
	"context"
	"fmt"
	"io"
	"time"
)

// Backend defines the interface for cloud storage providers
type Backend interface {
	// Upload uploads a file to cloud storage
	Upload(ctx context.Context, localPath, remotePath string, progress ProgressCallback) error

	// Download downloads a file from cloud storage
	Download(ctx context.Context, remotePath, localPath string, progress ProgressCallback) error

	// List lists all backup files in cloud storage
	List(ctx context.Context, prefix string) ([]BackupInfo, error)

	// Delete deletes a file from cloud storage
	Delete(ctx context.Context, remotePath string) error

	// Exists checks if a file exists in cloud storage
	Exists(ctx context.Context, remotePath string) (bool, error)

	// GetSize returns the size of a remote file
	GetSize(ctx context.Context, remotePath string) (int64, error)

	// Name returns the backend name (e.g., "s3", "azure", "gcs")
	Name() string
}

// BackupInfo contains information about a backup in cloud storage
type BackupInfo struct {
	Key          string    // Full path/key in cloud storage
	Name         string    // Base filename
	Size         int64     // Size in bytes
	LastModified time.Time // Last modification time
	ETag         string    // Entity tag (version identifier)
	StorageClass string    // Storage class (e.g., STANDARD, GLACIER)
}

// ProgressCallback is called during upload/download to report progress
type ProgressCallback func(bytesTransferred, totalBytes int64)

// Config contains common configuration for cloud backends
type Config struct {
	Provider    string // "s3", "minio", "azure", "gcs", "b2"
	Bucket      string // Bucket or container name
	Region      string // Region (for S3)
	Endpoint    string // Custom endpoint (for MinIO, S3-compatible)
	AccessKey   string // Access key or account ID
	SecretKey   string // Secret key or access token
	UseSSL      bool   // Use SSL/TLS (default: true)
	PathStyle   bool   // Use path-style addressing (for MinIO)
	Prefix      string // Prefix for all operations (e.g., "backups/")
	Timeout     int    // Timeout in seconds (default: 300)
	MaxRetries  int    // Maximum retry attempts (default: 3)
	Concurrency int    // Upload/download concurrency (default: 5)
}

// NewBackend creates a new cloud storage backend based on the provider
func NewBackend(cfg *Config) (Backend, error) {
	switch cfg.Provider {
	case "s3", "aws":
		return NewS3Backend(cfg)
	case "minio":
		// MinIO uses S3 backend with custom endpoint
		cfg.PathStyle = true
		if cfg.Endpoint == "" {
			return nil, fmt.Errorf("endpoint required for MinIO")
		}
		return NewS3Backend(cfg)
	case "b2", "backblaze":
		// Backblaze B2 uses S3-compatible API
		cfg.PathStyle = false
		if cfg.Endpoint == "" {
			return nil, fmt.Errorf("endpoint required for Backblaze B2")
		}
		return NewS3Backend(cfg)
	default:
		return nil, fmt.Errorf("unsupported cloud provider: %s (supported: s3, minio, b2)", cfg.Provider)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// FormatSize returns human-readable size
|
||||||
|
func FormatSize(bytes int64) string {
|
||||||
|
const unit = 1024
|
||||||
|
if bytes < unit {
|
||||||
|
return fmt.Sprintf("%d B", bytes)
|
||||||
|
}
|
||||||
|
div, exp := int64(unit), 0
|
||||||
|
for n := bytes / unit; n >= unit; n /= unit {
|
||||||
|
div *= unit
|
||||||
|
exp++
|
||||||
|
}
|
||||||
|
return fmt.Sprintf("%.1f %ciB", float64(bytes)/float64(div), "KMGTPE"[exp])
|
||||||
|
}
|
||||||
|
|
||||||
|
// DefaultConfig returns a config with sensible defaults
|
||||||
|
func DefaultConfig() *Config {
|
||||||
|
return &Config{
|
||||||
|
Provider: "s3",
|
||||||
|
UseSSL: true,
|
||||||
|
PathStyle: false,
|
||||||
|
Timeout: 300,
|
||||||
|
MaxRetries: 3,
|
||||||
|
Concurrency: 5,
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Validate checks if the configuration is valid
|
||||||
|
func (c *Config) Validate() error {
|
||||||
|
if c.Provider == "" {
|
||||||
|
return fmt.Errorf("provider is required")
|
||||||
|
}
|
||||||
|
if c.Bucket == "" {
|
||||||
|
return fmt.Errorf("bucket name is required")
|
||||||
|
}
|
||||||
|
if c.Provider == "s3" || c.Provider == "aws" {
|
||||||
|
if c.Region == "" && c.Endpoint == "" {
|
||||||
|
return fmt.Errorf("region or endpoint is required for S3")
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if c.Provider == "minio" || c.Provider == "b2" {
|
||||||
|
if c.Endpoint == "" {
|
||||||
|
return fmt.Errorf("endpoint is required for %s", c.Provider)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// ProgressReader wraps an io.Reader to track progress
|
||||||
|
type ProgressReader struct {
|
||||||
|
reader io.Reader
|
||||||
|
total int64
|
||||||
|
read int64
|
||||||
|
callback ProgressCallback
|
||||||
|
lastReport time.Time
|
||||||
|
}
|
||||||
|
|
||||||
|
// NewProgressReader creates a progress tracking reader
|
||||||
|
func NewProgressReader(r io.Reader, total int64, callback ProgressCallback) *ProgressReader {
|
||||||
|
return &ProgressReader{
|
||||||
|
reader: r,
|
||||||
|
total: total,
|
||||||
|
callback: callback,
|
||||||
|
lastReport: time.Now(),
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func (pr *ProgressReader) Read(p []byte) (int, error) {
|
||||||
|
n, err := pr.reader.Read(p)
|
||||||
|
pr.read += int64(n)
|
||||||
|
|
||||||
|
// Report progress every 100ms or when complete
|
||||||
|
now := time.Now()
|
||||||
|
if now.Sub(pr.lastReport) > 100*time.Millisecond || err == io.EOF {
|
||||||
|
if pr.callback != nil {
|
||||||
|
pr.callback(pr.read, pr.total)
|
||||||
|
}
|
||||||
|
pr.lastReport = now
|
||||||
|
}
|
||||||
|
|
||||||
|
return n, err
|
||||||
|
}
|
||||||
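To make the interface above concrete, here is a minimal caller sketch. It assumes the `dbbackup/internal/cloud` import path used elsewhere in this changeset and a locally running MinIO instance; the endpoint, bucket, and credentials are placeholders, and the snippet is an illustration rather than part of the diff.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"dbbackup/internal/cloud"
)

func main() {
	// Placeholder MinIO endpoint, bucket, and credentials -- replace with your own.
	cfg := &cloud.Config{
		Provider:  "minio",
		Bucket:    "my-backups",
		Endpoint:  "http://localhost:9000",
		AccessKey: "minioadmin",
		SecretKey: "minioadmin123",
		PathStyle: true,
	}

	backend, err := cloud.NewBackend(cfg)
	if err != nil {
		log.Fatalf("create backend: %v", err)
	}

	// Upload a local dump and print coarse progress as it goes.
	progress := func(transferred, total int64) {
		fmt.Printf("\ruploaded %s of %s", cloud.FormatSize(transferred), cloud.FormatSize(total))
	}
	if err := backend.Upload(context.Background(), "/tmp/mydb.dump", "mydb.dump", progress); err != nil {
		log.Fatalf("upload: %v", err)
	}
	fmt.Println("\ndone")
}
```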
372 internal/cloud/s3.go Normal file
@@ -0,0 +1,372 @@
package cloud

import (
	"context"
	"fmt"
	"io"
	"os"
	"path/filepath"
	"strings"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/credentials"
	"github.com/aws/aws-sdk-go-v2/feature/s3/manager"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// S3Backend implements the Backend interface for AWS S3 and compatible services
type S3Backend struct {
	client *s3.Client
	bucket string
	prefix string
	config *Config
}

// NewS3Backend creates a new S3 backend
func NewS3Backend(cfg *Config) (*S3Backend, error) {
	if err := cfg.Validate(); err != nil {
		return nil, fmt.Errorf("invalid config: %w", err)
	}

	ctx := context.Background()

	// Build AWS config
	var awsCfg aws.Config
	var err error

	if cfg.AccessKey != "" && cfg.SecretKey != "" {
		// Use explicit credentials
		credsProvider := credentials.NewStaticCredentialsProvider(
			cfg.AccessKey,
			cfg.SecretKey,
			"",
		)

		awsCfg, err = config.LoadDefaultConfig(ctx,
			config.WithCredentialsProvider(credsProvider),
			config.WithRegion(cfg.Region),
		)
	} else {
		// Use default credential chain (environment, IAM role, etc.)
		awsCfg, err = config.LoadDefaultConfig(ctx,
			config.WithRegion(cfg.Region),
		)
	}

	if err != nil {
		return nil, fmt.Errorf("failed to load AWS config: %w", err)
	}

	// Create S3 client with custom options
	clientOptions := []func(*s3.Options){
		func(o *s3.Options) {
			if cfg.Endpoint != "" {
				o.BaseEndpoint = aws.String(cfg.Endpoint)
			}
			if cfg.PathStyle {
				o.UsePathStyle = true
			}
		},
	}

	client := s3.NewFromConfig(awsCfg, clientOptions...)

	return &S3Backend{
		client: client,
		bucket: cfg.Bucket,
		prefix: cfg.Prefix,
		config: cfg,
	}, nil
}

// Name returns the backend name
func (s *S3Backend) Name() string {
	return "s3"
}

// buildKey creates the full S3 key from a filename
func (s *S3Backend) buildKey(filename string) string {
	if s.prefix == "" {
		return filename
	}
	return filepath.Join(s.prefix, filename)
}

// Upload uploads a file to S3 with multipart support for large files
func (s *S3Backend) Upload(ctx context.Context, localPath, remotePath string, progress ProgressCallback) error {
	// Open local file
	file, err := os.Open(localPath)
	if err != nil {
		return fmt.Errorf("failed to open file: %w", err)
	}
	defer file.Close()

	// Get file size
	stat, err := file.Stat()
	if err != nil {
		return fmt.Errorf("failed to stat file: %w", err)
	}
	fileSize := stat.Size()

	// Build S3 key
	key := s.buildKey(remotePath)

	// Use multipart upload for files larger than 100MB
	const multipartThreshold = 100 * 1024 * 1024 // 100 MB

	if fileSize > multipartThreshold {
		return s.uploadMultipart(ctx, file, key, fileSize, progress)
	}

	// Simple upload for smaller files
	return s.uploadSimple(ctx, file, key, fileSize, progress)
}

// uploadSimple performs a simple single-part upload
func (s *S3Backend) uploadSimple(ctx context.Context, file *os.File, key string, fileSize int64, progress ProgressCallback) error {
	// Create progress reader
	var reader io.Reader = file
	if progress != nil {
		reader = NewProgressReader(file, fileSize, progress)
	}

	// Upload to S3
	_, err := s.client.PutObject(ctx, &s3.PutObjectInput{
		Bucket: aws.String(s.bucket),
		Key:    aws.String(key),
		Body:   reader,
	})

	if err != nil {
		return fmt.Errorf("failed to upload to S3: %w", err)
	}

	return nil
}

// uploadMultipart performs a multipart upload for large files
func (s *S3Backend) uploadMultipart(ctx context.Context, file *os.File, key string, fileSize int64, progress ProgressCallback) error {
	// Create uploader with custom options
	uploader := manager.NewUploader(s.client, func(u *manager.Uploader) {
		// Part size: 10MB
		u.PartSize = 10 * 1024 * 1024

		// Upload up to 10 parts concurrently
		u.Concurrency = 10

		// Clean up uploaded parts if the upload fails
		u.LeavePartsOnError = false
	})

	// Wrap file with progress reader
	var reader io.Reader = file
	if progress != nil {
		reader = NewProgressReader(file, fileSize, progress)
	}

	// Upload with multipart
	_, err := uploader.Upload(ctx, &s3.PutObjectInput{
		Bucket: aws.String(s.bucket),
		Key:    aws.String(key),
		Body:   reader,
	})

	if err != nil {
		return fmt.Errorf("multipart upload failed: %w", err)
	}

	return nil
}

// Download downloads a file from S3
func (s *S3Backend) Download(ctx context.Context, remotePath, localPath string, progress ProgressCallback) error {
	// Build S3 key
	key := s.buildKey(remotePath)

	// Get object size first
	size, err := s.GetSize(ctx, remotePath)
	if err != nil {
		return fmt.Errorf("failed to get object size: %w", err)
	}

	// Download from S3
	result, err := s.client.GetObject(ctx, &s3.GetObjectInput{
		Bucket: aws.String(s.bucket),
		Key:    aws.String(key),
	})
	if err != nil {
		return fmt.Errorf("failed to download from S3: %w", err)
	}
	defer result.Body.Close()

	// Create local file
	if err := os.MkdirAll(filepath.Dir(localPath), 0755); err != nil {
		return fmt.Errorf("failed to create directory: %w", err)
	}

	outFile, err := os.Create(localPath)
	if err != nil {
		return fmt.Errorf("failed to create local file: %w", err)
	}
	defer outFile.Close()

	// Copy with progress tracking
	var reader io.Reader = result.Body
	if progress != nil {
		reader = NewProgressReader(result.Body, size, progress)
	}

	_, err = io.Copy(outFile, reader)
	if err != nil {
		return fmt.Errorf("failed to write file: %w", err)
	}

	return nil
}

// List lists all backup files in S3
func (s *S3Backend) List(ctx context.Context, prefix string) ([]BackupInfo, error) {
	// Build full prefix
	fullPrefix := s.buildKey(prefix)

	// List objects
	result, err := s.client.ListObjectsV2(ctx, &s3.ListObjectsV2Input{
		Bucket: aws.String(s.bucket),
		Prefix: aws.String(fullPrefix),
	})
	if err != nil {
		return nil, fmt.Errorf("failed to list objects: %w", err)
	}

	// Convert to BackupInfo
	var backups []BackupInfo
	for _, obj := range result.Contents {
		if obj.Key == nil {
			continue
		}

		key := *obj.Key
		name := filepath.Base(key)

		// Skip if it's just a directory marker
		if strings.HasSuffix(key, "/") {
			continue
		}

		info := BackupInfo{
			Key:          key,
			Name:         name,
			Size:         *obj.Size,
			LastModified: *obj.LastModified,
		}

		if obj.ETag != nil {
			info.ETag = *obj.ETag
		}

		if obj.StorageClass != "" {
			info.StorageClass = string(obj.StorageClass)
		} else {
			info.StorageClass = "STANDARD"
		}

		backups = append(backups, info)
	}

	return backups, nil
}

// Delete deletes a file from S3
func (s *S3Backend) Delete(ctx context.Context, remotePath string) error {
	key := s.buildKey(remotePath)

	_, err := s.client.DeleteObject(ctx, &s3.DeleteObjectInput{
		Bucket: aws.String(s.bucket),
		Key:    aws.String(key),
	})

	if err != nil {
		return fmt.Errorf("failed to delete object: %w", err)
	}

	return nil
}

// Exists checks if a file exists in S3
func (s *S3Backend) Exists(ctx context.Context, remotePath string) (bool, error) {
	key := s.buildKey(remotePath)

	_, err := s.client.HeadObject(ctx, &s3.HeadObjectInput{
		Bucket: aws.String(s.bucket),
		Key:    aws.String(key),
	})

	if err != nil {
		// Check if it's a "not found" error
		if strings.Contains(err.Error(), "NotFound") || strings.Contains(err.Error(), "404") {
			return false, nil
		}
		return false, fmt.Errorf("failed to check object existence: %w", err)
	}

	return true, nil
}

// GetSize returns the size of a remote file
func (s *S3Backend) GetSize(ctx context.Context, remotePath string) (int64, error) {
	key := s.buildKey(remotePath)

	result, err := s.client.HeadObject(ctx, &s3.HeadObjectInput{
		Bucket: aws.String(s.bucket),
		Key:    aws.String(key),
	})

	if err != nil {
		return 0, fmt.Errorf("failed to get object metadata: %w", err)
	}

	if result.ContentLength == nil {
		return 0, fmt.Errorf("content length not available")
	}

	return *result.ContentLength, nil
}

// BucketExists checks if the bucket exists and is accessible
func (s *S3Backend) BucketExists(ctx context.Context) (bool, error) {
	_, err := s.client.HeadBucket(ctx, &s3.HeadBucketInput{
		Bucket: aws.String(s.bucket),
	})

	if err != nil {
		if strings.Contains(err.Error(), "NotFound") || strings.Contains(err.Error(), "404") {
			return false, nil
		}
		return false, fmt.Errorf("failed to check bucket: %w", err)
	}

	return true, nil
}

// CreateBucket creates the bucket if it doesn't exist
func (s *S3Backend) CreateBucket(ctx context.Context) error {
	exists, err := s.BucketExists(ctx)
	if err != nil {
		return err
	}

	if exists {
		return nil
	}

	_, err = s.client.CreateBucket(ctx, &s3.CreateBucketInput{
		Bucket: aws.String(s.bucket),
	})

	if err != nil {
		return fmt.Errorf("failed to create bucket: %w", err)
	}

	return nil
}
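As a quick sanity check on the uploader settings above: with a 10 MB part size, a file just over the 100 MB threshold splits into 11 parts, and the concurrency of 10 keeps most of those parts in flight at once. The arithmetic, as an illustrative sketch only (the 250 MB file size is an example, not a value from the changeset):

```go
package main

import "fmt"

func main() {
	const partSize = int64(10 * 1024 * 1024) // matches u.PartSize in uploadMultipart above
	fileSize := int64(250 * 1024 * 1024)     // example: a 250 MB dump

	// Ceiling division: number of multipart parts the uploader will create.
	parts := (fileSize + partSize - 1) / partSize
	fmt.Printf("%d parts of up to %d bytes each\n", parts, partSize) // 25 parts
}
```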
198 internal/cloud/uri.go Normal file
@@ -0,0 +1,198 @@
package cloud

import (
	"fmt"
	"net/url"
	"path"
	"strings"
)

// CloudURI represents a parsed cloud storage URI
type CloudURI struct {
	Provider string // "s3", "minio", "azure", "gcs", "b2"
	Bucket   string // Bucket or container name
	Path     string // Path within bucket (without leading /)
	Region   string // Region (optional, extracted from host)
	Endpoint string // Custom endpoint (for MinIO, etc)
	FullURI  string // Original URI string
}

// ParseCloudURI parses a cloud storage URI like s3://bucket/path/file.dump
// Supported formats:
//   - s3://bucket/path/file.dump
//   - s3://bucket.s3.region.amazonaws.com/path/file.dump
//   - minio://bucket/path/file.dump
//   - azure://container/path/file.dump
//   - gs://bucket/path/file.dump (Google Cloud Storage)
//   - b2://bucket/path/file.dump (Backblaze B2)
func ParseCloudURI(uri string) (*CloudURI, error) {
	if uri == "" {
		return nil, fmt.Errorf("URI cannot be empty")
	}

	// Parse URL
	parsed, err := url.Parse(uri)
	if err != nil {
		return nil, fmt.Errorf("invalid URI: %w", err)
	}

	// Extract provider from scheme
	provider := strings.ToLower(parsed.Scheme)
	if provider == "" {
		return nil, fmt.Errorf("URI must have a scheme (e.g., s3://)")
	}

	// Validate provider
	validProviders := map[string]bool{
		"s3":    true,
		"minio": true,
		"azure": true,
		"gs":    true,
		"gcs":   true,
		"b2":    true,
	}
	if !validProviders[provider] {
		return nil, fmt.Errorf("unsupported provider: %s (supported: s3, minio, azure, gs, gcs, b2)", provider)
	}

	// Normalize provider names
	if provider == "gcs" {
		provider = "gs"
	}

	// Extract bucket and path
	bucket := parsed.Host
	if bucket == "" {
		return nil, fmt.Errorf("URI must specify a bucket (e.g., s3://bucket/path)")
	}

	// Extract region from AWS S3 hostname if present
	// Format: bucket.s3.region.amazonaws.com or bucket.s3-region.amazonaws.com
	var region string
	var endpoint string

	if strings.Contains(bucket, ".amazonaws.com") {
		parts := strings.Split(bucket, ".")
		if len(parts) >= 3 {
			// Extract bucket name (first part)
			bucket = parts[0]

			// Extract region if present
			// bucket.s3.us-west-2.amazonaws.com -> us-west-2
			// bucket.s3-us-west-2.amazonaws.com -> us-west-2
			for i, part := range parts {
				if part == "s3" && i+1 < len(parts) && parts[i+1] != "amazonaws" {
					region = parts[i+1]
					break
				}
				if strings.HasPrefix(part, "s3-") {
					region = strings.TrimPrefix(part, "s3-")
					break
				}
			}
		}
	}

	// For MinIO and custom endpoints, preserve the host as endpoint
	if provider == "minio" || (provider == "s3" && !strings.Contains(bucket, "amazonaws.com")) {
		// If it looks like a custom endpoint (has dots), preserve it
		if strings.Contains(bucket, ".") && !strings.Contains(bucket, "amazonaws.com") {
			endpoint = bucket
			// Try to extract the bucket from the path
			trimmedPath := strings.TrimPrefix(parsed.Path, "/")
			pathParts := strings.SplitN(trimmedPath, "/", 2)
			if len(pathParts) > 0 && pathParts[0] != "" {
				bucket = pathParts[0]
				if len(pathParts) > 1 {
					parsed.Path = "/" + pathParts[1]
				} else {
					parsed.Path = "/"
				}
			}
		}
	}

	// Clean up path (remove leading slash)
	filepath := strings.TrimPrefix(parsed.Path, "/")

	return &CloudURI{
		Provider: provider,
		Bucket:   bucket,
		Path:     filepath,
		Region:   region,
		Endpoint: endpoint,
		FullURI:  uri,
	}, nil
}

// IsCloudURI checks if a string looks like a cloud storage URI
func IsCloudURI(s string) bool {
	s = strings.ToLower(s)
	return strings.HasPrefix(s, "s3://") ||
		strings.HasPrefix(s, "minio://") ||
		strings.HasPrefix(s, "azure://") ||
		strings.HasPrefix(s, "gs://") ||
		strings.HasPrefix(s, "gcs://") ||
		strings.HasPrefix(s, "b2://")
}

// String returns the string representation of the URI
func (u *CloudURI) String() string {
	return u.FullURI
}

// BaseName returns the filename without path
func (u *CloudURI) BaseName() string {
	return path.Base(u.Path)
}

// Dir returns the directory path without filename
func (u *CloudURI) Dir() string {
	return path.Dir(u.Path)
}

// Join appends path elements to the URI path
func (u *CloudURI) Join(elem ...string) string {
	newPath := u.Path
	for _, e := range elem {
		newPath = path.Join(newPath, e)
	}
	return fmt.Sprintf("%s://%s/%s", u.Provider, u.Bucket, newPath)
}

// ToConfig converts a CloudURI to a cloud.Config
func (u *CloudURI) ToConfig() *Config {
	cfg := &Config{
		Provider: u.Provider,
		Bucket:   u.Bucket,
		Prefix:   u.Dir(), // Use directory part as prefix
	}

	// Set region if available
	if u.Region != "" {
		cfg.Region = u.Region
	}

	// Set endpoint if available (for MinIO, etc)
	if u.Endpoint != "" {
		cfg.Endpoint = u.Endpoint
	}

	// Provider-specific settings
	switch u.Provider {
	case "minio":
		cfg.PathStyle = true
	case "b2":
		cfg.PathStyle = true
	}

	return cfg
}

// BuildRemotePath constructs the full remote path for a file
func (u *CloudURI) BuildRemotePath(filename string) string {
	if u.Path == "" || u.Path == "." {
		return filename
	}
	return path.Join(u.Path, filename)
}
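A short sketch of how the parser behaves for the plain-bucket form of a URI. The bucket and key below are placeholders, and the values shown in comments follow from the code above; this is an illustration, not part of the changeset.

```go
package main

import (
	"fmt"
	"log"

	"dbbackup/internal/cloud"
)

func main() {
	uri, err := cloud.ParseCloudURI("s3://my-bucket/backups/mydb_20260115_120000.dump")
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(uri.Provider)   // s3
	fmt.Println(uri.Bucket)     // my-bucket
	fmt.Println(uri.Path)       // backups/mydb_20260115_120000.dump
	fmt.Println(uri.BaseName()) // mydb_20260115_120000.dump
	fmt.Println(uri.Dir())      // backups

	// Turn the URI into a backend config; the directory part becomes the key prefix.
	cfg := uri.ToConfig()
	fmt.Println(cfg.Bucket, cfg.Prefix) // my-bucket backups
}
```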
@@ -85,6 +85,17 @@ type Config struct {
	TUIDryRun  bool   // TUI dry-run mode (simulate without execution)
	TUIVerbose bool   // Verbose TUI logging
	TUILogFile string // TUI event log file path

	// Cloud storage options (v2.0)
	CloudEnabled    bool   // Enable cloud storage integration
	CloudProvider   string // "s3", "minio", "b2"
	CloudBucket     string // Bucket name
	CloudRegion     string // Region (for S3)
	CloudEndpoint   string // Custom endpoint (for MinIO, B2)
	CloudAccessKey  string // Access key
	CloudSecretKey  string // Secret key
	CloudPrefix     string // Key prefix
	CloudAutoUpload bool   // Automatically upload after backup
}

// New creates a new configuration with default values
@@ -192,6 +203,17 @@ func New() *Config {
	TUIDryRun:  getEnvBool("TUI_DRY_RUN", false),  // Execute by default
	TUIVerbose: getEnvBool("TUI_VERBOSE", false),  // Quiet by default
	TUILogFile: getEnvString("TUI_LOG_FILE", ""),  // No log file by default

	// Cloud storage defaults (v2.0)
	CloudEnabled:    getEnvBool("CLOUD_ENABLED", false),
	CloudProvider:   getEnvString("CLOUD_PROVIDER", "s3"),
	CloudBucket:     getEnvString("CLOUD_BUCKET", ""),
	CloudRegion:     getEnvString("CLOUD_REGION", "us-east-1"),
	CloudEndpoint:   getEnvString("CLOUD_ENDPOINT", ""),
	CloudAccessKey:  getEnvString("CLOUD_ACCESS_KEY", getEnvString("AWS_ACCESS_KEY_ID", "")),
	CloudSecretKey:  getEnvString("CLOUD_SECRET_KEY", getEnvString("AWS_SECRET_ACCESS_KEY", "")),
	CloudPrefix:     getEnvString("CLOUD_PREFIX", ""),
	CloudAutoUpload: getEnvBool("CLOUD_AUTO_UPLOAD", false),
}

// Ensure canonical defaults are enforced
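The `getEnvString` and `getEnvBool` helpers are not part of this hunk. Based on how they are nested here, a plausible reading is that the cloud credentials resolve as CLOUD_ACCESS_KEY, then AWS_ACCESS_KEY_ID, then empty (and likewise for the secret key). The sketch below states that assumption explicitly; it is not the project's actual helper.

```go
package config

import "os"

// getEnvString is sketched here under the assumption that it returns the
// environment value when set and non-empty, and the default otherwise.
// With that reading, CloudAccessKey falls back from CLOUD_ACCESS_KEY to
// AWS_ACCESS_KEY_ID to "".
func getEnvString(key, def string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return def
}
```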
167 internal/metadata/metadata.go Normal file
@@ -0,0 +1,167 @@
package metadata

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
	"io"
	"os"
	"path/filepath"
	"time"
)

// BackupMetadata contains comprehensive information about a backup
type BackupMetadata struct {
	Version         string            `json:"version"`
	Timestamp       time.Time         `json:"timestamp"`
	Database        string            `json:"database"`
	DatabaseType    string            `json:"database_type"`    // postgresql, mysql, mariadb
	DatabaseVersion string            `json:"database_version"` // e.g., "PostgreSQL 15.3"
	Host            string            `json:"host"`
	Port            int               `json:"port"`
	User            string            `json:"user"`
	BackupFile      string            `json:"backup_file"`
	SizeBytes       int64             `json:"size_bytes"`
	SHA256          string            `json:"sha256"`
	Compression     string            `json:"compression"` // none, gzip, pigz
	BackupType      string            `json:"backup_type"` // full, incremental (for v2.0)
	BaseBackup      string            `json:"base_backup,omitempty"`
	Duration        float64           `json:"duration_seconds"`
	ExtraInfo       map[string]string `json:"extra_info,omitempty"`
}

// ClusterMetadata contains metadata for cluster backups
type ClusterMetadata struct {
	Version      string            `json:"version"`
	Timestamp    time.Time         `json:"timestamp"`
	ClusterName  string            `json:"cluster_name"`
	DatabaseType string            `json:"database_type"`
	Host         string            `json:"host"`
	Port         int               `json:"port"`
	Databases    []BackupMetadata  `json:"databases"`
	TotalSize    int64             `json:"total_size_bytes"`
	Duration     float64           `json:"duration_seconds"`
	ExtraInfo    map[string]string `json:"extra_info,omitempty"`
}

// CalculateSHA256 computes the SHA-256 checksum of a file
func CalculateSHA256(filePath string) (string, error) {
	f, err := os.Open(filePath)
	if err != nil {
		return "", fmt.Errorf("failed to open file: %w", err)
	}
	defer f.Close()

	hasher := sha256.New()
	if _, err := io.Copy(hasher, f); err != nil {
		return "", fmt.Errorf("failed to calculate checksum: %w", err)
	}

	return hex.EncodeToString(hasher.Sum(nil)), nil
}

// Save writes metadata to a .meta.json file
func (m *BackupMetadata) Save() error {
	metaPath := m.BackupFile + ".meta.json"

	data, err := json.MarshalIndent(m, "", "  ")
	if err != nil {
		return fmt.Errorf("failed to marshal metadata: %w", err)
	}

	if err := os.WriteFile(metaPath, data, 0644); err != nil {
		return fmt.Errorf("failed to write metadata file: %w", err)
	}

	return nil
}

// Load reads metadata from a .meta.json file
func Load(backupFile string) (*BackupMetadata, error) {
	metaPath := backupFile + ".meta.json"

	data, err := os.ReadFile(metaPath)
	if err != nil {
		return nil, fmt.Errorf("failed to read metadata file: %w", err)
	}

	var meta BackupMetadata
	if err := json.Unmarshal(data, &meta); err != nil {
		return nil, fmt.Errorf("failed to parse metadata: %w", err)
	}

	return &meta, nil
}

// Save writes cluster metadata to a .meta.json file
func (m *ClusterMetadata) Save(targetFile string) error {
	metaPath := targetFile + ".meta.json"

	data, err := json.MarshalIndent(m, "", "  ")
	if err != nil {
		return fmt.Errorf("failed to marshal cluster metadata: %w", err)
	}

	if err := os.WriteFile(metaPath, data, 0644); err != nil {
		return fmt.Errorf("failed to write cluster metadata file: %w", err)
	}

	return nil
}

// LoadCluster reads cluster metadata from a .meta.json file
func LoadCluster(targetFile string) (*ClusterMetadata, error) {
	metaPath := targetFile + ".meta.json"

	data, err := os.ReadFile(metaPath)
	if err != nil {
		return nil, fmt.Errorf("failed to read cluster metadata file: %w", err)
	}

	var meta ClusterMetadata
	if err := json.Unmarshal(data, &meta); err != nil {
		return nil, fmt.Errorf("failed to parse cluster metadata: %w", err)
	}

	return &meta, nil
}

// ListBackups scans a directory for backup files and returns their metadata
func ListBackups(dir string) ([]*BackupMetadata, error) {
	pattern := filepath.Join(dir, "*.meta.json")
	matches, err := filepath.Glob(pattern)
	if err != nil {
		return nil, fmt.Errorf("failed to scan directory: %w", err)
	}

	var backups []*BackupMetadata
	for _, metaFile := range matches {
		// Extract backup file path (remove .meta.json suffix)
		backupFile := metaFile[:len(metaFile)-len(".meta.json")]

		meta, err := Load(backupFile)
		if err != nil {
			// Skip invalid metadata files
			continue
		}

		backups = append(backups, meta)
	}

	return backups, nil
}

// FormatSize returns a human-readable size
func FormatSize(bytes int64) string {
	const unit = 1024
	if bytes < unit {
		return fmt.Sprintf("%d B", bytes)
	}
	div, exp := int64(unit), 0
	for n := bytes / unit; n >= unit; n /= unit {
		div *= unit
		exp++
	}
	return fmt.Sprintf("%.1f %ciB", float64(bytes)/float64(div), "KMGTPE"[exp])
}
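A minimal round-trip sketch for the metadata package: compute the checksum, record a subset of the fields, save, and load again. The backup path is a placeholder and only a few struct fields are filled in; this is an illustration, not part of the changeset.

```go
package main

import (
	"log"
	"os"
	"time"

	"dbbackup/internal/metadata"
)

func main() {
	backupFile := "/tmp/dbbackup-test/mydb_20260115_120000.dump" // placeholder path

	sum, err := metadata.CalculateSHA256(backupFile)
	if err != nil {
		log.Fatal(err)
	}
	info, err := os.Stat(backupFile)
	if err != nil {
		log.Fatal(err)
	}

	meta := &metadata.BackupMetadata{
		Version:    "2.0",
		Timestamp:  time.Now(),
		Database:   "mydb",
		BackupFile: backupFile,
		SizeBytes:  info.Size(),
		SHA256:     sum,
	}
	if err := meta.Save(); err != nil { // writes <backupFile>.meta.json next to the dump
		log.Fatal(err)
	}

	// Round-trip: Load takes the backup path and appends .meta.json itself.
	loaded, err := metadata.Load(backupFile)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("loaded %s (%s)", loaded.Database, metadata.FormatSize(loaded.SizeBytes))
}
```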
211 internal/restore/cloud_download.go Normal file
@@ -0,0 +1,211 @@
package restore

import (
	"context"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"path/filepath"

	"dbbackup/internal/cloud"
	"dbbackup/internal/logger"
	"dbbackup/internal/metadata"
)

// CloudDownloader handles downloading backups from cloud storage
type CloudDownloader struct {
	backend cloud.Backend
	log     logger.Logger
}

// NewCloudDownloader creates a new cloud downloader
func NewCloudDownloader(backend cloud.Backend, log logger.Logger) *CloudDownloader {
	return &CloudDownloader{
		backend: backend,
		log:     log,
	}
}

// DownloadOptions contains options for downloading from cloud
type DownloadOptions struct {
	VerifyChecksum bool   // Verify SHA-256 checksum after download
	KeepLocal      bool   // Keep downloaded file (don't delete temp)
	TempDir        string // Temp directory (default: os.TempDir())
}

// DownloadResult contains information about a downloaded backup
type DownloadResult struct {
	LocalPath    string // Path to downloaded file
	RemotePath   string // Original remote path
	Size         int64  // File size in bytes
	SHA256       string // SHA-256 checksum (if verified)
	MetadataPath string // Path to downloaded metadata (if exists)
	IsTempFile   bool   // Whether the file is in a temp directory
}

// Download downloads a backup from cloud storage
func (d *CloudDownloader) Download(ctx context.Context, remotePath string, opts DownloadOptions) (*DownloadResult, error) {
	// Determine temp directory
	tempDir := opts.TempDir
	if tempDir == "" {
		tempDir = os.TempDir()
	}

	// Create unique temp subdirectory
	tempSubDir := filepath.Join(tempDir, fmt.Sprintf("dbbackup-download-%d", os.Getpid()))
	if err := os.MkdirAll(tempSubDir, 0755); err != nil {
		return nil, fmt.Errorf("failed to create temp directory: %w", err)
	}

	// Extract filename from remote path
	filename := filepath.Base(remotePath)
	localPath := filepath.Join(tempSubDir, filename)

	d.log.Info("Downloading backup from cloud", "remote", remotePath, "local", localPath)

	// Get file size for progress tracking
	size, err := d.backend.GetSize(ctx, remotePath)
	if err != nil {
		d.log.Warn("Could not get remote file size", "error", err)
		size = 0 // Continue anyway
	}

	// Progress callback
	var lastPercent int
	progressCallback := func(transferred, total int64) {
		if total > 0 {
			percent := int(float64(transferred) / float64(total) * 100)
			if percent != lastPercent && percent%10 == 0 {
				d.log.Info("Download progress", "percent", percent, "transferred", cloud.FormatSize(transferred), "total", cloud.FormatSize(total))
				lastPercent = percent
			}
		}
	}

	// Download file
	if err := d.backend.Download(ctx, remotePath, localPath, progressCallback); err != nil {
		// Cleanup on failure
		os.RemoveAll(tempSubDir)
		return nil, fmt.Errorf("download failed: %w", err)
	}

	result := &DownloadResult{
		LocalPath:  localPath,
		RemotePath: remotePath,
		Size:       size,
		IsTempFile: !opts.KeepLocal,
	}

	// Try to download the metadata file
	metaRemotePath := remotePath + ".meta.json"
	exists, err := d.backend.Exists(ctx, metaRemotePath)
	if err == nil && exists {
		metaLocalPath := localPath + ".meta.json"
		if err := d.backend.Download(ctx, metaRemotePath, metaLocalPath, nil); err != nil {
			d.log.Warn("Failed to download metadata", "error", err)
		} else {
			result.MetadataPath = metaLocalPath
			d.log.Debug("Downloaded metadata", "path", metaLocalPath)
		}
	}

	// Verify checksum if requested
	if opts.VerifyChecksum {
		d.log.Info("Verifying checksum...")
		checksum, err := calculateSHA256(localPath)
		if err != nil {
			// Cleanup on verification failure
			os.RemoveAll(tempSubDir)
			return nil, fmt.Errorf("checksum calculation failed: %w", err)
		}
		result.SHA256 = checksum

		// Check against metadata if available (metadata.Load appends ".meta.json"
		// itself, so pass the backup path rather than the metadata path)
		if result.MetadataPath != "" {
			meta, err := metadata.Load(result.LocalPath)
			if err != nil {
				d.log.Warn("Failed to load metadata for verification", "error", err)
			} else if meta.SHA256 != "" && meta.SHA256 != checksum {
				// Cleanup on verification failure
				os.RemoveAll(tempSubDir)
				return nil, fmt.Errorf("checksum mismatch: expected %s, got %s", meta.SHA256, checksum)
			} else if meta.SHA256 == checksum {
				d.log.Info("Checksum verified successfully", "sha256", checksum)
			}
		}
	}

	d.log.Info("Download completed", "path", localPath, "size", cloud.FormatSize(result.Size))

	return result, nil
}

// DownloadFromURI downloads a backup using a cloud URI
func (d *CloudDownloader) DownloadFromURI(ctx context.Context, uri string, opts DownloadOptions) (*DownloadResult, error) {
	// Parse URI
	cloudURI, err := cloud.ParseCloudURI(uri)
	if err != nil {
		return nil, fmt.Errorf("invalid cloud URI: %w", err)
	}

	// Download using the path from the URI
	return d.Download(ctx, cloudURI.Path, opts)
}

// Cleanup removes downloaded temp files
func (r *DownloadResult) Cleanup() error {
	if !r.IsTempFile {
		return nil // Don't delete non-temp files
	}

	// Remove the entire temp directory
	tempDir := filepath.Dir(r.LocalPath)
	if err := os.RemoveAll(tempDir); err != nil {
		return fmt.Errorf("failed to cleanup temp files: %w", err)
	}

	return nil
}

// calculateSHA256 calculates the SHA-256 checksum of a file
func calculateSHA256(filePath string) (string, error) {
	file, err := os.Open(filePath)
	if err != nil {
		return "", err
	}
	defer file.Close()

	hash := sha256.New()
	if _, err := io.Copy(hash, file); err != nil {
		return "", err
	}

	return hex.EncodeToString(hash.Sum(nil)), nil
}

// DownloadFromCloudURI is a convenience function to download from a cloud URI
func DownloadFromCloudURI(ctx context.Context, uri string, opts DownloadOptions) (*DownloadResult, error) {
	// Parse URI
	cloudURI, err := cloud.ParseCloudURI(uri)
	if err != nil {
		return nil, fmt.Errorf("invalid cloud URI: %w", err)
	}

	// Create config from URI
	cfg := cloudURI.ToConfig()

	// Create backend
	backend, err := cloud.NewBackend(cfg)
	if err != nil {
		return nil, fmt.Errorf("failed to create cloud backend: %w", err)
	}

	// Create downloader
	log := logger.New("info", "text")
	downloader := NewCloudDownloader(backend, log)

	// Download
	return downloader.Download(ctx, cloudURI.Path, opts)
}
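A short sketch of the convenience entry point above, with checksum verification and temp cleanup. The URI is a placeholder; credentials are expected to come from the environment as described in CLOUD.md, and this snippet is illustrative rather than part of the changeset.

```go
package main

import (
	"context"
	"log"

	"dbbackup/internal/restore"
)

func main() {
	// Placeholder URI; any s3://, minio:// or b2:// URI accepted by ParseCloudURI works here.
	uri := "s3://my-bucket/backups/mydb_20260115_120000.dump"

	res, err := restore.DownloadFromCloudURI(context.Background(), uri, restore.DownloadOptions{
		VerifyChecksum: true, // compare against the .meta.json checksum when one exists
	})
	if err != nil {
		log.Fatal(err)
	}
	defer res.Cleanup() // removes the temp directory for temp downloads

	log.Printf("downloaded to %s (sha256 %s)", res.LocalPath, res.SHA256)
}
```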
224 internal/retention/retention.go Normal file
@@ -0,0 +1,224 @@
package retention

import (
	"fmt"
	"os"
	"path/filepath"
	"sort"
	"time"

	"dbbackup/internal/metadata"
)

// Policy defines the retention rules
type Policy struct {
	RetentionDays int
	MinBackups    int
	DryRun        bool
}

// CleanupResult contains information about cleanup operations
type CleanupResult struct {
	TotalBackups        int
	EligibleForDeletion int
	Deleted             []string
	Kept                []string
	SpaceFreed          int64
	Errors              []error
}

// ApplyPolicy enforces the retention policy on backups in a directory
func ApplyPolicy(backupDir string, policy Policy) (*CleanupResult, error) {
	result := &CleanupResult{
		Deleted: make([]string, 0),
		Kept:    make([]string, 0),
		Errors:  make([]error, 0),
	}

	// List all backups in the directory
	backups, err := metadata.ListBackups(backupDir)
	if err != nil {
		return nil, fmt.Errorf("failed to list backups: %w", err)
	}

	result.TotalBackups = len(backups)

	// Sort backups by timestamp (oldest first)
	sort.Slice(backups, func(i, j int) bool {
		return backups[i].Timestamp.Before(backups[j].Timestamp)
	})

	// Calculate cutoff date
	cutoffDate := time.Now().AddDate(0, 0, -policy.RetentionDays)

	// Determine which backups to delete
	for i, backup := range backups {
		// Always keep the minimum number of backups (the most recent ones)
		backupsRemaining := len(backups) - i
		if backupsRemaining <= policy.MinBackups {
			result.Kept = append(result.Kept, backup.BackupFile)
			continue
		}

		// Check if the backup is older than the retention period
		if backup.Timestamp.Before(cutoffDate) {
			result.EligibleForDeletion++

			if policy.DryRun {
				result.Deleted = append(result.Deleted, backup.BackupFile)
			} else {
				// Delete backup file and associated metadata
				if err := deleteBackup(backup.BackupFile); err != nil {
					result.Errors = append(result.Errors,
						fmt.Errorf("failed to delete %s: %w", backup.BackupFile, err))
				} else {
					result.Deleted = append(result.Deleted, backup.BackupFile)
					result.SpaceFreed += backup.SizeBytes
				}
			}
		} else {
			result.Kept = append(result.Kept, backup.BackupFile)
		}
	}

	return result, nil
}

// deleteBackup removes a backup file and all associated files
func deleteBackup(backupFile string) error {
	// Delete main backup file
	if err := os.Remove(backupFile); err != nil && !os.IsNotExist(err) {
		return fmt.Errorf("failed to delete backup file: %w", err)
	}

	// Delete metadata file
	metaFile := backupFile + ".meta.json"
	if err := os.Remove(metaFile); err != nil && !os.IsNotExist(err) {
		return fmt.Errorf("failed to delete metadata file: %w", err)
	}

	// Delete legacy .sha256 file if it exists (don't fail if missing; the new format has none)
	sha256File := backupFile + ".sha256"
	_ = os.Remove(sha256File)

	// Delete legacy .info file if it exists (don't fail if missing; the new format has none)
	infoFile := backupFile + ".info"
	_ = os.Remove(infoFile)

	return nil
}

// GetOldestBackups returns the N oldest backups in a directory
func GetOldestBackups(backupDir string, count int) ([]*metadata.BackupMetadata, error) {
	backups, err := metadata.ListBackups(backupDir)
	if err != nil {
		return nil, err
	}

	// Sort by timestamp (oldest first)
	sort.Slice(backups, func(i, j int) bool {
		return backups[i].Timestamp.Before(backups[j].Timestamp)
	})

	if count > len(backups) {
		count = len(backups)
	}

	return backups[:count], nil
}

// GetNewestBackups returns the N newest backups in a directory
func GetNewestBackups(backupDir string, count int) ([]*metadata.BackupMetadata, error) {
	backups, err := metadata.ListBackups(backupDir)
	if err != nil {
		return nil, err
	}

	// Sort by timestamp (newest first)
	sort.Slice(backups, func(i, j int) bool {
		return backups[i].Timestamp.After(backups[j].Timestamp)
	})

	if count > len(backups) {
		count = len(backups)
	}

	return backups[:count], nil
}

// CleanupByPattern removes backups matching a specific pattern
func CleanupByPattern(backupDir, pattern string, policy Policy) (*CleanupResult, error) {
	result := &CleanupResult{
		Deleted: make([]string, 0),
		Kept:    make([]string, 0),
		Errors:  make([]error, 0),
	}

	// Find matching backup files
	searchPattern := filepath.Join(backupDir, pattern)
	matches, err := filepath.Glob(searchPattern)
	if err != nil {
		return nil, fmt.Errorf("failed to match pattern: %w", err)
	}

	// Filter to only .dump or .sql files
	var backupFiles []string
	for _, match := range matches {
		ext := filepath.Ext(match)
		if ext == ".dump" || ext == ".sql" {
			backupFiles = append(backupFiles, match)
		}
	}

	// Load metadata for matched backups
	var backups []*metadata.BackupMetadata
	for _, file := range backupFiles {
		meta, err := metadata.Load(file)
		if err != nil {
			// Skip files without metadata
			continue
		}
		backups = append(backups, meta)
	}

	result.TotalBackups = len(backups)

	// Sort by timestamp
	sort.Slice(backups, func(i, j int) bool {
		return backups[i].Timestamp.Before(backups[j].Timestamp)
	})

	cutoffDate := time.Now().AddDate(0, 0, -policy.RetentionDays)

	// Apply policy
	for i, backup := range backups {
		backupsRemaining := len(backups) - i
		if backupsRemaining <= policy.MinBackups {
			result.Kept = append(result.Kept, backup.BackupFile)
			continue
		}

		if backup.Timestamp.Before(cutoffDate) {
			result.EligibleForDeletion++

			if policy.DryRun {
				result.Deleted = append(result.Deleted, backup.BackupFile)
			} else {
				if err := deleteBackup(backup.BackupFile); err != nil {
					result.Errors = append(result.Errors, err)
				} else {
					result.Deleted = append(result.Deleted, backup.BackupFile)
					result.SpaceFreed += backup.SizeBytes
				}
			}
		} else {
			result.Kept = append(result.Kept, backup.BackupFile)
		}
	}

	return result, nil
}
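A dry-run sketch of the policy above: backups older than the retention window are reported as deletable, but the most recent `MinBackups` are always kept and nothing is removed while `DryRun` is set. The backup directory is a placeholder; this is an illustration, not part of the changeset.

```go
package main

import (
	"fmt"
	"log"

	"dbbackup/internal/retention"
)

func main() {
	policy := retention.Policy{
		RetentionDays: 30, // delete backups older than 30 days...
		MinBackups:    5,  // ...but always keep the 5 most recent
		DryRun:        true,
	}

	res, err := retention.ApplyPolicy("/var/backups/db", policy) // placeholder directory
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("%d total, %d eligible for deletion\n", res.TotalBackups, res.EligibleForDeletion)
	for _, f := range res.Deleted {
		fmt.Println("would delete:", f) // dry-run: nothing is actually removed
	}
}
```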
114 internal/verification/verification.go Normal file
@@ -0,0 +1,114 @@
package verification

import (
	"fmt"
	"os"

	"dbbackup/internal/metadata"
)

// Result represents the outcome of a verification operation
type Result struct {
	Valid            bool
	BackupFile       string
	ExpectedSHA256   string
	CalculatedSHA256 string
	SizeMatch        bool
	FileExists       bool
	MetadataExists   bool
	Error            error
}

// Verify checks the integrity of a backup file
func Verify(backupFile string) (*Result, error) {
	result := &Result{
		BackupFile: backupFile,
	}

	// Check if backup file exists
	info, err := os.Stat(backupFile)
	if err != nil {
		result.FileExists = false
		result.Error = fmt.Errorf("backup file does not exist: %w", err)
		return result, nil
	}
	result.FileExists = true

	// Load metadata
	meta, err := metadata.Load(backupFile)
	if err != nil {
		result.MetadataExists = false
		result.Error = fmt.Errorf("failed to load metadata: %w", err)
		return result, nil
	}
	result.MetadataExists = true
	result.ExpectedSHA256 = meta.SHA256

	// Check size match
	if info.Size() != meta.SizeBytes {
		result.SizeMatch = false
		result.Error = fmt.Errorf("size mismatch: expected %d bytes, got %d bytes",
			meta.SizeBytes, info.Size())
		return result, nil
	}
	result.SizeMatch = true

	// Calculate actual SHA-256
	actualSHA256, err := metadata.CalculateSHA256(backupFile)
	if err != nil {
		result.Error = fmt.Errorf("failed to calculate checksum: %w", err)
		return result, nil
	}
	result.CalculatedSHA256 = actualSHA256

	// Compare checksums
	if actualSHA256 != meta.SHA256 {
		result.Valid = false
		result.Error = fmt.Errorf("checksum mismatch: expected %s, got %s",
			meta.SHA256, actualSHA256)
		return result, nil
	}

	// All checks passed
	result.Valid = true
	return result, nil
}

// VerifyMultiple verifies multiple backup files
func VerifyMultiple(backupFiles []string) ([]*Result, error) {
	var results []*Result

	for _, file := range backupFiles {
		result, err := Verify(file)
		if err != nil {
			return nil, fmt.Errorf("verification error for %s: %w", file, err)
		}
		results = append(results, result)
	}

	return results, nil
}

// QuickCheck performs a fast check without full checksum calculation.
// It only validates metadata existence and file size.
func QuickCheck(backupFile string) error {
	// Check file exists
	info, err := os.Stat(backupFile)
	if err != nil {
		return fmt.Errorf("backup file does not exist: %w", err)
	}

	// Load metadata
	meta, err := metadata.Load(backupFile)
	if err != nil {
		return fmt.Errorf("metadata missing or invalid: %w", err)
	}

	// Check size
	if info.Size() != meta.SizeBytes {
		return fmt.Errorf("size mismatch: expected %d bytes, got %d bytes",
			meta.SizeBytes, info.Size())
	}

	return nil
}
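A brief caller sketch for the verification package. Note that `Verify` reports most problems through the `Result` rather than the error return, so callers should inspect `Valid` and `Error`. The path is a placeholder; this is illustrative only.

```go
package main

import (
	"fmt"
	"log"

	"dbbackup/internal/verification"
)

func main() {
	res, err := verification.Verify("/tmp/dbbackup-test/mydb_20260115_120000.dump") // placeholder path
	if err != nil {
		log.Fatal(err)
	}

	if res.Valid {
		fmt.Println("backup OK, sha256:", res.CalculatedSHA256)
	} else {
		// Missing file, missing metadata, size mismatch, and checksum mismatch
		// all surface here via res.Error.
		fmt.Println("backup invalid:", res.Error)
	}
}
```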
253
scripts/test_cloud_storage.sh
Executable file
253
scripts/test_cloud_storage.sh
Executable file
@@ -0,0 +1,253 @@
#!/bin/bash
set -e

# Color output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

echo -e "${BLUE}========================================${NC}"
echo -e "${BLUE}dbbackup Cloud Storage Integration Test${NC}"
echo -e "${BLUE}========================================${NC}"
echo

# Configuration
MINIO_ENDPOINT="http://localhost:9000"
MINIO_ACCESS_KEY="minioadmin"
MINIO_SECRET_KEY="minioadmin123"
MINIO_BUCKET="test-backups"
POSTGRES_HOST="localhost"
POSTGRES_PORT="5433"
POSTGRES_USER="testuser"
POSTGRES_PASS="testpass123"
POSTGRES_DB="cloudtest"

# Export credentials
export AWS_ACCESS_KEY_ID="$MINIO_ACCESS_KEY"
export AWS_SECRET_ACCESS_KEY="$MINIO_SECRET_KEY"
export AWS_ENDPOINT_URL="$MINIO_ENDPOINT"
export AWS_REGION="us-east-1"

# Check if dbbackup binary exists
if [ ! -f "./dbbackup" ]; then
    echo -e "${YELLOW}Building dbbackup...${NC}"
    go build -o dbbackup .
    echo -e "${GREEN}✓ Build successful${NC}"
fi

# Function to wait for service
wait_for_service() {
    local service=$1
    local host=$2
    local port=$3
    local max_attempts=30
    local attempt=1

    echo -e "${YELLOW}Waiting for $service to be ready...${NC}"

    while ! nc -z $host $port 2>/dev/null; do
        if [ $attempt -ge $max_attempts ]; then
            echo -e "${RED}✗ $service did not start in time${NC}"
            return 1
        fi
        echo -n "."
        sleep 1
        attempt=$((attempt + 1))
    done

    echo -e "${GREEN}✓ $service is ready${NC}"
}

# Step 1: Start services
echo -e "${BLUE}Step 1: Starting services with Docker Compose${NC}"
docker-compose -f docker-compose.minio.yml up -d

# Wait for services
wait_for_service "MinIO" "localhost" "9000"
wait_for_service "PostgreSQL" "localhost" "5433"

sleep 5

# Step 2: Create test database
echo -e "\n${BLUE}Step 2: Creating test database${NC}"
PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -c "DROP DATABASE IF EXISTS $POSTGRES_DB;" postgres 2>/dev/null || true
PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -c "CREATE DATABASE $POSTGRES_DB;" postgres
PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -d $POSTGRES_DB << EOF
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100),
    email VARCHAR(100),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

INSERT INTO users (name, email) VALUES
    ('Alice', 'alice@example.com'),
    ('Bob', 'bob@example.com'),
    ('Charlie', 'charlie@example.com');

CREATE TABLE products (
    id SERIAL PRIMARY KEY,
    name VARCHAR(200),
    price DECIMAL(10,2)
);

INSERT INTO products (name, price) VALUES
    ('Widget', 19.99),
    ('Gadget', 29.99),
    ('Doohickey', 39.99);
EOF

echo -e "${GREEN}✓ Test database created with sample data${NC}"

# Step 3: Test local backup
echo -e "\n${BLUE}Step 3: Creating local backup${NC}"
./dbbackup backup single $POSTGRES_DB \
    --db-type postgres \
    --host $POSTGRES_HOST \
    --port $POSTGRES_PORT \
    --user $POSTGRES_USER \
    --password $POSTGRES_PASS \
    --output-dir /tmp/dbbackup-test

LOCAL_BACKUP=$(ls -t /tmp/dbbackup-test/${POSTGRES_DB}_*.dump 2>/dev/null | head -1)
if [ -z "$LOCAL_BACKUP" ]; then
    echo -e "${RED}✗ Local backup failed${NC}"
    exit 1
fi
echo -e "${GREEN}✓ Local backup created: $LOCAL_BACKUP${NC}"

# Step 4: Test cloud upload
echo -e "\n${BLUE}Step 4: Uploading backup to MinIO (S3)${NC}"
./dbbackup cloud upload "$LOCAL_BACKUP" \
    --cloud-provider minio \
    --cloud-bucket $MINIO_BUCKET \
    --cloud-endpoint $MINIO_ENDPOINT

echo -e "${GREEN}✓ Upload successful${NC}"

# Step 5: Test cloud list
echo -e "\n${BLUE}Step 5: Listing cloud backups${NC}"
./dbbackup cloud list \
    --cloud-provider minio \
    --cloud-bucket $MINIO_BUCKET \
    --cloud-endpoint $MINIO_ENDPOINT \
    --verbose

# Step 6: Test backup with cloud URI
echo -e "\n${BLUE}Step 6: Testing backup with cloud URI${NC}"
./dbbackup backup single $POSTGRES_DB \
    --db-type postgres \
    --host $POSTGRES_HOST \
    --port $POSTGRES_PORT \
    --user $POSTGRES_USER \
    --password $POSTGRES_PASS \
    --output-dir /tmp/dbbackup-test \
    --cloud minio://$MINIO_BUCKET/uri-test/

echo -e "${GREEN}✓ Backup with cloud URI successful${NC}"

# Step 7: Drop database for restore test
echo -e "\n${BLUE}Step 7: Dropping database for restore test${NC}"
PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -c "DROP DATABASE $POSTGRES_DB;" postgres

# Step 8: Test restore from cloud URI
echo -e "\n${BLUE}Step 8: Restoring from cloud URI${NC}"
CLOUD_URI="minio://$MINIO_BUCKET/$(basename $LOCAL_BACKUP)"
./dbbackup restore single "$CLOUD_URI" \
    --target $POSTGRES_DB \
    --create \
    --confirm \
    --db-type postgres \
    --host $POSTGRES_HOST \
    --port $POSTGRES_PORT \
    --user $POSTGRES_USER \
    --password $POSTGRES_PASS

echo -e "${GREEN}✓ Restore from cloud successful${NC}"

# Step 9: Verify data
echo -e "\n${BLUE}Step 9: Verifying restored data${NC}"
USER_COUNT=$(PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -d $POSTGRES_DB -t -c "SELECT COUNT(*) FROM users;")
PRODUCT_COUNT=$(PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -d $POSTGRES_DB -t -c "SELECT COUNT(*) FROM products;")

if [ "$USER_COUNT" -eq 3 ] && [ "$PRODUCT_COUNT" -eq 3 ]; then
    echo -e "${GREEN}✓ Data verification successful (users: $USER_COUNT, products: $PRODUCT_COUNT)${NC}"
else
    echo -e "${RED}✗ Data verification failed (users: $USER_COUNT, products: $PRODUCT_COUNT)${NC}"
    exit 1
fi

# Step 10: Test verify command
echo -e "\n${BLUE}Step 10: Verifying cloud backup integrity${NC}"
./dbbackup verify-backup "$CLOUD_URI"

echo -e "${GREEN}✓ Backup verification successful${NC}"

# Step 11: Test cloud cleanup
echo -e "\n${BLUE}Step 11: Testing cloud cleanup (dry-run)${NC}"
./dbbackup cleanup "minio://$MINIO_BUCKET/" \
    --retention-days 0 \
    --min-backups 1 \
    --dry-run

# Step 12: Create multiple backups for cleanup test
echo -e "\n${BLUE}Step 12: Creating multiple backups for cleanup test${NC}"
for i in {1..5}; do
    echo "Creating backup $i/5..."
    ./dbbackup backup single $POSTGRES_DB \
        --db-type postgres \
        --host $POSTGRES_HOST \
        --port $POSTGRES_PORT \
        --user $POSTGRES_USER \
        --password $POSTGRES_PASS \
        --output-dir /tmp/dbbackup-test \
        --cloud minio://$MINIO_BUCKET/cleanup-test/
    sleep 1
done

echo -e "${GREEN}✓ Multiple backups created${NC}"

# Step 13: Test actual cleanup
echo -e "\n${BLUE}Step 13: Testing cloud cleanup (actual)${NC}"
./dbbackup cleanup "minio://$MINIO_BUCKET/cleanup-test/" \
    --retention-days 0 \
    --min-backups 2

echo -e "${GREEN}✓ Cloud cleanup successful${NC}"

# Step 14: Test large file upload (multipart)
echo -e "\n${BLUE}Step 14: Testing large file upload (>100MB for multipart)${NC}"
echo "Creating 150MB test file..."
dd if=/dev/zero of=/tmp/large-test-file.bin bs=1M count=150 2>/dev/null

echo "Uploading large file..."
./dbbackup cloud upload /tmp/large-test-file.bin \
    --cloud-provider minio \
    --cloud-bucket $MINIO_BUCKET \
    --cloud-endpoint $MINIO_ENDPOINT \
    --verbose

echo -e "${GREEN}✓ Large file multipart upload successful${NC}"

# Cleanup
echo -e "\n${BLUE}Cleanup${NC}"
rm -f /tmp/large-test-file.bin
rm -rf /tmp/dbbackup-test

echo -e "\n${GREEN}========================================${NC}"
echo -e "${GREEN}✓ ALL TESTS PASSED!${NC}"
echo -e "${GREEN}========================================${NC}"
echo
echo -e "${YELLOW}To stop services:${NC}"
echo -e "  docker-compose -f docker-compose.minio.yml down"
echo
echo -e "${YELLOW}To view MinIO console:${NC}"
echo -e "  http://localhost:9001 (minioadmin / minioadmin123)"
echo
echo -e "${YELLOW}To keep services running for manual testing:${NC}"
echo -e "  export AWS_ACCESS_KEY_ID=minioadmin"
echo -e "  export AWS_SECRET_ACCESS_KEY=minioadmin123"
echo -e "  export AWS_ENDPOINT_URL=http://localhost:9000"
echo -e "  ./dbbackup cloud list --cloud-provider minio --cloud-bucket test-backups"
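To run the suite end to end, something like the following should suffice; this is a sketch that assumes Docker Compose, `psql`, `nc`, and a Go toolchain are available, which is what the script itself relies on.

```bash
# Run from the repository root; the script builds ./dbbackup if it is missing.
chmod +x scripts/test_cloud_storage.sh   # only if the executable bit was not preserved
./scripts/test_cloud_storage.sh

# Stop the MinIO and PostgreSQL containers afterwards:
docker-compose -f docker-compose.minio.yml down
```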