Compare commits

2 Commits: v2.0-sprin ... v2.0-sprin

| Author | SHA1 | Date |
|---|---|---|
|  | 8929004abc |  |
|  | bdf9af0650 |  |

CLOUD.md (new file, 758 lines)
@@ -0,0 +1,758 @@
# Cloud Storage Guide for dbbackup

## Overview

dbbackup v2.0 includes comprehensive cloud storage integration, allowing you to back up directly to S3-compatible storage providers and restore from cloud URIs.

**Supported Providers:**
- AWS S3
- MinIO (self-hosted, S3-compatible)
- Backblaze B2
- Google Cloud Storage (via S3 compatibility)
- Any S3-compatible storage

**Key Features:**
- ✅ Direct backup to cloud with the `--cloud` URI flag
- ✅ Restore from cloud URIs
- ✅ Verify cloud backup integrity
- ✅ Apply retention policies to cloud storage
- ✅ Multipart upload for large files (>100MB)
- ✅ Progress tracking for uploads/downloads
- ✅ Automatic metadata synchronization
- ✅ Streaming transfers (memory efficient)

---

## Quick Start

### 1. Set Up Credentials

```bash
# For AWS S3
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_REGION="us-east-1"

# For MinIO
export AWS_ACCESS_KEY_ID="minioadmin"
export AWS_SECRET_ACCESS_KEY="minioadmin123"
export AWS_ENDPOINT_URL="http://localhost:9000"

# For Backblaze B2
export AWS_ACCESS_KEY_ID="your-b2-key-id"
export AWS_SECRET_ACCESS_KEY="your-b2-application-key"
export AWS_ENDPOINT_URL="https://s3.us-west-002.backblazeb2.com"
```

### 2. Backup with a Cloud URI

```bash
# Backup to S3
dbbackup backup single mydb --cloud s3://my-bucket/backups/

# Backup to MinIO
dbbackup backup single mydb --cloud minio://my-bucket/backups/

# Backup to Backblaze B2
dbbackup backup single mydb --cloud b2://my-bucket/backups/
```

### 3. Restore from Cloud

```bash
# Restore from a cloud URI
dbbackup restore single s3://my-bucket/backups/mydb_20260115_120000.dump --confirm

# Restore to a different database
dbbackup restore single s3://my-bucket/backups/mydb.dump \
  --target mydb_restored \
  --confirm
```

---

## URI Syntax

Cloud URIs follow this format:

```
<provider>://<bucket>/<path>/<filename>
```

**Supported Providers:**
- `s3://` - AWS S3 or S3-compatible storage
- `minio://` - MinIO (auto-enables path-style addressing)
- `b2://` - Backblaze B2
- `gs://` or `gcs://` - Google Cloud Storage
- `azure://` - Azure Blob Storage (coming soon)

**Examples:**
```bash
s3://production-backups/databases/postgres/
minio://local-backups/dev/mydb/
b2://offsite-backups/daily/
gs://gcp-backups/prod/
```
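
These URIs are parsed by the `cloud.ParseCloudURI` helper added in this change (see `internal/cloud/uri.go` later in this diff). A minimal sketch of how a URI breaks into provider, bucket, and path:

```go
package main

import (
	"fmt"

	"dbbackup/internal/cloud"
)

func main() {
	// Split a cloud URI into its components.
	uri, err := cloud.ParseCloudURI("s3://production-backups/databases/postgres/mydb.dump")
	if err != nil {
		panic(err)
	}
	fmt.Println(uri.Provider)   // s3
	fmt.Println(uri.Bucket)     // production-backups
	fmt.Println(uri.Path)       // databases/postgres/mydb.dump
	fmt.Println(uri.BaseName()) // mydb.dump
}
```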

---

## Configuration Methods

### Method 1: Cloud URIs (Recommended)

```bash
dbbackup backup single mydb --cloud s3://my-bucket/backups/
```

### Method 2: Individual Flags

```bash
dbbackup backup single mydb \
  --cloud-auto-upload \
  --cloud-provider s3 \
  --cloud-bucket my-bucket \
  --cloud-prefix backups/
```

### Method 3: Environment Variables

```bash
export CLOUD_ENABLED=true
export CLOUD_AUTO_UPLOAD=true
export CLOUD_PROVIDER=s3
export CLOUD_BUCKET=my-bucket
export CLOUD_PREFIX=backups/
export CLOUD_REGION=us-east-1

dbbackup backup single mydb
```

### Method 4: Config File

```toml
# ~/.dbbackup.conf
[cloud]
enabled = true
auto_upload = true
provider = "s3"
bucket = "my-bucket"
prefix = "backups/"
region = "us-east-1"
```

---

## Commands

### Cloud Upload

Upload existing backup files to cloud storage:

```bash
# Upload a single file
dbbackup cloud upload /backups/mydb.dump \
  --cloud-provider s3 \
  --cloud-bucket my-bucket

# Upload to MinIO with a custom endpoint
dbbackup cloud upload /backups/mydb.dump \
  --cloud-provider minio \
  --cloud-bucket local-backups \
  --cloud-endpoint http://localhost:9000

# Upload multiple files
dbbackup cloud upload /backups/*.dump \
  --cloud-provider s3 \
  --cloud-bucket my-bucket \
  --verbose
```

### Cloud Download

Download backups from cloud storage:

```bash
# Download to the current directory
dbbackup cloud download mydb.dump . \
  --cloud-provider s3 \
  --cloud-bucket my-bucket

# Download to a specific directory
dbbackup cloud download backups/mydb.dump /restore/ \
  --cloud-provider s3 \
  --cloud-bucket my-bucket \
  --verbose
```

### Cloud List

List backups in cloud storage:

```bash
# List all backups
dbbackup cloud list \
  --cloud-provider s3 \
  --cloud-bucket my-bucket

# List with a prefix filter
dbbackup cloud list \
  --cloud-provider s3 \
  --cloud-bucket my-bucket \
  --cloud-prefix postgres/

# Verbose output with details
dbbackup cloud list \
  --cloud-provider s3 \
  --cloud-bucket my-bucket \
  --verbose
```

### Cloud Delete

Delete backups from cloud storage:

```bash
# Delete a specific backup (with confirmation prompt)
dbbackup cloud delete mydb_old.dump \
  --cloud-provider s3 \
  --cloud-bucket my-bucket

# Delete without confirmation
dbbackup cloud delete mydb_old.dump \
  --cloud-provider s3 \
  --cloud-bucket my-bucket \
  --confirm
```

### Backup with Auto-Upload

```bash
# Backup and automatically upload
dbbackup backup single mydb --cloud s3://my-bucket/backups/

# With individual flags
dbbackup backup single mydb \
  --cloud-auto-upload \
  --cloud-provider s3 \
  --cloud-bucket my-bucket \
  --cloud-prefix backups/
```

### Restore from Cloud

```bash
# Restore from a cloud URI (auto-download)
dbbackup restore single s3://my-bucket/backups/mydb.dump --confirm

# Restore to a different database
dbbackup restore single s3://my-bucket/backups/mydb.dump \
  --target mydb_restored \
  --confirm

# Restore with database creation
dbbackup restore single s3://my-bucket/backups/mydb.dump \
  --create \
  --confirm
```

### Verify Cloud Backups

```bash
# Verify a single cloud backup
dbbackup verify-backup s3://my-bucket/backups/mydb.dump

# Quick verification (size check only)
dbbackup verify-backup s3://my-bucket/backups/mydb.dump --quick

# Verbose output
dbbackup verify-backup s3://my-bucket/backups/mydb.dump --verbose
```

### Cloud Cleanup

Apply retention policies to cloud storage:

```bash
# Clean up old backups (dry run)
dbbackup cleanup s3://my-bucket/backups/ \
  --retention-days 30 \
  --min-backups 5 \
  --dry-run

# Actual cleanup
dbbackup cleanup s3://my-bucket/backups/ \
  --retention-days 30 \
  --min-backups 5

# Pattern-based cleanup
dbbackup cleanup s3://my-bucket/backups/ \
  --retention-days 7 \
  --min-backups 3 \
  --pattern "mydb_*.dump"
```
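
The retention pass (implemented in `cmd/cleanup.go` later in this diff) first marks everything older than the cutoff for deletion, then moves backups back into the keep set until the minimum count is satisfied. A condensed sketch, assuming the listing is sorted oldest first as the backend returns it:

```go
package main

import (
	"fmt"
	"time"
)

// backupInfo mirrors the fields the cleanup code relies on.
type backupInfo struct {
	Name         string
	LastModified time.Time
}

// selectForDeletion returns backups older than retentionDays while always
// keeping at least minBackups in total.
func selectForDeletion(backups []backupInfo, retentionDays, minBackups int) []backupInfo {
	cutoff := time.Now().AddDate(0, 0, -retentionDays)

	var toDelete, toKeep []backupInfo
	for _, b := range backups {
		if b.LastModified.Before(cutoff) {
			toDelete = append(toDelete, b)
		} else {
			toKeep = append(toKeep, b)
		}
	}

	// Move the newest of the "too old" backups back into the keep set
	// until the minimum is satisfied.
	for len(toKeep) < minBackups && len(toDelete) > 0 {
		last := len(toDelete) - 1
		toKeep = append(toKeep, toDelete[last])
		toDelete = toDelete[:last]
	}
	return toDelete
}

func main() {
	now := time.Now()
	backups := []backupInfo{
		{"mydb_old.dump", now.AddDate(0, 0, -90)},
		{"mydb_recent.dump", now.AddDate(0, 0, -2)},
	}
	for _, b := range selectForDeletion(backups, 30, 1) {
		fmt.Println("would delete:", b.Name)
	}
}
```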

---

## Provider-Specific Setup

### AWS S3

**Prerequisites:**
- AWS account
- S3 bucket created
- IAM user with S3 permissions

**IAM Policy:**
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket/*",
        "arn:aws:s3:::my-bucket"
      ]
    }
  ]
}
```

**Configuration:**
```bash
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
export AWS_REGION="us-east-1"

dbbackup backup single mydb --cloud s3://my-bucket/backups/
```

### MinIO (Self-Hosted)

**Setup with Docker:**
```bash
docker run -d \
  -p 9000:9000 \
  -p 9001:9001 \
  -e "MINIO_ROOT_USER=minioadmin" \
  -e "MINIO_ROOT_PASSWORD=minioadmin123" \
  --name minio \
  minio/minio server /data --console-address ":9001"

# Create a bucket
docker exec minio mc alias set local http://localhost:9000 minioadmin minioadmin123
docker exec minio mc mb local/backups
```

**Configuration:**
```bash
export AWS_ACCESS_KEY_ID="minioadmin"
export AWS_SECRET_ACCESS_KEY="minioadmin123"
export AWS_ENDPOINT_URL="http://localhost:9000"

dbbackup backup single mydb --cloud minio://backups/db/
```

**Or use docker-compose:**
```bash
docker-compose -f docker-compose.minio.yml up -d
```

### Backblaze B2

**Prerequisites:**
- Backblaze account
- B2 bucket created
- Application key generated

**Configuration:**
```bash
export AWS_ACCESS_KEY_ID="<your-b2-key-id>"
export AWS_SECRET_ACCESS_KEY="<your-b2-application-key>"
export AWS_ENDPOINT_URL="https://s3.us-west-002.backblazeb2.com"
export AWS_REGION="us-west-002"

dbbackup backup single mydb --cloud b2://my-bucket/backups/
```

### Google Cloud Storage

**Prerequisites:**
- GCP account
- GCS bucket with S3 compatibility enabled
- HMAC keys generated

**Enable S3 Compatibility:**
1. Go to Cloud Storage > Settings > Interoperability
2. Create HMAC keys

**Configuration:**
```bash
export AWS_ACCESS_KEY_ID="<your-hmac-access-id>"
export AWS_SECRET_ACCESS_KEY="<your-hmac-secret>"
export AWS_ENDPOINT_URL="https://storage.googleapis.com"

dbbackup backup single mydb --cloud gs://my-bucket/backups/
```

---

## Features

### Multipart Upload

Files larger than 100MB automatically use multipart upload, which provides:
- Faster transfers with parallel parts
- Resume capability on failure
- Better reliability for large files

**Configuration:**
- Part size: 10MB
- Concurrency: 10 parallel parts
- Selected automatically based on file size
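
The upload path in this change uses the AWS SDK's transfer manager for these large files. A condensed sketch of the settings, mirroring the S3 backend code further down in this diff (the function and parameter names here are illustrative):

```go
package cloudupload

import (
	"context"
	"fmt"
	"os"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/feature/s3/manager"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// uploadLarge streams a local file to S3 using the SDK's transfer manager
// with the same part size and concurrency as the backend in this change.
func uploadLarge(ctx context.Context, client *s3.Client, bucket, key, localPath string) error {
	file, err := os.Open(localPath)
	if err != nil {
		return err
	}
	defer file.Close()

	uploader := manager.NewUploader(client, func(u *manager.Uploader) {
		u.PartSize = 10 * 1024 * 1024 // 10 MB per part
		u.Concurrency = 10            // up to 10 parts in flight
	})

	if _, err := uploader.Upload(ctx, &s3.PutObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
		Body:   file,
	}); err != nil {
		return fmt.Errorf("multipart upload failed: %w", err)
	}
	return nil
}
```

Below the 100MB threshold, the backend falls back to a single `PutObject` call instead.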

### Progress Tracking

Real-time progress for uploads and downloads:

```
Uploading backup to cloud...
Progress: 10%
Progress: 20%
Progress: 30%
...
Upload completed: /backups/mydb.dump (1.2 GB)
```
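
Under the hood, transfers report progress through a small callback that receives the bytes transferred and the total size, the same shape the downloader added in this change uses (the backend calls this type `ProgressCallback`; the names below are illustrative):

```go
package progress

import "fmt"

// Callback matches the shape used for progress reporting: bytes
// transferred so far and the total size in bytes.
type Callback func(transferred, total int64)

// NewTenPercentLogger returns a callback that prints a line every 10%.
func NewTenPercentLogger() Callback {
	lastPercent := -1
	return func(transferred, total int64) {
		if total <= 0 {
			return
		}
		percent := int(float64(transferred) / float64(total) * 100)
		if percent != lastPercent && percent%10 == 0 {
			fmt.Printf("Progress: %d%%\n", percent)
			lastPercent = percent
		}
	}
}
```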

### Metadata Synchronization

A `.meta.json` file is automatically uploaded alongside each backup, containing:
- SHA-256 checksum
- Database name and type
- Backup timestamp
- File size
- Compression info
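
For illustration only, a metadata file might look like the following; the exact JSON key names are defined by the `metadata` package, so treat these keys as assumptions:

```json
{
  "database": "mydb",
  "database_type": "postgres",
  "timestamp": "2026-01-15T12:00:00Z",
  "size_bytes": 1288490189,
  "sha256": "abc123...",
  "compression": "gzip"
}
```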

### Automatic Verification

Downloads from cloud include automatic checksum verification:

```
Downloading backup from cloud...
Download completed
Verifying checksum...
Checksum verified successfully: sha256=abc123...
```
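
The checksum is a SHA-256 over the downloaded file, compared against the value stored in the backup metadata. A minimal sketch of that computation (the helper name is illustrative):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// fileSHA256 streams a file through SHA-256 and returns the hex digest,
// which is what gets compared against the checksum in the metadata.
func fileSHA256(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	sum, err := fileSHA256("/backups/mydb.dump")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("sha256=" + sum)
}
```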

---

## Testing

### Local Testing with MinIO

**1. Start MinIO:**
```bash
docker-compose -f docker-compose.minio.yml up -d
```

**2. Run Integration Tests:**
```bash
./scripts/test_cloud_storage.sh
```

**3. Manual Testing:**
```bash
# Set credentials
export AWS_ACCESS_KEY_ID=minioadmin
export AWS_SECRET_ACCESS_KEY=minioadmin123
export AWS_ENDPOINT_URL=http://localhost:9000

# Test backup
dbbackup backup single mydb --cloud minio://test-backups/test/

# Test restore
dbbackup restore single minio://test-backups/test/mydb.dump --confirm

# Test verify
dbbackup verify-backup minio://test-backups/test/mydb.dump

# Test cleanup
dbbackup cleanup minio://test-backups/test/ --retention-days 7 --dry-run
```

**4. Access the MinIO Console:**
- URL: http://localhost:9001
- Username: `minioadmin`
- Password: `minioadmin123`

---

## Best Practices

### Security

1. **Never commit credentials:**
   ```bash
   # Use environment variables or config files
   export AWS_ACCESS_KEY_ID="..."
   ```

2. **Use IAM roles when possible:**
   ```bash
   # On EC2/ECS, credentials are picked up automatically
   dbbackup backup single mydb --cloud s3://bucket/
   ```

3. **Restrict bucket permissions:**
   - Minimum required: GetObject, PutObject, DeleteObject, ListBucket
   - Use bucket policies to limit access

4. **Enable encryption:**
   - S3: Server-side encryption is enabled by default
   - MinIO: Configure encryption at rest

### Performance

1. **Use multipart for large backups:**
   - Automatic for files >100MB
   - Configure concurrency based on bandwidth

2. **Choose nearby regions:**
   ```bash
   --cloud-region us-west-2  # Closest to your servers
   ```

3. **Use compression:**
   ```bash
   --compression gzip  # Reduces upload size
   ```

### Reliability

1. **Test restores regularly:**
   ```bash
   # Monthly restore test
   dbbackup restore single s3://bucket/latest.dump --target test_restore
   ```

2. **Verify backups:**
   ```bash
   # Daily verification
   dbbackup verify-backup s3://bucket/backups/*.dump
   ```

3. **Monitor retention:**
   ```bash
   # Weekly cleanup check
   dbbackup cleanup s3://bucket/ --retention-days 30 --dry-run
   ```

### Cost Optimization

1. **Use lifecycle policies:**
   - S3: Transition old backups to Glacier
   - Configure in the AWS Console or via a bucket lifecycle configuration (see the example policy after this list)

2. **Clean up old backups:**
   ```bash
   dbbackup cleanup s3://bucket/ --retention-days 30 --min-backups 10
   ```

3. **Choose an appropriate storage class:**
   - Standard: Frequent access
   - Infrequent Access: Monthly restores
   - Glacier: Long-term archive
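
As an illustration of item 1, a bucket lifecycle configuration along these lines moves old backups to Glacier and eventually expires them; the prefix and day counts are assumptions to adapt to your own retention needs:

```json
{
  "Rules": [
    {
      "ID": "archive-old-db-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "backups/" },
      "Transitions": [
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```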

---

## Troubleshooting

### Connection Issues

**Problem:** Cannot connect to S3/MinIO

```
Error: failed to create cloud backend: failed to load AWS config
```

**Solution:**
1. Check credentials:
   ```bash
   echo $AWS_ACCESS_KEY_ID
   echo $AWS_SECRET_ACCESS_KEY
   ```

2. Test connectivity:
   ```bash
   curl $AWS_ENDPOINT_URL
   ```

3. Verify the endpoint URL for MinIO/B2

### Permission Errors

**Problem:** Access denied

```
Error: failed to upload to S3: AccessDenied
```

**Solution:**
1. Check that the IAM policy includes the required permissions
2. Verify the bucket name is correct
3. Check that the bucket policy allows your IAM user

### Upload Failures

**Problem:** Large file upload fails

```
Error: multipart upload failed: connection timeout
```

**Solution:**
1. Check network stability
2. Retry; multipart uploads resume automatically
3. Increase the timeout in the config
4. Check that the firewall allows outbound HTTPS

### Verification Failures

**Problem:** Checksum mismatch

```
Error: checksum mismatch: expected abc123, got def456
```

**Solution:**
1. Re-download the backup
2. Check whether the file was corrupted during upload
3. Verify the original backup integrity locally
4. Re-upload if necessary

---

## Examples

### Full Backup Workflow

```bash
#!/bin/bash
# Daily backup to S3 with retention

# Backup all databases
for db in db1 db2 db3; do
  dbbackup backup single $db \
    --cloud s3://production-backups/daily/$db/ \
    --compression gzip
done

# Cleanup old backups (keep 30 days, min 10 backups)
dbbackup cleanup s3://production-backups/daily/ \
  --retention-days 30 \
  --min-backups 10

# Verify today's backups
dbbackup verify-backup s3://production-backups/daily/*/$(date +%Y%m%d)*.dump
```

### Disaster Recovery

```bash
#!/bin/bash
# Restore from cloud backup

# List available backups
dbbackup cloud list \
  --cloud-provider s3 \
  --cloud-bucket disaster-recovery \
  --verbose

# Restore the latest backup
LATEST=$(dbbackup cloud list \
  --cloud-provider s3 \
  --cloud-bucket disaster-recovery | tail -1)

dbbackup restore single "s3://disaster-recovery/$LATEST" \
  --target restored_db \
  --create \
  --confirm
```

### Multi-Cloud Strategy

```bash
#!/bin/bash
# Backup to both AWS S3 and Backblaze B2

# Backup to S3
dbbackup backup single production_db \
  --cloud s3://aws-backups/prod/ \
  --output-dir /tmp/backups

# Also upload to B2
BACKUP_FILE=$(ls -t /tmp/backups/*.dump | head -1)
dbbackup cloud upload "$BACKUP_FILE" \
  --cloud-provider b2 \
  --cloud-bucket b2-offsite-backups \
  --cloud-endpoint https://s3.us-west-002.backblazeb2.com

# Verify both locations
dbbackup verify-backup s3://aws-backups/prod/$(basename $BACKUP_FILE)
dbbackup verify-backup b2://b2-offsite-backups/$(basename $BACKUP_FILE)
```

---

## FAQ

**Q: Can I use dbbackup with my existing S3 buckets?**
A: Yes! Just specify your bucket name and credentials.

**Q: Do I need to keep local backups?**
A: No, use the `--cloud` flag to upload directly without keeping local copies.

**Q: What happens if the upload fails?**
A: The backup still succeeds locally. The upload failure is logged but does not fail the backup.

**Q: Can I restore without downloading?**
A: No. Backups are downloaded to a temp directory, restored, and then cleaned up.
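
Concretely, the restore command added in this change does roughly the following when given a cloud URI (condensed from `cmd/restore` in this diff; `runLocalRestore` is a placeholder for the existing local restore path):

```go
func restoreFromCloud(ctx context.Context, uri, target string) error {
	// Download the archive to a temp directory, verifying its checksum.
	result, err := restore.DownloadFromCloudURI(ctx, uri, restore.DownloadOptions{
		VerifyChecksum: true,  // compare against the stored SHA-256
		KeepLocal:      false, // temp copy is removed afterwards
	})
	if err != nil {
		return fmt.Errorf("failed to download from cloud: %w", err)
	}
	defer result.Cleanup()

	// result.LocalPath points at the downloaded archive; the normal local
	// restore path takes over from here.
	return runLocalRestore(ctx, result.LocalPath, target) // placeholder
}
```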

**Q: How much does cloud storage cost?**
A: It varies by provider:
- AWS S3: ~$0.023/GB/month + transfer
- Backblaze B2: ~$0.005/GB/month + transfer
- MinIO: Self-hosted, hardware costs only

**Q: Can I use multiple cloud providers?**
A: Yes! Use different URIs or upload to multiple destinations.

**Q: Is multipart upload automatic?**
A: Yes, it is used automatically for files >100MB.

**Q: Can I use S3 Glacier?**
A: Yes, but restoring requires thawing the object first. Use lifecycle policies for automatic archival.

---

## Related Documentation

- [README.md](README.md) - Main documentation
- [ROADMAP.md](ROADMAP.md) - Feature roadmap
- [docker-compose.minio.yml](docker-compose.minio.yml) - MinIO test setup
- [scripts/test_cloud_storage.sh](scripts/test_cloud_storage.sh) - Integration tests

---

## Support

For issues or questions:
- GitHub Issues: [Create an issue](https://github.com/yourusername/dbbackup/issues)
- Documentation: Check README.md and the inline help
- Examples: See `scripts/test_cloud_storage.sh`
@@ -3,6 +3,7 @@ package cmd
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
"dbbackup/internal/cloud"
|
||||
"github.com/spf13/cobra"
|
||||
)
|
||||
|
||||
@@ -92,6 +93,7 @@ func init() {
|
||||
|
||||
// Cloud storage flags for all backup commands
|
||||
for _, cmd := range []*cobra.Command{clusterCmd, singleCmd, sampleCmd} {
|
||||
cmd.Flags().String("cloud", "", "Cloud storage URI (e.g., s3://bucket/path) - takes precedence over individual flags")
|
||||
cmd.Flags().Bool("cloud-auto-upload", false, "Automatically upload backup to cloud after completion")
|
||||
cmd.Flags().String("cloud-provider", "", "Cloud provider (s3, minio, b2)")
|
||||
cmd.Flags().String("cloud-bucket", "", "Cloud bucket name")
|
||||
@@ -109,32 +111,39 @@ func init() {
|
||||
}
|
||||
}
|
||||
|
||||
// Update cloud config from flags
|
||||
if c.Flags().Changed("cloud-auto-upload") {
|
||||
if autoUpload, _ := c.Flags().GetBool("cloud-auto-upload"); autoUpload {
|
||||
cfg.CloudEnabled = true
|
||||
cfg.CloudAutoUpload = true
|
||||
// Check if --cloud URI flag is provided (takes precedence)
|
||||
if c.Flags().Changed("cloud") {
|
||||
if err := parseCloudURIFlag(c); err != nil {
|
||||
return err
|
||||
}
|
||||
} else {
|
||||
// Update cloud config from individual flags
|
||||
if c.Flags().Changed("cloud-auto-upload") {
|
||||
if autoUpload, _ := c.Flags().GetBool("cloud-auto-upload"); autoUpload {
|
||||
cfg.CloudEnabled = true
|
||||
cfg.CloudAutoUpload = true
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if c.Flags().Changed("cloud-provider") {
|
||||
cfg.CloudProvider, _ = c.Flags().GetString("cloud-provider")
|
||||
}
|
||||
if c.Flags().Changed("cloud-provider") {
|
||||
cfg.CloudProvider, _ = c.Flags().GetString("cloud-provider")
|
||||
}
|
||||
|
||||
if c.Flags().Changed("cloud-bucket") {
|
||||
cfg.CloudBucket, _ = c.Flags().GetString("cloud-bucket")
|
||||
}
|
||||
if c.Flags().Changed("cloud-bucket") {
|
||||
cfg.CloudBucket, _ = c.Flags().GetString("cloud-bucket")
|
||||
}
|
||||
|
||||
if c.Flags().Changed("cloud-region") {
|
||||
cfg.CloudRegion, _ = c.Flags().GetString("cloud-region")
|
||||
}
|
||||
if c.Flags().Changed("cloud-region") {
|
||||
cfg.CloudRegion, _ = c.Flags().GetString("cloud-region")
|
||||
}
|
||||
|
||||
if c.Flags().Changed("cloud-endpoint") {
|
||||
cfg.CloudEndpoint, _ = c.Flags().GetString("cloud-endpoint")
|
||||
}
|
||||
if c.Flags().Changed("cloud-endpoint") {
|
||||
cfg.CloudEndpoint, _ = c.Flags().GetString("cloud-endpoint")
|
||||
}
|
||||
|
||||
if c.Flags().Changed("cloud-prefix") {
|
||||
cfg.CloudPrefix, _ = c.Flags().GetString("cloud-prefix")
|
||||
if c.Flags().Changed("cloud-prefix") {
|
||||
cfg.CloudPrefix, _ = c.Flags().GetString("cloud-prefix")
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
@@ -178,3 +187,39 @@ func init() {
|
||||
// Mark the strategy flags as mutually exclusive
|
||||
sampleCmd.MarkFlagsMutuallyExclusive("sample-ratio", "sample-percent", "sample-count")
|
||||
}
|
||||
|
||||
// parseCloudURIFlag parses the --cloud URI flag and updates config
|
||||
func parseCloudURIFlag(cmd *cobra.Command) error {
|
||||
cloudURI, _ := cmd.Flags().GetString("cloud")
|
||||
if cloudURI == "" {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Parse cloud URI
|
||||
uri, err := cloud.ParseCloudURI(cloudURI)
|
||||
if err != nil {
|
||||
return fmt.Errorf("invalid cloud URI: %w", err)
|
||||
}
|
||||
|
||||
// Enable cloud and auto-upload
|
||||
cfg.CloudEnabled = true
|
||||
cfg.CloudAutoUpload = true
|
||||
|
||||
// Update config from URI
|
||||
cfg.CloudProvider = uri.Provider
|
||||
cfg.CloudBucket = uri.Bucket
|
||||
|
||||
if uri.Region != "" {
|
||||
cfg.CloudRegion = uri.Region
|
||||
}
|
||||
|
||||
if uri.Endpoint != "" {
|
||||
cfg.CloudEndpoint = uri.Endpoint
|
||||
}
|
||||
|
||||
if uri.Path != "" {
|
||||
cfg.CloudPrefix = uri.Dir()
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
cmd/cleanup.go (184)
@@ -1,11 +1,14 @@
|
||||
package cmd
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"dbbackup/internal/cloud"
|
||||
"dbbackup/internal/metadata"
|
||||
"dbbackup/internal/retention"
|
||||
"github.com/spf13/cobra"
|
||||
@@ -53,7 +56,15 @@ func init() {
|
||||
}
|
||||
|
||||
func runCleanup(cmd *cobra.Command, args []string) error {
|
||||
backupDir := args[0]
|
||||
backupPath := args[0]
|
||||
|
||||
// Check if this is a cloud URI
|
||||
if isCloudURIPath(backupPath) {
|
||||
return runCloudCleanup(cmd.Context(), backupPath)
|
||||
}
|
||||
|
||||
// Local cleanup
|
||||
backupDir := backupPath
|
||||
|
||||
// Validate directory exists
|
||||
if !dirExists(backupDir) {
|
||||
@@ -150,3 +161,174 @@ func dirExists(path string) bool {
|
||||
}
|
||||
return info.IsDir()
|
||||
}
|
||||
|
||||
// isCloudURIPath checks if a path is a cloud URI
|
||||
func isCloudURIPath(s string) bool {
|
||||
return cloud.IsCloudURI(s)
|
||||
}
|
||||
|
||||
// runCloudCleanup applies retention policy to cloud storage
|
||||
func runCloudCleanup(ctx context.Context, uri string) error {
|
||||
// Parse cloud URI
|
||||
cloudURI, err := cloud.ParseCloudURI(uri)
|
||||
if err != nil {
|
||||
return fmt.Errorf("invalid cloud URI: %w", err)
|
||||
}
|
||||
|
||||
fmt.Printf("☁️ Cloud Cleanup Policy:\n")
|
||||
fmt.Printf(" URI: %s\n", uri)
|
||||
fmt.Printf(" Provider: %s\n", cloudURI.Provider)
|
||||
fmt.Printf(" Bucket: %s\n", cloudURI.Bucket)
|
||||
if cloudURI.Path != "" {
|
||||
fmt.Printf(" Prefix: %s\n", cloudURI.Path)
|
||||
}
|
||||
fmt.Printf(" Retention: %d days\n", retentionDays)
|
||||
fmt.Printf(" Min backups: %d\n", minBackups)
|
||||
if dryRun {
|
||||
fmt.Printf(" Mode: DRY RUN (no files will be deleted)\n")
|
||||
}
|
||||
fmt.Println()
|
||||
|
||||
// Create cloud backend
|
||||
cfg := cloudURI.ToConfig()
|
||||
backend, err := cloud.NewBackend(cfg)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to create cloud backend: %w", err)
|
||||
}
|
||||
|
||||
// List all backups
|
||||
backups, err := backend.List(ctx, cloudURI.Path)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to list cloud backups: %w", err)
|
||||
}
|
||||
|
||||
if len(backups) == 0 {
|
||||
fmt.Println("No backups found in cloud storage")
|
||||
return nil
|
||||
}
|
||||
|
||||
fmt.Printf("Found %d backup(s) in cloud storage\n\n", len(backups))
|
||||
|
||||
// Filter backups based on pattern if specified
|
||||
var filteredBackups []cloud.BackupInfo
|
||||
if cleanupPattern != "" {
|
||||
for _, backup := range backups {
|
||||
matched, _ := filepath.Match(cleanupPattern, backup.Name)
|
||||
if matched {
|
||||
filteredBackups = append(filteredBackups, backup)
|
||||
}
|
||||
}
|
||||
fmt.Printf("Pattern matched %d backup(s)\n\n", len(filteredBackups))
|
||||
} else {
|
||||
filteredBackups = backups
|
||||
}
|
||||
|
||||
// Sort by modification time (oldest first)
|
||||
// Already sorted by backend.List
|
||||
|
||||
// Calculate retention date
|
||||
cutoffDate := time.Now().AddDate(0, 0, -retentionDays)
|
||||
|
||||
// Determine which backups to delete
|
||||
var toDelete []cloud.BackupInfo
|
||||
var toKeep []cloud.BackupInfo
|
||||
|
||||
for _, backup := range filteredBackups {
|
||||
if backup.LastModified.Before(cutoffDate) {
|
||||
toDelete = append(toDelete, backup)
|
||||
} else {
|
||||
toKeep = append(toKeep, backup)
|
||||
}
|
||||
}
|
||||
|
||||
// Ensure we keep minimum backups
|
||||
totalBackups := len(filteredBackups)
|
||||
if totalBackups-len(toDelete) < minBackups {
|
||||
// Need to keep more backups
|
||||
keepCount := minBackups - len(toKeep)
|
||||
if keepCount > len(toDelete) {
|
||||
keepCount = len(toDelete)
|
||||
}
|
||||
|
||||
// Move oldest from toDelete to toKeep
|
||||
for i := len(toDelete) - 1; i >= len(toDelete)-keepCount && i >= 0; i-- {
|
||||
toKeep = append(toKeep, toDelete[i])
|
||||
toDelete = toDelete[:i]
|
||||
}
|
||||
}
|
||||
|
||||
// Display results
|
||||
fmt.Printf("📊 Results:\n")
|
||||
fmt.Printf(" Total backups: %d\n", totalBackups)
|
||||
fmt.Printf(" Eligible for deletion: %d\n", len(toDelete))
|
||||
fmt.Printf(" Will keep: %d\n", len(toKeep))
|
||||
fmt.Println()
|
||||
|
||||
if len(toDelete) > 0 {
|
||||
if dryRun {
|
||||
fmt.Printf("🔍 Would delete %d backup(s):\n", len(toDelete))
|
||||
} else {
|
||||
fmt.Printf("🗑️ Deleting %d backup(s):\n", len(toDelete))
|
||||
}
|
||||
|
||||
var totalSize int64
|
||||
var deletedCount int
|
||||
|
||||
for _, backup := range toDelete {
|
||||
fmt.Printf(" - %s (%s, %s old)\n",
|
||||
backup.Name,
|
||||
cloud.FormatSize(backup.Size),
|
||||
formatBackupAge(backup.LastModified))
|
||||
|
||||
totalSize += backup.Size
|
||||
|
||||
if !dryRun {
|
||||
if err := backend.Delete(ctx, backup.Key); err != nil {
|
||||
fmt.Printf(" ❌ Error: %v\n", err)
|
||||
} else {
|
||||
deletedCount++
|
||||
// Also try to delete metadata
|
||||
backend.Delete(ctx, backup.Key+".meta.json")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
fmt.Printf("\n💾 Space %s: %s\n",
|
||||
map[bool]string{true: "would be freed", false: "freed"}[dryRun],
|
||||
cloud.FormatSize(totalSize))
|
||||
|
||||
if !dryRun && deletedCount > 0 {
|
||||
fmt.Printf("✅ Successfully deleted %d backup(s)\n", deletedCount)
|
||||
}
|
||||
} else {
|
||||
fmt.Println("No backups eligible for deletion")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// formatBackupAge returns a human-readable age string from a time.Time
|
||||
func formatBackupAge(t time.Time) string {
|
||||
d := time.Since(t)
|
||||
days := int(d.Hours() / 24)
|
||||
|
||||
if days == 0 {
|
||||
return "today"
|
||||
} else if days == 1 {
|
||||
return "1 day"
|
||||
} else if days < 30 {
|
||||
return fmt.Sprintf("%d days", days)
|
||||
} else if days < 365 {
|
||||
months := days / 30
|
||||
if months == 1 {
|
||||
return "1 month"
|
||||
}
|
||||
return fmt.Sprintf("%d months", months)
|
||||
} else {
|
||||
years := days / 365
|
||||
if years == 1 {
|
||||
return "1 year"
|
||||
}
|
||||
return fmt.Sprintf("%d years", years)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -10,6 +10,7 @@ import (
|
||||
"syscall"
|
||||
"time"
|
||||
|
||||
"dbbackup/internal/cloud"
|
||||
"dbbackup/internal/database"
|
||||
"dbbackup/internal/restore"
|
||||
"dbbackup/internal/security"
|
||||
@@ -169,18 +170,48 @@ func init() {
|
||||
func runRestoreSingle(cmd *cobra.Command, args []string) error {
|
||||
archivePath := args[0]
|
||||
|
||||
// Convert to absolute path
|
||||
if !filepath.IsAbs(archivePath) {
|
||||
absPath, err := filepath.Abs(archivePath)
|
||||
if err != nil {
|
||||
return fmt.Errorf("invalid archive path: %w", err)
|
||||
}
|
||||
archivePath = absPath
|
||||
}
|
||||
// Check if this is a cloud URI
|
||||
var cleanupFunc func() error
|
||||
|
||||
// Check if file exists
|
||||
if _, err := os.Stat(archivePath); err != nil {
|
||||
return fmt.Errorf("archive not found: %s", archivePath)
|
||||
if cloud.IsCloudURI(archivePath) {
|
||||
log.Info("Detected cloud URI, downloading backup...", "uri", archivePath)
|
||||
|
||||
// Download from cloud
|
||||
result, err := restore.DownloadFromCloudURI(cmd.Context(), archivePath, restore.DownloadOptions{
|
||||
VerifyChecksum: true,
|
||||
KeepLocal: false, // Delete after restore
|
||||
})
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to download from cloud: %w", err)
|
||||
}
|
||||
|
||||
archivePath = result.LocalPath
|
||||
cleanupFunc = result.Cleanup
|
||||
|
||||
// Ensure cleanup happens on exit
|
||||
defer func() {
|
||||
if cleanupFunc != nil {
|
||||
if err := cleanupFunc(); err != nil {
|
||||
log.Warn("Failed to cleanup temp files", "error", err)
|
||||
}
|
||||
}
|
||||
}()
|
||||
|
||||
log.Info("Download completed", "local_path", archivePath)
|
||||
} else {
|
||||
// Convert to absolute path for local files
|
||||
if !filepath.IsAbs(archivePath) {
|
||||
absPath, err := filepath.Abs(archivePath)
|
||||
if err != nil {
|
||||
return fmt.Errorf("invalid archive path: %w", err)
|
||||
}
|
||||
archivePath = absPath
|
||||
}
|
||||
|
||||
// Check if file exists
|
||||
if _, err := os.Stat(archivePath); err != nil {
|
||||
return fmt.Errorf("archive not found: %s", archivePath)
|
||||
}
|
||||
}
|
||||
|
||||
// Detect format
|
||||
|
||||
@@ -1,13 +1,16 @@
|
||||
package cmd
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"dbbackup/internal/cloud"
|
||||
"dbbackup/internal/metadata"
|
||||
"dbbackup/internal/restore"
|
||||
"dbbackup/internal/verification"
|
||||
"github.com/spf13/cobra"
|
||||
)
|
||||
@@ -46,7 +49,21 @@ func init() {
|
||||
}
|
||||
|
||||
func runVerifyBackup(cmd *cobra.Command, args []string) error {
|
||||
// Expand glob patterns
|
||||
// Check if any argument is a cloud URI
|
||||
hasCloudURI := false
|
||||
for _, arg := range args {
|
||||
if isCloudURI(arg) {
|
||||
hasCloudURI = true
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
// If cloud URIs detected, handle separately
|
||||
if hasCloudURI {
|
||||
return runVerifyCloudBackup(cmd, args)
|
||||
}
|
||||
|
||||
// Expand glob patterns for local files
|
||||
var backupFiles []string
|
||||
for _, pattern := range args {
|
||||
matches, err := filepath.Glob(pattern)
|
||||
@@ -139,3 +156,80 @@ func runVerifyBackup(cmd *cobra.Command, args []string) error {
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// isCloudURI checks if a string is a cloud URI
|
||||
func isCloudURI(s string) bool {
|
||||
return cloud.IsCloudURI(s)
|
||||
}
|
||||
|
||||
// verifyCloudBackup downloads and verifies a backup from cloud storage
|
||||
func verifyCloudBackup(ctx context.Context, uri string, quick, verbose bool) (*restore.DownloadResult, error) {
|
||||
// Download from cloud with checksum verification
|
||||
result, err := restore.DownloadFromCloudURI(ctx, uri, restore.DownloadOptions{
|
||||
VerifyChecksum: !quick, // Skip checksum if quick mode
|
||||
KeepLocal: false,
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// If not quick mode, also run full verification
|
||||
if !quick {
|
||||
_, err := verification.Verify(result.LocalPath)
|
||||
if err != nil {
|
||||
result.Cleanup()
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
return result, nil
|
||||
}
|
||||
|
||||
// runVerifyCloudBackup verifies backups from cloud storage
|
||||
func runVerifyCloudBackup(cmd *cobra.Command, args []string) error {
|
||||
fmt.Printf("Verifying cloud backup(s)...\n\n")
|
||||
|
||||
successCount := 0
|
||||
failureCount := 0
|
||||
|
||||
for _, uri := range args {
|
||||
if !isCloudURI(uri) {
|
||||
fmt.Printf("⚠️ Skipping non-cloud URI: %s\n", uri)
|
||||
continue
|
||||
}
|
||||
|
||||
fmt.Printf("☁️ %s\n", uri)
|
||||
|
||||
// Download and verify
|
||||
result, err := verifyCloudBackup(cmd.Context(), uri, quickVerify, verboseVerify)
|
||||
if err != nil {
|
||||
fmt.Printf(" ❌ FAILED: %v\n\n", err)
|
||||
failureCount++
|
||||
continue
|
||||
}
|
||||
|
||||
// Cleanup temp file
|
||||
defer result.Cleanup()
|
||||
|
||||
fmt.Printf(" ✅ VALID\n")
|
||||
if verboseVerify && result.MetadataPath != "" {
|
||||
meta, _ := metadata.Load(result.MetadataPath)
|
||||
if meta != nil {
|
||||
fmt.Printf(" Size: %s\n", metadata.FormatSize(meta.SizeBytes))
|
||||
fmt.Printf(" SHA-256: %s\n", meta.SHA256)
|
||||
fmt.Printf(" Database: %s (%s)\n", meta.Database, meta.DatabaseType)
|
||||
fmt.Printf(" Created: %s\n", meta.Timestamp.Format(time.RFC3339))
|
||||
}
|
||||
}
|
||||
fmt.Println()
|
||||
successCount++
|
||||
}
|
||||
|
||||
fmt.Printf("\n✅ Summary: %d valid, %d failed\n", successCount, failureCount)
|
||||
|
||||
if failureCount > 0 {
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
docker-compose.minio.yml (new file, 101 lines)
@@ -0,0 +1,101 @@
|
||||
version: '3.8'
|
||||
|
||||
services:
|
||||
# MinIO S3-compatible object storage for testing
|
||||
minio:
|
||||
image: minio/minio:latest
|
||||
container_name: dbbackup-minio
|
||||
ports:
|
||||
- "9000:9000" # S3 API
|
||||
- "9001:9001" # Web Console
|
||||
environment:
|
||||
MINIO_ROOT_USER: minioadmin
|
||||
MINIO_ROOT_PASSWORD: minioadmin123
|
||||
MINIO_REGION: us-east-1
|
||||
volumes:
|
||||
- minio-data:/data
|
||||
command: server /data --console-address ":9001"
|
||||
healthcheck:
|
||||
test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
|
||||
interval: 30s
|
||||
timeout: 20s
|
||||
retries: 3
|
||||
networks:
|
||||
- dbbackup-test
|
||||
|
||||
# PostgreSQL database for backup testing
|
||||
postgres:
|
||||
image: postgres:16-alpine
|
||||
container_name: dbbackup-postgres-test
|
||||
environment:
|
||||
POSTGRES_USER: testuser
|
||||
POSTGRES_PASSWORD: testpass123
|
||||
POSTGRES_DB: testdb
|
||||
POSTGRES_INITDB_ARGS: "-E UTF8 --locale=C"
|
||||
ports:
|
||||
- "5433:5432"
|
||||
volumes:
|
||||
- postgres-data:/var/lib/postgresql/data
|
||||
- ./test_data:/docker-entrypoint-initdb.d
|
||||
healthcheck:
|
||||
test: ["CMD-SHELL", "pg_isready -U testuser"]
|
||||
interval: 10s
|
||||
timeout: 5s
|
||||
retries: 5
|
||||
networks:
|
||||
- dbbackup-test
|
||||
|
||||
# MySQL database for backup testing
|
||||
mysql:
|
||||
image: mysql:8.0
|
||||
container_name: dbbackup-mysql-test
|
||||
environment:
|
||||
MYSQL_ROOT_PASSWORD: rootpass123
|
||||
MYSQL_DATABASE: testdb
|
||||
MYSQL_USER: testuser
|
||||
MYSQL_PASSWORD: testpass123
|
||||
ports:
|
||||
- "3307:3306"
|
||||
volumes:
|
||||
- mysql-data:/var/lib/mysql
|
||||
- ./test_data:/docker-entrypoint-initdb.d
|
||||
command: --default-authentication-plugin=mysql_native_password
|
||||
healthcheck:
|
||||
test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-u", "root", "-prootpass123"]
|
||||
interval: 10s
|
||||
timeout: 5s
|
||||
retries: 5
|
||||
networks:
|
||||
- dbbackup-test
|
||||
|
||||
# MinIO Client (mc) for bucket management
|
||||
minio-mc:
|
||||
image: minio/mc:latest
|
||||
container_name: dbbackup-minio-mc
|
||||
depends_on:
|
||||
minio:
|
||||
condition: service_healthy
|
||||
entrypoint: >
|
||||
/bin/sh -c "
|
||||
sleep 5;
|
||||
/usr/bin/mc alias set myminio http://minio:9000 minioadmin minioadmin123;
|
||||
/usr/bin/mc mb --ignore-existing myminio/test-backups;
|
||||
/usr/bin/mc mb --ignore-existing myminio/production-backups;
|
||||
/usr/bin/mc mb --ignore-existing myminio/dev-backups;
|
||||
echo 'MinIO buckets created successfully';
|
||||
exit 0;
|
||||
"
|
||||
networks:
|
||||
- dbbackup-test
|
||||
|
||||
volumes:
|
||||
minio-data:
|
||||
driver: local
|
||||
postgres-data:
|
||||
driver: local
|
||||
mysql-data:
|
||||
driver: local
|
||||
|
||||
networks:
|
||||
dbbackup-test:
|
||||
driver: bridge
|
||||
go.mod (15)
@@ -20,9 +20,10 @@ require (
|
||||
filippo.io/edwards25519 v1.1.0 // indirect
|
||||
github.com/aws/aws-sdk-go-v2 v1.40.0 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.3 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/config v1.32.1 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/credentials v1.19.1 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/config v1.32.2 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/credentials v1.19.2 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.14 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.12 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.14 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.14 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.4 // indirect
|
||||
@@ -31,11 +32,11 @@ require (
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.5 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.14 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.14 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/service/s3 v1.92.0 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/service/signin v1.0.1 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/service/sso v1.30.4 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.9 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/service/sts v1.41.1 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/service/s3 v1.92.1 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/service/signin v1.0.2 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/service/sso v1.30.5 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.10 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/service/sts v1.41.2 // indirect
|
||||
github.com/aws/smithy-go v1.23.2 // indirect
|
||||
github.com/aymanbagabas/go-osc52/v2 v2.0.1 // indirect
|
||||
github.com/charmbracelet/colorprofile v0.2.3-0.20250311203215-f60798e515dc // indirect
|
||||
|
||||
go.sum (16)
@@ -8,10 +8,16 @@ github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.3 h1:DHctwEM8P8iTXFxC
|
||||
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.3/go.mod h1:xdCzcZEtnSTKVDOmUZs4l/j3pSV6rpo1WXl5ugNsL8Y=
|
||||
github.com/aws/aws-sdk-go-v2/config v1.32.1 h1:iODUDLgk3q8/flEC7ymhmxjfoAnBDwEEYEVyKZ9mzjU=
|
||||
github.com/aws/aws-sdk-go-v2/config v1.32.1/go.mod h1:xoAgo17AGrPpJBSLg81W+ikM0cpOZG8ad04T2r+d5P0=
|
||||
github.com/aws/aws-sdk-go-v2/config v1.32.2 h1:4liUsdEpUUPZs5WVapsJLx5NPmQhQdez7nYFcovrytk=
|
||||
github.com/aws/aws-sdk-go-v2/config v1.32.2/go.mod h1:l0hs06IFz1eCT+jTacU/qZtC33nvcnLADAPL/XyrkZI=
|
||||
github.com/aws/aws-sdk-go-v2/credentials v1.19.1 h1:JeW+EwmtTE0yXFK8SmklrFh/cGTTXsQJumgMZNlbxfM=
|
||||
github.com/aws/aws-sdk-go-v2/credentials v1.19.1/go.mod h1:BOoXiStwTF+fT2XufhO0Efssbi1CNIO/ZXpZu87N0pw=
|
||||
github.com/aws/aws-sdk-go-v2/credentials v1.19.2 h1:qZry8VUyTK4VIo5aEdUcBjPZHL2v4FyQ3QEOaWcFLu4=
|
||||
github.com/aws/aws-sdk-go-v2/credentials v1.19.2/go.mod h1:YUqm5a1/kBnoK+/NY5WEiMocZihKSo15/tJdmdXnM5g=
|
||||
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.14 h1:WZVR5DbDgxzA0BJeudId89Kmgy6DIU4ORpxwsVHz0qA=
|
||||
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.14/go.mod h1:Dadl9QO0kHgbrH1GRqGiZdYtW5w+IXXaBNCHTIaheM4=
|
||||
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.12 h1:Zy6Tme1AA13kX8x3CnkHx5cqdGWGaj/anwOiWGnA0Xo=
|
||||
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.12/go.mod h1:ql4uXYKoTM9WUAUSmthY4AtPVrlTBZOvnBJTiCUdPxI=
|
||||
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.14 h1:PZHqQACxYb8mYgms4RZbhZG0a7dPW06xOjmaH0EJC/I=
|
||||
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.14/go.mod h1:VymhrMJUWs69D8u0/lZ7jSB6WgaG/NqHi3gX0aYf6U0=
|
||||
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.14 h1:bOS19y6zlJwagBfHxs0ESzr1XCOU2KXJCWcq3E2vfjY=
|
||||
@@ -30,14 +36,24 @@ github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.14 h1:FzQE21lNtUor0
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.14/go.mod h1:s1ydyWG9pm3ZwmmYN21HKyG9WzAZhYVW85wMHs5FV6w=
|
||||
github.com/aws/aws-sdk-go-v2/service/s3 v1.92.0 h1:8FshVvnV2sr9kOSAbOnc/vwVmmAwMjOedKH6JW2ddPM=
|
||||
github.com/aws/aws-sdk-go-v2/service/s3 v1.92.0/go.mod h1:wYNqY3L02Z3IgRYxOBPH9I1zD9Cjh9hI5QOy/eOjQvw=
|
||||
github.com/aws/aws-sdk-go-v2/service/s3 v1.92.1 h1:OgQy/+0+Kc3khtqiEOk23xQAglXi3Tj0y5doOxbi5tg=
|
||||
github.com/aws/aws-sdk-go-v2/service/s3 v1.92.1/go.mod h1:wYNqY3L02Z3IgRYxOBPH9I1zD9Cjh9hI5QOy/eOjQvw=
|
||||
github.com/aws/aws-sdk-go-v2/service/signin v1.0.1 h1:BDgIUYGEo5TkayOWv/oBLPphWwNm/A91AebUjAu5L5g=
|
||||
github.com/aws/aws-sdk-go-v2/service/signin v1.0.1/go.mod h1:iS6EPmNeqCsGo+xQmXv0jIMjyYtQfnwg36zl2FwEouk=
|
||||
github.com/aws/aws-sdk-go-v2/service/signin v1.0.2 h1:MxMBdKTYBjPQChlJhi4qlEueqB1p1KcbTEa7tD5aqPs=
|
||||
github.com/aws/aws-sdk-go-v2/service/signin v1.0.2/go.mod h1:iS6EPmNeqCsGo+xQmXv0jIMjyYtQfnwg36zl2FwEouk=
|
||||
github.com/aws/aws-sdk-go-v2/service/sso v1.30.4 h1:U//SlnkE1wOQiIImxzdY5PXat4Wq+8rlfVEw4Y7J8as=
|
||||
github.com/aws/aws-sdk-go-v2/service/sso v1.30.4/go.mod h1:av+ArJpoYf3pgyrj6tcehSFW+y9/QvAY8kMooR9bZCw=
|
||||
github.com/aws/aws-sdk-go-v2/service/sso v1.30.5 h1:ksUT5KtgpZd3SAiFJNJ0AFEJVva3gjBmN7eXUZjzUwQ=
|
||||
github.com/aws/aws-sdk-go-v2/service/sso v1.30.5/go.mod h1:av+ArJpoYf3pgyrj6tcehSFW+y9/QvAY8kMooR9bZCw=
|
||||
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.9 h1:LU8S9W/mPDAU9q0FjCLi0TrCheLMGwzbRpvUMwYspcA=
|
||||
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.9/go.mod h1:/j67Z5XBVDx8nZVp9EuFM9/BS5dvBznbqILGuu73hug=
|
||||
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.10 h1:GtsxyiF3Nd3JahRBJbxLCCdYW9ltGQYrFWg8XdkGDd8=
|
||||
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.10/go.mod h1:/j67Z5XBVDx8nZVp9EuFM9/BS5dvBznbqILGuu73hug=
|
||||
github.com/aws/aws-sdk-go-v2/service/sts v1.41.1 h1:GdGmKtG+/Krag7VfyOXV17xjTCz0i9NT+JnqLTOI5nA=
|
||||
github.com/aws/aws-sdk-go-v2/service/sts v1.41.1/go.mod h1:6TxbXoDSgBQ225Qd8Q+MbxUxUh6TtNKwbRt/EPS9xso=
|
||||
github.com/aws/aws-sdk-go-v2/service/sts v1.41.2 h1:a5UTtD4mHBU3t0o6aHQZFJTNKVfxFWfPX7J0Lr7G+uY=
|
||||
github.com/aws/aws-sdk-go-v2/service/sts v1.41.2/go.mod h1:6TxbXoDSgBQ225Qd8Q+MbxUxUh6TtNKwbRt/EPS9xso=
|
||||
github.com/aws/smithy-go v1.23.2 h1:Crv0eatJUQhaManss33hS5r40CG3ZFH+21XSkqMrIUM=
|
||||
github.com/aws/smithy-go v1.23.2/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
|
||||
github.com/aymanbagabas/go-osc52/v2 v2.0.1 h1:HwpRHbFMcZLEVr42D4p7XBqjyuxQH5SMiErDT4WkJ2k=
|
||||
|
||||
@@ -11,6 +11,7 @@ import (
|
||||
"github.com/aws/aws-sdk-go-v2/aws"
|
||||
"github.com/aws/aws-sdk-go-v2/config"
|
||||
"github.com/aws/aws-sdk-go-v2/credentials"
|
||||
"github.com/aws/aws-sdk-go-v2/feature/s3/manager"
|
||||
"github.com/aws/aws-sdk-go-v2/service/s3"
|
||||
)
|
||||
|
||||
@@ -92,7 +93,7 @@ func (s *S3Backend) buildKey(filename string) string {
|
||||
return filepath.Join(s.prefix, filename)
|
||||
}
|
||||
|
||||
// Upload uploads a file to S3
|
||||
// Upload uploads a file to S3 with multipart support for large files
|
||||
func (s *S3Backend) Upload(ctx context.Context, localPath, remotePath string, progress ProgressCallback) error {
|
||||
// Open local file
|
||||
file, err := os.Open(localPath)
|
||||
@@ -108,17 +109,30 @@ func (s *S3Backend) Upload(ctx context.Context, localPath, remotePath string, pr
|
||||
}
|
||||
fileSize := stat.Size()
|
||||
|
||||
// Build S3 key
|
||||
key := s.buildKey(remotePath)
|
||||
|
||||
// Use multipart upload for files larger than 100MB
|
||||
const multipartThreshold = 100 * 1024 * 1024 // 100 MB
|
||||
|
||||
if fileSize > multipartThreshold {
|
||||
return s.uploadMultipart(ctx, file, key, fileSize, progress)
|
||||
}
|
||||
|
||||
// Simple upload for smaller files
|
||||
return s.uploadSimple(ctx, file, key, fileSize, progress)
|
||||
}
|
||||
|
||||
// uploadSimple performs a simple single-part upload
|
||||
func (s *S3Backend) uploadSimple(ctx context.Context, file *os.File, key string, fileSize int64, progress ProgressCallback) error {
|
||||
// Create progress reader
|
||||
var reader io.Reader = file
|
||||
if progress != nil {
|
||||
reader = NewProgressReader(file, fileSize, progress)
|
||||
}
|
||||
|
||||
// Build S3 key
|
||||
key := s.buildKey(remotePath)
|
||||
|
||||
// Upload to S3
|
||||
_, err = s.client.PutObject(ctx, &s3.PutObjectInput{
|
||||
_, err := s.client.PutObject(ctx, &s3.PutObjectInput{
|
||||
Bucket: aws.String(s.bucket),
|
||||
Key: aws.String(key),
|
||||
Body: reader,
|
||||
@@ -131,6 +145,40 @@ func (s *S3Backend) Upload(ctx context.Context, localPath, remotePath string, pr
|
||||
return nil
|
||||
}
|
||||
|
||||
// uploadMultipart performs a multipart upload for large files
|
||||
func (s *S3Backend) uploadMultipart(ctx context.Context, file *os.File, key string, fileSize int64, progress ProgressCallback) error {
|
||||
// Create uploader with custom options
|
||||
uploader := manager.NewUploader(s.client, func(u *manager.Uploader) {
|
||||
// Part size: 10MB
|
||||
u.PartSize = 10 * 1024 * 1024
|
||||
|
||||
// Upload up to 10 parts concurrently
|
||||
u.Concurrency = 10
|
||||
|
||||
// Clean up parts on failure instead of leaving partial uploads behind
|
||||
u.LeavePartsOnError = false
|
||||
})
|
||||
|
||||
// Wrap file with progress reader
|
||||
var reader io.Reader = file
|
||||
if progress != nil {
|
||||
reader = NewProgressReader(file, fileSize, progress)
|
||||
}
|
||||
|
||||
// Upload with multipart
|
||||
_, err := uploader.Upload(ctx, &s3.PutObjectInput{
|
||||
Bucket: aws.String(s.bucket),
|
||||
Key: aws.String(key),
|
||||
Body: reader,
|
||||
})
|
||||
|
||||
if err != nil {
|
||||
return fmt.Errorf("multipart upload failed: %w", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Download downloads a file from S3
|
||||
func (s *S3Backend) Download(ctx context.Context, remotePath, localPath string, progress ProgressCallback) error {
|
||||
// Build S3 key
|
||||
|
||||
internal/cloud/uri.go (new file, 198 lines)
@@ -0,0 +1,198 @@
|
||||
package cloud
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"net/url"
|
||||
"path"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// CloudURI represents a parsed cloud storage URI
|
||||
type CloudURI struct {
|
||||
Provider string // "s3", "minio", "azure", "gcs", "b2"
|
||||
Bucket string // Bucket or container name
|
||||
Path string // Path within bucket (without leading /)
|
||||
Region string // Region (optional, extracted from host)
|
||||
Endpoint string // Custom endpoint (for MinIO, etc)
|
||||
FullURI string // Original URI string
|
||||
}
|
||||
|
||||
// ParseCloudURI parses a cloud storage URI like s3://bucket/path/file.dump
|
||||
// Supported formats:
|
||||
// - s3://bucket/path/file.dump
|
||||
// - s3://bucket.s3.region.amazonaws.com/path/file.dump
|
||||
// - minio://bucket/path/file.dump
|
||||
// - azure://container/path/file.dump
|
||||
// - gs://bucket/path/file.dump (Google Cloud Storage)
|
||||
// - b2://bucket/path/file.dump (Backblaze B2)
|
||||
func ParseCloudURI(uri string) (*CloudURI, error) {
|
||||
if uri == "" {
|
||||
return nil, fmt.Errorf("URI cannot be empty")
|
||||
}
|
||||
|
||||
// Parse URL
|
||||
parsed, err := url.Parse(uri)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("invalid URI: %w", err)
|
||||
}
|
||||
|
||||
// Extract provider from scheme
|
||||
provider := strings.ToLower(parsed.Scheme)
|
||||
if provider == "" {
|
||||
return nil, fmt.Errorf("URI must have a scheme (e.g., s3://)")
|
||||
}
|
||||
|
||||
// Validate provider
|
||||
validProviders := map[string]bool{
|
||||
"s3": true,
|
||||
"minio": true,
|
||||
"azure": true,
|
||||
"gs": true,
|
||||
"gcs": true,
|
||||
"b2": true,
|
||||
}
|
||||
if !validProviders[provider] {
|
||||
return nil, fmt.Errorf("unsupported provider: %s (supported: s3, minio, azure, gs, gcs, b2)", provider)
|
||||
}
|
||||
|
||||
// Normalize provider names
|
||||
if provider == "gcs" {
|
||||
provider = "gs"
|
||||
}
|
||||
|
||||
// Extract bucket and path
|
||||
bucket := parsed.Host
|
||||
if bucket == "" {
|
||||
return nil, fmt.Errorf("URI must specify a bucket (e.g., s3://bucket/path)")
|
||||
}
|
||||
|
||||
// Extract region from AWS S3 hostname if present
|
||||
// Format: bucket.s3.region.amazonaws.com or bucket.s3-region.amazonaws.com
|
||||
var region string
|
||||
var endpoint string
|
||||
|
||||
if strings.Contains(bucket, ".amazonaws.com") {
|
||||
parts := strings.Split(bucket, ".")
|
||||
if len(parts) >= 3 {
|
||||
// Extract bucket name (first part)
|
||||
bucket = parts[0]
|
||||
|
||||
// Extract region if present
|
||||
// bucket.s3.us-west-2.amazonaws.com -> us-west-2
|
||||
// bucket.s3-us-west-2.amazonaws.com -> us-west-2
|
||||
for i, part := range parts {
|
||||
if part == "s3" && i+1 < len(parts) && parts[i+1] != "amazonaws" {
|
||||
region = parts[i+1]
|
||||
break
|
||||
}
|
||||
if strings.HasPrefix(part, "s3-") {
|
||||
region = strings.TrimPrefix(part, "s3-")
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// For MinIO and custom endpoints, preserve the host as endpoint
|
||||
if provider == "minio" || (provider == "s3" && !strings.Contains(bucket, "amazonaws.com")) {
|
||||
// If it looks like a custom endpoint (has dots), preserve it
|
||||
if strings.Contains(bucket, ".") && !strings.Contains(bucket, "amazonaws.com") {
|
||||
endpoint = bucket
|
||||
// Try to extract bucket from path
|
||||
trimmedPath := strings.TrimPrefix(parsed.Path, "/")
|
||||
pathParts := strings.SplitN(trimmedPath, "/", 2)
|
||||
if len(pathParts) > 0 && pathParts[0] != "" {
|
||||
bucket = pathParts[0]
|
||||
if len(pathParts) > 1 {
|
||||
parsed.Path = "/" + pathParts[1]
|
||||
} else {
|
||||
parsed.Path = "/"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Clean up path (remove leading slash)
|
||||
filepath := strings.TrimPrefix(parsed.Path, "/")
|
||||
|
||||
return &CloudURI{
|
||||
Provider: provider,
|
||||
Bucket: bucket,
|
||||
Path: filepath,
|
||||
Region: region,
|
||||
Endpoint: endpoint,
|
||||
FullURI: uri,
|
||||
}, nil
|
||||
}
|
||||
|
||||
// IsCloudURI checks if a string looks like a cloud storage URI
|
||||
func IsCloudURI(s string) bool {
|
||||
s = strings.ToLower(s)
|
||||
return strings.HasPrefix(s, "s3://") ||
|
||||
strings.HasPrefix(s, "minio://") ||
|
||||
strings.HasPrefix(s, "azure://") ||
|
||||
strings.HasPrefix(s, "gs://") ||
|
||||
strings.HasPrefix(s, "gcs://") ||
|
||||
strings.HasPrefix(s, "b2://")
|
||||
}
|
||||
|
||||
// String returns the string representation of the URI
|
||||
func (u *CloudURI) String() string {
|
||||
return u.FullURI
|
||||
}
|
||||
|
||||
// BaseName returns the filename without path
|
||||
func (u *CloudURI) BaseName() string {
|
||||
return path.Base(u.Path)
|
||||
}
|
||||
|
||||
// Dir returns the directory path without filename
|
||||
func (u *CloudURI) Dir() string {
|
||||
return path.Dir(u.Path)
|
||||
}
|
||||
|
||||
// Join appends path elements to the URI path
|
||||
func (u *CloudURI) Join(elem ...string) string {
|
||||
newPath := u.Path
|
||||
for _, e := range elem {
|
||||
newPath = path.Join(newPath, e)
|
||||
}
|
||||
return fmt.Sprintf("%s://%s/%s", u.Provider, u.Bucket, newPath)
|
||||
}
|
||||
|
||||
// ToConfig converts a CloudURI to a cloud.Config
|
||||
func (u *CloudURI) ToConfig() *Config {
|
||||
cfg := &Config{
|
||||
Provider: u.Provider,
|
||||
Bucket: u.Bucket,
|
||||
Prefix: u.Dir(), // Use directory part as prefix
|
||||
}
|
||||
|
||||
// Set region if available
|
||||
if u.Region != "" {
|
||||
cfg.Region = u.Region
|
||||
}
|
||||
|
||||
// Set endpoint if available (for MinIO, etc)
|
||||
if u.Endpoint != "" {
|
||||
cfg.Endpoint = u.Endpoint
|
||||
}
|
||||
|
||||
// Provider-specific settings
|
||||
switch u.Provider {
|
||||
case "minio":
|
||||
cfg.PathStyle = true
|
||||
case "b2":
|
||||
cfg.PathStyle = true
|
||||
}
|
||||
|
||||
return cfg
|
||||
}
|
||||
|
||||
// BuildRemotePath constructs the full remote path for a file
|
||||
func (u *CloudURI) BuildRemotePath(filename string) string {
|
||||
if u.Path == "" || u.Path == "." {
|
||||
return filename
|
||||
}
|
||||
return path.Join(u.Path, filename)
|
||||
}
|
||||
internal/restore/cloud_download.go (new file, 211 lines)
@@ -0,0 +1,211 @@
package restore

import (
	"context"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"path/filepath"

	"dbbackup/internal/cloud"
	"dbbackup/internal/logger"
	"dbbackup/internal/metadata"
)

// CloudDownloader handles downloading backups from cloud storage
type CloudDownloader struct {
	backend cloud.Backend
	log     logger.Logger
}

// NewCloudDownloader creates a new cloud downloader
func NewCloudDownloader(backend cloud.Backend, log logger.Logger) *CloudDownloader {
	return &CloudDownloader{
		backend: backend,
		log:     log,
	}
}

// DownloadOptions contains options for downloading from cloud
type DownloadOptions struct {
	VerifyChecksum bool   // Verify SHA-256 checksum after download
	KeepLocal      bool   // Keep downloaded file (don't delete temp)
	TempDir        string // Temp directory (default: os.TempDir())
}

// DownloadResult contains information about a downloaded backup
type DownloadResult struct {
	LocalPath    string // Path to downloaded file
	RemotePath   string // Original remote path
	Size         int64  // File size in bytes
	SHA256       string // SHA-256 checksum (if verified)
	MetadataPath string // Path to downloaded metadata (if exists)
	IsTempFile   bool   // Whether the file is in a temp directory
}

// Download downloads a backup from cloud storage
func (d *CloudDownloader) Download(ctx context.Context, remotePath string, opts DownloadOptions) (*DownloadResult, error) {
	// Determine temp directory
	tempDir := opts.TempDir
	if tempDir == "" {
		tempDir = os.TempDir()
	}

	// Create unique temp subdirectory
	tempSubDir := filepath.Join(tempDir, fmt.Sprintf("dbbackup-download-%d", os.Getpid()))
	if err := os.MkdirAll(tempSubDir, 0755); err != nil {
		return nil, fmt.Errorf("failed to create temp directory: %w", err)
	}

	// Extract filename from remote path
	filename := filepath.Base(remotePath)
	localPath := filepath.Join(tempSubDir, filename)

	d.log.Info("Downloading backup from cloud", "remote", remotePath, "local", localPath)

	// Get file size for progress tracking
	size, err := d.backend.GetSize(ctx, remotePath)
	if err != nil {
		d.log.Warn("Could not get remote file size", "error", err)
		size = 0 // Continue anyway
	}

	// Progress callback
	var lastPercent int
	progressCallback := func(transferred, total int64) {
		if total > 0 {
			percent := int(float64(transferred) / float64(total) * 100)
			if percent != lastPercent && percent%10 == 0 {
				d.log.Info("Download progress", "percent", percent, "transferred", cloud.FormatSize(transferred), "total", cloud.FormatSize(total))
				lastPercent = percent
			}
		}
	}

	// Download file
	if err := d.backend.Download(ctx, remotePath, localPath, progressCallback); err != nil {
		// Cleanup on failure
		os.RemoveAll(tempSubDir)
		return nil, fmt.Errorf("download failed: %w", err)
	}

	result := &DownloadResult{
		LocalPath:  localPath,
		RemotePath: remotePath,
		Size:       size,
		IsTempFile: !opts.KeepLocal,
	}

	// Try to download metadata file
	metaRemotePath := remotePath + ".meta.json"
	exists, err := d.backend.Exists(ctx, metaRemotePath)
	if err == nil && exists {
		metaLocalPath := localPath + ".meta.json"
		if err := d.backend.Download(ctx, metaRemotePath, metaLocalPath, nil); err != nil {
			d.log.Warn("Failed to download metadata", "error", err)
		} else {
			result.MetadataPath = metaLocalPath
			d.log.Debug("Downloaded metadata", "path", metaLocalPath)
		}
	}

	// Verify checksum if requested
	if opts.VerifyChecksum {
		d.log.Info("Verifying checksum...")
		checksum, err := calculateSHA256(localPath)
		if err != nil {
			// Cleanup on verification failure
			os.RemoveAll(tempSubDir)
			return nil, fmt.Errorf("checksum calculation failed: %w", err)
		}
		result.SHA256 = checksum

		// Check against metadata if available
		if result.MetadataPath != "" {
			meta, err := metadata.Load(result.MetadataPath)
			if err != nil {
				d.log.Warn("Failed to load metadata for verification", "error", err)
			} else if meta.SHA256 != "" && meta.SHA256 != checksum {
				// Cleanup on verification failure
				os.RemoveAll(tempSubDir)
				return nil, fmt.Errorf("checksum mismatch: expected %s, got %s", meta.SHA256, checksum)
			} else if meta.SHA256 == checksum {
				d.log.Info("Checksum verified successfully", "sha256", checksum)
			}
		}
	}

	d.log.Info("Download completed", "path", localPath, "size", cloud.FormatSize(result.Size))

	return result, nil
}

// DownloadFromURI downloads a backup using a cloud URI
func (d *CloudDownloader) DownloadFromURI(ctx context.Context, uri string, opts DownloadOptions) (*DownloadResult, error) {
	// Parse URI
	cloudURI, err := cloud.ParseCloudURI(uri)
	if err != nil {
		return nil, fmt.Errorf("invalid cloud URI: %w", err)
	}

	// Download using the path from URI
	return d.Download(ctx, cloudURI.Path, opts)
}

// Cleanup removes downloaded temp files
func (r *DownloadResult) Cleanup() error {
	if !r.IsTempFile {
		return nil // Don't delete non-temp files
	}

	// Remove the entire temp directory
	tempDir := filepath.Dir(r.LocalPath)
	if err := os.RemoveAll(tempDir); err != nil {
		return fmt.Errorf("failed to cleanup temp files: %w", err)
	}

	return nil
}

// calculateSHA256 calculates the SHA-256 checksum of a file
func calculateSHA256(filePath string) (string, error) {
	file, err := os.Open(filePath)
	if err != nil {
		return "", err
	}
	defer file.Close()

	hash := sha256.New()
	if _, err := io.Copy(hash, file); err != nil {
		return "", err
	}

	return hex.EncodeToString(hash.Sum(nil)), nil
}

// DownloadFromCloudURI is a convenience function to download from a cloud URI
func DownloadFromCloudURI(ctx context.Context, uri string, opts DownloadOptions) (*DownloadResult, error) {
	// Parse URI
	cloudURI, err := cloud.ParseCloudURI(uri)
	if err != nil {
		return nil, fmt.Errorf("invalid cloud URI: %w", err)
	}

	// Create config from URI
	cfg := cloudURI.ToConfig()

	// Create backend
	backend, err := cloud.NewBackend(cfg)
	if err != nil {
		return nil, fmt.Errorf("failed to create cloud backend: %w", err)
	}

	// Create downloader
	log := logger.New("info", "text")
	downloader := NewCloudDownloader(backend, log)

	// Download
	return downloader.Download(ctx, cloudURI.Path, opts)
}
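A minimal caller-side sketch for the convenience helper above (not part of the file); the URI is illustrative, and credentials are assumed to be supplied via the AWS_* environment variables:

```go
package main

import (
	"context"
	"log"

	"dbbackup/internal/restore"
)

func main() {
	// Download a backup from a cloud URI into a temp directory and verify its checksum.
	res, err := restore.DownloadFromCloudURI(context.Background(),
		"s3://my-bucket/backups/mydb.dump",
		restore.DownloadOptions{VerifyChecksum: true})
	if err != nil {
		log.Fatal(err)
	}
	defer res.Cleanup() // removes the temp directory unless KeepLocal was set

	log.Printf("downloaded %s (%d bytes, sha256=%s)", res.LocalPath, res.Size, res.SHA256)
}
```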
253
scripts/test_cloud_storage.sh
Executable file
@@ -0,0 +1,253 @@
#!/bin/bash
set -e

# Color output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

echo -e "${BLUE}========================================${NC}"
echo -e "${BLUE}dbbackup Cloud Storage Integration Test${NC}"
echo -e "${BLUE}========================================${NC}"
echo

# Configuration
MINIO_ENDPOINT="http://localhost:9000"
MINIO_ACCESS_KEY="minioadmin"
MINIO_SECRET_KEY="minioadmin123"
MINIO_BUCKET="test-backups"
POSTGRES_HOST="localhost"
POSTGRES_PORT="5433"
POSTGRES_USER="testuser"
POSTGRES_PASS="testpass123"
POSTGRES_DB="cloudtest"

# Export credentials
export AWS_ACCESS_KEY_ID="$MINIO_ACCESS_KEY"
export AWS_SECRET_ACCESS_KEY="$MINIO_SECRET_KEY"
export AWS_ENDPOINT_URL="$MINIO_ENDPOINT"
export AWS_REGION="us-east-1"

# Check if dbbackup binary exists
if [ ! -f "./dbbackup" ]; then
    echo -e "${YELLOW}Building dbbackup...${NC}"
    go build -o dbbackup .
    echo -e "${GREEN}✓ Build successful${NC}"
fi

# Function to wait for service
wait_for_service() {
    local service=$1
    local host=$2
    local port=$3
    local max_attempts=30
    local attempt=1

    echo -e "${YELLOW}Waiting for $service to be ready...${NC}"

    while ! nc -z $host $port 2>/dev/null; do
        if [ $attempt -ge $max_attempts ]; then
            echo -e "${RED}✗ $service did not start in time${NC}"
            return 1
        fi
        echo -n "."
        sleep 1
        attempt=$((attempt + 1))
    done

    echo -e "${GREEN}✓ $service is ready${NC}"
}

# Step 1: Start services
echo -e "${BLUE}Step 1: Starting services with Docker Compose${NC}"
docker-compose -f docker-compose.minio.yml up -d

# Wait for services
wait_for_service "MinIO" "localhost" "9000"
wait_for_service "PostgreSQL" "localhost" "5433"

sleep 5

# Step 2: Create test database
echo -e "\n${BLUE}Step 2: Creating test database${NC}"
PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -c "DROP DATABASE IF EXISTS $POSTGRES_DB;" postgres 2>/dev/null || true
PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -c "CREATE DATABASE $POSTGRES_DB;" postgres
PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -d $POSTGRES_DB << EOF
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100),
    email VARCHAR(100),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

INSERT INTO users (name, email) VALUES
    ('Alice', 'alice@example.com'),
    ('Bob', 'bob@example.com'),
    ('Charlie', 'charlie@example.com');

CREATE TABLE products (
    id SERIAL PRIMARY KEY,
    name VARCHAR(200),
    price DECIMAL(10,2)
);

INSERT INTO products (name, price) VALUES
    ('Widget', 19.99),
    ('Gadget', 29.99),
    ('Doohickey', 39.99);
EOF

echo -e "${GREEN}✓ Test database created with sample data${NC}"

# Step 3: Test local backup
echo -e "\n${BLUE}Step 3: Creating local backup${NC}"
./dbbackup backup single $POSTGRES_DB \
    --db-type postgres \
    --host $POSTGRES_HOST \
    --port $POSTGRES_PORT \
    --user $POSTGRES_USER \
    --password $POSTGRES_PASS \
    --output-dir /tmp/dbbackup-test

LOCAL_BACKUP=$(ls -t /tmp/dbbackup-test/${POSTGRES_DB}_*.dump 2>/dev/null | head -1)
if [ -z "$LOCAL_BACKUP" ]; then
    echo -e "${RED}✗ Local backup failed${NC}"
    exit 1
fi
echo -e "${GREEN}✓ Local backup created: $LOCAL_BACKUP${NC}"

# Step 4: Test cloud upload
echo -e "\n${BLUE}Step 4: Uploading backup to MinIO (S3)${NC}"
./dbbackup cloud upload "$LOCAL_BACKUP" \
    --cloud-provider minio \
    --cloud-bucket $MINIO_BUCKET \
    --cloud-endpoint $MINIO_ENDPOINT

echo -e "${GREEN}✓ Upload successful${NC}"

# Step 5: Test cloud list
echo -e "\n${BLUE}Step 5: Listing cloud backups${NC}"
./dbbackup cloud list \
    --cloud-provider minio \
    --cloud-bucket $MINIO_BUCKET \
    --cloud-endpoint $MINIO_ENDPOINT \
    --verbose

# Step 6: Test backup with cloud URI
echo -e "\n${BLUE}Step 6: Testing backup with cloud URI${NC}"
./dbbackup backup single $POSTGRES_DB \
    --db-type postgres \
    --host $POSTGRES_HOST \
    --port $POSTGRES_PORT \
    --user $POSTGRES_USER \
    --password $POSTGRES_PASS \
    --output-dir /tmp/dbbackup-test \
    --cloud minio://$MINIO_BUCKET/uri-test/

echo -e "${GREEN}✓ Backup with cloud URI successful${NC}"

# Step 7: Drop database for restore test
echo -e "\n${BLUE}Step 7: Dropping database for restore test${NC}"
PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -c "DROP DATABASE $POSTGRES_DB;" postgres

# Step 8: Test restore from cloud URI
echo -e "\n${BLUE}Step 8: Restoring from cloud URI${NC}"
CLOUD_URI="minio://$MINIO_BUCKET/$(basename $LOCAL_BACKUP)"
./dbbackup restore single "$CLOUD_URI" \
    --target $POSTGRES_DB \
    --create \
    --confirm \
    --db-type postgres \
    --host $POSTGRES_HOST \
    --port $POSTGRES_PORT \
    --user $POSTGRES_USER \
    --password $POSTGRES_PASS

echo -e "${GREEN}✓ Restore from cloud successful${NC}"

# Step 9: Verify data
echo -e "\n${BLUE}Step 9: Verifying restored data${NC}"
USER_COUNT=$(PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -d $POSTGRES_DB -t -c "SELECT COUNT(*) FROM users;")
PRODUCT_COUNT=$(PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -d $POSTGRES_DB -t -c "SELECT COUNT(*) FROM products;")

if [ "$USER_COUNT" -eq 3 ] && [ "$PRODUCT_COUNT" -eq 3 ]; then
    echo -e "${GREEN}✓ Data verification successful (users: $USER_COUNT, products: $PRODUCT_COUNT)${NC}"
else
    echo -e "${RED}✗ Data verification failed (users: $USER_COUNT, products: $PRODUCT_COUNT)${NC}"
    exit 1
fi

# Step 10: Test verify command
echo -e "\n${BLUE}Step 10: Verifying cloud backup integrity${NC}"
./dbbackup verify-backup "$CLOUD_URI"

echo -e "${GREEN}✓ Backup verification successful${NC}"

# Step 11: Test cloud cleanup
echo -e "\n${BLUE}Step 11: Testing cloud cleanup (dry-run)${NC}"
./dbbackup cleanup "minio://$MINIO_BUCKET/" \
    --retention-days 0 \
    --min-backups 1 \
    --dry-run

# Step 12: Create multiple backups for cleanup test
echo -e "\n${BLUE}Step 12: Creating multiple backups for cleanup test${NC}"
for i in {1..5}; do
    echo "Creating backup $i/5..."
    ./dbbackup backup single $POSTGRES_DB \
        --db-type postgres \
        --host $POSTGRES_HOST \
        --port $POSTGRES_PORT \
        --user $POSTGRES_USER \
        --password $POSTGRES_PASS \
        --output-dir /tmp/dbbackup-test \
        --cloud minio://$MINIO_BUCKET/cleanup-test/
    sleep 1
done

echo -e "${GREEN}✓ Multiple backups created${NC}"

# Step 13: Test actual cleanup
echo -e "\n${BLUE}Step 13: Testing cloud cleanup (actual)${NC}"
./dbbackup cleanup "minio://$MINIO_BUCKET/cleanup-test/" \
    --retention-days 0 \
    --min-backups 2

echo -e "${GREEN}✓ Cloud cleanup successful${NC}"

# Step 14: Test large file upload (multipart)
echo -e "\n${BLUE}Step 14: Testing large file upload (>100MB for multipart)${NC}"
echo "Creating 150MB test file..."
dd if=/dev/zero of=/tmp/large-test-file.bin bs=1M count=150 2>/dev/null

echo "Uploading large file..."
./dbbackup cloud upload /tmp/large-test-file.bin \
    --cloud-provider minio \
    --cloud-bucket $MINIO_BUCKET \
    --cloud-endpoint $MINIO_ENDPOINT \
    --verbose

echo -e "${GREEN}✓ Large file multipart upload successful${NC}"

# Cleanup
echo -e "\n${BLUE}Cleanup${NC}"
rm -f /tmp/large-test-file.bin
rm -rf /tmp/dbbackup-test

echo -e "\n${GREEN}========================================${NC}"
echo -e "${GREEN}✓ ALL TESTS PASSED!${NC}"
echo -e "${GREEN}========================================${NC}"
echo
echo -e "${YELLOW}To stop services:${NC}"
echo -e "  docker-compose -f docker-compose.minio.yml down"
echo
echo -e "${YELLOW}To view MinIO console:${NC}"
echo -e "  http://localhost:9001 (minioadmin / minioadmin123)"
echo
echo -e "${YELLOW}To keep services running for manual testing:${NC}"
echo -e "  export AWS_ACCESS_KEY_ID=minioadmin"
echo -e "  export AWS_SECRET_ACCESS_KEY=minioadmin123"
echo -e "  export AWS_ENDPOINT_URL=http://localhost:9000"
echo -e "  ./dbbackup cloud list --cloud-provider minio --cloud-bucket test-backups"