Compare commits: v1.0 ... v2.0-sprin (8 commits)

| SHA1 |
|---|
| 64f1458e9a |
| 8929004abc |
| bdf9af0650 |
| 20b7f1ec04 |
| ae3ed1fea1 |
| ba5ae8ecb1 |
| 884c8292d6 |
| 6e04db4a98 |
21
.dockerignore
Normal file
@@ -0,0 +1,21 @@
.git
.gitignore
*.dump
*.dump.gz
*.sql
*.sql.gz
*.tar.gz
*.sha256
*.info
.dbbackup.conf
backups/
test_workspace/
bin/
dbbackup
dbbackup_*
*.log
.vscode/
.idea/
*.swp
*.swo
*~
531
AZURE.md
Normal file
@@ -0,0 +1,531 @@
# Azure Blob Storage Integration

This guide covers using **Azure Blob Storage** with `dbbackup` for secure, scalable cloud backup storage.

## Table of Contents

- [Quick Start](#quick-start)
- [URI Syntax](#uri-syntax)
- [Authentication](#authentication)
- [Configuration](#configuration)
- [Usage Examples](#usage-examples)
- [Advanced Features](#advanced-features)
- [Testing with Azurite](#testing-with-azurite)
- [Best Practices](#best-practices)
- [Troubleshooting](#troubleshooting)

## Quick Start

### 1. Azure Portal Setup

1. Create a storage account in Azure Portal
2. Create a container for backups
3. Get your account credentials:
   - **Account Name**: Your storage account name
   - **Account Key**: Primary or secondary access key (from Access Keys section)

### 2. Basic Backup

```bash
# Backup PostgreSQL to Azure
dbbackup backup postgres \
  --host localhost \
  --database mydb \
  --output backup.sql \
  --cloud "azure://mycontainer/backups/db.sql?account=myaccount&key=ACCOUNT_KEY"
```

### 3. Restore from Azure

```bash
# Restore from Azure backup
dbbackup restore postgres \
  --source "azure://mycontainer/backups/db.sql?account=myaccount&key=ACCOUNT_KEY" \
  --host localhost \
  --database mydb_restored
```

## URI Syntax

### Basic Format

```
azure://container/path/to/backup.sql?account=ACCOUNT_NAME&key=ACCOUNT_KEY
```

### URI Components

| Component | Required | Description | Example |
|-----------|----------|-------------|---------|
| `container` | Yes | Azure container name | `mycontainer` |
| `path` | Yes | Object path within container | `backups/db.sql` |
| `account` | Yes | Storage account name | `mystorageaccount` |
| `key` | Yes | Storage account key | `base64-encoded-key` |
| `endpoint` | No | Custom endpoint (Azurite) | `http://localhost:10000` |
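The components above assemble into a single query-string URI; a minimal sketch, with all account, key, and path values being illustrative placeholders:

```shell
# Hypothetical values; shows how the URI pieces fit together
ACCOUNT="myaccount"
KEY="EXAMPLE_KEY"
CONTAINER="mycontainer"
BLOB="backups/db.sql"

# azure://<container>/<path>?account=<name>&key=<key>
URI="azure://${CONTAINER}/${BLOB}?account=${ACCOUNT}&key=${KEY}"
echo "$URI"
```

Quoting the URI when passing it to `--cloud` matters: the `&` would otherwise background the command.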

### URI Examples

**Production Azure:**
```
azure://prod-backups/postgres/db.sql?account=prodaccount&key=YOUR_KEY_HERE
```

**Azurite Emulator:**
```
azure://test-backups/postgres/db.sql?endpoint=http://localhost:10000&account=devstoreaccount1&key=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==
```

**With Path Prefix:**
```
azure://backups/production/postgres/2024/db.sql?account=myaccount&key=KEY
```

## Authentication

### Method 1: URI Parameters (Recommended for CLI)

Pass credentials directly in the URI:

```
azure://container/path?account=myaccount&key=YOUR_ACCOUNT_KEY
```

### Method 2: Environment Variables

Set credentials via the environment:

```bash
export AZURE_STORAGE_ACCOUNT="myaccount"
export AZURE_STORAGE_KEY="YOUR_ACCOUNT_KEY"

# Use a simplified URI (credentials come from the environment)
dbbackup backup postgres --cloud "azure://container/path/backup.sql"
```

### Method 3: Connection String

Use an Azure connection string:

```bash
export AZURE_STORAGE_CONNECTION_STRING="DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=YOUR_KEY;EndpointSuffix=core.windows.net"

dbbackup backup postgres --cloud "azure://container/path/backup.sql"
```

### Getting Your Account Key

1. Go to Azure Portal → Storage Accounts
2. Select your storage account
3. Navigate to **Security + networking** → **Access keys**
4. Copy **key1** or **key2**

**Important:** Keep your account keys secure. Use Azure Key Vault for production.

## Configuration

### Container Setup

Create a container before first use:

```bash
# Azure CLI
az storage container create \
  --name backups \
  --account-name myaccount \
  --account-key YOUR_KEY

# Or let dbbackup create it automatically
dbbackup cloud upload file.sql "azure://backups/file.sql?account=myaccount&key=KEY&create=true"
```

### Access Tiers

Azure Blob Storage offers multiple access tiers:

- **Hot**: Frequent access (default)
- **Cool**: Infrequent access (lower storage cost)
- **Archive**: Long-term retention (lowest cost, retrieval delay)

Set the tier in the Azure Portal or with the Azure CLI:

```bash
az storage blob set-tier \
  --container-name backups \
  --name backup.sql \
  --tier Cool \
  --account-name myaccount
```

### Lifecycle Management

Configure automatic tier transitions:

```json
{
  "rules": [
    {
      "name": "moveToArchive",
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": ["blockBlob"],
          "prefixMatch": ["backups/"]
        },
        "actions": {
          "baseBlob": {
            "tierToCool": {
              "daysAfterModificationGreaterThan": 30
            },
            "tierToArchive": {
              "daysAfterModificationGreaterThan": 90
            },
            "delete": {
              "daysAfterModificationGreaterThan": 365
            }
          }
        }
      }
    }
  ]
}
```
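The policy above decides a blob's fate purely by its age. A minimal shell sketch of that decision logic (the function name is ours; the 30/90/365-day thresholds are copied from the JSON):

```shell
# Which state a backup would be in under the 30/90/365-day policy
tier_for_age() {
  local days=$1
  if   [ "$days" -ge 365 ]; then echo "deleted"
  elif [ "$days" -ge 90 ];  then echo "archive"
  elif [ "$days" -ge 30 ];  then echo "cool"
  else                           echo "hot"
  fi
}

tier_for_age 45   # prints cool
```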

## Usage Examples

### Backup with Auto-Upload

```bash
# PostgreSQL backup with automatic Azure upload
dbbackup backup postgres \
  --host localhost \
  --database production_db \
  --output /backups/db.sql \
  --cloud "azure://prod-backups/postgres/$(date +%Y%m%d_%H%M%S).sql?account=myaccount&key=KEY" \
  --compression 6
```

### Backup All Databases

```bash
# Backup the entire PostgreSQL cluster to Azure
dbbackup backup postgres \
  --host localhost \
  --all-databases \
  --output-dir /backups \
  --cloud "azure://prod-backups/postgres/cluster/?account=myaccount&key=KEY"
```

### Verify Backup

```bash
# Verify backup integrity
dbbackup verify "azure://prod-backups/postgres/backup.sql?account=myaccount&key=KEY"
```

### List Backups

```bash
# List all backups in a container
dbbackup cloud list "azure://prod-backups/postgres/?account=myaccount&key=KEY"

# List under a path prefix
dbbackup cloud list "azure://prod-backups/postgres/2024/?account=myaccount&key=KEY"
```

### Download Backup

```bash
# Download from Azure to local storage
dbbackup cloud download \
  "azure://prod-backups/postgres/backup.sql?account=myaccount&key=KEY" \
  /local/path/backup.sql
```

### Delete Old Backups

```bash
# Manual delete
dbbackup cloud delete "azure://prod-backups/postgres/old_backup.sql?account=myaccount&key=KEY"

# Automatic cleanup (keep the last 7 backups)
dbbackup cleanup "azure://prod-backups/postgres/?account=myaccount&key=KEY" --keep 7
```
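A minimal local sketch of the "keep the last 7" idea, assuming lexicographically sortable timestamped names (the directory and file names are throwaway examples, and `head -n -7` is GNU coreutils behavior):

```shell
# Create nine fake timestamped backups in a scratch directory
mkdir -p /tmp/keep_demo && cd /tmp/keep_demo
for i in 1 2 3 4 5 6 7 8 9; do touch "backup_2024010${i}.sql"; done

# Names sort chronologically, so "all but the last 7" are the oldest
ls -1 backup_*.sql | sort | head -n -7 | xargs -r rm --

ls -1 backup_*.sql | wc -l   # prints 7
```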

### Scheduled Backups

```bash
#!/bin/bash
# Azure backup script (run via cron)

DATE=$(date +%Y%m%d_%H%M%S)
AZURE_URI="azure://prod-backups/postgres/${DATE}.sql?account=myaccount&key=${AZURE_STORAGE_KEY}"

dbbackup backup postgres \
  --host localhost \
  --database production_db \
  --output /tmp/backup.sql \
  --cloud "${AZURE_URI}" \
  --compression 9

# Clean up old backups
dbbackup cleanup "azure://prod-backups/postgres/?account=myaccount&key=${AZURE_STORAGE_KEY}" --keep 30
```

**Crontab:**
```cron
# Daily at 2 AM
0 2 * * * /usr/local/bin/azure-backup.sh >> /var/log/azure-backup.log 2>&1
```

## Advanced Features

### Block Blob Upload

For large files (>256MB), dbbackup automatically uses Azure Block Blob staging:

- **Block Size**: 100MB per block
- **Parallel Upload**: Multiple blocks uploaded concurrently
- **Checksum**: SHA-256 integrity verification

```bash
# Large database backup (automatically uses block blob)
dbbackup backup postgres \
  --host localhost \
  --database huge_db \
  --output /backups/huge.sql \
  --cloud "azure://backups/huge.sql?account=myaccount&key=KEY"
```
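With 100MB blocks, the number of staged blocks is a ceiling division over the file size; a quick arithmetic sketch (the 1 GiB file size is an example, not a tool default):

```shell
FILE_SIZE=1073741824   # 1 GiB backup, for illustration
BLOCK_SIZE=104857600   # 100 MiB per block

# Ceiling division: how many blocks the upload is split into
BLOCKS=$(( (FILE_SIZE + BLOCK_SIZE - 1) / BLOCK_SIZE ))
echo "$BLOCKS"         # prints 11
```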

### Progress Tracking

```bash
# Backup with progress display
dbbackup backup postgres \
  --host localhost \
  --database mydb \
  --output backup.sql \
  --cloud "azure://backups/backup.sql?account=myaccount&key=KEY" \
  --progress
```

### Concurrent Operations

```bash
# Backup multiple databases in parallel
dbbackup backup postgres \
  --host localhost \
  --all-databases \
  --output-dir /backups \
  --cloud "azure://backups/cluster/?account=myaccount&key=KEY" \
  --parallelism 4
```

### Custom Metadata

Backups include SHA-256 checksums as blob metadata:

```bash
# Inspect the metadata with the Azure CLI
az storage blob metadata show \
  --container-name backups \
  --name backup.sql \
  --account-name myaccount
```
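The checksum stored in metadata can be reproduced locally with standard tooling, which makes an end-to-end integrity check possible; a sketch using a throwaway file (the path and contents are illustrative):

```shell
# Compute a SHA-256 locally, as you would for the real backup file
printf 'hello\n' > /tmp/demo_backup.sql
sha256sum /tmp/demo_backup.sql | awk '{print $1}'
# prints 5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03
```

Comparing this digest against the blob metadata catches silent corruption during transfer or at rest.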

## Testing with Azurite

### Setup Azurite Emulator

**Docker Compose:**
```yaml
services:
  azurite:
    image: mcr.microsoft.com/azure-storage/azurite:latest
    ports:
      - "10000:10000"
      - "10001:10001"
      - "10002:10002"
    command: azurite --blobHost 0.0.0.0 --loose
```

**Start:**
```bash
docker-compose -f docker-compose.azurite.yml up -d
```

### Default Azurite Credentials

```
Account Name: devstoreaccount1
Account Key:  Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==
Endpoint:     http://localhost:10000/devstoreaccount1
```

### Test Backup

```bash
# Backup to Azurite
dbbackup backup postgres \
  --host localhost \
  --database testdb \
  --output test.sql \
  --cloud "azure://test-backups/test.sql?endpoint=http://localhost:10000&account=devstoreaccount1&key=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
```

### Run Integration Tests

```bash
# Run the comprehensive test suite
./scripts/test_azure_storage.sh
```

Tests include:
- PostgreSQL and MySQL backups
- Upload/download operations
- Large file handling (300MB+)
- Verification and cleanup
- Restore operations

## Best Practices

### 1. Security

- **Never commit credentials** to version control
- Use **Azure Key Vault** for production keys
- Rotate account keys regularly
- Use **Shared Access Signatures (SAS)** for limited access
- Enable **Azure AD authentication** when possible

### 2. Performance

- Use **compression** for faster uploads: `--compression 6`
- Enable **parallelism** for cluster backups: `--parallelism 4`
- Choose an appropriate **Azure region** (close to the source)
- Use **Premium Storage** for high throughput

### 3. Cost Optimization

- Use the **Cool tier** for backups older than 30 days
- Use the **Archive tier** for long-term retention (>90 days)
- Enable **lifecycle management** for automatic transitions
- Monitor storage costs in Azure Cost Management

### 4. Reliability

- Test **restore procedures** regularly
- Use **retention policies**: `--keep 30`
- Enable **soft delete** in Azure (30-day recovery)
- Monitor backup success with Azure Monitor

### 5. Organization

- Use **consistent naming**: `{database}/{date}/{backup}.sql`
- Use **container prefixes**: `prod-backups`, `dev-backups`
- Tag backups with **metadata** (version, environment)
- Document restore procedures

## Troubleshooting

### Connection Issues

**Problem:** `failed to create Azure client`

**Solutions:**
- Verify the account name is correct
- Check the account key (copy it from the Azure Portal)
- Ensure the endpoint is accessible (firewall rules)
- For Azurite, confirm `http://localhost:10000` is running

### Authentication Errors

**Problem:** `authentication failed`

**Solutions:**
- Check for spaces or special characters in the key
- Verify the account key hasn't been rotated
- Try the connection string method
- Check Azure firewall rules (allow your IP)

### Upload Failures

**Problem:** `failed to upload blob`

**Solutions:**
- Check that the container exists (or use `&create=true`)
- Verify sufficient storage quota
- Check network connectivity
- Try smaller files first (to test the connection)

### Large File Issues

**Problem:** Upload timeout for large files

**Solutions:**
- dbbackup automatically uses block blob for files >256MB
- Increase compression: `--compression 9`
- Check network bandwidth
- Use Azure Premium Storage for better throughput

### List/Download Issues

**Problem:** `blob not found`

**Solutions:**
- Verify the blob name (check the Azure Portal)
- Check that the container name is correct
- Ensure the blob hasn't been moved or deleted
- Check whether the blob is in the Archive tier (requires rehydration)

### Performance Issues

**Problem:** Slow upload/download

**Solutions:**
- Use compression: `--compression 6`
- Choose a closer Azure region
- Check network bandwidth
- Use Azure Premium Storage
- Enable parallelism for multiple files

### Debugging

Enable debug mode:

```bash
dbbackup backup postgres \
  --cloud "azure://container/backup.sql?account=myaccount&key=KEY" \
  --debug
```

Check Azure logs:

```bash
# Azure CLI
az monitor activity-log list \
  --resource-group mygroup \
  --namespace Microsoft.Storage
```

## Additional Resources

- [Azure Blob Storage Documentation](https://docs.microsoft.com/azure/storage/blobs/)
- [Azurite Emulator](https://github.com/Azure/Azurite)
- [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/)
- [Azure CLI](https://docs.microsoft.com/cli/azure/storage)
- [dbbackup Cloud Storage Guide](CLOUD.md)

## Support

For issues specific to the Azure integration:

1. Check the [Troubleshooting](#troubleshooting) section
2. Run the integration tests: `./scripts/test_azure_storage.sh`
3. Enable debug mode: `--debug`
4. Check Azure Service Health
5. Open an issue on GitHub with debug logs

## See Also

- [Google Cloud Storage Guide](GCS.md)
- [AWS S3 Guide](CLOUD.md#aws-s3)
- [Main Cloud Storage Documentation](CLOUD.md)
809
CLOUD.md
Normal file
@@ -0,0 +1,809 @@
# Cloud Storage Guide for dbbackup

## Overview

dbbackup v2.0 includes comprehensive cloud storage integration, allowing you to back up directly to S3-compatible storage providers and restore from cloud URIs.

**Supported Providers:**
- AWS S3
- MinIO (self-hosted S3-compatible)
- Backblaze B2
- **Azure Blob Storage** (native support)
- **Google Cloud Storage** (native support)
- Any S3-compatible storage

**Key Features:**
- ✅ Direct backup to cloud with the `--cloud` URI flag
- ✅ Restore from cloud URIs
- ✅ Verify cloud backup integrity
- ✅ Apply retention policies to cloud storage
- ✅ Multipart upload for large files (>100MB)
- ✅ Progress tracking for uploads/downloads
- ✅ Automatic metadata synchronization
- ✅ Streaming transfers (memory efficient)

---

## Quick Start

### 1. Set Up Credentials

```bash
# For AWS S3
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_REGION="us-east-1"

# For MinIO
export AWS_ACCESS_KEY_ID="minioadmin"
export AWS_SECRET_ACCESS_KEY="minioadmin123"
export AWS_ENDPOINT_URL="http://localhost:9000"

# For Backblaze B2
export AWS_ACCESS_KEY_ID="your-b2-key-id"
export AWS_SECRET_ACCESS_KEY="your-b2-application-key"
export AWS_ENDPOINT_URL="https://s3.us-west-002.backblazeb2.com"
```

### 2. Backup with a Cloud URI

```bash
# Backup to S3
dbbackup backup single mydb --cloud s3://my-bucket/backups/

# Backup to MinIO
dbbackup backup single mydb --cloud minio://my-bucket/backups/

# Backup to Backblaze B2
dbbackup backup single mydb --cloud b2://my-bucket/backups/
```

### 3. Restore from Cloud

```bash
# Restore from a cloud URI
dbbackup restore single s3://my-bucket/backups/mydb_20260115_120000.dump --confirm

# Restore to a different database
dbbackup restore single s3://my-bucket/backups/mydb.dump \
  --target mydb_restored \
  --confirm
```

---

## URI Syntax

Cloud URIs follow this format:

```
<provider>://<bucket>/<path>/<filename>
```

**Supported Providers:**
- `s3://` - AWS S3 or S3-compatible storage
- `minio://` - MinIO (auto-enables path-style addressing)
- `b2://` - Backblaze B2
- `gs://` or `gcs://` - Google Cloud Storage (native support)
- `azure://` or `azblob://` - Azure Blob Storage (native support)

**Examples:**
```bash
s3://production-backups/databases/postgres/
minio://local-backups/dev/mydb/
b2://offsite-backups/daily/
gs://gcp-backups/prod/
```

---

## Configuration Methods

### Method 1: Cloud URIs (Recommended)

```bash
dbbackup backup single mydb --cloud s3://my-bucket/backups/
```

### Method 2: Individual Flags

```bash
dbbackup backup single mydb \
  --cloud-auto-upload \
  --cloud-provider s3 \
  --cloud-bucket my-bucket \
  --cloud-prefix backups/
```

### Method 3: Environment Variables

```bash
export CLOUD_ENABLED=true
export CLOUD_AUTO_UPLOAD=true
export CLOUD_PROVIDER=s3
export CLOUD_BUCKET=my-bucket
export CLOUD_PREFIX=backups/
export CLOUD_REGION=us-east-1

dbbackup backup single mydb
```

### Method 4: Config File

```toml
# ~/.dbbackup.conf
[cloud]
enabled = true
auto_upload = true
provider = "s3"
bucket = "my-bucket"
prefix = "backups/"
region = "us-east-1"
```

---

## Commands

### Cloud Upload

Upload existing backup files to cloud storage:

```bash
# Upload a single file
dbbackup cloud upload /backups/mydb.dump \
  --cloud-provider s3 \
  --cloud-bucket my-bucket

# Upload with a custom endpoint (MinIO)
dbbackup cloud upload /backups/mydb.dump \
  --cloud-provider minio \
  --cloud-bucket local-backups \
  --cloud-endpoint http://localhost:9000

# Upload multiple files
dbbackup cloud upload /backups/*.dump \
  --cloud-provider s3 \
  --cloud-bucket my-bucket \
  --verbose
```

### Cloud Download

Download backups from cloud storage:

```bash
# Download to the current directory
dbbackup cloud download mydb.dump . \
  --cloud-provider s3 \
  --cloud-bucket my-bucket

# Download to a specific directory
dbbackup cloud download backups/mydb.dump /restore/ \
  --cloud-provider s3 \
  --cloud-bucket my-bucket \
  --verbose
```

### Cloud List

List backups in cloud storage:

```bash
# List all backups
dbbackup cloud list \
  --cloud-provider s3 \
  --cloud-bucket my-bucket

# List with a prefix filter
dbbackup cloud list \
  --cloud-provider s3 \
  --cloud-bucket my-bucket \
  --cloud-prefix postgres/

# Verbose output with details
dbbackup cloud list \
  --cloud-provider s3 \
  --cloud-bucket my-bucket \
  --verbose
```

### Cloud Delete

Delete backups from cloud storage:

```bash
# Delete a specific backup (with confirmation prompt)
dbbackup cloud delete mydb_old.dump \
  --cloud-provider s3 \
  --cloud-bucket my-bucket

# Delete without a confirmation prompt
dbbackup cloud delete mydb_old.dump \
  --cloud-provider s3 \
  --cloud-bucket my-bucket \
  --confirm
```

### Backup with Auto-Upload

```bash
# Backup and automatically upload
dbbackup backup single mydb --cloud s3://my-bucket/backups/

# With individual flags
dbbackup backup single mydb \
  --cloud-auto-upload \
  --cloud-provider s3 \
  --cloud-bucket my-bucket \
  --cloud-prefix backups/
```

### Restore from Cloud

```bash
# Restore from a cloud URI (auto-download)
dbbackup restore single s3://my-bucket/backups/mydb.dump --confirm

# Restore to a different database
dbbackup restore single s3://my-bucket/backups/mydb.dump \
  --target mydb_restored \
  --confirm

# Restore with database creation
dbbackup restore single s3://my-bucket/backups/mydb.dump \
  --create \
  --confirm
```

### Verify Cloud Backups

```bash
# Verify a single cloud backup
dbbackup verify-backup s3://my-bucket/backups/mydb.dump

# Quick verification (size check only)
dbbackup verify-backup s3://my-bucket/backups/mydb.dump --quick

# Verbose output
dbbackup verify-backup s3://my-bucket/backups/mydb.dump --verbose
```

### Cloud Cleanup

Apply retention policies to cloud storage:

```bash
# Clean up old backups (dry run)
dbbackup cleanup s3://my-bucket/backups/ \
  --retention-days 30 \
  --min-backups 5 \
  --dry-run

# Actual cleanup
dbbackup cleanup s3://my-bucket/backups/ \
  --retention-days 30 \
  --min-backups 5

# Pattern-based cleanup
dbbackup cleanup s3://my-bucket/backups/ \
  --retention-days 7 \
  --min-backups 3 \
  --pattern "mydb_*.dump"
```
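`--retention-days` and `--min-backups` interact: age alone does not doom a backup if deleting it would drop the count below the minimum. A minimal sketch of that rule (the function name and values are ours, not part of the tool):

```shell
# Delete only if older than the retention window AND enough backups remain
should_delete() {
  local age_days=$1 retention=$2 remaining=$3 min_backups=$4
  if [ "$age_days" -gt "$retention" ] && [ "$remaining" -gt "$min_backups" ]; then
    echo "delete"
  else
    echo "keep"
  fi
}

should_delete 45 30 10 5   # old backup, plenty left: prints delete
should_delete 45 30 5 5    # old, but already at the minimum: prints keep
```

Running with `--dry-run` first shows which side of this rule each backup falls on before anything is removed.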
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Provider-Specific Setup
|
||||||
|
|
||||||
|
### AWS S3
|
||||||
|
|
||||||
|
**Prerequisites:**
|
||||||
|
- AWS account
|
||||||
|
- S3 bucket created
|
||||||
|
- IAM user with S3 permissions
|
||||||
|
|
||||||
|
**IAM Policy:**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"Version": "2012-10-17",
|
||||||
|
"Statement": [
|
||||||
|
{
|
||||||
|
"Effect": "Allow",
|
||||||
|
"Action": [
|
||||||
|
"s3:PutObject",
|
||||||
|
"s3:GetObject",
|
||||||
|
"s3:DeleteObject",
|
||||||
|
"s3:ListBucket"
|
||||||
|
],
|
||||||
|
"Resource": [
|
||||||
|
"arn:aws:s3:::my-bucket/*",
|
||||||
|
"arn:aws:s3:::my-bucket"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Configuration:**
|
||||||
|
```bash
|
||||||
|
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
|
||||||
|
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
|
||||||
|
export AWS_REGION="us-east-1"
|
||||||
|
|
||||||
|
dbbackup backup single mydb --cloud s3://my-bucket/backups/
|
||||||
|
```
|
||||||
|
|
||||||
|
### MinIO (Self-Hosted)
|
||||||
|
|
||||||
|
**Setup with Docker:**
|
||||||
|
```bash
|
||||||
|
docker run -d \
|
||||||
|
-p 9000:9000 \
|
||||||
|
-p 9001:9001 \
|
||||||
|
-e "MINIO_ROOT_USER=minioadmin" \
|
||||||
|
-e "MINIO_ROOT_PASSWORD=minioadmin123" \
|
||||||
|
--name minio \
|
||||||
|
minio/minio server /data --console-address ":9001"
|
||||||
|
|
||||||
|
# Create bucket
|
||||||
|
docker exec minio mc alias set local http://localhost:9000 minioadmin minioadmin123
|
||||||
|
docker exec minio mc mb local/backups
|
||||||
|
```
|
||||||
|
|
||||||
|
**Configuration:**
|
||||||
|
```bash
|
||||||
|
export AWS_ACCESS_KEY_ID="minioadmin"
|
||||||
|
export AWS_SECRET_ACCESS_KEY="minioadmin123"
|
||||||
|
export AWS_ENDPOINT_URL="http://localhost:9000"
|
||||||
|
|
||||||
|
dbbackup backup single mydb --cloud minio://backups/db/
|
||||||
|
```
|
||||||
|
|
||||||
|
**Or use docker-compose:**
|
||||||
|
```bash
|
||||||
|
docker-compose -f docker-compose.minio.yml up -d
|
||||||
|
```
|
||||||
|
|
||||||
|
### Backblaze B2
|
||||||
|
|
||||||
|
**Prerequisites:**
|
||||||
|
- Backblaze account
|
||||||
|
- B2 bucket created
|
||||||
|
- Application key generated
|
||||||
|
|
||||||
|
**Configuration:**
|
||||||
|
```bash
|
||||||
|
export AWS_ACCESS_KEY_ID="<your-b2-key-id>"
|
||||||
|
export AWS_SECRET_ACCESS_KEY="<your-b2-application-key>"
|
||||||
|
export AWS_ENDPOINT_URL="https://s3.us-west-002.backblazeb2.com"
|
||||||
|
export AWS_REGION="us-west-002"
|
||||||
|
|
||||||
|
dbbackup backup single mydb --cloud b2://my-bucket/backups/
|
||||||
|
```
|
||||||
|
|
||||||
|
### Azure Blob Storage

**Native Azure support with comprehensive features.**

See **[AZURE.md](AZURE.md)** for complete documentation.

**Quick Start:**

```bash
# Using account name and key
dbbackup backup postgres \
  --host localhost \
  --database mydb \
  --cloud "azure://container/backups/db.sql?account=myaccount&key=ACCOUNT_KEY"

# With the Azurite emulator for testing
dbbackup backup postgres \
  --host localhost \
  --database mydb \
  --cloud "azure://test-backups/db.sql?endpoint=http://localhost:10000"
```

**Features:**
- Native Azure SDK integration
- Block blob upload for large files (>256MB)
- Azurite emulator support for local testing
- SHA-256 integrity verification
- Comprehensive test suite

### Google Cloud Storage

**Native GCS support with full features.**

See **[GCS.md](GCS.md)** for complete documentation.

**Quick Start:**

```bash
# Using Application Default Credentials
dbbackup backup postgres \
  --host localhost \
  --database mydb \
  --cloud "gs://mybucket/backups/db.sql"

# With a service account
dbbackup backup postgres \
  --host localhost \
  --database mydb \
  --cloud "gs://mybucket/backups/db.sql?credentials=/path/to/key.json"

# With the fake-gcs-server emulator for testing
dbbackup backup postgres \
  --host localhost \
  --database mydb \
  --cloud "gs://test-backups/db.sql?endpoint=http://localhost:4443/storage/v1"
```

**Features:**
- Native GCS SDK integration
- Chunked upload for large files (16MB chunks)
- fake-gcs-server emulator support
- Application Default Credentials support
- Workload Identity for GKE

---

## Features

### Multipart Upload

Files larger than 100MB automatically use multipart upload, which provides:
- Faster transfers via parallel parts
- Resume capability on failure
- Better reliability for large files

**Configuration:**
- Part size: 10MB
- Concurrency: 10 parallel parts
- Enabled automatically based on file size
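
As a back-of-the-envelope illustration (not dbbackup's internal code; the file size is hypothetical), the 10MB part size determines how many parts a given backup is split into:

```shell
# Illustrative arithmetic only: how a 10MB part size maps to part count.
FILE_SIZE=$((250 * 1024 * 1024))   # a hypothetical 250MB backup file
PART_SIZE=$((10 * 1024 * 1024))    # 10MB parts, as configured above
PARTS=$(( (FILE_SIZE + PART_SIZE - 1) / PART_SIZE ))  # ceiling division
echo "parts: $PARTS"
```

With 10 parts in flight at a time, a 250MB file would complete in roughly three waves of parallel uploads.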

### Progress Tracking

Real-time progress for uploads and downloads:

```bash
Uploading backup to cloud...
Progress: 10%
Progress: 20%
Progress: 30%
...
Upload completed: /backups/mydb.dump (1.2 GB)
```

### Metadata Synchronization

A `.meta.json` file is automatically uploaded with each backup, containing:
- SHA-256 checksum
- Database name and type
- Backup timestamp
- File size
- Compression info
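
A sketch of what such a metadata file might contain (the field names and values here are illustrative assumptions; the exact schema is defined by dbbackup itself):

```json
{
  "database": "mydb",
  "db_type": "postgres",
  "timestamp": "2024-01-15T02:00:00Z",
  "size_bytes": 1288490188,
  "compression": "gzip",
  "sha256": "abc123..."
}
```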

### Automatic Verification

Downloads from cloud storage include automatic checksum verification:

```bash
Downloading backup from cloud...
Download completed
Verifying checksum...
Checksum verified successfully: sha256=abc123...
```
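
The same check can be reproduced by hand with standard coreutils; a minimal sketch (the file names and the `.sha256` sidecar layout are illustrative assumptions, not dbbackup's on-disk format):

```shell
# Create a demo file, record its checksum, then verify it the way a
# post-download integrity check would.
echo "demo backup data" > /tmp/mydb.dump
sha256sum /tmp/mydb.dump | awk '{print $1}' > /tmp/mydb.dump.sha256

EXPECTED=$(cat /tmp/mydb.dump.sha256)
ACTUAL=$(sha256sum /tmp/mydb.dump | awk '{print $1}')
if [ "$EXPECTED" = "$ACTUAL" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH" >&2
fi
```

---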

## Testing

### Local Testing with MinIO

**1. Start MinIO:**
```bash
docker-compose -f docker-compose.minio.yml up -d
```

**2. Run Integration Tests:**
```bash
./scripts/test_cloud_storage.sh
```

**3. Manual Testing:**
```bash
# Set credentials
export AWS_ACCESS_KEY_ID=minioadmin
export AWS_SECRET_ACCESS_KEY=minioadmin123
export AWS_ENDPOINT_URL=http://localhost:9000

# Test backup
dbbackup backup single mydb --cloud minio://test-backups/test/

# Test restore
dbbackup restore single minio://test-backups/test/mydb.dump --confirm

# Test verify
dbbackup verify-backup minio://test-backups/test/mydb.dump

# Test cleanup
dbbackup cleanup minio://test-backups/test/ --retention-days 7 --dry-run
```

**4. Access the MinIO Console:**
- URL: http://localhost:9001
- Username: `minioadmin`
- Password: `minioadmin123`

---

## Best Practices

### Security

1. **Never commit credentials:**
   ```bash
   # Use environment variables or config files
   export AWS_ACCESS_KEY_ID="..."
   ```

2. **Use IAM roles when possible:**
   ```bash
   # On EC2/ECS, credentials are picked up automatically
   dbbackup backup single mydb --cloud s3://bucket/
   ```

3. **Restrict bucket permissions:**
   - Minimum required: GetObject, PutObject, DeleteObject, ListBucket
   - Use bucket policies to limit access

4. **Enable encryption:**
   - S3: Server-side encryption is enabled by default
   - MinIO: Configure encryption at rest

### Performance

1. **Use multipart upload for large backups:**
   - Automatic for files >100MB
   - Configure concurrency based on available bandwidth

2. **Choose nearby regions:**
   ```bash
   --cloud-region us-west-2  # Closest to your servers
   ```

3. **Use compression:**
   ```bash
   --compression gzip  # Reduces upload size
   ```

### Reliability

1. **Test restores regularly:**
   ```bash
   # Monthly restore test
   dbbackup restore single s3://bucket/latest.dump --target test_restore
   ```

2. **Verify backups:**
   ```bash
   # Daily verification
   dbbackup verify-backup s3://bucket/backups/*.dump
   ```

3. **Monitor retention:**
   ```bash
   # Weekly cleanup check
   dbbackup cleanup s3://bucket/ --retention-days 30 --dry-run
   ```

### Cost Optimization

1. **Use lifecycle policies:**
   - S3: Transition old backups to Glacier
   - Configure in the AWS Console or via bucket policy

2. **Clean up old backups:**
   ```bash
   dbbackup cleanup s3://bucket/ --retention-days 30 --min-backups 10
   ```

3. **Choose an appropriate storage class:**
   - Standard: Frequent access
   - Infrequent Access: Monthly restores
   - Glacier: Long-term archive

---

## Troubleshooting

### Connection Issues

**Problem:** Cannot connect to S3/MinIO

```bash
Error: failed to create cloud backend: failed to load AWS config
```

**Solution:**
1. Check credentials:
   ```bash
   echo $AWS_ACCESS_KEY_ID
   echo $AWS_SECRET_ACCESS_KEY
   ```

2. Test connectivity:
   ```bash
   curl $AWS_ENDPOINT_URL
   ```

3. Verify the endpoint URL for MinIO/B2

### Permission Errors

**Problem:** Access denied

```bash
Error: failed to upload to S3: AccessDenied
```

**Solution:**
1. Check that the IAM policy includes the required permissions
2. Verify the bucket name is correct
3. Check that the bucket policy allows your IAM user

### Upload Failures

**Problem:** Large file upload fails

```bash
Error: multipart upload failed: connection timeout
```

**Solution:**
1. Check network stability
2. Retry; multipart uploads resume automatically
3. Increase the timeout in config
4. Check that the firewall allows outbound HTTPS

### Verification Failures

**Problem:** Checksum mismatch

```bash
Error: checksum mismatch: expected abc123, got def456
```

**Solution:**
1. Re-download the backup
2. Check whether the file was corrupted during upload
3. Verify the original backup's integrity locally
4. Re-upload if necessary

---

## Examples

### Full Backup Workflow

```bash
#!/bin/bash
# Daily backup to S3 with retention

# Backup all databases
for db in db1 db2 db3; do
    dbbackup backup single $db \
        --cloud s3://production-backups/daily/$db/ \
        --compression gzip
done

# Clean up old backups (keep 30 days, minimum 10 backups)
dbbackup cleanup s3://production-backups/daily/ \
    --retention-days 30 \
    --min-backups 10

# Verify today's backups
dbbackup verify-backup s3://production-backups/daily/*/$(date +%Y%m%d)*.dump
```

### Disaster Recovery

```bash
#!/bin/bash
# Restore from cloud backup

# List available backups
dbbackup cloud list \
    --cloud-provider s3 \
    --cloud-bucket disaster-recovery \
    --verbose

# Restore the latest backup
LATEST=$(dbbackup cloud list \
    --cloud-provider s3 \
    --cloud-bucket disaster-recovery | tail -1)

dbbackup restore single "s3://disaster-recovery/$LATEST" \
    --target restored_db \
    --create \
    --confirm
```

### Multi-Cloud Strategy

```bash
#!/bin/bash
# Backup to both AWS S3 and Backblaze B2

# Backup to S3
dbbackup backup single production_db \
    --cloud s3://aws-backups/prod/ \
    --output-dir /tmp/backups

# Also upload to B2
BACKUP_FILE=$(ls -t /tmp/backups/*.dump | head -1)
dbbackup cloud upload "$BACKUP_FILE" \
    --cloud-provider b2 \
    --cloud-bucket b2-offsite-backups \
    --cloud-endpoint https://s3.us-west-002.backblazeb2.com

# Verify both locations
dbbackup verify-backup s3://aws-backups/prod/$(basename $BACKUP_FILE)
dbbackup verify-backup b2://b2-offsite-backups/$(basename $BACKUP_FILE)
```

---

## FAQ

**Q: Can I use dbbackup with my existing S3 buckets?**
A: Yes! Just specify your bucket name and credentials.

**Q: Do I need to keep local backups?**
A: No, use the `--cloud` flag to upload directly without keeping local copies.

**Q: What happens if an upload fails?**
A: The backup still succeeds locally. The upload failure is logged but does not fail the backup.

**Q: Can I restore without downloading?**
A: No. Backups are downloaded to a temp directory, restored, and then cleaned up.

**Q: How much does cloud storage cost?**
A: It varies by provider:
- AWS S3: ~$0.023/GB/month + transfer
- Azure Blob Storage: ~$0.018/GB/month (Hot tier)
- Google Cloud Storage: ~$0.020/GB/month (Standard)
- Backblaze B2: ~$0.005/GB/month + transfer
- MinIO: Self-hosted, hardware costs only

**Q: Can I use multiple cloud providers?**
A: Yes! Use different URIs or upload to multiple destinations.

**Q: Is multipart upload automatic?**
A: Yes, it is used automatically for files >100MB.

**Q: Can I use S3 Glacier?**
A: Yes, but restoring requires thawing the object first. Use lifecycle policies for automatic archival.

---

## Related Documentation

- [README.md](README.md) - Main documentation
- [AZURE.md](AZURE.md) - **Azure Blob Storage guide** (comprehensive)
- [GCS.md](GCS.md) - **Google Cloud Storage guide** (comprehensive)
- [ROADMAP.md](ROADMAP.md) - Feature roadmap
- [docker-compose.minio.yml](docker-compose.minio.yml) - MinIO test setup
- [docker-compose.azurite.yml](docker-compose.azurite.yml) - Azure Azurite test setup
- [docker-compose.gcs.yml](docker-compose.gcs.yml) - GCS fake-gcs-server test setup
- [scripts/test_cloud_storage.sh](scripts/test_cloud_storage.sh) - S3 integration tests
- [scripts/test_azure_storage.sh](scripts/test_azure_storage.sh) - Azure integration tests
- [scripts/test_gcs_storage.sh](scripts/test_gcs_storage.sh) - GCS integration tests

---

## Support

For issues or questions:
- GitHub Issues: [Create an issue](https://github.com/yourusername/dbbackup/issues)
- Documentation: Check README.md and inline help
- Examples: See `scripts/test_cloud_storage.sh`

250
DOCKER.md
Normal file
# Docker Usage Guide

## Quick Start

### Build Image

```bash
docker build -t dbbackup:latest .
```

### Run Container

**PostgreSQL Backup:**
```bash
docker run --rm \
  -v $(pwd)/backups:/backups \
  -e PGHOST=your-postgres-host \
  -e PGUSER=postgres \
  -e PGPASSWORD=secret \
  dbbackup:latest backup single mydb
```

**MySQL Backup:**
```bash
docker run --rm \
  -v $(pwd)/backups:/backups \
  -e MYSQL_HOST=your-mysql-host \
  -e MYSQL_USER=root \
  -e MYSQL_PWD=secret \
  dbbackup:latest backup single mydb --db-type mysql
```

**Interactive Mode:**
```bash
docker run --rm -it \
  -v $(pwd)/backups:/backups \
  -e PGHOST=your-postgres-host \
  -e PGUSER=postgres \
  -e PGPASSWORD=secret \
  dbbackup:latest interactive
```

## Docker Compose

### Start Test Environment

```bash
# Start test databases
docker-compose up -d postgres mysql

# Wait for databases to be ready
sleep 10

# Run backup
docker-compose run --rm postgres-backup
```

### Interactive Mode

```bash
docker-compose run --rm dbbackup-interactive
```

### Scheduled Backups with Cron

Add a crontab entry (e.g. via `crontab -e`):
```bash
# Daily PostgreSQL backup at 2 AM
0 2 * * * docker run --rm -v /backups:/backups -e PGHOST=postgres -e PGUSER=postgres -e PGPASSWORD=secret dbbackup:latest backup single production_db
```

## Environment Variables

**PostgreSQL:**
- `PGHOST` - Database host
- `PGPORT` - Database port (default: 5432)
- `PGUSER` - Database user
- `PGPASSWORD` - Database password
- `PGDATABASE` - Database name

**MySQL/MariaDB:**
- `MYSQL_HOST` - Database host
- `MYSQL_PORT` - Database port (default: 3306)
- `MYSQL_USER` - Database user
- `MYSQL_PWD` - Database password
- `MYSQL_DATABASE` - Database name

**General:**
- `BACKUP_DIR` - Backup directory (default: /backups)
- `COMPRESS_LEVEL` - Compression level 0-9 (default: 6)

## Volume Mounts

```bash
# Mount backup storage and the (read-only) config file
docker run --rm \
  -v /host/backups:/backups \
  -v /host/config/.dbbackup.conf:/home/dbbackup/.dbbackup.conf:ro \
  dbbackup:latest backup single mydb
```

## Docker Hub

Pull the pre-built image (when published):
```bash
docker pull uuxo/dbbackup:latest
docker pull uuxo/dbbackup:1.0
```

## Kubernetes Deployment

**CronJob Example:**
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: postgres-backup
spec:
  schedule: "0 2 * * *"  # Daily at 2 AM
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: dbbackup
              image: dbbackup:latest
              args: ["backup", "single", "production_db"]
              env:
                - name: PGHOST
                  value: "postgres.default.svc.cluster.local"
                - name: PGUSER
                  value: "postgres"
                - name: PGPASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: postgres-secret
                      key: password
              volumeMounts:
                - name: backups
                  mountPath: /backups
          volumes:
            - name: backups
              persistentVolumeClaim:
                claimName: backup-storage
          restartPolicy: OnFailure
```

## Docker Secrets

**Using Docker Secrets:**
```bash
# Create the secret
echo "mypassword" | docker secret create db_password -

# Use it in a stack
docker stack deploy -c docker-stack.yml dbbackup
```

**docker-stack.yml:**
```yaml
version: '3.8'
services:
  backup:
    image: dbbackup:latest
    secrets:
      - db_password
    environment:
      - PGHOST=postgres
      - PGUSER=postgres
      - PGPASSWORD_FILE=/run/secrets/db_password
    command: backup single mydb
    volumes:
      - backups:/backups

secrets:
  db_password:
    external: true

volumes:
  backups:
```
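
`PGPASSWORD_FILE` follows Docker's `*_FILE` secret convention; note that libpq itself does not read this variable, so if dbbackup does not resolve it directly, a small wrapper can bridge the gap. A minimal sketch (the wrapper and demo paths are assumptions, not documented dbbackup behavior):

```shell
# Resolve the *_FILE convention: if PGPASSWORD_FILE points at a readable
# file (e.g. /run/secrets/db_password), export its contents as PGPASSWORD.
PGPASSWORD_FILE=/tmp/demo_db_password   # demo path; in a stack this is /run/secrets/db_password
echo "s3cret" > "$PGPASSWORD_FILE"

if [ -n "${PGPASSWORD_FILE:-}" ] && [ -r "$PGPASSWORD_FILE" ]; then
  export PGPASSWORD="$(cat "$PGPASSWORD_FILE")"
fi
echo "password resolved: ${PGPASSWORD:+yes}"
```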

## Image Size

**Multi-stage build results:**
- Builder stage: ~500MB (Go + dependencies)
- Final image: ~100MB (Alpine + clients)
- Binary only: ~15MB

## Security

**Non-root user:**
- Runs as UID 1000 (the dbbackup user)
- No privileged operations needed
- Read-only config mount recommended

**Network:**
```bash
# Use a custom network
docker network create dbnet

docker run --rm \
  --network dbnet \
  -v $(pwd)/backups:/backups \
  dbbackup:latest backup single mydb
```

## Troubleshooting

**Check logs:**
```bash
docker logs dbbackup-postgres
```

**Debug mode:**
```bash
docker run --rm -it \
  -v $(pwd)/backups:/backups \
  dbbackup:latest backup single mydb --debug
```

**Shell access:**
```bash
docker run --rm -it --entrypoint /bin/sh dbbackup:latest
```

## Building for Multiple Platforms

```bash
# Enable buildx
docker buildx create --use

# Build multi-arch images
docker buildx build \
  --platform linux/amd64,linux/arm64,linux/arm/v7 \
  -t uuxo/dbbackup:latest \
  --push .
```

## Registry Push

```bash
# Tag for the registry
docker tag dbbackup:latest git.uuxo.net/uuxo/dbbackup:latest
docker tag dbbackup:latest git.uuxo.net/uuxo/dbbackup:1.0

# Push to the private registry
docker push git.uuxo.net/uuxo/dbbackup:latest
docker push git.uuxo.net/uuxo/dbbackup:1.0
```

58
Dockerfile
Normal file
# Multi-stage build for minimal image size
FROM golang:1.24-alpine AS builder

# Install build dependencies
RUN apk add --no-cache git make

WORKDIR /build

# Copy go mod files
COPY go.mod go.sum ./
RUN go mod download

# Copy source code
COPY . .

# Build binary
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags="-w -s" -o dbbackup .

# Final stage - minimal runtime image
FROM alpine:3.19

# Install database client tools
RUN apk add --no-cache \
    postgresql-client \
    mysql-client \
    mariadb-client \
    pigz \
    pv \
    ca-certificates \
    tzdata

# Create non-root user
RUN addgroup -g 1000 dbbackup && \
    adduser -D -u 1000 -G dbbackup dbbackup

# Copy binary from builder
COPY --from=builder /build/dbbackup /usr/local/bin/dbbackup
RUN chmod +x /usr/local/bin/dbbackup

# Create backup directory
RUN mkdir -p /backups && chown dbbackup:dbbackup /backups

# Set working directory
WORKDIR /backups

# Switch to non-root user
USER dbbackup

# Set entrypoint
ENTRYPOINT ["/usr/local/bin/dbbackup"]

# Default command shows help
CMD ["--help"]

# Labels
LABEL maintainer="UUXO"
LABEL version="1.0"
LABEL description="Professional database backup tool for PostgreSQL, MySQL, and MariaDB"

664
GCS.md
Normal file
# Google Cloud Storage Integration

This guide covers using **Google Cloud Storage (GCS)** with `dbbackup` for secure, scalable cloud backup storage.

## Table of Contents

- [Quick Start](#quick-start)
- [URI Syntax](#uri-syntax)
- [Authentication](#authentication)
- [Configuration](#configuration)
- [Usage Examples](#usage-examples)
- [Advanced Features](#advanced-features)
- [Testing with fake-gcs-server](#testing-with-fake-gcs-server)
- [Best Practices](#best-practices)
- [Troubleshooting](#troubleshooting)

## Quick Start

### 1. GCP Setup

1. Create a GCS bucket in the Google Cloud Console
2. Set up authentication (choose one):
   - **Service Account**: Create and download a JSON key file
   - **Application Default Credentials**: Use the gcloud CLI
   - **Workload Identity**: For GKE clusters

### 2. Basic Backup

```bash
# Backup PostgreSQL to GCS (using ADC)
dbbackup backup postgres \
  --host localhost \
  --database mydb \
  --output backup.sql \
  --cloud "gs://mybucket/backups/db.sql"
```

### 3. Restore from GCS

```bash
# Restore from a GCS backup
dbbackup restore postgres \
  --source "gs://mybucket/backups/db.sql" \
  --host localhost \
  --database mydb_restored
```

## URI Syntax

### Basic Format

```
gs://bucket/path/to/backup.sql
gcs://bucket/path/to/backup.sql
```

Both the `gs://` and `gcs://` prefixes are supported.

### URI Components

| Component | Required | Description | Example |
|-----------|----------|-------------|---------|
| `bucket` | Yes | GCS bucket name | `mybucket` |
| `path` | Yes | Object path within the bucket | `backups/db.sql` |
| `credentials` | No | Path to service account JSON | `/path/to/key.json` |
| `project` | No | GCP project ID | `my-project-id` |
| `endpoint` | No | Custom endpoint (emulator) | `http://localhost:4443` |

### URI Examples

**Production GCS (Application Default Credentials):**
```
gs://prod-backups/postgres/db.sql
```

**With a Service Account:**
```
gs://prod-backups/postgres/db.sql?credentials=/path/to/service-account.json
```

**With a Project ID:**
```
gs://prod-backups/postgres/db.sql?project=my-project-id
```

**fake-gcs-server Emulator:**
```
gs://test-backups/postgres/db.sql?endpoint=http://localhost:4443/storage/v1
```

**With a Path Prefix:**
```
gs://backups/production/postgres/2024/db.sql
```
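
For scripting around these URIs, plain shell parameter expansion is enough to split out the bucket and object path; a minimal sketch (this is not dbbackup's internal parser):

```shell
# Split a gs:// URI into bucket and object path, dropping any query string.
URI="gs://prod-backups/postgres/db.sql?credentials=/path/to/key.json"
REST="${URI#gs://}"     # strip the scheme
REST="${REST%%\?*}"     # drop query parameters
BUCKET="${REST%%/*}"    # first path segment is the bucket
OBJECT="${REST#*/}"     # everything after it is the object path
echo "bucket=$BUCKET object=$OBJECT"
```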

## Authentication

### Method 1: Application Default Credentials (Recommended)

Use the gcloud CLI to set up ADC:

```bash
# Log in with your Google account
gcloud auth application-default login

# Or activate a service account for server environments
gcloud auth activate-service-account --key-file=/path/to/key.json

# Use the simplified URI (credentials come from the environment)
dbbackup backup postgres --cloud "gs://mybucket/backups/backup.sql"
```

### Method 2: Service Account JSON

Download a service account key from the GCP Console:

1. Go to **IAM & Admin** → **Service Accounts**
2. Create or select a service account
3. Click **Keys** → **Add Key** → **Create new key** → **JSON**
4. Download the JSON file

**Use it in the URI:**
```bash
dbbackup backup postgres \
  --cloud "gs://mybucket/backup.sql?credentials=/path/to/service-account.json"
```

**Or via the environment:**
```bash
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
dbbackup backup postgres --cloud "gs://mybucket/backup.sql"
```

### Method 3: Workload Identity (GKE)

For Kubernetes workloads:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dbbackup-sa
  annotations:
    iam.gke.io/gcp-service-account: dbbackup@project.iam.gserviceaccount.com
```

Then use ADC in your pod:

```bash
dbbackup backup postgres --cloud "gs://mybucket/backup.sql"
```

### Required IAM Permissions

The service account needs these roles:

- **Storage Object Creator**: Upload backups
- **Storage Object Viewer**: List and download backups
- **Storage Object Admin**: Delete backups (for cleanup)

Or use the predefined **Storage Admin** role:

```bash
# Grant permissions
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:dbbackup@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/storage.objectAdmin"
```

## Configuration

### Bucket Setup

Create a bucket before first use:

```bash
# gcloud CLI
gsutil mb -p PROJECT_ID -c STANDARD -l us-central1 gs://mybucket/

# Or let dbbackup create it (requires permissions)
dbbackup cloud upload file.sql "gs://mybucket/file.sql?create=true&project=PROJECT_ID"
```

### Storage Classes

GCS offers multiple storage classes:

- **Standard**: Frequent access (default)
- **Nearline**: Accessed less than once a month (lower cost)
- **Coldline**: Accessed less than once a quarter (very low cost)
- **Archive**: Long-term retention (lowest cost)

Set the class when creating the bucket:

```bash
gsutil mb -c NEARLINE gs://mybucket/
```

### Lifecycle Management

Configure automatic transitions and deletion:

```json
{
  "lifecycle": {
    "rule": [
      {
        "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
        "condition": {"age": 30, "matchesPrefix": ["backups/"]}
      },
      {
        "action": {"type": "SetStorageClass", "storageClass": "ARCHIVE"},
        "condition": {"age": 90, "matchesPrefix": ["backups/"]}
      },
      {
        "action": {"type": "Delete"},
        "condition": {"age": 365, "matchesPrefix": ["backups/"]}
      }
    ]
  }
}
```

Apply the lifecycle configuration:

```bash
gsutil lifecycle set lifecycle.json gs://mybucket/
```

### Regional Configuration

Choose the bucket location for better performance:

```bash
# US regions
gsutil mb -l us-central1 gs://mybucket/
gsutil mb -l us-east1 gs://mybucket/

# EU regions
gsutil mb -l europe-west1 gs://mybucket/

# Multi-region
gsutil mb -l us gs://mybucket/
gsutil mb -l eu gs://mybucket/
```

## Usage Examples

### Backup with Auto-Upload

```bash
# PostgreSQL backup with automatic GCS upload
dbbackup backup postgres \
  --host localhost \
  --database production_db \
  --output /backups/db.sql \
  --cloud "gs://prod-backups/postgres/$(date +%Y%m%d_%H%M%S).sql" \
  --compression 6
```

### Backup All Databases

```bash
# Backup entire PostgreSQL cluster to GCS
dbbackup backup postgres \
  --host localhost \
  --all-databases \
  --output-dir /backups \
  --cloud "gs://prod-backups/postgres/cluster/"
```

### Verify Backup

```bash
# Verify backup integrity
dbbackup verify "gs://prod-backups/postgres/backup.sql"
```

### List Backups

```bash
# List all backups in bucket
dbbackup cloud list "gs://prod-backups/postgres/"

# List with a prefix
dbbackup cloud list "gs://prod-backups/postgres/2024/"

# Or use gsutil
gsutil ls gs://prod-backups/postgres/
```

### Download Backup

```bash
# Download from GCS to local
dbbackup cloud download \
  "gs://prod-backups/postgres/backup.sql" \
  /local/path/backup.sql
```

### Delete Old Backups

```bash
# Manual delete
dbbackup cloud delete "gs://prod-backups/postgres/old_backup.sql"

# Automatic cleanup (keep last 7 backups)
dbbackup cleanup "gs://prod-backups/postgres/" --keep 7
```
### Scheduled Backups

```bash
#!/bin/bash
# GCS backup script (run via cron)

DATE=$(date +%Y%m%d_%H%M%S)
GCS_URI="gs://prod-backups/postgres/${DATE}.sql"

dbbackup backup postgres \
  --host localhost \
  --database production_db \
  --output /tmp/backup.sql \
  --cloud "${GCS_URI}" \
  --compression 9

# Cleanup old backups
dbbackup cleanup "gs://prod-backups/postgres/" --keep 30
```

**Crontab:**
```cron
# Daily at 2 AM
0 2 * * * /usr/local/bin/gcs-backup.sh >> /var/log/gcs-backup.log 2>&1
```

**Systemd Timer:**
```ini
# /etc/systemd/system/gcs-backup.timer
# Pair this with a gcs-backup.service unit that runs the script above.
[Unit]
Description=Daily GCS Database Backup

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```
## Advanced Features

### Chunked Upload

For large files, dbbackup automatically uses GCS chunked upload:

- **Chunk Size**: 16MB per chunk
- **Streaming**: Direct streaming from the source
- **Checksum**: SHA-256 integrity verification

```bash
# Large database backup (automatically uses chunked upload)
dbbackup backup postgres \
  --host localhost \
  --database huge_db \
  --output /backups/huge.sql \
  --cloud "gs://backups/huge.sql"
```
### Progress Tracking

```bash
# Backup with progress display
dbbackup backup postgres \
  --host localhost \
  --database mydb \
  --output backup.sql \
  --cloud "gs://backups/backup.sql" \
  --progress
```

### Concurrent Operations

```bash
# Backup multiple databases in parallel
dbbackup backup postgres \
  --host localhost \
  --all-databases \
  --output-dir /backups \
  --cloud "gs://backups/cluster/" \
  --parallelism 4
```
### Custom Metadata

Backups include SHA-256 checksums as object metadata:

```bash
# View metadata using gsutil
gsutil stat gs://backups/backup.sql
```
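If you want to double-check that checksum by hand, the comparison is just two `sha256sum` runs. A minimal sketch on a local sample file (`dbbackup verify` performs the equivalent check against the stored metadata):

```shell
# Digest recorded at upload time (a sample file stands in for a real dump)
printf 'backup-data\n' > /tmp/backup.sql
UPLOAD_SHA=$(sha256sum /tmp/backup.sql | awk '{print $1}')

# After downloading the object again, recompute and compare
DOWNLOAD_SHA=$(sha256sum /tmp/backup.sql | awk '{print $1}')
if [ "$UPLOAD_SHA" = "$DOWNLOAD_SHA" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH" >&2
  exit 1
fi
```

Any single flipped bit in the downloaded file changes the digest completely, so a match is strong evidence the object arrived intact.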
### Object Versioning

Enable versioning to protect against accidental deletion:

```bash
# Enable versioning
gsutil versioning set on gs://mybucket/

# List all versions
gsutil ls -a gs://mybucket/backup.sql

# Restore a previous version
gsutil cp gs://mybucket/backup.sql#VERSION /local/backup.sql
```

### Customer-Managed Encryption Keys (CMEK)

Use your own encryption keys:

```bash
# Create an encryption key in Cloud KMS
gcloud kms keyrings create backup-keyring --location=us-central1
gcloud kms keys create backup-key --location=us-central1 --keyring=backup-keyring --purpose=encryption

# Set the default CMEK for the bucket
gsutil kms encryption -k projects/PROJECT/locations/us-central1/keyRings/backup-keyring/cryptoKeys/backup-key gs://mybucket/
```
## Testing with fake-gcs-server

### Setup fake-gcs-server Emulator

**Docker Compose:**
```yaml
services:
  gcs-emulator:
    image: fsouza/fake-gcs-server:latest
    ports:
      - "4443:4443"
    command: -scheme http -public-host localhost:4443
```

**Start:**
```bash
docker-compose -f docker-compose.gcs.yml up -d
```

### Create Test Bucket

```bash
# Using curl
curl -X POST "http://localhost:4443/storage/v1/b?project=test-project" \
  -H "Content-Type: application/json" \
  -d '{"name": "test-backups"}'
```

### Test Backup

```bash
# Backup to fake-gcs-server
dbbackup backup postgres \
  --host localhost \
  --database testdb \
  --output test.sql \
  --cloud "gs://test-backups/test.sql?endpoint=http://localhost:4443/storage/v1"
```

### Run Integration Tests

```bash
# Run the comprehensive test suite
./scripts/test_gcs_storage.sh
```

Tests include:
- PostgreSQL and MySQL backups
- Upload/download operations
- Large file handling (200MB+)
- Verification and cleanup
- Restore operations
## Best Practices

### 1. Security

- **Never commit credentials** to version control
- Use **Application Default Credentials** when possible
- Rotate service account keys regularly
- Use **Workload Identity** for GKE
- Enable **VPC Service Controls** for enterprise security
- Use **Customer-Managed Encryption Keys** (CMEK) for sensitive data

### 2. Performance

- Use **compression** for faster uploads: `--compression 6`
- Enable **parallelism** for cluster backups: `--parallelism 4`
- Choose an appropriate **GCS region** (close to the source)
- Use **multi-region** buckets for high availability

### 3. Cost Optimization

- Use **Nearline** for backups older than 30 days
- Use **Archive** for long-term retention (>90 days)
- Enable **lifecycle management** for automatic transitions
- Monitor storage costs in the GCP Billing Console
- Use **Coldline** for quarterly access patterns

### 4. Reliability

- Test **restore procedures** regularly
- Use **retention policies**: `--keep 30`
- Enable **object versioning** (30-day recovery)
- Use **multi-region** buckets for disaster recovery
- Monitor backup success with Cloud Monitoring

### 5. Organization

- Use **consistent naming**: `{database}/{date}/{backup}.sql`
- Use **bucket prefixes**: `prod-backups`, `dev-backups`
- Tag backups with **labels** (environment, version)
- Document restore procedures
- Use **separate buckets** per environment
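The `{database}/{date}/{backup}.sql` naming convention can be generated in a couple of lines. A sketch (the bucket and database names here are placeholders):

```shell
DB=production_db
DATE=$(date +%Y%m%d)

# Object path following {database}/{date}/{backup}.sql
OBJECT="gs://prod-backups/${DB}/${DATE}/${DB}_$(date +%H%M%S).sql"
echo "$OBJECT"
# e.g. gs://prod-backups/production_db/20260115/production_db_143000.sql
```

Keeping the date as a path segment makes prefix listing (`dbbackup cloud list "gs://prod-backups/production_db/20260115/"`) and lifecycle rules straightforward.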
## Troubleshooting

### Connection Issues

**Problem:** `failed to create GCS client`

**Solutions:**
- Check the `GOOGLE_APPLICATION_CREDENTIALS` environment variable
- Verify the service account JSON file exists and is valid
- Ensure the gcloud CLI is authenticated: `gcloud auth list`
- For the emulator, confirm `http://localhost:4443` is running

### Authentication Errors

**Problem:** `authentication failed` or `permission denied`

**Solutions:**
- Verify the service account has the required IAM roles
- Check that Application Default Credentials are set up
- Run `gcloud auth application-default login`
- Verify the service account JSON is not corrupted
- Check that the GCP project ID is correct

### Upload Failures

**Problem:** `failed to upload object`

**Solutions:**
- Check that the bucket exists (or use `&create=true`)
- Verify the service account has the `storage.objects.create` permission
- Check network connectivity to GCS
- Try smaller files first (to test the connection)
- Check GCP quota limits

### Large File Issues

**Problem:** Upload timeout for large files

**Solutions:**
- dbbackup automatically uses chunked upload
- Increase compression: `--compression 9`
- Check network bandwidth
- Use **Transfer Appliance** for TB+ data

### List/Download Issues

**Problem:** `object not found`

**Solutions:**
- Verify the object name (check the GCS Console)
- Check that the bucket name is correct
- Ensure the object hasn't been moved or deleted
- Check whether the object is in the Archive class (requires restore)

### Performance Issues

**Problem:** Slow upload/download

**Solutions:**
- Use compression: `--compression 6`
- Choose a closer GCS region
- Check network bandwidth
- Use a **multi-region** bucket for better availability
- Enable parallelism for multiple files

### Debugging

Enable debug mode:

```bash
dbbackup backup postgres \
  --cloud "gs://bucket/backup.sql" \
  --debug
```

Check GCP logs:

```bash
# Cloud Logging
gcloud logging read "resource.type=gcs_bucket AND resource.labels.bucket_name=mybucket" \
  --limit 50 \
  --format json
```

View bucket details:

```bash
gsutil ls -L -b gs://mybucket/
```
## Monitoring and Alerting

### Cloud Monitoring

Create metrics and alerts:

```bash
# Monitor backup success rate
gcloud monitoring policies create \
  --notification-channels=CHANNEL_ID \
  --display-name="Backup Failure Alert" \
  --condition-display-name="No backups in 24h" \
  --condition-threshold-value=0 \
  --condition-threshold-duration=86400s
```

### Logging

Export logs to BigQuery for analysis:

```bash
gcloud logging sinks create backup-logs \
  bigquery.googleapis.com/projects/PROJECT_ID/datasets/backup_logs \
  --log-filter='resource.type="gcs_bucket" AND resource.labels.bucket_name="prod-backups"'
```
## Additional Resources

- [Google Cloud Storage Documentation](https://cloud.google.com/storage/docs)
- [fake-gcs-server](https://github.com/fsouza/fake-gcs-server)
- [gsutil Tool](https://cloud.google.com/storage/docs/gsutil)
- [GCS Client Libraries](https://cloud.google.com/storage/docs/reference/libraries)
- [dbbackup Cloud Storage Guide](CLOUD.md)

## Support

For issues specific to GCS integration:

1. Check the [Troubleshooting](#troubleshooting) section
2. Run the integration tests: `./scripts/test_gcs_storage.sh`
3. Enable debug mode: `--debug`
4. Check GCP Service Status
5. Open an issue on GitHub with debug logs
## See Also

- [Azure Blob Storage Guide](AZURE.md)
- [AWS S3 Guide](CLOUD.md#aws-s3)
- [Main Cloud Storage Documentation](CLOUD.md)

---

130 README.md

@@ -16,6 +16,31 @@ Professional database backup and restore utility for PostgreSQL, MySQL, and Mari
## Installation

### Docker (Recommended)

**Pull from registry:**
```bash
docker pull git.uuxo.net/uuxo/dbbackup:latest
```

**Quick start:**
```bash
# PostgreSQL backup
docker run --rm \
  -v $(pwd)/backups:/backups \
  -e PGHOST=your-host \
  -e PGUSER=postgres \
  -e PGPASSWORD=secret \
  git.uuxo.net/uuxo/dbbackup:latest backup single mydb

# Interactive mode
docker run --rm -it \
  -v $(pwd)/backups:/backups \
  git.uuxo.net/uuxo/dbbackup:latest interactive
```

See [DOCKER.md](DOCKER.md) for complete Docker documentation.

### Download Pre-compiled Binary

Linux x86_64:
@@ -353,6 +378,111 @@ Restore entire PostgreSQL cluster from archive:

```bash
./dbbackup restore cluster ARCHIVE_FILE [OPTIONS]
```
### Verification & Maintenance

#### Verify Backup Integrity

Verify backup files using SHA-256 checksums and metadata validation:

```bash
./dbbackup verify-backup BACKUP_FILE [OPTIONS]
```

**Options:**

- `--quick` - Quick verification (size check only, no checksum calculation)
- `--verbose` - Show detailed information about each backup

**Examples:**

```bash
# Verify single backup (full SHA-256 check)
./dbbackup verify-backup /backups/mydb_20251125.dump

# Verify all backups in directory
./dbbackup verify-backup /backups/*.dump --verbose

# Quick verification (fast, size check only)
./dbbackup verify-backup /backups/*.dump --quick
```

**Output:**
```
Verifying 3 backup file(s)...

📁 mydb_20251125.dump
   ✅ VALID
   Size: 2.5 GiB
   SHA-256: 7e166d4cb7276e1310d76922f45eda0333a6aeac...
   Database: mydb (postgresql)
   Created: 2025-11-25T19:00:00Z

──────────────────────────────────────────────────
Total: 3 backups
✅ Valid: 3
```

#### Cleanup Old Backups

Automatically remove old backups based on a retention policy:

```bash
./dbbackup cleanup BACKUP_DIRECTORY [OPTIONS]
```

**Options:**

- `--retention-days INT` - Delete backups older than N days (default: 30)
- `--min-backups INT` - Always keep at least the N most recent backups (default: 5)
- `--dry-run` - Preview what would be deleted without actually deleting
- `--pattern STRING` - Only clean backups matching a pattern (e.g., "mydb_*.dump")

**Retention Policy:**

The cleanup command uses a safe retention policy:
1. Backups older than `--retention-days` are eligible for deletion
2. At least the `--min-backups` most recent backups are always kept
3. Both conditions must be met for a backup to be deleted
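The two rules combine as: sort newest-first, skip the first `MIN_BACKUPS` unconditionally, and delete the remainder only if past the cutoff. A sketch of that logic with plain shell tools (an illustration only, not how `dbbackup` itself implements it; the demo directory and file names are made up):

```shell
BACKUP_DIR=/tmp/retention-demo
RETENTION_DAYS=30
MIN_BACKUPS=5

# Create 8 sample backups: 4 recent, 4 older than the cutoff
rm -rf "$BACKUP_DIR" && mkdir -p "$BACKUP_DIR"
for i in 1 2 3 4; do touch "$BACKUP_DIR/new_$i.dump"; done
for i in 1 2 3 4; do touch -d "40 days ago" "$BACKUP_DIR/old_$i.dump"; done

# Newest first; always keep the first MIN_BACKUPS,
# then delete only the files older than RETENTION_DAYS
ls -1t "$BACKUP_DIR"/*.dump | tail -n +$((MIN_BACKUPS + 1)) | while read -r f; do
  if [ -n "$(find "$f" -mtime +"$RETENTION_DAYS")" ]; then
    rm "$f"
  fi
done

ls "$BACKUP_DIR" | wc -l   # → 5 (4 recent + 1 old kept by the minimum-count rule)
```

Note the asymmetry: the minimum-count rule can keep an over-age backup, but the age rule can never delete below the minimum count.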
**Examples:**

```bash
# Clean up backups older than 30 days (keep at least 5)
./dbbackup cleanup /backups --retention-days 30 --min-backups 5

# Preview what would be deleted
./dbbackup cleanup /backups --retention-days 7 --dry-run

# Clean specific database backups
./dbbackup cleanup /backups --pattern "mydb_*.dump"

# Aggressive cleanup (keep only the 3 most recent)
./dbbackup cleanup /backups --retention-days 1 --min-backups 3
```

**Output:**
```
🗑️  Cleanup Policy:
   Directory: /backups
   Retention: 30 days
   Min backups: 5

📊 Results:
   Total backups: 12
   Eligible for deletion: 7

✅ Deleted 7 backup(s):
   - old_db_20251001.dump
   - old_db_20251002.dump
   ...

📦 Kept 5 backup(s)

💾 Space freed: 15.2 GiB
──────────────────────────────────────────────────
✅ Cleanup completed successfully
```
**Options:**

- `--confirm` - Confirm and execute restore (required for safety)

---

523 ROADMAP.md (new file)

@@ -0,0 +1,523 @@
# dbbackup Version 2.0 Roadmap

## Current Status: v1.1 (Production Ready)
- ✅ 24/24 automated tests passing (100%)
- ✅ PostgreSQL, MySQL, MariaDB support
- ✅ Interactive TUI + CLI
- ✅ Cluster backup/restore
- ✅ Docker support
- ✅ Cross-platform binaries

---

## Version 2.0 Vision: Enterprise-Grade Features

Transform dbbackup into an enterprise-ready backup solution with cloud storage, incremental backups, PITR, and encryption.

**Target Release:** Q2 2026 (3-4 months)

---

## Priority Matrix

```
                     HIGH IMPACT
                          │
     ┌────────────────────┼────────────────────┐
     │                    │                    │
     │  Cloud Storage ⭐  │  Incremental ⭐⭐⭐ │
     │  Verification      │  PITR ⭐⭐⭐        │
     │  Retention         │  Encryption ⭐⭐    │
 LOW │                    │                    │ HIGH
 EFFORT ─────────────────┼──────────────────── EFFORT
     │                    │                    │
     │  Metrics           │  Web UI (optional) │
     │  Remote Restore    │  Replication Slots │
     │                    │                    │
     └────────────────────┼────────────────────┘
                          │
                     LOW IMPACT
```

---
## Development Phases

### Phase 1: Foundation (Weeks 1-4)

**Sprint 1: Verification & Retention (2 weeks)**

**Goals:**
- Backup integrity verification with SHA-256 checksums
- Automated retention policy enforcement
- Structured backup metadata

**Features:**
- ✅ Generate SHA-256 checksums during backup
- ✅ Verify backups before/after restore
- ✅ Automatic cleanup of old backups
- ✅ Retention policy: days + minimum count
- ✅ Backup metadata in JSON format

**Deliverables:**
```bash
# New commands
dbbackup verify backup.dump
dbbackup cleanup --retention-days 30 --min-backups 5

# Metadata format
{
  "version": "2.0",
  "timestamp": "2026-01-15T10:30:00Z",
  "database": "production",
  "size_bytes": 1073741824,
  "sha256": "abc123...",
  "db_version": "PostgreSQL 15.3",
  "compression": "gzip-9"
}
```

**Implementation:**
- `internal/verification/` - Checksum calculation and validation
- `internal/retention/` - Policy enforcement
- `internal/metadata/` - Backup metadata management

---
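For a feel of what Sprint 1 produces, the metadata file above can be assembled from standard shell tools at backup time. A sketch only (the real implementation lives in `internal/metadata/`; the dump file here is a stand-in):

```shell
FILE=/tmp/production.dump
printf 'dump-bytes\n' > "$FILE"   # stand-in for a real dump

SHA=$(sha256sum "$FILE" | awk '{print $1}')
SIZE=$(wc -c < "$FILE")

# Write the sidecar metadata file next to the dump
cat > "$FILE.meta" <<EOF
{
  "version": "2.0",
  "timestamp": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
  "database": "production",
  "size_bytes": $SIZE,
  "sha256": "$SHA",
  "compression": "gzip-9"
}
EOF
cat "$FILE.meta"
```

Verification then reduces to recomputing the digest and size and comparing them against the sidecar.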
**Sprint 2: Cloud Storage (2 weeks)**

**Goals:**
- Upload backups to cloud storage
- Support multiple cloud providers
- Download and restore from the cloud

**Providers:**
- ✅ AWS S3
- ✅ MinIO (S3-compatible)
- ✅ Backblaze B2
- ✅ Azure Blob Storage (optional)
- ✅ Google Cloud Storage (optional)

**Configuration:**
```toml
[cloud]
enabled = true
provider = "s3"  # s3, minio, azure, gcs, b2
auto_upload = true

[cloud.s3]
bucket = "db-backups"
region = "us-east-1"
endpoint = "s3.amazonaws.com"  # Custom for MinIO
access_key = "..."  # Or use IAM role
secret_key = "..."
```

**New Commands:**
```bash
# Upload existing backup
dbbackup cloud upload backup.dump

# List cloud backups
dbbackup cloud list

# Download from cloud
dbbackup cloud download backup_id

# Restore directly from cloud
dbbackup restore single s3://bucket/backup.dump --target mydb
```

**Dependencies:**
```go
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
"cloud.google.com/go/storage"
```

---
### Phase 2: Advanced Backup (Weeks 5-10)

**Sprint 3: Incremental Backups (3 weeks)**

**Goals:**
- Reduce backup time and storage
- File-level incrementals for PostgreSQL
- Binary log incrementals for MySQL

**PostgreSQL Strategy:**
```
Full Backup (Base)
├─ Incremental 1 (changed files since base)
├─ Incremental 2 (changed files since inc1)
└─ Incremental 3 (changed files since inc2)
```

**MySQL Strategy:**
```
Full Backup
├─ Binary Log 1 (changes since full)
├─ Binary Log 2
└─ Binary Log 3
```

**Implementation:**
```bash
# Create base backup
dbbackup backup single mydb --mode full

# Create incremental
dbbackup backup single mydb --mode incremental

# Restore (automatically applies incrementals)
dbbackup restore single backup.dump --apply-incrementals
```

**File Structure:**
```
backups/
├── mydb_full_20260115.dump
├── mydb_full_20260115.meta
├── mydb_incr_20260116.dump   # Contains only changes
├── mydb_incr_20260116.meta   # Points to base: mydb_full_20260115
└── mydb_incr_20260117.dump
```

---
**Sprint 4: Security & Encryption (2 weeks)**

**Goals:**
- Encrypt backups at rest
- Secure key management
- Encrypted cloud uploads

**Features:**
- ✅ AES-256-GCM encryption
- ✅ Argon2 key derivation
- ✅ Multiple key sources (file, env, vault)
- ✅ Encrypted metadata

**Configuration:**
```toml
[encryption]
enabled = true
algorithm = "aes-256-gcm"
key_file = "/etc/dbbackup/encryption.key"

# Or use an environment variable
# DBBACKUP_ENCRYPTION_KEY=base64key...
```

**Commands:**
```bash
# Generate encryption key
dbbackup keys generate

# Encrypt existing backup
dbbackup encrypt backup.dump

# Decrypt backup
dbbackup decrypt backup.dump.enc

# Automatic encryption
dbbackup backup single mydb --encrypt
```

**File Format:**
```
+------------------+
| Encryption Header|  (IV, algorithm, key ID)
+------------------+
|  Encrypted Data  |  (AES-256-GCM)
+------------------+
|     Auth Tag     |  (GCM authentication tag for integrity)
+------------------+
```

---
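The encrypt/decrypt round-trip the `encrypt`/`decrypt` commands aim for can be illustrated with `openssl`. Note this is a stand-in sketch: the `openssl enc` subcommand does not support AEAD modes like GCM, so AES-256-CBC with PBKDF2 is used here purely to show the key-file-based flow, while v2.0 itself targets AES-256-GCM with Argon2 via Go's crypto libraries:

```shell
KEY_FILE=/tmp/encryption.key
openssl rand -base64 32 > "$KEY_FILE"

printf 'sensitive dump contents\n' > /tmp/backup.dump

# Encrypt the backup using the key file
# (CBC + PBKDF2 stand in for GCM + Argon2, which the CLI cannot do)
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -pass file:"$KEY_FILE" \
  -in /tmp/backup.dump -out /tmp/backup.dump.enc

# Decrypt and confirm the round-trip is lossless
openssl enc -d -aes-256-cbc -pbkdf2 \
  -pass file:"$KEY_FILE" \
  -in /tmp/backup.dump.enc -out /tmp/backup.decrypted

cmp -s /tmp/backup.dump /tmp/backup.decrypted && echo "round-trip OK"
```

Losing the key file makes the ciphertext unrecoverable, which is why key rotation and off-host key storage are part of the sprint's key-management goals.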
**Sprint 5: Point-in-Time Recovery - PITR (4 weeks)**

**Goals:**
- Restore to any point in time
- WAL archiving for PostgreSQL
- Binary log archiving for MySQL

**PostgreSQL Implementation:**

```toml
[pitr]
enabled = true
wal_archive_dir = "/backups/wal_archive"
wal_retention_days = 7

# PostgreSQL config (auto-configured by dbbackup)
# archive_mode = on
# archive_command = '/usr/local/bin/dbbackup archive-wal %p %f'
```

**Commands:**
```bash
# Enable PITR
dbbackup pitr enable

# Archive WAL manually
dbbackup archive-wal /var/lib/postgresql/pg_wal/000000010000000000000001

# Restore to a point in time
dbbackup restore single backup.dump \
  --target-time "2026-01-15 14:30:00" \
  --target mydb

# Show available restore points
dbbackup pitr timeline
```

**WAL Archive Structure:**
```
wal_archive/
├── 000000010000000000000001
├── 000000010000000000000002
├── 000000010000000000000003
└── timeline.json
```

**MySQL Implementation:**
```bash
# Archive binary logs
dbbackup binlog archive --start-datetime "2026-01-15 00:00:00"

# PITR restore
dbbackup restore single backup.sql \
  --target-time "2026-01-15 14:30:00" \
  --apply-binlogs
```

---
### Phase 3: Enterprise Features (Weeks 11-16)

**Sprint 6: Observability & Integration (3 weeks)**

**Features:**

1. **Prometheus Metrics**
```
# Exposed metrics
dbbackup_backup_duration_seconds
dbbackup_backup_size_bytes
dbbackup_backup_success_total
dbbackup_restore_duration_seconds
dbbackup_last_backup_timestamp
dbbackup_cloud_upload_duration_seconds
```

**Endpoint:**
```bash
# Start metrics server
dbbackup metrics serve --port 9090

# Scrape endpoint
curl http://localhost:9090/metrics
```

2. **Remote Restore**
```bash
# Restore to a remote server
dbbackup restore single backup.dump \
  --remote-host db-replica-01 \
  --remote-user postgres \
  --remote-port 22 \
  --confirm
```

3. **Replication Slots (PostgreSQL)**
```bash
# Create a replication slot for continuous WAL streaming
dbbackup replication create-slot backup_slot

# Stream WALs via replication
dbbackup replication stream backup_slot
```

4. **Webhook Notifications**
```toml
[notifications]
enabled = true
webhook_url = "https://slack.com/webhook/..."
notify_on = ["backup_complete", "backup_failed", "restore_complete"]
```

---
## Technical Architecture

### New Directory Structure

```
internal/
├── cloud/           # Cloud storage backends
│   ├── interface.go
│   ├── s3.go
│   ├── azure.go
│   └── gcs.go
├── encryption/      # Encryption layer
│   ├── aes.go
│   ├── keys.go
│   └── vault.go
├── incremental/     # Incremental backup engine
│   ├── postgres.go
│   └── mysql.go
├── pitr/            # Point-in-time recovery
│   ├── wal.go
│   ├── binlog.go
│   └── timeline.go
├── verification/    # Backup verification
│   ├── checksum.go
│   └── validate.go
├── retention/       # Retention policy
│   └── cleanup.go
├── metrics/         # Prometheus metrics
│   └── exporter.go
└── replication/     # Replication management
    └── slots.go
```

### Required Dependencies

```go
// Cloud storage
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
"cloud.google.com/go/storage"

// Encryption
"crypto/aes"
"crypto/cipher"
"golang.org/x/crypto/argon2"

// Metrics
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"

// PostgreSQL replication
"github.com/jackc/pgx/v5/pgconn"

// Fast file scanning for incrementals
"github.com/karrick/godirwalk"
```

---
## Testing Strategy

### v2.0 Test Coverage Goals
- Minimum 90% code coverage
- Integration tests for all cloud providers
- End-to-end PITR scenarios
- Performance benchmarks for incremental backups
- Encryption/decryption validation
- Multi-database restore tests

### New Test Suites
```bash
# Cloud storage tests
./run_qa_tests.sh --suite cloud

# Incremental backup tests
./run_qa_tests.sh --suite incremental

# PITR tests
./run_qa_tests.sh --suite pitr

# Encryption tests
./run_qa_tests.sh --suite encryption

# Full v2.0 suite
./run_qa_tests.sh --suite v2
```

---
## Migration Path

### v1.x → v2.0 Compatibility
- ✅ All v1.x backups readable in v2.0
- ✅ Configuration auto-migration
- ✅ Metadata format upgrade
- ✅ Backward-compatible commands

### Deprecation Timeline
- v2.0: Warning for the old config format
- v2.1: Full migration required
- v3.0: Old format no longer supported

---
## Documentation Updates
|
||||||
|
|
||||||
|
### New Docs
|
||||||
|
- `CLOUD.md` - Cloud storage configuration
|
||||||
|
- `INCREMENTAL.md` - Incremental backup guide
|
||||||
|
- `PITR.md` - Point-in-time recovery
|
||||||
|
- `ENCRYPTION.md` - Encryption setup
|
||||||
|
- `METRICS.md` - Prometheus integration
|
||||||
|
|
||||||
|
---

## Success Metrics

### v2.0 Goals

- 🎯 95%+ test coverage
- 🎯 Support 1TB+ databases with incrementals
- 🎯 PITR with <5 minute granularity
- 🎯 Cloud upload/download >100MB/s
- 🎯 Encryption overhead <10%
- 🎯 Full compatibility with pgBackRest for PostgreSQL
- 🎯 Industry-leading MySQL PITR solution
---

## Release Schedule

- **v2.0-alpha** (End Sprint 3): Cloud + Verification
- **v2.0-beta** (End Sprint 5): + Incremental + PITR
- **v2.0-rc1** (End Sprint 6): + Enterprise features
- **v2.0 GA** (Q2 2026): Production release
---

## What Makes v2.0 Unique

After v2.0, dbbackup will be:

✅ **Only multi-database tool** with full PITR support
✅ **Best-in-class UX** (TUI + CLI + Docker + K8s)
✅ **Feature parity** with pgBackRest (PostgreSQL)
✅ **Superior to mysqldump** with incremental + PITR
✅ **Cloud-native** with multi-provider support
✅ **Enterprise-ready** with encryption + metrics
✅ **Zero-config** for 80% of use cases
---

## Contributing

Want to contribute to v2.0? Check out:

- [CONTRIBUTING.md](CONTRIBUTING.md)
- [Good First Issues](https://git.uuxo.net/uuxo/dbbackup/issues?labels=good-first-issue)
- [v2.0 Milestone](https://git.uuxo.net/uuxo/dbbackup/milestone/2)
---

## Questions?

Open an issue or start a discussion:

- Issues: https://git.uuxo.net/uuxo/dbbackup/issues
- Discussions: https://git.uuxo.net/uuxo/dbbackup/discussions

---

**Next Step:** Sprint 1 - Backup Verification & Retention (January 2026)
38
build_docker.sh
Executable file
@@ -0,0 +1,38 @@
#!/bin/bash
# Build and push Docker images

set -e

VERSION="1.1"
REGISTRY="git.uuxo.net/uuxo"
IMAGE_NAME="dbbackup"

echo "=== Building Docker Image ==="
echo "Version: $VERSION"
echo "Registry: $REGISTRY"
echo ""

# Build image
echo "Building image..."
docker build -t ${IMAGE_NAME}:${VERSION} -t ${IMAGE_NAME}:latest .

# Tag for registry
echo "Tagging for registry..."
docker tag ${IMAGE_NAME}:${VERSION} ${REGISTRY}/${IMAGE_NAME}:${VERSION}
docker tag ${IMAGE_NAME}:latest ${REGISTRY}/${IMAGE_NAME}:latest

# Show images
echo ""
echo "Images built:"
docker images ${IMAGE_NAME}

echo ""
echo "✅ Build complete!"
echo ""
echo "To push to registry:"
echo "  docker push ${REGISTRY}/${IMAGE_NAME}:${VERSION}"
echo "  docker push ${REGISTRY}/${IMAGE_NAME}:latest"
echo ""
echo "To test locally:"
echo "  docker run --rm ${IMAGE_NAME}:latest --version"
echo "  docker run --rm -it ${IMAGE_NAME}:latest interactive"
@@ -3,6 +3,7 @@ package cmd
 import (
 	"fmt"
 
+	"dbbackup/internal/cloud"
 	"github.com/spf13/cobra"
 )
@@ -90,6 +91,65 @@ func init() {
 	backupCmd.AddCommand(singleCmd)
 	backupCmd.AddCommand(sampleCmd)
 
+	// Cloud storage flags for all backup commands
+	for _, cmd := range []*cobra.Command{clusterCmd, singleCmd, sampleCmd} {
+		cmd.Flags().String("cloud", "", "Cloud storage URI (e.g., s3://bucket/path) - takes precedence over individual flags")
+		cmd.Flags().Bool("cloud-auto-upload", false, "Automatically upload backup to cloud after completion")
+		cmd.Flags().String("cloud-provider", "", "Cloud provider (s3, minio, b2)")
+		cmd.Flags().String("cloud-bucket", "", "Cloud bucket name")
+		cmd.Flags().String("cloud-region", "us-east-1", "Cloud region")
+		cmd.Flags().String("cloud-endpoint", "", "Cloud endpoint (for MinIO/B2)")
+		cmd.Flags().String("cloud-prefix", "", "Cloud key prefix")
+
+		// Add PreRunE to update config from flags
+		originalPreRun := cmd.PreRunE
+		cmd.PreRunE = func(c *cobra.Command, args []string) error {
+			// Call original PreRunE if it exists
+			if originalPreRun != nil {
+				if err := originalPreRun(c, args); err != nil {
+					return err
+				}
+			}
+
+			// Check if the --cloud URI flag is provided (takes precedence)
+			if c.Flags().Changed("cloud") {
+				if err := parseCloudURIFlag(c); err != nil {
+					return err
+				}
+			} else {
+				// Update cloud config from individual flags
+				if c.Flags().Changed("cloud-auto-upload") {
+					if autoUpload, _ := c.Flags().GetBool("cloud-auto-upload"); autoUpload {
+						cfg.CloudEnabled = true
+						cfg.CloudAutoUpload = true
+					}
+				}
+
+				if c.Flags().Changed("cloud-provider") {
+					cfg.CloudProvider, _ = c.Flags().GetString("cloud-provider")
+				}
+
+				if c.Flags().Changed("cloud-bucket") {
+					cfg.CloudBucket, _ = c.Flags().GetString("cloud-bucket")
+				}
+
+				if c.Flags().Changed("cloud-region") {
+					cfg.CloudRegion, _ = c.Flags().GetString("cloud-region")
+				}
+
+				if c.Flags().Changed("cloud-endpoint") {
+					cfg.CloudEndpoint, _ = c.Flags().GetString("cloud-endpoint")
+				}
+
+				if c.Flags().Changed("cloud-prefix") {
+					cfg.CloudPrefix, _ = c.Flags().GetString("cloud-prefix")
+				}
+			}
+
+			return nil
+		}
+	}
+
 	// Sample backup flags - use local variables to avoid cfg access during init
 	var sampleStrategy string
 	var sampleValue int
@@ -127,3 +187,39 @@ func init() {
 	// Mark the strategy flags as mutually exclusive
 	sampleCmd.MarkFlagsMutuallyExclusive("sample-ratio", "sample-percent", "sample-count")
 }
+
+// parseCloudURIFlag parses the --cloud URI flag and updates config
+func parseCloudURIFlag(cmd *cobra.Command) error {
+	cloudURI, _ := cmd.Flags().GetString("cloud")
+	if cloudURI == "" {
+		return nil
+	}
+
+	// Parse cloud URI
+	uri, err := cloud.ParseCloudURI(cloudURI)
+	if err != nil {
+		return fmt.Errorf("invalid cloud URI: %w", err)
+	}
+
+	// Enable cloud and auto-upload
+	cfg.CloudEnabled = true
+	cfg.CloudAutoUpload = true
+
+	// Update config from URI
+	cfg.CloudProvider = uri.Provider
+	cfg.CloudBucket = uri.Bucket
+
+	if uri.Region != "" {
+		cfg.CloudRegion = uri.Region
+	}
+
+	if uri.Endpoint != "" {
+		cfg.CloudEndpoint = uri.Endpoint
+	}
+
+	if uri.Path != "" {
+		cfg.CloudPrefix = uri.Dir()
+	}
+
+	return nil
+}
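`cloud.ParseCloudURI` lives in `dbbackup/internal/cloud` and its exact behavior is not shown in this diff. As a rough sketch under that caveat, a URI like `s3://bucket/path/file.dump` maps onto provider/bucket/prefix roughly as below; `parseURI`, `parsed`, and this `Dir` are illustrative names, not the real API:

```go
package main

import (
	"fmt"
	"net/url"
	"path"
	"strings"
)

// parsed mirrors the fields the hunk reads from the result of
// cloud.ParseCloudURI; the real parser may differ (e.g. region/endpoint).
type parsed struct {
	Provider, Bucket, Path string
}

// parseURI splits scheme://bucket/key into provider, bucket, and key path.
func parseURI(raw string) (parsed, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return parsed{}, err
	}
	if u.Scheme == "" || u.Host == "" {
		return parsed{}, fmt.Errorf("invalid cloud URI: %s", raw)
	}
	return parsed{
		Provider: u.Scheme,
		Bucket:   u.Host,
		Path:     strings.TrimPrefix(u.Path, "/"),
	}, nil
}

// Dir mimics uri.Dir() in the hunk: the prefix is the directory part
// of the key path, empty for a top-level key.
func (p parsed) Dir() string {
	d := path.Dir(p.Path)
	if d == "." {
		return ""
	}
	return d
}

func main() {
	p, _ := parseURI("s3://my-bucket/backups/mydb.dump")
	fmt.Println(p.Provider, p.Bucket, p.Dir())
}
```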
334
cmd/cleanup.go
Normal file
@@ -0,0 +1,334 @@
package cmd

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"time"

	"dbbackup/internal/cloud"
	"dbbackup/internal/metadata"
	"dbbackup/internal/retention"

	"github.com/spf13/cobra"
)

var cleanupCmd = &cobra.Command{
	Use:   "cleanup [backup-directory]",
	Short: "Clean up old backups based on retention policy",
	Long: `Remove old backup files based on retention policy while maintaining a minimum backup count.

The retention policy ensures:
  1. Backups older than --retention-days are eligible for deletion
  2. At least --min-backups most recent backups are always kept
  3. Both conditions must be met for deletion

Examples:
  # Clean up backups older than 30 days (keep at least 5)
  dbbackup cleanup /backups --retention-days 30 --min-backups 5

  # Dry run to see what would be deleted
  dbbackup cleanup /backups --retention-days 7 --dry-run

  # Clean up specific database backups only
  dbbackup cleanup /backups --pattern "mydb_*.dump"

  # Aggressive cleanup (keep only 3 most recent)
  dbbackup cleanup /backups --retention-days 1 --min-backups 3`,
	Args: cobra.ExactArgs(1),
	RunE: runCleanup,
}

var (
	retentionDays  int
	minBackups     int
	dryRun         bool
	cleanupPattern string
)

func init() {
	rootCmd.AddCommand(cleanupCmd)
	cleanupCmd.Flags().IntVar(&retentionDays, "retention-days", 30, "Delete backups older than this many days")
	cleanupCmd.Flags().IntVar(&minBackups, "min-backups", 5, "Always keep at least this many backups")
	cleanupCmd.Flags().BoolVar(&dryRun, "dry-run", false, "Show what would be deleted without actually deleting")
	cleanupCmd.Flags().StringVar(&cleanupPattern, "pattern", "", "Only clean up backups matching this pattern (e.g., 'mydb_*.dump')")
}

func runCleanup(cmd *cobra.Command, args []string) error {
	backupPath := args[0]

	// Check if this is a cloud URI
	if isCloudURIPath(backupPath) {
		return runCloudCleanup(cmd.Context(), backupPath)
	}

	// Local cleanup
	backupDir := backupPath

	// Validate directory exists
	if !dirExists(backupDir) {
		return fmt.Errorf("backup directory does not exist: %s", backupDir)
	}

	// Create retention policy
	policy := retention.Policy{
		RetentionDays: retentionDays,
		MinBackups:    minBackups,
		DryRun:        dryRun,
	}

	fmt.Printf("🗑️  Cleanup Policy:\n")
	fmt.Printf("   Directory: %s\n", backupDir)
	fmt.Printf("   Retention: %d days\n", policy.RetentionDays)
	fmt.Printf("   Min backups: %d\n", policy.MinBackups)
	if cleanupPattern != "" {
		fmt.Printf("   Pattern: %s\n", cleanupPattern)
	}
	if dryRun {
		fmt.Printf("   Mode: DRY RUN (no files will be deleted)\n")
	}
	fmt.Println()

	var result *retention.CleanupResult
	var err error

	// Apply policy
	if cleanupPattern != "" {
		result, err = retention.CleanupByPattern(backupDir, cleanupPattern, policy)
	} else {
		result, err = retention.ApplyPolicy(backupDir, policy)
	}

	if err != nil {
		return fmt.Errorf("cleanup failed: %w", err)
	}

	// Display results
	fmt.Printf("📊 Results:\n")
	fmt.Printf("   Total backups: %d\n", result.TotalBackups)
	fmt.Printf("   Eligible for deletion: %d\n", result.EligibleForDeletion)

	if len(result.Deleted) > 0 {
		fmt.Printf("\n")
		if dryRun {
			fmt.Printf("🔍 Would delete %d backup(s):\n", len(result.Deleted))
		} else {
			fmt.Printf("✅ Deleted %d backup(s):\n", len(result.Deleted))
		}
		for _, file := range result.Deleted {
			fmt.Printf("   - %s\n", filepath.Base(file))
		}
	}

	if len(result.Kept) > 0 && len(result.Kept) <= 10 {
		fmt.Printf("\n📦 Kept %d backup(s):\n", len(result.Kept))
		for _, file := range result.Kept {
			fmt.Printf("   - %s\n", filepath.Base(file))
		}
	} else if len(result.Kept) > 10 {
		fmt.Printf("\n📦 Kept %d backup(s)\n", len(result.Kept))
	}

	if !dryRun && result.SpaceFreed > 0 {
		fmt.Printf("\n💾 Space freed: %s\n", metadata.FormatSize(result.SpaceFreed))
	}

	if len(result.Errors) > 0 {
		fmt.Printf("\n⚠️  Errors:\n")
		for _, err := range result.Errors {
			fmt.Printf("   - %v\n", err)
		}
	}

	fmt.Println(strings.Repeat("─", 50))

	if dryRun {
		fmt.Println("✅ Dry run completed (no files were deleted)")
	} else if len(result.Deleted) > 0 {
		fmt.Println("✅ Cleanup completed successfully")
	} else {
		fmt.Println("ℹ️  No backups eligible for deletion")
	}

	return nil
}

func dirExists(path string) bool {
	info, err := os.Stat(path)
	if err != nil {
		return false
	}
	return info.IsDir()
}

// isCloudURIPath checks if a path is a cloud URI
func isCloudURIPath(s string) bool {
	return cloud.IsCloudURI(s)
}

// runCloudCleanup applies retention policy to cloud storage
func runCloudCleanup(ctx context.Context, uri string) error {
	// Parse cloud URI
	cloudURI, err := cloud.ParseCloudURI(uri)
	if err != nil {
		return fmt.Errorf("invalid cloud URI: %w", err)
	}

	fmt.Printf("☁️  Cloud Cleanup Policy:\n")
	fmt.Printf("   URI: %s\n", uri)
	fmt.Printf("   Provider: %s\n", cloudURI.Provider)
	fmt.Printf("   Bucket: %s\n", cloudURI.Bucket)
	if cloudURI.Path != "" {
		fmt.Printf("   Prefix: %s\n", cloudURI.Path)
	}
	fmt.Printf("   Retention: %d days\n", retentionDays)
	fmt.Printf("   Min backups: %d\n", minBackups)
	if dryRun {
		fmt.Printf("   Mode: DRY RUN (no files will be deleted)\n")
	}
	fmt.Println()

	// Create cloud backend
	cfg := cloudURI.ToConfig()
	backend, err := cloud.NewBackend(cfg)
	if err != nil {
		return fmt.Errorf("failed to create cloud backend: %w", err)
	}

	// List all backups
	backups, err := backend.List(ctx, cloudURI.Path)
	if err != nil {
		return fmt.Errorf("failed to list cloud backups: %w", err)
	}

	if len(backups) == 0 {
		fmt.Println("No backups found in cloud storage")
		return nil
	}

	fmt.Printf("Found %d backup(s) in cloud storage\n\n", len(backups))

	// Filter backups based on pattern if specified
	var filteredBackups []cloud.BackupInfo
	if cleanupPattern != "" {
		for _, backup := range backups {
			matched, _ := filepath.Match(cleanupPattern, backup.Name)
			if matched {
				filteredBackups = append(filteredBackups, backup)
			}
		}
		fmt.Printf("Pattern matched %d backup(s)\n\n", len(filteredBackups))
	} else {
		filteredBackups = backups
	}

	// Backups are already sorted by modification time (oldest first) by backend.List

	// Calculate retention cutoff date
	cutoffDate := time.Now().AddDate(0, 0, -retentionDays)

	// Determine which backups to delete
	var toDelete []cloud.BackupInfo
	var toKeep []cloud.BackupInfo

	for _, backup := range filteredBackups {
		if backup.LastModified.Before(cutoffDate) {
			toDelete = append(toDelete, backup)
		} else {
			toKeep = append(toKeep, backup)
		}
	}

	// Ensure we keep the minimum number of backups
	totalBackups := len(filteredBackups)
	if totalBackups-len(toDelete) < minBackups {
		// Need to keep more backups
		keepCount := minBackups - len(toKeep)
		if keepCount > len(toDelete) {
			keepCount = len(toDelete)
		}

		// Move the newest expired backups from toDelete back to toKeep
		moveFrom := len(toDelete) - keepCount
		toKeep = append(toKeep, toDelete[moveFrom:]...)
		toDelete = toDelete[:moveFrom]
	}

	// Display results
	fmt.Printf("📊 Results:\n")
	fmt.Printf("   Total backups: %d\n", totalBackups)
	fmt.Printf("   Eligible for deletion: %d\n", len(toDelete))
	fmt.Printf("   Will keep: %d\n", len(toKeep))
	fmt.Println()

	if len(toDelete) > 0 {
		if dryRun {
			fmt.Printf("🔍 Would delete %d backup(s):\n", len(toDelete))
		} else {
			fmt.Printf("🗑️  Deleting %d backup(s):\n", len(toDelete))
		}

		var totalSize int64
		var deletedCount int

		for _, backup := range toDelete {
			fmt.Printf("   - %s (%s, %s old)\n",
				backup.Name,
				cloud.FormatSize(backup.Size),
				formatBackupAge(backup.LastModified))

			totalSize += backup.Size

			if !dryRun {
				if err := backend.Delete(ctx, backup.Key); err != nil {
					fmt.Printf("     ❌ Error: %v\n", err)
				} else {
					deletedCount++
					// Also try to delete the sidecar metadata file
					backend.Delete(ctx, backup.Key+".meta.json")
				}
			}
		}

		fmt.Printf("\n💾 Space %s: %s\n",
			map[bool]string{true: "would be freed", false: "freed"}[dryRun],
			cloud.FormatSize(totalSize))

		if !dryRun && deletedCount > 0 {
			fmt.Printf("✅ Successfully deleted %d backup(s)\n", deletedCount)
		}
	} else {
		fmt.Println("No backups eligible for deletion")
	}

	return nil
}

// formatBackupAge returns a human-readable age string from a time.Time
func formatBackupAge(t time.Time) string {
	d := time.Since(t)
	days := int(d.Hours() / 24)

	if days == 0 {
		return "today"
	} else if days == 1 {
		return "1 day"
	} else if days < 30 {
		return fmt.Sprintf("%d days", days)
	} else if days < 365 {
		months := days / 30
		if months == 1 {
			return "1 month"
		}
		return fmt.Sprintf("%d months", months)
	} else {
		years := days / 365
		if years == 1 {
			return "1 year"
		}
		return fmt.Sprintf("%d years", years)
	}
}
394
cmd/cloud.go
Normal file
@@ -0,0 +1,394 @@
package cmd

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"time"

	"dbbackup/internal/cloud"

	"github.com/spf13/cobra"
)

var cloudCmd = &cobra.Command{
	Use:   "cloud",
	Short: "Cloud storage operations",
	Long: `Manage backups in cloud storage (S3, MinIO, Backblaze B2).

Supports:
  - AWS S3
  - MinIO (S3-compatible)
  - Backblaze B2 (S3-compatible)
  - Any S3-compatible storage

Configuration via flags or environment variables:
  --cloud-provider    DBBACKUP_CLOUD_PROVIDER
  --cloud-bucket      DBBACKUP_CLOUD_BUCKET
  --cloud-region      DBBACKUP_CLOUD_REGION
  --cloud-endpoint    DBBACKUP_CLOUD_ENDPOINT
  --cloud-access-key  DBBACKUP_CLOUD_ACCESS_KEY (or AWS_ACCESS_KEY_ID)
  --cloud-secret-key  DBBACKUP_CLOUD_SECRET_KEY (or AWS_SECRET_ACCESS_KEY)`,
}

var cloudUploadCmd = &cobra.Command{
	Use:   "upload [backup-file]",
	Short: "Upload backup to cloud storage",
	Long: `Upload one or more backup files to cloud storage.

Examples:
  # Upload single backup
  dbbackup cloud upload /backups/mydb.dump

  # Upload with progress
  dbbackup cloud upload /backups/mydb.dump --verbose

  # Upload multiple files
  dbbackup cloud upload /backups/*.dump`,
	Args: cobra.MinimumNArgs(1),
	RunE: runCloudUpload,
}

var cloudDownloadCmd = &cobra.Command{
	Use:   "download [remote-file] [local-path]",
	Short: "Download backup from cloud storage",
	Long: `Download a backup file from cloud storage.

Examples:
  # Download to current directory
  dbbackup cloud download mydb.dump .

  # Download to specific path
  dbbackup cloud download mydb.dump /backups/mydb.dump

  # Download with progress
  dbbackup cloud download mydb.dump . --verbose`,
	Args: cobra.ExactArgs(2),
	RunE: runCloudDownload,
}

var cloudListCmd = &cobra.Command{
	Use:   "list [prefix]",
	Short: "List backups in cloud storage",
	Long: `List all backup files in cloud storage.

Examples:
  # List all backups
  dbbackup cloud list

  # List backups with prefix
  dbbackup cloud list mydb_

  # List with detailed information
  dbbackup cloud list --verbose`,
	Args: cobra.MaximumNArgs(1),
	RunE: runCloudList,
}

var cloudDeleteCmd = &cobra.Command{
	Use:   "delete [remote-file]",
	Short: "Delete backup from cloud storage",
	Long: `Delete a backup file from cloud storage.

Examples:
  # Delete single backup
  dbbackup cloud delete mydb_20251125.dump

  # Delete without confirmation prompt
  dbbackup cloud delete mydb.dump --confirm`,
	Args: cobra.ExactArgs(1),
	RunE: runCloudDelete,
}

var (
	cloudProvider  string
	cloudBucket    string
	cloudRegion    string
	cloudEndpoint  string
	cloudAccessKey string
	cloudSecretKey string
	cloudPrefix    string
	cloudVerbose   bool
	cloudConfirm   bool
)

func init() {
	rootCmd.AddCommand(cloudCmd)
	cloudCmd.AddCommand(cloudUploadCmd, cloudDownloadCmd, cloudListCmd, cloudDeleteCmd)

	// Cloud configuration flags
	for _, cmd := range []*cobra.Command{cloudUploadCmd, cloudDownloadCmd, cloudListCmd, cloudDeleteCmd} {
		cmd.Flags().StringVar(&cloudProvider, "cloud-provider", getEnv("DBBACKUP_CLOUD_PROVIDER", "s3"), "Cloud provider (s3, minio, b2)")
		cmd.Flags().StringVar(&cloudBucket, "cloud-bucket", getEnv("DBBACKUP_CLOUD_BUCKET", ""), "Bucket name")
		cmd.Flags().StringVar(&cloudRegion, "cloud-region", getEnv("DBBACKUP_CLOUD_REGION", "us-east-1"), "Region")
		cmd.Flags().StringVar(&cloudEndpoint, "cloud-endpoint", getEnv("DBBACKUP_CLOUD_ENDPOINT", ""), "Custom endpoint (for MinIO)")
		cmd.Flags().StringVar(&cloudAccessKey, "cloud-access-key", getEnv("DBBACKUP_CLOUD_ACCESS_KEY", getEnv("AWS_ACCESS_KEY_ID", "")), "Access key")
		cmd.Flags().StringVar(&cloudSecretKey, "cloud-secret-key", getEnv("DBBACKUP_CLOUD_SECRET_KEY", getEnv("AWS_SECRET_ACCESS_KEY", "")), "Secret key")
		cmd.Flags().StringVar(&cloudPrefix, "cloud-prefix", getEnv("DBBACKUP_CLOUD_PREFIX", ""), "Key prefix")
		cmd.Flags().BoolVarP(&cloudVerbose, "verbose", "v", false, "Verbose output")
	}

	cloudDeleteCmd.Flags().BoolVar(&cloudConfirm, "confirm", false, "Skip confirmation prompt")
}

func getEnv(key, defaultValue string) string {
	if value := os.Getenv(key); value != "" {
		return value
	}
	return defaultValue
}

func getCloudBackend() (cloud.Backend, error) {
	cfg := &cloud.Config{
		Provider:   cloudProvider,
		Bucket:     cloudBucket,
		Region:     cloudRegion,
		Endpoint:   cloudEndpoint,
		AccessKey:  cloudAccessKey,
		SecretKey:  cloudSecretKey,
		Prefix:     cloudPrefix,
		UseSSL:     true,
		PathStyle:  cloudProvider == "minio",
		Timeout:    300,
		MaxRetries: 3,
	}

	if cfg.Bucket == "" {
		return nil, fmt.Errorf("bucket name is required (use --cloud-bucket or DBBACKUP_CLOUD_BUCKET)")
	}

	backend, err := cloud.NewBackend(cfg)
	if err != nil {
		return nil, fmt.Errorf("failed to create cloud backend: %w", err)
	}

	return backend, nil
}

func runCloudUpload(cmd *cobra.Command, args []string) error {
	backend, err := getCloudBackend()
	if err != nil {
		return err
	}

	ctx := context.Background()

	// Expand glob patterns
	var files []string
	for _, pattern := range args {
		matches, err := filepath.Glob(pattern)
		if err != nil {
			return fmt.Errorf("invalid pattern %s: %w", pattern, err)
		}
		if len(matches) == 0 {
			files = append(files, pattern)
		} else {
			files = append(files, matches...)
		}
	}

	fmt.Printf("☁️  Uploading %d file(s) to %s...\n\n", len(files), backend.Name())

	successCount := 0
	for _, localPath := range files {
		filename := filepath.Base(localPath)
		fmt.Printf("📤 %s\n", filename)

		// Progress callback
		var lastPercent int
		progress := func(transferred, total int64) {
			if !cloudVerbose {
				return
			}
			percent := int(float64(transferred) / float64(total) * 100)
			if percent != lastPercent && percent%10 == 0 {
				fmt.Printf("   Progress: %d%% (%s / %s)\n",
					percent,
					cloud.FormatSize(transferred),
					cloud.FormatSize(total))
				lastPercent = percent
			}
		}

		err := backend.Upload(ctx, localPath, filename, progress)
		if err != nil {
			fmt.Printf("   ❌ Failed: %v\n\n", err)
			continue
		}

		// Get file size
		if info, err := os.Stat(localPath); err == nil {
			fmt.Printf("   ✅ Uploaded (%s)\n\n", cloud.FormatSize(info.Size()))
		} else {
			fmt.Printf("   ✅ Uploaded\n\n")
		}
		successCount++
	}

	fmt.Println(strings.Repeat("─", 50))
	fmt.Printf("✅ Successfully uploaded %d/%d file(s)\n", successCount, len(files))

	return nil
}

func runCloudDownload(cmd *cobra.Command, args []string) error {
	backend, err := getCloudBackend()
	if err != nil {
		return err
	}

	ctx := context.Background()
	remotePath := args[0]
	localPath := args[1]

	// If localPath is a directory, use the remote filename
	if info, err := os.Stat(localPath); err == nil && info.IsDir() {
		localPath = filepath.Join(localPath, filepath.Base(remotePath))
	}

	fmt.Printf("☁️  Downloading from %s...\n\n", backend.Name())
	fmt.Printf("📥 %s → %s\n", remotePath, localPath)

	// Progress callback
	var lastPercent int
	progress := func(transferred, total int64) {
		if !cloudVerbose {
			return
		}
		percent := int(float64(transferred) / float64(total) * 100)
		if percent != lastPercent && percent%10 == 0 {
			fmt.Printf("   Progress: %d%% (%s / %s)\n",
				percent,
				cloud.FormatSize(transferred),
				cloud.FormatSize(total))
			lastPercent = percent
		}
	}

	err = backend.Download(ctx, remotePath, localPath, progress)
	if err != nil {
		return fmt.Errorf("download failed: %w", err)
	}

	// Get file size
	if info, err := os.Stat(localPath); err == nil {
		fmt.Printf("   ✅ Downloaded (%s)\n", cloud.FormatSize(info.Size()))
	} else {
		fmt.Printf("   ✅ Downloaded\n")
	}

	return nil
}

func runCloudList(cmd *cobra.Command, args []string) error {
	backend, err := getCloudBackend()
	if err != nil {
		return err
	}

	ctx := context.Background()
	prefix := ""
	if len(args) > 0 {
		prefix = args[0]
	}

	fmt.Printf("☁️  Listing backups in %s/%s...\n\n", backend.Name(), cloudBucket)

	backups, err := backend.List(ctx, prefix)
	if err != nil {
		return fmt.Errorf("failed to list backups: %w", err)
	}

	if len(backups) == 0 {
		fmt.Println("No backups found")
		return nil
	}

	var totalSize int64
	for _, backup := range backups {
		totalSize += backup.Size

		if cloudVerbose {
			fmt.Printf("📦 %s\n", backup.Name)
			fmt.Printf("   Size: %s\n", cloud.FormatSize(backup.Size))
			fmt.Printf("   Modified: %s\n", backup.LastModified.Format(time.RFC3339))
			if backup.StorageClass != "" {
				fmt.Printf("   Storage: %s\n", backup.StorageClass)
			}
			fmt.Println()
		} else {
			age := time.Since(backup.LastModified)
			ageStr := formatAge(age)
			fmt.Printf("%-50s %12s  %s\n",
				backup.Name,
				cloud.FormatSize(backup.Size),
				ageStr)
		}
	}

	fmt.Println(strings.Repeat("─", 50))
	fmt.Printf("Total: %d backup(s), %s\n", len(backups), cloud.FormatSize(totalSize))

	return nil
}

func runCloudDelete(cmd *cobra.Command, args []string) error {
	backend, err := getCloudBackend()
	if err != nil {
		return err
	}

	ctx := context.Background()
	remotePath := args[0]

	// Check if file exists
	exists, err := backend.Exists(ctx, remotePath)
	if err != nil {
		return fmt.Errorf("failed to check file: %w", err)
	}
	if !exists {
		return fmt.Errorf("file not found: %s", remotePath)
	}

	// Get file info
	size, err := backend.GetSize(ctx, remotePath)
	if err != nil {
		return fmt.Errorf("failed to get file info: %w", err)
	}

	// Confirmation prompt
	if !cloudConfirm {
		fmt.Printf("⚠️  Delete %s (%s) from cloud storage?\n", remotePath, cloud.FormatSize(size))
		fmt.Print("Type 'yes' to confirm: ")
		var response string
		fmt.Scanln(&response)
		if response != "yes" {
			fmt.Println("Cancelled")
			return nil
		}
	}

	fmt.Printf("🗑️  Deleting %s...\n", remotePath)

	err = backend.Delete(ctx, remotePath)
	if err != nil {
		return fmt.Errorf("delete failed: %w", err)
	}

	fmt.Printf("✅ Deleted %s (%s)\n", remotePath, cloud.FormatSize(size))

	return nil
}

func formatAge(d time.Duration) string {
||||||
|
if d < time.Minute {
|
||||||
|
return "just now"
|
||||||
|
} else if d < time.Hour {
|
||||||
|
return fmt.Sprintf("%d min ago", int(d.Minutes()))
|
||||||
|
} else if d < 24*time.Hour {
|
||||||
|
return fmt.Sprintf("%d hours ago", int(d.Hours()))
|
||||||
|
} else {
|
||||||
|
return fmt.Sprintf("%d days ago", int(d.Hours()/24))
|
||||||
|
}
|
||||||
|
}
|
||||||
@@ -10,6 +10,7 @@ import (
 	"syscall"
 	"time"
 
+	"dbbackup/internal/cloud"
 	"dbbackup/internal/database"
 	"dbbackup/internal/restore"
 	"dbbackup/internal/security"
@@ -169,7 +170,36 @@ func runRestoreSingle(cmd *cobra.Command, args []string) error {
 func runRestoreSingle(cmd *cobra.Command, args []string) error {
 	archivePath := args[0]
 
-	// Convert to absolute path
+	// Check if this is a cloud URI
+	var cleanupFunc func() error
+
+	if cloud.IsCloudURI(archivePath) {
+		log.Info("Detected cloud URI, downloading backup...", "uri", archivePath)
+
+		// Download from cloud
+		result, err := restore.DownloadFromCloudURI(cmd.Context(), archivePath, restore.DownloadOptions{
+			VerifyChecksum: true,
+			KeepLocal:      false, // Delete after restore
+		})
+		if err != nil {
+			return fmt.Errorf("failed to download from cloud: %w", err)
+		}
+
+		archivePath = result.LocalPath
+		cleanupFunc = result.Cleanup
+
+		// Ensure cleanup happens on exit
+		defer func() {
+			if cleanupFunc != nil {
+				if err := cleanupFunc(); err != nil {
+					log.Warn("Failed to cleanup temp files", "error", err)
+				}
+			}
+		}()
+
+		log.Info("Download completed", "local_path", archivePath)
+	} else {
+		// Convert to absolute path for local files
 	if !filepath.IsAbs(archivePath) {
 		absPath, err := filepath.Abs(archivePath)
 		if err != nil {
@@ -182,6 +212,7 @@ func runRestoreSingle(cmd *cobra.Command, args []string) error {
 	if _, err := os.Stat(archivePath); err != nil {
 		return fmt.Errorf("archive not found: %s", archivePath)
 	}
+	}
 
 	// Detect format
 	format := restore.DetectArchiveFormat(archivePath)
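The hunk above branches on `cloud.IsCloudURI(archivePath)` to decide between downloading and local-path handling. One plausible implementation of that kind of check is a simple scheme-prefix test; the scheme list here is an assumption, not the project's actual `cloud.IsCloudURI`:

```go
package main

import (
	"fmt"
	"strings"
)

// isCloudURI sketches the scheme check a helper like cloud.IsCloudURI
// might perform; the supported schemes below are hypothetical.
func isCloudURI(s string) bool {
	for _, prefix := range []string{"s3://", "azure://", "gs://"} {
		if strings.HasPrefix(s, prefix) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isCloudURI("s3://prod-backups/mydb.dump")) // true
	fmt.Println(isCloudURI("/backups/mydb.dump"))          // false
}
```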
235
cmd/verify.go
Normal file
@@ -0,0 +1,235 @@
package cmd

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"time"

	"dbbackup/internal/cloud"
	"dbbackup/internal/metadata"
	"dbbackup/internal/restore"
	"dbbackup/internal/verification"

	"github.com/spf13/cobra"
)

var verifyBackupCmd = &cobra.Command{
	Use:   "verify-backup [backup-file]",
	Short: "Verify backup file integrity with checksums",
	Long: `Verify the integrity of one or more backup files by comparing their SHA-256 checksums
against the stored metadata. This ensures that backups have not been corrupted.

Examples:
  # Verify a single backup
  dbbackup verify-backup /backups/mydb_20260115.dump

  # Verify all backups in a directory
  dbbackup verify-backup /backups/*.dump

  # Quick verification (size check only, no checksum)
  dbbackup verify-backup /backups/mydb.dump --quick

  # Verify and show detailed information
  dbbackup verify-backup /backups/mydb.dump --verbose`,
	Args: cobra.MinimumNArgs(1),
	RunE: runVerifyBackup,
}

var (
	quickVerify   bool
	verboseVerify bool
)

func init() {
	rootCmd.AddCommand(verifyBackupCmd)
	verifyBackupCmd.Flags().BoolVar(&quickVerify, "quick", false, "Quick verification (size check only)")
	verifyBackupCmd.Flags().BoolVarP(&verboseVerify, "verbose", "v", false, "Show detailed information")
}

func runVerifyBackup(cmd *cobra.Command, args []string) error {
	// Check if any argument is a cloud URI
	hasCloudURI := false
	for _, arg := range args {
		if isCloudURI(arg) {
			hasCloudURI = true
			break
		}
	}

	// If cloud URIs detected, handle separately
	if hasCloudURI {
		return runVerifyCloudBackup(cmd, args)
	}

	// Expand glob patterns for local files
	var backupFiles []string
	for _, pattern := range args {
		matches, err := filepath.Glob(pattern)
		if err != nil {
			return fmt.Errorf("invalid pattern %s: %w", pattern, err)
		}
		if len(matches) == 0 {
			// Not a glob, use as-is
			backupFiles = append(backupFiles, pattern)
		} else {
			backupFiles = append(backupFiles, matches...)
		}
	}

	if len(backupFiles) == 0 {
		return fmt.Errorf("no backup files found")
	}

	fmt.Printf("Verifying %d backup file(s)...\n\n", len(backupFiles))

	successCount := 0
	failureCount := 0

	for _, backupFile := range backupFiles {
		// Skip metadata files
		if strings.HasSuffix(backupFile, ".meta.json") ||
			strings.HasSuffix(backupFile, ".sha256") ||
			strings.HasSuffix(backupFile, ".info") {
			continue
		}

		fmt.Printf("📁 %s\n", filepath.Base(backupFile))

		if quickVerify {
			// Quick check: size only
			err := verification.QuickCheck(backupFile)
			if err != nil {
				fmt.Printf("   ❌ FAILED: %v\n\n", err)
				failureCount++
				continue
			}
			fmt.Printf("   ✅ VALID (quick check)\n\n")
			successCount++
		} else {
			// Full verification with SHA-256
			result, err := verification.Verify(backupFile)
			if err != nil {
				return fmt.Errorf("verification error: %w", err)
			}

			if result.Valid {
				fmt.Printf("   ✅ VALID\n")
				if verboseVerify {
					meta, _ := metadata.Load(backupFile)
					fmt.Printf("      Size:     %s\n", metadata.FormatSize(meta.SizeBytes))
					fmt.Printf("      SHA-256:  %s\n", meta.SHA256)
					fmt.Printf("      Database: %s (%s)\n", meta.Database, meta.DatabaseType)
					fmt.Printf("      Created:  %s\n", meta.Timestamp.Format(time.RFC3339))
				}
				fmt.Println()
				successCount++
			} else {
				fmt.Printf("   ❌ FAILED: %v\n", result.Error)
				if verboseVerify {
					if !result.FileExists {
						fmt.Printf("      File does not exist\n")
					} else if !result.MetadataExists {
						fmt.Printf("      Metadata file missing\n")
					} else if !result.SizeMatch {
						fmt.Printf("      Size mismatch\n")
					} else {
						fmt.Printf("      Expected: %s\n", result.ExpectedSHA256)
						fmt.Printf("      Got:      %s\n", result.CalculatedSHA256)
					}
				}
				fmt.Println()
				failureCount++
			}
		}
	}

	// Summary
	fmt.Println(strings.Repeat("─", 50))
	fmt.Printf("Total: %d backups\n", len(backupFiles))
	fmt.Printf("✅ Valid:  %d\n", successCount)
	if failureCount > 0 {
		fmt.Printf("❌ Failed: %d\n", failureCount)
		os.Exit(1)
	}

	return nil
}

// isCloudURI checks if a string is a cloud URI
func isCloudURI(s string) bool {
	return cloud.IsCloudURI(s)
}

// verifyCloudBackup downloads and verifies a backup from cloud storage
func verifyCloudBackup(ctx context.Context, uri string, quick, verbose bool) (*restore.DownloadResult, error) {
	// Download from cloud with checksum verification
	result, err := restore.DownloadFromCloudURI(ctx, uri, restore.DownloadOptions{
		VerifyChecksum: !quick, // Skip checksum if quick mode
		KeepLocal:      false,
	})
	if err != nil {
		return nil, err
	}

	// If not quick mode, also run full verification
	if !quick {
		_, err := verification.Verify(result.LocalPath)
		if err != nil {
			result.Cleanup()
			return nil, err
		}
	}

	return result, nil
}

// runVerifyCloudBackup verifies backups from cloud storage
func runVerifyCloudBackup(cmd *cobra.Command, args []string) error {
	fmt.Printf("Verifying cloud backup(s)...\n\n")

	successCount := 0
	failureCount := 0

	for _, uri := range args {
		if !isCloudURI(uri) {
			fmt.Printf("⚠️  Skipping non-cloud URI: %s\n", uri)
			continue
		}

		fmt.Printf("☁️  %s\n", uri)

		// Download and verify
		result, err := verifyCloudBackup(cmd.Context(), uri, quickVerify, verboseVerify)
		if err != nil {
			fmt.Printf("   ❌ FAILED: %v\n\n", err)
			failureCount++
			continue
		}

		// Cleanup temp file
		defer result.Cleanup()

		fmt.Printf("   ✅ VALID\n")
		if verboseVerify && result.MetadataPath != "" {
			meta, _ := metadata.Load(result.MetadataPath)
			if meta != nil {
				fmt.Printf("      Size:     %s\n", metadata.FormatSize(meta.SizeBytes))
				fmt.Printf("      SHA-256:  %s\n", meta.SHA256)
				fmt.Printf("      Database: %s (%s)\n", meta.Database, meta.DatabaseType)
				fmt.Printf("      Created:  %s\n", meta.Timestamp.Format(time.RFC3339))
			}
		}
		fmt.Println()
		successCount++
	}

	fmt.Printf("\n✅ Summary: %d valid, %d failed\n", successCount, failureCount)

	if failureCount > 0 {
		os.Exit(1)
	}

	return nil
}
66
docker-compose.azurite.yml
Normal file
@@ -0,0 +1,66 @@
version: '3.8'

services:
  # Azurite - Azure Storage Emulator
  azurite:
    image: mcr.microsoft.com/azure-storage/azurite:latest
    container_name: dbbackup-azurite
    ports:
      - "10000:10000"  # Blob service
      - "10001:10001"  # Queue service
      - "10002:10002"  # Table service
    volumes:
      - azurite_data:/data
    command: azurite --blobHost 0.0.0.0 --queueHost 0.0.0.0 --tableHost 0.0.0.0 --loose --skipApiVersionCheck
    healthcheck:
      test: ["CMD", "nc", "-z", "localhost", "10000"]
      interval: 5s
      timeout: 3s
      retries: 30
    networks:
      - dbbackup-net

  # PostgreSQL 16 for testing
  postgres:
    image: postgres:16-alpine
    container_name: dbbackup-postgres-azure
    environment:
      POSTGRES_USER: testuser
      POSTGRES_PASSWORD: testpass
      POSTGRES_DB: testdb
    ports:
      - "5434:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U testuser -d testdb"]
      interval: 5s
      timeout: 3s
      retries: 10
    networks:
      - dbbackup-net

  # MySQL 8.0 for testing
  mysql:
    image: mysql:8.0
    container_name: dbbackup-mysql-azure
    environment:
      MYSQL_ROOT_PASSWORD: rootpass
      MYSQL_DATABASE: testdb
      MYSQL_USER: testuser
      MYSQL_PASSWORD: testpass
    ports:
      - "3308:3306"
    command: --default-authentication-plugin=mysql_native_password
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-u", "root", "-prootpass"]
      interval: 5s
      timeout: 3s
      retries: 10
    networks:
      - dbbackup-net

volumes:
  azurite_data:

networks:
  dbbackup-net:
    driver: bridge
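Clients talking to the Azurite container above typically authenticate with Azurite's well-known development-storage account (these defaults ship with the emulator and are public, not secrets). A sketch of building the corresponding connection string; the helper name is hypothetical:

```go
package main

import "fmt"

// azuriteConnString builds an Azure Storage connection string for a local
// Azurite endpoint using its well-known development-storage credentials.
func azuriteConnString(host string) string {
	const (
		account = "devstoreaccount1"
		key     = "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
	)
	// Azurite serves every account on one host, so the account name appears
	// in the endpoint path (unlike real Azure's per-account subdomains).
	return fmt.Sprintf(
		"DefaultEndpointsProtocol=http;AccountName=%s;AccountKey=%s;BlobEndpoint=http://%s/%s;",
		account, key, host, account)
}

func main() {
	fmt.Println(azuriteConnString("127.0.0.1:10000"))
}
```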
59
docker-compose.gcs.yml
Normal file
@@ -0,0 +1,59 @@
version: '3.8'

services:
  # fake-gcs-server - Google Cloud Storage Emulator
  gcs-emulator:
    image: fsouza/fake-gcs-server:latest
    container_name: dbbackup-gcs
    ports:
      - "4443:4443"
    command: -scheme http -public-host localhost:4443 -external-url http://localhost:4443
    healthcheck:
      test: ["CMD", "wget", "--spider", "-q", "http://localhost:4443/storage/v1/b"]
      interval: 5s
      timeout: 3s
      retries: 30
    networks:
      - dbbackup-net

  # PostgreSQL 16 for testing
  postgres:
    image: postgres:16-alpine
    container_name: dbbackup-postgres-gcs
    environment:
      POSTGRES_USER: testuser
      POSTGRES_PASSWORD: testpass
      POSTGRES_DB: testdb
    ports:
      - "5435:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U testuser -d testdb"]
      interval: 5s
      timeout: 3s
      retries: 10
    networks:
      - dbbackup-net

  # MySQL 8.0 for testing
  mysql:
    image: mysql:8.0
    container_name: dbbackup-mysql-gcs
    environment:
      MYSQL_ROOT_PASSWORD: rootpass
      MYSQL_DATABASE: testdb
      MYSQL_USER: testuser
      MYSQL_PASSWORD: testpass
    ports:
      - "3309:3306"
    command: --default-authentication-plugin=mysql_native_password
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-u", "root", "-prootpass"]
      interval: 5s
      timeout: 3s
      retries: 10
    networks:
      - dbbackup-net

networks:
  dbbackup-net:
    driver: bridge
101
docker-compose.minio.yml
Normal file
@@ -0,0 +1,101 @@
version: '3.8'

services:
  # MinIO S3-compatible object storage for testing
  minio:
    image: minio/minio:latest
    container_name: dbbackup-minio
    ports:
      - "9000:9000"  # S3 API
      - "9001:9001"  # Web Console
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin123
      MINIO_REGION: us-east-1
    volumes:
      - minio-data:/data
    command: server /data --console-address ":9001"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
    networks:
      - dbbackup-test

  # PostgreSQL database for backup testing
  postgres:
    image: postgres:16-alpine
    container_name: dbbackup-postgres-test
    environment:
      POSTGRES_USER: testuser
      POSTGRES_PASSWORD: testpass123
      POSTGRES_DB: testdb
      POSTGRES_INITDB_ARGS: "-E UTF8 --locale=C"
    ports:
      - "5433:5432"
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./test_data:/docker-entrypoint-initdb.d
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U testuser"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - dbbackup-test

  # MySQL database for backup testing
  mysql:
    image: mysql:8.0
    container_name: dbbackup-mysql-test
    environment:
      MYSQL_ROOT_PASSWORD: rootpass123
      MYSQL_DATABASE: testdb
      MYSQL_USER: testuser
      MYSQL_PASSWORD: testpass123
    ports:
      - "3307:3306"
    volumes:
      - mysql-data:/var/lib/mysql
      - ./test_data:/docker-entrypoint-initdb.d
    command: --default-authentication-plugin=mysql_native_password
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-u", "root", "-prootpass123"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - dbbackup-test

  # MinIO Client (mc) for bucket management
  minio-mc:
    image: minio/mc:latest
    container_name: dbbackup-minio-mc
    depends_on:
      minio:
        condition: service_healthy
    entrypoint: >
      /bin/sh -c "
      sleep 5;
      /usr/bin/mc alias set myminio http://minio:9000 minioadmin minioadmin123;
      /usr/bin/mc mb --ignore-existing myminio/test-backups;
      /usr/bin/mc mb --ignore-existing myminio/production-backups;
      /usr/bin/mc mb --ignore-existing myminio/dev-backups;
      echo 'MinIO buckets created successfully';
      exit 0;
      "
    networks:
      - dbbackup-test

volumes:
  minio-data:
    driver: local
  postgres-data:
    driver: local
  mysql-data:
    driver: local

networks:
  dbbackup-test:
    driver: bridge
88
docker-compose.yml
Normal file
@@ -0,0 +1,88 @@
version: '3.8'

services:
  # PostgreSQL backup example
  postgres-backup:
    build: .
    image: dbbackup:latest
    container_name: dbbackup-postgres
    volumes:
      - ./backups:/backups
      - ./config/.dbbackup.conf:/home/dbbackup/.dbbackup.conf:ro
    environment:
      - PGHOST=postgres
      - PGPORT=5432
      - PGUSER=postgres
      - PGPASSWORD=secret
    command: backup single mydb
    depends_on:
      - postgres
    networks:
      - dbnet

  # MySQL backup example
  mysql-backup:
    build: .
    image: dbbackup:latest
    container_name: dbbackup-mysql
    volumes:
      - ./backups:/backups
    environment:
      - MYSQL_HOST=mysql
      - MYSQL_PORT=3306
      - MYSQL_USER=root
      - MYSQL_PWD=secret
    command: backup single mydb --db-type mysql
    depends_on:
      - mysql
    networks:
      - dbnet

  # Interactive mode example
  dbbackup-interactive:
    build: .
    image: dbbackup:latest
    container_name: dbbackup-tui
    volumes:
      - ./backups:/backups
    environment:
      - PGHOST=postgres
      - PGUSER=postgres
      - PGPASSWORD=secret
    command: interactive
    stdin_open: true
    tty: true
    networks:
      - dbnet

  # Test PostgreSQL database
  postgres:
    image: postgres:15-alpine
    container_name: test-postgres
    environment:
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=mydb
    volumes:
      - postgres-data:/var/lib/postgresql/data
    networks:
      - dbnet

  # Test MySQL database
  mysql:
    image: mysql:8.0
    container_name: test-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=secret
      - MYSQL_DATABASE=mydb
    volumes:
      - mysql-data:/var/lib/mysql
    networks:
      - dbnet

volumes:
  postgres-data:
  mysql-data:

networks:
  dbnet:
    driver: bridge
75
go.mod
@@ -17,14 +17,60 @@ require (
 )
 
 require (
+	cel.dev/expr v0.24.0 // indirect
+	cloud.google.com/go v0.121.6 // indirect
+	cloud.google.com/go/auth v0.17.0 // indirect
+	cloud.google.com/go/auth/oauth2adapt v0.2.8 // indirect
+	cloud.google.com/go/compute/metadata v0.9.0 // indirect
+	cloud.google.com/go/iam v1.5.2 // indirect
+	cloud.google.com/go/monitoring v1.24.2 // indirect
+	cloud.google.com/go/storage v1.57.2 // indirect
 	filippo.io/edwards25519 v1.1.0 // indirect
+	github.com/Azure/azure-sdk-for-go/sdk/azcore v1.20.0 // indirect
+	github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2 // indirect
+	github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3 // indirect
+	github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.29.0 // indirect
+	github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.53.0 // indirect
+	github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.53.0 // indirect
+	github.com/aws/aws-sdk-go-v2 v1.40.0 // indirect
+	github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.3 // indirect
+	github.com/aws/aws-sdk-go-v2/config v1.32.2 // indirect
+	github.com/aws/aws-sdk-go-v2/credentials v1.19.2 // indirect
+	github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.14 // indirect
+	github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.12 // indirect
+	github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.14 // indirect
+	github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.14 // indirect
+	github.com/aws/aws-sdk-go-v2/internal/ini v1.8.4 // indirect
+	github.com/aws/aws-sdk-go-v2/internal/v4a v1.4.14 // indirect
+	github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.3 // indirect
+	github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.5 // indirect
+	github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.14 // indirect
+	github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.14 // indirect
+	github.com/aws/aws-sdk-go-v2/service/s3 v1.92.1 // indirect
+	github.com/aws/aws-sdk-go-v2/service/signin v1.0.2 // indirect
+	github.com/aws/aws-sdk-go-v2/service/sso v1.30.5 // indirect
+	github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.10 // indirect
+	github.com/aws/aws-sdk-go-v2/service/sts v1.41.2 // indirect
+	github.com/aws/smithy-go v1.23.2 // indirect
 	github.com/aymanbagabas/go-osc52/v2 v2.0.1 // indirect
+	github.com/cespare/xxhash/v2 v2.3.0 // indirect
 	github.com/charmbracelet/colorprofile v0.2.3-0.20250311203215-f60798e515dc // indirect
 	github.com/charmbracelet/x/ansi v0.10.1 // indirect
 	github.com/charmbracelet/x/cellbuf v0.0.13-0.20250311204145-2c3ea96c31dd // indirect
 	github.com/charmbracelet/x/term v0.2.1 // indirect
+	github.com/cncf/xds/go v0.0.0-20250501225837-2ac532fd4443 // indirect
 	github.com/creack/pty v1.1.17 // indirect
+	github.com/envoyproxy/go-control-plane/envoy v1.32.4 // indirect
+	github.com/envoyproxy/protoc-gen-validate v1.2.1 // indirect
 	github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f // indirect
+	github.com/felixge/httpsnoop v1.0.4 // indirect
+	github.com/go-jose/go-jose/v4 v4.1.2 // indirect
+	github.com/go-logr/logr v1.4.3 // indirect
+	github.com/go-logr/stdr v1.2.2 // indirect
+	github.com/google/s2a-go v0.1.9 // indirect
+	github.com/google/uuid v1.6.0 // indirect
+	github.com/googleapis/enterprise-certificate-proxy v0.3.7 // indirect
+	github.com/googleapis/gax-go/v2 v2.15.0 // indirect
 	github.com/inconshreveable/mousetrap v1.1.0 // indirect
 	github.com/jackc/pgpassfile v1.0.0 // indirect
 	github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
@@ -36,10 +82,31 @@ require (
 	github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6 // indirect
 	github.com/muesli/cancelreader v0.2.2 // indirect
 	github.com/muesli/termenv v0.16.0 // indirect
+	github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 // indirect
 	github.com/rivo/uniseg v0.4.7 // indirect
+	github.com/spiffe/go-spiffe/v2 v2.5.0 // indirect
 	github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e // indirect
-	golang.org/x/crypto v0.37.0 // indirect
-	golang.org/x/sync v0.13.0 // indirect
-	golang.org/x/sys v0.36.0 // indirect
-	golang.org/x/text v0.24.0 // indirect
+	github.com/zeebo/errs v1.4.0 // indirect
+	go.opentelemetry.io/auto/sdk v1.1.0 // indirect
+	go.opentelemetry.io/contrib/detectors/gcp v1.36.0 // indirect
+	go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0 // indirect
+	go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 // indirect
+	go.opentelemetry.io/otel v1.37.0 // indirect
+	go.opentelemetry.io/otel/metric v1.37.0 // indirect
+	go.opentelemetry.io/otel/sdk v1.37.0 // indirect
+	go.opentelemetry.io/otel/sdk/metric v1.37.0 // indirect
+	go.opentelemetry.io/otel/trace v1.37.0 // indirect
+	golang.org/x/crypto v0.43.0 // indirect
+	golang.org/x/net v0.46.0 // indirect
+	golang.org/x/oauth2 v0.33.0 // indirect
+	golang.org/x/sync v0.18.0 // indirect
+	golang.org/x/sys v0.37.0 // indirect
+	golang.org/x/text v0.30.0 // indirect
+	golang.org/x/time v0.14.0 // indirect
+	google.golang.org/api v0.256.0 // indirect
+	google.golang.org/genproto v0.0.0-20250603155806-513f23925822 // indirect
+	google.golang.org/genproto/googleapis/api v0.0.0-20250818200422-3122310a409c // indirect
+	google.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101 // indirect
+	google.golang.org/grpc v1.76.0 // indirect
+	google.golang.org/protobuf v1.36.10 // indirect
 )
166
go.sum
166
go.sum
@@ -1,9 +1,93 @@
+cel.dev/expr v0.24.0 h1:56OvJKSH3hDGL0ml5uSxZmz3/3Pq4tJ+fb1unVLAFcY=
+cel.dev/expr v0.24.0/go.mod h1:hLPLo1W4QUmuYdA72RBX06QTs6MXw941piREPl3Yfiw=
+cloud.google.com/go v0.121.6 h1:waZiuajrI28iAf40cWgycWNgaXPO06dupuS+sgibK6c=
+cloud.google.com/go v0.121.6/go.mod h1:coChdst4Ea5vUpiALcYKXEpR1S9ZgXbhEzzMcMR66vI=
+cloud.google.com/go/auth v0.17.0 h1:74yCm7hCj2rUyyAocqnFzsAYXgJhrG26XCFimrc/Kz4=
+cloud.google.com/go/auth v0.17.0/go.mod h1:6wv/t5/6rOPAX4fJiRjKkJCvswLwdet7G8+UGXt7nCQ=
+cloud.google.com/go/auth/oauth2adapt v0.2.8 h1:keo8NaayQZ6wimpNSmW5OPc283g65QNIiLpZnkHRbnc=
+cloud.google.com/go/auth/oauth2adapt v0.2.8/go.mod h1:XQ9y31RkqZCcwJWNSx2Xvric3RrU88hAYYbjDWYDL+c=
+cloud.google.com/go/compute/metadata v0.9.0 h1:pDUj4QMoPejqq20dK0Pg2N4yG9zIkYGdBtwLoEkH9Zs=
+cloud.google.com/go/compute/metadata v0.9.0/go.mod h1:E0bWwX5wTnLPedCKqk3pJmVgCBSM6qQI1yTBdEb3C10=
+cloud.google.com/go/iam v1.5.2 h1:qgFRAGEmd8z6dJ/qyEchAuL9jpswyODjA2lS+w234g8=
+cloud.google.com/go/iam v1.5.2/go.mod h1:SE1vg0N81zQqLzQEwxL2WI6yhetBdbNQuTvIKCSkUHE=
+cloud.google.com/go/monitoring v1.24.2 h1:5OTsoJ1dXYIiMiuL+sYscLc9BumrL3CarVLL7dd7lHM=
+cloud.google.com/go/monitoring v1.24.2/go.mod h1:x7yzPWcgDRnPEv3sI+jJGBkwl5qINf+6qY4eq0I9B4U=
+cloud.google.com/go/storage v1.57.2 h1:sVlym3cHGYhrp6XZKkKb+92I1V42ks2qKKpB0CF5Mb4=
+cloud.google.com/go/storage v1.57.2/go.mod h1:n5ijg4yiRXXpCu0sJTD6k+eMf7GRrJmPyr9YxLXGHOk=
 filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
 filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
+github.com/Azure/azure-sdk-for-go/sdk/azcore v1.20.0 h1:JXg2dwJUmPB9JmtVmdEB16APJ7jurfbY5jnfXpJoRMc=
+github.com/Azure/azure-sdk-for-go/sdk/azcore v1.20.0/go.mod h1:YD5h/ldMsG0XiIw7PdyNhLxaM317eFh5yNLccNfGdyw=
+github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2 h1:9iefClla7iYpfYWdzPCRDozdmndjTm8DXdpCzPajMgA=
+github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2/go.mod h1:XtLgD3ZD34DAaVIIAyG3objl5DynM3CQ/vMcbBNJZGI=
+github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3 h1:ZJJNFaQ86GVKQ9ehwqyAFE6pIfyicpuJ8IkVaPBc6/4=
+github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3/go.mod h1:URuDvhmATVKqHBH9/0nOiNKk0+YcwfQ3WkK5PqHKxc8=
+github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.29.0 h1:UQUsRi8WTzhZntp5313l+CHIAT95ojUI2lpP/ExlZa4=
+github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.29.0/go.mod h1:Cz6ft6Dkn3Et6l2v2a9/RpN7epQ1GtDlO6lj8bEcOvw=
+github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.53.0 h1:owcC2UnmsZycprQ5RfRgjydWhuoxg71LUfyiQdijZuM=
+github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.53.0/go.mod h1:ZPpqegjbE99EPKsu3iUWV22A04wzGPcAY/ziSIQEEgs=
+github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.53.0 h1:Ron4zCA/yk6U7WOBXhTJcDpsUBG9npumK6xw2auFltQ=
+github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.53.0/go.mod h1:cSgYe11MCNYunTnRXrKiR/tHc0eoKjICUuWpNZoVCOo=
 github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2 h1:+vx7roKuyA63nhn5WAunQHLTznkw5W8b1Xc0dNjp83s=
 github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2/go.mod h1:HBCaDeC1lPdgDeDbhX8XFpy1jqjK0IBG8W5K+xYqA0w=
+github.com/aws/aws-sdk-go-v2 v1.40.0 h1:/WMUA0kjhZExjOQN2z3oLALDREea1A7TobfuiBrKlwc=
+github.com/aws/aws-sdk-go-v2 v1.40.0/go.mod h1:c9pm7VwuW0UPxAEYGyTmyurVcNrbF6Rt/wixFqDhcjE=
+github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.3 h1:DHctwEM8P8iTXFxC/QK0MRjwEpWQeM9yzidCRjldUz0=
+github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.3/go.mod h1:xdCzcZEtnSTKVDOmUZs4l/j3pSV6rpo1WXl5ugNsL8Y=
+github.com/aws/aws-sdk-go-v2/config v1.32.1 h1:iODUDLgk3q8/flEC7ymhmxjfoAnBDwEEYEVyKZ9mzjU=
+github.com/aws/aws-sdk-go-v2/config v1.32.1/go.mod h1:xoAgo17AGrPpJBSLg81W+ikM0cpOZG8ad04T2r+d5P0=
+github.com/aws/aws-sdk-go-v2/config v1.32.2 h1:4liUsdEpUUPZs5WVapsJLx5NPmQhQdez7nYFcovrytk=
+github.com/aws/aws-sdk-go-v2/config v1.32.2/go.mod h1:l0hs06IFz1eCT+jTacU/qZtC33nvcnLADAPL/XyrkZI=
+github.com/aws/aws-sdk-go-v2/credentials v1.19.1 h1:JeW+EwmtTE0yXFK8SmklrFh/cGTTXsQJumgMZNlbxfM=
+github.com/aws/aws-sdk-go-v2/credentials v1.19.1/go.mod h1:BOoXiStwTF+fT2XufhO0Efssbi1CNIO/ZXpZu87N0pw=
+github.com/aws/aws-sdk-go-v2/credentials v1.19.2 h1:qZry8VUyTK4VIo5aEdUcBjPZHL2v4FyQ3QEOaWcFLu4=
+github.com/aws/aws-sdk-go-v2/credentials v1.19.2/go.mod h1:YUqm5a1/kBnoK+/NY5WEiMocZihKSo15/tJdmdXnM5g=
+github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.14 h1:WZVR5DbDgxzA0BJeudId89Kmgy6DIU4ORpxwsVHz0qA=
+github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.14/go.mod h1:Dadl9QO0kHgbrH1GRqGiZdYtW5w+IXXaBNCHTIaheM4=
+github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.12 h1:Zy6Tme1AA13kX8x3CnkHx5cqdGWGaj/anwOiWGnA0Xo=
+github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.12/go.mod h1:ql4uXYKoTM9WUAUSmthY4AtPVrlTBZOvnBJTiCUdPxI=
+github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.14 h1:PZHqQACxYb8mYgms4RZbhZG0a7dPW06xOjmaH0EJC/I=
+github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.14/go.mod h1:VymhrMJUWs69D8u0/lZ7jSB6WgaG/NqHi3gX0aYf6U0=
+github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.14 h1:bOS19y6zlJwagBfHxs0ESzr1XCOU2KXJCWcq3E2vfjY=
+github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.14/go.mod h1:1ipeGBMAxZ0xcTm6y6paC2C/J6f6OO7LBODV9afuAyM=
+github.com/aws/aws-sdk-go-v2/internal/ini v1.8.4 h1:WKuaxf++XKWlHWu9ECbMlha8WOEGm0OUEZqm4K/Gcfk=
+github.com/aws/aws-sdk-go-v2/internal/ini v1.8.4/go.mod h1:ZWy7j6v1vWGmPReu0iSGvRiise4YI5SkR3OHKTZ6Wuc=
+github.com/aws/aws-sdk-go-v2/internal/v4a v1.4.14 h1:ITi7qiDSv/mSGDSWNpZ4k4Ve0DQR6Ug2SJQ8zEHoDXg=
+github.com/aws/aws-sdk-go-v2/internal/v4a v1.4.14/go.mod h1:k1xtME53H1b6YpZt74YmwlONMWf4ecM+lut1WQLAF/U=
+github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.3 h1:x2Ibm/Af8Fi+BH+Hsn9TXGdT+hKbDd5XOTZxTMxDk7o=
+github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.3/go.mod h1:IW1jwyrQgMdhisceG8fQLmQIydcT/jWY21rFhzgaKwo=
+github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.5 h1:Hjkh7kE6D81PgrHlE/m9gx+4TyyeLHuY8xJs7yXN5C4=
+github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.5/go.mod h1:nPRXgyCfAurhyaTMoBMwRBYBhaHI4lNPAnJmjM0Tslc=
+github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.14 h1:FIouAnCE46kyYqyhs0XEBDFFSREtdnr8HQuLPQPLCrY=
+github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.14/go.mod h1:UTwDc5COa5+guonQU8qBikJo1ZJ4ln2r1MkF7Dqag1E=
+github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.14 h1:FzQE21lNtUor0Fb7QNgnEyiRCBlolLTX/Z1j65S7teM=
+github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.14/go.mod h1:s1ydyWG9pm3ZwmmYN21HKyG9WzAZhYVW85wMHs5FV6w=
+github.com/aws/aws-sdk-go-v2/service/s3 v1.92.0 h1:8FshVvnV2sr9kOSAbOnc/vwVmmAwMjOedKH6JW2ddPM=
+github.com/aws/aws-sdk-go-v2/service/s3 v1.92.0/go.mod h1:wYNqY3L02Z3IgRYxOBPH9I1zD9Cjh9hI5QOy/eOjQvw=
+github.com/aws/aws-sdk-go-v2/service/s3 v1.92.1 h1:OgQy/+0+Kc3khtqiEOk23xQAglXi3Tj0y5doOxbi5tg=
+github.com/aws/aws-sdk-go-v2/service/s3 v1.92.1/go.mod h1:wYNqY3L02Z3IgRYxOBPH9I1zD9Cjh9hI5QOy/eOjQvw=
+github.com/aws/aws-sdk-go-v2/service/signin v1.0.1 h1:BDgIUYGEo5TkayOWv/oBLPphWwNm/A91AebUjAu5L5g=
+github.com/aws/aws-sdk-go-v2/service/signin v1.0.1/go.mod h1:iS6EPmNeqCsGo+xQmXv0jIMjyYtQfnwg36zl2FwEouk=
+github.com/aws/aws-sdk-go-v2/service/signin v1.0.2 h1:MxMBdKTYBjPQChlJhi4qlEueqB1p1KcbTEa7tD5aqPs=
+github.com/aws/aws-sdk-go-v2/service/signin v1.0.2/go.mod h1:iS6EPmNeqCsGo+xQmXv0jIMjyYtQfnwg36zl2FwEouk=
+github.com/aws/aws-sdk-go-v2/service/sso v1.30.4 h1:U//SlnkE1wOQiIImxzdY5PXat4Wq+8rlfVEw4Y7J8as=
+github.com/aws/aws-sdk-go-v2/service/sso v1.30.4/go.mod h1:av+ArJpoYf3pgyrj6tcehSFW+y9/QvAY8kMooR9bZCw=
+github.com/aws/aws-sdk-go-v2/service/sso v1.30.5 h1:ksUT5KtgpZd3SAiFJNJ0AFEJVva3gjBmN7eXUZjzUwQ=
+github.com/aws/aws-sdk-go-v2/service/sso v1.30.5/go.mod h1:av+ArJpoYf3pgyrj6tcehSFW+y9/QvAY8kMooR9bZCw=
+github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.9 h1:LU8S9W/mPDAU9q0FjCLi0TrCheLMGwzbRpvUMwYspcA=
+github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.9/go.mod h1:/j67Z5XBVDx8nZVp9EuFM9/BS5dvBznbqILGuu73hug=
+github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.10 h1:GtsxyiF3Nd3JahRBJbxLCCdYW9ltGQYrFWg8XdkGDd8=
+github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.10/go.mod h1:/j67Z5XBVDx8nZVp9EuFM9/BS5dvBznbqILGuu73hug=
+github.com/aws/aws-sdk-go-v2/service/sts v1.41.1 h1:GdGmKtG+/Krag7VfyOXV17xjTCz0i9NT+JnqLTOI5nA=
+github.com/aws/aws-sdk-go-v2/service/sts v1.41.1/go.mod h1:6TxbXoDSgBQ225Qd8Q+MbxUxUh6TtNKwbRt/EPS9xso=
+github.com/aws/aws-sdk-go-v2/service/sts v1.41.2 h1:a5UTtD4mHBU3t0o6aHQZFJTNKVfxFWfPX7J0Lr7G+uY=
+github.com/aws/aws-sdk-go-v2/service/sts v1.41.2/go.mod h1:6TxbXoDSgBQ225Qd8Q+MbxUxUh6TtNKwbRt/EPS9xso=
+github.com/aws/smithy-go v1.23.2 h1:Crv0eatJUQhaManss33hS5r40CG3ZFH+21XSkqMrIUM=
+github.com/aws/smithy-go v1.23.2/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
 github.com/aymanbagabas/go-osc52/v2 v2.0.1 h1:HwpRHbFMcZLEVr42D4p7XBqjyuxQH5SMiErDT4WkJ2k=
 github.com/aymanbagabas/go-osc52/v2 v2.0.1/go.mod h1:uYgXzlJ7ZpABp8OJ+exZzJJhRNQ2ASbcXHWsFqH8hp8=
+github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
+github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
 github.com/charmbracelet/bubbles v0.21.0 h1:9TdC97SdRVg/1aaXNVWfFH3nnLAwOXr8Fn6u6mfQdFs=
 github.com/charmbracelet/bubbles v0.21.0/go.mod h1:HF+v6QUR4HkEpz62dx7ym2xc71/KBHg+zKwJtMw+qtg=
 github.com/charmbracelet/bubbletea v1.3.10 h1:otUDHWMMzQSB0Pkc87rm691KZ3SWa4KUlvF9nRvCICw=
@@ -18,16 +102,39 @@ github.com/charmbracelet/x/cellbuf v0.0.13-0.20250311204145-2c3ea96c31dd h1:vy0G
 github.com/charmbracelet/x/cellbuf v0.0.13-0.20250311204145-2c3ea96c31dd/go.mod h1:xe0nKWGd3eJgtqZRaN9RjMtK7xUYchjzPr7q6kcvCCs=
 github.com/charmbracelet/x/term v0.2.1 h1:AQeHeLZ1OqSXhrAWpYUtZyX1T3zVxfpZuEQMIQaGIAQ=
 github.com/charmbracelet/x/term v0.2.1/go.mod h1:oQ4enTYFV7QN4m0i9mzHrViD7TQKvNEEkHUMCmsxdUg=
+github.com/cncf/xds/go v0.0.0-20250501225837-2ac532fd4443 h1:aQ3y1lwWyqYPiWZThqv1aFbZMiM9vblcSArJRf2Irls=
+github.com/cncf/xds/go v0.0.0-20250501225837-2ac532fd4443/go.mod h1:W+zGtBO5Y1IgJhy4+A9GOqVhqLpfZi+vwmdNXUehLA8=
 github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
 github.com/creack/pty v1.1.17 h1:QeVUsEDNrLBW4tMgZHvxy18sKtr6VI492kBhUfhDJNI=
 github.com/creack/pty v1.1.17/go.mod h1:MOBLtS5ELjhRRrroQr9kyvTxUAFNvYEK993ew/Vr4O4=
 github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/envoyproxy/go-control-plane/envoy v1.32.4 h1:jb83lalDRZSpPWW2Z7Mck/8kXZ5CQAFYVjQcdVIr83A=
+github.com/envoyproxy/go-control-plane/envoy v1.32.4/go.mod h1:Gzjc5k8JcJswLjAx1Zm+wSYE20UrLtt7JZMWiWQXQEw=
+github.com/envoyproxy/protoc-gen-validate v1.2.1 h1:DEo3O99U8j4hBFwbJfrz9VtgcDfUKS7KJ7spH3d86P8=
+github.com/envoyproxy/protoc-gen-validate v1.2.1/go.mod h1:d/C80l/jxXLdfEIhX1W2TmLfsJ31lvEjwamM4DxlWXU=
 github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f h1:Y/CXytFA4m6baUTXGLOoWe4PQhGxaX0KpnayAqC48p4=
 github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f/go.mod h1:vw97MGsxSvLiUE2X8qFplwetxpGLQrlU1Q9AUEIzCaM=
+github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
+github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
+github.com/go-jose/go-jose/v4 v4.1.2 h1:TK/7NqRQZfgAh+Td8AlsrvtPoUyiHh0LqVvokh+1vHI=
+github.com/go-jose/go-jose/v4 v4.1.2/go.mod h1:22cg9HWM1pOlnRiY+9cQYJ9XHmya1bYW8OeDM6Ku6Oo=
+github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
+github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
+github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
+github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
+github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
 github.com/go-sql-driver/mysql v1.9.3 h1:U/N249h2WzJ3Ukj8SowVFjdtZKfu9vlLZxjPXV1aweo=
 github.com/go-sql-driver/mysql v1.9.3/go.mod h1:qn46aNg1333BRMNU69Lq93t8du/dwxI64Gl8i5p1WMU=
+github.com/google/s2a-go v0.1.9 h1:LGD7gtMgezd8a/Xak7mEWL0PjoTQFvpRudN895yqKW0=
+github.com/google/s2a-go v0.1.9/go.mod h1:YA0Ei2ZQL3acow2O62kdp9UlnvMmU7kA6Eutn0dXayM=
+github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
+github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/googleapis/enterprise-certificate-proxy v0.3.7 h1:zrn2Ee/nWmHulBx5sAVrGgAa0f2/R35S4DJwfFaUPFQ=
+github.com/googleapis/enterprise-certificate-proxy v0.3.7/go.mod h1:MkHOF77EYAE7qfSuSS9PU6g4Nt4e11cnsDUowfwewLA=
+github.com/googleapis/gax-go/v2 v2.15.0 h1:SyjDc1mGgZU5LncH8gimWo9lW1DtIfPibOG81vgd/bo=
+github.com/googleapis/gax-go/v2 v2.15.0/go.mod h1:zVVkkxAQHa1RQpg9z2AUCMnKhi0Qld9rcmyfL1OZhoc=
 github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
 github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
 github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
@@ -52,6 +159,8 @@ github.com/muesli/cancelreader v0.2.2 h1:3I4Kt4BQjOR54NavqnDogx/MIoWBFa0StPA8ELU
 github.com/muesli/cancelreader v0.2.2/go.mod h1:3XuTXfFS2VjM+HTLZY9Ak0l6eUKfijIfMUZ4EgX0QYo=
 github.com/muesli/termenv v0.16.0 h1:S5AlUN9dENB57rsbnkPyfdGuWIlkmzJjbFf0Tf5FWUc=
 github.com/muesli/termenv v0.16.0/go.mod h1:ZRfOIKPFDYQoDFF4Olj7/QJbW60Ol/kL1pU3VfY/Cnk=
+github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 h1:GFCKgmp0tecUJ0sJuv4pzYCqS9+RGSn52M3FUwPs+uo=
+github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10/go.mod h1:t/avpk3KcrXxUnYOhZhMXJlSEyie6gQbtLq5NM3loB8=
 github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
 github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
@@ -64,27 +173,84 @@ github.com/spf13/cobra v1.10.1 h1:lJeBwCfmrnXthfAupyUTzJ/J4Nc1RsHC/mSRU2dll/s=
 github.com/spf13/cobra v1.10.1/go.mod h1:7SmJGaTHFVBY0jW4NXGluQoLvhqFQM+6XSKD+P4XaB0=
 github.com/spf13/pflag v1.0.9 h1:9exaQaMOCwffKiiiYk6/BndUBv+iRViNW+4lEMi0PvY=
 github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
+github.com/spiffe/go-spiffe/v2 v2.5.0 h1:N2I01KCUkv1FAjZXJMwh95KK1ZIQLYbPfhaxw8WS0hE=
+github.com/spiffe/go-spiffe/v2 v2.5.0/go.mod h1:P+NxobPc6wXhVtINNtFjNWGBTreew1GBUCwT2wPmb7g=
 github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
 github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
 github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
 github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
 github.com/stretchr/testify v1.8.1 h1:w7B6lhMri9wdJUVmEZPGGhZzrYTPvgJArz7wNPgYKsk=
 github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
+github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
 github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e h1:JVG44RsyaB9T2KIHavMF/ppJZNG9ZpyihvCd0w101no=
 github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e/go.mod h1:RbqR21r5mrJuqunuUZ/Dhy/avygyECGrLceyNeo4LiM=
+github.com/zeebo/errs v1.4.0 h1:XNdoD/RRMKP7HD0UhJnIzUy74ISdGGxURlYG8HSWSfM=
+github.com/zeebo/errs v1.4.0/go.mod h1:sgbWHsvVuTPHcqJJGQ1WhI5KbWlHYz+2+2C/LSEtCw4=
+go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
+go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
+go.opentelemetry.io/contrib/detectors/gcp v1.36.0 h1:F7q2tNlCaHY9nMKHR6XH9/qkp8FktLnIcy6jJNyOCQw=
+go.opentelemetry.io/contrib/detectors/gcp v1.36.0/go.mod h1:IbBN8uAIIx734PTonTPxAxnjc2pQTxWNkwfstZ+6H2k=
+go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0 h1:q4XOmH/0opmeuJtPsbFNivyl7bCt7yRBbeEm2sC/XtQ=
+go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0/go.mod h1:snMWehoOh2wsEwnvvwtDyFCxVeDAODenXHtn5vzrKjo=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 h1:F7Jx+6hwnZ41NSFTO5q4LYDtJRXBf2PD0rNBkeB/lus=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0/go.mod h1:UHB22Z8QsdRDrnAtX4PntOl36ajSxcdUMt1sF7Y6E7Q=
+go.opentelemetry.io/otel v1.37.0 h1:9zhNfelUvx0KBfu/gb+ZgeAfAgtWrfHJZcAqFC228wQ=
+go.opentelemetry.io/otel v1.37.0/go.mod h1:ehE/umFRLnuLa/vSccNq9oS1ErUlkkK71gMcN34UG8I=
+go.opentelemetry.io/otel/metric v1.37.0 h1:mvwbQS5m0tbmqML4NqK+e3aDiO02vsf/WgbsdpcPoZE=
+go.opentelemetry.io/otel/metric v1.37.0/go.mod h1:04wGrZurHYKOc+RKeye86GwKiTb9FKm1WHtO+4EVr2E=
+go.opentelemetry.io/otel/sdk v1.37.0 h1:ItB0QUqnjesGRvNcmAcU0LyvkVyGJ2xftD29bWdDvKI=
+go.opentelemetry.io/otel/sdk v1.37.0/go.mod h1:VredYzxUvuo2q3WRcDnKDjbdvmO0sCzOvVAiY+yUkAg=
+go.opentelemetry.io/otel/sdk/metric v1.37.0 h1:90lI228XrB9jCMuSdA0673aubgRobVZFhbjxHHspCPc=
+go.opentelemetry.io/otel/sdk/metric v1.37.0/go.mod h1:cNen4ZWfiD37l5NhS+Keb5RXVWZWpRE+9WyVCpbo5ps=
+go.opentelemetry.io/otel/trace v1.37.0 h1:HLdcFNbRQBE2imdSEgm/kwqmQj1Or1l/7bW6mxVK7z4=
+go.opentelemetry.io/otel/trace v1.37.0/go.mod h1:TlgrlQ+PtQO5XFerSPUYG0JSgGyryXewPGyayAWSBS0=
 golang.org/x/crypto v0.37.0 h1:kJNSjF/Xp7kU0iB2Z+9viTPMW4EqqsrywMXLJOOsXSE=
 golang.org/x/crypto v0.37.0/go.mod h1:vg+k43peMZ0pUMhYmVAWysMK35e6ioLh3wB8ZCAfbVc=
+golang.org/x/crypto v0.41.0 h1:WKYxWedPGCTVVl5+WHSSrOBT0O8lx32+zxmHxijgXp4=
+golang.org/x/crypto v0.41.0/go.mod h1:pO5AFd7FA68rFak7rOAGVuygIISepHftHnr8dr6+sUc=
+golang.org/x/crypto v0.43.0 h1:dduJYIi3A3KOfdGOHX8AVZ/jGiyPa3IbBozJ5kNuE04=
+golang.org/x/crypto v0.43.0/go.mod h1:BFbav4mRNlXJL4wNeejLpWxB7wMbc79PdRGhWKncxR0=
 golang.org/x/exp v0.0.0-20220909182711-5c715a9e8561 h1:MDc5xs78ZrZr3HMQugiXOAkSZtfTpbJLDr/lwfgO53E=
 golang.org/x/exp v0.0.0-20220909182711-5c715a9e8561/go.mod h1:cyybsKvd6eL0RnXn6p/Grxp8F5bW7iYuBgsNCOHpMYE=
+golang.org/x/net v0.43.0 h1:lat02VYK2j4aLzMzecihNvTlJNQUq316m2Mr9rnM6YE=
+golang.org/x/net v0.43.0/go.mod h1:vhO1fvI4dGsIjh73sWfUVjj3N7CA9WkKJNQm2svM6Jg=
+golang.org/x/net v0.46.0 h1:giFlY12I07fugqwPuWJi68oOnpfqFnJIJzaIIm2JVV4=
+golang.org/x/net v0.46.0/go.mod h1:Q9BGdFy1y4nkUwiLvT5qtyhAnEHgnQ/zd8PfU6nc210=
+golang.org/x/oauth2 v0.33.0 h1:4Q+qn+E5z8gPRJfmRy7C2gGG3T4jIprK6aSYgTXGRpo=
+golang.org/x/oauth2 v0.33.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
 golang.org/x/sync v0.13.0 h1:AauUjRAJ9OSnvULf/ARrrVywoJDy0YS2AwQ98I37610=
 golang.org/x/sync v0.13.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
+golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw=
+golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
+golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
+golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
 golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.36.0 h1:KVRy2GtZBrk1cBYA7MKu5bEZFxQk4NIDV6RLVcC8o0k=
 golang.org/x/sys v0.36.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
+golang.org/x/sys v0.37.0 h1:fdNQudmxPjkdUTPnLn5mdQv7Zwvbvpaxqs831goi9kQ=
+golang.org/x/sys v0.37.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
 golang.org/x/text v0.24.0 h1:dd5Bzh4yt5KYA8f9CJHCP4FB4D51c2c6JvN37xJJkJ0=
 golang.org/x/text v0.24.0/go.mod h1:L8rBsPeo2pSS+xqN0d5u2ikmjtmoJbDBT1b7nHvFCdU=
+golang.org/x/text v0.28.0 h1:rhazDwis8INMIwQ4tpjLDzUhx6RlXqZNPEM0huQojng=
+golang.org/x/text v0.28.0/go.mod h1:U8nCwOR8jO/marOQ0QbDiOngZVEBB7MAiitBuMjXiNU=
+golang.org/x/text v0.30.0 h1:yznKA/E9zq54KzlzBEAWn1NXSQ8DIp/NYMy88xJjl4k=
+golang.org/x/text v0.30.0/go.mod h1:yDdHFIX9t+tORqspjENWgzaCVXgk0yYnYuSZ8UzzBVM=
+golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
+golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
+google.golang.org/api v0.256.0 h1:u6Khm8+F9sxbCTYNoBHg6/Hwv0N/i+V94MvkOSor6oI=
+google.golang.org/api v0.256.0/go.mod h1:KIgPhksXADEKJlnEoRa9qAII4rXcy40vfI8HRqcU964=
+google.golang.org/genproto v0.0.0-20250603155806-513f23925822 h1:rHWScKit0gvAPuOnu87KpaYtjK5zBMLcULh7gxkCXu4=
+google.golang.org/genproto v0.0.0-20250603155806-513f23925822/go.mod h1:HubltRL7rMh0LfnQPkMH4NPDFEWp0jw3vixw7jEM53s=
+google.golang.org/genproto/googleapis/api v0.0.0-20250818200422-3122310a409c h1:AtEkQdl5b6zsybXcbz00j1LwNodDuH6hVifIaNqk7NQ=
+google.golang.org/genproto/googleapis/api v0.0.0-20250818200422-3122310a409c/go.mod h1:ea2MjsO70ssTfCjiwHgI0ZFqcw45Ksuk2ckf9G468GA=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101 h1:tRPGkdGHuewF4UisLzzHHr1spKw92qLM98nIzxbC0wY=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101/go.mod h1:7i2o+ce6H/6BluujYR+kqX3GKH+dChPTQU19wjRPiGk=
+google.golang.org/grpc v1.76.0 h1:UnVkv1+uMLYXoIz6o7chp59WfQUYA2ex/BXQ9rHZu7A=
+google.golang.org/grpc v1.76.0/go.mod h1:Ju12QI8M6iQJtbcsV+awF5a4hfJMLi4X0JLo94ULZ6c=
+google.golang.org/protobuf v1.36.10 h1:AYd7cD/uASjIL6Q9LiTjz8JLcrh/88q5UObnmY3aOOE=
+google.golang.org/protobuf v1.36.10/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
 gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
 gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
 gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
@@ -17,10 +17,12 @@ import (
 	"time"

 	"dbbackup/internal/checks"
+	"dbbackup/internal/cloud"
 	"dbbackup/internal/config"
 	"dbbackup/internal/database"
 	"dbbackup/internal/security"
 	"dbbackup/internal/logger"
+	"dbbackup/internal/metadata"
 	"dbbackup/internal/metrics"
 	"dbbackup/internal/progress"
 	"dbbackup/internal/swap"
@@ -233,6 +235,14 @@ func (e *Engine) BackupSingle(ctx context.Context, databaseName string) error {
 		metrics.GlobalMetrics.RecordOperation("backup_single", databaseName, time.Now().Add(-time.Minute), info.Size(), true, 0)
 	}

+	// Cloud upload if enabled
+	if e.cfg.CloudEnabled && e.cfg.CloudAutoUpload {
+		if err := e.uploadToCloud(ctx, outputFile, tracker); err != nil {
+			e.log.Warn("Cloud upload failed", "error", err)
+			// Don't fail the backup if cloud upload fails
+		}
+	}
+
 	// Complete operation
 	tracker.UpdateProgress(100, "Backup operation completed successfully")
 	tracker.Complete(fmt.Sprintf("Single database backup completed: %s", filepath.Base(outputFile)))
@@ -541,9 +551,9 @@ func (e *Engine) BackupCluster(ctx context.Context) error {
 		operation.Complete(fmt.Sprintf("Cluster backup created: %s (%s)", outputFile, size))
 	}

-	// Create metadata file
+	// Create cluster metadata file
-	if err := e.createMetadata(outputFile, "cluster", "cluster", ""); err != nil {
+	if err := e.createClusterMetadata(outputFile, databases, successCountFinal, failCountFinal); err != nil {
-		e.log.Warn("Failed to create metadata file", "error", err)
+		e.log.Warn("Failed to create cluster metadata file", "error", err)
 	}

 	return nil
@@ -910,9 +920,70 @@ regularTar:

 // createMetadata creates a metadata file for the backup
 func (e *Engine) createMetadata(backupFile, database, backupType, strategy string) error {
-	metaFile := backupFile + ".info"
+	startTime := time.Now()

-	content := fmt.Sprintf(`{
+	// Get backup file information
+	info, err := os.Stat(backupFile)
+	if err != nil {
+		return fmt.Errorf("failed to stat backup file: %w", err)
+	}
+
+	// Calculate SHA-256 checksum
+	sha256, err := metadata.CalculateSHA256(backupFile)
+	if err != nil {
+		return fmt.Errorf("failed to calculate checksum: %w", err)
+	}
+
+	// Get database version
+	ctx := context.Background()
+	dbVersion, _ := e.db.GetVersion(ctx)
+	if dbVersion == "" {
+		dbVersion = "unknown"
+	}
+
+	// Determine compression format
+	compressionFormat := "none"
+	if e.cfg.CompressionLevel > 0 {
+		if e.cfg.Jobs > 1 {
+			compressionFormat = fmt.Sprintf("pigz-%d", e.cfg.CompressionLevel)
+		} else {
+			compressionFormat = fmt.Sprintf("gzip-%d", e.cfg.CompressionLevel)
+		}
+	}
+
+	// Create backup metadata
+	meta := &metadata.BackupMetadata{
+		Version:         "2.0",
+		Timestamp:       startTime,
+		Database:        database,
+		DatabaseType:    e.cfg.DatabaseType,
+		DatabaseVersion: dbVersion,
+		Host:            e.cfg.Host,
+		Port:            e.cfg.Port,
+		User:            e.cfg.User,
+		BackupFile:      backupFile,
+		SizeBytes:       info.Size(),
+		SHA256:          sha256,
+		Compression:     compressionFormat,
+		BackupType:      backupType,
+		Duration:        time.Since(startTime).Seconds(),
+		ExtraInfo:       make(map[string]string),
+	}
+
+	// Add strategy for sample backups
+	if strategy != "" {
+		meta.ExtraInfo["sample_strategy"] = strategy
+		meta.ExtraInfo["sample_value"] = fmt.Sprintf("%d", e.cfg.SampleValue)
+	}
+
+	// Save metadata
+	if err := meta.Save(); err != nil {
+		return fmt.Errorf("failed to save metadata: %w", err)
+	}
+
+	// Also save legacy .info file for backward compatibility
+	legacyMetaFile := backupFile + ".info"
+	legacyContent := fmt.Sprintf(`{
 	"type": "%s",
 	"database": "%s",
 	"timestamp": "%s",
@@ -920,24 +991,170 @@ func (e *Engine) createMetadata(backupFile, database, backupType, strategy strin
 	"port": %d,
 	"user": "%s",
 	"db_type": "%s",
-	"compression": %d`,
+	"compression": %d,
-	backupType, database, time.Now().Format("20060102_150405"),
+	"size_bytes": %d
-	e.cfg.Host, e.cfg.Port, e.cfg.User, e.cfg.DatabaseType, e.cfg.CompressionLevel)
|
}`, backupType, database, startTime.Format("20060102_150405"),
|
||||||
|
e.cfg.Host, e.cfg.Port, e.cfg.User, e.cfg.DatabaseType,
|
||||||
|
e.cfg.CompressionLevel, info.Size())
|
||||||
|
|
||||||
if strategy != "" {
|
if err := os.WriteFile(legacyMetaFile, []byte(legacyContent), 0644); err != nil {
|
||||||
content += fmt.Sprintf(`,
|
e.log.Warn("Failed to save legacy metadata file", "error", err)
|
||||||
"sample_strategy": "%s",
|
|
||||||
"sample_value": %d`, e.cfg.SampleStrategy, e.cfg.SampleValue)
|
|
||||||
}
|
}
|
||||||
|
|
||||||
if info, err := os.Stat(backupFile); err == nil {
|
return nil
|
||||||
content += fmt.Sprintf(`,
|
|
||||||
"size_bytes": %d`, info.Size())
|
|
||||||
}
|
}
|
||||||
|
|
||||||
content += "\n}"
|
// createClusterMetadata creates metadata for cluster backups
|
||||||
|
func (e *Engine) createClusterMetadata(backupFile string, databases []string, successCount, failCount int) error {
|
||||||
|
startTime := time.Now()
|
||||||
|
|
||||||
return os.WriteFile(metaFile, []byte(content), 0644)
|
// Get backup file information
|
||||||
|
info, err := os.Stat(backupFile)
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf("failed to stat backup file: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Calculate SHA-256 checksum for archive
|
||||||
|
sha256, err := metadata.CalculateSHA256(backupFile)
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf("failed to calculate checksum: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Get database version
|
||||||
|
ctx := context.Background()
|
||||||
|
dbVersion, _ := e.db.GetVersion(ctx)
|
||||||
|
if dbVersion == "" {
|
||||||
|
dbVersion = "unknown"
|
||||||
|
}
|
||||||
|
|
||||||
|
// Create cluster metadata
|
||||||
|
clusterMeta := &metadata.ClusterMetadata{
|
||||||
|
Version: "2.0",
|
||||||
|
Timestamp: startTime,
|
||||||
|
ClusterName: fmt.Sprintf("%s:%d", e.cfg.Host, e.cfg.Port),
|
||||||
|
DatabaseType: e.cfg.DatabaseType,
|
||||||
|
Host: e.cfg.Host,
|
||||||
|
Port: e.cfg.Port,
|
||||||
|
Databases: make([]metadata.BackupMetadata, 0),
|
||||||
|
TotalSize: info.Size(),
|
||||||
|
Duration: time.Since(startTime).Seconds(),
|
||||||
|
ExtraInfo: map[string]string{
|
||||||
|
"database_count": fmt.Sprintf("%d", len(databases)),
|
||||||
|
"success_count": fmt.Sprintf("%d", successCount),
|
||||||
|
"failure_count": fmt.Sprintf("%d", failCount),
|
||||||
|
"archive_sha256": sha256,
|
||||||
|
"database_version": dbVersion,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
// Add database names to metadata
|
||||||
|
for _, dbName := range databases {
|
||||||
|
dbMeta := metadata.BackupMetadata{
|
||||||
|
Database: dbName,
|
||||||
|
DatabaseType: e.cfg.DatabaseType,
|
||||||
|
DatabaseVersion: dbVersion,
|
||||||
|
Timestamp: startTime,
|
||||||
|
}
|
||||||
|
clusterMeta.Databases = append(clusterMeta.Databases, dbMeta)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Save cluster metadata
|
||||||
|
if err := clusterMeta.Save(backupFile); err != nil {
|
||||||
|
return fmt.Errorf("failed to save cluster metadata: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Also save legacy .info file for backward compatibility
|
||||||
|
legacyMetaFile := backupFile + ".info"
|
||||||
|
legacyContent := fmt.Sprintf(`{
|
||||||
|
"type": "cluster",
|
||||||
|
"database": "cluster",
|
||||||
|
"timestamp": "%s",
|
||||||
|
"host": "%s",
|
||||||
|
"port": %d,
|
||||||
|
"user": "%s",
|
||||||
|
"db_type": "%s",
|
||||||
|
"compression": %d,
|
||||||
|
"size_bytes": %d,
|
||||||
|
"database_count": %d,
|
||||||
|
"success_count": %d,
|
||||||
|
"failure_count": %d
|
||||||
|
}`, startTime.Format("20060102_150405"),
|
||||||
|
e.cfg.Host, e.cfg.Port, e.cfg.User, e.cfg.DatabaseType,
|
||||||
|
e.cfg.CompressionLevel, info.Size(), len(databases), successCount, failCount)
|
||||||
|
|
||||||
|
if err := os.WriteFile(legacyMetaFile, []byte(legacyContent), 0644); err != nil {
|
||||||
|
e.log.Warn("Failed to save legacy cluster metadata file", "error", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// uploadToCloud uploads a backup file to cloud storage
|
||||||
|
func (e *Engine) uploadToCloud(ctx context.Context, backupFile string, tracker *progress.OperationTracker) error {
|
||||||
|
uploadStep := tracker.AddStep("cloud_upload", "Uploading to cloud storage")
|
||||||
|
|
||||||
|
// Create cloud backend
|
||||||
|
cloudCfg := &cloud.Config{
|
||||||
|
Provider: e.cfg.CloudProvider,
|
||||||
|
Bucket: e.cfg.CloudBucket,
|
||||||
|
Region: e.cfg.CloudRegion,
|
||||||
|
Endpoint: e.cfg.CloudEndpoint,
|
||||||
|
AccessKey: e.cfg.CloudAccessKey,
|
||||||
|
SecretKey: e.cfg.CloudSecretKey,
|
||||||
|
Prefix: e.cfg.CloudPrefix,
|
||||||
|
UseSSL: true,
|
||||||
|
PathStyle: e.cfg.CloudProvider == "minio",
|
||||||
|
Timeout: 300,
|
||||||
|
MaxRetries: 3,
|
||||||
|
}
|
||||||
|
|
||||||
|
backend, err := cloud.NewBackend(cloudCfg)
|
||||||
|
if err != nil {
|
||||||
|
uploadStep.Fail(fmt.Errorf("failed to create cloud backend: %w", err))
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
// Get file info
|
||||||
|
info, err := os.Stat(backupFile)
|
||||||
|
if err != nil {
|
||||||
|
uploadStep.Fail(fmt.Errorf("failed to stat backup file: %w", err))
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
filename := filepath.Base(backupFile)
|
||||||
|
e.log.Info("Uploading backup to cloud", "file", filename, "size", cloud.FormatSize(info.Size()))
|
||||||
|
|
||||||
|
// Progress callback
|
||||||
|
var lastPercent int
|
||||||
|
progressCallback := func(transferred, total int64) {
|
||||||
|
percent := int(float64(transferred) / float64(total) * 100)
|
||||||
|
if percent != lastPercent && percent%10 == 0 {
|
||||||
|
e.log.Debug("Upload progress", "percent", percent, "transferred", cloud.FormatSize(transferred), "total", cloud.FormatSize(total))
|
||||||
|
lastPercent = percent
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Upload to cloud
|
||||||
|
err = backend.Upload(ctx, backupFile, filename, progressCallback)
|
||||||
|
if err != nil {
|
||||||
|
uploadStep.Fail(fmt.Errorf("cloud upload failed: %w", err))
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
// Also upload metadata file
|
||||||
|
metaFile := backupFile + ".meta.json"
|
||||||
|
if _, err := os.Stat(metaFile); err == nil {
|
||||||
|
metaFilename := filepath.Base(metaFile)
|
||||||
|
if err := backend.Upload(ctx, metaFile, metaFilename, nil); err != nil {
|
||||||
|
e.log.Warn("Failed to upload metadata file", "error", err)
|
||||||
|
// Don't fail if metadata upload fails
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
uploadStep.Complete(fmt.Sprintf("Uploaded to %s/%s/%s", backend.Name(), e.cfg.CloudBucket, filename))
|
||||||
|
e.log.Info("Backup uploaded to cloud", "provider", backend.Name(), "bucket", e.cfg.CloudBucket, "file", filename)
|
||||||
|
|
||||||
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
// executeCommand executes a backup command (optimized for huge databases)
|
// executeCommand executes a backup command (optimized for huge databases)
|
||||||
|
|||||||
381
internal/cloud/azure.go
Normal file
@@ -0,0 +1,381 @@
```go
package cloud

import (
    "bytes"
    "context"
    "crypto/sha256"
    "encoding/base64"
    "encoding/hex"
    "errors"
    "fmt"
    "io"
    "os"
    "path/filepath"
    "strings"
    "time"

    "github.com/Azure/azure-sdk-for-go/sdk/azcore"
    "github.com/Azure/azure-sdk-for-go/sdk/azcore/streaming"
    "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
    "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob"
    "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container"
)

// AzureBackend implements the Backend interface for Azure Blob Storage
type AzureBackend struct {
    client        *azblob.Client
    containerName string
    config        *Config
}

// NewAzureBackend creates a new Azure Blob Storage backend
func NewAzureBackend(cfg *Config) (*AzureBackend, error) {
    if cfg.Bucket == "" {
        return nil, fmt.Errorf("container name is required for Azure backend")
    }

    var client *azblob.Client
    var err error

    // Support for Azurite emulator (uses endpoint override)
    if cfg.Endpoint != "" {
        // For Azurite and custom endpoints
        accountName := cfg.AccessKey
        accountKey := cfg.SecretKey

        if accountName == "" {
            // Default Azurite account
            accountName = "devstoreaccount1"
        }
        if accountKey == "" {
            // Default Azurite key
            accountKey = "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
        }

        // Create credential
        cred, err := azblob.NewSharedKeyCredential(accountName, accountKey)
        if err != nil {
            return nil, fmt.Errorf("failed to create Azure credential: %w", err)
        }

        // Build service URL for Azurite: http://endpoint/accountName
        serviceURL := cfg.Endpoint
        if !strings.Contains(serviceURL, accountName) {
            // Ensure URL ends with slash
            if !strings.HasSuffix(serviceURL, "/") {
                serviceURL += "/"
            }
            serviceURL += accountName
        }

        client, err = azblob.NewClientWithSharedKeyCredential(serviceURL, cred, nil)
        if err != nil {
            return nil, fmt.Errorf("failed to create Azure client: %w", err)
        }
    } else {
        // Production Azure using connection string or managed identity
        if cfg.AccessKey != "" && cfg.SecretKey != "" {
            // Use account name and key
            accountName := cfg.AccessKey
            accountKey := cfg.SecretKey

            cred, err := azblob.NewSharedKeyCredential(accountName, accountKey)
            if err != nil {
                return nil, fmt.Errorf("failed to create Azure credential: %w", err)
            }

            serviceURL := fmt.Sprintf("https://%s.blob.core.windows.net/", accountName)
            client, err = azblob.NewClientWithSharedKeyCredential(serviceURL, cred, nil)
            if err != nil {
                return nil, fmt.Errorf("failed to create Azure client: %w", err)
            }
        } else {
            // Use default Azure credential (managed identity, environment variables, etc.)
            return nil, fmt.Errorf("Azure authentication requires account name and key, or use AZURE_STORAGE_CONNECTION_STRING environment variable")
        }
    }

    backend := &AzureBackend{
        client:        client,
        containerName: cfg.Bucket,
        config:        cfg,
    }

    // Create container if it doesn't exist
    // Note: Container creation should be done manually or via Azure portal
    if false { // Disabled: cfg.CreateBucket not in Config
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        containerClient := client.ServiceClient().NewContainerClient(cfg.Bucket)
        _, err = containerClient.Create(ctx, &container.CreateOptions{})
        if err != nil {
            // Ignore if container already exists
            if !strings.Contains(err.Error(), "ContainerAlreadyExists") {
                return nil, fmt.Errorf("failed to create container: %w", err)
            }
        }
    }

    return backend, nil
}

// Name returns the backend name
func (a *AzureBackend) Name() string {
    return "azure"
}

// Upload uploads a file to Azure Blob Storage
func (a *AzureBackend) Upload(ctx context.Context, localPath, remotePath string, progress ProgressCallback) error {
    file, err := os.Open(localPath)
    if err != nil {
        return fmt.Errorf("failed to open file: %w", err)
    }
    defer file.Close()

    fileInfo, err := file.Stat()
    if err != nil {
        return fmt.Errorf("failed to stat file: %w", err)
    }
    fileSize := fileInfo.Size()

    // Remove leading slash from remote path
    blobName := strings.TrimPrefix(remotePath, "/")

    // Use block blob upload for large files (>256MB), simple upload for smaller
    const blockUploadThreshold = 256 * 1024 * 1024 // 256 MB

    if fileSize > blockUploadThreshold {
        return a.uploadBlocks(ctx, file, blobName, fileSize, progress)
    }

    return a.uploadSimple(ctx, file, blobName, fileSize, progress)
}

// uploadSimple uploads a file using simple upload (single request)
func (a *AzureBackend) uploadSimple(ctx context.Context, file *os.File, blobName string, fileSize int64, progress ProgressCallback) error {
    blockBlobClient := a.client.ServiceClient().NewContainerClient(a.containerName).NewBlockBlobClient(blobName)

    // Wrap reader with progress tracking
    reader := NewProgressReader(file, fileSize, progress)

    // Calculate SHA-256 hash for integrity
    hash := sha256.New()
    teeReader := io.TeeReader(reader, hash)

    _, err := blockBlobClient.UploadStream(ctx, teeReader, &blockblob.UploadStreamOptions{
        BlockSize: 4 * 1024 * 1024, // 4MB blocks
    })
    if err != nil {
        return fmt.Errorf("failed to upload blob: %w", err)
    }

    // Store checksum as metadata
    checksum := hex.EncodeToString(hash.Sum(nil))
    metadata := map[string]*string{
        "sha256": &checksum,
    }

    _, err = blockBlobClient.SetMetadata(ctx, metadata, nil)
    if err != nil {
        // Non-fatal: upload succeeded but metadata failed
        fmt.Fprintf(os.Stderr, "Warning: failed to set blob metadata: %v\n", err)
    }

    return nil
}

// uploadBlocks uploads a file using block blob staging (for large files)
func (a *AzureBackend) uploadBlocks(ctx context.Context, file *os.File, blobName string, fileSize int64, progress ProgressCallback) error {
    blockBlobClient := a.client.ServiceClient().NewContainerClient(a.containerName).NewBlockBlobClient(blobName)

    const blockSize = 100 * 1024 * 1024 // 100MB per block
    numBlocks := (fileSize + blockSize - 1) / blockSize

    blockIDs := make([]string, 0, numBlocks)
    hash := sha256.New()
    var totalUploaded int64

    for i := int64(0); i < numBlocks; i++ {
        blockID := base64.StdEncoding.EncodeToString([]byte(fmt.Sprintf("block-%08d", i)))
        blockIDs = append(blockIDs, blockID)

        // Calculate block size
        currentBlockSize := blockSize
        if i == numBlocks-1 {
            currentBlockSize = int(fileSize - i*blockSize)
        }

        // Read block
        blockData := make([]byte, currentBlockSize)
        n, err := io.ReadFull(file, blockData)
        if err != nil && err != io.ErrUnexpectedEOF {
            return fmt.Errorf("failed to read block %d: %w", i, err)
        }
        blockData = blockData[:n]

        // Update hash
        hash.Write(blockData)

        // Upload block
        reader := bytes.NewReader(blockData)
        _, err = blockBlobClient.StageBlock(ctx, blockID, streaming.NopCloser(reader), nil)
        if err != nil {
            return fmt.Errorf("failed to stage block %d: %w", i, err)
        }

        // Update progress
        totalUploaded += int64(n)
        if progress != nil {
            progress(totalUploaded, fileSize)
        }
    }

    // Commit all blocks
    _, err := blockBlobClient.CommitBlockList(ctx, blockIDs, nil)
    if err != nil {
        return fmt.Errorf("failed to commit block list: %w", err)
    }

    // Store checksum as metadata
    checksum := hex.EncodeToString(hash.Sum(nil))
    metadata := map[string]*string{
        "sha256": &checksum,
    }

    _, err = blockBlobClient.SetMetadata(ctx, metadata, nil)
    if err != nil {
        // Non-fatal
        fmt.Fprintf(os.Stderr, "Warning: failed to set blob metadata: %v\n", err)
    }

    return nil
}

// Download downloads a file from Azure Blob Storage
func (a *AzureBackend) Download(ctx context.Context, remotePath, localPath string, progress ProgressCallback) error {
    blobName := strings.TrimPrefix(remotePath, "/")
    blockBlobClient := a.client.ServiceClient().NewContainerClient(a.containerName).NewBlockBlobClient(blobName)

    // Get blob properties to know size
    props, err := blockBlobClient.GetProperties(ctx, nil)
    if err != nil {
        return fmt.Errorf("failed to get blob properties: %w", err)
    }

    fileSize := *props.ContentLength

    // Download blob
    resp, err := blockBlobClient.DownloadStream(ctx, nil)
    if err != nil {
        return fmt.Errorf("failed to download blob: %w", err)
    }
    defer resp.Body.Close()

    // Create local file
    file, err := os.Create(localPath)
    if err != nil {
        return fmt.Errorf("failed to create file: %w", err)
    }
    defer file.Close()

    // Wrap reader with progress tracking
    reader := NewProgressReader(resp.Body, fileSize, progress)

    // Copy with progress
    _, err = io.Copy(file, reader)
    if err != nil {
        return fmt.Errorf("failed to write file: %w", err)
    }

    return nil
}

// Delete deletes a file from Azure Blob Storage
func (a *AzureBackend) Delete(ctx context.Context, remotePath string) error {
    blobName := strings.TrimPrefix(remotePath, "/")
    blockBlobClient := a.client.ServiceClient().NewContainerClient(a.containerName).NewBlockBlobClient(blobName)

    _, err := blockBlobClient.Delete(ctx, nil)
    if err != nil {
        return fmt.Errorf("failed to delete blob: %w", err)
    }

    return nil
}

// List lists files in Azure Blob Storage with a given prefix
func (a *AzureBackend) List(ctx context.Context, prefix string) ([]BackupInfo, error) {
    prefix = strings.TrimPrefix(prefix, "/")
    containerClient := a.client.ServiceClient().NewContainerClient(a.containerName)

    pager := containerClient.NewListBlobsFlatPager(&container.ListBlobsFlatOptions{
        Prefix: &prefix,
    })

    var files []BackupInfo

    for pager.More() {
        page, err := pager.NextPage(ctx)
        if err != nil {
            return nil, fmt.Errorf("failed to list blobs: %w", err)
        }

        for _, blob := range page.Segment.BlobItems {
            if blob.Name == nil || blob.Properties == nil {
                continue
            }

            file := BackupInfo{
                Key:          *blob.Name,
                Name:         filepath.Base(*blob.Name),
                Size:         *blob.Properties.ContentLength,
                LastModified: *blob.Properties.LastModified,
            }

            // Try to get SHA256 from metadata
            if blob.Metadata != nil {
                if sha256Val, ok := blob.Metadata["sha256"]; ok && sha256Val != nil {
                    file.ETag = *sha256Val
                }
            }

            files = append(files, file)
        }
    }

    return files, nil
}

// Exists checks if a file exists in Azure Blob Storage
func (a *AzureBackend) Exists(ctx context.Context, remotePath string) (bool, error) {
    blobName := strings.TrimPrefix(remotePath, "/")
    blockBlobClient := a.client.ServiceClient().NewContainerClient(a.containerName).NewBlockBlobClient(blobName)

    _, err := blockBlobClient.GetProperties(ctx, nil)
    if err != nil {
        var respErr *azcore.ResponseError
        if errors.As(err, &respErr) && respErr.StatusCode == 404 {
            return false, nil
        }
        // Check if error message contains "not found"
        if strings.Contains(err.Error(), "BlobNotFound") || strings.Contains(err.Error(), "404") {
            return false, nil
        }
        return false, fmt.Errorf("failed to check blob existence: %w", err)
    }

    return true, nil
}

// GetSize returns the size of a file in Azure Blob Storage
func (a *AzureBackend) GetSize(ctx context.Context, remotePath string) (int64, error) {
    blobName := strings.TrimPrefix(remotePath, "/")
    blockBlobClient := a.client.ServiceClient().NewContainerClient(a.containerName).NewBlockBlobClient(blobName)

    props, err := blockBlobClient.GetProperties(ctx, nil)
    if err != nil {
        return 0, fmt.Errorf("failed to get blob properties: %w", err)
    }

    return *props.ContentLength, nil
}
```
275
internal/cloud/gcs.go
Normal file
@@ -0,0 +1,275 @@
```go
package cloud

import (
    "context"
    "crypto/sha256"
    "encoding/hex"
    "fmt"
    "io"
    "os"
    "path/filepath"
    "strings"
    "time"

    "cloud.google.com/go/storage"
    "google.golang.org/api/iterator"
    "google.golang.org/api/option"
)

// GCSBackend implements the Backend interface for Google Cloud Storage
type GCSBackend struct {
    client     *storage.Client
    bucketName string
    config     *Config
}

// NewGCSBackend creates a new Google Cloud Storage backend
func NewGCSBackend(cfg *Config) (*GCSBackend, error) {
    if cfg.Bucket == "" {
        return nil, fmt.Errorf("bucket name is required for GCS backend")
    }

    var client *storage.Client
    var err error
    ctx := context.Background()

    // Support for fake-gcs-server emulator (uses endpoint override)
    if cfg.Endpoint != "" {
        // For fake-gcs-server and custom endpoints
        client, err = storage.NewClient(ctx, option.WithEndpoint(cfg.Endpoint), option.WithoutAuthentication())
        if err != nil {
            return nil, fmt.Errorf("failed to create GCS client: %w", err)
        }
    } else {
        // Production GCS using Application Default Credentials or service account
        if cfg.AccessKey != "" {
            // Use service account JSON key file
            client, err = storage.NewClient(ctx, option.WithCredentialsFile(cfg.AccessKey))
            if err != nil {
                return nil, fmt.Errorf("failed to create GCS client with credentials file: %w", err)
            }
        } else {
            // Use default credentials (ADC, environment variables, etc.)
            client, err = storage.NewClient(ctx)
            if err != nil {
                return nil, fmt.Errorf("failed to create GCS client: %w", err)
            }
        }
    }

    backend := &GCSBackend{
        client:     client,
        bucketName: cfg.Bucket,
        config:     cfg,
    }

    // Create bucket if it doesn't exist
    // Note: Bucket creation should be done manually or via gcloud CLI
    if false { // Disabled: cfg.CreateBucket not in Config
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        bucket := client.Bucket(cfg.Bucket)
        _, err = bucket.Attrs(ctx)
        if err == storage.ErrBucketNotExist {
            // Create bucket with default settings
            if err := bucket.Create(ctx, cfg.AccessKey, nil); err != nil {
                return nil, fmt.Errorf("failed to create bucket: %w", err)
            }
        } else if err != nil {
            return nil, fmt.Errorf("failed to check bucket: %w", err)
        }
    }

    return backend, nil
}

// Name returns the backend name
func (g *GCSBackend) Name() string {
    return "gcs"
}

// Upload uploads a file to Google Cloud Storage
func (g *GCSBackend) Upload(ctx context.Context, localPath, remotePath string, progress ProgressCallback) error {
    file, err := os.Open(localPath)
    if err != nil {
        return fmt.Errorf("failed to open file: %w", err)
    }
    defer file.Close()

    fileInfo, err := file.Stat()
    if err != nil {
        return fmt.Errorf("failed to stat file: %w", err)
    }
    fileSize := fileInfo.Size()

    // Remove leading slash from remote path
    objectName := strings.TrimPrefix(remotePath, "/")

    bucket := g.client.Bucket(g.bucketName)
    object := bucket.Object(objectName)

    // Create writer with automatic chunking for large files
    writer := object.NewWriter(ctx)
    writer.ChunkSize = 16 * 1024 * 1024 // 16MB chunks for streaming

    // Wrap reader with progress tracking and hash calculation
    hash := sha256.New()
    reader := NewProgressReader(io.TeeReader(file, hash), fileSize, progress)

    // Upload with progress tracking
    _, err = io.Copy(writer, reader)
    if err != nil {
        writer.Close()
        return fmt.Errorf("failed to upload object: %w", err)
    }

    // Close writer (finalizes upload)
    if err := writer.Close(); err != nil {
        return fmt.Errorf("failed to finalize upload: %w", err)
    }

    // Store checksum as metadata
    checksum := hex.EncodeToString(hash.Sum(nil))
    _, err = object.Update(ctx, storage.ObjectAttrsToUpdate{
        Metadata: map[string]string{
            "sha256": checksum,
        },
    })
    if err != nil {
        // Non-fatal: upload succeeded but metadata failed
        fmt.Fprintf(os.Stderr, "Warning: failed to set object metadata: %v\n", err)
    }

    return nil
}

// Download downloads a file from Google Cloud Storage
func (g *GCSBackend) Download(ctx context.Context, remotePath, localPath string, progress ProgressCallback) error {
    objectName := strings.TrimPrefix(remotePath, "/")

    bucket := g.client.Bucket(g.bucketName)
    object := bucket.Object(objectName)

    // Get object attributes to know size
    attrs, err := object.Attrs(ctx)
    if err != nil {
        return fmt.Errorf("failed to get object attributes: %w", err)
    }

    fileSize := attrs.Size

    // Create reader
    reader, err := object.NewReader(ctx)
    if err != nil {
        return fmt.Errorf("failed to download object: %w", err)
    }
    defer reader.Close()

    // Create local file
    file, err := os.Create(localPath)
    if err != nil {
        return fmt.Errorf("failed to create file: %w", err)
    }
    defer file.Close()

    // Wrap reader with progress tracking
    progressReader := NewProgressReader(reader, fileSize, progress)

    // Copy with progress
    _, err = io.Copy(file, progressReader)
    if err != nil {
        return fmt.Errorf("failed to write file: %w", err)
    }

    return nil
}

// Delete deletes a file from Google Cloud Storage
func (g *GCSBackend) Delete(ctx context.Context, remotePath string) error {
    objectName := strings.TrimPrefix(remotePath, "/")

    bucket := g.client.Bucket(g.bucketName)
    object := bucket.Object(objectName)

    if err := object.Delete(ctx); err != nil {
        return fmt.Errorf("failed to delete object: %w", err)
    }

    return nil
}

// List lists files in Google Cloud Storage with a given prefix
func (g *GCSBackend) List(ctx context.Context, prefix string) ([]BackupInfo, error) {
    prefix = strings.TrimPrefix(prefix, "/")

    bucket := g.client.Bucket(g.bucketName)
    query := &storage.Query{
        Prefix: prefix,
    }

    it := bucket.Objects(ctx, query)

    var files []BackupInfo

    for {
        attrs, err := it.Next()
        if err == iterator.Done {
            break
        }
        if err != nil {
            return nil, fmt.Errorf("failed to list objects: %w", err)
        }

        file := BackupInfo{
            Key:          attrs.Name,
            Name:         filepath.Base(attrs.Name),
            Size:         attrs.Size,
            LastModified: attrs.Updated,
        }

        // Try to get SHA256 from metadata
        if attrs.Metadata != nil {
            if sha256Val, ok := attrs.Metadata["sha256"]; ok {
                file.ETag = sha256Val
            }
        }

        files = append(files, file)
    }

    return files, nil
}

// Exists checks if a file exists in Google Cloud Storage
func (g *GCSBackend) Exists(ctx context.Context, remotePath string) (bool, error) {
    objectName := strings.TrimPrefix(remotePath, "/")

    bucket := g.client.Bucket(g.bucketName)
    object := bucket.Object(objectName)

    _, err := object.Attrs(ctx)
    if err == storage.ErrObjectNotExist {
        return false, nil
    }
    if err != nil {
        return false, fmt.Errorf("failed to check object existence: %w", err)
    }

    return true, nil
}

// GetSize returns the size of a file in Google Cloud Storage
func (g *GCSBackend) GetSize(ctx context.Context, remotePath string) (int64, error) {
    objectName := strings.TrimPrefix(remotePath, "/")

    bucket := g.client.Bucket(g.bucketName)
    object := bucket.Object(objectName)

    attrs, err := object.Attrs(ctx)
    if err != nil {
        return 0, fmt.Errorf("failed to get object attributes: %w", err)
```
|
||||||
|
}
|
||||||
|
|
||||||
|
return attrs.Size, nil
|
||||||
|
}
|
171 internal/cloud/interface.go Normal file

@@ -0,0 +1,171 @@
```go
package cloud

import (
	"context"
	"fmt"
	"io"
	"time"
)

// Backend defines the interface for cloud storage providers
type Backend interface {
	// Upload uploads a file to cloud storage
	Upload(ctx context.Context, localPath, remotePath string, progress ProgressCallback) error

	// Download downloads a file from cloud storage
	Download(ctx context.Context, remotePath, localPath string, progress ProgressCallback) error

	// List lists all backup files in cloud storage
	List(ctx context.Context, prefix string) ([]BackupInfo, error)

	// Delete deletes a file from cloud storage
	Delete(ctx context.Context, remotePath string) error

	// Exists checks if a file exists in cloud storage
	Exists(ctx context.Context, remotePath string) (bool, error)

	// GetSize returns the size of a remote file
	GetSize(ctx context.Context, remotePath string) (int64, error)

	// Name returns the backend name (e.g., "s3", "azure", "gcs")
	Name() string
}

// BackupInfo contains information about a backup in cloud storage
type BackupInfo struct {
	Key          string    // Full path/key in cloud storage
	Name         string    // Base filename
	Size         int64     // Size in bytes
	LastModified time.Time // Last modification time
	ETag         string    // Entity tag (version identifier)
	StorageClass string    // Storage class (e.g., STANDARD, GLACIER)
}

// ProgressCallback is called during upload/download to report progress
type ProgressCallback func(bytesTransferred, totalBytes int64)

// Config contains common configuration for cloud backends
type Config struct {
	Provider    string // "s3", "minio", "azure", "gcs", "b2"
	Bucket      string // Bucket or container name
	Region      string // Region (for S3)
	Endpoint    string // Custom endpoint (for MinIO, S3-compatible)
	AccessKey   string // Access key or account ID
	SecretKey   string // Secret key or access token
	UseSSL      bool   // Use SSL/TLS (default: true)
	PathStyle   bool   // Use path-style addressing (for MinIO)
	Prefix      string // Prefix for all operations (e.g., "backups/")
	Timeout     int    // Timeout in seconds (default: 300)
	MaxRetries  int    // Maximum retry attempts (default: 3)
	Concurrency int    // Upload/download concurrency (default: 5)
}

// NewBackend creates a new cloud storage backend based on the provider
func NewBackend(cfg *Config) (Backend, error) {
	switch cfg.Provider {
	case "s3", "aws":
		return NewS3Backend(cfg)
	case "minio":
		// MinIO uses the S3 backend with a custom endpoint
		cfg.PathStyle = true
		if cfg.Endpoint == "" {
			return nil, fmt.Errorf("endpoint required for MinIO")
		}
		return NewS3Backend(cfg)
	case "b2", "backblaze":
		// Backblaze B2 uses the S3-compatible API
		cfg.PathStyle = false
		if cfg.Endpoint == "" {
			return nil, fmt.Errorf("endpoint required for Backblaze B2")
		}
		return NewS3Backend(cfg)
	case "azure", "azblob":
		return NewAzureBackend(cfg)
	case "gs", "gcs", "google":
		return NewGCSBackend(cfg)
	default:
		return nil, fmt.Errorf("unsupported cloud provider: %s (supported: s3, minio, b2, azure, gcs)", cfg.Provider)
	}
}

// FormatSize returns a human-readable size
func FormatSize(bytes int64) string {
	const unit = 1024
	if bytes < unit {
		return fmt.Sprintf("%d B", bytes)
	}
	div, exp := int64(unit), 0
	for n := bytes / unit; n >= unit; n /= unit {
		div *= unit
		exp++
	}
	return fmt.Sprintf("%.1f %ciB", float64(bytes)/float64(div), "KMGTPE"[exp])
}

// DefaultConfig returns a config with sensible defaults
func DefaultConfig() *Config {
	return &Config{
		Provider:    "s3",
		UseSSL:      true,
		PathStyle:   false,
		Timeout:     300,
		MaxRetries:  3,
		Concurrency: 5,
	}
}

// Validate checks if the configuration is valid
func (c *Config) Validate() error {
	if c.Provider == "" {
		return fmt.Errorf("provider is required")
	}
	if c.Bucket == "" {
		return fmt.Errorf("bucket name is required")
	}
	if c.Provider == "s3" || c.Provider == "aws" {
		if c.Region == "" && c.Endpoint == "" {
			return fmt.Errorf("region or endpoint is required for S3")
		}
	}
	if c.Provider == "minio" || c.Provider == "b2" {
		if c.Endpoint == "" {
			return fmt.Errorf("endpoint is required for %s", c.Provider)
		}
	}
	return nil
}

// ProgressReader wraps an io.Reader to track progress
type ProgressReader struct {
	reader     io.Reader
	total      int64
	read       int64
	callback   ProgressCallback
	lastReport time.Time
}

// NewProgressReader creates a progress-tracking reader
func NewProgressReader(r io.Reader, total int64, callback ProgressCallback) *ProgressReader {
	return &ProgressReader{
		reader:     r,
		total:      total,
		callback:   callback,
		lastReport: time.Now(),
	}
}

func (pr *ProgressReader) Read(p []byte) (int, error) {
	n, err := pr.reader.Read(p)
	pr.read += int64(n)

	// Report progress every 100ms or when complete
	now := time.Now()
	if now.Sub(pr.lastReport) > 100*time.Millisecond || err == io.EOF {
		if pr.callback != nil {
			pr.callback(pr.read, pr.total)
		}
		pr.lastReport = now
	}

	return n, err
}
```
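`FormatSize` above converts byte counts to binary units. A minimal standalone sketch of the same logic (renamed `formatSize` so it compiles outside the `cloud` package):

```go
package main

import "fmt"

// formatSize mirrors the FormatSize helper: values below 1024 are printed
// in bytes; larger values are scaled to KiB, MiB, GiB, ... with one decimal.
func formatSize(bytes int64) string {
	const unit = 1024
	if bytes < unit {
		return fmt.Sprintf("%d B", bytes)
	}
	div, exp := int64(unit), 0
	for n := bytes / unit; n >= unit; n /= unit {
		div *= unit
		exp++
	}
	return fmt.Sprintf("%.1f %ciB", float64(bytes)/float64(div), "KMGTPE"[exp])
}

func main() {
	fmt.Println(formatSize(512))      // 512 B
	fmt.Println(formatSize(1536))     // 1.5 KiB
	fmt.Println(formatSize(10485760)) // 10.0 MiB
}
```

Note the `%ciB` verb: `exp` indexes into `"KMGTPE"`, so exponent 1 yields `KiB`, 2 yields `MiB`, and so on.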
372 internal/cloud/s3.go Normal file

@@ -0,0 +1,372 @@
```go
package cloud

import (
	"context"
	"fmt"
	"io"
	"os"
	"path"
	"path/filepath"
	"strings"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/credentials"
	"github.com/aws/aws-sdk-go-v2/feature/s3/manager"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// S3Backend implements the Backend interface for AWS S3 and compatible services
type S3Backend struct {
	client *s3.Client
	bucket string
	prefix string
	config *Config
}

// NewS3Backend creates a new S3 backend
func NewS3Backend(cfg *Config) (*S3Backend, error) {
	if err := cfg.Validate(); err != nil {
		return nil, fmt.Errorf("invalid config: %w", err)
	}

	ctx := context.Background()

	// Build AWS config
	var awsCfg aws.Config
	var err error

	if cfg.AccessKey != "" && cfg.SecretKey != "" {
		// Use explicit credentials
		credsProvider := credentials.NewStaticCredentialsProvider(
			cfg.AccessKey,
			cfg.SecretKey,
			"",
		)

		awsCfg, err = config.LoadDefaultConfig(ctx,
			config.WithCredentialsProvider(credsProvider),
			config.WithRegion(cfg.Region),
		)
	} else {
		// Use default credential chain (environment, IAM role, etc.)
		awsCfg, err = config.LoadDefaultConfig(ctx,
			config.WithRegion(cfg.Region),
		)
	}

	if err != nil {
		return nil, fmt.Errorf("failed to load AWS config: %w", err)
	}

	// Create S3 client with custom options
	clientOptions := []func(*s3.Options){
		func(o *s3.Options) {
			if cfg.Endpoint != "" {
				o.BaseEndpoint = aws.String(cfg.Endpoint)
			}
			if cfg.PathStyle {
				o.UsePathStyle = true
			}
		},
	}

	client := s3.NewFromConfig(awsCfg, clientOptions...)

	return &S3Backend{
		client: client,
		bucket: cfg.Bucket,
		prefix: cfg.Prefix,
		config: cfg,
	}, nil
}

// Name returns the backend name
func (s *S3Backend) Name() string {
	return "s3"
}

// buildKey creates the full S3 key from a filename
func (s *S3Backend) buildKey(filename string) string {
	if s.prefix == "" {
		return filename
	}
	// path.Join (not filepath.Join): object keys are always slash-separated,
	// even when the tool runs on Windows.
	return path.Join(s.prefix, filename)
}

// Upload uploads a file to S3 with multipart support for large files
func (s *S3Backend) Upload(ctx context.Context, localPath, remotePath string, progress ProgressCallback) error {
	// Open local file
	file, err := os.Open(localPath)
	if err != nil {
		return fmt.Errorf("failed to open file: %w", err)
	}
	defer file.Close()

	// Get file size
	stat, err := file.Stat()
	if err != nil {
		return fmt.Errorf("failed to stat file: %w", err)
	}
	fileSize := stat.Size()

	// Build S3 key
	key := s.buildKey(remotePath)

	// Use multipart upload for files larger than 100MB
	const multipartThreshold = 100 * 1024 * 1024 // 100 MB

	if fileSize > multipartThreshold {
		return s.uploadMultipart(ctx, file, key, fileSize, progress)
	}

	// Simple upload for smaller files
	return s.uploadSimple(ctx, file, key, fileSize, progress)
}

// uploadSimple performs a simple single-part upload
func (s *S3Backend) uploadSimple(ctx context.Context, file *os.File, key string, fileSize int64, progress ProgressCallback) error {
	// Create progress reader
	var reader io.Reader = file
	if progress != nil {
		reader = NewProgressReader(file, fileSize, progress)
	}

	// Upload to S3
	_, err := s.client.PutObject(ctx, &s3.PutObjectInput{
		Bucket: aws.String(s.bucket),
		Key:    aws.String(key),
		Body:   reader,
	})

	if err != nil {
		return fmt.Errorf("failed to upload to S3: %w", err)
	}

	return nil
}

// uploadMultipart performs a multipart upload for large files
func (s *S3Backend) uploadMultipart(ctx context.Context, file *os.File, key string, fileSize int64, progress ProgressCallback) error {
	// Create uploader with custom options
	uploader := manager.NewUploader(s.client, func(u *manager.Uploader) {
		// Part size: 10MB
		u.PartSize = 10 * 1024 * 1024

		// Upload up to 10 parts concurrently
		u.Concurrency = 10

		// Clean up uploaded parts if the upload fails
		u.LeavePartsOnError = false
	})

	// Wrap file with progress reader
	var reader io.Reader = file
	if progress != nil {
		reader = NewProgressReader(file, fileSize, progress)
	}

	// Upload with multipart
	_, err := uploader.Upload(ctx, &s3.PutObjectInput{
		Bucket: aws.String(s.bucket),
		Key:    aws.String(key),
		Body:   reader,
	})

	if err != nil {
		return fmt.Errorf("multipart upload failed: %w", err)
	}

	return nil
}

// Download downloads a file from S3
func (s *S3Backend) Download(ctx context.Context, remotePath, localPath string, progress ProgressCallback) error {
	// Build S3 key
	key := s.buildKey(remotePath)

	// Get object size first
	size, err := s.GetSize(ctx, remotePath)
	if err != nil {
		return fmt.Errorf("failed to get object size: %w", err)
	}

	// Download from S3
	result, err := s.client.GetObject(ctx, &s3.GetObjectInput{
		Bucket: aws.String(s.bucket),
		Key:    aws.String(key),
	})
	if err != nil {
		return fmt.Errorf("failed to download from S3: %w", err)
	}
	defer result.Body.Close()

	// Create local file
	if err := os.MkdirAll(filepath.Dir(localPath), 0755); err != nil {
		return fmt.Errorf("failed to create directory: %w", err)
	}

	outFile, err := os.Create(localPath)
	if err != nil {
		return fmt.Errorf("failed to create local file: %w", err)
	}
	defer outFile.Close()

	// Copy with progress tracking
	var reader io.Reader = result.Body
	if progress != nil {
		reader = NewProgressReader(result.Body, size, progress)
	}

	_, err = io.Copy(outFile, reader)
	if err != nil {
		return fmt.Errorf("failed to write file: %w", err)
	}

	return nil
}

// List lists all backup files in S3
func (s *S3Backend) List(ctx context.Context, prefix string) ([]BackupInfo, error) {
	// Build full prefix
	fullPrefix := s.buildKey(prefix)

	// List objects (note: a single ListObjectsV2 call returns at most
	// 1,000 keys; very large buckets would need a paginator)
	result, err := s.client.ListObjectsV2(ctx, &s3.ListObjectsV2Input{
		Bucket: aws.String(s.bucket),
		Prefix: aws.String(fullPrefix),
	})
	if err != nil {
		return nil, fmt.Errorf("failed to list objects: %w", err)
	}

	// Convert to BackupInfo
	var backups []BackupInfo
	for _, obj := range result.Contents {
		if obj.Key == nil {
			continue
		}

		key := *obj.Key
		name := filepath.Base(key)

		// Skip if it's just a directory marker
		if strings.HasSuffix(key, "/") {
			continue
		}

		info := BackupInfo{
			Key:          key,
			Name:         name,
			Size:         *obj.Size,
			LastModified: *obj.LastModified,
		}

		if obj.ETag != nil {
			info.ETag = *obj.ETag
		}

		if obj.StorageClass != "" {
			info.StorageClass = string(obj.StorageClass)
		} else {
			info.StorageClass = "STANDARD"
		}

		backups = append(backups, info)
	}

	return backups, nil
}

// Delete deletes a file from S3
func (s *S3Backend) Delete(ctx context.Context, remotePath string) error {
	key := s.buildKey(remotePath)

	_, err := s.client.DeleteObject(ctx, &s3.DeleteObjectInput{
		Bucket: aws.String(s.bucket),
		Key:    aws.String(key),
	})

	if err != nil {
		return fmt.Errorf("failed to delete object: %w", err)
	}

	return nil
}

// Exists checks if a file exists in S3
func (s *S3Backend) Exists(ctx context.Context, remotePath string) (bool, error) {
	key := s.buildKey(remotePath)

	_, err := s.client.HeadObject(ctx, &s3.HeadObjectInput{
		Bucket: aws.String(s.bucket),
		Key:    aws.String(key),
	})

	if err != nil {
		// Check if it's a "not found" error
		if strings.Contains(err.Error(), "NotFound") || strings.Contains(err.Error(), "404") {
			return false, nil
		}
		return false, fmt.Errorf("failed to check object existence: %w", err)
	}

	return true, nil
}

// GetSize returns the size of a remote file
func (s *S3Backend) GetSize(ctx context.Context, remotePath string) (int64, error) {
	key := s.buildKey(remotePath)

	result, err := s.client.HeadObject(ctx, &s3.HeadObjectInput{
		Bucket: aws.String(s.bucket),
		Key:    aws.String(key),
	})

	if err != nil {
		return 0, fmt.Errorf("failed to get object metadata: %w", err)
	}

	if result.ContentLength == nil {
		return 0, fmt.Errorf("content length not available")
	}

	return *result.ContentLength, nil
}

// BucketExists checks if the bucket exists and is accessible
func (s *S3Backend) BucketExists(ctx context.Context) (bool, error) {
	_, err := s.client.HeadBucket(ctx, &s3.HeadBucketInput{
		Bucket: aws.String(s.bucket),
	})

	if err != nil {
		if strings.Contains(err.Error(), "NotFound") || strings.Contains(err.Error(), "404") {
			return false, nil
		}
		return false, fmt.Errorf("failed to check bucket: %w", err)
	}

	return true, nil
}

// CreateBucket creates the bucket if it doesn't exist
func (s *S3Backend) CreateBucket(ctx context.Context) error {
	exists, err := s.BucketExists(ctx)
	if err != nil {
		return err
	}

	if exists {
		return nil
	}

	_, err = s.client.CreateBucket(ctx, &s3.CreateBucketInput{
		Bucket: aws.String(s.bucket),
	})

	if err != nil {
		return fmt.Errorf("failed to create bucket: %w", err)
	}

	return nil
}
```
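`buildKey` above prefixes object keys before every S3 operation. A standalone sketch of the same rule, using `path.Join` so keys keep forward slashes regardless of OS (object-store keys are slash-separated; `filepath.Join` would produce backslashes on Windows):

```go
package main

import (
	"fmt"
	"path"
)

// buildKey mirrors S3Backend.buildKey: an empty prefix returns the filename
// unchanged; otherwise prefix and filename are joined with forward slashes,
// and path.Join cleans up any duplicate separators.
func buildKey(prefix, filename string) string {
	if prefix == "" {
		return filename
	}
	return path.Join(prefix, filename)
}

func main() {
	fmt.Println(buildKey("", "db.dump"))             // db.dump
	fmt.Println(buildKey("backups/prod", "db.dump")) // backups/prod/db.dump
	fmt.Println(buildKey("backups/", "db.dump"))     // backups/db.dump
}
```

The third call shows why `path.Join` is convenient here: a trailing slash in the configured prefix does not produce a double slash in the key.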
198 internal/cloud/uri.go Normal file

@@ -0,0 +1,198 @@
```go
package cloud

import (
	"fmt"
	"net/url"
	"path"
	"strings"
)

// CloudURI represents a parsed cloud storage URI
type CloudURI struct {
	Provider string // "s3", "minio", "azure", "gcs", "b2"
	Bucket   string // Bucket or container name
	Path     string // Path within bucket (without leading /)
	Region   string // Region (optional, extracted from host)
	Endpoint string // Custom endpoint (for MinIO, etc)
	FullURI  string // Original URI string
}

// ParseCloudURI parses a cloud storage URI like s3://bucket/path/file.dump
// Supported formats:
//   - s3://bucket/path/file.dump
//   - s3://bucket.s3.region.amazonaws.com/path/file.dump
//   - minio://bucket/path/file.dump
//   - azure://container/path/file.dump
//   - gs://bucket/path/file.dump (Google Cloud Storage)
//   - b2://bucket/path/file.dump (Backblaze B2)
func ParseCloudURI(uri string) (*CloudURI, error) {
	if uri == "" {
		return nil, fmt.Errorf("URI cannot be empty")
	}

	// Parse URL
	parsed, err := url.Parse(uri)
	if err != nil {
		return nil, fmt.Errorf("invalid URI: %w", err)
	}

	// Extract provider from scheme
	provider := strings.ToLower(parsed.Scheme)
	if provider == "" {
		return nil, fmt.Errorf("URI must have a scheme (e.g., s3://)")
	}

	// Validate provider
	validProviders := map[string]bool{
		"s3":    true,
		"minio": true,
		"azure": true,
		"gs":    true,
		"gcs":   true,
		"b2":    true,
	}
	if !validProviders[provider] {
		return nil, fmt.Errorf("unsupported provider: %s (supported: s3, minio, azure, gs, gcs, b2)", provider)
	}

	// Normalize provider names
	if provider == "gcs" {
		provider = "gs"
	}

	// Extract bucket and path
	bucket := parsed.Host
	if bucket == "" {
		return nil, fmt.Errorf("URI must specify a bucket (e.g., s3://bucket/path)")
	}

	// Extract region from AWS S3 hostname if present
	// Format: bucket.s3.region.amazonaws.com or bucket.s3-region.amazonaws.com
	var region string
	var endpoint string

	if strings.Contains(bucket, ".amazonaws.com") {
		parts := strings.Split(bucket, ".")
		if len(parts) >= 3 {
			// Extract bucket name (first part)
			bucket = parts[0]

			// Extract region if present
			// bucket.s3.us-west-2.amazonaws.com -> us-west-2
			// bucket.s3-us-west-2.amazonaws.com -> us-west-2
			for i, part := range parts {
				if part == "s3" && i+1 < len(parts) && parts[i+1] != "amazonaws" {
					region = parts[i+1]
					break
				}
				if strings.HasPrefix(part, "s3-") {
					region = strings.TrimPrefix(part, "s3-")
					break
				}
			}
		}
	}

	// For MinIO and custom endpoints, preserve the host as endpoint
	if provider == "minio" || (provider == "s3" && !strings.Contains(bucket, "amazonaws.com")) {
		// If it looks like a custom endpoint (has dots), preserve it
		if strings.Contains(bucket, ".") && !strings.Contains(bucket, "amazonaws.com") {
			endpoint = bucket
			// Try to extract bucket from path
			trimmedPath := strings.TrimPrefix(parsed.Path, "/")
			pathParts := strings.SplitN(trimmedPath, "/", 2)
			if len(pathParts) > 0 && pathParts[0] != "" {
				bucket = pathParts[0]
				if len(pathParts) > 1 {
					parsed.Path = "/" + pathParts[1]
				} else {
					parsed.Path = "/"
				}
			}
		}
	}

	// Clean up path (remove leading slash)
	filepath := strings.TrimPrefix(parsed.Path, "/")

	return &CloudURI{
		Provider: provider,
		Bucket:   bucket,
		Path:     filepath,
		Region:   region,
		Endpoint: endpoint,
		FullURI:  uri,
	}, nil
}

// IsCloudURI checks if a string looks like a cloud storage URI
func IsCloudURI(s string) bool {
	s = strings.ToLower(s)
	return strings.HasPrefix(s, "s3://") ||
		strings.HasPrefix(s, "minio://") ||
		strings.HasPrefix(s, "azure://") ||
		strings.HasPrefix(s, "gs://") ||
		strings.HasPrefix(s, "gcs://") ||
		strings.HasPrefix(s, "b2://")
}

// String returns the string representation of the URI
func (u *CloudURI) String() string {
	return u.FullURI
}

// BaseName returns the filename without path
func (u *CloudURI) BaseName() string {
	return path.Base(u.Path)
}

// Dir returns the directory path without filename
func (u *CloudURI) Dir() string {
	return path.Dir(u.Path)
}

// Join appends path elements to the URI path
func (u *CloudURI) Join(elem ...string) string {
	newPath := u.Path
	for _, e := range elem {
		newPath = path.Join(newPath, e)
	}
	return fmt.Sprintf("%s://%s/%s", u.Provider, u.Bucket, newPath)
}

// ToConfig converts a CloudURI to a cloud.Config
func (u *CloudURI) ToConfig() *Config {
	cfg := &Config{
		Provider: u.Provider,
		Bucket:   u.Bucket,
		Prefix:   u.Dir(), // Use directory part as prefix
	}

	// Set region if available
	if u.Region != "" {
		cfg.Region = u.Region
	}

	// Set endpoint if available (for MinIO, etc)
	if u.Endpoint != "" {
		cfg.Endpoint = u.Endpoint
	}

	// Provider-specific settings
	switch u.Provider {
	case "minio":
		cfg.PathStyle = true
	case "b2":
		cfg.PathStyle = true
	}

	return cfg
}

// BuildRemotePath constructs the full remote path for a file
func (u *CloudURI) BuildRemotePath(filename string) string {
	if u.Path == "" || u.Path == "." {
		return filename
	}
	return path.Join(u.Path, filename)
}
```
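`ParseCloudURI` above also handles AWS virtual-host names, regions, and custom endpoints; a stripped-down sketch of just the core scheme/host/path split for the simple `s3://bucket/path` case (function name `parseSimpleURI` is illustrative, not part of the package):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// parseSimpleURI splits a cloud URI into provider (scheme), bucket (host)
// and object key (path without the leading slash), as ParseCloudURI does
// before its provider-specific special cases.
func parseSimpleURI(uri string) (provider, bucket, key string, err error) {
	u, err := url.Parse(uri)
	if err != nil {
		return "", "", "", err
	}
	if u.Scheme == "" || u.Host == "" {
		return "", "", "", fmt.Errorf("URI must look like s3://bucket/path")
	}
	return strings.ToLower(u.Scheme), u.Host, strings.TrimPrefix(u.Path, "/"), nil
}

func main() {
	p, b, k, err := parseSimpleURI("s3://backups/prod/db.dump")
	fmt.Println(p, b, k, err) // s3 backups prod/db.dump <nil>
}
```

For `s3://backups/prod/db.dump`, `url.Parse` puts `backups` in `Host` and `/prod/db.dump` in `Path`, which is why the real parser has to special-case hosts that are actually endpoints (MinIO) or AWS virtual-host names.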
@@ -85,6 +85,17 @@ type Config struct {
```go
	TUIDryRun  bool   // TUI dry-run mode (simulate without execution)
	TUIVerbose bool   // Verbose TUI logging
	TUILogFile string // TUI event log file path

	// Cloud storage options (v2.0)
	CloudEnabled    bool   // Enable cloud storage integration
	CloudProvider   string // "s3", "minio", "b2", "azure", "gcs"
	CloudBucket     string // Bucket/container name
	CloudRegion     string // Region (for S3, GCS)
	CloudEndpoint   string // Custom endpoint (for MinIO, B2, Azurite, fake-gcs-server)
	CloudAccessKey  string // Access key / Account name (Azure) / Service account file (GCS)
	CloudSecretKey  string // Secret key / Account key (Azure)
	CloudPrefix     string // Key/object prefix
	CloudAutoUpload bool   // Automatically upload after backup
}

// New creates a new configuration with default values
```

@@ -192,6 +203,17 @@ func New() *Config {
```go
	TUIDryRun:  getEnvBool("TUI_DRY_RUN", false),  // Execute by default
	TUIVerbose: getEnvBool("TUI_VERBOSE", false),  // Quiet by default
	TUILogFile: getEnvString("TUI_LOG_FILE", ""),  // No log file by default

	// Cloud storage defaults (v2.0)
	CloudEnabled:    getEnvBool("CLOUD_ENABLED", false),
	CloudProvider:   getEnvString("CLOUD_PROVIDER", "s3"),
	CloudBucket:     getEnvString("CLOUD_BUCKET", ""),
	CloudRegion:     getEnvString("CLOUD_REGION", "us-east-1"),
	CloudEndpoint:   getEnvString("CLOUD_ENDPOINT", ""),
	CloudAccessKey:  getEnvString("CLOUD_ACCESS_KEY", getEnvString("AWS_ACCESS_KEY_ID", "")),
	CloudSecretKey:  getEnvString("CLOUD_SECRET_KEY", getEnvString("AWS_SECRET_ACCESS_KEY", "")),
	CloudPrefix:     getEnvString("CLOUD_PREFIX", ""),
	CloudAutoUpload: getEnvBool("CLOUD_AUTO_UPLOAD", false),
}

// Ensure canonical defaults are enforced
```
|||||||
167
internal/metadata/metadata.go
Normal file
167
internal/metadata/metadata.go
Normal file
@@ -0,0 +1,167 @@
package metadata

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
	"io"
	"os"
	"path/filepath"
	"time"
)

// BackupMetadata contains comprehensive information about a backup
type BackupMetadata struct {
	Version         string            `json:"version"`
	Timestamp       time.Time         `json:"timestamp"`
	Database        string            `json:"database"`
	DatabaseType    string            `json:"database_type"`    // postgresql, mysql, mariadb
	DatabaseVersion string            `json:"database_version"` // e.g., "PostgreSQL 15.3"
	Host            string            `json:"host"`
	Port            int               `json:"port"`
	User            string            `json:"user"`
	BackupFile      string            `json:"backup_file"`
	SizeBytes       int64             `json:"size_bytes"`
	SHA256          string            `json:"sha256"`
	Compression     string            `json:"compression"` // none, gzip, pigz
	BackupType      string            `json:"backup_type"` // full, incremental (for v2.0)
	BaseBackup      string            `json:"base_backup,omitempty"`
	Duration        float64           `json:"duration_seconds"`
	ExtraInfo       map[string]string `json:"extra_info,omitempty"`
}

// ClusterMetadata contains metadata for cluster backups
type ClusterMetadata struct {
	Version      string            `json:"version"`
	Timestamp    time.Time         `json:"timestamp"`
	ClusterName  string            `json:"cluster_name"`
	DatabaseType string            `json:"database_type"`
	Host         string            `json:"host"`
	Port         int               `json:"port"`
	Databases    []BackupMetadata  `json:"databases"`
	TotalSize    int64             `json:"total_size_bytes"`
	Duration     float64           `json:"duration_seconds"`
	ExtraInfo    map[string]string `json:"extra_info,omitempty"`
}

// CalculateSHA256 computes the SHA-256 checksum of a file
func CalculateSHA256(filePath string) (string, error) {
	f, err := os.Open(filePath)
	if err != nil {
		return "", fmt.Errorf("failed to open file: %w", err)
	}
	defer f.Close()

	hasher := sha256.New()
	if _, err := io.Copy(hasher, f); err != nil {
		return "", fmt.Errorf("failed to calculate checksum: %w", err)
	}

	return hex.EncodeToString(hasher.Sum(nil)), nil
}

// Save writes metadata to a .meta.json file
func (m *BackupMetadata) Save() error {
	metaPath := m.BackupFile + ".meta.json"

	data, err := json.MarshalIndent(m, "", " ")
	if err != nil {
		return fmt.Errorf("failed to marshal metadata: %w", err)
	}

	if err := os.WriteFile(metaPath, data, 0644); err != nil {
		return fmt.Errorf("failed to write metadata file: %w", err)
	}

	return nil
}

// Load reads metadata from a .meta.json file
func Load(backupFile string) (*BackupMetadata, error) {
	metaPath := backupFile + ".meta.json"

	data, err := os.ReadFile(metaPath)
	if err != nil {
		return nil, fmt.Errorf("failed to read metadata file: %w", err)
	}

	var meta BackupMetadata
	if err := json.Unmarshal(data, &meta); err != nil {
		return nil, fmt.Errorf("failed to parse metadata: %w", err)
	}

	return &meta, nil
}

// Save writes cluster metadata to a .meta.json file
func (m *ClusterMetadata) Save(targetFile string) error {
	metaPath := targetFile + ".meta.json"

	data, err := json.MarshalIndent(m, "", " ")
	if err != nil {
		return fmt.Errorf("failed to marshal cluster metadata: %w", err)
	}

	if err := os.WriteFile(metaPath, data, 0644); err != nil {
		return fmt.Errorf("failed to write cluster metadata file: %w", err)
	}

	return nil
}

// LoadCluster reads cluster metadata from a .meta.json file
func LoadCluster(targetFile string) (*ClusterMetadata, error) {
	metaPath := targetFile + ".meta.json"

	data, err := os.ReadFile(metaPath)
	if err != nil {
		return nil, fmt.Errorf("failed to read cluster metadata file: %w", err)
	}

	var meta ClusterMetadata
	if err := json.Unmarshal(data, &meta); err != nil {
		return nil, fmt.Errorf("failed to parse cluster metadata: %w", err)
	}

	return &meta, nil
}

// ListBackups scans a directory for backup files and returns their metadata
func ListBackups(dir string) ([]*BackupMetadata, error) {
	pattern := filepath.Join(dir, "*.meta.json")
	matches, err := filepath.Glob(pattern)
	if err != nil {
		return nil, fmt.Errorf("failed to scan directory: %w", err)
	}

	var backups []*BackupMetadata
	for _, metaFile := range matches {
		// Extract the backup file path (remove the .meta.json suffix)
		backupFile := metaFile[:len(metaFile)-len(".meta.json")]

		meta, err := Load(backupFile)
		if err != nil {
			// Skip invalid metadata files
			continue
		}

		backups = append(backups, meta)
	}

	return backups, nil
}

// FormatSize returns a human-readable size
func FormatSize(bytes int64) string {
	const unit = 1024
	if bytes < unit {
		return fmt.Sprintf("%d B", bytes)
	}
	div, exp := int64(unit), 0
	for n := bytes / unit; n >= unit; n /= unit {
		div *= unit
		exp++
	}
	return fmt.Sprintf("%.1f %ciB", float64(bytes)/float64(div), "KMGTPE"[exp])
}
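`FormatSize` repeatedly divides by 1024 until the value fits under one unit, then prints one decimal place with the matching binary prefix. A standalone sketch of the same arithmetic (`formatSize` duplicates the package function so the snippet compiles on its own):

```go
package main

import "fmt"

// formatSize mirrors metadata.FormatSize: divide by 1024 until the value
// drops below one unit, then print one decimal place and the binary prefix.
func formatSize(bytes int64) string {
	const unit = 1024
	if bytes < unit {
		return fmt.Sprintf("%d B", bytes)
	}
	div, exp := int64(unit), 0
	for n := bytes / unit; n >= unit; n /= unit {
		div *= unit
		exp++
	}
	return fmt.Sprintf("%.1f %ciB", float64(bytes)/float64(div), "KMGTPE"[exp])
}

func main() {
	fmt.Println(formatSize(512))     // 512 B
	fmt.Println(formatSize(1536))    // 1.5 KiB
	fmt.Println(formatSize(5 << 30)) // 5.0 GiB
}
```

Note the exponent is found by integer division, so `div` always holds the largest power of 1024 not exceeding the input, and the printed mantissa stays in the range [1.0, 1024.0).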
211  internal/restore/cloud_download.go  Normal file
@@ -0,0 +1,211 @@
package restore

import (
	"context"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"path/filepath"

	"dbbackup/internal/cloud"
	"dbbackup/internal/logger"
	"dbbackup/internal/metadata"
)

// CloudDownloader handles downloading backups from cloud storage
type CloudDownloader struct {
	backend cloud.Backend
	log     logger.Logger
}

// NewCloudDownloader creates a new cloud downloader
func NewCloudDownloader(backend cloud.Backend, log logger.Logger) *CloudDownloader {
	return &CloudDownloader{
		backend: backend,
		log:     log,
	}
}

// DownloadOptions contains options for downloading from cloud storage
type DownloadOptions struct {
	VerifyChecksum bool   // Verify SHA-256 checksum after download
	KeepLocal      bool   // Keep the downloaded file (don't delete temp)
	TempDir        string // Temp directory (default: os.TempDir())
}

// DownloadResult contains information about a downloaded backup
type DownloadResult struct {
	LocalPath    string // Path to the downloaded file
	RemotePath   string // Original remote path
	Size         int64  // File size in bytes
	SHA256       string // SHA-256 checksum (if verified)
	MetadataPath string // Path to the downloaded metadata (if it exists)
	IsTempFile   bool   // Whether the file is in a temp directory
}

// Download downloads a backup from cloud storage
func (d *CloudDownloader) Download(ctx context.Context, remotePath string, opts DownloadOptions) (*DownloadResult, error) {
	// Determine the temp directory
	tempDir := opts.TempDir
	if tempDir == "" {
		tempDir = os.TempDir()
	}

	// Create a unique temp subdirectory
	tempSubDir := filepath.Join(tempDir, fmt.Sprintf("dbbackup-download-%d", os.Getpid()))
	if err := os.MkdirAll(tempSubDir, 0755); err != nil {
		return nil, fmt.Errorf("failed to create temp directory: %w", err)
	}

	// Extract the filename from the remote path
	filename := filepath.Base(remotePath)
	localPath := filepath.Join(tempSubDir, filename)

	d.log.Info("Downloading backup from cloud", "remote", remotePath, "local", localPath)

	// Get the file size for progress tracking
	size, err := d.backend.GetSize(ctx, remotePath)
	if err != nil {
		d.log.Warn("Could not get remote file size", "error", err)
		size = 0 // Continue anyway
	}

	// Progress callback
	var lastPercent int
	progressCallback := func(transferred, total int64) {
		if total > 0 {
			percent := int(float64(transferred) / float64(total) * 100)
			if percent != lastPercent && percent%10 == 0 {
				d.log.Info("Download progress", "percent", percent, "transferred", cloud.FormatSize(transferred), "total", cloud.FormatSize(total))
				lastPercent = percent
			}
		}
	}

	// Download the file
	if err := d.backend.Download(ctx, remotePath, localPath, progressCallback); err != nil {
		// Clean up on failure
		os.RemoveAll(tempSubDir)
		return nil, fmt.Errorf("download failed: %w", err)
	}

	result := &DownloadResult{
		LocalPath:  localPath,
		RemotePath: remotePath,
		Size:       size,
		IsTempFile: !opts.KeepLocal,
	}

	// Try to download the metadata file
	metaRemotePath := remotePath + ".meta.json"
	exists, err := d.backend.Exists(ctx, metaRemotePath)
	if err == nil && exists {
		metaLocalPath := localPath + ".meta.json"
		if err := d.backend.Download(ctx, metaRemotePath, metaLocalPath, nil); err != nil {
			d.log.Warn("Failed to download metadata", "error", err)
		} else {
			result.MetadataPath = metaLocalPath
			d.log.Debug("Downloaded metadata", "path", metaLocalPath)
		}
	}

	// Verify the checksum if requested
	if opts.VerifyChecksum {
		d.log.Info("Verifying checksum...")
		checksum, err := calculateSHA256(localPath)
		if err != nil {
			// Clean up on verification failure
			os.RemoveAll(tempSubDir)
			return nil, fmt.Errorf("checksum calculation failed: %w", err)
		}
		result.SHA256 = checksum

		// Check against metadata if available
		if result.MetadataPath != "" {
			// metadata.Load expects the backup path and appends ".meta.json" itself,
			// so pass LocalPath here, not MetadataPath.
			meta, err := metadata.Load(result.LocalPath)
			if err != nil {
				d.log.Warn("Failed to load metadata for verification", "error", err)
			} else if meta.SHA256 != "" && meta.SHA256 != checksum {
				// Clean up on verification failure
				os.RemoveAll(tempSubDir)
				return nil, fmt.Errorf("checksum mismatch: expected %s, got %s", meta.SHA256, checksum)
			} else if meta.SHA256 == checksum {
				d.log.Info("Checksum verified successfully", "sha256", checksum)
			}
		}
	}

	d.log.Info("Download completed", "path", localPath, "size", cloud.FormatSize(result.Size))

	return result, nil
}

// DownloadFromURI downloads a backup using a cloud URI
func (d *CloudDownloader) DownloadFromURI(ctx context.Context, uri string, opts DownloadOptions) (*DownloadResult, error) {
	// Parse the URI
	cloudURI, err := cloud.ParseCloudURI(uri)
	if err != nil {
		return nil, fmt.Errorf("invalid cloud URI: %w", err)
	}

	// Download using the path from the URI
	return d.Download(ctx, cloudURI.Path, opts)
}

// Cleanup removes downloaded temp files
func (r *DownloadResult) Cleanup() error {
	if !r.IsTempFile {
		return nil // Don't delete non-temp files
	}

	// Remove the entire temp directory
	tempDir := filepath.Dir(r.LocalPath)
	if err := os.RemoveAll(tempDir); err != nil {
		return fmt.Errorf("failed to clean up temp files: %w", err)
	}

	return nil
}

// calculateSHA256 calculates the SHA-256 checksum of a file
func calculateSHA256(filePath string) (string, error) {
	file, err := os.Open(filePath)
	if err != nil {
		return "", err
	}
	defer file.Close()

	hash := sha256.New()
	if _, err := io.Copy(hash, file); err != nil {
		return "", err
	}

	return hex.EncodeToString(hash.Sum(nil)), nil
}

// DownloadFromCloudURI is a convenience function to download from a cloud URI
func DownloadFromCloudURI(ctx context.Context, uri string, opts DownloadOptions) (*DownloadResult, error) {
	// Parse the URI
	cloudURI, err := cloud.ParseCloudURI(uri)
	if err != nil {
		return nil, fmt.Errorf("invalid cloud URI: %w", err)
	}

	// Create a config from the URI
	cfg := cloudURI.ToConfig()

	// Create the backend
	backend, err := cloud.NewBackend(cfg)
	if err != nil {
		return nil, fmt.Errorf("failed to create cloud backend: %w", err)
	}

	// Create the downloader
	log := logger.New("info", "text")
	downloader := NewCloudDownloader(backend, log)

	// Download
	return downloader.Download(ctx, cloudURI.Path, opts)
}
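The progress callback in `Download` throttles logging: it reports only when the integer percentage differs from the last report and is a multiple of ten, so a large transfer produces at most about ten log lines instead of one per chunk. A standalone sketch of that throttling logic, refactored into a testable helper (`reportAt` is a local name for illustration, not part of the package):

```go
package main

import "fmt"

// reportAt mirrors the throttling in Download's progress callback: a percent
// is reported only when it differs from the last report and is a multiple of 10.
func reportAt(steps []int64, total int64) []int {
	var reported []int
	var lastPercent int
	for _, transferred := range steps {
		if total > 0 {
			percent := int(float64(transferred) / float64(total) * 100)
			if percent != lastPercent && percent%10 == 0 {
				reported = append(reported, percent)
				lastPercent = percent
			}
		}
	}
	return reported
}

func main() {
	// 20 chunks of 50 bytes against a 1000-byte file: one report per 10%.
	var steps []int64
	for t := int64(50); t <= 1000; t += 50 {
		steps = append(steps, t)
	}
	fmt.Println(reportAt(steps, 1000)) // [10 20 30 40 50 60 70 80 90 100]
}
```

One quirk carried over from the original: because `lastPercent` starts at 0, a 0% report is never emitted, which is usually what you want.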
224  internal/retention/retention.go  Normal file
@@ -0,0 +1,224 @@
package retention

import (
	"fmt"
	"os"
	"path/filepath"
	"sort"
	"time"

	"dbbackup/internal/metadata"
)

// Policy defines the retention rules
type Policy struct {
	RetentionDays int
	MinBackups    int
	DryRun        bool
}

// CleanupResult contains information about cleanup operations
type CleanupResult struct {
	TotalBackups        int
	EligibleForDeletion int
	Deleted             []string
	Kept                []string
	SpaceFreed          int64
	Errors              []error
}

// ApplyPolicy enforces the retention policy on backups in a directory
func ApplyPolicy(backupDir string, policy Policy) (*CleanupResult, error) {
	result := &CleanupResult{
		Deleted: make([]string, 0),
		Kept:    make([]string, 0),
		Errors:  make([]error, 0),
	}

	// List all backups in the directory
	backups, err := metadata.ListBackups(backupDir)
	if err != nil {
		return nil, fmt.Errorf("failed to list backups: %w", err)
	}

	result.TotalBackups = len(backups)

	// Sort backups by timestamp (oldest first)
	sort.Slice(backups, func(i, j int) bool {
		return backups[i].Timestamp.Before(backups[j].Timestamp)
	})

	// Calculate the cutoff date
	cutoffDate := time.Now().AddDate(0, 0, -policy.RetentionDays)

	// Determine which backups to delete
	for i, backup := range backups {
		// Always keep the minimum number of backups (the most recent ones)
		backupsRemaining := len(backups) - i
		if backupsRemaining <= policy.MinBackups {
			result.Kept = append(result.Kept, backup.BackupFile)
			continue
		}

		// Check whether the backup is older than the retention period
		if backup.Timestamp.Before(cutoffDate) {
			result.EligibleForDeletion++

			if policy.DryRun {
				result.Deleted = append(result.Deleted, backup.BackupFile)
			} else {
				// Delete the backup file and associated metadata
				if err := deleteBackup(backup.BackupFile); err != nil {
					result.Errors = append(result.Errors,
						fmt.Errorf("failed to delete %s: %w", backup.BackupFile, err))
				} else {
					result.Deleted = append(result.Deleted, backup.BackupFile)
					result.SpaceFreed += backup.SizeBytes
				}
			}
		} else {
			result.Kept = append(result.Kept, backup.BackupFile)
		}
	}

	return result, nil
}

// deleteBackup removes a backup file and all associated files
func deleteBackup(backupFile string) error {
	// Delete the main backup file
	if err := os.Remove(backupFile); err != nil && !os.IsNotExist(err) {
		return fmt.Errorf("failed to delete backup file: %w", err)
	}

	// Delete the metadata file
	metaFile := backupFile + ".meta.json"
	if err := os.Remove(metaFile); err != nil && !os.IsNotExist(err) {
		return fmt.Errorf("failed to delete metadata file: %w", err)
	}

	// Delete the legacy .sha256 file; don't fail if it doesn't exist (new format)
	os.Remove(backupFile + ".sha256")

	// Delete the legacy .info file; don't fail if it doesn't exist (new format)
	os.Remove(backupFile + ".info")

	return nil
}

// GetOldestBackups returns the N oldest backups in a directory
func GetOldestBackups(backupDir string, count int) ([]*metadata.BackupMetadata, error) {
	backups, err := metadata.ListBackups(backupDir)
	if err != nil {
		return nil, err
	}

	// Sort by timestamp (oldest first)
	sort.Slice(backups, func(i, j int) bool {
		return backups[i].Timestamp.Before(backups[j].Timestamp)
	})

	if count > len(backups) {
		count = len(backups)
	}

	return backups[:count], nil
}

// GetNewestBackups returns the N newest backups in a directory
func GetNewestBackups(backupDir string, count int) ([]*metadata.BackupMetadata, error) {
	backups, err := metadata.ListBackups(backupDir)
	if err != nil {
		return nil, err
	}

	// Sort by timestamp (newest first)
	sort.Slice(backups, func(i, j int) bool {
		return backups[i].Timestamp.After(backups[j].Timestamp)
	})

	if count > len(backups) {
		count = len(backups)
	}

	return backups[:count], nil
}

// CleanupByPattern removes backups matching a specific pattern
func CleanupByPattern(backupDir, pattern string, policy Policy) (*CleanupResult, error) {
	result := &CleanupResult{
		Deleted: make([]string, 0),
		Kept:    make([]string, 0),
		Errors:  make([]error, 0),
	}

	// Find matching backup files
	searchPattern := filepath.Join(backupDir, pattern)
	matches, err := filepath.Glob(searchPattern)
	if err != nil {
		return nil, fmt.Errorf("failed to match pattern: %w", err)
	}

	// Filter to only .dump or .sql files
	var backupFiles []string
	for _, match := range matches {
		ext := filepath.Ext(match)
		if ext == ".dump" || ext == ".sql" {
			backupFiles = append(backupFiles, match)
		}
	}

	// Load metadata for the matched backups
	var backups []*metadata.BackupMetadata
	for _, file := range backupFiles {
		meta, err := metadata.Load(file)
		if err != nil {
			// Skip files without metadata
			continue
		}
		backups = append(backups, meta)
	}

	result.TotalBackups = len(backups)

	// Sort by timestamp
	sort.Slice(backups, func(i, j int) bool {
		return backups[i].Timestamp.Before(backups[j].Timestamp)
	})

	cutoffDate := time.Now().AddDate(0, 0, -policy.RetentionDays)

	// Apply the policy
	for i, backup := range backups {
		backupsRemaining := len(backups) - i
		if backupsRemaining <= policy.MinBackups {
			result.Kept = append(result.Kept, backup.BackupFile)
			continue
		}

		if backup.Timestamp.Before(cutoffDate) {
			result.EligibleForDeletion++

			if policy.DryRun {
				result.Deleted = append(result.Deleted, backup.BackupFile)
			} else {
				if err := deleteBackup(backup.BackupFile); err != nil {
					result.Errors = append(result.Errors, err)
				} else {
					result.Deleted = append(result.Deleted, backup.BackupFile)
					result.SpaceFreed += backup.SizeBytes
				}
			}
		} else {
			result.Kept = append(result.Kept, backup.BackupFile)
		}
	}

	return result, nil
}
114  internal/verification/verification.go  Normal file
@@ -0,0 +1,114 @@
package verification

import (
	"fmt"
	"os"

	"dbbackup/internal/metadata"
)

// Result represents the outcome of a verification operation
type Result struct {
	Valid            bool
	BackupFile       string
	ExpectedSHA256   string
	CalculatedSHA256 string
	SizeMatch        bool
	FileExists       bool
	MetadataExists   bool
	Error            error
}

// Verify checks the integrity of a backup file
func Verify(backupFile string) (*Result, error) {
	result := &Result{
		BackupFile: backupFile,
	}

	// Check that the backup file exists
	info, err := os.Stat(backupFile)
	if err != nil {
		result.FileExists = false
		result.Error = fmt.Errorf("backup file does not exist: %w", err)
		return result, nil
	}
	result.FileExists = true

	// Load metadata
	meta, err := metadata.Load(backupFile)
	if err != nil {
		result.MetadataExists = false
		result.Error = fmt.Errorf("failed to load metadata: %w", err)
		return result, nil
	}
	result.MetadataExists = true
	result.ExpectedSHA256 = meta.SHA256

	// Check that the sizes match
	if info.Size() != meta.SizeBytes {
		result.SizeMatch = false
		result.Error = fmt.Errorf("size mismatch: expected %d bytes, got %d bytes",
			meta.SizeBytes, info.Size())
		return result, nil
	}
	result.SizeMatch = true

	// Calculate the actual SHA-256
	actualSHA256, err := metadata.CalculateSHA256(backupFile)
	if err != nil {
		result.Error = fmt.Errorf("failed to calculate checksum: %w", err)
		return result, nil
	}
	result.CalculatedSHA256 = actualSHA256

	// Compare checksums
	if actualSHA256 != meta.SHA256 {
		result.Valid = false
		result.Error = fmt.Errorf("checksum mismatch: expected %s, got %s",
			meta.SHA256, actualSHA256)
		return result, nil
	}

	// All checks passed
	result.Valid = true
	return result, nil
}

// VerifyMultiple verifies multiple backup files
func VerifyMultiple(backupFiles []string) ([]*Result, error) {
	var results []*Result

	for _, file := range backupFiles {
		result, err := Verify(file)
		if err != nil {
			return nil, fmt.Errorf("verification error for %s: %w", file, err)
		}
		results = append(results, result)
	}

	return results, nil
}

// QuickCheck performs a fast check without full checksum calculation.
// It only validates metadata existence and file size.
func QuickCheck(backupFile string) error {
	// Check that the file exists
	info, err := os.Stat(backupFile)
	if err != nil {
		return fmt.Errorf("backup file does not exist: %w", err)
	}

	// Load metadata
	meta, err := metadata.Load(backupFile)
	if err != nil {
		return fmt.Errorf("metadata missing or invalid: %w", err)
	}

	// Check the size
	if info.Size() != meta.SizeBytes {
		return fmt.Errorf("size mismatch: expected %d bytes, got %d bytes",
			meta.SizeBytes, info.Size())
	}

	return nil
}
382  scripts/test_azure_storage.sh  Executable file
@@ -0,0 +1,382 @@
#!/bin/bash

# Azure Blob Storage (Azurite) testing script for dbbackup
# Tests backup, restore, verify, and cleanup against the Azure emulator

set -e

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Test configuration
AZURITE_ENDPOINT="http://localhost:10000"
CONTAINER_NAME="test-backups"
ACCOUNT_NAME="devstoreaccount1"
ACCOUNT_KEY="Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="

# Database connection details (from docker-compose)
POSTGRES_HOST="localhost"
POSTGRES_PORT="5434"
POSTGRES_USER="testuser"
POSTGRES_PASS="testpass"
POSTGRES_DB="testdb"

MYSQL_HOST="localhost"
MYSQL_PORT="3308"
MYSQL_USER="testuser"
MYSQL_PASS="testpass"
MYSQL_DB="testdb"

# Test counters
TESTS_PASSED=0
TESTS_FAILED=0

# Helper functions
print_header() {
    echo -e "\n${BLUE}=== $1 ===${NC}\n"
}

print_success() {
    echo -e "${GREEN}✓ $1${NC}"
    ((TESTS_PASSED++))
}

print_error() {
    echo -e "${RED}✗ $1${NC}"
    ((TESTS_FAILED++))
}

print_info() {
    echo -e "${YELLOW}ℹ $1${NC}"
}

wait_for_azurite() {
    print_info "Waiting for Azurite to be ready..."
    for i in {1..30}; do
        if curl -f -s "${AZURITE_ENDPOINT}/devstoreaccount1?restype=account&comp=properties" > /dev/null 2>&1; then
            print_success "Azurite is ready"
            return 0
        fi
        sleep 1
    done
    print_error "Azurite failed to start"
    return 1
}

# Build dbbackup if needed
build_dbbackup() {
    print_header "Building dbbackup"
    if [ ! -f "./dbbackup" ]; then
        go build -o dbbackup .
        print_success "Built dbbackup binary"
    else
        print_info "Using existing dbbackup binary"
    fi
}

# Start services
start_services() {
    print_header "Starting Azurite and Database Services"
    docker-compose -f docker-compose.azurite.yml up -d

    # Wait for services
    sleep 5
    wait_for_azurite

    print_info "Waiting for PostgreSQL..."
    sleep 3

    print_info "Waiting for MySQL..."
    sleep 3

    print_success "All services started"
}

# Stop services
stop_services() {
    print_header "Stopping Services"
    docker-compose -f docker-compose.azurite.yml down
    print_success "Services stopped"
}

# Create test data in the databases
create_test_data() {
    print_header "Creating Test Data"

    # PostgreSQL
    PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -d $POSTGRES_DB <<EOF
DROP TABLE IF EXISTS test_table;
CREATE TABLE test_table (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
INSERT INTO test_table (name) VALUES ('Azure Test 1'), ('Azure Test 2'), ('Azure Test 3');
EOF
    print_success "Created PostgreSQL test data"

    # MySQL
    mysql -h $MYSQL_HOST -P $MYSQL_PORT -u $MYSQL_USER -p$MYSQL_PASS $MYSQL_DB <<EOF
DROP TABLE IF EXISTS test_table;
CREATE TABLE test_table (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
INSERT INTO test_table (name) VALUES ('Azure Test 1'), ('Azure Test 2'), ('Azure Test 3');
EOF
    print_success "Created MySQL test data"
}

# Test 1: PostgreSQL backup to Azure
test_postgres_backup() {
    print_header "Test 1: PostgreSQL Backup to Azure"

    ./dbbackup backup postgres \
        --host $POSTGRES_HOST \
        --port $POSTGRES_PORT \
        --user $POSTGRES_USER \
        --password $POSTGRES_PASS \
        --database $POSTGRES_DB \
        --output ./backups/pg_azure_test.sql \
        --cloud "azure://$CONTAINER_NAME/postgres/backup1.sql?endpoint=$AZURITE_ENDPOINT&account=$ACCOUNT_NAME&key=$ACCOUNT_KEY"

    if [ $? -eq 0 ]; then
        print_success "PostgreSQL backup uploaded to Azure"
    else
        print_error "PostgreSQL backup failed"
        return 1
    fi
}

# Test 2: MySQL backup to Azure
test_mysql_backup() {
    print_header "Test 2: MySQL Backup to Azure"

    ./dbbackup backup mysql \
        --host $MYSQL_HOST \
        --port $MYSQL_PORT \
        --user $MYSQL_USER \
        --password $MYSQL_PASS \
        --database $MYSQL_DB \
        --output ./backups/mysql_azure_test.sql \
        --cloud "azure://$CONTAINER_NAME/mysql/backup1.sql?endpoint=$AZURITE_ENDPOINT&account=$ACCOUNT_NAME&key=$ACCOUNT_KEY"

    if [ $? -eq 0 ]; then
        print_success "MySQL backup uploaded to Azure"
    else
        print_error "MySQL backup failed"
        return 1
    fi
}

# Test 3: List backups in Azure
test_list_backups() {
    print_header "Test 3: List Azure Backups"

    ./dbbackup cloud list "azure://$CONTAINER_NAME/postgres/?endpoint=$AZURITE_ENDPOINT&account=$ACCOUNT_NAME&key=$ACCOUNT_KEY"

    if [ $? -eq 0 ]; then
        print_success "Listed Azure backups"
    else
        print_error "Failed to list backups"
        return 1
    fi
}

# Test 4: Verify backup in Azure
test_verify_backup() {
    print_header "Test 4: Verify Azure Backup"

    ./dbbackup verify "azure://$CONTAINER_NAME/postgres/backup1.sql?endpoint=$AZURITE_ENDPOINT&account=$ACCOUNT_NAME&key=$ACCOUNT_KEY"

    if [ $? -eq 0 ]; then
        print_success "Backup verification successful"
    else
        print_error "Backup verification failed"
        return 1
    fi
}

# Test 5: Restore from Azure
test_restore_from_azure() {
    print_header "Test 5: Restore from Azure"

    # Drop and recreate the target database
    PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -d postgres <<EOF
DROP DATABASE IF EXISTS testdb_restored;
CREATE DATABASE testdb_restored;
EOF

    ./dbbackup restore postgres \
        --source "azure://$CONTAINER_NAME/postgres/backup1.sql?endpoint=$AZURITE_ENDPOINT&account=$ACCOUNT_NAME&key=$ACCOUNT_KEY" \
        --host $POSTGRES_HOST \
        --port $POSTGRES_PORT \
        --user $POSTGRES_USER \
        --password $POSTGRES_PASS \
        --database testdb_restored

    if [ $? -eq 0 ]; then
        print_success "Restored from Azure backup"

        # Verify the restored data
        COUNT=$(PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -d testdb_restored -t -c "SELECT COUNT(*) FROM test_table;")
        if [ "$COUNT" -eq 3 ]; then
            print_success "Restored data verified (3 rows)"
        else
            print_error "Restored data incorrect (expected 3 rows, got $COUNT)"
        fi
||||||
|
else
|
||||||
|
print_error "Restore from Azure failed"
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
# Test 6: Large file upload (block blob)
|
||||||
|
test_large_file_upload() {
|
||||||
|
print_header "Test 6: Large File Upload (Block Blob)"
|
||||||
|
|
||||||
|
# Create a large test file (300MB)
|
||||||
|
print_info "Creating 300MB test file..."
|
||||||
|
dd if=/dev/urandom of=./backups/large_test.dat bs=1M count=300 2>/dev/null
|
||||||
|
|
||||||
|
print_info "Uploading large file to Azure..."
|
||||||
|
./dbbackup cloud upload \
|
||||||
|
./backups/large_test.dat \
|
||||||
|
"azure://$CONTAINER_NAME/large/large_test.dat?endpoint=$AZURITE_ENDPOINT&account=$ACCOUNT_NAME&key=$ACCOUNT_KEY"
|
||||||
|
|
||||||
|
if [ $? -eq 0 ]; then
|
||||||
|
print_success "Large file uploaded successfully (block blob)"
|
||||||
|
|
||||||
|
# Verify file exists and has correct size
|
||||||
|
print_info "Downloading large file..."
|
||||||
|
./dbbackup cloud download \
|
||||||
|
"azure://$CONTAINER_NAME/large/large_test.dat?endpoint=$AZURITE_ENDPOINT&account=$ACCOUNT_NAME&key=$ACCOUNT_KEY" \
|
||||||
|
./backups/large_test_downloaded.dat
|
||||||
|
|
||||||
|
if [ $? -eq 0 ]; then
|
||||||
|
ORIGINAL_SIZE=$(stat -f%z ./backups/large_test.dat 2>/dev/null || stat -c%s ./backups/large_test.dat)
|
||||||
|
DOWNLOADED_SIZE=$(stat -f%z ./backups/large_test_downloaded.dat 2>/dev/null || stat -c%s ./backups/large_test_downloaded.dat)
|
||||||
|
|
||||||
|
if [ "$ORIGINAL_SIZE" -eq "$DOWNLOADED_SIZE" ]; then
|
||||||
|
print_success "Downloaded file size matches original ($ORIGINAL_SIZE bytes)"
|
||||||
|
else
|
||||||
|
print_error "File size mismatch (original: $ORIGINAL_SIZE, downloaded: $DOWNLOADED_SIZE)"
|
||||||
|
fi
|
||||||
|
else
|
||||||
|
print_error "Large file download failed"
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Cleanup
|
||||||
|
rm -f ./backups/large_test.dat ./backups/large_test_downloaded.dat
|
||||||
|
else
|
||||||
|
print_error "Large file upload failed"
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
# Test 7: Delete from Azure
|
||||||
|
test_delete_backup() {
|
||||||
|
print_header "Test 7: Delete Backup from Azure"
|
||||||
|
|
||||||
|
./dbbackup cloud delete "azure://$CONTAINER_NAME/mysql/backup1.sql?endpoint=$AZURITE_ENDPOINT&account=$ACCOUNT_NAME&key=$ACCOUNT_KEY"
|
||||||
|
|
||||||
|
if [ $? -eq 0 ]; then
|
||||||
|
print_success "Deleted backup from Azure"
|
||||||
|
|
||||||
|
# Verify deletion
|
||||||
|
if ! ./dbbackup cloud list "azure://$CONTAINER_NAME/mysql/?endpoint=$AZURITE_ENDPOINT&account=$ACCOUNT_NAME&key=$ACCOUNT_KEY" | grep -q "backup1.sql"; then
|
||||||
|
print_success "Verified backup was deleted"
|
||||||
|
else
|
||||||
|
print_error "Backup still exists after deletion"
|
||||||
|
fi
|
||||||
|
else
|
||||||
|
print_error "Failed to delete backup"
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
# Test 8: Cleanup old backups
|
||||||
|
test_cleanup() {
|
||||||
|
print_header "Test 8: Cleanup Old Backups"
|
||||||
|
|
||||||
|
# Create multiple backups with different timestamps
|
||||||
|
for i in {1..5}; do
|
||||||
|
./dbbackup backup postgres \
|
||||||
|
--host $POSTGRES_HOST \
|
||||||
|
--port $POSTGRES_PORT \
|
||||||
|
--user $POSTGRES_USER \
|
||||||
|
--password $POSTGRES_PASS \
|
||||||
|
--database $POSTGRES_DB \
|
||||||
|
--output "./backups/pg_cleanup_$i.sql" \
|
||||||
|
--cloud "azure://$CONTAINER_NAME/cleanup/backup_$i.sql?endpoint=$AZURITE_ENDPOINT&account=$ACCOUNT_NAME&key=$ACCOUNT_KEY"
|
||||||
|
sleep 1
|
||||||
|
done
|
||||||
|
|
||||||
|
print_success "Created 5 test backups"
|
||||||
|
|
||||||
|
# Cleanup, keeping only 2
|
||||||
|
./dbbackup cleanup "azure://$CONTAINER_NAME/cleanup/?endpoint=$AZURITE_ENDPOINT&account=$ACCOUNT_NAME&key=$ACCOUNT_KEY" --keep 2
|
||||||
|
|
||||||
|
if [ $? -eq 0 ]; then
|
||||||
|
print_success "Cleanup completed"
|
||||||
|
|
||||||
|
# Count remaining backups
|
||||||
|
COUNT=$(./dbbackup cloud list "azure://$CONTAINER_NAME/cleanup/?endpoint=$AZURITE_ENDPOINT&account=$ACCOUNT_NAME&key=$ACCOUNT_KEY" | grep -c "backup_")
|
||||||
|
if [ "$COUNT" -le 2 ]; then
|
||||||
|
print_success "Verified cleanup (kept 2 backups)"
|
||||||
|
else
|
||||||
|
print_error "Cleanup failed (expected 2 backups, found $COUNT)"
|
||||||
|
fi
|
||||||
|
else
|
||||||
|
print_error "Cleanup failed"
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
# Main test execution
|
||||||
|
main() {
|
||||||
|
print_header "Azure Blob Storage (Azurite) Integration Tests"
|
||||||
|
|
||||||
|
# Setup
|
||||||
|
build_dbbackup
|
||||||
|
start_services
|
||||||
|
create_test_data
|
||||||
|
|
||||||
|
# Run tests
|
||||||
|
test_postgres_backup
|
||||||
|
test_mysql_backup
|
||||||
|
test_list_backups
|
||||||
|
test_verify_backup
|
||||||
|
test_restore_from_azure
|
||||||
|
test_large_file_upload
|
||||||
|
test_delete_backup
|
||||||
|
test_cleanup
|
||||||
|
|
||||||
|
# Cleanup
|
||||||
|
print_header "Cleanup"
|
||||||
|
rm -rf ./backups
|
||||||
|
|
||||||
|
# Summary
|
||||||
|
print_header "Test Summary"
|
||||||
|
echo -e "${GREEN}Passed: $TESTS_PASSED${NC}"
|
||||||
|
echo -e "${RED}Failed: $TESTS_FAILED${NC}"
|
||||||
|
|
||||||
|
if [ $TESTS_FAILED -eq 0 ]; then
|
||||||
|
print_success "All tests passed!"
|
||||||
|
stop_services
|
||||||
|
exit 0
|
||||||
|
else
|
||||||
|
print_error "Some tests failed"
|
||||||
|
print_info "Leaving services running for debugging"
|
||||||
|
print_info "Run 'docker-compose -f docker-compose.azurite.yml down' to stop services"
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
# Run main
|
||||||
|
main
|
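The size comparison in Test 6 relies on `stat -f%z` (BSD/macOS) falling back to `stat -c%s` (GNU coreutils), since the two `stat` implementations take different flags. A minimal standalone sketch of that portability trick; the `file_size` helper name is illustrative, not part of dbbackup:

```shell
#!/bin/sh
# Portable file-size lookup: try the BSD/macOS stat flag first, then
# fall back to the GNU coreutils flag, as the Test 6 size check does.
file_size() {
    stat -f%z "$1" 2>/dev/null || stat -c%s "$1"
}

printf '12345' > /tmp/size_probe.bin
file_size /tmp/size_probe.bin   # prints 5 with either stat variant
rm -f /tmp/size_probe.bin
```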
253
scripts/test_cloud_storage.sh
Executable file
@@ -0,0 +1,253 @@
#!/bin/bash
set -e

# Color output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

echo -e "${BLUE}========================================${NC}"
echo -e "${BLUE}dbbackup Cloud Storage Integration Test${NC}"
echo -e "${BLUE}========================================${NC}"
echo

# Configuration
MINIO_ENDPOINT="http://localhost:9000"
MINIO_ACCESS_KEY="minioadmin"
MINIO_SECRET_KEY="minioadmin123"
MINIO_BUCKET="test-backups"
POSTGRES_HOST="localhost"
POSTGRES_PORT="5433"
POSTGRES_USER="testuser"
POSTGRES_PASS="testpass123"
POSTGRES_DB="cloudtest"

# Export credentials
export AWS_ACCESS_KEY_ID="$MINIO_ACCESS_KEY"
export AWS_SECRET_ACCESS_KEY="$MINIO_SECRET_KEY"
export AWS_ENDPOINT_URL="$MINIO_ENDPOINT"
export AWS_REGION="us-east-1"

# Check if dbbackup binary exists
if [ ! -f "./dbbackup" ]; then
    echo -e "${YELLOW}Building dbbackup...${NC}"
    go build -o dbbackup .
    echo -e "${GREEN}✓ Build successful${NC}"
fi

# Function to wait for service
wait_for_service() {
    local service=$1
    local host=$2
    local port=$3
    local max_attempts=30
    local attempt=1

    echo -e "${YELLOW}Waiting for $service to be ready...${NC}"

    while ! nc -z $host $port 2>/dev/null; do
        if [ $attempt -ge $max_attempts ]; then
            echo -e "${RED}✗ $service did not start in time${NC}"
            return 1
        fi
        echo -n "."
        sleep 1
        attempt=$((attempt + 1))
    done

    echo -e "${GREEN}✓ $service is ready${NC}"
}

# Step 1: Start services
echo -e "${BLUE}Step 1: Starting services with Docker Compose${NC}"
docker-compose -f docker-compose.minio.yml up -d

# Wait for services
wait_for_service "MinIO" "localhost" "9000"
wait_for_service "PostgreSQL" "localhost" "5433"

sleep 5

# Step 2: Create test database
echo -e "\n${BLUE}Step 2: Creating test database${NC}"
PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -c "DROP DATABASE IF EXISTS $POSTGRES_DB;" postgres 2>/dev/null || true
PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -c "CREATE DATABASE $POSTGRES_DB;" postgres
PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -d $POSTGRES_DB << EOF
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100),
    email VARCHAR(100),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

INSERT INTO users (name, email) VALUES
    ('Alice', 'alice@example.com'),
    ('Bob', 'bob@example.com'),
    ('Charlie', 'charlie@example.com');

CREATE TABLE products (
    id SERIAL PRIMARY KEY,
    name VARCHAR(200),
    price DECIMAL(10,2)
);

INSERT INTO products (name, price) VALUES
    ('Widget', 19.99),
    ('Gadget', 29.99),
    ('Doohickey', 39.99);
EOF

echo -e "${GREEN}✓ Test database created with sample data${NC}"

# Step 3: Test local backup
echo -e "\n${BLUE}Step 3: Creating local backup${NC}"
./dbbackup backup single $POSTGRES_DB \
    --db-type postgres \
    --host $POSTGRES_HOST \
    --port $POSTGRES_PORT \
    --user $POSTGRES_USER \
    --password $POSTGRES_PASS \
    --output-dir /tmp/dbbackup-test

LOCAL_BACKUP=$(ls -t /tmp/dbbackup-test/${POSTGRES_DB}_*.dump 2>/dev/null | head -1)
if [ -z "$LOCAL_BACKUP" ]; then
    echo -e "${RED}✗ Local backup failed${NC}"
    exit 1
fi
echo -e "${GREEN}✓ Local backup created: $LOCAL_BACKUP${NC}"

# Step 4: Test cloud upload
echo -e "\n${BLUE}Step 4: Uploading backup to MinIO (S3)${NC}"
./dbbackup cloud upload "$LOCAL_BACKUP" \
    --cloud-provider minio \
    --cloud-bucket $MINIO_BUCKET \
    --cloud-endpoint $MINIO_ENDPOINT

echo -e "${GREEN}✓ Upload successful${NC}"

# Step 5: Test cloud list
echo -e "\n${BLUE}Step 5: Listing cloud backups${NC}"
./dbbackup cloud list \
    --cloud-provider minio \
    --cloud-bucket $MINIO_BUCKET \
    --cloud-endpoint $MINIO_ENDPOINT \
    --verbose

# Step 6: Test backup with cloud URI
echo -e "\n${BLUE}Step 6: Testing backup with cloud URI${NC}"
./dbbackup backup single $POSTGRES_DB \
    --db-type postgres \
    --host $POSTGRES_HOST \
    --port $POSTGRES_PORT \
    --user $POSTGRES_USER \
    --password $POSTGRES_PASS \
    --output-dir /tmp/dbbackup-test \
    --cloud minio://$MINIO_BUCKET/uri-test/

echo -e "${GREEN}✓ Backup with cloud URI successful${NC}"

# Step 7: Drop database for restore test
echo -e "\n${BLUE}Step 7: Dropping database for restore test${NC}"
PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -c "DROP DATABASE $POSTGRES_DB;" postgres

# Step 8: Test restore from cloud URI
echo -e "\n${BLUE}Step 8: Restoring from cloud URI${NC}"
CLOUD_URI="minio://$MINIO_BUCKET/$(basename $LOCAL_BACKUP)"
./dbbackup restore single "$CLOUD_URI" \
    --target $POSTGRES_DB \
    --create \
    --confirm \
    --db-type postgres \
    --host $POSTGRES_HOST \
    --port $POSTGRES_PORT \
    --user $POSTGRES_USER \
    --password $POSTGRES_PASS

echo -e "${GREEN}✓ Restore from cloud successful${NC}"

# Step 9: Verify data
echo -e "\n${BLUE}Step 9: Verifying restored data${NC}"
USER_COUNT=$(PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -d $POSTGRES_DB -t -c "SELECT COUNT(*) FROM users;")
PRODUCT_COUNT=$(PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -d $POSTGRES_DB -t -c "SELECT COUNT(*) FROM products;")

if [ "$USER_COUNT" -eq 3 ] && [ "$PRODUCT_COUNT" -eq 3 ]; then
    echo -e "${GREEN}✓ Data verification successful (users: $USER_COUNT, products: $PRODUCT_COUNT)${NC}"
else
    echo -e "${RED}✗ Data verification failed (users: $USER_COUNT, products: $PRODUCT_COUNT)${NC}"
    exit 1
fi

# Step 10: Test verify command
echo -e "\n${BLUE}Step 10: Verifying cloud backup integrity${NC}"
./dbbackup verify-backup "$CLOUD_URI"

echo -e "${GREEN}✓ Backup verification successful${NC}"

# Step 11: Test cloud cleanup
echo -e "\n${BLUE}Step 11: Testing cloud cleanup (dry-run)${NC}"
./dbbackup cleanup "minio://$MINIO_BUCKET/" \
    --retention-days 0 \
    --min-backups 1 \
    --dry-run

# Step 12: Create multiple backups for cleanup test
echo -e "\n${BLUE}Step 12: Creating multiple backups for cleanup test${NC}"
for i in {1..5}; do
    echo "Creating backup $i/5..."
    ./dbbackup backup single $POSTGRES_DB \
        --db-type postgres \
        --host $POSTGRES_HOST \
        --port $POSTGRES_PORT \
        --user $POSTGRES_USER \
        --password $POSTGRES_PASS \
        --output-dir /tmp/dbbackup-test \
        --cloud minio://$MINIO_BUCKET/cleanup-test/
    sleep 1
done

echo -e "${GREEN}✓ Multiple backups created${NC}"

# Step 13: Test actual cleanup
echo -e "\n${BLUE}Step 13: Testing cloud cleanup (actual)${NC}"
./dbbackup cleanup "minio://$MINIO_BUCKET/cleanup-test/" \
    --retention-days 0 \
    --min-backups 2

echo -e "${GREEN}✓ Cloud cleanup successful${NC}"

# Step 14: Test large file upload (multipart)
echo -e "\n${BLUE}Step 14: Testing large file upload (>100MB for multipart)${NC}"
echo "Creating 150MB test file..."
dd if=/dev/zero of=/tmp/large-test-file.bin bs=1M count=150 2>/dev/null

echo "Uploading large file..."
./dbbackup cloud upload /tmp/large-test-file.bin \
    --cloud-provider minio \
    --cloud-bucket $MINIO_BUCKET \
    --cloud-endpoint $MINIO_ENDPOINT \
    --verbose

echo -e "${GREEN}✓ Large file multipart upload successful${NC}"

# Cleanup
echo -e "\n${BLUE}Cleanup${NC}"
rm -f /tmp/large-test-file.bin
rm -rf /tmp/dbbackup-test

echo -e "\n${GREEN}========================================${NC}"
echo -e "${GREEN}✓ ALL TESTS PASSED!${NC}"
echo -e "${GREEN}========================================${NC}"
echo
echo -e "${YELLOW}To stop services:${NC}"
echo -e "  docker-compose -f docker-compose.minio.yml down"
echo
echo -e "${YELLOW}To view MinIO console:${NC}"
echo -e "  http://localhost:9001 (minioadmin / minioadmin123)"
echo
echo -e "${YELLOW}To keep services running for manual testing:${NC}"
echo -e "  export AWS_ACCESS_KEY_ID=minioadmin"
echo -e "  export AWS_SECRET_ACCESS_KEY=minioadmin123"
echo -e "  export AWS_ENDPOINT_URL=http://localhost:9000"
echo -e "  ./dbbackup cloud list --cloud-provider minio --cloud-bucket test-backups"
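Step 3 of the script above selects the newest dump with `ls -t ... | head -1`: `ls -t` sorts matches by modification time, newest first, and `head -1` keeps the most recent one. A minimal standalone sketch of that selection (the `/tmp/pick-demo` paths are illustrative, not the ones the test suite uses):

```shell
#!/bin/sh
# Pick the newest *.dump by mtime, as the Step 3 LOCAL_BACKUP line does.
mkdir -p /tmp/pick-demo
touch -t 202401010000 /tmp/pick-demo/db_old.dump   # older mtime
touch -t 202401020000 /tmp/pick-demo/db_new.dump   # newer mtime

LATEST=$(ls -t /tmp/pick-demo/*.dump 2>/dev/null | head -1)
echo "$LATEST"   # /tmp/pick-demo/db_new.dump

rm -rf /tmp/pick-demo
```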
390
scripts/test_gcs_storage.sh
Executable file
@@ -0,0 +1,390 @@
#!/bin/bash

# Google Cloud Storage (fake-gcs-server) Testing Script for dbbackup
# Tests backup, restore, verify, and cleanup with GCS emulator

set -e

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Test configuration
GCS_ENDPOINT="http://localhost:4443/storage/v1"
BUCKET_NAME="test-backups"
PROJECT_ID="test-project"

# Database connection details (from docker-compose)
POSTGRES_HOST="localhost"
POSTGRES_PORT="5435"
POSTGRES_USER="testuser"
POSTGRES_PASS="testpass"
POSTGRES_DB="testdb"

MYSQL_HOST="localhost"
MYSQL_PORT="3309"
MYSQL_USER="testuser"
MYSQL_PASS="testpass"
MYSQL_DB="testdb"

# Test counters
TESTS_PASSED=0
TESTS_FAILED=0

# Functions
print_header() {
    echo -e "\n${BLUE}=== $1 ===${NC}\n"
}

print_success() {
    echo -e "${GREEN}✓ $1${NC}"
    # Plain $(( )) assignment: ((TESTS_PASSED++)) returns the pre-increment
    # value, so the first call (0 -> 1) would abort the script under set -e.
    TESTS_PASSED=$((TESTS_PASSED + 1))
}

print_error() {
    echo -e "${RED}✗ $1${NC}"
    TESTS_FAILED=$((TESTS_FAILED + 1))
}

print_info() {
    echo -e "${YELLOW}ℹ $1${NC}"
}

wait_for_gcs() {
    print_info "Waiting for fake-gcs-server to be ready..."
    for i in {1..30}; do
        if curl -f -s "$GCS_ENDPOINT/b" > /dev/null 2>&1; then
            print_success "fake-gcs-server is ready"
            return 0
        fi
        sleep 1
    done
    print_error "fake-gcs-server failed to start"
    return 1
}

create_test_bucket() {
    print_info "Creating test bucket..."
    curl -X POST "$GCS_ENDPOINT/b?project=$PROJECT_ID" \
        -H "Content-Type: application/json" \
        -d "{\"name\": \"$BUCKET_NAME\"}" > /dev/null 2>&1 || true
    print_success "Test bucket created"
}

# Build dbbackup if needed
build_dbbackup() {
    print_header "Building dbbackup"
    if [ ! -f "./dbbackup" ]; then
        go build -o dbbackup .
        print_success "Built dbbackup binary"
    else
        print_info "Using existing dbbackup binary"
    fi
}

# Start services
start_services() {
    print_header "Starting GCS Emulator and Database Services"
    docker-compose -f docker-compose.gcs.yml up -d

    # Wait for services
    sleep 5
    wait_for_gcs
    create_test_bucket

    print_info "Waiting for PostgreSQL..."
    sleep 3

    print_info "Waiting for MySQL..."
    sleep 3

    print_success "All services started"
}

# Stop services
stop_services() {
    print_header "Stopping Services"
    docker-compose -f docker-compose.gcs.yml down
    print_success "Services stopped"
}

# Create test data in databases
create_test_data() {
    print_header "Creating Test Data"

    # PostgreSQL
    PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -d $POSTGRES_DB <<EOF
DROP TABLE IF EXISTS test_table;
CREATE TABLE test_table (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
INSERT INTO test_table (name) VALUES ('GCS Test 1'), ('GCS Test 2'), ('GCS Test 3');
EOF
    print_success "Created PostgreSQL test data"

    # MySQL
    mysql -h $MYSQL_HOST -P $MYSQL_PORT -u $MYSQL_USER -p$MYSQL_PASS $MYSQL_DB <<EOF
DROP TABLE IF EXISTS test_table;
CREATE TABLE test_table (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
INSERT INTO test_table (name) VALUES ('GCS Test 1'), ('GCS Test 2'), ('GCS Test 3');
EOF
    print_success "Created MySQL test data"
}

# Test 1: PostgreSQL backup to GCS
test_postgres_backup() {
    print_header "Test 1: PostgreSQL Backup to GCS"

    ./dbbackup backup postgres \
        --host $POSTGRES_HOST \
        --port $POSTGRES_PORT \
        --user $POSTGRES_USER \
        --password $POSTGRES_PASS \
        --database $POSTGRES_DB \
        --output ./backups/pg_gcs_test.sql \
        --cloud "gs://$BUCKET_NAME/postgres/backup1.sql?endpoint=$GCS_ENDPOINT"

    if [ $? -eq 0 ]; then
        print_success "PostgreSQL backup uploaded to GCS"
    else
        print_error "PostgreSQL backup failed"
        return 1
    fi
}

# Test 2: MySQL backup to GCS
test_mysql_backup() {
    print_header "Test 2: MySQL Backup to GCS"

    ./dbbackup backup mysql \
        --host $MYSQL_HOST \
        --port $MYSQL_PORT \
        --user $MYSQL_USER \
        --password $MYSQL_PASS \
        --database $MYSQL_DB \
        --output ./backups/mysql_gcs_test.sql \
        --cloud "gs://$BUCKET_NAME/mysql/backup1.sql?endpoint=$GCS_ENDPOINT"

    if [ $? -eq 0 ]; then
        print_success "MySQL backup uploaded to GCS"
    else
        print_error "MySQL backup failed"
        return 1
    fi
}

# Test 3: List backups in GCS
test_list_backups() {
    print_header "Test 3: List GCS Backups"

    ./dbbackup cloud list "gs://$BUCKET_NAME/postgres/?endpoint=$GCS_ENDPOINT"

    if [ $? -eq 0 ]; then
        print_success "Listed GCS backups"
    else
        print_error "Failed to list backups"
        return 1
    fi
}

# Test 4: Verify backup in GCS
test_verify_backup() {
    print_header "Test 4: Verify GCS Backup"

    ./dbbackup verify "gs://$BUCKET_NAME/postgres/backup1.sql?endpoint=$GCS_ENDPOINT"

    if [ $? -eq 0 ]; then
        print_success "Backup verification successful"
    else
        print_error "Backup verification failed"
        return 1
    fi
}

# Test 5: Restore from GCS
test_restore_from_gcs() {
    print_header "Test 5: Restore from GCS"

    # Drop and recreate database
    PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -d postgres <<EOF
DROP DATABASE IF EXISTS testdb_restored;
CREATE DATABASE testdb_restored;
EOF

    ./dbbackup restore postgres \
        --source "gs://$BUCKET_NAME/postgres/backup1.sql?endpoint=$GCS_ENDPOINT" \
        --host $POSTGRES_HOST \
        --port $POSTGRES_PORT \
        --user $POSTGRES_USER \
        --password $POSTGRES_PASS \
        --database testdb_restored

    if [ $? -eq 0 ]; then
        print_success "Restored from GCS backup"

        # Verify restored data
        COUNT=$(PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -d testdb_restored -t -c "SELECT COUNT(*) FROM test_table;")
        if [ "$COUNT" -eq 3 ]; then
            print_success "Restored data verified (3 rows)"
        else
            print_error "Restored data incorrect (expected 3 rows, got $COUNT)"
        fi
    else
        print_error "Restore from GCS failed"
        return 1
    fi
}

# Test 6: Large file upload (chunked upload)
test_large_file_upload() {
    print_header "Test 6: Large File Upload (Chunked)"

    # Create a large test file (200MB)
    print_info "Creating 200MB test file..."
    dd if=/dev/urandom of=./backups/large_test.dat bs=1M count=200 2>/dev/null

    print_info "Uploading large file to GCS..."
    ./dbbackup cloud upload \
        ./backups/large_test.dat \
        "gs://$BUCKET_NAME/large/large_test.dat?endpoint=$GCS_ENDPOINT"

    if [ $? -eq 0 ]; then
        print_success "Large file uploaded successfully (chunked)"

        # Verify file exists and has correct size
        print_info "Downloading large file..."
        ./dbbackup cloud download \
            "gs://$BUCKET_NAME/large/large_test.dat?endpoint=$GCS_ENDPOINT" \
            ./backups/large_test_downloaded.dat

        if [ $? -eq 0 ]; then
            ORIGINAL_SIZE=$(stat -f%z ./backups/large_test.dat 2>/dev/null || stat -c%s ./backups/large_test.dat)
            DOWNLOADED_SIZE=$(stat -f%z ./backups/large_test_downloaded.dat 2>/dev/null || stat -c%s ./backups/large_test_downloaded.dat)

            if [ "$ORIGINAL_SIZE" -eq "$DOWNLOADED_SIZE" ]; then
                print_success "Downloaded file size matches original ($ORIGINAL_SIZE bytes)"
            else
                print_error "File size mismatch (original: $ORIGINAL_SIZE, downloaded: $DOWNLOADED_SIZE)"
            fi
        else
            print_error "Large file download failed"
        fi

        # Cleanup
        rm -f ./backups/large_test.dat ./backups/large_test_downloaded.dat
    else
        print_error "Large file upload failed"
        return 1
    fi
}

# Test 7: Delete from GCS
test_delete_backup() {
    print_header "Test 7: Delete Backup from GCS"

    ./dbbackup cloud delete "gs://$BUCKET_NAME/mysql/backup1.sql?endpoint=$GCS_ENDPOINT"

    if [ $? -eq 0 ]; then
        print_success "Deleted backup from GCS"

        # Verify deletion
        if ! ./dbbackup cloud list "gs://$BUCKET_NAME/mysql/?endpoint=$GCS_ENDPOINT" | grep -q "backup1.sql"; then
            print_success "Verified backup was deleted"
        else
            print_error "Backup still exists after deletion"
        fi
    else
        print_error "Failed to delete backup"
        return 1
    fi
}

# Test 8: Cleanup old backups
test_cleanup() {
    print_header "Test 8: Cleanup Old Backups"

    # Create multiple backups with different timestamps
    for i in {1..5}; do
        ./dbbackup backup postgres \
            --host $POSTGRES_HOST \
            --port $POSTGRES_PORT \
            --user $POSTGRES_USER \
            --password $POSTGRES_PASS \
            --database $POSTGRES_DB \
            --output "./backups/pg_cleanup_$i.sql" \
            --cloud "gs://$BUCKET_NAME/cleanup/backup_$i.sql?endpoint=$GCS_ENDPOINT"
        sleep 1
    done

    print_success "Created 5 test backups"

    # Cleanup, keeping only 2
    ./dbbackup cleanup "gs://$BUCKET_NAME/cleanup/?endpoint=$GCS_ENDPOINT" --keep 2

    if [ $? -eq 0 ]; then
        print_success "Cleanup completed"

        # Count remaining backups
        COUNT=$(./dbbackup cloud list "gs://$BUCKET_NAME/cleanup/?endpoint=$GCS_ENDPOINT" | grep -c "backup_")
        if [ "$COUNT" -le 2 ]; then
            print_success "Verified cleanup (kept 2 backups)"
        else
            print_error "Cleanup failed (expected 2 backups, found $COUNT)"
        fi
    else
        print_error "Cleanup failed"
        return 1
    fi
}

# Main test execution
main() {
    print_header "Google Cloud Storage (fake-gcs-server) Integration Tests"

    # Setup
    build_dbbackup
    start_services
    create_test_data

    # Run tests
    test_postgres_backup
    test_mysql_backup
    test_list_backups
    test_verify_backup
    test_restore_from_gcs
    test_large_file_upload
    test_delete_backup
    test_cleanup

    # Cleanup
    print_header "Cleanup"
    rm -rf ./backups

    # Summary
    print_header "Test Summary"
    echo -e "${GREEN}Passed: $TESTS_PASSED${NC}"
    echo -e "${RED}Failed: $TESTS_FAILED${NC}"

    if [ $TESTS_FAILED -eq 0 ]; then
        print_success "All tests passed!"
        stop_services
        exit 0
    else
        print_error "Some tests failed"
        print_info "Leaving services running for debugging"
        print_info "Run 'docker-compose -f docker-compose.gcs.yml down' to stop services"
        exit 1
    fi
}

# Run main
main
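Test 8 in the scripts above counts retained backups with `grep -c "backup_"`, which prints the number of matching lines. A minimal standalone sketch of that counting step; note that `grep -c` exits non-zero when there are zero matches, so a `set -e` script should guard such a count with `|| true`:

```shell
#!/bin/sh
# Count matching lines the way test_cleanup does: grep -c prints the
# number of lines that match, here simulating a cloud listing.
COUNT=$(printf 'backup_1.sql\nbackup_2.sql\nother.txt\n' | grep -c "backup_")
echo "$COUNT"   # 2
```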