Compare commits: v2.0-sprin...v2.0-sprin (1 commit, 64f1458e9a)

AZURE.md (new file)
@@ -0,0 +1,531 @@

# Azure Blob Storage Integration

This guide covers using **Azure Blob Storage** with `dbbackup` for secure, scalable cloud backup storage.

## Table of Contents

- [Quick Start](#quick-start)
- [URI Syntax](#uri-syntax)
- [Authentication](#authentication)
- [Configuration](#configuration)
- [Usage Examples](#usage-examples)
- [Advanced Features](#advanced-features)
- [Testing with Azurite](#testing-with-azurite)
- [Best Practices](#best-practices)
- [Troubleshooting](#troubleshooting)

## Quick Start

### 1. Azure Portal Setup

1. Create a storage account in Azure Portal
2. Create a container for backups
3. Get your account credentials:
   - **Account Name**: Your storage account name
   - **Account Key**: Primary or secondary access key (from the Access Keys section)
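
If you prefer scripting over the portal, the same setup can be done with the Azure CLI. A minimal sketch (the resource group, account name, and region are placeholders):

```bash
# Create a storage account and a backups container, then capture the primary key
az storage account create \
  --name myaccount \
  --resource-group mygroup \
  --location eastus \
  --sku Standard_LRS

ACCOUNT_KEY=$(az storage account keys list \
  --account-name myaccount \
  --resource-group mygroup \
  --query "[0].value" -o tsv)

az storage container create \
  --name backups \
  --account-name myaccount \
  --account-key "$ACCOUNT_KEY"
```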

### 2. Basic Backup

```bash
# Backup PostgreSQL to Azure
dbbackup backup postgres \
  --host localhost \
  --database mydb \
  --output backup.sql \
  --cloud "azure://mycontainer/backups/db.sql?account=myaccount&key=ACCOUNT_KEY"
```

### 3. Restore from Azure

```bash
# Restore from Azure backup
dbbackup restore postgres \
  --source "azure://mycontainer/backups/db.sql?account=myaccount&key=ACCOUNT_KEY" \
  --host localhost \
  --database mydb_restored
```

## URI Syntax

### Basic Format

```
azure://container/path/to/backup.sql?account=ACCOUNT_NAME&key=ACCOUNT_KEY
```

### URI Components

| Component | Required | Description | Example |
|-----------|----------|-------------|---------|
| `container` | Yes | Azure container name | `mycontainer` |
| `path` | Yes | Object path within container | `backups/db.sql` |
| `account` | Yes | Storage account name | `mystorageaccount` |
| `key` | Yes | Storage account key | `base64-encoded-key` |
| `endpoint` | No | Custom endpoint (Azurite) | `http://localhost:10000` |

### URI Examples

**Production Azure:**
```
azure://prod-backups/postgres/db.sql?account=prodaccount&key=YOUR_KEY_HERE
```

**Azurite Emulator:**
```
azure://test-backups/postgres/db.sql?endpoint=http://localhost:10000&account=devstoreaccount1&key=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==
```

**With Path Prefix:**
```
azure://backups/production/postgres/2024/db.sql?account=myaccount&key=KEY
```

## Authentication

### Method 1: URI Parameters (Recommended for CLI)

Pass credentials directly in the URI:

```
azure://container/path?account=myaccount&key=YOUR_ACCOUNT_KEY
```
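
The examples above pass the raw base64 key, which dbbackup appears to accept as-is. Since account keys contain `/`, `+`, and `=` characters, percent-encoding the key is a safe fallback if your shell or an intermediate layer mangles the query string. A sketch using `jq` (an assumption: `jq` is installed, and dbbackup URL-decodes query parameters):

```bash
# Percent-encode the account key before embedding it in the URI
ENCODED_KEY=$(jq -rn --arg v "$AZURE_STORAGE_KEY" '$v|@uri')
dbbackup backup postgres \
  --host localhost \
  --database mydb \
  --output backup.sql \
  --cloud "azure://mycontainer/backups/db.sql?account=myaccount&key=${ENCODED_KEY}"
```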

### Method 2: Environment Variables

Set credentials via the environment:

```bash
export AZURE_STORAGE_ACCOUNT="myaccount"
export AZURE_STORAGE_KEY="YOUR_ACCOUNT_KEY"

# Use a simplified URI (credentials come from the environment)
dbbackup backup postgres --cloud "azure://container/path/backup.sql"
```

### Method 3: Connection String

Use an Azure connection string:

```bash
export AZURE_STORAGE_CONNECTION_STRING="DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=YOUR_KEY;EndpointSuffix=core.windows.net"

dbbackup backup postgres --cloud "azure://container/path/backup.sql"
```

### Getting Your Account Key

1. Go to Azure Portal → Storage Accounts
2. Select your storage account
3. Navigate to **Security + networking** → **Access keys**
4. Copy **key1** or **key2**

**Important:** Keep your account keys secure. Use Azure Key Vault for production.
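
One way to follow that advice is to store the key in Key Vault and fetch it at backup time instead of hard-coding it. A minimal sketch with the Azure CLI (the vault and secret names are placeholders):

```bash
# Store the account key once
az keyvault secret set \
  --vault-name my-backup-vault \
  --name storage-account-key \
  --value "$ACCOUNT_KEY"

# Fetch it in the backup script
AZURE_STORAGE_KEY=$(az keyvault secret show \
  --vault-name my-backup-vault \
  --name storage-account-key \
  --query value -o tsv)
```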

## Configuration

### Container Setup

Create a container before first use:

```bash
# Azure CLI
az storage container create \
  --name backups \
  --account-name myaccount \
  --account-key YOUR_KEY

# Or let dbbackup create it automatically
dbbackup cloud upload file.sql "azure://backups/file.sql?account=myaccount&key=KEY&create=true"
```

### Access Tiers

Azure Blob Storage offers multiple access tiers:

- **Hot**: Frequent access (default)
- **Cool**: Infrequent access (lower storage cost)
- **Archive**: Long-term retention (lowest cost, retrieval delay)

Set the tier in the Azure Portal or with the Azure CLI:

```bash
az storage blob set-tier \
  --container-name backups \
  --name backup.sql \
  --tier Cool \
  --account-name myaccount
```

### Lifecycle Management

Configure automatic tier transitions:

```json
{
  "rules": [
    {
      "name": "moveToArchive",
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": ["blockBlob"],
          "prefixMatch": ["backups/"]
        },
        "actions": {
          "baseBlob": {
            "tierToCool": {
              "daysAfterModificationGreaterThan": 30
            },
            "tierToArchive": {
              "daysAfterModificationGreaterThan": 90
            },
            "delete": {
              "daysAfterModificationGreaterThan": 365
            }
          }
        }
      }
    }
  ]
}
```
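
The policy takes effect only once it is attached to the storage account. A sketch of applying it with the Azure CLI (assuming the JSON above is saved as `policy.json`):

```bash
# Attach the lifecycle policy to the storage account
az storage account management-policy create \
  --account-name myaccount \
  --resource-group mygroup \
  --policy @policy.json
```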

## Usage Examples

### Backup with Auto-Upload

```bash
# PostgreSQL backup with automatic Azure upload
dbbackup backup postgres \
  --host localhost \
  --database production_db \
  --output /backups/db.sql \
  --cloud "azure://prod-backups/postgres/$(date +%Y%m%d_%H%M%S).sql?account=myaccount&key=KEY" \
  --compression 6
```

### Backup All Databases

```bash
# Backup the entire PostgreSQL cluster to Azure
dbbackup backup postgres \
  --host localhost \
  --all-databases \
  --output-dir /backups \
  --cloud "azure://prod-backups/postgres/cluster/?account=myaccount&key=KEY"
```

### Verify Backup

```bash
# Verify backup integrity
dbbackup verify "azure://prod-backups/postgres/backup.sql?account=myaccount&key=KEY"
```

### List Backups

```bash
# List all backups in the container
dbbackup cloud list "azure://prod-backups/postgres/?account=myaccount&key=KEY"

# List with a path prefix
dbbackup cloud list "azure://prod-backups/postgres/2024/?account=myaccount&key=KEY"
```

### Download Backup

```bash
# Download from Azure to local storage
dbbackup cloud download \
  "azure://prod-backups/postgres/backup.sql?account=myaccount&key=KEY" \
  /local/path/backup.sql
```

### Delete Old Backups

```bash
# Manual delete
dbbackup cloud delete "azure://prod-backups/postgres/old_backup.sql?account=myaccount&key=KEY"

# Automatic cleanup (keep the last 7 backups)
dbbackup cleanup "azure://prod-backups/postgres/?account=myaccount&key=KEY" --keep 7
```
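
Count-based retention is handled by `dbbackup cleanup`; to prune by age instead, the Azure CLI can do it directly. A sketch (assumes GNU `date`; adjust the container name and cutoff):

```bash
# Delete backups that have not been modified in the last 30 days
CUTOFF=$(date -u -d '30 days ago' +%Y-%m-%dT%H:%MZ)
az storage blob delete-batch \
  --source backups \
  --account-name myaccount \
  --if-unmodified-since "$CUTOFF"
```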

### Scheduled Backups

```bash
#!/bin/bash
# Azure backup script (run via cron)

DATE=$(date +%Y%m%d_%H%M%S)
AZURE_URI="azure://prod-backups/postgres/${DATE}.sql?account=myaccount&key=${AZURE_STORAGE_KEY}"

dbbackup backup postgres \
  --host localhost \
  --database production_db \
  --output /tmp/backup.sql \
  --cloud "${AZURE_URI}" \
  --compression 9

# Clean up old backups
dbbackup cleanup "azure://prod-backups/postgres/?account=myaccount&key=${AZURE_STORAGE_KEY}" --keep 30
```

**Crontab:**
```cron
# Daily at 2 AM
0 2 * * * /usr/local/bin/azure-backup.sh >> /var/log/azure-backup.log 2>&1
```

## Advanced Features

### Block Blob Upload

For large files (>256MB), dbbackup automatically uses Azure block blob staging:

- **Block Size**: 100MB per block
- **Parallel Upload**: Multiple blocks uploaded concurrently
- **Checksum**: SHA-256 integrity verification

```bash
# Large database backup (automatically uses block blob staging)
dbbackup backup postgres \
  --host localhost \
  --database huge_db \
  --output /backups/huge.sql \
  --cloud "azure://backups/huge.sql?account=myaccount&key=KEY"
```

### Progress Tracking

```bash
# Backup with progress display
dbbackup backup postgres \
  --host localhost \
  --database mydb \
  --output backup.sql \
  --cloud "azure://backups/backup.sql?account=myaccount&key=KEY" \
  --progress
```

### Concurrent Operations

```bash
# Backup multiple databases in parallel
dbbackup backup postgres \
  --host localhost \
  --all-databases \
  --output-dir /backups \
  --cloud "azure://backups/cluster/?account=myaccount&key=KEY" \
  --parallelism 4
```

### Custom Metadata

Backups include SHA-256 checksums as blob metadata:

```bash
# View metadata using the Azure CLI
az storage blob metadata show \
  --container-name backups \
  --name backup.sql \
  --account-name myaccount
```
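
To check integrity end to end, compare the stored checksum against a locally computed one. A sketch, assuming the checksum is exposed under a metadata key such as `sha256` (the exact key name depends on the dbbackup version; inspect the output of the command above first):

```bash
# Compare the blob's recorded checksum with a local file's hash
REMOTE_SHA=$(az storage blob metadata show \
  --container-name backups \
  --name backup.sql \
  --account-name myaccount \
  --query sha256 -o tsv)
LOCAL_SHA=$(sha256sum backup.sql | awk '{print $1}')
[ "$REMOTE_SHA" = "$LOCAL_SHA" ] && echo "checksum OK" || echo "checksum MISMATCH"
```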

## Testing with Azurite

### Setup Azurite Emulator

**Docker Compose:**
```yaml
services:
  azurite:
    image: mcr.microsoft.com/azure-storage/azurite:latest
    ports:
      - "10000:10000"
      - "10001:10001"
      - "10002:10002"
    command: azurite --blobHost 0.0.0.0 --loose
```

**Start:**
```bash
docker-compose -f docker-compose.azurite.yml up -d
```

### Default Azurite Credentials

```
Account Name: devstoreaccount1
Account Key: Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==
Endpoint: http://localhost:10000/devstoreaccount1
```
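
These are Azurite's well-known development credentials, so they are safe to use in test scripts. If you prefer the connection-string method over URI parameters, the equivalent export looks like this (adjust the port if you remapped it):

```bash
export AZURE_STORAGE_CONNECTION_STRING="DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://localhost:10000/devstoreaccount1;"
```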

### Test Backup

```bash
# Backup to Azurite
dbbackup backup postgres \
  --host localhost \
  --database testdb \
  --output test.sql \
  --cloud "azure://test-backups/test.sql?endpoint=http://localhost:10000&account=devstoreaccount1&key=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
```

### Run Integration Tests

```bash
# Run the comprehensive test suite
./scripts/test_azure_storage.sh
```

Tests include:
- PostgreSQL and MySQL backups
- Upload/download operations
- Large file handling (300MB+)
- Verification and cleanup
- Restore operations

## Best Practices

### 1. Security

- **Never commit credentials** to version control
- Use **Azure Key Vault** for production keys
- Rotate account keys regularly
- Use **Shared Access Signatures (SAS)** for limited access
- Enable **Azure AD authentication** when possible

### 2. Performance

- Use **compression** for faster uploads: `--compression 6`
- Enable **parallelism** for cluster backups: `--parallelism 4`
- Choose an appropriate **Azure region** (close to the source)
- Use **Premium Storage** for high throughput

### 3. Cost Optimization

- Use the **Cool tier** for backups older than 30 days
- Use the **Archive tier** for long-term retention (>90 days)
- Enable **lifecycle management** for automatic transitions
- Monitor storage costs in Azure Cost Management

### 4. Reliability

- Test **restore procedures** regularly
- Use **retention policies**: `--keep 30`
- Enable **soft delete** in Azure for 30-day recovery (see the command after this list)
- Monitor backup success with Azure Monitor
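
A sketch of enabling blob soft delete with the Azure CLI (the resource group and account name are placeholders):

```bash
# Turn on 30-day soft delete for blobs in the account
az storage account blob-service-properties update \
  --account-name myaccount \
  --resource-group mygroup \
  --enable-delete-retention true \
  --delete-retention-days 30
```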

### 5. Organization

- Use **consistent naming**: `{database}/{date}/{backup}.sql`
- Use **container prefixes**: `prod-backups`, `dev-backups`
- Tag backups with **metadata** (version, environment)
- Document restore procedures

## Troubleshooting

### Connection Issues

**Problem:** `failed to create Azure client`

**Solutions:**
- Verify the account name is correct
- Check the account key (copy it from the Azure Portal)
- Ensure the endpoint is accessible (firewall rules)
- For Azurite, confirm `http://localhost:10000` is running

### Authentication Errors

**Problem:** `authentication failed`

**Solutions:**
- Check for spaces or special characters in the key
- Verify the account key hasn't been rotated
- Try the connection string method
- Check Azure firewall rules (allow your IP)

### Upload Failures

**Problem:** `failed to upload blob`

**Solutions:**
- Check that the container exists (or use `&create=true`)
- Verify sufficient storage quota
- Check network connectivity
- Try smaller files first to test the connection

### Large File Issues

**Problem:** Upload timeout for large files

**Solutions:**
- dbbackup automatically uses block blobs for files >256MB
- Increase compression: `--compression 9`
- Check network bandwidth
- Use Azure Premium Storage for better throughput

### List/Download Issues

**Problem:** `blob not found`

**Solutions:**
- Verify the blob name (check the Azure Portal)
- Check that the container name is correct
- Ensure the blob hasn't been moved or deleted
- Check whether the blob is in the Archive tier (requires rehydration; see the command after this list)
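
A sketch of rehydrating an archived blob back to the Hot tier with the Azure CLI (rehydration can take several hours at Standard priority):

```bash
# Move an archived blob back to Hot so it can be downloaded
az storage blob set-tier \
  --container-name backups \
  --name backup.sql \
  --tier Hot \
  --rehydrate-priority Standard \
  --account-name myaccount
```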

### Performance Issues

**Problem:** Slow upload/download

**Solutions:**
- Use compression: `--compression 6`
- Choose a closer Azure region
- Check network bandwidth
- Use Azure Premium Storage
- Enable parallelism for multiple files

### Debugging

Enable debug mode:

```bash
dbbackup backup postgres \
  --cloud "azure://container/backup.sql?account=myaccount&key=KEY" \
  --debug
```

Check Azure activity logs:

```bash
# Azure CLI
az monitor activity-log list \
  --resource-group mygroup \
  --namespace Microsoft.Storage
```

## Additional Resources

- [Azure Blob Storage Documentation](https://docs.microsoft.com/azure/storage/blobs/)
- [Azurite Emulator](https://github.com/Azure/Azurite)
- [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/)
- [Azure CLI](https://docs.microsoft.com/cli/azure/storage)
- [dbbackup Cloud Storage Guide](CLOUD.md)

## Support

For issues specific to the Azure integration:

1. Check the [Troubleshooting](#troubleshooting) section
2. Run the integration tests: `./scripts/test_azure_storage.sh`
3. Enable debug mode: `--debug`
4. Check Azure Service Health
5. Open an issue on GitHub with debug logs

## See Also

- [Google Cloud Storage Guide](GCS.md)
- [AWS S3 Guide](CLOUD.md#aws-s3)
- [Main Cloud Storage Documentation](CLOUD.md)

CLOUD.md (modified)

@@ -8,7 +8,8 @@ dbbackup v2.0 includes comprehensive cloud storage integration, allowing you to
 - AWS S3
 - MinIO (self-hosted S3-compatible)
 - Backblaze B2
-- Google Cloud Storage (via S3 compatibility)
+- **Azure Blob Storage** (native support)
+- **Google Cloud Storage** (native support)
 - Any S3-compatible storage
 
 **Key Features:**

@@ -83,8 +84,8 @@ Cloud URIs follow this format:
 - `s3://` - AWS S3 or S3-compatible storage
 - `minio://` - MinIO (auto-enables path-style addressing)
 - `b2://` - Backblaze B2
-- `gs://` or `gcs://` - Google Cloud Storage
-- `azure://` - Azure Blob Storage (coming soon)
+- `gs://` or `gcs://` - Google Cloud Storage (native support)
+- `azure://` or `azblob://` - Azure Blob Storage (native support)
 
 **Examples:**
 ```bash

@@ -381,26 +382,68 @@ export AWS_REGION="us-west-002"
 dbbackup backup single mydb --cloud b2://my-bucket/backups/
 ```
 
+### Azure Blob Storage
+
+**Native Azure support with comprehensive features:**
+
+See **[AZURE.md](AZURE.md)** for complete documentation.
+
+**Quick Start:**
+```bash
+# Using account name and key
+dbbackup backup postgres \
+  --host localhost \
+  --database mydb \
+  --cloud "azure://container/backups/db.sql?account=myaccount&key=ACCOUNT_KEY"
+
+# With Azurite emulator for testing
+dbbackup backup postgres \
+  --host localhost \
+  --database mydb \
+  --cloud "azure://test-backups/db.sql?endpoint=http://localhost:10000"
+```
+
+**Features:**
+- Native Azure SDK integration
+- Block blob upload for large files (>256MB)
+- Azurite emulator support for local testing
+- SHA-256 integrity verification
+- Comprehensive test suite
+
 ### Google Cloud Storage
 
-**Prerequisites:**
-- GCP account
-- GCS bucket with S3 compatibility enabled
-- HMAC keys generated
+**Native GCS support with full features:**
 
-**Enable S3 Compatibility:**
-1. Go to Cloud Storage > Settings > Interoperability
-2. Create HMAC keys
+See **[GCS.md](GCS.md)** for complete documentation.
 
-**Configuration:**
+**Quick Start:**
 ```bash
-export AWS_ACCESS_KEY_ID="<your-hmac-access-id>"
-export AWS_SECRET_ACCESS_KEY="<your-hmac-secret>"
-export AWS_ENDPOINT_URL="https://storage.googleapis.com"
-
-dbbackup backup single mydb --cloud gs://my-bucket/backups/
+# Using Application Default Credentials
+dbbackup backup postgres \
+  --host localhost \
+  --database mydb \
+  --cloud "gs://mybucket/backups/db.sql"
+
+# With service account
+dbbackup backup postgres \
+  --host localhost \
+  --database mydb \
+  --cloud "gs://mybucket/backups/db.sql?credentials=/path/to/key.json"
+
+# With fake-gcs-server emulator for testing
+dbbackup backup postgres \
+  --host localhost \
+  --database mydb \
+  --cloud "gs://test-backups/db.sql?endpoint=http://localhost:4443/storage/v1"
 ```
 
+**Features:**
+- Native GCS SDK integration
+- Chunked upload for large files (16MB chunks)
+- fake-gcs-server emulator support
+- Application Default Credentials support
+- Workload Identity for GKE
+
 ---
 
 ## Features

@@ -727,6 +770,8 @@ A: No, backups are downloaded to temp directory, then restored and cleaned up.
 **Q: How much does cloud storage cost?**
 A: Varies by provider:
 - AWS S3: ~$0.023/GB/month + transfer
+- Azure Blob Storage: ~$0.018/GB/month (Hot tier)
+- Google Cloud Storage: ~$0.020/GB/month (Standard)
 - Backblaze B2: ~$0.005/GB/month + transfer
 - MinIO: Self-hosted, hardware costs only
 

@@ -744,9 +789,15 @@ A: Yes, but restore requires thawing. Use lifecycle policies for automatic archi
 ## Related Documentation
 
 - [README.md](README.md) - Main documentation
+- [AZURE.md](AZURE.md) - **Azure Blob Storage guide** (comprehensive)
+- [GCS.md](GCS.md) - **Google Cloud Storage guide** (comprehensive)
 - [ROADMAP.md](ROADMAP.md) - Feature roadmap
 - [docker-compose.minio.yml](docker-compose.minio.yml) - MinIO test setup
-- [scripts/test_cloud_storage.sh](scripts/test_cloud_storage.sh) - Integration tests
+- [docker-compose.azurite.yml](docker-compose.azurite.yml) - Azure Azurite test setup
+- [docker-compose.gcs.yml](docker-compose.gcs.yml) - GCS fake-gcs-server test setup
+- [scripts/test_cloud_storage.sh](scripts/test_cloud_storage.sh) - S3 integration tests
+- [scripts/test_azure_storage.sh](scripts/test_azure_storage.sh) - Azure integration tests
+- [scripts/test_gcs_storage.sh](scripts/test_gcs_storage.sh) - GCS integration tests
 
 ---
 

GCS.md (new file)
@@ -0,0 +1,664 @@

# Google Cloud Storage Integration

This guide covers using **Google Cloud Storage (GCS)** with `dbbackup` for secure, scalable cloud backup storage.

## Table of Contents

- [Quick Start](#quick-start)
- [URI Syntax](#uri-syntax)
- [Authentication](#authentication)
- [Configuration](#configuration)
- [Usage Examples](#usage-examples)
- [Advanced Features](#advanced-features)
- [Testing with fake-gcs-server](#testing-with-fake-gcs-server)
- [Best Practices](#best-practices)
- [Troubleshooting](#troubleshooting)

## Quick Start

### 1. GCP Setup

1. Create a GCS bucket in the Google Cloud Console
2. Set up authentication (choose one):
   - **Service Account**: Create and download a JSON key file
   - **Application Default Credentials**: Use the gcloud CLI
   - **Workload Identity**: For GKE clusters

### 2. Basic Backup

```bash
# Backup PostgreSQL to GCS (using ADC)
dbbackup backup postgres \
  --host localhost \
  --database mydb \
  --output backup.sql \
  --cloud "gs://mybucket/backups/db.sql"
```

### 3. Restore from GCS

```bash
# Restore from a GCS backup
dbbackup restore postgres \
  --source "gs://mybucket/backups/db.sql" \
  --host localhost \
  --database mydb_restored
```

## URI Syntax

### Basic Format

```
gs://bucket/path/to/backup.sql
gcs://bucket/path/to/backup.sql
```

Both the `gs://` and `gcs://` prefixes are supported.

### URI Components

| Component | Required | Description | Example |
|-----------|----------|-------------|---------|
| `bucket` | Yes | GCS bucket name | `mybucket` |
| `path` | Yes | Object path within bucket | `backups/db.sql` |
| `credentials` | No | Path to service account JSON | `/path/to/key.json` |
| `project` | No | GCP project ID | `my-project-id` |
| `endpoint` | No | Custom endpoint (emulator) | `http://localhost:4443` |

### URI Examples

**Production GCS (Application Default Credentials):**
```
gs://prod-backups/postgres/db.sql
```

**With Service Account:**
```
gs://prod-backups/postgres/db.sql?credentials=/path/to/service-account.json
```

**With Project ID:**
```
gs://prod-backups/postgres/db.sql?project=my-project-id
```

**fake-gcs-server Emulator:**
```
gs://test-backups/postgres/db.sql?endpoint=http://localhost:4443/storage/v1
```

**With Path Prefix:**
```
gs://backups/production/postgres/2024/db.sql
```

## Authentication

### Method 1: Application Default Credentials (Recommended)

Use the gcloud CLI to set up ADC:

```bash
# Log in with your Google account
gcloud auth application-default login

# Or use a service account for server environments
gcloud auth activate-service-account --key-file=/path/to/key.json

# Use a simplified URI (credentials come from the environment)
dbbackup backup postgres --cloud "gs://mybucket/backups/backup.sql"
```

### Method 2: Service Account JSON

Download a service account key from the GCP Console:

1. Go to **IAM & Admin** → **Service Accounts**
2. Create or select a service account
3. Click **Keys** → **Add Key** → **Create new key** → **JSON**
4. Download the JSON file

**Use it in the URI:**
```bash
dbbackup backup postgres \
  --cloud "gs://mybucket/backup.sql?credentials=/path/to/service-account.json"
```

**Or via the environment:**
```bash
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
dbbackup backup postgres --cloud "gs://mybucket/backup.sql"
```

### Method 3: Workload Identity (GKE)

For Kubernetes workloads:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dbbackup-sa
  annotations:
    iam.gke.io/gcp-service-account: dbbackup@project.iam.gserviceaccount.com
```

Then use ADC in your pod:

```bash
dbbackup backup postgres --cloud "gs://mybucket/backup.sql"
```
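
The annotation only works once the Kubernetes service account is allowed to impersonate the Google service account. A sketch of that binding (PROJECT_ID and NAMESPACE are placeholders, and Workload Identity must be enabled on the cluster):

```bash
# Allow the KSA to impersonate the GSA via Workload Identity
gcloud iam service-accounts add-iam-policy-binding \
  dbbackup@PROJECT_ID.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/dbbackup-sa]"
```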

### Required IAM Permissions

The service account needs these roles:

- **Storage Object Creator**: Upload backups
- **Storage Object Viewer**: List and download backups
- **Storage Object Admin**: Delete backups (for cleanup)

Or grant the single predefined **Storage Object Admin** role, which covers all three:

```bash
# Grant permissions
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:dbbackup@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/storage.objectAdmin"
```

## Configuration

### Bucket Setup

Create a bucket before first use:

```bash
# gsutil (ships with the Google Cloud SDK)
gsutil mb -p PROJECT_ID -c STANDARD -l us-central1 gs://mybucket/

# Or let dbbackup create it (requires permissions)
dbbackup cloud upload file.sql "gs://mybucket/file.sql?create=true&project=PROJECT_ID"
```

### Storage Classes

GCS offers multiple storage classes:

- **Standard**: Frequent access (default)
- **Nearline**: Accessed less than once a month (lower cost)
- **Coldline**: Accessed less than once a quarter (very low cost)
- **Archive**: Long-term retention (lowest cost)

Set the class when creating the bucket:

```bash
gsutil mb -c NEARLINE gs://mybucket/
```

### Lifecycle Management

Configure automatic transitions and deletion:

```json
{
  "lifecycle": {
    "rule": [
      {
        "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
        "condition": {"age": 30, "matchesPrefix": ["backups/"]}
      },
      {
        "action": {"type": "SetStorageClass", "storageClass": "ARCHIVE"},
        "condition": {"age": 90, "matchesPrefix": ["backups/"]}
      },
      {
        "action": {"type": "Delete"},
        "condition": {"age": 365, "matchesPrefix": ["backups/"]}
      }
    ]
  }
}
```

Apply the lifecycle configuration:

```bash
gsutil lifecycle set lifecycle.json gs://mybucket/
```

### Regional Configuration

Choose the bucket location for better performance:

```bash
# US regions
gsutil mb -l us-central1 gs://mybucket/
gsutil mb -l us-east1 gs://mybucket/

# EU regions
gsutil mb -l europe-west1 gs://mybucket/

# Multi-region
gsutil mb -l us gs://mybucket/
gsutil mb -l eu gs://mybucket/
```

## Usage Examples

### Backup with Auto-Upload

```bash
# PostgreSQL backup with automatic GCS upload
dbbackup backup postgres \
  --host localhost \
  --database production_db \
  --output /backups/db.sql \
  --cloud "gs://prod-backups/postgres/$(date +%Y%m%d_%H%M%S).sql" \
  --compression 6
```

### Backup All Databases

```bash
# Backup the entire PostgreSQL cluster to GCS
dbbackup backup postgres \
  --host localhost \
  --all-databases \
  --output-dir /backups \
  --cloud "gs://prod-backups/postgres/cluster/"
```

### Verify Backup

```bash
# Verify backup integrity
dbbackup verify "gs://prod-backups/postgres/backup.sql"
```

### List Backups

```bash
# List all backups in the bucket
dbbackup cloud list "gs://prod-backups/postgres/"

# List with a path prefix
dbbackup cloud list "gs://prod-backups/postgres/2024/"

# Or use gsutil
gsutil ls gs://prod-backups/postgres/
```

### Download Backup

```bash
# Download from GCS to local storage
dbbackup cloud download \
  "gs://prod-backups/postgres/backup.sql" \
  /local/path/backup.sql
```

### Delete Old Backups

```bash
# Manual delete
dbbackup cloud delete "gs://prod-backups/postgres/old_backup.sql"

# Automatic cleanup (keep the last 7 backups)
dbbackup cleanup "gs://prod-backups/postgres/" --keep 7
```

### Scheduled Backups

```bash
#!/bin/bash
# GCS backup script (run via cron)

DATE=$(date +%Y%m%d_%H%M%S)
GCS_URI="gs://prod-backups/postgres/${DATE}.sql"

dbbackup backup postgres \
  --host localhost \
  --database production_db \
  --output /tmp/backup.sql \
  --cloud "${GCS_URI}" \
  --compression 9

# Clean up old backups
dbbackup cleanup "gs://prod-backups/postgres/" --keep 30
```

**Crontab:**
```cron
# Daily at 2 AM
0 2 * * * /usr/local/bin/gcs-backup.sh >> /var/log/gcs-backup.log 2>&1
```

**Systemd Timer:**
```ini
# /etc/systemd/system/gcs-backup.timer
[Unit]
Description=Daily GCS Database Backup

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```
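
The timer needs a matching service unit to actually run the script. A minimal sketch (assuming the script path from the cron example above):

```ini
# /etc/systemd/system/gcs-backup.service
[Unit]
Description=GCS Database Backup

[Service]
Type=oneshot
ExecStart=/usr/local/bin/gcs-backup.sh
```

Enable it with `systemctl enable --now gcs-backup.timer`.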

## Advanced Features

### Chunked Upload

For large files, dbbackup automatically uses GCS chunked upload:

- **Chunk Size**: 16MB per chunk
- **Streaming**: Direct streaming from the source
- **Checksum**: SHA-256 integrity verification

```bash
# Large database backup (automatically uses chunked upload)
dbbackup backup postgres \
  --host localhost \
  --database huge_db \
  --output /backups/huge.sql \
  --cloud "gs://backups/huge.sql"
```

### Progress Tracking

```bash
# Backup with progress display
dbbackup backup postgres \
  --host localhost \
  --database mydb \
  --output backup.sql \
  --cloud "gs://backups/backup.sql" \
  --progress
```

### Concurrent Operations

```bash
# Backup multiple databases in parallel
dbbackup backup postgres \
  --host localhost \
  --all-databases \
  --output-dir /backups \
  --cloud "gs://backups/cluster/" \
  --parallelism 4
```

### Custom Metadata

Backups include SHA-256 checksums as object metadata:

```bash
# View metadata using gsutil
gsutil stat gs://backups/backup.sql
```
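
As with the Azure guide, the stored checksum can be compared against a locally computed hash. A sketch, assuming the checksum appears in the `gsutil stat` metadata output under a key such as `sha256` (the exact key depends on the dbbackup version; check the stat output first):

```bash
# Compare the object's recorded checksum with a local file's hash
REMOTE_SHA=$(gsutil stat gs://backups/backup.sql | awk '/sha256/ {print $2}')
LOCAL_SHA=$(sha256sum backup.sql | awk '{print $1}')
[ "$REMOTE_SHA" = "$LOCAL_SHA" ] && echo "checksum OK" || echo "checksum MISMATCH"
```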

### Object Versioning

Enable versioning to protect against accidental deletion:

```bash
# Enable versioning
gsutil versioning set on gs://mybucket/

# List all versions
gsutil ls -a gs://mybucket/backup.sql

# Restore a previous version
gsutil cp gs://mybucket/backup.sql#VERSION /local/backup.sql
```

### Customer-Managed Encryption Keys (CMEK)

Use your own encryption keys:

```bash
# Create an encryption key in Cloud KMS
gcloud kms keyrings create backup-keyring --location=us-central1
gcloud kms keys create backup-key --location=us-central1 --keyring=backup-keyring --purpose=encryption

# Set the default CMEK for the bucket
gsutil kms encryption -k projects/PROJECT/locations/us-central1/keyRings/backup-keyring/cryptoKeys/backup-key gs://mybucket/
```

## Testing with fake-gcs-server

### Setup fake-gcs-server Emulator

**Docker Compose:**
```yaml
services:
  gcs-emulator:
    image: fsouza/fake-gcs-server:latest
    ports:
      - "4443:4443"
    command: -scheme http -public-host localhost:4443
```

**Start:**
```bash
docker-compose -f docker-compose.gcs.yml up -d
```

### Create Test Bucket

```bash
# Using curl
curl -X POST "http://localhost:4443/storage/v1/b?project=test-project" \
  -H "Content-Type: application/json" \
  -d '{"name": "test-backups"}'
```

### Test Backup

```bash
# Backup to fake-gcs-server
dbbackup backup postgres \
  --host localhost \
  --database testdb \
  --output test.sql \
  --cloud "gs://test-backups/test.sql?endpoint=http://localhost:4443/storage/v1"
```

### Run Integration Tests

```bash
# Run the comprehensive test suite
./scripts/test_gcs_storage.sh
```

Tests include:
- PostgreSQL and MySQL backups
- Upload/download operations
- Large file handling (200MB+)
- Verification and cleanup
- Restore operations

## Best Practices

### 1. Security

- **Never commit credentials** to version control
- Use **Application Default Credentials** when possible
- Rotate service account keys regularly
- Use **Workload Identity** for GKE
- Enable **VPC Service Controls** for enterprise security
- Use **Customer-Managed Encryption Keys** (CMEK) for sensitive data

### 2. Performance

- Use **compression** for faster uploads: `--compression 6`
- Enable **parallelism** for cluster backups: `--parallelism 4`
- Choose an appropriate **GCS region** (close to the source)
- Use **multi-region** buckets for high availability

### 3. Cost Optimization

- Use **Nearline** for backups older than 30 days
- Use **Archive** for long-term retention (>90 days)
- Enable **lifecycle management** for automatic transitions
- Monitor storage costs in the GCP Billing Console
- Use **Coldline** for quarterly access patterns

### 4. Reliability

- Test **restore procedures** regularly
- Use **retention policies**: `--keep 30`
- Enable **object versioning** to recover overwritten or deleted backups
- Use **multi-region** buckets for disaster recovery
- Monitor backup success with Cloud Monitoring

### 5. Organization

- Use **consistent naming**: `{database}/{date}/{backup}.sql`
- Use **bucket prefixes**: `prod-backups`, `dev-backups`
- Tag backups with **labels** (environment, version)
- Document restore procedures
- Use **separate buckets** per environment

## Troubleshooting

### Connection Issues

**Problem:** `failed to create GCS client`

**Solutions:**
- Check the `GOOGLE_APPLICATION_CREDENTIALS` environment variable
- Verify the service account JSON file exists and is valid
- Ensure the gcloud CLI is authenticated: `gcloud auth list`
- For the emulator, confirm `http://localhost:4443` is running

### Authentication Errors

**Problem:** `authentication failed` or `permission denied`

**Solutions:**
- Verify the service account has the required IAM roles
- Check whether Application Default Credentials are set up
- Run `gcloud auth application-default login`
- Verify the service account JSON is not corrupted
- Check that the GCP project ID is correct

### Upload Failures

**Problem:** `failed to upload object`

**Solutions:**
- Check that the bucket exists (or use `&create=true`)
- Verify the service account has the `storage.objects.create` permission
- Check network connectivity to GCS
- Try smaller files first to test the connection
- Check GCP quota limits

### Large File Issues

**Problem:** Upload timeout for large files

**Solutions:**
- dbbackup automatically uses chunked upload
- Increase compression: `--compression 9`
- Check network bandwidth
- Use **Transfer Appliance** for TB+ data

### List/Download Issues

**Problem:** `object not found`

**Solutions:**
- Verify the object name (check the GCS Console)
- Check that the bucket name is correct
- Ensure the object hasn't been moved or deleted
- Check whether the object is in the Archive class (readable without rehydration, but retrieval fees apply)

### Performance Issues

**Problem:** Slow upload/download

**Solutions:**
- Use compression: `--compression 6`
- Choose a closer GCS region
- Check network bandwidth
- Use a **multi-region** bucket for better availability
- Enable parallelism for multiple files

### Debugging

Enable debug mode:

```bash
dbbackup backup postgres \
  --cloud "gs://bucket/backup.sql" \
  --debug
```

Check GCP logs:

```bash
# Cloud Logging
gcloud logging read "resource.type=gcs_bucket AND resource.labels.bucket_name=mybucket" \
  --limit 50 \
  --format json
```

View bucket details:

```bash
gsutil ls -L -b gs://mybucket/
```

## Monitoring and Alerting

### Cloud Monitoring

Create metrics and alerts:

```bash
# Monitor backup success rate
gcloud monitoring policies create \
  --notification-channels=CHANNEL_ID \
  --display-name="Backup Failure Alert" \
  --condition-display-name="No backups in 24h" \
  --condition-threshold-value=0 \
  --condition-threshold-duration=86400s
```

### Logging

Export logs to BigQuery for analysis:

```bash
gcloud logging sinks create backup-logs \
  bigquery.googleapis.com/projects/PROJECT_ID/datasets/backup_logs \
  --log-filter='resource.type="gcs_bucket" AND resource.labels.bucket_name="prod-backups"'
```

## Additional Resources

- [Google Cloud Storage Documentation](https://cloud.google.com/storage/docs)
- [fake-gcs-server](https://github.com/fsouza/fake-gcs-server)
- [gsutil Tool](https://cloud.google.com/storage/docs/gsutil)
- [GCS Client Libraries](https://cloud.google.com/storage/docs/reference/libraries)
- [dbbackup Cloud Storage Guide](CLOUD.md)

## Support

For issues specific to the GCS integration:

1. Check the [Troubleshooting](#troubleshooting) section
2. Run the integration tests: `./scripts/test_gcs_storage.sh`
3. Enable debug mode: `--debug`
4. Check GCP Service Status
5. Open an issue on GitHub with debug logs

## See Also

- [Azure Blob Storage Guide](AZURE.md)
- [AWS S3 Guide](CLOUD.md#aws-s3)
- [Main Cloud Storage Documentation](CLOUD.md)

docker-compose.azurite.yml (new file)
@@ -0,0 +1,66 @@

version: '3.8'

services:
  # Azurite - Azure Storage Emulator
  azurite:
    image: mcr.microsoft.com/azure-storage/azurite:latest
    container_name: dbbackup-azurite
    ports:
      - "10000:10000"  # Blob service
      - "10001:10001"  # Queue service
      - "10002:10002"  # Table service
    volumes:
      - azurite_data:/data
    command: azurite --blobHost 0.0.0.0 --queueHost 0.0.0.0 --tableHost 0.0.0.0 --loose --skipApiVersionCheck
    healthcheck:
      test: ["CMD", "nc", "-z", "localhost", "10000"]
      interval: 5s
      timeout: 3s
      retries: 30
    networks:
      - dbbackup-net

  # PostgreSQL 16 for testing
  postgres:
    image: postgres:16-alpine
    container_name: dbbackup-postgres-azure
    environment:
      POSTGRES_USER: testuser
      POSTGRES_PASSWORD: testpass
      POSTGRES_DB: testdb
    ports:
      - "5434:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U testuser -d testdb"]
      interval: 5s
      timeout: 3s
      retries: 10
    networks:
      - dbbackup-net

  # MySQL 8.0 for testing
  mysql:
    image: mysql:8.0
    container_name: dbbackup-mysql-azure
    environment:
      MYSQL_ROOT_PASSWORD: rootpass
      MYSQL_DATABASE: testdb
      MYSQL_USER: testuser
      MYSQL_PASSWORD: testpass
    ports:
      - "3308:3306"
    command: --default-authentication-plugin=mysql_native_password
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-u", "root", "-prootpass"]
      interval: 5s
      timeout: 3s
      retries: 10
    networks:
      - dbbackup-net

volumes:
  azurite_data:

networks:
  dbbackup-net:
    driver: bridge

docker-compose.gcs.yml (new file)
@@ -0,0 +1,59 @@

version: '3.8'

services:
  # fake-gcs-server - Google Cloud Storage Emulator
  gcs-emulator:
    image: fsouza/fake-gcs-server:latest
    container_name: dbbackup-gcs
    ports:
      - "4443:4443"
    command: -scheme http -public-host localhost:4443 -external-url http://localhost:4443
    healthcheck:
      test: ["CMD", "wget", "--spider", "-q", "http://localhost:4443/storage/v1/b"]
      interval: 5s
      timeout: 3s
      retries: 30
    networks:
      - dbbackup-net

  # PostgreSQL 16 for testing
  postgres:
    image: postgres:16-alpine
    container_name: dbbackup-postgres-gcs
    environment:
      POSTGRES_USER: testuser
      POSTGRES_PASSWORD: testpass
      POSTGRES_DB: testdb
    ports:
      - "5435:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U testuser -d testdb"]
      interval: 5s
      timeout: 3s
      retries: 10
    networks:
      - dbbackup-net

  # MySQL 8.0 for testing
  mysql:
    image: mysql:8.0
    container_name: dbbackup-mysql-gcs
    environment:
      MYSQL_ROOT_PASSWORD: rootpass
      MYSQL_DATABASE: testdb
      MYSQL_USER: testuser
      MYSQL_PASSWORD: testpass
    ports:
      - "3309:3306"
    command: --default-authentication-plugin=mysql_native_password
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-u", "root", "-prootpass"]
      interval: 5s
      timeout: 3s
      retries: 10
    networks:
      - dbbackup-net

networks:
  dbbackup-net:
    driver: bridge

go.mod (modified)

@@ -17,7 +17,21 @@ require (
 )
 
 require (
+	cel.dev/expr v0.24.0 // indirect
+	cloud.google.com/go v0.121.6 // indirect
+	cloud.google.com/go/auth v0.17.0 // indirect
+	cloud.google.com/go/auth/oauth2adapt v0.2.8 // indirect
+	cloud.google.com/go/compute/metadata v0.9.0 // indirect
+	cloud.google.com/go/iam v1.5.2 // indirect
+	cloud.google.com/go/monitoring v1.24.2 // indirect
+	cloud.google.com/go/storage v1.57.2 // indirect
 	filippo.io/edwards25519 v1.1.0 // indirect
+	github.com/Azure/azure-sdk-for-go/sdk/azcore v1.20.0 // indirect
+	github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2 // indirect
+	github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3 // indirect
+	github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.29.0 // indirect
+	github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.53.0 // indirect
+	github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.53.0 // indirect
 	github.com/aws/aws-sdk-go-v2 v1.40.0 // indirect
 	github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.3 // indirect
 	github.com/aws/aws-sdk-go-v2/config v1.32.2 // indirect

@@ -39,12 +53,24 @@ require (
 	github.com/aws/aws-sdk-go-v2/service/sts v1.41.2 // indirect
 	github.com/aws/smithy-go v1.23.2 // indirect
 	github.com/aymanbagabas/go-osc52/v2 v2.0.1 // indirect
+	github.com/cespare/xxhash/v2 v2.3.0 // indirect
 	github.com/charmbracelet/colorprofile v0.2.3-0.20250311203215-f60798e515dc // indirect
 	github.com/charmbracelet/x/ansi v0.10.1 // indirect
 	github.com/charmbracelet/x/cellbuf v0.0.13-0.20250311204145-2c3ea96c31dd // indirect
 	github.com/charmbracelet/x/term v0.2.1 // indirect
+	github.com/cncf/xds/go v0.0.0-20250501225837-2ac532fd4443 // indirect
 	github.com/creack/pty v1.1.17 // indirect
+	github.com/envoyproxy/go-control-plane/envoy v1.32.4 // indirect
+	github.com/envoyproxy/protoc-gen-validate v1.2.1 // indirect
 	github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f // indirect
+	github.com/felixge/httpsnoop v1.0.4 // indirect
+	github.com/go-jose/go-jose/v4 v4.1.2 // indirect
+	github.com/go-logr/logr v1.4.3 // indirect
+	github.com/go-logr/stdr v1.2.2 // indirect
+	github.com/google/s2a-go v0.1.9 // indirect
+	github.com/google/uuid v1.6.0 // indirect
+	github.com/googleapis/enterprise-certificate-proxy v0.3.7 // indirect
+	github.com/googleapis/gax-go/v2 v2.15.0 // indirect
 	github.com/inconshreveable/mousetrap v1.1.0 // indirect
 	github.com/jackc/pgpassfile v1.0.0 // indirect
 	github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect

@@ -56,10 +82,31 @@ require (
 	github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6 // indirect
 	github.com/muesli/cancelreader v0.2.2 // indirect
 	github.com/muesli/termenv v0.16.0 // indirect
+	github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 // indirect
 	github.com/rivo/uniseg v0.4.7 // indirect
+	github.com/spiffe/go-spiffe/v2 v2.5.0 // indirect
 	github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e // indirect
-	golang.org/x/crypto v0.37.0 // indirect
-	golang.org/x/sync v0.13.0 // indirect
-	golang.org/x/sys v0.36.0 // indirect
-	golang.org/x/text v0.24.0 // indirect
+	github.com/zeebo/errs v1.4.0 // indirect
+	go.opentelemetry.io/auto/sdk v1.1.0 // indirect
+	go.opentelemetry.io/contrib/detectors/gcp v1.36.0 // indirect
+	go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0 // indirect
+	go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 // indirect
+	go.opentelemetry.io/otel v1.37.0 // indirect
+	go.opentelemetry.io/otel/metric v1.37.0 // indirect
+	go.opentelemetry.io/otel/sdk v1.37.0 // indirect
+	go.opentelemetry.io/otel/sdk/metric v1.37.0 // indirect
+	go.opentelemetry.io/otel/trace v1.37.0 // indirect
+	golang.org/x/crypto v0.43.0 // indirect
+	golang.org/x/net v0.46.0 // indirect
+	golang.org/x/oauth2 v0.33.0 // indirect
+	golang.org/x/sync v0.18.0 // indirect
+	golang.org/x/sys v0.37.0 // indirect
+	golang.org/x/text v0.30.0 // indirect
+	golang.org/x/time v0.14.0 // indirect
+	google.golang.org/api v0.256.0 // indirect
+	google.golang.org/genproto v0.0.0-20250603155806-513f23925822 // indirect
+	google.golang.org/genproto/googleapis/api v0.0.0-20250818200422-3122310a409c // indirect
+	google.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101 // indirect
+	google.golang.org/grpc v1.76.0 // indirect
+	google.golang.org/protobuf v1.36.10 // indirect
 )
112
go.sum
@@ -1,5 +1,33 @@
+cel.dev/expr v0.24.0 h1:56OvJKSH3hDGL0ml5uSxZmz3/3Pq4tJ+fb1unVLAFcY=
+cel.dev/expr v0.24.0/go.mod h1:hLPLo1W4QUmuYdA72RBX06QTs6MXw941piREPl3Yfiw=
+cloud.google.com/go v0.121.6 h1:waZiuajrI28iAf40cWgycWNgaXPO06dupuS+sgibK6c=
+cloud.google.com/go v0.121.6/go.mod h1:coChdst4Ea5vUpiALcYKXEpR1S9ZgXbhEzzMcMR66vI=
+cloud.google.com/go/auth v0.17.0 h1:74yCm7hCj2rUyyAocqnFzsAYXgJhrG26XCFimrc/Kz4=
+cloud.google.com/go/auth v0.17.0/go.mod h1:6wv/t5/6rOPAX4fJiRjKkJCvswLwdet7G8+UGXt7nCQ=
+cloud.google.com/go/auth/oauth2adapt v0.2.8 h1:keo8NaayQZ6wimpNSmW5OPc283g65QNIiLpZnkHRbnc=
+cloud.google.com/go/auth/oauth2adapt v0.2.8/go.mod h1:XQ9y31RkqZCcwJWNSx2Xvric3RrU88hAYYbjDWYDL+c=
+cloud.google.com/go/compute/metadata v0.9.0 h1:pDUj4QMoPejqq20dK0Pg2N4yG9zIkYGdBtwLoEkH9Zs=
+cloud.google.com/go/compute/metadata v0.9.0/go.mod h1:E0bWwX5wTnLPedCKqk3pJmVgCBSM6qQI1yTBdEb3C10=
+cloud.google.com/go/iam v1.5.2 h1:qgFRAGEmd8z6dJ/qyEchAuL9jpswyODjA2lS+w234g8=
+cloud.google.com/go/iam v1.5.2/go.mod h1:SE1vg0N81zQqLzQEwxL2WI6yhetBdbNQuTvIKCSkUHE=
+cloud.google.com/go/monitoring v1.24.2 h1:5OTsoJ1dXYIiMiuL+sYscLc9BumrL3CarVLL7dd7lHM=
+cloud.google.com/go/monitoring v1.24.2/go.mod h1:x7yzPWcgDRnPEv3sI+jJGBkwl5qINf+6qY4eq0I9B4U=
+cloud.google.com/go/storage v1.57.2 h1:sVlym3cHGYhrp6XZKkKb+92I1V42ks2qKKpB0CF5Mb4=
+cloud.google.com/go/storage v1.57.2/go.mod h1:n5ijg4yiRXXpCu0sJTD6k+eMf7GRrJmPyr9YxLXGHOk=
 filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
 filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
+github.com/Azure/azure-sdk-for-go/sdk/azcore v1.20.0 h1:JXg2dwJUmPB9JmtVmdEB16APJ7jurfbY5jnfXpJoRMc=
+github.com/Azure/azure-sdk-for-go/sdk/azcore v1.20.0/go.mod h1:YD5h/ldMsG0XiIw7PdyNhLxaM317eFh5yNLccNfGdyw=
+github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2 h1:9iefClla7iYpfYWdzPCRDozdmndjTm8DXdpCzPajMgA=
+github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2/go.mod h1:XtLgD3ZD34DAaVIIAyG3objl5DynM3CQ/vMcbBNJZGI=
+github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3 h1:ZJJNFaQ86GVKQ9ehwqyAFE6pIfyicpuJ8IkVaPBc6/4=
+github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3/go.mod h1:URuDvhmATVKqHBH9/0nOiNKk0+YcwfQ3WkK5PqHKxc8=
+github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.29.0 h1:UQUsRi8WTzhZntp5313l+CHIAT95ojUI2lpP/ExlZa4=
+github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.29.0/go.mod h1:Cz6ft6Dkn3Et6l2v2a9/RpN7epQ1GtDlO6lj8bEcOvw=
+github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.53.0 h1:owcC2UnmsZycprQ5RfRgjydWhuoxg71LUfyiQdijZuM=
+github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.53.0/go.mod h1:ZPpqegjbE99EPKsu3iUWV22A04wzGPcAY/ziSIQEEgs=
+github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.53.0 h1:Ron4zCA/yk6U7WOBXhTJcDpsUBG9npumK6xw2auFltQ=
+github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.53.0/go.mod h1:cSgYe11MCNYunTnRXrKiR/tHc0eoKjICUuWpNZoVCOo=
 github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2 h1:+vx7roKuyA63nhn5WAunQHLTznkw5W8b1Xc0dNjp83s=
 github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2/go.mod h1:HBCaDeC1lPdgDeDbhX8XFpy1jqjK0IBG8W5K+xYqA0w=
 github.com/aws/aws-sdk-go-v2 v1.40.0 h1:/WMUA0kjhZExjOQN2z3oLALDREea1A7TobfuiBrKlwc=
@@ -58,6 +86,8 @@ github.com/aws/smithy-go v1.23.2 h1:Crv0eatJUQhaManss33hS5r40CG3ZFH+21XSkqMrIUM=
 github.com/aws/smithy-go v1.23.2/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
 github.com/aymanbagabas/go-osc52/v2 v2.0.1 h1:HwpRHbFMcZLEVr42D4p7XBqjyuxQH5SMiErDT4WkJ2k=
 github.com/aymanbagabas/go-osc52/v2 v2.0.1/go.mod h1:uYgXzlJ7ZpABp8OJ+exZzJJhRNQ2ASbcXHWsFqH8hp8=
+github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
+github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
 github.com/charmbracelet/bubbles v0.21.0 h1:9TdC97SdRVg/1aaXNVWfFH3nnLAwOXr8Fn6u6mfQdFs=
 github.com/charmbracelet/bubbles v0.21.0/go.mod h1:HF+v6QUR4HkEpz62dx7ym2xc71/KBHg+zKwJtMw+qtg=
 github.com/charmbracelet/bubbletea v1.3.10 h1:otUDHWMMzQSB0Pkc87rm691KZ3SWa4KUlvF9nRvCICw=
@@ -72,16 +102,39 @@ github.com/charmbracelet/x/cellbuf v0.0.13-0.20250311204145-2c3ea96c31dd h1:vy0G
 github.com/charmbracelet/x/cellbuf v0.0.13-0.20250311204145-2c3ea96c31dd/go.mod h1:xe0nKWGd3eJgtqZRaN9RjMtK7xUYchjzPr7q6kcvCCs=
 github.com/charmbracelet/x/term v0.2.1 h1:AQeHeLZ1OqSXhrAWpYUtZyX1T3zVxfpZuEQMIQaGIAQ=
 github.com/charmbracelet/x/term v0.2.1/go.mod h1:oQ4enTYFV7QN4m0i9mzHrViD7TQKvNEEkHUMCmsxdUg=
+github.com/cncf/xds/go v0.0.0-20250501225837-2ac532fd4443 h1:aQ3y1lwWyqYPiWZThqv1aFbZMiM9vblcSArJRf2Irls=
+github.com/cncf/xds/go v0.0.0-20250501225837-2ac532fd4443/go.mod h1:W+zGtBO5Y1IgJhy4+A9GOqVhqLpfZi+vwmdNXUehLA8=
 github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
 github.com/creack/pty v1.1.17 h1:QeVUsEDNrLBW4tMgZHvxy18sKtr6VI492kBhUfhDJNI=
 github.com/creack/pty v1.1.17/go.mod h1:MOBLtS5ELjhRRrroQr9kyvTxUAFNvYEK993ew/Vr4O4=
 github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/envoyproxy/go-control-plane/envoy v1.32.4 h1:jb83lalDRZSpPWW2Z7Mck/8kXZ5CQAFYVjQcdVIr83A=
+github.com/envoyproxy/go-control-plane/envoy v1.32.4/go.mod h1:Gzjc5k8JcJswLjAx1Zm+wSYE20UrLtt7JZMWiWQXQEw=
+github.com/envoyproxy/protoc-gen-validate v1.2.1 h1:DEo3O99U8j4hBFwbJfrz9VtgcDfUKS7KJ7spH3d86P8=
+github.com/envoyproxy/protoc-gen-validate v1.2.1/go.mod h1:d/C80l/jxXLdfEIhX1W2TmLfsJ31lvEjwamM4DxlWXU=
 github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f h1:Y/CXytFA4m6baUTXGLOoWe4PQhGxaX0KpnayAqC48p4=
 github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f/go.mod h1:vw97MGsxSvLiUE2X8qFplwetxpGLQrlU1Q9AUEIzCaM=
+github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
+github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
+github.com/go-jose/go-jose/v4 v4.1.2 h1:TK/7NqRQZfgAh+Td8AlsrvtPoUyiHh0LqVvokh+1vHI=
+github.com/go-jose/go-jose/v4 v4.1.2/go.mod h1:22cg9HWM1pOlnRiY+9cQYJ9XHmya1bYW8OeDM6Ku6Oo=
+github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
+github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
+github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
+github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
+github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
 github.com/go-sql-driver/mysql v1.9.3 h1:U/N249h2WzJ3Ukj8SowVFjdtZKfu9vlLZxjPXV1aweo=
 github.com/go-sql-driver/mysql v1.9.3/go.mod h1:qn46aNg1333BRMNU69Lq93t8du/dwxI64Gl8i5p1WMU=
+github.com/google/s2a-go v0.1.9 h1:LGD7gtMgezd8a/Xak7mEWL0PjoTQFvpRudN895yqKW0=
+github.com/google/s2a-go v0.1.9/go.mod h1:YA0Ei2ZQL3acow2O62kdp9UlnvMmU7kA6Eutn0dXayM=
+github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
+github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/googleapis/enterprise-certificate-proxy v0.3.7 h1:zrn2Ee/nWmHulBx5sAVrGgAa0f2/R35S4DJwfFaUPFQ=
+github.com/googleapis/enterprise-certificate-proxy v0.3.7/go.mod h1:MkHOF77EYAE7qfSuSS9PU6g4Nt4e11cnsDUowfwewLA=
+github.com/googleapis/gax-go/v2 v2.15.0 h1:SyjDc1mGgZU5LncH8gimWo9lW1DtIfPibOG81vgd/bo=
+github.com/googleapis/gax-go/v2 v2.15.0/go.mod h1:zVVkkxAQHa1RQpg9z2AUCMnKhi0Qld9rcmyfL1OZhoc=
 github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
 github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
 github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
@@ -106,6 +159,8 @@ github.com/muesli/cancelreader v0.2.2 h1:3I4Kt4BQjOR54NavqnDogx/MIoWBFa0StPA8ELU
 github.com/muesli/cancelreader v0.2.2/go.mod h1:3XuTXfFS2VjM+HTLZY9Ak0l6eUKfijIfMUZ4EgX0QYo=
 github.com/muesli/termenv v0.16.0 h1:S5AlUN9dENB57rsbnkPyfdGuWIlkmzJjbFf0Tf5FWUc=
 github.com/muesli/termenv v0.16.0/go.mod h1:ZRfOIKPFDYQoDFF4Olj7/QJbW60Ol/kL1pU3VfY/Cnk=
+github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 h1:GFCKgmp0tecUJ0sJuv4pzYCqS9+RGSn52M3FUwPs+uo=
+github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10/go.mod h1:t/avpk3KcrXxUnYOhZhMXJlSEyie6gQbtLq5NM3loB8=
 github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
 github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
@@ -118,27 +173,84 @@ github.com/spf13/cobra v1.10.1 h1:lJeBwCfmrnXthfAupyUTzJ/J4Nc1RsHC/mSRU2dll/s=
 github.com/spf13/cobra v1.10.1/go.mod h1:7SmJGaTHFVBY0jW4NXGluQoLvhqFQM+6XSKD+P4XaB0=
 github.com/spf13/pflag v1.0.9 h1:9exaQaMOCwffKiiiYk6/BndUBv+iRViNW+4lEMi0PvY=
 github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
+github.com/spiffe/go-spiffe/v2 v2.5.0 h1:N2I01KCUkv1FAjZXJMwh95KK1ZIQLYbPfhaxw8WS0hE=
+github.com/spiffe/go-spiffe/v2 v2.5.0/go.mod h1:P+NxobPc6wXhVtINNtFjNWGBTreew1GBUCwT2wPmb7g=
 github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
 github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
 github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
 github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
 github.com/stretchr/testify v1.8.1 h1:w7B6lhMri9wdJUVmEZPGGhZzrYTPvgJArz7wNPgYKsk=
 github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
+github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
 github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e h1:JVG44RsyaB9T2KIHavMF/ppJZNG9ZpyihvCd0w101no=
 github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e/go.mod h1:RbqR21r5mrJuqunuUZ/Dhy/avygyECGrLceyNeo4LiM=
+github.com/zeebo/errs v1.4.0 h1:XNdoD/RRMKP7HD0UhJnIzUy74ISdGGxURlYG8HSWSfM=
+github.com/zeebo/errs v1.4.0/go.mod h1:sgbWHsvVuTPHcqJJGQ1WhI5KbWlHYz+2+2C/LSEtCw4=
+go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
+go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
+go.opentelemetry.io/contrib/detectors/gcp v1.36.0 h1:F7q2tNlCaHY9nMKHR6XH9/qkp8FktLnIcy6jJNyOCQw=
+go.opentelemetry.io/contrib/detectors/gcp v1.36.0/go.mod h1:IbBN8uAIIx734PTonTPxAxnjc2pQTxWNkwfstZ+6H2k=
+go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0 h1:q4XOmH/0opmeuJtPsbFNivyl7bCt7yRBbeEm2sC/XtQ=
+go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0/go.mod h1:snMWehoOh2wsEwnvvwtDyFCxVeDAODenXHtn5vzrKjo=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 h1:F7Jx+6hwnZ41NSFTO5q4LYDtJRXBf2PD0rNBkeB/lus=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0/go.mod h1:UHB22Z8QsdRDrnAtX4PntOl36ajSxcdUMt1sF7Y6E7Q=
+go.opentelemetry.io/otel v1.37.0 h1:9zhNfelUvx0KBfu/gb+ZgeAfAgtWrfHJZcAqFC228wQ=
+go.opentelemetry.io/otel v1.37.0/go.mod h1:ehE/umFRLnuLa/vSccNq9oS1ErUlkkK71gMcN34UG8I=
+go.opentelemetry.io/otel/metric v1.37.0 h1:mvwbQS5m0tbmqML4NqK+e3aDiO02vsf/WgbsdpcPoZE=
+go.opentelemetry.io/otel/metric v1.37.0/go.mod h1:04wGrZurHYKOc+RKeye86GwKiTb9FKm1WHtO+4EVr2E=
+go.opentelemetry.io/otel/sdk v1.37.0 h1:ItB0QUqnjesGRvNcmAcU0LyvkVyGJ2xftD29bWdDvKI=
+go.opentelemetry.io/otel/sdk v1.37.0/go.mod h1:VredYzxUvuo2q3WRcDnKDjbdvmO0sCzOvVAiY+yUkAg=
+go.opentelemetry.io/otel/sdk/metric v1.37.0 h1:90lI228XrB9jCMuSdA0673aubgRobVZFhbjxHHspCPc=
+go.opentelemetry.io/otel/sdk/metric v1.37.0/go.mod h1:cNen4ZWfiD37l5NhS+Keb5RXVWZWpRE+9WyVCpbo5ps=
+go.opentelemetry.io/otel/trace v1.37.0 h1:HLdcFNbRQBE2imdSEgm/kwqmQj1Or1l/7bW6mxVK7z4=
+go.opentelemetry.io/otel/trace v1.37.0/go.mod h1:TlgrlQ+PtQO5XFerSPUYG0JSgGyryXewPGyayAWSBS0=
 golang.org/x/crypto v0.37.0 h1:kJNSjF/Xp7kU0iB2Z+9viTPMW4EqqsrywMXLJOOsXSE=
 golang.org/x/crypto v0.37.0/go.mod h1:vg+k43peMZ0pUMhYmVAWysMK35e6ioLh3wB8ZCAfbVc=
+golang.org/x/crypto v0.41.0 h1:WKYxWedPGCTVVl5+WHSSrOBT0O8lx32+zxmHxijgXp4=
+golang.org/x/crypto v0.41.0/go.mod h1:pO5AFd7FA68rFak7rOAGVuygIISepHftHnr8dr6+sUc=
+golang.org/x/crypto v0.43.0 h1:dduJYIi3A3KOfdGOHX8AVZ/jGiyPa3IbBozJ5kNuE04=
+golang.org/x/crypto v0.43.0/go.mod h1:BFbav4mRNlXJL4wNeejLpWxB7wMbc79PdRGhWKncxR0=
 golang.org/x/exp v0.0.0-20220909182711-5c715a9e8561 h1:MDc5xs78ZrZr3HMQugiXOAkSZtfTpbJLDr/lwfgO53E=
 golang.org/x/exp v0.0.0-20220909182711-5c715a9e8561/go.mod h1:cyybsKvd6eL0RnXn6p/Grxp8F5bW7iYuBgsNCOHpMYE=
+golang.org/x/net v0.43.0 h1:lat02VYK2j4aLzMzecihNvTlJNQUq316m2Mr9rnM6YE=
+golang.org/x/net v0.43.0/go.mod h1:vhO1fvI4dGsIjh73sWfUVjj3N7CA9WkKJNQm2svM6Jg=
+golang.org/x/net v0.46.0 h1:giFlY12I07fugqwPuWJi68oOnpfqFnJIJzaIIm2JVV4=
+golang.org/x/net v0.46.0/go.mod h1:Q9BGdFy1y4nkUwiLvT5qtyhAnEHgnQ/zd8PfU6nc210=
+golang.org/x/oauth2 v0.33.0 h1:4Q+qn+E5z8gPRJfmRy7C2gGG3T4jIprK6aSYgTXGRpo=
+golang.org/x/oauth2 v0.33.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
 golang.org/x/sync v0.13.0 h1:AauUjRAJ9OSnvULf/ARrrVywoJDy0YS2AwQ98I37610=
 golang.org/x/sync v0.13.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
+golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw=
+golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
+golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
+golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
 golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.36.0 h1:KVRy2GtZBrk1cBYA7MKu5bEZFxQk4NIDV6RLVcC8o0k=
 golang.org/x/sys v0.36.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
+golang.org/x/sys v0.37.0 h1:fdNQudmxPjkdUTPnLn5mdQv7Zwvbvpaxqs831goi9kQ=
+golang.org/x/sys v0.37.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
 golang.org/x/text v0.24.0 h1:dd5Bzh4yt5KYA8f9CJHCP4FB4D51c2c6JvN37xJJkJ0=
 golang.org/x/text v0.24.0/go.mod h1:L8rBsPeo2pSS+xqN0d5u2ikmjtmoJbDBT1b7nHvFCdU=
+golang.org/x/text v0.28.0 h1:rhazDwis8INMIwQ4tpjLDzUhx6RlXqZNPEM0huQojng=
+golang.org/x/text v0.28.0/go.mod h1:U8nCwOR8jO/marOQ0QbDiOngZVEBB7MAiitBuMjXiNU=
+golang.org/x/text v0.30.0 h1:yznKA/E9zq54KzlzBEAWn1NXSQ8DIp/NYMy88xJjl4k=
+golang.org/x/text v0.30.0/go.mod h1:yDdHFIX9t+tORqspjENWgzaCVXgk0yYnYuSZ8UzzBVM=
+golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
+golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
+google.golang.org/api v0.256.0 h1:u6Khm8+F9sxbCTYNoBHg6/Hwv0N/i+V94MvkOSor6oI=
+google.golang.org/api v0.256.0/go.mod h1:KIgPhksXADEKJlnEoRa9qAII4rXcy40vfI8HRqcU964=
+google.golang.org/genproto v0.0.0-20250603155806-513f23925822 h1:rHWScKit0gvAPuOnu87KpaYtjK5zBMLcULh7gxkCXu4=
+google.golang.org/genproto v0.0.0-20250603155806-513f23925822/go.mod h1:HubltRL7rMh0LfnQPkMH4NPDFEWp0jw3vixw7jEM53s=
+google.golang.org/genproto/googleapis/api v0.0.0-20250818200422-3122310a409c h1:AtEkQdl5b6zsybXcbz00j1LwNodDuH6hVifIaNqk7NQ=
+google.golang.org/genproto/googleapis/api v0.0.0-20250818200422-3122310a409c/go.mod h1:ea2MjsO70ssTfCjiwHgI0ZFqcw45Ksuk2ckf9G468GA=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101 h1:tRPGkdGHuewF4UisLzzHHr1spKw92qLM98nIzxbC0wY=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101/go.mod h1:7i2o+ce6H/6BluujYR+kqX3GKH+dChPTQU19wjRPiGk=
+google.golang.org/grpc v1.76.0 h1:UnVkv1+uMLYXoIz6o7chp59WfQUYA2ex/BXQ9rHZu7A=
+google.golang.org/grpc v1.76.0/go.mod h1:Ju12QI8M6iQJtbcsV+awF5a4hfJMLi4X0JLo94ULZ6c=
+google.golang.org/protobuf v1.36.10 h1:AYd7cD/uASjIL6Q9LiTjz8JLcrh/88q5UObnmY3aOOE=
+google.golang.org/protobuf v1.36.10/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
 gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
 gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
 gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
381
internal/cloud/azure.go
Normal file
@@ -0,0 +1,381 @@
package cloud

import (
	"bytes"
	"context"
	"crypto/sha256"
	"encoding/base64"
	"encoding/hex"
	"errors"
	"fmt"
	"io"
	"os"
	"path/filepath"
	"strings"
	"time"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
	"github.com/Azure/azure-sdk-for-go/sdk/azcore/streaming"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container"
)

// AzureBackend implements the Backend interface for Azure Blob Storage
type AzureBackend struct {
	client        *azblob.Client
	containerName string
	config        *Config
}

// NewAzureBackend creates a new Azure Blob Storage backend
func NewAzureBackend(cfg *Config) (*AzureBackend, error) {
	if cfg.Bucket == "" {
		return nil, fmt.Errorf("container name is required for Azure backend")
	}

	var client *azblob.Client
	var err error

	// Support for Azurite emulator (uses endpoint override)
	if cfg.Endpoint != "" {
		// For Azurite and custom endpoints
		accountName := cfg.AccessKey
		accountKey := cfg.SecretKey

		if accountName == "" {
			// Default Azurite account
			accountName = "devstoreaccount1"
		}
		if accountKey == "" {
			// Default Azurite key (a well-known public value, not a secret)
			accountKey = "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
		}

		// Create credential
		cred, err := azblob.NewSharedKeyCredential(accountName, accountKey)
		if err != nil {
			return nil, fmt.Errorf("failed to create Azure credential: %w", err)
		}

		// Build service URL for Azurite: http://endpoint/accountName
		serviceURL := cfg.Endpoint
		if !strings.Contains(serviceURL, accountName) {
			// Ensure URL ends with slash
			if !strings.HasSuffix(serviceURL, "/") {
				serviceURL += "/"
			}
			serviceURL += accountName
		}

		client, err = azblob.NewClientWithSharedKeyCredential(serviceURL, cred, nil)
		if err != nil {
			return nil, fmt.Errorf("failed to create Azure client: %w", err)
		}
	} else {
		// Production Azure using a shared key credential
		if cfg.AccessKey != "" && cfg.SecretKey != "" {
			// Use account name and key
			accountName := cfg.AccessKey
			accountKey := cfg.SecretKey

			cred, err := azblob.NewSharedKeyCredential(accountName, accountKey)
			if err != nil {
				return nil, fmt.Errorf("failed to create Azure credential: %w", err)
			}

			serviceURL := fmt.Sprintf("https://%s.blob.core.windows.net/", accountName)
			client, err = azblob.NewClientWithSharedKeyCredential(serviceURL, cred, nil)
			if err != nil {
				return nil, fmt.Errorf("failed to create Azure client: %w", err)
			}
		} else {
			// Default Azure credentials (managed identity, environment variables,
			// etc.) are not wired up yet; require an explicit account name and key.
			return nil, fmt.Errorf("Azure authentication requires account name and key, or use AZURE_STORAGE_CONNECTION_STRING environment variable")
		}
	}

	backend := &AzureBackend{
		client:        client,
		containerName: cfg.Bucket,
		config:        cfg,
	}

	// Create container if it doesn't exist
	// Note: Container creation should be done manually or via Azure portal
	if false { // Disabled: cfg.CreateBucket not in Config
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()

		containerClient := client.ServiceClient().NewContainerClient(cfg.Bucket)
		_, err = containerClient.Create(ctx, &container.CreateOptions{})
		if err != nil {
			// Ignore if container already exists
			if !strings.Contains(err.Error(), "ContainerAlreadyExists") {
				return nil, fmt.Errorf("failed to create container: %w", err)
			}
		}
	}

	return backend, nil
}

// Name returns the backend name
func (a *AzureBackend) Name() string {
	return "azure"
}

// Upload uploads a file to Azure Blob Storage
func (a *AzureBackend) Upload(ctx context.Context, localPath, remotePath string, progress ProgressCallback) error {
	file, err := os.Open(localPath)
	if err != nil {
		return fmt.Errorf("failed to open file: %w", err)
	}
	defer file.Close()

	fileInfo, err := file.Stat()
	if err != nil {
		return fmt.Errorf("failed to stat file: %w", err)
	}
	fileSize := fileInfo.Size()

	// Remove leading slash from remote path
	blobName := strings.TrimPrefix(remotePath, "/")

	// Use block blob upload for large files (>256MB), simple upload for smaller
	const blockUploadThreshold = 256 * 1024 * 1024 // 256 MB

	if fileSize > blockUploadThreshold {
		return a.uploadBlocks(ctx, file, blobName, fileSize, progress)
	}

	return a.uploadSimple(ctx, file, blobName, fileSize, progress)
}

// uploadSimple uploads a file using simple upload (single request)
func (a *AzureBackend) uploadSimple(ctx context.Context, file *os.File, blobName string, fileSize int64, progress ProgressCallback) error {
	blockBlobClient := a.client.ServiceClient().NewContainerClient(a.containerName).NewBlockBlobClient(blobName)

	// Wrap reader with progress tracking
	reader := NewProgressReader(file, fileSize, progress)

	// Calculate SHA-256 hash for integrity
	hash := sha256.New()
	teeReader := io.TeeReader(reader, hash)

	_, err := blockBlobClient.UploadStream(ctx, teeReader, &blockblob.UploadStreamOptions{
		BlockSize: 4 * 1024 * 1024, // 4MB blocks
	})
	if err != nil {
		return fmt.Errorf("failed to upload blob: %w", err)
	}

	// Store checksum as metadata
	checksum := hex.EncodeToString(hash.Sum(nil))
	metadata := map[string]*string{
		"sha256": &checksum,
	}

	_, err = blockBlobClient.SetMetadata(ctx, metadata, nil)
	if err != nil {
		// Non-fatal: upload succeeded but metadata failed
		fmt.Fprintf(os.Stderr, "Warning: failed to set blob metadata: %v\n", err)
	}

	return nil
}

// uploadBlocks uploads a file using block blob staging (for large files)
func (a *AzureBackend) uploadBlocks(ctx context.Context, file *os.File, blobName string, fileSize int64, progress ProgressCallback) error {
	blockBlobClient := a.client.ServiceClient().NewContainerClient(a.containerName).NewBlockBlobClient(blobName)

	const blockSize = 100 * 1024 * 1024 // 100MB per block
	numBlocks := (fileSize + blockSize - 1) / blockSize

	blockIDs := make([]string, 0, numBlocks)
	hash := sha256.New()
	var totalUploaded int64

	for i := int64(0); i < numBlocks; i++ {
		blockID := base64.StdEncoding.EncodeToString([]byte(fmt.Sprintf("block-%08d", i)))
		blockIDs = append(blockIDs, blockID)

		// Calculate block size (the final block may be shorter)
		currentBlockSize := blockSize
		if i == numBlocks-1 {
			currentBlockSize = int(fileSize - i*blockSize)
		}

		// Read block
		blockData := make([]byte, currentBlockSize)
		n, err := io.ReadFull(file, blockData)
		if err != nil && err != io.ErrUnexpectedEOF {
			return fmt.Errorf("failed to read block %d: %w", i, err)
		}
		blockData = blockData[:n]

		// Update hash
		hash.Write(blockData)

		// Upload block
		reader := bytes.NewReader(blockData)
		_, err = blockBlobClient.StageBlock(ctx, blockID, streaming.NopCloser(reader), nil)
		if err != nil {
			return fmt.Errorf("failed to stage block %d: %w", i, err)
		}

		// Update progress
		totalUploaded += int64(n)
		if progress != nil {
			progress(totalUploaded, fileSize)
		}
	}

	// Commit all blocks
	_, err := blockBlobClient.CommitBlockList(ctx, blockIDs, nil)
	if err != nil {
		return fmt.Errorf("failed to commit block list: %w", err)
	}

	// Store checksum as metadata
	checksum := hex.EncodeToString(hash.Sum(nil))
	metadata := map[string]*string{
		"sha256": &checksum,
	}

	_, err = blockBlobClient.SetMetadata(ctx, metadata, nil)
	if err != nil {
		// Non-fatal
		fmt.Fprintf(os.Stderr, "Warning: failed to set blob metadata: %v\n", err)
	}

	return nil
}

// Download downloads a file from Azure Blob Storage
func (a *AzureBackend) Download(ctx context.Context, remotePath, localPath string, progress ProgressCallback) error {
	blobName := strings.TrimPrefix(remotePath, "/")
	blockBlobClient := a.client.ServiceClient().NewContainerClient(a.containerName).NewBlockBlobClient(blobName)

	// Get blob properties to know size
	props, err := blockBlobClient.GetProperties(ctx, nil)
	if err != nil {
		return fmt.Errorf("failed to get blob properties: %w", err)
	}

	fileSize := *props.ContentLength

	// Download blob
	resp, err := blockBlobClient.DownloadStream(ctx, nil)
	if err != nil {
		return fmt.Errorf("failed to download blob: %w", err)
	}
	defer resp.Body.Close()

	// Create local file
	file, err := os.Create(localPath)
	if err != nil {
		return fmt.Errorf("failed to create file: %w", err)
	}
	defer file.Close()

	// Wrap reader with progress tracking
	reader := NewProgressReader(resp.Body, fileSize, progress)

	// Copy with progress
	_, err = io.Copy(file, reader)
	if err != nil {
		return fmt.Errorf("failed to write file: %w", err)
	}

	return nil
}

// Delete deletes a file from Azure Blob Storage
func (a *AzureBackend) Delete(ctx context.Context, remotePath string) error {
	blobName := strings.TrimPrefix(remotePath, "/")
	blockBlobClient := a.client.ServiceClient().NewContainerClient(a.containerName).NewBlockBlobClient(blobName)

	_, err := blockBlobClient.Delete(ctx, nil)
	if err != nil {
		return fmt.Errorf("failed to delete blob: %w", err)
	}

	return nil
}

// List lists files in Azure Blob Storage with a given prefix
func (a *AzureBackend) List(ctx context.Context, prefix string) ([]BackupInfo, error) {
	prefix = strings.TrimPrefix(prefix, "/")
	containerClient := a.client.ServiceClient().NewContainerClient(a.containerName)

	pager := containerClient.NewListBlobsFlatPager(&container.ListBlobsFlatOptions{
		Prefix: &prefix,
	})

	var files []BackupInfo

	for pager.More() {
		page, err := pager.NextPage(ctx)
		if err != nil {
			return nil, fmt.Errorf("failed to list blobs: %w", err)
		}

		for _, blob := range page.Segment.BlobItems {
			if blob.Name == nil || blob.Properties == nil {
				continue
			}

			file := BackupInfo{
				Key:          *blob.Name,
				Name:         filepath.Base(*blob.Name),
				Size:         *blob.Properties.ContentLength,
				LastModified: *blob.Properties.LastModified,
			}

			// Try to get SHA256 from metadata
			if blob.Metadata != nil {
				if sha256Val, ok := blob.Metadata["sha256"]; ok && sha256Val != nil {
					file.ETag = *sha256Val
				}
			}

			files = append(files, file)
		}
	}

	return files, nil
}

// Exists checks if a file exists in Azure Blob Storage
func (a *AzureBackend) Exists(ctx context.Context, remotePath string) (bool, error) {
	blobName := strings.TrimPrefix(remotePath, "/")
	blockBlobClient := a.client.ServiceClient().NewContainerClient(a.containerName).NewBlockBlobClient(blobName)

	_, err := blockBlobClient.GetProperties(ctx, nil)
	if err != nil {
		// Unwrap the SDK error to detect a 404 (a bare nil check here would
		// never fire, since respErr is only populated by errors.As)
		var respErr *azcore.ResponseError
		if errors.As(err, &respErr) && respErr.StatusCode == 404 {
			return false, nil
		}
		// Fallback: check if the error message indicates "not found"
		if strings.Contains(err.Error(), "BlobNotFound") || strings.Contains(err.Error(), "404") {
			return false, nil
		}
		return false, fmt.Errorf("failed to check blob existence: %w", err)
	}

	return true, nil
}

// GetSize returns the size of a file in Azure Blob Storage
func (a *AzureBackend) GetSize(ctx context.Context, remotePath string) (int64, error) {
	blobName := strings.TrimPrefix(remotePath, "/")
	blockBlobClient := a.client.ServiceClient().NewContainerClient(a.containerName).NewBlockBlobClient(blobName)

	props, err := blockBlobClient.GetProperties(ctx, nil)
	if err != nil {
		return 0, fmt.Errorf("failed to get blob properties: %w", err)
	}

	return *props.ContentLength, nil
}
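For reviewers trying the new backend outside the CLI, a minimal usage sketch follows. It is illustrative only: the import path `dbbackup/internal/cloud` is an assumed module path, and `ProgressCallback` is assumed to be `func(transferred, total int64)` as implied by the `progress(totalUploaded, fileSize)` calls above; the container and credential values are placeholders.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"dbbackup/internal/cloud" // assumed module path for this repo
)

func main() {
	// Bucket doubles as the Azure container name; AccessKey/SecretKey carry
	// the storage account name and key, matching their use in NewAzureBackend.
	cfg := &cloud.Config{
		Provider:  "azure",
		Bucket:    "prod-backups",
		AccessKey: "myaccount",
		SecretKey: os.Getenv("AZURE_STORAGE_KEY"),
	}

	backend, err := cloud.NewAzureBackend(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Assumed ProgressCallback shape: func(transferred, total int64).
	progress := func(transferred, total int64) {
		fmt.Printf("\rupload: %d/%d bytes", transferred, total)
	}

	if err := backend.Upload(context.Background(), "backup.sql", "postgres/backup.sql", progress); err != nil {
		log.Fatal(err)
	}
	fmt.Println("\ndone")
}
```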
275
internal/cloud/gcs.go
Normal file
@@ -0,0 +1,275 @@
package cloud

import (
	"context"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"path/filepath"
	"strings"
	"time"

	"cloud.google.com/go/storage"
	"google.golang.org/api/iterator"
	"google.golang.org/api/option"
)

// GCSBackend implements the Backend interface for Google Cloud Storage
type GCSBackend struct {
	client     *storage.Client
	bucketName string
	config     *Config
}

// NewGCSBackend creates a new Google Cloud Storage backend
func NewGCSBackend(cfg *Config) (*GCSBackend, error) {
	if cfg.Bucket == "" {
		return nil, fmt.Errorf("bucket name is required for GCS backend")
	}

	var client *storage.Client
	var err error
	ctx := context.Background()

	// Support for fake-gcs-server emulator (uses endpoint override)
	if cfg.Endpoint != "" {
		// For fake-gcs-server and custom endpoints
		client, err = storage.NewClient(ctx, option.WithEndpoint(cfg.Endpoint), option.WithoutAuthentication())
		if err != nil {
			return nil, fmt.Errorf("failed to create GCS client: %w", err)
		}
	} else {
		// Production GCS using Application Default Credentials or service account
		if cfg.AccessKey != "" {
			// Use service account JSON key file
			client, err = storage.NewClient(ctx, option.WithCredentialsFile(cfg.AccessKey))
			if err != nil {
				return nil, fmt.Errorf("failed to create GCS client with credentials file: %w", err)
			}
		} else {
			// Use default credentials (ADC, environment variables, etc.)
			client, err = storage.NewClient(ctx)
			if err != nil {
				return nil, fmt.Errorf("failed to create GCS client: %w", err)
			}
		}
	}

	backend := &GCSBackend{
		client:     client,
		bucketName: cfg.Bucket,
		config:     cfg,
	}

	// Create bucket if it doesn't exist
	// Note: Bucket creation should be done manually or via gcloud CLI
	if false { // Disabled: cfg.CreateBucket not in Config
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()

		bucket := client.Bucket(cfg.Bucket)
		_, err = bucket.Attrs(ctx)
		if err == storage.ErrBucketNotExist {
			// Create bucket with default settings
			if err := bucket.Create(ctx, cfg.AccessKey, nil); err != nil {
				return nil, fmt.Errorf("failed to create bucket: %w", err)
			}
		} else if err != nil {
			return nil, fmt.Errorf("failed to check bucket: %w", err)
		}
	}

	return backend, nil
}

// Name returns the backend name
func (g *GCSBackend) Name() string {
	return "gcs"
}

// Upload uploads a file to Google Cloud Storage
func (g *GCSBackend) Upload(ctx context.Context, localPath, remotePath string, progress ProgressCallback) error {
	file, err := os.Open(localPath)
	if err != nil {
		return fmt.Errorf("failed to open file: %w", err)
	}
	defer file.Close()

	fileInfo, err := file.Stat()
	if err != nil {
		return fmt.Errorf("failed to stat file: %w", err)
	}
	fileSize := fileInfo.Size()

	// Remove leading slash from remote path
	objectName := strings.TrimPrefix(remotePath, "/")

	bucket := g.client.Bucket(g.bucketName)
	object := bucket.Object(objectName)

	// Create writer with automatic chunking for large files
	writer := object.NewWriter(ctx)
	writer.ChunkSize = 16 * 1024 * 1024 // 16MB chunks for streaming

	// Wrap reader with progress tracking and hash calculation
	hash := sha256.New()
	reader := NewProgressReader(io.TeeReader(file, hash), fileSize, progress)

	// Upload with progress tracking
	_, err = io.Copy(writer, reader)
	if err != nil {
		writer.Close()
		return fmt.Errorf("failed to upload object: %w", err)
	}

	// Close writer (finalizes upload)
	if err := writer.Close(); err != nil {
		return fmt.Errorf("failed to finalize upload: %w", err)
	}

	// Store checksum as metadata
	checksum := hex.EncodeToString(hash.Sum(nil))
	_, err = object.Update(ctx, storage.ObjectAttrsToUpdate{
		Metadata: map[string]string{
			"sha256": checksum,
		},
	})
	if err != nil {
		// Non-fatal: upload succeeded but metadata failed
		fmt.Fprintf(os.Stderr, "Warning: failed to set object metadata: %v\n", err)
	}

	return nil
}

// Download downloads a file from Google Cloud Storage
func (g *GCSBackend) Download(ctx context.Context, remotePath, localPath string, progress ProgressCallback) error {
	objectName := strings.TrimPrefix(remotePath, "/")

	bucket := g.client.Bucket(g.bucketName)
	object := bucket.Object(objectName)

	// Get object attributes to know size
	attrs, err := object.Attrs(ctx)
	if err != nil {
		return fmt.Errorf("failed to get object attributes: %w", err)
	}

	fileSize := attrs.Size

	// Create reader
	reader, err := object.NewReader(ctx)
	if err != nil {
		return fmt.Errorf("failed to download object: %w", err)
	}
	defer reader.Close()

	// Create local file
	file, err := os.Create(localPath)
	if err != nil {
		return fmt.Errorf("failed to create file: %w", err)
	}
	defer file.Close()

	// Wrap reader with progress tracking
	progressReader := NewProgressReader(reader, fileSize, progress)

	// Copy with progress
	_, err = io.Copy(file, progressReader)
	if err != nil {
		return fmt.Errorf("failed to write file: %w", err)
	}

	return nil
}

// Delete deletes a file from Google Cloud Storage
func (g *GCSBackend) Delete(ctx context.Context, remotePath string) error {
	objectName := strings.TrimPrefix(remotePath, "/")

	bucket := g.client.Bucket(g.bucketName)
	object := bucket.Object(objectName)

	if err := object.Delete(ctx); err != nil {
		return fmt.Errorf("failed to delete object: %w", err)
	}

	return nil
}

// List lists files in Google Cloud Storage with a given prefix
func (g *GCSBackend) List(ctx context.Context, prefix string) ([]BackupInfo, error) {
	prefix = strings.TrimPrefix(prefix, "/")

	bucket := g.client.Bucket(g.bucketName)
	query := &storage.Query{
		Prefix: prefix,
	}

	it := bucket.Objects(ctx, query)

	var files []BackupInfo

	for {
		attrs, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			return nil, fmt.Errorf("failed to list objects: %w", err)
		}

		file := BackupInfo{
			Key:          attrs.Name,
			Name:         filepath.Base(attrs.Name),
			Size:         attrs.Size,
			LastModified: attrs.Updated,
		}

		// Try to get SHA256 from metadata
		if attrs.Metadata != nil {
			if sha256Val, ok := attrs.Metadata["sha256"]; ok {
				file.ETag = sha256Val
			}
		}

		files = append(files, file)
	}

	return files, nil
}

// Exists checks if a file exists in Google Cloud Storage
func (g *GCSBackend) Exists(ctx context.Context, remotePath string) (bool, error) {
	objectName := strings.TrimPrefix(remotePath, "/")

	bucket := g.client.Bucket(g.bucketName)
	object := bucket.Object(objectName)

	_, err := object.Attrs(ctx)
	if err == storage.ErrObjectNotExist {
		return false, nil
	}
	if err != nil {
		return false, fmt.Errorf("failed to check object existence: %w", err)
	}

	return true, nil
}

// GetSize returns the size of a file in Google Cloud Storage
func (g *GCSBackend) GetSize(ctx context.Context, remotePath string) (int64, error) {
	objectName := strings.TrimPrefix(remotePath, "/")

	bucket := g.client.Bucket(g.bucketName)
	object := bucket.Object(objectName)

	attrs, err := object.Attrs(ctx)
	if err != nil {
		return 0, fmt.Errorf("failed to get object attributes: %w", err)
	}

	return attrs.Size, nil
}
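Both backends record a `sha256` entry in object metadata at upload time and surface it through `BackupInfo.ETag` in `List`. The sketch below shows how a caller could verify a download against that checksum; it assumes the `Backend` and `BackupInfo` shapes used above and the same assumed `dbbackup/internal/cloud` import path, with a placeholder local path.

```go
package backupcheck

import (
	"context"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"

	"dbbackup/internal/cloud" // assumed module path
)

// verifyDownload fetches info.Key and compares the local SHA-256 against the
// checksum that Upload stored in blob/object metadata (exposed via ETag).
func verifyDownload(ctx context.Context, b cloud.Backend, info cloud.BackupInfo, localPath string) error {
	if err := b.Download(ctx, info.Key, localPath, nil); err != nil {
		return err
	}

	f, err := os.Open(localPath)
	if err != nil {
		return err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}

	// An empty ETag means no checksum was stored, so nothing to compare.
	if got := hex.EncodeToString(h.Sum(nil)); info.ETag != "" && got != info.ETag {
		return fmt.Errorf("checksum mismatch for %s: got %s, want %s", info.Key, got, info.ETag)
	}
	return nil
}
```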
@@ -79,8 +79,12 @@ func NewBackend(cfg *Config) (Backend, error) {
 			return nil, fmt.Errorf("endpoint required for Backblaze B2")
 		}
 		return NewS3Backend(cfg)
+	case "azure", "azblob":
+		return NewAzureBackend(cfg)
+	case "gs", "gcs", "google":
+		return NewGCSBackend(cfg)
 	default:
-		return nil, fmt.Errorf("unsupported cloud provider: %s (supported: s3, minio, b2)", cfg.Provider)
+		return nil, fmt.Errorf("unsupported cloud provider: %s (supported: s3, minio, b2, azure, gcs)", cfg.Provider)
 	}
 }
 
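With this factory change, callers never touch the concrete backend types. A short sketch of the dispatch, assuming the `Config` fields used by the constructors above (credentials and paths are placeholders):

```go
package main

import (
	"fmt"
	"log"

	"dbbackup/internal/cloud" // assumed module path
)

func main() {
	configs := []*cloud.Config{
		// "azure" and "azblob" both select the Azure backend.
		{Provider: "azblob", Bucket: "prod-backups", AccessKey: "myaccount", SecretKey: "ACCOUNT_KEY"},
		// "gs", "gcs", and "google" all select the GCS backend; for GCS,
		// AccessKey optionally names a service-account JSON key file.
		{Provider: "gs", Bucket: "prod-backups", AccessKey: "/etc/dbbackup/sa.json"},
	}

	for _, cfg := range configs {
		backend, err := cloud.NewBackend(cfg)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("selected backend:", backend.Name()) // "azure", then "gcs"
	}
}
```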
@@ -88,13 +88,13 @@ type Config struct {
 
 	// Cloud storage options (v2.0)
 	CloudEnabled    bool   // Enable cloud storage integration
-	CloudProvider   string // "s3", "minio", "b2"
-	CloudBucket     string // Bucket name
-	CloudRegion     string // Region (for S3)
-	CloudEndpoint   string // Custom endpoint (for MinIO, B2)
-	CloudAccessKey  string // Access key
-	CloudSecretKey  string // Secret key
-	CloudPrefix     string // Key prefix
+	CloudProvider   string // "s3", "minio", "b2", "azure", "gcs"
+	CloudBucket     string // Bucket/container name
+	CloudRegion     string // Region (for S3, GCS)
+	CloudEndpoint   string // Custom endpoint (for MinIO, B2, Azurite, fake-gcs-server)
+	CloudAccessKey  string // Access key / Account name (Azure) / Service account file (GCS)
+	CloudSecretKey  string // Secret key / Account key (Azure)
+	CloudPrefix     string // Key/object prefix
 	CloudAutoUpload bool   // Automatically upload after backup
 }
 
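To make the overloaded credential comments concrete, here is how the cloud options might be populated per provider. This is a sketch assuming it lives in the package that declares `Config`; all values are placeholders.

```go
package backup // assumed: the package that declares Config

// exampleCloudConfigs shows how the dual-purpose credential fields are
// meant to be filled for each provider; all values are placeholders.
func exampleCloudConfigs() (azure, gcs Config) {
	azure = Config{
		CloudEnabled:   true,
		CloudProvider:  "azure",
		CloudBucket:    "prod-backups", // container name on Azure
		CloudAccessKey: "myaccount",    // storage account name
		CloudSecretKey: "ACCOUNT_KEY",  // storage account key
	}
	gcs = Config{
		CloudEnabled:   true,
		CloudProvider:  "gcs",
		CloudBucket:    "prod-backups",
		CloudAccessKey: "/etc/dbbackup/sa.json", // service account JSON key file
		// CloudSecretKey stays empty for GCS.
	}
	return azure, gcs
}
```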
382
scripts/test_azure_storage.sh
Executable file
@@ -0,0 +1,382 @@
#!/bin/bash
|
||||||
|
|
||||||
|
# Azure Blob Storage (Azurite) Testing Script for dbbackup
|
||||||
|
# Tests backup, restore, verify, and cleanup with Azure emulator
|
||||||
|
|
||||||
|
set -e
|
||||||
|
|
||||||
|
# Colors for output
|
||||||
|
RED='\033[0;31m'
|
||||||
|
GREEN='\033[0;32m'
|
||||||
|
YELLOW='\033[1;33m'
|
||||||
|
BLUE='\033[0;34m'
|
||||||
|
NC='\033[0m' # No Color
|
||||||
|
|
||||||
|
# Test configuration
|
||||||
|
AZURITE_ENDPOINT="http://localhost:10000"
|
||||||
|
CONTAINER_NAME="test-backups"
|
||||||
|
ACCOUNT_NAME="devstoreaccount1"
|
||||||
|
ACCOUNT_KEY="Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
|
||||||
|
|
||||||
|
# Database connection details (from docker-compose)
|
||||||
|
POSTGRES_HOST="localhost"
|
||||||
|
POSTGRES_PORT="5434"
|
||||||
|
POSTGRES_USER="testuser"
|
||||||
|
POSTGRES_PASS="testpass"
|
||||||
|
POSTGRES_DB="testdb"
|
||||||
|
|
||||||
|
MYSQL_HOST="localhost"
|
||||||
|
MYSQL_PORT="3308"
|
||||||
|
MYSQL_USER="testuser"
|
||||||
|
MYSQL_PASS="testpass"
|
||||||
|
MYSQL_DB="testdb"
|
||||||
|
|
||||||
|
# Test counters
|
||||||
|
TESTS_PASSED=0
|
||||||
|
TESTS_FAILED=0
|
||||||
|
|
||||||
|
# Functions
|
||||||
|
print_header() {
|
||||||
|
echo -e "\n${BLUE}=== $1 ===${NC}\n"
|
||||||
|
}
|
||||||
|
|
||||||
|
print_success() {
|
||||||
|
echo -e "${GREEN}✓ $1${NC}"
|
||||||
|
((TESTS_PASSED++))
|
||||||
|
}
|
||||||
|
|
||||||
|
print_error() {
|
||||||
|
echo -e "${RED}✗ $1${NC}"
|
||||||
|
((TESTS_FAILED++))
|
||||||
|
}
|
||||||
|
|
||||||
|
print_info() {
|
||||||
|
echo -e "${YELLOW}ℹ $1${NC}"
|
||||||
|
}
|
||||||
|
|
||||||
|
wait_for_azurite() {
|
||||||
|
print_info "Waiting for Azurite to be ready..."
|
||||||
|
for i in {1..30}; do
|
||||||
|
if curl -f -s "${AZURITE_ENDPOINT}/devstoreaccount1?restype=account&comp=properties" > /dev/null 2>&1; then
|
||||||
|
print_success "Azurite is ready"
|
||||||
|
return 0
|
||||||
|
fi
|
||||||
|
sleep 1
|
||||||
|
done
|
||||||
|
print_error "Azurite failed to start"
|
||||||
|
return 1
|
||||||
|
}
|
||||||
|
|
||||||
|
# Build dbbackup if needed
|
||||||
|
build_dbbackup() {
|
||||||
|
print_header "Building dbbackup"
|
||||||
|
if [ ! -f "./dbbackup" ]; then
|
||||||
|
go build -o dbbackup .
|
||||||
|
print_success "Built dbbackup binary"
|
||||||
|
else
|
||||||
|
print_info "Using existing dbbackup binary"
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
# Start services
|
||||||
|
start_services() {
|
||||||
|
print_header "Starting Azurite and Database Services"
|
||||||
|
docker-compose -f docker-compose.azurite.yml up -d
|
||||||
|
|
||||||
|
# Wait for services
|
||||||
|
sleep 5
|
||||||
|
wait_for_azurite
|
||||||
|
|
||||||
|
print_info "Waiting for PostgreSQL..."
|
||||||
|
sleep 3
|
||||||
|
|
||||||
|
print_info "Waiting for MySQL..."
|
||||||
|
sleep 3
|
||||||
|
|
||||||
|
print_success "All services started"
|
||||||
|
}
|
||||||
|
|
||||||
|
# Stop services
|
||||||
|
stop_services() {
|
||||||
|
print_header "Stopping Services"
|
||||||
|
docker-compose -f docker-compose.azurite.yml down
|
||||||
|
print_success "Services stopped"
|
||||||
|
}
|
||||||
|
|
||||||
|
# Create test data in databases
|
||||||
|
create_test_data() {
|
||||||
|
print_header "Creating Test Data"
|
||||||
|
|
||||||
|
# PostgreSQL
|
||||||
|
PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -d $POSTGRES_DB <<EOF
|
||||||
|
DROP TABLE IF EXISTS test_table;
|
||||||
|
CREATE TABLE test_table (
|
||||||
|
id SERIAL PRIMARY KEY,
|
||||||
|
name VARCHAR(100),
|
||||||
|
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
|
||||||
|
);
|
||||||
|
INSERT INTO test_table (name) VALUES ('Azure Test 1'), ('Azure Test 2'), ('Azure Test 3');
|
||||||
|
EOF
|
||||||
|
print_success "Created PostgreSQL test data"
|
||||||
|
|
||||||
|
# MySQL
|
||||||
|
mysql -h $MYSQL_HOST -P $MYSQL_PORT -u $MYSQL_USER -p$MYSQL_PASS $MYSQL_DB <<EOF
|
||||||
|
DROP TABLE IF EXISTS test_table;
|
||||||
|
CREATE TABLE test_table (
|
||||||
|
id INT AUTO_INCREMENT PRIMARY KEY,
|
||||||
|
name VARCHAR(100),
|
||||||
|
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
|
||||||
|
);
|
||||||
|
INSERT INTO test_table (name) VALUES ('Azure Test 1'), ('Azure Test 2'), ('Azure Test 3');
|
||||||
|
EOF
|
||||||
|
print_success "Created MySQL test data"
|
||||||
|
}

# Test 1: PostgreSQL backup to Azure
test_postgres_backup() {
    print_header "Test 1: PostgreSQL Backup to Azure"

    ./dbbackup backup postgres \
        --host $POSTGRES_HOST \
        --port $POSTGRES_PORT \
        --user $POSTGRES_USER \
        --password $POSTGRES_PASS \
        --database $POSTGRES_DB \
        --output ./backups/pg_azure_test.sql \
        --cloud "azure://$CONTAINER_NAME/postgres/backup1.sql?endpoint=$AZURITE_ENDPOINT&account=$ACCOUNT_NAME&key=$ACCOUNT_KEY"

    if [ $? -eq 0 ]; then
        print_success "PostgreSQL backup uploaded to Azure"
    else
        print_error "PostgreSQL backup failed"
        return 1
    fi
}

# Test 2: MySQL backup to Azure
test_mysql_backup() {
    print_header "Test 2: MySQL Backup to Azure"

    ./dbbackup backup mysql \
        --host $MYSQL_HOST \
        --port $MYSQL_PORT \
        --user $MYSQL_USER \
        --password $MYSQL_PASS \
        --database $MYSQL_DB \
        --output ./backups/mysql_azure_test.sql \
        --cloud "azure://$CONTAINER_NAME/mysql/backup1.sql?endpoint=$AZURITE_ENDPOINT&account=$ACCOUNT_NAME&key=$ACCOUNT_KEY"

    if [ $? -eq 0 ]; then
        print_success "MySQL backup uploaded to Azure"
    else
        print_error "MySQL backup failed"
        return 1
    fi
}

# Test 3: List backups in Azure
test_list_backups() {
    print_header "Test 3: List Azure Backups"

    ./dbbackup cloud list "azure://$CONTAINER_NAME/postgres/?endpoint=$AZURITE_ENDPOINT&account=$ACCOUNT_NAME&key=$ACCOUNT_KEY"

    if [ $? -eq 0 ]; then
        print_success "Listed Azure backups"
    else
        print_error "Failed to list backups"
        return 1
    fi
}

# Test 4: Verify backup in Azure
test_verify_backup() {
    print_header "Test 4: Verify Azure Backup"

    ./dbbackup verify "azure://$CONTAINER_NAME/postgres/backup1.sql?endpoint=$AZURITE_ENDPOINT&account=$ACCOUNT_NAME&key=$ACCOUNT_KEY"

    if [ $? -eq 0 ]; then
        print_success "Backup verification successful"
    else
        print_error "Backup verification failed"
        return 1
    fi
}

# Test 5: Restore from Azure
test_restore_from_azure() {
    print_header "Test 5: Restore from Azure"

    # Drop and recreate database
    PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -d postgres <<EOF
DROP DATABASE IF EXISTS testdb_restored;
CREATE DATABASE testdb_restored;
EOF

    ./dbbackup restore postgres \
        --source "azure://$CONTAINER_NAME/postgres/backup1.sql?endpoint=$AZURITE_ENDPOINT&account=$ACCOUNT_NAME&key=$ACCOUNT_KEY" \
        --host $POSTGRES_HOST \
        --port $POSTGRES_PORT \
        --user $POSTGRES_USER \
        --password $POSTGRES_PASS \
        --database testdb_restored

    if [ $? -eq 0 ]; then
        print_success "Restored from Azure backup"

        # Verify restored data; 'psql -t' prints the bare value (the numeric
        # test below tolerates the surrounding whitespace)
        COUNT=$(PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -d testdb_restored -t -c "SELECT COUNT(*) FROM test_table;")
        if [ "$COUNT" -eq 3 ]; then
            print_success "Restored data verified (3 rows)"
        else
            print_error "Restored data incorrect (expected 3 rows, got $COUNT)"
        fi
    else
        print_error "Restore from Azure failed"
        return 1
    fi
}
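
# For a 300MB object the Azure backend is expected to upload a block blob
# (staged blocks committed at the end) rather than a single PUT; the chunk
# size is an implementation detail of dbbackup.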

# Test 6: Large file upload (block blob)
test_large_file_upload() {
    print_header "Test 6: Large File Upload (Block Blob)"

    # Create a large test file (300MB)
    print_info "Creating 300MB test file..."
    dd if=/dev/urandom of=./backups/large_test.dat bs=1M count=300 2>/dev/null

    print_info "Uploading large file to Azure..."
    ./dbbackup cloud upload \
        ./backups/large_test.dat \
        "azure://$CONTAINER_NAME/large/large_test.dat?endpoint=$AZURITE_ENDPOINT&account=$ACCOUNT_NAME&key=$ACCOUNT_KEY"

    if [ $? -eq 0 ]; then
        print_success "Large file uploaded successfully (block blob)"

        # Verify file exists and has correct size
        print_info "Downloading large file..."
        ./dbbackup cloud download \
            "azure://$CONTAINER_NAME/large/large_test.dat?endpoint=$AZURITE_ENDPOINT&account=$ACCOUNT_NAME&key=$ACCOUNT_KEY" \
            ./backups/large_test_downloaded.dat

        if [ $? -eq 0 ]; then
            # 'stat -f%z' is BSD/macOS; 'stat -c%s' is the GNU coreutils equivalent
            ORIGINAL_SIZE=$(stat -f%z ./backups/large_test.dat 2>/dev/null || stat -c%s ./backups/large_test.dat)
            DOWNLOADED_SIZE=$(stat -f%z ./backups/large_test_downloaded.dat 2>/dev/null || stat -c%s ./backups/large_test_downloaded.dat)

            if [ "$ORIGINAL_SIZE" -eq "$DOWNLOADED_SIZE" ]; then
                print_success "Downloaded file size matches original ($ORIGINAL_SIZE bytes)"
            else
                print_error "File size mismatch (original: $ORIGINAL_SIZE, downloaded: $DOWNLOADED_SIZE)"
            fi
        else
            print_error "Large file download failed"
        fi

        # Cleanup
        rm -f ./backups/large_test.dat ./backups/large_test_downloaded.dat
    else
        print_error "Large file upload failed"
        return 1
    fi
}

# Test 7: Delete from Azure
test_delete_backup() {
    print_header "Test 7: Delete Backup from Azure"

    ./dbbackup cloud delete "azure://$CONTAINER_NAME/mysql/backup1.sql?endpoint=$AZURITE_ENDPOINT&account=$ACCOUNT_NAME&key=$ACCOUNT_KEY"

    if [ $? -eq 0 ]; then
        print_success "Deleted backup from Azure"

        # Verify deletion
        if ! ./dbbackup cloud list "azure://$CONTAINER_NAME/mysql/?endpoint=$AZURITE_ENDPOINT&account=$ACCOUNT_NAME&key=$ACCOUNT_KEY" | grep -q "backup1.sql"; then
            print_success "Verified backup was deleted"
        else
            print_error "Backup still exists after deletion"
        fi
    else
        print_error "Failed to delete backup"
        return 1
    fi
}
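
# Assumed retention semantics: 'cleanup --keep 2' deletes everything under
# the given prefix except the two most recent backups.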

# Test 8: Cleanup old backups
test_cleanup() {
    print_header "Test 8: Cleanup Old Backups"

    # Create multiple backups with different timestamps
    for i in {1..5}; do
        ./dbbackup backup postgres \
            --host $POSTGRES_HOST \
            --port $POSTGRES_PORT \
            --user $POSTGRES_USER \
            --password $POSTGRES_PASS \
            --database $POSTGRES_DB \
            --output "./backups/pg_cleanup_$i.sql" \
            --cloud "azure://$CONTAINER_NAME/cleanup/backup_$i.sql?endpoint=$AZURITE_ENDPOINT&account=$ACCOUNT_NAME&key=$ACCOUNT_KEY"
        sleep 1
    done

    print_success "Created 5 test backups"

    # Cleanup, keeping only 2
    ./dbbackup cleanup "azure://$CONTAINER_NAME/cleanup/?endpoint=$AZURITE_ENDPOINT&account=$ACCOUNT_NAME&key=$ACCOUNT_KEY" --keep 2

    if [ $? -eq 0 ]; then
        print_success "Cleanup completed"

        # Count remaining backups
        COUNT=$(./dbbackup cloud list "azure://$CONTAINER_NAME/cleanup/?endpoint=$AZURITE_ENDPOINT&account=$ACCOUNT_NAME&key=$ACCOUNT_KEY" | grep -c "backup_")
        if [ "$COUNT" -le 2 ]; then
            print_success "Verified cleanup (kept 2 backups)"
        else
            print_error "Cleanup failed (expected at most 2 backups, found $COUNT)"
        fi
    else
        print_error "Cleanup failed"
        return 1
    fi
}

# Main test execution
main() {
    print_header "Azure Blob Storage (Azurite) Integration Tests"

    # Setup
    build_dbbackup
    mkdir -p ./backups   # ensure the local output directory exists
    start_services
    create_test_data

    # Run tests
    test_postgres_backup
    test_mysql_backup
    test_list_backups
    test_verify_backup
    test_restore_from_azure
    test_large_file_upload
    test_delete_backup
    test_cleanup

    # Cleanup
    print_header "Cleanup"
    rm -rf ./backups

    # Summary
    print_header "Test Summary"
    echo -e "${GREEN}Passed: $TESTS_PASSED${NC}"
    echo -e "${RED}Failed: $TESTS_FAILED${NC}"

    if [ $TESTS_FAILED -eq 0 ]; then
        print_success "All tests passed!"
        stop_services
        exit 0
    else
        print_error "Some tests failed"
        print_info "Leaving services running for debugging"
        print_info "Run 'docker-compose -f docker-compose.azurite.yml down' to stop services"
        exit 1
    fi
}

# Run main
main
390
scripts/test_gcs_storage.sh
Executable file
@@ -0,0 +1,390 @@
#!/bin/bash

# Google Cloud Storage (fake-gcs-server) Testing Script for dbbackup
# Tests backup, restore, verify, and cleanup with GCS emulator

# Note: deliberately no 'set -e' -- each test counts its own failures so the
# run can continue, print a summary, and leave services up for debugging.

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Test configuration
GCS_ENDPOINT="http://localhost:4443/storage/v1"
BUCKET_NAME="test-backups"
PROJECT_ID="test-project"

# Database connection details (from docker-compose)
POSTGRES_HOST="localhost"
POSTGRES_PORT="5435"
POSTGRES_USER="testuser"
POSTGRES_PASS="testpass"
POSTGRES_DB="testdb"

MYSQL_HOST="localhost"
MYSQL_PORT="3309"
MYSQL_USER="testuser"
MYSQL_PASS="testpass"
MYSQL_DB="testdb"

# Test counters
TESTS_PASSED=0
TESTS_FAILED=0

# Functions
print_header() {
    echo -e "\n${BLUE}=== $1 ===${NC}\n"
}

print_success() {
    echo -e "${GREEN}✓ $1${NC}"
    # Plain assignment rather than ((var++)), which returns a nonzero exit
    # status when the pre-increment value is 0.
    TESTS_PASSED=$((TESTS_PASSED + 1))
}

print_error() {
    echo -e "${RED}✗ $1${NC}"
    TESTS_FAILED=$((TESTS_FAILED + 1))
}

print_info() {
    echo -e "${YELLOW}ℹ $1${NC}"
}

wait_for_gcs() {
    print_info "Waiting for fake-gcs-server to be ready..."
    for i in {1..30}; do
        if curl -f -s "$GCS_ENDPOINT/b" > /dev/null 2>&1; then
            print_success "fake-gcs-server is ready"
            return 0
        fi
        sleep 1
    done
    print_error "fake-gcs-server failed to start"
    return 1
}
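
# fake-gcs-server implements the GCS JSON API, so the test bucket can be
# pre-created with plain curl; the trailing '|| true' keeps the call
# idempotent when the bucket already exists from a previous run.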

create_test_bucket() {
    print_info "Creating test bucket..."
    curl -X POST "$GCS_ENDPOINT/b?project=$PROJECT_ID" \
        -H "Content-Type: application/json" \
        -d "{\"name\": \"$BUCKET_NAME\"}" > /dev/null 2>&1 || true
    print_success "Test bucket created"
}
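
# To inspect the bucket manually against the emulator's JSON API:
#   curl "$GCS_ENDPOINT/b/$BUCKET_NAME"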

# Build dbbackup if needed
build_dbbackup() {
    print_header "Building dbbackup"
    if [ ! -f "./dbbackup" ]; then
        go build -o dbbackup .
        print_success "Built dbbackup binary"
    else
        print_info "Using existing dbbackup binary"
    fi
}

# Start services
start_services() {
    print_header "Starting GCS Emulator and Database Services"
    docker-compose -f docker-compose.gcs.yml up -d

    # Wait for services
    sleep 5
    wait_for_gcs
    create_test_bucket

    print_info "Waiting for PostgreSQL..."
    sleep 3

    print_info "Waiting for MySQL..."
    sleep 3

    print_success "All services started"
}

# Stop services
stop_services() {
    print_header "Stopping Services"
    docker-compose -f docker-compose.gcs.yml down
    print_success "Services stopped"
}

# Create test data in databases
create_test_data() {
    print_header "Creating Test Data"

    # PostgreSQL
    PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -d $POSTGRES_DB <<EOF
DROP TABLE IF EXISTS test_table;
CREATE TABLE test_table (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
INSERT INTO test_table (name) VALUES ('GCS Test 1'), ('GCS Test 2'), ('GCS Test 3');
EOF
    print_success "Created PostgreSQL test data"

    # MySQL
    mysql -h $MYSQL_HOST -P $MYSQL_PORT -u $MYSQL_USER -p$MYSQL_PASS $MYSQL_DB <<EOF
DROP TABLE IF EXISTS test_table;
CREATE TABLE test_table (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
INSERT INTO test_table (name) VALUES ('GCS Test 1'), ('GCS Test 2'), ('GCS Test 3');
EOF
    print_success "Created MySQL test data"
}

# Test 1: PostgreSQL backup to GCS
test_postgres_backup() {
    print_header "Test 1: PostgreSQL Backup to GCS"

    ./dbbackup backup postgres \
        --host $POSTGRES_HOST \
        --port $POSTGRES_PORT \
        --user $POSTGRES_USER \
        --password $POSTGRES_PASS \
        --database $POSTGRES_DB \
        --output ./backups/pg_gcs_test.sql \
        --cloud "gs://$BUCKET_NAME/postgres/backup1.sql?endpoint=$GCS_ENDPOINT"

    if [ $? -eq 0 ]; then
        print_success "PostgreSQL backup uploaded to GCS"
    else
        print_error "PostgreSQL backup failed"
        return 1
    fi
}

# Test 2: MySQL backup to GCS
test_mysql_backup() {
    print_header "Test 2: MySQL Backup to GCS"

    ./dbbackup backup mysql \
        --host $MYSQL_HOST \
        --port $MYSQL_PORT \
        --user $MYSQL_USER \
        --password $MYSQL_PASS \
        --database $MYSQL_DB \
        --output ./backups/mysql_gcs_test.sql \
        --cloud "gs://$BUCKET_NAME/mysql/backup1.sql?endpoint=$GCS_ENDPOINT"

    if [ $? -eq 0 ]; then
        print_success "MySQL backup uploaded to GCS"
    else
        print_error "MySQL backup failed"
        return 1
    fi
}

# Test 3: List backups in GCS
test_list_backups() {
    print_header "Test 3: List GCS Backups"

    ./dbbackup cloud list "gs://$BUCKET_NAME/postgres/?endpoint=$GCS_ENDPOINT"

    if [ $? -eq 0 ]; then
        print_success "Listed GCS backups"
    else
        print_error "Failed to list backups"
        return 1
    fi
}

# Test 4: Verify backup in GCS
test_verify_backup() {
    print_header "Test 4: Verify GCS Backup"

    ./dbbackup verify "gs://$BUCKET_NAME/postgres/backup1.sql?endpoint=$GCS_ENDPOINT"

    if [ $? -eq 0 ]; then
        print_success "Backup verification successful"
    else
        print_error "Backup verification failed"
        return 1
    fi
}

# Test 5: Restore from GCS
test_restore_from_gcs() {
    print_header "Test 5: Restore from GCS"

    # Drop and recreate database
    PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -d postgres <<EOF
DROP DATABASE IF EXISTS testdb_restored;
CREATE DATABASE testdb_restored;
EOF

    ./dbbackup restore postgres \
        --source "gs://$BUCKET_NAME/postgres/backup1.sql?endpoint=$GCS_ENDPOINT" \
        --host $POSTGRES_HOST \
        --port $POSTGRES_PORT \
        --user $POSTGRES_USER \
        --password $POSTGRES_PASS \
        --database testdb_restored

    if [ $? -eq 0 ]; then
        print_success "Restored from GCS backup"

        # Verify restored data
        COUNT=$(PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -p $POSTGRES_PORT -U $POSTGRES_USER -d testdb_restored -t -c "SELECT COUNT(*) FROM test_table;")
        if [ "$COUNT" -eq 3 ]; then
            print_success "Restored data verified (3 rows)"
        else
            print_error "Restored data incorrect (expected 3 rows, got $COUNT)"
        fi
    else
        print_error "Restore from GCS failed"
        return 1
    fi
}
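
# For a 200MB object the GCS backend presumably uses the resumable upload
# protocol, sending the file in chunks rather than as a single request.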

# Test 6: Large file upload (chunked upload)
test_large_file_upload() {
    print_header "Test 6: Large File Upload (Chunked)"

    # Create a large test file (200MB)
    print_info "Creating 200MB test file..."
    dd if=/dev/urandom of=./backups/large_test.dat bs=1M count=200 2>/dev/null

    print_info "Uploading large file to GCS..."
    ./dbbackup cloud upload \
        ./backups/large_test.dat \
        "gs://$BUCKET_NAME/large/large_test.dat?endpoint=$GCS_ENDPOINT"

    if [ $? -eq 0 ]; then
        print_success "Large file uploaded successfully (chunked)"

        # Verify file exists and has correct size
        print_info "Downloading large file..."
        ./dbbackup cloud download \
            "gs://$BUCKET_NAME/large/large_test.dat?endpoint=$GCS_ENDPOINT" \
            ./backups/large_test_downloaded.dat

        if [ $? -eq 0 ]; then
            # 'stat -f%z' is BSD/macOS; 'stat -c%s' is the GNU coreutils equivalent
            ORIGINAL_SIZE=$(stat -f%z ./backups/large_test.dat 2>/dev/null || stat -c%s ./backups/large_test.dat)
            DOWNLOADED_SIZE=$(stat -f%z ./backups/large_test_downloaded.dat 2>/dev/null || stat -c%s ./backups/large_test_downloaded.dat)

            if [ "$ORIGINAL_SIZE" -eq "$DOWNLOADED_SIZE" ]; then
                print_success "Downloaded file size matches original ($ORIGINAL_SIZE bytes)"
            else
                print_error "File size mismatch (original: $ORIGINAL_SIZE, downloaded: $DOWNLOADED_SIZE)"
            fi
        else
            print_error "Large file download failed"
        fi

        # Cleanup
        rm -f ./backups/large_test.dat ./backups/large_test_downloaded.dat
    else
        print_error "Large file upload failed"
        return 1
    fi
}

# Test 7: Delete from GCS
test_delete_backup() {
    print_header "Test 7: Delete Backup from GCS"

    ./dbbackup cloud delete "gs://$BUCKET_NAME/mysql/backup1.sql?endpoint=$GCS_ENDPOINT"

    if [ $? -eq 0 ]; then
        print_success "Deleted backup from GCS"

        # Verify deletion
        if ! ./dbbackup cloud list "gs://$BUCKET_NAME/mysql/?endpoint=$GCS_ENDPOINT" | grep -q "backup1.sql"; then
            print_success "Verified backup was deleted"
        else
            print_error "Backup still exists after deletion"
        fi
    else
        print_error "Failed to delete backup"
        return 1
    fi
}

# Test 8: Cleanup old backups
test_cleanup() {
    print_header "Test 8: Cleanup Old Backups"

    # Create multiple backups with different timestamps
    for i in {1..5}; do
        ./dbbackup backup postgres \
            --host $POSTGRES_HOST \
            --port $POSTGRES_PORT \
            --user $POSTGRES_USER \
            --password $POSTGRES_PASS \
            --database $POSTGRES_DB \
            --output "./backups/pg_cleanup_$i.sql" \
            --cloud "gs://$BUCKET_NAME/cleanup/backup_$i.sql?endpoint=$GCS_ENDPOINT"
        sleep 1
    done

    print_success "Created 5 test backups"

    # Cleanup, keeping only 2
    ./dbbackup cleanup "gs://$BUCKET_NAME/cleanup/?endpoint=$GCS_ENDPOINT" --keep 2

    if [ $? -eq 0 ]; then
        print_success "Cleanup completed"

        # Count remaining backups
        COUNT=$(./dbbackup cloud list "gs://$BUCKET_NAME/cleanup/?endpoint=$GCS_ENDPOINT" | grep -c "backup_")
        if [ "$COUNT" -le 2 ]; then
            print_success "Verified cleanup (kept 2 backups)"
        else
            print_error "Cleanup failed (expected at most 2 backups, found $COUNT)"
        fi
    else
        print_error "Cleanup failed"
        return 1
    fi
}

# Main test execution
main() {
    print_header "Google Cloud Storage (fake-gcs-server) Integration Tests"

    # Setup
    build_dbbackup
    mkdir -p ./backups   # ensure the local output directory exists
    start_services
    create_test_data

    # Run tests
    test_postgres_backup
    test_mysql_backup
    test_list_backups
    test_verify_backup
    test_restore_from_gcs
    test_large_file_upload
    test_delete_backup
    test_cleanup

    # Cleanup
    print_header "Cleanup"
    rm -rf ./backups

    # Summary
    print_header "Test Summary"
    echo -e "${GREEN}Passed: $TESTS_PASSED${NC}"
    echo -e "${RED}Failed: $TESTS_FAILED${NC}"

    if [ $TESTS_FAILED -eq 0 ]; then
        print_success "All tests passed!"
        stop_services
        exit 0
    else
        print_error "Some tests failed"
        print_info "Leaving services running for debugging"
        print_info "Run 'docker-compose -f docker-compose.gcs.yml down' to stop services"
        exit 1
    fi
}

# Run main
main