From e7f0a9f5eba8599195f0d226738f8563b9840821 Mon Sep 17 00:00:00 2001 From: Alexander Renz Date: Mon, 5 Jan 2026 12:41:18 +0100 Subject: [PATCH] docs: update documentation to match current CLI syntax - AZURE.md, GCS.md: Replace 'backup postgres' with 'backup single' - AZURE.md, GCS.md: Replace 'restore postgres --source' with proper syntax - AZURE.md, GCS.md: Remove non-existent --output, --source flags - VEEAM_ALTERNATIVE.md: Fix command examples and broken link - CONTRIBUTING.md: Remove RELEASE_NOTES step from release process - CHANGELOG.md: Remove reference to deleted file - Remove RELEASE_NOTES_v3.1.md (content is in CHANGELOG.md) --- AZURE.md | 74 +++----- CHANGELOG.md | 1 - CONTRIBUTING.md | 11 +- GCS.md | 80 +++------ RELEASE_NOTES_v3.1.md | 396 ------------------------------------------ VEEAM_ALTERNATIVE.md | 15 +- 6 files changed, 61 insertions(+), 516 deletions(-) delete mode 100644 RELEASE_NOTES_v3.1.md diff --git a/AZURE.md b/AZURE.md index fa91856..2b43031 100644 --- a/AZURE.md +++ b/AZURE.md @@ -28,21 +28,16 @@ This guide covers using **Azure Blob Storage** with `dbbackup` for secure, scala ```bash # Backup PostgreSQL to Azure -dbbackup backup postgres \ - --host localhost \ - --database mydb \ - --output backup.sql \ - --cloud "azure://mycontainer/backups/db.sql?account=myaccount&key=ACCOUNT_KEY" +dbbackup backup single mydb \ + --cloud "azure://mycontainer/backups/?account=myaccount&key=ACCOUNT_KEY" ``` ### 3. 
Restore from Azure ```bash -# Restore from Azure backup -dbbackup restore postgres \ - --source "azure://mycontainer/backups/db.sql?account=myaccount&key=ACCOUNT_KEY" \ - --host localhost \ - --database mydb_restored +# Download backup from Azure and restore +dbbackup cloud download "azure://mycontainer/backups/mydb.dump.gz?account=myaccount&key=ACCOUNT_KEY" ./mydb.dump.gz +dbbackup restore single ./mydb.dump.gz --target mydb_restored --confirm ``` ## URI Syntax @@ -99,7 +94,7 @@ export AZURE_STORAGE_ACCOUNT="myaccount" export AZURE_STORAGE_KEY="YOUR_ACCOUNT_KEY" # Use simplified URI (credentials from environment) -dbbackup backup postgres --cloud "azure://container/path/backup.sql" +dbbackup backup single mydb --cloud "azure://container/path/" ``` ### Method 3: Connection String @@ -109,7 +104,7 @@ Use Azure connection string: ```bash export AZURE_STORAGE_CONNECTION_STRING="DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=YOUR_KEY;EndpointSuffix=core.windows.net" -dbbackup backup postgres --cloud "azure://container/path/backup.sql" +dbbackup backup single mydb --cloud "azure://container/path/" ``` ### Getting Your Account Key @@ -196,11 +191,8 @@ Configure automatic tier transitions: ```bash # PostgreSQL backup with automatic Azure upload -dbbackup backup postgres \ - --host localhost \ - --database production_db \ - --output /backups/db.sql \ - --cloud "azure://prod-backups/postgres/$(date +%Y%m%d_%H%M%S).sql?account=myaccount&key=KEY" \ +dbbackup backup single production_db \ + --cloud "azure://prod-backups/postgres/?account=myaccount&key=KEY" \ --compression 6 ``` @@ -208,10 +200,7 @@ dbbackup backup postgres \ ```bash # Backup entire PostgreSQL cluster to Azure -dbbackup backup postgres \ - --host localhost \ - --all-databases \ - --output-dir /backups \ +dbbackup backup cluster \ --cloud "azure://prod-backups/postgres/cluster/?account=myaccount&key=KEY" ``` @@ -257,13 +246,9 @@ dbbackup cleanup 
"azure://prod-backups/postgres/?account=myaccount&key=KEY" --ke #!/bin/bash # Azure backup script (run via cron) -DATE=$(date +%Y%m%d_%H%M%S) -AZURE_URI="azure://prod-backups/postgres/${DATE}.sql?account=myaccount&key=${AZURE_STORAGE_KEY}" +AZURE_URI="azure://prod-backups/postgres/?account=myaccount&key=${AZURE_STORAGE_KEY}" -dbbackup backup postgres \ - --host localhost \ - --database production_db \ - --output /tmp/backup.sql \ +dbbackup backup single production_db \ --cloud "${AZURE_URI}" \ --compression 9 @@ -289,35 +274,25 @@ For large files (>256MB), dbbackup automatically uses Azure Block Blob staging: ```bash # Large database backup (automatically uses block blob) -dbbackup backup postgres \ - --host localhost \ - --database huge_db \ - --output /backups/huge.sql \ - --cloud "azure://backups/huge.sql?account=myaccount&key=KEY" +dbbackup backup single huge_db \ + --cloud "azure://backups/?account=myaccount&key=KEY" ``` ### Progress Tracking ```bash # Backup with progress display -dbbackup backup postgres \ - --host localhost \ - --database mydb \ - --output backup.sql \ - --cloud "azure://backups/backup.sql?account=myaccount&key=KEY" \ - --progress +dbbackup backup single mydb \ + --cloud "azure://backups/?account=myaccount&key=KEY" ``` ### Concurrent Operations ```bash -# Backup multiple databases in parallel -dbbackup backup postgres \ - --host localhost \ - --all-databases \ - --output-dir /backups \ +# Backup cluster with parallel jobs +dbbackup backup cluster \ --cloud "azure://backups/cluster/?account=myaccount&key=KEY" \ - --parallelism 4 + --jobs 4 ``` ### Custom Metadata @@ -365,11 +340,8 @@ Endpoint: http://localhost:10000/devstoreaccount1 ```bash # Backup to Azurite -dbbackup backup postgres \ - --host localhost \ - --database testdb \ - --output test.sql \ - --cloud "azure://test-backups/test.sql?endpoint=http://localhost:10000&account=devstoreaccount1&key=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==" 
+dbbackup backup single testdb \ + --cloud "azure://test-backups/?endpoint=http://localhost:10000&account=devstoreaccount1&key=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==" ``` ### Run Integration Tests @@ -492,8 +464,8 @@ Tests include: Enable debug mode: ```bash -dbbackup backup postgres \ - --cloud "azure://container/backup.sql?account=myaccount&key=KEY" \ +dbbackup backup single mydb \ + --cloud "azure://container/?account=myaccount&key=KEY" \ --debug ``` diff --git a/CHANGELOG.md b/CHANGELOG.md index bae9e0d..5d7a0f7 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -163,7 +163,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 ### Documentation - Added comprehensive PITR.md guide (complete PITR documentation) - Updated README.md with PITR section (200+ lines) -- Added RELEASE_NOTES_v3.1.md (full feature list) - Updated CHANGELOG.md with v3.1.0 details - Added NOTICE file for Apache License attribution - Created comprehensive test suite (tests/pitr_complete_test.go - 700+ lines) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 5583053..a4100a9 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -274,12 +274,11 @@ Fixes #56 1. Update version in `main.go` 2. Update `CHANGELOG.md` -3. Create release notes (`RELEASE_NOTES_vX.Y.Z.md`) -4. Commit: `git commit -m "Release vX.Y.Z"` -5. Tag: `git tag -a vX.Y.Z -m "Release vX.Y.Z"` -6. Push: `git push origin main vX.Y.Z` -7. Build binaries: `./build_all.sh` -8. Create GitHub Release with binaries +3. Commit: `git commit -m "Release vX.Y.Z"` +4. Tag: `git tag -a vX.Y.Z -m "Release vX.Y.Z"` +5. Push: `git push origin main vX.Y.Z` +6. Build binaries: `./build_all.sh` +7. Create GitHub Release with binaries ## Questions? 
diff --git a/GCS.md b/GCS.md index 16503c5..cc8f9f9 100644 --- a/GCS.md +++ b/GCS.md @@ -28,21 +28,16 @@ This guide covers using **Google Cloud Storage (GCS)** with `dbbackup` for secur ```bash # Backup PostgreSQL to GCS (using ADC) -dbbackup backup postgres \ - --host localhost \ - --database mydb \ - --output backup.sql \ - --cloud "gs://mybucket/backups/db.sql" +dbbackup backup single mydb \ + --cloud "gs://mybucket/backups/" ``` ### 3. Restore from GCS ```bash -# Restore from GCS backup -dbbackup restore postgres \ - --source "gs://mybucket/backups/db.sql" \ - --host localhost \ - --database mydb_restored +# Download backup from GCS and restore +dbbackup cloud download "gs://mybucket/backups/mydb.dump.gz" ./mydb.dump.gz +dbbackup restore single ./mydb.dump.gz --target mydb_restored --confirm ``` ## URI Syntax @@ -107,7 +102,7 @@ gcloud auth application-default login gcloud auth activate-service-account --key-file=/path/to/key.json # Use simplified URI (credentials from environment) -dbbackup backup postgres --cloud "gs://mybucket/backups/backup.sql" +dbbackup backup single mydb --cloud "gs://mybucket/backups/" ``` ### Method 2: Service Account JSON @@ -121,14 +116,14 @@ Download service account key from GCP Console: **Use in URI:** ```bash -dbbackup backup postgres \ - --cloud "gs://mybucket/backup.sql?credentials=/path/to/service-account.json" +dbbackup backup single mydb \ + --cloud "gs://mybucket/?credentials=/path/to/service-account.json" ``` **Or via environment:** ```bash export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json" -dbbackup backup postgres --cloud "gs://mybucket/backup.sql" +dbbackup backup single mydb --cloud "gs://mybucket/" ``` ### Method 3: Workload Identity (GKE) @@ -147,7 +142,7 @@ metadata: Then use ADC in your pod: ```bash -dbbackup backup postgres --cloud "gs://mybucket/backup.sql" +dbbackup backup single mydb --cloud "gs://mybucket/" ``` ### Required IAM Permissions @@ -250,11 +245,8 @@ gsutil mb -l eu gs://mybucket/ 
```bash # PostgreSQL backup with automatic GCS upload -dbbackup backup postgres \ - --host localhost \ - --database production_db \ - --output /backups/db.sql \ - --cloud "gs://prod-backups/postgres/$(date +%Y%m%d_%H%M%S).sql" \ +dbbackup backup single production_db \ + --cloud "gs://prod-backups/postgres/" \ --compression 6 ``` @@ -262,10 +254,7 @@ dbbackup backup postgres \ ```bash # Backup entire PostgreSQL cluster to GCS -dbbackup backup postgres \ - --host localhost \ - --all-databases \ - --output-dir /backups \ +dbbackup backup cluster \ --cloud "gs://prod-backups/postgres/cluster/" ``` @@ -314,13 +303,9 @@ dbbackup cleanup "gs://prod-backups/postgres/" --keep 7 #!/bin/bash # GCS backup script (run via cron) -DATE=$(date +%Y%m%d_%H%M%S) -GCS_URI="gs://prod-backups/postgres/${DATE}.sql" +GCS_URI="gs://prod-backups/postgres/" -dbbackup backup postgres \ - --host localhost \ - --database production_db \ - --output /tmp/backup.sql \ +dbbackup backup single production_db \ --cloud "${GCS_URI}" \ --compression 9 @@ -360,35 +345,25 @@ For large files, dbbackup automatically uses GCS chunked upload: ```bash # Large database backup (automatically uses chunked upload) -dbbackup backup postgres \ - --host localhost \ - --database huge_db \ - --output /backups/huge.sql \ - --cloud "gs://backups/huge.sql" +dbbackup backup single huge_db \ + --cloud "gs://backups/" ``` ### Progress Tracking ```bash # Backup with progress display -dbbackup backup postgres \ - --host localhost \ - --database mydb \ - --output backup.sql \ - --cloud "gs://backups/backup.sql" \ - --progress +dbbackup backup single mydb \ + --cloud "gs://backups/" ``` ### Concurrent Operations ```bash -# Backup multiple databases in parallel -dbbackup backup postgres \ - --host localhost \ - --all-databases \ - --output-dir /backups \ +# Backup cluster with parallel jobs +dbbackup backup cluster \ --cloud "gs://backups/cluster/" \ - --parallelism 4 + --jobs 4 ``` ### Custom Metadata @@ -460,11 +435,8 @@ curl 
-X POST "http://localhost:4443/storage/v1/b?project=test-project" \ ```bash # Backup to fake-gcs-server -dbbackup backup postgres \ - --host localhost \ - --database testdb \ - --output test.sql \ - --cloud "gs://test-backups/test.sql?endpoint=http://localhost:4443/storage/v1" +dbbackup backup single testdb \ + --cloud "gs://test-backups/?endpoint=http://localhost:4443/storage/v1" ``` ### Run Integration Tests @@ -593,8 +565,8 @@ Tests include: Enable debug mode: ```bash -dbbackup backup postgres \ - --cloud "gs://bucket/backup.sql" \ +dbbackup backup single mydb \ + --cloud "gs://bucket/" \ --debug ``` diff --git a/RELEASE_NOTES_v3.1.md b/RELEASE_NOTES_v3.1.md deleted file mode 100644 index f8cff32..0000000 --- a/RELEASE_NOTES_v3.1.md +++ /dev/null @@ -1,396 +0,0 @@ -# dbbackup v3.1.0 - Enterprise Backup Solution - -**Released:** November 26, 2025 - ---- - -## 🎉 Major Features - -### Point-in-Time Recovery (PITR) -Complete PostgreSQL Point-in-Time Recovery implementation: - -- **WAL Archiving**: Continuous archiving of Write-Ahead Log files -- **WAL Monitoring**: Real-time monitoring of archive status and statistics -- **Timeline Management**: Track and visualize PostgreSQL timeline branching -- **Recovery Targets**: Restore to any point in time: - - Specific timestamp (`--target-time "2024-11-26 12:00:00"`) - - Transaction ID (`--target-xid 1000000`) - - Log Sequence Number (`--target-lsn "0/3000000"`) - - Named restore point (`--target-name before_migration`) - - Earliest consistent point (`--target-immediate`) -- **Version Support**: Both PostgreSQL 12+ (modern) and legacy formats -- **Recovery Actions**: Promote to primary, pause for inspection, or shutdown -- **Comprehensive Testing**: 700+ lines of tests with 100% pass rate - -**New Commands:** -- `pitr enable/disable/status` - PITR configuration management -- `wal archive/list/cleanup/timeline` - WAL archive operations -- `restore pitr` - Point-in-time recovery with multiple target types - -### Cloud 
Storage Integration -Multi-cloud backend support with streaming efficiency: - -- **Amazon S3 / MinIO**: Full S3-compatible storage support -- **Azure Blob Storage**: Native Azure integration -- **Google Cloud Storage**: GCS backend support -- **Streaming Operations**: Memory-efficient uploads/downloads -- **Cloud-Native**: Direct backup to cloud, no local disk required - -**Features:** -- Automatic multipart uploads for large files -- Resumable downloads with retry logic -- Cloud-side encryption support -- Metadata preservation in cloud storage - -### Incremental Backups -Space-efficient backup strategies: - -- **PostgreSQL**: File-level incremental backups - - Track changed files since base backup - - Automatic base backup detection - - Efficient restore chain resolution - -- **MySQL/MariaDB**: Binary log incremental backups - - Capture changes via binlog - - Automatic log rotation handling - - Point-in-time restore capability - -**Benefits:** -- 70-90% reduction in backup size -- Faster backup completion times -- Automated backup chain management -- Intelligent dependency tracking - -### AES-256-GCM Encryption -Military-grade encryption for data protection: - -- **Algorithm**: AES-256-GCM authenticated encryption -- **Key Derivation**: PBKDF2-SHA256 with 600,000 iterations (OWASP 2023) -- **Streaming**: Memory-efficient for large backups -- **Key Sources**: File (raw/base64), environment variable, or passphrase -- **Auto-Detection**: Restore automatically detects encrypted backups -- **Tamper Protection**: Authenticated encryption prevents tampering - -**Security:** -- Unique nonce per encryption (no key reuse) -- Cryptographically secure random generation -- 56-byte header with algorithm metadata -- ~1-2 GB/s encryption throughput - -### Foundation Features -Production-ready backup operations: - -- **SHA-256 Verification**: Cryptographic backup integrity checking -- **Intelligent Retention**: Day-based policies with minimum backup guarantees -- **Safe Cleanup**: 
Dry-run mode, safety checks, detailed reporting -- **Multi-Database**: PostgreSQL, MySQL, MariaDB support -- **Interactive TUI**: Beautiful terminal UI with progress tracking -- **CLI Mode**: Full command-line interface for automation -- **Cross-Platform**: Linux, macOS, FreeBSD, OpenBSD, NetBSD -- **Docker Support**: Official container images -- **100% Test Coverage**: Comprehensive test suite - ---- - -## ✅ Production Validated - -**Real-World Deployment:** -- ✅ 2 production hosts in production environment -- ✅ 8 databases backed up nightly -- ✅ 30-day retention with minimum 5 backups -- ✅ ~10MB/night backup volume -- ✅ Scheduled at 02:09 and 02:25 CET -- ✅ **Resolved 4-day backup failure immediately** - -**User Feedback (Ansible Claude):** -> "cleanup command is SO gut, dass es alle verwenden sollten" - -> "--dry-run feature: chef's kiss!" 💋 - -> "Modern tooling in place, pragmatic and maintainable" - -> "CLI design: Professional & polished" - -**Impact:** -- Fixed failing backup infrastructure on first deployment -- Stable operation in production environment -- Positive feedback from DevOps team -- Validation of feature set and UX design - ---- - -## 📦 Installation - -### Download Pre-compiled Binary - -**Linux (x86_64):** -```bash -wget https://git.uuxo.net/PlusOne/dbbackup/releases/download/v3.1.0/dbbackup-linux-amd64 -chmod +x dbbackup-linux-amd64 -sudo mv dbbackup-linux-amd64 /usr/local/bin/dbbackup -``` - -**Linux (ARM64):** -```bash -wget https://git.uuxo.net/PlusOne/dbbackup/releases/download/v3.1.0/dbbackup-linux-arm64 -chmod +x dbbackup-linux-arm64 -sudo mv dbbackup-linux-arm64 /usr/local/bin/dbbackup -``` - -**macOS (Intel):** -```bash -wget https://git.uuxo.net/PlusOne/dbbackup/releases/download/v3.1.0/dbbackup-darwin-amd64 -chmod +x dbbackup-darwin-amd64 -sudo mv dbbackup-darwin-amd64 /usr/local/bin/dbbackup -``` - -**macOS (Apple Silicon):** -```bash -wget https://git.uuxo.net/PlusOne/dbbackup/releases/download/v3.1.0/dbbackup-darwin-arm64 -chmod 
+x dbbackup-darwin-arm64 -sudo mv dbbackup-darwin-arm64 /usr/local/bin/dbbackup -``` - -### Build from Source - -```bash -git clone https://git.uuxo.net/PlusOne/dbbackup.git -cd dbbackup -go build -o dbbackup -sudo mv dbbackup /usr/local/bin/ -``` - -### Docker - -```bash -docker pull git.uuxo.net/PlusOne/dbbackup:v3.1.0 -docker pull git.uuxo.net/PlusOne/dbbackup:latest -``` - ---- - -## 🚀 Quick Start Examples - -### Basic Backup -```bash -# Simple database backup -dbbackup backup single mydb - -# Backup with verification -dbbackup backup single mydb -dbbackup verify mydb_backup.sql.gz -``` - -### Cloud Backup -```bash -# Backup to S3 -dbbackup backup single mydb --cloud s3://my-bucket/backups/ - -# Backup to Azure -dbbackup backup single mydb --cloud azure://container/backups/ - -# Backup to GCS -dbbackup backup single mydb --cloud gs://my-bucket/backups/ -``` - -### Encrypted Backup -```bash -# Generate encryption key -head -c 32 /dev/urandom | base64 > encryption.key - -# Encrypted backup -dbbackup backup single mydb --encrypt --encryption-key-file encryption.key - -# Restore (automatic decryption) -dbbackup restore single mydb_backup.sql.gz --encryption-key-file encryption.key -``` - -### Incremental Backup -```bash -# Create base backup -dbbackup backup single mydb --backup-type full - -# Create incremental backup -dbbackup backup single mydb --backup-type incremental \ - --base-backup mydb_base_20241126_120000.tar.gz - -# Restore (automatic chain resolution) -dbbackup restore single mydb_incr_20241126_150000.tar.gz -``` - -### Point-in-Time Recovery -```bash -# Enable PITR -dbbackup pitr enable --archive-dir /backups/wal_archive - -# Take base backup -pg_basebackup -D /backups/base.tar.gz -Ft -z -P - -# Perform PITR -dbbackup restore pitr \ - --base-backup /backups/base.tar.gz \ - --wal-archive /backups/wal_archive \ - --target-time "2024-11-26 12:00:00" \ - --target-dir /var/lib/postgresql/14/restored - -# Monitor WAL archiving -dbbackup pitr status 
-dbbackup wal list -``` - -### Retention & Cleanup -```bash -# Cleanup old backups (dry-run first!) -dbbackup cleanup --retention-days 30 --min-backups 5 --dry-run - -# Actually cleanup -dbbackup cleanup --retention-days 30 --min-backups 5 -``` - -### Cluster Operations -```bash -# Backup entire cluster -dbbackup backup cluster - -# Restore entire cluster -dbbackup restore cluster --backups /path/to/backups/ --confirm -``` - ---- - -## 🔮 What's Next (v3.2) - -Based on production feedback from Ansible Claude: - -### High Priority -1. **Config File Support** (2-3h) - - Persist flags like `--allow-root` in `.dbbackup.conf` - - Per-directory configuration management - - Better automation support - -2. **Socket Auth Auto-Detection** (1-2h) - - Auto-detect Unix socket authentication - - Skip password prompts for socket connections - - Improved UX for root users - -### Medium Priority -3. **Inline Backup Verification** (2-3h) - - Automatic verification after backup - - Immediate corruption detection - - Better workflow integration - -4. **Progress Indicators** (4-6h) - - Progress bars for mysqldump operations - - Real-time backup size tracking - - ETA for large backups - -### Additional Features -5. 
**Ansible Module** (4-6h) - - Native Ansible integration - - Declarative backup configuration - - DevOps automation support - ---- - -## 📊 Performance Metrics - -**Backup Performance:** -- PostgreSQL: 50-150 MB/s (network dependent) -- MySQL: 30-100 MB/s (with compression) -- Encryption: ~1-2 GB/s (streaming) -- Compression: 70-80% size reduction (typical) - -**PITR Performance:** -- WAL archiving: 100-200 MB/s -- WAL encryption: ~1-2 GB/s -- Recovery replay: 10-100 MB/s (disk I/O dependent) - -**Resource Usage:** -- Memory: ~1GB constant (streaming architecture) -- CPU: 1-4 cores (configurable) -- Disk I/O: Streaming (no intermediate files) - ---- - -## 🏗️ Architecture Highlights - -**Split-Brain Development:** -- Human architects system design -- AI implements features and tests -- Micro-task decomposition (1-2h phases) -- Progressive enhancement approach -- **Result:** 52% faster development (5.75h vs 12h planned) - -**Key Innovations:** -- Streaming architecture for constant memory usage -- Interface-first design for clean modularity -- Comprehensive test coverage (700+ test lines) -- Production validation in parallel with development - ---- - -## 📄 Documentation - -**Core Documentation:** -- [README.md](README.md) - Complete feature overview and setup -- [PITR.md](PITR.md) - Comprehensive PITR guide -- [DOCKER.md](DOCKER.md) - Docker usage and deployment -- [CHANGELOG.md](CHANGELOG.md) - Detailed version history - -**Getting Started:** -- [QUICKRUN.md](QUICKRUN.MD) - Quick start guide -- [PROGRESS_IMPLEMENTATION.md](PROGRESS_IMPLEMENTATION.md) - Progress tracking - ---- - -## 📜 License - -Apache License 2.0 - -Copyright 2025 dbbackup Project - -Licensed under the Apache License, Version 2.0. See [LICENSE](LICENSE) for details. 
- ---- - -## 🙏 Credits - -**Development:** -- Built using Multi-Claude collaboration architecture -- Split-brain development pattern (human architecture + AI implementation) -- 5.75 hours intensive development (52% time savings) - -**Production Validation:** -- Deployed in production environments -- Real-world testing and feedback -- DevOps validation and feature requests - -**Technologies:** -- Go 1.21+ -- PostgreSQL 9.5-17 -- MySQL/MariaDB 5.7+ -- AWS SDK, Azure SDK, Google Cloud SDK -- Cobra CLI framework - ---- - -## 🐛 Known Issues - -None reported in production deployment. - -If you encounter issues, please report them at: -https://git.uuxo.net/PlusOne/dbbackup/issues - ---- - -## 📞 Support - -**Documentation:** See [README.md](README.md) and [PITR.md](PITR.md) -**Issues:** https://git.uuxo.net/PlusOne/dbbackup/issues -**Repository:** https://git.uuxo.net/PlusOne/dbbackup - ---- - -**Thank you for using dbbackup!** 🎉 - -*Professional database backup and restore utility for PostgreSQL, MySQL, and MariaDB.* diff --git a/VEEAM_ALTERNATIVE.md b/VEEAM_ALTERNATIVE.md index b27438d..783db16 100644 --- a/VEEAM_ALTERNATIVE.md +++ b/VEEAM_ALTERNATIVE.md @@ -43,7 +43,7 @@ What are you actually getting? ```bash # Physical backup at InnoDB page level # No XtraBackup. No external tools. Pure Go. 
-dbbackup backup --engine=clone --output=s3://bucket/backup +dbbackup backup single mydb --db-type mysql --cloud s3://bucket/backups/ ``` ### Filesystem Snapshots @@ -78,10 +78,10 @@ dbbackup backup --engine=streaming --parallel-workers=8 **Day 1**: Run dbbackup alongside existing solution ```bash # Test backup -dbbackup backup --database=mydb --output=s3://test-bucket/ +dbbackup backup single mydb --cloud s3://test-bucket/ # Verify integrity -dbbackup verify s3://test-bucket/backup.sql.gz.enc +dbbackup verify s3://test-bucket/mydb_20260115.dump.gz ``` **Week 1**: Compare backup times, storage costs, recovery speed @@ -112,10 +112,9 @@ curl -LO https://github.com/UUXO/dbbackup/releases/latest/download/dbbackup_linu chmod +x dbbackup_linux_amd64 # Your first backup -./dbbackup_linux_amd64 backup \ - --database=production \ - --engine=auto \ - --output=s3://my-backups/$(date +%Y%m%d)/ +./dbbackup_linux_amd64 backup single production \ + --db-type mysql \ + --cloud s3://my-backups/ ``` ## The Bottom Line @@ -131,4 +130,4 @@ Every dollar you spend on backup licensing is a dollar not spent on: *Apache 2.0 Licensed. Free forever. No sales calls required.* -[GitHub](https://github.com/UUXO/dbbackup) | [Documentation](https://github.com/UUXO/dbbackup#readme) | [Release Notes](RELEASE_NOTES_v3.2.md) +[GitHub](https://github.com/UUXO/dbbackup) | [Documentation](https://github.com/UUXO/dbbackup#readme) | [Changelog](CHANGELOG.md)
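
A quick way to check that no other markdown doc in the tree still uses the removed spellings this patch targets (`backup postgres`, `restore postgres --source`). This is a sketch for reviewers, not part of the patch itself; it assumes a POSIX shell with `grep -r`/`--include` support (GNU or BSD grep) and that it is run from the repository root:

```shell
# Exit non-zero if any markdown file still references the old CLI syntax
# that this patch removes from AZURE.md, GCS.md, and VEEAM_ALTERNATIVE.md.
if grep -rnE 'backup postgres|restore postgres --source' --include='*.md' . ; then
  echo 'stale CLI syntax found' >&2
  exit 1
fi
echo 'docs clean'
```

If `grep` finds nothing it exits non-zero, the `if` body is skipped, and the script reports `docs clean`; any hit prints the offending file and line before failing.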