docs: simplify README - remove marketing tone

2025-11-06 08:04:23 +00:00
parent a5560fd6b1
commit 26ad1dc130

README.md

# dbbackup

Database backup utility for PostgreSQL and MySQL with support for large databases.
## Recent Changes

### Large Database Handling
- Streaming compression reduces memory usage by ~90%
- Native pgx v5 driver reduces memory by ~48% compared to lib/pq
- Automatic format selection based on database size
- Per-database timeout configuration (default: 240 minutes)
- Parallel compression support via pigz when available
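pigz is optional; when it is missing, the tool falls back to standard single-threaded compression. A quick check-and-install sketch (package names assume Debian- or RHEL-family systems):

```bash
# dbbackup auto-detects pigz at runtime; install it to enable parallel compression
command -v pigz >/dev/null || sudo apt-get install pigz   # Debian/Ubuntu
# or: sudo yum install pigz                               # RHEL/CentOS
```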
### Memory Usage

| Database Size | Memory Usage |
|---------------|--------------|
| 10GB          | ~850MB       |
| 25GB          | ~920MB       |
| 50GB          | ~940MB       |
| 100GB+        | <1GB         |
### Progress Tracking

- Real-time progress indicators
- Step-by-step operation tracking
- Structured logging with timestamps
- Operation history
## Features

- PostgreSQL and MySQL support
- Single database, sample, and cluster backup modes
- CPU detection and parallel job optimization
- Interactive terminal interface
- Cross-platform binaries (Linux, macOS, Windows, BSD)
- SSL/TLS support
- Configurable compression levels
## Installation

### Pre-compiled Binaries
Download the binary for your platform:
```bash
# Linux (Intel/AMD)
curl -L https://git.uuxo.net/uuxo/dbbackup/raw/branch/main/bin/dbbackup_linux_amd64 -o dbbackup
chmod +x dbbackup
sudo mv dbbackup /usr/local/bin/dbbackup
# macOS (Intel)
curl -L https://git.uuxo.net/uuxo/dbbackup/raw/branch/main/bin/dbbackup_darwin_amd64 -o dbbackup
chmod +x dbbackup
sudo mv dbbackup /usr/local/bin/dbbackup
# macOS (Apple Silicon)
curl -L https://git.uuxo.net/uuxo/dbbackup/raw/branch/main/bin/dbbackup_darwin_arm64 -o dbbackup
chmod +x dbbackup
sudo mv dbbackup /usr/local/bin/dbbackup
# Windows (download and rename)
curl -L https://git.uuxo.net/uuxo/dbbackup/raw/branch/main/bin/dbbackup_windows_amd64.exe -o dbbackup.exe
# Use directly or move into PATH
```
### Build from Source
```bash
git clone https://git.uuxo.net/uuxo/dbbackup.git
cd dbbackup
go build -o dbbackup main.go
```
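A single additional target can also be cross-compiled directly with Go (a sketch; valid `GOOS`/`GOARCH` pairs are listed by `go tool dist list`):

```bash
# Build a Linux arm64 binary from any host
GOOS=linux GOARCH=arm64 go build -o dbbackup_linux_arm64 main.go
```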
To build binaries for all supported platforms at once, run `./build_all.sh`.

## Usage

### Interactive Mode
```bash
# Start interactive interface with menu system
dbbackup interactive
# Alternative aliases (all equivalent)
dbbackup menu
dbbackup ui
```
Interactive mode provides menu navigation with arrow keys and automatic status updates.

> Use the left/right arrow keys (or press `t`) to switch the target engine between PostgreSQL and MySQL/MariaDB before launching an operation.
### Command Line

```bash
# Single database backup
dbbackup backup single myapp_db

# Sample backup (10% of data)
dbbackup backup sample myapp_db --sample-ratio 10

# Full cluster backup (PostgreSQL)
dbbackup backup cluster

# With custom settings
dbbackup backup single myapp_db \
  --host db.example.com \
  --port 5432 \
  --user backup_user \
  --ssl-mode require
```
### System Commands

```bash
# Check connection status
dbbackup status

# Run preflight checks
dbbackup preflight

# List databases and backups
dbbackup list

# Show CPU information
dbbackup cpu
```
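These commands compose well in scripts; a minimal sketch that runs a backup only when the preflight checks pass:

```bash
# Abort unless preflight succeeds
dbbackup preflight && dbbackup backup cluster
```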
## Configuration

### Command Line Flags
| Flag | Description | Default |
|------|-------------|---------|
| `--host` | Database host | `localhost` |
| `--port` | Database port | `5432` (PostgreSQL), `3306` (MySQL) |
| `--user` | Database user | `postgres` |
| `--database` | Database name | `postgres` |
| `-d`, `--db-type` | Database type | `postgres` |
| `--ssl-mode` | SSL mode | `prefer` |
| `--jobs` | Parallel jobs | Auto-detected |
| `--dump-jobs` | Parallel dump jobs | Auto-detected |
| `--compression` | Compression level (0-9) | `6` |
| `--backup-dir` | Backup directory | `/var/lib/pgsql/db_backups` |
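Flags combine freely; an illustrative invocation (host and directory are placeholders):

```bash
dbbackup backup single myapp_db \
  --host db.example.com \
  --user backup_user \
  --compression 9 \
  --backup-dir /mnt/backups
```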
### PostgreSQL

Use the `postgres` system user for local connections:

```bash
sudo -u postgres dbbackup backup cluster
```

SSL modes: `disable`, `prefer`, `require`, `verify-ca`, `verify-full`
Add `--insecure` to force `sslmode=disable` for local clusters or CI containers without certificates; leave `--ssl-mode` unset for local socket connections.

Cluster operations (backup/restore/verify) are PostgreSQL-only.
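A TLS-enforced backup, sketched from the flags above (host is a placeholder):

```bash
# Require a verified server certificate
dbbackup backup single myapp_db \
  --host db.example.com \
  --ssl-mode verify-full
```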
### MySQL / MariaDB

Set `--db-type mysql` or `--db-type mariadb`:
```bash
dbbackup backup single mydb \
--db-type mysql \
--host 127.0.0.1 \
--user backup_user \
--password ****
```
The `--password` flag passes credentials to both `mysql` and `mysqldump`. Use `--insecure` for servers without TLS; when TLS is required, choose `--ssl-mode require|verify-ca|verify-identity`.
MySQL backups are created as `.sql.gz` files.
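For non-interactive runs, `MYSQL_PWD` can replace `--password` (a sketch; database and user names are illustrative):

```bash
export MYSQL_PWD=secret   # read by both mysql and mysqldump
dbbackup backup single backup_demo -d mysql --host 127.0.0.1 --user backup_user
```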
### Environment Variables
```bash
# Database
export PG_HOST=localhost
export PG_PORT=5432
export PG_USER=postgres
export PGPASSWORD=secret
export DB_TYPE=postgres
export MYSQL_HOST=localhost
export MYSQL_PORT=3306
export MYSQL_USER=root
export MYSQL_PWD=secret
export MYSQL_DATABASE=myapp_db

# Backup
export BACKUP_DIR=/var/backups
export COMPRESS_LEVEL=6
export CLUSTER_TIMEOUT_MIN=240  # Cluster timeout in minutes

# Swap file management (Linux + root only)
export AUTO_SWAP=false
export SWAP_FILE_SIZE_GB=8
export SWAP_FILE_PATH=/tmp/dbbackup_swap
```
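These variables make unattended runs straightforward; a minimal cron wrapper sketch (path and settings are illustrative):

```bash
#!/bin/sh
# /etc/cron.daily/dbbackup - nightly cluster backup
export PG_HOST=localhost
export PG_USER=postgres
export BACKUP_DIR=/var/backups
export COMPRESS_LEVEL=6
dbbackup preflight && dbbackup backup cluster
```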
## Architecture
### Package Structure
```
dbbackup/
├── cmd/            # CLI commands
├── internal/
│   ├── config/     # Configuration
│   ├── database/   # Database drivers
│   ├── backup/     # Backup engine
│   ├── cpu/        # CPU detection
│   ├── logger/     # Logging
│   ├── progress/   # Progress indicators
│   └── tui/        # Terminal UI
└── bin/            # Binaries
```
### Supported Platforms
Linux (amd64, arm64, armv7), macOS (amd64, arm64), Windows (amd64, arm64), FreeBSD (amd64), OpenBSD (amd64), NetBSD (amd64)
## Performance

### CPU Detection

The tool detects CPU configuration and adjusts parallelism automatically:
```bash
# View detected CPU information
dbbackup cpu
# Example output:
# Architecture: amd64
# Logical Cores: 16
# Physical Cores: 8
# Model: Intel Xeon CPU E5-2667 v4
# Recommended jobs (balanced): 16
```
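Detected values can be overridden when the defaults don't fit (see the `--jobs` and `--dump-jobs` flags above):

```bash
# Cap parallelism manually; values are illustrative
dbbackup backup cluster --jobs 8 --dump-jobs 4
```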
### Large Database Handling

Streaming architecture maintains constant memory usage regardless of database size. Databases larger than 5GB automatically use plain format, and parallel compression via pigz is used when available.
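Conceptually, the streaming path resembles this shell pipeline (a sketch, not the tool's exact invocation):

```bash
# The dump streams through the compressor straight to disk, so no large buffers accumulate
pg_dump --format=plain myapp_db | pigz -6 > /var/backups/myapp_db.sql.gz
```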
### Memory Considerations

- Small databases (<1GB): ~500MB
- Medium databases (1-10GB): ~800MB
- Large databases (10-50GB): ~900MB
- Huge databases (50GB+): ~1GB
## Troubleshooting

### Connection Issues
```bash
# Test connection
dbbackup status

# Disable SSL
dbbackup status --insecure

# Use postgres user (Linux)
sudo -u postgres dbbackup status
```
### Out of Memory Issues

Check kernel logs for OOM events:

```bash
dmesg | grep -i oom
free -h
swapon --show
```
Enable swap file management (Linux + root):
```bash
export AUTO_SWAP=true
export SWAP_FILE_SIZE_GB=8
sudo dbbackup backup cluster
```
Or manually add swap:
```bash
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Run your backup
dbbackup backup cluster
# Cleanup after
sudo swapoff /swapfile
sudo rm /swapfile
```

Or reduce parallelism and compression:

```bash
export DUMP_JOBS=2       # Fewer parallel jobs
export COMPRESS_LEVEL=3  # Lower compression
dbbackup backup cluster
```

Automatic swap management (`AUTO_SWAP=true`) requires Linux, root privileges, and free disk space; the tool creates, enables, and cleans up the temporary swap file during cluster backups.
### Debug Mode
```bash
# Enable detailed logging
dbbackup backup single mydb --debug
```
## Documentation
- [HUGE_DATABASE_QUICK_START.md](HUGE_DATABASE_QUICK_START.md) - Quick start for large databases
- [LARGE_DATABASE_OPTIMIZATION_PLAN.md](LARGE_DATABASE_OPTIMIZATION_PLAN.md) - Optimization details
- [PRIORITY2_PGX_INTEGRATION.md](PRIORITY2_PGX_INTEGRATION.md) - pgx v5 integration
## License

MIT License. See the LICENSE file for details.

## Repository
https://git.uuxo.net/uuxo/dbbackup

Issues and feature requests: https://git.uuxo.net/uuxo/dbbackup/issues