Initial commit

This commit is contained in:
2024-11-28 10:25:46 +01:00
commit d192f9fbc4
10 changed files with 3355 additions and 0 deletions

.gitignore (vendored, new file, 4 lines)

@@ -0,0 +1,4 @@
.vscode/*
!.vscode/settings.json
!.vscode/tasks.json
!.vscode/launch.json

README.MD (new file, 219 lines)

@@ -0,0 +1,219 @@
# HMAC File Server Release Notes
**HMAC File Server** is a secure, scalable, and feature-rich file server with advanced capabilities like HMAC authentication, resumable uploads, chunked uploads, file versioning, and optional ClamAV scanning for file integrity and security. This server is built with extensibility and operational monitoring in mind, including Prometheus metrics support and Redis integration.
## Features
- **HMAC Authentication:** Secure file uploads and downloads with HMAC tokens.
- **File Versioning:** Enable versioning for uploaded files with configurable retention.
- **Chunked and Resumable Uploads:** Handle large files efficiently with support for resumable and chunked uploads.
- **ClamAV Scanning:** Optional virus scanning for uploaded files.
- **Prometheus Metrics:** Monitor system and application-level metrics.
- **Redis Integration:** Use Redis for caching or storing application states.
- **File Expiration:** Automatically delete files after a specified TTL.
- **Graceful Shutdown:** Handles signals and ensures proper cleanup.
- **Deduplication:** Remove duplicate files based on hashing for storage efficiency.
---
## Installation
### Prerequisites
- Go 1.20+
- Redis (optional, if Redis integration is enabled)
- ClamAV (optional, if file scanning is enabled)
### Clone and Build
```bash
git clone https://github.com/your-repo/hmac-file-server.git
cd hmac-file-server
go build -o hmac-file-server main.go
```
---
## Configuration
The server configuration is managed through a `config.toml` file. Below are the supported configuration options:
### **Server Configuration**
| Key | Description | Example |
|------------------------|-----------------------------------------------------|---------------------------------|
| `ListenPort`           | Port (or Unix socket path) to listen on              | `"8080"`                        |
| `UnixSocket` | Use a Unix socket (`true`/`false`) | `false` |
| `Secret` | Secret key for HMAC authentication | `"your-secret-key"` |
| `StoragePath` | Directory to store uploaded files | `"/mnt/storage/hmac-file-server"` |
| `LogLevel` | Logging level (`info`, `debug`, etc.) | `"info"` |
| `LogFile` | Log file path (optional) | `"/var/log/hmac-file-server.log"` |
| `MetricsEnabled` | Enable Prometheus metrics (`true`/`false`) | `true` |
| `MetricsPort` | Prometheus metrics server port | `"9090"` |
| `FileTTL` | File Time-to-Live duration | `"168h0m0s"` |
| `DeduplicationEnabled` | Enable file deduplication based on hashing | `true` |
| `MinFreeBytes` | Minimum free space required on storage path (in bytes) | `104857600` |
### **Uploads**
| Key | Description | Example |
|----------------------------|-----------------------------------------------|-------------|
| `ResumableUploadsEnabled` | Enable resumable uploads | `true` |
| `ChunkedUploadsEnabled` | Enable chunked uploads | `true` |
| `ChunkSize` | Chunk size for chunked uploads (bytes) | `1048576` |
| `AllowedExtensions` | Allowed file extensions for uploads | `[".png", ".jpg"]` |
### **Time Settings**
| Key | Description | Example |
|------------------|--------------------------------|----------|
| `ReadTimeout` | HTTP server read timeout | `"2h"` |
| `WriteTimeout` | HTTP server write timeout | `"2h"` |
| `IdleTimeout` | HTTP server idle timeout | `"2h"` |
### **ClamAV Configuration**
| Key | Description | Example |
|--------------------|-------------------------------------------|----------------------------------|
| `ClamAVEnabled` | Enable ClamAV virus scanning (`true`) | `true` |
| `ClamAVSocket` | Path to ClamAV Unix socket | `"/var/run/clamav/clamd.ctl"` |
| `NumScanWorkers` | Number of workers for file scanning | `2` |
### **Redis Configuration**
| Key | Description | Example |
|----------------------------|----------------------------------|-------------------|
| `RedisEnabled` | Enable Redis integration | `true` |
| `RedisDBIndex` | Redis database index | `0` |
| `RedisAddr` | Redis server address | `"localhost:6379"`|
| `RedisPassword` | Password for Redis authentication| `""` |
| `RedisHealthCheckInterval` | Health check interval for Redis | `"30s"` |
### **Workers and Connections**
| Key | Description | Example |
|------------------------|------------------------------------|-------------------|
| `NumWorkers` | Number of upload workers | `2` |
| `UploadQueueSize` | Size of the upload queue | `50` |
---
## Running the Server
### Basic Usage
Run the server with a configuration file:
```bash
./hmac-file-server -config ./config.toml
```
### Metrics Server
If `MetricsEnabled` is `true`, the Prometheus metrics server will run on the port specified in `MetricsPort` (default: `9090`).
---
## Development Notes
- **Versioning:** Enabled via `EnableVersioning`. Ensure `MaxVersions` is set appropriately to prevent storage issues.
- **File Cleaner:** The file cleaner runs hourly and deletes files older than the configured `FileTTL`.
- **Redis Health Check:** Automatically monitors Redis connectivity and logs warnings on failure.
---
## Testing
To run the server locally for development:
```bash
go run main.go -config ./config.toml
```
Use tools like **cURL** or **Postman** to test file uploads and downloads.
### Example File Upload with HMAC Token
```bash
curl -X PUT --data-binary @example.txt "http://localhost:8080/uploads/example.txt?v2=<HMAC-TOKEN>"
```
Replace `<HMAC-TOKEN>` with a hex-encoded HMAC-SHA256 signature generated with the configured `Secret`; the server accepts it via the `v`, `v2`, or `token` query parameter.
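For the `v2` and `token` variants the MAC covers the request path (without the leading slash), the content length, and the MIME type derived from the file extension; the legacy `v` variant covers only the path and length, separated by a space. A minimal Go sketch of the calculation, mirroring `handleUpload` in `cmd/server/main.go` (the secret, path, and size below are placeholder values):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"mime"
	"path/filepath"
	"strconv"
)

// computeMAC returns the hex token for the "v2"/"token" scheme (useV2 = true)
// or the legacy "v" scheme (useV2 = false).
func computeMAC(secret, fileStorePath string, contentLength int64, useV2 bool) string {
	mac := hmac.New(sha256.New, []byte(secret))
	if useV2 {
		contentType := mime.TypeByExtension(filepath.Ext(fileStorePath))
		if contentType == "" {
			contentType = "application/octet-stream"
		}
		// "v2"/"token": path NUL contentLength NUL contentType
		mac.Write([]byte(fileStorePath + "\x00" + strconv.FormatInt(contentLength, 10) + "\x00" + contentType))
	} else {
		// legacy "v": path SPACE contentLength
		mac.Write([]byte(fileStorePath + "\x20" + strconv.FormatInt(contentLength, 10)))
	}
	return hex.EncodeToString(mac.Sum(nil))
}

func main() {
	// Placeholder values: adjust the secret, path, and file size to your setup.
	fmt.Println(computeMAC("example-secret-key", "uploads/example.txt", 1234, true))
}
```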
---
## Monitoring
Prometheus metrics include:
- File upload/download durations
- Memory usage
- CPU usage
- Active connections
- HTTP request totals, labeled by method and path
---
## Example `config.toml`
```toml
[server]
listenport = "8080"
unixsocket = false
storagepath = "/mnt/storage/"
loglevel = "info"
logfile = "/var/log/file-server.log"
metricsenabled = true
metricsport = "9090"
DeduplicationEnabled = true
filettl = "336h" # 14 days
minfreebytes = 104857600 # 100 MB in bytes
[timeouts]
readtimeout = "4800s"
writetimeout = "4800s"
idletimeout = "4800s"
[security]
secret = "example-secret-key"
[versioning]
enableversioning = false
maxversions = 1
[uploads]
resumableuploadsenabled = true
chunkeduploadsenabled = true
chunksize = 8192
allowedextensions = [".txt", ".pdf", ".png", ".jpg", ".jpeg", ".gif", ".bmp", ".tiff", ".svg", ".webp", ".wav", ".mp4", ".avi", ".mkv", ".mov", ".wmv", ".flv", ".webm", ".mpeg", ".mpg", ".m4v", ".3gp", ".3g2", ".mp3", ".ogg"]
[clamav]
clamavenabled = true
clamavsocket = "/var/run/clamav/clamd.ctl"
numscanworkers = 2
[redis]
redisenabled = true
redisdbindex = 0
redisaddr = "localhost:6379"
redispassword = ""
redishealthcheckinterval = "120s"
[workers]
numworkers = 2
uploadqueuesize = 50
[file]
filerevision = 1
```
This configuration file is set up with essential features like Prometheus integration, ClamAV scanning, and file handling with deduplication and versioning options. Adjust the settings according to your infrastructure needs.
### Additional Features
- **Deduplication**: Automatically remove duplicate files based on hashing.
- **Versioning**: Store multiple versions of files and keep a maximum of `MaxVersions` versions.
- **ClamAV Integration**: Scan uploaded files for viruses using ClamAV.
- **Redis Caching**: Utilize Redis for caching file metadata for faster access.
This release ensures an efficient and secure file management system, suited for environments requiring high levels of data security and availability.

cmd/server/config.toml (new file, 67 lines)

@@ -0,0 +1,67 @@
# Server Settings
[server]
ListenPort = "8080"
UnixSocket = false
StoragePath = "./testupload"
LogLevel = "info"
LogFile = "./hmac-file-server.log"
MetricsEnabled = true
MetricsPort = "9090"
FileTTL = "8760h"
# Workers and Connections
[workers]
NumWorkers = 2
UploadQueueSize = 500
# Timeout Settings
[timeouts]
ReadTimeout = "600s"
WriteTimeout = "600s"
IdleTimeout = "600s"
# Security Settings
[security]
Secret = "a-orc-and-a-humans-is-drinking-ale"
# Versioning Settings
[versioning]
EnableVersioning = false
MaxVersions = 1
# Upload/Download Settings
[uploads]
ResumableUploadsEnabled = true
ChunkedUploadsEnabled = true
ChunkSize = 16777216
AllowedExtensions = [
# Document formats
".txt", ".pdf",
# Image formats
".png", ".jpg", ".jpeg", ".gif", ".bmp", ".tiff", ".svg", ".webp",
# Video formats
".mp4", ".avi", ".mkv", ".mov", ".wmv", ".flv", ".webm", ".mpeg", ".mpg", ".m4v", ".3gp", ".3g2",
# Audio formats
".wav", ".mp3", ".ogg"
]
# ClamAV Settings
[clamav]
ClamAVEnabled = false
ClamAVSocket = "/var/run/clamav/clamd.ctl"
NumScanWorkers = 4
# Redis Settings
[redis]
RedisEnabled = false
RedisAddr = "localhost:6379"
RedisPassword = ""
RedisDBIndex = 0
RedisHealthCheckInterval = "120s"
# Deduplication
[deduplication]
enabled = false

cmd/server/main.go (new file, 1923 lines)

@@ -0,0 +1,1923 @@
// main.go
package main
import (
"bufio"
"context"
"crypto/hmac"
"crypto/sha256"
"encoding/hex"
"flag"
"fmt"
"io"
"mime"
"net"
"net/http"
"net/url"
"os"
"os/signal"
"path/filepath"
"runtime"
"strconv"
"strings"
"syscall"
"time"
"sync"
"github.com/dutchcoders/go-clamd" // ClamAV integration
"github.com/go-redis/redis/v8" // Redis integration
"github.com/patrickmn/go-cache"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
"github.com/shirou/gopsutil/cpu"
"github.com/shirou/gopsutil/disk"
"github.com/shirou/gopsutil/host"
"github.com/shirou/gopsutil/mem"
"github.com/sirupsen/logrus"
"github.com/spf13/viper"
)
// Configuration structure
type ServerConfig struct {
ListenPort string `mapstructure:"ListenPort"`
UnixSocket bool `mapstructure:"UnixSocket"`
StoragePath string `mapstructure:"StoragePath"`
LogLevel string `mapstructure:"LogLevel"`
LogFile string `mapstructure:"LogFile"`
MetricsEnabled bool `mapstructure:"MetricsEnabled"`
MetricsPort string `mapstructure:"MetricsPort"`
FileTTL string `mapstructure:"FileTTL"`
MinFreeBytes int64 `mapstructure:"MinFreeBytes"` // Minimum free bytes required
DeduplicationEnabled bool `mapstructure:"DeduplicationEnabled"`
}
type TimeoutConfig struct {
ReadTimeout string `mapstructure:"ReadTimeout"`
WriteTimeout string `mapstructure:"WriteTimeout"`
IdleTimeout string `mapstructure:"IdleTimeout"`
}
type SecurityConfig struct {
Secret string `mapstructure:"Secret"`
}
type VersioningConfig struct {
EnableVersioning bool `mapstructure:"EnableVersioning"`
MaxVersions int `mapstructure:"MaxVersions"`
}
type UploadsConfig struct {
ResumableUploadsEnabled bool `mapstructure:"ResumableUploadsEnabled"`
ChunkedUploadsEnabled bool `mapstructure:"ChunkedUploadsEnabled"`
ChunkSize int64 `mapstructure:"ChunkSize"`
AllowedExtensions []string `mapstructure:"AllowedExtensions"`
}
type ClamAVConfig struct {
ClamAVEnabled bool `mapstructure:"ClamAVEnabled"`
ClamAVSocket string `mapstructure:"ClamAVSocket"`
NumScanWorkers int `mapstructure:"NumScanWorkers"`
}
type RedisConfig struct {
RedisEnabled bool `mapstructure:"RedisEnabled"`
RedisDBIndex int `mapstructure:"RedisDBIndex"`
RedisAddr string `mapstructure:"RedisAddr"`
RedisPassword string `mapstructure:"RedisPassword"`
RedisHealthCheckInterval string `mapstructure:"RedisHealthCheckInterval"`
}
type WorkersConfig struct {
NumWorkers int `mapstructure:"NumWorkers"`
UploadQueueSize int `mapstructure:"UploadQueueSize"`
}
type FileConfig struct {
FileRevision int `mapstructure:"FileRevision"`
}
type Config struct {
Server ServerConfig `mapstructure:"server"`
Timeouts TimeoutConfig `mapstructure:"timeouts"`
Security SecurityConfig `mapstructure:"security"`
Versioning VersioningConfig `mapstructure:"versioning"`
Uploads UploadsConfig `mapstructure:"uploads"`
ClamAV ClamAVConfig `mapstructure:"clamav"`
Redis RedisConfig `mapstructure:"redis"`
Workers WorkersConfig `mapstructure:"workers"`
File FileConfig `mapstructure:"file"`
}
// UploadTask represents a file upload task
type UploadTask struct {
AbsFilename string
Request *http.Request
Result chan error
}
// ScanTask represents a file scan task
type ScanTask struct {
AbsFilename string
Result chan error
}
// NetworkEvent represents a network-related event
type NetworkEvent struct {
Type string
Details string
}
var (
conf Config
versionString string = "v2.0-dev"
log = logrus.New()
uploadQueue chan UploadTask
networkEvents chan NetworkEvent
fileInfoCache *cache.Cache
clamClient *clamd.Clamd // Added for ClamAV integration
redisClient *redis.Client // Redis client
redisConnected bool // Redis connection status
mu sync.RWMutex
// Prometheus metrics
uploadDuration prometheus.Histogram
uploadErrorsTotal prometheus.Counter
uploadsTotal prometheus.Counter
downloadDuration prometheus.Histogram
downloadsTotal prometheus.Counter
downloadErrorsTotal prometheus.Counter
memoryUsage prometheus.Gauge
cpuUsage prometheus.Gauge
activeConnections prometheus.Gauge
requestsTotal *prometheus.CounterVec
goroutines prometheus.Gauge
uploadSizeBytes prometheus.Histogram
downloadSizeBytes prometheus.Histogram
// Constants for worker pool
MinWorkers = 5 // Minimum number of upload workers
UploadQueueSize = 10000 // Default size of the upload queue
// Channels
scanQueue chan ScanTask
ScanWorkers = 5 // Number of ClamAV scan workers
)
func main() {
// Set default configuration values
setDefaults()
// Flags for configuration file
var configFile string
flag.StringVar(&configFile, "config", "./config.toml", "Path to configuration file \"config.toml\".")
flag.Parse()
// Load configuration
err := readConfig(configFile, &conf)
if err != nil {
log.Fatalf("Error reading config: %v", err) // Fatal: application cannot proceed
}
log.Info("Configuration loaded successfully.")
// Initialize file info cache
fileInfoCache = cache.New(5*time.Minute, 10*time.Minute)
// Create store directory
err = os.MkdirAll(conf.Server.StoragePath, os.ModePerm)
if err != nil {
log.Fatalf("Error creating store directory: %v", err)
}
log.WithField("directory", conf.Server.StoragePath).Info("Store directory is ready")
// Setup logging
setupLogging()
// Log system information
logSystemInfo()
// Initialize Prometheus metrics
initMetrics()
log.Info("Prometheus metrics initialized.")
// Initialize upload and scan queues
uploadQueue = make(chan UploadTask, conf.Workers.UploadQueueSize)
scanQueue = make(chan ScanTask, conf.Workers.UploadQueueSize)
networkEvents = make(chan NetworkEvent, 100)
log.Info("Upload, scan, and network event channels initialized.")
// Context for goroutines
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// Start network monitoring
go monitorNetwork(ctx)
go handleNetworkEvents(ctx)
// Update system metrics
go updateSystemMetrics(ctx)
// Initialize ClamAV client if enabled
if conf.ClamAV.ClamAVEnabled {
clamClient, err = initClamAV(conf.ClamAV.ClamAVSocket)
if err != nil {
log.WithFields(logrus.Fields{
"error": err.Error(),
}).Warn("ClamAV client initialization failed. Continuing without ClamAV.")
} else {
log.Info("ClamAV client initialized successfully.")
}
}
// Initialize Redis client if enabled
if conf.Redis.RedisEnabled {
initRedis()
log.Info("Redis client initialized and connected successfully.")
}
// Initialize worker pools
initializeUploadWorkerPool(ctx)
if conf.ClamAV.ClamAVEnabled && clamClient != nil {
initializeScanWorkerPool(ctx)
}
// Start Redis health monitor if Redis is enabled
if conf.Redis.RedisEnabled && redisClient != nil {
go MonitorRedisHealth(ctx, redisClient, parseDuration(conf.Redis.RedisHealthCheckInterval))
}
// Setup router
router := setupRouter()
// Start file cleaner
fileTTL, err := time.ParseDuration(conf.Server.FileTTL)
if err != nil {
log.Fatalf("Invalid FileTTL: %v", err)
}
go runFileCleaner(ctx, conf.Server.StoragePath, fileTTL)
// Parse timeout durations
readTimeout, err := time.ParseDuration(conf.Timeouts.ReadTimeout)
if err != nil {
log.Fatalf("Invalid ReadTimeout: %v", err)
}
writeTimeout, err := time.ParseDuration(conf.Timeouts.WriteTimeout)
if err != nil {
log.Fatalf("Invalid WriteTimeout: %v", err)
}
idleTimeout, err := time.ParseDuration(conf.Timeouts.IdleTimeout)
if err != nil {
log.Fatalf("Invalid IdleTimeout: %v", err)
}
// Configure HTTP server
server := &http.Server{
Addr: ":" + conf.Server.ListenPort, // Prepend colon to ListenPort
Handler: router,
ReadTimeout: readTimeout,
WriteTimeout: writeTimeout,
IdleTimeout: idleTimeout,
}
// Start metrics server if enabled
if conf.Server.MetricsEnabled {
go func() {
http.Handle("/metrics", promhttp.Handler())
log.Infof("Metrics server started on port %s", conf.Server.MetricsPort)
if err := http.ListenAndServe(":"+conf.Server.MetricsPort, nil); err != nil {
log.Fatalf("Metrics server failed: %v", err)
}
}()
}
// Setup graceful shutdown
setupGracefulShutdown(server, cancel)
// Start server
log.Infof("Starting HMAC file server %s...", versionString)
if conf.Server.UnixSocket {
// Listen on Unix socket
if err := os.RemoveAll(conf.Server.ListenPort); err != nil {
log.Fatalf("Failed to remove existing Unix socket: %v", err)
}
listener, err := net.Listen("unix", conf.Server.ListenPort)
if err != nil {
log.Fatalf("Failed to listen on Unix socket %s: %v", conf.Server.ListenPort, err)
}
defer listener.Close()
if err := server.Serve(listener); err != nil && err != http.ErrServerClosed {
log.Fatalf("Server failed: %v", err)
}
} else {
// Listen on TCP port
if err := server.ListenAndServe(); err != nil && err != http.ErrServerClosed {
log.Fatalf("Server failed: %v", err)
}
}
}
// Function to load configuration using Viper
func readConfig(configFilename string, conf *Config) error {
viper.SetConfigFile(configFilename)
viper.SetConfigType("toml")
// Read in environment variables that match
viper.AutomaticEnv()
viper.SetEnvPrefix("HMAC") // Prefix for environment variables
// Read the config file
if err := viper.ReadInConfig(); err != nil {
return fmt.Errorf("error reading config file: %w", err)
}
// Unmarshal the config into the Config struct
if err := viper.Unmarshal(conf); err != nil {
return fmt.Errorf("unable to decode into struct: %w", err)
}
// Debug log the loaded configuration
log.Debugf("Loaded Configuration: %+v", conf.Server)
// Validate the configuration
if err := validateConfig(conf); err != nil {
return fmt.Errorf("configuration validation failed: %w", err)
}
// Set Deduplication Enabled
conf.Server.DeduplicationEnabled = viper.GetBool("deduplication.Enabled")
return nil
}
// Set default configuration values
func setDefaults() {
// Server defaults
viper.SetDefault("server.ListenPort", "8080")
viper.SetDefault("server.UnixSocket", false)
viper.SetDefault("server.StoragePath", "./uploads")
viper.SetDefault("server.LogLevel", "info")
viper.SetDefault("server.LogFile", "")
viper.SetDefault("server.MetricsEnabled", true)
viper.SetDefault("server.MetricsPort", "9090")
viper.SetDefault("server.FileTTL", "8760h") // 365d -> 8760h
viper.SetDefault("server.MinFreeBytes", 100<<20) // 100 MB
// Timeout defaults
viper.SetDefault("timeouts.ReadTimeout", "4800s") // supports 's'
viper.SetDefault("timeouts.WriteTimeout", "4800s")
viper.SetDefault("timeouts.IdleTimeout", "4800s")
// Security defaults
viper.SetDefault("security.Secret", "changeme")
// Versioning defaults
viper.SetDefault("versioning.EnableVersioning", false)
viper.SetDefault("versioning.MaxVersions", 1)
// Uploads defaults
viper.SetDefault("uploads.ResumableUploadsEnabled", true)
viper.SetDefault("uploads.ChunkedUploadsEnabled", true)
viper.SetDefault("uploads.ChunkSize", 8192)
viper.SetDefault("uploads.AllowedExtensions", []string{
".txt", ".pdf",
".png", ".jpg", ".jpeg", ".gif", ".bmp", ".tiff", ".svg", ".webp",
".wav", ".mp4", ".avi", ".mkv", ".mov", ".wmv", ".flv", ".webm", ".mpeg", ".mpg", ".m4v", ".3gp", ".3g2",
".mp3", ".ogg",
})
// ClamAV defaults
viper.SetDefault("clamav.ClamAVEnabled", true)
viper.SetDefault("clamav.ClamAVSocket", "/var/run/clamav/clamd.ctl")
viper.SetDefault("clamav.NumScanWorkers", 2)
// Redis defaults
viper.SetDefault("redis.RedisEnabled", true)
viper.SetDefault("redis.RedisAddr", "localhost:6379")
viper.SetDefault("redis.RedisPassword", "")
viper.SetDefault("redis.RedisDBIndex", 0)
viper.SetDefault("redis.RedisHealthCheckInterval", "120s")
// Workers defaults
viper.SetDefault("workers.NumWorkers", 2)
viper.SetDefault("workers.UploadQueueSize", 50)
// Deduplication defaults
viper.SetDefault("deduplication.Enabled", true)
}
// Validate configuration fields
func validateConfig(conf *Config) error {
if conf.Server.ListenPort == "" {
return fmt.Errorf("ListenPort must be set")
}
if conf.Security.Secret == "" {
return fmt.Errorf("secret must be set")
}
if conf.Server.StoragePath == "" {
return fmt.Errorf("StoragePath must be set")
}
if conf.Server.FileTTL == "" {
return fmt.Errorf("FileTTL must be set")
}
// Validate timeouts
if _, err := time.ParseDuration(conf.Timeouts.ReadTimeout); err != nil {
return fmt.Errorf("invalid ReadTimeout: %v", err)
}
if _, err := time.ParseDuration(conf.Timeouts.WriteTimeout); err != nil {
return fmt.Errorf("invalid WriteTimeout: %v", err)
}
if _, err := time.ParseDuration(conf.Timeouts.IdleTimeout); err != nil {
return fmt.Errorf("invalid IdleTimeout: %v", err)
}
// Validate Redis configuration if enabled
if conf.Redis.RedisEnabled {
if conf.Redis.RedisAddr == "" {
return fmt.Errorf("RedisAddr must be set when Redis is enabled")
}
}
// Add more validations as needed
return nil
}
// Setup logging
func setupLogging() {
level, err := logrus.ParseLevel(conf.Server.LogLevel)
if err != nil {
log.Fatalf("Invalid log level: %s", conf.Server.LogLevel)
}
log.SetLevel(level)
if conf.Server.LogFile != "" {
logFile, err := os.OpenFile(conf.Server.LogFile, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0666)
if err != nil {
log.Fatalf("Failed to open log file: %v", err)
}
log.SetOutput(io.MultiWriter(os.Stdout, logFile))
} else {
log.SetOutput(os.Stdout)
}
// Use Text formatter for human-readable logs
log.SetFormatter(&logrus.TextFormatter{
FullTimestamp: true,
// You can customize the format further if needed
})
}
// Log system information
func logSystemInfo() {
log.Info("========================================")
log.Infof(" HMAC File Server - %s ", versionString)
log.Info(" Secure File Handling with HMAC Auth ")
log.Info("========================================")
log.Info("Features: Prometheus Metrics, Chunked Uploads, ClamAV Scanning")
log.Info("Build Date: 2024-10-28")
log.Infof("Operating System: %s", runtime.GOOS)
log.Infof("Architecture: %s", runtime.GOARCH)
log.Infof("Number of CPUs: %d", runtime.NumCPU())
log.Infof("Go Version: %s", runtime.Version())
v, _ := mem.VirtualMemory()
log.Infof("Total Memory: %v MB", v.Total/1024/1024)
log.Infof("Free Memory: %v MB", v.Free/1024/1024)
log.Infof("Used Memory: %v MB", v.Used/1024/1024)
cpuInfo, _ := cpu.Info()
for _, info := range cpuInfo {
log.Infof("CPU Model: %s, Cores: %d, Mhz: %f", info.ModelName, info.Cores, info.Mhz)
}
partitions, _ := disk.Partitions(false)
for _, partition := range partitions {
usage, _ := disk.Usage(partition.Mountpoint)
log.Infof("Disk Mountpoint: %s, Total: %v GB, Free: %v GB, Used: %v GB",
partition.Mountpoint, usage.Total/1024/1024/1024, usage.Free/1024/1024/1024, usage.Used/1024/1024/1024)
}
hInfo, _ := host.Info()
log.Infof("Hostname: %s", hInfo.Hostname)
log.Infof("Uptime: %v seconds", hInfo.Uptime)
log.Infof("Boot Time: %v", time.Unix(int64(hInfo.BootTime), 0))
log.Infof("Platform: %s", hInfo.Platform)
log.Infof("Platform Family: %s", hInfo.PlatformFamily)
log.Infof("Platform Version: %s", hInfo.PlatformVersion)
log.Infof("Kernel Version: %s", hInfo.KernelVersion)
}
// Initialize Prometheus metrics
func initMetrics() {
uploadDuration = prometheus.NewHistogram(prometheus.HistogramOpts{Namespace: "hmac", Name: "file_server_upload_duration_seconds", Help: "Histogram of file upload duration in seconds.", Buckets: prometheus.DefBuckets})
uploadErrorsTotal = prometheus.NewCounter(prometheus.CounterOpts{Namespace: "hmac", Name: "file_server_upload_errors_total", Help: "Total number of file upload errors."})
uploadsTotal = prometheus.NewCounter(prometheus.CounterOpts{Namespace: "hmac", Name: "file_server_uploads_total", Help: "Total number of successful file uploads."})
downloadDuration = prometheus.NewHistogram(prometheus.HistogramOpts{Namespace: "hmac", Name: "file_server_download_duration_seconds", Help: "Histogram of file download duration in seconds.", Buckets: prometheus.DefBuckets})
downloadsTotal = prometheus.NewCounter(prometheus.CounterOpts{Namespace: "hmac", Name: "file_server_downloads_total", Help: "Total number of successful file downloads."})
downloadErrorsTotal = prometheus.NewCounter(prometheus.CounterOpts{Namespace: "hmac", Name: "file_server_download_errors_total", Help: "Total number of file download errors."})
memoryUsage = prometheus.NewGauge(prometheus.GaugeOpts{Namespace: "hmac", Name: "memory_usage_bytes", Help: "Current memory usage in bytes."})
cpuUsage = prometheus.NewGauge(prometheus.GaugeOpts{Namespace: "hmac", Name: "cpu_usage_percent", Help: "Current CPU usage as a percentage."})
activeConnections = prometheus.NewGauge(prometheus.GaugeOpts{Namespace: "hmac", Name: "active_connections_total", Help: "Total number of active connections."})
requestsTotal = prometheus.NewCounterVec(prometheus.CounterOpts{Namespace: "hmac", Name: "http_requests_total", Help: "Total number of HTTP requests received, labeled by method and path."}, []string{"method", "path"})
goroutines = prometheus.NewGauge(prometheus.GaugeOpts{Namespace: "hmac", Name: "goroutines_count", Help: "Current number of goroutines."})
uploadSizeBytes = prometheus.NewHistogram(prometheus.HistogramOpts{
Namespace: "hmac",
Name: "file_server_upload_size_bytes",
Help: "Histogram of uploaded file sizes in bytes.",
Buckets: prometheus.ExponentialBuckets(100, 10, 8),
})
downloadSizeBytes = prometheus.NewHistogram(prometheus.HistogramOpts{
Namespace: "hmac",
Name: "file_server_download_size_bytes",
Help: "Histogram of downloaded file sizes in bytes.",
Buckets: prometheus.ExponentialBuckets(100, 10, 8),
})
if conf.Server.MetricsEnabled {
prometheus.MustRegister(uploadDuration, uploadErrorsTotal, uploadsTotal)
prometheus.MustRegister(downloadDuration, downloadsTotal, downloadErrorsTotal)
prometheus.MustRegister(memoryUsage, cpuUsage, activeConnections, requestsTotal, goroutines)
prometheus.MustRegister(uploadSizeBytes, downloadSizeBytes)
}
}
// Update system metrics
func updateSystemMetrics(ctx context.Context) {
ticker := time.NewTicker(10 * time.Second)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
log.Info("Stopping system metrics updater.")
return
case <-ticker.C:
v, _ := mem.VirtualMemory()
memoryUsage.Set(float64(v.Used))
cpuPercent, _ := cpu.Percent(0, false)
if len(cpuPercent) > 0 {
cpuUsage.Set(cpuPercent[0])
}
goroutines.Set(float64(runtime.NumGoroutine()))
}
}
}
// Function to check if a file exists and return its size
func fileExists(filePath string) (bool, int64) {
if cachedInfo, found := fileInfoCache.Get(filePath); found {
if info, ok := cachedInfo.(os.FileInfo); ok {
return !info.IsDir(), info.Size()
}
}
fileInfo, err := os.Stat(filePath)
if os.IsNotExist(err) {
return false, 0
} else if err != nil {
log.Error("Error checking file existence:", err)
return false, 0
}
fileInfoCache.Set(filePath, fileInfo, cache.DefaultExpiration)
return !fileInfo.IsDir(), fileInfo.Size()
}
// Function to check file extension
func isExtensionAllowed(filename string) bool {
if len(conf.Uploads.AllowedExtensions) == 0 {
return true // No restrictions if the list is empty
}
ext := strings.ToLower(filepath.Ext(filename))
for _, allowedExt := range conf.Uploads.AllowedExtensions {
if strings.ToLower(allowedExt) == ext {
return true
}
}
return false
}
// Version the file by moving the existing file to a versioned directory
func versionFile(absFilename string) error {
versionDir := absFilename + "_versions"
err := os.MkdirAll(versionDir, os.ModePerm)
if err != nil {
return fmt.Errorf("failed to create version directory: %v", err)
}
timestamp := time.Now().Format("20060102-150405")
versionedFilename := filepath.Join(versionDir, filepath.Base(absFilename)+"."+timestamp)
err = os.Rename(absFilename, versionedFilename)
if err != nil {
return fmt.Errorf("failed to version the file: %v", err)
}
log.WithFields(logrus.Fields{
"original": absFilename,
"versioned_as": versionedFilename,
}).Info("Versioned old file")
return cleanupOldVersions(versionDir)
}
// Clean up older versions if they exceed the maximum allowed
func cleanupOldVersions(versionDir string) error {
files, err := os.ReadDir(versionDir)
if err != nil {
return fmt.Errorf("failed to list version files: %v", err)
}
if conf.Versioning.MaxVersions > 0 && len(files) > conf.Versioning.MaxVersions {
excessFiles := len(files) - conf.Versioning.MaxVersions
for i := 0; i < excessFiles; i++ {
err := os.Remove(filepath.Join(versionDir, files[i].Name()))
if err != nil {
return fmt.Errorf("failed to remove old version: %v", err)
}
log.WithField("file", files[i].Name()).Info("Removed old version")
}
}
return nil
}
// Process the upload task
func processUpload(task UploadTask) error {
absFilename := task.AbsFilename
tempFilename := absFilename + ".tmp"
r := task.Request
log.Infof("Processing upload for file: %s", absFilename)
startTime := time.Now()
// Handle uploads and write to a temporary file
if conf.Uploads.ChunkedUploadsEnabled {
log.Debugf("Chunked uploads enabled. Handling chunked upload for %s", tempFilename)
err := handleChunkedUpload(tempFilename, r)
if err != nil {
uploadDuration.Observe(time.Since(startTime).Seconds())
log.WithFields(logrus.Fields{
"file": tempFilename,
"error": err,
}).Error("Failed to handle chunked upload")
return err
}
} else {
log.Debugf("Handling standard upload for %s", tempFilename)
err := createFile(tempFilename, r)
if err != nil {
log.WithFields(logrus.Fields{
"file": tempFilename,
"error": err,
}).Error("Error creating file")
uploadDuration.Observe(time.Since(startTime).Seconds())
return err
}
}
// Perform ClamAV scan on the temporary file
if clamClient != nil {
log.Debugf("Scanning %s with ClamAV", tempFilename)
err := scanFileWithClamAV(tempFilename)
if err != nil {
log.WithFields(logrus.Fields{
"file": tempFilename,
"error": err,
}).Warn("ClamAV detected a virus or scan failed")
os.Remove(tempFilename)
uploadErrorsTotal.Inc()
return err
}
log.Infof("ClamAV scan passed for file: %s", tempFilename)
}
// Handle file versioning if enabled
if conf.Versioning.EnableVersioning {
existing, _ := fileExists(absFilename)
if existing {
log.Infof("File %s exists. Initiating versioning.", absFilename)
err := versionFile(absFilename)
if err != nil {
log.WithFields(logrus.Fields{
"file": absFilename,
"error": err,
}).Error("Error versioning file")
os.Remove(tempFilename)
return err
}
log.Infof("File versioned successfully: %s", absFilename)
}
}
// Rename temporary file to final destination
err := os.Rename(tempFilename, absFilename)
if err != nil {
log.WithFields(logrus.Fields{
"temp_file": tempFilename,
"final_file": absFilename,
"error": err,
}).Error("Failed to move file to final destination")
os.Remove(tempFilename)
return err
}
log.Infof("File moved to final destination: %s", absFilename)
// Handle deduplication if enabled
if conf.Server.DeduplicationEnabled {
log.Debugf("Deduplication enabled. Checking duplicates for %s", absFilename)
err = handleDeduplication(context.Background(), absFilename)
if err != nil {
log.WithError(err).Error("Deduplication failed")
uploadErrorsTotal.Inc()
return err
}
log.Infof("Deduplication handled successfully for file: %s", absFilename)
}
log.WithFields(logrus.Fields{
"file": absFilename,
}).Info("File uploaded and processed successfully")
uploadDuration.Observe(time.Since(startTime).Seconds())
uploadsTotal.Inc()
return nil
}
// uploadWorker processes upload tasks from the uploadQueue
func uploadWorker(ctx context.Context, workerID int) {
log.Infof("Upload worker %d started.", workerID)
defer log.Infof("Upload worker %d stopped.", workerID)
for {
select {
case <-ctx.Done():
return
case task, ok := <-uploadQueue:
if !ok {
log.Warnf("Upload queue closed. Worker %d exiting.", workerID)
return
}
log.Infof("Worker %d processing upload for file: %s", workerID, task.AbsFilename)
err := processUpload(task)
if err != nil {
log.Errorf("Worker %d failed to process upload for %s: %v", workerID, task.AbsFilename, err)
uploadErrorsTotal.Inc()
} else {
log.Infof("Worker %d successfully processed upload for %s", workerID, task.AbsFilename)
}
task.Result <- err
close(task.Result)
}
}
}
// Initialize upload worker pool
func initializeUploadWorkerPool(ctx context.Context) {
for i := 0; i < MinWorkers; i++ {
go uploadWorker(ctx, i)
}
log.Infof("Initialized %d upload workers", MinWorkers)
}
// Worker function to process scan tasks
func scanWorker(ctx context.Context, workerID int) {
log.WithField("worker_id", workerID).Info("Scan worker started")
for {
select {
case <-ctx.Done():
log.WithField("worker_id", workerID).Info("Scan worker stopping")
return
case task, ok := <-scanQueue:
if !ok {
log.WithField("worker_id", workerID).Info("Scan queue closed")
return
}
log.WithFields(logrus.Fields{
"worker_id": workerID,
"file": task.AbsFilename,
}).Info("Processing scan task")
err := scanFileWithClamAV(task.AbsFilename)
if err != nil {
log.WithFields(logrus.Fields{
"worker_id": workerID,
"file": task.AbsFilename,
"error": err,
}).Error("Failed to scan file")
} else {
log.WithFields(logrus.Fields{
"worker_id": workerID,
"file": task.AbsFilename,
}).Info("Successfully scanned file")
}
task.Result <- err
close(task.Result)
}
}
}
// Initialize scan worker pool
func initializeScanWorkerPool(ctx context.Context) {
for i := 0; i < ScanWorkers; i++ {
go scanWorker(ctx, i)
}
log.Infof("Initialized %d scan workers", ScanWorkers)
}
// Setup router with middleware
func setupRouter() http.Handler {
mux := http.NewServeMux()
mux.HandleFunc("/", handleRequest)
if conf.Server.MetricsEnabled {
mux.Handle("/metrics", promhttp.Handler())
}
// Apply middleware
handler := loggingMiddleware(mux)
handler = recoveryMiddleware(handler)
handler = corsMiddleware(handler)
return handler
}
// Middleware for logging
func loggingMiddleware(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
requestsTotal.WithLabelValues(r.Method, r.URL.Path).Inc()
next.ServeHTTP(w, r)
})
}
// Middleware for panic recovery
func recoveryMiddleware(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
defer func() {
if rec := recover(); rec != nil {
log.WithFields(logrus.Fields{
"method": r.Method,
"url": r.URL.String(),
"error": rec,
}).Error("Panic recovered in HTTP handler")
http.Error(w, "Internal Server Error", http.StatusInternalServerError)
}
}()
next.ServeHTTP(w, r)
})
}
// corsMiddleware handles CORS by setting appropriate headers
func corsMiddleware(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Set CORS headers
w.Header().Set("Access-Control-Allow-Origin", "*")
w.Header().Set("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS")
w.Header().Set("Access-Control-Allow-Headers", "Content-Type, Authorization, X-File-MAC")
w.Header().Set("Access-Control-Max-Age", "86400") // Cache preflight response for 1 day
// Handle preflight OPTIONS request
if r.Method == http.MethodOptions {
w.WriteHeader(http.StatusOK)
return
}
// Proceed to the next handler
next.ServeHTTP(w, r)
})
}
// Handle file uploads and downloads
func handleRequest(w http.ResponseWriter, r *http.Request) {
if r.Method == http.MethodPost && strings.Contains(r.Header.Get("Content-Type"), "multipart/form-data") {
absFilename, err := sanitizeFilePath(conf.Server.StoragePath, strings.TrimPrefix(r.URL.Path, "/"))
if err != nil {
log.WithError(err).Error("Invalid file path")
http.Error(w, "Invalid file path", http.StatusBadRequest)
return
}
err = handleMultipartUpload(w, r, absFilename)
if err != nil {
log.WithError(err).Error("Failed to handle multipart upload")
http.Error(w, "Failed to handle multipart upload", http.StatusInternalServerError)
return
}
w.WriteHeader(http.StatusCreated)
return
}
// Get client IP address
clientIP := r.Header.Get("X-Real-IP")
if clientIP == "" {
clientIP = r.Header.Get("X-Forwarded-For")
}
if clientIP == "" {
// Fallback to RemoteAddr
host, _, err := net.SplitHostPort(r.RemoteAddr)
if err != nil {
log.WithError(err).Warn("Failed to parse RemoteAddr")
clientIP = r.RemoteAddr
} else {
clientIP = host
}
}
// Log the request with the client IP
log.WithFields(logrus.Fields{
"method": r.Method,
"url": r.URL.String(),
"remote": clientIP,
}).Info("Incoming request")
// Parse URL and query parameters
p := r.URL.Path
a, err := url.ParseQuery(r.URL.RawQuery)
if err != nil {
log.Warn("Failed to parse query parameters")
http.Error(w, "Internal Server Error", http.StatusInternalServerError)
return
}
fileStorePath := strings.TrimPrefix(p, "/")
if fileStorePath == "" || fileStorePath == "/" {
log.Warn("Access to root directory is forbidden")
http.Error(w, "Forbidden", http.StatusForbidden)
return
} else if fileStorePath[0] == '/' {
fileStorePath = fileStorePath[1:]
}
absFilename, err := sanitizeFilePath(conf.Server.StoragePath, fileStorePath)
if err != nil {
log.WithFields(logrus.Fields{
"file": fileStorePath,
"error": err,
}).Warn("Invalid file path")
http.Error(w, "Invalid file path", http.StatusBadRequest)
return
}
switch r.Method {
case http.MethodPut:
handleUpload(w, r, absFilename, fileStorePath, a)
case http.MethodHead, http.MethodGet:
handleDownload(w, r, absFilename, fileStorePath)
case http.MethodOptions:
// Handled by NGINX; no action needed
w.Header().Set("Allow", "OPTIONS, GET, PUT, HEAD")
return
default:
log.WithField("method", r.Method).Warn("Invalid HTTP method for upload directory")
http.Error(w, "Method Not Allowed", http.StatusMethodNotAllowed)
return
}
}
// Handle file uploads with extension restrictions and HMAC validation
func handleUpload(w http.ResponseWriter, r *http.Request, absFilename, fileStorePath string, a url.Values) {
// Log the storage path being used
log.Infof("Using storage path: %s", conf.Server.StoragePath)
// Determine protocol version based on query parameters
var protocolVersion string
if a.Get("v2") != "" {
protocolVersion = "v2"
} else if a.Get("token") != "" {
protocolVersion = "token"
} else if a.Get("v") != "" {
protocolVersion = "v"
} else {
log.Warn("No HMAC attached to URL. Expecting 'v', 'v2', or 'token' parameter as MAC")
http.Error(w, "No HMAC attached to URL. Expecting 'v', 'v2', or 'token' parameter as MAC", http.StatusForbidden)
return
}
log.Debugf("Protocol version determined: %s", protocolVersion)
// Initialize HMAC
mac := hmac.New(sha256.New, []byte(conf.Security.Secret))
// Calculate MAC based on protocolVersion
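// "v"            : HMAC-SHA256(secret, path + 0x20 + contentLength)
// "v2" / "token" : HMAC-SHA256(secret, path + 0x00 + contentLength + 0x00 + contentType)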
if protocolVersion == "v" {
mac.Write([]byte(fileStorePath + "\x20" + strconv.FormatInt(r.ContentLength, 10)))
} else if protocolVersion == "v2" || protocolVersion == "token" {
contentType := mime.TypeByExtension(filepath.Ext(fileStorePath))
if contentType == "" {
contentType = "application/octet-stream"
}
mac.Write([]byte(fileStorePath + "\x00" + strconv.FormatInt(r.ContentLength, 10) + "\x00" + contentType))
}
calculatedMAC := mac.Sum(nil)
log.Debugf("Calculated MAC: %x", calculatedMAC)
// Decode provided MAC from hex
providedMACHex := a.Get(protocolVersion)
providedMAC, err := hex.DecodeString(providedMACHex)
if err != nil {
log.Warn("Invalid MAC encoding")
http.Error(w, "Invalid MAC encoding", http.StatusForbidden)
return
}
log.Debugf("Provided MAC: %x", providedMAC)
// Validate the HMAC
if !hmac.Equal(calculatedMAC, providedMAC) {
log.Warn("Invalid MAC")
http.Error(w, "Invalid MAC", http.StatusForbidden)
return
}
log.Debug("HMAC validation successful")
// Validate file extension (absFilename was already sanitized in handleRequest)
if !isExtensionAllowed(fileStorePath) {
log.WithFields(logrus.Fields{
"file": fileStorePath,
"extension": filepath.Ext(fileStorePath),
}).Warn("Disallowed file extension")
http.Error(w, "Disallowed file extension", http.StatusForbidden)
uploadErrorsTotal.Inc()
return
}
// Check if there is enough free space
err = checkStorageSpace(conf.Server.StoragePath, conf.Server.MinFreeBytes)
if err != nil {
log.WithFields(logrus.Fields{
"storage_path": conf.Server.StoragePath,
"error": err,
}).Warn("Not enough free space")
http.Error(w, "Not enough free space", http.StatusInsufficientStorage)
uploadErrorsTotal.Inc()
return
}
// Create an UploadTask with a result channel
result := make(chan error)
task := UploadTask{
AbsFilename: absFilename,
Request: r,
Result: result,
}
// Submit task to the upload queue
select {
case uploadQueue <- task:
// Successfully added to the queue
log.Debug("Upload task enqueued successfully")
default:
// Queue is full
log.Warn("Upload queue is full. Rejecting upload")
http.Error(w, "Server busy. Try again later.", http.StatusServiceUnavailable)
uploadErrorsTotal.Inc()
return
}
// Wait for the worker to process the upload
err = <-result
if err != nil {
// The worker has already logged the error; send an appropriate HTTP response
http.Error(w, fmt.Sprintf("Upload failed: %v", err), http.StatusInternalServerError)
return
}
// Upload was successful
w.WriteHeader(http.StatusCreated)
}
// Handle file downloads
func handleDownload(w http.ResponseWriter, r *http.Request, absFilename, fileStorePath string) {
fileInfo, err := getFileInfo(absFilename)
if err != nil {
log.WithError(err).Error("Failed to get file information")
http.Error(w, "Not Found", http.StatusNotFound)
downloadErrorsTotal.Inc()
return
} else if fileInfo.IsDir() {
log.Warn("Directory listing forbidden")
http.Error(w, "Forbidden", http.StatusForbidden)
downloadErrorsTotal.Inc()
return
}
contentType := mime.TypeByExtension(filepath.Ext(fileStorePath))
if contentType == "" {
contentType = "application/octet-stream"
}
w.Header().Set("Content-Type", contentType)
// Handle resumable downloads
if conf.Uploads.ResumableUploadsEnabled {
handleResumableDownload(absFilename, w, r, fileInfo.Size())
return
}
if r.Method == http.MethodHead {
w.Header().Set("Content-Length", strconv.FormatInt(fileInfo.Size(), 10))
downloadsTotal.Inc()
return
} else {
// Measure download duration
startTime := time.Now()
log.Infof("Initiating download for file: %s", absFilename)
http.ServeFile(w, r, absFilename)
downloadDuration.Observe(time.Since(startTime).Seconds())
downloadSizeBytes.Observe(float64(fileInfo.Size()))
downloadsTotal.Inc()
log.Infof("File downloaded successfully: %s", absFilename)
return
}
}
// Create the file for upload with buffered Writer
func createFile(tempFilename string, r *http.Request) error {
absDirectory := filepath.Dir(tempFilename)
err := os.MkdirAll(absDirectory, os.ModePerm)
if err != nil {
log.WithError(err).Errorf("Failed to create directory %s", absDirectory)
return fmt.Errorf("failed to create directory %s: %w", absDirectory, err)
}
// Open the file for writing
targetFile, err := os.OpenFile(tempFilename, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0644)
if err != nil {
log.WithError(err).Errorf("Failed to create file %s", tempFilename)
return fmt.Errorf("failed to create file %s: %w", tempFilename, err)
}
defer targetFile.Close()
// Use a large buffer for efficient file writing
bufferSize := 4 * 1024 * 1024 // 4 MB buffer
writer := bufio.NewWriterSize(targetFile, bufferSize)
buffer := make([]byte, bufferSize)
totalBytes := int64(0)
for {
n, readErr := r.Body.Read(buffer)
if n > 0 {
totalBytes += int64(n)
_, writeErr := writer.Write(buffer[:n])
if writeErr != nil {
log.WithError(writeErr).Errorf("Failed to write to file %s", tempFilename)
return fmt.Errorf("failed to write to file %s: %w", tempFilename, writeErr)
}
}
if readErr != nil {
if readErr == io.EOF {
break
}
log.WithError(readErr).Error("Failed to read request body")
return fmt.Errorf("failed to read request body: %w", readErr)
}
}
err = writer.Flush()
if err != nil {
log.WithError(err).Errorf("Failed to flush buffer to file %s", tempFilename)
return fmt.Errorf("failed to flush buffer to file %s: %w", tempFilename, err)
}
log.WithFields(logrus.Fields{
"temp_file": tempFilename,
"total_bytes": totalBytes,
}).Info("File uploaded successfully")
uploadSizeBytes.Observe(float64(totalBytes))
return nil
}
// Scan the uploaded file with ClamAV (Optional)
func scanFileWithClamAV(filePath string) error {
log.WithField("file", filePath).Info("Scanning file with ClamAV")
scanResultChan, err := clamClient.ScanFile(filePath)
if err != nil {
log.WithError(err).Error("Failed to initiate ClamAV scan")
return fmt.Errorf("failed to initiate ClamAV scan: %w", err)
}
// Receive scan result
scanResult := <-scanResultChan
if scanResult == nil {
log.Error("Failed to receive scan result from ClamAV")
return fmt.Errorf("failed to receive scan result from ClamAV")
}
// Handle scan result
switch scanResult.Status {
case clamd.RES_OK:
log.WithField("file", filePath).Info("ClamAV scan passed")
return nil
case clamd.RES_FOUND:
log.WithFields(logrus.Fields{
"file": filePath,
"description": scanResult.Description,
}).Warn("ClamAV detected a virus")
return fmt.Errorf("virus detected: %s", scanResult.Description)
default:
log.WithFields(logrus.Fields{
"file": filePath,
"status": scanResult.Status,
"description": scanResult.Description,
}).Warn("ClamAV scan returned unexpected status")
return fmt.Errorf("ClamAV scan returned unexpected status: %s", scanResult.Description)
}
}
// initClamAV initializes the ClamAV client and logs the status
func initClamAV(socket string) (*clamd.Clamd, error) {
if socket == "" {
log.Error("ClamAV socket path is not configured.")
return nil, fmt.Errorf("ClamAV socket path is not configured")
}
clamClient := clamd.NewClamd("unix:" + socket)
err := clamClient.Ping()
if err != nil {
log.Errorf("Failed to connect to ClamAV at %s: %v", socket, err)
return nil, fmt.Errorf("failed to connect to ClamAV: %w", err)
}
log.Info("Connected to ClamAV successfully.")
return clamClient, nil
}
// Handle resumable downloads
func handleResumableDownload(absFilename string, w http.ResponseWriter, r *http.Request, fileSize int64) {
rangeHeader := r.Header.Get("Range")
if rangeHeader == "" {
// If no Range header, serve the full file
startTime := time.Now()
http.ServeFile(w, r, absFilename)
downloadDuration.Observe(time.Since(startTime).Seconds())
downloadSizeBytes.Observe(float64(fileSize))
downloadsTotal.Inc()
return
}
// Parse Range header
ranges := strings.Split(strings.TrimPrefix(rangeHeader, "bytes="), "-")
if len(ranges) != 2 {
http.Error(w, "Invalid Range", http.StatusRequestedRangeNotSatisfiable)
downloadErrorsTotal.Inc()
return
}
start, err := strconv.ParseInt(ranges[0], 10, 64)
if err != nil {
http.Error(w, "Invalid Range", http.StatusRequestedRangeNotSatisfiable)
downloadErrorsTotal.Inc()
return
}
// Calculate end byte
end := fileSize - 1
if ranges[1] != "" {
end, err = strconv.ParseInt(ranges[1], 10, 64)
if err != nil || end >= fileSize {
http.Error(w, "Invalid Range", http.StatusRequestedRangeNotSatisfiable)
downloadErrorsTotal.Inc()
return
}
}
// Reject ranges that start beyond the requested end byte or the end of the file
if start > end || start >= fileSize {
http.Error(w, "Invalid Range", http.StatusRequestedRangeNotSatisfiable)
downloadErrorsTotal.Inc()
return
}
// Set response headers for partial content
w.Header().Set("Content-Range", fmt.Sprintf("bytes %d-%d/%d", start, end, fileSize))
w.Header().Set("Content-Length", strconv.FormatInt(end-start+1, 10))
w.Header().Set("Accept-Ranges", "bytes")
w.WriteHeader(http.StatusPartialContent)
// Serve the requested byte range
file, err := os.Open(absFilename)
if err != nil {
http.Error(w, "Internal Server Error", http.StatusInternalServerError)
downloadErrorsTotal.Inc()
return
}
defer file.Close()
// Seek to the start byte
_, err = file.Seek(start, 0)
if err != nil {
http.Error(w, "Internal Server Error", http.StatusInternalServerError)
downloadErrorsTotal.Inc()
return
}
// Create a buffer and copy the specified range to the response writer
buffer := make([]byte, 32*1024) // 32KB buffer
remaining := end - start + 1
startTime := time.Now()
for remaining > 0 {
if int64(len(buffer)) > remaining {
buffer = buffer[:remaining]
}
n, err := file.Read(buffer)
if n > 0 {
if _, writeErr := w.Write(buffer[:n]); writeErr != nil {
log.WithError(writeErr).Error("Failed to write to response")
downloadErrorsTotal.Inc()
return
}
remaining -= int64(n)
}
if err != nil {
if err != io.EOF {
log.WithError(err).Error("Error reading file during resumable download")
http.Error(w, "Internal Server Error", http.StatusInternalServerError)
downloadErrorsTotal.Inc()
}
break
}
}
downloadDuration.Observe(time.Since(startTime).Seconds())
downloadSizeBytes.Observe(float64(end - start + 1))
downloadsTotal.Inc()
}
// Handle chunked uploads with bufio.Writer
func handleChunkedUpload(tempFilename string, r *http.Request) error {
log.WithField("file", tempFilename).Info("Handling chunked upload to temporary file")
// Ensure the directory exists
absDirectory := filepath.Dir(tempFilename)
err := os.MkdirAll(absDirectory, os.ModePerm)
if err != nil {
log.WithError(err).Errorf("Failed to create directory %s for chunked upload", absDirectory)
return fmt.Errorf("failed to create directory %s: %w", absDirectory, err)
}
targetFile, err := os.OpenFile(tempFilename, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0644)
if err != nil {
log.WithError(err).Error("Failed to open temporary file for chunked upload")
return err
}
defer targetFile.Close()
writer := bufio.NewWriterSize(targetFile, int(conf.Uploads.ChunkSize))
buffer := make([]byte, conf.Uploads.ChunkSize)
totalBytes := int64(0)
for {
n, err := r.Body.Read(buffer)
if n > 0 {
totalBytes += int64(n)
_, writeErr := writer.Write(buffer[:n])
if writeErr != nil {
log.WithError(writeErr).Error("Failed to write chunk to temporary file")
return writeErr
}
}
if err != nil {
if err == io.EOF {
break // Finished reading the body
}
log.WithError(err).Error("Error reading from request body")
return err
}
}
err = writer.Flush()
if err != nil {
log.WithError(err).Error("Failed to flush buffer to temporary file")
return err
}
log.WithFields(logrus.Fields{
"temp_file": tempFilename,
"total_bytes": totalBytes,
}).Info("Chunked upload completed successfully")
uploadSizeBytes.Observe(float64(totalBytes))
return nil
}
// Get file information with caching
func getFileInfo(absFilename string) (os.FileInfo, error) {
if cachedInfo, found := fileInfoCache.Get(absFilename); found {
if info, ok := cachedInfo.(os.FileInfo); ok {
return info, nil
}
}
fileInfo, err := os.Stat(absFilename)
if err != nil {
return nil, err
}
fileInfoCache.Set(absFilename, fileInfo, cache.DefaultExpiration)
return fileInfo, nil
}
// Monitor network changes
func monitorNetwork(ctx context.Context) {
currentIP := getCurrentIPAddress() // Placeholder for the current IP address
for {
select {
case <-ctx.Done():
log.Info("Stopping network monitor.")
return
case <-time.After(10 * time.Second):
newIP := getCurrentIPAddress()
if newIP != currentIP && newIP != "" {
currentIP = newIP
select {
case networkEvents <- NetworkEvent{Type: "IP_CHANGE", Details: currentIP}:
log.WithField("new_ip", currentIP).Info("Queued IP_CHANGE event")
default:
log.Warn("Network event channel is full. Dropping IP_CHANGE event.")
}
}
}
}
}
// Handle network events
func handleNetworkEvents(ctx context.Context) {
for {
select {
case <-ctx.Done():
log.Info("Stopping network event handler.")
return
case event, ok := <-networkEvents:
if !ok {
log.Info("Network events channel closed.")
return
}
switch event.Type {
case "IP_CHANGE":
log.WithField("new_ip", event.Details).Info("Network change detected")
// Example: Update Prometheus gauge or trigger alerts
// activeConnections.Set(float64(getActiveConnections()))
}
// Additional event types can be handled here
}
}
}
// Get current IP address (example)
func getCurrentIPAddress() string {
interfaces, err := net.Interfaces()
if err != nil {
log.WithError(err).Error("Failed to get network interfaces")
return ""
}
for _, iface := range interfaces {
if iface.Flags&net.FlagUp == 0 || iface.Flags&net.FlagLoopback != 0 {
continue // Skip interfaces that are down or loopback
}
addrs, err := iface.Addrs()
if err != nil {
log.WithError(err).Errorf("Failed to get addresses for interface %s", iface.Name)
continue
}
for _, addr := range addrs {
if ipnet, ok := addr.(*net.IPNet); ok && ipnet.IP.IsGlobalUnicast() && ipnet.IP.To4() != nil {
return ipnet.IP.String()
}
}
}
return ""
}
// setupGracefulShutdown sets up handling for graceful server shutdown
func setupGracefulShutdown(server *http.Server, cancel context.CancelFunc) {
quit := make(chan os.Signal, 1)
signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
go func() {
sig := <-quit
log.Infof("Received signal %s. Initiating shutdown...", sig)
// Create a deadline to wait for.
ctxShutdown, shutdownCancel := context.WithTimeout(context.Background(), 30*time.Second)
defer shutdownCancel()
// Attempt graceful shutdown
if err := server.Shutdown(ctxShutdown); err != nil {
log.Errorf("Server shutdown failed: %v", err)
} else {
log.Info("Server shutdown gracefully.")
}
// Signal other goroutines to stop
cancel()
// Close the upload, scan, and network event channels
close(uploadQueue)
log.Info("Upload queue closed.")
close(scanQueue)
log.Info("Scan queue closed.")
close(networkEvents)
log.Info("Network events channel closed.")
log.Info("Shutdown process completed. Exiting application.")
os.Exit(0)
}()
}
// Initialize Redis client
func initRedis() {
if !conf.Redis.RedisEnabled {
log.Info("Redis is disabled in configuration.")
return
}
redisClient = redis.NewClient(&redis.Options{
Addr: conf.Redis.RedisAddr,
Password: conf.Redis.RedisPassword,
DB: conf.Redis.RedisDBIndex,
})
// Test the Redis connection
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
_, err := redisClient.Ping(ctx).Result()
if err != nil {
log.Fatalf("Failed to connect to Redis: %v", err)
}
log.Info("Connected to Redis successfully")
// Set initial connection status
mu.Lock()
redisConnected = true
mu.Unlock()
// Redis health monitoring is started from main with a cancellable context.
}
// MonitorRedisHealth periodically checks Redis connectivity and updates redisConnected status.
func MonitorRedisHealth(ctx context.Context, client *redis.Client, checkInterval time.Duration) {
ticker := time.NewTicker(checkInterval)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
log.Info("Stopping Redis health monitor.")
return
case <-ticker.C:
err := client.Ping(ctx).Err()
mu.Lock()
if err != nil {
if redisConnected {
log.Errorf("Redis health check failed: %v", err)
}
redisConnected = false
} else {
if !redisConnected {
log.Info("Redis reconnected successfully")
}
redisConnected = true
log.Debug("Redis health check succeeded.")
}
mu.Unlock()
}
}
}
// Helper function to parse duration strings
func parseDuration(durationStr string) time.Duration {
duration, err := time.ParseDuration(durationStr)
if err != nil {
log.WithError(err).Warn("Invalid duration format, using default 30s")
return 30 * time.Second
}
return duration
}
// RunFileCleaner periodically deletes files that exceed the FileTTL duration.
func runFileCleaner(ctx context.Context, storeDir string, ttl time.Duration) {
ticker := time.NewTicker(1 * time.Hour)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
log.Info("Stopping file cleaner.")
return
case <-ticker.C:
now := time.Now()
err := filepath.Walk(storeDir, func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
if info.IsDir() {
return nil
}
if now.Sub(info.ModTime()) > ttl {
err := os.Remove(path)
if err != nil {
log.WithError(err).Errorf("Failed to remove expired file: %s", path)
} else {
log.Infof("Removed expired file: %s", path)
}
}
return nil
})
if err != nil {
log.WithError(err).Error("Error walking store directory for file cleaning")
}
}
}
}
// DeduplicateFiles scans the store directory and removes duplicate files based on SHA256 hash.
// It retains one copy of each unique file and replaces duplicates with hard links.
func DeduplicateFiles(storeDir string) error {
hashMap := make(map[string]string) // map[hash]filepath
var mu sync.Mutex
var wg sync.WaitGroup
fileChan := make(chan string, 100)
// Worker to process files
numWorkers := 10
for i := 0; i < numWorkers; i++ {
wg.Add(1)
go func() {
defer wg.Done()
for filePath := range fileChan {
hash, err := computeFileHash(filePath)
if err != nil {
logrus.WithError(err).Errorf("Failed to compute hash for %s", filePath)
continue
}
mu.Lock()
original, exists := hashMap[hash]
if !exists {
hashMap[hash] = filePath
mu.Unlock()
continue
}
mu.Unlock()
// Duplicate found
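// Remove this copy and hard-link its path to the original so both names share one inode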
err = os.Remove(filePath)
if err != nil {
logrus.WithError(err).Errorf("Failed to remove duplicate file %s", filePath)
continue
}
// Create hard link to the original file
err = os.Link(original, filePath)
if err != nil {
logrus.WithError(err).Errorf("Failed to create hard link from %s to %s", original, filePath)
continue
}
logrus.Infof("Removed duplicate %s and linked to %s", filePath, original)
}
}()
}
// Walk through the store directory
err := filepath.Walk(storeDir, func(path string, info os.FileInfo, err error) error {
if err != nil {
logrus.WithError(err).Errorf("Error accessing path %s", path)
return nil
}
if !info.Mode().IsRegular() {
return nil
}
fileChan <- path
return nil
})
if err != nil {
return fmt.Errorf("error walking the path %s: %w", storeDir, err)
}
close(fileChan)
wg.Wait()
return nil
}
// computeFileHash computes the SHA256 hash of the given file.
func computeFileHash(filePath string) (string, error) {
file, err := os.Open(filePath)
if err != nil {
return "", fmt.Errorf("unable to open file %s: %w", filePath, err)
}
defer file.Close()
hasher := sha256.New()
if _, err := io.Copy(hasher, file); err != nil {
return "", fmt.Errorf("error hashing file %s: %w", filePath, err)
}
return hex.EncodeToString(hasher.Sum(nil)), nil
}
// Handle multipart uploads
func handleMultipartUpload(w http.ResponseWriter, r *http.Request, absFilename string) error {
err := r.ParseMultipartForm(32 << 20) // 32MB is the default used by FormFile
if err != nil {
log.WithError(err).Error("Failed to parse multipart form")
http.Error(w, "Failed to parse multipart form", http.StatusBadRequest)
return err
}
file, handler, err := r.FormFile("file")
if err != nil {
log.WithError(err).Error("Failed to retrieve file from form data")
http.Error(w, "Failed to retrieve file from form data", http.StatusBadRequest)
return err
}
defer file.Close()
// Validate file extension
if !isExtensionAllowed(handler.Filename) {
log.WithFields(logrus.Fields{
"filename": handler.Filename,
"extension": filepath.Ext(handler.Filename),
}).Warn("Attempted upload with disallowed file extension")
http.Error(w, "Disallowed file extension. Allowed extensions are: "+strings.Join(conf.Uploads.AllowedExtensions, ", "), http.StatusForbidden)
uploadErrorsTotal.Inc()
return fmt.Errorf("disallowed file extension")
}
// Create a temporary file
tempFilename := absFilename + ".tmp"
tempFile, err := os.OpenFile(tempFilename, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0666)
if err != nil {
log.WithError(err).Error("Failed to create temporary file")
http.Error(w, "Failed to create temporary file", http.StatusInternalServerError)
return err
}
defer tempFile.Close()
// Copy the uploaded file to the temporary file
_, err = io.Copy(tempFile, file)
if err != nil {
log.WithError(err).Error("Failed to copy uploaded file to temporary file")
http.Error(w, "Failed to copy uploaded file", http.StatusInternalServerError)
return err
}
// Perform ClamAV scan on the temporary file
if clamClient != nil {
err := scanFileWithClamAV(tempFilename)
if err != nil {
log.WithFields(logrus.Fields{
"file": tempFilename,
"error": err,
}).Warn("ClamAV detected a virus or scan failed")
os.Remove(tempFilename)
uploadErrorsTotal.Inc()
return err
}
}
// Handle file versioning if enabled
if conf.Versioning.EnableVersioning {
existing, _ := fileExists(absFilename)
if existing {
err := versionFile(absFilename)
if err != nil {
log.WithFields(logrus.Fields{
"file": absFilename,
"error": err,
}).Error("Error versioning file")
os.Remove(tempFilename)
return err
}
}
}
// Move the temporary file to the final destination
err = os.Rename(tempFilename, absFilename)
if err != nil {
log.WithFields(logrus.Fields{
"temp_file": tempFilename,
"final_file": absFilename,
"error": err,
}).Error("Failed to move file to final destination")
os.Remove(tempFilename)
return err
}
log.WithFields(logrus.Fields{
"file": absFilename,
}).Info("File uploaded and scanned successfully")
uploadsTotal.Inc()
return nil
}
// sanitizeFilePath ensures that the file path is within the designated storage directory
func sanitizeFilePath(baseDir, filePath string) (string, error) {
// Resolve the absolute path
absBaseDir, err := filepath.Abs(baseDir)
if err != nil {
return "", fmt.Errorf("failed to resolve base directory: %w", err)
}
absFilePath, err := filepath.Abs(filepath.Join(absBaseDir, filePath))
if err != nil {
return "", fmt.Errorf("failed to resolve file path: %w", err)
}
// Check that the resolved path stays within the base directory; compare on a
// path-separator boundary so sibling directories with a shared prefix are rejected
if absFilePath != absBaseDir && !strings.HasPrefix(absFilePath, absBaseDir+string(os.PathSeparator)) {
return "", fmt.Errorf("invalid file path: %s", filePath)
}
return absFilePath, nil
}
// checkStorageSpace ensures that there is enough free space in the storage path
func checkStorageSpace(storagePath string, minFreeBytes int64) error {
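// Note: syscall.Statfs is specific to Unix-like systems (Linux, BSD, macOS); this check will not compile on Windows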
var stat syscall.Statfs_t
err := syscall.Statfs(storagePath, &stat)
if err != nil {
return fmt.Errorf("failed to get filesystem stats: %w", err)
}
// Calculate available bytes
availableBytes := stat.Bavail * uint64(stat.Bsize)
if int64(availableBytes) < minFreeBytes {
return fmt.Errorf("not enough free space: %d bytes available, %d bytes required", availableBytes, minFreeBytes)
}
return nil
}
// Function to compute SHA256 checksum of a file
func computeSHA256(filePath string) (string, error) {
file, err := os.Open(filePath)
if err != nil {
return "", fmt.Errorf("failed to open file for checksum: %w", err)
}
defer file.Close()
hasher := sha256.New()
if _, err := io.Copy(hasher, file); err != nil {
return "", fmt.Errorf("failed to compute checksum: %w", err)
}
return hex.EncodeToString(hasher.Sum(nil)), nil
}
// handleDeduplication handles file deduplication using SHA256 checksum and hard links
func handleDeduplication(ctx context.Context, absFilename string) error {
// Compute checksum of the uploaded file
checksum, err := computeSHA256(absFilename)
if err != nil {
log.Errorf("Failed to compute SHA256 for %s: %v", absFilename, err)
return fmt.Errorf("checksum computation failed: %w", err)
}
log.Debugf("Computed checksum for %s: %s", absFilename, checksum)
// Check Redis for existing checksum
existingPath, err := redisClient.Get(ctx, checksum).Result()
if err != nil && err != redis.Nil {
log.Errorf("Redis error while fetching checksum %s: %v", checksum, err)
return fmt.Errorf("redis error: %w", err)
}
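// At this point err is either nil (checksum already known) or redis.Nil (not seen before)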
if err != redis.Nil {
// Duplicate found, create hard link
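// Note: os.Link only works when both paths are on the same filesystem; cross-device links will fail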
log.Infof("Duplicate detected: %s already exists at %s", absFilename, existingPath)
err = os.Link(existingPath, absFilename)
if err != nil {
log.Errorf("Failed to create hard link from %s to %s: %v", existingPath, absFilename, err)
return fmt.Errorf("failed to create hard link: %w", err)
}
log.Infof("Created hard link from %s to %s", existingPath, absFilename)
return nil
}
// No duplicate found, store checksum in Redis
err = redisClient.Set(ctx, checksum, absFilename, 0).Err()
if err != nil {
log.Errorf("Failed to store checksum %s in Redis: %v", checksum, err)
return fmt.Errorf("failed to store checksum in Redis: %w", err)
}
log.Infof("Stored new file checksum in Redis: %s -> %s", checksum, absFilename)
return nil
}

888
dashboard/dashboard.json Normal file
View File

@@ -0,0 +1,888 @@
{
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": {
"type": "grafana",
"uid": "-- Grafana --"
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"type": "dashboard"
}
]
},
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 0,
"id": 70,
"links": [],
"panels": [
{
"fieldConfig": {
"defaults": {},
"overrides": []
},
"gridPos": {
"h": 6,
"w": 24,
"x": 0,
"y": 0
},
"id": 5,
"options": {
"code": {
"language": "plaintext",
"showLineNumbers": false,
"showMiniMap": false
},
"content": "<div style=\"text-align: center; background-color: #111217; padding: 20px;\">\n <h3 style=\"color: white; font-family: 'Arial', sans-serif; font-weight: bold;\">HMAC Dashboard</h3>\n <img src=\"https://block.uuxo.net/hmac_icon.png\" alt=\"HMAC Icon\" style=\"width: 50px; height: 50px; display: block; margin: 10px auto;\">\n <p style=\"font-family: 'Verdana', sans-serif; color: white;\">\n This dashboard monitors <strong style=\"color: #FF5733;\">key metrics</strong> for the \n <span style=\"font-style: italic; color: #007BFF;\">HMAC File Server</span>.\n </p>\n</div>\n",
"mode": "html"
},
"pluginVersion": "11.3.1",
"title": "HMAC Dashboard",
"transparent": true,
"type": "text"
},
{
"datasource": {
"default": true,
"type": "prometheus"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "percent"
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 6,
"x": 0,
"y": 6
},
"id": 14,
"options": {
"displayMode": "gradient",
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": false
},
"maxVizHeight": 300,
"minVizHeight": 16,
"minVizWidth": 8,
"namePlacement": "auto",
"orientation": "horizontal",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"showUnfilled": true,
"sizing": "auto",
"valueMode": "color"
},
"pluginVersion": "11.3.1",
"targets": [
{
"editorMode": "code",
"expr": "hmac_cpu_usage_percent",
"format": "table",
"legendFormat": "Downloads",
"range": true,
"refId": "A"
}
],
"title": "HMAC CPU Usage",
"type": "bargauge"
},
{
"datasource": {
"default": true,
"type": "prometheus"
},
"fieldConfig": {
"defaults": {
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
},
"unit": "bytes"
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 6,
"x": 6,
"y": 6
},
"id": 18,
"options": {
"colorMode": "value",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"percentChangeColorMode": "standard",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"showPercentChange": false,
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "11.3.1",
"targets": [
{
"editorMode": "code",
"expr": "hmac_memory_usage_bytes",
"legendFormat": "Download Errors",
"range": true,
"refId": "A"
}
],
"title": "HMAC Memory Usage",
"type": "stat"
},
{
"datasource": {
"default": true,
"type": "prometheus"
},
"fieldConfig": {
"defaults": {
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 6,
"x": 12,
"y": 6
},
"id": 17,
"options": {
"colorMode": "value",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"percentChangeColorMode": "standard",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"showPercentChange": false,
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "11.3.1",
"targets": [
{
"editorMode": "code",
"expr": "hmac_goroutines_count",
"format": "table",
"legendFormat": "Download Errors",
"range": true,
"refId": "A"
}
],
"title": "HMAC GoRoutines",
"type": "stat"
},
{
"datasource": {
"default": true,
"type": "prometheus"
},
"fieldConfig": {
"defaults": {
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "short"
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 3,
"x": 18,
"y": 6
},
"id": 11,
"options": {
"colorMode": "value",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"percentChangeColorMode": "standard",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"showPercentChange": false,
"textMode": "value",
"wideLayout": true
},
"pluginVersion": "11.3.1",
"targets": [
{
"editorMode": "code",
"expr": "hmac_file_server_uploads_total",
"format": "table",
"legendFormat": "Uploads",
"range": true,
"refId": "A"
}
],
"title": "HMAC Uploads",
"type": "stat"
},
{
"datasource": {
"default": true,
"type": "prometheus"
},
"fieldConfig": {
"defaults": {
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "short"
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 3,
"x": 21,
"y": 6
},
"id": 12,
"options": {
"colorMode": "value",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"percentChangeColorMode": "standard",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"showPercentChange": false,
"textMode": "value",
"wideLayout": true
},
"pluginVersion": "11.3.1",
"targets": [
{
"editorMode": "code",
"expr": "hmac_file_server_downloads_total",
"legendFormat": "Downloads",
"range": true,
"refId": "A"
}
],
"title": "HMAC Downloads",
"type": "stat"
},
{
"datasource": {
"default": true,
"type": "prometheus"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "s"
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 3,
"x": 0,
"y": 11
},
"id": 15,
"options": {
"colorMode": "value",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"percentChangeColorMode": "standard",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"showPercentChange": false,
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "11.3.1",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "bduehd5vqv1moa"
},
"editorMode": "code",
"expr": "hmac_file_server_upload_duration_seconds_sum + hmac_file_server_download_duration_seconds_sum",
"hide": false,
"instant": false,
"legendFormat": "__auto",
"range": true,
"refId": "B"
}
],
"title": "HMAC Up/Down Duration",
"type": "stat"
},
{
"datasource": {
"default": true,
"type": "prometheus"
},
"fieldConfig": {
"defaults": {
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 3,
"x": 3,
"y": 11
},
"id": 10,
"options": {
"colorMode": "value",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "auto",
"percentChangeColorMode": "standard",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"showPercentChange": false,
"textMode": "value",
"wideLayout": true
},
"pluginVersion": "11.3.1",
"targets": [
{
"editorMode": "code",
"expr": "go_threads",
"format": "table",
"legendFormat": "{{hmac-file-server}}",
"range": true,
"refId": "A"
}
],
"title": "HMAC GO Threads",
"type": "stat"
},
{
"datasource": {
"default": true,
"type": "prometheus"
},
"fieldConfig": {
"defaults": {
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 3,
"x": 6,
"y": 11
},
"id": 21,
"options": {
"colorMode": "value",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "auto",
"percentChangeColorMode": "standard",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"showPercentChange": false,
"textMode": "value",
"wideLayout": true
},
"pluginVersion": "11.3.1",
"targets": [
{
"editorMode": "code",
"expr": "hmac_file_deletions_total",
"format": "table",
"legendFormat": "{{hmac-file-server}}",
"range": true,
"refId": "A"
}
],
"title": "HMAC FileTTL Deletion(s)",
"type": "stat"
},
{
"datasource": {
"default": true,
"type": "prometheus"
},
"fieldConfig": {
"defaults": {
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 3,
"x": 9,
"y": 11
},
"id": 20,
"options": {
"colorMode": "value",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "auto",
"percentChangeColorMode": "standard",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"showPercentChange": false,
"textMode": "value",
"wideLayout": true
},
"pluginVersion": "11.3.1",
"targets": [
{
"editorMode": "code",
"expr": "hmac_cache_misses_total",
"format": "table",
"legendFormat": "{{hmac-file-server}}",
"range": true,
"refId": "A"
}
],
"title": "HMAC Cache Misses",
"type": "stat"
},
{
"datasource": {
"default": true,
"type": "prometheus"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 3,
"x": 12,
"y": 11
},
"id": 16,
"options": {
"colorMode": "value",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"percentChangeColorMode": "standard",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"showPercentChange": false,
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "11.3.1",
"targets": [
{
"editorMode": "code",
"exemplar": false,
"expr": "hmac_active_connections_total",
"format": "table",
"instant": false,
"legendFormat": "__auto",
"range": true,
"refId": "A"
}
],
"title": "HMAC Active Connections",
"type": "stat"
},
{
"datasource": {
"default": true,
"type": "prometheus"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 3,
"x": 15,
"y": 11
},
"id": 19,
"options": {
"colorMode": "value",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"percentChangeColorMode": "standard",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"showPercentChange": false,
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "11.3.1",
"targets": [
{
"editorMode": "code",
"expr": "hmac_infected_files_total",
"format": "table",
"legendFormat": "__auto",
"range": true,
"refId": "A"
}
],
"title": "HMAC infected file(s)",
"type": "stat"
},
{
"datasource": {
"default": true,
"type": "prometheus"
},
"fieldConfig": {
"defaults": {
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 3,
"x": 18,
"y": 11
},
"id": 2,
"options": {
"colorMode": "value",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"percentChangeColorMode": "standard",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"showPercentChange": false,
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "11.3.1",
"targets": [
{
"editorMode": "code",
"expr": "hmac_file_server_upload_errors_total",
"legendFormat": "Upload Errors",
"range": true,
"refId": "A"
}
],
"title": "HMAC Upload Errors",
"type": "stat"
},
{
"datasource": {
"default": true,
"type": "prometheus"
},
"fieldConfig": {
"defaults": {
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 3,
"x": 21,
"y": 11
},
"id": 13,
"options": {
"colorMode": "value",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"percentChangeColorMode": "standard",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"showPercentChange": false,
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "11.3.1",
"targets": [
{
"editorMode": "code",
"expr": "hmac_file_server_download_errors_total",
"legendFormat": "Download Errors",
"range": true,
"refId": "A"
}
],
"title": "HMAC Download Errors",
"type": "stat"
}
],
"preload": false,
"schemaVersion": 40,
"tags": [],
"templating": {
"list": []
},
"time": {
"from": "now-5m",
"to": "now"
},
"timepicker": {},
"timezone": "",
"title": "HMAC File Server Metrics",
"uid": "de0ye5t0hzq4ge",
"version": 129,
"weekStart": ""
}

52
go.mod Normal file
View File

@@ -0,0 +1,52 @@
module github.com/PlusOne/hmac-file-server
go 1.21
require (
github.com/go-redis/redis/v8 v8.11.5
github.com/prometheus/client_golang v1.20.5
github.com/shirou/gopsutil v3.21.11+incompatible
github.com/sirupsen/logrus v1.9.3
)
require (
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
github.com/fsnotify/fsnotify v1.7.0 // indirect
github.com/hashicorp/hcl v1.0.0 // indirect
github.com/magiconair/properties v1.8.7 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
github.com/pelletier/go-toml/v2 v2.2.2 // indirect
github.com/sagikazarmark/locafero v0.4.0 // indirect
github.com/sagikazarmark/slog-shim v0.1.0 // indirect
github.com/sourcegraph/conc v0.3.0 // indirect
github.com/spf13/afero v1.11.0 // indirect
github.com/spf13/cast v1.6.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/spf13/viper v1.19.0 // indirect
github.com/subosito/gotenv v1.6.0 // indirect
go.uber.org/atomic v1.9.0 // indirect
go.uber.org/multierr v1.9.0 // indirect
golang.org/x/exp v0.0.0-20230905200255-921286631fa9 // indirect
golang.org/x/text v0.18.0 // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)
require (
github.com/BurntSushi/toml v1.4.0
github.com/beorn7/perks v1.0.1 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/dutchcoders/go-clamd v0.0.0-20170520113014-b970184f4d9e
github.com/go-ole/go-ole v1.3.0 // indirect
github.com/klauspost/compress v1.17.11 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/patrickmn/go-cache v2.1.0+incompatible
github.com/prometheus/client_model v0.6.1 // indirect
github.com/prometheus/common v0.60.1 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/tklauser/go-sysconf v0.3.14 // indirect
github.com/tklauser/numcpus v0.9.0 // indirect
github.com/yusufpapurcu/wmi v1.2.4 // indirect
golang.org/x/sys v0.26.0 // indirect
google.golang.org/protobuf v1.35.1 // indirect
)

122
go.sum Normal file
View File

@@ -0,0 +1,122 @@
github.com/BurntSushi/toml v1.4.0 h1:kuoIxZQy2WRRk1pttg9asf+WVv6tWQuBNVmK8+nqPr0=
github.com/BurntSushi/toml v1.4.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/dutchcoders/go-clamd v0.0.0-20170520113014-b970184f4d9e h1:rcHHSQqzCgvlwP0I/fQ8rQMn/MpHE5gWSLdtpxtP6KQ=
github.com/dutchcoders/go-clamd v0.0.0-20170520113014-b970184f4d9e/go.mod h1:Byz7q8MSzSPkouskHJhX0er2mZY/m0Vj5bMeMCkkyY4=
github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA=
github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/go-ole/go-ole v1.3.0 h1:Dt6ye7+vXGIKZ7Xtk4s6/xVdGDQynvom7xCFEdWr6uE=
github.com/go-ole/go-ole v1.3.0/go.mod h1:5LS6F96DhAwUc7C+1HLexzMXY1xGRSryjyPPKW6zv78=
github.com/go-redis/redis/v8 v8.11.5 h1:AcZZR7igkdvfVmQTPnu9WE37LRrO/YrBH5zWyjDC0oI=
github.com/go-redis/redis/v8 v8.11.5/go.mod h1:gREzHqY1hg6oD9ngVRbLStwAWKhA0FEgq8Jd4h5lpwo=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/klauspost/compress v1.17.11 h1:In6xLpyWOi1+C7tXUUWv2ot1QvBjxevKAaI6IXrJmUc=
github.com/klauspost/compress v1.17.11/go.mod h1:pMDklpSncoRMuLFrf1W9Ss9KT+0rH90U12bZKk7uwG0=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/magiconair/properties v1.8.7 h1:IeQXZAiQcpL9mgcAe1Nu6cX9LLw6ExEHKjN0VQdvPDY=
github.com/magiconair/properties v1.8.7/go.mod h1:Dhd985XPs7jluiymwWYZ0G4Z61jb3vdS329zhj2hYo0=
github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/nxadm/tail v1.4.8 h1:nPr65rt6Y5JFSKQO7qToXr7pePgD6Gwiw05lkbyAQTE=
github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU=
github.com/onsi/ginkgo v1.16.5 h1:8xi0RTUf59SOSfEtZMvwTvXYMzG4gV23XVHOZiXNtnE=
github.com/onsi/ginkgo v1.16.5/go.mod h1:+E8gABHa3K6zRBolWtd+ROzc/U5bkGt0FwiG042wbpU=
github.com/onsi/gomega v1.18.1 h1:M1GfJqGRrBrrGGsbxzV5dqM2U2ApXefZCQpkukxYRLE=
github.com/onsi/gomega v1.18.1/go.mod h1:0q+aL8jAiMXy9hbwj2mr5GziHiwhAIQpFmmtT5hitRs=
github.com/patrickmn/go-cache v2.1.0+incompatible h1:HRMgzkcYKYpi3C8ajMPV8OFXaaRUnok+kx1WdO15EQc=
github.com/patrickmn/go-cache v2.1.0+incompatible/go.mod h1:3Qf8kWWT7OJRJbdiICTKqZju1ZixQ/KpMGzzAfe6+WQ=
github.com/pelletier/go-toml/v2 v2.2.2 h1:aYUidT7k73Pcl9nb2gScu7NSrKCSHIDE89b3+6Wq+LM=
github.com/pelletier/go-toml/v2 v2.2.2/go.mod h1:1t835xjRzz80PqgE6HHgN2JOsmgYu/h4qDAS4n929Rs=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/prometheus/client_golang v1.20.5 h1:cxppBPuYhUnsO6yo/aoRol4L7q7UFfdm+bR9r+8l63Y=
github.com/prometheus/client_golang v1.20.5/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/common v0.60.1 h1:FUas6GcOw66yB/73KC+BOZoFJmbo/1pojoILArPAaSc=
github.com/prometheus/common v0.60.1/go.mod h1:h0LYf1R1deLSKtD4Vdg8gy4RuOvENW2J/h19V5NADQw=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/sagikazarmark/locafero v0.4.0 h1:HApY1R9zGo4DBgr7dqsTH/JJxLTTsOt7u6keLGt6kNQ=
github.com/sagikazarmark/locafero v0.4.0/go.mod h1:Pe1W6UlPYUk/+wc/6KFhbORCfqzgYEpgQ3O5fPuL3H4=
github.com/sagikazarmark/slog-shim v0.1.0 h1:diDBnUNK9N/354PgrxMywXnAwEr1QZcOr6gto+ugjYE=
github.com/sagikazarmark/slog-shim v0.1.0/go.mod h1:SrcSrq8aKtyuqEI1uvTDTK1arOWRIczQRv+GVI1AkeQ=
github.com/shirou/gopsutil v3.21.11+incompatible h1:+1+c1VGhc88SSonWP6foOcLhvnKlUeu/erjjvaPEYiI=
github.com/shirou/gopsutil v3.21.11+incompatible/go.mod h1:5b4v6he4MtMOwMlS0TUMTu2PcXUg8+E1lC7eC3UO/RA=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/sourcegraph/conc v0.3.0 h1:OQTbbt6P72L20UqAkXXuLOj79LfEanQ+YQFNpLA9ySo=
github.com/sourcegraph/conc v0.3.0/go.mod h1:Sdozi7LEKbFPqYX2/J+iBAM6HpqSLTASQIKqDmF7Mt0=
github.com/spf13/afero v1.11.0 h1:WJQKhtpdm3v2IzqG8VMqrr6Rf3UYpEF239Jy9wNepM8=
github.com/spf13/afero v1.11.0/go.mod h1:GH9Y3pIexgf1MTIWtNGyogA5MwRIDXGUr+hbWNoBjkY=
github.com/spf13/cast v1.6.0 h1:GEiTHELF+vaR5dhz3VqZfFSzZjYbgeKDpBxQVS4GYJ0=
github.com/spf13/cast v1.6.0/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.19.0 h1:RWq5SEjt8o25SROyN3z2OrDB9l7RPd3lwTWU8EcEdcI=
github.com/spf13/viper v1.19.0/go.mod h1:GQUN9bilAbhU/jgc1bKs99f/suXKeUMct8Adx5+Ntkg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8=
github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU=
github.com/tklauser/go-sysconf v0.3.14 h1:g5vzr9iPFFz24v2KZXs/pvpvh8/V9Fw6vQK5ZZb78yU=
github.com/tklauser/go-sysconf v0.3.14/go.mod h1:1ym4lWMLUOhuBOPGtRcJm7tEGX4SCYNEEEtghGG/8uY=
github.com/tklauser/numcpus v0.9.0 h1:lmyCHtANi8aRUgkckBgoDk1nHCux3n2cgkJLXdQGPDo=
github.com/tklauser/numcpus v0.9.0/go.mod h1:SN6Nq1O3VychhC1npsWostA+oW+VOQTxZrS604NSRyI=
github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0=
github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
go.uber.org/atomic v1.9.0 h1:ECmE8Bn/WFTYwEW/bpKD3M8VtR/zQVbavAoalC1PYyE=
go.uber.org/atomic v1.9.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/multierr v1.9.0 h1:7fIwc/ZtS0q++VgcfqFDxSBZVv/Xo49/SYnDFupUwlI=
go.uber.org/multierr v1.9.0/go.mod h1:X2jQV1h+kxSjClGpnseKVIxpmcjrj7MNnI0bnlfKTVQ=
golang.org/x/exp v0.0.0-20230905200255-921286631fa9 h1:GoHiUyI/Tp2nVkLI2mCxVkOjsbSXD66ic0XW0js0R9g=
golang.org/x/exp v0.0.0-20230905200255-921286631fa9/go.mod h1:S2oDrQGGwySpoQPVqRShND87VCbxmc6bL1Yd2oYrm6k=
golang.org/x/net v0.29.0 h1:5ORfpBpCs4HzDYoodCDBbwHzdR5UrLBZ3sOnUJmFoHo=
golang.org/x/net v0.29.0/go.mod h1:gLkgy8jTGERgjzMic6DS9+SP0ajcu6Xu3Orq/SpETg0=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.26.0 h1:KHjCJyddX0LoSTb3J+vWpupP9p0oznkqVk/IfjymZbo=
golang.org/x/sys v0.26.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/text v0.18.0 h1:XvMDiNzPAl0jr17s6W9lcaIhGUfUORdGCNsuLmPG224=
golang.org/x/text v0.18.0/go.mod h1:BuEKDfySbSR4drPmRPG/7iBdf8hvFMuRexcpahXilzY=
google.golang.org/protobuf v1.35.1 h1:m3LfL6/Ca+fqnjnlqQXNpFPABW1UD7mjh8KO2mKFytA=
google.golang.org/protobuf v1.35.1/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/ini.v1 v1.67.0 h1:Dgnx+6+nfE+IfzjUEISNeydPJh9AXNNsWbGP9KzCsOA=
gopkg.in/ini.v1 v1.67.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

BIN
test/hmac_icon.png Normal file
View File

Binary file not shown.

Size: 60 KiB

80
test/hmac_test.go Normal file
View File

@@ -0,0 +1,80 @@
package main
import (
"crypto/hmac"
"crypto/sha256"
"encoding/hex"
"fmt"
"mime"
"net/http"
"net/url"
"os"
"path/filepath" // Added this import for filepath usage
"strconv"
"testing"
)
const (
serverURL = "http://[::1]:8080" // Replace with your actual server URL
secret = "a-orc-and-a-humans-is-drinking-ale" // Replace with your HMAC secret key
uploadPath = "hmac_icon.png" // Test file to upload
protocolType = "v2" // Use v2, v, or token as needed
)
// TestUpload performs a basic HMAC validation and upload test.
func TestUpload(t *testing.T) {
// File setup for testing
file, err := os.Open(uploadPath)
if err != nil {
t.Fatalf("Error opening file: %v", err)
}
defer file.Close()
fileInfo, _ := file.Stat()
fileStorePath := uploadPath
contentLength := fileInfo.Size()
// Generate HMAC based on protocol type
hmacValue := generateHMAC(fileStorePath, contentLength, protocolType)
// Formulate request URL with HMAC in query params
reqURL := fmt.Sprintf("%s/%s?%s=%s", serverURL, fileStorePath, protocolType, url.QueryEscape(hmacValue))
// Prepare HTTP PUT request with file data
req, err := http.NewRequest(http.MethodPut, reqURL, file)
if err != nil {
t.Fatalf("Error creating request: %v", err)
}
req.Header.Set("Content-Type", "application/octet-stream")
req.Header.Set("Content-Length", strconv.FormatInt(contentLength, 10))
// Execute HTTP request
resp, err := http.DefaultClient.Do(req)
if err != nil {
t.Fatalf("Error executing request: %v", err)
}
defer resp.Body.Close()
t.Logf("Response status: %s", resp.Status)
}
// Generates the HMAC based on your protocol version
func generateHMAC(filePath string, contentLength int64, protocol string) string {
mac := hmac.New(sha256.New, []byte(secret))
macString := ""
// Calculate HMAC according to protocol
if protocol == "v" {
mac.Write([]byte(filePath + "\x20" + strconv.FormatInt(contentLength, 10)))
macString = hex.EncodeToString(mac.Sum(nil))
} else if protocol == "v2" || protocol == "token" {
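// "v2"/"token" protocols: sign "<path>\x00<size>\x00<content-type>" with NUL separators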
contentType := mime.TypeByExtension(filepath.Ext(filePath))
if contentType == "" {
contentType = "application/octet-stream"
}
mac.Write([]byte(filePath + "\x00" + strconv.FormatInt(contentLength, 10) + "\x00" + contentType))
macString = hex.EncodeToString(mac.Sum(nil))
}
return macString
}