Business Technology

Self-Hosting in 2026: The Complete Guide to Running Your Own Services

Why pay monthly SaaS fees when you can run the same (or better) services on your own hardware? This comprehensive guide covers self-hosting everything from email and file storage to Git repositories, project management, analytics, and monitoring. Learn about hardware selection, Docker Compose configurations, reverse proxy setup with Nginx, SSL certificates, backup strategies, and maintaining uptime.

Alex Thompson

CEO & Cloud Architecture Expert at ZeonEdge with 15+ years building enterprise infrastructure.

March 8, 2026
42 min read

Every month, businesses pay hundreds or thousands of dollars for SaaS subscriptions: Google Workspace for email, Slack for messaging, Dropbox for file storage, GitHub for code, Jira for project management, Google Analytics for web analytics. These services are convenient, but they come with significant costs: monthly fees that increase as your team grows, data stored on someone else's servers in someone else's jurisdiction, vendor lock-in that makes migration painful, limited customization, and the ever-present risk of price increases or service discontinuation.

Self-hosting means running these services on your own infrastructure — a cloud server, a dedicated server, or even hardware in your office. The open-source ecosystem in 2026 provides alternatives to almost every SaaS product, many of which are equal to or better than their commercial counterparts.

This guide covers the complete self-hosting stack: infrastructure choices, Docker Compose deployments, reverse proxy configuration, SSL certificates, backup strategies, monitoring, and maintenance procedures.

Chapter 1: Infrastructure Choices

VPS (Virtual Private Server)

A VPS is the easiest way to start self-hosting. You get a virtual machine with a public IP address, and you install whatever you want on it.

Recommended providers (2026):

  • Hetzner: Best value in Europe. A CX41 (4 vCPU, 16 GB RAM, 160 GB NVMe) costs about 15 euros/month. Their dedicated servers are even better value — an AX41-NVMe (Ryzen 5 3600, 64 GB RAM, 2x512 GB NVMe) runs about 40 euros/month.
  • Contabo: Ultra-cheap but slower storage. Good for non-critical services.
  • DigitalOcean/Linode/Vultr: Good documentation, slightly more expensive, US-based.
  • Oracle Cloud Free Tier: 4 ARM cores, 24 GB RAM — completely free, forever. Limited availability.

Minimum Specifications by Use Case

# Small team (1-5 people): Email, files, Git, wiki
# 2 vCPU, 4 GB RAM, 80 GB SSD
# Cost: ~5-8 euros/month

# Medium team (5-20 people): Above + project management, CI/CD, analytics
# 4 vCPU, 16 GB RAM, 200 GB SSD
# Cost: ~15-25 euros/month

# Large team (20-100 people): Full suite with redundancy
# 8 vCPU, 32 GB RAM, 500 GB SSD (or dedicated server)
# Cost: ~40-80 euros/month

Initial Server Setup

# Connect to your new server
ssh root@your-server-ip

# Update system
apt update && apt upgrade -y

# Install Docker and Docker Compose
curl -fsSL https://get.docker.com | sh
apt install -y docker-compose-plugin

# Install essential tools
apt install -y git htop ncdu tmux ufw certbot python3-certbot-nginx nginx

# Configure firewall
ufw default deny incoming
ufw default allow outgoing
ufw allow ssh
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable

# Create a directory structure for your services
mkdir -p /opt/services/{data,config,backups}
cd /opt/services

Chapter 2: Reverse Proxy with Nginx and SSL

A reverse proxy sits in front of all your self-hosted services and routes traffic based on the domain name. It also handles SSL termination so that each service doesn't need to manage its own certificates.

# /opt/services/docker-compose.yml — Nginx Proxy Manager
# (the easiest way to manage reverse proxy + SSL)

version: '3.8'

services:
  nginx-proxy-manager:
    image: jc21/nginx-proxy-manager:latest
    container_name: nginx-proxy-manager
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "81:81"  # Admin UI
    volumes:
      - ./data/nginx-proxy-manager/data:/data
      - ./data/nginx-proxy-manager/letsencrypt:/etc/letsencrypt
    environment:
      DB_SQLITE_FILE: "/data/database.sqlite"

Alternatively, use Nginx directly with Certbot for Let's Encrypt SSL certificates:

# /etc/nginx/conf.d/gitea.conf
server {
    listen 443 ssl http2;
    server_name git.yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/git.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/git.yourdomain.com/privkey.pem;

    # Strong SSL configuration
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
    ssl_prefer_server_ciphers off;

    # Security headers
    add_header Strict-Transport-Security "max-age=63072000" always;
    add_header X-Content-Type-Options nosniff;
    add_header X-Frame-Options DENY;

    location / {
        proxy_pass http://localhost:3001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket support (needed for some services)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # Increase upload size for Git pushes
    client_max_body_size 512M;
}

# HTTP to HTTPS redirect
server {
    listen 80;
    server_name git.yourdomain.com;
    return 301 https://$server_name$request_uri;
}
# Generate SSL certificate with Certbot (run this before enabling the SSL
# server block above, since it references /etc/letsencrypt paths that
# only exist after the certificate has been issued)
certbot --nginx -d git.yourdomain.com

# Auto-renewal (already set up by certbot, verify with)
certbot renew --dry-run

Chapter 3: Essential Self-Hosted Services

Git Repository: Gitea

Gitea is a lightweight Git hosting solution that's a drop-in replacement for GitHub/GitLab. It supports pull requests, issues, CI/CD (via Gitea Actions), package registry, wikis, and more — using about 200 MB of RAM.

# /opt/services/gitea/docker-compose.yml
version: '3.8'

services:
  gitea:
    image: gitea/gitea:latest
    container_name: gitea
    restart: unless-stopped
    environment:
      - USER_UID=1000
      - USER_GID=1000
      - GITEA__database__DB_TYPE=postgres
      - GITEA__database__HOST=gitea-db:5432
      - GITEA__database__NAME=gitea
      - GITEA__database__USER=gitea
      - GITEA__database__PASSWD=secure_password_here
      - GITEA__server__ROOT_URL=https://git.yourdomain.com/
      - GITEA__server__SSH_DOMAIN=git.yourdomain.com
      - GITEA__server__SSH_PORT=2222
      - GITEA__mailer__ENABLED=true
      - GITEA__mailer__SMTP_ADDR=smtp.yourdomain.com
      - GITEA__mailer__SMTP_PORT=587
      - GITEA__mailer__FROM=gitea@yourdomain.com
      - GITEA__service__DISABLE_REGISTRATION=true
    volumes:
      - ./data/gitea:/data
    ports:
      - "3001:3000"
      - "2222:22"
    depends_on:
      - gitea-db

  gitea-db:
    image: postgres:16-alpine
    container_name: gitea-db
    restart: unless-stopped
    environment:
      POSTGRES_DB: gitea
      POSTGRES_USER: gitea
      POSTGRES_PASSWORD: secure_password_here
    volumes:
      - ./data/gitea-db:/var/lib/postgresql/data
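Before the first start, swap the secure_password_here placeholders for a real secret; the same value must end up in both the gitea and gitea-db services. A minimal sketch with openssl and GNU sed, demonstrated here against a stand-in file rather than the live compose file:

```shell
# Generate a random database password
DB_PASS=$(openssl rand -base64 24)

# Demonstrate the substitution on a throwaway snippet; in practice,
# point sed at /opt/services/gitea/docker-compose.yml instead
printf 'POSTGRES_PASSWORD: secure_password_here\n' > /tmp/compose-snippet.yml
sed -i "s|secure_password_here|${DB_PASS}|g" /tmp/compose-snippet.yml
cat /tmp/compose-snippet.yml
```

The | delimiter avoids clashing with the / characters that base64 output can contain. After substituting, start the stack with docker compose up -d.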

File Storage: Nextcloud

Nextcloud replaces Google Drive, Dropbox, and OneDrive. It provides file sync, sharing, collaborative document editing (with Collabora or OnlyOffice), calendars, contacts, tasks, and hundreds of apps.

# /opt/services/nextcloud/docker-compose.yml
version: '3.8'

services:
  nextcloud:
    image: nextcloud:latest
    container_name: nextcloud
    restart: unless-stopped
    environment:
      - POSTGRES_HOST=nextcloud-db
      - POSTGRES_DB=nextcloud
      - POSTGRES_USER=nextcloud
      - POSTGRES_PASSWORD=secure_password_here
      - REDIS_HOST=nextcloud-redis
      - NEXTCLOUD_ADMIN_USER=admin
      - NEXTCLOUD_ADMIN_PASSWORD=initial_admin_password
      - NEXTCLOUD_TRUSTED_DOMAINS=cloud.yourdomain.com
      - OVERWRITEPROTOCOL=https
      - OVERWRITEHOST=cloud.yourdomain.com
    volumes:
      - ./data/nextcloud/html:/var/www/html
      - ./data/nextcloud/data:/var/www/html/data
    ports:
      - "3002:80"
    depends_on:
      - nextcloud-db
      - nextcloud-redis

  nextcloud-db:
    image: postgres:16-alpine
    container_name: nextcloud-db
    restart: unless-stopped
    environment:
      POSTGRES_DB: nextcloud
      POSTGRES_USER: nextcloud
      POSTGRES_PASSWORD: secure_password_here
    volumes:
      - ./data/nextcloud-db:/var/lib/postgresql/data

  nextcloud-redis:
    image: redis:7-alpine
    container_name: nextcloud-redis
    restart: unless-stopped
    command: redis-server --requirepass redis_password_here
    volumes:
      - ./data/nextcloud-redis:/data
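Nextcloud also depends on periodic background jobs (file scans, trash expiry, app maintenance). With the container layout above, the usual approach is a host crontab entry that runs Nextcloud's cron.php inside the container every five minutes; add it via crontab -e, then set Background jobs to "Cron" in the Nextcloud admin settings:

```
*/5 * * * * docker exec -u www-data nextcloud php /var/www/html/cron.php
```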

Email: Mailu

Self-hosting email is notoriously difficult due to deliverability issues (spam filters, reputation, DKIM/SPF/DMARC). Mailu packages everything needed into a Docker-based solution.

# Generate Mailu configuration
# Visit: https://setup.mailu.io/ to generate your docker-compose.yml
# It handles: Postfix (SMTP), Dovecot (IMAP), Rspamd (spam filter),
# ClamAV (antivirus), Roundcube/Rainloop (webmail), and admin UI

# Key DNS records you MUST set:
# MX record: yourdomain.com → mail.yourdomain.com (priority 10)
# A record: mail.yourdomain.com → your-server-ip
# SPF: v=spf1 mx a:mail.yourdomain.com ~all
# DKIM: Generated by Mailu (add as TXT record)
# DMARC: _dmarc.yourdomain.com TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@yourdomain.com"
# rDNS/PTR: your-server-ip → mail.yourdomain.com (set via hosting provider)
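Once the records are published, verify them from outside your network before sending real mail. dig makes this quick; the commands below are a reference checklist (substitute your real domain and IP — the DKIM selector name shown is a placeholder and depends on your Mailu configuration):

```
# dig +short MX yourdomain.com            → 10 mail.yourdomain.com.
# dig +short A mail.yourdomain.com        → your-server-ip
# dig +short TXT yourdomain.com           → should include the SPF record
# dig +short TXT dkim._domainkey.yourdomain.com  → the DKIM public key
# dig +short TXT _dmarc.yourdomain.com    → the DMARC policy
# dig +short -x your-server-ip            → mail.yourdomain.com. (rDNS/PTR)
```

Services like mail-tester.com will score a test message against all of these checks at once.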

Project Management: Plane

Plane is an open-source alternative to Jira and Linear. It provides issues, sprints, kanban boards, roadmaps, and cycles with a modern interface.

# Clone and deploy Plane
git clone https://github.com/makeplane/plane.git /opt/services/plane
cd /opt/services/plane

# Configure environment
cp .env.example .env
# Edit .env with your domain, database credentials, etc.

# Deploy with Docker Compose
docker compose -f docker-compose.yml up -d

Analytics: Plausible or Umami

Replace Google Analytics with a privacy-friendly, self-hosted alternative. Plausible and Umami are both excellent choices that provide essential web analytics without cookies or personal data collection.

# /opt/services/plausible/docker-compose.yml
version: '3.8'

services:
  plausible:
    image: ghcr.io/plausible/community-edition:latest
    container_name: plausible
    restart: unless-stopped
    command: sh -c "sleep 10 && /entrypoint.sh db createdb && /entrypoint.sh db migrate && /entrypoint.sh run"
    ports:
      - "3005:8000"
    depends_on:
      - plausible-db
      - plausible-events-db
    environment:
      - DATABASE_URL=postgres://plausible:password@plausible-db:5432/plausible
      - CLICKHOUSE_DATABASE_URL=http://plausible-events-db:8123/plausible_events
      - SECRET_KEY_BASE=generate_with_openssl_rand_hex_64
      - BASE_URL=https://analytics.yourdomain.com
      - DISABLE_REGISTRATION=invite_only

  plausible-db:
    image: postgres:16-alpine
    container_name: plausible-db
    restart: unless-stopped
    environment:
      POSTGRES_DB: plausible
      POSTGRES_USER: plausible
      POSTGRES_PASSWORD: password
    volumes:
      - ./data/plausible-db:/var/lib/postgresql/data

  plausible-events-db:
    image: clickhouse/clickhouse-server:latest
    container_name: plausible-events-db
    restart: unless-stopped
    volumes:
      - ./data/plausible-events-db:/var/lib/clickhouse
    ulimits:
      nofile:
        soft: 262144
        hard: 262144
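The SECRET_KEY_BASE placeholder above needs a real value before Plausible will start. Matching the placeholder's own hint, generate 64 random bytes hex-encoded (128 characters) and paste the output into the compose file:

```shell
# Generate a 128-character hex secret for SECRET_KEY_BASE
SECRET_KEY_BASE=$(openssl rand -hex 64)
echo "SECRET_KEY_BASE=${SECRET_KEY_BASE}"
```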

Chapter 4: Monitoring Your Self-Hosted Stack

Uptime Monitoring: Uptime Kuma

Uptime Kuma monitors the availability of all your services and sends alerts when something goes down.

# /opt/services/uptime-kuma/docker-compose.yml
version: '3.8'

services:
  uptime-kuma:
    image: louislam/uptime-kuma:latest
    container_name: uptime-kuma
    restart: unless-stopped
    volumes:
      - ./data/uptime-kuma:/app/data
    ports:
      - "3010:3001"

System Monitoring: Grafana + Prometheus + Node Exporter

# /opt/services/monitoring/docker-compose.yml
version: '3.8'

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    restart: unless-stopped
    volumes:
      - ./config/prometheus.yml:/etc/prometheus/prometheus.yml
      - ./data/prometheus:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.retention.time=30d'
    ports:
      - "9090:9090"

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    restart: unless-stopped
    environment:
      GF_SECURITY_ADMIN_PASSWORD: secure_grafana_password
      GF_SERVER_ROOT_URL: https://grafana.yourdomain.com
    volumes:
      - ./data/grafana:/var/lib/grafana
    ports:
      - "3011:3000"

  node-exporter:
    image: prom/node-exporter:latest
    container_name: node-exporter
    restart: unless-stopped
    command:
      - '--path.rootfs=/host'
    volumes:
      - '/:/host:ro,rslave'
    ports:
      - "9100:9100"

Chapter 5: Backup Strategy

Without backups, self-hosting is playing Russian roulette with your data. You need automated, tested, off-site backups.

3-2-1 Backup Rule

Keep 3 copies of your data, on 2 different types of media, with 1 copy off-site. For self-hosted services: the primary data on your server, a local backup on the same server (different disk), and an off-site backup to object storage or another server.

#!/bin/bash
# /opt/services/backup.sh — Daily backup script

set -euo pipefail

BACKUP_DIR="/opt/services/backups"
DATE=$(date +%Y%m%d_%H%M%S)
RETENTION_DAYS=30

echo "Starting backup at $(date)"

# 1. Backup all PostgreSQL databases
for db_container in gitea-db nextcloud-db plausible-db; do
    DB_NAME=$(docker inspect "$db_container" \
        --format '{{range .Config.Env}}{{println .}}{{end}}' \
        | grep '^POSTGRES_DB=' | cut -d= -f2)
    DB_USER=$(docker inspect "$db_container" \
        --format '{{range .Config.Env}}{{println .}}{{end}}' \
        | grep '^POSTGRES_USER=' | cut -d= -f2)

    echo "Backing up database: $DB_NAME from $db_container"
    # pg_dump must run as each container's configured superuser
    # (POSTGRES_USER) — these images don't create a "postgres" role
    # when POSTGRES_USER is set
    docker exec "$db_container" pg_dump -U "$DB_USER" "$DB_NAME" \
        | gzip > "$BACKUP_DIR/${db_container}_${DATE}.sql.gz"
done

# 2. Backup service data directories
for service in gitea nextcloud plausible uptime-kuma; do
    echo "Backing up data for: $service"
    tar -czf "$BACKUP_DIR/${service}_data_${DATE}.tar.gz"         -C /opt/services/data "$service" 2>/dev/null || true
done

# 3. Backup configurations
tar -czf "$BACKUP_DIR/configs_${DATE}.tar.gz"     /opt/services/*/docker-compose.yml     /opt/services/config/     /etc/nginx/conf.d/     2>/dev/null || true

# 4. Upload to off-site storage (using rclone)
# Configure rclone first: rclone config
# Supports: S3, Backblaze B2, Wasabi, Google Drive, etc.
rclone sync "$BACKUP_DIR" remote:server-backups/ \
    --max-age "${RETENTION_DAYS}d" \
    --transfers 4

# 5. Clean up old local backups
find "$BACKUP_DIR" -type f -mtime +$RETENTION_DAYS -delete

echo "Backup completed at $(date)"
# Add to crontab: run daily at 2 AM
# crontab -e
0 2 * * * /opt/services/backup.sh >> /var/log/backup.log 2>&1

Testing Backups

A backup that hasn't been tested is not a backup. Schedule monthly restore tests:

#!/bin/bash
# restore-test.sh — Monthly restore verification

# Spin up a temporary database
docker run -d --name restore-test-db \
    -e POSTGRES_PASSWORD=test \
    postgres:16-alpine

sleep 5

# Recreate the role and database the dump expects, then restore
# the latest backup into them
LATEST_BACKUP=$(ls -t /opt/services/backups/gitea-db_*.sql.gz | head -1)
docker exec restore-test-db createuser -U postgres gitea
docker exec restore-test-db createdb -U postgres -O gitea gitea
zcat "$LATEST_BACKUP" | docker exec -i restore-test-db \
    psql -U postgres -d gitea

# Verify the data actually restored
docker exec restore-test-db psql -U postgres -d gitea \
    -c "SELECT COUNT(*) FROM repository;"

# Clean up
docker rm -f restore-test-db

echo "Restore test completed successfully"

Chapter 6: Security for Self-Hosted Services

Self-hosting means you are responsible for security. There is no vendor to blame, no support team to call, and no automatic patches.
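Beyond keeping images updated, two of the highest-impact steps are disabling SSH password logins and banning brute-force sources. A sketch assuming Debian/Ubuntu and GNU sed, shown against a copy of sshd_config so it is safe to run; apply the same edits to /etc/ssh/sshd_config and restart ssh for real:

```shell
# Work on a copy; fall back to a stub if no sshd_config is present
cp /etc/ssh/sshd_config /tmp/sshd_config.hardened 2>/dev/null || \
    printf '#PasswordAuthentication yes\n#PermitRootLogin yes\n' > /tmp/sshd_config.hardened

# Key-only logins; root may log in with a key but never a password
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /tmp/sshd_config.hardened
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin prohibit-password/' /tmp/sshd_config.hardened
grep -E '^(PasswordAuthentication|PermitRootLogin)' /tmp/sshd_config.hardened || true

# Then ban repeated failed logins:
#   apt install -y fail2ban && systemctl enable --now fail2ban
```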

# Automated Docker image updates with Watchtower
# Only update during maintenance windows, with health checks

docker run -d \
    --name watchtower \
    --restart unless-stopped \
    -v /var/run/docker.sock:/var/run/docker.sock \
    containrrr/watchtower \
    --schedule "0 0 4 * * SUN" \
    --cleanup \
    --include-restarting \
    --notifications slack \
    --notification-url "https://hooks.slack.com/services/YOUR/WEBHOOK/URL"

Self-hosting gives you complete control over your data, eliminates recurring SaaS costs, provides unlimited customization, and ensures that you're never at the mercy of a vendor's pricing changes or service discontinuations. The investment in setting up and maintaining self-hosted services pays for itself many times over, especially for teams that value data sovereignty and long-term cost predictability.

ZeonEdge specializes in self-hosted infrastructure design, deployment, and management. Whether you need a single-server setup or a multi-node cluster with high availability, we help businesses run their own services with confidence. Contact our infrastructure team to plan your self-hosting migration.
