
Setting Up a Self-Hosted GitLab Runner on a VPS Without Breaking Everything

A complete guide to installing, registering, and configuring a self-hosted GitLab Runner with Docker executor on your VPS — including concurrent job configuration, caching, and security hardening.


Marcus Rodriguez

Lead DevOps Engineer specializing in CI/CD pipelines, container orchestration, and infrastructure automation.

February 3, 2026
15 min read

GitLab's shared runners work well for small projects, but they come with limitations: queue times during peak hours, restricted build minutes on free tiers, limited hardware resources, and no ability to cache Docker images or layers between builds. A self-hosted GitLab Runner on your own VPS eliminates all of these limitations and gives you full control over the build environment.

However, setting up a self-hosted runner involves several steps that are easy to get wrong: installing the runner software, registering it with your GitLab instance, configuring the Docker executor, handling concurrent jobs, managing disk space, and securing the runner against malicious code in CI scripts. This guide walks through the entire process on a fresh Ubuntu 24.04 VPS.

Prerequisites and Server Preparation

Start with a VPS running Ubuntu 24.04 with at least 2 CPU cores and 4 GB of RAM. For projects that build Docker images, allocate at least 50 GB of disk space — Docker images and build cache accumulate quickly. Dedicated runner VPS instances are available from providers like Hetzner for around 8 euros per month, DigitalOcean for 24 dollars per month, or Vultr for 18 dollars per month.
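Before installing anything, it can help to confirm the box actually meets those minimums. A quick sketch (assumes a Linux host with GNU coreutils, as on Ubuntu 24.04):

```shell
# Sanity check against the suggested minimums (2 cores, 4 GB RAM, 50 GB disk)
cores=$(nproc)
ram_gb=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)
disk_avail_gb=$(df -BG --output=avail / | tail -n 1 | tr -dc '0-9')
echo "cores=${cores} ram_gb=${ram_gb} disk_avail_gb=${disk_avail_gb}"
```

If any of the three numbers falls short, resize the VPS before continuing rather than debugging out-of-memory job failures later.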

Update the system and install Docker, which the runner will use to execute CI jobs:

# Update system
sudo apt update && sudo apt upgrade -y

# Install Docker
curl -fsSL https://get.docker.com | sudo sh

# Add your user to the docker group (log out and back in for this to take effect)
sudo usermod -aG docker $USER

# Verify Docker is running
sudo systemctl status docker
docker --version

Installing GitLab Runner

Install the GitLab Runner package from GitLab's official repository:

# Add the GitLab Runner repository
curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh | sudo bash

# Install GitLab Runner
sudo apt install gitlab-runner -y

# Verify installation
gitlab-runner --version

The GitLab Runner runs as a systemd service under the gitlab-runner user. Verify it is running:

sudo systemctl status gitlab-runner
sudo systemctl enable gitlab-runner

Registering the Runner

To connect the runner to your GitLab instance, you need a runner token. Find the runner settings in your GitLab project under Settings then CI/CD then Runners, or at the group level for runners shared across multiple projects.

In GitLab 16 and later, registration uses a runner authentication token (prefixed glrt-) rather than the old shared registration token, which is deprecated. Create the runner in the GitLab UI first, then register with the authentication token it generates:

sudo gitlab-runner register \
    --non-interactive \
    --url "https://gitlab.com" \
    --token "glrt-YOUR_RUNNER_TOKEN" \
    --executor "docker" \
    --docker-image "node:20-alpine" \
    --description "Production VPS Runner" \
    --tag-list "docker,vps,production" \
    --run-untagged="true" \
    --locked="false"

The --executor docker tells the runner to execute each CI job inside a fresh Docker container. The --docker-image sets the default image used when a job does not specify one. The --tag-list assigns tags that you can reference in your .gitlab-ci.yml to target specific runners.
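With those tags in place, a job can target this runner from .gitlab-ci.yml. A minimal illustration (job name and script are made up for this example):

```yaml
# Only runners tagged with both "docker" and "vps" will pick up this job
deploy:
  tags:
    - docker
    - vps
  script:
    - echo "running on the self-hosted runner"
```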

Configuring the Runner for Production Use

The default runner configuration is not optimized for production. Edit /etc/gitlab-runner/config.toml to add concurrent job support, caching, resource limits, and Docker optimizations:

concurrent = 4
check_interval = 3
shutdown_timeout = 30

[[runners]]
  name = "Production VPS Runner"
  url = "https://gitlab.com"
  token = "glrt-YOUR_RUNNER_TOKEN"
  executor = "docker"
  
  [runners.docker]
    image = "node:20-alpine"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    
    # Volume mounts for caching
    volumes = [
      "/var/run/docker.sock:/var/run/docker.sock",
      "/cache:/cache"
    ]
    
    # Pull policy - use cached images when possible
    pull_policy = ["if-not-present"]
    
    # Resource limits per container
    memory = "4g"
    cpus = "2"
    
    # Cleanup
    shm_size = 268435456  # 256MB shared memory
    
  [runners.cache]
    Type = "s3"
    Shared = true
    [runners.cache.s3]
      ServerAddress = "s3.amazonaws.com"
      BucketName = "my-gitlab-runner-cache"
      BucketLocation = "us-east-1"
      # Add AccessKey / SecretKey here, or omit them to authenticate
      # via an IAM instance profile attached to the server

Key settings explained: concurrent = 4 allows four CI jobs to run simultaneously on this runner. Each job runs in its own Docker container, so make sure the VPS has CPU and RAM to match — a 4-core, 8 GB server handles four concurrent jobs comfortably for typical workloads. pull_policy = ["if-not-present"] reuses locally cached Docker images instead of pulling from the registry on every job, typically saving 30-60 seconds per job. memory = "4g" and cpus = "2" are per-container caps, not reservations, so the limits can oversubscribe the host; that is usually fine because jobs rarely peak at the same moment. privileged = false is a security measure — privileged containers have full access to the host and should only be enabled if you need Docker-in-Docker for building Docker images.
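As a rough, hypothetical back-of-envelope for picking concurrent, you can divide usable RAM by a realistic per-job peak (the numbers below are illustrative assumptions, not measurements):

```shell
# Hypothetical sizing sketch: how many concurrent jobs fit in RAM?
total_ram_gb=8        # VPS memory
reserved_gb=1         # assumed headroom for the OS and the runner itself
per_job_peak_gb=2     # assumed realistic peak per job (the 4g limit is only a cap)
max_jobs=$(( (total_ram_gb - reserved_gb) / per_job_peak_gb ))
echo "suggested concurrent = ${max_jobs}"
```

Measure actual job memory usage before trusting any such estimate; build jobs vary widely.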

Docker-in-Docker vs Docker Socket Binding

If your CI jobs build Docker images, you need a way to access the Docker daemon from inside the CI container. There are two approaches:

Docker Socket Binding (recommended for performance): Mount the host's Docker socket into the CI container. This is faster because it uses the host's Docker daemon and cache directly, but it has security implications — the CI job can control all containers on the host.

# In config.toml
volumes = ["/var/run/docker.sock:/var/run/docker.sock"]

Docker-in-Docker (DinD): Run a separate Docker daemon inside the CI container. This is more isolated but slower because it cannot share the image cache with the host. It also requires privileged = true.

# In .gitlab-ci.yml
build:
  image: docker:24
  services:
    - docker:24-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  script:
    - docker build -t myapp .

For most self-hosted runners that only serve your own projects, Docker socket binding is the better choice: you already trust the CI scripts because they are your own code, and the performance benefit is significant. Reserve DinD for runners shared with projects or contributors you do not fully trust.
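For comparison, the socket-binding version of the same build job needs no dind service and no TLS variables, because docker inside the container talks straight to the host daemon through the mounted socket:

```yaml
# .gitlab-ci.yml — socket binding: relies on /var/run/docker.sock
# being mounted via config.toml, no services block needed
build:
  image: docker:24
  script:
    - docker build -t myapp .
```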

Caching Dependencies Between Jobs

Configure caching in your .gitlab-ci.yml to avoid reinstalling dependencies on every job:

variables:
  npm_config_cache: "$CI_PROJECT_DIR/.npm"

cache:
  key:
    files:
      - package-lock.json
  paths:
    - .npm/
    - node_modules/

stages:
  - install
  - test
  - build
  - deploy

install:
  stage: install
  script:
    - npm ci
  artifacts:
    paths:
      - node_modules/
    expire_in: 1 hour

test:
  stage: test
  needs: [install]
  script:
    - npm test

build:
  stage: build
  needs: [install]
  script:
    - npm run build
  artifacts:
    paths:
      - .next/
    expire_in: 1 day

Disk Space Management

Self-hosted runners accumulate Docker images, build artifacts, and cached data over time. Without cleanup, the disk fills up and jobs start failing. Set up automated cleanup:

#!/bin/bash
# /etc/cron.daily/gitlab-runner-cleanup

# Remove unused images older than 72 hours, dangling volumes, and excess build cache
docker image prune -a --filter "until=72h" --force
docker volume prune --force
docker builder prune --keep-storage=10GB --force

# Clean up old build artifacts
find /home/gitlab-runner/builds -type d -mtime +7 -exec rm -rf {} + 2>/dev/null || true

Make the script executable: sudo chmod +x /etc/cron.daily/gitlab-runner-cleanup.
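To catch disk pressure before jobs start failing outright, a small check like the following can run alongside the cleanup script (the 85 percent threshold is an arbitrary example; assumes GNU df):

```shell
# Hypothetical disk-usage alarm for the root filesystem
threshold=85
usage=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')
if [ "$usage" -ge "$threshold" ]; then
  echo "WARNING: root filesystem at ${usage}%"
else
  echo "root filesystem at ${usage}%, OK"
fi
```

Wire its output into whatever alerting you already use (email from cron is the zero-dependency option).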

Security Hardening

A GitLab Runner executes arbitrary code defined in CI scripts. If your runner is shared across multiple projects or users, security is critical:

1. Never run the runner with privileged = true unless absolutely necessary.
2. Set resource limits (memory, CPU) to prevent runaway jobs from consuming all server resources.
3. Use allowed_images in config.toml to restrict which Docker images can be used.
4. Keep jobs isolated so each one starts in a completely fresh container.
5. Use tags to control which projects can use the runner.
6. Regularly update the runner and Docker to patch security vulnerabilities.

# Security-focused config.toml snippet
[runners.docker]
  privileged = false
  allowed_images = ["node:*", "python:*", "docker:*", "alpine:*"]
  allowed_services = ["docker:*-dind", "postgres:*", "redis:*"]
  memory = "4g"
  cpus = "2"

ZeonEdge sets up and manages self-hosted CI/CD infrastructure for teams that need fast, reliable, and secure builds. Learn more about our DevOps services.

