Recipes for Linux Servers

Services

  • Keep it simple.
  • LinuxServer has pre-built images that are tested and easy to deploy, so prefer these images if possible.

Setup Dynamic DNS

  • DuckDNS is the easiest way to set up dynamic DNS AFAIK.
  • Your ISP or router manufacturer may also provide dynamic DNS service.

DuckDNS

  1. Create an account on DuckDNS
  2. Create a subdomain for your services (e.g. yourdomain.duckdns.org)
  3. Deploy the DuckDNS container and allow a few minutes for your public IP to update
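A minimal Compose sketch for step 3, using the LinuxServer DuckDNS image. The subdomain and token values are placeholders — use the ones from your DuckDNS account:

```yaml
version: "3"
services:
    duckdns:
        image: lscr.io/linuxserver/duckdns:latest
        environment:
            - TZ=Etc/UTC
            - SUBDOMAINS=yourdomain        # your DuckDNS subdomain (placeholder)
            - TOKEN=your-duckdns-token     # account token from duckdns.org (placeholder)
        restart: unless-stopped
```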

Setup Reverse Proxy

  • NGINX Proxy Manager is the easiest way to set up a reverse proxy AFAIK.
  • SWAG is another option, but is more involved.

NGINX Proxy Manager

  1. Deploy the NGINX Proxy Manager container
  2. Log in and change the default password
  3. Create a new proxy host:
    • Domain Names: yourdomain.duckdns.org or a subdomain like service.yourdomain.duckdns.org
    • Scheme: http or https depending on your service
    • Forward Hostname / IP: The hostname or local IP of your service, e.g. 192.168.1.101
    • Forward Port: The Port of your service, e.g. 8080
    • Enable Block Common Exploits
  4. Add an SSL Certificate
    • Request a new SSL Certificate with Let's Encrypt
    • Enable Force SSL and HTTP/2 Support
  5. Add an access list if you want to restrict access to certain users
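A minimal Compose sketch for step 1, using the jc21/nginx-proxy-manager image. The host ports and volume paths below are assumptions — adjust them to your setup:

```yaml
version: "3"
services:
    nginx-proxy-manager:
        image: jc21/nginx-proxy-manager:latest
        ports:
            - "80:80"      # HTTP traffic
            - "443:443"    # HTTPS traffic
            - "81:81"      # admin web UI
        volumes:
            - ./data:/data                      # app data (assumed path)
            - ./letsencrypt:/etc/letsencrypt    # certificates (assumed path)
        restart: unless-stopped
```

After it starts, the admin UI is reachable on port 81.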

Setup Multiple Services and Load Balancer (naive)

  • Expose each service on the same internal port
  • Create a load balancer container with the internal:external port mapping
  • Or use Kubernetes if you need real orchestration

Compose

docker-compose.yml

version: "3"
services:
    app1:
        # config here
        expose:
            - "9000"
    app2:
        # config here
        expose:
            - "9000"
    app3:
        # config here
        expose:
            - "9000"
    app4:
        # config here
        expose:
            - "9000"
    load_balancer:
        image: nginx:1.19.2-alpine
        volumes:
        - ./nginx.conf:/etc/nginx/nginx.conf:ro
        ports:
        - "9000:9000"
        depends_on:
        - app1
        - app2
        - app3
        - app4

nginx.conf

user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;

    keepalive_timeout  65;

    upstream app {
        server app1:9000;
        server app2:9000;
        server app3:9000;
        server app4:9000;
    }

    server {
        listen       9000;
        listen  [::]:9000;
        server_name  localhost;

        ignore_invalid_headers off;
        client_max_body_size 0;
        proxy_buffering off;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            proxy_connect_timeout 300;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            chunked_transfer_encoding off;

            proxy_pass http://app;
        }
    }
}

Check Server Performance

Grafana/Kibana

  • For small-scale deployments, this is not worth the hassle. Netdata gives you most of what you need out of the box.
  • See Grafana deploys for more.
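A minimal Compose sketch for running Netdata. The image name is the official one; the capability, security option, and read-only host mounts follow Netdata's Docker guidance — treat them as a starting point and adjust as needed:

```yaml
version: "3"
services:
    netdata:
        image: netdata/netdata:latest
        ports:
            - "19999:19999"    # Netdata dashboard
        cap_add:
            - SYS_PTRACE       # needed to inspect host processes
        security_opt:
            - apparmor:unconfined
        volumes:
            - /proc:/host/proc:ro    # host metrics, read-only
            - /sys:/host/sys:ro
        restart: unless-stopped
```

Once running, the dashboard is available at http://your-server-ip:19999.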