HA Deployment

Although a standalone Nginx deployment is sufficient for most home lab needs, a high-availability (HA) deployment allows you to perform maintenance on the reverse proxy without affecting operational continuity.

The Nginx instance can be deployed as a VM or an LXC container.

This guide shows how to create a high-availability Nginx reverse proxy with:

  • An active-standby architecture
  • A floating virtual IP automatically assigned to the active Nginx node
  • Proxy configurations automatically synced between both Nginx nodes

Tested Environment

| Role | Specs | Operating System | Nginx Version | Hostname | Interface 1 | Interface 2 |
| --- | --- | --- | --- | --- | --- | --- |
| Active Proxy | 1 vCPU / 512 MB RAM / 8 GB Disk | Ubuntu 24.04 Server (Noble Numbat) | 1.24.0 | proxy1.aadya.tech | eth0: 10.12.20.61 | eth1: |
| Standby Proxy | 1 vCPU / 512 MB RAM / 8 GB Disk | Ubuntu 24.04 Server (Noble Numbat) | 1.24.0 | proxy2.aadya.tech | eth0: 10.12.20.62 | eth1: |

Virtual IP Address (VIP): 10.12.99.50

Architecture

graph LR
    User -->|Request| VIP(Virtual IP Address)
    subgraph NGINX Reverse Proxy
        VIP(Virtual IP Address)
        RP1(Reverse Proxy 1)
        RP2(Reverse Proxy 2)
        VIP -- Active  --> RP1
        VIP -. Standby .-> RP2
    end
    subgraph Backend Servers
        RP1 -->|Forward Request| app1[App 1]
        RP1 -->|Forward Request| app2[App 2]
    end
    app1 -->|Response| RP1
    app2 -->|Response| RP1
    VIP -->|Response| User

Setting up Nginx Reverse Proxy

  1. Log in as root and ensure the system is up to date before starting the setup.
    Bash
    apt update && apt dist-upgrade -y
    
  2. Since the instances will serve only as a reverse proxy, the full version of Nginx is not required.
    Bash
    apt install -y nginx-light
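The nginx-light package still supports everything a basic reverse proxy needs. As a reference point, a minimal site definition might look like the sketch below; the hostname app1.aadya.tech and the backend address 10.12.30.11 are placeholders, not part of the tested environment.

```nginx
# /etc/nginx/sites-available/app1 (placeholder name)
server {
    listen 80;
    server_name app1.aadya.tech;   # placeholder hostname

    location / {
        # Placeholder backend address
        proxy_pass http://10.12.30.11:8080;
        # Pass the original host and client address to the backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Enable it by symlinking it into /etc/nginx/sites-enabled and reloading Nginx.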
    

Setting up Keepalived

  1. Install keepalived on both instances.
    Bash
    apt install -y keepalived
    
  2. Configure keepalived on proxy1 by creating /etc/keepalived/keepalived.conf. This configuration tracks the Nginx service, exchanges VRRP advertisements over eth0 (the management interface), and assigns the VIP to eth1 (the DMZ interface). Unicast is used here because eth0 and eth1 are on different subnets that cannot communicate directly.

    /etc/keepalived/keepalived.conf
    global_defs {
        enable_script_security
        script_user root
    }
    
    vrrp_script chk_nginx {
        script "/usr/bin/systemctl is-active --quiet nginx"
        interval 2
        timeout 2
        fall 2
        rise 1
        weight -20
    }
    
    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        unicast_src_ip 10.12.20.61
        unicast_peer {
            10.12.20.62
        }
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass P@ss-123
        }
        virtual_ipaddress {
            10.12.99.50/24 dev eth1
        }
        track_script {
            chk_nginx
        }
    }
    

    Tip

    In case the VIP is just an additional IP on the same network interface, keepalived's default multicast mode can be used with the following configuration.

    /etc/keepalived/keepalived.conf
    global_defs {
         enable_script_security
         script_user root
    }
    
    vrrp_script chk_nginx {
        script "/usr/bin/systemctl is-active --quiet nginx"
        interval 2
        timeout 2
        fall 2
        rise 1
        weight -20
    }
    
    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass P@ss-123
        }
        virtual_ipaddress {
            10.12.99.50/24 dev eth1
        }
        track_script {
            chk_nginx
        }
    }
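Either variant can be sanity-checked before starting the service. Recent keepalived releases ship a config-test mode (check `keepalived --help` if the flag is unavailable on your version):

```shell
# Parse /etc/keepalived/keepalived.conf without starting VRRP;
# exits non-zero if the configuration has errors
keepalived -t
```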
    

  3. Configure keepalived on proxy2 by creating /etc/keepalived/keepalived.conf similar to above with the highlighted changes.

    /etc/keepalived/keepalived.conf
    global_defs {
        enable_script_security
        script_user root
    }
    
    vrrp_script chk_nginx {
        script "/usr/bin/systemctl is-active --quiet nginx"
        interval 2
        timeout 2
        fall 2
        rise 1
        weight -20
    }
    
    vrrp_instance VI_1 {
        state BACKUP
        interface eth0
        unicast_src_ip 10.12.20.62
        unicast_peer {
            10.12.20.61
        }
        virtual_router_id 51
        priority 90
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass P@ss-123
        }
        virtual_ipaddress {
            10.12.99.50/24 dev eth1
        }
        track_script {
            chk_nginx
        }
    }
    

  4. Enable services on both instances.
    Bash
    systemctl enable keepalived
    systemctl start keepalived
    
  5. Verify failover by stopping Nginx or restarting one instance; the VIP should automatically move to the other node.
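The failover follows from VRRP priority arithmetic: when chk_nginx fails on proxy1, keepalived adds the script's weight (-20) to proxy1's priority, which drops it below proxy2's, so proxy2 claims the VIP. A quick sketch with this guide's numbers:

```shell
# VRRP priorities from the keepalived configs above
master_priority=100   # proxy1 (MASTER)
backup_priority=90    # proxy2 (BACKUP)
check_weight=-20      # applied while chk_nginx is failing

# Effective priority of proxy1 while Nginx is down
effective=$((master_priority + check_weight))
echo "proxy1 effective priority: $effective"

# proxy2 (priority 90) now outranks proxy1 (80), so the VIP moves
if [ "$effective" -lt "$backup_priority" ]; then
    echo "VIP fails over to proxy2"
fi
```

When Nginx recovers, the penalty is removed, proxy1's priority returns to 100, and (since preemption is keepalived's default behavior) the VIP moves back.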

Setting up Unison

  1. Install Unison on both instances.
    Bash
    apt install -y unison
    
    unison-fsmonitor is required to watch for file changes automatically. It is built by the Unison build system, but the Ubuntu package does not ship it, so it has to be built and installed manually on both instances.
    Bash
    UNISON_VERSION=$(unison -version | awk '{print $3}')
    echo "Installed Unison version: $UNISON_VERSION"
    echo "Installing Unison FS Monitor." \
        && apt install -y wget ocaml make \
        && pushd /tmp \
        && wget https://github.com/bcpierce00/unison/archive/v$UNISON_VERSION.tar.gz \
        && tar -xzvf v$UNISON_VERSION.tar.gz \
        && rm v$UNISON_VERSION.tar.gz \
        && pushd unison-$UNISON_VERSION \
        && make \
        && cp -t /usr/local/bin ./src/unison ./src/unison-fsmonitor \
        && popd \
        && rm -rf unison-$UNISON_VERSION \
        && cd ~ \
        && apt autoremove -y --purge ocaml* make
    
  2. Set up SSH key-based access by creating SSH keypairs and copying the public keys between the instances.
    On proxy1
    Bash
    ssh-keygen -t ed25519
    ssh-copy-id -i ~/.ssh/id_ed25519.pub root@10.12.20.62
    
    On proxy2
    Bash
    ssh-keygen -t ed25519
    ssh-copy-id -i ~/.ssh/id_ed25519.pub root@10.12.20.61
    
  3. Create a folder for Unison profiles on both instances.
    Bash
    mkdir ~/.unison
    
  4. Create Unison profiles on proxy1 for the Nginx sites-available and sites-enabled directories:
    /root/.unison/nginx_sites_available.prf
    root = /etc/nginx/sites-available
    root = ssh://root@10.12.20.62//etc/nginx/sites-available
    
    auto = true
    batch = true
    repeat = watch
    log = true
    backup = Name *
    prefer = newer
    ignore = Name *.swp
    
    /root/.unison/nginx_sites_enabled.prf
    root = /etc/nginx/sites-enabled
    root = ssh://root@10.12.20.62//etc/nginx/sites-enabled
    
    auto = true
    batch = true
    repeat = watch
    log = true
    backup = Name *
    prefer = newer
    ignore = Name *.swp
    
  5. Repeat step 4 on proxy2, replacing the IP address on line 2 of each profile with the IP address of proxy1.
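For example, proxy2's nginx_sites_available.prf mirrors proxy1's profile with the remote root pointed at proxy1 instead (nginx_sites_enabled.prf changes the same way):

```
root = /etc/nginx/sites-available
root = ssh://root@10.12.20.61//etc/nginx/sites-available

auto = true
batch = true
repeat = watch
log = true
backup = Name *
prefer = newer
ignore = Name *.swp
```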
  6. Create respective systemd services on both instances.
    /etc/systemd/system/unison-nginx-sites-available.service
    [Unit]
    Description=Unison Nginx Sites Available Config Sync
    After=network.target
    
    [Service]
    ExecStart=/usr/bin/unison nginx_sites_available
    User=root
    Restart=always
    RestartSec=5
    
    [Install]
    WantedBy=multi-user.target
    
    /etc/systemd/system/unison-nginx-sites-enabled.service
    [Unit]
    Description=Unison Nginx Sites Enabled Config Sync
    After=network.target
    
    [Service]
    ExecStart=/usr/bin/unison nginx_sites_enabled
    User=root
    Restart=always
    RestartSec=5
    
    [Install]
    WantedBy=multi-user.target
    
  7. Finally, enable and start the created services on both instances.
    Bash
    systemctl daemon-reload
    systemctl enable --now unison-nginx-sites-available
    systemctl enable --now unison-nginx-sites-enabled
    

Info

It's good practice to create or edit proxy configs on the node that currently holds the VIP, i.e., the active node, and let Unison sync them to the standby node.

Applying Nginx configuration

At this point you should have an HA deployment of Nginx, where configuration changes can be applied by reloading Nginx on both instances.
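A sketch of that final step, assuming the root SSH access configured earlier (the addresses are this guide's management IPs):

```shell
# Validate the configuration, then reload Nginx on both nodes
for host in 10.12.20.61 10.12.20.62; do
    ssh root@"$host" 'nginx -t && systemctl reload nginx'
done
```

Running `nginx -t` first ensures a broken config is caught before the reload is applied on that node.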