
Ubuntu VPS Production Setup — 8 Steps

How to prepare an Ubuntu server for production: firewall, SSH, Docker, Nginx, SSL, backups. Real VPS setup experience.

Tags: server, production, ubuntu, security, docker, nginx

Preparing a VPS for production means configuring firewall, SSH access, Docker, Nginx, SSL, and backups before any application code reaches the server. Without these steps the server is vulnerable to attacks, data loss, and downtime.

This article is based on a real VPS setup for a fullstack application (NestJS + React + PostgreSQL + Redis). Every command has been tested on Ubuntu 22.04. If you want to follow along while tracking your progress, there is an interactive checklist where you can mark each step as done.

Server Requirements

Minimum configuration that runs the full stack without performance issues:

OS: Ubuntu 22.04 LTS
RAM: 4 GB (NestJS + React + PostgreSQL + Redis simultaneously)
CPU: 2 cores
Disk: 50 GB SSD
Network: Static IP

Ubuntu 22.04 LTS gives 5 years of support with security updates until 2027, and one of the largest ecosystems of documentation and compatible packages among server distributions.

First Login and System Update

The first connection is as root, the only login user that exists on a fresh VPS:

ssh root@YOUR_SERVER_IP

apt update && apt upgrade -y
apt install -y curl wget git vim htop ufw

htop and ufw are needed in the next steps. The rest is a basic toolkit for working with the server.

Firewall Setup (UFW)

UFW (Uncomplicated Firewall) is an interface to iptables built into Ubuntu. Rules must be added before enabling the firewall, otherwise the current SSH connection will drop.

sudo ufw default deny incoming
sudo ufw default allow outgoing

# SSH — add first, before enabling UFW
sudo ufw allow 22/tcp comment 'SSH'
sudo ufw allow 80/tcp comment 'HTTP'
sudo ufw allow 443/tcp comment 'HTTPS'

sudo ufw show added
sudo ufw enable
sudo ufw status numbered

Three rules whose violation breaks production most often:

  1. Don't close the current SSH session until you verify a new connection from a separate terminal
  2. Don't open ports 5432 (PostgreSQL) and 6379 (Redis) to the outside — Docker containers communicate through an internal network, external database access is unnecessary
  3. Don't open the pgAdmin port (8082) — for remote access an SSH tunnel is enough: ssh -L 8082:localhost:8082 user@server

Secure SSH Access

The minimum needed right now: a separate user instead of root and key-based login instead of passwords.

# Create user with sudo
adduser deployer
usermod -aG sudo deployer

# Copy SSH key from local machine
ssh-copy-id deployer@YOUR_SERVER_IP

After verifying key-based login in a separate terminal — disable password and root access:

sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo sshd -t && sudo systemctl restart sshd

This is the basic level. Generating ed25519 keys, ~/.ssh/config for multiple servers, agent forwarding, fail2ban, and changing the default port are covered in the dedicated SSH keys guide.
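Until then, a minimal preview of the piece that saves the most typing: a host alias in ~/.ssh/config on the local machine. The alias name and key path below are examples, not part of this setup:

```
# ~/.ssh/config on the local machine (alias and key path are examples)
Host vps
    HostName YOUR_SERVER_IP
    User deployer
    IdentityFile ~/.ssh/vps_deployer
    IdentitiesOnly yes
```

After this, ssh vps replaces the full ssh deployer@YOUR_SERVER_IP command, and scp/rsync pick up the same alias.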

Installing Docker

Docker is used for containerizing the application and databases. Installation follows the official Docker instructions for Ubuntu:

# Remove conflicting packages
for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do
  sudo apt-get remove $pkg 2>/dev/null
done

# Dependencies and GPG key
sudo apt install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

# Repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Add current user to docker group
sudo usermod -aG docker $USER

After being added to the docker group you need to re-login (exit SSH and reconnect) — newgrp docker creates a child shell where some environment variables may differ.

Verification:

docker --version
docker compose version

Nginx as Reverse Proxy

Nginx accepts external requests and proxies them to Docker containers — frontend, backend, WebSocket. Installation:

sudo apt install -y nginx
sudo systemctl status nginx

Config files are created in /etc/nginx/sites-available/ and enabled by symlinking them into /etc/nginx/sites-enabled/. Before every systemctl reload nginx, always run sudo nginx -t:

sudo nginx -t && sudo systemctl reload nginx

If nginx -t fails, the reload never runs and the working config stays intact. The habit of checking before reloading saves you from downtime.

SSL Certificates via Let's Encrypt

Let's Encrypt issues free certificates valid for 90 days. Certbot automates both issuance and renewal. Before running certbot, DNS records must already point to the server's IP — Let's Encrypt verifies this during issuance.

sudo apt install -y certbot python3-certbot-nginx
sudo certbot --nginx -d yourdomain.com -d www.yourdomain.com

Certbot modifies the Nginx config automatically, adding the SSL block. The auto-renewal timer is active by default:

sudo systemctl status certbot.timer
sudo certbot renew --dry-run

If the dry run completes without errors, certificates will renew automatically about 30 days before expiration.

Monitoring and Logging

The minimum set without which diagnosing production issues is impossible:

# System resources in real time
htop

# Nginx logs — proxy errors, 502/504
sudo tail -f /var/log/nginx/error.log

# Logs for a specific Docker container
docker logs -f --tail 100 container_name

# System journal by service
sudo journalctl -u nginx -f --no-pager

docker logs -f --tail 100 shows the last 100 lines and continues the stream — without --tail a heavy container's log can output hundreds of megabytes to stdout.
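The growth can also be capped at the source with the json-file log driver's size options in /etc/docker/daemon.json. A sketch with illustrative values (applies to containers created after restarting Docker):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

With this in place each container keeps at most three rotated 10 MB log files, and docker logs stays fast even for chatty services.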

PostgreSQL Backup

pg_dump through Docker is the simplest way to make daily backups. The script saves a dump as gzip and deletes files older than 7 days:

sudo vim /usr/local/bin/backup-db.sh

#!/bin/bash
set -euo pipefail

BACKUP_DIR="/backups/postgres"
DATE=$(date +%Y%m%d_%H%M%S)
CONTAINER="your_postgres_container"
DB_USER="your_user"
DB_NAME="your_db"

mkdir -p "$BACKUP_DIR"

docker exec "$CONTAINER" pg_dump -U "$DB_USER" "$DB_NAME" \
  | gzip > "$BACKUP_DIR/backup_${DATE}.sql.gz"

find "$BACKUP_DIR" -name "backup_*.sql.gz" -mtime +7 -delete

sudo chmod +x /usr/local/bin/backup-db.sh

# Add to cron
sudo crontab -e

Cron line for daily backup at 03:00:

0 3 * * * /usr/local/bin/backup-db.sh >> /var/log/backup-db.log 2>&1

Redirecting to a log file lets you track when a backup silently stops working. set -euo pipefail in the script ensures that an error at any stage (container not found, no disk space) won't be silently swallowed.
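Tracking can also be automated: a small sketch of a freshness check that fails when no recent dump exists, suitable for an hourly cron entry piped into whatever alerting you use. The function name and 25-hour default are assumptions, not part of the backup script above:

```shell
#!/bin/bash
# Sketch: fail if the newest backup is older than MAX_AGE_MIN minutes.
set -euo pipefail

backup_is_fresh() {
  local dir="$1" max_age_min="${2:-1500}"   # ~25 h: one missed daily run triggers the alert
  local fresh
  # find prints any dump modified less recently than max_age_min minutes ago
  fresh=$(find "$dir" -name 'backup_*.sql.gz' -mmin "-$max_age_min" | head -n 1)
  if [ -z "$fresh" ]; then
    echo "STALE: no backup in $dir newer than $max_age_min min" >&2
    return 1
  fi
  echo "FRESH: $fresh"
}
```

A nonzero exit status makes the check easy to wire into cron mail, a healthcheck ping, or a monitoring agent.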

FAQ

Which VPS to choose for production?

For a Node.js + PostgreSQL + Redis stack, 4 GB RAM and 2 vCPU is sufficient. Popular international options include Hetzner (price/performance) and DigitalOcean (documentation). Choose a provider with a data center geographically close to your users.

Can I skip Nginx and proxy directly?

Technically — yes, Node.js can listen on 80/443. Practically — no. Nginx handles SSL termination, static files, rate limiting, and gzip compression more efficiently than Node.js. At 1000+ concurrent connections the difference in memory usage is 5–8x.
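Two of those features sketched in Nginx terms, with illustrative thresholds (zone name, rate, and port are examples):

```nginx
# http {} context: gzip plus a 10 MB shared zone tracking 10 req/s per client IP
gzip on;
gzip_types text/css application/javascript application/json;

limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

server {
    location /api/ {
        # allow short bursts of 20; excess requests are rejected
        limit_req zone=api burst=20 nodelay;
        proxy_pass http://127.0.0.1:3000;
    }
}
```

Rejected requests get 503 by default; limit_req_status 429 makes the response more descriptive for API clients.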

Why UFW if Docker already has iptables?

Docker adds its own iptables rules directly, bypassing UFW. This means docker run -p 5432:5432 will open the port to the internet even if UFW blocks it. The solution — don't publish database ports: instead of -p 5432:5432 use Docker networks so containers communicate internally.
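The safe pattern sketched as a Compose fragment (service, image, and network names are examples): the database joins an internal network and publishes no ports, so only the backend can reach it.

```yaml
# docker-compose.yml (fragment): no ports: section on the database
services:
  backend:
    image: your-backend:latest
    networks: [appnet]
    environment:
      # containers on the same network resolve each other by service name
      DATABASE_URL: "postgres://user:pass@db:5432/app"
  db:
    image: postgres:16
    networks: [appnet]
    # no ports: -- unreachable from the internet regardless of UFW

networks:
  appnet:
    driver: bridge
```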

How to verify backups are working?

Once a month, restore a dump on a test environment: gunzip < backup.sql.gz | docker exec -i postgres psql -U user dbname. A backup that has never been restored is not a backup.
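Between full restores, a cheap automated sanity check catches truncated or corrupted archives. A sketch (the function name is an assumption; the real monthly check remains the restore itself, shown in the trailing comment):

```shell
#!/bin/bash
# Sketch: verify the newest dump is at least a readable gzip archive.
set -euo pipefail

check_latest_backup() {
  local dir="$1"
  local latest
  latest=$(ls -t "$dir"/backup_*.sql.gz 2>/dev/null | head -n 1 || true)
  if [ -z "$latest" ]; then
    echo "no backups found in $dir" >&2
    return 1
  fi
  gunzip -t "$latest"   # reads the whole archive, fails loudly on corruption
  echo "OK: $latest"
}

# The real monthly check is the restore (container and names are examples):
#   gunzip < "$latest" | docker exec -i postgres psql -U user dbname
```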