Overview
A Hostinger VPS is a stock Ubuntu or Debian box with root SSH access. Treat it like any other cloud VM: harden first, deploy second, monitor third. The vendor changes nothing about the playbook. Front it with Cloudflare for TLS termination at the edge, caching, and origin shielding.
Harden before you deploy anything
Run the hardening steps on first login, before installing the app.
# Create a non-root sudo user
adduser deploy && usermod -aG sudo deploy
# Copy your key, then disable password auth and root login
mkdir -p /home/deploy/.ssh && cp ~/.ssh/authorized_keys /home/deploy/.ssh/
chown -R deploy:deploy /home/deploy/.ssh && chmod 700 /home/deploy/.ssh
# /etc/ssh/sshd_config: PermitRootLogin no, PasswordAuthentication no
systemctl restart ssh
# Firewall, fail2ban, automatic security patches
ufw default deny incoming && ufw allow OpenSSH && ufw allow 80 && ufw allow 443 && ufw enable
apt install -y fail2ban unattended-upgrades
dpkg-reconfigure --priority=low unattended-upgrades
SSH keys only. Password auth on a public IP is a footgun.
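One way to apply those sshd settings, assuming a recent Ubuntu or Debian where sshd reads drop-ins from /etc/ssh/sshd_config.d/:
# Drop-in keeps the hardening out of the stock sshd_config
printf 'PermitRootLogin no\nPasswordAuthentication no\n' > /etc/ssh/sshd_config.d/99-hardening.conf
# Validate before restarting so a typo cannot lock you out
sshd -t && systemctl restart ssh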
Deploy by tagged release, not by random git pull
Two patterns work; pick one and stick to it.
- Git pull on a tagged release: a deploy script runs git fetch --tags, checks out the tag, runs the build, and reloads the service. Tags make rollback one command (git checkout v1.4.2 && systemctl reload app). A sketch of the script follows this list.
- Systemd-managed binary or container: CI builds an artifact, uploads it to the box, and a systemd unit restarts the service. Use this when the build is heavy and the VPS should not run it.
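A minimal sketch of the tag-based deploy script, assuming the repo lives in /srv/app, builds with npm, and runs under a unit named app.service (all three are placeholders for your own names):
#!/usr/bin/env bash
# deploy.sh <tag>: check out a tagged release, build, reload
set -euo pipefail
tag="${1:?usage: deploy.sh <tag>}"
cd /srv/app
git fetch --tags
git checkout "$tag"
npm ci && npm run build
sudo systemctl reload app.service
Rollback is the same script with the previous tag.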
Either way, the deploy user owns the app directory and only the deploy user can restart the service.
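One way to enforce that restart rule without granting full sudo is a sudoers drop-in (unit name is the same placeholder as above):
# /etc/sudoers.d/deploy -- edit with: visudo -f /etc/sudoers.d/deploy
# deploy may reload/restart this one unit and nothing else
deploy ALL=(root) NOPASSWD: /usr/bin/systemctl reload app.service, /usr/bin/systemctl restart app.service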
Reverse-proxy with Caddy unless you need fine control
Caddy provisions TLS automatically through Let’s Encrypt and reloads on config change. A working Caddyfile:
example.com {
    reverse_proxy localhost:3000
    encode zstd gzip
}
That is the whole config. Caddy renews certs in the background and serves HTTP/2 and HTTP/3 by default.
Reach for nginx when you need request rewriting, fine-grained caching, multi-upstream load balancing, or stream proxying. Nginx is the better tool for complex routing; Caddy is the better tool for “put TLS in front of one app.”
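For comparison, a minimal sketch of the same proxy in nginx, assuming certs already exist at the certbot default paths (Caddy's one job, done by hand):
# /etc/nginx/sites-available/app -- cert paths assume certbot defaults
server {
    listen 443 ssl http2;
    server_name example.com;
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}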
If TLS terminates at Cloudflare in Full (strict) mode, the origin still needs a valid certificate. Caddy handles that without extra configuration.
Manage processes with systemd, not pm2
Systemd is on the box already. It supervises, restarts on failure, captures logs to the journal, and starts on boot. A minimal unit:
# /etc/systemd/system/app.service
[Unit]
Description=App
After=network.target
[Service]
User=deploy
WorkingDirectory=/srv/app
ExecStart=/usr/bin/node server.js
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
pm2 and forever add a userland supervisor on top of an OS that already supervises. Use systemd unless the runtime ships its own daemon.
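Activate the unit and tail its logs with the standard systemd commands:
systemctl daemon-reload
systemctl enable --now app.service
# Logs go to the journal; follow them live
journalctl -u app.service -f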
Back up off-box on a schedule
Local snapshots are not backups. Push backups to S3, Backblaze B2, or a separate provider on a schedule.
- restic for encrypted, deduplicated backups of files and databases. One command, one repo, one retention policy; a sketch follows this list.
- rsnapshot if you want plain rsync snapshots and do not need encryption at rest.
- For Postgres, pipe pg_dump into the backup pipeline; see postgres for the dump command.
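A minimal restic sketch, assuming a Backblaze B2 bucket named vps-backups and the B2 credentials plus RESTIC_PASSWORD already exported (bucket and database names are placeholders):
# One-time: create the encrypted repository
restic -r b2:vps-backups:/srv init
# Nightly (cron or a systemd timer): back up, then apply retention
restic -r b2:vps-backups:/srv backup /srv/app
restic -r b2:vps-backups:/srv forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
# Postgres: pipe the dump straight into the repo
pg_dump -Fc mydb | restic -r b2:vps-backups:/srv backup --stdin --stdin-filename mydb.dump
# Prove the restore path works, into a scratch directory
restic -r b2:vps-backups:/srv restore latest --target /tmp/restore-test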
Test the restore. A backup you have not restored is a hope.
Monitor with node_exporter and an uptime check
Two layers catch most outages.
- node_exporter on the VPS, scraped by a hosted Prometheus (Grafana Cloud free tier works) or a self-hosted one. CPU, memory, disk, and load are the table-stakes metrics; an install sketch follows this list.
- An external uptime check (UptimeRobot, BetterStack, Cronitor) hitting a public health endpoint every minute. If the box is down, internal metrics cannot tell you.
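A minimal node_exporter install using the Debian/Ubuntu package (a pinned upstream tarball works too if you want a newer version):
apt install -y prometheus-node-exporter
# Listens on :9100; verify locally
curl -s localhost:9100/metrics | head
# ufw's default-deny already blocks :9100 publicly; if Prometheus scrapes
# over the network, allow only its IP (placeholder address below)
ufw allow from 203.0.113.10 to any port 9100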
Alert on disk above 80 percent. Disk-full is the most common preventable outage on a small VPS.
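A sketch of the matching Prometheus alert rule, using standard node_exporter filesystem metrics (file and alert names are placeholders):
# disk-alerts.yml -- load via rule_files: in prometheus.yml
groups:
  - name: disk
    rules:
      - alert: DiskAbove80Percent
        expr: (1 - node_filesystem_avail_bytes{fstype!~"tmpfs|overlay"} / node_filesystem_size_bytes) > 0.8
        for: 15m
        annotations:
          summary: "Disk over 80% on {{ $labels.instance }} at {{ $labels.mountpoint }}"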