This draft is based on a real provisioning log from April and May 2021. I kept the original posting date, reorganized the notes into a cleaner deployment narrative, and replaced passwords, API tokens, hostnames, and user-specific secrets with placeholders.

In all my years working as a Linux systems administrator, one habit has stayed constant wherever I have worked: I keep work logs. This is one of them. I have cleaned it up for readability, but it still reflects the way I document real infrastructure work so another admin, a future version of myself, or even a hiring manager can quickly understand what was done and why.

It reflects how I actually brought up a CentOS 7 server on Vultr for a PHP and WordPress-heavy workload. The stack was practical for its time: MariaDB 10.2, Nginx, Remi PHP-FPM, Varnish, and Let’s Encrypt using DNS-01 validation through Cloudflare.

One important note before diving in: this workflow is historically accurate to 2021, but I would not launch a new CentOS 7 system today. For a modern rebuild, I would start from a supported distribution such as AlmaLinux, Rocky Linux, Debian, or Ubuntu LTS.

What I Was Setting Up

The goal was to provision a fresh Vultr instance, prepare storage, install the core web stack, enable TLS, and leave the box in a maintainable state for multiple hosted sites.

The high-level checklist looked like this:

  • Prepare and warm the disk
  • Update the base OS and install build tools
  • Lock down and open only required firewall ports
  • Install MariaDB, Nginx, PHP-FPM, and Varnish
  • Configure a private network interface
  • Issue Let’s Encrypt certificates with Cloudflare DNS validation
  • Tune PHP-FPM for the workload
  • Apply final access hardening

1. Pre-Warm the Disk

One of the first things I did on new volumes and snapshot-based disks was warm the storage so the first real workload did not pay the initialization penalty.

# Read pass over the root partition so first real reads are fast
dd if=/dev/vda1 of=/dev/null bs=1M
# Write pass -- only for brand-new empty volumes; this destroys existing data
dd if=/dev/zero of=/dev/<device> bs=1M
# Full read pass over the raw device
dd if=/dev/<device> of=/dev/null bs=1M

For a more controlled read pass, fio was also useful:

yum install -y fio
fio \
  --filename=/dev/vda1 \
  --rw=read \
  --bs=128k \
  --iodepth=32 \
  --ioengine=libaio \
  --direct=1 \
  --name=volume-initialize

This is a small step, but it matters on fresh infrastructure when you want predictable performance before database and application traffic starts landing.

2. Base OS Preparation

After the instance was reachable, I installed the baseline packages, standardized locale settings, and made sure the system was fully updated.

yum install -y epel-release

export LANG=en_US.UTF-8
export LANGUAGE=en_US.UTF-8
export LC_COLLATE=C
export LC_CTYPE=en_US.UTF-8

yum update -y
yum groupinstall 'Development Tools' -y

At this stage I was also continuing some of the machine setup through Ansible and shell-based provisioning, depending on which part of the stack I was iterating on.

3. Firewall and Network Basics

Before application deployment, I made sure firewalld matched the intended access pattern. I disabled AllowZoneDrifting, opened a custom admin port, and then added the public web and telemetry ports I needed.

# custom admin/SSH port
firewall-cmd --permanent --add-port=31337/tcp
# web traffic plus Elasticsearch (9200), Kibana (5601), and Beats (5044)
firewall-cmd --permanent --add-port={80/tcp,443/tcp,9200/tcp,5601/tcp,5044/tcp}
firewall-cmd --reload
firewall-cmd --list-all
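The AllowZoneDrifting change mentioned above is a one-line edit. This sketch demonstrates the substitution on a scratch copy; on the real host the target is /etc/firewalld/firewalld.conf, and firewalld has to be restarted afterwards:

```shell
# Demonstrated on a throwaway file; on the real host edit
# /etc/firewalld/firewalld.conf and then: systemctl restart firewalld
conf=$(mktemp)
echo 'AllowZoneDrifting=yes' > "$conf"
sed -i 's/^AllowZoneDrifting=yes$/AllowZoneDrifting=no/' "$conf"
grep AllowZoneDrifting "$conf"   # → AllowZoneDrifting=no
```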

Later, for VPN-related work, I also added a narrow UDP allow rule for a specific source range:

firewall-cmd --permanent --zone=public --add-rich-rule='
  rule family="ipv4"
  source address="180.190.0.0/16"
  port protocol="udp" port="1194" accept'

On the Vultr side, I also treated the provider firewall as part of the overall design rather than relying on host rules alone.

4. MariaDB 10.2

The database layer was MariaDB 10.2. I added the vendor repository, updated the machine to replace older client libraries, and then installed the server and client packages.

# /etc/yum.repos.d/mariadb.repo
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.2/centos7-amd64
gpgkey = https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck = 1
yum update -y
# vendor repo package names are capitalized; the lowercase names would pull
# the CentOS base mariadb 5.5 instead of 10.2
yum install -y MariaDB-server MariaDB-client
mysql -V
systemctl enable --now mariadb
mysql_secure_installation

I originally tried to set the root password directly with mysqladmin, but mysql_secure_installation ended up being the safer and cleaner path for that stage of the setup.

For application access, I later granted a scoped database user rather than leaving everything tied to root:

mysql -e "GRANT ALL PRIVILEGES ON \`appprefix\\_%\`.* TO 'app_db_user'@'localhost' IDENTIFIED BY '<strong-password>'"

For convenience on the box, I also kept a client profile with a non-root application user:

# ~/.my.cnf
[client]
user=app_db_user
password=<strong-password>

5. Nginx and PHP-FPM

The web tier used Nginx with multiple PHP versions available through Remi. That gave me some flexibility across sites while keeping the host standardized.

yum install -y nginx
systemctl enable --now nginx

firewall-cmd --zone=public --permanent --add-service=http
firewall-cmd --zone=public --permanent --add-service=https
firewall-cmd --reload

I then added the Remi repository and installed both PHP 7.2 and PHP 7.3 FPM pools:

yum install -y http://rpms.remirepo.net/enterprise/remi-release-7.rpm
yum install -y php72 php73
yum install -y php72-php-fpm php73-php-fpm
yum install -y \
  php73-php-zip \
  php73-php-dom \
  php73-php-gmagick \
  php73-php-SimpleXML \
  php73-php-ssh2 \
  php73-php-gd \
  php73-php-mbstring \
  php73-php-opcache \
  php73-php-posix

systemctl enable php72-php-fpm
systemctl enable php73-php-fpm

One operational detail I noted at the time was not to leave the default FPM pools running before the final pool configuration was ready. I would start them only long enough to confirm the service units were healthy, then stop them until the actual pool files were in place.

systemctl start php72-php-fpm
systemctl start php73-php-fpm
systemctl stop php72-php-fpm
systemctl stop php73-php-fpm

For Nginx itself, I often replaced the default layout with a prebuilt configuration set generated elsewhere, backed up the existing files first, and generated fresh DH parameters:

cd /etc/nginx
# back up the stock configuration first
tar -czvf nginx_$(date +'%F_%H-%M-%S').tar.gz nginx.conf conf.d/
# unpack the prebuilt configuration set brought over from elsewhere
tar -xf nginx.tar.gz
openssl dhparam -out /etc/nginx/dhparam.pem 2048
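The prebuilt configuration set itself was generated elsewhere, but the shape of a per-site server block looked roughly like the sketch below. This assumes the layout I describe later: Nginx terminates TLS, hands plain HTTP to Varnish, and Varnish fetches from a backend Nginx vhost on 127.0.0.1:8080 that talks to the PHP-FPM socket. All ports, paths, and names here are illustrative, not copied from the original files:

```nginx
# /etc/nginx/conf.d/example.com.conf -- illustrative sketch
server {
    listen 443 ssl http2;
    server_name example.com www.example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_dhparam         /etc/nginx/dhparam.pem;

    # hand decrypted traffic to Varnish
    location / {
        proxy_pass http://127.0.0.1:6081;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

# backend vhost that Varnish fetches from
server {
    listen 127.0.0.1:8080;
    server_name example.com www.example.com;
    root /var/www/example.com/public;
    index index.php;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php7.3-fpm.sock;
    }
}
```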

6. Varnish

This host also used Varnish 6.5. The setup was straightforward once the correct repository file was in place.

# /etc/yum.repos.d/varnishcache_varnish65.repo
[varnishcache_varnish65]
name=varnishcache_varnish65
baseurl=https://packagecloud.io/varnishcache/varnish65/el/7/$basearch
repo_gpgcheck=1
gpgcheck=0
enabled=1
gpgkey=https://packagecloud.io/varnishcache/varnish65/gpgkey
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
metadata_expire=300

[varnishcache_varnish65-source]
name=varnishcache_varnish65-source
baseurl=https://packagecloud.io/varnishcache/varnish65/el/7/SRPMS
repo_gpgcheck=1
gpgcheck=0
enabled=1
gpgkey=https://packagecloud.io/varnishcache/varnish65/gpgkey
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
metadata_expire=300
yum install -y pygpgme yum-utils
yum -q makecache -y --disablerepo='*' --enablerepo='varnishcache_varnish65'
yum install -y varnish
systemctl enable --now varnish
varnishd -V

In this case I kept the service defaults mostly intact because Nginx was still occupying the expected ports in the final layout. The main functional change was shipping the correct default.vcl into place.
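The actual default.vcl was site-specific and is not reproduced here, but a minimal sketch of its shape, assuming Varnish on its default 6081 listener fetching from Nginx on 127.0.0.1:8080 (both illustrative), would be:

```vcl
# /etc/varnish/default.vcl -- minimal illustrative sketch
vcl 4.1;

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    # never cache WordPress admin or logged-in traffic
    if (req.url ~ "^/wp-(admin|login)" ||
        req.http.Cookie ~ "wordpress_logged_in") {
        return (pass);
    }
}
```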

7. Private Network Interface

The instance also had a private network interface configured manually through the traditional CentOS network scripts:

# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR=10.40.112.5
NETMASK=255.255.0.0
IPV6INIT=no
MTU=1450

That gave me a clean path for internal traffic where needed, instead of forcing everything over the public interface.

8. Let’s Encrypt with Cloudflare DNS Validation

I chose DNS-01 validation because it avoided routing HTTP challenge traffic through the live Nginx configuration just to issue certificates. That was especially useful on a multi-site box where I wanted the TLS flow to stay predictable.

First, I installed Certbot into an isolated Python virtual environment:

yum install -y python3 augeas-libs.x86_64
python3 -m venv /opt/certbot/
/opt/certbot/bin/pip install --upgrade pip
/opt/certbot/bin/pip install certbot certbot-nginx certbot-dns-cloudflare
ln -s /opt/certbot/bin/certbot /usr/bin/certbot
certbot plugins

To verify the Cloudflare token before requesting certificates:

curl -X GET "https://api.cloudflare.com/client/v4/user/tokens/verify" \
  -H "Authorization: Bearer <cloudflare-api-token>" \
  -H "Content-Type: application/json"

The credentials file was stored separately with restrictive permissions:

# ~/.cloudflare.ini
dns_cloudflare_api_token = <cloudflare-api-token>
chmod 600 ~/.cloudflare.ini

A representative certificate request looked like this:

certbot certonly \
  -d example.com \
  -d www.example.com \
  --email [email protected] \
  --dns-cloudflare \
  --dns-cloudflare-credentials ~/.cloudflare.ini \
  --dns-cloudflare-propagation-seconds 60 \
  -n \
  --agree-tos \
  --force-renewal

I repeated the same pattern for each hosted domain.

For renewals, I used the standard twice-daily cron pattern and a post-renew hook to validate and reload Nginx:

echo "0 0,12 * * * root /opt/certbot/bin/python -c 'import random; import time; time.sleep(random.random() * 3600)' && certbot renew -q" | sudo tee -a /etc/crontab > /dev/null
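The python one-liner in that crontab entry exists only to add a random delay of up to an hour, so that many hosts renewing on the same schedule do not all hit Let's Encrypt at the same instant. If python feels heavy for that job, an awk equivalent works too (my sketch, not from the original log):

```shell
# Random 0-3599 second jitter, then renew -- same effect as the python sleep
jitter=$(awk 'BEGIN { srand(); print int(rand() * 3600) }')
echo "$jitter"
# crontab form: sleep "$jitter" && certbot renew -q
```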
cat >/etc/letsencrypt/renewal-hooks/post/nginx-reload.sh <<'EOF'
#!/bin/bash
nginx -t && systemctl reload nginx
EOF
chmod a+x /etc/letsencrypt/renewal-hooks/post/nginx-reload.sh

nginx -t && systemctl reload nginx

9. PHP-FPM Pool Tuning

For the main workload, I defined a dedicated PHP-FPM pool instead of relying on the default www pool. The values below reflect the shape of the 2021 deployment and are the kind of settings I like to see captured in a work log because they show how the service was intended to behave under load.

# /etc/opt/remi/php73/php-fpm.d/personal-wp.conf
[personal-wp]
user = cliper
group = cliper
listen = /run/php/php7.3-fpm.sock
listen.owner = cliper
listen.group = nginx
listen.mode = 0660
pm = ondemand
pm.max_children = 64
; the three *_servers values below only take effect under pm = dynamic;
; they are kept here as recorded in the original log
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
pm.process_idle_timeout = 5s
pm.max_requests = 50
catch_workers_output = no
php_admin_value[error_log] = /var/www/logs/php-fpm/error.log
php_value[sendmail_path] = /usr/sbin/sendmail -t -i -f [email protected]
php_admin_value[opcache.lockfile_path] = /var/www/var/php-fpm
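The pm.max_children value was not arbitrary: the usual sizing rule is the memory budget for the pool divided by the average worker footprint. A quick sanity check with illustrative numbers (the 2048 MB budget and 32 MB per-worker figure are assumptions, not measurements from this host):

```shell
# Rough pm.max_children sizing: memory budget / average worker RSS.
# Measure the real per-worker RSS on a running host with something like:
#   ps -o rss= -C php-fpm | awk '{ sum += $1; n++ } END { print sum / n / 1024 }'
budget_mb=2048   # RAM you are willing to give this pool (assumed)
worker_mb=32     # average RSS per PHP-FPM worker (assumed)
echo $(( budget_mb / worker_mb ))   # → 64
```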

Once the pool and web server config were ready, I brought the stack up in the expected order:

systemctl start nginx
systemctl start varnish
systemctl restart php73-php-fpm

The logs I cared about most during validation were:

  • /var/www/logs/php-fpm/error.log
  • /var/opt/remi/php73/log/php-fpm/error.log
  • /var/log/nginx/error.log
  • site-specific Nginx error logs under /var/log/nginx/

10. Access and Final Hardening

A few final steps rounded out the build:

usermod -aG wheel cliper
usermod -aG nginx cliper
passwd -l root

I also reviewed sudoers through visudo so administrative access was tied to the right group membership rather than direct root logins.
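The relevant sudoers rule for that model is short; on CentOS 7 the stock wheel line usually just needs to be present and uncommented (excerpt only, always edited through visudo):

```shell
# /etc/sudoers excerpt (edit with visudo, never directly)
%wheel  ALL=(ALL)  ALL
```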

What This Log Shows

For Linux administrators, this kind of document is useful because it captures sequencing, not just package names. The order matters: disk warm-up, system update, repo setup, service install, TLS, pool tuning, and then hardening.

For hiring managers, the value is slightly different. This is not just a list of commands. It shows how I approached a real production-style server build: preserving operational notes, isolating risk, planning for maintainability, and leaving enough detail behind that another admin could take over without guessing.

What I Would Change Today

If I were rebuilding this stack now, I would make a few different choices:

  • Use a supported operating system instead of CentOS 7
  • Prefer infrastructure as code and idempotent provisioning for more of the stack
  • Standardize secret handling through a vault or environment-backed process
  • Reevaluate whether Varnish is still justified for the current application mix
  • Consolidate PHP versions unless there is a hard compatibility requirement

Even so, this draft still reflects the kind of practical systems work I have done for years: clear priorities, careful sequencing, and enough operational discipline to make a server reliable after the first login.