WordPress Hosting with Portainer, NGINX, and Docker

If you are in the business of offering hosted solutions for clients, Docker is a fantastic alternative to a standard shared host or dedicated server per client. There are a number of benefits to Dockerizing your client sites including service/site isolation, security benefits, environment customization (per client/site), portability, and many others.

Unfortunately, Docker and its more robust sibling Docker Compose are a little more complicated for the average developer (especially one focused on front-end development) to get started with. This tutorial will guide you through setting up a Docker management service called Portainer and will walk you through deploying multiple isolated WordPress client sites. Hopefully, you will see the benefit of running client sites this way despite the small initial overhead.

If you have been deploying and managing multiple client WordPress sites on a single shared server (or even multiple servers with one site per server) then you might wonder why Dockerization would make any kind of difference. For me, the most important reasons are isolation (and implicitly, security) and ease of management.

Running multiple WordPress sites on a single host typically means that you have a single database service running multiple client databases, as well as multiple local directories holding your clients' WordPress files (both system and uploaded). Dockerizing each client's site means that both the WordPress files and the associated database are contained within their own isolated containers. Effectively, this simulates an isolated server environment for each site.

The above image isn’t WordPress-specific but it conveys the general idea. One virtual server that, through Docker, shares common system/kernel resources and runs multiple isolated stacks. From a practical perspective, this means that there are independent instances of WordPress and MySQL running. There is no sharing of a single database service and a security breach of one site means that the others are not easily accessible for corruption (as is often the case in a cheap shared hosting environment).

For those who run independent virtual servers on a per-client/site basis, I might argue that the practice is likely overkill for the majority of clients out there. Yes, small virtual servers are pretty cheap these days, but most client sites don't require even the small amount of dedicated resources that these servers provide (much less 24/7). You can save your clients money and your own time in managing multiple servers by consolidating smaller clients (traffic- and resource-wise) onto a single Dockerized host.

Provision a Virtual Server

You most likely already have a server that your client sites are running on. You can continue using this server if you’d like or you can provision a new one to start fresh. Be sure to also point any domains that you will be using to this new server’s IP address.

Portainer and Docker Management

Portainer is a Docker manager that is both free and easy to use. Getting this up and running is our first step after provisioning our virtual server. I’m going to assume that you already have a clean Ubuntu 18.04 install and we’ll go from there. Execute the following commands to install Docker:

sudo apt update
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
sudo apt update
sudo apt install docker-ce
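
If everything installed correctly, the Docker daemon should now be running. As an optional sanity check, you can verify the install with:

sudo systemctl status docker
sudo docker run hello-world # pulls and runs a tiny test image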

Then install docker-compose!

sudo curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
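
You can confirm the binary is executable and on your PATH by checking its version:

docker-compose --version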

Finally, install Portainer. Portainer is just another Docker container but it will be the only one that you will install and run from the command line!

sudo docker volume create portainer_data
sudo docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer
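
Before moving on, you can confirm that the Portainer container is up and listening:

sudo docker ps --filter "name=portainer"

If the registration page below doesn't load, double-check that port 9000 isn't blocked by a firewall on your server.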

At this point you should have Portainer installed and running on port 9000 of your virtual server. Open a browser and go to: http://<your.server.ip.address>:9000. You should see the new user registration page:

Go ahead and enter a new password and optionally change the default admin username.

On the next screen, select the “Local” environment option then click “Connect.” You will then be brought to the main Portainer Dashboard!

Here, there are a variety of options and settings you can change if needed. For the purposes of this tutorial, we’ll just get right to creating two client WordPress stacks.

Docker-compose and WordPress Configuration

Click on the “local” endpoint to connect to the locally managed Docker instance. At this point there’s obviously not much running except for the Portainer service itself.

Let’s create our first WordPress stack. In Portainer, a stack is just a docker-compose configuration, which in turn defines multiple Docker containers, usually networked together. Click on “Stacks” in the left navigation then, at the top, click on “Add Stack.”

We’ll name our first stack “Client Site 1”, make sure “Web editor” is selected, and choose “Administrator” under Access Control. In the editor, paste in the following docker-compose file:

version: '2'
services:
  wordpress:
    image: wordpress:latest # https://hub.docker.com/_/wordpress/
    ports:
      - 90:80 # change port for every deployment. Get nginx config to look here
    volumes:
      - ./client1/config/php.conf.uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
      - ./client1/wp-app:/var/www/html # Full wordpress project
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: Qfjhk560ahjljksdfHIT3g9gsfg # change this
    depends_on:
      - db
    networks:
      - client1
  db:
    image: mysql:5.7
    ports:
      - 3315:3306 # change port for every new deployment
    volumes:
      - ./client1/mysql:/var/lib/mysql
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_ROOT_PASSWORD: Qfjhk560ahjljksdfHIT3g9gsfg # change this
    networks:
      - client1
networks:
  client1:
    driver: bridge

This compose file does a few things:

  • Defining two docker containers: WordPress and MySQL.
  • It’s specifying MySQL 5.7 since the PHP MySQL driver used by the WordPress image doesn’t yet support MySQL 8’s newer default authentication mechanism.
  • Specifying that the WordPress container will map port 90 on the host to the container’s port 80. This will be where all public traffic to WordPress will flow through. We’ll tackle securing this later.
  • Specifying that the MySQL container will map port 3315 on the host to the container’s port 3306. This allows us to connect to MySQL for each of our sites based on port number after SSH’ing into the main server.
  • It is defining a bridged network named “client1” that is shared between the two containers in this client’s stack.
  • It is defining MySQL credentials that are isolated to, and shared only within, this stack.
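
One thing worth pointing out: the compose file mounts a PHP override file from ./client1/config/php.conf.uploads.ini (this is the “config” directory we’ll revisit under Backups and Customization below). The exact contents are up to you, but as a rough sketch it might hold upload-related overrides like:

file_uploads = On
memory_limit = 256M
upload_max_filesize = 64M
post_max_size = 64M
max_execution_time = 300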

Change the two password fields to be something random but be sure they both match! Once you are ready, click on “Deploy the Stack” to start the image download and spin everything up. Once that is complete you will see that the new stack is up and running! In the left navigation, click on “Containers” and you should see both the WordPress and MySQL containers managed by compose!

Configuring NGINX as a Reverse Proxy

Great! Now we have a fully isolated WordPress stack ready to go…almost. You will remember that we mapped the host port to 90. That’s not a standard port, and since we intend to run multiple WordPress sites on this server, we need a mechanism to map each site’s port to its own host/domain. This is where NGINX comes in.

First, be sure that NGINX is installed:

sudo apt install nginx

Then, let’s go in and create a new server config in /etc/nginx/sites-available/<client1.com>

server {
        server_name    <client1.com>;

        location / {
                proxy_pass http://127.0.0.1:90;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto https;
        }
}

Notice that the proxy_pass is pointing to the localhost on port 90! Be sure to change the server_name to your client’s FQDN and be sure that domain’s DNS is pointing to your server’s IP address. Once that file is squared away, be sure to create the symlink into sites-enabled!

sudo ln -s /etc/nginx/sites-available/<client1.com> /etc/nginx/sites-enabled/

sudo service nginx restart # Restart the NGINX server.
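
If NGINX refuses to restart, you can check the configuration for syntax errors first:

sudo nginx -t # validate the NGINX configuration files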

At this point, almost everything should be working as expected. In your browser, if you navigate to http://<client1.com> you should see the WordPress setup page. However, since we should ALWAYS be using TLS (HTTPS), we have to generate a TLS certificate for our domain. I’m a huge fan of Let’s Encrypt and Certbot since they issue free TLS certificates. For the vast majority of clients, this issuer will be just fine. First, let’s install Certbot:

sudo apt-get update
sudo apt-get install software-properties-common
sudo add-apt-repository universe
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install certbot python-certbot-nginx

To issue the certificate, we simply execute the following:

sudo certbot --nginx

You will then be guided through a series of questions where you can select the domain you wish to have a certificate issued for. Each domain that is listed under a given NGINX “server_name” directive and that is enabled (symlinked into the “sites-enabled” directory) should be shown. You can run this command multiple times for as many domains as you’d like. At the end, Certbot will ask whether you want it to update your NGINX config for you and redirect plain HTTP traffic to HTTPS. Say yes to both.

At this point, you should be able to navigate to https://<client1.com> in your browser and see the WordPress setup screen. That’s it!
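
Keep in mind that Let’s Encrypt certificates are only valid for 90 days. The certbot package sets up automatic renewal for you, but if you’d like some peace of mind you can simulate a renewal with a dry run:

sudo certbot renew --dry-run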

Adding More Client Sites

To add additional sites, you simply need to run through the “Docker-compose and WordPress Configuration” and “Configuring NGINX as a Reverse Proxy” sections again. Except this time around, you will need to bump the port numbers that you use for WordPress and MySQL. In the original docker-compose file above we used the port mapping of 90:80 for WordPress and 3315:3306 for MySQL. Let’s bump those host ports up by one (you may also want to rename the “client1” volume paths and network to “client2”). Additionally, let’s go ahead and create a new MySQL password that should match in both containers:

... # WordPress
    ports:
      - 91:80 # change port for every deployment. Get nginx config to look here
    ...
        WORDPRESS_DB_PASSWORD: xrUhb>HNsnpUs#R77#2M
... # MySQL
    ports:
      - 3316:3306 # change port for every new deployment
    ...
        MYSQL_ROOT_PASSWORD: xrUhb>HNsnpUs#R77#2M
....

Now, go ahead and create another stack using this new configuration and run through the same NGINX/Certbot steps but using the new <client2.com> domain and voila! Repeat this process for as many client sites as you need.
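
For reference, the second site’s NGINX config is nearly identical to the first; only the server_name and the proxied port change (this sketch assumes a hypothetical <client2.com> domain and the 91:80 mapping from above):

server {
        server_name    <client2.com>;

        location / {
                proxy_pass http://127.0.0.1:91;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto https;
        }
}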

Backups and Customization

The nice thing about how the directory mappings are done in the Compose script is that from the main Docker server for each of your sites, you have access to the PHP configuration files, database files, and WordPress app files. This makes things like backup and recovery very easy.

Log in to your Docker server again and cd into the data directory where Portainer keeps all of your sites’ files:

cd /data/compose

In this directory, you will notice more directories with incremental numbers depending on how many stacks you have deployed. To be honest, I’m not quite sure what the numbered directory means aside from perhaps a stack or build number, but you should have one directory named “1” there. cd into it and you should see a directory called “client1”. Enter that directory and you should see something like the following:

  • config – Contains the “php.conf.uploads.ini” so you can set PHP config overrides.
  • mysql – Contains the native MySQL database files.
  • wp-app – Contains the main WordPress application/files.

Given that we can access these files directly from the main Docker server, we can easily modify them and/or write scripts for backup. You can also drop existing “wp-app/wp-content” uploads and such in here. To connect to the MySQL instance of a particular client, you would simply tunnel your MySQL connection through SSH to the main Docker server and connect to the localhost port of that particular client. So in the above example, our connection parameters might look like this:
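
For example, using the command-line client from your local machine (assuming a hypothetical SSH user “user”, your server’s IP address, and client 1’s host port of 3315), the tunnel and connection might look something like this:

# Forward local port 3315 to port 3315 on the Docker server over SSH
ssh -L 3315:127.0.0.1:3315 user@<your.server.ip.address>

# In another terminal, connect to the tunneled MySQL instance
mysql -h 127.0.0.1 -P 3315 -u root -p wordpress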

Of course, you’d use the MySQL password specified in that client’s Compose file. From here you could import an existing site’s database.
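
To give you an idea of how simple backups become, here is a rough sketch of a script you could run on the Docker server itself. The paths and container name here are assumptions (check docker ps for your actual database container name) and should be adjusted to match your own stack:

#!/bin/bash
# Rough backup sketch for "Client Site 1" -- adjust paths, container name,
# and password to match your own deployment.
SITE_DIR=/data/compose/1/client1
BACKUP_DIR=/root/backups/client1
DB_CONTAINER=client-site-1_db_1            # hypothetical name; check `docker ps`
DB_PASSWORD=Qfjhk560ahjljksdfHIT3g9gsfg    # the password from this stack's compose file

mkdir -p "$BACKUP_DIR"

# Archive the WordPress application files (including wp-content uploads)
tar -czf "$BACKUP_DIR/wp-app-$(date +%F).tar.gz" -C "$SITE_DIR" wp-app

# Dump the WordPress database via the running MySQL container
docker exec "$DB_CONTAINER" mysqldump -uroot -p"$DB_PASSWORD" wordpress > "$BACKUP_DIR/wordpress-$(date +%F).sql"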

Conclusion

So the process is a bit long, but once you get everything set up, deploying subsequent WordPress sites in this way is quick, secure, and fairly performant. Of course, overall performance depends on the server’s specs, how many sites you host on it, and the amount of traffic each one gets. In combination with things like WP caching plugins and even the use of CDNs, you can likely fit a good number of client sites on a $10-$20/month virtual server.
