This post is less of a "how-to" and more of a "why". This is for those developers or sysadmins who aren't very familiar with Docker and don't think it's worth using. I know what you're thinking, but there are still old-school admins who don't use Docker and refuse to learn it, even in 2019.
LXC is cool.
I've hosted dozens (maybe hundreds) of short-term web projects on traditional container infrastructure, and before that even more on virtual machines. LXC comes with a lot less resource overhead than VMs, easier decoupling of application tiers, and a hell of a lot more ease-of-use than installing everything on a single host OS.
My process for setting up a blog usually looked like this mess:
Choose container host -> choose container OS -> install LXC container -> configure LXC container with development environment -> install nginx -> set up nginx -> set up PHP or Node -> MAYBE set up SQL DB backend -> set up website -> write website content
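To make that chain concrete, here's roughly what it looks like as shell commands. This is a sketch, not a recipe: the container name, distro release, and package set are all illustrative, and it assumes LXC is already installed on the host.

```
# Illustrative LXC workflow -- names and versions are made up for the example.
lxc-create -n blog -t download -- -d debian -r buster -a amd64
lxc-start -n blog
lxc-attach -n blog -- apt-get update
lxc-attach -n blog -- apt-get install -y nginx php-fpm
# ...then hand-edit the nginx config, maybe stand up a SQL backend,
# deploy the site files, and finally write the actual content.
```

And you'd repeat most of that, by hand, for every new project.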
LXC might be fun the first dozen times but it has several weaknesses compared to modern solutions. Lucky for you, we don't have to do things like that anymore, Grandpa. Oh no no.
Now we have Docker.
Docker allows me to declaratively write a configuration file for my blogging platform of choice and never manually configure another OS again. That's right - I write this one file, and then I have complete portability of that software's infrastructure to any Docker host. It's like a dream come true.
Not only do I get maximum portability, but I also get versioning of this infrastructure through git! Since I use Docker Swarm, all I have to do to update my running Docker infrastructure is update the docker-compose.yml and then run a docker stack deploy. It's actually that easy!
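Concretely, an update cycle might look like this (the stack name and commit message here are just illustrative):

```
# Edit the compose file, version the change, and redeploy.
# Swarm compares the desired state to the running state and
# only rolls out what actually changed.
$EDITOR docker-compose.yml
git add docker-compose.yml
git commit -m "bump ghost image version"
docker stack deploy -c docker-compose.yml ghost
```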
Okay, I'm sold. How do I do it?
docker swarm init; wget https://git.matri.cx/James/DockerIt/raw/branch/master/ghost/docker-compose.yml; docker stack deploy -c docker-compose.yml ghost;
Assuming you have Docker installed, your Ghost content is in the directory ./ghost, and you're using the domain blog.matri.cx, these three commands take care of everything involved in setting up and running the Ghost blogging platform. If I ever have to move this to a different host, I'll just copy my ./ghost directory and then run those three commands on the new host. It'll take care of everything else. It's magic.
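If you want to convince yourself the stack actually came up, something like the following should do it (the ghost_web service name follows Swarm's stackname_servicename convention):

```
docker stack services ghost     # replica counts, e.g. 1/1
docker service ps ghost_web     # which node the task landed on
curl -I http://localhost:3003   # Ghost should answer on the published port
```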
Let's break that down a bit, though, because you're probably not familiar with all of the concepts above.
docker swarm init; creates a Docker Swarm instance with your new host as the manager. If you've already got a docker swarm, you know you don't need to run this, and you don't need to be reading this post. Thanks for reading anyway.
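You can sanity-check the init with docker node ls, which lists the swarm's nodes and should show your host as a manager. The output looks roughly like:

```
docker node ls
# ID         HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
# abc123 *   myhost     Ready    Active         Leader
```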
wget https://...docker-compose.yml; pulls the docker-compose configuration for this application. That configuration looks like:
version: "3"
services:
  web:
    image: ghost:2.22.3-alpine
    deploy:
      replicas: 1
      resources:
        limits:
          cpus: "1"
          memory: 1000M
      restart_policy:
        condition: on-failure
    ports:
      - "3003:2368"
    volumes:
      - ./ghost:/var/lib/ghost/content
    environment:
      - NODE_ENV=production
      - PROD_DOMAIN=https://blog.matri.cx
      - url=https://blog.matri.cx
Which is really rather simple. It's a version "3" configuration, meaning it'll use syntax available in version 3 of the docker-compose file format. It creates a single "service", called web. It uses the 2.22.3 Ghost image based on Alpine Linux (my favorite server distribution). It creates 1 copy of the server that is limited to 1 CPU and 1000M of RAM. It will restart automatically any time the container fails. It maps port 3003 on the host to port 2368 in the container. It maps the ./ghost directory on the host to the /var/lib/ghost/content directory in the container. Finally, it defines a few environment variables that tell Ghost that this is a production blog (higher security standards) that uses the URL https://blog.matri.cx.
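One nice property of this format is that scaling is just another line in the file. Hypothetically, if this service needed more capacity, I could bump the replica count and redeploy:

```
    deploy:
      replicas: 3   # instead of 1; swarm spreads tasks across available nodes
```

(Ghost itself isn't really built to run multiple instances against the same content directory, so take this as an illustration of the mechanism rather than a recommendation for this particular blog.)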
Finally, we have docker stack deploy -c docker-compose.yml ghost, which deploys this configuration under the name ghost. Docker takes care of pulling all of the required images and spinning up a virtualized network to run any of the services defined within (in this case, just web). In my production environment, I have an nginx reverse proxy that takes care of requests on 443 and routes them to port 3003 on the Docker host, which is then mapped to the Ghost container's port 2368. That's how you're reading this page right now!
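For the curious, that reverse-proxy side is an ordinary nginx server block. A minimal sketch, assuming certificates are already in place (the certificate paths here are illustrative):

```
server {
    listen 443 ssl;
    server_name blog.matri.cx;

    # Illustrative certificate paths
    ssl_certificate     /etc/letsencrypt/live/blog.matri.cx/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/blog.matri.cx/privkey.pem;

    location / {
        # Forward to the port Docker published for the ghost stack
        proxy_pass http://127.0.0.1:3003;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```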
If I were to write an article describing this same process in as much depth using traditional LXC, that article would easily be 5x as long. Easily. That's not even considering the fact that migrating applications from that infrastructure would take just about as long unless you went with snapshots, which are far less modular. This initial Docker setup took me around 30 minutes; migrating it to a new host, even in the cloud, would take less than 5 minutes.
I love Docker.