Recently, I purchased a four post rack enclosure for my servers.

While I've had more extensive equipment in the past, for the last year or so I have been running all of my services from two Dell PowerEdge R410s connected to a terrible TP-Link 8-port switch going straight to my ISP-provided cable modem/router combo. Please try to hold your vomit; I know that isn't a pretty picture.

However, a few months ago I decided that I wanted to emulate a highly available, containerized environment as closely as I could on a sub-$1000 budget, with the price of the servers I already owned factored into that total. One of the goals for this project was to have enough RAM and CPU to run real applications of the kind you'd see in production. This post covers both the hardware and the software that I've deployed in my environment. I intend to update it when there are major additions or other changes to my homelab rather than write new posts.

If you know me, you're probably thinking, "Why didn't you just use the cloud? You're AWS certified!" My answer is, "Sometimes my office gets cold in the winter and I didn't want to buy a space heater."

Let's start with the $$$.

$$$ Budget $$$

  1. "Raising Electronics" four post 27U rack enclosure from Amazon. It consists of several pieces of metal with holes in them. Very simple stuff. $170
  2. 3x Dell Sliding Rapid Rail kits for my R410s - 01HGRH. $140
  3. 1U CyberPower CPS1215RMS 120v/15a Power Distribution Unit. $46
  4. 4-Port VGA KVM Switch with cables. $30
  5. Miscellaneous Ethernet cables. $18
  6. 1U Raising Electronics Horizontal Cable Management Unit. $15
  7. AC Infinity 1U Rack Shelf. $23
  8. 3x Dell PowerEdge R410s. $507
  9. Cisco WS-C3750G-48TS-S 48-port Gigabit Layer 3 Switch. $62
  10. Dell SonicWall TZ-215 Firewall. $0

I ended up breaking my budget by only $11! A lot of this hardware is on the older side, but I'm not running any mission-critical applications locally, and electricity here is $0.06/kWh.

Host Hardware Breakdown

  • eve-psr-dock0.lan.matri.cx

    • Dell PowerEdge R410
    • 64GB RAM
    • 2x Intel Xeon X5670
    • 4x 2TB Seagate Enterprise HDD
  • eve-psr-dock1.lan.matri.cx

    • Dell PowerEdge R410
    • 32GB RAM
    • 2x Intel Xeon X5660
    • 4x 2TB Seagate Enterprise HDD
  • eve-psr-dock2.lan.matri.cx

    • Dell PowerEdge R410
    • 86GB RAM
    • 2x Intel Xeon L5640
    • 4x 2TB Seagate Enterprise HDD

Network Configuration

I wanted to keep the network as simple as possible. I don't believe in using advanced features just for fun unless you're trying to simulate a particular real-world application, and in my case there was no demand for a crazy networking setup. My ISP-provided cable modem/router combo is in "bridge mode": it has an incoming cable connection and one outgoing Ethernet connection to my SonicWall firewall appliance. The SonicWall handles NAT, all routing on the network, and DHCP. I have one flat /24 subnet, and DHCP hands out the address of a local DNS server (BIND) that runs as a Kubernetes pod. The SonicWall has one port that goes to the Cisco switch, and every other device on the network connects to that switch. Kubernetes uses the Flannel network plugin rather than something more cumbersome like Weave.
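
The only real Kubernetes-side networking decision here is Flannel's pod network, which sits alongside (not inside) the flat /24. For reference, this is the relevant excerpt of the stock kube-flannel manifest; the pod CIDR and VXLAN backend shown are Flannel's defaults and an assumption rather than my exact values.

# Excerpt from the stock kube-flannel manifest (sketch; defaults shown, not necessarily my values)
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }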

Very, very simple. Very, very easy to run and maintain.

The Fun Stuff

Each host is configured with RAID 6, which gives 4TB of usable storage per host and can survive up to two disk failures. I installed CentOS 7.6 on all three physical hosts and used Ansible to install the necessary software and configure the machines.
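
The host prep itself is nothing exotic. A minimal sketch of what the Ansible side could look like is below; the inventory group name, package list, and task layout are illustrative assumptions rather than my actual playbook.

# prep-hosts.yml -- illustrative sketch, not my actual playbook.
# Assumes an inventory group named "kube_nodes" and that the Kubernetes
# and Gluster yum repositories are already configured on the hosts.
- hosts: kube_nodes
  become: true
  tasks:
    - name: Install the packages the cluster needs
      yum:
        name:
          - docker
          - kubeadm
          - kubelet
          - kubectl
          - glusterfs-server
        state: present

    - name: Make sure the long-running services start on boot
      service:
        name: "{{ item }}"
        state: started
        enabled: true
      loop:
        - docker
        - kubelet
        - glusterd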

A 2TB replicated GlusterFS volume spans all three hosts and provides the shared storage that I use for persistent data. With it, my containers keep their persistent data regardless of which host is asked to spin up the application. Because the volume is fully replicated, it reserves the full volume size on each host, but in exchange it provides high performance and very low latency. I've been very happy with Gluster!
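
Since Ansible is already in the mix, here's a sketch of how that volume could be created and mounted with the gluster_volume and mount modules. The volume name, brick path, and inventory group are assumptions on my part; only the /glust mountpoint used later in this post is real.

# gluster.yml -- sketch only; the volume name and brick path are assumptions.
- hosts: kube_nodes
  become: true
  tasks:
    - name: Create a three-way replicated volume across the hosts
      gluster_volume:
        name: glust
        state: present
        replicas: 3
        bricks: /bricks/glust
        cluster:
          - eve-psr-dock0.lan.matri.cx
          - eve-psr-dock1.lan.matri.cx
          - eve-psr-dock2.lan.matri.cx
      run_once: true            # the module peers the hosts and creates the volume from one of them

    - name: Mount the replicated volume at /glust on every host
      mount:
        path: /glust
        src: "localhost:/glust"
        fstype: glusterfs
        state: mounted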

Docker Swarm was the container orchestration engine I originally used for this deployment. While I love Kubernetes, I already had a Kubernetes testing environment in AWS, and I knew that Docker Swarm would simplify and streamline the deployment of applications in my environment given my one simple requirement: Highly Available Containerization. It also gave me the opportunity to work with the second-most-popular container orchestration engine, which many Kubernetes pros have never looked at.

k u b e r n e t e s is in the homelab now. Docker Swarm is dead and orchestrates none of my services. I was just being lazy before and didn't want to port my homelab's Swarm to Kubernetes. I've since done the right thing, don't worry.

All three hosts are set up as Kubernetes masters with stacked etcd in order to keep the cluster available through the loss of up to one host (a three-member etcd cluster still has quorum with two members). This means that if a host loses a drive or a power supply, or shuts down for any other reason, my cluster will continue to operate without interruption! I do understand that having all three hosts in the same rack and in the same Availability Zone (my house) means they're all dependent on the same network and power delivery, so there are still shared single points of failure, but I can only do so much on a homelab budget!
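
For anyone reproducing this with kubeadm (a sketch under that assumption, not necessarily how I bootstrapped my cluster), a stacked-etcd control plane boils down to a small ClusterConfiguration plus a kubeadm join --control-plane on the second and third hosts. The endpoint name and pod subnet below are placeholders, not my real values.

# kubeadm-config.yaml -- sketch; the endpoint name and pod subnet are placeholders.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: "kube.lan.matri.cx:6443"   # hypothetical shared name/VIP in front of the three API servers
networking:
  podSubnet: "10.244.0.0/16"                     # matches Flannel's default pod network

On recent kubeadm releases, running kubeadm init --config kubeadm-config.yaml --upload-certs on the first host prints the kubeadm join ... --control-plane command for the other two, and each join brings up another stacked etcd member.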

As I'm running Kubernetes, I manage all of my deployments through Kubernetes YAML files that live on the GlusterFS volume. My directory structure looks like this:

/glust/
|
|___ /glust/kube/
|   |
|   |___ /glust/kube/$application_name
|   |    $application_name.yml
|   |    README.md
|   |    |
|   |    |___ /glust/kube/$application_name/$application_data
|   |    |    ( Application Data Goes Here )

Maintenance of Kubernetes is incredibly simple: all I have to do is define my deployments, pods, and services in the Kubernetes YAML file and run kubectl apply -f $application_name.yml. Each YAML file defines volumes for all persistent data, which are mounted into the container from a host directory somewhere within /glust/kube/$application_name/$application_data/. This means that everything I need to run the application, including its content, is isolated to its directory within /glust/kube/, and I can copy that folder over to a different Kubernetes environment and have the same deployment running in seconds. I can also easily back up my Kubernetes configurations and data on a per-deployment basis.
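
To make that concrete, here's roughly what one of those per-application files could look like. The application name, image, and data subdirectory are placeholders rather than one of my real deployments.

# Sketch of a per-application manifest; the name, image, and paths are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: nginx:1.17
          ports:
            - containerPort: 80
          volumeMounts:
            - name: app-data
              mountPath: /usr/share/nginx/html
      volumes:
        - name: app-data
          hostPath:
            # any node can mount this path because /glust is the replicated Gluster volume
            path: /glust/kube/example-app/html
            type: Directory
---
apiVersion: v1
kind: Service
metadata:
  name: example-app
spec:
  selector:
    app: example-app
  ports:
    - port: 80
      targetPort: 80

Because every host mounts the same /glust path, the hostPath volume resolves to the same data no matter which node the pod lands on, which is the whole reason for backing it with the replicated Gluster volume.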

There's a lot more that can and will be covered here, but I don't want to delay publishing this forever. I hate to put out unfinished work, but I'm treating this as a living document that will continue to be modified as things change and as I find the time to add more details. Here's my non-exhaustive TODO list of items I still need to write about in this post:

  1. Management tools
  2. DNS setup
  3. Containerized nginx reverse proxy
  4. Matrix-Synapse
  5. Local Docker Registry