For a long time, I’ve been a big fan of technical people having a “home lab”. Whether you’re a developer or work in networking or server management – what better place to learn new technology and bring enterprise-class technology to your home network?
For any home lab, I believe you need a few things to keep you sane:
- DHCP, which updates DNS with names.
- A reverse lookup zone for DNS (where you can resolve a name from an IP address)
- Active Directory for single sign-on everywhere (optional/may not be needed)
- A certificate authority so you don’t have cert nags for all of your internal stuff.
- A hypervisor to quickly stand up servers.
- [NEW!] A Docker engine instance to quickly stand up containers.
First, why those things? Without DHCP that updates DNS, you are stuck dealing with IP addresses. As a civilized human living in the year 2017, I insist on being able to use regular machine names from ANY machine. This is already a solved problem which few people implement!
Without this, you are either referring to machines by IP address, or you are updating the hosts file on several machines. That typically means you are using static IP addresses. If so – stop doing that!
DHCP all the things:
In the very olden days, people kept Excel spreadsheets of IP addresses and hostnames, and machines had static IP addresses. Then Microsoft (at least) fixed this. I say Microsoft because I haven’t found a Linux equivalent of this yet. If you set up a Microsoft Active Directory, then install DNS and DHCP and have those be integrated with AD, you get a magical thing: when DHCP gives out IP addresses, it not only tells the clients about your DNS, it also registers their computer names in DNS.
This means that the instant that you bring up a new machine called server1 – every machine on the network can now reference it by name. This is what sanity looks like in your home lab. Without this, you will be driving yourself insane with the mundane details!
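Once name resolution is wired up, any machine can verify it. As a quick sanity check, here’s a minimal sketch using Python’s standard socket module – note that localhost is standing in for one of your own lab hostnames (like server1) only so the example runs anywhere:

```python
import socket

# Forward lookup: hostname -> IP address (what a DNS "A" record answers).
# Substitute one of your own lab names, e.g. "server1"; "localhost" is
# used here only so the example works on any machine.
ip = socket.gethostbyname("localhost")
print(ip)  # 127.0.0.1

# Reverse lookup: IP address -> hostname (what the "PTR" records in a
# reverse lookup zone answer).
name = socket.gethostbyaddr("127.0.0.1")[0]
print(name)
```

If the forward lookup works from every box on your network the moment a new machine comes up, DHCP-to-DNS registration is doing its job.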
Every single computer (workstation, server, or device) should use DHCP – EXCEPT your Active Directory/DNS servers and your router.
So – I recommend setting up:
- Active Directory (with Windows Server Essentials 2016 – these licenses are cheap, and if you have MSDN, you don’t burn up a useful “Server” license for this)
- DNS (AD-integrated)
- DHCP (AD-integrated)
In DHCP, make sure you take full advantage and fill out everything you want to send down to your DHCP clients, including the DNS and WINS servers.
Working with Microsoft DNS:
I’ve been a fan of using Microsoft AD with Microsoft DNS (and Microsoft DHCP), because they work together really well. I’ve set up BIND on Linux, and although that is straightforward, the Microsoft solution does quite a lot with just a few steps.
Once you install a Windows Server operating system, one of the roles you can give it is Active Directory domain controller. It’s ideal to have a dedicated primary and a backup – even if both are virtual machines on the same physical box, it’s still worth it. For simple things like rebooting one of them after installing updates, you’ll still have DHCP and name resolution while that machine is down. So aside from regular fault tolerance, it’s cheap insurance to stand up at least one backup. These can and should be virtual machines, not physical, dedicated boxes.
When you install AD, it wants to install DNS with it. I also install DHCP and tie that in too. If you are unfamiliar, I did a series of short videos which walk through how to do this with a slightly older version of Windows Server (circa 2005). See here: https://www.youtube.com/watch?v=PTmRppqtV3s&list=PL9KcjYQmhxH746z46LqhrM3ETaloJRKfR
The one additional step you should take is to create a reverse lookup zone. It’s as simple as choosing a reverse zone when creating a new zone on the DNS server, and specifying the IP address range. It will then automatically keep track of the “PTR” records, which map an IP address to a DNS name (whereas a forward zone maps a DNS name to an IP address).
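To make the PTR naming concrete, here’s a small sketch using Python’s standard ipaddress module, which can compute the reverse-zone record name for any address (the 192.168.1.50 address is just an assumed example from a typical home subnet):

```python
import ipaddress

# A reverse lookup zone stores "PTR" records under in-addr.arpa, with
# the IP's octets reversed. The ipaddress module can show the record
# name a DNS server keeps for a given address.
addr = ipaddress.ip_address("192.168.1.50")  # assumed example lab IP
print(addr.reverse_pointer)  # 50.1.168.192.in-addr.arpa
```

That `50.1.168.192.in-addr.arpa` name is exactly what the DNS server maintains for you automatically once the reverse zone exists.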
Setting up a Certificate Authority (CA)
I just wrote a blog post about webCA. This is what I’m using on my internal network now. I “trust” the root CA on all of my workstations and servers, and I can then use my internal services without any cert nags.
Setting up a Hypervisor:
I wrote about this recently. I’ve been using Proxmox for a few months now and I’m extremely happy with it. It does everything I need, it’s easy to use, and it’s web-based. If you are going to say “But what about VMware?” – first of all, just for saying their name, you likely owe them a fee – and second of all, I address VMware, VirtualBox, and Hyper-V in that blog post. If you just want the short answer, go try: https://www.proxmox.com/
Setting up a Docker host:
If you plan on running “production” Docker containers, and if you have a Synology NAS, then consider just using the Docker feature there. However, for a standalone Docker host (perhaps hosted in Proxmox) or on your workstation, check out www.portainer.io for a nice little web management UI for it.
This will only get you so far, but it does work well. If you need HA or multiple hosts, you’re talking about an order of magnitude more work to set up a Docker “orchestration” product for a cluster.
Choosing the hardware:
Well, this is the easy part. For a long time, we lived in an age of scarcity: good hardware was difficult or expensive to get. Nowadays, you can get excellent machines for not a lot of money.
Unless you have a computer rack, and a place to run noisy, power-hungry servers – I would just recommend getting a regular desktop machine. Failing that, a “tower” style server.
I really like the Lenovo ThinkServer for a home lab. You can typically get very good configurations for surprisingly little money, and best of all, they are dead quiet! For example, as of this writing there are several of them for under $299 on Amazon (i3s with 8GB of RAM). Keep in mind that a “low end” processor that would suck on your workstation will not be noticeable on a server. Just get a couple of SSDs for it, go to www.crucial.com, and max out the RAM. I think 32GB is a good number for a home lab – plenty of room for all sorts of projects.
I don’t hear many people talk about their home lab setup, so in preparation for some upcoming posts, I wanted to write down a bit about my approach and what I’ve found works. In case it helps, here’s a summary of what I recommend:
- Hardware: get a Lenovo ThinkServer with a couple of SSDs and 16GB or more of RAM.
- Virtualization: on that server, install Proxmox for virtualization.
- AD/DNS/DHCP: within Proxmox, set up two Active Directory servers with DNS and DHCP. Don’t forget the reverse DNS zone.
- Docker: within Proxmox, set up a “permanent” Docker host and run webCA. (alternate: install webCA as a container in your Synology DSM, if you have one)
Interestingly, this can cost little or no money. If you have MSDN, then your Windows Server licenses are covered; if not, the licensing may be cost-prohibitive. As for the hardware, maybe you already have a server, or something that could be upgraded, lying around? Aside from that, everything else is free.
What do you think? What home lab equipment do you think is necessary?