These days, the context in which you run a server or application has become quite blurred. In the olden days, you’d install an operating system on physical hardware and run your application directly on that. Nowadays, not only do you have virtualization, you have several different kinds of virtualization.
- Hypervisors – like VMware, Hyper-V, Xen Server, or VirtualBox. These are products that run on physical hardware and allow multiple “virtual” machines to run on the same, shared hardware. Also, KVM is a hypervisor built into the Linux kernel since v2.6.20 back in 2007 – so really, any Linux machine can become a hypervisor host with some management software in place.
- Cloud-based Hypervisors – like AWS and Azure offer Infrastructure-as-a-Service (IaaS) via their own proprietary tooling.
- Containers – where you run some or all of your application in virtual/wrapped way, but some part of your context might be shared – like the kernel or networking.
- LXC or Linux Containers is an interesting technology built on the idea of OS virtualization. You have one shared Linux kernel, and could potentially have several virtual “servers” that have their own context.
- LXD is the successor to LXC and is, I think, closer to a real hypervisor that hosts LXC containers.
- Docker runs entire self-contained applications in “containers” in a unique run-time environment.
- CoreOS’s rkt (originally called Rocket) is supposed to be the primary competitor to Docker, but as of this writing it’s still in its early stages.
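As an aside, since KVM is baked into the kernel, you can check whether any given Linux box is ready to be a KVM host with a few standard commands – no management software required:

```shell
# Count CPU hardware-virtualization flags (vmx = Intel VT-x, svm = AMD-V);
# 0 means the CPU (or the BIOS setting) doesn't support KVM-style virtualization
grep -cE 'vmx|svm' /proc/cpuinfo

# See whether the kvm kernel modules are loaded
lsmod | grep kvm

# If this device exists, userspace tools like QEMU can use KVM acceleration
ls -l /dev/kvm
```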
For your home lab, or maybe for work, you probably want to be able to stand up a virtual server or workstation. Above are just some of your choices. For me, I’ve generally stayed with Hyper-V because I’ve been doing mostly Microsoft-based things. However, last year I switched over to Xen Server.
Why am I changing again? Well, each technology has its own problems, but some issues are bigger than others:
- Hyper-V can really only be managed from a Windows computer. There is no simple support from macOS or Linux. Worse, remotely administering a Hyper-V host, even from Windows, is no small task. Without Active Directory, I simply couldn’t get it working – that’s what happened this weekend. Even when I did have Active Directory, it was still remarkably difficult: you have to set up credential delegation, WSMan*, etc. It took me all afternoon.
Microsoft COMPLETELY misses the mark on this. Their Hyper-V Server 2016 is free to download and use, but it’s absurdly complex to remotely administer – which you need to do, because Hyper-V Server doesn’t have a user interface, just a command line. Worse, in the year 2017 there should be a web interface for administering Hyper-V. This is a total no-brainer: no complicated setup, just navigate to a web page, log in, and start working. Until there is a web interface for management, using Hyper-V Server 2016 – or Hyper-V as your preferred hypervisor at all – is a mistake.
- Xen Server has a somewhat similar problem. It runs on a really strange version of Linux, has a terrible command-line interface on the console – and you need to download a Windows application to administer it. Again, in the year 2017 this is ridiculous. There were other frustrating parts of Xen too: VMs don’t boot when the server does (you have to SSH in and change a config file for that), and I never did figure out how to get Xen to see my additional disk space, because the regular Linux tools I expected weren’t there.
- VirtualBox is absolutely the best hypervisor for interactive VMs. For example, if you want to run macOS, or Linux with a windowing environment, on your workstation, this works very well. The reason I say it’s the winner in this category is that you can also “share” USB devices with the VM. For example, if you have a USB drive plugged in, or perhaps a fingerprint reader, you can pass those devices through to the VM – none of the other options here make that anywhere near as easy. VirtualBox is great for interactively working with VMs, but it’s not good at running servers “in the background”, or “headless”.
- VMware is so expensive, I might owe them money just for mentioning their name.
- Cloud-based IaaS is typically very nice and relatively simple but, like a mainframe, you pay for every single bit of CPU and disk that you use. So, for “hobbyist” activities that aren’t making you money, it’s not a great fit.
Ugh. All I want is a decent hypervisor to run some servers in the lab. I want simple authentication and, ideally, a web interface. If it’s Linux, I want it to be a real distribution (like Debian or CentOS – not BSD or some proprietary/custom distro).
A Solution – Proxmox:
After getting aggravated with this whole mess, I spent some time researching. I kept running across people talking about Proxmox (https://www.proxmox.com/en/ || https://en.wikipedia.org/wiki/Proxmox_Virtual_Environment), which has been around for quite a while:
I downloaded it and checked it out. In short, here are some of the reasons I REALLY liked this, right away:
- Web interface – It’s got a full-featured web interface for administering the server.
- “It’s a Linux system! I know this!” – it’s a slightly customized version of Debian Jessie.
- Supports VMs and Linux containers – and lets them run side-by-side
- Web-based shell – in case you need to get to the console of the host OR of guest VMs and don’t have an SSH client handy, you can bring up a terminal command line right in the web interface.
- Everything worked – without any significant hassle. What the documentation says is correct, which made for easy installation and configuration.
Let’s look at some of the features.
Here is the main screen at the data center level, you can pool together multiple VM hosts:
And here is the main screen at the server level:
One cool idea is you set up a place that has all of your operating system or Linux container images. So, under “Storage” at the data center level, you set up a new Directory:
You can specify what kinds of files can go in that directory: Disk Image, ISO Image, etc. Once you have that set up, you can download/upload all of the images you want to use. For example, in my “all” storage item, I have ISO images for VMs and container templates for LXC containers:
The LXC containers are pre-built, pre-configured images you can start from. For example, the Debian 8 TurnKey LAMP template has Debian Jessie, Apache, MySQL, and PHP pre-installed.
You upload images by either clicking on that “Upload” button on the Content tab of your storage item:
or, if you’re at the console (or access it from this web portal), you can use wget or curl to put the file in place:
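For example, on a default install the built-in “local” storage lives under /var/lib/vz, with ISOs in template/iso and container templates in template/cache; a custom Directory storage item gets the same subfolder layout under its own path. A sketch (the URL below is just a placeholder):

```shell
# Default paths for the built-in "local" storage
cd /var/lib/vz/template/iso
wget http://example.com/debian-8-netinst.iso    # placeholder URL

# Proxmox also ships a helper for fetching LXC container templates:
pveam update                     # refresh the index of available templates
pveam available | grep debian    # list the Debian templates on offer
```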
Now, for LXC containers, you’d click “Templates” and pick and choose a template to start from:
As you can see, pretty cool so far. Very straight-forward and intuitive to use.
Creating a VM:
How easy is it to create a virtual machine? Well, in the top-right of the navigation, you click Create VM and then follow the prompts:
At this point, we can start the VM and see that it’s running:
but again, right from this web page, we can click on the “Console” menu on the left and start installing the operating system from the console of that machine:
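Everything the wizard does can also be done from the host’s command line with Proxmox’s qm tool. A rough sketch – the VM ID, name, ISO, and storage names are just examples from my setup:

```shell
# Create VM 100: 2 cores, 2GB RAM, a bridged NIC, and the Debian ISO attached
qm create 100 --name test-vm --cores 2 --memory 2048 \
  --net0 virtio,bridge=vmbr0 \
  --cdrom local:iso/debian-8-netinst.iso

# Give it a 32GB virtual disk on the "local-lvm" storage, then boot it
qm set 100 --virtio0 local-lvm:32
qm start 100
qm status 100
```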
Creating an LXC container:
LXC containers are pretty similar. To start, click on the “Create CT” button in the top right and follow the wizard:
Notice how on this screen these aren’t ISO images, but the LXC “Templates” that were downloaded earlier.
Now we start up the container and can see it running:
and just like with a VM, we can click on Console to finish configuring the container:
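Containers have a command-line equivalent too: pct. A sketch, assuming a Debian template like the one from earlier has already been downloaded (the ID, template filename, and storage names are examples):

```shell
# Create container 101 from a downloaded template
pct create 101 local:vztmpl/debian-8.0-standard_8.7-1_amd64.tar.gz \
  --hostname test-ct --memory 512 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --rootfs local-lvm:8

pct start 101
pct enter 101    # drop straight into a shell inside the container
```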
Gotcha – getting Proxmox to see your other disks:
In all fairness, there was one slight problem – but it was quickly resolved, despite me not being too familiar with the details. In this system I have 3 drives, configured like this:
- SSD – which was formatted during the OS install, where the system lives. (reports as /dev/sdc)
- 2TB disk – unformatted (reports as /dev/sda)
- 2TB disk – unformatted (reports as /dev/sdb)
Since there is no Debian GUI, how do I format these other drives and make them available to Proxmox? Worse, it turns out Proxmox doesn’t deal directly with raw drives; instead, it works with the Linux Logical Volume Manager (LVM). Crap, where do I even start with that?!
Well, I found this page:
and I literally just followed all of the directions and it worked exactly as described. I can now see the drives:
and then I created a new LVM storage item with those two drives:
You can see that the 4TB is now available to VMs and containers.
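For the curious, the directions I followed boil down to the standard LVM workflow, plus adding the volume group as storage in the web interface. A sketch – the device names match my system, and “vmdata” is just the name I picked:

```shell
# Turn the two blank 2TB disks into LVM physical volumes
pvcreate /dev/sda /dev/sdb

# Pool them into a single ~4TB volume group
vgcreate vmdata /dev/sda /dev/sdb

# Sanity-check the result
pvs
vgs
```

After that, in the web interface you add a new storage item of type LVM at the data center level and point it at the vmdata volume group; Proxmox then carves a logical volume out of it for each VM or container disk.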
The bottom line for me is that I was very frustrated with the state of hypervisors. I researched the state of the art and found a product that I really, really like! It ticks all the boxes for me and was quite literally exactly what I was looking for.
So, for your home lab, or if you are looking for a non-zillion-dollar alternative to VMware, Proxmox seems like an excellent option.