Synology and the state of consumer NAS devices

These days, you likely have some disk storage needs. For example:

  • Just regular, important files
  • Family photos and videos
  • Regular backups of everyone’s computer
  • (optional) IP cameras, and a way to record them

Network Attached Storage, or NAS, is meant to solve that. It’s a device that plugs into your network and has a whole bunch of disk space. The software then makes that disk space available to your computers and can even run additional software too. But primarily it’s meant as a common storage medium available to everyone in a home or office.


After hearing so many other converts, I decided to drink the Kool-aid and get a Synology NAS.


Below is my story and my path to buying yet another NAS device.

The 1st NAS:
So, a couple of years ago, I bought a NAS. I got a Netgear ReadyNAS 104. It holds 4 disks:


and the web interface looks like this:


This seemed to be fine. It has a nice web interface and does all the things you’d expect. I initially set this up with RAID 5 across the 4 drives. RAID 5 uses one drive’s worth of space for parity, meaning you can lose up to one drive without losing your data. Replace the failed drive, it re-syncs, and you never have downtime.

However, I started going through drives pretty quickly – every couple of months. It didn’t seem to be the drives, it seemed to be the ReadyNAS that had the problem. The ReadyNAS would tell me that a drive failed. Even if I reformatted it and re-inserted it – that drive was dead, as far as ReadyNAS was concerned. After the third time, I’d had enough. I had a bunch of other things going on at the time, so I just picked up a different NAS as a stop-gap.

The 2nd NAS:
So the second NAS I got was a Seagate D4. This also holds 4 drives:


Same deal, it supports RAID, and has a nice web interface:


I got four drives – a different brand than before, but all identical – for this NAS. I set up the main volume as RAID 5. Same thing: every few months, I’d get consistency errors or would outright lose a drive. After the 2nd “failed” drive, I was getting pretty annoyed.

What about RAID 10?
After talking with friends who are more on the infrastructure side than me, I was advised that RAID 5 and RAID 6 – although ideal on paper – are often not reliable in practice. Multiple people advised me to use RAID 10 instead. This is basically RAID 1 (mirroring) combined with RAID 0 (striping), with no parity involved.


This is great: you have striping, which makes the disks look like one big chunk of disk space, and you have a mirror – a live, second copy of the data in case of a drive failure. The downside is you lose HALF of your potential disk space.

In this case, I had 4 x 5TB drives. If you set that up with RAID 5, you lose one disk’s worth of space and are left with 15TB usable. With RAID 10, you lose half – so you are left with 10TB usable, even though you physically have 20TB of space!
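That arithmetic is easy to sketch in a few lines of Python (a rough illustration only – real arrays lose a bit more to filesystem and formatting overhead):

```python
def usable_tb(num_drives, drive_tb, raid_level):
    """Approximate usable capacity for a few common RAID levels."""
    total = num_drives * drive_tb
    if raid_level == 5:
        return total - drive_tb        # one drive's worth of parity
    if raid_level == 6:
        return total - 2 * drive_tb    # two drives' worth of parity
    if raid_level == 10:
        return total / 2               # every byte is mirrored
    raise ValueError("unsupported RAID level")

print(usable_tb(4, 5, 5))    # 15 TB usable out of 20 TB raw
print(usable_tb(4, 5, 10))   # 10.0 TB usable out of 20 TB raw
```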

That’s the price to pay if you want reliable storage, right? Think about it, though – that is significant. Often, a NAS has so much space that you simply CAN’T back it up any place else. It IS the live copy, and it actually MUST be fault-tolerant. So, for these devices, I had to use RAID 10, because that was the only option that proved reliable.


I still had the ReadyNAS laying around (which holds a 2nd copy of my data) – so I ultimately set it up, and the Seagate, with RAID 10. The result? I didn’t have any more disk “failures”. The reliability problem was fixed. So if you have a NAS like this and see drive failures – switch to RAID 10.


Other NAS Features:
Meanwhile, in talking with other techy people, I heard several people rave about Synology. In fact, you can’t really find anything negative about Synology – except that their equipment is pretty expensive.

But what I learned is that the Synology NAS products were much more than just a NAS. The “apps” you can install on them made them into a valuable piece of home or home lab infrastructure!

But wait, the ReadyNAS has “apps” too:


and wait a second, so does the Seagate:


Both of these have similar problems. First, they had a VERY limited number of apps – and most were weird, one-off apps written many years ago. But worse, they were never updated.

ReadyNAS Apps – you can only install what is available via the web interface, and any “updates” you want to do, you have to do from the web interface too. The problem is they never updated anything. ReadyNAS uses a proprietary Linux distro and there isn’t anything you can do from the command-line (and there is no package manager either).

Seagate Apps – I initially started trying to get OwnCloud to work. You can install this “app”, but it’s extremely old – like from ~2012. It was so old that the mobile app couldn’t even connect to it. Again, you can only “upgrade” it if an upgrade is available in the web interface.

So the Seagate NAS had a VERY old version available. I spent a whole weekend on this at one point. In the end, it turns out that the proprietary NAS OS uses a virtualization technology called “Rainbow”. So even when I was able to SSH into the NAS, and then step into the Rainbow emulation (where OwnCloud was running in a Debian environment) – I couldn’t get past a long list of upgrade errors.

Meanwhile, the other “problem” I had was that for my IP cameras, I’ve been using ZoneMinder. It is Free and Open Source (FOSS), but it’s not a very good product. The Seagate NAS has a “surveillance manager” app, but I couldn’t get any of my IP cameras working with it.

The move to Synology:
At this point, I have two NAS’s that are now reliable – but I can’t use a parity-based RAID level with either. I can’t use either for an OwnCloud type setup (where I can get to my files from anywhere), nor can I take advantage of the disk space for IP camera recording. It’s just a plain NAS.

So, I ended up getting a Synology DS916+ which holds 4 drives and is upgradeable with an expansion unit which holds an additional 5 drives, if needed.

First, the OS and web interface are pretty impressive. The DiskStation Manager (DSM) web interface is like a full-on windowing application, and everything runs right in the browser:


So what problems does this solve for me?

  • Reliability – I can use SHR (Synology Hybrid RAID), which behaves much like RAID 5, plus the newer Btrfs file system, which specifically helps with data integrity when data is spanned across multiple disks
  • IP cameras – using the Surveillance Station app, I now have a very robust tool for recording and archiving my IP cameras, and using my NAS storage for that (I wasn’t before)
  • Apps – Synology has a VERY rich and engaged app store with many useful apps which seem to be maintained.
  • Remote access – I’d need to research this more, but it seems like they have a pretty mature way to remotely access all of your files while you are away. This includes matching mobile apps as well.

In short, all the things I thought I was getting when I got my first NAS.

Bottom Line:
The Synology devices aren’t really consumer-class NAS’s; they are more for the “hobbyist” and “small business” crowd. That said, they are also significantly more expensive than other consumer-grade NAS devices. However, is it really a “bargain” if the cheaper NAS’s aren’t reliable and don’t do all the things you want them to do?

What’s next for me is I need to finish migrating my data over to the Synology – I’ll wipe the drives on the other two NAS’s and sell those. Meanwhile, I need to finish setting up the rest of my IP cameras, and will research more into how secure the whole “remote access” thing is.

Bottom line, if you are in the market for a good quality NAS for your home, home lab, or small business – I see now why Synology gets such high marks. It’s a bit more expensive, but you are really getting the best thing on the market, right now. Also, it’s a significant step-up, in many ways, from a consumer-grade NAS.


Proxmox – a great Hypervisor for running servers

These days, the context in which you run a server or application has become quite blurred. In the olden days, you’d install an operating system on physical hardware and run your application on that. Nowadays, not only do you have virtualization, you have several different kinds of virtualization.

For example:

  • Hypervisor – like VMWare, Hyper-V, Xen Server, or VirtualBox. These are products that run on physical hardware and allow multiple “virtual” machines to run on the same, shared hardware. Also, KVM is a hypervisor built into the Linux kernel since v2.6.20 back in 2007 – so really, any Linux machine can become a hypervisor host, with some management software in place.
  • Cloud-based Hypervisors – like AWS and Azure offer Infrastructure-as-a-Service (IaaS) via their own proprietary tooling.
  • Containers – where you run some or all of your application in virtual/wrapped way, but some part of your context might be shared – like the kernel or networking.
    • LXC or Linux Containers is an interesting technology built on the idea of OS virtualization. You have one shared Linux kernel, and could potentially have several virtual “servers” that have their own context.
    • LXD builds on top of LXC and is, I think, closer to a real hypervisor experience for hosting LXC containers.
    • Docker runs entire self-contained applications in “containers” in a unique run-time environment.
    • CoreOS Rocket (rkt) is supposed to be the primary competitor to Docker, but as of this writing it’s still in its early stages.

For your home lab, or maybe for work, you probably want to be able to stand up a virtual server or workstation. Above are just some of your choices. For me, I’ve generally stayed with Hyper-V because I’ve been doing mostly Microsoft-based things. However, last year I switched over to Xen Server.

The Problem:
Why am I changing again? Well, each technology has its own problems, but some issues are bigger than others:

  • Hyper-V can really only be managed from a Windows computer. There is no simple support from macOS or Linux. But worse, remotely administering a Hyper-V host, even from Windows, is no small task. Without Active Directory, I simply couldn’t get it working – which is what happened this weekend. Before, when I did have Active Directory, it was still remarkably difficult: you have to set up credential delegation, WSMan trusted hosts, etc. It took me all afternoon.

    Microsoft COMPLETELY misses the mark on this. Their Hyper-V Server 2016 is free to download and use, but it’s absurdly complex to remotely administer – which you need to do, because Hyper-V Server doesn’t have a user interface, just a command-line. Worse, in the year 2017 there should be a web interface for administering Hyper-V. This is a total no-brainer: no complicated setup, just navigate to a web page, log in, and start working. Using Hyper-V Server 2016 – or Hyper-V as your preferred hypervisor in modern day – is a mistake, at least until there is a web interface for management.

  • Xen Server has a somewhat similar problem. It runs on a really strange version of Linux, has a terrible command-line interface on the console – and you need to download a Windows application to administer it. Again, in the year 2017 this is ridiculous. There were other frustrating elements of Xen, too: VMs don’t boot when the server does (you have to SSH in and change a config file for that), and I never did figure out how to get Xen to see my additional disk space, because the regular Linux tools I expected weren’t there.
  • VirtualBox is absolutely the best hypervisor for running interactive VMs. For example, on your workstation, if you want to run macOS, or Linux with a windowing environment, this works very well. The reason I say it’s the winner in this category is that you can also “share” USB devices with the VM. For example, if you have a USB drive plugged in, or perhaps a fingerprint reader, you can pass those devices through to the VM – few other desktop virtualization products make that as easy. VirtualBox is great for interactively working with VMs, but it is not good at running servers “in the background”, or “headless”.
  • VMWare is so expensive, I might owe them money just for mentioning their name.
  • Cloud-based IaaS are typically very nice, and relatively simple, but like a mainframe, you pay for every single bit of CPU and disk that you use. So, for “hobbyist” type activities that aren’t making you money, this is not a great fit.

Ugg. All I want is a decent hypervisor to run some servers in the lab. I want simple authentication and ideally a web interface. If it’s Linux, I want it to be a real distribution (like Debian or CentOS – not BSD or some proprietary/custom distro).

A Solution – Proxmox:
After getting aggravated with this whole mess, I spent some time researching. I kept running across people talking about Proxmox, which has been around for quite a while:


I downloaded it and checked it out. In short, here are some of the reasons I REALLY liked this, right away:

  • Web interface – It’s got a full-featured web interface for administering the server.
  • It’s a Linux system! I know this! – it’s a slightly customized version of Debian Jessie.
  • Supports VM’s and Linux Containers – and lets them run side-by-side
  • Web-based shell – in case you do need to get to the console of the host OR client VM’s and don’t have an SSH client handy, right in the web interface, you can bring up a terminal command-line.
  • Everything worked – without any significant hassle. What the documentation says is correct. That made for easy installation and configuration.

Let’s look at some of the features.

Here is the main screen at the data center level, where you can pool together multiple VM hosts:


And here is the main screen at the server level:


One cool idea is you set up a place that has all of your operating system or Linux container images. So, under “Storage” at the data center level, you set up a new Directory:


You can specify what kinds of files can go in that directory: Disk Image, ISO Image, etc. Once you have that set up, you can download/upload all of the images you want to use. For example, in my “all” storage item, I have ISO images for VMs and Container Templates for LXC containers:


The LXC containers are pre-built, pre-configured images from which you can start. For example, the Debian8 Turnkey LAMP template has Debian Jessie, Apache, MySQL, and PHP pre-installed.

You upload images by either clicking on that “Upload” button on the Content tab of your storage item:


or if you are in the console or access the console from this web portal, you could do a wget or curl to put the file in place:


Now for the LXC containers, you’d click “Templates” and you can pick and choose a template from which to start:


As you can see, pretty cool so far. Very straight-forward and intuitive to use.

Creating a VM:
How easy is it to create a virtual machine? Well, in the top-right of the navigation, you click Create VM and then follow the prompts:


At this point, we can start the VM and see that it’s running:


but again, right from this web page, we can click on the “Console” menu on the left and start installing the operating system from the console of that machine:


Creating an LXC container:
LXC containers are pretty similar. To start, click on the “Create CT” button in the top right and follow the wizard:


Notice how on this screen, these are not ISO images, but rather the LXC “Templates” that were downloaded earlier.


Now we start up the container and can see it running:


and just like with a VM, we can click on Console to finish configuring the container:


Gotcha – getting Proxmox to see your other disks:
In all fairness, there was one slight problem I had – but it was quickly resolved, despite me not being too familiar with the details. In this system I have 3 drives, which were configured like this:

  1. SSD – which was formatted during the OS install, where the system lives. (reports as /dev/sdc)
  2. 2TB disk – unformatted (reports as /dev/sda)
  3. 2TB disk – unformatted (reports as /dev/sdb)

Since there is no Debian GUI, how do I format these other drives and make them available to Proxmox? Even worse, it turns out Proxmox doesn’t deal directly with drives, and instead only works with the Linux Logical Volume Manager (LVM). Crap, where do I even start with that?!

Well, I found this page:

and I literally just followed all of the directions and it worked exactly as described. I can now see the drives:


and then I created a new LVM storage item with those two drives:


You can see that the 4TB is now available to VMs and containers.

Bottom Line:
The bottom line for me is that I was very frustrated with the state of hypervisors. I researched the state of the art, and found a product that I really, really like! It really ticks all the checkboxes for me and was quite literally exactly what I was looking for.

So, for your home lab or if you are looking for a non-zillion-dollar alternative to VMWare, Proxmox seems like an excellent option.


Quick Review of Parrot Security OS v3.1

I had a request to review Parrot OS. I haven’t looked at it in quite some time, so I downloaded the latest from:



What is it?
In short, Parrot Security OS is a security/anonymity/pentesting distribution of Linux, similar to Kali. It is Debian-based, and they too have gone to a “rolling” release of the operating system and tools. The parent project also offers several other security-oriented products and services, such as public DNS, encrypted XMPP/IRC chat, a certificate authority, etc.

In short, Parrot OS is basically a customized version of Debian Linux which has all of the industry-standard pentesting tools that Kali has, but it also has anonymity features such as i2p, Tor, BleachBit, AnonSurf, etc. – plus regular Linux-y stuff like LibreOffice, Icedove, HexChat, etc. It also uses the MATE desktop environment instead of regular GNOME or KDE.

Here’s a quick look at the login greeter, and some of the menus:










Bottom Line:
I played around with it for a while, and everything seems to work like it should. Really, in a custom distro, it just comes down to whether you like what was assembled. And generally, yeah – this is pretty cool, and best of all, everything works like it should.

If there is any negative, it’s just around the concept. Whether you are pentesting or being a black hat, you REALLY need to have clean opsec. That means ZERO chance of contaminating your machine with your live/personal data. So, if you have a pentesting rig, it really shouldn’t have e-mail on it, for example. You should have one environment for security stuff, and a different, discrete environment for your personal stuff – not even on the same hard drive.

This distribution seems like it’s trying to be everything to everyone: you have your pentest tools, but you also have your creature comforts of e-mail and IM. Although that is great, it (to me) seems like it’s just a matter of time before some people slip up. For example, if you open a Facebook link in Firefox, you correlate your Facebook cookie with the same anonymized connection where you just performed your hack – potentially tying you, criminally, to an event! It’s super easy to have your personally-identifiable data leak into your other work.

So I guess the conclusion I come to is that I want a pentesting rig to be specialized and ONLY have what I need for those sorts of activities. If I also feel “comfortable” enough to use my personal stuff on that setup, that’s a recipe for disaster (for me, at least)!

With that said, there isn’t anything about the distribution that forces you to be sloppy, it’s just something to consider. Arguably too, the target audience could be someone who LIVES in their pseudonym – in which case, the e-mail, IM, Facebook, etc. of their “hacker identity” is not as dangerous to potentially leak. For example, a reporter, whistleblower, or dissident might use all of these features effectively, and safely.

In the end, it looks like a totally comparable alternative to Kali (plus a bunch more), if you are looking for a new distribution!


Creating Undo Functionality

Lately I’ve been working on an app to automate some features of a really great deployment product called Octopus Deploy. When you need to set up a new deployment, though, you have to go into several screens and it takes several steps. The good news is the product is all API-based, so you can do everything either via a REST interface or a C# client library.

In this case, there are (I think) about 6 things I need to do in that system. If any one of the steps fails, I need to undo everything I’ve done up to that point.

Enter the Command Pattern:
It’s universally accepted that if you need undo functionality, you use the Command design pattern. However, that design pattern is really geared toward a system you are writing yourself, where you have control over the state of everything. This case is quite different: I’m interacting with the API of a 3rd-party system and have no control over its state or availability.

Command Pattern-ish…
So I took the spirit of the Command pattern and simplified it a bit. First, I want to “unwind” the list of things in the reverse order. For example, if the order is:

1) Create project group
2) Create project
3) Create team and assign it to the project
4) …

I don’t want to undo those in the same order, because just about all of the steps will fail. I’d need to unwind them in reverse order:

3) De-assign team from project; delete team.
2) Delete project
1) Delete project group

So, the correct data structure for that is a “Stack” object. You “push” things onto the stack (like loading a Pez dispenser or firearm magazine) or you “pop” things off the stack when you want to retrieve them. For example:
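For example, here’s the push/pop behavior sketched in Python (the actual project is C#; a plain list stands in for a stack here):

```python
undo_items = []                                    # a Python list works as a stack (LIFO)

# "push" an undo step after each operation succeeds
undo_items.append("delete project group")          # pushed after step 1
undo_items.append("delete project")                # pushed after step 2
undo_items.append("de-assign team; delete team")   # pushed after step 3

# "pop" retrieves items newest-first, i.e. in reverse order
while undo_items:
    print(undo_items.pop())
```

This prints the undo steps in reverse: step 3’s undo first, then step 2’s, then step 1’s.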


And this “UndoItem” type is just a class I created to keep track of every item to undo:


Now, for each operation I want to do, I keep track of the code that it would take to “undo” that operation. I made an example project which just creates directories:


When we do an “undoItems.Push(..)” – that is what records the code it would take to undo that operation. What’s crazy is that because .NET supports “closures”, you can reference variables from outside the scope of your anonymous function. So, imagine that several minutes later we were undoing this operation – “directory1” would normally be out of scope. However, .NET knows to keep track of the external references that you make, so the variable is still available when the undo code runs.

Doing the Undo:
OK – so we have a stack to keep track of the undo items, and whenever we do something undoable, we write down what it would take to back it out. Finally, we need to actually execute the stack of undos. So, I have code like this in the exception block:


So this walks through the stack, pops off an item, and then executes that undo code while printing to the screen what it’s doing.
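Putting the whole flow together, here is a minimal sketch of the same idea in Python (the original sample is C#; `create_directories` and the simulated failure are hypothetical stand-ins, but the closure behavior is analogous – each lambda captures the directory path it needs):

```python
import os
import tempfile

def create_directories(base):
    undo_items = []  # stack of (description, undo-function) pairs
    try:
        directory1 = os.path.join(base, "dir1")
        os.mkdir(directory1)
        # the lambda "closes over" directory1, keeping it reachable later
        undo_items.append(("remove dir1", lambda: os.rmdir(directory1)))

        directory2 = os.path.join(base, "dir2")
        os.mkdir(directory2)
        undo_items.append(("remove dir2", lambda: os.rmdir(directory2)))

        raise RuntimeError("simulated failure on step 3")
    except Exception as exc:
        print(f"Error: {exc} -- rolling back...")
        while undo_items:                  # unwind in reverse (LIFO) order
            description, undo = undo_items.pop()
            print(f"  undoing: {description}")
            undo()                         # execute the recorded undo code

base = tempfile.mkdtemp()
create_directories(base)
print(os.listdir(base))   # [] -- both directories were rolled back
os.rmdir(base)
```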

Bottom Line:
I thought this was an interesting computer science problem. I didn’t solve it in the typical way with a full-blown Command pattern, but this little chunk of code in the same vein/spirit worked out well, so I wanted to write it down.

As I mentioned, I made a complete sample, which you can view or download from GitHub:


netdata: Web-based Performance Monitoring for Linux

Co-worker Stephen pointed me in the direction of the “netdata” open source project. Rather than trying to explain what it is first, it might be easier for you to just navigate to it and be blown away first!


So what this is, is a lightweight, but very rich and robust, web-based performance monitoring tool – for Linux. I say “for Linux” because I don’t believe it is currently set up to run on anything else.

Installation on Linux (and Raspberry Pi!):
This is compelling to me because it’s very quick and easy to install, and it runs on everything. You run a few commands, then you navigate to the local dashboard page, and voila – you see the same kind of dashboard as above, except for your system!

So, to install on any Linux system – including (I tested on) Raspbian for Raspberry Pi and Banana Pi – just follow the instructions from there:

Installation on Ubuntu, on Windows?!:
After the Windows 10 anniversary update, there is now an Ubuntu Linux subsystem that runs within Windows 10. It’s not exactly full-blown Ubuntu, but it’s got most things you’d expect, including “apt-get”. So, I thought: why not give it a shot?

Sure enough, it installed just like normal, following the same link as above!

The one “gotcha” here though is that it only shows processes from the perspective of this Ubuntu subsystem. So, it’s not a complete system monitor – but it does technically work!

Bottom Line:
If you have a Raspberry Pi project – or any other Linux system whose performance you’d want to monitor – this tool is easy and quick to install, and has a ridiculously powerful and beautiful dashboard to show you all of the details!


Wireless all the things! Streaming audio and video in 2016

I recently realized that a few of my technology annoyances have been solved. In fact, for a few things, there are great solutions!


Streaming Music to Speakers:
For example, I use Microsoft’s Groove Music Pass for my music. I can stream and download unlimited music on multiple devices. In the car, I stream “Bluetooth audio” from my phone, which lets me take my playlists on the road. At home though… ugg. I have a Windows machine with the Line Out jack hooked to a home stereo tuner via RCA jacks. So, I can only listen “on the big speakers” to the audio out from that one Windows machine.

Bluetooth enable ALL home theaters, and home stereos:
I thought in my head: “I wish I could stream music from my phone or iPad to those speakers” – and voila, that technology exists! And, it only costs $22 each:


With this device, you plug in power, and it has an audio jack and an included cable that goes to RCA plugs. So, for each home theater/AV/home stereo you have – get one of these. Then, hit the button on the back and pair it with your phone, tablet, and computers (Windows/Linux/macOS all support this).

Oh, and remember my main Windows machine used to be hooked up to those speakers? Well, I just pair it to the device too:


What is interesting is that many devices can be paired and connected to this bluetooth device – and whenever a device is sending audio, that is the one that gets through. If multiple devices are sending audio at the same time, the bluetooth device either makes it choppy or picks one. Bottom line though, many devices can easily share a home stereo or surround sound system from computers, tablets or phones.

What a cool technology, right? So now, for $22 each, you can stream from any computer/tablet/phone to any audio system you have in your home or office.

OK, so streaming audio is now a solved-problem!

Streaming Video to TV’s and Monitors:
Along the same lines, wouldn’t it be cool to “throw something up on the screen” in your office or living room… wirelessly? Well, that technology already exists too – and it’s also somewhat cheap! I recommend these Microsoft Wireless Display Adapters, which are $49 new – but you can find used ones for typically half that:


Now, before you go saying “who needs some Microsoft proprietary device that only works with Windows?” – that’s the thing: although I think Microsoft does use its own technology when connecting from Windows, these also support “Miracast”, essentially HDMI over Wi-Fi. Miracast is widely supported on Android, Windows, and even Linux.

Wireless enable ALL monitors and tv’s in your home:
So, imagine that any shared screen in your home or office with an HDMI port could have one of these, and you could connect to it as an “extra monitor” from pretty much any computer/tablet/phone! Connecting to a “wireless display” is natively supported in Windows 10, too:


Now, this isn’t all good news. Although the technology mostly works great, it can be flaky. To be clear, you can stream 1080p video/audio and it works without issue. However, suddenly (and a couple of times a day), it will randomly disconnect; you just need to reconnect. Also, in the case of my phone using a Miracast app, I can see the display adapter, but it won’t successfully connect. So, it’s not perfect.

Bottom Line:
I was pleasantly surprised that technology now exists that easily lets you share your audio or your video with any device in your home/office. Using those Bluetooth devices above, I can move to the living room and continue streaming my music to the home theater system. When I go to the office, I can stream to the system there – all from my phone.

Similarly, if I want to show someone something on an extra monitor or TV somewhere near me, I can do that from virtually any computer/tablet/phone.

This is one of those things where I just took a few minutes and bought the hardware I needed. You only need to set it up once – and it’s easy. After that, you can live like you’re living in the future!!


ReactOS – an open source version of Windows?!

ReactOS is an actual, legal, open source version of “Windows”. It is binary-compatible with Windows and runs many (if not most) Windows programs. It’s legal because they basically recreated Windows from scratch and put it under the GNU GPL.


Admittedly, this is probably a niche thing which will likely only appeal to three categories of people:

  1. People who are curious about technology.
  2. People who want/need to run Windows but are fed-up with the privacy and security nightmare that it’s become.
  3. People who need Windows to run one or two programs, but don’t want to keep track of legitimate Windows licenses, and activation, etc.

I am mostly in #3 and a little in #1. In my case, if I run a pentesting laptop with Kali Linux, what do I do about the few Windows-only testing tools that are out there? Well, if I install VirtualBox, and then install a ReactOS virtual machine, I can run all of those programs in there – and Microsoft doesn’t even have to get involved!

The Good!:
Well, it looks, feels, and acts pretty much like an old version of Windows – so it’s familiar (but very, very fast and snappy):


Another notable thing is that the installation takes like 3 minutes, and once it’s booted, it uses only 117MB of RAM just idling – compared to 1,500-2,000MB of RAM for Windows 10:


The video and the keyboard/mouse capture were terrible with VirtualBox at first, but luckily the “guest additions” installed and work perfectly! That was a pleasant surprise:


If you install those, the keyboard/mouse are seamless and the desktop resizes resolution whenever you change the size of the window.

The penultimate example of good is that since it’s not “quite” real Windows, it has its own application manager, where you can install known-compatible software – and it’s also how you update the system:


And lastly, the main good thing is that pretty much everything “just works”. I haven’t run across any showstoppers yet – but I’m sure there must be, given the complextication of Windows.

The Bad:
I haven’t found much bad. There are only two things that come to mind for me. First, for applications that draw their own window and window-manager buttons (min, max, close in the top-right) – like Firefox or Chrome – those buttons don’t show up.


Also, since this seems to be modeled on an old version of Windows (maybe Win95 or Win98?), it can bog down pretty easily. It’s very light and snappy, but when more than a few things are going on, you can tell the OS slows down. It “feels” like the way Windows used to bog down, in the olden days.

Bottom line:
Is this a replacement for Windows 10? Probably not, for most people. Given the complexity (and advancement) of Windows, I am sure there are newer apps that won’t work. However, if you want a super-lightweight OS which will run “most” Windows apps, and is free – this so far seems to be a very cool alternative!


Building a Kali Linux KDE image

Well, the second major release of Kali 2 came out and we were told you could now have different desktop experiences! Instead of the clunky old GNOME interface:


you could use modern window managers like KDE:


or super-fast window managers like XFCE! Well, when I looked on the downloads page, there is not a download for KDE, sadly:


So, how do you use Kali Linux with KDE, then? If you use the regular builds, it’s not available in the dropdown on the login/greeter screen. After a little research, I found I have to build my own ISO. Hoo boy, that’s probably complexticated, right?

Well, the good news is, it’s quite easy. Follow the instructions from this page, and run (as root, from a directory, like ~/kali/):

# apt-get install curl git live-build cdebootstrap
# git clone git://
# cd live-build-config
# ./ --distribution kali-rolling --variant kde --verbose

and then let your computer run for a while. On a low-powered laptop, this process took about 1 hour and 20 minutes. It pulls down everything it needs to build a custom, KDE-oriented .iso file – which you can then burn to a DVD or a USB thumb drive, and use to install wherever you’d like. I did a blog post on how to do that from Linux. The .iso file will be in the ./images/ folder under the build directory.

I’m not sure why they included specific downloads for MATE, XFCE, etc., but left off KDE. But the above is how you can fix that and run Kali with a KDE desktop experience.

With that said, after I did all of this, the installer detects an ethernet and wifi adapter, but after installation, the OS (and KDE) doesn’t see either NIC. So, there is still more work to do – but here’s how you can at least get started!

UPDATE: post-install, I needed to do a couple of things. First, I added the following to /etc/network/interfaces:

auto eth0
iface eth0 inet dhcp

This tells the OS to at least look for a device called eth0, which is the physical Ethernet NIC. You can do the same with “wlan0” too, but you’d need to specify the SSID, and I’ve never been able to successfully configure that wpa_supplicant stuff – so there’s no point. PLUS, the NetworkManager we are installing next replaces the purpose of this file.
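For reference, a fuller version of that file might look like this (assuming your wired NIC really is named eth0 – newer systems may use names like enp0s3 instead, so check with “ip link” first):

```
# /etc/network/interfaces -- loopback plus DHCP on the wired NIC

# The loopback interface (present on every system)
auto lo
iface lo inet loopback

# Bring up eth0 at boot and get an address via DHCP
auto eth0
iface eth0 inet dhcp
```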

To bring up the network card, you can do a:

$ sudo ifup eth0

and you should get a DHCP address. Now, to support wireless – and now that you at least have a wired connection – install the following (this is only for KDE):

$ sudo apt-get install plasma-nm

This is the NetworkManager plug-in for the KDE “Plasma” desktop. I started from these Debian instructions, and then followed the links to the KDE-specific instructions. In essence: install “plasma-nm”, then right-click on the main status bar, choose Add Widget, search for Network, and drag that to your “system tray” area in the bottom right. You’ll now be able to browse and connect to WiFi networks.

Who knows why this isn’t enabled by default, but it’s an easy enough fix – and you only have to do it once, just after installing the operating system.


The dilution of the operating system

As you might know, I resigned from a position I had for exactly 10 years, exclusively supporting Microsoft .NET development. That was a couple of months ago. Without “having” to stay on Windows anymore in my new role, I’ve been having a walkabout with other operating systems. Specifically, I’ve been living almost exclusively off of Ubuntu Linux and MacOS for the past couple of months. What have I learned?


First, I learned that if you use Windows, virtually every “regular” app you would use is available in some form on these other operating systems. So, you could easily use either of them. Take the Kindle app: there’s a version for Windows and one for Mac, but what about Linux? Well, the full Amazon Kindle experience is available right in the browser. Similarly for OneNote: there are native apps for Windows and MacOS, but on Linux you can just use the browser app to open your OneNote notebooks. The web UX isn’t quite as nice, but it’s totally doable. Even apps like Skype are available natively on Ubuntu and MacOS now too.

Second, everything is coming to every operating system. The Ubuntu Linux command-line has come to Windows 10, PowerShell is now open source and is available on Linux and MacOS, etc. If there is a useful app, it seems it’s just a matter of time before it’s available on “the other” platforms.

Using Windows full-time:
Why even wander away from Windows in the first place? Well, first for me are the outrageous security and privacy issues in Windows 10. “When something is free, YOU are the product,” as the saying goes. We now know that Windows regularly sends data home, and the tracking of everything you do is creepy, unnecessary, and – when Microsoft is inevitably hacked – can only be bad. We’ve somehow “accepted” that it’s OK for someone to wander around your house, observe everything you do, and document it too – it’s just crazy. Isn’t your “personal computer” even more private than your home, nowadays?

But even aside from that (which, I acknowledge, some people don’t care about), Windows is also frustrating to use compared to Linux. When you are working in the command-line, Windows hasn’t really changed much since the 1980’s. It’s a woefully lacking environment. Then there are Unix-y things that you sometimes want or need to do, where Windows just can’t do it. For example, I changed how my DHCP/DNS works at my house and needed to track down which remote machines were using which IP addresses. Nmap does that easily and quickly on Linux. So – Windows is not an “everything I want” environment.

Using Ubuntu full-time:
If you are going to use Linux, and want things to “just work”, then Ubuntu is the only practical answer. This is because when a vendor takes the time to get their product working on Linux, they address Ubuntu first, because it’s the most popular. I’ve been pleasantly surprised by my experience. Even advanced things like getting a fingerprint reader to work, and having simple whole-disk encryption (similar to BitLocker), are easy to use. At the hardware level, since Ubuntu has a far, far smaller footprint, it seems to use far less battery – which is really good for laptop use. To give you a reference: Ubuntu, with the Unity window manager open and just idling, uses about 700MB of RAM. Windows, just sitting idle, uses about 2,000MB (2GB).

Despite it being a great platform, it’s not all great. First is MS Office – and OneNote, specifically. For Office, you can use LibreOffice, which comes pre-installed. This can open and save MS Office-formatted files… but not perfectly. It has corrupted both Word and Excel files by messing up the formatting just a little bit. That’s not cool. And OneNote – because there is no native app, you have to use it in the browser. It gets the job done, but it’s not a great experience.

With that said, there is one big benefit: I have found that Windows running in VirtualBox on Linux is far more seamless than on any other platform, and it’s definitely better than running Ubuntu in a virtual machine. The window manager (Unity) in Ubuntu uses hardware acceleration, so when you run Ubuntu in a VM, you see lag and slow UI performance. Meaning: Ubuntu hosting Windows is the best computer-in-a-computer environment I’ve run across. However, the battery drains 2x to 3x faster when running Windows in a VM, so it’s not a mobile/portable solution – you need to be near a plug. So – Ubuntu too is not an “everything I want” environment.

Using MacOS full-time:
I initially exposed myself to MacOS when I started looking at Xamarin a few years ago. I was pleasantly surprised, mainly by two things. First, I didn’t realize that pretty much every product that exists for Windows also has a native release for MacOS. And second, how pretty and seamless the user experience is. Using MacOS, I can use Office for Mac (including a native OneNote app), and the command-line IS a “bash” shell, with almost completely the same experience as Linux. There is even a package manager called “brew”, where you can install apps with something like “brew install app-name”. What’s not to love?!

Well, there are a few things to not-love. First, virtualization – in every technology I tried – is… kind of bad. Using VirtualBox, for example, no matter whether you are hosting Linux or Windows, those guest machines are laggy, choppy, and noticeably slow. I have the latest MacBook Pro, too – with an i7 CPU – so it’s not the hardware, either! Also, when you hook up a couple of extra monitors, the whole UI slows down significantly, whereas Windows didn’t. So, it works EXTREMELY well on a laptop with one screen, but when you start pushing the hardware, you quickly see the cracks.

Next is the keyboard. Mac has its own ecosystem and has been living a parallel life next to Windows for decades, so common keyboard layouts and shortcuts are different – and it drives me nuts. For example, instead of CTRL+C and CTRL+V for copy and paste, it’s Command+C and Command+V.


If you are using a “regular” keyboard or connecting remotely, this translates to WindowsKey+C and WindowsKey+V – and Ctrl+V brings you to the end of the page, for some reason. Imagine trying to paste a link into a Facebook post: you’re scrolled halfway down, you press CTRL+V, and it jumps you to the bottom of the page. You scroll back up, find the post again, and then discover you didn’t even have the link copied, because copy is “the other” keyboard shortcut.

I realize this may sound nit-picky, but it’s not. As a developer, there are no “Home”, “End”, or even “Delete” keys, for example. To do those things you have to press Fn+Backspace for Delete, Fn+LeftArrow for Home, and Fn+RightArrow for End. Being productive while coding is difficult, especially if you’ve used a non-Mac keyboard layout and shortcuts for decades. So – MacOS too is not an “everything I want” environment.

Which OS is best?
In short, none… or all. There is no clear winner. In fact, the reason for this post is that I realized these three operating systems are extremely similar and are moving closer together every day. If you run VMs, use Windows or Ubuntu. If you want very snappy performance and a beautiful UI, but a wonky keyboard, use MacOS. Want a great command-line interface and a robust package manager? Use MacOS or Linux, but not Windows. Need MS Office, and especially OneNote? Use Windows or MacOS, but not Ubuntu.

Bottom line:
In my little pseudo-experiment, I’ve realized I’m not entirely happy with any of these OS’s, and none of them stands out as being particularly great or particularly bad. They are all about 85% the same, and the 15% where each one differs is mostly a strength – one that the others lack.

My goal was to find “the ultimate” setup where I could live out of one laptop and have ALL of the things I want. My conclusion? The technology isn’t quite there yet. You just have to pick one and be OK with not being satisfied. MacOS would be my choice, except the keyboard and the virtualization performance are showstoppers. Ubuntu would be my choice, but the lack of native MS Excel and OneNote is a showstopper. Windows 10 would be my choice, but the electronic stalking and terrible command-line are showstoppers.

You tell me: what am I missing? What is the “ultimate” computing environment in present day?


Using Django + Git + VSO

OK, so I won’t be doing blog posts as often as I thought – I’ve had a lot of other, non-technical things going on. However, I have dug back into Django again – and I’ve fallen in love with it all over again!

Although it does seem like it will ultimately be pretty unusable on Windows (due to a list of issues with working with a real RDBMS), it is pretty good on Linux and/or MacOS. I’m working on a project, mostly on Ubuntu, but it runs equally well on MacOS – and I came up with some noteworthy things.

First – since I first dug into this last year, this framework continues to amaze me. The idea is that you can define your database tables in a few dozen lines of code, and Django will automatically give you Create/Read/Update/Delete (CRUD) screens, with validation and dropdowns for the related tables – stuff that is tedious and takes time to write. That is referred to as Django Admin. PLUS, the Django REST Framework, with just a few lines of code per table, will expose REST endpoints for each table AND give you a website where you can learn and play with the REST API.

In short, I’m nothing short of blown away by this technology. I’ve never seen anything like it. Needless to say, I’m actively working on a project that uses it. I’m most excited that this will save me tens of thousands of lines of code I’d otherwise have to write in .NET – if I were to use that technology instead!

Wait a second – if this isn’t a great idea on Windows, how am I writing code? What am I using for an editor? I’ve said it once and I’ll say it again: Visual Studio Code, the standalone code editor, is an amazing tool. Not only is it an amazing editor, it runs on Debian- and Fedora-based Linux distributions (*.deb or *.rpm), and it runs nicely on MacOS too.

So, for Django, I installed VS Code on Ubuntu, installed the Python extensions to get color-coding and IntelliSense, and it’s been a dream to work in! Not only is it a great editor with all of the “comforts” I normally need from Visual Studio when working with .NET code, it also has Git functionality seamlessly built right in!


Meanwhile, for Django stuff, I have a split console using “tmux”.

On the bottom is “manage.py runserver”, which runs the web server, automagically restarts whenever it detects a code change, and gives me compile errors upon changes too. On the top is a console in that Python environment where I can make and apply database migrations. These are just like Entity Framework migrations, except you do them from the command-line:


So – this ends up being a nice little development setup. It’s very quick, and since Django does SOOOOO much for you, you can stand up an application in a very short amount of time.

Since this is a proprietary/for-profit app, I’m using VSO for it instead of GitHub. So, I created a new Git repository there. A couple of things took a little while to figure out. One: if you have two-factor authentication turned on for your Microsoft account (which you definitely should), you can’t use those credentials from the Git command-line. Luckily, Git providers like GitHub and VSO give you a way to supply alternate credentials. In the case of VSO, it’s here:


But you’ll see a message there that this is highly discouraged. Instead, you should create “personal access tokens”. You can do that from here:


How do you put all of this together? Well, to get started with Git, check out this blog post. The only difference is that when you do a “git push” or “git pull”, you will be prompted for credentials. The way a “personal access token” works is that you can put anything in the username field (or leave it blank), and you use the access token as the password. OK, that one is easy.
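As an aside, if you’d rather not be prompted at all, one common (but less secure) pattern is to embed the token in the remote URL itself – just be aware it then sits in plain text in .git/config. A sketch, using a throwaway repo and an obviously fake account name and token:

```shell
# Work in a temporary directory so nothing here touches a real project
cd "$(mktemp -d)"
git init -q demo && cd demo

# "anything" can be any username; "abc123" stands in for a real personal access token,
# and the host/project path below is a made-up example
git remote add origin "https://anything:abc123@example.visualstudio.com/DefaultCollection/_git/MyProject"

# Confirm what git will use for pushes and pulls
git remote get-url origin
```

With a real token in place, “git push” and “git pull” will authenticate without prompting.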

However, the next thing I found is that 1) I needed to register with Git and tell it who I was, and 2) it prompted me for credentials every single time, which got annoying. So, to register your identity and cache your credentials, do something like this from the command-line:

$ git config --global user.name "John Doe"

$ git config --global user.email "johndoe@example.com"

and then to enable a credential cache, do something like this:

$ git config --global credential.helper cache

$ git config --global credential.helper 'cache --timeout=3600'

Where that timeout is in seconds (the second command overrides the first, replacing the default cache timeout with an explicit one hour).
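Putting the identity and caching pieces together (using a throwaway HOME directory so this demo doesn’t touch your real ~/.gitconfig; the name and email are placeholders):

```shell
# Isolate the demo from your real global git config
export HOME="$(mktemp -d)"

# Register who you are, and cache credentials for an hour
git config --global user.name "John Doe"
git config --global user.email "johndoe@example.com"
git config --global credential.helper 'cache --timeout=3600'

# Read the values back to confirm they were stored
git config --global user.name            # prints: John Doe
git config --global credential.helper    # prints: cache --timeout=3600
```

On your real machine you would skip the HOME line and just run the three config commands once.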

Bottom Line:
Once all of that is in place, development is easy-breezy! I save changes in VS Code, the Python web server restarts, and I can see my changes in the website. When I’m done working, I use VS Code to “Sync”, which commits my changes, pushes them to VSO, then does a pull from VSO – making my local machine N’Sync with VSO.

I know there are selling points to other technologies, but I will say, I’m more than pleased with everything I’ve done in Django so far. Aside from the Windows RDBMS issues, I haven’t run across anything else that has really slowed me down! So, if you are wondering how to work with Django on Linux or MacOS, using Git as your source control provider – hopefully this helps!


Incredibly interesting blog posts from a .NET duh-veloper