The dilution of the operating system

As you might know, a couple of months ago I resigned from a position I'd held for exactly 10 years, exclusively supporting Microsoft .NET development. Without "having" to stay on Windows anymore in my new role, I've been on a walkabout through other operating systems. Specifically, I've been living almost exclusively off of Ubuntu Linux and MacOS for the past couple of months. What have I learned?



First, I learned that if you use Windows, virtually every "regular" app you would use is available in some form on these other operating systems, so you could easily switch to either of them. Take even something like the Kindle app: it exists for Windows and Mac, but what about Linux? Well, there is https://read.amazon.com and it's the full Amazon Kindle experience, right in the browser. Similarly for OneNote, there is a native app for Windows and MacOS, but on Linux you can just use the browser app via http://OneDrive.com to open your OneNote notebooks. The web UX isn't quite as nice, but it's totally doable. Even apps like Skype are available natively on Ubuntu and MacOS now too.

Second, everything is coming to every operating system. The Ubuntu Linux command-line has come to Windows 10, PowerShell is now open source and is available on Linux and MacOS, etc. If there is a useful app, it seems it's just a matter of time before it's available on "the other" platforms.

Using Windows full-time:
Why even wander away from Windows in the first place? Well, first for me are the outrageous security and privacy issues in Windows 10. "When something is free, YOU are the product", as the saying goes. We now know that Windows regularly sends data back to Microsoft, and the idea of it tracking everything you do is just… creepy, unnecessary, and when that data is inevitably hacked, it can only be bad. It's somehow become "accepted" that it's OK for someone to wander around your house, observe everything you do, and document it too – which is just crazy. If anything, isn't your "personal computer" even more private than your home, nowadays?

But even aside from that – which, I acknowledge, some people don't care about – Windows is also frustrating to use, compared to Linux. When you are working in the command-line, Windows hasn't really changed much since the 1980s. It's a woefully lacking environment. Then there are the Unix-y things that you sometimes want or need to do, which Windows just can't do. For example, I changed how my DHCP/DNS works at my house and needed to track down which remote machines were using which IP addresses. Nmap works easily and quickly on Linux. So – Windows is not an "everything I want" environment.

Using Ubuntu full-time:
If you are going to use Linux, and want things to "just work", then Ubuntu is the only practical answer. This is because if a vendor takes the time to get their product working on Linux, they address Ubuntu first, because it's the most popular. I've been pleasantly surprised by my experience. Even advanced things like getting a fingerprint reader to work, and setting up simple whole-disk encryption (similar to BitLocker), are easy. At the hardware level, since Ubuntu has a far, far smaller footprint, it seems to use far less battery – which is really good for laptop use. To give you a reference, Ubuntu with the Unity window manager, just sitting idle, uses about 700 MB of RAM. Windows, just sitting idle, uses about 2,000 MB (2 GB).

Despite it being a great platform, it's not all great. First is MS Office, and OneNote specifically. For Office, you can use LibreOffice, which comes pre-installed. It can open and save MS Office-formatted files… but not perfectly. It has corrupted both Word and Excel files for me by messing up the formatting just a little bit. That's not cool. And because there is no native OneNote app, you have to use it in the browser – it gets the job done, but it's not a great experience.


With that said, there is one big benefit – I have found that Windows running in VirtualBox on Linux is more seamless than on any other platform, and it's definitely better than running Ubuntu in a virtual machine. The window manager (Unity) in Ubuntu uses hardware acceleration, so when you run it in a VM, you see lag and slow UI performance. In other words, Ubuntu hosting Windows is the best computer-in-a-computer setup I've run across. However, the battery drains 2x to 3x faster when running Windows in a VM, so it's not a mobile/portable solution – you need to be near a plug. So – Ubuntu too is not an "everything I want" environment.

Using MacOS full-time:
I initially exposed myself to MacOS when I started looking at Xamarin a few years ago. I was really pleasantly surprised, by two things mainly. First, I hadn't realized that pretty much every product that exists for Windows also has a native release for MacOS. And second, how pretty and seamless the user experience is. So, using MacOS, I can use Office for Mac, including a native OneNote app, and the command-line IS a "bash" shell, which gives nearly the same experience as Linux. There is even a package manager called "brew" where you can install apps – with something like "brew install app-name". What's not to love?!

Well, there are a few things to not love. First, virtualization, with every technology I tried, is… kind of bad. Using VirtualBox for example, no matter whether you are hosting Linux or Windows, those guest machines are laggy, choppy, and noticeably slow. I have the latest MacBook Pro too – with an i7 CPU – so it's not the hardware, either! Also, when you hook up a couple of extra monitors, the whole UI slows down significantly, whereas Windows doesn't. So, it works EXTREMELY well on a laptop with one screen, but when you start pushing the hardware, you quickly see the cracks.

Next is the keyboard. Mac has its own ecosystem and has been living a parallel life next to Windows for decades. So, common keyboard layouts and keyboard shortcuts are different, and it drives me nuts. For example, instead of CTRL+C and CTRL+V for copy and paste, it's Command+C and Command+V.

image

If you are using a "regular" keyboard or connecting remotely, this translates to WindowsKey+C and WindowsKey+V – and Ctrl+V brings you to the end of the page for some reason. Imagine trying to paste a link into a Facebook post: you are scrolled halfway down, you hit CTRL+V, and it jumps you to the bottom of the scroll. You have to scroll back up and find that post – only to discover you didn't even have the link copied, because copy is "the other" keyboard shortcut.

I realize this may sound nit-picky, but it's not. As a developer, there are no "Home" or "End" keys, for example, or even a "Delete" key. To do those things you have to use Fn+Backspace for Delete, Fn+LeftArrow for Home, and Fn+RightArrow for End. Being productive while coding is difficult, especially if you've used a non-Mac keyboard layout and shortcuts for decades. So – MacOS too is not an "everything I want" environment.

Which OS is best?
In short, none… or all. There is no clear winner. In fact, the reason for this post is that I realized these three operating systems are extremely similar and are moving closer together every day. If you run VMs – use Windows or Ubuntu. If you want very snappy performance and a beautiful UI, but a wonky keyboard – use MacOS. Want a great command-line interface and a robust package manager? Use MacOS or Linux, but not Windows. Need MS Office and especially OneNote? Use Windows or MacOS, but not Ubuntu.

Bottom line:
In my little pseudo-experiment, I've realized I'm not entirely happy with any of these OS's, and none of them stands out as being particularly great nor particularly bad. They are all like 85% the same, and the 15% where they differ is mostly each one doing something well that one of the others lacks.

My goal was to find "the ultimate" setup where I could live out of one laptop and have ALL of the things I want. My conclusion? The technology isn't quite there yet. You just have to pick one and be OK with not being satisfied. MacOS would be my choice, except the keyboard and the performance are showstoppers. Ubuntu would be my choice, but the lack of native MS Excel and OneNote is a showstopper. Windows 10 would be my choice, but the electronic stalking and terrible command-line are showstoppers.

You tell me: what am I missing? What is the “ultimate” computing environment in present day?


Using Django + Git + VSO

OK, so I won't be doing blog posts as often as I thought – I've had a lot of other non-technical things going on. However, I have dug back into Django again – and I've fallen in love with it all over again!

http://www.fullstackpython.com/img/django-logo-positive.png


Although it does seem like it will ultimately be pretty unusable on Windows (due to a list of issues with working with a real RDBMS), it is pretty good on Linux and/or MacOS. I'm working on a project, mostly on Ubuntu, but it runs equally well on MacOS – and I came across some noteworthy things.

Django
First, since I originally dug into this last year – this framework continues to amaze me. The idea is that you can basically define your database tables in a few dozen lines of code, and Django will automatically give you Create/Read/Update/Delete (CRUD) screens, with validation and dropdowns for the related tables – stuff that is tedious and takes time to write. That is referred to as the Django Admin. PLUS, the Django REST Framework, with just a few lines of code per table, will expose REST endpoints for each table AND give you a website where you can learn and play with the REST API.
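
To make that concrete, here is a minimal sketch of what I mean – the model and field names are made up for illustration, not from my actual project. A couple of model classes, plus one admin registration line per table, is all it takes to get the full CRUD screens with validation and dropdowns for related tables:

    # models.py - each class becomes a database table
    from django.db import models

    class Customer(models.Model):
        name = models.CharField(max_length=100)
        email = models.EmailField(blank=True)

        def __str__(self):
            return self.name

    class Order(models.Model):
        customer = models.ForeignKey(Customer, on_delete=models.CASCADE)  # shows up as a dropdown in the admin
        description = models.CharField(max_length=200)
        amount = models.DecimalField(max_digits=10, decimal_places=2)
        created = models.DateTimeField(auto_now_add=True)

    # admin.py - one line per table turns on the admin CRUD screens
    from django.contrib import admin
    from .models import Customer, Order

    admin.site.register(Customer)
    admin.site.register(Order)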

In short, I'm nothing short of blown away by this technology. I've never seen anything like it. Needless to say, I'm actively working on a project that uses it. I'm most excited that this will save me tens of thousands of lines of code I'd otherwise have to write in .NET – if I were to use that technology instead!
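
And here is a similarly hedged sketch of the Django REST Framework side, using the same made-up Customer model from above. A serializer and a viewset per table, plus a router, is what exposes the REST endpoints and the browsable API website:

    # serializers.py - describes how a table is represented over REST
    from rest_framework import serializers
    from .models import Customer

    class CustomerSerializer(serializers.ModelSerializer):
        class Meta:
            model = Customer
            fields = ('id', 'name', 'email')

    # views.py - a ModelViewSet gives you list/create/retrieve/update/delete
    from rest_framework import viewsets
    from .models import Customer
    from .serializers import CustomerSerializer

    class CustomerViewSet(viewsets.ModelViewSet):
        queryset = Customer.objects.all()
        serializer_class = CustomerSerializer

    # urls.py - the router generates the URL patterns for each endpoint
    from rest_framework import routers
    from .views import CustomerViewSet

    router = routers.DefaultRouter()
    router.register(r'customers', CustomerViewSet)
    urlpatterns = router.urls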

An IDE?
Wait a second – if this isn't a great idea on Windows, how am I writing code? What am I using for an editor? I've said it once and I'll say it again: Visual Studio Code, the standalone code editor, is an amazing tool. Not only is it an amazing editor, but it runs on Debian- or Fedora-based Linux distributions (*.deb or *.rpm), and it runs nicely on MacOS too.

So, for Django, I installed VS Code on Ubuntu, installed the Python extensions to give me color-coding and IntelliSense, and it's been a dream to work in! Not only is it a great editor with all of the "comforts" I normally need from Visual Studio when working with .NET code, it also has Git functionality seamlessly built right in!

Screenshot from 2016-08-07 15-02-27

Meanwhile, for Django stuff, I have a split console using “tmux”.

On the bottom is "manage.py runserver", which runs the web server and automagically restarts whenever it detects a code change – and gives me compile errors upon changes too. On the top is a console in that Python environment where I can make and apply database migrations. These are just like Entity Framework migrations, except you do them from the command-line:

image
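
For reference, the migration commands in question are the standard Django ones, something like:

$ python manage.py makemigrations

$ python manage.py migrate

The first generates a new migration from whatever model changes you've made, and the second applies any pending migrations to the database.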

So – this ends up being a nice little development setup. It’s very quick, and since Django does SOOOOO much for you, you can stand up an application in a very short amount of time.

Git:
Since this is a proprietary/for-profit app, I'm using VSO (Visual Studio Online) for it instead of GitHub. So, I created a new Git repository. There were a couple of things which took a little bit to figure out. One is that if you have two-factor authentication turned on for your Microsoft account (which you definitely should), you can't use those credentials from the Git command-line. Luckily, Git providers like GitHub and VSO give you a way to supply alternate credentials. In the case of VSO, it's here:

https://[YOUR-ID].visualstudio.com/_details/security/altcreds

But you'll see a message on there that this is highly discouraged. Instead, you should create "personal access tokens". You can do that from here:

https://[YOUR-ID].visualstudio.com/_details/security/tokens

How do you put all of this together? Well, to get started with Git – check out this blog post. The only thing that is different is that when you go to do a "git push" or "git pull", you will be prompted for credentials. The way the "personal access token" works is that you can put anything in the username field (or leave it blank) and you use the access token for the password. OK, that one is easy.

However, the next things I found were that 1) I needed to tell Git who I was, and 2) it prompted me for credentials every-single-time, which got annoying. So, to register your identity with Git, do something like this from the command-line:

$ git config --global user.name "John Doe"

$ git config --global user.email "jdoe@example.com"

and then to enable a credential cache, do something like this:

$ git config --global credential.helper cache

$ git config --global credential.helper 'cache --timeout=3600'

Where that timeout is in seconds.
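
One related note: if you'd rather not re-enter the token after the cache times out, Git also has a "store" credential helper, which saves the credentials in a plain-text file under your home directory – less secure, but convenient:

$ git config --global credential.helper store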

Bottom Line:
Once all of that is in place, development was easy-breezy! I save changes in VS Code, the Python web server restarts, and I can see my changes in the website. When I'm done working, I use VS Code to "Sync", which commits my changes, pushes them to VSO, then does a pull from VSO – making it so my local machine is N'Sync with VSO.

I know there are selling-points to other technologies, but I will say, I’m more-than-pleased with everything I’ve done in Django so far. Aside from the Windows RDBMS issues, I haven’t run across anything else really which has slowed me down! So, if you are wondering how to work with Django on Linux or MacOS, and use Git as a source control provider, hopefully this helps!


Showing file operation progress in Linux

If you wanted to burn an ISO to a thumb drive in Linux, you’d typically use “dd”. That might look something like this:

sudo dd if=~/Downloads/kali-linux-2016.1-amd64.iso of=/dev/sdb bs=512k

This copies the input file (the .iso) to the output "file" of the /dev/sdb device, which is my thumb drive. The problem is, when you run this, it just sits there for two whole minutes or so, and it doesn't show any status.


Well, there is another command called "pv" which lets you monitor the progress of data through a pipe. Now, "dd" is already self-contained, so how do you inject this monitor in between? Most programs allow you to feed input to them from the console, or "pipe" it to the program. So, you can kick off "pv", have it open the file and feed it through the pipe, and show progress, while the secondary program ("dd", in this case) is running.

Using "pv" to show the status of the operation, the same command above now looks like this:

sudo pv -tpreb ~/Downloads/kali-linux-2016.1-amd64.iso | sudo dd of=/dev/sdb bs=512k

So "pv" opens the file (the -tpreb flags turn on the timer, progress bar, transfer rate, ETA, and byte counter), and the contents of that file are piped to "dd". Now, when it runs, I see progress like this:

2.74GiB 0:00:14 [18.2MiB/s] [=====================>                            ] 44% ETA 0:00:17

Not only did this solve this immediate “problem”, this has some other implications. Any time you are moving, copying, or processing a file in any way, you could potentially use “pv” to show the status. You can even pipe this to “dialog” which will show a text dialog box with a status bar too. Very cool!
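
For example, here's roughly what that "dialog" variation could look like – consider this an untested sketch: "pv -n" emits plain percentage numbers, which is what "dialog --gauge" expects on its input:

    (sudo pv -n ~/Downloads/kali-linux-2016.1-amd64.iso | sudo dd of=/dev/sdb bs=512k) 2>&1 | dialog --gauge "Writing image to /dev/sdb..." 10 70 0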

So, since Future Robert will likely need this in the future, I thought I’d write it down here. Got any other useful command line tools like this? Leave a comment below…


A few updates…

Hey, it's been a long time! Well, after 10 years (to the day) of being a specialist with Microsoft .NET technology, I decided it was time to move on. So, I accepted a new position, being more of a generalist, and more on the infrastructure and DevOps side of things. In my new role, I'm digging into Docker, Puppet, .NET Core, and Infrastructure as Code (IaC) – which includes pretty cool technologies for automating provisioning and integrating with cloud providers like Amazon and Azure. So that is keeping me busy, and I'm having a lot of fun trying to automate things along the way too.

As for the state of this blog, I expect things to more or less stay the same. Despite having been a .NET specialist by day, my blog posts have covered a wide array of technologies. I suspect that since I'm not doing software development during the day anymore, I will likely be doing much more of it in my free time. So ironically, this blog may end up covering a lot more development items now, instead of infrastructure! I guess I need the 60/40 balance – whatever I'm not doing during the day, I do in my free time.

Immediately in the future, I have a few web/mobile projects that I’m motivated to work on – likely using a Backend as a Service for the backend. For the front-ends, those will likely be Angular and Bootstrap, and for the mobile side, I want to give Xamarin a go, now that it’s free. Specifically, I want to see how far I can get with Xamarin.Forms to build out a companion app and release that on each app store.

As far as infrastructure stuff, that will be much more difficult to blog about. At work, we're using all expensive, paid products, including Red Hat – so I can't really do any of those things in my homelab, and therefore I can't easily blog about them.

Lastly, I have a couple of quick updates:

Update the 1st: I have since updated my Linux “update.sh” script, which goes and updates/upgrades all of the packages installed on a Debian-based Linux distribution. It now detects if a reboot is required and only prompts you if it is required. You can see the blog post here and the updated script here.

Update the 2nd: I have since updated my Conky desktop to be a little more refined, and to show a bit more detail. You can see that blog post here and the updated .conkyrc file here.

Conky Screenshot

 

That’s about it for now. Now that I’m somewhat settled into the new position, I’d like to get back into a semi-regular routine with blogging. Like before, I don’t like to blog just to put something out, instead I like to share something when it’s interesting. And I should have some interesting things coming down the pipe! The RobbyTron2000 is back online!


What I learned from bringing an AngularJS app to production

A couple of months ago, I was innocently asked by co-worker, author, F# MVP and all-around nice guy Jamie Dixon if I wanted to participate in a work-based hackathon for the weekend. I naively said "sure!" Well, we ended up winning 1st place – and part of winning first place was that we then had to bring that idea to production. We just did one of our final drops to production, for a bigger roll-out later in the month – and I thought I'd write down what I learned.

image

Hackathon vs Rapid Application Development:
The first thing I realized, which causes some problems, is that the motivations and incentives are different when you are just trying to "crank out an app" in a very short period of time for a hackathon, versus doing legitimate development for a real app.

Now, you might say: “what do you mean by that, isn’t all of your code production quality? You don’t have two ways of writing an app, do you?”

Let me clarify. I definitely do try to always write production-quality code. However, in a hackathon, where you literally have hours or minutes left, there will be many more shortcuts taken than would be tolerable in a production environment. But wait, that wasn't even our problem. Jamie and I both believe in writing good-quality code and trying to keep the codebase as clean as we can, regardless. The problem was that in a hackathon, you want the most bang for your buck.

So, we used some previously unproven technologies to try to be as innovative as possible. This is great for a hackathon; and horrible for a production app!

This ended up being one of the main problems. I'd done Hello, World! functionality with these technologies before, but I hadn't taken an app to production with them – and certainly not with all of them together.

Put another way, if you are starting a new app, you want to embrace some innovation and integrate some new technologies, but if the WHOLE APP is nothing but cutting-edge stuff, you are setting yourself up for a world of hurt! This is bad because newer technologies aren't going to have documentation that is as good, and there isn't going to be nearly as much on StackOverflow.

This is the crux of the problem. You take one approach for a hackathon, and you take a different approach for a production app. Because we wrote this as a hackathon app, it really killed us later on!

So – if you are coding a hackathon, then use all cutting-edge stuff if you want! But if you are writing a brand new app, use mostly-known technologies and leverage one, maybe two new things, to keep your sanity. In our case, we shot ourselves in the foot because we used all innovative things (to us), but then had to suffer the learning curve of quickly bringing them to production!

The Architecture of the app:
This was an app around people self-reporting campaign contributions. This means that we needed:

  • Authentication/authorization
  • Web front-end
  • A database
  • A rules engine

So, what we used was:

  • Authentication – used SiteMinder, because OAuth would’ve taken too much work
  • Authorization – used Web API custom attributes. There will be a future blog post on that – lots of lessons learned.
  • Look/feel – Bootstrap, because that's my go-to technology for that
  • UI Data binding/UI logic – AngularJS v1.x
  • REST API – ASP.NET Web API
  • Database access – Code-First Entity Framework with the Repository Pattern
  • Database – SQL Server (code-first, push changes when the app runs, and re-seed known values)
  • Rules engine – very small amount of curt, complexticated F# code. I don’t really understand it, so you’d need to talk to Jamie about that one – or maybe he’ll be doing a blog post on it?

One of the biggest unknowns was the viability of using AngularJS for the tedium of a real production app. It's great for simple stuff, but what about when things start getting nasty and you need all sorts of special cases, like you do in real code? How well does it scale?

In this case, we did build this as an AngularJS single page application (SPA), which uses AngularJS client-side routing – which has some pros and cons.

As it turned out, the rules engine, repositories, server-side code and unit tests were all pretty well taken care of in that first hackathon weekend. The remaining 2 months were spent almost exclusively working in HTML and JavaScript. That means the app is mostly JavaScript. Ugh.

But what’s wrong with JavaScript?
I know, I know, JavaScript is the darling of the technology world at the moment. The reason for that is because it’s a relatively simple technology, which works on everything, and can generally get the job done.

However, it’s not as simple as that. A big benefit of object oriented programming is that you can easily manage and abstract dependencies – via interfaces. A big benefit of functional programming is you can do the same, but with function pointers. With these professional-level languages, you can purposely create a well-managed codebase.

With JavaScript, I found that no matter how much I tried to organize my code into: namespaces, classes, or AngularJS controllers, directives, and services – I just ended up with a lot of disorganized JavaScript.

JavaScript is like earbud cables: it naturally wants to keep finding its way from order to disorder.

Worse is that although you can technically unit test JavaScript, in reality, that too is kind of a big mess. If you are coming from a world where your production code is separate from unit testing code, and where a nice, clean unit testing framework runs tests and gives you code coverage – the JavaScript equivalent is a far, far cry from that.

I didn’t do any JavaScript unit testing, but it seems like Jasmine is the de facto standard. If you want to get a taste for how that works, check out this YouTube playlist which seems to be quite good!

So, with an architecture like this, a big majority of the code is going to be JavaScript – which is both very difficult to effectively organize, and very difficult/tedious to test.

The good things:
Some things which I think work really well in this stack that we used:

  1. Entity Framework – we came up with some good, creative ways to deal with .Include(“..”) statements to make sure you’re bringing back the smallest graph of data possible, for each call.
  2. Repository Pattern – I still remain convinced this is the most-ideal way to abstract away Entity Framework. For God-knows-what reason, there is still no IDbContext or IDbSet, so you can't effectively mock away your database. It's crazy, but to get around that, having a simple repository interface makes everything downstream completely testable!
  3. Web API – the patterns for Web API are pretty good – and the built-in support for giving back standard error codes was helpful too. We came up with some great, small, succinct code for our REST API.
  4. Bootstrap + Font Awesome – both of these made it very easy to lay out professional-looking, consistent pages, with appropriate icons. No complaints, here!
  5. AngularJS – this framework does the routing of the requests and the data-binding. On both fronts, it does this well, generally. There are some notable exceptions though, listed below.

Overall though, this collection of technologies worked very well together and development went VERY fast, despite the learning curve.

The bad things:
If the app was built pretty quickly, where did all of the time go? Well, it was troubleshooting problems. Looking back, here are the biggest problems I’ve had:

  1. AngularJS – maintaining state (at the site level) – although you can use $rootScope to keep track of the current user, we never came up with good techniques for figuring out when the user is logged-in, when we got their user info back from the REST call, and detecting if there was any change in that. In other words, sometimes the site is awkward when it first loads and it takes 4 seconds for the user information to populate.
  2. AngularJS – maintaining state (at the page level) – this isn’t so much difficult, but it just got to be unwieldy. Since your $scope object for that page is available to everything, it basically acts like a global variable. Global variables are inherently difficult to keep contained, and other actors can unexpectedly change a value.
  3. AngularJS – maintaining state (using a wizard, within a wizard) – this is the hell I was in for the last two weeks. There was the $scope for the page, the $scope for a "wizard" I made with its own controller, and then one step of that wizard could pop another wizard, which had its own $scope. Sometimes I needed to notify the parent window from the inner wizard; sometimes I needed to trigger something in the inner wizard from the main window. This really got to be unwieldy. I spent a lot of time in the JavaScript debugger figuring out which scope could see which data from which other scope.
  4. AngularJS – dealing with a DatePicker ended up being a 2-day ordeal. Despite there being AngularUI and countless implementations, I didn't find any that worked correctly AND worked on all browsers. I ended up using a raw jQuery UI date picker and $watch-ing some events. It does technically work, but it's a mess in every place I need to have a date field.
  5. Performance when developing locally – I still have no idea why, but locally, all of my REST calls took just about 5 seconds. On the server, it would be sub-second. So, this made local development slow and annoying. The workstation I worked on was an i7 with 16GB of RAM and an SSD – it wasn’t a simple resource problem.

So although development did move quickly, there were some significantly frustrating problems that came about too.

 

Bottom line:
So now that we are in production, what is my take on AngularJS v1.x? Well, it’s a mixed-bag. For a simple app, man, this entire stack works really well together. However, as we got into more complexity, the JavaScript quickly started turning into a mess. You just can’t really organize that code very well for some reason.

The other thing is: should you use single page applications with AngularJS routing? Well, again, for a simple app – I think it's kind of ideal to do it that way, because it can be executed pretty nicely. However, for a medium to large app, I think the way to go would be to use something like ASP.NET MVC to provide the structure of the site and the initial HTML – and then use AngularJS on specific pages, to make each one of those views dynamic via data binding and REST services.

Having done web development for 20+ years, I will say that the current state of the art is the best it’s ever been. The developer can be so productive, with so little code, in comparison with yesteryear. So, although I found some limitations with AngularJS – I still think it’s a pretty great framework.


Using VMware vSphere or XenServer for a Hypervisor

As I've been digging into some of the newer infrastructure technologies lately, it dawned on me that I really knew nothing about VMware and XenServer. The primary reason for my VMware ignorance was that every time I went anywhere near it, there was always talk of how much money it costs. I'm surprised they don't charge money just for looking at their website! I think only the VMware "player" was free, at one point? Everything else has always been super-expensive, priced for the enterprise only.

image

And for XenServer, every time I looked at it, I'd be 15 minutes in and still couldn't figure out how to download it or how to install it. Luckily that has changed – more on that in a minute.

image

In the present day, both of these needed to be re-evaluated by me. VMware has a full-blown hypervisor available that is free to use – it's called VMware vSphere. And XenServer, although open source, is run by Citrix now, and they have a free standalone operating-system hypervisor too, VERY similar to vSphere – you can get that here.

These products are similar to Oracle VirtualBox or Microsoft Hyper-V, but both definitely seem more industrial-strength, in terms of how much control you have over the environment, and also because they are intended to be installed AS the operating system, not as add-on programs like Hyper-V and VirtualBox.

If you are not familiar, this whole concept is the idea of having software on your laptop, where you can run a “virtual” machine on your physical computer. So, a physical host machine (like, your laptop) will run a hypervisor, and then within that hypervisor may be several virtual machines. These could be virtual workstations, servers, and they can run Windows, Linux, etc. Thing is, those virtual machines don’t realize they are running in a virtualized environment. To them, they just think they are running on bare-metal! That means that you can install Windows, Windows Server, Linux, etc to host all sorts of workstation and server configurations.

What’s the difference between all of these?
As mentioned, it seems like as far as standalone hypervisors, VMWare, VirtualBox, XenServer, and Hyper-V are the major players. Each has some upsides and downsides. So, depending on what you need, one might be more appealing than the others. Here’s my take:

Microsoft Hyper-V (Windows-only):
Hyper-V can be set up in a few ways. First, if you have Windows 10 Pro or Enterprise, you can enable it in Control Panel -> Programs and Features, by using "Turn Windows features on or off". Next, on any regular installation of Windows Server, you can simply add the role. Generally though, except for development environments, virtualization should be the only thing that box does. So, Microsoft also has Microsoft Hyper-V Server. This is a super-scaled-back version of Windows Server which ONLY has Hyper-V enabled – and it only has a command-line to interact with the server, too. You can download that for free here.

The upside of Hyper-V is that it's pretty easy to use, it's basically free, and pretty much any version of Windows or Linux runs on it. The downsides are that it's not really an "enterprise class" hypervisor, simply because it doesn't easily support multitenancy, and it can be difficult to manage. There is no web interface, and it takes a Level 17 Sorcerer's magic to get your workstation permissions just right to be able to connect with the MMC plug-in. So, Hyper-V is great for local hosting of VMs on your workstation (if you are on Windows), and it's good for hosting VMs in small shops where you have perhaps 1 or 2 sysadmins.

Oracle VirtualBox:
VirtualBox is pretty cool, first because it runs on everything: Windows, MacOS X, and many/most distributions of Linux. It's even simpler than Hyper-V, in terms of how technical you can get with your hosting. That means it's pretty easy for anyone to use. VirtualBox, though, is not really intended (I don't think) to be an enterprise-class hypervisor. It too really shines for the local developer. You can bring up virtual servers on your laptop, or in very small shops you could probably use it to run your real servers – although it is a little tricky to get it to work without having to log in first and start it up manually!

VMware vSphere:
VMware is, by design, meant to be the enterprise-class answer for hypervisors. It runs as its own operating system, has rich, deep features, and also ties in with the suite of very expensive partner products from VMware.

Citrix XenServer:
I was really pleasantly surprised, here. You can download an ISO, burn it to a USB thumb drive or DVD, and put it in your server. Reboot and you are in the installer. Everything is pretty obvious and intuitive. In fact, the installer is eerily similar to vSphere – I wonder what the back-story is on that?! Anyhow, this is pretty much the open source answer to vSphere. It has many of the same capabilities, except it’s free to use.

How do you install vSphere?
The idea with vSphere is that it IS the operating system. You don't install Windows or Linux and install this on top – this is the operating system. So, you have to (*sigh*) register for an account, and then you can download a valid license key. It is free to use, but you still have to register it and activate your license. If you don't want the inevitable spam that is going to come from giving them your e-mail address, you might choose to use a disposable e-mail address instead.

Anyhow, it's basically a command-line installation, and once installed, you can't really do much from the console. Although it seems to be based on Linux, there is no shell prompt. Instead, you do everything from the web client or from the "client" application you download.

Using vSphere:
Since you don't really do much from the console, once installed, you need to log into the management website. What's the IP address of the server? Well, for those using DHCP, it will display it right on the console screen. In my case, it's 192.168.1.21 – so I open that in a browser and voila:

image

If I choose the 2nd link of “Open the VMware Host Client”, that is the main management website – which is quite functional!!

image

Here is what it looks like, creating a new virtual machine:

image

and then if I choose that other option to download the Windows client – apparently, they are phasing it out. There is a warning that new v6 features aren't available, but still, it's a pretty powerful interface with lots of features:

image

Now, one thing I noticed was that I could only be connected to one hypervisor at a time. Note that 192.168.1.21 is at the top of the tree, above. Well, back on the main page, there was also a link to VMware vCenter. That lets me manage several virtual servers from one interface. "Great, yeah, let's do that!" So I navigate to the download page, and it looks like it costs money?

image

Yep, OK, yeah, that’s the VMware that we all know. So the bottom line here is that vSphere is free (after you register the product and activate the license), but that’s it. Absolutely anything else, is going to cost you money. A lot of money!

How do you install XenServer?
Again, this is where I was pleasantly surprised. XenServer was supposed to be "the open source answer" to virtualization. You know how for every pay product, there is an open source answer? Well, it just seemed like the project was a mess for several years. I've gone back to it several times. I don't think I'm a stupid person, but I never got off the ground with it once. I could never find how to download it, or the download page I found didn't have the core product – it was confusing. Now, it's different!

Download the XenServer operating system install from here: http://xenserver.org/overview-xenserver-open-source-virtualization/download.html

When you boot off the media, the install is very similar to vSphere. In the end, it has a few more things you can do from the console, and there IS a shell prompt you can use – but similar to vSphere, the console isn't where you do the heavy lifting; that happens from a management client.

Using XenServer:
Similar to vSphere, the console of the machine tells you the machine name and IP address. So, navigate to that in a browser and here is what you see:

image

So, there is no web interface out of the box – there is a pay product called https://xen-orchestra.com/ for that, though. Meanwhile, on Windows, I downloaded the XenCenter app and sure enough, it's got lots of great features:

image

and similarly, to create a new VM:

image

and even better, note that I can manage multiple hypervisors from this tool too:

image

So – this is free, it has lots of great features – does it have a downside? Well, probably the biggest downside is that if you are primarily using Linux, there is no way to manage these servers via a GUI. You have to do it all via command line. Aside from that, if you are looking for simple, basic functionality to manage virtual machines – this is a great option.

Bottom line:
From what I can tell, for basic “single group of admins” (or, non-multitenant setups) type of on-prem virtual machine hosting, these pretty much seem to be the options: Hyper-V, VirtualBox, vSphere, and XenServer. So which should you use? Here’s my take:

  • If you just need developer-type servers and workstations, and you work on Linux or MacOS X – use VirtualBox
  • If you just need developer-type servers and workstations, and you work on Windows – use Hyper-V. If you don't have Windows 10 Pro or Enterprise, then use VirtualBox
  • If you have some semi-permanent “servers” you want set up, regardless of whether it’s Windows or Linux, I definitely now prefer XenServer. Hyper-V falls short because if you run Hyper-V server, the ONLY way to manage the virtual machines is via PowerShell or the MMC console, and even then, you only have basic controls. XenServer has a very powerful Windows client, and for everything else, there is always the command-line.

VMware vSphere, to me, is still only for enterprise use, because it's mired in licenses and VERY high costs. That's a lot of baggage that big companies like, but for home-labbing or even a small to medium size business, who has the time/money for that?


Using Vagrant to create a reproducible, multi-machine, virtualized dev environment (IaC)

This is another piece of very cool technology that I ran across recently. Imagine that for your software development project, you want to bring up virtual machines on your workstation. For example, imagine you might use:

  • Name=webserver – 2GB of RAM, install Ubuntu Server, Apache, and PHP
  • Name=databaseserver – 4GB of RAM, install Ubuntu Server, Apache and PHP (for PHPMySQL), and MySQL
  • Name=batchserver – 2GB of RAM, install Ubuntu Server

Don’t get bogged down in the operating system – these could be Windows machines, and you can be using any kind of hypervisor, locally.

Now, if you have a few people on your team, this can be time-consuming and tedious. Worse, chances are each of your installs will be slightly different. Even worse, what if some people are using Macs or Linux machines with VirtualBox, and you have a developer using Windows 10 and Hyper-V? Because nowadays, since pretty much every technology runs on every OS, the operating system is starting to simply become a personal preference, right?

Each workstation, with those virtual machines, is going to be very different. What if there was a better way?

Enter Vagrant (https://www.vagrantup.com/):
This is a product that integrates with many virtualization and automation technologies, including VirtualBox, VMware, Hyper-V, Puppet/Chef/Ansible, AWS, etc. You basically define a configuration file of what you want for virtual machines, and Vagrant goes and builds them – consistently, every time.

image

To me, the really compelling part is that you literally have one configuration file which describes everything about your machines, including scripts you want to run after they are provisioned (e.g. patching the system, installing software, configuring software, etc). This includes CPU, RAM, disk, and network settings – everything. Then, the fact that this equally supports VirtualBox, Hyper-V, VMware, etc. – is quite amazing.

This means that any developer running on MacOS X, Linux, or Windows, can bring down the file, and build out the virtual machines, using whichever hypervisor they want, and the machines will be identical for each developer!

Certainly, this is somewhat of a niche solution – it is for development areas where the developers bring up servers as VMs on their workstations. However, Vagrant could be used in all sorts of virtualization automation scenarios – and it works with popular technologies like Puppet, Ansible, and Chef too.

The Source:
This concept is called Infrastructure as Code (IaC), where you store what infrastructure you need along with your source code. You literally check it into source control; it's a tiny file. So, I put my example up on GitHub, here:

https://github.com/RobSeder/vagrant-demo

In this scenario, you define each server like this:

    {
        :name => "webserver",
        :eth1 => "192.168.100.101",
        :mem => "2048",
        :cpu => "2",
        :postinstallscript => "webserver_post.sh"
    }

In fact, the whole file is short enough that here is the entire configuration:

image

This is all using Ruby syntax. The "boxes" value is an array that stores the details of each server. In this case, I'm defining 3 servers I want to set up. On line 31 is the operating system I want to install. I just chose this for all of them, for simplicity's sake. This is basically Ubuntu Server, and there is an image available for it for both VirtualBox and Hyper-V.

Then, on line 36, we loop through each box in the boxes array and process each one. On line 43, for example, are the VirtualBox-specific settings. On line 48 are the Hyper-V-specific settings. So, if you download Vagrant (from www.vagrantup.com) on MacOS X, Linux, Windows, or whatever… and if you have either VirtualBox or Hyper-V installed – do the following on your workstation:

  1. Create a directory somewhere called vagrant-demo
  2. Execute: git clone https://github.com/RobSeder/vagrant-demo
  3. Navigate into this inner vagrant-demo folder (still from the command-line)
  4. Type, depending on what you have installed on the local machine:
      vagrant up --provider=virtualbox
    or
      vagrant up --provider=hyperv

And you should see it build out these boxes, exactly the same as I have on my machines here!
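
Once the boxes are up, a few other standard Vagrant commands come in handy:

  • vagrant status – show the state of each defined box
  • vagrant ssh webserver – open an SSH session into one of the boxes, by name
  • vagrant halt – shut them all down
  • vagrant destroy -f – delete them entirely (a later "vagrant up" rebuilds them from scratch)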

On MacOS X:
To test this, I ran it on MacOS X and sure enough, I watched the command-line scroll by and saw the 3 machines. Not only did it stand up the servers, it then installed the correct software on each and fully patched each server (because of the .sh files which are also part of that GitHub project):

image

On Kubuntu:
On Kubuntu (which is Ubuntu with the KDE window manager), same thing – I did a git clone, vagrant up and voila, the same machines got built:

image

On Windows:
And finally on Windows, same thing here, except I did a "vagrant up --provider=hyperv" – but aside from that, the new servers were created and then fleshed out the same way:

image

Bottom line:
I think it's important to note that Vagrant didn't just create the VMs – it created them based off the specifications in a small config file that you check in with your development source code. And maybe even bigger than that, after it provisioned each machine correctly, it automatically updated/upgraded the system and installed the specific server software for each machine (Apache, MySQL, etc.) – all automatically! This is defined in the *.sh files which are also checked in to that GitHub project.

Maybe the reason I find this so compelling is that I do prefer this style of development infrastructure. I like for a developer to have a capable laptop where they can bring their "servers" with them. This means the developer can be on a plane, on a park bench, or at their desk – and they don't need network connectivity to be able to work. Or even better, they don't have to worry about stepping on the other developers as they work. Instead, they have a complete, and in this case consistent, data center right on their laptop.

Bottom line for me, this is a really cool technology – and a great, lightweight way to manage your per-developer VM’s, by storing the configuration in code. This concept of Infrastructure as Code can only save time because every developer will get exactly the same configuration. Not all shops work this way – many use AWS, Azure, or on-prem servers – but for ones who do per-developer “laptop data centers”, this is a very cool tool!


Getting started with OpenStack – cloud software for your data center

It seems to me that the world of IT infrastructure had a big shift several years ago when Amazon Web Services (AWS) and Microsoft Azure started offering off-premise infrastructure, which was collectively called “the cloud”. This is the concept of having your “servers” be hosted virtually, off-site, in the “cloud”, which has many, many benefits – with cost-savings being a big one.

However, it wasn’t until recently that I think there has been another wave, and a confluence of several technologies which has led to a new era for infrastructure and systems management. It seems like the next plateau for the industry is going to be a few things:

  • Docker: The capability to completely "containerize" your application, using Docker for example, and deploy it to any Docker host. This is where the application and all of its dependencies can be packaged up and run on anything that supports Docker, making the "operating system" irrelevant. Most technologies, including ASP.NET, are now pretty much ready for this. In fact, I wrote a blog post about that, here.
  • Infrastructure as Code (IaC): The capability to dynamically provision the infrastructure your app needs. Using so-called “infrastructure as code”, the development area defines the servers, load-balancing, etc that is needed for the application in a JSON configuration file, and that environment is created, or updated upon every deploy. This means that your configuration for v1.4.2 of your application is stored in source control, right along with the app. Need to revert to v1.4.2, and the deploy process will bring the hardware back to that configuration for that version too. I have an upcoming blog post on using Puppet for this.
  • On-Prem “cloud services” (a.k.a private cloud): most people agree that AWS and Azure are really great ways to manage infrastructure. However, they cost money for every bit of usage. “Cloud is the new mainframe”, as I always say, because you can’t make a move without there being charge-back! Most companies have infrastructure already on-site or simply need to host systems on-premise. So, how could you take the convenience of “managing your infrastructure in the cloud”, but have the physical hardware on-prem? There seem to be two significant offerings for this at the moment: Microsoft Azure Stack (also see here), which brings the actual Azure console to your on-prem hardware, and OpenStack, which is an open-source cloud management software – which is what this blog post is about.

To my brain, it seems like these three techniques are where infrastructure is heading, because they support: legacy apps that just want to deploy to VM "servers" like always; simple/modern apps that want to containerize and easily scale up and out, using something like Docker; and complex apps that can perhaps containerize, but also have complex "server" setups, which they could manage with Infrastructure as Code (IaC). For example, you have message queues, middle-tier "processing" servers, reporting servers, etc. which all need to be set up a specific way and/or verified whenever you deploy your application.

Put another way, now that virtualization has been mastered, and now that containerization has been mastered, the last pieces that are needed are to make as much of this self-service as possible, and then automate all the things!!

image

With that said, here’s a look at what I’ve learned from playing around with OpenStack.

About OpenStack:
OpenStack was created by Rackspace and NASA in 2010. It was meant to be an open source way that you could offer a "cloud" experience using your own hardware, on-premise – specifically, offering Infrastructure as a Service (IaaS).

image

Here is a great video I found which explains the origin story pretty well:

and specifically, here is another good video which shows the features of the latest version:

Looks pretty cool, right?!

Getting the software:
Believe it or not, this is where it starts getting complexticated. I spent quite a bit of time in the documentation, and from what I gather, it seems like there are three options:

  1. Download and run “devstack”, a complete, single-server edition which has all of the features – which I’ll cover here.
  2. Use one of the MANY customized versions, for many different distributions of Linux, see here: https://www.openstack.org/marketplace/distros/
  3. Use one of the many hosting providers who happen to use OpenStack, in case you don’t want to use Amazon, Microsoft, IBM, or Google. See here: https://www.openstack.org/marketplace/

In my case, since most larger companies use Red Hat Enterprise Linux (RHEL), and with CentOS being the civilian version of the same, I went with that approach. I downloaded and installed CentOS on a bare-metal machine. This is neither a small nor a simple "program" to install – it's meant to be the core of your data center – so digging into the real, multi-server install is beyond the scope of this post.

Installing the software:
I just followed the instructions on this page: http://docs.openstack.org/developer/devstack/ for CentOS/RHEL. I guess there are many thoughts on WHERE to put this source code. Since this is for the entire server, I am putting the source in:

/opt/openstack/src/

Then, per the documentation, a new user called “stack” is created, and has “sudo” permission. So, while still logged in as root, I change the owner of that path:

# chown stack:stack -R /opt/openstack/

This means: change the owner to user "stack" and group "stack", recursively, for all subdirectories and files starting from /opt/openstack/. This is needed because stack.sh needs to be run as the stack user, and that stack user needs to have permission to that path. With that done, I went into the devstack folder and ran stack.sh. This runs for a while – like, several minutes.

When complete, it gives you a status like this:

image

Great, I’ll just open a browser and navigate to http://192.168.1.24/dashboard and I’ll be off and running!

Configuring the firewall:
Well, wait – the first thing I found was that despite what the output of stack.sh said, I couldn't get to the website. CentOS and RHEL ship with SELinux and with the firewall turned on by default. So, we need to allow those two ports through the firewall. First, we edit the iptables rules:

$ sudo nano /etc/sysconfig/iptables

and then we add two lines, after the existing “ACCEPT” entries, but before any of the “REJECT” entries:

-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 5000 -j ACCEPT

Then, save your changes and exit. To put these rules into effect, run:

$ sudo service iptables restart

Now, you should be able to get to the main website.

Using OpenStack:
I can now log into the portal, which is http://192.168.1.24/dashboard for me – but will be different for you, and I see a login screen:

image

From the output of stack.sh, I see I can either log in as admin/nomoresecret or demo/nomoresecret. I'll log in as admin first.

image

From the admin side, you can set up all of the things you want to make available to end-users, like VM images, block storage, virtual LANs, etc.

Then, if I log in as the demo user, I see a slightly different interface:

image

and very similar to AWS, I can click on “Instances” and go spin up a new virtual machine:

image

and as an admin, you can predefine virtual machine configurations, similar to Azure and AWS:

image

and when done, you can see your new virtual machines:

image

This worked earlier, but when I went to get a screenshot of it now, it's in an error state – I keep getting a "block device mapping" error for some reason. But you get the idea. You can normally click on the instance name and even open an SSH prompt from there too.

Bottom line:
From reading about this particular product, it seems like this could be a pretty good alternative to manually managing many, many Hyper-V or VMWare servers. In theory, you have OpenStack be your hypervisor and you could literally manage an entire data center of virtual machines from one console. Better, you can define tenants and let development areas provision whatever they need, directly. Even better than that, there are API’s for everything you see, which means that infrastructure provisioning could be automated too, where needed.

For me personally, I homelab with two “servers” that host many virtual machines. In my free time I’d like to see if I can ultimately get OpenStack to replace that, and manage the hardware of those two host machines, via an interface like that. If/when I transition to that, I’ll likely do a blog post or two on it. From reading up on it, it doesn’t seem like a trivial thing – but I do like the idea!


Being more secure on your Android phone

I recently abandoned (and left for dead) Windows 10 Mobile and switched to Android. My first order of business was to see what sort of security and privacy options I have. I found some excellent news and some horrible news. Here is my current take on how to be relatively secure and private on an Android device.

The Good, the bad, and the ugly:
Android has a notorious reputation for being overwhelmed with malware – not because of the operating system, but because of apps that ask for far too many permissions, and people who simply allow it. I was SHOCKED to see even the most innocuous of apps asking for permission to my location, my contacts, and other private things.

Take Pandora Radio for example. This is a mainstream technology and should be safe, right? If you look at the permissions, they are pretty intrusive:

image

It's clever that they don't let you see all of them at once. Take a look at the scrollbar on the right – there are a LOT more permissions it "needs" to run. Why would Pandora Radio legitimately need to see and MODIFY my calendar and send emails on my behalf?! That's outrageous, but there were plenty that were worse. So, this was annoying. Out of ignorance, the Android user base allowed this to become the norm, which sucks.

The other fundamental thing which makes me uneasy is that since Android is made by Google, it wants to use a Google account to do everything on the phone. Now, we all know by now that Google uses every byte it can access from you to strengthen their profile of you, and to sell your data to other vendors. So, I created a new Google account just for the phone, which I guess is better than nothing. But when it comes to using any Google functionality, I fully expect that all of it is logged.

So, right out of the gate, we are not starting out great. However, there is some really great news, when it comes to security and privacy, too!

First, using OpenVPN Connect, the phone connects and STAYS connected to VPN, keeping raw data out of the hands of my cellular provider. As mentioned, I really like Private Internet Access (click the logo, it’s $3.33/month):

and here’s what that looks like when connected on the Android:

Screenshot_2016-03-05-09-07-50

Next, the Tor project (www.torproject.org) has Orbot, a proxy application which can route all of your network traffic – or just application-specific network traffic – over the Tor network. If you are not familiar, this typically means your network traffic is sent encrypted across 3 Tor nodes, which hides your identity and makes it much harder for marketing agencies and government agencies to track your behavior:

Screenshot_2016-03-05-09-08-09

and Android has a Tor browser too, called Orfox – so named since it uses the Onion Router and is based on Firefox:

Screenshot_2016-03-05-09-08-45

When you use the Tor browser, it routes all of the traffic from JUST this browser, over the Tor network, offering the same protections.
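If the proxy idea sounds abstract, here’s roughly what it looks like to point an application at a local Tor SOCKS proxy. This is a desktop Python sketch, not an Android app, and the port is the usual Tor default (9050) – an assumption, so check what Orbot is actually configured to use:

    # A minimal sketch, assuming the requests package with SOCKS support
    # (pip install requests[socks]) and a Tor SOCKS proxy on 127.0.0.1:9050.
    import requests

    proxies = {
        "http": "socks5h://127.0.0.1:9050",   # socks5h = DNS is resolved through Tor too
        "https": "socks5h://127.0.0.1:9050",
    }

    # The request exits from a Tor node, so the site sees that node's IP, not yours.
    # check.torproject.org reports whether the connection is coming over Tor.
    resp = requests.get("https://check.torproject.org", proxies=proxies)
    print("Using Tor?", "Congratulations" in resp.text)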

Lastly, Signal is a drop-in replacement for your SMS/MMS app. It lets you send/receive text and picture messages, but if the other party is also using Signal, then the entire conversation has end-to-end encryption. Within the app, you can also make phone calls that are fully encrypted too.

Here are some key resources for this:

The main website with more detail:
https://whispersystems.org/

“You Should Really Consider Installing Signal, an Encrypted Messaging App for iPhone” (this is a little dated and iPhone-focused, but it applies to Android too, and it gives the background on the Edward Snowden angle)
https://theintercept.com/2015/03/02/signal-iphones-encrypted-messaging-app-now-supports-text/

How to make an encrypted phone call with Signal
http://support.whispersystems.org/hc/en-us/articles/213132447-Who-can-I-call-

So, when you put all of this together, it’s a net-positive in my book. Sure, the device itself and some of the apps might compromise some of your data, but your internet usage, text messages, and phone calls can all be quite secure and private.

What to install:
So, if you have an Android device, here’s what I recommend you install (all for free) from the Google Play store:

  • OpenVPN Connect (because although the PIA VPN app worked initially, it stopped working. OpenVPN works very well too)
  • Signal Private Messenger
  • Orbot (Tor-proxy)
  • Orfox (Tor browser)
  • Firefox (if you use the default Google Chrome, all of your activity is tracked, because you are logged into your Google account on the phone)
  • DuckDuckGo (a standalone search app which does anonymous searches via the DDG search engine)

As two final steps, you should also encrypt your phone, and set a very good passcode.

If you install and use these apps, on your encrypted phone with a great passcode, then you have a remarkably more secure setup than when you started. It should be quite difficult to lose your information to hackers, governments, or other thieves.

Before and After:
I’m generally a visual person, so I drew a crude diagram. Why would you go through all of this trouble? Well, I would argue it’s not a lot of trouble. These are things that take just a few minutes to set up. Once they are set up, they are effortless to use. But still, why go through the effort?

Well, by default, all of your voice calls, SMS, MMS, e-mail, and web traffic from your phone go over your mobile provider’s connection. Per their privacy policies (all of the ones in the U.S. at least, with no exception that I know of), they capture every byte of readable data and sell it to third parties.

Worse is when you consider data breaches. Here are the top 20 worst data breaches from 2014. Here are several of the largest data breaches from 2015. It is just a matter of time before your mobile provider is breached. What data do they have? Your data. What sites you visit, perhaps data that can be used to steal your identity, and it’s not just you – it will be the same for your spouse and children too. There is ZERO value to you in the mobile provider stalking you and storing your data. In fact, it ONLY has downsides for you. They just do it because it’s a way to make money off of you, not because it offers any value to you.

So, by default, your mobile provider owns your data access, phone calls, and text messages. All of this is collected, correlated, and sold… and it’s just sitting there waiting to be hacked:

image

by installing a few apps, you change the availability and confidentiality of this data into something radically different, like this:

image

If you use Signal for text and phone calls, and the recipient also uses Signal (it’s free for all parties), then all of that is encrypted “end-to-end”, which means no part of your phone calls or text messages will be observable by your mobile carrier anymore.

Similarly, if you use VPN to go out past your mobile provider, and then also use things like OpenDNS to prevent DNS leaks and Tor for browsing or as a proxy service, you make it quite difficult for any provider, hacker, or government to follow what you are doing. As described above, this is not air-tight, but it at least gets rid of the peeping toms outside your window, and gets people to stop digging through your trash bags – that’s the real-world equivalent, to me, anyway.
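As a concrete (if desktop-flavored) illustration of the DNS piece: “preventing DNS leaks” basically means sending your lookups to a resolver you chose, like OpenDNS, instead of whatever resolver the carrier hands you. A small Python sketch, assuming the third-party dnspython package – this is just to show the idea, since on the phone the VPN or OpenDNS settings do it for you:

    # A minimal sketch, assuming dnspython 2.x (pip install dnspython).
    import dns.resolver

    resolver = dns.resolver.Resolver(configure=False)             # ignore the system resolver
    resolver.nameservers = ["208.67.222.222", "208.67.220.220"]   # OpenDNS resolvers

    # This lookup goes to OpenDNS, not to the mobile provider's DNS servers
    answer = resolver.resolve("www.example.com", "A")
    for record in answer:
        print(record.address)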

Bottom Line:
Although no phone platform is perfect, and I’m not super-happy about some core features that aren’t secure in Android, I am pretty happy with the overall solution. Just by installing a few free apps, I can make secure/private phone calls, communicate over SMS privately, and even use VPN and Tor to browse the web without organizations or people observing my every move.

If you have an Android, why not install the apps above and give it a shot? If you are on iPhone, I didn’t verify everything, but I know at least OpenVPN Connect is available for iPhone, and the Signal website says that Signal is available for iPhone too.

Since I’m still new to modern-Android, is there anything else I’m missing for this platform?


I’ve given up on Windows Phone 10

To know me is to know that I’ve been a fan of the re-imagined Windows Mobile/Windows Phone for the past ~6 years or so – since Windows Phone 7 came out.

image

Was I doing it to be different? No, not really. There were two compelling reasons:

  1. Since my day job is as a Microsoft .NET developer, it was easy and simple for me to dabble and write apps for my own phone.
  2. The Windows Phone 7 and Windows Phone 8 platforms were outstanding! Both the hardware and software were quite excellent!

Now, you’ve likely heard people say “b-b-but there aren’t 1.5 million apps in the Windows Store, so that platform is useless!”. Well, as a user of the platform, that was never a problem. Any company/organization/app that was worth looking at also came out on Windows Phone. So, when you couple that with the VERY high quality of the Nokia Lumia handsets, it was a great platform – I say that as a developer and as an end-user.

The reason I stayed with Windows Phone 7 and 8 was that it was a great platform which was consistently poorly marketed. So, I didn’t mind being in the minority, because it was such a quality platform.

Enter Windows 10 Mobile:
As you might have imagined, I was looking forward to Windows 10 Mobile. They wrote the OS from the ground up and with Microsoft acquiring Nokia and taking over the Lumia brand – this should be it!

This should be the Microsoft equivalent of the Apple iPhone. A high-quality vendor putting their weight and force behind a unified platform.

Well, I got a Lumia 950 right when it came out in October 2015 – about 5 months ago.

Since then, it’s been rough going. The hardware is a significant step down in quality from the Nokia Lumias – but it technically works. The phone generally doesn’t feel high-quality. The operating system, though, is… well, junk. From Day One (up to and including today), I consistently have all sorts of problems:

  1. If I tried to “tether” a device to it, to use it as a hotspot, the phone would freeze. Like, “you had to pop the back and pull the battery” kind of freeze. In fact, I ended up buying an external mobile hotspot because of this, and pay an extra $20/month for the privilege of using it (more on this below).
  2. SIM errors make the phone’s cell connection just “go away” once or twice per day. Meaning, you will look down, and instead of “AT&T LTE” in the top-left, you see a circle with a line through it. “I wonder how long cellular has been offline?” I would think. You need to reboot to get it to come back, and when it comes back online, I find I have voicemails and several missed text messages.
  3. Many programs regularly crash and just go away.
  4. Now that I am using VPN, Windows Phone is the only platform that doesn’t understand how this needs to be implemented. It only stays connected while the screen is on. So, the whole time the phone uses your cell connection in the background to get e-mail and MMS, it won’t use VPN. You have to unlock the screen and reconnect VPN every single time.
  5. After using Continuum successfully a few times, it now just crashes my Microsoft Display Adapters (both of the ones I have), and if it can connect, I can no longer use the phone as a trackpad – which means you have no “mouse” when using Continuum.

I guess those are the first five MAJOR problems I have with the platform that come to mind at the moment. It’s one thing to stick with the “underdog” if it were a solid platform – but even 5 months in, it is still a ridiculously bad platform. Like, I can’t believe this ever made it to market.

So that’s it, I hit my limit.

What to get for a phone?
For the first time in 6 years, I took a fresh look at the entire mobile market – what is a good phone to get? Although the iPhone is a nice platform, there are many aspects of it I don’t like – plus, I don’t subscribe to the rest of the Apple ecosystem.

That leaves, pretty much, just Android. Well, in talking with Binoj from CodeRewind – he explained that not all Android phones are junk. In fact, there are several nice, high-end Android devices with great features. So, from reading CodeRewind’s review (of a slightly different OnePlus model) here, and then seeing this great review on YouTube, I decided on a:

OnePlus 2
https://oneplus.net/2

This ends up being a pretty great phone! It’s dual-SIM (for those who have to carry a work phone too), has a comfortably sized (5.5”) screen, and has a built-in fingerprint sensor on the bottom for locking and unlocking the phone, among many other great features:

http://www.droid-life.com/wp-content/uploads/2015/07/OnePlus-2-Leak-2.jpg

More on tethering vs mobile hotspot:
I telework (a.k.a. work from home), so that means I need to have professional-level infrastructure and “continuity” plans, just like a business does. I live in Florida, so if a hurricane is heading my way or there is some regional disaster, I need alternative ways to get power, internet, and phone. I wrote about my strategy a couple of years ago, here.

Anyhow, that’s primarily why I care about tethering. If I grab my work laptop and can tether it to my phone, then I can continue to work. Well, now that I have a proper phone that I can tether to, I decided to try out the different methods.

Android supports sharing its internet connection via Bluetooth, over USB, and as a WiFi hotspot. Also, as discussed, I have a separate mobile hotspot. So, I connected a Windows 10 laptop using each method several times, and ran some speed tests (using http://beta.speedtest.net/):

image

As you can see, I got significantly better speeds when using the external, separate hotspot, as opposed to tethering through my phone via any technique.

image

Bottom line, I think I will hang on to the mobile hotspot for this.
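As a side note, if you ever want to script this kind of comparison instead of clicking through the website, here’s a rough sketch using the third-party speedtest-cli Python package. That’s an assumption on my part (I just used beta.speedtest.net in a browser), but it would let you log repeated runs per tethering method:

    # A minimal sketch, assuming the speedtest-cli package (pip install speedtest-cli)
    import speedtest

    st = speedtest.Speedtest()
    st.get_best_server()                 # pick the closest/fastest test server

    down_mbps = st.download() / 1e6      # results are reported in bits per second
    up_mbps = st.upload() / 1e6
    print("down: %.1f Mbps, up: %.1f Mbps" % (down_mbps, up_mbps))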

Bottom Line:
I’ve been pretty aggravated with Windows 10 Mobile since I got it, so I’m pretty relieved to have this all resolved. #FirstWorldProblems What’s crazier is that it was super-simple to go from Windows to Android, because everything, including all the Microsoft apps I use, is also available on Android.

For example, for music, I use Microsoft Groove Pass. This means that when I am in the truck, I can “pair” the phone with the audio system, and use “bluetooth audio” to listen to music. What’s better is that if I create a playlist on my home PC, it syncs in Groove. Well, Microsoft has a Groove app for Android, so that works exactly like it did with Windows Phone.

Anyhow, since I know a few others who have Windows Phone, I thought I’d share my experience and transition from Windows 10 Mobile, to Android.

