Quick Review of Parrot Security OS v3.1

I had a request to review Parrot OS. I haven’t looked at it in quite some time, so I downloaded the latest from: https://www.parrotsec.org/



What is it?
In short, Parrot Security OS is a security/anonymity/pentesting distribution of Linux, similar to Kali. It is Debian-based, and they too have gone to a “rolling” release of the operating system and tools. This is a security project from https://www.frozenbox.org/, which has several other security-oriented products and services, such as public DNS, encrypted XMPP/IRC chat, certificate authority, etc.

Put another way, Parrot OS is basically a customized version of Debian Linux which has all of the industry-standard pentesting tools like Kali has, but it also has anonymity features such as i2p, Tor, BleachBit, AnonSurf, etc. – and regular Linux-y stuff like LibreOffice, Icedove, HexChat, etc. It also uses the MATE desktop environment instead of the usual GNOME or KDE.

Here’s a quick look at the login greeter, and some of the menus:











Bottom Line:
I played around with it for a while, and everything seems to work like it should. Really, with a custom distro, it just comes down to whether you like what was assembled – and generally, yeah, this is pretty cool.

If there is any negative, it’s just around the concept. Whether you are pentesting or being a black hat, you REALLY need to have clean opsec. That means ZERO chance of contaminating your machine with your live/personal data. So, if you have a pentesting rig, it really shouldn’t have e-mail on it, for example. You should have one environment for security stuff, and a different, discrete environment for your personal stuff – not even on the same hard drive.

This distribution seems like it’s trying to be everything to everyone: you have your pentest tools, but you also have your creature comforts of e-mail and IM. Although that is great, it (to me) seems like just a matter of time before some people slip up. For example, if you open a Facebook link in Firefox, you correlate your Facebook cookie with the anonymized connection where you just performed your hack – potentially tying you, criminally, to an event! It’s super easy to have your personally-identifiable data leak into your other work.

So I guess the conclusion I come to is that I want a pentesting rig to be specialized and ONLY have what I need for those sorts of activities. If I also feel “comfortable” enough to use my personal stuff on that setup, that’s a recipe for disaster (for me, at least)!

With that said, there isn’t anything about the distribution that forces you to be sloppy; it’s just something to consider. Arguably too, the target audience could be someone who LIVES in their pseudonym, in which case the e-mail, IM, Facebook, etc. of their “hacker identity” is not as dangerous to potentially leak. For example, a reporter, whistleblower, or dissident might use all of these features effectively, and safely.

In the end, it looks like a totally comparable alternative to Kali (plus a bunch more), if you are looking for a new distribution!


Creating Undo Functionality

Lately I’ve been working on an app to automate some features of a really great deployment product called Octopus Deploy. It’s a great product, but when you need to set up a new deployment, you need to go into several screens and perform several steps. The good news is, the app is all API-based, so you can do everything via either a REST interface or a C# client library.

In this case, there are something like six things I need to do in that system. If any one of the steps fails, I need to undo everything I’ve done up to that point.


Enter the Command Pattern:
It’s generally accepted that if you need undo functionality, you’d use the Command design pattern. However, that design pattern is really geared towards a system you are writing, where you have control over the state of everything. In this case, it’s quite different: I’m interacting with the API of a 3rd-party system and have no control over its state or availability.

Command Pattern-ish…
So I took the spirit of the Command pattern and simplified it a bit. First, I want to “unwind” the list of things in the reverse order. For example, if the order is:

1) Create project group
2) Create project
3) Create team and assign it to the project
4) …

I don’t want to undo those in the same order, because just about all of the steps would fail. I’d need to unwind them in reverse order:

3) De-assign team from project; delete team.
2) Delete project
1) Delete project group

So, the correct data structure for that is a “Stack” object. You “push” things onto the stack (like loading a Pez dispenser or firearm magazine) or you “pop” things off the stack when you want to retrieve them. For example:
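The post’s actual code is C# (a `Stack<UndoItem>`); as an illustrative sketch only – not the code from the project – here are the same push/pop mechanics in Python, where a plain list works as a stack:

```python
# A Python list works as a last-in, first-out stack:
# append() is "push", pop() is "pop".
undo_items = []

undo_items.append("delete project group")  # undo for step 1
undo_items.append("delete project")        # undo for step 2
undo_items.append("delete team")           # undo for step 3

# Items come back off in reverse order - exactly the
# unwind order we want:
assert undo_items.pop() == "delete team"
assert undo_items.pop() == "delete project"
assert undo_items.pop() == "delete project group"
```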


And this “UndoItem” type is just a class I created to keep track of every item to undo:


Now, for each operation I want to do, I keep track of the code that it would take to “undo” that operation. I made an example project which just creates directories:


When we do an “undoItems.Push(..)” – that is what records the code it would take to undo that operation. What’s crazy is that because .NET supports “closures”, you can reference variables outside of the scope of your anonymous function. So, imagine that several minutes later we are undoing this operation: “directory1” would normally be out of scope, but .NET keeps track of the external references that you make.
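The same closure behavior exists in Python. Here’s a hedged re-imagining of the directory-creation example (the names and paths are made up, not from the actual project):

```python
import os
import tempfile

undo_items = []

# "Do" an operation: create a directory...
base = tempfile.mkdtemp()
directory1 = os.path.join(base, "directory1")
os.mkdir(directory1)

# ...and push a closure that knows how to undo it. The lambda
# captures 'directory1' from the enclosing scope, so even if it
# runs much later, the variable is still available to it.
undo_items.append(lambda: os.rmdir(directory1))

# Later, when unwinding, the closure still "remembers" directory1:
undo_items.pop()()
assert not os.path.exists(directory1)
```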

Doing the Undo:
OK – so we have a stack to keep track of the undo items, and whenever we do something undoable, we write down what it would take to back it out. Finally, we need to actually execute the stack of undos. So, I have code like this in the exception block:


So this walks through the stack, pops off an item, and then executes that undo code while printing to the screen what it’s doing.
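Putting it all together, here is the shape of the whole pattern, sketched in Python rather than the project’s C# (the step names and directories are illustrative assumptions):

```python
import os
import tempfile

undo_items = []
base = tempfile.mkdtemp()

try:
    # Step 1: create a directory, and record how to undo it.
    dir1 = os.path.join(base, "step1")
    os.mkdir(dir1)
    undo_items.append(("remove step1 directory", lambda: os.rmdir(dir1)))

    # Step 2: same idea.
    dir2 = os.path.join(base, "step2")
    os.mkdir(dir2)
    undo_items.append(("remove step2 directory", lambda: os.rmdir(dir2)))

    # Step 3 blows up...
    raise RuntimeError("step 3 failed")
except RuntimeError:
    # ...so walk the stack, popping and executing each undo action
    # in reverse order, printing what we're doing as we go.
    while undo_items:
        description, undo = undo_items.pop()
        print("Undoing:", description)
        undo()

assert not os.path.exists(dir1) and not os.path.exists(dir2)
```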

Bottom Line:
I thought this was an interesting computer science problem. I didn’t solve it in the typical way with a full-blown Command pattern, but this little chunk of code in the same vein/spirit worked out well, so I wanted to write it down.

As I mentioned, I made a complete sample, which you can view or download from GitHub:



netdata: Web-based Performance Monitoring for Linux

Co-worker Stephen pointed me in the direction of this “netdata” open source project, located here: https://github.com/firehol/netdata. Rather than trying to explain what it is, it might be easier for you to just navigate here and be blown away first! http://my-netdata.io/



So what this is, is a lightweight, but very rich and robust, web-based performance monitoring tool – for Linux. I say for Linux because I don’t believe it is currently set up to run on anything else.

Installation on Linux (and Raspberry Pi!):
This is compelling to me because it’s very quick and easy to install, and it runs on everything. You run a few commands, navigate to the dashboard in your browser, and voila – you see the same kind of dashboard as above, except for your system!

So, to install on any Linux system – including (I tested on) Raspbian for Raspberry Pi and Banana Pi – just follow the instructions from there:


Installation on Ubuntu, on Windows?!:
After the Windows 10 anniversary update, there is now an Ubuntu Linux subsystem that runs within Windows 10. It’s not exactly full-blown Ubuntu, but it’s got most things you’d expect, including “apt-get”. So, I thought: why not give it a shot?

Sure enough, it installed just like normal, following the same link as above!

The one “gotcha” here though is that it only shows processes from the perspective of this Ubuntu subsystem. So, it’s not a complete system monitor – but it does technically work!

Bottom Line:
If you have a Raspberry Pi project – or any other Linux system whose performance you’d want to monitor – this tool is easy and quick to install and has a ridiculously powerful and beautiful dashboard to show you all of the details!


Wireless all the things! Streaming audio and video in 2016

I recently realized that a few of my technology annoyances have been solved. In fact, for a few things, there are great solutions!


Streaming Music to Speakers:
For example, I use Microsoft’s Groove Music Pass for my music. I can stream and download unlimited music on multiple devices. In the car, I use streaming “bluetooth audio” from my phone, which allows me to take my playlists on the road. At home though… ugh. I have a Windows machine where I have the Line Out jack hooked to a home stereo tuner via RCA jacks. So, I can only listen “on the big speakers” to the audio out from this one Windows machine.

Bluetooth enable ALL home theaters, and home stereos:
I thought in my head: “I wish I could stream music from my phone or iPad to those speakers” – and voila, that technology exists! And, it only costs $22 each:


With this device, you plug in power, and it has a jack and an included cable that goes to RCA plugs. So, for each home theater/AV/home stereo you have – get one of these. Then, hit the button on the back and pair it with your phone, tablet, and computers (Windows/Linux/macOS all support this).

Oh, and remember my main Windows machine used to be hooked up to those speakers? Well, I just pair it to the device too:


What is interesting is that many devices can be paired and connected to this bluetooth device – and whenever a device is sending audio, that is the one that gets through. If multiple devices are sending audio at the same time, the bluetooth device either makes it choppy or picks one. Bottom line though, many devices can easily share a home stereo or surround sound system from computers, tablets or phones.

What a cool technology, right? So now, for $22 each, you can stream from any computer/tablet/phone to any audio system you have in your home or office.

OK, so streaming audio is now a solved-problem!

Streaming Video to TV’s and Monitors:
Along the same lines, wouldn’t it be cool to “throw something up on the screen” in your office or living room… wirelessly? Well, that technology already exists too – and it’s also somewhat cheap! I recommend these Microsoft Wireless Display Adapters, which are $49 new – but you can find used ones for typically half that:


Now, before you go saying “who needs some Microsoft proprietary device that only works with Windows?” – that’s the thing: although I think Microsoft does use its own technology when connecting from Windows, these also support “Miracast”, essentially HDMI over Wi-Fi. Miracast is widely supported on Android, iOS, macOS, and even Linux.

Wireless enable ALL monitors and tv’s in your home:
So, imagine that any public screen in your home or office that has an HDMI port could have one of these, and you could connect to it as an “extra monitor” from pretty much any computer/tablet/phone! Connecting to a “wireless display” is natively supported in Windows 10 too:


Now, this isn’t all good news. First, although the technology works great, it can be flaky. To be clear, you can stream 1080p video/audio and it works without issue. However, suddenly (a couple of times a day), it will randomly disconnect, and you just need to reconnect. Also, in the case of my phone using a Miracast app, I can see the display adapter, but it won’t successfully connect. So, it’s not perfect.

Bottom Line:
I was pleasantly surprised that technology now easily exists that lets you share your audio or your video on any device in your home/office. Using those bluetooth devices above, I can move to the living room and continue streaming my music to the home theater system. When I go to the office, I can stream to the system there – all from my phone.

Similarly, if I want to show someone something on an extra monitor or TV somewhere near me, I can do that from virtually any computer/tablet/phone.

This is one of those things where I just took a few minutes and bought the hardware I needed, and you just need to set it up once – and it’s easy. After that, you can live like you’re living in the future!!


ReactOS – an open source version of Windows?!

ReactOS is an actual, legal, open source version of “Windows”. This means that it is binary-compatible with Windows and runs many/most Windows programs. It’s legal because they basically recreated Windows from scratch and put it under a GNU license.


Admittedly, this is probably a niche thing which will only appeal to three categories of people:

  1. People who are curious about technology.
  2. People who want/need to run Windows but are fed-up with the privacy and security nightmare that it’s become.
  3. People who need Windows to run one or two programs, but don’t want to keep track of legitimate Windows licenses, and activation, etc.

I am mostly in #3 and a little in #1. In my case, if I run a pentesting laptop with Kali Linux, what do I do about the few Windows-only testing tools that are out there? Well, if I install VirtualBox, and then install a ReactOS virtual machine, I can run all of those programs in there and Microsoft doesn’t even have to get involved!

The Good!:
Well, it looks, feels, and acts pretty much like an old version of Windows – so it’s familiar (but very, very fast and snappy):


Another notable thing is that the installation takes only about three minutes, and once it’s booted, it uses only 117MB of RAM just idling – compared to 1,500–2,000MB of RAM for Windows 10:


The video, and then keyboard/mouse capture, were terrible with VirtualBox, but luckily the “guest additions” installed and work perfectly! That was a pleasant surprise:


If you install those, the keyboard/mouse are seamless and the desktop resizes resolution whenever you change the size of the window.

The penultimate example of good is that since it’s not “quite” real Windows, it has its own application manager, where you can install known-compatible software, and it’s also how you update the system:


And lastly, the main good thing is that pretty much everything “just works”. I haven’t run across any showstoppers yet – but I’m sure there must be some, given the complextication of Windows.

The Bad:
I haven’t found much bad. Only two things come to mind. For applications that draw their own window chrome and window-manager buttons (min, max, close in the top-right) – like Firefox or Chrome – those buttons don’t show up.


Also, since this is based on an older version of Windows (it targets NT-era Windows, around Windows Server 2003) – it can bog down pretty easily. It’s very light and snappy, but when more than a few things are going on, you can tell the OS slows down. It “feels” like the way Windows used to bog down, in the olden days.

Bottom line:
Is this a replacement for Windows 10? Probably not, for most people. Given the complexity (and advancement) of Windows, I am sure there are newer apps that won’t work. However, if you want a super-lightweight OS which will run “most” Windows apps, and is free – this so far seems to be a very cool alternative!


Building a Kali Linux KDE image

Well, the second major release of Kali 2 came out and we were told you could now have different desktop experiences! Instead of the clunky old GNOME interface:


you could use modern window managers like KDE:


or super-fast window managers like XFCE! Well, when I looked on the downloads page, there is not a download for KDE, sadly:


So, how do you use Kali Linux with KDE, then? If you use the regular/other builds, it’s not available in the dropdown on the login/greeter screen. After a little research, I found I have to build my own ISO. Hoo boy, that’s probably complexticated, right?

Well, the good news is, it’s quite easy. Follow the instructions from this page, and run (as root, from a directory, like ~/kali/):

# apt-get install curl git live-build cdebootstrap
# git clone git://git.kali.org/live-build-config.git
# cd live-build-config
# ./build.sh --distribution kali-rolling --variant kde --verbose

and then let your computer run for a while. On a low-powered laptop, this process took about 1 hour and 20 minutes. It will pull down everything it needs to build a custom, KDE-oriented .iso file – which you can then burn to a DVD or a USB thumb drive and use to install wherever you’d like. I did a blog post on how to do that from Linux. The .iso file will end up in the ./images/ folder under the directory where you ran the build.

I’m not sure why they included specific downloads for MATE, XFCE, etc., but left off KDE. But above is how you can fix that and run Kali with a KDE desktop experience.

With that said, after I did all of this, the installer detects an ethernet and wifi adapter, but after installation, the OS (and KDE) doesn’t see either NIC. So, there is still more work to do – but here’s how you can at least get started!

UPDATE: post-install, I needed to do a couple of things. First, is I added the following to /etc/network/interfaces:

auto eth0
iface eth0 inet dhcp

this tells the OS to at least look for a device called eth0, which is the physical Ethernet NIC. You can do the same with “wlan0” too, but you’d need to specify the SSID, and I’ve never been able to successfully configure that wpa_supplicant stuff. So there’s no point – PLUS, the NetworkManager we are installing next replaces the purpose of this file.

To bring up the network card, you can do a:

$ sudo ifup eth0

and hopefully you should get a DHCP address. Now, to support wireless, and now that you at least have a wired connection – install the following (this is only for KDE):

$ sudo apt-get install plasma-nm

this is the NetworkManager plug-in for the “Plasma” desktop. I started from these Debian instructions, and then followed the links to the KDE-specific instructions. In essence: install “plasma-nm”, then right-click on the main status bar, choose Add Widget, search for Network, and drag that to your “system tray” area in the bottom right. You’ll now be able to browse and connect to WiFi networks.

Who knows why this isn’t enabled by default, but it’s an easy enough fix – and you only have to do it once, just after installation of the operating system.


The dilution of the operating system

As you might know, I resigned from a position I held for exactly 10 years, exclusively supporting Microsoft .NET development. That was a couple of months ago. Without “having” to stay on Windows anymore in my new role, I’ve been having a walkabout with other operating systems. Specifically, I’ve been living almost exclusively off of Ubuntu Linux and macOS for the past couple of months. What have I learned?


First, I learned that virtually every “regular” app you would use on Windows is available in some form on these other operating systems, so you could easily use either of them. Take the Kindle app: it’s available for Windows and Mac, but what about Linux? Well, there is https://read.amazon.com, and it’s the full Amazon Kindle experience, right in the browser. Similarly for OneNote: there are native apps for Windows and macOS, but on Linux you can just use the browser app via http://OneDrive.com to open your OneNote notebooks. The web UX isn’t quite as nice, but it’s totally doable. Even apps like Skype are available natively on Ubuntu and macOS now too.

Second, everything is coming to every operating system. The Ubuntu Linux command-line has come to Windows 10, PowerShell is now open source and is available on Linux and macOS, etc. If there is a useful app, it seems it’s just a matter of time before it’s available on “the other” platforms.

Using Windows full-time:
Why even wander away from Windows in the first place? Well, first for me is the outrageous security and privacy situation in Windows 10. “When something is free, YOU are the product”, as the saying goes. We now know that Windows regularly phones home, and having it track everything you do is creepy, unnecessary, and – when Microsoft is inevitably hacked – can only be bad. The idea that it’s “accepted” for someone to wander around your house, observe everything you do, and document it too, is just crazy. I mean, isn’t your “personal computer” even more private than your home, nowadays?

But even aside from that – which, I acknowledge, some people don’t care about – Windows is also frustrating to use compared to Linux. When you are working in the command-line, Windows hasn’t really changed much since the 1980’s. It’s a woefully lacking environment. Then there are sometimes Unix-y things that you want or need to do, where Windows just can’t do it. For example, I changed how my DHCP/DNS works at my house and needed to track down which remote machines were using which IP addresses. Nmap works easily and quickly on Linux. So – Windows is not an “everything I want” environment.

Using Ubuntu full-time:
If you are going to use Linux, and want things to “just work”, then Ubuntu is the only practical answer. This is because if a vendor takes the time to get their product working on Linux, they address Ubuntu first, because it’s the most popular. I’ve been pleasantly surprised by my experience. Even advanced things like getting a fingerprint reader to work, and having simple whole-disk encryption (similar to BitLocker), are easy to set up. At the hardware level, since Ubuntu has a far, far smaller footprint, it seems to use far less battery – which is really good for laptop use. To give you a reference, Ubuntu with the Unity window manager open, just idling, uses about 700MB of RAM. Windows, just sitting idling, uses about 2,000MB (2GB).

Despite it being a great platform, it’s not all great. First is MS Office, and OneNote specifically. For Office, you can use LibreOffice, which comes pre-installed. It can open and save MS Office-formatted files… but not perfectly. It has corrupted both Word and Excel files by messing up the formatting just a little bit. That’s not cool. And for OneNote, because there is no native app, you have to use it in the browser. It gets the job done, but it’s not a great experience.

With that said, there is one big benefit – I have found Windows running in VirtualBox on Linux is far more seamless than on any other platform, and it’s definitely better than running Ubuntu in a virtual machine. The window manager (Unity) in Ubuntu uses hardware acceleration, so when you run it in a VM, you see lag and slow UI performance. In other words, Ubuntu hosting Windows is the best computer-in-computer environment I’ve run across. However, the battery drains 2x to 3x faster when running Windows in a VM, so it’s not a mobile/portable solution – you need to be near a plug. So – Ubuntu too is not an “everything I want” environment.

Using MacOS full-time:
I initially exposed myself to macOS when I started looking at Xamarin a few years ago. I was really pleasantly surprised, by two things mainly. First, I didn’t realize that pretty much every product that exists for Windows also has a native release for macOS. And second, how pretty and seamless the user experience is. Using macOS, I can use Office for Mac, including a native OneNote app, and the command-line IS a “bash” shell, with nearly the same experience as Linux. There is even a package manager called “brew” where you can install apps with something like “brew install app-name”. What’s not to love?!

Well, there are a few things to not love. First, virtualization – with every technology I tried – is… kind of bad. Using VirtualBox for example, no matter whether you are hosting Linux or Windows, those client machines are laggy, choppy, and noticeably slow. I have the latest MacBook Pro too – with an i7 CPU – so it’s not the hardware, either! Also, when you hook up a couple of extra monitors, the whole UI slows down significantly, whereas Windows didn’t. So, it works EXTREMELY well on a laptop with one screen, but when you start pushing the hardware, you quickly see the cracks.

Next is the keyboard. Mac has its own ecosystem and has been living a parallel life next to Windows for decades. So, common keyboard layouts and keyboard shortcuts are different, and it drives me nuts. For example, instead of CTRL+C and CTRL+V for copy and paste, it’s Command+C and Command+V.


If you are using a “regular” keyboard or connecting remotely, this translates to WindowsKey+C and WindowsKey+V – and Ctrl+V brings you to the end of the page for some reason. Imagine trying to paste a link into a Facebook post where you are scrolled halfway down: you press CTRL+V, which jumps you to the bottom of the scroll; you scroll back up and find that post – and then you find out you didn’t even have the link copied, because it’s “the other” keyboard shortcut.

I realize this may sound nit-picky, but it’s not. As a developer, there are no “Home” or “End” keys, for example, or even a “Delete” key. To do those things you have to do Fn+Backspace for Delete, Fn+LeftArrow for Home, and Fn+RightArrow for End. Being productive while coding is difficult, especially if you’ve used a non-Mac keyboard layout and shortcuts for decades. So – macOS too is not an “everything I want” environment.

Which OS is best?
In short, none… or all. There is no clear winner. In fact, the reason for this post is I realized that these three operating systems are extremely similar and are moving closer together every day. If you have VM’s – use Windows or Ubuntu. If you want very snappy performance and a beautiful UI, but a wonky keyboard – use macOS. Want a great command-line interface and robust package manager? Use macOS or Linux, but not Windows. Need MS Office and especially OneNote? Use Windows or macOS, but not Ubuntu.

Bottom line:
In my little pseudo-experiment, I’ve realized I’m not entirely happy with any of these OS’s, and none of them stand out as being particularly great or particularly bad. They are all about 85% the same, and the 15% where they differ is mostly a good difference that fills a deficit in one of the others.

My goal was to find “the ultimate” setup where I could live out of one laptop and have ALL of the things I want. My conclusion? The technology isn’t quite there yet. You just have to pick one and be OK with not being satisfied. macOS would be my choice, except the keyboard and the virtualization performance are showstoppers. Ubuntu would be my choice, but the lack of native MS Excel and OneNote are showstoppers. Windows 10 would be my choice, but the electronic stalking and terrible command-line are showstoppers.

You tell me: what am I missing? What is the “ultimate” computing environment in present day?


Using Django + Git + VSO

OK, so I won’t be doing blog posts as often as I thought – I’ve had a lot of other non-technical things going on. However, I have dug back into Django again – and I’ve fallen in love with it all over again!


While it does seem like it will ultimately be pretty unusable on Windows (due to a list of issues with working with a real RDBMS), it is pretty good on Linux and/or macOS. I’m working on a project, mostly on Ubuntu, but it runs equally well on macOS – and I came up with some noteworthy things.

First, since I dug into this last year, this framework continues to amaze me. The idea is that you can basically define your database tables in a few dozen lines of code, and Django will automatically give you Create Read Update Delete (CRUD) screens, with validation, and dropdowns for the related tables – stuff that is tedious and takes time to write. That is referred to as Django Admin. PLUS, the Django REST Framework, with just a few lines of code per table, will expose REST endpoints for each table AND give you a website where you can learn and play with the REST API.

In short, I’m nothing short of blown away by this technology. I’ve never seen anything like it. Needless to say, I’m actively working on a project that uses it. I’m most excited that this will save me tens of thousands of lines of code I’d otherwise have to write in .NET – if I were to use that technology instead!

Wait a second – if this isn’t a great idea on Windows, how am I writing code? What am I using for an editor? I’ve said it once and I’ll say it again: Visual Studio Code, the standalone code editor, is an amazing tool. Not only is it an amazing editor, it runs on Debian- or Fedora-based Linux distributions (*.deb or *.rpm) and it runs nicely on macOS too.

So, for Django, I installed VS Code on Ubuntu, installed the Python extensions to give me color-coding and IntelliSense, and it’s been a dream to work on! Not only is it a great editor with all of the “comforts” I normally need from Visual Studio when working with .NET code, it also seamlessly has Git functionality built right in too!


Meanwhile, for Django stuff, I have a split console using “tmux”.

On the bottom is “manage.py runserver”, which runs the web server, automagically restarts whenever it detects a code change, and gives me compile errors upon changes too. On the top is a console in that Python environment where I can make and push database migrations. These are just like Entity Framework migrations, except you do them from the command-line:


So – this ends up being a nice little development setup. It’s very quick, and since Django does SOOOOO much for you, you can stand up an application in a very short amount of time.

Since this is a proprietary/for-profit app, I’m using VSO for it instead of GitHub. So, I created a new Git repository. There were a couple of things which took a little bit to figure out. One is that if you have two-factor authentication turned on for your Microsoft account (which you definitely should), you can’t use those credentials from the Git command-line. Luckily, Git providers like GitHub and VSO give you a way to supply alternate credentials. In the case of VSO, it’s here:


But you’ll see a message on there that this is highly discouraged. Instead, you should create “personal access tokens”. You can do that from here:


How do you put all of this together? Well, to get started with Git, check out this blog post. The only thing that is different is that when you go to do a “git push” or “git pull”, you will be prompted for credentials. How this “personal access token” works is that you can put anything in the username field (or leave it blank), and you use the access token for the password. OK, that one is easy.

However, the next thing I found is that 1) I needed to register with Git and tell it who I was, and 2) it prompted me for credentials every single time, which got annoying. So, to register your identity with Git, do something like this from the command-line:

$ git config --global user.name "John Doe"

$ git config --global user.email "jdoe@example.com"

and then to enable a credential cache, do something like this:

$ git config --global credential.helper cache

$ git config --global credential.helper 'cache --timeout=3600'

Where that timeout is in seconds.

Bottom Line:
Once all of that is in place, development was easy-breezy! I save changes in VS Code, the Python web server restarts, and I can see my changes in the website. When I’m done working, I use VS Code to “Sync”, which commits my changes, pushes them to VSO, then does a pull from VSO – making it so my local machine is N’Sync with VSO.

I know there are selling-points to other technologies, but I will say, I’m more-than-pleased with everything I’ve done in Django so far. Aside from the Windows RDBMS issues, I haven’t run across anything else really which has slowed me down! So, if you are wondering how to work with Django on Linux or MacOS, and use Git as a source control provider, hopefully this helps!


Showing file operation progress in Linux

If you wanted to burn an ISO to a thumb drive in Linux, you’d typically use “dd”. That might look something like this:

sudo dd if=~/Downloads/kali-linux-2016.1-amd64.iso of=/dev/sdb bs=512k

This copies the input file (the .iso) to the output “file” of the /dev/sdb device, which is my thumb drive. The problem is, when you run this, it just sits there for a couple of whole minutes, and it doesn’t show any status.
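As an aside, if you have a newer GNU “dd” (coreutils 8.24 or later), it can report progress on its own via the status=progress operand – a sketch of the same command with that added:

```shell
# GNU dd (coreutils >= 8.24) prints bytes copied, elapsed time, and
# throughput to stderr as it runs when status=progress is given
sudo dd if=~/Downloads/kali-linux-2016.1-amd64.iso of=/dev/sdb bs=512k status=progress
```

On older or non-GNU dd that operand doesn’t exist, which is where the “pv” approach below shines.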

Well, there is another command called “pv” with which you can monitor the progress of data through a pipe. Now, “dd” is already self-contained, so how do you inject this monitor in-between? Most programs allow you to feed input to them from the console, or “pipe” it in. So, you can kick off “pv”, have it open the file and feed it through the pipe, showing progress while the downstream program (“dd”, in this case) does the work.

Using “pv” to show the status of the operation, the same command above now looks like this:

sudo pv -tpreb ~/Downloads/kali-linux-2016.1-amd64.iso | sudo dd of=/dev/sdb bs=512k

So “pv” opens the file, and the contents of that file are piped to “dd”. The -tpreb switches control the output format: -t (elapsed time), -p (progress bar), -r (rate), -e (ETA), and -b (byte count). Now, when it runs, I see progress like this:

2.74GiB 0:00:14 [18.2MiB/s] [=====================>                            ] 44% ETA 0:00:17

Not only did this solve this immediate “problem”, this has some other implications. Any time you are moving, copying, or processing a file in any way, you could potentially use “pv” to show the status. You can even pipe this to “dialog” which will show a text dialog box with a status bar too. Very cool!
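The “dialog” trick can be sketched like this (assuming both pv and dialog are installed): pv’s -n switch emits bare integer percentages on stderr, and dialog’s --gauge widget reads those numbers on stdin to drive a text-mode progress bar:

```shell
# pv -n writes integer percentages (0-100) to stderr; the subshell's 2>&1
# redirects them into the pipe, where dialog --gauge reads them from stdin.
# dd's own stderr chatter is silenced so it doesn't confuse the gauge.
(pv -n ~/Downloads/kali-linux-2016.1-amd64.iso | sudo dd of=/dev/sdb bs=512k 2>/dev/null) 2>&1 \
  | dialog --gauge "Writing ISO to /dev/sdb..." 10 70 0
```

The trailing 10 70 0 are dialog’s box height, box width, and starting percentage.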

So, since Future Robert will likely need this in the future, I thought I’d write it down here. Got any other useful command line tools like this? Leave a comment below…


A few updates…

Hey, it’s been a long time! Well, after 10 years (to the day) of being a specialist with the Microsoft .NET technology, I decided it was time to move on. So, I accepted a new position, being more of a generalist, and more on the infrastructure and devops side of things. In my new role, I’m digging into Docker, Puppet, .NET Core, and Infrastructure as Code (IaC) – which includes pretty cool technologies for automating provisioning, and integrating with cloud providers like Amazon and Azure. So that is keeping me busy and I’m having a lot of fun, trying to automate things along the way too.

As for the state of this blog, I expect things to more or less stay the same. Despite me being a .NET specialist by day, my blog posts have covered a wide array of technologies. I suspect that since I’m not doing software development during the day anymore, I will likely be doing much more of that in my free time. So ironically, this blog may end up covering a lot more development items now, instead of infrastructure! I guess I need the 60/40 balance – whatever I’m not-doing during the day, I do in my free time.

Immediately in the future, I have a few web/mobile projects that I’m motivated to work on – likely using a Backend as a Service for the backend. For the front-ends, those will likely be Angular and Bootstrap, and for the mobile side, I want to give Xamarin a go, now that it’s free. Specifically, I want to see how far I can get with Xamarin.Forms to build out a companion app and release that on each app store.

As far as infrastructure stuff goes, that will be much more difficult to blog about. At work, we’re using expensive, paid products, including Red Hat – so I can’t really do any of those things in my homelab, and therefore I can’t easily blog about them.

Lastly, I have a couple of quick updates:

Update the 1st: I have since updated my Linux “update.sh” script, which goes and updates/upgrades all of the packages installed on a Debian-based Linux distribution. It now detects if a reboot is required and only prompts you if it is required. You can see the blog post here and the updated script here.

Update the 2nd: I have since updated my Conky desktop to be a little more refined, and to show a bit more detail. You can see that blog post here and the updated .conkyrc file here.

Conky Screenshot


That’s about it for now. Now that I’m somewhat settled into the new position, I’d like to get back into a semi-regular routine with blogging. Like before, I don’t like to blog just to put something out, instead I like to share something when it’s interesting. And I should have some interesting things coming down the pipe! The RobbyTron2000 is back online!

