If there is one constant in IT and software development, it’s that change never stops. However, in the past 12 to 18 months there has been an explosion of new technologies, particularly around software development. I find myself disagreeing with many things I see – yet these technologies are spreading like wildfire, and respectable people are cheering them on!
What is going on? You say, “Well Rob, you’re kind of an idiot – maybe that’s it?” – true, but wait, I think there is a little more to it than that!
I’ve been thinking over the past few days about why I am so bothered by this trend. Is this a case of “Who moved my cheese?” Is this just a case of my getting older and grumpier, and not liking change?
The conclusion I come to is: no. In fact, I can quantify my reservations. I even challenge you to ask yourself these same questions and see what conclusions you draw!
The world as it was, circa 2012:
Way back in the year 2012, I came into a much fuller understanding of what it meant to be a software professional. Through a combination of my professional experiences and reading Clean Code and The Clean Coder by Robert C. Martin, I learned more about what it meant to write professional-quality software. I don’t mean that in some boastful way. In fact, I mean it in the opposite way – that there is a VERY reasonable standard for most professions, and our profession (and me, specifically) wasn’t where it needed to be.
So what did we learn? What did we know for sure? Here are some things off the top of my head:
- Assumptions are the root of all software bugs. Be explicit, always.
- Compile-time errors are better than run-time errors.
- The earlier in the lifecycle you find a bug, the easier and exponentially-cheaper it is to fix.
- The “Waterfall” methodology is a hold-over from physical manufacturing and is a pretty bad fit for software development. We rarely know what we want in the beginning, don’t have the tools to mock up something effective, and in the end, the customer changes their mind.
- Much of making software maintainable is about managing dependencies. When “all the arrows” on your diagram point every which way, that app will collapse under its own weight someday.
- Lines of code mean very little; clarity is key – having many small files and many small functions is good.
- Clarity and readability trump clever, terse coding – except when it inhibits performance.
- The SOLID principles may be disputed, but they are by far the best way we have found to approach software so far.
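To make the dependency point above concrete, here is a minimal sketch (in Java, a close cousin of C#; all class names are hypothetical and mine alone): a high-level class depends on an abstraction, so the only “arrow” points at the interface, and the concrete implementation can be swapped without touching the caller.

```java
// Minimal dependency-inversion sketch: ReportService depends on the
// Storage abstraction, not on any concrete database or file class.
interface Storage {
    void save(String data);
}

// A concrete implementation; it can be swapped without touching ReportService.
class InMemoryStorage implements Storage {
    private final StringBuilder contents = new StringBuilder();

    public void save(String data) {
        contents.append(data).append('\n');
    }

    String dump() {
        return contents.toString();
    }
}

class ReportService {
    private final Storage storage; // the only "arrow" points at the interface

    ReportService(Storage storage) {
        this.storage = storage;
    }

    void publish(String report) {
        storage.save(report);
    }
}

public class DependencyDemo {
    public static void main(String[] args) {
        InMemoryStorage storage = new InMemoryStorage();
        new ReportService(storage).publish("Q3 numbers");
        System.out.print(storage.dump()); // prints: Q3 numbers
    }
}
```

Keeping the arrows pointed at abstractions like this is exactly what makes the app testable and maintainable later.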
What I took from this is: “Great! We know the mistakes of the past. We now have a better idea of how to move forward.” This also means to me that technology decisions should be looked at through this lens. Because no matter what language you are writing code in, the items above almost all apply!
Enter some new technologies:
Perhaps what has bothered me most is that the default disposition of seemingly everyone is that all new technology is great, and even ideal! “If it’s new, it must be better and that’s all there is to it!” they seem to say.
I’ve been around long enough to have seen bad technologies take hold. So, I’m usually coming from a hopeful-yet-cautious standpoint. I want a new technology to “sell me” on its idea. If it is a self-evident step forward, then it is obviously a good technology.
All technology is going to either be:
A) A self-evident, undisputed, good, new way to do something
B) A trade-off of some good and some bad
C) A bad idea that will (or needs to) fizzle out
When you introduce a technology which is claimed to be good, but is not self-evident, that is when my ears perk up. This is a “smell” that something isn’t right. When I dig further, I find myself often having hesitations too. Sometimes I can put my finger on it, sometimes I can’t. Sometimes it’s a trade-off, and so long as you understand the good and bad, then it’s fine! Some technologies though are sold as all-good, all the time. If true, this is quite rare unless you are truly inventing something new that didn’t exist before.
Here’s how I see things:
As I evaluate new technologies, I immediately see “Oh wow! They built upon those old lessons” or “Oh no, why did they do it that way?” My skepticism leads me to different conclusions for different products. Despite that, what I see from the industry is that EVERY new technology is fantastic and going to change the world and… I disagree!
Calling ‘em out: F# AND functional programming:
I first must admit that I am not a math person. I have tried many times to be a math person, but it does not come easily to me. This also means that “munging through data” is not my strength, regardless of the technology. That aside though, let’s talk about F# and functional programming in general.
Let’s start with this wildly obnoxious blog post – written entirely in his native language of sarcasm – titled “Ten reasons not to use a statically typed functional programming language”, along with my non-sarcastic rebuttal:
- Reason 1: I don’t want to follow the latest fad
Correct. I want to use critical thinking in deciding which technology to use versus just following the “cool kid” pack. If it is self-evident that a technology is beneficial, then why do people feel the need to keep selling it? Shouldn’t it stand on its merits, all by itself?
- Reason 2: I get paid by the line
No, but software professionals prefer clarity over cleverness. Using your logic, if we could reduce our program down to a few lines of Perl and store (and search) our data in regular expressions, we’d be golden, right?! Wait, that was sarcasm. What I mean to say is that a majority of your time is spent READING code, and terse, confusing code is not helping, it’s hurting.
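As a hypothetical illustration of clarity over cleverness (in Java; both versions are mine, not from the post he wrote): the two methods compute exactly the same result, but only one of them tells the next reader what it is doing.

```java
import java.util.List;

public class ClarityDemo {
    // Clever and terse: technically correct, but the intent is buried.
    static int f(List<Integer> a) {
        return a.stream().filter(x -> x % 2 == 0).mapToInt(x -> x * x).sum();
    }

    // Clear: the name and the steps say exactly what happens.
    static int sumOfSquaresOfEvenNumbers(List<Integer> numbers) {
        int sum = 0;
        for (int n : numbers) {
            boolean isEven = (n % 2 == 0);
            if (isEven) {
                sum += n * n;
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        List<Integer> input = List.of(1, 2, 3, 4);
        System.out.println(sumOfSquaresOfEvenNumbers(input)); // prints 20
    }
}
```

The second version costs a few more lines – and pays for them every single time someone has to read it.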
- Reason 3: I love me some curly braces
Correct. They serve a purpose: they EXPLICITLY define scope. They are non-intrusive, take up very little space, and accomplish a lot. Well worth the cost of admission for me. Assumptions are at the root of all software bugs. Curly braces reduce my assumptions about scope.
- Reason 4: I like to see explicit types
Correct. Assumptions are at the root of all software bugs. Explicit types reduce my assumptions about variables.
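This ties back to compile-time errors being better than run-time errors. A tiny sketch in Java (a made-up example of my own): the explicit types turn a whole class of mistakes into errors the compiler catches before the code ever runs.

```java
public class ExplicitTypesDemo {
    // The explicit parameter and return types document the contract and let
    // the compiler enforce it: returning a String here would not compile.
    static int parsePort(String raw) {
        return Integer.parseInt(raw);
    }

    public static void main(String[] args) {
        int port = parsePort("8080"); // the type of 'port' is never in doubt
        // int bad = parsePort(8080); // compile-time error: int is not a String
        System.out.println(port);     // prints 8080
    }
}
```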
- Reason 5: I like to fix bugs
To say that one programming language naturally defends against bugs is a big stretch. He was talking about this post, which I don’t fully agree with either. Taking it at face value though, what is the cost? The cost seems to be creating new instances of types all over the place because I “can’t ever change data”. Also, I am left working in a very specialized, terse language which is difficult to understand and communicate to others. So, this isn’t so much a pro for functional programming as it is a con.
Maybe I need to see a real, full application written in functional language. All I ever see are academic examples, which don’t seem like they would scale well at all. I mean that the code quickly gets very unmanageable.
- Reason 6: I live in the debugger
Then you are doing it wrong. An effective professional software developer working in a statically-typed language will spend most of their time in unit tests, not in the debugger.
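Here is the kind of thing I mean, sketched in Java with plain assertions (a hypothetical example of mine; in practice you’d use a framework like JUnit): each assertion pins down an assumption, including the edge cases, so a regression is caught at test time instead of in a debugging session.

```java
public class PriceCalculatorTest {
    // The unit under test: apply a percentage discount, never going below zero.
    static double applyDiscount(double price, double percent) {
        double discounted = price - (price * percent / 100.0);
        return Math.max(discounted, 0.0);
    }

    public static void main(String[] args) {
        assertEqual(applyDiscount(100.0, 10.0), 90.0);   // typical case
        assertEqual(applyDiscount(100.0, 0.0), 100.0);   // no discount
        assertEqual(applyDiscount(100.0, 150.0), 0.0);   // edge case: over 100%
        System.out.println("All tests passed.");
    }

    // Compare doubles with a small tolerance, failing loudly on a mismatch.
    static void assertEqual(double actual, double expected) {
        if (Math.abs(actual - expected) > 1e-9) {
            throw new AssertionError("expected " + expected + " but got " + actual);
        }
    }
}
```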
- Reason 7: I don’t want to think about every little detail
I assume he means “want”, and not “don’t want”, here. Again, assumptions are at the root of all software bugs. Yes, I absolutely want to test edge cases, assert my assumptions, and write professional-quality code!
- Reason 8: I like to check for nulls
Correct. This is part of being a professional software developer: you don’t just write “happy path” code and call it a “product”! I wonder if his head would explode upon learning that Fault Tree Analysis exists, where for human safety, every error condition is accounted for. Shoot, that was more sarcasm, sorry. There is another saying I’ve been using for years too: “There is a big difference between whipping up a program and releasing a product.”
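What “checking for nulls” looks like in practice, as a minimal Java sketch (names are hypothetical): a guard clause at the public boundary fails fast with a clear message, instead of letting a null surface somewhere far away from the real cause.

```java
public class NullCheckDemo {
    // A guard clause at the public boundary: reject bad input immediately.
    static String greet(String name) {
        if (name == null || name.isBlank()) {
            throw new IllegalArgumentException("name must be provided");
        }
        return "Hello, " + name + "!";
    }

    public static void main(String[] args) {
        System.out.println(greet("Rob")); // happy path
        try {
            greet(null);                  // the non-happy path is handled too
        } catch (IllegalArgumentException e) {
            System.out.println("Rejected: " + e.getMessage());
        }
    }
}
```

That is the difference between whipping up a program and releasing a product: the unhappy paths are accounted for on purpose.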
- Reason 9: I like to use design patterns everywhere
(sigh) Correct. Some people way, way smarter than you or me figured out some pretty smart ways to construct software. Yes, I would like to stand on the shoulders of their wisdom.
- Reason 10: It’s too mathematical
*shrugs* I don’t know, I haven’t found anything beyond math to be easy or fast in a functional language (and I’ve played with a few).
What an obnoxious post, and for me especially so because I disagree with every single point. So, he’s trying to be snarky and matter-of-fact, with all incorrect facts. ANYhow…
Next up is a blog post: SOLID: the next step is Functional. This is well-researched and well-presented, but I disagree on a few points.
I run across this a lot where it seems like the person has an answer, and they are in search of a question. They want the answer to be F#, Haskell, or Scala – and then they write bad C# code to present a strawman problem.
In the case of this blog post, he has C# code which could simply be written better. Instead, he rewrites it in F# and in essence says: “See! F# fixed that problem!” If you want to use a functional language, go do it! It’s not necessary, though, to demonize C# and object-oriented languages in the process.
Get to the point, already!!
This post is really just a brain-dump of several conversations I’ve had with colleagues (who all disagree with me, by the way!) – an attempt to quantify why I don’t agree, and why it’s been bothering me to see so many bad technologies getting uncritical acclaim.
A way I can try to sum it up, is like this:
It took a few decades, but we realized that writing tightly-coupled, brittle code leads to an inevitable, catastrophic collapse of an application. At some point, the application simply becomes too expensive to maintain. At that point, it typically collapses under its own weight and there is a re-write. That anti-pattern has been going on for DECADES now. Meanwhile, 71% of IT projects are over-budget and over-time! Can you imagine that statistic in any other field? Or in the case of mainframe/COBOL, many shops simply don’t modify the program anymore because it’s too scary or too costly to change.
But then! We learned quite a bit. A bunch of smart people came up with the Agile manifesto, the SOLID principles, etc., and thought leaders like Martin Fowler and Bob Martin started leading the way. We may not have fixed everything, but we have an EXCELLENT idea of what has been going wrong AND we have a prescription to fix it. Hint: it doesn’t have much to do with your programming language! It’s your approach to writing code.
When you apply SOLID and Agile, you tend to see effort and cost look like this, instead:
The effort and cost go down as the software starts to “become” what it needs to be!
My challenge to you as you are evaluating a new technology, is to apply this critical thinking:
- Is it just plainly self-evident that this new technology is good? Then use it!
- Did you discover that this new technology has positive aspects, but there are trade-offs? Then consider it!
- Did you discover that this new technology seems good on the surface, but is problematic? Then abandon it!
…oh, and by the way, if you disagree, please speak up! If I’ve mis-stated something, please leave a comment below. This is just the opinion I’ve formed so far, but I’m always open to new ideas.