I posit that we now have a whole new class of consumer that is completely oblivious to certain hitherto fundamental concepts – and this can lead to poor business decisions and overall sub-optimal execution and results.
I got the idea after a discussion with an ex-colleague (who's now working for a cloud vendor) in which he proudly proclaimed that infrastructure is unimportant and uninteresting.
I’ll start generically and shift to IT. The generic aspect of this problem is very interesting, since it’s lowering quality in all sorts of fields.
And never forget: Just because something is widely and easily available doesn’t mean it’s better. It simply means that more people have access to it.
I had a few alternate titles for this post:
- Architecture and Infrastructure STILL Matter!
- Ubiquity Does Not Mean Ignoring Fundamentals!
- What you don’t know CAN hurt you!
- Using the Cloud Means you Need to Know More Things, Not Less!
The Gateway Drug
Certain technologies are a mixed blessing since they have significantly reduced human capabilities in certain areas while immensely helping in others.
Take for example the calculator. In the beginning there was the abacus, and it helped people calculate faster.
Then we developed the slide rule, and it helped make calculations even easier and faster (and contributed to severe eye strain, I mean, look at it!)
A bit later we built electronic calculators, with some able to even do symbolic math.
And these days we have access to amazing tools like Wolfram Alpha.
Are such tools able to hugely assist in the speed of computation? Absolutely.
But, at the same time, they have led to a reduced human ability to do calculations manually, and unless explicitly restricted in early school years, they result in a total inability to do certain kinds of math by hand.
“But, but, they remove the mundane to help you focus on what’s important!”
The argument for the use of many enabling technologies is that they let you focus on being creative by removing the shackles of old methods and speeding up things.
A great example is photography.
I, too, stopped using film a long time ago. However, I learned the "hard" way – developing my own film and printing my own pictures in a darkroom – and have a firm grasp of lighting, dynamic range, contrast, etc. This has nothing to do with creativity; it's about understanding the driving principles of the medium.
I notice many new photographers aren’t even aware of basic photography concepts like shutter speed and aperture, and think they can “fix it all in post”.
Then they wonder why this or that undesirable effect is showing in their images, when the root of the problem is that they lack the fundamental knowledge needed to start with a better source picture. Blurry images, lens flares, too much or too little depth of field, etc.
You see – digital photography and automation haven't abolished the notions of shutter speed, aperture, focal length or, in general, physics. Certain things are now much easier, yes, but that doesn't mean one should try to photograph high-speed sports with a telephoto lens at a slow shutter speed and expect crisp results.
It doesn’t help that the most common camera these days is found on smartphones, which explains the slew of hyperfocal wide angle shots, but I digress…
Losing Knowledge in IT
Technology has made a great many things easier in IT. And it has made people far more productive than ever before. I like the fact that I don’t have to write my own printer drivers any more, for example.
However, I’m noticing that certain things are either taken for granted, or, even worse, aren’t understood at all.
Here are some of my pet peeves in this area:
- Data correctness, for instance, strong checksumming. There is a whole class of customer that doesn't even think of asking about this – and they're being sold solutions that are often completely devoid of any such protection. Case in point: a popular hypervisor vendor didn't even have checksums in their SDS/HCI solution until much later in the product's life – yet they never saw fit to tell anyone until checksums were finally implemented! Of course, several customers experienced data corruption issues until then…
- Developers adopting function consumption constructs (“serverless”), thinking the tech frees them from worrying about infrastructure, without understanding how the underlying “serverless” architecture truly works and what the limitations are. They are often surprised at the very slow speeds of certain functions – and the astronomical cost if not coded right.
- People embarking on a cloud journey and assuming that just because something is now in the cloud it’s automatically protected against anything and they don’t need to architect it for fault tolerance and business resumption.
- This applies to consumer stuff too: your favorite online photo storage provider (which you're using exclusively) may shut down, leaving you a few days to relocate potentially multiple TB of images – but you never thought of that eventuality and aren't keeping a second copy. Since it's all in the cloud. "Well-protected."
- Customers going all-in with software-defined this or that, blissfully unaware of the fact that they still have to worry about upgrading the firmware of the components of their solution, until something breaks…
- The block storage of a famous cloud vendor having an (according to them!) 0.2% annualized failure rate – meaning that out of 1000 pieces of data, 2 will disappear or get corrupted every year. That’s staggeringly bad data integrity, yet most consumers of this service are unaware of this – and as a result do not deploy mechanisms to guard against this problem. By the time they need to read the corrupted data, it may be too late.
- Customers going all-in with cloud without realizing that there exists such a thing as a network egress cost – and failing to take that into account when modeling their costs.
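The data-correctness point above is worth making concrete. A minimal sketch of what end-to-end checksumming means in practice – store a digest alongside the data when writing, verify it on every read – might look like this (SHA-256 is just one reasonable choice of hash; the data and function names here are illustrative, not any vendor's API):

```python
import hashlib

def checksum(data: bytes) -> str:
    """Compute a SHA-256 digest for a blob of data."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected_digest: str) -> bool:
    """Re-hash the data on read and compare against the stored digest."""
    return checksum(data) == expected_digest

# Store the checksum alongside the data when writing...
original = b"important business record"
stored_digest = checksum(original)

# ...and verify on every read to catch silent corruption.
corrupted = b"important busLness record"  # a single flipped byte
print(verify(original, stored_digest))    # True
print(verify(corrupted, stored_digest))   # False
```

If the storage layer doesn't do something equivalent to this for you, nothing will flag a flipped bit until the data is read – possibly years later, long after the good copy is gone.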
These are expensive, dangerous and potentially career-ending mistakes. And they could all be avoided if one knows to ask the right questions, tests properly and architects around the platform's limitations (Netflix is a great example of this).
Ignorance is Not Bliss!
As in the checksum example – where the vendor never told customers their solution initially lacked something as crucial as checksumming – what else are you unaware of?
Most importantly, do you even know what questions to ask in order to expose the holes? (because everything has holes).
For instance, being in the cloud doesn’t mean we can now abolish fundamental concepts like RAID and checksums. It simply means someone else is (hopefully) worrying about such things, and their standards may vehemently disagree with yours.
But do you have standards around such things? Or do you take them for granted?
Commoditization and Cloud Need Additional Skills
If anything, the ubiquity of certain technologies now means that customers need additional skills they didn't need before. To code for "serverless", for example, one needs to adopt a totally different development methodology and be aware of the architectural considerations and constraints of the chosen platform in order for that code to run well (and not cost a fortune).
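To illustrate the "cost a fortune" part: most serverless platforms bill roughly per invocation plus per unit of memory-time consumed. A rough back-of-the-envelope model (the prices below are hypothetical placeholders, not any vendor's actual rates) shows how the same workload, coded badly, can cost orders of magnitude more:

```python
# Hypothetical unit prices - placeholders, not real vendor rates.
PRICE_PER_GB_SECOND = 0.0000166
PRICE_PER_MILLION_INVOCATIONS = 0.20

def monthly_cost(invocations: int, avg_duration_s: float, memory_gb: float) -> float:
    """Estimate monthly cost: compute (GB-seconds) plus request charges."""
    compute = invocations * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    requests = (invocations / 1_000_000) * PRICE_PER_MILLION_INVOCATIONS
    return compute + requests

# A well-tuned function: 100 ms at 128 MB, 10 million calls/month.
fast = monthly_cost(10_000_000, 0.1, 0.125)

# The same workload coded without understanding the platform: 2 s at 1 GB.
slow = monthly_cost(10_000_000, 2.0, 1.0)

print(f"tuned: ${fast:,.2f}/month  naive: ${slow:,.2f}/month")
```

Under these assumed prices the naive version costs roughly 80x more per month for identical business value – exactly the kind of surprise the fundamentals would have predicted.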
“Cloud” doesn’t mean you don’t need to worry about architecture and infrastructure any longer, it simply means you need to worry about a very different architecture and infrastructure. But fundamentals haven’t become unimportant all of a sudden.
The benefits of these new technologies can be significant – if one understands and adopts them properly.
Call to Action: Stay Sharp and Don’t Take Stuff For Granted!
Embrace new technologies but don’t forget what more established technologies have taught us.
Don’t assume new technologies are handling the fundamentals in an acceptable way – verify.
Failing to do so will expose you to unnecessary and (usually) avoidable risk.