Outdated Best Practices: How Experienced Programmers May Fail at Building Cloud Systems

 


I’m a big believer in formal education because of the importance of understanding “first principles” – the foundational elements upon which engineers design and build things. The strengths and weaknesses of any product are direct results of the designer’s understanding, or lack thereof, of the immutable truths endemic to their discipline. Software engineering is no exception: a disciplined, experienced software engineer most certainly includes first principles in the set of precepts he or she brings to the job. Unfortunately, that very experience can become a problem when approaching cloud-based systems.

An expert can often confuse a first principle with an assumption rooted in a long-standing status quo. Cloud software design patterns differ markedly from historical patterns because compute resources are more plentiful, and more rapidly acquired, than ever before. Here are three deeply ingrained best practices, based on assumptions that served us well for decades but no longer apply to cloud software systems:

Fallacy #1 – Memory is precious

Only 30 years ago, a megabyte of RAM was considered HUGE, and a 100MB disk was something found only in corporate data centers. Now one is hard pressed to find anything smaller than 5GB even among consumer-grade USB flash drives. Even so, most seasoned programmers (I am guilty) approach new tasks with austerity and utmost efficiency as guiding principles. Many of us are conditioned to choose data structures, and their attendant algorithms, to balance speed against memory consumption. B-Trees are a great example: trees-on-disk and complicated indexing schemes are the backbone of relational database management systems. Boyce-Codd and Third Normal Forms are as much about minimizing data duplication as they are about preserving data consistency, which can be achieved in other ways.

In the cloud, memory (both RAM and disk) is not only cheap, it is dynamically available and transient – not a long-term investment. This frees a programmer to choose data structures and algorithms that sacrifice space for speed and scalability. It also explains the rise of “NoSQL” database systems, where simple indexing and distribution schemes (like sharding) are the rule, and one is not afraid to replicate data as many times as necessary to avoid lengthy future lookups.
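
To make that trade-off concrete, here is a minimal sketch in Python. The record shapes and the in-memory dictionaries are hypothetical stand-ins for a document or key-value store; the point is simply that duplicating the user’s display data into each order turns a read-time join into a single key fetch.

```python
# A minimal sketch of trading space for speed via denormalization.
# The dictionaries below are hypothetical stand-ins for a key-value
# or document store of the kind used by NoSQL systems.

# Normalized (classic) layout: reading an order's user name needs a join.
users = {"u1": {"name": "Ada Lovelace", "email": "ada@example.com"}}
orders_normalized = {"o1": {"user_id": "u1", "total": 42.00}}

def order_summary_normalized(order_id):
    order = orders_normalized[order_id]
    user = users[order["user_id"]]        # second lookup -- the "join"
    return f'{user["name"]} spent {order["total"]}'

# Denormalized (cloud-friendly) layout: the user's display data is
# duplicated into every order, so a summary is a single key fetch.
orders_denormalized = {
    "o1": {"user_id": "u1", "user_name": "Ada Lovelace", "total": 42.00},
}

def order_summary_denormalized(order_id):
    order = orders_denormalized[order_id]  # one lookup, no join
    return f'{order["user_name"]} spent {order["total"]}'

print(order_summary_normalized("o1"))
print(order_summary_denormalized("o1"))
```

The price, of course, is redundant storage and the chore of propagating updates to the duplicated fields – exactly the bargain that cheap, transient cloud storage makes attractive.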

Fallacy #2 – Server-side is more powerful than client-side

For years, the “big iron” lived in the data center, with only anemic clients attached to it – including desktop PCs running on Intel x86 platforms. Designers of client/server systems made great use of server-side resources to perform tasks specific to a client, such as pre-rendering a chart as a graphic, or persisting and managing a client’s state during a multi-transactional protocol.

In building cloud software, clients should take on as much of the computation as possible, given the bandwidth constraints between client and server. Squandering underutilized, distributed client-side resources is a missed opportunity to increase the speed and scalability of the back-end. Ideally, the server performs only those tasks that it alone can perform. Consider file upload (client to server). Traditionally, the server maintains elaborate state about each client’s upload-in-progress, ensuring that the client sends all blocks, that no blocks are corrupted, and so on. In the new model, it is up to the client to make sure it sends all blocks, and to request a checksum from the server if it deems one necessary. The server simply validates access and places reasonable limits on the total size of the upload.
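
Here is a minimal sketch of that client-driven protocol in Python. The endpoints (PUT /upload/&lt;id&gt;/block/&lt;n&gt; and GET /upload/&lt;id&gt;/checksum), the session URL, and the block size are all hypothetical; the point is that the client, not the server, owns the delivery state and decides whether end-to-end verification is worth a round trip.

```python
# A minimal sketch of a client-driven upload. The server endpoints are
# hypothetical; the client tracks which blocks it has sent, and the
# server holds no per-client upload state beyond the stored blocks.
import hashlib
import requests

BLOCK_SIZE = 4 * 1024 * 1024                    # 4MB blocks (arbitrary)
BASE_URL = "https://example.com/upload/abc123"  # hypothetical session URL

def upload(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        block_num = 0
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digest.update(block)
            # Retry/resend logic would live here: delivery is the
            # client's responsibility, not the server's.
            resp = requests.put(f"{BASE_URL}/block/{block_num}", data=block)
            resp.raise_for_status()
            block_num += 1

    # Only if the client wants end-to-end assurance does it ask the
    # server to hash what it received and compare.
    server_sum = requests.get(f"{BASE_URL}/checksum").text.strip()
    if server_sum != digest.hexdigest():
        raise IOError("upload corrupted; client decides what to resend")

if __name__ == "__main__":
    upload("backup.tar")  # illustrative path
```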

Fallacy #3 – Good security is like an onion

The “onion” model for security places the IT “crown jewels” – such as mission-critical applications and data – inside multiple layers of defenses, starting with the corporate firewall on the outside. The inner “rings” comprise further software and hardware solutions, such as identity and access management systems, eventually leading to the operating system, middleware, and execution runtime (such as a Java virtual machine) policies for an application. The problem with defense-in-depth is that software designers often do not really understand it: they assume that because their code runs deep inside these rings, they need not consider attacks that the “outer layers” are (mistakenly) thought to repel. True defense-in-depth requires that they consider those attacks, and provide a backup defense in their own code. (That’s real “depth.”)
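
As a minimal sketch of that backup defense, consider a Python handler that sits behind a firewall and, supposedly, a request filter. It validates its own input against an allow-list and parameterizes its query anyway, so a bypass of the outer layers does not become an injection. The table and handler here are hypothetical; the pattern is what matters.

```python
# A minimal sketch of "real depth": the handler defends itself even if
# outer layers (firewall, WAF, gateway) supposedly screen its input.
import re
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (username TEXT, balance REAL)")
conn.execute("INSERT INTO accounts VALUES ('alice', 10.0)")

USERNAME_RE = re.compile(r"^[a-z0-9_]{1,32}$")  # allow-list, not block-list

def get_balance(username):
    # Backup defense 1: validate here, regardless of upstream filtering.
    if not USERNAME_RE.match(username):
        raise ValueError("invalid username")
    # Backup defense 2: parameterize; never build SQL by string pasting.
    row = conn.execute(
        "SELECT balance FROM accounts WHERE username = ?", (username,)
    ).fetchone()
    return row[0] if row else None

print(get_balance("alice"))        # 10.0
try:
    get_balance("x' OR '1'='1")    # rejected at the application layer
except ValueError as err:
    print("rejected:", err)
```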

In the cloud era, all software should be written as if it is going to run in the public cloud, even if it is not. There is certainly a set of absolutes that a designer must assume the service provider supplies securely – isolation from other tenants, enforcement of configured firewall rules, and consistent Internet connectivity of the container are the basics. Beyond those, however, nothing should be taken for granted. That’s not just good security design for cloud software – it’s good security design for any software. Although this may seem like more work for the programmer, it results in more portable code, equipped to survive in both benign and hostile environments and yielding maximum flexibility in choosing deployment targets.
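
One way to take nothing for granted is to assume the storage layer itself is hostile. The sketch below (using the third-party cryptography package; the key handling and blob store are purely illustrative) encrypts sensitive data before it ever leaves the process, so the same code is equally at home on a laptop, a private rack, or a public cloud.

```python
# A minimal sketch of taking nothing for granted: encrypt sensitive data
# in the application, so the storage layer only ever sees ciphertext.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# Illustrative only: in practice the key would come from a secrets
# manager or KMS, never be generated inline like this.
key = Fernet.generate_key()
cipher = Fernet(key)

def store(blob_store, name, plaintext):
    # The back-end (disk, S3, anything) never sees the plaintext.
    blob_store[name] = cipher.encrypt(plaintext)

def load(blob_store, name):
    return cipher.decrypt(blob_store[name])

backend = {}  # stand-in for any blob store
store(backend, "customer.json", b'{"ssn": "000-00-0000"}')
print(load(backend, "customer.json"))
```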

Experience is still valuable

None of the above is meant to imply that experienced software hands should not be involved in building cloud-based systems. In fact, their experience can be crucial in helping development teams avoid common pitfalls that exist both inside and outside of the cloud. As I said earlier, knowing and adhering to first principles is always important! If you’re one of those seasoned devs, just keep in mind that some things have changed in the software engineering landscape where the cloud is concerned. A little adjustment to your own internal programming should be all that is required to further enhance your value to any team.
