What’s Worse than Cloud Service Provider Lock-in?


“If you are running your IT systems in a traditional private data center, you are already locked-in, and not in a good way.”

I was recently in a job interview before a panel of IT industry analysts, defending my positions in a hastily-written research note about the cloud service provider business.  During an interaction with one of the tormentors… I mean, interviewers… the topic of cloud vendor lock-in surfaced in the context of customer retention and competitive opportunities.

It is no secret that detractors of public cloud keep “lock-in” as a go-to arrow in their quiver, right alongside “insecurity.”  But that doesn’t mean it is manufactured FUD (Fear, Uncertainty and Doubt).  Cloud lock-in is very real, and organizations adopting cloud technologies, especially public cloud, must have a relevant strategy for addressing it.

But is cloud lock-in necessarily “bad?”

CSPs continue to build out their stacks of services into higher layers of abstraction: databases, data analytics, even gaming platforms are built and maintained by the provider, freeing customers to focus on the application-specific logic and presentation details that meet their business needs.  The result is a high-velocity, nimble IT execution environment that lets those customers respond quickly and effectively.  In short: they derive considerable value from being “locked-in.”  But if they later have reason to abandon their current CSP for another and haven’t planned for that possibility, disentanglement may prove intractable.

Yes, there are ways to plan for and enable easier extraction.  Technologies like containers provide better isolation from the cloud infrastructure and make it possible to “bring your own” services with you rather than depend on the platform’s.  The challenge is knowing where to draw the line in just how much to “abstract away” the service: you could end up spending so much time insulating your application from the implementation details of a service that you lose the aforementioned benefits.  That could be worse than being directly “locked-in,” particularly if your competitors accept the lock-in and get to market faster.

But a new thought was introduced during my interview, and I wish I could credit the person who handed it to me in the form of a question: Aren’t organizations that choose to remain in their private data centers even more “locked-in?”

This is a profound question, and it underscores the fundamental shift in thinking that must occur if organizations are to successfully survive the shift to cloud-based IT.  If you are running your IT systems in a traditional private data center, you are already locked-in, and not in a good way.  You’re much more isolated from the riches of the Internet, unable to take advantage of economies of scale, forced to re-implement otherwise common services for yourself, burdened with highly customized infrastructure and attendant support systems that will get more expensive to maintain over time as others abandon the model.

Privacy and control underpin the rationale that continues to feed this model, and although privacy is a good reason, control – specifically, the ability to customize infrastructure to the nth degree – is not.  CSPs build and maintain infrastructure faster, cheaper, and with higher rates of successful outcomes.  IT departments that see cloud as a mere extension of their data center – one that must conform to the data center’s operational paradigms and disrupt as little as possible the routines and processes they have developed over the years – will be at a major competitive disadvantage as their more prescient and capable competitors adopt DevOps and cloud models to drive their businesses at higher velocity.  Private cloud, when done properly, is a good intermediary solution that provides the control and privacy while bringing many (but certainly not all) of the benefits of public cloud.  A hybrid cloud solution that marries public and private seems to offer the best compromise.

Oh, the interview? Apparently they liked my answers enough to offer me a position.  So for now, this is the last real cloud-related technical blogging you’ll see from me outside of the company’s official channels.  After all, one must not give away the store: all product must be secured!

That is… locked-up. :)


Outdated Best Practices: How Experienced Programmers May Fail at Building Cloud Systems


I’m a big believer in formal education because of the importance of understanding “first principles” – the foundational elements upon which engineers design and build things.  The strengths and weaknesses of any such product are direct results of the designer’s understanding, or lack thereof, of immutable truths endemic to their discipline.  Software engineering is no exception: a disciplined, experienced software engineer most certainly includes first principles in the set of precepts he or she brings to the job.  Unfortunately, it’s that very experience that can become a problem when building cloud-based systems.

An expert can often confuse a first principle with an assumption rooted in a long-standing status quo.  Cloud software design patterns differ in distinct ways from historical patterns because compute resources are more plentiful and more rapidly acquired than in the past.  Here are three deeply ingrained best practices, built on assumptions that have served us well but that do not pertain to cloud software systems:

Fallacy #1 – Memory is precious

Only 30 years ago, a megabyte of RAM was considered HUGE, and a 100MB disk was something one only found in corporate data centers.  Now one is hard pressed to find anything smaller than 5GB even in consumer-grade USB solid-state drives.  Even so, most seasoned programmers (I am guilty) approach new tasks with austerity and utmost efficiency as guiding principles.  Many of us are conditioned to choose data structures and their attendant algorithms to balance speed against memory consumption.  B-Trees are a great example, with trees-on-disk and complicated indexing schemes forming the backbone of relational database management systems.  Boyce-Codd and Third Normal Forms are as much about minimizing data duplication as they are about ensuring data consistency, which can be achieved in other ways.

In the cloud, memory (both RAM and disk) is not only cheap, it is dynamically available and transient – therefore, not a long-term investment.  This allows a programmer to choose data structures and algorithms that sacrifice space for speed and scalability.  It also explains the rise of “NoSQL” database systems, where simple indexing and distribution schemes (like sharding) are the rule, and one is not afraid to replicate data as many times as necessary to avoid lengthy future lookups.
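To make the space-for-speed trade concrete, here is a minimal sketch (my own illustration, not drawn from any particular database): rather than keeping the data normalized and paying for a join on every read, the customer name is deliberately duplicated into a precomputed, per-customer index.

```python
# Illustrative only: trading memory for lookup speed, NoSQL-style.

# "Normalized" form: each customer name is stored exactly once.
customers = {1: "Alice", 2: "Bob"}
orders = [
    {"order_id": 100, "customer_id": 1, "total": 25.00},
    {"order_id": 101, "customer_id": 2, "total": 40.00},
    {"order_id": 102, "customer_id": 1, "total": 15.50},
]

# Cloud-style denormalization: replicate the customer name into every order
# document and build a per-customer index, spending memory freely now to
# avoid "join" work on every future read.
orders_by_customer = {}
for order in orders:
    doc = dict(order, customer_name=customers[order["customer_id"]])
    orders_by_customer.setdefault(doc["customer_name"], []).append(doc)

# O(1) access to everything about Alice's orders, no join required.
print(orders_by_customer["Alice"])
```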

Fallacy #2 – Server-side is more powerful than client-side

For years, the “big iron” was in the data center, and only anemic clients were attached to it – this included the desktop PCs running on Intel x86 platforms.  Designers of client/server systems made great use of server-side resources to perform tasks that were specific to a client, such as pre-rendering a chart as a graphic, or persisting and managing the state of a client during a multi-transactional protocol.

In building cloud software, clients should take on as much responsibility for computation as possible, given the constraints of bandwidth between client and server.  Squandering underutilized, distributed client-side resources is a missed opportunity to increase the speed and scalability of the back-end.  The server ideally performs only the tasks that it alone can perform.  Consider file upload (client to server).  Traditionally, the server maintains elaborate state information about each client’s upload-in-progress, ensuring that the client sends all blocks, that no blocks are corrupted, and so on.  In the new model, it is up to the client to make sure it sends all blocks, and to request a checksum from the server if it thinks one is necessary.  The server simply validates access and places reasonable limits on the total size of the upload.
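Here is a rough sketch of what that client-side responsibility could look like; the endpoint paths, chunk size, and checksum exchange are hypothetical, not any real service’s API.

```python
# Illustrative sketch: the client owns the bookkeeping for a chunked upload
# and decides whether a final integrity check is worth the round trip.
import hashlib

import requests

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MB (arbitrary choice for the example)

def upload(path: str, base_url: str) -> bool:
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        index = 0
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            sha.update(chunk)
            # The client tracks which block it is sending; the server only
            # validates access and enforces a total-size limit.
            resp = requests.put(f"{base_url}/blocks/{index}", data=chunk)
            resp.raise_for_status()
            index += 1
    # Client-initiated verification, rather than server-maintained state.
    server_sum = requests.get(f"{base_url}/checksum").json()["sha256"]
    return server_sum == sha.hexdigest()
```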

Fallacy #3 – Good security is like an onion

The “onion” model for security places the IT “crown jewels” – such as mission-critical applications and data – inside multiple layers of defenses, starting with the corporate firewall on the outside.  The inner “rings” comprise more software and hardware solutions, such as identity and access management systems, eventually leading to the operating system, middleware, and execution runtime (such as a Java virtual machine) policies for an application.  The problem with defense-in-depth is that software designers often do not really understand it, and assume that if their code runs deep inside these rings, they need not consider attacks from which they mistakenly believe the “outer layers” protect them.  True defense-in-depth requires that they consider those attacks and provide a backup defense in their code.  (That’s real “depth.”)

In the cloud era, all software should be written as if it is going to run in the public cloud, even if it is not.  There is certainly a set of absolutes that a designer must assume are being provided securely by the service provider – isolation from other tenants, enforcement of configured firewall rules, and consistent Internet connectivity of the container are the basics.  Beyond those, however, nothing should be taken for granted.  That’s not just good security design for cloud software – it’s good security design for any software.  Although this may seem like more work for the programmer, it results in more portable code, equipped to survive in both benign and hostile environments, yielding maximum flexibility in choosing deployment targets.
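As a small illustration of that mindset (my own sketch, not a prescription), here is a request handler that re-checks what the outer layers are often assumed to guarantee: authentication, payload size, and payload shape.

```python
# Illustrative only: the handler defends itself even if it "knows" it sits
# behind a firewall, gateway, or identity system.
import hmac
import json

MAX_BODY_BYTES = 64 * 1024  # arbitrary cap for the example

def handle_request(body: bytes, presented_token: str, expected_token: str) -> dict:
    # Don't assume the gateway already authenticated the caller.
    if not hmac.compare_digest(presented_token, expected_token):
        return {"status": 401, "error": "unauthorized"}
    # Don't assume a proxy already capped the payload size.
    if len(body) > MAX_BODY_BYTES:
        return {"status": 413, "error": "payload too large"}
    # Don't assume the payload is well formed just because it got this far.
    try:
        doc = json.loads(body)
    except ValueError:
        return {"status": 400, "error": "malformed JSON"}
    if not isinstance(doc, dict) or "item_id" not in doc:
        return {"status": 400, "error": "missing item_id"}
    return {"status": 200, "item_id": str(doc["item_id"])}
```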

Experience is still valuable

None of the above is meant to imply that experienced software hands should not be considered in producing cloud-based systems.  In fact, their experience could be crucial in helping development teams avoid common pitfalls that exist both in and outside of the cloud.  As I said earlier, knowing and adhering to first principles is always important! If you’re one of those seasoned devs, just keep in mind that some things have changed in the software engineering landscape where cloud is concerned. A little adjustment to your own internal programming should be all that is required to further enhance your value to any team.


Approaching the IT Transformation Barrier (7)


“New use cases and their subsequent solutions are like a feedback loop that creates a gravity well of chaotic innovation.”

Part 7 – Panning for new Use Cases

This is the last of a seven-part series considering the ongoing transformation of information technology as a result of the six independent yet highly synergistic evolutionary threads, shown in the diagram below.  We’ve gone full circle, talking about the theme of composability in hardware and software, the role of cloud and software-defined networking, the impact of the triumvirate of containers, the API economy, and microservices.  Unlike these five threads, which are focused on the “How?” of the technology itself, this final installment focuses on the “Why?”

[Figure: RealITXform – Use Cases]

An interesting point about all of these technology threads: they’ve been in motion, plotting their respective trajectories, for many decades.  The ideas that are just now finding practical, widespread application were conceived before many of the programmers now writing the production code were born.  One of the best examples is mobile devices that have uninterrupted Internet connectivity.  Another is the fully distributed operating system, in which the location of computer resources and data becomes almost irrelevant to the end user.  Academic papers treating these topics can be found at least as early as the 1980s, with Tanenbaum and van Renesse’s ACM Computing Surveys paper on distributed operating systems, which appeared in 1985, serving as a good summary.  Once these capabilities move from science fiction to science fact, new fields of innovation are unlocked, and use cases heretofore not even considered begin to come into focus.

Not only have new use cases drawn the threads of evolving technology into a tapestry of increasing value from IT, they also create demand for new innovations by highlighting gaps in the current landscape.  Just a little additional creativity and extra work can fill in these cracks to deliver truly mature solutions to problems that were never even known to be problems, because thinking of them as problems was too far outside our normal way of life.  At the heart of the disruption are the abilities to coordinate resources, influence the actions of individuals, and transfer funds friction-free in microseconds – all on a massively global scale.  Many aspects of life on Earth have been curtailed by assumptions that such synchronization and fiscal fluidity are not merely impossible today, but never could be possible.  The cracks in those assumptions are now beginning to weaken the status quo.

The “sharing economy” is a good example of a class of stressor use cases.  Business models adopted by such companies as Uber, Lyft, HomeAway and Airbnb approach problems in ways that would have been unthinkable 20 years ago: by crowd-sourcing resources and intelligently brokering them in a matter of seconds.  Only now, with the ubiquitous Internet, distributed cloud, and mobile connected devices, can such a scheme be implemented on the massive scale required to make it workable.  But in addition to solving previously unstated problems, these solutions also challenge the very assumptions upon which business models and legal systems have been based for centuries.  Case in point: the assumption that taxi companies are the only entities capable of providing privately run public transit, and that regulating taxi companies therefore solves the problem of regulating privately run public transit.  A similar assumption exists for hotels.  Not only do the sharing economy and the ensuing IT transformation break these long-standing assumptions, they also create new opportunities for innovating on top of them, finding new ways to create and distribute wealth, and achieving better self-governance.

New use cases and their subsequent solutions are like a feedback loop that creates a gravity well of chaotic innovation.  It pulls technology forward, and the center of that gravity well gets larger with each new contribution.  That pull accelerates things, like innovation, and destroys things, like outmoded models and invalid assumptions.  Such a mix is ripe with opportunity!  It’s no wonder that the modern template for success is the tech startup, the new California Gold Rush to find the next killer app and new use case, with barriers to entry so low that over half a million new ventures are launched each year.  There is no doubt that such a dynamic, evolving environment of collaboration, combined with creative thought from those who have not been trained in the old ways, will produce new solutions for a host of other problems humanity has, until now, simply assumed could never be otherwise.

The question of whether the human organism or its social and political constructs can accommodate the pace of change we are foisting upon them at such a global scale is still unanswered, as is the uncertainty surrounding the safety of the massively complex and interconnected systems we are fashioning.  It doesn’t matter, though, because the laws of gravity must be obeyed: the opportunity is too big, its draw inexorable, and the rewards potentially too great. The tracks have been laid over many decades crossing a continent of human technical evolution, transporting anyone who has a reasonable credit limit and the adventuresome spirit of entrepreneurship to their own Sutter’s Mill, a keyboard in their backpack with which to pan for the new gold.

 


Approaching the IT Transformation Barrier (6)


“The common-language requirement is the chief problem microservices solve.”

Part 6 – Episode IV: A New OOP

Our series exploring the six independent yet related threads that are converging to drive dramatic transformation in information technology is entering the home stretch today.  I’ve been visiting each of these threads in turn, as shown in the diagram below.  Each has been evolving on its own, rooted in decades of research, innovation, and trial-and-error.  They are coming together in the second decade of the 21st century to revolutionize IT on a scale not seen since the 20th.

[Figure: RealITXform – Microservices]

Today, the wheel turns to “microservices.”

Most definitions describe microservices as an architectural concept in which complex software systems are built from interconnected processes; the interconnection is typically facilitated by Internet Protocol transport and is “language agnostic.”  As with everything else in the above wheel, microservices are nothing more than the natural evolution of software architecture as it changes to take advantage of the ubiquitous Internet, in both senses of its geographic reach and the adoption of its core technologies.

I like to think of microservices as the natural evolution of Object Oriented Programming (OOP).  OOP models software components as objects.  Objects are typically instances of classes, a class simply being a pattern or blueprint from which instances are built.  An object’s interface is the set of externally visible methods and attributes that other objects can use to interact with this object.  A simple example is class Point, which has attributes X and Y to represent the point’s coordinates in a Cartesian plane, and methods such as move() to alter those coordinates.
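Sketched here in Python for brevity (any object-oriented language would do), the Point example might look like this:

```python
# The Point example from the text, rendered as a small Python class.
class Point:
    def __init__(self, x: float, y: float):
        self.x = x          # attribute X
        self.y = y          # attribute Y

    def move(self, dx: float, dy: float) -> None:
        """Alter the coordinates, part of the object's external interface."""
        self.x += dx
        self.y += dy

p = Point(1.0, 2.0)    # an instance built from the class "blueprint"
p.move(3.0, -1.0)      # other code interacts with p only through its interface
print(p.x, p.y)        # 4.0 1.0
```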

Classes are typically defined using languages such as Java.  Indeed, most modern languages have an object concept, though their implementations differ. After defining the classes, one writes executable code to create object instances, and those instances themselves contain code in their various methods to interact with other objects: creating new objects, destroying them, invoking them to do work of some kind – all of which amount to a coordinated effort to solve a problem.

Class libraries are pre-written collections of classes that solve common problems. Most languages come with built-in libraries for things like basic data structures (generic queues, lists, trees, and hash maps).  Open source and the Internet have created a rich but chaotic sea of class libraries for literally hundreds of different languages that solve even more specific problems, such as an e-commerce catalog or a discussion board for social media applications.

Until recently, if you wanted to build an application that used pre-built libraries from disparate sources, you chose a language in which an implementation of each was available:  the easy way (there are hard ways, of course) to have the code work together – to have objects from one library see and use the interfaces of objects from the other libraries – was to use the underlying language’s invocation mechanism.  This common-language requirement is the chief problem microservices solve.

A microservice is really nothing more than an object whose interface is available as a network-addressable target (a URL) and is not “written” in any specific language.  The main difference is that the interface between these objects is the network, not a memory space such as a call stack or a heap.

Every interface needs two things to be successful: 1) marshaling, which is the packaging up and later unpacking of a payload of information, and 2) the transmission of that payload from object A to object B. Interfaces inside of an execution environment achieve both of these things by simply pointing to data in memory, and the programming language provides an abstraction to make sure this pointing is done safely and consistently.

Microservices are loosely coupled – they don’t share a memory space, so they can’t marshal and transmit data by simply pointing to it.  Instead, the data has to be packaged up into a network packet and sent across the Internet to the receiver, where it is unpacked and used in a manner the caller expects.  The Hypertext Transfer Protocol (HTTP) is the ubiquitous standard defining connections and transfers of payloads across the World Wide Web, atop the global Internet.  When used in a particular way that adheres to the semantics of web objects, HTTP can be used to implement RESTful interfaces.  The definition of REST and why it is important is beyond the scope of this article, but there are good reasons to adhere to it.

HTTP does not dictate the structure (marshaling) of the payload.  Although XML (eXtensible Markup Language) and JSON (JavaScript Object Notation) give us ways to represent just about any data in plain text, they require additional logic to build a functioning interface.  One such standard, the Simple Object Access Protocol (SOAP), is exactly that – it defines how to use XML to marshal objects for transmission.  However, many people see SOAP as too complicated, so they use their own flavor of XML or JSON to build their payloads.  If there is a fly in the microservices ointment, it has to do with this aspect of undefined interfaces, and the lack of a widely adopted protocol for service definition, discovery, and access.
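Pulling those pieces together, here is a minimal sketch of the earlier Point object recast as a microservice, using Flask for the HTTP plumbing and a hand-rolled JSON payload for marshaling.  The route names and payload fields are my own choices for illustration, not any standard.

```python
# Illustrative only: the Point object's interface becomes a network-addressable
# target, with JSON doing the marshaling instead of a shared memory space.
from flask import Flask, jsonify, request

app = Flask(__name__)
point = {"x": 1.0, "y": 2.0}   # the object's state now lives behind the service

@app.route("/point", methods=["GET"])
def get_point():
    return jsonify(point)               # marshal: package the state as JSON

@app.route("/point/move", methods=["POST"])
def move_point():
    payload = request.get_json()        # unmarshal the caller's payload
    point["x"] += float(payload.get("dx", 0))
    point["y"] += float(payload.get("dy", 0))
    return jsonify(point)

if __name__ == "__main__":
    app.run(port=8080)   # any language, anywhere on the network, can now call it
```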

Even so, the use of the microservices model is exploding, and we’re seeing more hyper-reuse of code and services to build software. When “shopping” for a service, application developers typically follow this workflow:

  1. If a service already exists, we simply pay for it (if required) and knit it into our application.
  2. Else, we need to create an instance of the service, so we “construct” an instance on our favorite cloud provider by spawning a Docker container from an image from a Docker repository, like Docker Hub.

It’s a new age of software architecture.  For the purposes of this article (and because it supports the contrived title), I’m going to call it the Fourth Age, and compare it to the previous three as shown in the diagram below.  (This is a related but different concept from the familiar “programming language generations.”)

[Figure: The Four Ages of Software Architecture]

The First Age of Software Architecture was characterized by a lack of abstraction. Setting binary machine language aside as something only a masochist codes directly, assembly language provided a convenient shorthand, but very little else in the way of supporting problem solving.  The Second Age added procedural languages and data structure abstractions; however, languages were designed with an emphasis on general purpose code execution, and not the support of best practices in creating and operating upon data structures.  The Third Age was the OOP age, where languages evolved to support a more regimented, system-supported means of curating data within a program.  However, those structures were limited to software written in a specific language; interfacing with foreign code was not addressed.

The Fourth Age is the ability to combine code drawn from multiple sources and languages to create a solution.  Microservices are only the latest and most unifying attempt at doing so.  Previous attempts include Microsoft’s Component Object Model (COM) and the Object Management Group’s Common Object Request Broker Architecture (CORBA).  There are many others.  The point is, until we saw things like HTTP/REST, Docker containers, cheap cloud hosting services, and freely available public repositories like Docker Hub become commonplace, the Fourth Age couldn’t reach maturity – it was held back in many ways that microservices now have the opportunity to transcend.

So it is the age of the microservice, and the transformation of IT as we know it is reaching the peak of its powers.  If train for this new era you will, clear your mind you must, and changes accept.

May the Fourth be with you.

 


Approaching the IT Transformation Barrier (5)


“Cracking the code of how to best implement and monetize APIs is crucial to success in the emerging API economy.”

Part 5 – The Planet of the APIs

After a brief holiday hiatus, we’re back to inspect the many facets of the transformation that is taking information technology from a traditional, private, do-it-yourself, isolated model to a modern, public, outsourced, cooperative model.  These facets, or threads as I have been calling them, have led us to a place where, in combination, they have propelled IT beyond a line of demarcation between the old and new ways of doing things.  Here again is the overview graphic I’ve been using to guide our exploration, with today’s focus being on the API Economy.

[Figure: RealITXform – The API Economy]

We started our tour in the lower left of the above diagram, talking about the composability of hardware and software systems, followed by the cloud model that has been realized through achievements in software defined networking, and more recently in applying lightweight virtualization to achieve uniform packaging and distribution of workloads.  These are all “nuts and bolts” things – the gears inside the big machine that give us capabilities. But capabilities to do what, exactly? What problems do they help us solve, and in what ways are those solutions superior to the ones that came before?

Throughout our discussion, the terms dynamic, reusable, shared, distributed, and lightweight have appeared consistently and repeatedly.  Clearly, the difference between this “new” way of doing IT and the “old” way of doing it includes these characteristics, and they are present not only at the lower level where the technology components reside, but also in the higher levels of the stack where those technologies are harnessed to solve business problems.  This is where the “API Economy” thread enters the discussion, because it is concerned with how we use these new building blocks to achieve better outcomes in problem solving – specifically in enabling organizations to do business on the Internet.

Application Programming Interfaces, as with most of the other things on the big wheel above, are not new. However, when combined with recent strides and innovations in connectivity technologies, they become a focal point in the ongoing transformation.  Decades of work have gone into understanding how to build complex software systems.  A key tenet of enterprise software design is to decompose a problem into smaller sub-problems, then solve those sub-problems in a way that their respective solutions can be later combined to solve the larger problem. This also allows us the opportunity to identify code that can be shared across the same solution.  For example, if we need to sort several tables of numbers, we write the sorting code only once, and design the solution to call that code with different tables as needed.  This “divide and conquer” method is shown at the far left of the figure below.

[Figure: Evolution of Code Reusability]

But a valuable consequence of this approach is the creation of code that can be shared outside of the initial solution for which it was conceived.  If we create an online catalog for a book store as part of its overall IT system, we can likely reuse that component for a grocery store, provided we built the component with re-use in mind.  Historically, such components are kept in libraries of either source or object code that can be distributed and shared between developers who later compile or link them into larger solutions as needed, as shown in the middle of the figure above.  This is only possible if the entry points into the reusable code module are well documented, and the module itself has been designed with easy re-use in mind.  APIs help in both of these matters, by providing consistent, well-documented entry points (typically, subroutine invocations) and ensuring that the module is self-sufficient (isolated), relying only on the information it receives through its API to set client-specific contexts.

This method of sharing, which is essentially the copy/paste model, is not very efficient.  When changes are made to the master source code for the shared component, the changes must be propagated to all instances of the component.  For traditional digital media, this is extremely cumbersome.  Online repositories improve the experience somewhat. Still, it is a matter of transporting and updating code instead of transporting and operating on data.

Until recently, this was the best we could hope for, but with the high-speed ubiquitous Internet, and with universally adopted standards such as the Hypertext Transfer Protocol (HTTP) to carry API payloads, we have finally achieved a more dynamic and lightweight means of sharing code, as shown at the far right of the figure above.  Instead of copying and replicating the code, we extend Internet access to running instances of it.  Changes to the code can be rolled into the service and be instantly in use, without users of it needing to make updates.  API management is also evolving to assume this model of reuse: by assigning version control to the API itself, we can evolve the service over time while ensuring that clients always connect to a version that offers the features and semantic behaviors they are expecting.
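Here is a small sketch of what consuming such a versioned, network-resident component looks like from the client side; the URL, parameters, and response fields are hypothetical.

```python
# Illustrative only: transport the data, not the code. The client pins a
# version of the interface rather than embedding its own copy of a library.
import requests

CATALOG_API = "https://api.example.com/catalog/v1"   # the version lives in the URL

def find_books(author: str) -> list:
    resp = requests.get(
        f"{CATALOG_API}/items",
        params={"type": "book", "author": author},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["items"]

# Improvements rolled into v1 behind the same contract reach this client
# immediately; a breaking change would ship as /catalog/v2 instead.
```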

So that’s the “how” part of it.  But “why” is this important?

The new use cases we will discuss at the conclusion of this series are born of opportunity brought about by global hyper-connectivity, device mobility, and large-scale distribution of data.  Big Data Analytics, the Internet of Things, taking full advantage of social media, and similar cases require that we be able to connect data to processes within fractions of a second, regardless of their locations.  To be competitive in this environment, and certainly to take advantage of its opportunities, businesses must be able to “hook in” to the data stream in order to provide their own unique added value.  APIs, and the services backing them, are the technical mechanism for achieving this.  Any organization wishing to participate in the emerging API economy – where data is the currency, and the ability to transport and transform that currency into ever more valuable forms is the value added – must have a well-defined, easy-to-use API with a dependable, scalable service behind it.  This becomes a building block that developers can acquire in building their applications, which gives the vendor behind the API more clout and relevance in the ecosystem.

Although the technology for creating and managing these APIs and services is largely ready, there is still much experimentation to be performed regarding how to extract revenue from these constructs.  The easiest to understand is basic pay-for-access; but in a world where most services are “free,” potential customers will likely be offended at being charged just to walk through the front door of an API, so we have variants: limited use with paid upgrade, trial periods, personal data as payment, and so on.

Although Darwinian-type evolution was not the cause of apes ascending to dominate the Earth in “The Planet of the Apes” (it was a bootstrap paradox, for those who really worry about such details), their ascendance was still due to mankind’s inability to evolve to meet the threats and capitalize on the opportunities that a changing environment presented. Cracking the code of how to best implement and monetize APIs is crucial to success in the emerging API economy. We still don’t know which variations will prove to be the dominant species of API interaction models.  Only one thing is certain: those who wait to find out won’t be around to benefit from the knowledge: They will be extinct.


Approaching the IT Transformation Barrier (4)


 

“…the direct-to-consumer aspect of the container model is at the heart of its transformative capabilities.”

Part 4 – The Tupperware Party

This is the fourth installment of a multi-part series relating six largely independent yet convergent evolutionary industry threads to one big IT revolution – a major Transformation.  The first installment gives an overview of the diagram below, but today’s excursion considers the thread at the top of the diagram because, at least for me, it’s the most recent to join the party, and the catalyst that seems to be accelerating monumental change.

[Figure: RealITXform – Containers]

Why? The emergence of Docker as a technology darling has been atypical: Rarely does a startup see the kind of broad industry acknowledgement, if not outright support, it has garnered.  For proof, look to the list of signatories for the Linux Foundation’s Open Container Initiative. I am hard pressed to find anyone who is anyone missing from that list: Amazon, Google, Microsoft, VMware, Cisco, HP, IBM, Dell, Oracle, Red Hat, Suse, …  That’s just the short list.  (Ok, Apple isn’t in there, but who would expect them to be?)

The point is, it isn’t all hype. In fact, the promise of Docker images, and the flexibility of the runtime containers into which their payloads can be deployed, offers one of the first real glimpses we’ve seen of a universal Platform as a Service.  (More on that when I talk about Microservices.) It takes us one step away from heavy x86-type virtualization of backing resources – such as CPU, disk, and memory – and closer to abstractions of things that software designers really care about, which are things like data structures, guaranteed persistence, and transactional flows.

At the risk of being overly pedagogical, I feel I should at least make a few points on “containers” and “Docker” about which many are often confused.  Understanding these points is important to reading the tea leaves of the IT industry’s future.

First, “containers” is used loosely to refer both to the image of an application in a package (like a Docker image) and to the runtime environment into which that image can be placed.  The latter is actually the more accurate definition, which is why you will see references in more technical literature to both Docker images and Docker containers.  Images are the package.  Containers are the target for their deployment.
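If it helps to see the distinction in code, here is a sketch using the Docker SDK for Python (it assumes a local Docker daemon and the docker package installed): the image is the package you pull and ship; the container is the runtime instance you create from it and throw away.

```python
# Illustrative sketch of image vs. container using the Docker SDK for Python.
import docker

client = docker.from_env()

# An *image* is the package: a read-only, shippable artifact.
image = client.images.pull("alpine", tag="latest")
print(image.tags)            # e.g. ['alpine:latest']

# A *container* is the runtime environment created from that package.
container = client.containers.run(
    "alpine:latest",
    command="echo hello from a container",
    detach=True,
)
container.wait()                      # let the short-lived workload finish
print(container.logs().decode())      # "hello from a container"
container.remove()                    # containers are disposable; the image remains
```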

To understand what’s in the package, let’s first understand what the containers, or runtime environments, are.  The container concept itself is pretty broad: think of it like an apartment in an apartment complex where a program is going to spend its active life.  Some apartments are grouped together into units, perhaps intentionally sharing things like drainage and water, or unintentionally sharing things like sounds.  The degree of isolation, and how much of it is intentional or unintentional, is dependent upon the container technology.

Hypervisor-type virtualization such as VMware vSphere (ESX), Microsoft Hyper-V, and open source KVM create containers that look like x86 computer hardware.  These containers are “heavyweight” because, to make use of them, one must install everything from the operating system “up.”  Heavyweight virtualization basically “fakes out” an operating system and its attendant applications into thinking they have their own physical server.  This approach has the merit of being very easy to understand and adopt by traditional IT shops because it fits their operational models of buying and managing servers.  But, as a method of packaging, shipping, and deploying applications, it carries a lot of overhead.

The types of containers that are making today’s headlines are commonly called “lightweight,” which is common parlance for the more accurate label “operating system-level virtualization.”  In this scenario, the operating system already exists, and the virtualization layer “fakes out” an application into thinking it has that operating system all to itself, instead of sharing it with others.  By removing the need to ship an entire operating system in the package, and by not imposing virtualization overhead on operating system intrinsics (kernel calls), a more efficient virtualization method is achieved.  The packages are potentially smaller, as are the runtime resource footprints of the containers.  It is also generally faster to create, dispose of, and modify them than an equivalent heavier-weight version.

The most well-known contemporary lightweight containers are based on Linux kernel cgroups functionality and file system unioning.  Going into the details of how they work is beyond the scope of this ideally shorter post.  More important is that Linux containers are not the only lightweight containers in the world.  In fact, they are not the only lightweight containers that Docker packaging will eventually support, which is in part why Microsoft announced a partnership with Docker earlier this year and has signed on to the Linux Foundation OCI.

Perhaps the biggest misunderstanding at this juncture is that Microsoft’s Docker play is about running Windows applications on Linux in Docker containers.  It isn’t, although that is a somewhat interesting tack.  Instead, it is about being able to use Docker’s packaging (eventually, the OCI standard) to bundle up Windows applications into the same format as Linux applications, then use the same tools and repositories to manage and distribute those packages.  Microsoft has its own equivalent of containers for Windows. It also recently announced Windows Nano Server, a JEOS (just-enough operating system) to create lightweight containers that Docker images of Windows payloads can target.

The figure below demonstrates what an ecosystem of mixed Linux and Windows containers could look like. Imagine having a repository of OCI-standard (i.e., Docker) images, the payloads of which are mixed – some being Linux apps (red hexagons), others being Windows apps (blue pentagons).  The tools for managing the packages and the repo are based on the same standard, regardless of the payload.  A person looking for an application can peruse an intermediate catalog of the repo and choose an application.  Then they press “Purchase and Deploy,” the automation finds a cloud service that expresses a container API with the qualifier that it supports the type of payload (Windows or Linux), and the instance is deployed without the consumer of the application ever knowing the type of operating system it required.

[Figure: OCI Ecosystem]

Self-service catalogs and stores aren’t new, and that isn’t the point of the example.  The point is that it becomes easier to develop and deliver software into such marketplaces without abandoning one’s operating system preference as a developer or becoming deeply entrenched in one marketplace provider’s specific implementation, and it becomes easier for the application consumer to find and use applications in more places, paying less for the resources they use and without regard for details such as the operating system.  There is a more direct relationship between software providers and consumers, reducing or even eliminating the need for a middleman like the traditional IT department.

This is a powerful concept!  It de-emphasizes the operating system on the consumption side, which forces a change in how IT departments must justify their existence.  Instead of being dependency solvers – building and maintaining a physical plant, sourcing hardware, operating system, middleware, and staff to house and take care of it all – they must become brokers of the cloud services (public or private) that automatically provide those things.  Ideally, they own the box labeled “Self-Service Catalog and Deployment Automation” in the figure, because that is the logic that both enables their customers to use cloud services easily, while the organization maintains control over what is deployed and to where.

This is a radical departure from the status quo.  When I raised the concept in a recent discussion, it was met with disbelief, and even some ridicule.  Specifically, the comment was that the demise of IT has been predicted by many people on many previous occasions, and it has yet to happen.  Although I do not predict its demise, I do foresee one of the biggest changes to that set of disciplines since the x86 server went mainstream.

If there’s anything I’ve learned in this industry, it is that the unexpected can and will happen.  The unthinkable was 100% certain to occur when viewed in hindsight, which is too late.  Tupperware (the company) is best known for defining an entire category of home storage products, notably the famous “burping” container.  But more profound, and what enabled it to best its competitors, was the development of its direct marketing methods: the similarly iconic Tupperware Party.  For cloud computing, the container as technology is certainly interesting, but it’s the bypassing of the middleman – the direct-to-consumer aspect of the model – that is really at the heart of its transformative capabilities.

Welcome to the party.


Approaching the IT Transformation Barrier (3)

 


“Cloud isn’t a technology. It’s a force of nature.”

Part 3 – The Power of Steam

I’ve previously advanced the notion that the IT industry has reached and is in the act of collectively crossing a synchronization barrier – a point where many independent threads converge to one point, and only at that point can they all move forward together with new-found synergy.  The diagram below shows these six trajectories leading us toward “Real IT Transformation.”  Taking each of the vectors in turn, I last dealt with the fundamental theme of composition in both hardware and software systems, and how it is disrupting the current status quo of monolithic development and operational patterns in both of these domains.  Advancing clockwise, we now consider the cloud and software defined networking.

[Figure: RealITXform – Cloud and SDN]

If there is one recurring element in all six threads, it’s flexibility – the ability to reshape something over and over without breaking it, and to bring a thing to bear, often in unforeseen ways, on problems both old and new.  It is a word that can be approximated by others such as agility, rapidity, elasticity, adaptability… the now quite-familiar litany of characteristics we attribute to the cloud computing model.

Flexibility is a survival trait of successful business.  The inflexible perish because they fail to adapt, to keep up, to bend and not break, and their IT systems are at once the most vulnerable as well as the most potentially empowering.  It is obvious why the promise of cloud computing has propelled the concept far beyond fad, or the latest hype curve.  Cloud isn’t a technology; it is a force of nature, just like Darwinian evolution.  For IT practitioners wishing to avoid the endangered species list, it is a philosophy with proven potential to transform not just the way we compute and do business, but the way we fundamentally think about information technology from every aspect: provider, developer, distributor, consultant, and end-user.  No one is shielded from the effects of the inevitable transformation, which some will see as a mass extinction of outdated roles and responsibilities.  (And lots of people with scary steely eyeballs who want to be worshiped as gods. Wait. Wrong storyline…)

Although this theme of flexibility is exhibited by each of the six threads, and “cloud” naturally bears the standard for it, there is something specific to the way cloud expresses this dynamism that justifies its being counted among the six vectors of change: software defined networking (SDN).  I mean this in a very general sense, as well as in the more specific marketing and technical definition recently promulgated, which I label “true SDN” in the figure below.

[Figure: SDN Layers]

For cloud computing, the invention of the Domain Name System was where SDN began. (DNS… SDN… coincidence? Of course, but we need a little gravitas in the piece at this point, contrived though it may be.) IP addresses were initially hard to move, and DNS allowed us to move logical targets by mapping them to different IP addresses over time.  The idea came fully into its own with Dynamic DNS, where servers could change IP addresses at higher frequencies without clients seeing stale resolution data as with normal DNS zone transfers and root-server updates.  Load balancers and modern routing protocols further improved the agility of the IP substrate to stay synchronized with the increasingly volatile DNS mappings.

With DNS operating at the macro-network level, SDN took another step at the micro layer with the birth of virtual local networking, born out of x86 virtualization (hypervisors).  By virtualizing the network switch and, eventually, the IP router itself, redefinition of a seemingly “physical” network – something that would previously have required moving wires from port to port – could be done purely through software control.  More recent moves to encapsulate network functions into virtual machines – Network Functions Virtualization (NFV) – continue to evolve the concept in a way that even more aptly complements end-to-end virtualization of network communication, which would be…

…the capstone – true SDN, or “SDN proper.”  It bridges the micro and macro layers (the data planes) with a software-driven control plane that configures them to work together in concert.  Standards such as OpenFlow and the ETSI NFV MANO place the emphasis on getting data from point A to point B, with the details of routing and configuration at both local and wide-area-network layers left to automation to solve.  When married with cloud computing’s notion of applications running not just “anywhere” but potentially “everywhere,” the need for this kind of abstraction is clear.  Not only is there the cloud “backend,” which could be in one data center today and another the next, but also the millions of end-point mobile devices that are guaranteed to move between tethering points many times a day, often traversing global distances in a matter of hours.  Users of these devices expect that their experience and connectivity will be unchanged whether they are in Austin or Barcelona within the same 24-hour period.  Without true SDN, these problems are extremely difficult, if not impossible, to solve at global scale.

It’s easy to blow off “cloud” as just the latest buzzword. Believe me, I’ve had my fill of “cloudwashing,” too.  But don’t be fooled into thinking that cloud is insubstantial.  Remember: hot water vapor was powerful enough to industrialize entire nations.  Cloud computing, combined with the heat of SDN, has no less significance in the revolutionary changes happening right now to IT.

 


Approaching the IT Transformation Barrier (2)


Part 2 – The Compositional Frontier

In my previous installment, I introduced the idea that the Information Technology industry is approaching, and perhaps is already beginning to cross, a major point of inflection.  I likened it to a barrier synchronization, in that six threads (shown in the graphic below) of independently evolving activity are converging at this point to create a radically new model for how business derives value from IT. In the next six parts, I’ll tackle each of the threads and explain where it came from, how it evolved, and why its role is critical in the ensuing transformation.  We’ll start in the lower left corner with “Composable Hardware and Software.”

[Figure: RealITXform – Composable Hardware and Software]

The Hardware Angle

Although hardware and software are usually treated quite differently, the past 15 years have taught us that software can replace hardware at nearly every abstraction level above the most primitive backing resources.  The success of x86 virtualization is a primary example, but it goes far beyond simply replacing or further abstracting physical resources into virtual ones.  Server consolidation was the initial use case for this technology, allowing us to sub-divide and dynamically redistribute physical resources across competing workloads.  Soon followed a long list of advantages that a software abstraction for physical resources can provide: live migration, appliance packaging, snapshotting and rollback, and replication for disaster recovery demonstrated that “software defined” meant “dynamic and flexible,” a characteristic highly valued by the business because it further enables agility and velocity.

Clearly, the ability to manage large pools of physical resources (e.g., an entire datacenter rack or even an aisle) to compose “right sized” virtual servers is highly valued.  Although hypervisor technology made this a reality, it did so with a performance penalty, and by adding an additional layer of complexity.  This has incentivized hardware manufacturers to begin providing composable hardware capabilities in the native hardware/firmware combinations that they design.

Although one could argue that this obviates hypervisor virtualization, it does not provide all the functionality that hypervisors and their attendant systems management ecosystems do.  However, when composable hardware is combined with container technology, the two do begin to erode the value of “heavy” server virtualization.  The opportunity for hypervisor vendors is to anticipate this coming change, extend their technologies and ecosystems to embrace these value shifts, and ease their users’ transition within a familiar systems management environment.  (VMware’s recent Photon effort is cut from this strategy.)

The Software Angle

Modular software is not new. The landscape is littered with many attempts, some successful, at viewing applications as constructs of modular components that can be composed in much the same way as one would build a Lego or Tinker Toy project.  Service Oriented Architecture is the aegis under which most of these efforts fall, in which the components are seen as service providers, and a framework enables the discovery and binding of them into something useful.  The figure below shows how two different applications (blue and green) comprise their own master logic (“main” routine) plus call-outs to third-party services, one of which they have in common.  Such deployments are quickly becoming commonplace and will eventually be the norm.

[Figure: Composing Two Apps]
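As a rough illustration of the figure above (the service URL and payload are hypothetical), two applications keep their own master logic but compose the same third-party service over the network, rather than each embedding its own copy of the code.

```python
# Illustrative only: the "blue" and "green" applications share one third-party
# service instead of each bundling the equivalent library.
import requests

SHARED_TAX_SERVICE = "https://tax.example.com/v1/quote"   # the common component

def blue_app_checkout(cart_total: float) -> float:
    """The blue application's own master logic."""
    tax = requests.get(
        SHARED_TAX_SERVICE,
        params={"amount": cart_total, "region": "US-TX"},
        timeout=5,
    ).json()["tax"]
    return cart_total + tax

def green_app_invoice(line_items: list) -> float:
    """The green application composes the very same service differently."""
    subtotal = sum(line_items)
    tax = requests.get(
        SHARED_TAX_SERVICE,
        params={"amount": subtotal, "region": "US-TX"},
        timeout=5,
    ).json()["tax"]
    return subtotal + tax
```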

The success of such a model depends on two things: 1) accessible, useful, ready-to-use content, and; 2) a viable, well-defined, ubiquitous framework. In other words, you need to be able to source the bricks quickly, and they should fit together pretty much on their own, without a lot of work on your part.

Until just recently, success with this model has been limited to highly controlled environments such as the large enterprise, with its resources and the fortitude to train developers and build the environment.  The frameworks are Draconian in their requirements, and the content is limited to either what you build yourself or what you pay for dearly.  It has taken the slow maturation of open source and the Internet to bring the concept to the masses, and it looks quite different from traditional SOA.  (I call it “loosey-goosey SOA.”)

In the Web 2.0 era, service oriented architecture (note the lack of capitalization) is driven much more by serendipity than a master framework.  Basic programming problems (lists, catalogs, file upload, sorts, databases, etc.) have been solved many times over, and high quality code is available for free if you know where to look.  Cheap cloud services and extensive online documentation for every technology one could contemplate using allow anyone to tinker and build.  There is no “IT department” saying “NO!” – so a younger generation of potentially naïve programmers is building things in ways that evoke great skepticism from the establishment.

But they work. They sell. They’re successful. They’re the future. And that is all that the market really cares about.

So, technical dogmatism bows to pragmatism, and the new model of compositional programming continues to gain momentum.  The “citizen programmer,” who is more of a compositional artist than a coder, gains more power, while the traditional dependency-solving IT department is diminished, replaced by automation in the cloud.  The problems of this model – its brittleness, lack of definition, and so forth – are now themselves the target of new technology development.  This is where software companies that want to stay in business will look to find their relevance, with solutions that emphasize choice of services, languages, and data persistence – because they must: developers will not tolerate being dictated to in these matters.

“…like rama lama lama ka dinga da dinga dong…”

[Figure: We Go Together]

The trend toward greater flexibility, especially through the use of compositional techniques, is present in both hardware and software domains.  That these trends complement each other should be no surprise.  They “go together.”  As we build our applications from smaller, more short-lived components, we’ll need the ability to rapidly (in seconds) source just the right amount of infrastructure to support them.  This rapid acquire/release capability is critical to achieving maximum business velocity at the lowest possible cost.

“That’s the way it should be. Wah-ooh, yeah!”

 


Approaching the IT Transformation Barrier


Part 1 – The New Steely Eyes of IT

Just like the Star Trek episode “Where No Man Has Gone Before” depicts the wondrous, perhaps frightening aspects of traversing a Big Cosmic Line, we’re likewise coming to a barrier of sorts… a barrier synchronization point, to be specific, in the Information Technology industry.  It’s a juncture where at least six things in confluence are profoundly disrupting IT usage patterns that we’ve built up over the past three decades.

I’ll dispense with the shameless co-opting of the Shrine of Nerdom, and instead turn to those of you familiar with parallel computing. I suspect you already get why I’m using the “barrier synchronization” pattern as a metaphor for this change: many independent threads run in parallel, but each arrives at a synchronization point (the “barrier”) where it must wait until all the related threads have arrived. Once that happens, they can move forward and make progress in solving the problem together.

But it’s more than just synchronization. There truly is transformation awaiting on the other side of that rendezvous. Here’s my reasoning as to why: In order for “Real IT Transformation” to occur, we’ve been waiting on six related industry threads to come together: 1) containers, 2) the API economy, 3) microservices, 4) composable hardware and software systems, 5) cloud and software defined networking, and 6) a host of use cases that place demands on these things so as to evoke transformative application of them in concert.  These threads and their convergence are shown in the figure below.

[Figure: RealITXform]

What’s interesting about these threads is that each has an origin independent of the others, yet they are all highly interdependent when it comes to maximizing their potential.

Most of these threads have been evolving on their own for decades, following paths to maturation shaped by a number of influencing vectors brought to bear by real world application.  But one point has been exerting gravity on all of them, drawing them toward an inevitable point of union where 2+2 really does equal 5. When put together, these six threads transcend themselves and we see an incredible transformation of how we think about and use “information technology.” Ready to bow to your new god, Jim?

 

The customer-facing value of this emerging pattern is in providing more choice in sourcing, developing, deploying, and managing applications, without the complex dependency solving that traditional IT departments have historically provided.  Let me say that in plain English: the system is learning how to automate the bread-and-butter tasks that complex IT departments have grown up over the years to perform.

So, let’s talk about that: Why do IT departments even exist in the first place?  The answer is that IT departments have historically solved the chain of dependency requirements that stretches from the business needing an IT solution to actually making that solution available to the business.  Their customer (the business) wants an app or piece of functionality that makes the business go. Once that need is identified, IT solves the dependencies:

  1. What application software solves the problem?
  2. What operating system and middleware does that application require?
  3. What kind of hardware, and how much of it, will be required?
  4. Where will that hardware be housed?
  5. Who will make sure it stays up and running, and that the dust is occasionally removed?
  6. Who can explain to the business how to use the solution and get value out of it?

You, no doubt, can come up with many others.  The point is, in order to make that dependency-solving task more efficient and scalable within the enterprise, IT departments and the businesses that house them have standardized on things like hardware (x86 servers, Dell vs. HP, SAN vs. DAS, etc.) and software (Windows, System Center, etc.) so that the scope of things they must become expert at is tractable. This creates constraints on choices, which counterproductively reduces the amount of value the business might get from IT.  “Sorry, we know app X is best for that function, but it runs on Windows and we’re not a Windows shop.”

In the next-generation IT scenario at the center of the figure, the dependency-solving function that the IT department has historically provided is made far less onerous for the business. This is because the maturing technologies (around the circle in the figure) automate everything from solving the dependencies to sourcing, deploying, and operating the components of the solution.  In other words, the system is learning to do what IT departments have historically done: given a business request for IT functionality, choose the best path to fulfillment.  As a result, the IT department is becoming more of a broker of services and less a dependency solver.
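No real platform works exactly this way, but a toy sketch makes the shift concrete: the business states what it needs as data, and an automated resolver, rather than a human-staffed queue, picks a fulfillment path. The service catalog, request fields, and offerings below are all hypothetical.

```python
# Toy illustration only: a business request is declared as data, and a
# resolver chooses a fulfillment path from a (hypothetical) service catalog,
# standing in for work an IT department once did by hand.
from dataclasses import dataclass

@dataclass
class Request:
    capability: str        # e.g., "crm", "analytics"
    data_sensitivity: str  # "low" or "high"

CATALOG = {
    "crm":       [{"offer": "SaaS CRM subscription",      "sensitivity": "low"},
                  {"offer": "Private-cloud CRM appliance", "sensitivity": "high"}],
    "analytics": [{"offer": "Managed data warehouse",      "sensitivity": "low"}],
}

def resolve(req: Request) -> str:
    """Choose a fulfillment path for a business request."""
    for option in CATALOG.get(req.capability, []):
        if option["sensitivity"] == req.data_sensitivity:
            return option["offer"]
    raise LookupError("No automated path; escalate to the service broker")

print(resolve(Request(capability="crm", data_sensitivity="high")))
```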

In my next installment, I’ll start going around the wheel of the figure and describing each of the threads and how it is becoming an essential part of the transformative model.  What will emerge at the end of our tour is a list of characteristics that sets the “Real IT Transformation” apart from similar failed visions that have preceded it, and why this time success of the model is inevitable.

Well, assuming a second super-god doesn’t emerge from the barrier and they kill each other.  But, let’s think positively, shall we?


A New Way to Think about Cloud Service Models (Part 3)


Part 3 – PaaS: A Spectrum of Services

In my previous post, I started developing the notion of a “universal” cloud service model that has IaaS at one end of a spectrum and SaaS at the other. But a spectrum of what?  I think it is the spectrum of services available from all cloud sources, distributed across a continuum of cloud service abstractions and the types of applications built upon them, as shown in the figure below.

[Figure: Spectrum of Applications across Cloud Computing Service Models – Copyright (C) 2015 James Craig Lowery]

This figure shows many cloud concepts in relation to each other.  The horizontal dimension of the chart represents the richness of the services consumed by applications: the further left an application or service appears in this dimension, the more generic (i.e., closer to pure infrastructure) the services being consumed.  Conversely, the further right, the more specific (i.e., more like a complete application) they are.

The vertical dimension captures the notion of the NIST IaaS/PaaS/SaaS triumvirate. The lower an app or service in this dimension, the more likely it is associated with IaaS, and the higher, the more likely SaaS.  Clearly, both the horizontal and vertical dimensions express the same concept using different terms, as emphasized by the representative applications falling along a straight line of positive slope.

In this interpretation, with the far left/bottom and the far right/top being analogous to IaaS and SaaS, respectively, PaaS is left to represent everything in between.  Large-grained examples of the types of services that would fall into a PaaS category are shown along the Application Domain Specificity axis, anchored on the left by “Generic” (IaaS) and on the right by “Specific Application” (SaaS).

Traditional Datacenter Applications, shown in the lower left of the diagram, are simply typical “heavy” stacks of operating system, middleware, and application, plus some persistent data, packaged in the form of a logical machine (usually a virtual machine). As previously mentioned, this type of application is the direct result of migrating legacy applications into the cloud using the familiar IaaS model, taking no advantage of the richer services cloud providers offer.

Moving from left to right, the next less-generic (more-specific) type of service is the first PaaS-proper service most cloud adopters will encounter: structured data persistence. Indeed, most successful IaaS vendors have naturally grown “up the stack” into PaaS by providing content-addressable storage, structured and unstructured table spaces, message queues, and the like.  At this level of abstraction, traditional datacenter applications have been refactored to use cloud-based persistence as a service, instead of managing disk files or communicating through non-network interfaces to database management systems.
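A before/after sketch shows what that refactoring typically looks like. This is a minimal illustration assuming an S3-compatible object store and the boto3 library; the bucket name, key prefix, and file path are placeholders.

```python
# Before/after sketch of refactoring local-disk persistence into a
# cloud persistence service. Assumes an S3-compatible store and boto3;
# bucket and key names are illustrative.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-app-data"   # placeholder bucket name

# Before: the application owns a disk file inside its VM.
def save_report_local(report_id: str, body: bytes) -> None:
    with open(f"/var/data/{report_id}.txt", "wb") as f:
        f.write(body)

# After: the same state lives in a managed persistence service,
# so the VM (or container) holding the code becomes disposable.
def save_report_cloud(report_id: str, body: bytes) -> None:
    s3.put_object(Bucket=BUCKET, Key=f"reports/{report_id}.txt", Body=body)

def load_report_cloud(report_id: str) -> bytes:
    return s3.get_object(Bucket=BUCKET, Key=f"reports/{report_id}.txt")["Body"].read()
```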

The third typical stage of application evolution, moving up and to the right, is the Custom Cloud Application.  At this stage, the application is written using programming patterns that conform to best-practice cloud service consumption techniques.  Not only is cloud-based persistence used, it is exclusive – no other form of persistence (storing something in a file in a VM, for example) is allowed. Although enterprise application server execution environments such as J2EE are usually incorporated into the architecture to create efficient common runtimes, it is when they are combined with network-delivered services for identity and optimization, and with programming patterns that emphasize functional idempotence, that a new breed of highly available, reliable, and scalable (even when backed by unreliable infrastructure) applications emerges.  Still, the logic comprising the core of the application is largely custom-built.
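Since “functional idempotence” does a lot of work in that sentence, here is a generic sketch of the idea, not any particular vendor’s API: a write carries a caller-supplied request ID, so a retry against flaky infrastructure cannot apply the same change twice. The in-memory dictionaries stand in for cloud persistence and are for illustration only.

```python
# Generic idempotency pattern: callers attach a request ID to each write,
# so a retry after a timeout cannot double-apply the change. The in-memory
# dicts stand in for cloud persistence and are for illustration only.
processed_requests: dict[str, str] = {}        # request_id -> prior result
account_balances: dict[str, int] = {"acct-1": 100}

def credit_account(request_id: str, account: str, amount: int) -> str:
    if request_id in processed_requests:       # duplicate delivery or retry
        return processed_requests[request_id]  # return the original result
    account_balances[account] += amount
    result = f"balance={account_balances[account]}"
    processed_requests[request_id] = result
    return result

# A network timeout prompts the client to retry with the same request ID;
# the balance is credited exactly once.
print(credit_account("req-42", "acct-1", 25))   # balance=125
print(credit_account("req-42", "acct-1", 25))   # balance=125 (no double credit)
```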

The fourth stage sees the heavy adoption of code reuse to create cloud applications.  Although the new model described in the previous paragraph still dictates the architecture, the majority of the code itself comes from elsewhere, specifically from open source.  The application programmer becomes more of a composition artist, skilled in his or her knowledge of what code already exists, how to source it, and how to integrate it with the bit of custom logic required to complete the application.

The fifth PaaS model, tantamount to SaaS, is the application composed from APIs.  This natural progression from the open-source reuse case above keeps with the theme of composition, but replaces the reusable code with access to self-contained micro-services executing in their own contexts elsewhere on the internetwork. A micro-service can be thought of as similar to an object instance in classic Object Oriented Programming (OOP), except that this “object” is maximally loosely coupled from its clients: it could be running on a different machine with a different architecture and operating system, and it could be written in any language. The only thing that matters is its interface, which is accessed via internet-based technologies.  Put more succinctly, a micro-service is a domain-constrained set of functions presented by a low-profile executing entity through an IP-based API.

An example is an inventory service that knows how to CREATE a persistent CATALOG of things, ADD things to the catalog, LIST the catalog, and DELETE items from the catalog. This is similar in concept to generic object classes in OOP.  In fact, an object wrapper class is a natural choice in some situations to mediate access to the service.  The difference is that, instead of creating an application through the composition of cooperating objects in a shared run-time, we now create applications through the composition of cooperating micro-services in a shared networking environment.
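To show how such a wrapper class might mediate access to the inventory service, here is a minimal sketch using Python’s requests library; the base URL, endpoint paths, and JSON payload shapes are hypothetical, since only the HTTP interface contract matters.

```python
# Sketch of an object wrapper mediating access to a hypothetical inventory
# micro-service. The base URL and payload shapes are illustrative.
import requests

class InventoryCatalog:
    def __init__(self, base_url: str, catalog_name: str) -> None:
        self.base_url = base_url.rstrip("/")
        # CREATE a persistent catalog on the service.
        resp = requests.post(f"{self.base_url}/catalogs", json={"name": catalog_name})
        resp.raise_for_status()
        self.catalog_id = resp.json()["id"]

    def add(self, item: dict) -> None:
        # ADD an item to the catalog.
        requests.post(f"{self.base_url}/catalogs/{self.catalog_id}/items",
                      json=item).raise_for_status()

    def list(self) -> list:
        # LIST the catalog's contents.
        resp = requests.get(f"{self.base_url}/catalogs/{self.catalog_id}/items")
        resp.raise_for_status()
        return resp.json()

    def delete(self, item_id: str) -> None:
        # DELETE an item from the catalog.
        requests.delete(
            f"{self.base_url}/catalogs/{self.catalog_id}/items/{item_id}"
        ).raise_for_status()

# Usage: compose the application from the service rather than a local object.
# catalog = InventoryCatalog("https://inventory.example.com/api", "spare-parts")
# catalog.add({"sku": "X-100", "qty": 12})
```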

One additional aspect of the figure upon which we should elaborate is the flux of qualitative values as one moves from point-to-point in this spectrum. The potential cost and ability to control the minute details of the infrastructure are maximized in the lower-left of the diagram. Clearly, if one is building atop generic infrastructure such as CPU, RAM, disk, and network interfaces, one has the most latitude (control) in how these will be used.  It should also be clear that in forgoing the large existing body of work that abstracts these generic resources into services more directly compatible with programming objects, and eschewing the benefits of shared multi-tenant architectures, one will likely pay more than is necessary to achieve the objective.  Conversely, as one gives up control and moves to the upper right of the diagram, the capability to quickly deliver solutions (i.e., applications) of immediate or near-term value becomes greater, and the programmer and operational teams are further spared many of the repetitive and mundane tasks associated with optimization and scaling.

Summary

So, that’s my take on cloud service models.  There’s really only one: PaaS. It’s the unifying concept that fits the general problem cloud can ultimately address.  But my concept of PaaS differs from the traditional notions embodied in things like Cloud Foundry, OpenShift, and the like.  Those are but a small slice of the entire spectrum, and their “platform” is far more limited in scope than the one I have in mind: the entire Internet and its plethora of services.  In a multi-cloud world, where we need the ability to use services from many sources and change our selections at any time based on our needs or their availability, this is the only definition that makes sense.
