Approaching the IT Transformation Barrier (4)



“…the direct-to-consumer aspect of the container model is at the heart of its transformative capabilities.”

Part 4 – The Tupperware Party

This is the fourth installment of a multi-part series relating six largely independent yet convergent evolutionary industry threads to one big IT revolution – a major Transformation.  The first installment gives an overview of the diagram below, but today’s excursion considers the thread at the top of the diagram because, at least for me, it is the most recent to join the party, and the catalyst that seems to be accelerating monumental change.


Why? The emergence of Docker as a technology darling has been atypical: Rarely does a startup see the kind of broad industry acknowledgement, if not outright support, it has garnered.  For proof, look to the list of signatories for the Linux Foundation’s Open Container Initiative. I am hard-pressed to find anyone who is anyone missing from that list: Amazon, Google, Microsoft, VMware, Cisco, HP, IBM, Dell, Oracle, Red Hat, SUSE, …  That’s just the short list.  (OK, Apple isn’t in there, but who would expect them to be?)

The point is, it isn’t all hype. In fact, the promise of Docker images, and the flexibility of the runtime containers into which their payloads can be deployed, is one of the first real glimpses we’ve seen of a universal Platform as a Service.  (More on that when I talk about Microservices.) It takes us one step away from heavy x86-type virtualization of backing resources – such as CPU, disk, and memory – and closer to abstractions of the things software designers really care about: data structures, guaranteed persistence, and transactional flows.

At the risk of being overly pedagogical, I feel I should at least make a few points about “containers” and “Docker” that often cause confusion.  Understanding these points is important to reading the tea leaves of the IT industry’s future.

First, “container” is used interchangeably to refer to the packaged image of an application (like a Docker image) and to the runtime environment into which that image can be placed. The latter is actually the more accurate definition, which is why you will see references in more technical literature to both Docker images and Docker containers.  Images are the package.  Containers are the target for their deployment.
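For the code-inclined, here is a minimal sketch of that distinction using the Docker SDK for Python (the third-party docker package). It assumes a local Docker daemon is running, and the alpine image is only an example.

```python
# A minimal sketch of the image/container distinction, using the Docker SDK
# for Python (pip install docker). Assumes a local Docker daemon is running.
import docker

client = docker.from_env()

# The image is the package: a static, versioned bundle pulled from a registry.
image = client.images.pull("alpine", tag="latest")
print("Image:", image.tags)

# The container is the runtime environment created from that package.
container = client.containers.run(image, "echo hello from a container", detach=True)
container.wait()
print("Container output:", container.logs().decode().strip())
container.remove()
```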

To understand what’s in the package, let’s first understand what the containers, or runtime environments, are.  The container concept itself is pretty broad: think of it like an apartment in an apartment complex where a program is going to spend its active life.  Some apartments are grouped together into units, perhaps intentionally sharing things like drainage and water, or unintentionally sharing things like sounds.  The degree of isolation, and how much of it is intentional or unintentional, is dependent upon the container technology.

Hypervisor-type virtualization – VMware vSphere (ESX), Microsoft Hyper-V, and the open source KVM – creates containers that look like x86 computer hardware.  These containers are “heavyweight” because, to make use of them, one must install everything from the operating system “up.”  Heavyweight virtualization basically “fakes out” an operating system and its attendant applications into thinking they have their own physical server.  This approach has the merit of being very easy for traditional IT shops to understand and adopt because it fits their operational models of buying and managing servers.  But, as a method of packaging, shipping, and deploying applications, it carries a lot of overhead.

The types of containers that are making today’s headlines are commonly called “lightweight,” which is common parlance for the more accurate label “operating system-level virtualization.”  In this scenario, the operating system already exists, and the virtualization layer “fakes out” an application into thinking it has that operating system all to itself, instead of sharing it with others.  By removing the need to ship an entire operating system in the package, and by not imposing virtualization overhead on operating system intrinsics (kernel calls), a more efficient virtualization method is achieved.  The packages are potentially smaller, as are the runtime resource footprints of the containers, and it is generally faster to create, dispose of, and modify them than an equivalent heavier-weight version.
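One way to see the “shared operating system” point for yourself: a process inside a Linux container reports the host’s kernel, because there is no guest OS in between. A hedged sketch, again assuming a Linux host with Docker and the Docker SDK for Python installed:

```python
# Sketch: a Linux container reports the host's kernel version, because
# OS-level virtualization shares the kernel rather than booting a guest OS.
import platform
import docker

client = docker.from_env()

host_kernel = platform.release()
container_kernel = client.containers.run(
    "alpine", "uname -r", remove=True
).decode().strip()

print("Host kernel:     ", host_kernel)
print("Container kernel:", container_kernel)  # same kernel, different view of user space
```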

The most well-known contemporary lightweight containers are based on the Linux kernel’s cgroups functionality and file system unioning.  Going into the details of how they work is beyond the scope of this ideally shorter post.  More important is that Linux containers are not the only lightweight containers in the world. In fact, they are not the only lightweight containers that Docker packaging will eventually support, which is in part why Microsoft announced a partnership with Docker earlier this year and has signed on to the Linux Foundation OCI.

Perhaps the biggest misunderstanding at this juncture is that Microsoft’s Docker play is about running Windows applications on Linux in Docker containers.  It isn’t, although that is a somewhat interesting tack.  Instead, it is about being able to use Docker’s packaging (eventually, the OCI standard) to bundle up Windows applications into the same format as Linux applications, then use the same tools and repositories to manage and distribute those packages.  Microsoft has its own equivalent of containers for Windows. It also recently announced Windows Nano Server, a JEOS (just-enough operating system) to create lightweight containers that Docker images of Windows payloads can target.

The figure below demonstrates what an ecosystem of mixed Linux and Windows containers could look like. Imagine a repository of OCI-standard (i.e., Docker) images with mixed payloads – some Linux apps (red hexagons), others Windows apps (blue pentagons).  The tools for managing the packages and the repo are based on the same standard, regardless of the payload.  A person looking for an application can peruse an intermediate catalog of the repo and choose one.  When they press “Purchase and Deploy,” the automation finds a cloud service that exposes a container API and supports the payload type (Windows or Linux), and the instance is deployed without the consumer of the application ever knowing which operating system it required.

OCI Ecosystem
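What might the “Purchase and Deploy” automation look like behind the button? The sketch below is purely hypothetical; the catalog entries, provider names, and placement policy are all invented for illustration. The essential move is that the broker reads the OS requirement from the image’s metadata and picks a matching container service, so the consumer never has to.

```python
# Hypothetical sketch of the "Purchase and Deploy" broker logic. The catalog,
# provider registry, and placement policy are invented for illustration; the
# real point is routing on the payload's declared OS, not any specific API.

CATALOG = {
    "inventory-app": {"image": "registry.example.com/inventory:1.2", "os": "linux"},
    "payroll-app":   {"image": "registry.example.com/payroll:4.0",   "os": "windows"},
}

# Cloud services that expose a container API, keyed by the payload types they support.
PROVIDERS = {
    "linux":   ["linux-cloud-east", "linux-cloud-west"],
    "windows": ["windows-cloud-central"],
}

def purchase_and_deploy(app_name: str) -> str:
    entry = CATALOG[app_name]
    targets = PROVIDERS[entry["os"]]      # match payload OS to a capable provider
    provider = targets[0]                 # naive placement policy, good enough for a sketch
    # A real broker would call the chosen provider's container API here.
    return f"deployed {entry['image']} to {provider}"

print(purchase_and_deploy("payroll-app"))  # the consumer never sees "windows"
```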

Self-service catalogs and stores aren’t new, and that isn’t the point of the example.  The point is that it becomes easier to develop and deliver software into such marketplaces without abandoning one’s preferred operating system as a developer or becoming deeply entrenched in one marketplace provider’s specific implementation, and it is easier for the application consumer to find and use applications in more places, paying less for the resources they use and without regard for details such as the operating system.  There is a more direct relationship between software providers and consumers, reducing or even eliminating the need for a middleman like the traditional IT department.

This is a powerful concept!  It de-emphasizes the operating system on the consumption side, which forces a change in how IT departments must justify their existence.  Instead of being dependency solvers – building and maintaining a physical plant, sourcing hardware, operating systems, middleware, and the staff to house and take care of it all – they must become brokers of the cloud services (public or private) that automatically provide those things.  Ideally, they own the box labeled “Self-Service Catalog and Deployment Automation” in the figure, because that is the logic that enables their customers to use cloud services easily while the organization maintains control over what is deployed and where.

This is a radical departure from the status quo. When I raised the concept in a recent discussion, it was met with disbelief, and even some ridicule.  Specifically, the comment was that the demise of IT has been predicted by many people on many previous occasions, and it has yet to happen. Although I do not predict its demise, I do foresee one of the biggest changes to that set of disciplines since the x86 server went mainstream.

If there’s anything I’ve learned in this industry, it is that the unexpected can and will happen.  The unthinkable was 100% certain to occur when viewed in hindsight, which is too late.  Tupperware (the company) is best known for defining an entire category of home storage products, notably the famous “burping” container.  But more profound, and what enabled it to best its competitors, was the development of its direct marketing method: the similarly iconic Tupperware party.  For cloud computing, the container as technology is certainly interesting, but it’s the bypassing of the middleman – the direct-to-consumer aspect of the model – that is really at the heart of its transformative capabilities.

Welcome to the party.


Approaching the IT Transformation Barrier (3)


The Power of Steam

“Cloud isn’t a technology. It’s a force of nature.”

Part 3 – The Power of Steam

I’ve previously advanced the notion that the IT industry has reached, and is in the act of collectively crossing, a synchronization barrier – a point where many independent threads converge, and only there can they all move forward together with new-found synergy.  The diagram below shows these six trajectories leading us toward “Real IT Transformation.”  Taking each of the vectors in turn, I last dealt with the fundamental theme of composition in both hardware and software systems, and how it is disrupting the current status quo of monolithic development and operational patterns in both of those domains.  Advancing clockwise, we now consider the cloud and software defined networking.

Cloud and SDN

If there is one recurring element in all six threads, it’s flexibility – the ability to reshape something over and over without breaking it, and to bring a thing to bear in often unforeseen ways on problems both old and new.  It is a word that can be approximated by others such as agility, rapidity, elasticity, adaptability… the now quite-familiar litany of characteristics we attribute to the cloud computing model.

Flexibility is a survival trait of successful businesses.  The inflexible perish because they fail to adapt, to keep up, to bend and not break, and their IT systems are at once the most vulnerable as well as the most potentially empowering.  It is obvious why the promise of cloud computing has propelled the concept far beyond fad, or the latest hype curve.  Cloud isn’t a technology; it is a force of nature, just like Darwinian evolution. For IT practitioners wishing to avoid the endangered species list, it is a philosophy with proven potential to transform not just the way we compute and do business, but the way we fundamentally think about information technology from every aspect:  provider, developer, distributor, consultant, and end user.  No one is shielded from the effects of the inevitable transformation, which some will see as a mass extinction of outdated roles and responsibilities.  (And lots of people with scary steely eyeballs who want to be worshiped as gods. Wait. Wrong storyline…)

Although this theme of flexibility is exhibited by each of the six threads, and “cloud” naturally bears the standard for it, there is something specific to the way cloud expresses this dynamism that justifies its being counted among the six vectors of change: software defined networking (SDN).  I use the term in a very general sense, as well as in the narrower marketing and technical definition recently promulgated, which I refer to as “true SDN” in the figure below.

SDN Layers

For cloud computing, the invention of the Domain Name System was where SDN began. (DNS… SDN… coincidence? Of course, but we need a little gravitas in the piece at this point, contrived though it may be.) IP addresses were initially hard to move, and DNS allowed us to move logical targets by mapping them to different IP addresses over time.  The idea came fully into its own with Dynamic DNS, where servers could change IP addresses at higher frequencies without clients seeing stale resolution data as with normal DNS zone transfers and root-server updates.  Load balancers and modern routing protocols further improved the agility of the IP substrate to stay synchronized with the increasingly volatile DNS mappings.
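A small illustration of why DNS is the original “software defined” layer: clients bind to a name, and the address behind it can change on the operator’s schedule, governed by the record’s TTL. A sketch using the third-party dnspython package; the domain is just an example.

```python
# Sketch: resolving a logical name to whatever address currently backs it.
# Uses the third-party dnspython package (pip install dnspython); the domain
# is only an example. Re-resolving after the TTL expires is what lets the
# target move without clients ever noticing.
import dns.resolver

answer = dns.resolver.resolve("example.com", "A")
print("TTL (seconds):", answer.rrset.ttl)
for record in answer:
    print("Current address:", record.address)
```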

With DNS operating at the macro-network level, SDN took another step at the micro layer with the birth of virtual local networking, born out of x86 virtualization (hypervisors).  By virtualizing the network switch and, eventually, the IP router itself, a seemingly “physical” network that previously would have required moving wires from port to port could be redefined purely through software control. More recent moves to encapsulate network functions into virtual machines – Network Functions Virtualization (NFV) – continue to evolve the concept in a way that even more aptly complements end-to-end virtualization of network communication, which would be…

…the capstone – true SDN, or “SDN proper.” It bridges the micro and macro layers (the data planes) with a software-driven control plane that configures them to work in concert.  Standards such as OpenFlow and the ETSI NFV MANO place the emphasis on getting data from point A to point B, leaving the details of routing and configuration at both local and wide-area-network layers to automation.  When married with cloud computing’s notion of applications running not just “anywhere” but potentially “everywhere,” the need for this kind of abstraction is clear.  Not only is there the cloud “backend” that could be in one data center today and another the next, but also the millions of end-point mobile devices that are guaranteed to move between tethering points many times a day, often traversing global distances in a matter of hours.  Users of these devices expect that their experience and connectivity will be unchanged whether they are in Austin or Barcelona within the same 24-hour period.  Without true SDN, these problems are extremely difficult, if not impossible, to solve at global scale.
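Here is a toy sketch of the control-plane idea, with every name invented: the application states only “get traffic from A to B,” and a controller computes and pushes the per-switch forwarding rules that make it happen. Real controllers speak protocols such as OpenFlow; this shows only the shape of the abstraction.

```python
# Toy sketch of a software-defined control plane. The switches, topology, and
# rule format are all invented for illustration; real controllers push rules
# to data-plane devices via protocols such as OpenFlow.
from collections import deque

# Data plane: switches and the links between them (hypothetical topology).
TOPOLOGY = {
    "sw-a": ["sw-b", "sw-c"],
    "sw-b": ["sw-a", "sw-d"],
    "sw-c": ["sw-a", "sw-d"],
    "sw-d": ["sw-b", "sw-c"],
}

def shortest_path(src, dst):
    # Plain BFS; a real controller would weigh link state, load, and policy.
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in TOPOLOGY[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

def install_flow(flow_id, src, dst):
    """Control plane: turn 'get traffic from src to dst' into per-switch rules."""
    path = shortest_path(src, dst)
    for here, nxt in zip(path, path[1:]):
        # Stand-in for a rule push to the switch.
        print(f"{here}: if flow == {flow_id!r} then forward to {nxt}")

install_flow("tenant-42", "sw-a", "sw-d")
```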

It’s easy to blow off “cloud” as just the latest buzzword. Believe me, I’ve had my fill of “cloudwashing,” too.  But don’t be fooled into thinking that cloud is insubstantial.  Remember: hot water vapor was powerful enough to industrialize entire nations.  Cloud computing, combined with the heat of SDN, has no less significance in the revolutionary changes happening to IT right now.



Approaching the IT Transformation Barrier (2)


Part 2 – The Compositional Frontier

In my previous installment, I introduced the idea that the Information Technology industry is approaching, and perhaps is already beginning to cross, a major point of inflection.  I likened it to a barrier synchronization, in that six threads (shown in the graphic below) of independently evolving activity are converging at this point to create a radically new model for how business derives value from IT. In the next six parts, I’ll tackle each of the threads and explain where it came from, how it evolved, and why its role is critical in the ensuing transformation.  We’ll start in the lower left corner with “Composable Hardware and Software.”


The Hardware Angle

Although hardware and software are usually treated quite differently, the past 15 years have taught us that software can replace hardware at nearly every abstraction level above the most primitive backing resources.  The success of x86 virtualization is a primary example, but it goes far beyond simply replacing or further abstracting physical resources into virtual ones.  Server consolidation was the initial use case for this technology, allowing us to sub-divide and dynamically redistribute physical resources across competing workloads.  A long list of advantages that a software abstraction for physical resources can provide soon followed: live migration, appliance packaging, snapshotting and rollback, replication and disaster recovery.  These demonstrated that “software defined” means “dynamic and flexible,” a characteristic the business values highly because it further enables agility and velocity.

Clearly, the ability to manage large pools of physical resources (e.g., an entire datacenter rack or even an aisle) to compose “right sized” virtual servers is highly valued.  Although hypervisor technology made this a reality, it did so with a performance penalty, and by adding an additional layer of complexity.  This has incentivized hardware manufacturers to begin providing composable hardware capabilities in the native hardware/firmware combinations that they design.

Although one could argue that this obviates hypervisor virtualization, composable hardware does not by itself provide all the functionality that hypervisors and their attendant systems management ecosystems do.  However, when composable hardware is combined with container technology, the two do begin to erode the value of “heavy” server virtualization.  The opportunity for hypervisor vendors is to anticipate this coming change, extend their technologies and ecosystems to embrace these value shifts, and ease their users’ transition within a familiar systems management environment.  (VMware’s recent Photon effort is cut from this strategy.)

The Software Angle

Modular software is not new. The landscape is littered with many attempts, some successful, at viewing applications as constructs of modular components that can be composed in much the same way as one would build a Lego or Tinker Toy project.  Service Oriented Architecture is the aegis under which most of these efforts fall: the components are seen as service providers, and a framework enables their discovery and binding into something useful.  The figure below shows how two different applications (blue and green) each consist of their own master logic (a “main” routine) plus call-outs to third-party services, one of which they have in common.  Such deployments are quickly becoming commonplace and will eventually be the norm.


The success of such a model depends on two things: 1) accessible, useful, ready-to-use content; and 2) a viable, well-defined, ubiquitous framework. In other words, you need to be able to source the bricks quickly, and they should fit together pretty much on their own, without a lot of work on your part.
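In the loosely coupled version of this picture, the “bricks” are often nothing more than HTTP services. A hedged sketch using the requests library; both service URLs are invented placeholders for whatever catalog and payment services an application might actually call.

```python
# Sketch of an application's "main" logic composed from third-party services.
# Uses the requests library; the URLs are invented placeholders.
import requests

CATALOG_SERVICE = "https://catalog.example.com/api/items"
PAYMENT_SERVICE = "https://payments.example.com/api/charge"

def buy_first_item(customer_id: str) -> dict:
    # Brick 1: a shared catalog service, also used by other applications.
    items = requests.get(CATALOG_SERVICE, timeout=5).json()
    item = items[0]
    # Brick 2: a payment service; the "main" routine only wires bricks together.
    receipt = requests.post(
        PAYMENT_SERVICE,
        json={"customer": customer_id, "item": item["id"], "amount": item["price"]},
        timeout=5,
    )
    return receipt.json()
```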

Until just recently, success with this model has been limited to highly controlled environments such as the large enterprise with the resources and fortitude to train developers and build the environment.  The frameworks are Draconian in their requirements, and the content is limited to what you build yourself or pay for dearly.  It has taken the slow maturation of open source and the Internet to bring the concept to the masses, and it looks quite different from traditional SOA.  (I call it “loosey-goosey SOA.”)

In the Web 2.0 era, service oriented architecture (note the lack of capitalization) is driven much more by serendipity than a master framework.  Basic programming problems (lists, catalogs, file upload, sorts, databases, etc.) have been solved many times over, and high quality code is available for free if you know where to look.  Cheap cloud services and extensive online documentation for every technology one could contemplate using allow anyone to tinker and build.  There is no “IT department” saying “NO!” – so a younger generation of potentially naïve programmers is building things in ways that evoke great skepticism from the establishment.

But they work. They sell. They’re successful. They’re the future. And that is all that the market really cares about.

So, technical dogmatism bows to pragmatism, and the new model of compositional programming continues to gain momentum.  The “citizen programmer,” who is more of a compositional artist than a coder, gains more power while the traditional dependency-solving IT department is diminished, replaced by automation in the cloud. The problems of this model – its brittleness, lack of definition, and so forth – are now themselves the target of new technology development.  This is where software companies that want to stay in business will look to find their relevance, with solutions that emphasize choice of services, languages, and data persistence, because they must: developers will not tolerate being dictated to in these matters.

“…like rama lama lama ka dinga da dinga dong…”


The trend toward greater flexibility, especially through the use of compositional techniques, is present in both hardware and software domains.  That these trends complement each other should be no surprise.  They “go together.”  As we build our applications from smaller, more short-lived components, we’ll need the ability to rapidly (in seconds) source just the right amount of infrastructure to support them.  This rapid acquire/release capability is critical to achieving maximum business velocity at the lowest possible cost.

“That’s the way it should be. Wah-ooh, yeah!”



Approaching the IT Transformation Barrier


Part 1 – The New Steely Eyes of IT

Just like the Star Trek episode “Where No Man Has Gone Before” depicts the wondrous, perhaps frightening aspects of traversing a Big Cosmic Line, we’re coming to a barrier of sorts… a barrier synchronization point, to be specific, in the Information Technology industry.  It’s a juncture where at least six things in confluence are profoundly disrupting IT usage patterns that we’ve built up over the past three decades.

I’ll dispense with the shameless co-opting of the Shrine of Nerdom, and instead turn to those of you familiar with parallel computing. I suspect you already get why I’m using the “barrier synchronization” pattern as a metaphor for this change: Many independent threads are running in parallel, but they arrive at a synchronization point (the “barrier”) where they must wait until all the related threads have arrived. Once that happens, they can move forward and make progress in solving the problem together.
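For readers who would rather see the metaphor in code, Python’s standard library has a literal barrier: every thread blocks at wait() until the last one arrives, and only then do they all proceed together.

```python
# The metaphor, literally: threads progress independently, block at the
# barrier, and only move forward once all of them have arrived.
import random
import threading
import time

barrier = threading.Barrier(6)  # six industry threads, six parties to the barrier

def industry_thread(name: str) -> None:
    time.sleep(random.uniform(0.1, 1.0))   # each thread matures at its own pace
    print(f"{name} has arrived at the barrier")
    barrier.wait()                          # wait for the other five
    print(f"{name} moves forward, together with the rest")

threads = [
    threading.Thread(target=industry_thread, args=(n,))
    for n in ("containers", "API economy", "microservices",
              "composable systems", "cloud/SDN", "use cases")
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```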

But it’s more than just synchronization. There truly is transformation awaiting on the other side of that rendezvous. Here’s my reasoning as to why: In order for “Real IT Transformation” to occur, we’ve been waiting on six related industry threads to come together: 1) containers, 2) the API economy, 3) microservices, 4) composable hardware and software systems, 5) cloud and software defined networking, and 6) a host of use cases that place demands on these things so as to evoke transformative application of them in concert.  These threads and their convergence are shown in the figure below.


What’s interesting about these threads is that each has an origin independent of the others, yet they are all highly interdependent when it comes to maximizing their potential.

Most of these threads have been evolving on their own for decades, following paths to maturation shaped by a number of influencing vectors brought to bear by real world application.  But one point has been exerting gravity on all of them, drawing them toward an inevitable point of union where 2+2 really does equal 5. When put together, these six threads transcend themselves and we see an incredible transformation of how we think about and use “information technology.” Ready to bow to your new god, Jim?


The customer-facing value of this emerging pattern is in providing more choice in sourcing, developing, deploying, and managing applications without the complex dependency-solving that traditional IT departments have performed.  Let me say that in plain English: The system is learning how to automate the bread-and-butter tasks that complex IT departments have grown up over the years to solve.

So, let’s talk about that: Why do IT departments even exist in the first place?  The answer is that IT departments historically solve the dependency chain that stretches from the business needing an IT solution to actually making that solution available to the business.  Their customer (the business) wants an app or piece of functionality that makes the business go. Once the need is identified, IT solves the dependencies:

  1. What application software solves the problem?
  2. What operating system and middleware does that application require?
  3. What hardware will be required for all of that?
  4. How much of that hardware will be needed, and of what kind?
  5. Where will that hardware be housed?
  6. Who will make sure it stays up and running, and the dust is occasionally removed?
  7. Who can explain to the business how to use it and get value out of the solution?

You, no doubt, can come up with many others.  The point is, in order to make that dependency-solving task more efficient and scalable within the enterprise, IT departments and the businesses that house them have standardized on things like hardware (x86 servers, Dell vs. HP, SAN vs. DAS, etc.) and software (Windows, System Center, etc.) so that the scope of things they must become expert at is tractable. This creates constraints on choices, which counterproductively reduces the amount of value the business might get from IT.  “Sorry, we know app X is best for that function, but it runs on Windows and we’re not a Windows shop.”

In the next-generation IT scenario at the center of the figure, the dependency-solving function that the IT department has historically provided is now made less onerous for the business. This is because the maturing technologies (around the circle in the figure) automate everything from solving the dependencies to sourcing, deploying, and operating the components of the solution.  In other words, the system is learning to do what IT departments historically have done: given a business request for IT functionality, choose the best path to fulfillment.  As a result, the IT department is becoming more of a broker of services and less a dependency solver.

In my next installment, I’ll start going around the wheel of the figure and describing each of the threads and how it is becoming an essential part of the transformative model.  What will emerge at the end of our tour is a list of characteristics that sets the “Real IT Transformation” apart from similar failed visions that have preceded it, and why this time success of the model is inevitable.

Well, assuming a second super-god doesn’t emerge from the barrier and they kill each other.  But, let’s think positively, shall we?


A New Way to Think about Cloud Service Models (Part 3)

ServiceSpectrum Logo

Part 3 – PaaS: A Spectrum of Services

In my previous post, I started developing the notion of a “universal” cloud service model that has IaaS at one end of a spectrum and SaaS at the other. But a spectrum of what?  I think it is the spectrum of services available from all cloud sources, distributed across a continuum of cloud service abstractions and the types of applications built upon them, as shown in the figure below.

Spectrum of Applications across Cloud Computing Service Models - Copyright (C) 2015 James Craig Lowery

This figure shows many cloud concepts in relation to each other.  The horizontal dimension of the chart represents the richness of the services consumed by applications: The further left an application or service appears in this dimension, the more generic (i.e., closer to pure infrastructure) the services being consumed; the further right, the more specific (i.e., more like a complete application) they are.

The vertical dimension captures the notion of the NIST IaaS/PaaS/SaaS triumvirate. The lower an app or service in this dimension, the more likely it is associated with IaaS, and the higher, the more likely SaaS.  Clearly, both the horizontal and vertical dimensions express the same concept using different terms, as emphasized by the representative applications falling along a straight line of positive slope.

In this interpretation, with far left and bottom being analogous to IaaS, and far right and top to SaaS, PaaS is left to represent everything in between.  Large-grained examples of the types of services that would fall into a PaaS category are shown along the Application Domain Specificity axis, anchored on the left by “Generic” (IaaS) and on the right by “Specific Application” (SaaS).

Traditional Datacenter Applications, shown in the lower left of the diagram, are simply typical “heavy” stacks of operating system, middleware, and application, with some persistent data, stored in a logical physical-machine form (usually a virtual machine). As previously mentioned, this type of application is the direct result of migrating legacy applications into the cloud using the familiar IaaS model, taking no advantage of the richer services cloud providers offer.

Moving from left to right, the next less-generic (more-specific) type of service is the first PaaS-proper service most cloud adopters will encounter: structured data persistence. Indeed, most successful IaaS vendors have naturally grown “up the stack” into PaaS by providing content-addressable storage, structured and unstructured table spaces, message queues, and the like.  At this level of abstraction, traditional datacenter applications have been refactored to use cloud-based persistence as a service, instead of managing disk files or communicating through non-network interfaces to database management systems.
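The refactoring step is often as literal as it sounds: replace a local file write with a call to a persistence service. A hedged sketch using boto3 against an S3-style object store; the bucket name is an example, and credentials and region configuration are assumed to already exist.

```python
# Sketch: the same "save a report" step, before and after refactoring to use
# cloud persistence as a service. Uses boto3 (AWS SDK for Python); the bucket
# name is an example and credentials are assumed to be configured already.
import boto3

report = b"quarterly totals..."

# Before: persistence means a disk file inside the (virtual) machine.
with open("report.txt", "wb") as f:
    f.write(report)

# After: persistence is a network service call; no local disk to manage.
s3 = boto3.client("s3")
s3.put_object(Bucket="example-reports-bucket", Key="2015/q2/report.txt", Body=report)
```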

The third typical stage of application evolution, moving up and to the right, is the Custom Cloud Application.  At this stage, the application is written using programming patterns that conform to best-practice cloud service consumption techniques.  Not only is cloud-based persistence used, it is exclusive – no other form of persistence (storing something in a file in a VM, for example) is allowed. Enterprise application server execution environments such as J2EE are usually incorporated into the architecture to create efficient common runtimes, but it is when they are combined with network-delivered services for identity and optimization, and with programming patterns that emphasize functional idempotence, that a new breed of highly available, reliable, and scalable (even when backed by unreliable infrastructure) applications emerges.  Still, the logic comprising the core of the application is largely custom-built.
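“Functional idempotence” deserves a one-screen illustration: an operation the caller can safely retry when the unreliable infrastructure beneath it hiccups. The names here are hypothetical and the dictionary is just a stand-in for a persistence service; only the pattern matters.

```python
# Sketch of the idempotence pattern: the write is keyed by a caller-supplied
# identifier, so a retried request converges to the same state instead of
# creating a duplicate. The dict is a stand-in for any persistence service.

orders = {}  # stand-in for a cloud persistence service

def place_order(order_id: str, item: str, quantity: int) -> dict:
    # Keyed by order_id: running this once or five times yields the same record,
    # which makes blind retries over an unreliable network safe.
    orders[order_id] = {"item": item, "quantity": quantity, "status": "accepted"}
    return orders[order_id]

place_order("ord-123", "widget", 2)
place_order("ord-123", "widget", 2)   # client retry after a timeout: no harm done
assert len(orders) == 1
```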

The fourth stage sees the heavy adoption of code reuse to create cloud applications.  Although the new model described in the previous paragraph still dictates the architecture, the majority of the code itself comes from elsewhere, specifically from open source.  The application programmer becomes more of a composition artist, skilled in his or her knowledge of what code already exists, how to source it, and how to integrate it with the bit of custom logic required to complete the application.

The fifth PaaS model, tantamount to SaaS, is the application composed from APIs.  This natural progression from the open source reuse case above keeps with the theme of composition, but replaces the reusable code with access to self-contained micro-services executing in their own contexts elsewhere on the internetwork. A micro-service can be thought of much like an object instance in classic Object Oriented Programming (OOP), except that this “object” is maximally loosely coupled from its clients: it could be running on a different machine with a different architecture and operating system, and be written in any language. The only thing that matters is its interface, which is accessed via Internet-based technologies.  Put more succinctly, a micro-service is a domain-constrained set of functions presented by a low-profile executing entity through an IP-based API.

An example is an inventory service that knows how to CREATE a persistent CATALOG of things, ADD things to the catalog, LIST the catalog, and DELETE items from the catalog. This is similar in concept to generic object classes in OOP.  In fact, an object wrapper class is a natural choice in some situations to mediate access to the service.  The difference is that, instead of creating an application through the composition of cooperating objects in a shared run-time, we now create applications through the composition of cooperating micro-services in a shared networking environment.
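A sketch of that wrapper-class idea for the inventory example; the base URL and routes are hypothetical, and the only real coupling is the HTTP interface itself.

```python
# Sketch: an object wrapper class mediating access to a remote inventory
# micro-service. The base URL and routes are hypothetical; the point is that
# the client binds only to an IP-based API, not to a shared runtime.
import requests

class InventoryService:
    def __init__(self, base_url: str):
        self.base_url = base_url.rstrip("/")

    def create_catalog(self, name: str) -> dict:
        return requests.post(f"{self.base_url}/catalogs",
                             json={"name": name}, timeout=5).json()

    def add(self, catalog_id: str, item: dict) -> dict:
        return requests.post(f"{self.base_url}/catalogs/{catalog_id}/items",
                             json=item, timeout=5).json()

    def list(self, catalog_id: str) -> list:
        return requests.get(f"{self.base_url}/catalogs/{catalog_id}/items",
                            timeout=5).json()

    def delete(self, catalog_id: str, item_id: str) -> None:
        requests.delete(f"{self.base_url}/catalogs/{catalog_id}/items/{item_id}",
                        timeout=5)

inventory = InventoryService("https://inventory.example.com/api")
```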

One additional aspect of the figure upon which we should elaborate is the flux of qualitative values as one moves from point-to-point in this spectrum. The potential cost and ability to control the minute details of the infrastructure are maximized in the lower-left of the diagram. Clearly, if one is building atop generic infrastructure such as CPU, RAM, disk, and network interfaces, one has the most latitude (control) in how these will be used.  It should also be clear that in forgoing the large existing body of work that abstracts these generic resources into services more directly compatible with programming objects, and eschewing the benefits of shared multi-tenant architectures, one will likely pay more than is necessary to achieve the objective.  Conversely, as one gives up control and moves to the upper right of the diagram, the capability to quickly deliver solutions (i.e., applications) of immediate or near-term value becomes greater, and the programmer and operational teams are further spared many of the repetitive and mundane tasks associated with optimization and scaling.


So, that’s my take on cloud service models.  There’s really only one: PaaS. It’s the unifying concept that fits the general problem the cloud can ultimately address.  But my concept of PaaS differs from traditional notions embodied in the likes of Cloud Foundry and OpenShift.  Those are but a small slice of the entire spectrum, and their “platform” is much more limited in scope than the view of the entire Internet and its plethora of services as the platform.  In a multi-cloud world, where we need the ability to use services from many sources and change our selections at any time due to our needs or their availability, this is the only definition that makes sense.


A New Way to Think about Cloud Service Models (Part 2)

Part 2 – A Universal Service Model

StarAAS Universe

In my previous post, I introduced the idea that all cloud services are “platform as a service,” especially when we think of the entire Internet as the platform and of the various services it makes available to applications at run-time.  These services include, but are certainly not limited to, what is broadly called Infrastructure as a Service. IaaS has been extremely popular and successful with traditional IT practitioners because it fits relatively easily into their operational models.  Still, it is the most basic and least developed of the spectrum of services the net has to offer.

It is a strategic blunder to think that cloud’s ultimate value can be tapped by simply applying traditional IT datacenter concepts and patterns.  In fact, the quest to add cloud to one’s IT portfolio without undertaking significant software development and operational reform will likely lead to failure. Certainly the outsourcing of infrastructure as a service reduces risk, simplifies operations, and potentially reduces overall costs to a business if implemented properly.  But to stop there is to cheat oneself of the true riches the cloud model can unearth when fully pursued.

To understand this better, consider that the only purpose of IT operations is to provide data processing and storage that enable a business to meet its objectives:  “It’s all about apps and data.” IT becomes a drag on the business if it fails to adopt capabilities and transformative operational models that are more directly relevant to providing the application and data storage facilities the business needs.  Whereas in the past, the most directly relevant capabilities may have been providing raw compute power and storage capacity tailored to custom applications, the reality of our current environment is that the various wheels that make apps and data go have been reinvented so many times that they are easy to find and often free.  The global Internet and open source software are the primary reasons for this change.

The idea of transformation is at the heart of cloud computing’s real value.  One must go beyond IaaS to fully reap its rewards, and most have done so, perhaps unknowingly, by using the service model at the opposite end of the spectrum: “Software as a Service” or SaaS.  In this model, the service presented is the application itself, and using the service is just a matter of paying subscription fees to cover a number of users, connections, queries, etc., then accessing the application with a standard web browser or via an API.  This service model is decades old, but standing up such a service has historically been very difficult. Until recently, none but the largest organizations had the expertise, resources, and fortitude to build them.

It has been mentioned that IaaS is at one end and SaaS at the other of a spectrum. But spectrum of what?  I contend it is the spectrum of services available from all cloud sources, distributed across a continuum of cloud service abstractions and the types of applications built upon them.  I’ve created a diagram, shown in the figure below, to help facilitate a complete discussion of this concept, which I’ll tackle in my next installment. For now, take a look and start thinking about the implications of this “universal service model.”

Spectrum of Applications across Cloud Computing Service Models - Copyright (C) 2015 James Craig Lowery


A New Way to Think about Cloud Service Models (Part 1)

Part 1: IaaS != Entrée


Most discussions aimed at answering the question “What is cloud computing?” start with the three service models as defined by NIST: IaaS, PaaS, and SaaS.  At this point, I won’t offend you by assuming you need remediation in the common definitions of Infrastructure, Platform and Software as a service.  What may be more interesting is looking at these three models in a new way that casts them not as three separate approaches to consuming cloud resources, but as three aspects of the same model, which is ultimately an expanded, more inclusive definition of Platform as a Service than has previously been purveyed.

The key to successfully harnessing the cloud is to capitalize upon it for what it is, and not to force it to be something it is not.  The maturity and ubiquity of the Internet Protocol suite in conjunction with open source software and the global Internet itself have made it both possible and cost-effective to present resources such as compute, memory, persistent storage, and inter-process communication as network accessible functions.  The cloud is therefore best modeled as a set of services exposing those resources at varying levels of abstraction.  In this alternate interpretation, it is the range of richness in those services that gives us IaaS, PaaS, and SaaS.

For example, because all compute stacks have hardware at their foundation, an obvious service is one that directly exposes hardware resources in their base forms: compute as CPU, memory as RAM, persistent storage as disk, and the network as an Ethernet interface.  The model is that of a hardware-defined “machine” with said resources under the control of a built-in basic input and output system (BIOS).  This service is readily understood and quickly adopted by those who have worked in and built traditional datacenters in which server hardware is the starting point for information technology services.  The fact that the “machines” may be either virtually implemented, or implemented on “bare metal” is usually not very important.  What is important is that the machines as exposed through the service can – with few exceptions – be used just like the “real” servers with which one is already familiar.
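The IaaS service call mirrors the familiar act of racking a server, minus the racking. A hedged example using boto3 and EC2; the machine image ID is a placeholder, and account credentials and region configuration are assumed to exist.

```python
# Sketch: "give me a machine" as an API call. Uses boto3 against EC2; the
# image ID is a placeholder and credentials/region are assumed to be configured.
# The result behaves, with few exceptions, like a familiar physical server.
import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image
    InstanceType="t2.micro",           # CPU and RAM in their base, server-like form
    MinCount=1,
    MaxCount=1,
)
print("Provisioned:", response["Instances"][0]["InstanceId"])
```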

The services described above are commonly labeled “Infrastructure as a Service” because the usage model closely mimics that of traditional datacenter infrastructure.  IT professionals who come to the cloud from traditional backgrounds quickly adapt to it and see obvious ways to extend their operational models to include its use.  Indeed, the success of IaaS to date can be attributed to this familiarity.  Unfortunately, that success may lead aspiring cloud adopters to believe that this model is the epitome of cloud computing, when it in fact has the least potential for truly transforming IT.

In my next installment, I’ll explain why the IaaS model is the least interesting and transformative of the service models, why more abstract PaaS models have seen slow penetration, and the recent confluence of technologies that is finally bridging the gap from IaaS to PaaS and beyond.



Privacy: The Cloud Model’s Waterloo?

Part 2 – How Privacy Could Cripple the Cloud

Security and privacy are often mentioned together. This is natural, because they are related, yet distinct, topics.  Security is a technical concept concerned with the controlled access, protection, integrity, and availability of data and computer system functions.  Privacy is a common policy objective that can be achieved in part by applying security principles and technical solutions.  In previous posts, I’ve discussed how cloud security is not really a problem, any more than it is in IT in general, when it is approached properly in the cloud environment.  Privacy, as discussed in Part 1 of this series, is unfortunately not so “simple.”

I’ve already made the case that some commonly held beliefs about what is inherently secure or insecure are based on control of the physical infrastructure and ownership of the premises where the infrastructure is located.  I’ve further posited that these ideas are outdated, and at the very least insufficient to ensure secure cloud systems (as well as traditional data centers, for that matter).

In the discussion of privacy, we’ve seen governments urged to action by their citizenry to combat the erosion of individual control of personal information.  Unfortunately, lawmakers have approached such legislation from the outmoded perspective of physical security.  Privacy laws are rife with injunctions that personally identifiable information (PII) must only be stored or transmitted under circumstances that derive from a physical location.  The European Union, for example, forbids the PII of any EU citizen from being stored or transmitted outside of the EU. Although treaties such as the EU-US Safe Harbor Framework, established in 2000, facilitate some degree of data sharing across the EU-US boundary, they are showing signs of failure when applied to cloud-based application scenarios. Although laudable in their intent, privacy laws that depend upon containing data within a particular jurisdiction can prove to have more negative effects on both privacy outcomes and the ability of the individual being “protected” to reap the full benefit of cloud-based global services.

First, given the highly-publicized data breaches of large corporations and government entities, it is obvious that data behind a brick wall is still quite vulnerable.  Laws that mandate limits on where data can be stored may convey a false sense of security to those who think that meeting the law’s requirements results in sufficient protection.  Socially engineered attacks can penetrate all but the most highly guarded installations, and once the data has been extracted, it is impossible to “undisclose” it. Again, it is the perimeter-based model that is not up to the task of protecting data in a hyper-connected world. Second, limitations on how data can be shared and transmitted can negatively impact the owner of PII when they cannot be serviced by cloud vendors outside of their jurisdiction.  Frameworks such as Safe Harbor are band-aids, not solutions.

One solution is to endow sensitive data with innate protection, such that wherever the data goes, its protection goes with it. A container model for self-protecting data allows the owner to specify his or her intentions regarding the data’s distribution and use, regardless of its location; it is the zero-trust model applied to data.  Rather than depend on a perimeter and control of physical infrastructure to ensure privacy objectives are met, the policy is built into the data container, and only when the policy is followed will the data be made available.
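To make “the policy is built into the data container” slightly less abstract, here is a deliberately simplified sketch: the payload is encrypted, the owner’s policy travels with it, and the data is released only when the requested use satisfies that policy. It uses the cryptography package; a real system would also need remote key release, attestation, auditing, and revocation, none of which are shown.

```python
# Deliberately simplified sketch of a self-protecting data container: an
# encrypted payload plus an owner-supplied policy, with decryption gated on
# the policy. Uses the cryptography package (Fernet). Key management,
# attestation, and revocation are all omitted.
from dataclasses import dataclass
from cryptography.fernet import Fernet

@dataclass
class SealedData:
    ciphertext: bytes
    policy: dict          # e.g., allowed purposes, set by the data's owner

def seal(plaintext: bytes, policy: dict, key: bytes) -> SealedData:
    return SealedData(Fernet(key).encrypt(plaintext), policy)

def open_if_permitted(container: SealedData, request: dict, key: bytes) -> bytes:
    # The container can travel anywhere; the data is usable only when the
    # request matches the owner's stated intentions.
    if request.get("purpose") not in container.policy["allowed_purposes"]:
        raise PermissionError("use not permitted by the data owner's policy")
    return Fernet(key).decrypt(container.ciphertext)

key = Fernet.generate_key()
sealed = seal(b"name: Jane Doe, dob: 1980-01-01",
              {"allowed_purposes": ["billing"]}, key)
print(open_if_permitted(sealed, {"purpose": "billing"}, key))
```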

Of course, such a solution is easier described in a paragraph than implemented, although many valiant attempts have been made with varying degrees of success.  Still, a viable implementation – one that is scalable, robust, and easily made ubiquitous – has yet to be created.  Unfortunately, the wheels of governments and legal systems will not be inclined to wait for it.  Without educating policy makers to better understand the real threats to privacy rather than the perceived ones, we invite a continued spate of ill-conceived requirements that could make the problem worse while ironically robbing “protected” citizens of the full value of cloud technology.



Privacy: The Cloud Model’s Waterloo?

Part 1 – Privacy Ain’t the Same As Security

Most people consider the word privacy solely in the context of cloud deployment models, where a private cloud is one reserved strictly for the use of a specific group of people who have a common affiliation, such as being employed by the same company. But it is quickly becoming evident that the broader context of global legal systems and basic human rights is where cloud computing may meet its privacy Waterloo.

The concept of personal privacy is present in all cultures to varying degrees.  Western cultures have developed the expectation of universal individual privacy as a right.  As such, privacy is a legal construct, not a technical one.  It is founded upon the idea that information not publicly observable about a person belongs to that person, and is subject to their will regarding disclosure and subsequent use.

By default, most legal systems require the individual to opt out of their rights to privacy, rather than opt in.  This means that, unless there is specific permission from the owner of the data to allow its use in specific ways, the use is unlawful and a violation of that person’s privacy rights. Examples include the United States healthcare privacy laws, and the European Union’s privacy directive.

There are instances to the contrary, where one must opt in to receive protection.  One is the privacy-related right not to be approached without consent.  An example is the US Federal Trade Commission’s National Do Not Call Registry, which one must actively join in order to (supposedly) avoid unwanted marketing telephone calls. This solution also demonstrates the difficulty of balancing the privacy of the individual with the free-speech rights of others.

The details of privacy law vary across jurisdictions, and historically have been somewhat anemic.  Before the printed word, propagation of personal information could only occur by word-of-mouth, which was highly suspect as mere gossip.  The printed word resulted in more accurate and authoritative data communication, but the cost rarely allowed for transmitting personal details outside the realm of celebrity (in which case it was considered part and parcel of one’s celebrated position). These limitations rarely tested the laws, and when they did, it was infrequently enough to manage on a case-by-case basis. But, as with so many other legal constructs, computer systems and networking have strained the law to breaking points.

The modern, democratized Internet has enabled the near instantaneous propagation of data at little expense, by almost anyone, to broad audiences.  In supposed acts of public service, “whistle blowers” purposefully disclose private information to call attention to illicit activities or behaviors of the data owners: Whether their ends justify their means is hotly debated, though it is a clear violation of privacy. Vast databases of personal information collected by governments and corporations are at much greater risk of being copied by unauthorized agents, which most people agree is data theft.  In these cases, it is fairly easy to spot the transgressor and the violation.

But the free flow of information in the cloud computing era brings ambiguity to what once seemed straightforward.  Individuals who volunteer personal information often do not realize just how far their voluntary disclosure may propagate, or how it might be used, especially when combined with other information gleaned from other sources.  The hyper-connected Internet allows data from multiple sources to be correlated, creating a more complete picture of an individual than he or she may know, or would otherwise condone.  The data may come from a seemingly innocuous disclosure such as a marketing questionnaire, or from public records which, until today, were simply too difficult to find, much less match up with other personally identifiable information (PII).  Attempts to ensure data anonymity by “scrubbing” it of obvious PII such as name, address, phone number, and so on are increasingly ineffective, as something as simple as a time stamp can tie two data points together and lead to an eventual PII link in other data sets.
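A tiny, synthetic illustration of the timestamp problem: neither dataset below contains a name and a location together, yet joining on the shared timestamp links them. The data is invented, and pandas is used only for the join.

```python
# Synthetic illustration: two "anonymized" datasets, neither containing both a
# name and a location, are linked by nothing more than a shared timestamp.
import pandas as pd

badge_log = pd.DataFrame({              # scrubbed of location
    "timestamp": ["2015-06-01 08:02", "2015-06-01 08:05"],
    "employee":  ["J. Doe", "A. Smith"],
})
door_log = pd.DataFrame({               # scrubbed of identity
    "timestamp": ["2015-06-01 08:02", "2015-06-01 08:05"],
    "location":  ["pharmacy entrance", "server room"],
})

linked = badge_log.merge(door_log, on="timestamp")  # re-identification by inference
print(linked)
```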

This particular problem is one of the negative aspects of big data analytics, by which vast sources of data, both structured like database tables and unstructured like tweets or blog posts, can be pulled together to find deeper meaning through inferential analysis.  Certainly, big data analytics can discover important trends and help identify solutions to problems by giving us insight in a way we could never have achieved before. The scope, diversity, size, and accessibility of data, combined with cheap, distributed, open source software, have brought this capability to the masses.  The fact that we can also infer personal information that the owner believes to be private, and has not given us consent to use, must also be dealt with.

As cloud computing continues on the ascendant, and high-profile data breaches fill the news headlines, governments have been forced to revisit their privacy laws and increase protection, specifically for individuals.  In jurisdictions such as the United States, privacy rules are legislated and enforced by sector.  For example, the Health Insurance Portability and Accountability Act (HIPAA) established strict privacy rules for the healthcare sector, building upon previous acts such as the Privacy Act of 1974.  Although the Payment Card Industry (PCI) standard is not a law, it is motivated by laws in many states designed to protect financial privacy and guard against fraud. In the European Union, the Data Protection Directive of 1995 created strict protections for personal data processing, storage, and transmission that apply in all cases.  This directive is expected to be superseded by a much stronger law in coming years.

In an environment where there are legal sanctions and remedies for those who have suffered violations of their privacy, one is wise to exercise caution in collecting, handling, storing, and using PII, regardless of the source.  Cloud technologies make it all too easy to unintentionally break privacy laws, and ignorance is not an acceptable plea in the emerging legal environment.  Clearly, for cloud to be successful, and for us to be successful in applying it to our business problems, we need systematic controls to prevent such abuses.

But is a failure to guarantee privacy in the cloud enough to kill the cloud model, or hobble it to insignificance?  More on this line of thinking in Part 2.


Becoming an IoT Automation Geek

I’ve long wanted more automation and remote control over my house, but until recently that wasn’t possible without a lot of special project boards, wiring, gaps in functionality and – most importantly – time. Now, with the Internet of Things (the latest buzzword for automating and monitoring using Internet technologies for transport), this is becoming so much easier, and something for which I can make time.  It is also somewhat synergistic with my cloud architecture role at Dell: Although IoT is assigned to others and not to me, it still warrants some first-hand experience to be a fully competent chief cloud architect.  (At least, that’s my rationale!)

Two weeks ago, the two thermostats in our house were replaced with Honeywell WiFi models. It makes it SO easy to program a schedule when you can do it in a web interface! We never have to touch the controls now – they change for different times of the day automatically, and we can control them from our phones remotely if necessary.

I’ve just completed upgrading the pool control system to include WiFi connectivity and Internet remote monitoring and management. I upgraded the pool control to a Jandy AquaLink RS Dual Equipment model, so that now pool, spa, spa heater, spa lights, blower, and pool solar heater are all unified into a single controller, controllable via the network.

Now, the only missing item is the pool light, which is wired through a toggle switch on the other side of the house, near the master bedroom. Rather than run a wire from the controller to that switch, I’ve decided to branch out further into home automation with a SmartThings hub. I’ll replace the pool light’s toggle switch with a smart switch, and I’ll put a relay sensor in the Jandy controller so that when we “turn on” the pool light via the pool remotes (phones, etc.), it will close the relay and send an event to the SmartThings hub, which will then turn on the actual switch.

Since I’m putting in the SmartThings hub, I can now add more Z-Wave IoT devices to the house and solve some other quirky problems, like turning the porch lights on and off at dusk/dawn and power cycling my server or desktop if necessary when traveling. I’m sure I’ll think of others. With SmartThings, you program your own apps and device drivers in Groovy, so I can make it do just about anything I want.  Well, at least that’s the theory.  I’m supposed to receive the hub and devices tomorrow, so it shall be tested shortly.

