Seven Years Later

In the months leading up to taking an analyst position with Gartner in 2016, I posted consistently to this blog. I used it to showcase my writing and to maintain some kind of marketing presence as a free agent until an opportunity presented itself.

My first few interactions with Gartner as a potential analyst hire were pleasant enough, but I wasn’t sure I was ready to leave behind a more traditional executive role. Gartner was persistent in pursuing me, though, and I joined. At first, I thought it would be a three-year stint to strengthen my network, and then I’d strike out on my own again.

In April of 2023 it will have been seven years since I joined Gartner, and I don’t regret it. Through the pandemic and other upheavals in society and in my personal life, I’ve found employment with Gartner to be very fulfilling and rewarding. Although I can’t say I’ll be there for the rest of my career, I don’t see any big reason to move on right now.

In the meantime, you won’t see much blog activity here. This server is primarily used for archiving other things. The WordPress site is a nice front door, and the information about me is a good way to present myself to those interested in me, my personal life, and my career.


Three Years Later

Nearly three years with Gartner.  I have to say I’ve been far more successful with this company than I ever expected. It is such a natural fit for me. I enjoy writing the research, talking to clients, and the occasional travel to meet them in person.  I am able to say what I really think and not have to toe a specific technology company’s corporate line.  It’s quite liberating as a technologist!

The most notable accomplishments came in early 2019:

  • I was the top analyst in the Technology and Service Providers group in 2018, based on my client satisfaction and overall productivity numbers.
  • I was one of the top 5 analysts across all of Gartner for 2018.
  • I was promoted to Vice President.

Now, a “Vice President Analyst” isn’t the kind of vice president you might find at other companies.  At Gartner, a VP Analyst is an individual contributor, but we are ranked at the executive level because of our experience and because we advise the executives who hold our seats across multiple areas of knowledge.  It’s the best of both worlds!  An executive salary and bonus program, without people management.  (The Managing Vice Presidents are the people who manage the analysts.  Well, I should say they try to manage us. Analysts are notoriously unmanageable.)

The major research I’ve inherited is the Gartner Magic Quadrant for Public Cloud Infrastructure Professional and Managed Services, Worldwide. If you’ve ever seen an “MQ” then you know it by the graphic that plots the Leaders, Visionaries, Challengers, and Niche Players.  Work on an MQ takes 4-5 months and is an arduous process, filled with many challenges, not the least of which is vendors dismissing your assessment as “not factual” when it is actually an opinion based on fact. Though it is a “heavy lift” research note, it generates so many additional research ideas and insights into the market that it makes me a much better analyst for both end-user clients and technology and service provider clients.  I’ve done this MQ for two years now and I hope I can continue for at least one more.

So, I’ve achieved some major milestones in the first three years that I had told myself would be the litmus test of whether I stayed or left.  As things stand now, I’m staying for the foreseeable future.  Gartner is a great place to be, and I haven’t begun to fully tap all the opportunities it can provide me.


One Year Later

So, it has been almost a year now since I accepted a position with Gartner Inc. as an IT industry analyst.  In that time, all of my thinking and writing about the industry has been redirected into Gartner’s research outputs through notes and client inquiry.  It occurs to me that those visiting my blog might think I’ve “gone dark” or that this web site is untended. So, to at least keep the lights on, I’ll occasionally post what I can, although much of it won’t be about cloud technology as in years past.

Although I can’t directly post my research here, I can post links to it.  Of course, a Gartner subscription or one-off payment will be required to access the notes, but the links will give those who stumble upon my personal blog an idea of where my thoughts have wandered since joining the ranks of the analysts.

Incorporating and Optimizing Public Cloud Costs in Modern SaaS Pricing Models

27 January 2017  |  As independent software vendors build modern SaaS offerings, SaaS technology product management leaders must develop new pricing models. These models must account for the variable cost and cloud-unique benefits of the underlying public cloud hyperscale IaaS and PaaS services that they build upon….

Analyst(s): Craig Lowery

Market Trends: The Convergence of IaaS and PaaS Cloud Services

15 November 2016  |  The accelerating convergence of IaaS and PaaS into a service model combining the two will disrupt existing markets. Technology strategic planners must understand the opportunities and threats from delivering and consuming cloud services through these historically distinct offerings….

Analyst(s): Craig Lowery  |  Mike Dorosh

Market Guide for Managed Hybrid Cloud Hosting, North America

25 October 2016  |  Infrastructure and operations leaders considering the use of a managed service provider to support their public, private and hybrid cloud infrastructure-as-a-service deployments have a variety of options and capabilities to choose from. Gartner helps to align their choice with primary use cases….

Analyst(s): Mike Dorosh  |  Craig Lowery

Emerging Technology Analysis: Serverless Computing and Function Platform as a Service

15 September 2016  |  Serverless computing solutions execute logic in environments with no visible VM or OS. Services such as Amazon Web Services Lambda are disrupting many cloud development and operational patterns. Technology and service provider product managers must prepare for the change….

Analyst(s): Craig Lowery

Seven Steps to Reducing Public Cloud IaaS Expense

30 August 2016  |  Managing public cloud infrastructure-as-a-service expense can be complicated. From pricing plans to underutilized resources, there are many potential pitfalls for overspend along with missed opportunities to cut costs. IT leaders should take these low-cost, low-risk steps to find immediate savings….

Analyst(s): Craig Lowery  |  Gary Spivak

Innovation Insight for Cloud Service Expense Management Tools

12 August 2016  |  Public cloud IaaS consumption and spending continue to grow. I&O leaders responsible for IaaS spending need to get ahead of spending and waste through the emerging practice of cloud service expense management, and to take advantage of the emerging tools….

Analyst(s): Gary Spivak  |  Craig Lowery  |  Robert Naegle


What’s Worse than Cloud Service Provider Lock-in?

Cloud Lock In

“If you are running your IT systems in a traditional private data center, you are already locked-in, and not in a good way.”

I was recently in a job interview before a panel of IT industry analysts, defending my positions in a hastily-written research note about the cloud service provider business.  During an interaction with one of the tormentors… I mean, interviewers… the topic of cloud vendor lock-in surfaced in the context of customer retention and competitive opportunities.

It is no secret that detractors of public cloud keep “lock-in” as a go-to arrow in their quiver, right alongside “insecurity.”  But that doesn’t mean it is manufactured FUD (Fear, Uncertainty and Doubt).  Cloud lock-in is very real, and organizations adopting cloud technologies, especially public cloud, must have a strategy for addressing it.

But is cloud lock-in necessarily “bad?”

CSPs continue to build out their stacks of services into higher layers of abstraction: databases, data analytics, even gaming platforms are built and maintained by the provider, freeing customers to focus on the application-specific logic and presentation details that meet their business needs. The result is a high-velocity, nimble IT execution environment that lets customers respond quickly and effectively.  In short: they derive considerable value from being “locked in.”  Clearly, if they later have reason to abandon their current CSP for another and haven’t planned for that possibility, disentanglement may prove intractable.

Yes, there are ways to plan for and enable easier extraction.  Technologies like containers provide better isolation from the cloud infrastructure, and they make it possible to “bring your own” services with you rather than depend on the platform’s.  The challenge is knowing where to draw the line in just how much to “abstract away” the service: you could end up spending so much time insulating your application from the implementation details of a service that you lose the aforementioned benefits.  That could be worse than being directly “locked in,” if your competitors accept the lock-in and get to market faster.

But a new thought was introduced during my interview, and I wish I could credit the person who gave it to me in the form of a question: Aren’t organizations that choose to remain in their private data centers even more “locked in?”

This is a profound question, and it underscores the fundamental shift in thinking that must occur if organizations are to successfully survive the shift to cloud-based IT.  If you are running your IT systems in a traditional private data center, you are already locked-in, and not in a good way.  You’re much more isolated from the riches of the Internet, unable to take advantage of economies of scale, forced to re-implement otherwise common services for yourself, burdened with highly customized infrastructure and attendant support systems that will get more expensive to maintain over time as others abandon the model.

Privacy and control underpin the rationale that continues to feed this model, and although privacy is a good reason, control – specifically, the ability to customize infrastructure to the nth degree – is not.   CSPs build and maintain infrastructure faster, cheaper, and with higher rates of successful outcomes. IT departments that see cloud as a mere extension of their data center – something that must conform to their data center’s operational paradigms and disrupt as little as possible the routines and processes they have developed over the years – will be at a major competitive disadvantage as their more prescient and capable competitors adopt DevOps and cloud models to drive their businesses at higher velocity.  Private cloud, when done properly, is a good intermediate solution that provides the control and privacy while bringing many (but certainly not all) of the benefits of public cloud.  A hybrid cloud solution that marries public and private seems to offer the best compromise.

Oh, the interview? Apparently they liked my answers enough to offer me a position.  So for now, this is the last real cloud-related technical blogging you’ll see from me outside of the company’s official channels.  After all, one must not give away the store:  All product must be secured!

That is… locked-up. 🙂


Outdated Best Practices: How Experienced Programmers May Fail at Building Cloud Systems

 

Cloud-Fail

I’m a big believer in formal education because of the importance of understanding “first principles” – the foundational elements upon which engineers design and build things.  The strengths and weaknesses of any such product are direct results of the designer’s understanding, or lack thereof, of the immutable truths endemic to their discipline.  Software engineering is no exception:  disciplined, experienced software engineers most certainly include first principles in the set of precepts they bring to the job. Unfortunately, it’s the experience that can be a problem when they approach building cloud-based systems.

An expert can often confuse a first principle with an assumption based on a long-standing status quo.  Cloud software design patterns differ distinctly from historical patterns because compute resources are more plentiful and more rapidly acquired than in the past.  Here are three deeply ingrained best practices, based on assumptions that have served us well but do not pertain to cloud software systems:

Fallacy #1 – Memory is precious

Only 30 years ago, a megabyte of RAM was considered HUGE, and a 100MB disk was something one only found in corporate data centers.  Now one is hard pressed to find anything smaller than 5GB even in consumer-grade USB solid-state drives.  Even so, most seasoned programmers (I am guilty) approach new tasks with austerity and utmost efficiency as guiding principles.  Many of us are conditioned to choose data structures and their attendant algorithms that balance speed with memory consumption.  B-Trees are a great example, with trees-on-disk and complicated indexing schemes forming the backbone of relational database management systems. Boyce-Codd and Third Normal Forms are as much about minimizing data duplication as they are about preserving data consistency, which can be achieved in other ways.

In the cloud, memory (both RAM and disk) is not only cheap, it is dynamically available and transient – therefore, not a long-term investment.  This allows a programmer to choose data structures and algorithms that sacrifice space for speed and scalability.  It also explains the rise of NoSQL database systems, where simple indexing and distribution schemes (like sharding) are the rule, and one is not afraid to replicate data as many times as necessary to avoid lengthy future lookups.
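To make that contrast concrete, here is a minimal Python sketch of the space-for-speed pattern, using hypothetical product and category data: rather than normalizing and joining at read time, it happily duplicates the category record into every product view and precomputes an index keyed by the lookups it expects to serve.

```python
# Illustrative only: trade memory for lookup speed by duplicating data.
products = [
    {"id": 1, "name": "laptop", "category_id": 10},
    {"id": 2, "name": "mouse", "category_id": 10},
    {"id": 3, "name": "desk", "category_id": 20},
]
categories = {
    10: {"id": 10, "label": "electronics"},
    20: {"id": 20, "label": "furniture"},
}

# Denormalized view: the full category record is copied into every product,
# exactly the duplication a normalized schema is designed to avoid.
product_view = {
    p["id"]: {**p, "category": categories[p["category_id"]]} for p in products
}

# Precomputed secondary index: more memory spent up front so that the query
# we expect to serve later is a single hash lookup instead of a scan or join.
by_category = {}
for p in product_view.values():
    by_category.setdefault(p["category"]["label"], []).append(p["id"])

print(product_view[2]["category"]["label"])  # 'electronics', no join needed
print(by_category["electronics"])            # [1, 2]
```

The same instinct, applied at scale, is what makes NoSQL designers comfortable writing a record several times so it never has to be joined again.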

Fallacy #2 – Server-side is more powerful than client-side

For years, the “big iron” was in the data center, and only anemic clients were attached to it – including the desktop PCs running on Intel x86 platforms.  Designers of client/server systems made great use of server-side resources to perform tasks that were specific to a client, such as pre-rendering a chart as a graphic, or persisting and managing the state of a client during a multi-transactional protocol.

In building cloud software, clients should take on as much responsibility for computation as possible, given the constraints of bandwidth between the client and server.  Squandering underutilized, distributed client-side resources is a missed opportunity to increase the speed and scalability of the back end.  Ideally, the server performs only the tasks that it alone can perform.  Consider file upload (client to server). Traditionally, the server maintains elaborate state information about each client’s upload-in-progress, ensuring that the client sends all blocks, that no blocks are corrupted, and so on.  In the new model, it is up to the client to make sure it sends all blocks and to request a checksum from the server if it thinks one is necessary.  The server simply validates access and places reasonable limits on the total size of the upload.
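Here is a rough Python sketch of that division of labor, with the transport calls left as hypothetical stand-ins: the client owns the chunking, ordering, and integrity bookkeeping, while the server-side logic does nothing beyond authorization and a size cap.

```python
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024          # illustrative chunk size
MAX_UPLOAD = 5 * 1024 * 1024 * 1024   # illustrative server-enforced cap

def client_upload(path, send_chunk, get_server_checksum):
    """Client-driven upload: send_chunk and get_server_checksum stand in for
    calls to a hypothetical upload API. The client tracks its own progress."""
    digest = hashlib.sha256()
    offset = 0
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            send_chunk(offset, chunk)     # client decides ordering and retries
            digest.update(chunk)
            offset += len(chunk)
    # The client asks for the server's checksum only if it wants verification.
    return get_server_checksum() == digest.hexdigest()

def server_accept_chunk(caller_is_authorized, bytes_so_far, chunk):
    """Server side stays minimal: validate access and cap the total size."""
    if not caller_is_authorized or bytes_so_far + len(chunk) > MAX_UPLOAD:
        raise PermissionError("upload rejected")
    return bytes_so_far + len(chunk)
```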

Fallacy #3 – Good security is like an onion

The “onion” model for security places the IT “crown jewels” – such as mission-critical applications and data – inside multiple layers of defenses, starting with the corporate firewall on the outside.  The inner “rings” comprise more software and hardware solutions, such as identity and access management systems, eventually leading to the operating system, middleware, and execution runtime (such as a Java virtual machine) policies for an application. The problem with defense-in-depth is that software designers often do not really understand it, and they assume that if their code runs deep inside these rings, they need not consider attacks they mistakenly believe the “outer layers” already repel.  True defense-in-depth would require that they consider those attacks and provide a backup defense in their code.  (That’s real “depth.”)

In the cloud era, all software should be written as if it is going to run in the public cloud, even if it is not. There is certainly a set of absolutes that a designer must assume the service provider secures – isolation from other tenants, enforcement of configured firewall rules, and consistent Internet connectivity of the container are the basics.  Beyond those, however, nothing should be taken for granted.  That’s not just good security design for cloud software – it’s good security design for any software. Although this may seem like more work for the programmer, it results in more portable code, equipped to survive in both benign and hostile environments, yielding maximum flexibility in choosing deployment targets.
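As a small, hedged illustration, here is a Python handler written as if it will run directly on the public Internet, even though it may sit behind a firewall and an IAM layer today; the token check and delete action are hypothetical stand-ins.

```python
import re

ALLOWED_NAME = re.compile(r"^[A-Za-z0-9_\-]{1,64}$")

def delete_report(token, report_name, token_is_valid, do_delete):
    """Assume nothing about the 'outer rings': re-check identity and re-validate
    input as if the request arrived straight off the Internet."""
    if not token_is_valid(token):            # do not trust the gateway's authentication
        raise PermissionError("unauthenticated request")
    if not ALLOWED_NAME.match(report_name):  # do not trust "only safe callers reach this"
        raise ValueError("invalid report name")
    do_delete(report_name)
```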

Experience is still valuable

None of the above is meant to imply that experienced software hands should not be considered in producing cloud-based systems.  In fact, their experience could be crucial in helping development teams avoid common pitfalls that exist both in and outside of the cloud.  As I said earlier, knowing and adhering to first principles is always important! If you’re one of those seasoned devs, just keep in mind that some things have changed in the software engineering landscape where cloud is concerned. A little adjustment to your own internal programming should be all that is required to further enhance your value to any team.


Approaching the IT Transformation Barrier (7)

Panning for Use Cases

“New use cases and their subsequent solutions are like a feedback loop that creates a gravity well of chaotic innovation.”

Part 7 – Panning for New Use Cases

This is the last of a seven-part series considering the ongoing transformation of information technology as a result of six independent yet highly synergistic evolutionary threads, shown in the diagram below.  We’ve come full circle, talking about the theme of composability in hardware and software, the role of cloud and software-defined networking, and the impact of the triumvirate of containers, the API economy, and microservices.  Unlike those five threads, which focus on the “How?” of the technology itself, this final installment focuses on the “Why?”

An interesting point about all of these technology threads: they have been in motion, plotting their respective trajectories, for many decades.  The ideas that are just now finding practical, widespread application were conceived before many of the programmers writing the production code were born.  One of the best examples is mobile devices that have uninterrupted Internet connectivity.  Another is the fully distributed operating system, in which the location of computer resources and data becomes almost irrelevant to the end user.  Academic papers treating these topics can be found at least as early as the 1980s, with Tanenbaum and van Renesse’s ACM Computing Surveys paper on distributed operating systems appearing in 1985 as a good summary. Once these capabilities move from science fiction to science fact, new fields of innovation are unlocked, and use cases heretofore not even considered begin to come into focus.

Not only have new use cases drawn the threads of evolving technology into a tapestry of increasing value from IT, they also create demand for new innovations by highlighting gaps in the current landscape.  Just a little additional creativity and extra work can fill in these cracks to deliver truly mature solutions to problems that were never even known to be problems, because thinking of them as problems was too far outside our normal way of life.  At the heart of the disruption are the abilities to coordinate resources, influence the actions of individuals, and transfer funds friction-free in microseconds – all on a massively global scale.  Many aspects of life on Earth have been curtailed by the assumption that such synchronization and fiscal fluidity are not merely impossible today, but never would be possible.  The cracks in those assumptions are now beginning to weaken the status quo.

The “sharing economy” is a good example of a class of stressor use cases.  Business models adopted by such companies as Uber, Lyft, HomeAway and Airbnb approach problems in ways that would have been unthinkable 20 years ago: by crowd-sourcing resources and intelligently brokering them in a matter of seconds.  Only now, with the ubiquitous Internet, distributed cloud, and mobile connected devices, can such a scheme be implemented on the massive scale required to make it workable.  But in addition to solving previously unstated problems, these solutions also challenge the very assumptions upon which business models and legal systems have been based for centuries. Case in point: the assumption that taxi companies are the only entities capable of providing privately run public transit, and that regulating taxi companies therefore solves the problem of regulating privately run public transit.  A similar case exists for hotels. Not only do the sharing economy and the ensuing IT transformation break these long-standing assumptions, they also create new opportunities for innovating on top of them, finding new ways to create and distribute wealth, and achieving better self-governance.

New use cases and their subsequent solutions form a feedback loop that creates a gravity well of chaotic innovation.  It pulls technology forward, and the center of that gravity well grows larger with each new contribution.  That pull accelerates things, like innovation, and destroys things, like outmoded models and invalid assumptions.  Such a mix is ripe with opportunity!  It’s no wonder that the modern template for success is the tech startup, the new California Gold Rush to find the next killer app and new use case, with barriers to entry so low that over half a million new ventures are launched each year.  There is no doubt that such a dynamic, evolving environment of collaboration, combined with creative thought from those who have not been trained in the old ways, will produce new solutions for a host of other problems humanity has, until now, simply assumed could never be otherwise.

The question of whether the human organism or its social and political constructs can accommodate the pace of change we are foisting upon them at such a global scale is still unanswered, as are the questions surrounding the safety of the massively complex and interconnected systems we are fashioning.  It doesn’t matter, though, because the laws of gravity must be obeyed: the opportunity is too big, its draw inexorable, and the rewards potentially too great. The tracks have been laid over many decades, crossing a continent of human technical evolution, transporting anyone who has a reasonable credit limit and the adventuresome spirit of entrepreneurship to their own Sutter’s Mill, a keyboard in their backpack with which to pan for the new gold.


Approaching the IT Transformation Barrier (6)

A-New-OOP

“The common-language requirement is the chief problem microservices solve.”

Part 6 – Episode IV: A New OOP

Our series exploring the six independent yet related threads that are converging to drive dramatic transformation in information technology enters the home stretch today.  I’ve been visiting each of these threads in turn, as shown in the diagram below. Each has been evolving on its own, rooted in decades of research, innovation, and trial and error.  They are coming together in the second decade of the 21st century to revolutionize IT on a scale not seen since the 20th.

RealITXform - Microservices

Today, the wheel turns to “microservices.”

Most definitions describe microservices as an architectural concept in which complex software systems are built from interconnected processes; the interconnection is typically carried over Internet Protocol transport and is “language agnostic.”  As with everything else in the above wheel, microservices are nothing more than the natural evolution of software architecture as it changes to take advantage of the ubiquitous Internet, in both senses: its geographic reach and the adoption of its core technologies.

I like to think of microservices as the natural evolution of Object Oriented Programming (OOP).  OOP models software components as objects.  Objects are typically instances of classes, a class simply being a pattern or blueprint from which instances are built.  An object’s interface is the set of externally visible methods and attributes that other objects can use to interact with this object.  A simple example is class Point, which has attributes X and Y to represent the point’s coordinates in a Cartesian plane, and methods such as move() to alter those coordinates.
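Here is that Point example as a minimal sketch (the article mentions Java; I use Python here purely for brevity):

```python
class Point:
    """A point on a Cartesian plane; its interface is the set of visible
    attributes (x, y) and methods (move) that other objects can use."""

    def __init__(self, x: float, y: float):
        self.x = x
        self.y = y

    def move(self, dx: float, dy: float) -> None:
        # Other objects alter the coordinates only through this method.
        self.x += dx
        self.y += dy

p = Point(0.0, 0.0)   # an instance built from the class "blueprint"
p.move(3.0, 4.0)
print(p.x, p.y)       # 3.0 4.0
```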

Classes are typically defined using languages such as Java.  Indeed, most modern languages have an object concept, though their implementations differ. After defining the classes, one writes executable code to create object instances, and those instances themselves contain code in their various methods to interact with other objects: creating new objects, destroying them, invoking them to do work of some kind – all of which amount to a coordinated effort to solve a problem.

Class libraries are pre-written collections of classes that solve common problems. Most languages come with built-in libraries for things like basic data structures (generic queues, lists, trees, and hash maps).  Open source and the Internet have created a rich but chaotic sea of class libraries for literally hundreds of different languages that solve even more specific problems, such as an e-commerce catalog or a discussion board for social media applications.

Until recently, if you wanted to build an application that used pre-built libraries from disparate sources, you chose a language for which an implementation of each was available:  The easy way (there are hard ways, of course) to have the code work together – to have objects from one library be able to see and use the interfaces of objects from the other libraries – was to use the underlying language’s invocation mechanism. This common-language requirement is the chief problem microservices solve.

A microservice is really nothing more than an object, the interface of which is available as a network addressable target (URL) and is not “written” in any specific language.  The main difference is that the interface between these objects is the network, not a memory space such as a call stack or a heap.

Every interface needs two things to be successful: 1) marshaling, which is the packaging up and later unpacking of a payload of information, and 2) the transmission of that payload from object A to object B. Interfaces inside of an execution environment achieve both of these things by simply pointing to data in memory, and the programming language provides an abstraction to make sure this pointing is done safely and consistently.

Microservices are loosely coupled – they don’t share a memory space, so they can’t marshal and transmit data by simply pointing to it.  Instead, the data has to be packaged up into a network packet and sent across the Internet to the receiver, where it is unpacked and used in a manner the caller expects. The Hypertext Transfer Protocol (HTTP) is the ubiquitous standard defining connections and transfers of payloads across the World Wide Web, atop the global Internet.  When used in a particular way that adheres to the semantics of web objects, HTTP can be used to implement RESTful interfaces. The definition of REST and why it is important is beyond the scope of this article, but there are good reasons to adhere to it.

HTTP does not dictate the structure (marshaling) of the payload. Although XML (eXtensible Markup Language) and JSON (JavaScript Object Notation) give us ways to represent just about any data in plain text, they require additional logic to build a functioning interface. One such standard, the Simple Object Access Protocol (SOAP), is exactly that – it defines how to use XML to marshal objects for transmission.  However, many people see SOAP as too complicated, so they use their own flavor of XML or JSON to build their payloads. If there is a fly in the microservices ointment, it is this aspect of undefined interfaces and the lack of a widely adopted protocol for service definition, discovery, and access.
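To make the contrast with an in-memory method call concrete, here is a hedged Python sketch of invoking a microservice: the arguments are marshaled as JSON, the “method” is a URL, and the reply is unmarshaled on return. The endpoint and payload shape are hypothetical.

```python
import json
from urllib import request

def call_microservice(base_url, payload):
    body = json.dumps(payload).encode("utf-8")           # marshal the arguments
    req = request.Request(
        f"{base_url}/v1/points/move",                    # a network-addressable "method"
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:                   # transmit across the network
        return json.loads(resp.read().decode("utf-8"))   # unmarshal the result

# Example (would require a real service at that URL):
# result = call_microservice("https://api.example.com", {"id": 42, "dx": 3, "dy": 4})
```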

Even so, the use of the microservices model is exploding, and we’re seeing more hyper-reuse of code and services to build software. When “shopping” for a service, application developers typically follow this workflow:

  1. If a service already exists, we simply pay for it (if required) and knit it into our application.
  2. Else, we need to create an instance of the service, so we “construct” an instance on our favorite cloud provider by spawning a Docker container from an image from a Docker repository, like Docker Hub.

It’s a new age of software architecture. For the purposes of this article (and because it supports the contrived title), I’m going to call it the Fourth Age, and compare it to the previous three as shown in the diagram below. (This is related to, but different from, the familiar “programming language generations” concept.)

Four Ages of Software Architecture

The First Age of Software Architecture was characterized by a lack of abstraction. Setting binary machine language aside as something only a masochist codes directly, assembly language provided a convenient shorthand, but very little else in the way of supporting problem solving.  The Second Age added procedural languages and data structure abstractions; however, languages were designed with an emphasis on general purpose code execution, and not the support of best practices in creating and operating upon data structures.  The Third Age was the OOP age, where languages evolved to support a more regimented, system-supported means of curating data within a program.  However, those structures were limited to software written in a specific language; interfacing with foreign code was not addressed.

The Fourth Age is the ability to combine code drawn from multiple sources and languages to create a solution.   Microservices are only the latest and most unifying attempt at doing so.  Previous attempts include Microsoft’s Component Object Model (COM) and the Object Management Group’s Common Object Request Broker Architecture (CORBA).  There are many others. The point is, until things like HTTP/REST, Docker containers, cheap cloud hosting services, and freely available repositories like Docker Hub became commonplace, the Fourth Age couldn’t reach maturity – it was held back in many ways that microservices now have the opportunity to transcend.

So it is the age of the microservice, and the transformation of IT as we know it is reaching the peak of its powers.  If train for this new era you will, clear your mind you must, and changes accept.

May the Fourth be with you.

 


Approaching the IT Transformation Barrier (5)

Planet-of-the-APIs

“Cracking the code of how to best implement and monetize APIs is crucial to success in the emerging API economy.”

Part 5 – The Planet of the APIs

After a brief holiday hiatus, we’re back to inspect the transformation that is taking every aspect of information technology from a traditional, private, do-it-yourself, isolated model to a modern, public, outsourced, cooperative model.  These aspects, or threads as I have been calling them, have in combination propelled IT beyond a line of demarcation between the old and new ways of doing things.  Here again is the overview graphic I’ve been using to guide our exploration, with today’s focus being the API Economy.

API-Economy-ReallTXform

We started our tour in the lower left of the above diagram, talking about the composability of hardware and software systems, followed by the cloud model that has been realized through achievements in software defined networking, and more recently in applying lightweight virtualization to achieve uniform packaging and distribution of workloads.  These are all “nuts and bolts” things – the gears inside the big machine that give us capabilities. But capabilities to do what, exactly? What problems do they help us solve, and in what ways are those solutions superior to the ones that came before?

Throughout our discussion, the terms dynamic, reusable, shared, distributed, and lightweight have appeared consistently and repeatedly.  Clearly, the difference between this “new” way of doing IT and the “old” way of doing it includes these characteristics, and they are present not only at the lower level where the technology components reside, but also in the higher levels of the stack where those technologies are harnessed to solve business problems.  This is where the “API Economy” thread enters the discussion, because it is concerned with how we use these new building blocks to achieve better outcomes in problem solving – specifically in enabling organizations to do business on the Internet.

Application Programming Interfaces, as with most of the other things on the big wheel above, are not new. However, when combined with recent strides and innovations in connectivity technologies, they become a focal point in the ongoing transformation.  Decades of work have gone into understanding how to build complex software systems.  A key tenet of enterprise software design is to decompose a problem into smaller sub-problems, then solve those sub-problems so that their respective solutions can later be combined to solve the larger problem. This also gives us the opportunity to identify code that can be shared within the same solution.  For example, if we need to sort several tables of numbers, we write the sorting code only once, and design the solution to call that code with different tables as needed.  This “divide and conquer” method is shown at the far left of the figure below.

Evolution of Code Reusability

But a valuable consequence of this approach is the creation of code that can be shared outside of the initial solution for which it was conceived.  If we create an online catalog for a book store as part of its overall IT system, we can likely reuse that component for a grocery store, provided we built the component with reuse in mind.  Historically, such components are kept in libraries of either source or object code that can be distributed and shared among developers, who later compile or link them into larger solutions as needed, as shown in the middle of the figure above.  This is only possible if the entry points into the reusable code module are well documented, and the module itself has been designed with easy reuse in mind.  APIs help in both of these matters by providing consistent, well-documented entry points (typically, subroutine invocations) and by ensuring that the module is self-sufficient (isolated), relying only on the information it receives through its API to set client-specific contexts.

This method of sharing, which is essentially the copy/paste model, is not very efficient.  When changes are made to the master source code for the shared component, the changes must be propagated to all instances of the component.  For traditional digital media, this is extremely cumbersome.  Online repositories improve the experience somewhat. Still, it is a matter of transporting and updating code instead of transporting and operating on data.

Until recently, this was the best we could hope for, but with the high-speed ubiquitous Internet, and with universally adopted standards such as the Hypertext Transfer Protocol (HTTP) to carry API payloads, we have finally achieved a more dynamic and lightweight means of sharing code, as shown at the far right of the figure above.  Instead of copying and replicating the code, we extend Internet access to running instances of it.  Changes to the code can be rolled into the service and instantly be in use without its users needing to make updates. API management is also evolving to support this model of reuse: by assigning version control to the API itself, we can evolve the service over time while ensuring that clients always connect to a version that offers the features and semantic behaviors they expect.
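A small, hedged Python sketch of that versioning idea (the host, path, and version scheme are hypothetical): the client pins the API version it was written against, so the provider can keep rolling changes into the live service without breaking existing callers.

```python
import json
from urllib import request

API_VERSION = "v2"   # the contract this client was written and tested against

def get_catalog_item(item_id):
    # The version travels with every call; newer versions of the service can
    # coexist behind the same host without changing this client's behavior.
    url = f"https://api.example.com/{API_VERSION}/catalog/items/{item_id}"
    with request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))

# item = get_catalog_item(42)   # would require a real service at that host
```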

So that’s the “how” part of it.  But “why” is this important?

The new use cases we will discuss at the conclusion of this series are born of opportunity brought about by global hyper-connectivity, device mobility, and large-scale distribution of data.  Big data analytics, the Internet of Things, taking full advantage of social media, and similar cases require that we be able to connect data to processes within fractions of a second, regardless of their locations.  To be competitive in this environment, and certainly to take advantage of its opportunities, businesses must be able to “hook in” to the data stream in order to provide their own unique added value.  APIs, and the services backing them, are the technical mechanism for achieving this.  Any organization wishing to participate in the emerging API economy – where data is the currency, and value comes from transporting and transforming that currency into ever more valuable forms – must have a well-defined, easy-to-use API with a dependable, scalable service behind it.  This becomes a building block that developers can acquire in building their applications, which gives the vendor behind the API more clout and relevance in the ecosystem.

Although the technology for creating and managing these APIs and services is largely ready, there is still much experimentation to be performed regarding how to extract revenue from these constructs.  The easiest to understand is basic pay-for-access; but, in a world where most services are “free”, potential customers will likely be offended to be charged just to walk through the front door of an API, so we have variants of limited use with paid upgrade, trial periods, personal data as payment, and so on.

Although Darwinian-type evolution was not the cause of apes ascending to dominate the Earth in “The Planet of the Apes” (it was a bootstrap paradox, for those who really worry about such details), their ascendance was still due to mankind’s inability to evolve to meet the threats and capitalize on the opportunities that a changing environment presented. Cracking the code of how to best implement and monetize APIs is crucial to success in the emerging API economy. We still don’t know which variations will prove to be the dominant species of API interaction models.  Only one thing is certain: those who wait to find out won’t be around to benefit from the knowledge. They will be extinct.


Approaching the IT Transformation Barrier (4)

The-Tupperware-Party

 

“…the direct-to-consumer aspect of the container model is at the heart of its transformative capabilities.”

Part 4 – The Tupperware Party

This is the fourth installment of a multi-part series relating six largely independent yet convergent evolutionary industry threads to one big IT revolution – a major Transformation.  The first installment gives an overview of the diagram below, but today’s excursion considers the thread at the top of the diagram because, at least for me, it’s the most recent to join the party, and the catalyst that seems to be accelerating monumental change.

Containers---RealITXform

Why? The emergence of Docker as a technology darling has been atypical: rarely does a startup see the kind of broad industry acknowledgement, if not outright support, it has garnered.  For proof, look to the list of signatories for the Linux Foundation’s Open Container Initiative. I am hard pressed to find anyone who is anyone missing from that list: Amazon, Google, Microsoft, VMware, Cisco, HP, IBM, Dell, Oracle, Red Hat, SUSE, …  That’s just the short list.  (Ok, Apple isn’t in there, but who would expect them to be?)

The point is, it isn’t all hype. In fact, the promise of Docker images, and the flexibility of the runtime containers into which their payloads can be deployed, is one of the first real glimpses we’ve seen of a universal Platform as a Service.  (More on that when I talk about microservices.) It takes us one step away from heavy x86-type virtualization of backing resources – such as CPU, disk, and memory – and closer to abstractions of the things software designers really care about: data structures, guaranteed persistence, and transactional flows.

At the risk of being overly pedagogical, I feel I should at least make a few points on “containers” and “Docker” about which many are often confused.  Understanding these points is important to reading the tea leaves of the IT industry’s future.

First, “containers” is used interchangeably to refer to both the packaged image of an application (like a Docker image) and the runtime environment into which that image can be placed. The latter is actually the more accurate definition, which is why you will see references in more technical literature to both Docker images and Docker containers.  Images are the package.  Containers are the target for their deployment.

To understand what’s in the package, let’s first understand what the containers, or runtime environments, are.  The container concept itself is pretty broad: think of it like an apartment in an apartment complex where a program is going to spend its active life.  Some apartments are grouped together into units, perhaps intentionally sharing things like drainage and water, or unintentionally sharing things like sounds.  The degree of isolation, and how much of it is intentional or unintentional, is dependent upon the container technology.

Hypervisor-based virtualization products such as VMware vSphere (ESX), Microsoft Hyper-V, and the open source KVM create containers that look like x86 computer hardware.  These containers are “heavyweight” because, to make use of them, one must install everything from the operating system “up.”  Heavyweight virtualization basically “fakes out” an operating system and its attendant applications into thinking they have their own physical server.  This approach has the merit of being very easy to understand and adopt by traditional IT shops because it fits their operational models of buying and managing servers.  But, as a method of packaging, shipping, and deploying applications, it carries a lot of overhead.

The types of containers that are making today’s headlines are commonly called “lightweight,” which is common parlance for the more accurate label “operating system-level virtualization.”  In this scenario, the operating system already exists, and the virtualization layer “fakes out” an application into thinking it has that operating system all to itself, instead of sharing it with others.  By removing the need to ship the entire operating system in the package, and by not imposing virtualization overhead on operating system intrinsics (kernel calls), a more efficient virtualization method is achieved.  The packages are potentially smaller, as are the runtime resource footprints of the containers.  It is generally faster to create, dispose of, and modify them than an equivalent heavier-weight version.

The most well-known contemporary lightweight containers are based on the Linux kernel’s cgroups functionality and file system unioning.  Going into the details of how they work is beyond the scope of this (ideally shorter) post.  More important is that Linux containers are not the only lightweight containers in the world. In fact, they are not the only lightweight containers that Docker packaging will eventually support, which is in part why Microsoft announced a partnership with Docker earlier this year and has signed on to the Linux Foundation OCI.

Perhaps the biggest misunderstanding at this juncture is that Microsoft’s Docker play is about running Windows applications on Linux in Docker containers.  It isn’t, although that is a somewhat interesting tack.  Instead, it is about being able to use Docker’s packaging (eventually, the OCI standard) to bundle up Windows applications into the same format as Linux applications, then use the same tools and repositories to manage and distribute those packages.  Microsoft has its own equivalent of containers for Windows. It also recently announced Windows Nano Server, a JEOS (just-enough operating system) to create lightweight containers that Docker images of Windows payloads can target.

The figure below demonstrates what an ecosystem of mixed Linux and Windows containers could look like. Imagine having a repository of OCI-standard (i.e., Docker) images, the payloads of which are mixed – some being Linux apps (red hexagons), others being Windows apps (blue pentagons).  The tools for managing the packages and the repo are based on the same standard, regardless of the payload.  A person looking for an application can peruse an intermediate catalog of the repo and choose an application.  When they press “Purchase and Deploy,” the automation finds a cloud service that exposes a container API and supports the type of payload (Windows or Linux), and the instance is deployed without the consumer of the application ever knowing which operating system it required.

OCI Ecosystem

Self-service catalogs and stores aren’t new, and that isn’t the point of the example.  The point is that it becomes easier to develop and deliver software into such marketplaces without abandoning one’s preferred operating system as a developer or becoming deeply entrenched in one marketplace provider’s specific implementation.  It is also easier for the application consumer to find and use applications in more places, paying less for the resources they use and without regard for details such as the operating system.   There is a more direct relationship between software providers and consumers, reducing or even eliminating the need for a middleman like the traditional IT department.

This is a powerful concept!  It de-emphasizes the operating system on the consumption side, which forces a change in how IT departments must justify their existence.  Instead of being dependency solvers – building and maintaining a physical plant, sourcing hardware, operating systems, middleware, and the staff to house and take care of it all – they must become brokers of the cloud services (public or private) that automatically provide those things.  Ideally, they own the box labeled “Self-Service Catalog and Deployment Automation” in the figure, because that is the logic that lets their customers use cloud services easily while the organization maintains control over what is deployed and where.

This is a radical departure from the status quo. When I raised the concept in a recent discussion, it was met with disbelief, and even some ridicule.  Specifically, the comment was that the demise of IT has been predicted by many people on many previous occasions, and it has yet to happen. Although I do not predict its demise, I do foresee one of the biggest changes to that set of disciplines since the x86 server went mainstream.

If there’s anything I’ve learned in this industry, it is that the unexpected can and will happen.  The unthinkable always looks 100% certain to occur when viewed in hindsight, which is too late.  Tupperware (the company) is best known for defining an entire category of home storage products, notably the famous “burping” container.  But more profound, and what enabled it to best its competitors, was the development of its direct marketing methods: the similarly iconic Tupperware Party.  For cloud computing, the container as technology is certainly interesting, but it’s the bypassing of the middleman – the direct-to-consumer aspect of the model – that is really at the heart of its transformative capabilities.

Welcome to the party.


Approaching the IT Transformation Barrier (3)

 

The Power of Steam

“Cloud isn’t a technology. It’s a force of nature.”

Part 3 – The Power of Steam

I’ve previously advanced the notion that the IT industry has reached and is in the act of collectively crossing a synchronization barrier – a point where many independent threads converge, and only at that point can they all move forward together with new-found synergy.  The diagram below shows these six trajectories leading us toward “Real IT Transformation.”  Taking each of the vectors in turn, I last dealt with the fundamental theme of composition in both hardware and software systems, and how it is disrupting the current status quo of monolithic development and operational patterns in both domains.  Advancing clockwise, we now consider the cloud and software-defined networking.

Cloud and SDN---RealITXform

If there is one recurring element in all six threads, it’s flexibility – the ability to reshape something over and over without breaking it, and to bring a thing to bear in often unforeseen ways on problems both old and new.  It is a word that can be approximated by others such as agility, rapidity, elasticity, adaptability… the now quite-familiar litany of characteristics we attribute to the cloud computing model.

Flexibility is a survival trait of successful businesses.  The inflexible perish because they fail to adapt, to keep up, to bend and not break, and their IT systems are at once the most vulnerable and the most potentially empowering.  It is obvious why the promise of cloud computing has propelled the concept far beyond fad, or the latest hype curve.  Cloud isn’t a technology; it is a force of nature, just like Darwinian evolution. For IT practitioners wishing to avoid the endangered species list, it is a philosophy with proven potential to transform not just the way we compute and do business, but the way we fundamentally think about information technology from every aspect:  provider, developer, distributor, consultant, and end user.  No one is shielded from the effects of the inevitable transformation, which some will see as a mass extinction of outdated roles and responsibilities.  (And lots of people with scary steely eyeballs who want to be worshiped as gods. Wait. Wrong storyline…)

Although this theme of flexibility is exhibited by each of the six threads, and “cloud” naturally bears the standard for it, there is something specific to the way cloud expresses this dynamism that justifies its being counted among the six vectors of change: software-defined networking (SDN).  I refer to this both in a very general sense and in the more specific marketing and technical definition recently promulgated, which I call “true SDN” in the figure below.

SDN Layers

For cloud computing, the invention of the Domain Name System was where SDN began. (DNS… SDN… coincidence? Of course, but we need a little gravitas in the piece at this point, contrived though it may be.) IP addresses were initially hard to move, and DNS allowed us to move logical targets by mapping them to different IP addresses over time.  The idea came fully into its own with Dynamic DNS, where servers could change IP addresses at higher frequencies without clients seeing stale resolution data as with normal DNS zone transfers and root-server updates.  Load balancers and modern routing protocols further improved the agility of the IP substrate to stay synchronized with the increasingly volatile DNS mappings.

With DNS operating at the macro-network level, SDN took another step at the micro layer with the birth of virtual local networking, born out of x86 virtualization (hypervisors).  By virtualizing the network switch and, eventually, the IP router itself, redefinition of a seemingly “physical” network – something that would previously have required moving wires from port to port – could be done purely through software control. More recent moves to encapsulate network functions into virtual machines – Network Functions Virtualization (NFV) – continue to evolve the concept in a way that even more aptly complements end-to-end virtualization of network communication, which would be…

…the capstone – true SDN, or “SDN proper.” It bridges the micro and macro layers (the data planes) with a software-driven control plane that configures them to work together in concert.  Standards such as OpenFlow and the ETSI NFV MANO place the emphasis on getting data from point A to point B, with the details of routing and configuration at both local and wide-area-network layers left to automation to solve.   When married with cloud computing’s notion of applications running not just “anywhere” but potentially “everywhere,” the need for this kind of abstraction is clear.  Not only is there the cloud “backend” that could be in one data center today and another the next, but there are also the millions of end-point mobile devices that are guaranteed to move between tethering points many times a day, often traversing global distances in a matter of hours.  Users of these devices expect that their experience and connectivity will be unchanged whether they are in Austin or Barcelona within the same 24-hour period.  Without true SDN, these problems are extremely difficult if not impossible to solve at global scale.

It’s easy to blow off “cloud” as just the latest buzzword. Believe me, I’ve had my fill of “cloudwashing,” too.  But don’t be fooled into thinking that cloud is insubstantial.  Remember: hot water vapor was powerful enough to industrialize entire nations.  Cloud computing, combined with the heat of SDN, has no less significance in the revolutionary changes happening right now to IT.

 
