The Myth of Cloud Insecurity (cont.)

Part 3 – Leaping the Chasm

In my previous posts (Part 1, Part 2) in this series, we established the theory that the human species has come to rely on physical ownership and control of something as necessary, and sometimes even sufficient, to secure that thing. This theory doesn’t hold in cloud-era IT, where virtual objects such as applications and data, though backed by physical machines, are more vulnerable in the virtual world than in the physical one in which they are rooted. This line of reasoning is challenging for those who insist that “private” clouds will always trump public cloud services when it comes to providing the ultimate security.

The key to security in the cloud is to make the application responsible for its own security, not the supporting infrastructure. This implies that not all applications as they currently exist in a private data center are suitable for “true” cloud deployment. In fact, it is likely that very few are ready to make the jump into the cloud, because there is a gap in application fitness that must first be addressed. Perhaps the staunch supporters of private cloud as the bastion of security take their positions because they do not understand, or are simply dismissive of, the fact that application transformation is a prerequisite for achieving the most secure public cloud deployments.

When the distributed perimeter pattern is followed by cloud application developers, the CSP is left to focus on creating the most secure service possible, meaning that it is generically configurable for each application, and that those configurations are difficult if not impossible for interlopers to modify to their own advantage. Because CSPs operate at such large scale, they are arguably more likely to do a better job of securing these basic services than all but a handful of enterprise data centers could. Of course, this model can only succeed when the application properly configures and uses the service. To do otherwise could invite a security breach.

But even in the case where an individual cloud application is misconfigured and vulnerable, it is important to note that the cloud service itself and other applications are not likely affected by the vulnerability.  If a poorly designed app is compromised, the attacker can only gain control over what that app controls.  Even if the app has “superuser” privileges for the tenant (a very bad thing to do!), it cannot impact other tenants except possibly by side effect, such as launching a denial-of-service attack by saturating the network or CPU.  Even then, good CSPs will enforce limits that prevent the DoS attack from having significant negative impacts on other tenants. In effect, the CSP approaches each tenant as a potential security risk that could behave poorly, even if not officially “compromised.”

This is not to say that private cloud has no place in cloud-era IT. Even within the security context alone, there are many applications that will never be refactored for cloud, and for good reasons. Refactoring is expensive. The application may not have a very long lifespan anyway. Or the nature of the application may be such that a full public-cloud makeover won’t add much value for users or the organization. In these cases, it is best to either leave the application as is, or partially recast it to a cloud model to take advantage of efficiencies in the private cloud while still depending on the existing security controls.

For new application development, however, there should be no question: adopt the full-on cloud model, because it maximizes deployment flexibility. Even if you don’t think you’ll ever deploy the app to public cloud, write it as if you will! When we design and write applications to take responsibility for their own security, we are subscribing to the zero trust security principle at a fine level of granularity (the application, or its container). By not trusting anything beyond the boundaries of the application’s container, and designing the application accordingly, we endow it with the strongest possible security profile, and thus the potential for greater mobility between cloud service providers and locations, regardless of deployment model.
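To make that principle concrete, here is a minimal sketch of an application that enforces its own perimeter rather than assuming the hosting environment will. It assumes a Flask-style Python web app; the header name, token scheme, and environment variable are illustrative choices of mine, not a prescription or any particular provider’s API.

```python
# Minimal sketch: an application that trusts nothing outside its own container.
# The token scheme and header name are illustrative only.
import hmac
import os

from flask import Flask, abort, request

app = Flask(__name__)
EXPECTED_TOKEN = os.environ["APP_API_TOKEN"]  # injected at deploy time, never hard-coded

@app.before_request
def enforce_app_perimeter():
    # Refuse plaintext traffic even if a load balancer "should" have terminated TLS.
    if not request.is_secure:
        abort(403)
    # Authenticate every caller; no request is trusted just because it reached us.
    supplied = request.headers.get("X-App-Token", "")
    if not hmac.compare_digest(supplied, EXPECTED_TOKEN):
        abort(401)

@app.route("/orders")
def list_orders():
    return {"orders": []}  # application logic runs only after the checks above
```

Because the checks travel with the application, the same container presents the same security posture whether it lands in a private cloud, a public cloud, or on a laptop.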


The Myth of Cloud Insecurity (cont.)

Part 2 – Distributing the Security Perimeter

In part 1 of this series, we examined the fallacy that physical ownership and control of hardware, combined with multi-layered perimeter defense strategies, leads to the most secure IT deployments. In the hyper-connected cloud era, this concept doesn’t hold. In private data centers that rely on this model (see figure A below), a breach in one co-resident system typically exposes others to attack. Indeed, some of the most successful viral and worm attacks follow the model of gaining entry first, then using information gleaned from the most vulnerable systems to break into others in the same data center without having to navigate the “strong” outer perimeter again. An analysis of this type of leap-frog attack leads us to the conclusion that it is actually not a good example of defense in depth. Defense in depth would dictate that, having gained new information from attacking one system, the attacker has at least as difficult a time using that new-found knowledge to reach its next victim as it did to reach the first.

Applications that are written to run “in the cloud” make very few assumptions about the security of the cloud service as a whole.  Concepts such as a firewall move from the data center’s network boundary to the application’s network boundary, where rules can be configured to suit exactly the requirements of the application, rather than the aggregate requirements of all apps within the data center.  This distributed custom-perimeter model is shown in figure B, below.  By forcing applications to take on more responsibility for their own security, we make them more portable.  They can run in a private cloud, public cloud, on or off premises.  The degree to which ownership and proximity of the service affect the security of the application is much smaller when the application is designed with this self-enforced security model in mind.

[Figure: Security Models – A: nested data-center perimeters; B: distributed per-application perimeters]

Although applications should not rely on data center security for their own specific needs, security at the data center level is still critical to the overall success of the cloud model. Since the application relies heavily on the network connection to implement persistence and access, the application developer/operator has the responsibility of configuring the cloud services for that application to achieve those specific goals. The cloud service provider (CSP) is responsible for delivering a reliable, secure infrastructure service that, once configured for the application, maintains that configuration in a secure and available fashion, as shown in the figure below. Each tenant’s apartment is constructed by the CSP in accordance with the tenant’s requirements, such as size and ingress/egress protections. The CSP further guarantees the isolation of the tenants as part of its security obligations.

[Figure: Cloud Service Provider Data Center – isolated tenant “apartments” within the CSP’s shared infrastructure]

As an example, let us return to our firewall scenario, but with a bit more detail. Suppose an application requires only connections for HTTP and HTTPS for all communication and persistence operations. Upon deployment, the cloud service is configured to assign an Internet Protocol address and associated Domain Name System name to the application, and to implement rules to admit only HTTP and HTTPS traffic from specific sources to the application’s container. Subsequent to deployment, the cloud service must ensure that the IP address and DNS names are not changed, that the firewall rules are not altered, that the rules are enforced, and that isolation of the application from other applications in other tenancies is strictly enforced. In this way, the cloud service provider’s role is cleanly separated from that of the application developer/operator, because the CSP does not know or care what the firewall rules are. It only knows that it must enforce the ones configured. This is in stark contrast to traditional data centers, where firewall rules are a merge, often with conflicts, of the various rules each application and subsystem behind the firewall might require.
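To make that division of labor concrete, here is one way the developer/operator side of the configuration might look, using AWS EC2 security groups via boto3 purely as a stand-in for “the cloud service.” Any real CSP interface will differ; the VPC ID and source CIDR below are placeholders, and the snippet assumes AWS credentials and a region are already configured in the environment.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a perimeter scoped to this one application, not the whole data center.
sg = ec2.create_security_group(
    GroupName="web-app-perimeter",
    Description="Admit only HTTP/HTTPS to this application",
    VpcId="vpc-0123456789abcdef0",          # placeholder
)

# Admit only the two protocols the application actually needs, from known sources.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "203.0.113.0/24"}]},   # placeholder source range
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "203.0.113.0/24"}]},
    ],
)
```

The operator declares the rules once; keeping them enforced, unaltered, and isolated from other tenants afterward is the provider’s job, which is exactly the separation described above.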

In my third and final post in this series, I’ll further expound the virtues and caveats of this model and how, when properly implemented, it solves the security problem for applications in the most general case: public cloud.


The Myth of Cloud Insecurity

Part 1 – The False Sense of Physical Security

As is often the case when new paradigms are advanced, cloud computing as a viable method for sourcing information technology resources has met with many criticisms, ranging from doubts about the basic suitability for enterprise applications, to warnings that “pay-as-you-go” models harbor unacceptable hidden costs when used at scale.  But perhaps the most widespread and difficult to repudiate is the notion that the cloud model is inherently less secure than traditional data center models.

It is easy to understand why some take the position that cloud is unsuitable, or at the least very difficult to harness, for conducting secure business operations. Traditional security depends heavily on the fortress concept, one that is ingrained in us as a species: We have a long history of securing physical spaces with brick walls, barbed wire fences, moats, and castles. Security practice has long advocated placing IT resources inside highly controlled spaces, with perimeter defenses as the first and sometimes only obstacle to would-be attacks. Best practice teaches the “onion model,” a direct application of the defense in depth concept, where there are castles within brick walls within barbed-wire fences, creating multiple layers of protection for the crown jewels at the center, as shown in Figure A below. This model is appealing because it is natural to assume that if we place our servers and disk drives inside a fenced-in facility on gated property with access-controlled doors and locked equipment racks (i.e., a modern data center), then they are most secure. The fact that we have physical control over the infrastructure translates automatically into a sense that the applications and data it contains are protected. Similarly, when those same applications and data are placed on infrastructure we can’t see, touch, or control at the lowest level, we question just how secure they really can be. This requires more faith in the cloud service provider than most are able to muster.

[Figure: Security Models – A: nested data-center perimeters; B: distributed per-application perimeters]

But the advent of the commercial Internet resulted in exponential growth in the adoption of networked services, and, today, Internet connectivity is an absolute “must have” for nearly all computing applications, not the novelty it was 20 years ago. The degree of required connectivity is such that most organizations can no longer keep up with requested changes in firewalls and access policies. The result is a less agile, less competitive business encumbered by unwieldy nested perimeter-based security systems, as shown in Figure A above. Even when implemented correctly, those traditional security measures often fail because the perimeter defenses cannot possibly anticipate all the ways that applications on either side of the Internet demarcation may interact.

The implication is that applications must now be written without assumptions about or dependencies upon the security profile of a broader execution environment. One can’t simply assume the hosting environment is secure; securing that environment is still quite important, but for different reasons. More on this line of thinking, and the explanation of Figure B and why it is more desirable in the cloud era, in my next post.


The Private Cloud Pendulum

We once had a unified vision of how cloud would be adopted by the average enterprise. With all the uncertainty around the security, cost, and performance of public cloud, enterprises would naturally transform their private data centers into private clouds. Once successful in that incremental transition, they would be more comfortable extending to the public cloud, resulting in the Holy Grail: a hybrid deployment.

We were only half right.

As events have unfolded, we see that hybrid clouds are indeed the desired outcome. However, the waypoints on that journey are, for a number of cloud adoption profiles, reversed from what we had predicted. Instead of first stopping at private cloud, many skipped it entirely and went straight to public cloud, in spite of their own previously voiced objections. Why?

For all but the larger enterprises and the very capable mid-market IT organizations, a private cloud has often been too difficult to build and maintain. The technology existed, but it was far from turnkey. In the face of the challenging business and operational transformations that cloud demands, this was too distracting and unnecessary: public cloud was sitting there, gleamingly simple and ready to use, without the operational hassles.

So, for these “I need it to be as easy as possible” cloud adopters, the pendulum largely bypassed private and swung to public. But will it stay there?

The costs of public cloud are tricky to pin down and manage. You are paying a premium for someone else to handle the headaches. For many projects this makes sense. For long-running activities that don’t take advantage of public cloud’s scale and global reach, you will likely pay more than is necessary.

The chickens, as they say, will come home to roost. The true cost of public cloud will become apparent, and the advantages of private cloud more compelling for the mid-market. The pendulum will return, and when it does, we’ll have evolved private cloud technologies to make them suitable for organizations looking for an appliance-like experience instead of building an IT practice around them.

But what if we’re wrong again? Regardless of how that pendulum swings, having choices and management capabilities that span multiple clouds (public, private, whatever) ensures you’ll be able to keep cost and utility in balance. The trick is to invest in tools and technologies that enable that choice: buy and design software that is infrastructure-agnostic, dependent only upon abstracted network services for which analogs are available from many providers. That way, whether it’s now or in the future, when you’re ready for private cloud (or it is ready for you), you’ll be in a position to further expand your selection of cloud targets by adding your own.
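What does “infrastructure-agnostic” look like in code? Here is a minimal Python sketch of an application that depends only on an abstract storage service, with concrete providers swapped in at deploy time. The class and provider names are hypothetical, not any real library.

```python
# Keep application code infrastructure-agnostic: the app depends on an abstract
# storage service; public, private, or on-premises providers are interchangeable.
from abc import ABC, abstractmethod

class BlobStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class PublicCloudBlobStore(BlobStore):
    def __init__(self, bucket: str):
        self.bucket = bucket
    def put(self, key, data):
        ...  # call the public provider's object-storage API here
    def get(self, key):
        ...

class PrivateCloudBlobStore(BlobStore):
    def __init__(self, endpoint: str):
        self.endpoint = endpoint
    def put(self, key, data):
        ...  # call the in-house object store here
    def get(self, key):
        ...

def archive_report(store: BlobStore, report_id: str, body: bytes) -> None:
    # Application logic sees only the abstraction; the pendulum can swing
    # between public and private targets without touching this code.
    store.put(f"reports/{report_id}", body)
```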


The Elusive Butterfly of Policy

I love it when visionaries bridge the gaps in their utopian depictions of The Future of IT with hand-waving explanations. As one such supposed visionary, I plead guilty. My most recent transgression: presenting the concept of “policy” in the context of data centers, workloads, etc., as if it were well understood and its supporting technologies mature enough for market adoption.

In spinning our tales of how great life will be when we finally complete this transformation, rather appropriately (given the ambiguities involved) labeled “cloud,” we realize we can’t tell a convincing story without the Policy character. He’s like the Sheriff in the Wild West: without him enforcing The Law, it’s just too dangerous a place for normal, everyday people.

Why do I believe policy is a gating factor for accelerating cloud adoption?

Although it is the favorite analogy of cloud evangelists, electric service is not the same as compute-as-a-service. Unlike the power company, which sends indistinguishable electrons into your home or business and eventually into the ground, cloud computing services require that data, and the intent to act upon it, move across the service boundary. And data is quite distinguishable, valuable, even dangerous in the wrong hands.

And that’s why policy is not simply “important” – it is essential to the success of cloud computing. Data and data access must be managed in a controlled manner, and cloud consumers will need guarantees to that effect.  Policy is the mechanism by which the degree and type of control is specified.  Policy enforcement ensures those controls are observed.

Easy. (Did you feel the rush of air on your cheek?) Seriously, although much progress has been made in beginning to express and implement policies in IT systems, it is still a largely manual, error-prone process. Even so, some technologies are beginning to emerge that give us a bit of hope that we can really solve the problem.
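To make the “specify, then enforce” split concrete, here is a minimal Python sketch of policy expressed as data with a separate enforcement check. The policy fields, roles, and regions are hypothetical, chosen only for illustration; real policy engines are far richer.

```python
# Policy as data plus an enforcement check, kept separate from the application.
from dataclasses import dataclass

@dataclass(frozen=True)
class DataPolicy:
    classification: str          # e.g. "confidential"
    allowed_regions: frozenset   # where this data may be processed
    allowed_roles: frozenset     # who may act on it

@dataclass(frozen=True)
class AccessRequest:
    requester_role: str
    target_region: str

def enforce(policy: DataPolicy, request: AccessRequest) -> bool:
    """Return True only if the requested action is within the stated policy."""
    return (request.requester_role in policy.allowed_roles
            and request.target_region in policy.allowed_regions)

hr_records = DataPolicy("confidential",
                        allowed_regions=frozenset({"eu-west"}),
                        allowed_roles=frozenset({"hr-analyst"}))

assert enforce(hr_records, AccessRequest("hr-analyst", "eu-west"))
assert not enforce(hr_records, AccessRequest("marketing", "us-east"))
```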

But that’s only the first chapter of a longer story, in which the right kinds of policies must be crafted in order to meet the intended objectives, such as the poster-child use case for policy: compliance. In future posts I’ll discuss how policy and automation are mutually dependent, and how together they will help us achieve policy enforcement and compliance objectives in tomorrow’s virtual data centers.


The Rebirth of Automation

I suppose by now it is fait accompli that cloud computing is going to revolutionize the information technology industry.  It certainly seems on track to do so.  One of the main reasons for its success is the highly dynamic and virtualized environments that most cloud platforms provide.  In these modern “data centers” every resource – from virtualized processors and storage to network configuration – can be defined, deployed, configured, managed and maintained through software interaction alone.  No more cables to plug and unplug. No more racks to re-wire just because a server is being re-purposed. RAM can be added to an under-provisioned system in seconds, and never a chassis opened. And the list goes on…

It is in this only recently achieved, totally virtual data center that one of IT’s oldest workhorses may finally see its full potential realized: automation. It’s the key to achieving maximum efficiency because it provides the following benefits:

  • Task Compression – This time-honored method of increasing efficiency through abstraction is at the heart of automation’s promise. Processes which would normally require many manual steps are reduced to a single invocation.
  • Speed – Not only are machines good at repeating long lists of steps, they can do it quickly! Automated processes can be made to execute as fast as possible, with no unnecessary delays between steps.
  • Repeatable Successful Outcomes – Even with the best documentation, manual execution is prone to human error, such as missing, misreading, or misinterpreting a step. Automation helps to ensure that the same set of steps is always followed, in the same order, with the same results, every time an operation is performed.

So, if automation is so great, why is it only now getting broad traction?

“Run Book Automation” (RBA) has been around for decades, but has never been an easy-to-use or widely accessible technology.  Sure, it’s been possible to fully (well, nearly fully) automate many of the most mundane, frequently used, or most error-prone manual processes through brute force of a million different APIs and a few special-purpose proxies and operating system agents.  But those implementations – with many moving parts under the control of as many different companies that have little incentive to ensure their pieces work “better together” – have proved extremely brittle.  The expense of the software and human resources to design, construct and maintain such systems has only made them useful to the largest of organizations.

A modern incarnation of RBA – “Orchestration” – is well positioned to take full advantage of the new totally virtualized cloud platforms, with visual tools that make the creation and maintenance of automated processes more intuitive, and translation of those high-level intentions to various APIs and endpoints effortless.  Not only that: Codified processes, or “orchestrations,” can be packaged up and shared in community, or sold as products themselves.  Thus, modern orchestration solutions unleash automation from the shackles of traditional RBA and make it available to the masses.  Since cloud computing is in effect “enterprise-grade data centers for the masses,” it is no surprise that orchestration will find its way into that very same market alongside it.
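To illustrate the shape of such a codified process, here is a minimal Python sketch of a “run book” expressed as code. The step functions are hypothetical stand-ins for real provider API calls; the point is the fixed ordering, the speed, and the repeatability called out in the list above.

```python
# A codified process ("orchestration"): the same ordered steps every time,
# executed as fast as the underlying services allow, with a clear failure
# point instead of a half-finished manual change.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("provision-web-tier")

def create_network():      log.info("network created")
def launch_instances(n):   log.info("launched %d instances", n)
def configure_firewall():  log.info("firewall rules applied")
def register_dns():        log.info("DNS record registered")

STEPS = [
    ("create network",     create_network),
    ("launch instances",   lambda: launch_instances(3)),
    ("configure firewall", configure_firewall),
    ("register DNS",       register_dns),
]

def run_orchestration():
    for name, step in STEPS:
        try:
            step()
        except Exception:
            log.exception("step '%s' failed; halting run book", name)
            raise   # a real orchestrator would roll back or retry here

if __name__ == "__main__":
    run_orchestration()
```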

Although orchestration is finally hitting its stride, uncontrolled automation is a recipe for disaster.  That’s where “policy” comes in… a topic for another day.


Are End Users Too Stupid To Self-Serve Workloads?

In one of my former lives I founded a tech support team on a college campus.  Staffed with computer science students, the team was responsible for rolling out and supporting Internet and desktop computing technologies across a population of several thousand people.  As CS students, I suppose they were a bit of an elitist group.  They invariably faced users who were largely unfamiliar with these new tools, and were simply ignorant of how they worked, which generated support tickets.  When this happened, the team used a special support code to tag the incident:

Code PEBKAC = “Problem Exists Between Keyboard and Chair”

Not as often used, was:

Code ID-10-T = “Idiot”

Of course, over time they learned the technology and these smug little codes fell into disuse.

Today, I’m seeing a similar attitude within some of the IT support cultures in companies that are considering rolling out cloud technologies. Specifically, the concept of self-service is drawing a considerable amount of snickering and eye-rolling from many an IT professional.  The idea that we would actually give a USER the ability to requisition SERVERS through a portal is laughable, and is immediately dismissed.  Why?

The reason invariably given: Users are too stupid to know when they need a server, and why, and they’d just end up creating a big mess that IT would have to clean up later.

But how much of that attitude is grounded in reality, and how much is based in the elitism which many IT organizations see being eroded by the user emancipation that cloud, and attendant concepts like self-service, bring?  I suspect the latter.

Keep in mind that “end user” is a broad term.  It could be the marketing professional needing a web site for a promotion, or a software dev who would normally have to wait months for a work order to complete before an environment is available to work in.  The point is: they are customers of the IT organization.

It can’t be denied that IT’s role in deploying workloads for the business is diminishing. Most users view the IT organization as a barrier to getting work done, not a helper. When it can be sidestepped, it will be. Unlike those users many years ago who had never accessed the Internet before, and had never had a desktop PC all to themselves, today’s end users are computer-savvy, and self-service is something they understand from many contexts in their experience, from online shopping to subscribing to Internet services like Skype. Doing it for the workloads they need to achieve their business objectives is natural and inevitable.

So if you find yourself scoffing at the very idea that an end user should be allowed to (gasp!) self-serve and deploy their own web servers, etc., as needed, take a moment to search your soul: Is it because you REALLY don’t think they’re capable of making a good decision, or because you’re afraid they just won’t need you anymore?

 


Why We’ll Gladly Pay More to Compute in the Clouds

The idea that public cloud computing is cheaper than traditional forms of IT staging, such as the on-premises data center and co-location, may have had legs in the early days of cloud’s buzz, but the truth has finally been widely recognized: cloud computing isn’t, at face value, cheap, or cheaper than what we’ve been doing up until now.

If that’s the case, then why is cloud catching on?  Isn’t the ultimate goal to lower cost and boost the bottom line, regardless of one’s business?

Moving to the cloud has a much broader impact than simply relocating one’s IT assets to an alternative hosting arrangement.  I think of it in much the same way that the value proposition of virtualization has been slowly, but fully, realized over the years.

In its early days, x86 virtualization was seen purely as a means of improving hardware utilization by increasing the application-to-box ratio.  “Consolidation” was the easy win, and it appeared to lower IT expenses because it reduced the hardware budget.

But time revealed that virtualization carries hidden costs associated with increased management burden, the most famous being “VM sprawl.”  By the time this became apparent, though, the really valuable capabilities of virtualization had begun to take center stage: live migration, rapid deployment, portability of workloads, dynamic resource allocation. Roll them into one term, “flexibility,” and you see why there is more value to virtualization than just bottom-line IT costs – it enables the overall business in new ways that were not possible or practical before.

Cloud computing is entering that phase of public awareness where its true benefits are being appreciated. The flexibility ascribed to virtualization also applies to cloud.  But there are others, such as disaster recovery, capacity on demand, carrier-grade reliability, expense-structured payment, and global reach. The value of these is not reflected in the bottom-line cost of the monthly bill.  It is woven into the business’s newfound agility and simplicity.

So the old rule holds true: only things that improve the bottom line will survive in the B2B marketplace. Cloud computing obeys that law by bringing new value to the table in ways not possible prior to its advent.  Although we’re still learning how to exploit these new capabilities, and to quantify their considerable untapped benefit, there’s no doubt the value is there, and worth the additional cost.

 


IaaS Exodus

I fear my thinking of late may be somewhat inconsistent: one part of my brain sees quite clearly that PaaS (platform as a service) is destined to win as the preferred method for developing and staging software in the future.  I firmly believe that.  At the same time, part of me has been assuming that the “migration to cloud” is simply swapping traditional OS/server infrastructure – which I call “legacy IaaS” – for private or public cloud IaaS.  And that, I believe, is not going to be the case in the limit.

Perhaps as a software guy working for a traditionally hardware company, I’ve allowed part of me to succumb to fallacious thinking: that people will always want to manage their own infrastructure – install their own operating system, configure it, and then stage software on top of that stack.  But my belief that PaaS wins is at complete odds with that thinking.

There’s no doubt that in the short term, consumers of IT will find it valuable to simply migrate their current legacy IaaS workloads, whether they are physical or virtual, from traditional data centers to IaaS cloud platforms.  It’s relatively straightforward and usually doesn’t require an application rewrite. The trend is accelerating and will continue for some time for various reasons: to control spending, be more agile in responding to market demands, or to take advantage of cloud-specific benefits such as better global reach or multiple staging points across geographies.  But these benefits are inherent to the fundamental nature of “cloud computing” and are not specific to IaaS.

It will become quite clear, quite soon, that IaaS is not the right delivery model on which to build end-user consumed software (SaaS).  Managing one’s own OS and run-time environment stack may give one a sense of control and security, but it isn’t scalable or competitive in the face of PaaS alternatives.  At some tipping point, migrations from traditional IaaS to cloud IaaS will be redirected to PaaS.  More than that, those who have migrated to cloud IaaS will make a final migration to PaaS, leaving IaaS behind for good.

This “IaaS exodus” is inevitable.  Whether it be on-premises or off, private or public, managed or unmanaged – Infrastructure as a Service can’t compete with Platform as a Service for application development and delivery.  Infrastructure will always be there, but it will be running the PaaS – not the apps.

 


Why PaaS Wins

Working as a technology strategist for a large IT vendor, I’m tasked with thinking about technology from the customer perspective.  But often customers ask for things not because they want them, but because they have underlying needs they are trying to address in the best way they know how.  Their ask is often a secondary effect of a larger issue.

I think IaaS and PaaS are good examples of an expressed want versus a solution that addresses the underlying need.  IaaS is of the former kind because it is an incremental enhancement to a now-outdated paradigm for implementing IT – the general purpose operating system (GPOS).

GPOSes grew out of an era when hardware was insanely expensive and isolated.  This necessitated the evolution of the lowly “monitor” program into an all-purpose middleware layer, abstracting hardware into easier-to-program-for services and securely and efficiently multiplexing those resources across as many concurrent processes as possible.  Although these advancements optimized resource utilization, they made application programming both easier and more difficult in various respects, and did not initially account for internetworking on the scale of today’s Internet.

We’ve since entered a different era of inexpensive, broadly distributed, well connected hardware.  Backing resources for compute, storage, and network can be so highly abstracted by this new “distributed operating system” that application development can be vastly simplified, and the staging and execution of those applications are no longer constrained by hardware boundaries.  That’s PaaS.
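As a toy illustration of that idea, here is a self-contained Python sketch in which a stand-in “platform” object owns routing and storage, and the developer writes only a handler. The TinyPlatform class is hypothetical, not any vendor’s SDK; it exists solely to show that the application code carries no OS, server, or network concerns.

```python
# A toy stand-in for a PaaS runtime: it owns routing and abstracted storage,
# so the developer supplies only business logic.
class TinyPlatform:
    def __init__(self):
        self.routes = {}
        self.object_store = {}          # abstracted storage, no disks or volumes

    def http_handler(self, path):
        def register(func):
            self.routes[path] = func
            return func
        return register

    def invoke(self, path, **kwargs):
        return self.routes[path](**kwargs)

platform = TinyPlatform()
platform.object_store["invoices/42"] = {"total": 99.0}

@platform.http_handler("/invoices")
def get_invoice(invoice_id):
    # Only business logic lives here; no OS, server, or network to manage.
    return platform.object_store.get(f"invoices/{invoice_id}", {})

print(platform.invoke("/invoices", invoice_id=42))   # {'total': 99.0}
```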

But making the jump from old to new, to take advantage of this new distributed world, requires rewriting “legacy” apps. And once that decision is made, one must determine in what language and using what platform –  there’s still looming uncertainty over which ones will survive the coming shakeout. It’s painful and doesn’t happen overnight, but that PaaS transition will happen in time, and here’s why…

IaaS is a hack that makes the transition less painful by providing a waypoint on the journey between old and new.  It’s the old GPOS wearing a fuel-guzzling rocket-pack, allowing legacy apps to take to the clouds, expensively and clumsily.  It doesn’t solve the problem, but it gets you a few steps closer to the ideal.

In the end, I believe IaaS becomes the ultimate salesman for PaaS, so I’m not opposed to promoting it as an intermediate solution.  As organizations move legacy apps to IaaS, they’ll reap only a small set of the benefits they could be enjoying if they went all in with PaaS. By the time that realization hits, the PaaS wars will be over, and the inertia against fully embracing it will be easily overcome.
