A New Way to Think about Cloud Service Models (Part 3)

Part 3 – PaaS: A Spectrum of Services

In my previous post, I started developing the notion of a “universal” cloud service model in which IaaS sits at one end of a spectrum and SaaS at the other. But spectrum of what?  I think it is the spectrum of services available from all cloud sources, distributed across a continuum of cloud service abstractions and the types of applications built upon them, as shown in the figure below.

Spectrum of Applications across Cloud Computing Service Models - Copyright (C) 2015 James Craig Lowery

This figure shows many cloud concepts in relation to each other.  The horizontal dimension of the chart represents the richness of the services consumed by applications: The further left an application or service appears in this dimension, the more generic (i.e., closer to pure infrastructure) are the services being consumed.  Conversely, the further right, the more specific (i.e., more like a complete application) they are.

The vertical dimension captures the notion of the NIST IaaS/PaaS/SaaS triumvirate. The lower an app or service appears in this dimension, the more likely it is associated with IaaS; the higher, the more likely SaaS.  Clearly, the horizontal and vertical dimensions express the same concept using different terms, as emphasized by the representative applications falling along a straight line of positive slope.

In this interpretation, with the far left and bottom analogous to IaaS and the far right and top analogous to SaaS, PaaS is left to represent everything in between.  Large-grained examples of the types of services that would fall into a PaaS category are shown along the Application Domain Specificity axis, anchored on the left by “Generic” (IaaS) and on the right by “Specific Application” (SaaS).

Traditional Datacenter Applications, shown in the lower left of the diagram, are simply typical “heavy” stacks of operating system, middleware, and application, with some persistent data, all packaged in the form of a logical physical machine (usually a virtual machine). As previously mentioned, this type of application is the direct result of migrating legacy applications into the cloud using the familiar IaaS model, taking no advantage of the richer services cloud providers offer.

Moving from left to right, the next less-generic (more-specific) type of service is the first PaaS-proper service most cloud adopters will encounter: structured data persistence. Indeed, most successful IaaS vendors have naturally grown “up the stack” into PaaS by providing content-addressable storage, structured and unstructured table spaces, message queues, and the like.  At this level of abstraction, traditional datacenter applications have been refactored to use cloud-based persistence as a service, instead of managing disk files or communicating through non-network interfaces to database management systems.
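
To make that refactoring step concrete, here is a minimal sketch of the same “save a record” operation written two ways: first against a local disk file inside the VM (the traditional pattern), then against an S3-compatible object store consumed as a service through the boto3 library. The bucket name, object key, and record are hypothetical placeholders, not a prescription for any particular provider.

```python
# Minimal sketch: the same persistence operation, before and after refactoring.
import json
import boto3

record = {"order_id": "1001", "status": "shipped"}

# Traditional pattern: persistence tied to a disk inside the VM.
def save_to_local_disk(rec, path="/var/data/order-1001.json"):
    with open(path, "w") as f:
        json.dump(rec, f)

# Persistence-as-a-service pattern: the application talks to a network API;
# where the bytes physically live is the provider's concern.
def save_to_object_store(rec, bucket="app-data", key="orders/1001.json"):
    s3 = boto3.client("s3")
    s3.put_object(Bucket=bucket, Key=key, Body=json.dumps(rec).encode("utf-8"))
```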

The third typical stage of application evolution moving up and to the right is the Custom Cloud Application.  At this stage, the application is written using programming patterns that conform to best-practice cloud service consumption techniques.  Not only is cloud-based persistence used, it is exclusive – no other forms of persistence (storing something in a file in a VM, for example) are allowed. Although enterprise application server execution environments such as J2EE are usually incorporated into the architecture to create efficient common runtimes, it is when they are combined with network-delivered services for identity and optimization, and with programming patterns that emphasize functional idempotence, that a new breed of highly available, reliable, and scalable (even when backed by unreliable infrastructure) applications emerges.  Still, the logic comprising the core of the application is largely custom-built.
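
As a small illustration of the idempotence pattern mentioned above, the sketch below keys an operation on a client-supplied request ID, so that replaying the same request (after a timeout against unreliable infrastructure, for instance) cannot apply the change twice. The in-memory dictionary stands in for a cloud-based persistence service, and all names are illustrative only.

```python
# Minimal sketch of an idempotent operation keyed by request ID.
processed = {}  # request_id -> result; stands in for a persistence service

def charge_account(request_id, account, amount):
    # If this exact request was already handled, return the original result.
    if request_id in processed:
        return processed[request_id]
    result = {"account": account, "charged": amount, "status": "ok"}
    processed[request_id] = result
    return result

# Replaying the same request produces one logical charge, not two.
assert charge_account("req-42", "acct-7", 25.0) == charge_account("req-42", "acct-7", 25.0)
```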

The fourth stage sees the heavy adoption of code reuse to create cloud applications.  Although the new model described in the previous paragraph still dictates the architecture, the majority of the code itself comes from elsewhere, specifically from open source.  The application programmer becomes more of a composition artist, skilled in his or her knowledge of what code already exists, how to source it, and how to integrate it with the bit of custom logic required to complete the application.

The fifth PaaS model, tantamount to SaaS, is the application that is composed from APIs.  This natural progression from the open-source reuse case above keeps with the theme of composition, but replaces the reusable code with access to self-contained micro-services executing in their own contexts elsewhere on the internetwork. A micro-service can be thought of much like an object instance in classic Object Oriented Programming (OOP), except that this “object” is maximally loosely coupled from its clients: it could be running on a different machine with a different architecture and operating system, and it could be written in any language. The only thing that matters is its interface, which is accessed via internet-based technologies.  Put more succinctly, a micro-service is a domain-constrained set of functions presented by a low-profile executing entity through an IP-based API.

An example is an inventory service that knows how to CREATE a persistent CATALOG of things, ADD things to the catalog, LIST the catalog, and DELETE items from the catalog. This is similar in concept to generic object classes in OOP.  In fact, an object wrapper class is a natural choice in some situations to mediate access to the service.  The difference is that, instead of creating an application through the composition of cooperating objects in a shared run-time, we now create applications through the composition of cooperating micro-services in a shared networking environment.
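
To illustrate the wrapper-class idea, here is a minimal sketch of a local class that mediates access to a hypothetical inventory micro-service over HTTP. The base URL and the /catalogs resource layout are invented for the example; the point is that the client depends only on the service’s IP-based interface, not on its implementation.

```python
# Minimal sketch: a local wrapper class mediating access to a remote
# inventory micro-service. URL and resource layout are hypothetical.
import requests

class InventoryCatalog:
    def __init__(self, base_url="https://inventory.example.com"):
        self.base_url = base_url

    def create(self, name):                      # CREATE a persistent catalog
        r = requests.post(f"{self.base_url}/catalogs", json={"name": name})
        r.raise_for_status()
        return r.json()["id"]

    def add(self, catalog_id, item):             # ADD a thing to the catalog
        requests.post(f"{self.base_url}/catalogs/{catalog_id}/items",
                      json=item).raise_for_status()

    def list(self, catalog_id):                  # LIST the catalog
        r = requests.get(f"{self.base_url}/catalogs/{catalog_id}/items")
        r.raise_for_status()
        return r.json()

    def delete(self, catalog_id, item_id):       # DELETE an item
        requests.delete(
            f"{self.base_url}/catalogs/{catalog_id}/items/{item_id}"
        ).raise_for_status()
```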

One additional aspect of the figure worth elaborating is how certain qualitative properties shift as one moves from point to point in this spectrum. The potential cost and the ability to control the minute details of the infrastructure are both greatest in the lower left of the diagram. Clearly, if one is building atop generic infrastructure such as CPU, RAM, disk, and network interfaces, one has the most latitude (control) in how these will be used.  It should also be clear that in forgoing the large existing body of work that abstracts these generic resources into services more directly compatible with programming objects, and in eschewing the benefits of shared multi-tenant architectures, one will likely pay more than is necessary to achieve the objective.  Conversely, as one gives up control and moves to the upper right of the diagram, the capability to quickly deliver solutions (i.e., applications) of immediate or near-term value becomes greater, and the programming and operational teams are further spared many of the repetitive and mundane tasks associated with optimization and scaling.

Summary

So, that’s my take on cloud service models.  There’s really only one: PaaS. It’s the unifying concept that fits the general problem cloud can ultimately address.  But my concept of PaaS differs from traditional notions embodied in things like Cloud Foundry, OpenShift, and the like.  Those are but a small slice of the entire spectrum, and their “platform” is much more limited in scope than the view that the entire Internet and its plethora of services is the “platform.”  In a multi-cloud world, where we need the ability to use services from many sources and to change our selections at any time as our needs or their availability change, this is the only definition that makes sense.


A New Way to Think about Cloud Service Models (Part 2)

Part 2 – A Universal Service Model

In my previous post, I introduced the idea that all cloud services are “platform as a service,” especially when we think of the entire Internet as the platform, and the various services available to applications at run-time.  These services include, but are certainly not limited to, what is broadly called Infrastructure as a Service. IaaS has been extremely popular and successful with traditional IT practitioners because it fits relatively easily into their operational models.  Still, it is the most basic and least developed of the spectrum of services the net has to offer.

It is a strategic blunder to think that cloud’s ultimate value can be tapped by simply applying traditional IT datacenter concepts and patterns.  In fact, the quest to add cloud to one’s IT portfolio without undertaking significant software development and operational reform will likely lead to failure. Certainly the outsourcing of infrastructure as a service reduces risk, simplifies operations, and potentially reduces overall costs to a business if implemented properly.  But to stop there is to cheat oneself of the true riches the cloud model can unearth when fully pursued.

To understand this better, consider that the only purpose of IT operations is to provide data processing and storage that enable a business to meet its objectives:  “It’s all about apps and data.” IT becomes a drag on the business if it fails to adopt capabilities and transformative operational models that are more directly relevant to providing the application and data storage facilities the business needs.  Whereas in the past, the most directly relevant capabilities may have been providing raw compute power and storage capacity tailored to custom applications, the reality of our current environment is that the various wheels that make apps and data go have been reinvented so many times that they are easy to find and often free.  The global Internet and open source software are the primary reasons for this change.

The idea of transformation is at the heart of cloud computing’s real value.  One must go beyond IaaS to fully reap its rewards, and most have done so, perhaps unknowingly, by using the service model at the opposite end of the spectrum: “Software as a Service” or SaaS.  In this model, the service presented is the application itself, and using the service is just a matter of paying subscription fees to cover a number of users, connections, queries, etc., then accessing the application with a standard web browser or via an API.  This service model is decades old, but standing up such a service has historically been very difficult. Until recently, none but the largest organizations had the expertise, resources, and fortitude to build them.

I have said that IaaS sits at one end of a spectrum and SaaS at the other. But spectrum of what?  I contend it is the spectrum of services available from all cloud sources, distributed across a continuum of cloud service abstractions and the types of applications built upon them.  I’ve created a diagram, shown in the figure below, to facilitate a complete discussion of this concept, which I’ll tackle in my next installment. For now, take a look and start thinking about the implications of this “universal service model.”

Spectrum of Applications across Cloud Computing Service Models - Copyright (C) 2015 James Craig Lowery


A New Way to Think about Cloud Service Models (Part 1)

Part 1: IaaS != Entrée

[Image: IaaS is the appetizer, not the main course]

Most discussions aimed at answering the question “What is cloud computing?” start with the three service models as defined by NIST: IaaS, PaaS, and SaaS.  At this point, I won’t offend you by assuming you need remediation in the common definitions of Infrastructure, Platform and Software as a service.  What may be more interesting is looking at these three models in a new way that casts them not as three separate approaches to consuming cloud resources, but as three aspects of the same model, which is ultimately an expanded, more inclusive definition of Platform as a Service than has previously been purveyed.

The key to successfully harnessing the cloud is to capitalize upon it for what it is, and not to force it to be something it is not.  The maturity and ubiquity of the Internet Protocol suite in conjunction with open source software and the global Internet itself have made it both possible and cost-effective to present resources such as compute, memory, persistent storage, and inter-process communication as network accessible functions.  The cloud is therefore best modeled as a set of services exposing those resources at varying levels of abstraction.  In this alternate interpretation, it is the range of richness in those services that gives us IaaS, PaaS, and SaaS.

For example, because all compute stacks have hardware at their foundation, an obvious service is one that directly exposes hardware resources in their base forms: compute as CPU, memory as RAM, persistent storage as disk, and the network as an Ethernet interface.  The model is that of a hardware-defined “machine” with said resources under the control of a built-in basic input and output system (BIOS).  This service is readily understood and quickly adopted by those who have worked in and built traditional datacenters, in which server hardware is the starting point for information technology services.  The fact that the “machines” may be implemented virtually or on “bare metal” is usually not very important.  What is important is that the machines as exposed through the service can – with few exceptions – be used just like the “real” servers with which one is already familiar.
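
Here is a brief sketch of what consuming such a “machine” service can look like in practice, using the Apache Libcloud compute API as one possible client library; the credentials, region, node name, and the choice of the first listed size and image are placeholders, not recommendations.

```python
# Minimal sketch: asking an IaaS service for a "machine" with a given
# size (CPU/RAM/disk) and image, then treating it like a familiar server.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

cls = get_driver(Provider.EC2)
driver = cls("ACCESS_KEY", "SECRET_KEY", region="us-east-1")  # placeholder credentials

sizes = driver.list_sizes()     # available CPU/RAM/disk combinations
images = driver.list_images()   # available operating system images

node = driver.create_node(name="legacy-app-01",
                          size=sizes[0],
                          image=images[0])
print(node.public_ips)          # reachable like any traditional server
```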

The services described above are commonly labeled “Infrastructure as a Service” because the usage model closely mimics that of traditional datacenter infrastructure.  IT professionals who come to the cloud from traditional backgrounds quickly adapt to it and see obvious ways to extend their operational models to include its use.  Indeed, the success of IaaS to date can be attributed to this familiarity.  Unfortunately, that success may lead aspiring cloud adopters to believe that this model is the epitome of cloud computing, when it in fact has the least potential for truly transforming IT.

In my next installment, I’ll explain why the IaaS model is the least interesting and transformative of the service models, why more abstract PaaS models have seen slow penetration, and the recent confluence of technologies that is finally bridging the gap from IaaS to PaaS and beyond.


Privacy: The Cloud Model’s Waterloo?

Part 2 – How Privacy Could Cripple the Cloud

Security and privacy are often mentioned together. This is natural, because they are related, yet they are distinct topics.  Security is a technical concept concerned with the controlled access, protection, integrity, and availability of data and computer system functions.  Privacy is a common policy objective that can be achieved in part by applying security principles and technical solutions.  In previous posts, I’ve discussed how cloud security, when approached properly in the cloud environment, is not really any more of a problem than security in IT in general.  Privacy, as discussed in Part 1 of this series, is unfortunately not so “simple.”

I’ve already made the case that some commonly held beliefs about what is inherently secure or insecure are based on control of the physical infrastructure and ownership of the premises where the infrastructure is located.  I’ve further posited that these ideas are outdated, and at the very least insufficient to ensure secure cloud systems (as well as traditional data centers, for that matter).

In the discussion of privacy, we’ve seen governments urged to action by their citizenry to combat the erosion of individual control of personal information.  Unfortunately, lawmakers have approached such legislation from the outmoded perspective of physical security.  Privacy laws are rife with injunctions that PII must only be stored or transmitted under circumstances that derive from a physical location.  The European Union, for example, forbids the PII of any EU citizen from being stored or transmitted outside of the EU. Although treaties such as the EU-US Safe Harbor Framework, established in 2000, facilitate some degree of data sharing across the EU-US boundary, they are showing signs of failure when applied to cloud-based application scenarios. Although laudable in their intent, privacy laws that depend upon containing data within a particular jurisdiction can end up harming both privacy outcomes and the ability of the “protected” individual to reap the full benefit of cloud-based global services.

First, given the highly publicized data breaches of large corporations and government entities, it is obvious that data behind a brick wall is still quite vulnerable.  Laws that mandate limits on where data can be stored may convey a false sense of security to those who think that meeting the law’s requirements results in sufficient protection.  Socially engineered attacks can penetrate all but the most highly guarded installations, and once the data has been extracted, it is impossible to “undisclose” it. Again, it is the perimeter-based model that is not up to the task of protecting data in a hyper-connected world. Second, limitations on how data can be shared and transmitted can negatively impact the owners of PII when they cannot be serviced by cloud vendors outside of their jurisdiction.  Frameworks such as Safe Harbor are band-aids, not solutions.

One solution is to endow sensitive data with innate protection, such that wherever the data goes, its protection goes with it. A container model for self-protecting data allows the owner to specify his or her intentions regarding the data’s distribution and use, regardless of its location; it is, in effect, the zero trust model applied to data.  Rather than depend on a perimeter and control of physical infrastructure to ensure privacy objectives are met, the policy is built into the data container, and only when the policy is followed will the data be made available.
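
As a purely illustrative sketch (not a description of any existing product), the snippet below shows the shape of such a container: the payload is encrypted, the owner’s policy travels with it, and the plaintext is released only when the presented context satisfies that policy. The policy fields and context are invented, and real key management is deliberately glossed over.

```python
# Illustrative sketch of a self-protecting data container with an embedded
# use policy. Policy language, context fields, and key handling are invented.
import json
from cryptography.fernet import Fernet

def seal(payload, policy):
    key = Fernet.generate_key()
    token = Fernet(key).encrypt(json.dumps(payload).encode("utf-8"))
    return {"policy": policy, "ciphertext": token}, key

def open_container(container, key, context):
    # Enforce the owner's intent wherever the container happens to be stored.
    if context.get("purpose") not in container["policy"]["allowed_purposes"]:
        raise PermissionError("use not permitted by the data owner's policy")
    return json.loads(Fernet(key).decrypt(container["ciphertext"]))

container, key = seal({"name": "Ada", "dob": "1990-01-01"},
                      {"allowed_purposes": ["billing"]})
print(open_container(container, key, {"purpose": "billing"}))   # allowed
# open_container(container, key, {"purpose": "marketing"})      # raises PermissionError
```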

Of course, such a solution is easier described in a paragraph than implemented, although many valiant efforts have been attempted with varying degrees of success.  Still, a viable implementation – one that is scalable, robust, and easily made ubiquitous – has yet to be created.  Unfortunately, the wheels of governments and legal systems will not be inclined to wait for it.  Without educating policy makers to better understand the real threats to privacy rather than the perceived ones, we invite a continued spate of ill-conceived requirements that could make the problem worse while ironically robbing “protected” citizens of the full value of cloud technology.


Privacy: The Cloud Model’s Waterloo?

Part 1 – Privacy Ain’t the Same As Security

Most people consider the word privacy in the cloud context solely in terms of deployment models, where a private cloud is one reserved strictly for the use of a specific group of people who share a common affiliation, such as being employed by the same company. But it is quickly becoming evident that the broader context of global legal systems and basic human rights is where cloud computing may meet its privacy Waterloo.

The concept of personal privacy is present in all cultures to varying degrees.  Western cultures have developed the expectation of universal individual privacy as a right.  As such, privacy is a legal construct, not a technical one.  It is founded upon the idea that information not publicly observable about a person belongs to that person, and is subject to their will regarding disclosure and subsequent use.

By default, most legal systems require the individual to opt out of their rights to privacy, rather than opt in.  This means that, unless there is specific permission from the owner of the data to allow its use in specific ways, the use is unlawful and a violation of that person’s privacy rights. Examples include the United States healthcare privacy laws, and the European Union’s privacy directive.

There are instances to the contrary, where privacy protection requires an opt-in.  One is the privacy-related right to not be approached without consent.  An example is the US Federal Trade Commission’s National Do Not Call Registry, which one must actively join in order to (supposedly) avoid unwanted marketing telephone calls. This solution also demonstrates the difficulty in balancing the privacy of the individual with the free-speech rights of others.

The details of privacy law vary across jurisdictions, and historically the laws have been somewhat anemic.  Before the printed word, propagation of personal information could only occur by word of mouth, which was highly suspect as mere gossip.  The printed word resulted in more accurate and authoritative data communication, but the cost rarely allowed for transmitting personal details outside the realm of celebrity (in which case it was considered part and parcel of one’s celebrated position). These limitations meant the laws were rarely tested, and when they were, it happened infrequently enough to manage on a case-by-case basis. But, as with so many other legal constructs, computer systems and networking have strained the law to its breaking point.

The modern, democratized Internet has enabled the near instantaneous propagation of data at little expense, by almost anyone, to broad audiences.  In supposed acts of public service, “whistle blowers” purposefully disclose private information to call attention to illicit activities or behaviors of the data owners: whether their ends justify their means is hotly debated, though it is a clear violation of privacy. Vast databases of personal information collected by governments and corporations are at much greater risk of being copied by unauthorized agents, which most people agree is data theft.  In these cases, it is fairly easy to spot the transgressor and the violation.

But the free flow of information in the cloud computing era brings ambiguity to what once seemed straightforward.  Individuals who volunteer personal information often do not realize just how far their voluntary disclosure may propagate, or how it might be used, especially when combined with other information gleaned from other sources.  The hyper-connected Internet allows data from multiple sources to be correlated, creating a more complete picture of an individual than he or she may know, or would otherwise condone.  The data may come from a seemingly innocuous disclosure such as a marketing questionnaire, or from public records which, until today, were simply too difficult to find, much less match up with other personally identifiable information (PII).  Attempts to ensure data anonymity by “scrubbing” it of obvious PII such as name, address, phone number, and so on are increasingly ineffective, as something as simple as a time stamp can tie two records together and lead to an eventual PII link in other data sets.
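
A toy example of that linkage risk, with entirely invented data: two datasets that share nothing but a timestamp can, when joined on it, attach a diagnosis from a “scrubbed” record to a named individual.

```python
# Toy illustration of re-identification via a shared timestamp. All data invented.
scrubbed_health = [{"patient": "anon-17", "visit_time": "2015-03-02T14:05", "diagnosis": "flu"}]
public_signin   = [{"name": "Jane Doe", "signin_time": "2015-03-02T14:05"}]

linked = [dict(h, **p) for h in scrubbed_health for p in public_signin
          if h["visit_time"] == p["signin_time"]]
print(linked)  # the diagnosis is now attached to a name
```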

This particular problem is one of the negative aspects of big data analytics, by which vast sources of data, both structured (like database tables) and unstructured (like tweets or blog posts), can be pulled together to find deeper meaning through inferential analysis.  Certainly, big data analytics can discover important trends and help identify solutions to problems by giving us insight in a way we could never have achieved before. The scope, diversity, size, and accessibility of data, combined with cheap, distributed, open source software, have brought this capability to the masses.  The fact that we can also infer personal information that the owner believes to be private, and has not given us consent to use, must also be dealt with.

As cloud computing continues on the ascendant, and high-profile data breaches fill the news headlines, governments have been forced to revisit their privacy laws and increase protection, specifically for individuals.  In jurisdictions such as the United States, privacy rules are legislated and enforced by sector.  For example, the Health Insurance Portability and Accountability Act (HIPAA) established strict privacy rules for the healthcare sector, building upon previous acts such as the Privacy Act of 1974.  Although the Payment Card Industry (PCI) standard is not a law, it is motivated by laws in many states designed to protect financial privacy and guard against fraud. In the European Union, the Data Protection Directive of 1995 created strict protections of personal data processing, storage, and transmission that apply in all cases.  This directive is expected to be superseded by a much stronger law in coming years.

In an environment where there are legal sanctions and remedies for those who have suffered violations of their privacy, one is wise to exercise caution in collecting, handling, storing, and using PII, regardless of the source.  Cloud technologies make it all too easy to unintentionally break privacy laws, and ignorance is not an acceptable plea in the emerging legal environment.  Clearly, for cloud to be successful, and for us to be successful in applying it to our business problems, we need systematic controls to prevent such abuses.

But is a failure to guarantee privacy in the cloud enough to kill the cloud model, or hobble it to insignificance?  More on this line of thinking in Part 2.


Becoming an IoT Automation Geek

I’ve long wanted more automation and remote control over my house, but until recently that wasn’t possible without a lot of special project boards, wiring, gaps in functionality and – most importantly – time. Now, with the Internet of Things (the latest buzzword for automating and monitoring using Internet technologies for transport) this is becoming so much easier, and something for which I can make time.  It is also somewhat synergistic with my cloud architecture role at Dell: Although IoT is assigned to others and not to me, it still warrants some first-hand experience if I am to be a fully competent chief cloud architect.  (At least, that’s my rationale!)

Two weeks ago, the two thermostats in our house were replaced with Honeywell WiFi models. It makes it SO easy to program a schedule when you can do it in a web interface! We never have to touch the controls now – they change for different times of the day automatically, and we can control them from our phones remotely if necessary.

I’ve just completed upgrading the pool control system to include WiFi connectivity and Internet remote monitoring and management. I upgraded the pool control to a Jandy AquaLink RS Dual Equipment model, so that now pool, spa, spa heater, spa lights, blower, and pool solar heater are all unified into a single controller, controllable via the network.

Now, the only missing item is the pool light, which is wired through a toggle switch on the other side of the house, near the master bedroom. Rather than run a wire from the controller to that switch, I’ve decided to branch out further into home automation with a SmartThings hub. I’ll replace the pool light switch with a smart toggle switch, and I’ll put a relay sensor in the Jandy controller so that when we “turn on” the pool light via the pool remotes (phones, etc.), the controller will close the relay, the relay will send an event to the SmartThings hub, and the hub will then turn on the actual switch.

Since I’m putting in the SmartThings hub, I can now add more Z-Wave IoT devices to the house and solve some other quirky problems, like turning the porch lights on and off at dusk/dawn and power cycling my server or desktop if necessary when traveling. I’m sure I’ll think of others. With SmartThings, you program your own apps and device drivers in Groovy, so I can make it do just about anything I want.  Well, at least that’s the theory.  I’m supposed to receive the hub and devices tomorrow, so it shall be tested shortly.


The Myth of Cloud Insecurity (cont.)

Part 3 – Leaping the Chasm

In my previous posts (Part 1, Part 2) in this series, we established the theory that the human species has come to rely on physical ownership and control of something as necessary, and sometimes even sufficient, for securing that thing.  This theory doesn’t hold in the cloud-era IT world, where virtual objects such as applications and data, though backed by physical machines, are more vulnerable in that virtual world than they are in the physical one in which they are rooted.  This line of reasoning is challenging for those who insist that “private” clouds will always trump public cloud services when it comes to providing the ultimate security.

The key to security in the cloud is to make the application responsible for its security, not the supporting infrastructure.  This implies that not all applications as they may currently exist in a private data center are suitable for “true” cloud deployment.  In fact, it is likely that very few are ready to make the jump into the cloud, because there is a gap in application fitness that must first be addressed.  Perhaps the staunch supporters of private cloud as the bastion of security take their positions not understanding, or simply being dismissive of, the fact that application transformation is a requirement for achieving the most secure public cloud deployments.

When the distributed perimeter pattern is followed by cloud application developers, the CSP is left to focus on creating the most secure service possible, meaning that it is generically configurable for each application, and that those configurations are difficult if not impossible for interlopers to modify to their own advantage.  Because CSPs operate at such large scale, they are arguably more likely to do a better job of securing these basic services than all but a handful of enterprise data centers could.  Of course, this model can only succeed when the application properly configures and uses the service.  To do otherwise could invite a security breach.

But even in the case where an individual cloud application is misconfigured and vulnerable, it is important to note that the cloud service itself and other applications are not likely affected by the vulnerability.  If a poorly designed app is compromised, the attacker can only gain control over what that app controls.  Even if the app has “superuser” privileges for the tenant (a very bad thing to do!), it cannot impact other tenants except possibly by side effect, such as launching a denial-of-service attack by saturating the network or CPU.  Even then, good CSPs will enforce limits that prevent the DoS attack from having significant negative impacts on other tenants. In effect, the CSP approaches each tenant as a potential security risk that could behave poorly, even if not officially “compromised.”

This is not to say that private cloud has no place in cloud-era IT. In the security context alone, there are many applications that will never be refactored for cloud, and for good reasons. It’s expensive to refactor.  The application may not have a very long lifespan, anyway.  It may be that the nature of the application is such that a full public-cloud makeover won’t add much value to it where users or the organization are concerned.  In these cases, it is best to either leave the application as is, or recast it partially to a cloud model to take advantage of efficiencies in the private cloud, while still depending on the existing security controls.

For new application development, however, there should be no question of adopting the full-on cloud model because it maximizes deployment flexibility.  Even if you don’t think you’ll ever deploy the app to public cloud, write it as if you will!   When we design and write applications to take responsibility for their own security, we are subscribing to the zero trust security principle at a fine level of granularity (the application, or its container).   By not trusting anything beyond the boundaries of the application’s container, and designing the application accordingly, we endow it with the strongest possible security profile, and thus the potential for greater mobility between cloud service providers and locations, regardless of deployment model.


The Myth of Cloud Insecurity (cont.)

Part 2 – Distributing the Security Perimeter

In part 1 of this series, we examined the fallacy that physical ownership and control of hardware, combined with multi-layered perimeter defense strategies, leads to the most secure IT deployments.  In the hyper-connected cloud era, this concept doesn’t hold. In private data centers that rely on this model (see figure A below), a breach in one co-resident system typically exposes others to attack. Indeed, some of the most successful virus and worm attacks follow the model of gaining entry first, then using information gleaned from the most vulnerable systems to break into others in the same data center without having to navigate the “strong” outer perimeter again.  Analysis of this type of leap-frog attack leads us to the conclusion that such a data center is actually not a good example of defense in depth. Defense in depth would dictate that, having gained new information from attacking one system, the attacker has at least as difficult a time using the newfound knowledge to reach its next victim as it did the first.

Applications that are written to run “in the cloud” make very few assumptions about the security of the cloud service as a whole.  Concepts such as a firewall move from the data center’s network boundary to the application’s network boundary, where rules can be configured to suit exactly the requirements of the application, rather than the aggregate requirements of all apps within the data center.  This distributed custom-perimeter model is shown in figure B, below.  By forcing applications to take on more responsibility for their own security, we make them more portable.  They can run in a private cloud, public cloud, on or off premises.  The degree to which ownership and proximity of the service affect the security of the application is much smaller when the application is designed with this self-enforced security model in mind.

[Figure: Security models – (A) nested perimeter defenses in a traditional data center; (B) distributed, per-application perimeters in the cloud model]

Although applications should not rely on data center security for their own specific needs, security at the data center level is still critical to the overall success of the cloud model.  Since the application relies heavily on the network connection to implement persistence and access, the application developer/operator has the responsibility of configuring the cloud services for that application to achieve those specific goals. The cloud service provider (CSP) is responsible for delivering a reliable, secure infrastructure service that, once configured for the application, maintains that configuration in a secure and available fashion, as shown in the figure below. Each tenant’s apartment is constructed by the CSP in accordance with the tenant’s requirements, such as size and ingress/egress protections. The CSP further guarantees the isolation of the tenants as part of its security obligations.

[Figure: The cloud service provider data center as an apartment building – each tenant’s apartment is built to that tenant’s requirements, and the CSP guarantees isolation between tenants]

As an example, let us return to our firewall scenario, but with a bit more detail.  Suppose an application requires only HTTP and HTTPS connections for all communication and persistence operations.  Upon deployment, the cloud service is configured to assign an Internet Protocol address and associated Domain Name System name to the application, and to implement rules to admit only HTTP and HTTPS traffic from specific sources to the application’s container.  Subsequent to deployment, the cloud service must ensure that the IP address and DNS names are not changed, that the firewall rules are not altered, that the rules are enforced, and that isolation of the application from other applications in other tenancies is strictly enforced.  In this way, the cloud service provider’s role is cleanly separated from that of the application developer/operator, because the CSP does not know or care what the firewall rules are. It only knows that it must enforce the ones configured.  This is in stark contrast to traditional data centers, where firewall rules are a merge, often with conflicts, of the various rules each application and subsystem behind the firewall might require.
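
For concreteness, here is a minimal sketch of that per-application configuration expressed with boto3 against an EC2-style security group; the VPC ID and source CIDR are placeholders. The rules belong to this one application rather than to a shared datacenter firewall, which is the essence of the distributed perimeter.

```python
# Minimal sketch: a per-application perimeter admitting only HTTP and HTTPS
# from a named source range. VPC ID and CIDR are placeholders.
import boto3

ec2 = boto3.client("ec2")

sg = ec2.create_security_group(
    GroupName="webapp-perimeter",
    Description="Per-application perimeter: HTTP/HTTPS only",
    VpcId="vpc-0123456789abcdef0",
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "203.0.113.0/24"}]},   # allowed HTTP sources
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "203.0.113.0/24"}]},   # allowed HTTPS sources
    ],
)
```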

In my third and final post in this series, I’ll further expound the virtues and caveats of this model and how, when properly implemented, it solves the security problem for applications in the most general case: public cloud.


The Myth of Cloud Insecurity

Part 1 – The False Sense of Physical Security

As is often the case when new paradigms are advanced, cloud computing as a viable method for sourcing information technology resources has met with many criticisms, ranging from doubts about the basic suitability for enterprise applications, to warnings that “pay-as-you-go” models harbor unacceptable hidden costs when used at scale.  But perhaps the most widespread and difficult to repudiate is the notion that the cloud model is inherently less secure than traditional data center models.

It is easy to understand why some take the position that cloud is unsuitable, or at the least very difficult to harness, for conducting secure business operations.  Traditional security depends heavily on the fortress concept, one that is ingrained in us as a species: We have a long history of securing physical spaces with brick walls, barbed wire fences, moats, and castles.  Security practice has long advocated placing IT resources inside highly controlled spaces, with perimeter defenses as the first and sometimes only obstacle to would-be attacks.  Best practice teaches the “onion model,” a direct application of the defense in depth concept, where there are castles within brick walls within barbed-wire fences, creating multiple layers of protection for the crown jewels at the center, as shown in Figure A below.  This model is appealing because it is natural to assume that if we place our servers and disk drives inside a fenced-in facility on gated property with access-controlled doors and locked equipment racks (i.e., a modern data center), then they are most secure.  The fact that we have physical control over the infrastructure translates automatically to a sense that the applications and data it contains are protected.  Similarly, when those same applications and data are placed on infrastructure we can’t see, touch, or control at the lowest level, we question just how secure they really can be. This requires more faith in the cloud service provider than most are able to muster.

[Figure: Security models – (A) nested perimeter defenses in a traditional data center; (B) distributed, per-application perimeters in the cloud model]

But the advent of the commercial Internet resulted in exponential growth of the adoption of networking services, and, today, Internet connectivity is an absolute “must have” for nearly all computing applications, not the novelty it was 20 years ago.  The degree of required connectivity is such that most organizations can no longer keep up with requested changes in firewalls and access policies. The result is a less agile, less competitive business encumbered by unwieldy nested perimeter-based security systems, as shown in Figure A above. Even when implemented correctly, those traditional security measures often fail because the perimeter defenses cannot possibly anticipate all the ways that applications on either side of the Internet demarcation may interact.

The implication is that applications must now be written without assumptions about or dependencies upon the security profile of a broader execution environment.  One can’t simply assume the hosting environment is secure, although that is still quite important, but for different reasons.  More on this line of thinking, and the explanation of Figure B and why it is more desirable in the cloud era, in my next post.


The Private Cloud Pendulum

We once had a unified vision of how cloud would be adopted by the average enterprise.  With all the uncertainty around the security, cost, and performance of public cloud, enterprises would naturally transform their private data centers into private clouds. Once successful in that incremental transition, they would be more comfortable with extending to the public cloud, resulting in the Holy Grail – a hybrid deployment.

We were only half right.

As events have unfolded, we see that hybrid clouds are indeed the desired outcome. However, the way-points on that journey are, for a number of cloud adoption profiles, reversed from what we had predicted. Instead of first stopping at private cloud, many skipped it entirely and went to using public cloud in spite of their own previously voiced objections. Why?

For all but the larger enterprises and the very capable mid-market IT organizations, a private cloud has often been too difficult to build and maintain. The technology existed, but it was far from turnkey. In the face of the challenging business and operational transformations that cloud demands, this was too distracting and unnecessary: public cloud was sitting there, gleamingly simple and ready-to-use without the burdens of operational hassles.

So, for these “I need it to be as easy as possible” cloud adopters, the pendulum largely bypassed private and swung to public. But will it stay there?

The costs of public cloud are tricky to pin down and manage. You are paying a premium for someone else to handle the headaches. For many projects this makes sense. For long-running activities that don’t take advantage of public cloud’s scale and global reach, you will likely pay more than is necessary.

The chickens, as they say, will come home to roost. The true cost of public cloud will become apparent, and the advantages of private cloud more compelling for the mid-market. The pendulum will return, and when it does, we’ll have evolved private cloud technologies to make them suitable for organizations looking for an appliance-like experience instead of building an IT practice around them.

But what if we’re wrong again? Regardless of how that pendulum swings, having choices and management capabilities that span multiple clouds (public, private, whatever) ensures you’ll be able to keep cost and utility in balance. The trick is to invest in tools and technologies that enable that choice: buy and design software that is infrastructure-agnostic, dependent only upon abstracted network services for which analogs are available from many providers. That way, whether it’s now or in the future, when you’re ready for private cloud (or it is ready for you), you’ll be in a position to further expand your selection of cloud targets by adding your own.
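
One way to picture that infrastructure-agnostic approach is a simple abstraction layer: the application codes to an interface, and concrete providers, public or private, are swapped in behind it. The provider classes below are illustrative stubs, not real SDK calls.

```python
# Minimal sketch: application logic depends on an abstraction, so the
# deployment target (public, private, whatever) can change without touching it.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class PublicCloudStore(ObjectStore):
    def put(self, key, data):
        ...  # call the public provider's API here (stub)
    def get(self, key):
        ...  # stub

class PrivateCloudStore(ObjectStore):
    def put(self, key, data):
        ...  # call the on-premises service here (stub)
    def get(self, key):
        ...  # stub

def archive_report(store: ObjectStore, report: bytes):
    # Only the caller decides which concrete store to pass in.
    store.put("reports/latest", report)
```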
