Cloud & Mirrors
The term cloud computing is appearing more and more frequently in the press and in advertising. The idea of the cloud has floated beyond trade media and appears regularly in mainstream publications. The definition of the cloud that can be inferred, pieced together from the context of these appearances, isn’t very clear. One might say that the exact definition of cloud computing is nebulous, or even, ahem, cloudy.
The most basic definition, one that underlies the more detailed definitions, says that cloud computing is the provision of computing resources at a remote site, accessible from the internet. This soft definition understandably creates strong positive responses in both IT marketers and IT consumers, since the basic idea of cloud computing is very appealing. Anyone who has had to worry about servers, and all of the baggage that comes along with them in our increasingly connected world, would find instant appeal in it. It is appealing to be removed from some, or all, of the technical minutiae that come with high-demand servers and resources, like components, space, power, connectivity, backups, and redundancy. But what are you really removed from when entrusting computing resources to the cloud?
The most progressive and technically impressive form of cloud computing is best described as utility computing. In this model, computing resources are billed as a metered service, similar to how public water and electric utilities are billed. Utility computing vendors may offer many kinds of low-level computing resources that are each billed separately. For example, Amazon’s AWS platform offers processing, storage, bandwidth, relational database, email, and load balancing as separately billed services.
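To make the metering concrete, here is a rough back-of-the-envelope sketch in Python. The rates are hypothetical placeholders, not Amazon’s actual prices, which vary by service, region, and tier; the point is only that each resource is metered and billed independently.

```python
# Rough sketch of metered, utility-style billing. The rates below
# are hypothetical placeholders, not any vendor's real prices.

HOURLY_INSTANCE_RATE = 0.10    # dollars per server-hour (hypothetical)
STORAGE_RATE_PER_GB = 0.05     # dollars per GB-month (hypothetical)
BANDWIDTH_RATE_PER_GB = 0.08   # dollars per GB transferred (hypothetical)

def monthly_bill(instance_hours, storage_gb, bandwidth_gb):
    """Sum independently metered resources into one monthly bill."""
    return (instance_hours * HOURLY_INSTANCE_RATE
            + storage_gb * STORAGE_RATE_PER_GB
            + bandwidth_gb * BANDWIDTH_RATE_PER_GB)

# One server running all month, 200 GB stored, 150 GB transferred.
print(f"${monthly_bill(24 * 30, 200, 150):.2f}")  # $94.00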
These resources may be enabled, disabled, or mixed depending on the needs of a particular project. All of these resources are accessible to developers through an application programming interface (API), so specific development skills are required to develop and maintain programs within the cloud. Applications must be coded against this specialized platform, or existing applications must be retrofitted and then migrated to it. Some existing, proprietary applications may not run at all in this environment. Utility computing introduces new development and maintenance challenges for organizations, while offloading the older challenges of hardware and connectivity maintenance.
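As an illustration of what consuming resources through an API looks like in practice, here is a minimal sketch using boto3, Amazon’s Python SDK for AWS (a later tool than this article, so treat it as illustrative only). The bucket name and file names are hypothetical, and the calls assume AWS credentials are already configured in the environment.

```python
# Minimal illustration of consuming cloud storage through an API,
# using boto3 (AWS's Python SDK). Assumes credentials are already
# configured; the bucket name and file paths are hypothetical.
import boto3

s3 = boto3.client("s3")

# Storage is just another API call: upload a file to an S3 bucket.
s3.upload_file("report.pdf", "example-bucket", "reports/report.pdf")

# Listing the stored objects is likewise a metered API request.
response = s3.list_objects_v2(Bucket="example-bucket", Prefix="reports/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```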
On the other end of the cloud computing spectrum is an offering called a Virtual Private Server, or VPS. A VPS is a hosted virtual server with dynamically allocatable resources. This kind of solution shares more similarities with traditional computing than utility computing does. A familiar Windows Remote Desktop or Linux shell login may be provided. Hosting fees may be billed monthly based on the amount of dedicated resources: processor, memory, disk space, and bandwidth are billed based on allocated capacity, rather than usage. These allocations, however, can be changed on the fly. Traditional applications may run on a VPS just as they would on a traditional server. No special retrofitting is necessary for existing applications, making migrations simple. Again, some older maintenance and hardware administration roles are offloaded to the service provider.
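For comparison, the VPS workflow is essentially familiar remote administration. The sketch below, using the paramiko SSH library, assumes key-based shell access; the hostname, username, and key path are hypothetical placeholders.

```python
# Sketch of the "familiar Linux shell login" a VPS provides, scripted
# with the paramiko SSH library. Hostname, username, and key path
# are hypothetical placeholders.
import paramiko

client = paramiko.SSHClient()
# Demo only; in production, verify host keys instead of auto-adding.
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("vps.example.com", username="admin",
               key_filename="/home/admin/.ssh/id_rsa")

# Run the same commands you would on a traditional server.
stdin, stdout, stderr = client.exec_command("uptime && free -m")
print(stdout.read().decode())

client.close()
```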
Cloud computing is not a magic bullet; not all responsibilities are outsourced to the vendor. It is important that organizations moving to cloud computing solutions understand their own requirements, and who is responsible for meeting them. High-profile outages have occurred recently, including an Amazon AWS outage that resulted in downtime for high-traffic websites and, in some cases, permanent loss of data. The fine print of Amazon’s service agreement may give little recourse to affected customers. Microsoft’s cloud computing platform, Azure, had a similar issue during its Community Technology Preview in 2009. Even the largest computing vendors are not immune to events causing significant downtime.
It is clear that, when moving to a cloud computing solution, many of the old administration and maintenance considerations still hold true in one way or another. The organization must understand what services are provided by the hosting vendor, and what responsibilities remain. Many familiar questions arise:
- Service Level Agreement – What service level is being promised by the cloud computing vendor? How does this match up with the performance requirements of your organization? (A minimal availability probe is sketched after this list.)
- Backup and Retention – Are backups maintained by the vendor, and if so, how long are these backups retained? What is the turnaround time for data restore requests? Is this suitable for your organization?
- Security – What security mechanisms are in place, and how are they managed? Who is responsible for patching, anti-virus, and other routine computer hygiene tasks?
- Disaster Recovery – What plans exist to restore availability in the event of a disaster? Can resources be replicated to a geographically disparate data center?
- Portability – Can the hosted application and data be easily migrated to another provider, or in-house, if necessary? Is the computing platform a widely used format (for example, common virtual server disk images), or something more proprietary?
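On the service level question in particular, an organization need not take the vendor’s numbers on faith. Below is a minimal, hypothetical availability probe using only the Python standard library; the endpoint URL, probe interval, and sample count are placeholders, and a real monitor would probe from multiple locations and persist its results.

```python
# Minimal availability probe for checking a vendor's uptime claims
# yourself. URL, interval, and sample count are hypothetical
# placeholders; a real monitor would probe from several locations
# and persist its results.
import time
import urllib.request

URL = "https://app.example.com/health"  # hypothetical endpoint
INTERVAL_SECONDS = 60
SAMPLES = 10

successes = 0
for _ in range(SAMPLES):
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            if resp.status == 200:
                successes += 1
    except Exception:
        pass  # count timeouts and errors as downtime
    time.sleep(INTERVAL_SECONDS)

availability = 100.0 * successes / SAMPLES
print(f"Measured availability: {availability:.1f}%")
# Compare against the promised SLA: a 99.9% monthly uptime
# guarantee, for instance, allows roughly 43 minutes of downtime
# in a 30-day month.
```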
To be sure, the spectrum of cloud computing options is liberating. The old, tedious responsibilities of hardware and (to some extent) software selection, administration, and maintenance are outsourced to a vendor. Organizations may focus resources on their users, their applications, and their data. But cloud computing vendors are not infallible, and high-profile cloud outages have occurred during their short tenures. It is important that organizations choose the right computing platform for their needs, and always ask the old questions that still apply when a worst-case scenario plays out.