Wednesday, February 23, 2011

Server Virtualization and the Path to Enlightenment

The pace of change in the data center is brisk, to say the least.  One of the most significant drivers of change is the broad adoption of server virtualization, which allows multiple applications to co-exist independently on the same physical server.  There have been many different approaches to server virtualization in the past:  “envelopes” in MVS (z/OS); Mainframe Domain Facility from Amdahl; Dynamic System Domains and Containers from Sun; and so forth.  Today, the preferred solution is to use hypervisors to encapsulate applications and their operating system instances inside a virtual machine (VM).

It may seem like hypervisors such as VMware’s ESX have sprung out of nowhere.  In fact, the hypervisor has been more than 45 years in the making and can be traced back to a 1964 R&D project at IBM’s Cambridge research facility running on a modified IBM System/360 Model 40 mainframe.  Initially known as CP-40 and later as CP/CMS, it was eventually released as IBM’s first fully supported hypervisor in 1972 under the name VM/370.  Although it remained in the shadow of MVS, VM/370 proved to be the O/S that customers would not let IBM kill off. Today, it is known as z/VM and runs on IBM’s z-series mainframes.

The “modern” history of hypervisors began when Mendel Rosenblum, an associate professor at Stanford, and a few of his students created a hypervisor on x86 servers as a graduate project.  Mendel then teamed up with his wife Diane Greene to start VMware.  In the beginning, the significant overhead required to run the hypervisor limited its use to test and development environments. This changed when the boys from Cambridge University invented para-virtualization and open-sourced Xen.  Combined with the hardware support that Intel and AMD baked into their processors, the required system overhead dropped to under 10% and the hypervisor exploded into the production world.

Today there is a wealth of hypervisors to choose from: ESXi, Hyper-V, Xen and KVM on x86 servers, plus a set specific to various UNIX boxes and mainframes.  Thanks to this ubiquity, almost all companies have implemented some form of server virtualization.

At my previous employer, I was a VMware customer and had the opportunity to interact with a number of their customers.  What I noticed is that most businesses embrace server virtualization in three stages, what I call “the path to enlightenment.”  In the first stage, IT is seeking to tame server sprawl through server consolidation.  When a server runs a single application, average utilization of that physical server is generally 5%-8%.  Using VMs to isolate the applications from each other, multiple applications can co-exist on a single server, increasing hardware utilization to 25%-35% (or more if you are good or lucky).  This made it possible to actually reduce the number of servers, bucking the trends of the last several decades.  Stage 1: Consolidation – saving capital costs.
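
To put rough numbers on the consolidation math above, here is a small back-of-the-envelope sketch in Python using the utilization figures from this post; the 20% headroom factor is my own illustrative assumption, not a measured value.

    # Back-of-the-envelope consolidation estimate (illustrative numbers only).
    # Standalone servers run one app at roughly 5%-8% utilization; a
    # virtualized host can reasonably be driven to 25%-35%.

    def consolidation_ratio(standalone_util, target_util, headroom=0.2):
        """Rough count of single-app servers that can fold onto one
        virtualized host, leaving headroom for bursts and migrations."""
        usable = target_util * (1.0 - headroom)
        return int(usable / standalone_util)

    if __name__ == "__main__":
        for standalone in (0.05, 0.08):      # 5% and 8% standalone utilization
            for target in (0.25, 0.35):      # 25% and 35% virtualized target
                ratio = consolidation_ratio(standalone, target)
                print(f"{standalone:.0%} standalone -> {target:.0%} target: "
                      f"~{ratio} workloads per physical server")

Even with conservative assumptions, the arithmetic lands at roughly two to five workloads per physical server, which is where the capital savings come from.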

During this initial stage, virtualized server pools are generally small and configurations are static, with VM migration limited to once or twice per year to facilitate maintenance.  The security model is simplistic with a very limited number of VLANs and zones implemented within the server pool.  For the most part, the virtualized applications are limited to non-critical apps.  In this first phase, the legacy data center network proves to be adequate.

At some point during the first stage, IT realizes there is a greater benefit than CAPEX savings – agility.  It begins when IT discovers that provisioning new virtual “servers” to meet the needs of the business groups can now be performed in hours rather than the weeks or months typically required for new physical servers.  Suddenly IT is a hero – they are exceeding their “customers’” expectations.  Now the business can move faster and IT can be more responsive.  New capabilities come on line in less time.  Resources can be added quickly to respond to changes in demand, while applications that did not work out can be taken off-line and the resources easily reallocated.  Stage 2: Agility – for the infrastructure and the business.

Finally, as VMs become more dynamic, there is a third stage of enlightenment – resilience.  The ability to pick up and move an application safely and dynamically can also be used to build a more resilient infrastructure without having to resort to complex HA (high availability) clusters; now, HA can be delivered to all applications in the data center.  Stage 3: Resilience – keeping the business running.
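
As a concrete illustration of picking up and moving a running application, here is a minimal sketch using the libvirt Python bindings to live-migrate a VM between two KVM hosts; the host URIs and VM name are hypothetical, and shops running ESX or Hyper-V would use their own equivalents (vMotion, Live Migration) rather than this code.

    # Minimal live-migration sketch using the libvirt Python bindings (KVM/Xen).
    # The host URIs and the VM name below are made up for illustration.
    import libvirt

    SRC_URI  = "qemu+ssh://host1.example.com/system"
    DEST_URI = "qemu+ssh://host2.example.com/system"
    VM_NAME  = "erp-db-01"

    src  = libvirt.open(SRC_URI)     # connection to the source hypervisor
    dest = libvirt.open(DEST_URI)    # connection to the destination hypervisor

    dom = src.lookupByName(VM_NAME)  # find the running guest on the source host

    # VIR_MIGRATE_LIVE keeps the guest running while its memory is copied over;
    # the remaining arguments (new name, target URI, bandwidth cap) are defaults.
    dom.migrate(dest, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

    print(f"{VM_NAME} is now running on {DEST_URI}")

    src.close()
    dest.close()

The same primitive underpins both the resilience story and the workload balancing described below: if an application can be moved live, it can be moved away from a failing or overloaded host.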

As customers move into the second and third stages, the pools of virtualized servers grow in size, and they find that a single, larger resource pool is both more efficient and more agile than multiple smaller pools.  The environment becomes more dynamic, with VM migration becoming commonplace in order to facilitate workload balancing and resilience.  Many or even most of the applications become virtualized, including the critical apps.  It is at this stage that we start to see big Oracle databases being virtualized — not because they will share the server with other apps but because they can now be easily moved to another server.  And finally, because of the number of applications, there needs to be a more sophisticated security model.  The number of VLANs and security zones implemented within the server pools grows dramatically.

It is at this point, when customers move from the consolidation stage to the agility and resilience stages, that they have an epiphany.  The legacy hierarchical network embedded in their data center is the single greatest impediment to achieving the promise of the virtualized data center.  And that, my friends, will be the subject of my next posting.
