Friday, July 31, 2009

Designing Server Infrastructures: Decoupling Applications and Servers

At this juncture in discussing server infrastructure designs, it is time to take a step back and see the forest for the trees. Virtual Consolidation is not a design at all, but a shim that has allowed Server Sprawl to live on, aptly renamed VM Sprawl. This is not to say virtualization was a bad idea; it was a brilliant idea, but for a more fundamental reason: it has decoupled applications from hardware. More specifically, business software, which is inherently malleable, has been decoupled from hardware, which is inherently rigid. This is vital because business processes, which are also malleable, are tightly coupled with and dependent on business applications, and by transitivity they too are now decoupled from hardware (more on this in a future post).

In virtualized environments applications are no longer deployed directly onto physical servers, but onto virtual servers. In this paradigm the role of server infrastructure is to provide a unified execution platform for virtual machines. Applications running in virtual machines are still of the utmost importance to operations as a whole, but they now belong strictly to the realm of application deployment. Prior to virtualization, application deployment and server infrastructure were inseparable. This separation by itself doesn't solve anything, but it does set the stage for a paradigm shift.
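
A minimal sketch of the decoupling, with entirely hypothetical names (not any particular hypervisor's API): the application's deployment target is a virtual machine, and the virtual machine's placement on physical hardware is a separate, changeable decision.

    # Hypothetical model of the decoupling: applications bind to VMs,
    # while VM-to-host placement is a separate, movable mapping.
    from dataclasses import dataclass

    @dataclass
    class PhysicalHost:
        name: str

    @dataclass
    class VirtualMachine:
        name: str
        host: PhysicalHost           # placement can change without touching the app

    @dataclass
    class Application:
        name: str
        target_vm: VirtualMachine    # the app only knows about its VM

    def migrate(vm: VirtualMachine, new_host: PhysicalHost) -> None:
        """Move a VM to different hardware; deployed applications are untouched."""
        vm.host = new_host

    host_a = PhysicalHost("esx-01")
    host_b = PhysicalHost("esx-02")
    vm1 = VirtualMachine("app-vm-1", host_a)
    payroll = Application("payroll", vm1)

    migrate(vm1, host_b)                  # the hardware changes; the application does not
    print(payroll.target_vm.host.name)    # esx-02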

Tuesday, July 28, 2009

Designing Server Infrastructures: Virtual Consolidation

In data centres across the globe, underutilized (mostly Windows) servers are being consolidated onto virtualization platforms like VMware and Xen. In the midst of this changing environment, we pick up our evolving discussion of server infrastructure designs for large enterprises with Virtual Consolidation.

A virtualization layer, or hypervisor, is installed directly onto the server, providing a platform for multiple virtual servers, or guests, to share a single physical server. Just as Server Sprawl was a reaction to the Standard Enterprise Stack design, Virtual Consolidation is a reaction to Server Sprawl. It addresses Sprawl's most obvious flaw, hardware underutilization. Depending on the degree of consolidation, it can bring enterprises significant reductions in required hardware.
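
As a back-of-the-envelope illustration (every figure here is an assumption, not a measurement), consolidating lightly loaded servers onto shared hosts shrinks the hardware footprint dramatically, while the number of server instances, now virtual, stays the same:

    import math

    # Illustrative consolidation math; all figures are assumptions.
    physical_servers = 200           # standalone, underutilized servers
    avg_utilization = 0.10           # ~10% average utilization per server
    target_host_utilization = 0.50   # leave headroom on each virtualization host

    # Assume a virtualization host has roughly the capacity of one old server.
    hosts_needed = math.ceil(physical_servers * avg_utilization / target_host_utilization)
    print(f"Hosts after consolidation: {hosts_needed}")                      # 40
    print(f"Hardware reduction: {1 - hosts_needed / physical_servers:.0%}")  # 80%
    # Note: the server instances don't go away -- they become VMs, which is
    # exactly why VM Sprawl follows if their life cycles aren't managed.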

Also like the Standard Stack, the hypervisor provides a common platform for application portability and distributed availability.
Unfortunately, while solving two problems, two more are introduced. Of primary concern is the loss of the capital controls that physical hardware procurement provided. Less scrutiny leads to more virtual servers, whose life cycles must now be supported. This trend, known as VM Sprawl, can be mitigated with proper VM life cycle management; unfortunately most enterprises do not commit the necessary "blood and treasure".
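
A hedged sketch of what that life cycle management might look like in practice (the states and expiry rule are assumptions for illustration, not any product's workflow): every VM gets an owner and an expiry date at request time, and anything past expiry is flagged for review or decommissioning rather than lingering indefinitely.

    # Illustrative VM life cycle policy: states, an owner, and an expiry date.
    from dataclasses import dataclass
    from datetime import date
    from enum import Enum

    class VMState(Enum):
        REQUESTED = "requested"
        APPROVED = "approved"
        PROVISIONED = "provisioned"
        IN_USE = "in_use"
        EXPIRED = "expired"
        DECOMMISSIONED = "decommissioned"

    @dataclass
    class ManagedVM:
        name: str
        owner: str
        state: VMState
        expires_on: date

    def flag_for_review(inventory: list[ManagedVM], today: date) -> list[ManagedVM]:
        """Return VMs whose lease has lapsed; these are the seeds of VM Sprawl."""
        return [vm for vm in inventory
                if vm.state is VMState.IN_USE and vm.expires_on < today]

    inventory = [
        ManagedVM("build-agent-07", "dev-tools", VMState.IN_USE, date(2009, 6, 30)),
        ManagedVM("crm-test-02", "sales-it", VMState.IN_USE, date(2009, 12, 31)),
    ]
    for vm in flag_for_review(inventory, date(2009, 7, 31)):
        print(f"{vm.name} (owner: {vm.owner}) is past expiry -- review or decommission")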

The virtualization layer now performs hardware abstraction and resource sharing, leaving the operating system with only a single duty: providing common system services. The components of the operating system that used to provide these services are now redundant. Add to this the irrelevant middleware components inherited from Server Sprawl and there is a significant amount of unnecessary software.

Virtual Consolidation doesn't do anything in and of itself to help with supporting assets' life cycles. For the most part, except for the act of deploying servers, all the life cycle problems from Server Sprawl still exist in this design.

Since this is the current state of many enterprises' infrastructures, these are the problems that need to be addressed:

  • unnecessary software
  • manual software and hardware life cycle processes
  • lack of VM life cycle management

Cost: Asset Footprint: Hardware Low, Software High

Cost: Life Cycle: Medium-High

Responsiveness: Medium

Saturday, July 25, 2009

Designing Server Infrastructures: Server Sprawl

Welcome back to an ongoing discussion about designing server infrastructures for large enterprises. After starting with Standard Enterprise Stack design, now up is Server Sprawl which is not a design in itself, but more an unfortunate devolution of the former standard stack.

Server Sprawl started partly as a response to Windows Server's (NT4, 2000 and 2003) lack of support, except in certain circumstances, for mixing multiple workloads (see The Business Case for Mixed Workload Consolidation: Challenges of Mixed Workload Consolidation). To be fair, on many occasions the problem was the application not playing well with others. Also, the hardware portion of the stack was not homogeneous, as hardware is purchased at different times by capital projects.

The tipping point was the huge productivity gains software developers were achieving by leveraging an ever-expanding array of programming languages, development frameworks, runtimes and libraries. Businesses demanded to take advantage of this expanded library of software, with its new capabilities.

In response to these pressures more and more stacks came into existence to support fewer or single applications. Silos appeared and server sprawl set in.

In this new architecture the previous constraints on asset footprint and life cycle costs, stack standardization and control, no longer apply. A multitude of stacks must now be supported, using mostly manual processes brought forward from the old paradigm. These pressures balloon life cycle costs and stagnate responsiveness.

To maintain some control, standardization is pushed down to the different layers of the stack. For example, on every server that requires Oracle Database, it is installed the same way; likewise, App1 and App3 both run on Middleware1. The unintended consequence is that even if an application doesn't require a component in a standard middleware layer, it is installed anyway; in other words, unnecessary asset footprint that must be maintained.
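
A small sketch of that trade-off (the layer contents and application names are made up for illustration): if every server that gets the "Middleware1" layer receives the same component set, an application that needs only part of it still carries the rest as footprint to patch and maintain.

    # Illustrative only: standard layer definitions vs. what each app actually needs.
    STANDARD_LAYERS = {
        "Middleware1": {"app_server", "jms_broker", "reporting_engine"},
        "Database1": {"oracle_db", "backup_agent"},
    }

    APP_REQUIREMENTS = {
        "App1": {"app_server"},                 # runs on Middleware1
        "App3": {"app_server", "jms_broker"},   # also runs on Middleware1
    }

    for app, needed in APP_REQUIREMENTS.items():
        installed = STANDARD_LAYERS["Middleware1"]
        unused = installed - needed
        print(f"{app}: installs {sorted(installed)}, needs {sorted(needed)}, "
              f"unnecessary footprint: {sorted(unused)}")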

The much larger problem is resource sharing, or the lack thereof. Since a single application runs on a single physical server, hardware underutilization is rampant. An April 2009 report by the Uptime Institute found that average data center utilization is an embarrassing 10%. Availability often requires dual redundancy as well, because each application has its own stack configuration.
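
To put rough numbers on it (illustrative figures only, apart from the ~10% utilization cited above): one application per server, plus a dedicated standby for availability, means most of the purchased capacity sits idle.

    # Rough arithmetic on the cost of one-app-per-server plus dual redundancy.
    applications = 50
    avg_utilization = 0.10      # ~10% average, per the April 2009 Uptime Institute report
    redundancy_factor = 2       # each app gets its own standby because stacks differ

    servers = applications * redundancy_factor
    # Standby capacity is idle by definition, so effective utilization is halved again.
    effective_utilization = avg_utilization / redundancy_factor
    print(f"Servers purchased: {servers}")                                           # 100
    print(f"Effective utilization of purchased capacity: {effective_utilization:.0%}")  # 5%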

Cost: Asset Footprint: High

Cost: Life Cycle: High

Responsiveness: Medium-Low - Can respond, but processes are manual and costs are prohibitive

Designing Server Infrastructures: Standard Enterprise Stack

Welcome to the first installment of an ongoing discussion about current and future server infrastructure designs for large enterprises.

First examined is a design made popular in the recent past, the Standard Enterprise Stack.

The hallmarks of this design are a limited number of standard stacks of middleware (application servers, runtimes, libraries, databases, third-party software, etc.), a general purpose operating system (Windows Server) and server hardware. Enterprises standardize on a single technology, like Java EE, and a single vendor's implementation of that technology, like BEA.
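
As a concrete (and entirely hypothetical) picture of what a standard stack means here: a fixed combination of hardware, operating system and middleware that many applications deploy onto, rather than each application getting its own combination.

    # Hypothetical representation of a standard enterprise stack and its tenants.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class StandardStack:
        name: str
        hardware: str
        operating_system: str
        middleware: tuple[str, ...]   # app server, runtime, database, etc.

    JAVA_EE_STACK = StandardStack(
        name="java-ee-std",
        hardware="x86 rack server",
        operating_system="Windows Server 2003",
        middleware=("BEA WebLogic", "JRE 1.5", "Oracle Database"),
    )

    # Many applications share the same stack definition, which is what keeps
    # footprint and life cycle costs low -- and what makes change expensive.
    deployments = {app: JAVA_EE_STACK.name for app in ("HR portal", "Order entry", "Reporting")}
    print(deployments)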

Though many processes are manual, life cycle costs are constrained by the repeatability and scale that standardization brings.

Importantly, asset utilization is maximized because multiple applications share the resources of, and are portable between, the standard stacks.

Unfortunately, for the same reasons rigid standardization keeps costs low, it is also dismal at responding to changing requirements. Augmenting a stack for a new application can be prohibitively expensive because all existing applications must still be supported. The other option, adding a new stack, negates the benefits of standardization in the first place.

Cost: Asset Footprint: Low

Cost: Life Cycle: Low

Responsiveness: Low

This design is making a comeback in the cloud as Platform as a Service. Google's App Engine is a popular example.