Saturday, July 25, 2009

Designing Server Infrastructures: Server Sprawl

Welcome back to an ongoing discussion about designing server infrastructures for large enterprises. After starting with the Standard Enterprise Stack design, next up is Server Sprawl, which is not a design in itself but an unfortunate devolution of the former standard stack.

Server sprawl started partly as a response to Windows Server's (NT4, 2000, and 2003) lack of support, except in certain circumstances, for mixing multiple workloads - see The Business Case for Mixed Workload Consolidation: Challenges of Mixed Workload Consolidation. To be fair, on many occasions the problem was the application not playing well with others. The hardware portion of the stack was also not homogeneous, since hardware is purchased at different times by separate capital projects.

The tipping point was the huge productivity gains software developers were achieving by leveraging an ever-expanding array of programming languages, development frameworks, runtimes, and libraries. Businesses demanded to take advantage of this expanded catalog of software and its new capabilities.

In response to these pressures, more and more stacks came into existence, each supporting only a few applications or even a single one. Silos appeared and server sprawl set in.

In this new architecture the previous constraints on asset footprint and life cycle costs - stack standardization and control - no longer apply. A multitude of stacks must now be supported, mostly using manual processes carried forward from the old paradigm. These pressures balloon life cycle costs and cripple responsiveness, as the rough model below illustrates.
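
As a rough illustration (all figures here are hypothetical assumptions, not from any cited source), the support burden scales linearly with the number of distinct stacks when every one of them is maintained by hand:

```python
# Illustrative back-of-the-envelope model: lifecycle cost grows with
# the number of distinct stacks when each one is patched, upgraded,
# and monitored manually. All figures are hypothetical assumptions.

MANUAL_HOURS_PER_STACK_PER_YEAR = 120  # patching, upgrades, monitoring
HOURLY_RATE = 80                       # fully loaded admin cost, USD

def annual_lifecycle_cost(distinct_stacks: int) -> int:
    """Yearly support cost when every distinct stack needs manual care."""
    return distinct_stacks * MANUAL_HOURS_PER_STACK_PER_YEAR * HOURLY_RATE

# One standard enterprise stack vs. sprawl into 40 one-off stacks.
print(annual_lifecycle_cost(1))   # 9600   -> standardized baseline
print(annual_lifecycle_cost(40))  # 384000 -> the same work, 40 times over
```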

To maintain some control, standardization is pushed down to the individual layers of the stack. For example, on every server that requires Oracle Database, it is installed the same way, and App1 and App3 both run on Middleware1. The unintended consequence is that even if an application doesn't require a component in a standard middleware layer, the component is installed anyway - in other words, unnecessary asset footprint that must still be maintained, as sketched below.
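
A minimal sketch of this effect, reusing the example names above (Middleware1, Oracle Database, App1, App3) and inventing the component lists for illustration: every server receives the full standard layer, so applications inherit components they never use.

```python
# Sketch of layer standardization, using the post's example names
# (Middleware1, Oracle Database) plus hypothetical component lists.
# Every server receives the full standard layer, whether or not the
# application actually uses each component.

STANDARD_MIDDLEWARE1 = {"app_server", "oracle_database", "message_queue"}

APP_REQUIREMENTS = {
    "App1": {"app_server", "oracle_database"},
    "App3": {"app_server"},  # needs no database, gets one anyway
}

for app, needed in APP_REQUIREMENTS.items():
    unused = STANDARD_MIDDLEWARE1 - needed
    print(f"{app}: installed={sorted(STANDARD_MIDDLEWARE1)} "
          f"unused={sorted(unused)}")
# App3 carries oracle_database and message_queue as pure asset
# footprint that must still be patched, licensed, and maintained.
```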

The much larger problem is resource sharing, or the lack thereof. Since a single application runs on a single physical server, hardware underutilization is rampant. An April 2009 report by the Uptime Institute found that average data center utilization is an embarrassing 10%. Availability often requires dual redundancy as well, because each application has its own stack configuration.
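
To make the utilization math concrete - a sketch that assumes the 10% average from the Uptime Institute figure and a hypothetical application count:

```python
# Sketch of the utilization math: one application per physical server,
# doubled for redundancy. The 10% average comes from the Uptime
# Institute figure cited above; the application count is hypothetical.

APPLICATIONS = 50
AVG_UTILIZATION = 0.10  # Uptime Institute, April 2009
REDUNDANCY = 2          # each app's unique stack is duplicated

servers = APPLICATIONS * REDUNDANCY
# Each primary runs at ~10%; standby capacity sits idle until failover.
effective_utilization = (APPLICATIONS * AVG_UTILIZATION) / servers
print(f"servers deployed: {servers}")                         # 100
print(f"effective utilization: {effective_utilization:.0%}")  # 5%
```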

Cost: Asset Footprint: High

Cost: Life Cycle: High

Responsiveness: Medium-Low - can respond, but processes are manual and costs are prohibitive
