Monday, August 10, 2009

Designing Server Infrastructures: Virtual Appliances

The pervasive use of virtualization requires rethinking how applications are deployed, both to reduce software bloat and to take advantage of the decoupling virtualization offers. One approach is virtual appliances. An application is examined introspectively, vertically down the software stack, to determine its dependencies and the requirements for its operation. The bottom of the dependency chain will be common across applications on the same platform - Windows, Red Hat, Ubuntu, Oracle and Novell. Collectively these common components form a Just Enough Operating System (JeOS) - Windows Server 2008 Server Core, Red Hat Thin Crust, Ubuntu JeOS, Oracle JeOS and Novell SUSE JeOS respectively. On top of this platform the remaining dependencies are stacked, like building blocks, in reverse dependency order (following the dependency tree), with the application sitting on top. The arrangement is self-contained, application-focused and minimalist - only the necessary software is installed.
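
To make the stacking concrete, here is a minimal sketch in Python, using hypothetical component names, of how an appliance's contents could be computed: start at the application and walk its dependency tree down to the JeOS base, keeping only what is actually required.

```python
# Hypothetical component names; "jeos-base" stands in for the JeOS layer.
DEPENDENCIES = {
    "billing-app":     ["python-runtime", "postgres-client"],
    "python-runtime":  ["jeos-base"],
    "postgres-client": ["jeos-base"],
    "jeos-base":       [],   # Just Enough Operating System
}

def appliance_contents(app):
    """Return the minimal component set for the appliance: the application
    plus the transitive closure of its dependencies, and nothing else."""
    needed, stack = set(), [app]
    while stack:
        component = stack.pop()
        if component not in needed:
            needed.add(component)
            stack.extend(DEPENDENCIES.get(component, []))
    return needed

print(sorted(appliance_contents("billing-app")))
# ['billing-app', 'jeos-base', 'postgres-client', 'python-runtime']
```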

Update Oct 24 2009:

Updating and patching software is a time-consuming, and thus expensive, operation because of the required due diligence and testing; virtual appliances significantly reduce this burden.

"Server Core reduces the footprint of the OS from about 5 GB in WS2K8 to 1.5 GB, and based on recent tests, will reduce the number of patches admins may need to employ by 60% over Windows 2000. The reason there is very simple: There's no need for admins to patch files that simply aren't there."

This is from a 2007 BetaNews report from Microsoft TechEd. Fast forward to the present: Windows Server 2008 R2 debuted this summer with an even stronger JeOS focus and more features available on Server Core - SQL Server, ASP.NET, IIS 7.5 and PowerShell for administration.

Cost: Asset Footprint: H/W Low, S/W Low

Cost: Life Cycle: Medium-High

Responsiveness: Medium

The design depends on the modularity of the application and its dependent software, so that only the required functionality is installed. Fortunately, most modern software is at least somewhat modular.

Friday, July 31, 2009

Designing Server Infrastructures: Decoupling Applications and Servers

At this juncture in discussing server infrastructure designs, it is time to take a step back and see the forest for the trees. Virtual Consolidation is not a design at all, but a shim that has allowed Server Sprawl to live on, aptly called VM Sprawl. This is not to say virtualization was a bad idea; it was a brilliant idea, but for a more fundamental reason: it has decoupled applications from hardware. More specifically, business software, which is inherently malleable, has been decoupled from hardware, which is inherently rigid. This is vital because business processes, which are also malleable, are tightly coupled with and dependent on business applications, and by transitivity they too are now decoupled from hardware - more on this in a future post.

In virtualized environments applications are no longer deployed directly onto physical servers, but onto virtual servers. In this paradigm the role of server infrastructure is to provide a unified execution platform for virtual machines. Applications running in virtual machines are still of the utmost importance to operations as a whole, but they are now strictly in the realm of application deployment. Prior to virtualization, application deployment and server infrastructure were inseparable.

This separation by itself doesn't solve anything, but it does set the stage for a paradigm shift.
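
A rough sketch of that separation of concerns, in Python with hypothetical names and numbers: application deployment describes a virtual machine, and the server infrastructure's only job is to place that virtual machine on any hardware with capacity.

```python
from dataclasses import dataclass

@dataclass
class VirtualMachine:
    """What application deployment cares about."""
    name: str
    vcpus: int
    memory_gb: int

@dataclass
class PhysicalHost:
    """What server infrastructure cares about."""
    name: str
    cores: int
    memory_gb: int

def place(vm, hosts):
    """The infrastructure's job: find any host with spare capacity.
    The application never references a physical server directly."""
    for host in hosts:
        if host.cores >= vm.vcpus and host.memory_gb >= vm.memory_gb:
            return host
    raise RuntimeError("no capacity available")

app_vm = VirtualMachine("crm-app-vm", vcpus=2, memory_gb=4)
pool = [PhysicalHost("esx01", cores=8, memory_gb=32)]
print(place(app_vm, pool).name)  # -> esx01
```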

Tuesday, July 28, 2009

Designing Server Infrastructures: Virtual Consolidation

In data centers across the globe, underutilized (mostly Windows) servers are being consolidated onto virtualization platforms like VMware and Xen. In the midst of this changing environment, we pick up our evolving discussion on server infrastructure designs for large enterprises with Virtual Consolidation.

A virtualization layer, or hypervisor, is installed directly onto the server, providing a platform for multiple virtual servers, or guests, to share a single physical server.

Just as Server Sprawl was a reaction to the Standard Enterprise Stack design, Virtual Consolidation is a reaction to Server Sprawl. It addresses Sprawl's most obvious flaw, hardware underutilization, and depending on the degree of consolidation it can bring enterprises significant reductions in required hardware.

Also like the Standard Stack, the hypervisor provides a common platform for application portability and distributed availability.

Unfortunately, while solving two problems, two more are introduced. Of primary concern is the loss of the capital controls that physical hardware procurement provided. Less scrutiny leads to more virtual servers, whose life cycles must now be supported. This trend, known as VM Sprawl, can be mitigated with proper VM life cycle management; unfortunately most enterprises do not commit the necessary "blood and treasure".
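
A minimal sketch of what such a life cycle policy could look like, assuming each VM record carries an owner and an expiry date (both hypothetical fields): anything past its expiry is flagged for review instead of being left to sprawl.

```python
from datetime import date

# Hypothetical inventory records: every VM carries an owner and an expiry date.
vm_inventory = [
    {"name": "test-web-01", "owner": "app-team", "expires": date(2009, 6, 30)},
    {"name": "erp-db-01",   "owner": "erp-team", "expires": date(2010, 1, 1)},
]

def expired_vms(inventory, today):
    """Return VMs past their expiry date - candidates for review or decommissioning."""
    return [vm for vm in inventory if vm["expires"] < today]

for vm in expired_vms(vm_inventory, today=date(2009, 7, 28)):
    print(f"review or decommission: {vm['name']} (owner: {vm['owner']})")
# review or decommission: test-web-01 (owner: app-team)
```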

The virtualization layer now also performs hardware abstraction and resource sharing, leaving the operating system with only a single duty: providing common system services. The components of the operating system that used to provide those other services are now redundant. Add to this the irrelevant middleware components inherited from Server Sprawl and there is a significant amount of unnecessary software.

Virtual Consolidation does nothing in and of itself to help with supporting assets' life cycles. For the most part, except for the act of deploying servers, all the life cycle problems from Server Sprawl still exist in this design.

Since this is the current state of many enterprises' infrastructures, the following problems need to be addressed:

  • unnecessary software
  • manual software and hardware life cycle processes
  • lack of VM life cycle management

Cost: Asset Footprint: H/W Low, S/W High

Cost: Life Cycle: Medium-High

Responsiveness: Medium

Saturday, July 25, 2009

Designing Server Infrastructures: Server Sprawl

Welcome back to an ongoing discussion about designing server infrastructures for large enterprises. After starting with the Standard Enterprise Stack design, next up is Server Sprawl, which is not a design in itself but rather an unfortunate devolution of the former standard stack.

Server Sprawl started partially as a response to Windows Server's (NT4, 2000 and 2003) lack of support, except in certain circumstances, for mixing multiple workloads - see The Business Case for Mixed Workload Consolidation: Challenges of Mixed Workload Consolidation. To be fair, on many occasions the problem was the application not playing well with others. The hardware portion of the stack was also not homogeneous, as hardware is purchased at different times by different capital projects.

The tipping point was the huge productivity gains software developers were achieving by leveraging an ever-expanding array of programming languages, development frameworks, runtimes and libraries. Businesses demanded to take advantage of this expanded library of software, with its new capabilities.

In response to these pressures, more and more stacks came into existence, each supporting only a few applications or even a single one. Silos appeared and server sprawl set in.

In this new architecture the previous constraints on asset footprint and life cycle costs - stack standardization and control - no longer apply. A multitude of stacks must now be supported, using mostly manual processes brought forward from the old paradigm. These pressures balloon life cycle costs and cause responsiveness to stagnate.

To maintain some control, standardization is pushed down to the individual layers of the stack. For example, on every server that requires Oracle Database, it is installed the same way; likewise, App1 and App3 both run on Middleware1. The unintended consequence is that even if an application doesn't require a component in a standard middleware layer, that component is installed anyway - in other words, unnecessary asset footprint that must still be maintained.
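
A small sketch, with hypothetical component names, of how that unnecessary footprint accumulates: the difference between the standard layer and what each application actually requires is software that is installed, patched and maintained for no reason.

```python
# Hypothetical standard middleware layer and per-application requirements.
STANDARD_LAYER = {"oracle-db-client", "middleware1", "reporting-engine", "ftp-service"}

app_requirements = {
    "App1": {"middleware1", "oracle-db-client"},
    "App3": {"middleware1"},
}

for app, required in app_requirements.items():
    # Everything in the standard layer the application does not use is
    # unnecessary footprint that still has to be patched and maintained.
    unused = sorted(STANDARD_LAYER - required)
    print(f"{app}: {len(unused)} unnecessary components: {unused}")
```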

The much larger problem is resource sharing, or the lack thereof. Since a single application runs on a single physical server, hardware underutilization is rampant; an April 2009 report by the Uptime Institute found that average data center utilization is an embarrassing 10%. Availability also often requires dual redundancy, because each application has its own stack configuration.
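
Some back-of-the-envelope arithmetic, with illustrative numbers, on why that utilization figure matters: ten siloed servers averaging 10% utilization carry roughly one server's worth of useful work, which a couple of better-utilized hosts could absorb.

```python
import math

servers = 10                # siloed servers, one application each
avg_utilization = 0.10      # the ~10% average utilization figure
target_utilization = 0.60   # a conservative consolidation target (assumption)

useful_work = servers * avg_utilization               # one server's worth of work
hosts_needed = math.ceil(useful_work / target_utilization)
print(f"{servers} siloed servers -> roughly {hosts_needed} consolidated hosts")
# 10 siloed servers -> roughly 2 consolidated hosts
```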

Cost: Asset Footprint: High

Cost: Life Cycle: High

Responsiveness: Medium-Low - Can respond, but processes are manual and costs are prohibitive

Designing Server Infrastructures: Standard Enterprise Stack

Welcome to the first installment of an ongoing discussion about current and future server infrastructure designs for large enterprises.

First examined is a design made popular in the recent past, the Standard Enterprise Stack.

The hallmark of this design is a limited number of standard stacks of middleware (application servers, runtimes, libraries, databases, third-party software, etc.), a general-purpose operating system (Windows Server) and server hardware. Enterprises standardize on a single technology, like Java EE, and on a single vendor's implementation of that technology, like BEA.

Though many processes are manual, life cycle costs are constrained by the repeatability and scale that standardization brings.

Importantly, asset utilization is maximized because multiple applications share the resources of, and are portable between, the standard stacks.

Unfortunately, for the same reasons rigid standardization keeps costs low, it is also dismal at responding to changing requirements. Augmenting a stack for a new application can be prohibitively expensive because all existing applications must still be supported. The other option, adding a new stack, negates the benefits of standardization in the first place.

Cost: Asset Footprint: Low

Cost: Life Cycle: Low

Responsiveness: Low

The design is making a comeback in the cloud as Platform as a Service; Google's App Engine is a popular example.

Tuesday, June 30, 2009

Designing Server Infrastructures: An Introduction

Welcome to the introduction to an ongoing discussion on server infrastructure designs for large enterprises and the organization of software and hardware compute resources. Commentary will focus on the cost and customer responsiveness of the infrastructures that host enterprise applications.

Customer responsiveness - speed and cost effectiveness when responding to changing requirements (often in the form of new applications or added functionality to an existing application)

Cost - indirectly measured by two leading indicators: the size of the asset base (number of servers), and the effort associated with assets' life cycles (effort to deploy, test, configure, maintain, monitor, support and decommission)
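
As a rough sketch, the scorecard used throughout this series can be pictured as a simple data structure; the example values below are the ones given in the Standard Enterprise Stack post, and the field names are only illustrative.

```python
from dataclasses import dataclass

@dataclass
class DesignScorecard:
    name: str
    asset_footprint_cost: str   # driven by the size of the asset base
    life_cycle_cost: str        # effort to deploy, configure, maintain, decommission
    responsiveness: str         # speed and cost of meeting changing requirements

standard_stack = DesignScorecard(
    name="Standard Enterprise Stack",
    asset_footprint_cost="Low",
    life_cycle_cost="Low",
    responsiveness="Low",
)
print(standard_stack)
```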

Please join and participate in the ensuing conversation, all feedback is greatly appreciated.

Existing design discussions:

  1. Standard Enterprise Stack
  2. Server Sprawl
  3. Virtual Consolidation
  4. Decoupling Applications and Servers

Saturday, May 9, 2009

Why Virtualize?

Virtualization is an increasingly popular trend in data centers around the world and is gathering momentum among sys admins and CIOs alike. From the uninitiated, I often hear the question "Why Virtualize?" and have been wanting to address this in a post for quite some time. While doing research for it, though, I came across the next best thing (or maybe the next better thing), a blog post by Arthur Cole of ITBusinessEdge that pretty much summed up my thoughts:

"“Why virtualize?” is actually two-faceted. On one hand it’s important to determine why data centers are going virtual, and on the other we should try to figure out why they should"

"
The first question is relatively easy to answer. Most people are virtualizing to save money. Storage Expo surveyed more than 300 companies recently on their virtualization objectives and found that 62 percent cited server consolidation as their primary goal"

"As for why should you virtualize, that’s a more complicated question. While consolidation is a worthy goal, it only scratches the surface of potential benefits the technology can deliver."

"What we’re actually talking about here is a complete transformation of the data center akin to the introduction of minicomputers or the development of open systems, according to storage consultant Stephen Foskett"

To be fair, there are drawbacks to virtualization, namely VM sprawl. But this can be managed with intelligent VM life cycle policies and a shift in applications and operating systems towards virtual appliances and JeOS (such as Windows Server 2008 Server Core). Check out the interesting essay Rethinking Application Delivery in the Age of Complexity from the Cutter Consortium and rPath.

Money is a great side effect, but the real power is that virtual machines are just software and are decoupled from the consolidated, highly available hardware (I hope this sparks everyone's imagination). This is why virtualization has such powerful potential to transform the industry and allow for an increasingly agile and simple infrastructure and application life cycle.