Gartner Blog Network

David J. Cappuccio
Research VP
6 years at Gartner
41 years IT industry

David J. Cappuccio is a managing vice president and chief of research for the Infrastructure teams with Gartner, responsible for research in data center futures, servers, power/cooling, green IT, enterprise management and IT operations.

Prepare for the Shrinking Data Center

by Dave Cappuccio  |  November 8, 2010

There are a few outside forces in the market today that are going to impact data centers in a big way over the next 5 years: smarter designs, green pressures, conquering density, and cloud computing. Not surprisingly, many data center managers today are trying to figure out how to design and plan for the future, […]

Read more »

Initial Results: Gartner’s Global IT Maintenance Council

by Dave Cappuccio  |  July 9, 2010

I have been working with members of Gartner’s Global IT Council for the past 8 months as they discussed the most pressing issues with IT Maintenance in their organizations. The Gartner Global IT Council for IT Maintenance consists of CIOs and senior IT leaders of large global enterprises who work […]

Read more »

Food for Thought: Capacity Planning Revised – Install New Servers to Save Money!

by Dave Cappuccio  |  June 13, 2010

One of the cruel ironies of data centers is that the one constant within them is the need for continuous change. Locked between the dual forces of application growth and equipment obsolescence, capacity planning has become one of the critical skills in data centers. Now if we all had unlimited white space to grow […]

Read more »

Too Many IT Silos? Virtualize Your Staff – The “T”-Shaped Technologist

by Dave Cappuccio  |  May 1, 2010

No, this is not a 2010 version of the matrix management agenda;  I’ve been there, done that.  However, in an era where everything in IT seems to be consolidating, virtualizing, converging and “cloudifying”, surprisingly few people are talking about our IT staffs and how all this change is affecting them.  In all but the most […]

Read more »

Just a Thought – Is VMware Enabling Legacy Sprawl – Another Y2K in the Making?

by Dave Cappuccio  |  September 25, 2009

One of the hidden beauties of any virtualized state is the clear disaggregation of hardware and software: the logical separation of traditionally tightly coupled environments allows us much greater flexibility in deciding which applications to run where, and for what reasons. This flexibility is, in and of itself, a good thing, and we all benefit from it. But I suspect that if we’re not careful in how we use virtualization, some apparently intelligent decisions made today may turn out to be significant problems in the future.

Take, for example, some of our legacy applications. Not the ones we’ve been living with on mainframes or large Unix systems for the past 30 years, but legacy x86 applications. You know, those early Windows applications written in C or C++ (or even early Java) that now run quietly every day on older Windows NT or Windows 2000 server platforms. Like many older applications from the Big Iron days, these are often poorly documented and were designed not with the reusable constructs of SOA in mind, but as stand-alone, end-to-end systems.

In many companies these are clear targets for virtualization. They often have stable performance characteristics (few peaks and valleys), allowing a high number of images per physical server. And since they are older applications, the platform itself is not likely to change, and enhancements to the application have been kept to a minimum over the years.
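To make the density arithmetic concrete, here is a minimal back-of-envelope sketch in Python. Every figure in it is an assumed placeholder for illustration, not a measurement from any real environment.

```python
# Back-of-envelope consolidation estimate for stable legacy workloads.
# Every figure below is an illustrative assumption, not a measurement.

avg_cpu_pct = 5     # a quiet legacy app idling near 5% CPU
peak_cpu_pct = 10   # "few peaks and valleys" keeps the peak low too
headroom_pct = 80   # commit at most 80% of the host's CPU

# Conservative sizing uses the peak, in case every image peaks at once;
# sizing on the average is roughly twice as aggressive.
sized_on_peak = headroom_pct // peak_cpu_pct
sized_on_average = headroom_pct // avg_cpu_pct

print(f"Images per host, sized on peak:    {sized_on_peak}")     # 8
print(f"Images per host, sized on average: {sized_on_average}")  # 16
```

Sizing against the peak is the cautious choice; it is the flatness of these legacy profiles that makes even the conservative ratio attractive.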

As newer operating systems emerge and servers grow to 8 or more cores, there is a risk that these legacy applications will suffer compatibility or performance problems and will require significant and costly retooling. But with development budgets shrinking, or at best staying flat, higher-priority projects will usually win funding over legacy rewrites, so the prudent IT manager will keep these applications running on the older OSes in a standardized virtual container for as long as possible. This costs very little, does not impact the performance or reliability of the application, and gives IT some breathing room before a large, and possibly complex, rewrite begins.
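To put rough numbers on that breathing-room argument, the sketch below compares the carrying cost of a parked legacy VM against a rewrite. All of the figures are assumed placeholders, not benchmarks from any study.

```python
# Rough deferral economics: park the legacy app in a VM vs. rewrite it now.
# All figures are assumed placeholders for illustration only.

vm_cost_per_year = 2_000   # share of host, storage, and admin time
rewrite_cost = 250_000     # one-time redevelopment estimate
deferral_years = 5         # how long the old OS is assumed to stay viable

deferral_total = vm_cost_per_year * deferral_years
print(f"Cost to defer for {deferral_years} years: ${deferral_total:,}")  # $10,000
print(f"Cost to rewrite today: ${rewrite_cost:,}")                       # $250,000
```

That gap is what makes deferral so tempting; the catch, as the rest of this post argues, is that the deferral window closes on the vendor’s schedule, not yours.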

This sounds like yet another side benefit of virtualization, and for the near term it certainly is: emulating older environments on newer technologies has been practiced for years, and the financial benefits are obvious.

However, as the underlying operating systems age, updates, patches, and support begin to be marginalized by vendors, and eventually formal maintenance support will reach its end. At that point enterprises will face another difficult decision: continue emulating an older application on an unsupported platform, or begin the replacement (or rewrite) process for the application. Either choice is fraught with risks, costs, and business impact, and in many organizations this will not be a single decision, because as we continue this march toward virtualization, the number of legacy applications marginalized into self-contained environments will continue to grow.
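One way to keep that decision visible, rather than buried in the low-priority bucket, is to check the virtual inventory against vendor end-of-support dates. Here is a minimal sketch assuming a hypothetical inventory; the dates shown reflect Microsoft’s published end-of-extended-support dates, but verify them for your own estate.

```python
# Flag virtualized guests whose OS is past vendor end-of-support.
# Inventory entries are hypothetical; confirm dates for your own estate.
from datetime import date

END_OF_SUPPORT = {
    "Windows NT 4.0 Server": date(2004, 12, 31),
    "Windows 2000 Server": date(2010, 7, 13),
}

inventory = [
    {"vm": "payroll-legacy-01", "guest_os": "Windows NT 4.0 Server"},
    {"vm": "orders-legacy-02", "guest_os": "Windows 2000 Server"},
]

today = date.today()
for guest in inventory:
    eos = END_OF_SUPPORT.get(guest["guest_os"])
    if eos and eos < today:
        years = (today - eos).days // 365
        print(f"{guest['vm']}: {guest['guest_os']} has been unsupported "
              f"for ~{years} year(s) (support ended {eos})")
```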

So what are the impacts? In the best-case scenario this is just the ranting of a cynical naysayer: we will continue to upgrade and improve applications as needed, and migrate them to newer platforms when appropriate. It’s a non-issue. But the alternate world view is that the beauty of virtualization is precisely that once an environment is created, it can run “as is” indefinitely, with little or no attention from the outside. If that happens, IT will almost always focus on near-term issues, because those issues are what drive us – after all, we are a reactive crowd – and when budgets are always tight, the funding to fix what isn’t broken rarely materializes.

In the second world view, we may find ourselves scrambling to update scores or even hundreds of these applications to run on newer platforms just when we are least prepared to do so. This is similar to Y2K in some respects: the problems are caused not by a lack of knowledge or awareness, but by years of pushing the issue into that low-priority bucket that is so convenient to use during the planning cycle.

Just a thought……

Read more »

Building a Data Center – Where to Begin?

by Dave Cappuccio  |  June 30, 2009

This post is a response to a question from Allison relating to the series of questions that need to be asked before embarking on a data center build project. Where to begin? After my first post in this series, Allison sent along a quick note with an interesting observation: “I found this article very insightful, but […]

Read more »

Building a New Data Center – How Much Energy Will I Need?

by Dave Cappuccio  |  June 30, 2009

This post is the third in a series of questions that need to be asked before embarking on a data center build project. How much energy will I need? Great question.  Somebody always asks the obvious, especially when there is no way to definitively answer the question.  In the old days (you know, 8 or […]

Read more »

Just a Thought; Will the Blinders on IT Enable Public Clouds?

by Dave Cappuccio  |  June 30, 2009

I’m thinking that, especially in IT, history has a habit of repeating itself. A friend and I were talking about the drivers and catalysts in IT the other day and the topic of Public Clouds came up.  He was convinced that the catalyst for the success of public cloud was going to be enterprise class […]

Read more »

Just a Thought; Will VMware become the next Novell?

by Dave Cappuccio  |  June 30, 2009

I know, I know, the title alone can be misleading, but what I’m thinking here is the result of yet another trip down memory lane. I was talking with a client at the IT Operations and Management conference this week and he was a huge VMware fan (as are many). During the conversation he was […]

Read more »

Building a New Data Center? How big is big enough?

by Dave Cappuccio  |  April 17, 2009

This post is the first in a series of questions that need to be asked before embarking on a data center build project. How big is big enough? The first question asked is often the most difficult to answer, or the simplest.  “It depends” might be valid for an analyst, but not when you’re potentially […]

Read more »