David J. Cappuccio is a managing vice president and chief of research for the Infrastructure teams with Gartner, responsible for research in data center futures, servers, power/cooling, green IT, enterprise management and IT operations.


Just a Thought – Is VMware Enabling Legacy Sprawl – Another Y2K in the Making?

by Dave Cappuccio  |  September 25, 2009  |  2 Comments

One of the hidden beauties of any virtualized environment is the clean disaggregation of hardware and software – the logical separation of traditionally tightly coupled layers – which gives us much greater flexibility in deciding which applications to run where, and for what reasons. This flexibility is in and of itself a good thing, and we all benefit from it, but I suspect that if we’re not careful in how we use virtualization, some apparently intelligent decisions made today may turn out to be significant problems in the future.

Take, for example, some of our legacy applications. Not the ones we’ve been living with on mainframes or large Unix systems for the past 30 years, but legacy x86 applications. You know, those early Windows applications written in C or C++ (or even early Java) that now run quietly every day on older Windows NT or Windows 2000 server platforms. Like many older applications from the Big Iron days, these are often poorly documented and designed not with the reusable constructs of SOA, but as stand-alone, end-to-end systems.

In many companies these are clear targets for virtualization. They often have stable performance characteristics (few peaks and valleys), allowing a high number of images per physical server. Since they are older applications, the platform itself is not likely to change, and enhancements to the application have been kept to a minimum over the years.
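
To put a rough number on that, here is a back-of-the-envelope sizing sketch in Python; the host capacity, headroom, and utilization figures are illustrative assumptions, not measured data.

```python
# Rough consolidation estimate for stable legacy workloads.
# All figures below are illustrative assumptions, not measurements.

host_capacity = 16.0   # usable CPU "units" on a modern multi-core host
headroom = 0.75        # keep 25% in reserve for patching and failover

# Average and peak utilization of one legacy VM, in the same units.
legacy_vm = {"avg": 0.4, "peak": 0.9}

# Workloads with few peaks and valleys can be sized closer to their average
# than to their peak, which is what makes the high image count possible.
images_by_peak = int(host_capacity * headroom / legacy_vm["peak"])
images_by_avg = int(host_capacity * headroom / legacy_vm["avg"])

print(f"Sized to peak load:    {images_by_peak} images per host")
print(f"Sized to average load: {images_by_avg} images per host")
```

With these made-up figures the difference is roughly 13 versus 30 images per host – exactly the economics that makes quiet legacy workloads such attractive consolidation targets.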

As newer operating systems emerge and servers grow to eight or more cores, there is a risk that these legacy applications will suffer compatibility or performance problems and require significant, costly retooling. But with development budgets shrinking, or flat at best, higher-priority projects will most often get the funding over legacy rewrites, so the prudent IT manager will keep these applications running on the older OSes in a standardized virtual container for as long as possible. This costs very little, does not impact the performance or reliability of the application, and gives IT some breathing room before a large, and possibly complex, rewrite begins.

Sounds like yet another side benefit of virtualization, and for the near term it certainly is: emulating older environments on newer technologies has been practiced for years, and the financial benefits are obvious.

However, as the underlying operating systems age, vendors begin to marginalize updates, patches, and support, and eventually formal maintenance support will reach an end. At that point enterprises will have another difficult decision to make: continue emulating an older application on an unsupported platform, or begin the replacement (or rewrite) process for the application. Either choice is fraught with risks, costs, and business impact, and in many organizations this will not be a single decision, because as we continue this march toward virtualization the number of legacy applications marginalized into self-contained environments will continue to grow.
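
One way to keep that growing population visible is to track every virtual container against the end-of-support date of its guest OS. Below is a minimal sketch of that idea in Python; the inventory file name, its columns, and the cutoff dates are assumptions for illustration, not output from any particular virtualization product.

```python
# Flag virtualized legacy guests that are near or past vendor end of support.
# The inventory format and the support dates are illustrative assumptions.
import csv
from datetime import date

# Hypothetical end-of-support cutoffs per guest OS.
END_OF_SUPPORT = {
    "Windows NT 4.0": date(2004, 12, 31),
    "Windows 2000": date(2010, 7, 13),
    "Windows Server 2003": date(2015, 7, 14),
}

def flag_legacy_guests(inventory_csv, today=None):
    """Yield (vm_name, guest_os, days_remaining) for guests with a known cutoff."""
    today = today or date.today()
    with open(inventory_csv, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: vm_name, guest_os
            cutoff = END_OF_SUPPORT.get(row["guest_os"])
            if cutoff is not None:
                yield row["vm_name"], row["guest_os"], (cutoff - today).days

if __name__ == "__main__":
    for name, guest, days in flag_legacy_guests("vm_inventory.csv"):
        status = "UNSUPPORTED" if days < 0 else f"{days} days of support left"
        print(f"{name} ({guest}): {status}")
```

Run periodically against an exported VM inventory, even a crude report like this makes the approaching support cliffs visible during the planning cycle rather than after the fact.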

So what are the impacts? In a best-case scenario this is just the ranting of a cynical naysayer; we will continue to upgrade and improve applications as needed, and migrate them to newer platforms when appropriate. It’s a non-issue. But the alternate worldview is that the beauty of virtualization is that once an environment is created it can run “as is” indefinitely, with little or no attention from the outside. If this happens, IT will almost always focus on near-term issues, because those issues are what drive us – after all, we are a reactive crowd – and when budgets are always tight, the funding to fix what isn’t broken rarely materializes.

In the second worldview we may find ourselves scrambling to update scores or even hundreds of these applications to run on newer platforms when we are least prepared to do so. This is similar to Y2K in some respects, in that these problems are not caused by a lack of knowledge or awareness, but by years of pushing the issue into that low-priority bucket that’s so convenient to use during the planning cycle.

Just a thought……


Category: Data Centers, Food for Thought

2 responses so far ↓

  • 1 wallyton   September 28, 2009 at 10:45 am

    Oh, that there would be another Y2K. It sure is nice to dream. This presents an interesting potential dilemma down the road, but far enough down the road that someone will address it. I totally agree that it’s the near-term issues that get the budget dollars.

  • 2 CJ.Blackwood   January 7, 2010 at 9:16 am

    For 20 years I heard the same response from management to questions from technical staff. Sad to hear we have not learned anything from that fiasco.