David Cappuccio

A member of the Gartner Blog Network

David J. Cappuccio
Research VP
6 years at Gartner
41 years IT industry

David J. Cappuccio is a managing vice president and chief of research for the Infrastructure teams with Gartner, responsible for research in data center futures, servers, power/cooling, green IT, enterprise management and IT operations.

The Evolution of Fit For Purpose or Micro Data Centers

by Dave Cappuccio  |  June 20, 2014  |  1 Comment

For the past umpteen years there has been a trend toward consolidating data centers – from many sites to as few as possible.  Or, if building your own data center wasn't feasible, consolidating from owned sites to leased, hosted or colocated sites became the option.

Arguably there are many valid reasons to consolidate, and in most cases there can be significant savings to be had, which I won’t go into here.   However, consolidating for the sake of “less is better” may not be the best decision, especially when we apply the right metric to the issue – value to the business.  

Value from an IT perspective too often gets defined as cost savings or operational efficiencies (which is a politically correct way of saying “cost savings”).  But from a business perspective “value to the business” equates to agility, performance and service continuity.  

The problem here is that you could argue both sides of the equation and have valid points from both perspectives.  Consolidation can make IT more cost-effective, and it can make IT more agile while improving performance (often through tech refresh cycles).  But oftentimes the Achilles' heel of these projects is the set of site- or geography-specific issues revolving around latency, redundancy and offline support, and those arguments always favor the business case for keeping dispersed compute resources where they are.

But what if you could get the benefits of consolidation, while still satisfying that value to the business metric?  

An obvious example of this might be an organization with multiple small manufacturing plants, or a retailer with multiple stores,  each supported by common services provided by a consolidated IT environment.  While 90%+ of the workload could easily be supported using remote services, if the central site had a failure anywhere in the IT path (network, servers, storage, applications, etc), operations at those remote sites could be compromised.  If multiple sites are impacted simultaneously the lost opportunity or transaction costs could be significant.  

To solve this issue many of these sites leave behind "some" IT in the form of small server rooms – but these are often managed by partial FTEs and are not maintained and updated using the same processes or methodologies as central IT, which over time increases costs and complexity (platform aging, OS releases, tech refresh cycles, etc.).

There is an emerging market for "fit for purpose" or Micro Data Centers: IT environments designed to be easily installed, self-contained, scalable and remotely managed.  Designed right, these mDCs (for lack of a better term) could be as small as a single rack, but would contain servers, storage (SSD or HDD), switching, UPS (if needed), and support their own cooling.  With current and next-generation technologies some pretty significant capacities could be attained even within one or two racks, and they could be environmentally supported in an office area or on a shop floor.  With a standardized OS/VM and a reasonable network, almost all maintenance could be handled remotely as well.  And as a side benefit – varying tier levels could easily be designed in, based on business needs and budget requirements.

Food for thought…. all comments are welcome of course….


Shine Some Light on Shadow IT

by Dave Cappuccio  |  November 27, 2013  |  2 Comments

You may have heard the term shadow IT bandied about lately, in both analyst musings and the industry press (they often feed off each other in the always fascinating game of buzzword bingo).  What's interesting about shadow IT is that it's really nothing new – it's been with us for years – but the latest version (which I'll creatively call V2) has grown in significance (and scope) to the point where the C-suite should start to pay attention.

Local Support Driven by Necessity

What is shadow IT?  That depends on your perspective and how long you’ve been involved with IT.  In the early days of distributed computing, client server and the PC “enabling” the business, shadow IT was that group of people that both held everything together at the business unit or department level, and caused all the chaos for central IT.  Oftentimes these individuals were not formally IT, or even responsible for IT support, but because of their innate skills, their standing within the business unit, or their relationship with peers, they became the de facto stand-in for a formal IT process which had yet to reach the hinterlands.

In most cases this was not a budgeted support function either; it was just done because it was either much more expedient than using the "formal" process, or it had grown organically with the business unit's adoption of IT and thus became just one of those embedded processes. This 1.0 version of shadow IT was focused mostly on the introduction of unauthorized technologies (both hardware and software) and the subsequent peer support that was required.  "Hey Joe" support structures were common in business, but undocumented, and were often the first avenue an end user pursued to solve a technical issue.  Over the years shadow IT V1 was mostly eliminated by the implementation of standardized architectures, automated support services, efficient call centers, and an increase in systems complexity.

That said, version 2.0 will not be so easy to deal with.

Shadow IT today is radically different from the past in both its reasons to exist and its impact, and in many cases it will not be something that traditional IT can solve by rolling out more technology and process.

Driven by need (responsiveness to users).  Over the past few years we have seen a dramatic increase in the number and types of devices available to business users.  In most instances organizations found that waiting for IT to go through its traditional approval/evaluation process was not acceptable, especially given the potential productivity benefits that came with these devices (e.g. tablets, smartphones).  IT organizations found themselves either putting device and software approvals on the fast track, or continually reacting to support requests for new device types that were "non-standard".  Saying no to the business was not an option, especially since most of these devices were funded through departmental non-IT budgets and were seen as potential gateways to new business processes and emerging market opportunities.

Driven by speed (application delivery).  This proliferation of user-friendly devices had a serious cascade effect in the form of a flood of newer, smaller, purpose-built applications downloaded directly by business users.  These applications were rarely what would be considered enterprise-class, but to the business user it didn't matter: they were inexpensive, quickly installed, and quickly updated with rapid version/refresh cycles.  This sequence of events began to drive a cultural shift in many business units, one where the quality of an application did not come first, but access and availability did.  That in turn meant that a large majority of the innovative applications being tested by the business were acquired outside the control of central IT, essentially establishing a business IT process distinctly separate from the central IT process.

Driven by knowledge (responsiveness to markets).  As business users have become more comfortable with these newer device types, applications are emerging that focus specifically on the convergence of content, context and location.  Using these three drivers, the types of – and linkages between – devices, applications, business processes and large data stores (e.g. existing CRM and ERP systems) have moved the focus of applications away from enabling the business and toward enabling the individual.

What can IT do?  If IT is to respond to this form of shadow IT, it must pair with business partners and work together as part of innovation labs (often driven by the business, not IT).  Innovation labs can become the integration point where IT experience and business innovation come together, but the focus is NOT as a control point; it is as an enabler of change, with a clear understanding on both sides of the aisle of what the potential cascade effects will be on both the business and IT.  Otherwise, IT organizations can decide that their real value is to support and maintain a flexible infrastructure that enables rapid innovation by the business, and just let a thousand flowers bloom.  Either way the new shadow IT, which we should now call business IT, is here to stay.

Food for thought….


Data Center Space Efficiency Metric

by Dave Cappuccio  |  July 2, 2013  |  2 Comments

The DCSE Metric: A simple way to look at data center space is to analyze the effective use of space by existing IT equipment, relative to the total available space for IT. The DCSE metric factors in both Horizontal Space Utilization (HSU) and Vertical Space Utilization (VSU).

Vertical Space Utilization (VSU)

VSU = Installed IT Equipment (RU) divided by (Total Rack Space in Installed Racks (RU) × Optimum Target)

Where VSU is the ratio of the total quantity of installed IT equipment, in terms of "RUs" (i.e. standard rack units – 1U, or one rack unit, is 19″ [48 cm] wide and 1.75″ [4.45 cm] tall), to the total number of RUs available in all racks installed in the data center facility. Optimum target is simply the maximum utilization level allowed for racks within the data center. Ideally this number would be close to 100%, but in many older data centers that is not possible due to cooling or power constraints at the rack level. By applying optimum targets to this formula we can chart a metric that is relevant to any data center design.

Horizontal Space Utilization (HSU)

HSU = Installed Racks divided by Maximum Racks Supported

Where HSU is the ratio of the total quantity of installed racks to the maximum quantity of racks the data center facility can support.

DCSE Example

The organization supports a small data center of 1,600 square feet, but currently the floor space is near 85% of capacity. Therefore the current number of racks = 45, and the maximum number of racks = 53 (assumes 30 square feet on average per rack).

HSU = 45 / 53 = .85

The average rack utilization is 70% and the estimated "optimum" utilization is 80% due to power and cooling limitations. Therefore the actual installed equipment, or rack unit count, is roughly 1,323 (45 racks × 42 RU per rack × 70%). The optimal installed equipment count is roughly 1,512 (45 racks × 42 RU per rack × 80%).

VSU = 1,323 / 1,512 = .88

This shows a data center nearing its logical capacity, but with room to grow if configured properly. We then combine HSU and VSU by taking their geometric mean (the square root of HSU × VSU), which ensures that a small or large change in one variable does not have an unbalanced impact on the overall score (see Evidence). Therefore, the Data Center Space Efficiency index becomes:

DCSE = Geomean(.85, .88) = 86% capacity

Given the criteria above, the data center is operating at 86% of its potential capacity for equipment and space utilization – no great surprise.

What DCSE points out rather quickly is the potential growth available within this existing configuration. With a combination of higher virtualization levels and increased rack densities, it's likely this rack environment will support existing growth rates for quite some time. And yes, we must assume that both power and cooling are available to support these higher densities, but by factoring in the optimal rack density much of the power and cooling issue can be mitigated. If not, an analysis of the cost to add power and cooling versus the cost to build out a new data center might in fact change the overall decision-making process.

Calculating the Impact of Technology Refresh

An interesting feature of DCSE is that it can also be used for what-if analysis of potential upgrades. As an example, given the environment above, let's assume plans are in place to upgrade half of the existing installed base of servers from 2U devices to 1U. The results are as follows:

HSU stays the same since the rack count does not change.

The current installed RU count is 1,323, but with half of those servers upgraded from 2U to 1U the total used RU count becomes 1,323 × 0.75 ≈ 992.

VSU changes to 992 / 1,512 = .66

Therefore: DCSE = Geomean(.85, .66) = 75% capacity

Using this model it becomes clear that a technology refresh of half of the servers brings the data center down to roughly 75% of its possible capacity rather than 86%, providing a logical way to increase productivity while deferring capital expense on a new data center for years to come.
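If you want to run your own numbers, the sketch below shows the DCSE calculation in Python. It simply restates the worked example above; the function name, the simplification of VSU to utilization-over-target, and the 42U rack constant are my own illustrative choices, not part of any published tool.

```python
from math import sqrt

RU_PER_RACK = 42  # assumes standard 42U racks, as in the example above

def dcse(installed_racks, max_racks, avg_rack_util, optimum_target):
    """Return (HSU, VSU, DCSE) for a data center.

    HSU  = installed racks / maximum racks the floor can hold
    VSU  = installed RUs / (RUs in installed racks * optimum target)
    DCSE = geometric mean of HSU and VSU
    """
    hsu = installed_racks / max_racks
    installed_ru = installed_racks * RU_PER_RACK * avg_rack_util
    optimal_ru = installed_racks * RU_PER_RACK * optimum_target
    vsu = installed_ru / optimal_ru
    return hsu, vsu, sqrt(hsu * vsu)

# Base case from the example: 45 of 53 racks, 70% average utilization, 80% optimum
hsu, vsu, score = dcse(45, 53, 0.70, 0.80)
print(f"HSU={hsu:.2f} VSU={vsu:.2f} DCSE={score:.0%}")   # ~0.85, ~0.88, ~86%

# What-if: half of the 2U servers refreshed to 1U, shrinking occupied RUs to 75% of today
hsu, vsu, score = dcse(45, 53, 0.70 * 0.75, 0.80)
print(f"HSU={hsu:.2f} VSU={vsu:.2f} DCSE={score:.0%}")   # ~0.85, ~0.66, ~75%
```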

Bottom Line: DCSE is not the end-all of data center space planning, but it was designed to give IT managers a view of capacity levels within their data centers, and a means to compare that level to a realistic (optimal) potential capacity rather than a hypothetical maximum. Using DCSE on an ongoing basis will yield a clear view of how space and capacity targets are changing over time, and how an organization's overall data center efficiency is improving.


Software Defined Data Centers – Hype or Reality?

by Dave Cappuccio  |  July 1, 2013  |  Comments Off

Our industry just loves new catchphrases – and when a new one catches on, the vendors and press can get amazingly creative in assigning that phrase roles and responsibilities that go far beyond its original concept.  In the recent past it was all about cloud computing.  Once the industry grokked the concept of "cloud" it became difficult to find any new product or release that was not either cloud-enabled or defined "as a Service".  I saw many "new" products announced that were essentially older products (some not so successful) being re-marketed as cloud solutions – even though they really had no defining function or service derived from the cloud concept. My favorite was DCaaS – "Data Center as a Service" – or what we used to call hosting.   Call it Marketing 201: if it's a new phrase and has cachet, use it for all it's worth, regardless of the reality of things.

Spin forward to today and the buzzword du jour is "Software Defined X".   Leading the pack are Software Defined Networking and Software Defined Storage, which intuitively make sense, but lately the variation game has begun and I've heard about Software Defined Organizations, Software Defined Staffing, Software Defined Power, and Software Defined Radio Receivers (really).  The interesting one to come out of the pack, though, is the Software Defined Data Center.  While originally I was skeptical about any SDx assignation, the more I think about SDDCs the more the concept resonates.

Let's start by doing some quick definitional work – at least from my perspective.  If software is being used to manage or automate a single component or process, let's call it software-controlled X, because it's not defining anything, it is controlling a specific action.  So if I'm controlling an HR process it's software-controlled, or if I'm monitoring power consumption and pricing, it's software-controlled.  However, if the layer of abstraction goes up a notch, and I need to manage/control many diverse components within an ecosystem, then the Software Defined terminology begins to make sense.  Software Defined Networking is all about controlling/automating many diverse elements within the network stack from a control plane rather than at the component level.  Software Defined Storage is about controlling discrete device types and varying file types as a single storage pool rather than as individual elements.

Software Defined Data Centers, it could be argued, bring this discussion up one more layer.  In theory an SDDC is a layer of abstraction above multiple other SDx layers (network, virtualization, storage, etc.), whereby the data centers, wherever they are located, are controlled/automated from a single control plane using a common set of APIs.  Sometimes called the virtual data center, the idea is that in a perfect world data center resources would be placed wherever it made the most economic sense, and the allocation and use of those resources would be controlled by rules and analytics, allowing both workflows and workloads to be moved or directed to wherever they best served the business at any particular point in time (e.g. year-end processing, business continuity, disaster avoidance, time zones, etc.).  And in a true SDDC environment the physical data center location (and ownership) becomes irrelevant, which means that true hybrid data centers will emerge – allowing, perhaps, critical work on-premises, non-critical work off-premises, and load- or time-sensitive work in the cloud.
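To make the idea concrete, here is a purely hypothetical sketch of what such a rules-based placement layer might look like; the site names, attributes and thresholds are invented for illustration and do not describe any vendor's SDDC API.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    on_premises: bool
    cost_per_hour: float   # relative cost of running a workload at this site
    latency_ms: float      # latency to the primary user population

@dataclass
class Workload:
    name: str
    critical: bool
    max_latency_ms: float

def place(workload, sites):
    """Pick a site by rule: critical work stays on premises,
    everything else goes to the cheapest site that meets its latency need."""
    candidates = [s for s in sites if s.latency_ms <= workload.max_latency_ms]
    if workload.critical:
        candidates = [s for s in candidates if s.on_premises]
    return min(candidates, key=lambda s: s.cost_per_hour, default=None)

sites = [
    Site("owned-dc-east", True, 1.00, 5),
    Site("colo-west", False, 0.70, 40),
    Site("public-cloud", False, 0.40, 60),
]
print(place(Workload("order-entry", critical=True, max_latency_ms=20), sites).name)
print(place(Workload("year-end-batch", critical=False, max_latency_ms=500), sites).name)
```

In a real SDDC the rules would of course be far richer (cost curves, compliance, capacity, time of day), but the shape is the same: policy at the top, APIs underneath, placement decided by software rather than by where the hardware happens to sit.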

This is obviously mostly conceptual right now, but it is the way the industry is heading, and astute IT managers today are thinking not necessarily about the bells and whistles vendors are promising, but about the organizational impact these environments are going to cause.  Think about a rules- and automation-based environment tied together with APIs: the roles of SysAdmin, NetAdmin and StorAdmin have just changed dramatically.  Programming, scripting and analytic skills will be the key enablers and the most valued skills within IT, while component-based skills will become secondary.  And problem identification and resolution will become one of the most complex tasks IT will need to manage.

Software Defined Data Centers will be a brave new world – are you ready?

 


Rack Unit Effectiveness–A Useable Data Center Metric

by Dave Cappuccio  |  November 9, 2012  |  1 Comment

Data Centers have gotten a bad rap of late, with both the press and senior executives putting pressure on them to improve overall efficiency and reduce operating costs (yet again).  The focus for the last few years has been around energy efficiency and the PUE (Power Usage Effectiveness) metric developed by The Green Grid.  PUE is a great metric, and when used wisely it can help organizations improve energy efficiency, often by 20% or more.  Unfortunately for Data Center managers, the efficiency gains attained with a PUE focus were almost all on the Facilities side of the equation, and while IT and the Data Center may have gained some benefits (e.g. improved cooling), the lion's share of the operational savings was applied to the Facilities budget (unless of course you're one of those rare companies that gives IT its own power budget).

But realistically the headlong rush to better PUEs has done little to improve data center efficiency.  Having a facility with a great PUE is one thing, but if my data center is highly underutilized, or if the resources are poorly managed, IT has not solved the real problem of getting the most out of the resources we already have.  The other problem with PUE is that as Data Center managers strive for more energy-efficient IT equipment, they could inadvertently degrade that wonderful PUE the Facilities team reported last year.  Take a hypothetical data center with an average PUE of 1.5.  If IT decides it's time to do a technology refresh on some servers and bring in the current generation as replacements, the overall performance and productivity of applications will increase, but because of the energy efficiency improvements vendors have made, the overall power draw for IT could very easily go down.  When that happens the ratio of total building power to IT load gets worse – negatively impacting PUE.  So a great decision by IT could easily create a bad impression of Facilities, unless everyone understands the overall value of what was done.

So given that rather long preamble, I’ve been thinking about taking the same concept of PUE (optimum vs. actual usage) and applying it to the Data Center proper in order to create a resource efficiency metric.  The problem with creating a metric like this is that all Data Centers are not created equal – and don’t have the same type of equipment or configurations.  So given that caveat, I’d like to propose creating a metric around the most common resource available in most Data Centers – the Rack Unit, or RU.  A standard rack today has 42U, others have 48U, 50U and more, but the single RU itself is something we can track.

So here is the basic idea – and I’ll be writing more on the RUE and RUiE metrics in Gartner’s published research.  Let’s assume the following just for illustrative purposes:

300 rack maximum capacity (approximately 9,000 square feet of floor space).
Standard 42U size
180 racks are currently installed, and average 65% utilization.

The maximum RU count at capacity is 12,600 (300*42)
The Installed RU count is 7,560 (180*42)
The utilized RU count is 4,914 (7,560*65%)

Using the same construct as PUE, we take the maximum and divide by the actual (12,600 / 4,914) and come up with a ratio of 2.56 (where capacity would be 1.0)

The Data Center RUE is now 2.56 and can be tracked fairly easily to monitor both growth and efficiency.

Using the reciprocal (1/RUE) yields your utilization;  RUiE =  1 / 2.56 = 39%.

Now the big flaw here is an obvious one – nobody wants to get to perfection.  An RUE of 1.0 would indicate you were completely out of room – and that's not a metric I'd want to attain.  However, using this same idea you could modify the maximum capacity to an optimal capacity. Let's assume no rack should exceed 90% capacity as a target.  The results would look like this:

The maximum RU count at capacity is 11,340 (300*42*.90)
The Installed RU count is 7,560 (180*42)
The utilized RU count is 4,914 (7,560*65%)

RUE now becomes 2.31 (11,340 / 4,914) and the RUiE is 43% ( 1 / 2.31).

Still a usable metric, but one with where if I reached my optimal goal of 1.0 I’d still have some space left while I built (or moved to) the next Data Center.

Food for thought?  Comments welcome.


The Case for the Infinite Data Center

by Dave Cappuccio  |  June 7, 2012  |  Comments Off

When faced with planning a new data center, the question of how much space will be needed is potentially the most difficult to answer. That said, the answer is often one of the quickest made – with the least amount of analysis – and I would suggest that it is rarely correct; in most cases the final size is far larger than what is actually needed.

The first mistake many people make is to base their estimates on what they currently have – extrapolating out future space needs according to historical growth patterns. It sounds like a logical approach, but there are two fundamental problems: the first is the assumption that the floor space currently used is being used properly, and the second is a two-dimensional view – the assumption that usable space is a horizontal construct rather than a combination of both horizontal and vertical space.

Many times I have seen Data Center managers or Facilities teams start with the following assumption: we are out of (or near) capacity in our data center, therefore when we build next we will need more space. If we have 5,000 square feet today we must need at least 7,500 to sustain our growth. The error is that the focus is on square footage, not compute capacity per square foot.

By looking at compute capacity as the metric, things begin to change rather quickly. As an example, let's take a typical environment of 40 server racks. In a high percentage of data centers today these racks would be populated with servers one or two generations old, depending on corporate refresh cycles, and the average server would be a standard 2U height. The racks would rarely be nearing physical capacity but might actually be maxed out in logical capacity due to power or cooling constraints at the rack level (the mantra to avoid creating hot spots in data centers has actually made floor and rack use a lot less efficient).

Given a 60% load capacity on average (again, to avoid hot spots), our example would yield an average of 13 physical servers per rack (assuming 42U racks) and 520 physical servers. Given 30 square feet per rack (which includes aisles, door-swing space, etc.), the 40 racks would require 1,200 square feet of floor space.

So how big should the next data center be? If we assume 15% CAGR as an average growth target, in 10 years our small IT room would need to support at least 160 racks with over 2,000 physical servers, and would require almost 5,000 square feet of floor space.

But – what if we thought both vertically and horizontally? All of the above assumes the status quo: that I acquire the same type of equipment and apply the same configuration policies throughout. But let's assume whatever floor size you design was created to allow full use of rack space without the fear of hot spots (and there are many ways to do this with a great deal of expense). Taking the same 40 racks, pushing them to 90% capacity on average (leaving some room for switches, etc.) and upgrading the existing server base over the next two years to 1U servers would support 1,520 physical servers.

So a data center of the exact same size, containing 40 racks, with the proper design, would support 15% growth every year for at least 8 more years. Now the question becomes – do we build it bigger to support the original target of 2,000 servers, or will a future technology refresh within the next 8 years double our capacity yet again?

Doing some simple spreadsheet exercises and asking these "what if" questions can yield some startling results when it comes to capacity estimates. And the logic works with servers as well as storage, as each device category continues to decrease in size, improve in capacity and performance, and reduce its power consumption per unit of work with each new generation.
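As a rough stand-in for that spreadsheet exercise, here is a small Python sketch built on the example's own assumptions (42U racks, 30 square feet per rack, 2U servers at 60% today, 1U servers at 90% after a refresh, 15% CAGR); the helper names and the rounding are mine.

```python
import math

RU_PER_RACK = 42
SQFT_PER_RACK = 30   # includes aisles and door-swing space, as in the example

def servers_supported(racks, server_height_u, rack_load_target):
    """Physical servers the floor can hold at a given per-rack load ceiling."""
    per_rack = round(RU_PER_RACK * rack_load_target / server_height_u)  # ~13 or ~38
    return racks * per_rack

def years_of_growth(current_servers, capacity, cagr):
    """Years of compound growth before the server count exceeds capacity."""
    return math.log(capacity / current_servers) / math.log(1 + cagr)

today = servers_supported(40, server_height_u=2, rack_load_target=0.60)    # ~520
future = servers_supported(40, server_height_u=1, rack_load_target=0.90)   # ~1,520
floor_space = 40 * SQFT_PER_RACK                                           # 1,200 sq ft

print(f"{today} servers today, {future} after a 1U refresh, in {floor_space} sq ft")
print(f"That absorbs roughly {years_of_growth(today, future, 0.15):.1f} years of 15% growth")
```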

If we were to look at these performance and density trends and make the assumption that the curve will continue – even at a much slower pace – it becomes clear that even small data center environments can sustain significant growth rates (well in excess of 20% CAGR) while maintaining the exact same footprint over the next 15 to 20 years.

Food for thought – and, as an aside, food for thought when contemplating the life cycle of a container-based data center as well.


Steps to Ease Data Center Cooling – Number 5

by Dave Cappuccio  |  April 6, 2012  |  Comments Off

Cooling within the data center has become our Achilles' heel in many cases. Historically the folks in IT had relatively nothing to do with heat or cooling management; this was strictly under the purview of the facilities team (after all, if it wasn't IT gear, it didn't count). In today's world, though, the IT team has to get involved, since they are the ones that need to live with (and fix) the problem.

In this series of posts I'll posit 10 easy steps you can take to solve, or at least mitigate, the cooling issues at your site.

9. Technology Refresh
Using technology refresh as a cooling solution may seem counterintuitive to many people, but it is in fact a proven solution for many. An interesting trend has been under way that could help IT organizations solve multiple problems simultaneously (see "Grow Disk Storage 800% or More, Without Increasing Power or Cooling Costs, in the Same Space"). The problems are intertwined in almost all data centers: capacity, space and power. Each issue is impacted almost every time equipment is added or changed on the data center floor. Historically, capacity planners focused on new application growth and a continuous drive toward virtualization, while keeping existing equipment in "maintenance mode," trying to get the most work out of the equipment over the longest time possible — especially when faced with tight budgets. It turns out that this seemingly prudent use of resources is not necessarily the most prudent thing to do. One reason is that the energy requirements of older servers are, in some cases, three to four times greater than those of new equipment.

In recent years, x86 server performance has been doubling (or increasing even more than that) with each new generation, while at the same time becoming more energy efficient. Doubling the performance and halving the power in the same space is a sound cost-saving concept. When you look at AMD or Intel performance numbers over the last few generations, and then compare the energy consumption for each of those generations, you’ll notice a dramatic increase in processor performance, while at the same time seeing a significant reduction in energy consumption (and heat generation) for those servers. An equipment replacement policy that is escalated (rather than deferred due to capital constraints) can in fact have the added benefit of reducing energy (and cooling) requirements while also reducing physical capacity (smaller footprint) and increasing performance.
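A back-of-the-envelope sketch makes the point; the server counts, wattages and the 4:1 consolidation ratio below are illustrative assumptions, not measured figures.

```python
def refresh_savings(old_servers, old_watts_each, consolidation_ratio, new_watts_each):
    """Rough estimate of the IT-load reduction from a technology refresh.

    consolidation_ratio: how many old servers one new server replaces
    (through higher per-socket performance and virtualization).
    """
    new_servers = -(-old_servers // consolidation_ratio)   # ceiling division
    old_load_kw = old_servers * old_watts_each / 1000
    new_load_kw = new_servers * new_watts_each / 1000
    return new_servers, old_load_kw, new_load_kw

# Illustrative only: 300 five-year-old servers at ~450 W each, replaced 4:1
# by current-generation servers at ~350 W each.
n, before, after = refresh_savings(300, 450, 4, 350)
print(f"{n} new servers, IT load drops from {before:.0f} kW to {after:.0f} kW "
      f"({1 - after / before:.0%} less power, and less heat to remove)")
```

Every kilowatt that no longer enters the room is a kilowatt of heat the cooling plant no longer has to reject, which is why an escalated refresh can double as a cooling project.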

10. External Augmentation
For data centers nearing capacity, either of physical floor space or of the facilities infrastructure to support the IT load, the idea of external augmentation is beginning to resonate. In actuality this technique is not about augmenting an existing environment, but about offloading some percentage of workload elsewhere in order to free up power, cooling and floor space in the existing data center for future growth. Depending on the age and location of the data center, the type of workload involved can vary greatly. In some cases data centers are so old that there is a great risk of impacting business outcomes with an extended outage, and therefore the high-risk, mission-critical systems are potential move candidates while improvements or a retrofit project are completed.

In other cases the data center may in fact be very robust and highly fault-resilient, but cannot handle the current growth trends. In these cases offloading non-critical work (e.g. back-office systems, test/development) may become a viable alternative to building out a complete new data center. In either case the offloading is often considered a short-term (e.g. two-year) solution while the optimal solution is developed.


Steps to Ease Data Center Cooling – Number 4

by Dave Cappuccio  |  March 30, 2012  |  Comments Off

Cooling within the data center has become our Achilles' heel in many cases. Historically the folks in IT had relatively nothing to do with heat or cooling management; this was strictly under the purview of the facilities team (after all, if it wasn't IT gear, it didn't count). In today's world, though, the IT team has to get involved, since they are the ones that need to live with (and fix) the problem.

Well the good news is that in most older data centers (older being 10+ years), there are plenty of low hanging fruit to choose from when deciding what project to undertake in order to develop a more efficient cooling environment within the data center.

In this series of posts I'll posit 10 easy steps you can take to solve, or at least mitigate, the cooling issues at your site.

7. Shut down CRACs
It is often the case that a data center can have too much cooling rather than too little. Many companies find themselves with a sizeable data center that is cooled to a consistently low temperature across the floor space, even when some of that floor space is either empty, or contains equipment that needs minimal cooling. In these situations, especially within older data centers, the solution can be as easy as physically shutting down some CRACs. This simple technique is often overlooked by IT for a couple of reasons. First, the responsibility for Infrastructure equipment like CRACs falls under the purview of the Facilities team, and therefore IT staff rarely think about CRAC efficiency. The second reason, especially on older equipment, is that the system has been set to a standard fan speed (often High), and left in that condition as a standard operating procedure. These fans in many cases are the most energy hungry devices on the data center floor, so any opportunity to either moderate them (item 2) or shut them down should be taken advantage of.

8. Shrink Floor-space
Companies that have experienced M&A activity, or those that have employed newer (smaller) server and storage technologies, often find themselves with more floor space than is actually needed. In many cases IT looks at this space as a value-add, as it provides room for potential growth in years to come. However, this excess space also needs to be conditioned, and is often kept at the same temperature as the rest of the floor since it's all one contiguous space. In the past few years we have seen an increasing trend to shut down this space, freeing it up for other uses. By walling off excess space IT can reduce monthly operating costs (reduced energy use), while at the same time freeing up possible office space or IT work areas, or even releasing leased space. In situations where the asset is owned and IT isn't quite sure how much growth to expect over time, the use of temporary moveable walls might be a viable alternative. In either approach the objective is to reduce the conditioned IT space down to what is absolutely needed for the next few years, not to keep all available space just because it's there.


Steps to Ease Data Center Cooling – Number 3

by Dave Cappuccio  |  March 28, 2012  |  Comments Off

Cooling within the data center has become our Achilles' heel in many cases. Historically the folks in IT had relatively nothing to do with heat or cooling management; this was strictly under the purview of the facilities team (after all, if it wasn't IT gear, it didn't count). In today's world, though, the IT team has to get involved, since they are the ones that need to live with (and fix) the problem.

Well the good news is that in most older data centers (older being 10+ years), there are plenty of low hanging fruit to choose from when deciding what project to undertake in order to develop a more efficient cooling environment within the data center.

In this series of posts I'll posit 10 easy steps you can take to solve, or at least mitigate, the cooling issues at your site.

5. Airflow
The primary force for cooling in data centers is air, and the control of airflow can be a simple method of increasing cooling efficiency with minimal expense. In many cases when equipment is installed in racks and the rack has open space, server administrators fail to install blanking panels (or defer it until they have time – which often never happens). Without these panels, hot air from one server easily moves up the rack, contaminating (heating up) the equipment above it. These panels are designed to control this flow and should be used whenever possible.

A second and more basic method of improving airflow is to remove any blockages from under the floor itself. The accumulation of power and data cables over the years, especially in older data centers, is often one of the biggest impediments to good airflow, and can restrict efficient cooling by as much as 30%. The issue is that, due to the multiple layers of cables under the floor, pulling them out can be risky for data centers running production workloads, as mislabeled cables may get pulled, disrupting active systems. These projects are often best managed as weekend/holiday endeavors, usually after a significant amount of planning.

Additionally, there are airflow systems available for redirecting air under floors via fans and sensors. Example vendors: Tate Floors, Triad, Legrand.

6. Chimneys
The issue with hot racks is that hot air generated within the rack can leak out into the floor space and be reintroduced into another rack, thus aggravating that rack's cooling process. A simple solution developed over the past few years is to create a chimney above the rack that redirects the hot air directly upwards towards the plenum for removal from the data center. These chimneys have been built by Facilities teams, or can be acquired from specialty vendors, and while not the most attractive devices in the data center, they do what's necessary to improve cooling efficiency at a very low cost. Typical vendors: Chatsworth, HP, Great Lakes.


Steps to Ease Data Center Cooling – Number 2

by Dave Cappuccio  |  March 25, 2012  |  Comments Off

Cooling within the data center has become our Achilles' heel in many cases. Historically the folks in IT had relatively nothing to do with heat or cooling management; this was strictly under the purview of the facilities team (after all, if it wasn't IT gear, it didn't count). In today's world, though, the IT team has to get involved, since they are the ones that need to live with (and fix) the problem.

Well the good news is that in most older data centers (older being 10+ years), there are plenty of low hanging fruit to choose from when deciding what project to undertake in order to develop a more efficient cooling environment within the data center.

In this series of posts I'll posit 10 easy steps you can take to solve, or at least mitigate, the cooling issues at your site.

3. Ambient Temperature
Ambient temperature, or the average temperature in the data center, is often the easiest efficiency target, and the one most often overlooked. Historically data centers were operated with room temperatures in the 68°F to 71°F range, primarily due to concerns about overheating IT equipment. These concerns were first developed during the mainframe era and have been carried forward to all data centers, regardless of equipment mix. However, today's server, storage and networking equipment have operating temperature tolerances that can easily exceed 95°F and still remain within the manufacturers' guidelines.

Now we do not advocate running data centers at those high temperatures, but ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) has published its 2011 guidelines for data centers, which recommend average ambient temperatures between 74°F and 80°F. Raising these temperatures can be one of the fastest ways to save operating costs, as studies have shown that raising the ambient temperature by 1°F can save upwards of 3% in energy costs. To obtain significant energy reduction while still maintaining a comfortable working environment, Gartner recommends that data center operators consider an average ambient temperature of 78°F.
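To put a rough number on it, here is a quick sketch of that rule of thumb; compounding the 3%-per-degree figure is my own simplifying assumption, and the result is only an approximation.

```python
def setpoint_savings(current_f, target_f, savings_per_degree=0.03):
    """Estimated fraction of cooling-energy cost saved by raising the setpoint,
    compounding the ~3%-per-degree rule of thumb cited above."""
    degrees = max(0, target_f - current_f)
    return 1 - (1 - savings_per_degree) ** degrees

# Moving a legacy 71°F room to the 78°F Gartner suggests
print(f"~{setpoint_savings(71, 78):.0%} lower cooling-energy cost")   # roughly 19%
```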

4. Hot and Cold Containment
The placement of racks within data centers has changed over the years, primarily due to an ever-increasing need to manage the heat exiting them. For the past 10 years or so the idea of hot and cold containment has become standard operating procedure (see 1 above). While this did not completely solve the heat problem, it did position the racks such that the hot air leaving one row would not immediately be drawn into the next row. Server administrators were still limited in how many devices they could put into each rack, though, as higher-density racks would often create hot spots on the floor. These hot spots were then controlled by spreading higher-density racks across the floor space, essentially sharing their higher air-temperature outputs with the rest of the floor.

Well-designed data centers today have solved the hot spot issue by concentrating high-density racks into hot or cold containment aisles, rather than by spreading them around the floor. These aisles are designed so that all the heat leaving the racks is contained within the row via walls and then quickly channeled upwards to the plenum, thus ensuring hot air is not "shared" with the rest of the data center. These hot containment zones can be constructed from sheetrock, heavy-duty plastic sheeting, or self-contained units pre-designed by vendors. Hot or cold containment zones have two distinct benefits: the reduction of energy required to cool the data center floor through the elimination of heat leakage from the ends of rows, and the ability to fully utilize rack densities, allowing increased kilowatts per rack in the containment zones and thus increasing the usable rack space.
