In the recent Green Grid white paper, “Impact of Virtualization on Data Center Physical Infrastructure”, there’s an interesting discussion about the impact of virtualization on data center density, capacity, and power waste.
Most of the findings, however, apply to in-house environments, where the same entity that virtualizes also has to right-size the UPS, plan the floor layout, and so on.
In the world of outsourced data centers, though, most of that responsibility resides outside your business, and the implications shift accordingly.
- Footprint right-sizing becomes someone else’s problem – if you have the right contract (which you should have anyway)
- Virtualization may lead to a slight uptick in density, but we haven’t seen one as large as the Green Grid team implies
- However, density certainly does lead to proportionally lower facilities costs – within reason
Footprint Resizing Implications
For those who outsource data centers, footprint resizing is a contractual flexibility issue. One of the most important assets you can have in a co-location agreement is not the space itself, but options on that space – including options to take additional adjacent space and to divest yourself of space you no longer need. We would hope factors such as virtualization would be the drivers of this trend, but nowadays business downturns are just as much of a factor in making scale-down options valuable.
One area where the paper’s conclusions have not been borne out in our experience with outsourced environments is the link between density and virtualization.
There’s actually not a strong reason to link the two: most of the builds we’ve dealt with define density by peak power, and at real peaks (usually boot-up) a virtualized server will not draw any more than a comparably configured non-virtualized one, so peak usage is equal. Those peaks may be sustained longer, but we’re still probably talking about a one-third bump at most (an average-to-peak ratio rising from roughly 75% to 100%), which is not what takes you from low-density computing to high-density.
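The one-third figure is just arithmetic on the average-to-peak ratio; a quick sketch with hypothetical wattages makes it concrete:

```python
# Back-of-envelope sketch (hypothetical numbers, not measured data):
# if a non-virtualized server averages ~75% of its peak draw and a
# consolidated virtualized host sustains close to 100%, the *average*
# draw rises by about a third -- the peak itself stays the same.

peak_watts = 800             # assumed boot-up peak for one server
avg_ratio_standalone = 0.75  # assumed average-to-peak ratio, non-virtualized
avg_ratio_virtualized = 1.0  # assumed sustained load on a virtualized host

avg_standalone = peak_watts * avg_ratio_standalone
avg_virtualized = peak_watts * avg_ratio_virtualized
bump = avg_virtualized / avg_standalone - 1

print(f"average draw rises by {bump:.0%}")  # ~33%, not a density doubling
```

Note that because billing and provisioning key off the peak, that 33% shows up in sustained load, not in the density number the facility designs to.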
What we’ve typically seen with clients going down the virtualization path is the conversion of higher-end 3U-4U database servers into hosts for virtual machines. But even if you fill a full rack with nothing but HP DL580s, you’re still likely to wind up at 8kW per rack at most, which in our book is mid-range density. You might wind up leaving 2x the space around that rack, but you probably don’t need 100 square feet of empty space or a water-cooling project (both of which we’ve seen for “really high density”).
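As a rough sanity check on that 8kW figure, here is an illustrative estimate using assumed numbers (not vendor specs) for a 4U host in a standard cabinet:

```python
# Illustrative rack-density estimate. The per-host wattage and chassis
# size are assumptions for the sketch, not published DL580 figures.

rack_units = 42           # standard full-height cabinet
server_units = 4          # assumed 4U chassis per host
watts_per_server = 800    # assumed sustained draw per host

servers_per_rack = rack_units // server_units   # how many hosts fit
rack_kw = servers_per_rack * watts_per_server / 1000

print(f"{servers_per_rack} hosts, ~{rack_kw:.0f} kW per rack")
```

Ten 4U hosts at around 800W each lands right at 8kW – mid-range density, well short of the numbers that justify exotic cooling.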
Typical enterprise virtualized environments run nowhere near the 15kW, 20kW, or even 30kW some clients have been putting in since 2006. Those footprints usually involve blades or some other latest-and-greatest hardware configuration specifically optimized for fast I/O among the devices – something we’ve seen for gaming, custom apps, and other compute-intensive uses, but much less so for a first step into virtualization.
One last interesting note is the price relationship between density and cost per kilowatt. Most deals do fall in the low-to-mid density category (i.e., under 8kW per rack), and according to the SPY Index space cost data there is a definite downward linear trend in that range – the higher the density, the lower the cost per kilowatt.
Whether that trend continues into ultra-high density is more dubious – provider choice drops off a cliff, non-recurring costs can increase, and so on. This makes the polynomial curve shown below slightly more accurate: eventually the cost of cooling the equivalent of a small bonfire of bits becomes disproportionately high. I would suspect the trough actually lies a bit further to the left, but if virtualization does lead to slightly higher density, it would seem that the overall cost picture of outsourced providers would provide an extra boost to the bottom line.
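The trough of such a polynomial curve can be located directly. The sketch below uses a toy quadratic with invented coefficients – purely to show how the minimum-cost density falls out, not to model the SPY Index data:

```python
# Toy model of the cost-per-kW curve (coefficients invented for
# illustration only): cost(d) = a*d^2 + b*d + c, where d is density
# in kW per rack. A quadratic's trough sits at d = -b / (2a); moving
# the trough left models cooling costs starting to dominate sooner.

a, b, c = 0.5, -12.0, 200.0   # hypothetical fit, not real pricing data

trough_density = -b / (2 * a)                       # lowest-cost density
min_cost = a * trough_density**2 + b * trough_density + c

print(f"trough at {trough_density:.0f} kW/rack, cost {min_cost:.0f} per kW")
```

With these made-up coefficients the sweet spot lands at 12kW per rack; the argument above is that the real trough sits somewhat to the left of where the curve suggests.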