Federal Data Center Consolidation Effort Uncovers Additional 3,000 Data Centers.
“We’re three years into the data center consolidation effort, and the government still does not know how many centers it has,” said David Powner, director of information technology management at the Government Accountability Office.
David Powner testified Tuesday at a Senate hearing on IT duplication across government.
Here are some 570 of the 1,056 federal data centers that are either closing or have been closed since FDCCI began (according to cio.gov). I’ve used eSpatial to map these so that you can all get a better feel for the locations, agencies, and numbers of data centers closing.
This 1,056 is not drawn from the 2,094 data centers originally reported to OMB in 2010. It comes from the roughly 6,000 data centers OMB now estimates exist, based on reclassification efforts to determine the actual number of federal data centers.
There has been a lot of debate lately over the validity of PUE. It’s an aging metric that we were all very interested in as a gross measure of efficiency, but that was seven years ago; at least, that’s when I can first remember Christian Belady promoting it at The Green Grid. Seven years is a long time in the data center space. In that time, Open Compute has gotten started; better and newer waterside and airside economizers have hit the scene, along with wider adoption of VFDs and ECFs in air handlers and a ton of new modular and containment rigs. The other big change is that there is no longer any debate over the validity of hot/cold isolation and containment strategies, and there is a lot more synergy in new and effective solutions for everything from mixed and homogeneous environments to HPC, all the way to colo and multi-tenancy setups.
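Part of why PUE caught on, and part of why it gets criticized as a gross measure, is that it’s just a simple ratio: total facility power over IT equipment power. A minimal sketch with hypothetical numbers:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.
    1.0 is the theoretical ideal; everything above it is overhead
    (cooling, power distribution losses, lighting, etc.)."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical site: 1,200 kW at the utility meter, 800 kW reaching IT gear
print(round(pue(1200, 800), 2))  # 1.5
```

The criticism follows directly from the math: a site can improve its PUE simply by loading up idle servers (raising the denominator) without doing any useful work more efficiently.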
McKinsey released CADE, Corporate Average Data Center Efficiency, as a broad-ranging concept to help data center operators understand not just IT and cooling power usage but also resource efficiency and utilization, with a holistic approach.
The promise of CADE was that you’d be able to see, at a glance, your utilization and availability of power, space (floor, RUs, cable density, underfloor, overhead, etc.), and cooling (chiller, compressor, condenser, pump, water, and air handlers), alongside IT metrics such as availability and utilization of compute resources (CPU, RAM, network, storage, IO, etc.). The challenge was how difficult these variables were to measure in practice. It’s only now, five years after CADE was originally introduced, that we’re seeing traction: Facebook and Google are starting to look at it, and DCIM providers such as Schneider, ABB, Emerson, and several others are starting to build in the hooks to support it. There is talk that CA and Eaton will also be updating their DCIM suites to accommodate CADE.
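As commonly summarized, CADE multiplies facility efficiency (energy efficiency times utilization) by IT asset efficiency (energy efficiency times utilization), which is exactly why it is so hard to measure: you need all four inputs. A sketch under that assumed formulation, with made-up numbers:

```python
def cade(facility_energy_eff: float, facility_util: float,
         it_energy_eff: float, it_util: float) -> float:
    """CADE as commonly summarized from the McKinsey concept:
    Facility Efficiency (energy efficiency * utilization) times
    IT Asset Efficiency (energy efficiency * utilization).
    All inputs are fractions in [0, 1]; the result is a fraction too."""
    facility_eff = facility_energy_eff * facility_util
    it_asset_eff = it_energy_eff * it_util
    return facility_eff * it_asset_eff

# Hypothetical: a reasonably efficient, well-loaded facility
# whose servers are energy-efficient but only 10% utilized
print(f"{cade(0.53, 0.80, 0.60, 0.10):.3f}")  # 0.025
```

The multiplicative form is the point: even a well-run facility scores in the low single digits once idle servers drag down IT utilization, which is the story FDCCI keeps running into.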
How will CADE impact federal mandates on data center energy efficiency, such as FDCCI and the associated EOs? Nobody knows. But speculation is that AMI will need to be augmented with a strategy for really understanding, and taking a holistic approach to, the complete compute environment.
This has also translated into a thought shift from Energy Star to UL 2640 as the metric for measuring and understanding server efficiency in terms of transactional capability (transactions per watt). This seems to fit nicely with Open Compute, and the timing couldn’t be better.
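Transactions per watt is just throughput divided by average power draw; the hard part, and what UL 2640-style testing aims to standardize, is defining a representative transaction and load profile. A toy illustration with hypothetical numbers:

```python
def transactions_per_watt(transactions_per_sec: float, avg_watts: float) -> float:
    """Throughput normalized by power draw. Comparing two servers on this
    figure only makes sense if both ran the same standardized workload."""
    if avg_watts <= 0:
        raise ValueError("power draw must be positive")
    return transactions_per_sec / avg_watts

# Hypothetical: a server sustaining 12,000 tx/s at an average 300 W
print(transactions_per_watt(12000, 300))  # 40.0
```

Unlike PUE, this figure rewards doing more useful work per unit of energy rather than just shaving facility overhead, which is why it pairs naturally with stripped-down Open Compute hardware.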