Software is Eating the World

Five years ago Marc Andreessen rightly declared that “software is eating the world.” That has proven correct across nearly all of technology, but data center power and cooling infrastructure remains stuck in the static hardware world, and it is the second-highest item in data center TCO, with servers still being the largest expense. As cloud infrastructure and the demand for compute and storage resources grow rapidly, we must find a way to make data centers more efficient, and power and cooling are currently the main bottleneck to achieving that.

Elastic Provisioning

Advances in virtualization and programmatic infrastructure have made Cloud Computing an innovation driver for many companies because of the velocity with which they can deploy and expand. Yet advances in power provisioning and optimization haven't kept pace, with the only measurable progress being in Power Usage Effectiveness (PUE). Virtualization, containerization, and cloud architectures put additional pressure on power consumption and availability because they increase server and storage density.
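
For reference, PUE is simply total facility power divided by IT equipment power, so a value close to 1.0 means nearly all of the power drawn actually reaches the IT load. A minimal illustration (the numbers below are hypothetical, not from VPS or any specific facility):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical example: a 1,000 kW IT load in a facility drawing 1,500 kW overall.
print(pue(1500, 1000))  # 1.5 -> 500 kW is spent on cooling, distribution losses, etc.
```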

Rise of the Software Defined Data Center

Allied Market Research predicts that the global SDDC market will reach $139b by 2022, growing at a CAGR of 32% from 2016 to 2022, and that the hyper-scale data center market will grow to $71.2b by 2022. Power infrastructure, which is already typically over-provisioned to meet both current peak loads and future growth, will only become more costly and inefficient as this hyper-growth continues. There needs to be a way to efficiently unlock and utilize that excess capacity and programmatically remove constraints.

Enter Virtual Power Systems

VPS has developed a software-defined power infrastructure solution they call ICE (Intelligent Control of Energy), and while there is a hardware element to it, the core of the solution is the software stack. At the base of this stack is the ICE Operating System. ICE Applications run on top of ICE OS, and an ICE Cloud service provides remote monitoring and automated control. The primary challenge, as I wrote about in my previous blog post, is that in order to have enough capacity for peak loads, power infrastructure has to be over-provisioned, which leaves capacity locked and unused. VPS solves this with a battery component that supplies power during peak loads. This is known as battery-based peak shaving: once the peak load subsides, the batteries recharge to prepare for the next peak.
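
To make the peak-shaving idea concrete, here is a minimal control-loop sketch. This is not VPS's actual ICE logic; the utility cap, battery model, and one-hour interval are assumptions purely for illustration:

```python
def peak_shave_step(load_kw: float, utility_limit_kw: float,
                    battery_kwh: float, battery_capacity_kwh: float,
                    hours: float = 1.0):
    """One control interval: discharge the battery above the utility cap,
    recharge it from spare headroom when the load drops below the cap."""
    if load_kw > utility_limit_kw:
        # Peak: the battery covers as much of the overage as it can,
        # so the utility feed stays at (or near) the cap.
        needed_kwh = (load_kw - utility_limit_kw) * hours
        discharge_kwh = min(needed_kwh, battery_kwh)
        battery_kwh -= discharge_kwh
        utility_kw = load_kw - discharge_kwh / hours
    else:
        # Off-peak: recharge using the headroom below the cap.
        headroom_kwh = (utility_limit_kw - load_kw) * hours
        charge_kwh = min(headroom_kwh, battery_capacity_kwh - battery_kwh)
        battery_kwh += charge_kwh
        utility_kw = load_kw + charge_kwh / hours
    return utility_kw, battery_kwh

# Hypothetical 100 kW utility cap and a 50 kWh battery starting full.
print(peak_shave_step(load_kw=130, utility_limit_kw=100,
                      battery_kwh=50, battery_capacity_kwh=50))
# -> (100.0, 20.0): the battery shaves the 30 kW peak for this hour.
```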

Scale-Out Power Resources

The key aspect of the overall VPS stack is the software functionality that scales out the ICE batteries and applies a resource-pooling algorithm to them. Instead of focusing on individual racks or subsets of racks, it allows power policies to be applied to logical sets of racks and/or rows, which VPS refers to as Rackshare. This scaling out is extremely important for “peak shaving,” the evening out of the power load by the ICE software stack. The other important part of this approach is the ability to address elastic power requirement changes dynamically, rather than through the static processes previously used for peak shaving.
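
A rough sketch of what pooling a power budget across a logical rack group might look like; the class and function names here are my own for illustration and are not the Rackshare API:

```python
from dataclasses import dataclass

@dataclass
class Rack:
    name: str
    load_kw: float

def group_overage_kw(racks: list[Rack], group_budget_kw: float) -> float:
    """Apply the power budget to the logical group as a whole:
    one busy rack can borrow headroom from idle neighbors, and only the
    group-level overage has to be shaved by the pooled batteries."""
    total_load_kw = sum(r.load_kw for r in racks)
    return max(0.0, total_load_kw - group_budget_kw)

# Hypothetical row of racks sharing a 30 kW budget.
row = [Rack("r1", 14.0), Rack("r2", 6.0), Rack("r3", 8.0)]
print(group_overage_kw(row, 30.0))  # 0.0 -> no shaving needed, even though r1 alone runs hot
```

The point of the group-level view is that a per-rack budget would have flagged the busy rack as an overage, while the pooled policy recognizes that the row as a whole is still within its power envelope.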