In my role applying software-defined power (SDP) capabilities to data centers, I spend much of my time demonstrating new possibilities and how they affect data center designs.
The potential design changes are wide-ranging and applicable to any data center, and future demand is driving the process. In particular, the impact of 5G networks and the proliferation of edge data centers create demand for improved operational efficiency alongside ample bandwidth, high availability, and minimal latency.
In a recent byline in Data Center Knowledge, Scott Fulton contemplated the impact of hyperscale cloud platforms on data center designs and functions. I agree that hyperscale demand is driving innovation, especially as new designs trickle down to other data center business models and applications. Hyperscale designs embrace a different approach to infrastructure risk, reliability and redundancy, as Yigit Bulut, partner at EYP Mission Critical Facilities, explained in the article.
As a result, designers must rethink their approach to traditional data center operations in terms of economics and scaling. This holds true when considering cooling and power. To meet the business and operational goals of hyperscale data centers, organizations must change how power and cooling are allocated.
In the case of cooling, new technologies can improve rack power density. For power, innovative SDP solutions can increase available power capacity and reliability while reducing capital and operational expenses by reclaiming stranded capacity in the power system topology.
Consider how a data center might be optimized with a design that incorporates SDP. A 2N power topology maximizes power reliability: two active utility services, each capable of supporting the entire facility if the other becomes unavailable. In conventional 2N operation, that means each service runs below 50% of its rating. The SDP design goes a step further, deliberately loading both utility services above 50% of the facility's 2N capacity during normal operations, all the way through the UPS equipment. The cooling systems are likewise designed to handle higher-density racks and/or additional white space beyond what the 2N capacity of the UPS equipment would normally support.
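To make the arithmetic concrete, here is a minimal sketch in Python. The 10 MW feed rating and the 75% loading figure are illustrative assumptions, not numbers from the article or any particular facility:

```python
# Hypothetical numbers: each of the two utility feeds is rated 10 MW.
FEED_RATING_MW = 10.0

# Classic 2N: the total load must fit on a single feed after a failure,
# so steady-state capacity is capped at one feed's rating (each feed
# normally carries less than 50% of its rating).
classic_2n_capacity_mw = FEED_RATING_MW  # 10 MW

# SDP design: both feeds deliberately carry load above 50% during
# normal operation; here each runs at 75% of its rating.
sdp_steady_state_mw = 2 * FEED_RATING_MW * 0.75  # 15 MW

# Expressed against the classic 2N ceiling:
utilization_vs_2n = sdp_steady_state_mw / classic_2n_capacity_mw  # 1.5

print(f"Classic 2N steady state: {classic_2n_capacity_mw:.0f} MW")
print(f"SDP steady state: {sdp_steady_state_mw:.0f} MW "
      f"({utilization_vs_2n:.0%} of 2N)")
```

With these assumed numbers, the facility runs at 150% of its classic 2N capacity in steady state, which is exactly the scenario the failover discussion below has to handle.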
This approach doesn’t make much operational sense today—unless SDP is part of the solution.
Here’s how it would work: Assume the white space is divided into availability zones: one for traditional 2N workloads, and others for categories of workloads that are active-active, migratable, or covered by other service-level agreements.
SDP systems use compatible power switching and stored energy to manage and control power flow. Consider a 2N power system operating at 150% of its 2N capacity when one of its two sources suddenly becomes unavailable. Stored energy carries the shortfall while availability-zone workloads are migrated to an unaffected area of the facility, or out to other facilities. It keeps the remaining source at or below 100% of its single-feed capacity, with stability and reliability, until the loads in the availability zones are shed and the power system regains stability.
Once the data center returns to normal operations, software migrates workloads back into the facility as needed. Such a system can also peak-shave to minimize power allocations, manage burstable power conditions, and manage power demand at the utility service level.
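The failover sequence described above can be sketched as a toy simulation. All of the numbers here (feed rating, stored-energy capacity, zone loads) and the shed-one-zone-per-minute pacing are illustrative assumptions standing in for real workload migration:

```python
from dataclasses import dataclass

FEED_CAPACITY_MW = 10.0   # rating of the surviving utility feed (assumed)
STORED_ENERGY_MWH = 2.0   # assumed stored-energy (battery) capacity
STEP_MIN = 1              # simulation step, minutes

@dataclass
class Zone:
    name: str
    load_mw: float
    sheddable: bool       # active-active / migratable zones can be shed

# Facility running at 15 MW, i.e. 150% of single-feed capacity.
zones = [
    Zone("2N-critical", 9.0, sheddable=False),
    Zone("active-active", 4.0, sheddable=True),
    Zone("migratable", 2.0, sheddable=True),
]

energy_mwh = STORED_ENERGY_MWH
minutes = 0

# One feed has failed. Stored energy bridges the shortfall while one
# sheddable zone per step is migrated off-site, until the surviving
# feed can carry everything on its own.
while sum(z.load_mw for z in zones) > FEED_CAPACITY_MW:
    shortfall_mw = sum(z.load_mw for z in zones) - FEED_CAPACITY_MW
    energy_mwh -= shortfall_mw * STEP_MIN / 60  # MWh drawn this step
    if energy_mwh < 0:
        raise RuntimeError("stored energy exhausted before loads were shed")
    for z in zones:
        if z.sheddable:
            zones.remove(z)   # workload migrated away; zone shed
            break
    minutes += STEP_MIN

print(f"Stable on one feed after {minutes} min; "
      f"{energy_mwh:.2f} MWh of stored energy remaining")
```

With these assumed figures the facility is stable on the single surviving feed after two steps, having consumed only a fraction of the stored energy; sizing that stored-energy buffer against the expected migration time is the key design decision this sketch is meant to illustrate.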
This progressive design concept could yield up to twice the steady-state power capacity of a 2N data center designed without software-defined power.
There is no doubt that new tools in data center designers’ toolboxes make it possible to explore expanded degrees of freedom in facility design and implementation. Meanwhile, we should all take a page from the hyperscale data center design book, which provides a roadmap to greater operational efficiency. The economics, reliability and resiliency of these design possibilities cannot be ignored.