At the recent DCD New York event, industry experts and data center operators gathered to discuss and debate what next-gen data centers will look like. According to Suvojit Ghosh, managing director of the Computing Infrastructure Research Center (CIRC) at McMaster University in Ontario, data centers will be faster, cooler and more automated by 2030.
Let’s face it, they’ll have to be in order to keep up with the constant onslaught of data-intensive applications. In addition to being more efficient and economical, data centers of the future will take on different forms, with a higher concentration of hyper-converged infrastructures and distributed edge computing workloads.
In envisioning that future, industry pundits, like Ghosh, predict a shift to specialized chipsets fueling a new generation of advanced hardware for high-density processing. New liquid cooling technologies and increased power awareness will become de rigueur in designing and building new and diverse data center environments. From an economics or energy efficiency point of view, it won’t make sense to add more power and cooling continuously as computing spikes intensify.
Today’s data center operators waste up to 50% of existing power capacity, which really underscores that we simply can’t continue to overprovision—and overpay—for power and cooling. Data centers of the future will need to optimize energy utilization at every possible level. In terms of cooling, Ghosh’s CIRC is analyzing costs associated with immersion cooling, liquid-to-the-chip and water-chilled rear-door cooling units, all of which are showing economies of scale for dense workloads when compared to air cooling.
However, software remains the biggest catalyst for producing data center economies and efficiencies, with Software Defined Power (SDP) smack dab in the middle. The ability to dynamically control power distribution, both in and to racks, nodes and workloads, will better support power-hungry hypercompute and machine learning applications at both the edge and the core of evolving data centers. New form factors, such as micro data centers and edge data centers, only add to the cost of power components.
Software capable of driving real-time decisions will be the key to operating these new edge data centers, because merely maintaining them will require consolidating compute, network, storage and power resources. SDP is really the only way to solve edge power problems while improving data center scalability, flexibility, programmability and intelligence. SDP puts data center operators on an accelerated path to designing, building and maintaining next-gen infrastructures with "power aware" workload orchestration. As a result, enterprises and co-lo providers will be able to readily increase rack-power density while supporting the new, demanding IT loads and dynamic SLAs associated with data centers of the future.