Data Centre Power Supply Hardware & Software Trends

Martin Hägerdal
Head of Ericsson Power Modules
Data centres are becoming increasingly important as the amount of data being stored and accessed rises exponentially - but one often-overlooked element is the data centre power supply.

Today’s data centres face two key challenges. They must keep up with demands on performance from an increasing number of cloud-connected devices, producing greater and greater volumes of data which needs to be communicated and stored. And they must maintain or reduce the energy consumption of their servers while increasing their performance. Data centres in the US alone used 91 billion kWh of energy in 2013 (equivalent to the output of 34 large coal-fired power plants) which is predicted to rise to 140 billion kWh in 2020; this comes with not only a massive environmental cost, but a huge dollar cost, too.

These challenges are driving several different innovations in server board-level power supplies. As IC technologies advance and more complex devices such as FPGAs are used to enhance the computing performance of these boards, the power demands of these new loads are becoming more and more complex. FPGAs in particular are notoriously demanding: they require multiple very high quality power rails at low voltages and high currents, impose strict conditions on parameters such as ramp-up times, and often have dynamic requirements such as adjustable voltage power rails.

Power supplies are also under scrutiny for their energy efficiency. Rather than reducing the power consumed by the active parts of the system (any efficiency gains there tend to be spent on higher performance), the focus is on minimising the energy wasted by the power conversion sub-systems. Since no power converter can be 100% efficient, a percentage of the energy is lost at every stage, and because there are multiple conversion stages between the grid and the load, the losses compound.

The result is a need for more energy-efficient power supplies that can maintain their efficiency across changing load conditions, while providing more tightly regulated, more dynamic power lines for the devices they power.

One way the industry has responded to this challenge is the development of digital power supply products, which can be monitored and controlled remotely so that their setup can be changed based on measured system parameters. For example, if the monitored load current and temperature suggest that the IC being powered is in danger of overheating, the power being supplied can be dialled back until it recovers. Or, power consumption can be logged to ensure that the system is operating as energy efficiently as possible. If an FPGA is reprogrammed and changes its power needs, this can be communicated too. Overall, digital control has been a huge step forward in implementing intelligent power management schemes to optimise performance and energy efficiency.
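The thermal-protection behaviour described above can be sketched as a simple control policy. The sketch below is purely illustrative, assuming a hypothetical digital power module that reports load current and temperature and accepts an output power scaling command; the thresholds and function names are not from any real vendor API.

```python
# Hypothetical intelligent power management policy (illustrative only).
# Assumes a digital power module reporting temperature and load current,
# and accepting a 0.0-1.0 output power scale. Thresholds are assumptions.

THERMAL_LIMIT_C = 105.0   # assumed temperature at which throttling starts
RECOVERY_C = 95.0         # assumed temperature at which ramp-up resumes
CURRENT_LIMIT_A = 25.0    # assumed per-rail current ceiling


def next_output_scale(temp_c: float, current_a: float, scale: float) -> float:
    """Return the output power scale for the next control tick."""
    if temp_c >= THERMAL_LIMIT_C or current_a > CURRENT_LIMIT_A:
        # IC in danger of overheating or overloading: dial power back.
        return max(0.5, scale - 0.05)
    if temp_c <= RECOVERY_C:
        # Conditions safe again: ramp back up gradually to full power.
        return min(1.0, scale + 0.05)
    return scale  # hold steady inside the hysteresis band
```

A real implementation would run this loop in the system controller, feeding it telemetry read back over the management bus and logging each decision so that energy consumption can be audited later.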

Power supply manufacturers have been working hard to design configurable, digitally controlled power supply hardware with advanced features that can implement these intelligent power management schemes. The PMBus communications protocol has also emerged, to enable effective communication of these commands to the power supplies. The latest version of the protocol features reduced communication latencies and a dedicated adaptive voltage scaling (AVS) bus to statically and dynamically control processor voltages. However, to allow data centres to take full advantage of these technologies, advanced software to design, manage and control power supply systems is vital.
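To make the PMBus side of this concrete, the sketch below decodes the LINEAR11 data format defined in the PMBus specification, which is used for telemetry such as output current and temperature (output voltage readings normally use the separate LINEAR16 format governed by VOUT_MODE). The command codes are from the standard PMBus command table; the physical bus read itself is device-specific and is left out, so raw words are shown as a host would receive them.

```python
# Decoding PMBus LINEAR11 telemetry words (per the PMBus specification).
# The bus transaction is device-specific and omitted; raw 16-bit words
# are used directly as a host controller would read them back.

READ_VOUT = 0x8B            # standard PMBus command codes
READ_IOUT = 0x8C            # returns a LINEAR11 word
READ_TEMPERATURE_1 = 0x8D   # returns a LINEAR11 word


def decode_linear11(word: int) -> float:
    """Decode a 16-bit LINEAR11 value: 5-bit signed exponent in the top
    bits, 11-bit signed mantissa below; value = mantissa * 2**exponent."""
    exponent = word >> 11
    if exponent > 0x0F:          # sign-extend the 5-bit exponent
        exponent -= 0x20
    mantissa = word & 0x7FF
    if mantissa > 0x3FF:         # sign-extend the 11-bit mantissa
        mantissa -= 0x800
    return mantissa * 2.0 ** exponent


# Example: exponent -3 (0b11101), mantissa 100 -> 100 * 2**-3 = 12.5 A
iout_amps = decode_linear11((0x1D << 11) | 100)
```

Monitoring software polls words like these at intervals, converts them with a decoder of this kind, and feeds the results into the power management policies described earlier.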

Advanced software is used right from the start of the design stage to configure optimised power supply systems. For example, software programs such as Ericsson Power Designer may be used to calculate and optimise the parameters of output filters to stabilise the power supply's transient response. This software should make designers' lives easier by enabling robust power system design very quickly via an easy-to-use graphical user interface (GUI). Software may also be used to adjust power supply parameters in situ, enabling techniques such as adaptive compensation of the PWM control loop and other advanced energy optimisation algorithms.
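As an illustration of the kind of calculation such design tools perform (not Ericsson Power Designer's actual internals), the sketch below computes the corner frequency and characteristic impedance of a buck converter's LC output filter, which determine where the control loop must be compensated for a stable transient response. The component values in the example are assumptions chosen only to be typical of a point-of-load converter.

```python
# Illustrative output-filter maths of the kind a power-design tool uses.
# Component values are assumptions, not tied to any specific module.
import math


def lc_corner_frequency_hz(inductance_h: float, capacitance_f: float) -> float:
    """Corner frequency of the LC output filter: f_c = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))


def characteristic_impedance_ohm(inductance_h: float, capacitance_f: float) -> float:
    """Z0 = sqrt(L / C); comparing Z0 with load resistance and capacitor
    ESR indicates how well damped the filter's resonance will be."""
    return math.sqrt(inductance_h / capacitance_f)


# Example: a 0.47 uH inductor with 470 uF of output capacitance
fc = lc_corner_frequency_hz(0.47e-6, 470e-6)       # roughly 10.7 kHz
z0 = characteristic_impedance_ohm(0.47e-6, 470e-6)
```

The design tool would place the control loop's crossover frequency well above this corner, then verify phase margin against the resulting resonance and damping.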

Taking it one step further, software should be able to monitor and control the entire power architecture from the system level, rather than limiting its scope to the control of individual power conversion stages. While this exists today to a certain extent, many installations still fall short of the hardware's full potential for increasing energy efficiency by responding to load changes or environmental conditions, because they are held back by the capabilities of the control software. Another level of intelligence is required to bring it all together.

While much time and effort has been spent on developing innovative hardware, power supply manufacturers’ software offerings are somewhat lagging behind in terms of innovation and functionality. As server rack power architectures evolve to meet the ongoing challenges of high performance data centres, the industry will need a renewed focus on software development to fully enable the already present hardware features that can meet the power needs of complex loads while reducing energy losses on a grand scale. 
