
The challenges and increased demands of the datacenter

Fri, 10/18/2013 - 9:36am

Our increasingly connected world and the oceans of data we generate have put the datacenter under tremendous pressure to accommodate our demands efficiently and reliably. ECN recently interviewed Ajith Jain, director of Vicor's global communications and computer unit, about the trends and challenges facing designers of datacenter devices.

ECN: How is the datacenter meeting the huge energy demands that mobile devices, cloud storage and the Internet of Things all create? And how does this affect power conversion?

AJ: Datacenters are pursuing alternative energy sources (wind, solar, BloomBox, etc.), and are designing server racks with more power-efficient products that improve FLOPs/Watt. By using enhanced power conversion technology to cater to high power CPUs, memory and chipsets, datacenter operators can reduce power conversion losses and hence reduce the cooling requirements, which in turn reduces the total power consumed by the datacenter. This is reflected in the PUE metric (Power Usage Effectiveness = Total Facility Power / IT Equipment Power). High power CPUs, faster DDR memories, and the latest generation chipsets all demand lower voltages and very high, dynamic current profiles, challenging the traditional power conversion and distribution architectures.
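
To make the PUE definition concrete, here is a minimal Python sketch of the calculation; the kilowatt figures are assumed for illustration only, not measured datacenter values.

# Minimal sketch: computing PUE (Power Usage Effectiveness) from facility
# and IT power figures. All numbers below are assumed for illustration.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = Total Facility Power / IT Equipment Power (ideal value is 1.0)."""
    return total_facility_kw / it_equipment_kw

it_load_kw = 800.0          # servers, storage, networking (assumed)
cooling_kw = 280.0          # cooling overhead (assumed)
conversion_loss_kw = 120.0  # UPS and power-conversion losses (assumed)

total_kw = it_load_kw + cooling_kw + conversion_loss_kw
print(f"PUE = {pue(total_kw, it_load_kw):.2f}")  # 1.50 for these numbers

Cutting the conversion-loss and cooling terms, as described above, is what drives the PUE back toward 1.0.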


ECN: When designing or scaling up a data center, how are power distribution and control considered?

AJ: High voltage DC distribution is a compelling alternative to AC distribution for reducing power distribution losses. Many are also considering a higher voltage (>12V) for the server rack backplane, which further reduces losses compared with low-voltage, high-current rails (losses are I²R, so increasing the voltage and reducing the current reduces distribution losses). Factorized Power Architecture further enables this by separating the high-voltage, low-current paths from the low-voltage, high-current paths and moving the final conversion closer to the CPU/memory, reducing the I²R losses involved.
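
A short sketch of the I²R argument, using an assumed 1 kW load and 5 mΩ of bus resistance, shows why moving from a 12V to a 48V backplane cuts distribution loss by a factor of 16:

# Minimal sketch of why raising the backplane voltage cuts distribution
# loss: for a fixed power, current scales as 1/V and I^2*R loss as 1/V^2.
# The 1 kW load and 5 mOhm bus resistance are assumed for illustration.

def distribution_loss_w(load_w: float, bus_v: float, bus_resistance_ohm: float) -> float:
    current_a = load_w / bus_v                   # I = P / V
    return current_a ** 2 * bus_resistance_ohm   # P_loss = I^2 * R

load_w = 1000.0
r_bus = 0.005  # 5 mOhm of backplane/cable resistance (assumed)

for bus_v in (12.0, 48.0):
    loss = distribution_loss_w(load_w, bus_v, r_bus)
    print(f"{bus_v:>4.0f} V bus: {loss:6.1f} W lost ({100 * loss / load_w:.1f}%)")
# 12 V: ~34.7 W lost; 48 V: ~2.2 W lost for the same delivered power.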

ECN: What about processor loads in the datacenter? What should designers keep in mind when addressing power conversion losses and efficiency issues?

AJ: Future generations of CPUs will require higher current at relatively lower voltages. Combined with voltage scaling technology, this puts a huge demand on the power converters' ability to meet the dynamic load profile. Moreover, future generations of high core-density CPUs will demand even more power than the current generation. Designers need to keep this in mind when selecting and implementing power conversion architectures. In addition, designers are challenged to provide more FLOPs/Watt by using increased numbers of processors. This directly reduces the board space available for power converters, pushing designers to look for new architectures that increase power density.
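
As a rough illustration of that power-density pressure, the sketch below estimates the W/cm² a point-of-load converter must deliver as its footprint shrinks; the power envelope and footprints are assumed numbers, not vendor specifications.

# Minimal sketch: estimating the power density a point-of-load converter
# must achieve when board space shrinks. All numbers are assumed.

def required_power_density(load_w: float, footprint_cm2: float) -> float:
    """Watts the converter must deliver per square centimetre of board."""
    return load_w / footprint_cm2

cpu_power_w = 200.0          # per-socket power envelope (assumed)
legacy_footprint_cm2 = 25.0  # space once available for the regulator (assumed)
dense_footprint_cm2 = 10.0   # space left after adding more processors (assumed)

print(f"legacy board: {required_power_density(cpu_power_w, legacy_footprint_cm2):.0f} W/cm^2")
print(f"dense board : {required_power_density(cpu_power_w, dense_footprint_cm2):.0f} W/cm^2")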

ECN: Is DC power, thanks to renewable energy sources that output DC, making inroads in the data center, and if so, how?

AJ: Yes, DC power is definitely making inroads into the data center for the reasons cited above; here's a summary:
o High voltage DC distribution is preferred over AC distribution for the following reasons:
   • Energy storage is handled in DC, not AC, so AC distribution requires an inverter to convert stored DC (battery backup) back to AC, which is inefficient
   • Alternative energy sources like solar, wind and BloomBox all provide DC voltages, so DC distribution becomes a seamless integration
   • Given these factors, high voltage DC distribution reduces the number of power conversion stages and hence reduces distribution losses
o The availability of higher-voltage-to-CPU/memory converter products - for example, Vicor's 48Vin to 0.9Vout solution for VR12.0, and its 48Vin to 1.8Vout solution for VR12.5 - enables higher voltage DC than the traditional 12V approaches, further reducing power loss and total power consumed. This has a positive impact on total cost of ownership.
o High voltage DC (384 VDC) combined with 48V-to-CPU/memory converters provides a complete, optimized power conversion solution at the highest efficiency the industry has ever seen (see the sketch below). This is a huge incentive for datacenter players to consider DC distribution.
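
The stage-count argument can be made concrete with a small sketch that multiplies per-stage efficiencies for an AC chain and an HVDC chain; the individual efficiency values are assumed round numbers for illustration, not measured figures.

# Minimal sketch comparing end-to-end efficiency of an AC distribution
# chain against an HVDC chain with fewer conversion stages.
# Per-stage efficiencies are assumed, not measured.

from math import prod

# AC path: rectify for battery backup, invert back to AC, PSU to 12 V, VR to CPU
ac_chain = {"UPS rectifier": 0.96, "UPS inverter": 0.95, "AC-DC PSU": 0.94, "12V-to-CPU VR": 0.90}

# HVDC path: one rectification to 384 V, then 384V-to-48V and 48V-to-CPU stages
dc_chain = {"384V rectifier": 0.97, "384V-to-48V": 0.97, "48V-to-CPU": 0.92}

for name, chain in (("AC distribution", ac_chain), ("HVDC distribution", dc_chain)):
    print(f"{name}: {prod(chain.values()) * 100:.1f}% end-to-end")
# Fewer stages in the DC path means fewer multiplied losses
# (~77% vs ~87% end-to-end with these assumed numbers).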


ECN: 400-V power distribution holds a lot of promise for telecom and datacom in terms of improved power efficiency. Can you describe some of the progress in this area and how it will both affect and benefit designers?

AJ: Large datacenters are already in the process of implementing HVDC and 48V-to-CPU approaches today. These datacenter operators can begin the transition by implementing 384V distribution and then move the conversion to the point of load (384V is the nominal of the ETSI range of 260V to 410V).
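
As a simple illustration, the sketch below checks a proposed bus voltage against the 260V to 410V window quoted above, treating 384V as the nominal; it is a hypothetical validation helper, not part of any standard or vendor tool.

# Minimal sketch: validating a proposed HVDC bus voltage against the
# 260 V to 410 V window quoted above, with 384 V as the nominal.

ETSI_MIN_V, ETSI_NOMINAL_V, ETSI_MAX_V = 260.0, 384.0, 410.0

def check_bus_voltage(bus_v: float) -> str:
    if not ETSI_MIN_V <= bus_v <= ETSI_MAX_V:
        return f"{bus_v:.0f} V is outside the {ETSI_MIN_V:.0f}-{ETSI_MAX_V:.0f} V window"
    return f"{bus_v:.0f} V is valid ({bus_v - ETSI_NOMINAL_V:+.0f} V from nominal)"

for v in (384.0, 400.0, 250.0):
    print(check_bus_voltage(v))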

ECN: How are data center infrastructure management (DCIM) software platforms changing the way power is consumed in the data center?

AJ: Monitoring voltages and currents on the higher power loads helps the DCIM further optimize the number of server blades that need to be active at any given time. Monitoring this status also enables DCIM platforms to reduce or increase the cooling required, which in turn impacts the power consumed for cooling.
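
A minimal sketch of that logic, with assumed telemetry readings and thresholds, might look like this:

# Minimal sketch of the DCIM logic described above: read per-blade power
# telemetry, consolidate load onto fewer active blades, and scale the
# cooling budget with the power actually being drawn. All values assumed.

blade_power_w = [310, 290, 140, 35, 30, 28]   # assumed telemetry readings
BLADE_CAPACITY_W = 350                         # assumed per-blade budget

total_load_w = sum(blade_power_w)
# How many blades the current load actually needs, rounded up.
blades_needed = -(-total_load_w // BLADE_CAPACITY_W)
print(f"load {total_load_w} W needs {blades_needed} of {len(blade_power_w)} blades")

# Scale cooling with the consolidated IT load (assumed 0.3 W of cooling per IT watt).
cooling_w = 0.3 * total_load_w
print(f"cooling budget: {cooling_w:.0f} W")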

ECN: One trend that comes up often is virtualized architectures for managing loads across multiple data centers in different locations. What can designers do to maximize energy efficiency with the virtualized architecture in mind?

AJ: This has no impact on the power converter technologies. However, using idle CPU capacity for additional tasks on the same server rack definitely increases the throughput for the same power consumed.
 
ECN: Do you see modular, low-power microservers dramatically changing the data center, or will they remain a niche market?

AJ: Microservers are geared toward lower power applications like social networking, web hosting and mail server applications. Based on our information, most datacenter customers budget about 10 percent of their loads as microserver-centric.

ECN: What should engineers consider when selecting fault protection solutions?

AJ: System reliability is the key consideration to begin with. By using more efficient power converters, you reduce the amount of heat generated, which not only reduces cooling costs but also increases the MTBF of the product. Reducing heat and improving system reliability also places less demand on secondary protection and redundancy, freeing up board space for better designs.
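
To illustrate the reliability side of that argument, the sketch below applies the common rule of thumb that failure rate roughly doubles for every 10°C rise; this is an Arrhenius-style approximation for illustration, not a vendor reliability model, and the baseline MTBF is assumed.

# Minimal sketch of the reliability argument above, using the rule of
# thumb that component failure rate roughly doubles per 10 degC rise.

def mtbf_scaled(mtbf_hours: float, delta_temp_c: float) -> float:
    """Estimate MTBF after an operating-temperature change of delta_temp_c."""
    return mtbf_hours / (2 ** (delta_temp_c / 10.0))

baseline_mtbf_h = 1_000_000.0  # assumed baseline MTBF in hours
# A more efficient converter that runs 10 degC cooler (assumed):
print(f"{mtbf_scaled(baseline_mtbf_h, -10.0):,.0f} hours")  # roughly doubles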

ECN: What are some of the latest trends for backup power to handle utility power disruptions? What are some of the monitoring and control trends? Where does the UPS system fit in?

AJ: Fuel cells, such as Bloom Energy's, and high voltage battery technology (384V and 48V batteries).

ECN: What additional telecom/datacom/networking trends do you expect to emerge in the near future?

AJ: More and more ASICs are being designed in smaller process geometries, demanding core voltages south of 0.9 V and currents north of 90 A. This is a significant challenge for traditional power converters in terms of conversion efficiency. As system designers add more features, board space becomes constrained, pushing the power conversion into a small area toward the tail end of the board. In addition, rising ambient temperatures challenge both the usable power that can be obtained and the MTBF of the power converter. High power GPUs are also being considered in place of CPUs to cater to increasing data processing demands, which directly impacts power delivery and distribution architectures.
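
A short sketch makes the point about sub-1V, 90A rails: every milliohm between the converter and the ASIC costs a fixed fraction of the delivered power. The path-resistance values below are assumed for illustration.

# Minimal sketch of why sub-1 V, 90 A rails are hard on efficiency: each
# milliohm in the output path dissipates I^2*R regardless of the low voltage.
# Resistance values are assumed for illustration.

load_v, load_a = 0.9, 90.0
load_w = load_v * load_a  # 81 W delivered to the ASIC

for path_mohm in (0.2, 0.5, 1.0):
    loss_w = load_a ** 2 * (path_mohm / 1000.0)  # I^2 * R in the output path
    print(f"{path_mohm} mOhm path: {loss_w:5.2f} W lost "
          f"({100 * loss_w / load_w:.1f}% of delivered power)")
# Placing the converter close to the load keeps this path resistance small.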
