Server rack power management

Powering next‑gen AI racks with efficient, scalable solutions across PSUs, IBCs, and VRMs

About

As AI workloads surge and GPU power demands rise exponentially, traditional server rack architectures are pushed to their physical, electrical and thermal limits. Modern AI data centers are driving a dramatic shift in infrastructure design, pushing from the tens of kilowatts per rack in traditional architectures to more than 100 kW today, and moving toward 600 kW and ultimately megawatt-class racks in the future.

Design engineers must overcome rapidly increasing GPU power consumption; tighter thermal efficiency and density requirements across PSUs, IBCs and VRMs; shrinking system footprints; higher voltages with lower currents; and the need to significantly reduce power distribution losses.
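The link between higher bus voltages, lower currents and lower distribution losses comes straight from I²R arithmetic. A minimal sketch, assuming an illustrative 100 kW rack and a 1 mΩ distribution path (neither figure is from this page):

```python
# Sketch: why higher bus voltages cut power-distribution losses.
# All numbers are illustrative assumptions, not product specifications.

def distribution_loss_w(rack_power_w: float, bus_voltage_v: float,
                        path_resistance_ohm: float) -> float:
    """I^2 * R conduction loss in the rack's power distribution path."""
    current_a = rack_power_w / bus_voltage_v   # P = V * I
    return current_a ** 2 * path_resistance_ohm

RACK_POWER_W = 100_000         # 100 kW rack (assumed)
PATH_RESISTANCE_OHM = 0.001    # 1 mOhm busbar/cabling path (assumed)

for bus_v in (48, 400, 800):
    loss = distribution_loss_w(RACK_POWER_W, bus_v, PATH_RESISTANCE_OHM)
    print(f"{bus_v:>4} V bus: {loss / 1000:.3f} kW lost in distribution")
```

Doubling the bus voltage halves the current and quarters the conduction loss, which is the core motivation for moving rack distribution to higher DC voltages.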

Our broad server rack power management portfolio is built to address these challenges with Si, SiC and GaN power switches and ICs such as gate drivers, power stages, auxiliary power, current sensors, digital controllers and microcontrollers. Covering every major power stage in the rack, from power supply units (PSUs) and AC-DC conversion stages to intermediate bus converters (IBCs) and voltage regulator modules (VRMs), the portfolio delivers scalable power, optimized efficiency and seamless integration for next-generation AI server racks.

As AI server rack operation transitions to hundreds of kilowatts and ultimately megawatt-class power, power supply unit (PSU) architectures must undergo a complete redesign. Next-generation racks adopt high-voltage DC (HVDC) distribution, shifting from ~400 V AC to 800 V DC. To support this transition, PSU switch ratings must scale from 650 V to 1.2 kV, enabling advanced three-phase topologies such as B6 or Vienna PFC for higher efficiency under heavy AI workloads. Cooling is a critical challenge as PSU form factors remain tightly restricted; in such cases, sidecar placement frees vertical rack space and improves cooling.
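The step from 650 V to 1.2 kV switch ratings follows from simple derating arithmetic. A sketch under an assumed 20 % blocking-voltage headroom rule of thumb (a common guideline, not a figure stated on this page):

```python
# Sketch: minimum switch blocking-voltage rating for a given DC bus.
# The 20 % derating headroom is an assumed rule of thumb.

def min_switch_rating_v(bus_voltage_v: float, derating: float = 0.8) -> float:
    """Minimum blocking-voltage rating so the bus sits at `derating` of it."""
    return bus_voltage_v / derating

print(min_switch_rating_v(400))  # 500.0 -> 650 V-class devices suffice
print(min_switch_rating_v(800))  # 1000.0 -> 1.2 kV-class devices needed
```

With an 800 V DC bus, even before accounting for switching overshoot, the required headroom already rules out 650 V-class devices and points to the 1.2 kV class named in the text.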

Infineon addresses these challenges with a comprehensive portfolio of Si, GaN and SiC power technologies. CoolSiC™ and CoolGaN™ devices reduce switching losses in the AC-DC stages of AI server PSUs. Alongside our switches, our ecosystem of gate drivers, digital isolators, auxiliary power and microcontrollers supports both single-phase and three-phase PSUs across multiple topologies. A variety of modular and SMD packages, along with reference designs, helps design engineers accelerate development, improve efficiency and reduce time-to-market for next-generation PSUs.

Battery backup units (BBUs) act as safety nets that instantly supply power during a PSU or grid power outage. For example, to meet 48 V system requirements and demands of 90-240 seconds of backup time, BBUs must deliver high efficiency and a scalable architecture. Infineon's BBU solutions leverage patented technologies that provide higher power density and efficiency compared to current solutions. Built on 40 V and 80 V MOSFET-based designs, these platforms offer high efficiency, reduced BOM cost and scalable performance across multiple power levels, while operating effectively within limited voltage ranges.
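The 90-240 second backup requirement translates directly into stored energy. A minimal sizing sketch, assuming an illustrative 100 kW rack and 95 % discharge efficiency (both assumptions, not figures from this page):

```python
# Sketch: BBU energy sizing from hold-up time requirements.
# Rack power and discharge efficiency are illustrative assumptions.

def required_backup_wh(rack_power_w: float, backup_s: float,
                       discharge_efficiency: float = 0.95) -> float:
    """Energy the BBU bank must store to ride through an outage."""
    return rack_power_w * backup_s / 3600 / discharge_efficiency

# 90 s and 240 s hold-up figures come from the text above.
for t_s in (90, 240):
    wh = required_backup_wh(100_000, t_s)
    print(f"{t_s:>3} s backup: {wh / 1000:.1f} kWh")
```

The roughly 2.6-7.0 kWh result for this assumed rack shows why BBU platforms need to scale across multiple power levels rather than target a single fixed capacity.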

Intermediate bus converters (IBCs) must provide high power density, excellent thermal performance, high efficiency and voltage conversion between the PSU and the point of load (POL). Infineon addresses these needs with a broad portfolio of CoolGaN™ switches, gate drivers, digital controllers, microcontrollers, current sensors and modular power stages, enabling scalable and reliable IBC designs for next-generation AI racks. With strong support for wide-bandgap devices and reference designs optimized for both HV and LV conversion paths, Infineon helps design engineers improve efficiency and power density and accelerate time-to-market.

Infineon complements this offering with a selection of protection devices that ensure a controlled, safe power ramp-up, protecting both the IBC and the rest of the rack while maintaining stability in high-density environments such as AI server racks. For medium-voltage protection, we offer a range of hot-swap or e-fuse devices such as the XDP700 series, while for high-voltage protection we offer a hot-swap solution based on a SiC JFET device.

XPU and ASIC power solutions are essential in modern AI servers, where advanced processors require precise low-voltage, high-current power delivery to operate reliably under heavy computational loads. In these systems, the output of the intermediate bus converter (IBC) is stepped down through voltage regulator modules (VRMs) to meet the exact voltage needs of XPUs (e.g., GPUs, TPUs), ASICs and custom AI accelerators.

As compute performance increases, power delivery networks must support tighter voltage regulation, faster transient response and higher current density to maintain performance integrity, energy efficiency and reliability. At the same time, rising power density demands advanced thermal management, as modern XPUs generate substantial heat within limited board space. As a result, XPU and ASIC power solutions must combine high-efficiency VRM design, optimized power stage integration and advanced thermal handling to ensure stable operation under demanding AI workloads.
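The scale of the low-voltage, high-current challenge is easy to quantify. A sketch using hypothetical figures (a 1 kW accelerator, a 0.8 V core rail and 70 A per VRM phase are illustrative assumptions, not tied to any specific device):

```python
import math

# Sketch: VRM output current and multiphase count implied by XPU power.
# The 1 kW / 0.8 V / 70 A-per-phase figures are illustrative assumptions.

def vrm_output_current_a(xpu_power_w: float, core_voltage_v: float) -> float:
    """DC current the VRM must deliver at the core rail."""
    return xpu_power_w / core_voltage_v

def phases_needed(total_current_a: float, amps_per_phase: float = 70.0) -> int:
    """Multiphase VRM: round up to the phase count that carries the load."""
    return math.ceil(total_current_a / amps_per_phase)

current = vrm_output_current_a(1000, 0.8)
print(f"{current:.0f} A at the core rail, {phases_needed(current)} phases")
```

Currents on the order of a thousand amperes at sub-volt rails are what drive the tight regulation, fast transient response and current-density requirements described above.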
