How Computing Hardware Is Changing Power and Cooling Requirements

This chart compares power consumption and cooling requirements for server-grade computing hardware.

CPU Servers (Intel Xeon, AMD EPYC)

  • 1U-4U rack servers: 0.2-1.2 kW power consumption
  • 208 V power supply
  • Standard air cooling (CRAC units, server fans) is sufficient
  • PUE (Power Usage Effectiveness): 1.4-1.6 (see the worked example after this list)
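
PUE is simply total facility power divided by IT equipment power, so a PUE of 1.5 means half a watt of cooling and distribution overhead for every watt of compute. A minimal worked example in Python, using hypothetical rack numbers:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT load."""
    return total_facility_kw / it_equipment_kw

# Hypothetical air-cooled CPU rack: 10 kW of IT load plus ~5 kW of
# cooling and power-distribution overhead, landing mid-range at 1.5.
print(pue(total_facility_kw=15.0, it_equipment_kw=10.0))  # 1.5
```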

GPU Servers (DGX Series)

Power consumption and cooling complexity increase dramatically:

Low-Power Models (DGX-1, DGX-2)

  • 3.5-10 kW power consumption
  • Tesla V100 GPUs
  • High-performance air cooling required (see the heat-load conversion after this list)
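
For air-cooled systems, the practical question is how much heat the room's cooling plant must remove. A rough conversion sketch (1 kW of electrical load dissipates about 3,412 BTU/hr of heat; the inputs below are the chart's approximate values):

```python
# Convert an IT power draw into an approximate cooling load.
# 1 kW of electrical load ~ 3,412 BTU/hr of heat to remove.
def cooling_load_btu_hr(kw: float) -> float:
    return kw * 3412

print(cooling_load_btu_hr(3.5))   # ~11,942 BTU/hr (DGX-1 class)
print(cooling_load_btu_hr(10.0))  # ~34,120 BTU/hr (DGX-2 class)
```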

Medium-Power Models (DGX A100, DGX H100)

  • 6.5-10.2 kW power consumption
  • 400 V power feed required (see the feed-sizing sketch after this list)
  • Liquid cooling recommended, and essential for denser configurations
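
As a rough feed-sizing sketch, here is the line current a ~10.2 kW system would draw from a 400 V feed, assuming a balanced three-phase supply and a power factor near 1.0 (both assumptions; real electrical planning should follow the vendor's site guide):

```python
import math

# Three-phase AC power: P = sqrt(3) * V_line * I * PF,
# so I = P / (sqrt(3) * V_line * PF).
def line_current_a(power_w: float, line_voltage_v: float, pf: float = 1.0) -> float:
    return power_w / (math.sqrt(3) * line_voltage_v * pf)

# A ~10.2 kW DGX-class system on a 400 V three-phase feed.
print(round(line_current_a(10_200, 400), 1))  # ~14.7 A per phase
```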

Highest-Performance Models (DGX B200, GB200)

  • Extreme power consumption: 14.3-120 kW
  • Blackwell-architecture GPUs
  • Full liquid cooling essential
  • PUE: 1.1-1.2, thanks to improved cooling efficiency (see the overhead comparison after this list)
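
To see what the lower PUE buys, compare the non-IT overhead implied by an air-cooled PUE of 1.5 against a liquid-cooled PUE of 1.15 at the same IT load (illustrative numbers based on the ranges above):

```python
def facility_overhead_kw(it_load_kw: float, pue: float) -> float:
    """Cooling/distribution power implied by a PUE at a given IT load."""
    return it_load_kw * (pue - 1.0)

# 120 kW of IT load (a GB200-class rack) under each cooling regime.
print(facility_overhead_kw(120, 1.5))   # 60.0 kW of overhead
print(facility_overhead_kw(120, 1.15))  # ~18.0 kW of overhead
```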

Key Trends Summary

The evolution from CPU to GPU computing represents a fundamental shift in data center infrastructure requirements. Per-system power consumption scales from a few hundred watts to tens of kilowatts (and beyond 100 kW at rack scale), driving the transition from traditional air cooling to sophisticated liquid cooling. Somewhat paradoxically, the highest-performance systems achieve the best facility-level efficiency: liquid cooling removes heat with far less overhead than air cooling, lowering PUE, even as these systems demand substantial infrastructure upgrades, including high-voltage power delivery and comprehensive thermal management.

※ Disclaimer: All figures presented in this chart are approximate reference values and may vary significantly depending on actual environmental conditions, workloads, configurations, ambient temperature, and other operational factors.

With Claude

Inside H100

From Claude with some prompting:
This image illustrates the internal architecture of the NVIDIA H100 GPU, showing its key components and interconnections. A few key points from the image:

The PCIe Gen5 interface connects the H100 GPU to the rest of the system: CPUs, storage devices, and other peripherals.
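
As a sanity check on what "PCIe Gen5" means in bandwidth terms, the theoretical per-direction figure for an x16 slot falls out of the per-lane signaling rate (a sketch; real-world throughput is lower due to protocol overhead):

```python
# Theoretical PCIe bandwidth per direction.
# Gen5 signals at 32 GT/s per lane with 128b/130b line coding.
def pcie_gb_s(gt_per_s: float, lanes: int) -> float:
    encoding_efficiency = 128 / 130
    return gt_per_s * lanes * encoding_efficiency / 8  # bits -> bytes

print(round(pcie_gb_s(32, 16), 1))  # ~63.0 GB/s each way for a Gen5 x16 slot
```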

NVLink allows multiple H100 GPUs to be interconnected; each H100 supports 18 fourth-generation NVLink links, for a total of roughly 900 GB/s of bandwidth per GPU.
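
The 900 GB/s figure follows from the per-link math (approximate published figures for fourth-generation NVLink):

```python
# H100 (SXM): 18 fourth-generation NVLink links, each carrying
# ~50 GB/s of bidirectional bandwidth.
nvlink_links = 18
gb_s_per_link = 50
print(nvlink_links * gb_s_per_link)  # 900 GB/s total per GPU
```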

The GPU has 80 GB of on-package HBM3 memory, delivering roughly twice the bandwidth of the previous generation's HBM2.
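
Putting approximate published bandwidth figures side by side supports the "roughly 2x" claim (values are approximate and vary by SKU):

```python
# Approximate peak memory bandwidth in TB/s (published figures; vary by SKU).
h100_hbm3 = 3.35   # H100 SXM, 80 GB HBM3
a100_hbm2 = 1.56   # A100 40 GB, HBM2
print(f"{h100_hbm3 / a100_hbm2:.1f}x")  # ~2.1x
```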