Supermicro Unveils Liquid-Cooled Data Center Solutions

Supermicro has announced the availability of comprehensive liquid-cooling solutions, including cold plates, cooling distribution units (CDUs), coolant distribution manifolds (CDMs), and complete cooling towers. These advancements are poised to significantly reduce data center Power Usage Effectiveness (PUE), potentially lowering overall power consumption by up to 40%.
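As a rough illustration of how PUE drives that kind of saving, the sketch below scales a fixed IT load by an assumed air-cooled and an assumed liquid-cooled PUE; the PUE values are illustrative assumptions, not Supermicro's published figures.

```python
# Rough sketch: PUE = total facility power / IT equipment power, so for a
# fixed IT load the facility draw scales directly with PUE.
# The PUE values below are assumptions for illustration only.

def facility_power_kw(it_load_kw: float, pue: float) -> float:
    return it_load_kw * pue

it_load_kw = 1000.0                                    # assumed 1 MW IT load
air_cooled_kw = facility_power_kw(it_load_kw, 1.6)     # assumed air-cooled PUE
liquid_cooled_kw = facility_power_kw(it_load_kw, 1.1)  # assumed liquid-cooled PUE

savings = 1 - liquid_cooled_kw / air_cooled_kw
print(f"Facility power drops from {air_cooled_kw:.0f} kW to "
      f"{liquid_cooled_kw:.0f} kW (~{savings:.0%} savings)")
```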

Charles Liang, President and CEO of Supermicro, emphasized the company’s commitment to working with AI and high-performance computing (HPC) customers to integrate the latest technologies into their data centers. Liang highlighted that Supermicro’s liquid cooling solutions can support up to 100 kW per rack, facilitating denser AI and HPC computing environments while reducing total cost of ownership (TCO).

The company’s modular approach, dubbed the ‘building block architecture,’ enables rapid deployment of cutting-edge GPUs and accelerators. Strategic partnerships with trusted suppliers allow Supermicro to deliver new rack-scale solutions with expedited delivery times.

Supermicro’s application-optimized, high-performance servers are engineered to support the most powerful CPUs and GPUs for tasks such as simulation, data analytics, and machine learning. The flagship Supermicro 4U 8-GPU liquid-cooled server stands out in the market, delivering petaflops of AI computing power in a compact form factor built around NVIDIA HGX H100/H200 GPUs. Upcoming shipments will include the liquid-cooled Supermicro X14 SuperBlade in both 8U and 6U configurations, the rackmount X14 Hyper, and the Supermicro X14 BigTwin. These platforms are optimized for HPC and will support Intel Xeon 6900 series processors with Performance cores in a compact, multi-node design.

Liquid-Cooled Servers

In addition, Supermicro has confirmed its support for the latest accelerators from leading semiconductor companies, including the Intel Gaudi 3 and AMD Instinct MI300X. The Supermicro SuperBlade, capable of hosting up to 120 nodes per rack, is designed to handle large-scale HPC applications within just a few racks. At the International Supercomputing Conference (ISC) 2024, Supermicro showcased a broad array of servers, including systems that integrate Intel Xeon 6 processors.

During ISC 2024 last week, Supermicro demonstrated an extensive lineup of solutions tailored for HPC and AI environments. The highlight was the new 4U 8-GPU liquid-cooled servers equipped with NVIDIA HGX H100 and H200 GPUs, with future support planned for NVIDIA HGX B200 GPUs. These systems use high-bandwidth memory (HBM3 on the H100, HBM3e on the H200) to bring more data closer to the GPU, accelerating AI training and HPC simulation workloads. The density of the 4U liquid-cooled servers means a single rack can deliver more than 126 petaflops of FP16 performance: 8 servers per rack, each with 8 GPUs, and each GPU rated at 1,979 teraflops FP16 (with sparsity).
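The rack-level figure follows directly from the per-GPU numbers quoted above; a quick sanity check of the arithmetic:

```python
# Rack-level FP16 throughput implied by the figures in the article.
servers_per_rack = 8     # 4U 8-GPU liquid-cooled servers per rack
gpus_per_server = 8
tflops_per_gpu = 1979    # FP16 with sparsity, per GPU

rack_petaflops = servers_per_rack * gpus_per_server * tflops_per_gpu / 1000
print(f"{rack_petaflops:.1f} petaflops per rack")  # ~126.7 PFLOPS
```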

The Supermicro SYS-421GE-TNHR2-LCC model offers dual 4th or 5th Gen Intel Xeon processors, while the AS-4125GS-TNHR2-LCC variant comes with dual 4th Gen AMD EPYC CPUs. Furthermore, the AS-8125GS-TNMR2 server gives users access to eight AMD Instinct MI300X accelerators alongside dual AMD EPYC 9004 Series processors, delivering up to 128 cores and 256 threads with a maximum of 6TB of memory. Each AMD Instinct MI300X accelerator carries 192GB of HBM3 memory, with all eight interconnected via AMD’s Universal Base Board (UBB 2.0).

Moreover, Supermicro’s new AS-2145GH-TNMR-LCC and AS-4145GH-TNMR APU servers are specifically designed to accelerate HPC workloads using the AMD Instinct MI300A APU. These systems combine high-performance AMD CPUs, GPUs, and HBM3 memory, featuring a combined 912 AMD CDNA 3 GPU compute units, 96 “Zen 4” cores, and 512GB of unified HBM3 memory in a single system.
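Those system-level totals decompose across the four MI300A APUs each server hosts; the sketch below uses AMD’s published per-APU MI300A specifications (an assumption drawn from AMD’s data sheet, not stated in the article) to show how they add up.

```python
# Per-APU MI300A figures (assumed from AMD's published specs) scaled to a
# four-APU Supermicro system.
apus_per_system = 4
cdna3_cus_per_apu = 228    # CDNA 3 GPU compute units per MI300A
zen4_cores_per_apu = 24    # "Zen 4" CPU cores per MI300A
hbm3_gb_per_apu = 128      # unified HBM3 capacity per MI300A, in GB

print(apus_per_system * cdna3_cus_per_apu)   # 912 compute units
print(apus_per_system * zen4_cores_per_apu)  # 96 CPU cores
print(apus_per_system * hbm3_gb_per_apu)     # 512 GB HBM3
```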

With these innovations, Supermicro continues to push the envelope in data center efficiency and performance, providing B2B clients with robust, scalable solutions to meet the demands of modern AI and HPC applications.
