

NF5688G7

NF5688G7 is a 6U hyperscale training platform equipped with dual 4th Gen Intel Xeon Scalable Processors or AMD EPYC™ 9004 Series Processors and 8 of NVIDIA's latest GPUs, delivering industry-leading performance, extensive I/O expansion, and ultra-high energy efficiency. Its precisely optimized system architecture, with 4x CPU-to-GPU bandwidth, up to 4.0 Tbps of networking bandwidth, 8 TB of system memory, and 300 TB of local storage, fully satisfies the communication and capacity demands of multi-dimensional parallel training for giant-scale models. Twelve PCIe expansions can be flexibly configured with CX7, OCP 3.0, and multiple SmartNICs, making it an ideal solution for both on-premises and cloud deployment. It is built to handle the most demanding AI computing tasks, such as trillion-parameter Transformer model training, massive recommender systems, AIGC, and Metaverse workloads.
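As a hedged sanity check (an assumption, not a figure stated on this page), the 8 TB system memory corresponds to fully populating the Intel configuration's 32 DDR5 DIMM slots with 256 GB RDIMMs:

$32 \text{ DIMMs} \times 256\,\mathrm{GB/DIMM} = 8192\,\mathrm{GB} = 8\,\mathrm{TB}$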


Key Features

Unprecedented Performance

Powered by 8 of NVIDIA's latest GPUs in a 6U chassis, each with a TDP of up to 700W.

Supports 2x 4th Gen Intel Xeon Scalable Processors or AMD EPYC™ 9004 Series Processors.

Industry-leading AI performance of 16 PFlops, a 3x enhancement over the previous generation. The Transformer Engine significantly accelerates the training of GPT-class large models.
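A rough, hedged reading of that headline figure, assuming on the order of 2 PFlops of tensor throughput per Hopper GPU (an assumption, not a number quoted on this page):

$8 \text{ GPUs} \times \sim\!2\,\mathrm{PFlops/GPU} \approx 16\,\mathrm{PFlops}$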

Optimized Energy Efficiency

Extremely low air-cooling overhead, with fewer fans and higher power efficiency.

Separate 54V and 12V power supplies with N+N redundancy, reducing power conversion loss.

Intelligent regulation of cooling across the different layers of the system, reducing power consumption and noise.

Leading Architecture Design

Lightning-fast intra-node connectivity with a 4x improvement in CPU-to-GPU bandwidth.

Ultra-high scalable inter-node networking with up to 4.0 Tbps of non-blocking bandwidth (see the sketch after this list).

Cluster-level optimized architecture, GPU : Compute Network : Storage Network = 8:8:2.
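One hedged way to read the 4.0 Tbps figure, assuming one 400 Gb/s port (e.g. a CX7 NDR adapter) per compute NIC and per storage NIC in the 8:8:2 layout above (the port speed is an assumption, not stated here):

$(8_{\text{compute}} + 2_{\text{storage}}) \times 400\,\mathrm{Gb/s} = 4000\,\mathrm{Gb/s} = 4.0\,\mathrm{Tb/s}$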

Multi-scenarios Adaptation

Fully modular design and highly flexible configurations satisfy both on-premises and cloud deployment.

Easily handles large-scale model training for models such as GPT-3, MT-NLG, Stable Diffusion, and AlphaFold (a minimal launch sketch follows this list).

Diversified SuperPod solutions accelerate the most cutting-edge innovation, including AIGC, AI4Science, and the Metaverse.
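A minimal sketch of driving the node's 8 GPUs for data-parallel training, assuming PyTorch with the NCCL backend (generic open-source tooling, not software shipped with the NF5688G7); launched with torchrun --nproc_per_node=8 train_ddp.py:

# Minimal sketch, assuming PyTorch with the NCCL backend (an assumption,
# not software bundled with the NF5688G7). Shows how the node's 8 GPUs are
# typically driven for data-parallel training.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE; NCCL carries the
    # intra-node all-reduce over the HGX baseboard's NVLink/NVSwitch fabric.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model and optimizer; a real workload supplies its own.
    model = torch.nn.Linear(4096, 4096).to(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):  # placeholder loop with synthetic data
        x = torch.randn(32, 4096, device=local_rank)
        loss = model(x).square().mean()
        optimizer.zero_grad()
        loss.backward()   # gradients are all-reduced across the 8 local GPUs
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()

The same launcher scales out across nodes via torchrun's --nnodes and --rdzv_endpoint options, with inter-node traffic carried by the compute-network NICs described above.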

Technical Specifications

Model: NF5688-M7-A0-R0-00 / NF5688-A7-A0-R0-00

Height: 6U

GPU: NVIDIA HGX Hopper 8-GPU, TDP up to 700W per GPU

Processor: 2* 4th Gen Intel Xeon Scalable Processors, TDP 350W (M7) / 2* AMD EPYC™ 9004 Series Processors, max cTDP 400W (A7)

Memory: 32* DDR5 DIMMs, up to 4800 MT/s (M7) / 24* DDR5 DIMMs, up to 4800 MT/s (A7)

Storage: 24* 2.5-inch SSDs, up to 16* NVMe U.2

M.2: 2* onboard NVMe/SATA M.2, optional (M7) / 2* onboard NVMe M.2, optional (A7)

PCIe Slots: 10* PCIe Gen5 x16 slots; one PCIe Gen5 x16 slot can be replaced with two x16 slots running at PCIe Gen5 x8. Optional support for BlueField-3, CX7, and various SmartNICs

RAID: Optional RAID 0/1/10/5/50/6/60, etc., with cache supercapacitor protection

Front I/O: 1* USB 3.0, 1* USB 2.0, 1* VGA

Rear I/O: 2* USB 3.0, 1* RJ45, 1* Micro-USB, 1* VGA

OCP: Optional 1* OCP 3.0, with NC-SI support

Management: DC-SCM BMC management module with ASPEED AST2600

TPM: TPM 2.0

Fans: 6 hot-swap fans with N+1 redundancy

Power: 2* 12V 3200W and 6* 54V 2700W Platinum/Titanium PSUs with N+N redundancy

Size (W*H*D): 447mm * 263mm * 860mm

Weight: Net weight 92 kg (gross weight: 107 kg)

Environmental Parameters: Operating temperature: 10℃~35℃; storage temperature: -40℃~70℃. Operating humidity: 10%~80% RH; storage humidity: 10%~93% RH.