Enterprise HPE ProLiant Compute DL384 Gen12 NVIDIA GH200 NVL2 Free Compute Private Cloud Rack Mount GPU AI Server

Contact me for free samples and coupons.
WhatsApp: 0086 18588475571
WeChat: 0086 18588475571
Skype: sales10@aixton.com
If you have any concerns, we provide 24-hour online help.
Product Details
Attribute | Value
---|---
Type | Rack
Processor Type | NVIDIA Grace CPU and Hopper GPU
Brand | HPE
Highlight | Rack Mount GPU AI Server, NVIDIA GH200 GPU AI Server, Private Cloud GPU AI Server
Product Description
Overview
HPE ProLiant Compute DL384 Gen12 is the first server from HPE enabled with the NVIDIA GH200 NVL2. It is optimized for AI inferencing for large language models that require large memory capacity, as well as for non-AI workloads such as large-scale simulation, EDA, and weather forecasting.


What’s New
With up to 1.2TB of fast unified memory and 5TB/s of bandwidth, the NVIDIA GH200 NVL2 can handle large language model fine-tuning and inferencing with Retrieval Augmented Generation (RAG), serving more users with twice the performance of the previous generation. The HPE ProLiant Compute DL384 Gen12 delivers the best performance per GPU in the HPE ProLiant portfolio and is ideal for mixed or memory-intensive workloads, whether you are running AI or traditional HPC.
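As a rough illustration of the RAG-style inference workload mentioned above, the sketch below pairs a toy in-memory retrieval step with a placeholder generation call. The corpus, embedding function, and generate_answer call are hypothetical stand-ins, not HPE or NVIDIA software; on a GH200 NVL2 system, much larger embedding tables and model weights would simply reside together in the unified CPU/GPU memory.

```python
import numpy as np

# Hypothetical document corpus and pre-computed embeddings (illustrative only).
documents = [
    "The DL384 Gen12 is a rack mount server for AI inferencing.",
    "GH200 NVL2 links two Grace Hopper superchips over NVLink.",
    "Unified memory lets CPU and GPU share one large address space.",
]
rng = np.random.default_rng(0)
doc_embeddings = rng.normal(size=(len(documents), 128))

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: a real system would call an embedding model on the GPU.
    local = np.random.default_rng(abs(hash(text)) % (2**32))
    return local.normal(size=128)

def retrieve(query: str, k: int = 2) -> list[str]:
    # Cosine similarity between the query and each document embedding.
    q = embed(query)
    sims = doc_embeddings @ q / (
        np.linalg.norm(doc_embeddings, axis=1) * np.linalg.norm(q) + 1e-9
    )
    top = np.argsort(sims)[::-1][:k]
    return [documents[i] for i in top]

def generate_answer(query: str, context: list[str]) -> str:
    # Hypothetical stand-in for an LLM inference call running on the superchip.
    return f"Answer to '{query}' using context: {context}"

query = "What does GH200 NVL2 connect?"
print(generate_answer(query, retrieve(query)))
```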
Building on the HPE ProLiant foundation, the HPE ProLiant Compute DL384 Gen12 offers a consistent experience across the HPE ProLiant portfolio, with HPE iLO management and firmware that provide robust reliability and security through HPE silicon Root of Trust technology.
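As one hedged example of what HPE iLO management looks like in practice, the sketch below reads basic system health over iLO's standard Redfish REST interface. The host address and credentials are placeholders, and the exact resource layout can vary by iLO firmware version.

```python
import requests
import urllib3

# Placeholder address and credentials for the server's iLO management port.
ILO_HOST = "https://ilo.example.local"
AUTH = ("admin", "password")

# Suppress the self-signed-certificate warning; lab use only.
urllib3.disable_warnings()

# Query the standard Redfish system resource exposed by HPE iLO.
resp = requests.get(
    f"{ILO_HOST}/redfish/v1/Systems/1",
    auth=AUTH,
    verify=False,
    timeout=10,
)
resp.raise_for_status()
system = resp.json()

print("Model:        ", system.get("Model"))
print("Power state:  ", system.get("PowerState"))
print("Health status:", system.get("Status", {}).get("Health"))
```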
HPE and NVIDIA are continuing to innovate to help our customers unlock next-gen scale-out accelerated computing for generative AI, with superchip performance for their Enterprise AI Factory.





• The first HPE ProLiant rack mount server with the latest NVIDIA GH200 Grace Hopper™ Superchip
• Support for dual superchips with NVIDIA GH200 NVL2: NVLink between the two GH200 superchips provides twice the memory and performance (see the sketch after this list)
• Support for the latest NVIDIA InfiniBand, Ethernet, and BlueField adapters ensures your AI fabric runs at top performance
• NVIDIA OVX™ certified for Artificial Intelligence workloads
• High-performance LLM inference to maximize data center utilization
• Up to 2X higher inference performance compared to H100
• HPE Silicon Root of Trust offers industry-leading innovation based on the HPE zero trust architecture from edge to cloud, to protect your infrastructure, workloads, and data
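To make the dual-superchip NVLink point above concrete, here is a minimal, hedged PyTorch check that both Hopper GPUs are visible to the host and that peer-to-peer access, which NVLink provides between the two GH200 devices, is enabled. It assumes a CUDA-enabled PyTorch install on the server; the memory figures reported will depend on the configuration.

```python
import torch

# Minimal sketch: confirm both Hopper GPUs are visible and report their memory.
count = torch.cuda.device_count()
print(f"Visible CUDA devices: {count}")

for i in range(count):
    free, total = torch.cuda.mem_get_info(i)
    name = torch.cuda.get_device_name(i)
    print(f"GPU {i}: {name}, {total / 1e9:.0f} GB total, {free / 1e9:.0f} GB free")

if count >= 2:
    # True when the two devices can address each other's memory directly,
    # as they can over the NVLink connection in a GH200 NVL2 pair.
    p2p = torch.cuda.can_device_access_peer(0, 1)
    print(f"Peer access GPU0 <-> GPU1: {p2p}")
```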
Application:
To simplify your experience, configuration of the server has been streamlined significantly. In brief, there are only three core hardware kits: the server itself, a kit with a single GH200 superchip, and a kit with two GH200 superchips. The superchip kits include the enablement items needed to build any supported configuration of the server. A small number of configuration settings then determine whether you configure the spare PCIe slot, the OCP slot, or additional drives. This is summarized in the tables below:

