Scale to accommodate growing AI workloads
Leverage breakthrough innovation with fully integrated solutions from the hardware to the software layer to take on workloads of any size and scope.
Accelerate AI initiatives
Deploy AI at scale with a solution optimized for workloads in Natural Language Processing (NLP), Large Language Model (LLM) training, and multimodal training.
Meet evolving business demands with optimum flexibility
Improve power efficiency with a direct liquid cooling option and create a flexible environment with a broad range of supported technologies including accelerators, storage and networking.
Boost AI performance with the #1 server for natural language processing
HPE Cray XD670 is #1 in NLP and a top performer across all the MLPerf Inference v4.0 benchmark models where it participated, including GenAI, Computer Vision, and LLMs.
5 reasons to choose HPE Cray XD670
Organizations can greatly benefit from the scale and power of a system designed and optimized for heavily parallelized AI workloads that require GPU acceleration for optimum performance.
Take the next steps
Ready to get started? Explore purchasing options or engage with HPE experts to determine the best solution for your business needs.
More ways to explore
Unlock AI
Simplify AI complexity, accelerate productivity, and get pilots to production faster.
HPE Supercomputing
Empower world-changing innovation and discovery in the Exascale era and beyond, with faster time-to-results and accelerated AI.
Machine Learning Development Environment Software
Easily implement and train Machine Learning models by removing complexities, optimizing cost, and accelerating innovation.
Learn more about the HPE GreenLake cloud experience
All HPE GreenLake cloud services are accessed through a unified control plane that delivers a consistent, open and extensible cloud operating experience for all your services and users, wherever the workloads and data are located.
Technical specifications
Chassis
- 5U chassis system
- Single 2x CPU node
GPU acceleration – Option A
- 8x NVIDIA H200 SXM 700W TDP GPUs with 141 GB HBM each
- 2x 5th Gen Intel Xeon Scalable processors, up to 400W TDP
- 32x DIMMs DDR5, 8-channel memory per socket, up to 5600 MHz
GPU acceleration – Option B
- 8x NVIDIA H100 SXM 700W TDP GPUs with 80 GB HBM each
- 2x 4th Gen Intel Xeon Scalable processors, up to 300W TDP
- 32x DIMMs DDR5, 8-channel memory per socket, up to 4800 MHz