AI models are growing increasingly complex, demanding unprecedented levels of compute power. NVIDIA DGX-2 packs 16 of the world’s most powerful GPUs to accelerate new AI model types that were previously untrainable. Its groundbreaking GPU scalability lets you train 4X bigger models on a single node with 10X the performance of an 8-GPU system.
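The "4X bigger models" claim follows directly from aggregate GPU memory. A minimal sketch of the arithmetic, assuming model size is bounded by total GPU memory and comparing against a typical 8-GPU system with 16 GB V100s:

```python
# Aggregate GPU memory comparison (published capacities, not measured results)
dgx2_memory_gb = 16 * 32   # DGX-2: 16 x Tesla V100 32GB
baseline_memory_gb = 8 * 16  # typical 8-GPU system with 16GB V100s

print(dgx2_memory_gb)                      # 512
print(dgx2_memory_gb / baseline_memory_gb)  # 4.0 -> "4X bigger models"
```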
NVIDIA DGX-2 is now available in two models, including the new, enhanced DGX-2H, engineered specifically for maximum performance on the most demanding applications. Learn how DGX-2H is the compute building block of the DGX-2 POD, the first AI supercomputing infrastructure to achieve TOP500 performance.
AI SCALE ON A WHOLE NEW LEVEL
DGX-2 delivers a ready-to-go solution for rapidly scaling up AI. Flexible networking options support building the largest deep learning compute clusters, while virtualization speeds scaling and improves user and workload isolation in shared infrastructure environments. With an accelerated deployment model and an architecture purpose-built for easy scaling, your team can spend more time driving insights and less time building infrastructure.
A REVOLUTIONARY AI NETWORK FABRIC
With DGX-2, model complexity and size are no longer constrained by the limits of traditional architectures. Now, you can take advantage of model-parallel training with the NVIDIA NVSwitch networking fabric. It’s the innovative technology behind the world’s first 2-petaFLOPS GPU accelerator with 2.4 TB/s of bisection bandwidth, delivering a 24X increase over prior generations.
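The headline "2-petaFLOPS GPU accelerator" figure can be sanity-checked from per-GPU peaks. A quick sketch using the published Tesla V100 peak tensor throughput of 125 TFLOPS per GPU (a spec-sheet figure, not a measured result):

```python
# Back-of-the-envelope check of the headline system figures
num_gpus = 16
tensor_tflops_per_gpu = 125  # Tesla V100 peak tensor-core throughput
hbm2_per_gpu_gb = 32

total_pflops = num_gpus * tensor_tflops_per_gpu / 1000
total_hbm2_gb = num_gpus * hbm2_per_gpu_gb

print(total_pflops)   # 2.0 -> the "2-petaFLOPS GPU accelerator"
print(total_hbm2_gb)  # 512 -> 0.5 TB of high-bandwidth memory
```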
ACCESS TO AI EXPERTISE
DGX-2 is purpose-built for reliability, availability, and serviceability (RAS) to reduce unplanned downtime, streamline serviceability, and maintain operational continuity. With NVIDIA® DGX-2™, you get access to NVIDIA’s AI expertise that can jump-start your work for faster insights.
The New Standard in Backtesting
NVIDIA DGX-2 with accelerated Python processed 20 million trading simulations and set a new standard in the latest STAC-A3 backtesting benchmark report.
Raising the Bar for AI Infrastructure
NVIDIA DGX Systems set eight AI performance records in MLPerf 0.6¹—a set of benchmarks that enable the machine learning (ML) field to measure training performance across a diverse set of usages.
Shattering World Records
Learn how the world’s 22nd fastest supercomputer, the NVIDIA DGX SuperPOD – built with DGX systems, is being used to accelerate autonomous vehicle development.
16X FULLY CONNECTED TESLA V100 32GB
0.5 TB total high-bandwidth memory for more complex deep learning models
DGX-2
CPUs: 2 x Intel Xeon Platinum
GPUs: 16 x NVIDIA Tesla V100 32GB HBM2
System Memory: Up to 1.5 TB DDR4
GPU Memory: 512 GB HBM2 (16 x 32 GB)
Storage: 30 TB NVMe, upgradable to 60 TB
Networking: 8 x InfiniBand or 8 x 100 GbE
Power: 10 kW
Weight: 350 lbs
GPU Throughput: Tensor: 1920 TFLOPS | FP16: 480 TFLOPS | FP32: 240 TFLOPS | FP64: 120 TFLOPS
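The throughput totals above can be divided across the 16 GPUs to recover per-GPU figures. A sketch using the table's rounded marketing numbers (actual V100 spec-sheet peaks differ slightly):

```python
# Per-GPU throughput implied by dividing the system totals across 16 GPUs
totals_tflops = {"Tensor": 1920, "FP16": 480, "FP32": 240, "FP64": 120}
per_gpu_tflops = {mode: total / 16 for mode, total in totals_tflops.items()}

print(per_gpu_tflops)
# {'Tensor': 120.0, 'FP16': 30.0, 'FP32': 15.0, 'FP64': 7.5}
```

Note the 2:1 ratios between FP16, FP32, and FP64, consistent with the V100 architecture's halving of throughput at each higher precision.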