AMD Instinct™ MI350 Series GPUs, built on the new 4th Gen AMD CDNA™ architecture, combine powerful, energy-efficient compute cores with up to 288 GB of HBM3E memory and 8 TB/s of memory bandwidth. By supporting the next-generation FP6 and FP4 data types (including structured matrix sparsity), they accelerate AI inference and training, setting new benchmarks for performance, efficiency, and scalability in Generative AI and High-Performance Computing (HPC).
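To make the FP4 support concrete, the sketch below enumerates every value a 4-bit float can represent, assuming the OCP Microscaling (MX) E2M1 layout commonly associated with hardware FP4 (1 sign bit, 2 exponent bits, 1 mantissa bit, exponent bias 1, no infinities or NaNs); this is an illustrative decoder, not AMD's implementation.

```python
def decode_fp4_e2m1(code: int) -> float:
    """Decode a 4-bit FP4 code, assuming the OCP MX E2M1 layout:
    1 sign bit, 2 exponent bits, 1 mantissa bit, bias 1, no inf/NaN."""
    sign = -1.0 if (code >> 3) & 1 else 1.0
    exp = (code >> 1) & 0b11      # 2-bit biased exponent
    man = code & 1                # single mantissa bit
    if exp == 0:                  # subnormal: 0.m * 2^(1 - bias)
        return sign * man * 0.5
    return sign * (1.0 + man * 0.5) * 2.0 ** (exp - 1)

# All 16 codes collapse to 15 distinct values (+0 and -0 coincide),
# spanning -6.0 .. 6.0 -- a tiny dynamic range that works in practice
# because MX formats pair each block of values with a shared scale.
values = sorted({decode_fp4_e2m1(c) for c in range(16)})
print(values)
```

The narrow range illustrates why block scaling matters: quantizing tensors to FP4 halves memory traffic versus FP8, which is one way low-precision data types translate into the inference throughput gains described above.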
The AMD Instinct MI350 Series comprises the Instinct MI350X and MI355X GPUs and Instinct platforms, delivering a 4x increase in AI compute capability and a 35x leap in inference performance over the previous generation to help industries deploy transformative AI solutions. The MI355X OAM 288GB delivers up to 2.1x the AI performance, 2.1x the HPC performance, and 1.6x the memory capacity of the NVIDIA B200 SXM5 180GB. The MI355X also offers significant price-performance gains, generating up to 40% more tokens per dollar than competing solutions.
AMD Instinct MI350 Series Platforms integrate 8 fully interconnected MI355X or MI350X GPU OAM modules onto an industry-standard OCP design via 4th Gen AMD Infinity Fabric™ technology, with an industry-leading 2.3 TB of HBM3E memory capacity for high-throughput AI processing. These ready-to-deploy platforms support a range of systems, from standard air-cooled UBB-based servers to ultra-dense Direct Liquid Cooled (DLC) platforms, helping to accelerate time-to-market and reduce development costs when integrating AMD Instinct MI350 Series GPUs into existing AI rack and server infrastructure.
Leading hyperscalers, model builders, and AI companies worldwide, including Meta, OpenAI, Microsoft, Oracle Cloud Infrastructure (OCI), and xAI, have widely deployed AMD Instinct GPUs.