AMD vs NVIDIA – who stood out more at SC22?

  • 101 supercomputers on the latest Top 500 ranking were powered by AMD EPYC processors and AMD Instinct accelerators, a 38% increase from the previous year
  • NVIDIA announced the widespread deployment of its next-generation H100 Tensor Core GPUs and Quantum-2 InfiniBand

The rivalry between AMD and NVIDIA is well known to anyone who follows the tech industry closely. These two behemoths have dominated the chip business for decades.

Some might contend that the contest between the two giants resembles a battle of wills: one would excel in the CPU department while the other pulled ahead in GPUs. One thing is for sure; they have competed head-to-head, pushing one another to develop some of the most ground-breaking technologies ever made.

During the International Conference for High Performance Computing, Networking, Storage and Analysis (SC22), both AMD and NVIDIA made some exciting announcements relating to AI, CPUs, and HPC.

NVIDIA continues to innovate in the GPU industry

First, let’s look at what NVIDIA revealed at the conference. The company announced the widespread deployment of its next-generation H100 Tensor Core GPUs and Quantum-2 InfiniBand, including new services on the Microsoft Azure cloud and more than 50 new partner systems aimed at speeding up scientific discovery.

In addition to announcing support for its Omniverse simulation platform on NVIDIA A100 and H100-powered systems, the company also delivered significant improvements to its cuQuantum, CUDA, and BlueField DOCA acceleration libraries at SC22.

The H100, Quantum-2, and library updates are all a part of NVIDIA’s HPC platform, a complete technology stack that includes CPUs, GPUs, DPUs, systems, networking, and a wide variety of AI and HPC software. This platform enables researchers to effectively accelerate their work on powerful systems, on-site or in the cloud.
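In practice, researchers mostly touch this stack through its libraries. As a rough illustration of the offload pattern those CUDA libraries enable, the sketch below uses CuPy, which is my own choice of example and not something named at SC22; it assumes a CUDA-capable GPU and an installed cupy package.

```python
# A rough sketch of the GPU-offload pattern NVIDIA's CUDA libraries enable.
# CuPy is an illustrative choice, not something named in the announcement;
# it assumes a CUDA-capable GPU and the cupy package is installed.
import cupy as cp
import numpy as np

a = cp.random.rand(4096, 4096)        # arrays are allocated in GPU memory
b = cp.random.rand(4096, 4096)

c = a @ b                             # the matmul dispatches to cuBLAS on the GPU
host_c = cp.asnumpy(c)                # copy the result back to a NumPy array on the host

print(type(host_c), host_c.shape)     # <class 'numpy.ndarray'> (4096, 4096)
assert isinstance(host_c, np.ndarray)
```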

NVIDIA partners introducing H100-powered servers in various configurations include ASUS, Atos, Dell Technologies, GIGABYTE, Hewlett Packard Enterprise, INGRASYS, Lenovo, Penguin Solutions, QCT, and Supermicro.

Given its position as the market leader for GPUs, NVIDIA is expected to continually make investments and introduce new products.

AMD as the leader in CPUs

AMD, meanwhile, demonstrated its ongoing success and dominant position within the high performance computing (HPC) industry. AMD EPYC CPUs and AMD Instinct accelerators remain the processors of choice for the most demanding HPC workloads, powering some of the most intricate simulations and modelling tools.

AMD is a leader in performance and efficiency innovation, as shown by the most recent Top500 list. Compared with the November 2021 list, which included 73 supercomputers powered by AMD, the latest list features 101, a 38% increase. With 1.1 exaflops, the Frontier supercomputer at Oak Ridge National Laboratory (ORNL), built on AMD processors and accelerators, continues to lead the Top500 list.

AMD continues to build on its record of introducing products that set new performance benchmarks. The company’s most recent partnerships have also helped advance the HPC sector and show how widely AMD CPUs and accelerators are being adopted.

  • AMD introduced the 4th Gen AMD EPYC processors, which feature up to 96 cores, 12 channels of DDR5 memory, and 384MB of L3 cache. The newest EPYC processors are designed to deliver the performance demanding HPC workloads require (a quick node-topology check for parts like these is sketched after this list).
  • Microsoft released a preview of new virtual machines (VMs) for high performance computing. 4th Gen AMD EPYC CPUs power both the brand-new HX-series and HBv4-series virtual machines.
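As promised above, here is a minimal sketch of my own (not from AMD’s announcement) of the kind of topology check an HPC user might run before sizing or pinning jobs on a high-core-count node such as a 96-core EPYC socket. It assumes a Linux system exposing sysfs.

```python
# My own minimal sketch of a quick node-topology check before sizing HPC jobs
# on a high-core-count part such as a 96-core EPYC socket. Assumes Linux sysfs.
import os
from pathlib import Path

logical_cpus = os.cpu_count()  # logical CPUs visible to the OS (cores x SMT threads)
numa_nodes = len(list(Path("/sys/devices/system/node").glob("node[0-9]*")))

print(f"logical CPUs: {logical_cpus}")
print(f"NUMA nodes:   {numa_nodes}")
if numa_nodes:
    print(f"~logical CPUs per NUMA node: {logical_cpus // numa_nodes}")
```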

How AMD and NVIDIA leverage AI for the future of the tech ecosystem

The ROCm open software ecosystem, which supports AMD accelerators, helps drive scientific discovery by integrating with environments across multiple vendors and architectures. At SC22, AMD announced an expansion of the AMD Instinct and ROCm ecosystem, bringing exascale-class technology to a broader range of HPC and AI customers.

Additionally, AMD officially became a founding member of the PyTorch Foundation, which Meta AI established. The organization, which operates under the non-profit Linux Foundation, will promote and support an ecosystem of open-source projects to accelerate the adoption of AI technologies. Finally, Meta AI created and released AITemplate (AIT), a unified inference system that can be accelerated with AMD Instinct accelerators. On a range of popular AI models, AIT delivers close to hardware-native matrix-core performance.
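To make the ROCm-plus-PyTorch point concrete, here is a minimal, device-agnostic sketch of my own (not AMD’s or Meta’s code). On ROCm builds of PyTorch, AMD Instinct GPUs are exposed through the same torch.cuda interface, so the script runs unchanged on either vendor’s hardware; the tensor shapes are arbitrary.

```python
# A minimal, device-agnostic PyTorch sketch (illustrative only). On ROCm builds
# of PyTorch, AMD Instinct GPUs surface through the torch.cuda interface, so the
# same code path covers both CUDA and ROCm systems.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"running on: {device}")

x = torch.randn(2048, 2048, device=device)   # arbitrary shapes, for illustration
w = torch.randn(2048, 2048, device=device)
y = torch.relu(x @ w)                        # matmul + activation on the chosen device

print(y.shape, y.device)
```

The fact that the same device string works whether the backend is CUDA or ROCm is part of what makes AMD’s seat in the PyTorch Foundation relevant to Instinct users.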

On the other hand, NVIDIA stated that NVIDIA Omniverse, an open platform for building and operating metaverse applications, now integrates with leading scientific computing visualization tools and supports additional batch-rendering workloads on systems powered by NVIDIA A100 and H100 Tensor Core GPUs.

NVIDIA also unveiled fully real-time scientific and industrial digital twins for HPC, made possible by NVIDIA OVX™, a computing system built to support massive Omniverse digital twins, and Omniverse Cloud, a software- and infrastructure-as-a-service offering.

Furthermore, Omniverse now allows researchers, scientists, and engineers working in AI and HPC to perform batch workloads on their A100 or H100 systems, such as rendering videos and images or generating synthetic 3D data.

Despite their differences, the two companies will keep pushing each other to innovate. Both businesses are currently profitable, with AMD devoting a significant portion of its effort to CPUs while NVIDIA dominates the GPU market. It will be interesting to see how these announcements benefit the two heading into 2023.