AMD and PyTorch Foundation to promote adoption of AI and ML capabilities


Can open source enable better adoption of AI and ML capabilities?

  • AMD will provide expertise with a variety of compute engine capabilities and open software for accelerated workloads
  • Open software is essential to advance HPC, AI, and ML research

Artificial intelligence (AI) and its subset, machine learning (ML), are the engines driving today’s digitalization. These technologies sit at the core of Industry 4.0, bringing humans and robots together and enhancing the ability of each to revolutionize economies and people’s lives.

China and Japan are currently driving AI adoption in the Asian market. Both countries are among the biggest manufacturers of industrial robots and are also commercializing AI research. China’s huge market is open to rapid change and can therefore absorb new AI advancements without negatively impacting the country’s economy. In Southeast Asia, Singapore is leading AI adoption.

In essence, this means that more tech giants will experiment with and deliver cutting-edge technologies to the region, including AMD, which is no stranger to it. This year, the company has made a number of significant announcements, including expanding its engineering expertise, semiconductor manufacturing facilities, and R&D footprint.

Now, AMD has become a founding member of the newly established PyTorch Foundation. The foundation, part of the nonprofit Linux Foundation, will promote and sustain an ecosystem of open source projects built on PyTorch, the ML software framework originally developed and promoted by Meta, and will use it to drive the adoption of AI tooling.

As a founding member, AMD joins other industry leaders in prioritizing the growth of the active community around PyTorch. AMD will assist the PyTorch Foundation by working to democratize cutting-edge tools, libraries, and other components, making these ML innovations accessible to everyone. It will do so on the strength of innovations such as the AMD ROCm open software platform, AMD Instinct accelerators, adaptive SoCs, and CPUs.

It all starts with open source

Open software is essential to advancing high performance computing (HPC), AI, and ML research, according to Brad McCredie, AMD’s corporate vice president of Data Center and Accelerated Processing. AMD is prepared to contribute its expertise with open software platforms and innovation to the PyTorch Foundation.

“AMD Instinct accelerators and ROCm software power important HPC and ML sites around the world, from exascale supercomputers at research labs to major cloud deployments showcasing the convergence of HPC and AI/ML. Together with other foundation members, we will support the acceleration of science and research that can make a dramatic impact on the world,” commented McCredie.

Santosh Janardhan, VP of Infrastructure at Meta, expressed his excitement at AMD’s decision to join the PyTorch Foundation and to give its members access to AMD’s broad range of HPC, AI, and ML capabilities.

“AMD has continued to support PyTorch with its integration on ROCm open software platform and has worked extensively with the open-source community and other foundation members to advance performance of ML and AI workloads. The collaborative support offered by AMD continues our engagement across broad industry initiatives for global impact,” he said.

AMD’s approach in advancing AI and ML

With its extensive product and software portfolio, AMD is in a unique position to support clients and partners in developing and deploying applications that use various kinds of AI across business, cloud, and endpoint environments. AMD can support a wide range of pervasive AI and ML models, from small edge devices to huge scale-out training and inference workloads, thanks to a diversified mix of hardware that includes AMD Instinct and Alveo accelerators, adaptive SoCs, and CPUs.

Additionally, AMD collaborates closely with the open AI community to advance and expand machine and deep learning capabilities. Vitis AI provides an inference development platform for AMD adaptive SoCs and Alveo data center accelerators. It hooks into popular software developer tools and makes use of a rich collection of optimized, open-source libraries, letting developers add machine learning acceleration to their software code.
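
For a sense of what that inference flow looks like in practice, the sketch below is loosely based on the publicly documented VART (Vitis AI Runtime) Python examples. The xmodel path, the int8 dummy input, and the DPU target are illustrative assumptions, not AMD-specified values.

```python
import numpy as np
import xir   # Vitis AI graph IR bindings
import vart  # Vitis AI Runtime (VART)

# Hypothetical path to a model compiled by the Vitis AI compiler for a DPU target.
XMODEL_PATH = "resnet50.xmodel"

# Load the compiled graph and select the subgraph that runs on the DPU accelerator.
graph = xir.Graph.deserialize(XMODEL_PATH)
subgraphs = graph.get_root_subgraph().toposort_child_subgraph()
dpu_subgraph = next(
    s for s in subgraphs
    if s.has_attr("device") and s.get_attr("device").upper() == "DPU"
)

# Create a runner and query the expected input/output tensor shapes.
runner = vart.Runner.create_runner(dpu_subgraph, "run")
in_tensor = runner.get_input_tensors()[0]
out_tensor = runner.get_output_tensors()[0]

# Feed a dummy batch; real code would preprocess images into the quantized
# int8 layout the compiled model expects (int8 is an assumption here).
input_data = [np.zeros(tuple(in_tensor.dims), dtype=np.int8)]
output_data = [np.zeros(tuple(out_tensor.dims), dtype=np.int8)]
job_id = runner.execute_async(input_data, output_data)
runner.wait(job_id)
print("output shape:", output_data[0].shape)
```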

The ROCm open source platform is constantly evolving to meet the demands of the AI/ML and HPC community. With the most recent ROCm 5.0 release, developers gain access to pre-built AI framework containers on AMD Infinity Hub, cutting-edge tools, simplified installation, and improved kernel launch and application performance. AMD ROCm support also moved from beta to stable in the most recent PyTorch 1.12 release.
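
Because the ROCm build of PyTorch reuses the familiar torch.cuda device interface, existing GPU code generally runs unchanged on AMD Instinct accelerators. The snippet below is a minimal sketch of checking for a ROCm-backed device and running a toy forward pass; the layer and batch sizes are arbitrary.

```python
import torch

# On a ROCm build of PyTorch (1.12+), the usual CUDA API surface is reused,
# so the same device-selection code works on AMD Instinct accelerators.
if torch.cuda.is_available():
    # torch.version.hip is set on ROCm builds; it is None on CUDA builds.
    print("HIP/ROCm version:", torch.version.hip)
    device = torch.device("cuda")
else:
    device = torch.device("cpu")

# Toy workload: move a small model and a batch to the device and run a forward pass.
model = torch.nn.Linear(128, 10).to(device)
batch = torch.randn(32, 128, device=device)
logits = model(batch)
print(logits.shape)  # torch.Size([32, 10])
```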