Cisco takes on Broadcom with new chips for AI supercomputers
- Cisco claims its AI chips, G200 & G202, will be the “most powerful” networking chips fueling AI/ML workloads.
- Cisco said the chips are being tested by five of the six major cloud providers.
Two months after Broadcom Inc released a new chip for wiring together supercomputers for artificial intelligence (AI) work, Cisco Systems has unveiled something similar. The networking giant has launched a series of networking chips tailored to AI supercomputers, offerings positioned to compete with those from Broadcom and even Marvell Technology.
The networking chips, the G200 and G202, were unveiled on June 20, three and a half years after the launch of Cisco Silicon One. In 2019, Cisco made waves when it announced Cisco Silicon One, the company’s foray into the merchant networking silicon business.
To understand the purpose Cisco Silicon One serves: most current internet infrastructure cannot handle the demands placed on it by applications such as VR/AR, AI, 5G, 10G, 16K streaming, adaptive cybersecurity, quantum computing, and more. Cisco Silicon One is the networking giant’s answer to that problem.
Universally adaptable and programmable, it aims to serve service providers and web-scale market segments across both fixed and modular platforms. “We are proud to announce our fourth-generation set of devices, the Cisco Silicon One G200 and Cisco Silicon One G202, which we are sampling to customers now,” Cisco fellow and former principal engineer Rakesh Chopra said in a blog post.
He added that the new devices enhance the Cisco Silicon One lineup, which now spans 3.2 Tbps to 51.2 Tbps with a unified architecture and software development kit, allowing customers to converge their networks without compromise.
Cisco typically launches a new generation every 18 to 24 months, roughly twice the pace of normal silicon development. Without naming them, Cisco also said that chips from its Silicon One series are currently being tested by five of the six major cloud providers.
Cisco, AI and supercomputers
According to data collected by the Synergy Research Group, the four biggest cloud providers in the world are Amazon Web Services, Microsoft Azure, Google Cloud, and Alibaba Cloud. Like Broadcom, Cisco is a major supplier of networking equipment, including Ethernet switches, which connect devices such as computers, laptops, routers, servers, and printers to a local area network.
The rise of AI applications such as OpenAI’s ChatGPT and Alphabet Inc’s Bard presents new challenges for the networks inside data centers. To respond to questions with human-like answers, such systems must be trained on vast amounts of data, a job far too big for any single computer chip.
Instead, the job must be split across thousands of chips called graphics processing units (GPUs), which have to function like one giant computer, working on the job for weeks or even months. That makes the speed at which the individual chips can communicate with one another critical.
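To see why inter-chip communication speed matters so much, here is a rough, illustrative cost model, not drawn from Cisco's or Broadcom's figures, for the "all-reduce" step in which GPUs exchange gradient updates during training. The bandwidth-optimal ring algorithm is assumed, and the model size and link speed below are hypothetical round numbers chosen only for illustration:

```python
def allreduce_seconds(param_bytes: float, link_bytes_per_s: float, n_gpus: int) -> float:
    """Estimate time for one ring all-reduce of param_bytes across n_gpus.

    The ring algorithm moves roughly 2 * (N - 1) / N times the model size
    over each GPU's link, so the per-link bandwidth bounds the whole step.
    This is a simplified model: it ignores latency, congestion, and overlap
    with computation.
    """
    return 2 * (n_gpus - 1) / n_gpus * param_bytes / link_bytes_per_s


# Hypothetical example: a 1-billion-parameter model in 16-bit precision
# (2 GB of gradients) synchronized across 32,000 GPUs, each with a
# 400 Gb/s (50 GB/s) network link.
t = allreduce_seconds(2e9, 50e9, 32_000)
print(f"{t:.4f} s per gradient synchronization")  # ~0.08 s per step
```

Because this cost is paid on every training step, and a run may involve millions of steps, even modest gains in network bandwidth or latency translate directly into shorter, cheaper training jobs, which is the market both Cisco and Broadcom are chasing.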
When Broadcom announced a new chip in April, it said its Jericho3-AI could connect up to 32,000 GPU chips. Cisco’s announcement this week claimed the same: the latest generation of its Ethernet switch chips can connect up to 32,000 GPUs together. Cisco also highlighted that the G200 and G202 double performance compared with the previous generation.
“G200 & G202 are going to be the most powerful networking chips in the market fueling AI/ML workloads enabling the most power-efficient network,” Chopra said. He noted that the chips could help carry out AI and machine-learning tasks with 40% fewer switches and lower latency, all while drawing less power.
Cisco says the G200 and G202 also significantly reduce network costs, power consumption, and latency. Broadcom’s Jericho3-AI chip, meanwhile, is designed to compete with InfiniBand, a rival supercomputer networking technology.
Besides Broadcom, Marvell Technology, which makes networking chips for data centers, is seeing soaring demand for AI products. Marvell was also the first data infrastructure silicon supplier to sample and commercially release industry-leading 112G SerDes, and it has been a leader in data infrastructure products based on TSMC’s 5nm process.
“AI has emerged as a key growth driver for Marvell,” CEO Matt Murphy said last month. He added that while Marvell is still in the early stages of its ramp-up in AI production, “we are forecasting AI revenue in fiscal 2024 to at least double from the prior year and continue to proliferate in the coming years.”