IBM’s now serving chips for AI
INSTANCES of artificial intelligence (AI), machine learning (ML), and deep learning are appearing across all sorts of enterprise service offerings. While there is a certain amount of bandwagon-jumping and overuse of the terms to grab headlines, machine-learning implementations are fast becoming the norm.
Combined with a rise in the numbers of massive public networks of computing power (hyperscale data centers) offering everything-as-a-service (XaaS) from the cloud, it’s no surprise that the big enterprise-level server vendors are responding with AI-centric technologies.
IBM's new chip is optimized for the particular demands of AI computation: in tests, it runs workloads on common AI frameworks such as Chainer and TensorFlow at four times the speed of existing systems. It is also claimed to have a positive effect on workloads in "accelerated databases" such as Kinetica – that is, databases which use graphics processing units (GPUs) to speed up query processing.
The chips and servers will appear first in the IBM cloud, and will also power new supercomputers currently under construction at Oak Ridge and Lawrence Livermore for the US Department of Energy, at a cost of over US$300 million.
The increase in computational power is matched by a redesigned system bus intended to improve I/O and bandwidth, especially between the processor and the GPUs so vital to AI computation. Those GPUs, primarily from NVIDIA, can be addressed more quickly through the latest iterations of NVIDIA's NVLink and OpenCAPI, which IBM says accelerate data movement by up to ten times.
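To see why that interconnect speed-up matters, a rough back-of-envelope comparison helps: moving a single large tensor between host memory and a GPU is often the bottleneck, not the GPU compute itself. The sketch below uses approximate published peak rates (PCIe 3.0 x16 at roughly 16 GB/s versus NVLink 2.0 at roughly 150 GB/s aggregate on POWER-class systems); these figures are illustrative assumptions, not measurements from the article.

```python
# Back-of-envelope: time to move a batch of tensor data from host to GPU.
# Bandwidth figures are approximate peak rates, assumed for illustration.

def transfer_ms(bytes_moved: float, gb_per_s: float) -> float:
    """Milliseconds to move bytes_moved at gb_per_s gigabytes per second."""
    return bytes_moved / (gb_per_s * 1e9) * 1e3

batch = 256 * 1024 * 1024  # a 256 MB tensor of activations or weights

pcie3_x16 = 16.0   # ~16 GB/s peak, PCIe 3.0 x16 (assumed)
nvlink2 = 150.0    # ~150 GB/s aggregate CPU<->GPU over NVLink 2.0 (assumed)

print(f"PCIe 3.0 x16: {transfer_ms(batch, pcie3_x16):.2f} ms")
print(f"NVLink 2.0:   {transfer_ms(batch, nvlink2):.2f} ms")
print(f"speed-up:     {nvlink2 / pcie3_x16:.1f}x")
```

With these assumed figures the per-transfer speed-up works out to roughly 9x, which is consistent with the "up to ten times" claim for data movement.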
IBM’s new product offerings will allow it to better compete in the so-called hyperscale data center market, and the products will no doubt percolate down both into the smaller cloud providers’ server racks and into the private enterprise cloud, as customers of cloud providers demand better and more powerful ML processing.
The increasing demand for AI computation should put IBM in a strong position financially, as data scientists look to cut computation times by large factors. IBM has a solid track record in this field; its distributed deep learning library, announced in August, already promises massive speed gains.