Sony and Microsoft develop AI imaging chip – what it means for industry
- Sony & Microsoft teamed up to integrate AI into Sony imaging chips for the first time ever
- AI-powered smart camera solutions will make it easier for their enterprise customers to perform video analytics
- Advances could pave the way for smart camera applications in other sectors, including AR & self-driving cars
Sony Corp and Microsoft Corp have collaborated to integrate artificial intelligence (AI) capabilities into the Japanese firm's new imaging chip, a major improvement to a camera product that the electronics giant describes as a world first.
The big benefit of the new chip, the IMX500, is that it has its own built-in processor and memory, allowing it to analyze video using AI software like Microsoft’s Azure, but running in a self-contained system that is quicker, easier and safer than current methods.
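To make the contrast concrete, here is a minimal sketch of the two pipelines, not Sony's actual firmware; all names, sizes, and the stand-in classifier are invented for illustration. The point is that with on-sensor inference, only compact metadata crosses the chip boundary, whereas the conventional flow ships entire frames to a host for analysis:

```python
from dataclasses import dataclass

# Hypothetical sizes, for illustration only.
FRAME_BYTES = 12_000_000      # one ~12 MP raw frame
METADATA_BYTES = 64           # roughly a label plus a confidence score

@dataclass
class Detection:
    label: str
    confidence: float

def on_sensor_pipeline(frame: bytes) -> Detection:
    """IMX500-style flow: the frame is analyzed by logic co-located
    with the sensor, and only the result leaves the chip."""
    # Stand-in for the on-chip neural network.
    return Detection(label="person", confidence=0.93)

def traditional_pipeline(frame: bytes) -> bytes:
    """Conventional flow: the full frame is transferred off the sensor
    so a host CPU/GPU can analyze it."""
    return frame  # everything crosses the sensor boundary

frame = bytes(FRAME_BYTES)
print("on-sensor output per frame:", METADATA_BYTES, "bytes")
print("traditional output per frame:", len(traditional_pipeline(frame)), "bytes")
```

The bandwidth gap between the two outputs is what makes the self-contained design quicker and, because raw imagery need not leave the sensor, safer.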
Both Sony and Microsoft believe that AI-powered smart camera solutions will make it easier for their enterprise customers to perform video analytics.
“Video analytics and smart cameras can drive better business insights and outcomes across a wide range of scenarios for businesses,” said Takeshi Numoto, corporate vice president and commercial chief marketing officer at Microsoft.
“Through this partnership, we’re combining Microsoft’s expertise in providing trusted, enterprise-grade AI and analytics solutions with Sony’s established leadership in the imaging sensors market to help uncover new opportunities for our mutual customers and partners.”
AI-powered smart camera solutions
The AI-powered Sony camera can capture high-resolution video at up to 30 frames per second while performing AI processing on it simultaneously.
Both companies intend to build a smart camera managed app, powered by Azure Internet of Things (IoT) and cognitive services, which can be used in parallel with the IMX500 to enable video analytics use cases for enterprise clientele.
Independent software vendors focused on computer vision/video analytics solutions and smart camera OEMs will be able to use the app to create their own industry-specific or custom video analytics and computer vision solutions.
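An app like this would typically receive per-frame detection results as small telemetry messages rather than raw video. The following sketch shows what an ISV-defined payload might look like; the schema, field names, and camera ID are all invented for illustration, not the actual app's format:

```python
import json
from datetime import datetime, timezone

def build_telemetry(camera_id: str, detections: list) -> str:
    """Pack on-sensor detection results into a compact JSON message
    suitable for an IoT telemetry channel (labels and confidences,
    never pixel data)."""
    return json.dumps({
        "cameraId": camera_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "detections": detections,
    })

# Hypothetical usage: one detection reported by the sensor.
msg = build_telemetry("dock-cam-01", [{"label": "forklift", "confidence": 0.88}])
print(msg)
```

With Azure's IoT device SDK for Python, a string like this would be wrapped in a `Message` and sent via `IoTHubDeviceClient.send_message`, after which cloud-side analytics services could aggregate it.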
This new smart camera can also deliver more privacy-conscious monitoring at a time when public surveillance is rising as authorities try to rein in the spread of the novel coronavirus.
Apple Inc with its Face ID biometric authentication, powered by the custom-designed Neural Engine processor of the iPhone, has already demonstrated the plausibility of integrating AI and imagery to create more secure systems. Huawei Technologies Co and Google have both dedicated AI silicon to assist with image processing in their smartphones.
“We are aware many companies are developing AI chips and it’s not like we try to make our AI chip better than others,” commented Hideki Somemiya, senior general manager of Sony’s System Solutions group.
“Our focus is on how we can distribute AI computing across the system, taking cost and efficiency into consideration. Edge computing is a trend, and in that respect, ours is the ‘edge of the edge’.”
The advance made by Sony is to eliminate the need for data transfer within the device itself. Apple and Google still use traditional image sensors that translate light into computer-readable image formats for their AI chips to read; the IMX500, by contrast, can do the analytical work without any data leaving its physical boundaries.
AR & smart car applications
The AI-capable system will also help to advance applications of augmented reality. The two US tech giants, whose iOS and Android operating systems dominate the mobile industry, are investing heavily in AR: Google Maps now offers 3-D directions overlaid on a user's live camera feed, while Apple is planning new 3-D cameras for its next iPhones.
Sony’s AI sensor could hasten the adoption of smart-car technology without the need for a “cloud brain,” as some existing systems require.
Shinpei Kato, founder and chief technology officer of self-driving company Tier IV Inc, said, “This on-chip approach enables a system design to be more flexible and even optimized, given that the cost of image processing, which is one of the most compute-intensive tasks for autonomous driving, can be offloaded from an electronic control unit.”