
AWS Panorama: bringing computer vision to the business masses?

  • Amazon has launched new hardware and an SDK that can turn any networked camera into a computer vision device
  • Users can deploy machine learning models to identify queue lengths in retail, or detect safety infringements in heavy industries
  • The technology raises questions around privacy, setting a precedent for automated workplace surveillance

Amazon’s cloud arm has launched a new hardware device, AWS Panorama, which, when used with the accompanying SDK (software development kit), can add computer vision capabilities to any existing on-premises IP camera.

AWS is pitching the “dust proof and water resistant” hardware at industries such as manufacturing and retail. The company says that with AWS Panorama, customers can automate tasks that have traditionally required human inspection.

“For example, you can use AWS Panorama to evaluate manufacturing quality, identify bottlenecks in industrial processes, and monitor workplace safety and security — even in environments with limited or no internet connectivity,” reads a launch announcement.

Customers need only plug the hardware in and connect it to their network, and the device will automatically discover the existing fleet of cameras on that network. Users can then build computer vision models using SageMaker, the AWS machine learning platform, or enable production-ready applications, such as PPE detection, retail queue-length monitoring, or crowd counting, developed by AWS or third-party creators.

Models can then be run on the device (at the edge) to deliver real-time predictions in remote and isolated places where network connectivity can be slow, expensive, or intermittent.
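AWS has not published the application code behind these examples, but the general shape of an edge inference loop can be sketched with generic, off-the-shelf tools. The snippet below is a minimal illustration of queue-length counting over a single IP camera feed using OpenCV's bundled pedestrian detector; it does not use the Panorama SDK, and the camera URL and alert threshold are hypothetical placeholders.

```python
# Illustrative sketch only: generic person counting over an RTSP camera feed.
# This is NOT the AWS Panorama SDK; URL and threshold are placeholders.
import cv2

CAMERA_URL = "rtsp://192.168.1.20/stream"  # hypothetical on-premises IP camera
QUEUE_ALERT_THRESHOLD = 8                  # hypothetical "queue too long" limit

# Pedestrian detector that ships with OpenCV (no custom model required)
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(CAMERA_URL)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break  # stream dropped; a production edge app would reconnect

    # Each returned box is one person currently in view
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    count = len(boxes)

    # Processing stays on the device: only the derived count (not the video)
    # would ever need to leave the premises
    if count >= QUEUE_ALERT_THRESHOLD:
        print(f"Queue length {count}: consider opening another checkout")

cap.release()
```

In a real deployment, a purpose-built model (for instance, one trained in SageMaker) would replace the stock detector, but the structure of the loop stays the same: pull a frame, run inference locally, act on the result.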

“You can analyze video feeds from multiple cameras in parallel, generating highly accurate predictions within milliseconds,” AWS said, adding that because the data is processed locally, images remain on-site.
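As a rough illustration of that parallelism, the same local analysis loop could be fanned out across several feeds on one device, for example with one worker thread per camera. Again, this is a generic sketch rather than Panorama code, and the camera URLs are placeholders.

```python
# Illustrative sketch: run the local detection loop for several cameras in
# parallel on one edge device. Not Panorama-specific; URLs are placeholders.
import threading
import cv2

CAMERA_URLS = [
    "rtsp://192.168.1.20/stream",  # hypothetical cameras already on the network
    "rtsp://192.168.1.21/stream",
    "rtsp://192.168.1.22/stream",
]

def analyse(url: str) -> None:
    """Local detection loop for one camera; frames never leave the device."""
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    cap = cv2.VideoCapture(url)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
        print(f"{url}: {len(boxes)} people in view")
    cap.release()

# One worker thread per camera, all running on the same edge device
threads = [threading.Thread(target=analyse, args=(url,), daemon=True) for url in CAMERA_URLS]
for t in threads:
    t.start()
for t in threads:
    t.join()
```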

Computer vision is an increasingly widely used branch of machine learning, enabling cutting-edge robotics, including autonomous vehicles and drones, to ‘see’ and act on their surroundings.

It’s also widely used on manufacturing production lines in quality control, with 3D-enabled computer vision systems able to identify the smallest defect in a component that the human eye might miss. 

In the pharmaceutical industry, computer vision has been used to detect and analyze bacterial growth in Petri dishes containing samples of vaccines in production. This is proving a more accurate and effective alternative to human inspection for detecting production problems, and can ultimately bring medicines and vaccines into circulation faster.

In surveillance, the use of computer vision technology can be much more unnerving, even controversial. A number of tech giants, including Amazon itself and IBM, have publicly backed away from offering their technology to law enforcement, owing to the potential for bias in datasets to lead to discriminatory policing.

But computer vision used in surveillance systems isn’t just about spying on citizens or holding criminals to account. Construction sites can employ the technology to identify when workers or vehicles stray into dangerous, off-limits zones, sounding an alert to the individual on the ground, or even sending a signal that shuts off machinery, as sketched below.
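At its simplest, such a check is a point-in-polygon test on each detected person. The sketch below is hypothetical: the zone coordinates and the response hook are placeholders, and a real deployment would tie the alert into site-specific systems.

```python
# Illustrative sketch: flag a detected worker whose position falls inside a
# restricted zone. Zone coordinates and the response hook are hypothetical.
import numpy as np
import cv2

# Hypothetical exclusion zone, given as pixel coordinates in the camera view
RESTRICTED_ZONE = np.array(
    [[400, 300], [900, 300], [900, 700], [400, 700]], dtype=np.int32
).reshape((-1, 1, 2))

def in_restricted_zone(box):
    """True if the bottom-centre of a person's bounding box lies inside the zone."""
    x, y, w, h = box
    foot_point = (float(x + w / 2), float(y + h))
    return cv2.pointPolygonTest(RESTRICTED_ZONE, foot_point, False) >= 0

def on_violation():
    # Placeholder response: in practice this might sound a klaxon, page a
    # supervisor, or signal machinery to shut off
    print("Worker detected in off-limits zone")

# Example boxes in the (x, y, w, h) form the person detector above returns
for box in [(450, 500, 60, 160)]:
    if in_restricted_zone(box):
        on_violation()
```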

Edge computing is key to establishing trust in these systems, ensuring that data is collected and processed on a private network and doesn’t leave the premises.

Wider use of computer vision in enterprise surveillance will, of course, feature in discussions around the implications of employee monitoring, and around transparency in how employers use and respond to the data these potentially invasive systems provide.