
Soon, an AI could be the one deciding whether you're doing jail time. Source: Earth Eyes

Beware, the AI is watching you

In the future, algorithms will decide whether you get caught for a crime – and it'll all boil down to how you "behave" on camera.

Japanese telco giant NTT East, along with startup Earth Eyes, has demonstrated a CCTV camera that can recognize "troubling behavior" without human supervision.

Called "AI Guardman", the security camera is designed to help identify shoplifters. According to The Verge, the AI scans live video streams and tracks the gait and behavior of shoppers. If the behavior matches a predefined 'suspicious' pattern, it immediately alerts the shopkeeper.
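Earth Eyes hasn't published how AI Guardman is built, but the pipeline described above – track each person, classify their behavior, and push an alert on a match – could look roughly like the sketch below. Every class, function, and behavior label here is a hypothetical placeholder, not the company's actual code.

```python
# Minimal, hypothetical sketch of a "track -> classify -> alert" loop.
# Nothing here comes from Earth Eyes or NTT East; all names are illustrative.

from dataclasses import dataclass
from typing import Iterable, List, Tuple


@dataclass
class TrackedPerson:
    person_id: int
    keypoints: List[Tuple[float, float]]  # per-frame skeleton points from a pose estimator


# Behaviors a (hypothetical) trained model might flag as suspicious.
SUSPICIOUS = {"conceal_item", "loiter_near_exit", "repeated_shelf_scanning"}


def classify_behavior(person: TrackedPerson) -> str:
    """Stand-in for a trained gait/behavior classifier; returns a behavior label."""
    return "normal"  # a real system would run the pose sequence through a model


def alert_shopkeeper(person: TrackedPerson, label: str) -> None:
    """Stand-in for the alert channel, e.g. a push notification to a store app."""
    print(f"ALERT: person {person.person_id} flagged for '{label}'")


def monitor(frames: Iterable[List[TrackedPerson]]) -> None:
    """Scan each batch of tracked people and raise an alert on a suspicious match."""
    for people_in_frame in frames:
        for person in people_in_frame:
            label = classify_behavior(person)
            if label in SUSPICIOUS:
                alert_shopkeeper(person, label)
```

The interesting part, of course, is hidden inside classify_behavior: that is where the training data, and any bias in it, lives.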

According to IT Media, shoplifting fell by 40 percent in stores that piloted AI Guardman.

Japan isn't the only country developing AI for security cameras, however. Companies across China and the US are building deep learning products to analyze video footage quickly.

Already, companies like Amazon and Nest are rolling out facial recognition capabilities on home security cameras.

In Asia, Chinese companies Megvii and Yitu are gaining momentum across the region; the latter expanded into Singapore last year.

Megvii has demonstrated its technology, Face++, to police departments in Thailand with positive feedback. Meanwhile, body-mounted cameras used by the Malaysian police's auxiliary force run Yitu's facial recognition system.

Most facial recognition technology today is tailored to individual business use cases; AI Guardman gains an upper hand with its plug-and-play design.

According to an NTT East spokesperson quoted by The Verge, the camera would go on sale at the end of July, with an up-front price of around US$2,150 and a monthly subscription fee of US$40 for cloud support. The company aims to introduce the camera to 10,000 stores over the next three years.

Having said that, there are still plenty of underlying problems with AI surveillance, such as privacy, accuracy, and discrimination. An AI's predictions can only be as accurate as the data it was trained on, and contextual data tends to be riddled with human bias.

In the case of AI Guardman, the AI can accurately measure gait, but its definition of 'suspicious behavior' still needs some work. NTT East told The Verge that "common errors" by the system include misidentifying indecisive customers and sales clerks restocking shelves.

The technology is still a bit green at the moment, but it won't be for long. Automated surveillance is becoming increasingly prevalent. Don't be surprised if, in the future, an AI is the one deciding whether you go to jail.