- AI is seeing more uptake than ever before, but with great power comes great responsibility
- We explore edge-case trends, positive and negative, in the ‘grey area’ of AI
Artificial intelligence (AI) continues to hold its title as the top buzzword of enterprise tech, but its appeal is well-founded. We now seem to be shifting from the era of businesses simply talking about AI, to actually getting hands-on, exploring the ways it can be used to tackle real-world challenges.
AI is increasingly providing solutions to problems old and new. But while the technology is proving itself incredibly powerful, not all of its potential is positive. Here, we explore some of the more edge-case applications of AI taking place this year.
Advances in deep learning continue to make deepfakes more realistic. The technology has already proven itself dangerous in the wrong hands; many predict that deepfakes could provide a dangerous new medium for information warfare, helping to spread misinformation or ‘fake news’. The majority of its use, however, is in the creation of non-consensual pornography, which most frequently targets celebrities owing to the large amount of sample footage of them in the public domain. Deepfake technology has also been used in highly sophisticated phishing campaigns.
Beyond illicit ingenuity in shady corners of cyberspace, the fundamental technology is proving itself a valuable tool in a few other disparate places. Gartner’s Andrew Frank called the technology a potential “asset” to enterprises in personalized content production: “Businesses that utilize mass personalization need to up their game on the volume and variety of content that they can produce, and GANs’ [Generative Adversarial Network] simulated data can help.”
Last year, a video featuring David Beckham speaking in nine different languages for a ‘Malaria No More’ campaign was released. The content was a result of video manipulation algorithms and represented how the technology can be used for a positive outcome — reaching a multitude of different audiences quickly with accessible, localized content in an engaging medium.
Meanwhile, a UK-based autonomous vehicle software company has developed deepfake technology that is able to generate thousands of photo-realistic images in minutes, which helps it train autonomous driving systems in lifelike scenarios, meaning the vehicle makers can accelerate the training of systems when off the road.
The Financial Times also reported on a growing divide between traditional computer-generated graphics – which are often expensive and time-consuming – and the recent rise in deepfake tech, while deepfake-style face-swapping and de-aging techniques have found their way into recent Star Wars productions.
Facial recognition is enabling convenience, whether it’s a quick passport check-in process at the airport (remember those?) or the swanky facial software in newer phone models. But AI’s use in facial recognition extends now to surveillance, security, and law enforcement. At best, it can cut through some of the noise of traditional policing. At worst, it’s susceptible to some of its own in-built biases, with recorded instances of systems trained on misrepresentative datasets leading to gender and ethnicity biases.
Facial recognition has been dragged to the fore of discussion, following its use at BLM protests and the wrongful arrest of Robert Julian-Borchak Williams at the hands of faulty AI algorithms earlier this year. A number of large tech firms, including Amazon and IBM, have withdrawn their technology from use by law enforcement.
AI has a long way to go to match the expertise of our human brains when it comes to recognizing faces. Faces are complex and changeable, and algorithms can be easily confused. There’s a roadmap of hope for the format, though, thanks to further advances in deep learning. As an AI system matches two faces correctly or incorrectly, it adjusts its network of connections, picking up past patterns and repeating or refining them.
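To make the matching step concrete: modern face recognition systems typically map each face photo to an embedding vector with a deep network, then declare a match when two embeddings are close enough. The sketch below skips the network entirely and uses invented vectors and an invented threshold, purely to show the comparison logic.

```python
import math

# Toy sketch of the matching step: faces become embedding vectors
# (invented here; normally produced by a deep network), and two faces
# 'match' when their cosine similarity clears a tuned threshold.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

alice_ref = [0.9, 0.1, 0.3]
alice_new = [0.85, 0.15, 0.28]   # same person, different photo
bob = [0.1, 0.9, 0.2]            # different person

THRESHOLD = 0.9  # tuning this trades false matches against false rejections
print(cosine_similarity(alice_ref, alice_new) > THRESHOLD)  # True: match
print(cosine_similarity(alice_ref, bob) > THRESHOLD)        # False: no match
```

The threshold is exactly where the bias problem bites: if the embedding network was trained on a misrepresentative dataset, similarity scores are systematically less reliable for under-represented groups, whatever threshold is chosen.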
Facial recognition’s controversies have furthered discussions around ethical AI, allowing us to clearly understand the tangible impact of misrepresentative datasets in training AI models, which are equally worrying in other applications and use cases, such as recruitment. As the technology is deployed into more and more areas of the world around us, its dependability, neutrality and compliance with existing laws become all the more critical.
With every promising advance in technology comes another challenge, and a recent CBInsights paper warns of AI’s role in the rise of ‘new-age’ hacks.
Sydney-based researchers Skylight Cyber reported finding an inherent bias in an AI model developed by cybersecurity firm Cylance, and were able to create a universal bypass that allowed malware to go undetected. They were able to understand how the AI model works, the features it uses to reach decisions, and create tools to fool it time and again. There’s also the potential for a new crop of hackers and malware to ‘poison data’ – corrupting AI algorithms and disrupting the usual detection of malicious/normal network behaviour. This problematic level of manipulation doesn’t do a lot for the plaudits that many cybersecurity firms give to products that use AI.
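The ‘data poisoning’ idea can be shown with a deliberately trivial detector. Everything below is invented for illustration (the classifier, the connection-rate numbers, the labels): a threshold is learned from clean training data, then re-learned after an attacker slips mislabeled high-rate samples into the training set, and traffic that was previously caught now sails through.

```python
# Toy illustration of data poisoning: a trivial threshold classifier is
# trained on connection rates (requests/sec, invented numbers), then
# retrained after an attacker injects mislabeled 'benign' samples,
# shifting the learned threshold so real attacks go undetected.

def fit_threshold(samples):
    """Pick the midpoint between the highest benign rate and the
    lowest malicious rate seen in training."""
    benign = max(rate for rate, label in samples if label == "benign")
    malicious = min(rate for rate, label in samples if label == "malicious")
    return (benign + malicious) / 2

clean = [(10, "benign"), (20, "benign"), (80, "malicious"), (90, "malicious")]
t_clean = fit_threshold(clean)            # midpoint of 20 and 80 -> 50.0

# The attacker poisons training with high-rate traffic labeled 'benign'.
poisoned = clean + [(70, "benign"), (75, "benign")]
t_poisoned = fit_threshold(poisoned)      # midpoint of 75 and 80 -> 77.5

attack_rate = 60
print(attack_rate > t_clean)     # True: flagged before poisoning
print(attack_rate > t_poisoned)  # False: now slips past the detector
```

Real ML-based detectors are far more complex than a single threshold, but the failure mode is the same shape: whoever can influence the training data can move the decision boundary.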
AI is also being used by the attackers themselves. In March last year, scammers were thought to have leveraged AI to impersonate the voice of a business executive at a UK-based energy business, successfully requesting from an employee the transfer of hundreds of thousands of dollars to a fraudulent account. More recently, it’s emerged that these concerns are valid, and not a whole lot of sophistication is required to pull such attacks off. As seen in the case of Katie Jones — a fake LinkedIn account used to ‘spy’ and phish information from her connections — an AI-generated image was enough to dupe unsuspecting professionals into connecting and potentially sharing sensitive information.
Meanwhile, some believe AI-driven malware could be years away — if on the horizon at all — but IBM has researched how existing AI models can be combined with current malware techniques to create ‘challenging new breeds’ in a project dubbed DeepLocker. Comparing its potential capabilities to a “sniper attack” as opposed to traditional malware’s “spray and pray” approach, IBM said DeepLocker was designed for stealth: “It flies under the radar, avoiding detection until the precise moment it recognizes a specific target.”
There’s no end to innovation when it comes to cybercrime, and we seem set for some sophisticated, disruptive activity to emerge from the murkier shadows of AI.
AutoML – AI that writes itself
Automated machine learning, or AutoML (a term popularized by Google), reduces or completely removes the need for skilled data scientists to build machine learning models. Instead, these systems allow users to provide training data as an input, and receive a machine learning model as an output.
AutoML software companies may take a few different approaches. One approach is to take the data and train every kind of model, picking the one that works best. Another is to build one or more ensemble models that combine the others, which sometimes gives better results. Businesses in sectors ranging from motor vehicles to data management, analytics and translation are seeking refined machine learning models through the use of AutoML. With a marked shortage of AI experts, this technology will help democratize the tech and cut down computing costs.
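The first approach — train every candidate, keep the best — can be sketched in a few lines. This is a minimal illustration, not any vendor’s API: two invented candidate models are fitted on a training split and the one with the lowest holdout error wins.

```python
# Toy sketch of the AutoML 'train many models, keep the best' approach.
# Candidate models and data are invented for illustration.

def mean_model(xs, ys):
    """Baseline candidate: always predict the training mean."""
    mean = sum(ys) / len(ys)
    return lambda x: mean

def linear_model(xs, ys):
    """Candidate: one-feature least-squares fit, y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var if var else 0.0
    b = my - a * mx
    return lambda x: a * x + b

def auto_select(candidates, train, holdout):
    """Fit every candidate on the training split; return the one with
    the lowest mean squared error on the holdout split."""
    def mse(model, data):
        return sum((model(x) - y) ** 2 for x, y in data) / len(data)
    xs, ys = [x for x, _ in train], [y for _, y in train]
    fitted = [candidate(xs, ys) for candidate in candidates]
    return min(fitted, key=lambda m: mse(m, holdout))

# Noiseless linear data (y = 2x + 1): the linear candidate should win.
data = [(x, 2 * x + 1) for x in range(10)]
best = auto_select([mean_model, linear_model], data[:7], data[7:])
print(best(20))  # prints 41.0
```

Commercial AutoML systems search over far larger model families (and their hyperparameters), but the select-by-holdout-score loop at the core is the same.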
Despite its name, AutoML has so far relied a lot on human input to code instructions and programs that tell a computer what to do. Users then still have to code and ‘tune’ algorithms to serve as ‘building blocks’ for the machine to get started. There are pre-made algorithms that beginners can use, but it’s not quite ‘automatic’.
Google computer scientists believe they have come up with a new AutoML method that can generate the best possible algorithm for a specific function, without human intervention. Dubbed AutoML-Zero, the method works by continuously trying algorithms against different tasks and improving on them through a process of elimination, much like Darwinian evolution.
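A heavily simplified version of that evolutionary loop looks like this. AutoML-Zero evolves whole programs; here, as a stand-in, each candidate ‘algorithm’ is just a pair of coefficients for y = a*x + b, the target task and mutation sizes are invented, and each generation a random survivor is mutated while the weakest candidate is eliminated.

```python
import random

# Toy evolutionary search in the spirit of AutoML-Zero: candidates are
# mutated at random and the worst performer is eliminated each round.
# Target task (y = 3x - 2) and all constants are invented for the demo.

random.seed(0)

def fitness(prog):
    """Negative squared error of y = a*x + b against the target y = 3x - 2."""
    a, b = prog
    return -sum((a * x + b - (3 * x - 2)) ** 2 for x in range(-5, 6))

def mutate(prog):
    """Randomly nudge one of the two coefficients."""
    a, b = prog
    if random.random() < 0.5:
        return (a + random.uniform(-0.5, 0.5), b)
    return (a, b + random.uniform(-0.5, 0.5))

population = [(0.0, 0.0)] * 20
for generation in range(300):
    # Add a mutated copy of a random survivor, then eliminate the weakest.
    population.append(mutate(random.choice(population)))
    population.remove(min(population, key=fitness))

best = max(population, key=fitness)
print(best)  # coefficients should have drifted toward the target task
```

Because only the weakest candidate is ever removed, the best fitness in the population can never get worse, so the search ratchets toward the target. The real system searches a vastly larger space of instruction sequences, across many tasks at once.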
Solving its own carbon footprint
AI and machine learning may be streamlining processes, but they are doing so at some cost to the environment.
AI is computationally intensive (it uses a whole load of energy), which explains why so many of its advances have come top-down, from large firms with the resources to pay for compute. As more companies look to cut costs and utilize AI, the spotlight will fall on the development and maintenance of energy-efficient AI devices, and on tools that can turn the tide by pointing AI expertise towards large-scale energy management.
Artificial intelligence also has a role in augmenting energy efficiency, and the stakes are high. In 2018, China’s data centers alone produced 99 million metric tons of carbon dioxide (equivalent to 21 million cars on the road), and worldwide, data centers consume 3 to 5 percent of total global electricity, a share that will continue to rise as we rely more on cloud-based services. Savvy to the need to ‘go green’, tech giants are now employing AI systems that gather data from sensors every five minutes and use algorithms to predict how different combinations of actions will positively or negatively affect energy use. AI tools can also spot issues with cooling systems before they happen, avoiding costly shutdowns and outages for cloud customers.
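The ‘predict each combination of actions’ idea reduces to a simple search once a predictive model exists. In the sketch below the cost model is entirely made up (in practice it would be trained on the five-minute sensor history): every combination of a few control settings is scored, and the cheapest predicted configuration wins.

```python
from itertools import product

# Illustrative sketch of scoring action combinations for energy use.
# The cost model and all constants are invented for the example; a real
# system would learn this model from historical sensor data.

def predicted_kw(fan_speed, setpoint_c, pumps_on):
    """Hypothetical learned model: predicted power draw for one setting."""
    cooling_load = max(0.0, 27.0 - setpoint_c) * 8.0   # colder target costs more
    fan_cost = 1.5 * fan_speed ** 2                    # fan power grows quadratically
    pump_cost = 4.0 * pumps_on
    return 50.0 + cooling_load + fan_cost + pump_cost  # 50 kW base IT load

# Enumerate every combination of candidate actions; keep the cheapest.
choices = product([1, 2, 3], [24.0, 25.0, 26.0], [1, 2])
best = min(choices, key=lambda c: predicted_kw(*c))
print(best, predicted_kw(*best))  # (1, 26.0, 1) 63.5
```

Real deployments add constraints the toy omits, most importantly keeping predicted temperatures inside safe operating limits rather than simply minimizing power.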
From low-power AI processors in edge technologies to large-scale renewable energy solutions (that’s AI dictating the angle of solar panels, and predicting wind power output based on weather forecasts), there are positive moves happening as we enter the 2020s. More green-conscious, AI-intensive tech firms are popping up all the time, and we look forward to seeing how they navigate the double-edged sword of energy-guzzling AI being used to mitigate the guzzling of energy.