
Next-gen cybersecurity tech offers more than lip service to AI
There aren’t many suppliers to the modern business, in whatever function (HR, finance, supply chain, production, etc.), that don’t claim to use some form of artificial intelligence, machine learning or deep learning in their products.
Of course, there’s always been a tendency to use the buzzword or phrase of the day: in the past, the fashionable suffixes “-o-matic” and “-orama” were applied to just about everything new. Today, every product ties itself to the latest and greatest technological breakthrough. To not do so spells marketing death: why portray any tech as last year’s model?
The internet tends not to discriminate between AI, machine learning, deep learning, neural networks, automated cognition, and a dozen or more near-synonyms. There are reasons for this, of course, not least that the technology is new and a standard terminology has yet to be widely adopted.
But additionally, the actual methods used to build artificial intelligence into technology are specialist, and necessarily complex. Without a more profound understanding, “deep learning” might as well be bandied about in marketing materials as any of the alternatives: after all, who really knows the differences?
While there are plenty of resources on the internet that will inform the reader about recursive functions, Caffe, and back-propagation, this site’s focus is directed more towards the practical functions that AI, ML, DL and the rest have in the business environment. And it’s in the business data context that computing methods such as AI can be advantageous.
The most significant threat to most businesses and organizations these days is probably some type of cyberattack. While insuring against floods or traditional industrial espionage is relatively straightforward, the constantly changing nature of the threats makes cybersecurity a difficult nut to crack.
On a practical level, once a cyberthreat becomes known, its details are usually disseminated quickly across the globe to help others prevent incursions by the same method. This loosely interconnected information network is pretty reliable, albeit slow, and unfortunately the resulting protection is sometimes more than a little hit-or-miss.
Like most humans, hackers are inherently lazy. A typical piece of malware might consist of in the region of 10,000 lines of code. New malware written from the ground up is more or less unknown; instead, malware creators take existing material and re-task parts of it to achieve their ends.
A small change to a malware instance’s code base (one or two percent) is enough to fool traditional methods of cybersecurity. And while there are only a finite number of malware types (rootkits, trojans, ransomware, etc.), there are plenty of variants of each – certainly enough to keep the cybersecurity industry on its toes.
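Traditional signature-based detection illustrates why such tiny edits work: a hash of a known-bad file is a brittle fingerprint, and changing even one byte produces an entirely different digest. A minimal sketch (the byte strings below are harmless stand-ins, not real malware):

```python
import hashlib

# A hypothetical malware sample, represented here as raw bytes.
original = bytes(10_000)            # 10,000 null bytes stand in for code
variant = bytearray(original)
variant[42] ^= 0xFF                 # flip a single byte (~0.01% change)

sig_original = hashlib.sha256(original).hexdigest()
sig_variant = hashlib.sha256(bytes(variant)).hexdigest()

# The signatures no longer match, so a blocklist keyed on the
# original hash waves the near-identical variant straight through.
print(sig_original == sig_variant)  # False
```

The same brittleness applies to any exact-match signature scheme, which is why variant-heavy malware families overwhelm purely signature-driven defenses.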
Into this situation comes artificial intelligence, and specifically, deep learning. Deep learning is probably best defined, in this context, as a method of data classification that employs multiple layers of neural network code. Sometimes called node layers, each stratum takes data from the previous layer, parses it, and passes it on to the next. The more “hidden” layers, the better the recognition – in this case, of malicious code.
Like all artificial intelligence structures, the process of learning is constant. The more data fed into deep learning (DL) algorithms, the greater the accuracy. DL can either classify its findings (if taught the differences between file types, for instance) or cluster them – the distinction between supervised and unsupervised learning.
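The supervised/unsupervised split can be shown with a toy two-group dataset (synthetic numbers, nothing vendor-specific): given labels, the system learns named classes; without them, the same data only forms anonymous clusters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic "file" groups in a 2-D feature space.
benign = rng.normal(loc=0.0, scale=0.5, size=(50, 2))
malicious = rng.normal(loc=3.0, scale=0.5, size=(50, 2))
data = np.vstack([benign, malicious])

# Supervised: labels are provided, so we learn per-class centroids
# and classify a new sample by the nearest one.
labels = np.array([0] * 50 + [1] * 50)
centroids = np.array([data[labels == c].mean(axis=0) for c in (0, 1)])
new_sample = np.array([2.9, 3.1])
predicted = np.argmin(np.linalg.norm(centroids - new_sample, axis=1))
print(predicted)  # 1: nearest the "malicious" centroid

# Unsupervised: no labels are given, so we can only group samples
# by proximity; the clusters emerge but carry no names.
seeds = np.array([data[0], data[-1]])      # one seed from each end
assign = np.argmin(
    np.linalg.norm(data[:, None] - seeds[None], axis=2), axis=1)
print(set(assign.tolist()))                # {0, 1}: two unnamed groups
```

Nearest-centroid classification and a single clustering-assignment step stand in here for what a production DL system would do with learned representations; the point is only that labels turn “groups” into “verdicts”.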
From a practical point of view, protecting every susceptible endpoint in the world would require every local network gateway to have a very large amount of processing power at its disposal. Clearly, that’s impractical: each malware instance in a supervised learning phase needs breaking down almost bit by bit before being fed into the learning structure.
Then, the algorithms need checking and the learning process furthering with terabytes of unsupervised learning material – the more, the better.
Few have that type of computing power sitting around unused. Instead, AI-powered cybersecurity technology employs a model familiar from much of modern computing: a powerful compute cluster does the heavy lifting, while dumb or thin endpoints reap the benefits and, on occasion, pick up updates.
Like anyone mining cryptocurrencies, creators of deep-learning malware mitigation systems employ large arrays of GPUs. These provide relatively cheap, low-power computing “grunt” without the need for liquid-cooled Cray supercomputers or building-sized grid computing facilities.
The GPU arrays running DL cybersecurity code achieve a much better hit rate than “traditional” AV methods such as deep packet inspection – partly because the self-learning automata examine the data on a much smaller scale. Changing three or four percent of a code base bought on the dark web may be the hacker’s modus operandi at present, but this type of malware creation’s days are clearly numbered.
There are but a few companies exploring AI, machine or deep learning for cybersecurity purposes at present: Cylance, FireEye and Deep Instinct are perhaps the only three with a viable commercial product to date.
Of the three, Deep Instinct is notable. It was named as the “Most Disruptive” AI startup by NVIDIA at the 2017 Inception Awards, reflecting the company’s ability to change the face not only of cybersecurity but the malware “industry” too.
Headquartered in Tel Aviv, Israel, Deep Instinct has offices across the globe. Its Tokyo, Sydney, New York, Singapore and San Francisco locations were all chosen to be close not only to the local tech “action” but also to the internet backbone.
The company’s founders and executive staff are steeped in defense methodology (many of their number hail from the Israeli military), and Deep Instinct’s solutions offer protection at the level of every individual endpoint without requiring abnormal processing power or storage: the solution runs as just another app.
Systems administrators using the products typically pay a small sum per endpoint protected, and the lightweight client instances of the products are updated according to local decisions.
Until the ne’er-do-wells of the online world begin to develop new methods of malware creation that rely more on ingenuity and hard work, and less on tweaking found code, systems protected by deep learning stand a good chance of immunity from a greater number of threats than previously hoped for.