Threats can often come from nothing more than a rogue click. Source: Shutterstock

Could machine learning help counter internal data breaches?

In an increasingly digital and interconnected age, data and integrated systems have come to play a central role in driving corporate transformation.

Together, the two make up the information networks that corporations run today. Naturally, concerns over network security grow as data sits within these systems and countless endpoints can serve as entry points for cybercriminals.

However, there is a common misconception that threats are almost always external, coming from outside the operation’s network perimeter.

The default assumption seems to be that outsiders are the main cause of cybersecurity incidents – but what about internal actors such as employees?

Data breaches and cybersecurity issues can stem from within the network itself. In fact, a report by PwC revealed that 44 percent of cyber incidents can be attributed to internal actors.

To be clear, this does not mean that employees – or even managing directors – are all liabilities waiting to turn into cybercriminals. It does, however, suggest that cybersecurity measures sometimes underestimate the risks internal actors present.

In such cases, advanced capabilities like machine learning (ML) can offer a solution. The technology can not only process vast volumes of data in real time, but also learn from that data and make insightful predictions.

This is why ML can be seen as a natural fit for cybersecurity: it can evolve much as the threats themselves do, only faster. ML has a lot to offer security teams when it comes to managing threats and risks within a network perimeter – particularly through behavioral analytics.
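To make the behavioral-analytics idea concrete, here is a minimal sketch using scikit-learn’s IsolationForest to flag activity that deviates from an employee’s learned baseline. The feature names (login hour, files accessed, megabytes downloaded) and all the numbers are illustrative assumptions, not a prescribed schema:

```python
# Hedged sketch: unsupervised anomaly detection over per-session
# activity features. Assumed features: login_hour, files_accessed,
# mb_downloaded. Real deployments would use far richer telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" behavior: daytime logins, modest file activity.
normal = np.column_stack([
    rng.normal(10, 1.5, 500),   # login_hour (~10 a.m.)
    rng.normal(20, 5, 500),     # files_accessed
    rng.normal(50, 15, 500),    # mb_downloaded
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A 3 a.m. login with a bulk download scores as an outlier (-1);
# a typical daytime session scores as an inlier (1).
suspicious = np.array([[3, 300, 5000]])
typical = np.array([[10, 22, 48]])
print(model.predict(suspicious))  # [-1]
print(model.predict(typical))     # [1]
```

Anomalous sessions would then be surfaced to the security team for review rather than acted on automatically.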

Since internal security incidents are, more often than not, accidental – the result of misjudgments or blunders – there is a need for a solution that can stop these accidents before they happen.

ML can be trained to do exactly that, but corporations need to invest in tailoring the solution to this particular function. That means drawing on human resources data to build a risk-based model.

Employees’ online activity data can be used here. If the model can analyze employees’ online behavior and the steps they take before making an error that leads to an accidental data breach or leak, alerts can be sent to users and risks managed better overall.
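One supervised way to sketch this: label past sessions by whether they ended in a reported incident, train a classifier on pre-incident behavior, and alert when a live session’s predicted risk crosses a threshold. The feature names, the synthetic labels, and the 0.7 threshold below are all assumptions for illustration:

```python
# Hedged sketch of a risk-based alerting model. Assumed per-session
# features: unknown_domains_visited, attachments_opened,
# security_warnings_dismissed. Labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 400

X = np.column_stack([
    rng.poisson(2, n),    # unknown_domains_visited
    rng.poisson(1, n),    # attachments_opened
    rng.poisson(0.5, n),  # security_warnings_dismissed
])
# Synthetic ground truth: more risky actions -> higher incident chance.
y = (X.sum(axis=1) + rng.normal(0, 1, n) > 6).astype(int)

clf = LogisticRegression().fit(X, y)

def should_alert(session, threshold=0.7):
    """Return True when predicted incident risk crosses the threshold."""
    return clf.predict_proba([session])[0, 1] >= threshold

print(should_alert([8, 5, 4]))  # many risky actions -> alert
print(should_alert([0, 0, 0]))  # quiet session -> no alert
```

The threshold is where security and HR policy enters: set too low, the model nags everyone; set too high, it misses the blunders it was built to catch.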

ML can also be trained to issue warnings or reminders to employees who are, for example, about to open a phishing link or click on a malware link. This would not only protect the operation’s network but also train employees to be more critical when they are online.
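As a deliberately simple, non-ML illustration of that warn-before-click step, the check below flags URLs with lexical traits commonly seen in phishing links; a real deployment would feed features like these into a trained classifier. The keyword list and thresholds are assumptions:

```python
# Hedged sketch: a heuristic pre-click check. True means "show the
# user a warning before the link opens". Not a production filter.
from urllib.parse import urlparse

SUSPICIOUS_KEYWORDS = {"login", "verify", "update", "secure", "account"}

def looks_suspicious(url: str) -> bool:
    """Flag URLs with phishing-like lexical traits."""
    host = urlparse(url).hostname or ""
    return (
        any(part.isdigit() for part in host.split("."))  # raw-IP-style host
        or host.count("-") >= 2                          # hyphen-stuffed domain
        or any(k in url.lower() for k in SUSPICIOUS_KEYWORDS)
    )

print(looks_suspicious("http://paypal-secure-login.example.com/verify"))  # True
print(looks_suspicious("https://www.wikipedia.org/"))                     # False
```

Showing the employee *why* a link was flagged is what turns the warning into the on-the-job training the paragraph above describes.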

However, such data can be considered sensitive information that can easily be misused if it is not contextualized and processed ethically.

Not to mention, leveraging this data can make employees who never meant any harm feel watched and restricted in what they do online.

This is a challenge that security and HR teams have to solve together. One way of addressing it is to make sure all employees are well aware that their data is used only to train the ML model – and that their activities, as long as they are deemed acceptable, will not be held against them in any way.

A lot has been said about why companies need to protect their data and why the CISO role has become so pivotal to operations, especially when going digital.

Nonetheless, the key point to remember is that internal actors can still pose some form of threat to the corporation.

ML will remain valuable here: it can not only help detect employees’ honest misjudgments, but also spot those who are intentionally trying to cause harm.