Divide & conquer? Micro-segmentation adapts as business and hacker strategies change
Changes in cyber attack methods come down to two overriding influences: what is available to the hacker in terms of method and strategy, and how the rest of us store and attempt to protect data.
The vast majority of malware attacks are either duplicates or, at best, refinements of existing methods. Most hackers are not original thinkers; indeed, in that particular world, being a “script kiddie” is something of an insulting label. This lowest stratum of the underworld pecking order simply picks up existing malicious code, perhaps makes minor changes to it, and then deploys it in a manner that may or may not be particularly original.
Attack methods therefore tend not to change significantly overnight. Instead, careful refinement of existing strategy is used to compromise systems. As the cybersecurity industry adapts, it finds new methods to counteract threats and variations thereon; only occasionally do entirely new vectors and methods crop up. Recent cases in point include 2016-2017’s spike in ransomware, 2003-2005’s predominance of rootkits, and this year’s emergence of chip-based exploits such as Spectre.
As working methods change, cybersecurity methods must follow. BYOD, for example, has led to a resurgence in development of the methods used to protect clients (or, to use the latest terminology, endpoints) from attack.
The arrival of iOS, Android, BlackBerry and Windows Mobile devices in the enterprise workplace forced the security industry to rethink, but the underlying protection methods have not changed much since the days of antivirus applications installed on Windows 98 desktops.
Firewalls have morphed from packet filtering to stateful inspection, to “next-gen” deployments which specifically protect web applications. In data center settings, the firewall is moving “inwards” too, as enterprises turn increasingly to cloud deployments and adopt XaaS (just-about-anything as a service) as a weapon of choice.
The unfortunate truth is that traditional cybersecurity methods are simply not suited to the latest enterprise-level deployments and topologies spread across complex clouds. The cybersecurity industry is, in large part, struggling to adjust to technologies such as virtualized infrastructures and services, container-based applications, microservices, and, increasingly, serverless models.
Inside a data center, the billions of potential interconnections between (increasingly abstracted) nodes simply cannot be protected by traditional antimalware measures. The predominance of east-west traffic is not particularly well-suited to packet monitoring technology, which has in the past been deployed at the gateway, rather than inside complex, ever-shifting datacenter networks.
Even a fairly simple setup of (for instance) a single app and 10 workloads produces internode traffic which creates highly complex (if aesthetically rather pleasing) pathways.
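To put rough numbers on that complexity, here is a small sketch (the node counts are illustrative, not drawn from any vendor's figures) of how the number of potential east-west flow pairs grows quadratically with node count:

```python
# Illustrative sketch: potential flow pairs grow quadratically with node
# count, which is why per-flow manual policy quickly becomes infeasible.

def potential_flows(nodes: int) -> int:
    """Number of distinct node pairs that could exchange traffic."""
    return nodes * (nodes - 1) // 2

# One app plus 10 workloads = 11 nodes -> 55 possible internode paths.
print(potential_flows(11))       # 55

# A hypothetical estate of 50,000 abstracted nodes:
print(potential_flows(50_000))   # 1,249,975,000 -- over a billion pairs
```

Even this toy arithmetic shows how “billions of potential interconnections” arise from quite ordinary data center scales.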
The question is: are there protection methods out there which can be deployed, or adapted, to protect the growing number of applications utterly reliant on the continuous, optimized provision of cloud infrastructure and its supporting services?
The changes in data centers (such as hyperconvergence) are taking place to allow IT departments to respond more swiftly to the dictates of business decisions and strategies. One of the great positives of the latest cloud-based working methods and the technologies which now underpin them (microservices, containerized applications, etc.) is malleability: an IT agility which can match new business practices.
Older cybersecurity technologies are often too risky to transplant into these environments, as a mis-deployment can quickly lead to an essential service outage. Additionally, with the multitude of complex interconnections existing at any time in the data center, it is easy to overlook vulnerabilities.
Some providers are using segmentation, at least partially, to carve out protected spaces in which service uptimes and efficiencies can be assured. But even if a watertight cybersecurity system could be put in place, as virtualized setups scale (spinning up new battalions of storage, compute and networking), it is almost impossible for cybersecurity to reflect those changes at the required speed.
The very latest antimalware technologies therefore use micro-segmentation, a term which harks back to an old method of network optimization (reducing network hops) but which now describes a new security and optimization model.
Micro-segmented antimalware techniques are designed, from the ground up, to protect the inside of data infrastructures, be they centralized or geographically disparate. In particular, they are designed to prevent bad actors from interfering with the east-west traffic which, today, turns the wheels of industry.
Micro-segmentation technologies interconnect with the virtualization and governing software layers of data repositories of all types, and are therefore aware of each virtualized or physical data node, plus the data flows joining them. This type of granular approach to security could not have been implemented in the past (nor, indeed, would it have been relevant). Now, however, micro-segmentation security technology perfectly reflects the changing landscape inside often complex, hybrid, multisite or stretched deployments.
As well as providing protection, micro-segmenting technology also monitors data center health, identifies bottlenecks, and creates overviews of even very complex structures spanning multiple locations, from front-line tier I apps down to cold storage HDD arrays.
As data centers adapt and change, and services or new network zones spin up or change quickly, the very latest cybersecurity technology will mirror those changes and protect their owners. Here is one such supplier which we at Tech Wire Asia would like to suggest as capable of protecting next-gen data centers and the valuable resources contained therein, now and in the future.
“We just kept coming back to the idea that it shouldn’t be so hard to proactively stop threats inside data centers and as technologists we had the opportunity to solve it – so we did.” – PJ Kirner, CTO & founder of Illumio.
Illumio’s adaptive security platform delivers micro-segmentation that begins with visibility into east-west application traffic inside existing data centers and public clouds, then creates segmentation policies for multiple compute platforms (bare metal, VMs, and containers) in public cloud and on-premise environments.
The basic premise of micro-segmentation is to contain a bad actor. Like the compartmentalizing used in ship construction, micro-segmentation limits the ability of a bad actor to move beyond the compartment where the breach occurred. This prevents its free movement and protects valuable assets.
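As a minimal sketch of that compartmentalizing idea (the segment labels and function names here are illustrative assumptions, not any vendor's API): under a default-deny model, traffic flows only where an explicit rule permits it, so a compromised workload cannot reach segments it was never authorized to contact.

```python
# Illustrative default-deny segmentation check (hypothetical labels, not
# a real product API). Traffic is allowed only if an explicit rule
# permits the (source segment, destination segment) pair.

ALLOWED = {
    ("web", "app"),   # web tier may call the app tier
    ("app", "db"),    # app tier may query the database tier
}

def is_allowed(src_segment: str, dst_segment: str) -> bool:
    """Default-deny: anything not explicitly allowed is blocked."""
    return (src_segment, dst_segment) in ALLOWED

# A breached web server cannot jump straight to the database:
print(is_allowed("web", "db"))   # False
print(is_allowed("app", "db"))   # True
```

The design choice matters: because the default is deny, an attacker who lands in one compartment gains no implicit reach into the rest of the estate, mirroring the ship-bulkhead analogy above.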
Illumio’s micro-segmentation solution adapts as applications change, scale, migrate or leverage public cloud. The challenge is formulating a micro-segmentation policy for applications in the first place. Anyone who has created a firewall rule that broke an application understands the obstacles.
Illumio begins by visualizing the dependencies between the workloads of an application. This helps develop policies that isolate and protect traffic between workloads, application tiers, processes, and environments. Security teams can limit traffic when a critical vulnerability is discovered and IT is unable to deploy a relevant patch.
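The general idea of turning visibility into policy can be sketched as follows (the tier names, ports, and flow data are hypothetical sample data, and this is not Illumio's implementation): observed east-west flows are collapsed into candidate allow rules that an operator can review before enforcement.

```python
# Hypothetical policy-from-visibility sketch: observed east-west flows
# between application tiers are aggregated into draft allow rules, with
# hit counts so rare (possibly suspicious) flows stand out in review.

from collections import Counter

# Observed flows: (source tier, destination tier, destination port).
observed = [
    ("web", "app", 8080),
    ("web", "app", 8080),
    ("app", "db", 5432),
    ("app", "db", 5432),
    ("app", "db", 5432),
]

def draft_rules(flows):
    """Collapse raw flows into distinct (src, dst, port) rules with counts."""
    return Counter(flows)

for rule, hits in sorted(draft_rules(observed).items()):
    print(f"allow {rule[0]} -> {rule[1]}:{rule[2]}  (seen {hits}x)")
```

A workflow like this is why the article notes that visibility comes first: rules derived from what the application actually does are far less likely to break it than rules written blind.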
Across bare-metal servers, VMs, and containers, from the data center to the public cloud, Illumio protects and informs through traffic visibility. To read more, click here. Alternatively, start a free trial or contact a representative local to your organization today.