
A virtualized shake-up – the latest in storage tech
Perhaps the biggest shake-up the data center is going through right now is software-defined infrastructure. This new breed of technology is changing the way IT is consumed – and it is certainly changing the way technology is regarded by those outside the data center itself.
Because of software-defined technologies, IT has become a malleable resource, rather than a tricky collection of tools and capabilities that requires expert handling.
To understand the impact of abstraction (or convergence, or software-defined platforms) on storage, we need to step back for a moment and look at some of the technologies in daily use in data centers throughout the world, to appreciate the complexity of what needs to be managed.
Storage: different file systems running volumes on disk types ranging from 5,400rpm HDDs to NVMe SSD arrays, connected by Ethernet or fiber, whether in the cloud, in-house, or in remote deployments such as branch offices.
Infrastructure: switches, routers, firewalls, load balancers, edge devices, and so forth, running multiple OSes, with discrete firmware and platform-specific management requirements.
The above list barely scratches the surface of the task facing storage specialist staff each day. To a certain extent, today’s suppliers of technology enable us to manage, maintain and expand the ability of businesses to deploy the storage, disaster recovery and archiving tech they need, when they want it. But it’s no easy task.
There are multiple interfaces and dashboards at hand, and while staff can be skilled in several areas at once (virtualization, cluster management and network security, for example), it’s often a balancing act between maintaining service levels, responding to clients’ changing needs, and ensuring there are backups and failovers in place.
One way to make life a great deal more straightforward for data center staff – and, by happy coincidence, to improve utilization and scalability metrics – is software abstraction of storage.
At the heart of most enterprises today is the new currency of business: data. The explosion in data use looks set to continue, putting increasing pressure on IT to manage, protect and, more importantly, scale it.
Abstracting the storage layer gives businesses the ability to manage all storage as a unified resource, to be allocated and utilized to the best ends of the organization. In the tech press, the phrase ‘data silo’ has become more common. Software-defined storage technologies remove the concept of discrete silos of data, ensuring the business gets full value from its resources (and the investment it has made in them).
In practical terms, instead of having to use a changing array of management tools to address individual storage devices, abstraction removes the site-by-site, device-by-device handling that was previously necessary.
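To make the idea concrete, here is a minimal sketch in Python – with entirely hypothetical class and method names, not any vendor’s actual API – of how an abstraction layer can present many backends as one resource:

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """One physical or cloud storage target behind the abstraction."""

    @abstractmethod
    def write(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def read(self, key: str) -> bytes: ...

class LocalDisk(StorageBackend):
    """Stand-in for an on-premises array; a real backend would talk
    to actual devices or a cloud API."""
    def __init__(self):
        self._blobs: dict[str, bytes] = {}

    def write(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def read(self, key: str) -> bytes:
        return self._blobs[key]

class UnifiedStorage:
    """Routes every read and write to whichever backend a policy
    selects, so admins and applications never address devices directly."""
    def __init__(self, backends: dict[str, StorageBackend]):
        self._backends = backends

    def write(self, key: str, data: bytes, tier: str = "primary") -> None:
        self._backends[tier].write(key, data)

    def read(self, key: str, tier: str = "primary") -> bytes:
        return self._backends[tier].read(key)

pool = UnifiedStorage({"primary": LocalDisk(), "archive": LocalDisk()})
pool.write("invoice-2024.pdf", b"...", tier="archive")
print(pool.read("invoice-2024.pdf", tier="archive"))
```

The point of the pattern is that swapping or adding a backend never changes the calling code – which is, in miniature, what removing device-by-device management looks like.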
While every use case will be different, the capabilities of the latest software-defined storage and data protection mean that, for instance:
– Primary and secondary data (structured and unstructured) can be allocated to the correct store, according to need.
– Data can be replicated (and compressed, if necessary) according to SLAs, rather than according to hardware constraints (see the sketch after this list).
– Recovery times are therefore shorter, and recovered assets can be placed on platforms dissimilar to their snapshot source.
– The unified storage array is expandable without downtime (so-called forklift upgrades are history).
– Dynamic resource-scaling allows asset distribution to match a business’s needs – not after a procurement process.
– Distributed resources are managed as one, including edge installations, cloud storage and in-house data centers.
– Commodity hardware can reduce CAPEX significantly.
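As promised above, here is a hedged illustration of the SLA-driven idea. The tier names and policy values below are invented for the example, but they show how a policy, rather than a particular device, can determine replication and compression:

```python
import zlib
from dataclasses import dataclass

@dataclass
class SlaPolicy:
    copies: int       # how many replicas the service level demands
    compress: bool    # whether data is compressed at rest

# Hypothetical tiers -- real products define their own policy names.
POLICIES = {
    "gold":   SlaPolicy(copies=3, compress=False),  # fastest recovery
    "silver": SlaPolicy(copies=2, compress=True),
    "bronze": SlaPolicy(copies=1, compress=True),   # archive-grade
}

def prepare_replicas(data: bytes, sla: str) -> list[bytes]:
    """Shape storage behavior by SLA, not by what a given box supports."""
    policy = POLICIES[sla]
    payload = zlib.compress(data) if policy.compress else data
    return [payload] * policy.copies

replicas = prepare_replicas(b"quarterly results " * 100, sla="silver")
print(len(replicas), "copies of", len(replicas[0]), "bytes each")
```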
At Tech Wire Asia, we’ve considered three suppliers of software-defined storage and data protection systems which we think businesses of all sizes should consider. Each has a subtly different offering, relying to varying degrees on its own hardware or software.
Each business’s requirements will be different from the next – we hope you source one of the following as a future supplier.
STORAGECRAFT
The Utah-headquartered StorageCraft is in the business continuity game – protecting and managing all things data: storage, backup, restore and archiving.
This business-centric approach has led the company to provide a range of scale-out, object-based NAS storage and backup-and-recovery solutions. These are unified by the company’s management platform, OneSystems, which provides a single point of monitoring and oversight covering all of an organization’s storage, backup and recovery deployments, from remote branches to the central data center.
Latest out of the StorageCraft stable is OneXafe, a platform that provides converged storage – re-allocatable on the fly from secondary to primary data – along with data protection.
At the core of OneXafe is a patented distributed object-based file system that delivers universal data access, providing NFS and SMB access for users and applications. Data protection is built on top of the scalable object store, delivering powerful recovery (VMs come back online in milliseconds) and workflows optimized for management simplicity.
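StorageCraft’s patented design isn’t spelled out here, so the following is only a generic sketch of the underlying pattern – snapshots layered over a content-addressed object store, with all names hypothetical. It suggests why recovery can be near-instant: a snapshot copies metadata, not data.

```python
import hashlib

class ObjectStore:
    """Content-addressed store: identical blocks are kept only once."""
    def __init__(self):
        self._objects: dict[str, bytes] = {}

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self._objects.setdefault(digest, data)
        return digest

    def get(self, digest: str) -> bytes:
        return self._objects[digest]

class Volume:
    """A file-system-like view: names map to object digests. A snapshot
    copies only that mapping, never the data, so rollback is near-instant."""
    def __init__(self, store: ObjectStore):
        self._store = store
        self._index: dict[str, str] = {}
        self._snapshots: dict[str, dict[str, str]] = {}

    def write(self, name: str, data: bytes) -> None:
        self._index[name] = self._store.put(data)

    def snapshot(self, label: str) -> None:
        self._snapshots[label] = dict(self._index)  # metadata-sized copy

    def restore(self, label: str) -> None:
        self._index = dict(self._snapshots[label])

    def read(self, name: str) -> bytes:
        return self._store.get(self._index[name])

store = ObjectStore()
vol = Volume(store)
vol.write("vm-disk.img", b"good blocks")
vol.snapshot("before-upgrade")
vol.write("vm-disk.img", b"corrupted blocks")
vol.restore("before-upgrade")
print(vol.read("vm-disk.img"))  # b'good blocks'
```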
ShadowXafe is similarly new and is designed with backup and restore SLAs in mind for physical and virtual machines – perfect for businesses of all sizes, and for MSPs, where rollback to previous snapshots needs to be as fast as possible. ShadowXafe integrates with StorageCraft Cloud Services to provide DRaaS, a service which essentially abstracts away the differences between cloud and in-house storage, backup and restore.
To read more about StorageCraft, and these exciting new products in particular, click here.
HEWLETT-PACKARD ENTERPRISE
HPE offers software which provides enterprise infrastructure-as-a-service that is rapidly deployable and scalable in everyday use.
Rapid Provisioning for Database, for instance, delivers a scalable capability that can publish a catalog of varied database services with a minimum of management effort. The solution comes with out-of-the-box presets, tuned specifically to many common applications and open programming environments. From these templates, companies can build their own bespoke configurations.
HPE’s automated process manages the incorporation of bare-metal servers, VMware-based virtual machines, and Docker applications – all with accompanying storage arrays.
Complete infrastructures can be brought into a single management console, reducing costs because multiple interfaces no longer need to be learned, and switched between, for different functions. Training costs are therefore slashed, and staff can be redeployed to more business-centric roles.
HPE’s partners and VARs in Asia will provide all the necessary roll-out plans and consultancy, and will help oversee implementation over any required time period.
With years of experience gleaned from the first modern computers right through to next-generation software-defined data center technologies, HPE is well placed to ensure businesses take full advantage of the latest advances in virtualization.
CISCO
Like several of the big names that hail from computing’s early days, Cisco is fast reinventing itself, more as a provider of business-oriented solutions than as a networking hardware supplier – albeit the gold standard in that field.
The company offers an abstracted data center in its entirety, from gateway devices to software-defined networks, virtual machine and container support, and unified storage that combines a broad sweep of storage types.
Clearly the company would prefer data centers to deploy as much of its own hardware as possible, but Cisco is nothing if not pragmatic: its storage systems can be equipped with a range of suppliers’ hardware, and from the hardware layer up the stack, it is Cisco providing the abstraction on top.
The company’s HyperFlex platform creates what the company calls a “unified computing system”. It’s an abstraction layer that runs multiple hypervisors, Kubernetes-orchestrated Docker containers, and massively distributed storage nodes.
Control and monitoring via APIs (for use in dozens of industry tools) or via HyperFlex Connect creates a simplicity of management that removes the need for multiple control screens. There’s even a deployment ‘wizard’ – complete, no doubt, with memories of early mail-merge assistants.
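Purely as a hypothetical shape – the host, endpoint and fields below are invented for illustration, not Cisco’s actual API – API-driven monitoring typically reduces to a single authenticated REST call in place of several dashboards:

```python
import json
import urllib.request

# Hypothetical endpoint and schema -- consult the platform's own API
# documentation for the real paths and fields.
BASE_URL = "https://hx-mgmt.example.local/api/v1"

def cluster_health(token: str) -> dict:
    """One authenticated call that stands in for checking several
    per-device dashboards by hand."""
    req = urllib.request.Request(
        f"{BASE_URL}/clusters/health",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# health = cluster_health(token="...")
# if health.get("state") != "healthy": alert the on-call engineer
```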
The HX Data Platform, specifically, is an extensible system designed for the hyperconverged environment. Features include native disaster recovery, inline deduplication, on-the-fly compression, cloning and snapshots.
The technology is available across a range of storage media, including spinning HDDs, SSDs and NVMe devices.
*Some of the companies featured in this editorial are commercial partners of Tech Wire Asia