
Dispelling the hyperconvergence myths – the data center evolves
So, why isn’t your data center hyperconverged? Like bachelors about town on a Friday night, are CTOs commitment-phobic and, frankly, scared of getting rinsed? Here are some of the reasons oft-quoted for not taking the HCI plunge:
“I’m just not risking hyperconvergence”
Erring on the side of caution is a commendable character trait for any CTO. Many organizations have hyper-converged a proportion of their compute and storage infrastructure to assess the possibilities, but these deployments are effectively sandboxed.
Full production workloads should not, of course, be used as guinea pigs for new technology, but HCI is now well out of its early adoption phase.
Common-or-garden data center solutions with only a limited degree of convergence aren’t broken, but they do lack the scalability and agility (two watchwords of a strategic IT function’s credo) of HCI. And according to an IDC report, HCI deployments will outnumber non-converged topologies within the next three to five years.
Naturally, the IDC/Forrester/Gartner references so beloved of the marketing machines of many tech companies often have dubious commercial provenance. But on the ground, real-world data center experience is lending credibility to the claims – and the enterprises currently getting onto the bandwagon aren’t necessarily left-field types.
“My workloads will be negatively impacted or just aren’t suitable”
Though well past its nascence, HCI still suffers from a reputation for overheads that impinge on application performance. The accepted wisdom is that hyper-convergence isn’t suitable for tier-one deployments.
The latest generations of HCI offerings (see the featured companies’ profiles below) have lowered overheads to the point that hyper-converged structures not only compete with traditional architectures; in many areas, the underlying schemas and methodologies deliver outright advantages.
De-duplicating and compressing data on its way to being striped, for instance, combined with intelligent cache layers, cuts hardware costs (storage arrays don’t come cheap) while delivering higher IOPS and lower read/write latencies. Cisco’s HyperFlex (see below) is advertised as having an incredibly narrow – and therefore predictable – latency range, at whatever scale.
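To make the idea concrete, here is a minimal, illustrative sketch of an inline de-duplication and compression write path. The 4 KB block size, SHA-256 fingerprinting, zlib compression and in-memory block index are assumptions for the example only, not any vendor’s implementation.

```python
import hashlib
import zlib

BLOCK_SIZE = 4096  # illustrative block size, not any vendor's default


class InlineDedupStore:
    """Toy write path: de-duplicate identical blocks by content hash,
    then compress unique blocks before they reach the storage layer."""

    def __init__(self):
        self.blocks = {}         # content hash -> compressed block
        self.logical_bytes = 0   # bytes the application thinks it wrote
        self.physical_bytes = 0  # bytes actually stored

    def write(self, data: bytes) -> list[str]:
        refs = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            self.logical_bytes += len(block)
            if digest not in self.blocks:      # new block: compress and store it
                compressed = zlib.compress(block)
                self.blocks[digest] = compressed
                self.physical_bytes += len(compressed)
            refs.append(digest)                # a duplicate only adds a reference
        return refs


store = InlineDedupStore()
store.write(b"the same VM image block " * 10_000)
store.write(b"the same VM image block " * 10_000)  # duplicate write costs no capacity
print(f"logical: {store.logical_bytes} B, physical: {store.physical_bytes} B")
```

Duplicate blocks cost nothing beyond a reference, and unique blocks shrink before they hit disk, which is where the hardware savings and the extra IOPS headroom come from.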
HCI is clearly gathering momentum, with significant investment in qualifying specific apps and hardware for hyper-converged deployment.
Virtualized desktops, SAP installs, and containerization all now sit happily in the shiny new HCI data center. Kubernetes support was one of the first boxes that really needed to be ticked by HCI providers; now micro-services are safely bedded in and optimized for software-defined topologies.
This year’s glut in the supply of flash memory has held storage prices down – grist to HCI’s mill – while impressively swift technology in the form of Optane and NVMe nodes is also pushing procurement teams to take HCI seriously as an economically viable option. Performance hikes look great on suppliers’ spec sheets, but for the business, a good ROI via improved business efficiency (and strategic legroom) is what’s required.
“I won’t be able to tick all the right boxes on an RFP”
Beyond the honorable mention for containerization above, some of the latest services and shiny new ways of working are optimized for, and preferably deployed on, HCI.
Organizations deploying machine learning at scale, either as a standalone offering or as an adjunct to existing products (over and beyond the lip-service level of the “driven by AI” footnote to every service under the sun), can now run GPU arrays in hyper-converged networks.
HCI-qualified Kubernetes on NVIDIA GPUs in a software-defined, fully virtualized environment, anyone?
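For the curious, here is a minimal sketch of what that can look like in practice, using the Kubernetes Python client to ask the scheduler for a single NVIDIA GPU. The container image, namespace and pod name are illustrative assumptions, and the cluster is assumed to have the NVIDIA device plugin installed.

```python
# Minimal sketch: schedule a GPU-backed pod via the Kubernetes Python client.
# Assumes a local kubeconfig and the NVIDIA device plugin on the cluster.
from kubernetes import client, config


def launch_gpu_pod():
    config.load_kube_config()  # or config.load_incluster_config() inside a pod

    container = client.V1Container(
        name="training-job",
        image="nvcr.io/nvidia/pytorch:24.01-py3",  # illustrative training image
        command=["python", "train.py"],            # hypothetical entry point
        resources=client.V1ResourceRequirements(
            limits={"nvidia.com/gpu": "1"},        # ask the scheduler for one GPU
        ),
    )
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="hci-gpu-demo"),
        spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)


if __name__ == "__main__":
    launch_gpu_pod()
```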
The ability to scale and deploy at speed has always been HCI’s trump card, and IT departments are increasingly relied upon to be responsive: DevOps teams need to work Agile; data centers need to be agile.
Compliance strictures increasingly dictate policy right across the enterprise, and the onus falls squarely on ICT functions: data is the new currency, and it needs obsessive guarding.
Baked-in encryption (self-encrypting drives, for instance) and native DR mean that whichever acronym soup you happen to swim in (PCI DSS, HIPAA, FISMA, and so on), data is as secure as it can be made.
Out of the box, HCI is more secure than the Iron Mountain truck of yore carrying tape, and beats “traditional” data centers’ security measures in the race for the CTO’s peace of mind.
At Tech Wire Asia we hope that some of the above have begun to allay any concerns you may have had about hyper-convergence in the data center.
If you need more information, or specifics as to how to deploy HCI in your private, public or hybrid infrastructures, please consider one of the following three leaders in the field.
CISCO
Cisco’s HyperFlex platform and other offerings from a broad portfolio (including the company’s Unified Computing System™) form the basis for HCIs capable of running multiple hypervisors (including Hyper-V and vSphere), applications in Kubernetes-managed Docker containers, GPU-powered machine learning compute, and hybrid storage nodes.
Operational simplicity underpins the agility & scalability of Cisco HyperFlex, with control and monitoring via APIs (usable from dozens of industry-standard tools) or via Cisco’s cloud management portal, Intersight. Cisco even offers a “deployment wizard” (with animated Gandalf, we hope?) and pre-packaged OVAs, among other scoping tools.
The HX Data Platform is a log-structured, scale-out file system designed specifically for hyper-converged environments, with several advantages over both its competitors and legacy systems that are reaching end of life (EOL).
Features include built-in replication and native DR, inline de-duplication, always-on compression, thin provisioning, and cloning & snapshots – all available across dynamic storage options: SSDs, HDDs, and NVMe (deployed specifically in the caching layers for maximum effect).
Learn more about Cisco’s hyper-converged offerings here.
NUTANIX
Nutanix’s “One OS, One Click” offering delivers integrated data protection, non-disruptive upgrades, and self-healing systems. This platform, it is claimed, ensures a level of performance predictability that is not achievable amid the vagaries of a traditional data center.
The company accepts that enterprises need to run, at the data center level, multiple applications, each with quite separate demands.
Transactional and analytical workloads have traditionally been best optimized on different stacks. With performance that scales linearly and the ability to change computing capacity at will, the company suggests, bottlenecks should be fewer.
Every Nutanix solution incorporates powerful capabilities such as data integrity checks, tunable redundancy, data path redundancy, and both synchronous and asynchronous replication of application data to other Nutanix systems or public cloud storage facilities.
This disaster recovery provision, and systems like it, should keep system uptime intact, since bringing backup systems (and their data) online becomes a matter for software rather than on-the-fly hardware reconfiguration by expensive staff.
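As a rough illustration of the trade-off between the two replication modes mentioned above, here is a minimal sketch; the classes and the in-memory queue are purely illustrative and bear no relation to Nutanix’s actual implementation.

```python
import queue
import threading


class RemoteSite:
    """Stand-in for a second Nutanix system or public cloud storage target."""
    def __init__(self):
        self.blocks = []

    def store(self, block: bytes):
        self.blocks.append(block)


class SyncReplicator:
    """Synchronous: a write is acknowledged only after the remote copy confirms."""
    def __init__(self, remote: RemoteSite):
        self.remote = remote

    def write(self, block: bytes) -> bool:
        self.remote.store(block)  # blocks until the remote site has the data
        return True               # the ack implies zero data loss (RPO = 0)


class AsyncReplicator:
    """Asynchronous: acknowledge immediately, ship the block to the remote site later."""
    def __init__(self, remote: RemoteSite):
        self.remote = remote
        self.pending = queue.Queue()
        threading.Thread(target=self._drain, daemon=True).start()

    def write(self, block: bytes) -> bool:
        self.pending.put(block)   # ack before replication: lower write latency,
        return True               # but a site failure can lose queued writes

    def _drain(self):
        while True:
            self.remote.store(self.pending.get())
```

Synchronous replication buys a recovery point objective of zero at the cost of write latency; asynchronous replication keeps writes fast but accepts that a site failure may lose whatever is still queued.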
Nutanix’s hypervisor-agnostic solution provides a choice of virtualization environments, including VMware vSphere, Microsoft Windows Server 2012 R2 with Hyper-V, and KVM.
As with most HCI solutions, all data center operations are centralized in a single console, which obviates the need for staff retraining every time a new system is installed and becomes operationally critical.
HPE
HPE’s ability to provide hyper-converged systems pivots on its 2017 acquisition of SimpliVity.
HPE SimpliVity converges the server and primary storage layers together with de-duplication algorithms and appliances, backup solutions, replication, optimization technologies and cloud gateways.
HPE has invested significantly in data efficiency and protection: the HPE OmniStack Accelerator Card handles inline de-duplication, compression and data optimization across the primary and backup layers. Because this processing is offloaded, the VM suffers little performance penalty. HPE reports a median data efficiency of 40:1.
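To put that 40:1 figure into concrete terms, here is a quick back-of-the-envelope calculation; the 200 TB logical footprint is an illustrative assumption, not an HPE figure.

```python
# Back-of-the-envelope sketch: what a claimed 40:1 data efficiency ratio means
# for raw capacity. The 200 TB logical footprint is an illustrative assumption.
LOGICAL_TB = 200          # data as the applications and backups see it
EFFICIENCY_RATIO = 40     # vendor-reported median (logical : physical)

physical_tb = LOGICAL_TB / EFFICIENCY_RATIO
print(f"{LOGICAL_TB} TB logical -> {physical_tb:.0f} TB physical on disk")
# 200 TB logical -> 5 TB physical on disk
```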
HPE’s hyper-converged solutions can be deployed as anything from a single node within a data center right up to a full-scale, organization-wide installation, according to the speed at which CTOs wish to progress.
The controlling element runs as a virtual machine on vSphere ESXi and abstracts data from the underlying hardware. All policy management functions are handled by the VM, and the solution integrates with existing management consoles such as VMware vCenter, among others.
HPE solutions are suitable for mission-critical applications, full data center consolidation exercises, and remote office/branch support, as well as virtual desktop infrastructure (VDI) provision.
HPE solutions can be deployed as a single node within an existing topology and then scaled out as the technology proves itself.
*Some of the companies featured in this article are commercial partners of Tech Wire Asia