The future of network infrastructure in the data center: software
With increasing demands on technology fuelled by more widespread acceptance of digital workflows, the enterprise has to move quickly in order to accommodate change.
Business leaders demand ever more from their IT provision, and in turn, IT departments are moving from a purely technological focus to a more strategic role.
Since the economic crash of 2008, cost-cutting exercises are commonplace, and, although hardware costs are falling, the ramping-up of demand for technical solutions means that savings in IT infrastructure and running costs need to be found.
While it seems odd to refer to systems only a few years old as “legacy”, hardware, software and working methods must change constantly in order to keep up with the relentless pace of change.
Nowhere is technology change more apparent than in the data center – by definition, the data center is where the highest concentration of digital resource is located.
Hyperconvergence is a software-based architecture built on cloud principles and economies. This makes upgradeability, agility, and malleability the heart of the provision.
Hyper-converged infrastructures (HCIs) consolidate computation, data protection, switching, storage and virtualization into a single system.
By abstracting physical realities into data constructs, not only are internal data center (DC) structures made immaterial but, by dint of the same technology, geographic displacement of resources is also negated.
In a solid HCI, a single interface manages all aspects of systems that previously required:
– multiple dashboards and interfaces for specific areas of operation
– multiple skill sets for staff
– multiple suppliers and support networks
– disparate procurement
– variation in protection systems, both in hardware and across multiple software applications
All IT professionals are aware of the positives that virtualization, for instance, can bring: fuller resource use, lower management costs, instantaneous switchover and scaling according to sudden need changes. In HCIs, this ethos is extended right across the entire infrastructure, creating the software-defined data center (SDDC).
SDDCs possess multiple advantages over their predecessors:
– scalability. IT has always been required to enable business expansion. With SDDCs, it can!
– improved utilization. Being built on common components, SDDCs remove the unnecessary demarcation of resources into ‘data islands’
– fewer personnel. Businesses’ biggest costs walk around on two legs. Research by Avaya suggests an efficient SDDC lowers staffing costs from around 40 percent of total costs to around 20 percent.
– lower provisioning time (and cost). The flexibility and agility of an SDDC reduce the time taken to set up and maintain provision for new business units.
Cloud-inspired (and cloud-derived) tech has bled into overall enterprise-level IT; in the vanguard are lessons that data center management can learn from.
Social media and search engine hardware procurement policies, for instance, have shown that maximum efficiency is gained by using ‘commodity hardware’: hardware that is, to a certain extent, expected to fail. By installing an abstract software layer above the hardware, the effects of physical failure are negligible.
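The mechanism by which an abstraction layer makes individual hardware failure negligible can be sketched in a few lines. This is a hypothetical toy model, not any vendor’s implementation: the software layer replicates each block across several cheap nodes, so reads survive node failures.

```python
import random

class CommodityNode:
    """A cheap node that may fail at any time (hypothetical model)."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}
        self.alive = True

class ReplicatedStore:
    """Software layer masking hardware failure: every block is
    written to several nodes, and any surviving replica serves reads."""
    def __init__(self, nodes, replicas=3):
        self.nodes = nodes
        self.replicas = replicas

    def put(self, key, data):
        # Place copies on a random subset of nodes.
        for node in random.sample(self.nodes, self.replicas):
            node.blocks[key] = data

    def get(self, key):
        for node in self.nodes:
            if node.alive and key in node.blocks:
                return node.blocks[key]
        raise KeyError(key)

nodes = [CommodityNode(f"node-{i}") for i in range(5)]
store = ReplicatedStore(nodes)
store.put("vm-image", b"disk contents")

nodes[0].alive = nodes[1].alive = False   # two commodity nodes fail
print(store.get("vm-image"))              # data is still readable
```

With three replicas across five nodes, any two failures still leave at least one live copy – the physical loss is invisible above the abstraction layer.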
Commoditizing hardware also contributes to cheap scalability, both when scaling up (lower unit costs) and when downsizing (less capital budget to write off).
Policy can be abstracted from infrastructure: the old paradigm of IT departments’ internal concerns dictating IT management policy is no longer an issue. With a generalized policy, complete technology refreshes can take place wherever necessary without reconfiguration of overarching policies.
This aspect, in particular, should put an end to “shadow” IT policies arising. Shadow policies are developed ad-hoc when an IT department is unable to provide necessary services (or required services in an acceptable timeframe). Departments then start to create their own IT offshoots in order to circumvent IT departments. On the larger balance sheet, this leads to duplication of costs and inefficiencies.
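The separation of policy from infrastructure can be illustrated with a minimal sketch (the policy keys and backend classes here are hypothetical, purely for illustration): policy lives as plain data, and a technology refresh swaps the backend without touching the policy.

```python
# Policy as data, independent of any particular infrastructure.
POLICY = {
    "backup_interval_minutes": 15,
    "replicas": 3,
    "encrypt_at_rest": True,
}

class LegacyBackend:
    """Stands in for the outgoing hardware stack."""
    def apply(self, policy):
        return f"legacy: replicas={policy['replicas']}"

class HCIBackend:
    """Stands in for the hyperconverged replacement."""
    def apply(self, policy):
        return f"hci: replicas={policy['replicas']}, encrypted={policy['encrypt_at_rest']}"

# A complete technology refresh changes only the backend;
# the overarching policy needs no reconfiguration.
for backend in (LegacyBackend(), HCIBackend()):
    print(backend.apply(POLICY))
```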
Reasons to utilize an HCI:
1. Manage centrally. A single shared resource pool comprising backup, cloud gateway, storage, computation, routing, and security enables IT to aggregate resources as a single federation.
2. Agility. Hyperconverged solutions enable, for instance, consistent data deduplication and operation on reduced data, as opposed to fully expanded information.
3. Focus on software. Hyperconvergence optimizes software-defined data center ideology. By not having to replace infrastructure components, the enterprise gains the advantages of change, in effect, immediately.
4. Scalability and efficiency. The bigger the step size that IT provision requires in terms of infrastructure, the higher the cost. SDDCs can scale quickly with much lower procurement steps.
5. Automation. There is no need to develop multiple automating systems covering provision from different suppliers. One environment encapsulates all scripted processes.
6. Virtualization allows resources to be retasked as required. For instance, in the wake of a security crisis (such as the spread of the recent WannaCry malware), priority can be immediately reassigned to backup and replication, unlike in legacy systems.
7. Data protection. Comprehensive backup and recovery systems underpin efficient protection systems that do not need to “rehydrate” data and re-deduplicate.
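The deduplication and “rehydration” mentioned above can be sketched with a content-addressed store – a simplified illustration, not any vendor’s product: identical blocks are stored physically once and referenced by hash, and the logical stream is rebuilt on demand.

```python
import hashlib

def dedupe(blocks):
    """Content-addressed store: each unique block is kept once,
    keyed by its SHA-256 hash; a manifest records the logical order."""
    store = {}      # hash -> block (physical storage)
    manifest = []   # ordered hashes (logical layout)
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)
        manifest.append(digest)
    return store, manifest

def rehydrate(store, manifest):
    """Rebuild the original logical stream from the deduped store."""
    return b"".join(store[d] for d in manifest)

# Five logical blocks, three of them identical.
blocks = [b"AAAA", b"BBBB", b"AAAA", b"AAAA", b"CCCC"]
store, manifest = dedupe(blocks)

print(len(store))                                   # 3 unique blocks stored
print(rehydrate(store, manifest) == b"".join(blocks))  # True: lossless
```

Here five logical blocks occupy only three physical blocks – the ratio of logical data to physical storage that dedup-aware systems advertise – while the manifest guarantees lossless reconstruction.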
If you are considering a move to cloud-inspired tech in order to achieve an HCI, you will want to consult with one of the following enablers:
Hewlett Packard Enterprise is one of the world’s largest providers of enterprise-level data center infrastructure, with over 80 percent of Fortune 500 manufacturing companies using HPE infrastructure products.
The company’s hyperconverged infrastructure (HCI) offerings enable a level of flexibility in IT infrastructure that is simply not possible with traditional, physical installations. A software basis of provision will appeal to IT departments – IT management is well aware of CAPEX and depreciation costs – but of course, it is the overall benefits to the business that drive demand for cutting-edge data center tech like HCI.
With seamless provision of flexible IT according to changing business needs as the ultimate goal, the benefits of a hyperconverged infrastructure lead to a paradigm that delivers the business-oriented end-goal of IT as a service (ITaaS).
When the enterprise is able to adopt an ITaaS model, IT resources can be quickly provisioned for any type of workload, while maintaining the management and control needed across the entire infrastructure. This capability will allow the business to dictate change as it requires, and have IT respond as a strategic player.
In purely IT management terms, HPE’s hyperconvergence technology improves recovery point objectives (RPOs) as well as recovery time objectives (RTOs), reducing backup and recovery times to seconds and vastly improving the ratio of logical data to physical storage.
Oracle’s solutions for the data center are intended to enable business innovation on the part of their clients.
Oracle’s data center solutions can, of course, encompass a range of hardware and software vendors, but perhaps unsurprisingly, their servers, storage, and engineered systems are optimized to perform at their best when running Oracle software.
Because of its economies of scale, Oracle has various pre-tested implementations that have been evaluated and passed as enterprise-ready by internal testers; these results have, in time, been endorsed by some major clients.
By providing the option to pursue a pre-built migration route, Oracle claims to be able to lower integration and management costs of all sizes of SDDC roll-out, from a single instance to a multi-continent topology.
While cloud computing is seen as the way in which data will most commonly be stored and manipulated in the medium to long term, the cloud’s core architecture and the strategies used to manage it can be duplicated to create an Oracle-based HCI.
This provision of a ‘private cloud’ allows IT teams confident of their cloud management capabilities to transition to the tech that holds the SDDC together, enabling the speed of reaction required to cope with strategic, business-led demands.
Before the phrase ‘hyperconverged’ was ever (allegedly) coined in 2012, Pivot3 was offering solutions that were effectively HCI.
The company’s strength emanates from its use of NVMe PCIe flash to deliver better performance than conventional HCI solutions. As a result, response times are shorter, and the company claims higher density, running more VMs per HCI node.
Pivot3’s expertise has traditionally been utilized by media and physical security companies, whose requirements include storage of large amounts of data – typically video.
However, the company has made strides in attracting other verticals in recent years, and surveillance and media companies now make up a minority of its client base.
Pivot3’s management engine, now in its fifth generation, is policy-based: different applications can be assigned priority, and the solution responds according to a given strategy rather than along purely granular, technically driven lines.
This allows its enterprise clients to make generous savings on their running costs, and the use of commodity hardware should also lower CAPEX in the long term.
The company numbers over 2,400 customers in 54 countries and 18,000 hyperconverged installations in disparate industries such as healthcare, government, transportation, security, entertainment, education, gaming, and retail.