
As data center management looks up, what’s on the horizon? Data centers in 2019 and beyond

With a couple of notable exceptions (like big data processing using artificial intelligence algorithms), increases in computing power and improvements in storage technology have not necessarily meant that humans can work any faster or harder. Instead, that technology now shortens time-to-value for new apps and services, and managing the infrastructure behind that provision has become easier.

In data centers, software-defined approaches like convergence, hyperconvergence, and service abstraction are becoming mainstream.

Converge and utilize the idle

Perhaps the earliest example of software abstraction was the virtual server. Virtual machines now predominate in local, remote cloud and edge data centers/microcenters, and their presence has led to an uptick in utilization metrics. In the next few years, hyperconvergent solutions should do more to push those figures upwards, although — according to the Data Center 2025: Exploring the Possibilities survey carried out by Vertiv — data center managers’ perceptions of utilization percentages (20 to 35 percent) are probably still optimistic.
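
Utilization in this sense is simply the ratio of capacity in use to capacity available. As a rough illustration, here is a minimal sketch of that calculation; the host names and core counts are entirely hypothetical.

    # Minimal sketch: fleet-wide compute utilization from per-host figures.
    # Host names and core counts are hypothetical, purely for illustration.
    hosts = [
        {"name": "host-01", "cores_used": 12, "cores_total": 64},
        {"name": "host-02", "cores_used": 20, "cores_total": 64},
        {"name": "host-03", "cores_used": 9,  "cores_total": 48},
    ]

    used = sum(h["cores_used"] for h in hosts)
    total = sum(h["cores_total"] for h in hosts)
    print(f"Fleet utilization: {100 * used / total:.1f}%")  # 23.3% for these figures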

Over the next few years, the industry is undoubtedly looking to more efficient abstraction solutions to make better use of resources. That will significantly lower the cost of service provision and, in combination with next-gen cooling methods (like water-cooled cabinet backs) and renewable energy, should lower the industry’s carbon footprint. As well as satisfying data center owners’ responsibility to keep environmental impact low (and keep the CSR teams happy), there’s also the possibility of new markets in capacity trading.

Likely dependent on advanced hyperconvergent solutions, companies should be able to buy and sell unused capacity in a price-driven market. That will, of course, create another revenue stream for the enterprise, but will also enable even modest players to provide the type of demand burst capabilities that today’s businesses are increasingly requiring. Exciting times, indeed.

Keeping it local

Another example of the rising standards of computing, storage, and networking is the industrial internet of things (IIoT). Cheaper technology means smart, interconnected devices are proliferating, running manufacturing plants, smoothing supply chains, and keeping our cities safer and cleaner.

Naturally enough, each device produces data and needs to move it somewhere, so edge-based data centers are increasingly commonplace. More IIoT deployments mean more edge sites, of course, and these are often preferred to a centralized data topology for several reasons:

  • Latency is lower between data capture and presentation, which is critical for human-centric applications (like in medicine or health & safety-focused settings) as well as machine-to-machine deployments.
  • Data-intensive work, typically on the big data pools produced even by modest IoT installations, is best carried out locally. In short, local processing is cheaper and more effective than backhauling everything to the central data center (see the back-of-the-envelope sketch after this list).
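
To make that bandwidth argument concrete, here is a back-of-the-envelope sketch. Every figure in it (device count, sample size, transfer price, and the 1 percent aggregation ratio) is hypothetical, chosen only to show the shape of the calculation, not to benchmark any real deployment.

    # Hypothetical edge-versus-central comparison: ship every raw reading to the
    # core, or process locally and forward only small aggregates.
    DEVICES = 10_000                # IIoT sensors at one site
    SAMPLES_PER_SEC = 10            # readings per device per second
    BYTES_PER_SAMPLE = 200          # raw payload size
    PRICE_PER_GB = 0.05             # assumed WAN transfer cost, USD
    GB = 1_000_000_000

    raw_gb_per_day = DEVICES * SAMPLES_PER_SEC * BYTES_PER_SAMPLE * 86_400 / GB
    local_gb_per_day = raw_gb_per_day * 0.01   # assume ~1% survives local aggregation

    print(f"Ship everything to the core: {raw_gb_per_day:,.0f} GB/day, "
          f"${raw_gb_per_day * PRICE_PER_GB:,.2f}/day")
    print(f"Process at the edge:         {local_gb_per_day:,.2f} GB/day, "
          f"${local_gb_per_day * PRICE_PER_GB:,.2f}/day")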

Cisco has coined the phrase “fog” computing, which refers to multiple edge deployments and micro data centers, joined and resourced by hyperconvergent software — the control plane may be centralized, but the data plane is distributed.
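
As a conceptual sketch of that split, a centralized control plane might push one desired state to every site while each site keeps its raw data and processing local. The site names, policy shape, and aggregation logic below are hypothetical illustrations, not any vendor’s API.

    # Hypothetical fog-style layout: one central control plane, distributed data planes.
    CENTRAL_POLICY = {"retention_hours": 24, "forward_aggregates": True}

    class EdgeSite:
        def __init__(self, name):
            self.name = name
            self.policy = {}

        def pull_policy(self, control_plane):
            # Control plane is centralized: every site copies the same desired state.
            self.policy = dict(control_plane)

        def process(self, readings):
            # Data plane is distributed: raw readings stay on site; only a
            # small aggregate would be forwarded upstream.
            return sum(readings) / len(readings)

    for site in (EdgeSite("factory-a"), EdgeSite("port-b")):
        site.pull_policy(CENTRAL_POLICY)
        print(site.name, "average reading:", site.process([21.0, 22.5, 19.8]))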

Naturally, today’s data center providers and vendors need to be exploring the possibilities in edge-based services, hardware, and infrastructure. As the industry that powers the cloud and, by extension, the apps and services that run most businesses, data center managers need to look to diversify using the skills and resources they already have at hand.

Getting data-critical

With increased numbers of data lakes (inevitable in the emerging IIoT-heavy world) come increased concerns about data security. Naturally, governments tend to lag several years behind the curve in terms of governance regulation, but the weight of legislation is growing, and the consequences of data loss range from serious to catastrophic regardless.

With 5G supporting more IIoT rollouts, creating effective, connected edge deployments will become much more manageable. That means more information will need to be gathered and distributed, and, in turn, data center managers need to consider data security for live and archived data alike, on a massive scale.

Furthermore, because almost every business is increasingly reliant on technology, the risks and implications of downtime increase significantly. New generations of cloud-focused data security measures are coming on stream, as are AI-powered network behavior monitors, capable of flagging anomalous behaviors anywhere in the data center or edge installation.
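
The simplest form of such behavioral flagging is a statistical baseline over a traffic counter. The sketch below is a minimal, hypothetical illustration (the window size, threshold, and sample feed are invented), not a description of any vendor’s product.

    # Minimal behavioral-flagging sketch: rolling z-score over a traffic counter.
    from collections import deque
    from statistics import mean, stdev

    WINDOW, THRESHOLD = 30, 3.0          # 30 recent samples; flag beyond 3 sigma
    history = deque(maxlen=WINDOW)

    def check(sample_mbps):
        """Return True if the new reading looks anomalous against recent history."""
        anomalous = False
        if len(history) == WINDOW:
            mu, sigma = mean(history), stdev(history)
            anomalous = sigma > 0 and abs(sample_mbps - mu) / sigma > THRESHOLD
        history.append(sample_mbps)
        return anomalous

    # Hypothetical feed: steady traffic, then a sudden spike.
    readings = [100 + (i % 5) for i in range(40)] + [900]
    print("Spike flagged:", [check(r) for r in readings][-1])   # True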

There’s also a rise in the number of data fortresses, as demand for security grows. Fortresses are typically hardened repositories for archived information: data that needs preserving (often for legislative reasons) but where access speeds and latencies are not critical issues.

Data fortresses can become part of data center offerings, thereby creating a new revenue stream alongside new and existing services, like business continuity services, disaster recovery, and contingency measure planning and consultation.

Here at Tech Wire Asia, we’re considering three providers of the next-generation data center services that will give 21st-century managers the scope and resources to explore these and other possibilities as the world transforms digitally. Data centers need to move beyond discrete silos of information and diversify in both topological and business terms, and any one of the following can provide the tools for that change.

VERTIV

As one of the world’s largest suppliers of data center infrastructure, with consulting prowess to match, US-based Vertiv certainly has the product line and expertise to create the types of solutions data center management is looking for. Aware that edge-based computing is being used by forward-looking organizations to respond better to new technologies such as IoT, AI, and big data processing, Vertiv has products specifically designed to deliver a rapid return on investment and start creating value for customers in record time.

At the higher end, the company has prefabricated data centers ready to ship to and install in remote locations, even those in extreme heat or cold. Entire independent buildings can be assembled, complete with full infrastructure, including power, heat management, fire control, physical security, and connectivity.

Scale that type of offering down, and the company’s SmartRow range offers similar plug-and-play usability, perfect for deployment in smaller server rooms (or network cupboards!) without any disruption to the business.

Further up the range, there’s a series of SmartCabinets and SmartAisles, which are standalone units complete with power connections, UPS, thermal management, power distribution, monitoring, and security. You can read more about these products here.

CISCO

Cisco’s solutions for the data center are intended to enable business innovation on the part of its clients. They can, of course, encompass a range of software, but a great deal of the underlying infrastructure is agnostic, running on x86 architecture, and is therefore a malleable resource.

The company’s UCS server, ten years old this year, revolutionized the data center and is widely seen as the first device to allow large-scale convergence of systems. From a single hardware unit, scalable to many thousands of units, the solution abstracted compute and networking to provide an elastic, configurable basis for services that reflected businesses’ growing demands.

Cisco’s overarching hyperconvergent software stack enables what it has termed fog computing, where multiple smaller data centers, typically located at the edge, combine with larger, more centralized resources.

Administrators can control and maintain the stack from anywhere (given the correct privileges), and because of the solution’s abstracted layers, resources can be set up and running according to demand, rather than as dictated (or rather, limited) by network infrastructure.

You can read more about Cisco’s offerings here.


VMWARE

As software abstraction continues to power many of the online services businesses rely on every day, VMware is well-positioned to advise its customers on overall strategy: after all, the company’s technology underpins much of the cloud’s infrastructure.

Based on vSphere and the vRealize Suite, VMware’s vCloud Suite gives data centers the capability to abstract their services out to remote locations such as edge units, remote nodes, and so-called “micro-centers”. That creates the sort of infrastructure that can react quickly and scale easily, whether for burst management, to cater for company growth, or to cope with a takeover or merger.

There are cloud orchestration tools for data center development teams and operations staff, right up to sizeable DevOps scale, where bespoke new services are created for the provider’s customers. There is also a host of monitoring and metering tools to ensure that the right resources are allocated where they’re needed. Like all good infrastructure tools based on abstraction layers, these can be rules-based (and therefore automatable), or new facilities and services can be spun up manually as the data center service provider creates new offerings to broaden its appeal and sharpen its focus.
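
To illustrate what “rules-based and therefore automatable” can mean in practice, here is a minimal sketch in which metering data drives a scaling decision. The thresholds, cluster names, and the scaling action are hypothetical placeholders and do not represent VMware’s APIs.

    # Hypothetical rules-based scaling: metering data in, desired node count out.
    SCALE_UP_CPU, SCALE_DOWN_CPU = 80.0, 25.0    # percent; illustrative thresholds

    def decide(cpu_pct, current_nodes):
        """Return the desired node count for a cluster under a simple rule set."""
        if cpu_pct > SCALE_UP_CPU:
            return current_nodes + 1             # burst demand: add capacity
        if cpu_pct < SCALE_DOWN_CPU and current_nodes > 1:
            return current_nodes - 1             # idle capacity: hand it back
        return current_nodes                     # within the comfort band: no change

    # Hypothetical metering snapshot for two clusters.
    for cluster, cpu, nodes in [("edge-site-a", 91.5, 4), ("core-pod-1", 18.0, 10)]:
        print(f"{cluster}: {nodes} -> {decide(cpu, nodes)} nodes")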

You can read more about VMware’s offerings on our site here.

*Some of the companies featured in this editorial are commercial partners of TechWireAsia