Nasuni: The time to retire the on-premise NAS has arrived
Technology professionals need no lectures on the ever-increasing volumes of data collected and shared by the enterprise. For several decades, they have been tasked with provisioning, storing, maintaining, and protecting the file data that serves the organisation’s users and remote locations. Annual growth rates for unstructured file data now exceed 30% in most cases.
Complexity develops over time, with the data centre ecosystems of many businesses now comprising a mixture of on-premise NAS devices, backup software and disaster recovery sites, SAN in local data centres, remote proprietary object storage (Dropbox, Google Drive et al.), public cloud buckets, MPLS links and Riverbed WAN accelerators, and probably more than a few shared partitions on client drives.
IT teams constantly play catch-up to make sure there is adequate backup and disaster recovery in place and do their best to protect data at rest and in transit. Today there is the added challenge of preparing for a ransomware attack and ensuring files can be recovered quickly should an attack occur. We’d expect to see local agents scanning for malware, perimeter defences and various WAF and CASB systems for the remote stores. In short, storage supervisors play a daily game of “hunt the data” in the hope of shoring up provisioning according to the business’s needs.
For users, applications, and services, the situation is far from ideal, creating significant obstructions to the business’s growth and success. Local datastores are not readily available outside the LAN, and multiple remote storage services represent silos, too. Typically, accessing information held at one site from another, whether cloud or on-premise, is impossible, slow, risky in security terms, or leads to duplication. It’s expensive too: CAPEX on hardware, OPEX on staffing and maintenance.
A company helping enterprises solve these file storage issues is Nasuni (the name stems from NAS, unified). Its UniFS file system, sitting on an object storage layer, abstracts data access for all users and locations, placing all resources in public or private clouds and using thin appliances at remote locations to cache and expedite reads and writes of hot local data. It also automatically backs up all data and provisions disaster recovery on the cloud of your choice.
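The pattern described above can be sketched in miniature: a thin edge appliance keeps hot files in a local cache and writes everything through to the object store, which stays the authoritative copy. This is purely an illustrative toy in Python, assuming nothing about Nasuni’s actual implementation; all class and method names here are hypothetical.

```python
class ObjectStore:
    """Stand-in for a cloud object store (e.g. an S3-style bucket)."""
    def __init__(self):
        self._blobs = {}

    def put(self, key, data):
        self._blobs[key] = data

    def get(self, key):
        return self._blobs[key]


class EdgeCache:
    """Hypothetical thin appliance: serves hot files locally,
    writes through to the object store."""
    def __init__(self, store, capacity=2):
        self.store = store
        self.capacity = capacity
        self._hot = {}  # key -> data; a crude stand-in for an LRU cache

    def read(self, key):
        if key not in self._hot:          # cache miss: fetch from the cloud
            self._evict_if_full()
            self._hot[key] = self.store.get(key)
        return self._hot[key]             # cache hit: served at LAN speed

    def write(self, key, data):
        self._evict_if_full()
        self._hot[key] = data             # subsequent local reads are fast
        self.store.put(key, data)         # write through: cloud stays authoritative

    def _evict_if_full(self):
        if len(self._hot) >= self.capacity:
            self._hot.pop(next(iter(self._hot)))  # evict the oldest entry
```

The design choice the sketch highlights is that the cloud copy is never out of date, so the local appliance can stay small and stateless: losing it loses nothing.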
We spoke to Craig Stockdale, the company’s Managing Director for the APJ region, about how Nasuni helps companies increase productivity and lower costs by an average of 60%, while also keeping data safer and more secure. On that latter point, he told us about on-the-fly data replication and, therefore, the capability for immediate rollback. If ransomware struck at 09:00, Nasuni users could restore as of 08:59: “I can restore that folder, or I can restore the entire volume, right at that point in time [to go] back to operational. Continuous file versioning with fast restore is a patented part of the Nasuni solution. And what that also means is that you have built-in disaster recovery, and built-in ransomware recovery.”
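The “restore as of 08:59” idea above amounts to keeping a timestamped series of snapshots and answering point-in-time queries against it. The following Python sketch shows the concept only, under the assumption of simple per-file snapshots; the `VersionedFile` class is hypothetical and is not Nasuni’s patented mechanism.

```python
import bisect

class VersionedFile:
    """Toy continuous-versioning model: every save is a timestamped
    snapshot, and restores roll back to any earlier point in time."""
    def __init__(self):
        self._times = []     # snapshot timestamps, kept in ascending order
        self._versions = []  # file contents at each timestamp

    def save(self, timestamp, data):
        self._times.append(timestamp)
        self._versions.append(data)

    def restore_as_of(self, timestamp):
        """Return the latest snapshot taken at or before `timestamp`."""
        i = bisect.bisect_right(self._times, timestamp) - 1
        if i < 0:
            raise KeyError("no snapshot at or before that time")
        return self._versions[i]
```

In this model, a file encrypted by ransomware at 09:00 is just one more snapshot; asking for the state as of 08:59 returns the last clean version, which is the essence of rollback-based ransomware recovery.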
Along with simplified file restore and disaster recovery, the Nasuni management console simplifies the maintenance of what were very complex storage topologies. Craig told us this helps IT teams simplify what they do and therefore lowers running costs. “If you’re trying to remotely administer all those NAS devices, that’s quite an arduous task, where you have multiple single panes of glass, so to speak. There’s a lot of time, effort and energy [spent] to try and manage all of that.”
Anyone involved professionally with data provisioning in a modern enterprise knows that demands for storage are always going to grow – never diminish. The obvious benefit of the cloud is, therefore, practically infinite storage capacity. But why not just go out and blow the kids’ inheritance on a giant AWS S3 bucket, for instance?
That’s just copying the same problem from on-premise to the cloud, Craig said: “In a lot of cases, companies have 600 terabytes, to pick a number, of on-premise primary file storage. Then they have a secondary data centre with another 600 terabytes for backup and disaster recovery purposes. This means they are paying for their capacity twice in order to have data protection and availability. With Nasuni, companies pay for 600TB once, not twice, plus get continuous reliability and rollback to any snapshot, anywhere, any time.” With Nasuni, you can consolidate all file storage silos in the cloud, replacing traditional on-premise primary storage.

Craig told us the company’s offering is cloud vendor-agnostic, for those worried about proprietary platform lock-in. The data resources remain in the client’s own tenancy, ready to be moved to a different cloud, or to a multi-cloud architecture, should the client want it. The option is there, he added, though few use it.

Pushed for time, we asked Craig for his elevator pitch for the Nasuni solution to those twin problems of a growing data repository and the increasing complexity of its structure.
“What we do is provide users a globally distributed file system that enables remote access and collaboration to all of your critical files, your assets, with remote administration, and allow administrators to manage the file infrastructure from anywhere, at any time.”
At Tech Wire Asia, we usually find technology that’s either powerful and intriguing or highly practical and business-focused, but rarely both. The Nasuni platform happens to live comfortably in both camps. And like all good ideas, it’s actually very simple: object storage put to practical use for end-users and the systems on which they rely.