This blog post is an excerpt from GovLoop's recent research brief, Simplicity, Scalability, Efficiency: The Hyper-Converged Infrastructure. To read the full brief, head here.
Simplicity and Ease of Operation
As explained earlier, hyper-converged infrastructures are modular systems intended to scale out by adding modules. Their simplicity comes from the fact that users can take advantage of improvements delivered in the storage controller software layer, rather than in specialized hardware.
“The number one benefit behind a hyper-converged infrastructure is the simplicity of deployment and management of your IT environment,” said Grant Challenger, Software Defined Storage Specialist at VMware. “It’s all converged, it’s all being consumed. Storage, network and compute are in one environment with one pane of glass, so you have an extremely high degree of simplicity for deployment and for the end user.”
And as most of the public sector would agree, simplifying government operations is always a good thing.
Reduced Staff Needs
As Challenger explained, hyper-converged systems are managed via a single pane of glass. That means instead of having a set of applications and one team to manage your storage array, another team to manage virtualization, and another team to manage the server hardware, one team — or in some cases, one person — can manage the complete hyper-converged stack.
“To get to thousands of virtual machines,” Challenger said, “you can literally plug in [VMware] EVO:RAIL or EVO:RACK without the need for storage personnel or without the need for networking folks and have a very large set of virtualization infrastructure, storage, network and compute. So overall, as I see this adoption towards hyper-convergence…growing, resources will either get leveled up or moved into different roles.”
In this manner, dozens of employees who previously administered the system can be redeployed across the organization to work on other needs and innovative solutions.
To illustrate the cost savings of a hyper-converged infrastructure, Challenger offered a hypothetical end-user scenario.
“Vendors are virtualizing networks, they’re virtualizing storage, but they’re doing this based on demand, and they’re doing it on the availability of power,” he said. “Meaning, there was a time when with an x86 server, even if you virtualized it, you weren’t going to be able to additionally handle other services and features. Now, we’re extrapolating capabilities out of proprietary hardware and software, running on a storage array or a network switch, and we’re putting them into the x86 servers. Those x86 servers now have the power to deliver on all those capabilities at the software layer in the hypervisor, where five years ago they did not.”
This allows the end user to realistically take advantage of that power, and with that power comes significant economies of scale: commodity hardware can run anything in the data center without being augmented by proprietary hardware designs and software, thereby increasing cost savings.
VMware is changing that economic model because it doesn’t charge for capacity, Challenger said.
“If you buy software-defined storage from us, if you build your servers with 1-terabyte drives today, and a year later you like the 10-terabyte drives because the performance is there and you’re comfortable with it, you can get 10 times the storage you originally purchased without spending additional funds,” he said. “So the economic model is different at that layer, and it’s also different at the commodity hardware layer. You don’t have to buy the fibre channel, you don’t have to buy a dedicated [storage-area network] switching environment for networking, you don’t have to buy a proprietary array. You just buy the white box.”
“We don’t charge for capacity; we license the storage on a per CPU basis. With any other storage company out there, you will have to buy capacity,” he added.
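The difference between the two licensing models can be made concrete with a little arithmetic. The sketch below is purely illustrative: the node counts, drive sizes, and prices are invented assumptions, not VMware's actual pricing. It shows why a per-CPU license stays flat when drives are swapped for larger ones, while a per-capacity license scales with every terabyte added.

```python
# Hypothetical comparison of the two storage licensing models described
# above. All prices, node counts, and capacities are invented for
# illustration only; they are not real vendor figures.

def capacity_license_cost(total_tb: float, price_per_tb: float) -> float:
    """Per-capacity model: cost grows with every terabyte purchased."""
    return total_tb * price_per_tb

def per_cpu_license_cost(cpu_sockets: int, price_per_cpu: float) -> float:
    """Per-CPU model: cost is set by socket count, regardless of capacity."""
    return cpu_sockets * price_per_cpu

# A small cluster: 4 nodes with 2 CPU sockets each, 8 drive bays total.
sockets = 4 * 2
year1_tb = 8 * 1    # eight 1 TB drives at initial purchase
year2_tb = 8 * 10   # same bays refitted with 10 TB drives a year later

cap_y1 = capacity_license_cost(year1_tb, price_per_tb=500)
cap_y2 = capacity_license_cost(year2_tb, price_per_tb=500)
cpu_y1 = per_cpu_license_cost(sockets, price_per_cpu=2000)
cpu_y2 = per_cpu_license_cost(sockets, price_per_cpu=2000)  # same sockets, same cost

print(f"Per-capacity model: ${cap_y1:,.0f} -> ${cap_y2:,.0f} after the 10x drive swap")
print(f"Per-CPU model:      ${cpu_y1:,.0f} -> ${cpu_y2:,.0f} (no new license spend)")
```

Under these made-up numbers, the per-capacity license bill grows tenfold along with the drives, while the per-CPU bill is unchanged because the socket count never moved.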
Download the full brief, Simplicity, Scalability, Efficiency: The Hyper-Converged Infrastructure.