Hyper-converged, Converged or Build your own Infrastructure?

I get asked a lot about different hardware deployment models for cloud. Hyper-converged is the new buzzword for hardware today; previously it was converged stacks. It's worth thinking about all of the options available and the changes you would need to make internally to your processes, platform and applications to make best use of these hardware platforms.

I’m going to give a brief overview of some datacenter hardware possibilities and the benefits of each.

Multi-Vendor (Build your own)

This is the traditional way of building a datacenter. First you make a list of things you need to run your applications: compute, network, storage, hypervisor. Then you speak to a compute vendor about their compute, a storage vendor about their storage, and so on.

You then decide on the features that are useful to you and inevitably prioritize what you buy based on budget.

The benefits to this are that you instantly have a multi-vendor strategy. If for some reason you don't want to do business with your compute vendor, you can change it without having to change storage, network, etc. It also gives you the benefit of choosing which specific hardware features are going to give you the best return on your investment.

The drawbacks are mostly operational. You need to create a design for how all your Lego bricks fit together, get someone to assemble them, then test them, and so on.

Examples: Cisco/Juniper/HP/Dell/EMC/NetApp etc.

Converged Stack

The converged architecture used to be the new guy, but it has now proved itself as an efficient way to provide datacenter solutions. The premise is that someone else has already designed how your Lego bricks go together, built it and tested it. It's a standardized solution which has been proven and benchmarked. It still contains all of the traditional elements (compute, network, storage, hypervisor). All that's left for you to do is take delivery, unwrap your rack and plug it in.

Examples: VCE vBlock, NetApp FlexPod, Dell vStart, HP ConvergedSystem, IBM VersaStack

Hyper-converged Architecture (HCA)

Hyper-convergence is an architecture which takes the converged stack one step further. If you take the benefits of a converged stack and amplify them, you're close to describing it. This time, though, the compute, storage, storage network, hypervisor and automation platform are all in one box. This provides a simple, flat scale-out architecture with a heavy emphasis on automation and virtualization.

HCA isn't really even about the hardware. It's all provided in a single server, so in theory any server can become a hyper-converged solution by running the right software on it (although vendors typically have guidelines on what's required, e.g. SSD storage). It's all about the virtualization layer and the automation around deployment and resilience. A very typical use-case for hyper-convergence today is VDI, as this benefits most from scale-out performance.

Benefits include:

  • Software-defined storage embedded – no separate SAN to think about.
  • Lower operational cost – fewer separate elements to manage.
  • Easier support and troubleshooting with a single vendor.
  • Elasticity – add and remove appliances and they are automatically configured to be part of the pool.
  • Scale-out model which enables fast growth and performance – the more appliances you add, the more performance you get (see the sketch below).
  • Workload mobility.
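
To make the elasticity and scale-out points a little more concrete, here is a minimal sketch of the idea. The `AppliancePool` class and its names are hypothetical and not any vendor's API; the point is simply that each appliance that joins the pool automatically adds its compute, storage and IOPS to the aggregate.

```python
# Hypothetical sketch of a scale-out appliance pool (not a real vendor API).
# Each appliance contributes compute and storage; the pool's capacity and
# performance grow as nodes are added and shrink as they are removed.

from dataclasses import dataclass


@dataclass
class Appliance:
    name: str
    cpu_cores: int
    storage_tb: float
    iops: int


class AppliancePool:
    def __init__(self):
        self.nodes = []  # list of Appliance objects currently in the pool

    def add(self, node):
        """Adding a node automatically makes its resources part of the pool."""
        self.nodes.append(node)

    def remove(self, name):
        """Removing a node shrinks the pool; its data would be rebalanced elsewhere."""
        self.nodes = [n for n in self.nodes if n.name != name]

    @property
    def capacity(self):
        """Aggregate capacity and performance across all appliances."""
        return {
            "cpu_cores": sum(n.cpu_cores for n in self.nodes),
            "storage_tb": sum(n.storage_tb for n in self.nodes),
            "iops": sum(n.iops for n in self.nodes),
        }


pool = AppliancePool()
pool.add(Appliance("node-01", cpu_cores=32, storage_tb=10, iops=80_000))
pool.add(Appliance("node-02", cpu_cores=32, storage_tb=10, iops=80_000))
print(pool.capacity)  # every appliance added increases the aggregate numbers
```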

Appliance Examples: EMC vSPEX Blue, SimpliVity

Software Examples: VMware EVO:RAIL, Nutanix.

 

NB: There is sometimes confusion between Software-defined Storage (SDS) and hyper-convergence. Hyper-convergence is an architecture that uses SDS as a technology to provide scale-out storage. SDS is all about bringing the storage closer to the application; it provides a virtualization layer for storage that allows you to control it as software.

White-Box/Commodity

White-box architectures are becoming more and more interesting and pervasive. The idea of a white-box architecture is to buy the cheapest hardware with no care for features and little care for resilience. In a Hardware Defined Datacenter (HDDC) this is unthinkable. What if a server crashes that's running my business-critical application? In the Software Defined Datacenter (SDDC) this becomes less of an issue, as everything is abstracted from the features within the hardware and resiliency is built in to the platform. With Cloud-Native and Web-Scale applications, you take this a step further by expecting your servers to fail. In this world you have applications built up of microservices distributed across your datacenter, so if a server crashes, that service is already running somewhere else and nobody notices an outage. Your monitoring tools tell you that your server is down, so you plug in a new one and that's it.
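
To illustrate that "expect your servers to fail" model, here is a rough sketch of a cattle-style control loop. The node and service names are hypothetical, and this is not any real monitoring tool: the point is that replicas on healthy nodes keep serving, and a failed node is simply flagged for replacement rather than repaired.

```python
# Hypothetical sketch: a "cattle-style" control loop for commodity servers.
# A service runs as several replicas spread across nodes; when a node dies,
# the replicas elsewhere keep serving and the dead node is flagged for swap-out.

import random

# five identical, disposable nodes
nodes = {f"node-{i:02d}": "healthy" for i in range(1, 6)}

# each service keeps replicas on more than one node
service_replicas = {"checkout-svc": ["node-01", "node-03", "node-05"]}


def node_failed(name):
    """Simulate a server crash."""
    nodes[name] = "failed"


def serving_nodes(service):
    """Replicas on healthy nodes keep serving; a single failure is invisible to users."""
    return [n for n in service_replicas[service] if nodes[n] == "healthy"]


def reconcile():
    """Flag failed nodes for physical replacement instead of nursing them back."""
    for name, state in nodes.items():
        if state == "failed":
            print(f"{name} is down -> ticket: unplug it and rack a new one")


node_failed(random.choice(service_replicas["checkout-svc"]))
print("still serving from:", serving_nodes("checkout-svc"))
reconcile()
```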

An analogy for using white-box/commodity servers is "Pets and Cattle" (not mine, but I like it).

Pets                                                    | Cattle
You give your pets names like: rover.dog.01             | You give your cattle names like: azfffhrh1134.ffs
They all have unique characteristics and are cared for  | They are identical
If they get sick, you nurse them back to health         | If one gets sick, you shoot it and replace it with another one

With white-box solutions it all really depends on how you design your cloud platform at the software level. If you have a fixed workload with a "traditional" style app (think Oracle), then you need to think about the cost of that server failing from an operational point of view, along with the cost of the server replacement itself. If your applications are more elastic and aren't affected by the downtime, and your cloud platform can automatically (or at least quickly) recognise new hardware and configure it seamlessly, then you can start to care less about the infrastructure itself and move to a commodity/white-box solution.
