Hyperconvergence

Rise of the Super Box

Complexity Creates Costs and Inhibits Innovation

While virtualisation as a technology simplified the running of server workloads some time ago, the complexity of the underlying infrastructure supporting the hypervisor has remained substantial. Leveraging discrete server, storage, backup and replication solutions does have its benefits for many enterprises, particularly those with low latency application requirements. However, this complexity comes at a cost, both direct and indirect. To compete in today's environment, companies must remain agile and continually innovate, yet such complexity often stifles agility as infrastructure delivery becomes cumbersome and time consuming. Innovation is restricted as an unfortunate result.

The advent of hyperconvergence and the single 'superbox' solution has made CIOs rethink their long-term infrastructure strategies. According to Gartner, Hyperconverged Integrated Systems (HCIS) will comprise infrastructures in 25% of all large enterprises by the year 2018.

An evolution of converged infrastructure systems, hyperconvergence combines the hypervisor, compute, storage, network switching, backup, replication and real-time deduplication into a single-box solution. Many also include ancillary services like cloud gateways, caching and WAN optimisation, minimising the need for additional software common across enterprises. The hyperconvergence proposition for companies is extremely logical: replace your disparate infrastructure components from multiple vendors with a single-unit 'turnkey' solution, one which is fully integrated, commoditised and preconfigured with the hypervisor of your choice.

Enterprise scale is achieved simply by adding enough hyperconverged units to support the requirements of the desired virtual workloads. Complexity is reduced, agility is increased, and performance and stability are maintained. At least, that is the theory.

Public cloud architectures have gained significant attention in recent years, but hyperconvergence is quickly becoming a viable alternative for those seeking the on-demand scalability of a public cloud without the trade-offs associated with moving systems and data offsite. For those looking to replace their traditional, physical desktop estates with a Virtual Desktop Infrastructure (VDI) solution, hyperconvergence also represents a potential high workload density option, fitting more virtual desktops on less hardware than the ever-present blade/SAN storage combination found throughout most enterprise data centres.

Hyperconverged Integrated Systems (HCIS) will deliver bimodal infrastructures in 25% of all large enterprises by the year 2018.

Complexity Simplified

A Turnkey Infrastructure Approach

One Step Closer to Plug and Play Infrastructure

Hyperconvergence is a relatively new technology and enterprise adoption is still very much in the early stages, but its potential to transform the infrastructure landscape is quite promising. The obvious benefit of leveraging hyperconverged, single-box solutions is the removal of multi-vendor complexity. All of the internal hardware and software components are architected by a single vendor to work harmoniously together, requiring no additional integration. Architectural issues are therefore less likely to arise, and any that do should be resolved much more quickly than in a multi-vendor environment.

Within a hyperconverged system the components themselves are commoditised and consistent, ensuring stability and maximising supportability. Resource allocation is managed by the HCIS itself, based on workload profiling, history and policies set by administrators. The combination of flash and standard storage enables hot and cold data designation.
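
To make the idea of hot and cold data designation more concrete, here is a minimal sketch, in Python, of how a tiering policy could decide which blocks belong on flash. It is an illustrative assumption rather than any vendor's actual algorithm; the Block class, thresholds and look-back window are all hypothetical.

```python
"""Illustrative hot/cold tiering sketch; not any vendor's actual algorithm."""
import time
from dataclasses import dataclass, field

HOT_WINDOW_SECONDS = 3600   # look-back window for access history (assumed)
HOT_ACCESS_THRESHOLD = 10   # accesses within the window to count as "hot" (assumed)

@dataclass
class Block:
    block_id: str
    tier: str = "cold"                        # "flash" or "cold"
    access_times: list = field(default_factory=list)

    def record_access(self) -> None:
        self.access_times.append(time.time())

    def recent_accesses(self, now: float) -> int:
        return sum(1 for t in self.access_times if now - t <= HOT_WINDOW_SECONDS)

def retier(blocks: list) -> None:
    """Promote frequently accessed blocks to flash, demote idle ones to cold storage."""
    now = time.time()
    for block in blocks:
        block.tier = "flash" if block.recent_accesses(now) >= HOT_ACCESS_THRESHOLD else "cold"

# Example: a block read repeatedly within the last hour ends up on flash.
b = Block("vm-disk-0001")
for _ in range(12):
    b.record_access()
retier([b])
print(b.block_id, "->", b.tier)   # vm-disk-0001 -> flash
```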

Scalability is in theory limitless, and is comparatively much simpler than in traditional, unconverged infrastructure models. Nodes are effectively turnkey solutions which can be added easily as the resource demands of an enterprise estate require, potentially reducing the overhead of traditional demand forecasting and supporting the agile needs of your business predictably. Data efficiency and protection are also key parts of the hyperconvergence proposition. Deduplication, compression and optimisation of data are combined natively, increasing overall data efficiency by reducing IOPS, storage and network bandwidth requirements without impacting application performance. Inbuilt data protection mechanisms replace traditional third-party backup and recovery solutions, enabling defined RPOs and RTOs for critical applications and data. Disaster recovery for most hyperconverged data centres is simply a matter of achieving parity at your recovery site: provided a comparable number of nodes exists, replication can be enabled quickly and easily.
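
As a rough illustration of why native deduplication reduces storage and bandwidth consumption, the toy sketch below implements fixed-block, content-addressed deduplication: each block is hashed and only unique blocks are physically stored. The DedupStore class, block size and hashing choice are assumptions for illustration, not a description of any specific HCIS engine.

```python
"""Toy fixed-block, content-addressed deduplication; illustrative only."""
import hashlib

BLOCK_SIZE = 4096  # fixed block size in bytes (assumed for illustration)

class DedupStore:
    def __init__(self) -> None:
        self.blocks: dict = {}      # hash -> unique block payload
        self.logical_bytes = 0      # bytes written by clients

    def write(self, data: bytes) -> list:
        """Store data, returning the list of block hashes that reference it."""
        refs = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)   # only the first copy is kept
            refs.append(digest)
            self.logical_bytes += len(block)
        return refs

    def physical_bytes(self) -> int:
        return sum(len(b) for b in self.blocks.values())

# Example: writing the same 8 KiB payload twice consumes physical space only once.
store = DedupStore()
payload = b"x" * (2 * BLOCK_SIZE)
store.write(payload)
store.write(payload)
print(store.logical_bytes, "logical bytes vs", store.physical_bytes(), "physical bytes")
```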

Unified, global administration is another impressive feature of leading HCIS platforms. Integration with products like VMware vCenter simplifies operations and improves service delivery across all hyperconverged systems, enabling complex administrative tasks such as data protection and QoS to be performed through a single console.
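
To give a flavour of single-console automation, the short sketch below uses VMware's pyVmomi Python bindings to connect to a vCenter instance and list the power state of every virtual machine it manages. The hostname and credentials are placeholders, pyVmomi is assumed to be installed, and certificate checking is disabled purely for brevity; treat it as a minimal sketch rather than an operational script.

```python
"""Minimal pyVmomi sketch: list VM power states via vCenter (placeholder credentials)."""
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab-only TLS context; validate certificates properly in production.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE

# Placeholder connection details for illustration only.
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=context)
try:
    content = si.RetrieveContent()
    # Walk the inventory and report each VM's name and power state.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        print(vm.name, vm.runtime.powerState)
finally:
    Disconnect(si)
```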

For those of you interested in cost efficiencies, hyperconverged systems represent a potential 40-60% reduction in overall CapEx and OpEx. They also have the potential to deliver time to value, across buying, deploying and managing, up to eight times faster. Given their smaller footprint, there is also the potential for up to a 90% reduction in power, cooling and rack space.

Unified management enables complex administrative tasks to be performed through products like vCenter

It's Binary

Well actually, it doesn't have to be

Parallel Adoption Is a Sustainable Option

Some leading analysts have gone as far as to project that hyperconverged infrastructure will replace traditional SAN/NAS solutions in the enterprise data centre. The decision on whether or not to move solely to hyperconverged systems is not necessarily a binary one, however; companies need not approach the technology in such black-and-white terms.

Hyperconvergence is a good alternative to both legacy three-tier infrastructure and the newer converged infrastructure architectures; as ever, it depends on the workloads at hand. Like many new technologies, hyperconvergence is ideal for greenfield implementations, particularly for startup companies, where agility is critical in the early stages and infrastructure delivery times can directly affect how quickly revenue is generated. Greenfield sites are unfortunately more of a rarity for established enterprises, and they normally include the need to harmonise with existing infrastructure in some way, shape or form. This does not, however, detract from the proposition of replacing traditional three-tier data centre architectures.

Fortunately, enterprises do not traditionally connect all their internal storage and compute infrastructures to form a single, supercomputing solution. Their data centres are generally segregated into silos for fault tolerance, aligned to specific workload requirements and business functions. This feasibly enables enterprises to dip a toe in the proverbial water of hyperconvergence through controlled, systematic parallel adoption.

Profiling server and application workloads with absolute accuracy is challenging. It requires a great deal of attention, the correct tools and sufficient time over which to capture the right data. Most workloads are a relatively well-known quantity, easily supported by hyperconverged systems. Unfortunately, not all workloads are appropriate for hyperconverged systems; some are simply better suited to other architectures. Moving a labour-intensive system onto a platform which cannot support it is, quite frankly, a great way to lose business. Hence parallel adoption is the safest option for enterprises looking to realise the benefits of hyperconvergence.
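
For a sense of what even basic workload capture involves, the sketch below samples host CPU utilisation and disk IOPS over a short window using the third-party psutil library. The interval, duration and choice of metrics are illustrative assumptions; genuine profiling runs for far longer and would also cover memory, network and application-level behaviour.

```python
"""Minimal workload sampling sketch, assuming the third-party psutil package."""
import psutil

SAMPLE_INTERVAL = 5   # seconds between samples (assumed)
SAMPLE_COUNT = 12     # one minute of data; real profiling needs days or weeks

samples = []
last_io = psutil.disk_io_counters()
for _ in range(SAMPLE_COUNT):
    # cpu_percent blocks for the interval and returns overall CPU utilisation.
    cpu = psutil.cpu_percent(interval=SAMPLE_INTERVAL)
    io = psutil.disk_io_counters()
    # Approximate IOPS from the change in read/write counts over the interval.
    iops = ((io.read_count - last_io.read_count) +
            (io.write_count - last_io.write_count)) / SAMPLE_INTERVAL
    samples.append((cpu, iops))
    last_io = io

avg_cpu = sum(c for c, _ in samples) / len(samples)
peak_iops = max(i for _, i in samples)
print(f"average CPU {avg_cpu:.1f}%  peak IOPS {peak_iops:.0f}")
```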

Parallel adoption need not be a temporary solution; it is actually sustainable over the longer term. While that negates some of the benefits of using hyperconvergence, it still enables enterprises to meet select infrastructure requirements with agility and velocity. Enterprises could, for example, leverage hyperconverged systems for VDI easily, without impacting systems that are not hyperconvergence-ready. Longer term, the decision to transition entirely can then be made on empirical evidence rather than just data sheets and the hyperbole of vendors.

Customers want smaller segregated fault domains regardless of the infrastructure solutions they deploy in their data centres.