Hyperconverged Infrastructure: HCI benefits for workload mobility

Can HCI ease hybrid IT (on-premises and cloud / multi-cloud) deployments and workload mobility?

Organisations that are considering adopting the cloud, migrating to it, or already using it need to think ahead - for example, to optimising workloads across a hybrid environment (on-premises and external cloud systems). Enterprises already on the cloud model cite a number of long-term concerns, such as the high ongoing cost of public cloud. Those concerns, coupled with advances such as cloud-as-a-service offerings for on-premises datacentres, have triggered strong interest in hybrid and multi-cloud alternatives.

Wouldn't it be nice - architecturally and infrastructure-wise - to make the movement of workloads (data, applications, databases) between on-premises and cloud platforms easier, more secure and more efficient? That was rhetorical: who wouldn't want that? It is our job in IT to make the flow of business easier, safer and more efficient.

Traditional cloud architectures raise issues around workload management, consistent performance, regulatory compliance, latency, security, integration and the mobility needed for future business agility.

In a perfect world, workloads would migrate seamlessly between different public clouds. In reality, workloads are typically built and coded against a specific cloud provider's APIs.

So, how can we efficiently manage the workloads between on-premise and cloud platforms?

Through common, integrated platforms such as HCI.

What is HCI?

Hyperconverged infrastructure (HCI) is a software-defined, unified system that combines all the elements of a traditional data centre: storage, compute, networking and management. It is designed as a common, standardised, all-in-one computing, storage and networking stack that enables data and application movement with performance, efficiency and security.

The compute resources within the stack are tightly integrated and pre-optimised. The hardware and software are packaged into convenient appliances or nodes, which can be deployed as a single unit to start and then quickly and easily scaled out as your resource demands increase.
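
As a rough illustration of how that scale-out model behaves, the sketch below estimates usable cluster capacity as nodes are added; the per-node sizes, replication factor and N+1 reserve are illustrative assumptions, not any vendor's figures.

```python
# Rough, vendor-neutral sketch of HCI scale-out capacity planning.
# Node size, replication factor and the N+1 reserve are assumptions.

def usable_capacity_tb(nodes: int, raw_tb_per_node: float = 20.0,
                       replication_factor: int = 2, reserve_nodes: int = 1) -> float:
    """Estimate usable storage for an HCI cluster.

    Raw capacity is divided by the replication factor (each block is stored
    RF times), and one node's worth of capacity is held back so the cluster
    can rebuild after a node failure (N+1).
    """
    effective_nodes = max(nodes - reserve_nodes, 0)
    return effective_nodes * raw_tb_per_node / replication_factor

if __name__ == "__main__":
    for n in (3, 4, 6, 8):
        print(f"{n} nodes -> ~{usable_capacity_tb(n):.0f} TB usable")
```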

What can you do with HCI – key benefits

> HCI provides cloud-like flexibility and agility, but with the control and security of an on-premise deployment.

> makes it easier to run the same software in the cloud that runs on your on-premises hardware, which makes shifting workloads to where they are needed much simpler.

> makes administration much easier, letting you manage all aspects of your infrastructure from one place. It’s ‘just-in-time’ infrastructure scaling. If resources become scarce, you simply call your vendor, ask for another node, and deploy it.

> a consolidated / unified approach to management and operations

> integration with your existing applications and solutions

> build your own private cloud - deploy cloud-like infrastructure on-premise and gain more overall control of your cybersecurity.

> extend to public cloud - choose an ‘as-a-service’ option from one of the largest HCI cloud ecosystems (AWS, VMware, Azure, Dell) so you spend less time managing infrastructure.

> achieve true hybrid cloud - you can mix VMs and container-based applications, deploy across a mix of data centre, public cloud and edge environments.

> enables interoperability between different types of cloud.

> helps prevent vendor lock-in.

> choose from multiple hardware possibilities.

> resolve multi-vendor compatibility problems - reduce complexity by eliminating the compatibility issues that come with a multi-vendor stack.

> advance toward deployment of a software-defined data centre (SDDC) and / or infrastructure (SDI)

> application mobility – ability to move workloads back and forth

> ability to burst to the cloud - for example from on-premises to public cloud - to handle spikes in demand (see the sketch after this list)

> flexibility to choose the best cost model for each workload

> consistent performance

> ability to license environments flexibly

> choose the best security and compliance model for your data
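
To make the burst-to-cloud idea concrete, here is a minimal sketch of the kind of placement decision an orchestration layer might make once the on-premises cluster approaches a utilisation ceiling. The capacity figure, threshold and workload names are hypothetical.

```python
# Minimal sketch of a burst-to-cloud placement decision.
# Capacity, threshold and workload figures are hypothetical.

ON_PREM_CAPACITY_VCPUS = 512          # assumed size of the on-premises HCI cluster
BURST_THRESHOLD = 0.80                # burst once the cluster is 80% committed

def place_workload(requested_vcpus: int, committed_vcpus: int) -> str:
    """Return 'on-prem' or 'public-cloud' for a new workload request."""
    projected = (committed_vcpus + requested_vcpus) / ON_PREM_CAPACITY_VCPUS
    return "on-prem" if projected <= BURST_THRESHOLD else "public-cloud"

if __name__ == "__main__":
    committed = 300
    for name, vcpus in [("reporting-batch", 64), ("seasonal-web-tier", 128)]:
        target = place_workload(vcpus, committed)
        print(f"{name}: {vcpus} vCPUs -> {target}")
        if target == "on-prem":
            committed += vcpus
```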


The present and future of HCI

2020 saw more organisations pivoting toward hybrid cloud and software-defined infrastructure. One of the main reasons enterprises are seeking hyper-converged systems, VMware and Dell claim, is the ability to create a cloud-like provisioning model while still maintaining physical control of IT solutions and data on-premises.

Further evidence shows that workload placement has become paramount - an industry unto itself - driven by the need to ensure there are enough infrastructure resources available in all necessary locations (hybrid clouds let you operate workloads both in the cloud and in various on-premises locations, including edge environments).

Deployment considerations

Side-by-side HCI deployment: an HCI platform is deployed in the existing data centre alongside traditional heterogeneous infrastructure. This approach allows businesses to migrate workloads to HCI gradually, and displaced hardware can be repurposed or decommissioned in smaller, more manageable portions. HCI is likely to run in tandem with traditional infrastructure over the long term, and the two can easily coexist.

Per-application HCI deployment: brings hyper-converged infrastructure into the existing data centre environment, but rather than migrating existing workloads to the new infrastructure, the HCI supports only specific new applications or computing initiatives, such as a new virtual desktop infrastructure deployment or a new big data processing cluster. Existing workloads are left intact on the existing infrastructure.

Pre-packaged cloud-to-on-premises solutions: AWS and VMware offer ‘cloud in a rack’ designed to reside in your on-premises datacentre locations and connect to your existing cloud services. With Outposts, Amazon brings native services and AWS-designed compute, networking and storage hardware to virtually any datacentre, co-location space or on-premises facility, fully managed and supported by the company. You will find similar collaborations from VMware, Dell and other leading vendors, making the bridge for workload mobility easier.

Edge computing: the growth of IoT and smart devices will demand processing closer to the endpoint, and the compute power and storage that HCI can provide quickly will enable that growth. For many organisations, the explosion of data generated at the network edge will necessitate, if it hasn't already, moving data processing and analysis out of central facilities and cloud locations to localised systems sized for the task.

For example, key interests and drivers for edge computing include support for 5G wireless networks, mobile customer workloads, manufacturing facilities, mines, oil production sites and warehouse distribution centres. Dropping mobile edge computing nodes into base stations, office parks, large office and apartment complexes, and remote locations is critical to offloading workloads and maintaining adequate performance.

Important considerations

Adapting to Covid-19

The speed and flexibility of HCI make it well suited to rapid deployment, and even rapid repurposing. The emergence of COVID-19 forced a vast number of users to suddenly work from home, so organisations had to quickly deploy additional resources and infrastructure to support the computing needs of newly remote users. HCI systems have played a notable role in such rapid infrastructure adjustments.

Similarly, those with numerous remote locations and employees will find the ability to remotely manage hyper-converged systems an attractive option in situations with no local IT personnel and an improvement over backhauling network traffic every time users need to run an enterprise application or access a database.

Applications and workload mobility

When it comes to enterprise cloud adoption, workload mobility - the ability to move applications and their associated data to a new environment with zero or very limited downtime - has long been a missing piece of the technology puzzle. Workload mobility is a critical feature because it provides the flexibility to run each workload in the most optimal location and prevents vendor lock-in. It might make sense initially to run in one particular cloud environment during testing and development, but as an application's user base grows, it might require an environment that offers better scale and performance. This is why most companies consider hybrid cloud the ideal IT operating model: certain apps run in a public cloud, especially for resource elasticity, while others run on-prem, typically for better economics, compliance or security. A true workload mobility capability lets you migrate between on-premises and cloud infrastructures, so each workload runs in its optimal location.
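
As a simple illustration of that placement reasoning, the sketch below picks a location per workload from a data-residency constraint and a cost comparison; the workload names and figures are invented for the example.

```python
# Illustrative sketch of choosing a location per workload based on
# data-residency constraints and relative cost. All figures are invented.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    requires_on_prem_data: bool      # e.g. regulatory / data-residency constraint
    monthly_cost_on_prem: float      # amortised on-premises cost estimate
    monthly_cost_cloud: float        # public cloud cost estimate

def choose_location(w: Workload) -> str:
    if w.requires_on_prem_data:
        return "on-prem"             # compliance overrides cost
    return "on-prem" if w.monthly_cost_on_prem <= w.monthly_cost_cloud else "public-cloud"

workloads = [
    Workload("patient-records-db", True, 4200, 3100),
    Workload("marketing-analytics", False, 2500, 1800),
    Workload("erp-core", False, 3000, 3900),
]

for w in workloads:
    print(f"{w.name}: run in {choose_location(w)}")
```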

Look for an HCI solution that lets cloud operators extend, burst or migrate applications across clouds, without re-architecting each environment. The majority of apps eventually migrate back on-prem once their resource requirements are established. Lifting and shifting to and from the cloud is costly and time consuming. Make sure your HCI solution is designed to facilitate this movement. Public cloud skill sets are hard to come by, so the ideal HCI solution allows you to use the exact same tooling and skill set whether working on-prem or in the cloud.

Cloud native and traditional applications

Many legacy applications were not designed for the cloud and require an on-premises stack that can save data in the cloud while offering the same simplicity as cloud operations. A hybrid cloud model can be used for traditional application development - for example, SQL databases and business intelligence systems built with stricter, more linear development methodologies - but the real advantage of mobility is for cloud-native applications developed with agile DevOps practices on container ecosystems such as Kubernetes.

What databases, applications and workloads do companies run on hyperconverged infrastructure?

In the past, HCI started with workloads such as VDI (remote virtual desktops) and ROBO (remote or branch office). That dynamic has changed rapidly as more and more HCI users have moved production and datacentre workloads onto their systems, even as they prepare their resources for the future.

HCI can run any enterprise application, at any scale

Business applications are critical to enterprise survival, and thus the underlying infrastructure must be robust, resilient, and powerful enough to run the full complement of workloads – and run them well.

One way to be sure that your HCI can really deliver is to check its certifications for the most demanding and resource-hungry enterprise workloads, such as SAP HANA.

You can also use established benchmarks, such as HammerDB, to see how popular databases including SQL Server and Oracle perform on your solution.

Some prime examples of applications that run on HCI

> Virtual desktop infrastructure (VDI) and application virtualisation

> Remote Office and Branch Office (ROBO) deployments

> Dev/test applications and tooling: Puppet, Docker, Chef

> Business-critical applications: Oracle databases and E-Business Suite, SAP Business Suite (and SAP HANA), Microsoft SQL Server, Microsoft Dynamics, IBM DB2, and many others.

> Messaging and collaboration applications: Microsoft Exchange and SharePoint, as well as unified communication solutions such as Cisco UC, Avaya Aura, and Microsoft Skype for Business.

> Server virtualization and private cloud: Multi-hypervisor support for VMware ESXi, Microsoft Hyper-V, and Nutanix AHV virtualization

> Service provider software stacks: including AWS, VMware, Microsoft Azure Stack, OpenStack, Red Hat OpenShift and bare-metal Kubernetes clusters.

> Big data and cloud-native apps: Splunk, MongoDB, Elastic, and more.

A word on containers and Kubernetes

The rise of container-based computing - Kubernetes in particular - has greatly reduced the friction of adopting hybrid and multi-cloud by encapsulating software code and dependencies, enabling software to run uniformly and consistently on any cloud infrastructure. VMware supports this with VMware Tanzu Kubernetes Grid Integrated Edition, which allows Kubernetes to run on the Cloud Foundation full-stack HCI, automates deployment of Kubernetes workloads, includes the tools, libraries and registries needed, and integrates with private clouds, public clouds and edge environments.

Containers offer the highest portability between clouds, giving companies a path to shift workloads between providers, avoiding lock-in and allowing vendors to be adopted on a best-fit-for-purpose basis.

APIs help liberate workloads to run anywhere within reason. Containers will be the method for getting workloads into the cloud and making sure they remain portable.
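
As a small sketch of what that portability looks like in practice, the example below applies an identical Deployment spec to two clusters using the official Kubernetes Python client, switching only the kubeconfig context; the context names are hypothetical.

```python
# Sketch: the same container workload definition applied unchanged to an
# on-premises cluster and a public-cloud cluster, using the official
# Kubernetes Python client. The kubeconfig context names are hypothetical.

from kubernetes import client, config

def build_deployment() -> client.V1Deployment:
    """One portable Deployment spec - nothing in it is cloud-specific."""
    container = client.V1Container(
        name="web", image="nginx:1.25",
        ports=[client.V1ContainerPort(container_port=80)],
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "web"}),
        spec=client.V1PodSpec(containers=[container]),
    )
    spec = client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=template,
    )
    return client.V1Deployment(
        api_version="apps/v1", kind="Deployment",
        metadata=client.V1ObjectMeta(name="web"), spec=spec)

def deploy_to(context_name: str) -> None:
    """Point at a cluster by kubeconfig context and apply the same spec."""
    config.load_kube_config(context=context_name)
    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=build_deployment())

for ctx in ("onprem-hci-cluster", "public-cloud-cluster"):   # hypothetical contexts
    deploy_to(ctx)
```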

HCI support for container clusters: vendors such as Dell EMC/VMware, Nutanix and Cisco now offer HCI configurations optimised for popular container software such as Kubernetes. Kubernetes-based hybrid cloud platforms, such as Red Hat OpenShift and Google Anthos, also offer application-level control of container environments across multiple on- and off-premises locations.

Hypervisors supported

Mark Chuang, VP of Product Management at VMware, said: “We are saying to our customers that we understand there are many reasons why they have so many clouds, and we’re not here to talk about why, but how we can help them with that. We are saying, let us help you with the networking, with the management and the workload portability between a VMware cloud, an Amazon cloud, an Azure cloud and Google cloud – because that is the reality of the situation they need help with.”

If you select an HCI vendor that supports all of the hypervisors and all of the clouds you use, your applications can leverage each technology to best advantage and lower OPEX costs by up to 60%, without rewriting your applications to be cloud friendly.

Security

Few would dispute that security is a paramount concern, and with cybersecurity expertise in short supply, it’s essential to choose an HCI solution equipped with features such as self-healing STIGs that automate and enforce security configuration, remediation and hardening. This capability reduces the risk of human error and saves weeks of manual tasks and checklists.
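
To illustrate the general idea behind that kind of automated baseline enforcement (not any vendor's actual mechanism), here is a minimal sketch that checks node settings against a desired baseline and remediates drift; the settings are invented examples rather than a real STIG.

```python
# Illustrative sketch of automated security-baseline enforcement: compare the
# live settings of each node against a desired baseline and remediate drift.
# The settings are invented examples, not an actual STIG or vendor feature.

SECURITY_BASELINE = {
    "ssh_root_login": "disabled",
    "password_min_length": 14,
    "audit_logging": "enabled",
}

def find_drift(node_settings: dict) -> dict:
    """Return the settings that differ from the baseline."""
    return {k: v for k, v in SECURITY_BASELINE.items()
            if node_settings.get(k) != v}

def remediate(node_name: str, node_settings: dict) -> None:
    for key, desired in find_drift(node_settings).items():
        print(f"{node_name}: resetting {key!r} to {desired!r}")
        node_settings[key] = desired   # in practice, pushed via the HCI manager

node = {"ssh_root_login": "enabled", "password_min_length": 14, "audit_logging": "enabled"}
remediate("hci-node-01", node)
```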

Having HCI on-premises allows you to keep your current security policies. For some customers, putting sensitive data in the cloud is not even an option because of those policies. If you go to the cloud, remember that you - not the cloud service provider - are responsible for the security of your data, and new policy schemes may be needed.

Backup and Disaster Recovery

Backup and disaster recovery is currently the fastest-growing application in the hyper-converged market, driven by rising demand for faster data backup and better security. The ability of hyper-convergence to also lower total cost of ownership and operating expenses for backup and disaster recovery is a big trend to keep an eye on in 2020. Hyper-convergence reduces the need for separate backup software, deduplication appliances and storage arrays.

Performance

You have to evaluate whether the cloud provides the bandwidth, throughput and availability that your operation requires compared with on-premises infrastructure.

Today, HCI offerings benefit from the radical improvements that have taken place in processors, memory and storage devices, as well as dramatic advances in software-defined technologies that re-define how businesses perceive and handle resources and workloads.

Be sure that your HCI solution utilises cutting-edge technologies such as NVMe and RDMA to deliver the performance that write-heavy, mission-critical workloads require. A best of breed HCI platform can support it all, whether it’s databases, ERP, big data, unified communications, or VDI.

Getting your network needs right

Latency can increase if hyper-converged infrastructure runs on poorly performing networks. Enterprises should therefore invest in the network to get the best from HCI deployments in high-performance clusters.

Another option is to rent storage space at a telco datacentre where the connections to the public cloud are faster. However, this could slow down in-house operations, depending on the speed of the connection to the telco. This model works best when the telco data centre is nearby.

Advancements on HCI: Composable infrastructure

The fact remains that many organisations will prefer more traditional IT approaches that leave administrators the controls to fine-tune resources at a granular level. One possible answer in the software-defined model is composable infrastructure, which merges aspects of HCI and converged infrastructure with programmatic control of resources, making it easier to stand up and tear down virtual servers for specific workloads.

Composable infrastructure is ideally suited for organisations that want the benefits of hyper-convergence but also need to support virtual, physical and containerised workloads.

It goes a step beyond hyper-convergence by providing a fluid platform in which all resources are pooled together and tightly managed with software. These software-defined constructs make it possible to easily and quickly deploy physical, virtual and container-based workloads.

The resulting fabric enables quick creation of workload operating environments in which the administrator doesn't have to mess around with hardware beyond initial deployment.

It negates the need for IT administrators to be concerned with the physical location of infrastructure components. When a software application requests infrastructure to run, available services are located through an automated discovery process and resources are allocated on demand. When an infrastructure resource is no longer required, it is re-appropriated so it can be allocated to another application that needs it.
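
As a toy sketch of that compose-and-release cycle (not any vendor's API), the example below allocates resources to a workload from shared pools and returns them when the workload is decommissioned; the pool sizes and requests are invented.

```python
# Toy sketch of the composable-infrastructure idea: workloads request
# resources from shared pools, which are returned when no longer needed.
# Pool sizes and workload requests are invented for illustration.

class ResourcePool:
    def __init__(self, vcpus: int, memory_gb: int, storage_tb: int):
        self.free = {"vcpus": vcpus, "memory_gb": memory_gb, "storage_tb": storage_tb}
        self.allocations = {}

    def compose(self, name: str, **request) -> bool:
        """Allocate resources to a workload if the pool can satisfy the request."""
        if all(self.free[k] >= v for k, v in request.items()):
            for k, v in request.items():
                self.free[k] -= v
            self.allocations[name] = request
            return True
        return False

    def decompose(self, name: str) -> None:
        """Return a workload's resources to the pool for reuse."""
        for k, v in self.allocations.pop(name, {}).items():
            self.free[k] += v

pool = ResourcePool(vcpus=256, memory_gb=2048, storage_tb=100)
pool.compose("analytics-cluster", vcpus=64, memory_gb=512, storage_tb=20)
pool.decompose("analytics-cluster")       # resources become available again
print(pool.free)
```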

Several vendors, including Hewlett Packard Enterprise and Cisco, are promoting the concept as a way for internal IT departments to provision workloads just as quickly and efficiently as public cloud service providers can, while still maintaining control over the infrastructure that supports mission-critical applications in a private cloud setting.

As of this writing, there are no agreed-upon standards for deploying a composable infrastructure, and different vendors and proponents describe it by different names - including programmable infrastructure, intelligent infrastructure, software-defined infrastructure, infrastructure as code (IaC), decoupled infrastructure and hardware disaggregation.

dHCI Technology (Disaggregated HCI)

Disaggregated HCI works well for those who need more control over their resource allocation but still want operational flexibility and ease.

dHCI allows leading commodity servers to be used for the compute element, independent networking products from vendors such as HPE Aruba, Mellanox or Cisco to form the iSCSI fabric, and high-performance storage, such as the HPE Nimble AF/HF platforms, to provide the storage tier.

* This type of HCI allows the customer to scale compute independently to meet business needs, or to scale storage without having to increase or change compute - which until now has been a restriction or cost burden for adopters.
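
The sketch below contrasts coupled HCI scaling, where every node adds both compute and storage, with disaggregated scaling, where the two grow independently; all node and shelf sizes are hypothetical.

```python
# Sketch contrasting coupled HCI scaling (every node adds both compute and
# storage) with disaggregated dHCI scaling (compute and storage grow
# independently). All node and shelf sizes are hypothetical.

import math

NODE_VCPUS, NODE_TB = 64, 20          # assumed per-node resources (HCI appliance)
COMPUTE_NODE_VCPUS = 64               # assumed dHCI compute-only node
STORAGE_SHELF_TB = 40                 # assumed dHCI storage expansion shelf

def hci_nodes_needed(vcpus: int, tb: int) -> int:
    # Coupled scaling: buy enough full nodes to cover the larger of the two demands.
    return max(math.ceil(vcpus / NODE_VCPUS), math.ceil(tb / NODE_TB))

def dhci_units_needed(vcpus: int, tb: int) -> tuple:
    # Independent scaling: size compute and storage separately.
    return math.ceil(vcpus / COMPUTE_NODE_VCPUS), math.ceil(tb / STORAGE_SHELF_TB)

demand_vcpus, demand_tb = 128, 200     # a storage-heavy demand profile
print("HCI nodes needed:", hci_nodes_needed(demand_vcpus, demand_tb))
print("dHCI compute nodes / storage shelves:", dhci_units_needed(demand_vcpus, demand_tb))
```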

Support for Artificial Intelligence & Machine Learning

The ability to support a huge volume of scalable containers makes HCI a natural fit for machine learning and artificial intelligence workloads, which demand enormous numbers of compute instances.

“This is an opportunity for us to rethink what HCI should do,” said David Wang, director of product marketing for storage at HPE. “We’re moving from software-defined to AI-driven, and that’s allowing infrastructure to be autonomous, using cloud-based machine learning to predict and prevent issues, where infrastructure is not just efficient but self-optimized.”

The pervasive use of artificial intelligence and hyper-convergence is making the datacentre more intelligent and automated in how assets and risks are monitored and managed. An AI-defined infrastructure's advanced analytics make it possible to detect anomalies, find root causes, predict disruptions to services and proactively address issues before they occur. These features can also help secure the infrastructure by assessing risks in real time and orchestrating data protections. To do this, it might pull data from events, metrics, system logs, database logs, operating systems, application settings, hardware components or any other data points that can provide consumable telemetry.
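
A minimal sketch of the anomaly-detection part of that idea is shown below: it flags metric samples that deviate sharply from recent behaviour. The rolling-window approach and the latency figures are illustrative assumptions, not a description of any vendor's analytics engine.

```python
# Minimal sketch of anomaly detection on infrastructure telemetry:
# flag metric samples that deviate sharply from recent behaviour.
# The latency figures are invented sample data.

from statistics import mean, stdev

def detect_anomalies(samples, window: int = 10, threshold: float = 3.0):
    """Yield (index, value) for points more than `threshold` standard
    deviations from the mean of the preceding `window` samples."""
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma and abs(samples[i] - mu) / sigma > threshold:
            yield i, samples[i]

storage_latency_ms = [1.1, 1.2, 1.0, 1.3, 1.1, 1.2, 1.1, 1.0, 1.2, 1.1,
                      1.2, 1.1, 9.8, 1.2, 1.1]   # one obvious spike
for idx, value in detect_anomalies(storage_latency_ms):
    print(f"sample {idx}: {value} ms looks anomalous")
```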

More IT groups recognise HCI as a smart way to swiftly deploy important new initiatives, such as AI. “HCI offers a convenient platform for AI experiments and other purpose-built applications,” says Mark Henderson, a memory and storage product marketing manager of Intel’s Data Platforms group. “It gives you an easy button to push that provides infrastructure to data scientists without having to re-architect your entire enterprise environment. People can do those small-scale experiments and try things out and then scale from there.”

Final thoughts

HCI does provide tangible benefits for easing the migration and portability of workloads within hybrid cloud environments; the caveat is that you have to buy the expensive infrastructure to make it work smoothly - the appliances, middleware, scaling nodes, licensing and services.

As with everything, the critical elements lie in the outcomes you aim to achieve. Regardless of which direction you go, make sure your selected infrastructure path, or paths, adheres to your organisation's workload needs, disaster recovery goals and transformation requirements.

By Paul Rummery, Securenet Consulting