
Making the Jump to Hyperscale Network Security


What is hyperscale? Hyperscale is the ability of an architecture to scale as demand on the system grows. In practice, this means being able to seamlessly provision and add resources to the larger distributed computing environment that makes up the system. Hyperscale is what makes a distributed system both robust and scalable: the storage, compute, and virtualization layers of the infrastructure are tightly integrated into a single solution architecture.
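To make the scale-out idea concrete, here is a minimal sketch (in Python) of the reconciliation loop at the heart of hyperscale: when measured demand outgrows what the current pool of nodes can comfortably absorb, another node is provisioned and added to the pool. The names and numbers here (Node, provision_node, the per-node capacity and utilization target) are illustrative assumptions, not any particular product's API.

```python
# Illustrative sketch of hyperscale's core behavior: provision and add nodes
# as demand on the system grows. All names and figures are assumed, not a real API.
from dataclasses import dataclass

NODE_CAPACITY = 100_000    # connections per second a single node can handle (assumed)
TARGET_UTILIZATION = 0.75  # scale out before the pool saturates


@dataclass
class Node:
    name: str
    capacity: int = NODE_CAPACITY


pool: list[Node] = [Node("node-1")]


def provision_node(index: int) -> Node:
    """Stand-in for whatever actually provisions a node (VM, appliance, container)."""
    return Node(f"node-{index}")


def reconcile(current_demand: int) -> None:
    """Add nodes until the pool can serve the demand at the target utilization."""
    while current_demand > sum(n.capacity for n in pool) * TARGET_UTILIZATION:
        pool.append(provision_node(len(pool) + 1))
        print(f"scaled out to {len(pool)} nodes")


reconcile(current_demand=260_000)  # a demand spike transparently grows the pool
```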

There are many reasons why an organization might adopt hyperscale computing. Hyperscale may offer the best, or only, way to realize a specific business goal, such as providing cloud computing services. More often, though, hyperscale solutions are simply the most cost-effective way to meet a demanding set of requirements; a big data analytics project, for example, may be most economical at the scale and computing density hyperscale provides. Rapid deployment and automated management make scaling out simple and hassle-free for businesses of all sizes. By tightly integrating networking and compute resources in a software-defined system, you can fully utilize the hardware already available to you; orchestrating those resources well lets you get far more out of what you already have.

However, before we can understand the present and future of Hyperscale Network Security, let’s take a quick look into the past.

Traditional Approaches to Hyperscale Networks

A traditional network security solution is built on static security power: each appliance is limited by its own fixed compute capacity, and each one has to be managed individually, often manually. That is a very time-consuming process, and a handful of architectures have emerged to ease the strain.
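As a rough illustration of that static model, the sketch below uses made-up appliance names and load figures: because there is no shared pool of security capacity, an overloaded appliance cannot borrow headroom from an idle one, and every device is monitored, and eventually upgraded, on its own.

```python
# Illustrative only: fixed per-appliance capacity, managed device by device.
FIXED_CAPACITY = 50_000  # connections per second one appliance can inspect (assumed)

current_load = {          # hypothetical per-appliance load figures
    "fw-datacenter-1": 48_000,
    "fw-datacenter-2": 21_000,
    "fw-branch-1": 49_500,
}

# No shared pool: an overloaded box cannot borrow capacity from an idle one,
# so every appliance is checked and upgraded individually (and often manually).
for name, load in current_load.items():
    headroom = FIXED_CAPACITY - load
    if headroom < FIXED_CAPACITY * 0.10:
        print(f"{name}: only {headroom} conn/s of headroom left; schedule a manual upgrade")
```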

The most common approach to achieving hyperscalability has been the DIY, or “do it yourself,” approach: a mix of technologies with a virtualization hypervisor layered on top. The steps needed to create such a cluster are straightforward: install hypervisors on several servers acting as hosts, then add a storage area network (SAN) or network attached storage (NAS). The appeal of this DIY architecture was the flexibility to choose hardware and software from multiple vendors. The downside of a multivendor solution was the complexity of managing it, and of finding the security expertise to implement it, to say nothing of maintaining support for so many disparate products within a single architecture.

One of the main disadvantages of using NAS is that it is a single point of failure. In a world where extreme scalability, uptime, and resiliency are required, this is an absolute no. SAN, on the other hand, can scale and operates at relatively high speeds, but it is extremely expensive, putting this route to hyperscalability out of reach for many businesses.

Another misstep in the do-it-yourself route to hyperscalability is relying on a shared common storage system, which again leaves the entire infrastructure with a single point of failure. Another big no-no. Distributing storage processing and redundancy across different nodes does remove that single point of failure and lets the storage layer scale out, but it adds cost and complexity to the solution as a whole.
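A short sketch can make the trade-off from the last two paragraphs concrete. The classes below are hypothetical stand-ins rather than any real storage API: a single shared SAN/NAS takes the whole cluster down with it, while replicating each object across nodes survives a node failure at the price of extra copies and coordination overhead.

```python
from dataclasses import dataclass, field


@dataclass
class SharedStorage:
    """One SAN/NAS serving every host: if it fails, every host loses its data."""
    healthy: bool = True

    def read(self, key: str) -> str:
        if not self.healthy:
            raise RuntimeError("shared storage is down; the entire cluster is offline")
        return f"value-of-{key}"


@dataclass
class ReplicatedStorage:
    """Every object is copied to several nodes; any surviving replica serves reads."""
    nodes: dict[str, dict[str, str]] = field(
        default_factory=lambda: {f"node-{i}": {} for i in range(1, 4)}
    )

    def write(self, key: str, value: str) -> None:
        # Writing N copies removes the single point of failure but multiplies
        # storage cost and adds coordination complexity.
        for store in self.nodes.values():
            store[key] = value

    def read(self, key: str, failed: set[str] = frozenset()) -> str:
        for name, store in self.nodes.items():
            if name not in failed and key in store:
                return store[key]
        raise RuntimeError("all replicas lost")


shared = SharedStorage(healthy=False)
try:
    shared.read("policy")
except RuntimeError as err:
    print(err)  # one failed device stops everything

replicated = ReplicatedStorage()
replicated.write("policy", "allow tcp/443")
print(replicated.read("policy", failed={"node-1"}))  # still answers after a node failure
```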
