What do I need to have set up in order to launch Vital?

The initial pieces you need for K8s onboarding are access to a Kubernetes environment and some familiarity with Kubernetes: the environment itself, your deployment, and the container within that deployment that runs the application. You also need immediate access to metrics, because capturing metrics is crucial to the optimization process.
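As a quick sanity check that metrics are available, you can ask Kubernetes for live pod usage with kubectl (a sketch; it assumes the metrics-server add-on or an equivalent metrics pipeline is installed, and the namespace name is a placeholder):

    # Show current CPU/memory usage for pods in a namespace
    kubectl top pods -n my-namespace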

During K8s onboarding, a Kubernetes-targeted environment is crucial. To access it, you use the kubectl command line client. This is a very common procedure, and you can find instructions on Kubernetes.io. If you're lacking a Kubernetes environment, the simple way around it is a tool called minikube, which launches a small Kubernetes environment on your local machine. It's compatible with almost every laptop used today.
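For example, verifying cluster access and spinning up a local cluster might look like this (a minimal sketch that assumes kubectl and minikube are already installed):

    # Confirm kubectl can reach a cluster
    kubectl version
    kubectl get nodes

    # No cluster available? Start a local one
    minikube start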

If you have your own application that you want to bring into this environment, you need to know the target namespace, the deployment, and the container you want to optimize. Those are the key pieces of information that get the optimization started; everything else we can discover. As we continue to optimize, we can bring in a consulting engineer, if necessary, to help you fine-tune that optimization.
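To illustrate, here is one way to look up those three pieces of information with kubectl (the namespace and deployment names are hypothetical placeholders):

    # List deployments in the target namespace
    kubectl get deployments -n my-namespace

    # List the containers inside a specific deployment
    kubectl get deployment my-app -n my-namespace \
        -o jsonpath='{.spec.template.spec.containers[*].name}'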


What are the security implications of running the servo in my Kubernetes cluster during K8s onboarding?

In the Opsani environment, the servo is a component that talks only to the edge of the application. We do want to know something about application metadata: throughput, latency, and those sorts of parameters in the environment. But we avoid looking at the application itself. So we watch the metrics and the underlying infrastructure, and to modify an infrastructure component, we just add or remove resources. Since that is all the servo does, we're not touching any user data in any fashion. That's one layer of security right there.

Now, we do have a SaaS component: the machine learning engine that does the optimization analysis and suggests new variants of the system to try. That engine collects some information as well as high-level metrics: latency, throughput, data at that level. There are some environment variables and configuration settings that we also interact with, but these, too, are limited to modifying how the application runs, not the actual application data or the application itself.

So we are really touching very little of the system, and what we do touch is abstracted into ones and zeros, basically enumerated lists of things; we don't even know what they mean. Rather than knowing a lot about the process, this creates very much a black-box approach to the optimization, where we don't need to know and don't want to know about the application. All we want is a result that lets us say this was better or worse than the previous set of parameters we tried.

So the feedback loop carries very little, very light information. Honestly, in all the security audits we've gone through so far, nobody has ever had any concern about the data we pass out of the system, because it's not critical to your application or your application's value.


During K8s onboarding, if Opsani somehow upsets my production environment, what do I do about it?

Basically, Opsani is designed to run in production, especially if you have a canary environment.

There are a couple of different ways we can run Opsani. One is in a parallel environment: if you are really concerned that Opsani could somehow impact your running production system, we can create a separate environment and work through those sorts of models.

Our general K8s onboarding optimizations are long-running optimizations that tend to work best in production environments, against your actual production load. We siphon off a portion of that production load, but we siphon it into a production-equivalent resource. During K8s onboarding, we take a copy of your workload within the Kubernetes environment, known as a pod, and deploy that copy as what we refer to in this case as a canary deployment.

Many people know the term canary from the usual DevOps discussion, where a canary is a new resource that gets deployed. Often, though, a canary is assumed to contain new code. We're not changing the code.

Our canary is what we call a production canary. We take the exact same code that you're currently running in production and scale it up to 3, 5, 10, 50, 70 pods, whatever number of resources it needs to meet your production scale. Then we either add one more resource, or potentially take one of your resources away and replace it with a canary resource.

The only reason we call that resource out as separate is that we want to be able to make some modifications to it. We might want to restrict its memory, or increase the amount of CPU allocated to that one particular resource, to see if that's the thing that will make for a more efficient application.
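As an illustration of that kind of adjustment, here is roughly how you could tune the CPU and memory of a standalone canary deployment with kubectl (a sketch only, not Opsani's internal mechanism; the deployment name, namespace, and values are placeholders):

    # Try a new resource variant on the canary deployment
    kubectl set resources deployment my-app-canary -n my-namespace \
        --requests=cpu=250m,memory=128Mi \
        --limits=cpu=500m,memory=256Mi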

With one resource out of N, where N is usually 10, 15, 20 resources, the canary takes only a portion of the traffic, so we can really minimize any impact of the changes made to that particular application component. We sit behind the load balancer in all cases, and the load balancer becomes part of the optimization cycle. Thus, there's very little chance of any kind of disruption to your app or cluster.


Can I use my own load generator?

If you're going into an environment where you want to do more of a performance-optimization-style operation, our non-production approach is to run a load generator.

We have run a number of different load generators in this environment. The servo has to have a way of triggering that load generator, and the application's load process needs a known, measurable life cycle. This works fairly well, in that we're able to execute basically any binary-based load generator, or launch a container to run one. That is a bit of a customization process, handled on a case-by-case basis.

From the perspective of Vital, though, we provide the Vegeta load generator by default as a way of generating general application load against your environment to run that initial optimization. That's the load generator approach. The other approach is to use actual production load: the canary approach we talked about.
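For reference, a minimal standalone Vegeta run looks something like this (the target URL, rate, and duration are placeholder values, not the configuration Vital applies for you):

    # Fire 50 requests/second at the app for 60 seconds, then report
    echo "GET http://my-app.my-namespace.svc:8080/" | \
        vegeta attack -rate=50 -duration=60s | \
        tee results.bin | \
        vegeta report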

As for running in production, we often treat the production load itself as the generated load into the environment, and we optimize based on that, and over time as well.


What load generator do you ship with the tutorial?

The tutorial uses the Vegeta load generator. It's a Go-based, high-performance generator that requires very few resources to generate a reasonable amount of load against the system.

In general, we find that people who do load-generator-style optimization use it more as a smoke test. They are validating that their application still runs and doesn't break at an unknown scale of load. But more often than not, those load optimizations are not necessarily real-world; it's hard to replicate real-world user traffic against your application.

So, while that's a good way to get started and to understand how your system scales, you can even use it effectively as a release component: run an optimization to make sure that you have the right scale to hit a certain load limit.

What we find is that most people get more value from running optimization continuously. This requires running against production, which for us means the canary mode of optimization. That lets you continuously optimize your application over time. In fact, you can take that optimization and apply it directly to the running system, so that it effectively starts to act as a scale manager as well. So we optimize the system, and then make sure it's always running optimally over long periods of time.


Where can I get my hands on Vital?

That's easy! Opsani.com/vital. Just sign up there and you'll get a response back once we're ready to help you get up and running. You basically run a script, and you're up and running.